ENCYCLOPEDIA
OF
HUMAN NUTRITION SECOND EDITION
ENCYCLOPEDIA
OF
HUMAN NUTRITION SECOND EDITION

Editor-in-Chief
BENJAMIN CABALLERO

Editors
LINDSAY ALLEN
ANDREW PRENTICE
ACADEMIC PRESS
Amsterdam Boston Heidelberg London New York Oxford Paris San Diego San Francisco Singapore Sydney Tokyo
Elsevier Ltd., The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1GB, UK

© 2005 Elsevier Ltd.

The following articles are US Government works in the public domain and not subject to copyright: CAROTENOIDS/Chemistry, Sources and Physiology; FOOD FORTIFICATION/Developed Countries; FRUCTOSE; LEGUMES; TEA; TUBERCULOSIS/Nutrition and Susceptibility; TUBERCULOSIS/Nutritional Management; VEGETARIAN DIETS.

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publishers.

Permissions may be sought directly from Elsevier's Rights Department in Oxford, UK: phone (+44) 1865 843830, fax (+44) 1865 853333, e-mail [email protected]. Requests may also be completed on-line via the homepage (http://www.elsevier.com/locate/permissions).

Second edition 2005

Library of Congress Control Number: 2004113614

A catalogue record for this book is available from the British Library

ISBN 0-12-150110-8 (set)

This book is printed on acid-free paper.

Printed and bound in Spain
EDITORIAL ADVISORY BOARD
EDITOR-IN-CHIEF Benjamin Caballero Johns Hopkins University Maryland USA
EDITORS Lindsay Allen University of California Davis, CA, USA Andrew Prentice London School of Hygiene & Tropical Medicine London, UK
Christopher Bates MRC Human Nutrition Research Cambridge, UK
Hedley C Freake University of Connecticut Storrs, CT, USA
Carolyn D Berdanier University of Georgia Athens, GA, USA
Catherine Geissler King’s College London London, UK
Bruce R Bistrian Harvard Medical School Boston, MA, USA
Susan A Jebb MRC Human Nutrition Research Cambridge, UK
Johanna T Dwyer Frances Stern Nutrition Center Boston, MA, USA Paul Finglas Institute of Food Research Norwich, UK Terrence Forrester Tropical Medicine Research Institute University of the West Indies, Mona Campus, Kingston, Jamaica
Rachel Johnson University of Vermont Burlington, VT, USA Janet C King Children’s Hospital Oakland Research Institute Oakland, CA, USA Anura Kurpad St John’s National Academy of Health Sciences Bangalore, India
Kim Fleisher Michaelsen The Royal Veterinary and Agricultural University Frederiksberg, Denmark
Carlos Monteiro University of São Paulo São Paulo, Brazil
John M Pettifor University of the Witwatersrand & Chris Hani-Baragwanath Hospital Johannesburg, South Africa
Barry M Popkin University of North Carolina Chapel Hill, NC, USA Michele J Sadler MJSR Associates Ashford, UK Ricardo Uauy London School of Hygiene and Tropical Medicine UK and INTA University of Chile, Santiago, Chile David York Pennington Biomedical Research Center Baton Rouge, LA, USA
FOREWORD
Why an encyclopedia? The original Greek word means 'the circle of arts and sciences essential for a liberal education', and such a book was intended to embrace all knowledge. That was the aim of the famous Encyclopédie produced by Diderot and d'Alembert in the middle of the 18th century, which contributed so much to what has been called the Enlightenment. It is recorded that after all the authors had corrected the proofs of their contributions, the printer secretly cut out whatever he thought might give offence to the king, mutilated most of the best articles and burnt the manuscripts! Later, and less controversially, the word 'encyclopedia' came to be used for an exhaustive repertory of information on some particular department of knowledge. It is in this class that the present work falls.

In recent years the scope of Human Nutrition as a scientific discipline has expanded enormously. I used to think of it as an applied subject, relying on the basic sciences of physiology and biochemistry in much the same way that engineering relies on physics. That traditional relationship remains and is fundamental, but the field is now much wider. At one end of the spectrum epidemiological studies and the techniques on which they depend have played a major part in establishing the relationships between diet, nutritional status and health, and there is greater recognition of the importance of social factors. At the other end of the spectrum we are becoming increasingly aware of the genetic determinants of ways in which the body handles food and is able to resist adverse influences of the environment. Nutritionists are thus beginning to explore the mechanisms by which nutrients influence the expression of genes, in the knowledge that nutrients are among the most powerful of all influences on gene expression. This has brought nutrition to the centre of the new 'post-genome' challenge of understanding the effects on human health of gene-environment interactions.

In parallel with this widening of the subject there has been an increase in opportunities for training and research in nutrition, with new departments and new courses being developed in universities, medical schools and schools of public health, along with a greater involvement of schoolchildren and their teachers. Public interest in nutrition is intense and needs to be guided by sound science. Governments are realizing more and more the role that nutrition plays in the prevention of disease and the maintenance of good health, and the need to develop a nutrition policy that is integrated with policies for food production.

The first edition of the Encyclopaedia of Human Nutrition established it as one of the major reference works in our discipline. The second edition has been completely revised to take account of new knowledge in our rapidly advancing field. This new edition is as comprehensive as the present state of knowledge allows, but is not overly technical and is well supplied with suggestions for further reading. All the articles have been carefully reviewed and, although some of the subjects are controversial and sensitive, the publishers have not exerted the kind of political censorship that so infuriated Diderot.
J.C. Waterlow
Emeritus Professor of Human Nutrition
London School of Hygiene and Tropical Medicine
February 2005
INTRODUCTION
The science of human nutrition and its applications to health promotion continue to gain momentum. In the relatively short time since the release of the first edition of this Encyclopedia, a few landmark discoveries have had a dramatic multiplying effect on nutrition science: the mapping of the human genome, the links between molecular bioenergetics and lifespan, and the influence of nutrients on viral mutation, to name a few. But perhaps the strongest evidence of the importance of nutrition for human health comes from the fact that almost 60% of the diseases that kill humans are related to diet and lifestyle (including smoking and physical activity). These are all modifiable risk factors.

As individuals and organizations intensify their efforts to reduce disease risks, the need for multidisciplinary work becomes more apparent. Today, an effective research or program team is likely to include several professionals from fields other than nutrition. For both nutrition and non-nutrition scientists, keeping up to date on the concepts and interrelationships between nutrient needs, dietary intake and health outcomes is essential. The new edition of the Encyclopedia of Human Nutrition hopes to address these needs. While rigorously scientific and up to date, EHN provides concise and easily understandable summaries on a wide variety of topics. The nutrition scientist will find that the Encyclopedia is an effective tool to 'fill the void' of information in areas beyond his/her field of expertise. Professionals from other fields will appreciate the ease of the alphabetical listing of topics and the presentation of information in a rigorous but concise way, with generous aid from graphs and diagrams.

For a work that involved more than 340 authors, coordination and attention to detail are critical. The editors were fortunate to have the support of an excellent team from Elsevier's Major Reference Works division. Sara Gorman and Paula O'Connell initiated the project, and Tracey Mills and Samuel Coleman saw it to its successful completion.

We trust that this Encyclopedia will be a useful addition to the knowledge base of professionals involved in research, patient care, and health promotion around the globe.

Benjamin Caballero, Lindsay Allen and Andrew Prentice
Editors
April 2005
GUIDE TO USE OF THE ENCYCLOPEDIA
Structure of the Encyclopedia

The material in the Encyclopedia is arranged as a series of entries in alphabetical order. Most entries consist of several articles that deal with various aspects of a topic and are arranged in a logical sequence within an entry. Some entries comprise a single article. To help you realize the full potential of the material in the Encyclopedia, we have provided three features for locating the topic of your choice: a Contents List, Cross-References and an Index.
1. Contents List

Your first point of reference will probably be the contents list. The complete contents list, which appears at the front of each volume, provides both the volume number and the page number of the entry. On the opening page of an entry a contents list is provided so that the full details of the articles within the entry are immediately available. Alternatively, you may choose to browse through a volume using the alphabetical order of the entries as your guide. To assist you in identifying your location within the Encyclopedia, a running headline indicates the current entry and the current article within that entry.

You will find 'dummy entries' where obvious synonyms exist for entries or where we have grouped together related topics. Dummy entries appear in both the contents lists and the body of the text.

Example: If you were attempting to locate material on food intake measurement via the contents list:

FOOD INTAKE see DIETARY INTAKE MEASUREMENT: Methodology; Validation. DIETARY SURVEYS. MEAL SIZE AND FREQUENCY

The dummy entry directs you to the Methodology article in the Dietary Intake Measurement entry. At the appropriate location in the contents list, the page numbers for articles under Dietary Intake Measurement are given. If you were trying to locate the material by browsing through the text and you looked up Food Intake, then the following information would be provided in the dummy entry:
Food Intake see Dietary Intake Measurement: Methodology; Validation. Dietary Surveys. Meal Size and Frequency
Alternatively, if you were looking up Dietary Intake Measurement the following information would be provided:
DIETARY INTAKE MEASUREMENT
Contents
Methodology
Validation
2. Cross-References

All of the articles in the Encyclopedia have been extensively cross-referenced. The cross-references, which appear at the end of an article, serve three different functions. For example, at the end of the ADOLESCENTS/Nutritional Problems article, cross-references are used:

i. To indicate if a topic is discussed in greater detail elsewhere. See also: Adolescents: Nutritional Requirements of Adolescents. Anemia: Iron-Deficiency Anemia. Calcium: Physiology. Eating Disorders: Anorexia Nervosa; Bulimia Nervosa; Binge Eating. Folic Acid: Physiology, Dietary Sources, and Requirements. Iron: Physiology, Dietary Sources, and Requirements. Obesity: Definition, Aetiology, and Assessment. Osteoporosis: Nutritional Factors. Zinc: Physiology.

ii. To draw the reader's attention to parallel discussions in other articles. See also: Adolescents: Nutritional Requirements of Adolescents. Anemia: Iron-Deficiency Anemia. Calcium: Physiology. Eating Disorders: Anorexia Nervosa; Bulimia Nervosa; Binge Eating. Folic Acid: Physiology, Dietary Sources, and Requirements. Iron: Physiology, Dietary Sources, and Requirements. Obesity: Definition, Aetiology, and Assessment. Osteoporosis: Nutritional Factors. Zinc: Physiology.

iii. To indicate material that broadens the discussion. See also: Adolescents: Nutritional Requirements of Adolescents. Anemia: Iron-Deficiency Anemia. Calcium: Physiology. Eating Disorders: Anorexia Nervosa; Bulimia Nervosa; Binge Eating. Folic Acid: Physiology, Dietary Sources, and Requirements. Iron: Physiology, Dietary Sources, and Requirements. Obesity: Definition, Aetiology, and Assessment. Osteoporosis: Nutritional Factors. Zinc: Physiology.
3. Index

The index will provide you with the page number where the material is located, and the index entries differentiate between material that is a whole article, is part of an article or is data presented in a figure or table. Detailed notes are provided on the opening page of the index.
4. Contributors

A full list of contributors appears at the beginning of each volume.
CONTRIBUTORS
E Abalos Centro Rosarino de Estudios Perinatales Rosario, Argentina
L J Appel Johns Hopkins University Baltimore, MD, USA
A Abi-Hanna Johns Hopkins School of Medicine Baltimore, MD, USA
A Ariño University of Zaragoza Zaragoza, Spain
L S Adair University of North Carolina Chapel Hill, NC, USA
M J Arnaud Nestle S.A. Vevey, Switzerland
A Ahmed Obetech Obesity Research Center Richmond, VA, USA
E W Askew University of Utah Salt Lake City, UT, USA
B Ahrén Lund University Lund, Sweden
R L Atkinson Obetech Obesity Research Center Richmond, VA, USA
J Akré World Health Organization, Geneva, Switzerland
S A Atkinson McMaster University Hamilton, ON, Canada
A J Alberg Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA
L S A Augustin University of Toronto Toronto, ON, Canada
L H Allen University of California at Davis Davis, CA, USA
D J Baer US Department of Agriculture Beltsville, MD, USA
D Anderson University of Bradford Bradford, UK
A Baqui Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA
J J B Anderson University of North Carolina Chapel Hill, NC, USA
Y Barnett Nottingham Trent University Nottingham, UK
R A Anderson US Department of Agriculture Beltsville, MD, USA
G E Bartley Agricultural Research Service Albany, CA, USA
C J Bates MRC Human Nutrition Research Cambridge, UK
F Branca Istituto Nazionale di Ricerca per gli Alimenti e la Nutrizione Rome, Italy
J A Beltrán University of Zaragoza Zaragoza, Spain
J Brand-Miller University of Sydney Sydney, NSW, Australia
A E Bender Leatherhead, UK
A Briend Institut de Recherche pour le Développement Paris, France
D A Bender University College London London, UK I F F Benzie The Hong Kong Polytechnic University Hong Kong SAR, China C D Berdanier University of Georgia Athens, GA, USA R Bhatia United Nations World Food Programme Rome, Italy Z A Bhutta The Aga Khan University Karachi, Pakistan J E Bines University of Melbourne Melbourne, VIC, Australia J Binkley Vanderbilt Center for Human Nutrition Nashville, TN, USA R Black Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA J E Blundell University of Leeds Leeds, UK
P Browne St James’s Hospital Dublin, Ireland I A Brownlee University of Newcastle Newcastle-upon-Tyne, UK H Brunner Centre Hospitalier Universitaire Vaudois Lausanne, Switzerland A J Buckley University of Cambridge Cambridge, UK H H Butchko Exponent, Inc. Wood Dale, IL, USA J Buttriss British Nutrition Foundation London, UK B Caballero Johns Hopkins Bloomberg School of Public Health and Johns Hopkins University Baltimore, MD, USA E A Carrey Institute of Child Health London, UK
A T Borchers University of California at Davis Davis, CA, USA
A Cassidy School of Medicine University of East Anglia Norwich, UK
C Boreham University of Ulster at Jordanstown Jordanstown, UK
G E Caughey Royal Adelaide Hospital Adelaide, SA, Australia
J P Cegielski Centers for Disease Control and Prevention Atlanta, GA, USA
R C Cottrell The Sugar Bureau London, UK
C M Champagne Pennington Biomedical Research Center Baton Rouge, LA, USA
W A Coward MRC Human Nutrition Research Cambridge, UK
S C Chen US Department of Agriculture Beltsville, MD, USA
J M Cox Johns Hopkins Hospital Baltimore, MD, USA
L Cheskin Johns Hopkins University Baltimore, MD, USA
S Cox London School of Hygiene and Tropical Medicine London, UK
S Chung Columbia University New York, NY, USA
P D’Acapito Istituto Nazionale di Ricerca per gli Alimenti e la Nutrizione Rome, Italy
L G Cleland Royal Adelaide Hospital Adelaide, SA, Australia L Cobiac CSIRO Health Sciences and Nutrition Adelaide, SA, Australia G A Colditz Harvard Medical School Boston, MA, USA T J Cole Institute of Child Health London, UK L A Coleman Marshfield Clinic Research Foundation Marshfield, WI, USA
S Daniell Vanderbilt Center for Human Nutrition Nashville, TN, USA O Dary The MOST Project Arlington, VA, USA T J David University of Manchester Manchester, UK C P G M de Groot Wageningen University Wageningen, The Netherlands M de Onis World Health Organization Geneva, Switzerland
S Collier Children’s Hospital, Boston, Harvard Medical School, and Harvard School of Public Health Boston, MA, USA
M C de Souza Universidade de Mogi das Cruzes São Paulo, Brazil
M Collins Muckamore Abbey Hospital Antrim, UK
R de Souza University of Toronto Toronto, ON, Canada
K G Conner Johns Hopkins Hospital Baltimore, MD, USA
C H C Dejong University Hospital Maastricht Maastricht, The Netherlands
K C Costas Children’s Hospital Boston Boston, MA, USA
L Demeshlaira Emory University Atlanta, GA, USA
K G Dewey University of California at Davis Davis, CA, USA
J Dwyer Tufts University Boston, MA, USA
H L Dewraj The Aga Khan University Karachi, Pakistan
J Eaton–Evans University of Ulster Coleraine, UK
C Doherty MRC Keneba The Gambia C M Donangelo Universidade Federal do Rio de Janeiro Rio de Janeiro, Brazil A Dornhorst Imperial College at Hammersmith Hospital London, UK E Dowler University of Warwick Coventry, UK J Dowsett St Vincent’s University Hospital Dublin, Ireland A K Draper University of Westminster London, UK M L Dreyfuss Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA R D’Souza Queen Mary’s, University of London London, UK C Duggan Harvard Medical School Boston, MA, USA A G Dulloo University of Fribourg Fribourg, Switzerland
C A Edwards University of Glasgow Glasgow, UK M Elia University of Southampton Southampton, UK P W Emery King’s College London London, UK J L Ensunsa University of California at Davis Davis, CA, USA C Feillet-Coudray National Institute for Agricultural Research Clermont-Ferrand, France J D Fernstrom University of Pittsburgh Pittsburgh, PA, USA M H Fernstrom University of Pittsburgh Pittsburgh, PA, USA F Fidanza University of Rome Tor Vergata Rome, Italy P Fieldhouse The University of Manitoba Winnipeg, MB, Canada
E B Duly Ulster Hospital Belfast, UK
N Finer Luton and Dunstable Hospital NHS Trust Luton, UK
J L Dupont Florida State University Tallahassee, FL, USA
J Fiore University of Westminster London, UK
H C Freake University of Connecticut Storrs, CT, USA
J Gómez-Ambrosi Universidad de Navarra Pamplona, Spain
J Freitas Tufts University Boston, MA, USA
J M Graham University of California at Davis Davis, CA, USA
R E Frisch Harvard Center for Population and Development Studies Cambridge, MA, USA
J Gray Guildford, UK
G Frost Imperial College at Hammersmith Hospital London, UK G Frühbeck Universidad de Navarra Pamplona, Spain D Gallagher Columbia University New York, NY, USA L Galland Applied Nutrition Inc. New York, NY, USA C Geissler King's College London London, UK M E Gershwin University of California at Davis Davis, CA, USA H Ghattas London School of Hygiene and Tropical Medicine London, UK E L Gibson University College London London, UK T P Gill University of Sydney Sydney, NSW, Australia
J P Greaves London, UK M W Green Aston University Birmingham, UK R Green University of California Davis, CA, USA R F Grimble University of Southampton Southampton, UK M Grønbæk National Institute of Public Health Copenhagen, Denmark J D Groopman Johns Hopkins University Baltimore MD, USA S M Grundy University of Texas Southwestern Medical Center Dallas, TX, USA M A Grusak Baylor College of Medicine Houston, TX, USA M Gueimonde University of Turku Turku, Finland
W Gilmore University of Ulster Coleraine, UK
C S Gulotta Johns Hopkins University and Kennedy Krieger Institute Baltimore, MD, USA
G R Goldberg MRC Human Nutrition Research Cambridge, UK
P Haggarty Rowett Research Institute Aberdeen, UK
J C G Halford University of Liverpool Liverpool, UK
J M Hodgson University of Western Australia Perth, WA, Australia
C H Halsted University of California at Davis Davis, CA, USA
M F Holick Boston University Medical Center Boston, MA, USA
J Hampsey Johns Hopkins School of Medicine Baltimore, MD, USA
C Hotz National Institute of Public Health Morelos, Mexico
E D Harris Texas A&M University College Station, TX, USA
R Houston Emory University Atlanta, GA, USA
Z L Harris Johns Hopkins Hospital and School of Medicine Baltimore, MD, USA
H-Y Huang Johns Hopkins University Baltimore, MD, USA
P J Havel University of California at Davis Davis, CA, USA W W Hay Jr University of Colorado Health Sciences Center Aurora, CO, USA R G Heine University of Melbourne Melbourne, VIC, Australia R Heinzen Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA A Herrera University of Zaragoza Zaragoza, Spain B S Hetzel Women’s and Children’s Hospital North Adelaide, SA, Australia
J R Hunt USDA-ARS Grand Forks Human Nutrition Research Center Grand Forks, ND, USA R Hunter King’s College London London, UK P Hyland Nottingham Trent University Nottingham, UK B K Ishida Agricultural Research Service Albany, CA, USA J Jacquet University of Geneva Geneva, Switzerland M J James Royal Adelaide Hospital Adelaide, SA, Australia
A J Hill University of Leeds Leeds, UK
W P T James International Association for the Study of Obesity/ International Obesity Task Force Offices London, UK
S A Hill Southampton General Hospital Southampton, UK
A G Jardine University of Glasgow Glasgow, UK
G A Hitman Queen Mary’s, University of London London, UK
S A Jebb MRC Human Nutrition Research Cambridge, UK
K N Jeejeebhoy University of Toronto Toronto, ON, Canada
P Kirk University of Ulster Coleraine, UK
D J A Jenkins University of Toronto Toronto, ON, Canada
S F L Kirk University of Leeds Leeds, UK
G L Jensen Vanderbilt Center for Human Nutrition Nashville, TN, USA
P N Kirke The Health Research Board Dublin, Ireland
I T Johnson Institute of Food Research Norwich, UK
G L Klein University of Texas Medical Branch at Galveston Galveston TX, USA
P A Judd University of Central Lancashire Preston, UK
R D W Klemm Johns Hopkins University Baltimore, MD, USA
M A Kalarchian University of Pittsburgh Pittsburgh, PA, USA R M Katz Johns Hopkins University School of Medicine and Mount Washington Pediatric Hospital Baltimore, MD, USA C L Keen University of California at Davis Davis, CA, USA
D M Klurfeld US Department of Agriculture Beltsville, MD, USA P G Kopelman Queen Mary's, University of London London, UK J Krick Kennedy–Krieger Institute Baltimore, MD, USA
N L Keim US Department of Agriculture Davis, CA, USA
D Kritchevsky Wistar Institute Philadelphia, PA, USA
E Kelly Harvard Medical School Boston, MA, USA
R Lang University of Teeside Middlesbrough, UK
C W C Kendall University of Toronto Toronto, ON, Canada
A Laurentin Universidad Central de Venezuela Caracas, Venezuela
T W Kensler Johns Hopkins University Baltimore, MD, USA
A Laverty Muckamore Abbey Hospital Antrim, UK
J E Kerstetter University of Connecticut Storrs, CT, USA
M Lawson Institute of Child Health London, UK
M Kiely University College Cork Cork, Ireland
F E Leahy University of Auckland Auckland, New Zealand
A R Leeds King's College London London, UK
A Maqbool The Children’s Hospital of Philadelphia Philadelphia, PA, USA
J Leiper University of Aberdeen Aberdeen, UK
M D Marcus University of Pittsburgh Pittsburgh, PA, USA
M D Levine University of Pittsburgh Pittsburgh, PA, USA
E Marietta The Mayo Clinic College of Medicine Rochester, MN, USA
A H Lichtenstein Tufts University Boston MA, USA
P B Mark University of Glasgow Glasgow, UK
E Lin Emory University Atlanta, GA, USA
V Marks University of Surrey Guildford, UK
L Lissner Sahlgrenska Academy at Göteborg University Göteborg, Sweden
D L Marsden Children’s Hospital Boston Boston, MA, USA
C Lo Children’s Hospital, Boston, Harvard Medical School, and Harvard School of Public Health Boston, MA, USA
R J Maughan Loughborough University Loughborough, UK
P A Lofgren Oak Park, IL, USA
K C McCowen Beth Israel Deaconess Medical Center and Harvard Medical School Boston, MA, USA
B Lönnerdal University of California at Davis Davis, CA, USA
S S McDonald Raleigh, NC, USA
M J Luetkemeier Alma College Alma, MI, USA
S McLaren London South Bank University London, UK
Y C Luiking University Hospital Maastricht Maastricht, The Netherlands
J L McManaman University of Colorado Denver, CO, USA
P G Lunn University of Cambridge Cambridge, UK
D N McMurray Texas A&M University College Station, TX, USA
C K Lutter Pan American Health Organization Washington, DC, USA
D J McNamara Egg Nutrition Center Washington, DC, USA
A MacDonald The Children’s Hospital Birmingham, UK
J McPartlin Trinity College Dublin, Ireland
R P Mensink Maastricht University Maastricht, The Netherlands
S P Murphy University of Hawaii Honolulu, HI, USA
M Merialdi World Health Organization Geneva, Switzerland
J Murray The Mayo Clinic College of Medicine Rochester, MN, USA
A R Michell St Bartholomew’s Hospital London, UK
R Nalubola Center for Food Safety and Applied Nutrition, US Food and Drug Administration, MD, USA
J W Miller UC Davis Medical Center Sacramento, CA, USA
J L Napoli University of California Berkeley, CA, USA
P Miller Kennedy–Krieger Institute Baltimore, MD, USA
V Nehra The Mayo Clinic College of Medicine Rochester, MN, USA
D J Millward University of Surrey Guildford, UK
B Nejadnik Johns Hopkins University Baltimore, MD, USA
D M Mock University of Arkansas for Medical Sciences Little Rock, AR, USA
M Nelson King’s College London London, UK
N Moore Johns Hopkins School of Medicine Baltimore, MD, USA
P Nestel International Food Policy Research Institute Washington, DC, USA
J O Mora The MOST Project Arlington, VA, USA
L M Neufeld National Institute of Public Health Cuernavaca, Mexico
T Morgan University of Melbourne Melbourne, VIC, Australia
M C Neville University of Colorado Denver, CO, USA
T A Mori University of Western Australia Perth, WA, Australia
F Nielsen Grand Forks Human Nutrition Research Center Grand Forks, ND, USA
J E Morley St Louis University St Louis, MO, USA
N Noah London School of Hygiene and Tropical Medicine London, UK
P A Morrissey University College Cork Cork, Ireland
K O O’Brien Johns Hopkins University Baltimore, MD, USA
M H Murphy University of Ulster at Jordanstown Jordanstown, UK
S H Oh Johns Hopkins General Clinical Research Center Baltimore, MD, USA
J M Ordovas Tufts University Boston, MA, USA
J Powell-Tuck Queen Mary’s, University of London London, UK
S E Ozanne University of Cambridge Cambridge, UK
V Preedy King’s College London London, UK
D M Paige Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA
N D Priest Middlesex University London, UK
J P Pearson University of Newcastle Newcastle-upon-Tyne, UK S S Percival University of Florida Gainesville, FL, USA T Peters King’s College Hospital London, UK B J Petersen Exponent, Inc. Washington DC, USA J C Phillips BIBRA International Ltd Carshalton, UK M F Picciano National Institutes of Health Bethesda, MD, USA A Pietrobelli Verona University Medical School Verona, Italy S Pin Johns Hopkins Hospital and School of Medicine Baltimore, MD, USA B M Popkin University of North Carolina Chapel Hill, NC, USA E M E Poskitt London School of Hygiene and Tropical Medicine London, UK A D Postle University of Southampton Southampton, UK
R Rajendram King's College London London, UK A Raman University of Wisconsin–Madison Madison, WI, USA H A Raynor Brown University Providence, RI, USA Y Rayssiguier National Institute for Agricultural Research Clermont-Ferrand, France L N Richardson United Nations World Food Programme Rome, Italy F J Rohr Children's Hospital Boston Boston, MA, USA A R Rolla Harvard Medical School Boston, MA, USA P Roncalés University of Zaragoza Zaragoza, Spain A C Ross The Pennsylvania State University University Park, PA, USA R Roubenoff Millennium Pharmaceuticals, Inc. Cambridge, MA, USA and Tufts University Boston, MA, USA
D Rumsey University of Sheffield Sheffield, UK
D A Schoeller University of Wisconsin–Madison Madison, WI, USA
C H S Ruxton Nutrition Communications Cupar, UK
L Schuberth Kennedy Krieger Institute Baltimore, MD, USA
J M Saavedra Johns Hopkins School of Medicine Baltimore, MD, USA
K J Schulze Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA
J E Sable University of California at Davis Davis, CA, USA
Y Schutz University of Lausanne Lausanne, Switzerland
M J Sadler MJSR Associates Ashford, UK
K B Schwarz Johns Hopkins School of Medicine Baltimore, MD, USA
N R Sahyoun University of Maryland College Park, MD, USA
J M Scott Trinity College Dublin Dublin, Ireland
S Salminen University of Turku Turku, Finland
C Shaw Royal Marsden NHS Foundation Trust London, UK
M Saltmarsh Alton, UK
J Shedlock Johns Hopkins Hospital and School of Medicine Baltimore, MD, USA
J M Samet Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA C P Sánchez-Castillo National Institute of Medical Sciences and Nutrition Salvador Zubirán, Tlalpan, Mexico M Santosham Johns Hopkins Bloomberg School of Public Health Baltimore, MD, USA
S M Shirreffs Loughborough University Loughborough, UK R Shrimpton Institute of Child Health London, UK H A Simmonds Guy’s Hospital London, UK
C D Saudek Johns Hopkins School of Medicine Baltimore, MD, USA
A P Simopoulos The Center for Genetics, Nutrition and Health Washington, DC, USA
A O Scheimann Johns Hopkins School of Medicine Baltimore, MD, USA
R J Smith Brown Medical School Providence, RI, USA
B Schneeman University of California at Davis Davis, CA, USA
P B Soeters University Hospital Maastricht Maastricht, The Netherlands
N Solomons Center for Studies of Sensory Impairment, Aging and Metabolism (CeSSIAM) Guatemala City, Guatemala J A Solon MRC Laboratories Gambia Banjul, The Gambia K Srinath Reddy All India Institute of Medical Sciences New Delhi, India S Stanner British Nutrition Foundation London, UK J Stevens University of North Carolina at Chapel Hill Chapel Hill, NC, USA
H S Thesmar Egg Nutrition Center Washington, DC, USA B M Thomson Rowett Research Institute Aberdeen, UK D I Thurnham University of Ulster Coleraine, UK L Tolentino National Institute of Public Health Cuernavaca, Mexico D L Topping CSIRO Health Sciences and Nutrition Adelaide, SA, Australia
J J Strain University of Ulster Coleraine, UK
B Torun Center for Research and Teaching in Latin America (CIDAL) Guatemala City, Guatemala
R J Stratton University of Southampton Southampton, UK
M G Traber Oregon State University Corvallis, OR, USA
R J Stubbs The Rowett Research Institute Aberdeen, UK
T R Trinick Ulster Hospital Belfast, UK
C L Stylianopoulos Johns Hopkins University Baltimore, MD, USA
K P Truesdale University of North Carolina at Chapel Hill Chapel Hill, NC, USA
A W Subudhi University of Colorado at Colorado Springs Colorado Springs, CO, USA
N M F Trugo Universidade Federal do Rio de Janeiro Rio de Janeiro, Brazil
J Sudagani Queen Mary’s, University of London London, UK
P M Tsai Harvard Medical School Boston, MA, USA
S A Tanumihardjo University of Wisconsin-Madison Madison, WI, USA
K L Tucker Tufts University Boston, MA, USA
J A Tayek Harbor–UCLA Medical Center Torrance, CA, USA
O Tully St Vincent’s University Hospital Dublin, Ireland
E H M Temme University of Leuven Leuven, Belgium
E C Uchegbu Royal Hallamshire Hospital Sheffield, UK
M C G van de Poll University Hospital Maastricht Maastricht, The Netherlands
R R Wing Brown University Providence, RI, USA
W A van Staveren Wageningen University Wageningen, The Netherlands
C K Winter University of California at Davis Davis, CA, USA
J Villar World Health Organization Geneva, Switzerland
H Wiseman King’s College London London, UK
M L Wahlqvist Monash University Victoria, VIC, Australia
M Wolraich Vanderbilt University Nashville, TN, USA
A F Walker The University of Reading Reading, UK
R J Wood Tufts University Boston, MA, USA
P A Watkins Kennedy Krieger Institute and Johns Hopkins University School of Medicine Baltimore, MD, USA
X Xu Johns Hopkins Hospital and School of Medicine Baltimore, MD, USA
A A Welch University of Cambridge Cambridge, UK R W Welch University of Ulster Coleraine, UK
Z Yang University of Wisconsin-Madison Madison, WI, USA A A Yates ENVIRON Health Sciences Arlington, VA, USA
K P West Jr Johns Hopkins University Baltimore, MD, USA
S H Zeisel University of North Carolina at Chapel Hill Chapel Hill, NC, USA
S Whybrow The Rowett Research Institute Aberdeen, UK
X Zhu University of North Carolina at Chapel Hill Chapel Hill, NC, USA
D H Williamson Radcliffe Infirmary Oxford, UK
S Zidenberg-Cherr University of California at Davis Davis, CA, USA
M-M G Wilson St Louis University St Louis, MO, USA
T R Ziegler Emory University Atlanta, GA, USA
CONTENTS
VOLUME 1
A
ACIDS see ELECTROLYTES: Acid-Base Balance
G Frühbeck and J Gómez-Ambrosi
ADIPOSE TISSUE
1
ADOLESCENTS Nutritional Requirements Nutritional Problems AGING
C H S Ruxton and J Fiore
15
C Lo
26
P Hyland and Y Barnett
40
ALCOHOL Absorption, Metabolism and Physiological Effects Disease Risk and Beneficial Effects
R Rajendram, R Hunter, V Preedy and T Peters
M Grønbæk
Effects of Consumption on Diet and Nutritional Status ALUMINUM
48 57
C H Halsted
N D Priest
62 69
AMINO ACIDS Chemistry and Classification Metabolism
P W Emery
76
P W Emery
Specific Functions
84
M C G van de Poll, Y C Luiking, C H C Dejong and P B Soeters
92
ANEMIA Iron-Deficiency Anemia Megaloblastic Anemia
K J Schulze and M L Dreyfuss J M Scott and P Browne
101 109
ANTIOXIDANTS Diet and Antioxidant Defense Observational Studies Intervention Studies
I F F Benzie and J J Strain
I F F Benzie
117 131
S Stanner
138
APPETITE Physiological and Neurobiological Aspects Psychobiological and Behavioral Aspects ARTHRITIS
J C G Halford and J E Blundell R J Stubbs, S Whybrow and J E Blundell
L A Coleman and R Roubenoff
147 154 163
ASCORBIC ACID Physiology, Dietary Sources and Requirements Deficiency States
D A Bender
C J Bates
ATHEROSCLEROSIS see CHOLESTEROL: Sources, Absorption, Function and Metabolism. CORONARY HEART DISEASE: Prevention
169 176
B B VITAMINS see COBALAMINS. NIACIN. PANTOTHENIC ACID. RIBOFLAVIN. THIAMIN: Physiology; Beriberi. VITAMIN B6 BACTERIA see INFECTION: Nutritional Interactions; Nutritional Management in Adults BASES see ELECTROLYTES: Acid-Base Balance BEER see ALCOHOL: Absorption, Metabolism and Physiological Effects; Disease Risk and Beneficial Effects; Effects of Consumption on Diet and Nutritional Status BEHAVIOR
E L Gibson and M W Green
183
BERIBERI see THIAMIN: Beriberi BEVERAGES see ALCOHOL: Absorption, Metabolism and Physiological Effects; Disease Risk and Beneficial Effects; Effects of Consumption on Diet and Nutritional Status. TEA BIOAVAILABILITY BIOTIN
R J Wood
195
D M Mock
201
BLOOD LIPIDS/FATS see HYPERLIPIDEMIA: Overview. LIPOPROTEINS BLOOD PRESSURE see HYPERTENSION: Etiology BODY COMPOSITION BONE
D Gallagher and S Chung
210
B M Thomson
220
BRAIN AND NERVOUS SYSTEM
J D Fernstrom and M H Fernstrom
225
BREAST FEEDING
C K Lutter
232
BURNS PATIENTS
S A Hill
238
C CAFFEINE CALCIUM
M J Arnaud
247
L H Allen and J E Kerstetter
253
CALORIES see ENERGY: Balance; Requirements. ENERGY EXPENDITURE: Indirect Calorimetry; Doubly Labeled Water CANCER Epidemiology and Associations Between Diet and Cancer
G A Colditz
Epidemiology of Gastrointestinal Cancers Other Than Colorectal Cancers Epidemiology of Lung Cancer Dietary Management
A J Alberg and J M Samet
C Shaw
Effects on Nutritional Status
260 H-Y Huang
266 272 284
C Shaw
Carcinogenic Substances in Food
289
D Anderson and J C Phillips
295
CARBOHYDRATES Chemistry and Classification Regulation of Metabolism
C L Stylianopoulos C L Stylianopoulos
Requirements and Dietary Importance Resistant Starch and Oligosaccharides
C L Stylianopoulos A Laurentin and C A Edwards
303 309 316 322
CARCINOGENS see CANCER: Carcinogenic Substances in Food CAROTENOIDS Chemistry, Sources and Physiology Epidemiology of Health Effects CEREAL GRAINS
B K Ishida and G E Bartley S A Tanumihardjo and Z Yang
R W Welch
330 339 346
CHEESE see DAIRY PRODUCTS CHILDREN Nutritional Requirements Nutritional Problems
M Lawson E M E Poskitt
357 370
CHOLECALCIFEROL see VITAMIN D: Physiology, Dietary Sources and Requirements; Rickets and Osteomalacia CHOLESTEROL Sources, Absorption, Function and Metabolism Factors Determining Blood Levels
CHOLINE AND PHOSPHATIDYLCHOLINE CHROMIUM
D J McNamara
S M Grundy X Zhu and S H Zeisel
R A Anderson
COBALAMINS
392 396
R Green
CELIAC DISEASE
379 385
401
V Nehra, E Marietta and J Murray
407
COFACTORS Inorganic
E D Harris
Organic
418
E D Harris
427
COFFEE see CAFFEINE COLON Structure and Function Disorders
A Maqbool
A Maqbool
448
Nutritional Management of Disorders COMPLEMENTARY FEEDING COPPER
439 D M Klurfeld
460
K G Dewey
465
X Xu, S Pin, J Shedlock and Z L Harris
471
CORONARY HEART DISEASE Hemostatic Factors Lipid Theory Prevention
476
D Kritchevsky
482
K Srinath Reddy
487
CYSTIC FIBROSIS CYTOKINES
W Gilmore
J Dowsett and O Tully
494
R F Grimble
501
D DAIRY PRODUCTS
J Buttriss
DEHYDRATION
511
A W Subudhi, E W Askew and M J Luetkemeier
DENTAL DISEASE
R C Cottrell
518 527
DIABETES MELLITUS Etiology and Epidemiology
J Sudagani and G A Hitman
Classification and Chemical Pathology Dietary Management
535
K C McCowen and R J Smith
C D Saudek and S H Oh
DIARRHEAL DISEASES
543 551
A Baqui, R Heinzen, M Santosham and R Black
565
DIETARY FIBER Physiological Effects and Effects on Absorption Potential Role in Etiology of Disease
I T Johnson
D L Topping and L Cobiac
Role in Nutritional Management of Disease
A R Leeds
572 578 586
VOLUME 2 DIETARY GUIDELINES, INTERNATIONAL PERSPECTIVES
B Schneeman
1
DIETARY INTAKE MEASUREMENT Methodology Validation
A A Welch M Nelson
DIETARY SURVEYS
K L Tucker
7 16 27
DIETETICS
P A Judd
32
DIGESTIBILITY see BIOAVAILABILITY DRUG–NUTRIENT INTERACTIONS
K G Conner
38
E EARLY ORIGINS OF DISEASE Fetal
A J Buckley and S E Ozanne
Non-Fetal
L S Adair
51 59
EATING BEHAVIOR see MEAL SIZE AND FREQUENCY EATING DISORDERS Anorexia Nervosa
A R Rolla
Bulimia Nervosa
A J Hill and S F L Kirk
Binge Eating EGGS
M D Marcus, M A Kalarchian and M D Levine
D J McNamara and H S Thesmar
66 74 80 86
EICOSANOIDS see PROSTAGLANDINS AND LEUKOTRIENES ELECTROLYTES Acid-Base Balance
A G Jardine and P B Mark
Water–Electrolyte Balance
S M Shirreffs and R J Maughan
93 100
ENERGY Metabolism Balance
S Cox
106
Y Schutz
115
Requirements Adaptation
W P T James A G Dulloo and J Jacquet
125 131
ENERGY EXPENDITURE Indirect Calorimetry
A Raman and D A Schoeller
Doubly Labeled Water
W A Coward
139 145
EXERCISE Beneficial Effects
C Boreham and M H Murphy
154
Diet and Exercise
R J Maughan
162
F FAMINE
K P West Jr
169
FAT-SOLUBLE VITAMINS see VITAMIN A: Biochemistry and Physiological Role. VITAMIN D: Physiology, Dietary Sources and Requirements; Rickets and Osteomalacia. VITAMIN E: Metabolism and Requirements. VITAMIN K FAT STORES see ADIPOSE TISSUE FATS see FATTY ACIDS: Metabolism; Monounsaturated; Omega-3 Polyunsaturated; Omega-6 Polyunsaturated; Saturated; Trans Fatty Acids. LIPIDS: Chemistry and Classification; Composition and Role of Phospholipids FATS AND OILS
A H Lichtenstein
177
FATTY ACIDS Metabolism
P A Watkins
Monounsaturated
186
P Kirk
198
Omega-3 Polyunsaturated
A P Simopoulos
205
Omega-6 Polyunsaturated
J M Hodgson, T A Mori and M L Wahlqvist
219
Saturated
R P Mensink and E H M Temme
Trans Fatty Acids
M J Sadler
225 230
FERTILITY
R E Frisch
237
FETAL ORIGINS OF DISEASE see EARLY ORIGINS OF DISEASE: Fetal; Non-Fetal FIBER see DIETARY FIBER: Physiological Effects and Effects on Absorption; Potential Role in Etiology of Disease; Role in Nutritional Management of Disease FISH
A Ariño, J A Beltrán, A Herrera and P Roncalés
247
FLAVONOIDS see PHYTOCHEMICALS: Classification and Occurrence; Epidemiological Factors FOLATE see FOLIC ACID FOLIC ACID
J McPartlin
257
FOOD ALLERGIES Etiology
T J David
265
Diagnosis and Management
T J David
FOOD CHOICE, INFLUENCING FACTORS FOOD COMPOSITION DATA FOOD FOLKLORE
270 A K Draper
277
S P Murphy
282
J Dwyer and J Freitas
289
FOOD FORTIFICATION Developed Countries
R Nalubola
295
Developing Countries
O Dary and J O Mora
302
FOOD INTAKE see DIETARY INTAKE MEASUREMENT: Methodology; Validation. DIETARY SURVEYS. MEAL SIZE AND FREQUENCY FOOD INTOLERANCE
T J David
309
FOOD SAFETY Mycotoxins Pesticides
J D Groopman and T W Kensler
317
M Saltmarsh
Bacterial Contamination Other Contaminants Heavy Metals
323 N Noah
329
C K Winter
340
G L Klein
344
FORTIFICATION see FOOD FORTIFICATION: Developed Countries; Developing Countries FRUCTOSE
N L Keim and P J Havel
FRUITS AND VEGETABLES
351
A E Bender
356
FUNCTIONAL FOODS Health Effects and Clinical Applications Regulatory Aspects
L Galland
360
H H Butchko and B J Petersen
366
G GALACTOSE
A Abi-Hanna and J M Saavedra
GALL BLADDER DISORDERS
377
B Nejadnik and L Cheskin
384
GERIATRIC NUTRITION see OLDER PEOPLE: Physiological Changes; Nutritional Requirements; Nutrition-Related Problems; Nutritional Management of Geriatric Patients GLUCOSE Chemistry and Dietary Sources
D J A Jenkins, R de Souza, L S A Augustin and C W C Kendall
Metabolism and Maintenance of Blood Glucose Level Glucose Tolerance GLYCEMIC INDEX
V Marks
B Ahrén
390 398 405
G Frost and A Dornhorst
413
GOITRE see IODINE: Deficiency Disorders GOUT
L A Coleman and R Roubenoff
419
GRAINS see CEREAL GRAINS GROWTH AND DEVELOPMENT, PHYSIOLOGICAL ASPECTS
W W Hay Jr
423
xxxii CONTENTS GROWTH MONITORING
T J Cole
433
GUT FLORA see MICROBIOTA OF THE INTESTINE: Probiotics; Prebiotics
H HANDICAP Down’s Syndrome
M Collins and A Laverty
443
A O Scheimann
449
Prader–Willi Syndrome Cerebral Palsy
J Krick and P Miller
452
HEART DISEASE see CORONARY HEART DISEASE: Hemostatic Factors; Lipid Theory; Prevention HEIGHT see NUTRITIONAL ASSESSMENT: Anthropometry HOMOCYSTEINE HUNGER
J W Miller
462
J C G Halford, A J Hill and J E Blundell
HYPERACTIVITY
469
M Wolraich
475
HYPERLIPIDEMIA Overview
T R Trinick and E B Duly
Nutritional Management
479
A H Lichtenstein
491
HYPERTENSION Etiology
T Morgan and H Brunner
Dietary Factors
L J Appel
Nutritional Management HYPOGLYCEMIA
499 506
C M Champagne
513
V Marks
523
VOLUME 3 I IMMUNE SYSTEM see IMMUNITY: Physiological Aspects; Effects of Iron and Zinc IMMUNITY Physiological Aspects
A T Borchers, C L Keen and M E Gershwin
Effects of Iron and Zinc
C Doherty
1 7
INBORN ERRORS OF METABOLISM Classification and Biochemical Aspects
D L Marsden
Nutritional Management of Phenylketonuria
D L Marsden, F J Rohr and K C Costas
13 22
INFANTS Nutritional Requirements Feeding Problems
S A Atkinson
28
R M Katz, L Schuberth and C S Gulotta
42
INFECTION Nutritional Interactions
H Ghattas
Nutritional Management in Adults
47 J A Tayek
54
INTESTINE see SMALL INTESTINE: Structure and Function; Disorders; MICROBIOTA OF THE INTESTINE: Probiotics; Prebiotics IODINE Physiology, Dietary Sources and Requirements Deficiency Disorders IRON
R Houston
B S Hetzel
J R Hunt
ISCHEMIC HEART DISEASE see CORONARY HEART DISEASE: Lipid Theory
66 74 82
K KESHAN DISEASE see SELENIUM KETOSIS
D H Williamson
91
L LACTATION Physiology
J L McManaman and M C Neville
Dietary Requirements
N M F Trugo and C M Donangelo
LACTOSE INTOLERANCE LEGUMES
99
D M Paige
106 113
M A Grusak
120
LIPIDS Chemistry and Classification
J L Dupont
Composition and Role of Phospholipids LIPOPROTEINS
126
A D Postle
J M Ordovas
LIVER DISORDERS
132 143
J Hampsey and K B Schwarz
150
LOW BIRTHWEIGHT AND PRETERM INFANTS Causes, Prevalence and Prevention Nutritional Management LUNG DISEASES
M Merialdi and M de Onis
J M Cox
161 168
A MacDonald
175
LYCOPENES AND RELATED COMPOUNDS
C J Bates
184
M MAGNESIUM
C Feillet-Coudray and Y Rayssiguier
MALABSORPTION SYNDROMES
P M Tsai and C Duggan
191 196
MALNUTRITION Primary, Causes Epidemiology and Prevention Secondary, Diagnosis and Management MANGANESE
A Briend and P Nestel
N Solomons
C L Keen, J L Ensunsa, B Lönnerdal and S Zidenberg-Cherr
MEAL SIZE AND FREQUENCY
F E Leahy
MEAT, POULTRY AND MEAT PRODUCTS
203 212 217 225
P A Lofgren
230
MENKES SYNDROME see COPPER MICROBIOTA OF THE INTESTINE Prebiotics
J M Saavedra and N Moore
237
Probiotics
M Gueimonde and S Salminen
244
MILK see DAIRY PRODUCTS MINERALS see CALCIUM. MAGNESIUM. PHOSPHORUS. POTASSIUM. SODIUM: Physiology MOLYBDENUM see ULTRATRACE ELEMENTS MONOSATURATED FAT see FATTY ACIDS: Monounsaturated MYCOTOXINS see FOOD SAFETY: Mycotoxins
N NIACIN
C J Bates
253
NITROGEN see AMINO ACIDS: Chemistry and Classification; Metabolism. PROTEIN: Digestion and Bioavailability; Quality and Sources; Requirements and Role in Diet; Deficiency NUCLEIC ACIDS
E A Carrey and H A Simmonds
260
NUTRIENT–GENE INTERACTIONS Molecular Aspects
C D Berdanier and H C Freake
269
Health Implications
C D Berdanier and H C Freake
276
NUTRIENT REQUIREMENTS, INTERNATIONAL PERSPECTIVES
A A Yates
NUTRITION POLICIES IN DEVELOPING AND DEVELOPED COUNTRIES NUTRITION TRANSITION, DIET CHANGE AND ITS IMPLICATIONS
C Geissler
B M Popkin
282 293 301
NUTRITIONAL ASSESSMENT J Eaton–Evans
311
Biochemical Indices
Anthropometry
F Fidanza
318
Clinical Examination
B Caballero
329
M Elia and R J Stratton
332
NUTRITIONAL SUPPORT In the Home Setting Adults, Enteral
K N Jeejeebhoy
Adults, Parenteral
342
J Binkley, S Daniell and G L Jensen
Infants and Children, Parenteral
S Collier and C Lo
349 357
NUTRITIONAL SURVEILLANCE Developed Countries Developing Countries NUTS AND SEEDS
N R Sahyoun
363
L M Neufeld and L Tolentino
371
J Gray
381
O OBESITY Definition, Etiology and Assessment Fat Distribution
A Pietrobelli
J Stevens and K P Truesdale
Childhood Obesity Complications
E M E Poskitt
389 392 399
A Ahmed and R L Atkinson
406
Prevention
T P Gill
413
Treatment
E C Uchegbu and P G Kopelman
421
OILS see FATS AND OILS OLDER PEOPLE Physiological Changes
N Solomons
Nutritional Requirements Nutrition-Related Problems
431
N Solomons
437
C P G M de Groot and W A van Staveren
Nutritional Management of Geriatric Patients
M-M G Wilson and J E Morley
444 449
OSTEOMALACIA see VITAMIN D: Rickets and Osteomalacia OSTEOPOROSIS
K O O’Brien
460
OXIDANT DAMAGE see ANTIOXIDANTS: Observational Studies; Intervention Studies
P PANTOTHENIC ACID PARASITISM
C J Bates
P G Lunn
467 472
PATHOGENS see INFECTION: Nutritional Interactions; Nutritional Management in Adults PELLAGRA
C J Bates
481
PESTICIDES see FOOD SAFETY: Pesticides PHENYLKETONURIA see INBORN ERRORS OF METABOLISM: Nutritional Management of Phenylketonuria PHOSPHATE see SMALL INTESTINE: Structure and Function PHOSPHORUS
J J B Anderson
486
PHYSICAL ACTIVITY see EXERCISE: Beneficial Effects; Diet and Exercise PHYTOCHEMICALS Classification and Occurrence Epidemiological Factors
A Cassidy
490
H Wiseman
497
PHYTO-ESTROGENS see PHYTOCHEMICALS: Classification and Occurrence; Epidemiological Factors POLYUNSATURATED FATTY ACIDS see FATTY ACIDS: Omega-3 Polyunsaturated; Omega-6 Polyunsaturated POTASSIUM
L J Appel
509
POULTRY see MEAT, POULTRY AND MEAT PRODUCTS PREGNANCY Role of Placenta in Nutrient Transfer Nutrient Requirements
P Haggarty
L H Allen
521
Energy Requirements and Metabolic Adaptations Weight Gain
513 G R Goldberg
L H Allen and J M Graham
528 533
VOLUME 4 PREGNANCY Safe Diet for Pregnancy
S Stanner
1
Dietary Guidelines and Safe Supplement Use Prevention of Neural Tube Defects Pre-eclampsia and Diet
L H Allen, J M Graham and J E Sabel
P N Kirke and J M Scott
E Abalos and J Villar
PREMENSTRUAL SYNDROME
27
M C de Souza and Ann F Walker
PROSTAGLANDINS AND LEUKOTRIENES
8 15
G E Caughey, M J James and L G Cleland
35 42
PROTEIN Synthesis and Turnover
D J Millward
Requirements and Role in Diet Digestion and Bioavailability Quality and Sources Deficiency
50
D J Millward
58
Z A Bhutta
66
B Torun
73
Z A Bhutta and H L Dewraj
82
PULSES see LEGUMES PYRIDOXINE see VITAMIN B6
R REFUGEES
R Bhatia and L N Richardson
RELIGIOUS CUSTOMS, INFLUENCE ON DIET
87 P Fieldhouse
93
RESPIRATORY DISEASES see CANCER: Epidemiology of Lung Cancer. LUNG DISEASES RETINOL see VITAMIN A: Biochemistry and Physiological Role; Deficiency and Interventions RIBOFLAVIN
C J Bates
RICKETS see VITAMIN D: Rickets and Osteomalacia ROUGHAGE see DIETARY FIBER: Physiological Effects and Effects on Absorption; Potential Role in Etiology of Disease; Role in Nutritional Management of Disease
S SALT see SODIUM: Physiology; Salt Intake and Health SATIETY see APPETITE: Physiological and Neurobiological Aspects
100
SATURATED FAT see FATTY ACIDS: Saturated SEASONALITY
F Branca and P D’Acapito
109
SEEDS see NUTS AND SEEDS SELENIUM
C J Bates
118
SENESCENCE see AGING SKINFOLD THICKNESS see NUTRITIONAL ASSESSMENT: Anthropometry SMALL INTESTINE Structure and Function Disorders
D Rumsey
126
R D’Souza and J Powell-Tuck
SOCIO-ECONOMIC STATUS
133
E Dowler
140
SODIUM Physiology
A R Michell
150 C P Sánchez-Castillo and W P T James
Salt Intake and Health
154
SODIUM CHLORIDE see SODIUM: Salt Intake and Health SPIRITS see ALCOHOL: Absorption, Metabolism and Physiological Effects; Disease Risk and Beneficial Effects; Effects of Consumption on Diet and Nutritional Status SPORTS NUTRITION
R J Maughan
167
STARCH see CARBOHYDRATES: Chemistry and Classification; Regulation of Metabolism; Requirements and Dietary Importance; Resistant Starch and Oligosaccharides STARVATION AND FASTING
J E Bines and R G Heine
173
J P Pearson and I A Brownlee
180
STOMACH Structure and Function Disorders
J A Solon
190
STROKE, NUTRITIONAL MANAGEMENT
S McLaren
196
SUCROSE Nutritional Role, Absorption and Metabolism Dietary Sucrose and Disease
J Brand-Miller
B Caballero
204 212
SUGAR see CARBOHYDRATES: Chemistry and Classification; Regulation of Metabolism; Requirements and Dietary Importance; GALACTOSE. GLUCOSE: Chemistry and Dietary Sources; Metabolism and Maintenance of Blood Glucose Level; Glucose Tolerance. SUCROSE: Nutritional Role, Absorption and Metabolism; Dietary Sucrose and Disease SUPPLEMENTATION Dietary Supplements
S S Percival
Role of Micronutrient Supplementation
214 R D W Klemm
220
Developing Countries
R Shrimpton
227
Developed Countries
M F Picciano and S S McDonald
233
SURGERY Perioperative Feeding
E Kelly
Long-term Nutritional Management
241 E Lin and T R Ziegler
246
T TEA
D J Baer and S C Chen
257
TEETH see DENTAL DISEASE THIAMIN Physiology Beriberi
D I Thurnham D I Thurnham
263 269
THIRST
J Leiper
278
TOCOPHEROL see VITAMIN E: Metabolism and Requirements; Physiology and Health Effects TRACE ELEMENTS see CHROMIUM. COPPER. IMMUNITY: Effects of Iron and Zinc. IODINE: Physiology, Dietary Sources and Requirements. IRON. MANGANESE. SELENIUM. ZINC: Physiology TRANS FATTY ACIDS see FATTY ACIDS: Trans Fatty Acids TUBERCULOSIS Nutrition and Susceptibility Nutritional Management
J P Cegielski and D N McMurray J P Cegielski and L Demeshlaira
287 294
TUMOR see CANCER: Epidemiology and Associations Between Diet and Cancer; Epidemiology of Gastrointestinal Cancers Other Than Colorectal Cancers; Epidemiology of Lung Cancer
U ULTRATRACE ELEMENTS
F Nielsen
299
UNITED NATIONS CHILDREN’S FUND URBAN NUTRITION
J P Greaves and R Shrimpton
N Solomons
311 317
V VEGAN DIETS see VEGETARIAN DIETS VEGETABLES see FRUITS AND VEGETABLES VEGETARIAN DIETS
J Dwyer
323
VITAMIN A Physiology
A C Ross
329
Biochemistry and Physiological Role Deficiency and Interventions
J L Napoli
339
K P West Jr
348
VITAMIN B1 see THIAMIN: Physiology; Beriberi VITAMIN B2 see RIBOFLAVIN VITAMIN B6
D A Bender
359
VITAMIN B12 see COBALAMINS VITAMIN C see ASCORBIC ACID: Physiology, Dietary Sources and Requirements; Deficiency States VITAMIN D Physiology, Dietary Sources and Requirements Rickets and Osteomalacia
M F Holick
368
J J B Anderson
378
Metabolism and Requirements
M G Traber
383
Physiology and Health Effects
P A Morrissey and M Kiely
389
VITAMIN E
VITAMIN K
C J Bates
398
W WATER see THIRST WEIGHT MANAGEMENT Approaches
N Finer
Weight Maintenance Weight Cycling WHOLE GRAINS
407 H A Raynor and R R Wing
413
L Lissner
421
R Lang and S A Jebb
427
WILSON’S DISEASE see COPPER
WINE see ALCOHOL: Absorption, Metabolism and Physiological Effects; Disease Risk and Beneficial Effects; Effects of Consumption on Diet and Nutritional Status WORLD HEALTH ORGANIZATION
J Akré
437
Y YOGURT see DAIRY PRODUCTS. FUNCTIONAL FOODS: Health Effects and Clinical Applications; MICROBIOTA OF THE INTESTINE: Probiotics; Prebiotics
Z ZINC Physiology
H C Freake
Deficiency in Developing Countries, Intervention Studies INDEX
447 C Hotz
454 463
A Acids see Electrolytes: Acid-Base Balance
ADIPOSE TISSUE
G Frühbeck and J Gómez-Ambrosi, Universidad de Navarra, Pamplona, Spain
© 2005 Elsevier Ltd. All rights reserved.
Introduction

The role of white adipose tissue (WAT) in storing and releasing lipids for oxidation by skeletal muscle and other tissues became so firmly established decades ago that a persistent lack of interest hindered the study of the extraordinarily dynamic behavior of adipocytes. However, with the recognition of obesity as one of the major public health problems, disentangling the neuroendocrine systems that regulate energy homeostasis and adiposity has become a first-priority challenge. Strictly speaking, obesity is defined not as an excess of body weight but as an increased adipose tissue accretion, to the extent that health may be adversely affected. Therefore, in recent decades, adipose tissue has become the research focus of biomedical scientists for epidemiological, pathophysiological, and molecular reasons. Although the primary role of adipocytes is to store triglycerides during periods of caloric excess and to mobilize this reserve when expenditure exceeds intake, it is now widely recognized that adipose tissue lies at the heart of a complex network that participates in the regulation of a variety of quite diverse biological functions (Figure 1).
Development

Adipose tissue develops extensively in homeotherms, with the proportion to body weight
varying greatly among species. Adipocytes differentiate from stellate or fusiform precursor cells of mesenchymal origin. There are two processes of adipose tissue formation. In the primary fat formation, which takes place relatively early (in human fetuses the first traces of a fat organ are detectable between the 14th and 16th weeks of prenatal life), gland-like aggregations of epitheloid precursor cells, called lipoblasts or preadipocytes, are laid down in specific locations and accumulate multiple lipid droplets, becoming brown adipocytes. The secondary fat formation takes place later in fetal life (after the 23rd week of gestation) as well as in the early postnatal period, whereby the differentiation of other fusiform precursor cells that accumulate lipid to ultimately coalesce into a single large drop per cell leads to the dissemination of fat depots formed by unilocular white adipocytes in many areas of connective tissue. Adipose tissue may be partitioned by connective tissue septa into lobules. The number of fat lobules remains constant, while in the subsequent developmental phases the lobules continuously increase in size.

At the sites of early fat development, a multilocular morphology of adipocytes predominates, reflecting the early developmental stage. Microscopic studies have shown that the second trimester may be a critical period for the development of obesity in later life. At the beginning of the third trimester, adipocytes are present in the main fat depots but are still relatively small. During embryonic development it is important to emphasize the tight temporospatial coordination of angiogenesis with the formation of fat cell clusters.

Figure 1 Dynamic view of white adipose tissue based on the pleiotropic effects on quite diverse physiological functions: appetite regulation and body weight homeostasis, vascular tone control, immunity, fibrinolysis, coagulation, reproduction, angiogenesis, glucose and lipid metabolism, and others.

At birth, body fat has been reported to
account for approximately 16% of total body weight (with brown fat constituting 2–5%), with an increase in body fat of around 0.7–2.8 kg during the first year of life. Adipogenesis, i.e., the development of adipose tissue, varies according to sex and age. Furthermore, the existence of sensitive periods for changes in adipose tissue cellularity throughout life has been postulated. In this regard, two peaks of accelerated adipose mass enlargement have been established, namely after birth and between 9 and 13 years of age. The capacity for cell proliferation and differentiation is highest during the first year of life, while it is less pronounced in the years before puberty. Thereafter, the rate of cell proliferation slows down during adolescence and, in weight-stable individuals, remains fairly constant throughout adulthood. In the case of a maintained positive energy balance, adipose mass expansion takes place initially by an enlargement of the existing fat cells. The perpetuation of this situation culminates in severe obesity, in which the total fat cell number can easily be trebled. Childhood-onset obesity is characterized by a combination of fat cell hyperplasia and hypertrophy, whereas in adult-onset obesity a hypertrophic growth predominates. However, it has recently been shown that adult humans are capable of new adipocyte formation, with fat tissue containing a significant proportion of cells with the ability to undergo differentiation. Interestingly, the hyperplastic growth of fat cells in adults does not take place until the existing adipocytes reach a critical cell size. Initially, excess energy storage starts as hypertrophic obesity resulting from the accumulation of excess lipid in a normal number of unilocular
adipose cells. In this case, adipocytes may be four times their normal size. If the positive energy balance is maintained, a hyperplastic or hypercellular obesity characterized by a greater than normal number of cells develops. Recent observations regarding the occurrence of apoptosis in WAT have changed the traditional belief that acquisition of fat cells is irreversible. The adipose lineage originates from multipotent mesenchymal stem cells that develop into adipoblasts (Figure 2). Commitment of these adipoblasts gives rise to preadipose cells (preadipocytes), which are cells that have expressed early but not late markers and have yet to accumulate triacylglycerol stores (Figure 3). Multipotent stem cells and adipoblasts, which are found during embryonic development, are still present postnatally. The relationship between brown and white fat during development has not been completely resolved. Brown adipocytes can be detected among all white fat depots in variable amounts depending on species, localization, and environmental temperature. The transformation of characteristic brown adipocytes into white fat cells can take place rapidly in numerous species and depots during postnatal development. The morphological and functional changes that take place in the course of adipogenesis represent a shift in transcription factor expression and activity leading from a primitive, multipotent state to a final phenotype characterized by alterations in cell shape and lipid accumulation. Various redundant signaling pathways and transcription factors directly influence fat cell development by converging on the upregulation of PPARγ, which embodies a common and essential regulator of adipogenesis as well as of adipocyte hypertrophy. Among the broad panoply of transcription factors, C/EBPs and the basic helix-loop-helix family (ADD1/SREBP-1c) also stand out, together with their link to the prevailing nutritional status. The transcriptional repression of adipogenesis includes both active and passive mechanisms. The former directly interferes with the transcriptional machinery, while the latter is based on the binding of negative regulators to yield inactive forms of known activators. Hormones, cytokines, growth factors, and nutrients influence the dynamic changes related to adipose tissue mass as well as its pattern of distribution (Figure 4). The responsiveness of fat cells to neurohumoral signals may vary according to peculiarities in the adipose lineage stage at the moment of exposure. Moreover, the simultaneous presence of some adipogenic factors at specific threshold concentrations may be a necessary requirement to trigger terminal differentiation.
Figure 2 Schematic diagram of the histogenesis of white and brown adipocytes. C/EBPs, CCAAT/enhancer binding proteins; PGC-1, peroxisome proliferator-activated receptor-γ coactivator-1; PPARγ, peroxisome proliferator-activated receptor-γ.
Structure

Adipose tissue is a special loose connective tissue dominated by adipocytes. The name of these cells reflects the presence of a large lipid droplet, with 'adipo' derived from the Latin adeps, meaning 'pertaining to fat.' In adipose tissue, fat cells are individually held in place by delicate reticular fibers, clustering in lobular masses bounded by fibrous septa and surrounded by a rich capillary network. In adults, adipocytes may comprise around 90% of adipose mass while accounting for only roughly 25% of the total cell population. Thus, adipose tissue is composed not only of adipocytes but also of other cell types, collectively called the stroma-vascular fraction, comprising blood cells, endothelial cells, pericytes, and adipose precursor cells, among others (Figure 5);
these account for the remaining 75% of the total cell population, representing a wide range of targets for extensive autocrine-paracrine cross-talk. Adipocytes, which are typically spherical and vary enormously in size (20–200 µm in diameter, with variable volumes ranging from a few picoliters to about 3 nanoliters), are embedded in a connective tissue matrix and are uniquely adapted to store and release energy. Surplus energy is assimilated by adipocytes and stored as lipid droplets. The stored fat is composed mainly of triacylglycerols (about 95% of the total lipid content, comprised principally of oleic and palmitic acids) and to a smaller degree of diacylglycerols, phospholipids, unesterified fatty acids, and cholesterol. To accommodate the lipids, adipocytes are capable of changing their
Figure 3 Multistep process of adipogenesis (mesenchymal stem cell, adipoblast, preadipocyte, immature adipocyte, mature adipocyte) together with events and participating regulatory elements. aP2, adipocyte fatty acid binding protein; C/EBPα, CCAAT/enhancer binding protein α; C/EBPβ & δ, CCAAT/enhancer binding proteins β & δ; CD36, fatty acid translocase; ECM, extracellular matrix; GLUT4, glucose transporter type 4; LPL, lipoprotein lipase; PPARγ, peroxisome proliferator-activated receptor-γ; Pref-1, preadipocyte factor-1; SREBP-1, sterol regulatory element binding protein-1.
diameter 20-fold and their volumes by several thousand-fold. However, fat cells do not increase in size indefinitely. Once a maximum capacity is attained, which in humans averages 1000 picoliters, the
formation of new adipocytes from the precursor pool takes place. Histologically, the interior of adipocytes appears unstained since the techniques of standard tissue
Figure 4 Factors exerting a direct effect on adipose mass. Adipogenic factors include angiotensin II, a diet rich in saturated fat, estrogens, glucocorticoids, IGF-1, insulin, LIF, long-chain fatty acids, lysophosphatidic acid, MCSF, PAI-1, PPARs, prolactin, retinoids, and thyroid hormones. Antiadipogenic factors include catecholamines, EGF, flavonoids, GH, IL-1, IL-6, leptin, PDGF, PGF2α, testosterone, TGF-β, and TNF-α. EGF, epidermal growth factor; GH, growth hormone; IGF-1, insulin-like growth factor-1; IL-1, interleukin-1; IL-6, interleukin-6; LIF, leukemia inhibitory factor; MCSF, macrophage colony stimulating factor; PAI-1, plasminogen activator inhibitor-1; PDGF, platelet-derived growth factor; PGF2α, prostaglandin F2α; PPARs, peroxisome proliferator-activated receptors; TGF-β, transforming growth factor-β; TNF-α, tumor necrosis factor-α.
Figure 5 Schematic representation of cell types present in adipose tissue (35–70% adipocytes; the stromal cell fraction comprises preadipocytes, fibroblasts, blood cells, endothelial cells, pericytes, poorly differentiated mesenchymal cells, and very small fat cells). WAT, white adipose tissue.
preparation dissolve out the lipids, leaving a thin rim of eosinophilic cytoplasm that typically loses its round shape during tissue processing, thus contributing to the sponge-like appearance of WAT in routine preparations for light microscopy (Figure 6 and Figure 7). Owing to the fact that about 90% of the cell volume is a lipid droplet, the small dark nucleus becomes a flattened semilunar structure pushed against the edge of the cell and the thin cytoplasmic rim is also pushed to the periphery of the adipocytes. Mature white adipose cells contain a single large lipid droplet and are described as unilocular. However, developing white adipocytes are transiently multilocular containing multiple lipid droplets before these finally coalesce into a single large drop (Figure 8). The nucleus is round or oval in young fat cells, but is cup-shaped and peripherally displaced in mature adipocytes. The cytoplasm is stretched to form a thin sheath around the fat globule, although a relatively large volume is concentrated around the nucleus. A thin external lamina called basal lamina surrounds the cell. The smooth cell membrane shows no microvilli but has abundant smooth micropinocytotic invaginations that often fuse to form small vacuoles appearing as rosette-like configurations (Figure 9). Mitochondria are few in number with loosely arranged membranous cristae. The Golgi zone is small and the cytoplasm is filled with free ribosomes, but contains only a limited number of short profiles of the
granular endoplasmic reticulum. Occasional lysosomes can be found. The coalescent lipid droplets contain a mixture of neutral fats, triglycerides, fatty acids, phospholipids, and cholesterol. A thin interface membrane separates the lipid droplet from the cytoplasmic matrix. Peripheral to this membrane is a system of parallel meridional thin filaments. Because of the size of these cells, relative to the thickness of the section, the nucleus (accounting for only one-fortieth of the cell volume) may not always be present in the section. Unilocular adipocytes usually appear in clumps near blood vessels, which is reasonable since the source and dispersion of material stored in fat cells depends on transportation by the vascular system. Brown fat is a specialized type of adipose tissue that plays an important role in body temperature regulation. In the newborn brown fat is well developed in the neck and interscapular region. It has a limited distribution in childhood, and occurs only to a small degree in adult humans, while it is present in significant amounts in rodents and hibernating animals. The brown color is derived from a rich vascular network and abundant mitochondria and lysosomes. The individual multilocular adipocytes are frothy appearing cells due to the fact that the lipid, which does not coalesce as readily as in white fat cells and is normally stored in multiple small droplets, has been leached out during tissue
Figure 6 (A) Human subcutaneous white adipose tissue with Masson trichrome staining (×10; bar = 100 µm). (B) Same tissue at a higher magnification (×40; bar = 25 µm). (Courtesy of Dr. M A Burrell and M Archanco, University of Navarra, Spain.)
processing (Figure 10). The spherical nuclei are centrally or eccentrically located within the cell. Compared to the unilocular white adipocytes, the cytoplasm of the multilocular brown fat cell is relatively abundant and strongly stained because of the numerous mitochondria present. The mitochondria are involved in the oxidation of the stored lipid, but because they exhibit a reduced potential to carry out oxidative phosphorylation, the energy produced is released in the form of heat due to the uncoupling activity of UCP and not captured in adenosine triphosphate (ATP). Therefore, brown adipose tissue is extremely well vascularized so that the blood is warmed when it passes through the active tissue.
Figure 7 (A) Human omental white adipose tissue with Masson trichrome staining (×10; bar = 100 µm). (B) Same tissue at a higher magnification (×40; bar = 25 µm). (Courtesy of Dr. M A Burrell and M Archanco, University of Navarra, Spain.)
Distribution

White adipose tissue may represent the largest endocrine tissue of the whole organism, especially in overweight and obese patients. The anatomical distribution of individual fat pads, dispersed throughout the whole body and not connected to each other, contradicts the classic organ-specific localization. WAT exhibits clear regional differences in its sites of predilection (Table 1). The hypodermal region invariably contains fat, except in a few places such as the eyelids and the scrotum. Adipocytes also accumulate around organs like the kidneys and adrenals, in the coronary sulcus of the heart, in bone marrow, mesentery, and omentum. Unilocular fat is
Figure 8 Paraffin section of rat abdominal white adipose tissue with a hematoxylin and eosin stain showing the simultaneous presence of uni- and multilocular adipocytes (×40; bar = 25 µm). (Courtesy of Dr. M A Burrell and M Archanco, University of Navarra, Spain.)
widely distributed in the subcutaneous tissue of humans but exhibits quantitative regional differences that are influenced by age and sex. In infants and young children there is a continuous subcutaneous fat layer, the panniculus adiposus, over the whole body. This layer thins out in some areas in adults but persists and grows thicker in certain other regions. The sites differ in their distribution between sexes, being responsible for the characteristic body form of males and females, termed android and gynoid fat distribution. In males, the main regions include the nape of the neck, the subcutaneous area over the deltoid and triceps muscles, and the lumbosacral region. In females, subcutaneous fat is most abundant in the buttocks, epitrochanteric region, anterior and lateral aspects of the thighs, as well as the breasts. Additionally, extensive fat depots are found in the omentum, mesenteries, and the retroperitoneal area of both sexes. In well-nourished, sedentary individuals, the fat distribution persists and becomes more obvious with advancing age, with males tending to deposit more fat in the visceral compartment. Depot-specific differences may be related not only to the metabolism of fat cells but also to their capacity to form new adipocytes. Additionally, regional differences may result from variations in hormone receptor distribution as well as from specific local environmental characteristics as a consequence of differences in innervation and vascularization. Regional distribution of body fat is known to be an important indicator for metabolic and cardiovascular alterations in some individuals.
Figure 9 (A) Transmission electron micrograph with the nucleus characteristically displaced to one side and slightly flattened by the accumulated lipid. The cytoplasm of the fat cell is reduced to a thin rim around the lipid droplet (×7725). (B) The cytoplasm contains several small lipid droplets that have not yet coalesced. A few filamentous mitochondria, occasional cisternae of endoplasmic reticulum, and a moderate number of free ribosomes are usually visible (×15 000). (Courtesy of Dr. M A Burrell and M Archanco, University of Navarra, Spain.)
Table 1 Distribution of main human adipose tissue depots

Subcutaneous (approx. 80%; deep + superficial layers)
  Truncal
    – Cervical
    – Dorsal
    – Lumbar
  Abdominal
  Gluteofemoral
  Mammary
Visceral (approx. 20%; thoracic-abdominal-pelvic)
  Intrathoracic (extra-intrapericardial)
  Intra-abdominopelvic
    – Intraperitoneal
      Omental (greater and lesser omentum)
      Mesenteric (epiplon, small intestine, colon, rectum)
      Umbilical
    – Extraperitoneal
      Peripancreatic (infiltrated with brown adipocytes)
      Perirenal (infiltrated with brown adipocytes)
    – Intrapelvic
      Gonadal (parametrial, retrouterine, retropubic)
      Urogenital (paravesical, para-retrorectal)
Intraparenchymatous (physiologically or pathologically)
Inter-intramuscular and perimuscular (inside the muscle fascia)
Perivascular
Paraosseal (interface between bone and muscle)
Ectopic (steatosis, intramyocardial, lipodystrophy, etc.)
Figure 10 (A) Paraffin section of rat brown adipose tissue with a hematoxylin and eosin stain (×20; bar = 50 µm). (B) Same tissue at a higher magnification (×40; bar = 25 µm). (Courtesy of Dr. M A Burrell and M Archanco, University of Navarra.)
The observation that the topographic distribution of adipose tissue is relevant to understanding the relation of obesity to disturbances in glucose and lipid metabolism was formulated before the 1950s. Since then, numerous prospective studies have revealed that android or male-type obesity correlates more often with an elevated mortality and risk for the development of diabetes mellitus type 2, dyslipidemia, hypertension, and atherosclerosis than gynoid or female-type obesity. Obesity has been reported to cause or exacerbate a large number of health problems with a known impact on both life expectancy and quality of life. In this respect, increased adiposity is accompanied by important pathophysiological
Figure 11 Main comorbidities associated with increased adiposity, including hyperlipidemia, cardiovascular disease, the metabolic syndrome, cancer, obstructive sleep apnea, infertility, hyperuricemia, psychosocial distress, osteoarthritis, atherosclerosis/inflammation, gastrointestinal alterations, and others.
alterations, which lead to the development of a wide range of comorbidities (Figure 11).
Function

Although many cell types contain small reserves of carbohydrate and lipid, adipose tissue is the body's most capacious energy reservoir. Because of the high energy content per unit weight of fat as well as its hydrophobicity, the storage of energy in the form of triglycerides is a highly efficient biochemical phenomenon (1 g of adipose tissue contains around 800 mg triacylglycerol and only about 100 mg of
water). It represents quantitatively the most variable component of the organism, ranging from a few per cent of body weight in top athletes to more than half of the total body weight in severely obese patients. The normal range is about 10–20% body fat for males and around 20–30% for females, accounting approximately for a 2-month energy reserve. During pregnancy most species accrue additional reserves of adipose tissue to help support the development of the fetus and to further facilitate the lactation period. Energy balance regulation is an extremely complex process composed of multiple interacting homeostatic and behavioral pathways aimed at maintaining constant energy stores. It is now evident that body weight control is achieved through highly orchestrated interactions between nutrient selection, organoleptic influences, and neuroendocrine responses to diet as well as being influenced by genetic and environmental factors. The concept that circulating signals generated in proportion to body fat stores influence appetite and energy expenditure in a coordinated manner to regulate body weight was proposed almost 50 years ago. According to this model, changes in energy balance sufficient to alter body fat stores are signaled via one or more circulating factors acting in the brain to elicit compensatory changes in order to match energy intake to energy expenditure. This was formulated as the ‘lipostatic theory’ assuming that as adipose tissue mass enlarges, a factor that acts as a sensing
hormone or ‘lipostat’ in a negative feedback control from adipose tissue to hypothalamic receptors informs the brain about the abundance of body fat, thereby allowing feeding behavior, metabolism, and endocrine physiology to be coupled to the nutritional state of the organism. The existing body of evidence gathered in the last decades through targeted expression or knockout of specific genes involved in different steps of the pathways controlling food intake, body weight, adiposity, or fat distribution has clearly contributed to unraveling the underlying mechanisms of energy homeostasis. The findings have fostered the notion of a far more complex system than previously thought, involving the integration of a plethora of factors. The identification of adipose tissue as a multifunctional organ as opposed to a passive organ for the storage of excess energy in the form of fat has been brought about by the emerging body of evidence gathered during the last few decades. This pleiotropic nature is based on the ability of fat cells to secrete a large number of hormones, growth factors, enzymes, cytokines, complement factors, and matrix proteins, collectively termed adipokines or adipocytokines (Table 2, Figure 12), at the same time as expressing receptors for most of these factors (Table 3), which warrants extensive cross-talk at a local and systemic level in response to specific external stimuli or metabolic changes. The vast majority of adipocyte-derived factors have been shown to be dysregulated in alterations accompanied by changes
Table 2 Relevant factors secreted by adipose tissue into the bloodstream (molecule – function/effect)

Adiponectin/ACRP30/AdipoQ/apM1/GBP28 – Plays a protective role in the pathogenesis of type 2 diabetes and cardiovascular diseases
Adipsin – Possible link between the complement pathway and adipose tissue metabolism
Angiotensinogen – Precursor of angiotensin II; regulator of blood pressure and electrolyte homeostasis
ASP – Influences the rate of triacylglycerol synthesis in adipose tissue
FFA – Oxidized in tissues to produce local energy. Serve as a substrate for triglyceride and structural molecules synthesis. Involved in the development of insulin resistance
Glycerol – Structural component of the major classes of biological lipids and gluconeogenic precursor
IGF-I – Stimulates proliferation of a wide variety of cells and mediates many of the effects of growth hormone
IL-6 – Implicated in host defense, glucose and lipid metabolism, and regulation of body weight
Leptin – Signals to the brain about body fat stores. Regulation of appetite and energy expenditure. Wide variety of physiological functions
NO – Important regulator of vascular tone. Pleiotropic involvement in pathophysiological conditions
PAI-1 – Potent inhibitor of the fibrinolytic system
PGI2 & PGF2α – Implicated in regulatory functions such as inflammation and blood clotting, ovulation, menstruation, and acid secretion
Resistin – Putative role in insulin resistance. May participate in inflammation
TNF-α – Interferes with insulin receptor signaling and is a possible cause of the development of insulin resistance in obesity
VEGF – Stimulation of angiogenesis
Figure 12 Factors secreted by white adipose tissue, which underlie the multifunctional nature of this endocrine organ (grouped by roles in lipid metabolism, vasoactive factors, immune response, growth factors, inflammation, cytokines, glucose metabolism, binding proteins, extracellular matrix proteins, and others). Although, owing to their pleiotropic effects, some of the elements might be included in more than one physiological role, they have been included under only one function for simplicity. apoE, apolipoprotein E; ASP, acylation-stimulating protein; CRP, C-reactive protein; CSFs, colony-stimulating factors; FFA, free fatty acids; HGF, hepatocyte growth factor; IGF-1, insulin-like growth factor-1; IL-1β, interleukin-1β; IL-1Ra, interleukin-1 receptor antagonist; IL-6, interleukin-6; IL-8, interleukin-8; IL-10, interleukin-10; IL-17 D, interleukin-17 D; LIF, leukemia inhibitory factor; LPL, lipoprotein lipase; MCP-1, monocyte chemoattractant protein-1; NGF, nerve growth factor; PAI-1, plasminogen activator inhibitor-1; PGF2α, prostaglandin F2α; PGI2, prostacyclin; SAA3, serum amyloid A3; sR, soluble receptor; TGF-β, transforming growth factor-β; TNF-α, tumor necrosis factor-α; VAP-1/SSAO, vascular adhesion protein-1/semicarbazide-sensitive amine oxidase; VEGF, vascular endothelial growth factor.
in adipose tissue mass, such as overfeeding and lipodystrophy, thus providing evidence for their implication in the etiopathology and comorbidities associated with obesity and cachexia. WAT is actively involved in cell function regulation through a complex network of endocrine, paracrine, and autocrine signals that influence the response of many tissues, including hypothalamus, pancreas, liver, skeletal muscle, kidneys, endothelium, and immune system, among others. Adipose tissue serves the functions of being a store for reserve energy, insulation against heat loss through the skin, and a protective padding of certain organs. A rapid turnover of stored fat can take place, and with only a few exceptions (orbit, major joints as well as palm and foot sole), the adipose tissue can be used up almost completely during starvation. Adipocytes are uniquely equipped to participate in the regulation of other functions such as reproduction, immune response, blood pressure control, coagulation, fibrinolysis, and angiogenesis, among others. This multifunctional nature is based on the existence of the full complement of enzymes, regulatory proteins, hormones, cytokines, and receptors needed to
carry out an extensive cross-talk at both a local and systemic level in response to specific external stimuli or neuroendocrine changes. This secretory nature has prompted the view of WAT as an extremely active endocrine tissue. Interestingly, the high number and ample spectrum of genes found to be expressed in WAT together with the changes observed in samples of obese patients substantiates the view of an extraordinarily active and plastic tissue. The complex and complementary nature of the expression profile observed in adipose tissue from obese organisms reflects a plethora of adaptive changes affecting crucial physiological functions that may need to be further explored through genomic and proteomic approaches. The endocrine activity of WAT was postulated almost 20 years ago when the tissue’s ability for steroid hormone interconversion was alluded to. In recent years, especially since the discovery of leptin, the list of adipocyte-derived factors has been increasing at a phenomenal pace. Another way of addressing the production of adipose-derived factors is by focusing on the function they are implicated in (Figure 12). One of the best known
Table 3 Main receptors expressed by adipose tissue (receptor – main effect of receptor activation on adipocyte metabolism)

Hormone-cytokine receptors
Adenosine – Inhibition of lipolysis
Adiponectin (AdipoR1 & AdipoR2) – Regulation of insulin sensitivity and fatty acid oxidation
Angiotensin II – Increase of lipogenesis. Stimulation of prostacyclin production by mature fat cells. Interaction with insulin in regulation of adipocyte metabolism
GH – Induction of leptin and IGF-I expression. Stimulation of lipolysis
IGF-I & -II – Inhibition of lipolysis. Stimulation of glucose transport and oxidation
IL-6 – LPL activity inhibition. Induction of lipolysis
Insulin – Inhibition of lipolysis and stimulation of lipogenesis. Induction of glucose uptake and oxidation. Stimulation of leptin expression
Leptin (OB-R) – Stimulation of lipolysis. Autocrine regulation of leptin expression
NPY-Y1 & Y5 – Inhibition of lipolysis. Induction of leptin expression
Prostaglandin – Strong antilipolytic effects (PGE2). Modulation of preadipocyte differentiation (PGF2α and PGI2)
TGF-β – Potent inhibition of adipocyte differentiation
TNF-α – Stimulation of lipolysis. Regulation of leptin secretion. Potent inhibition of adipocyte differentiation. Involvement in development of insulin resistance
VEGF – Stimulation of angiogenesis

Catecholamine-nervous system receptors
Muscarinic – Inhibition of lipolysis
Nicotinic – Stimulation of lipolysis
α1-AR – Induction of inositol phosphate production and PKC activation
α2-AR – Inhibition of lipolysis. Regulation of preadipocyte growth
β1-, β2- & β3-AR – Stimulation of lipolysis. Induction of thermogenesis. Reduction of leptin mRNA levels

Nuclear receptors
Androgen – Control of adipose tissue development (antiadipogenic signals). Modulation of leptin expression
Estrogen – Control of adipose tissue development (proadipogenic signals). Modulation of leptin expression
Glucocorticoids – Stimulation of adipocyte differentiation
PPARα – Regulation of fat metabolism. Plays a central role in fatty acid-controlled differentiation of preadipose cells
PPARγ – Induction of adipocyte differentiation and insulin sensitivity
RAR/RXR – Regulation of adipocyte differentiation
T3 – Stimulation of lipolysis. Regulation of leptin secretion. Induction of adipocyte differentiation. Regulation of insulin effects

Lipoprotein receptors
HDL – Clearance and metabolism of HDL
LDL – Stimulation of cholesterol uptake
VLDL – Binding and internalization of VLDL particles. Involvement in lipid accumulation

Abbreviations: ACRP30, adipocyte complement-related protein of 30 kDa; apM1, adipose most abundant gene transcript 1; ASP, acylation-stimulating protein; FFA, free fatty acids; GBP28, gelatin-binding protein 28; GH, growth hormone; HDL, high density lipoprotein; IGF, insulin-like growth factor; IL-6, interleukin 6; LDL, low density lipoprotein; LPL, lipoprotein lipase; NO, nitric oxide; NPY-Y1 & -Y5, neuropeptide receptors Y-1 & -5; OB-R, leptin receptor; PAI-1, plasminogen activator inhibitor-1; PGE2, prostaglandin E2; PGF2α, prostaglandin F2α; PGI2, prostacyclin; PPAR, peroxisome proliferator-activated receptor; RAR, retinoic acid receptor; RXR, retinoid X receptor; T3, triiodothyronine; TGF-β, transforming growth factor-β; TNF-α, tumor necrosis factor-α; VEGF, vascular endothelial growth factor; VLDL, very low-density lipoprotein; α1- & α2-AR, α1- & α2-adrenergic receptors; β1-, β2- & β3-AR, β1-, β2- & β3-adrenergic receptors.
aspects of WAT physiology relates to the synthesis of products involved in lipid metabolism such as perilipin, adipocyte lipid-binding protein (ALBP, FABP4, or aP2), CETP (cholesteryl ester transfer protein), and retinol binding protein (RBP). Adipose tissue has also been identified as a source of production of factors with immunological properties participating in immunity and stress responses, as is the case for ASP (acylation-stimulating protein) and metallothionein. More recently, the pivotal role of adipocyte-derived factors in cardiovascular function control, such as angiotensinogen, adiponectin, peroxisome proliferator-activated receptor-γ angiopoietin-related protein/fasting-induced adipose factor (PGAR/FIAF), and C-reactive protein (CRP), has been established. A further subsection of proteins produced by adipose tissue concerns other factors with an autocrine-paracrine function like
PPARγ (peroxisome proliferator-activated receptor-γ), IGF-1, monobutyrin, and the UCPs. It is generally assumed that under normal physiological circumstances adult humans are practically devoid of functional brown adipose tissue. As is the case in other larger mammals, the functional capacity of brown adipose tissue decreases because of the relatively higher ratio between heat production from basal metabolism and the smaller surface area encountered in adult animals. In addition, clothing and indoor life have reduced the need for adaptive nonshivering thermogenesis. However, it has recently been shown that human WAT can be infiltrated with brown adipocytes expressing UCP-1.
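As a rough cross-check of the energy-reserve figures quoted at the start of this section, the back-of-envelope estimate below shows how a typical fat mass translates into weeks of energy supply; the input values (a 60-kg woman with 25% body fat, adipose tissue that is roughly 80% triacylglycerol, about 37 kJ per gram of fat, and an expenditure of about 8.5 MJ per day) are illustrative assumptions rather than data from this article.

\[
\begin{aligned}
\text{adipose mass} &\approx 0.25 \times 60\ \mathrm{kg} = 15\ \mathrm{kg},\\
\text{stored triacylglycerol} &\approx 0.8 \times 15\ \mathrm{kg} = 12\ \mathrm{kg},\\
\text{energy stored} &\approx 12\,000\ \mathrm{g} \times 37\ \mathrm{kJ\,g^{-1}} \approx 4.4 \times 10^{5}\ \mathrm{kJ},\\
\text{duration} &\approx \frac{440\ \mathrm{MJ}}{8.5\ \mathrm{MJ\,day^{-1}}} \approx 52\ \mathrm{days} \approx 2\ \mathrm{months}.
\end{aligned}
\]

Under these assumptions the reserve comes out close to the approximately 2-month figure cited above.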
Regulation of Metabolism

The control of fat storage and mobilization has been marked by the identification of a number of regulatory mechanisms in the last few decades. Isotopic tracer studies have clearly shown that lipids are continuously being mobilized and renewed even in individuals in energy balance. Fatty acid esterification and triglyceride hydrolysis take place continuously. The half-life of depot lipids in rodents is about 8 days, meaning that almost 10% of the fatty acid stored in adipose tissue is replaced daily by new fatty acids. The balance between lipid loss and accretion determines the net outcome on energy homeostasis. The synthesis of triglycerides, also termed lipogenesis, requires a supply of fatty acids and glycerol. The main sources of fatty acids are the liver and the small intestine. Fatty acids are esterified with glycerol phosphate in the liver to produce triglycerides. Since triglycerides are bulky molecules that do not cross cell membranes well, they must be hydrolyzed to fatty acids and glycerol before entering fat cells. Serum very low-density lipoproteins (VLDLs) are the major form in which triacylglycerols are carried from the liver to WAT. Short-chain fatty acids (16 carbons or less) can be absorbed from the gastrointestinal tract and carried in chylomicra directly to the adipocyte. Inside fat cells, glycerol is mainly synthesized from glucose. In WAT, fatty acids can be synthesized from several precursors, such as glucose, lactate, and certain amino acids, with glucose being quantitatively the most important in humans. In the case of glucose, GLUT4, the principal glucose transporter of adipocytes, controls the entry of the substrate into the adipocyte. Insulin is known to stimulate glucose transport by promoting GLUT4 recruitment as well as increasing its activity. Inside the adipocyte, glucose is initially phosphorylated and then metabolized both in the cytosol and in the mitochondria to produce cytosolic
acetyl-CoA, with the flux being influenced by phosphofructokinase and pyruvate dehydrogenase. Glycerol does not readily enter the adipocyte, but the membrane-permeable fatty acids do. Once inside the fat cells, fatty acids are re-esterified with glycerol phosphate to yield triglycerides. Lipogenesis is favored by insulin, which activates pyruvate kinase, pyruvate dehydrogenase, acetyl-CoA carboxylase, and glycerol phosphate acyltransferase. When excess nutrients are available, insulin decreases acetyl-CoA entry into the tricarboxylic acid cycle while directing it towards fat synthesis. This insulin effect is antagonized by growth hormone. The gut hormones glucagon-like peptide 1 and gastric inhibitory peptide also increase fatty acid synthesis, while glucagon and catecholamines inactivate acetyl-CoA carboxylase, thus decreasing the rate of fatty acid synthesis. The release of glycerol and free fatty acids by lipolysis plays a critical role in the ability of the organism to provide energy from triglyceride stores. In this sense, the processes of lipolysis and lipogenesis are crucial for the attainment of body weight control. For this purpose adipocytes are equipped with a well-developed enzymatic machinery, together with a number of nonsecreted proteins and binding factors directly involved in the regulation of lipid metabolism. The hydrolysis of triglycerides from circulating VLDL and chylomicrons is catalyzed by lipoprotein lipase (LPL). This rate-limiting step plays an important role in directing fat partitioning. Although LPL controls fatty acid entry into adipocytes, fat mass has been shown to be preserved by endogenous synthesis. From observations made in patients with total LPL deficiency it can also be concluded that fat deposition can take place in the absence of LPL. A further key enzyme catalyzing a rate-limiting step of lipolysis is HSL (hormone-sensitive lipase), which cleaves triacylglycerol to yield glycerol and fatty acids. Some fatty acids are re-esterified, so that the fatty acid:glycerol ratio leaving the cell is usually less than the theoretical 3:1. Increased concentrations of cAMP activate HSL as well as promote its movement from the cytosol to the lipid droplet surface. Catecholamines and glucagon are known inducers of lipolytic activity, while the stimulation of lipolysis is attenuated by adenosine and prostaglandin E2. Interestingly, HSL deficiency leads to male sterility and adipocyte hypertrophy, but not to obesity, with an unaltered basal lipolytic activity, suggesting that other lipases may also play a relevant role in fat mobilization.
hydrolysis in the basal state. The phosphorylation of perilipin following adrenergic stimulation or other hormonal inputs induces a structural change of the lipid droplet that allows the hydrolysis of triglycerides. After hormonal stimulation, HSL and perilipin are phosphorylated and HSL translocates to the lipid droplet. ALBP, also termed aP2, then binds to the N-terminal region of HSL, preventing fatty acid inhibition of the enzyme's hydrolytic activity. The function of CETP is to promote the exchange of cholesterol esters and triglycerides between plasma lipoproteins. Fasting and high-cholesterol diets, as well as insulin, stimulate CETP synthesis and secretion in WAT. In plasma, CETP participates in the modulation of reverse cholesterol transport by facilitating the transfer of cholesterol esters from high-density lipoprotein (HDL) to triglyceride-rich apoB-containing lipoproteins. VLDLs, in particular, are converted to low-density lipoproteins (LDLs), which are subjected to hepatic clearance by the apoB/E receptor system. Adipose tissue probably represents one of the major sources of CETP in humans. Therefore, WAT represents a cholesterol storage organ, whereby peripheral cholesterol is taken up by HDL particles, acting as cholesterol efflux acceptors, and is returned for hepatic excretion. In obesity, the activity and protein mass of circulating CETP are increased, showing a negative correlation with HDL concentrations and a positive correlation with fasting glycemia and insulinemia, suggesting a potential link with insulin resistance. Synthesis and secretion of RBP by adipocytes is induced by retinoic acid and shows that WAT plays an important role in retinoid storage and metabolism. In fact, RBP mRNA is one of the most abundant transcripts present in both rodent and human adipose tissue. Hepatic and renal tissues have been regarded as the main sites of RBP production, while the quantitative and physiological significance of the WAT contribution remains to be fully elucidated. The processes participating in the control of energy balance, as well as the intermediary lipid and carbohydrate metabolism, are intricately linked by neurohumoral mediators. The coordination of the implicated molecular and biochemical pathways underlies, at least in part, the large number of intracellular and secreted proteins produced by WAT with autocrine, paracrine, and endocrine effects. The finding that WAT secretes a plethora of pleiotropic adipokines at the same time as expressing receptors for a huge range of compounds has led to the development of new insights into the functions of adipose tissue at both the basic and clinical level. At this early juncture in the course of adipose tissue research, much has been discovered. However, a great deal more remains to be
learned about its physiology and clinical relevance. Given the adipocyte's versatile and ever-expanding list of secretory proteins, additional and unexpected discoveries are sure to emerge. The growth, cellular composition, and gene expression pattern of adipose tissue are under the regulation of a large selection of central mechanisms and local effectors. The exact nature and control of this complex cross-talk has not been fully elucidated and represents an exciting research topic.
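The depot-lipid turnover figures quoted earlier in this section can be checked with a simple first-order approximation; the calculation below is an illustrative sketch rather than part of the original text.

\[
k = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{8\ \mathrm{days}} \approx 0.087\ \mathrm{day^{-1}}, \qquad
\text{fraction replaced per day} = 1 - e^{-k} \approx 0.083,
\]

i.e., a half-life of about 8 days corresponds to roughly 8–9% of the stored fatty acids being renewed each day, consistent with the statement that almost 10% is replaced daily.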
Abbreviations

ACRP30/apM1/GBP28 – adipocyte complement-related protein of 30 kDa/adipose most abundant gene transcript 1/gelatin-binding protein 28
ADD1/SREBP-1c – adipocyte determination and differentiation factor-1/sterol regulatory element binding protein-1c
ALBP/FABP4/aP2 – adipocyte fatty acid binding protein
apoE – apolipoprotein E
ASP – acylation-stimulating protein
ATP – adenosine triphosphate
cAMP – cyclic adenosine monophosphate
CD36 – fatty acid translocase
C/EBPs – CCAAT/enhancer binding proteins
CETP – cholesteryl ester transfer protein
CRP – C-reactive protein
CSF – colony-stimulating factor
ECM – extracellular matrix
EGF – epidermal growth factor
FFA – free fatty acids
FGF – fibroblast growth factor
GH – growth hormone
GLP-1 – glucagon-like peptide-1
GLUT4 – glucose transporter type 4
HDL – high density lipoprotein
HGF – hepatocyte growth factor
HSL – hormone-sensitive lipase
IGF – insulin-like growth factor
IL – interleukin
IL-1Ra – interleukin-1 receptor antagonist
LDL – low density lipoprotein
LIF – leukemia inhibitory factor
LPL – lipoprotein lipase
MCP-1 – monocyte chemoattractant protein-1
MCSF – macrophage colony stimulating factor
MIF – macrophage migration inhibitory factor
MIP-1 – macrophage inflammatory protein-1
NGF – nerve growth factor
NO – nitric oxide
NPY-Y1 & -Y5 – neuropeptide receptors Y-1 & -5
OB-R – leptin receptor
PAI-1 – plasminogen activator inhibitor-1
PDGF – platelet-derived growth factor
PGAR/FIAF – peroxisome proliferator-activated receptor-γ angiopoietin-related protein/fasting-induced adipose factor
PGC-1 – peroxisome proliferator-activated receptor-γ coactivator-1
PGE2 – prostaglandin E2
PGF2α – prostaglandin F2α
PGI2 – prostacyclin
PPAR – peroxisome proliferator-activated receptor
Pref-1 – preadipocyte factor-1
RAR – retinoic acid receptor
RBP – retinol binding protein
RXR – retinoid X receptor
SAA3 – serum amyloid A3
T3 – triiodothyronine
TGF-β – transforming growth factor-β
TNF-α – tumor necrosis factor-α
UCP – uncoupling protein
VAP-1/SSAO – vascular adhesion protein-1/semicarbazide-sensitive amine oxidase
VEGF – vascular endothelial growth factor
VLDL – very low density lipoprotein
WAT – white adipose tissue
α1- & α2-AR – α1- & α2-adrenergic receptors
β1-, β2- & β3-AR – β1-, β2- & β3-adrenergic receptors
See also: Cholesterol: Sources, Absorption, Function and Metabolism; Factors Determining Blood Levels. Diabetes Mellitus: Etiology and Epidemiology; Classification and Chemical Pathology; Dietary Management. Fatty Acids: Metabolism; Monounsaturated; Omega-3 Polyunsaturated; Omega-6 Polyunsaturated; Saturated; Trans Fatty Acids. Hypertension: Etiology. Lipids: Chemistry and Classification; Composition and Role of Phospholipids. Lipoproteins. Obesity: Definition, Etiology and
Assessment; Fat Distribution; Childhood Obesity; Complications; Prevention; Treatment. Pregnancy: Safe Diet for Pregnancy.
Further Reading

Ailhaud G and Hauner H (2004) Development of white adipose tissue. In: Bray GA and Bouchard C (eds.) Handbook of Obesity. Etiology and Pathophysiology, 2nd edn, pp. 481–514. New York: Marcel Dekker, Inc.
Frayn KN, Karpe F, Fielding BA, Macdonald IA, and Coppack SW (2003) Integrative physiology of human adipose tissue. International Journal of Obesity 27: 875–888.
Fried SK and Ross RR (2004) Biology of visceral adipose tissue. In: Bray GA and Bouchard C (eds.) Handbook of Obesity. Etiology and Pathophysiology, 2nd edn, pp. 589–614. New York: Marcel Dekker, Inc.
Frühbeck G (2004) The adipose tissue as a source of vasoactive factors. Current Medicinal Chemistry (Cardiovascular & Hematological Agents) 2: 197–208.
Frühbeck G and Gómez-Ambrosi J (2003) Control of body weight: a physiologic and transgenic perspective. Diabetologia 46: 143–172.
Frühbeck G, Gómez-Ambrosi J, Muruzábal FJ, and Burrell MA (2001) The adipocyte: a model for integration of endocrine and metabolic signaling in energy metabolism regulation. American Journal of Physiology 280: E827–E847.
Gómez-Ambrosi J, Catalán V, Diez-Caballero A, Martínez-Cruz A, Gil MJ, García-Foncillas J, Cienfuegos JA, Salvador J, Mato JM, and Frühbeck G (2004) Gene expression profile of omental adipose tissue in human obesity. The FASEB Journal 18: 215–217.
Lafontan M and Berlan M (2003) Do regional differences in adipocyte biology provide new pathophysiological insights? Trends in Pharmacological Sciences 24: 276–283.
Langin D and Lafontan M (2000) Millennium fat-cell lipolysis reveals unsuspected novel tracks. Hormone and Metabolic Research 32: 443–452.
Pond CM (1999) Physiological specialisation of adipose tissue. Progress in Lipid Research 38: 225–248.
Rosen ED, Walkey CJ, Puigserver P, and Spiegelman BM (2000) Transcriptional regulation of adipogenesis. Genes and Development 14: 1293–1307.
Shen W, Wang Z, Punyanita M, Lei J, Sinav A, Kral JG, Imielinska C, Ross R, and Heymsfield SB (2003) Adipose quantification by imaging methods: a proposed classification. Obesity Research 11: 5–16.
Trayhurn P and Beattie JH (2001) Physiological role of adipose tissue: white adipose tissue as an endocrine and secretory organ. Proceedings of the Nutrition Society 60: 329–339.
Unger RH (2003) The physiology of cellular liporegulation. Annual Review of Physiology 65: 333–347.
Wajchenberg BL (2000) Subcutaneous and visceral adipose tissue: their relation to the metabolic syndrome. Endocrine Reviews 21: 697–738.
ADOLESCENTS

Contents
Nutritional Requirements
Nutritional Problems
Nutritional Requirements
C H S Ruxton, Nutrition Communications, Cupar, UK
J Fiore, University of Westminster, London, UK
© 2005 Elsevier Ltd. All rights reserved.
Introduction

Adolescence is the period of transition between childhood and adulthood. This reflects not only the physical and emotional changes experienced by the adolescent, but also the development of dietary behaviors. Whereas younger children are characterized by their resistance to new experiences, the adolescent may use food to assert their independence, not always in a beneficial way. This section will cover development in adolescence and highlight nutrients that are important during this time. Information on adolescent energy and nutrient intakes from a broad range of countries will be presented. The findings will be put in context with dietary recommendations.
Physical Changes During Adolescence

Adolescence is generally assumed to be the period of human development from 10 to 18 years of age, a time during which rapid growth and physical maturity take place.

Growth

During prepubescent childhood, the growth of boys and girls follows a similar trajectory, although boys may be slightly taller and heavier than girls. The pubertal growth spurt, which can last up to 3.5 years, begins in girls around the 9th year, with boys starting approximately 2 years later. Girls reach their full height approximately 2 years before boys and are, therefore, the taller of the two sexes for a period of time. Current UK standards for height and weight during adolescence are presented in Table 1. Maximum height velocity is generally seen in the year preceding menarche for girls and at around 14 years for boys. On average, weight velocity peaks at 12.9 years for girls and 14.3 years for boys. Annual growth rates during adolescence can be as much as 9 cm/8.8 kg in girls and 10.3 cm/9.8 kg in boys. Energy and protein intakes per kilogram body weight have been observed to peak during maximal growth, suggesting increased requirements during adolescence. Undernutrition in this crucial window of development can result in a slow height increment, lower peak bone mass, and delayed puberty. On the other hand, overnutrition is not without its risks. It is believed that obesity in young girls can bring about an early menarche, which then increases the risk of breast cancer in later adulthood. Menarche is deemed precocious if it occurs before the age of eight. Rising childhood obesity levels in Western countries have resulted in a rise in the proportion of girls displaying precocious menarche.
Table 1 Percentiles for height, weight, and body mass index

(a) Boys
Age (years)   Height (cm), 3rd/50th/97th   Weight (kg), 3rd/50th/97th   Body mass index, 2nd/50th/99.6th
11            130.8 / 143.2 / 155.8        26.1 / 34.5 / 50.9           14 / 17 / 26
16            158.9 / 173.0 / 187.4        44.9 / 60.2 / 83.2           16 / 20 / 30
18            163.3 / 176.4 / 189.7        52.0 / 66.2 / 87.9           17 / 21 / 32

(b) Girls
Age (years)   Height (cm), 3rd/50th/97th   Weight (kg), 3rd/50th/97th   Body mass index, 2nd/50th/99.6th
11            130.9 / 143.8 / 156.9        26.0 / 35.9 / 53.6           14 / 17 / 27
16            151.6 / 163.0 / 174.6        42.8 / 55.3 / 74.1           16 / 20 / 31
18            152.3 / 163.6 / 175.0        44.7 / 57.2 / 76.3           17 / 21 / 32
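As a quick illustration of how the reference values in Table 1 can be applied, the short sketch below computes body mass index from weight and height; the function name and the example measurements are illustrative, with the height and weight taken from the 50th-centile row for 16-year-old boys rather than from an actual survey record.

def bmi(weight_kg, height_cm):
    """Body mass index = weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100
    return weight_kg / height_m ** 2

# Hypothetical 16-year-old boy on the 50th centile of Table 1: 173.0 cm, 60.2 kg.
print(round(bmi(60.2, 173.0), 1))  # 20.1, close to the 50th-centile BMI of 20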
It is not fully known when growth ceases. Certainly, height gains of up to 2 cm can still occur between 17 and 28 years. Important nutrients for growth include protein, iron, calcium, vitamin C, vitamin D, and zinc. Calcium, in particular, has a key role in bone development, and huge increments in bone density are seen during adolescence under the influence of sex hormones. Bone density peaks in the early twenties and a low bone density at this time is related to increased osteoporosis risk in later life, especially for women. Studies have suggested that body mass index in adolescence is the best predictor of adult bone density, explaining why children who experience anorexia nervosa are likely to have a higher risk of osteoporosis.

Adipose stores
There are few differences in body fat between boys and girls in the prepubertal stage. However, during puberty, girls develop adipose tissue at a greater rate than boys, laying down stores in the breast and hip regions. The pattern for boys is rather different and tends towards a more central deposition. Methods for estimating fatness in adolescents include weight for height, body mass index (weight in kilograms divided by height in meters squared), skinfold thickness measures, bioelectrical impedance analysis, densitometry, magnetic resonance imaging, dual energy X-ray absorptiometry, and computed tomography. Waist circumference is gaining popularity as a useful proxy of fatness in the field. Many researchers argue that it is a better predictor than body mass index (BMI) of the central adipose stores, which place the individual most at risk from later obesity, diabetes, and coronary heart disease. Current UK standards for BMI and waist circumference are outlined in Table 1. The 90th percentile is viewed as the lower cut-off point for classification of overweight and can identify those at risk of chronic disease. In a Norwegian longitudinal survey, adolescents with a mean baseline BMI above the 95th centile increased their risk of early mortality by 80–100% compared with adolescents whose mean baseline BMI was between the 25th and 75th centiles. Despite these intriguing data, it is notoriously difficult to establish which adolescents will persist with an excess body weight into adulthood. This is partly because adolescents have yet to reach their full height and partly because the etiology of obesity is related to lifestyle factors that may change with time. Attempts to track fatness from childhood to adulthood have produced contradictory results, with some authors claiming that certain ages, such as 7 years and adolescence, are 'risk' points for the development of later obesity and others finding that only the adiposity of older adolescents tracks
to adulthood. Thus, there is no guarantee that the overweight adolescent will remain so in later life.

Sexual Development
In girls, the onset of menarche at around 13 years is triggered by the attainment of a specific level of body fat, with taller, heavier girls more likely to experience an early menarche. Vigorous exercise, e.g., gymnastics and endurance running, can delay the menarche, due both to the physiological effects of regular training and the depletion of body fat. Iron becomes more important for girls as menstrual periods become regular and heavier, and there is evidence that the iron status of many girls may be inadequate. Low iron status in this age group is, in part, due to higher requirements, but it is also linked to nutritional practices such as missing breakfast, avoiding red meat, and dieting.
Dietary Recommendations

There are, of course, a variety of national recommendations for nutritional intake, which, for adolescents, are normally based on a combination of deficiency studies and extrapolations from adult studies. In the UK, US, and Canada, guidelines have evolved from a simple recommended dietary intake (RDI) to a more complex bell-shaped distribution with a mean representing the intake likely to satisfy the needs of 50% of the population. The upper extreme, at the 97.5th centile, represents the intake likely to meet the needs of the majority of the population, while the lower extreme, at the 2.5th centile, represents the lowest acceptable intake. Current UK reference nutrient intakes (RNIs), presented in Table 2, cover a range of nutrients from fats and sugars to the main micronutrients. Dietary guidelines are an important reference point for nutrition scientists and dietitians, but it must also be borne in mind that they relate to the average needs of populations, rather than individuals. Instead of numerical recommendations, many nations have adopted more conceptual ways of representing the ideal diet. This makes sense as recommended nutrient intakes are poorly understood by the public and need to be put into context by health professionals. Communication tools such as the plate model, pyramid system, food groups, and traffic light system can help to get healthy eating messages across to adolescents.
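If requirements are assumed to be approximately normally distributed, the bell-shaped framework described above can be summarized algebraically; the numerical values in the worked example are purely illustrative and are not UK reference figures.

\[
\text{RNI} \approx \text{EAR} + 2\,\text{SD} \ (\text{97.5th centile}), \qquad
\text{LRNI} \approx \text{EAR} - 2\,\text{SD} \ (\text{2.5th centile}).
\]

For instance, a nutrient with an estimated average requirement (EAR) of 1.0 mg per day and a standard deviation of 0.1 mg per day would have an RNI of about 1.2 mg per day and a lowest acceptable intake (the lower reference nutrient intake, LRNI) of about 0.8 mg per day.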
Dietary Intakes

There is a lay belief that most adolescents have a nutritionally inadequate diet; yet, despite reported
Table 2 UK Dietary guidelines for adolescents

(a) Dietary reference values: macronutrients
Age group (years)  Sex  Energy (MJ)  Protein (g)  NSP (g)  Fat (% energy)  Starch/intrinsic sugars (% energy)  Non-milk extrinsic sugars (% energy)
11–14              M    9.27         42.1         18       35              39                                  11
11–14              F    7.92         41.2         18       35              39                                  11
15–18              M    11.51        55.2         18       35              39                                  11
15–18              F    8.83         45.0         18       35              39                                  11

(b) Reference nutrient intakes: vitamins and minerals
Age group (years)  Sex  Vit. B1 (mg)  Vit. B2 (mg)  Niacin (mg)  Vit. B6 (mg)  Vit. B12 (µg)  Folate (µg)  Vit. C (mg)  Vit. A (µg)  Ca (mg)  Fe (mg)  Zn (mg)
11–14              M    0.9           1.2           15           1.2           1.2            200          35           600          1000     11.3     9.0
11–14              F    0.7           1.1           12           1.0           1.2            200          35           600          800      14.8     9.0
15–18              M    1.1           1.3           18           1.5           1.5            200          40           700          1000     11.3     9.5
15–18              F    0.8           1.1           14           1.2           1.5            200          40           600          800      14.8     7.0

NSP, nonstarch polysaccharide.
low intakes of some micronutrients in surveys, there is little evidence of widespread clinical deficiencies, or indications that adolescents are failing to achieve appropriate heights and weights. Iron is the exception, where mean intakes are low and clinical markers suggest deficiency in some age groups. There is justifiable concern about the general healthiness of diets eaten by 'at risk' subgroups such as dieters, smokers, strict vegetarians, and adolescents who drink excess amounts of alcohol.

Dietary surveys
Mean daily intakes of energy and selected micronutrients from a selection of major international surveys of adolescents are presented in Table 3. Caution should be exercised when interpreting data from dietary surveys because under-reporting of energy is widespread in adolescent and adult populations. Selective under-reporting, often focused on energy-dense or high-fat foods, can partially explain low reported intakes of energy and certain micronutrients. It is also difficult to make comparisons between the data from different countries given the range of dietary assessment methods used. There is normally a trade-off between sample size and methodology, which sees the larger surveys favoring less precise methods such as 24-h recalls or food frequency questionnaires in order to make data collection more economical. The results of the most recent UK National Diet and Nutrition Survey (NDNS) of 2672 young people aged 4–18 years (adolescent values given in Table 4) will be discussed in detail as this represents a survey with particularly strong dietary methodology (i.e., 7-day weighed inventory).
Energy and Protein
Despite mean height and weight data, which are consistent with expected results, energy intakes in UK adolescents remain below estimated average requirements (EARs). Mean energy intakes for boys and girls were 77–89% of EARs; a similar finding to that demonstrated by surveys of younger children and adults. Girls aged 15–18 years had the lowest energy intakes as a proportion of EARs and, apart from under-reporting, this could be due to smoking, slimming, or indeed lower than anticipated energy expenditure. It is well documented that physical activity is particularly low in adolescent girls. Indeed, the NDNS reported that 60% of girls (and 40% of boys) failed to perform the recommended amount of 1 h of moderate physical activity per day. Popular sources of energy in the UK adolescent diet included cereal products (one third of energy), savory snacks, potatoes, meat/meat products, white bread, milk/dairy products, biscuits/cakes, spreading fats, and confectionery. Soft drinks contributed on average 6% of energy intakes. Figure 1 gives a comparison of energy intakes across a range of countries, mainly in Europe. The values represent the mean of reported energy intakes for children aged 9–18 years in these countries, with the majority of surveys focusing on intakes of 11–18 year olds. It is interesting that a large number of countries display similar results (around 10 000 kJ per day), with a handful of countries, namely Germany, Greece, Portugal, Sweden, and the UK, displaying intakes closer to 8000 kJ per day. For these countries, under-reporting, lower energy requirements, or conscious energy restriction prompted by weight concerns could be reasons for the apparent low intakes.
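To make the EAR comparisons above concrete, the short sketch below converts a reported energy intake to kilocalories and expresses it as a percentage of the relevant EAR from Table 2; the intake value used is a hypothetical illustration, not an NDNS result.

def percent_of_ear(intake_mj, ear_mj):
    """Reported energy intake as a percentage of the estimated average requirement."""
    return 100 * intake_mj / ear_mj

def mj_to_kcal(mj):
    """Convert megajoules to kilocalories (1 kcal = 4.184 kJ)."""
    return mj * 1000 / 4.184

# Hypothetical girl aged 15-18: reported intake 6.8 MJ/day vs an EAR of 8.83 MJ/day (Table 2a).
print(round(percent_of_ear(6.8, 8.83)))   # ~77 (% of EAR)
print(round(mj_to_kcal(6.8)))             # ~1625 kcal/day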
Table 3 Key international surveys of adolescent dietary intakes

[Table 3 reports, for each survey and for males and females in the age bands studied, mean daily intakes of energy (MJ and kcal), protein (% energy), carbohydrate (% energy), sugars (g), fat (% energy), iron (mg), calcium (mg), vitamin A (μg retinol equivalents), vitamins B1, B2, and B6 (mg), vitamin B12 (μg), niacin (mg), folate (μg), and vitamin C (mg). The surveys covered are: Australia (24HR, 1995); Austria (7dUR, 24HR, 1991 and 2002); Belgium (3dUR, FFQ, 1991 and 1995); Canada (24HR, 1993); Denmark (7dUR, 1995); Finland (24HR, 4dUR, 3dUR, 1996–97); France (DH, 1dWR, 1988 and 1993–94); Germany (DH, 3-d/7-d recall, 1dWR, 1985–95 and 1998); Greece (1dWR, 24HR, 1993–94 and 1999); Ireland (DH, 1988); Japan (UR, n/a); Netherlands (2dUR, 1997–98); New Zealand (24HR, FFQ, 1997); Norway (1dWR, FFQ, n/a); Portugal (24HR, 1995); Spain (24HR, FFQ, 1989–92); Sweden (7dUR, 1989–90 and 1993–94); Switzerland (7dUR, 1994–95); Turkey (24HR, 2003); USA (24HR, 1999–2000).]

24HR, 24-hour recall; WR, weighed record; FFQ, food frequency questionnaire; UR, unweighed record; DH, diet history. Vitamin A is given as micrograms retinol equivalent. Dates of actual surveys are given where available. Data from more than one survey are presented for some countries.
Table 4 Average daily dietary intakes of UK adolescents from the National Diet and Nutrition Survey (2000)

[Table 4 reports mean daily intakes for four groups - boys aged 11–14 years (n = 234), boys aged 15–18 years (n = 179), girls aged 11–14 years (n = 238), and girls aged 15–18 years (n = 210) - for energy (MJ, with % of EAR), protein, carbohydrate, nonmilk extrinsic sugars, and fat (each as % energy), NSP (g), and iron, calcium, vitamin A, vitamins B1, B2, B6, and B12, niacin, folate, and vitamin C (each with % of RNI). Mean energy intakes were 8.28 MJ (89% of EAR) for boys aged 11–14, 9.69 MJ (83% of EAR) for boys aged 15–18, 7.03 MJ (89% of EAR) for girls aged 11–14, and 6.82 MJ (77% of EAR) for girls aged 15–18.]

Study conducted January to December 1997 with a sample size of 2672. EAR, estimated average requirement; RNI, reference nutrient intake; NMES, nonmilk extrinsic sugars (similar to added sugars); NSP, nonstarch polysaccharide.
Figure 1 Reported energy intakes (kJ) for adolescents in a selection of countries.
In the NDNS, mean protein intakes were considerably in excess of requirements, as assessed by the RNI, for all ages and both sexes. The main sources were meat and meat products (which contributed 30% of overall protein), cereals, bread, and milk products. It is believed that protein requirements in adolescents are between 0.8 and 1.0 g per kg body mass, although this fails to take into account any additional needs related to regular exercise (which are likely to be minor for most sports and covered by normal protein intakes). As a proportion of energy, protein intakes were higher in Southern European countries, Australia, and New Zealand compared with intakes in the US and Northern European countries.

Fat
Mean total fat intake as a proportion of energy in the NDNS was around 35%, corresponding to the UK dietary reference value (DRV). This is lower than the intakes (38–40% energy from fat) found in previous studies. However, intakes of saturated fat, at 14% energy, still exceeded the DRV of 11% energy. Of more concern was the subgroup of adolescents in the highest percentile of intakes who consumed around 17% energy from saturated fat.
This emphasizes the view that, although mean intakes may look acceptable when compared with dietary guidelines, there may be 'at risk' groups whose dietary habits predispose them to a greater risk of chronic disease. Main sources of saturated fat in the adolescent diet included meat and meat products (around 20%), savory snacks, and fried foods. In most other countries, fat intakes were 36–38% energy, with the highest fat intakes reported in Finland, Greece, Belgium, Germany, Switzerland, and Spain at around 38% energy. In the US, where the dietary guideline is 30%, intakes were around 32% energy from fat.

Carbohydrates
Average total carbohydrate intake in the NDNS was close to the DRV of 50% energy. The main sources were cereals, bread, savory snacks, vegetables, and potatoes. Fiber intakes, expressed as nonstarch polysaccharide (NSP), were 10–13 g day⁻¹, which approached 70% of the adult guideline. Vegetables, potatoes, and savory snacks together contributed 40% of NSP. Interestingly, there was no clear relationship between NSP and bowel movements, although it was noted that adolescents who experienced less than one bowel
movement per day tended to have NSP intakes at the lowest end of the distribution. The mean intake of nonmilk extrinsic sugars (a proxy for added sugars) was 16% of energy, around 4 percentage points higher than the DRV of 11% of food energy. Key sources were soft drinks (providing 42% of sugars), sugar preserves, and confectionery, particularly chocolate. Children from lower income households tended to have lower intakes of total carbohydrate, nonmilk extrinsic sugars, and NSP compared with children from higher income households.

Recommendations to reduce fat are often accompanied by those urging a decrease in added sugars due to concerns about obesity, dental health, and micronutrient dilution. However, an inverse relationship between fat and sugars is evident in the majority of dietary surveys, suggesting that concurrent reductions in fat and sugar may be neither realistic nor totally beneficial. A previous survey found a difference of 4% energy from fat between children in the lowest and highest thirds of sugar intake. Observational studies, including the latest NDNS, have also found an inverse relationship between body mass index and sugar intake. Explanations for this include self-imposed sugar restrictions amongst heavier people, and food choices in favor of higher sugar, low-fat foods, which could be less obesogenic. With respect to the potential impact of added sugars on micronutrient dilution, studies in the UK, Germany, and the US have found that a broad range of sugar intake is consistent with adequate micronutrient intakes. This may be partly due to fortification of sugar-containing foods, e.g., breakfast cereals. Lower levels of vitamins and minerals tend to be seen only at the upper and lower extremes of sugar consumption, suggesting that these diets lack variety.

Micronutrients
Main sources of micronutrients are breakfast cereals, milk, bread, chips/potatoes, and eggs. Surveys that report comparisons between intakes and recommendations have found satisfactory intakes for most micronutrients when means are considered. Intakes of vitamins B1, B2, B6, B12, and C, and niacin greatly exceeded RNIs in the NDNS, perhaps reflecting high protein intakes and the fortification of popular foods such as breakfast cereals, bread, and beverages. Even folate, a problem nutrient in earlier studies, was consumed at an acceptable level. Nutrient intakes that remain at lower than expected levels were iron and zinc for both sexes,
and calcium and vitamin A for girls. Mean iron intake was particularly low in 11–18-year-old girls, at 60% of the RNI (see Table 4). Mean iron intakes often fail to meet recommended levels in the majority of studies reported, particularly in women and girls. This may reflect avoidance of iron-containing foods, e.g., red meat, for reasons of perceived health, food safety, or dislike. Iron status is also hampered by absorption rates, which can be as low as 10%. It is important to reverse this trend as increasing numbers of young girls are now demonstrating clinical evidence of poor iron status, e.g., more than a quarter of 15–18-year-old girls in the NDNS. A New Zealand survey reported that 4–6% of adolescents were anemic. Good sources of iron are meat/meat products, breakfast cereals, bread, chips/potatoes, chocolate, and crisps. Around 25% of iron intakes are from fortified foods, which supply non-heme iron. The latter four food groups are not particularly rich in iron but, nevertheless, contribute over 10% due to the significant amounts eaten.

Poor intakes of calcium are of concern due to the rising incidence of osteoporosis in later life, especially amongst women. While average calcium intakes were around 80% of the RNI in the NDNS, there was a considerable proportion of adolescents with intakes below the lower RNI (the bottom end of the acceptable spectrum). In 11–14-year-old children, 12% of boys and 24% of girls fell into this category, while in 15–18 year olds the figures were 9% and 19%, respectively. Good sources of calcium are milk, cheese, yogurt, tinned fish, and, in many countries, fortified grain products. Concern has been expressed that the rise in soft drink consumption has displaced milk from the diets of adolescents and this could be contributing to the low calcium intakes found in many surveys. Fluid milk consumption has fallen dramatically over the last decade in Western countries, due to a range of factors including preference for other beverages, dieters' concerns about calories, and the attitudes of adolescents towards milk. It should not be forgotten that physical activity is an important aspect of the prevention of osteoporosis. Some life-style practices, such as smoking and drinking alcohol, are related to a higher requirement for micronutrients, suggesting that specific groups of adolescents may be more at risk from a poor nutrient status.
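As a rough illustration of how survey intakes are compared with reference values in this way, the minimal sketch below expresses an intake as a percentage of the RNI. The 14.8 mg iron RNI for girls is taken from the reference values quoted earlier; the 8.9 mg mean intake is a hypothetical figure chosen to be consistent with the '60% of the RNI' cited above, not a value read from Table 4.

def percent_of_reference(intake: float, reference: float) -> float:
    """Express a mean daily intake as a percentage of a reference value (RNI or EAR)."""
    return 100.0 * intake / reference

# Hypothetical example: girls' mean iron intake of 8.9 mg against an RNI of 14.8 mg
print(round(percent_of_reference(8.9, 14.8)))  # -> 60 (% of RNI)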
Impact of Lifestyle on Dietary Intakes

Young people consume particular foods and diets for a variety of reasons, often completely unrelated to their nutritional content. These can include:
slimming or weight control (whether justified or not); peer group pressure to consume certain foods or brands; the development of personal ideology, such as the use of vegetarian diets; following a specific diet to enhance sporting prowess; or even convenience. Energy and nutrient intakes are influenced by specialized eating patterns, so it is important to consider life-style choices when interpreting dietary survey data.

Breakfast Consumption
Breakfast is identified in many studies as a nutrient-dense, low-fat meal, yet is often omitted by adolescents. Around 10% of younger children miss breakfast, rising to 20% as adulthood is approached. Boys omit breakfast less than girls and favor cereals rather than bread or a cooked breakfast. Data on breakfast habits have revealed higher intakes of sugars, fiber, and micronutrients, such as folate, niacin, iron, calcium, and zinc, amongst high breakfast cereal consumers. Fat intakes, as a proportion of energy, are inversely related to breakfast cereal intake, probably due to the higher carbohydrate intakes of breakfast consumers. Previous surveys of adolescents have found an inverse relationship between breakfast cereal consumption and body mass index.

Consumption of School Lunches
Although the popularity of school lunches has diminished over the last 10 years, they are still eaten regularly by almost 40% of children, particularly those from lower socioeconomic groups. School lunches have been found to contribute 30–40% of total energy and are often criticized for containing a high proportion of fat and low levels of key micronutrients such as vitamin C and calcium. Older children often prefer to eat lunch at cafes and take-aways rather than consider school meals, and this practice has been found to be associated with less nutrient-dense diets, particularly in the case of iron. Initiatives have been taken forward in many schools to improve the quality and perception of school meals, including action groups involving pupils, caterers, and teachers. There have also been efforts at government level to integrate the production of school meals with classroom-based topics around nutrition, health, and life style. It is too early to say whether these efforts have had a significant impact on the nutrition of adolescents.

Snacking and Soft Drink Consumption
There has been a general shift over the last decade towards fewer meals eaten at home and more eaten
in restaurants and cafes, combined with an increase in snacking. Snacks, including soft drinks, now contribute a significant proportion of the daily energy intake of adolescents. Concerns about the possible impact of snacks on measures of overweight and nutrient composition have not been borne out by the evidence, although it is acknowledged that data collection in this area is complicated by the myriad of definitions for 'snack.' A number of observational studies have found that frequent snackers have similar nutrient intakes to those who snack infrequently. With respect to body size, snacking tends to be associated with a lower, rather than a higher, body mass index. Intervention studies also provide valuable evidence on the effects of snacking. A study in adults, which attempted to increase consumption of snacks to around 25% of daily energy using a variety of low- and high-fat products, found that the subjects compensated for the additional energy by reducing the amount eaten at meals.

While these data suggest that snacking is more benign than was previously thought, it is important to emphasize the concept of balance. Common snack foods amongst adolescents are potato crisps, carbonated drinks, biscuits, and confectionery. While these foods certainly have a role in creating variety and enjoyment in the diet, no one would argue that they should represent the primary sources of energy for young people. In the case of soft drinks, evidence from short-term intervention studies suggests that higher intakes (in excess of two cans per day) are linked with higher energy intakes and lower intakes of micronutrients. Yet most epidemiological studies show an inverse correlation between sugar consumption (a proxy for soft drink consumption) and mean body mass index. Further work is needed to determine optimal cut-offs for soft drink intakes, particularly for adolescents, who tend to be major consumers.

Smoking
The proportion of adolescent smokers rises with age and is between 8% and 20%, with an average exposure, in older children, of around 40 cigarettes per week. Since the 1980s, smoking has decreased in adolescent boys but not in girls. Smokers tend to have different dietary habits from nonsmokers and this is reflected in their nutrient intakes. Studies have found that smokers consume less dairy food, wholemeal bread, fruit, and breakfast cereal, and more coffee, alcohol, and chips. Smokers' diets tend to be lower in fiber, vitamin B1, and vitamin C compared with those of nonsmokers. In a study of 18 year olds, male smokers had a higher percentage of energy from fat and lower intakes of sugars and iron. Contrary to
evidence from adult surveys, smoking has not been found to relate to body size in adolescents, although the opposite is believed to be true for teenage girls who use smoking as a misguided means to control energy intake. As would be expected, dietary restraint is more common amongst female smokers.
Consumption of Alcohol

In the NDNS, alcohol was consumed by 10% of 11–14 year olds and 37–46% of 15–18 year olds, with older boys most likely to drink alcohol. Other European surveys have found higher proportions (60–90% in 14–18-year-old males), while US surveys have found similar proportions to the UK. The average contribution of alcohol to energy intakes in the NDNS was just over 1%, with higher contributions reported by Danish and Irish studies (around 2–5% energy). Excess alcohol intake can increase micronutrient requirements, but few younger adolescents fall into this category. However, binge drinking in the 15–18-year-old age group is a concern. One US study found that 20% of adolescents could be classed as problem drinkers, while 7% could be classed as alcoholics. Regular moderate consumption of alcohol can contribute to obesity since the energy provided by alcoholic drinks rarely displaces energy from other food sources. This is likely to increase overall daily energy intakes and could lead to a positive energy balance.

Other Factors that Impact on Dietary Intakes

Comparisons between boys and girls often reveal differences in dietary patterns, yet these are seldom consistent between surveys. On the whole, boys eat more meat and dairy products, while girls favor fruit, salad vegetables, and artificially sweetened drinks. The dietary practices of girls are more likely to be influenced by a desire to limit energy intakes. Lower intakes of dairy products, meat, and breakfast cereals seen in older adolescent girls explain their typically poor intakes of iron and calcium. Differences in diet are sometimes seen between children from different social classes or income groups. In the NDNS, children from a lower socioeconomic background were less likely to consume low-fat dairy foods, fruit juice, salad vegetables, high-fiber cereals, and fruit than children from a higher socioeconomic background. This impacted on mean daily nutrient intakes, with lower socioeconomic children consuming less protein, total sugars, total carbohydrate, and fiber. There was a similar trend for micronutrients, particularly vitamin C. Some surveys have found higher fat intakes in children from lower socioeconomic backgrounds. Such a dietary pattern, characterized by lower than optimal levels of protective nutrients, combined with a higher prevalence of smoking, may partly explain the higher burden of chronic disease experienced by people from lower socioeconomic groups.
Promoting Optimal Diets

The findings of the studies shown in Tables 3 and 4 reveal that most adolescents in the developed world are likely to be receiving adequate energy and protein to support growth. The intakes of micronutrients found in subgroups of the population may not be high enough to ensure optimal health, but it is difficult to interpret the effects of these without appropriate biochemical data. For iron, there is good evidence of clinical deficiency in low iron consumers, particularly girls, but for other nutrients biochemical evidence is scarce. Longitudinal studies that attempt to link early diet with the incidence of later disease are a valuable tool and seem to suggest that high intakes of fruit, vegetables, folate, and n-3 polyunsaturated fatty acids (present in oily fish) are dietary indicators that relate to important aspects of health later in life. Despite these scientific findings, health messages relating to fruit and vegetables seem to have fallen on deaf ears. The NDNS showed that 70% of children had eaten no citrus fruit during the week of the dietary survey. Around 60% had eaten no green leafy vegetables or tomatoes, valuable sources of vitamins and minerals.

Since energy intake is the main predictor of micronutrient intakes, it makes sense to ensure that adolescents avoid restricting energy. Yet this finding needs to be considered against a background of rising obesity in the adolescent population. There is strong evidence that adolescence is the time when substantial reductions in physical activity are seen, and such a trend, combined with lower energy intakes, could result in larger numbers of children failing to meet their individual nutrient requirements. The key to tackling this lies as much with physical activity as it does with dietary intervention. Energy intakes need to be maintained at a level suitable for optimal micronutrient uptake while, at the same time, energy expenditure should be increased to ensure energy balance. A wide range of foods encompassing the main food groups will ensure a nutrient-dense diet. Special conditions in adolescence, such as pregnancy, lactation, and sports training, may increase requirements above normal and merit manipulation of the diet to
favor food groups known to be important sources of certain nutrients.

Conclusions

Diets of adolescents in developed countries meet the macronutrient requirements of the majority of individuals, resulting in appropriate rates of growth. While fat intakes, as a proportion of energy, have continued to decline towards dietary guidelines, concern remains over the intakes of iron, calcium, zinc, and vitamin A in many subgroups of adolescents, particularly older girls. Maintaining adequate energy intakes and encouraging consumption of fruit, vegetables, lean meat, and oily fish may be a key route to achieving an optimal intake of micronutrients. Present recommendations for adolescents include a continuing reduction in dietary fat to help prevent later diseases of affluence. This should be combined with encouragement to increase physical activity in order to address the rising incidence of obesity in most developed countries.

See also: Adolescents: Nutritional Problems. Alcohol: Absorption, Metabolism and Physiological Effects; Disease Risk and Beneficial Effects; Effects of Consumption on Diet and Nutritional Status. Calcium. Dietary Surveys. Osteoporosis.

Further Reading

Alexy U, Sichert-Hellert W, and Kersting M (2003) Associations between intake of added sugars and intakes of nutrients and food groups in the diets of German children and adolescents. British Journal of Nutrition 90: 441–447.
Cruz JA (2000) Dietary habits and nutritional status in adolescents over Europe - Southern Europe. European Journal of Clinical Nutrition 54(supplement 1): S29–S35.
Deckelbaum RJ and Williams CL (2001) Childhood obesity: the health issue. Obesity Research 9(supplement 4): 239S–243S.
Frary CD, Johnson RK, and Wang MQ (2004) Children and adolescents' choices of foods and beverages high in added sugars are associated with intakes of key nutrients and food groups. Journal of Adolescent Health 34: 56–63.
Gregory JR, Lowe S, Bates CJ et al. (2000) National Diet and Nutrition Survey: Young People Aged 4 to 18 Years. London: The Stationery Office.
Lambert J, Agostoni C, Elmadfa I et al. (2004) Dietary intake and nutritional status of children and adolescents in Europe. British Journal of Nutrition 92(supplement 2): S147–S211.
Ruxton CHS, Storer H, Thomas B, and Talbot D (2000) Teenagers and young adults. In: Thomas B (ed.) Manual of Dietetic Practice, 2nd edn, pp. 256–262. Oxford: Blackwell Science.
Serra-Majem L (2001) Vitamin and mineral intakes in European children. Is food fortification needed? Public Health Nutrition 4: 101–107.

Nutritional Problems

C Lo, Children's Hospital Boston, Harvard Medical School and Harvard School of Public Health, Boston, MA, USA

© 2005 Elsevier Ltd. All rights reserved.
Introduction: Normal Adolescent Growth and Diets

Adolescence is a unique time of rapid growth, with half of eventual adult weight and 45% of peak bone mass accumulated during adolescence. Adolescence is the time when peak physical muscular development and exercise performance are reached. However, adolescent diets are often notorious for their reliance on snacks and 'junk foods' that are high in calories, sugar, salt, and saturated fat, which could provide extra energy for the high activity demands of teenagers, but often risk becoming part of bad habits leading to obesity and increased risk of atherosclerotic heart disease in later life. Although most studies have been on older subjects, it is now clear that many Western diseases, especially heart disease, stroke, diabetes, hypertension, and many cancers, are diet related, and that diets high in saturated fat and low in fruits, vegetables, and fiber may increase risks of heart disease. Indeed, autopsy reports of atherosclerotic plaques already present in adolescents who died accidentally suggest that prevention of heart disease should start quite early in life. Epidemiologic evidence from large cohort studies suggests that a striking 80% reduction in heart disease and diabetes might be achieved in those with diets lower in saturated and trans fat and higher in fruits, vegetables, folate, fiber, and n-3 fish oils. Other factors include regular exercise, moderate alcohol use, and avoidance of obesity and smoking.
Nutrient Requirements

About every 10 years, the Institute of Medicine convenes several committees of nutrition scientists to review the scientific literature and recommend levels of daily dietary nutrients that would keep 95% of the population from developing deficiencies. In the past, the dietary reference intakes (DRIs) or recommended dietary allowances (RDAs) concentrated on ensuring that nutrient deficiencies were minimized by specifying lower limits of intakes. However, it is now clear that many Western diets provide too much of some nutrients, such as total calories, simple carbohydrates, saturated fats, and salt. Therefore, recent editions of the DRIs (see Tables 1 to 5) have specified estimated average requirements (EARs), adequate intakes (AIs), and upper limits (ULs).
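By IOM convention (not spelled out in this article), the RDA for a nutrient with an approximately normal requirement distribution is derived from the EAR by adding two standard deviations; where the variability is unknown, a coefficient of variation of about 10% is usually assumed, so that, approximately:

\[
\mathrm{RDA} = \mathrm{EAR} + 2\,SD_{\mathrm{EAR}} \;\approx\; 1.2 \times \mathrm{EAR}
\]

Nutrients with skewed requirement distributions (iron in menstruating females, for example) are handled differently, which is why their RDAs are not a simple multiple of the EARs shown in Table 5.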
Table 1 Recommended dietary allowances and adequate intakes

                               Males                         Females
Life stage group               9–13 y   14–18 y   19–30 y    9–13 y   14–18 y   19–30 y
Vitamin A (μg day⁻¹)           600      900       900        600      700       700
Vitamin C (mg day⁻¹)           45       75        90         45       65        75
Vitamin D (μg day⁻¹)           5*       5*        5*         5*       5*        5*
Vitamin E (mg day⁻¹)           11       15        15         11       15        15
Vitamin K (μg day⁻¹)           60*      75*       120*       60*      75*       90*
Thiamin (mg day⁻¹)             0.9      1.2       1.2        0.9      1.0       1.1
Riboflavin (mg day⁻¹)          0.9      1.3       1.3        0.9      1.0       1.1
Niacin (mg day⁻¹)              12       16        16         12       14        14
Vitamin B6 (mg day⁻¹)          1.0      1.3       1.3        1.0      1.2       1.3
Folate (μg day⁻¹)              300      400       400        300      400       400
Vitamin B12 (μg day⁻¹)         1.8      2.4       2.4        1.8      2.4       2.4
Pantothenic acid (mg day⁻¹)    4*       5*        5*         4*       5*        5*
Biotin (μg day⁻¹)              20*      25*       30*        20*      25*       30*
Choline (mg day⁻¹)             375*     550*      550*       375*     400*      425*
Food and Nutrition Board, Institute of Medicine, The National Academies. Copyright 2001 by the National Academy of Sciences. All rights reserved. This table (taken from the DRI reports, see http://www.nap.edu) presents recommended dietary allowances (RDAs) in bold type and adequate intakes (AIs) in ordinary type followed by an asterisk (*). RDAs and AIs may both be used as goals for individual intake. RDAs are set to meet the needs of almost all (97–98%) individuals in a group. For healthy breast-fed infants, the AI is the mean intake. The AI for other life-stage and gender groups is believed to cover needs of all individuals in the group, but lack of data or uncertainty in the data prevent being able to specify with confidence the percentage of individuals covered by this intake.
Table 2 Recommended dietary allowances and adequate intakes

                               Males                         Females
Life stage group               9–13 y   14–18 y   19–30 y    9–13 y   14–18 y   19–30 y
Calcium (mg day⁻¹)             1300*    1300*     1000*      1300*    1300*     1000*
Chromium (μg day⁻¹)            25*      35*       35*        21*      24*       25*
Copper (μg day⁻¹)              700      890       900        700      890       900
Fluoride (mg day⁻¹)            2*       3*        4*         2*       3*        3*
Iodine (μg day⁻¹)              120      150       150        120      150       150
Iron (mg day⁻¹)                8        11        8          8        15        18
Magnesium (mg day⁻¹)           240      410       400        240      360       310
Manganese (mg day⁻¹)           1.9*     2.2*      2.3*       1.6*     1.6*      1.8*
Molybdenum (μg day⁻¹)          34       43        45         34       43        45
Phosphorus (mg day⁻¹)          1250     1250      700        1250     1250      700
Selenium (μg day⁻¹)            40       55        55         40       55        55
Zinc (mg day⁻¹)                8        11        11         8        9         8
Food and Nutrition Board, Institute of Medicine, National Academies. Copyright 2001 by the National Academy of Sciences. All rights reserved. This table presents recommended dietary allowances (RDAs) in bold type and adequate intakes (AIs) in ordinary type followed by an asterisk (*). RDAs and AIs may both be used as goals for individual intake. RDAs are set to meet the needs of almost all (97–98%) individuals in a group. For healthy breast-fed infants, the AI is the mean intake. The AI for other life-stage and gender groups is believed to cover needs of all individuals in the group, but lack of data or uncertainty in the data prevent being able to specify with confidence the percentage of individuals covered by this intake. Sources: Dietary Reference Intakes for Calcium, Phosphorous, Magnesium, Vitamin D, and Fluoride (1997); Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline (1998); Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium, and Carotenoids (2000); and Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc (2001). These reports may be accessed via http://www.nap.edu
Table 3 Dietary reference intakes (DRIs): tolerable upper intake levels (UL)a, vitamins

Life stage group (males and females)   9–13 y    14–18 y   19–70 y
Vitamin A (μg day⁻¹)                   1700      2800      3000
Vitamin C (mg day⁻¹)                   1200      1800      2000
Vitamin D (μg day⁻¹)                   50        50        50
Vitamin E (mg day⁻¹)                   600       800       1000
Vitamin K                              ND        ND        ND
Thiamin                                ND        ND        ND
Riboflavin                             ND        ND        ND
Niacin (mg day⁻¹)                      20        30        35
Vitamin B6 (mg day⁻¹)                  60        80        100
Folate (μg day⁻¹)                      600       800       1000
Vitamin B12                            ND        ND        ND
Pantothenic acid                       ND        ND        ND
Biotin                                 ND        ND        ND
Choline (g day⁻¹)                      2.0       3.0       3.5
Carotenoids                            ND        ND        ND
a UL = The maximum level of daily nutrient intake that is likely to pose no risk of adverse effects. Unless otherwise specified, the UL represents total intake from food, water, and supplements. Owing to lack of suitable data, ULs could not be established for vitamin K, thiamin, riboflavin, vitamin B12, pantothenic acid, biotin, or carotenoids. In the absence of ULs, extra caution may be warranted in consuming levels above recommended intakes. Food and Nutrition Board, Institute of Medicine, National Academies. Copyright 2001 by the National Academy of Sciences. All rights reserved. Sources: Dietary Reference Intakes for Calcium, Phosphorous, Magnesium, Vitamin D, and Fluoride (1997); Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline (1998); Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium, and Carotenoids (2000); and Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc (2001). These reports may be accessed via http://www.nap.edu
Table 4 Dietary reference intakes (DRIs): tolerable upper intake levels (UL)a, Elements

Life stage group (males and females)   9–13 y    14–18 y   19–50 y
Arsenic                                ND        ND        ND
Boron (mg day⁻¹)                       11        17        20
Calcium (g day⁻¹)                      2.5       2.5       2.5
Chromium                               ND        ND        ND
Copper (μg day⁻¹)                      5000      8000      10 000
Fluoride (mg day⁻¹)                    10        10        10
Iodine (μg day⁻¹)                      600       900       1100
Iron (mg day⁻¹)                        40        45        45
Magnesium (mg day⁻¹)                   350       350       350
Manganese (mg day⁻¹)                   6         9         11
Molybdenum (μg day⁻¹)                  1100      1700      2000
Nickel (mg day⁻¹)                      0.6       1.0       1.0
Phosphorus (g day⁻¹)                   4         4         4
Selenium (μg day⁻¹)                    280       400       400
Silicon                                ND        ND        ND
Vanadium (mg day⁻¹)                    ND        ND        1.8
Zinc (mg day⁻¹)                        23        34        40
UL = The maximum level of daily nutrient intake that is likely to pose no risk of adverse effects. Unless otherwise specified, the UL represents total intake from food, water, and supplements. Owing to lack of suitable data, ULs could not be established for arsenic, chromium, and silicon. In the absence of ULs, extra caution may be warranted in consuming levels above recommended intakes. Food and Nutrition Board, Institute of Medicine, National Academies. Copyright 2001 by the National Academy of Sciences. All rights reserved. Sources: Dietary Reference Intakes for Calcium, Phosphorous, Magnesium, Vitamin D, and Fluoride (1997); Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline (1998); Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium, and Carotenoids (2000); and Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc (2001). These reports may be accessed via http://www.nap.edu
Table 5 Dietary reference intakes (DRIs): estimated average requirements

                               Males                         Females
Life stage group               9–13 y   14–18 y   19–30 y    9–13 y   14–18 y   19–30 y
Vitamin A (μg day⁻¹)ᵃ          445      630       625        420      485       500
Vitamin C (mg day⁻¹)           39       63        75         39       56        60
Vitamin E (mg day⁻¹)ᵇ          9        12        12         9        12        12
Thiamin (mg day⁻¹)             0.7      1.0       1.0        0.7      0.9       0.9
Riboflavin (mg day⁻¹)          0.8      1.1       1.1        0.8      0.9       0.9
Niacin (mg day⁻¹)ᶜ             9        12        12         9        11        11
Vitamin B6 (mg day⁻¹)          0.8      1.1       1.1        0.8      1.0       1.1
Folate (μg day⁻¹)ᵈ             250      330       320        250      330       320
Vitamin B12 (μg day⁻¹)         1.5      2.0       2.0        1.5      2.0       2.0
Copper (μg day⁻¹)              540      685       700        540      685       700
Iodine (μg day⁻¹)              73       95        95         73       95        95
Iron (mg day⁻¹)                5.9      7.7       6          5.7      7.9       8.1
Magnesium (mg day⁻¹)           200      340       330        200      300       255
Molybdenum (μg day⁻¹)          26       33        34         26       33        34
Phosphorus (mg day⁻¹)          1055     1055      580        1055     1055      580
Selenium (μg day⁻¹)            35       45        45         35       45        45
Zinc (mg day⁻¹)                7.0      8.5       9.4        7.0      7.3       6.8
a As retinol activity equivalents (RAEs). 1 RAE = 1 μg retinol, 12 μg β-carotene, 24 μg α-carotene, or 24 μg β-cryptoxanthin. The RAE for dietary provitamin A carotenoids is twofold greater than retinol equivalents (RE), whereas the RAE for preformed vitamin A is the same as RE.
b As α-tocopherol. α-Tocopherol includes RRR-α-tocopherol, the only form of α-tocopherol that occurs naturally in foods, and the 2R-stereoisomeric forms of α-tocopherol (RRR-, RSR-, RRS-, and RSS-α-tocopherol) that occur in fortified foods and supplements. It does not include the 2S-stereoisomeric forms of α-tocopherol (SRR-, SSR-, SRS-, and SSS-α-tocopherol), also found in fortified foods and supplements.
c As niacin equivalents (NE). 1 mg of niacin = 60 mg of tryptophan.
d As dietary folate equivalents (DFE). 1 DFE = 1 μg food folate = 0.6 μg of folic acid from fortified food or as a supplement consumed with food = 0.5 μg of a supplement taken on an empty stomach.
Food and Nutrition Board, Institute of Medicine, National Academies. Copyright 2001 by the National Academy of Sciences. All rights reserved. This table presents estimated average requirements (EARs), which serve two purposes: for assessing adequacy of population intakes, and as the basis for calculating recommended dietary allowances (RDAs) for individuals for those nutrients. EARs have not been established for vitamin D, vitamin K, pantothenic acid, biotin, choline, calcium, chromium, fluoride, manganese, or other nutrients not yet evaluated via the DRI process.
Obesity

Obesity has recently become an epidemic in the US, with 31% of American adults classified as obese (body mass index >30 kg m⁻²) and 68% classified as overweight (body mass index >25 kg m⁻²) in 2000. The prevalence of obesity in childhood tripled from 5% in 1980 to 15% in 2000 according to National Health and Nutrition Examination Surveys (NHANES). There is every indication that the developed countries of Western Europe are not far behind. Indeed, obesity is becoming a worldwide problem, rapidly increasing in many developing countries including China and India, and overtaking undernutrition as the major nutritional problem. Although obesity affects children in all socioeconomic classes, it is more prevalent in those of lower socioeconomic status in the US and other developed countries, whereas it tends to affect the well-off in developing countries. This suggests that food insecurity and poor food choices are more of a problem than lack of availability because of poverty. Although only 30% of obesity begins in adolescence, some estimate that 80% of obese adolescents will become obese adults, and obese adolescents are at much greater risk for diabetes and major medical complications later in life. Since long-term weight loss is usually very difficult to achieve and is often unsuccessful despite widespread attempts at dieting, efforts to prevent obesity in early life are important.

Ultimately, weight gain results from dietary energy intake exceeding basal metabolic needs and activity. Only rarely is this due to some identifiable disorder of basal metabolic requirements such as hypothyroidism. However, it is difficult to measure either dietary intake or activity with enough accuracy to detect the relatively small mismatch necessary to add weight. For example, a small increase in dietary intake of 200 kcal day⁻¹, without a corresponding increase in activity, could theoretically result in a weight gain of 8 kg over the course of a year.
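The arithmetic behind that example, assuming a round figure of roughly 9000 kcal stored per kilogram of adipose tissue (an assumption used only for illustration; commonly quoted estimates lie between about 7700 and 9500 kcal kg⁻¹ and are not given in this article), is:

\[
\frac{200\ \mathrm{kcal\,day^{-1}} \times 365\ \mathrm{days}}{\approx 9000\ \mathrm{kcal\,kg^{-1}}} \;\approx\; 8\ \mathrm{kg\ per\ year}
\]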
Although the heritability of obesity has been estimated to be on the order of 60–80% on the basis of twin studies and family histories, the genetics of obesity are complex and only beginning to be understood. Adult weight is much more reflective of biological parents than of adoptive parents in twin studies. Known genetic syndromes producing obesity in humans are rare (on the order of 1–2% of obese patients) but should be considered, such as trisomy 21 (Down's syndrome), Prader-Willi, Bardet-Biedl, and Beckwith-Wiedemann syndromes, hypothyroidism, and polycystic ovary syndrome. The adipose fat cell is not only a passive storage site but an endocrinologically active secretor of many substances, such as leptin, adiponectin, and cytokines, which participate in an inflammatory response and may mediate a host of adverse consequences, including insulin resistance and diabetes. Obesity is related to an increased risk of developing type 2 insulin-resistant diabetes mellitus, hyperlipidemia, heart disease, obstructive sleep apnea, asthma and other respiratory problems, back pain and orthopedic problems, fatty liver (nonalcoholic steatohepatitis or NASH), gallstones, and depression. The increasing incidence of type 2 diabetes in obese adolescents is already being noticed, with estimates of 200 000 diabetics under age 20 years in the US predicted to rise to a lifetime risk of developing diabetes of 33–39% for those born in the year 2000.

The rapid increase in obesity has made standards based on population percentiles less meaningful, as medical obesity involves more than just the top 5% of weight-for-age. Instead of just relying on cross-sectional height- and weight-for-age graphs (see Figures 1 and 2), a need has developed for a more valid indicator of obesity. The body mass index (BMI) charts recently released by the Centers for Disease Control allow for tracking of BMI standards for adolescents, who should have a BMI lower than the 20–25 kg m⁻² expected for adults. Although long-term validation data are not as available as in adults, obesity in adolescents is defined as a BMI above the 95th percentile for age, with risk for obesity defined as a BMI above the 85th percentile for age. Body mass index is defined as weight (in kilograms) divided by height (in meters) squared, and is considered the best anthropometric surrogate for body composition (see Figures 3 and 4). Waist size may be an easier measurement to follow in adults, and particularly identifies central adiposity. Measurements by tape and caliper of mid-arm circumference and triceps skinfolds have a fairly good correlation (0.7–0.8) with more expensive research methods of underwater weighing and dual-energy X-ray absorptiometry (DEXA), and can be made even more accurate by including biceps, subscapular, and suprailiac skinfold measurements. Bioelectric impedance measures the difference in resistance between adipose and lean body tissue, but can be affected by fluid shifts, especially in ill patients. Physical examination should include blood pressure measurement because of the high percentage of comorbidity of the metabolic syndrome (obesity, hypertension, dyslipidemia, and/or diabetes).
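A minimal sketch of the BMI calculation and the percentile-based classification described above; the function names and the example cut-offs are illustrative only, since in practice the 85th and 95th percentile cut-offs must be read from the age- and sex-specific CDC charts (Figures 3 and 4).

def body_mass_index(weight_kg: float, height_m: float) -> float:
    """BMI = weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify_adolescent_bmi(bmi: float, p85: float, p95: float) -> str:
    """Classify against chart-derived percentile cut-offs supplied by the caller."""
    if bmi >= p95:
        return "obese (at or above the 95th percentile for age)"
    if bmi >= p85:
        return "at risk (at or above the 85th percentile for age)"
    return "below the 85th percentile for age"

# Hypothetical example: a 60 kg adolescent who is 1.60 m tall, with
# placeholder cut-offs of 22 and 25 kg/m2 for this age and sex.
bmi = body_mass_index(60.0, 1.60)
print(round(bmi, 1), classify_adolescent_bmi(bmi, p85=22.0, p95=25.0))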
Figure 1 Weight-for-age percentiles: boys, 2–20 years. (Developed by the National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion 2000: http://www.cdc.gov/growthcharts)
The metabolic syndrome is defined as three or more of the following: abdominal obesity (waist circumference greater than 40 inches (100 cm) in men or 35 inches (90 cm) in women), fasting hypertriglyceridemia, low HDL cholesterol, raised blood pressure, and/or impaired fasting glucose. It is very common in older adults (over 40% of those over 60), but is increasingly seen at younger ages (7% of 20–29 year olds). Acanthosis nigricans is a skin hyperpigmentation, chiefly around the neck, seen in about 20% of obese patients, especially African-Americans, which reflects insulin resistance, and this finding should provoke screening tests for type 2 diabetes. Laboratory screening tests might include thyroid-stimulating hormone for hypothyroidism, and fasting glucose, insulin, and glycosylated hemoglobin (HbA1C) for type 2 diabetes.

Figure 2 Weight-for-age percentiles: girls, 2–20 years. (Developed by the National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion 2000: http://www.cdc.gov/growthcharts)

Diet histories and diet recalls are particularly important in nutritional assessments, but quantitative
calorie counts are particularly unreliable in obese patients because of widespread conscious and subconscious under-reporting of 20% or more. Regular meetings with a dietician should involve counseling on healthy eating choices.

Figure 3 Body mass index-for-age percentiles: boys, 2–20 years. (Developed by the National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion 2000: http://www.cdc.gov/growthcharts)

The recommendations regarding daily activity should include hours of television watching per day or per week, because this is well correlated with obesity, not only because of decreased activity but also because of the influence of commercial snack food advertising. Treatment should ideally involve a multidisciplinary team with a dietician, social worker, physical therapist, and physician, concentrating on lifestyle modification, moderate caloric restriction, and regular exercise, with frequent follow-up and compliance
being a good indicator of likelihood of success. Recent success with low-carbohydrate diets rather than the traditional low-fat diet advice suggests the importance of the role of satiety in maintaining caloric restriction. Most commercial diet plans promise short-term weight loss, but very few long-term studies have shown these to keep weight off for more than 6–12 months.

Figure 4 Body mass index-for-age percentiles: girls, 2–20 years. (Developed by the National Center for Health Statistics in collaboration with the National Center for Chronic Disease Prevention and Health Promotion 2000: http://www.cdc.gov/growthcharts)

As adolescents naturally
gain weight with height as they progress through puberty, it is probably more important that they learn healthy eating and activity habits over the long term rather than losing weight quickly only to gain it back within a few months. Medications such as phentermine-fenfluramine and stimulants have gained recent notoriety with unforeseen side effects. Possible treatment with
leptin and other hormones or antagonists has much future promise, but so far has been effective only in rare patients with specific defects. Surgical gastroplasty has proven the most successful long-term therapy for massively obese adults, possibly because of suppression of ghrelin, increased satiety, and reduced hunger, but morbidity and mortality are variable and the option of major surgery should be
carefully considered only as a last resort before offering it to any adolescents.
Eating Disorders

Eating disorders affect 3–5 million people in the US; 86% are diagnosed before the age of 20 and up to 11% of high-school students are affected. More than 90% are female, 95% Caucasian, and 75% have an onset in adolescence. Eating disorders are probably the most frequent causes of undernutrition in adolescents in developed countries, but only a relatively small percentage meet the full Diagnostic and Statistical Manual (DSM) IV criteria for anorexia nervosa (see Table 6), while most cases fall into the more general category of eating disorder NOS (not otherwise specified). Bulimia, binge eating, and/or purging are probably much more common than full-blown anorexia nervosa, with some estimates of up to 20–30% of college women in the US, and often occur surreptitiously without telltale weight loss. Lifetime prevalence estimates range from 0.5% to 3% for anorexia nervosa and 1–19% for bulimia. So far eating disorders are considered rare in developing countries, but prevalence often increases dramatically when Western influences such as television advertising are introduced, as was the experience in the South Pacific Islands.

Table 6 DSM-IV criteria for anorexia nervosa
A. Refusal to maintain body weight at or above a minimally normal weight for age and height (e.g., weight loss leading to maintenance of body weight less than 85% of that expected, or failure to make expected weight gain during a period of growth, leading to body weight less than 85% of that expected)
B. Intense fear of gaining weight or becoming fat, even though underweight
C. Disturbance in the way in which one's body weight or shape is experienced; undue influence of body weight or shape on self-evaluation, or denial of the seriousness of the current low body weight
D. In postmenarchal females, amenorrhea, that is, the absence of at least three consecutive menstrual cycles
Specify type:
Restricting type: during the episode of anorexia nervosa, the person does not regularly engage in binge eating or purging behavior (i.e., self-induced vomiting or the misuse of laxatives or diuretics)
Binge-eating-purging type: during the episode of anorexia nervosa, the person has regularly engaged in binge eating or purging behavior (i.e., self-induced vomiting or the misuse of laxatives or diuretics)

The pathophysiology of anorexia nervosa is not well understood, and there is probably a combination of environmental and psychological factors with a biochemical imbalance of neurotransmitters,
especially serotonin and its metabolite 5-hydroxyindoleacetic acid, which tends to be reduced. There is a substantial biologic predisposition, with the disorder running in families and heritability estimates from twin studies of 35–90%. Eating disorders should be suspected in any adolescent below normal weight ranges or with recent weight loss, but other medical conditions such as intestinal malabsorption, inflammatory bowel disease, and malignancy should also be considered. It is important to realize that most height and weight charts represent cross-sectional population norms, which may not be as sensitive as longitudinal tracking or height velocity of individuals, since puberty occurs at different ages. For example, a 12-year-old who does not gain weight for 6 months may just be entering puberty, or might be severely affected by growth failure due to a malignancy or inflammatory bowel disease. Physical signs and symptoms of inadequate caloric intake may include amenorrhea, cold hands and feet, dry skin and hair, constipation, headaches, fainting, dizziness, lethargy, hypothermia, bradycardia, orthostatic hypotension, and edema. There is no specific laboratory diagnosis, but there are often endocrine and electrolyte abnormalities, especially hypokalemia, hypophosphatemia, and hypochloremic metabolic alkalosis from vomiting, which often require careful supplementation.

Treatment may be very difficult and prolonged, often involving behavior therapy and occasionally long inpatient stays in a locked unit with threats of forced nasogastric feeding to maintain weight. There is a high risk of refeeding syndrome with edema, possible arrhythmias, and sudden death from electrolyte abnormalities, so protocols have been developed to provide a slow increase of calories, supplemented by adequate amounts of phosphorus and potassium. The anorexic patient's persistent distorted view of body image is very resistant to casual counseling.

The consequences of anorexia nervosa can be quite severe and include menstrual dysfunction, cardiovascular disease, arrhythmias, anemia, liver disease, swollen joints, endocrinopathies, cerebral atrophy, and even sudden death. There is significant bone loss or osteopenia associated with amenorrhea and lack of estrogen stimulation, which is not completely reversed even with hormone replacement. Anorexia nervosa is well associated with other psychiatric diagnoses such as depression, anxiety, personality disorders, obsessive-compulsive disorder, and substance abuse, and psychiatric problems often continue to remain an issue even when normal weight is maintained. Prognosis is relatively poor compared to other adolescent medical illnesses,
with 33% persistence at 5 years and 17% at 11 years. Six per cent die within 5 years and 8.3% by 11 years.
Other Nutritional Diseases

In many countries of the world, HIV infection and acquired immunodeficiency syndrome (AIDS) have become one of the leading causes of undernutrition and cachexia, especially in younger patients. Indeed, many of the syndromes and consequences of protein-energy malnutrition are also seen in AIDS cachexia, such as frequent respiratory and other infections, diarrhea, malabsorption, and rashes. Weight loss is an AIDS-defining symptom, and weight loss of a third of usual weight usually signifies terminal illness. Fortunately, new generations of protease inhibitors and other medications have dramatically slowed the progression of HIV infection in many patients, as well as reducing the vertical transmission rate. Indeed, some studies have suggested that multivitamin supplementation of pregnant mothers may itself reduce vertical transmission rates in developing countries where antivirals are difficult to obtain. Proper attention to nutrition, with early enteral energy and micronutrient supplementation, is an important part of care, which is best instituted long before weight loss becomes manifest.
Specific Nutrients

Calcium
Calcium is the major component of bone, providing structural skeletal support to the human body. The approximately 2–3 kg of bone calcium in each person also provides a storage reservoir for the small percentage of ionized calcium that allows muscle to contract, nerves to communicate, enzymes to function, and cells to react. The body has developed several hormonal mechanisms, including vitamin D, parathyroid hormone, and calcitonin, to protect the small amount of ionized calcium in the blood from changing drastically. Tight control of blood calcium levels is needed because unduly low blood calcium might result in uncontrolled tetanic muscle contractions and seizures, while high blood calcium levels may cause kidney stones and muscle calcifications. To increase blood calcium levels, vitamin D and its metabolites increase calcium absorption from the intestinal tract, parathyroid hormone increases calcium reabsorption from the kidney, and both increase resorption of calcium from the bone. During the early years of life, calcium is deposited in the bone as it grows, but after about the 3rd
decade, there is a steady decline in bone calcium. This is especially marked after menopause in women, when estrogen declines, and often leads to bone loss (osteopenia) to below a threshold that predisposes women in particular to fractures (osteoporosis). Osteoporosis is not just a disease of the elderly, and may occur in much younger patients, especially athletic young women, those with anorexia nervosa, those on steroids and other medications, and in anyone on prolonged bed rest, including astronauts experiencing long periods of weightlessness. Dietary calcium is often seen as the most limiting factor in the development of peak bone mass, and strategies to increase dietary calcium have been promoted. Other factors in the development of bone mineral include height, weight, racial background and inheritance, gender, activity, vitamin D deficiency, parathyroid hormone deficiency, vitamin A, vitamin K, growth hormone, calcium, phosphorus, and magnesium. Phosphorus, the other major component of bone mineral, is relatively common in the diet. In the 1997 DRIs, AIs of calcium were raised from 800 to 1300 mg in 9–18 year olds. Only a small percentage of the population takes in the RDA for calcium. The estimated average calcium intake in American women is only about 500–600 mg a day, and is much lower in the developing world (as low as 200 mg a day). From calcium tracer studies performed since the 1950s, intestinal calcium absorption ranges from 10% to 40% of ingested calcium, with a higher percentage absorption with lower calcium intakes. A large percentage (usually 70– 80%) of dietary calcium is from milk and dairy products, which provides about 250 mg calcium per 8 oz (240 ml) glass of milk, and most studies show better absorption from dairy products than from vegetable sources. However, many people, especially non-Caucasians, develop relative lactose intolerance after childhood, and are reluctant to increase their dairy food intake. Thus, attention has focused on whether supplementation or fortification with calcium, especially during adolescence, will ensure achievement of peak bone mass. Calcium supplementation in adolescent females has shown short-term increases in bone mineral density, but this may be because it increases mineralization in a limited amount of trabecular bone, and it remains to be seen whether this leads to long-term improvement or protection against future fractures. Also, most studies still assume that increased bone mineral density is synonymous with reduced fracture risk, although fractures may depend on many other factors such as optimal bone architecture and lack of falls. Although the
majority of scientific opinion probably favors increased dietary calcium intake in adolescence, the factors that control bone mineralization are not yet completely understood, and long-term protection against eventual bone loss and fractures remains to be demonstrated by randomized clinical trials. Iron
Iron deficiency is one of the most common vitamin or mineral deficiencies in the world, affecting 20% or more of women and children, especially in developing countries. Adolescent women who have started menses or who are pregnant are particularly at risk for developing iron deficiency, which may develop long before iron stores are exhausted and anemia ensues. Anemia (low hemoglobin or red cell volume) may lead to reduced school and work performance and may affect cognitive function, as well as leading to cardiovascular and growth problems. Diagnosis is made most simply by hemoglobin level or packed red cell volume (hematocrit) and red cell morphology, or alternatively by transferrin saturation, serum ferritin, or serum iron level. Microscopic examination of a red cell smear typically shows red cells that are small (microcytic) and pale (hypochromic).
Folate
Folate is a vitamin that is responsible for one-carbon methyl transfer in a variety of cellular reactions, including formation of purines and pyrimidines, which make up DNA and RNA. Folate deficiency may result in megaloblastic anemia, as forming red cells fail to divide. As the best source of folate is green leafy vegetables, folate nutrition may be marginal in many adolescents. Recent epidemiologic evidence suggests that folate supplementation, at levels that are higher than usual dietary intake (200–400 µg per day), reduces the incidence of neural tube defects (anencephaly and spina bifida) in newborns. Supplementation needs to be started early in pregnancy, within the first 8 weeks and before most pregnancies are apparent, so should involve most women of child-bearing age. The recent decision to fortify grains and cereals with folic acid in the US should also reduce serum homocysteine levels, which may lower the risk of cardiovascular disease.
Zinc and Other Minerals
Zinc is a component of many metalloenzymes including those needed for growth, pancreatic enzymes, and intestinal secretions. Although it is unusual to find a documented case of clinical zinc deficiency apart from occasional cases of acrodermatitis enteropathica, there has been recent concern over the possibility of relative zinc deficiency, especially among chronically ill patients with excessive intestinal secretions. Zinc deficiency could lead to impaired taste (hypogeusia) and appetite and immunodeficiency as well as affecting growth. A large group of adolescents in Shiraz, Iran, was described as being of very short stature because of dietary zinc deficiency. Similarly, a group of people in Keshan, China, was found to develop cardiomyopathy because of a selenium deficiency in the soil. Iodine deficiency is surprisingly common worldwide, perhaps involving up to half of the world population or 3 billion people, especially in areas of Southeast Asia where it is not supplemented in salt. It may cause hypothyroidism, goiter (neck masses), cretinism, or impaired intelligence if severe.
See also: Adolescents: Nutritional Requirements. Anemia: Iron-Deficiency Anemia. Calcium. Eating Disorders: Anorexia Nervosa; Bulimia Nervosa; Binge Eating. Folic Acid. Iron. Obesity: Definition, Etiology and Assessment. Osteoporosis. Zinc: Physiology.
Further Reading
(2002) Adolescent Nutrition: a springboard for health. Journal of the American Dietetic Association Supplement, March.
Cheung LWY and Richmond JB (eds.) (1995) Child Health, Nutrition, and Physical Activity. Windsor, Ontario: Human Kinetics.
Ebbeling CB, Pawlak DB, and Ludwig DS (2002) Childhood obesity: public health crisis, common sense cure. Lancet 360: 473–482.
Grand R, Sutphen J, and Dietz W (eds.) (1987) Pediatric Nutrition. London: Butterworth.
Heald F (1969) Adolescent Nutrition and Growth. New York: Appleton Century Croft.
Hu FB, Manson JE, Stampfer MJ et al. (2001) Diet, lifestyle, and the risk of type 2 diabetes mellitus in women. New England Journal of Medicine 345(11): 790–797.
Kleinman R (ed.) (2004) Pediatric Nutrition Handbook, 5th edn. Elk Grove Village, Illinois: American Academy of Pediatrics.
Koletzko B, Girardet JP, Klish W, and Tabacco O (2002) Obesity in children and adolescents worldwide. Journal of Pediatric Gastroenterology and Nutrition 202: S205–S212.
McKigney J and Munro H (eds.) (1973) Nutrient Requirements in Adolescents. Cambridge: MIT Press.
Rickert VI (ed.) (1996) Adolescent Nutrition: Assessment and Management. Boston, MA: Jones and Bartlett.
Styne DM (2001) Childhood and adolescent obesity. Pediatric Clinics of North America 48: 823–854.
Walker WA, Watkins J, and Duggan C (eds.) (2003) Nutrition in Pediatrics, 3rd edn. London: BC Decker.
AGING
P Hyland and Y Barnett, Nottingham Trent University, Nottingham, UK
© 2005 Elsevier Ltd. All rights reserved.
Introduction The aging processes, and interventions to ameliorate them, have fascinated humans since the dawn of civilization. Research into aging is now a vital area of human endeavor, as our species reaches the limits of its longevity and faces the prospect of an aging population. This article aims to highlight the processes involved with aging and how they affect the entire hierarchical structure of living organisms, from molecules to cells, tissues, organs, and systems. Accordingly, many theories have evolved to explain the aging processes at each of these levels. A brief overview of these theories will highlight the framework for investigations into the aging processes with the ultimate aim of reducing their deleterious effects, such as age-related disease, perhaps with nutritional and molecular biological intervention strategies. The term ‘aging’ can have a wide variety of different meanings in different circumstances. For example, the normal processes from birth, through growth and maturation, an extended period of adulthood, and on to senescence can be thought of as aging. The term is used here to describe a progressive sequence of detrimental age-related changes that are observed to occur in every individual of a given species, although they may appear at different rates. These changes lead to a breakdown in the normal homeostatic mechanisms, with the result that the functional capacity of the body and its ability to respond to a wide variety of extrinsic and intrinsic agents is often decreased. This causes the degradation of structural elements within the cells, tissues, and organs of the body, leading eventually to the onset of age-related disorders and ultimately death.
Social and Demographic Considerations
An individual’s life expectancy is contributed to by the interaction of intrinsic (genetic and epigenetic) factors with extrinsic (environmental and life style) factors (Figure 1).
Figure 1 Interactive factors that contribute to the aging process: genetics (‘senescence’ genes, genes coding for components of biomolecule defense systems, etc.), lifestyle factors (diet, housing, exercise, etc.), and environment (exposure to chemicals, disease-causing organisms, etc.) together determine the rate of aging, age-related diseases, and death. (Reproduced with permission from Barnett YA (1994) Nutrition and the ageing process. British Journal of Biomedical Sciences 51: 278–287.)
In the world’s more developed countries (MDCs) the life expectancy at birth in the 1900s was around 47 years. By the end of the twentieth century this had risen to a mean of 78 and 76 years in western Europe and North America, respectively, with many individuals living much longer. This dramatic increase in average life expectancy has been largely due to improvements in environmental conditions such as nutrition, housing, sanitation, and medical and social services, and has resulted in a large increase in the number of older people around the world. This change in the age structure of society is compounded by the decreasing fertility levels in the world’s populations leading to large gains in worldwide median population ages. Our aging populations have a growing number and proportion of older people and, importantly, a growing number and proportion of very elderly people. Based on the current rates and trends in population growth it has been predicted that by the year 2025 the elderly population (aged 65 and above) in the world’s MDCs will increase by more than 50%, and will more than double worldwide. The elderly population itself is aging with the very elderly (aged 80 and above) being the fastest growing section of the elderly population. This
changing demographic picture will result in a greatly increased worldwide prevalence of long-term illness, disability, and the degenerative diseases associated with aging. These alterations in the proportions of the population of working age and those beyond working age will have a significant impact on the funding and costs of healthcare for all nations, making research into aging of critical international importance.
Theories of Aging
The human body has a hierarchy of structure and function, ranging from cellular biomolecules, through to organelles and cells, and on to tissues, organs, and the body’s various systems. The biological manifestations that occur with aging affect the entire hierarchical structure of living systems. Age-related effects are seen in the accumulation of damaged cellular biomolecules (e.g., advanced glycosylation end products, lipid peroxidation products, genetic damage, and mutation), damaged organelles (mitochondria), and loss of cellular function, which contributes to dysfunction of the body’s tissues, organs, and systems. These hierarchical changes have paved the way for over 300 theories in an attempt to explain how and why aging occurs. These theories have previously been broadly categorized into: (1) programed or genetic theories; and (2) damage accumulation (stochastic) theories. However, with ongoing research these categories have not proven to be entirely comprehensive or mutually exclusive and it is more likely that there is a shifting range throughout the life span that reflects a decreasing influence of genetic factors and an increasing influence of stochastic events.
Programed and Genetic Theories
Programed and genetic theories propose that the process of aging follows a biological timetable, perhaps a continuation of the one that regulates childhood growth and development. There are a number of lines of evidence supporting these theories. Longevity genes It is clear that aging is controlled to some extent by genetic mechanisms. The distinct differences in life span among species are a direct indication of genetic control, at least at the species level. A number of genes have been identified in yeast, nematode worms (Caenorhabditis elegans), and fruit flies (Drosophila melanogaster) that significantly increase the organism’s potential maximum life span. The products of these genes act
in a diverse number of ways and are involved in stress response and resistance, development, signal transduction, transcriptional regulation, and metabolic activity. However, the genetics of longevity have not been as revealing in mammalian studies. In mouse systems genes involved with immune response have been implicated in longevity, as has the ‘longevity gene’ p66shc, which is involved in signal transduction pathways that regulate the cellular response to oxidative stress. In humans, a number of mitochondrial DNA polymorphisms are associated with longevity. Linkage analysis in human systems has associated certain genes on chromosome 4 with exceptional longevity. Further support for human longevity genes may be provided by the observation that siblings and parents of centenarians live longer. The major histocompatibility complex (MHC), the master genetic control of the immune system, may also be one of the gene systems controlling aging, since a number of genetic defects that cause immunodeficiency shorten the life span of humans. Certain MHC phenotypes have also been associated with malignancy, autoimmune disease, Alzheimer’s disease, and xeroderma pigmentosum in humans. Accelerated aging syndromes No distinct phenocopy exists for normal aging, but there are several genetic diseases/syndromes that display some features of accelerated aging, including Hutchinson–Gilford syndrome (classic early-onset progeria), Werner’s syndrome, and Down’s syndrome. Patients with these syndromes suffer from many signs of premature aging including hair loss, early greying, and skin atrophy, and also suffer from premature age-related diseases such as atherosclerosis, osteoporosis, and glucose intolerance. The defined genetics involved in these syndromes provide strong evidence for the genetic basis of aging. Neuroendocrine theories These theories propose that functional decrements in neurons and their associated hormones are pivotal to the aging process. An important version of this theory suggests that the hypothalamic-pituitary-adrenal (HPA) axis is the key regulator of mammalian aging. The neuroendocrine system regulates early development, growth, puberty, the reproductive system, metabolism, and many normal physiological functions. Functional changes to this system could exert effects of aging throughout an organism. However, the cells of the neuroendocrine system are subject to the normal cellular aging processes found in all cells, and the changes occurring in the
neuroendocrine system may be secondary expressions of the aging phenotype. Immunologic theory and immunosenescence Deterioration of the immune system with aging (‘immunosenescence’) may contribute to morbidity and mortality due to decreased resistance to infection and, possibly, certain cancers in the aged. T-cell function decreases and autoimmune phenomena increase in elderly individuals. Although the immune system obviously plays a central role in health status and survival, again the cells of the immune system are subject to the normal cellular aging processes found in all cells. Changes to the immune system may be secondary expressions of the aging phenotype. Cellular senescence At the cellular level, most, if not all, somatic cell types have a limited replicative capacity in vitro before they senesce and die. The number of cell population doublings in vitro is inversely correlated with donor age. This is called the ‘Hayflick phenomenon’ after the scientist credited with its discovery. This limit in the capacity of a cell type or tissue to divide and replenish itself would have major repercussions in vivo. There is evidence that replicative senescence is related to in vivo aging, but definitive evidence that senescent cells accumulate in vivo is lacking to date. Many alterations to normal cellular physiology are exhibited with the senescent phenotype, indicating that senescent cells exist in a growth state that is quite distinct from that of young cells and are subject to a complex alteration to their cellular physiology. A number of possible explanations for limiting the number of cell population doublings have been proposed, including a tumor suppressive mechanism. One proposal is that the shortening of telomeres, the sequences of noncoding DNA located at the end of chromosomes, is a measure of the number of cell divisions that a cell has experienced. These telomeres may act as specialized regions of the genome, a sacrificial ‘sentinel’ zone, for the detection of DNA damage being noncoding, more prone to damage, and less prone to repair than the genome as a whole. Damage to telomeres transposes to telomere shortening, and loss of telomere higher order structure may trigger senescence and/or apoptosis. Studies involving fusion of normal cells (subject to senescence) with immortal cell lines in vitro have clearly demonstrated that the senescent phenotype is dominant, and that unlimited division potential results from changes in normal growth control mechanisms. These fusion studies have also revealed the existence of several dominant genes associated
with the process of cellular senescence. These genes reside on a number of chromosomes, including 1,4, and X. Disposable soma theory The disposable soma theory suggests that aging is due to stochastic background damage to the organism, i.e., damage that is not repaired efficiently because the energy resources of the somatic cells are limited. So, instead of wasting large amounts of energy in maintaining the whole body in good condition, it is far more economical to simply repair the heritable stem cell genetic material, in order to ensure the survival of the species. In this way the future of the species is secured at the expense of individual lives. When the somatic energy supply is exhausted, the body ages and dies, but the genetic material survives (in the next generation). Damage Accumulation (Stochastic) Theories
The ‘damage’ or ‘error’ theories emphasize intrinsic and environmental insults to our cellular components that accumulate throughout life and gradually cause alterations in biological function and the physiological decline associated with aging. Somatic mutation and DNA repair Damage to DNA occurs throughout the lifetime of a cell. If this damage is not repaired or removed then mutations may result. Mutations may result in the synthesis of aberrant proteins with altered or absent biological function; alterations to the transcriptional and translational machinery of a cell; and deregulation of gene control. The accumulation of mutations on their own, or in combination with other age-related changes, may lead to alterations in cellular function and ultimately the onset of age-related disease. Error catastrophe This theory suggests that damage to mechanisms that synthesize proteins results in faulty proteins, which accumulate to a level that causes catastrophic damage to cells, tissues, and organs. Altered protein structure has been clearly demonstrated to occur with age; however, most of these changes are posttranslational in nature, and hence do not support this theory of aging. Such changes to protein structure may result in progressive loss of ‘self-recognition’ by the cells of the immune system and thus increase the likelihood that the immune system would identify self-cells as foreign and launch an immune attack. Indeed, the incidence of autoimmune episodes is known to increase with age.
Cross-linking The cross-linking theory states that an accumulation of cross-linked biomolecules caused by a covalent or hydrogen bond damages cellular and tissue function through molecular aggregation and decreased mobility. The modified malfunctional biomolecules accumulate and become increasingly resistant to degradation processes and may represent a physical impairment to the functioning of organs. There is evidence in vitro for such cross-linking over time in collagen and in other proteins, and in DNA. Many agents exist within the body that have the potential to act as cross-linking agents, e.g., aldehydes, antibodies, free radicals, quinones, citric acid, and polyvalent metals, to name but a few. Free radicals The most popular, widely tested and influential of the damage accumulation theories of aging is the ‘free radical’ theory, first proposed by Harman in 1956. Free radicals from intrinsic and extrinsic sources (Table 1) can lead to activation of cytoplasmic and/or nuclear signal transduction pathways, modulation of gene and protein expression, and also alterations to the structure and ultimately the function of biomolecules. Free radicals may thus induce alterations to normal cell, tissue, and organ functions, which may result in a breakdown of homeostatic mechanisms and lead to the onset of age-related disorders and ultimately death. It can
be predicted from this theory that the life span of an organism may be increased by slowing down the rate of initiation of random free radical reactions or by decreasing their chain length. Studies have demonstrated that it is possible to increase the life span of cells in vitro by culturing them with various antioxidants or free radical scavengers. Antioxidant supplementation with a spin-trapping agent has been demonstrated to increase the lifespan of the senescence accelerated mouse, although as yet there is little evidence for increasing the life span of a normal mammalian species by such strategies. Mitochondrial DNA damage This hypothesis combines elements of several theories, covering both the stochastic and genetic classes of aging theories. It is proposed that free radical reactive oxygen species generated in the mitochondria contribute significantly to the somatic accumulation of mitochondrial DNA mutations. This leads to a downward spiral wherein mitochondrial DNA damage results in defective mitochondrial respiration that further enhances oxygen free radical production, mitochondrial DNA damage, and mutation. This leads to the loss of vital bioenergetic capacity eventually resulting in aging and cell death. The absence of evidence that exclusively supports any one theory leaves no doubt that aging is due to many processes, interactive and interdependent, that determine life span and death.
Table 1 Extrinsic and intrinsic sources of free radicals
Extrinsic sources: radiation (ionizing, ultraviolet); drug oxidation (paracetamol, carbon tetrachloride, cocaine); oxidizing gases (oxygen, ozone, nitrogen dioxide); xenobiotic elements (arsenic (As), lead (Pb), mercury (Hg), cadmium (Cd)); redox cycling substances (paraquat, diquat, alloxan, doxorubicin); heat shock; cigarette smoke and combustion products.
Intrinsic sources: plasma membrane (lipoxygenase, cyclooxygenase, NADPH oxidase); mitochondria (electron transport, ubiquinone, NADH dehydrogenase); microsomes (electron transport, cytochrome P450, cytochrome b5); peroxisomes (oxidases, flavoproteins); phagocytic cells (neutrophils, macrophages, eosinophils, endothelial cells); auto-oxidation reactions (metal-catalyzed reactions); other (hemoglobin, flavins, xanthine oxidase, monoamine oxidase, galactose oxidase, indolamine dioxygenase, tryptophan dioxygenase); ischemia–reperfusion.
Age-Related Diseases Regardless of the molecular mechanisms that underlie the aging process, a number of well-characterized changes to the structure and therefore the function of the major cellular biomolecules (lipids, proteins, carbohydrates, and nucleic acids) are known to occur with age (Table 2). The age-related alterations to the structure and therefore the function of cellular biomolecules have physiological consequences and may directly cause or lead to an increased susceptibility to the development of a number of diseases (Figure 2). Cellular biomolecules are constantly exposed to a variety of extrinsic and intrinsic agents that have the potential to cause damage. A number of defense systems exist, e.g., antioxidant enzymes and DNA repair systems, which aim to reduce, remove, or repair damaged biomolecules. These defense systems are not perfect, however, and biomolecular damage may still occur. Such damage can result in the degradation of structural elements within the cells, tissues, and organs of the body, leading to a decline in biological function and eventually to disease and death.
Table 2 Major age-related alterations in biomolecule structure and the resultant physiological consequences of such structural changes
Lipids (lipid peroxidation): Oxidized membranes become rigid, lose selective permeability and integrity; cell death may occur. Peroxidation products can act as cross-linking agents and may play a role in protein aggregation, the generation of DNA damage and mutations, and the age-related pigment lipofuscin.
Proteins (racemization, deamination, oxidation, and carbamylation): Alterations to long-lived proteins may contribute to aging and/or pathologies; for example, modified crystallins may aggregate in the lens of the eye, thus leading to the formation of cataracts. Cross-linking and formation of advanced glycosylation end-products (AGEs) can severely affect protein structure and function, with effects on the maintenance of cellular homeostasis.
Carbohydrates (fragmentation, depolymerization; glucose auto-oxidation): Alters the physical properties of connective tissue; such alteration may be involved in the etiology and pathogenesis of osteoarthritis and other age-related joint disorders. Glycosylation of proteins in vivo with subsequent alteration of biological function; for example, glycosylation of insulin in patients with diabetes may result in altered biological function of insulin and so contribute to the pathogenesis of the disease.
Nucleic acids (strand breaks, base adducts): Damage could be expected to interfere with the processes of transcription, translation, and DNA replication. Such interference may reduce a cell’s capacity to synthesize vital polypeptides/proteins, and in such circumstances cell death may occur. The accumulation of a number of hits in critical cellular genes associated with the control of cell growth and division has been shown to result in the process of carcinogenesis.
Nucleic acids (loss of 5-methylcytosine from DNA): Dedifferentiation of cells (5-methylcytosine plays an important role in switching off genes as part of gene regulation); if viable, such dedifferentiated cells may have altered physiology and may contribute to altered tissue/organ function.
The physiological alterations with age proceed at different rates in different individuals. Some of the common changes seen in humans are: the function of the immune system decreases from about 30 years of age, reducing defenses against infection or tumor establishment and increasing the likelihood of autoimmune disorders; metabolism starts to slow down at around 25 years of age; kidney and liver function decline; blood vessels lose their elasticity; bone mass peaks at age 30 years and drops about 1% per year thereafter; the senses fade; the epidermis becomes dry and the dermis thins; the quality of and need for sleep diminish; and the brain loses 20% of its weight, slowing recall and mental performance. A number of age-related diseases may develop as a consequence of this tissue, organ, and system deterioration (Table 3).
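As a simple arithmetic sketch of the bone mass figure quoted above (treating the approximately 1% annual decline as compounding from the age-30 peak, purely for illustration):

\[
\text{bone mass at age } 30 + n \;\approx\; \text{peak mass} \times 0.99^{\,n}
\]

so by age 60 (n = 30) about 0.99^{30} ≈ 0.74 of peak bone mass would remain, and less in women after menopause, when the rate of loss accelerates.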
Modification of the Aging Process Can the adverse consequences of aging be prevented? Down through the ages many have pursued the elixir of life. Attempts to increase the average life expectancy and quality of life in the elderly can only succeed by slowing the aging process itself. In
humans, the rate of functional decline associated with aging may be reduced through good nutrition, exercise, timely health care, and avoidance of risk factors for age-related disease. Nutritional Modification
It is clear that diet contributes in substantial ways to the development of age-related diseases and that modification of the diet can contribute to their prevention and thus help to improve the quality of life in old age. Macronutrient intake levels can play a significant part in the progression of age-related diseases and affect the quality of life. For example, the total and proportional intakes of polyunsaturated fatty acids and saturated fatty acids in the Western diet may have an effect on the incidence of atherosclerosis and cardiovascular diseases. Our dietary requirements also change as we age and if such changes are not properly addressed this could lead to suboptimal nutritional status. This challenge is compounded by a decrease in the body’s ability to monitor food and nutrient intakes. Dietary intake and requirements are complex issues, intertwined with many health and life style issues. However, most research points towards the need for
a varied diet as we age, with an increased emphasis on micronutrient intake levels. An exemplary diet for healthy aging can be found in the traditional diet of Okinawa, Japan. Okinawans are the longest-living population in the world according to the World Health Organization, with low disability rates and the lowest frequencies of coronary heart disease, stroke, and cancer in the world. This has been attributed to healthy life style factors such as regular physical activity, minimal tobacco use, and developed social support networks as antistress mechanisms, all of which are underpinned by a varied diet low in salt and fat (with monounsaturates as the principal fat) and high levels of micronutrient and antioxidant consumption.
Figure 2 Biomolecule damage and the aging process: biomolecule-damaging agents (e.g., free radicals) and imperfect defense mechanisms (e.g., antioxidants) allow damage to nucleic acids (altered structure, gene expression, transcription and translation), proteins (altered structure and biological activity, aggregation, activation of proteolytic enzymes, DNA–protein cross-links), and lipids (membrane peroxidation and destruction, loss of selective permeability and integrity, aggregation with proteins, pigments, and metal ions to form lipofuscin). The result is cells with altered biological function (aging cells, or by chance cancer cells), cell death, decline in tissue and organ functions, development of age-related disorders, and death. (Reproduced with permission from Barnett YA (1994) Nutrition and the aging process. British Journal of Biomedical Sciences 51: 278–287.)
Vitamins and micronutrients The mechanisms by which certain vitamins and micronutrients mediate their protective effect in relation to a number of agerelated disorders is based in large part upon their abilities to prevent the formation of free radicals or scavenging them as they are formed, either directly (e.g., vitamins C, E, and -carotene) or indirectly (e.g., copper/zinc superoxide dismutase, manganesedependent superoxide dismutase, selenium-dependent glutathione peroxidase). Table 4 summarizes the effects that a variety of vitamins and micronutrients can have on age-related disease. Only by exploring more fully the underlying molecular mechanisms of aging and the major classes of antioxidants will it be possible to establish the role
of, and develop strategies for using various classes of antioxidants to reduce the effects of aging.
Table 3 Major age-related alterations in vivo and the resultant pathological conditions
Cardiovascular: atherosclerosis, coronary heart disease, hypertension
Central nervous system: reduction of cognitive function, development of various dementias (e.g., Alzheimer’s disease and Parkinson’s disease)
Endocrine: noninsulin-dependent diabetes, hypercortisolemia
Hemopoietic: anemia, myelofibrosis
Immune: general decline in immune system function, particularly in T cells
Musculoskeletal: osteoporosis, osteoarthritis, skeletal muscle atrophy
Renal: glomerulosclerosis, interstitial fibrosis
Reproductive: decreased spermatogenesis, hyalinization of seminiferous tubules
Respiratory: interstitial fibrosis, decreased vital capacity, chronic obstructive pulmonary disease
Sense organs: cataracts, senile macular degeneration, diabetic retinopathy
All systems: cancer
Table 4 Effects of vitamins and micronutrients on age-related disorders
Vitamins B6 and E, copper, zinc, and selenium: impairment of immune function in older humans if intakes are inadequate
Vitamins C and E and carotenoids: increased amounts in the diet are associated with delayed development of various forms of cataract
Vitamin C, β-carotene, α-tocopherol, and zinc: protective effect against the development of lung cancer in smokers
Carotenoids and zinc: dietary supplementation is associated with a decreased risk of age-related macular degeneration
Selenium: absolute or relative deficiency is associated with the development of a number of cancers (not breast cancer)
Selenium, copper, zinc, lithium, vanadium, chromium, and magnesium: dietary supplementation may decrease the rate of development of atherosclerosis
Vitamins B12 and B6 and folate: dietary deficits are associated with an increased risk of cardiovascular disease; adequate levels throughout a lifetime may prevent some of the age-related decrease in cognitive function
Chromium: deficiency is associated with an increased risk of the development of type 2 diabetes mellitus
Other dietary components may also have a beneficial effect in preventing or delaying the onset of age-related disease. For example, as a deterrent against the onset of osteoporosis, adults should ensure adequate calcium and vitamin D intakes.
Dietary energy restriction The effect of caloric restriction on life span has only been convincingly demonstrated in rodents to date. Feeding mice and rats diets that are severely deficient in energy (about 35% of that of animals fed ad libitum, after the initial period of growth) retards the aging of body tissues, inhibits the development of disease and tumors, and prolongs life span significantly. The exact mechanism of action of dietary energy restriction remains to be elucidated, but may involve modulation of free radical metabolism, or the reduced hormone excretion that occurs in dietary restricted animals may lower whole body metabolism resulting in less ‘wear and tear’ to body organs and tissues. Current investigations into the effects of dietary energy restriction (of about 30%) on the life spans of primates, squirrels, and rhesus monkeys continue. Caloric restriction in rhesus monkeys leads to reductions in body temperature and energy expenditure consistent with the rodent studies. These investigations should have direct implications for a dietary energy restriction intervention aimed at slowing
down the aging process in humans, should any humans wish to extend their life span at such a cost. Once the mechanisms of effects of caloric restriction on longevity are understood it may be possible to develop drugs that act through these mechanisms directly, mitigating the need for diets that interfere with the quality of life. Molecular Biological Interventions and the Aging Process
Accelerated aging syndromes show degenerative characteristics similar to those appearing during normal aging. The mutations leading to these disorders are being identified and their roles in the aging process are being elucidated. Examining differences in the genetic material from normal elderly people and those with progeria should help to give a better understanding of the genetic mechanisms of aging. Identification of a control gene or genes that inhibit
the action of the genes producing the progeroid phenotype might make it possible to slow down aberrant protein production in normal people as well. As an example, the genetic defect that predisposes individuals to the development of Werner’s syndrome has now been elucidated. Individuals with this disease carry two copies of a mutant gene that codes for a helicase enzyme (helicases split apart or unwind the two strands of the DNA double helix). DNA helicases play a role in DNA replication and repair. In light of the biological function of these enzymes it has been proposed that the reason for the premature aging in Werner’s syndrome is that the defective helicase prevents DNA repair enzymes from removing background DNA damage, which thus becomes fixed as mutations, with consequent deleterious effects on cellular function. It remains to be determined whether increasing the fidelity or activity of helicases in cells will extend their life span. Since it appears that the loss of telomeric DNA sequences can lead to replicative senescence in dividing cells, in theory by preventing such telomere loss the life span of the cell could be extended. A naturally occurring enzyme, telomerase, exists to restore telomeric DNA sequences lost by replication. Telomerase is normally only functional in germ cells. Manipulating certain cell types (e.g., cells of the immune system) to regulate the expression of telomerase may extend their functional life span. Drugs that enhance telomerase activity in somatic cells are currently being developed. However, cellular senescence has been implicated as a tumor suppressor mechanism and it has been found that cancer cells express telomerase. An uncontrolled expression of this enzyme in somatic cells may lead to the onset of malignancy through uncontrolled cell proliferation. Thus, any intervention aiming to increase life span based on the cellular expression of telomerase must strike a balance between maintaining controlled cell division and avoiding uncontrolled proliferation. A number of single gene mutations have been identified that affect metabolic function, hormonal signaling, and gene silencing pathways. In the future it may be possible to develop drugs to mimic the antiaging effects that these genes exert.
See also: Antioxidants: Diet and Antioxidant Defense; Observational Studies; Intervention Studies. Cancer: Epidemiology and Associations Between Diet and Cancer. Coronary Heart Disease: Lipid Theory; Prevention. Fats and Oils. Fatty Acids: Monounsaturated; Saturated. Growth and Development, Physiological Aspects. Lipids: Chemistry and Classification; Composition and Role of Phospholipids. Nucleic Acids. Nutrient Requirements, International Perspectives. Older People: Nutritional Requirements; Nutrition-Related Problems; Nutritional Management of Geriatric Patients. Protein: Synthesis and Turnover; Requirements and Role in Diet; Digestion and Bioavailability. Supplementation: Role of Micronutrient Supplementation.
Further Reading
Barnett YA (1994) Nutrition and the ageing process. British Journal of Biomedical Sciences 51: 278–287.
Bellamy D (ed.) (1995) Ageing: A Biomedical Perspective. Chichester: Wiley.
Esser K and Martin GM (1995) Molecular Aspects of Ageing. Chichester: Wiley.
Finch CE (1991) Longevity, Senescence and the Genome. Chicago: University of Chicago Press.
Hayflick L (1993) Aspects of cellular ageing. Reviews in Clinical Gerontology 3: 207–222.
Kanungo MS (1994) Genes and Ageing. Cambridge: Cambridge University Press.
Kirkland JL (2002) The biology of senescence: potential for the prevention of disease. Clinics in Geriatric Medicine 18: 383–405.
Kirkwood TBL (1992) Comparative lifespans of species: why do species have the lifespans they do? American Journal of Clinical Nutrition 55: 1191S–1195S.
Mera SL (1992) Senescence and pathology in ageing. Medical Laboratory Sciences 4: 271–282.
(1995) Somatic mutations and ageing: cause or effect? Mutation Research, DNAging (special issue) 338: 1–234.
Tominaga K, Olgun A, Smith JR, and Periera-Smith OM (2002) Genetics of cellular senescence. Mechanisms of Ageing and Development 123: 927–936.
Troen BR (2003) The biology of ageing. The Mount Sinai Journal of Medicine 70(1): 3–22.
US Bureau of the Census (1999) Report WP/98, World Population Profile. Washington, DC: US Government Printing Office.
von Zglinicki T, Bürkle A, and Kirkwood TBL (2001) Stress, DNA damage and ageing – an integrated approach. Experimental Gerontology 36: 1049–1062.
ALCOHOL
Contents
Absorption, Metabolism and Physiological Effects
Disease Risk and Beneficial Effects
Effects of Consumption on Diet and Nutritional Status
Absorption, Metabolism and Physiological Effects
R Rajendram, R Hunter and V Preedy, King’s College London, London, UK
T Peters, King’s College Hospital, London, UK
© 2005 Elsevier Ltd. All rights reserved.
After caffeine, ethanol is the most commonly used recreational drug worldwide. ‘Alcohol’ is synonymous with ‘ethanol,’ and ‘drinking’ often describes the consumption of beverages containing ethanol. In the United Kingdom, a unit of alcohol (standard alcoholic drink; Table 1) contains 8 g of ethanol. The Department of Health (United Kingdom) and several of the medical Royal Colleges have recommended sensible limits for alcohol intake based on units of alcohol. However, because the amount of ethanol in one unit varies throughout the world (Tables 2 and 3), the unit system does not allow international comparisons.
Table 1 Unit system of ethanol content of alcoholic beverages^a
Half pint of low-strength beer (284 ml): 1 unit
Pint of beer (568 ml): 2 units
500 ml of high-strength beer: 6 units
Pint of cider: 2 units
One glass of wine (125 ml): 1 unit
Bottle of wine (750 ml): 6 units
One measure of spirits (e.g., whisky, gin, vodka): 1 unit
Bottle of spirits (e.g., vodka; 750 ml): 36 units
^a The unit system is a convenient way of quantifying consumption of ethanol and offers a suitable means to give practical guidance. However, there are several problems with the unit system. The ethanol content of various brands of alcoholic beverages varies considerably (for example, alcohol content of beers/ales is 0.5–9.0%—a pint may contain 2–5 units) and the amounts of alcohol consumed in homes bear little in common with standard measures.
Despite these guidelines, the quantity of alcohol consumed varies widely. Many enjoy the pleasant psychopharmacological effects of alcohol. However,
some experience adverse reactions due to genetic variation of enzymes that metabolize alcohol. Misuse of alcohol undoubtedly induces pathological changes in most organs of the body. Some questionable data have suggested that alcohol may be beneficial in the reduction of ischaemic heart disease. Many of the effects of alcohol correlate with the peak concentration of ethanol in the blood during a drinking session. It is therefore important to understand the factors that influence the blood ethanol concentration (BEC) achieved from a dose of ethanol.
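As a rough worked illustration of the unit system described above (a sketch that assumes a density of ethanol of about 0.79 g/ml, a value not given in Table 1), the ethanol content of a drink follows from its volume and strength:

\[
\text{ethanol (g)} \;\approx\; \text{volume (ml)} \times \frac{\text{ABV (\%)}}{100} \times 0.79,
\qquad
\text{UK units} \;=\; \frac{\text{ethanol (g)}}{8}
\]

For example, a pint of beer (568 ml) at 4% alcohol by volume contains roughly 568 × 0.04 × 0.79 ≈ 18 g of ethanol, or a little over 2 UK units, consistent with Table 1.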
Physical Properties of Ethanol Ethanol is produced from the fermentation of glucose by yeast. Ethanol (Figure 1) is highly soluble in water due to its polar hydroxyl (OH) group. The nonpolar (C2H5) group enables ethanol to dissolve lipids and thereby disrupt biological membranes. As a relatively uncharged molecule, ethanol crosses cell membranes by passive diffusion.
Absorption and Distribution of Alcohol
The basic principles of alcohol absorption from the gastrointestinal (GI) tract and subsequent distribution are well understood. Beverages containing ethanol pass down the oesophagus into the stomach. The endogenous flora of the GI tract can also transform food into a mixture of alcohols including ethanol. This is particularly important if there are anatomical variations in the upper GI tract (e.g., diverticulae). Alcohol continues down the GI tract until absorbed. The ethanol concentration therefore decreases down the GI tract. There is also a concentration gradient of ethanol from the lumen to the blood. The concentration of ethanol is much higher in the lumen of the upper small intestine than in plasma (Table 4). Alcohol diffuses passively across the cell membranes of the mucosal surface into the submucosal space and then the submucosal capillaries. Absorption occurs across all of the GI mucosa but is fastest in the duodenum and jejunum. The rate of gastric emptying is the main determinant of absorption because most ethanol is absorbed after leaving the stomach through the pylorus. Alcohol diffuses from the blood into tissues across capillary walls. Ethanol concentration equilibrates between blood and the extracellular fluid within a single pass. However, equilibration between blood water and total tissue water may take several hours, depending on the cross-sectional area of the capillary bed and tissue blood flow. Ethanol enters most tissues but its solubility in bone and fat is negligible. Therefore, in the postabsorption phase, the volume of distribution of ethanol reflects total body water. Thus, for a given dose, BEC will reflect lean body mass.
Table 2 Geographical variation in the amount of ethanol in one unit^a
Japan: 14 g
United States: 12 g
Australia and New Zealand: 10 g
United Kingdom: 8 g
^a The unit system does not permit international comparisons.
Table 3 Guidelines for the consumption of alcohol^a
Low risk: men 0–21 units weekly^b or 3–4 units daily^c; women 0–14 units weekly^b or 2–3 units daily^c
Hazardous: men 22–50 units weekly or 4 units daily; women 15–35 units weekly or 3 units daily
Harmful: men >50 units weekly; women >35 units weekly or 1–2 units^d
^a Guidelines regarding the consumption of alcohol are designed to reduce harm. The Royal Colleges’ (1995) guidelines are for weekly consumption rates, and the Department of Health’s (1995) guidelines are for daily consumption.
^b Recommendations of the Working Group of the Royal Colleges of Physicians, Psychiatrists and General Practitioners (UK).
^c Recommendations of the Department of Health (UK).
^d When pregnant or about to become pregnant, consumption of more than 1 or 2 units of alcohol, one or two times per week, is harmful.
Table 4 Approximate ethanol concentrations in the gastrointestinal tract and in the blood after a dose of ethanol^a
Stomach: 8 g/dl (1740 mmol/l)
Jejunum: 4 g/dl (870 mmol/l)
Ileum: 0.1–0.2 g/dl (22–43 mmol/l)
Blood (15–120 minutes after dosage): 0.1–0.2 g/dl (22–43 mmol/l)
^a Ethanol appears in the blood as quickly as 5 minutes after ingestion and is rapidly distributed around the body. A dose of 0.8 g ethanol/kg body weight (56 g ethanol (7 units) consumed by a 70 kg male) should result in a blood ethanol concentration of 100–200 mg/dl (22–43 mmol/l) between 15 and 120 minutes after dosage. Highest concentrations occur after 30–90 minutes.
Figure 1 Chemical structure of ethanol (C2H5OH), showing the polar hydroxyl (OH) group and the nonpolar carbon backbone.
Metabolism of Alcohol
The rate at which alcohol is eliminated from the blood by oxidation varies from 6 to 10 g/h. This is reflected by the BEC, which falls by 9–20 mg/dl/h after consumption of ethanol. After a dose of 0.6–0.9 g/kg body weight without food, elimination of ethanol is approximately 15 mg/dl blood/h. However, many factors influence this rate and there is considerable individual variation. Absorbed ethanol is initially oxidized to acetaldehyde (Figure 2) by one of three pathways (Figure 3):
1. Alcohol dehydrogenase (ADH)—cytosol
2. Microsomal ethanol oxidizing system (MEOS)—endoplasmic reticulum
3. Catalase—peroxisomes
Alcohol Dehydrogenase
ADH couples oxidation of ethanol to reduction of nicotinamide adenine dinucleotide (NAD+) to NADH. ADH has a wide range of substrates and functions, including dehydrogenation of steroids and oxidation of fatty acids.
Alcohol Dehydrogenase Isoenzymes
ADH is a zinc metalloprotein with five classes of isoenzymes that arise from the association of eight different subunits into dimers (Table 5). A genetic model accounts for these five classes of ADH as
Figure 2 Chemical structures of acetaldehyde and acetate, the products of ethanol metabolism.
Figure 3 Pathways of ethanol metabolism in the hepatocyte: ethanol is oxidized to acetaldehyde by alcohol dehydrogenase in the cytosol (with reduction of NAD+ to NADH), by the microsomal ethanol oxidizing system (CYP2E1 with NADPH–cytochrome P450 reductase) in the endoplasmic reticulum, and by catalase (using H2O2) in the peroxisome; acetaldehyde is then oxidized to acetate by aldehyde dehydrogenase in the mitochondria, again with reduction of NAD+ to NADH.
products of five gene loci (ADH1–5). Class 1 isoenzymes generally require a low concentration of ethanol to achieve ‘half-maximal activity’ (low Km), whereas class 2 isoenzymes have a relatively high Km. Class 3 ADH has a low affinity for ethanol and does not participate in the oxidation of ethanol in the liver. Class 4 ADH is found in the human stomach and class 5 has been reported in liver and stomach. Whereas the majority of ethanol metabolism occurs in the liver, gastric ADH is responsible for a small portion of ethanol oxidation.
Table 5 Classes of alcohol dehydrogenase isoenzymes^a
Class 1 (ADH1, ADH2, ADH3): liver; liver, lung; liver, stomach. Km 4, 0.05–34, and 0.6–1.0 mmol/l; Vmax 54
Class 2 (ADH4): liver, cornea. Km 34 mmol/l; Vmax 40
Class 3 (ADH5): most tissues. Km 1000 mmol/l
Class 4 (ADH7): stomach, oesophagus, other mucosae. Km 20 mmol/l; Vmax 1510
Class 5 (ADH6): liver, stomach. Km 30 mmol/l
^a Km supplied is for ethanol; ADH also oxidizes other substrates. Adapted with permission from Kwo PY and Crabb DW (2002) Genetics of ethanol metabolism and alcoholic liver disease. In: Sherman DIN, Preedy VR and Watson RR (eds.) Ethanol and the Liver. Mechanisms and Management, pp. 95–129. London: Taylor & Francis.
Catalase
Peroxisomal catalase, which requires the presence of hydrogen peroxide (H2O2), is of little significance in the metabolism of ethanol. Metabolism of ethanol by ADH inhibits catalase activity because H2O2 production is inhibited by the reducing equivalents produced by ADH.
Microsomal Ethanol Oxidizing System
Chronic administration of ethanol with nutritionally adequate diets increases clearance of ethanol from the blood. In 1968, the MEOS was identified. The MEOS has a higher Km for ethanol (8–10 mmol/l) than ADH (0.2–2.0 mmol/l) so at low BEC, ADH is more important. However, unlike the other pathways, MEOS is highly inducible by chronic alcohol consumption. The key enzyme of the MEOS is cytochrome P450 2E1 (CYP2E1). Chronic alcohol use is associated with a 4- to 10-fold increase of CYP2E1 due to increases in mRNA levels and rate of translation.
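The meaning of these Km values can be made concrete with the standard Michaelis–Menten relation (a sketch; no kinetic constants other than the Km values above are taken from this article):

\[
v \;=\; \frac{V_{\max}[S]}{K_m + [S]}
\]

At a blood ethanol concentration of about 20 mmol/l (roughly 90 mg/dl), an ADH with Km = 1 mmol/l is operating at 20/21 ≈ 95% of its Vmax (essentially saturated, so elimination is close to zero order), whereas the MEOS with Km of about 10 mmol/l is at only 20/30 ≈ 67%, so its relative contribution grows as the BEC rises or as CYP2E1 is induced.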
Acetaldehyde Metabolism
Acetaldehyde is highly toxic but is rapidly converted to acetate. This conversion is catalyzed by aldehyde dehydrogenase (ALDH) and is accompanied by reduction of NAD+ (Figure 3). There are several isoenzymes of ALDH (Table 6). The most important are ALDH1 (cytosolic) and ALDH2 (mitochondrial). The presence of ALDH in tissues may reduce the toxic effects of acetaldehyde. In alcoholics, the oxidation of ethanol is increased by induction of MEOS. However, the capacity of mitochondria to oxidize acetaldehyde is reduced. Hepatic acetaldehyde therefore increases with chronic ethanol consumption. A significant increase of acetaldehyde in hepatic venous blood reflects the high tissue level.
Table 6 Classes of aldehyde dehydrogenase isoenzymes^a
Class 1 (ALDH1): cytosolic; many tissues, highest in liver. Km 30 µmol/l
Class 2 (ALDH2): mitochondrial; present in all tissues except red blood cells (liver > kidney > muscle > heart). Km 1 µmol/l
^a Km supplied is for acetaldehyde; ALDH also oxidizes other substrates. Adapted with permission from Kwo PY and Crabb DW (2002) Genetics of ethanol metabolism and alcoholic liver disease. In: Sherman DIN, Preedy VR and Watson RR (eds.) Ethanol and the Liver. Mechanisms and Management, pp. 95–129. London: Taylor & Francis.
Metabolism of Acetate
The final metabolism of acetate derived from ethanol remains unclear. However, some important principles have been elucidated:
1. The majority of absorbed ethanol is metabolized in the liver and released as acetate. Acetate release from the liver increases approximately 2.5 times after ethanol consumption.
2. Acetyl-CoA synthetase catalyzes the conversion of acetate to acetyl-CoA via a reaction requiring adenosine triphosphate. The adenosine monophosphate produced is converted to adenosine in a reaction catalyzed by 5′-nucleotidase.
3. Acetyl-CoA may be converted to glycerol, glycogen, and lipid, particularly in the fed state. However, this only accounts for a small fraction of absorbed ethanol.
4. The acetyl-CoA generated from acetate may be used to generate adenosine triphosphate via the Krebs cycle.
5. Acetate readily crosses the blood–brain barrier and is actively metabolized in the brain. The neurotransmitter acetylcholine is produced from acetyl-CoA in cholinergic neurons.
6. Both cardiac and skeletal muscle are very important in the metabolism of acetate.
Based on these observations, future studies on the effects of ethanol metabolism should focus on skeletal and cardiac muscle, adipose tissue, and the brain.
Blood Ethanol Concentration The relationship between BEC and the effects of alcohol is complex and varies between individuals and with patterns of drinking. Many of the effects correlate with the peak concentration of ethanol in the blood and organs during a drinking session. Other effects are due to products of metabolism and the total dose of ethanol ingested over a period of time. These two considerations are not entirely separable because the ethanol concentration during a session may determine which pathways of ethanol metabolism predominate. It is of considerable clinical interest to understand what factors increase the probability of higher maximum ethanol concentrations for any given level of consumption.
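A minimal back-of-the-envelope sketch ties these ideas together. Using the classical Widmark factor r (roughly 0.68 for men and 0.55 for women, values assumed here rather than taken from this article) as the fraction of body weight acting as the ethanol distribution volume, and the roughly constant (zero-order) elimination rate quoted earlier:

\[
C_{\max} \;\approx\; \frac{A}{r\,W},
\qquad
t_{\text{to clear}} \;\approx\; \frac{C_{\max}}{\beta}
\]

where A is the dose of ethanol (g), W is body weight (kg), C_max is expressed in g/l, and β is the elimination rate. For the 0.8 g/kg dose described earlier (56 g in a 70 kg man), C_max ≈ 56/(0.68 × 70) ≈ 1.2 g/l, i.e., about 120 mg/dl or 26 mmol/l, within the stated 100–200 mg/dl range; at an elimination rate of about 15 mg/dl per hour this would take in the order of 8 h to clear. The same dose per kilogram in a 60 kg woman (48 g) gives roughly 48/(0.55 × 60) ≈ 1.5 g/l, illustrating the higher peak BEC discussed in the next section. Metabolism during absorption makes real peaks somewhat lower than these simple estimates.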
Factors Affecting Blood Ethanol Concentration Gender Differences in Blood Ethanol Concentration
Women achieve higher peak BEC than men given the same dose of ethanol per kilogram of body weight. The volume of distribution of ethanol reflects total body water. Because the bodies of women contain a greater proportion of fat, it is not surprising that the BEC is higher in women. However, gender differences in the gastric metabolism of ethanol may also be relevant. Period over which the Alcohol Is Consumed
Rapid intake of alcohol increases the concentration of ethanol in the stomach and small intestine. The greater the concentration gradient of alcohol, the faster the absorption of ethanol and therefore peak BEC. If alcohol is consumed and absorbed faster than the rate of oxidation, then BEC increases. Effects of Food on Blood Ethanol Concentration
The peak BEC is reduced when alcohol is consumed with or after food. Food delays gastric emptying into
the duodenum and reduces the sharp early rise in BEC seen when alcohol is taken on an empty stomach. Food also increases elimination of ethanol from the blood. The area under the BEC/time curve (AUC) is reduced (Figure 4). The contributions of various nutrients to these effects have been studied, but small, often conflicting, differences have been found. It appears that the caloric value of the meal is more important than the precise balance of nutrients. In animal studies ethanol is often administered with other nutrients in liquid diets. The AUC is less when alcohol is given in a liquid diet than with the same dose of ethanol in water. The different blood ethanol profile in these models may affect the expression of pathology. However, food increases splanchnic blood flow, which maintains the ethanol diffusion gradient in the small intestine. Food-induced impairment of gastric emptying may be partially offset by faster absorption of ethanol in the duodenum.
Figure 4 Blood ethanol concentration curve after oral dosing of ethanol. A subject ingested 0.8 g/kg ethanol over 30 minutes either after an overnight fast or after breakfast. The peak blood ethanol concentration and the area under the curve are reduced if ethanol is consumed with food.
Table 7 Alcohol content of selected beverages
Beverage | Alcohol content, g/dl (%) | mmol/l | mol/l
Low-strength beers | 3–4 | 650–870 | 0.65–0.87
High-strength beers | 8–9 | 1740–1960 | 1.74–1.96
Wine | 7–14 | 1520–3040 | 1.52–3.04
Brandy | 35–45 | 7610–9780 | 7.61–9.78
Vodka | 35–50 | 7610–10870 | 7.61–10.87
Gin | 35–50 | 7610–10870 | 7.61–10.87
Whisky | 35–75 | 7610–16300 | 7.61–16.30
Beverage Alcohol Content and Blood Ethanol Concentration
The ethanol concentration of the beverage consumed (Table 7) affects ethanol absorption and can affect BEC. Absorption is fastest when the concentration is 10–30%. Below 10%, the low ethanol concentration in the GI tract reduces diffusion and the greater volume of liquid slows gastric emptying. However, concentrations above 30% irritate the GI mucosa and the pyloric sphincter, increasing secretion of mucous and delaying gastric emptying.
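As a cross-check on the concentrations in Table 7, the molar values follow from the volume/mass percentages using the molar mass of ethanol (approximately 46.07 g mol−1, a standard value not quoted in the article). For a beer containing 4 g of ethanol per dl:

\[
\frac{4\ \mathrm{g\,dl^{-1}} \times 10\ \mathrm{dl\,l^{-1}}}{46.07\ \mathrm{g\,mol^{-1}}} \approx 0.87\ \mathrm{mol\,l^{-1}} = 870\ \mathrm{mmol\,l^{-1}},
\]

which matches the upper end of the range given for low-strength beers.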
First-Pass Metabolism of Ethanol
The AUC is significantly lower after oral dosing of ethanol than after intravenous or intraperitoneal administration. The total dose of intravenously administered ethanol is available to the systemic circulation. The difference between AUCoral and AUCiv represents the fraction of the oral dose that was either not absorbed or metabolized before entering the systemic circulation (first-pass metabolism (FPM)). The ratio of AUCoral to AUCiv reflects the oral bioavailability of ethanol. The investigation of ethanol metabolism has primarily focused on the liver and its relationship to liver pathology. However, gastric metabolism accounts for approximately 5% of ethanol oxidation and 2–10% is excreted in the breath, sweat, or urine. The rest is metabolized by the liver. After absorption, ethanol is transported to the liver in the portal vein. Some is metabolized by the liver before reaching the systemic circulation. However, hepatic ADH is saturated at a BEC that may be achieved in an average-size adult after consumption of one or two units. If ADH is saturated by ethanol from the systemic blood via the hepatic artery, ethanol in the portal blood must compete for binding to ADH. Although hepatic oxidation of ethanol cannot increase once ADH is saturated, gastric ADH can significantly metabolize ethanol at the high concentrations in the stomach after initial ingestion. If gastric emptying of ethanol is delayed, prolonged contact with gastric ADH increases FPM. Conversely, fasting, which greatly increases the speed of gastric emptying, virtually eliminates gastric FPM.
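The first-pass relationship described above can be summarized compactly; writing F for the oral bioavailability of ethanol:

\[
F = \frac{\mathrm{AUC_{oral}}}{\mathrm{AUC_{iv}}}, \qquad \text{fraction lost to first-pass metabolism (or not absorbed)} = 1 - F.
\]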
Physiological Effects of Alcohol Ethanol or the products of its metabolism affect nearly all cellular structures and functions. Effects of Alcohol on the Central Nervous System
Ethanol generally decreases the activity of the central nervous system. In relation to alcohol, the most
important neurotransmitters in the brain are glutamate, gamma-aminobutyric acid (GABA), dopamine, and serotonin. Glutamate is the major excitatory neurotransmitter in the brain. Ethanol inhibits the N-methyl-D-aspartate (NMDA) subset of glutamate receptors. Ethanol thereby reduces the excitatory effects of glutamate. GABA is the major inhibitory neurotransmitter in the brain. Alcohol facilitates the action of the GABA-A receptor, increasing inhibition. Changes to these receptors seem to be important in the development of tolerance of and dependence on alcohol. Dopamine is involved in the rewarding aspects of alcohol consumption. ‘Enjoyable’ activities such as eating or use of other recreational drugs also release dopamine in the nucleus accumbens of the brain. Serotonin is also involved in the reward processes and may be important in encouraging alcohol use. The most obvious effects of ethanol intoxication on the central nervous system begin with behavior modification (e.g., cheerfulness, impaired judgment, and loss of inhibitions). These ‘excitatory’ effects result from the disinhibition described previously (inhibition of cells in the brain that are usually inhibitory). As a result of these effects, it is well recognized that driving under the influence of ethanol is unsafe. However, the definition of what is safe or acceptable varies between countries (Table 8) and often changes. The effects of ethanol are dose dependent (Table 9) and further intake causes agitation, slurred speech, memory loss, double vision, and loss of coordination. This may progress to depression of consciousness and loss of airway protective reflexes, with danger of aspiration, suffocation, and death.
Table 9 Relationship between amount of ethanol consumed, blood ethanol concentration (BEC), and effect of ethanol on the central nervous system
Alcohol consumed (units)a | Possible BEC | Effect
1–5 | 10–50 mg/dl (2–11 mmol/l) | No obvious change in behavior
2–7 | 30–100 mg/dl (7–22 mmol/l) | Euphoria; sociability; increased self-confidence; loss of inhibitions; impaired judgment, attention, and control; mild sensorimotor impairment, delayed reaction times; legal limits for driving generally fall within this range (see Table 8)
8–15 | 90–250 mg/dl (20–54 mmol/l) | Loss of critical judgment; impairment of perception, memory, and comprehension; reduced visual acuity; reduced coordination, impaired balance; drowsiness
11–20 | 180–300 mg/dl (39–65 mmol/l) | Confusion; disorientation; exaggerated emotional states; disturbances of vision and perception of color, form, motion, and depth; increased pain threshold; further reduction of coordination, staggering gait, slurred speech
15–25 | 250–400 mg/dl (54–87 mmol/l) | Stupor; loss of motor functions; markedly reduced response to stimuli; marked loss of coordination, inability to stand/walk; incontinence; impaired consciousness
22–30 | 350–500 mg/dl (76–108 mmol/l) | Coma; unconsciousness; reduced or abolished reflexes; incontinence; cardiovascular and respiratory depression (death possible)
>38 | >600 mg/dl (>130 mmol/l) | Death; respiratory arrest
a Approximate amounts of alcohol required by a 70 kg male to produce the corresponding blood ethanol concentration and intoxicating effects of ethanol. One unit of alcohol contains 8 g of ethanol. Adapted with permission from Morgan MY and Ritson B (2003) Alcohol and Health: A Handbook for Students and Medical Practitioners, 4th edn. London: Medical Council on Alcohol.
Table 8 Legal limits of blood ethanol concentrations for drivinga
Country | Legal limitb
Norway and Sweden | 20 mg/dl (4.3 mmol/l)
France, Germany, Italy, and Australia | 50 mg/dl (11 mmol/l)
United Kingdom, United States, and Canada | 80 mg/dl (17 mmol/l)
Russia | ‘‘Drunkenness’’
a Ethanol impairs judgment and coordination. It is well recognized that driving under the influence of ethanol is unsafe. However, the definition of what is safe or acceptable varies between countries and can change as a result of social, political, or scientific influences.
b Legislation regarding legal limits of blood ethanol for driving may change.
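The footnote to Table 9 (one unit = 8 g of ethanol; values for a 70 kg male) can be turned into a rough calculation using the classical Widmark relationship. The sketch below is illustrative only: the Widmark distribution factors (about 0.68 for men and 0.55 for women) and the elimination rate of roughly 15 mg/dl per hour are standard textbook approximations, not figures taken from this article.

```python
# Rough estimate of blood ethanol concentration (BEC) from units consumed,
# using the classical Widmark relationship C = A / (r * W).
# Assumed values (not from this article): r = 0.68 (men) or 0.55 (women),
# elimination of about 15 mg/dl per hour. One unit = 8 g ethanol (Table 9 footnote).

UNIT_GRAMS = 8.0                          # grams of ethanol per unit
WIDMARK_R = {"male": 0.68, "female": 0.55}
ELIMINATION_MG_DL_PER_H = 15.0

def peak_bec_mg_dl(units, weight_kg, sex="male"):
    """Approximate peak BEC (mg/dl) if the dose is absorbed rapidly on an empty stomach."""
    grams = units * UNIT_GRAMS
    bec_g_per_l = grams / (WIDMARK_R[sex] * weight_kg)   # Widmark: grams of ethanol per litre
    return bec_g_per_l * 100.0                            # g/l -> mg/dl

def bec_after_hours(units, weight_kg, hours, sex="male"):
    """BEC after a given number of hours, assuming constant (zero-order) elimination."""
    return max(0.0, peak_bec_mg_dl(units, weight_kg, sex) - ELIMINATION_MG_DL_PER_H * hours)

# Example: 5 units drunk quickly by a 70 kg man
print(round(peak_bec_mg_dl(5, 70)))        # ~84 mg/dl, within the 30-100 mg/dl band of Table 9
print(round(bec_after_hours(5, 70, 3)))    # ~39 mg/dl three hours later
```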
This sequence of events is particularly relevant in the hospital setting, where patients may present intoxicated with a reduced level of consciousness. It is difficult to determine whether there is coexisting pathology such as an extradural hematoma or overdose of other
drugs in addition to ethanol. Although measurement of BEC is helpful (Table 9), it is safest to assume that alcohol is not responsible for any disturbance in consciousness and to search for another cause. Neuroendocrine Effects of Alcohol
Alcohol activates the sympathetic nervous system, increasing circulating catecholamines from the adrenal medulla. Hypothalamic–pituitary stimulation results in increased circulating cortisol from the adrenal cortex and can, rarely, cause a pseudo-Cushing’s syndrome with typical moon-shaped face, truncal obesity, and muscle weakness. Alcoholics with pseudo-Cushing’s show many of the biochemical features of Cushing’s syndrome, including failure to suppress cortisol with a 48-h low-dose dexamethasone suppression test. However, they may be distinguished by an insulin stress test. In pseudo-Cushing’s, the cortisol rises in response to insulin-induced hypoglycemia, but in true Cushing’s there is no response to hypoglycemia. Ethanol affects hypothalamic osmoreceptors, reducing vasopressin release. This increases salt and water excretion from the kidney, causing polyuria. Significant dehydration may result, particularly with consumption of spirits containing high concentrations of ethanol and little water. Loss of hypothalamic neurons (which secrete vasopressin) has also been described in chronic alcoholics, suggesting long-term consequences for fluid balance. Plasma atrial natriuretic peptide, increased by alcohol consumption, may also increase diuresis and resultant dehydration. Alcoholism also affects the hypothalamic–pituitary–gonadal axis. These effects are further exacerbated by alcoholic liver disease. There are conflicting data regarding the changes observed. Testosterone is either normal or decreased in men, but it may increase in women. Estradiol is increased in men and women, and it increases as hepatic dysfunction deteriorates. Production of sex hormone-binding globulin is also perturbed by alcohol. The development of female secondary sexual characteristics in men (e.g., gynaecomastia and testicular atrophy) generally only occurs after the development of cirrhosis. In women, the hormonal changes may reduce libido, disrupt menstruation, or even induce premature menopause. Sexual dysfunction is also common in men with reduced libido and impotence. Fertility may also be reduced, with decreased sperm counts and motility. Effects of Alcohol on Muscle
Myopathy is common, affecting up to two-thirds of all alcoholics. It is characterized by wasting,
weakness, and myalgia and improves with abstinence. Histology correlates with symptoms and shows selective atrophy of type II muscle fibers. Ethanol causes a reduction in muscle protein and ribonucleic acid content. The underlying mechanism is unclear, but rates of muscle protein synthesis are reduced, whereas protein degradation is either unaffected or inhibited. Attention has focused on the role of acetaldehyde adducts and free radicals in the pathogenesis of alcoholic myopathy. Alcohol and Nutrition
The nutritional status of alcoholics is often impaired. Some of the pathophysiological changes seen in alcoholics are direct consequences of malnutrition. However, in the 1960s, Charles Lieber demonstrated that many alcohol-induced pathologies, including alcoholic hepatitis, cirrhosis, and myopathy, are reproducible in animals fed a nutritionally adequate diet. Consequently, the concept that all alcohol-induced pathologies are due to nutritional deficiencies is outdated and incorrect. Myopathy is a direct consequence of alcohol or acetaldehyde on muscle and is not necessarily associated with malnutrition. Assessment of nutritional status in chronic alcoholics using anthropometric measures (e.g., limb circumference and muscle mass) may be misleading in the presence of myopathy. Acute or chronic ethanol administration impairs the absorption of several nutrients, including glucose, amino acids, biotin, folate, and ascorbic acid. There is no strong evidence that alcohol impairs absorption of magnesium, riboflavin, or pyridoxine, so these deficiencies are due to poor intakes. Hepatogastrointestinal damage (e.g., villous injury, bacterial overgrowth of the intestine, pancreatic damage, or cholestasis) may impair the absorption of some nutrients such as the fat-soluble vitamins (A, D, E, and K). In contrast, iron stores may be adequate as absorption is increased. Effects of Alcohol on the Cardiovascular System
Alcohol affects both the heart and the peripheral vasculature. Acutely, alcohol causes peripheral vasodilatation, giving a false sensation of warmth that can be dangerous. Heat loss is rapid in cold weather or when swimming, but reduced awareness leaves people vulnerable to hypothermia. The main adverse effect of acute alcohol on the cardiovascular system is the induction of arrhythmias. These are often harmless and experienced as palpitations but can rarely be fatal. Chronic ethanol consumption can cause systemic hypertension and
congestive cardiomyopathy. Alcoholic cardiomyopathy accounts for up to one-third of dilated cardiomyopathies but may improve with abstinence or progress to death. The beneficial, cardioprotective effects of alcohol consumption have been broadcast widely. This observation is based on population studies of mortality due to ischemic heart disease, case–control studies, and animal experiments. However, there is no evidence from randomised controlled trials. The apparent protective effect of alcohol may therefore result from a confounding factor. Furthermore, on the population level, the burden of alcohol-induced morbidity and mortality far outweighs any possible cardiovascular benefit. Effects of Alcohol on Liver Function
Central to the effects of ethanol is the liver, in which 60–90% of ethanol metabolism occurs. Ethanol displaces many of the substrates usually metabolized in the liver. Metabolism of ethanol by ADH in the liver generates reducing equivalents. ALDH also generates NADH with conversion of acetaldehyde to acetate. The NADH/NAD+ ratio is increased, with a corresponding increase in the lactate/pyruvate ratio. If lactic acidosis combines with a β-hydroxybutyrate-predominant ketoacidosis, the blood pH can fall to 7.1 and hypoglycemia may occur. Severe ketoacidosis and hypoglycemia can cause permanent brain damage. However, in general the prognosis of alcohol-induced acidosis is good. Lactic acid also reduces the renal capacity for urate excretion. Hyperuricemia is exacerbated by alcohol-induced ketosis and acetate-mediated purine generation. Hyperuricemia explains, at least in part, the clinical observation that alcohol misuse can precipitate gout. The excess NADH promotes fatty acid synthesis and inhibits lipid oxidation in the mitochondria, resulting in fat accumulation. Fatty changes are usually asymptomatic but can be seen on ultrasound or computed tomography scanning, and they are associated with abnormal liver function tests (e.g., raised activities of serum γ-glutamyl transferase, aspartate aminotransferase, and alanine transaminase). Progression to alcoholic hepatitis involves invasion of the liver by neutrophils with hepatocyte necrosis. Giant mitochondria are visible and dense cytoplasmic lesions (Mallory bodies) are seen. Alcoholic hepatitis can be asymptomatic but usually presents with abdominal pain, fever, and jaundice, or, depending on the severity of disease, patients may have encephalopathy, ascites, and ankle oedema. Continued alcohol consumption may lead to
cirrhosis. The reason for this is unclear. It has been suggested that genetic factors and differences in immune response may play a role. In alcoholic cirrhosis there is fibrocollagenous deposition, with scarring and disruption of surrounding hepatic architecture. There is ongoing necrosis with concurrent regeneration. Alcoholic cirrhosis is classically said to be micronodular, but often a mixed pattern is present. The underlying pathological mechanisms are complex and are the subject of debate. Induction of the MEOS and oxidation of ethanol by catalase result in free radical production. Glutathione (a free radical scavenger) is reduced in alcoholics, impairing the ability to dispose of free radicals. Mitochondrial damage occurs, limiting their capacity to oxidize fatty acids. Peroxisomal oxidation of fatty acids further increases free radical production. These changes eventually result in hepatocyte necrosis, and inflammation and fibrosis ensue. Acetaldehyde also contributes by promoting collagen synthesis and fibrosis. Alcohol and Facial Flushing
Genetic variations in ADH and ALDH may explain why particular individuals develop some of the pathologies of alcoholism and others do not. For example, up to 50% of Orientals have a genetically determined reduction in ALDH2 activity (‘flushing’ phenotype). As a result, acetaldehyde accumulates after ethanol administration, with plasma levels up to 20 times higher in people with ALDH2 deficiency. Even small amounts of alcohol produce a rapid facial flush, tachycardia, headache, and nausea. Acetaldehyde partly acts through catecholamines, although other mediators have been implicated, including histamine, bradykinin, prostaglandin, and endogenous opioids. This is similar to the disulfiram reaction due to the rise of acetaldehyde after inhibition of ALDH. Disulfiram is used therapeutically to encourage abstinence in alcohol rehabilitation programs. The aversive effects of acetaldehyde may reduce the development of alcoholism and the incidence of cirrhosis in ‘flushers.’ However, some alcoholics with ALDH2 deficiency and, presumably, higher hepatic acetaldehyde levels develop alcoholic liver disease at a lower intake of ethanol than controls. Effects of Acetaldehyde
Acetaldehyde is highly toxic and can bind cellular constituents (e.g., proteins including CYP2E1, lipids, and nucleic acids) to produce harmful acetaldehyde adducts (Figure 5). Adduct formation changes
the structure and the biochemical properties of the affected molecules. The new structures may be recognized as foreign antigens by the immune system and initiate a damaging response. Adduct formation leads to retention of protein within hepatocytes, contributing to hepatomegaly, and to several toxic manifestations, including impairment of antioxidant mechanisms (e.g., decreased glutathione (GSH)). Acetaldehyde thereby promotes free radical-mediated toxicity and lipid peroxidation. Binding of acetaldehyde with cysteine (one of the three amino acids that comprise GSH) and/or GSH also reduces liver GSH content. Chronic ethanol administration significantly increases rates of GSH turnover in rats. Acute ethanol administration inhibits GSH synthesis and increases losses from the liver. Furthermore, mitochondrial GSH is selectively depleted and this may contribute to the marked disruption of mitochondria in alcoholic cirrhosis.
Figure 5 Formation of acetaldehyde adducts: acetaldehyde binds a cell component to form an adduct with altered structure and properties, which may provoke an immune response.
Effects of Acetate
The role of acetate in alcohol-induced pathology is not well understood. The uptake and utilization of acetate by tissues depend on the activity of acetyl-CoA synthetase. Acetyl-CoA and adenosine are produced from the metabolism of acetate. Acetate crosses the blood–brain barrier easily and is actively metabolized in the brain. Many of the central nervous system depressant effects of ethanol may be blocked by adenosine receptor blockers. Thus, acetate and adenosine may be important in the intoxicating effects of ethanol. Ethanol increases portal blood flow, mainly by increasing GI tract blood flow. This effect is reproduced by acetate. Acetate also increases coronary blood flow, myocardial contractility, and cardiac output. Acetate inhibits lipolysis in adipose tissue and promotes steatosis in the liver. The reduced circulating free fatty acids (a source of energy for many tissues) may have significant metabolic consequences. Thus, many of the effects of alcohol may be due to acetate.
Summary
Ethanol is probably the most commonly used recreational drug worldwide. Taken orally, alcohol is absorbed from the GI tract by diffusion and is rapidly distributed throughout the body in the blood before entering tissues by diffusion. Ethanol is metabolized to acetaldehyde mainly in the stomach and liver. Acetaldehyde is highly toxic and binds cellular constituents, generating harmful acetaldehyde adducts. Acetaldehyde is further oxidized to acetate, but the fate of acetate and its role in the effects of ethanol are much less clear. Ethanol and the products of its metabolism affect nearly every cellular structure or function and are a significant cause of morbidity and mortality. See also: Alcohol: Disease Risk and Beneficial Effects; Effects of Consumption on Diet and Nutritional Status. Liver Disorders.
Further Reading Department of Health (1995) Sensible Drinking: The Report of an Inter-Departmental Working Group. London: Department of Health. Gluud C (2002) Endocrine system. In: Sherman DIN, Preedy VR, and Watson RR (eds.) Ethanol and the Liver. Mechanisms and Management, pp. 472–494. London: Taylor & Francis. Haber PS (2000) Metabolism of alcohol by the human stomach. Alcoholism: Clinical & Experimental Research 24: 407–408. Henderson L, Gregory J, Irving K and Swan G (2003) The National Diet and Nutrition Survey: adults aged 19–64 years. Volume 2: Energy, protein, carbohydrate, fat and alcohol intake. London: TSO. Israel Y, Orrego H, and Carmichael FJ (1994) Acetate-mediated effects of ethanol. Alcoholism: Clinical & Experimental Research 18(1): 144–148. Jones AW (2000) Aspects of in-vivo pharmacokinetics of ethanol. Alcoholism: Clinical & Experimental Research 24: 400–402. Kwo PY and Crabb DW (2002) Genetics of ethanol metabolism and alcoholic liver disease. In: Sherman DIN, Preedy VR, and Watson RR (eds.) Ethanol and the Liver. Mechanisms and Management, pp. 95–129. London: Taylor & Francis. Lader D and Meltzer H (2002) Drinking: Adults’ Behaviour and Knowledge in 2002. London: Office for National Statistics.
Lieber CS (1996) The metabolism of alcohol and its implications for the pathogenesis of disease. In: Preedy VR and Watson RR (eds.) Alcohol and the Gastrointestinal Tract, pp. 19–39. New York: CRC Press. Lieber CS (2000) Alcohol: Its metabolism and interaction with nutrients. Annual Review of Nutrition 20: 395–430. Mezey E (1985) Effect of ethanol on intestinal morphology, metabolism and function. In: Seitz HK and Kommerell B (eds.) Alcohol Related Diseases in Gastroenterology, pp. 342–360. Berlin: Springer-Verlag. Morgan MY and Ritson B (2003) Alcohol and Health: A Handbook for Students and Medical Practitioners, 4th edn. London: Medical Council on Alcohol. Peters TJ and Preedy VR (1999) Chronic alcohol abuse: Effects on the body. Medicine 27: 11–15. Preedy VR, Adachi J, Ueno Y et al. (2001) Alcoholic skeletal muscle myopathy: Definitions, features, contribution of neuropathy, impact and diagnosis. European Journal of Neurology 8: 677–687. Preedy VR, Patel VB, Reilly ME et al. (1999) Oxidants, antioxidants and alcohol: Implications for skeletal and cardiac muscle. Frontiers in Bioscience 4: 58–66. Royal Colleges (1995) Alcohol and the heart in perspective. Sensible limits reaffirmed. A Working Group of the Royal Colleges of Physicians, Psychiatrists and General Practitioners. Journal of the Royal College of Physicians of London 29: 266–271.
Disease Risk and Beneficial Effects
M Grønbæk, National Institute of Public Health, Copenhagen, Denmark
© 2005 Elsevier Ltd. All rights reserved.
Alcohol has for hundreds of years been part of the diet for many people. When enjoyed in small amounts and together with meals, alcohol may have positive effects on health, especially on the prevention of coronary heart disease. In larger amounts, and especially drunk in binges, alcohol is a toxic and dependence-inducing substance, with many short- and long-term detrimental effects. The latter, combined with the high alcohol intake in subsets of the population, implies that alcohol has a major impact on public health in most Western countries. A higher alcohol intake results in higher rates of certain cancers, cirrhosis, suicide, traffic accidents, abuse, and a number of socioeconomic conditions.
Alcohol and Mortality
Amount of Alcohol
Several large prospective population studies from many countries have described the impact of alcohol intake on mortality as J-shaped, indicating both the beneficial effect of a light to moderate alcohol intake and a detrimental effect of a high alcohol intake (Figure 1). Some have explained the J shape as an artefact due to misclassification or confounding. The prevailing belief among these researchers is that abstainers comprise a mix of former heavy drinkers, underreporting drinkers, ill people who have stopped drinking, and people with an especially unhealthy lifestyle apart from abstaining. However, most researchers attribute the ‘J’ to a combination of beneficial and harmful effects of ethanol. This is based on findings from population studies of alcohol-related morbidity and cause-specific mortality that show a decreased relative risk of coronary heart disease, and an increased risk of certain cancers and cirrhosis, with increased alcohol intake. Further evidence derives from studies in which people who were ill at baseline were excluded, and these confirmed the previously mentioned findings.
Figure 1 All-cause mortality (relative risk) in relation to alcohol intake.
Benefits—Coronary Heart Disease
A large number of investigators have studied the relation between alcohol intake and coronary heart disease. Studies indicate that the descending leg of the curve is mainly attributable to death from coronary heart disease, as mentioned previously. The lowest risk seems to be among subjects reporting an
Table 1 Organ damage from excessive chronic alcohol consumption: approximate amounts of alcohol, durations of exposure, and proposed mechanisms (including increased estrogen production, risk increased by low folate, increased liver fat synthesis, toxicity of alcohol metabolism, increased collagen synthesis, acute inflammation of the pancreas, loss of exocrine and endocrine pancreatic cells, mitochondrial damage of muscle cells or thiamine deficiency, neuronal hyperexcitability, and combinations of iron, folate, and pyridoxal deficiencies)
strokes. Whereas red and white wine both contain protective antioxidant flavonoids, moderate amounts of alcohol also improve the circulating lipid profile by increasing levels of high-density lipoprotein and tissue plasminogen activator while reducing platelet adhesiveness.
The Risks of Excessive Alcohol Consumption Unlike other abused drugs, chronic alcohol in excess affects many different organ systems, which include the liver, pancreas, heart, and brain (Table 1). Excessive chronic alcohol use also increases the risk of certain cancers. While these risks are apparent among the 7% of US citizens over age 14 who abuse alcohol, their prevalence is generally no less in countries such as France, Italy, and Spain where drinking wine with meals is considered part of the culture. The organ damage from chronic alcoholism may impact on processes of nutrient assimilation and metabolism, as is the case with chronic liver and pancreatic disease, or may be modulated in large part by nutrient deficiencies, as with thiamine and brain function. This section will consider specific effects of alcohol abuse on certain organs as a
background for consideration of specific effects on nutritional status. Alcoholic Liver Disease
Alcoholic liver disease is among the top ten causes of mortality in the US with somewhat higher mortality rates in western European countries where wine is considered a dietary staple, and is a leading cause of death in Russia. Among the three stages of alcoholic liver disease, fatty liver is related to the acute effects of alcohol on hepatic lipid metabolism and is completely reversible. By contrast, alcoholic hepatitis usually occurs after a decade or more of chronic drinking, is associated with inflammation of the liver and necrosis of liver cells, and carries about a 40% mortality risk for each hospitalization. Alcoholic cirrhosis represents irreversible scarring of the liver with loss of liver cells, and may be associated with alcoholic hepatitis. The scarring process greatly alters the circulation of blood through the liver and is associated with increased blood pressure in the portal (visceral) circulation and shunting of blood flow away from the liver and through other organs such as the esophagus. The potentially lethal complications of portal hypertension include rupture of esophageal varices, ascites or accumulation of fluid
in the abdominal cavity, and the syndrome of hepatic encephalopathy, which is due to inadequate hepatic detoxification of substances in the visceral blood that is shunted around the liver. The risk of developing alcoholic cirrhosis is dependent upon the amount of alcohol exposure independent of the presence or absence of malnutrition. For example, a study of well-nourished German male executives found that the incidence of alcoholic cirrhosis was directly related to the daily amount and duration of alcohol consumption, such that daily ingestion of 160 g alcohol, equivalent to that found in a pint of whisky, over a 15-year period predicted a 50% risk of cirrhosis on liver biopsy. Other worldwide demographic data indicate that mortality rates from cirrhosis of the liver can be related to national per capita alcohol intake. These studies have defined the threshold risk for eventual development of alcoholic cirrhosis as 6 drinks per day for men, and about half that for women. Pancreatitis and Pancreatic Insufficiency
Pancreatitis occurs less frequently than liver disease in chronic alcoholics, and is characterized by severe attacks of abdominal pain due to pancreatic inflammation, while pancreatic insufficiency is due to the eventual destruction of pancreatic cells that secrete digestive enzymes and insulin. This destructive process is associated with progressive scarring of the pancreas together with distortion and partial blockage of the pancreatic ducts, which promote recurrent episodes of acute inflammatory pancreatitis. Since the pancreas is the site of production of proteases and lipases for protein and lipid digestion, destruction of more than 90% of the pancreas results in significant malabsorption of these major dietary constituents, as well as diabetes secondary to reduced insulin secretion. Consequently, patients with pancreatic insufficiency exhibit severe loss of body fat and muscle protein. Since the absorption of fat-soluble vitamins is dependent upon pancreatic lipase for solubilization of dietary fat, these patients are also at risk for deficiencies of vitamins A, D, and E. Cancers
Chronic alcoholics are at increased risk for cancer of the oro-pharynx and esophagus, colon, and breast. The risk of oro-pharyngeal cancer is greatest when heavy smoking is combined with excessive daily alcohol. Increased risk of squamous cell cancer of the esophagus is also compounded by smoking and may be associated with deficiencies of vitamin A and zinc. Breast cancer in women may be mediated
through increased estrogen production during heavy alcohol intake. Colon cancer risk is greatest among alcoholics with marginal folate deficiency. Heart
Although coronary disease risk is decreased by alcohol consumption, excessive alcohol use also impairs cardiac muscle function. Episodic heavy drinking bouts can lead to arrhythmias in the ‘holiday heart’ syndrome. Chronic alcoholics are prone to left-sided heart failure secondary to decreased mitochondrial function of cardiac muscle cells, possibly mediated by abnormal fatty acid metabolism. A specific form of high output heart failure, or ‘wet beriberi,’ occurs in association with thiamine deficiency. Neurological Effects
The many neurological effects of acute and chronic alcohol abuse can be categorized as those related directly to alcohol, those secondary to chronic liver disease, and those mediated by thiamine deficiency. The stages of acute alcohol toxicity progress upward from legal intoxication, with reduced reaction time and judgment at blood levels greater than 0.08 g dl−1 (the level that usually defines legal intoxication), to coma and death with levels greater than 0.4 g dl−1. While mild intoxication is common with social drinking, coma and death have been described among college age males who consume excessive amounts of alcohol in a very short period of time. Automobile accidents, which account for a large portion of alcohol-related deaths, are more common in drunken pedestrians than drivers. Intoxication also leads to frequent falls and head trauma, and subdural hematoma can present with delayed but progressive loss of cognition, headaches, and eventual death. Chronic alcoholics are prone to episodes of alcohol withdrawal, which can be characterized according to stages of tremulousness, seizures, and delirium tremens with hyper-excitability and hallucinations at any time up to 5 days after the last drink. This state of altered consciousness is distinct from hepatic encephalopathy, which results from diversion of toxic nitrogenous substances around the scarred cirrhotic liver and is associated with progressive slowing of cerebral functions with stages of confusion, loss of cognition, and eventual coma and death. Progressive altered cognition and judgment can also result from cerebral atrophy following years of heavy drinking, and may also be mediated by thiamine deficiency as described in greater detail below.
Anemia
Chronic alcoholics who substitute large amounts of alcohol for other dietary constituents are at risk for developing anemia. The causes of anemia in chronic alcoholics are multifactorial, including iron deficiency secondary to bleeding from episodic gastritis or other gastrointestinal sites, folate deficiency from inadequate diet or malabsorption, and deficiency of pyridoxine (vitamin B6) due to abnormal effects on its metabolism. Consequently, the bone marrow may demonstrate absent iron and mixtures of megaloblastosis from folate deficiency and sideroblastosis from pyridoxine deficiency.
The Effects of Alcohol Consumption on Nutritional Status Body Weight and Energy Balance
The effects of alcohol on body weight are dependent upon the timing and amount of alcohol consumption in relation to meals and on the presence or absence of organ damage, in particular alcoholic liver disease (Table 2).
Table 2 Effects of alcohol on body weight
Drinking behavior | Explanation
Moderate drinking, reduced weight | Substitution of carbohydrate by alcohol; more likely in women
Moderate drinking, increased weight | Decreased dietary restraint
Heavy drinking, reduced weight | Substitution of nonalcohol calories by alcohol calories, which are ‘wasted’ during metabolism
Heavy drinking, increased weight | Alcohol metabolism decreases lipid metabolism, promotes fat storage
Whereas body weight is usually unaffected by moderate alcohol consumption, chronic alcoholics who drink daily while substituting alcohol for other dietary constituents lose weight due to the energy neutral effect of alcohol in the diet. Moderate drinkers on weight loss regimens are less likely to lose weight while consuming alcohol with their meals since one effect of alcohol is to decrease restraint over food intake. At the same time, those who consume alcohol with high-fat meals are more likely to gain weight due to an acute effect of alcohol on reducing the oxidation of fat at the same time as it promotes its storage. The presence of alcoholic liver disease results in significant changes in body composition and energy balance. Although fatty liver is fully reversible, progression to alcoholic hepatitis can have profound effects on nutritional status. According to large
multicenter studies, alcoholic hepatitis patients demonstrate universal evidence for protein calorie malnutrition, according to the physical findings of muscle wasting and edema, low levels of serum albumin and other visceral proteins, and decreased cell-mediated immunity, whereas their 6-month mortality is related in part to the severity of malnutrition. Anorexia is a major cause of weight loss in alcoholic liver disease, and may be caused by increased circulating levels of leptin. Furthermore, active alcoholic hepatitis contributes to increased resting energy expenditure as another cause of weight loss. On the other hand, resting energy expenditure is normal in stable alcoholic cirrhotics who are also typically underweight or malnourished in part due to preferential metabolism of endogenous fat stores. At the same time, the digestion of dietary fat is decreased in cirrhotic patients due to diminished secretion of bile salts and pancreatic enzymes. Micronutrient Deficiencies
The chronic exposure to excessive amounts of ethanol is associated with deficiencies of multiple nutrients, in particular thiamine, folate, pyridoxine, vitamin A, vitamin D, and zinc (Table 3). The frequency of these deficiencies is increased in the presence of alcoholic liver disease, which results in decreased numbers of hepatocytes for vitamin storage and metabolism. Many of the clinical signs of alcoholic liver disease are related to vitamin deficiencies. Thiamine
Low circulating levels of thiamine have been described in 80% of patients with alcoholic cirrhosis. Thiamine pyrophosphate is a coenzyme in the intermediary metabolism of carbohydrates, in particular for transketolases, which play a role in cardiac and neurological functions. While alcoholic beverages are essentially devoid of thiamine, acute exposure to alcohol decreases the activity of intestinal transporters required for thiamine absorption. The major neurological signs and symptoms of thiamine deficiency in alcoholics include peripheral neuropathy, partial paresis of ocular muscles, wide-based gait secondary to cerebellar lesions, cognitive defects, and severe memory loss. The presence of peripheral neuropathy is sometimes referred to as ‘dry beriberi,’ while the other symptoms constitute the Wernicke-Korsakoff syndrome. Whereas abnormal eye movements can be treated acutely by thiamine injections, the other signs are often permanent and contribute to the dementia that often afflicts alcoholics after years of drinking. ‘Wet beriberi’ refers to the high-output cardiac failure that can also occur in thiamine-deficient alcoholics, and is responsive to thiamine therapy in addition to conventional treatment. Since endogenous thiamine is used during carbohydrate metabolism, acute cardiac failure can be precipitated by the administration of intravenous glucose to malnourished and marginally thiamine-deficient patients by depletion of remaining thiamine stores. This process can be prevented by the addition of soluble vitamins including thiamine to malnourished chronic alcoholic patients who are undergoing treatment for medical emergencies.
Table 3 Micronutrient deficiencies in chronic alcoholic patients
Deficiency | Cause | Effect
Thiamine | Poor diet; intestinal malabsorption | Peripheral neuropathy; Wernicke-Korsakoff syndrome; high-output heart failure
Folate | Poor diet; intestinal malabsorption; decreased liver storage; increased urine excretion | Megaloblastic anemia; hyperhomocysteinemia; neural tube defects; altered cognition
Pyridoxine (vitamin B6) | Poor diet; displacement from circulating albumin promotes urine excretion | Peripheral neuropathy; sideroblastic anemia
Vitamin A | Malabsorption; increased biliary secretion | Night blindness; may promote development of alcoholic liver disease
Vitamin D | Malabsorption; decreased sun exposure | Calcium deficiency; metabolic bone disease
Zinc | Poor diet; increased urine excretion | Night blindness; decreased taste; decreased immune function
Iron | Gastrointestinal bleeding | Anemia
Folate
Folates are polyglutamylated in their dietary forms and circulate in the methylated and reduced monoglutamate form. Folates function in DNA synthesis and cell turnover, and play a central role in methionine metabolism as substrate for the enzyme methionine synthase in the conversion of homocysteine to methionine. While originally recognized as a cause of megaloblastic anemia, the expanding consequences of folate deficiency are related to elevated circulating homocysteine and include increased risk for neural tube defects and other congenital abnormalities in newborns and altered cognition in the elderly. Prior to folate fortification in the US, the incidence of low serum folate levels in chronic alcoholics was about 80%. Megaloblastic anemia, due to the negative effects of folate deficiency on DNA synthesis, has been described in about one-third of patients with alcoholic liver disease. Excessive alcohol
use is associated with reversible hyperhomocysteinemia in chronic alcoholics because of the inhibitory effect of alcohol or its metabolite acetaldehyde on methionine synthase. Furthermore, folate deficiency may play a role in the pathogenesis of alcoholic liver disease by exacerbating abnormalities in the metabolism of S-adenosylmethionine. The causes of folate deficiency in chronic alcoholism are multiple. With the exception of beer, all alcoholic beverages are devoid of folate, and the typical diet of the chronic alcoholic does not include its fresh vegetable sources. Chronic alcoholism is associated with intestinal folate malabsorption, decreased liver folate uptake, and accelerated folate excretion in the urine. In addition, alcoholic liver disease results in decreased liver stores of folate, so the duration of time for development of folate deficiency with marginal diet is shortened. Pyridoxine Deficiency
Pyridoxine (vitamin B6) is required for transamination reactions, including the elimination of homocysteine. Pyridoxine deficiency in chronic alcoholism is caused by poor diet, whereas displacement of pyridoxal phosphate from circulating albumin by the alcohol metabolite acetaldehyde increases its urinary excretion. Low serum levels of pyridoxal phosphate are common in chronic alcoholics, and pyridoxine deficiency is manifest by peripheral neuropathy and sideroblastic anemia. In alcoholic hepatitis, the serum level of alanine transaminase (ALT) is disproportionately low compared to aspartate
transaminase (AST), due to the requirement of pyridoxine for ALT activity. Vitamin B12
The incidence of vitamin B12 deficiency in chronic alcoholism is undefined, since serum levels are often normal or increased due to the presence of B12 analogs in alcoholic liver disease. Nevertheless, the intestinal absorption of vitamin B12 is decreased in chronic alcoholics due to defective uptake at the ileum. Presumed low levels of vitamin B12 in the liver may contribute to abnormal hepatic methionine metabolism with elevated serum homocysteine, since this vitamin is a cofactor for methionine synthase. Vitamin A
Although serum levels of vitamin A are usually normal in chronic alcoholics, liver retinoids are progressively lowered through the stages of alcoholic liver disease. Retinoids may play a central role in hepatic function, where vitamin A is stored as retinyl esters in fat-storing transitional Ito cells. The process of transformation of Ito cells to collagen-producing, hepatic stellate cells is associated with depletion of retinyl esters, which may be implicated in the development of alcoholic liver disease. The causes of vitamin A deficiency in alcoholic liver disease include malabsorption, which is due to decreased secretion of bile and pancreatic enzymes necessary for the digestion of dietary retinyl esters and their incorporation into water-soluble micelles prior to intestinal transport. In addition, the transport of retinol is impaired due to decreased hepatic production of retinol-binding protein. Thirdly, the metabolism of alcohol induces microsomal enzymes that promote the production of polar retinol metabolites, which are more easily excreted in the bile. The signs of vitamin A deficiency include night blindness with increased risk of automobile accidents and increased risk of esophageal cancer due to abnormal squamous cell cycling. Conversely, patients with alcoholic liver disease are more susceptible to vitamin A hepatotoxicity so that supplemental doses should be used with caution.
Vitamin D and Calcium
Chronic alcoholic patients are at increased risk for metabolic bone disease due to low vitamin D and hence decreased absorption of calcium. Alcoholic liver disease increases the likelihood of low circulating levels of 25-hydroxy vitamin D because of decreased excretion of bile required for absorption of this fat-soluble vitamin, poor diet, and often decreased sun exposure. Calcium deficiency results from low levels of vitamin D that are required to regulate its absorption, and also because the fat malabsorption that often accompanies alcoholic liver disease results in increased binding of calcium to unabsorbed intestinal fatty acids.
Zinc
Zinc is a cofactor for many enzymatic reactions, including retinol dehydrogenase, is stored in the pancreas, and circulates in the blood bound mainly to albumin. Chronic alcoholic patients are frequently zinc deficient because of poor diet, deficiency of pancreatic enzymes, and increased urine excretion due to low zinc-binding albumin in the circulation. The consequences of zinc deficiency include night blindness from decreased production of retinal, decreased taste, and hypogonadism, which may result in lowered testosterone levels and increased risk of osteoporosis in men. Since zinc is required for cellular immunity, its deficiency may contribute to increased infection risk in alcoholic patients.
Iron
Chronic alcoholic patients are often iron deficient because of increased frequency of gastrointestinal bleeding, typically due to alcoholic gastritis or esophageal tears from frequent retching and vomiting, or from rupture of esophageal varices in patients with cirrhosis and portal hypertension. The major consequence of iron deficiency is anemia, which may be compounded by the concurrent effects of folate and pyridoxine deficiencies. Conversely, increased exposure to iron, e.g., from cooking in iron pots, increases the likelihood and severity of alcoholic liver disease, since the presence of iron in the liver promotes oxidative liver damage during the metabolism of alcohol. See also: Ascorbic Acid: Deficiency States. Calcium. Cancer: Epidemiology and Associations Between Diet and Cancer. Folic Acid. Iron. Liver Disorders. Thiamin: Physiology. Vitamin A: Biochemistry and Physiological Role. Vitamin B6. Vitamin E: Metabolism and Requirements. Zinc: Physiology.
Further Reading Halsted CH (2004) Nutrition and alcoholic liver disease. Seminars in Liver Diseases 24: 289–304. Halsted CH (1995) Alcohol and folate interactions: clinical implications. In: Bailey LB (ed.) Folate in Health and Disease, pp. 313–327. New York: M. Decker, Inc.
Klatsky AL (2002) Alcohol and cardiovascular diseases: a historical overview. Annals of the New York Academy of Science 957: 7–15. Lieber CS (1992) Medical and Nutritional Complications of Alcoholism: Mechanisms and Management. New York and London: Plenum Medical Book Company. Lieber CS (2000) Alcohol: Its metabolism and interaction with nutrients. Annual Review of Nutrition 20: 395–430. Lieber CS (2004) New concepts of the pathogenesis of alcoholic liver disease lead to novel treatments. Current Gastroenterology Reports 6: 60–65. McClain CJ, Hill DB, Song Z, Chawla R, Watson WH, Chen T, and Barve S (2002) S-Adenosylmethionine, cytokines, and alcoholic liver disease. Alcohol 27: 185–192.
Mendenhall C, Roselle GA, Gartside P, and Moritz T (1995) Relationship of protein calorie malnutrition to alcoholic liver disease: a reexamination of data from two Veterans Administration Cooperative Studies. Alcoholism: Clinical and Experimental Research 19: 635–641. Mezey E (1991) Interaction between alcohol and nutrition in the pathogenesis of alcoholic liver disease. Seminars in Liver Disease 11: 340–348. Nanji A (1993) Role of eicosanoids in experimental alcoholic liver disease. Alcohol 10: 443–446. Secretary of Health and Human Services (2000) Tenth Special Report to the U.S.Congress on Alcohol and Health. US Department of Health and Human Services, National Institute of Alcohol Abuse and Alcoholism.
ALUMINUM N D Priest, Middlesex University, London, UK © 2005 Elsevier Ltd. All rights reserved.
Occurrence in Food and the Environment Properties and Natural Occurrence
Aluminum was discovered in 1825 by the Danish chemist Oersted. It is a soft, ductile, malleable, silvery metal. Its atomic number is 13, and it has one stable isotope, 27Al. Aluminum belongs to group 3a of the periodic table, along with boron, indium, gallium, and thallium. It most commonly forms trivalent ionic (Al3+) compounds, but it has some covalent characteristics. Aluminum is the most common metal in the earth’s crust and is the third most common element. It is too reactive to occur in nature as the free metal. Aluminum occurs in natural systems as the trivalent ion and in these it has no oxidation-reduction chemistry. In aqueous solution, the chemistry is complicated by the formation of several pH-dependent complex ions. These ions (AlOH2+, Al(OH)2+, and Al(OH)4−) compete with Al3+ and Al(OH)3 within aquatic systems. Aluminum is minimally soluble in water at approximately pH 6, when the Al(OH)2+ ion dominates, but solubility increases at lower and higher pH values. At pH 7 and higher, the most important ion is Al(OH)4−, whereas at low pH values Al3+ dominates. In contrast to its abundance in the earth’s crust, most natural waters contain very little dissolved aluminum, although concentrations greater than 1 mg l−1 may occur in some waters. Aluminum concentrations in tap water should not exceed 200 µg l−1, a guideline specified by the World Health Organization (WHO) on esthetic grounds. Air concentrations of aluminum range from less than 1 µg m−3 in rural environments to as high as 10 µg m−3 in urban, industrialized areas. The higher levels in the latter result from the dust-creating activities of urban man. Nonfood Uses
Aluminum compounds are widely utilized by industry. They are used in the paper industry, for water purification, in the dye industry, in missile fuels, in paints and pigments, in the textile industry, as a catalyst in oil refining, in the glass industry, and as components of cosmetic and pharmaceutical preparations. Of these, the uses within the cosmetic/ pharmaceutical industry are of particular significance since they provide the most likely sources of aluminum uptake by the body. The following are major cosmetic/pharmaceutical uses of aluminum compounds:
Aluminum hydroxide as an antacid, particularly for patients suffering from peptic and duodenal ulcers
Aluminum hydroxide as an effective, nonabsorbed phosphate binder for patients with longstanding kidney failure
As a component of buffered aspirin
Aluminum hydroxide and monostearates as components of some vaccines/injection solutions
Aluminum chloride, aluminum zirconium glycine complex, and aluminum chlorohydrate as the active ingredients of antiperspirants
Many of these applications are under review and their use is discouraged where alternatives of equal efficacy are available and where the potential for high aluminum uptakes exists. For example, both calcium carbonate and lanthanum sulfate are possible alternatives to the long-term use of aluminum hydroxide as a phosphate binder. Food Uses of Aluminum Compounds
Aluminum compounds that may be employed as food additives are listed in Table 1. Although most are present in foods as trace components, others may be present in significant quantities. For example, aluminum-based baking powders, employing sodium aluminum phosphate (SALP), may contain more than 10 mg g−1 of aluminum, and bread or cake made with these may contain 5–15 mg of the element per slice. American processed cheese may contain as much as 50 mg of aluminum per slice due to the addition of Kasel, an emulsifying agent. Pickled cucumbers may contain 10 mg of aluminum per fruit when alum has been employed as a firming agent. Aluminum anticaking agents may also be present in significant quantities in common table salt.
Table 1 Permitted aluminum-containing food additives and uses
Compound | Use
Aluminum | Metallic color for surface treatment
Aluminum ammonium sulfate (ammonium alum) | Acidic compound used as a neutralizing agent and as a buffer
Aluminum potassium sulfate (potassium alum) | Acidic compound used as a neutralizing agent, a buffer, and a firming agent
Aluminum sodium sulfate (soda/sodium alum) | Buffer, neutralizing agent, and firming agent
Aluminum sulfate (alum) | Firming agent in pickling
Aluminum calcium silicate | Anticaking agent for powders
Aluminum sodium silicate | Anticaking agent for powders
Sodium calcium aluminosilicate | Anticaking agent for powders
Kaolin (contains aluminum oxide) | Anticaking agent for powders
Sodium aluminum phosphate (acidic), SALP | Acid, raising (leavening) agent for flour
Sodium aluminum phosphate (basic), Kasel | Emulsifying salt
Natural Aluminum in Food
Even though concentrations of aluminum in soil are high (3–10%), most food plants contain little aluminum. Reports describe diverse levels in different foods and reported values vary for similar foods. Much of this variation results from either the inadequate removal of soil and/or contamination of foods with soil prior to analysis or the use of poor analytical techniques. A selection of results for plant foods is given in Table 2. This shows that most uncooked plant foods contain
Equations for estimating basal metabolic rate (kcal/day and MJ/day) from body weight (W) in females, by age group
Age (years) | kcal/day | MJ/day
0–3 | 61.0 W − 51 | 0.255 W − 0.214
3–10 | 22.5 W + 499 | 0.0941 W + 2.09
10–18 | 12.2 W + 746 | 0.0510 W + 3.12
18–30 | 14.7 W + 496 | 0.0615 W + 2.08
30–60 | 8.7 W + 829 | 0.0364 W + 3.47
>60 | 10.5 W + 596 | 0.0439 W + 2.49
W, body weight expressed in kilograms; MJ, megajoules. (Data from WHO (1986) Energy and Protein Requirements. Report of a Joint FAO/WHO/UNU Expert Consultation. Technical Report Series 724. Geneva: World Health Organization.)
Thermic Effect of Food or Postprandial Thermogenesis
The energy expenditure increases significantly after a meal. The thermic effect of food is mainly due to the energy cost of nutrient absorption and storage. The total thermic effect of food over 24 h represents 10% of the total energy expenditure in sedentary subjects. The thermic effect of nutrients mainly depends on the energy costs of processing and/or storing the nutrient. Expressed in per cent of the energy content of the nutrient, values of 8%, 2%, 20–30%, and 22% have been reported for glucose, fat, protein, and ethanol, respectively. Glucose-induced thermogenesis mainly results from the cost of glycogen synthesis and substrate cycling. Glucose storage as glycogen requires 2 mol ATP/mol. In comparison with the 38 mol ATP produced on complete oxidation of glucose, the energy cost of glucose storage as glycogen corresponds to
Table 3 Determinants of resting (basal) metabolic rate
Body size Body composition (lean vs. obese) Gender Age Physiological status (growth, pregnancy, and lactation) Genetic make-up Hormonal status (e.g., Follicular ve luteal phase) – Temperature (body internal and environment) – Pharmacological agents (e.g., nicotine and caffeine) – Disease (fever, tumors, burns, etc.)
120 ENERGY/Balance
Energy Expenditure Due to Physical Activity
The energy spent on physical activity depends on the type and intensity of the physical activity and on the time spent in different activities. Physical activity is often considered to be synonymous with ‘muscular work’, which has a strict definition in physics (force distance) when external work is performed in the environment. During muscular work (muscle contraction), the muscle produces 3–4 times more heat than mechanical energy, so that useful work costs more than muscle work. There is a wide variation in the energy cost of any activity both within and between individuals. The latter variation is due to differences in body size and in the speed and dexterity with which an activity is performed. In order to adjust for differences in body size, the energy cost of physical activities are expressed as multiples of BMR. These generally range from 1 to 5 for most activities, but can reach values between 10 and 14 during intense exercise. In terms of daily energy expenditure, physical activity accounts for 15–40% of total energy expenditure but it can represent up to 70% of daily energy expenditure in an individual involved in heavy manual work or
Table 4 Exogenous and endogenous factors influencing the three components of energy expenditure

Component | Endogenous factors | Exogenous factors
Basal metabolic rate | Fat-free mass; thyroid hormones | -
Thermogenesis | Protein turnover; nutritional status; sympathetic nervous system activity; insulin resistance (obesity) | Macronutrient intake (+ alcohol); cold exposure; stress; thermogenic stimuli (coffee, tobacco); thermogenic drugs
Physical activity | 'Fidgeting'; muscular mass; work efficiency; fitness level (VO2max) | Duration, intensity, and frequency of physical activity
For most people in industrialized societies, however, the contribution of physical activity to daily energy expenditure is relatively small. The numerous factors influencing the three components of energy expenditure are outlined in Table 4. The effect of body weight in average women (60 kg) on energy expenditure is illustrated in Figure 5. The relationship is slightly curvilinear because of differences in body composition in terms of leanness and fatness. Resting metabolic rate is shown as a baseline value. Just as described above for a specific activity, it has been customary to express total energy expenditure relative to resting metabolic rate.
The energy cost of glucose storage as glycogen therefore corresponds to about 5% (2/38) of the energy content of the glucose stored. Cycling of glucose to glucose-6-phosphate and back to glucose, to fructose-1,6-diphosphate and back to glucose-6-phosphate, or to lactate and back to glucose occurs at variable rates and is an energy-requiring process that may increase the thermic effect of carbohydrates. The thermic effect of dietary fat is very small; an increase of 2% of its energy content has been described during infusion of an emulsion of triglyceride. This slight increase in energy expenditure is explained by the ATP consumption in the process of free fatty acid reesterification to triglyceride. As a consequence, the dietary energy of fat is used very efficiently. The thermic effect of proteins is the highest of all nutrients (20–30% of the energy content of proteins). Ingested proteins are degraded in the gut into amino acids. After absorption, amino acids are deaminated, their amino group transferred to urea, and their carbon skeleton converted to glucose. These biochemical processes require the consumption of energy amounting to 25% of the energy content of amino acids. The second pathway of amino acid metabolism is protein synthesis. The energy expended for the synthesis of the peptide bonds also represents 25% of the energy content of amino acids. Therefore, irrespective of their metabolic pathway, the thermogenesis induced after absorption of amino acids represents 25% of their energy content.
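As a small illustrative calculation (not from the original text), the nutrient-specific thermic costs quoted above can be applied to a meal; the meal composition below is hypothetical, and protein is taken at 25% within the quoted 20–30% range.

# Thermic costs from the text: carbohydrate 8%, fat 2%, protein ~25%, ethanol 22%.
THERMIC_COST = {"carbohydrate": 0.08, "fat": 0.02, "protein": 0.25, "ethanol": 0.22}

def thermic_effect(meal_kcal: dict) -> float:
    """Return the estimated thermic effect (kcal) of a meal given its energy by nutrient."""
    return sum(kcal * THERMIC_COST[nutrient] for nutrient, kcal in meal_kcal.items())

meal = {"carbohydrate": 400, "fat": 270, "protein": 120, "ethanol": 0}  # kcal, invented
print(round(thermic_effect(meal)))  # about 67 kcal, roughly 8-9% of this 790-kcal meal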
Figure 5 Effect of body weight on total energy expenditure at two levels of physical activity in young women. A physical activity level (PAL) of 1.2 represents minimal physical activity compatible with health, whereas a value of 1.6 represents a ‘medium’ level of physical activity.
Total energy expenditure (TEE) is expressed relative to RMR (TEE/RMR or TEE/BMR) to offset the large variation in RMR among subjects of different body weight and body composition. This quotient is called the physical activity level (PAL) and reflects multiples of RMR. A PAL of 1.5 indicates that TEE is 50% greater than RMR over 24 h.
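A minimal sketch of this bookkeeping (editorial illustration only; the RMR value is invented):

def physical_activity_level(tee_kcal: float, rmr_kcal: float) -> float:
    """PAL = TEE / RMR over 24 h."""
    return tee_kcal / rmr_kcal

rmr = 1400.0    # kcal/day, e.g., from a prediction equation
pal = 1.6       # 'medium' activity, as in Figure 5
tee = pal * rmr # 2240 kcal/day
print(tee, physical_activity_level(tee, rmr))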
Macronutrient Balance, Energy Balance, and Storage

Since macronutrients (carbohydrate, fat, protein, and alcohol) are the sources of energy, it is logical to consider energy balance and macronutrient balance together as two sides of the same coin. There is a direct relationship between energy balance and macronutrient balance, and the sum of the individual substrate balances (expressed as energy) must be equivalent to the overall energy balance. Thus:

carbohydrate balance = exogenous carbohydrate − carbohydrate oxidation
protein balance = exogenous protein − protein oxidation
lipid balance = exogenous lipid − lipid oxidation

It follows that the sum of the substrate balances equals the energy balance. Fat balance is closely related to energy balance (Figure 6).

Indirect calorimetry also allows computation of the nutrient oxidation rates in the whole body. An index of protein oxidation is obtained from the total amount of nitrogen excreted in the urine during the test period. One approach to calculating the nutrient oxidation rates is based on the oxygen consumption and CO2 production due to the oxidation of the three nutrients carbohydrate, fat, and protein, respectively. In a subject oxidizing c grams per min of carbohydrate (as glucose) and f grams per min of fat, and excreting n grams per min of urinary nitrogen, the following equations can be used:

VO2 = 0.746c + 2.02f + 6.31n   [4]
VCO2 = 0.746c + 1.43f + 5.27n   [5]

Solving equations 4 and 5 for the unknowns c and f gives:

c = 4.59 VCO2 − 3.25 VO2 − 3.68n   [6]
f = 1.69 VO2 − 1.69 VCO2 − 1.72n   [7]

Because 1 g of urinary nitrogen arises from approximately 6.25 g of protein, the protein oxidation rate (p, in grams per min) is given by the equation

p = 6.25n   [8]
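As a purely illustrative aside (not part of the original article), equations [6]–[8] can be applied directly to measured gas exchange; the sketch below assumes VO2 and VCO2 are expressed in litres per minute and uses invented example values.

def oxidation_rates(vo2: float, vco2: float, n: float) -> dict:
    """Oxidation rates (g/min) from gas exchange (L/min) and urinary nitrogen (g/min)."""
    carbohydrate = 4.59 * vco2 - 3.25 * vo2 - 3.68 * n   # eq. [6]
    fat = 1.69 * vo2 - 1.69 * vco2 - 1.72 * n            # eq. [7]
    protein = 6.25 * n                                   # eq. [8]
    return {"carbohydrate": carbohydrate, "fat": fat, "protein": protein}

# Hypothetical resting adult: VO2 = 0.25 L/min, VCO2 = 0.21 L/min, n = 0.008 g/min
# gives roughly 0.12 g/min carbohydrate, 0.05 g/min fat, 0.05 g/min protein.
print(oxidation_rates(0.25, 0.21, 0.008))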
Energy stores (consisting mainly of fat) are very large in comparison with daily food intake (2000 kcal day−1 of a mixed diet in a 60-kg nonobese woman with 25% body fat). The total energy stored is about 90 times total daily energy intake: typically, fat stores are 175 times daily fat intake, protein 133 times daily protein intake, and carbohydrate only 1.3 times daily carbohydrate intake (Figure 7).
…but are subject to effects of label sequestration over shorter periods. Sequestration refers to trapping, or fixation, of the label in tissues that utilize bicarbonate/CO2 for their metabolic functions. Shorter durations of breath-sample collection require a correction for the fraction of label that is sequestered; this is based on the assumption that similar amounts of label are sequestered in various individuals. When breath samples are collected over longer durations, the sequestration is often assumed to be negligible. Some investigators have used a bolus bicarbonate administration rather than the continuous infusion. These investigators measured the rate at which the label concentration decreases with time as a measure of CO2 turnover, and the initial concentration as a measure of the body's bicarbonate pool size. Taken together, these provided a measure of energy expenditure during a short period of constant physical activity.

Doubly Labeled Water

This is an isotope dilution technique wherein deuterium- and heavy-oxygen-labeled water (doubly labeled water, DLW) is given to individuals and timed urine samples are collected to measure the elimination rates of 2H and 18O in the urine. The 2H label from DLW mixes with the body water and is eliminated as water in the urine. Similarly, the 18O label from DLW is eliminated as water, but it is also utilized in bicarbonate synthesis and hence is also eliminated in the breath as CO2. The difference in turnover rates of 2H- and 18O-labeled water is proportional to CO2 production. Energy expenditure, oxygen consumption, water intake, and metabolic water production can be calculated using standard indirect calorimetry equations with an estimated RER (Figure 5). In practice, a measured dose of DLW is given to the subject whose energy expenditure is to be measured. Body water samples, such as blood, urine, saliva, or breath water, are collected before dosing and after equilibrium is attained. The isotopic disappearance rates of 18O and 2H, as CO2 in breath or H2O in urine, saliva, or breath water, respectively, are determined from the change in isotopic enrichment between the predose and postequilibrium samples.

Figure 5 Time course, on a log scale, of the enrichments of the stable isotopes 18-oxygen and deuterium when administered to the subject. Both tracer enrichments increase rapidly in the body water pool until they reach distribution equilibrium (2–4 h). The enrichments then start to decline as the body water turns over during metabolism. 18-Oxygen is eliminated at a faster rate because it is excreted as water and as CO2 in breath, whereas deuterium is eliminated as water only. The difference in elimination rates of these two tracers is proportional to the rate of CO2 production by the subject.

The doubly labeled water method is both simple and noninvasive. It has been validated in various animals and in humans, with the CO2 production rate showing a mean measurement error of less than 5%. Unlike the majority of other methods, the doubly labeled water method provides a measure of the average energy expended over a period of 3–21 days without restricting the subject's movement, and thus provides a better estimate of habitual energy expenditure than the other methods. The doubly labeled water method, however, does not provide any information on the pattern or intensity of any one activity during that time, only the overall average energy expenditure. The method is also expensive because of the cost of the 18O, and it requires sophisticated mass spectrometric analyses.

Summary

Indirect calorimetry is a noninvasive, reliable, and valuable tool for assessing energy expenditure and evaluating fuel utilization by the body. It has been used extensively for both scientific investigation and medical evaluation and care. Scientists from various fields have used it effectively to measure energy expenditure, establish nutrient requirements, measure physical fitness, and evaluate macronutrient utilization during exercise and rest. Clinicians have used indirect calorimetry to optimize nutritional support in metabolic disorders, as in parenterally fed patients, and to quantify the energy expenditure of mechanically ventilated patients. Indirect calorimetry is a reliable, convenient, and accurate diagnostic and prognostic tool in experimental and clinical settings. It has such universal appeal because animals and humans derive the energy for their sustenance by transforming the chemical energy of the nutrients they consume into heat through respiration, and their existence depends on their ability to balance energy intake and expenditure.

See also: Energy: Metabolism; Balance; Requirements.

Further Reading

Elia M, Fuller NJ, and Murgatroyd PR (1992) Measurement of bicarbonate turnover in humans: Applicability to estimation of energy expenditure. American Journal of Physiology 263: E676–E687.
Headley JM (2003) Indirect calorimetry. AACN Clinical Issues 14(2): 155–167.
Jequier E, Acheson K, and Schutz Y (1987) Assessment of energy expenditure and fuel utilization in man. Annual Review of Nutrition 7: 187–208.
Macfarlane DJ (2001) Automated metabolic gas analysis systems. Sports Medicine 31(12): 841–861.
Molnar JA, Cunnigham JJ, Miyatani S et al. (1986) Closed-circuit metabolic system with multiple applications. Journal of Applied Physiology 61(4): 1582–1585.
Murgatroyd PR, Shetty PS, and Prentice AM (1993) Techniques for the measurement of human energy expenditure: A practical guide. International Journal of Obesity 17: 549–568.
Peel C and Utsey C (1993) Oxygen consumption using the K2 telemetry system and a metabolic cart. Medicine and Science in Sports and Exercise 25(3): 296–400.
Schoeller DA and Webb P (1984) Five-day comparison of the doubly labeled water method with respiratory gas exchange. American Journal of Clinical Nutrition 40(1): 153–158.
Simonson DC and DeFronzo RA (1990) Indirect calorimetry: Methodological and interpretive problems. American Journal of Physiology 258: E399–E412.

Doubly Labeled Water
W A Coward, MRC Human Nutrition Research, Cambridge, UK
© 2005 Elsevier Ltd. All rights reserved.
Like methods for the measurement of energy expenditure by respiratory gas analysis, the doubly labeled water (DLW) method is indirect. The disappearance
of stable isotope tracers, given orally, is used to model water and water plus carbon dioxide turnover. Carbon dioxide production rate is then estimated by difference and energy expenditure calculated from it. In practice, this means that subjects merely drink labeled water, samples of body water (e.g., urine, saliva, or blood) are collected over a few days, and these are then passed to the laboratory for tracer analysis and calculation. The method is thus uniquely objective; it is noninvasive and nonrestrictive in that its application does not interfere with normal lifestyles and comparable results can in principle be obtained in any circumstances without subject or observer influence. Complex measurement techniques do not need to be exported to the site where the subjects are located. However, underlying the apparent simplicity are concepts and techniques that are not commonly tools of trade for many potential users of the methodology. In a complete review, these, as well as method practice and results, need to be explained.
Method Fundamentals

Stable Isotopes as Tracers
Although radioactive tracers are familiar tools, the use of tracer elements and compounds to measure metabolic processes was developed first with stable isotopes in the late 1930s by Schoenheimer and Rittenberg soon after 2H and 15N (both stable isotopes) became available. Unlike radioactive isotopes, which are largely man-made, unstable, and decay to other elements, stable isotopes do not decay and are ubiquitous. Virtually all elements exist in nature in at least two stable isotopic forms with the same numbers of electrons and protons but with differing numbers of neutrons in the nucleus. The level of a specific isotopic form in nature is called its natural abundance. For tracer experiments, an element or a simple compound containing it, enriched with one of the isotopes, is prepared by mass-dependent separation on an industrial scale. This is then incorporated into the substrate of interest for biological experiments. In the current context, 2H2O (deuterium oxide, heavy water) is readily available from the electrolysis of water. Water enriched with 18O is prepared directly by fractional distillation or from nitric oxide after its cryogenic distillation. No radioactivity is involved in the use of stable isotopes in human experiments; thus, the only effects that have to be considered in relation to risk to the subject are related to the physical properties of the isotopic labeled compound. There is inevitably some degree of isotopic discrimination in
physical and enzymatic processes, but because stable isotopes are normally present in all biological material at natural abundance levels, the relevant consideration is only by how much and for how long amounts are changed in experimental procedures. Because highly precise measurement techniques are used, it is necessary only to increase isotopic enrichments in body water from natural abundance by very small amounts. In a typical experiment, 2H enrichment might be increased from 150 to 300 parts per million (ppm) and 18O from 2000 to 2400 ppm, and a return to natural abundance levels will occur with a biological half-life of 5–7 days. There is no evidence that amounts many times larger than these have any harmful effects.

Measuring Isotopic Enrichment
Mass spectrometry is a generic name for a family of methodologies in which compounds are ionised and separated on the basis of mass:charge ratio. The method of choice for the measurement of isotopic enrichment with sufficient precision for DLW experiments is isotope ratio mass spectrometry. This technique is applicable only to relatively simple molecules. It separates ions such as [2H1H]+ and [1H1H]+ (mass 3 and 2) or [12C16O18O]+ and [12C16O16O]+ (mass 46 and 44) and measures isotopic ratios (R) relative to an international standard, such as Vienna Standard Mean Ocean Water (V-SMOW; Table 1). For the DLW method, therefore, the isotopic enrichment in water from biological samples has to be measured as hydrogen or carbon dioxide. For hydrogen isotope analysis, a variety of methods have been used for the conversion, including reduction by reaction with hot uranium or zinc, but these methods are difficult to automate. Currently favoured methods are the exchange of hydrogen in the water sample with gaseous hydrogen by equilibration in the presence of a platinum catalyst, or reduction with hot chromium. Both of these techniques are automated in commercially available equipment. For oxygen isotopes, samples are usually equilibrated with carbon dioxide, with exchange of oxygen between the water and the carbon dioxide; this procedure is also automated.

Table 1 Typical isotopic ratios and equivalent enrichments measured in DLW experiments(a)

Sample | 2H isotope ratio (ppm) | 2H enrichment (‰) | 18O isotope ratio (ppm) | 18O enrichment (‰)
V-SMOW | 155.76 | 0 | 2005.2 | 0
Background | 152.28 | −22.34 | 1995.74 | −4.72
Postdose | 342.67 | 1200 | 2305.98 | 150

(a) Enrichment = (Rsample/RV-SMOW − 1) × 10^3. V-SMOW, Vienna Standard Mean Ocean Water.
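The enrichment convention in the table footnote can be checked with a few lines of Python (an editorial illustration only, using the ratios tabulated above):

def enrichment_permil(r_sample_ppm: float, r_standard_ppm: float) -> float:
    """Enrichment (per mil, rel V-SMOW) = (Rsample/Rstandard - 1) * 1000."""
    return (r_sample_ppm / r_standard_ppm - 1.0) * 1000.0

print(round(enrichment_permil(342.67, 155.76)))      # postdose 2H: about 1200 per mil
print(round(enrichment_permil(1995.74, 2005.2), 2))  # background 18O: about -4.72 per mil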
Single Pool Kinetics
Considering only hydrogen, Figure 1 represents a subject, in water balance, with a total body water of N mol and with water (tracee) input and output rates of F mol/day containing 2H at a naturally abundant molar concentration, Cb. A fractional output, or rate constant, is defined as K = F/N. If a small quantity (D mol) of water labeled with 2H tracer is added to the pool, it will be removed from it according to the monoexponential relationship

qt − qb = D e^(−Kt)

where D is the amount of tracer given, qt is the total amount (mol) in the body pool at time t (days), and qb is the amount always present due to inflow at natural abundance. K is a fractional rate constant, sometimes expressed in terms of the biological half-life, which can be calculated as T1/2 = ln 2/K = 0.693/K. Since input and output rates are the same and the amount of tracer added is small relative to the pool size, we can write

(qt − qb)/N = (D/N) e^(−Kt)   or   Ct − Cb = (C0 − Cb) e^(−Kt)

where C0 − Cb is the increment in isotopic concentration resulting from the administration of the dose, and N can be calculated as N = D/(C0 − Cb). The foregoing equations have been written in terms of isotopic concentration (e.g., C = 2H/(2H + 1H)), but mass spectrometry measurements are in terms of ratio (e.g., R = 2H/1H); in practice, for DLW calculations, R or enrichment relative to a standard is invariably substituted for C, with no effect on results at the low levels of enrichment applied in this methodology.
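A minimal numerical sketch of these single-pool relationships (editorial illustration only; all values are invented): the pool size N follows from the dose and the initial rise in concentration, and K follows from the slope of the log of the enrichment excess against time.

import math

dose_mol = 0.5                      # D, mol of tracer given
c0_minus_cb = 0.5 / 2000            # initial rise in concentration (dose/N)
N = dose_mol / c0_minus_cb          # N = D / (C0 - Cb) = 2000 mol body water

# Two timed samples of the enrichment excess (Ct - Cb), simulated with K = 0.10/day:
t1, e1 = 1.0, c0_minus_cb * math.exp(-0.10 * 1.0)
t2, e2 = 13.0, c0_minus_cb * math.exp(-0.10 * 13.0)
K = (math.log(e1) - math.log(e2)) / (t2 - t1)   # rate constant, per day
print(N, K, math.log(2) / K)                    # pool size, K, half-life T1/2 (about 6.9 days)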
Figure 1 A simple one-compartment model of water turnover.

Principles of the Method

When Lifson first began his physiological experiments with newly available 18O in the mid-1950s, it was already well known that oral dosing with 2H2O and its dilution in body water was a way of measuring body water mass and turnover. Lifson showed that the oxygen in carbon dioxide, the waste product of energy metabolism, was in equilibrium in the body with body water:

H2O + CO2 ⇌ H2CO3

He realized, therefore, that the greater apparent turnover of body water measured with H218O in comparison with turnover measured with 2H2O (Figure 2) was a consequence of carbon dioxide production, as shown in Figure 3. Thus, there was potential for a method that would permit the measurement of total CO2 output, and hence energy expenditure, over long periods merely by isotopic analysis of samples of body fluids.

Figure 2 Exponential loss of 2H and 18O from body water. The insert shows the data on a log scale.

Figure 3 The fate of an oral bolus dose of 2H and 18O given as water (DLW).

Initially, the method was applied only to small animals because
the 18O isotope was (and still is) expensive and instrumental limitations meant that relatively large doses had to be given to achieve adequate measurement precision. However, in the 1980s human studies, which are the focus of this article, became possible, and in 1988 a basic unified methodological approach was established as a result of a meeting of the experts in the field (International Dietary Energy Consultancy Group). The publication derived from this meeting remains a valuable tool. The following are the underlying assumptions of the method:

1. Body water is a single compartment that the isotopes label and from which they are lost.
2. 2H is lost only as water.
3. 18O is lost as water and carbon dioxide.
4. Total body water and the output rates of water and carbon dioxide are constant.
5. Water and carbon dioxide loss occurs with the same enrichment as that coexisting in body water.
6. Background isotope intakes are constant.

Taking these in turn, assumption 1 is not correct. Evidence from many studies shows that the single compartments labelled by the isotopes are not the same size; the 2H space is approximately 3% larger than the 18O space. However, there is no evidence that isotope sequestration is a significant factor in human studies (assumptions 2 and 3). Water and carbon dioxide production rates are unlikely to be constant during a measurement period (assumption 4), but provided variations are random and not unidirectional during the measurement period, justifying the use of mean values for a period in any case, the method will not produce biased results. Allowing assumptions 1–4, simple equations can be formulated (values of F and N are in mol and K in day−1). FH2O is measured as

FH2O = KD ND

and the water plus carbon dioxide output (expressed in mol water equivalents) is

FH2O+CO2 = KO NO

Carbon dioxide production is then

FCO2 = (KO NO − KD ND)/2
The factor of 2 arises because 2 mol of water is equivalent to 1 mol of carbon dioxide. These simple relationships are in practice modified to correct for isotopic fractionation that, contrary to assumption 5, does occur. Where evaporative water losses occur, relatively less 2H and 18O leave the body in water vapour compared with liquid water. Fractionation factors are defined as

f1 = (2H/1H)vapour / (2H/1H)liquid = 0.941
f2 = (18O/16O)vapour / (18O/16O)liquid = 0.991
f3 = (18O/16O)CO2 / (18O/16O)H2O = 1.037

Thus, water vapour is isotopically depleted in 2H and 18O, and carbon dioxide is relatively more enriched in 18O compared with liquid water. If it is assumed that a constant proportion (x) of water losses is fractionated, the carbon dioxide production rate becomes

FCO2 = KO NO/(2f3) − KD ND (xf2 + 1 − x)/[(xf1 + 1 − x) 2f3]

This procedure is most frequently used for infants and young children, in whom values of x are assumed to be 0.15–0.20. For adults, fractionated water losses (Ff) are often defined in terms of FCO2 (Ff = 2.1 FCO2), in which case

FCO2 = (KO NO − KD ND)/[2f3 + 2.1(f2 − f1)]
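A short sketch of these carbon dioxide production equations (editorial illustration only), using the fractionation factors above and, for input, the 'typical subject' values quoted later in the text (NO = 2000 mol, ND = 2066 mol, KO = 0.12 day-1, KD = 0.10 day-1), so that FCO2 is in mol/day:

F1, F2, F3 = 0.941, 0.991, 1.037

def fco2_fraction_x(ko, no, kd, nd, x=0.15):
    """Fractionated losses taken as a fixed proportion x of water output (infants/children)."""
    return ko * no / (2 * F3) - kd * nd * (x * F2 + 1 - x) / ((x * F1 + 1 - x) * 2 * F3)

def fco2_adult(ko, no, kd, nd):
    """Fractionated losses defined as Ff = 2.1 * FCO2 (adult convention)."""
    return (ko * no - kd * nd) / (2 * F3 + 2.1 * (F2 - F1))

# Both conventions give roughly 15 mol CO2 per day for this subject.
print(fco2_fraction_x(0.12, 2000, 0.10, 2066), fco2_adult(0.12, 2000, 0.10, 2066))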
Assumption 6 relates to the requirement that a predose sample should represent the effect of normal natural abundance isotope input. In most cases, background isotopic enrichment is likely to vary only randomly during a measurement period and so the issues are about the relationship between the background sample measured, the mean background and its random variation during the experimental period, the extent to which background variations in 2H and 18 O are covariant, and the size of isotope doses and postdose enrichments in relation to these variations. In most experimental situations investigated with affordable isotopic doses, background variation contributes to the internal errors of the method and limits the extent to which better analytical precision improves results. In some circumstances (e.g., subjects moving from one place to another and use of large amounts of rehydration fluids in hospitalised patients), it is possible that a predose sample taken to represent isotopic background is not at all meaningful and the best advice may be to avoid these circumstances rather than try to correct for them.
Table 2 ‘What if’ calculations for a typical subject (NO = 2000, ND = 2066, KO = 0.12, KD = 0.10)

Fractionated water losses defined in terms of FCO2 (Ff = 2.1 FCO2), mean and an assumed CV of 10%:
−2 SD (Ff = 1.68 FCO2) | CO2 production relative to value for mean: 1.010
Mean (Ff = 2.1 FCO2) | 1
+2 SD (Ff = 2.58 FCO2) | 0.981

Assumed RQ (typical mean ± 2 SD):
−2 SD (RQ = 0.825) | Energy expenditure relative to value for mean: 1.024
Mean (RQ = 0.85) | 1
+2 SD (RQ = 0.875) | 0.978

CV, coefficient of variation; RQ, respiratory quotient.
Finally, FCO2 values have to be converted into values for energy expenditure on the basis of a fixed relationship between these quantities that depends on the metabolic fuels used, expressed as a respiratory quotient (RQ). We can write

Energy expenditure (kJ) = FCO2 (346.7/RQ + 124.3)

where FCO2 is in mol. RQ is calculated from dietary information or assumed to have a particular population value, such as 0.85.

Insertion of typical Western adult values (NO = 2000, ND = 2066, KO = 0.12, and KD = 0.10) into the relevant equations and 'what if' experimentation will allow the reader to test the effect of making changes to the assumptions and values. Table 2 provides examples that show that serious errors or bias, for groups or individuals, are unlikely unless the applied population means for assumed values are grossly incorrect or the coefficient of variation (CV) is large. Experimentation with the data, however, will also show that the magnitude of the difference between KONO and KDND is crucial. The method depends on precisely determining a relatively small difference between these two experimentally measured, larger values. This difference is approximately 20% in the example but can be much less when water turnover is high relative to carbon dioxide production (e.g., in very young infants or subjects living in the tropics). For the slopes (KO and KD), a minimum of two time points is required, sufficiently far apart in time (two or three biological half-lives) to allow good precision on the slope determination, with doses of sufficient magnitude to avoid the detrimental effects of natural abundance variations and the limitations of analytical precision, especially at the end of the measurement period. In some protocols, more than two samples are measured, and this permits error calculations based on the goodness of fit of the data.

Isotope distribution spaces are calculated from samples taken soon after dose administration (the 'plateau method') or by extrapolation of the disappearance curves to t = 0. Distribution spaces may be normalized to population-based estimates (N′D and N′O) of their relation to total body water (TBW):

TBW = (NO/1.007 + ND/1.041)/2
N′O = 1.007(TBW)    N′D = 1.041(TBW)

Figure 4 illustrates some aspects of total imprecision and the origins of the variance for a typical subject, defined in Table 3, when different dosing regimes are applied, with 18O enrichment being varied at a constant initial 2H:18O ratio of 8. The following are general considerations:

1. Naturally occurring covariance in 2H and 18O enrichment in baseline samples can be used to mitigate errors resulting from physiological variation in these values if dose sizes are suitably tailored to the slope of the variation. Optimum doses in this respect are predicted by

(2H/18O)optimal = S(2^n − 1)/(2^(pn) − 1)

where (2H/18O)optimal is the ratio of the immediate postdose minus background enrichments (rel V-SMOW) for 2H and 18O, S is the slope of background 2H enrichment on background 18O enrichment, n is the experiment duration in terms of the number of biological half-lives for the 2H isotope, and p is KO/KD.

2. Much of the deviation of the 2H and 18O data from the model for the postdose samples is covariant because it relates to inconstancy of water turnover. Errors thus tend to cancel, and this considerably reduces the potential impact of variance from this source.

3. Although the analytical errors applied in this case are not the lowest reported, they are probably typical, and it can be seen that they always account for much of the variance.

4. Errors consequent on background uncertainty become very important when amounts of dose are reduced, but in practice, cost always limits the amount of 18O that can be given. For this example, adequate precision in the total energy expenditure (TEE) measurement is predicted for 18O doses producing initial enrichments in the range of 100–150‰ rel V-SMOW.

Table 3 Typical estimates and measurement precision in a DLW experiment lasting 14 days

Parameter | Value
NO | 2000 mol
ND | 2066 mol
KO | 0.12 day−1
KD | 0.10 day−1
Proportional error in postdose 2H samples originating from variations in water turnover (SD) | 0.01
Variance in postdose 18O accounted for by variance in 2H (excluding analytical errors) | 90%
18O analytical error at baseline (SD) | 0.15‰
2H analytical error at baseline (SD) | 1.5‰
18O analytical error for enriched samples (SD) | 0.5% of value + 0.15‰
2H analytical error for enriched samples (SD) | 0.5% of value + 1.5‰
18O background variation (SD) | 0.15‰
2H background variation (SD) | 1.2‰
Variance in background 2H accounted for by variance in 18O (excluding analytical errors) | 100%
Slope of background 2H enrichment on background 18O enrichment | 8

Figure 4 Origin of errors and their size in DLW experiments. The line and right axis show the total CV at different isotope doses in a typical subject defined in Table 3. The bars and left axis indicate the proportion of the total variance derived from each source of error (background natural abundance variation, postdose biological variation, postdose analytical error, and background analytical error).
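The 'what if' experimentation invited above can be sketched in a few lines (editorial illustration only); the subject values are those of Table 2, the adult fractionation convention is used for FCO2, and RQ is varied over the range shown in Table 2.

def energy_expenditure_kj(fco2_mol, rq=0.85):
    """EE (kJ) = FCO2 * (346.7/RQ + 124.3), FCO2 in mol."""
    return fco2_mol * (346.7 / rq + 124.3)

NO, ND, KO, KD = 2000, 2066, 0.12, 0.10
F1, F2, F3 = 0.941, 0.991, 1.037
fco2 = (KO * NO - KD * ND) / (2 * F3 + 2.1 * (F2 - F1))   # adult convention, mol/day

for rq in (0.825, 0.85, 0.875):                           # mean and +/- 2 SD, as in Table 2
    ee = energy_expenditure_kj(fco2, rq)
    # Relative values come out near 1.02, 1, 0.98, in line with Table 2.
    print(rq, round(ee), round(ee / energy_expenditure_kj(fco2, 0.85), 3))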
Protocols

There are, of course, variations depending on the type of subjects to be investigated, and either
exclusively urine or saliva samples can be collected. Typically, for adult subjects, after the collection of a predose sample of urine or saliva, they are asked to drink an accurately weighed mixture of the isotopes to give the required enrichment in body water. A small sample of the dose should be retained for isotope analysis. The dose bottle is then rinsed with a further amount of water (50 ml) and this is also drunk. Most investigators fast their subjects for at least 6 h and may restrict food and water intake during the time when the isotopes are equilibrating in body water. If a plateau method is used for the determination of dilution spaces, the requirement is to collect a sample after equilibration is complete but before turnover begins to reduce enrichment. This will usually require a series of three samples collected at successive hourly intervals between 4 and 8 h. If urine samples are used, the first one should be discarded. A further two samples are collected two or three biological half-lives apart. In most adult cases, experiments will last 14 days; however, for both the timing of the plateau samples and the length of time of the study, it is advisable to establish specific times for the population under investigation. If dilution spaces are to be calculated from the intercept of isotope disappearance curves, postdose samples should begin to be collected on day 1 postdose and on subsequent days during the measurement period. Minimally, samples should be collected at the beginning and end of the measurement period (e.g., days 1, 2, 13, and 14). If a plateau method is used, samples are best collected in the presence of the investigation team, but when the intercept method is used subjects can be instructed to collect, label, and store their own samples. A few ml of urine, or saliva are sufficient for analyses, and should be collected and capped immediately to avoid evaporation and possible contamination. For long-term storage, samples should be stored frozen but may be refrigerated in the short term and need not be frozen for shipping. Experience suggests that often it is the dose administration and sample collection that cause method failures. A good technique and high precision are needed for enrichment measurements but samples can always be reanalysed. Failures consequent on poor technique in subject-related procedures cannot be rectified and can be costly, especially if they are repeated through a whole investigation. New users of the methodology are advised to test all procedures in pilot work before full-scale application in a study. Enrichment of samples is best calculated in terms of fraction of the dose given; that is,
18.02 d (ES − EP) / [T D (ED − ET)]

where E is isotopic enrichment, d is the weight (g) of dose diluted in T (g) of tap water, and D is the weight (g) of dose given. Subscripts S, P, D, and T refer to the postdose sample, predose sample, diluted dose, and tap water, respectively. The reciprocal of the plateau value is the isotope dilution space (ND or NO). The reciprocal of the value at the time-zero intercept of a plot of its log value against time provides the alternative dilution space estimate. The slope is the rate constant (KD or KO).
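A small sketch of this dose-dilution arithmetic (editorial illustration only; every number below is invented but chosen to be physiologically plausible): the quantity above has units of 1/mol, so its reciprocal is the dilution space in mol.

def dilution_space_mol(Es, Ep, Ed, Et, d, T, D):
    """Es/Ep: post- and predose sample enrichments; Ed/Et: diluted dose and tap water;
    d: g of dose diluted in T g of tap water; D: g of dose drunk by the subject."""
    per_mol = 18.02 * d * (Es - Ep) / (T * D * (Ed - Et))
    return 1.0 / per_mol

# e.g., 18O enrichments (rel V-SMOW, per mil): sample 150 vs -4.7 background,
# diluted dose 67 vs -5 tap water; 0.1 g of dose diluted in 100 g water; 75 g dose given.
print(round(dilution_space_mol(150, -4.7, 67, -5, d=0.1, T=100, D=75)))  # about 1937 mol (~35 kg water)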
Validations and Reproducibility

Comparisons between DLW and calorimetry suggest a precision of 4 or 5%, but it should be remembered that studies of this type are highly controlled and may not properly reflect the real-life situation to which the method is intended to apply. The closest useful estimates are therefore perhaps those provided by an analysis of test/retest situations in which the same subjects were measured in more or less the same physiological conditions. Figure 5 shows a compilation of such data. Apart from the labourers studied in the tropics, where the precision of the estimates may have been limited by known
high water turnover rates, the data are quite consistent, with a mean of 8%. Subtraction of a likely contribution of 4% from total measurement error suggests a within-subject variation of 7%.
Applications of DLW in Nutrition

DLW and Energy Intake
Examination of the history of DLW in man suggests that there was an expectation that much would be learned in relation to the development of obesity as an outcome of identified long-term positive energy balance. Certainly in the initial phases of its use in human studies in the late 1980s, experimental protocols were most often designed to measure as accurately as possible the differences between energy intake and energy expenditure, but the findings from these experiments invariably exposed the limitations of energy intake measurements. Probably because the DLW concepts were then somewhat alien to conventional nutrition, the notion that intake measurements were more often than not inaccurate and underestimates was not at first easily accepted, but the most recent of several reviews records a very convincing body of evidence (Figure 6). However, although these observations exposed a problem, by themselves they did little to solve it, not least because the studies were too small and the nature and degree of correlation between DLW energy expenditure measurements and intake have not always been reported.
Figure 5 Reproducibility of the DLW method. (Data from Schoeller DA and Hnilicka JM (1996) Reliability of the doubly labeled water method for the measurement of total daily energy expenditure in free-living subjects. Journal of Nutrition 126: 348S–354S.)
Figure 6 Accuracy of energy intake measurements assessed by DLW. (A) Dietary record data and (B) simultaneous use of more than one instrument. (Data from Trabulsi J and Schoeller DA (2001) Evaluation of dietary assessment instruments against doubly labeled water, a biomarker of habitual energy intake. American Journal of Physiology (Endocrinology and Metabolism) 281: E891–E899.)
The issue of detecting and correcting for bias in food and specific nutrient intake measurements remains a problem to which DLW is being applied as a biomarker of energy intake in large-scale studies.
DLW and Other Noninvasive Energy Expenditure Measurements

Although DLW can be regarded as the reference noninvasive total energy expenditure measurement, isotope cost and the need for mass spectrometric analyses will always limit it to specialist rather than widescale application. There is thus a need to validate, or at least understand the limitations of, preexisting methodologies and alternatives under development. A significant consideration is that although DLW measurements in an individual include basal metabolic rate as a component of the total expenditure, in the alternatives the focus is most often on activities and their energy cost, and basal metabolic rate is measured separately or derived from prediction equations. This means that comparisons of total energy expenditure derived from DLW and the alternatives include a component, representing approximately 70% of the total, that is not dependent on the activity measurement method. In these circumstances, it is not surprising that activity-based TEE measurements often show good correlation with DLW and on average tend to be similar, but they should be treated with caution with respect to the validity of the activity measurements. Calculation of the energy cost of activity (TEE − resting metabolic rate) provides a much more useful comparison between methods but is not always available.

DLW and Energy Requirements
The energy requirement of an individual is the intake from food that will balance expenditure when an individual has a body size and composition, and level of physical activity, consistent with long-term good health and that will allow for the maintenance of an economically necessary and socially desirable level of physical activity. In principle, these measurements could be obtained from the measurement of food intake or by factorial methods summing estimates of resting metabolic rate with the energy costs of activity. In practice, neither of these approaches is satisfactory; food
intake is generally underestimated and no single instrument for the measurement of activity is sufficiently well validated to justify its general use. However, both in the United States (Standing Committee on the Scientific Evaluation of Dietary Reference Intakes) and internationally (FAO/WHO/ UNU) the decision has been made to use DLW estimates of energy expenditure to provide the basis for the estimation of requirements. Given the relatively small number of laboratories involved in this work and its relatively short history, it is quite remarkable that sufficient data are available for this exercise. The normative US databases consist of adults (n = 407) and children (n = 525), obese adults (n = 360) and children (n = 309), and subsets for pregnant and lactating women. Regression equations derived from the data sets are used to predict requirements.
Conclusions

This article provided insight into how the DLW method works, showed how it should be used, and highlighted three areas in which it is clear that DLW has made, or at least has begun to make, a significant impact on nutrition research. The method is relatively expensive and uses scarce resources in terms of expertise, instruments, and materials. However, where the research requirement matches method capabilities, in terms of accuracy and precision, it is a uniquely effective tool.

See also: Energy: Metabolism; Balance; Requirements. Energy Expenditure: Indirect Calorimetry.
Further Reading

Ainslie P, Reilly T, and Westerterp K (2003) Estimating human energy expenditure: A review of techniques with particular reference to doubly labelled water. Sports Medicine 33: 683–698.
Black AE (2000) The sensitivity and specificity of the Goldberg cut-off for EI:BMR for identifying diet reports of poor validity. European Journal of Clinical Nutrition 54: 395–404.
Coward WA and Cole TJ (1991) The doubly labeled water method for the measurement of energy expenditure in humans: Risks and benefits. In: Whitehead RG and Prentice A (eds.) New Techniques in Nutritional Research, pp. 139–176. San Diego: Academic Press.
Food and Nutrition Board (2002) Energy. In: Dietary Reference Intakes for Energy, Carbohydrate, Fiber, Fat, Fatty Acids, Cholesterol, Protein, and Amino Acids (Macronutrients), pp. 93–206. Washington, DC: National Academies Press.
Jones PJ and Leatherdale ST (1991) Stable isotopes in clinical research: Safety reaffirmed. Clinical Science (London) 80: 277–280.
Koletzko B, Sauerwald T, and Demmelmair H (1997) Safety of stable isotope use. European Journal of Pediatrics 156(supplement 1): S12–S17.
Lifson N, Gordon GB, and McClintock R (1955) Measurement of total carbon dioxide production by means of D218O. Journal of Applied Physiology 7: 704–710.
Prentice AM (ed.) (1990) The Doubly-Labelled Water Method for Measuring Energy Expenditure. Vienna: International Atomic Energy Agency.
Schoenheimer R and Rittenberg D (1939) Studies in protein metabolism. I. General considerations in the application of isotopes to the study of protein metabolism. The normal abundance of nitrogen isotopes in amino acids. Journal of Biological Chemistry 127: 285–290.
Speakman J (1997) Doubly Labelled Water: Theory and Practice. Dordrecht, The Netherlands: Kluwer Academic.
Schoeller DA (2002) Validation of habitual energy intake. Public Health Nutrition 5: 883–888.
Schoeller DA and DeLany P (1998) Human energy balance: What have we learned from the doubly labeled water method. American Journal of Clinical Nutrition 68: 930S–979S.
Wong WW (2003) Energy utilization with doubly labelled water. In: Abrams SA and Wong WW (eds.) Stable Isotopes in Human Nutrition, pp. 85–106. Cambridge, MA: CABI.
EXERCISE

Contents
Beneficial Effects
Diet and Exercise
Beneficial Effects

C Boreham and M H Murphy, University of Ulster at Jordanstown, Jordanstown, UK
© 2005 Elsevier Ltd. All rights reserved.
This article examines the roles that physical activity, exercise, and fitness may play in the regulation of energy balance and in the etiology of major diseases such as coronary heart disease, cancer, and osteoporosis. Before proceeding, it is necessary to define the key terms of reference. ‘Physical activity’ can be defined as ‘‘any bodily movement produced by skeletal muscles that results in energy expenditure.’’ ‘Exercise’ (often used interchangeably with ‘physical activity’) is defined as ‘‘physical activity which is regular, planned, and structured with the aim of improving or maintaining one or more aspects of physical fitness.’’ ‘Physical fitness’ is ‘‘a set of outcomes or traits relating to the ability to perform physical activity.’’
Exercise and Energy Balance

Energy balance occurs when the total energy expenditure of an individual equals his or her total energy intake from the diet. If intake exceeds expenditure, the result is an increase in the storage of energy, primarily as body fat. If intake is below expenditure, body energy content or body fat decreases. In humans, energy is expended in three ways: maintaining the physiological functions of the body at rest,
often termed resting metabolic rate (RMR); ingesting food and digesting and assimilating nutrients, or the thermic effect of food (TEF); and skeletal muscular contractions involved in spontaneous physical activity or planned exercise. Of these components, the energy expenditure associated with physical activity and exercise is the factor that accounts for the greatest variability between individuals (Table 1). In addition, energy expenditure through physical activity is the only component that may be reasonably controlled by an individual, and therefore it may represent an appropriate method for altering energy balance.

Table 1 Estimated daily energy expenditure (approximate) for individuals of different age, weight, gender, and level of activity(a)

Status | Estimated daily energy expenditure
Infant, male, age 3 months, body weight 6 kg | 760 kcal (3200 kJ)
Child, male, age 4 years, body weight 17 kg | 1520 kcal (6400 kJ)
Teenager, male, age 13 years, body weight 46 kg | 2200 kcal (9200 kJ)
Sedentary female(b) | 1950 kcal (8100 kJ)
Sedentary male(c) | 2500 kcal (10 200 kJ)
Female, moderately active(b) | 2200 kcal (9200 kJ)
Male, moderately active(c) | 3000 kcal (12 500 kJ)
Female, very active(b) | 2500 kcal (10 400 kJ)
Male, very active(c) | 3200 kcal (13 300 kJ)

(a) Values are based on estimated average requirements from a report by the Committee on Medical Aspects of Food Policy (1991). Dietary reference values are for food energy and nutrients for the United Kingdom. (b) Based on female age 25 years, body weight 60 kg. (c) Based on male age 25 years, body weight 70 kg.
Physical activity is estimated to make up 5–40% of daily energy expenditure depending on the activity habits of the individual, with RMR and TEF accounting for 60–75% and 10–15%, respectively. Aside from its direct independent effect on daily energy expenditure, evidence suggests that exercise may also alter RMR, TEF, and the energy expenditure caused by spontaneous physical activity.

Energy Expenditure during Exercise
The magnitude of energy expenditure during exercise is dependent on several factors, including the mode, intensity, and duration of exercise, as well as the body mass of the individual. When determining the metabolic cost of weight-bearing physical activity, energy expenditure needs to be expressed in relation to body size, since a small person will expend less energy performing a given activity (e.g., walking up a flight of stairs) than a larger person performing the same activity. Therefore, to calculate the energy cost of a given activity it is necessary to know the energy cost in kcal (kJ) per kilogram of body weight. The term MET (metabolic equivalent) may also be used to indicate the ratio of the rate of energy expenditure during a given activity to resting metabolic rate (RMR). An example illustrates how METs are used to quantify energy expenditure during exercise. If an individual with a body mass of 70 kg expends 70 kcal (300 kJ) per hour at rest (RMR), and walking at a speed of 5.6 km per hour requires 280 kcal (1200 kJ) per hour, the energy cost of the activity is 4 METs, or four times the RMR of the individual. Since body size is a determinant of both RMR and the energy expenditure during exercise, a heavier individual will have a higher RMR but will still require four times this level of expenditure (or 4 METs) to walk at the same speed. Table 2 indicates the energy cost in METs of many popular exercise modes.

Table 2 Energy costs of popular physical activities

Activity | Intensity | METs
Walking | 6.4 km h−1 | 4
Running | 10.8 km h−1 | 11
Cycling | 20.9 km h−1 | 8
Swimming | Front crawl, moderate | 8
Tennis | Singles | 8
Aerobics | Moderate | 6

Adapted from Ainsworth BE, Haskell WL, Leon AS et al. (1993) Compendium of physical activities: Classification of energy costs of human physical activities. Medicine and Science in Sports and Exercise 25(1): 71–80.
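A small illustrative calculation (not from the original text): since 1 MET corresponds to roughly the resting rate of about 1 kcal per kg of body weight per hour (70 kcal/h for the 70-kg example above), the gross cost of an activity can be sketched as METs × body weight × duration.

def activity_energy_kcal(mets: float, weight_kg: float, hours: float) -> float:
    """Gross energy cost of an activity, assuming 1 MET is about 1 kcal/kg/h."""
    return mets * weight_kg * hours

# Walking at 6.4 km/h (4 METs, Table 2) for 30 min by a 70-kg adult:
print(activity_energy_kcal(4, 70, 0.5))   # 140 kcal, of which about 35 kcal is resting metabolism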
Energy Expenditure after Exercise
In addition to the energy expended during an exercise bout itself, several researchers have found that energy expenditure remains elevated for a period following exercise. However, conclusions regarding the magnitude and duration of this postexercise elevation in energy expenditure have been equivocal. Studies have found an increase in energy expenditure in the postexercise period varying in magnitude from 5 kcal (21 kJ) to 130 kcal (546 kJ), with some suggesting that this additional energy expenditure lasts a few minutes and others suggesting that the elevated metabolic rate persists for up to 24 h. The divergence in the findings may be accounted for by the various modes, durations, and intensities of exercise employed in the studies, as well as the methods used for measuring alterations in energy expenditure and the confounding effects of food ingestion during the recovery period. In addition, alterations in postexercise energy consumption may exhibit intraindividual variations according to the fitness level of subjects. Several mechanisms underlying this increased energy expenditure during the postexercise period have been postulated, including the energy cost of replenishing fuel stores, the cost of dissipating by-products of adenosine triphosphate (ATP) resynthesis, restoration of cellular homeostasis, and the futile cycling of energy substrates. The magnitude of this increase may be related to the intensity and duration of exercise, with longer or more strenuous activity creating a greater perturbation to homeostasis and therefore causing greater energy expenditure in restoring the body to its preexercise condition.

Effects of Exercise Training on Resting Metabolic Rate
Aside from the transient increase in energy expenditure in the period immediately following exercise, several researchers have examined the chronic effect of exercise on RMR. Although findings are far from consistent, some investigators have found that regular exercise causes a persistent augmentation in RMR. The mechanism for effect has yet to be confirmed, but it has been hypothesized that this increase may be due to the high energy turnover associated with the elevated levels of energy intake and expenditure typical of trained individuals. One beneficial effect of exercise training on resting metabolic rate is the maintenance or
increase in lean body mass. As a result of regular resistance exercise, muscle size increases (hypertrophy) or the age-related decline in muscle mass (atrophy) is reduced, contributing to an increase or maintenance of RMR.

Effects of Exercise on the Thermic Effect of Food
The TEF is largely dictated by the composition and energy content of the meal as well as an individual's body composition. However, some studies have indicated that pre- or postprandial exercise may enhance the TEF. In addition to this acute effect of exercise, regular training may alter the TEF. In males, the thermic effect of a meal is lower in highly trained compared to untrained individuals. In one study, moderate levels of fitness were associated with a greater increase in the TEF than either high or low fitness. The authors suggest that very high or very low levels of fitness may decrease the thermic effect, possibly by adaptive mechanisms such as a lower insulin or lower noradrenaline response to feeding. Interestingly, no equivalent effect has been found in women. Studies on monozygotic twins also suggest a strong genetic factor controlling whether exercise has such an effect.

Effect of Exercise on Energy Expenditure in Spontaneous Physical Activity
In addition to the energy expenditure during planned exercise, other skeletal muscle contraction associated with spontaneous physical activity (including fidgeting) incurs an energy cost. Research indicates that the quantity of energy expended in spontaneous physical activity is highly variable between individuals. Studies show that in addition to its effect on RMR, participation in a planned exercise program increases the energy expenditure of an individual during nonexercising time.
Physiological Adaptations to Exercise Training

Aside from alterations in energy balance, regular exercise brings about many physiological adaptations. The human body is remarkably plastic in response to the increased metabolic demands of exercise training (overload), with many adaptations occurring that enable the body to function more efficiently. The nature and magnitude of these changes are dependent on the volume (duration and frequency), intensity, and type of exercise performed. For this reason, the physiological adaptation to training will be classified according to the nature of the exercise undertaken.
It is important to remember two principles when considering the physiological adaptations to exercise training. First, there is a degree of intraindividual variation in response to exercise training that may be attributed in part to hereditary factors. Second, whereas exercise training will cause adaptation, the removal of this stimulus will result in a reversal of adaptation, or 'detraining.'

Adaptations to Submaximal/Endurance Exercise Training
Submaximal exercise generally refers to an intensity of exercise that requires less than an individual's maximal oxygen uptake. Submaximal exercise challenges the body to deliver and utilise an increased amount of oxygen in the resynthesis of ATP. With training, changes occur that increase the body's ability to utilize oxygen. For simplicity, the adaptations to submaximal exercise training have been grouped according to the site at which they occur.

Central adaptations

Central adaptations to regular submaximal exercise include alterations in the morphology and function of the heart and circulatory systems that allow greater delivery of oxygen to the working muscle. The pulmonary system in healthy individuals does not provide a significant limitation to exercise, and therefore little alteration in the lung volumes, respiratory rate, or pulmonary ventilation and diffusion occurs as a result of training. Modest cardiac hypertrophy characterized by an increase in left ventricular volume occurs in response to training. This adaptation allows an increase in stroke volume, leading to a reduction in heart rate at rest and during submaximal workloads and an increased cardiac output during maximal workloads. Finally, an increase in total plasma volume and an increase in the total amount of hemoglobin have been observed in response to submaximal endurance training.
Peripheral adaptations refer principally to changes in the structure and function of skeletal muscle that enhance its ability to use oxygen to produce energy aerobically. As a result of endurance training, there is an increase in blood supply to the working muscle. This is achieved by an increased capillarization in trained muscles, greater vasodilation in existing muscle capillaries, and a more effective redistribution of cardiac output to the working muscle.
An increase in the activity of aerobic enzymes and an increased mitochondrial volume density (approximately 4–8%) within trained muscle have been noted. These are coupled with increased glycogen storage within the muscle and increased fat mobilization allowing a higher rate of aerobic ATP resynthesis from free fatty acids and glucose.
High-Intensity Exercise and Strength Training High-intensity exercise requires energy utilization rates that exceed the oxidative capabilities of the muscle. Activities such as sprinting require the anaerobic resynthesis of ATP to produce and maintain high levels of muscular force and are therefore limited in duration. Strength training also relies heavily on anaerobic energy sources and requires high force production by specific muscle groups. Adaptations to High-Intensity Exercise and Strength Training
The main alterations that occur in response to regular high-intensity exercise or strength training are improvements in the structure and function of the neuromuscular system that allow more efficient production of the forces required for these activities and an enhanced ability to produce the energy required through anaerobic processes. Neuromuscular The initial improvements in performance that occur with high-intensity exercise training are largely a result of improved coordination of the nervous system. Increased nervous system activation, more efficient neuromuscular recruitment patterns, and a decrease in inhibitory reflexes allow the individual to produce greater levels of force. The maximum force a muscle can exert is largely determined by its cross-sectional area. In addition to the neural adaptations, strength training stimulates an increase in muscle size. This hypertrophy occurs preferentially in fast twitch muscle fibers and is brought about by increased protein synthesis in response to resistance training. The degree to which muscle hypertrophy occurs is dependent on many factors, including gender and body type. Although some researchers have suggested that strength training may increase the number of muscle cells (hyperplasia), the results of these studies are far from conclusive. Since both high-intensity and strength training rely largely on anaerobic processes for energy production, adaptative alterations in oxygen delivery and
utilization, such as increased capillarization or mitochondrial mass of muscle cells, are relatively minor.
Metabolic
In addition to the neuromuscular alterations that occur with high-intensity and strength training, several metabolic adaptations improve the ability of the muscle to resynthesize ATP from anaerobic sources. Intramuscular stores of the anaerobic energy intermediates, such as creatine phosphate (CP) and glycogen, increase after a period of supramaximal training. The activity of enzymes involved in the anaerobic production of energy, such as creatine kinase and myokinase, is also increased.
Studies on the Role of Exercise/Fitness in the Etiology of Coronary Heart Disease
Coronary heart disease (CHD) has a multifactorial etiology, and major ‘biological’ risk factors include elevated concentrations of blood total and low-density lipoprotein (LDL) cholesterol, reduced concentration of high-density lipoprotein (HDL) cholesterol, high blood pressure, diabetes mellitus, and obesity. In addition, ‘behavioral’ risk factors for CHD include cigarette smoking, a poor diet, and low levels of physical activity and physical fitness associated with the modern, predominantly sedentary way of living. Among these risk factors, a sedentary lifestyle is by far the most prevalent according to data from both the United States and England (Figure 1). Scientific verification of a link between an indolent lifestyle and CHD has been forthcoming during the past 40 years, with the publication of more than 100 large-scale epidemiological studies investigating the relationships between physical activity and cardiovascular health. These studies, some of which are summarized in Figure 2, have produced consistently compelling evidence that regular physical activity can protect against CHD. Pooled data and meta-analyses of the ‘better’ studies indicate that the risk of death from CHD increases about twofold in individuals who are physically inactive compared with their more active counterparts. Relationships between aerobic fitness and CHD appear to be at least as strong. For example, in a cohort of middle-aged men followed up for an average of 6.2 years, the risk of dying was approximately double in those whose exercise capacity at baseline was less than 8 METS. For both physical activity and fitness, adjustment for a wide range of other risk factors only slightly weakens these associations, suggesting independent relationships.
[Figure 1: bar chart of prevalence (%) of diabetes, hypertension, overweight, elevated serum cholesterol, a sedentary lifestyle, and smoking.]
Figure 1 Estimates of the prevalence (%) of the U.S. population with selected risk factors for coronary heart disease and the population from England. In both studies, a sedentary lifestyle was taken as ‘no physical activity’ or irregular physical activity (i.e., fewer than three times per week and/or less than 20 minutes per session). (From Killoran AJ, Fentem P, and Caspersen C (eds.) (1994) Moving On. International Perspectives on Promoting Physical Activity. London: Health Education Authority, with permission.)
A common weakness of such studies is that they often rely on a single measurement of fitness or activity at baseline, with subsequent follow-up for mortality within the cohort. With such a design, it is difficult to discount the possibility that genetic or other confounding factors are influential in the observed relationship between physical activity/fitness and mortality. A further weakness in single baseline studies is that subsequent changes in activity/fitness during the follow-up are not monitored, even
though they may affect the observed relationships due to the phenomenon of ‘regression to the mean.’ Some prospective studies have overcome these deficiencies by examining the effects of changes in physical activity and fitness on mortality. One study reported on the relationship of changes in physical activity and other lifestyle characteristics to CHD mortality in 10 269 alumni of Harvard University. Changes in lifestyle over an 11- to 15-year period were evaluated on the basis of questionnaire
[Figure 2: reduction in coronary mortality (%) across sedentary/low, moderate, and high activity or fitness levels in six studies (Morris, Shaper, Leon, Ekelund, Sandvik, Lie).]
Figure 2 Summary of the results from six studies in which fitness level was determined (three studies) or activity level assessed by questionnaire (three studies) in individual populations. Follow-up was generally between 7 and 9 years except in Sandvik's study, which had a 16-year follow-up. The ‘low level’ group for each study represented in this figure was the activity/fitness level next to the least active/fit group. The ‘high level’ represents the group that was the most active/fit for the particular study. If the study participants were grouped by quintile, the ‘moderate’ group is the average of the third and fourth quintiles. (From Killoran AJ, Fentem P, and Caspersen C (eds.) (1994) Moving On. International Perspectives on Promoting Physical Activity. London: Health Education Authority, with permission.)
information, and subsequent mortality was assessed over an 8-year period. In men who were initially sedentary but started participating in moderately vigorous sports (intensity of 4.5 METS or greater), there was a 41% reduced risk of CHD compared with those who remained sedentary. This reduction was comparable to that experienced by men who stopped smoking. The second study examined changes in physical fitness and their effects on mortality. In this study of 9777 men, two clinical examinations (including treadmill tests of aerobic fitness) were administered approximately 5 years apart, with a mean follow-up of 5.1 years after the second examination to assess mortality. Results showed that men who improved their fitness (by moving out of the least fit quintile) reduced their age-adjusted CHD mortality by 52% compared with their peers who remained unfit. Furthermore, such changes in fitness proved to be the most effective in reducing all-cause mortality when compared with changes in other health risk factors (Figure 3).
Mechanisms of Effect
Exercise appears to reduce the risk of CHD through both direct and indirect mechanisms. Regularly performed physical activity may reduce the vulnerability of the myocardium to fatal ventricular arrhythmia and reduce myocardial oxygen requirements. Aerobic training also increases coronary vascular transport capacity via structural adaptations and altered control of vascular resistance. Risk of thrombus formation
may also be reduced with regular exercise through its effects on blood clotting and fibrinolytic mechanisms. Regular endurance exercise may also improve the serum lipid profile (particularly in favor of an enhanced HDL:total cholesterol ratio) and have beneficial effects on adipose tissue lipolysis and distribution. Regular exercise may also reduce postprandial lipemia, increase glucose transport into muscle cells, and improve the elasticity of arteries.
Exercise Prescription
For protection against CHD and other diseases associated with inactivity, exercise needs to be habitual, predominantly aerobic in nature, and current. Evidence from work carried out on British civil servants suggests that to be cardioprotective, exercise should be moderately vigorous (at least 7.5 kcal min−1 (31.4 kJ min−1), or about 6 METS, equivalent to walking at approximately 3 miles per hour up a gradient of 1 in 20) and performed at least twice weekly. However, other studies have indicated that lower intensity activity is also effective as long as the total accumulated exercise energy expenditure is greater than approximately 2000 kcal week−1 (8368 kJ week−1). Thus, recommendations from the U.S. Surgeon General suggest that everyone older than the age of 2 years should accumulate 30 minutes or more of at least moderate-intensity physical activity on most (preferably all) days of the week.
[Figure 3: adjusted relative risk (0–3.0 scale) of all-cause mortality by favorable changes in risk factors; cutoffs designating high risk: smoking (any amount), BMI (27.0 kg m−2), systolic BP (140 mm Hg), cholesterol (6.2 mmol l−1; 240 mg dl−1), and fitness (least fit quintile).]
Figure 3 Relative risks (adjusted for age, family history of coronary heart disease, health status, baseline values and changes for all variables in the figure, and interval in years between examinations) of all-cause mortality by favorable changes in risk factors between first and subsequent examinations. The analyses were for men at risk on each particular variable at the first examination. Cutoff points designating high risk are given parenthetically at the bottom of the figure. The number of men at high risk (and the number of deaths) for each characteristic were as follows: body mass index (BMI), 2691 (66); systolic blood pressure (BP), 1013 (55); cholesterol, 2212 (79); cigarette smoking, 1609 (45); and physical fitness, 1015 (56). (From Blair SN, Kohl HW, Barlow CE, Paffenbarger RS, Gibbons LW, and Macera CA (1995) Changes in physical fitness and all-cause mortality. A prospective study of healthy and unhealthy men. JAMA 273: 1093–1098, with permission.)
[Figure 4: age-adjusted all-cause mortality per 10 000 person-years plotted against maximal fitness from 6 to 12+ METs (estimated maximal oxygen uptake 21–42 ml per kg per min).]
Figure 4 Age-adjusted, all-cause mortality rates per 10 000 person-years of follow-up by physical fitness categories in 3120 women and 10 224 men. Physical fitness categories are expressed as maximal metabolic equivalents (work metabolic rate/resting metabolic rate) achieved during the maximal treadmill exercise test. One metabolic equivalent equals 3.5 ml kg−1 min−1. The estimated maximal oxygen uptake for each category is also shown. (From Blair SN et al. (1989) Physical fitness and all-cause mortality. A prospective study of healthy men and women. Journal of the American Medical Association 262: 2395–2401, with permission.)
Such activity may embrace everyday tasks such as stair climbing and walking, recreational physical activities, and more formal aerobic exercise programs and sports. Intermittent or shorter bouts of activity (of at least 10 minutes duration) may be accumulated throughout the day to confer benefits similar to those of a single, continuous 30-minute bout of exercise. A consistent finding is that previous exercise that has since been abandoned confers no benefit. Desirable aerobic fitness levels have also been described for women (maximal aerobic power of approximately 9 METs [32.5 ml kg−1 min−1]) and men (10 METs [35 ml kg−1 min−1]) (Figure 4).
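To show how the activity and fitness targets above interconvert, here is a minimal sketch using the standard conventions that 1 MET equals 3.5 ml O2 kg−1 min−1 and roughly 1 kcal kg−1 h−1; the 70-kg body mass is an illustrative assumption rather than a value from the studies cited.

```python
# Interconverting METs, oxygen uptake, and energy expenditure.
# Conventions: 1 MET = 3.5 ml O2 per kg per min, roughly 1 kcal per kg per hour.
def met_to_vo2(mets: float) -> float:
    """Oxygen uptake (ml per kg per min) corresponding to a MET level."""
    return mets * 3.5

def bout_kcal(mets: float, body_mass_kg: float, minutes: float) -> float:
    """Approximate gross energy expenditure (kcal) of an exercise bout."""
    return mets * body_mass_kg * (minutes / 60.0)

print(met_to_vo2(10))            # 35.0 ml/kg/min, the 'desirable' male fitness level above
daily = bout_kcal(6, 70, 30)     # one 30-min bout at 6 METs for a 70-kg person: ~210 kcal
print(daily, daily * 7)          # ~1470 kcal/week if performed daily, approaching the
                                 # ~2000 kcal/week threshold discussed above
```

On these approximations, daily 30-minute bouts of moderate activity alone fall somewhat short of the 2000 kcal week−1 figure, which is consistent with the recommendation to add everyday tasks and recreational activity on top of structured exercise.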
Studies on the Role of Exercise/Fitness in the Etiology of Other Diseases
Obesity
Obesity is defined as an excess of adipose tissue. This condition plays a central role in the development of diabetes mellitus and confers an increased risk for CHD, high blood pressure, osteoarthritis, dyslipoproteinemia, various cancers, and all-cause mortality. The prevalence of obesity has risen dramatically in recent years, alongside an estimated decline in daily energy expenditure in the United Kingdom over the past two decades of approximately 800 kcal day−1 (3347 kJ day−1). Based on the principles of energy balance, such circumstantial evidence indicates that physical inactivity may play a central role in the development of
obesity in humans. However, confirmatory data are scarce, particularly from well-designed prospective studies. One large-scale national study in the United States evaluated the relationship of physical activity to weight gain over a 10-year follow-up of 3515 men and 5810 women. Individuals who were sedentary at both baseline and follow-up were much more likely (relative risk, 2.3 (95% confidence interval (CI), 0.9–5.8) in men and 7.1 (95% CI, 2.2–23.3) in women) to experience considerable weight gain (>13 kg) than subjects who were active at both examinations. Evidence suggests that women who gain weight (≥6 kg) over a 1-year period expend on average 212 kcal/day less in light to moderate activities than those who maintain their normal body weight. Difficulties are also encountered in interpreting results from intervention studies investigating the effects of exercise and/or diet on body weight, body composition, and resting metabolic rate (RMR; the latter being the single greatest component of total energy expenditure). Both energy intake and physical activity are notoriously difficult to quantify accurately, as is body fat status and distribution. Methodological differences between studies, a lack of control for possible confounding factors, and the fact that weight loss leads to an enhanced metabolic economy (due to reductions in RMR, the energy cost of physical activity, and the thermic effect of food (TEF)) further complicate matters. Nevertheless, exercise, particularly of the moderate-intensity type such as walking or cycling, probably helps to protect fat-free mass while promoting the loss of fat mass, but it does not appear to prevent the decline in RMR during weight loss. Similarly, long-term physical activity has minimal effects on RMR beyond its effect on lean body mass. Although studies have shown that exercise alone can reduce body weight, due to the lower total energy deficit the rate and amount of weight loss are less than can be achieved through dieting alone. Although the combination of exercise and dieting might be expected to improve weight loss, most data show only a modest increase (2 or 3 kg). When the total daily deficit is kept constant, diet, exercise, and diet plus exercise result in similar weight loss, but the inclusion of exercise generally results in greater fat loss and an increased lean tissue mass. There is evidence that the long-term maintenance of weight loss may require more regular activity (approximately double the current guideline of 30 min/day) than that required to prevent weight gain in the first place. The ideal dietary and exercise prescriptions to control body weight in the long term remain elusive.
Osteoporosis
Osteoporosis-related fractures represent a major public health concern. Once established, osteoporosis may be irreversible, emphasizing the need for primary prevention strategies based on minimizing bone loss and maximizing peak bone mass. Nearly half the variation in bone mineral density (BMD) may be attributable to nonhereditary factors. Behavioral factors of importance include diet (particularly calcium and vitamin D intakes), smoking, and the amount and type of habitual physical activity. These factors may be particularly influential during adolescence when (depending on the site) up to 90% of adult bone mineral content may be deposited, prior to the attainment of peak bone mass in the third decade of life. Several studies on the relation of physical activity to BMD have been conducted, allowing a few general conclusions to be drawn. Clearly, bone responds positively to the mechanical stresses of exercise. Regular physical activity is likely to boost peak bone mass in young women, probably slows the decline in BMD in middle-aged and older women, and may increase BMD in patients with established osteoporosis. More research is required to clarify the type and amount of exercise that is most effective for enhancing peak bone mass. Evidence favors relatively high-impact, weight-bearing exercises (such as dancing, jumping, and volleyball), particularly during the peripubertal and adolescent years. It is unclear how physical activity and other intervention strategies, such as calcium supplementation and estrogen replacement therapy, might interact to promote bone health. In addition to its osteogenic effects, regular exercise may also promote better coordination, balance, and ambulatory muscle strength, thus minimizing the risk of falling. The reported reduced risk of fracture (relative risk, 0.41 in men and 0.76 in women) in active individuals compared with sedentary ones is likely due to these combined direct and indirect effects of physical activity.
Cancer
In general, data relating to associations between physical activity and breast, endometrial, ovarian, prostate, and testicular cancers are inconclusive, although the suggestion that activity in adolescence and young adulthood may provide subsequent protection against breast cancer is worthy of further study. To date, the only clear evidence in this field comes from epidemiological studies relating a reduced risk of cancer of the colon to both occupational and leisure time physical activity. One such study investigated 17 148 Harvard alumni, who
were assessed for physical activity at two time points, 10–15 years apart. Those who were highly active (exercise energy expenditure of at least 2500 kcal (10 460 kJ) week−1) at both assessments displayed half the risk of developing colon cancer compared with those who were relatively inactive (less than 1000 kcal (4184 kJ) week−1). Interestingly, higher levels of physical activity at one (but not both) assessment were not associated with lower cancer risk, suggesting that consistently higher levels of activity may be necessary to provide a measure of protection. Possible biological mechanisms for this association include exercise-induced alteration of local prostaglandin synthesis (particularly prostaglandin F2-alpha) and a decreased gastrointestinal transit time, the latter possibly decreasing the duration of contact between the colon mucosa and potential carcinogens.
See also: Bone. Cancer: Epidemiology and Associations Between Diet and Cancer. Coronary Heart Disease: Prevention. Energy: Metabolism; Balance. Energy Expenditure: Indirect Calorimetry. Exercise: Diet and Exercise. Obesity: Definition, Etiology and Assessment; Treatment. Osteoporosis.
Further Reading
Ainsworth BE, Haskell WL, Leon AS et al. (1993) Compendium of physical activities: Classification of energy costs of human physical activities. Medicine and Science in Sports and Exercise 25(1): 71–80.
Booth FW, Gordon SE, Carlson CJ et al. (2000) Waging war on modern chronic disease: Primary prevention through exercise biology. Journal of Applied Physiology 88: 774–787.
Bouchard C, Shephard RJ, and Stephens T (eds.) (1994) Physical Activity, Fitness and Health. International Proceedings and Consensus Statement. Champaign, IL, USA: Human Kinetics.
Goya Wannamethee S and Shaper AG (2001) Physical activity in the prevention of cardiovascular disease. An epidemiological perspective. Sports Medicine 31(2): 101–114.
McKenna J and Riddoch C (eds.) (2003) Perspectives on Health and Exercise. Basingstoke, UK: Palgrave Macmillan.
Melanson EL, Sharp TA, Seagle HM et al. (2002) Effect of exercise intensity on 24-h energy expenditure and nutrient oxidation. Journal of Applied Physiology 92: 1045–1052.
Poehlman ET (1989) A review: Exercise and its influence on resting energy metabolism in man. Medicine and Science in Sports and Exercise 21(s): 510–525.
Poehlman ET, Denino WK, Beckett T et al. (2002) Effects of endurance and resistance training on total daily energy expenditure in young women: A controlled randomized trial. Journal of Clinical Endocrinology and Metabolism 87: 1004–1009.
Poehlman ET, Melby CL, and Goran MI (1991) The impact of exercise and diet restriction on daily energy expenditure. Sports Medicine 11(2): 78–101.
U.S. Department of Health and Human Services (1996) Physical Activity and Health: A Report of the Surgeon General. Atlanta, GA: U.S. Department of Health and Human Services, Centers for Disease Control and Prevention, National Center for Chronic Disease Prevention and Health Promotion.
Diet and Exercise
R J Maughan, Loughborough University, Loughborough, UK
© 2005 Elsevier Ltd. All rights reserved.
Introduction
At an International Consensus Conference held at the offices of the International Olympic Committee in 1991, a small group of experts agreed a consensus statement that began by saying that "Diet significantly influences exercise performance." That is a bold and unambiguous statement, leaving little room for doubt. However, the statement went on to add various qualifications to this opening statement. These largely reflect the uncertainties in our current knowledge, but also reflect the many different issues that arise in considering the interactions between diet and exercise. Exercise may take many forms and may be undertaken for many different reasons: as the emphasis on physically demanding occupations has decreased in most parts of the world, so participation in recreational exercise and sport has increased. Even though physical activity programs have been heavily promoted in most developed countries, they rarely involve more than about 30% of the population, leaving a major part of the population who seldom or never engage in any form of strenuous activity. In considering the interactions between diet and exercise, two main issues must be considered, each of which gives rise to many subordinate questions. The first question is how altered levels of physical activity influence the body's requirement for energy and nutrients: this has implications for body composition (including the body content of fat, muscle, and bone), for the hormonal environment and the regulation of substrate metabolism, and for various disease states that are affected by body fatness, nutrient intake, and other related factors. The second question is how nutritional status influences the responses to and the performance of exercise. This has implications for those engaged in physically demanding occupations, and also for those who take part in sport on a recreational or competitive basis.
Influence of Physical Activity on Energy Balance
In the simple locomotor activities that involve walking, running, or cycling, the energy cost of
activity is readily determined and can be shown to be a function of speed: where body mass is supported, as in running, or where it must be moved against gravity, as in cycling uphill, body mass is also an important factor in determining the energy cost. For walking, running, and cycling at low speeds, there is a linear relationship between velocity and energy cost if the energy cost is expressed relative to body mass. Across a range of speeds, the cost of locomotion is approximately 1 kcal kg−1 km−1. At these speeds, therefore, energy expenditure depends on the distance covered and the body mass rather than on walking speed itself. In purposeful walking, where the aim is to get from one place to another, the distance is set, but where walking is part of a physical activity program, activity is more often measured by time rather than distance, so walking speed becomes an important factor in determining the energy cost. At higher speeds, the relationship between energy expenditure and speed becomes curvilinear and the energy cost increases disproportionately. It is often recommended that 20–30 min of moderate-intensity exercise three times per week is sufficient to confer some protection against cardiovascular disease: if this exercise takes the form of jogging, aerobics, or similar activities, the energy expenditure will be about 4 MJ (1000 kcal) per week for the average 70-kg individual, or an average of only about 150 kcal day−1 (Table 1). However, even a small daily contribution from exercise to total daily energy expenditure will have a cumulative effect on a long-term basis. For obese individuals, whose exercise capacity is low, the role of physical activity in raising energy expenditure is necessarily limited, but this effect is offset to some degree by the increased energy cost of weight-bearing activity.
Table 1 Estimated average energy cost of physical activity, expressed as METS (multiples of BMR) and in kJ per kg body mass per h

Activity                                    MET       kJ kg−1 h−1
Bicycling, leisure                          4.0       17
Bicycling, racing, 30 km h−1, no drafting   16.0      67
Dancing, ballroom                           3.0–5.5   13–23
Forestry, fast chopping with axe            17        71
Soccer, casual                              7.0       29
Walking, slow                               3.5       15
Walking, brisk uphill                       5.0–7.0   21–29
Writing, desk work                          1.8       7.5
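As a rough illustration of the locomotion figures above, the following sketch applies the approximately 1 kcal kg−1 km−1 approximation; the 5-km distance is an illustrative assumption, and the 70-kg body mass matches the example already used in the text.

```python
# Approximate energy cost of walking or running, using ~1 kcal per kg per km.
# The body mass and distance below are illustrative assumptions.
KCAL_PER_KG_KM = 1.0

def locomotion_kcal(body_mass_kg: float, distance_km: float) -> float:
    """Approximate gross energy cost (kcal) of covering a distance on foot."""
    return KCAL_PER_KG_KM * body_mass_kg * distance_km

session = locomotion_kcal(70, 5)   # ~350 kcal for a 5-km jog
print(session, session * 3)        # three sessions per week -> ~1050 kcal (~4.4 MJ),
                                   # close to the ~4 MJ (1000 kcal) per week quoted above
```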
Very high levels of daily energy expenditure are now rarely encountered in occupational tasks. The average daily metabolic rate of lumberjacks has been reported to be about four times the basal metabolic rate, and similar values have been reported for other very demanding occupations, suggesting that this may be close to the upper limit of physical exercise that can be sustained on a long-term basis. In the short term, sporting activities can involve much higher levels of energy output: the world record for distance run in 24 h is 286 km, which requires an energy expenditure of about 80 MJ (20 000 kcal). Such an effort, however, results in considerable depletion of the body's energy reserves and must be followed by a period of recovery. For athletes, very high levels of daily energy expenditure are more often a feature of training than of competition, with very high levels of energy intake reported in many sports. Measurements on runners in a steady state with regard to training load and body mass show good relationships between energy intake and distance run. Some competitive events require high levels of activity to be sustained for many consecutive days, the most obvious examples being the multi-stage cycle tours, of which the most famous is the Tour de France. Measurements on some of the competitors have shown that they manage to maintain body weight in spite of a mean daily energy expenditure of 32 MJ (8000 kcal) sustained over a 3-week period. It was suggested that those cyclists who were unable to meet the daily energy requirement were unable to complete the race. Measurements of oxygen uptake, heart rate, and other variables made after exercise show that the metabolic rate may remain elevated for at least 12 h, and possibly up to 24 h, if the exercise is prolonged and close to the maximum intensity that can be sustained. After more moderate exercise, the metabolic rate quickly returns to the baseline level. It therefore seems likely that the athlete training at near to the maximum sustainable level, who already has a very high energy demand, will find this increased further by the elevation of postexercise metabolic rate: this will add to the difficulties that many of these athletes have in meeting their energy demand. The recreational exerciser, for whom the primary stimulus to exercise is often to control body mass or to reduce body fat content, will not benefit to any appreciable extent from this effect. The control of food intake in relation to energy expenditure is not well understood, but it is clear that both short-term and long-term regulatory mechanisms exist. These allow adult body weight to be maintained within fairly narrow limits in spite of wide variations in energy expenditure. It is also clear, from the growing prevalence of obesity, that these control mechanisms are not perfect.
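The same approximation can serve as a rough cross-check of the extreme figures just quoted (a back-of-envelope sketch; the 70-kg runner mass is an assumption).

```python
# Cross-check of the 24-h running record energy estimate, using ~1 kcal/kg/km.
KJ_PER_KCAL = 4.184
distance_km = 286      # 24-h world record distance quoted in the text
body_mass_kg = 70      # assumed runner body mass

kcal = body_mass_kg * distance_km * 1.0   # ~20 000 kcal
mj = kcal * KJ_PER_KCAL / 1000.0          # ~84 MJ, of the same order as the ~80 MJ quoted
print(round(kcal), round(mj, 1))
```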
Figure 1 Association between daily energy intake and body fat content. (For further details see Maughan RJ and Piehl Aulin K (1997) Energy needs for physical activity. In: Simopoulos AP and Pavlou KN (eds.) World Review of Nutrition, vol. 82, pp. 18–32. Basel: Karger.)
The acute
effects of exercise on appetite and energy intake are also unclear. A period of activity may result in a stimulation of the appetite, leading to an increase in the energy intake: the magnitude of the increased intake may exceed the total energy expenditure of the activity itself. There are, however, reports that exercise may lead to a suppression of appetite, and this is likely to be true especially of high-intensity exercise. A modest training program involving energy expenditure of 200 kcal three times per week has been reported to have no effect on energy intake. In the study of distance runners referred to above, there was a negative association between the training load (expressed as distance run per week) and body fat and a positive association between training load and energy intake: this led to a somewhat paradoxical negative association between energy intake and body fat content (Figure 1).
Macronutrients and Physical Activity
Protein
The idea that protein requirements are increased by physical activity is intuitively attractive, and high-protein diets are a common feature of the diets of sportsmen and women. The available evidence does show an increased rate of oxidation of the carbon skeletons of amino acids during exercise, especially when carbohydrate availability is low. Protein contributes only about 5% of total energy demand in endurance exercise, but the absolute rate of protein breakdown is higher than at rest (where protein contributes about the same fraction as the protein content of the diet, i.e., typically about 12–16%) because of the higher energy turnover. Most recommendations suggest that individuals engaged in endurance activities on a daily basis should aim to achieve a protein intake of about
1.2–1.4 g kg−1 day−1, whereas athletes engaged in strength and power training may need as much as 1.6–1.7 g kg−1 day−1. In strength and power sports such as weightlifting, sprinting, and bodybuilding, the use of high-protein diets and protein supplements is especially prevalent, and daily intakes in excess of 4 g kg−1 are not unusual. Scientific support for such high intakes is generally lacking, but those involved in these sports are adamant that such high levels of intake are necessary, not only to increase muscle mass but also to maintain it. This apparent inconsistency may be explained by Millward's adaptive metabolic demand model, which proposes that the body adapts to either high or low levels of intake, and that this adjustment to changes in intake occurs only very slowly. Protein synthesis and degradation are both enhanced for some hours after exercise, and the net effect on muscle mass will depend on the relative magnitude and duration of these effects. Several recent studies have shown that ingestion of small amounts of protein (typically about 35–40 g) or essential amino acids (about 6 g) either before or immediately after exercise will result in net protein synthesis in the hours after exercise, whereas net negative protein balance is observed if no source of amino acids is consumed. These observations have led to recommendations that protein should be consumed immediately after exercise, but the control condition in most of these studies has involved a relatively prolonged (6–12 h) period of fasting, and this does not reflect normal behavior. Individuals who consume foods containing carbohydrate and protein in the hour or two before exercise may not further increase protein synthesis if additional amino acids or proteins are ingested immediately before, during, or after exercise. Various low-carbohydrate (40% of energy), high-fat (30%), high-protein (30%) diets have been promoted for weight loss and athletic performance. Proposed mechanisms include reduced circulating insulin levels, increased fat catabolism, and altered prostaglandin metabolism. These diets can be effective in promoting short-term weight loss, primarily by restricting energy intake (to 1000–2000 kcal day−1) and by restricting dietary choice. There is no evidence to support improvements in exercise performance, and what evidence there is does not support the concept.
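As a simple illustration of what these per-kilogram recommendations imply in absolute terms (the body masses below are illustrative assumptions):

```python
# Daily protein targets implied by the per-kilogram recommendations above.
recommendations_g_per_kg = {
    "endurance training": (1.2, 1.4),
    "strength/power training": (1.6, 1.7),
}

for group, (low, high) in recommendations_g_per_kg.items():
    for mass_kg in (60, 80):   # illustrative body masses
        print(f"{group}, {mass_kg} kg: {low * mass_kg:.0f}-{high * mass_kg:.0f} g/day")
# e.g., a 60-kg endurance athlete: ~72-84 g/day; an 80-kg strength athlete: ~128-136 g/day
```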
Carbohydrate
Carbohydrate is stored in the body in the form of glycogen, primarily in the liver (about 70–100 g in the fed state) and in the skeletal muscles (about
300–500 g, depending on muscle mass and preceding diet). These stores are small relative to the rate of carbohydrate use during exercise. Fat and carbohydrate are the main fuels used for energy supply in exercise. In low-intensity exercise, most of the energy demand can be met by fat oxidation, but the contribution of carbohydrate, and especially of the muscle glycogen, increases as the energy demand increases. In high-intensity exercise, essentially all of the energy demand is met by carbohydrate metabolism, and carbohydrate oxidation rates of 3–4 g min−1 may be sustained for several hours by athletes in training or competition. When the glycogen content of the exercising muscles reaches very low levels, the work rate must be reduced to a level that can be accommodated by fat oxidation. Repeated short sprints will also place high demands on the muscle glycogen store, most of which can be converted to lactate within a few minutes. Carbohydrate supplies about 45% of the energy in the typical Western diet: this amounts to about 200–300 g day−1 for the average sedentary individual, and is the amount that is necessary to get through normal daily activities. In an hour of hard exercise, up to 200 g of carbohydrate can be used, and sufficient carbohydrate must be supplied by the diet to replace the amount used. Replacement of the glycogen stores is an essential part of the recovery process after exercise; if the muscle glycogen content is not replaced, the quality of training must be reduced, and the risks of illness and injury are increased. Low muscle glycogen levels are associated with an increased secretion of cortisol during exercise, with consequent negative implications for immune function. Replacement of carbohydrate should begin as soon as possible after exercise with carbohydrate foods that are convenient and appealing, and at least 50–100 g of carbohydrate should be consumed within the first 2 h of recovery. Thereafter, the diet should supply about 5–10 g of carbohydrate per kg body mass, including a mixture of different carbohydrate-rich foods. For athletes preparing for competition, a reduction in the training load and the consumption of a high-carbohydrate diet in the last few days are recommended: this will maximize the body's carbohydrate stores and should ensure optimum performance, not only in endurance activities but also in events involving short-duration high-intensity exercise and in field games involving multiple sprints. The high-carbohydrate diet recommended for the physically active individual coincides with the recommendations of various expert committees that a healthy diet is one that is high in carbohydrate (at least 55% of energy) and low in fat (less than
30% of energy). However, where energy intake is either very high or very low, it may be inappropriate to express the carbohydrate requirement as a fraction of energy intake. With low total energy intakes, the fraction of carbohydrate in the diet must be high, but the endurance athlete with a very high energy intake may be able to tolerate a higher fat intake.
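A brief sketch of what the carbohydrate guidance above amounts to for an individual athlete; the 70-kg body mass and the 3000-kcal daily energy intake are assumed for illustration and are not figures given in the text.

```python
# Carbohydrate targets implied by the guidance above.
# Assumptions: 70-kg athlete, 3000 kcal/day energy intake, 4 kcal per g carbohydrate.
body_mass_kg = 70

daily_range_g = (5 * body_mass_kg, 10 * body_mass_kg)   # 350-700 g/day during heavy training
early_recovery_g = (50, 100)                             # within the first 2 h after exercise
from_energy_fraction_g = 0.55 * 3000 / 4                 # ~410 g/day if 55% of energy is carbohydrate

print(daily_range_g, early_recovery_g, round(from_energy_fraction_g))
```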
Fat
Fat is an important metabolic fuel in prolonged exercise, especially when the availability of carbohydrate is low. One of the primary adaptations to endurance training is an enhanced capacity to oxidize fat, thus sparing the body's limited carbohydrate stores. Studies where subjects have trained on high-fat diets, however, have shown that a high-carbohydrate diet during a period of training brings about greater improvements in performance, even when a high-carbohydrate diet is fed for a few days to allow normalization of the muscle glycogen stores before exercise performance is measured. It must be recognized, though, that these short-term training studies usually involve relatively untrained individuals and may not reflect the situation of the highly trained elite endurance athlete where the capacity of the muscle for oxidation of fatty acids will be much higher. For the athlete with very high levels of energy expenditure in training, the exercise intensity will inevitably be reduced to a level where fatty acid oxidation will make a significant contribution to energy supply and fat will provide an important energy source in the diet. Once the requirements for protein and carbohydrate are met, the balance of energy intake can be in the form of fat.
Micronutrients and Physical Activity
Many micronutrients play key roles in energy metabolism, and during strenuous physical activity the rate of energy turnover in skeletal muscle may be increased up to 20–100 times the resting rate. Although an adequate vitamin and mineral status is essential for normal health, marginal deficiency states may only be apparent when the metabolic rate is high. Prolonged strenuous exercise performed on a regular basis may also result in increased losses from the body or in an increased rate of turnover, resulting in the need for an increased dietary intake. An increased food intake to meet energy requirements will increase dietary micronutrient intake, but individuals who are very active may need to pay particular attention to their intake of iron and calcium. Iron deficiency anemia affects some athletes engaged in intensive training and competition, but it seems that the prevalence is the same in athletic and sedentary populations, suggesting that exercise per se does not increase the risk. The implications of even mild anemia for exercise performance are, however, significant. A fall in the circulating hemoglobin concentration is associated with a reduction in oxygen-carrying capacity and a decreased exercise performance. Low serum ferritin levels are not associated with impaired performance, however, and iron supplementation in the absence of frank anemia does not influence indices of fitness. Osteoporosis is now widely recognized as a problem for both men and, more especially, women, and an increased bone mineral content is one of the benefits of participation in an exercise program. Regular exercise results in increased mineralization of those bones subjected to stress and an increased peak bone mass may delay the onset of osteoporotic fractures; exercise may also delay the rate of bone loss. Estrogen plays an important role in the maintenance of bone mass in women, and prolonged strenuous activity may result in low estrogen levels, causing bone loss. Many very active women also have a low body fat content and may also have low energy (and calcium) intakes in spite of their high activity levels. All of these factors are a threat to bone health. The loss of bone in these women may result in an increased predisposition to stress fractures and other skeletal injury and must also raise concerns about bone health in later life. It should be emphasized, however, that this condition appears to affect only relatively few athletes, and that physical activity is generally beneficial for the skeleton.
Water and Electrolyte Balance
Few situations represent such a challenge to the body's homeostatic mechanisms as that posed by prolonged strenuous exercise in a warm environment. Only about 20–25% of the energy available from substrate catabolism is used to perform external work, with the remainder appearing as heat. At rest, the metabolic rate is low: oxygen consumption is about 250 ml min−1, corresponding to a rate of heat production of about 60 W. Heat production increases in proportion to metabolic demand, and reaches about 1 kW in strenuous activities such as marathon running (for a 70-kg runner at a speed that takes about 2.5 h to complete the race). To prevent a catastrophic rise in core temperature, heat loss must be increased correspondingly, and this is achieved primarily by an increased rate of evaporation of sweat from the skin surface. In hard exercise in hot conditions, sweat rates can reach 3 l h−1, and trained athletes can sustain sweat rates
in excess of 2 l h−1 for many hours. This represents a much higher fractional turnover rate of water than that of most other body components. In the sedentary individual living in a temperate climate, about 5–10% of total body water may be lost and replaced on a daily basis. When prolonged exercise is performed in a hot environment, 20–40% of total body water can be turned over in a single day. In spite of this, the body water content is tightly regulated, and regulation by the kidneys is closely related to osmotic balance. Along with water, a variety of minerals and organic components are lost in variable amounts in sweat. Sweat is often described as an ultrafiltrate of plasma, but it is invariably hypotonic. The main electrolytes lost are sodium and chloride, at concentrations of about 20–70 mmol l−1, but a range of other minerals, including potassium and magnesium, are also lost, as well as trace elements in small amounts. When sweat losses are high, there can be a substantial electrolyte loss, and intake must increase accordingly. Failure to maintain hydration status has serious consequences for the active individual. A body water deficit of as little as 1% of total body mass can result in a significant reduction in exercise capacity. Endurance exercise is affected to a greater extent than high-intensity exercise, and muscle strength is not adversely affected until water losses reach 5% or more of body mass. Hypohydration greatly increases the risk of heat illness, and also abolishes the protection conferred by prior heat acclimation. Many studies have shown that the ingestion of fluid during exercise can significantly improve performance. Adding an energy source in the form of carbohydrate confers an additional benefit by providing an energy source for the working muscles. Addition of small amounts (perhaps about 2–8%) of carbohydrate, in the form of glucose, sucrose, or maltodextrin, will promote water absorption in the small intestine as well as providing exogenous substrate that can spare stored carbohydrate. The addition of too much carbohydrate will slow gastric emptying and, if the solution is strongly hypertonic, may promote secretion of water into the intestinal lumen, thus delaying fluid availability. Voluntary fluid intake is seldom sufficient to match sweat losses, and a conscious effort to drink is normally required if dehydration is to be avoided. Palatability of fluids is therefore an important consideration. If exercise is prolonged and sweat losses high, the addition of sodium to drinks may be necessary to prevent the development of hyponatremia. Ingestion of large volumes of plain water is also likely to limit intake because of a fall in plasma osmolality leading to suppression of thirst.
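To relate the heat-production and sweat-rate figures above, the sketch below uses textbook approximations that are not stated in the article: roughly 20.5 kJ of energy released per litre of oxygen consumed, about 20% of that energy appearing as external work during running, about 2.43 kJ removed per millilitre of sweat that evaporates, and an assumed marathon-pace oxygen uptake of about 4 l min−1.

```python
# Relating metabolic heat production to the sweat evaporation needed to dissipate it.
# All constants below are textbook approximations (assumptions, not from the article).
KJ_PER_L_O2 = 20.5        # energy released per litre of oxygen consumed
HEAT_FRACTION = 0.80      # fraction of energy appearing as heat during running
KJ_PER_ML_SWEAT = 2.43    # latent heat removed per ml of sweat that evaporates

def heat_watts(vo2_l_per_min: float) -> float:
    """Approximate rate of heat production (W) at a given oxygen uptake."""
    return vo2_l_per_min * KJ_PER_L_O2 * 1000 / 60 * HEAT_FRACTION

def evaporation_l_per_h(heat_w: float) -> float:
    """Sweat evaporation rate (l/h) needed to dissipate a given heat load."""
    return heat_w * 3600 / (KJ_PER_ML_SWEAT * 1000) / 1000

h = heat_watts(4.0)                                  # ~1.1 kW at a VO2 of ~4 l/min
print(round(h), round(evaporation_l_per_h(h), 1))    # ~1090 W, ~1.6 l/h evaporated
```

Since a substantial fraction of sweat drips from the skin rather than evaporating, observed sweat rates of 2–3 l h−1 are consistent with an evaporative requirement of this order.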
Replacement of water and electrolyte losses incurred during exercise is an important part of the recovery process in the postexercise period. This requires ingestion of fluid in excess of the volume of sweat lost to allow for ongoing water losses from the body. If food containing electrolytes is not consumed at this time, electrolytes, especially sodium, must be added to drinks to prevent diuresis and loss of the ingested fluid.
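As an illustration of the point about drinking more than the measured deficit, a small sketch follows; the 1.5× factor is a commonly cited rehydration guideline from the sports-drink literature (for example, work by Shirreffs and Maughan) rather than a figure given in this article, and the body masses are assumptions.

```python
# Post-exercise rehydration: replace more than the sweat deficit to cover ongoing losses.
# The 1.5x factor is a commonly cited guideline, not a value stated in the article.
def rehydration_volume_l(mass_before_kg: float, mass_after_kg: float,
                         factor: float = 1.5) -> float:
    """Suggested fluid intake (litres), taking 1 kg of body-mass loss as ~1 litre of sweat."""
    deficit_l = mass_before_kg - mass_after_kg
    return factor * deficit_l

print(rehydration_volume_l(70.0, 68.5))   # 1.5-kg deficit -> drink ~2.25 l, ideally with sodium
```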
Dietary Supplementation for Active Individuals
The use of nutritional supplements in athletes and in the health-conscious, recreationally active population is widespread, as it is in the general population, and a very large number of surveys have been published. A meta-analysis of 51 published surveys involving 10 274 male and female athletes of varying levels of ability showed an overall prevalence of supplement use of 46%, but the prevalence varies widely in different sports, at different ages and levels of performance, and in different cultural backgrounds. A wide variety of supplements are used with the aim of improving or maintaining general health and exercise performance. In particular, supplement use is often aimed at promoting tissue growth and repair, promoting fat loss, enhancing resistance to fatigue, and stimulating immune function. Most of these supplements have not been well researched, and anyone seeking to improve health or performance would be better advised to ensure that they consume a sound diet that meets energy needs and contains a variety of foods.
See also: Anemia: Iron-Deficiency Anemia. Appetite: Physiological and Neurobiological Aspects. Bone. Carbohydrates: Chemistry and Classification; Regulation of Metabolism; Requirements and Dietary Importance. Electrolytes: Water–Electrolyte Balance. Energy: Balance. Exercise: Beneficial Effects. Fats and Oils. Osteoporosis. Protein: Synthesis and Turnover; Requirements and Role in Diet. Sports Nutrition. Supplementation: Dietary Supplements; Role of Micronutrient Supplementation; Developing Countries; Developed Countries.
Further Reading
American College of Sports Medicine, American Dietetic Association, and Dietitians of Canada (2000) Joint Position Statement: Nutrition and athletic performance. Medicine and Science in Sports and Exercise 32: 2130–2145.
Devlin JT and Williams C (1992) Foods, Nutrition and Sports Performance. London: E and FN Spon.
Henriksson J and Hickner RC (1998) Adaptations in skeletal muscle in response to endurance training. In: Harries M, Williams C, Stanish WD, and Micheli LJ (eds.) Oxford Textbook of Sports Medicine, 2nd edn, pp. 45–69. Oxford: Oxford University Press.
Ivy J (2000) Optimization of glycogen stores. In: Maughan RJ (ed.) Nutrition in Sport, pp. 97–111. Oxford: Blackwell.
Kiens B and Helge JW (1998) Effect of high-fat diets on exercise performance. Proceedings of the Nutrition Society 57: 73–75.
Maughan RJ (1999) Nutritional ergogenic aids and exercise performance. Nutrition Research Reviews 12: 255–280.
Maughan RJ and Murray R (eds.) (2000) Sports Drinks: Basic Science and Practical Aspects. Boca Raton: CRC Press.
Maughan RJ and Piehl Aulin K (1997) Energy needs for physical activity. In: Simopoulos AP and Pavlou KN (eds.) World Review of Nutrition, vol. 82, pp. 18–32. Basel: Karger.
Millward DJ (2001) Protein and amino acid requirements of adults: current controversies. Canadian Journal of Applied Physiology 26: S130–S140.
Nieman DC and Pedersen BK (1999) Exercise and immune function. Sports Medicine 27: 73–80.
Noakes TD and Martin D (2002) IMMDA-AIMS advisory statement on guidelines for fluid replacement during marathon running. New Studies in Athletics 17: 15–24.
Shirreffs SM and Maughan RJ (2000) Rehydration and recovery after exercise. Exercise and Sports Science Reviews 28: 27–32.
Williams C (1998) Diet and sports performance. In: Harries M, Williams C, Stanish WD, and Micheli LJ (eds.) Oxford Textbook of Sports Medicine, 2nd edn, pp. 77–97. Oxford: Oxford University Press.
Wolfe RR (2001) Effects of amino acid intake on anabolic processes. Canadian Journal of Applied Physiology 26: S220–S227.
F
FAMINE
K P West Jr, Johns Hopkins University, Baltimore, MD, USA
© 2005 Elsevier Ltd. All rights reserved.
There are so many hungry people, that God can not appear to them except in the form of bread. Mahatma Gandhi
Famines in History
Famine has afflicted humankind, shaping its demography and history from antiquity. Records of famine in ancient Egypt during the third millennium BC are depicted in bas-relief on the Causeway of the Pyramid of Unas in Saqqara. Biblical accounts of a famine resulting from drought in Egypt during the second millennium BC (Middle Kingdom) that stretched to Mesopotamia describe the devastation wrought on the land and society and the means by which Joseph predicted and managed its consequences. The fall of the Roman Empire followed repeated food shortages and famines from 500 BC to 500 AD. China experienced some 1828 famines, nearly one per year, from 108 BC to 1911 AD. The ranks of the Crusades in the eleventh and twelfth centuries swelled in response to promise of food. The storming of the Bastille and French Revolution followed decades of periodic rises in flour and bread prices that had caused widespread hunger and hardship, and hundreds of ‘food riots.’ Recurrent famine motivated the settling of the New World. The Great Irish Famine in the late 1840s caused one and a half million deaths and an equal number of migrations, mostly to America. Decades of Russian famines following crop failures in the late nineteenth century resulted in waves of immigration to the US. Repeated famines led to the overthrow of Czarist Russia that ushered in the Bolshevik Revolution in the early twentieth century. Using food deprivation to wage class warfare and
crush the Cossack revolution in the 1930s, Stalinist policies led to the starvation and death of 3.5 million Ukrainians. In China, multiple famines throughout the nineteenth century reportedly led to over 50 million deaths, and these continued throughout the first half of the twentieth century. Maoist communism rose to power in the 1940s understandably amidst promises of land reform and freedom from chronic hunger and periodic famine. However, collectivization of private farms and irrational rural industrialization schemes coupled with monopolistic control of food grain movement, purchase and access, abusive taxation, and repressive policies against the peasantry left China mostly food insecure throughout the 1950s and primed for what has turned out to be the worst single famine in human history (1959–60). During this period an estimated 30 million people perished, in absence of worldview and reaction, following the secretive, cultist policy failures of Mao’s ‘Great Leap Forward.’ Famine was notorious on the Indian subcontinent throughout the mid-twentieth century, with the two final famines both occurring in Bengal in 1943, towards the end of British rule and again in Bangladesh (formerly East Bengal) in 1974–75. An India free from overt famine over the past half-century, despite continuing chronic undernutrition, has been attributed, in part, to the country’s economic rise, relative peace, and democratic and popular processes that have included political accountability and a flourishing free press; lessons that still remain to be learnt by some modern states. In North Korea, for example, the effects of repeated floods in the late 1990s that ruined crops, combined with isolation, a collapsed centralized economy, and politicization and diversion of already insufficient international food aid from those most in need led to famine of devastating proportion. In the late twentieth century famines have inflicted heavy loss of life in Africa, especially in the Greater Horn (i.e., Ethiopia, the Sudan, and Somalia). At least one modern regime’s demise, that of Emperor Haile Selassie in 1974, followed famine. Famines of seemingly increased complexity
in Africa have resulted from deteriorating crop production associated with steady rainfall decline, failures in development and commerce, repressive and corrupt governance, and armed conflict leading, at times, to outright anarchy. Tragically, famines over the past 30 years have occurred at a time in human history when general understanding of causes and consequences of famine, and a global ability to monitor antecedents and intervene to avert mass starvation, disease, and death have never been greater. Yet, with conflict, especially internal civil war, rising as the decisive and yet unpredictable trigger of modern famine, stable governance with democratic processes (e.g., free press, people’s participation, fair trade, etc.) is increasingly recognized to be one of the most important means for its prevention. History has increased awareness and understanding of the need for a stable, peaceful, and equitable political economy to guide the developing world away from famine in the twenty-first century.
Definition of Famine
Definitions of famine vary but all contain the necessary elements of widespread inaccessibility to food leading to mass numbers of starved individuals. Importantly, lack of access is not equivalent to nonavailability of food within a region, as most famines occur amidst food stocks sufficient to feed the afflicted population. More comprehensive definitions of famine may include elements of time dependency (e.g., steady, continuous erosion of or sudden collapse in food available for consumption), partial causation (e.g., due to natural calamity, armed conflict, or convergence of other complex causal events), class (e.g., affecting certain ethnic, geographic, economic or occupational groups more than others), and health consequence on a population scale (e.g., accompanied by epidemics of disease and high mortality) or other population responses (e.g., mass migration). While poverty-stricken communities tend to view famine as a continuum of increasing loss and oppression that typically begins long before mass casualty, formal ‘external’ definitions tend to invoke thresholds or shocks involving sudden inflections in trends for events that afflict large numbers of people. These may include spikes in prices of staple grains, levels of violence, destitution, mortality from starvation and infectious disease, and migratory movement. Threshold events tend to distinguish famine, which upon declaration demands a massive relief response, from endemic, chronic food deprivation, which results from extreme poverty, political corruption, developmental
neglect and food insecurity and which leads to chronic, high rates of malnutrition, disease, and mortality. Yet, these factors are ones that, often when acting together, predispose underserved populations of the developing world to risk of famine. Such conditioning factors are antecedent causal elements that require more continuous, sensitive, and specific indicators to detect as well as a set of longer term economic, political, and developmental solutions to prevent. Whether continuous and evolving or more sudden, unleashed famine – where thresholds have been transgressed by masses of people – is catastrophic, distinct, and a human tragedy of unparalleled proportion.
Causes of Famine
Starvation is a matter of some people not having enough food to eat, and not a matter of there being not enough food to eat. Amartya Sen
Large numbers of people starve during famine, which is usually followed by epidemics of lethal infectious diseases. Typically, a plethora of forces or conditions act within society to deprive people of food to survive. General food decline in a population may be an important factor, but it is neither necessary nor sufficient as a cause, as amply revealed by critical treatises of numerous famines over the past two centuries. This has led analysts to recognize that famines are complex, often with many (‘component’) causes that vary in their attribution, depending on the classes of society affected, and their timing, severity, duration, and degree of interaction. The constellation of causes and potential solutions of famine can be examined from ecological, economic, social, and public health perspectives, each offering different insights into the ecology of famine. While each view is valid and informative, none are complete or mutually exclusive, making it necessary to integrate these diverse perspectives to understand the complexity of famine and approaches to its prevention. In offering an epidemiologic overview, there appear to be at least three dominant causes of famine that have emerged during the nineteenth and twentieth centuries that appear particularly relevant to understanding modern famine causation (Figure 1): (1) market failure; (2) armed conflict; and (3) failure in central planning. Importantly, none are sole-acting causes and, therefore, for each one there are other antecedent factors, sometimes operative for years before, as well as concurrent and late-acting components that together lead to famine.
[Figure 1: pie-chart diagrams of component causes for selected famines, grouped by dominant cause. Market failure: Great Bengal Famine of 1943; Bangladesh Famine of 1974; Ethiopia and Sudan Famines of 1984–85. War or armed conflict: Dutch Hunger Winter of 1944; Somalian Famine of 1991–92. Central plan failure: Ukraine Famine of 1933–34; Great Leap Famine of 1959–60; North Korea Famine of 1997–98.]
Figure 1 Complex causal networks of selected modern famines, stratified by a dominant cause. Each pie illustrates a complete cause; each wedge illustrates an assumed, essential component cause, without any one of which famine would not occur. Inclusion of causes based on literature reviews; sizes of pie slices are subjective based on descriptions in the literature (causal concepts adapted from Rothman and Greenland, 1998). A: market failure – loss of direct or trade entitlement through a combination of: (1) increased food prices due to food shortage from decreased agricultural production or importation, hoarding and speculation, or other market forces leading to unfavorable terms of exchange; plus (2) loss of means to command food through cash, labor, credit, and other assets (endowment) by vulnerable groups of society. B: war or armed conflict – declared or internal; through siege, blockade, or other expression of force, during a time course leading up to and concurrent with famine. C: central plan failure – occurring within centrally planned states lacking democratic processes, notably in twentieth century communist states; directives that disrupt infrastructure, productivity, and economic well-being, and access to food through heavy taxation, extraction of food grains, livestock and other productive assets and terror, or restrict movement of food stocks outside free-market dynamics, leading to starvation of the masses. D: natural disaster – climatological and environmental catastrophes including floods, or single, repeated or chronic droughts. E: food availability decline – food shortage resulting from poor crop production, lack of trade, poor food transport, storage and marketing systems. F: weak infrastructure – inadequate systems of finance, credit, roads, communications, agricultural production including irrigation or flood protection systems. G: poor/unstable governance – weak and ineffective forms of governance, including anarchy. H: inadequate aid response/administrative mismanagement – inadequate national or international counter-famine measures, including employment or food procurement policies as well as withheld, slow, ineffectual, or insufficient relief. I: other causes – a catch-all ‘causal complement’ to those listed above, of interacting prefamine and intrafamine sociological, governmental, environmental, and market forces that render each famine unique.
Market Failure
Market Failure
Market failure famines occur when free, competitive market forces, driven by agriculture, transportation, communication, and trade, and enabled by an abiding government, fail to assure minimal entitlement to food, either directly (through subsistence) or via trade, for a large sector of society. Following Amartya Sen, entitlement failure is an economic phenomenon, broadly defined, in which individuals and households are
unable to obtain sufficient amounts of food through all available legal means (cash, labor, skills, credit, and other assets that comprise ‘endowment’) at the market’s existing terms of exchange (costs of securing sufficient amounts of food). Combinations of loss of endowment and adverse shifts in the conditions of exchange (e.g., spikes in grain prices) can lead to certain classes of society being severely deprived of food. Component causes that lead to market failure-driven famine are complex, interacting over an extended time
Figure 2 A model depicting actions of individual, or component, causes that can lead to a sufficient cause of famine, and societal, indigenous responses to famine predominantly caused by market failure. Famine may be latent or delayed from external view until migrations or excess deaths occur. Government relief is typically a late response to famine.
(Figure 2). Causes acting at various times in the pathway to market failure can be numerous, including long- and short-term adversity in climate leading to drought and excessive floods, pestilence and other causes of lost crop yield, reduced food imports or inefficient transport and marketing infrastructures. These all can lead to a national or, more often, regional decline in food availability, inflationary grain market responses to speculation and hoarding, other aspects of infrastructural neglect, ineffectual trade policies, political instability and corrupt governance, market depressions with year-round or seasonal job losses, and depletion of assets of the poor (endowment). Prior or present conflict can destabilize markets and contribute to such types of famine. Famines that can be classified as those primarily of market failure include the Great Irish Famine from 1844 to 1848, the Great Bengal Famine of 1943, the Bangladesh famine of 1974, and the Sudan famine of 1984–85. The Great Irish Famine was triggered by a potato blight that stripped the country of the only staple that Irish peasantry could afford to grow on their small parcels of land. Peasants who grew other staple grains had to sell them to pay rent to landlords. However, during these same years, there were substantial exports of wheat, barley, oats, and animal products by landowners to English markets. Food did not enter the local Irish markets because the peasants lacked effective demand. Market or entitlement failures marked the last two great Bengal famines of the twentieth century: The Great Bengal Famine of 1943 and the Bangladesh Famine of 1974–75 (Figure 1). The 1943 famine, during which some 3 million people are estimated to have died, was originally judged by a Famine Inquiry Commission to be due to a shortage in rice
supply. However, a seminal in-depth analysis years later by Sen showed that the famine occurred in a year during which rice production in Bengal was only 5% lower than the average of the previous 5 years. It was also a year when most economic indicators of Bengal were showing a ‘boom’ in growth due to World War II. Rural food stocks were being procured by the government to support military needs, subsidize rations for civil servants, and stabilize general prices of rice in Calcutta, which drove up the price of rice in rural areas. This practice, coupled with ‘boat blockade’ and ‘rice denial’ policies imposed in regions along the Bay of Bengal for reasons of defense, left certain low wage-earning rural classes (agricultural workers, day laborers, artisans, and fishermen) disentitled, and unable to acquire enough food for their own survival. In Bangladesh, at least 100 000 people died between 1974 and 1975 in a famine that followed an unusually severe flood. During the several years leading up to the famine there were events that brought the country to a highly vulnerable state, including a devastating cyclone and tidal wave, a civil war that led to the country’s independence, and a series of partial crop failures, all superimposed on preexisting high burdens of malnutrition, disease, underdevelopment, and ensuing political chaos. The flood in the middle of 1974 was expected to destroy much of the major ‘aman’ rice to be harvested a few months later. In anticipation of impending rice shortage, rural traders began to hoard grains in early September of that year causing rice prices to spike across the country’s rural markets in a contagious pattern (Figure 3). Rice prices remained at about twice their normal level for months thereafter, even after it became evident that the speculated poor rice harvest was, in fact, a normal one. Thus, total and per capita aggregate grain supplies in Bangladesh remained at about average levels throughout the famine. Local area food deficits and hoarding of grains by traders led to the observed points of inflection in the price of rice throughout the country that caused the entitlements of rural wage earners to collapse, initiating a famine that resulted in extremely high mortality and massive migrations to urban centers in search of relief. The Horn of Africa has been wracked by famine or famine-like conditions, leading to what have become classically defined as ‘complex emergencies’ for much of the past three decades. Aggregate food shortage has appeared to play a more variable and, at times, prominent role in recent famines in the eastern Horn. In Ethiopia, Sudan, Eritrea, and Somalia large tracts of land are drought-prone, average annual rainfall has been declining since the
Figure 3 Consecutive weekly maps of a contagious spread of spikes in the price of rice in local markets throughout rural Bangladesh from (A) late August 1974 through to (H) the end of October 1974 during a flood-associated period of a famine that reportedly killed from 100 000 to 1 million persons. (Adapted from Seaman J and Holt J (1980) Markets and famines in the third world. Disasters 4(3): 283–297.)
1930s, and robust, indigenous farming and animal husbandry practices have been weakened as agricultural land has increasingly been used for growing export crops. In the Ethiopian famine of 1972–75, in which over 100 000 people died, national crop production dropped to only 7% below normal levels, a decline that, like in Bengal in 1943 and 1974, would not have been expected to trigger a famine. However, crop production had been severely below normal in Wollo Province, where the famine began. Although the famine subsequently spread to other areas of the country, a reluctance by the government to formally recognize the famine and excessive delays in mobilizing and targeting food aid within country (whether from national or international stocks) were deemed responsible for unleashing a famine that, based on national stocks, should have been averted. Famines during 1982–85 in Ethiopia and the Sudan appeared to be more closely tied to gradual declines in national food security during the preceding decade. These trends were exacerbated by repressive governments
enacting targeted, famine-promotive rather than preventive policies, resulting in civil wars and severely deteriorating economic conditions that were compounded by weak international food aid responses.
Armed Conflict
A second major class of famine comprises those precipitated or triggered by declared war or armed insurgency, leading to a siege or food blockade by a foreign power (e.g., Allied blockade of Germany in 1915–18; Nazi blockade of Holland precipitating the Dutch Winter Famine of 1944–45, and the Nazi siege of Leningrad in 1942–44) or, as occurring more in recent years, severe civil war that disrupts normal markets as well as emergency food delivery systems (e.g., the Somalian civil war and famine of 1991–92). Armed conflict can incapacitate or destroy a country's ability to govern, develop, produce and feed itself domestically or through food aid, as scores of people become displaced, destitute, starve and die from severe malnutrition and epidemic illness. The
famine in Somalia in the early 1990s exemplifies the rapid emergence of military conflict as a precipitating cause of famine. With significant transfers of weaponry to rogue vigilante groups and increased deployments of land mines in other poor, warring countries in recent years, civil violence and lawlessness also pose a major hindrance to the effective provision of short-term relief during the acute phase of famine and to subsequent economic recovery.
Failure in Central Planning
A third class of modern famine, distinct from the other two, has resulted from failure by intent, indifference, ignorance, or incompetence of a centrally planned state to adequately provide food to all sectors of society, often as a result of totalitarian action to advance political goals outside of the rules of free trade or popular processes. Examples of this third type of famine in the twentieth century include those induced by the notorious policies of Stalin in Soviet Russia in the 1920s and 1930s. In an effort to achieve rapid industrial growth, Stalin waged class warfare among rural peasantry, abolished economic incentives, collectivized farms into massive (inefficient) production units and merged villages into socialist agro-towns, seized and exported grain for foreign exchange to fuel industrialization, restricted population movements across municipalities, and brutally suppressed all opposition. Agricultural production plummeted across regions of Russia leading to disastrous shortages (e.g., by 40% in some areas), further intensifying state seizures of food grain, especially in the grain-belt region of the Ukraine where Stalin sought to crush a nationalist revolt by forcibly extracting available food grains from the population. The actions induced the worst famine in Russian history. Between 1930 and 1937 it was estimated that nearly 15 million peasants died, of whom 7–8 million died in the Ukraine in 1933–34. Under communist rule imposed by Mao Zedong, in 1959–60 China experienced the worst recorded famine in human history that left an estimated 30 million people dead. The Great Leap Famine was provoked through a causal chain of centrally planned policy steps during the preceding decade, modeled after Stalin and motivated by ill-conceived goals to 'Leap Forward.' Mao's aims were to achieve agricultural sufficiency and superiority through massive agricultural collectivization and the formation of huge peasant communes, and rapid rural industrialization through crash programs to increase steel production. The plight of tens of millions of rural peasants was tightly controlled by the state through brutal force, terror, propaganda, and state control of grain production, procurement, and taxation, motivated by a blind faith among civil servants in the vision and leadership of Mao. As a result of fabricated inflation of grain production figures, driven by a zeal to demonstrate success, China became a net exporter of more than a million metric tons of grain during the peak of famine mortality in the countryside in 1960, mimicking Stalinist Russia. Thus, in addition to events immediately leading to famine, some component causes contributing to the centrally planned Great Leap Famine can be traced back through the previous one to three decades and to influences beyond the borders of China. Communist North Korea's inability to avert famine in 1997–98 amounts to the most recent example of a central planning failure, conditioned by chronic food insecurity over the previous decade and precipitated by poorly timed, torrential rains and floods in 1995–96 and drought in 1997. However, some causal elements relate to how slowly and secretively the isolationist government responded, to actions of governance that date back to the Korean War and Cold War politics, and to the politicization of food aid.
Coping Strategies
Most is known about household and community coping mechanisms in response to famines due to market failure. In cultures where food shortage or inaccessibility to large sectors of society is chronic, and threat of famine periodic, there exist indigenous responses that enable the local populace to cope, protect their entitlement, and minimize, as best they can, the risk of starvation as terms of exchange for food deteriorate (illustrated as a concept in Figure 4).
Figure 4 Illustration of collapse in entitlement. As endowment of the poor decreases toward a state of destitution with increasingly severe (costly) terms of exchange for food, the risk of starvation and famine increases. (Axes: endowment of poor households versus terms of exchange (cost of living).)
A first line of responses may be viewed as 'insurance' against uncertainty; these are activities that can stem loss of endowment, such as restructuring the mix of crops grown or pastoral practices in ways that insulate against drought- or flood-induced shortages. Examples include planting more robust crops, dispersing crops across a wider area, staggering plantings, or increasing livestock diversity and mobility. Food preservation practices and dietary changes to include less commonly eaten foods can initially increase the size and diversity of the food base. As terms of exchange become worse, coping mechanisms aimed at survival increasingly cost households their endowment. These responses include working longer and at different jobs for lower wages, migrating far from home to find marginal work, reducing meal frequency, consuming the next planting's seeds, and expanding intake to include 'famine foods' poor in, or lacking, nutritional quality. At first these may include unusual tubers, leaves, flowers, and other plants. Household assets such as pots, utensils, watches, and small animals are increasingly sold, as, eventually, are larger assets such as bullock carts, bicycles, and draft animals. Land mortgage or sales transactions become more numerous. With indebtedness and destitution, petty crime and child abandonment increase; famine foods may include tree bark, ground bone, and rodents; suicide and cannibalism may occur. An indicator of severe entitlement loss in a community is the livestock-to-grain price ratio in local markets. Normally this ratio reflects the greater asset value of livestock compared to grain. However, it may invert as the cost of grain and feeding animals and the level of animal wasting all continue to rise, such that, at a peak of famine vulnerability, large numbers of animals may be sold at very low prices relative to the costs of grain. Viewed over time, famine is a continuum. As household and community entitlements erode for increasing numbers due both to deteriorating conditions of exchange and endowment loss, destitution and starvation become more likely. Figure 5 depicts a hypothetical shift in distribution of starving individuals in a poor population exposed to increasing risk of famine, where under usual conditions a small proportion of individuals routinely face the threat of starvation and wasting malnutrition (top panel). During periods of high or repeated stress, such as those of prolonged drought and internal conflict, while the population faces less food security, coping mechanisms continue to protect most vulnerable groups from abject starvation, even as they near such a 'threshold' amidst inevitable losses of human and economic assets (middle
panel). During severe distress of famine, entitlement has collapsed for the most vulnerable classes of society, pushing large numbers of persons into a state of starvation, leaving them destitute and migrating or dying (bottom panel). However, not all individuals starve. Some segments of society lose little or no economic ground, or benefit considerably from the plight of others by acquiring property and other assets at low prices, obtaining labor at reduced wages, or lending money at high interest rates. Still other segments, particularly those trading in famine relief goods and services, stand to gain large profits throughout the famine and recovery periods (depicted by the right skew). Postfamine, the economic landscape is nearly always one of greater polarization of wealth and an increase in size and vulnerability of society's poor and destitute. Peri-urban slums typically remain swollen following famine as a result of permanent migration.
Figure 5 Shifting of a high-risk, undernourished population toward increased starvation during prefamine and famine conditions, particularly those most vulnerable (panels: normal, prefamine, famine; percentage starving plotted across the population). Truncated left tail area reflects hypothetical effects of coping strategies that prevent starvation. Right skew reflects polarizing of wealth, with some sectors profiting from famine.
Government and International Responses
Famine through the ages has evoked from law-abiding governments preventive action, where believed indicated, and relief responses in the face of imminent catastrophe. In Genesis, Pharaoh's grain taxes during years of plenty were aimed at relieving dwindling food stores in famine. During China's
Eastern Chou and Ch’in dynasties of the third century BC, as well as in India over 2000 years ago, steps formulated to prevent or relieve famine included disaster reporting procedures, cropping alterations, grain distribution, feeding kitchens, tax remissions, vulnerable group relocation, and public works construction to facilitate irrigation, food shipment or flood control. In sixteenth century England, to counter inflationary effects of speculative grain hoarding, the Tudor First Book of Orders called for enforced extraction and marketing of private grain stocks as a way to control staple prices and thwart famine. Policy response can also amount to inaction. The Great Irish Famine from 1844 to 1848 evoked a different response from the British Government: a flawed ‘laissez-faire’ policy intending to allow market forces to equilibrate on their own to meet local food needs, a course that never materialized as entitlement collapsed among Irish peasantry. However, learning from a century of repeated famine, Famine Codes emerged in British India in 1880 that called for massive public works coupled with food distribution and feeding centers for vulnerable groups, which served as the core famine relief policy on the subcontinent for more than a half century and have continued to guide famine relief efforts to the present day. Today, modern preventive response by international agencies and governments can be informed and guided by surveillance systems with regional, national, and local data collection mechanisms. Examples are the Famine Early Warning System (FEWS), which functions across Sub-Saharan Africa and has been supported by the US Agency for International Development over the past two decades and the Global Information Early Warning System (GIEWS) managed by the Food and Agricultural Organization of the United Nations (FAO). The primary aim of surveillance is to detect worsening conditions in high-risk populations in sufficient time to permit effective preventive or pre-emptive action. The task is a ‘tall order’ given widespread, often complex, component causes that must converge in certain ways to cause famine, against a usual plethora of endemic risk factors. With early, adequate, and effective response serving as the criterion of success, modern surveillance has so far failed to prevent famine. In part, this may reveal a basic epidemiologic dilemma: Against a background of profound, widespread economic and nutritional need throughout the developing world, including numerous prefamine but intact situations arising under surveillance, famine is a rare event. Even with presumed high sensitivity and specificity, low predictive value stemming from infrequent occurrence makes action to prevent a particular famine
unlikely given the enormous political and financial resources required to mount preventive responses. Thus, the most effective preventive action relates to setting and enacting a development agenda that recognizes high risk areas and seeks to strengthen the productivity and well-being of famine-vulnerable population groups in those areas of a country. These can include boosting infrastructural, commercial, education, agricultural, and other inputs into priority areas that improve long-term economic conditions. Preemptive government policies are directed toward relieving a prefamine condition once it becomes apparent. Setting up famine early warning systems that monitor climatic, agricultural, population mobility, economic, and nutritional indicators is considered preemptive in that such information is intended to identify high-risk trends so that corrective action could be taken long before famine becomes imminent. Normally, early warning surveillance is only possible in high-risk countries with significant international assistance. Another example is a government making large purchases of food on the international market and releasing the commodities through ration shops, food-for-work and other programs that do not disrupt the local food economy but stabilize local grain market prices instead as a means to prevent speculation throughout the period of high risk. Lagged or relief-oriented responses comprise emergency responses to acute and enormous need that typically are enacted after famine begins and its harsh consequences are already evident in a population. These actions, usually in coordination with major international relief and donor agencies, are typically intended to relieve acute suffering and death and promote the rehabilitation of those masses who have survived to migrate, and reach encampments. By definition, lagged responses represent policy failure for governments intending to minimize the destruction, malnutrition, and mortality of famine. See also: Hunger. Malnutrition: Primary, Causes Epidemiology and Prevention; Secondary, Diagnosis and Management. Nutrition Policies In Developing and Developed Countries. Starvation and Fasting.
Further Reading
Ahmed R, Haggblade S, and Chowdhury TE (2000) Out of the Shadow of Famine: Evolving Food Markets and Food Policy in Bangladesh. Baltimore: Johns Hopkins University Press. Aykroyd WR (1974) The Conquest of Famine. London: Chatto & Windus. The Bible. Book of Genesis 47: 4–26.
Cuny FC (1999) Famine, Conflict and Response: A Basic Guide. West Hartford: Kumarian Press. Dreze J and Sen A (eds.) (1990) The Political Economy of Hunger: Famine Prevention, vol. 2: WIDER Studies in Developmental Economics, pp. 1–400. Oxford: Clarendon Press. Edkins J (1996) Legality with a vengeance: Famines and humanitarian relief in 'complex emergencies.' Millennium: Journal of International Studies 25: 547–575. Newman LF (ed.) (1992) Hunger in History: Food Shortage, Poverty and Deprivation. Oxford: Blackwell. Ravallion M (1997) Famines and economics. Journal of Economic Literature 35: 1205–1242. Rothman K and Greenland S (1998) Modern Epidemiology, pp. 7–28. Philadelphia: Lippincott-Raven.
Scrimshaw NS (1987) The phenomenon of famine. Annual Review of Nutrition 7: 1–21. Seaman J and Holt J (1980) Markets and famines in the third world. Disasters 4(3): 283–297. Sen A (1977) Starvation and exchange entitlements: a general approach and its application to the great Bengal famine. Cambridge Journal of Economics 1: 33–59. Sevoy RE (1986) Famine in Peasant Societies. New York: Greenwood Press. Yang DL (1996) Calamity and Reform in China: State, Rural Society and Institutional Change since the Great Leap Forward. Stanford: Stanford University Press. Yip R (1997) Famine. In: Noji EK (ed.) Public Health Consequences of Disasters, pp. 305–335 New York: Oxford University Press.
Fat-Soluble Vitamins see Vitamin A: Biochemistry and Physiological Role. Vitamin D: Physiology, Dietary Sources and Requirements; Rickets and Osteomalacia. Vitamin E: Metabolism and Requirements. Vitamin K
Fat Stores see Adipose tissue
Fats see Fatty Acids: Metabolism; Monounsaturated; Omega-3 Polyunsaturated; Omega-6 Polyunsaturated; Saturated; Trans Fatty Acids. Lipids: Chemistry and Classification; Composition and Role of Phospholipids
FATS AND OILS
A H Lichtenstein, Tufts University, Boston, MA, USA
© 2005 Elsevier Ltd. All rights reserved.
Dietary fat is a macronutrient that has historically engendered considerable controversy and continues to do so. Contentious areas include optimal type and amount in the diet, role in body weight regulation, and importance in the etiology of chronic disease(s).
Dietary Fats and Oils: The Good, Bad, and Ugly
Dietary fats and oils are unique in modern times in that they have good, bad, and ugly connotations. The aspects of dietary fat that are classified as
good include serving as a carrier of preformed fat-soluble vitamins, enhancing the bioavailability of fat-soluble micronutrients, providing essential substrate for the synthesis of metabolically active compounds, constituting critical structural components of cell membranes and lipoprotein particles, preventing carbohydrate-induced hypertriglyceridemia, and providing a concentrated form of metabolic fuel in times of scarcity. The aspects of dietary fat that can be classified as bad include serving as a reservoir for fat-soluble toxic compounds and contributing dietary saturated and trans fatty acids, and cholesterol. Aspects of dietary fat that can be classified as ugly include providing a concentrated form of metabolic fuel in times of excess and comprising the major component of atherosclerotic plaque, the
underlying cause of heart disease, stroke, and phlebitis.
Lipids in Food and in the Body
Fatty Acids
Fatty acids are hydrocarbon chains with a methyl and carboxyl end. The majority of dietary fatty acids have an even number of carbons. The range in chain length of common dietary fatty acids is broad. Fatty acids with 16 and 18 carbons make up the majority of fatty acids present in plants and animals. However, they are by no means the most metabolically active. Long-chain unsaturated fatty acids, such as arachidonic acid (C20:4), are common precursors of regulatory compounds. Essential nutrients are those that the body cannot synthesize or cannot synthesize in amounts adequate to meet needs. Linoleic acid (18:2) and/or fatty acids that can be derived from linoleic acid are essential fatty acids. These specific fatty acids are essential because humans cannot introduce a double bond above the ninth carbon from the carboxyl end of the acyl chain. To maintain optimal health, they must be supplied by the diet of humans. The metabolism of linoleic acid is represented in Figure 1. A wide range of fatty acids occur in nature. There are a number of features of fatty acids that distinguish one from another. In addition to chain length, they also vary with regard to degree of saturation and location of the double bond(s). Fatty acids with a single double bond are referred to as monounsaturated fatty acids, and those with two or more double bonds are referred to as polyunsaturated
Figure 1 Metabolism of linoleic acid: successive desaturation and elongation of linoleic acid (18:2n-6) through 18:3n-6, dihomo-gamma-linolenic acid (20:3n-6), arachidonic acid (20:4n-6), and docosatetraenoic acid (22:4n-6) to 22:5n-6.
fatty acids (Figure 2). The double bonds within unsaturated fatty acids can either be in the cis (hydrogen atoms on the same side of the acyl chain) or trans (hydrogen atoms on opposite sides of the acyl chain) conformation (Figure 3). The cis conformation is most commonly found in nature. Double bonds can also vary with regard to location within the acyl chain. The presence of double bonds, per se, and their number, position, and conformation, dictates the physical properties of the fatty acids. Unsaturated fatty acids of the same length with an identical number of double bonds can occur in multiple forms due to variation in the conformation of one or more of the double bonds (cis versus trans). They are referred to as geometric isomers (Figure 3). A common example is oleic acid (18:1c-9) and elaidic acid (18:1t-9). The presence of a cis relative to a trans double bond results in a greater bend or kink in the hydrocarbon chain. This kink impedes the fatty acids from aligning or packing together, thereby lowering the melting point of the fat. In a cell membrane this will be reflected in increased fluidity. In food this will be reflected in an oil that is liquid or fat that is soft at room temperature. Unsaturated fatty acids of the same length with an identical number of double bonds and conformation can also occur in multiple forms due to the location of the double bonds within the acyl chain. They are referred to as positional isomers. A common example is alpha-linolenic acid (18:3n-3) and gammalinolenic acid (18:3n-6). The difference in location of double bonds results in small alterations to the melting point yet large differences in the metabolic properties of the fatty acids. The most common distinction made among positional isomers of fatty acids is the location of the first double bond from the methyl end of the acyl chain. A fatty acid in which the first double bond occurs three carbons from the methyl end is termed an omega-3 fatty acid, frequently denoted n-3 fatty acid. This class of fatty acids is distinguished from the major class of fatty acids in which the first double bond occurs six carbons from the methyl end, termed an omega-6 or n-6 fatty acid. Enzymes that metabolize fatty acids distinguish among both positional and geometric isomers. The metabolic products of the different positional isomers of fatty acids have different and, occasionally, opposite physiological effects. Most double bonds within fatty acids occur in a nonconjugated sequence, both in the human body and in food. That is, a carbon atom with single carbon–carbon bonds separates the carbons making up the double bonds. Some double bonds occur in
Figure 2 Saturated, monounsaturated, and polyunsaturated (n-3 and n-6) acids.
the conjugated form, without an intervening carbon atom separating the double bonds. Conjugated double bonds tend to be more reactive chemically (i.e., more likely to become oxidized). Although there is considerable speculation about the role of conjugated double bond-containing fatty acids in human health, the current state of knowledge is insufficient to draw any firm conclusions.
Figure 3 Cis and trans double-bond-containing fatty acids, illustrated by oleic acid (cis) and elaidic acid (trans). (Copyright © The McGraw-Hill Companies, Inc.)
Triacylglycerol
Triacylglycerol is the major form of dietary lipid in fats and oils, whether derived from plants or animals. Triacylglycerol is composed of three fatty acids esterified to a glycerol molecule (Figure 4). The physical properties of the triacylglycerol are determined by the specific fatty acids esterified to the glycerol moiety and the actual position the fatty acid occupies. Each of the three carbons comprising the glycerol molecule allows for a stereochemically distinct fatty acid bond position: sn-1, sn-2, and sn-3. A triacylglycerol with three identical fatty acids is termed a simple triacylglycerol. These are exceedingly rare in nature. A triacylglycerol with two or three different fatty acids is termed a mixed
Figure 4 Triacylglycerol.
triacylglycerol, and these make up the bulk of the fat both in the human diet and in the body. The melting point of a triacylglycerol is determined by the position of the fatty acids esterified to glycerol and physical characteristics—their chain length and number, position, and conformation of the double bonds, and the stereochemical position. Approximately 90% of the molecular weight of triacylglycerol is accounted for by the fatty acids. The fatty acid profile of the diet is reflected, in part, in the fatty acid profile of the adipose tissue triacylglycerol. Such data have been used to approximate long-term food intake patterns of humans. Manipulating the dietary fat provided to domesticated animals is being considered as one approach to modifying the fatty acid profile of meat. Mono- and diacylglycerols have one and two fatty acids, respectively, esterified to glycerol. They rarely occur in large quantities in nature. Mono- and diacylglycerols are primarily intermediate products of triacylglycerol digestion and absorption, clearance from the bloodstream, or intracellular metabolism. They are frequently added to processed foods because of their ability to act as emulsifiers. Their presence in food products is noted on ingredient labels. Once consumed, triacylglycerol are hydrolyzed to free fatty acids and monoglycerides in the small intestine prior to absorption. These compounds enter the intestinal cell and are used to resynthesize triacylglycerol. This lipid is then incorporated into a nascent triacylglycerol-rich lipoprotein particle, termed chylomicron, for subsequent release into peripheral circulation. Chylomicrons are secreted directly into the lymph prior to entering the bloodstream. Once in circulation, triacylglycerol are hydrolyzed before crossing the plasma membrane of peripheral cells for subsequent metabolism. The primary enzyme that hydrolyzes triacylglycerol in plasma is lipoprotein lipase. Lipoprotein lipase
hydrolyzes triacylglycerol into two free fatty acids and 2-monoacylglycerol. The enzyme is attached to the luminal surface of capillary endothelial cells via a highly charged membrane-bound chain of heparan sulfate proteoglycans. The ability of lipoprotein lipase to bind both the chylomicron particle and the cell surface ensures the cellular uptake of free fatty acids that are generated from the hydrolysis. Once inside the cell, free fatty acids can be oxidized to provide energy, metabolized to biologically active compounds, incorporated into phospholipid and cholesteryl ester, or resynthesized into triacylglycerol for storage as a potential reservoir of fatty acids for subsequent use.
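The earlier statement that fatty acids account for approximately 90% of the molecular weight of triacylglycerol can be checked with a simple worked calculation. Triolein (glyceryl trioleate) is used here purely as an illustration, and the molecular weights are standard values rather than figures given in this article:

\[
\frac{3 \times (282.5 - 18.0)}{885.4} = \frac{793.5}{885.4} \approx 0.90
\]

where 282.5 is the molecular weight of oleic acid, 18.0 is the water lost in forming each ester bond, and 885.4 is the molecular weight of triolein; the fatty acyl residues thus contribute roughly 90% of the mass of the molecule.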
Phospholipid
There are only trace amounts of phospholipid in dietary fats and oils. However, because the fatty acids in fats and oils provide substrate for the synthesis of phospholipid in the body, it is important to discuss this subtype of fat. Phospholipid is a critical structural component of all cells, both plant and animal. It is composed of two fatty acids on the sn-1 and sn-2 positions and a moiety frequently referred to as a polar head group on the sn-3 position of glycerol, the latter via a phosphate bond (Figure 5). Phospholipid molecules are amphipathic—that is, there are both hydrophobic and hydrophilic domains in the molecule. The two fatty acids confer hydrophobic properties and the polar head group hydrophilic properties. The specific fatty acids esterified to the glycerol backbone tend to be unsaturated fatty acids. The different polar head groups, most commonly phosphorylcholine, phosphorylserine, phosphorylinositol, or phosphorylethanolamine, result in phospholipids that vary in size and charge. Due to their amphipathic nature, phospholipids serve as the major structural component of cellular membranes by forming bilayers and in so doing also serve as a reservoir for metabolically active unsaturated fatty acids. Due to their amphipathic properties, in the
Figure 5 Phospholipid.
small intestine they play an important role in the emulsification and absorption of dietary fat and fat-soluble vitamins. On the surface of lipoprotein particles, they provide a critical component in the packaging and transport of lipid in circulation.
Cholesterol
Dietary sources of cholesterol are limited to foods of animal origin. Cholesterol is an amphipathic molecule that is composed of a steroid nucleus and a branched hydrocarbon tail (Figure 6). Cholesterol occurs naturally in two forms, either as free (nonesterified) cholesterol or esterified to a fatty acid (cholesteryl ester). If esterified, the fatty acid is linked to cholesterol at the number 3 carbon of the sterol ring. Cholesterol serves a number of important functions in the body. Free cholesterol is a component of cell membranes and along with the fatty acid profile of the phospholipid bilayer determines membrane fluidity. The intercalation of free cholesterol into the phospholipid bilayer restricts motility of the fatty acyl chains and hence decreases fluidity. Free cholesterol is critical for normal nerve transmission. It makes up approximately 10% (dry weight) of total brain lipids. Cholesterol is a precursor of steroid hormones (i.e., estrogen and testosterone), vitamin D, adrenal steroids (i.e., hydrocortisone and aldosterone), and bile acids. This latter property is exploited in certain approaches to decrease plasma cholesterol levels by preventing the resorption of bile acids (recycling) and hence forcing the liver to use additional cholesterol for bile acid synthesis and in so doing creating an alternate mechanism for cholesterol excretion. The receptor-mediated cellular uptake of cholesterol from lipoprotein particles is critical to maintaining intracellular and whole body cholesterol homeostasis. Once internalized, lipoprotein-associated cholesterol that is released from lysosomes has three major effects in the cell. The free cholesterol inhibits the activity of 3-hydroxy-3-methylglutaryl CoA reductase, the rate-limiting enzyme in endogenous cholesterol biosynthesis. This property serves to decrease cholesterol biosynthesis commensurate with the uptake of cholesterol from circulating lipoprotein particles and
Figure 6 Cholesterol.
hence protects the cell from accumulating excess intracellular cholesterol. Free cholesterol inhibits the synthesis of receptors that mediate the uptake of lipoproteins from the bloodstream, thereby limiting the amount of additional cholesterol taken up by the cell. Free cholesterol increases the activity of acyl CoA cholesterol acyltransferase (ACAT), the intracellular enzyme that converts free cholesterol to cholesteryl ester. A high level of intracellular free cholesterol is cytotoxic, whereas cholesteryl ester is a highly nonpolar molecule and coalesces into a lipid droplet within the cell, preventing interaction with intracellular components. Increased ACAT activity is an important mechanism in preventing the accumulation of intracellular free cholesterol. Cholesterol can be esterified intracellularly, as previously indicated, by ACAT. ACAT uses primarily oleoyl CoA as substrate and the resulting product is primarily cholesteryl oleate. Cholesterol can also be esterified in plasma by lecithin cholesterol acyltransferase (LCAT). LCAT uses phosphatidylcholine as substrate; the resulting products are primarily cholesteryl linoleate and lysolecithin. Cholesteryl ester is less polar than free cholesterol and this difference dictates how the two forms of cholesterol are handled—intracellularly and within lipoprotein particles. Approximately one-third of cholesterol in plasma circulates as free cholesterol and approximately two-thirds as cholesteryl ester. Cholesterol in circulation is carried on all the lipoprotein particles (both intestinally derived chylomicrons and hepatically derived very low-density lipoprotein) or those generated during the metabolic cascade (intermediate-density lipoprotein, low-density lipoprotein (LDL), and high-density lipoprotein (HDL)). Free cholesterol is sequestered on the surface of lipoprotein particles within the phospholipid monolayer, whereas cholesteryl ester resides in the core of the lipoprotein particle. The majority of the cholesterol in circulation is carried on LDL particles. Cholesteryl ester is the major component of atherosclerotic plaque. In the arterial wall, cholesteryl ester is derived from the infiltration of lipoprotein-associated cholesteryl ester resulting from LCAT activity or is synthesized in situ as a result of ACAT activity. The fatty acid profile of the cholesteryl ester in arterial plaque can provide some indication of its source.
Other Sterols
Fats and oils derived from plants contain a wide range of phytosterols, compounds structurally similar to cholesterol. The difference between
Figure 7 Plant sterols (campesterol, sitosterol, stigmasterol).
phytosterols and cholesterol is related to their side chain configuration and/or steroid ring bond patterns. The most common dietary phytosterols are beta-sitosterol, campesterol, and stigmasterol (Figure 7). In contrast to cholesterol, phytosterols are only absorbed in trace amounts. For this reason, plant sterols have been used therapeutically to reduce plasma cholesterol levels. They compete with cholesterol for absorption; hence, they effectively reduce cholesterol absorption efficiency. The absorption efficiency of cholesterol in humans ranges from approximately 40 to 60%. Because the relative absorption of plant sterols, however low, is correlated with the percentage of cholesterol absorbed in an individual, there is considerable interest in using circulating plant sterol concentrations as a surrogate marker for cholesterol absorption efficiency. Limited data suggest efficiency of cholesterol absorption may have a significant effect on lipoprotein profiles and cardiovascular disease risk. Whether circulating phytosterols have an independent effect on cardiovascular disease risk is under investigation.
Dietary Fats and Oils and Cholesterol
Dietary fat serves critical functions in the human body. It provides a concentrated source of energy, slightly more than twice as much per gram as protein or carbohydrate. For this reason, the causes of energy imbalances are often attributed to this component of the diet. However, definitive data in this area are lacking. In addition to providing a source of metabolic energy, dietary fat provides a source of essential fatty acids, linoleic acid (18:2), and/or other fatty acids that are derived from linoleic acid. Dietary fat is the major carrier of preformed fat-soluble vitamins (vitamins A, D, E, and K). The bioavailability of these fat-soluble vitamins is dependent on fat absorption. Dietary fatty acids are incorporated into compounds that serve as structural components of biological membranes and lipoproteins, and as such they serve as a reservoir for fatty acids having subsequent metabolic fates.
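The phrase 'slightly more than twice as much per gram' can be made concrete using the standard Atwater energy factors of approximately 9 kcal/g for fat and 4 kcal/g for carbohydrate and protein; these factors are general nutrition values and are supplied here rather than stated in this article:

\[
\frac{9\ \mathrm{kcal/g}}{4\ \mathrm{kcal/g}} \approx 2.25
\]

so, for example, 30 g of fat supplies roughly \(30 \times 9 = 270\) kcal, compared with roughly \(30 \times 4 = 120\) kcal from the same weight of carbohydrate.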
Fatty Acid Profile of Common Dietary Fats
Dietary fats and oils derive from both animal and plant sources, primarily in the form of triacylglycerol. The fatty acid profile of dietary fats commonly consumed by humans varies considerably (Figure 8). In general, fats of animal origin tend to be relatively high in saturated fatty acids, contain cholesterol, and are solid at room temperature. A strong positive association has been demonstrated in epidemiological, intervention, and animal data between cardiovascular disease risk and intakes of saturated fatty acids. The exception is stearic acid (18:0), a saturated fatty acid of which a large proportion is metabolized to oleic acid (18:1), a monounsaturated fatty acid. Fats and oils of plant origin tend to be relatively high in unsaturated fatty acids (both monounsaturated and polyunsaturated) and are liquid at room temperature. Notable exceptions include plant oils termed tropical oils (palm, palm kernel, and coconut oils) and hydrogenated fat. Tropical oils are high in saturated fatty acids but remain liquid at room temperature because they contain a high proportion of short-chain fatty acids. Hydrogenated plant oils can be relatively high in saturated and/or trans fatty acids due to chemical changes induced during processing, including conversion of unsaturated to saturated bonds and cis to trans double bonds.
Figure 8 Relative composition of common dietary fats (safflower oil, corn oil, olive oil, cottonseed oil, beef tallow, palm oil, coconut oil; proportions of saturated, monounsaturated, 18:2, and 18:3 fatty acids).
Major Contributors of Dietary Saturated, Monounsaturated, and Polyunsaturated Fatty Acids and Cholesterol
The major types of dietary fats and oils are generally broken down on the basis of animal and plant sources. The relative balance of animal and plant foods is an important determinant of the fatty acid profile of the diet. However, with the increasing prominence of processed, reformulated, and genetically modified foods, it is becoming more difficult to predict the fatty acid profile of the diet on the basis of the animal versus plant distinction. According to the National Health and Nutrition Examination Survey (NHANES) recall data from 1999–2000, the 10 major dietary sources of saturated fatty acids in US diets are regular cheese (6.0% of the total grams of saturated fatty acids consumed), whole milk (4.6%), regular ice cream (3.0%), 2% low-fat milk (2.6%), pizza with meat (2.5%), French fries (2.5%), Mexican dishes with meat (2.3%), regular processed meat (2.2%), chocolate candy (2.1%), and mixed dishes with beef (2.1%). Hence, the majority of saturated fatty acids are contributed by regular dairy products (16%), and the top 10 sources contribute 30% of the total saturated fatty acids consumed. The increased prevalence of fat-free and low-fat dairy products provides a viable option with which to encourage a populationwide decrease in saturated fat intake. To put the value of decreasing populationwide intakes of saturated fat into perspective, it has been estimated that the isocaloric replacement of 5% of energy from saturated fatty acids with complex carbohydrate, on average, would reduce total cholesterol levels by 10 mg/dl (0.26 mmol/l) and LDL cholesterol by 7 mg/dl (0.18 mmol/l). For a person at moderately high risk of developing cardiovascular disease with a total cholesterol level of 220 mg/dl (5.69 mmol/l) and LDL cholesterol level of 140 mg/dl (3.62 mmol/l), such a dietary modification would decrease total and LDL cholesterol levels by 4.5 and 5%, respectively. Each 1% decrease in total cholesterol levels has been associated with a 2% reduction in the incidence of coronary heart disease. Using this example, that would theoretically translate into a 9% decrease in cardiovascular disease risk. However, it is important to note that decreasing the saturated fatty acid content of the diet should not necessarily be done by displacing fat with carbohydrate. As discussed in the next section, the quantity of dietary fat, relative to carbohydrate and protein, also impacts on blood lipid levels and lipoprotein profiles. The 10 major dietary sources of monounsaturated fatty acids in US diets are French fries (3.3% of the total grams of monounsaturated fatty acids
consumed), regular processed meat (2.5%), regular cookies (2.5%), regular miscellaneous snacks (2.4%), pizza with meat (2.4%), regular salad dressing (2.4%), regular cheese (2.3%), Mexican dishes with meat (2.3%), sausage (2.1%), and mixed dishes with beef (2.1%). There is little change in total or LDL cholesterol levels from the isocaloric replacement of monounsaturated fatty acids by complex carbohydrate. However, it is important to note that approximately one-half of the monounsaturated fatty acids consumed in the United States come from animal fats. Therefore, a decrease in saturated fatty acid intake would be predicted to decrease monounsaturated fatty acid intake unless vegetable oils high in monounsaturated fatty acids, such as canola or olive oil, replaced the animal fat. The 10 major dietary sources of n-6 polyunsaturated fatty acids in US diets are regular salad dressing (8.8% of the total grams of polyunsaturated fatty acids consumed), regular white bread (4.2%), regular mayonnaise (3.0%), French fries (2.6%), regular cake (2.5%), regular cookies (2.1%), mixed dishes with chicken and turkey (2.1%), regular miscellaneous snacks (2.0%), regular potato chips (2.0%), and fried fish (2.0%). The distribution of polyunsaturated fatty acids among commonly consumed foods is wide. It has been estimated that the isocaloric replacement of complex carbohydrate with polyunsaturated fatty acids for 5% of energy, on average, will reduce total cholesterol levels by 5 mg/dl (0.13 mmol/l) and LDL cholesterol by 4 mg/dl (0.11 mmol/l). For a person at moderately high risk of cardiovascular disease with a total cholesterol level of 220 mg/dl (5.69 mmol/l) and LDL cholesterol level of 140 mg/dl (3.62 mmol/l), such a dietary modification would decrease total and LDL cholesterol levels by 2.1 and 3.6%, respectively, and potentially result in a 4% decrease in cardiovascular disease risk. The 10 major dietary sources of cholesterol in US diets are fried eggs (16.6% of the total milligrams of cholesterol consumed), regular eggs including scrambled eggs (8.4%), mixed dishes with eggs (4.5%), mixed dishes with beef (2.9%), whole milk (2.6%), regular cheese (2.5%), fried fish (2.3%), mixed dishes with chicken and turkey (2.3%), lean cut meat (2.1%), and regular processed meat (2.1%). Eggs or foods high in eggs contribute approximately 30% of the total dietary cholesterol intake. It has been estimated that reducing cholesterol intakes by 200 mg/day, on average, will reduce total cholesterol levels by 5 mg/dl (0.13 mmol/l) and LDL cholesterol by 2.6 mg/dl (0.10 mmol/l). Such a change would be predicted to have a similar risk effect as displacing 5% of energy as carbohydrate with polyunsaturated
fatty acids—that is, reducing cardiovascular disease risk by approximately 4%.
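The arithmetic behind these risk estimates can be reconstructed from the figures quoted above. The following worked example is illustrative, uses the saturated fat case, and supplies the standard unit conversion for cholesterol (1 mg/dl is approximately 0.0259 mmol/l), which is not stated in the text:

\[
\frac{10\ \mathrm{mg/dl}}{220\ \mathrm{mg/dl}} \approx 4.5\% \ \text{(total cholesterol)}, \qquad
\frac{7\ \mathrm{mg/dl}}{140\ \mathrm{mg/dl}} = 5\% \ \text{(LDL cholesterol)}
\]

\[
4.5\% \times 2 \approx 9\% \ \text{estimated reduction in coronary heart disease incidence}
\]

\[
220\ \mathrm{mg/dl} \times 0.0259 \approx 5.69\ \mathrm{mmol/l}, \qquad
140\ \mathrm{mg/dl} \times 0.0259 \approx 3.62\ \mathrm{mmol/l}
\]

The same reasoning, applied to the smaller lipid changes quoted for the polyunsaturated fatty acid and dietary cholesterol substitutions, yields the approximately 4% risk reductions cited above (2 × 2.1% ≈ 4%).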
Dietary Fat and Cardiovascular Prevention
Amount in Diet
When considering the percentage of energy contributed by dietary fats and oils (amount of fat) and cardiovascular disease prevention and management, there are two major factors—the impact on plasma lipoprotein profiles and body weight. The potential relationship with body weight is important because overweight and obesity are strongly associated with elevated lipid and lipoprotein levels, blood pressure, dyslipidemia, and type 2 diabetes—all potential risk factors for cardiovascular disease. With respect to plasma lipoprotein profiles, the focus is usually on triglyceride and HDL cholesterol levels or total cholesterol:HDL cholesterol ratios. When body weight is maintained at a constant level, decreasing the total fat content of the diet, expressed as a percentage of total energy, and replacing it with carbohydrate frequently results in an increase in triglyceride levels, decrease in HDL cholesterol levels, and a less favorable (higher) total cholesterol:HDL cholesterol ratio. Low levels of HDL cholesterol are an independent risk factor for cardiovascular disease.
in normal weight subjects and 3 kg in overweight or obese subjects. However, it is important to note that in contrast to what would have been predicted, during the course of the studies included in the reviews, in no case was weight gain reported.
Fatty Acid Profile
green cabbage > kohlrabi > broccoli > turnip > black radish. In mammalian cells, structural chromosome aberrations were observed with some of the juices, with the most potent being Brussels sprouts and white cabbage, and genotoxic effects were accompanied by decreased cell viability. The isothiocyanate-containing fraction (and other breakdown products of glucosinolates) of these brassica juices was found to contain 70–80% of the total genotoxic activity of the juices. The flavonoid- and other phenolic-containing fraction had a much weaker effect. In related
studies, the isothiocyanates, allyl isothiocyanate and phenethyl isothiocyanate, were found to be more than 1000-fold more cytotoxic in a Chinese hamster ovary cell line than their parent glucosinolates (sinigrin and gluconasturtiin, respectively). Phenethyl isothiocyanate also induced genotoxic effects (chromosome aberrations and sister chromatid exchanges). More data are required before an overall recommendation can be made regarding the likely beneficial or otherwise influences of glucosinolates (and their derivatives) on human health.
S-Methyl Cysteine Sulfoxide
S-methyl cysteine sulfoxide is another sulfur-containing phytochemical found in all brassica vegetables, in addition to glucosinolates. Both S-methyl cysteine sulfoxide and methyl methane thiosulfinate (its main metabolite) can block genotoxicity, induced by chemicals, in mice. S-methyl cysteine sulfoxide is thus likely to contribute to the observed ability of brassica vegetables to protect against cancer in both human and animal studies. It is of interest that a hydrolytic product of S-methyl cysteine sulfoxide was linked in the 1960s to the severe hemolytic anemia or kale poisoning observed in cattle in Europe in the 1930s.
Potential Importance of Other Phytochemicals to Human Health: Molecular Mechanisms of Action
Allium Organosulfur Compounds
Allium organosulfur compounds may be phytochemicals of importance to human health by acting as antioxidants, thus protecting against free radical-mediated damage to important cellular targets such as DNA and membranes implicated in cancer and neurodegenerative diseases and aging. Protection against oxidative damage to LDL and cellular membranes could also protect against cardiovascular disease. Aged garlic extract (AGE) inhibits lipid peroxidation and the oxidative modification of LDL, reduces ischemic/reperfusion injury, and enhances the activity of the cellular antioxidant enzymes superoxide dismutase, catalase, and glutathione peroxidase. AGE also inhibits the activation of the oxidant-induced transcription factor NF-κB. Investigation of the major organosulfur compounds in AGE identified highly bioavailable water-soluble organosulfur compounds with antioxidant activity, such as S-allylcysteine and S-allylmercaptocysteine. Organosulfur compounds such as diallyl sulfide may also protect against cancer by modulation of carcinogen metabolism, and this may involve altered
ratios of phase 1 and phase 2 drug-metabolizing enzymes. Various garlic preparations including aged garlic extract have been shown to inhibit the formation of nitrosamine-type carcinogens in the stomach, enhance the excretion of carcinogen metabolites, and inhibit the activation of polyarene carcinogens. Inhibitory effects of organosulfur compounds on the growth of cancer cells in vitro, including human breast cancer cells and melanoma cells, have been observed. Modulation of cancer cell surface antigens, associated with cancer cell invasiveness, has been observed, and in some cases cancer cell differentiation can be induced. AGE can reduce the appearance of mammary tumors in rats treated with the powerful carcinogen dimethyl benz(a)anthracene (DMBA), which is activated by oxidation by cytochromes P450 to form the DNA binding form of DMBA diol epoxide, resulting in DNA lesions and cancer initiation. The antibacterial activity of these allium compounds may also prevent bacterial conversion of nitrate to nitrite in the stomach. This may reduce the amount of nitrite available for reacting with secondary amines to form the nitrosamines likely to be carcinogenic particularly in the stomach. Allium organosulfur compounds appear to possess a range of potentially cardioprotective effects. In one study, 432 cardiac patients were divided into a control group (210) and a garlic-supplemented group (222), and garlic feeding was found to reduce mortality by 50% in the second year and by approximately 66% in the third year. Furthermore, the rate of reinfarction was reduced by 30 and 60% in the second and third year, respectively. It should be noted that only a small number of patients in both groups experienced the end event of death or myocardial infarction, and a much larger scale study is needed. AGE lowers cholesterol and triglycerides in laboratory animals and can reduce blood clotting tendencies. It has been suggested that garlic supplementation at a level of 10–15 g of cooked garlic daily could lower serum cholesterol by 5–8% in hypercholesterolemic individuals. However, there may be more important cardioprotective effects of garlic. In animal studies, AGE suppressed plasma thromboxane B2 and platelet factor levels, which are important factors in platelet aggregation and thrombosis. In rats, frequent low doses (50 mg/kg) of aqueous extracts of garlic or onions (onion was less potent) produced significant antithrombotic activity (lowering of thromboxane B2) without toxic side effects. Aqueous extracts of raw garlic also inhibited cyclooxygenase activity in rabbit platelets, again contributing to an antithrombotic effect. In addition, AGE and S-allyl cysteine and S-allyl mercaptocysteine have antiplatelet adhesion effects. Platelet
adhesion to the endothelial surface is involved in atherosclerosis initiation. Furthermore, S-allylmercaptocysteine inhibits the proliferation of rat aortic smooth muscle cells, another important atherosclerotic process. Indeed, this antiproliferative effect on smooth muscle cells may be indicative of a possible antiangiogenic ability in relation to prevention of tumor growth and metastasis.
Saponins
Saponins are another group of steroidal phytochemicals of interest that may, in addition to isoflavone phytoestrogens, contribute to the health protective effects of soya products. Soyabeans have a high saponin content and soyabean saponins have been shown to have a growth inhibitory effect on human carcinoma cells in vitro, probably by interacting with the cell membrane and increasing membrane permeability. The proposed anticarcinogenic mechanisms of saponins include normalization of carcinogen-induced cell proliferation, direct cytotoxicity, bile acid binding, and immune-modulating effects. Of particular interest is the finding that saponins actively interact with cell membrane components: They possess surface-active characteristics because of the amphiphilic nature of their chemical structure. Thus, they can act to alter cell membrane permeability and cellular function. Soybean saponins have been reported to inhibit hydrogen peroxide damage to mouse fibroblast cells and thus may protect human health through antioxidant-mediated mechanisms. Saponins from ginseng root (Panax ginseng C.A. Mey.) may also be important. Antioxidant effects have been reported for total ginseng saponins and their individual saponins (ginsenosides Rb1, Rb2, Rc, and Rd; others include Re and Rg1). Furthermore, ginsenosides Rb1 and Rb2 protected cultured rat myocardiocytes against superoxide radicals, and the mechanism for this may involve induction of genes responsible for antioxidant defences rather than radical scavenging. Ginsenosides stimulate endogenous production of nitric oxide in rat kidney, and this may contribute to the observed antinephritic action of these compounds and suggest a protective role in the kidney. Furthermore, it has been suggested that the observed cardioprotective effects of ginsenosides in animal models may be mediated by nitric oxide release. In addition, ginsenoside-enhanced release of nitric oxide from endothelial cells, particularly from perivascular nitric oxidergic nerves in the corpus cavernosum of animal models, may partly account for the reported aphrodisiac effects of ginseng. Also, ginsenosides have been shown to have beneficial effects on inferior human
sperm motility and progression. It is of interest that regulation of lipid metabolism by ginseng has been reported, and although the mechanism of action remains unclear, it is likely that the peroxisome proliferator-activated receptor-α is involved.
Other Phytochemicals of Interest
A wide range of other phytochemicals may have important beneficial effects on human health if consumed in sufficient amounts to be efficacious. In many cases, their full spectrum of molecular actions remains to be elucidated. Nevertheless, the following phytochemicals and their main botanical sources are deemed worthy of mention. The phytochemicals dihydrophthalic acid, ligustilide, butylidene phthalide, and n-valerophenone-O-carboxylic acid have been isolated from Angelica root (Angelica sinensis). They are likely to contribute to the observed circulatory modulating effects of Angelica root, including increasing coronary flow, modulation of myocardial muscular contraction, and antithrombotic effects. Phytochemicals extracted from licorice (Glycyrrhiza glabra L.) include glycyrrhetic acid, glycyrrhizic acid (the sweet principle of licorice), and an active saponin glycyrrhizin (a 3-O-diglucuronide of glycyrrhetic acid). In rats, dietary supplementation with 3% licorice elevated liver glutathione transferase activity, suggesting a potential detoxification and anticancer effect of these phytochemicals because glutathione transferase catalyses the formation of glutathione conjugates of toxic substances for elimination from the body. Antibacterial, antiviral, antioxidant, and anti-inflammatory effects have also been reported for these compounds. Indeed, glycyrrhizin has been reported to inhibit HIV replication in cultures of peripheral blood mononuclear cells taken from HIV-seropositive patients. Phytochemicals found in ginkgo (Ginkgo biloba) leaves, including ginkgolic acid, hydroginkgolic acid, ginkgol, bilobol, ginon, ginkgotoxin, ginkgolides (A–C), and a number of flavonoids common to other plants, such as kaempferol, quercetin, and rutin, are currently attracting attention for their possible effects on circulation, particularly cerebral circulation, and this may improve brain function and cognition. Indeed, ginkgo, ginseng, and a combination of the two extracts have been found to improve different aspects of cognition in healthy young volunteers. A number of studies have reported that extracts of ginkgo leaves enhanced brain circulation, increased the tolerance of the brain to hypoxia, and improved cerebral hemodynamics. It has been suggested that these effects are mediated via calcium ion flux over
smooth muscle cell membranes and via stimulation of catecholamine release. In addition, protection against free radical-mediated retinal injury has been reported; thus, other antioxidant-mediated protective effects on human health are also possible. Damage to mitochondrial DNA could play a role in neurodegenerative diseases such as Alzheimer's disease and Parkinson's disease. There is limited evidence for significant improvements in CHD patients following treatment with a daily dose equivalent to 12 mg total ginkgetin. Ginkgolide B-mediated inhibition of glucocorticoid production has been reported and is likely to result from specific transcriptional suppression of the adrenal peripheral-type benzodiazepine receptor gene in rats. This suggests that ginkgolide B may be useful pharmacologically to control excess glucocorticoid formation.
See also: Cancer: Epidemiology and Associations Between Diet and Cancer. Cereal Grains. Coronary Heart Disease: Prevention. Fruits and Vegetables. Phytochemicals: Epidemiological Factors. Tea.
Further Reading
Adlercreutz CHT (2002) Phyto-oestrogens and cancer. Lancet Oncology 3: 32–41.
Arts IC, Hollman PC, Feskens EJ, Bueno de Mesquita HB, and Kromhout D (2001) Catechin intake might explain the inverse relationship between tea consumption and ischemic heart disease: The Zutphen Elderly Study. American Journal of Clinical Nutrition 74: 227–232.
Beatty ER, O'Reilly JD, England TG et al. (2000) Effect of dietary quercetin on oxidative DNA damage in healthy human subjects. British Journal of Nutrition 84: 919–925.
File SE, Jarrett N, Fluck E et al. (2001) Eating soya improves human memory. Psychopharmacology 157: 430–436.
Gupta K and Panda D (2002) Perturbation of microtubule polymerization by quercetin through tubulin binding: A novel mechanism of its antiproliferative activity. Biochemistry 41: 13029–13038.
Kim H, Xu J, Su Y et al. (2001) Actions of the soy phytoestrogen genistein in models of human chronic disease: Potential involvement of transforming growth factor β. Biochemical Society Transactions 29: 216–222.
Knekt P, Kumpulainen J, Jarvinen R et al. (2002) Flavonoid intake and risk of chronic diseases. American Journal of Clinical Nutrition 76: 560–568.
Mithen R, Faulkner K, Magrath R et al. (2003) Development of isothiocyanate-enriched broccoli and its enhanced ability to induce phase 2 detoxification enzymes in mammalian cells. Theoretical and Applied Genetics 106: 727–734.
O'Reilly JD, Mallet AI, McAnlis GT et al. (2001) Consumption of flavonoids in onions and black tea: Lack of effect on F2-isoprostanes and autoantibodies to oxidized LDL in healthy humans. American Journal of Clinical Nutrition 73: 1040–1044.
Rowland IR, Wiseman H, Sanders TAB, Adlercreutz H, and Bowey EA (2000) Interindividual variation in metabolism of soy isoflavones and lignans: Influence of habitual diet on equol production by the gut microflora. Nutrition and Cancer 36: 27–32.
Shapiro TA, Fahey JW, Wade KL, Stephenson KK, and Talalay P (2001) Chemoprotective glucosinolates and isothiocyanates of broccoli sprouts: Metabolism and excretion in humans. Cancer Epidemiology Biomarkers and Prevention 10: 501–508.
Thomson M and Ali M (2003) Garlic [Allium sativum]: A review of its potential use as an anticancer agent. Current Cancer Drug Targets 3: 67–81.
Wiseman H (2000) The therapeutic potential of phytoestrogens. Expert Opinion on Investigational Drugs 9: 1829–1840.
Wiseman H, Goldfarb P, Ridgway T, and Wiseman A (2000) Biomolecular Free Radical Toxicity: Causes and Prevention. Chichester, UK: John Wiley.
Wiseman H, O'Reilly JD, Adlercreutz H et al. (2000) Isoflavone phytoestrogens consumed in soy decrease F2-isoprostane concentrations and increase resistance of low-density lipoprotein to oxidation in humans. American Journal of Clinical Nutrition 72: 395–400.
Phyto-estrogens see Phytochemicals: Classification and Occurrence; Epidemiological Factors
Polyunsaturated Fatty Acids see Fatty Acids: Omega-3 Polyunsaturated; Omega-6 Polyunsaturated
POTASSIUM
L J Appel, Johns Hopkins University, Baltimore, MD, USA
© 2005 Elsevier Ltd. All rights reserved.
The major intracellular cation in the body is potassium, which is maintained at a concentration of approximately 145 mmol/l of intracellular fluid but at much lower concentrations in the plasma and interstitial fluid (3.8–5 mmol/l of extracellular fluid). The high intracellular concentration of potassium is maintained via the activity of the Na+/K+-ATPase pump. Because this enzyme is stimulated by insulin, alterations in the plasma concentration of insulin can affect cellular influx of potassium and thus plasma concentration of potassium. Relatively small changes in the concentration of extracellular potassium greatly affect the extracellular/intracellular potassium ratio and thereby affect nerve transmission, muscle contraction, and vascular tone. In unprocessed foods, potassium occurs mainly in association with bicarbonate-generating precursors such as citrate and, to a lesser extent, with phosphate. In processed foods to which potassium is added and in supplements, the form of potassium is potassium chloride. In healthy people, approximately 85% of dietary potassium is absorbed. Most potassium (approximately 77–90%) is excreted in urine,
whereas the remainder is excreted mainly in feces, with much smaller amounts excreted in sweat. Because most potassium that is filtered by the glomerulus of the kidney is reabsorbed (70–80%) in the proximal tubule, only a small amount of filtered potassium reaches the distal tubule. The majority of potassium in urine results from secretion of potassium into the cortical collecting duct, a secretion regulated by a number of factors including the hormone aldosterone. An elevated plasma concentration of potassium stimulates the adrenal cortex to release aldosterone, which in turn increases secretion of potassium in the cortical collecting duct.
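One way to see why small changes in extracellular potassium matter so much is the Nernst equation for the potassium equilibrium potential; the worked numbers below are an illustration using representative concentrations from the ranges above, not values given in this article, and the calculation ignores the contribution of other ions.

\[
E_{\mathrm{K}} \approx 61.5 \,\log_{10}\!\left(\frac{[\mathrm{K}^+]_{\mathrm{o}}}{[\mathrm{K}^+]_{\mathrm{i}}}\right)\ \mathrm{mV}\quad(\text{at }37\ ^\circ\mathrm{C})
\]

With an intracellular concentration of 145 mmol/l and an extracellular concentration of 4.4 mmol/l, this gives roughly 61.5 log10(4.4/145), or about -93 mV; raising the extracellular concentration to 5.5 mmol/l shifts the result to about -87 mV. A change of little more than 1 mmol/l in extracellular potassium therefore moves the equilibrium potential by about 6 mV, which is consistent with the strong effects on nerve and muscle excitability described above.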
Acid–Base Considerations
A diet rich in potassium from fruits and vegetables favorably affects acid–base metabolism because these foods are also rich in precursors of bicarbonate. Acting as a buffer, the bicarbonate-yielding organic anions found in fruits and vegetables neutralize noncarbonic acids generated from meats and other high-protein foods. In the setting of an inadequate intake of bicarbonate precursors, excess acid in the blood titrates bone buffer. As a result, bone becomes demineralized and calcium is released. Urinary calcium excretion increases. This state has been termed a 'low-grade metabolic acidosis.' Increased bone breakdown and
calcium-containing kidney stones are adverse clinical consequences of excess diet-derived acids. Diets rich in potassium with its bicarbonate precursors might prevent kidney stones and bone loss. In processed foods to which potassium is added and in potassium supplements, the conjugate anion is typically chloride, which cannot act as a buffer.
Adverse Effects of Insufficient Potassium
Severe potassium deficiency, which most commonly results from diuretic-induced potassium losses, is characterized by a serum potassium concentration of less than 3.5 mmol/l. The adverse consequences of hypokalemia are cardiac arrhythmias, muscle weakness, and glucose intolerance. Moderate potassium deficiency, which commonly results from an inadequate dietary intake of potassium, occurs without hypokalemia and is characterized by increased blood pressure, increased salt sensitivity, an increased risk of kidney stones, and increased bone turnover. An inadequate intake of dietary potassium may also increase the risk of stroke and perhaps other cardiovascular diseases.
Kidney Stones and Bone Demineralization
Because of its effects on acid–base balance, an increased dietary potassium intake might have favorable effects on kidney stone formation. In one large observational study of women (Figure 1), there was a progressive inverse relationship between greater intake of potassium and incident kidney stones. At a median potassium intake of 4.7 g/day (119 mmol/day), the risk of developing a kidney stone was 35% less compared with that for women with the lowest potassium intakes.
Pregnancy Weight Gain Recommendations
Recommended total weight gain during pregnancy, by prepregnancy body mass index (BMI)
Underweight (BMI <19.8): 12.8–18.0 kg (28–40 lb)
Normal weight (BMI 19.8–26.0): 11.5–16.0 kg (25–35 lb)
Overweight (BMI 26.0–29.0): 7.0–11.5 kg (15–25 lb)
Obese (BMI >29.0): 6.0 kg (13 lb)
Modified from Institute of Medicine, Committee on Nutritional Status during Pregnancy and Lactation (1990) Nutrition During Pregnancy. Weight Gain. Nutrient Supplements. Food and Nutrition Board. Washington, DC: National Academy Press.
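As a purely illustrative sketch (not part of the IOM guidance itself), the snippet below shows how a prepregnancy BMI could be computed and matched to the gain ranges in the table above. The function names are ours, the category boundaries and gain ranges are taken directly from the table, and the boundary handling (which side a borderline BMI falls on) is an arbitrary choice.

```python
def prepregnancy_bmi(weight_kg: float, height_m: float) -> float:
    """BMI = weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# Recommended total gain (kg) by prepregnancy BMI, from the table above (IOM, 1990).
# The obese category is tabulated as a single figure (6.0 kg).
GAIN_RANGES_KG = [
    (0.0, 19.8, (12.8, 18.0)),          # underweight
    (19.8, 26.0, (11.5, 16.0)),         # normal weight
    (26.0, 29.0, (7.0, 11.5)),          # overweight
    (29.0, float("inf"), (6.0, None)),  # obese
]

def recommended_gain(weight_kg: float, height_m: float):
    """Return (BMI, recommended total gain range in kg) for a woman's prepregnancy measurements."""
    bmi = prepregnancy_bmi(weight_kg, height_m)
    for low, high, gain in GAIN_RANGES_KG:
        if low <= bmi < high:
            return bmi, gain
    return bmi, None

# Example: 62 kg and 1.65 m gives a BMI of about 22.8 ("normal weight"),
# for which the table advises a total gain of roughly 11.5-16.0 kg.
print(recommended_gain(62, 1.65))
```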
The IOM report also provides charts of recommended gains over the course of pregnancy for each BMI group (Figure 1), enabling the adequacy of weight gain to be tracked for individual women. To use the chart, women's height and weight should be measured as near to the time of conception as possible (because pregnancy causes a temporary reduction in height) and used to obtain their BMI from a table. The US recommendations are deemed to be appropriate for women in developed countries worldwide.
Pattern of Weight Gain
Figure 1 The relationship between maternal pregnancy weight gain and birth weight (estimated birth weight plotted against maternal weight gain for underweight, ideal weight, moderately overweight, and very overweight women). (Reproduced with permission from the Institute of Medicine, Committee on Nutritional Status during Pregnancy and Lactation (1990) Nutrition during Pregnancy. Weight Gain. Nutrient Supplements. Food and Nutrition Board. Washington, DC: National Academy Press.)
Relatively little (1–2.5 kg) of the total weight gain during pregnancy occurs during the first trimester, whereas gain in the last two trimesters is relatively linear. Nevertheless, it is important to pay attention to the quality of pregnant women's diets during the first trimester and to ensure that they do not restrict their intake during this time, when there is the
strongest risk of nutrition-related birth defects and spontaneous abortions. In some studies, an association has been noted between low weight gain in the first trimester and increased risk of spontaneous preterm delivery.
Variability in Weight Gain
The BMI-specific target ranges for pregnancy weight gain are relatively narrow, but a very wide range of gain actually occurs. In a California study, for example, only 50% of the mothers who had an uncomplicated pregnancy with a normal birthweight infant gained the recommended 12.5–18 kg, with the remainder gaining more or less. Since a substantial amount of the variation in weight gain is due to physiological variability and prepregnancy BMI, deviation from the recommended range may not necessarily be cause for concern. However, it is especially important to assess the dietary patterns and other behaviors of women whose weight gain is unexpectedly high or low. The IOM Implementation Guide for weight gain recommendations provides helpful information on the assessments that should be used.
Maternal Weight Gain and Birth Weight
Inadequate weight gain is associated with poor fetal growth even when the contribution of fetal weight and factors such as length of gestation are taken into consideration. Birth weight is an important determinant of child health and survival; low-birth-weight (<2500 g) infants face substantially higher risks of mortality and morbidity. Maternal obesity, by contrast, is associated with an increased risk of late (>28 weeks of gestation) fetal deaths. In addition, the prevalence of gestational hypertension increases 3-fold and there is a 3–4 times greater risk of gestational diabetes in obese pregnant women.
Exercising Women
Women who are physically fit at conception appear to be able to continue to exercise during pregnancy without harm to themselves or the fetuses, as long as the activity is not too strenuous or prolonged. In
several studies it was observed that exercising women gained 2 or 3 kg less than those who were more sedentary.
Pregnancy Weight Gain and Postpartum Risk of Obesity On average, well-nourished women retain relatively little weight approximately 1 year postpartum (approximately 0.5–1.5 kg). Delivery is followed by a rapid loss of weight in the subsequent 2 weeks due to fluid loss. This is followed by a slower rate of loss for the next 6 months, so a complete return to preconception weight should not be expected in less time than this. In general, weight still retained at 1 year postpartum is unlikely to be lost without lowering intake and/or increasing physical activity. If weight retention is substantial, it can add to the risk of obesity in the longer term, and obesity is a major public health concern in many countries. The relatively low average weight retention postpartum obscures the fact that many women do retain an excessive amount of weight. Those who retain most are likely to have gained large amounts of weight during pregnancy. At 10–18 months postpartum, weight retention was 2.5 kg for women who gained more than the IOM recommendation compared to 0.7 kg for white women and 3.2 kg for black women who gained the advised amount. These large racial differences in weight retention have not been explained and certainly may be a risk factor for the higher prevalence of later obesity in this group. Most women breast-feed their infants exclusively or partially for a relatively short time. There is little difference in weight loss between women who breast-feed and those who do not for periods up to 6 months postpartum. This is presumably due to the greater appetite and energy intake of women who are breast-feeding and perhaps to dieting on the part of non-breast-feeders. One study of women who breast-fed until 12 months postpartum did report a 2-kg greater weight loss compared to women who stopped breast feeding before 3 months. Even more weight was lost by those who breast-fed more often and gave longer feeds. Women with a high BMI at conception tend to either lose or gain more weight postpartum than those with a normal BMI; approximately one-third end up weighing less than at conception, and one-third weigh substantially more. The reasons for the highly variable weight retention in this group are not known. Although inadequate intake of nutrients during lactation can lead to maternal nutrient depletion and lower breast milk content of some nutrients and
especially vitamins, breast-feeding women who choose to lose weight can do so by exercising and/or reasonable restriction of energy intake. Exercising by jogging, biking, and aerobics for 45 minutes, four or five times per week for 12 weeks did not affect well-nourished mothers' ability to lactate or influence their milk composition. However, it is possible that severe energy deficit in lactation, especially of thinner women, will reduce breast milk volume.
Impact of Supplementation
Numerous investigators have explored the benefits of energy and/or protein supplementation for pregnancy weight gain and other outcomes. However, relatively few trials have randomly assigned these supplements and used control diets. In 1995, a statistical analysis was conducted of the 10 studies that met this criterion. Most, but not all, of these studies were performed in developing countries. A 5-year controlled trial in The Gambia provided daily prenatal dietary supplements (two biscuits) that contained 4250 kJ energy and 22 g protein. This supplement increased pregnancy weight gain and birth weight during the hungry and harvest seasons. There was a significant but very small increase in head circumference and a significant reduction in perinatal mortality. It was originally thought that timing of supplementation during later gestation would be most likely to increase birth weight. This hypothesis was supported by data from the Dutch famine, during which women in their third trimester had infants with the lowest birth weights. An increase in low birth weight prevalence was also observed in The Gambia when third-trimester gestation overlapped with the hungry season. Nonetheless, research suggests that nutrition interventions initiated earlier in pregnancy will have the strongest effect on birth weight. There are enduring advantages to continued supplementation postpartum (during lactation) and into the ensuing pregnancy. A longitudinal study in Guatemala reported a significant increase (approximately 350 g) in birth weight in the second pregnancy when the mother was supplemented during the previous pregnancy and throughout subsequent lactation and the second pregnancy compared to those who were not supplemented during the prior pregnancy. Overall, it is appropriate for supplementation to begin as early in the pregnancy as possible so that both mother and fetus receive the maximum benefits for optimal health and development. However, this advice is tempered by concerns that supplementation of short Asian women may increase their offspring's risk of diabetes in later life.
See also: Adolescents: Nutritional Requirements. Breast Feeding. Lactation: Physiology; Dietary Requirements. Obesity: Complications. Pregnancy: Role of Placenta in Nutrient Transfer; Nutrient Requirements; Energy Requirements and Metabolic Adaptations; Safe Diet for Pregnancy; Dietary Guidelines and Safe Supplement Use; Prevention of Neural Tube Defects; Pre-eclampsia and Diet.
Further Reading Ceesay SM, Prentice AM, Cole TJ et al. (1997) Effects on birth weight and perinatal mortality of maternal dietary supplements in rural Gambia: 5 year randomised controlled trial. British Medical Journal 315: 786–790. Cnattingius S, Bergstrom R, Lipworth L, and Kramer MS (1998) Prepregnancy weight and the risk of adverse pregnancy outcomes. New England Journal of Medicine 338: 147–152. Dewey KG and McCrory M (1994) Effects of dieting and physical activity on pregnancy and lactation. American Journal of Clinical Nutrition 59(supplement): 439–445. Hickey C, Cliver S, Goldenberg R, Kohatsu J, and Hoffman H (1993) Prenatal weight gain, term birth weight, and fetal growth retardation among high risk multiparous black and white women. Obstetrics and Gynecology 81: 529–535. Institute of Medicine, Committee on Nutritional Status during Pregnancy and Lactation (1990) Nutrition during Pregnancy. Weight Gain. Nutrient Supplements. Food and Nutrition Board. Washington, DC: National Academy Press. Institute of Medicine, Committee on Nutritional Status during Pregnancy and Lactation (1992) Nutrition during Pregnancy and Lactation. An Implementation Guide. Food and Nutrition Board. Washington, DC: National Academy Press. Keppel K and Taffel S (1993) Pregnancy-related weight gain and retention: Implications of the 1990 Institute of Medicine Guidelines. American Journal of Public Health 83: 1100–1103. King JC, Butte NF, Bronstein MN, Kopp LE, and Lindquist SA (1994) Energy metabolism during pregnancy: Influence of maternal energy status. American Journal of Clinical Nutrition 59(supplement): 439S–445S. Kramer M (1993) Effects of energy and protein intakes on pregnancy outcome: An overview of the research evidence from controlled clinical trials. American Journal of Clinical Nutrition 58: 627–635. Luke B, Minogue J, Witter F, Keith LG, and Johnson TRB (1993) The ideal twin pregnancy: Patterns of weight gain, discordancy, and length of gestation. American Journal of Obstetrics and Gynecology 169: 588–597. Parker J and Abrams B (1992) Prenatal weight gain advice: An examination of the recent prenatal weight gain recommendation of the Institute of Medicine. Obstetrics and Gynecology 79: 664–669. Siega-Riz AM, Adair LS, and Hobel CJ (1994) Institute of Medicine maternal weight gain recommendations and pregnancy outcome in a predominantly Hispanic population. Obstetrics and Gynecology 84: 565–573. Wong W, Tang NL, Lau TK, and Wong TW (2000) A new recommendation for maternal weight gain in Chinese women. Journal of the American Dietetic Association 100: 791–796.
Safe Diet for Pregnancy
S Stanner, British Nutrition Foundation, London, UK
© 2005 Elsevier Ltd. All rights reserved.
A balanced diet that contains adequate amounts of all the nutrients needed by a mother and her growing fetus is essential for a healthy pregnancy. Pregnant women also need to be advised about how to reduce their risk of exposure to substances that may be toxic to the fetus during development (teratogenic) and therefore associated with the production of physical defects in the developing embryo (e.g., alcohol and excess vitamin A), as well as other dietary and lifestyle behaviors that could optimize maternal health and reduce the risk of health problems in their children. The aim of this article is to describe evidence relating to food safety issues during pregnancy, including potential risks to the fetus as a result of prenatal exposure to food pathogens or toxic food components (e.g., heavy metals and dioxins) and the potentially harmful effects of high doses of alcohol, caffeine, and vitamin A.
Food-Borne Infections during Pregnancy
For many years it has been recognized that food-borne antenatal infections may cause death or serious fetal damage. Women may be more susceptible to the effects of infection during pregnancy because of immunological changes leading to suppression of the immune system (most commonly cell-mediated immunity), probably as a result of increases in pregnancy-associated sex steroids, such as oestradiol or progesterone. Among the most common causes of diarrhea during pregnancy are several food- or water-borne pathogens (bacteria, protozoa, or viruses), including Salmonella species, Helicobacter pylori, Shigella, Escherichia coli, and Cryptosporidium. Hepatitis A is also a food- or water-borne pathogen of concern, particularly in countries where sanitation is poor. In pregnant women, severe vomiting and diarrhea may negatively affect the availability of important nutrients to the growing fetus. For example, impairment of the supply of folate (or the synthetic form, folic acid) during a critical stage of development could increase the risk of associated neural tube defects, such as spina bifida. Although rare, infection with Listeria or Toxoplasma during pregnancy is of particular concern because even in a mild form these infections can prove fatal. Listeriosis caused by the consumption of food containing the bacterium Listeria
monocytogenes leads to flu-like symptoms, such as fever, muscle aches, and sometimes nausea or diarrhea. If the infection spreads to the nervous system, it may also cause headaches, stiff neck, confusion, loss of balance, or convulsions. The bacterium has been found in a variety of raw foods, including unpasteurized (raw) milk, uncooked meats, and vegetables, and in processed foods that become contaminated after processing, such as soft cheeses and cold cuts of meat. According to the Centers for Disease Control and Prevention, pregnant women in the United States are approximately 20 times more likely than other healthy adults to get listeriosis and approximately one-third of listeriosis cases occur during pregnancy. The fetus and newborn are at greatest risk of this infection and its consequences can be severe, leading to miscarriage, stillbirth, and premature delivery or to meningitis in the newborn infant. When infection occurs during pregnancy, antibiotics given promptly to the pregnant woman can often prevent infection of the fetus or newborn, and infants developing the infection can also be treated in the same way. Toxoplasma gondii is a parasite that can be transmitted to the fetus in utero through transplacental transmission, causing stillbirth, miscarriage, or mental retardation. The parasite has been found in raw, inadequately cooked or cured meat, cat feces, and unwashed raw fruit and vegetables. It has also occasionally been reported in unpasteurized goat milk. In the United Kingdom, toxoplasmosis occurs in approximately 2.5–5.5 in 1000 pregnant women (1750–2850 cases per year), generally causing flu-like symptoms, swollen lymph glands, or muscle aches and pains that last for a few days to several weeks. If a pregnant woman contracts the infection, there is an approximately 30–40% chance of fetal infection (congenital toxoplasmosis). Infants who became infected before birth may develop growth problems, vision and hearing loss, hydrocephalus, brain damage, epilepsy, and other problems. In Europe, congenital toxoplasmosis affects between 1 and 10 in 10 000 newborns, of whom 1 or 2% develop learning difficulties or die and 4–27% develop permanent loss of vision. Both the incidence of placental transmission and the severity of congenital disease depend on gestational age at which maternal seroconversion occurs. Although transmission rates from mother to fetus tend to be low early in pregnancy, fetal disease severity is highest when the fetus is infected early in gestation. Mothers can be tested to determine if they have developed an antibody to the infection. Fetal testing may include ultrasound and testing of amniotic fluid or cord blood. When
Table 1 General guidelines on good hygienic practices in the home
The risk of food poisoning can be minimized by adopting the following practices:
Cleanliness in the kitchen
Keeping all work surfaces scrupulously clean
Washing cooking utensils after coming into contact with raw meat, poultry, or eggs to prevent cross-contamination
Using separate chopping boards for foods that are to be cooked (e.g., raw meat)
Keeping kitchen cloths clean; rinsing crockery in hot water, leaving it to dry, and then wiping it clean with a tea towel
Using kitchen towels to mop up spills rather than a dishcloth
Ensuring waste bins are covered and away from food and keeping pets away from the kitchen
Hygienic food handling
Washing all equipment and work surfaces before and after touching raw food
Washing all foods to be eaten raw thoroughly
Cooking meat thoroughly to an internal temperature of at least 70 °C
Keeping raw and cooked foods separated during preparation and storage
Cooling cooked foods as quickly as possible if they are to be stored in a refrigerator or freezer
Covering foods and not leaving them standing around in the kitchen
Storing food at the correct temperature
A high alcohol intake (>80 g or 10 units per day) is linked with fetal alcohol syndrome; the risks associated with more modest drinking are less well established. High intakes of vitamin A (>3000 µg RAE) should be avoided shortly before or during pregnancy, especially in the early months, because of its potential teratogenicity, so the upper limit for vitamin A intake in pregnancy is set at 3000 µg RAE/day for all women of childbearing age. An alternative dose schedule is up to 8500 RAE weekly during pregnancy. Fetal vitamin A toxicity and birth defects have also occurred from ingestion of isotretinoin and etretinate, drugs used for treatment of severe cystic acne. High intakes of carotene (a precursor of vitamin A) do not have the same teratogenic effects.
Vitamin D
Because of its importance in increasing calcium retention, recommended intakes of vitamin D are doubled during pregnancy. Vitamin D deficiency during pregnancy causes disorders of calcium metabolism, including neonatal hypocalcemia and tetany, hypoplasia of infants' tooth enamel, and maternal osteomalacia. Because the prevalence of vitamin D deficiency during pregnancy is high during the winter months at northern latitudes in regions such as Europe, the United States, Canada, and Japan, vitamin D supplements may be necessary for women who live in these regions or who have little exposure
to sunlight. A national survey in the United States conducted in the 1990s revealed that approximately 40% of African American women in the southeastern region had low blood levels of 25-hydroxyvitamin D, and that not drinking vitamin D-fortified milk was a risk factor for deficiency. In the absence of vitamin D fortification or supplementation, infants in Paris, for example, have higher plasma levels of parathyroid hormone and other indications of vitamin D deficiency if they are born soon after the winter months. Vitamin D supplements reversed the indications of vitamin D deficiency. High maternal intakes of vitamin D are toxic and were implicated as the cause of a syndrome that included mental and physical growth retardation and hypercalcemia in British infants between 1953 and 1957. Excessive amounts of vitamin D taken during gestation have also caused aortic stenosis and abnormal skull development in infants. The upper limit for vitamin D in pregnancy is 50 µg per day, the same as for nonpregnant women.
Folic Acid and Vitamin B12
There is a substantial increase in folate requirements during pregnancy, from 400 µg Dietary Folate Equivalents in the nonpregnant state to 600 µg per day, because of increased erythropoiesis and fetal–placental growth. Increased folate intakes throughout childbearing age are recommended to prevent neural tube defects such as spina bifida and anencephaly, the most common birth defects, and to lower the risk of abruptio placentae. To be effective for preventing neural tube defects in women at risk for producing an infant with this condition, increased folate intakes are needed preconception and early in pregnancy. The neural tube closes by 28 days of gestation, which is before many women realize that they are pregnant. It is for this reason that increased folate intakes are recommended throughout the childbearing years. In the United States and Canada, fortification of flour with folic acid in recent years has greatly increased folate intakes and improved status in the population; prior to fortification, typical intakes of folate were only about half of the recommended amount. No adverse effects were reported in recent studies in which pregnant women consumed up to 4 mg of folic acid per day during pregnancy. The RDA for vitamin B12 increases slightly during pregnancy to 2.6 µg/day. Vitamin B12 supplements are definitely required by pregnant women who are strict vegetarians; the vitamin is found only in animal products and the usefulness of the form of the vitamin found in algae and bacteria is not clear. An
adequate intake of the vitamin during pregnancy is at least as important as the woman’s vitamin B12 status at conception because the recently absorbed vitamin is more readily transported to the fetus than is the vitamin in maternal liver stores. Homocysteinemia is emerging as a common risk factor for several abnormal pregnancy outcomes, especially for preeclampsia, birth defects, and low birth weight, although there has been little research on whether maternal vitamin B12 deficiency causes these problems. Infants born to women with low vitamin B12 intakes are at high risk of growth failure and neurobehavioral problems that emerge when the infant is a few months old and may be permanent. Although supplements containing the recommended dietary intake of the vitamin are probably adequate for pregnancy, no adverse effects of consuming higher amounts have been reported. Vitamin C
Low plasma vitamin C concentrations have been associated with preeclampsia and premature rupture of the membranes. There has been some concern about fetal vitamin C dependency induced by excessive maternal vitamin C intakes, but this is based on only one anecdotal report. Requirements for the vitamin in pregnancy increase to 80 mg/day for adolescents and 85 mg/day for adult women, an amount estimated to provide sufficient quantities for the fetus. Those who are heavy smokers (>20 cigarettes/day) may need twice the RDA for this vitamin. Vitamin K
Usual diets provide adequate amounts of vitamin K for pregnant women. Newborn infants are routinely given a supplement of vitamin K by intramuscular injection because exclusively breast-fed infants are at risk of developing fatal intracranial hemorrhage secondary to vitamin K deficiency. This practice is quite safe.
Iron
Demands for iron are increased by approximately 700–800 mg during pregnancy and most of this is needed during the last two trimesters. Because the risk of becoming anemic is greater during pregnancy and there is an increased risk of a compromised pregnancy outcome for anemic women (including lower birth weight and less neonatal iron stores), most recommendations in the United States advise that all pregnant women with a well-balanced diet take a supplement of 30 mg of ferrous iron daily starting at their first prenatal visit. If iron deficiency anemia is detected by routine testing, the recommendation is that 60–120 mg of ferrous iron be given in divided doses throughout the day. The World Health Organization recommends 60 mg/day throughout pregnancy (plus 400 µg folic acid) because of the higher prevalence of iron deficiency anemia in most low-income countries throughout the world. Gastrointestinal side effects, mainly heartburn, nausea, upper abdominal discomfort, diarrhea, and constipation, increase with high iron doses and contribute to poor compliance with taking daily iron supplements. Supplements of 15 mg of zinc and 2 mg of copper daily are also recommended for pregnant women taking >30 mg iron per day because iron can interfere with the absorption of other minerals if given as a supplement without food. It is therefore important not to exceed recommended intakes of iron during pregnancy. Research suggests that it may be as effective to consume the recommended intakes once per week as it is to take them daily because daily iron supplements gradually block the absorption of subsequent doses.
Zinc
Typically, zinc intakes are below recommended amounts for pregnancy even in industrialized countries. In populations in which zinc deficiency is common, the prevalence of malformations and low birth weight is higher, although the causal role of zinc deficiency has not been proven. Zinc supplementation is recommended for pregnant women who ordinarily consume an inadequate diet, smoke, are substance abusers, or are carrying multiple fetuses. However, copper absorption may begin to be impaired at zinc intakes of approximately 18.5 mg/day, and a daily intake of 50 mg zinc impairs both iron and copper absorption. These negative effects are believed to be stronger if the minerals are taken without food.
Sodium
Due to hormonal changes during pregnancy, sodium metabolism is altered. At one time, dietary restriction of sodium was a common treatment for maternal edema, although it is ineffective. The newborn infants of women who had restricted their sodium intake drastically during pregnancy were observed to have hyponatremia. In animals, sodium restriction during pregnancy leads to water intoxication along with renal and adrenal tissue degradation of the pregnant animal. Therefore, sodium restriction during pregnancy is not advisable.
Iodine
Typically, the iodine intakes of pregnant women in the industrialized world easily meet recommended intakes, often as the result of consuming iodized salt. Maternal iodine deficiency and suboptimal iodine intake have been associated with cretinism, mental development impairments in utero, and infant mortality. Iodine deficiency before or during early pregnancy has the most severe effects, and in regions of endemic iodine deficiency cretinism should be prevented by treating maternal iodine deficiency before or during the first 3 months of pregnancy. However, hypothyroidism in the mother and fetus can be corrected by iodine administration in the third trimester. It appears to be safe to administer massive amounts (500 mg iodine) to pregnant women, orally or intramuscularly. There have been no reports of adverse effects of excessive iodine administration during pregnancy, and thus the upper limits are the same as for nonpregnant women (1100 µg/day for adults).
Teratogens The World Health Organization estimates that 15% of all clinically recognizable pregnancies end in abortion. Of these, 50–60% are due to chromosomal abnormalities. In addition, 3–6% of all offspring are malformed. The causes of these malformations can be divided into three categories: unknown, genetic, and environmental. Environmental causes only account for 10% of all congenital malformations and can be further divided into maternal conditions, infectious agents, mechanical problems (deformations), and chemicals (including prescription drugs and high-dose ionizing radiation). Chemical environmental causes include consumption during pregnancy of the teratogenic agents discussed later. These account for less than 1% of all congenital malformations but are important in that the exposures to these chemicals may be preventable. Several anticancer drugs cause problems for fetal development. Aminopterin can induce abortion within its therapeutic range, and it causes microcephaly, hydrocephaly, cleft palate, meningomyelocele, intrauterine growth retardation, abnormal cranial ossification, and mental retardation. Cyclophosphamide interacts with DNA and can result in cell death. Its use during pregnancy can result in growth retardation, ectrodactyly, syndactyly, and cardiovascular anomalies. Some antibiotics cause abnormal fetal development if taken by the pregnant woman. Streptomycin
can cause hearing problems, although the risk of this is quite low. Tetracycline may produce staining of the teeth and bones if taken late in the first trimester or during the last two trimesters. Anticonvulsants can also cause adverse pregnancy outcomes. Carbamazepine produces minor craniofacial defects, fingernail hypoplasia, and developmental delays. Trimethadione causes 'fetal trimethadione syndrome,' characterized by V-shaped eyebrows, low-set ears, a high-arched palate, irregular teeth, central nervous system anomalies, and severe developmental delays. Valproic acid causes spina bifida and facial dysmorphology in the fetus of 1% of pregnant users. Other potentially teratogenic drugs include androgens, which result in masculinization of the embryo and stimulate growth and differentiation of sex steroid receptor-containing tissues. Angiotensin-converting enzyme inhibitors are antihypertensive agents that have detrimental effects during the second and third trimesters, including fetal death, oligohydramnios, pulmonary hypoplasia, neonatal anuria, intrauterine growth retardation, and skull hypoplasia. The pregnant woman who uses cocaine risks preterm delivery, fetal loss, intrauterine growth retardation, microcephaly, neurobehavioral abnormalities, vascular disruptive phenomena, cerebral infarctions, and certain types of visceral and urinary tract malformations. Coumadin, a vitamin K antagonist, is an anticoagulant and in the first trimester can produce malformations, including nasal hypoplasia, stippling of secondary epiphysis, intrauterine growth retardation, and anomalies of the eyes, hands, neck, and central nervous system. Lithium carbonate, a mood-stabilizing drug, has teratogenic effects in animals but these have not been confirmed in humans.
Contaminants
Most heavy metals, such as lead and mercury, are embryotoxic. High maternal serum lead concentrations increase the risk of abortion and adversely affect the central nervous system of the developing fetus, leading to a low IQ and abnormal behavior of the infant. PCBs are environmental contaminants that remain in the body up to 4 years after exposure. The fetus of a pregnant woman exposed to PCBs is at increased risk of fetal growth retardation, abnormal skull calcifications, deformed nails, and pigmentation of gums, nails, and the groin. Organic mercury compounds tend to accumulate in fat tissue and cause cell death due to the inhibition of cellular enzymes. These compounds cause cerebral palsy, microencephaly, mental retardation, blindness, and cerebellar hypoplasia in the infant.
Special Conditions
Nausea and Vomiting
Morning sickness or nausea is common in the early months of pregnancy. It is rarely a condition to cause alarm, except when there is excessive vomiting. In this situation, an acute protein and energy deficit and loss of minerals, vitamins, and electrolytes may result. Treatment of this condition is by consuming small frequent meals and a low-fat, high-carbohydrate diet. Prolonged, persistent vomiting (hyperemesis gravidarum) occurs in approximately 2% of pregnant women. Hospitalization is usually required, with intravenous fluid and electrolyte replacement to prevent dehydration.
Heartburn
Heartburn is a common complaint during the latter part of pregnancy due to the pressure of the enlarged uterus on the stomach in combination with the relaxed esophageal sphincter. This can usually be relieved by limiting the amount of food consumed at one sitting and avoiding lying in a reclining position after eating.
Constipation and Hemorrhoids
Pregnant women often develop constipation, most frequently during the latter stages of pregnancy. It is caused by reduced gut motility, physical inactivity, and the pressure exerted on the bowel by the enlarged uterus. The weight of the fetus and the downward pressure on the veins can lead to hemorrhoid formation. These conditions can be treated with increased consumption of high-fiber foods and dried fruits and higher fluid intake. Bulk-forming laxatives can also be used; however, there is a risk of alterations in electrolyte absorption with chronic use of laxatives.
Edema
Mild edema (fluid accumulation) is often present in the hands, feet, and legs in the third trimester. It is caused by the pressure of the enlarging uterus on the veins returning fluid from the legs. This fluid is often mobilized in the evening when the woman is lying down. This is a normal condition and does not require any special dietary or other treatment.
Diabetes in Pregnancy
For women with diabetes, nutritional counseling should include adequate dietary intake, frequent glucose monitoring, insulin management to meet the growth needs of the fetus, maintaining optimal blood glucose levels, and preventing ketosis and depletion of the mother's nutrient stores. The demands of pregnancy may impose a need for insulin in pregnant women whose condition was controlled through diet alone in the nonpregnant state. Because of hormonal changes during the first and second half of pregnancy, changes to the diet and the insulin dosage may be necessary. Gestational diabetes occurs only during pregnancy and usually resolves after pregnancy. It occurs in 5–10% of pregnancies and most commonly arises after 20 weeks of gestation. Gestational diabetes can be treated largely through nutritional care and moderate exercise to achieve weight control. Nutritional recommendations are to limit protein intake to 15% of total calories, consume 55% of total calories as carbohydrate, and limit fat intake to 30% or less of total calories. Cholesterol intake should be 300 mg/day or less, simple carbohydrate intake should be limited, and sodium intake should not exceed 1000 mg/1000 kcal. Insulin is rarely needed, although blood glucose levels should be monitored daily.
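To make the percentage limits above concrete, here is a rough worked illustration assuming a 2200-kcal/day intake; the energy level is an assumption chosen for the example rather than a figure from the text, and the conversion factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat) are the standard approximations.

```python
# Illustrative translation of the gestational diabetes guidance above into gram
# targets, assuming a 2200 kcal/day intake (assumed figure, not from the text).
ENERGY_KCAL = 2200

protein_g = 0.15 * ENERGY_KCAL / 4       # <=15% of energy, 4 kcal per g  -> ~82 g
carbohydrate_g = 0.55 * ENERGY_KCAL / 4  # ~55% of energy, 4 kcal per g   -> ~302 g
fat_g = 0.30 * ENERGY_KCAL / 9           # <=30% of energy, 9 kcal per g  -> ~73 g
sodium_mg = 1000 * ENERGY_KCAL / 1000    # <=1000 mg per 1000 kcal        -> 2200 mg

print(f"protein <= {protein_g:.0f} g, carbohydrate ~ {carbohydrate_g:.0f} g, "
      f"fat <= {fat_g:.0f} g, sodium <= {sodium_mg:.0f} mg")
```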
Hypertension in Pregnancy
Pregnancy-induced hypertension is a syndrome characterized by hypertension, proteinuria, and edema. This condition usually develops in the third trimester and occurs in approximately 7 or 8% of pregnant women. It occurs more often in women who are young, pregnant for the first time, or are of low socioeconomic status. The exact cause of this condition is unknown, but most researchers agree that it is associated with a decreased uterine blood flow leading to reduced fetal nourishment. Previous treatments for this condition included sodium restriction and diuretics; however, neither of these has been successful in altering blood pressure, weight gain, or proteinuria in this condition.
Multiple Births
Women pregnant with twins or multiple fetuses should gain more weight than those with singleton births, approximately 15–20 kg. Nutrient supplementation should include at least zinc and vitamin B6 in addition to the iron supplements recommended for all pregnant women.
See also: Alcohol: Absorption, Metabolism and Physiological Effects. Ascorbic Acid: Physiology, Dietary Sources and Requirements. Caffeine. Diabetes Mellitus: Etiology and Epidemiology; Classification and Chemical Pathology; Dietary Management. Early Origins of Disease: Fetal. Folic Acid. Food Safety: Other Contaminants; Heavy Metals. Hypertension: Etiology. Iodine: Physiology, Dietary Sources and
Requirements; Deficiency Disorders. Iron. Obesity: Complications. Pregnancy: Role of Placenta in Nutrient Transfer; Nutrient Requirements; Energy Requirements and Metabolic Adaptations; Weight Gain; Safe Diet for Pregnancy; Prevention of Neural Tube Defects; Pre-eclampsia and Diet. Sodium: Physiology. Vegetarian Diets. Vitamin A: Deficiency and Interventions. Vitamin D: Rickets and Osteomalacia. Vitamin K.
Further Reading Allen LH (1994) Nutritional supplementation for the pregnant woman. Clinical Obstetrics and Gynecology 37(3): 587–595. Allen LH (2001) Pregnancy and lactation. In: Bowman BA and Russell RM (eds.) Present Knowledge of Nutrition, 8th edn. Washington, DC: ILSI Press. American Diabetes Association (1991) Position statement: Gestational diabetes mellitus. Diabetes Care 14: 5–6. Institute of Medicine (1987) Committee on Nutrition of the Mother and Preschool Child. Laboratory Indices of Nutritional Status during Pregnancy. Washington, DC: National Academy of Sciences. Institute of Medicine (1990) Nutrition during Pregnancy. National Research Council. Washington, DC: National Academy Press. Institute of Medicine (1997) Dietary Reference Intakes for Calcium, Phosphorus, Magnesium, Vitamin D, and Fluoride. Washington, DC: National Academy Press. Institute of Medicine (1998) Dietary Reference Intakes for Thiamin, Riboflavin, Niacin, Vitamin B6, Folate, Vitamin B12, Pantothenic Acid, Biotin, and Choline. Washington, DC: National Academy Press. Institute of Medicine (2000) Dietary Reference Intakes for Vitamin A, Vitamin K, Arsenic, Boron, Chromium, Copper, Iodine, Iron, Manganese, Molybdenum, Nickel, Silicon, Vanadium, and Zinc. Washington, DC: National Academy Press. Institute of Medicine (2002) Dietary Reference Intakes for Energy, Carbohydrate, Fiber, Fat, Fatty Acids, Cholesterol, Protein, and Amino Acids (Macronutrients). Washington, DC: National Academy Press. Institute of Medicine (2004) Dietary Reference Intakes for Water, Potassium, Sodium, Chloride, and Sulfate. Washington, DC: National Academy Press. Kaiser LL and Allen L (2002) Position of the American Dietetic Association: Nutrition and lifestyle for a healthy pregnancy outcome. Journal of the American Dietetic Association 102: 1479–1490. King JC, Bronstein MN, Fitch WL et al. (1987) Nutrient utilization during pregnancy. World Reviews of Nutrition and Diet 52: 71–142. Lewis DD and Woods SE (1994) Fetal alcohol syndrome. American Family Physician 50: 1025–1032. March of Dimes (2002) Nutrition Today Matters Tomorrow: A Report from the March of Dimes Task Force on Nutrition and Optimal Human Development. White Plains, NY: March of Dimes. Neuhouser MLS (1996) Nutrition during pregnancy and lactation. In: Mahan LK and Escott-Stump S (eds.) Krause’s Food, Nutrition & Diet, 9th edn. Philadelphia: WB Saunders. Rosso P (1990) Nutrition and Metabolism in Pregnancy: Mother and Fetus New York: Oxford University Press. Wolfe HM and Gross TL (1994) Obesity in pregnancy. Clinical Obstetrics and Gynecology 37: 596–604.
Prevention of Neural Tube Defects
P N Kirke, The Health Research Board, Dublin, Ireland
J M Scott, Trinity College, Dublin, Ireland
© 2005 Elsevier Ltd. All rights reserved.
Neural tube defects (NTDs) are major congenital malformations of the central nervous system resulting in fetal and perinatal death and severe handicap in the majority of survivors. The finding that folic acid can prevent most NTDs ranks as one of the most important medical research discoveries in recent times. In this article, the epidemiology of NTDs is reviewed, focusing primarily on the role of folic acid and, to a lesser extent, vitamin B12 in the etiology and prevention of these malformations. The causes of the approximately 30% of NTDs that are estimated not to be related to folate are also briefly considered. The mechanisms underlying the link between folate, vitamin B12, and NTD etiology are examined, and the rapidly expanding research literature on genetic risk factors is reviewed. The main issues in using folic acid to prevent NTDs are discussed: ways to increase folate/folic acid intakes, supplementation, fortification, and safety. The role of other nutrients in NTD prevention is considered. Recommendations on using folic acid to prevent NTDs have been issued by various national health authorities and the main points in these recommendations are presented.
Epidemiology
Failure of the embryonal neural tube to close normally between 24 and 28 days after conception gives rise to a group of severe congenital malformations known as NTDs that includes spina bifida, anencephalus (approximately 50 and 40% of cases, respectively), encephalocoele, and iniencephaly. These anomalies are believed to be caused by an interaction of genetic predisposition and environmental factors, and many different factors have been investigated. Evidence of the importance of nutrition has accumulated since the 1960s, and the key role of folate/folic acid in the pathogenesis of these malformations was demonstrated conclusively in 1991.
Genetic and Environmental Factors
Evidence of a genetic component in the etiology of NTDs includes familial recurrence patterns, ethnic variation, and sex variation (more common in
females). More direct evidence of the role of genetic factors is the discovery that the gene encoding for the thermolabile variant of the 5,10-methylenetetrahydrofolate reductase enzyme is more common in individuals with spina bifida than in controls. The most striking environmental, or nongenetic, factors are the protective effect of folic acid and the marked variations in prevalence over time and between areas. The prevalence rates of NTDs at birth have been falling in most countries, particularly in regions that traditionally had high rates. It is assumed, but not scientifically proven, that better nutrition is a main factor determining this trend. Variations with season, social class (more common in disadvantaged groups), and, to a lesser extent, maternal age and reproductive history provide further evidence of the role of environmental factors. Several of these factors may be explained in whole or in part on nutritional grounds. Folate/Folic Acid
There is a vast literature on the role of folate/folic acid in the etiology and prevention of NTDs. Evidence that folic acid can prevent NTDs comes from two main types of studies: observational studies of dietary folate intake and of supplementation with folic acid preparations and intervention studies. The strongest evidence on the efficacy of folic acid
comes from randomized controlled trials, notably the Medical Research Council (UK) trial on NTD recurrence and the Hungarian trial on NTD occurrence (i.e., first-time NTDs). These and other intervention studies are summarized in Table 1. Following earlier research, the Medical Research Council trial published in 1991 conclusively established the efficacy of folic acid in preventing NTD recurrence. This trial used a research design to investigate the effects of both folic acid and a combination of other vitamins. The recurrence rate in the groups that received folic acid (1.0%) was significantly lower than that in the groups that did not take folic acid (3.5%), giving a 71% protective effect. Thus, 29% of NTDs were not prevented by folic acid, at least not at the very high pharmacological dose of 4 mg daily used in the trial. The multivitamin combination without folic acid had no protective effect. The main observational studies that have examined the effect of periconceptional use of vitamin supplements containing folic acid on NTD pregnancies are illustrated in Table 2. All of these studies but one found a marked protective effect of supplementation against NTD occurrence. In most of the studies of NTD occurrence, the daily dose of folic acid was between 0.4 and 0.8 mg. Studies of dietary folate intake also show a protective effect of high intakes during the periconceptional period. The consistent finding of a protective
Table 1 Intervention studies of periconceptional folic acid supplementation and NTD risk

Study | Design | Daily dose folic acid (mg) | Outcome: No. of NTDs | Relative risk | Comments
UK Medical Research Council Trial (1991) | Randomized controlled trial, international | 4.0 | 6/593 suppl.; 21/602 not suppl.(a) | 0.29 | Significant(b)
Laurence et al. (1981) | Randomized controlled trial, Wales | 4.0 | 2/60 suppl.; 4/51 not suppl. | 0.42 | Not significant; small numbers
Kirke et al. (1992) | Randomized controlled trial, Ireland | 0.36 | 0/172 suppl.; 1/89 not suppl. | 0.00 | Not significant; small numbers
Czeizel and Dudas (1992) | Randomized controlled trial, Hungary | 0.8 | 0/2104 suppl.; 6/2052 not suppl. | 0.00 | Significant(b)
Indian Council of Medical Research Trial (2000) | Randomized controlled trial, India | 4.0 | 4/137 suppl.; 10/142 not suppl. | 0.41 | Not significant; small numbers
Smithells et al. (1983) | Nonrandomized controlled trial, UK | 0.36 | 3/454 suppl.; 24/519 not suppl. | 0.14 | Significant(b)
Vergel et al. (1990) | Nonrandomized controlled trial, Cuba | 5.0 | 0/81 suppl.; 4/114 not suppl. | 0.00 | Not significant; small numbers
Berry et al. (1999) | Nonrandomized controlled trial, China | 0.4 | Northern region: 13/13 012 suppl.; 16/3 318 not suppl. Southern region: 34/58 638 suppl.; 28/28 265 not suppl. | 0.21 (northern); 0.59 (southern) | Significant(b)

(a) Six NTD pregnancies in 593 women supplemented with folic acid and 21 NTD pregnancies in 602 women not supplemented with folic acid.
(b) Statistically significant difference in NTD rate between supplemented and nonsupplemented groups.
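The relative risks in Table 1 are simply the ratio of the NTD rate in supplemented women to that in unsupplemented women. As an illustrative check of the arithmetic using the Medical Research Council trial figures (not an additional analysis):

\[
\mathrm{RR} = \frac{6/593}{21/602} = \frac{0.0101}{0.0349} \approx 0.29,
\qquad
\text{protective effect} = 1 - \mathrm{RR} \approx 0.71 \;(71\%).
\]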
Table 2 Main observational studies of the effect of periconceptional use of folic acid supplements on NTD risk(a)

Study | Odds ratio
Mulinare et al. (1988) | 0.41
Mills et al. (1989) | 0.94
Milunsky et al. (1989) | 0.29
Werler et al. (1993) | 0.60
Shaw et al. (1995) | 0.65

(a) The difference between folate-supplemented and unsupplemented groups was statistically significant in all studies except Mills et al. (1989).
Vitamin B12
The role of vitamin B12 in NTDs is of particular interest because of the close metabolic relationship between this nutrient and folate. The results of some studies of maternal levels of serum vitamin B12 in NTD pregnancies are shown in Table 5. As for folate, lower levels of vitamin B12 are generally seen in affected pregnancies, especially in the first trimester. Studies based on amniotic fluid have consistently found lower vitamin B12 levels in affected pregnancies.
Table 3 Serum folic acid (SFA) and central nervous system defects

Study | Group | No. of pregnancies | Mean SFA (µg l⁻¹) | Difference (affected − unaffected) | Statistical significance
Blood taken at antenatal booking
Hall et al. (1977) | Affected / Unaffected(a) | 11 / >1000 | 6.3 / 6.6 | −0.3 | No
Molloy et al. (1985) | Affected / Unaffected | 32 / 384 | 3.4(b) / 3.4(b) | 0.0 | No
Kirke et al. (1993) | Affected / Unaffected | 81 / 247 | 3.5(b) / 4.6(b) | −1.1 | Yes
Blood taken in first trimester
Smithells et al. (1976) | Affected / Unaffected | 5 / 953 | 4.9 / 6.3 | −1.4 | No
Mills et al. (1992) | Affected / Unaffected | 89 / 172 | 4.1 / 4.3 | −0.2 | No
Wald et al. (1996) | Affected / Unaffected | 16 / 36 | 4.3(b) / 5.7(b) | −1.4 | No
All women (antenatal booking and first trimester) | | | | −0.6; 95% CI (−1.0, −0.2) | Yes (p = 0.005)
Blood taken in second trimester
Economides et al. (1992) | Affected / Unaffected | 8 / 24 | 9.8(b) / 7.4(b) | 2.4; 95% CI (0.04, 4.84) | Yes (p = 0.054)
Blood taken after delivery
Emery et al. (1969) | Affected / Unaffected | 19 / 37 | 4.9 / 4.6 | 0.3 | No
Yates et al. (1987) | Affected / Unaffected | 20 / 20 | 2.8 / 3.3 | −0.5 | No
Bower and Stanley (1989) | Affected / Unaffected | 61 / 140 | 5.6 / 5.7 | −0.1 | No
Wild et al. (1993) | Affected / Unaffected | 29 / 29 | 6.2(b) / 5.5(b) | 0.7 | No
All women (after delivery) | | | | −0.03; 95% CI (−0.5, 0.4) | No (p = 0.090)

(a) Unaffected women were those without a neural tube defect pregnancy either before or during the particular study, except for Wald et al. (1996), in which women had at least one neural tube defect pregnancy before the study.
(b) Median value.
From Wald NJ, Hackshaw AK, Stone R and Sourial NA (1996) Blood folic acid and vitamin B12 in relation to neural tube defects. British Journal of Obstetrics and Gynaecology 103: 319–324, Blackwell Scientific.
Table 4 Red cell folate (RCF) and central nervous system defects

Study | Group | No. of pregnancies | Mean RCF (µg l⁻¹) | Difference (affected − unaffected) | Statistical significance
Blood taken at antenatal booking
Kirke et al. (1993) | Affected / Unaffected(a) | 81 / 247 | 269(b) / 338(b) | −69 | Yes
Blood taken in first trimester
Smithells et al. (1976) | Affected / Unaffected | 6 / 959 | 141 / 228 | −87 | Yes
Wald et al. (1996) | Affected / Unaffected | 14 / 26 | 156(b) / 162(b) | −6 | No
All women (antenatal booking and first trimester) | | | | −77; 95% CI (−94, −60) | Yes (p < 0.001)
Blood taken in second trimester
Laurence et al. (1981) | Affected / Unaffected | 4 / 47 | 238 / 281 | −43 | No
Economides et al. (1992) | Affected / Unaffected | 8 / 24 | 435(b) / 400(b) | 35 | No
All women (second trimester) | | | | 5; 95% CI (−76, 86) | No (p = 0.90)
Blood taken after delivery
Yates et al. (1987) | Affected / Unaffected | 20 / 20 | 178 / 268 | −90 | Yes
Bower and Stanley (1989) | Affected / Unaffected | 61 / 140 | 301 / 308 | −7 | No
Wild et al. (1993) | Affected / Unaffected | 29 / 29 | 247(b) / 223(b) | 24 | No
All women (after delivery) | | | | −6; 95% CI (−33, 21) | No (p = 0.66)

(a) Unaffected women were those without a neural tube defect pregnancy either before or during the particular study, except for Wald et al. and Laurence et al., in which women had at least one neural tube defect pregnancy before the study.
(b) Median value.
From Wald NJ, Hackshaw AK, Stone R and Sourial NA (1996) Blood folic acid and vitamin B12 in relation to neural tube defects. British Journal of Obstetrics and Gynaecology 103: 319–324.
It is possible that the low vitamin B12 levels seen in affected pregnancies coincide with low levels of folate, although the findings of a case–control study in Dublin suggest that they are independent risk factors and that the distribution of the two nutrients in food is dissimilar. In another smaller study, lower levels of vitamin B12 in affected pregnancies were not independent of folate levels. On biochemical grounds, there is so much interaction between the pathways involving both nutrients that it is possible that deficiency of either could affect a common event in the closure of the neural tube. The role of vitamin B12 in NTDs is discussed further later.

Other Nutritional Factors
Vitamin C, vitamin A, and zinc have also been linked to NTDs. Lower maternal levels of white cell vitamin C were reported in affected compared to unaffected pregnancies in one small study. Large
doses of natural or synthetic vitamin A consumed by the mother during pregnancy have been associated with congenital anomalies in her offspring. In a large US study of maternal vitamin A intake before and during early pregnancy, a total daily intake greater than 15 000 IU was associated with an increased risk of birth defects, especially of structures arising from the cranial neural crest (craniofacial, central nervous system, thymic, and heart defects), but the risk of NTDs was not raised. However, these findings have been challenged. Children born to women who take vitamin A supplements at levels found in current multivitamin preparations have not been shown to be at increased risk of birth defects. Although several studies have linked zinc deficiency or abnormalities in zinc metabolism to NTDs, the results have not been consistent. The role of zinc in NTD aetiology requires further clarification. Research on the association between riboflavin and folate and homocysteine levels in people homozygous for the 5,10-methylenetetrahydrofolate reductase C677T genetic polymorphism suggests a possible role for riboflavin in the aetiology and prevention of NTDs.
Table 5 Maternal serum vitamin B12 (SB12) and central nervous system defects

Study | Group | No. of pregnancies | Mean SB12 (ng l⁻¹) | Difference (affected − unaffected) | Statistical significance
Blood taken at antenatal booking
Kirke et al. (1993) | Affected / Unaffected(a) | 81 / 247 | 243 / 296 | −53 | Yes
Blood taken in the first trimester
Schorah et al. (1980) | Affected / Unaffected | 6 / 48 | 288 / 417 | −129 | Yes
Molloy et al. (1985) | Affected / Unaffected | 28 / 363 | 297 / 277 | 20 | No
Mills et al. (1992) | Affected / Unaffected | 89 / 178 | 483(b) / 520(b) | −37 | No
Wald et al. (1996) | Affected / Unaffected | 18 / 75 | 230 / 240 | −10 | No
All women (first trimester and antenatal booking) | | | | −38; 95% CI (−56, −20) | Yes (p < 0.001)
Blood taken in second trimester
Economides et al. (1992) | Affected / Unaffected | 8 / 32 | 205 / 230 | −25; 95% CI (−58, 8) | No (p = 0.12)
Blood taken after delivery
Yates et al. (1987) | Affected / Unaffected | 20 / 20 | 300(b) / 320 | −20 | No
Wild et al. (1993) | Affected / Unaffected | 29 / 29 | 449 / 489 | −40 | No
All women (after delivery) | | | | −34; 95% CI (−83, 15) | No (p = 0.17)

(a) Unaffected women were those without a neural tube defect pregnancy either before or during the particular study, except for Wald et al. (1996), in which women had at least one neural tube defect pregnancy before the study.
(b) Median value.
From Wald NJ, Hackshaw AK, Stone R and Sourial NA (1996) Blood folic acid and vitamin B12 in relation to neural tube defects. British Journal of Obstetrics and Gynaecology 103: 319–324, Blackwell Scientific.
Research in the United States has shown that women who are obese (defined as prepregnancy body weight of more than 80 kg or body mass index greater than 29 kg m⁻²) are more likely to have infants with NTDs and some other congenital malformations than women of average prepregnancy weight. In one study it was found that this association was independent of folate intake. Although the underlying mechanism is unclear, these findings suggest that it may involve something other than folate. Studies in the United States found that dieting behaviors involving restricted food intake during the first trimester of pregnancy and diarrheal illnesses during the periconceptional period were associated with increased NTD risk.

Other Causes of NTDs
It is estimated from the results of the Medical Research Council trial that approximately 30% of
NTDs are not folate-related. The causes of this group of NTDs are unknown but are likely to include genetic and environmental factors. In this context, recent reports on obesity and NTDs are most interesting. Further research on this subject should result in a better understanding of the complex aetiology of NTDs. Nutritional factors other than folate may be involved—for example, vitamin B12, vitamin C, and zinc, as noted previously, and other nutrients.
Mechanisms

The possible mechanisms underlying the involvement of folate/folic acid in the etiology and prevention of NTDs are examined in this section.

Functions of Folate and Vitamin B12 and NTD Etiology
Folate acts as the intermediary in the transfer of methyl groups for two important processes in metabolism, namely the methylation reactions and the synthesis of the nucleic acids DNA and RNA
(Figure 1). The folate cofactor, N5-methyltetrahydrofolate, acts via the vitamin B12-dependent enzyme, methionine synthase, to remethylate homocysteine to produce methionine, which is converted to S-adenosylmethionine (SAM) via S-adenosylmethionine synthase. SAM is the universal methylator necessary for the synthesis of essential proteins, lipids such as myelin, and DNA. The folate cofactor also acts via methionine synthase to synthesize tetrahydrofolate, which, unlike N5-methyltetrahydrofolate, can be polyglutamated and thereafter used to produce the nucleic acids DNA and RNA. Simple deficiency or metabolic impairment in the biochemical functions of either folate or vitamin B12 could, by interrupting DNA biosynthesis or methylation reactions, interfere with cell growth and function and tissue development during a period of very rapid cell proliferation of the fetal neural crest, thereby preventing normal closure of the neural tube.
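In outline, the two folate- and vitamin B12-dependent steps described above can be written as the following simplified reaction scheme (a sketch of the pathway shown in Figure 1, not a complete stoichiometric account):

\[
\text{homocysteine} + N^{5}\text{-methyl-THF}
\;\xrightarrow{\;\text{methionine synthase (vitamin B}_{12}\text{)}\;}\;
\text{methionine} + \text{THF}
\]
\[
\text{methionine} + \text{ATP}
\;\xrightarrow{\;\text{SAM synthase}\;}\;
\text{SAM}
\;\longrightarrow\;
\text{methylated DNA, proteins, and lipids} + \text{SAH}
\]

The tetrahydrofolate (THF) released in the first step can be polyglutamated and used for purine and thymidylate synthesis, linking the same reaction to DNA and RNA production.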
Folate/Folic Acid and NTDs: Mechanisms
Does folic acid prevent NTDs by correcting simple dietary deficiency, by overcoming a problem in gastrointestinal absorption, or by overcoming some type of metabolic block? Recent research has helped to clarify the role of folate/folic acid in the etiology and prevention of these malformations. Blood samples were collected from women at their first antenatal clinic visit in the Dublin maternity hospitals, and 81 women in this cohort subsequently had infants affected by NTDs. Folate and vitamin B12 status were compared in these 81 cases and in a control sample of 247 unaffected pregnancies by measuring plasma folate, RCF, and plasma vitamin B12. Although folate levels were significantly lower in the cases than in the controls, more than 91% and 86% of the cases had normal plasma folate and RCF levels, respectively. Thus, the vast majority of women who had an NTD birth were not folate deficient, as defined by conventional levels.
Figure 1 Intracellular pathways of folate and homocysteine metabolism and their relation to vitamin B12 function.
It has been suggested that women who have had children with NTDs may have a defect in gastrointestinal absorption of folate or folic acid, but there is no strong evidence to support this hypothesis. In a study designed to overcome the methodological problems of earlier investigations, folic acid absorption was similar in a group of nonpregnant women with a history of an NTD pregnancy and in control women with a normal pregnancy history. These findings suggest that the absorption of folic acid routinely consumed in supplements and fortified food products is not impaired in women with a history of an NTD pregnancy. However, autoantibodies against folate receptors have been reported in women who have had a pregnancy complicated by an NTD. These autoantibodies bind to the folate receptors and can block the cellular uptake of folate. A woman's risk of having an NTD baby has been shown to be closely related to her early pregnancy levels of plasma folate and RCF, the relationship being stronger for RCF (Table 6). There is a strong dose–response effect. Those with RCF levels less than 150 µg l⁻¹ have more than eight times the risk of those with levels of more than 400 µg l⁻¹. Although the most marked absolute reductions in risk occur by elevating the lower RCF levels, risk continues to decrease as RCF levels increase well beyond what would be considered normal levels, with little further protection apparently being gained at levels higher than 400 µg l⁻¹. Most of the NTD-affected infants were born to women whose RCF levels would have been considered to be in the normal range (i.e., >150 µg l⁻¹). Thus, views on what constitutes desirable levels of RCF need to be reconsidered. The lack of evidence of a simple dietary deficiency or of malabsorption and the marked dose–response relationship between maternal RCF level and risk of NTD point to a metabolic explanation for the aetiology of these conditions.
Table 6 Distribution of cases and controls and risk of NTDs by red cell folate level

Red cell folate (µg l⁻¹) | No. of cases (%) | No. of controls (%) | Risk of NTD per 1000 births | 95% confidence interval
0–149 | 11 (13.1) | 10 (3.8) | 6.6 | 3.3–11.7
150–199 | 13 (15.5) | 24 (9.0) | 3.2 | 1.7–5.5
200–299 | 29 (34.5) | 75 (28.2) | 2.3 | 1.6–3.3
300–399 | 20 (23.8) | 77 (29.0) | 1.6 | 1.0–2.4
≥400 | 11 (13.1) | 80 (30.0) | 0.8 | 0.4–1.5
Total | 84 (100.0) | 266 (100.0) | 1.9 | 1.5–2.3
From Daly LE, Kirke PN, Molloy A, Weir DG and Scott JM (1995) Folate levels and neural tube defects—Implications for prevention. Journal of the American Medical Association 274: 1698–1702. Copyright © 1995, American Medical Association.
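The more than eightfold risk gradient cited in the text follows directly from the extreme categories of Table 6:

\[
\frac{\text{risk at RCF} < 150\ \mu\text{g l}^{-1}}{\text{risk at RCF} \geq 400\ \mu\text{g l}^{-1}}
= \frac{6.6}{0.8} \approx 8.3 .
\]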
Since it is estimated that folic acid can prevent up to 71% of NTDs, defects in folate-related enzymes or processes have been candidates for study. There are 16 folate-dependent enzymes in the internal metabolism of mammalian cells. The finding in the Dublin study of significantly higher plasma homocysteine levels in case mothers than in controls suggested that one or more enzymes involved in homocysteine metabolism may be abnormal. The main folate-related enzymes involved in homocysteine metabolism are illustrated in Figure 1. Homocysteine levels in the amniotic fluid of women carrying a fetus with an NTD have been reported as being higher compared with those of normal pregnancies. Evidence of deranged homocysteine metabolism also comes from metabolic studies conducted in Holland. In a study in which women who had given birth to an NTD baby were given a methionine-loading test, methionine intolerance and very high peak levels of homocysteine were found in a subgroup of the NTD women. Cystathionine synthase levels in skin fibroblasts taken from the methionine-intolerant women were normal. Plasma folate and vitamin B12 were found to be independent risk factors for NTDs in the Dublin study. Although the results of this study pointed to an abnormality in the methionine synthase enzyme, there is no strong evidence linking genetic variants of the enzyme to NTDs. However, it is possible that vitamin B12 status may influence NTD risk in ways other than directly affecting the activity of this enzyme.

Genetic Risk Factors
The main focus of research on NTDs during the past decade has been the investigation of genetic risk factors with particular emphasis on the genes encoding the enzymes in the folate/homocysteine metabolic pathways. A common variant (C677T) in the gene for one of the folate-related enzymes, 5,10-methylenetetrahydrofolate reductase (MTHFR) (Figure 1), was identified in 1993 and has been shown to be associated with reduced enzyme function and lower blood folate and higher homocysteine levels. The most frequently studied association between a genetic polymorphism and a congenital malformation has been the relationship between NTD risk and this variant. In initial reports from Holland and Ireland published in 1995, homozygosity for the C677T allele was associated with an increased risk of having spina bifida or having an affected child. From the numerous studies on the link between this polymorphism and NTD that have been conducted in
many countries, it is clear that homozygosity for the variant in the child or mother is a risk in some populations but not others. A review showed that homozygosity for the variant in the child or mother doubles the risk of NTD. A large study of baby–mother pairs showed that the embryo's MTHFR genotype was more important than that of the mother in conferring risk. Studies have also shown evidence of a strong gene–nutrient interaction in that low maternal blood folate levels in early pregnancy and no periconceptional folate supplementation increase the risk associated with the variant allele. Although how the polymorphism causes NTDs is unclear, it may do so through its association with lower folate or higher homocysteine levels or by some other metabolic mechanism. It is estimated that homozygosity for the MTHFR C677T variant is likely to account for not more than approximately 13% of NTDs. Because it is considered that approximately 71% of NTDs can be prevented by folic acid, other mechanisms, possibly including variants in genes coding for other folate-dependent enzymes, problems with folate absorption, or even dietary deficiency of folate, may play a role. In one study it was shown that the elevated homocysteine levels associated with homozygosity for the MTHFR C677T variant were seen only in those with low riboflavin status; the homocysteine levels did not differ by genotype in those with medium or high riboflavin status. The fact that the activity of the MTHFR C677T variant is influenced by the prevailing riboflavin status may help to explain why the variant is a risk factor for NTDs in some countries but not others. These findings, if confirmed, may be important in view of research reports that substantial proportions of populations have suboptimal riboflavin status. Genes encoding other enzymes in the folate/homocysteine pathways have been studied. Polymorphisms in some of the genes that may be expected to be important because of their position in the pathways have been shown not to be important (e.g., methionine synthase and cystathionine beta synthase). Other polymorphisms in these pathways have been reported to increase the risk of NTD (i.e., NTD risk was estimated to be significantly higher in either cases or mothers compared to controls)—for example, the MTHFR 1298A→C variant, the reduced folate carrier 80A→G variant, the methionine synthase 919D→G variant, the methionine synthase reductase 66A→G variant, and the R653Q variant in the trifunctional enzyme methylenetetrahydrofolate dehydrogenase/methenyltetrahydrofolate cyclohydrolase/formyltetrahydrofolate synthetase. For each of these five polymorphisms, however, the increased NTD risk is
based on just one study, and other studies have reported negative results. Interactions between some genes in these pathways have been reported as increasing NTD risk—for example, the MTHFR 677C→T and MTHFR 1298A→C variants (two studies); the MTHFR 1298A→C and reduced folate carrier 80A→G variants (one study); the MTHFR 1298A→C, the MTHFR 677C→T, and the reduced folate carrier 80A→G variants (one study); and the methionine synthase 2756A→G and methionine synthase reductase 66A→G variants (one study). Again, other studies have not confirmed these findings. There have also been reports of increased NTD risk associated with polymorphisms in the genes encoding the thymidylate synthase and glutamate carboxypeptidase enzymes. These results need to be replicated in larger studies and in different populations to provide a clearer picture of whether these polymorphisms truly increase NTD risk. The rationale for studying variants in genes involved directly or indirectly in folate metabolism is clear. However, the closure of the neural tube is a complex process involving the orchestration of many genes. It seems possible that polymorphisms in genes that are far removed from folate metabolism may also cause NTDs that are responsive to folic acid. The MTHFR gene variant is the first specific genetic risk factor to be linked to NTDs. This is the strongest evidence for the involvement of a metabolic derangement in the aetiology of these conditions. This breakthrough gives added impetus to the search for other genetically determined risk factors. Although there is no strong evidence implicating other genetic polymorphisms in the aetiology of NTDs, it is likely that such evidence will soon emerge.
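The estimate that MTHFR C677T homozygosity accounts for no more than about 13% of NTDs is consistent with a standard population attributable fraction calculation. Purely for illustration, assuming a TT genotype frequency of roughly 10–15% (an assumed figure; frequencies vary widely between populations) and the doubling of risk noted above:

\[
\mathrm{PAF} = \frac{p\,(\mathrm{RR}-1)}{1 + p\,(\mathrm{RR}-1)}
\approx \frac{0.12 \times (2-1)}{1 + 0.12 \times (2-1)} \approx 0.11,
\]

that is, on the order of 10–13% of cases attributable to the variant, in line with the figure quoted in the text.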
Prevention

In the context of the prevention of NTDs, it is necessary to distinguish between primary and secondary prevention. Primary prevention concerns measures that prevent the development of NTD in the embryo. Secondary prevention refers to screening and termination of affected pregnancies. Primary prevention became a reality following the demonstration of the efficacy of folic acid in preventing NTDs and represented a major public health breakthrough. A substantial body of research shows that taking extra folate/folic acid before conception and during the early months of pregnancy prevents approximately 50–75% of NTDs and is effective in preventing both occurrent and recurrent NTDs. For all women who may become pregnant, it is recommended that they take an extra 0.4 mg of folic acid per day for the primary prevention of NTDs. This is
in addition to the usual dietary folate intake, which is estimated to be, on average, approximately 0.2 mg per day in the United Kingdom. The most effective ways of using this knowledge to reduce the number of NTD births are discussed next.

Ways of Increasing Folate/Folic Acid Intake
As already noted, a woman can increase her folate intake in three ways: eating more folate-rich foods, eating foods fortified with folic acid, and taking folic acid as a medicinal or food supplement. Although all three methods are known to increase folate status, taking folic acid either as supplements or in fortified foods has been shown to be much more effective in achieving this goal than eating folate-rich foods. It is very difficult to achieve a total daily intake of 0.6 mg folate/folic acid from folate-rich (unfortified) foods alone. Furthermore, folate in unfortified food is not as bioavailable as folic acid in fortified food or in supplements. The only practical way of obtaining an extra 0.4 mg folic acid daily, as recommended, is by consuming fortified foods or folic acid supplements, and this should be made clear by health professionals. However, an improved general diet, especially consuming more vegetables and fruit, should be advocated preconceptionally and during pregnancy because it results in an increased intake of other vitamins and nutrients that are important for normal fetal development.

Supplementation
Since 1992, women of reproductive age have been advised to take an extra 0.4 mg of folic acid daily before pregnancy and during the first 12 weeks of pregnancy. Compliance with the recommendation has been examined in numerous studies conducted throughout the world. Knowledge of the appropriate use of folic acid for NTD prevention in women of childbearing age and in health workers increased markedly throughout the 1990s. Because folic acid can only work if it is taken before closure of the neural tube, the best indicator of periconceptional supplementation is the proportion of pregnant women who take a folic acid supplement before the pregnancy begins, and this proportion increased during the 1990s. In seven studies published from 1999 to 2003 and based on representative study samples in North America and Europe, the proportion of women reported as taking folic acid before pregnancy ranged from 33 to 49%, with a median of 36%. Supplementation is less common in unplanned pregnancies; in young, socially or educationally disadvantaged, and single mothers; and in
those with no knowledge about the protective effect of folic acid. The most important predictor of nonsupplementation is unplanned pregnancy. Because unplanned pregnancy is very common (e.g., approximately half of all pregnancies are reported as being unplanned in the United States and Ireland), this factor constitutes the greatest logistical obstacle to planning optimal protection against NTD by periconceptional supplementation. The low supplementation rates reflect the relative lack of effectiveness of promotional campaigns as currently formulated. Public health programs promoting folic acid must be sustained and must pay particular attention to those at greatest risk of not supplementing.

Fortification
Supplementation is probably the most efficient method for ensuring individual protection against an NTD pregnancy, but not as a general public health strategy. The disappointing results of supplementation programs led experts to consider fortification of foodstuffs as another public health strategy. There are two approaches to food fortification—voluntary and mandatory. In the former, it is left to individual manufacturers to add folic acid to specific products, whereas in mandatory fortification the relevant authority, with government approval and legislation, requires that a specified dietary staple or staples be fortified to a specified agreed level. The objective of a food fortification policy to prevent NTDs is to increase folate intakes for the target childbearing population as near as possible to the recommended intakes while maintaining safe levels of intake for the entire population. The United States first introduced mandatory food fortification. The Food and Drug Administration (FDA) authorized the addition of folic acid to enriched grain products in 1996 and made compliance mandatory by January 1998. The FDA decided on a level of fortification of 140 µg of folic acid per 100 g flour, and this was estimated to increase average daily intakes of folic acid by 100 µg in women of reproductive age. Studies of fortified foods in the United States have found considerably higher folate levels for many products than those required by the regulations, and the actual average daily increase is estimated to be 150–200 µg. The Canadian government introduced a similar fortification plan in 1998. Studies in both countries have shown that the markers of body folate status have improved dramatically in the population postfortification: serum and red cell folate have risen and serum homocysteine has fallen.
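The projected intake increase can be checked with simple arithmetic. As an illustration only (the 70 g figure is an assumed typical daily consumption of fortified-grain products, not a value from the text):

\[
70\ \text{g day}^{-1} \times \frac{140\ \mu\text{g}}{100\ \text{g}} \approx 100\ \mu\text{g folic acid day}^{-1},
\]

which matches the FDA's estimated average increase for women of reproductive age; at the higher fortification levels actually found in products, the same consumption corresponds to the 150–200 µg quoted above.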
The effect on NTD rates has been striking, especially in Canada. Comparing NTD rates before and after fortification showed decreases of 55% in Nova Scotia and 49% in Ontario postfortification. A study in the United States reported a decrease of 19% in the NTD rate after fortification; the smaller decrease was considered to be mainly due to the fact that pregnancy terminations for NTDs were not included in the US data but were included in the Canadian studies. These studies provide strong evidence of the effectiveness of mandatory food fortification with folic acid in preventing NTDs and point the way forward for other countries interested in solving this problem. Fortification of flour was introduced in Chile in 2000 at a level of 220 µg folic acid per 100 g flour. Approximately 38 countries currently either fortify flour (including the United States, Canada, Chile, Argentina, and Israel) or have agreed to do so. In the United Kingdom, the government nutritional advisory committee (the Committee on Medical Aspects of Food and Nutrition Policy) recommended fortification at the level of 240 µg of folic acid per 100 g of flour, but the Board of the Food Standards Agency decided to defer its implementation. No European Union country has decided to fortify flour to date.

Dose
The appropriate dose of folic acid in relation to mandatory food fortification and supplementation programs continues to be debated. The data from the Dublin study, which showed a marked relationship between early pregnancy maternal RCF levels and NTD risk, were used to examine the effectiveness of a food fortification intervention to increase maternal folate levels to prevent NTDs. The analysis showed that if, as a result of food fortification, all women in a population doubled their RCF level, the prevalence of folate-responsive NTDs would be reduced by 66%. If this increase were 150%, which could be achievable with sufficient fortification, the level of protection would be 73% of folate-responsive NTDs (equivalent to 53% of all NTDs). The finding that a woman’s risk of having an NTD pregnancy is related to her early pregnancy levels of RCF in a continuous dose–response relationship suggested that folic acid intake is also related to risk in a continuous dose–response-type relationship, and this was demonstrated in a randomised trial that studied the effect of three different doses of folic acid on RCF levels and reduction in NTD risk. The results of this study in women of reproductive age showed that an extra 0.1, 0.2, or
0.4 mg daily during a 6-month period would be expected to reduce NTD rates by 22, 41, and 47%, respectively. According to another dose–response model that examined the effect of increases in a wide range of daily folic acid intakes on NTD prevention, a 5.0 mg daily dose of folic acid was estimated to decrease NTD risk by 85% in women with a presupplementation serum folate level of 5 µg l⁻¹. These authors argue that the current recommended daily dose of 0.4 mg for folic acid supplements is too low and should be increased to 5.0 mg. They also argue, somewhat controversially, that no known or suspected adverse effects of the 5.0 mg dose have been recorded. The 4.0 mg daily supplement recommended internationally by national departments of health for the prevention of NTD recurrence is based mainly on the unequivocal evidence of the efficacy of this dose in the UK Medical Research Council trial. In another large nonrandomized intervention study, a daily dose of 0.36 mg folic acid seemed to offer similar protection because the recurrence rate in the treated groups was similar to that in the Medical Research Council trial. So, while the recommendation of 4.0 mg to prevent recurrence is quite correctly based on the best scientific evidence, it is likely that a much smaller dose would be equally effective.
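The link between protection against folate-responsive NTDs and protection against all NTDs in the fortification model above is simple multiplication, using the trial-based estimate that roughly 71% of NTDs are folate-responsive:

\[
0.73 \times 0.71 \approx 0.52,
\]

that is, approximately the 53% of all NTDs quoted in the text (the small discrepancy reflects rounding in the original analysis).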
Duration of Supplementation

The minimum duration of supplementation necessary for prevention is not known. Although most national health authorities recommend that women take extra folic acid for at least 4 weeks before conception and until week 12 of pregnancy, supplementation for a shorter duration before closure of the neural tube may also be effective. Until more data are available, the official guidelines should be followed. Given the estimate that approximately half of all pregnancies are unplanned, however, it is important that a woman who has not been taking extra folic acid and who suspects that she may be pregnant immediately starts taking a folic acid supplement. This point requires greater emphasis.

Safety
The main concern about taking folic acid at levels greater than 0.4 mg per day is the possibility that the diagnosis of pernicious anemia, which is caused by vitamin B12 malabsorption and is more common in the elderly, would be missed since folic acid at high levels prevents the development of the anemia and thus its diagnosis. In this situation, nerve damage
progresses and becomes irreversible. To ensure that 95% of all women get 0.4 mg of folic acid per day through fortification of staple foods would mean that, depending on the diet and differences in eating habits, more than half of the population would get approximately 0.7 mg and 5% would get more than 1.0 mg per day. A compromise is to select a lower target figure for universal fortification that would aim to prevent most folate-responsive NTDs and not put the elderly at risk. This is what the FDA has done. At the 140 µg per 100 g flour fortification level in the United States, it is estimated that 15–25% of children aged 1–8 years and 0.5% of men and women >70 years would have daily folic acid intakes higher than the tolerable upper intake level. When account is taken of the fact that actual fortification levels in the United States have been estimated to be as much as twice the planned level, the proportions of children and elderly with intakes higher than the upper level may be greater than projected. At the proposed fortification level of 240 µg per 100 g flour in the United Kingdom, it is estimated that 10% of males would have folic acid intakes >1.0 mg per day. Folic acid in the dose range 0.5–1.0 mg may be absorbed in its original unmetabolized form and may be found in this form in the bloodstream. Circulating folic acid would then be taken up by body cells by a vitamin B12-independent mechanism (Figure 1) and would have the potential to switch on the megaloblastic bone marrow in a vitamin B12-deficient person, thereby preventing the development of the anemia and masking the B12 deficiency. Further research is needed, therefore, to determine the amount of folic acid in fortified food that is safe from this potential hazard. The incidence of B12 deficiency in the elderly has been reported to be as high as 15%, but estimates vary considerably. Some reassurance is provided by a US study that suggests that fortification with folic acid has not been associated with an increase in masking of vitamin B12 deficiency. Adding vitamin B12 as well as folic acid to fortified food has been suggested as a solution, but this is problematic. The vast majority of cases of vitamin B12 deficiency are due to the autoimmune disease pernicious anemia. In this condition, the absence of intrinsic factor prevents the absorption of physiological amounts of vitamin B12. Thus, including vitamin B12 at levels of the dietary reference value (DRV) or less is unlikely to benefit such people because it is not absorbed in sufficient amounts. It has been suggested that if a large enough dose of vitamin B12 is added to the diet, then a sufficient amount will be absorbed by passive diffusion to prevent vitamin B12 deficiency. However, the
amounts required to do this are between 200 and 400 times the DRV for vitamin B12 and most experts would be concerned about adding such a vast excess of an albeit apparently safe nutrient to the food chain. Vitamin B12 deficiency is rare in children, but concern has been raised about possible unknown negative health effects of long-term exposure of children to levels of folic acid that are several times the DRV. It is established that anticonvulsant drugs impair folate status and there is concern that folic acid supplements may reduce the efficacy of these drugs. However, folic acid supplementation of 4 mg daily is recommended for women taking anticonvulsant medication, and there is no evidence of negative effects of supplementation on the control of epilepsy. A number of studies have suggested the possibility that periconceptional use of vitamin supplements containing folic acid may be associated with an increase in multiple births. A systematic review of the three randomized trials of periconceptional supplementation with folic acid or multivitamins or both (see Table 1), updated with new information from the Medical Research Council trial in women who took folic acid, found a consistent increase in the twinning rate. The pooled relative risk was 1.40 (95% confidence interval (CI), 0.93–2.11), but the increase did not reach statistical significance. Increased rates of multiple births were reported in mothers who took multivitamin supplements in two other studies. In a large prospective study of young women in China, there was no increase in the rate of multiple births in women who had taken periconceptional folic acid supplements (0.62%) compared to nonsupplementers (0.67%) (rate ratio, 0.92; 95% CI, 0.83–1.01). The other studies raised the question of whether folic acid or some other component of the multivitamin supplements was responsible for the increased multiple birth rate, and the Chinese study data suggest that folic acid is not associated with this effect. The association between periconceptional use of folic acid and multiple pregnancy has been shown to be confounded by use of in vitro fertilization (IVF). Pregnancies following IVF are strongly associated with both multiple pregnancies and periconceptional use of folic acid. A number of studies have reported no association between use of folic acid supplements and multiple pregnancy when adjustment is made for use of IVF. Research in the United States suggests that fortification has not resulted in an increase in multiple pregnancy rates.
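The Chinese cohort's rate ratio for multiple births follows directly from the reported rates:

\[
\text{rate ratio} = \frac{0.62\%}{0.67\%} \approx 0.93,
\]

in line with the reported value of 0.92 (95% CI, 0.83–1.01); the small difference reflects rounding of the percentage rates.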
There are reports of increased miscarriage rates in women who took vitamin supplements containing folic acid before and during early pregnancy. A review of the randomized trials of periconceptional supplementation with folic acid or multivitamins found a statistically nonsignificant 12% increase in the miscarriage rate among those who took folic acid alone or as part of a multivitamin supplement. It has been suggested that folic acid may extend the viability of fetuses that would otherwise miscarry at earlier stages of pregnancy and be unrecognized as such. In the largest study that has addressed this question, the rates of miscarriage were similar in Chinese women who had (1981 of 21 935, or 9.0%) and had not (174 of 1871, or 9.3%) taken supplements containing only 400 µg folic acid before and during early pregnancy. This study is the most scientifically rigorous examination of the hypothesis that periconceptional folic acid may increase the miscarriage rate, and the findings indicate that this is not so.

Other Nutrients
As noted previously, nutritional factors may be involved in the aetiology of some of the non-folate-related NTDs. Although there is increasing evidence for the involvement of vitamin B12 and less evidence for vitamin C, riboflavin, and zinc, the available evidence is not strong enough to support dietary supplementation with these or other nutrients. Because large doses of vitamin A are known to be teratogenic, women at risk of pregnancy and those in the early months of pregnancy should avoid liver products, which can contain high quantities of vitamin A. In order to keep daily vitamin A intake below 10 000 IU, women in these groups should not take a multivitamin tablet that contains a dose of more than 5000 IU.
Recommendations

The Department of Health in the United Kingdom and the Department of Health and Human Services in the United States were the first to issue recommendations on the use of folic acid to prevent NTDs. The recommendations relate to the prevention of occurrent (first-time) and recurrent NTDs, and other national health authorities have adopted similar recommendations. The main points in these recommendations are given next.

Prevention of NTD Recurrence
To prevent NTD recurrence in the offspring of women or men who have spina bifida or encephalocoele, or who have a history of a previous child with an NTD, the following recommendations apply.
Such women and men should be counseled about the increased risk in subsequent pregnancies and about the protective effect of supplementation with folic acid. Women with a previously affected pregnancy should, unless contraindicated, be advised to take 4.0 mg of folic acid daily from at least 4 weeks before conception until the end of the third month of pregnancy. In countries in which a 5.0 mg rather than a 4.0 mg preparation is available, the former can be used, but the lower 4.0 mg dose should be used as soon as this preparation becomes available. The 4.0 mg dose should be taken only under the supervision of a doctor because giving high doses of folic acid can complicate the diagnosis of vitamin B12 deficiency, and epileptic women on anticonvulsant therapy require individual counseling before starting folic acid.

Prevention of NTD Occurrence
For the prevention of occurrence of NTDs, the US Public Health Service recommends that all women capable of becoming pregnant consume 0.4 mg of folic acid per day and that total folate consumption should not be more than 1.0 mg per day to avoid the possible risks of high intakes. The UK Expert Advisory Group recommends that women should take an extra 0.4 mg of folic acid daily from when they begin trying to conceive until week 12 of pregnancy. If a woman who has not been taking this additional amount of folic acid suspects that she may have just started a pregnancy, she should begin taking extra folic acid immediately and continue until week 12 of pregnancy. The US Public Health Service and the UK Expert Advisory Group have outlined three possible ways of achieving an extra intake of folate/folic acid: eating more folate-rich foods, eating foods fortified with folic acid, and taking folic acid as a medicinal or food supplement. It is recommended that women should use whatever source or combination of sources they prefer to ensure that they obtain the necessary extra folic acid. The effectiveness of these approaches in achieving the recommended increased population intake of folate/folic acid was considered under Prevention. As already noted, it is very difficult to achieve a total daily intake of 0.6 mg folate/folic acid from only foods naturally rich in folate. The only practical ways of obtaining the recommended extra 0.4 mg folic acid daily are by consuming folic acid supplements or fortified foods, and this should be emphasized when advising women. In countries in which there is mandatory food fortification, it is important that women
are advised to continue taking supplements because fortification is designed to deliver considerably less than the recommended extra 0.4 mg daily intake of folic acid. When taking supplements, the folic acid dose should be obtained from pills containing only folic acid rather than from multivitamin preparations because of the risk of taking harmful levels of vitamins A and D in early pregnancy.
See also: Bioavailability. Cobalamins. Folic Acid. Food Fortification: Developed Countries; Developing Countries. Fruits and Vegetables. Homocysteine. Nutrient–Gene Interactions: Health Implications. Obesity: Definition, Etiology and Assessment; Complications. Older People: Nutrition-Related Problems. Socio-economic Status. Supplementation: Role of Micronutrient Supplementation.

Further Reading

Berry RJ, Li Z, Erickson JD et al. for the China–U.S. Collaborative Project for Neural Tube Defect Prevention (1999) Prevention of neural tube defects with folic acid in China. New England Journal of Medicine 341: 1485–1490.
Botto LD and Yang Q (2000) 5,10-Methylenetetrahydrofolate reductase gene variants and congenital anomalies: A HuGE review. American Journal of Epidemiology 151: 862–877.
Botto LD, Moore CA, Khoury MJ, and Erickson JD (1999) Neural-tube defects. New England Journal of Medicine 341: 1509–1519.
Centers for Disease Control and Prevention (1991) Use of folic acid for prevention of spina bifida and other neural tube defects—1983–1991. Morbidity and Mortality Weekly Report 40: 513–516.
Centers for Disease Control and Prevention (1993) Recommendations for the use of folic acid to reduce the number of cases of spina bifida and other neural tube defects. Morbidity and Mortality Weekly Report 41(RR-14): 1–7.
Committee on Medical Aspects of Food and Nutrition Policy (2000) Folic Acid and the Prevention of Disease. London: Department of Health.
Daly LE, Kirke PN, Molloy A, Weir DG, and Scott JM (1995) Folate levels and neural tube defects—Implications for prevention. Journal of the American Medical Association 274: 1698–1702.
Daly S, Mills JL, Molloy AM et al. (1997) Minimum effective dose of folic acid for food fortification to prevent neural tube defects. Lancet 350: 1666–1669.
Elwood JM, Little J, and Elwood JH (1992) Epidemiology and Control of Neural Tube Defects. Oxford: Oxford University Press.
EUROCAT Working Group (2003) EUROCAT Special Report: Prevention of Neural Tube Defects by Periconceptional Folic Acid Supplementation in Europe. Belfast: University of Ulster. Available at www.eurocat.ulster.ac.uk/pubdata/folic%20acid.html.
Expert Advisory Group (1992) Folic Acid and the Prevention of Neural Tube Defects. London: Department of Health.
Scott JM, Kirke PN, and Weir DG (1995) Folate and neural tube defects. In: Bailey L (ed.) Folate in Health and Disease, pp. 329–360. New York: Marcel Dekker.
Shields DC, Kirke PN, Mills JL et al. (1999) The 'thermolabile' variant of methylenetetrahydrofolate reductase and neural tube defects: An evaluation of genetic risk and the relative importance of the genotypes of the embryo and the mother. American Journal of Human Genetics 64: 1045–1055.
Wald NJ, Hackshaw AK, Stone R, and Sourial NA (1996) Blood folic acid and vitamin B12 in relation to neural tube defects. British Journal of Obstetrics and Gynaecology 103: 319–324.
Pre-eclampsia and Diet
E Abalos, Centro Rosarino de Estudios Perinatales, Rosario, Argentina
J Villar, World Health Organization, Geneva, Switzerland
© 2005 Elsevier Ltd. All rights reserved.
Introduction
Hypertensive disorders during pregnancy are one of the main causes of maternal death worldwide, and most of these deaths are attributed to eclampsia. Eclampsia is the occurrence of fits in a pre-eclamptic woman that cannot be attributed to other causes (such as epilepsy). Hypertensive disorders occur in 6–8% of all pregnancies, contributing significantly to stillbirths and to neonatal morbidity and mortality. Babies are also at increased risk of intrauterine growth restriction, low birth weight, and preterm delivery. Pregnant women with hypertension, either newly diagnosed or pre-existing, are prone to the development of potentially lethal complications, notably abruptio placentae, disseminated intravascular coagulation, cerebral hemorrhage, pulmonary edema, hepatic failure, and acute renal failure. The etiology of hypertensive disorders related to pregnancy, particularly pre-eclampsia, remains unknown. The most important consideration in the classification of the disease is differentiating hypertensive disorders that antedate pregnancy from those that are pregnancy specific, of which the more ominous are pre-eclampsia and eclampsia. Pre-eclampsia is a pregnancy-specific syndrome of reduced organ perfusion secondary to vasospasm and activation of the coagulation cascade. Although our understanding of this syndrome has increased, the criteria used to identify the disorder remain a subject of confusion and controversy. In chronic hypertension, elevated blood pressure is the cardinal pathophysiologic feature, whereas in pre-eclampsia, increased blood pressure is important primarily as a sign of the underlying disorder. As might be expected, the impact of the two conditions on mother and fetus is different, as is their management.
Classification

There is controversy about the definition of hypertensive disorders during pregnancy, and several classifications have been suggested. Recently, the USA National High Blood Pressure Education Program Working Group on High Blood Pressure in Pregnancy updated the 1990 report and classified the hypertensive disorders during pregnancy as: (a) chronic hypertension, defined as hypertension observable before pregnancy or diagnosed before the 20th week of gestation; (b) pre-eclampsia, which is a pregnancy-specific syndrome occurring usually after 20 weeks' gestation, determined by hypertension with proteinuria; (c) pre-eclampsia superimposed on chronic hypertension; and (d) pregnancy-induced hypertension or gestational hypertension, which is transient hypertension detected for the first time after mid-pregnancy if pre-eclampsia is not present at the time of delivery and blood pressure returns to normal by 12 weeks post-partum (a retrospective diagnosis). The system suggested by the International Society for the Study of Hypertension in Pregnancy (ISSHP) defines hypertension as a diastolic blood pressure of 90 mmHg or above on two consecutive occasions at least 4 hours apart, or a single diastolic blood pressure of 110 mmHg or more. The definition of pre-eclampsia has the same criteria for high blood pressure, but with the addition of significant proteinuria, usually at least 300 mg per 24 h or 1+ on dipsticks.
Pathophysiology of Pre-eclampsia

Pre-eclampsia is a syndrome with both fetal and maternal manifestations. The maternal disease is characterized by vasospasm, activation of the coagulation system, and perturbations in many humoral and autacoid systems related to volume and blood pressure control. The pathologic changes in this disorder are primarily ischemic in nature and affect the placenta, kidney, liver, and brain. Of importance, and distinguishing pre-eclampsia from chronic or gestational hypertension, is that pre-eclampsia is more than hypertension; it is a systemic syndrome, and several of its 'nonhypertensive' complications can be life-threatening even when blood pressure elevations are quite mild. The cause of pre-eclampsia is not known. Many consider the placenta as the pathogenic focus for all manifestations of pre-eclampsia because the delivery of both the baby and the placenta is the only definitive cure of this disease. There is no disease without the placenta. Thus, research has focused on the changes in the maternal blood vessels that supply
blood to the placenta. Failure of the spiral arteries to remodel is postulated as the morphologic basis for decreased placental perfusion in pre-eclampsia, which may ultimately lead to early placental hypoxia. Oxidative stress and inflammatory-like responses may also be important in the pathophysiology of pre-eclampsia. Research on how alterations in the immune response at the maternal interface might lead to pre-eclampsia addresses the link between placenta and maternal disease. A nonclassical human leucocyte antigen (HLA), HLA G, is expressed in normal placental tissue and may play a role in modulating the maternal immune response to the immunologically foreign placenta. Placental tissue from pre-eclamptic pregnancies may express less or different HLA G proteins, resulting in a breakdown of maternal tolerance to the placenta. Additional evidence for alterations in immunity in pathogenesis includes the higher frequency of the disease in first pregnancies, with subsequent pregnancies usually being normal; a decreased prevalence after heterologous blood transfusions; a long period of cohabitation before successful conception; and observed pathologic changes in the placental vasculature in pre-eclampsia that resemble allograft rejection. Finally, there are increased levels of inflammatory cytokines in the placenta and maternal circulation, as well as evidence of increased 'natural killer' cells and neutrophil activation in pre-eclampsia. The mechanisms underlying vasoconstriction and altered vascular reactivity in pre-eclampsia remain obscure. Research has focused on changes in the ratio of vasodilative and vasoconstrictive prostanoids, since prostacyclin may be suppressed and thromboxane may be raised. More recently, investigators have postulated that the vasoconstrictive potential of pressor substances (e.g., angiotensin II and endothelin) is magnified in pre-eclampsia as a consequence of decreased nitric oxide (NO) synthesis and decreased production of endothelium-derived relaxing factor (EDRF). Also under investigation is the role of endothelial cells (the site of prostanoid, endothelin, and EDRF production), which in pre-eclampsia may be dysfunctional, due perhaps to inflammatory cytokines (e.g., tumor necrosis factor alpha) and increased oxidative stress. Other systems postulated to play a role in pre-eclamptic hypertension are the sympathetic nervous system, calciotrophic hormones, insulin, and magnesium metabolism. Finally, some nutritional deficiencies have been postulated as playing a role in the pathogenesis of pre-eclampsia. Their possible role in the hypertensive disorders of pregnancy is discussed below.
The Possible Role of Nutrition in the Pathophysiology of Pre-eclampsia

Epidemiological observations have long suggested a role for nutritional deficiencies (i.e., calcium, proteins, vitamins, etc.) in pre-eclampsia. However, intervention evaluations have failed to confirm such promising observations. We will describe here the evidence from randomized controlled trials that supports the relationship between different nutrients and pre-eclampsia.

Calcium
There is considerable evidence from observational and experimental studies linking calcium intake and hypertension during pregnancy. However, there is still no satisfactory explanation for the mechanisms involved in the calcium-mediated effect on blood pressure reduction. It has been postulated that parathyroid hormone could be involved in this relationship. Demonstrated alterations in extracellular calcium homeostasis in pre-eclampsia include hypocalciuria and decreased serum levels of calcitriol. Increased parathyroid hormone (PTH) and decreased plasma ionized calcium concentration have not been consistently observed. Also, consistent abnormalities of intracellular calcium metabolism have been described in pre-eclamptic women, such as increased intracellular free calcium concentration in platelets and lymphocytes. Increases in intracellular free calcium concentration in circulating cells are hypothesized to result from fluctuation in hormones or vasoactive substances that cause similar alterations in vascular smooth muscle. Pregnancy is a state of high calcium requirements as a result of fetal demands, while maternal adaptive mechanisms are partially inhibited; these phenomena lead to the hyperparathyroid state of pregnancy. An increase in serum parathyroid hormone levels would involve an increase in free intracellular calcium. The concentration of intracellular free calcium in vascular smooth muscle cells determines the degree of tension and is the trigger for muscular contraction, so the vasoconstrictive effect, with a rise in blood pressure, results from an increase in vascular smooth muscle tension.

Antioxidant Agents
Nutrition could also contribute to the genesis of pre-eclampsia through factors that increase oxidative stress. One such factor could be a deficient intake of antioxidants, specifically vitamins C and E. Vitamin C is central to the neutralization of both
water-soluble and lipid-soluble free radicals; as a water-soluble molecule, its ability to neutralize free radicals in the aqueous compartment is clear. Moreover, ascorbate is not synthesized in humans and must come from the diet. Vitamin E, a potent antioxidant, has also been suggested to play a role in preventing pre-eclampsia. Other Nutrients
Nutritional factors other than antioxidants can also contribute to oxidative stress. Hyperhomocysteinemia can occur as a result of dietary deficiencies, and hyperhomocysteinemia as a risk factor for pre-eclampsia is thought to act, at least in part, through the generation of oxidative stress. Vitamins B6 and B12 and folic acid are involved at different steps in the metabolic pathway for removing or recycling homocysteine to methionine, and dietary deficiencies of any of these micronutrients can increase circulating homocysteine. Pre-eclampsia is characterized by increased triglycerides that favor the formation of small, dense low-density lipoproteins (LDLs). This lipoprotein variant has increased access to the subendothelial space, where it is sequestered from blood-borne antioxidants. The relevance of triglycerides in the genesis of pre-eclampsia is indicated by the fact that they are increased long before clinically evident disease. Similarly, free fatty acids are increased in pre-eclampsia, and this increment can be observed months before the diagnosis. Recent studies indicate that this effect may be secondary to altered copper binding by albumin, to which large amounts of free fatty acids are bound. Unbound copper is a potent stimulator of free radical formation. Ordinarily this effect of copper is prevented by protein binding (quantitatively, primarily to albumin). However, with fatty acid binding, albumin binds copper differently; in this configuration, copper bound to albumin retains its ability to participate in redox reactions. Thus, it appears that increased free fatty acids can also contribute to oxidative stress. All of these nutritional alterations may be amenable to dietary modification, raising the possibility of nutritional prophylaxis.
Nutritional Interventions and Hypertensive Disorders of Pregnancy Prevention
The ability to prevent hypertensive disorders of pregnancy is limited by lack of knowledge of their underlying etiology. Prevention is focused on identifying women at higher risk of developing pregnancy-induced hypertension or pre-eclampsia during
pregnancy, followed by close clinical and laboratory monitoring to recognize the clinical symptoms of the disease in its early stages. These women and their pregnancies can then be selected for more intensive monitoring or delivery. Although these measures do not prevent the disease, they may be helpful for preventing some adverse maternal and fetal sequelae. Alongside other nonpharmacological interventions, some dietary interventions have been proposed to prevent the development of pregnancy-induced hypertension and pre-eclampsia. Nutritional advice in pregnancy The relevant literature was reviewed in order to assess the effects of advising pregnant women to increase their energy and protein intakes on the outcome of pregnancy and on maternal and fetal/infant morbidity and mortality. Nutritional advice was assessed in a Cochrane systematic review and appears to be effective in increasing pregnant women's energy and protein intake, but the implications for fetal, infant, or maternal health cannot be judged from the available evidence. Pre-eclampsia prevention was assessed in only one small trial involving 136 women, with no beneficial effect. Protein/energy supplementation The effect of balanced protein/energy supplements for pregnant women on gestational weight gain and pregnancy outcomes was also evaluated. Pre-eclampsia prevention was assessed in three trials involving 516 women, with no significant beneficial effects. However, these trials had methodological flaws, so the results should be interpreted cautiously. In another pre-specified subgroup, only one trial involving 782 women evaluated pre-eclampsia prevention when isocaloric balanced protein/energy supplements were given to underweight pregnant women, showing no effect. Energy/protein restriction for obese pregnant women Excessive weight gain during pregnancy has long been recognized as a risk factor for edema and impending pre-eclampsia. Epidemiological studies suggested that high maternal weight was positively associated with the risk of pre-eclampsia. Energy/protein restriction for women with high weight-for-height or high weight gain during pregnancy was another subgroup assessed in this systematic review. Pre-eclampsia was evaluated in two trials (284 women), which showed no reduction in the risk of occurrence. Similarly, there was no influence on pregnancy-induced hypertension (3 trials, 384 women). The limited evidence available suggests
that protein/energy restriction of pregnant women who are overweight or exhibit high weight gain is unlikely to be beneficial and may be harmful to the developing fetus. Although weight reduction may be helpful in reducing or preventing high blood pressure in nonpregnant women, there is no effect on preventing pre-eclampsia, even in obese women. Clinicians frequently ask pregnant women to restrict their food intake in an attempt to prevent pre-eclampsia, despite the absence of evidence that such advice is beneficial. Salt restriction Even in the early phase of pregnancy, marked hemodynamic changes occur including a fall in vascular resistance and blood pressure and a rise in cardiac output. To compensate for the increased intravascular capacity the kidney retains more sodium and water. Apparently, the set point of sodium homeostasis shifts to a higher level at the expense of an expansion of extracellular volume. In nonpregnant individuals, a strong positive association of sodium intake with blood pressure has been established, but the relationship between sodium intake and blood pressure in human pregnancy remains obscure to date. For decades a low-salt diet has often been recommended as treatment for edema, in the hope that restricting salt intake would treat, and also prevent, pre-eclampsia. Recently, this practice has been questioned, and even a high sodium intake has been proposed for pre-eclampsia treatment and prevention. The concerns about the effect of a low-sodium diet during pregnancy on maternal nutritional status led researchers to investigate if such changes could alter other nutrient intake. It was shown that the reduction in sodium intake also caused a significant reduction in the intake of energy, protein, carbohydrates, fat, calcium, zinc, magnesium, iron, and cholesterol. Even though the majority of clinicians no longer advise women to alter their salt intake during pregnancy, this is still current practice in many countries worldwide. A recently published Cochrane systematic review evaluates the effect of the advice about low dietary salt intake during pregnancy. The review includes two trials with data reported for 603 women. Both trials compared nutritional advice to restrict dietary salt with advice to continue a normal diet. Women with established pre-eclampsia were not enrolled, so this review provides no information about the effects of advice to restrict salt intake for treatment of pre-eclampsia. No effect was found in preventing pre-eclampsia or pregnancy-induced hypertension (1 trial, 242 women). Women’s preferences were not reported, but the authors presumed that a
low-salt diet was not very palatable and was therefore difficult to follow. Calcium supplementation A role for altered calcium metabolism in the pathogenesis of pre-eclampsia is suggested by epidemiological evidence linking low dietary levels of calcium with increased incidence of the disease. In agreement with these observations, several modifications in calcium metabolism have been observed in pre-eclamptic women and in calcium-supplemented mothers. A Cochrane systematic review of calcium supplementation during pregnancy has been published. The authors prespecified comparison groups taking into account the women's risk of hypertensive disorders of pregnancy (low versus increased) and the women's baseline dietary calcium intake.
UL (mg day⁻¹ α-tocopherol) by age group: 1–3 years, 200; 4–8 years, 300; 9–13 years, 600; 14–18 years, 800; 19 years and older, 1000.

Table 4 Number of IU that equal the UL
all rac-α-Tocopherol and esters (dl-α-tocopheryl acetate, dl-α-tocopheryl succinate, dl-α-tocopherol): 1100 IU
RRR-α-Tocopherol and esters (d-α-tocopheryl acetate, d-α-tocopheryl succinate, d-α-tocopherol): 1500 IU
Adapted from Food and Nutrition Board and Institute of Medicine (2000) Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium, and Carotenoids. Washington, DC: National Academy Press.
adequate data. In 2000 the Food and Nutrition Board did recommend that food be the only source of vitamin E for infants. However, a UL of 21 mg day⁻¹ was suggested for premature infants with birth weights of 1.5 kg, based on the adult UL. The vitamin E UL was set for supplements because it is almost impossible to consume enough α-tocopherol-containing foods to achieve a daily 1000 mg intake for prolonged periods of time. The UL was defined for all forms of α-tocopherol, not just the 2R forms, because all of the forms in all rac-α-tocopherol are absorbed and delivered to the liver. The appropriate conversion factors are different from those shown in Table 2 and are necessary to estimate the UL for supplements containing either RRR- or all rac-α-tocopherol. The ULs given in IU are shown in Table 4. The UL expressed in IU is apparently higher for RRR-α-tocopherol because each IU of the RRR forms corresponds to less α-tocopherol than each IU of the all rac forms.
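The IU figures in Table 4 can be related to the 1000 mg UL by a simple conversion. The sketch below assumes the conventional USP potency factors (roughly 0.91 mg of all rac-α-tocopherol and 0.67 mg of RRR-α-tocopherol per IU), which are not stated in the text and are included here only for illustration:

    # Illustrative sketch: converting the 1000 mg/day UL for alpha-tocopherol
    # into IU for supplement labels. The mg-per-IU factors are the conventional
    # USP values and are an assumption of this example, not part of the DRI text.
    MG_PER_IU = {
        "all rac-alpha-tocopherol (dl- forms)": 0.91,  # assumed USP factor
        "RRR-alpha-tocopherol (d- forms)": 0.67,       # assumed USP factor
    }
    UL_MG = 1000  # tolerable upper intake level for adults, mg/day

    for form, mg_per_iu in MG_PER_IU.items():
        iu = UL_MG / mg_per_iu
        # Rounding to the nearest 100 IU reproduces the tabulated values
        print(f"{form}: {round(iu, -2):.0f} IU")
    # Expected output: about 1100 IU for the all rac- forms and 1500 IU for the
    # RRR- forms, matching Table 4.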
Precautions and Adverse Reactions High vitamin E intakes are associated with an increased tendency to bleed. It is not known whether this is a result of decreased platelet aggregation caused by inhibition of protein kinase C by α-tocopherol, some other platelet-related mechanism, or impaired clotting due to an interaction between vitamins K and E. Individuals who are deficient in vitamin K or who are on anticoagulant therapy are at increased risk of uncontrolled bleeding. Patients on anticoagulant therapy should therefore be monitored when taking vitamin E supplements and should ensure adequate vitamin K intakes. Adverse Effects of Drugs on Vitamin E Status
Drugs intended to promote weight loss by impairing fat absorption, such as Orlistat or sucrose polyester,
can also impair vitamin E and other fat-soluble vitamin absorption. Therefore, multivitamin supplementation is recommended with these drugs. Vitamin supplements should be taken with meals at times other than when these drugs are taken to allow adequate absorption of the fat-soluble vitamins.
Vitamin E Bioavailability Absorption and Plasma Transport
Intestinal absorption of vitamin E is dependent upon normal processes of fat absorption. Specifically, both biliary and pancreatic secretions are necessary for solubilization of vitamin E in mixed micelles containing bile acids, fatty acids, and monoglycerides (Figure 3). α-Tocopheryl acetates (or other esters) from vitamin E supplements are hydrolyzed by pancreatic esterases to α-tocopherol prior to absorption. Following micellar uptake by enterocytes, vitamin E is incorporated into chylomicrons and secreted into the lymph. Once in the circulation, chylomicron triglycerides are hydrolyzed by lipoprotein lipase. During chylomicron catabolism in the circulation, vitamin E is nonspecifically transferred both to tissues and to other circulating lipoproteins. It is not until the vitamin E-containing chylomicrons reach the liver that discrimination between the various dietary vitamin E forms occurs. The hepatic α-TTP preferentially facilitates secretion of α-tocopherol, specifically 2R-α-tocopherols, and not other
tocopherols or tocotrienols, from the liver into the plasma in very low-density lipoproteins (VLDLs). In the circulation, VLDLs are catabolized to low-density lipoproteins (LDLs, also known as the 'bad cholesterol' because high LDL levels are associated with increased risk of heart disease). During this lipolytic process, all of the circulating lipoproteins become enriched with α-tocopherol. There is no evidence that vitamin E is transported in the plasma by a specific carrier protein; rather, it is nonspecifically transported in lipoproteins. An advantage of vitamin E transport in lipoproteins is that easily oxidizable lipids are protected by the simultaneous transport of this lipid-soluble antioxidant. Similarly, delivery of vitamin E to tissues is dependent upon lipid and lipoprotein metabolism. Thus, as peroxidizable lipids are taken up by tissues, the tissues simultaneously acquire a lipid-soluble antioxidant. Plasma Concentrations, Kinetics, and Tissue Delivery
Plasma α-tocopherol concentrations in normal humans range from 11 to 37 µmol l⁻¹. When plasma lipids are taken into account, the lower limits of normal are 1.6 µmol α-tocopherol/mmol lipid or 2.5 µmol α-tocopherol/mmol cholesterol. α-Tocopherol is transported in plasma lipoproteins, so if lipid concentrations are extraordinarily high or low, correction for lipid levels is helpful in determining the adequacy of vitamin E status. Additionally, α-tocopherol concentrations in erythrocytes, adipose tissue, or even peripheral nerves have been used to assess vitamin E status. The apparent half-life of RRR-α-tocopherol in plasma of normal subjects is approximately 48 h, while that of SRR-α-tocopherol or γ-tocopherol is only 15 h. Vitamin E is delivered to tissues by three mechanisms: transfer from triglyceride-rich lipoproteins during lipolysis; tissue uptake of lipoproteins by the various receptors that mediate lipoprotein uptake; and vitamin E exchange between lipoproteins or tissues. The regulation of tissue vitamin E is not well understood, but α-tocopherol is the predominant form in tissues as a result of its dominance in plasma.
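As a rough, hypothetical illustration of the lipid correction described above, the plasma α-tocopherol:cholesterol ratio can be compared with the quoted lower limit of normal (2.5 µmol/mmol); the patient values in this sketch are invented, and only the cut-offs come from the text:

    # Hypothetical example of lipid-standardized vitamin E status assessment.
    # The cut-off of 2.5 umol alpha-tocopherol per mmol cholesterol is quoted in
    # the text; the patient values below are invented for illustration.
    alpha_tocopherol_umol_per_l = 18.0   # hypothetical plasma alpha-tocopherol
    cholesterol_mmol_per_l = 7.8         # hypothetical (high) plasma cholesterol

    ratio = alpha_tocopherol_umol_per_l / cholesterol_mmol_per_l
    print(f"alpha-tocopherol:cholesterol = {ratio:.1f} umol/mmol")

    # Although 18 umol/l lies within the unadjusted normal range of 11-37 umol/l,
    # the lipid-corrected ratio falls below 2.5 umol/mmol in this hyperlipidemic
    # example, suggesting that status may be less adequate than it first appears.
    print("adequate" if ratio >= 2.5 else "possibly inadequate")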
Figure 3 Intestinal vitamin E absorption and plasma lipoprotein transport. (Adapted from Traber MG (1998) Vitamin E. In: Shils ME, Olson JA, Shike M, and Ross AC (eds.) Modern Nutrition in Health and Disease, pp. 347–362. Baltimore: Williams & Wilkins.)
Human Vitamin E Deficiency Vitamin E deficiency was first described in children with fat malabsorption syndromes, principally abetalipoproteinemia, cystic fibrosis, and cholestatic liver disease. Subsequently, humans with severe vitamin E deficiency with no known defect in lipid or
lipoprotein metabolism were described to have a defect in the α-TTP gene. Erythrocyte fragility, hemolysis, and anemia were described as vitamin E deficiency symptoms in various animals fed diets devoid of vitamin E. Additionally, studies in experimental animals have shown that a deficiency of both selenium (a required component of glutathione peroxidases) and vitamin E causes a more rapid and severe onset of debilitating deficiency symptoms. Hypothetically, a deficiency of both vitamins E and C should also cause more severe antioxidant deficiency symptoms, but most animals make their own vitamin C, so this interaction has not been unequivocally demonstrated in humans or animals. In contrast to experimental vitamin E deficiency in rodents, in humans the major vitamin E deficiency symptom is a peripheral neuropathy characterized by degeneration of the large-caliber axons of sensory neurons. Vitamin E deficiency occurs only rarely in humans and almost never as a result of inadequate vitamin E intakes; therefore, interactions with other nutrients have not been well studied. There have been reports of vitamin E deficiency symptoms in persons with protein-calorie malnutrition. Vitamin E deficiency does occur as a result of genetic abnormalities in α-TTP and as a result of various fat malabsorption syndromes. Vitamin E supplementation halts the progression of the neurologic abnormalities caused by inadequate nerve tissue α-tocopherol and, in some cases, has reversed them. Patients with these disorders require daily pharmacologic vitamin E doses for life to overcome the mechanisms leading to deficiency. Generally, patients with 'ataxia with vitamin E deficiency' are advised to consume 1000 mg RRR-α-tocopherol per day in divided doses, patients with abetalipoproteinemia 100 mg per kg body weight, and cystic fibrosis sufferers 400 mg day⁻¹. However, patients with fat malabsorption due to impaired biliary secretion generally do not absorb orally administered vitamin E. These patients are treated with special forms of vitamin E, such as α-tocopheryl polyethylene glycol succinate, that spontaneously form micelles, obviating the need for bile acids.
Chronic Disease Prevention The frequency of human vitamin E deficiency is very rare. In individuals at risk, it is clear that vitamin E supplements should be recommended to prevent deficiency symptoms. What about vitamin E supplement use in normal individuals? Dietary changes such as decreasing fat intakes, substituting fat-free foods for fat-containing ones, and increased reliance
on meals away from the home have resulted in decreased consumption of α-tocopherol-containing foods. Therefore, achieving the vitamin E RDA of 15 mg α-tocopherol may be difficult. Special attention to consuming nuts, seeds, and whole grains will improve α-tocopherol intakes; alternatively, multivitamin pills can be consumed. Importantly, vitamin E's potential role in preventing or ameliorating chronic diseases associated with oxidative stress leads us to ask whether vitamin E supplements might be beneficial. For many vitamins, when 'excess' amounts are consumed, they are excreted and provide no added benefits. Antioxidant nutrients may, however, be different. Heart disease and stroke, cancer, chronic inflammation, impaired immune function, Alzheimer's disease: a case can be made for the role of oxygen free radicals in the etiology of all of these disorders, and even in aging itself. Do antioxidant nutrients counteract the effects of free radicals and thereby ameliorate these disorders? And, if so, do large antioxidant supplements have beneficial effects beyond 'required' amounts? The 2000 Food and Nutrition Board and Institute of Medicine DRI Report on Vitamin C, Vitamin E, Selenium, and Carotenoids stated that there was insufficient proof to warrant advocating supplementation with antioxidants. However, it also stated that the hypothesis that antioxidant supplements might have beneficial effects was promising. This remains a very controversial area in vitamin E research. See also: Antioxidants: Diet and Antioxidant Defense; Observational Studies; Intervention Studies. Ascorbic Acid: Physiology, Dietary Sources and Requirements; Deficiency States. Fats and Oils. Nuts and Seeds. Vitamin E: Physiology and Health Effects.
Further Reading Food and Nutrition Board and Institute of Medicine (2000) Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium, and Carotenoids. Washington, DC: National Academy Press. Keaney JF Jr, Simon DI, and Freedman JE (1999) Vitamin E and vascular homeostasis: implications for atherosclerosis. FASEB Journal 13: 965–975. Ouahchi K, Arita M, Kayden H, Hentati F, Ben Hamida M, Sokol R, Arai H, Inoue K, Mandel JL, and Koenig M (1995) Ataxia with isolated vitamin E deficiency is caused by mutations in the alpha-tocopherol transfer protein. Nature Genetics 9: 141–145. Pryor WA (2000) Vitamin E and heart disease: basic science to clinical intervention trials. Free Radical Biology and Medicine 28: 141–164. Traber MG Vitamin E. In: Shils ME, Olson JA, Shike M, and Ross AC (eds.) Modern Nutrition in Health and Disease, vol. 10. Baltimore: Williams & Wilkins (in press).
VITAMIN E/Physiology and Health Effects
Physiology and Health Effects P A Morrissey and M Kiely, University College Cork, Cork, Ireland © 2005 Elsevier Ltd. All rights reserved.
In 1922, Evans and Bishop discovered a fat-soluble dietary constituent that was essential for the prevention of fetal death and sterility in rats accidentally fed a diet containing rancid lard. This was originally called 'factor X' and 'antisterility factor' but was later named vitamin E. Subsequently, the multiple nature of the vitamin began to appear when two compounds with vitamin E activity were isolated and characterized from wheat germ oil. These compounds were designated α- and β-tocopherol, derived from the Greek 'tokos' for childbirth, 'phorein' meaning to bring forth, and 'ol' for the alcohol portion of the molecule. Later, two additional tocopherols, γ- and δ-tocopherol, as well as four tocotrienols were isolated from edible plant oils. After the initial discovery, more than 40 years passed before it was proved that vitamin E deficiency could cause disease in humans and was associated with antioxidant functions in cellular systems. It took another 25 years before the non-antioxidant properties of the vitamin were highlighted. This article reviews the chemistry of the tocopherols; their dietary sources, absorption, transport, and storage; and their metabolic function. In addition, the potential role of dietary or supplemental tocopherol intake in the prevention of chronic disease and possible mechanisms for observed protective effects are discussed. Finally, a summary of the assessment of tocopherol status in humans, intake requirements, and an overview of the safety of high intakes is provided.
Chemistry The chemistry of vitamin E is rather complex because there are eight structurally related forms— four tocopherols (, , , and ) and four tocotrienols (, , , and )—that are synthesized from homogentisic acid and isopentenyl diphosphate in the plastid envelope of plants. The structures of -, -, -, and -tocopherols are shown in Figure 1. -Tocopherol is methylated at C5, C7, and C8 on the chromanol ring, whereas the other homologs (, , and ) have different degrees of methylation (Figure 1). Tocopherols have a saturated phytyl side chain attached at C2 and have three chiral centers that are in the R configuration at positions C2, C41, and C81 in the naturally occurring forms, which are
given the prefix 2R, 4′R, and 8′R (designated RRR). The members of the tocotrienol family are unsaturated at C3′, C7′, and C11′ in the isoprenoid side chain and possess one chiral center at C2 in addition to two sites of geometric isomerism at C3′ and C7′. Vitamin E biological activity is expressed as mg RRR-α-tocopherol equivalents (α-TE) whenever possible. The activity of RRR-α-tocopherol is 1. The activities of RRR-β-, RRR-γ-, and RRR-δ-tocopherol are 0.5, 0.1, and 0.03, respectively.
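The activity factors above translate directly into an α-tocopherol-equivalent calculation; in the sketch below only the weighting factors (1, 0.5, 0.1, and 0.03) come from the text, and the milligram amounts of each tocopherol are hypothetical:

    # Sketch of an alpha-tocopherol equivalent (alpha-TE) calculation using the
    # relative activities quoted in the text (alpha 1, beta 0.5, gamma 0.1,
    # delta 0.03). The milligram amounts below are hypothetical.
    ACTIVITY = {"alpha": 1.0, "beta": 0.5, "gamma": 0.1, "delta": 0.03}

    # Hypothetical tocopherol content of a food portion, in mg
    portion_mg = {"alpha": 2.0, "beta": 0.3, "gamma": 6.0, "delta": 1.5}

    alpha_te = sum(ACTIVITY[t] * mg for t, mg in portion_mg.items())
    print(f"{alpha_te:.2f} mg alpha-TE")
    # 2.0 + 0.15 + 0.60 + 0.045, i.e. about 2.8 mg alpha-TE in total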
Dietary Sources The composition and content of the different tocopherol components in plant tissue vary considerably, ranging from extremely low levels found in potato tubers to high levels found in oil seeds. -Tocopherol is the predominant form in photosynthetic tissues and is mainly localized in plastids. The particular enrichment in the chloroplast membranes is probably related to the ability of tocopherols to quench or to scavenge reactive oxygen species and lipid peroxy radicals by physical or chemical means. In this way, the photosynthetic apparatus can be protected from oxygen toxicity and lipid peroxidation. In nonphotosynthetic tissues, -tocopherol frequently predominates and can be involved in the prevention of autoxidation of polyunsaturated fatty acids. Most of the tocopherol content of wheat germ, sunflower, safflower, and canola and olive oils is in the form of -tocopherol, and these oils contain approximately 1700, 500, 350, 200, and 120 mg -TE kg1, respectively. Vegetable oils (e.g., corn, cottonseed, palm, soybean, and sesame) and nuts (e.g., Brazil nuts, pecans, and peanuts) are rich sources of -tocopherol. Corn and soybean oils contain 5–10 times as much -tocopherol as -tocopherolrich sources of -tocopherol, and each contains approximately 200 mg -TE kg1. Because of the widespread use of these plant products, -tocopherol is considered to represent 70% of the vitamin E consumed in the typical US diet. The level of vitamin E in nuts ranges from 7 mg -TE kg1 in coconuts to 450 mg -TE kg1 in almonds. Cereals are moderate sources of vitamin E, providing between 6 (barley) and 23 mg -TE kg1 (rye). Fresh fruit and vegetables generally contain approximately 1–10 mg -TE kg1. The concentration of vitamin E (-tocopherol is the predominant form) in animal products is usually low, but these may be significant dietary sources because of their high consumption. Mean dietary intakes of 6.3–13.0 mg -TE per day have been reported in various European and US population studies. Data from the Third National Health and Nutrition Examination Survey
Figure 1 The four major forms of vitamin E (α-, β-, γ-, and δ-tocopherols) differ by the number and positions of methyl groups on the chromanol ring (ring substituents R1, R2, and R3: α-tocopherol CH3, CH3, CH3; β-tocopherol CH3, H, CH3; γ-tocopherol H, CH3, CH3; δ-tocopherol H, H, CH3). In α-tocopherol, the most biologically active form, the chromanol ring is fully methylated. In β- and γ-tocopherols, the ring contains two methyl groups, whereas δ-tocopherol is methylated in one position. The corresponding tocotrienols have the same structural arrangement except for the presence of double bonds on the isoprenoid side chain at C3′, C7′, and C11′.
(NHANES III) (1988–1994) in the United States indicate a median total intake (including supplements) of α-TE of 12.9 mg day⁻¹ and a median intake from food only of 11.7 mg day⁻¹ in men aged 31–50 years. In women in this age range, the median total intake (including supplements) of α-TE was 9.1 mg day⁻¹ and the median intake from food only was 8.0 mg day⁻¹. In the United States, fats and oils used in spreads, etc. contribute 20.2% of the total vitamin E intake; vegetables, 15.1%; meat, poultry, and fish, 12.6%; desserts, 9.9%; breakfast cereals, 9.3%; fruit, 5.3%; bread and grain products, 5.3%; dairy products, 4.5%; and mixed main dishes, 4.0%. The North/South Ireland Food Consumption Survey, published in 2001, reported that the median daily intake of vitamin E from all sources was 6.3 mg in men and 6.0 mg in women aged 18–64 years. The largest contributors of vitamin E to the diet were vegetables and vegetable dishes (18.9%) and potatoes and potato products (12.4%), most likely as a result of the oils used in composite dishes. Nutritional supplements contributed 5.5% of the vitamin E intake in men and 11.9% in women overall. In the subgroup that regularly consumed nutritional supplements (23% of the total), vitamin E was the nutrient most frequently obtained in supplemental form in men (78%) and women (73%). In these people, supplements made a larger contribution to total vitamin E intakes than did food.
Absorption, Metabolism, and Excretion Because of its hydrophobicity, vitamin E requires special transport mechanisms in the aqueous environment
of plasma, body fluids, and cells. In humans, vitamin E is taken up in the proximal part of the intestine depending on the amount of food lipids, bile, and pancreatic esterases that are present. It is emulsified together with the fat-soluble components of food. Lipolysis and emulsification of the formed lipid droplets then lead to the spontaneous formation of mixed micelles, which are absorbed at the brush border membrane of the mucosa by passive diffusion. Both α- and γ-tocopherol and dietary fat are taken up without preference by the intestine and secreted in chylomicron particles together with triacylglycerol and cholesterol (Figure 2). The nearly identical incorporation of α- and γ-tocopherol in chylomicrons after supplementation with equal amounts of the two tocopherols indicates that their absorption is not selective (Figure 2). The chylomicrons are stored as secretory granules and eventually excreted by exocytosis to the lymphatic compartment, from which they reach the bloodstream via the ductus thoracicus. The exchange between the apolipoproteins of the chylomicrons (types AI, AII, and B48) and high-density lipoprotein (HDL) (types C and E) triggers the intravascular degradation of the chylomicrons to remnants by the endothelial lipoprotein lipase (LPL) and is a prerequisite for the hepatic uptake of tocopherols (Figure 2). During LPL-mediated catabolism of chylomicron particles, some of the chylomicron-bound vitamin E appears to be transported and transferred to peripheral tissues, such as muscle, adipose tissue, and brain (Figure 2). The formation of remnants favors the rapid uptake of the tocopherols via the hepatic receptors for apo-E and apo-B. The chylomicron remnants are subsequently taken up by the liver, where α-tocopherol is preferentially
Figure 2 Absorption, transport, and metabolism of α-tocopherol (α-T) and γ-tocopherol (γ-T) in peripheral tissues. 1: Both α-T and γ-T are absorbed without preference by the intestine along with lipid and reassembled into chylomicrons. 2: Exchange between apolipoproteins of the chylomicrons (types AI, AII, and B48) and high-density lipoprotein (HDL) (types C and E) occurs. 3: Chylomicrons are degraded to remnants by lipoprotein lipase (LPL) and some α-T and γ-T are transported to peripheral tissues. 4: The resulting chylomicron remnants are then taken up by the liver. 5: In the liver, most of the remaining α-T, but only a small fraction of γ-T, is reincorporated in nascent very low-density lipoproteins (VLDLs) by α-tocopherol transfer protein (α-TTP). 6: Plasma phospholipid transfer protein (PLTP) facilitates the exchange of tocopherol between HDL and LDL for delivery to tissues. 7: Plasma tocopherols are delivered to tissues by LDL and HDL. 8: Tocopherol-associated proteins (TAPs) probably facilitate intracellular tocopherol transfer between membrane compartments. 9: Substantial amounts of γ-T are degraded by a cytochrome P450-mediated reaction to 2,7,8-trimethyl-2-(β-carboxyethyl)-6-hydroxychroman (γ-CEHC). 10: γ-CEHC is excreted into urine. Adapted from Azzi A and Stocker A (2000) Vitamin E: Non-antioxidant roles. Progress in Lipid Research 39: 231–255; and from Jiang Q, Christen S, Shigenaga MK, and Ames BN (2001) γ-Tocopherol, the major form of vitamin E in the US diet, deserves more attention. American Journal of Clinical Nutrition 74: 714–722.
incorporated into nascent very low-density lipoprotein (VLDL) by a specific 32-kDa α-tocopherol transfer protein (α-TTP), which enables further distribution of α-tocopherol to peripheral cells (Figure 2). α-TTP is mainly expressed in the liver, in some parts of the brain, in the retina, in low amounts in fibroblasts, and in the placenta. α-TTP possesses stereospecificity as well as regiospecificity toward the most abundant isomer of vitamin E, (RRR)-α-tocopherol. The sorting process does not tolerate alteration at C2. As a consequence of the selective transfer mechanism, major parts of the
natural homologs and nonnatural isomers of α-tocopherol are excluded from the plasma and secreted with the bile. Relative affinities of tocopherols for α-TTP are as follows: α-tocopherol, 100; β-tocopherol, 38; γ-tocopherol, 9; and δ-tocopherol, 2. A 75-kDa plasma phospholipid transfer protein (PLTP), which is known to catalyze the exchange of phospholipids and other amphipathic compounds between lipid structures, has been shown to facilitate the exchange of α-tocopherol from VLDL to HDL and LDL for further delivery to tissues (Figure 2).
A family of cellular tocopherol-associated proteins (TAPs) with the ability to bind and redistribute α-tocopherol has been identified. TAPs bind to α-tocopherol but not to other isomers of tocopherol. Present in all cells, TAPs may be specifically involved in intracellular α-tocopherol movement, for example, between membrane compartments and plasma membranes, or in optimizing the α-tocopherol content of membranes. γ-Tocopherol appears to be mainly degraded to its hydrophilic 3′-carboxychromanol metabolite, 2,7,8-trimethyl-2-(β-carboxyethyl)-6-hydroxychroman (γ-CEHC) (Figure 3), and excreted in the urine. The mechanism of γ-tocopherol metabolism involves terminal cytochrome P450 (CYP)-mediated ω-hydroxylation of the tocopherol phytyl side chain, oxidation to the corresponding terminal carboxylic acid, and sequential removal of two- or three-carbon moieties by β-oxidation, ultimately yielding the hydrophilic 3′-carboxychromanol metabolite of the parent tocopherol, which is excreted in the urine. Functional analysis of several recombinant human liver P450 enzymes revealed that tocopherol ω-hydroxylase activity was associated only with the cytochrome P450 isoform 4F2 (CYP4F2). Kinetic analysis of the tocopherol ω-hydroxylase activity in recombinant human CYP4F2 microsomal systems revealed similar Km values (37 and 21 µM) but notably different Vmax values (1.99 vs 0.16 nmol/nmol of P450/min) for γ- and α-tocopherol, respectively. The data suggest a role for the CYP-mediated ω-hydroxylase pathway in the preferential physiological retention of α-tocopherol and elimination of γ-tocopherol.
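Those kinetic constants can be combined into a rough comparison of catalytic efficiency (Vmax/Km, a crude stand-in for kcat/Km); the values re-used below are the ones quoted above, and the calculation is a back-of-envelope illustration rather than an additional experimental result:

    # Back-of-envelope comparison of CYP4F2 omega-hydroxylase efficiency for
    # gamma- versus alpha-tocopherol, using the Km and Vmax values quoted in
    # the text. Efficiency is taken here as Vmax/Km.
    kinetics = {
        # substrate: (Km in micromolar, Vmax in nmol/nmol P450/min)
        "gamma-tocopherol": (37.0, 1.99),
        "alpha-tocopherol": (21.0, 0.16),
    }

    efficiency = {s: vmax / km for s, (km, vmax) in kinetics.items()}
    fold = efficiency["gamma-tocopherol"] / efficiency["alpha-tocopherol"]

    for substrate, eff in efficiency.items():
        print(f"{substrate}: Vmax/Km = {eff:.4f}")
    print(f"gamma-tocopherol is handled ~{fold:.0f}-fold more efficiently")
    # ~7-fold preference, consistent with preferential elimination of
    # gamma-tocopherol and retention of alpha-tocopherol.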
Figure 3 Chemical structures of 2,5,7,8-tetramethyl-2-(β-carboxyethyl)-6-hydroxychroman (α-CEHC) and 2,7,8-trimethyl-2-(β-carboxyethyl)-6-hydroxychroman (γ-CEHC).
In nonsupplemented individuals, a substantial proportion of the estimated daily intake of γ-tocopherol is excreted in human urine as its γ-CEHC metabolite, but a much smaller proportion of α-tocopherol is excreted as 2,5,7,8-tetramethyl-2-(β-carboxyethyl)-6-hydroxychroman (α-CEHC) (Figure 3). α-CEHC is excreted in large amounts only when the daily intake of α-tocopherol exceeds 150 mg or plasma concentrations of α-tocopherol are above a threshold of 30–40 µmol l⁻¹. Even then, urinary excretion of α-CEHC is lower than that of γ-CEHC. It is likely that it is the capacity of α-TTP rather than the plasma α-tocopherol concentration that determines α-tocopherol degradation. Overall, hepatic catabolism of γ-tocopherol appears to be responsible for the relatively low preservation of γ-tocopherol in plasma and tissues, whereas α-TTP-mediated α-tocopherol transfer plays a key role in the preferential enrichment of α-tocopherol in most tissues. Supplementation with α-tocopherol depletes plasma and tissue γ-tocopherol levels. This is likely due to the preferential affinity of α-TTP for α-tocopherol. However, the depletion of γ-tocopherol may also occur because an increase in α-tocopherol may further reduce the incorporation of γ-tocopherol into VLDL, which leaves more γ-tocopherol to be degraded by CYP. On the other hand, γ-tocopherol supplementation may spare α-tocopherol from being degraded. Plasma (RRR)-α-tocopherol incorporation is a saturable process. Plasma concentrations of α-tocopherol reach a threshold of 30–40 µmol l⁻¹ despite supplementation with high levels (400 mg or greater) of (RRR)-α-tocopherol. Dose–response studies showed that the limitation in plasma α-tocopherol concentration appears to be a result of rapid replacement of circulating with newly absorbed α-tocopherol. Kinetic analysis has shown that the entire plasma pool of α-tocopherol is replaced daily. The highest concentrations of α-tocopherol in the body are in adipose tissues and adrenal glands. Adipose tissues are also a major store of the vitamin, followed by liver and skeletal muscle. The rate of uptake and turnover of α-tocopherol by different tissues varies greatly. Uptake is most rapid into lungs, liver, spleen, kidney, and red cells (in rats, t1/2 < 15 days) and slowest in brain, adipose tissues, and spinal cord (t1/2 > 30 days). Likewise, depletion of α-tocopherol from plasma and liver during times of dietary deficiency is rapid, whereas adipose tissue, brain, spinal cord, and neural tissues are much more difficult to deplete. The major route for the elimination of tocopherol from the body is via the feces. Fecal tocopherol arises from incomplete absorption, secretion from mucosal cells, and biliary excretion. Excess
α-tocopherol, as well as forms of vitamin E that are not preferentially used, such as synthetic racemic isomer mixtures or γ-tocopherol, is eliminated during the process of nascent VLDL secretion in the liver and is probably excreted into bile. In addition to the urinary excretion of γ-tocopherol as γ-CEHC, biliary excretion is an alternative route for elimination of excess γ-tocopherol. This is confirmed by the fact that the ratio of γ- to α-tocopherol in bile is sevenfold higher than in plasma.
Tocopherols as Antioxidants Under normal physiological conditions, cellular systems are incessantly challenged by stressors arising from both internal and external sources. The most important potential stressors are reduced derivatives of oxygen, which are classified as reactive oxygen species (ROS), and include the superoxide anion · · (O 2 ), hydroxyl radical ( OH), and oxygen-centered radicals of organic compounds (peroxyl (ROO· ) and alkoxyl (RO· )) together with other nonradical reactive compounds, such as hydrogen peroxide (H2O2). In addition, reactive nitrogen species such as nitric oxide (NO· ), nitrogen dioxide (NO·2), peroxynitrite (ONOO), and hypochlorous acid are involved. Cellular systems have evolved a powerful and complex antioxidant defence system to limit inappropriate exposure to these stressors. -Tocopherol is quantitatively the most important chain-breaking antioxidant in plasma and biological membranes. The antioxidant activities of chain-breaking antioxidants are determined primarily by how rapidly they scavenge peroxyl radicals, thereby preventing the propagation of free radical reactions. When the chromanol phenolic group of -tocopherol (TOH) encounters a ROO· it forms hydroperoxide (ROOH), and in the process a tocopheroxyl radical (TO· ) is formed: TOH þ ROO· ! ROOH þ TO· k1
The rate constant (k1) for hydrogen abstraction from α-tocopherol is 2.35 × 10⁶ M⁻¹ s⁻¹, which is higher than that for the other tocopherols and related phenols. Because the rate constant (k2) for the chain propagation reaction between ROO· and an unsaturated fatty acid (RH) (ROO· + RH → ROOH + R·) is much lower than k1, at approximately 10² M⁻¹ s⁻¹, α-tocopherol outcompetes the propagation reaction and scavenges ROO· some 10⁴ times faster than RH reacts with ROO·. Thus, the kinetic properties of antioxidants, in particular α-tocopherol, mean that only relatively small concentrations are required
for them to be effective. The concentration of α-tocopherol in biological membranes is approximately 1 mol per 1000–2000 mol phospholipids (i.e., about 1:10³). Ascorbic acid can reduce the tocopheroxyl radical (TO·) to its native state, and it has been concluded that part of the reason why low concentrations of α-tocopherol are such efficient antioxidants in biological systems is this capacity to be regenerated by intracellular reductants such as ascorbic acid. The heterocyclic chromanol ring of α-tocopherol has an optimised structure for resonance stabilization of the unpaired electron of the α-tocopheroxyl radical, and the electron-donating substituents (e.g., the three methyl groups) increase this effect. Because γ-tocopherol lacks one of the electron-donating methyl groups on the chromanol ring, it is somewhat less potent in donating electrons than α-tocopherol and is thus a slightly less powerful antioxidant. However, the unsubstituted C5 position of γ-tocopherol allows it to trap lipophilic electrophiles such as peroxynitrite, thereby protecting macromolecules from oxidation.
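The competition described above can be made concrete with a small calculation: multiplying each rate constant by the relevant concentration shows that scavenging still outpaces propagation at a 1:1000 tocopherol-to-lipid ratio. The rate constants and the ratio come from the text; the absolute concentrations in this sketch are arbitrary:

    # Illustration of why a small amount of alpha-tocopherol can interrupt lipid
    # peroxidation. Rate constants are those quoted in the text; the absolute
    # concentrations are arbitrary and only their 1:1000 ratio matters.
    k1 = 2.35e6   # M^-1 s^-1, TOH + ROO. -> ROOH + TO. (scavenging)
    k2 = 1.0e2    # M^-1 s^-1, RH  + ROO. -> ROOH + R.  (chain propagation)

    RH = 1.0            # oxidizable lipid, arbitrary concentration units
    TOH = RH / 1000.0   # ~1 tocopherol per 1000 lipid molecules

    scavenging = k1 * TOH   # relative rate of radical scavenging
    propagation = k2 * RH   # relative rate of chain propagation
    print(f"scavenging/propagation ~ {scavenging / propagation:.0f}")
    # Roughly a 20-fold excess of scavenging over propagation, despite the
    # 1000-fold lower tocopherol concentration, because k1/k2 is ~2 x 10^4.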
Vitamin E Deficiency Vitamin E deficiency is seen rarely in humans. However, there may be a risk of vitamin E deficiency in premature infants because the placenta does not transfer α-tocopherol to the fetus in adequate amounts. When it occurs in older children and adults, it is usually a result of lipoprotein deficiencies or a lipid malabsorption syndrome. These include patients with abetalipoproteinemia or homozygous hypobetalipoproteinemia, those with cholestatic disease, and patients receiving total parenteral nutrition. There is also an extremely rare disorder in which primary vitamin E deficiency occurs in the absence of lipid malabsorption: a rare autosomal recessive neurodegenerative disease caused by mutations in the gene for α-TTP, known as ataxia with vitamin E deficiency (AVED). Patients with AVED have extraordinarily low plasma vitamin E concentrations (5.2.
It has been estimated that an average daily dietary intake of 15–30 mg α-tocopherol would be required to maintain this plasma level, an amount that could be obtained from dietary sources if a concerted effort were made to eat foods rich in vitamin E. The US Institute of Medicine Food and Nutrition Board set an estimated average requirement (EAR) of 12 mg α-tocopherol for adults aged 19 years and older on the criterion of vitamin E intakes that were sufficient to prevent hydrogen peroxide-induced hemolysis in men. The same value was set for men and women on the basis that although body weight is smaller on average in women than in men, fat mass as a percentage of body weight is higher on average in women. Because information is not available on the standard deviation of the requirement for vitamin E, the recommended dietary allowance (RDA) was established for men and women as the EAR (12 mg) plus twice the coefficient of variation (assumed to be 10%), rounded up, giving a value of 15 mg day⁻¹. In Europe, the Scientific Committee for Food did not set a population reference intake (PRI) for vitamin E on the basis that there is no evidence for deficiency from low intakes, and the frequency distribution of intakes is skewed to the right, making it difficult to set a PRI that is not inappropriately high, especially for those with a low consumption of polyunsaturated fatty acids (PUFAs), whose requirements are lower than those with a high consumption of PUFAs. It has been suggested that the optimum concentration of α-tocopherol in plasma for protection against cardiovascular disease and cancer is >30 µmol l⁻¹, given normal plasma lipid levels and in conjunction with a plasma vitamin C concentration >50 µmol l⁻¹ and a β-carotene level >0.4 µmol l⁻¹. This has not been proven in large-scale human intervention trials, but even in the absence of conclusive evidence for a prophylactic effect of vitamin E on chronic disease prevention, some experts believe that a recommendation of a daily intake of 87–100 mg α-tocopherol is justifiable based on current evidence. Realistically, these levels can be achieved only by using nutritional supplements. The tolerable upper intake level for vitamin E is 1000 mg day⁻¹, based on studies showing hemorrhagic toxicity in rats, in the absence of human dose–response data.
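The RDA arithmetic described above is simple enough to reproduce directly; the sketch below uses only the figures quoted in the text (EAR of 12 mg, an assumed 10% coefficient of variation, rounding up, and the 1000 mg UL):

    # Reproduces the DRI arithmetic described in the text:
    # RDA = EAR + 2 * CV, with the coefficient of variation assumed to be 10%
    # of the EAR, then rounded up.
    import math

    EAR_MG = 12.0   # estimated average requirement, mg alpha-tocopherol/day
    CV = 0.10       # assumed coefficient of variation (10%)

    rda_unrounded = EAR_MG * (1 + 2 * CV)   # 14.4 mg/day
    rda = math.ceil(rda_unrounded)          # rounded up -> 15 mg/day
    print(f"RDA = {rda} mg alpha-tocopherol/day (unrounded {rda_unrounded:.1f})")

    UL_MG = 1000.0  # tolerable upper intake level, mg/day
    print(f"the UL is about {UL_MG / rda:.0f} times the RDA")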
See also: Antioxidants: Diet and Antioxidant Defense. Fats and Oils. Fatty Acids: Omega-3 Polyunsaturated; Omega-6 Polyunsaturated. Lipoproteins. Nuts and Seeds.
Further Reading Azzi A and Stocker A (2000) Vitamin E: Non-antioxidant roles. Progress in Lipid Research 39: 231–255. Brigelius-Flohé R, Kelly FJ, Salonen JT et al. (2002) The European perspective on vitamin E: Current knowledge and future research. American Journal of Clinical Nutrition 76: 703–716. Esposito E, Rotilio D, Di Matteo V et al. (2002) A review of specific dietary antioxidants and the effects on biochemical mechanisms related to neurodegenerative processes. Neurobiology of Aging 23: 719–735. Frei B (ed.) (1994) Natural Antioxidants in Human Health and Disease. London: Academic Press. Halliwell B (1996) Antioxidants in human health and disease. Annual Review of Nutrition 16: 33–50. Institute of Medicine (2000) Dietary Reference Intakes for Vitamin C, Vitamin E, Selenium and Carotenoids. Washington, DC: National Academy Press. Jiang Q, Christen S, Shigenaga MK, and Ames BN (2001) γ-Tocopherol, the major form of vitamin E in the US diet, deserves more attention. American Journal of Clinical Nutrition 74: 714–722.
Machlin LJ (1984) Vitamin E. In: Machlin LJ (ed.) Handbook of Vitamins: Nutritional Biochemical and Clinical Aspects, pp. 99–145. New York: Marcel Dekker. Morrissey PA and Kiely M (2002) Vitamin E, nutritional significance. In: Roginski H, Fuguay JW, and Fox PF (eds.) Encyclopedia of Dairy Science, pp. 2670–2677. London: Elsevier. Neuzil J, Weber C, and Kontush A (2001) The role of vitamin E in atherogenesis: Linking the chemical, biological and clinical aspects to the disease. Atherosclerosis 157: 257–283. Packer L and Fuchs J (eds.) (1993) Vitamin E in Health and Disease. New York: Marcel Dekker. Pryor WA (2000) Vitamin E and heart disease: Basic science to clinical intervention trials. Free Radical Biology and Medicine 28: 141–164. Rimbach G, Minihane AM, Majewicz J et al. (2002) Regulation of cell signalling by vitamin E. Proceedings of the Nutrition Society 61: 415–425. Thomas SR and Stocker R (2000) Molecular action of vitamin E in lipoprotein oxidation: Implications for atherosclerosis. Free Radical Biology and Medicine 28: 1795–1805. Traber MG and Sies H (1996) Vitamin E in humans: Demand and delivery. Annual Review of Nutrition 16: 321–347.
VITAMIN K C J Bates, MRC Human Nutrition Research, Cambridge, UK © 2005 Elsevier Ltd. All rights reserved.
The discovery of vitamin K as an essential nutrient arose in the late 1920s from Henrik Dam’s studies of sterol metabolism. He observed that chicks fed a fat-free diet developed subcutaneous hemorrhages and anemia. A lipid extract of liver or of certain plant tissues was curative, and by 1935 he claimed discovery of a new vitamin in these extracts that he named ‘vitamin K’ from the German Koagulation. By the late 1930s, two chemically similar forms of the vitamin from different sources were recognized, namely phylloquinone or K1 and menaquinone or K2, which had been isolated from alfalfa and from putrefied fish meal, respectively (Figure 1). Phylloquinone, with its saturated phytyl side chain, is now understood to be the sole representative of vitamin K that occurs in plant tissues, especially in green leafy ones, where it acts as a component of the electron transport chain. The menaquinones, or MK-n, by contrast, comprise a broad family of representatives that have a variable length, unsaturated side chain, and are composed of one or more (sequential) isoprene units in place of the saturated phytyl side chain. These menaquinones can be produced by certain types of bacteria, both in the large
bowel of animals and at other locations where they may contribute to human food sources of menaquinones. Germ-free rats become vitamin K deficient more readily than their conventional counterparts, and they can develop very low hepatic MK-4 levels. The specific menaquinone with the same side chain length as phylloquinone is called menatetrenone, or MK-4, and this is produced commercially for human medication, especially in Japan. There is evidence that phylloquinone can be converted to MK-4 in animals and humans. Most bacterially synthesized menaquinones have longer side chains, typically 7–9 isoprene units and up to 13, which are indicated by 'n' in the MK-n shorthand notation. A synthetic homolog of phylloquinone, K1(25), is not found in nature and can therefore be used as an internal standard in the chromatographic separation and quantitation of vitamin K. Menadione, a water-soluble form of the vitamin that lacks the isoprenoid side chain and retains only the ring methyl group, has vitamin K activity (it can be converted to menatetrenone in vivo) and is used in animal feeds, but it is not used in humans because of its toxicity at high doses.
Food Sources, Absorption, Distribution, and Turnover Food sources of phylloquinone for man (Table 1) include green leafy vegetables as the major
Table 1 Mean estimated food contents of phylloquinone and selected menaquinones (MK-4, MK-7, MK-8, and MK-9), expressed as µg/100 g wet weight or µg/100 ml. Foods covered: kale, spinach, broccoli, peas, apples, chicken, pork, luncheon meat, mackerel, plaice, milk, hard cheese, soft cheese, natto (a Japanese food made from fermented soya bean curd), olive oil, margarine, butter, corn oil, and bread. A dashed entry means not detectable; values obtained for MK-5 and MK-6 are omitted from this summary. The data demonstrate clearly (i) the huge difference in vitamin K contents between different foods and (ii) the preponderance of phylloquinone in some foods and of menaquinones (of several different chain lengths) in others. Data from Schurgers LJ and Vermeer C (2000) Determination of phylloquinone and menaquinones in food. Haemostasis 30: 298–307, Table 2.

Figure 1 Chemical structures of phylloquinone, menaquinones, menadione, warfarin, and dicumarol.
quantitative source; however, its availability for absorption from these foods is thought to be relatively poor. Certain plant-derived oils, notably soya and canola oils, are also rich in the vitamin, which is probably much more readily available from such sources than it is from leaves. Menaquinones are typically obtained from foods, such as cheeses or Japanese ‘natto’ (fermented bean curd), in which bacterial fermentation has occurred. Smaller amounts of both phylloquinones and menaquinones are obtained from liver and other animal-derived foods. Phylloquinone is highly lipophilic; however, at low concentrations it is transported by a saturable,
energy-dependent transport system across the gut wall, mainly in the upper small intestine. Phylloquinone in foods consisting of plant tissues is much less readily bioavailable for absorption than the pure vitamin since it is tightly bound to the thylakoid membranes of the chloroplasts, and the absorption of vitamin K from plant foods is considerably improved by including additional fat in the meal. Its absorption also depends on the stimulation of bile salt and pancreatic lipase secretions. The long-chain menaquinones, which are even more lipophilic, are only passively absorbed and are much less bioavailable for absorption than phylloquinone. However, if given by injection (e.g., intracardially), they can be even more functionally active than phylloquinone. The relative bioavailability and bioactivity of the different forms and food sources of vitamin K need more research. Preliminary studies with
deuterium-labeled broccoli suggest that the bioavailability of endogenous vitamin K can be studied in humans by intrinsic stable isotope-labeling procedures. Once absorbed, vitamin K is transported to the liver in the chylomicrons, where it becomes distributed among the triglyceride-rich chylomicron remnants (ca. 50%) and the low-density lipoprotein and high-density lipoprotein fractions of plasma (ca. 25% each). Plasma vitamin K concentrations, which are typically in the low nanomolar range in humans, are much lower than for the other fat-soluble vitamins (A, D, and E), and they are strongly correlated with the triglyceride content of the plasma. Indeed, some authorities prefer to express plasma vitamin K as a ratio to triglycerides instead of as a simple concentration. Differences between the apoE lipoprotein genetic variants affect plasma vitamin K, according to their different triglyceride clearance profiles. There is evidence for a major diurnal cycle of plasma vitamin K, with peak concentrations of both vitamin K and its associated triglycerides occurring in late evening and with lowest values in the morning. A kinetic study using radioactive vitamin K indicated that the turnover time of the exchangeable pool of the vitamin is quite short, approximately 1.5 days, and the first and second exponential decay curves had half-lives of 0.5–1 and 25–78 h, respectively. The exchangeable body pool size was only approximately 1 µg/kg body weight. The liver is an important repository of the vitamin for both plant-derived phylloquinone and the bacterially derived menaquinones. Depletion studies have indicated that the hepatic phylloquinone stores seem much more labile than the menaquinone stores, and that a functional deficiency accompanies the loss of the phylloquinone, which the remaining nondepleted menaquinones cannot prevent. Despite this, if menaquinones are given exogenously, they can be curative. Different tissues have different relative avidities for phylloquinone and menaquinones, and it has been suggested that they may have a different spectrum of functions from each other. Thus, in humans, phylloquinone is concentrated in liver, heart, and pancreas. The longer-chain menaquinones, MK-6 to -11, are found mainly in liver with traces in heart and pancreas, but MK-4 is found especially in brain and kidney, where it exceeds phylloquinone concentrations. The tissue distribution in humans is similar to that in the rat. The turnover of phylloquinone results in ca. 40–50% of the exchangeable body pool being transferred via the bile into the feces and 20% being excreted into the urine, the latter including the excretion of oxidized products that become conjugated as glucuronides.
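The kinetic figures quoted above can be combined into a rough biexponential model of the exchangeable pool. In the sketch below, the half-life ranges and the ~1 µg/kg pool come from the text, while the 70-kg body weight, the mid-range half-lives, and the 30/70 split between the fast and slow phases are assumptions made purely for illustration:

    # Hypothetical biexponential model of the exchangeable vitamin K pool, using
    # the half-life ranges quoted in the text. The 70-kg body weight, mid-range
    # half-lives, and 30/70 split between phases are assumptions.
    import math

    pool_ug = 1.0 * 70          # ~1 ug/kg exchangeable pool in a 70-kg adult
    fast_fraction, slow_fraction = 0.3, 0.7     # assumed split between phases
    t_half_fast_h, t_half_slow_h = 0.75, 50.0   # mid-range of 0.5-1 h and 25-78 h

    def remaining(t_hours: float) -> float:
        """Vitamin K remaining in the exchangeable pool after t hours (ug)."""
        fast = fast_fraction * math.exp(-math.log(2) * t_hours / t_half_fast_h)
        slow = slow_fraction * math.exp(-math.log(2) * t_hours / t_half_slow_h)
        return pool_ug * (fast + slow)

    for t in (0, 6, 24, 72):
        print(f"{t:>3} h: {remaining(t):5.1f} ug")
    # The pool turns over within a few days, consistent with the ~1.5-day
    # turnover time quoted in the text.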
Physiological Functions of Vitamin K: Interaction with Antagonists Blood Coagulation Proteins
The principal physiological function that led to the discovery of vitamin K, and its confirmation as an essential vitamin for higher vertebrates, was its unique role in the blood clotting cascade. This cascade comprises a complex series of linked proenzyme-to-enzyme conversions, which leads eventually to a fibrin clot (Figure 2). Central to this process is the activation by calcium of gamma-carboxylated glutamyl (Gla) residues in some of the members of the cascade series: factors VII, IX, and X and factor II (prothrombin). In addition, there is an inhibitory level of control by proteins C, S, and possibly Z. All seven of these Gla proteins have Gla clusters that interact specifically with calcium so as to alter their polypeptide conformations and to permit their interaction with other members of the coagulation cascade (by exposing a phospholipid-binding domain) and hence leading either to activation or to inhibition of individual components. The Gla moieties of these and indeed all the vitamin K-dependent Gla proteins are formed by a post-translational carboxylation reaction catalyzed by the single enzyme, ‘carboxylase,’ at the
Figure 2 Vitamin K-dependent clotting factors. Factors II (prothrombin), VII, IX, and X and proteins C and S are all Gla proteins. The functions of proteins C and S are inhibitory to the clotting cascade, whereas the other factors all form part of the cascade mechanism. The steps summarized in the figure are as follows.
Intrinsic pathway: contact activation of factors XI and XII → factor XI-act; factor XI-act + Ca2+ activates factor IX; factors IX-act + VIII-act + Ca2+ activate factor X.
Extrinsic pathway: factors II-act and X-act activate factor VII; factor VII-act + tissue factor + Ca2+ activate factor X.
Common pathway: factor X-act (together with V-act and Ca2+) activates prothrombin (factor II) to give thrombin; factor II-act (thrombin) activates factors VIII and V and converts fibrinogen to fibrin (clot).
Inhibitory control: protein C → protein C-act (+ thrombomodulin); protein C-act + protein S inactivates factor V-act and factor VIII-act.
endoplasmic reticulum sites of Gla protein synthesis. In the case of the blood coagulation proteins, the sole site of synthesis is the liver. Each carboxylated protein has an amino-terminal 'propeptide' sequence that binds the carboxylase enzyme and directs a coordinated series of carboxylations of the recipient glutamyl residues, before the propeptide is removed and the fully carboxylated protein is secreted into the extracellular space for transport into the plasma. Vitamin K acts as the essential recycling cofactor (or cosubstrate) for all of these Gla-forming protein carboxylation reactions (Figure 3). In its dihydro or quinol form, the vitamin reacts with molecular oxygen, thereby creating a highly reactive, high-energy carbanion at the Glu site for insertion of carbon dioxide, creating a new Gla residue. This vitamin K quinol oxidation step provides the essential energy for the endothermic carboxylation step. The other product of the reaction is the epoxide of vitamin K, comprising a three-membered carbon–oxygen ring. Since the oxidized vitamin needs to be recycled back to the quinol form before the next protein carboxylation cycle, a two-stage reduction process ensues, forming first vitamin K quinone and then the original quinol (Figure 3). Both of these reduction steps can be catalyzed by the enzyme vitamin K epoxide reductase, which is linked to a dithiol–disulfide reducing couple and which is highly sensitive to inhibition by the coumarin class of drugs, of which warfarin (Figure 1) is the best known and most commonly used member. The reduction of the intermediate vitamin K quinone
to its quinol form can also be catalyzed by another, NAD(P)H-dependent, quinone reductase that is warfarin resistant, and for this reason the inhibition of carboxylation by warfarin can be reversed or antagonized by large doses of vitamin K provided exogenously in its normal quinone form. A severe deficiency of vitamin K, or treatment with coumarin drugs (for the control of excessive blood clotting tendency in humans), results in prolonged clotting times that can be detected by the standardized ‘one stage prothrombin time’ test, in which citrated or oxalated (i.e., calcium-complexed) blood is treated with tissue factor plus additional calcium so as to initiate the clotting process. However, a much more sensitive test for mild vitamin K deficiency is the PIVKA test (Proteins Induced by Vitamin K Absence or Antagonism), which is an immunological enzyme-linked immunosorbent assay (ELISA) test that specifically recognizes undercarboxylated blood clotting proteins and particularly des-gamma-carboxy prothrombin. Proteins C and S, and possibly also Z, function differently from the other Gla-containing blood clotting factors that are an integral part of the fibrinforming cascade. Protein C has a regulatory role, inactivating factors V and VIII, and in conjunction with protein S it also acts as a cofactor to enhance the rate of fibrinolysis of blood clots in locations where they are unwanted and potentially harmful. The exact function of protein Z remains unresolved, although interactions with thrombin and factor X have been reported. Clearly, there is a delicate balance of
pro- and anti-clot formation and removal activities among the vitamin K-dependent Gla proteins of the cascade, although the net effect of a deficiency of the vitamin or of its antagonism by drugs appears to be a reduction of the clotting tendency.

Figure 3 Vitamin K oxidation–reduction cycle during Gla formation. Oxidation of vitamin K hydroquinone (reduced vitamin) to vitamin K epoxide by molecular oxygen provides the energy needed to drive the carboxylation of peptidyl-Glu to peptidyl-Gla (i.e., gamma-carboxyglutamate). The vitamin K epoxide is then recycled by reduction with dithiols in two stages, first to vitamin K quinone (the oxidized form found in most foods) and then to the hydroquinone. The first stage requires a reductase enzyme that is coumarin drug (e.g., warfarin) inhibitable. The second stage can be catalyzed by either of two reductases, one of which is NAD(P)H dependent and is not warfarin inhibited.

Bone Gla proteins
Protein S, together with two other Gla proteins, osteocalcin (OC; or bone Gla protein) and matrix Gla protein (MGP), plays a variety of only partly understood roles in bone and other mineralized tissues. Of these proteins, only OC is produced solely and specifically by mineralized tissue, whereas the other two (or at least their mRNA templates) are more widespread and occur also in soft tissues. OC is synthesized specifically by osteoblasts and odontoblasts, and it accounts for ca. 15–20% of the noncollagen protein of the bone matrix. Approximately 20% is secreted into blood plasma, where it has no obvious function, but it has frequently been measured as an index of bone-forming (osteoblastic) activity, and is present in increased amounts in plasma of people with certain bone diseases and of young infants. It is a small protein, MW 5700, with just three Gla residues. Unlike the blood coagulation Gla proteins, which are almost completely carboxylated in most people who are not severely vitamin K deficient and not treated with vitamin K antagonists, circulating OC is at least 5–10% undercarboxylated in many population groups, as measured by assays that depend on the affinity of the undercarboxylated form for hydroxyapatite or by a specific ELISA assay for the undercarboxylated form. Since vitamin K supplements can reduce its degree of undercarboxylation in many people, it has been proposed as a new and highly sensitive functional test of vitamin K status in man. Despite the growing level of interest in its practical use as a status index, our understanding of the essential function of OC remains incomplete. Its affinity for calcium is less strong than that of the larger Gla proteins, but it binds avidly to hydroxyapatite and is chemotactic for osteoclasts and their progenitors. Moreover, it can enhance the differentiation of osteoclast progenitor cells in culture, which has been interpreted as implying a possible role in bone resorption. Transgenic mice that lack the gene for OC have increased bone mass, despite an increased number of osteoclasts. In humans, however, undercarboxylation of OC, especially in postmenopausal women, has been linked to low vitamin K intakes, reduced bone mineral density, and increased risk of fracture. Intervention with high-dose MK-4, mainly in Japan, has been reported to improve bone mineral density and decrease fracture risk. Although a single study in the United Kingdom suggested that a combination of
vitamin K1 and vitamin D supplements may benefit bone mineral density in postmenopausal women, considerably more research is needed in this area. The separate roles of OC and other vitamin K-dependent proteins also need to be clarified. The second vitamin K-dependent Gla protein in bone, MGP, has a MW of 9600 and five Gla residues and is highly insoluble. Unlike OC, it is also found in cartilage, and, significantly, its mRNA occurs in several soft tissues including artery walls. Its synthesis is modulated by 1,25-dihydroxy vitamin D and by retinoic acid. Mice lacking the gene for MGP quickly developed calcified arteries and died of aortic rupture before 2 months of age. For this reason, MGP is believed to antagonize the pathological calcification of soft tissues and thus to protect them. The absence of MGP also led to inappropriate calcification of growth plate cartilage, reduced growth, osteopenia, and fracture in the MGP gene knockout mice. In humans, defects in the MGP gene are associated with Keutel's syndrome and chondrodysplasia punctata, in which cartilage calcification is abnormal. Similar abnormalities have been observed in infants whose mothers were treated with warfarin during the first trimester of pregnancy. In one study, low vitamin K intake was associated with atherosclerotic calcification of the aorta in postmenopausal women. Also, circulating MGP levels were found to be raised in severe atherosclerosis and in type 1 diabetes in humans. A specific immunoassay for MGP has been developed that should assist further research on this potentially important regulatory protein. The third bone-associated Gla protein, protein S, is also involved with blood clotting. It is synthesized by osteoblast-like and osteoblastoma cells in culture, and it has been detected in bone matrix. It is also synthesized by hepatocytes, megakaryocytes, and endothelial cells. Children with an inborn deficiency of protein S developed osteopenia and bone lesions; however, its precise functional role in bone is unknown. All three bone Gla proteins (and probably most other Gla proteins) have 'leader' or 'pre'-peptides when first formed on the endoplasmic reticulum (ER) that are required for translocation across the ER and are removed during this process. OC, protein S, and most other Gla proteins also have a pro-peptide sequence that is removed during secretion and that directs the action of the carboxylase enzyme before secretion. MGP differs from the other Gla proteins in that its carboxylase recognition sequence is not removed; instead, only a short (five-residue) carboxy-terminal sequence is removed from it. All known mammalian Gla proteins contain the characteristic amino acid sequence
Gla-X-X-X-Gla-X-Cys, where X represents an undefined amino acid. If vitamin K is in short supply or antagonized, certain glutamyl residues escape gamma-carboxylation more than others. Thus, in a study of OC, the Glu residue at position 17 was typically only 67% carboxylated, that at position 21 was 88% carboxylated, and that at position 24 was 93% carboxylated. Surprisingly, in a meta-analysis of studies on warfarin-treated adult patients, no evidence of any increase in bone disorders was found.

Gas6 and Other Vitamin K-Requiring Gla Proteins
A Gla protein that is associated with the central nervous system, rather than with liver or bone, was discovered in 1993. In tissue culture models it had the properties of a growth arrest-specific (GAS) cell-signalling gene product. It acts as a ligand for a number of receptor protein kinases; it potentiates the growth of vascular smooth muscle cells, Schwann cells, and the neurons that synthesize gonadotropin-releasing hormone; and it can prevent apoptotic cell death. Knockout mice in which three Gas6 receptors are mutated had major neurological and spermatogenic abnormalities. There is interest in potential roles for Gas6 in Alzheimer's disease and Parkinson's disease. Clearly, these properties and emerging roles have helped to confirm the growing suspicion that vitamin K-dependent Gla proteins possess key functions beyond blood clotting and even bone remodelling. Gas6 has a MW of 75,000 with 11 or 12 Gla residues, and its structure is partly homologous with protein S. Even less well characterized are several other Gla proteins from a variety of tissues. Kidney contains 'nephrocalcin,' with just two or three Gla residues, which may be involved in renal calcium transport (another important function that may be impaired by vitamin K deficiency in man). Atherocalcin, or plaque Gla protein, may be related or even identical to MGP. Proline-rich Gla proteins PRGP-1 and PRGP-2 are found predominantly in the spinal cord and thyroid gland, respectively, but their functions are unknown. Gla proteins occur in most vertebrates and also in molluscs, so their evolutionary appearance in the animal kingdom is probably quite ancient.

Other, Probably Non-Gla Functions of Vitamin K
Vitamin K is thought to be involved in sphingolipid metabolism in certain bacteria by modulating serine palmitoyl transferase, and warfarin treatment decreased brain levels of sulfatides and galactocerebroside sulfotransferase activity in animals, which was reversible by vitamin K (either K1 or MK-4). Therefore, it is now thought that vitamin K may be
involved in sphingolipid metabolism, and this in turn has wider implications, because sphingolipids act both as second messengers and as structural membrane components. There are several functions of MK-4 that are shared by the isolated geranyl-geraniol side chain, which involve the induction of apoptosis of osteoclasts and of certain cancer cells in culture. Depriving certain tumours of vitamin K, both in vitro and in vivo, seemed to inhibit their growth and metastasis. Patients receiving warfarin for cardiovascular disease seem to have a reduced incidence of tumors, and warfarin may also suppress delayed-type hypersensitivity reactions. Recent studies have suggested that MK-4, in particular, has a transcriptional regulatory function, for example, in osteosarcoma cell cultures, in which it binds to and activates the SXR steroid and xenobiotic receptor. This in turn increases mRNA levels for osteoblast markers: bone alkaline phosphatase, osteoprotegerin, osteopontin, and MGP. MK-4 and its isolated geranyl-geraniol side chain were also able to suppress the synthesis of prostaglandin E2, which is a potent stimulator of bone resorption. These observations have led to speculation (i) that some of the menaquinones may possess some functions that are not shared by phylloquinone, and (ii) that there may be implications for cell proliferation and for cancer risk from variations in the supply of vitamin K and in its speciation.
Population Groups at Risk of Vitamin K Deficiency

Because of the minimal extent of transfer of vitamin K across the placenta, the fetus and newborn infant have much lower circulating vitamin K than adults (typically 30-fold lower). In addition, human milk has a lower concentration of the vitamin than that of most other mammalian species. Although low vitamin K levels have not been found to affect the developing fetus in a functionally deleterious way, it is clear that the newborn, and especially the solely breast-fed infant, is at higher risk of functional deficiency than older infants and adults. In a minority of cases, this can lead to life-threatening or long-term damage associated with intracranial bleeding. Hemorrhagic disease of the newborn (HDN) is classified as early (first 24 h of life), classic (days 1–7), or late (2–12 weeks). Of these, the third category is most likely to involve dangerous intracranial bleeding. Risk factors for HDN include intestinal fat malabsorption and hepatic disease. In Western countries, since the 1950s, it has been routine practice to give prophylactic phylloquinone in a 1 or 2 mg dose at birth, and this has been found to considerably reduce the risk of HDN. An intramuscular depot dose was found to be
highly effective; however, a study in the United Kingdom in the 1990s suggested a possible link with childhood cancer. Despite little subsequent support for this contraindication, the adverse publicity led to a shift in practice toward oral dosing. An oral micellar preparation containing glycocholate and lecithin has been developed that has improved absorption characteristics. Another approach toward the avoidance of late HDN is vitamin K supplementation of breastfeeding mothers, since breast milk vitamin K levels can be increased substantially by dosage to the mother. Modern commercial formula feeds typically contain 50–125 µg phylloquinone/l. Antibiotic-treated patients may be at increased risk of developing vitamin K deficiency. Some antibiotics may reduce the production of usable menaquinones by gut bacteria; others, such as cephalosporins, may exert vitamin K epoxide reductase inhibitory effects. Vitamins A and E in large doses may increase the risk of vitamin K deficiency and/or its sequelae in susceptible people. Thus, in one study, patients receiving anticoagulant drugs exhibited a further reduction of prothrombin levels if they were given 400 IU α-tocopherol per day for 4 weeks. The microsomal vitamin K-dependent carboxylase enzyme was found to be inhibited by α-tocopheryl quinone and, to a lesser extent, by α-tocopherol. It is also inhibited by other oxygen free radical antagonists. Control of blood clotting with warfarin-type drugs thus requires control of intakes of vitamins A and E as well as vitamin K so as to achieve consistent results. As noted previously, some older people, especially postmenopausal women, seem to be at increased risk of developing marginal vitamin K deficiency, which manifests itself, for instance, by an increased percentage of undercarboxylated osteocalcin (ucOC) in the circulation. The sequelae of such marginal deficiency, and in particular its implications for bone health, are currently the subject of considerable research effort (Table 2). Several epidemiological cross-sectional studies have noted an association between higher vitamin K intakes and higher bone mineral density or lower fracture risk. One study reported that a subgroup of postmenopausal women who were 'fast losers' of calcium responded to vitamin K supplements by reduced calcium and hydroxyproline excretion. Although vitamins D and K have distinct functions in calcium absorption, and its distribution, deposition, and excretion, there is evidence that synergistic interactions can occur between them, and that both can affect the same cell-signalling pathways. Osteocalcin and MGP synthesis is stimulated by 1,25-dihydroxy vitamin D in cell culture. MK-4 in large doses has been used for prophylaxis and treatment of osteoporosis, especially in
Japan. A study in The Netherlands reported reduced bone loss after 2 years of treatment of postmenopausal women with amounts of phylloquinone that are achievable from dietary sources. More long-term intervention trials are needed.

Table 2 Studies (1985–2001) linking vitamin K intake, status, or effects of supplementation with bone health in humans
Nature of evidence | No. of studies
Serum vitamin K positively correlated with BMD | 4
Serum vitamin K lower in people with hip or vertebral fractures | 3
Vitamin K intake directly correlated with BMD | 2
ucOC directly correlated with risk of hip fracture | 5
ucOC inversely correlated with velocity of ultrasound (a measure of bone quality) | 1
ucOC inversely correlated with BMD | 2
Supplementation with phylloquinone increased carboxylation of osteocalcin | 7
Supplementation with phylloquinone or menaquinone reduced calcium loss | 3
Supplementation with phylloquinone increased markers of bone formation and reduced markers of bone resorption | 1
Supplementation with phylloquinone (+ vitamin D) increased BMD | 1
Supplementation with menaquinone (+ vitamin D) increased BMD | 2
Supplementation with menaquinone alone increased BMD and/or decreased bone loss | 6
Supplementation with menaquinone reduced fracture risk | 2
BMD, bone mineral density; ucOC, undercarboxylated osteocalcin. Data from Weber P (2001) Vitamin K and bone health. Nutrition 17: 880–887, and S. Karger AG, Basel.
Status, Requirements, and Recommended Intakes

Vitamin K status can be measured either by its concentration in plasma or by its efficacy in ensuring optimal carboxylase function, as indicated by specific carboxylated plasma proteins. Accurate assay of the very low concentrations of vitamin K that are present in plasma was a considerable analytical challenge, which was eventually solved by high-performance liquid chromatography (HPLC) followed by high-sensitivity coulometric or fluorometric detection. A popular method uses organic solvent extraction, a cartridge cleanup step, an HPLC separation followed by postcolumn reduction of the vitamin K quinone to the reduced quinol form by metallic zinc or other reductant, and finally fluorometric quantitation of the fluorescent quinol. A useful internal standard, not found in nature, is the homolog of phylloquinone, vitamin K1(25). With
modern detectors, analysis is possible with only 0.25 ml plasma. A published 'normal' range in the United States is 0.25–2.7 nmol/l, corresponding to approximate average daily intakes of 100 µg/day in men and 80 µg/day in women. As noted earlier, the phylloquinone content of plasma has a short half-life and is strongly correlated with plasma triglycerides. It is therefore not ideal as a long-term index of status. Alternatives include functional indices such as plasma prothrombin time (increased only by severe vitamin K deficiency), PIVKA (which is more sensitive to marginal deficiency), and ucOC (which is the most sensitive functional indicator). These functional indices are not totally specific for vitamin K deficiency, although ucOC (for which monoclonal antibodies now exist) does appear to possess reasonably good specificity. Unfortunately, the different commercial kit assays measure different epitopes of OC, which makes harmonization difficult. Urinary total Gla is sensitive to vitamin K status, but it varies with age and has not yet proved to be very useful as a status indicator. Functional indices that are based on impaired carboxylase activity affecting other Gla proteins may be developed in the future. Most estimates of the amount of phylloquinone needed to correct clotting changes suggest that adult human requirements are between 0.5 and 1 µg/kg/day. There are no reference nutrient intakes defined for vitamin K in the United Kingdom, although a 'safe intake' for adults was set in 1991 at 1 µg/kg/day and for infants 10 µg/day. In the United States, the Food and Nutrition Board of the National Academy of Sciences has defined an Adequate Intake (AI) of phylloquinone of 90 µg/day for adult women and 120 µg/day for adult men, with proportionately smaller values for children. For infants aged 0–6 months, the AI is only 2 µg/day, and it is 2.5 µg/day at 7–12 months, thus creating a larger proportional difference between infants and older age groups than for most micronutrients. Both phylloquinone and the menaquinones appear to be nontoxic, even in multimilligram amounts. However, menadione, the water-soluble form of vitamin K, was found to cause hemolytic anemia, hyperbilirubinemia, and kernicterus in infants when
>5 mg was given. Therefore, it is not currently used for human prophylaxis or treatment. Since vitamin K is thought to have a wide range of functions in the body in addition to blood clotting, and some of these may have long-term health implications, research on requirements and optimal intakes, with multiple end points, is needed. Metabolic and healthrelated differences between the menaquinones and phylloquinone also need to be defined. See also: Bone. Fruits and Vegetables. Infants: Nutritional Requirements. Pregnancy: Safe Diet for Pregnancy. Vitamin A: Biochemistry and Physiological Role. Vitamin E: Physiology and Health Effects.
Further Reading Binkley NC and Suttie JW (1995) Vitamin K nutrition and osteoporosis. Journal of Nutrition 125: 1812–1821. Bugel S (2003) Vitamin K and bone health. Proceedings of the Nutrition Society 62: 839–843. Ferland G (1998) The vitamin K-dependent proteins: An update. Nutrition Review 56: 223–230. Greer FR (1999) Vitamin K status of lactating mothers and their infants. Acta Paediatrica Supplement 430: 95–103. Nelsestuen GL, Shah AM, and Harvey SB (2000) Vitamin Kdependent proteins. Vitamins and Hormones 58: 355–389. Saxena SP, Israels ED, and Israels LG (2001) Novel vitamin K-dependent pathways regulating cell survival. Apoptosis 6: 57–68. Shearer MJ (1997) The roles of vitamins D and K in bone health and osteoporosis prevention. Proceedings of the Nutrition Society 56: 915–937. Shearer MJ (2000) Role of vitamin K and Gla proteins in the pathophysiology of osteoporosis and vascular calcification. Current Opinion in Clinical Nutrition and Metabolic Care 3: 433–438. Suttie JW (1992) Vitamin K and human nutrition. Journal of the American Dietetic Association 92: 585–590. Suttie JW (1995) The importance of menaquinones in human nutrition. Annual Review of Nutrition 15: 399–417. Tsaioun KI (1999) Vitamin K-dependent proteins in the developing and aging nervous system. Nutrition Review 57: 231–240. Vermeer C, Jie K-SG, and Knapen MHJ (1995) Role of vitamin K in bone metabolism. Annual Review of Nutrition 15: 1–22. Vermeer C and Schurgers LJ (2000) A comprehensive review of vitamin K and vitamin K antagonists. Hematology/Oncology Clinics of North America 14: 339–353. Weber P (2001) Vitamin K and bone health. Nutrition 17: 880–887.
W Water see Thirst
WEIGHT MANAGEMENT
Contents
Approaches
Weight Maintenance
Weight Cycling
Approaches
N Finer, Luton and Dunstable Hospital NHS Trust, Luton, UK
© 2005 Elsevier Ltd. All rights reserved.
Weight loss and weight loss maintenance require a decrease in energy intake (diet), an increase in energy expenditure (exercise and physical activity), or both. Dietary management should encourage healthy eating, that is, an appropriately balanced intake of macro- and micronutrients. For most obese individuals this will entail not just a decrease in total energy intake, but specifically a decrease in fat intake, together with an increase in complex carbohydrates, fruit, and vegetables. Myriad diets have been popularized as a means of reducing energy intake, but few are recommended as meeting the overall nutritional needs of an obese individual, and many are so restrictive that they clearly could not be followed for more than a few weeks. Increasing exercise and physical activity has benefits beyond those that result from the relatively modest amounts of extra energy expended. These include a beneficial protection from excessive loss of lean body tissue during dieting, improved fitness and psychological health, and a greater likelihood of long-term weight maintenance. Diet and exercise are core components of behavioral treatments; such treatments, based on learning theories, also aim to help individuals become aware of the behaviors that have
led to their weight gain, and to develop strategies to alter them. Weight loss can be achieved successfully with all strategies; behavioral therapies that include a strong focus on increasing exercise and activity seem to offer the best chances of long-term success.
The Concept of Desirable Weight

Body weight reflects the additive mass of the various tissues that make up the organism, and is a function of energy and nutrient balance over a prolonged period. Positive energy balance will result in weight gain (mainly from deposition of lipid in adipose tissue), while prolonged undernutrition will lead to weight loss. For most of human history, the dominant disorder of body weight has been thinness. Thinness, whether from malnutrition or disease, was associated with illness and was often a prelude to death; in societies where food supplies are scarce or seasonal, a high body weight may be seen as a desirable sign of health, and probably wealth. In contrast, in developed societies where levels of activity are low and food is plentiful, the growing prevalence of overweight and obesity has been clearly linked to illness and premature mortality. The concept of a desirable weight at which health is optimal and the risk of disease minimal has not been easy to define, largely because of the effects of many other factors such as age, sex, social status, and smoking.
Dietary Management

Dietary management of obesity aims to reduce fat stores by changing eating habits to reduce energy intake below that required for weight maintenance. The term 'reducing diet' has been coined to describe such diets used to treat the obese. Since many obese individuals may eat a nutritionally inadequate (apart from energy) diet, it is important that advice on energy restriction is accompanied by the prescription of a 'healthy' diet that contains adequate protein, vitamins, calcium, trace elements, and a desirable ratio of complex carbohydrate to fat. Weight loss per se is of no medical benefit unless it is maintained, and this will require the obese individual to adhere to a permanent change in eating habits. Many think of a 'diet' as a temporary change in eating habits (often extreme or quirky), a view encouraged by many of the diet books that hold out the promise of easy and instant success. It is essential that the concept of a long-term change in dietary habits be accepted at the start of treatment. The energy value of weight gained or lost is approximately 31 MJ per kg (7500 kcal per kg), since it is composed of approximately 3 parts fat to 1 part lean tissue. Thus a daily energy deficit of 2.1 MJ (500 kcal) will produce a weight loss of about 2 kg per month. For the average man or woman this represents a 20–30% reduction in energy intake, although for the obese the percentage reduction will be smaller. Thus the severely obese, for example with a body mass index (BMI) of 35 or more, will need to follow an energy-restricted diet for months rather than weeks to reverse their obesity. As weight is lost, energy requirements fall, in part because of the reduced energetic mass of the person, and also because of adaptive changes in energy expenditure. For this reason, the rate of weight loss will eventually slow and reach a plateau for any fixed level of dietary energy restriction (Figure 1).
Figure 1 Fall in body weight (solid line) resulting from a fixed decrease in energy intake at the start of the diet. Note that the rate of weight loss slows as the gap between energy expenditure (darker shading) and energy intake (lighter shading) narrows over time.
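The arithmetic behind the figure of about 2 kg per month can be made explicit. A minimal Python sketch, assuming the approximate energy value of 31 MJ per kg of tissue lost quoted above and ignoring the adaptive fall in energy expenditure that produces the plateau shown in Figure 1:

# Approximate rate of weight loss from a fixed daily energy deficit.
# Assumes 31 MJ per kg of tissue lost (value quoted in the text) and a
# constant deficit, so it overestimates loss as the plateau is approached.
ENERGY_PER_KG_MJ = 31.0

def weight_loss_kg(daily_deficit_mj: float, days: int = 30) -> float:
    """Return the approximate weight loss (kg) over a given number of days."""
    return daily_deficit_mj * days / ENERGY_PER_KG_MJ

print(round(weight_loss_kg(2.1), 1))  # about 2.0 kg per month for a 2.1 MJ (500 kcal) deficit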
Myriad diets have been popularized and promoted directly to the public, reflecting every possible permutation of increasing or decreasing the major macronutrients. Fashion and commercialism have dictated many of them. Table 1 shows the variety of diets that have been suggested, and used, for treating obesity. Many of these diets fail to focus on long-term dietary change, and the quirkiness of many makes it unlikely that they would be followed for long. Current ideas on a reasonable reducing diet are that it should contain at least 100 g carbohydrate to prevent glycogen depletion and ketosis. High-carbohydrate diets are composed of complex carbohydrates and are thus of low energy density, which may aid management of hunger. Since high-carbohydrate diets are low in fat, they have the theoretical advantage of directly reducing the risk of cardiovascular disease. The energetic efficiency with which carbohydrate is converted and stored as fat is lower than that of dietary fat, providing a further advantage. Protein intake must be adequate to maintain lean body mass. Although there is an inevitable fall with weight loss, 0.8 g of protein per kg per day (about 44 g daily for women and 56 g daily for men) plus 1.75 g per 100 kcal of energy deficit should be consumed, and fat restricted to less than 30% of total energy. The diet should contain recommended daily intakes of vitamins, minerals, and electrolytes, if necessary by supplementation; 20–30 g daily of fiber should also be consumed. Many diets prescribe an energy intake that is based on a generalized rather than an individualized assessment of energy needs. The common prescription of 4.2–5.0 MJ (1000–1200 kcal) daily may be problematic and inappropriate. Weight loss in men will be faster and greater compared to women of equal BMI, because of the relatively greater metabolic rate per kilogram of body weight of men. The very obese, whose daily energy requirements can be as high as 12.6 MJ (3000 kcal), may lose weight at an excessive rate and develop symptoms of ketosis, postural hypotension, or excessive hunger. Many obese patients fail to register or admit to the amount of food they consume, and claim that such a diet is more than their habitual intake. One principle of energy prescription that has proved easy to administer and successful in outcome is to calculate energy requirements from standard formulas (Table 2), and prescribe a diet that provides a fixed energy deficit of 2.1 MJ (500 kcal). Compliance and weight loss were better with this approach than with a fixed 5 MJ (1200 kcal) diet. A diametrically opposite approach is the use of very low-energy liquid diets. These were originally
developed in the 1960s to provide a nutritionally complete intake in terms of protein, vitamins, and micronutrients, but provide as little as 1.4 MJ (350 kcal) daily. The inclusion of sufficient high-quality protein was designed to prevent the excessive loss of lean body mass seen with starvation or other ketotic diets, hence the alternative term 'protein-sparing modified fast.' Appropriately selected, well-motivated patients are highly compliant with such diets, and their weight loss can be very rapid.

Table 1 Types of diet used for treating obesity
Generic name for diet | Typical dietetic modification | Popular example of diet
Starvation diet | Less than 1.2 MJ (300 kcal) per day | Grapefruit and Black Coffee
Very low-energy (protein-sparing) diets | About 2 MJ (500 kcal) per day with >50 g high-quality protein; usually liquid | Cambridge Diet; Modifast
Low-energy diet | 5–7.5 MJ (1200–1800 kcal) per day, often from menus or recipes |
Fixed energy deficit diet | Nutritionally balanced, individually tailored to produce a fixed energy deficit (e.g., 2 MJ or 500 kcal per day) based on measured or predicted energy needs |
High-protein diet | Over 40% protein, thus low in carbohydrate and fat |
Low-protein diet | |
High-fat diet | Restricted carbohydrate and protein |
Low-fat diet | Restrict fat |

Table 2 Standard formulas for predicting energy requirements from body weight
Age (years) | Energy requirement (MJ per day)
18–30 | 0.062 × weight in kg + 2.036
30–60 | 0.034 × weight in kg + 3.538
>60 | 0.038 × weight in kg + 2.755
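The fixed-deficit principle described above lends itself to a short worked sketch. The coefficients below are the ones recoverable from Table 2 (they match the standard Schofield-type equations for women; corresponding values for men are not shown in this extract), and the 2.1 MJ (500 kcal) deficit and the protein rule of 0.8 g per kg plus 1.75 g per 100 kcal of deficit are taken from the text. Function names are illustrative, not from any published program.

# Hypothetical sketch of a fixed-energy-deficit prescription (assumptions noted above).

def estimated_requirement_mj(weight_kg: float, age_years: int) -> float:
    """Estimated daily energy requirement (MJ) from the Table 2 formulas."""
    if age_years < 30:
        return 0.062 * weight_kg + 2.036
    elif age_years <= 60:
        return 0.034 * weight_kg + 3.538
    else:
        return 0.038 * weight_kg + 2.755

def prescribed_intake_mj(weight_kg: float, age_years: int, deficit_mj: float = 2.1) -> float:
    """Prescribed daily intake: estimated requirement minus a fixed deficit (default 2.1 MJ, 500 kcal)."""
    return estimated_requirement_mj(weight_kg, age_years) - deficit_mj

def minimum_protein_g(weight_kg: float, deficit_kcal: float = 500.0) -> float:
    """Minimum daily protein: 0.8 g per kg plus 1.75 g per 100 kcal of energy deficit."""
    return 0.8 * weight_kg + 1.75 * deficit_kcal / 100.0

# Example: a 90 kg, 45-year-old woman
print(round(prescribed_intake_mj(90, 45), 1))  # about 4.5 MJ/day
print(round(minimum_protein_g(90), 1))         # about 80.8 g/day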
Paradoxically, perhaps, patients seem to find it easier to mount levels of near-total restraint than more moderate restriction. It appears that withdrawing all solid or ‘proper’ food helps the patient to define himself or herself as ‘not eating’, in the same way that some quitting smokers find it easier to abstain completely from cigarettes rather than to cut down. In the 1970s a commercial very low-energy diet formulation (the Last Chance Diet) was marketed, and was associated with a number of deaths from cardiac arrhythmia. This diet was deficient in essential amino acids and in minerals such as magnesium and potassium. It was withdrawn. In the 1980s newer, better formulated diets were commercially marketed. Concerns about their inappropriate use by already slim women, often with an eating disorder, forced governmental health agencies to issue guidelines on their use. In the US, a task force recommended that such diets contain at least 3.3 MJ (800 kcal), be supervised by experienced physicians, and be used only by those with a BMI more than 30, for less than 16 weeks. In the UK a report from the Committee on Medical Aspects of Food Policy suggested such diets should provide a minimum of 1.7 MJ (400 kcal) and 40 g protein daily for
women, and 2.1 MJ (500 kcal) and 50 g protein daily for men and tall women. They were recommended for use only by those with a BMI more than 25 and under medical supervision, for no longer than 4 weeks. The drawback of such diets is that unless they are combined with, or followed by, some other treatment (pharmacological or behavioral), weight regain, often soon and rapid, is almost universal. More recently, low-energy liquid diets of around 3 MJ (750 kcal) daily have been popularized, often as part of an overall behavior modification program (see later), or in the form of sachets intended to be used as meal replacements. Both approaches have been shown to have potential for success in short-term studies lasting up to 1 year.
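The US task force figures quoted above can be expressed as a simple eligibility check. A minimal, illustrative sketch only (not a clinical tool, and physician supervision is assumed rather than modelled):

def meets_us_vled_criteria(diet_kcal_per_day: float, bmi: float, planned_weeks: int) -> bool:
    """US task force criteria quoted in the text: at least 3.3 MJ (800 kcal) per day,
    BMI above 30, and use for less than 16 weeks."""
    return diet_kcal_per_day >= 800 and bmi > 30 and planned_weeks < 16

print(meets_us_vled_criteria(800, 34, 12))  # True
print(meets_us_vled_criteria(600, 28, 20))  # False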
Exercise and Physical Activity

The term 'physical activity' refers to bodily movement produced by skeletal muscle that results in energy expenditure; it thus includes activities of daily living, as well as leisure activity from sport and exercise. The term 'exercise' refers to planned or structured bodily movements, usually undertaken in leisure time in order to improve fitness (e.g., aerobics), while 'sport' is physical activity usually undertaken in structured competitive situations (e.g., football). Physical activity at recommended levels (moderate intensity for 30 min on 5 days each week) is associated with many health benefits; these include lower all-cause mortality rates, fewer cardiovascular events such as myocardial infarction and stroke, and a lower incidence of metabolic disorders including non-insulin-dependent diabetes mellitus and osteoporosis. Levels of activity have been falling in Westernized societies, largely because of a decrease in physical activity at work (from increasing mechanization) and increasingly sedentary leisure-time pursuits (such as television viewing). The Allied Dunbar National Fitness Survey of the UK showed that 70% of the population are insufficiently active, and a separate UK government survey showed that 1 in 3 adults could be classified as sedentary, i.e., taking less than half an hour of continuous moderate-intensity physical activity each week (Figure 2). Both cross-sectional data and prospective studies confirm an inverse relationship between physical activity and weight gain. The finding that in many countries, such as the UK, average energy intake has fallen over the time that obesity has been increasing emphasizes the importance of inactivity as a cause of obesity. These secular changes of inactivity are most marked in children, who now spend much of their leisure time watching television or in other sedentary pursuits. Health authorities in many countries now
Figure 2 Percentage of adults in England, by age and sex (1990–1991), with a sedentary life style (dark bars, men; light bars, women). Data from Fentem and Walker (1995) Setting targets for England: challenging, measurable and achievable. In: Killoran A (ed.) Moving On: International Perspectives on Promoting Physical Activity. London: Health Education Authority.
advocate an increase in physical activity as a means of preventing obesity and improving health and fitness. While there is agreement that such measures may be useful in preventing obesity, the role of exercise in treating obesity is less clear. Potential mechanisms linking exercise and activity with weight loss and weight loss maintenance are shown in Figure 3. Like dietary change, increasing time spent on exercise and activity can be seen as part of a generalized behavioral change, which can be self-reinforcing. Exercise and activity raise energy expenditure over and above the resting metabolic rate. Under some circumstances, such as prolonged vigorous exercise in trained individuals, rates of energy expenditure remain elevated for some time after the cessation of exercise. Logically, therefore, exercise should be a useful way to treat obesity. However, the amounts of exercise-induced energy expenditure are small in comparison with potential changes in energy intake. The energy cost of activity and exercise can be expressed as a multiple of resting metabolic rate, termed a MET; the term 'physical activity level' (PAL) represents the total daily energy expenditure divided by the resting energy expenditure, and it typically averages 1.5. The energy cost of walking is about 2.0 MET (for a 70 kg individual this is about 0.5 MJ per hour, or 120 kcal per hour), while gentle running costs about 8 MET, or 2 MJ per hour (480 kcal per hour). A moderately fit individual would only be able to maintain a level of exercise of 7 MET for about 30 min, representing an additional energy expenditure of about 1.5 MJ (360 kcal) and resulting, if energy intake were maintained, in a weight loss of about 0.3 kg per week. Energy expenditure remains above baseline for some time after exercise has stopped; this is termed 'post-exercise energy expenditure.' The effect is small and only produced by very high levels of activity, capable of achievement only by elite athletes. The mechanism for this effect is unknown. Moderate-intensity exercise programs, of the sort prescribed to the obese, are unlikely to raise energy expenditure by more than about 0.2 MJ (50 kcal) per exercise session. Regular exercise does, however, elevate long-term energy expenditure by its effect on altering body composition. Resting metabolic rate is proportional to the fat-free mass. Exercise increases muscle development and bone mass, so directly raising metabolic rate. The purpose of weight loss is to reduce fat mass, with as little loss of fat-free mass (FFM) as possible. The loss of fat to meet the extra energy requirements of regular exercise will decrease the ratio of fat to FFM and thus indirectly favor an increase in resting metabolic rate for any given body weight. These effects are modest, and mainly seen only with the sort of high-intensity exercise achieved by athletes. Even endurance-level training over periods of up to 12 weeks increases nonexercising daily energy expenditure by less than 0.8 MJ (190 kcal). The effects of exercise are thus quantitatively small.

Figure 3 Mechanisms linking exercise with weight loss and weight loss maintenance. Exercise produces a direct increase in energy expenditure (the direct cost of exercise and improved aerobic fitness), a non-exercise increase in energy expenditure (increased resting metabolic rate, increased ratio of fat-free to fat mass, and increased capacity for fat oxidation), decreased food and energy intake (short-term reduction in hunger, decreased preference for fat, and a substitute for eating as a stress-coping mechanism), psychological effects (improved fitness, well-being, self-esteem, and assertiveness), and physical effects (improvement in coexisting diseases and increased mobility), all of which favor weight loss.

The relatively small potential for exercise to reduce body weight is borne out by the results of trials of exercise in obesity treatment, which suggest that exercise programs achieve weight losses of less than 0.1 kg per week, and that total weight loss averages about 3 kg. In one meta-analysis of five controlled trials of exercise without dietary restriction, mean weight loss in 95 men was 2.6 kg over 30 weeks, compared with a gain of 0.4 kg in the control group. Programs that combine dietary and exercise interventions can be more successful, but it is
often difficult to separate the effects of one from the other. In order to explore the effect of exercise on the composition of weight loss during dieting, Garrow analyzed data from 21 randomized, controlled studies. All trials that combined exercise and diet and included information about weight and FFM loss were included (Figure 4). A small reduction in the percentage of FFM lost is observed if exercise is included with the dietetic intervention. Thus, for example, in a woman losing 15 kg, exercise would reduce her FFM loss from 3.6 kg (24%) to 3.0 kg (20%). Similar but quantitatively greater benefits are seen in men: for a 15 kg weight loss, exercise reduced FFM loss from 3.6 kg (24%) to 2.5 kg (17%). Activity and exercise are strong predictors for successful weight loss maintenance. A number of studies have shown that obese women who have lost weight and continue to undertake regular exercise are 3–4 times more likely to maintain their weight loss over a follow-up period of 2–3 years. The amount of exercise also correlates with the degree of success. In one study of about a hundred obese men and women who had lost about 27 kg, those with high levels of exercise were maintaining an average of 18 kg loss at 3 years, compared with 9 kg in the moderate exercise group and no weight loss in the nonexercisers. The importance of exercise and weight loss maintenance is demonstrated by a 2-year study of obese subjects treated by either diet, exercise, or a combination of the two. Weight loss in the diet group at 1 year was 6.8 kg, in the exercise group 2.9 kg, and 8.9 kg in the combination treatment group. However, after 2 years
the groups that had included exercise were maintaining losses of 2.2–2.7 kg while those on diet alone had only managed to maintain a 0.9 kg loss. Similar findings have been seen in dieters from commercial slimming groups.

Figure 4 Relationship of total weight loss (kg) to fat-free mass (FFM) loss (kg) in women (A) and men (B) undertaking a diet with exercise (solid squares, solid line) or without exercise (open squares, broken line). Data from 21 randomized controlled studies, collated by Garrow JS and Summerbell CD (1995) Meta-analysis: effect of exercise, with or without dieting, on the body composition of overweight subjects. Eur J Clin Nutr 49: 1–10.
Behavior Modification Behavioral modification is seen as the cornerstone of any treatment program that seeks to empower and enable obese individuals to make voluntary changes in life style. Any therapy relies to a greater or lesser extent on such a principle. For example, treating hypertension should be an apparently straight-forward clinical management issue, but patient non-compliance with medication is common. The skilled clinician will often include the principles of behavior therapy in consultations to help the patient understand and put into practice the new ‘life style’ of taking their drugs regularly. The approach in obesity is firmly based on theories of learning, and relies on the concept that behaviors associated with weight gain and weight maintenance are to a significant extent learned and subject to modification. Such a behavioral theory is not undermined by the knowledge that genetic and environmental factors are also important in determining the predisposition to obesity. A prerequisite for successful behavior change is that the individual must be ‘ready’ and motivated to change. It is common practice to assess this aspect of ‘readiness’ prior to enrolling patients in behavioral programs, and a number of standardized and validated questionnaires are available. Because behavioral programs are intensive of therapist time, patients are often treated in groups, often with manuals, which
allow for individual study. These groups are usually ‘closed;’ that is, a small group of patients start the program simultaneously and go through it together. This contrasts with many commercial diet groups, in which patients are free to join or leave at any time. More recently computeraided interventions have been developed, but as yet results are not promising. The components of a typical behavior modification programme are shown in Table 3. For each area, patients need to learn the underlying concepts, recognize the importance to their own situation, and practise strategies to change their behavior. The results of a large number of programs have been published, either as audit outcome or as comparative trials. Programs vary in duration from 12 weeks to 52 weeks (there has been a trend since the 1970s to lengthen treatment time). Drop-out rates are clearly biased by selection procedures, but are typically 10–20%. Weight loss during treatment is typically 10–15% of initial weight, at a rate of about 0.5 kg per week. In order to strengthen the impact of the intervention on weight loss, many programs have included a period of time on very low-energy or liquid-based diets. This approach of a complete withdrawal for a time from established (abnormal) eating habits can be usefully integrated into a model of behavior change, and is well and positively tolerated by obese patients. Although data suggest that the greater weight loss induced by very low-energy diets has little effect on the long-term results in terms of weight loss maintenance, these diets do represent a practical and pragmatic initial approach to treating patients in a group, especially when many individuals within such a group may resist
the idea that they are able to lose weight on conventional reduced-energy diets. Research is now directed towards finding ways of improving the results of such programs in terms of long-term weight loss maintenance. An increased focus on weight-maintaining behavior rather than weight loss, a stronger emphasis on increasing activity and exercise, and better relapse strategies are being evaluated. Targeting the needs of specific subgroups, for example those with binge eating disorders or dysfunctional family circumstances, is another way in which behavioral therapy may be improved.

Table 3 The components of a typical behavior modification program
Domain | Intervention strategy | Example
Self-monitoring | Food intake diaries | Food diaries
Self-monitoring | Exercise and activity | Activity logs
Self-monitoring | Weight change | Regular weighing and recording on weight charts
Nutrition | Nutrition knowledge | Energy, macronutrients, understanding food labeling
Nutrition | Healthy eating | Low fat, high complex carbohydrate, adequate fruit and vegetable intake
Exercise and activity | Increasing daily energy-using activities | Using stairs not escalators
Exercise and activity | Decreasing sedentariness | Decrease television viewing
Exercise and activity | Formal exercise | Group workouts at sports centers
Goal setting | Realistic rates of weight loss | Aim for 0.5–1.0 kg weekly
Goal setting | Realistic target weight | 10% weight loss as initial goal
Goal setting | Weight maintenance |
Problem solving | Identifying conflicts with aims | Holidays, parties, restaurant meals
Problem solving | Interpersonal conflicts | The unhelpful relative or friend
Problem solving | Stimulus control and negative feelings | Hunger on returning home from work
Cognitive change | Modifying thoughts about and responses to food cues | Good and bad foods; food as a reward; coping with 'highly desirable' foods
Cognitive change | Self-esteem and assertiveness training | Recognizing and exerting choice
Cognitive change | Preventing relapse | Acceptance of occasional small weight gains

See also: Eating Disorders: Bulimia Nervosa. Energy Expenditure: Indirect Calorimetry. Exercise: Diet and Exercise. Obesity: Definition, Etiology and Assessment; Fat Distribution; Childhood Obesity; Prevention; Treatment. Starvation and Fasting.
Further Reading
Activity and Health Research, Allied Dunbar National Fitness Survey (1992) A Report on Activity Patterns and Fitness Levels: Main Findings. London: Sports Council and Health Education Authority.
Brownell K (1997) The Learn Programme for Weight Control, 7th edn. Dallas, TX: American Health Publishing.
Fentem P and Walker A (1995) Setting targets for England: challenging, measurable and achievable. In: Killoran A (ed.) Moving On: International Perspectives on Promoting Physical Activity. London: Health Education Authority.
Finer N (ed.) (1997) Obesity: a series of expert reviews. British Medical Bulletin 53(2): 229–450.
Frost G, Masters K, King C et al. (1991) A new method of energy prescription to improve weight loss. Journal of Human Nutrition and Dietetics 4: 369–373.
Scottish Intercollegiate Guidelines Network (1996) Obesity in Scotland: Integrating Prevention with Weight Management. A national clinical guideline recommended for use in Scotland. Edinburgh: SIGN.
Thomas PR (ed.) (1995) Weighing the Options: Criteria for Evaluating Weight-Management Programs. Washington, DC: National Academy Press.
Tremblay A, Bouchard C, and Despres JP (eds.) (1995) Proceedings of a satellite symposium of the 7th ICO on Exercise and Obesity: Morphological, metabolic and clinical implications. International Journal of Obesity (supplement 4): S1–S129.
Wing RR (1997) Behavioural approaches to the treatment of obesity. In: Bray GA, Bouchard C, and James WPT (eds.) Handbook of Obesity, pp. 855–873. New York: Marcel Dekker.

Weight Maintenance
H A Raynor and R R Wing, Brown University, Providence, RI, USA
© 2005 Elsevier Ltd. All rights reserved.
Most of the developed world is in the midst of an obesity epidemic. In the United States, more than 60% of adults are overweight and obese. The negative impact of obesity on health outcomes and health care costs has heightened awareness of the importance of achieving successful weight loss maintenance. This article reviews information obtained from observational and experimental studies on successful weight loss maintenance. First, the definition
and prevalence of successful weight loss maintenance are presented, and why weight loss maintenance may be difficult is discussed. Next, factors identified in research examining weight loss maintenance, obtained from the National Weight Control Registry (NWCR) in the United States and from randomized controlled trials examining long-term weight loss and weight loss maintenance, are described. Finally, general recommendations for achieving successful weight loss maintenance are provided.
Definition of Successful Weight Loss Maintenance

There is no universally accepted definition of successful weight loss maintenance. We recommend the following definition: an intentional weight loss of at least 10% of initial body weight that is maintained for at least 1 year. Several points in this definition should be noted. First, the definition requires that the weight loss be intentional; this is important since several studies suggest that unintentional weight loss occurs frequently and is likely to have different causes and consequences than intentional weight loss. Second, criteria for both magnitude and duration of the weight loss are set. The criterion of 10% weight loss is recommended since weight losses of this magnitude have been shown to have positive health consequences. The 1-year duration is selected in keeping with the US Institute of Medicine definition. However, examining both 1-year and 5-year maintenance of weight loss is suggested.
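Because the definition reduces to two explicit criteria plus intentionality, it can be stated as a simple rule. A minimal sketch in Python (illustrative names only, not a published instrument):

def is_successful_maintenance(initial_weight_kg: float,
                              current_weight_kg: float,
                              months_maintained: float,
                              intentional: bool) -> bool:
    """Apply the recommended definition: an intentional loss of at least 10% of
    initial body weight, maintained for at least 1 year."""
    if not intentional:
        return False
    percent_lost = 100.0 * (initial_weight_kg - current_weight_kg) / initial_weight_kg
    return percent_lost >= 10.0 and months_maintained >= 12.0

# Example: a 100 kg person now at 88 kg, loss maintained intentionally for 18 months
print(is_successful_maintenance(100.0, 88.0, 18, True))  # True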
Data on Prevalence of Long-Term Maintenance of Weight Loss

Most information about long-term maintenance of weight loss comes from obesity treatment studies. In such studies, overweight individuals who receive a lifestyle intervention achieve a weight loss of approximately 7–10 kg at 6 months. Typically, the maximum weight loss occurs at 6 months, followed by weight maintenance for the next 6 months and then gradual weight regain. This pattern of weight change is illustrated in the Diabetes Prevention Program (DPP). DPP is a multicenter clinical trial of the effects of lifestyle intervention, metformin, and placebo on the development of diabetes in more than 3000 individuals with impaired glucose tolerance. Figure 1 shows that participants in lifestyle intervention achieved an average weight loss of 7 kg (7% of initial body weight) at 6 months, maintained this weight loss through 12 months, and then gradually regained weight.
Figure 1 Average weight loss (weight change in kg, plotted against years from randomization) achieved in the lifestyle intervention of the Diabetes Prevention Program.
Few lifestyle treatment programs provide followup beyond 1 or 2 years. One study reported that at 5-year follow-up 13% of participants remained >5 kg below their baseline weight. Likewise, 22% of participants were >5 kg below baseline weight at 5 years in another lifestyle intervention. These studies may underestimate the prevalence of successful long-term weight loss because they are based on a single episode of weight loss and likely involve a selected sample who find weight loss most problematic. For example, a random digit dialing telephone survey of 500 adults in the United States found that 228 of these adults reported being overweight (body mass index >27) at their heaviest weight. Sixty-nine of the 228 individuals were currently at least 10% below their highest body weight and had maintained at least a weight loss of 10% for at least 1 year (mean weight loss was 19.1 kg, maintained for 7 years). When successful weight losers were further restricted to those who reported intentional weight loss of >10% maintained for >1 year, 47 (20.6%) of the 228 overweight participants met this criterion. Thus, 20% of overweight individuals appear to meet the criteria specified for ‘‘success.’’
Why is Weight Loss Maintenance Difficult?

Long-term weight loss maintenance may be difficult due to a combination of physiological, environmental, and psychological factors. Proposed physiological factors contributing to weight regain include reduced resting metabolic rate and insulin and leptin resistance. However, investigations examining metabolic factors in individuals who have lost weight have not been able to consistently document changes in physiological characteristics that would explain the tendency for weight regain to occur. Environmental
factors may affect energy balance by promoting increased intake and/or reduced energy expenditure, causing weight regain to occur. The strong impact that environmental cues have on energy intake and expenditure have recently been acknowledged, as Americans are now described to be living in an ‘‘obesogenic environment.’’ This environment provides greater exposure to a variety of highly palatable, energy-dense foods and expanding portion sizes that potentially increase intake. Additionally, the environment is filled with products of convenience and efficiency that promote decreased energy expenditure. The psychological self-control needed to override these environmental cues may be difficult for most people to sustain over long periods. Finally, during obesity treatment, weight loss can provide reinforcement for adherence to eating and activity prescriptions that promote weight loss. During weight loss maintenance, weight loss no longer occurs; therefore, there is less reinforcement of healthy eating and activity behaviors, causing motivation for sustaining these behaviors to decrease.
Research on Successful Weight Loss Maintenance

Although many individuals have difficulty sustaining weight loss, some are able to maintain a substantial amount of weight loss over a long period of time. To increase the prevalence of successful weight loss maintenance, two types of research investigating weight loss maintenance have been conducted: observational and experimental. In observational research, successful weight loss maintainers are identified and information about how they maintain their weight loss is collected. With experimental research, variables that are believed to affect weight status are manipulated and weight change over time is measured.

The National Weight Control Registry
The largest observational study of successful weight losers is the NWCR in the United States. The NWCR is a registry of individuals who have lost at least 13.6 kg and kept it off at least 1 year. On average, these participants have lost more than 27.3 kg and kept it off more than 6 years. Information that registry members have provided has aided in learning about the weight loss maintenance process. Registry participants are recruited through newspaper and magazine articles and thus are a self-selected population. The registry members are primarily female (80%) and Caucasian (97%). Many have a strong genetic predisposition to obesity, with 73% having
one or both parents with obesity and 46% having been overweight as a child. Participants in the registry are asked to indicate how they lost their weight in this successful effort. Approximately half say they lost the weight entirely on their own, whereas the other half reported receiving some type of assistance from a physician, dietician, or commercial program. The combination of diet plus exercise was used by 89%, with the most common dietary strategy being restricting intake of certain types of foods. Although there is marked heterogeneity in the approaches used for weight loss, there appear to be some common themes for weight loss maintenance. The first common element is consumption of a low-calorie, low-fat diet. Registry participants are consuming an average of 1381 kcal/day, with 24% of calories from fat, 19% from protein, and 56% from carbohydrates. Very few (less than 1%) report consuming a low-carbohydrate diet (less than 24% of calories from carbohydrates). Registry members report eating an average of 4.87 meals or snacks per day. More than three-fourths of the sample report eating breakfast every day of the week, whereas less than 5% report never eating breakfast. Consuming breakfast may be an important behavioral characteristic of successful weight losers. These long-term changes in diet in registry members are accompanied by long-term changes in physical activity. Women in the registry report 2545 kcal/week of physical activity and men report 3293 kcal/week. This is equivalent to 1 h per day of brisk activity. Approximately half of registry members engage in walking plus another form of physical activity, including cycling, weight lifting, aerobics, running, or stair climbing. Only 9% of registry members maintain weight loss without physical activity. The final characteristic of registry members is that they weigh themselves regularly. More than 44% weigh themselves daily and 31% weigh themselves at least once a week. Frequent monitoring of weight may allow these individuals to quickly catch small weight gains and institute corrective actions. See Table 1 for a summary of the strategies that registry members have reported as being helpful for successful weight loss maintenance.

Table 1 Strategies used by successful weight loss maintainers in the National Weight Control Registry

Area | Strategy
Diet | Consuming a low-calorie, low-fat diet
Diet | Consuming breakfast regularly
Physical activity | Engaging in 1 h of moderate–intense physical activity per day
Behavioral tools | Self-monitoring of weight

Experimental Studies Examining Weight Loss Maintenance
Our understanding of weight loss maintenance comes not only from the study of successful weight losers but also from randomized clinical trials evaluating specific treatment components. These trials are stronger scientifically because participants are randomly assigned to treatment conditions and all aspects of the intervention are kept constant except the factor under investigation. However, these studies are limited by their short duration (typically 1 or 2 years) and their relatively small sample size (usually 100–200 participants). In experimental studies of weight loss maintenance, the primary focus is usually on overall weight loss (from baseline to the end of the study), usually defined as long-term weight loss, rather than on maintenance of weight loss from the end of the initial treatment (typically 6 months) to study end. Overall weight loss is selected as the variable of interest because it is most strongly associated with health impact. In addition, focusing on weight change from end of treatment to follow-up would make those who lost small amounts of weight but maintained that weight loss in full appear to be more successful than those who lost large amounts of weight and regained some. Experimental research has focused on three main ways to increase long-term weight loss (Figure 2). The first is to increase the rate of initial weight loss so that a greater amount of weight loss occurs during the first 6 months of treatment. A second focus is to improve maintenance of weight loss achieved after the first 6 months of treatment. Finally, combining both of these approaches is considered the ideal approach.
Several strategies have been tested in experimental studies of weight loss maintenance, including focusing on energy balance, in which changes in diet and/or physical activity are used to create larger energy deficits that produce greater weight loss, or focusing on intensifying behavioral components of interventions so that the skills necessary for sustaining weight loss can be maintained over a longer period. These strategies have been implemented during both the initial weight loss treatment phase and the weight loss maintenance phase.

Energy Balance
In order to produce weight loss, it is necessary to modify energy balance by eating less and/or exercising more. A substantial body of research suggests that the combination of diet plus exercise is most effective for long-term maintenance of weight loss.

Diet
Within the context of diet, weight loss researchers have focused primarily on the level of caloric restriction and the degree of structure in the diet. Typically, behavioral weight loss programs recommend a low-calorie, low-fat diet. Participants are instructed to eat 1000–1500 kcal/day (low-calorie diet), depending on their initial body weight, and to reduce dietary fat to 20–25% of calories. There are no specific foods that are required or prohibited, but consumption of complex carbohydrates and adherence to guidelines based on the Food Guide Pyramid are stressed. Participants are instructed to self-monitor the calories and fat grams in all foods they consume. Self-monitoring is recommended daily for the first 6 months, and 1 week per month thereafter. Adherence to self-monitoring has been shown to be one of the best predictors of maintenance of weight loss.
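As a worked illustration of this prescription (not taken from any specific program), the sketch below converts a daily calorie goal and a percentage-of-calories-from-fat target into the fat-gram allowance that a participant would track when self-monitoring, using the standard conversion of roughly 9 kcal per gram of fat.

```python
def daily_fat_gram_target(calorie_goal_kcal, percent_calories_from_fat,
                          kcal_per_gram_fat=9.0):
    """Grams of fat per day implied by a calorie goal and a
    percentage-of-energy-from-fat target (fat provides ~9 kcal/g)."""
    return calorie_goal_kcal * (percent_calories_from_fat / 100) / kcal_per_gram_fat

# A 1200 kcal/day prescription with 25% of calories from fat
print(round(daily_fat_gram_target(1200, 25)))  # about 33 g of fat per day
```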
[Figure 2 Ways to increase average long-term weight loss maintenance achieved in experimental research. The figure plots weight change (kg) against years from randomization for four schematic treatment curves: standard, increased weight loss, increased maintenance, and combined increased weight loss and maintenance.]
Very low-calorie diets

Very low-calorie diets (VLCDs) are dietary regimens that provide approximately 400–600 kcal per day, usually as a liquid formula. VLCDs have been shown to produce excellent initial weight losses (20 kg at 12 weeks); this effect is due in part to the degree of caloric restriction and in part to decreased dietary variety and the use of portion-controlled foods in these regimens. Given the large initial weight loss produced by VLCDs, it was hoped that combining these diets with behavioral approaches would maximize long-term weight loss. Although VLCDs improve initial weight loss, they do not appear to produce better long-term weight loss than low-calorie diets (LCDs). Difficulty with weight maintenance in programs with a VLCD appears to occur during the transition from the VLCD to a diet composed of conventional foods. Since VLCDs have been very effective at decreasing intake, the effect of intermittent use of VLCDs (initiating weight loss with a VLCD, transitioning to conventional foods, and then returning to a VLCD) on long-term weight loss has also been investigated, but results have been less than promising. During a 50-week behavioral obesity intervention, in which a VLCD was prescribed for weeks 1 through 12 and 24 through 36, weight loss at week 50 was not significantly different between an intermittent VLCD and an LCD. VLCDs with caloric levels between 400 and 800 kcal/day have been compared to examine whether greater caloric restriction produces better weight loss. One study compared two outpatient groups with different caloric prescriptions, 420 and 800 kcal/day, and found that weight loss was not significantly different between the groups. This suggests that VLCDs may produce greater initial weight loss not only by restricting calories but also by increasing the structure of the diet.

Structured low-calorie diets

Several studies have investigated different ways to increase the structure of LCDs. Structure in the diet can be strengthened by decreasing variety and/or food choices and by controlling portion sizes consumed. A study examined whether providing food to participants, which controls portion size and decreases food choice, improved long-term weight loss during a standard behavioral intervention using an LCD. Participants were provided all of the food they should eat for five breakfasts and dinners each week for 18 months. Participants receiving the food provisions had greater weight loss at 6 months (10.1 vs. 7.7 kg), 12 months (9.1 vs. 4.5 kg), and 18 months (6.4 vs. 4.1 kg) than those participants receiving a standard intervention, even
though both groups had identical calorie goals (1000–1500 kcal/day). However, even with the greater dietary structure, participants still regained weight during the maintenance phase. Structure in the diet, by decreasing food choices, can also be increased by providing structured meal plans and detailed grocery lists. One investigation that provided meal plans and grocery lists along with a standard intervention showed greater weight loss than the standard intervention alone. The weight losses achieved with the meal plans were similar to those achieved with food provisions. Using portion-controlled foods available in the marketplace, such as frozen entrees and meal replacement products such as Slim-Fast®, also increases dietary structure. When an LCD composed of conventional foods was compared to an LCD using two Slim-Fast® meal replacements, two Slim-Fast® snack bars, and a healthy dinner, the diet using the Slim-Fast® portion-controlled foods produced better weight loss at 3 months (7.1 vs. 1.3 kg). For the next 24 months, both groups were instructed to consume one Slim-Fast® meal replacement and snack bar per day. At 27 months, the Slim-Fast® group still had better weight loss (10.4 vs. 7.7 kg), and the greater weight loss was maintained at 4 years (9.5 vs. 4.1 kg) in those participants available for follow-up. Food provisions have also been used during a maintenance intervention as a rescue strategy. However, used in this manner, food provisions were not helpful in improving weight loss maintenance compared to a maintenance program without food provisions. Consequently, increasing dietary structure by decreasing variety and food choices and/or using portion-controlled foods appears to improve long-term weight loss. These changes in the diet may increase adherence to an LCD, thereby producing greater weight loss, especially during the first 6 months of obesity treatment.

Physical Activity
Correlational studies suggest that physical activity is the single best predictor of long-term maintained weight loss. Physical activity is important because it increases energy expenditure, but it may also reduce hunger and improve mood. Physical activity is usually prescribed at a level of 1000 kcals/week or 150 minutes of moderate–intense activity per week; however, long-term weight loss has been found to be greater in participants who are active 200 minutes or more per week compared to those who are active 150 minutes or less per week during weight loss
interventions. Similarly, the NWCR data discussed earlier show that successful weight loss maintainers are very active, reporting more than 2500 kcal/week of activity. This suggests that an exercise prescription of at least 200 minutes per week may be needed to improve long-term weight loss. One study compared the effect of a standard activity recommendation (1000 kcal/week) versus a higher physical activity prescription (2500 kcal/week, equivalent to walking 75 minutes 5 days per week) in a standard behavioral weight loss intervention. The group with the higher physical activity prescription had greater long-term weight loss at 12 months (8.5 vs. 6.1 kg) and 18 months (6.7 vs. 4.1 kg). However, even with the higher exercise prescriptions, participants still regained weight during the maintenance phase.

Strategies for improving maintenance of physical activity

One of the most challenging problems with physical activity in weight control programs is adherence to activity prescriptions. One way to increase activity adherence is to prescribe activity in short bouts (40 minutes/day in four 10-minute bouts) rather than in long bouts (40 minutes/day in one bout). Accumulating activity during the day may make it easier to achieve physical activity goals. Although short bouts of exercise improved initial adoption of exercise, they did not appear to increase physical activity adherence or weight loss at 12 and 18 months. Participants have also been provided with personal trainers, supervised walks, home exercise equipment, and financial incentives to improve physical activity adherence. Although personal trainers and financial incentives did increase attendance at exercise sessions, neither improved total exercise achieved or weight loss at 18 months. Providing participants with home exercise equipment has been shown to improve both adherence to physical activity and weight loss. Participants given home exercise equipment and encouraged to exercise in multiple short bouts had greater long-term weight loss (18 months) than those prescribed short bouts without equipment (7.4 vs. 3.7 kg). The results of this study suggest that home exercise equipment and other approaches that make exercise more convenient may facilitate long-term adherence and consequently weight loss maintenance. Another approach evaluated for increasing long-term physical activity adherence is focusing specifically on physical activity during a weight maintenance program. A 6-month exercise-focused maintenance program, which included supervised group walking sessions, individual and group incentives for exercise completion, and relapse
prevention training aimed at maintaining physical activity, was compared to a weight-focused maintenance program, which focused on group problem solving of weight-related problems. No differences were found between the groups in terms of exercise participation and energy expenditure at the end of the maintenance program or at 6-month follow-up. The weight-focused group had better maintenance of weight losses over 6-month follow-up (3.1 vs. 5.2 kg). These results and findings from other studies suggest that placing too much emphasis on activity may detract from the dietary component of weight loss interventions and consequently decrease long-term weight loss success. Taken together, it appears that physical activity at higher prescriptions of at least 200 minutes per week aids long-term weight loss. However, adherence to this amount of activity may be difficult, and providing home exercise equipment with a physical activity prescription of multiple short bouts may be a promising approach.

Intensifying the Behavioral Component
Whereas the research described previously focused on ways to enhance negative energy balance through modifications in diet and physical activity, other investigations have examined ways to intensify behavioral components in weight loss or weight loss maintenance interventions. These strategies include extending professional contact, increasing social support, enhancing motivation using incentives, providing skills training, and combining some of these strategies into multicomponent maintenance programs.

Extending professional contact

As noted previously, the maximum weight loss in a behavioral weight loss intervention is typically attained at 6 months, which also represents the end of the weekly phase of therapy and the start of the less intense maintenance phase. Weight regain is commonly assumed to be due to a failure to continue practicing effective behavioral techniques when treatment transitions. One way to sustain behavioral strategies is to lengthen treatment or to continue to provide some form of professional contact during the maintenance phase. Lengthening the initial phase of treatment has been shown to increase initial weight loss. For example, when behavioral treatments of identical content, differing only in length of treatment (20 vs. 40 weeks), are compared, the two programs produce similar weight losses at 20 weeks (9.5 kg), but the extended treatment produces greater weight
loss at 40 weeks (13.6 vs. 6.4 kg). Based on this, several investigators tried to develop year-long programs with weekly meetings throughout. Weight losses at the end of the year were 10–14 kg, but attendance became quite poor toward the end of the program and the cost-effectiveness of such long-term weekly programs was questioned. Thus, investigators have considered how best to provide contact after the end of the 6-month weekly program. One of the first methods employed to extend professional contact during the maintenance phase was the use of booster sessions. Booster sessions take place on a fairly infrequent basis after treatment, with an increasing interval of time between sessions to fade professional contact (e.g., meeting at months 1, 3, 6, and 12). Booster contacts have yielded inconsistent results. This finding and the fact that better maintenance of weight loss occurs when participants continue to be seen biweekly suggest that patients need a fairly high level of contact during maintenance. Studies using biweekly maintenance programs have found better weight loss maintenance at 6-month follow-up (120% of the initial weight loss maintained, reflecting continued weight loss, vs. 83%, reflecting some weight regain) and 18-month follow-up (87 vs. 33%) compared to a control intervention receiving no maintenance component. The specific content of the maintenance sessions appears less critical than the frequency of ongoing contact, the regular weighing of patients, and the emphasis on continued self-monitoring. Although face-to-face contact appears most effective, it may also be possible to provide extended contact by phone, mail, or e-mail.
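The maintenance percentages quoted above are easiest to read as the share of the initial (end-of-treatment) weight loss that is still maintained at follow-up, so that values above 100% reflect continued weight loss and values below 100% reflect partial regain. A minimal sketch of that calculation, using invented numbers rather than data from the trials themselves, is shown below.

```python
def percent_of_loss_maintained(loss_at_end_of_treatment_kg, loss_at_followup_kg):
    """Percentage of the initial weight loss still maintained at follow-up:
    >100% means further loss after treatment ended; <100% means partial regain."""
    return loss_at_followup_kg / loss_at_end_of_treatment_kg * 100

print(round(percent_of_loss_maintained(10.0, 12.0)))  # 120 -> continued weight loss
print(round(percent_of_loss_maintained(10.0, 8.3)))   # 83  -> some weight regained
```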
Social support

Another approach for providing long-term support is to involve friends and family of participants in the treatment program. Spouses have been included in treatment, but the effects have been mixed. A meta-analysis of the spouse support literature showed a small positive effect through 2 or 3 months of follow-up. One study examined the effectiveness of natural social support (participants were recruited with three other friends and family members who were all losing weight in the same program) and experimentally created social support (through the use of intragroup activities and intergroup competitions) during a standard behavioral weight loss intervention. Sixty-six percent of participants recruited with a friend and given the social support intervention retained their weight loss in full from month 4 to month 10, compared to 24% of individuals recruited alone and given the standard behavioral intervention without any social support intervention.
Peer support can also be developed among group members in the same weight loss intervention. Support from other members of the group may explain the finding that group treatment tends to be more successful than individual therapy. One investigation conducted a 7-month weight loss maintenance program involving peer support following a behavioral obesity treatment. Participants formed peer self-help groups, which met biweekly and used group problem-solving skills to handle difficulties with weight loss. The peer support group maintained a greater weight loss at 1-year follow-up than the control group that received no maintenance program (6.5 vs. 3.1 kg). These studies suggest that social support is helpful in long-term weight loss and weight loss maintenance.

Incentives for weight loss and weight loss maintenance

Behavioral interventions used in obesity treatments focus on changing antecedents and consequences of behaviors that influence energy balance. Behaviors that produce negative energy balance, and consequent weight loss, can be reinforced, thereby increasing the likelihood that these behaviors will continue. The effects of contracting for healthy behaviors and weight loss have been inconsistent. Contracting with participants to attend supervised exercise sessions doubled the number of walks attended but had no effect on overall activity level or weight loss. Likewise, providing financial incentives (a substantial cash payment was given each week to participants when the weekly weight loss goal was met) during a standard behavioral weight loss program had no effect on long-term weight loss. Procedures in which patients deposit money at the start of the program and then earn portions back each week for meeting specific weight loss goals or self-reported caloric intake goals appear more effective. In one study showing positive results, the financial deposits were returned based on the average weight loss of the whole group.

Skills training

Another approach to improve maintenance of weight loss is to provide participants with training in specific skills. These specific skills can provide participants with the ability to cope with high-risk situations that increase the likelihood of a relapse of problematic eating and activity behaviors. Two types of skill-based maintenance programs, provided after the completion of a standard behavioral weight loss intervention, have been investigated. One approach focuses on relapse prevention, in which participants are taught a
variety of methods to anticipate and cope with the problem of relapse in weight loss maintenance. In the second approach, participants are taught to use the steps of problem-solving to manage difficulties during weight loss maintenance. After 1 year of a maintenance program, participants in the problem-solving intervention had better weight loss than those who had received no maintenance program following treatment (10.8 vs. 4.1 kg). There was no difference in weight loss between participants in the relapse prevention program and those participants who received no maintenance program (5.9 vs. 4.1 kg). These results suggest that strengthening problem-solving skills during weight maintenance improves weight loss maintenance.
Multicomponent programs

Since long-term weight loss maintenance is believed to be difficult for many reasons, an approach that combines several different strategies used after initial weight loss treatment may produce better weight loss maintenance. These multicomponent maintenance programs have used different combinations of extending professional contact, increasing peer support, providing incentives, and increasing physical activity. All programs using a multicomponent maintenance program show better weight loss maintenance at 18-month follow-up compared to no maintenance program, but the multicomponent programs do not produce greater weight loss maintenance than simpler programs that just extend professional contact.
Conclusion

Successful weight loss maintenance can be challenging. However, information obtained from the NWCR, a registry of long-term successful weight loss maintainers, and from experimental studies that have examined different approaches to improve weight loss maintenance indicates that there are several strategies that may assist with long-term weight loss maintenance. These strategies are described in Table 2. Most notably, it is vital to recognize that for successful weight loss maintenance, individuals must continue to consume fewer calories and to engage in a greater level of physical activity than they did prior to weight loss; otherwise, they will return to a state of positive energy balance, in which weight (re)gain occurs. Information from the NWCR suggests that to sustain weight loss, a low-calorie, low-fat diet is needed. Experimental studies show that at least during the weight loss phase, a structured low-calorie diet improves long-term weight loss. The structure of the diet can be increased by using food provisions, structured meal plans, and/or meal replacements. In addition, maintaining structure in the diet may help sustain a lower calorie diet, thus helping with weight maintenance. Being physically active for at least 200 minutes per week also seems to aid in successful weight loss maintenance. Both experimental studies and self-reported activity information from registry participants support this recommendation. Ways to assist with achieving this level of activity include having home exercise equipment available and accumulating activity in multiple short bouts during the day.
Table 2 Helpful strategies for successful weight loss maintenance

Area | Strategy
Diet | Consume a low-calorie (1500 calories per day) diet
Diet | Consume a low-fat diet

WHOLE GRAINS

Foods were classified as whole grain if they contained >25% whole grain or bran by weight; all others were considered to be refined grains. Whole-grain breakfast cereal consumption was inversely associated with total mortality independent of a range of dietary and lifestyle considerations. The use of bread and cereal intakes as a measure of total whole-grain consumption is of some concern, as the extent to which they correlate with overall whole-grain consumption is uncertain. Indeed, such studies also fail to distinguish whether it is in fact something within the whole-grain package that is of benefit, or something else entirely.

Cardiovascular Disease
Cardiovascular diseases are responsible for over a third of all deaths and are the biggest contributor to the global burden of disease. There are a number of studies to suggest that individuals who consume a diet rich in whole-grain foods have a lower incidence of heart disease, although the mechanism is still unclear (see Table 2).
Increases in the consumption of whole grains have been shown to decrease CHD deaths and the risk of stroke and heart disease in some, but not all, epidemiological analyses. In the study of postmenopausal Iowan women there was a reduction in relative risk (RR) of ischemic heart disease of about a third in those consuming at least 1 serving of whole-grain foods per day. This relationship was attributable to differences in the consumption of dark breads and whole-grain breakfast cereals while less common whole-grain foods such as popcorn, brown rice, and oatmeal showed no relationship with CVD. A significant inverse relationship between increasing whole-grain intake and risk was also observed for CHD and total CVD, but not stroke alone (Figure 3). Similar results were obtained in the Nurses’ Health Study of 75 000 women aged 38–63 years who were free from existing diabetes, angina, myocardial infarction, stroke, or other CVDs at baseline. Here, a significant inverse relationship was observed between CHD and whole-grain consumption even after multivariate adjustment for known confounders such as age, smoking, BMI, alcohol, and other dietary and lifestyle factors. For each additional serving of whole-grain food per day, the authors found a relative risk of 0.91 (95% confidence interval (CI) 0.85, 0.97) for CHD risk. In this study there was also a significant inverse relationship between whole-grain intake and risk of ischemic stroke. After adjustment for smoking and other known CVD risk factors, the relationship was
Table 2 Summary of the evidence relating a reduced risk of CVD to increased whole-grain consumption

Evidence for a reduced risk of | Cohort | Reported association | Reference
CHD | Californian Seventh Day Adventists | Lower RR for preference of whole-grain bread | Fraser et al. (1999)
IHD | Iowa Women's Health Study | Lower RR for increasing whole-grain consumption | Jacobs et al. (1998a)
CHD and CVD | Iowa Women's Health Study | Lower RR for increasing whole-grain consumption (except for stroke after adjustment) | Jacobs et al. (1999)
CHD | Nurses' Health Study | Lower RR for increasing whole-grain consumption | Liu et al. (1999)
Ischemic stroke | Nurses' Health Study | Lower RR for increasing whole-grain consumption (total stroke cases) | Liu et al. (2000)

References cited in Table 2:
Fraser et al. (1999) Associations between diet and cancer, ischemic heart disease, and all-cause mortality in non-Hispanic white California Seventh-day Adventists. American Journal of Clinical Nutrition 70: 532S–538S.
Jacobs et al. (1998a) Whole grain intake may reduce the risk of ischaemic heart disease in postmenopausal women: The Iowa Women's Health Study. American Journal of Clinical Nutrition 68: 248–257.
Jacobs et al. (1999) Is whole grain intake associated with reduced total and cause-specific death rates in older women? The Iowa Women's Health Study. American Journal of Public Health 89: 322–329.
Liu et al. (1999) Whole grain consumption and risk of coronary heart disease: results from the Nurses' Health Study. American Journal of Clinical Nutrition 70: 412–419.
Liu et al. (2000) Whole grain consumption and risk of ischemic stroke in women: A prospective study. JAMA 284: 1534–1540.
[Figure 3 Multivariate adjusted hazard rate ratios (HRR) across quintiles of whole-grain intake for cardiovascular disease. One series is adjusted for age and total energy intake; the other is adjusted for age, energy intake, marital status, education, high blood pressure, diabetes, heart disease, cancer, BMI, WHR, physical activity, smoking, alcohol intake, use of vitamin supplements, HRT, total fat, saturated fat, intake of fruits and vegetables, intake of meat, and intake of fish and seafood. HRR (y-axis, 0–1.2) is plotted against quintiles of whole-grain intake (x-axis, 1–5).]
attenuated but remained significant. However, after further adjustment for assorted dietary variables (folate, vitamin E, fiber, magnesium, and potassium), the effect was no longer significant. Unlike previous studies, the authors defined the different categories of stroke and found that although risk of hemorrhagic stroke or incident fatal strokes did not appear to be influenced by whole-grain consumption, total stroke risk was inversely related to consumption of whole-grain foods. It is notable that in many studies subjects with the highest intake of whole-grain foods also had the healthiest lifestyles, and the relationship with whole-grain foods is attenuated after adjustment for other diet and lifestyle variables. The exact mechanisms of protection are unclear. Diets rich in whole-grain foods tend to reduce serum LDL-cholesterol and TAG levels whilst increasing HDL-cholesterol concentrations, and blood pressure is also lower. This may be due in part to the dietary fiber, but the effect usually persists after adjustment for fiber intake. Whole grains also contain a number of specific components that may have heart health benefits, including antioxidants (vitamin E and selenium), B vitamins, flavonoids, and indoles. These may reduce oxidative stress and homocysteine levels, and the isoflavone content of these grains may positively influence vascular reactivity and the inflammatory state.

Type 2 Diabetes
The prevalence of type 2 diabetes has reached epidemic proportions with over 150 million cases diagnosed worldwide; this number is expected to double by 2025. The concurrent rise in obesity has been directly linked to insulin resistance and compensatory
hyperinsulinemia and eventual type 2 diabetes, with over 80% of diagnosed type 2 diabetes being the result of excess body fat. Public health recommendations to reduce fat intake, especially saturated fat, have led to a rise in the proportion of carbohydrates (particularly refined carbohydrates) in the diet, with consequences for postprandial glucose and insulin metabolism. The source of carbohydrate is also important. Whole-grain foods commonly have a low glycemic index: foods with an intact bran and germ layer have a much smaller impact on blood glucose than refined carbohydrate foods because of their larger particle size, which slows the rate of enzymic attack. The level of soluble fiber within whole grains has also been identified as a possible protector, and the higher amylose content is also thought to be beneficial. Slower rates of digestion are observed when foods have more compact granules, contain high levels of viscous soluble fiber, and have a higher amylose to amylopectin ratio. The relationship between whole grains and diabetes has been studied in five large cohorts, as highlighted in Table 3. All of the studies have found an inverse relationship between consumption of whole grains or cereal fiber and disease risk, despite slight variations in methodology. As a proxy measure of whole-grain consumption, the relationship between the intake of total and specific sources of dietary fiber, dietary glycemic index, and glycemic load in the Nurses' Health Study and the Health Professionals Follow-up Study was examined. Among the 65 173 women who participated during 1986–1992, women in the highest quintile of cereal fiber intake had a 28% lower risk of diabetes than those in the lowest quintile of intake (RR 0.72; 95% CI 0.58, 0.90; P = 0.001), a significant reduction that was not observed with fruit or vegetable fiber intakes. In men there was an inverse relationship between cereal fiber intake and risk of type 2 diabetes: a reduction in risk of 30% following adjustment for confounders. Again, no significant relationship of fruit or vegetable fiber to diabetes risk was observed. The fiber content of whole grains has been suggested as a possible explanation for the inverse relationship between total and whole-grain intakes and risk of type 2 diabetes observed in a 10-year follow-up of Finnish men (n = 2286) and women (n = 2030). When the highest and lowest quartiles of whole-grain consumption were compared there was an over 30% reduction in risk following adjustment for age, sex, geographic area, and energy intake. Cereal fiber, but not that from fruits and vegetables, was inversely related to risk of type 2 diabetes even after adjustment for a number of confounders.
Table 3 Summary of the evidence relating a reduced risk of type 2 diabetes to increased whole-grain consumption, including studies where cereal or dietary fiber intake is taken as a surrogate marker for whole-grain intakes

Evidence for a reduced risk of | Cohort | Reported association | Reference
Epidemiological
Type 2 diabetes | Nurses' Health Study | Lower RR with increased dietary fiber | Salmeron et al. (1997a)
Type 2 diabetes | Health Professionals Follow-up Study | Lower RR with increased dietary fiber | Salmeron et al. (1997b)
Type 2 diabetes | Finnish Mobile Clinic Health Examination Survey | Lower RR with increased whole grains | Montonen et al. (2003)
Type 2 diabetes | Health Professionals Follow-up Study | Lower RR with increased whole grains | –
Type 2 diabetes | Nurses' Health Study | Lower RR with increased whole grains | –
Risk factors for type 2 diabetes and CVD | Framingham Offspring Study | Reduction in fasting insulin with increasing whole-grain intake | –
Intervention
Insulin sensitivity | 11 hyperinsulinemic overweight patients | Reduction in fasting insulin following diet rich in whole grains | –

References cited in Table 3:
Salmeron et al. (1997a) Dietary fiber, glycemic load, and risk of non-insulin-dependent diabetes mellitus in women. JAMA 277: 472–477.
Salmeron et al. (1997b) Dietary fiber, glycemic load, and risk of NIDDM in men. Diabetes Care 20: 545–550.
Montonen et al. (2003) Whole-grain and fiber intake and the incidence of type 2 diabetes. American Journal of Clinical Nutrition 77: 622–629.
Adjustment for cereal fiber considerably weakened the association between whole-grain consumption and risk of type 2 diabetes, suggesting that this may be a significant component of the whole-grain package. The effect of whole-grain consumption specifically, rather than fiber intakes, on incidence of type 2 diabetes was examined in the Health Professionals Follow-up Study. Over a 12-year follow-up period, intakes of whole and refined grains were analyzed using a validated semiquantitative FFQ. Despite no baseline history of diabetes or CVD, 1197 cases of incident type 2 diabetes were identified in this male cohort. Following adjustment for dietary and lifestyle confounders including age, smoking, physical activity, and fruit and vegetable intake, there was a reduced risk of type 2 diabetes of almost 40% in those in the highest quintile compared with the lowest quintile of whole-grain intakes. The results were attenuated after adjustment for BMI, although the relationship remained significant. In those with a BMI >30 kg m⁻² the association between whole grain and type 2 diabetes was weak, whereas in those men with a BMI <30 kg m⁻² it was stronger. Cross-sectional data from the Framingham Offspring Study also relate whole-grain intake to fasting insulin, with the highest fasting insulin levels being observed in those with the highest BMI and the lowest intake of whole-grain foods. Prospective epidemiological studies are generally stronger than cross-sectional associations. In the CARDIA study by Pereira and coworkers, a significant inverse relationship was observed between whole-grain foods and fasting insulin levels among over 3500 black and white young Americans aged 18–30 years. A dietary history was collected at baseline and 7 years later, while insulin measurements were collected at 10 years follow-up. After
adjustment for a number of dietary and lifestyle factors an inverse and graded response was observed between whole-grain intake at 7 years and the insulin measurements collected at 10 years follow-up, although the relationship was not significant in black women. There is only one small intervention study that examines the impact of increasing whole-grain consumption. In this study, it was found that after 6 weeks there was a 10% reduction in fasting insulin compared to results observed following the refined-grain diet (141 ± 3.9 pmol l⁻¹ versus 156 ± 3.9 pmol l⁻¹; P < 0.01). This relationship remained even after adjustment for body weight changes (nonsignificant change of 0.7 kg on the whole-grain diet) and physical activity. As for other diseases, the mechanism of the effect of whole grain on insulin sensitivity is not entirely clear and may in part be mediated through effects on body weight. Cereal fiber and possibly certain micronutrients such as magnesium may also be important, since the whole-grain effect is attenuated after adjustment for these variables.

Cancer
Dietary factors are thought to account for about 35% of all cancers but the role of any specific dietary factor or dietary regime has only been established for certain types of cancer. Only a few studies have looked at the links between whole-grain intake and cancer. In the Iowa Women's Study there was a 30% reduction in cancer deaths when comparing those with the highest quintile of whole-grain intake to those in the lowest quintile, after adjustment for age and energy intake. However, once other dietary and lifestyle factors were included within the multivariate analysis this relationship was attenuated and lost its statistical significance. Similar findings were observed in the Norwegian County Study, with a 28% reduction in cancer deaths from the highest to lowest quintile of whole-grain intake when adjusted for age and energy intake. However, the effect was no longer significant after further adjustment for other dietary and lifestyle factors. A number of studies have used case–control designs to investigate the relationship between whole-grain consumption and cancer incidence, although these suffer from the inherent flaws of such study designs, especially those involving recall of past dietary habits. In an analysis of 40 case–control studies, 90% of the studies included had an odds ratio (OR) of less than 1.

ZINC/Physiology

In the brain, zinc is released during glutamatergic neurotransmission, and high concentrations (>100 µM) are found within the synaptic cleft during this process. In addition, brain injury resulting from ischemia or trauma causes the release of massive amounts of zinc, which is thought to be responsible for the resultant cell death.

Antioxidant Defense System
Although zinc is not itself an antioxidant, there are several ways in which it participates in the antioxidant defense system of the body, with important implications for health. It can bind to thiol groups in proteins, making them less susceptible to oxidation. By displacing redox-reactive metals such as iron and copper from both proteins and lipids it can reduce the metal-induced formation of hydroxyl radicals and thus protect the macromolecules. Its role in inducing MT has already been mentioned, and this protein scavenges hydroxyl radicals. Increased oxidative stress results in the release of zinc from MT, presumably making it more available for other proteins. Copper/zinc superoxide dismutase is an important zinc-containing antioxidant enzyme whose activity is impaired in the deficient state. In general, animal studies have revealed an association between zinc deficiency and increased oxidative stress. The likelihood of increased oxidative stress under conditions of zinc deficiency suggests a potential anticarcinogenic role for this mineral. This connection is further supported by the finding that the tumor suppressor gene p53, which is frequently mutated in human cancers, is a zinc-containing transcription factor whose expression is also dependent on zinc.

Macronutrient Metabolism
Many of the enzymes of intermediary metabolism contain zinc, and deficiency affects all macronutrients. Protein synthesis and DNA and RNA synthesis require zinc. Insulin is secreted from the pancreas and circulates in association with zinc. This secretion is diminished under conditions of zinc deficiency, leading to impaired glucose metabolism. Lipid metabolism is also affected, with zinc deficiency being associated with reductions in circulating high-density lipoproteins.
Human Zinc Deficiency

In addition to dietary inadequacy, there are several routes that lead to zinc deficiency. Acrodermatitis enteropathica, the genetic disorder of zinc malabsorption, has already been mentioned. Other, more generalized, malabsorption syndromes (e.g., coeliac disease) can also lead to zinc deficiency. Deficiency has also resulted from inappropriate intravenous feeding and the use of chelation therapy. Children are likely to be particularly at risk of zinc deficiency, because of its involvement in growth.

Mild
Given the difficulty of assessing marginal impairments in zinc status, the effects of deficiency can often be verified only by a response to treatment. Growth provides a good example of this. Children in Denver, Colorado, who were of low height for their age increased their growth rates in response to zinc supplementation, whereas zinc had no effect in children of normal height. In addition to improved growth, improvements in immune function, taste and smell acuity, and reproductive function have been noted with zinc supplementation.

Severe
Severe human zinc deficiency has been well characterized by the original descriptions in the Middle East and in patients with acrodermatitis enteropathica. The symptoms of mild deficiency are continued and exaggerated. Thus, stunting can be extreme and is accompanied by delayed sexual maturation and impotence. Characteristic skin lesions are found, originating around the mouth and nose but becoming widespread as deficiency develops. Diarrhea is also present. Deficits in taste and smell are accompanied by anorexia and other behavioral changes, including increased irritability and impaired cognitive function. Eye pathologies similar to those seen in vitamin A deficiency are observed.
Zinc Toxicity

Toxicity of zinc from food sources has not been reported and seems unlikely since absorption is homeostatically regulated. Acute gastrointestinal symptoms and headaches have been reported after ingestion of amounts about 10–20-fold higher than the recommended intakes. Chronic ingestion of these large amounts has been shown to impair immune response and lipoprotein metabolism. However, the key danger of excessive zinc intake is reduced copper status. This is probably due to a zinc-induced blockage of copper absorption and in fact is clinically useful in individuals with Wilson's disease, a condition of copper toxicity. In the USA, an upper limit of 40 mg day⁻¹ has been set for adults, because of the threat to copper status. The popularity of zinc lozenges for treatment of the common cold could lead to this intake being exceeded. Thus, the use of these treatments should be of limited duration.

Assessment

The prevalence of marginal zinc deficiency in human populations is unknown because of the lack of a good means of assessing zinc status. Measurement of plasma zinc is straightforward, but it does not serve as a reliable indicator of zinc status. Plasma zinc is a quantitatively minor pool that can be easily influenced by minor shifts in tissue zinc. Plasma concentrations do not fall with decreasing dietary intake, except at very low intakes. Plasma zinc can also be affected by factors unrelated to zinc status (e.g., time of day, stress, and infection). Cellular components of blood can be assayed, but erythrocyte concentrations of zinc are maintained in deficient states and variable results have been found with leucocytes. Hair zinc concentrations may reflect available zinc but will also depend on the rate of hair growth. Several different zinc-dependent enzymes have been investigated as potential markers of zinc status, but none have proved reliable. MT in blood cells has been suggested as a useful indicator of zinc status, assayed at either the protein or the mRNA level. MT expression is likely to be regulated by factors other than zinc and therefore may lack the specificity required of a good indicator. The gene-array approaches that have recently been used to determine the global effects of zinc deficiency within a tissue would appear to offer hope for the identification of an appropriate functional marker of zinc status.

Recommended Intakes

In the absence of a reliable index of zinc status, both the US Food and Nutrition Board and the Food and Agriculture Organization (FAO)/World Health Organization (WHO) Expert Committee used the factorial approach to estimate human zinc requirements. As shown in Table 3, the FAO/WHO give three sets of recommendations, depending on the zinc bioavailability of the diet. The US Food and Nutrition Board figures fall between those given for moderate- and low-availability diets. Both groups also set upper limits for intake, based largely on the risk of impairing copper status. These values
are similar (40 mg for the US Food and Nutrition Board, 45 mg for FAO/WHO, for adults).

Table 3 Recommended intakes of zinc (mg day⁻¹)

Age group | US–Canadian recommended dietary allowance | FAO/WHO reference nutrient intake, high bioavailability | Moderate bioavailability | Low bioavailability
Children (1–3 years old) | 3 | 2.4 | 4.1 | 8.3
Adolescents (14–18 years old), female | 9 | 4.3 | 7.2 | 14.4
Adolescents (14–18 years old), male | 11 | 5.1 | 8.6 | 17.1
Adults (>19 years old), female | 8 | 3.0 | 4.9 | 9.8
Adults (>19 years old), male | 11 | 4.2 | 7.0 | 14.0
Pregnant women, third trimester | 11 | 6.0 | 10.0 | 20.0
Lactating women, 0–3 months | 12 | 5.8 | 9.5 | 19.0

See also: Antioxidants: Diet and Antioxidant Defense; Observational Studies; Intervention Studies. Bioavailability. Children: Nutritional Requirements. Cofactors: Inorganic. Copper. Cytokines. Immunity: Effects of Iron and Zinc. Inborn Errors of Metabolism: Classification and Biochemical Aspects. Nutrient–Gene Interactions: Molecular Aspects; Health Implications.

Further Reading

Andrews GK (2001) Cellular zinc sensors: MTF-1 regulation of gene expression. Biometals 14: 223–237.
Cousins RJ, Blanchard RK, Moore JB et al. (2003) Regulation of zinc metabolism and genomic outcomes. Journal of Nutrition 133(5S-1): 1521S–1526S.
FAO/WHO (2002) Human Vitamin and Mineral Requirements. Report of a joint FAO/WHO expert consultation, Bangkok, Thailand, pp. 257–270. Rome: Food and Nutrition Division of the Food and Agriculture Organization.
Gaither LA and Eide DJ (2001) Eukaryotic zinc transporters and their regulation. Biometals 14: 251–270.
Institute of Medicine (2001) Dietary reference intakes for vitamin A, vitamin K, arsenic, boron, chromium, copper, iodine, iron, manganese, molybdenum, nickel, silicon, vanadium and zinc, pp. 442–501. Washington, DC: National Academy Press.
MacDonald RS (2000) The role of zinc in growth and cell proliferation. Journal of Nutrition 130(5S): 1500S–1508S.
Maret W (2001) Zinc biochemistry, physiology, and homeostasis – recent insights and current trends. Biometals 14: 187–190.
Mills CF (ed.) (1989) Zinc in Human Biology. London: Springer-Verlag.
Prasad AS (1991) Discovery of human zinc deficiency and studies in an experimental human model. American Journal of Clinical Nutrition 53: 403–412.
Vallee BL and Falchuk KH (1993) The biochemical basis of zinc physiology. Physiological Reviews 73: 79–118.

Deficiency in Developing Countries, Intervention Studies
C Hotz, National Institute of Public Health, Morelos, Mexico
© 2005 Elsevier Ltd. All rights reserved.

Introduction

Knowledge of the occurrence of zinc deficiency and its importance to human health has increased greatly in recent years. Available evidence indicates that zinc deficiency is an important contributing factor to impaired growth and development, morbidity, and mortality among children in underprivileged settings. Presently, there are few estimates of the prevalence of zinc deficiency in developing countries based on dietary intake or biochemical indices. However, national level estimates of the adequacy of zinc in the food supply and the prevalence of childhood growth stunting can be used to inform on the relative risk of zinc deficiency among countries. National programs to improve zinc status through either supplementation or food fortification are just being initiated.

Recognition of Zinc Deficiency in Developing Countries

The recognition of zinc deficiency as an important contributor to the high rates of morbidity, mortality, and delayed growth and development among
children is relatively recent in contrast to the earlier recognition of the importance and widespread occurrence of deficiencies of iodine, vitamin A, and iron. Coordinated efforts to address vitamin A deficiency in less developed countries were formally initiated by the establishment of the International Vitamin A Consultative Group (IVACG) in 1975. In the mid-1980s, similar groups were founded for the control of iodine deficiency disorders (International Council for the Control of Iodine Deficiency Disorders; ICC/IDD) and iron deficiency (International Nutritional Anemias Consultative Group; INACG). It was not until the year 2000 that a similar group emerged, the International Zinc Nutrition Consultative Group (IZiNCG), to promote the control of zinc deficiency in more vulnerable populations. The detection of zinc deficiency in populations and the recognition of its association with health outcomes have been somewhat more challenging for zinc than for other nutrients, contributing to the delay in efforts to control it. The ability to diagnose zinc deficiency in individuals using biochemical measures is somewhat limited. For example, the concentration of zinc in serum or plasma may not diminish until the depletion of zinc is more advanced, making it less useful for diagnosing mild to moderate zinc deficiency states in individuals. Other possible biochemical indicators of zinc status have not been consistently demonstrated to reflect change in zinc status. These limitations may have subsequently dampened enthusiasm for evaluating zinc status at the population level. Furthermore, the health conditions that are clearly associated with zinc deficiency (e.g., childhood growth stunting, common childhood infections, and mortality; described in further detail below) are general in nature and have multiple causes. This is in contrast to the strong iconic association of iodine deficiency with goiter and cretinism, vitamin A deficiency with eye disorders and blindness, and iron deficiency with easily diagnosable anemia. The nonspecific nature of health outcomes associated with zinc deficiency is in concordance with the role of zinc in a wide variety of biological functions, covering all human physiological systems. Thus, the very nature of zinc metabolism and the ubiquity of zinc in biological functions at the molecular, cellular, and physiological levels has likely contributed to the difficulties and delays in recognizing the important contribution of zinc deficiency to impaired health and development. A brief history of the knowledge of zinc deficiency in developing countries is presented in Table 1.
Table 1 History of knowledge of zinc deficiency in developing countries

Year | Event or publication
1963 | Relationship between zinc deficiency and hypogonadal dwarfism noted in Egypt
1972 | The role of zinc deficiency in hypogonadal dwarfism described in Iran
1974 | Zinc demonstrated to increase linear growth, weight, and bone age in Iranian pubertal boys
1982 | Supplemental zinc demonstrated to increase linear growth in Chinese preschool children
1993 | Supplemental zinc during pregnancy demonstrated to increase birth weight and gestational age of infants in India
1996 | Supplemental zinc demonstrated to decrease the prevalence of diarrhea and pneumonia among malnourished children in Vietnam
1999 | Pooled analysis of randomized, controlled zinc supplementation trials indicates a significant positive effect of zinc on reducing the incidence of diarrhea and pneumonia
2000 | Establishment of the International Zinc Nutrition Consultative Group (IZiNCG)
2001 | Mortality reduced by zinc supplementation among low-birth-weight infants in India
2002 | Zinc supplementation recommended as adjunctive therapy for childhood diarrhea
2002 | Meta-analysis of randomized, controlled zinc supplementation trials indicates a modest but significant overall improvement in growth
Causes of Zinc Deficiency in Developing Countries

Although the etiology of zinc deficiency in developing countries has not been thoroughly studied, the main contributing factor is believed to be inadequate intake of zinc in bioavailable (i.e., available for absorption across the intestine) forms.

Inadequate Dietary Intake of Zinc
In general, the risk of inadequate intake of dietary zinc within a population may be associated with the nature of the food supply, and its content and relative bioavailability of zinc. Animal source foods, in particular shellfish, small whole fish, beef, and organ meats such as liver and kidney, are rich sources of zinc. Furthermore, the zinc contained in animal source foods is more highly bioavailable than that from plant source foods; the presence of certain amino acids (e.g., histidine, methionine), or perhaps other unidentified factors, may facilitate the intestinal absorption of zinc from animal flesh foods. Plant source foods, such as most fruits and vegetables including green leaves, and starchy roots and tubers, have relatively low zinc content. While whole grains and legumes have moderate to high
zinc content, these foods also contain large quantities of phytate (phytic acid or myo-inositol hexaphosphate), the most potent identified dietary inhibitor of zinc absorption. The zinc and phytate content, and the phytate:zinc molar ratio, of some foods are shown in Table 2. Plants synthesize phytate, which occurs in highest concentration in seeds and to a lesser extent in vegetative plant parts. Phytate forms chelates with zinc and other minerals; as this compound is largely undigested and is not absorbed, it carries the chelated portion of dietary zinc out of the intestine, thus reducing the amount of zinc available for absorption. The phytate:zinc molar ratio of the diet can be used to estimate the bioavailability of zinc. Populations with a heavy dietary reliance on unrefined cereals or legumes, complemented with only small amounts of zinc-rich animal source foods, will have lower intakes of bioavailable zinc.
Table 2 The content of zinc and phytate, and the phytate:zinc molar ratio in uncooked foods

Food                            Zinc (mg/100 g)   Phytate (mg/100 g)   Phytate:zinc molar ratio

Cereals
  Corn                                1.8                 800                    44
  Pasta                               0.7                 282                    40
  Rice (milled)                       1.1                 352                    32
  Wheat or wholewheat bread           2.9                 845                    29
  White bread                         0.9                  30                     3

Nuts and legumes
  Lentils/mung beans                  1.3                 358                    27
  Peanuts                             3.3                1760                    53
  Peas                                2.9                1154                    39
  Red beans                           2.9                1629                    56

Roots and tubers
  Cassava                             0.3                  54                    18
  Potato                              0.3                  81                    27
  Sweet potato                        0.5                  50                    10

Vegetables
  Cabbage                             0.1                   0                     –
  Green leaves                        0.2                  42                    21
  Onion                               0.2                   0                     –
  Tomato                              0.1                   6                     6

Fruits
  Banana                              0.2                   0                     –
  Coconut                             1.1                 324                    29
  Orange                              0.1                   0                     –
  Mango                               0.0                  20                     –

Animal source foods
  Beef                                3.0                   0                     –
  Chicken                             1.3                   0                     –
  Eggs                                1.1                   0                     –
  Fish                                0.5                   0                     –
  Milk                                0.4                   0                     –
  Pork                                1.9                   0                     –
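The phytate:zinc molar ratios in Table 2 follow directly from the mass concentrations, by converting each to moles (molar mass of phytic acid approximately 660 g/mol; of zinc approximately 65.4 g/mol). The short sketch below is a minimal illustration of that conversion; the function name and rounding are ours, not part of the source.

```python
# Illustrative calculation of the phytate:zinc molar ratio reported in Table 2.
# Molar masses: phytic acid (myo-inositol hexaphosphate) ~660 g/mol; zinc ~65.4 g/mol.
PHYTATE_MOLAR_MASS = 660.0  # g/mol
ZINC_MOLAR_MASS = 65.4      # g/mol

def phytate_zinc_molar_ratio(phytate_mg_per_100g: float, zinc_mg_per_100g: float) -> float:
    """Moles of phytate per mole of zinc in the same 100 g portion of food."""
    phytate_mmol = phytate_mg_per_100g / PHYTATE_MOLAR_MASS  # mg / (g/mol) = mmol
    zinc_mmol = zinc_mg_per_100g / ZINC_MOLAR_MASS
    return phytate_mmol / zinc_mmol

# Corn from Table 2: 800 mg phytate and 1.8 mg zinc per 100 g gives a ratio of about 44.
print(round(phytate_zinc_molar_ratio(800, 1.8)))
```

Ratios above about 15 are generally taken to indicate poor zinc bioavailability, which is why diets dominated by unrefined cereals and legumes are of particular concern.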
Although milling cereal grains removes large amounts of phytate, it also removes large amounts of zinc. Thus, among populations with a heavy dietary reliance on refined cereals (e.g., rice) or starchy roots and tubers (e.g., potatoes, cassava) with only small amounts of zinc-rich animal foods, the total intake of dietary zinc will be low. In either case, low food intakes due to food insecurity will exacerbate the risk of not meeting daily physiological requirements for absorbed zinc.

Other Causes of Zinc Deficiency
There are a few other commonly occurring conditions in developing country settings that may contribute to zinc deficiency. Diarrhea may not only reduce the absorption of dietary zinc during the episode, owing to more rapid intestinal transit, but may also increase the loss of body zinc. Under normal physiological conditions, zinc is secreted into the intestine in large quantities together with digestive juices but is then largely reabsorbed; diarrhea may interfere with the reabsorption of this zinc. Given the important role of the intestine in regulating dietary zinc absorption, and in the secretion and reabsorption of body zinc during digestion, conditions that affect the health or integrity of the intestine, such as tropical enteropathy, could interfere with the adequate maintenance of zinc balance. The contribution of these conditions to zinc deficiency in developing countries requires investigation.
Prevalence of Zinc Deficiency in Developing Countries: Available Evidence

Relatively little information on population zinc status has been collected at the national or subnational level in developing countries. Thus, only very limited estimates of the prevalence of zinc deficiency, based on the proportion of the population with low serum zinc concentrations or inadequate dietary zinc intakes, are available. Estimates of the magnitude of the risk of zinc deficiency in a population have therefore been derived from more indirect indicators, such as the:
- adequacy of zinc in the national food supply;
- national prevalence of childhood growth stunting; and
- occurrence of a positive response of health conditions to supplemental zinc, as determined by randomized, controlled zinc supplementation trials.

Adequacy of Zinc in the National Food Supply
Table 3 Adequacy of dietary zinc in the food supply in major developing country regions, as compared to North America

Region                         Population   Zinc             Phytate:zinc   Zinc from animal    Estimated population at risk of
                               (millions)   (mg/caput/day)   molar ratio    source foods (%)    inadequate zinc intake (%)
North America                      305          12.5              11               61                      10
China                             1256          12.4              16               37                      14
Latin America and Caribbean        498          10.3              20               42                      25
South Asia                        1297          10.8              26               11                      27
Southeast Asia                     504           9.2              24               21                      33
Sub-Saharan Africa                 581           9.4              26               15                      28
Adapted with permission from International Zinc Nutrition Consultative Group (Brown KH, Rivera JA, Bhutta Z, Gibson RS, King JC, Ruel M, Sandström B, Wasantwisut E, Hotz C, Lönnerdal B, Lopez de Romaña D, and Peerson J) (2004) Assessment of the risk of zinc deficiency in populations and options for its control. Food and Nutrition Bulletin 25(suppl 2): S91–S202 (table on p. S135).
As described above, the nature of the food supply provides some information on the likelihood of inadequate dietary zinc within a population. Information compiled by the United Nations' Food and Agriculture Organization has been used to estimate the potential risk of inadequate zinc in the food supply for a large number of countries. This estimate uses country-level data on the per capita amounts of 95 different food commodities available for human consumption, together with estimates of the zinc content and phytate:zinc molar ratio of these foods, to calculate the per capita amount of bioavailable zinc in the food supply. The per capita amount of bioavailable zinc is then compared with the physiological requirement for absorbed zinc, weighted for the demographic distribution of the population. The theoretical proportion of the population at risk of inadequate dietary zinc is used to estimate the relative risk of zinc deficiency at the national level; for example, countries with 25% or more of the population at risk of inadequate dietary zinc are considered to be at elevated risk. This information is limited in that it represents the national average situation and cannot identify subnational populations that may be at elevated risk. In the absence of more direct measures of zinc status, however, such estimates can justify conducting population surveys that measure the risk of zinc deficiency more directly. Estimates of the proportion of the population at risk of inadequate dietary zinc based on food supply data have been calculated for 176 countries; a summary of the tabulations by developing country region is given in Table 3, compared with those for North America. Overall, these estimates suggest that about 20% of the world's population is at risk of inadequate dietary zinc intake.
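To make the logic of this food-supply approach concrete, the sketch below estimates the theoretical proportion of a population at risk of inadequate zinc intake from the per capita zinc supply and the dietary phytate:zinc molar ratio. It is a simplified illustration only, not the FAO/IZiNCG algorithm itself: the absorption fractions, the assumed mean requirement for absorbed zinc (2 mg/day), the normal-distribution assumption, and the 25% coefficient of variation are all placeholders chosen for the example.

```python
# A simplified sketch of the food-supply approach described above; NOT the exact
# FAO/IZiNCG algorithm. The absorption fractions, the assumed mean requirement for
# absorbed zinc, the normality assumption, and the 25% CV are illustrative placeholders.
from statistics import NormalDist

def fraction_at_risk(zinc_supply_mg_per_day: float, phytate_zinc_ratio: float,
                     mean_requirement_mg: float = 2.0, cv: float = 0.25) -> float:
    """Theoretical proportion of the population whose absorbable zinc intake falls
    below the mean physiological requirement for absorbed zinc."""
    # Assumed fractional absorption by phytate:zinc molar ratio category.
    if phytate_zinc_ratio < 5:
        absorption = 0.45
    elif phytate_zinc_ratio <= 15:
        absorption = 0.30
    else:
        absorption = 0.20
    mean_absorbed = zinc_supply_mg_per_day * absorption
    # Assume absorbable intakes vary roughly normally across individuals.
    intakes = NormalDist(mu=mean_absorbed, sigma=cv * mean_absorbed)
    return intakes.cdf(mean_requirement_mg)

# Example with two rows of Table 3 (per capita zinc supply and phytate:zinc ratio).
for region, supply, ratio in [("North America", 12.5, 11), ("South Asia", 10.8, 26)]:
    print(f"{region}: {100 * fraction_at_risk(supply, ratio):.0f}% at risk (illustrative)")
```

With country-specific absorption models and demographically weighted requirements, this same comparison underlies the national risk estimates summarized in Table 3.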
National Prevalence of Childhood Growth Stunting

Zinc deficiency is a common limiting factor to adequate child growth in developing country settings. A meta-analysis of 25 studies using a randomized, placebo-controlled design, which measured change in linear growth of children following zinc supplementation for at least 2 months, indicated that supplemental zinc had an overall positive effect on linear growth. This meta-analysis also demonstrated that a low group mean height-for-age index (i.e., 1.58 SD below the reference median for height-for-age) predicts an improvement in linear growth in response to supplemental zinc. A high prevalence of childhood growth stunting in a population therefore indicates an elevated risk of zinc deficiency. The World Health Organization suggests that when the prevalence of children with height-for-age more than 2 SD below the reference median is 20% or higher, childhood growth stunting should be considered a problem of public health concern; this prevalence may likewise be indicative of an elevated risk of zinc deficiency. The World Health Organization maintains a global database on the prevalence of low height-for-age at the national and subnational level for a large number of countries.
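As a concrete illustration of this screening rule, the sketch below computes the prevalence of stunting from survey height-for-age z-scores and flags a population as being at elevated risk. The 20% cut-off follows the WHO guidance cited above; the survey values and function names are hypothetical.

```python
# Sketch of the stunting-based screening rule described above. The height-for-age
# z-scores below are invented for illustration; the 20% cut-off follows the WHO
# guidance cited in the text.
def stunting_prevalence(haz_scores):
    """Proportion of children more than 2 SD below the reference median height-for-age."""
    return sum(z < -2.0 for z in haz_scores) / len(haz_scores)

def elevated_zinc_deficiency_risk(haz_scores, threshold=0.20):
    """Treat a stunting prevalence at or above the threshold as a proxy for an
    elevated population risk of zinc deficiency."""
    return stunting_prevalence(haz_scores) >= threshold

survey_haz = [-2.6, -1.1, -2.3, 0.4, -1.8, -2.9, -0.7, -2.2, -1.5, -0.2]
print(f"Stunting prevalence: {stunting_prevalence(survey_haz):.0%}")
print("Elevated risk of zinc deficiency (by this proxy):",
      elevated_zinc_deficiency_risk(survey_haz))
```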
Occurrence of a Positive Response of Health Conditions to Supplemental Zinc

Suggestive evidence for the widespread occurrence of zinc deficiency in developing regions is derived from the large number of countries, across a wide geographical range, in which positive health changes were observed in response to supplemental zinc. The health conditions that have been positively affected by supplemental zinc, as demonstrated through randomized, controlled, community-based zinc supplementation trials, and the locations of these studies, are described in detail in the following section.

Consequences of Zinc Deficiency in Developing Countries: Evidence Derived from Zinc Supplementation Trials

In the context of developing country settings, present knowledge of the health consequences of zinc deficiency has been derived almost entirely from community-based trials of zinc supplementation among populations at possible risk of zinc deficiency. In these trials, individuals in the study population are randomly allocated to receive either a zinc supplement, usually in the form of tablets or syrup, or the same supplement formulation without zinc (i.e., a placebo). The condition under study is then monitored for a given period (typically 2 months to one year), and the occurrence of or change in the condition is compared between the zinc-supplemented group and the corresponding control group. Given that several other nutritional and environmental factors can influence the health conditions hypothesized to occur with zinc deficiency, such studies have been essential in demonstrating unequivocally the causal role of zinc deficiency in these conditions among human populations. The following section provides an overview of the population groups at elevated risk of zinc deficiency, and the health consequences associated with zinc deficiency, as concluded from these studies.
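For illustration, the sketch below shows the kind of between-group comparison such trials rely on, using a diarrhea-incidence outcome. All follow-up totals, rates, and function names are invented for the example and are not taken from any specific study.

```python
# Hypothetical illustration of the between-group comparison used in such trials,
# here for a diarrhea-incidence outcome. All follow-up totals are invented and do
# not come from any specific study.
def episodes_per_child_year(total_episodes: int, total_child_days: float) -> float:
    return total_episodes / (total_child_days / 365.25)

zinc_rate = episodes_per_child_year(total_episodes=480, total_child_days=91_000)
placebo_rate = episodes_per_child_year(total_episodes=630, total_child_days=90_000)

print(f"Zinc group: {zinc_rate:.2f} episodes per child-year")
print(f"Placebo group: {placebo_rate:.2f} episodes per child-year")
print(f"Rate ratio (zinc vs placebo): {zinc_rate / placebo_rate:.2f}")
```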
Groups at Elevated Risk of Zinc Deficiency
In accordance with age and physiological status, some population groups have increased daily physiological requirements for absorbed zinc. The incorporation of zinc into newly synthesized tissues, as occurs during growth and pregnancy, and the secretion of zinc in breast milk during lactation require that relatively large amounts of zinc be absorbed daily. These increased needs add to the challenge of acquiring sufficient amounts of absorbable zinc from the food supply. The groups with higher zinc requirements, and thus at elevated risk of zinc deficiency, include:
- infants (particularly those born prematurely);
- young children;
- children recovering from severe malnutrition;
- adolescents; and
- pregnant and lactating women.
At least some evidence exists for the occurrence of zinc deficiency among each of these groups in developing country settings. The elderly may also be at elevated risk of zinc deficiency, owing to a decline in the adequacy of zinc intakes and possibly a reduction in the absorption of dietary zinc. However, evidence for zinc deficiency among the elderly has thus far been derived only from industrialized countries; elderly populations have not been studied with respect to zinc deficiency in developing countries.
Growth and Development of Children
Many children in developing country settings experience poor growth in comparison with relatively healthy children from more developed countries. The prevalence of low height-for-age and weight-for-age indices among children under 5 years of age is used as an indicator of poor living conditions, to which poor diet, poor environmental and social conditions, and higher exposure to infectious diseases contribute. Similar conditions can result in impaired neurobehavioral development and cognitive function, putting children in developing countries at a further disadvantage. Evidence exists for a specific role of zinc in both of these aspects of child development. Table 4 provides a summary of countries in which improved growth or development in response to supplemental zinc has been clearly demonstrated.
Growth

Zinc plays an important role in child growth. Several mechanisms may be involved, including the role of zinc in the transcription and translation of genetic material and, perhaps more importantly, its regulatory role in the primary endocrine system controlling growth (i.e., the growth hormone–somatomedin axis). Specifically, zinc status is associated with the concentration of circulating insulin-like growth factor 1, the principal growth factor controlling early childhood growth. Among populations in which growth retardation occurs, both height and weight gain have improved following supplemental zinc. Stimulation of linear growth appears to be the primary response, while the increase in body weight likely reflects the synthesis of lean tissue, such as bone, cartilage, and muscle, associated with linear growth; in general, weight does not increase independently of height in response to supplemental zinc. The magnitude of the improvement in linear growth in response to supplemental zinc is, not surprisingly, greater among children experiencing growth retardation (or 'stunting', defined as height-for-age more than 2 SD below the median of international reference data). Zinc deficiency has been demonstrated to be an important limiting factor to the growth of children across a wide range of geographical settings in developing regions (Table 4). It should be noted that not all studies have demonstrated a significant, positive effect of zinc on growth. Possible explanations include a low prevalence or severity of growth stunting in the study communities, adequate zinc status, or coexisting deficiencies of other growth-limiting nutrients that prevented a positive effect of zinc on growth.
Table 4 Countries from developing regions with documented evidence of improved growth or development in response to supplemental zinc

Eastern Mediterranean: Iran
Latin America and Caribbean: Belize, Brazil, Chile, Guatemala, Jamaica
South and Southeast Asia: Bangladesh, China, India, Japan, Vietnam
Sub-Saharan Africa: Ethiopia, Uganda

Population groups in which responses were documented include pubertal boys; preschool children; low-birth-weight infants; severely malnourished infants and preschool children; infants with low serum zinc concentration; growth-stunted infants and preschool children; preadolescent and adolescent children (in some studies, boys only); and preschool and school-aged children. The development outcomes improved by supplemental zinc include height, length, and length gain; weight and weight gain; bone age; lean body mass and lean tissue synthesis; mid upper arm circumference; heel-to-knee height (improved only when zinc was administered simultaneously with other micronutrients); neuropsychological performance; and physical activity level.
The latter situation may also explain the observation in some studies of a transient effect of zinc on growth.

Low-birth-weight infants (