
The Toxicology of Essential and Nonessential Metals

Nichole Coleman, PhD

Copyright © 2017 Nichole Coleman, PhD. All rights reserved.

No part of this book may be reproduced, stored, or transmitted by any means—whether auditory, graphic, mechanical, or electronic—without written permission of the author, except in the case of brief excerpts used in critical articles and reviews. Unauthorized reproduction of any part of this work is illegal and is punishable by law.

This book is a work of non-fiction. Unless otherwise noted, the author and the publisher make no explicit guarantees as to the accuracy of the information contained in this book, and in some cases, names of people and places have been altered to protect their privacy.

ISBN: 978-1-4834-6908-9 (sc)
ISBN: 978-1-4834-6907-2 (e)

Library of Congress Control Number: 2017906279

Because of the dynamic nature of the Internet, any web addresses or links contained in this book may have changed since publication and may no longer be valid. The views expressed in this work are solely those of the author and do not necessarily reflect the views of the publisher, and the publisher hereby disclaims any responsibility for them. Any people depicted in stock imagery provided by Thinkstock are models, and such images are being used for illustrative purposes only. Certain stock imagery © Thinkstock.

  Lulu Publishing Services rev. date: 5/4/2017

Contributing Authors

Nichole Coleman, PhD (Chemistry), San Francisco State University, San Francisco, California
Christopher Blaine (Neurobiology, Physiology, and Behavior), University of California–Davis, Davis, California
Christopher Bivins (Botany), San Francisco State University, San Francisco, California
Bhumil Patel (Molecular, Cellular, and Developmental Biology), UC–Santa Cruz, Santa Cruz, California
Emmanuel Romero (Biology), San Francisco State University, San Francisco, California
Paul Valencia Scattolin (Biological Sciences), California State University–East Bay, Hayward, California
Alexandria Lufting (Biology), San Francisco State University, San Francisco, California
Aisha Castrejon (Biology), Sacramento State University, Sacramento, California
Lyia Yang, PhD (Plant Biology), University of California–Davis, Davis, California
Harold Galvez (Microbial Biology), University of California–Berkeley, Berkeley, California
John Mateo (Microbiology), San Francisco State University, San Francisco, California
Brittany de Lara (Biological Sciences), University of California–Irvine, Irvine, California
Hazel T. Salunga (Biology), San Diego State University, San Diego, California
Tojo Chemmachel (Kinesiology), San Francisco State University, San Francisco, California
Kathryn Ma (Evolution and Ecology), University of California–Santa Cruz, Santa Cruz, California
Meryl De La Cruz (Biochemistry), University of the Pacific, Stockton, California
Henry Chen (Public Health), University of California–Berkeley, Berkeley, California
Hayley Moore (Political Economics), University of California–Berkeley, Berkeley, California
April Hishinuma (Cognitive Science), University of California–Berkeley, Berkeley, California
Chia-In Jane Lin (Biology), University of California–Santa Barbara, Santa Barbara, California
Christopher Taing (Biological Science), University of the Pacific, Stockton, California

Contents

Preface

Part 1: Introduction to Toxic Metals
Chapter 1   Introduction to Toxic Metals
Chapter 2   Factors Influencing the Toxicity of Metals
Chapter 3   Phytoremediation
Chapter 4   Electrolytes
Chapter 5   Magnesium Silicates

Part 2: Essential Trace Metals with the Potential for Toxicity
Chapter 6   Iron
Chapter 7   Cobalt
Chapter 8   Copper
Chapter 9   Zinc Toxicology

Part 3: Metals Used in Medicine with a Potential for Toxicity
Chapter 10  Lithium
Chapter 11  Aluminum
Chapter 12  Platinum
Chapter 13  Titanium
Chapter 14  Gold
Chapter 15  Tellurium

Part 4: Toxic Metals
Chapter 16  Arsenic
Chapter 17  Beryllium
Chapter 18  Cadmium
Chapter 19  Lead
Chapter 20  Mercury
Chapter 21  Thorium

Reference

Preface
Nichole Coleman, PhD, Chia-In Jane Lin, and Christopher Taing

What does "the poison of kings" have to do with the theatrical play Arsenic and Old Lace? The play, written by Joseph Kesselring, was inspired by a true story of Amy Archer-Gilligan, a woman who had killed anywhere from twenty to one hundred people with arsenic-tainted food and drink. With her second husband, Archer-Gilligan owned a nursing home called the Archer Home for Aged People. She was suspected of killing both of her husbands and many residents of the nursing home. Police found large quantities of arsenic, which Archer-Gilligan said was used to kill the undesired rodent residents of the home. She was put on trial and found guilty, yet she pleaded insanity and was granted a life sentence. In prison, she was eventually admitted to a mental hospital, where she remained until her death.

The blades we use, the pots and pans we cook with, and face paint and children's makeup contain numerous toxic metals. In makeup alone, there are as many as four types of metals. Costume paint contains cadmium and lead, and most of the toxic metals in paint are found in dark pigments. Sadly, cosmetics are the least regulated consumer product on the market, and these toxic ingredients do not have to be labeled on the products.

All toxic substances are harmful to human health, but metals are different from other synthetic toxicants because they can be neither created nor destroyed, and thus they are nonbiodegradable. Today, toxic heavy metals are found in cigarettes and gourmet foods such as fish, chocolate, and food seasonings. Irrespective of how carefully metals are used in industrial processes, some level of human exposure is unavoidable. This indestructibility, coupled with toxic bioaccumulation, contributes to the high concern for metals in our environment.

In this book, we feature twenty-one chapters on various aspects and characteristics of metal dependency (metals that are essential for human health) and toxicity (metals that are nonessential for human health). The chapters review a mixture of original research articles, perspectives, and reviews of Internet sources that underscore the dangers of some consumer products. Each chapter deals with the environmental exposure, clinical presentation, and occupational exposures for a variety of different metals, including cobalt, copper, iron, arsenic, cadmium, zinc, and platinum, to name a few. We also devote an entire section to essential metals that can be a potential danger if taken without care. The authors also discuss a number of different types of toxicity, including contact dermatitis, neurotoxicity, and chemical carcinogenesis.

The overall focus of this book is to bring awareness to the exposures of metal toxicity and to make the reader aware of some medical advances concerning potentially toxic heavy metals. The conclusion of each chapter underscores the fact that metal toxicity is a complex problem and that we have much more to learn about the epidemiology, exposures, dose-response relationships, mechanisms of toxicity, and occupational factors altering human sensitivity to metal toxins. At the end of each chapter, the bibliography lists all of the sources from which the chapter was written.

Although you will need some experience with basic chemistry and biology to understand the processes by which these metals can affect your physiology, it is our sincerest hope that this compilation of metal toxicological studies will not only introduce and familiarize the reader with the topics herein but also serve as a useful and valuable reference for those who are not familiar with studies in metal toxicology. We also hope that it will serve as a provocation to those in the field, as well as others, to develop, produce, and generate new methodologies to investigate the mechanisms by which metals serve as toxicants and to develop approaches that can be used to protect against their toxicities.

TABLE OF THE ELEMENTS AND THEIR SYMBOLS

Element          Symbol   Atomic Number   Atomic Mass
Actinium         Ac       89              (227)
Aluminum         Al       13              26.9815
Americium        Am       95              (243)
Antimony         Sb       51              121.75
Argon            Ar       18              39.948
Arsenic          As       33              74.9216
Astatine         At       85              (210)
Barium           Ba       56              137.34
Berkelium        Bk       97              (249)
Beryllium        Be       4               9.01218
Bismuth          Bi       83              208.9806
Bohrium          Bh       107             (262)
Boron            B        5               10.81
Bromine          Br       35              79.904
Cadmium          Cd       48              112.40
Calcium          Ca       20              40.08
Californium      Cf       98              (251)
Carbon           C        6               12.011
Cerium           Ce       58              140.12
Cesium           Cs       55              132.9055
Chlorine         Cl       17              35.453
Chromium         Cr       24              51.996
Cobalt           Co       27              58.9332
Copper           Cu       29              63.546
Curium           Cm       96              (247)
Dubnium          Db       105             (260)
Dysprosium       Dy       66              162.50
Einsteinium      Es       99              (254)
Erbium           Er       68              167.26
Fluorine         F        9               18.9984
Francium         Fr       87              (223)
Gadolinium       Gd       64              157.25
Gallium          Ga       31              69.72
Germanium        Ge       32              72.59
Gold             Au       79              196.9665
Hafnium          Hf       72              178.49
Hassium          Hs       108             (265)
Helium           He       2               4.00260
Holmium          Ho       67              164.9303
Hydrogen         H        1               1.0080
Indium           In       49              114.82
Iodine           I        53              126.9045
Iron             Fe       26              55.847
Krypton          Kr       36              83.80
Lanthanum        La       57              138.9055
Lawrencium       Lr       103             (257)
Lead             Pb       82              207.2
Lithium          Li       3               6.941
Magnesium        Mg       12              24.305
Manganese        Mn       25              54.9380
Meitnerium       Mt       109             (266)
Mendelevium      Md       101             (256)
Mercury          Hg       80              200.59
Molybdenum       Mo       42              95.94
Neodymium        Nd       60              144.24
Neon             Ne       10              20.179
Neptunium        Np       93              237.0482
Nickel           Ni       28              58.71
Niobium          Nb       41              92.9064
Nitrogen         N        7               14.0067
Nobelium         No       102             (254)
Osmium           Os       76              190.2
Oxygen           O        8               15.9994
Palladium        Pd       46              106.4
Phosphorus       P        15              30.9738
Platinum         Pt       78              195.09
Plutonium        Pu       94              (242)
Polonium         Po       84              (210)
Potassium        K        19              39.102
Praseodymium     Pr       59              140.9077
Promethium       Pm       61              (145)
Protactinium     Pa       91              231.0359
Radium           Ra       88              226.0254
Radon            Rn       86              (222)
Rhenium          Re       75              186.2
Rhodium          Rh       45              102.9055
Rubidium         Rb       37              85.4678
Ruthenium        Ru       44              101.07
Rutherfordium    Rf       104             (257)
Seaborgium       Sg       106             (263)
Selenium         Se       34              78.96
Silicon          Si       14              28.086
Silver           Ag       47              107.868
Sodium           Na       11              22.9898
Strontium        Sr       38              87.62
Sulfur           S        16              32.06
Tantalum         Ta       73              180.9479
Technetium       Tc       43              98.9062
Tellurium        Te       52              127.60
Terbium          Tb       65              158.9254
Thallium         Tl       81              204.37
Thorium          Th       90              232.0381
Tin              Sn       50              118.69
Titanium         Ti       22              47.90
Tungsten         W        74              183.85
Uranium          U        92              238.029
Vanadium         V        23              50.9414
Xenon            Xe       54              131.30
Ytterbium        Yb       70              173.04
Zinc             Zn       30              65.37
Zirconium        Zr       40              91.22

PART 1

INTRODUCTION TO TOXIC METALS
Nichole Coleman, PhD, Christopher Blaine, Christopher Bivins, and Bhumil Patel

Household compact fluorescent bulbs contain mercury vapor.

Chapter 1  Introduction to Toxic Metals
Chapter 2  Factors Influencing the Toxicity of Metals
Chapter 3  Phytoremediation
Chapter 4  Electrolytes
Chapter 5  Magnesium Silicates

CHAPTER 1

INTRODUCTION TO TOXIC METALS
Nichole Coleman, PhD

Life is chemistry. Everything we are—how and what we think, how we move—is all chemistry. When we examine the human body, we often use an organizational hierarchical approach, which starts with DNA, the supreme macromolecule that gives instructions to orchestrate cellular function. Cells organize into tissues, tissues organize into organs, and all of these components yield you and me, human beings. DNA is our blueprint, but in all truth, the hierarchy starts at an even more elemental level: with the atoms that make up our DNA.

Nearly one-half of the elements in the periodic table are found in the human body. The elements of the periodic table can be either detrimental or beneficial to human health. Indeed, many elements are essential to the proper functioning of the human body. However, many nonessential elements can cause harm when we are exposed to them via our environment. Out of more than one hundred metallic elements, roughly twenty-eight can be found in our blood, depending on exposure. As the classic toxicological adage states, the dose makes the poison: all metals can cause disease if absorbed in excess or if the body has a deficiency. This holds true for both essential and nonessential metals.

Metals as Toxicants

Metals are defined by the physical properties of the element, and these properties—luster, electrical and thermal conductivity, mechanical ductility, and strength—characterize each metal. Metals in biological systems lose one or more electrons to form cations through an oxidation/reduction mechanism. Metals have been recognized as toxins for centuries. The oxidized forms of metals interact with metalloproteins and ion channels through mimicry, disrupting physiological processes. Heavy metals are elements having atomic weights between 63.5 and 200.6 and a specific gravity or density greater than 5.0. Chief among them are lead, cadmium, and arsenic (table 1-1). Toxic metals used by the ancients—namely, arsenic, copper, lead, and antimony—date back to 300 BC.

Table 1-1: Nonessential Trace Elements

Arsenic
  Health effects (acute): diarrhea, anemia, encephalopathy, kidney failure, hepatitis
  Health effects (chronic): hyperpigmentation, alopecia, cirrhosis, tremors, gangrene, cancer
  Targets: gastrointestinal tract, bone, cardiovascular system, central nervous system (CNS), kidneys, and liver

Cadmium
  Health effects: respiratory distress, renal dysfunction
  Targets: liver, bone, immune and nervous systems

Lead
  Health effects: IQ declines, clumsiness, gait abnormalities, headache, behavioral changes, seizures, abdominal pain, constipation, colic, nephropathy, anemia
  Targets: CNS, gastrointestinal tract, and kidneys

Mercury
  Health effects: dysfunction and inactivation of proteins with sulfhydryl groups, headaches, tremors, impaired coordination, abdominal cramps, diarrhea, dermatitis, polyneuropathy, proteinuria, hepatic dysfunction
  Targets: CNS, gastrointestinal tract, immune system, lungs, liver, skin, and kidneys

Aluminum
  Health effects: encephalopathy
  Targets: bones and lungs

The preferred way to overthrow royalty during the Renaissance era was via poisoning with arsenic. Neurotoxic effects, tremors, irritability, and erethism characterize mercury poisoning. The phrase "mad as a hatter" originated with the hat makers of Europe who were exposed to mercury. In 1956, the Bay of Minamata in Japan was found to contain large quantities of industrial mercury waste. More than three thousand people were affected by Minamata disease, which is now known as methyl mercury poisoning (1). More recently, in 2007, the groundwater of Bangladesh was found to exceed the World Health Organization (WHO) safety limits for arsenic, and the leaching of arsenic from the bedrock presented a serious health risk to the population living in that region (2). Ayurvedic medicines originated in India more than two thousand years ago and rely heavily on herbal medicinal products. A large percentage of India's population uses Ayurveda by way of Ayurvedic practitioners.

As early as the nineteenth century, plants were identified that were capable of accumulating uncommonly high zinc levels and hyperaccumulating up to 1 percent nickel in their shoots. Following the identification of these and other hyperaccumulating species, a great deal of research has been conducted to elucidate the physiology and biochemistry of metal hyperaccumulation in plants (3).

Metals are different from other toxicants because they can be neither created nor destroyed, and thus they are nonbiodegradable. Today, toxic heavy metals are found in cigarettes and gourmet foods such as fish, chocolate, and food seasonings. Irrespective of how carefully metals are used in industrial processes, some level of human exposure is unavoidable. This indestructibility, coupled with toxic bioaccumulation, contributes to the high concern for metals in our environment.
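The working definition quoted above (an atomic weight between 63.5 and 200.6 and a specific gravity greater than 5.0) can be applied mechanically. The short Python sketch below is only an illustration of that rule; the density figures are approximate reference values supplied here for the example and are not taken from this book's tables.

    # Illustrative sketch: apply the heavy-metal criteria quoted in this
    # chapter (atomic weight between 63.5 and 200.6, density above 5.0).
    # The density values below are approximate and added only for the example.
    ELEMENTS = {
        # name: (atomic weight, approximate density in g/cm^3)
        "aluminum": (26.98, 2.70),
        "zinc": (65.37, 7.13),
        "arsenic": (74.92, 5.73),
        "cadmium": (112.40, 8.65),
        "mercury": (200.59, 13.53),
    }

    def is_heavy_metal(name: str) -> bool:
        weight, density = ELEMENTS[name]
        return 63.5 <= weight <= 200.6 and density > 5.0

    for name in ELEMENTS:
        print(name, is_heavy_metal(name))
    # aluminum is excluded (too light and not dense enough); the rest qualify.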

Heavy Metals in the Environment

Metals, both toxic and biologically relevant, are redistributed naturally in the environment by a combination of geological and biological cycles. Rainwater dissolves rocks, ores, and colloidal minerals and transports them to rivers and groundwater. Biological cycles transport metals via biomagnification by plants and animals, which results in the incorporation of toxic metals into our food supplies. Industrial processes also greatly enhance metal distribution in the biosphere by discharge of highly toxic waste into the soil, water, and air. Reports of metal intoxication are common in plants, fish, birds, and animals near some industrial plants. After the 1986 disaster at Chernobyl, the wildlife in the region surrounding the nuclear power plant had high levels of radioactive material throughout their bodies, and this has resulted in morphological, physiological, and genetic disorders in every animal species that has been studied (4). These populations exhibit a wide variety of morphological deformities not found in other populations. Exposure to toxic metals is not solely due to industrial processes, however; chronic arsenic poisoning from drinking water occurs in many parts of the world. Also, endemic intoxication from excess thallium can occur when natural environmental sources are elevated (5).

Environmental Remediation

Phytoremediation

Plants have the ability to absorb contaminants, including oil, metals, pesticides, and explosives that are trapped in the soil. Phytoremediation is a process that uses plants to clean up environments contaminated by metals. These types of plants work best where contaminant levels are low, because areas with large metal content will impede plant growth and can make cleanup efforts a lengthy process (table 1-2). Phytoremediation can also be used to contain contaminated areas and lessen the impact of metal transfer by rain or wind to unaffected areas (6).

Table 1-2: Common Plants Used for Phytoremediation

Metal                     Plant
Arsenic                   Sunflower
Cadmium                   Willow, alpine pennycress
Lead                      Indian mustard, ragweed, hemp dogbane
Sodium                    Barley, sugar beets
Cesium-137                Sunflower
Strontium-90              Sunflower
Mercury and selenium      Transgenic planting methods using bacterial enzymes
Zinc                      Willow, alpine pennycress
Copper                    Willow

Wastewater Heavy Metal Contamination

With the rapid development of industries such as metal-plating facilities, mining operations, fertilizer industries, tanneries, battery factories, paper industries, pesticide factories, and so forth, heavy metal wastewater is increasingly discharged (directly or indirectly) into the environment, especially in developing countries. Unlike organic contaminants, heavy metals are not biodegradable and tend to accumulate in living organisms, and many heavy metal ions are known to be toxic or carcinogenic. Toxic heavy metals of particular concern in the treatment of industrial wastewaters are zinc, copper, nickel, mercury, cadmium, lead, and chromium (7).

Chemical Mechanism of Metal Toxicity

Heavy metals disrupt metabolic functions in two ways: (1) they accumulate and thereby disrupt function in vital organs and glands such as the heart, brain, kidneys, bone, liver, and so forth; and (2) they displace vital nutritional minerals from their original place, thereby hindering their biological function. However, it is impossible to live in an environment free of heavy metals, because the chief routes of exposure are consumption of food and beverages, inhalation of air, and skin contact, often as an occupational hazard. Chemically, metals in their oxidized state can be very reactive with biological systems in a variety of ways; however, the inhibition of critical enzymes and interaction with ion channels are the chief molecular mechanisms of metal toxicology. A metal can show more specific forms of chemical attack through mimicry, blocking the biological recognition site of the essential metal. An example of mimicry is the mechanism of toxicity of cadmium, copper, and nickel, which displace zinc from its biological binding site. Many metals can act as a catalytic center for redox reactions with endogenous oxidants, producing oxidative modification of biomolecules such as proteins or DNA and thereby causing an array of aberrant gene expression. This modification by way of oxidation is considered a key step in the carcinogenicity of certain metals, such as chromium and cadmium (8).

Diagnosing Metal Toxicity

Patients with idiopathic renal disease, bilateral peripheral neuropathy, unexplained changes in mood and mental function, or skin inflammation should be asked whether they have an occupational or any other type of exposure to heavy metals. Otherwise, confirming a diagnosis of metal toxicity is very challenging, because the signs and symptoms are analogous to those of other diseases. Establishing metal toxicity requires demonstration of the following features: 1) a source of metal exposure, 2) symptoms of exposure, and 3) elevated metal concentrations in the appropriate test specimen (plasma or urine). If any one of these features is absent, metal toxicity cannot be diagnosed.

Physiologically Important Elemental Species

It is astounding to realize that almost half of the elements in the periodic table have been found in the human body. Some of these are necessary for the myriad of biochemical processes that take place in the human body. Metals that are regarded as essential for human health are sodium (Na+), potassium (K+), calcium (Ca2+), and magnesium (Mg2+). They are collectively referred to as electrolytes. Electrolytes are substances that produce an electrically conducting solution when dissolved in water. The oxidized forms of sodium, calcium, and magnesium carry a charge and are essential for life. All higher forms of life need electrolytes to survive.

There are other metals that have been shown to be essential for plants and mammalian life, but their essentiality for humans has not been demonstrated. Such metals include arsenic, nickel, and boron. Arsenic deficiency depresses growth and impairs reproduction in laboratory animals and in chickens (9). Nickel deficiency results in decreased growth and formation of blood cells in several animal species. Boron is essential for the growth of most plants; however, boron deficiency appears to affect calcium and magnesium metabolism and may affect membrane function (10).

Metals regarded as essential for human health in trace amounts include iron, zinc, copper, manganese, chromium, molybdenum, and selenium. They are essential because they form an integral part of one or more enzymes involved in a metabolic or biochemical process. The primary role of such elements is as a catalyst, and only trace amounts are necessary for cellular function. These metals are commonly detected in nature, particularly in various mineral deposits and soils, meaning that they are available to be taken up by plants and animals that serve as food sources for humans. Table 1-3 lists the six essential trace metals needed in our diet.

Table 1-3: Health Effects of Trace Elements

Chromium
  Beneficial (Cr3+ oxidation state): metabolizes glucose, fat, and cholesterol
  Harmful (Cr6+ oxidation state): oxidizes DNA, produces free radicals, lung cancer
  Targets: lungs, kidneys, liver, skin, immune system

Copper
  Beneficial: iron metabolism, ATP synthesis, antioxidant, melanin synthesis, neurotransmitter synthesis
  Harmful: hemolysis
  Targets: liver, kidneys

Iron
  Beneficial: O2 transport, electron acceptor, ATP synthesis
  Harmful: diabetes, arthritis, cardiac arrhythmia, impotence, cancer
  Targets: liver, skin

Manganese
  Beneficial: urea cycle, neurotransmitter synthesis, antioxidant
  Harmful: nausea, disorientation, memory loss, anxiety, compulsive laughing/crying
  Targets: CNS, gastrointestinal tract

Molybdenum
  Beneficial: purine synthesis, metabolism, ATP synthesis
  Harmful: increased incidence of gout
  Targets: kidneys

Zinc
  Beneficial: DNA, RNA, and protein synthesis; glucose metabolism; insulin function
  Harmful: decreased heme synthesis, copper deficiency, hyperglycemia, pneumonia
  Targets: gastrointestinal tract, lungs
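As a compact restatement of the diagnostic rule given in the "Diagnosing Metal Toxicity" section above (all three features must be present, or toxicity cannot be diagnosed), the following Python sketch is illustrative only; the function and argument names are invented for the example, and it is not a clinical tool.

    # Sketch of the three-part rule for establishing metal toxicity:
    # a source of exposure, symptoms of exposure, and an elevated metal
    # concentration in plasma or urine must all be demonstrated.
    def metal_toxicity_established(source_of_exposure: bool,
                                   symptoms_of_exposure: bool,
                                   elevated_level_in_specimen: bool) -> bool:
        # If any single feature is absent, metal toxicity cannot be diagnosed.
        return (source_of_exposure
                and symptoms_of_exposure
                and elevated_level_in_specimen)

    # Example: exposure history and symptoms, but a normal plasma/urine result.
    print(metal_toxicity_established(True, True, False))  # prints False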

 

CHAPTER 2

FACTORS INFLUENCING THE TOXICITY OF METALS
Nichole Coleman, PhD

X-ray structure of metalloprotein hemoglobin.

Metal-Binding Proteins

Metal ions are vital life elements that assist in many metabolic processes in every living cell. However, these essential nutrients are toxic at elevated levels. Therefore, deficiency or excess of a metal ion, resulting from genetic disposition or malnutrition, can cause death or severe disease. For example, abnormal iron uptake has been linked to hemochromatosis as well as anemia and atherosclerosis, and it is a factor in neurological diseases such as Parkinson's, Alzheimer's, Huntington's, Friedreich's ataxia, and pica. To prevent such illnesses, cells must maintain metal ion homeostasis, and this is carried out through highly regulated processes of uptake, storage, and secretion.

Heavy metals include both those essential for normal biological functioning (e.g., Cu and Zn) and nonessential metals (e.g., Cd, Hg, and Pb). Both essential and nonessential metals can be present at concentrations that interrupt normal biological functions and induce cellular stress responses. Cellular targets for metal toxicity include tissues of the kidney, liver, and heart, as well as the immune and nervous systems. Interestingly, influences of specific metals, their reservoirs, and the cellular stress response can have remedial effects on certain diseases (11-15).

The dynamics of controlling metal ion transport across cellular membranes, intracellular homeostasis, and regulatory responses of cells to a changing environmental supply of metal ions have been the subjects of many studies in recent years. Of particular interest are the divalent trace metals Cu2+, Mn2+, Fe2+, and Zn2+ because of their important role in cell metabolism, especially as cofactors of many enzymes. Normally their intracellular concentration is kept at a low, rather constant physiological level. Toxic metals like Cd2+, Co2+, and Ni2+ can be a major threat to the health of mammals and can suppress plant growth, mostly because they interfere in various ways with the transport, homeostasis, or function of the essential metals. To minimize their deleterious effects, cells have established various strategies, among them the use of transport proteins and sequestration by metal-binding proteins. Toxic heavy metals share some chemical similarities with the essential metals, and when in excess, they can induce the production of reactive oxygen species (ROS) as well as interact with sulfhydryls, altering protein structure and function. The management of both essential and toxic heavy metals is in part accomplished by metallothionein (MT), the best-known example of a metal-binding protein (16).

Metallothioneins

In mammals, there are two major forms of metallothionein (MT-1 and MT-2), which have identical arrangements of cysteinyl residues but different isoelectric points. In most instances, only two metals, iron and zinc, are bound to metallothioneins. The synthesis of metallothioneins may be induced by the presence of zinc and copper and also by the toxic metal cadmium. Cadmium forms an intracellular complex with metallothioneins; this is a cellular protective response that shields the surrounding cells from its toxic effects. However, the renal excretion of the cadmium–metallothionein complex may have a role in the nephrotoxicity of cadmium. Metallothioneins are considered small stress-response proteins, and they serve many roles in both normal and stressed cells, acting as a reservoir of essential heavy metals (e.g., Cu2+ and Zn2+), as a scavenger for both heavy metal toxicants (e.g., Hg2+ and Cd2+) and free radicals, and as a regulator of transcription factor activity. Traditionally considered an intracellular protein, metallothionein has recently been suggested to have important roles in both intracellular and extracellular compartments (17, 18).

Metal Transporters

Transferrin is a glycoprotein that binds most of the ferric iron (Fe3+) in plasma and helps transport iron across cell membranes. Transferrin is also able to transport aluminum and manganese. Ferritin is a storage protein for iron, but it has been suggested that it serves as a general metal detoxification protein because it binds the toxic metals cadmium, beryllium, aluminum, and zinc when they are in excess. Ceruloplasmin is a ferroxidase enzyme but is also the major copper-carrying protein in the blood. As the name implies, this ferroxidase has an important role in iron metabolism. Its primary role is to convert ferrous iron to ferric iron, which then binds to transferrin. This protein also induces iron uptake by a mechanism that is independent of the transferrin pathway. In the last few years, it has become apparent that the NRAMP family of metal ion transporters plays a major role in metal ion homeostasis (19). The family members function as general metal ion transporters and can transport Mn2+, Zn2+, Cu2+, Fe2+, Cd2+, Ni2+, and Co2+. Yet several observations indicate that many more proteins involved in the transport and homeostasis of metal ions await identification.

Ionophores

An ionophore is a biological compound that binds metal ions reversibly. Most ionophores are lipid-soluble macromolecules that transport ions across cell membranes. The two broad categorizations of ionophores synthesized by microorganisms are (20):

1. Carrier ionophores, which bind to a particular ion and shield its charge from the surrounding environment. This makes it easier for the ion to pass through the hydrophobic interior of the lipid membrane.
2. Channel formers, which introduce a hydrophilic pore into the membrane, allowing ions to pass through without coming into contact with the membrane's hydrophobic interior. Channel-forming ionophores are usually large proteins.

Table 2-1: Biological Ionophores

Biological Ionophore    Ion Affinity
Beauvericin             Ca2+, Ba2+
Calcimycine             Mn2+, Ca2+, Mg2+
Enniatin                Ammonium cation
Gramicidin A            H+, Na+, K+
Ionomycin               Ca2+
Lasalocid               K+, Na+, Ca2+, Mg2+
Monensin                Na+, H+
Nigericin               K+, H+, Pb2+

CHAPTER 3

PHYTOREMEDIATION
Christopher Bivins

Artist's depiction of phytoremediation.

Introduction

The interaction of heavy metals with different forms of life on earth can be quite varied. For animals such as humans, the severely toxic effects of mercury on human physiology have been known for decades. On the other hand, small amounts of selenium supplemented in the diet have been shown to have positive effects on male hormonal health. Nonetheless, the toxicity of heavy metals to mammalian physiology is well-established knowledge. However, organisms outside of the animal kingdom have been found to have completely different interactions between their physiology and the heavy metals in their environment. Two kingdoms of life, Plantae and Fungi, have produced an incredibly vast number of species that have evolved an equally vast array of ways not only to tolerate the presence of heavy metals in their environment but also to thrive from their presence. These species have presented a priceless opportunity to repair ecosystems in a new, cutting-edge form of habitat restoration in which plants and fungi known to take up heavy metals from the soil are used to remove these toxic elements from a given habitat. This chapter will explore this brand-new science and its potential for cleaning out heavy metals that have been deposited in our environment through industrial pollution (21).

Advantages of Phytoremediation

The use of plants is arguably one of the most sustainable and environmentally friendly ways to remove heavy metals from the soil. The advantages of using naturally occurring organisms that have evolved the physiology capable of handling these toxic elements are not to be underestimated. Millions of generations of plants existing in environments containing heavy metals have produced botanical supermutants that not only tolerate the presence of heavy metals in the soil but also thrive off of their presence. Although there are no doubt many more species capable of tolerating heavy metals yet to be discovered (another lesson on the importance of maintaining biodiversity), the existence of commonly known plants capable of extracting heavy metals represents an economical goldmine (sometimes literally), because these plants are already equipped to serve our restorative needs. That is one of the most important advantages of the practice of phytoremediation: the cost is incredibly low, and the benefit—the decontamination of toxic soil—is incredibly high. Preexisting knowledge of such plants reduces but does not eliminate the need to discover and develop plants suitable for the task using genetic modifications (22).

A second advantage associated with phytoremediation is that plants are quite easy to monitor. From the moment the first seedling sprouts above ground, the progress being made is visible to the naked eye. As the plant grows, the accumulation of heavy metals in its biomass increases, making it relatively easy not only to monitor the progress of a remediation project but also to remove the heavy metals from the environment by simply pulling the plant out of the soil and transporting it to a processing center. This practice also represents yet another incredible advantage of phytoremediation: the potential for the reuse of the metals extracted by the plant. This practice is known as phytomining, and unlike traditional mining practices, this method leaves the soil completely intact and unharmed.

This is yet another advantage of phytoremediation. The use of plants to remove heavy metals from a contaminated environment is by far one of the least harmful practices known to mankind. Not only is the soil unharmed by the growth of plants, but in most cases the soil ecology is improved by the presence of plants, because they are removing a toxic component of the soil and because they bring with them microorganisms that increase the biodiversity of the soil and are essential to the soil's ecological health. This advantage is arguably the greatest of all, and the agricultural implications are astronomical. Barren wastelands of contaminated soil, once inhospitable to almost any form of life, can potentially be restored to support a multitude of crops. As the human population continues to expand, increasing the acreage of farmable soil is a task we absolutely must prioritize. Phytoremediation provides the opportunity for us to do this.

Disadvantages of Phytoremediation

As any gardener or farmer knows, plant growth is usually anything but rapid, especially with woody plants such as trees and shrubs. Unlike traditional metal-extraction methods, the use of plants to extract heavy metals from the soil is a slow process. The rate of heavy metal extraction is equal to the rate of growth of the species of plant being used. In relation to this issue, the percentage of accumulation relative to the actual biomass of the plant presents another time constraint. If the species being used to extract metals from the soil is a relatively small plant, such as Indian mustard (Brassica juncea), then one has to wait for the plant to grow to its maximum size and then repeat this process over and over again, generation after generation, until the soil contaminant has been reduced to acceptable levels.

The belowground growth rate and size of the plant being used for phytoremediation also present another limitation (23). The extent to which a plant can access heavy metals within the soil depends on the depth that the roots are able to reach, as well as the surface area that the roots encompass. Surface area is less of a limitation than the maximum depth the roots are capable of reaching. Logically, growing many plants next to one another increases the surface area of the roots; however, the depth that the roots can reach is limited from species to species. For example, the Chinese brake fern (Pteris vittata), useful for removing arsenic from the soil, has a maximum root depth of no more than about thirty centimeters.

Another set of limitations involved with phytoremediation is the issue of controlling a given heavy metal once it has been taken up by a plant. Although monitoring the growth of plants is a relatively easy task, monitoring the consumption of these plants by primary consumers is an entirely different task. If a plant containing heavy metals is eaten by an animal in the environment, whether by insects, birds, or mammals, then once that heavy metal has been transferred to another organism, the ability to keep track of that heavy metal becomes exponentially more difficult. Even worse, the heavy metal remains in the environment, and it can also travel up the food web, poisoning the ecology of the environment. Unless primary consumers are diligently prevented from eating the plants being used in a phytoremediation project, phytoremediation poses a potential risk of not decontaminating the soil of heavy metals, as well as the risk of mobilizing heavy metals and poisoning the food web. Also, when the plants are finally removed during the final stages of a phytoremediation project, there are bound to be fragments of biomass (leaves, twigs, roots) left over. Painstaking care must be taken to ensure that every piece of plant matter belonging to the restorative species is removed from the site; otherwise, the heavy metal that has accumulated in this biomass will leach back out into the soil.

Another limitation involved with phytoremediation is the soil itself. If the soil type or the ecology of the soil is incompatible with the plant that is capable of removing the heavy metal of interest, then phytoremediation with that species simply will not be possible. In addition, the degree of heavy metal concentration and toxicity in a given restoration site also poses a potential threat to the feasibility of a phytoremediation project. Some plants do seem to thrive in soils contaminated with heavy metals, such as alpine pennycress (Thlaspi caerulescens), but most plants currently known to possess phytoremediation potential merely tolerate a certain amount of heavy metals in their bodies. If a soil is too concentrated with heavy metals, a given remediative plant might not be able to grow at all, although there are many examples of exceptions to this rule that span a great range of variability (24).

Types of Heavy Metal Phytoremediation

There are three main types of heavy metal phytoremediation: phytosequestration, phytostabilization, and phytoextraction.

Phytosequestration

When a plant stores a given substance in its root system, plant biologists refer to this action as sequestration. Sequestration is typically associated with energy storage, in which plants store sugar in the form of starch in their roots. However, sequestration is also a method of preventing toxic substances and plant pathogens from entering the vasculature of the plant. Inside the roots of almost all the plants on earth is a narrow strip of cells called the Casparian strip, which encircles the inner cells of the root that contain the plant's vascular tissue. The Casparian strip acts as a kind of filter for the plant, allowing water and dissolved essential minerals to enter the plant while blocking out toxic compounds and pathogens (25, 26). Certain plants that can tolerate soil contaminated with heavy metals block the entrance of these elements into the vasculature of the plant at the site of the Casparian strip and then sequester the heavy metals in compartments of the root cells called vacuoles. With this method of phytoremediation, heavy metals are removed from the soil and then can be further removed by pulling the plant out by its roots (27).

Phytostabilization

This method is very similar to phytosequestration. In phytostabilization, plant roots remove heavy metals from the soil and store them in their root vacuoles. The idea is that by storing the heavy metals in the roots, plants capable of phytostabilization prevent heavy metals that would otherwise have been in the soil from leaching out into the water supply during floods and rainstorms. Plants with large, woody root systems can act as long-term storage facilities for toxic heavy metals, although there is a risk of the heavy metals leaching back into the soil very gradually over time. The key is that phytostabilization greatly reduces the mobility of heavy metals, although there is still the risk of the heavy metals entering the food chain through primary consumption (28).

Phytoextraction

This method is by far the most effective at permanently removing heavy metals from the soil, and it is also the most economically lucrative method of phytoremediation. With phytoextraction, water-soluble heavy metal particles are able to diffuse past the Casparian strip and enter directly into the plant's vascular system, through which the heavy metals can be deposited in various parts of the plant body, such as the leaves and stems. The capacity of a given plant to transport heavy metals from the roots into the plant vasculature varies widely from species to species, and it also depends on the type of heavy metals present in the soil. Therefore, some plants are more ideal candidates for phytoremediation than others. For example, alpine pennycress (Thlaspi caerulescens) has been shown to accumulate high concentrations of both cadmium and zinc, and in fact it can thrive in the presence of these toxic metals—yet the presence of copper stunts its growth.

Promising research is being done involving the genetic engineering of plants in order to make them more suitable for phytoremediation purposes. Transgenic plants are being designed with the incorporation of genes from bacteria known to possess certain enzymes that allow them to tolerate heavy metals. This practice could vastly increase the effectiveness of phytoextraction, and scientists could customize a plant's genome and tailor it to be a perfect match for cleaning up the specific contaminants at a given remediation site. The most interesting and economically lucrative aspect of phytoextraction is the ability to harvest the metals from the biomass of a phytoremediation crop. Plants that have been employed in phytoremediation projects can be harvested and burned, and then, using some clever chemistry, one can extract precious metals like copper or even gold from the ashes. The ability of a plant to extract a metal from the soil into its aboveground biomass depends greatly on the solubility of the metal of interest. Fortunately, chemists have discovered ways to manipulate the solubility of metals (29-31).

A complex balance of natural selection has allowed the magnificent creatures we call plants to evolve an incredibly vast array of adaptations to these toxic heavy metals that we human beings have not had nearly enough time to adapt to. We are currently in a phase of symbiotic, mutualistic evolution in which we now have an incredible opportunity to develop a unique way to deal with these toxic elements that pollute our environment and cause a multitude of health problems and ecological damage. We have much to learn from these incredible life-forms, which have spent hundreds of millions of years occupying the dynamic, ever-changing soils that we seem to be polluting and destroying at an alarming rate. There are doubtless hundreds if not thousands more species of plants that have evolved unique, individual mechanisms to tolerate the presence of heavy metals in their environment and that have yet to be discovered. However, as we destroy underexplored ecosystems at an increasing pace through development and industrial pollution, we drive up the rate of species extinction. Once a species is extinct, any knowledge of the possible mechanisms it could have possessed to deal with heavy metals disappears forever. Considering that heavy metals are so incredibly toxic to our own physiology, we have a direct interest in maintaining the diversity of species so that we may discover new and more efficient ways of decontaminating our environment through the symbiotic relationships we could potentially form with the cornucopia of plant species with which we share this planet.

Plants, however, are not the only potential teachers we could learn from in our evolutionary arms race toward fitness. A group of astoundingly interesting and beautiful beings who are more closely related to us than they are to plants have also evolved countless adaptations that allow them to tolerate and sequester heavy metals. The use of our distant relatives, the kingdom Fungi, to sequester and remove toxic heavy metals from the soil (also known as mycoremediation) is a practice that is even newer than phytoremediation, and it shows great potential in aiding us in repairing the damage done to soils by heavy metals all around the world.
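The "slow process" point made in the Disadvantages section can be made concrete with a rough back-of-the-envelope calculation. Every number in the Python sketch below (the soil metal load, the crop yield, and the shoot metal concentration) is a hypothetical placeholder chosen only to show the arithmetic; none of them comes from this book.

    # Hypothetical estimate of how many harvests a phytoextraction crop
    # would need to remove a given soil metal load. All values are
    # illustrative assumptions, not data from this chapter.
    soil_metal_load_kg_per_ha = 100.0      # metal to be removed (kg/ha), assumed
    crop_yield_t_per_ha = 10.0             # dry biomass per harvest (t/ha), assumed
    shoot_concentration_mg_per_kg = 500.0  # metal accumulated in shoots (mg/kg), assumed

    # Metal removed per harvest = biomass (kg/ha) x concentration (kg metal per kg biomass).
    removed_per_harvest_kg = (crop_yield_t_per_ha * 1000) * (shoot_concentration_mg_per_kg * 1e-6)
    harvests_needed = soil_metal_load_kg_per_ha / removed_per_harvest_kg

    print(f"{removed_per_harvest_kg:.1f} kg/ha removed per harvest")  # 5.0
    print(f"about {harvests_needed:.0f} harvests needed")             # about 20

Even under these generous assumed numbers, a project of this kind would span many growing seasons, which illustrates why extraction rate is treated as a central limitation of phytoremediation.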

CHAPTER 4

ELECTROLYTES
Christopher Blaine

Introduction

Conducting nerve signals. Contracting muscles. Coagulating blood. Regulating enzymes. What do all these functions have in common? They are managed by electrolytes. The chemical definition of an electrolyte is a substance dissolved in a fluid that allows for the conduction of electricity. For the sake of this chapter, we will limit ourselves to the physiologically significant metal electrolytes: sodium, potassium, calcium, and magnesium. Electrolytes play a key role in regulatory functions because of their ability to flow freely through the watery extra- and intracellular fluids without the need for carrier proteins. The flow of these electrolytes is then regulated and maintained by transport proteins, which determine the concentration and timing of flow in and out of the cell, and by osmotic pressure. The key difference between these electrolytic metals and the other metals found throughout this book is that these metals are needed in relatively large amounts, compared to other metals that are required in very small doses or are toxic at any dose.

Sodium

Arguably the most significant of the metal electrolytes is sodium. Physiologic sodium levels are tightly regulated by the hypothalamus and kidneys because of the multitude of functions that sodium performs in the human body. Sodium is used by neurons (in the periphery and the brain), skeletal muscle, and cardiac tissue to create action potentials. As the most abundant ion in the body, sodium is critical to maintaining and regulating osmotic activity and renal function. The exchange of sodium also regulates the acidity of urine.

The normal serum and plasma levels of sodium are 136–145 mmol/L. Cerebrospinal fluid has a higher peak normal sodium range, at 150 mmol/L. Hyponatremia occurs when the serum/plasma sodium concentration drops to 135 mmol/L or lower. However, most patients do not become symptomatic until sodium concentrations are below 130 mmol/L. At 125–130 mmol/L, patients start showing nausea and vomiting. Below 125 mmol/L, gastric distress continues, along with headaches, lethargy, ataxia, seizures, coma, and respiratory distress. Sodium levels below 120 mmol/L for forty-eight hours or less constitute a medical emergency. There are three major causes of hyponatremia: increased sodium loss, increased water retention, and water imbalance. Sodium loss may be caused by hypoadrenalism, potassium deficiency, diuretic use, ketonuria, kidney damage, prolonged vomiting or diarrhea, and severe burns. Increased water retention is due to kidney damage, hepatic cirrhosis, or congestive heart failure. Water imbalance may be caused by excess water intake, syndrome of inappropriate arginine vasopressin hormone secretion (SIADH), or pseudohyponatremia. The causes of hyponatremia can be more easily diagnosed based on a patient's osmolality. Low osmolality indicates sodium loss or increased water retention. Normal osmolality indicates displacement by other ions, myeloma, hyperlipidemia, pseudohyponatremia, hyperproteinemia, or pseudohypokalemia. High osmolality indicates hyperglycemia or mannitol infusion (32).

Hypernatremia can cause altered mental status, lethargy, irritability, restlessness, seizures, muscle twitching, hyperreflexia, fever, nausea, vomiting, difficult respiration, and increased thirst. Hypernatremia is mostly associated with a hyperosmolar state. Above 160 mmol/L, hypernatremia has a mortality rate of 60–75 percent. Hypernatremia may be caused by excess water loss, decreased water intake, or increased sodium intake or retention. Excess water loss may be caused by diabetes insipidus, renal tubular disorder, prolonged diarrhea, profuse sweating, or severe burns. Decreased water intake is typically a concern in elderly people, infants, people with mental impairments, or other people who are otherwise incapable of indicating or recognizing thirst. Increased sodium intake or retention may be due to hyperaldosteronism, excess intake of sodium bicarbonate, or excess dialysis fluid removal. Whereas blood osmolality helps to diagnose hyponatremia, urine osmolality is used to indicate the source of hypernatremia. Urine osmolality below 300 mOsm/kg indicates diabetes insipidus. Osmolality of 300–700 mOsm/kg indicates a partial defect in arginine vasopressin release or response, or osmotic diuresis. Osmolality above 700 mOsm/kg indicates a loss of thirst, insensible loss of water, gastrointestinal loss of hypotonic fluid, or excess sodium intake (33).
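The serum sodium cutoffs quoted above can be read as a simple grading scheme. The Python sketch below restates those thresholds; the function name and labels are illustrative only, and a real assessment depends on the symptoms, osmolality, and history discussed in the text.

    # Sketch: grade a serum sodium value (mmol/L) against the thresholds
    # quoted in this chapter. Illustrative only, not clinical guidance.
    def grade_serum_sodium(na_mmol_per_l: float) -> str:
        if na_mmol_per_l < 120:
            return "severe hyponatremia (a medical emergency if sustained)"
        if na_mmol_per_l < 125:
            return "hyponatremia: gastric distress, headache, lethargy possible"
        if na_mmol_per_l <= 130:
            return "hyponatremia: nausea and vomiting possible"
        if na_mmol_per_l <= 135:
            return "mild hyponatremia, often asymptomatic"
        if na_mmol_per_l <= 145:
            return "within the normal serum/plasma range (136-145 mmol/L)"
        if na_mmol_per_l <= 160:
            return "hypernatremia"
        return "severe hypernatremia (mortality of 60-75 percent above 160 mmol/L)"

    print(grade_serum_sodium(128))  # hyponatremia: nausea and vomiting possible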

Potassium

Potassium is closely tied to sodium regulation, acting as its counterpart in many physiologic functions. Action potentials of neurons are based on the influx of sodium and the efflux of potassium. Retaining sodium in the kidney requires the excretion of potassium. Apart from its connection to sodium, potassium is also used by an antiporter to bring protons into the lumen of the stomach, creating its acidity. Potassium is regulated in a similar manner to sodium; however, wherever sodium is retained, potassium is typically excreted.

Normal plasma levels of potassium are 3.5–4.5 mmol/L in men and 3.4–4.4 mmol/L in women. (Compare this to the normal range for sodium.) Normal serum values for men and women are 3.5–5.1 mmol/L. Normally, 25–125 mmol are excreted in the urine. Hypokalemia can be caused by gastrointestinal losses, renal losses, cellular shift, decreased intake, or inhibition of the sodium-potassium ATPase pump. Gastrointestinal losses include vomiting, diarrhea, gastric suction, intestinal tumors, malabsorption, cancer therapy, or large doses of laxatives. Renal loss may be caused by the use of diuretics, nephritis, renal tubular acidosis, hyperaldosteronism, Cushing's syndrome, hypomagnesemia, or acute leukemia. Cellular shift, causing hypokalemia, can be caused by alkalosis or insulin overdose, which can increase the activity of the sodium-potassium ATPase pump. The ATPase pump may be inhibited by hypoxia, hypomagnesemia, or a digoxin overdose (34). Hypokalemia causes weakness, fatigue, constipation, muscle weakness, paralysis, and even arrhythmias, which can cause cardiac arrest. Symptoms typically begin to manifest below 3.0 mmol/L.

Hyperkalemia is caused by decreased renal excretion, cellular shift, or increased intake, or it is found in lab samples because of artifacts. Decreased renal excretion of potassium may be due to acute or chronic renal failure, hypoaldosteronism, Addison's disease, or use of diuretics. Cellular shift causing hyperkalemia is due to acidosis, muscular or cellular injury, chemotherapy, leukemia, or hemolysis. The destruction of cells causes the release of the relatively high intracellular concentration of potassium. Artefactual hyperkalemia is due to similar problems with sample hemolysis, thrombocytosis, prolonged tourniquet use, or excessive fist clenching when having a blood sample drawn. Hyperkalemia causes muscle weakness, tingling, numbness, and mental confusion. These changes are due to shifting of the baseline of neuromuscular conduction. These symptoms appear around 8 mmol/L. However, cardiac arrhythmia, which can cause a fatal cardiac arrest, typically begins manifesting with EKG findings around 6–7 mmol/L. Above 10 mmol/L, fatal cardiac arrest becomes more common.

Calcium

Calcium is the key mineral (along with phosphates) that forms the bone matrix. It is also used as a signaling molecule, especially in neuromuscular junctions, hormone production, hormone secretion, and blood clotting. Calcium is regulated by parathyroid hormone, which mobilizes calcium from storage in the bones and makes it ready for excretion; the intestines, which regulate its absorption from food; the kidneys, which modulate its excretion; and vitamin D, which modifies parathyroid activity and enhances absorption in the intestines. Although there may be 1 kg of calcium in the body, 99 percent of it is locked away in the bones. Normal serum calcium is 2.1–2.6 mmol/L, or 8.5–10.5 mg/dL.

Hypocalcemia is defined as blood calcium levels below 2.1 mmol/L. Manifestations of severe hypocalcemia begin when serum calcium drops below 7.5 mg/dL, and that is cause for emergency care. Signs and symptoms of hypocalcemia include tetany, seizures, muscle cramps, muscle weakness, fatigue, altered mental status, weakened teeth and gums, coarse hair and skin, gastrointestinal distress, congestive heart failure, and cardiomyopathy. EKG changes can be seen in more severe cases. Hypocalcemia may be caused by inadequate vitamin D production or absorption and action, hypoparathyroidism (lack of parathyroid hormone), functional hypoparathyroidism (magnesium depletion or excess), pseudohypoparathyroidism (parathyroid hormone resistance), hyperphosphatemia, neonatal hypocalcemia, large-volume blood transfusion, cancer (particularly bone metastases), acute pancreatitis, rhabdomyolysis, and "hungry bone syndrome" caused by parathyroidectomy or thyroidectomy.

Hypercalcemia is defined as blood calcium levels above 2.6 mmol/L. Mild increases in blood calcium, typically during mobilization of bone calcium storage, will not show symptoms. However, a rapid or greater increase in blood calcium will cause weakness, confusion, depression, bone pain, abdominal pain, kidney stones, increased urination, and renal failure. Severe cases can cause coma and potentially fatal cardiac arrhythmias. Hypercalcemia may be caused by hyperparathyroidism, cancers including solid metastases and multiple myeloma, vitamin D disorders, high bone turnover, kidney failure, and rebound from rhabdomyolysis (35).

Magnesium

Magnesium

Magnesium is the fourth most abundant cation in the body and is the second most abundant intracellular cation. Similar to calcium, 99 percent of magnesium is stored away, either sequestered intracellularly or in bone deposits. One percent of total magnesium is found in the extracellular space, with 70 percent of that ionized or complexed to other filterable ions that can then be filtered and excreted. The remaining 30 percent of extracellular magnesium is bound to proteins. Magnesium is critical to cell metabolism because it helps to catalyze nearly every enzymatic reaction that uses phosphate as an energy source or building material, as in DNA and protein synthesis, glycolysis, oxidative phosphorylation, and many others. Magnesium is known to interfere with the release of acetylcholine and catecholamines and has been proposed as a modulator of the physiologic stress response. Normal plasma magnesium is 0.7–0.9 mmol/L, or 1.7–2.3 mg/dL, and is regulated by diet, intestinal absorption, and renal excretion. Hypomagnesemia is associated with several disease states that we have seen in other hypoelectrolytic states, in part because the same mechanisms that cause loss of one electrolyte cause loss of all electrolytes. Magnesium is also commonly consumed in inadequate amounts, in part because it is lost during cooking. Hypomagnesemia can be caused by starvation, alcohol dependence, total parenteral nutrition, redistribution from extracellular to intracellular space (as with the treatment of diabetic ketoacidosis or acute pancreatitis), gastrointestinal loss, renal tubular defects, chemotherapy, immunosuppressants, proton pump inhibitors, or diuretics. Hypomagnesemia is associated with hypokalemia, hypocalcemia (especially in severe cases where magnesium levels drop below 1.2 mg/dL), arrhythmias, hypertension, convulsions, muscle cramping, apathy, hyperreflexia, depression, generalized weakness, anorexia, and vomiting. Hypomagnesemia is also being

found to be closely associated with coronary artery disease, diabetes mellitus, migraine headaches, nephrolithiasis, and osteoporosis. Supplementing magnesium has even been shown to reduce bronchial hyper-reactivity to certain allergens. Hypermagnesemia is most commonly due to renal failure, but it can also be caused by excessive intake (typically due to use of magnesium-containing medications), lithium therapy, hypothyroidism, Addison's disease, familial hypocalciuric hypercalcemia, milk alkali syndrome, and depression. Symptoms of hypermagnesemia do not manifest until levels exceed 2 mmol/L, and they usually begin with decreasing deep-tendon reflexes. As the hypermagnesemia worsens, facial paresthesia develops, followed by muscle weakness, flaccidity, paralysis, depressed respiration, and eventually apnea. Moderate increases in blood magnesium may cause hypotension and bradycardia, with complete cardiac arrest occurring around 7 mmol/L or higher. Hypermagnesemia may also cause hypocalcemia, nausea, and vomiting (35).

CHAPTER 5

MAGNESIUM SILICATES
Nichole Coleman, PhD, and Bhumil Patel

Natural asbestos fiber.

Introduction

Magnesium is a nutritionally essential metal whose deficiency or excess can cause adverse health effects. Magnesium compounds such as magnesium citrate (C6H6MgO7), magnesium oxide (MgO), magnesium sulfate (MgSO4), and magnesium hydroxide (Mg(OH)2) are used as laxatives and antacids; magnesium hydroxide is used

by the masses and is sold under the trade name Milk of Magnesia. Although magnesium forms very useful compounds with oxygen, sulfur, and carbon, isosterically replacing carbon with silicon produces one of the deadliest carcinogenic fibers known to man: asbestos (Mg3Si2O5(OH)4).

Asbestos

In Greek, asbestos means indestructible. The name is taken from the Greek and refers to a family of naturally occurring fibrous magnesium silicate compounds. Its chemical formula is Mg3Si2O5(OH)4. Asbestos is a known human carcinogen and can cause mesothelioma, a cancer of the mesothelium that surrounds internal organs. Recent studies also suggest links between asbestos exposure and increased risk for cancers of the gastrointestinal tract, colon and rectum, throat, kidney, esophagus, and gallbladder. Asbestos has remarkable properties that made it an attractive resource in the early 1900s. It can uniquely be spun into thread and then woven into a cloth. It has excellent fireproofing and insulating properties and is relatively lightweight. Many other properties, such as abundance, resistance to water and acids, and electrical nonconductivity, made asbestos a very valuable resource in the construction of naval vessels, as well as a component that could be blended with plastics, resins, and other materials. Consumption in the United States peaked around 1973, even though links to lung cancer began appearing as early as the 1930s. Linking asbestos and disease generated a great deal of controversy because the material was embedded in so many products in industry and general use. The first paper, written by William Cook, appeared in 1924 in the British Medical Journal. It described the illness and death of a worker at an asbestos factory who died of fibrosis of the lungs and tuberculosis.

Following this paper, other papers soon began appearing that concluded asbestos caused a disease, which was later termed asbestosis. Most of the regulations enacted were in the UK, and other countries did not follow suit until much later (36). The EPA issued a final rule banning most asbestos-containing products on July 12, 1989. Although it is banned, thousands of products containing asbestos still contaminate our environment, and thus awareness is a key factor in reducing health issues arising from asbestos use.

Asbestos Exposure

Asbestos can be found in many places, such as the workplace, community centers, or even our homes. Disease arises mainly when the fibers break into fine particles that become airborne and lodge in the airways and lungs. These particles can remain in the body for very long periods and can accumulate. Over time, accumulated particles can cause inflammation and scarring, which in turn can lead to serious health problems. Today, most asbestos exposure is occupational. People can also be exposed at home, either by coming into contact with old sources of building material during home renovations or from newer sources such as outdoor activities or auto repair shops. Naturally occurring asbestos can be found in local soils, and thus gardening can also lead to minor exposure to asbestos (36).

Symptoms and Health Effects of Asbestos

Most of the known effects of asbestos come from studying people who have had past exposure to it. What is observed is a slow buildup of scar-like tissue in the lungs and the membranes surrounding the lungs. This tissue loses the ability to expand and contract like normal lung tissue and thus makes breathing progressively difficult. Another effect

of this buildup is the constriction of blood vessels that inhibits blood flow to the lungs, causing the heart to enlarge. This condition is called asbestosis, and its signs include coughing and shortness of breath. People with prolonged, heavy exposure are usually the ones at risk for developing asbestosis. Another major health effect caused by asbestos is cancer. Cancer of the lung tissue does not present itself right after exposure but develops many years after the initial exposure. It is hard to determine the exact number of cases caused by asbestos, but what is known from epidemiological studies is that the most pronounced effects present themselves ten to fifteen years after exposure. Thus, people with known exposure should take preventative measures as soon as possible. Smoking should be stopped because it considerably increases the risk of lung cancer in people exposed to asbestos. The third major disease caused by asbestos exposure is mesothelioma, a cancer of the thin membrane that surrounds the lung and other internal organs. It is a very serious disease, and by the time the patient is diagnosed, it is usually too late. Median survival is twelve to twenty-one months, and the five-year survival rate is between 5 and 10 percent. Survival has improved from past years, and scientists believe that early identification and intervention may increase survival (37).

Advanced malignant mesothelioma on right.

Asbestos exposure in children could have different effects than exposure in adults. Even though children and infants breathe more air per kilogram of body weight than adults, the difference could be counterbalanced by their alveoli being less developed, thus providing less surface area for absorption of asbestos particles. Some studies have shown cancers appearing earlier in people who were exposed to asbestos when they were children, but there is not enough data to confirm this hypothesis. It might be true that early exposure can lead to earlier onset compared to adults because there is more time for injury to the lungs to develop. However, there is not enough scientific data

to support the idea that children develop mesothelioma not from asbestos exposure but rather through a less understood mechanism that does not involve any asbestos exposure (38).

Treatment for Asbestos Exposure

There are currently no treatments that can reverse the effects of asbestos on the lungs. However, symptoms can be treated and relieved to prevent further complications. One of the chief factors that contribute to further development of these diseases is smoking. Doctors first recommend that smokers quit and that all patients avoid secondhand smoke at all times. If the person has trouble breathing, has shortness of breath, and has a low level of blood oxygen, then the doctor may recommend oxygen therapy. This procedure may be done at home or in the hospital and helps increase the concentration of oxygen in the blood through nasal prongs or a mask. If there is a buildup of fluid around the lungs, then a thoracentesis may be performed. This is done by inserting a thin tube or needle into the cavity between the lungs and the chest wall. The excess fluid is then drained via the other end of the tube. If the patient has lung cancer or mesothelioma, treatment options will include surgery, chemotherapy, radiation therapy, and targeted therapy. Other forms of treatment could include prescriptions that deal with the symptoms of fluid buildup and pain, and that address any other discomforts or complications caused by the disease. It is also generally a good idea to stay current on flu and pneumonia vaccines, because they can help lower the risk of lung infections. Recently, a new experimental treatment was shown to stop asbestos-related tumors in three out of four cases. The antitumor

recombinant protein, NGR-hTNF, is a targeted therapeutic drug that is currently in phase three clinical trials to establish its efficacy as a management therapy for people with recurrent mesothelioma (39, 40).

Remediation of Asbestos

As previously stated, asbestos can be found in many places in our environment. Certified professionals should be employed when asbestos removal is required. A quick test must first be performed to make sure the substance contains asbestos. This can be done simply with a polarized light microscope. Once it is confirmed that there is asbestos in the sample, the condition of the material must be inspected. If it is in good condition with no danger of flaking into smaller particles, new material can be used to cover it. If the asbestos needs to be removed, the process of containment and removal is called abatement. Usually the house or building in question is cleared out, but the residents do have the option to stay. Doorways and windows are sealed off, and HEPA filters are used to create negative pressure so that the air inside the house does not escape into the environment. The abatement team then cuts out the asbestos and transports it to a specially sealed dumpster meant for asbestos products. After abatement is complete, an air test must be performed to make sure the air is safe for children and adults. A TEM (transmission electron microscope) or a PCM (phase contrast microscope) is used for this analysis. Asbestos exposure is scientifically proven to be a significant health hazard to the public. Public policy should reflect a complete ban on the use of asbestos products and aggressive prevention measures so that children and adults can be protected from future disease.

PART 2

ESSENTIAL TRACE METAL WITH THE POTENTIAL FOR TOXICITY
Nichole Coleman, PhD, Christopher Blaine, Emmanuel Romero, Paul Valencia Scattolin, and Alexandria Lufting

Vitamin and essential trace metal wheel.

Chapter 6 Iron
Chapter 7 Cobalt
Chapter 8 Copper
Chapter 9 Zinc

CHAPTER 6



IRON
Christopher Blaine

Introduction

Humankind has a deep history with iron. Some of the earliest known artifacts constructed from iron date back to about 3000 BC, with the true Iron Age beginning around 1200 BC. Iron is closely linked to the development of the modern age, especially as the base for steel, which allowed for the construction of skyscrapers, railroads, ships, cars, and other major appliances. But humanity's connection to iron runs even deeper than that. It runs, quite literally, in our blood. Iron is the major oxygen-binding component of heme in hemoglobin and the key electron-transfer element of the electron transport chain in the mitochondria. Not only this, but iron's ability to transfer electrons can

create reactive oxygen species that can be utilized by immune cells in host defense. In this chapter, we will discuss sources of dietary iron and its absorption, iron storage and transportation within the human body, metabolism of iron, iron deficiency, iron overload, and treatments for deficiency and overload.

Iron Absorption

There are two major forms of iron that can be absorbed in the intestine: heme iron and nonheme (or free) iron (that is, iron absorbed from animal sources and iron absorbed from plant sources). Heme iron is far more easily absorbed, given that it is already in the heme state and can immediately be integrated. Nonheme iron must undergo several reduction-oxidation steps and be bound to transport and storage molecules before it can be assimilated into biologically functional molecules. Both heme and free iron are found in meat, poultry, fish, and particularly liver. Plants such as leafy greens, beans, lentils, watercress, tofu, and chickpeas contain only free iron. Iron can be more easily absorbed from the gut in the presence of partially digested muscle, fermented food, and many organic acids, particularly ascorbic acid (vitamin C). Dietary inhibitors of nonheme iron include calcium phosphate, bran, phytic acid, polyphenols, phytates, and oxalates. Phytates and oxalates are present in many plants, black coffee, tea, and black pepper, reducing the bioavailability of iron (41).

Table 6-1: Iron-Rich Foods
Meats and other protein: Beef, lamb, ham, turkey, chicken, veal, pork, liver, eggs, tofu, beans, lentils
Seafood: Shrimp, clams, scallops, oysters, tuna, sardines, haddock, mackerel
Vegetables: Spinach, sweet potatoes, peas, broccoli, beet greens, dandelion greens, collards, kale, chard, tomato
Breads and cereals: White bread, whole wheat bread, enriched pasta, bran cereals, corn meal
Fruit: Strawberry, watermelon, raisins, dates, figs, prunes, prune juice, dried apricots

A recommended diet includes 10–14 mg of iron, but only 5–15 percent of this is actually absorbed by the enterocytes of the duodenum in the small intestine. However, this can increase up to 35 percent in the case of iron deficiency. Intestinal iron is typically present in the ferric (Fe3+) state and must be reduced to its ferrous (Fe2+) state to be absorbed. Duodenal cytochrome-b ferrireductase (Dcytb) reduces ferric iron, which makes it absorbable by divalent metal transporter-1 (DMT-1).

Iron Storage and Transportation

Iron absorbed in enterocytes is bound to a storage molecule known as ferritin. From ferritin, the iron can then be released across the basolateral surface through ferroportin channels with the help of hephaestin and ceruloplasmin. Hephaestin and ceruloplasmin help to form transferrin-iron complexes, allowing transferrin to then transport iron to the liver for storage as ferritin or to iron-metabolizing tissues, namely the bone marrow. Modifying the transcription of the mRNA that produces these storage molecules is one of the main ways the body regulates iron uptake and release (42).
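The absorption figures above imply that only a small fraction of dietary iron actually enters the body each day, which is consistent with the 1–2 mg of daily iron losses described later in this chapter. A minimal Python sketch of that arithmetic, using the intake and absorption ranges quoted above (the numbers are the chapter's; the function itself is purely illustrative):

    def absorbed_iron_mg(intake_mg, absorbed_fraction):
        """Estimate absorbed iron from dietary intake and fractional absorption."""
        return intake_mg * absorbed_fraction

    # Typical diet: 10-14 mg/day absorbed at 5-15 percent
    print(absorbed_iron_mg(10, 0.05), absorbed_iron_mg(14, 0.15))  # about 0.5-2.1 mg/day

    # Iron deficiency can push absorption up to about 35 percent
    print(absorbed_iron_mg(14, 0.35))                              # about 4.9 mg/day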

Figure 6-1: Pathway of iron absorption in enterocytes. 1: Ferrireductase; 2: Divalent metal transporter 1 (DMT-1); 3: Heme protein carrier 1 (HPC1); 4: Heme oxygenase; 5: Heme exporter; 6: Ferroportin (Ireg-1); 7: Hephaestin; 8: Transferrin receptor-1 (TfR1).

Hepcidin is a key regulator of the entry of iron into general circulation. When hepcidin levels are abnormally high, as in cases of inflammation, serum iron falls due to iron sequestration within macrophages and liver cells. This characteristically leads to anemia due to an inadequate amount of serum iron. When hepcidin levels are abnormally low, such as in hemochromatosis, iron overload occurs due to increased ferroportin-mediated iron efflux from storage and increased gut iron absorption. Macrophages, particularly in the spleen, play an integral role in recycling and recovering iron. About 90 percent of all iron is recycled by splenic macrophages that break down damaged erythrocytes to recover the cells' hemoglobin. Only 10 percent of a person's iron actually comes from the diet.

This scavenged iron is then transported back to the liver, where it can be stored and released at a later time. Normal serum iron is around 55–160 µg/dL in men and 40–155 µg/dL in women, and normal hemoglobin levels are typically around 13.5–17.5 g/dL in men and 12–15.5 g/dL in women. Values also differ based on age, altitude, and physical activity.

Metabolism of Iron

There are three main functions of iron, based around its interaction with oxygen and its ability to easily pass along electrons. The first is to transport and store oxygen in the form of heme bound to hemoglobin or myoglobin. The second major function is to assist in redox reactions, namely oxidative phosphorylation. The third major function is to create reactive oxygen species that may be used in macrophages or other immune cells to destroy phagocytized materials. Other minor roles include assisting in the production of neurotransmitters and in myelination.
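To put the hemoglobin values above in perspective, most of the body's iron sits in circulating hemoglobin, as figure 6-2 below illustrates. The rough Python calculation here assumes a total blood volume of about 5 L and roughly 3.5 mg of iron per gram of hemoglobin; both figures are typical textbook values rather than numbers from this chapter, so treat the result only as an order-of-magnitude check.

    def hemoglobin_iron_g(hgb_g_per_dl, blood_volume_l=5.0, mg_iron_per_g_hgb=3.5):
        """Estimate grams of iron held in circulating hemoglobin.
        blood_volume_l and mg_iron_per_g_hgb are assumed typical values,
        not figures taken from this chapter."""
        total_hgb_g = hgb_g_per_dl * 10.0 * blood_volume_l    # g/dL -> g/L, then times liters
        return total_hgb_g * mg_iron_per_g_hgb / 1000.0       # mg of iron -> g

    # Using the male reference range quoted above (13.5-17.5 g/dL):
    print(hemoglobin_iron_g(13.5), hemoglobin_iron_g(17.5))   # roughly 2.4-3.1 g of iron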

Figure 6-2: Iron compartments in the human body.

As figure 6-2 shows, iron is primarily found in the heme state in hemoglobin, designating the primary role of iron as oxygen mobilization. Four heme molecules per hemoglobin work in concert to pick up oxygen at the lungs and distribute it as needed to other tissues. Muscle tissue contains myoglobin, which has a higher affinity for oxygen than hemoglobin, allowing oxygen removal and storage until muscle activity increases oxygen demands. Iron bound in enzymes, such as cytochromes and iron-sulfur clusters, is primarily found in the oxidative phosphorylation pathway, but it can be found in many proteins that perform redox reactions. Iron's tendency to transfer electrons, combined with oxygen's affinity for electrons, creates what are known as reactive oxygen species. These excited oxygen molecules and compounds can cause damage to cell membranes, DNA, proteins, and other major structures. Although it is

dangerous to the human body, it can be utilized within specific compartments of immune cells to destroy invading cells.

Siderophores

As mentioned previously, hepcidin causes the internalization of ferroportin, sequestering iron inside the enterocytes, hepatocytes, and macrophages, and it is upregulated during inflammation. This is assumed to be part of a host defense system to prevent pathogens from accessing free iron for their own metabolism. However, certain pathogens, particularly intracellular pathogens such as Mycobacterium, Neisseria, Shigella, and Salmonella species, have developed toxins that chelate iron from carrier or storage molecules. These toxins are known as siderophores. Once siderophores have forced the release of iron, they bind the iron itself and are then absorbed by the pathogenic bacteria. These siderophores are often unique to the pathogenic bacteria. Other microbes, such as the fungus Cryptococcus neoformans, produce toxins known as hemophores, which can remove heme from its macromolecular structures. Siderophores are inhibited by the host-immune protein siderocalin, but certain bacteria have developed a way to bypass siderocalin binding by using so-called stealth siderophores. Pathogenic bacteria may use other strategies beyond the use of toxins. Some pathogens will lyse erythrocytes to scavenge the abundant heme iron. Extracellular pathogens may have receptors that allow the ingestion of carrier molecules in the blood. The receptors may even allow absorption of hemoglobin and hemoglobin complexes (43).

Iron Deficiency

There is no direct mechanism to decrease iron levels, but 1–2 mg of iron is lost per day because of shedding of gastrointestinal epithelial cells, excretion through bile or

urine, or menstrual bleeding. Heavier menstrual bleeding, gastrointestinal bleeding, or actively bleeding wounds can massively increase these losses. Minimal amounts can also be found in the sweat, nails, and hair. Mutations at DMT-1, which is responsible for intestinal absorption, can cause a loss of function. This loss of function can lead to multiple pathologic conditions, including microcytic hypochromic anemia, which is defined by a lack of hemoglobin in erythrocytes, resulting in smaller, paler red blood cells. Infection may also cause low iron levels from iron being scavenged by extracellular parasites and the increased production of hepcidin. This can cause free iron to drop to one-tenth of its normal values; however, because this does not change the amount of transferrin or ferritin, there are often no physiologic symptoms. The most reliable way to measure bioavailable iron is to measure ferritin levels, which reflect the liver's iron stores. Ferritin, as a marker of iron storage, determines whether or not there is an iron shortage. By far the most common method of detecting blood iron levels, especially at the clinical level, is to measure the concentration of hemoglobin and the corresponding hemoglobin values in red blood cells. Hemoglobin below 12 g/dL in women and below 13 g/dL in men is defined as anemia by the World Health Organization. Dietary iron deficiency is one of the leading causes of anemia and is one of the simplest to amend. Ferrous sulfate tablets allow oral supplementation of iron missing from the diet. Another method that is often more easily implemented in poorer nations is to cook with cast-iron materials if iron-rich foods are not available. Vegetarians and vegans who do not carefully plan their diets may also suffer from iron deficiency. Inclusion of more iron-rich foods, combined with vitamin C, will offset the lack of heme iron in the diet (44). Certain disease states may also cause anemia. Chronic infections or sepsis, which cause hemolysis, lead to major

depletion of ferritin and hemoglobin as the demand for new erythrocytes increases and available iron is taken up by pathogenic bacteria. As the infection continues, fewer erythrocytes are able to return to the spleen to recycle their hemoglobin. Iron deficiency mainly causes fatigue and dizziness, with extreme cases of anemia leading to hypotensive shock. Deficiency in early development can cause further complications leading to retardation and impaired cognitive ability.

Iron Overload

Acute iron toxicity can be due to the ingestion of large amounts of iron-containing medicines, mostly iron supplementation medicines like ferrous sulfate. This is most common in children who accidentally ingest the medicine. The 2014 Annual Report of the American Association of Poison Control Centers' (AAPCC) National Poison Data System shows that 75 percent of cases of acute toxicity were in children below the age of six. Severe toxicity occurs with the ingestion of 0.5 g of iron (recall that recommended intake is measured in milligrams) or 2.5 g of ferrous sulfate, and it manifests within one to six hours of ingestion. Signs and symptoms of acute iron toxicity include shock, metabolic acidosis, liver damage, and coagulation abnormalities over the next several days. Other late effects include renal failure and hepatic cirrhosis. The suspected mechanism of toxicity is increased absorption of ferrous ions, causing endothelial cell damage in the liver (45). Chronic iron toxicity, also known as iron overload, comes from three major sources: hereditary hemochromatosis, excess dietary iron, and blood transfusions. Long-term iron exposure may also occur because of the inhalation of iron oxide fumes and dust affecting hematite miners, iron and steelworkers, and arc welders.
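The acute-toxicity thresholds quoted above (0.5 g of elemental iron, or about 2.5 g of ferrous sulfate) imply that only around a fifth of a ferrous sulfate dose is elemental iron. The short Python check below assumes the ferrous sulfate heptahydrate (FeSO4·7H2O) commonly used in supplements; the molar masses are standard values, not figures from this chapter.

    # Approximate molar masses in g/mol (standard values, not from the text)
    FE = 55.85
    FESO4_7H2O = 278.0   # ferrous sulfate heptahydrate
    FESO4 = 151.9        # anhydrous ferrous sulfate

    def elemental_iron_fraction(salt_molar_mass):
        """Fraction of a salt's mass contributed by elemental iron."""
        return FE / salt_molar_mass

    print(elemental_iron_fraction(FESO4_7H2O))  # ~0.20, so 2.5 g of salt carries ~0.5 g iron
    print(elemental_iron_fraction(FESO4))       # ~0.37 for the anhydrous form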

Hereditary hemochromatosis is excess iron absorption due to genetic mutations that alter iron homeostasis and hepcidin regulation. Two major types of hereditary hemochromatosis mutations exist: HFE gene mutations and non-HFE gene mutations. Non-HFE mutations are less common, but they are found more prevalently in non-European populations. These mutations cause decreased hepcidin activity and increased iron absorption. The excess iron spills into the bloodstream and saturates transferrin, forcing hepatocytes and cardiac myocytes to take in some of the spillover through alternative iron transporters (46). Excess dietary iron can be caused by the same methods used to remedy iron deficiency. Cooking with cast-iron materials, adding food with highly bioavailable iron to one's diet, and taking iron supplements can all cause iron overload if not managed properly. This can be further complicated in patients who have been diagnosed with anemia but have not been properly tested to understand the nature of their anemia. Many hereditary diseases such as alpha and beta thalassemia can cause anemia without the patients being deficient in iron, yet they are treated with iron supplements. Testing serum ferritin and transferrin allows a more accurate diagnosis of anemic patients to understand the source of their disease. Blood transfusions can cause a condition known as transfusional siderosis. Blood transfusion can sometimes be required for anemic conditions. It is somewhat similar to the previous two forms of iron overload, but in this case the overload is caused by excess heme iron rather than ionized iron.

Treatment of Iron Toxicity

Acute iron toxicity is treated through the removal of ingested iron. This can be achieved by inducing vomiting or performing a gastric lavage. More extreme toxic events

require symptomatic treatment of the iron-induced acidosis or shock. Chelation therapy is typically reserved for treatment of iron overload in patients with thalassemia and transfusional siderosis. Dietary overload can be resolved by excluding iron-rich foods from the diet. Many chelating drugs are available, including deferiprone (L1), deferoxamine (DF), deferasirox (DRFA), diethylenetriaminepentaacetic acid (DTPA), and ethylenediaminetetraacetic acid (EDTA). The main choices for therapy are L1 and DF.

CHAPTER 7

COBALT
Emmanuel Romero

Introduction

The US Census Bureau reports that by the year 2050, an estimated 83.7 million residents in this country will be sixty-five years of age or older, roughly double the size of this demographic in 2012. Modern medicine is constantly evolving to serve these individuals as they deal with quality-of-life issues, and among the most important of these issues is mobility. Exercise and other physical activities are essential to a healthy lifestyle because they help maintain strong bones and muscles, good metabolism, optimal cardiovascular health, and morale. Age-related issues such as osteoarthritis may impede the mobility of older individuals, creating the need for hip replacements and similar therapies.

Many implants approved by the US Food and Drug Administration contain metal components. According to the government agency, metal-on-metal hip implants, or MoM systems, come with several advantages, including durability and low risk of dislocation. However, corrosion can pose a hazard to patients if it results in metal particles invading the surrounding tissue or entering the circulation. Depending on the metal, patients face the potential risk for different kinds of poisoning. One metal often used in MoM systems is cobalt.

Cobalt and Essential Trace Elements

Among the nutrients that humans require are essential trace elements, which people must consume in minuscule amounts ranging from fifty micrograms to eighteen milligrams a day. These minerals include iron, copper, zinc, iodine, and cobalt. They often play the role of catalyst in

physiology, and sometimes become part of the basic building blocks of the human body. In healthy individuals, cobalt is found in several tissues, including muscle and fat, as well as various organs such as the liver and heart. Cobalt is also a component of cobalamin (vitamin B12), which intestinal bacteria normally produce. According to the Office of Dietary Supplements, a division of the National Institutes of Health, vitamin B12 is essential to DNA synthesis and maintenance of healthy nerves and blood cells. Beef liver, poultry, eggs, milk, clams, and fortified breakfast cereals all serve as dietary sources of vitamin B12. Deficiencies in vitamin B12 may lead to fatigue, mood disorders, dementia, memory problems, and megaloblastic anemia. One researcher recently attributed the eccentric behaviors of former US First Lady Mary Todd Lincoln, long a subject of psychiatric and medical speculation, to a vitamin B12 deficiency (47-49).

Medical and Nonmedical Applications of Cobalt

Nonradioactive forms of cobalt can be used as structural components of MoM systems, but scientists are investigating other isotopes of cobalt as alternative radiation sources for brachytherapy. One study published in 2011 concluded that although there were no clinical advantages to using cobalt-60 instead of iridium-192 in brachytherapy, the former had a much longer half-life, suggesting it may be an economically preferred radiation source for cancer patients who need to be treated in developing countries (50). About forty years ago, doctors administered cobalt salts to patients they diagnosed as having refractory anemia because the mineral seemed to stimulate the production of red blood cells, though the mechanism for this response had not been entirely clear. However, this therapy

became less popular as doctors observed adverse side effects in patients' cardiovascular, nervous, and endocrine systems. Cobalt also has several industrial uses and can be found in alloys, batteries, magnets, tires, machinery, and the dye cobalt blue. Diamond polishing, welding, and ceramics are other activities that involve the use of cobalt. In the middle of the 1900s, some beer breweries across the United States used cobalt as a foam stabilizer for their beverages.

What Does Cobalt Toxicity Look Like?

Following a large initial exposure to cobalt, the human body's own excretory and lymphatic systems help dilute and eventually eliminate the substance. During one experiment using intravenous administrations of cobalt, adult male subjects excreted 40 percent of the mineral after twenty-four hours. Measurements taken after a month and after a year showed that 20 percent and 10 percent of the cobalt was retained, respectively, with the kidney and liver being the main organs of retention. Although a single large dose of cobalt will eventually be eliminated, chronic exposure over a longer period of time may cause cobalt to accumulate within the organs and elevate levels of the metal in circulation. Scientists do not know much about cobalt deficiency, but there are cases of cobalt toxicity recorded in the medical and scientific literature. However, many of these problems are not specific to cobalt toxicity, which underscores the importance of doctors performing a thorough medical examination and getting a clear patient history before drawing any conclusions. In the nervous system, excess cobalt may cause headaches, mood disorders, cognitive decline, vision problems, deafness, or tinnitus. There can also be several negative effects observed on the endocrine system, such as hypothyroidism, which some experts believe to be a

hallmark of cobalt toxicity. Historically, there have been cases of anemia patients, particularly children, who developed goiters because of oral cobalt therapies. Cobalt also can adversely impact the cardiovascular system. This was typically observed in cases of "beer drinkers' cardiomyopathy," which was a result of the former practice of using the mineral as a foam stabilizer in beer. Autopsies on patients with this condition revealed an excess of cobalt in the muscle tissues of their hearts, which had grown large and become less effective at pumping blood around the body. When it comes to the blood, some doctors noted that excess cobalt can cause the blood to thicken (51). Industrial inhalation of cobalt can spur the development of scar tissue in the lungs. Medical literature has also linked cobalt to contact dermatitis, but other minerals such as nickel are more likely irritants. Scientists are still not entirely certain how cobalt exerts its negative effects on the body. Possibilities include direct cell toxicity, strong interactions with sulfhydryl groups, interference with mitochondrial enzymes and/or thyroid function, disruption of calcium signaling, and production of oxidative particles (52).

Monitoring the MoM Problem

Today, cobalt toxicity can be the result of long-term exposure to small doses of the mineral (such as inhalation on the job) or a one-time exposure to a large amount of cobalt (such as an industrial accident), but according to the US National Library of Medicine, there have been reports of poisoning caused by the degeneration of MoM prosthetic systems that are made with cobalt. According to the FDA, orthopedic surgeons usually take precautions to help ensure that MoM prostheses operate in such a way that the risks for corrosion and the subsequent

release of metal ions into the surrounding tissue and circulation are kept to a minimum. Unfortunately, it is difficult for doctors to predict which patients will become sick from cobalt poisoning, or to speculate as to how severe symptoms will be in such an event. A case study from the University of Pittsburgh Medical Center noted that since 2006, more clinicians have been seeing cases of arthroprosthetic cobaltism, in which patients with MoM systems presented in the medical setting with nervous, endocrine, and cardiovascular symptoms, as well as hip pain and blood serum cobalt concentrations upward of sixty micrograms per liter. However, there have not been any known studies comparing clinical and laboratory characteristics between groups of symptomatic or asymptomatic patients who did or did not undergo implantation of cobalt-containing MoM systems. This has made standardization of patient care in cases of arthroprosthetic cobaltism rather difficult. Because of the lack of consensus in approaching the screening and diagnosis of cobalt toxicity among MoM patients, there is a significant amount of variation in the different guidelines put forth by medical groups. For example, the British Medicines and Healthcare Products Regulatory Agency urges doctors to monitor symptoms and blood cobalt levels among patients receiving cobalt-containing MoM systems for at least five years, all the while screening for abnormal effects and laboratory measurements of cobalt ion concentrations of at least seven micrograms per liter. Conversely, the FDA has no clinical value threshold for cobalt but still recommends close watch over hip replacement patients for problems in the endocrine, cardiovascular, or nervous systems. Regardless of the cause of cobalt toxicity, there has yet to be an established reference for abnormally high cobalt levels from patient specimens. One study suggested that serum concentrations greater than one microgram per liter

may indicate toxic exposure, and that results greater than five micrograms per liter can be dangerous. Still, laboratory results must be taken within the context of other pieces of the patient presentation, including symptoms and history of MoM prostheses. At the University of Pittsburgh School of Medicine, there is a proposed set of three criteria for diagnosing cobalt toxicity in MoM patients: abnormally high levels of cobalt, as confirmed by laboratory analyses of blood or serum; at least two medical exams, such as neurocognitive tests, confirming the presence of symptoms indicative of poisoning; and the exclusion of other possible medical explanations. The FDA has not recommended metal ion testing for MoM patients who are asymptomatic, though monitoring of patient health is still highly advised. Furthermore, when clinicians do conduct metal ion testing, it is important to note that not all commercial labs can perform these analyses with the required degree of accuracy and precision. Serum samples can also become easily contaminated because of trace elements present in collection tubes and needles, as well as the surrounding environment (53).

What Are Other Exposure Routes of Cobalt?

Apart from MoM systems, individuals may develop cobalt toxicity as a result of occupational exposure to dust that contains cobalt. This can happen among people who work in cobalt processing and mining, diamond polishing, ceramics, or hard metal industries. According to the US Centers for Disease Control and Prevention, more than a million workers across the country in these fields are at risk of exposure. For example, breathing problems can occur among cemented carbide industry employees who work in environments where air concentrations of dust reach upward of two micrograms per cubic meter. The conditions

can lead to a form of pneumoconiosis known as hard metal lung disease. Other complications may include asthma or labored breathing. Cobalt dust may also be swallowed on the job. Ingestion of cobalt outside the industrial setting may be a less common contemporary cause of toxicity, because practices of using the mineral for beer brewing or therapy for anemia have become less common.

How Is Cobalt Toxicity Treated?

Remediation of cobalt toxicity depends on several factors, including the route of exposure, the amount of cobalt to which an individual is exposed, laboratory results, and the clinical presentation of patients. When it comes to arthroprosthetic cobaltism, doctors may recommend removal of the MoM system and replacement with a prosthetic not made with cobalt. In the emergency setting, patients with breathing problems caused by inhalation of a large amount of cobalt may receive various medications to temper inflammation and relieve swelling. Rare cases in which large doses of cobalt are swallowed may call for hemodialysis. There is no known chelation therapy for cobalt toxicity. In instances when patients develop cobalt toxicity over time, doctors can only recommend reducing exposure to the mineral, as well as treatment of the individual symptoms caused by poisoning. For example, thyroid hormones may be prescribed for patients with endocrine problems, and those with cardiovascular issues may benefit from beta-blockers or angiotensin-converting enzyme inhibitors. Corticosteroid drugs can also help with inflammation.

What Is the Future of MoM?

Based on the latest available evidence, the FDA ruled in February 2016 that before two different types of MoM

systems can become commercially available to patients, they must have premarket approval applications, which include information on known risks, device effectiveness, and reports on safety. Additionally, the agency is improving medical imaging technology to allow better monitoring of implanted devices, encouraging proficiency testing for labs that perform metal ion analyses on specimens, and conducting postmarket surveillance. In regard to preventing cobalt toxicity in the industrial setting, the US Department of Health and Human Services and the US Department of Labor recommend that employers and employees be mindful of all practices designed to minimize exposure to metal dust. These include the use of appropriate breathing apparatuses, impervious clothing, and functional exhaust and ventilation systems. Workers must also observe good hygiene practices by thoroughly washing metal dust off the skin and refraining from eating or smoking in work areas.
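As this chapter stresses, there is no consensus reference range for serum cobalt; the figures quoted earlier (a suggested 1 µg/L marker of exposure, 5 µg/L as a possibly dangerous level, the British regulator's 7 µg/L follow-up threshold for MoM patients, and the 60 µg/L concentrations reported in arthroprosthetic cobaltism) come from different sources. The Python sketch below simply restates those quoted numbers as a rough triage aid; the function and its labels are illustrative and are not a clinical guideline.

    def interpret_serum_cobalt(cobalt_ug_per_l):
        """Relate a serum cobalt level (micrograms per liter) to the figures
        quoted in this chapter. Illustrative only; no consensus clinical
        reference range exists."""
        if cobalt_ug_per_l < 1.0:
            return "below the 1 ug/L level one study linked to toxic exposure"
        elif cobalt_ug_per_l < 5.0:
            return "above 1 ug/L, suggested by one study to indicate toxic exposure"
        elif cobalt_ug_per_l < 7.0:
            return "above 5 ug/L, a level suggested to be potentially dangerous"
        elif cobalt_ug_per_l < 60.0:
            return "at or above the British regulator's 7 ug/L follow-up threshold for MoM patients"
        else:
            return "in the range reported in arthroprosthetic cobaltism case studies"

    print(interpret_serum_cobalt(8.2))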

CHAPTER 8

COPPER
Paul Valencia Scattolin

Introduction

Copper is the twenty-sixth most abundant naturally occurring element in the earth's crust, accounting for an estimated fifty parts per million. Copper forms a wide variety of compounds because it can exist in four oxidation states and occurs as twenty-nine isotopes, most of which are radioactive. The most common forms of copper minerals found in nature are two copper sulfides, two copper carbonates, one copper oxide, and pure copper. Chalcocite (Cu2S) is a dark metallic crystalline copper sulfide mineral. It is one of the most desirable copper minerals to mine because of its high copper content. Chalcopyrite (CuFeS2) is a golden metallic crystalline copper iron sulfide mineral that resembles gold in

appearance. Azurite [Cu3(CO3)2(OH)2] is a deep blue crystalline copper carbonate hydroxide mineral. Because of its rich blue color and the ease with which it can be ground, azurite was extensively used as a pigment in old civilizations and has occasionally been used in jewelry. Malachite [Cu2CO3(OH)2] is a copper carbonate hydroxide crystalline mineral with opaque to light green bands. Malachite was also used extensively as a pigment in old civilizations and has been popular for jewelry making and carved art. Cuprous oxide (Cu2O) is a reddish solid mineral that arises from the oxidation of pure copper; since antiquity, it has been used as a pigment, but today it is most commonly used in marine paints because of its antifouling and antifungal properties. It also appears in Benedict's test as the reddish precipitate whose formation signals a positive result when the solution contains reducing sugars. Pure copper, or native copper, is a lustrous, reddish, corrosion-resistant, malleable metal with exceptional abilities to conduct heat and electricity. Copper is classified as a transition metal on the periodic table, located in group 11 and period 4 with assigned atomic number 29. It is one of the few metallic elements that occur in nature in a pure state. Because of its pure occurrence in nature, copper metal has been excavated and manipulated since the time of ancient civilizations for the creation of artifacts such as tools, weapons, and jewelry. Copper's natural and versatile physical properties have led to its continuous use since its discovery. As human intellect progressed, so did the use of copper. In fact, copper was the first metal to be combined with other metals to make alloys such as brass and bronze. In today's societies, copper remains extensively used in various industries. In the jewelry industry, copper is one of the primary metals in the gold alloys used in modern jewelry (measured in karats), but pure copper

jewelry remains popular in some cultures throughout the world. In the architecture and construction industries, pure copper is primarily used in electrical wiring, plumbing, and roofing. An ultra-strong alloy of copper, zinc, and tin is used by the firearm industry to make guns and cannons. In the electronics industry, pure copper is vital in the making of many components, from electric motors and circuit boards to a recent innovation: inkjet-printed copper nanoparticles in flexible electronics. The industrial and domestic applications of pure copper, copper alloys, and other copper compounds are boundless in our modern lives. However, the most important of copper's roles are in living organisms. Two forms of copper exist in biological systems, copper(I) and copper(II), which refer to its oxidation state. Copper(I), or cuprous ion (Cu+), is unstable in neutral pH environments, but it can be further oxidized into copper(II). Copper(II), or cupric ion (Cu2+), is stable in alkaline aqueous environments, but it can be reduced back to copper(I). Copper's ability to cycle between copper(I) and copper(II) accounts for its importance in living organisms. Copper is considered an essential trace mineral in human physiology, necessary for many important biochemical reactions. Several studies have confirmed that copper is vital for fetal and brain development, infant growth, erythrocyte and leukocyte maturation, blood glucose and cholesterol homeostasis, bone and tissue integrity, myocardial contractility, and immunological mechanisms. In addition, copper plays a role in the formation of skin pigments and hair follicles. Copper is also needed as a cofactor by major antioxidant enzymes like superoxide dismutase and ceruloplasmin. According to the FDA, the current established copper dose is 2 mg/day for people over four years of age, based on a two-thousand-calorie diet (54).

Copper Dietary Sources

The concentration of copper in food and water consumed by humans can be highly variable. During infancy, humans acquire copper from breast milk. Even though breast milk is considered a poor copper source, it is the only natural source of copper available for newborn babies. For this reason, copper is stored prenatally in the fetus's liver to sustain adequate copper nutrition in infants during the first few months of life. As an alternative to breast milk, infant formulas contain different copper concentrations, depending on the needs of the infant and usually dictated by whether the infant was full-term or premature. The current FDA infant formula nutrient requirements set the minimum copper concentration at 60 µg/100 kcal of formula. Copper and other mineral concentrations in drinking water depend entirely upon the water source. Natural water sources can pick up minerals from the soil and surrounding environment. Thus, mineral content in the soil and environment can be proportional to mineral content in the water. Natural, unpolluted freshwater has very little copper because of its neutral pH. However, it has been found that acid rain can reduce the pH of water to four or lower, causing an increase in free and chelated copper concentrations in lake water. This is why we have government entities that regulate water flow, pH, and content in order to provide safe drinking water for the public, although this does not necessarily occur all the time. There are several factors that influence copper concentration in potable water. Water is commonly conducted through copper metal pipelines, and copper metal is sensitive to acidic conditions, so acidic potable water has a higher copper concentration than neutral water. Also, some municipal water agencies have been known to add copper salts to the water in an effort to control algae

growth in the pipelines. As a result, the copper content in tap water constitutes an important copper source in populations with access to municipal water. To an extent, an average adult can ingest 13–50 percent of their daily adequate copper dose by drinking 2 liters of water, assuming a copper concentration of 0.1–0.5 mg/L in the water. Perhaps this is why copper deficiency is uncommon in adults, and why infants and young children can overconsume copper from drinking water by virtue of their smaller body sizes and weights. Nevertheless, food rather than water accounts for the majority of copper consumption in adults. It is known that seawater has a higher copper concentration than freshwater, and this concentration gradually rises as water depth increases. Because of this, creatures harvested from the ocean floor are rich in copper, especially lobsters, crabs, oysters, and some fish species. Cattle muscle meats contain insignificant amounts of copper, whereas cattle organ meats are very rich in copper, especially the liver. Among plants and seeds, the richest in copper content are shiitake mushrooms, kale, avocado, spinach, brown rice, potato, cashews, chickpeas, cocoa, sesame seeds, quinoa, almonds, lentils, and raisins. Regardless of the source, copper absorption can range from 15 to 97 percent. As a result, copper concentrations need to be taken into consideration because copper deficiency and copper toxicity can potentially cause physiological changes that are implicated in the development of chronic diseases (55).

Copper Deficiency

Copper deficiency in humans can be acquired or genetic. An acquired copper deficiency can be the result of low copper storage, insufficient intake, excess excretion, malabsorption, increased demand, or a combination of any of these. Although acquired copper deficiency occurs mainly in

infants, it is a pathology seen in children and adults as well. Premature infants are at higher risk of developing a copper deficiency because of lower copper storage in the liver and higher physiological demand for the mineral. Infants exclusively fed on cow's milk receive negligible levels of copper because of the modest copper content of cow's milk compared to breast milk. In children, the most common cause of copper deficiency is simply malnutrition, yet consuming other substances can impact copper metabolism and eventually lead to copper deficiency. For example, high oral doses of iron, zinc, ascorbic acid, penicillamine, and alkali therapy are known to decrease copper absorption, which can result in copper deficiency. Other predisposing factors of copper deficiency from malabsorption are short bowel syndrome, cystic fibrosis, celiac disease, and tropical and nontropical sprue. Copper depletion from excess excretion occurs during episodes of prolonged diarrhea and excessive bile loss (56). Whatever the cause of copper deficiency, the most pronounced clinical manifestations are anemia and bone disease. The anemia is believed to be the result of defective iron mobilization due to low ceruloplasmin activity. Ceruloplasmin is a copper-carrying enzyme involved in the oxidation of iron from ferrous (Fe2+) to ferric (Fe3+). The bone disease is related to observed abnormalities in the bone marrow, such as maturation arrest of myeloid precursor cells, presence of ringed sideroblasts, vacuolization, and megaloblastic changes in erythrocytes. Consequently, it is manifested as osteoporosis, epiphyseal separation, bone fractures, cupping and fraying of the metaphyseal region of the bones with spur formation, and development of the subperiosteal bone. Less pronounced clinical manifestations include sensitivity to infections, hypotonia, diminished growth, and hypopigmentation of the hair. Other less common clinical manifestations that are linked to copper deficiency include decreased antioxidant activities, anomalies of the

cardiovascular system, increased blood cholesterol, and decreased glucose tolerance (56). Copper deficiency can also occur from genetic disorders. The most common of these is Menkes syndrome, an X-linked recessive trait responsible for poor copper absorption and distribution throughout the body. Children with this particular disorder experience many symptoms, including growth retardation, arterial dilatation and tortuosity, swollen veins, hypothermia, lax skin, hair depigmentation with abnormal spiral twisting, osteoporosis, bone fractures, abnormal metaphyseal and wormian bone synthesis, retinal dystrophy, and severe central nervous system (CNS) impairment. Brain pathological studies of patients with Menkes syndrome have revealed major degeneration of the brain and cerebellum with notable changes in Purkinje neurons. The damage to the CNS manifests as ataxia, seizures, and mental retardation. Unfortunately, because of the degree and number of symptoms, children with Menkes syndrome often die within the first six years of life (57).

Copper Toxicity

Even though copper has been extensively used for many generations, its toxicity has only recently been recognized. The discovery of inherited copper disorders such as Menkes and Wilson's diseases spurred interest in learning more about the physiological effects of copper in the human body. Studies soon revealed that excessive copper exposure can lead to adverse physiological effects that culminate in copper toxicity. It is also known that the more soluble the copper compound, the more toxic it is. For instance, copper sulfate and copper chloride are more toxic than copper oxide and copper hydroxide. Animal studies suggest that copper toxicity has the greatest effect in the forestomach, liver, and kidneys, which are the major organs involved in copper absorption and excretion. Other major systems

affected by copper toxicity are the immune, skeletal, and central nervous systems. Despite copper's beneficial ability to oxidize and reduce, its oxidation potential may also be implicated in its toxicity. In high concentrations, copper causes oxidative degradation of lipids by creating free radicals that end up disrupting the integrity of cell membranes. Human case reports indicate that ingesting twenty to seventy grams of copper salts in one dose induces abdominal pain, nausea, vomiting, diarrhea, dizziness, headache, difficulty breathing, tachycardia, hematuria, hemolytic anemia, gastrointestinal bleeding, kidney failure, liver failure, and possibly death. Acute exposure to high copper concentrations in drinking water is characterized by stomach discomfort, nausea, and vomiting. These symptoms occur in adults drinking water with a copper concentration of approximately 5 mg/L. Chronic copper poisoning slowly progresses to liver failure at an approximate dose of 30–60 mg/day. Copper poisoning through skin contact has not been found to cause disease. Even so, traces of copper can be absorbed through the skin from wearing copper jewelry and from using ointments with high copper concentrations. One study reported that wearing a copper metal bracelet can account for up to 13 mg per month of copper intake through the skin in the form of glycine-copper complexes dissolved in sweat. In extreme occupational copper exposure through contaminated air particles, copper absorption was estimated at 200 mg per day, which may lead to copper toxicity detectable in serum copper levels. However, precise threshold concentrations for acute or chronic copper intoxication are not yet well established. While diagnosis of copper deficiency is relatively simple, copper toxicity is more challenging to diagnose. Elevated serum copper and ceruloplasmin levels are not uniquely associated with copper toxicity. Increased concentrations of serum copper and ceruloplasmin can be prompted by

infections, inflammation, pregnancy, and other biological processes. Concentrations of copper-containing enzymes like superoxide dismutase, ceruloplasmin, and cytochrome c oxidase are not good indicators of copper toxicity because they can also be induced by certain physiological processes. The most reliable diagnostic test for copper toxicity is to measure hepatic copper concentration, which requires an invasive surgical procedure that is usually only performed when copper poisoning is suspected.

Wilson's Disease

Copper toxicity can also be the result of certain genetic disorders. The predominant genetic disorder of copper toxicity is Wilson's disease, also known as hepatolenticular

degeneration. Wilson's disease is an autosomal recessive inherited disorder mapped to chromosome 13 that causes copper accumulation in tissues and organs, mainly the liver and brain. The incidence of this disorder was calculated to be one in thirty thousand, with an estimated heterozygous carrier frequency of one in ninety. Initial clinical manifestations can begin anywhere from early life through old age. The trigger is a mutation in the Wilson ATPase gene, though a correlation between the mutation and disease severity has not been identified. This genetic condition is a copper transport disorder that inhibits copper incorporation into ceruloplasmin, which reduces copper excretion. As a result, copper accumulates in hepatocytes until their capacity is exceeded and lysis takes place. Copper residues then diffuse systemically into the blood and affect other organs. Therefore, Wilson's disease may commence with acute hepatitis that culminates in fulminant hepatic failure or with chronic hepatitis that culminates in cirrhosis. The damage to the liver can provoke the development of jaundice due to high blood bilirubin levels. Liver damage from copper deposition is also linked to Kayser-Fleischer rings, a dark brown ring of copper deposits at the outer margin of the cornea, encircling the iris. The rings are present in 90 percent of patients suffering from active Wilson's disease. Presenting symptoms from encephalopathy include slowed movement, tremor in the upper extremities, behavioral changes, and diminished mental capabilities (58). Other less common symptoms observed in people with this syndrome are renal abnormalities and intravascular coagulation. It is not uncommon for patients suffering from Wilson's disease with mild symptoms to go undiagnosed. Marker enzymes of liver damage are not reliable for diagnosing Wilson's disease because serum alkaline phosphatase activity remains low, even in cases of severe hepatic damage, and aminotransferase activity levels do not noticeably elevate. Key indicators of Wilson's disease are

low serum ceruloplasmin levels with elevated copper levels in serum and urine. Histochemical examination from a liver biopsy and genetic testing can provide a diagnostic confirmation of Wilson’s disease.

Renal Effects of Copper

Most patients with copper intoxication show early renal damage by developing oliguria or anuria. In some of these patients, the urine is noted as being dark brown or reddish, which are signs of hematuria. Acute or continuous chronic copper intake ultimately results in renal failure. Comparisons between copper-intoxicated patients with and without renal failure have been conducted. Surprisingly, all patients whose kidneys fail experience acute intravascular hemolysis. However, the exact mechanisms of renal failure after an acute intravascular hemolysis episode are not fully understood. Studies show that free hemoglobin in plasma above 200 mg per 100 ml is toxic to the renal tubular cells. Animal studies concluded that copper deposition in the kidneys occurs right after an animal experiences a hemolytic crisis; it is suspected that copper released from hemolyzed erythrocytes gathers in the tubular epithelium of the kidneys, resulting in renal lesions. Histological kidney examination of patients with acute tubular necrosis has revealed loss of the epithelial lining in the tubules, with hemoglobin casts, scattered inflammatory cells, and benign cell proliferation indicating tissue regeneration.

Gastrointestinal Effects of Copper

Gastrointestinal symptoms are among the first to occur from copper intoxication because copper is highly absorbed in the intestines. Clinical studies of mild copper toxicity have shown that copper intake from drinking water in concentrations greater than 3 mg/L is directly associated

with diarrhea, nausea, abdominal pain, and vomiting. Other studies of acute copper toxicity indicate that some patients suffer from a severe burning sensation in the epigastric region of the stomach, and most were found to be experiencing gastrointestinal bleeding, observed as hematemesis and melena.

Pulmonary Effects of Copper

As previously mentioned, copper is a vital cofactor in superoxide dismutase and ceruloplasmin. Superoxide dismutase is an enzyme found in the cytosol of eukaryotic cells, and ceruloplasmin is an enzyme found in plasma. Both of these enzymes are thought to play a significant role in providing protection against oxygen toxicity. Animal studies of copper-deficient rats have confirmed a decrease of 44 percent in superoxide dismutase activity and a decrease of 94 percent in plasma ceruloplasmin concentration compared to control rats. The end results have been as expected: the copper-deficient rats exhibited pulmonary toxicity with lung edema under a normobaric 85 percent oxygen air concentration. In addition, other copper-deficient rats were exposed to a hyperbaric 100 percent oxygen air concentration. Under both atmospheric and oxygen concentration conditions, the mortality rate of copper-deficient rats increased, with a decreased survival time in rats exposed to hyperbaric-hyperoxia conditions. Another study conducted in mice with elevated levels of human copper-zinc superoxide dismutase in the lung further confirmed the necessity of copper and the antioxidant properties of superoxide dismutase. This study reported increased survival rates with decreased morphologic lung damage in mice with elevated superoxide dismutase exposed to hyperoxia conditions. The lungs of the mice did not present signs of edema or hyaline membrane formation and were noted to have a decrease in neutrophils. The study

concluded by stating that increased levels of copper-zinc superoxide dismutase in the lungs decrease pulmonary damage from oxygen toxicity.

Hematologic Effects of Copper

Hematological conditions from copper deficiency and toxicity can be tricky to diagnose because they both cause anemia. Copper deficiency causes normocytic hypochromic anemia, and copper toxicity causes hemolytic anemia. Both conditions decrease the number of erythrocytes in circulating blood, but they do so in different manners. Erythrocytes and leukocytes need copper for the metabolic activities of proliferation and maturation. Thus, copper deficiency inhibits these processes, resulting in low erythrocyte and leukocyte counts, which manifests as normocytic hypochromic anemia and immunodeficiency. However, copper toxicity reduces the number of erythrocytes by inducing intravascular hemolysis. This is illustrated in blood examinations that show drastic drops in hemoglobin and hematocrit levels with increases in plasma hemoglobin, reticulocytes, and bilirubin levels. Peripheral blood smears show crenated, fragmented, and contracted erythrocytes, and urine exams show hemoglobinuria. Excess copper in erythrocytes damages the cell membrane and denatures hemoglobin, causing the formation of Heinz bodies within the cells. It also interferes with several metabolic processes involving various cellular enzymes like catalase, adenosine triphosphatase, glucose 6-phosphate dehydrogenase, and glutathione-disulfide reductase.

Dermatologic Effects of Copper

Copper content in the body can affect the skin in different ways. For instance, jaundice is a disorder characterized by yellowish skin and eye pigmentation caused by high blood bilirubin levels due to hepatic damage from copper toxicity.

In this scenario, excess copper is the main instigator for the dermatologic effects seen in jaundice. Copper deficiency is also implicated in the development of skin disorders, but a clearer understanding of this requires a closer look at the mechanisms involved in human skin pigmentation. Melanocytes are specialized cells that generate intracellular organelles called melanosomes. These organelles synthesize and accommodate melanin, the substance responsible for skin pigmentation. They are able to perform this function because they contain most of the enzymes that regulate melanin production, and some of these enzymes need copper. For example, tyrosinase is a copper-dependent enzyme and is one of the most important enzymes required for melanin synthesis. Without copper, tyrosinase is not functional, and its disruption causes oculocutaneous albinism type 1 (59).

Treatment for Copper Exposure

In general, copper intake is rarely harmful to humans because we have evolved the ability to regulate copper absorption and excretion through homeostatic mechanisms. Yet when copper excess surpasses homeostatic capabilities, treatment is essential. Treatment choice depends on the degree of toxicity and other factors. Copper toxicity from excessive intake or Wilson’s disease can be treated with pharmaceuticals that inhibit copper absorption and others that enhance copper excretion. Chelation therapy is the current treatment of choice, with D-penicillamine, zinc sulfate, or zinc acetate as the agents used. Zinc agents are preferred because they stimulate metallothionein, a copper-binding protein in gut cells; because these cells are constantly shed and eliminated in the feces, the copper bound within them leaves the body as well. This effectively blocks copper absorption in the intestines, which are the major site of copper absorption in the body. Overall, copper-rich nutrients and excessive

copper exposure should be avoided when suffering from copper toxicosis.

CHAPTER 9

ZINC TOXICOLOGY Alexandria Lufting

Introduction

Zinc ranks twenty-fourth among the most abundant elements found in the earth’s crust. This transition metal is bluish or silvery white in color, and it develops a white film when it comes into contact with moisture. Zinc has an atomic weight of 65.38 amu, an atomic number of 30, and five stable isotopes. Zinc is a component of alloys and is often combined with copper to create brass. It is also used to deter the rusting of iron and steel via galvanizing, and it is used to produce die castings for various industries. Zinc derivatives are used in a wide variety of products: zinc oxide is used in the production of cosmetics,

rubber, soaps, and batteries, and zinc sulfide is used to manufacture luminescent paints. Zinc is required for a variety of physiological activities such as cellular metabolism and division, the synthesis of proteins and DNA, and proper functioning of the immune system. It also plays an integral role in the catalytic activity of hundreds of enzymes, our growth during developmental periods, and our senses of smell and taste. Recently, a study done at UC San Francisco found that zinc plays an integral role in the formation of kidney stones (60). The study examined fruit fly kidney stone formation, which is similar to human kidney stone formation. Results indicated that lowering the concentration of zinc produced a drastic change in urinary oxalate, and therefore a change in kidney stone formation. Another study done in Uttar Pradesh, India, focused on a diarrhea treatment program. Diarrhea is a major cause of death in children under five in such areas. Zinc and oral rehydration solution (ORS) are staples in this treatment, and private practitioners were evaluated by whether or not they prescribed these medications to children with diarrhea. ORS was prescribed by 77.3 percent of these practitioners, and zinc was prescribed by 29.9 percent. Antibiotics and antidiarrheals were also prescribed (61.9 percent and 17.5 percent, respectively). The study concluded that more training and knowledge is needed, especially in areas that are hard to travel to, in order to decrease the amount of antibacterials prescribed and increase the use of zinc and ORS.

Dietary Sources, Deficiency, and Toxicity

The recommended daily intake of zinc varies by age and gender. For both male and female children ranging from seven months to three years, the recommended intake is 3 mg. It increases with age, averaging out to 5 mg for children aged four to eight, and 8 mg for children aged nine to

thirteen. A difference in recommendations due to gender begins to be seen in individuals aged fourteen to eighteen, with males requiring 11 mg, while females require 9 mg. For adult males, it remains at 11 mg, and for adult females it decreases to 8 mg. In the United States, recent zinc intakes from food have averaged higher than recommended, at 14 mg for men and 9 mg for women (61). The majority of exposure to zinc comes from dietary sources because a wide array of foods contains it. Oysters have the highest zinc content per serving, with a three-ounce serving containing 74 mg of zinc. In the United States, the majority of our zinc intake comes from poultry and red meat, such as a three-ounce beef patty, which contains 5.3 mg of zinc. Other foods that provide adequate quantities of zinc include some varieties of seafood (e.g., lobster and crab), dairy products, nuts, beans, fortified breakfast cereals, and whole grains. Whole grains, dairy products, and breakfast cereals provide less absorbable zinc because of the presence of phytates, which have a propensity to bind to zinc. Dietary supplements can contain different forms of zinc such as zinc sulfate, zinc carbonate, zinc gluconate, and zinc acetate, each with various percentages of elemental zinc. At this time, there are no known differences in absorption rates, bioavailability, or tolerability between different forms of zinc. Other sources of zinc include homeopathic remedies for cold prevention and acne treatment, as well as some cold lozenges, nasal sprays, and immune support supplements (61). Zinc is also an ingredient in some adhesive creams. An overabundance of zinc over time can result in damage to the nerves in the feet and hands. Some cases of zinc toxicity have resulted from the overuse of zinc-containing denture creams; individuals were reported to have used two tubes per week. According to package directions, one tube should last up to eight weeks. Such denture adhesive creams now include illustrations of the proper amount of

adhesive to use, or they have replaced zinc as an ingredient altogether. Dermal exposure sources include zinc oxide topical creams, sunblock, calamine lotion, and deodorants that contain zinc chloride. Some antidandruff shampoos also contain zinc pyrithione. Because zinc is an integral essential trace element, zinc deficiency can lead to a variety of negative health effects; it typically arises from inadequate dietary consumption but can also be secondary to other illnesses. Symptoms of zinc deficiency include delayed growth, impaired neuronal development and sexual maturation, lesions of the skin and eyes, loss of appetite, diarrhea, and alopecia. Acrodermatitis enteropathica, which occurs because of the malabsorption of zinc in infants, results in diaper and facial rashes, delayed growth, impaired healing of wounds and infections, diarrhea, and early death if no treatment is applied. In adolescents, deficiencies in zinc can lead to delayed growth and puberty (as well as dwarfism), changes in taste and weight, tremors, and emotional volatility. Death or lymphopenia can occur because of severe infections without treatment. Groups at risk for zinc deficiency include women who are pregnant or lactating, premature newborns, alcoholics, the elderly, children residing in developing countries, and patients who receive parenteral nutrition. Vegetarians and vegans may also be at risk for zinc deficiency because of the lower levels of zinc found in vegetables compared to meat and seafood. Many also have a diet that is high in crude fiber, which contains phytates that decrease the body’s ability to absorb zinc from dietary sources. According to the Food and Nutrition Board and the Institute of Medicine, the upper intake level, or the largest amount that can be consumed without resulting in negative effects to an individual’s health, is 40 mg per day for male and female adults. Acute toxicity is uncommon, but has been reported to occur within a time span of thirty minutes

of consuming 4 g of zinc gluconate, which contains 570 mg elemental zinc. The use of zinc-galvanized utensils or the ingestion of acidic foods and drinks stored in zinc-galvanized containers has also been known to lead to zinc toxicity. Symptoms of acute toxicity include diarrhea, nausea, abdominal cramping, vomiting, and headaches. Chronic toxicity can occur via ingestion of 150–450 mg of zinc per day. Studies have also reported chronic toxicity occurring over a ten-week period in individuals who consumed 60 mg per day, as well as in individuals who ingested 80 mg per day during a period of about six years. Symptoms include impaired iron and immune system function, as well as reduced levels of copper and high-density lipoproteins.

Toxicity—Metal Fume Fever

Another possible route of exposure that can lead to negative health effects occurs through inhalation. This is most common in those who work in industrial occupational settings (zinc galvanizing, smelting, welding), and it is referred to as metal fume fever (MFF). Zinc oxide fumes, as well as fumes from other metals like iron, magnesium, and copper, contribute to this condition. Symptoms arise within about three to eight hours of exposure and include excessive sweating, fatigue, muscle soreness, coughing, dyspnea, chest pain, chills, and fever. MFF typically lasts one to four days and is more likely to develop upon return to these work settings after some time away, such as after vacation or a weekend. The pathophysiology of MFF is still unknown but is thought to be due to cell lysis resulting in the release of endogenous pyrogen. Respiratory symptoms are also followed by an increase in the number of bronchiolar leukocytes. The Occupational Safety and Health Administration (OSHA) has set acceptable exposure to zinc oxide fumes and dust at 5 mg/m3 in an eight-hour workday period (at forty hours per week) (62).
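As a rough check of these figures, the elemental zinc content of zinc gluconate follows from the molar masses involved, and the OSHA limit can be translated into an approximate mass of zinc oxide inhaled per shift. The air volume used below (about 10 m³ of air breathed over an eight-hour workday) is an assumed round figure for illustration only, not a value taken from the studies cited above.

\[
\frac{65.38\ \mathrm{g/mol\ (Zn)}}{455.7\ \mathrm{g/mol\ (zinc\ gluconate)}} \approx 0.14, \qquad 4\ \mathrm{g} \times 0.14 \approx 0.57\ \mathrm{g} = 570\ \mathrm{mg\ elemental\ zinc}
\]

\[
5\ \mathrm{mg/m^{3}} \times 10\ \mathrm{m^{3}\ (assumed\ air\ volume\ per\ eight\text{-}hour\ shift)} \approx 50\ \mathrm{mg\ zinc\ oxide\ per\ workday}
\]

The first calculation merely confirms the 570 mg figure quoted above; the second is only an order-of-magnitude illustration of what the workplace limit permits.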

Pathophysiology

The pathophysiology of zinc toxicity is not fully known. It is dependent on such variables as the exposure route and form of zinc. Zinc toxicity is also relatively rare, especially in comparison to zinc deficiency. In animals that have ingested substances high in zinc (such as pennies minted from 1983 onward, batteries, or vitamin supplements), free zinc is released from these substances in the stomach because of low pH. Caustic, soluble zinc salts form, get absorbed from the small intestine, and migrate to the liver, kidneys, muscles, bones, prostate, and pancreas. These salts are corrosive and irritate tissues, impede the metabolism of other ions, and prevent the production and function of red blood cells (63). One study on olfactory neuron cell lines in rats found that exposure to zinc gluconate intranasal gel, which is often used as a treatment for colds, can lead to pyroptosis (an inflammatory form of cell death) in the olfactory neurons (64). Zinc oxide nanoparticles, which are present in the food industry, were found to provoke an inflammatory response and prompt the production of reactive oxygen species (ROS) within intestinal cells in yet another study. High ROS levels damage cell DNA, interrupt the cell cycle, and ultimately lead to cell death. Other reports suggest that zinc accumulation within cells can further lead to apoptosis by activating potassium channels and p38, a pro-apoptotic molecule. Chronic zinc toxicity has a negative effect on copper levels and can lead to copper deficiency. High levels of zinc increase metallothioneins, which preferentially bind to copper and create a complex that then gets excreted. Copper deficiency can lead to anemia because copper is required by the enzymes ceruloplasmin and cytochrome c oxidase, essential components of heme synthesis (65). Plasma zinc measurements do not provide a sensitive gauge of zinc concentration within the body. Zinc balance

determinations, or the measurement of zinc-dependent biomarkers such as thymulin or metallothionein, may prove more dependable. Other methods of analysis involve testing samples of hair, nails, urine, and (in the case of children) lost teeth. Serum and urine are the best indicators of zinc toxicity because hair samples can be easily contaminated. In clinical laboratories, zinc is evaluated via FAAS (Flame Atomic Absorption Spectroscopy), ICP-AES (Inductively Coupled Plasma Atomic Emission Spectroscopy), and ICP-MS (Inductively Coupled Plasma Mass Spectrometry). Low zinc levels in urine and serum are indicative of zinc deficiency (66).

Clinical Presentation

The clinical presentation of zinc toxicity can include symptoms in the following categories.

Gastrointestinal • Abdominal pain and cramping, diarrhea, epigastric pain, nausea, vomiting.

Hematologic • Symptoms arise from copper deficiency due to zinc toxicity. This includes a decrease in the synthesis of heme, reduction of copper and ceruloplasmin in serum, reduction of high density lipoproteins, and hyperglycemia. Leukopenia, anemia, neutropenia, and an increase in plasma cholesterol and low-density lipoproteins can also occur.

Dermatologic/Ocular • One study on rabbits, guinea pigs, and mice noted skin irritation after exposure to zinc chloride. This included inflammation of the superficial dermis and epidermis,

hyperkeratosis, parakeratosis, and follicular epithelial acanthosis. • Similarly, there was a case report in which a soldering paste consisting of 30 percent zinc chloride accidentally got into a plumber’s eye. This resulted in a loss of visual acuity, conjunctival swelling, hyperemia, bullous keratopathy, hemorrhaging, lens spotting, and corneal opacity. The majority of these symptoms dissipated after six weeks, except for lens cloudiness, which lasted for more than a year.

Neurological • Focal neurological deficit, lethargy, headaches.

Cardiovascular • Abnormal cardiac function due to zinc-induced copper deficiency.

Respiratory • Symptoms include coughing, pulmonary inflammation, hyperpnea, dyspnea, and chest pain.

Treatments

Treatments can vary depending on the amount of zinc a patient has been exposed to and in what form. Typical first steps involve stabilizing the patient using oxygen, fluids, or blood transfusions. In instances of zinc toxicity via ingestion in animals, H2-blockers and proton pump inhibitors can be applied to prevent zinc salts from forming in the stomach, as well as gastroprotectant therapy using sucralfate to help prevent gastric ulcers. In these cases, diuresis can also be used to halt hemoglobinuric nephrosis and encourage excretion of zinc from the kidneys. High levels of zinc toxicity may call for the use of a chelating agent, although this is debatable as it may increase intestinal zinc

absorption. One of the main chelating agents is calcium disodium ethylene diaminetetraacetate (CaNa2EDTA). Other commonly used chelating agents include diethylenetriaminepentaacetic acid (DTPA), ethylenediaminetetraacetic acid (EDTA), and dimercaprol or British Anti-Lewisite (BAL). One study showed that ethyl pyruvate (EP) can successfully chelate zinc and encourage NAD replenishment, which can protect against neurotoxicity resulting from zinc toxicity. Serum zinc levels should also be monitored during chelation to ensure appropriate treatment duration (67).

PART 3

METAL USED IN MEDICINE WITH A POTENTIAL FOR TOXICITY Nichole Coleman PhD, Lyia Yang PhD, Aisha Castrejon,

John Mateo, Harold Galvez, Brittany de Lara, and Henry Chen

Self-assembled monolayers of gold nanocages.

Gold nanocages are promising contrast agents for medicinal use.

  Chapter 10 Lithium Chapter 11 Aluminum Chapter 12 Platinum Chapter 13 Titanium Chapter 14 Gold Chapter 15 Tellurium

CHAPTER 10

LITHIUM Aisha Shaunte Castrejon

Dental model and two teeth made in lithium disilicate.

Introduction

Lithium has a long history of medicinal use, beginning in the late 1800s as a treatment for different mental disorders. Even today, it

is still used for medicinal purposes. However, lithium has now been found to be useful in many other applications, like powering the electronics and transportation that people use every day. It has become so common in public consumption that tons of lithium are produced every year all over the world. The drawback with lithium use is that it can cause toxicity in people who are exposed to high amounts of the metal, causing devastating and sometimes permanent side effects. Fortunately, the levels that people are exposed to or ingest can be controlled through careful monitoring of lithium levels in the blood and treatment of the toxicity.

Discovery and History of Lithium

The element lithium is noted on the periodic table as Li and has an atomic number of 3. It is a soft, silver-colored metal that is found mostly in the solid form with very low density. Other low-density metals like magnesium or aluminum are alloyed with lithium to improve its strength. It has a melting point of 180.5ºC, a boiling point of 1342ºC, and an atomic mass of 6.94 atomic mass units (amu). Lithium usually must be kept in conditions that are cool, dry, and away from moisture because it can react with water vigorously; thus, it should be stored in oil. Direct sunlight and heat should also be avoided because lithium is also flammable. Lithium has two stable isotopes, 6Li and 7Li, and once ingested, it is eliminated from the body with a half-life of twelve to seventy-two hours. This metal was first discovered in 1817 by Johan August Arfvedson, a scientist from Stockholm, who found the element lithium dispersed in minerals from riverbeds and igneous rocks. He named the element after the Greek word lithos, which means stone, but he wasn’t able to find a method to separate the metal from the rock itself. He found the element in many lithium-bearing minerals, including spodumene,

lepidolite, amblygonite, and petalite. A scientist from Brazil named José Bonifácio de Andrada e Silva was the first person to discover the mineral petalite in 1790, but he was unaware it contained the element lithium. It wasn’t until 1855 that lithium metal was finally isolated in quantity from its ore through a process called electrolysis, carried out by Robert Bunsen from Germany and by Augustus Matthiessen from Britain. Electrolysis purifies and separates a metal by using an electric current to drive a reaction that would not occur on its own. A scientist named Alfred Baring Garrod from London was the first person to use lithium to treat gout, in 1847. He found that patients with gout had elevated uric acid in their blood and that lithium was able to dissolve the urate crystals found in blood. He also discovered that lithium could treat kidney stones, bladder stones, and gallstones. After this discovery, Frederick Lange, a Danish doctor, used lithium to treat patients with depression and found it successful. Years later, in 1949, John Cade continued studying the medicinal use of lithium and discovered its effectiveness in the treatment of manic depression. He experimented with ten patients using lithium citrate and lithium carbonate against a placebo and found patients were responding to the drug treatment. The original treatment for manic depression was Chlorpromazine and electroconvulsive therapy, which did not have as favorable an effect on patients as lithium. To prevent toxicity, the levels of lithium in a patient’s blood were measured using a Coleman flame photometer, an instrument introduced in 1958. Medicinal use of lithium was finally introduced to the United States in 1960 by Samuel Gershon at the University of Michigan.

Lithium Production

Today, lithium is produced in large quantities from underground reservoirs that contain high amounts of salts. These

reservoirs are called salar brines, and the salts can be found on the surface or beneath dry lake beds. The process begins by pumping the brine to the surface, where it is evaporated over a couple of months. Most companies use this method because solar evaporation is more cost-efficient in the long run. The dry lake beds usually contain over 7,000 ppm of lithium, and once the brine has dried on the surface, the salts are sent to a plant to be separated from other minerals that might also be found in the brine, such as boron and magnesium. After that process, the material is treated with sodium carbonate to form lithium carbonate, which is a fine white powder. The mixture then undergoes the process of being filtered and dried again. After the process is complete, the excess brine is pumped back into the salar. Another method that companies use to separate lithium from igneous rocks involves crushing the rocks and heating them to high temperatures in a kiln, which converts the lithium-bearing mineral into a form from which the lithium can be extracted. Once it cools down, the plant grinds it into a powder and mixes it with sulfuric acid. The process is repeated to remove any other minerals, and then, as in the brine method, sodium carbonate is added and the product is filtered and dried to form the white powder. The top three producers of lithium are Australia, Chile, and China; some of the most well-known companies that produce lithium include Sociedad Quimica y Minera de Chile in Chile, Talison in Australia, FMC in the United States, and Chemetall in Germany. The United States Geological Survey estimated that over thirty-seven thousand tons of lithium are produced each year and shipped to pharmaceutical and technology companies (68). When the product is sent to the pharmaceutical companies, it is compounded into capsules, tablets, or syrup form to treat manic depression and other mental disorders. There are two brand-name formulations, each manufactured by a different

company. One is GlaxoSmithKline, which formulates Eskalith capsules; these are ovoid and appear yellow, green, or blue. The other is ANI Pharmaceuticals, which makes Lithobid tablets; these are round and pink. This medication is also used in patients who experience other conditions, including neutropenia, depression, and recurring headaches. People with certain allergies to food or other medications should not take lithium because there is a chance they may have an adverse allergic reaction.

Lithium Drug Interactions

This drug can also interact with certain foods, alcohol, tobacco, and other medications, which may cause severe symptoms. Some of the medications most commonly used by patients that interact with lithium include Poromperidol, Clozapine, Donepezil, Fentanyl, Loxapine, and Pimozide. When these drugs interact, they can cause diarrhea, dizziness, irregular heartbeats, drowsiness, vomiting, and tremors. Some patients also experience pain in the eyes, ears, nose, fingers, and toes, or a pale blue coloring of the skin. These symptoms are rare and seldom occur, but may still be experienced. Preexisting medical conditions can also interact with the drug. For example, Brugada syndrome, heart disease, kidney disease, encephalopathic syndrome, and goiter can cause diarrhea, sweating, vomiting, dehydration, muscle weakness, and low urine output when lithium is consumed (69). It was also discovered that lithium can have an effect on pregnant women. When this drug is taken in the first trimester of pregnancy, it can cause Ebstein cardiac anomaly in infants, which is a congenital heart disease that can prove to be fatal if it is not properly treated. This disease causes a malformation in the heart, leading to atrialization of the right ventricle and apical displacement of the septal and posterior leaflets of the

tricuspid valve. This allows blood to flow backward from the right ventricle to the right atrium and is known as valvular regurgitation. In embryonic development, the anomaly occurs during the formation of the chordae and tricuspid valve, causing a malformation as the fetus grows. The symptoms include cyanosis, edema, ascites, dyspnea, sudden cardiac death, and fatigue. Other less common symptoms include paradoxical embolism, stroke, bacterial endocarditis, and brain abscesses (70). Doctors can usually diagnose this disease through electrocardiograms, which may show paroxysmal supraventricular tachycardia or atrial flutter with abnormal P waves. Chest radiographs will show a large right atrium and cardiomegaly. Echocardiography can also help doctors diagnose the disease while the patient is still pregnant; a positive diagnosis will show a dilated right ventricle. Fortunately, there are multiple treatments for this disease that will help extend an infant’s life. Drug treatments include diuretics like Furosemide, cardiac glycosides like Digoxin, antibiotic prophylactics, and angiotensin-converting enzyme inhibitors like Enalapril. Sometimes the condition is so severe that surgical intervention is needed. This includes repairing the right ventricle and tricuspid valve or performing a heart transplant. Without treatment, the prognosis is dire for infants. This disease can affect any gender and age, but it seems to be more common in Caucasian females (71).

Medicinal Uses for Lithium

The main use for lithium is to treat the mental disorder manic depression, also known as bipolar disorder. This disease is also associated with attention deficit hyperactivity disorder (ADHD) in children. Bipolarism occurs when a person has sudden changes in emotions, which include mood swings and depression. People will experience times when they feel suicidal, hopeless, sad, or guilty, and they can also

experience other times when they feel energetic and optimistic. Other symptoms include irritability, impatience, impulsiveness, hyperactivity, and talking excessively. The symptoms can last for days or months, with latent periods where a person seems to be stable. There is no specific demographic this disorder affects—it can occur in everyone at any age—but it has been shown to be most common in women. Environmental factors and genetic factors are usually responsible for bipolarism, but it is treatable if caught early enough (72). This disorder is usually diagnosed by a psychologist or psychiatrist. It is tested by using the Young Mania Rating Scale (YMRS) and the Internal State Scale (ISS), which are written and verbal tests to determine mental health. Other tests include laboratory tests to rule out drugs and alcohol as a factor. The prognosis of this disorder is good if the patient stays on medication. The dosage in tablet form that is recommended for adults with acute mania is 600 mg; for long-term mania, it is 300 mg three times a day. The dosage in extended-release tablet form that is recommended for adults with acute mania is 900 mg, and for adults with long-term mania, it is 600 mg. However, dosage may vary from person to person because the medication needs to be constantly monitored and adjusted for each person. Studies have shown that the suicide rate in patients who aren’t treated is about 15–25 percent higher. Unfortunately, lithium treatment doesn’t work for everyone, but there are alternative treatments like Valproate, which can treat rapid-cycling bipolarism. There are also antipsychotics like Risperidone, Olanzapine, or Clozapine. Sometimes the anticonvulsant Carbamazepine is also used as a mood stabilizer. There are also more invasive treatments like electroconvulsive therapy (ECT) for severe episodes that do not respond to medication. Some patients who don’t want to use drugs opt for holistic treatment like natural herbs or simply avoid overstimulation.

Another mental disorder that can be treated with lithium is schizophrenia, which is a chronic and severe condition that affects the way a person views reality and changes the way they feel and behave. Patients are affected by this mental disorder as early as sixteen years old and as late as thirty years old. It is very rare for anyone younger than sixteen to be afflicted by this disorder, but it is still possible. Doctors have broken the symptoms of this disorder into three categories: positive, negative, and cognitive. The positive category occurs when a patient experiences hallucinations, delusions, and sometimes twitching; the patient loses touch with reality. The negative category occurs when patients have a change in emotions or behavior, where they can experience voice changes, less talking, reduced facial movement, and difficulty doing activities. The cognitive category occurs when patients experience changes in memory, difficulty understanding or making decisions, and problems paying attention or focusing (73). It was found that genetic and environmental factors play a role in this disorder. If one family member has had this disorder, it can be passed down from generation to generation. Environmental factors include malnutrition at birth, difficult birth, viruses, or other psychosocial factors that can contribute to this disorder. Scientists have found that a chemical imbalance in the brain is linked to this disorder, in which there is an imbalance in glutamate and dopamine. It has also been found that there may be faulty development of neuron connections in the brain as a child develops and reaches puberty, which is why it is most common in teenagers and less so in children under fifteen years old. There are other treatments for schizophrenia besides lithium, and these include antipsychotics similar to those used to treat bipolarism. There are also psychosocial therapies with a psychotherapist that can help, and a combination of these two treatments makes up a program called Coordinated Specialty Care (CSC).

Lithium Toxicity

However, if a patient does decide to use lithium as treatment, the person is susceptible to lithium toxicity, which can occur accidentally through improper dosing or intentionally through drug abuse. This is why it is so important for doctors to monitor the amount of lithium being dispensed. Toxicity can occur if levels of lithium in blood serum are greater than 4.0 mEq/L, which will cause some kidney dysfunction. Toxicity levels that are greater than 5.0 mEq/L will lead to confusion, seizures, and eventually a coma. When lithium is ingested, it is easily absorbed from the gastrointestinal tract, and it then affects many of the body’s systems, including the endocrine, cardiovascular, central nervous, gastrointestinal, and renal systems. Lower doses of lithium are generally harmless, but an overdose will cause high levels of the metal in the kidneys, thyroid glands, and bones. Normally, low levels are cleared by the kidneys and excreted. The kidneys filter lithium at the glomerulus, a process reflected in the glomerular filtration rate (GFR), but about 80 percent of the filtered lithium is reabsorbed in the proximal tubule. What actually occurs is that lithium can act like sodium and potassium ions on many transport proteins, and it can enter cells through sodium ion channels or sodium/hydrogen exchangers. Lithium will compete with these ions to enter the channels located in the renal tubules (74). Lithium can also blunt the response of the distal tubules and collecting ducts to antidiuretic hormone (ADH), which will cause a water and sodium imbalance and impair reabsorption. This also leads to polyuria, polydipsia, and goiter. Other effects include renal tubular acidosis, nephritis, and hypothyroidism. Lithium ions will enter thyroid cells and block hormones from being released from thyroglobulin, and in turn they will inhibit adenylate cyclase, preventing thyroid-stimulating hormone (TSH) from activating thyroid cells at the TSH receptor.
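For reference, the serum thresholds quoted above can be expressed as mass concentrations using lithium’s atomic mass of 6.94 amu; because lithium circulates as the monovalent Li⁺ ion, one milliequivalent equals one millimole. This is simply a worked conversion of the chapter’s own numbers, not an additional clinical cutoff.

\[
1\ \mathrm{mEq/L\ Li^{+}} = 1\ \mathrm{mmol/L} \approx 6.94\ \mathrm{mg/L}; \qquad 4.0\ \mathrm{mEq/L} \approx 27.8\ \mathrm{mg/L}; \qquad 5.0\ \mathrm{mEq/L} \approx 34.7\ \mathrm{mg/L}
\]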

Lithium causes interruption in intracellular metabolism, which will affect heart cells and neurons by disrupting neurotransmitters and signal transduction (75). The first signs of an overdose are slurred speech, convulsions, and trembling. There are three types of lithium toxicity. The first is acute poisoning, which affects the gastrointestinal tract, causing vomiting, cramping, nausea, diarrhea, and cardiac dysrhythmias. In cardiac dysrhythmias, an electrocardiograph will display T-wave flattening. The second form of toxicity is acute-on-chronic poisoning, which causes similar gastrointestinal tract symptoms as well as neurological symptoms; these include tremors, lethargy, and seizures. The third type of toxicity is chronic poisoning, which will cause failure in renal function and excretion. Chronic toxicity will also cause neurological symptoms similar to those of acute-on-chronic poisoning, but it will lead to cerebellar dysfunction, convulsions, slurred speech, trembling, seizures, and finally a coma. This is also known as the Syndrome of Irreversible Lithium-Effectuated Neurotoxicity, or SILENT. Other symptoms of toxicity include edema, dermatitis, skin ulcers, leukocytosis, aplastic anemia, and nephrogenic diabetes insipidus. The American Association of Poison Control Centers’ national poison data showed that in 2013 there were over 3,000 single exposures to lithium, with over 1,100 of these cases involving people who suffered from toxicity. Five of those cases of toxicity led to death.

Prevention and Treatment for Lithium Toxicity

Lithium toxicity can be prevented with a series of blood tests to monitor the levels of the metal in the blood serum. To do this, a sample of blood is collected from the patient in a lithium-free royal blue top blood vial containing EDTA to

prevent coagulation. Other tests include urinalysis to check renal function and electrolyte levels. If a patient is experiencing toxicity, a urine test will show low specific gravity and a low anion gap. Thyroid panels are also done to check for hypothyroidism, and lumbar punctures are performed to screen for any infection in the central nervous system. Additional tests include CT scans for patients who have suffered seizures or are in a coma, to determine whether there was any trauma to the brain. Electrocardiographs are constantly performed to check for T-wave flattening or other dysrhythmias that lithium toxicity causes in the heart. Treatment of toxicity is mostly supportive therapy; this includes IV fluids to replace water and electrolytes, and sometimes the diuretic Amiloride. Hemodialysis helps rid the body of lithium in patients with renal failure. Another treatment is bowel irrigation, which uses a polyethylene glycol lavage because the colon is where lithium is most readily absorbed. If a patient is found to have been abusing lithium by intentionally overdosing, then the person is referred to a psychologist and is possibly placed on a different medication (76).

Additional Exposure to Lithium

Although toxicity concerns center on those who ingest lithium for the treatment of mental disorders, there are also various exposures to this metal in our daily lives. Lithium is used in electronics and transportation such as cell phones, ceramics, laptops, cameras, toys, pacemakers, clocks, cars, planes, bicycles, trains, and armor plating. It is a key element in the rechargeable and nonrechargeable batteries used in these devices, to which people are exposed every day. Another form of lithium is lithium hydride, which is extremely poisonous and corrosive. Whereas lithium

carbonate may cause cognitive loss, epileptic seizures, tremors, and glycosuria, lithium hydride will cause severe skin burns. Interestingly, the soft drink 7-Up previously contained lithium citrate and was originally named Lithiated Lemon-Lime Soda. It was released weeks before the stock market crash of 1929. The name 7-Up was given by the creator Charles L. Grigg, who was rumored to have named it “7” after the seven different ingredients and “up” after lithium because of its ability to lift a person’s mood. The FDA banned the use of lithium in soda in 1948 because of its side effects. A scientist from Cornell University named Anna Fels has proposed that we add a small amount of lithium to water supplies instead, because she believes that this would lower suicide, rape, and murder rates in the country. However, many studies are needed, and ethical questions need to be answered, before this could be proven true. Regardless, lithium is becoming a metal with many possible applications, and research continues to develop new ways in which this metal is used in medicine and technology (77).

CHAPTER 11

ALUMINUM John Mateo

Aluminum is ubiquitous and used to store a great

number of our standard staples.

Introduction

Aluminum, the thirteenth element in the periodic table, is the most abundant metal in the earth’s crust. A malleable

silver metal, aluminum is also a conductor of heat and electricity. In nature, aluminum is found combined with other metals, such as potassium in the form of potassium aluminum sulfate dodecahydrate (KAl(SO4)2·12H2O). Aluminum has existed in forms that are not bioavailable to humans, but because of acid rain, the amount of aluminum has increased in the biological ecosystem, resulting in damaging effects on fish and plant species. Aluminum has a plethora of uses, mainly in electrical applications, cooking utensils, and alloys in packaging material. The normal daily intake is between 1 and 10 mg for children and adults. This intake arises primarily because more than 90 percent of carbonated drinks, including beer and other alcoholic beverages, are packaged in aluminum cans; human exposure to aluminum comes from food, drink containers, and teas made from plants exposed to the metal.

Medical Applications and Uses for Aluminum

Powdered aluminum salts were used back in Greek and Roman times as a styptic. A styptic is an antihemorrhaging agent that stops bleeding; it works by contracting muscle tissues to seal injured blood vessels. The Greeks and Romans would rub the powder over their wounds, causing the tissues to contract, and it would stop the flow of blood to the wound. The active ingredient used to contain the bleeding is anhydrous aluminum sulfate. The anhydrous aluminum sulfate acts as a vasoconstrictor that slows blood flow to the wound. Today, styptic pencils are commonly applied to small cuts, such as those from shaving. Styptic powder is also used in many veterinary hospitals. Oftentimes when pets get their nails cut too short, a vein located in the center of the nail gets clipped. The damage causes pain and bleeding for the animal. The styptic powder is used to stop the bleeding.

Many stick deodorants found in our local supermarkets contain aluminum; in fact, aluminum is the active ingredient. Powdered aluminum salts, namely aluminum zirconium compounds and aluminum chlorohydrate, have been approved by the Food and Drug Administration for use as antiperspirants, which provide wetness protection to the underarms. When applied to the underarms, the compounds work by blocking the sweat ducts. The aluminum compounds not only reduce the amount of sweat on the skin but also promote a hostile environment for odor-producing bacteria on the skin, thus preventing microbes that thrive in warm, wet conditions from growing. Aluminum hydroxide is another popular compound, and it can be found over the counter in most drugstores. Aluminum hydroxide is the most common aluminum compound used for medicinal purposes today. It is used to treat stomach ulcers, and it can be found in many antacids (like Mylanta and Maalox). Aluminum hydroxide is alkaline, and it is used as an antacid to reduce the acidity of the stomach. This provides temporary relief from heartburn and allows time for the stomach to heal itself from a very acidic environment. Aluminum hydroxide can also allow time for an ulcer to heal. It can be found at many drugstores as tablets, oral gel tablets, and most popularly as liquid syrup. Although aluminum has no physiological purpose in the human body, it can act as an adjuvant that enhances the body’s immune response to foreign antigens. An adjuvant is a substance that can enhance the body’s ability to induce an immune response to a foreign disease agent without actually providing any sort of immunity itself. Aluminum shares similar properties with other essential trace elements like magnesium, calcium, and iron. In some cases, aluminum can be substituted in the place of these essential elements,

provoking an immune response and enhancing the speed of a response. Aluminum adjuvants are used in many inactivated and subunit vaccines, such as those for hepatitis A and B, Haemophilus influenzae type b, and several pneumococcal vaccines (78). Aluminum also plays a role in water treatment systems. Many microorganisms found in natural waters can cause illness and disease if ingested. Common disinfectants like chlorine are used to treat water, but organisms like giardia and cryptosporidium are resistant to such disinfectants. Aluminum sulfate is effective at removing such parasites through a process called coagulation. Coagulation is a process in which small particles bunch together to form larger particles that can then be removed through filtration. Aluminum sulfate is the most common chemical used for the coagulation of pathogenic particles. As a result, a residual amount of aluminum can be found in some of the water we use daily (79). In the past, aluminum was an ingredient in water treatments used during dialysis sessions in patients with kidney failure. This practice has long since been abandoned because aluminum was found to be absorbed into the body during treatment and deposited in the tissues, causing a human dementia syndrome known as dialysis dementia.

Toxicity Associated with Aluminum

Despite the usefulness of aluminum, the toxicity associated with aluminum is quite devastating. The target organs are the lung, the bones, and the central nervous system in humans and other animals. Aquatic species are particularly at risk because aluminum causes damage to gills and other internal organs (80).

Lung and Bone Toxicity

Occupational exposure to aluminum dust can be damaging to lung and bone. Osteomalacia, a condition characterized

by the softening of the bone, has been associated with excessive intake of aluminum-containing antacids in otherwise healthy populations. Osteomalacia is also found in uremic patients exposed to aluminum dialysis fluid. This form of osteomalacia results from the accumulation of aluminum in bone, which interferes with bone mineralization (81).

Neurotoxicity

Aluminum interacts with neuronal chromatin and has been shown to decrease the rate of DNA synthesis and reduce RNA polymerase activity. Pathological changes of neurofibrillary tangles in cell bodies have been associated with aluminum exposure in primates. Aluminum alters calcium metabolism in the brain. It has been shown that the amount of calcium increases after aluminum exposure. Aluminum also competes for calmodulin binding sites and thus impairs the regulation of calcium in neurons (82).

Human Dementia Syndromes

Dialysis dementia is a progressive and fatal neurological syndrome that has been reported in patients on hemodialysis treatment for renal failure. Chelation therapy with desferrioxamine may slow the progression of dementia. The inhabitants of the Mariana Islands in the western part of the Pacific Ocean have an unusually high incidence of neurodegenerative diseases associated with nerve cell degeneration. These islands, namely Guam and Rota, have very high levels of aluminum and manganese in the soil and very low levels of calcium and magnesium. This low concentration of calcium and magnesium, and high levels of aluminum and manganese, in their diet was thought to lead to their high incidence of neurodegeneration (83).

Alzheimer Disease (AD)

Numerous reports have correlated increased aluminum exposure with neurodegeneration resulting in dementia. Alzheimer’s, accounting for more than 80 percent of dementia patients, is the most common type of dementia and is characterized by memory loss and the loss of intellectual abilities. For many years, aluminum has been considered the cause of AD, but the basis for this correlation is a matter of debate. The foundation for this connection is the finding of increased aluminum levels in Alzheimer brains. However, the presence of aluminum may be a consequence of a compromised blood-brain barrier. Also, staining methods from earlier studies may have led to aluminum contamination (84-87). In conclusion, the cause of Alzheimer’s is a matter of debate, but the observation of dementia in dialyzed patients provides a convincing argument that aluminum has an effect on the human nervous system.

CHAPTER 12

PLATINUM

Harold Galvez

Cisplatin in its injectable form.

Platinum and Its Medical Applications

Platinum is a silvery-gray, shiny metal with a high melting point and an ability to resist corrosion over a wide range of temperatures. These, along with other characteristics, make it a practical candidate for many medical applications. Because of its high biocompatibility and its ability as a metal to be shaped into minuscule structures, platinum is often used for both temporary and permanent implants. Unlike base metals, platinum is chemically inactive and will not corrode in the body, decreasing the chance of a patient suffering from allergic reactions to implanted platinum devices. Another advantage platinum offers is that it is able to conduct electricity and can be used as an electrode in medical procedures for diagnosing and treating certain diseases. Its radiopacity allows doctors to inspect the location of the implant during the treatment process using x-ray (88). Platinum’s conductivity is exploited in devices such as artificial pacemakers and implantable cardioverter defibrillators (ICDs). These are implanted in order to correct irregular cardiac rhythms in patients with conditions such as bradycardia or in those at higher risk of sudden cardiac death. While pacemakers aid in sustaining a normal heartbeat, ICDs deliver stronger shocks when a life-threatening, irregular heartbeat is detected. Both types of implants consist of two main parts: (1) a generator that contains a battery and controls the electrical impulses and (2) wires (or leads) that connect the generator to the heart and through which the current flows. The connection between the pulse generator and the leads is usually made of platinum, while the leads themselves contain electrodes made of platinum-iridium alloy. Catheters are flexible tubes used by doctors as a minimally invasive alternative to surgery. In the condition of

atherosclerosis, a treatment process called percutaneous transluminal coronary angioplasty (PTCA), or balloon angioplasty, involves guiding a catheter to the site in the arteries where there is an accumulation of fatty substances. To ensure correct movement of the catheter to the plaque site, a guide wire made of base metal with a platinum-tungsten tip is used by the doctor for easier maneuvering. The platinum-alloy tip also allows the doctor to visualize the placement of the catheter under x-ray. After the catheter is in place, a balloon attached to the end is then inflated to compress the plaque blocking the artery. The balloon also features marker bands made of platinum used to monitor its precise location. After the blockage is cleared, a mesh tube called a stent is placed in the artery site to decrease the chance of restenosis. In the past, these stents were typically made of stainless steel or cobalt-chromium, but recent times have seen a shift toward platinum. In 2014, the manufacturer Boston Scientific received FDA approval for its platinum-chromium coronary stent system (89, 90). Other catheters use platinum components as electrodes in the diagnosis and treatment of cardiac arrhythmias. Electrophysiology catheters are used to assess the electrical system of the heart in order to provide the correct type of therapy. Radiofrequency ablation is a procedure that also uses catheters containing platinum electrodes. The electrical current that passes through the electrodes helps to correct the unnecessary electrical impulses that are the cause of atrial fibrillation. Neuromodulators are instruments that usually contain platinum components in their pulse generator and platinum-iridium electrodes. They possess mechanisms similar to those of heart pacemakers, but instead of sending electrical currents to the heart, they supply them to nerves as well as directly to the brain. In spinal cord stimulation (SCS), or dorsal column stimulation, the platinum electrodes are positioned to send impulses to the spinal cord. This is useful

in treating those with chronic pain and patients who have recently undergone spinal surgery. The stimulation, which can be turned on and off or adjusted, works by blocking the pain signal from reaching the brain. Deep brain stimulation (DBS) situates the platinum electrodes directly in the brain and is used in the treatment of patients with movement disorders such as Parkinson’s disease, as well as epilepsy (91). Weakness of arterial walls can lead to a state of localized arterial swelling called an aneurysm. Platinum wires are used in treating aneurysms in the brain, which are especially difficult to resolve surgically. This method of treatment was first used by Dr. Guido Guglielmi in the late 1980s. Platinum coils are delivered to the location of the aneurysm, where they then encourage blood coagulation around the weakened vessel walls. Cochlear implants help substitute for the function of an impaired inner ear. These instruments are composed of platinum electrodes that are connected to the cochlea, a speech processor and coil, and an implanted component. Sound from the environment is received by the processor, which sends it as a digital signal to the implant. The implant then translates the signal into electrical currents that can be sent to the cochlea via the platinum electrodes in order to simulate natural hearing. Platinum metal has also been used as a covering for iridium that has been irradiated and implanted in the body as a therapeutic agent against tumors. Platinum’s radiodensity blocks radiation from reaching healthy tissues while still allowing the uncovered iridium tip to target the cancerous growth. In 1962, a professor by the name of Barnett Rosenberg discovered that some chemical forms of platinum were able to inhibit the division of living cells. He was studying the effects of electromagnetic fields on bacterial cell growth when he happened upon the inhibitory effects of dissolved platinum coming from the electrodes in his experiment.

Designation of cisplatin as the inhibitory platinum compound allowed for clinical trials to test its potential use for treating cancer.

Structure of cisplatin.

Similar to previous anticancer drugs, cisplatin treatment came with risk of damage to the kidneys. The advancement of hydration techniques resolved this issue and prompted a novel method for treating cancer using platinum compounds. While cisplatin is a successful form of therapy for tumors of the lungs, ovaries, head, and neck, it is most effective against testicular cancer. Because of its poor water solubility, treatment with cisplatin requires intravenous administration. After cisplatin enters the bloodstream, passive diffusion by cells sequesters the drug into the cytoplasm; translocation via active transport is much less observed. Cisplatin works by undergoing a hydrolysis reaction which results in a positively charged molecule. The aquation product then forms crosslinks on DNA, which can cause uncoiling and bending within the DNA structure.
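The aquation step described above is commonly written as the displacement of one chloride ligand by water; the equation below is a simplified sketch of that first hydrolysis rather than a full mechanism.

\[
\mathrm{\mathit{cis}\text{-}[Pt(NH_3)_2Cl_2] + H_2O \;\longrightarrow\; \mathit{cis}\text{-}[Pt(NH_3)_2Cl(H_2O)]^{+} + Cl^{-}}
\]

The resulting positively charged aqua complex is the reactive species that goes on to form the DNA crosslinks described above.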

Physical changes in the DNA structure consequently interfere with DNA replication, transcription, and the metabolic function of the target cell. The platinated site of the target DNA can also be bound by several cellular proteins that can either repair the damage or trigger apoptosis. This formation of a platinum-DNA-protein complex has been proposed as another possible mechanism of action for cisplatin.

Work done at the Institute of Cancer Research and the Royal Marsden Hospital sought compounds with activity similar to that of cisplatin but with less toxicity. By 1986, carboplatin, a newer and safer platinum anticancer drug, was approved. What followed was a second generation of platinum complexes that differed from cisplatin in their chemical structure. Tumor resistance to cisplatin and carboplatin pushed researchers to develop newer analogues such as oxaliplatin. Compared to its predecessors, oxaliplatin is known to work better against numerous colon cancer cell lines, and clinical trials have recently been done to observe its efficacy toward other cancers. Another second-generation drug, nedaplatin, offers greater water solubility while being considerably less nephrotoxic. As scientists sought agents with broader spectrums of activity and even fewer side effects, third-generation platinum complexes such as lobaplatin and heptaplatin came about. More recently, clinical trials were performed on a fourth-generation complex called satraplatin. Unlike its predecessors, which are administered intravenously, satraplatin is intended to be the first platinum-based cancer drug that can be taken orally. Development of nano-counterparts of the platinum-based treatments is also gaining momentum because of their greater bioavailability and selectivity for cancerous cells (92, 93).

Toxicity of Platinum

The type of platinum compound, the route, and the duration of exposure determine the toxicity observed. Platinum metal is biologically inert, but occupational exposure from inhalation of platinum salts has been known to cause respiratory allergies. Skin irritation can also arise from this type of exposure. Soluble platinum salts are responsible for sensitization in individuals. If sensitization to platinum salts is suspected, biological monitoring in the form of a skin prick test can be performed. Nasal instillation has also been used in the past to identify platinum-salt sensitization. In some cases, pulmonary reactions have revealed an immunological response before a skin prick test turns positive. Smoking increases the effect of platinum-salt sensitization, as do existing respiratory complications. True contact dermatitis caused by platinum compounds occurs at a much lower frequency and is considered to be rare; past instances of dermatitis are now thought to have been caused by primary exposure to strong acids and alkalis. The downside to platinum-based anticancer drugs is that they carry a high risk of toxicities. Myelosuppression, nephrotoxicity, ototoxicity, and neurotoxicity have all been linked to cisplatin and its newer counterparts (94-96).

Platinum Exposure

The platinum-group metals comprise the light metals ruthenium, rhodium, and palladium and the heavy metals osmium, iridium, and platinum. They are present at low levels in the lithosphere and can also be found as byproducts of refining other metals. Platinum's average concentration in the lithosphere is estimated at 1–5 mg/kg, originating from ores near the earth's crust, with meteorites as an additional source. It is found in a metallic state or in various mineral states. In regions with high amounts of industrialization, pollution has led to higher platinum concentrations in river sediments. Platinum-containing catalysts, especially those used for ammonia oxidation, are a stationary source of platinum emission. A mobile source of platinum in the environment is the catalytic converters of motor vehicles. Platinum emission from older pellet-type converters ranged from 0.8 to 1.9 µg per km travelled, whereas newer generations tested at emission levels 100–1,000 times lower. Areas with greater automobile traffic were found to have higher levels of platinum in dust and soils, which could eventually contaminate crops. The electrical, glass, and dental industries account for other sources of platinum discharge into the environment, and platinum compounds used in treating cancer are excreted in urine, ultimately ending up in sewage and the environment.

A 1986 study done in Sydney, Australia, observed platinum levels in the diet using food samples from a market. Results showed the highest concentrations of platinum in eggs and offal, followed by meat, grains, fish, fruits and vegetables, and lastly dairy products. Using the Australian Federal Department of Health as a reference for a hypothetical diet, the daily intake for adults was then calculated to be 1.44 µg. Similar results were found in food

samples from Lord Howe Island, an area of low traffic density and pollution. This indicated that diet can be a key source of platinum consumption in places where there is no industrial exposure. Other literature has established platinum concentrations in drinking water to be about 100 pg/liter. Water samples from the Indian Ocean presented values from 37 to 154 pg/liter, samples from the Pacific Ocean were at 100–200 pg/liter, and the Baltic Sea was at 2,200 pg/liter. Reports have revealed that individuals not occupationally exposed to platinum can have levels from less than 1 to 1,200 ng/g wet tissue weight in their bodies. Evidence also confirms that a substantial amount of the platinum in these individuals can be found in fat. Platinum's ability to catalyze hydrogenation, dehydration, oxidation, and other types of reactions makes it useful in the chemical and petroleum industries. In 2005, fifty-one percent of the platinum produced internationally was used in the automotive industry, followed by twelve percent for jewelry, four percent for electronics, four percent in chemical refining, and a small percentage in dentistry and medicine. The same year also saw a fifty percent increase in worldwide platinum stock compared to the previous five years, totaling 225,000 kilograms. The majority of this supply came from mined ore, with much less derived from Russian exports and scrap metal. Workers mining and refining ores are at a higher risk of exposure to platinum. Although the existing data imply very low air levels at mines and refineries (12 ng/mL

Urine Test: 0–36 mcg/specimen; Acute: > 1000 mcg/specimen

Hair Test: 0 mg/kg; Acute: 1–3 mg/kg; Chronic: 0.1–0.3 mg/kg
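As a rough illustration of how reference ranges like those listed above might be applied, the short Python sketch below classifies a hair-test result against the acute and chronic bands shown; the function name, the input value, and the assumption that the bands can be treated as simple numeric cutoffs are hypothetical and for illustration only, not a clinical protocol.

def classify_hair_result(mg_per_kg):
    # Hypothetical illustration using the hair-test bands listed above:
    # acute exposure 1-3 mg/kg, chronic exposure 0.1-0.3 mg/kg, reference 0 mg/kg.
    if 1.0 <= mg_per_kg <= 3.0:
        return "within the acute exposure band"
    if 0.1 <= mg_per_kg <= 0.3:
        return "within the chronic exposure band"
    if mg_per_kg == 0:
        return "at the reference value (0 mg/kg)"
    return "outside the listed bands; interpret with clinical judgment"

print(classify_hair_result(0.2))  # prints: within the chronic exposure band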

Treatment for Arsenic Toxicity

In order to combat the toxic effects of arsenic, several chelating agents are used. Dimercaprol (British Anti-Lewisite, in oil) is used first when a patient is taken into the hospital for arsenic poisoning. Because many patients are diagnosed with arsenic toxicity, it is common for an emergency department to have this substance in stock. Dimercaprol is excreted from the body through urine and bile. The second chelating agent is succimer (DMSA), which is used only in children who have had arsenic poisoning. The third major chelating agent is Dimaval (DMPS), which is not approved for use in the United States. The fourth is penicillamine, which is not as commonly used as dimercaprol. Because inorganic arsenic found in drinking water interferes with cellular respiration, it disrupts the normal functioning of the voltage-gated potassium channels, which are important in maintaining the electrochemical gradient. As a result, potassium levels decrease, increasing the possibility of heart arrhythmias, particularly with arsenic trioxide. In order to combat these effects, supplemental potassium is taken.

CHAPTER 17

BERYLLIUM Hazel T. Salunga

Beryllium copper wrench. Beryllium copper alloy is a strong, nonmagnetic, non-sparking metal.

Introduction

Beryllium (Be) is a group two metallic element with a silver-gray-white color, an atomic weight of 9.01, a boiling point of 2,970°C, a melting point of 1,287°C, and a density of 1.85 g/cm3 at 20°C. Beryllium has good electrical and thermal conductivity, and at ordinary temperatures it resists oxidation in air because of a thin layer of beryllium oxide (BeO), which also makes it resistant to corrosion. Beryllium is naturally occurring and is the forty-fourth most abundant element. It is found in rocks, coal, soil, and volcanic dust. Commercial use of beryllium began in the early twentieth century, with applications ranging from coal combustion and free-metal nuclear reactions to x-ray windows, space optics, missile fuel, nuclear reactor neutron reflectors, nuclear weapons, precision instruments, high-speed computers, and space vehicles.

Beryllium exists in many chemical forms; examples include but are not limited to beryllium chloride, beryllium fluoride, beryllium hydroxide, beryllium oxide, and beryllium nitrate. Beryllium chloride is used to make the metal by electrolysis and is also used as an acid catalyst in organic reactions. Beryllium fluoride is used by glassmakers as well as in nuclear reactors. Beryllium hydroxide exists in three forms: stable, metastable, and as a slimy gel with a basic pH. Beryllium oxide is commonly used in industrial instruments, including high-technology ceramics, electric heat sinks, crucibles, thermocouple tubing, and automotive ignition systems. Beryllium nitrate is used as a reagent for hardening mantles in gas and acetylene lamps. Of the more than fifty beryllium-bearing minerals known, beryl and bertrandite are commercially important. Extensive occupational and industrial use of beryllium has prompted studies on its harmful effects on health and well-being. While working with beryllium, careful measures should be taken to minimize hazard and risk (122).

Where Beryllium Is Found, and Environmental Applications

Naturally occurring beryllium is found in the crust of the earth at 2.6 parts per million. It is found in over forty types of minerals, and its particles exist in the environment as a result of the weathering of rocks and soils and the residue of industrial emissions (IARC). Beryllium settles in sediment, dissolves in groundwater, and adds to wastewater disposals. Its particulates, released into the atmosphere, are a result of both natural and anthropogenic sources. Windblown dust is the most important natural source of atmospheric beryllium, and volcanic activity contributes the remainder of beryllium found in the earth's atmosphere (IARC). Ambient concentrations of atmospheric beryllium are typically low (less than 0.5 ng/m3) when compared to atmospheric

concentrations within the vicinity of beryllium-processing plants, where it is detected in higher amounts. Beryllium found in natural surface waters typically ranges from 0.01 to 0.1 ug/L. In ocean waters (uniformly, worldwide), concentrations are estimated to be three orders of magnitude lower than those of surface water. In places including Brazil and Russia, beryllium is mined in the form of beryl, which contains about 11 percent beryllium oxide. In the United States (New Mexico, Utah, and Colorado), bertrandite, which contains less than 1 percent beryllium, can be used to produce beryllium hydroxide. Mining beryllium in the United States contributes to the roughly 0.03 ng/m3 of beryllium found in air, with larger amounts and major sources resulting from the combustion of coal and oil in large cities. The minerals beryl and bertrandite are commercially mined to recover beryllium ore, with gem-quality beryl presenting as aquamarine or emerald green (123).

Occupational Exposure of Beryllium

Industrial applications of beryllium include aerospace, automotive, biomedical, defense, energy and electrical, fire prevention, weaponry, nuclear energy sites, demolition sites, industrial instruments, equipment and objects, and manufacturing. Beryllium is widely used in occupational settings, and the health effects have been investigated largely in the United States, especially within the nuclear industry. Inhalation of dust and dermal contact are the main routes and sources of occupational exposure; however, there may also be potential for in-home exposure if contaminated work garments are brought home (IARC). Currently, the United States Occupational Safety and Health Administration (OSHA) sets the permissible exposure limit at 2 ug/m3. The National Institute for

Occupational Safety and Health (NIOSH) recommends an exposure limit of 0.5 ug/m3, and the American Conference of Governmental Industrial Hygienists (ACGIH) has a threshold limit value of 0.05 ug/m3. Acute or prolonged exposure may lead to severe dermatitis, reversible pneumonitis, chronic granulomatous lung disease, or acute pneumonitis, which may progress chronically to irreversible granulomatous lung disease. In severe cases, long-lasting beryllium exposure may lead to chronic beryllium disease (CBD). Several other clinical implications are seen as a result of beryllium exposure (see the beryllium clinical presentation section) (124).

Beryllium Exposure to the General Population

Beryllium exposure is not limited to occupational or industrial workers. Because beryllium is a naturally occurring element, the primary route of exposure for the general population is via the ingestion of contaminated water or food. Various studies from the Environmental Protection Agency during the latter half of the twentieth century have estimated the daily intake of beryllium in food to be about 0.12 ug per day; other studies were conducted by the Agency for Toxic Substances and Disease Registry (ATSDR). For dietary exposure, measured concentrations of beryllium have been reported in thirty-eight foods, including various types of fruits and juices from around the world. ATSDR has determined that the concentrations in these foods range from less than 0.1 to 22 ug/kg fresh weight, with the highest concentrations measured in legumes such as kidney beans and peas and in fruits such as pears. Concentrations measured in fruits and juices reached roughly 74.9 ug/L, with an average concentration of 13.0 ug/L. Moreover, beryllium has been detected and measured in rice and potatoes—80 ug/kg and 0.3 ug/kg, respectively.
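Referring back to the occupational limits quoted at the start of this section (an OSHA permissible exposure limit of 2 ug/m3, a NIOSH recommended exposure limit of 0.5 ug/m3, and an ACGIH threshold limit value of 0.05 ug/m3), the short Python sketch below compares a measured air concentration against each of them. The measured value and the function itself are hypothetical and included only for illustration.

# Hypothetical comparison of an air measurement (ug/m3) against the beryllium
# occupational limits quoted in this section.
LIMITS_UG_PER_M3 = {
    "OSHA permissible exposure limit": 2.0,
    "NIOSH recommended exposure limit": 0.5,
    "ACGIH threshold limit value": 0.05,
}

def check_air_level(measured_ug_per_m3):
    for name, limit in LIMITS_UG_PER_M3.items():
        status = "exceeds" if measured_ug_per_m3 > limit else "is within"
        print(f"{measured_ug_per_m3} ug/m3 {status} the {name} of {limit} ug/m3")

check_air_level(0.1)  # within the OSHA and NIOSH limits but above the ACGIH value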

To measure biomarkers of exposure, several analytical methods are used for measuring beryllium in biological samples. These methods include but are not limited to gas chromatography with electron capture detection (GC-ECD), graphite furnace atomic absorption spectrometry (GF-AAS), inductively coupled plasma atomic emission spectrometry (ICP-AES), and inductively coupled plasma mass spectrometry (ICP-MS). Samples typically used in these analytical techniques include blood, urine, feces, nails, hair, and lung tissue (125).

Toxicokinetics of Beryllium

Beryllium absorption occurs primarily via the lungs, where it deposits and then moves slowly into the pulmonary and systemic blood circulation. Upon entry into the airway system, it damages the mucosal lining and causes complications such as pneumonia and berylliosis, a lung disorder that potentiates further damage to organ systems, primarily the cardiovascular system. Pulmonary effects observed upon inhalation of beryllium may include acute pulmonary disease, with a fulminating inflammatory reaction of the entire respiratory tract involving the nasal passages, pharynx, tracheobronchial airways, and alveoli. Studies focused on tumor formation in rodents indicate that beryllium is a likely lung carcinogen, as it has been shown to decrease the fidelity of DNA synthesis (see the beryllium carcinogenicity section) (126).

Immunological Response to Beryllium

Inhalation of beryllium is the most toxic route of exposure for those handling it in an occupational or industrial setting, and the metal seemingly persists in the lung long after the cessation of exposure. The body's defense and response to beryllium persistence involves memory CD4+ T lymphocytes as well as a negative regulator of T lymphocytes known as

programmed death-1 (PD-1). When bronchoalveolar cells are stimulated with beryllium salts, a significant percentage of bronchoalveolar lavage (BAL) CD4+ T cells from patients elicit an immune response by producing cytokines such as interferon gamma (IFN-g) and tumor necrosis factor alpha (TNF-a). Expression of these proinflammatory cytokines leads to recruitment of macrophages, alveolitis, and granuloma development. The presence of beryllium-specific T cells positively correlates with the degree of severity of lymphocytic alveolitis—lung deterioration due to chronic exposure and irreversible damage. Mechanistically, beryllium mediates a thiol imbalance, ultimately leading to oxidative stress, which modulates the expansion and proliferation of CD4+ T cells. Ultimately, the PD-1 pathway is active in beryllium-induced disease and plays a key role in controlling beryllium-induced T cell proliferation (127).

Clinical Presentation of Beryllium Toxicity

Beryllium toxicity presents two distinctive mechanisms of injury, from acute and chronic exposure. In acute disease, beryllium exposure can result in inflammation of the upper respiratory tract (nostrils, nasal cavity, mouth, pharynx, and larynx) and lower respiratory tract (trachea, bronchi, and lungs). Acute disease typically manifests itself as chemical pneumonitis. Upon chronic assault and developed sensitization to beryllium, CBD may develop; CBD is also referred to as berylliosis. Chronic beryllium exposure and entry into the airways damages the mucosal lining, which causes pneumonia and CBD and further potentiates damage to organ systems, primarily the cardiovascular system. Pulmonary effects observed upon inhalation of beryllium may include acute pulmonary disease, with a fulminating inflammatory reaction of the entire respiratory tract

involving the nasal passages, pharynx, tracheobronchial airways, and alveoli.

Acute Chemical Pneumonitis

Acute chemical pneumonitis typically manifests itself as inflammation of the upper or lower respiratory tract, or it can include both. The disease may appear suddenly after short exposure to extremely high concentrations, or it may progress slowly after chronic exposure to low concentrations. Other disorders of the respiratory tract that result from acute exposure include bronchiolitis, lung cancer, pulmonary hypertension, or, in rare instances, a pneumothorax (ATSDR). Current studies have determined that acute beryllium lung disease has been almost completely eliminated in the United States through the careful use of exposure controls in industrial and occupational settings.

Chronic Beryllium Disease

Chronic beryllium disease (CBD) is primarily a pulmonary disorder that develops after chronic exposure and subsequent sensitization to beryllium in an occupational setting. Settings include those aforementioned, as well as sites where beryllium alloys are manufactured and processed. CBD is a disorder in which a delayed hypersensitivity response to beryllium results in granuloma formation. Beryllium sensitization typically occurs over a long period of exposure (up to fifty days); however, in some cases, CBD will not develop until decades after exposure (ATSDR). Common manifestations include chronic interstitial pneumonitis with infiltration of lymphocytes, histiocytes, and plasma cells (ATSDR). Clinical presentation of CBD includes pulmonary hypertension, pneumothorax, and hilar and mediastinal lymphadenopathy (128).

Acute Hypersensitivity and Skin Effects

Common skin effects observed with beryllium-related toxicity include clinical presentation of contact dermatitis. Exposure to soluble beryllium compounds may result in hypersensitivity reactions such as papulovesicular lesions, as well as necrotizing or ulcerative granulomatous lesions; hypersensitivity reactions or granulomatous lesions occur with delayed or passive transfer, respectively.

Carcinogenicity of Beryllium

Beryllium and its alloys have been found to induce malignant lung tumors and osteogenic sarcomas in laboratory settings, in experimental animal models such as monkeys, rabbits, rats, and mice. In vitro studies indicate that beryllium induces morphological change and transformation in mammalian cells as well as decreased DNA synthesis. Lung cancers develop as a consequence of pulmonary instillation or inhalation, with direct actions on the lung; bone tumors may reflect beryllium's propensity and attraction to bone as a target tissue. Recent studies have elucidated mechanisms of beryllium chloride–induced oxidative DNA damage, in which alterations in the expression patterns of DNA repair–related genes have been discovered in murine models. It has been concluded that beryllium chloride has the propensity to cause genetic damage, specifically to bone marrow cells, because of oxidative stress. The inflammatory processes that cause beryllium disease (acute or chronic) may plausibly exacerbate the development of lung cancer by increasing the rate of cell turnover, enhancing oxidative stress, and altering signaling and immunological pathways involved with cell replication. Collectively, the underlying processes of beryllium-induced carcinogenesis are complex. In addition to oxidative stress, beryllium may cause a downregulation of crucial

genes involved with DNA synthesis, repair, and recombination (IARC). Several studies have elucidated gene expression patterns in vitro, in cell lines transformed with beryllium sulfate, and they report an upregulation of cancer-related genes and a downregulation of genes involved with DNA synthesis, repair, and recombination in tumor cells. It is likely that different chemical forms of beryllium differ in their mutagenic and carcinogenic potential.

Cigarette Smoke and Beryllium Toxicity

Smoking remains the leading cause of preventable death, and studies have found numerous harmful substances in tobacco and tobacco smoke. Among the more than four thousand compounds, specific toxic metals (including beryllium) have been classified as carcinogens by the IARC. Exposures to these toxic metals may result in inflammation, sensitization, and carcinogenesis of the lower respiratory tract. In high-throughput studies and preparations conducted by Fresquez et al. (2013), beryllium was detectable in cigarette samples using magnetic sector methods. Various US brand cigarettes (Newport Green Menthols, American Spirit Natural, and Marlboro Gold, to name a few) were cross-analyzed for trace amounts of toxic metals. Tobacco filler from American Spirit Natural cigarettes was lower in beryllium than that of other brands and varieties. Marlboro product beryllium concentrations were 0.031 ug/g and below.

Chelating Agents and Treatment

Current treatment of beryllium-induced toxicity with chelating agents includes the use of tiron and calcium disodium ethylenediaminetetraacetic acid (EDTA) (Sharma et al. 2000). Other prospects for chelation therapy include calcium trisodium diethylene triamine pentaacetic acid

(CaNa3DTPA) in the presence of alpha-tocopherol against beryllium-induced toxicity, which was examined in experimental studies conducted by Nirala et al. (2009). Results from this study determined that tiron, in combination with alpha-tocopherol, exerted statistically significant and beneficial effects in the reversal of beryllium-induced biochemical, histopathological, and ultrastructural alterations.

Organ Excretion

Variability in beryllium excretion depends on the form in which beryllium entered the body. Beryllium toxicity may occur as a result of the penetration of soluble or insoluble beryllium particulates. Acute biological responses are associated with the more soluble beryllium compounds, but it is more likely that humans who develop CBD are exposed to an insoluble form such as beryllium oxide or the metal. Natural clearing of beryllium may occur via the throat and the mucociliary escalator. When it enters the mouth and the gastrointestinal capillaries, it becomes protein-bound and continues to be distributed and deposited over a long period of time (months to years) in the liver, spleen, and bone, with a small portion excreted by the kidneys.

CHAPTER 18

CADMIUM Nichole Coleman, PhD, Tojo Chemmachel, and Christopher Taing

Cadmium in gourmet foods.

Introduction As children scrambled across their yards on Easter morning, parents worriedly watched as colorful eggs attracted the attention of the sugar-craved toddlers and adolescents. But why, on the most holy day of a religion, were parents scouring and throwing away their children’s baskets of

chocolates? Well, in late March 2016, the Oakland, California–based health group As You Sow revealed a list of chocolates that carried high levels of lead and cadmium. Cadmium was slowly starting to gain a reputation as the new lead, hiding traces of the toxic metal in innocent toy cars and chocolates. Maybe a crunchy apple instead? Unfortunately, cadmium has become much more common in our lives than we'd like to believe.

Cadmium is well known for being a hazard to human health, and government and global agencies have worked to limit cadmium exposure in human populations. Major international players such as the Environmental Protection Agency (EPA) and the World Health Organization (WHO) have been proactive in regulating the levels of cadmium permitted in certain objects and substances to minimize their adverse effects on the human race. A recent study published by the Breast Cancer Fund found that face paint and children's makeup can contain numerous toxic metals. Of 48 different Halloween face paints that were tested, 21 had traces of at least one toxic metal, and some face paints contained as many as four types of toxic metals. Of those tested, about 30% contained cadmium and 20% contained lead. Most of these toxic metals were found in the darkly pigmented paints. Sadly, cosmetic products are among the least regulated consumer products on the market, and these toxic ingredients are not required to be labeled on the product.

Cadmium has the atomic number 48 and was discovered in 1817 by Friedrich Stromeyer and, independently, Karl Samuel Leberecht Hermann. Cadmium is a soft, silver-white metal found in the Earth's crust. Humans usually obtain it as a byproduct of other metals such as zinc, lead, and copper. In the early 1840s, cadmium was used as a pigment and a tanning agent for leather goods. By the mid-1900s, the metal came to be used for electroplating other metals to increase their

corrosion resistance. Today, we can still find cadmium in batteries, pigments, coatings, and stabilizers for plastics.

Environmental Exposure of Cadmium

According to the EPA's National Emission Standards for Hazardous Air Pollutants (NESHAP), cadmium is recognized as one of thirty-three hazardous air pollutants that can pose a threat to public health in metropolitan areas. Although cadmium is considered to be rare, it is relatively abundant in the environment, typically as cadmium sulfide. Cadmium is refined as a byproduct of zinc production, and it is released into the environment from mining and smelting processes, posing a strong risk to miners and metal welders. Cadmium has also been found in phosphate fertilizers and sewage sludge, directly impacting the surrounding environment. As it escapes and infiltrates the neighboring area, it can enter and climb through the food chain via contaminated soil or water.

There are many ways to reduce the presence and harmful effects of toxic metals, such as cadmium, in the environment. The first is excavation, or physical removal of the contaminated soil. However, this technique only moves the problem elsewhere to be dealt with, making it a temporary solution. The second method involves stabilizing the metals with chemicals that convert them into less toxic compounds that are insoluble in water or cannot be absorbed by plants and animals. The third method involves the use of plants to absorb the toxic metals from the environment, allowing for their safe removal and disposal.

The primary source of cadmium exposure for humans is food. Cadmium can be found in leafy vegetables, grains, potatoes, chocolate, and shellfish. Anything that grows or lives in contaminated sites is susceptible to cadmium absorption and can pass it along the food

chain to humans. Another possible route is the consumption of acidic foods stored in cadmium-plated containers. Roughly 5-10% of the cadmium in food and water will enter the body through ingestion. The second source is smoking, as tobacco leaves can accumulate high levels of cadmium from the soil. About 50% of the cadmium inhaled from cigarette smoke is absorbed by the body; thus, smokers tend to have higher amounts of cadmium in the blood than non-smokers. Another major exposure pathway is occupational exposure for individuals whose work involves heating cadmium, such as electroplating and smelting. In these settings, the major routes of cadmium exposure are inhalation of dusts and fumes and accidental ingestion through contaminated hands.

Cigarette Smoke and Cadmium

With cigarette usage slowly declining in today's society, people can embrace cleaner air free from toxic particles and airborne substances. Popular cigarette brands have long been linked to an increased risk of toxic metal exposure. Heavy metals such as aluminum, mercury, lead, and cadmium infiltrate the burning paper and cigarette filter, exposing the user and nearby passersby to elevated levels of toxins. Because these chemicals are inhaled, they pose a serious threat to the respiratory system, with an increased chance of pulmonary edema and tracheobronchitis. Particles small enough to make it into the bloodstream can cause more severe problems in the kidneys and the liver.

Cadmium in Gourmet Food

Cadmium can be released into the environment through industrial waste and fertilization, which exposes plants, animals, and humans to the uptake and consumption of cadmium. To illustrate this, imagine cadmium being released into the air by a local metal mine or refinery. The cadmium is released into the air and then enters the local soil and water irrigation system. The local crops then take up the cadmium from the water and soil. These crops are fed to humans and to livestock, which may also be consumed by humans. However, there are many factors that influence the cadmium content, such as agricultural practices, the amount of cadmium deposited into the atmosphere, and the plants themselves. Certain plants such as tobacco, rice, cereal grains, and potatoes take up cadmium more readily than they do other heavy metals (129). Cadmium can also concentrate in organ meats such as liver and kidneys.

One of the common foods that we consume that surprisingly contains cadmium is chocolate. The group As You Sow conducted independent testing on 70 chocolate products and found that 45 of them contained lead and/or cadmium. Many of them had levels of lead and/or cadmium over the safe threshold levels of California's Safe Drinking Water and Toxic Enforcement Act of 1986 (Prop 65) (130-132).

Diagnosis/Clinical Presentation of Cadmium Toxicity

Exposure can be detected through cadmium measurement in blood, urine, hair, or nails, but urinary cadmium is an accurate measurement of cadmium in the body and reflects both present and past exposure. For acute inhalation, signs include chest pain, labored breathing, fever, and nausea. For acute ingestion, abdominal cramps, diarrhea, nausea, and vomiting are common symptoms. Chronic exposure symptoms are broader and include chronic obstructive pulmonary disease, chronic renal failure, kidney stones, lung

cancer, and bone issues. One of the more well-known cases is the "itai-itai" disease described in older Japanese women exposed to high levels of cadmium over their lifetimes. These women were consuming rice grown in cadmium-contaminated fields, which caused them to develop osteomalacia, osteoporosis, and renal dysfunction. The US Department of Health, along with the International Agency for Research on Cancer, has determined cadmium to be a human carcinogen (133).

By 2012, the gourmet food market had earned over $95 billion in sales, with that figure expected to rise in future years. But studies have started to point to these gourmet foods as a source of higher levels of toxic metals, particularly specialty oils, meats, and salts. A study from SPEX CertiPrep shows that cadmium levels were relatively higher in Himalayan Pink Fine Mineral Salt, a delicacy in many countries. Cadmium has also been linked to coffee powder; cadmium and lead levels remain in the powder and must be efficiently filtered before consumption. The pathway that cadmium must take in order to reach animals is somewhat more intricate and complex. As cadmium enters the food chain, certain plants such as tobacco, rice, grains, and potatoes absorb it much more readily than they do other heavy metals. With grains being a main source of food for certain livestock, it has been particularly easy to find higher levels of cadmium in the meat we eat, especially the liver and kidneys. Foie gras is a classic example of a food typically enjoyed by affluent citizens. Although it is considered gourmet, it can contain dangerous levels of cadmium because of the force-feeding of ducks, which can involve contaminated grains; this subsequently results in a liver laden with cadmium and other metal toxins (129, 134).

Cadmium in Pet Food

They say that a dog is man's best friend, but based on what people feed them, one wouldn't think so. Toxic levels of lead, mercury, and cadmium have been found in dog and cat foods, primarily in the dry food choices, in doses three times higher than the reference dosage limits for human consumption. Increased levels of cadmium in dogs and cats act as silent killers, with vital signs and responsiveness at normal levels but a drastic change in sleep behavior and physical characteristics. Many of these symptoms are seen in aging pets, but with the right tests, you can identify whether your pet is being exposed to high levels of toxic metals (135, 136).

Cadmium and Cancer

The EPA categorizes cadmium as a group B1 carcinogen—that is, a probable human carcinogen. Research conducted at George Washington University revealed that cadmium exposure leads to an accelerated rate of cellular aging, which can lead to early senescence and deterioration of function in those cells. Research on cadmium and its effects is relatively new, as society now actively recognizes it as a human carcinogen. The toxic effects of cadmium extend beyond the organs immediately affected. Cadmium has been associated with elevated risks of breast and prostate cancers. Cadmium has also been hypothesized to be involved in the transdifferentiation of specialized cells and in increased oncogene activation, adding to its carcinogenic potential. Cadmium has been hypothesized to inactivate certain detoxifying enzymes, interfere with proteins that aid in DNA damage repair, and disturb nucleotide excision repair, base excision repair, and mismatch repair. All of these factors can lead to an elevated risk of cancerous activity in the body (137).

Renal Effects of Cadmium Exposure

Cadmium targets the kidneys, with an approximate latency period of ten years before a delayed onset of renal damage. Chronic cadmium exposure is linked to progressive renal tubular dysfunction. Toxic effects on the kidney are directly related to the doses of cadmium received. Damage to the kidneys is typically considered to be irreversible, with cadmium producing a slow but progressive decline in glomerular filtration rate.

Biomarkers for Cadmium Exposure

There are multiple avenues a healthcare professional can use to identify whether you have been exposed to cadmium. Simple urine samples will reveal the release of proteins and higher-weight molecules, such as transferrin and albumin, which is typically seen with kidney dysfunction. Other biomarkers of cadmium include N-acetyl-β-D-glucosaminidase, β2-microglobulin, and metallothionein, but newer and more novel biomarkers are being discovered to improve the biomonitoring of cadmium exposure (138).

Typically, within hours of cadmium exposure, symptoms of inflammation begin to occur, ranging from coughing to a headache. Depending on how the cadmium enters the body, the affected individual will experience different symptoms. With its long half-life in humans, cadmium can accumulate in the body, particularly in the kidneys. As the kidneys are exposed to cadmium, they undergo proximal renal tubular dysfunction. The kidneys lose their ability to filter acids from the blood, and low phosphate levels develop, which can lead to muscle weakness and coma. Cadmium inhalation can quickly become lethal, with respiratory tract, liver, and kidney problems, often leading to kidney failure. Long-term exposure to inhaled cadmium can lead to tracheobronchitis, pneumonitis, and pulmonary edema. Bones have a tendency to become softer with cadmium exposure because

they lose bone mineral density, thus increasing the chance of bone fractures.

Treatment for Cadmium Toxicity

Healthcare professionals who are concerned about cadmium exposure will initially run preliminary tests to identify whether conditions related to cadmium are present. A chest x-ray will be ordered to identify any respiratory ailments, such as pneumonitis and pulmonary edema, and a measurement of oxygen saturation could be added. Another organ of extreme interest is the kidney; health officials will check kidney function to assess whether organ damage is present. Blood tests can also determine whether there has been recent cadmium exposure, as well as assess levels of cadmium in the body. If the patient is properly diagnosed with cadmium poisoning, clinicians will immediately measure kidney function as an assessment of kidney damage. A treatment option for elevated levels of cadmium includes dietary changes to alleviate and prevent cadmium toxicity (139).

Mechanism of Toxicity

Cadmium enters the blood through the various routes of exposure and travels to the liver, where it forms complexes with proteins. These cadmium-protein complexes then enter and accumulate in the kidney, where they can damage filtering mechanisms and cause kidney damage or disease. Cadmium can be excreted by the kidney over time, but the process is very slow; thus, large accumulations of cadmium create the most problems. One of these problems involves an increase in urinary calcium and phosphorus loss, which can result in osteomalacia and osteoporosis. Cadmium is known to catalyze the formation of reactive oxygen species, increase lipid peroxidation, and deplete glutathione and protein-bound sulfhydryl groups. Cadmium

is also known to increase production of inflammatory cytokines and to downregulate the formation of nitric oxide.

Treatment and Prevention

Treatment depends on the type and route of exposure. Acute exposures, which occur mostly through inhalation of vapors, can be treated with fluid replacement, supplemental oxygen, and mechanical ventilation. Acute exposure due to ingestion can be treated with forced vomiting or gastric lavage. For chronic exposure, the best treatment is prevention of further exposure. Preventive measures include improving the workplace environment to reduce exposure, along with handwashing and other hygiene methods that help prevent contamination with cadmium (140).

CHAPTER 19

LEAD Hayley Moore

Introduction Perhaps the most significant and recent incident involving lead is what is commonly being referred to as the Flint Water Crisis. Severe lead contamination of the drinking water in Flint, Michigan, became a public health disaster. This situation was poorly handled, and thousands of children

were exposed to high levels of lead as a result. A state of emergency was declared in Michigan, and President Obama's response to the government breakdown was, "What is inexplicable and inexcusable is once people figured out that there was a problem there, and that there was lead in the water, the notion that immediately families weren't notified, things weren't shut down. That shouldn't happen anywhere."

For fifty years, Flint's primary source of water was the Detroit water system, drawing from Lake Huron and the Detroit River (Detroit Water and Sewerage Department—DWSD). The Flint River was the designated backup source. In April 2014, in an effort to save money—approximately $200 million over twenty-five years—the city made a change, and the Flint River became the primary water source. Shortly after the switch, residents complained about the color, odor, and taste of the water. Water from the Flint River is corrosive because of its high chloride ion concentration. When tested at the water plant, the water showed high levels of trihalomethanes (THMs)—corrosive, carcinogenic byproducts formed when water is disinfected with chlorine. However, officials denied for months that the water was unsafe and failed to apply corrosion inhibitors.

The problem with lead lies in the fact that many homes in the Flint area have older lead water pipes. Although lead pipes are not much of a concern for properly treated water, such as the water from Lake Huron, they can cause severe lead contamination with corrosive water because lead leaches out of the pipes. Because of this, public water treatment plants are required by law to treat acidic water and increase the pH. To make matters worse, chlorine, which is supposed to act as a disinfectant, reacts with lead in a way that eliminates its disinfectant properties. This caused an increase in total coliform bacteria and E. coli in the water—in turn causing more chlorine to be added to the water

source by the Michigan Department of Environmental Quality (MDEQ) to bring coliform levels back down, adding even higher levels of THMs to the water.

The Environmental Protection Agency (EPA) has set the maximum lead concentration for water to be considered safe for drinking at 15 ppb. At 5,000 ppb, water is considered hazardous waste. In early 2015, a city test done at a resident's home reported lead concentrations of 104 ppb. Virginia Tech conducted its own independent tests and found lead concentrations as high as 13,200 ppb. Despite concerns from the EPA, the MDEQ denied there was a problem and omitted samples so that reported levels appeared to fall below the federally mandated limits. In October 2015, the water source in Flint was switched back to the Detroit water system. Unfortunately, it was too little, too late, because irreversible damage to the water pipes had already been done. Karen Weaver, the mayor of Flint, declared a state of emergency on December 14, 2015, because of the level of lead present in the water. A Flint Water Advisory Task Force released a preliminary report on the water and came to the conclusion that MDEQ was primarily at fault. The next day, MDEQ Director Dan Wyant and MDEQ spokesperson Brad Wurfel resigned. Rick Snyder, the governor of Michigan, declared a state of emergency on January 5, 2016. Shortly after, President Obama declared a federal state of emergency. This allowed FEMA to come in and assist the people of Flint and provided federal funding for water and water filters. An emergency order was then issued by the EPA. Plans were put into place to replace the city's pipelines and provide support to children who experienced lead toxicity. As of April 2016, the water in Flint remained unsafe to drink (141).

Three officials faced felony charges over the Flint water crisis in April 2016: two MDEQ state officials, Stephen Busch and Michael Prysby, and city employee Michael Glasgow. The three were charged with misconduct,

conspiracy to tamper with evidence, neglect of duty, and violating Michigan’s Safe Drinking Water Act. Additionally, in June 2016, lawsuits were filed against Veolia for fraud, negligence, and public nuisance, and against the Lockwood, Andrews, and Newnam firm for negligence and public nuisance. Veolia was hired in 2015 as a water-quality consultant by the city of Flint. The law firm Lockwood, Andrews, and Newnam was hired in 2011 to assist in the operations of the Flint River water treatment plant. Finally, in late July 2016, six more state officials were criminally charged. Liane Shekter-Smith, Adam Rosenthal, and Patrick Cook of MDEQ were charged with conspiracy, willful neglect of duty, and misconduct in office. Rosenthal was additionally charged with tampering with evidence. Nancy Peeler, Robert Scott, and Corinne Miller of the Michigan Department of Health and Human Services were charged with conspiracy, willful neglect of duty, and misconduct in office. It is estimated that six to twelve thousand children have been exposed to toxic levels of lead because of this public health crisis. These children may suffer from severe neurological problems and decreased IQ. Because the full impact of the damage done to young children during development will not be fully apparent for years to come, the long-term effects of this crisis have yet to be determined, but they are estimated to exceed $400 million just for health and social costs. The estimated cost to fix the water infrastructure and to replace all of the water pipes is estimated to be $1.5 billion (142). Lead Exposure There is no biological need for lead—in fact, no level of lead is even considered safe in humans. Lead is one of the oldest known environmental exposure hazards. Humans can be exposed to lead by means of ingestion, inhalation, ocular contact, and dermal contact. Lead has been used in

numerous applications for thousands of years because of its availability, versatility, and properties that make it easy to work with. It has been used in items as varied as fishing nets, cosmetics, currency, water pipes, and contraceptives. More recently, lead has been used in paints, gasoline, and plumbing. Although many efforts have been made to reduce the use of lead in the United States today, lead is still commonly used around the world, and many toxic exposures occur because of residual effects of past lead use (143).

Lead in Our Environment

Soil and water can often be contaminated with lead, as seen in the Flint, Michigan, water crisis. Soil and water contamination occurs in many different ways. Lead was used in gasoline prior to 1980; unfortunately, the residual environmental effects of leaded gasoline can still be seen today in contaminated soil. When farming soil is contaminated with lead, food products such as fruits and vegetables are also contaminated and are a source of lead exposure in humans. Contamination of soil and water can also be due to broken-down lead paint, older pesticides, contaminated landfills, and leaded pipes. Leaded pipes prove to be the most hazardous when used with soft and/or acidic water, because such water dissolves the lead; when hard water is used, insoluble layers form on the pipes and the contamination is not as severe. In the case of Flint, Michigan, the water from the Flint River was much more corrosive than the water from Lake Huron because of higher chloride concentrations. This caused lead from the water pipes to leach out and severely contaminate the water. Groundwater is also often contaminated with lead from soil or the atmosphere (144).

Lead Involved in Various Applications

Prior to 1978, lead was commonly used in paint to allow for greater moisture resistance, increased durability, and faster drying. Since the major toxic effects of lead were discovered, laws have been put in place in the United States to prohibit its use in household paints. However, many older households still contain old lead paint, creating a health hazard. Lead can be ingested in the form of dust that is created when the paint deteriorates. Children are at higher risk for this source of exposure because of increased time spent on the floor and hand-to-mouth contact. Unfortunately, lead paint is still widely used in other countries, so lead exposure from imported toys is still a risk, as was seen in 2007 when millions of toys from China were recalled because of the presence of lead paint.

Occupational exposure is the most common source of lead toxicity in adults. Occupations at risk for lead exposure include lead miners, plastic manufacturers, welders, plumbers, construction workers, auto mechanics, and anyone working closely with ammunition, radiation shields, ceramic glazes, circuit boards, fetal monitors, and certain surgical equipment. Lead is still commonly used in many products. It can be found in some aviation fuels, semiconductors, statues and sculptures, organ pipes, scuba diving weight belts, and stained glass. Lead compounds are sometimes used as oxidizing agents in organic chemistry laboratories. Certain colors of ceramic glazes contain lead compounds. Lead is also frequently found in the coatings of electrical cords because there have been no regulations against using lead compounds in plastics. The number one use of lead in the United States is in storage batteries; the second most common use is in ammunition. Lead bullets have resulted in controversies about the environmental effects of hunting and shooting ranges.

Clinical Presentation and Health Effects of Lead

Symptoms of lead toxicity vary in individuals, and some may even be asymptomatic, however it is rare for an individual to be asymptomatic when blood lead levels are over 100 μg/dL. Identifying lead toxicity can be troubling because symptoms are often vague and subtle. Symptoms predominantly involve the central nervous system because lead mimics calcium and is able to cross the blood-brain barrier. Early symptoms include nausea, loss of appetite, personality changes, fatigue, depression, constipation, diarrhea, unusual taste in the mouth, sleep problems, muscle pain, decreased libido, intermittent abdominal pain, and general malaise. As lead builds up in the body through chronic exposure, all of the body’s organ systems may be affected. Symptoms can include memory loss, headache, kidney failure, abdominal pain, anemia, reproductive problems in males, and pregnancy complications in women. Additionally, tingling, pain, or weakness may be experienced in the extremities; the skin may become pale; a blue line may be seen at the gum line; delayed reaction times can occur; and in extreme cases blindness, seizures, coma, or even death can occur. The nervous system, renal system, cardiovascular system, and reproductive system are the most severely affected by lead poisoning. The amount of lead that is toxic to an individual varies and is largely dependent upon age, with young children being the most vulnerable. For adults, symptoms typically occur at blood lead levels over 50–60 μg/dL, although symptoms may be seen at 40 μg/dL. For children, symptoms typically occur at 60 μg/dL. Neuropsychiatric symptoms typically occur when blood lead levels are between 25 and 60 μg/dL. At 50 μg/dL, anemia may occur because lead poisoning often causes iron (as well as calcium and zinc) deficiencies. Abdominal colic can occur when levels are over 80 μg/dL. Severe symptoms such as encephalopathy signs, including headache, seizures, delirium, and coma, occur when blood

levels exceed 100 μg/dL. Encephalopathy signs can occur in children at 70 μg/dL. According to the Centers for Disease Control and Prevention (CDC), blood lead levels should not exceed 10 μg/dL in adults and 5 μg/dL in children. Lead is absorbed much more quickly in children than in adults, and thus lead exposure is significantly more harmful to children. The effects on children are more devastating because lead can cause permanent damage to the developing and growing body and mind. Reduced cognitive capacity can occur even at very low exposure levels. Lead exposure can cause delayed puberty in females at higher exposure levels, and fetal exposure to lead increases the risk of low birth weight and premature birth.

Treatment of Lead Toxicity

When blood lead levels are above the maximum acceptable levels, removing the source of lead exposure is paramount. Depending on the situation and severity of symptoms, chelation therapy is often started if blood levels exceed 40 μg/dL. Treatment of lead toxicity also includes the treatment of associated zinc, calcium, and iron deficiencies. Whole bowel irrigation may be indicated if lead is present in the gastrointestinal tract. Corticosteroids and anticonvulsants may be needed to treat encephalopathy symptoms.

Chelating Agents for Lead Toxicity

Chelation therapy is used when high blood lead levels are present. The injectable chelating agents used are edetate calcium disodium and dimercaprol. The oral chelating agents used are succimer and d-penicillamine. Chelation therapy ceases when symptoms resolve or when blood lead levels are reduced to manageable levels. During chronic lead exposure, lead is stored in bone. Because of this, blood lead levels may rise again after chelation therapy as lead is leached from the bones. Therefore,

repeated treatments may be needed. These chelating agents can cause a number of side effects, including renal toxicity, gastrointestinal disturbances, and zinc deficiency.

Organ Excretion of Lead by Metalloproteins

Another treatment option for lead toxicity is the use of a lead-binding molecule. One example discussed in this context is nigericin. By working as an ionophore for lead ions, nigericin binds to lead, allowing the complex to move across cell membranes and eventually be excreted by the kidneys. Nigericin is a naturally derived antibiotic from the bacterium Streptomyces hygroscopicus. The nigericin-lead complexes that form are very stable and are thus a very effective treatment option.

Lead Bioremediation

Bioremediation is a common practice in which organisms are used to remove pollutants from contaminated sites such as soil. Some examples of lead bioremediators are fish bones for use in soil, Aspergillus versicolor, and Desulfovibrio and Desulfotomaculum bacteria for use in water. However, most bioremediators do not readily absorb lead, especially from soil. A much more effective way of removing lead from soil is a type of bioremediation called phytoremediation. In phytoremediation, certain plants are capable of efficiently removing lead by bioaccumulation. These plants absorb lead through their roots and bioaccumulate the lead in the parts of the plant above the ground. The plant is then harvested and disposed of. Phytoremediation is economical and environmentally friendly. There are some limitations to this method: the process is slow, and lead removal is limited to the depth of the plants' roots.

Indian Mustard

Indian mustard, Brassica juncea, is an herb and a species of mustard plant. It often appears in Indian, Korean, Chinese, Japanese, African, Bangladeshi, Italian, and Pakistani dishes. Many dishes make use of the mustard greens, and the plant is also used to make spicy brown mustard. It is rich in vitamins A, C, and K. B. juncea is used in bioremediation because of its high tolerance of lead and its ability to store lead in its cells. Bioaccumulation is increased with the addition of EDTA to the soil as a chelating agent.

Ragweed

Ragweed, Ambrosia artemisiifolia, is an herb in the aster family and a widespread, aggressive invasive species. It is native to North America and much of South America. Ragweed is one of the most common causes of pollen-related allergic rhinitis. It emerges in late spring, sets seed in late summer, and has a bloom period from July to October. The seeds are commonly eaten by songbirds. A. artemisiifolia is a very tolerant plant. It can grow in coarse, fine, and medium-textured soils, in neutral to very acidic soil, and in shallow, infertile, or saline soils. This can make it easy to use for bioremediation. However, this plant is very difficult to control and can be detrimental to surrounding environments because of its invasiveness.

Hemp Dogbane

Hemp dogbane, Apocynum cannabinum, is a perennial herbaceous plant. It is found throughout North America. Its active growth period is in the spring and summer, and it blooms in the summer. Despite its name, A. cannabinum is only superficially similar to cannabis and does not have any psychoactive properties. This plant is very poisonous, capable of causing cardiac arrest. The bark is used to make fiber for clothes, bags, and twine. Apocynum cannabinum makes for a useful bioremediation plant because it is considered a

It can be grown in coarse-, fine-, and medium-textured soils.

CHAPTER 20

MERCURY
Kathryn Ma

Gold mercury amalgam.

Introduction

Mercury is a transition metal, designated Hg on the periodic table, with atomic number 80. It is unique among metals in that it is liquid at room temperature. In nature, mercury has three oxidation states: Hg0 (elemental mercury), Hg+ (where it has lost one electron), and Hg2+ (where it has lost two electrons).

Hg2+, also known as mercuric mercury, can bind to carbon atoms, forming organic mercury compounds. Mercury has been used in, and found in, food, jewelry, and toys.

Mercury in the News

In May 2016, the Peruvian government declared a two-month state of emergency in the Madre de Dios region of Peru, which lies in the Amazon jungle, because of widespread mercury poisoning. The contamination came from illegal gold mining: mercury was used to separate gold from other minerals and was then dumped into the environment, at an estimated rate of about forty tons per year. Both the indigenous population and their food supply, such as fish, were exposed to mercury. It was estimated that about 41 percent of those living in the Madre de Dios region were being exposed, and that up to 80 percent of one particular tribe had mercury poisoning. In an effort to help, the Peruvian government sent hospital ships to treat and monitor those affected and to distribute food supplies free of mercury contamination (145), (146). This is an unfortunate example of how people can be exposed to mercury, but there are other ways an individual can be exposed, some well known and some less so. Let's take a closer look.

Mercury Exposure

Exposure to mercury can occur through different routes: it can be absorbed through the skin, inhaled, or ingested. This is partly because mercury exists in different physical and chemical states, such as vapor, liquid, or methyl mercury. Mercury vapor is released into the atmosphere naturally as the earth's crust expels gas, and human activity adds a further amount, estimated to be about the same as that from natural sources.

Once mercury vapor is in the atmosphere, it can travel to different areas of the world and return to the earth's surface in rainwater. At that point, mercury can evaporate and repeat the cycle, or it can be converted into methyl mercury by methanogenic bacteria that live in fresh water and in the ocean. Methyl mercury is the form of mercury that those who enjoy eating fish must be concerned about.

Mercury and Gourmet Food

The best-known route of mercury exposure is the consumption of fish. As mercury settles into different bodies of water, bacteria convert it into methyl mercury, a dangerous form. Methyl mercury first accumulates in phytoplankton and then moves up the food chain until it accumulates in the muscles of predatory fish, including tuna, salmon, swordfish, and sharks. The amount of mercury in a predatory fish depends on what it eats, how long it lives, and where it sits in the food chain.

Specialty salts have recently come into popularity, often sourced from different parts of the world, such as the Mediterranean Sea, France, the United States, the Himalayas, Australia, India, or Israel. The salts come in different colors, such as black, red, pink, or brown, and in different textures, such as fine powder, large and small crystals, or flakes. In a study by SPEX CertiPrep, twelve different gourmet salts were tested, and most had detectable levels of mercury. Mercury levels ranged from as little as 0.003 µg/g for French oak smoked grey sea salt to 0.180 µg/g for India brown powder mineral salt.
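For perspective, the short sketch below converts those reported concentrations into a rough per-day mercury intake. The salt names and concentrations are the ones quoted above; the daily salt consumption figure is an assumption chosen only to make the arithmetic concrete.

# Back-of-the-envelope mercury intake from gourmet salt, using the two
# concentrations quoted in the text. Daily salt intake is assumed.

salt_hg_ug_per_g = {
    "French oak smoked grey sea salt": 0.003,
    "India brown powder mineral salt": 0.180,
}

daily_salt_g = 5.0  # assumed daily salt consumption in grams

for salt, conc_ug_per_g in salt_hg_ug_per_g.items():
    daily_hg_ug = conc_ug_per_g * daily_salt_g
    print(f"{salt}: roughly {daily_hg_ug:.3f} ug of mercury per day")

Even at the high end the estimate comes to under a microgram of mercury per day, although that adds to whatever is taken in from other dietary sources such as fish.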

Chocolate is a specialty food that has been popular for many years, and not all chocolates are created equal. In another study, SPEX CertiPrep analyzed three milk chocolate samples, three dark chocolate samples, and a chocolate liquor. In each 40-gram serving, the levels of mercury detected in the dark chocolate samples ranged from