States in the early 1990s (65.5 infections per million per year) and has progressively declined since, owing to the widespread use of fluconazole and the successful treatment of HIV infection with newer antiretroviral drugs.25 Cryptococcosis may present as a pneumonic process or, more often, as a CNS infection secondary to hematogenous and lymphatic spread from a primary pulmonary focus. A more widely disseminated form of the infection may also occur, with cutaneous, mucocutaneous, osseous, and visceral involvement. Pulmonary cryptococcosis is variable in presentation, ranging from an asymptomatic process to a more fulminant bilateral pneumonia. Nodular infiltrates, usually without cavitation, may be either unilateral or bilateral, becoming more diffuse in severe infections. C. neoformans is highly neurotropic, and the most common form of the disease is cerebromeningeal. The course of the disease is variable and may be quite chronic; however, it is inevitably fatal if untreated. Both the meninges and the underlying brain tissue are involved, and the clinical presentation is one of fever, headache, meningismus, visual disturbances, altered mental status, and seizures. The clinical picture is highly dependent upon the patient’s immune status and tends to be very severe in patients with AIDS and in other severely compromised patients treated with steroids or other immunosuppressive agents.23 Although both C. neoformans var. neoformans (and var. grubii) and var. gattii can cause meningoencephalitis, var. neoformans (and var. grubii) causes infection primarily in immunocompromised patients (e.g., AIDS), whereas var. gattii infections tend to occur in normal healthy hosts.19–22 A worse prognosis is usually associated with var. gattii.22,26 Compared with var. neoformans, infections caused by var. gattii are associated with cerebral or pulmonary cryptococcomas, papilledema, and high serum/CSF antigen titers.
Microbiology C. neoformans is a ubiquitous encapsulated soil yeast that reproduces asexually by budding. The perfect, or sexual, stage of C. neoformans can be produced by mating the fungus in vitro; however, the role of this stage in infectivity and pathogenesis is unknown. The yeast cell may vary from 4–20 µm in diameter and is surrounded by a polysaccharide capsule ranging from 1–30 µm. The narrow-based buds are usually single. The capsule may be visualized indirectly by the India ink or nigrosin technique and more specifically in clinical material with mucicarmine, which stains the capsular mucopolysaccharide. In tissue, cryptococci stain poorly with hematoxylin and eosin but well with methenamine silver and periodic acid-Schiff stains. C. neoformans grows well on most bacterial and fungal media used in the routine clinical microbiology laboratory. A rapid presumptive identification of an encapsulated yeast as C. neoformans may be accomplished by demonstration of urease and phenoloxidase enzyme activity.17 C. neoformans is strongly urease positive and possesses a membrane-bound phenoloxidase enzyme that converts phenolic compounds to melanin. Phenoloxidase activity is readily demonstrated on media such as birdseed agar or caffeic acid agar, which contains 3,4-dihydroxycinnamic acid. Oxidation of the o-diphenol in the medium produces dark colonies suggestive of C. neoformans. Confirmatory identification is accomplished by employing standard biochemical and physiological tests. Standard laboratory tests do not differentiate among the different serotypes. Media have been proposed for separating serotypes A and D from B and C but are not available commercially.
Diagnosis The clinical presentation of pulmonary cryptococcosis may mimic a number of acute and chronic infectious processes as well as malignancies. Signs and symptoms include fever, malaise, pleuritic pain, cough, scanty sputum, and hemoptysis. Chest roentgenograms may reveal lobar infiltrates, single or multiple nodules, or tumor-like masses. Sputum cultures are positive in only 20% of cases, and the diagnosis is frequently made at thoracotomy for suspected malignancy. Patients with pulmonary cryptococcosis should be thoroughly
Opportunistic Fungal Infections
evaluated for systemic infection, with cultures of blood, urine, and cerebrospinal fluid (CSF). Central nervous system cryptococcosis may present as meningitis (most common), encephalitis, or a more focal process suggestive of malignancy. Signs and symptoms in patients without AIDS include fever, headache, mental status changes, ocular symptoms, meningismus, nausea, vomiting, cranial nerve palsies, and seizures. Aside from fever and headache, these signs and symptoms may be significantly less common in patients with AIDS. The chest roentgenogram may or may not be abnormal in patients with central nervous system or systemic cryptococcosis. Extraneural dissemination may present as cryptococcemia or focal involvement of one of several target organs. The laboratory diagnosis of cryptococcosis requires the isolation of cryptococci from normally sterile body fluids, histopathology showing encapsulated organisms, or detection of cryptococcal antigen in serum or CSF. A rapid diagnosis of extraneural infection may be facilitated by biopsy and staining with methenamine silver and mucicarmine. Examination of the CSF in patients with meningitis usually suggests a chronic lymphocytic meningitis, with a low-grade (<500 cells/mm³) lymphocytic pleocytosis, elevated protein, and low glucose. Microscopic examination of CSF mixed with India ink or nigrosin may reveal encapsulated organisms in approximately 50% of cases. Cultures of CSF and other clinical material are usually positive. Occasionally, repeated lumbar punctures, cisternal taps, or sampling of large volumes (up to 10 mL) of CSF may be necessary to establish the diagnosis. In patients with AIDS, cryptococci are present in large numbers, but the CSF shows fewer abnormalities. Detection of cryptococcal antigen in serum and CSF is extremely valuable in the diagnosis of cryptococcal infection. Antigen titers are particularly high in patients with AIDS.
Both latex agglutination (LA) and enzyme immunoassays (EIA) are commercially available and are rapid, sensitive, and specific.27 Antigen is detected in the serum in approximately 50% and in CSF in more than 90% of patients with cryptococcal meningitis. High titers of cryptococcal antigen in CSF or serum are associated with a poor prognosis. False-positive results are rare but may be due to rheumatoid factor or cross-reactivity in patients infected with Trichosporon beigelii. The newer EIA methods lack reaction with rheumatoid factor and are more specific than the LA methods.27
Therapy Pulmonary cryptococcosis may not require therapy as long as the process appears to be resolving and the patient is immunologically intact. Long-term follow-up is necessary in patients whose infection is diagnosed at thoracotomy, because there is a 3–10% risk of meningitis for up to three years after surgery. Patients with progressive pulmonary infection, particularly those who are immunocompromised, and all patients with extrapulmonary infection require systemic antifungal therapy. At present, such therapy consists of intravenous amphotericin B. Fluconazole may also be used, although the efficacy of this agent in the treatment of pulmonary cryptococcosis has not been documented in clinical trials. Cryptococcal meningitis and extrapulmonary cryptococcosis always require systemic antifungal therapy.28 Cryptococcal meningitis is almost universally fatal without therapy, but approximately 80–90% of patients (non-AIDS) can be cured with current therapeutic regimens. Current therapeutic recommendations are amphotericin B plus 5-fluorocytosine acutely for two weeks (induction therapy), followed by 8 weeks of consolidation with oral fluconazole.28 AIDS patients generally require lifelong therapy with fluconazole. In patients without AIDS, treatment may be discontinued after the consolidation therapy; however, relapse may be seen in up to 26% of these patients within 3–6 months after discontinuation of therapy.23,28 Thus, a prolonged consolidation treatment with an azole for up to one year may be advisable even in patients without AIDS. Infections caused by C. neoformans var. gattii demonstrate a slower response to antifungal therapy than those caused by var. neoformans. Neurological and visual sequelae are often present despite prolonged amphotericin B therapy and placement of intraventricular shunts.26
Communicable Diseases
ASPERGILLOSIS
Clinical and Epidemiologic Features The term aspergillosis refers to any one of a number of disease states caused by members of the genus Aspergillus. Aspergillus species are ubiquitous fungi that may be isolated from a variety of environmental sources, including insulation and fireproofing materials, soil, grain, leaves, grass, and air.29 The aerosolized conidia are present in large numbers and are constantly being inhaled. Although several hundred species of Aspergillus have been described, relatively few are known to cause disease in humans. Aspergillus fumigatus remains the most common cause of aspergillosis, followed by Aspergillus flavus, Aspergillus niger, Aspergillus terreus, and Aspergillus versicolor.30,31 Aspergillus infections occur worldwide and appear to be increasing in prevalence, particularly among patients with chronic pulmonary disease and among immunocompromised populations.29–31 Aspergillus species are particularly important causes of nosocomial infections in patients who are immunocompromised secondary to burn injury, malignancy, leukemia, and bone marrow and other organ transplantation. Several major outbreaks of invasive nosocomial aspergillosis have been described in association with exposure to Aspergillus conidia aerosolized by hospital construction, contaminated air handling systems, and insulation or fireproofing materials within walls or ceilings of hospital bed units.29,30 The crude mortality associated with these infections is high, approximately 90% in most series.29,31 The clinical manifestations of aspergillosis include pulmonary colonization with bronchitis and aspergilloma formation, allergic syndromes such as allergic bronchopulmonary aspergillosis (ABPA), and invasive aspergillosis.29 Intoxication or neoplasm secondary to ingestion of aflatoxin or other toxins produced by Aspergillus spp. contaminating grain and other foods is also a serious problem worldwide. Pulmonary colonization by Aspergillus spp. 
may involve the bronchial mucosa or may become localized in a preexisting cavity, resulting in the formation of an aspergilloma. Superficial colonization of the tracheobronchial mucosa produces little inflammation and is not associated with tissue invasion. The expectoration of bronchial casts containing mucus and hyphal elements may be observed. Patients in whom mucosal colonization is observed are those with preexisting pulmonary disease, including cystic fibrosis, chronic obstructive pulmonary disease, and chronic asthma requiring administration of corticosteroids. Aspergillomas are masses of mycelia and amorphous debris localized in preexisting pulmonary cavities, usually in the upper lobes. The cavities are usually lined with modified bronchial epithelium and have been formed secondary to other disease processes such as tuberculosis, infarcts, or neoplasms. There is little surrounding inflammation, and invasion of the pulmonary parenchyma by Aspergillus spp. is rare. Aspergillomas may be clinically silent; however, hemoptysis secondary to ulceration of the epithelial lining of the cavity is observed in 50–80% of cases.29 The lesions may be stable, grow, or shrink with the surrounding cavity. Spontaneous lysis occurs in approximately 10% of cases within 3 years. The allergic manifestations of aspergillosis are the result of tissue hypersensitivity to conidia or other antigens of Aspergillus spp. (almost always A. fumigatus). The clinical picture may vary from mild asthma to fibrosis and bronchiectasis secondary to allergic bronchopulmonary aspergillosis. Exposure to aerosolized Aspergillus conidia may produce bronchospasm in individuals with atopic asthma. Repeated and heavy inhalation of Aspergillus conidia and other antigens may result in extrinsic allergic alveolitis in nonatopic patients. Prolonged exposure may lead to micronodular changes and fibrosis. 
ABPA is the result of type I (IgE-mediated), type III (immune complex-mediated), and possibly type IV (cell-mediated) hypersensitivity reactions to Aspergillus antigens. This condition occurs in up to 20% of individuals with asthma and is associated with colonization of the bronchial mucosa by Aspergillus spp. These patients experience recurrent bouts of severe asthma, wheezing, fever, weight loss, chest pain, and cough productive of blood-tinged sputum. Eventually the disease becomes chronic, with the
development of fibrosis, bronchiectasis, and mucus plugging with subsequent atelectasis or cavitation. This condition may be associated with nasal polyps and chronic sinusitis. Invasive aspergillosis occurs most commonly in patients who are severely immunocompromised secondary to hematologic and lymphoreticular malignancies. Major risk factors include neutropenia, broad-spectrum antibacterial therapy, and administration of corticosteroids.29–32 Patients undergoing bone marrow transplantation are at particularly high risk, both during neutropenia and following engraftment during episodes of graft versus host disease (GVHD). The disease process is most commonly localized to the lungs, followed by the paranasal sinuses. The infectious process is typified by mucosal ulceration and direct extension of hyphae into surrounding tissues. Vascular invasion results in thrombosis, embolization, and infarction. Hematogenous dissemination occurs in 35–40% of cases of invasive pulmonary aspergillosis and may involve brain, liver, kidneys, gastrointestinal tract, thyroid, heart, skin, and other sites.29–32 Extension of paranasal infection into the orbit and brain may mimic rhinocerebral zygomycosis. Although the pulmonary process may occasionally be inapparent, it most commonly presents as a necrotizing, patchy bronchopneumonia with or without hemorrhagic infarction. In all infected foci, the infection is characterized by vascular invasion, tissue infarction, and necrosis. Massive hemoptysis, gastrointestinal bleeding, and cerebral infarcts and abscesses may occur. Chronic necrotizing aspergillosis, a more indolent pulmonary infectious process, occurs predominantly in middle-aged patients with mildly compromised host defenses or preexisting pulmonary parenchymal damage. The locally invasive infection is slowly progressive and results in cavitation and aspergilloma formation. The infectious process is usually confined to the upper lobes but occasionally may involve an entire lung.
Microbiology Aspergillus species are molds that reproduce by means of spores or conidia. The conidia germinate to form hyphae, which are the forms most commonly found in infected tissue. Aspergillus species grow well on most media and are identified to species level based on the microscopic identification of specific morphologic features. Over 600 different species of Aspergillus have been described; however, most clinical infections are due to A. fumigatus, A. flavus and A. terreus.30–35 A. niger is the most common cause of otomycosis. At present, there are no commercially available kits to aid in the identification of Aspergillus spp. In tissue, Aspergillus hyphae stain well with Gomori methenamine silver stain and are uniform, 2–7 µm in diameter, septate, and dichotomously branched with angles of approximately 45°. These features are not diagnostic and are shared by several other opportunistic fungal pathogens.
Diagnosis The clinical signs and symptoms of pulmonary aspergillosis are nonspecific and range from mild asthma to severe hemoptysis, acute bronchopneumonia, and pulmonary infarction. Extrapulmonary involvement may present as cellulitis, hemorrhage, or infarction depending on the specific site of infection. Chest radiographs may be useful in the diagnosis of aspergilloma with the appearance of a freely movable intracavitary mass surrounded by a crescent of air (Monod’s sign). The radiographic appearance of allergic bronchopulmonary aspergillosis varies with the stage and chronicity of the disease but may appear as bronchiectasis with bronchial thickening or dilation, consolidation, and atelectasis. The most common radiographic picture of invasive pulmonary aspergillosis is that of a patchy density or well-defined nodule, which may be single or multifocal with progression to diffuse consolidation or cavitation. However, invasive pulmonary aspergillosis is often inapparent on routine chest radiographs. Thus, high resolution CT (HRCT) scans play an important role in the early diagnosis of the disease.36 Early lesions in the lungs of neutropenic patients appear as small nodules with a surrounding area of low attenuation, the so-called halo sign. These nodules eventually
become larger with the disappearance of the halo sign, and eventually cavitate (the air crescent sign).30,36 The laboratory diagnosis of aspergillosis is generally unsatisfactory. Definitive diagnosis of invasive aspergillosis usually requires biopsy of the involved tissue. Unfortunately, the severe underlying diseases and associated bleeding diatheses commonly seen in these patients often preclude such an invasive approach. Isolation of Aspergillus spp. in cultures from the respiratory tract is problematic, as this organism is common in the environment and may colonize the airways of individuals. Several investigators have shown that the interpretation of respiratory tract cultures yielding Aspergillus is aided by considering the risk group of the patient.37–39 Thus, it is clear that for high-risk patients, such as allogeneic BMT recipients, individuals with hematologic malignancies, those with neutropenia, and liver transplant recipients, a positive culture yielding Aspergillus spp. alone is associated with invasive disease. Identification of fungi isolated from culture to the species level is also helpful: A. niger is rarely a pathogen, whereas A. flavus and A. terreus have been shown to be statistically associated with invasive aspergillosis when isolated from respiratory tract cultures.38 The rapid diagnosis of invasive aspergillosis has been advanced by the development of immunoassays for detection of Aspergillus galactomannan (GM) in serum.36,40 This test employs an EIA format and is available as a commercial kit or from reference laboratories. The GM test appears to be reasonably specific but exhibits variable sensitivity. 
It is best used on serial specimens from high-risk (primarily neutropenic and BMT) patients, often in tandem with HRCT scans, as an early indication to begin empiric or preemptive antifungal therapy and to more aggressively pursue a definitive diagnosis.36 Skin tests and demonstration of serum precipitins have been useful in diagnosing ABPA; however, they are of no use in diagnosing invasive infection. Additional laboratory features of ABPA include elevated serum IgE and peripheral blood eosinophilia.
Therapy Treatment of aspergillosis is difficult and is probably not indicated for aspergilloma unless life-threatening hemoptysis occurs, in which case segmental resection or lobectomy is indicated. Systemic antifungal therapy has been of no value. Likewise, neither systemic nor aerosolized antifungal therapy has been effective in treatment of the allergic syndromes such as ABPA. Corticosteroids are considered the treatment of choice. Given the high mortality associated with invasive aspergillosis, an aggressive approach to diagnosis and treatment is required.30,31 In addition, return of bone marrow function or reversal of neutropenia is essential for survival. Specific antifungal therapy of aspergillosis usually involves the administration of amphotericin B or one of its lipid-based formulations.30 It is important to realize that A. terreus is considered resistant to amphotericin B and should be treated with an alternative agent such as voriconazole.33 The recent introduction of voriconazole provides a treatment option that is more efficacious and less toxic than amphotericin B.41 Concomitant efforts to decrease immunosuppression and/or reconstitute host immune defenses are important. Likewise, surgical resection of the involved areas, if possible, is recommended. Prevention of aspergillosis in high-risk patients is paramount.30 Neutropenic and other high-risk patients are generally housed in facilities where air is filtered so as to minimize exposure to Aspergillus conidia. Prophylaxis with an echinocandin or an azole may also be beneficial.

ZYGOMYCOSIS
Clinical and Epidemiologic Features Zygomycosis is a general term that includes infections caused by fungi in the order Mucorales and order Entomophthorales (class Zygomycetes). The Zygomycetes are ubiquitous worldwide in soil and decaying vegetation. Zygomycosis is not communicable and is
acquired by inhalation, ingestion, or contamination of wounds with spores from the environment. Although Rhizopus oryzae (arrhizus) is the most common agent of human zygomycosis, additional species of Rhizopus, Mucor, Absidia, Rhizomucor, Cunninghamella, Saksenaea, and others have been causing infection with increasing frequency.42–44 Clinically, zygomycosis is a fulminant infectious process that produces rhinocerebral disease in patients with diabetic ketoacidosis; rhinocerebral, pulmonary, or disseminated disease in immunocompromised patients; local or disseminated disease in patients with burns or open wounds; and gastrointestinal disease in patients with malnutrition or preexisting gastrointestinal disorders.42–47 In each case, the progression of disease may be rapid, with invasion and destruction of key anatomic structures in a matter of days. This is particularly true with rhinocerebral infection, wherein death may occur within 3–10 days in untreated patients.42,45–47 Although classically the major risk factor for zygomycosis is diabetic acidosis, it is now clear that neutropenia, hematologic malignancy, and cytotoxic or immunosuppressive therapy place patients at risk for these infections.42–47 The hallmark of zygomycosis is vascular invasion with thrombosis, hemorrhage, infarction, and tissue necrosis. The disease usually extends locally across tissue planes; however, hematogenous dissemination may also occur. Mortality is directly related to rapidity of diagnosis (extent of disease), aggressiveness of therapy, and underlying disease state. 
Estimates of crude mortality in patients with rhinocerebral zygomycosis are 40% in patients with diabetes and at least 80% in patients with other underlying diseases (malignancy, organ transplantation, neutropenia).42,46 The prognosis is poor in cases of disseminated zygomycosis: only about 4% of patients have been reported to have survived the infectious process.46 Focal outbreaks of zygomycosis have been related to the use of certain adhesive bandages or tape on open wounds. The resulting cutaneous infections were due to Rhizopus species, which were also isolated from the bandage material.42,46,47 Recently, zygomycosis has been seen following blood and marrow transplantation in patients receiving antifungal treatment or prophylaxis with either voriconazole or caspofungin, two agents that are not active against the Zygomycetes.45,48–51
Microbiology The agents of zygomycosis are molds that reproduce asexually by means of spores. All of the Zygomycetes appear identical in tissue and are seen microscopically, following staining with hematoxylin and eosin or Gomori methenamine silver, as broad (6–50 µm), irregular, branching, usually aseptate (pauciseptate) hyphae. Definitive identification requires isolation on agar medium and subsequent microscopic examination. Once isolated, the Zygomycetes grow well on most media; however, primary isolation from clinical material is frequently difficult. Isolates are identified to genus and species level based on the microscopic identification of specific morphologic features.
Diagnosis The clinical signs and symptoms of zygomycosis are dependent on the site of infection. Rhinocerebral disease may present with nasal stuffiness, blood-tinged nasal discharge, facial swelling, and facial or orbital pain. Major diagnostic clues are the presence of a black eschar on the nasal or palatine mucosa and drainage of “black pus” from the eye.42,46–48 Radiographic examination of the sinuses may reveal clouding, thickening of the mucous membranes, and bone destruction. Progression of disease is manifested by orbital cellulitis, proptosis, and cranial nerve defects. Cerebral infarction caused by vascular compromise is common. Examination of the CSF may reveal elevated protein, normal glucose, and a modest pleocytosis. Culture and microscopic examination of CSF are uniformly negative. Pulmonary zygomycosis may resemble invasive pulmonary aspergillosis, presenting as an acute bronchopneumonia or pulmonary infarction. Radiographic findings are nonspecific and include a patchy, nonhomogeneous infiltrate
progressing to consolidation and cavitation. Life-threatening hemoptysis may occur. Gastrointestinal infection may present with abdominal pain, diarrhea, and bleeding. Vascular invasion results in infarction and perforation of the bowel with subsequent hemorrhage and peritonitis. Cutaneous infection may present as chronic ulceration, papules, or black, necrotic areas of infarction. The fulminant and life-threatening nature of these infections limits the usefulness of culture in the diagnosis of zygomycosis.42 Cultures are positive in only 20% of cases and are rarely positive antemortem. Serologic tests are not reliable, and microscopic examination of sputum or wound drainage is rarely positive for fungal elements. The key to diagnosis is the demonstration of the characteristic hyphae in tissue obtained on biopsy.42 A negative histopathologic examination does not rule out infection, and additional material should be obtained if clinically indicated.
Therapy Successful therapy of zygomycosis requires early diagnosis, systemic antifungal therapy with amphotericin B, aggressive surgical débridement of the involved area, and control of the underlying disorder.42,46,47 Most of the Zygomycetes appear quite susceptible to amphotericin B and are generally not susceptible to azoles or echinocandins.46 Among the extended-spectrum triazoles, however, posaconazole has documented utility in the treatment of infections in humans.46 In contrast, voriconazole is inactive against these agents, and breakthrough zygomycosis has been reported in BMT patients receiving voriconazole prophylaxis.45,48,49 Similarly, breakthrough zygomycosis is now appearing among patients receiving agents of the echinocandin class as they become more widely used in immunocompromised patients.50,51
REFERENCES
1. Eggimann P, Garbino J, Pittet D. Epidemiology of Candida species infections in critically ill non-immunosuppressed patients. Lancet Infect Dis. 2003;3:685–702. 2. Hajjeh RA, Sofair AN, Harrison LH, et al. Incidence of bloodstream infections due to Candida species and in vitro susceptibilities of isolates collected from 1998 to 2000 in a population-based active surveillance program. J Clin Microbiol. 2004;42:1519–27. 3. Pappas PG, Rex JH, Lee J, et al. A prospective observational study of candidemia: epidemiology, therapy, and influences on mortality in hospitalized adult and pediatric patients. Clin Infect Dis. 2003;37: 634–43. 4. Pfaller MA, Diekema DJ. Role of sentinel surveillance of candidemia: trends in species distribution and antifungal susceptibility. J Clin Microbiol. 2002;40:3551–57. 5. Pfaller MA, Diekema DJ. Rare and emerging opportunistic fungal pathogens: concern for resistance beyond Candida albicans and Aspergillus fumigatus. J Clin Microbiol. 2004;42:4419–31. 6. Trick WE, Fridkin SK, Edwards JR, et al. Secular trend of hospitalacquired candidemia among intensive care unit patients in the United States during 1989–1999. Clin Infect Dis. 2002;35:627–30. 7. Wisplinghoff H, Bischoff T, Tallent SM, et al. Nosocomial bloodstream infections in U.S. hospitals: analysis of 24,179 cases from a prospective nationwide surveillance study. Clin Infect Dis. 2004;39: 309–17. 8. Zaoutis TE, Argon J, Chu J, et al. The epidemiology and attributable outcomes of candidemia in adults and children hospitalized in the United States: a propensity analysis. Clin Infect Dis. 2005;41:1232–9. 9. Odds FC, ed. Candida and Candidiasis. 2nd ed. London: Bailliere Tindall; 1988. 10. Gudlaugsson O, Gillespie L, Lee K, et al. Attributable mortality of nosocomial candidemia, revisited. Clin Infect Dis. 2003;37:1172–7. 11. Morgan J, Meltzer MI, Plikaytis BD, et al. Excess mortality, hospital stay, and cost due to candidemia: a case-control study using data
22.
23.
24.
25. 26.
27.
28. 29. 30.
31. 32.
33.
from population-based candidemia surveillance. Infect Control Hosp Epidemiol. 2005;26:540–7 . Blumberg HM, Jarvis WR, Soucie JM, et al. National Epidemiology of Mycoses Survey (NEMIS) Study Group. Risk factors for candidal bloodstream infections in surgical intensive care unit patients: the NEMIS prospective multicenter study. Clin Infect Dis. 2001;33:177–86. Wey SB, Mori M, Pfaller MA, et al. Risk factors for hospitalacquired candidemia: a matched case-control study. Arch Intern Med. 1989;149:2349–53. Pfaller MA, Richter SS, Diekema DJ. Conventional methods for the laboratory diagnosis of fungal infections in the immunocompromised host. In: Wingard JR, Anaissie EJ, eds. Fungal Infections in the Immunocompromised Patient., Boca Raton, FL: Taylor & Francis Group; 2005:341–81. Ostrosky-Zeichner L, Alexander BD, Kett DH, et al. Multicenter clinical evaluation of the (1→3)β-D-glucan assay as an aid to diagnosis of fungal infections in humans. Clin Infect Dis. 2005;41:654–9. Pappas PG, Rex JH, Sobel JD, et al. Guidelines for treatment of candidiasis. Clin Infect Dis. 2004;38:161–89. Anaissie EJ, McGinnis MR, Pfaller MA, eds. Clinical Mycology. New York: Churchill Livingston; 2003. Franzot SP, Salkin IF, Casadevall A. Cryptococcus neoformans var. grubii: separate varietal status for Cryptococcus neoformans serotype A isolates. J Clin Microbiol. 1999;37:838–40. Nucci M, Marr KA. Emerging fungal diseases. Clin Infect Dis. 2005;41:521–526. Hoang LM, Maguire JA, Doyle P, et al. Cryptococcus neoformans infections at Vancouver Hospital and Health Sciences Centre (1997–2002): epidemiology, microbiology and histopathology. J Med Microbiol. 2004;53:935–940. Kidd SE, Hagen F, Tscharke RL. A rare genotype of Cryptococcus gattii caused the cryptococcosis outbreak on Vancouver Island (British Columbia, Canada). Proc Natl Acad Sci USA. 2004;101:17258–63. Mitchell DH, Sorrell TC, Allworth AM, et al. 
Cryptococcal disease of the CNS in immunocompetent hosts: influence of cryptococcal variety on clinical manifestations and outcome. Clin Infect Dis. 1995;20:611–6. Pappas PG, Perfect JR, Cloud GA, et al. Cryptococcosis in human immunodeficiency virus-negative patients in the era of effective azole therapy. Clin Infect Dis. 2001;33:690–9. Speed B, Dunt D. Clinical and host differences between infections with the two varieties of Cryptococcus neoformans. Clin Infect Dis. 1995;21:28–34. Wilson LS, Reyes CM, Stolpman M, et al. The direct cost and incidence of systemic fungal infections. Value Health. 2002;5:26–34. Seaton RA, Verma N, Naraqi S, et al. Visual loss in immunocompetent patients with C. neoformans var. gattii meningitis. Trans R Soc Trap Med Hyg. 1997;91:44–49. Tanner DC, Weinstein MP, Fedorciw B, et al. Comparison of commercial kits for detection of cryptococcal antigen. J Clin Microbiol. 1994;32:1680–84. Saag MS, Graybill RJ, Larsen RA, et al. Practice guidelines for the management of cryptococcal disease. Clin Infect Dis. 2000;30:710–8. Anaissie EJ, McGinnis MR, Pfaller MA, eds. Clinical Mycology. New York: Churchill Livingston; 2003. Steinbach WJ, Loeffler J, Stevens DA. Aspergillosis. In: Wingard JR, Anaissie EJ, eds. Fungal Infections in the Immunocompromised Patient. Boca Raton, FL: Taylor & Francis Group; 2005;257–94. Patterson TF, Kirkpatrick WR, White M, et al. Invasive aspergillosis. Medicine. 2000;79:250–60. Marr KA, Carter RA, Crippa R, et al. Epidemiology and outcome of mould infections in hematopoietic stem cell transplant recipients. Clin Infect Dis. 2002;34:909–17. Steinbach WJ, Benjamin DK Jr, Kontoyiannis DP, et al. Infections due to Aspergillus terreus: a multicenter retrospective analysis of 83 cases. Clin Infect Dis. 2004;39:192–8.
34. Pfaller MA, Diekema DJ. Rare and emerging opportunistic fungal pathogens: concern for resistance beyond Candida albicans and Aspergillus fumigatus. J Clin Microbiol. 2004;42:4419–31.
35. Nucci M, Marr KA. Emerging fungal diseases. Clin Infect Dis. 2005;41:521–6.
36. Maertens J, Theunissen K, Verhoef G, et al. Galactomannan and computed tomography-based preemptive antifungal therapy in neutropenic patients at high risk for invasive fungal infection: a prospective feasibility study. Clin Infect Dis. 2005;41:1242–50.
37. Horvath JA, Dummer S. The use of respiratory-tract cultures in the diagnosis of invasive pulmonary aspergillosis. Am J Med. 1996;100:171–8.
38. Perfect JR, Cox GM, Lee JY. The impact of culture isolates of Aspergillus species: a hospital-based survey of aspergillosis. Clin Infect Dis. 2001;33:1824–33.
39. Yu VL, Muder RR, Poorsattar A. Significance of isolation of Aspergillus from the respiratory tract in diagnosis of pulmonary aspergillosis: results from a three-year prospective study. Am J Med. 1986;81:249–54.
40. Yeo SF, Wong B. Current status of nonculture methods for diagnosis of invasive fungal infections. Clin Microbiol Rev. 2002;15:465–84.
41. Herbrecht R, Denning DW, Patterson TF. Voriconazole versus amphotericin B for primary therapy of invasive aspergillosis. N Engl J Med. 2002;347:408–15.
42. Gonzales CE, Rinaldi MG, Sugar AM. Zygomycosis. Infect Dis Clin N Am. 2002;16:895–914.
43. Iwen PC, Freifeld AG, Sigler L, et al. Molecular identification of Rhizomucor pusillus as a cause of sinus-orbital zygomycosis in a patient with acute myelogenous leukemia. J Clin Microbiol. 2005;43:5819–21.
44. Schlebusch S, Looke DFM. Intraabdominal zygomycosis caused by Syncephalastrum racemosum infection successfully treated with partial surgical debridement and high-dose amphotericin B lipid complex. J Clin Microbiol. 2005;43:5825–7.
45. Kontoyiannis DP, Lionakis MS, Lewis RE, et al. Zygomycosis in a tertiary-care cancer center in the era of Aspergillus-active antifungal therapy: a case-control observational study of 27 recent cases. J Infect Dis. 2005;191:1350–60.
46. Roden MM, Zaoutis TE, Buchanan WL, et al. Epidemiology and outcome of zygomycosis: a review of 929 reported cases. Clin Infect Dis. 2005;41:634–53.
47. Spellberg B, Edwards JE Jr, Ibrahim A. Novel perspectives on mucormycosis: pathophysiology, presentation, and management. Clin Microbiol Rev. 2005;18:556–69.
48. Chamilos G, Marom EM, Lewis RE, et al. Predictors of pulmonary zygomycosis versus invasive pulmonary aspergillosis in patients with cancer. Clin Infect Dis. 2005;41:60–6.
49. Siwek GT, Dodgson K, de Magalhaes-Silverman M, et al. Invasive zygomycosis in hematopoietic stem cell transplant recipients receiving voriconazole prophylaxis. Clin Infect Dis. 2004;39:584–7.
50. Girmenia C, Moleti ML, Micozzi A, et al. Breakthrough Candida krusei fungemia during fluconazole prophylaxis followed by breakthrough zygomycosis during caspofungin therapy in a patient with severe aplastic anemia who underwent stem cell transplantation. J Clin Microbiol. 2005;43:5395–6.
51. Safdar A, O'Brien S, Kouri IF. Efficacy and feasibility of aerosolized amphotericin B lipid complex therapy in caspofungin breakthrough pulmonary zygomycosis. Bone Marrow Transplant. 2004;34:467–8.
Other Infection-Related Diseases of Public Health Import
18
Dermatophytes Marta J. VanBeek
DERMATOPHYTOSES
Superficial fungal infections are among the most common diseases of the skin, affecting millions worldwide. The high global prevalence and ease of communicability render such infections a public health concern. Dermatophytosis is a general term used to describe superficial fungal infections of the skin, hair, and nails. The infectious agents, referred to as dermatophytes, are a group of closely related fungi equipped with the capacity to invade keratinized tissue of both humans and animals. Since fungi are generally incapable of penetrating deeper tissues, infections are typically restricted to the nonliving cornified layers of the hair, skin, and nails. The clinical presentation and severity of dermatophytosis varies widely according to the anatomic site of the infection, the specific dermatophyte, and the immunological defense of the host. The clinical term “tinea” refers exclusively to dermatophyte infections. Tinea infections are classified according to their anatomic location (Table 18-1). While a single dermatophyte species can cause a variety of clinical manifestations in different parts of the body, the same clinical picture may be due to dermatophytes of different species or genera. Clinically relevant dermatophytes are classified into three genera, Epidermophyton, Microsporum, and Trichophyton.1 Depending on their habitat, dermatophytes are also classified as anthropophilic (human), zoophilic (animal), or geophilic (soil). Anthropophilic organisms are responsible for most human cutaneous fungal infections and rarely infect other animals. Zoophilic dermatophytes are associated with animal fungal infections but occasionally infect humans. Geophilic dermatophytes are primarily associated with keratinous materials such as hair, feathers, hooves, and horns after these materials have been dissociated from living animals and are in the process of decomposition. 
These species may cause human and animal infection.1 Dermatophytes are spread by direct contact from other people (anthropophilic fungi), animals (zoophilic fungi), or soil (geophilic fungi), or indirectly from fomites. Fomites such as combs, towels, blankets, and pillows can disseminate fungus from a primary source to secondary contacts. However, person-to-person contact is the most common source of infection in the United States.2 The incidence of dermatophytosis varies by geographic distribution, climate, season, race, and cultural habits. Dermatophyte infections are more common in warm, humid climates and in dense populations. Up to 20% of the U.S. population harbors a dermatophytosis at any
time, with tinea pedis being the most common infection.2 Overall, peak prevalence of dermatophytosis occurs after puberty.2 In developed countries, it is estimated that 5% of patients with dermatological conditions have a dermatophytosis, and more than 90% of the male population has experienced a transient fungal infection by the age of 40.3–5 The etiologic species, geographic distributions, and population characteristics of specific tinea infections have changed dramatically over the past 50 years. The following sections review the epidemiology, clinical morphology, common etiologic species, diagnosis, and treatment for the most common types of tinea infection.
Tinea Capitis Epidemiology Each year, nearly one million children may be infected with tinea capitis, a highly contagious infection also known as ringworm of the scalp. Tinea capitis accounts for over 90% of fungal infections in children under age 10 in the United States.6 Globally, Microsporum canis has been the most common species causing tinea capitis. It remains the predominant agent of tinea capitis in rural areas and in some parts of Europe, the eastern Mediterranean, and South America.7,8 In recent decades Trichophyton tonsurans has become the most common species causing tinea capitis in the United States (80% of cases)9 and the United Kingdom.10 The reason for this etiologic shift remains unclear. T. tonsurans is an anthropophilic fungus, which spreads from person to person or from infected fomites. It has been responsible for a progressive, continent-wide epidemic over the past 50 years.4 Urban areas and minority communities have been particularly affected.10 M. canis, a zoophilic species, can be acquired from infected cats or dogs. This same species can also be transferred from human to human, resulting in small outbreaks or clusters of infections among those living in close proximity.11
Clinical Presentation and Diagnosis Clinically, tinea capitis appears as an inflammatory erythematous, scaly plaque with central alopecia, or hair loss (Fig. 18-1). Unlike other tinea infections, tinea capitis is rarely pruritic. However, if left untreated or misdiagnosed, the plaques can evolve into an inflammatory nodule, referred to as a kerion (Fig. 18-2). This nodule may be
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Communicable Diseases
TABLE 18-1. COMMON CUTANEOUS FUNGAL INFECTIONS

Clinical Condition | Site of Infection | Common Dermatophytes
Tinea Capitis | Scalp | Trichophyton tonsurans,∗ Microsporum canis, Trichophyton mentagrophytes,† Microsporum audouinii, Trichophyton verrucosum, Trichophyton violaceum
Tinea Corporis | Trunk and extremities | Trichophyton rubrum,∗ Trichophyton mentagrophytes, Microsporum canis, Trichophyton tonsurans,† Trichophyton verrucosum
Tinea Cruris | Genitalia, perineal and perianal skin | Trichophyton rubrum,∗ Trichophyton mentagrophytes, Microsporum canis,† Epidermophyton floccosum
Tinea Pedis | Feet | Trichophyton rubrum,∗ Trichophyton mentagrophytes, Epidermophyton floccosum†
Tinea Unguium (Onychomycosis) | Nails of the hands or feet | Trichophyton rubrum,∗† Trichophyton mentagrophytes
Figure 18-2. Tinea capitis: kerion.
∗Gupta AK, Tu LQ. Dermatophytes: diagnosis and treatment. J Am Acad Dermatol. 2006;54(6):1050–5.
†Vander Straten MR, Hossain MA, Ghannoum MA. Cutaneous infections: dermatophytosis, onychomycosis, and tinea versicolor. Infect Dis Clin N Am. 2003;17(1):87–112.
very painful and drain purulent material. Examination of the scalp hair under ultraviolet light (Wood's filter) for yellow-green fluorescence is helpful in diagnosing tinea capitis caused by M. canis or Microsporum audouinii. However, tinea capitis caused by Trichophyton species does not fluoresce.11 Definitive diagnosis is made by scraping scale from scalp lesions onto a glass slide, applying 10–20% potassium hydroxide (KOH) to the collected scale, and examining the preparation under a microscope for the presence of arthrospores and/or segmented, branched filaments. Confirmation of the diagnosis and speciation is made by culture on Sabouraud's agar medium, a procedure requiring 2–6 weeks for growth and isolation.12
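The work-up described above (Wood's lamp, then KOH microscopy, then confirmatory culture) can be summarized as a simple decision sketch. This is an illustrative toy, not clinical software: the function name, inputs, and returned strings are invented; only the decision logic follows the text.

```python
# Illustrative sketch of the tinea capitis work-up described in the text.
# All identifiers and messages are hypothetical.

def tinea_capitis_workup(wood_lamp_fluorescence, koh_positive, culture_species=""):
    """Summarize the diagnostic reasoning for suspected tinea capitis."""
    notes = []
    if wood_lamp_fluorescence:
        # Yellow-green fluorescence under a Wood's filter suggests
        # M. canis or M. audouinii; Trichophyton species do not
        # fluoresce, so a negative exam does not exclude infection.
        notes.append("fluorescence suggests Microsporum species")
    if koh_positive:
        # Arthrospores or segmented, branched filaments seen on a
        # 10-20% KOH preparation of scraped scale.
        notes.append("KOH positive: presumptive dermatophytosis")
    else:
        notes.append("KOH negative: consider repeat scraping or culture")
    if culture_species:
        # Sabouraud's agar culture (2-6 weeks) confirms and speciates.
        notes.append("culture confirms " + culture_species)
    return "; ".join(notes)
```

For example, a non-fluorescing, KOH-positive case later cultured as T. tonsurans returns "KOH positive: presumptive dermatophytosis; culture confirms Trichophyton tonsurans", mirroring the point that a negative Wood's lamp does not rule out the most common U.S. agent.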
Prevention and Control Tinea capitis is easily transmitted from person-to-person, or from animals to humans. Up to 30% of children can be asymptomatic carriers of T. tonsurans.13 Consequently, outbreaks of tinea capitis are frequent within families and among school children. Such outbreaks require the identification and treatment of all active infections and asymptomatic carriers. Specifically, it is important to inspect all contacts of cases, including family and intimate contacts in order to identify and eliminate all possible sources of infection. Some animals or pets may also be inapparent carriers. Treatment of tinea capitis requires oral therapy for 6–12 weeks. Topical therapy is not adequate primary therapy; however, it may be added to deter or decrease transmission during treatment with oral antifungals.1 In order to prevent reinfection, it is important to treat all infected family members and infected close contacts simultaneously.
Tinea Corporis Epidemiology Tinea corporis is a dermatophytosis of the glabrous skin of the trunk and extremities (excluding the scalp, beard, face, hands, feet, and groin). Tinea corporis may be caused by any of the dermatophytes of the genera Trichophyton, Microsporum, and Epidermophyton.14 Geography, race, and seasonality all seem to influence the etiologic dermatophyte in tinea corporis.6 In the United States, and throughout the world, Trichophyton rubrum is currently the predominant infecting dermatophyte of nonscalp skin infections.2,15 Other common pathogenic agents are Trichophyton mentagrophytes, and M. canis. Children are frequently infected with Microsporum canis, especially those exposed to infected animals.16 Unlike tinea pedis or tinea cruris, tinea corporis infections are more common in women.13
Clinical Presentation and Diagnosis
Figure 18-1. Tinea capitis.
Tinea corporis, or ringworm, typically appears as single or multiple, annular, scaly lesions with central clearing, a slightly elevated, reddened edge, and sharp margination on the trunk or extremities (Figs. 18-3 and 18-4). The infection may range from mild to severe
with variable pruritus. When a zoophilic dermatophyte, such as Trichophyton verrucosum, is the responsible organism, an intense inflammatory reaction can occur, resulting in inflammatory papules and pustules. Fungal hyphae may also invade the follicle and hair shaft, causing perifolliculitis or an inflammatory nodule (Majocchi's granuloma).1 The diagnosis of tinea corporis is based on clinical appearance and KOH examination of skin scrapings from the advancing edge of the lesion.
Figure 18-3. Tinea corporis: neck.
Figure 18-4. Tinea corporis: arm.
Prevention and Control Tinea corporis is particularly common in areas of excessive heat and moisture. Risk factors include close body contact with infected humans, animals, or soil.1 Considerable literature documents historical epidemics of tinea corporis among military recruits and athletes, emphasizing the risk associated with dense populations in humid conditions.17 Consequently, a dry, cool environment may play a role in preventing infection.18 Avoiding contact with infected individuals will also minimize risk of infection. Once infected, scales may be transmitted through direct contact between individuals, or indirectly through contact with objects that carry the infected scales.19 Diagnosis is dependent on a positive KOH preparation. Scales for the preparation should be collected from the advancing edge of the lesion. Tinea corporis may be treated with topical antifungals in an immunocompetent patient with limited disease. When a large body surface area is affected, or the host is immunocompromised, oral therapy may be required.
Tinea Cruris Epidemiology Tinea cruris includes infections of the genitalia, pubic area, perineal skin, and perianal skin.1 The condition is common throughout the world, with men affected more frequently than women.13,20 Otherwise known as jock itch, the infection is most commonly caused by T. rubrum or Epidermophyton floccosum.13 Infection is more common in summer months, when ambient temperature and humidity are high. Occlusion from wet or tight-fitting clothing also provides an optimal environment for infection.17
Clinical Presentation and Diagnosis Tinea cruris typically presents as annular lesions extending from the crural fold over the adjacent upper inner thigh (Fig. 18-5).20 In addition to affecting the proximal medial thighs, lesions may extend to the buttocks and lower abdomen, typically sparing the scrotum. Pustules or vesicles may be present at the active edge of the infected area; maceration may ensue at the inguinal crease.17 Patients with tinea cruris frequently complain of burning and pruritus. Diagnosis is based on clinical signs and symptoms in addition to a positive KOH preparation or fungal culture.
Figure 18-5. Tinea cruris.
Prevention and Control Poor hygiene, hyperhidrosis, tight-fitting clothing, and immunosuppression are factors that contribute to the onset of this condition. Frequent outbreaks are common among people who use communal exercise or bathing facilities.17 Some have observed a higher prevalence of tinea cruris in patients with concomitant tinea pedis and
onychomycosis (fungal infection of the toenails).21,22 Autoinoculation may occur when an individual brushes fungal organisms onto the underwear from infected feet and toenails. This may be ameliorated by covering the infected toenails with socks, before donning the undergarment. Alternatively, adequately treating the tinea pedis or onychomycosis eliminates the potential for autoinoculation. In an immunocompetent host, tinea cruris can be treated with a several week course of topical antifungal therapy.23
Tinea Pedis Epidemiology Tinea pedis is the most common dermatophytosis, affecting up to 70% of adults worldwide.2,13 This is characteristically an infection of urbanized areas, occurring among people who wear shoes, since heat and moisture are essential for the growth of the fungus.24 Frequent exposure to communal locker rooms and shower stalls also predisposes to infection. Consequently, prevalence is high among people frequenting swimming pools and involved in sporting activities. Men between 20 and 40 years of age are most frequently affected.25,26 One report found that dermatophytes could be recovered from the plantar surface of up to 7% of the U.S. population.27 In other studies, the incidence of tinea pedis has been estimated at 3% in the United States, but may be up to 5% in the elderly and in excess of 20% in populations who use communal showers or locker rooms.26,28 Tinea pedis in children is uncommon, with a frequency of 2.2% in children aged 7–10 years, and 8.2% in children aged 11–14 years.29 Though many species can manifest as tinea pedis, T. rubrum and T. mentagrophytes var. interdigitale are thought to be the most common pathogens.30
Clinical Presentation and Diagnosis Tinea pedis, or athlete’s foot, has three common presentations. The interdigital form is the most common and is characterized by fissuring, maceration, and scaling in the interdigital spaces of the fourth and fifth toes. Patients with this infection frequently complain of itching or burning. A second form, usually caused by T. rubrum, presents in a moccasinlike distribution with the plantar skin appearing scaly and thickened (Fig. 18-6). There is frequent hyperkeratosis and erythema of the soles, heels, and sides of the feet. The third form is vesiculobullous tinea pedis; it is characterized by the development of vesicles, pustules, or
bullae on the soles.18 Diagnosis is based on clinical signs and symptoms in addition to a positive KOH preparation or fungal culture.
Prevention and Control Many investigators suspect that the type and duration of exposure to a dermatophyte determine whether a person is likely to acquire tinea pedis. While tinea pedis is transmissible, not all people are equally at risk, likely due to a degree of innate resistance.31 Poor hygiene, hyperhidrosis, inadequate drying of the feet, and immunodeficiency are factors that contribute to disease.24 Minimizing exposure to infected individuals and to areas known to be at high risk of fungal colonization (public exercise and bath facilities) will reduce one's risk of infection. If left untreated, tinea pedis may persist throughout life, with periods of exacerbation and remission.32 Adequate treatment requires prolonged therapy with topical antifungal agents. Despite this, recurrence occurs in up to 70% of patients.2 Such recurrence is often attributed to species persistence on fomites such as socks and shoes.33 These cases can be eradicated by adding a preventive maintenance program of topical antifungal use 1–2 times per week indefinitely.
Tinea Unguium/Onychomycosis Epidemiology Onychomycosis is the invasion of the nail plate by a dermatophyte, a yeast, or a nondermatophyte mold. “Tinea unguium” refers to onychomycosis caused only by dermatophytes. The prevalence of onychomycosis increases with age, reaching nearly 20% in patients over 60 years old.22 Onychomycosis has been reported to occur at a rate of 5–15% in various populations,3–5 and represents 30% of all cases of dermatophyte infection.24,34 Onychomycosis in children is rare, with an estimated prevalence of 0.2%.35 The incidence of onychomycosis has been increasing worldwide, and at present it accounts for almost half of all nail disorders.36 The increase is attributed to several factors, including the aging population, the growing number of immunocompromised patients, and the widespread use of occlusive clothing and shoes.36 Ninety percent of all nail infections are caused by dermatophytes (T. rubrum 71%, T. mentagrophytes 20%); 8% by nondermatophyte molds (Aspergillus, Fusarium); and 2% by Candida.22 Among the dermatophytes, T. rubrum is the dominant organism in both the United States and Europe, usually accounting for greater than 90% of isolates.35 Rarely, both yeasts and nondermatophytic molds can present as copathogens within the same infected nail.
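The etiologic percentages quoted above can be turned into expected case counts, a useful back-of-the-envelope check. The cohort size of 1,000 is invented for illustration, and the snippet assumes (as the parenthetical suggests) that the 71% T. rubrum figure is a share of the dermatophyte isolates rather than of all nail infections.

```python
# Hypothetical cohort of 1,000 nail infections; shares are those cited
# in the text (dermatophytes 90%, nondermatophyte molds 8%, Candida 2%).
cohort = 1_000
shares = {"dermatophytes": 0.90, "nondermatophyte molds": 0.08, "Candida": 0.02}
counts = {agent: round(cohort * share) for agent, share in shares.items()}

# Assumption: 71% T. rubrum is a share of the dermatophyte isolates.
t_rubrum = round(counts["dermatophytes"] * 0.71)

print(counts)    # {'dermatophytes': 900, 'nondermatophyte molds': 80, 'Candida': 20}
print(t_rubrum)  # 639
```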
Clinical Presentation and Diagnosis Tinea unguium or onychomycosis is characterized by symptoms of pain, discoloration, thickening, onycholysis, accumulation of subungual debris, and brittleness of the nails (Fig. 18-7). Onychomycosis
Figure 18-6. Tinea pedis: moccassin distribution.
Figure 18-7. Tinea unguium/onychomycosis.
can present as distal lateral subungual onychomycosis (DLSO), proximal subungual onychomycosis (PSO), total dystrophic onychomycosis (TDO), and white superficial onychomycosis (WSO).26 DLSO is the most common presentation, manifested by thickening and discoloration of the distal nail bed which eventually progresses proximally. WSO (Fig. 18-8) affects the superficial nail plate and rarely penetrates to the nail bed; it accounts for 10% of onychomycosis cases.26 PSO is the least common type of onychomycosis in healthy persons. It affects the nail plate in the area of the nail matrix, and progresses distally under the nail bed and plate. Total dystrophic onychomycosis may be the end result of any of the other forms of onychomycosis. This condition is characterized by total destruction of the nail plate. Since there are multiple potential causes of dystrophic nails (including vascular insufficiency, trauma, and psoriasis), the diagnosis of onychomycosis cannot be made clinically. Definitive diagnosis depends on a positive KOH preparation and fungal culture.1
Figure 18-8. Tinea unguium: white superficial onychomycosis.
Prevention and Control Onychomycosis presents with contiguous tinea pedis in 50% of cases.1 In these cases it is impossible to pinpoint the initial infection. It is plausible that one could minimize the risk of onychomycosis by treating existing tinea pedis in a patient with unaffected toenails.1 Groups of individuals such as the elderly, diabetics, and those with previous trauma to the nail unit may be predisposed to onychomycosis.22 Treatment of onychomycosis requires at least 3 months of oral therapy. Since oral antifungal therapy affects only newly developed nail, it may take 12 months before the nail returns to normal. Even with apparently optimal diagnosis and treatment, one in five onychomycosis patients is not cured by current therapies.37 The reasons for the 20% failure rate include inaccurate diagnosis; misidentification of the pathogen; and the presence of a second disorder, such as psoriasis.38
DERMATOPHYTOSIS: EXPECTED TRENDS
Over the past two decades, newer oral antifungals have improved the efficacy and rapidity of treatment. Despite this, dynamic changes in population susceptibility have contributed to a significant rise in the incidence of fungal infections. A weakened host defense and an impaired ability to complete activities of daily living, such as bathing, make the elderly patient especially susceptible to fungal infections.39 Currently, fungal infections are among the most prevalent dermatologic conditions in the elderly, second only to benign and malignant tumors.40 Similarly, immunosuppressed patients of any age are particularly susceptible to cutaneous fungal infections.1 With rising numbers of organ and bone marrow transplants, our iatrogenically immunosuppressed population has increased markedly. Transplant patients on chronic immunosuppressant medications have a significant risk of frequent and recalcitrant dermatophytoses in addition to systemic fungal disease. Despite many advances in the treatment of HIV, these patients are also at increased risk for both cutaneous and systemic fungal infections. Currently, the prevalence of fungal infections is 20% in HIV-positive patients with T-cell counts below 400/µL.41 It is clear that as the elderly and immunosuppressed populations swell, we will likely see increased numbers of infections and, perhaps, changing trends in the most common offending dermatophytes. If these changes are properly anticipated, the public health community will be prepared to address them.
Hookworm Disease: Ancylostomiasis, Necatoriasis, Uncinariasis Laverne K. Eveland
Hookworms are among the three most common soil-transmitted helminth infections,1 and cause one of the most important diseases of humans in tropical and subtropical climates. An estimated 700–800 million people are infected with hookworms, most of whom live in sub-Saharan Africa and eastern Asia.2 Most human infections are caused by Necator americanus, with Ancylostoma duodenale infections scattered throughout the world.3 Investigations in rural West Bengal have shown marked aggregation of both hookworm species in individual villagers, with more than 60% of the infections found in less than 10% of the people, which theoretically should facilitate control.4 Although the prevalence of hookworm infection has decreased in areas such as the United States and Puerto Rico, where improvements in socioeconomic conditions have elevated living standards, hookworm disease results in enormous human misery and suffering, as well as economic loss, in areas where overcrowding, poverty, and unsanitary living conditions combined with inadequate health care and education prevail.2
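The aggregation figure from West Bengal quoted above implies a strikingly uneven per-person burden, which is why targeting the heavily infected minority can facilitate control. A quick arithmetic sketch, treating the quoted bounds (60% of infections in 10% of people) as exact for illustration:

```python
# Per-capita burden of the heavily infected minority vs. everyone else,
# taking the quoted bounds as exact (0.60 of the worms in 0.10 of the people).
worm_share, person_share = 0.60, 0.10
heavy = worm_share / person_share              # top 10%: 6x the overall mean
light = (1 - worm_share) / (1 - person_share)  # remaining 90%: ~0.44x the mean
print(round(heavy / light, 1))  # 13.5
```

That is, the most heavily infected tenth of villagers carries roughly 13–14 times the per-person worm burden of the rest, an example of the aggregated (overdispersed) distributions typical of helminth infections.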
Immigration from developing countries has also changed the distribution of hookworms and other soil-transmitted helminth infections in developed countries. An analysis of 216,000 stool specimens examined in 1987 identified hookworms (1.5%), Trichuris trichiura (1.2%), and Ascaris lumbricoides (0.8%) as the leading causes of helminth infections in the United States, and the highest rates of hookworm infection were in California, Wisconsin, Rhode Island, Colorado, and Washington, all states lacking indigenous transmission.5 In fact, of the nine states reporting more than 2% rates of infection none were endemic for hookworm disease, suggesting that most infections were acquired outside the United States. Hookworm disease is characterized by an iron-deficiency anemia and protein malnutrition, leading to higher infant mortality and lower birthweight,6 retarded growth, reduced worker productivity, and impaired learning and cognitive development.7 Disease manifestations are not dramatic, but silent and insidious, and have historically
474
Communicable Diseases
been confused with innate shiftlessness.8 Although frank disease is usually not apparent in well-nourished persons with sufficient iron intake,9 significant protein is lost into the intestinal tract in the form of plasma,10 and the absorption rate of protein is significantly increased after deworming.11
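The 1987 U.S. stool-survey percentages quoted earlier can be converted into approximate specimen counts, which makes the scale of imported infection concrete. Counts are approximate because the published prevalences are rounded.

```python
# Approximate positive-specimen counts from the 1987 survey of 216,000
# stools, using the prevalences cited in the text.
total = 216_000
prevalence = {"hookworm": 0.015, "Trichuris trichiura": 0.012,
              "Ascaris lumbricoides": 0.008}
counts = {species: round(total * p) for species, p in prevalence.items()}
print(counts)
# {'hookworm': 3240, 'Trichuris trichiura': 2592, 'Ascaris lumbricoides': 1728}
```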
The Parasites N. americanus and A. duodenale are often referred to as “new” and “old” world hookworms, respectively, but these designations are misleading. They are small nematodes, with males measuring 5–11 mm and females 9–13 mm. Hookworms have specialized mouthparts, resembling one or more pairs of teeth in A. duodenale and a pair of curved cutting plates in N. americanus, that bite into plugs of the intestinal mucosa which have been drawn into the large buccal capsule. N. americanus is bent dorsally at its anterior end. The two species are difficult to distinguish on the basis of egg morphology or size (60–70 × 40 µm). They are the only species that mature in humans, except for A. ceylanicum, which causes rare human infections in South America and Asia. However, larvae of zoonotic hookworms that cannot develop to maturity in humans can cause dermatitis (Cutaneous Larva Migrans; see later on) when they migrate through human skin. In northeastern Australia, sexually immature stages of the canine hookworm A. caninum have been implicated as the cause of acute abdominal pain with peripheral blood eosinophilia and enteritis in humans.12 N. americanus and A. duodenale also differ in their life cycles and biology, including modes of infection and survival, with implications for their control. A. duodenale is apparently not as well adapted to its host as N. americanus. It is relatively short-lived but more pathogenic, as measured by the severity of symptoms, with increased blood loss and anemia,13 its relative resistance to expulsion with anthelmintics,8 and the increased activity of proteases presumed to be involved in skin penetration, tissue migration, and feeding.14 A. duodenale increases the probability of contacting its host by producing a greater number of eggs, and by synchronizing maximal egg output with the season most favorable for development of free-living larvae, which are robust and capable of surviving longer outside the host and of infecting both orally and percutaneously.15 N. americanus has been the dominant species in southern China, southern India, Indochina, sub-Saharan Africa, the southern United States, and Australia.15 A. duodenale predominates in southern Europe, northern coastal Africa, northern India, north China, and Japan. It has also been described in native Paraguayan Indians, in the hill tribes of Fukien, China, and in the aborigines of western Australia. Where the two species are sympatric, their relative abundance varies geographically with host age, gender, and other factors. N. americanus coexists with A. duodenale in southern India, Myanmar, Malaysia, the Philippines, Indonesia, Micronesia, Polynesia, and Portuguese West Africa, although it is the predominant species in these areas. However, A. duodenale predominates in coastal Peru and Chile.16 The life cycles of the two species are similar, but differ in several important ways that influence their epidemiology, pathogenesis, diagnosis, treatment, and control. Female A. duodenale lay up to 25,000 eggs per day, while N. americanus can only produce up to 10,000.15 The eggs are usually at the four- to eight-cell stage of development when they are passed in human feces and measure approximately 60 × 40 µm. If they are deposited in suitable moist, shady, sandy soil, they develop and hatch in 1 or 2 days into first-stage rhabditiform larvae (0.25–0.30 mm × 17 µm), which have characteristics that distinguish them from Strongyloides stercoralis larvae and free-living larvae such as those of Rhabditis species. The rhabditiform larvae grow for 2 or 3 days, feeding on bacteria and organic debris.
They then molt into second-stage rhabditiform larvae (0.5–0.6 mm), which continue to feed for several days, and then into third-stage filariform larvae. The filariform larvae may remain viable in the soil for several weeks under favorable conditions. Eggs of A. duodenale are more resistant to temperature and other environmental variations than those of N. americanus, and its larvae can survive longer outside the host.15 Hookworms normally infect humans by penetrating the skin, after which the filariform larvae are carried in the blood to and through the right heart to the lungs, then up the respiratory tree and down the
digestive tract into the small intestine. After a final molt they attach to the mucosa of the jejunum and upper ileum and develop into sexually differentiated adults. The lung migration is essential for the development of N. americanus but not for A. duodenale. Eggs of N. americanus usually appear in host feces within 40–60 days after the worms reach the intestine, while A. duodenale has a much more variable prepatent period, ranging from 43 to 105 days.17 A. duodenale larvae can infect equally well by the oral route, developing into adults without lung passage.15,17 In a study in China, many children showed clinical manifestations and eggs in their feces within 3 months of birth, suggesting transplacental transmission of infective larvae.18 Although the evidence is indirect, because hookworm larvae have never been demonstrated in breast milk, the facts that A. duodenale infects nursing infants with no apparent exposure to other routes of infection, and that in a number of endemic areas there is a predominance of A. duodenale in infants, also strongly argue for transmammary transmission.10 The prepatent period for A. duodenale is long because, before larvae that have penetrated the skin reach the intestine to become adults, they undergo arrested development for extended periods in deep tissues. Human patients dewormed as long as 200 days following infection have been reported positive for fourth- and fifth-stage A. duodenale larvae.19 A. duodenale larvae sequester in the viscera of experimental animals for up to 36 days and in the muscles for 66 days,20 and at least 27-day-old muscle larvae of A. duodenale from experimental animals develop into adult worms when fed to dogs.21 These observations indicate that meat-borne A. duodenale infection of humans is possible through the ingestion of larvae in food animals that can serve as paratenic hosts, although no work has been done to explore the actual epidemiological significance of this means of transmission.
The duration of infections is highly variable; many worms are eliminated within a year, but records of longevity range from 4–20 years for N. americanus and 5–7 years for A. duodenale.16
Infection and Disease The pathogenesis of hookworm disease usually begins when the larvae enter any portion of the skin with which they make contact, although in places where people have no direct contact with contaminated soil, hookworms may also be acquired through the buccal mucosa and lower levels of the alimentary tract when people eat vegetables grown on soil containing hookworm larvae.20 Skin penetration by the larvae produces a stinging sensation of minor or moderate intensity, depending upon the number of larvae penetrating and the sensitivity of the host. Skin reactions varying from erythematous papules to vesiculation last from 7 to 10 days.10 Secondary bacterial infections may also occur, especially if the itching lesions are abraded by scratching. This so-called “ground itch” or “dew itch” must be distinguished from the characteristic cutaneous larva migrans (CLM) caused by the zoonotic Ancylostoma braziliense and other nematodes of the family Ancylostomatidae. CLM is characterized by tortuous inflammatory areas in the dermis associated with swelling, erythema, papular dermatitis, and pruritus. N. americanus sometimes migrates in the skin and produces a mild CLM, which is of shorter duration than that caused by A. braziliense.16 Although migrating hookworm larvae do not usually produce pulmonary symptoms, they do produce minute focal hemorrhages when they break out of pulmonary capillaries, and may produce clinical pneumonitis in massive infections. Wakana disease, which has been described in Japan, sometimes follows the ingestion of A. duodenale larvae, their penetration into the mucous membranes of the mouth and pharynx, and their migration to the lungs. The initial symptoms, occurring shortly after the larvae are ingested, are pharyngeal itching, hoarseness, salivation, nausea, and vomiting, followed by an illness of several days’ duration that includes coughing, dyspnea, wheezing, urticaria, nausea, and vomiting. 
Chest roentgenograms may reveal pulmonary infiltrates, which presumably result from an allergic reaction to the larval antigens.22 Although light infections are usually asymptomatic, acute, heavy hookworm infections can produce gastrointestinal symptoms similar
to those of acute peptic ulcers, which may include fatigue, nausea, vomiting, and burning and cramping abdominal pain. Peripheral blood eosinophilia occurs, and Charcot-Leyden crystals may be present in the feces. The acute disease occurs more frequently with A. duodenale than with N. americanus. As the infection progresses, anemia from chronic blood loss may be accompanied by a loss of appetite and symptoms suggestive of congestive heart failure. Geophagy and pica may develop, with constipation resulting from the dietary change. The worms ingest blood that passes so rapidly through their bodies that they probably utilize simple diffusible substances rather than the ingested erythrocytes.23 However, they also spill a significant amount of blood by lacerating the mucosa during feeding, and the bleeding continues for as long as 30 minutes.24 Blood loss from mucosal damage increases disproportionately in heavy infections because mating competition, especially early in the infection, causes the worms to attach and reattach more frequently.16 Adult worms produce potent anticoagulants that facilitate blood feeding and may also enhance blood loss through mucosal bleeding,25 as well as an inhibitor that blocks adhesion of activated neutrophils to vascular endothelial cells, thereby probably assisting the hookworm in evading the host’s inflammatory response.26 Classic hookworm disease is an iron-deficiency, microcytic, hypochromic anemia resulting directly from blood loss. Intestinal injury and changes in intestinal motility might contribute to malabsorption of nutrients in the host, but patients in general are no more malnourished than uninfected subjects.10,27,28 Good nutrition consisting of iron, other minerals, and animal protein mitigates the disease associated with light to moderate hookworm infections, even though it does not affect the existing hookworm population or protect an individual from infection. In heavy infections, disease cannot be ameliorated by diet alone. 
Although the disease is usually associated with heavy infections, it has long been a mystery why it occurs in some persons with only light infections while other persons with extremely heavy infections have no signs or symptoms. The answer appears to lie in the availability of dietary iron stores rather than diet per se,27 because in hookworm-endemic areas dietary intake of iron appears to be generally adequate.10 It is likely that those more susceptible to disease cannot absorb sufficient iron due to intestinal morphological or functional abnormalities unrelated to their hookworm infection, such as tropical enteropathy or protein malnutrition (kwashiorkor).28 Remote organs such as the corneal epithelium,29 central nervous system, and heart30 may also be adversely affected indirectly by hookworm infection. On the other hand, the chronic anemia of hookworm disease may also result in physiological compensations within the host, such as increased pulmonary vital capacity, increased tolerance of tissue cells to anoxia, and lowered systolic pressure. Also, the risk of myocardial infarction may be reduced due to dilatation of the heart and increased collateral circulation of the coronary arteries.16 Changes may occur in bone marrow because of blood loss; retroperitoneal lymph nodes may become enlarged secondary to antigenic stimulation; and the anemia and hypoxia of hookworm disease are sometimes associated with fatty deterioration of the heart, liver, and kidneys.27,31 Infections that produce more than 5000 eggs per gram (EPG) of feces are considered heavy; 2000–5000 EPG, moderately heavy; 500–2000 EPG, moderately light; and less than 500 EPG, light.10 Light infections do not usually result in clinical disease, but moderate and heavy infections are often associated with significant anemia. 
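The EPG cutoffs above lend themselves to a simple classifier. A minimal sketch follows; the function name is illustrative, the thresholds are from the text, and the assignment of the shared boundary value 2000 EPG to the heavier band is an assumption the source leaves open:

```python
def classify_hookworm_intensity(epg: int) -> str:
    """Classify hookworm infection intensity from eggs per gram (EPG) of feces.

    Bands follow the text: more than 5000 EPG is heavy; 2000-5000 EPG is
    moderately heavy; 500-2000 EPG is moderately light; less than 500 EPG
    is light. The text's bands share the value 2000; here it is assigned
    to the heavier category (an assumption, not stated in the source).
    """
    if epg > 5000:
        return "heavy"
    if epg >= 2000:
        return "moderately heavy"
    if epg >= 500:
        return "moderately light"
    return "light"
```

For example, a count of 6000 EPG classifies as heavy, 3000 EPG as moderately heavy, and 200 EPG as light, matching the clinical note that only the moderate and heavy bands are usually associated with significant anemia.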
Diagnosis is complicated in early infections because anemia may actually begin before hookworm eggs are detectable in feces, when larval and immature hookworms first reach the mucosa and begin to cause blood loss. Hookworm disease should be suspected in a person with a subnormal hemoglobin level, Charcot-Leyden crystals in feces, and a history of exposure. Specific immunoglobulin E has been reported to be highly specific (96%) and sensitive (100%) in the serodiagnosis of hookworm infections.32 Although heavy infections may be detected by direct fecal smears, in light infections concentration techniques are usually needed to demonstrate the eggs. Several excellent concentration methods are available, including zinc flotation and several modifications of formalin-ether and formalin-ethyl
acetate techniques. Hookworm eggs may develop and hatch in fecal specimens stored for more than 24 hours at room temperature or above. It is then necessary to distinguish the rhabditiform larvae from those of free-living nematodes and S. stercoralis.
Epidemiology The most favorable conditions for the development of hookworm larvae and completion of the life cycle include loose, moist, shady, sandy humus, promiscuous defecation or the use of improperly treated human feces (night soil) as fertilizer, and the opportunity for humans to come into contact with the soil. An important epidemiological factor appears to be the presence of dung beetles that thrive in such soil and bury human feces efficiently, thereby maintaining defecation sites acceptable for repeated use and enhancing the infection potential of particular spots in which the larvae hatch, develop and migrate upward through the sandy soil.33 Rainfall is required to provide adequate moisture for the larvae to migrate, aggregate, and reach human skin on grass or other moist surfaces. Temperature is an important factor in determining which species of hookworm is found, because N. americanus can tolerate higher temperatures than A. duodenale, while the latter can withstand low temperatures that retard or prevent development of the former.16 The infective larvae may remain viable in the soil for months during periods of drought or low temperatures. 
Eggs and infective third-stage hookworm larvae have also been found on the external body and in the gut of several fly species, some of which forage on human feces and moist skin,34 and the larvae remain infective when regurgitated by common houseflies for up to eight hours postingestion.35 Observations in the American South that whites are much more susceptible to infection than nonwhites of similar socioeconomic status suggest a genetic predisposition to infection, and gender-associated effects have been attributed to differences in habits and exposure to infection.8,10 It has been observed that approximately 10% of available hosts harbor 70% or more of the parasite population, further evidence suggesting that heavily infected persons may be genetically predisposed to such levels of infection.36 For epidemiological purposes, the extent of hookworm disease in a community depends on both the prevalence and intensity of infection as measured by egg output. There can also be high prevalence with low intensity of infection, or heavy infections in regions of low prevalence.10 However, people who contribute large numbers of eggs to the environment are not necessarily the greatest source of infection for others, because infective larvae show a high degree of aggregation in the soil that depends on density-independent factors influencing larval development and survival, such as moisture, shade, and the vertical distribution of ova in the soil.37
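The observation that roughly 10% of hosts harbor 70% or more of the worms is the classic signature of an aggregated (negative binomial) distribution of parasites among hosts. The sketch below illustrates this with a gamma-Poisson mixture; the mean burden, dispersion parameter k, and host count are invented for demonstration and are not taken from the text:

```python
import math
import random

def poisson(lam: float, rng: random.Random) -> int:
    """Sample a Poisson variate (Knuth's method; adequate for modest rates)."""
    threshold = math.exp(-lam)
    count, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= threshold:
            return count
        count += 1

def top_decile_share(n_hosts: int = 20000, mean_burden: float = 10.0,
                     k: float = 0.25, seed: int = 42) -> float:
    """Share of all worms carried by the most heavily infected 10% of hosts,
    under a gamma-Poisson (negative binomial) model of aggregation."""
    rng = random.Random(seed)
    burdens = []
    for _ in range(n_hosts):
        # Host-to-host heterogeneity in exposure/susceptibility
        exposure = rng.gammavariate(k, mean_burden / k)
        burdens.append(poisson(exposure, rng))
    burdens.sort(reverse=True)
    return sum(burdens[: n_hosts // 10]) / sum(burdens)

print(round(top_decile_share(), 2))
```

With a small dispersion parameter (here k = 0.25, chosen for illustration), the top decile of hosts typically carries well over half of all worms, reproducing the qualitative pattern described above.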
Prevention and Control Theoretically, hookworm disease could be reduced by the sanitary disposal of human feces, the wearing of shoes and protective clothing, the use of ovicides or larvicides, vaccines, and adequate individual or mass chemotherapy. However, the mere availability of properly constructed sanitary latrines does not ensure their use, as local habits, customs, or beliefs regarding cleanliness and personal hygiene may be major obstacles.10 Children, more often than adults, tend to go barefoot, resulting in more contact with soil, and to forgo sanitary facilities even when they are present, preferring the convenience of defecating among bushes in backyards or areas near their homes.38 Even for adults, shoes and protective clothing are often not a reasonable expectation because they are expensive, difficult to clean, and can be extremely uncomfortable in hot weather. Health education encourages people to defecate where conditions are unfavorable for the development or survival of free-living stages, such as on saline soils, open, dry, fallow land, or in flooded fields.10 However, none of these control methods has been very successful in areas where soil pollution is the norm and poverty and unsanitary lifestyles promote the heaviest hookworm transmission. For this reason, more recent efforts have been made to eliminate worm burdens by the mass treatment of affected populations with anthelmintics, in an effort to lower the intensity of infection, which will reduce
morbidity and gradually disrupt transmission.39 To further this objective, more research is also needed in (a) development of species- and stage-specific diagnostic tests, (b) investigation of the consequences of arrested development of hookworms, (c) study of hookworm transmission in various regions in the context of varied cultural factors, (d) quantification of the effects of morbidity on individuals and communities, (e) investigation of the relationships between hookworm disease and human nutrition, (f) elucidation of the human host response to infection, and (g) the search for potential vaccines. At present no effective vaccines are available. Although a cDNA clone encoding a specific antigenic protease has been proposed as a candidate immunogen in human beings, there is no direct evidence that protective immunity to hookworm develops in humans.10 A study in Papua New Guinea following mass chemotherapy showed that infection with N. americanus returned to pretreatment levels after 2 years and that the predisposition to reinfection was independent of host age and sex.40 It has also been noted that the intensity of hookworm infection steadily rises with age or plateaus in adults.3,41,42 However, a recent study in dogs with an Ancylostoma caninum aspartic protease vaccine resulted in significantly lowered worm burdens and fecal egg counts, and prevented anemia in animals following challenge with infective larvae.43 A nonprofit partnership called the Human Hookworm Vaccine Initiative is currently attempting human vaccine trials using antigens derived from living L3 canine
hookworm larvae that can partially protect laboratory animals against hookworm challenge.44,45 In general, only persons at highest risk for disease, as determined by egg output and iron-deficiency anemia, should be treated. Little benefit is gained from treating individuals with light infections in endemic areas, as reinfection commonly occurs in such foci within 4–6 months.46 Persons who return from endemic areas to good sanitary conditions and adequate nutrition may not require treatment.10 Albendazole, mebendazole, and pyrantel pamoate are the drugs of choice for treating hookworm infection.47 Since thiabendazole was first introduced as an anthelmintic in 1961,48 the benzimidazole (BZ) drugs have become the mainstay of hookworm therapy. Although thiabendazole is larvicidal and mebendazole is ovicidal and larvicidal, albendazole kills both preintestinal and intestinal worms; it is not known whether it affects arrested larvae of A. duodenale.10 Bephenium hydroxynaphthoate is effective against A. duodenale, and also against N. americanus when combined with tetrachloroethylene, although the latter is difficult to obtain for human use in the United States. Tetrachloroethylene is effective when used alone in higher doses but should not be used if Ascaris worms are present. Pyrantel pamoate is useful against both species and for combined infections with Ascaris. Praziquantel has also been shown to reduce hookworm infection rates from 93.5% to 64.8% in boys and from 70.9% to 23.7% in female patients treated for schistosomiasis.49
Other Intestinal Nematodes Mark R. Wallace • John W. Sanders • Shannon D. Putnam
Intestinal nematodes are the most common parasites in humans, infecting up to one-fourth of the world’s population. Most infestations occur in the developing world where warm, moist climates, poverty, and poor sanitation favor transmission. Since most helminths do not multiply in the human host (Strongyloides stercoralis and Capillaria philippinensis being notable exceptions), the overall worm burden is usually light and symptoms minimal. Heavy worm burdens occur in a sizable minority of infected persons (often children) and may cause severe illness, impaired school or work performance, stunted growth, and a variety of unusual manifestations. Through autoinfection, S. stercoralis and C. philippinensis have the potential to cause life-threatening hyperinfections. The intestinal nematodes vary greatly in size, life cycle, and disease manifestations. In this chapter we review the most common intestinal nematodes excluding the hookworms, which are discussed elsewhere. With the exception of C. philippinensis, all the intestinal nematodes discussed are primarily pathogens of humans.
Strongyloides stercoralis Though strongyloidiasis is less prevalent than the other common intestinal nematodes and is often only minimally symptomatic, its potential for autoinfection allows for unusually chronic and/or severe infections.
The Parasite The life cycle of S. stercoralis is similar to that of the hookworms (Fig. 18-9). The adult worm, a parthenogenetic female, lives within the mucosal epithelium of the human small intestine and deposits eggs (usually less than 50 per day) in the mucosa. There they hatch
into noninfective rhabditiform larvae, migrate into the small intestinal lumen, and are discharged with human feces into the soil. The larvae then develop into either infective filariform larvae (direct cycle) or adult worms capable of producing additional generations of rhabditiform larvae, which subsequently molt into infective filariform larvae (indirect cycle). Human infection is acquired through skin penetration or (less commonly) ingestion of filariform larvae, which then traverse the venous circulation to the lungs. Once in the lungs, the larvae penetrate capillary walls, enter the alveoli, ascend the trachea to the epiglottis, are swallowed, and eventually reach the upper part of the small intestine, where they develop into adult worms. In the autoinfection cycle, rhabditiform larvae mature to infective filariform larvae within the human gut and reinvade through the intestinal mucosa or perianal skin. This allows the infection to continue and the parasites to multiply without any additional exposure to soil-borne filariform larvae. Autoinfection accounts for the extremely long-lived infections (sometimes over 50 years) and the possible development of hyperinfection and disseminated strongyloidiasis in immunocompromised hosts.
Epidemiology Strongyloidiasis is an infection of worldwide importance. It is endemic in the developing world, much of Europe, and the Appalachian region of the United States. The prevalence rates vary greatly between surveys with estimates of up to 100 million cases worldwide. Immigrants, travelers, and military personnel can acquire S. stercoralis in endemic areas and then harbor the parasite with few (if any) symptoms for decades through autoinfection. Residents of mental institutions are also at particularly high risk for strongyloidiasis due to fecal-oral transmission and geophagia.
Figure 18-9. Life cycle of Strongyloides stercoralis. (Source: Redrawn from Longworth DL, Weller P. Hyperinfection syndrome. In: Remington JS, Swartz S, eds. Current Clinical Topics in Infectious Disease, vol 7. New York: McGraw-Hill; 1986.)
Infection and Disease The initial entry of the filariform larvae through the skin may produce a transient pruritus similar to that of the hookworms. Cough and wheezing, indistinguishable from that seen in hookworm or Ascaris infection, may occur as the larval forms migrate through the respiratory tree. The pulmonary symptoms are usually mild and short-lived, but may be severe in hyperinfection. Established infection in the immunocompetent host may be asymptomatic or manifested by intermittent vague abdominal pain, indigestion, nausea, anorexia, or diarrhea. The autoinfection cycle may perpetuate strongyloidiasis for decades. Patients with ongoing autoinfection may develop larva currens, an urticarial, serpiginous rash. This rapidly moving eruption (advancing up to 5–10 cm/h) is due to autoinfecting filariform larvae migrating under the skin after penetrating the perianal surface. Larva currens may last days and recur over months or years; it is said to be pathognomonic of strongyloidiasis. Children with heavy worm burdens may have malabsorption and growth retardation. Severe strongyloidiasis may resemble inflammatory bowel disease and lead to the (disastrous) initiation of immunosuppressive therapy and subsequent hyperinfection. Hyperinfection and disseminated strongyloidiasis occur when autoinfection is amplified by immunosuppression or chronic illness. Common predisposing factors include corticosteroid therapy, immunosuppressive chemotherapy, renal failure, malignancy, chronic pulmonary disease, alcoholism, tuberculosis, and malnutrition. Hyperinfection strongyloidiasis may present with severe pulmonary or gastrointestinal symptoms due to the massive parasite load and may also involve other organs as S. stercoralis larvae aberrantly migrate to the central nervous system, liver, heart, or other distant sites. Bacteremias may occur as the larvae penetrate the gastrointestinal mucosa and carry gastrointestinal flora into the bloodstream. 
Hyperinfection should
always be considered in immunosuppressed patients with unexplained gastrointestinal or pulmonary processes or recurrent Gram-negative bacteremias. Eosinophilia, prominent in uncomplicated strongyloidiasis, is often absent in these seriously ill patients. Overall mortality of the hyperinfection syndrome is high even with appropriate therapy.
Diagnosis The diagnosis of S. stercoralis infection rests on identifying the larval forms; eggs hatch before exiting and are usually not seen in the stool. Because of the low rate of egg production, examination of a single stool detects only about 30% of infections; three or more fresh stools should be examined for the presence of rhabditiform larvae. Various stool concentration methods may improve the yield, and Strongyloides stool cultures are available in a few laboratories. Duodenal aspirates or sampling by the “string test” are positive in over 90% of infections. When pulmonary symptoms are present, a sputum examination for filariform larvae is indicated. In some cases where strongyloidiasis is suspected but no larval forms can be demonstrated, an enzyme-linked immunosorbent assay (ELISA) serology may be helpful; this is both sensitive and specific. Serologic testing may be used to screen patients from endemic areas prior to the initiation of immunosuppressive therapies.
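The value of examining multiple stools can be illustrated with a simple calculation: if a single specimen detects about 30% of infections, and examinations are treated as independent (a simplifying assumption not made by the text), the cumulative sensitivity of n examinations is 1 − (1 − 0.3)^n:

```python
def cumulative_sensitivity(per_test: float, n: int) -> float:
    """Probability that at least one of n independent examinations is positive,
    given a per-examination detection probability."""
    return 1.0 - (1.0 - per_test) ** n

# Under the independence assumption, three stools raise detection from
# 30% to about 66%, and five stools reach about 83%.
for n in (1, 3, 5):
    print(n, round(cumulative_sensitivity(0.30, n), 3))
```

Real repeated examinations are unlikely to be fully independent (larval output fluctuates), so these figures are an upper bound on the benefit, but they show why a single negative stool is weak evidence against strongyloidiasis.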
Therapy Eradication is the goal of strongyloidiasis therapy. Simply reducing worm burden is inadequate as it leaves the patient exposed to the risk of subsequent hyperinfection. Ivermectin (100 µg/kg per day for 1–2 days) is highly effective (>90%) against chronic intestinal strongyloidiasis and is generally well tolerated. For patients with underlying immunodeficiencies, including human T-lymphotropic virus (HTLV)-I infections, treatment should be repeated at two weeks. Although not
FDA-approved for disseminated strongyloidiasis, ivermectin is the most effective and best-tolerated drug; some experts recommend dosing for 5–7 days and repeating the course in two weeks. Albendazole, dosed at 400 mg PO twice daily for 2–3 days, is another well-tolerated choice for treatment, but it is not as effective as ivermectin. Thiabendazole was the traditional therapy of choice for strongyloidiasis and is given for three days in uncomplicated cases and seven (or more) days in hyperinfection syndromes. It is effective, with 90% cure rates in uncomplicated cases, but virtually all patients treated with thiabendazole develop some toxicity; disorientation, fatigue, and gastrointestinal complaints are the primary side effects.
Treatment There are many effective treatment choices for ascariasis. The primary drugs of choice are the benzimidazoles mebendazole (100 mg twice daily for 3 days, or 500 mg once) and albendazole (400 mg once), but as these are potentially teratogenic, pyrantel pamoate (11 mg/kg, up to a maximum of 1 g, once) should be used during pregnancy. Piperazine (50–75 mg/kg daily, up to a maximum of 3.5 g, for 2 days) should be used if intestinal obstruction or “wayward worms” are suspected, as it paralyzes the worms and reduces the risk of additional visceral injury. Ivermectin (200 µg/kg once) and nitazoxanide (adults: 500 mg twice daily for 3 days; children ages 4–11: 200 mg oral suspension twice daily for 3 days) are effective alternatives.
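Several of these regimens are weight-based with an absolute ceiling, and the arithmetic is worth making explicit. A minimal sketch follows; the function name is illustrative, the figures are those quoted above, and this is a teaching example, not clinical software:

```python
def capped_weight_dose(mg_per_kg: float, weight_kg: float, max_mg: float) -> float:
    """Weight-based dose in mg, limited by an absolute maximum dose."""
    return min(mg_per_kg * weight_kg, max_mg)

# Pyrantel pamoate: 11 mg/kg up to a maximum of 1 g, as a single dose.
print(capped_weight_dose(11, 25, 1000))   # 25-kg child -> 275.0 mg
print(capped_weight_dose(11, 100, 1000))  # 100-kg adult -> capped at 1000.0 mg

# Ivermectin: 200 micrograms/kg once, expressed here in mg (0.2 mg/kg).
print(0.2 * 60)  # 60-kg adult -> 12.0 mg
```

The cap matters in practice: a naive 11 mg/kg calculation for a 100-kg patient would exceed the stated 1 g maximum by 10%.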
Prevention and Control The sanitary disposal of human feces is essential for the control of strongyloidiasis in endemic areas. Wearing appropriate footwear is a valuable adjunct to prevention, but may be impractical in warmer climates. Hyperinfection syndromes are prevented by the identification and eradication of strongyloidiasis infections.
Prevention and Control
Ascaris lumbricoides
Trichuris trichiura
Ascaris lumbricoides is the largest and most common of all the intestinal geohelminths infecting humans. Though usually asymptomatic, severe clinical manifestations occur in a significant minority of patients. The fecundity of the female ascarid and the prolonged egg survival in the soil guarantee that ascariasis will continue to be among humankind’s most prevalent infections for the foreseeable future.
Trichuriasis is an extremely common infection, with approximately 800 million persons infected worldwide. Many are coinfected with Ascaris or hookworm, which share a similar geographic and socioeconomic distribution. Like Ascaris, most infections are asymptomatic, but severe disease can occur with massive worm burdens.
The Parasite The adult worms are 120–400 mm long and live in the small intestine for 1–2 years. The mature female produces approximately 200,000 unembryonated eggs daily. The eggs have a rough, mammillated coat and are discharged into the intestinal lumen and passed with the feces. Once deposited in soil, the eggs embryonate and become infectious, remaining viable for years despite extremes of temperature and moisture. After ingestion via contaminated soil or foods, the eggs hatch into rhabditiform larvae in the small intestine. The larvae penetrate the intestinal mucosa, invade the portal veins, pass through the liver, and continue to the lungs. Once in the lung, they penetrate into the alveoli, are coughed up, swallowed, and return to the small intestine where they develop into the adult worms.
Epidemiology It is estimated that over one billion humans are infected with Ascaris. Ascariasis is most common in warmer climates with inadequate human waste facilities, but cases can occur in temperate climates with good sanitation; there are an estimated four million cases per year in the United States. Although the usual mode of transmission is fecal-soil-oral, egg-contaminated food or inhalation of airborne eggs may also produce infection.
Infection and Disease Most infections with A. lumbricoides are asymptomatic. Clinical disease is most likely in heavily infected individuals, especially children. During the larval migration through the lungs in primary infection, a transient pneumonitis with eosinophilia may be seen, which is indistinguishable from the pulmonary phase of Strongyloides and hookworms. Gastrointestinal symptoms of ascariasis are often mild and vague, but wandering ascarids occasionally cause severe pancreatic or hepatobiliary disease. Children with heavy infections may develop bowel obstruction. The role of sustained heavy Ascaris burdens in childhood malnutrition and developmental delays is difficult to firmly establish but probably is a contributing factor.
Diagnosis The diagnosis of ascariasis is easily made by identifying the large number of eggs in a single stool specimen. Pulmonary ascariasis is occasionally diagnosed by identifying the larvae in sputum. Adult worms may be found in the stools or emerging from the mouth or nose.
Proper human waste disposal is essential to control ascariasis. Targeted mass treatment aimed at groups at risk for heavy helminthic infections (usually children) are often conducted as a part of preventative medicine programs and may reduce overall morbidity and mortality.
The Parasite Adult T. trichiura are approximately 30–50 mm long and live for years in the cecal and colonic mucosa. The posterior section of the adult worm appears thick and tapers to a long threadlike anterior structure, resembling a bull whip (hence the name whipworm). The adult male worm’s tail is coiled while the female worm’s tail is straight. The females are oviparous, producing 2000–10,000 eggs each day, which pass into the environment with the fecal stream. Once in the soil, the eggs mature over the next 2–4 weeks developing into infective first-stage larvae. After ingesting fecally contaminated material, the first-stage larvae hatch in the small intestine and migrate to the colon where they develop into mature worms. There is no tissue phase in the whipworm life cycle.
Epidemiology Trichuris has a cosmopolitan geographic distribution with a preference for warm, moist regions where sanitation facilities are lacking. The use of human waste for fertilizer (“night soil”) facilitates T. trichiura transmission. Though more common in the developing world, trichuriasis is also found in the southeastern United States and Puerto Rico.
Infection and Disease Most T. trichiura infections are asymptomatic, but abdominal pain, anorexia, and diarrhea can be seen. Heavy infections can produce the Trichuris dysentery syndrome when whipworm infiltrates the bowel from the cecum to the rectum. The dysentery syndrome may be so severe as to resemble inflammatory bowel disease and may result in anemia or rectal prolapse. As with other geohelminths, children are most often heavily infected with Trichuris and may suffer delayed development. Trichuris is usually not associated with eosinophilia.
Diagnosis Diagnosis is made by identifying the eggs in the feces. The eggs have a thick, clear shell with distinctive bipolar plugs. More than 10,000 eggs per gram of stool indicate heavy infection. The diagnosis is occasionally made endoscopically through direct visualization of adult worms in the colon.
Treatment A 3-day course of mebendazole (100 mg twice daily for 3 days) is the optimal therapy for the individual patient as it is more effective than
single dose therapy; but single doses of either mebendazole (500 mg once) or albendazole (400 mg once) are often used in mass treatment eradication campaigns in heavily endemic areas. Single-dose therapy results in a 60–75% cure rate. Ivermectin also has activity against Trichuris, but the efficacy of single dose therapy has been disappointing. However, combining a single dose of albendazole and ivermectin was shown to be more effective than either drug alone, making this the best choice for mass treatment campaigns. Nitazoxanide has also shown promise as an effective therapy for trichuriasis.
Enterobius vermicularis
Prevention and Control
The Parasite
Enterobius (human pinworm) is one of the most common parasitic intestinal infections, occurring in both temperate and tropical climates. The worldwide prevalence is difficult to estimate, as the infection is often asymptomatic, but some authorities have speculated that over a billion people are infected. Pinworm infection rarely results in serious illness, but frequently produces considerable morbidity and anxiety among school-age children and their parents.
Unlike the more common intestinal nematodes infecting humans, capillariasis is not primarily a human disease and almost always results in severe infection.
Adult pinworms are small (females, 8–13 mm long with a long pointed tail; males, 2–3 mm long with a blunt tail) and live in the ileum, cecum, colon, and appendix for 1–3 months. The typical infection involves a few to several hundred adult worms. The gravid adult female migrates out of the anus at night to lay thousands of eggs in the perianal region. The eggs are elongated, flattened on one side, with a thick clear shell. They are partially embryonated when laid and become infective within 4–6 hours at body temperature. Infection occurs when the eggs are ingested and hatch in the small intestine to produce larvae, which pass into the colon where they molt twice as they mature into adult worms.
The Parasite
Epidemiology
C. philippinensis is believed to exist primarily in a fish-bird life cycle. Birds, the proposed reservoir host, harbor the adult worms and in turn defecate eggs, which are fed upon by freshwater fish. Larval forms of C. philippinensis develop within the fish, which are then consumed by birds to complete the cycle. Humans inadvertently become infected by ingesting raw fish or crustaceans infected with the larval forms; the eggs are not infectious to humans. Following raw fish ingestion, the adult worms develop and reside in the proximal small bowel. Like strongyloidiasis, eggs can hatch into infective larvae within the human gut and produce autoinfection with extremely high parasite burdens and serious illness.
Enterobiasis occurs worldwide, affecting all socioeconomic classes. It is the most common nematode infection in the United States, usually involving school-aged children. The condition may spread rapidly within families, day care facilities, institutions, or other crowded situations. Ingesting infective eggs via contaminated fingers, fomites, or direct oral-anal sexual contact leads to infection.
As with most intestinal nematodes, the primary mode of prevention is to provide the proper disposal of human feces and to avoid ingestion of soil-contaminated material through careful hand washing and food preparation.
Capillaria Philippinensis
Infection and Disease
Since first discovered in the 1960s, most cases of C. philippinensis infections have been reported from the Philippines and Thailand. More recently, cases have been reported from Japan, Iran, Taiwan, Egypt, Indonesia, Korea, and India.
Most infections are asymptomatic. Pruritus or dysesthesia of the perianal and perineal areas are the primary symptoms of infection. Vulvovaginitis and urinary tract infection due to migration of adult worms are sometimes reported in prepubescent girls. Rarely, adult worms may traverse the fallopian tubes or move across breaks in the gut mucosa to gain access to the peritoneum and form granulomas. The pinworm larval forms have been implicated in case reports as a rare cause of eosinophilic colitis resembling the trichuriasis dysentery syndrome. Pinworm infection does not cause eosinophilia or anemia.
Infection and Disease
Diagnosis
Epidemiology
Infection with C. philippinensis usually (if not always) leads to a serious illness characterized by abdominal pain, nausea, vomiting, borborygmi, and voluminous diarrhea. Severe chronic infection can cause malabsorption, electrolyte abnormalities, wasting, and eventual death. The untreated mortality has been estimated at 10–30 %.
Diagnosis Diagnosis is based on the identification of thick-shelled, striated, bipolar eggs in the feces. The eggs (35–45 µm long by 20 µm wide) somewhat resemble those of the closely related Trichuris. In chronic cases, larvae and adult worms may also be seen in stool specimens. Examination of small bowel aspirates or biopsies may occasionally be helpful in making the diagnosis when stool examinations are negative.
Treatment A 10-day course of albendazole is the preferred treatment, as it kills all forms of the parasite. A 20-day mebendazole regimen is the best alternative. The previously used 30-day course of thiabendazole is too toxic for routine therapy. Shorter courses of treatment lead to an unacceptably high relapse rate and should be avoided.
Enterobiasis diagnosis is best made by applying adhesive tape to the perianal region and microscopically examining the tape for E. vermicularis eggs or adults. For the highest diagnostic sensitivity, material should be collected in the early morning prior to bathing or defecation. Examination may need to be repeated six times before infection can be conclusively excluded, but three specimens are adequate in most cases. Standard “ova and parasite” stool examination is positive in only 5–15% of confirmed cases.
Treatment Single doses of albendazole, mebendazole, or pyrantel pamoate are all highly effective and widely used; a second dose 1–2 weeks after initial therapy is often given. Reinfection, whether through self infection or infection from close contacts, is a major problem in E. vermicularis therapy. Attention to washing hands after defecation, keeping fingernails cut short and avoiding perianal scratching are the keys to avoiding reinfection. All family members may need to be simultaneously treated to avoid a circle of infection. Although not generally used for this purpose, ivermectin also has activity against E. vermicularis and should be effective.
Prevention and Control Human capillariasis is entirely prevented by avoiding the consumption of raw fish. When cases occur, prompt treatment is essential to prevent mortality and to limit possible contamination of local waters with feces, which could create local outbreaks. As with all other intestinal nematodes, proper disposal of human waste is essential to prevent the disease.
Prevention and Control Enterobiasis infection can be prevented through proper personal hygiene practices including washing hands after defecation and before eating or preparing foods, discouraging bare perianal scratching, changing undergarments and bedding regularly, and providing sanitary human waste disposal.
Communicable Diseases
Schistosomiasis Ettie M. Lipner • Amy D. Klion
Schistosomiasis, or bilharziasis, is a chronic debilitating disease with significant morbidity and mortality. It affects more than 200 million people in 74 countries worldwide and is second only to malaria in socioeconomic and public health importance in tropical and subtropical areas.1 Human disease is caused by five species of blood flukes of the genus Schistosoma: S. mansoni, S. haematobium, S. japonicum, S. mekongi, and S. intercalatum.
Biology and Life Cycles The schistosome requires an intermediate and a definitive host to complete its life cycle. Asexual reproduction takes place in the molluscan intermediate host and sexual reproduction in the definitive vertebrate host. Briefly, free-swimming miracidia hatch from eggs deposited in freshwater during defecation or urination by an infected definitive host. These miracidia penetrate the appropriate snail host and develop into primary sporocysts, each of which produces multiple secondary sporocysts. Each of the secondary sporocysts produces a great number of cercariae, resulting in the production of hundreds to thousands of cercariae from an individual miracidium. The fork-tailed cercariae migrate out of the snail and propel themselves toward the surface of the water. Of note, both miracidia and cercariae have a limited life span in the absence of an appropriate host (6–24 hours under experimental conditions).2 Sporocysts, on the other hand, remain dormant during adverse conditions, and are able to resume cercarial production with the return of a favorable environment. When humans contact schistosome-infested water, cercariae penetrate the skin, lose their tails, and are transformed into schistosomula. After several days, the schistosomula enter a venule or lymphatic vessel and migrate to the right side of the heart, then to the lungs, and finally to the liver sinusoids, where they begin to mature. On reaching maturity, adult male and female worms pair and migrate to their final habitats. There, eggs are deposited in the venules of the intestine or urinary bladder, break through the submucosa and mucosa into the lumen, and are evacuated through the feces or urine completing the life cycle. The mature female schistosome measures from 7.2 to 26 mm in length and 0.25 to 0.5 mm in width, whereas the mature male measures from 6.5 to 20 mm in length and 0.5 to 1 mm in width. 
They remain in copula for their entire lifespan, an average of 5–8 years, but sometimes for as long as 30 years.3 The preferred location of adult worms in the host differs among the schistosome species. S. japonicum and S. mekongi adult parasites are generally found in the superior mesenteric vein; S. mansoni and S. intercalatum in the inferior mesenteric vein; and S. haematobium, in the vesical and pelvic venous plexuses. Daily egg production also varies with the species: from approximately 1500 to 3000 eggs per day per worm pair in S. japonicum infection to 250 eggs per day in S. mansoni and S. intercalatum infection and 50–100 eggs per day in S. haematobium infection. These biological differences between schistosome species are important in determining both the clinical manifestations and transmission rates of infection.
Distribution Schistosomiasis is endemic in many tropical and subtropical countries and is a frequent cause of travel clinic visits, particularly in travelers to west Africa.1,4 The distribution of schistosomiasis is dependent on the existence of the appropriate snail host and necessary environmental conditions. S. mansoni (intermediate host: Biomphalaria spp.) has the most widespread distribution, ranging from the Arabian peninsula to
South America and the Caribbean. S. japonicum (intermediate host: Oncomelania spp.) is confined to the Far East, distributed in parts of China, Indonesia (S. japonicum-like), the Philippines, and until recently, Japan. A related species, S. mekongi (intermediate host: Neotricula spp.), is found in Laos, Cambodia and Thailand. S. haematobium (intermediate host: Bulinus spp.) is endemic in the Middle East, Africa, Turkey, and India. Transmitted by the same intermediate host as S. haematobium, S. intercalatum is found only in regions of Central and West Africa. Important reservoir hosts for S. japonicum include mice, dogs, goats, rabbits, cattle, sheep, rats, pigs, horses, and buffalo.5 Although natural infection of non-human primates with S. mansoni and S. haematobium has been described, animals other than humans do not appear to be major reservoirs of infection with these species.5
Pathological and Clinical Manifestations Most of the pathological changes and clinical manifestations of schistosomiasis result from the host’s immunological response to the eggs. The severity of the disease depends on the species, strain, location of parasites, intensity and duration of infection, frequency of reinfection, and the host’s reactivity. Mild infections without symptoms often occur. The course of infection may be divided into four progressive stages: invasion, maturation, established infection, and chronic infection with its attendant complications. In the invasion stage, exposure of the sensitized host to cercarial or schistosomular antigens may lead to transient allergic manifestations. Although most infected individuals have no symptoms during cercarial penetration, a localized papular dermatitis (“swimmer’s itch”) may occur with repeated exposures.6 A similar, but more intense, reaction is provoked when schistosome species that normally do not infect humans penetrate the skin and die in the dermis, releasing large quantities of parasite antigen.7 Petechial hemorrhages, foci of eosinophilia, and leukocytic infiltration may be produced in the lung or in the liver when schistosomula migrate through the lungs and reach the liver. During this period, transient symptoms of fever, malaise, cough, and a generalized allergic reaction may appear. When present, symptoms generally resolve in 5–15 days without treatment. Active schistosomiasis starts with worm maturation and the beginning of egg production. Severe cases of acute schistosomiasis, or Katayama fever, are not uncommon and occur 35–40 days after S. japonicum or heavy S. mansoni infection, coincident with the first two weeks of egg production. The clinical manifestations are characterized by a serum sickness-like syndrome of fever, chills, cough, arthralgias and myalgias, diarrhea, eosinophilia, hepatosplenomegaly, and generalized lymphadenopathy.
Recovery usually occurs within several weeks, but fatalities do occur. The syndrome most likely reflects the strong host immune response to egg antigens and the formation and deposition of circulating immune complexes. In the established stage, intense egg deposition and excretion takes place. The intestinal schistosomes (S. japonicum, S. mekongi, S. mansoni, and S. intercalatum) release eggs into the mesenteric veins. Some of these become lodged in the intestinal submucosa, where they secrete proteolytic enzymes that erode the tissue, and break through the intestinal wall. In heavy infection, this may cause diarrhea and blood in the stool. Other eggs may be trapped at the original site or swept back into the portal blood flow and distributed to the liver, spleen, or other ectopic foci, where they provoke an inflammatory tissue response and granuloma formation. This may cause thrombosis of vessels, formation of polyps in the intestinal wall, or hepatosplenic schistosomiasis (see later on). In S. mansoni infection, the rectum and colon
are affected more frequently than other parts of the gastrointestinal tract. The severity of early disease is closely correlated with the number of eggs and their anatomic location. Consequently, S. japonicum, which has the highest capacity for egg production and the widest egg distribution, is a more common cause of severe, disseminated disease. Adult S. haematobium in the veins surrounding the urinary bladder deposit eggs into the vesical plexus. These commonly break through the bladder wall and cause dysuria, urinary frequency, proteinuria, and hematuria. Inflammatory polypoid masses in the bladder or ureteral walls are common early in infection and are a significant cause of obstructive uropathy. Eggs may also be carried by the venous system to the genital organs, gastrointestinal tract, lungs, and liver. The chronic stage with its attendant complications is generally observed only in heavy infection, and is consequently very uncommon in travelers. The acute symptoms (if present) resolve, and the level of egg excretion becomes stable. Although most individuals with chronic infection are asymptomatic, egg-induced granuloma formation, fibrous proliferation, and vascular obliteration lead to chronic pathology in others (see later on). Once initiated, schistosome-induced fibrosis may progress despite resolution of the initial infection.7 Chronic intestinal schistosomiasis is characterized by fibrous patches, inflammatory polyps, thickening of the intestinal wall, and adhesions of the thickened mesentery and omentum to the intestine. Complications include secondary bacterial infection and intestinal obstruction. Recurrent Salmonella bacteremia is particularly common.8 In hepatosplenic schistosomiasis, granulomas develop around eggs in the portal venules. Hepatosplenomegaly may be pronounced. Over time, the liver gradually shrinks in size as a result of increasing fibrosis in a periportal distribution, called Symmers’ pipestem fibrosis.
This may result in blockage of presinusoidal blood flow, leading to portal hypertension, ascites, and esophageal varices. An association between hepatosplenic schistosomiasis and nephrotic syndrome secondary to immune complex glomerulonephritis has been well-documented.9 Pulmonary schistosomiasis has been reported in all five species of schistosome infection. Eggs may be carried to the lungs by venous shunting through systemic collateral vessels formed as a result of portal hypertension or because of aberrant migration of worms into the vena caval or vertebral venous systems. The resultant granulomatous arteritis of the pulmonary capillary bed may lead to obliterative arteriolitis, dilatation of the pulmonary arteries, and pulmonary hypertension. Rarely, this leads to cor pulmonale with right-sided heart failure. Fibrosis and calcification of the eggs in the urinary bladder may impair bladder function. Fibrosis of the neck of the bladder and opening of the ureter result in obstruction of urine flow and may lead to the development of hydroureter, hydronephrosis, renal stones and, rarely, renal failure. Chronic ulceration and irritation of the bladder epithelium may in time lead to malignant transformation and the development of squamous cell carcinoma of the bladder. Female genital schistosomiasis, as defined by the presence of schistosome eggs or worms in the upper or lower genital tract, is a recently recognized syndrome that is associated with the presence of visible lesions or “sandy patches” on the cervix on colposcopic examination.10 The potential role of these inflammatory lesions in enhancing the transmission of sexually transmitted infections, the development of malignancies of the genital tract and infertility rates remains controversial. In cerebrospinal schistosomiasis, ectopic eggs may cause granuloma formation in the central nervous system, resulting in focal damage.11 Brain involvement is most common in S. 
japonicum infection, and may present acutely as meningoencephalitis.11 In chronic infection, seizures are the predominant manifestation. S. mansoni and S. haematobium more commonly affect the spinal cord, causing transverse myelitis.12 The immune response to helminth infections, including schistosomiasis, is characterized by a Th2 phenotype with eosinophilia and elevated serum IgE levels. In contrast, viruses and protozoa generally
induce a Th1 type immune response. The immunological and clinical effects of coinfection with schistosomiasis and such pathogens reflect the balance between these opposing responses.13–19 In some infections, such as viral hepatitis C, coinfection with schistosomiasis leads to more severe liver disease, with higher viral titers and increased mortality.15 In other infections, such as malaria, coinfection with schistosomiasis appears to afford a modest protective effect against malaria, with decreased parasitemia and milder clinical disease in coinfected children.16,17 Finally, although the course of HIV infection does not appear to be altered by schistosomiasis,18 available data suggest that advanced HIV infection may decrease resistance to reinfection in schistosomiasis.19
Diagnosis Definitive diagnosis is made by identifying characteristic eggs in the stool or urine sample, or by tissue biopsy.20 Eggs of S. japonicum and S. mekongi are globular in shape without spines; S. mansoni, oval with a lateral spine; and S. haematobium, oval with a terminal spine. Concentration techniques should be employed for all urine and stool specimens, and multiple samples should be examined carefully before a negative report is given. The quantitative Kato-Katz (cellophane) thick fecal smear is a rapid, inexpensive method of detection of eggs in the stool. It has become a standard diagnostic tool in epidemiology for international comparison of data,1 and has largely replaced filtration and hatching techniques. Newer techniques, such as Visser filtration,20 allow examination of larger amounts of stool and may be useful in documenting light infections. If eggs cannot be found in a chronic symptomatic case, rectal biopsy snips should be taken, pressed between two slides and examined by light microscopy for eggs.21 Colposcopic biopsy with histologic examination may also be useful in such instances. In urinary schistosomiasis, concentration and quantification of eggs in urine samples may be accomplished by centrifugation or a variety of filtration techniques. Since S. haematobium eggs are shed into the urine following a circadian rhythm, samples should be obtained between 10 A.M. and 2 P.M. Large volumes (>3 liters) of urine may need to be examined to detect eggs in light infections. In epidemiological studies, hematuria is often used as an indirect indicator of S. 
haematobium infection; however, the diagnostic value of hematuria at the individual level is limited by large variations in the predictive value of the test between different populations.22 The detection of antibodies against schistosomes may be helpful in documenting recent infection in visitors to endemic areas; however, the inability of such tests to distinguish between past and current infection limits their utility in endemic areas.23 More recently, a variety of schistosome antigen detection tests have been developed, of which serum CAA (circulating anodic antigen) and urine CCA (circulating cathodic antigen) are the best characterized.23 Since antigen titers become positive early in infection and are correlated with the intensity of infection, serum CAA and urine CCA may be useful in the diagnosis of acute schistosomiasis and in assessing cure after chemotherapy. Although abdominal ultrasonography is sometimes helpful diagnostically, findings may be nonspecific early in infection. Consequently, it is most useful in assessing morbidity and monitoring the response to treatment in patients with chronic disease.7
Treatment Praziquantel, a heterocyclic pyrazinoisoquinoline, is the drug of choice for all species of schistosomes, with cure rates from 60% to 98% in most series.24 In patients with hepatosplenic involvement, periportal fibrosis may actually resolve with treatment.25 The drug is well tolerated, with only mild transient side effects, including abdominal discomfort, nausea, diarrhea, headache, dizziness, drowsiness, and pruritus. Three doses of 20 mg/kg given at 4-hour intervals are recommended for treatment of S. japonicum infections.24 In most cases, a single dose of 40 mg/kg is sufficient for treatment of infection with other schistosome species.24 Of note, HIV status does not appear to play a role in the response of schistosomiasis to praziquantel therapy.26
Resistance to praziquantel has been induced in laboratory strains of S. mansoni with repeated exposure to the drug.27 Reports of decreased cure rates with praziquantel in epidemic foci in Senegal28 and the Nile Delta Region of Egypt,29 coupled with decreased drug susceptibility of parasite isolates from individuals in these two regions in a murine infection model,30 initially raised concern that the same phenomenon would occur in humans. However, praziquantel resistance has not been detected in other endemic regions,31 and there is no evidence that the proportion of resistant parasite strains has increased in these regions over time despite continued use of praziquantel.32 Concern about reliance on a single drug has nevertheless led to renewed interest in alternatives to praziquantel therapy, including artemisinin derivatives. Metrifonate, an organophosphorus ester, and oxamniquine, a tetrahydroquinoline, used in the past as alternative therapies for the treatment of S. haematobium and S. mansoni, respectively, are no longer commercially available and will not be discussed. Artemisinin derivatives, including artesunate and artemether, are best known for their antimalarial properties; however, laboratory experiments and clinical trials have confirmed that these compounds also exhibit activity against all of the schistosome species that infect humans.33–35 Artemisinin derivatives are well tolerated and may be administered orally or by intramuscular injection. Optimal regimens for artemisinin treatment of schistosomiasis have not yet been determined. Unlike praziquantel, which is active only against adult worms, artemisinin compounds have activity against both the immature and adult stages of the schistosome life cycle and may be useful for chemoprophylaxis.36
Control and Prevention Control and prevention of schistosomiasis are among the most complex problems in public health. Success in control depends on having a well-organized program based on a profound understanding of the epidemiology of the disease, the biology, ecology, and distribution of the parasite intermediate snail host, and the geographic characteristics of the environment. It is also important to have sound knowledge of local socioeconomic conditions, support from health authorities, and cooperation of the communities. The elimination of schistosomiasis through interruption of transmission has been attempted for the last five decades. It has been successful in some countries, such as Japan and large parts of China,37 but has proved to be beyond the resources of many endemic areas. Furthermore, ecological changes, both natural (e.g., drought) and manmade (e.g., water resource development projects, relocation of populations for political reasons), have led to schistosomiasis outbreaks in some regions where disease transmission was previously controlled.38 As a result, the World Health Organization has recommended the institution of integrated control programs targeted at reducing the morbidity and prevalence of the disease.39 The availability of geographic information systems (GIS) and sophisticated epidemiologic models is likely to enhance the effectiveness of such programs.40
Snail Control Molluscicides provide a rapid and effective means of reducing the snail population and decreasing disease transmission;41 however, their application must take into account the focal and seasonal patterns of disease transmission. A suitable molluscicide must be safe and nontoxic to mammals and aquatic organisms, stable in storage, and simple to apply. Niclosamide, a synthetic amide that has been used since the 1960s, fulfills most of these criteria and remains the molluscicide of choice. The major limitations to its widespread use are cost (as much as $100 per kg in some areas of the world) and the high incidence of drug-associated fish mortality. Natural molluscicides of plant origin provide the theoretical advantage of decreased cost, local production, and low toxicity, but to date have not been as effective as niclosamide in field trials. Long-lasting effects in the reduction of snail populations can be achieved by environmental modifications, such as the installation of
overhead sprinklers and trickle-type irrigation systems, modification of canal design, alteration of water level, or lining of canals with cement. Simple methods, including weed control and drainage of unused standing water, can also reduce snail populations. Biological snail control methods are still in the experimental stages, and none has reached large-scale field trials. Preliminary studies using fish, insect, and molluscan competitors have met with only limited success.
Chemotherapy Chemotherapy not only decreases the morbidity and prevalence of disease42 but also reduces transmission. Three basic strategies have been advocated: mass treatment, selective population-based therapy of infected individuals, and therapy targeted to subpopulations of infected individuals (e.g., those with high-intensity infection). The most appropriate treatment strategy depends on the endemicity of infection and the available resources. For example, in a highly endemic area, the cost of screening individuals for infection may exceed the cost of providing therapy for all persons living in the endemic area. Regardless of the strategy, reinfection generally occurs, especially in children where up to 40% may be reinfected one year after treatment. Even a small residual egg output can sustain disease transmission if the snail population is not controlled. Thus, a continuing schedule of screening and retreatment is required. The long-term side effects (if any) of repeated drug treatment and the potential effects on drug resistance also need to be considered.
Education Health education is an integral part of any successful schistosomiasis control program and has been shown in several studies to have an effect on human behavior and ultimately on disease transmission and prevalence.43 It is much more likely that people will minimize contact with infested water, avoid polluting water sources and cooperate with community control programs if they understand the basic mechanism of disease transmission. Furthermore, simple and inexpensive water disinfection procedures, such as boiling, filtering, or storing for 24 hours, after which contaminating cercariae become noninfective, can be instituted. Finally, people who must have contact with contaminated water can be taught personal protection measures, including the use of repellents, rubber boots, and other barrier methods (e.g., wrapping the feet with cloth or puttees smeared with powdered Thea oleosa fruits), which may provide partial protection against infection.
Sanitation and Water Supply Although expensive, the provision of safe water and adequate sanitation is crucial to the long-term control of schistosomiasis. In St. Lucia, the installation of individual household water systems was associated with a 75% decrease in the incidence of new S. mansoni infections in children.44 In theory, installation of latrines may protect snail-bearing waters from contamination with infectious human wastes; however, this has been less effective than provision of a safe water supply in decreasing transmission. The reason for this is likely multifactorial, and includes accessibility and social issues limiting the use of latrines in many communities. Finally, since water resource development programs may spread schistosomiasis to previously uninfected areas, such programs should be planned by multidisciplinary teams, including epidemiologists, ecologists, biologists, engineers, and public health officials. Successful short-term control has been achieved with an integrated approach in some endemic areas.44–46 However, once prevalence has been reduced to the targeted level, a maintenance program is necessary to sustain it. This was highlighted by a recent study of community-based control of schistosomiasis in the Philippines, in which a marked increase in the incidence of hepatosplenomegaly was seen with suspension of antischistosomal chemotherapy for as little as two years.46 The cost of such long-term, multi-faceted control programs is not insignificant and may be as high as US$3 per protected subject per year (as compared to the less than US$5 per capita total expenditure for health in sub-Saharan
Africa). Although integration of schistosomiasis control programs with other local health programs has been successful in decreasing costs and increasing efficacy in some countries, additional inexpensive and effective alternatives (such as a vaccine) are clearly needed.
Vaccine Development The immune response to schistosome infection is extremely complex and likely depends on the intensity and timing of exposure as well as genetic factors.47,48 Nevertheless, epidemiological studies in areas endemic for schistosomiasis suggest that acquired resistance to reinfection occurs with age. Furthermore, although vaccination of experimental animals (including nonhuman primates) with live
attenuated schistosomes provides only partial immunity to reinfection (70–90% reduction in worm burden), such levels of immunity could have a significant effect on morbidity by reducing the prevalence of high intensity infections and could potentially reduce transmission.49,50 Since the use of a live vaccine would be unethical in humans, recent attention has focused on recombinant or synthetic peptides as potential vaccine candidates and on the use of novel delivery systems (e.g., BCG) and adjuvants (e.g., IL-12) to enhance their immunogenicity. One such vaccine, Sh28-GST, a recombinant from S. haematobium, has completed Phase I/II clinical trials in Africa with no evidence of local or systemic toxicity and may progress soon to Phase III human studies.51
Toxic Shock Syndrome (Staphylococcal) Arthur L. Reingold
INTRODUCTION
Staphylococcal toxic shock syndrome (TSS) is an acute, multisystem febrile illness caused by Staphylococcus aureus. A similar illness caused by group A streptococcal infections is discussed in Chap. 12. The accepted criteria for confirming a case of TSS include fever, hypotension, a diffuse erythematous macular rash, subsequent desquamation, evidence of multisystem involvement, and lack of evidence of another likely cause of the illness (Table 18-2).
HISTORICAL BACKGROUND
TSS was first described as such in 1978 by Todd et al.1 However, cases of what we now believe to have been TSS have been reported in the medical literature since at least 1927 as “staphylococcal scarlet fever” or “staphylococcal scarlatina.”2,3 In addition, a number of patients reported in the medical literature in the 1970s as having adult Kawasaki disease probably had TSS.4 The association between illness and focal infection with S. aureus was, by definition, apparent in early reports of staphylococcal scarlet fever, but was reinforced by the findings of Todd et al1 and later by the findings of other investigators.5,6 TSS achieved notoriety in 1980 when numerous cases were recognized and an association between illness (in women), menstruation, and tampon use was demonstrated.7,8 While the early case reports of staphylococcal scarlet fever and the report by Todd et al1 clearly showed that TSS occurred in small children, men, and women who were not menstruating, most (but by no means all) of the cases initially recognized and reported in late 1979 and early 1980 were in menstruating women,8–10 leading to the frequent misperception among the general public and many physicians that TSS occurred only in association with tampon use (hence, “the tampon disease”). This misperception undoubtedly led to subsequent biases in the diagnosing (and probably reporting) of TSS cases. However, later studies designed to eliminate such biases have shown that TSS does, in fact, occur disproportionately in menstruating women,11,12 while case-control studies demonstrating an association between the risk of developing TSS during menstruation and tampon use preceded (indeed, led to) the introduction of bias concerning the relationship between tampon use and menstrual TSS.5,6
Follow-up studies demonstrated that the risk of developing tampon-related menstrual TSS varies with the absorbency, chemical composition, and oxygen content of the tampon,13–16 although the relative importance of these and other tampon characteristics in determining that risk remains uncertain. As a result of both epidemiological and in vitro laboratory studies, the formulation of available tampons changed dramatically in the early 1980s, such that absorbencies were substantially lower and chemical composition was less varied across brands and styles. Studies in the late 1980s demonstrated that the incidence of TSS, particularly menstrual TSS, rose and fell in parallel with the absorbency of tampons,17 but that the risk of developing menstrual TSS continued to vary directly with tampon absorbency, despite the changes in tampon formulation.18

METHODOLOGY
Sources of Mortality Data Mortality rates for TSS have not been reported directly, but can be estimated from reported incidence rates and case-fatality ratios.
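The arithmetic behind such an estimate is simple enough to sketch. The incidence and case-fatality figures below are illustrative placeholders, not values from the surveillance data:

```python
# Mortality rate estimated as incidence rate × case-fatality ratio.
# The inputs are hypothetical, chosen only to show the calculation.

def estimated_mortality_per_100k(incidence_per_100k: float,
                                 case_fatality_ratio: float) -> float:
    """Estimated deaths per 100,000 population per year."""
    return incidence_per_100k * case_fatality_ratio

# e.g., an incidence of 1.0 case per 100,000 per year and a 5% case-fatality ratio
print(estimated_mortality_per_100k(1.0, 0.05))  # 0.05 deaths per 100,000 per year
```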
Sources of Morbidity Data Surveillance for TSS began in a few states in late 1979 and in other states and nationally in early 1980. Since that time, TSS has been made a reportable disease in most states. However, the level of intensity of surveillance activities has varied markedly between and within states. Thus, a few states established active surveillance for TSS for brief periods of time, while others did little to stimulate the diagnosis and reporting of cases. As a result, the completeness of diagnosing and reporting TSS cases undoubtedly has been inconsistent between states and over time. However, data from a national hospital discharge survey indicated that reporting of cases in the 1980s, while incomplete and variable by region, was not biased dramatically insofar as the age, race, sex, or menstrual status of the patients is concerned.19 Hospital record review studies, in which both diagnosed and previously undiagnosed cases of TSS were ascertained in a consistent fashion, so as to minimize or eliminate both diagnostic and reporting biases, also have been conducted.11,12,17,20,21 These studies demonstrated that, by and large, the patient characteristics and temporal trends observed in
Communicable Diseases
TABLE 18-2. CASE DEFINITION OF TOXIC SHOCK SYNDROME
Fever: temperature ≥ 38.9°C (102°F)
Rash: diffuse macular erythroderma
Desquamation: 1–3 weeks after onset of illness
Hypotension: systolic blood pressure ≤ 90 mm Hg for adults or below fifth percentile by age for children under 16 years of age; orthostatic drop in diastolic blood pressure ≥ 15 mm Hg from lying to sitting; orthostatic syncope; or orthostatic dizziness
Multisystem involvement (three or more of the following):
    Gastrointestinal: vomiting or diarrhea at onset of illness
    Muscular: severe myalgia or creatine phosphokinase level at least twice the upper limit of normal for laboratory
    Renal: blood urea nitrogen or creatinine at least twice the upper limit of normal for laboratory, or urinary sediment with pyuria (≥ 5 leukocytes per high-power field) in the absence of urinary tract infection
    Hepatic: total bilirubin, serum aspartate transaminase, or serum alanine transaminase at least twice the upper limit of normal for laboratory
    Hematologic: platelets < 100,000
    Central nervous system: disorientation or alterations in consciousness without focal neurological signs when fever and hypotension are absent
Negative results on the following tests, if obtained:
    Blood, throat, or cerebrospinal fluid cultures (cultures may be positive for Staphylococcus aureus)
    Rise in titer to Rocky Mountain spotted fever, leptospirosis, or rubeola
data collected through the largely passive network of TSS surveillance reflected true variation in the incidence of TSS by age, sex, race, and menstrual status. These same studies, taken together, demonstrated that at least some of the apparent geographic variation in the incidence of TSS in the United States in the 1980s was real. More recent information concerning the epidemiologic features of TSS comes almost entirely from the passive national surveillance system, as few epidemiologic studies of TSS have occurred since the 1980s.
Surveys Numerous small surveys have demonstrated that many asymptomatic individuals carry in the nasopharynx and/or vagina strains of S. aureus that produce TSS toxin-1 (TSST-1), the toxin believed to be responsible for most TSS cases.22–26 Similarly, large serosurveys have shown that antibodies to TSST-1 or to a cross-reacting antigen are extremely common.22,25,27,28
Laboratory Diagnosis Isolation and Identification of the Organism While recovery of S. aureus from the vagina or another site of infection is not one of the criteria of the TSS case definition, it is possible in most TSS cases if appropriate specimens are obtained before antimicrobial therapy is initiated.5,6,29,30 S. aureus grows readily on most standard culture media and is readily identifiable by any clinical microbiology laboratory within 2 or 3 days. Testing of S. aureus strains for production of TSST-1, however, is performed in only a few research laboratories. Hence, the results of such testing are not readily available during the acute illness and are not of value in treating patients suspected of having TSS. Furthermore, because both S. aureus in general and TSST-1-producing strains of S. aureus in particular can be recovered from many patients without the clinical features of TSS and from asymptomatic individuals, microbiological results cannot and do not prove that a given patient has TSS.
Serological and Immunologic Diagnostic Methods. A variety of serological and immunologic techniques have been used to test S. aureus strains for production of TSST-1. As noted above,
these tests are not available outside a few research laboratories. It is possible to detect TSST-1 in clinical specimens,31,32 but these assays are not generally available. Antibodies to TSST-1 can be measured using solid-phase radioimmunoassay and other techniques. However, most healthy individuals have detectable anti-TSST-1 antibodies.22,25,27,28 Furthermore, some patients with TSS have demonstrable anti-TSST-1 antibodies at the time of onset, and many patients without such antibodies at the time of onset do not demonstrate an antibody rise in response to their illness.27,33 Thus, testing for anti-TSST-1 antibodies (which is not available except in one or two research laboratories, in any event) is of limited value in confirming the diagnosis of TSS, although it has been argued that the absence of detectable antibodies at the time of onset supports the diagnosis of TSS.

BIOLOGICAL CHARACTERISTICS OF THE ORGANISM
As noted earlier, there is convincing evidence that S. aureus is the cause of TSS. In patients with menstrual TSS, S. aureus can be recovered from the vagina and/or cervix in 95–100% of cases (usually as a heavy growth), but in only 5–15% of healthy control women.5,24,34–39 In patients with nonmenstrual TSS associated with a focal wound, S. aureus is typically the only organism found in the lesion.29,30 Furthermore, experimental studies demonstrate that TSS-associated S. aureus strains can cause a similar illness in rabbits. Similarly, there is strong evidence that the ability to make TSST-1, previously known as pyrogenic exotoxin C,40 staphylococcal enterotoxin F,41 and several other names, is characteristic of, although not universal among, TSS-associated S. aureus strains. Thus, 90–100% of S. aureus isolates recovered from the vagina, cervix, or used tampon in menstrual TSS cases produce TSST-1, compared with only 10–20% of vaginal or nasopharyngeal isolates from healthy controls.26,40–43 On the other hand, only 60–70% of S. aureus strains recovered from normally sterile sites in patients with nonmenstrual TSS produce TSST-1,44–46 suggesting that other staphylococcal toxins, particularly staphylococcal enterotoxin B (SEB), are capable of inducing a clinically indistinguishable syndrome. Two studies of historical strains of S. aureus demonstrated that the proportion of strains capable of making TSST-1 has changed over time and was generally higher in the mid to late 1970s than in earlier time periods.42,47 Interestingly, that proportion appears to have declined somewhat in the early 1980s, when the incidence of TSS was peaking. More recent data concerning the proportion of S. aureus strains that make TSST-1 have not been published. TSS-associated S. 
aureus strains have also been characterized phenotypically with respect to a number of other properties, including phage type, antimicrobial susceptibility, resistance to heavy metals, production or activity of various enzymes, and presence of plasmids and bacteriophages. The picture that emerges with regard to these characteristics, while consistent, is by no means invariable or unique. A higher proportion of TSS-related S. aureus strains are lysed by phage types 29 and/or 52 (58–82%), as compared to only 12–28% of control strains.42,43,48 Similarly, TSS-associated strains generally are resistant to penicillin (and ampicillin), arsenate, and cadmium, while being susceptible to β-lactamase-resistant antimicrobial agents, most other commonly tested antimicrobial agents, bacteriocins, and mercury.6,49,50 Other characteristics that appear to distinguish these strains from other S. aureus strains include decreased production of hemolysin, lipase, and nuclease49,51; tryptophan auxotypy52; decreased lethality in chick embryos49; increased pigment production53; and increased casein proteolysis.53 TSS-associated strains also have been reported to be less likely to carry plasmids and more likely to carry lysogenic bacteriophages than control strains. There is controversy over whether or not the gene coding for TSST-1 can be transferred by lysogeny.54,55 It should be noted that most of the strains examined in the above studies were recovered from the genital tract in menstrual TSS cases. Thus, the results are not necessarily applicable to S. aureus strains associated with nonmenstrual TSS, and there is some evidence to
suggest that such strains, recovered from normally sterile sites in patients with nonmenstrual TSS, are less likely to be lysed by phage types 29 and/or 52 than are strains from menstrual TSS cases.44 At the same time, as noted above, they also are less likely to make TSST-1.
DESCRIPTIVE EPIDEMIOLOGY
Prevalence and Incidence Carriage of S. aureus on the skin and in the nasopharynx and vagina is very common. Numerous cross-sectional studies have demonstrated that 30–40% of individuals carry S. aureus in the nasopharynx and 5–15% of women carry S. aureus in the vagina.22–24,34–39 The corresponding figures for TSST-1 producing S. aureus are 5–15% (nasopharynx) and 1–5% (vagina). Thus, carriage of S. aureus strains believed to be capable of causing TSS is also very common. In contrast, TSS is a rare disease. After it became a notifiable disease in 1983, the number of cases reported annually in the United States initially ranged from 400 to 500. More recently, approximately 100–150 cases have been reported annually to the Centers for Disease Control and Prevention (CDC).56 The most reliable estimates of incidence rates come from hospital-based record review studies. In these studies, both diagnosed and previously undiagnosed cases of TSS were ascertained in an unbiased way by reviewing thousands of medical records of hospitalized patients with one of a long list of discharge diagnoses likely to be indicative of misdiagnosed cases of TSS. In one such study in Colorado, the annual incidence of TSS in women between the ages of 10 and 30 was 15.8 per 100,000 in 1980.12 In a similar study in California, the incidence rate in women between the ages of 15 and 34 was only 2.4 per 100,000 in 1980.11 The incidence rate in men of the same age group in the latter study was consistently less than 0.5 per 100,000 in all of the years studied. Initial estimates of the incidence of diagnosed TSS were derived from statewide surveillance systems established in late 1979 or early 1980. 
The states with the most aggressive case-finding methods reported annual incidence rates at that time of 6.2 per 100,000 menstruating women (Wisconsin),5 8.9 per 100,000 menstruating women (Minnesota),57 and 14.4 per 100,000 females 10–49 years of age (Utah).58 An overall estimate of 0.8 per 100,000 total population of hospitalized, diagnosed TSS in the United States in 1981 and 1982 was derived from a national hospital discharge survey.19 More recent estimates of the incidence of TSS, based on admittedly incomplete passive surveillance, are even lower.56 While TSS has been documented in numerous other countries, no estimates of incidence rates for other countries are available. The discrepancy between the frequency of colonization and/or infection with TSST-1-producing S. aureus and the rarity of TSS is thought to be due to the fact that most individuals have detectable anti-TSST-1 antibodies. By age 30, more than 95% of men and women have such antibodies.28 The origin of these antibodies is unknown.
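The incidence figures quoted above all reduce to the same calculation, sketched here with made-up counts rather than the studies' actual data:

```python
# Annual incidence per 100,000 = (new cases / population at risk) × 100,000.
# The counts below are hypothetical, for illustration only.

def incidence_per_100k(cases: int, population_at_risk: int) -> float:
    return cases / population_at_risk * 100_000

# e.g., 40 diagnosed cases in one year among 500,000 women at risk
print(round(incidence_per_100k(40, 500_000), 1))  # 8.0 cases per 100,000 per year
```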
Epidemic Behavior and Contagiousness Because TSS increased dramatically in incidence in the United States beginning in 1979 in comparison with previous years,11,12,20 it would be correct to say that an epidemic of TSS occurred at that time. TSS does not occur, however, in explosive epidemics in the same way that dengue and meningococcal disease do, although strains of S. aureus that produce TSST-1 are, like other S. aureus strains, transmitted readily by person-to-person spread.

Geographic Distribution United States Cases of TSS have been reported in all 50 states and the District of Columbia, but the incidence of reported cases has varied substantially between states and regions.9,10,56 Variation in the completeness of diagnosis and reporting of cases undoubtedly accounts for some of the observed differences, but there is substantial evidence that at least some of the observed differences are real. For example, a study of hospital discharge data in which differences in the reporting of cases could not have been a factor showed that the overall annual incidence of hospitalized cases varied by region between 0.24 and 1.43 per 100,000 in 1981–1982.19 In this study, however, potentially large differences in the completeness with which TSS cases were diagnosed and different standards for hospitalizing patients suspected of having TSS could not be ruled out. More convincing evidence for true geographic differences in incidence rates comes from the virtually identical hospital record review studies conducted in Colorado and northern California, in which variation in the diagnosing and reporting of cases was largely or completely eliminated.11,12 As noted earlier, the incidence of TSS in 1980 was 15.8 per 100,000 females 10–30 years of age in Colorado, but only 2.4 per 100,000 females 15–34 years of age in northern California. However, a prospective study employing active surveillance for TSS in five states (Missouri, New Jersey, Oklahoma, Tennessee, and Washington) and one large county (Los Angeles) showed that in 1986, the incidence of menstrual TSS was in the range of 1 per 100,000 females 15–44 years of age in all six study areas.59 Studies of S. aureus strains from the United States show no geographic differences in the proportion of strains that make TSST-1.47 Similarly, anti-TSST-1 antibodies are found in similar proportions of healthy individuals in different parts of the United States.

Other Countries Documented cases of TSS have been reported from Canada, most of western Europe, Australia, New Zealand, Japan, Israel, South Africa, and elsewhere. No information concerning incidence rates of TSS outside of the United States is available. However, the proportion of cases in other countries associated with menstruation and tampon use appears to be substantially lower than in the United States, in keeping with the fact that tampon use in general is less frequent in other countries and superabsorbent tampons are less widely used.

Temporal Distribution Substantial controversy has surrounded the interpretation of observed changes over time in the diagnosis and reporting of TSS cases. Data from the passive national surveillance system suggested that the number of cases began to rise in 1978, peaked in 1980, and then declined and leveled off, with virtually all of the observed differences being due to changes in the number of menstrual TSS cases reported9,10 (Fig. 18-10). While this pattern also was observed in some individual states employing vigorous case-finding methods (e.g., Utah and Wisconsin), a different pattern was seen in Minnesota, where no decline in the number of cases was observed in 1981.60 Because of the documented impact of publicity on reporting of TSS cases and the undoubted fluctuations over time in the likelihood that cases would
Figure 18-10. Reported cases of toxic shock syndrome, United States, 1979–1994. (Total, menstrual, and nonmenstrual cases are plotted by year; annotations mark the withdrawal of Rely tampons, the lowering of tampon absorbency, the removal of polyacrylate, the 1982 FDA tampon labeling requirement, and the FDA standardization of absorbency labeling. FDA, Food and Drug Administration; includes definite and probable toxic shock syndrome cases.)
Figure 18-11. Incidence of hospitalized toxic shock syndrome cases in males (dashed line) and females (solid line), aged 15 through 34 years, northern California Kaiser-Permanente Medical Care Program, 1972 through 1987. (Annotations in the figure mark the Todd report describing TSS, the report on the Rely–TSS link, a change in Rely formulation, the removal of Rely tampons, and the removal of polyacrylate rayon.)
be diagnosed and/or reported, the results of studies that eliminate or minimize these influences are important in interpreting temporal trends. While the three published hospital record review studies all suffer from having a relatively small number of cases of TSS to analyze statistically, the results of all three studies are consistent. In the California study, the incidence of TSS in women increased consistently through 1980, fell somewhat in 1981 and 1982, and then increased again in 1983, while the incidence in men remained consistently
low (Fig. 18-11). In the Colorado study, the results were similar except that the decrease in 1981 compared with 1980 was sharper (Fig. 18-12). The similarity of the pattern in Colorado is even more apparent if cases meeting only the authors’ proposed screening definition for TSS and not the more rigorous collaborative case definition are removed.61 Similar trends are seen in the study from Cincinnati, although incidence rates cannot be estimated in this study.20 Thus, there is convincing evidence that hospitalized cases of TSS in females of menstrual age increased in the late 1970s, irrespective of
Figure 18-12. Annual incidence per 100,000 population of toxic shock syndrome in hospitalized patients ≤ 30 years of age meeting either the strict or the screening case definition in two Colorado counties, 1970–1981. (Curves are shown for females ≤ 30 years, females 10–30 years, and males ≤ 30 years; the 1981 data are for Weld County only.)
any changes in the recognition and reporting of the disease. A similar increase was not apparent among men. There is also some evidence that this upward trend in the incidence of TSS through 1980 was reversed in several geographic areas, at least temporarily, in 1981.
Age TSS can occur in individuals of all ages and has been documented in a newborn baby and in patients up to 80 years of age. However, data from both passive and active surveillance systems and from the California record review study indicate that younger women are at greater risk of developing TSS than are older women. Of cases associated with menstruation reported nationally, almost 60% have been in women 15–24 years of age, compared with only 25% in women 25–34 years of age.9,10 Cases in women 35–44 years of age are even less common. Furthermore, the highest age-specific incidence rates consistently have been observed in women 15–19 or 15–24 years of age. Thus, in the California record review study, the annual incidence rate was 2.6 per 100,000 women 15–19 years of age compared with rates of 0.8 to 1.4 per 100,000 among women 20–24, 25–29, and 30–34 years of age.11 Similarly, in Minnesota the annual incidence of menstrual TSS among women 15–24 years of age was 13.7 per 100,000 compared with rates of 2.3 in those <15 years of age and 6.6 in those ≥25 years of age.57 The age distribution of TSS cases unassociated with menstruation is more uniform, especially if cases in postpartum women are excluded.30
Sex All available evidence clearly indicates that TSS is much more common among women of menstrual age than among men of the same age. Of U.S. cases reported through passive surveillance, 95% have been in women and 5% in men.9,10 In the California record review study, the overall incidence of TSS in women 15–34 years of age during the time period 1972–1983 was 15 times that in men of the same age (1.5 vs. 0.1 per 100,000 person years).11 This marked difference in incidence rates between men and women undoubtedly relates primarily to the fact that most cases of TSS are associated with menstruation and tampon use. What appears to be an increased risk of TSS during the postpartum interval and the apparent association between TSS and the use of barrier contraception probably contribute further to this pattern.29,30,62 The incidence of TSS associated with other types of staphylococcal infections (e.g., surgical wound infections, cutaneous, and subcutaneous lesions) appears to be similar in men and women.29,30,59
Race Although it has been apparent since at least 1980 that TSS occurs in individuals of all racial groups,8 the overwhelming majority (93–97%) of reported cases have been in whites, who make up only 80–85% of the U.S. population.9,10 Likely explanations for this discrepancy fall into two categories: biases in the diagnosis and reporting of cases on the one hand and true racial differences in either susceptibility to TSS or exposure to risk factors on the other. It has been postulated that increased difficulty in recognizing the rash on dark-skinned individuals, poorer access of minority groups to medical care, and the relative paucity of individuals of races other than white in areas with active TSS research efforts have all contributed to the observed racial distribution of cases. However, data from the California record review study indicate that in the 15–34 age group TSS does indeed disproportionately affect whites.11 All of the 54 definite cases (most of which were related to menstruation) found in that study were in whites, while only 81% of the population at risk was white (p < 0.05; Fisher’s exact test, two-tailed). It has been noted that the racial distribution of patients with nonmenstrual TSS (87% white) more closely resembles the racial distribution of the U.S. population than does the racial distribution of patients with menstrual TSS (98% white).10,59 Taken together with
studies demonstrating that young white women use tampons far more often than do comparably aged women of other racial groups,63–66 these results suggest that observed race-specific differences in incidence rates are due, at least in part, to different levels of exposure to an important risk factor for developing TSS during menstruation.
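The strength of the California race finding can be illustrated with a back-of-the-envelope exact calculation (a rough stand-in for the two-tailed Fisher’s exact test cited above, which would require the full counts): if 81% of the population at risk was white, the probability that all 54 definite cases would be white by chance alone is 0.81 raised to the 54th power.

```python
# Probability that all 54 cases occur in whites under the null hypothesis that
# cases arise at random from a population that is 81% white. This is a simple
# binomial tail, not the published Fisher's exact analysis.
p_white = 0.81
n_cases = 54
p_all_white = p_white ** n_cases
print(f"{p_all_white:.1e}")  # about 1.1e-05, far below the 0.05 threshold
```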
Occupation There is no evidence to suggest that any given occupational group, including health-care providers, is at increased risk of developing TSS.
Occurrence in Different Settings As noted below, transmission of strains of S. aureus capable of causing TSS has been demonstrated in the hospital setting.67–69 There is also evidence of spread of these strains and occasional clustering of cases in households and in military installations (CDC, unpublished observations).
Socioeconomic Factors It is unclear to what extent the marked racial variation in tampon use, especially among adolescents, reflects socioeconomic rather than racial differences. Other socioeconomic factors have not been noted to play a role in TSS.
Other Factors Menstrual TSS Numerous case-control studies conducted in 1980–1981 examined risk factors for developing TSS during menstruation (Table 18-3). These studies consistently found that tampon use increased the risk of menstrual TSS (the Oregon study, with its small number of cases, while not finding an association between menstrual TSS and tampon use in general, did find an association with a particular brand of tampon).5,6,15,16,57,70 Included among these studies are two performed before any information concerning this association had appeared in the medical literature or lay press.5,6 A study comparing tampon use among women with menstrual TSS in 1983 and 1984 with tampon usage patterns ascertained via a national survey found evidence of a continuing increased risk of menstrual TSS among tampon users.13 Furthermore, a multistate case-control study of menstrual TSS cases with onset in 1986–1987 documented that this association persisted at that time.18 No additional case-control studies have been conducted since that time. Two early case-control studies demonstrated that the risk of menstrual TSS varied with tampon brand and/or style (i.e., absorbency), suggesting that risk was a function of tampon absorbency and/or chemical composition.15,16 The two more recent studies document clearly that risk of menstrual TSS is directly correlated with measured in vitro tampon absorbency,13,18 independent of chemical composition, but that chemical composition is also a factor.13 It is interesting to note that the correlation with tampon absorbency has persisted, despite the major alterations in chemical composition and marked decreases in the absorbencies of available tampons that have occurred since 1980. 
A reanalysis of data from earlier studies has suggested that the oxygen content of tampons is a better predictor of the risk of menstrual TSS than either chemical composition or absorbency.14 In vitro studies examining the effect of various surfactants on TSST-1 production have shown that they can have a dramatic effect on production of this toxin.71 These results suggest another tampon characteristic (i.e., type of surfactants present) that might influence the risk of menstrual TSS in users of various brands and styles. The early case-control studies also examined the role of a number of other factors in determining the risk of developing TSS during menstruation. Four of the studies found that women with TSS were less likely to use oral contraceptives than were controls, although the differences in individual studies were not statistically significant.5,6,15,58
TABLE 18-3. RISK FACTORS FOR MENSTRUAL TSS
(Summary: the table covers the case-control studies of menstrual TSS described in the text, with case onset dates from 9/75 through 8/87, conducted in Wisconsin; the United States (CDC); Utah; Minnesota, Wisconsin, and Iowa; Oregon; and a multistate area comprising Los Angeles County, Missouri, New Jersey, Oklahoma, Tennessee, and Washington. For each study it gives the study dates, the source of controls (clinic, friend, or neighbor), the numbers of cases and controls, and the association of illness with tampon use overall, with Rely brand tampons, with tampon absorbency, and with oral contraceptive use, expressed as the percent exposed among cases and controls, relative risks or odds ratios (including odds ratios from multivariate models), and p values. If not reported in the reference, the crude odds ratio was estimated disregarding matching, with 0.5 added to all cells in tables with a 0 value; estimated values are indicated with an asterisk. NS, not significant; NA, data not available.)
One study found that continuous use of tampons during the menstrual period was associated with an increased risk of TSS,6 while in another study a similar association, present on univariate analysis, did not remain significant in a multivariate analysis.15 Of four studies looking at the relationship between the history of a recent vaginal infection and the risk of TSS, only one found such an association. Factors found not to be related to the risk of developing menstrual TSS in one or more studies included marital status, income, parity, sexual activity, bathing, frequency of exercise, alcohol use, smoking, history of vaginal herpes infections, and frequency of changing tampons.
Postpartum TSS Although numerous cases of TSS occurring during the postpartum interval have been reported, the lack of precise information concerning the incidence of TSS in various settings makes it difficult to be certain that the incidence in postpartum women is elevated. Those cases of postpartum TSS not related to infection of a cesarean section incision or an infection of the breast (i.e., mastitis and breast abscess) have occurred predominantly in association with the use of tampons to control the flow of lochia or the use of barrier contraception (i.e., diaphragms and contraceptive sponges).30
Postoperative TSS TSS has been associated with S. aureus surgical wound infections following a wide array of surgical procedures.68 It has been suggested, however, that patients undergoing nasal surgery are at particularly high risk, presumably due to the frequency of S. aureus carriage in the nasopharynx and the difficulty of eradicating such carriage.22 The common use of “nasal tampons” and other packing material following nasal surgery also may play a role.
Other Nonmenstrual TSS TSS can result from S. aureus infection at any body site. However, many nonmenstrual TSS cases are the result of cutaneous and subcutaneous S. aureus infections. Also, cases of TSS associated with S. aureus infection of the respiratory tract in the setting of influenza have received substantial attention.72 Risk factors for the development of such infections and/or associated TSS have not been studied. A relatively small proportion of nonmenstrual, nonpostpartum TSS cases are associated with vaginal S. aureus infections. One risk factor that has been identified in such cases is the use of contraceptive sponges.62 It remains uncertain whether or not diaphragm use is similarly associated with an increased risk of nonmenstrual TSS.
MECHANISMS AND ROUTES OF TRANSMISSION
Like all S. aureus strains, those capable of causing TSS appear to be transmitted readily by person-to-person spread, both within the hospital and in the community. There is convincing evidence that a nurse transmitted a TSS-associated strain of S. aureus to hospitalized burn patients67 and suggestive evidence that some cases of postoperative TSS are due to nosocomial spread of the causative organism by hospital personnel.68,69 In addition, vertical transmission from mother to newborn, with the development of TSS in both, has been reported.73 Outside the hospital setting, transmission between husband and wife has been suggested by the almost simultaneous appearance of TSS in both, as has transmission between mother-daughter pairs (unpublished reports to the CDC). It is assumed, but not proven, that transmission in all these instances was by direct person-to-person spread. Nevertheless, it should be emphasized that in many TSS cases, particularly those associated with a focus of infection in the vagina, it may well be that disease is due to the introduction and/or multiplication of an endogenous S. aureus strain rather than to an exogenous source of infection.
Other Infection-Related Diseases of Public Health Import
489
PATHOGENESIS AND IMMUNITY
TSS results from an infection with an appropriate strain of S. aureus in a susceptible host. Once a nidus of infection is established, onset of symptoms typically occurs 1–3 days later. The best evidence concerning the incubation period comes from patients with postoperative TSS due to surgical wound infections. In these patients, the date when the infection became established is usually the day of surgery, and thus can be determined unequivocally. The median incubation period in such patients is two days.68 When TSS is caused by S. aureus infection of the vagina during menstruation, onset of symptoms is typically on the third or fourth day of menstruation, although it can be earlier or later. In most cases of TSS, the toxin TSST-1 is the bacterial product most likely to be responsible for many of the observed signs, symptoms, and laboratory abnormalities. There is, however, substantial evidence that one or more other staphylococcal products, particularly enterotoxin B, are capable of causing an indistinguishable illness.44,46 Furthermore, bacterial products other than TSST-1 that are made more commonly by S. aureus strains recovered from patients with TSS than by other strains have been described (see Sec. 4). For these reasons, it is likely that other staphylococcal products play a role in the pathogenesis of TSS. There is also evidence that some of the multisystem derangements frequently observed in patients with TSS are due to the profound hypotension or shock that can occur and only indirectly to any staphylococcal products. For example, renal failure in TSS is probably secondary to hypotension-induced acute tubular necrosis; the hypotension, in turn, is the result of multiple factors, including hypovolemia due to vomiting, diarrhea, and increased insensible losses associated with a high fever and inability to ingest or retain fluids, as well as "third-spacing" of fluids.
Uptake of "endogenous" endotoxin from gram-negative intestinal flora also has been proposed as playing a role in the pathogenesis of TSS.74 The biological properties of TSST-1 have been studied in vitro and in vivo, and attempts have been made to develop an animal model of TSS. In vitro, purified TSST-1 has been shown to stimulate the proliferation of T lymphocytes,75 to inhibit immunoglobulin synthesis,76 and to be a potent stimulator of interleukin-1 production by macrophages and monocytes.77,78 TSST-1 also has been shown to bind to and be internalized by epithelial cells,79 suggesting that it can be absorbed from focal sites of infection. In vivo, TSST-1 has been shown to be pyrogenic,40 to induce lymphopenia,80 to decrease the clearance of endotoxin,81 and to increase susceptibility to endotoxin-induced shock.74 It is now clear that many of the effects of TSST-1 are due to its potent superantigen properties, acting through the release of immune cytokines.82 While initially reported to be an enterotoxin (as evidenced by induction of vomiting in monkeys), preparations of TSST-1 not contaminated with other staphylococcal enterotoxins do not appear to induce vomiting.83 Attempts to reproduce TSS in animals have included using mice, rabbits, goats, baboons, chimpanzees, and rhesus monkeys.55,84–93 In these studies, investigators either have attempted to infect the animals with TSS-associated S. aureus strains at one of a variety of sites (previously implanted subcutaneous chambers, vagina, uterus, and muscle) or have injected purified TSST-1 as a bolus or continuous infusion. The animal models that come closest to reproducing the syndrome observed in humans have been those using rabbits. Live TSS-associated S.
aureus organisms inside a previously implanted subcutaneous chamber and continuous infusion of purified TSST-1 both result in fever, hyperemia of mucous membranes, hypocalcemia, elevated creatine phosphokinase (CPK) and hepatic enzymes, renal failure, and death in a high proportion of rabbits.84,88 Also, the pathological changes observed postmortem in these animals are similar to those reported in patients dying of TSS.94,95 These rabbit models have been used to study the host factors in susceptibility to TSS suggested as important by clinical and epidemiological data or by in vitro results. It has been found that the age, sex, hormonal status, and strain of rabbits used all have a substantial impact on susceptibility to "rabbit TSS."85,86 Thus, older rabbits have been reported to be more susceptible than younger rabbits. Similarly,
490
Communicable Diseases
male rabbits appear to be more susceptible than female rabbits, although castration abolishes this difference and estrogens protect male rabbits. Experiments concerning the contribution of endogenous endotoxin (i.e., endotoxin released by gut flora) to the pathogenesis of TSS have yielded conflicting results, although it appears that blocking the effect of endotoxin by giving polymyxin B does not consistently prevent rabbit TSS.88 Preexisting anti-TSST-1 antibody, however, does appear to protect against TSS in the rabbit,91 and corticosteroids in high doses also decrease mortality.88 Because of the observed association between menstrual TSS and tampon use, many investigators have looked at the effect of tampons and their constituents on the growth of S. aureus and the production of TSST-1 in vitro. At the same time, the effect of environmental conditions such as pH, PO2, PCO2, and cation concentration on the production of TSST-1 has been investigated. In general, studies have shown that tampons and their individual components inhibit the growth of S. aureus in vitro, regardless of the growth medium.96,97 Although some studies have suggested that S. aureus can use various tampon constituents as an energy source, these results have been challenged and their relevance to human disease questioned.98–100 Also controversial is the effect of tampons on the production of TSST-1 in vitro, with some studies showing that certain tampons and tampon constituents increase TSST-1 production and other studies showing no effect or inhibition of toxin production.97,99 It has been suggested that the effect of tampons on TSST-1 production (and possibly on the risk of menstrual TSS) is mediated by changes in the availability of magnesium, which is bound by certain tampon components.101 The various types of surfactants found on tampons also appear to influence the production of TSST-1, at least in vitro. 
Growth conditions appear to be important in determining the amount of TSST-1 produced. Thus, an aerobic environment, neutral pH, and low levels of glucose, magnesium, and tryptophan all increase TSST-1 production, although some controversy has arisen about the effect of magnesium concentration on TSST-1 levels.101–105 It has been shown that in patients with TSS related to focal sites of S. aureus infection, growth conditions within the infected focus are well suited to TSST-1 production.105 Also, while the vagina generally has been considered to be anaerobic, studies have shown that a substantial amount of oxygen is introduced when a tampon is inserted, leading to speculation that the amount of oxygen introduced with a tampon may be an important factor in explaining the increased risk of menstrual TSS among tampon users.106 A role for proteases of either bacterial or human origin in the pathogenesis of TSS also has been suggested.105 An earlier theory that the association between menstrual TSS and tampon use was mediated by the demonstrated induction of vaginal ulcerations by tampons107 has received less attention ever since similar vaginal ulcerations were reported in at least one patient with menstrual TSS who did not use tampons.94
PATTERNS OF HOST RESPONSE
Diagnosis
Clinical Features An illness meeting all the criteria of the established TSS case definition is, by the very nature of the criteria, severe, and the majority of such patients are hospitalized for treatment. Some patients experience the relatively gradual onset of sore throat, fever, fatigue, headache, and myalgias over 24–48 hours, followed by vomiting and/or diarrhea, signs of hypotension, and the appearance of the characteristic diffuse "sunburn-like" macular skin rash. Other patients have a much more dramatic onset over the course of several hours, with some reporting that they can remember the exact moment when they suddenly felt overwhelmingly ill. Because an established set of strict criteria is used to define someone as having or not having TSS, all of the cases so defined are, not surprisingly, alike, regardless of the site of infection with S. aureus. There is, however, some variation. The temperature elevation in patients with TSS, while sometimes modest, can be extreme, with temperatures in the range of 104–106°F being fairly common. The evidence of hypotension in an individual case can range from mild orthostatic dizziness to profound shock. The characteristic macular skin rash can be dramatic and obvious, with the patient appearing bright red throughout; it can be subtle and difficult to appreciate, particularly in dark-skinned individuals; or it can be localized. Similarly, the desquamation that occurs during convalescence (usually 5–15 days after the acute illness) can consist of subtle flaking and peeling of skin on the face and/or trunk or can involve the loss of full-thickness sheets of skin, particularly on the fingers, hands, and feet. Depending on which systems are affected most prominently in an individual case, the multisystem involvement in TSS can produce rather different clinical pictures. In some patients, the involvement of the mucous membranes (e.g., sore throat, conjunctival and oropharyngeal injection) is severe and most prominent, while in other patients the gastrointestinal symptoms (vomiting and/or diarrhea) predominate. Similarly, myalgias, thrombocytopenia, and involvement of the hepatic and renal systems can range from nil to severe. One study has suggested that the clinical spectrum of disease differs between menstrual and nonmenstrual TSS cases.45 Patients who receive aggressive supportive therapy (e.g., fluids), appropriate antimicrobial agents, and drainage of any focal S. aureus infection usually respond rapidly and improve over the course of several days. However, patients in whom therapy is delayed or in whom a focal S. aureus infection is not eradicated can have a stormy, life-threatening course. In cases meeting all the established criteria, the case-fatality ratio is 1–3% overall, although it increases with increasing age.19 The spectrum of illness of TSS has not been defined adequately due to the lack of a specific diagnostic laboratory test.
It is evident that some illnesses not meeting all the criteria of the strict case definition, which was devised for use in epidemiological studies, represent milder forms of TSS. For example, few would question that an individual whose highest recorded temperature was 101.8°F, but who otherwise met all of the established criteria, had TSS. A number of authors have described patients of this kind,108,109 and some have attempted to fashion simplified and/or less rigorous case definitions for TSS.110 It is apparent that less rigorous case definitions are likely to be more sensitive but less specific in identifying TSS cases. Ultimately, however, it is not possible, in the absence of a specific diagnostic test, to determine where along a spectrum of increasingly milder and/or more atypical illness cases cease to be TSS and start to be something else. Thus, it is unclear whether a tampon-using menstruating woman with S. aureus in the vagina (or anyone else) who experiences headache, fatigue, and nausea might represent a very mild form of TSS. Such distinctions are made all the more difficult by the relative frequency with which completely asymptomatic individuals are colonized with TSST-1-producing S. aureus in the nasopharynx, vagina, and probably other sites that are not normally sterile.
As noted above, TSS can occur in individuals of any age, sex, and race. However, most recognized cases occur in a limited number of clinical settings. In women of reproductive age, TSS is most commonly seen during the menstrual period and the postpartum interval, although it can occur at other times as well, in association with focal S. aureus infections and in users of barrier contraception. TSS during pregnancy, however, is quite uncommon. Although patients undergoing nasal surgery may be at elevated risk, TSS related to a surgical wound infection is a possibility in any postoperative patient, particularly during the first 24–72 hours. In many such instances, there will be few or no local signs that the operative site is infected.68 As noted earlier, the median interval between surgery and onset of TSS in such cases is two days, but the range is 12 hours to many weeks. TSS is an infrequent but serious consequence of focal S. aureus infections at every conceivable body site, although cutaneous and subcutaneous abscesses and other similar infections appear to predominate. In addition,
TABLE 18-4. DIFFERENTIAL DIAGNOSIS IN PATIENTS WITH SUSPECTED TSS
Kawasaki syndrome
Scarlet fever
Meningococcemia
Leptospirosis
Measles (especially "atypical")
Rocky Mountain spotted fever
Viral gastroenteritis
Viral syndromes with exanthems
Appendicitis
Pelvic inflammatory disease
Tubo-ovarian abscess
Staphylococcal scalded skin syndrome
Drug reactions/Stevens–Johnson syndrome
TSS has been reported to be a life-threatening complication of postinfluenza S. aureus infections of the respiratory tract.111,112 The differential diagnosis for a patient suspected of having TSS depends, in part, on which features of the illness are most prominent. For example, patients in whom sore throat and fever predominate early are frequently suspected initially of having streptococcal or viral pharyngitis. In cases in which diarrhea and vomiting are more prominent, viral gastroenteritis is often considered. When the rash becomes apparent, scarlet fever, streptococcal TSS, and drug reactions are often suspected. The differential diagnosis also can be influenced by the patient's age and sex and the clinical setting in which the illness occurs. For example, cases in infants and very young children must be distinguished from Kawasaki syndrome and staphylococcal scalded skin syndrome. Similarly, in postpartum or postabortion women, other causes of fever and hypotension must be considered, such as endometritis and septic abortion. In individuals with appropriate exposure histories, leptospirosis, measles, and Rocky Mountain spotted fever should be included in the differential diagnosis. In summary, TSS can be confused fairly readily with a wide range of other conditions (Table 18-4).
CONTROL AND PREVENTION
General Concepts
Menstrual TSS Most strategies for decreasing the incidence of TSS have focused on menstrual TSS and its relationship to tampon use. In light of the demonstrated association between tampon use and risk of developing menstrual TSS, women were advised in 1980 that they could minimize their risk of developing menstrual TSS by not using tampons. In response, many women stopped using tampons, at least temporarily. The proportion of menstruating women who used tampons fell from approximately 70% in 1980 to less than 50% in 1981, but has rebounded to approximately 60–65% since that time. In response to epidemiological and in vitro laboratory evidence concerning the possible roles of tampon absorbency and chemical composition in determining risk of menstrual TSS, most tampon manufacturers have dramatically altered both the absorbency and chemical composition of their products. After increasing markedly in the late 1970s, the measured in vitro absorbency of tampons has dropped sharply since 1979–1980, and one component, polyacrylate, has been eliminated from tampon formulations. In addition, one brand of tampons found to be associated with a high risk of menstrual TSS was withdrawn from the market altogether in 1980. All tampons sold in the United States in 2007 are made of cotton, rayon, or a blend of cotton and rayon.
All tampons now carry a label explaining the association between tampon use and menstrual TSS and describing the signs and symptoms of the illness. Tampon packages also carry a statement that women should use the lowest absorbency tampon consistent with their needs. Uniform absorbency labeling of tampons was required by the Food and Drug Administration beginning in 1989. Although frequent changing of tampons has been recommended as a way of decreasing the risk of menstrual TSS, there is no evidence to suggest that changing tampons more often reduces risk. Evidence from one study suggests that alternating tampons and napkins during a menstrual cycle may decrease the risk of TSS.6
Postpartum TSS Because women may be at increased risk of TSS during the postpartum period, they should avoid the use of tampons and barrier contraception during that interval.
Hospital-Acquired TSS Other than those measures designed to minimize nosocomial infections in general (e.g., good hand-washing practices) and those recommended specifically for patients with other types of staphylococcal infections, there are no proven methods for decreasing the risk of TSS associated with infected surgical wounds and other nosocomial S. aureus infections.
Antibiotic and Chemotherapeutic Approaches to Prophylaxis Appropriate antimicrobial therapy of an initial episode of menstrual TSS, combined with discontinuing tampon use, has been shown to reduce the risk of recurrent episodes during subsequent menstrual periods.5 The value of follow-up cultures and prophylactic antimicrobial agents in women with a history of menstrual TSS is unproven, although such measures may be justified in women who have had recurrent episodes of TSS. Because carriage of S. aureus at various body sites is so common and cases of TSS are relatively rare, there is no role for obtaining cultures from or giving chemoprophylaxis to individuals without a prior history of TSS.
Immunization Although some consideration was given to attempting to develop a toxoid vaccine from TSST-1 soon after its discovery, no concrete steps in this direction have been taken. Given the high proportion of the population with naturally occurring anti-TSST-1 antibodies and the relative rarity of TSS, it would be prohibitively expensive and impractical to demonstrate that such a vaccine yielded clinical protection.
UNRESOLVED PROBLEMS
Unresolved problems in our understanding of TSS relate primarily to its pathophysiology. While a clear link between the use of tampons and risk of menstrual TSS has been established, the specific characteristics of tampons responsible for this increased risk are unknown. The relative importance of absorbency, chemical composition, oxygen content, and perhaps other tampon characteristics, such as the surfactants used in their manufacture, in determining risk is uncertain. Similarly, while a direct correlation between measured tampon absorbency and risk of menstrual TSS has been demonstrated, it remains unclear whether or not users of the lowest absorbency tampons are at greater risk than nontampon users. At the same time, the role of tampon chemical composition in determining risk is ill-defined. As a result of all these uncertainties, it is unknown whether or not the “perfect tampon” (i.e., one that offers menstrual protection and has no associated increased risk of menstrual TSS) currently exists or can be developed.
Reye's Syndrome
Robert B. Wallace
What is now known as Reye's syndrome was first described in Australia in 1963,1,2 and shortly thereafter a series of similar cases was published in the United States.3 It is unclear whether cases occurred in earlier eras. The syndrome as originally described was characterized by an acute encephalopathic clinical picture and fatty liver in children, with major neurological and metabolic manifestations often leading to death.4 Epidemiological, clinical, and metabolic studies have added considerable information on the nature of the condition, but it remains a syndrome that may comprise diverse causes and pathogenetic mechanisms.
Case Definition and Surveillance
Rates of occurrence of Reye's syndrome depend in part on skill in clinical case recognition, the rigor of surveillance, and the case definition used.5 Clearly some definitions and criteria are much more encompassing than others, and will change the apparent occurrence rates of the syndrome. The epidemiological case definition used by the U.S. Centers for Disease Control5 includes:
1. Acute noninflammatory encephalopathy with: (a) microvesicular fatty metamorphosis of the liver confirmed by biopsy or autopsy, or (b) a serum alanine aminotransferase (ALT or SGPT) or serum ammonia level greater than three times normal.
2. If cerebrospinal fluid is obtained, the leukocyte count must be ≤8/mm3.
3. In addition, there should be no other more reasonable explanation for the neurological or hepatic abnormalities.
The illness generally occurs in two phases, beginning with a clinical viral illness with respiratory or gastroenterological manifestations and progressing within a few days to overt encephalopathy. Case reports continue to appear in the worldwide literature, including cases in the neonatal period and in adults, although most occur in infants and children. The syndrome has been clinically staged according to the level of consciousness and corresponding physical signs.6 Other definitions have been more specific,7 but none will be wholly satisfactory until a "gold standard" for the diagnosis appears, likely encompassing specific biomarkers. Recent evidence suggests, for example, that at least some cases originally labeled as the syndrome were associated with known inborn errors of metabolism.7 Diagnosis rates may also vary according to the frequency of biopsy and autopsy, although the specificity of histopathological changes has been disputed. In fact, as more metabolic diseases are discovered that have a Reye's syndrome-like clinical picture, the clinical pattern of remaining cases may be changing over time.8 Continuous surveillance of Reye's syndrome began in 1976 in the United States, and the incidence of the syndrome has clearly decreased since; there were as many as 555 cases reported in a single year, but in recent years the number of reported cases has been much smaller. Despite this, the surveillance effort remains active, and reporting is encouraged. With respect to reported occurrence in the United States, the author was unable to find a specific, dedicated surveillance report since the late 1990s. There have also been differences in occurrence patterns among countries. For example, in Australia occurrences may be nonseasonal, and children with Reye's syndrome have tended to be younger, generally less than five years of age. Cases in the United States occur predominantly in the fall and winter seasons, with a modal age distribution of 5–15 years. Further, the decline in the U.S. incidence rate for Reye's syndrome in the 1980s was initially more prominent in children under 10 years of age, although more recently all age groups have enjoyed some decrease. All of this suggests the possibility of age- and geography-related heterogeneity in the nature and causes of the syndrome.
Causes and Control of Reye's Syndrome
The causes of Reye's syndrome, including pathogenetic mechanisms, remain enigmatic,9 despite advances in understanding the pathogenesis of the condition.10 Hypotheses include genetic predisposition, possibly related to selected inborn errors of metabolism; exposure to environmental toxins such as various chemicals, pesticides, and mycotoxins; and use of medications such as salicylates and antiemetics. Also, at least in the United States, most cases are preceded by an acute viral infection, usually beginning 7–10 days prior to syndrome onset. Infection with many categories of viruses has been documented, but the two most prominent are varicella and influenza B. Approximately 5–30% of reported cases were varicella associated, and studies have explored the relation of case rates to the prevalent influenza strain.6 A synergistic effect of a second or dual viral infection in causing the syndrome has been postulated. Other viruses have been the subject of speculation but have not been rigorously evaluated. The 1980s were characterized by epidemiological assessment of whether salicylates, particularly aspirin, have a causal role in the syndrome. After some anecdotal reports and case series, several case-control studies were performed in the United States. Although some of these were criticized on methodological grounds, in aggregate they suggested that the syndrome was at least in part related to the use of aspirin as treatment for the febrile illness preceding or during syndrome onset.10 No evidence was found implicating acetaminophen or other medications. In fact, the decline in Reye's syndrome incidence noted above has been related to public education and the subsequent decline in the use of aspirin for febrile conditions in children.11 However, aspirin does not likely explain all cases of the syndrome, and other forces, yet unidentified, may be at work.
In other countries such as Australia, aspirin was not related to the syndrome, particularly in children under 5 years of age,12 and some of these cases are turning out to be other, defined metabolic disorders. Several other chemical agents and drugs have been suggested to be related to the syndrome, but conclusive evidence is generally lacking,13 and debate in the literature persists.14
SUMMARY
Reye’s syndrome appears to be an important and at least partially preventable entity, even if not fully characterized or etiologically explained. However, modern biology continues to suggest pathogenetic mechanisms. Continued surveillance is necessary to assess its public health impact, search for additional causes, and detect any important increases in incidence. Most authors suggest maintaining the recommendation to avoid aspirin use in children until more information is available.
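As a rough illustration, the three criteria of the CDC epidemiological case definition quoted earlier can be expressed as a small screening helper for surveillance triage (a hypothetical sketch; the parameter names are assumptions, not CDC field names):

```python
def meets_reye_case_definition(encephalopathy, fatty_liver_on_biopsy,
                               alt_over_3x_normal, ammonia_over_3x_normal,
                               csf_leukocytes=None, better_explanation=False):
    """Screen a case report against the three CDC criteria (sketch only)."""
    # Criterion 1: acute noninflammatory encephalopathy plus liver evidence,
    # either histological (biopsy/autopsy) or biochemical (ALT or ammonia
    # greater than three times normal).
    liver_evidence = (fatty_liver_on_biopsy or alt_over_3x_normal
                      or ammonia_over_3x_normal)
    if not (encephalopathy and liver_evidence):
        return False
    # Criterion 2: if CSF was obtained, the leukocyte count must be <= 8/mm^3.
    if csf_leukocytes is not None and csf_leukocytes > 8:
        return False
    # Criterion 3: no other more reasonable explanation for the findings.
    return not better_explanation
```

Under this sketch, a report with encephalopathy, biopsy-confirmed fatty liver, and a CSF leukocyte count of 4/mm3 would screen positive, while the same report with a count of 20/mm3 would not.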
18 REFERENCES
Dermatophytes 1. Elewski BE. The superficial mycoses, the dermatophytoses, and select dermatomycoses. In: Elewski BE, ed. Cutaneous Fungal Infections. 2nd ed. Malden, MA: Blackwell Science, Inc; 1998;1–72. 2. Drake LA, Dinehart SM, Farmer ER, et al. Guidelines of care for superficial mycotic infections of the skin: tinea corporis, tinea cruris, tinea faciei, tinea manuum, and tinea pedis. J Am Acad Dermatol. 1996;34(2 Pt 1):282–6. 3. Ajello L. Geographic distribution and prevalence of the dermatophytes. Ann N Y Acad Sci. 1960;89:30–8. 4. Georg L. Epidemiology of the dermatophytoses sources of infection, modes of transmission and epidemicity. Ann N Y Acad Sci. 1960;89: 69–77. 5. Gupta AK, Sauder DN, Shear NA. Antifungal agents: an overview. Part II. J Am Acad Dermatol. 1994;30:911–33. 6. Philpot CM. Some aspects of the epidemiology pf tinea. Mycopathologia. 1977;62:3–13. 7. Vidotto V, Moiraghi Ruggenini A, Cervetti O. Epidemiology of dermatophytosis in the metropolitan area of Turin. Mycopathologia. 1982;80:21–26. 8. Sinski JT, Flouras K. A survey of dermatophytes isolated from human patients in the United States from 1979 to 1981 with chronological listings of worldwide incidence of five dermatophytes often isolated in the United States. Mycopathologia. 1984;85:97–120. 9. Goldstein AO, Smith KM, Ives TJ, et al. Mycotic infections. effective management of conditions involving the skin, hair, and nails. Geriatrics. 2000;55:40–2, 45–7, 51–2 (Review). 10. Bronson DM, Desai DR, Barskey S, et al. An epidemic of infection within a 20-year survey of fungal infections in Chicago. J Am Acad Dermatol. 1983;8:322–330. 11. Snider R, Landers S, Levy ML. The ringworm riddle: an outbreak of Microsporum canis in the nursery. Pediatr Infect Dis. 1993;12:145–8. 12. Fuller LC, Child FJ, Midgley G, et al.. Diagnosis and management of scalp ringworm. Br Med J. 2003;326:539–41. 13. Zuber T, Baddam K. 
Superficial fungal infection of the skin: where and how it appears help determine therapy. Postgrad Med. 2001;109: 117–32. 14. Faergemann J, Mörk NJ, Haglund A, et al. A multicentre (doubleblind) comparative study to assess the safety and efficacy of fluconazole and griseofulvin in the treatment of tinea corporis and tinea cruris. Br J Dermatol. 1997;136:575–577. 15. Foster KW, Ghannoum MA, Elewski BE. Epidemiologic surveillance of cutaneous fungal infection in the United States from 1999 to 2002. J Am Acad Dermatol. 2004;50:748–52. 16. Martin AG, Kobayashi GS. Superficial fungal infection: dermatophytosis, tinea nigra, piedra. In: Freedberg IM, Fitzpatrick TB, Eisen AZ, et al., eds. Fitzpatrick’s Dermatology in General Medicine, 5th ed. New York: McGraw-Hill; 1999:2337–57. 17. Hainer BL. Dermatophyte infections. Am Fam Physician. 2003;67: 101–8. 18. Elgart ML, Warren NG. The superficial and subcutaneous mycoses. In: Moschella SL, Hurley HJ, eds. Dermatology, 3rd ed. Philadelphia, PA: WB Saunders Company; 1992:869–941. 19. Ghannoum MA, Hajjeh RA, Scher R, et al. A large-scale North American study of fungal isolates from nails: the frequency of onychomycosis, fungal distribution, and antifungal susceptibility patterns. J Am Acad Dermatol. 2000;43:641–648. 20. Rosen T. Dermatophytosis: diagnostic pointers and therapeutic pitfalls. Consultant. 1997;37:1545–7. 21. Sadri MF, Farnaghi F, Danesh-Pazhooh M, et al. The frequency of tinea pedis in patients with tinea cruris in Tehran, Iran. Mycoses. 2000;43:41–4.
Other Infection-Related Diseases of Public Health Import
493
22. Gupta AK, Jain HC, Lynde CW, et al. Prevalence and epidemiology of onychomycosis in patients visiting physicians’ offices: a multicenter Canadian survey of 15,000 patients. J Am Acad Dermatol. 2000;43(2 pt 1):244–8. 23. Weinstein A, Berman B. Topical treatment of common superficial tinea infections. Am Fam Physician. 2002;65:2095–2102. 24. Vander Straten MR, Hossain MA, Ghannoum MA. Cutaneous infections dermatophytosis, onychomycosis, and tinea versicolor. Infect Dis Clin North Am. 2003;17:87–112. 25. Noble SL, Forbes RC, Stamm PL. Diagnosis and management of common tinea infections. Am Fam Physician. 1998;58:163–74, 177–8. 26. Rogers D, Kilkenny M, Marks R. The descriptive epidemiology of tinea pedis in the community. Australas J. Dermatol. 1996;37: 178–84. 27. Rippon J. Medical Mycology: The Pathogenic Fungi and the Pathogenic Actinomycetes. 3rd ed. Philadelphia, PA: WB Saunders Co; 1988. 28. Aste N, Pau M, Aste N, et al. Tinea pedis observed in Cagliari, Italy, between 1996 and 2000. Mycoses. 2003;46:38–41. 29. Terragni L, Buzzetti I, Lasagni A, et al. Tinea pedis in children. Mycoses. 1991;34:273–6. 30. Gupta AK, Tu LQ. Dermatophytes: diagnosis and treatment. J Am Acad Dermatol. 2006;54:1050–5. 31. Zaias N, Tosti A, Rebell G, et al. Autosomal dominant pattern of distal subungual onychomycosis caused by Trichophyton rubrum. J Am Acad Dermatol. 1996;34(2 Pt 1):302–304. 32. Gupta AK, Chow M, Daniel R, et al. Treatments of tinea pedis. Dermatol Clin. 2003;21:431–62. 33. Lopes JO, Alves SH, Mari CR, et al. A ten-year survey of tinea pedis in the central region of the Rio Grande do Sul, Brazil. Rev Inst Med Trop Sao Paulo. 1999;41:75–7. 34. Faergemann J, Baran R. Epidemiology, clinical presentation and diagnosis of onychomycosis. Br J Dermatol. 2003;149(Suppl 65): 1–4. 35. Elewski BE. Cutaneous mycoses in children. Br J Dermatol. 1996; 134(suppl 46):7–11. 36. Andre J, Berger T, De Doncker P, et al. 
The second international symposium on onychomycosis: an update on the issues. Med Monitor. 1996;2:1–8. 37. Elewski BE, Hay RJ. Update on the management of onychomycosis. Highlights of the third international summit on cutaneous antifungal therapy. Clin Infect Dis. 1996;23:305–13. 38. Clinical Courier. New strategies for the effective management of superficial fungal infections. Clin Courier. 1997;16:2–3. 39. Loo DS. Cutaneous fungal infections in the elderly. Dermatol Clin. 2004;22:33–50. 40. Johnson ML. Aging of the United States population. The dermatologic implications. Clin Geriatr Med. 1989;5:41–51. 41. Burkhart CN, Chang H, Gottwald L. Tinea corporis in human immunodeficiency virus-positive patients: case report and assessment of oral therapy. Int J Dermatol. 2003;42:839–43.
Hookworm Disease: Ancylostomiasis, Necatoriasis, Uncinariasis 1. Bethony J, Brooker S, Albonico M, et al. Soil-transmitted helminth infections: ascariasis, trichuriasis, and hookworm. Lancet. 2006; 367(9521):1521–32. 2. de Silva NR, Brooker S, Hotez PJ, et al. Soil-transmitted helminth infections: updating the global picture. Trends Parasitol. 2003;19(12): 547–51. 3. Hotez PJ, Brooker S, Bethony JM, et al. Hookworm infection. N Engl J Med. 2004;351:799–807. 4. Schad GA, Anderson RM. Predisposition to hookworm infection in humans. Science. 1985;228(4707):1537–40. 5. Kappus KD, Lundgren RG Jr, Juranek DD, et al. Intestinal parasitism in the United States: update on a continuing problem. Am J Trop Med Hyg. 1994;50:705–13.
Communicable Diseases
6. Christian P, Khatry SK, West KP Jr. Antenatal anthelmintic treatment, birthweight, and infant survival in rural Nepal. Lancet. 2004; 364(9438):981–3. 7. Crompton DWT, McKean PG, Schad GA. Hookworm disease: current status and new directions. Parasitol Today. 1989;5:1–2. 8. Chandler AC. Introduction to Parasitology. 9th ed. New York: John Wiley & Sons;1955. 9. Stoll NR. On endemic hookworm, where do we stand today? Exp Parasitol. 1962;12:241–52. 10. Schad GA, Banwell JG. Hookworms. In: Warren KS, Mahmoud AAF, eds. Tropical and Geographical Medicine. New York: McGraw-Hill; 1990:379–93. 11. Ju JJ, Hwang WI, Ryu TG, et al. Protein absorption in an adult man bearing intestinal parasites. Korean J Biochem. 1981;13:45–55. 12. Croese J, Loukas A, Opdebeeck J, et al. Human enteric infection with canine hookworms. Ann Intern Med. 1994;120:369–74. 13. Matsusaki G. Hookworm disease and prevention. In: Morishita K, Komiya Y, Matsubayashi H, eds. Progress of Medical Parasitology in Japan. Tokyo: Meguro Parasitological Museum; 1966:187–282. 14. Pritchard DI, McKean PG, Schad GA. An immunological and biochemical comparison of hookworm species. Parasitol Today. 1990; 6(5):154–6. 15. Hoagland KE, Schad GA. Necator americanus and Ancylostoma duodenale: life history parameters and epidemiological implications of two sympatric hookworms of humans. Exp Parasitol. 1978;44:36–49. 16. Beaver PC, Jung RC, Cupp EW. Clinical Parasitology. 9th ed. Philadelphia: Lea & Febiger; 1984. 17. Komiya Y, Yasuraoka K. The biology of hookworms. In: Morishita, Kaoru, eds. Progress of Medical Parasitology in Japan. Tokyo: Meguro Parasitological Museum; 1966:5–114. 18. Yu SH, Jiang ZX, Xu LQ. Infantile hookworm disease in China. A review. Acta Trop. 1995;59:265–70. 19. Wang MP, Hu YF, Peng JM, et al. Persistent migration of Ancylostoma duodenale larvae in human infection. Chin Med J (Engl). 1984;97:147–9. 20. Soh CT. 
The distribution and persistence of hookworm larvae in the tissues of mice in relation to species and to routes of inoculation. J Parasitol. 1958;44(5):515–9. 21. Schad GA, Murrell KD, Fayer R, et al. Paratenesis in Ancylostoma duodenale suggests possible meat-borne human infection. Trans R Soc Trop Med Hyg. 1984;78(2):203–4. 22. Harada Y. Wakana disease and hookworm allergy. Yonago Acta Medica. 1962;6(2):109–18. 23. Neva FA, Brown HW. Basic Clinical Parasitology. 6th ed. Norwalk: Appleton & Lange;1994. 24. Kalkofen UP. Intestinal trauma resulting from feeding activities of Ancylostoma caninum. Am J Trop Med Hyg. 1974;23:1046–53. 25. Cappello M, Vlasuk GP, Bergum PW, et al. Ancylostoma caninum anticoagulant peptide: a hookworm-derived inhibitor of human coagulation factor Xa. Proc Natl Acad Sci USA. 1995;92:6152–56. 26. Moyle M, Foster DL, McGrath DE, et al. A hookworm glycoprotein that inhibits neutrophil function is a ligand of the integrin CD11b/CD18. J Biol Chem. 1994;269:10008–15. 27. Roche M, Layrisse M. The nature and causes of “hookworm anemia.” Am J Trop Med Hyg. 1966;15:1029–1102. 28. Variyam EP, Banwell JG. Hookworm disease: nutritional implications. Rev Infect Dis. 1982;4:830–5. 29. van der GR, Abdillahi H, Stilma JS, et al. Circulating antibodies against corneal epithelium and hookworm in patients with Mooren’s ulcer from Sierra Leone. Br J Ophthalmol. 1983;67:623–8. 30. Andy JJ. Helminthiasis, the hypereosinophilic syndrome and endomyocardial fibrosis: some observations and an hypothesis. Afr J Med Med Sci. 1983;12:155–64. 31. Miller TA. Hookworm infection in man. Adv Parasitol. 1979;17:315–84. 32. Ganguly NK, Mahajan RC, Sehgal R, et al. Role of specific immunoglobulin E to excretory-secretory antigen in diagnosis and prognosis of hookworm infection. J Clin Microbiol. 1988;26:739–42.
33. Miller A. Dung beetles (Coleoptera, Scarabaeidae) and other insects in relation to human feces in a hookworm area of southern Georgia. Am J Trop Med Hyg. 1954;3(2):372–89. 34. Sulaiman S, Sohadi AR, Yunus H, et al. The role of some cyclorrhaphan flies as carriers of human helminths in Malaysia. Med Vet Entomol. 1988;2(1):1–6. 35. Dipeolu OO. Laboratory investigations into the role of Musca vicina and Musca domestica in the transmission of parasitic helminth eggs and larvae. Int J Zoonoses. 1982;9:57–61. 36. Williams-Blangero S, Blangero J, Bradley M, et al. Quantitative genetic analysis of susceptibility to hookworm infection in a population from rural Zimbabwe. Hum Biol. 1997;69(2):201–08. 37. Hominick WM, Dean CG, Schad GA. Population biology of hookworms in west Bengal: analysis of numbers of infective larvae recovered from damp pads applied to the soil surface at defaecation sites. Trans R Soc Trop Med Hyg. 1987;81:978–86. 38. Kan SP. Soil-transmitted helminthiases among inhabitants of an oil-palm plantation in West Malaysia. J Trop Med Hyg. 1989;92: 263–69. 39. Brooker S, Bethony J, Hotez PJ. Human hookworm infection in the 21st century. Adv Parasitol. 2004;58:197–288. 40. Quinnell RJ, Slater AF, Tighe P, et al. Reinfection with hookworm after chemotherapy in Papua New Guinea. Parasitology. 1993;106 (Pt 4): 379–85. 41. Bethony J, Chen J, Lin S, et al. Emerging patterns of hookworm infection: influence of aging on the intensity of Necator infection in Hainan Province, People’s Republic of China. Clin Infect Dis. 2002;35(11):1336–44. 42. Hotez PJ, Bethony J, Bottazzi ME, Brooker S, Buss P. Hookworm: the great infection of mankind. PLoS Med. Mar 2005 ;2(3):e67. Epub. Mar 29,2005. 43. Loukas A, Bethony JM, Mendez S, et al. Vaccination with recombinant aspartic hemoglobinase reduces parasite load and blood loss after hookworm infection in dogs. PLoS Med. 2005;2(10):e295. 44. Hotez PJ, Zhan B, Bethony JM, et al. 
Progress in the development of a recombinant vaccine for human hookworm disease: the human hookworm vaccine initiative. Int J Parasitol. 2003;33(11): 1245–58. 45. Brooker S, Bethony JM, Rodrigues LC, et al. Epidemiologic, immunologic and practical considerations in developing and evaluating a human hookworm vaccine. Expert Rev Vaccines. 2005;4(1):35–50. 46. Albonico M, Smith PG, Ercole E, et al. Rate of reinfection with intestinal nematodes after treatment of children with mebendazole or albendazole in a highly endemic area. Trans R Soc Trop Med Hyg. 1995;89(5):538–41. 47. Abramowicz M (ed). Drugs for parasitic infections. Med Lett Drugs Ther. 2004;1–12. 48. Brown HD, Matzuk AR, Ilves IR, et al. Antiparasitic drugs. IV. 2-(4′-thiazolyl)-benzimidazole, a new anthelmintic. J Am Chem Soc. 1961;83(7):1764–65. 49. Utzinger J, Vounatsou P, N’Goran EK, et al. Reduction in the prevalence and intensity of hookworm infections after praziquantel treatment for schistosomiasis infection. Int J Parasitol. 2002;32(6): 759–65.
Other Intestinal Nematodes Archibald LK, et al. Correspondence: albendazole is effective treatment for chronic strongyloidiasis. JAMA. 1993;270:2921. Braun TI, Fekete T, Lynch A. Strongyloidiasis in an institution for mentally retarded adults. Arch Intern Med. 1988;148:634–6. Gann PH, Neva FA, Gam AA. A randomized trial of single- and two-dose ivermectin versus thiabendazole for treatment of strongyloidiasis. J Infect Dis. 1994;169:1076–9. Genta RM. Global prevalence of strongyloidiasis: critical review with epidemiologic insights into the prevention of disseminated disease. Rev Infect Dis. 1989;11:755–66.
Lindo JF, Conway DJ, Atkins NS, et al. Prospective evaluation of enzyme-linked immunosorbent assay and immunoblot methods for the diagnosis of endemic Strongyloides stercoralis infection. Am J Trop Med Hyg. 1994;51:175–9. Liu LX, Weller PF. Strongyloidiasis and other intestinal nematode infections. Infect Dis Clin North Am. 1993;7:655–82. Mahmoud AAF. Strongyloidiasis. Clin Infect Dis. 1996;23:949–53. Muennig P, Pallin D, Challah C, et al. The cost-effectiveness of ivermectin vs. albendazole in the presumptive treatment of strongyloidiasis in immigrants to the United States. Epidemiol Infect. Dec 2004;132(6):1055–63. Milder JE, Walzer PD, Kilgore G, et al. Clinical features of Strongyloides stercoralis infection in an endemic area of the United States. Gastroenterology. 1981;80:1481–8. Pelletier LL, Baker CB, Gam AA, et al. Diagnosis and evaluation of treatment of chronic strongyloidiasis in ex-prisoners of war. J Infect Dis. 1988;157:573–6. Woodring JH, Halfhill H, Berger R, et al. Clinical and imaging features of pulmonary strongyloidiasis. South Med J. 1996;89:10–9. Zaha O, Hirata T, Kinjo F, et al. Efficacy of ivermectin for chronic strongyloidiasis: two single doses given 2 weeks apart. J Infect Chemother. Mar 2002;8(1):94–8.
Ascariasis Anonymous. Ascariasis: indiscriminate or selective mass chemotherapy? Lancet. 1992;339:1253, 1264. Villamizar E, Mendez M, Bonilla E, et al. Ascaris lumbricoides infestation as a cause of intestinal obstruction in children: experience with 87 cases. J Pediatr Surg. 1996;31:201–5.
Trichuriasis Albonico M, Smith PG, Hall A, et al. A randomized controlled trial comparing mebendazole and albendazole against Ascaris, Trichuris and hookworm infections. Trans R Soc Trop Med Hyg. 1994;88: 585–9. Cooper ES, Duff EMW, Howell S, et al. “Catch-up” growth velocities after treatment for Trichuris dysentery syndrome. Trans R Soc Trop Med Hyg. 1995;89:653. Pearson RD, Schwartzman JD. Nematodes limited to the intestinal tract. In: Strickland GT, ed. Hunter’s Tropical Medicine. 7th ed. Philadelphia: WB Saunders;1991.
Capillariasis Cross JH. Intestinal capillariasis. Clin Micro Rev. 1992;5:120–9. Kang G, Mathan M, Ramakrishan BS, et al. Human intestinal capillariasis: first report from India. Trans R Soc Trop Med Hyg. 1994; 88:204.
Enterobiasis Cook GC. Enterobius vermicularis infection. Gut. 1994;35:1159–62. Liu LX, Chi J, Upton MP, et al. Eosinophilic colitis associated with larvae of the pinworm Enterobius vermicularis. Lancet. 1995;346: 410–2.
Treatment Drugs for parasitic infections. Med Lett Drugs Ther. 1–12. Belizario VY, Amarillo ME, de Leon WU, et al. A comparison of the efficacy of single doses of albendazole, ivermectin, and diethylcarbamazine alone or in combinations against Ascaris and Trichuris spp. Bull World Health Organ. 2003;81(1):35–42. Epub Mar 11, 2003. Fox LM, Saravolatz LD. Nitazoxanide: a new thiazolide antiparasitic agent. Clin Infect Dis. 2005;40(8):1173–80. Epub March 14, 2005.
Schistosomiasis 1. Chitsulo L, Loverde P, Engels D. Disease watch: schistosomiasis. Nat Rev Microbiol. 2004;2:12–3. 2. Arnon R. Life span of parasite in schistosomiasis patients. Isr J Med Sci. 1990;26:404–5. 3. Sturrock RF. The intermediate hosts and host-parasite relationship. In: Jordan P, Webbe G, Sturrock RF. Human Schistosomiasis. Oxon: CAB International;1993;33–85. 4. Grobusch MP, Muhlberger N, Jelinek T, et al. Imported schistosomiasis in Europe: sentinel surveillance data from TropNetEurope. J Travel Med. 2003;10:164–9. 5. Jordan P, Webbe G. Epidemiology. In: Jordan P, Webbe G, Sturrock RF. Human Schistosomiasis. Oxon: CAB International;1993;87–158. 6. Hoeffler DF. Cercarial dermatitis: its etiology, epidemiology and clinical aspects. Arch Environ Health. 1974;29:225–9. 7. Wiest PM. The epidemiology and morbidity of schistosomiasis. Parasitol Today. 1996;12:215–20. 8. Rocha H, Kirk JW, Hearey CDJ. Prolonged Salmonella bacteremia in patients with Schistosoma mansoni infection. Arch Intern Med. 1971;128:254–7. 9. Andrade ZA, Van Marck EAE. Schistosomal glomerular disease. A review. Mem Inst Oswaldo Cruz. 1984;79:499–506. 10. Poggensee G, Feldmeier H, Kranz I. Schistosomiasis of the female genital tract: public health aspects. Parasitol Today. 1999;15: 378–81. 11. Scrimgeour EM, Gadjusek DC. Involvement of the central nervous system in Schistosoma mansoni and S. haematobium infection. A review. Brain. 1985;108:1023–38. 12. Cohen J, Capildo R, Rose FC, et al. Schistosomal myelopathy. Br Med J. 1977;1:1258. 13. Actor JK, Shirai M, Kullberg MC, et al. Helminth infection results in decreased virus-specific CD8+ cytotoxic T-cell and Th1 cytokine responses as well as delayed virus clearance. Proc Natl Acad Sci USA. 1993;90:948–52. 14. McElroy MD, Elrefaei M, Jones N, et al. Coinfection with Schistosoma mansoni is associated with decreased HIV-specific cytolysis and increased IL-10 production. J Immunol. 2005;174:5119–23. 15. Kamal, Madwar M, Bianchi L, et al. 
Clinical, virological and histopathological features: long-term follow-up in patients with chronic hepatitis C co-infected with S. mansoni. Liver. 2000;20:281–9. 16. Lyke KE, Dicko A, Dabo A, et al. Association of Schistosoma haematobium infection with protection against acute Plasmodium falciparum malaria in Malian children. Am J Trop Med Hyg. 2005;73:1124–30. 17. Briand V, Watier L, Le Hesran JY, et al. Coinfection with Plasmodium falciparum and Schistosoma haematobium: protective effect of schistosomiasis on malaria in Senegalese children? Am J Trop Med Hyg. 2005;72:702–7. 18. Brown M, Kizza M, Watera C, et al. Helminth infection is not associated with faster progression of HIV disease in coinfected adults in Uganda. J Infect Dis. 2004;190:1869–79. 19. Karanja DMS, Hightower AW, Secor WE, et al. Resistance to reinfection with Schistosoma mansoni in occupationally exposed adults and effect of HIV-1 co-infection on susceptibility to schistosomiasis: a longitudinal study. Lancet. 2002;360:592–6. 20. Schutte CH, Pienaar R, Becker PJ, et al. Observations on the techniques used in the qualitative and quantitative diagnosis of schistosomiasis. Ann Trop Med Parasitol. 1994;88:305–16. 21. Rabello AL. Parasitological diagnosis of schistosoma mansoni: fecal examination and rectal biopsy. Mem Inst Oswaldo Cruz. 1992;87(4): 325–31. 22. Mott KE, Dixon H, Osei-Tutu E, et al. Evaluation of reagent strips in urine tests for detection of Schistosoma haematobium infection: a comparative study in Zambia and Ghana. Bull World Health Organ. 1985;63:125–33. 23. Doenhoff MJ, Chiodini PL, Hamilton JV. Specific and sensitive diagnosis of schistosome infection: can it be done with antibodies? Trends Parasitol. 2004;20:35–9.
24. Pearson RP, Guerrant RC. Praziquantel: a major advance in antihelminthic therapy. Ann Intern Med. 1983;99:195–8. 25. Homeida MA, el Tom I, Nash T, et al. Association of the therapeutic activity of praziquantel with the reversal of Symmers’ fibrosis induced by Schistosoma mansoni. Am J Trop Med Hyg. 1991;45:360–5. 26. Karanja DMS, Boyer AE, Strand M, et al. Studies in schistosomiasis in western Kenya: II. Efficacy of praziquantel for treatment of schistosomiasis in persons coinfected with human immunodeficiency virus-1. Am J Trop Med Hyg. 1998;59:307–311. 27. Fallon PG, Doenhoff MJ. Drug-resistant schistosomiasis: resistance to praziquantel and oxamniquine induced in Schistosoma mansoni in mice is drug specific. Am J Trop Med Hyg. 1994;51:83–8. 28. Cioli D, Pica-Mattoccia L, Archer S. Drug resistance in schistosomes. Parasitol Today. 1993;9:162–6. 29. Ismail M, Botros S, Metwally A, et al. Resistance to praziquantel: direct evidence from Schistosoma mansoni isolated from Egyptian villagers. Am J Trop Med Hyg. 1999;60:932–5. 30. Fallon PG, Sturrock RF, Niang AC, et al. Short report: diminished susceptibility to praziquantel in a Senegal isolate of Schistosoma mansoni. Am J Trop Med Hyg. 1995;53:61–2. 31. King CH, Muchiri EM, Ouma JH. Evidence against rapid emergence of praziquantel resistance in Schistosoma haematobium in Kenya. Emerg Infect Dis. 2000;6:585–94. 32. Botros S, Sayed H, Amer N, et al. Current status of sensitivity to praziquantel in a focus of potential drug resistance in Egypt. Int J Parasitol. 2005;35:787–91. 33. Utzinger J, N’Goran EK, N’Dri A, et al. Oral artemether for prevention of Schistosoma mansoni infection: randomised controlled trial. Lancet. 2000;355:1320–5. 34. Li YS, Chen HG, He HB, et al. A double-blind field trial on the effects of artemether on Schistosoma japonicum infection in a highly endemic focus in southern China. Acta Trop. 2005;96:153–67. 35. De Clercq D, Vercruysse J, Kongs A, et al.
Efficacy of artesunate and praziquantel in Schistosoma haematobium infected school children. Acta Trop. 2002;82:61–6. 36. Utzinger J, Xiao S, Keiser J, et al. Current progress in the development and use of artemether for chemoprophylaxis of major human schistosome parasites. Curr Med Chem. 2001;15:1841–60. 37. Xianyi C, Liying W, Jiming C, et al. Schistosomiasis control in China: the impact of a 10-year World Bank Loan project (1992–2001). Bull World Health Organ. 2005;83:43–8. 38. Liang S, Yang C, Zhong B, et al. Re-emerging schistosomiasis in hilly and mountainous areas of Sichuan, China. Bull World Health Organ. 2006;84:139–44. 39. Savioli L, Albonico M, Engels D, et al. Progress in the prevention and control of schistosomiasis and soil-transmitted helminthiasis. Parasitol Int. 2004;253:103–13. 40. Yang GJ, Vounatsou P, Zhou XN, et al. A review of geographic information system and remote sensing with applications to the epidemiology and control of schistosomiasis in China. Acta Trop. 2005;96: 117–29. 41. Perrett S, Whitfield PJ. Currently available molluscicides. Parasitol Today. 1996;12:156–9. 42. Richter J. The impact of chemotherapy on morbidity due to schistosomiasis. Acta Trop. 2003;86(2–3):161–83. 43. Hu GH, Hu J, Song KY, et al. The role of health education and health promotion in the control of schistosomiasis: experiences from a 12-year intervention study in the Poyang Lake area. Acta Trop. 2005;96:232–41. 44. Li YS, Sleigh AC, Li Y, et al. Five-year impact of repeated praziquantel therapy on subclinical morbidity due to Schistosoma japonicum in China. Trans R Soc Trop Med Hyg. 2002;96:438–43. 45. Bausch D, Cline BL. The impact of control measures on urinary schistosomiasis in primary school children in northern Cameroon: a unique opportunity for controlled observations. Am J Trop Med Hyg. 1995;53:577–80.
46. Olveda RM, Daniel BL, Ramirez BDL, et al. Schistosomiasis japonica in the Philippines: the long-term impact of population-based chemotherapy on infection, transmission and morbidity. J Infect Dis. 1996;174:163–72. 47. Wynn TA, Hoffmann KF. Defining a schistosomiasis vaccine strategy—is it really Th1 vs Th2? Parasitol Today. 2000;16:497–501. 48. Marquet S, Abel L, Hillaire D, et al. Genetic localization of a locus controlling the intensity of infection by Schistosoma mansoni on chromosome 5q31-q33. Nat Genet. 1996;14:181–4. 49. Eberl M, Langermans JA, Frost PA, et al. Cellular and humoral immune responses and protection against schistosomes induced by a radiation attenuated vaccine in chimpanzees. Infect Immun. 2001;69:5352–62. 50. Bergquist NR, Leonardo LR, Mitchell GF. Vaccine-linked chemotherapy: can schistosomiasis control benefit from an integrated approach? Trends Parasitol. 2005;21:112–7. 51. Capron A, Riveau G, Capron M, et al. Schistosomes: the road from host-parasite interactions to vaccines in clinical trials. Trends Parasitol. 2005;21:143–9.
Toxic Shock Syndrome (Staphylococcal) 1. Todd JK, Fishaut M, Kapral F, et al. Toxic shock syndrome associated with phage-group-I staphylococci. Lancet. 1978;2:1116–8. 2. Aranow H, Jr., Wood WB. Staphylococcal infection simulating scarlet fever. J Am Med Assoc. 1942;119:1491–5. 3. Stevens FA. The occurrence of Staphylococcus aureus infection with a scarlatiniform rash. J Am Med Assoc. 1927;88:1957–8. 4. Everett ED. Mucocutaneous lymph node syndrome (Kawasaki disease) in adults. J Am Med Assoc. 1979;242:542–3. 5. Davis JP, Chesney PJ, Wand PJ, et al. Toxic shock syndrome: Epidemiologic features, recurrence, risk factors, and prevention. N Engl J Med. 1980;303:1429–35. 6. Shands KN, Schmid GP, Dan BB, et al. Toxic shock syndrome in menstruating women: Its association with tampon use and Staphylococcus aureus and the clinical features in 52 cases. N Engl J Med. 1980;303:1436–42. 7. Centers for Disease Control. Follow-up on toxic shock syndrome— United States. Morb Mortal Wkly Rep. 1980;29:297–9. 8. Centers for Disease Control. Toxic shock syndrome—United States. Morb Mortal Wkly Rep. 1980;29:229–30. 9. Reingold AL. Epidemiology of toxic shock syndrome, United States, 1960–1984, Centers for Disease Control. CDC Surveill Summ. 1984;33(3SS): 19SS–22SS. 10. Reingold AL, Hargrett NT, Shands KN, et al. Toxic shock syndrome surveillance in the United States, 1980–1981. Ann Intern Med. 1982;92(Part 2):875–80. 11. Petitti DB, Reingold AL, Chin J. The incidence of toxic shock syndrome in northern California, 1972 through 1983. J Am Med Assoc. 1986;255:368–72. 12. Todd JK, Wiesenthal AM, Ressman M, et al. Toxic shock syndrome. II. Estimated occurrence in Colorado as influenced by case ascertainment methods. Am J Epidemiol. 1985;122:857–67. 13. Berkley SF, Hightower AW, Broome CV, et al. The relationship of tampon characteristics to menstrual toxic shock syndrome. J Am Med Assoc. 1987;258:917–20. 14. Lanes SF, Rothman K J. 
Tampon absorbency, composition and oxygen content, and risk of toxic shock syndrome. J Clin Epidemiol. 1990;43:1379–85. 15. Osterholm MT, Davis JP, Gibson RW. Tristate toxic shock syndrome study. I. Epidemiologic findings. J Infect Dis. 1982;145: 431–40. 16. Schlech WF, Shands KN, Reingold AL, et al. Risk factors for the development of toxic shock syndrome: Association with a tampon brand. J Am Med Assoc. 1982;248:835–9. 17. Petitti DB, Reingold AL. Update through 1985 on the incidence of toxic shock syndrome among members of a prepaid health plan. Rev Infect Dis. 1989;11(1):22–7.
18. Reingold AL, Broome CV, Gaventa S, et al. Risk factors for menstrual toxic shock syndrome: Results of a multistate case-control study. Rev Infect Dis. 1989;11(1):35–42. 19. Markowitz LE, Hightower AW, Broome C, et al. Toxic shock syndrome. Evaluation of national surveillance data using a hospital discharge survey. J Am Med Assoc. 1987;258:75–8. 20. Linnemann CC, Jr., Knarr D. Increasing incidence of toxic shock syndrome in the 1970s. Am J Public Health. 1986;76:566–7. 21. Petitti DB, Reingold AL. Recent trends in the incidence of toxic shock syndrome in Northern California. Am J Public Health. 1991;81:1209–11. 22. Jacobson JA, Kasworm EM, Crass BA, et al. Nasal carriage of toxigenic Staphylococcus aureus and prevalence of serum antibody to toxic shock syndrome toxin 1 in Utah. J Infect Dis. 1986;153:356–9. 23. Lansdell LW, Taplin D, Aldrich TE. Recovery of Staphylococcus aureus from multiple body sites in menstruating women. J Clin Microbiol. 1984;20:307–10. 24. Martin RR, Buttram V, Besch P, et al. Nasal and vaginal Staphylococcus aureus in young women: Quantitative studies. Ann Intern Med. 1982;96(Part 2):951–3. 25. Ritz HL, Kirkland JJ, Bond GG, et al. Association of high levels of serum antibody to staphylococcal toxic shock antigen with nasal carriage of toxic shock antigen-producing strains of Staphylococcus aureus. Infect Immun. 1984;43:954–8. 26. Schlievert PM, Osterholm MT, Kelly JA, et al. Toxin and enzyme characterization of Staphylococcus aureus isolates from patients with and without toxic shock syndrome. Ann Intern Med. 1982;96(Part 2):937–40. 27. Bonventre PF, Linnemann C, Weckbach LS, et al. Antibody responses to toxic shock syndrome (TSS) toxin by patients with TSS and by healthy staphylococcal carriers. J Infect Dis. 1984;150:662–6. 28. Vergeront JM, Stolz SJ, Crass BA, et al. Prevalence of serum antibody to staphylococcal enterotoxin F among Wisconsin residents: Implications for toxic-shock syndrome. J Infect Dis. 1983;148:692–8. 29.
Reingold AL, Dan BB, Shands KN, et al. Toxic shock syndrome not associated with menstruation: A review of 54 cases. Lancet. 1982;1: 1–4. 30. Reingold AL, Hargrett NT, Dan BB, et al. Nonmenstrual toxic shock syndrome: A review of 130 cases. Ann Intern Med. 1982;96(Part 2): 871–4. 31. Miwa K, Fukuyama M, Kunitomo T, et al. Rapid assay for detection of toxic shock syndrome toxin 1 from human sera. J Clin Microbiol. 1994;32:539–42. 32. Vergeront JM, Evenson ML, Crass BA, et al. Recovery of staphylococcal enterotoxin F from the breast milk of a woman with toxic shock syndrome. J Infect Dis. 1982;146:456–9. 33. Stolz SJ, Davis JP, Vergeront JM, et al. Development of serum antibody to toxic shock toxin among individuals with toxic shock syndrome in Wisconsin. J Infect Dis. 1985;151:883–9. 34. Corbishley CM. Microbial flora of the vagina and cervix. J Clin Pathol. 1977;30:745–8. 35. Guinan ME, Dan BB, Guidotti RJ, et al. Vaginal colonization with Staphylococcus aureus in healthy women: A review of four studies. Ann Intern Med. 1982;96(Part 2):944–7. 36. Linnemann CC, Staneck JL, Hornstein S, et al. The epidemiology of genital colonization with Staphylococcus aureus. Ann Intern Med. 1982;96(Part 2):940–4. 37. Noble VS, Jacobson JA, Smith CB. The effect of menses and use of catamenial products on cervical carriage of Staphylococcus aureus. Am J Obstet Gynecol. 1982;144:186–9. 38. Onderdonk AB, Zamarchi GR, Walsh JA, et al. Methods for quantitative and qualitative evaluation of vaginal microflora during menstruation. Appl Environ Microbiol. 1986;51:333–9. 39. Smith CB, Noble V, Bensch R, et al. Bacterial flora of the vagina during the menstrual cycle: Findings in users of tampons, napkins, and sea sponges. Ann Intern Med. 1982;96(Part 2):948–51.
40. Schlievert PM, Shands KN, Dan BB, et al. Identification and characterization of an exotoxin from Staphylococcus aureus associated with toxic shock syndrome. J Infect Dis. 1981;143:509–16. 41. Bergdoll MS, Crass BA, Reiser RF, et al. A new staphylococcal enterotoxin, enterotoxin F, associated with toxic shock syndrome Staphylococcus aureus isolates. Lancet. 1981;1:1017–21. 42. Altemeier WA, Lewis SA, Schlievert PM, et al. Staphylococcus aureus associated with toxic shock syndrome: Phage typing and toxin capability testing. Ann Intern Med. 1982;96(Part2):978–82. 43. Altemeier WA, Lewis SA, Schlievert PM, et al. Studies of the staphylococcal causation of toxic shock syndrome. Surg Gynecol Obstet. 1981;153:481–5. 44. Garbe PL, Arko RJ, Reingold AL, et al. Staphylococcus aureus isolates from patients with nonmenstrual toxic shock syndrome. J Am Med Assoc. 1985;253:2538–42. 45. Kain KC, Schulzer M, Chow AW. Clinical spectrum of nonmenstrual toxic shock syndrome (TSS): Comparison with menstrual TSS by multivariate discriminant analyses. Clin Infect Dis. 1993;16: 100–6. 46. Schlievert PM. Staphylococcal enterotoxin B and toxic shock syndrome toxin-1 are significantly associated with nonmenstrual TSS [letter] Lancet. 1986;1:1149–50. 47. Hayes PS, Graves LM, Feeley JC, et al. Production of toxic shockassociated protein(s) in Staphylococcus aureus strains isolated from 1956 through 1982. J Clin Microbiol. 1984;20:43–6. 48. Marples RR, Wieneke AA. Enterotoxins and toxic shock syndrome toxin-1 nonenteric staphylococcal disease. Epidemiol Infect. 1993;110: 477–88. 49. Barbour AG. Vaginal isolates of Staphylococcus aureus associated with toxic shock syndrome. Infect Immun. 1981:33:442–9. 50. Kreiswirth BN, Novick RP, Schlievert PM, et al. Genetic studies on staphylococcal strains from patients with toxic shock syndrome. Ann Intern Med. 1982;96(Part 2):974–7. 51. Chow AW, Gribble MJ, Bartlett KH. 
Characterization of the hemolytic activity of Staphylococcus aureus strains associated with toxic shock syndrome. J Clin Microbiol. 1983;17:524–8. 52. Chu MC, Melish ME, James JF. Tryptophan auxotypy associated with Staphylococcus aureus that produces toxic shock syndrome toxin. J Infect Dis. 1985;151:1157–8. 53. Todd JK, Franco-Buff A, Lawellin DW, et al. Phenotypic distinctiveness of Staphylococcus aureus strains associated with toxic shock syndrome Infect Immun. 1984;45:339–44. 54. Kreiswirth BN, Lofdahl S, Betley MJ, et al. The toxic shock syndrome exotoxin structural gene is not detectably transmitted by a prophage [letter]. Nature. 1983;305:709–12. 55. Rasheed JK, Arko RJ, Feeley JC, et al. Acquired ability of Staphylococcus aureus to produce toxic shock-associated protein and resulting illness in a rabbit model. Infect Immun. 1985;47:598–604. 56. Summary of Notifiable Diseases-United States, 2003. Morb Mortal Wkly Rep. 2005;52:26;30;73. 57. Osterholm MT, Forfang JC. Toxic shock syndrome in Minnesota: Results of an active-passive surveillance system. J Infect Dis. 1982;145:458–64. 58. Kehrberg MW, Latham RH, Haslam BR, et al. Risk factors for staphylococcal toxic shock syndrome. Am J Epidemiol. 1981;114: 873–9. 59. Gaventa S, Reingold AL, Hightower AW, et al. Active surveillance for toxic shock syndrome in the United States, 1986. Rev Infect Dis. 1989;11:S28–S34. 60. Centers for Disease Control. Toxic shock syndrome—United States, 1970–1982. Morb Mortal Wkly Rep. 1982;31:201–4. 61. Reingold AL. On the proposed screening definition for toxic shock syndrome by Todd et al. [letter]. Am J Epidemiol. 1985;122: 918–9. 62. Faich G, Pearson K, Fleming D, et al. Toxic shock syndrome and the vaginal contraceptive sponge. J Am Med Assoc. 1986;255: 216–8.
63. Finkelstein JW, VonEye A. Sanitary product use by white, black, and Mexican-American women. Am J Public Health. 1990;105: 491–6. 64. Gustafson TL, Swinger GL, Booth AL, et al. Survey of tampon use and toxic shock syndrome, Tennessee, 1979–1981. Am J Obstet Gynecol. 1982;143:369–74. 65. Irwin CE, Millstein SG. Emerging patterns of tampon use in the adolescent female: The impact of toxic shock syndrome. Am J Public Health. 1982;72:464–7. 66. Irwin CE, Millstein SG. Predictors of tampon use in adolescents after media coverage of toxic shock syndrome. Ann Intern Med. 1982;96(Part 2):966–8. 67. Arnow PM, Chou T, Weil D, et al. Spread of a toxic shock syndrome-associated strain of Staphylococcus aureus and measurement of antibodies to staphylococcal enterotoxin F. J Infect Dis. 1984;149: 103–7. 68. Bartlett P, Reingold AL, Graham DR, et al. Toxic shock syndrome associated with surgical wound infections. J Am Med Assoc. 1982;247: 1448–50. 69. Kreiswirth BN, Kravitz GR, Schlievert PM, et al. Nosocomial transmission of a strain of Staphylococcus aureus causing toxic shock syndrome. Ann Intern Med. 1986;105:704–7. 70. Helgerson SD, Foster LR. Toxic shock syndrome in Oregon: Epidemiologic findings. Ann Intern Med. 1982;96(Part 2): 909–11. 71. Projan SJ, Brown-Skrobot S, Schlievert PM, et al. Glycerol monolaurate inhibits the production of β-lactamase, toxic shock syndrome toxin-1, and other staphylococcal exoproteins by interfering with signal transduction. J Bacteriol. 1994;176:4204–9. 72. MacDonald KL, Osterholm MT, Hedberg CW, et al. Toxic shock syndrome. A newly recognized complication of influenza and influenza-like illness. J Am Med Assoc. 1987;257:1053–8. 73. Green SL, LaPeter KS. Evidence for postpartum toxic shock syndrome in a mother-infant pair. Am J Med. 1982;72:169–72. 74. Schlievert PM. Enhancement of host susceptibility to lethal endotoxin shock by staphylococcal pyrogenic exotoxin type C. Infect Immun. 1982;36:123–8. 75. Poindexter NJ, Schlievert PM. 
Toxic shock syndrome toxin 1-induced proliferation of lymphocytes: Comparison of the mitogenic response of human, murine, and rabbit lymphocytes. J Infect Dis. 1985;151: 65–72. 76. Poindexter NJ, Schlievert PM. Suppression of immunoglobulinscreening cells from human peripheral blood by toxic shock syndrome toxin-1. J Infect Dis. 1986;153:772–9. 77. Ikejima T, Dinarello CA, Gill DM, et al. Induction of human interleukin-1 by a product of Staphylococcus aureus associated with toxic shock syndrome. J Clin Invest. 1984;73:1312–20. 78. Parsonnet J, Hickman RK, Eardley DD, et al. Induction of human interleukin-1 by toxic shock syndrome toxin-1. J Infect Dis. 1985;151: 514–22. 79. Kushnaryov VM, MacDonald HS, Reiser R, et al. Staphylococcal toxic shock toxin specifically binds to cultured human epithelial cells and is rapidly internalized. Infect Immun. 1984;45: 566–71. 80. Schlievert PM. Alteration of immune function by staphylococcal pyrogenic exotoxin type C: Possible role in toxic shock syndrome. J Infect Dis. 1983;147:391–8. 81. Fujikawa H, Igarashi H, Usami H, et al. Clearance of endotoxin from blood of rabbits injected with staphylococcal toxic shock syndrome toxin-1. Infect Immun. 1986;52:134–7. 82. Schlievert PM. Role of superantigens in human disease. J Infect Dis. 1993;167:997–1002. 83. Reiser RF, Robbins RN, Khoe GP, et al. Purification and some physicochemical properties of toxic shock toxin. Biochemistry. 1983;22: 3907–12.
84. Arko RJ, Rasheed JK, Broome CV, et al. A rabbit model of toxic shock syndrome: Clinico-pathological features. J Infect. 1984;8: 205–11. 85. Best GK, Abney TO, Kling JM, et al. Hormonal influence on experimental infections by a toxic shock strain of Staphylococcus aureus. Infect Immun. 1986;52:331–3. 86. Best GK, Scott DF, Kling JM, et al. Enhanced susceptibility of male rabbits to infection with a toxic shock strain of Staphylococcus aureus. Infect Immun. 1984;46:727–32 87. de Azavedo JCS, Arbuthnott JP. Toxicity of staphylococcal toxic shock syndrome toxin-1 in rabbits. Infect Immun. 1984;46:314–7. 88. Parsonnet J, Gillis ZA, Richter AG, et al. A rabbit model of toxic shock syndrome that uses a constant, subcutaneous infusion of toxic shock syndrome toxin-1. Infect Immun. 1987;55:1070–6. 89. Pollack M, Weinberg WG, Hoskins WJ, et al. Toxinogenic vaginal infections due to Staphylococcus aureus in menstruating rhesus monkeys without toxic-shock syndrome. J Infect Dis. 1983;147: 963–4. 90. Scott DF, Kling JM, Kirkland JJ, et al. Characterization of Staphylococcus aureus isolates from patients with toxic shock syndrome, using polyethylene infection chambers in rabbits. Infect Immun. 1983;39:383–7. 91. Scott DF, Kling JM, Best GK. Immunological protection of rabbits infected with Staphylococcus aureus isolates from patients with toxic shock syndrome. Infect Immun. 1986;53:441–4. 92. Tierno PM, Jr., Malloy V, Matias JR, et al. Effects of toxic shock syndrome Staphylococcus aureus, endotoxin and tampons in a mouse model. Clin Invest Med. 1987;10:64–70. 93. Van Miert ASJPAM, van Duin CTM, Schotman AJH. Comparative observations of fever and associated clinical hematological and blood biochemical changes after intravenous administration of staphylococcal enterotoxins B and F (toxic shock syndrome toxin-1) in goats. Infect Immun. 1984;46:354–60. 94. Larkin SM, Williams DN, Osterholm MT, et al. 
Toxic shock syndrome: Clinical, laboratory, and pathologic findings in nine fatal cases. Ann Intern Med. 1982;96(Part 2):858–64. 95. Paris AL, Herwaldt L, Blum D, et al. Pathologic findings in twelve fatal cases of toxic shock syndrome. Ann Intern Med. 96(Part 2): 852–7. 96. Broome CV, Hayes PS, Ajello GW, et al. In vitro studies of interactions between tampons and Staphylococcus aureus. Ann Intern Med. 1982;96(Part 2):959–62. 97. Schlievert PM, Blomster DA, Kelly JA. Toxic shock syndrome Staphylococcus aureus: Effect of tampons on toxic shock syndrome toxin 1 production. Obstet Gynecol. 1984;64:666–70. 98. Kirkland JJ, Widder JS. Hydrolysis of carboxymethyl-cellulose tampon material [letter]. Lancet. 1983;1:1041–2. 99. Tierno PM, Jr., Hanna BA. In vitro amplification of toxic shock syndrome toxin-1 by intravaginal devices. Contraception. 1985;31: 185–94. 100. Tierno PM, Jr., Hanna BA, Davies MB. Growth of toxic-shock-syndrome strain of Staphylococcus aureus after enzymic degradation of Rely tampon component. Lancet. 1983;1:615–8. 101. Mills JT, Parsonnet J, Tsai YC, et al. Control of production of toxicshock-syndrome toxin-1 (TSST-1) by magnesium ion. J Infect Dis. 1985;151:1158–61. 102. Kass EH, Kendrick MI, Tsai YC, et al. Interaction of magnesium ion, oxygen tension, and temperature in the production of toxicshock-syndrome toxin-1 by Staphylococcus aureus. J Infect Dis. 1987;155:812–5. 103. Mills JT, Parsonnet J, Kass EH. Production of toxic-shock-syndrome toxin-1: Effect of magnesium ion. J Infect Dis. 1986;153: 993–4. 104. Mills JT, Parsonnet J, Tsai YC, et al. Control of production of toxicshock-syndrome toxin-1 (TSST-1) by magnesium ion. J Infect Dis. 1985;151:1158–61.
18 105. Schlievert PM, Blomster DA. Production of staphylococcal pyrogenic exotoxin type C: Influence of physical and chemical factors. J Infect Dis. 1983;147:236–42. 106. Todd JK, Todd BH, Franco-Buff A, et al. Influence of focal growth conditions on the pathogenesis of toxic shock syndrome. J Infect Dis. 1987;155:673–81. 107. Wagner G, Bohr L, Wagner, et al. Tampon-induced changes in vaginal oxygen and carbon dioxide tension. Am J Obstet Gynecol. 1984;148: 147–50. 108. Friedrich EG, Siegesmund KA. Tampon-associated vaginal ulceration., Obstet Gynecol. 1980;55:149–56. 109. Fisher CJ Jr., Horowitz BZ, Nolan SM. The clinical spectrum of toxic-shock syndrome. West J Med. 1981;135:175–82. 110. Tofte RW, Williams DN. Toxic-shock syndrome: Evidence of a broad clinical spectru. J Am Med Assoc. 1981;246:2163–7. 111. Wiesenthal AM, Ressman M, Caston SA, et al. Toxic shock syndrome. I. Clinical exclusion of other syndromes by strict and screening definitions. Am J Epidemiol. 1985;122:847–56. 112. Sperber SJ, Francis JB. Toxic shock syndrome during an influenza outbreak. J Am Med Assoc. 1987;257:1086–7. 113. CDC. Toxic-Shock Syndrome—Utah. Morb Mortal Wkly Rep. 1980;29:475–6. 114. Helgerson SD, Foster LR. Toxic Shock Syndrome in Oregon— epidemiologic finding. Ann Intern Med. 1982;96:909–11.
Suggested Reading Chesney PJ, Davis JP, Purdy WK, et al. Clinical manifestations of the toxic shock syndrome. J Am Med Assoc.1981;246:741–8. Fisher RF, Goodpasture HC, Peterie JD, et al. Toxic shock syndrome in menstruating women. Ann Intern Med.1981;94:156–63. Proceedings of the First International Symposium on Toxic Shock Syndrome. Rev Infect Dis.1989;11. Stallones RA. A review of the epidemiologic studies of toxic shock syndrome. Ann Intern Med.1982;96(Part 2):917–20.
Other Infection-Related Diseases of Public Health Import
499
Reye’s Syndrome 1. Anderson RMcD. Encephalitis in childhood: pathologic aspects. Med J Aust. 1963;1:573–575. 2. Reye RDK, Morgan G, Baral J. Encephalopathy and fatty degeneration of the viscera: a disease entity in childhood. Lancet. 1963;2: 749–752. 3. Johnson GM, Scurletis TD, Carroll NB. A study of sixteen fatal cases of encephalitis-like disease in North Carolina children. N C Med J. 1963;24:463–473. 4. Glasgow JFT, Moore R. Current concepts in Reye’s syndrome. Br J Hosp Med. 1993;50:599–604. 5. Sullivan KM, Belay ED, Durbin RE, et al. Epidemiology of Reye’s syndrome, 1991-1994. Comparison of CDC surveillance and hospital surveillance data. Neuroepidemiology. 2000;19:338–344. 6. Centers for Disease Control. Reye syndrome surveillance—United States, 1987 and 1988. MMWR. 1989;38:325–327. 7. Gauthier M, Guay J, LaCroix J, et al. Reye’s syndrome. A reappraisal of diagnosis in 49 presumptive cases. Am J Dis Child. 1989;143: 1181–1185. 8. Hardie RM, Newton LH, Bruce JC, et al. The changing clinical pattern of Reye’s syndrome. 1982–1990. Arch Dis Child. 1996;74: 400–405. 9. Glasgow JFT, Middleton B. Reye’s syndrome—insights on causation and prognosis. Arch Dis Child. 2001;85:351–353. 10. Hurwitz ES. Reye’s syndrome. Epidemiol Rev. 1989;11: 249–253. 11. Arrowsmith JB, Kennedy DL, Kuritsky JN, et al. National patterns of aspirin use and Reye syndrome reporting, United States, 1980 to 1985. Pediatrics. 1987;79:858–863. 12. Orlowski JP, Campbell P, Goldstein S. Reye’s syndrome: a case control study of medication use and associated viruses in Australia. Cleve Clin J Med. 1990;57:323–329. 13. Visentin M, Salmona M, Tacconi MT. Reye’s and Reye-like syndromes, drug-related diseases? (causative agents, etiology, pathogenesis, and therapeutic approaches). Drug Metab Rev. 1995;27: 517–539. 14. Orlowski FP, Hanhan UA, Fiallos, et al. Is aspirin a cause of Reye’s syndrome? A case against. Drug Saf. 2002;25:225–231.
This page intentionally left blank
III Environmental Health
Copyright © 2008 by The McGraw-Hill Companies, Inc.
The Status of Environmental Health
19
Arthur L. Frank
Preventive medicine and public health advances continue to contribute to the well-being of persons, and central to modern changes are environmental issues that significantly shape the world. While great strides have been made over the past few decades, there are still areas of pressing concern and long-term danger. While still not entirely clear in all respects, global climate change seems more and more real, with significant consequences if not modified. Recent policies in the United States have raised concerns about environmental degradation in such areas as water purity, clean air, the health of forests, and planning for future growth in population. It is becoming increasingly appreciated that sufficient clean water may not be as readily available in the future as in the past, with significant public impact. Superfund site cleanups have slowed dramatically. Endangered species seem more endangered. On a global basis there have been pockets of progress. Individually the steps may seem limited, but over time they add up to significant protection of human health. Increasingly, countries are banning the use of asbestos, public transport (such as the bus system in India's capital) runs on cleaner fuels, and cigarette smoking in public settings is being reduced through legislation. Far too much petroleum is still consumed, with decreasing availability in sight and with insufficient reductions in use or development of alternative sources. Occupational health problems continue to contribute to mankind's difficulties. Little has been done to reduce child labor around the world, and given the nature of much of their work, the hazards these children face are increasing. The use of children in the sex trade or as soldiers is to be particularly condemned. Too many children work rather than go to school. Some developed countries ban the use of certain dangerous chemicals, but not their production and export for use in societies where safety and health standards leave many at constant risk.
The basic economics of work, with transnational movement of many jobs, contributes to the increasing gap between rich and poor, and to future lower standards of living in many places, without offsetting gains in less-developed countries. As public health professionals, we must continue to fight for the well-being of all, and environmental issues can contribute to our collective betterment. This section of this famous book continues to address both traditional and cutting-edge issues. It has grown, as
public health has changed, from a small part of early editions into a significant part of the overall text. The topics covered essentially put a whole environmental and occupational text into the hands of readers, embedded in all the other wonderful material to be found in this venerable volume. Over time there has been an increase in the number of journals devoted to occupational and environmental health issues. Given the often contentious and litigious nature of issues in this field, it has become ever more important that potential conflicts of interest be noted. Too often this does not happen. Regulations, especially in the United States, are now often effected by those who are to be regulated. This development, combined with the decreasing importance of labor unions with their declining membership, may leave many workers vulnerable to workplace hazards in the years ahead. With more and more jobs moving out of the United States there is a risk that workplaces in the future, compared to the recent past, will be globally less protected and safe. Hindering the field of environmental health as well is the continuing shortage of sufficient numbers of appropriately trained health professionals, including physicians, nurses, industrial hygienists, and others. Educational opportunities are shrinking, as are funding and support of such activities. Physicians, as a rule, still receive very little training in occupational medicine. The basic methodology in terms of assessing workplace and environmental health hazards and their effects on people has not changed since the seminal work of Ramazzini more than 300 years ago. As is true for most medical assessments, obtaining a proper history is most important. The essential parts of such a history are to be found in Table 19-1, which provides a useful format for obtaining the necessary information in most, if not all, settings.
As one looks ahead to the future, there will continue to be problems at the heart of environmental and occupational health. Old issues will remain—child labor, agricultural work exposures, ergonomic problems, the use of tobacco—but there will be an added emphasis on newer issues such as genetic testing, use of mechanistic models to predict human disease, and the shift of certain diseases, like lung cancer, into societies poorly equipped to handle them. By making use of the materials in this section, it can be hoped that some lives will be made better.
TABLE 19-1. ENVIRONMENTAL AND OCCUPATIONAL EXPOSURE HISTORY

Current work: ____________________
How long at this job? ____________________
Description of work: ____________________
Any contact with dust, fumes, chemicals, radiation, noise, etc.? Yes ___ No ___
If yes, describe: ____________________
Describe any adverse effects noted: ____________________
Are any fellow workers ill? Yes ___ No ___
If yes, describe: ____________________
Do you use any protective equipment at work? Yes ___ No ___

Previous job history (From / To / Exposures):
First regular job: _______ / _______ / _______
Next job: _______ / _______ / _______
Next job: _______ / _______ / _______
Vacation or temporary job: _______ / _______ / _______
Vacation or temporary job: _______ / _______ / _______

Military service or related exposures: ____________________
Have you lived near an industrial facility, or has a family member worked in a setting where hazardous materials have been brought home? Yes ___ No ___
If yes, describe: ____________________
Hobby history: ____________________
Smoking history: ____________________
Alcohol and drug use history: ____________________
Comments: ____________________
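For readers who capture such histories electronically, the fields of Table 19-1 map naturally onto a simple record structure. The Python sketch below is one possible encoding, not part of the original text; all field names and the sample entry are illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Job:
    """One row of the previous-job-history block of Table 19-1."""
    title: str
    start: str                    # e.g., "1985"
    end: str                      # e.g., "1993" or "present"
    exposures: List[str] = field(default_factory=list)

@dataclass
class ExposureHistory:
    """Fields mirror Table 19-1 (environmental and occupational exposure history)."""
    current_work: str
    years_at_job: float
    work_description: str
    hazard_contact: bool          # dust, fumes, chemicals, radiation, noise, etc.
    hazard_details: Optional[str] = None
    adverse_effects: Optional[str] = None
    coworkers_ill: bool = False
    uses_protective_equipment: bool = False
    previous_jobs: List[Job] = field(default_factory=list)
    military_exposures: Optional[str] = None
    lived_near_industry_or_takehome: bool = False
    hobbies: Optional[str] = None
    smoking_history: Optional[str] = None
    alcohol_drug_history: Optional[str] = None

# Hypothetical example entry, for illustration only
history = ExposureHistory(
    current_work="Brake repair",
    years_at_job=12,
    work_description="Grinding and replacing brake linings",
    hazard_contact=True,
    hazard_details="Asbestos-containing dust",
    previous_jobs=[Job("Shipyard laborer", "1985", "1993",
                       ["asbestos", "welding fumes"])],
)
```

A structure like this makes the "if yes, describe" branches of the paper form explicit optional fields rather than free-text margins.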
20
Toxicology
Principles of Toxicology
Michael Gochfeld
Toxicology is the study of the harmful effects of chemicals, including drugs, on living organisms. Many books (see General References), and particularly the Toxicological Profiles published by the Agency for Toxic Substances and Disease Registry, cover in detail the toxicology of individual substances. Freely available search engines allow access to innumerable Web pages with toxicological data, courses, and comments, the reliability of which requires careful assessment. This chapter focuses on generic and conceptual issues relating to properties of toxic substances in general, how they enter and move through the body, and the kinds of pathophysiologic effects that they exert on various targets within the body that ultimately lead to health effects. The rapid advances being made in molecular toxicology are beyond the scope of this chapter. Historians1 trace the modern history of toxicology back to Paracelsus (1493–1541), who recognized that a substance that was physiologically ineffective at very low dose might be toxic at high dose and therapeutic at intermediate dose. However, Gallo2 identified human use of natural venoms in antiquity, with a number described in the famed Ebers Papyrus (ca 1500 BC). In the Middle Ages, poisoning became a political tool, and toxicological understanding therefore became a necessity for both perpetrator and victim. In the past century, toxicology developed under the combined impetus of a burgeoning chemical industry, the quest for therapeutic agents, and concern over adulterated foods. In 1906, the United States enacted the Food and Drug Act, perhaps stimulated more by muckraking writings such as Upton Sinclair's The Jungle3 than by toxicologists.2 Although the general principles have not changed since the previous edition (1998), toxicology is passing through a genetic revolution, with a heavy emphasis on understanding toxic mechanisms at the presumably most basic level, the gene and its expression.
Still in its infancy, and driven at first more by commercial ventures than scientific questions,4,5 toxicogenomics and proteomics offer great promise, but can be dealt with only slightly in this chapter. Likewise, the widespread importance of oncogenes, growth factors, cell cycling, cytokines, apoptosis as well as gene regulation, transcription factors, messenger cascades, have stimulated extensive research,6 but can be mentioned only briefly. Toxic chemicals (a) enter and move through the environmental media (air, water, soil, food) at various concentrations until they come into contact with a target individual; (b) are taken up by inhalation, ingestion, through the skin, or by injection (exposure); (c) are absorbed into the bloodstream (uptake) reaching a certain concentration (blood level); (d) undergo complex toxicokinetics involving metabolism, conjugation, storage, and excretion as well as delivery to target organs (dose to target); and (e) affect some molecular, biochemical, cellular, or physiological structure or function to produce their adverse effect.
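The sequence (a)-(e) above can be caricatured with the simplest toxicokinetic abstraction, a one-compartment model with first-order absorption and elimination. This is an illustrative sketch only, not a method from the text; the dose, volume of distribution, and rate constants below are invented, not chemical-specific.

```python
import math

def blood_concentration(dose_mg, vd_l, ka_per_h, ke_per_h, t_h):
    """One-compartment model with first-order absorption (ka) and
    elimination (ke): C(t) = (D/Vd) * ka/(ka - ke) * (e^(-ke*t) - e^(-ka*t)).
    All parameter values used with this function are illustrative."""
    if ka_per_h == ke_per_h:
        raise ValueError("ka must differ from ke for this closed form")
    coeff = (dose_mg / vd_l) * ka_per_h / (ka_per_h - ke_per_h)
    return coeff * (math.exp(-ke_per_h * t_h) - math.exp(-ka_per_h * t_h))

# Hypothetical 100-mg oral exposure, 40-L volume of distribution,
# sampled hourly for a day: concentration rises, peaks, then declines
curve = [blood_concentration(100, 40, ka_per_h=1.0, ke_per_h=0.1, t_h=t)
         for t in range(0, 25)]
t_peak = curve.index(max(curve))
```

Even this toy model reproduces the qualitative shape behind "dose to target": uptake dominates early, excretion dominates late, and the peak blood level depends on both rate constants, not on the administered dose alone.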
Internal distribution and the dose reaching a target organ, tissue, or cell are constantly modified by binding to carrier molecules, metabolic activation (or inactivation), storage in various tissues (e.g., polychlorinated biphenyls [PCBs] in fat, lead in bone), and by excretion.7 The relationships among these processes are demonstrated in Fig. 20-1. This chapter covers the classification of toxic chemicals, the manner in which exposure occurs and how it can be measured, the absorption and distribution of chemicals within the body, and finally the kinds of toxic effects that are produced. Emerging areas of toxicology are briefly considered.

BRANCHES OF TOXICOLOGY
Toxicology is a broad discipline embracing such traditionally clinical areas as pathology, pharmacology, and clinical toxicology on the one hand, and molecular biology, biochemistry, and physiology on the other. Industrial toxicology, ecotoxicology, environmental toxicology, forensic, analytic, and regulatory toxicology are also prominent areas. Historically, toxicology was linked with pharmacology and focused on the toxic effects of pharmaceuticals. This remains a fundamental part of drug development and assessment. Industrial toxicology emerged to investigate the toxic effects of raw materials, intermediates, products, and wastes produced by commerce. Toxicology has subdisciplines linked to behavior, nutrition, biochemistry (including proteomics), and genetics (including toxicogenomics), which have opened new research horizons. Clinical toxicology focuses on the diagnosis and treatment of poisonings.8,9 Toxicology is concerned with both lethal and sublethal effects. In the 1950s and 1960s lethal effects were the major emphasis, and many studies were aimed at identifying the lethal dose 50 (LD50) for a chemical, the dose that killed 50% of the test animals. Today, experimental toxicologists employ in vivo and, increasingly, in vitro techniques to study the effects of one or more chemicals or other stressors on biological functions, as well as survival. Efforts to limit animal research have stimulated the search for alternatives to animal testing,10 although in vitro testing has limitations.11 Two decades ago, molecular toxicology was a new frontier with an emphasis on the discovery of biomarkers.2,12 Increasingly, toxicologists focused on the cellular, biochemical, and molecular interactions and changes wrought by foreign chemicals, thereby elucidating their mechanisms of action. In the past decade attention has shifted to genomic and proteomic investigations, focusing on the effects of chemicals on gene expression and protein synthesis.
This research also identifies new kinds of biomarkers such as tumor-specific antigens.13

Figure 20-1. A multicompartment illustration showing movement of contaminants from environmental media, uptake through lungs, skin, and gut, and distribution in blood to excretory, storage, or target organs. (Source: Courtesy EOHSI.)

Phenomenological measurements of enzyme activity have been supplemented by studies of enzyme structure and gene-protein
and protein-protein interactions. Receptor biology has emerged as a subdiscipline, stimulated in part by the interest in endocrine-active xenobiotics, which may be active at picomolar (10⁻¹²) concentrations.14 Many new and highly specialized journals, some mainly in electronic form, have appeared in the past decade. No attempt is made here to cover the rapidly evolving field of molecular toxicology. Toxicological data are a major basis for environmental risk assessments, which in turn are increasingly used by regulatory agencies to assess chemical hazards, prioritize hazardous waste site cleanups, establish governmental policies, and set levels of allowable exposure. Such risk assessments produce quantitative or qualitative estimates of the magnitude of the risk of some adverse endpoint associated with a particular dose or exposure of a target population to a particular chemical, physical, or biological agent. Ecotoxicology is a major subdiscipline, and data on exposure, distribution, and effects in ecosystems and organisms feed directly into ecological risk assessments (see Chap. 42). The publication of Silent Spring,15 by Rachel Carson in 1962, is often hailed as a landmark, not only for ecotoxicology, but for human environmental health concerns in general. Carson published on the overuse and misuse of pesticides despite intense pressure from the agrochemical and agriculture industries. She emphasized the inevitable escalation in toxicity of new pesticides as insects developed resistance to earlier-generation insecticides. Although not usually recognized as a subdiscipline, military toxicology has had a long history. The development of chemical warfare agents such as mustard gas16 was a driving force in the early twentieth century, and nerve gas research played a major role in the development of organophosphates, later used as pesticides. The development of antidotes and preventatives was likewise necessary.
Although international treaties in the 1970s brought a halt to much of the chemical weapon development, the new millennium has brought increased emphasis on terrorism and preparedness for biological, chemical, and radiation hazards. This includes research into methods for early detection of the release of hazardous agents,17 for monitoring humans for exposure or effects, and for the deployment of effective prevention, diagnosis, and treatment.18 Radiation toxicology lies largely in the domain of health physics and is outside the scope of this chapter.

EVOLUTIONARY BASIS OF TOXICOLOGY
Although the teaching of evolution in schools periodically comes under attack, the late Ernst Mayr emphasized that the basic principles
of organic evolution advanced by Darwin have withstood all challenges.19 Although the mechanisms by which evolutionary changes occur are still being elucidated20 through the acquisition of new data in field and laboratory, the basic phenomena of heredity, mutation, and selection underlie the relatedness among all forms of life. These processes are augmented by random events, migration, interbreeding, and even horizontal transfer of genes across species boundaries.21 Evolution remains the basis of our understanding of toxicology,22 and of the relationships between animal studies and human effects.23 Evolutionary relatedness underlies the principles of extrapolating from animal studies to human exposures, and toxicologists take the underlying evolutionary principles for granted, while textbooks seldom specifically reference them. Since the 1960s, when Fitch and Margoliash demonstrated the conservatism among amino acid sequences of some proteins and applied the concept of genetic distance among species,24 toxicological research has advanced on many fronts. Humans share with ape ancestors approximately 99% of the genome. Yet within any given species, and probably more so in humans than any other, there are variations in gene sequences that alter protein structure in subtle ways that in turn may affect nutritional requirements, physiological tolerances, mating prowess, or fecundity. Under different environmental regimes, one genotype may be favored over another, and over long periods of time increasing trends toward wetness or aridity may select for increased tolerance of the appropriate condition (directional selection). Genetic variation within a species allows for increased adaptiveness to changing environments or to environmental stressors.
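The genetic-distance idea of Fitch and Margoliash can be illustrated crudely: percent identity between two aligned protein fragments is the simplest possible measure of sequence conservation. The sketch below uses short made-up fragments, ignores gaps and substitution scoring, and is an editorial illustration rather than anything from the cited work.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of identical residues between two pre-aligned,
    equal-length amino acid sequences (gaps and substitution
    matrices are ignored in this deliberately crude measure)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical 10-residue fragments from two species; 9 of 10 match
species_a = "GDVEKGKKIF"
species_b = "GDVEKGKKIV"
identity = percent_identity(species_a, species_b)
```

Real genetic-distance work uses full alignments and evolutionary substitution models, but the underlying comparison is the same: the fewer the differences, the closer the species, and the more defensible the cross-species extrapolation.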
TYPES OF STRESSORS
The stressors that potentially harm the body can be broadly classified as physical (noise, temperature, radiation), biological (infectious, immunologic, allergenic), chemical, mechanical (ergonomic), and psychosocial. Toxicologists focus mainly on chemicals, both synthetic and those of natural biological origin, and on physical agents such as radiation. There are interactions among classes of stressors. Thus radiation, infection, or psychological stress may modify the effects of toxic chemicals,25 and vice versa, and there is increasing attention to the effects of two or more chemicals administered together where synergistic, independent, or antagonistic effects may occur (see below). The following definitions are important. Toxicity is the intrinsic ability of a substance to harm living things. A xenobiotic is any substance foreign to the body, including all synthetic chemicals as well as many natural substances. Susceptibility refers to the ability of a living thing to be harmed by an agent. It is influenced by genotype, by age and gender, and by environmental factors such as nutrition, prior exposure, and underlying state of health (for example, immune status). Bioavailability is the ability of a substance that enters the body to be liberated from its environmental matrix (air, water, soil, food), while absorptive capacity (of skin, lungs, or gastrointestinal [GI] tract) influences how much bioavailable material can enter the circulation. Biotransformation, or intermediary metabolism, is the biochemical change(s) a chemical undergoes once it reaches the cells of the body. This may lessen its toxicity (detoxification) or enhance it (activation) and may facilitate excretion. Mechanism refers to the way in which the toxic substance acts on a cellular or subcellular level to disrupt the living organism. Threshold is the lowest dose of a chemical that has a detectable effect.
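Dose-response concepts such as the LD50 and the threshold can be made concrete with a toy calculation. The sketch below estimates an LD50 by linear interpolation on a log-dose scale, a crude stand-in for the probit or logistic regression a real study would use; the dose-mortality data are invented for illustration.

```python
import math

def ld50_log_interpolation(doses, mortality_fractions):
    """Estimate the LD50 by linear interpolation of mortality against
    log10(dose) between the two doses that bracket 50% mortality.
    A deliberately simple stand-in for probit/logistic regression."""
    pairs = list(zip(doses, mortality_fractions))
    for (d_lo, m_lo), (d_hi, m_hi) in zip(pairs, pairs[1:]):
        if m_lo <= 0.5 <= m_hi:
            frac = (0.5 - m_lo) / (m_hi - m_lo)
            log_ld50 = math.log10(d_lo) + frac * (math.log10(d_hi) - math.log10(d_lo))
            return 10 ** log_ld50
    raise ValueError("mortality data do not bracket 50%")

# Invented dose-mortality data (mg/kg), not from any real study
doses = [10, 32, 100, 320, 1000]
mortality = [0.0, 0.1, 0.4, 0.8, 1.0]
ld50 = ld50_log_interpolation(doses, mortality)
```

The log-dose scale matters: toxic responses typically spread over orders of magnitude of dose, which is why dose-response curves are plotted, and interpolated, logarithmically.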
CLASSIFICATION OR TAXONOMY OF TOXIC AGENTS
One can organize knowledge in toxicology in terms of chemical agents or types of effect. Chemicals can be classified based on their structure, source, economic role, mechanism of action, or on their target organ. The lists below are not intended to be exhaustive.
Classification by Structure

Organic Chemicals
Aromatics (e.g., phenols, benzene derivatives)
Aliphatics (e.g., ethanes, ethenes)
Polyaromatic hydrocarbons (PAHs)
Chlorinated polyaromatics (e.g., dioxins, furans, PCBs)
Chlorinated hydrocarbons (chlorinated alkanes and alkenes)
Amines and nitriles
Ethers, ketones, aldehydes, alcohols, organic acids

Inorganic Chemicals
Acids and bases
Anions and cations
Heavy metals
Metalloids (e.g., selenium, arsenic)
Salts

Classification by Source

Natural or Biological Compounds, "Toxins"
Plant
Bacterial
Invertebrate
Vertebrate

Synthetic Chemicals
Industrial raw material, by-product, waste, or product
Pharmaceutical agent

Environmental toxicology and risk assessment have focused mainly on synthetic chemicals, yet natural toxic compounds, called toxins or venoms, are widespread and include some of the most toxic agents known. Invertebrate toxins, mainly of marine origin, occasionally cause epidemic outbreaks of foodborne disease and have proven valuable research tools because of their highly specific modes of action. These include brevetoxin, secreted by the dinoflagellates that cause red or brown tides along the southeastern United States coastline and many other warm ocean areas.27 Many of these toxins have very complex structures, for example, the chain of 13 heterocyclic 5–7-membered rings that makes up the backbone of ciguatoxin. These plant and animal toxins have evolved specifically to damage either predators or prey. A review of their toxicology is beyond the scope of this chapter.26 Many pharmacologic agents are extracted directly from plants or microorganisms, or are patterned on natural compounds. Plants contain many insecticidal or deterrent compounds, and Bruce Ames has argued that there is no reason to be concerned about synthetic pesticide residues on food, since foods are loaded with natural pesticides at much higher concentrations.28 Unfortunately, this argument ignores the evolutionary history through which organisms would have adapted to chemicals naturally encountered in the diet, while exposure to synthetic pesticides has occurred for only two or three human generations.

Many plants and animals secrete chemicals designed to keep them from being eaten.26 The classic example is the Monarch butterfly, the larvae of which develop on milkweeds, which contain alkaloids that the caterpillar incorporates into its own tissues. During metamorphosis the alkaloids are retained in the adult butterfly, and birds that eat a Monarch become sickened and quickly learn to avoid it and similarly colored yellow-and-black caterpillars or black-and-orange butterflies. Beetles may squirt hot cyanide compounds to deter predators. Plants that have been partially eaten by herbivores may load increased levels of distasteful alkaloid compounds into newly regenerated leaves. Similarly, many fungi secrete chemicals that inhibit the growth of bacterial competitors. A wide variety of these naturally occurring bioactive substances or "toxins" have been adapted into some of our most familiar pharmaceuticals, for example, antibiotics.

Classification by Use

Very often in clinical toxicology, the first thing one learns about a chemical exposure is the type of compound. Thus a would-be suicide patient may be brought in with "an overdose of sleeping pills," a worker may have been overcome while "using a solvent," or a homeowner may report that "some pesticide spray" made him or her ill. Examples of common use classes of materials that may have toxic effects include:

Solvents
Pharmaceutical agents
Paints, dyes, coatings
Detergents, cleansers
Pesticides
Acids, bases

Pharmaceuticals and Abused Substances. These are grouped together because of the tendency for very high concentrations of bioactive agents to be deliberately introduced into the body. In fact, many abused substances that were originally developed as pharmaceuticals (e.g., amphetamines, barbiturates, and narcotics) have profound toxic effects, quite apart from their addictive properties. By whatever route, and whether legal or illicit, these chemicals are used because of their high level of bioactivity. Even when the dosage used is in the therapeutic range, there may be undesired side effects, which are manifestations of toxicity. These may occur in most users (e.g., soporific effects of diphenhydramine) or rarely (anaphylaxis from penicillin). Certainly the most widespread toxic exposures involve the chronic inhalation of tobacco smoke by smokers and those around them, and the chronic consumption of ethanol.

Classification by Mechanism of Action
Much exciting research in modern toxicology focuses on the mechanism by which a bioactive substance interacts with and alters its targets to produce its unwanted effects, for example:
Enzyme inhibition
Enzyme induction
Formation of free radicals/active oxygen species
Metabolic poisons
Redox reactions: oxidants and antioxidants
Macromolecular binding (e.g., DNA, protein)
Interference with signal transduction
Cell membrane disruption, including lipid peroxidation
Hormone activity (hormone synthesis, receptor regulation)
Competitive binding of active sites or receptors
Immune effects
Irritants
Classification by Target Organ
Xenobiotics can act on any organ system in the body. The effects on these target organs are discussed in other chapters in this section. Standard textbooks of toxicology6 are organized by organ system, and several of the general readings deal with organ systems. The next chapter deals specifically with neurobehavioral toxicology.
Neurotoxin
Pulmonary toxin
Hematotoxin
Nephrotoxin
Metabolic toxin
Endocrine toxin
Hepatotoxin
Cardiotoxin
Dermatotoxin
Reproductive toxin
Genotoxin (including mutagens)
Immunotoxin
Carcinogen (including initiators and promoters)
Teratogen
508
Environmental Health
The liver is of particular importance in toxicology. Ingested substances absorbed into the bloodstream go first to the liver via the portal vein on “first pass.” In the liver they may undergo metabolism, which may either detoxify or activate them. The liver may conjugate substances to facilitate their excretion in the urine or may secrete some substances into the bile. Liver cells are particularly vulnerable to toxins, and toxic hepatitis, manifested by abnormalities in liver function tests, may present as jaundice or as fulminating, fatal liver failure.
CHEMICAL STRUCTURE AND TOXICOLOGY
Several chemical principles play important roles in toxicology. They influence how the chemical behaves in its environmental matrix, how it is absorbed into, metabolized by, distributed through, and excreted from the body, and how it exerts its toxic effect.
Chemical Species
There are different forms of many chemicals; a chemical variant of a metal is called a “species.” This may refer to organic versus inorganic state or to valence state; thus trivalent and hexavalent chromium are species of chromium,29 and because Cr(III) is an essential nutrient while Cr(VI) is a potent lung carcinogen,30 the difficulty in reliably analyzing the concentrations of Cr(III) and Cr(VI) in an environmental sample impedes our ability to protect potentially exposed people. Toxicologists have demonstrated that slight modifications in a chemical may drastically alter its effect.31 This is particularly true for certain metals, which, when in an organic complex, may have drastically different effects than in their elemental or ionic form. For example, methylmercury and organotin compounds are both more toxic than inorganic mercury or tin, but the reverse is true for arsenic, where naturally occurring organic species have lower toxicity than the inorganic arsenites and arsenates. These in turn can be methylated in the body to less toxic metabolites, a capacity that varies among individuals.32 The organic mercury and tin species have been incorporated in biocides such as fungicidal seed dressings and in marine paints to thwart the growth of barnacles. However, both methylmercury and alkyltin compounds are potent neurotoxins33,34 and have caused widespread ecotoxic effects in the marine environment.

Isomers and Congeners
Two chemical compounds that have the same chemical formula but differ in structure are called isomers. Thus butane, a four-carbon chain, can appear as either normal (linear) butane or branched isobutane. Congeners have the same basic structure but different numbers of atoms; for instance, dichlorophenol and trichlorophenol are congeners, while 2,4-dichlorophenol and 2,5-dichlorophenol are isomers. The behavior in the body and the toxicity may vary greatly among isomers and congeners. Thus different chlorinated dibenzodioxins vary by orders of magnitude in their toxicity. Each compound can be assigned a toxic potency (toxic equivalency factor, or TEF) relative to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD),35 and these TEFs are considered additive in causing cancer.36

Structure-Activity Relationships
The converse of the variation in toxicity between isomers and congeners is that chemicals that are structurally similar may have similar types of toxic effects on the body, although the effects may be modulated in intensity by adjacent atoms. This forms the basis for much pharmaceutical research, the quest for agents that have a desired effect without undesired side effects. Understanding structure-activity relationships (SARs) is important in toxicology, since one can often infer the effects of a chemical by knowing the effects of related compounds. Thus many short-chain chlorinated hydrocarbons have a common general anesthetic effect, even though their potency varies with their structure. Similarly, many metal ions are nephrotoxic to the proximal kidney tubule,37 and many hallucinogenic compounds share a common active group. SARs have proven predictive of carcinogenicity identified by long-term animal bioassays.38 Quantitative structure-activity relationships (QSARs) play an important role, particularly in drug development39 and in predicting toxicity.40

CHEMICALS IN THE ENVIRONMENT
Environmental toxicology is generally concerned with chemicals in the air, water, soil, and food we encounter in our home, community, and workplace environments. Our behavior greatly influences the microenvironments we frequent, the exposures we experience, and the ways that chemicals enter our bodies via ingestion, inhalation, percutaneous absorption, and even injection. Table 20-1 indicates the factors that influence the uptake and toxicity of a material and the susceptibility of the host. Uptake varies by route of exposure and bioavailability. A given chemical may be readily absorbed from the lungs but may have negligible uptake through the skin or intestinal tract.
Chemicals in Air
Air pollution remains a major public health concern, and ozone is a ubiquitous irritant formed in the atmosphere. Probably the main precursors of excess ozone formation are oxides of nitrogen (the NOx family) emitted in automobile exhaust. Another substance of concern is sulfur dioxide. Both ozone and sulfur dioxide are irritating to the respiratory system. Recent research has focused attention on particulates less than 2.5 µm in aerodynamic diameter (the PM2.5 fraction), which are associated with increased mortality, particularly for people who have diabetes and ischemic heart disease.41 Attention has focused on sophisticated analysis of pulse rate,42 showing that particulate exposure decreases heart rate variability. Recent work also points to an association of PM2.5 directly with atherogenesis.43 Although much of the research has been driven by outdoor air pollution, it is actually the indoor exposures that are often of greater magnitude and concern.44 In the aftermath of the 1970s fuel crisis, newly constructed, “energy-efficient” office buildings tended to be relatively airtight, and fuel conservation programs greatly reduced the amount of fresh air (makeup air) added to air conditioning. This contributed to many reports of “sick-building syndrome.” Many homes have unsuspected air pollutants that are hazardous to health. Radon, a decay product of naturally occurring uranium in soil, occurs in gaseous form and emits alpha particles that cause lung cancer. Although alpha particles can penetrate only a very short distance (less than a millimeter), inhalation of the gas brings it into direct contact with lung tissue. Radon occurs in many parts of the United States and may reach relatively high concentrations in certain homes. A more common but less dreaded pollutant is nitrogen dioxide, which is formed by combustion in a gas cooking range. Elevated levels of this irritant can be measured in a kitchen while cooking is in progress. Children living in homes with gas ranges may experience an excess of respiratory symptoms.45 Recent attention has focused on molds, which release allergenic mycelia and spores as well as toxic secretions that have been implicated in causing a variety of symptoms.46 Many industrial processes emit vapors, smokes, or mists, which can be inhaled. Hence air is the major route of exposure for industrial workers. Most of the standards regarding industrial exposure refer to airborne concentrations above which inhalation could lead to adverse health effects.47 See Chap. 46 on occupational exposures.

TABLE 20-1. FACTORS THAT MODIFY TOXICITY

Host: species, strain, genotype; age; sex; infectious/immunologic history; behavioral stress history; activity level/fitness; nutritional status; toxicant exposure history
Environment: temperature; light (cycle, intensity, spectral properties); air (flow rate, ion content, humidity, particles)
Toxicant: matrix/bioavailability; physical form; chemical species; solvents/vehicles
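Airborne concentration standards like those mentioned above are reported either as parts per million (ppm) or as mass per volume (µg/m³). A minimal sketch of the standard gas-phase conversion follows; the helper name and the example concentration are illustrative, not from this chapter.

```python
# Convert a gas-phase concentration from ppm to µg/m^3, assuming
# 25 °C and 1 atm, where one mole of ideal gas occupies 24.45 L.
# Illustrative helper, not a regulatory formula reference.

MOLAR_VOLUME_L = 24.45  # L/mol at 25 °C, 1 atm

def ppm_to_ug_m3(ppm, molecular_weight):
    """ppm x (MW / molar volume) x 1000 gives µg per cubic meter."""
    return ppm * molecular_weight * 1000.0 / MOLAR_VOLUME_L

# NO2 has a molecular weight of 46.01; 0.05 ppm is roughly 94 µg/m^3
print(round(ppm_to_ug_m3(0.05, 46.01), 1))
```

The same formula run in reverse (multiply by the molar volume, divide by the molecular weight) converts µg/m³ back to ppm.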
Chemicals in Water
Both surface water and groundwater are used as community water sources. Many industrial and municipal wastes, both treated and untreated, are discharged directly to surface waters, and discharge permits allow certain quantities of toxic chemicals to be piped into streams, lakes, rivers, canals, and the ocean. Groundwater contamination occurs as contaminants leach downward through soil. Lead arsenate insecticides and organomercurial fungicides, once mainstays of agricultural pest control, have been banned or curtailed, but these metals have gradually leached through the soil, eventually reaching groundwater decades after they were applied. Solubility is the primary factor determining the behavior of chemicals in water. Many metal salts dissolve readily, while most of the larger organic molecules do not. Public drinking water sources are regulated with regard to several pollutants and must be tested for a suite of contaminants on a regular, usually quarterly, basis. Private wells are not systematically tested. Although ingestion is the major pathway for water contaminants, volatile compounds in water escape during cooking and showering, offering a significant potential for inhalation exposure.
Chemicals in Soil
Soils have complex physical structures and compositions that vary greatly. The physical texture and the water and organic content determine how chemicals will move through soil and influence their bioavailability. Some soils are naturally rich in toxic elements, such as the nickel-rich soils of New Caledonia or serpentine soils, to which unique groups of plant species have become adapted. However, human activities have resulted in soil contamination via fallout of air pollutants, discharge of liquid industrial or agricultural waste, or dumping of solid waste. Once a chemical is deposited on soil, it may remain in place, or it may be washed away by water flowing over the surface (runoff) or carried down through the soil by percolation (leaching). Some chemicals are readily leached from the upper layers of soil; others may undergo biodegradation or photodegradation with the aid of microorganisms or sunlight. Some chemicals are persistent; for example, the chlorinated hydrocarbon pesticides and PCBs tend to remain unchanged in the soil for many years. Soil particles that form fine dusts can become airborne and can be inhaled. Particles less than 5 µm in diameter are likely to reach the alveoli. Other particles may settle on food or water and be ingested. People may also ingest particles of soil that get on their fingers or under their nails or that are on the outer surface of vegetables. The ingestion of contaminated soil by toddlers is a major route of exposure and is often the major determining pathway in a risk assessment.
Chemicals in Food
Food may contain toxic chemicals from a variety of sources. Although regulations governing pesticide application (for example, the minimum number of days between spraying and harvest) are designed to protect workers and minimize residual pesticides in food, many vegetables still contain some pesticide residues. Some residues are surface sprays that adhere to plant tissue, while others are systemic substances taken up through the roots and incorporated into the tissue. Hormones and antibiotics used to promote animal growth can also be detected in certain foods, as can food additives used to prolong shelf life or enhance flavor, texture, or color. Some of these
compounds have been demonstrated to have toxic effects in long-term low-level exposure experiments (see Chap. 33). In the process of cooking, particularly grilling, meats may form carcinogenic heterocyclic amines.48 Many plants contain chemicals that have physiological as well as nutritional functions. Some have hormonal effects, whereas others may be carcinogenic or anticarcinogenic, such as the isoflavones in soybeans49 or the antioxidant phenolic constituents of green tea. The content of these plant chemicals may vary with the variety, geography, seasonality, and soil composition, or even with whether the plant has been attacked by insect pests.
Biological Amplification in the Food Chain. Among the phenomena that influence the movements of chemicals in the environment is the process of biological amplification. This phenomenon has been demonstrated in a variety of ecosystems and has implications for human exposure. Most examples of bioamplification concern lipophilic chemicals such as PAHs or organometals such as methylmercury. These substances may be present in water or soil at very low levels. When taken up by planktonic organisms, they tend to concentrate in the tissues of these organisms, and only a small fraction of the uptake is excreted. At each step up the food chain (what ecologists call trophic levels), the organism retains more than it excretes and incorporates an ever-increasing amount of contaminant in its fat. If the bioconcentration factor (BCF) were 10 for each level, then plankton swimming in water with a 1 ppb concentration would accumulate 10 ppb, the fish larvae eating plankton would reach 100 ppb, small fish eating the larval fish 1000 ppb (1 ppm), and large fish eating the small fish 10 ppm. This example leaves the hapless human fish-eater consuming a large dose of the amplified toxic material. A high lipid-water partition coefficient enhances bioamplification.
However, some nonlipophilic materials may also undergo bioamplification if they concentrate in some other tissue (e.g., the thyroid) or bind to macromolecules.
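The tenfold-per-level example above can be sketched as a short calculation; the function name is hypothetical and the constant BCF of 10 is the chapter's illustrative assumption, not a measured value.

```python
# Bioamplification sketch: with a constant bioconcentration factor (BCF)
# of 10 at each trophic level, a 1 ppb water concentration grows tenfold
# per step up the food chain. Returned values are in ppb.

def food_chain_concentrations(water_ppb, bcf, levels):
    """Concentration at each trophic level above the water column."""
    conc, out = water_ppb, []
    for _ in range(levels):
        conc *= bcf
        out.append(conc)
    return out

# plankton, fish larvae, small fish, large fish (10 000 ppb = 10 ppm)
print(food_chain_concentrations(1.0, 10, 4))  # [10.0, 100.0, 1000.0, 10000.0]
```

In a real ecosystem the BCF differs at each level and depends on the lipid-water partition coefficient, so the constant-factor model is only a didactic simplification.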
EXPOSURE TO TOXIC SUBSTANCES
Understanding human exposure is the unique feature of environmental medicine. Traditional approaches such as taking a history remain important, but more sophisticated approaches are required to understand exposure, which takes place in the home, community, and workplace.
Exposure Assessment
Exposure assessment has emerged as a discipline that combines environmental sampling, chemical analysis, biomarkers, behavioral studies, and mathematical modeling to estimate the dose received by an individual.50 It is necessary to investigate exposure formally as a system of coupled events.51 Exposure pathways involve contaminated air, water, soil, or food entering the body through the lungs, GI tract, or skin; each combination of medium and route is a potential pathway, as shown in Table 20-2 and Fig. 20-1. Each nonzero cell in Table 20-2 represents a potential pathway, ranging from slight importance (+) to major importance (++++). The ingestion of soil by toddlers is often the determining pathway in residential and recreational risk assessment. Injection of drugs, metal slivers, or shrapnel fragments is a special case not usually dealt with in environmental toxicology. When a human comes in contact with a contaminated medium, there is always the question of how much enters the body, is absorbed into the bloodstream, and reaches the target tissue. The bioavailability of a material in a particular matrix and the absorptive capability through the skin, intestinal mucosa, or alveoli can vary greatly and are difficult to measure directly. Likewise, the actual exposure of the target cells, tissues, or organs (the internal dose50) is seldom known. Absorption varies with species, age, and the vehicle or solvent, as well as the presence of carrier molecules. Children absorb some compounds, such as lead, more efficiently than do adults. Since they are also more likely to ingest soil and are more vulnerable to its
effects, they are in triple jeopardy. Children who are undernourished are more likely to eat soil (pica) and are also more efficient at absorbing lead or cadmium, for example, from a diet deficient in iron or calcium. Children consume about 0.1 g of soil per day.52 For a given contaminant, different pathways may be important for different compounds. For example, organic mercury is primarily taken up by ingesting contaminated seafood, while elemental mercury usually enters by inhalation.

TABLE 20-2. EXPOSURE MATRIX*

Media/Route    Inhalation         Ingestion                   Percutaneous         Injection
Air            ++++               ++                          0                    0
Water          +++ (showering)    ++++                        ++ (slurries/muds)   0
Soil/dust      +++                ++++ (toddlers, diggers)    +                    0
Food           0                  ++++                        0                    0
Other          +                  +                           0                    ++

*With permission from EOHSI.

Advances in instrumentation and analytic chemistry have supported great strides in direct measurement of environmental exposure. This is not without cost: as our ability to analyze ever smaller quantities of an agent improves, our ability to deal environmentally and sociopolitically with such exposures has not kept pace. Analytic techniques that would formerly have yielded concentrations of “zero” or “nondetectable” now provide results at parts per quadrillion (e.g., femtograms/gram). This has been referred to as the “vanishing zero.”53

The discipline of industrial hygiene is particularly concerned with anticipating, estimating, and controlling exposure to workplace hazards.54 For airborne hazards, industrial hygienists use a variety of pumps and collection media to capture pollutants in a known volume of air. These are then quantified in the laboratory and extrapolated to determine how much of the material a person is exposed to in an 8-hour period. Where particulates are involved, it is necessary to establish a size distribution to determine the portion that is of respirable size. Because exposures are not constant throughout the day, measurements must be made either at several times during the day or over several 8-hour work shifts. Exposures are expressed in terms of a time-weighted average (TWA) corrected to an 8-hour exposure.47 Industrial hygiene has expanded to environmental hygiene, including evaluation of hazards in the home and community.
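The 8-hour TWA computation described above can be sketched as follows; the function name and the interval measurements are hypothetical illustrations, not values from the chapter.

```python
# 8-hour time-weighted average (TWA): sum of (concentration x duration)
# over the measured intervals, divided by the 8-hour reference shift.
# Concentrations in mg/m^3, durations in hours; data are made up.

def twa_8hr(intervals):
    """intervals: list of (concentration, duration_hours) pairs."""
    total = sum(conc * hours for conc, hours in intervals)
    return total / 8.0

# e.g., 2 h at 0.4, 4 h at 0.1, 2 h at 0.6 mg/m^3
print(twa_8hr([(0.4, 2), (0.1, 4), (0.6, 2)]))  # (0.8 + 0.4 + 1.2) / 8 = 0.3
```

Note that any unmeasured portion of the shift is implicitly treated as zero exposure here; in practice the intervals should cover the full 8 hours.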
Bioavailability
An important aspect of exposure alluded to above is bioavailability.55 How readily is a toxicant released from its environmental matrix? In the case of ethanol dissolved in water, there is virtually 100% uptake of the alcohol into the bloodstream. In the case of a metal bound to protein in food, the uptake may depend on the efficiency of protein digestion. For substances bound to soil, bioavailability may vary greatly. The bioavailability of 2,3,7,8-tetrachlorodibenzo-p-dioxin (“dioxin”) was low in soil from Newark, New Jersey, probably because of the high organic content of that soil, while dioxin in the sandy soil at Times Beach, Missouri, had much higher bioavailability.56 Bioavailability is also important for plants and consequently for the humans who consume them. Certain pollutants in soil may be taken up by a plant and translocated to the leaves or fruits, which are subsequently harvested for human consumption. Depending upon the chemical species, concentration, pH, competing ions, and other factors, the plant may take up a large amount of the pollutant or none at all.
Absorption
Bioavailability and absorption combine to determine how much of a substance enters the bloodstream through ingestion, inhalation, or the skin. Bioavailability refers to properties of the matrix (e.g., how a
xenobiotic may be bound), while absorption refers to properties of the organ (how readily a xenobiotic passes through the alveolar membrane, intestinal epithelium, or the skin). Methylmercury, for example, is almost completely absorbed from the gut, while ingested elemental mercury passes through the GI tract with virtually no absorption. Ingestion of elemental mercury poses a threat mainly when the intestinal tract is disrupted, for example, after surgery.57 Lead absorption varies with age: children absorb about 50% of an ingested quantity, compared with less than 10% for adults.58 Absorption of lead and cadmium increases in women and children with low stores of iron and other essential nutrients,58,59 and this effect is enhanced in pregnancy.60
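The child-versus-adult lead absorption figures above can be turned into a minimal absorbed-dose sketch; the 100 µg intake and the function name are made-up illustrations.

```python
# Sketch: absorbed dose = intake x absorbed fraction, illustrating the
# roughly 50% (child) versus under 10% (adult) gut absorption of
# ingested lead cited above. The 100 µg intake is hypothetical.

def absorbed_dose_ug(intake_ug, absorbed_fraction):
    """Amount reaching the bloodstream from an ingested intake (µg)."""
    return intake_ug * absorbed_fraction

child = absorbed_dose_ug(100.0, 0.50)  # about 50 µg absorbed
adult = absorbed_dose_ug(100.0, 0.10)  # at most about 10 µg absorbed
print(child, adult)
```

Even at identical intakes, the child's absorbed dose is several-fold higher, which is the point of the "triple jeopardy" discussion earlier in the chapter.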
In Utero Exposure
Many chemicals exert an effect on the developing embryo and fetus. They may pass through the placenta and reach the fetus, in some cases achieving concentrations higher than those in the maternal circulation.61 Transplacental transport is not necessary, however, since a chemical may influence the fetus by altering blood flow.
ACUTE AND CHRONIC EXPOSURE AND TOXICITY
The terms “acute” and “chronic” can refer either to the duration of exposure or to the resultant health effects. A single “acute” exposure to a toxic chemical may be sufficient to induce health effects that in turn may be acute (followed by recovery), subacute, or chronic. Long-term or chronic exposure may be followed by no adverse health effects (if the dose is low), by acute effects (which may occur when a sufficient dose has accumulated), or by chronic effects. In addition to having a long duration, chronic effects tend to be nonreversible. More specifically, with respect to toxicological studies on animals, acute toxicity can be defined as adverse effects usually occurring within 24 hours after a single dose. Subchronic effects usually occur after repeated dosing over up to 10% of the life span.62 Chronic exposure refers to dosing animals for more than 10% of their life span.63 A particular dose may have a much greater effect when administered acutely than chronically, but in many cases repeated low doses cause effects not seen in acute toxicity.

Acute toxicity is rated as follows, based on the probable lethal oral dose for humans:

Toxicity Class          Dose/kg Body Weight    Approximate Amount Consumed
Practically nontoxic    > 15 g/kg              > 1 liter
Slightly toxic          5–15 g/kg              300–1000 cc
Moderately toxic        0.5–5 g/kg             30–300 cc
Very toxic              50–500 mg/kg           3–30 cc
Extremely toxic         5–50 mg/kg             0.3–3 cc
Supertoxic              < 5 mg/kg              < 0.3 cc
Acute LD50 values range from 10 g/kg for ethanol to 0.01 µg/kg for botulinum toxin.
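The dose cut points given above map directly onto a small classifier; this is a sketch of the rating scheme, with the function name and the example doses chosen for illustration.

```python
# Classify acute oral toxicity from the probable lethal human dose,
# following the g/kg and mg/kg cut points in the rating scheme above.
# Input is in mg/kg body weight (15 g/kg = 15 000 mg/kg, etc.).

def toxicity_class(dose_mg_per_kg):
    if dose_mg_per_kg > 15000:
        return "practically nontoxic"
    if dose_mg_per_kg > 5000:
        return "slightly toxic"
    if dose_mg_per_kg > 500:
        return "moderately toxic"
    if dose_mg_per_kg > 50:
        return "very toxic"
    if dose_mg_per_kg > 5:
        return "extremely toxic"
    return "supertoxic"

print(toxicity_class(10000))   # ethanol's ~10 g/kg LD50 -> "slightly toxic"
print(toxicity_class(0.00001)) # botulinum-range dose -> "supertoxic"
```

The boundaries are treated as exclusive upper limits here; the original table does not specify how a dose falling exactly on a cut point should be assigned.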
Time-Dose Interactions
Physiologists grappled with this problem in the nineteenth century, finding that some combination of voltage and duration had to be achieved to produce a response to a shock. At a lower voltage, a longer duration was required. The lowest voltage (dose) at which a response could occur approaches an asymptote called the rheobase. Chemical dosing offers an analogy that is seldom quantified. A lay person knows that taking one tablet a day for 10 days is not the same as taking 10 tablets on a single day, although the total dose is the same. The time-dose relationship is complex and nonlinear. Moreover, different time-dose combinations can have very different qualitative as well as quantitative effects. A chronic or recurrent dosage may accumulate to exceed a certain threshold for a chronic effect, while an acute dose may be quickly eliminated without ever producing that effect. Conversely, a chronic daily dose may never reach an effect threshold. Hence a single daily dose of 1 oz of alcohol does not reach the threshold for producing impairment and is reputed, in fact, to have beneficial effects.
CHEMICALS IN THE BODY
What the body does to a xenobiotic (how the substance is distributed, altered, and eliminated) is referred to as toxicokinetics. What the chemical does to the body (how it interacts with target cells and intracellular targets to exert its toxic effects) is referred to as toxicodynamics.
Toxicokinetics
Toxicokinetics is the totality of reactions that govern the uptake and distribution of a toxic substance and its metabolites throughout the body. It is based on the different rate constants that exist for metabolic processes in different tissues under different circumstances, and on different partition coefficients, binding properties, and so on. These reactions are competitive, such that the amount of material available for metabolism depends on the amount that has been sequestered in fat, bound to protein, or excreted in the urine. The fate and effect of every substance that enters the body depend upon its absorption, transport, metabolism, storage, and excretion. Metabolism, for example, alters the binding properties and solubility of the original chemical and influences its toxicity and whether it will be stored or excreted. Two factors that influence the entry of chemicals into cells are the perfusion rate of the organ and the diffusion rate of the substance across the membrane. Fick’s law describes the passage of a xenobiotic across a membrane as proportional to the concentration gradient, the membrane surface area, and a compound-specific permeability coefficient. The latter in turn depends upon the condition of the membrane, the presence of receptors or transporters, and the lipid-aqueous partitioning of the compound, often measured as the solubility in octanol divided by the solubility in water. Excretion via urine, feces, exhaled air, or sweat is in turn determined by the relative solubility of the compound and its delivery to the kidney, liver, lungs, or skin. In general, compounds that are water soluble or conjugated are excreted via urine, while lipid-soluble compounds are secreted via the bile into the intestine.
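The proportionalities in Fick's law stated above can be written as a one-line calculation; all parameter values below are made up for illustration, and the function name is hypothetical.

```python
# Fick's law sketch: passive flux across a membrane is proportional to
# the concentration gradient, the membrane surface area, and a
# compound-specific permeability coefficient: J = P * A * (C_out - C_in).
# Units are illustrative (cm/s, cm^2, and amount/cm^3).

def fick_flux(permeability, area, c_out, c_in):
    """Passive flux (amount per second) across a membrane."""
    return permeability * area * (c_out - c_in)

# A more lipophilic compound (higher octanol-water partitioning) would
# generally have a larger permeability coefficient for lipid membranes.
print(fick_flux(1e-4, 50.0, 2.0, 0.5))  # 1e-4 * 50 * 1.5 = 0.0075
```

When the inside concentration exceeds the outside, the sign of the flux reverses, which is consistent with passive diffusion running down the gradient in either direction.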
Metabolic Activation versus Detoxification
In some cases, such as lead, cyanide, and carbon monoxide, the xenobiotic itself is the active poison, exerting its toxic effect directly on enzymes, cells, or macromolecules. In other cases (e.g., arsenate, hexane, carbon tetrachloride), reduction to arsenite or oxidation to 2,5-hexanedione or a CCl3OO• radical produces the ultimate toxicant. Some xenobiotics create reactive hydroxyl, peroxyl, or alkoxyl free radicals. An important feature of metabolism is its ability to both reduce and enhance toxicity. Although the liver is the major site for detoxification of xenobiotics, many toxicants do not exert activity until they reach the liver and are metabolically activated, usually through an oxidative reaction. This forms more highly reactive intermediate
compounds that can interfere with other metabolic reactions or “attack” membranes, organelles, or macromolecules. Phase I reactions involve oxidation (by dehydrogenases, flavin monooxygenases, cytochrome P450, and other systems), hydrolysis (for example, by carboxylesterases, peptidases, and paraoxonase), the formation and hydrolysis of epoxides, reduction (of azo and nitro groups, carbonyls, and sulfides, and dehalogenation), and various other reactions. The cytochrome P450-dependent enzyme system (see below) plays a major oxidative role. Phase II metabolism involves linking a substance to a glucuronide, adding acetyl or methyl groups, or conjugating it with amino acids or glutathione (GSH). Phase II reactions usually increase the hydrophilic nature of the substance, facilitating its excretion in urine. The liver is the main organ of metabolism, but metabolic enzymes occur in most tissues. Within cells they are found mainly in the microsomal component of the endoplasmic reticulum, but also in the cytosol and other organelles. In addition, intestinal flora can play a significant role; for example, bacteria in the colon can transform PAHs into estrogenic metabolites,64 although the significance of this is not yet known. There are also active P450, glutathione S-transferase (GST), and other metabolic enzymes in the nasal mucosa that modify inhaled xenobiotics.65 Additional details are provided by Parkinson.66 As an example, 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) was produced accidentally during the synthesis of an illicit narcotic analog. MPTP is oxidized to the neurotoxic metabolite MPP+ by monoamine oxidase B,67 which is then transported by the dopamine transporter and concentrates in dopaminergic neurons, where it inhibits cellular respiration, causing cell death. The inadvertent consumption of this by-product by substance abusers produced parkinsonism in a large number of young people, and MPTP has now become a model drug for parkinsonism research.
Monoamine oxidase inhibitors block the toxicity of MPTP. The common analgesic acetaminophen undergoes metabolism by P450 to a quinone, which interacts with liver proteins, causing centrilobular necrosis. It also undergoes activation through the prostaglandin H synthase (PHS) system in the kidney to produce a nephrotoxic free radical. The bladder epithelium is also relatively rich in PHS, which can metabolize certain aromatic amines into genotoxic metabolites that cause bladder cancer in humans and dogs. In rats, the predominant pathway is N-hydroxylation in the liver, such that the same amines cause liver tumors rather than bladder tumors.
Cytochrome P450
This system of iron-containing enzymes is very diverse, performing oxidation, hydroxylation, epoxidation, and dealkylation reactions on a great variety of substrates.68 An entire subdiscipline has developed around understanding the species, tissue, substrate, and reaction specificity (or lack thereof) of the many P450s found in various organisms. The P450s were also called the liver microsomal oxidase system, since the highest concentration is found in the microsomes (endoplasmic reticulum) of hepatocytes, but P450s occur in virtually all tissues. They are heme-containing proteins with peak absorption at 450 nm when complexed with carbon monoxide. P450 oxidation reactions can result in hydroxylation, the formation of epoxides from carbon-carbon double bonds, the cleavage of esters, dehalogenation, and other reactions.69 The P450 monooxygenase enzymes are divided into families 1, 2, 3, and 4; members of different families have less than 40% amino acid sequence identity. Within these families are subfamilies (1A, 1B, 2A, 2B, 2C, 3A, etc.), within which the proteins have 40–55% homology. Proteins with greater homology share the same number, for example, 1A1, 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 3A4, etc. There are at least 15 of these different enzymes in the liver, and many new isoforms of P450 are still being discovered. The P450 enzyme that metabolizes caffeine is referred to as P450 1A2 and is often abbreviated CYP1A2 (while the gene that produces it is written in italics as CYP1A2). Other examples of specific P450 reactions are the hydroxylation of testosterone at position 6 by CYP3A4 and of coumarin at position 7 by CYP2A6. A particular
512
Environmental Health
substrate may be metabolized by more than one P450, while conversely each P450 catalyzes more than one reaction. Thus CYP3A4 can hydroxylate testosterone at several positions and also dehydrogenate it to 6-dehydrotestosterone.66 Xenobiotics may have multiple metabolic pathways: CYP2D6 oxidizes the aromatic ring of propranolol and CYP2C19 metabolizes the side chain, directing propranolol into two different pathways. Conversely, the conversion of acetaminophen to its quinone metabolite can be accomplished by three different P450s. Many of the discrete P450s have been detected in studies of drug metabolism, and their natural substrates have not always been identified. Many of the P450s are inducible rather than constitutive enzymes; that is, the amount of P450 activity remains low until a suitable substrate is present that activates the gene governing the expression of a particular P450. Current interest in P450 focuses on heritable deficiencies or polymorphisms that influence individual susceptibility to xenobiotics.70,71 A mutation in the gene for CYP2D6 interferes with metabolism of the drug debrisoquine, and about 5–10% of Caucasians but less than 1% of Japanese are “poor metabolizers.” Conversely, 20% of Japanese are poor metabolizers of the anticonvulsant S-mephenytoin due to deficiency of CYP2C19. Since these P450s are not substrate specific, these deficient individuals may be intolerant of certain other xenobiotics, whether environmental or pharmacologic. Tissue Specificity. CYP1A2 is expressed in liver cells but not in other tissues, while CYP1A1 is low in liver (of most mammals, but not guinea pigs or rhesus monkeys) but high in other tissues. Since the two catalyze different reactions, a single substrate may follow different metabolic pathways in different tissues. This is a rapidly evolving area of research with important applications in pharmacology and toxicology.72 Induction. 
It has long been known that certain xenobiotics induce the formation of metabolic enzymes.73 CYP1A2 is induced by a variety of PAHs and indoles. CYP3A4 is induced by barbiturates, while CYP2D6, which metabolizes many different drugs, is constitutive rather than inducible. CYP2D6-deficient individuals are hyperresponsive to certain drugs. However, they are likewise protected from certain environmentally caused cancers, such as lung, bladder, and liver cancer, because of their failure to activate certain procarcinogens. Whether this is a direct effect of CYP2D6 deficiency remains to be determined. There is a tenfold variation in the liver content of CYP3A4.74–76 This may add credence to the use of a 10× uncertainty factor in risk assessment to protect the most susceptible individual.
Flavin-Containing Monooxygenases Flavin-containing monooxygenases (FMOs) are another family of microsomal enzymes that require NADPH and oxygen to catalyze the metabolism of various xenobiotics that contain nitrogen (e.g., amines), sulfur (e.g., thiols), and phosphorus (e.g., organophosphates). There are several forms of these oxygenases, with different distributions among organs and species. Thus mouse and human liver have a high concentration of FMO3 and a low concentration of FMO1; the reverse is true in the rat, but both forms are present in high concentrations in the kidneys of all three species. Hepatocytes of female mice have higher expression of FMO1 and FMO3 than do those of male mice.77
Phase II Reactions These include several important conjugation reactions, some of which accelerate the elimination of xenobiotics. Glucuronidation. A series of enzymes called uridine diphosphate glucuronosyltransferases, found in various tissues of mammals other than felines, catalyze the conjugation of xenobiotics with glucuronic acid. The resulting glucuronides are usually water soluble, allowing excretion in the urine (low-molecular-weight forms) or in the bile. Glutathione (GSH) Conjugation. Many xenobiotics are electrophilic and will react with GSH. The conjugation is accelerated by cytosolic glutathione-S-transferase (GST) enzymes. GSH conjugates
can be excreted in the bile or can be transformed to water-soluble metabolites in the kidney and excreted in the urine. Polymorphisms at the GST loci result in variable efficiencies of the conjugation reaction. Depletion of GST-P1 in the lungs has been associated with increased susceptibility to lung disease and the effects of smoking, and with enhanced apoptosis of lung fibroblasts.78 Despite early suggestions, a meta-analysis of 20 studies did not find that GSTM1 deficiency was a risk factor for colon cancer.79 Divalent cations readily bind sulfhydryl groups, including GSH, and indeed treatment with mercury increases the activity of several enzymes involved in the synthesis of GSH and the reduction of glutathione disulfide (GSSG).80 Conversely, acetaminophen depletes GSH levels in liver; both the depletion and the subsequent hepatotoxicity are inhibited by diallyl sulfone, a metabolite of garlic,81 which inhibits CYP2E1, the enzyme that activates acetaminophen. Other Reactions. Sulfation results in the formation of a water-soluble ester through transfer of the SO3 moiety. Methylation and amino acid conjugation are minor pathways. N-Acetylation is a major pathway for aromatic amines and hydrazines. It is catalyzed by N-acetyltransferases (NAT), cytosolic enzymes found in most mammals except canines. There are at least three forms of NAT, and a deficiency in either the activity or the structure of NAT2 results in slow acetylation of certain drugs (for example, the antituberculosis drug isoniazid). This deficiency occurs in about 70% of Middle Eastern populations, in 50% of Europeans, and in 20% of Asians. Sulfurtransferases have a wide role; for example, the enzyme 3-mercaptopyruvate sulfurtransferase is capable of detoxifying cyanide by transferring a sulfur, forming the less toxic thiocyanate. Sequestration of Xenobiotics. The amount of a substance available to affect a target organ or to be excreted depends on how much has been stored or bound. 
Sequestration of an agent in an organ need not be permanent. Stored substances may be slowly or quickly released from such relatively inactive depots as bone or fat. Lipophilic substances such as chlorinated hydrocarbons and organometals are generally found in fatty tissues or in lipid components of cells and membranes. They may be released in large concentrations from fat during starvation or illness. Metal ions such as strontium and lead compete with calcium for deposition in bone, and bone therefore provides a long-term storage depot for these ions. For example, lead accumulates in bone and may be suddenly released during the remodeling of bone that occurs in menopause,82 and organochlorines may be mobilized from fat during periods of rapid weight loss. Metallothioneins are low-molecular-weight proteins rich in sulfhydryl groups, which are involved in the regulation of zinc and other cations. Some metals such as cadmium can induce the formation of metallothioneins in the liver, and these proteins in turn bind the metal and influence its transport to other organs.
Routes of Excretion Xenobiotics and their metabolites are excreted mainly through the urine and feces, but also through the lungs, sweat, milk, and the sloughing of skin and hair. Renal clearance is greatest for substances that are water soluble or that are conjugated into hydrophilic complexes. Fecal excretion predominates for substances that are lipophilic or are conjugated into lipophilic complexes. Enterohepatic cycling may interfere with excretion: a lipophilic substance can be secreted in bile into the intestine, from which it is immediately reabsorbed, redistributed to the liver, and returned to the gut in bile again. Humans exposed to organic mercury excrete it mainly in feces, while inorganic mercury exposure is reflected mainly by urinary excretion. For organics, molecular weight influences the excretory pathway: higher-chlorinated PCBs are excreted mainly in feces, while mono- and dichloro-PCBs are excreted mainly in urine. Volatile compounds are excreted through the lungs. At any moment, the concentration of volatiles in expired air depends on how much has just been inspired (but not absorbed), as well as how much
is released to the lungs from the bloodstream. Measurement of volatiles in expired air is potentially useful for monitoring VOC exposure. Short-chain chlorinated hydrocarbons are highly volatile, and whether consumed in water or inhaled, they are excreted via the lungs. Once they reach the liver, however, they are oxidatively metabolized into polar metabolites, which are water soluble and are not excreted in the air. Biological Half-Lives. The concept of a radioactive half-life, by which the decay of a radionuclide can be predicted, is mirrored by the biological half-life, the time it takes for half of a dose of a xenobiotic to be eliminated. However, since elimination may follow a two- or three-phase decay curve, the half-life is only an approximation, and estimates of half-lives vary among studies and among individuals. The individual variation in the half-life of cadmium in the kidney has been estimated to range from a few years to a century.83
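The half-life concept, and the reason a multiphase decay makes any single half-life an approximation, can be sketched numerically. The sketch below is illustrative only; the half-lives and pool fractions are invented, not measured values.

```python
def remaining_fraction(t, half_life):
    """Fraction remaining after time t under single-phase first-order elimination."""
    return 0.5 ** (t / half_life)

def biphasic_remaining(t, f_fast, t_fast, t_slow):
    """Two-phase decay: a fast-clearing pool (fraction f_fast, half-life t_fast)
    plus a slow-clearing pool (half-life t_slow)."""
    return f_fast * 0.5 ** (t / t_fast) + (1.0 - f_fast) * 0.5 ** (t / t_slow)

# After two half-lives, one quarter of a single-phase dose remains.
print(remaining_fraction(2.0, 1.0))  # 0.25

# Hypothetical biphasic burden: 80% clears with a 1-day half-life,
# 20% with a 30-day half-life; the slow pool dominates after day 10.
print(biphasic_remaining(10.0, 0.8, 1.0, 30.0))
```

Because the apparent half-life of a biphasic curve depends on when it is measured, published half-life estimates for the same agent can legitimately differ.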
DEFENSES
Over many generations, organisms develop adaptations to environmental stressors, including toxic chemicals. Fish and crustacea that live in contaminated water or sediments must have adaptations for tolerance not required of conspecifics living in pristine conditions. Killifish living in contaminated bays such as New Bedford Harbor, Massachusetts, have experienced strong selection pressure for tolerance and have, within about 50 years, evolved genetic resistance to the toxic effects of PCBs and PAHs.84 The basis of tolerance may be genetic, requiring selection of the more tolerant organisms over generations, or physiological (for example, enzyme induction or organ hypertrophy). Transfer experiments have verified that organisms moved from clean to contaminated environments lack the tolerance acquired either genetically or physiologically.85 Defenses begin at the behavioral level. Organisms can avoid contaminated environments. Contaminated foods can be tasted and rejected. In primate family groups, one individual may taste a fruit and wait for hours before encouraging other members to partake. Noxious chemicals may cause the individual to vomit or feel abdominal pain, which warns it to avoid such items now and in the future and may be communicated to other group members. At the physiological level, chemicals may cause enzyme induction that hastens their own breakdown, thereby protecting against a subsequent dose. Some sulfur-rich molecules such as GSH scavenge many xenobiotics, binding them and preventing their uptake by target tissues. Defenses also operate at the cellular level in the form of lysosomes.
TOXICOLOGICAL EFFECTS AND PHARMACODYNAMICS
Endpoints or Responses A toxicological effect may be manifest at the molecular, cellular, tissue, organ, individual, or population level. Some effects such as death, acute respiratory illness, skin rashes, and toxic hepatitis may be readily apparent, while others may be subtle, requiring sophisticated testing for identification. Endpoints depend on the toxic properties of a chemical and what the researcher chooses to study. A chemical may be highly specific, such as the effect of benzene on the bone marrow causing leukemia, or nonspecific. Endpoints may be sought in any organ system or any tissue type. They need not be clinically significant. Recently, attention has focused on subcellular and molecular targets of poisons and on biomarkers. Toxicology is concerned with all of these forms and levels of injury.
Dose-Response Curve Although many toxicological studies simply report the presence or absence of a particular effect, the hallmark of toxicology is the
Figure 20-2. The classic sigmoid dose-response curve showing a threshold and asymptote of maximal response.
dose-response curve. This is predicated on the fact that a high dose of a substance usually has a greater effect on any endpoint than does a low dose. The dose-response curve (Fig. 20-2) plots the dose along the X-axis and the endpoint response along the Y-axis. The typical dose-response curve has the sigmoid shape illustrated. It is a cumulative percent-response curve showing either the severity of response of an individual given increasing doses (individual curve) or the number of individuals responding (population curve). It is customary to measure dose as the amount of the agent divided by the body weight of the organism, for example, milligrams of chemical per kilogram of body weight. In some cases, for example, acute toxic effects or sensitization of the skin, eye, or respiratory tree, the toxicant is not distributed throughout the body, and the dose per body weight is therefore not a good predictor of effect. In such cases different units must be used, such as concentration in a volume of air or on an area of skin. In interpreting dose-response data from animal studies, it is necessary to know the species, strain, age, and sex of the test animals and the conditions of exposure, as well as the dose. Endpoints include death, presence of a lesion (e.g., tumor), number of lesions, and anatomic, physiological, biochemical, molecular, or behavioral changes. Thus if one were concerned with the neurotoxic, nephrotoxic, and lethal characteristics of a particular chemical, one would draw three dose-response curves, graphing the severity of each effect against dose. Fig. 20-3 shows nested dose-response curves for the number of people manifesting each different endpoint (increasing in threshold and severity from left to right) at different levels of organomercury exposure. Thus, difficulty in speech occurred at a much lower dose than coma and death. The common features of most dose-response curves are shown in Fig. 20-2. 
Initially, there is a flat subthreshold portion where an increase in dose produces no detectable effect. The threshold is the lowest dose that produces an observable effect. Beyond that point, the curve tends to rise steeply and often enters a linear phase in which the increase in response is proportional to the increase in dose. Eventually a maximal response is reached, and the curve flattens out. Various endpoints have been used to reflect toxicity. Traditionally, toxicologists were interested in the LD-50, the dose that killed half of the exposed animals. The Y-axis was therefore the number of animals dying at each dose, and various chemicals could be ranked in terms of their LD-50. This proved to be a very narrow indication of toxicity, and many other endpoints have proven more useful, but one can still speak of the response dose (RD-50) or the effective dose (ED-50). Since typical toxicological studies used only a few doses, it was unlikely that the actual LD-50 dose would be among them. The same data can be plotted using a probit scale for the Y-axis, resulting in a straight line, which allows the LD-50 (or RD-50) to be read from the graph.
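The sigmoid curve and the estimation of a 50% response dose can be sketched as follows. The Hill-type curve, its parameters, and the function names are illustrative assumptions, not data or methods from the text.

```python
def response(dose, ed50, hill):
    """Cumulative sigmoid (Hill-type) dose-response: fraction of population responding."""
    if dose <= 0:
        return 0.0
    return dose ** hill / (ed50 ** hill + dose ** hill)

def find_ed50(curve, lo, hi, tol=1e-6):
    """Bisect a monotone dose-response curve for the dose giving a 50% response."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if curve(mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

curve = lambda d: response(d, ed50=10.0, hill=2.0)   # hypothetical agent
print(find_ed50(curve, 0.01, 100.0))                 # close to 10.0
```

With real data only a few doses are tested, which is why the LD-50 is estimated by interpolation (classically via the probit transform) rather than observed directly.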
Composite Curves Where a substance produces more than one kind of response, separate curves should be drawn for each, as with the organomercury curves (Fig. 20-3). If one of the responses is beneficial, a downward curve can be drawn, while the traditional upward curve illustrates the toxic response (Fig. 20-4). This allows detection of a safe or therapeutic region, above which the undesirable or toxic effects are reached. For example, eating fish provides great nutritional benefits, but above a certain level of consumption the benefits are outweighed by the toxic constituents (mercury and PCBs).88 This would not be considered an example of hormesis, since the endpoints are very different.
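A composite benefit-harm calculation of this kind can be sketched numerically. The benefit and harm functions below are invented for illustration and do not reproduce the curves in Fig. 20-4.

```python
def benefit(dose):
    """Hypothetical saturating nutritional benefit (arbitrary units)."""
    return 2.0 * dose / (dose + 20.0)

def harm(dose):
    """Hypothetical toxic response with a threshold at a dose of 30 (arbitrary units)."""
    return 0.0 if dose <= 30.0 else 0.05 * (dose - 30.0)

def net(dose):
    """Net benefit: positive inside the 'safe or therapeutic region'."""
    return benefit(dose) - harm(dose)

# Scan doses (e.g., grams of fish per day) for the region of net benefit.
beneficial = [d for d in range(10, 101, 10) if net(d) > 0]
print(beneficial)
```

The point of the exercise is that the safe region is not a property of either curve alone; it emerges only from summing the separate benefit and harm dose-response curves.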
Figure 20-3. Nested dose-response curves for different clinical manifestations of organomercury poisoning based on the epidemic in Iraq (1971), showing the progression in thresholds from the relatively minor but early sign of paresthesias to lethality, estimated at the time exposure ceased. Solid squares = paresthesias, open squares = ataxia, solid triangles = dysarthria, open circles = deafness, solid circles = death. (Source: Takizawa Y. Epidemiology of mercury poisoning. In: Nriagu J, ed. The Biogeochemistry of Mercury in the Environment. Amsterdam: Elsevier; 1979.)
Hormesis. Many substances that are essential elements or nutrients at low doses, for example, iron and chromium, become toxic at high doses. Whether certain other nonessential xenobiotics, or radiation, also have a beneficial dose range (hormesis) is controversial. Hormesis has been promoted as a general phenomenon.86 However, confusion arises between a unitary phenomenon, in which a particular agent produces a specific response that follows a U-shaped rather than a sigmoid curve, and situations in which there are compound responses such as growth, immune responsiveness, and longevity.87 Confusion also arises between the terms beneficial and harmful on the one hand and high and low on the other. Thus a beta-blocking agent may produce a monotonic beta-blockade in the autonomic nervous system, but this may be viewed as beneficial (therapeutic range) or harmful (toxic range). Hormesis, if it occurs, may have regulatory implications, but is not helpful in understanding toxicology.
Thresholds. Thresholds (Fig. 20-2) are a familiar concept to physiologists and biochemists. A particular response may not occur at a very low dose or intensity of stimulus. Thresholds probably exist for most toxicological exposures; thus we can live normal lives even though we are exposed to myriad chemicals, albeit at low (subthreshold) levels. Experience with radiation, however, indicated that even at very low doses there was a measurable response. There did not seem to be a definable threshold, or the threshold was very close to zero. This led to the no-threshold approach to carcinogens.89 This is one of the most controversial issues in toxicology, with critics claiming that there must be a level of radiation below which no harm occurs, or even where there is benefit. The National Research Council’s Committee on Biological Effects of Ionizing Radiation (BEIR-VII Committee) reaffirmed in June 2005 that the linear no-threshold model is still the best approach to cancer risk assessment for low doses of radiation.90 The concept of a threshold is also a source of severe controversy in the case of chemical carcinogens,91 where, theoretically at least, a single molecule may be the critical molecule that induces a cancer transformation in a cell. Some scientists believe that there must be a threshold for cancer as there is for other toxicological reactions. Others argue, on theoretical grounds, that since no threshold (below which no cancer risk exists) has been demonstrated, there probably is not one. In light of the ongoing controversy, some governmental regulatory agencies have concluded that, until we are more certain, it is prudent to act as if there were no threshold for carcinogens. Thus the application of a no-threshold approach to chemical carcinogens can be viewed as a policy decision rather than a scientific one.92
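The practical difference between the two models is easy to state numerically. The slope and threshold values below are arbitrary illustrations, not risk coefficients from the literature.

```python
def lnt_excess_risk(dose, slope=1e-3):
    """Linear no-threshold (LNT) model: some excess risk at any nonzero dose."""
    return slope * dose

def threshold_excess_risk(dose, threshold=5.0, slope=1e-3):
    """Threshold model: zero excess risk until the threshold dose is exceeded."""
    return 0.0 if dose <= threshold else slope * (dose - threshold)

# Below the threshold the two models disagree qualitatively, not just in degree.
for dose in (1.0, 5.0, 50.0):
    print(dose, lnt_excess_risk(dose), threshold_excess_risk(dose))
```

Under LNT every dose carries some predicted risk, which is why the choice between the models drives regulatory policy at low exposures.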
Latency Latency is the time between a stimulus and a response or, in toxicology, the time between an exposure and an effect. In some cases (for example, acute exposure to hydrogen sulfide), the effect is felt in seconds, and the latency is therefore measured in seconds. In the case of
Figure 20-4. Composite benefit and risk by dose curve, summing the dose-response curves for harm (H1, H2) and benefit (B) to arrive at net benefit–harm composites (N1, N2). Up-arrows under X-axis indicate thresholds. (Source: The Environmental and Occupational Health and Sciences Institute.)
(X-axis: fish consumption, grams/day.)
asbestos-induced mesothelioma, a cancer of the lining of the chest or abdomen, the latency is on the order of 40 years; that is, the cancer may not develop until 40 years after the first exposure occurred.93 If the latency is very short, as with acute effects, it is usually easy to establish a cause-effect relationship. When the latency is much longer, the cause may be long forgotten before the outcome is realized. Accordingly, only sophisticated epidemiologic studies can identify cause-effect relationships with long latencies. In some cases there is a dose-response relationship for latency; that is, at higher doses latency is reduced.
Reversibility Since most individuals recover from most toxic exposures, it is clear that many toxic effects are reversible. Inhibition of a biochemical pathway may be reversed if a competing agent is introduced to bind the xenobiotic. If a cell is killed, it does not come back to life, but in almost all organs new cells regenerate to take over the role of the damaged cells. In the case of genetic damage to nucleic acid molecules, sophisticated biochemical reactions called “DNA repair” mechanisms are brought into play and eliminate, in various ways, the damaged DNA. Cells with irreparable DNA damage may be eliminated by apoptosis. Our DNA repair mechanisms become less efficient as we age, and this is one of the factors associated with the increased incidence of cancer in older people.

SUSCEPTIBILITY
Although it is well known that individual humans vary in their susceptibility to different stressors, and although some of the factors modifying susceptibility are well known, there is a great need for research on susceptibility. In experimental animal species, strain, gender, and age influence susceptibility. Indeed, some rodent strains are bred for enhanced susceptibility to certain diseases. If a population of organisms were exposed to a fixed dose of a chemical, one could graph the responses with a histogram—how many individuals had no, low, medium, or high response. If response is quantitative, a smoothed histogram could be drawn. This might take the form of a normal, lognormal (Fig. 20-3), or some other distribution. If only one gender were susceptible, the curve would be skewed. Or if only very young and very old individuals were susceptible, the curve would be bimodal. Other sections of this chapter describe the role of genetic polymorphisms which alter metabolism rates for xenobiotics and consequently risk of disease. These polymorphisms can be used as susceptibility biomarkers, for example, particular variants of CYP1A1, CYP1B1, GSTM1, NAT2, and CYP2E1.94 Susceptibility is thus a complex phenomenon. Emphasis on single nucleotide polymorphisms is one of the simpler areas, and epidemiology has been fruitful in identifying subgroups with increased or decreased susceptibility due to abnormalities of phase I or phase II enzymes. Mutation of the gene cyp2D6 alters drug metabolism, and a case-control study showed about a 70% increased risk of acute myelogenous and lymphoblastic leukemias among poor metabolizers at the CYP2C19 and 2D6 loci, the increased risk for the latter occurring only after age 40.95 CYP1A1 abnormalities have been associated with increased lung cancer risk in some studies,96 but not others.95 Studies of phase II enzymes may be more rewarding. 
Variation in NAT2 activity reveals that slow acetylators have increased bladder cancer rates, while fast acetylators have increased colon cancer rates. Many enzymes are involved in hormone metabolism, and variations in activity may contribute to variation in cancer rates in hormone-sensitive tissues, particularly breast. The importance of epigenetic effects is now becoming evident.
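The population-response histogram described above, including the bimodal case produced by a susceptible subgroup, can be simulated. The subgroup fraction, means, and spreads below are invented for illustration only.

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

def simulate_population(n=2000, frac_susceptible=0.2):
    """Responses at a fixed dose: a small susceptible subgroup responds
    strongly, the rest weakly, yielding a bimodal distribution."""
    out = []
    for _ in range(n):
        if random.random() < frac_susceptible:
            out.append(random.gauss(8.0, 1.0))   # susceptible subgroup
        else:
            out.append(random.gauss(2.0, 1.0))   # typical subgroup
    return out

responses = simulate_population()

# Crude histogram: counts per unit-wide response bin.
histogram = {}
for r in responses:
    histogram[int(r)] = histogram.get(int(r), 0) + 1
print(sorted(histogram.items()))
```

Skewed or bimodal shapes like this one are what motivate looking for subgroups (by gender, age, or genotype) rather than assuming a single normal distribution of susceptibility.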
MECHANISMS OF TOXICITY
Understanding how a chemical causes its adverse effect is important in directing research and influencing risk assessments. New advances in biology, including understanding of gene regulation, transcription factors, polymorphisms, receptors, cytokines, oncogenes, cell cycling, intracellular membranes, vesicles and transport, DNA repair, apoptosis, and epigenetic effects, have greatly expanded the horizons of mechanistic toxicology since 2001.
Metabolic Poisons These include substances that disrupt metabolic pathways, for example, cyanide compounds, which inhibit cellular respiration. Binding to an enzyme and altering its tertiary structure and its active site is a common mechanism. Some substances act within cells to alter the structure or function of internal membranes, such as the endoplasmic reticulum, or organelles, such as mitochondria. Many chemicals act on mitochondria, interfering with their energetic function and resulting in swelling, loss of detail on electron micrographs, and necrosis of cells.
Macromolecular Binding Chemicals may bind to various macromolecules such as proteins, hemoglobin, or nucleic acids. These adducts may interfere with function or may be silent. Some are reversible, being repaired within hours, while others persist and may presage future cancer. The presence of DNA adducts may reflect genotoxic or carcinogenic properties, although their utility as biomarkers of cancer susceptibility has not been determined.
Cellular Poisons Cellular poisons are substances that damage cells or cell membranes, causing necrosis, lysis, or apoptosis. Membranes are functional as well as structural entities, and chemicals that interfere with membrane transport systems may have major consequences. Some xenobiotics are transported into cells by transporters that normally carry endogenous compounds. Toxic agents may react with either the protein or the lipid component of the membrane, nonspecifically or by binding to specific receptors. Many naturally occurring toxins cause lysis of cells (necrosis), for example, the hemolysins in certain plants and snake venoms. Some heavy metals act directly on the cell membrane, interfering with the sulfhydryl binding responsible for membrane integrity and altering membrane fluidity.97
Apoptosis versus Necrosis Apoptosis, often called “programmed cell death,” is a necessary part of the life history of a cell, but is also a mechanism of toxicity. Gene activation leads to proteins that prepare the cell for apoptosis, which ends with phagocytosis of cell fragments, without concomitant inflammation.98 This is an essential feature during development, allowing the remodeling of tissues. Apoptosis selectively eliminates cells with damaged DNA and also counters the clonal expansion of neoplastic cells. Inhibition of apoptosis, for example, by estrogens, allows mutations to accumulate and tumor proliferation to occur. Hormone-dependent tumors expand when the hormone inhibits apoptosis, while an antiestrogenic drug, such as tamoxifen, allows apoptosis to occur. Conversely, the tumor-promoter phenobarbital inhibits apoptosis.99 The inhibition of apoptosis thus enhances the proliferative phase of carcinogenesis.100 New cancer treatments focus on harnessing apoptosis to destroy tumor cells.101 Apoptosis kills cells by disrupting the cytoskeleton and affecting the nucleus and mitochondria. It occurs in all forms of life and in all types of cells. Caspases, a group of proteases, are activated by an apoptotic signal and begin the process of destroying the cell in a complex cascade of enzyme-enzyme interactions. Nuclear and cytoplasmic membranes condense and break into membrane-bound bodies, whereas in necrosis the effect is on the cell’s energy cycles or on the cell membrane, causing cells to swell and lyse.102
Enzyme Induction Because of the specificity of enzymes, the body cannot at all times maintain a full supply of all the enzymes that may be needed for every
situation. Accordingly, only some enzymes are constitutive (present in full supply), while many are inducible (produced in response to the presence of substrate). In normal development and cell cycling, the induction of appropriate enzymes is carefully regulated, whereas induction by xenobiotics is usually disruptive. Many substances induce the expression of the enzymes that will act on them, and within 12 or 24 hours the amount of enzyme protein present within a cell may increase by several orders of magnitude. Some enzyme systems are highly specific and act only on a single substrate; others are nonspecific and catalyze classes of reactions on a wide range of substrates. Substrates vary in their potency at inducing enzymes. Enzyme induction plays an important role in metabolizing xenobiotics, either enhancing or reducing their toxicity. Sometimes, however, the most important consequence of enzyme induction is the greatly accelerated metabolism of endogenous bioactive compounds. For example, the pesticide DDT induced enzymes that broke down estrogen, and the resulting hormone deficiency disrupted reproduction in many animal species.
Receptors and Ligands Advances in biochemistry include identifying the role of receptors and ligands and their regulation as an important part of many toxic interactions. Although some toxic interactions take place in solution, toxicologists have increasingly recognized that toxic effects usually involve binding of the toxicant to some active receptor site on an enzyme or membrane or to some intracellular ligand; the xenobiotic itself becomes the ligand for its receptor. A familiar example is the binding of neuroinhibitory substances to the receptors on the postsynaptic membrane or the myoneural junction. The xenobiotic-receptor interaction involves affinity, efficacy, potency, and reversibility. Receptors are important components of normal cellular function and account for the remarkable specificity of many cell processes. It is now realized that many hormone effects are mediated by hormone-specific receptors in particular target tissues, for example, the estrogen receptor. Some toxic effects occur because a xenobiotic is capable of binding to a hormone receptor or a neuroreceptor and interfering with the normal action of the endogenous chemical. Binding may be activating or inactivating. The compound 2,3,7,8-TCDD, often known simply as dioxin, has proven a valuable tool for toxicological research.103 Its effects are in part mediated by binding to the Ah (aryl hydrocarbon) receptor, a ligand-activated transcription factor that allows TCDD and coplanar PCBs and PAHs to affect gene expression.104 Related substances that bind to the Ah receptor have effects similar to TCDD, but with vastly different dose-response curves related to their binding affinity. By binding to estrogen-like receptors that might normally be activated by estrogens, dioxin may inhibit the proliferation of breast cancers, although this finding is based on animal research. 
However, a study of the dioxin-exposed communities around the Givaudan chemical plant in Seveso, Italy, which exploded, releasing a cloud of dioxin, showed a deficit of breast cancer, although other cancers were elevated.105 A normal feature of receptor models is that binding is reversible, allowing the same biological function to be rapidly repeated. Toxic effects involving receptors are often much less reversible (for instance, the binding of carbon monoxide to hemoglobin or the inhibition of cholinesterase by organophosphate pesticides). Receptor binding also leads, among other things, to competitive inhibition between the xenobiotic and the endogenous compound.
Immunotoxins Immunotoxins act by suppressing or activating the immune system and through autoimmunity and hypersensitivity. Immunosuppression predisposes to infectious complications and virus-induced cancers. Some alter the formation of immunoglobulins, while others affect the lymphocytes. Some agents interfere with the production or function or lifespan of the B and T lymphocytes. B cells control antibodymediated or humoral immunity. T cells mature in the thymus and are the main factor in cell-mediated immunity. T cells are classified on
the basis of surface antigens, and it is now commonplace to quantify a variety of T-cell subpopulations and determine which functions have been inhibited. Substances known to interfere with the immune system include polyhalogenated aromatic compounds (e.g., 2,3,7,8-TCDD), metals (e.g., lead and cadmium), pesticides, and even air pollutants (e.g., NO2, SO2, tobacco smoke). Mercury, for example, causes autoimmune changes and glomerulonephritis in the brown Norway rat strain but none in the Lewis strain. This appears related to a depletion of the RT6+ subpopulation of T lymphocytes in the former but not in the latter.106 Xenobiotics may cause T cells, NK cells, and particularly macrophages to release cytokines which initiate inflammatory responses. The release of tumor necrosis factor alpha from various cells can stimulate epithelial cells to produce chemotactic signals that recruit leukocytes. Oxidants can activate transcription factors (for example, NF-kappaB), which influence inflammatory mediators.107 Sensitizers. Sensitizers are substances that act through the immune system to induce an increased immune response. These can be complete allergens or haptens. The main target organs are the skin and the respiratory system. Nickel and poison ivy (Rhus) contact dermatitis are common examples of such skin sensitization. Occupational asthmas reflect sensitization of the lung and airways to aerosols. The tests used to evaluate immunotoxicity include quantification of IgE, IgM, and IgG; counts of B cells, T cells and their subsets; T helper and suppressor cells and their activity; natural killer cells and their activity; and cytokine levels, for example, interleukin-2 production.
Genotoxicity Radiation and various chemicals are capable of damaging DNA, the genetic material, or interfering with the processes involved in chromosomal replication and cell division. Damage occurring in the germ cells may be heritable, while damage occurring in somatic cells is not. Mutagenesis. Some substances interact with genetic material, causing point mutations, chromosomal damage, or interference with meiosis, mitosis, or cell division. A variety of tests can measure these effects, including chromosomal aberrations, aneuploidy, sister chromatid exchange, translocation assays, micronucleus formation, the glycophorin A assay, and T-cell receptor genes. New genetic techniques allow sequencing of genes and detection of changes at specific codons. Mutation Spectrum. Genetic analysis can reveal a pattern of GC or AT base pair substitutions, deletions, or duplications at a single gene locus in individuals with a particular exposure. The relative frequency of the different mutations—the spectrum—may differ depending on the nature of the exposure. For example, somatic mutations at the X-linked hypoxanthine phosphoribosyltransferase (hprt) gene are mainly deletions (49%) or base pair substitutions (44%). GC substitutions are more common in nonsmokers, while smokers show a slight increase in AT substitutions.108 After radiotherapy, however, there was a substantial increase in rearrangements and deletions, which persisted for at least several years.109 Ras Oncogenes. Genotoxic chemicals may cause mutations in genes called proto-oncogenes, producing mutant oncogenes that encode modified versions of the natural protein product. Some changes, such as those in the ras proto-oncogene, increase cell susceptibility to cancer. The 21-kDa protein (p21) associates with the inner cell membrane and mediates responses to growth factors. Mutation at codon 13 “locks” the protein into the active form such that it no longer responds to other cell signals.
With signal transduction impaired, this permanent activation is associated with malignant transformation and proliferation.110 Tumor Suppressor Genes (p53). Certain proteins inhibit cell division cycles. If the genes that encode these control proteins mutate,
the resulting gene product may lack the inhibitory effect, allowing unbridled cell proliferation. One of these, the p53 gene, encodes a 53-kDa protein, which among other functions inhibits cell growth, slowing the process of neoplastic transformation. Once thought to be a tumor antigen, p53 required two decades of study before its suppressor role was elucidated.111 Transgenic mice that lack p53 develop cancer at an early age.112 Humans heterozygous for normal p53 suffer the Li-Fraumeni syndrome, with increased risks for cancer at a young age. Unlike radiation, chemical carcinogens do not attack DNA randomly; rather, there are hot spots vulnerable to adduct formation. There is an association between a change at codon 249 of the p53 product in people exposed to aflatoxin B1 and in people with hepatocellular carcinoma, suggesting that the toxin may cause cancer by this highly specific mutation.113 Similarly, benzo[a]pyrene, a lung carcinogen, consistently forms adducts with guanine at codons 157, 248, and 273 of p53.114
Reproductive Effects The processes of gametogenesis, fertilization, implantation, embryogenesis, organogenesis, and birth are complex and subject to many errors. Major errors incompatible with life generally result in abortion, which can be viewed as a quality control procedure. Adverse reproductive consequences include failure to form gametes (e.g., azoospermia) and formation of abnormal gametes. Once gametes are formed, several factors may intervene to prevent fertilization, embryogenesis, or implantation. There is concern that many synthetic chemicals, particularly those that bind to hormone receptors, may interfere with one or more of these steps. A notable case is dibromochloropropane (DBCP), a nematocide, which induced oligospermia or azoospermia in the men who manufactured and packaged DBCP. Those with azoospermia never recovered normal spermatogenesis after cessation of exposure. Lead also interferes with spermatogenesis. A long list of chemicals has been implicated in toxicity to the male reproductive system (including interfering with spermatogenesis, semen quality and sperm motility, erection, and libido). The list of chemicals affecting female reproduction includes cancer chemotherapeutic agents, other pharmaceuticals, metals, insecticides, and various industrial chemicals.115 Cord blood is routinely collected in many centers, and can be used to measure levels of toxicants or other biomarkers in the neonate.116
Teratogenesis From conception through birth and maturation, the organism undergoes a bewildering series of carefully timed events that require the formation and replacement of tissues. Some substances interfere with the complex processes of morphogenesis. Depending on the stage of embryogenesis and fetogenesis, they may affect different organ systems, leading to embryonic death, major structural birth defects, slowed maturation, or even postnatal effects such as learning difficulties.117 Thus, there is a sequence of critical windows in development, during which a xenobiotic may produce a specific defect. In general, exposure prior to implantation is likely to be lethal. Exposure during organogenesis begets birth defects or embryolethality. Later in fetal life, one sees intrauterine growth retardation or fetal death or functional changes that interfere with birth or postnatal development. Approximately 3% of live births have detectable congenital abnormalities, and additional congenital defects may become apparent later in life. Some of the defects are genetic or chromosomal in origin, but some are due to chemical exposures (including drugs taken by the mother). The recently recognized field of behavioral teratology involves study of some of these effects, such as the impact of lead exposure on psychomotor development and learning. The fetal alcohol syndrome reflects the specific toxicity of ethanol ingested by the mother on the development and behavior of the newborn.
Endocrine Disruptors This rubric applies to a wide range of substances and a wide range of effects and is an area of major controversy highlighted by Theo Colborn’s book, Our Stolen Future.118 The ability of DDT to influence
estrogen metabolism through enzyme induction has been established since the early 1960s, but recent research has shown a wide range of effects in various animal species from compounds that resemble hormones or that interact with hormone receptors to enhance or inhibit normal endocrinologic function, particularly related to development, maturation, and reproduction.119 Many of these compounds occur naturally in vegetables and have been called “phytoestrogens.” These include a group of isoflavonoid and lignan polycyclic compounds. At the same time that concern is voiced regarding interference with reproduction and development, their beneficial features are being exploited. One isoflavonoid, coumestrol, antagonizes estrogen during embryonic development, leading to reproductive abnormalities in behavior and hormone function. Others, such as genistein, protect against certain hormone-dependent breast cancers by competing with estrogens, or against other cancers by inhibiting proliferation, differentiation, or the vascular supply.120 However, public concern has focused more attention on industrial chemicals with endocrine activity, particularly on bisphenol A and nonylphenol, exposure to which is widespread in developed countries.121 A group of structurally diverse xenobiotics that activate a peroxisome proliferator-activated receptor in the liver may alter steroid metabolism.122 PAHs with steroid-like structures have weak estrogenic and antiestrogenic action. Research is proceeding on many fronts,123 including the bioengineered yeast estrogen screen, in which the human estrogen receptor and estrogen response elements are expressed to screen for the action of various xenobiotics. The potency of these compounds is influenced by environmental persistence and bioamplification, bioavailability, and binding affinities.124 Most attention focuses on estrogenicity, either enhancement (in relation to estradiol) or inhibition.
Binding to the estrogen receptor can be activating or blocking. Moreover, other parts of the endocrine system, such as the thyroid and the hypothalamic-pituitary-adrenal axis, are also vulnerable to interference.
Oxidative Stress and Free Radicals In addition to its critical role in supporting cellular respiration and oxidation-reduction reactions throughout the body, oxygen plays a more sinister role in toxicity. Normally there is a balance between oxidative and antioxidant reactions. However, oxidative reactions have long been known to play important roles in inflammation, aging, carcinogenesis, and toxicity.125 Much of this is mediated by the formation of superoxide radicals. Toxicologists speak of reactive oxygen species, some of which are free radicals; these are designated in formulas with a dot or asterisk. Oxygen can receive an electron and form the superoxide anion radical, which can in turn react with hydrogen to form hydrogen peroxide, which reacts with free electrons and hydrogen ions to form water and the highly reactive hydroxyl radical. In the course of these reactions, the highly reactive free radicals, particularly the hydroxyl radical, are available to attack macromolecules, initiating a variety of toxic effects. The superoxide anion radical is formed in many oxidation reactions in which oxygen acts as an electron acceptor. Thus chromium increases the formation of superoxide anion and nitric oxide in cells and enhances DNA single-strand breaks.126 Glucose-6-phosphate dehydrogenase (G6PD) is essential in cells facing oxidative stress, which in turn increases the amount of G6PD in exposed cells. The fungicide maneb damages dopaminergic neurons through oxidative stress, an effect enhanced by GSH depletion.127 Transgenic mice that overexpressed superoxide dismutase or GSH peroxidase were relatively resistant to the oxidant effects of maneb combined with paraquat.128 In response to the potential harm that these reactive oxygen species may cause, the body has evolved antioxidant defenses, including water-soluble vitamin C and the lipid-soluble vitamins E and A.
Superoxide dismutase, a metalloprotein, and GSH-dependent peroxidases, in association with GSH reductase, serve to scavenge free radicals. One of the consequences of free radical formation is reaction with lipids, including those in cell and organelle membranes, to form lipid peroxides, which in turn lead to cell damage and dysfunction.
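The stepwise reduction of oxygen described above can be written out explicitly; this is standard free-radical chemistry, with the iron-catalyzed Fenton reaction shown as one common route to the hydroxyl radical:

```latex
% One-electron reduction of oxygen yields the superoxide anion radical:
\mathrm{O_2 + e^- \rightarrow O_2^{\bullet -}}

% Dismutation (spontaneous, or catalyzed by superoxide dismutase)
% converts superoxide to hydrogen peroxide:
\mathrm{2\,O_2^{\bullet -} + 2\,H^+ \rightarrow H_2O_2 + O_2}

% Further reduction of hydrogen peroxide, e.g., by ferrous iron
% (the Fenton reaction), generates the highly reactive hydroxyl radical:
\mathrm{H_2O_2 + Fe^{2+} \rightarrow Fe^{3+} + OH^- + {}^{\bullet}OH}
```

The hydroxyl radical produced in the last step is the species chiefly responsible for attack on macromolecules, including the membrane lipids discussed in the next section.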
518
Environmental Health
Lipid Peroxidation Some of the cytotoxicity of chlorinated hydrocarbons such as carbon tetrachloride is mediated by peroxidation of membrane lipids, which can be caused by a variety of reactive oxygen species.129 An active area of research involves identifying naturally occurring and synthetic compounds that interfere with lipid peroxidation.130 For example, free radicals can attack fatty acids in the cell membrane, producing a lipid peroxyl radical, which is converted through a series of reactions into lipid aldehydes, resulting in both membrane disruption and the generation of new free radicals.
Nitric Oxide A major recent development has been recognition of the complex roles that nitric oxide (NO) plays in the cell as an intracellular messenger. This occurs in the nervous system, lung, and liver, where synthesis is altered by xenobiotics.131 NO also has antithrombotic properties and is a potent vasodilator. L-Arginine is converted to nitric oxide by a calcium-dependent, NADPH-dependent cytosolic enzyme, nitric oxide synthase (NOS).132 This formation is coupled to activation of glutamate receptors.133 Excess NO production increases intracellular free radicals, enhancing neuronal degradation.134 A common polymorphism (Glu to Asp at codon 298) impairs these activities.135

CARCINOGENESIS: INITIATION AND PROMOTION
Cancer is not a single disease, but includes a great many diseases that share a common property of uncontrolled cell proliferation. Normally cell proliferation proceeds in controlled fashion ensuring an adequate number of new cells for any given physiological task. Carcinogens dysregulate this process. It is customary to divide carcinogenesis into stages: initiation, promotion, proliferation, and clinically apparent disease. Initiation is the process by which the genetic material of the cell is altered, predisposing it to cancer.136 Such genetic abnormalities are often repaired, or initiated cells may be destroyed as part of the body’s defense against cancer. However, we are exposed to initiating events throughout our lives. Initiated cells may survive but remain dormant, perhaps controlled by the immune system. In the presence of promoters, initiated or mutated cells have a selective growth and division advantage over normal cells. Promotion is the process by which initiated cells are stimulated or allowed to become cancerous,137 and proliferation is the stage of clonal expansion. At each of these stages, defenses may reverse or retard the process of carcinogenesis and tumor growth.138
Effects on Signal Transduction Cell cycles are regulated by molecules that serve as signals to activate certain receptors, which transduce the signal (change its form) to influence genes. Signal transduction pathways typically alter gene expression or modify gene products, either enhancing or inhibiting their function. Many endogenous signal chemicals (such as hormones) as well as xenobiotics can alter gene expression by activating transcription factors, which in turn promote the transcription of certain genes.

MIXTURES AND INTERACTIONS
Even today most toxicological study and virtually all toxicological regulation are based on a single compound, tested and regulated one at a time. However, for several decades it has been appreciated that chemical exposure is rarely to a single compound, and that exposures occur either as mixtures of chemicals or against a backdrop of health status, lifestyle, and pharmaceuticals that influence susceptibility. Many studies of two chemicals at a time are performed, but few triadic studies have been attempted. Mixture research can either be synthetic, testing two or more chemicals together, or analytic, testing a mixture and then trying to determine which components cause an
effect. Because of the need to test multiple doses, mixtures research could employ the Latin square designs common in other disciplines.
Interactions When two chemicals are administered together or when an individual is exposed to a mixture of chemicals, there may be various interactions, identified as follows: (a) independence or additivity: each substance produces its own effect appropriate for its dose; (b) synergism: the combined effect is greater than either substance would produce alone or additively, that is, it is supra-additive but rarely actually multiplicative; and (c) antagonism: the combined effect is less than one would have expected from one or both chemicals administered alone. A classic example of a truly multiplicative interaction is the case of asbestos and smoking.139 Asbestos exposure increases the risk of lung cancer fivefold, while smoking increases the risk of lung cancer about tenfold over the nonsmoker’s risk. The asbestos worker who smokes has a risk about 50 times greater than the person with neither exposure. Synergism may occur when substance A enhances the effect of B, promotes its activation, or interferes with its degradation and excretion. Antagonism occurs when A interferes with the uptake of B, competes with it for metabolic enzymes or substrates, or enhances its degradation or excretion. Although truly synergistic interactions are rare, supra-additive interactions are probably common. In combination, the fungicide maneb and the herbicide paraquat, two chemicals likely to be used in farming, cause altered motor activity and coordination, a “Parkinsonian disease phenotype,” in mice at doses below which either could cause such damage alone,140 and this effect, as well as reduction of dopaminergic neurons in the nigrostriatal pathway, is enhanced in elderly (18-month-old) mice.141 There is a synergistic interaction between aflatoxin B1 (itself a potent hepatocarcinogen) and hepatitis B.
Hepatitis patients exposed to aflatoxin are at greatly increased risk compared with normal subjects.142 Many xenobiotics, such as the PCBs and dioxins, induce enzymes (for example, the P450s), which in turn alters the metabolism of endogenous chemicals and drugs.143 The oxidation of toluene, for example, is greatly increased by prior exposure to PCBs. Co-contamination with mercury and PCBs occurs in various species of wildlife, and synergism between them has been proposed as a cause of developmental defects.144 Likewise, phthalates and PCBs cause a supra-additive reduction of human sperm motility.145 Expanded toxicological investigation of mixtures is essential to advance our understanding of hazards, exposures, and risk.146 In addition to chemical mixtures, other modifying factors, such as stress, can influence how an organism responds to a xenobiotic. In a classic and rare triadic (three-chemical) study, White and Carlson147 showed that caffeine potentiated the combined effect of trichloroethylene and epinephrine in causing cardiac arrhythmias in rabbits. The combination of lead and restraint (stress) on pregnant rats had a greater effect on corticosterone levels in offspring than either treatment alone.148
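The distinction between additive and multiplicative joint effects can be illustrated numerically with the approximate relative risks cited earlier for asbestos (about 5-fold) and smoking (about 10-fold); the figures are illustrative round numbers from the text, not precise epidemiologic estimates.

```python
# Joint relative risk of lung cancer under two interaction models,
# using the approximate relative risks cited in the text.
rr_asbestos = 5.0   # risk relative to unexposed nonsmokers
rr_smoking = 10.0

# Additive model: excess risks (RR - 1) sum.
rr_additive = 1 + (rr_asbestos - 1) + (rr_smoking - 1)

# Multiplicative model: relative risks multiply.
rr_multiplicative = rr_asbestos * rr_smoking

print(rr_additive)        # 14.0 -- what simple additivity would predict
print(rr_multiplicative)  # 50.0 -- close to the observed ~50-fold risk
```

The gap between 14-fold and 50-fold is why the asbestos-smoking interaction is described as truly multiplicative rather than merely supra-additive.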
Age Interactions Not surprisingly, chemicals can have different effects at different stages of the life cycle, from critical windows in prenatal development to enhanced vulnerability during reproductive life or old age. Genotoxic compounds affect older individuals preferentially because the DNA-repair function involving base excision and adduct removal gradually declines with age. In rodents (Peromyscus) many enzymes involved in metabolism undergo systematic life-cycle changes.149 Parkinsonism is a disease of the elderly, and traditionally age is considered the only definite risk factor. Evidence for a chemical-age interaction has been shown in mice, where the fungicide maneb has a greater effect in 18-month-old animals than in young adults.141 Older animals did not experience an increase in tyrosine hydroxylase activity to compensate for the absolute depletion in the amount of the enzyme.
Interactions in the Environment

Interactions among chemicals in the environment are also prominent, but are beyond the scope of this chapter. For example, in air sunlight causes the photochemical oxidation of sulfur and nitrogen oxides (SOx and NOx) to produce ozone. Ozone itself interacts with volatile organics to produce acids and aldehydes.150

TABLE 20-3. EXAMPLES OF BIOMARKERS

Biomarkers of Exposure
- Specific chemical agent in blood, urine, hair, nails, exhaled air, feces
- Specific metabolite in blood, urine, exhaled air
- Specific effect marker can be an exposure biomarker
CLINICAL EVALUATION OF TOXICITY
Clinical toxicology usually refers to the emergency diagnosis and treatment of episodes of acute poisoning. Yet clinicians play an important role in the understanding of chronic poisonings as well. This requires an appropriate “index of suspicion,” accurate identification of possible hazards, and an estimation of the magnitude and circumstances of exposure, as well as delineation of toxic effects. The clinician obtains a detailed medical, social, environmental, and occupational history (see Chap. 19), performs a physical examination, and uses a variety of clinical and laboratory tests, including the assessment of biomarkers of exposure and effect. When a patient presents with a symptom complex or disease and a history of exposure to one or more substances, there is a twofold challenge. The first is to determine whether the chemical(s) can or do cause the disease. This is termed general causation, and often employs the Bradford-Hill postulates.151 The second, specific causation, asks whether exposure was sufficient to cause a particular person’s disease. Pulmonary damage may be apparent on chest x-rays, pulmonary function tests, or in alterations of the cells obtained by bronchoalveolar lavage. Alternatively, pulmonary cells obtained by bronchoalveolar lavage (or possibly by sputum induction) may be used in a lymphocyte proliferation test to identify specific sensitivities, for example, to beryllium.152 Damage to the liver can often be detected by disturbances in the pattern of various enzymes that serve as markers of liver cell damage. Similarly, severe kidney damage may be reflected in the excretion of large proteins such as albumin, while low-molecular-weight proteins may be biomarkers of earlier damage.153 The emerging discipline of proteomics seeks patterns of biomarkers sensitive enough to detect early damage, and specific enough to identify the cause.154
BIOMARKERS
The past decade has seen a tremendous emphasis on biomarkers in understanding toxic effects on humans155,156 and other organisms.157 In the broadest sense, anything that can be measured (for example, blood pressure and historical information) could be included under the rubric of “biomarker,” for there is no clear distinction between these clinical measures and those identified through molecular biology, biochemistry, or analytic chemistry. The highest level of biomarker is direct measurement of the chemical or a specific metabolite in human tissues.158 However, for many chemicals, particularly low-molecular-weight organics, rapid metabolism or excretion renders this not feasible. Table 20-3 provides examples of biomarkers. It has been convenient to divide biomarkers into three categories: markers of susceptibility, markers of exposure, and markers of effect. However, the distinction is blurred in practice, for example, when a xenobiotic (such as benzo[a]pyrene) forms an adduct with DNA that can be considered an effect, a marker for exposure, or (if it increases the risk of cancer) a marker of susceptibility (future likelihood of developing cancer). Biomarkers of exposure indicate how much of a substance has contacted or been absorbed into the body. Biomarkers of susceptibility are used to identify individuals with unusually high (or low) susceptibility to a particular stressor or xenobiotic. Biomarkers of effect are directly related to the toxic endpoint of interest. For example, cholinesterase depression by organophosphate pesticides is an effect biomarker, reflecting the means by which OPs cause disease, but it is a poor exposure marker since levels among people do not correlate closely with exposure. Familiar biomarkers
TABLE 20-3 (continued)

Biomarkers of Effect
Male reproduction
- Semen quality, sperm count, and motility
- Mullerian-inhibiting factor
- Chromosomal aberrations
- DNA adducts in sperm
Female reproduction
- Chorionic gonadotropin assay
- Urinary hormone assays
Pulmonary
- Pulmonary function testing
- Airway reactivity (challenge tests)
- Pulmonary cytology
- Clara cell protein 16
Immunology
- Immunoglobulin levels
- Lymphocyte counts (types, subtypes)
- Lymphocyte function assays
- T-cell-dependent antibody assays
- Lymphocyte proliferation tests
- Cytokine/chemokine activity
- Receptor expression assays
- Macrophage/leukocyte respiratory burst response
Lead poisoning
- Blood lead
- Zinc or erythrocyte protoporphyrin
- Delta-aminolevulinic acid in urine
- Delta-aminolevulinic acid dehydratase activity
- Bone lead by X-ray fluorescence

Biomarkers of Susceptibility
- Age, sex
- Single nucleotide polymorphisms in proteins (phase I metabolic enzymes, phase II transferases, p53 tumor suppressor gene sequence)
- Metabolic enzyme activity
- DNA repair assays
include blood lead (a biomarker of exposure) and zinc protoporphyrin (a biomarker of effect from lead exposure). The study of biomarkers in various human populations has been labeled molecular epidemiology.155 Single nucleotide polymorphisms (SNPs) in CYP2D6, for example, result in fast or slow metabolism of certain substrates, rendering individuals more or less susceptible to toxicity, depending upon whether the xenobiotic metabolized by CYP2D6 is active in its native or metabolized form. The metabolism of PAHs by CYP1A1 is also subject to genetic variation, enhancing the susceptibility to lung cancer from the benzo[a]pyrene in tobacco smoke. Similarly, SNPs in phase II enzymes such as N-acetyltransferase may enhance susceptibility to bladder cancer in those exposed to aromatic amines, while reducing susceptibility to certain lung carcinogens. Clara cells are nonciliated cells of the bronchioles which secrete a 16-kDa anti-inflammatory protein (CC16), which has been proposed as a marker of increased epithelial permeability. CC16 levels in blood were increased in healthy volunteers exposed to ozone,159 but were decreased in workers exposed to NOx.160 There is a great variety of potential biomarkers, and new analytic methods continually enhance the ability to measure lower and lower levels of a marker, while advances in molecular biology add to the variety of potential markers.156 Biomarkers of exposure are usually the
xenobiotic itself or a specific metabolite. Biomarkers of susceptibility are often genetic, but measurement of iron-binding saturation may indicate susceptibility to cadmium absorption, for example.161 Biomarkers of effect may be physiological, cellular, biochemical, or molecular. Actual application of molecular markers is growing rapidly. Biomarkers complement environmental measurements in tracking the movement of a xenobiotic from its source, through the environment, to and into the body, and thence to metabolic, storage, excretory, and/or target organs. Biomarker utility depends on sensitivity, specificity, and predictive value.
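The dependence of biomarker utility on sensitivity, specificity, and predictive value can be made concrete with Bayes' rule. The numbers below are hypothetical, chosen only to illustrate why even a fairly specific biomarker yields mostly false positives when the condition it flags is rare:

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(condition present | positive biomarker), by Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical biomarker: 90% sensitive, 95% specific.
# If only 1% of the screened population is truly affected:
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(round(ppv, 3))  # 0.154 -- most positive results are false positives
```

At 10% prevalence the same assay's positive predictive value rises to about 0.67, which is why a biomarker validated in a heavily exposed occupational cohort may perform poorly when applied to the general population.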
Biomarkers for Lead Using lead as an example, the commonest approach to assessing exposure is simply to measure the lead content of a sample of whole blood. In the United States this is expressed as µg/dL, while in other countries it is reported as µmol/L. The amount of lead circulating in the blood at any time represents recent intake as well as any mobilization from the storage organs, particularly bone. One target for lead is hemoglobin synthesis. Lead blocks several of the enzymes in the heme pathway, particularly delta-aminolevulinic acid dehydratase (ALA-D), resulting in the buildup and excretion of ALA in the urine. Ferrochelatase is also inhibited, resulting in increased protoporphyrin, which is capable of binding zinc instead of iron. Free erythrocyte protoporphyrin (FEP) and zinc protoporphyrin (ZPP) levels increase at high blood lead levels. Urinary ALA, once used as the biomarker of choice for lead poisoning, was considered too sensitive for monitoring occupational exposures. FEP or ZPP became widely used, particularly in evaluating childhood lead poisoning, but are not sufficiently sensitive for evaluating changes when blood lead is below 10 µg/dL. Urinary ALA and FEP were biomarkers of exposure and also biomarkers of effect, since they are in the direct pathway by which lead produces anemia. Interest in the chronic and cumulative exposure to lead, poorly reflected by current blood lead levels, has led to studies of bone lead, since bone is the major repository of lead in the body and accumulates lead in lieu of calcium. In vivo X-ray fluorescence has proven effective in some studies as a measure of lifetime exposure.
This technique revealed that men as well as women mobilize lead from bone as they age.162 Hair is not used for assessing lead; however, the concentration of mercury in hair is a good indicator of methylmercury intake, but it does not reflect inorganic mercury exposure, while blood mercury reflects inorganic or elemental exposure and correlates with dental amalgams.163 The National Research Council’s Board on Environmental Studies and Toxicology has a Committee on Biologic Markers, which has published monographs on markers in pulmonary toxicology,164 reproductive toxicology,165 and immunotoxicology.166 The potential applications of biomarkers are boundless. They can be used to estimate exposure, internal dose, and dose to target cells. They can be the endpoint in dose-response assessments. They can be used to distinguish exposed from unexposed populations for epidemiologic studies. The study of biomarkers is largely in the domain of analytical chemistry. Enhanced specimen processing and analytic methodology have led to the vanishing zero, as chemical analytic instruments can now measure in the picomolar (10⁻¹²) and even femtomolar (10⁻¹⁵) range, while new fluorescent techniques advertise biological detection at the attomolar (10⁻¹⁸) and even zeptomolar (10⁻²¹) level, the latter corresponding to only a few hundred molecules per liter (Avogadro’s number: 6.022 × 10²³ molecules per mole). Genomics now plays a major role in the quest for biomarkers.
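The two reporting conventions for blood lead mentioned above (µg/dL in the United States, µmol/L elsewhere) differ only by the molar mass of lead (about 207.2 g/mol) and the factor of 10 between deciliters and liters; a small sketch of the conversion:

```python
PB_MOLAR_MASS = 207.2  # g/mol, standard atomic weight of lead

def ug_dl_to_umol_l(ug_per_dl):
    """Convert a blood lead level from ug/dL to umol/L."""
    ug_per_l = ug_per_dl * 10        # 1 L = 10 dL
    return ug_per_l / PB_MOLAR_MASS  # (ug/L) / (g/mol) = umol/L

# The 10 ug/dL level mentioned in the text:
print(round(ug_dl_to_umol_l(10), 3))  # 0.483
```

So a blood lead of 10 µg/dL is about 0.48 µmol/L, a conversion worth keeping at hand when comparing U.S. and international reports.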
Adducts Adducts are formed when a chemical or its metabolite binds with macromolecules, particularly nucleic acids, but also proteins such as hemoglobin. Smokers, for example, have higher levels of benzo[a]pyrene adducts to DNA than do nonsmokers.167 Some adducts are repaired within hours, while others persist. A variety of techniques have been used to assess adduct formation, including the 32P postlabeling (Randerath) technique based on differential mobility of DNA bases involved in adduct formation. This method has yet
to prove useful in screening populations. DNA-protein cross-linking is promoted by a variety of genotoxic chemicals, including hexavalent chromium.168 Two benzene metabolites, hydroquinone and muconaldehyde, induce DNA-protein cross-links supra-additively.169
Radiation Damage and Chromosomes The main body of information on the genetic effects of ionizing radiation derives from the 60-year follow-up of the atom bomb victims of Hiroshima and Nagasaki, conducted cooperatively by the United States and Japan through the Radiation Effects Research Foundation. The chromosomal damage in 2300 survivors shows a clear dose-response relationship that parallels the incidence of leukemia in the same population.170 Radiation toxicology is summarized by Harley.171
CAUSALITY: ENVIRONMENTAL CHEMICAL EXPOSURE AND HEALTH EFFECTS
In the laboratory, establishing causality between a chemical exposure and a health effect depends upon sound experimental design with careful attention to alternative hypotheses. It may or may not involve careful definition of the mechanism by which the effect is achieved. In the community, determination of cause and effect is much more difficult. Under these “natural” conditions, the hazardous substance is not always identified or may be present in mixtures, and the dose and the conditions and time frame of exposure are seldom known. It may be difficult to ascertain who is exposed as well as to what. Often there is a bewildering array of symptoms and signs suspected or attributed to the putative cause. Simply defining relevant health effects may be a costly and frustrating venture, while linking them to specific exposures may be impossible.172 Scientists and clinicians may not appreciate that the courts impose entirely different standards for establishing causation. Moreover, standards of causation differ under different bodies of law. Thus, in some jurisdictions one may have to establish a “reasonable probability,” in other cases it must be “more likely than not,” or “without this event the outcome probably would not have occurred.” In some circumstances one must establish an attributable risk, how much of the outcome can be related to the particular exposure. In other cases, the causation is “presumptive” unless proven otherwise. For example, the U.S. Congress required the Veterans Administration to give veterans the benefit of the doubt in cases involving herbicide exposure, and certain diseases in exposed veterans are now presumed to be related to herbicide exposure and qualify for compensation.
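The attributable-risk standard mentioned above has a simple arithmetic core. A sketch (the relative-risk values are hypothetical): the attributable fraction among the exposed is (RR − 1)/RR, which exceeds 50%, and thus meets a "more likely than not" threshold, only when the relative risk exceeds 2.

```python
def attributable_fraction(relative_risk):
    """Fraction of cases among the exposed attributable to the
    exposure: (RR - 1) / RR. Exceeds 0.5 only when RR > 2."""
    return (relative_risk - 1.0) / relative_risk

# Hypothetical relative risks, for illustration only
for rr in (1.5, 2.0, 3.0):
    print(f"RR = {rr}: attributable fraction = {attributable_fraction(rr):.2f}")
```

This is why, in litigation, an epidemiologic relative risk of 2 is often treated as a pivotal value: below it, an individual case is arithmetically more likely to have arisen from background causes.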
Case study: Vietnam Veterans Exposed to Herbicides Among the herbicides used in Vietnam to deny cover and destroy crops, Agent Orange (a 1:1 mixture of two common herbicides) became a great concern. Synthesis of one component, 2,4,5-trichlorophenoxyacetic acid, resulted in a small amount of an unwanted condensation product, 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD), generally considered the most toxic synthetic compound. Extensive studies of veterans and other populations exposed to herbicides have resulted in different levels of confidence regarding causation. Table 20-4 gives an example of causal associations for various medical conditions and herbicide exposure in Vietnam (from the Institute of Medicine).173 Establishing causation is one of the most challenging tasks in environmental toxicology. The Hill postulates mentioned above provide guidance. When toxicological studies give conflicting results, a weight-of-evidence approach is used. Careful review of the body of epidemiologic and toxicological evidence cannot always provide definitive answers, even for commonly studied chemicals to which many people are exposed. And only a small percentage of chemicals have been adequately studied. Elucidation of the causes of Gulf War Syndrome174 and World Trade Center cough175 provides additional examples.
TABLE 20-4. FOUR LEVELS OF ATTRIBUTABILITY FOR DISEASES IN VIETNAM VETERANS173

Sufficient Evidence of Causation: chronic lymphocytic leukemia; soft-tissue sarcoma; non-Hodgkin’s lymphoma; Hodgkin’s disease; chloracne.

Limited or Suggestive Evidence of an Association: respiratory cancer; prostate cancer; multiple myeloma; early-onset peripheral neuropathy; porphyria cutanea tarda; type II diabetes mellitus; spina bifida in offspring.

Inadequate or Insufficient Evidence: hepatobiliary cancer; oral, nasal, and pharyngeal cancer; many other cancers; reproductive effects and birth defects; many other conditions.

Limited or Suggestive Evidence of No Association: gastrointestinal tumors; brain tumors.
Toxicology
521
Transgenic and Knockout Mice For decades, toxicologists have taken advantage of rodent strains inbred for specific metabolic or susceptibility characteristics that rendered a particular strain suitable for a particular test. Genetic engineering has produced mice with highly specific defects that might not have arisen by chance, thereby offering a new array of “tools” for toxicological research. Thus, a mouse can be designed to be deficient in a particular protein or overexpress it, and traits can be combined in the same animal such as the severe combined immunodeficiency (SCID) mouse.
Human Exposure Studies Human experimentation always presents ethical challenges and must be balanced against the value of the information gained. It should be confined to questions that cannot be answered by other methods, must be designed to avoid serious harm to subjects, and is conducted under rigorous scrutiny by institutional review boards. Many studies have been done using ingested or injected routes. Improved technology controlling air flow, particle generation, and vapor delivery has allowed carefully controlled inhalation studies as well.174 Unfortunately, one alternative to carefully controlled chamber studies is the inadvertent, uncontrolled industrial exposure that has taken place in many settings without benefit of IRB review.
TOXICITY TESTING
Toxicologists employ a wide variety of systems and paradigms to test chemicals in order to predict their effects on human health or the environment. Studies range from modeling and in vitro studies in cell components and cell cultures to animal experiments and human epidemiology. The factors that affect toxicity in humans (Table 20-1) must be considered in designing the experiments. One must choose the appropriate animal model or in vitro test system. If using animals, the genetic strain, sex, and age of the animals must be selected. The dosage schedule (single or multiple; acute, subchronic, or chronic) and appropriate dose levels must be chosen. The route of administration should be relevant to natural conditions of exposure. The experiment should last long enough to fully encompass any effects that have a long latency. And appropriate controls must be selected. In addition to these design features, there are standards for good laboratory practice that indicate how animals must be cared for and how data must be recorded, providing appropriate quality assurance. Increasingly, a variety of in vitro test systems are replacing many studies traditionally done in animals.
Bioassays of the National Toxicology Program The National Toxicology Program (NTP), operated by the National Institute of Environmental Health Sciences, sponsors long-term rodent studies to detect the carcinogenic or other toxic properties of chemicals.176 Chemicals are selected depending on the data needs of governmental agencies and in response to public concerns. The standard protocol is two species (rat, mouse), both sexes, and a minimum of 50 individuals for each category, with oral dosing over a 2-year “life span.” These 2-year bioassays can provide information on metabolism and genetic, reproductive, and developmental toxicity as well as on toxic effects on various organ systems. The NTP bioassays serve an important role in screening new chemicals for carcinogenic activity and classifying them with respect to human carcinogenicity. However, the main application has been the use of the tumor incidence data in risk assessment. Only a small fraction of chemicals in commerce have been tested in these assays, which are expensive and time-consuming, and alternative techniques are sought to provide reliable answers more economically.10
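The statistical limit of a 50-animal group can be sketched with simple binomial arithmetic (an illustrative calculation, not part of the NTP protocol; the incidence values are hypothetical):

```python
def prob_at_least_one_tumor(true_incidence, n_animals=50):
    """Chance that a group of n animals shows at least one tumor
    when each animal independently has the given true incidence."""
    return 1.0 - (1.0 - true_incidence) ** n_animals

# With 50 animals per group, a 1% true tumor incidence produces
# no tumors at all in roughly 60% of groups, i.e., it is easily missed.
for p in (0.01, 0.05, 0.10):
    print(f"incidence {p:.0%}: P(>=1 tumor in 50 animals) = "
          f"{prob_at_least_one_tumor(p):.2f}")
```

This is one reason high doses are used in bioassays: small groups simply cannot detect the low excess incidences that still matter at the population scale.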
ANIMAL WELFARE AND ANIMAL RIGHTS

Toxicologists have become increasingly attentive to the animal welfare/animal rights movements. Proponents of animal rights argue that animals have intrinsic rights that, in the extreme, should protect them from any and all use in experimental research. Whether “animal rights” are guaranteed by either human or divine “law” is beyond the scope of this chapter. However, animal welfare is clearly an important issue for toxicologists. The Animal Welfare Act (AWA) is administered by the Animal and Plant Health Inspection Service of the U.S. Department of Agriculture. Currently it applies only to mammals, exclusive of mice and rats. The discovery and standardization of alternatives to animals in research and testing systems is an active research area, and several journals are devoted to this topic. The challenge is to develop test systems (for example, cell cultures) that mimic the whole animal; a major limitation is the rapid dedifferentiation of cultured cells, with loss of the critical phenotype, which limits extrapolation. Experimental animals should be spared unnecessary stress, discomfort, or pain. The AWA requires that alternatives to painful procedures be considered. Increasingly, researchers have sought alternative models that do not require whole animals. At the same time, animal research has been redesigned to use fewer animals and to minimize pain and discomfort. The National Science Foundation and National Institutes of Health have recognized the importance of animal welfare not only from a humane perspective but because stressed animals cannot provide an unbiased response in experimental situations. Accordingly, researchers using animals must take into account animal care guidelines, which stipulate the conditions under which animals must be kept and the availability of veterinary care. Research protocols must be reviewed by institutional animal care committees as well as by human subjects review boards.
Animal facilities must be inspected and accredited, usually by the Association for Assessment and Accreditation of Laboratory Animal Care (AAALAC). With the recognition that stress is an important modifying factor in toxicology,25 toxicologists must be more attentive to the stresses imposed on animals (even mice and rats) not only by handling and procedures but also by housing conditions and crowding. The concern over animal welfare reaches its peak when primates are used. Primates are expensive to acquire and maintain, and most studies of primates can afford only a few animals, which often live under unnatural and extremely stressful conditions. In addition, since
extrapolation from primates to humans is not always more appropriate than extrapolation from other animal models, most toxicology research does not involve primates, and the trend has been to close rather than expand primate research facilities.
REGULATING TOXIC EXPOSURES
Protecting people and ecosystems from hazardous chemicals requires the interplay of governmental and nongovernmental bodies. There is a complex governmental regulatory framework for toxic chemicals in the environment. Each agency has distinct jurisdiction, and unfortunately there is not always consistency among agencies. Among these agencies and programs are the following:
Food and Drug Administration This agency is responsible for protecting the integrity of food, drugs, and cosmetics (see Chap. 47) and ensuring that harmful levels of xenobiotics, additives, and adulterants are not present. It sets allowable daily intakes (ADIs) for various chemicals. A major change in the Food Quality Protection Act of 1996 was to increase its coverage of chemicals while setting aside the Delaney Amendment, which had forbidden any animal carcinogen as an additive, or any pesticide that left a measurable residue, in food.
Occupational Safety and Health Administration Established in 1970 by the Occupational Safety and Health Act, this branch of the U.S. Department of Labor is required to set standards that will protect workers from adverse health consequences (see Chap. 46). The Occupational Safety and Health Administration (OSHA) establishes permissible exposure limits (PELs), 8-hour time-weighted averages to which a worker could be exposed 40 hours a week for a 40-year working lifetime, and short-term exposure limits (STELs), the latter being ceiling values that cannot be exceeded for more than 15 minutes. Unfortunately, most of its PELs are seriously outdated, being based on 1968 data.
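The time-weighted-average arithmetic behind a PEL can be sketched concretely. A hedged illustration (the exposure segments and concentrations are hypothetical, and no particular OSHA substance limit is implied):

```python
def eight_hour_twa(segments):
    """Time-weighted average concentration over an 8-hour shift.
    segments: list of (hours, concentration) pairs; any unsampled
    remainder of the shift is treated as zero exposure."""
    return sum(hours * conc for hours, conc in segments) / 8.0

# Hypothetical workday: 2 h at 150 ppm, 4 h at 75 ppm, 2 h at 0 ppm
exposure = [(2, 150.0), (4, 75.0), (2, 0.0)]
print(f"8-h TWA = {eight_hour_twa(exposure):.1f} ppm")
```

Note that a day can comply with an 8-hour TWA limit while still containing short peaks; that is precisely the gap the STEL and ceiling values are meant to close.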
Environmental Protection Agency The Environmental Protection Agency (EPA) has far-flung responsibility for protecting the environment. EPA sets and enforces regulations on allowable levels of pollution and contamination in soil, air, and water. It implements the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), originally passed in 1947, and the Toxic Substances Control Act (TSCA), originally passed in 1976, as well as the Clean Air and Clean Water Acts and many others. One of the latter acts established the National Toxicology Program and requires EPA to evaluate data on any new chemicals proposed for manufacture and importation.
Department of Transportation The Transportation Act governs the labeling and handling of hazardous chemicals shipped in interstate commerce. It requires classification and testing of chemicals to determine the type and extent of hazard they might pose in the event of a spill.

PRODUCT SUBSTITUTION

Both environmental and industrial toxicology have focused on the development of substitutes for widely used chemicals that are unacceptably toxic or for other reasons no longer acceptable. The chlorofluorocarbons (CFCs), used as refrigerants and propellants, have global atmospheric effects, catalyzing the destruction of atmospheric ozone, which resulted in an international agreement to phase out their use. The development of compounds that share the CFCs’ desirable properties but are nontoxic and environmentally benign is a major area of research. Likewise, the widely used dry-cleaning fluid tetrachloroethylene is a possible human carcinogen. This has prompted a quest for alternative dry-cleaning substances, including the use of liquid carbon dioxide.178

Chlorine. Another controversy concerns the movement to ban all chlorine-containing products. Many of the chlorinated solvents are classified as known or probable human carcinogens. Exposure to chlorination products in drinking water has been linked to low birth weight and small head circumference,179 and to intestinal cancer,180 although the potency is low and causality is in question.

Organomanganese in Gasoline. The removal of organic lead from gasoline was a major success in applied toxicology. However, its proposed replacement, methylcyclopentadienyl manganese tricarbonyl (MMT), may greatly increase exposure to manganese, itself a potent neurotoxin that causes a parkinsonian-like syndrome. MMT has been used in Canada since 1977, and urban pigeons there have higher levels of manganese than do rural ones, consistent with traffic-related contamination.181 Widespread use of this compound in gasoline seems bound to repeat the lead-in-gasoline tragedy of the mid-twentieth century. In 1997, Canada terminated the use of MMT. EPA’s attempt to prevent incorporation of MMT in gasoline was overturned by Ethyl Corporation’s court challenge. To date, however, MMT has not found its way into U.S. automotive fuel.

PRECAUTIONARY APPROACH

Although society has always recognized the importance of precaution, it is not always embodied in regulatory practice. The Toxic Substances Control Act uses a precautionary approach, requiring premarket testing of new chemicals. The precautionary approach developed extensively in the 1990s with regard to new technologies. Its general principle is that uncertainty, or the lack of definitive information, should not delay the regulatory or other control of new technologies or substances where there is a reasonable presumption of serious or irreparable harm. Those who introduce new substances or processes bear the obligation of demonstrating their safety. This view competes with the alternative that a substance or technology is innocent until proven guilty, which at its most conservative requires demonstrating effects in humans. Advocates of precaution argue that human epidemiologic studies require large amounts of funding, that the most definitive prospective studies take long time periods, and that certainty requires multiple studies. Since epidemiologic methods are inherently conservative (low alpha and high beta, favoring type II errors over type I errors), a precautionary approach should always be considered.182

FUTURE DIRECTIONS

Imaging New imaging techniques such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and MRI microscopy offer great promise for toxicological research in vivo. These techniques have rapidly assumed prominence in clinical medicine, and toxicology is beginning to exploit them.

Toxicogenomics The promise of these new technologies is just being realized. The ability to produce gene chips has opened new horizons for research. Gene expression arrays provide semiquantitative responses to xenobiotics, and data clustering techniques allow identification of which genes are up-regulated and down-regulated in response to a particular challenge. Because of the large number of responses (thousands of dependent variables read simultaneously), informatics is developing in tandem, building on principles of numerical taxonomy and multivariate clustering techniques established a generation ago by Sokal and Sneath.183
The rapidly growing genomics literature illustrates two major trends in research: a descriptive pattern approach and a mechanistic approach. The former relies heavily on cluster analysis to elucidate patterns of gene response or protein increase or decrease in response to treatment. The latter tests hypotheses regarding particular gene-gene and gene-protein responses.184 In view of the rapid developments in chip technology and in the identification and annotation of gene sequences on chips, it has become challenging to relate current findings to those published only a few years ago.185
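The gene-level readout that feeds such pattern analyses can be illustrated with a toy example. A minimal sketch (the gene names, expression values, and log2 fold-change threshold are all hypothetical) of classifying genes as up- or down-regulated relative to a control:

```python
import math

def classify_regulation(control, treated, threshold=1.0):
    """Label each gene up- or down-regulated when the absolute
    log2 fold change (treated vs. control) exceeds the threshold."""
    labels = {}
    for gene in control:
        log2_fc = math.log2(treated[gene] / control[gene])
        if log2_fc >= threshold:
            labels[gene] = "up"
        elif log2_fc <= -threshold:
            labels[gene] = "down"
        else:
            labels[gene] = "unchanged"
    return labels

# Toy expression values (arbitrary units) for hypothetical genes
control = {"cyp1a1": 10.0, "gsta1": 8.0, "actb": 50.0}
treated = {"cyp1a1": 80.0, "gsta1": 3.0, "actb": 52.0}
print(classify_regulation(control, treated))
```

Real arrays apply this logic to thousands of genes at once, which is exactly why the multivariate clustering methods mentioned above become necessary to summarize the result.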
Nanotechnology New technologies introduce new materials. While the future of nanotechnology is bright and imaginative, the hazards posed by solid-phase structures small enough to be absorbed through membranes require study. Nanoparticles in the 1 to 100 nanometer (up to 0.1 µm) size range occur naturally, and some have been in production for a long time (e.g., carbon black). Combustion sources such as diesel exhaust produce a range of particle sizes, some in the ultrafine range (less than 100 nm aerodynamic diameter), and these have disproportionately higher inflammatory effects than an equal mass of fine particles.186 Small nanoparticles can enter cells and be transported along axons. The smaller the particle, the greater its surface to volume or mass
ratio, and the greater potential for bioactivity. Extensive planning is required for meaningful nanotoxicology research.187
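The size scaling stated above follows directly from sphere geometry: for a sphere, the surface-to-volume ratio is 6/d, so halving the diameter doubles the ratio. A worked illustration (the diameters are arbitrary examples):

```python
import math

def surface_to_volume_ratio(diameter_nm):
    """Surface area / volume for a sphere of the given diameter,
    in nm^-1. Algebraically this simplifies to 6 / diameter."""
    r = diameter_nm / 2.0
    area = 4.0 * math.pi * r ** 2
    volume = (4.0 / 3.0) * math.pi * r ** 3
    return area / volume

# A 10 nm particle has 100x the surface-to-volume ratio of a 1 um one
for d in (1000.0, 100.0, 10.0):
    print(f"d = {d:6.0f} nm: S/V = {surface_to_volume_ratio(d):.3f} per nm")
```

Since surface atoms are the ones available to react with tissue, this geometric scaling is one simple rationale for expecting greater bioactivity per unit mass from smaller particles.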
Toxic Environments Environmental health has traditionally examined environmental media in terms of pollutant concentrations in air, water, soil, and food. Research has shown the importance of interactions among chemicals and between chemicals and other factors such as stress. The social and home environment can be toxic as well,188 influencing both exposure and response to contaminants such as lead and cocaine. Moreover, urban and suburban environments impose their own stresses, which are attracting attention to the “built environment”189 and to the issue of environmental justice, which, as an important part of environmental toxicology, should be a major part of the national exploration of health disparities.190

OLD DIRECTIONS
Among the many traditional areas of toxicology that require increased attention, two stand out in my mind: (1) mixtures and interactions and (2) time-dose interactions. As the new frontiers with dazzling technologies attract attention and funding, these traditional areas may have trouble competing for decreasing scientific grant funds.
Neurobehavioral Toxicity Nancy Fiedler • Joanna Burger • Michael Gochfeld
The nervous system is a prominent target for many poisons that can cause morphological or functional damage.1 Classical neurotoxic effects include the depression of central nervous function by anesthetic-like solvents, the weakness from anticholinesterase pesticides, the tremor of chronic mercurialism, and the peripheral neuropathy of lead poisoning. Recent attention has focused on dementia attributed to chronic solvent exposure and on neurodevelopmental disruption and cognitive impairment caused by prenatal exposure to ethanol, mercury, lead, and polychlorinated biphenyls (PCBs). Whereas evaluation of nervous system function was formerly the domain of the neurologist and electrophysiologist, neurobehavioral testing offers another dimension of evaluation that is important for several reasons. First, neurobehavioral tests are sensitive to subtle behavioral changes that may occur at doses lower than those required to cause anatomical or physiological changes or even symptoms or signs that can be observed by the clinician.2–5 Second, because neurobehavioral toxins can affect the higher levels of function and functional integration essential for complex cognitive processes, neurobehavioral (including psychometric) tests offer standardized methods to evaluate these critical and somewhat unique aspects of human behavior. Although acute effects are often dramatic, the discipline has become increasingly concerned with chronic effects such as impaired learning, memory, and vigilance and depressed psychomotor performance.
Persistent behavioral effects can occur as a consequence of acute poisoning or from prolonged exposure to low levels of chemicals.6 Neurobehavioral evaluations of exposed individuals or groups provide an opportunity to objectively evaluate the many nonspecific symptoms such as weakness, dizziness, irritability, listlessness, anorexia, depression, disorientation, incoordination, difficulty in concentrating, or personality changes, which are sometimes attributed to environmental exposures. Ultimately toxicity occurs through the interaction of a chemical and a molecular target,7 yet in neurobehavioral toxicology we often treat the nervous system as a “black box.” Lotti1 notes the frustrating
search for morphological correlates or markers of functional toxicity. In some instances, the molecular approach has been rewarding, although many mechanisms remain elusive. For example, neuropathy target esterase (NTE) was identified as the target for organophosphate-induced delayed polyneuropathy (OPIDP).8 Although this mechanism was proposed in 1975, the physiological function of NTE remains obscure, and although the reaction of organophosphates (OPs) with NTE is understood, details of the subsequent cascade leading to the polyneuropathy are not known.9 Moreover, the known function of NTE (which allows measurement of its activity) is not related to the likelihood of developing the polyneuropathy. This neuropathy is characterized by distal axonal degeneration in both the central and peripheral nervous systems. OP pesticides are apparently less potent than tri-ortho-cresyl phosphate in causing OPIDP.9 Similarly, lead alters the sensitivity of the N-methyl-D-aspartate (NMDA) receptor complex. Several areas of the brain are rich in NMDA receptors, and the density of receptors varies with time during development. Antagonists of NMDA receptors impair learning in several study designs.10–12 For example, in birds, NMDA antagonists block the learning of song.13 The ramifications of this change on imprinting, learning, and memory, which are influenced by the NMDA receptor in certain parts of the brain, are an active area of neurobehavioral research.14,15 Lead also blocks voltage-dependent calcium channels that mediate neurotransmitter release.16 While researchers continue the search for mechanisms to explain toxicity at the molecular level, behavioral methods to detect and quantify changes in function from acute and chronic exposure have developed in parallel.
For example, in 1973 the National Institute for Occupational Safety and Health (NIOSH) convened a Behavioral Toxicology Workshop for Early Detection of Occupational Hazards,3 which reviewed research findings on many substances in various organisms and considered the tools that could be applied for evaluation of behavioral toxicity.2 During the past 30 years, the field has grown rapidly, with a variety of experimental paradigms and clinical
524
Environmental Health
approaches for detecting behavioral manifestations of neurotoxicity.17,18 There have been extensive reviews of experimental and clinical findings (see General References). In this chapter, we review the target components of the nervous system, examples of neurotoxicants, the kinds of behavioral abnormalities seen, and some of the neurobehavioral tests currently used for evaluating such abnormalities.
TARGET COMPONENTS OF THE NERVOUS SYSTEM
Hypothalamic-Pituitary-Adrenal (HPA) Axis The hypothalamus is a major physiological control area of the brain, and hypothalamic signals to the pituitary cause the release of various hormones that control other endocrine organs, including the adrenals. Maternal exposure to lead, for example, permanently alters HPA responsiveness in offspring.19 Prenatal exposure to morphine inhibits the HPA axis and alters the hypothalamic metabolism of serotonin, inducing chronic sympathoadrenal hyperactivity; when exposed to ether by inhalation, morphine-treated rats failed to increase tyrosine hydroxylase or epinephrine.20
Autonomic Nervous System Toxins structurally similar to neurotransmitters may enhance (agonist) or inhibit (antagonist) the normal function of either the parasympathetic or sympathetic systems. Many widely used drugs have primary or side effects on the autonomic system, and organophosphates interfere with parasympathetic function. Recent findings suggest that exposure to lead may permanently increase corticosterone levels in rats and alter responsivity to stressors among the adult offspring of animals exposed during gestation and through lactation.19 Thus, lead and perhaps other neurotoxicants may exert their effects directly on the autonomic nervous system but may also exacerbate the effects of other external stressors.
Peripheral Nervous System Peripheral neuropathies may occur when a xenobiotic kills nerve cells, destroys the axon, or causes myelinopathies. Even subtle damage to the myelin can be detected by nerve conduction velocity studies. Axonopathies involve a dying back of the axon itself (for example, that caused by n-hexane).20 These defects can be detected by electrophysiologists or neuropathologists. Peripheral neuropathies may affect sensory nerves, motor nerves, or both; usually sensory nerve fibers are most susceptible.21 The n-hexane axonopathy is a classical case of a specific metabolite (2,5-hexanedione) causing cross-linking of neurofilaments, manifested by axonal swelling and dissolution.22 Carbon disulfide causes a similar cross-linking axonopathy, while the acrylamide neuropathy involves adducts of microtubule-associated proteins, impairing synaptic vesicle transport.22 The hexane axonopathy is a sensitive finding, occurring at relatively low doses; at high doses, however, acute CNS degeneration of the vestibular and cerebellar regions becomes the dominant lesion.23 Arsenic causes subclinical sensory neuropathy detectable by vibration threshold measurement (see below).24
Central Nervous System The central nervous system (CNS) is the primary domain of neurobehavioral toxicology. Neurotoxic effects in the brain are often complex, with elusive pathologic changes that affect associations among neuronal pathways. Improved histochemical approaches allow pathologists to detect changes in dendritic patterns and interconnections, for example, between two nuclei in the brain as well as the localized destruction of specific types of nerve cells. Many neurobehavioral effects are due to agonistic or antagonistic actions on neurotransmission in the CNS. Brain development is an intricately timed and coordinated process involving multiple cell signaling molecules and pathways
that influence cell differentiation, neuronal migration, positioning, and synaptogenesis. Reelin, a signaling molecule, is required for neuronal positioning and the ultimate layering of the neocortex. Mice that are heterozygous deficient in reelin develop an ataxic, “reeling” gait. Ethanol interfered with reelin action, resulting in the failure of migrating neurons to stop at the appropriate position.25 Research on how the brain achieves the so-called higher functions (learning, memory, creativity, cognition, etc.) gradually expands our understanding. Ablation studies (opportunistic or deliberate) and computer analogy are examples of approaches to understanding brain function. In addition, with the advent of single-photon emission computed tomography (SPECT), functional MRI, and positron emission tomography (PET), functional imaging has revealed the brain structures involved in the performance of various cognitive functions.26 Some preliminary studies suggest that PET scanning could begin to clarify how brain function may be affected by solvent encephalopathy even when static CT and MR images have shown no structural abnormalities. For example, PET scanning has been used to document encephalopathy due to solvent exposure.27 In addition to clarifying structural-functional relationships, it is necessary to clarify how substances pass through the blood-brain barrier (BBB) at different times in the life cycle, and what happens to them once they enter the brain. Methylmercury (MeHg), for example, readily passes the blood-brain barrier and to some extent is demethylated in the brain, but much remains to be learned about these processes and how their toxic effects are mediated.
Basal Ganglia The recognition that Parkinson’s disease (PD) does not usually have a familial basis has prompted research on possible environmental causes. Agricultural chemicals have been implicated, and rodents exposed to the herbicide paraquat provide an animal model. The combination of paraquat and maneb augmented the PD-like effect, with reduction in locomotor activity, particularly in older mice, reflecting a reduction in dopaminergic neurons.28
SELECTED NEUROBEHAVIORAL TOXINS
This section provides a brief overview of neurobehavioral toxicants. (For more detail, see General References.) Many commonly occurring chemicals are neurotoxic. Table 20-5 indicates the variability in effects produced by some common neurotoxicants. Data from the National Health and Nutrition Examination Survey (NHANES III) comparing children’s cognitive abilities to serum cotinine (a nicotine marker) showed decrements in reading scores and block design (verbal and spatial measures, respectively).29 Carbon monoxide at relatively low levels (equivalent to carboxyhemoglobin [COHb] < 10%) impairs vigilance, tracking, and the ability to drive.30,31 Virtually all solvents, whether aliphatic or aromatic, chlorinated or not, have acute depressant effects on the nervous system, many of them sharing common anesthetic properties. It is also apparent that there are important chronic effects from solvent exposure both in animals and in workers, particularly based on research in Scandinavia.32–34 Nerve conduction remains altered for many years following cessation of solvent exposure, while memory and learning, mood, impulse control, and motivation are impaired.35 Long-term exposure causes a toxic encephalopathy with memory and motor deficits. However, testing and diagnostic criteria are not standardized.36 Smokers who have the GSTM1-null genotype appear to be at a greater risk of solvent-induced chronic encephalopathy than smokers with normal GSTM1.37 Rats chronically exposed to 1500 ppm toluene (30 h/week for 6 months) show permanent 16% depletion of neurons in the inferior regions of the hippocampus.38 Styrene effects have been studied in several occupational groups,39,40 with both specific changes (impaired reaction time and color vision) and more general mood alterations.41 Carbon disulfide effects are manifest in almost all components of the central and peripheral nervous systems (particularly the distal part
TABLE 20-5. EXAMPLE OF BEHAVIORAL IMPAIRMENTS ASSOCIATED WITH VARIOUS TOXIC SUBSTANCES
Impairments tabulated: acute psychosis, emotional lability, memory impairment, psychomotor impairment, neurasthenia, extrapyramidal impairment, neuropathy, and tremor, each scored (+) against Pb, As, Mn, Hg, CS2, Solv, and OPP.
Abbreviations: Pb, lead; As, arsenic; Mn, manganese; Hg, mercury; CS2, carbon disulfide; Solv, solvents; OPP, organophosphate pesticides.
of long axons) in humans through neurofilament cross-linking.42,43 Peripheral neuropathy (paresthesia, numbness), cranial neuropathy, dementia (confusion), parkinsonism, acute psychoses, irritability, and memory loss have all been attributed to this compound.44 Many metals, for example, lead, mercury, manganese, and arsenic, are also neurotoxic, but these tend to have discrete nervous system effects (Table 20-6). The species of metal influences its impact. Thus organic tin compounds cause weakness and paralysis as well as central disturbances, partly through a dopamine effect.45 Organic arsenic affects the optic nerve and retina, while inorganic arsenic produces polyneuritis and weakness. Tremors, and in severe cases ataxia, occur with either inorganic or organic mercury poisoning; however, organic mercury also produces visual field changes,4 while inorganic mercury produces personality disturbances characterized as erethism. This syndrome involves irritability, labile temper, pathologic shyness (avoiding close friends), depression, loss of sleep, fatigue, and blushing. In some cases there is a dose-response relationship between the occurrence of symptoms and the concentration of mercury in urine. Dental amalgams are associated with increased urine mercury, but the extent to which such elevations produce neurological or psychological symptoms is unclear. On the other hand, MeHg disrupts both the developing and the mature CNS, interfering with visual, auditory, and somatosensory function.46 Exposed rats developed specific antibodies to neurotypic and gliotypic proteins and had reduced glial fibrillary acidic protein in their cortex. Pathologic changes include neuronal degeneration and demyelination and an increase in astroglia with accumulation of MeHg.47 Lead poisoning has been extensively studied in children and adults.48,49 Lead is universally deleterious to the developing nervous
system. Ultrastructural studies show altered axonal development and dendritic deployment with fewer neural connections, leading among other things to impaired cognition and concentration.50 This is associated with deficient expression of a specific nerve growth-associated protein (GAP-43): perinatal and postnatal exposure to lead depressed mRNA levels for GAP-43 in rats.51 Importantly, lead effects (impulsive behavior, poor concentration, poor working memory) persisted at least to age 11, particularly in children who had not been breast-fed. Whether breast-feeding conveys protective nutrients or has primarily social benefits remains to be elucidated. Impulsivity is one of the changes lead induces in rodents.52 In several studies, prenatal exposure to certain PCB isomers has been implicated in impaired neurobehavioral and cognitive development in babies and young children. Despite controversy, the evidence appears consistent across several populations and evaluation techniques53,54 (see Behavioral Teratology below). The new millennium has seen attention focus on polybrominated diphenyl ethers, persistent chemicals developed as fire retardants, which can cause hyperactivity in rats.55 A more esoteric compound is MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), a synthetic substance produced accidentally in the attempted synthesis of meperidine analogs by substance abusers. A metabolite of MPTP damages the dopaminergic cells of the substantia nigra, leading to irreversible parkinsonian symptoms.56 This important discovery provided a model for studying parkinsonism.1 In addition, many psychoactive chemicals, both licit and illicit, including ethanol and hallucinogens, have their primary effects on neurobehavioral performance.

TABLE 20-6. AVAILABILITY OF NORMATIVE DATA (VALIDATED ON LARGE NORMAL AND NONNORMAL POPULATIONS) FOR VARIOUS NEUROBEHAVIORAL TESTS

Test: Function
Visual reaction time: Psychomotor
Auditory reaction time: Psychomotor
Santa Ana: Psychomotor
Grooved pegboard: Psychomotor
WAIS subtests
  Digit symbol: Perception/encoding
  Digit span, auditory: Working memory
  Vocabulary and comprehension: Cognitive verbal
  Block design: Cognitive nonverbal
California Verbal Learning Test: Verbal memory
Benton Retention Test: Visual spatial memory
Embedded figures: Perception
Profile of Mood States: Mood
SCL-90: Mood/affect

BIOCHEMICAL MECHANISMS

The "black box," or phenomenological, approach to neurobehavioral toxicology is yielding to mechanistic studies. Advances in molecular and cell biology and biochemistry are elucidating many aspects of brain function that will facilitate making predictions and designing new tests. An important benefit is enhanced interpretation of behavioral toxicology studies. Advances in molecular biology will suggest new populations to study and will provide new biomarkers to validate exposures.
Neurotransmitters

Neurobehavioral toxicology is intimately dependent on advances in understanding neurotransmitter function, which goes beyond the role of transducing nerve impulses. The behavioral abnormalities attributed to low-level lead exposure may involve, in part, alterations in dopaminergic transmission,57–59 while learning deficits from lead are related to glutamatergic transmission.60 Also, lead may have a more global effect on the release of several neurotransmitters by altering calcium homeostasis.61 Maternal exposure to lead increased dopamine and serotonin in the brains of offspring but decreased glutamate levels in the cortex.62
Nitric oxide, an intracellular messenger, is formed from L-arginine by the enzyme nitric oxide synthase (NOS), found in many tissues including the brain, where it is constitutive rather than inducible. It modulates the secretion of hormones such as adrenocorticotropic hormone (ACTH) and is in turn regulated by estrogen, which enhances the expression of mRNA for NOS in parts of the brain (e.g., the ventromedial nucleus of the hypothalamus) rich in estrogen receptors.63 This may be one of several mechanisms by which endocrine-disrupting chemicals modulate behavior. 5-Hydroxytryptamine (serotonin) research covers very broad areas central to neurobehavioral toxicology. Various receptor ligands, such as opioid receptor agonists and antagonists, are under investigation for their control of serotonin synthesis and release.64 Xenobiotics, particularly pharmaceuticals such as MAO inhibitors, can produce a serotonin syndrome from elevated serotonin levels.
Neuropeptides An exciting area of neurobiology is the study of neuropeptides such as substance P, neurokinin A, thyrotropin-releasing hormone, and neuropeptide Y. Their functions, distribution, and control of synthesis and breakdown are an active area of research, particularly with regard to substance abuse. For example, neuropeptide Y, a vasoconstrictor peptide found in sympathetic nerve terminals and the adrenal medulla as well as in the plasma,65 modulates the release of glutamate, GABA, norepinephrine, dopamine, somatostatin, serotonin, nitric oxide, growth hormone (GH), and corticotropin-releasing factor (CRF). It has a neuroprotective role against excitotoxic agents.66
Receptor Biology

Technical advances in probing for up- or downregulation of specific receptors on various cell populations, and in measuring ligand interactions, have greatly expanded the understanding of toxicology. Estrogen receptor studies illustrate how hormones can regulate neurotransmitters in the brain.67 Upregulation of the NMDA receptor in the rat forebrain by ketamine appears to be the mechanism leading to apoptosis.68
Thyrotropin-Releasing Hormone

Certain cells of the hypothalamus contain thyroid hormone receptors that, when activated, regulate gene expression of various proteins that mediate the hormone's effects on nervous system development.69 These may play a role in behavioral teratology. Certain PCBs (for example, 2,3′,4,4′,5-penta-PCB, but not 3,3′,4,4′,5-penta-PCB) are structural mimics of triiodothyronine and stimulate neural differentiation in cell culture.70
Nerve Growth Factors

In 1986, Levi-Montalcini and Cohen received the Nobel Prize for discovering growth factors that influence the differentiation of nerve cells. The mechanisms by which growth factors are regulated, and how they in turn "control" cell differentiation and ultimately behavior, are being investigated using transgenic animals that lack particular receptors. This is becoming an important tool in neurotoxicology and will provide new models for studying behavior.71
ANIMAL MODELS IN NEUROBEHAVIORAL TOXICITY
Animal research contributes significantly to our understanding of neurotoxicity and neurobehavioral changes. No animal model adequately mimics the complex neurobehavioral performance of humans, particularly in the intellectual domain. However, many important advances in understanding brain function have been derived from studies mainly on rodent and avian models. Rodent studies allow large sample sizes to be employed, while avian studies take advantage of the fact that, like humans, birds rely primarily on visual and acoustic rather than olfactory or tactile communication. The fact that a chemical produces the same effect on learning, for example, in a wide variety of animal species is important validation of its role in humans. Eye-limb coordination, cerebellar function, and even learning are common to all vertebrates, and even cognition may be identified in many so-called "lower" organisms.72 In recognition of the important contribution of animal behavior studies to shaping our understanding of human behavior,73 three pioneers of animal behavior research, Konrad Lorenz, Niko Tinbergen, and Karl von Frisch, were awarded the Nobel Prize in Physiology or Medicine in 1973. Animal experimentation also provides the opportunity to assess exposures and effects that cannot be studied in humans. Developing species-appropriate test batteries is an exciting challenge for behavioral toxicologists.74–76 Animal studies have focused on discrimination of stimuli, learning deficits, disturbance of locomotion or balance, decreased performance of previously learned tasks, memory deficits, altered activity patterns, and changes in normal behavior patterns related to reproduction or maintenance. A wide variety of paradigms have been employed to understand the effects of stresses on the nervous system, and many of these can be applied to humans. In addition, some research has examined how the neurobehavioral effects of a toxic chemical or physical stressor can be exhibited in the offspring of the exposed individual.

Learning and Memory Tasks

Experimental intervention allows specific probes of behavior and performance. Early testing employed Y mazes and other learned visual discrimination tasks. Experiments with rats and mice examined how toxins affect the speed of learning a maze when a reward or punishment was offered in one arm or the other.77,78 Learning impairment offers a valuable paradigm. Animals are treated with drugs or other chemicals before or after a learning situation or conditioning stimulus to see whether subsequent performance is enhanced or impaired. Injection of glucose enhances, and injection of insulin impairs, learning of foot-shock avoidance tasks.79 Passive avoidance training allows investigation of substances that affect a calcium-calmodulin-dependent protein kinase in the forebrain nuclei; kinase activity increases within 10 minutes after training, and antagonistic drugs cause amnesia.80

Imprinting

Many young animals form an attachment to a parent or other individual whom they see, hear, or smell shortly after birth. This "imprinting" behavior is pronounced in a variety of birds, and the ability of various chemicals to impair imprinting has been studied. Imprinting depends on NMDA receptors in the forebrain, where antagonistic drugs reduce imprinting behavior.81 NMDA antagonists block olfactory imprinting in rats.82

Parental Recognition

An important function of imprinting is the ability to recognize parents and relatives to gain food or protection and to avoid aggression from strangers. Since this behavior has direct survival value, it can be used to test the relevance of effects of neurotoxic chemicals. Lead-exposed herring gull chicks have poor discrimination and longer latency in choosing between a parental surrogate and a stranger,83 and these effects differ depending on the age at exposure, indicating a critical developmental window for this effect.

Conditioning Studies
In studies involving conditioning of psychomotor performance, animals are trained to perform tasks in response to certain stimuli. They are then exposed to a substance, and the disruption of performance is quantified.84,85 With time, the behavioral tests have become more sophisticated and now include such paradigms as nonspatial and spatial delayed matching to a sample, serial position sequences, and multiple fixed-interval reinforcement tests in animals trained with operant conditioning.74,85–87 These studies examined learned behavior and relied on the production of the desired behavior, followed by measurement of
its sensitivity to environmental stimuli.74 Alterations in visual performance can be useful endpoints in conditioned animals.88 The great advantage of these methods is that they can detect subtle differences in the behavior of animals that otherwise appear normal; however, they do require experience in operant conditioning techniques.
Fixed Interval Schedule-Controlled Paradigm (FISC)

Conditioning studies can employ reinforcement of the animal that responds a particular number of times (variable or fixed ratio) or after an interval has elapsed. In the fixed-interval paradigm, the animal is reinforced for giving an appropriate response after a stimulus has been on for a particular time. Once the conditioning is established and stable, the exposure can be applied to determine whether the learned response is impaired or abolished. For example, the fixed-interval behavioral response was modified by novelty in lead-treated but not control animals.19
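The fixed-interval contingency described above can be sketched in a few lines. The response times, interval length, and helper name below are illustrative assumptions for a simulated session, not part of any standard apparatus or vendor software.

```python
# Hypothetical sketch of a fixed-interval (FI) reinforcement schedule:
# the first response emitted after the interval has elapsed is
# reinforced, and responses made before the interval elapses are not.

def fixed_interval_reinforcers(response_times, interval):
    """Return the times of reinforced responses under an FI schedule.

    After each reinforcement the interval clock restarts; responses
    made before the interval elapses go unreinforced.
    """
    reinforced = []
    clock_start = 0.0
    for t in sorted(response_times):
        if t - clock_start >= interval:
            reinforced.append(t)  # first response after the interval: reward
            clock_start = t       # restart the interval clock
    return reinforced

# A simulated lever-pressing session (seconds); responses typically
# cluster just before each interval ends ("FI scallop").
presses = [5, 28, 31, 33, 58, 61, 64, 90]
rewarded = fixed_interval_reinforcers(presses, interval=30)
```

Under exposure, a flattened or shifted pattern of reinforced responses relative to the trained baseline would indicate impairment of the learned schedule.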
Discrimination Conditioning Animals can be conditioned to respond differentially to a variety of stimuli, and the effects of various substances on this ability offer a sensitive test of discrimination. This conditioning has been expanded to more relevant neurotransmitters, and animals can be trained to discriminate these from saline.14
Intracerebral Injection In combination with stereotactic techniques and histochemical studies of the brain, the localized injection of agonist and antagonistic chemicals into specific regions of the brain is contributing to the understanding of localization of behavioral functions, and conversely, as functions are localized, it becomes feasible to test many new substances for specific agonist or antagonist activity. For example, serotonin inhibits the premating lordosis behavior of female rodents by acting on 5-hydroxytryptamine 1A receptors, but it enhances the same behavior at 2A/2C receptors. The relative activity of these receptor classes varies during the estrous cycle.89
Open Field Exploratory Behavior Animals have a natural tendency to explore novel environments. This involves a combination of locomotory and perceptual events, and toxicants may inhibit one or both or may lead to agitation and more rapid behavior. A comparison study of behavioral and neurochemical traits in 15 inbred mouse strains revealed that the former had higher heritability than the latter.90 In addition to moving around an enclosure, rodents rear up periodically. Dopamine plays a prominent role in modulating locomotor activity. Activity cages divided into grids with light sensors can detect movement (frequency of breaking beams in both horizontal coordinates), as well as frequency of rearing, and can distinguish animals engaged in perimeter exploration from activity confined to the central grid squares. Mice treated with paraquat showed decreased horizontal but not vertical activity.91
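The beam-break scoring just described can be sketched as follows. The grid size, event format, and `score_activity` helper are hypothetical illustrations of the bookkeeping involved, not an actual activity-cage vendor API.

```python
# Hypothetical sketch of activity-cage scoring: a grid of photobeam
# sensors records horizontal movement (beam breaks) and rearing
# (vertical beam breaks), and classifies each event as perimeter
# exploration or central-square activity.

def score_activity(events, grid_size=5):
    """Tally beam-break events of the form (x, y, kind).

    kind is "horizontal" or "rear".  Cells on the outer ring of the
    grid count as perimeter exploration; inner cells count as central
    activity.
    """
    counts = {"horizontal": 0, "rear": 0, "perimeter": 0, "central": 0}
    for x, y, kind in events:
        counts[kind] += 1
        on_edge = x in (0, grid_size - 1) or y in (0, grid_size - 1)
        counts["perimeter" if on_edge else "central"] += 1
    return counts

# A short simulated session for one mouse
session = [(0, 2, "horizontal"), (4, 4, "horizontal"),
           (2, 2, "rear"), (1, 3, "horizontal")]
totals = score_activity(session)
```

A paraquat-type finding such as that cited above would appear here as a reduced "horizontal" count with a preserved "rear" count relative to controls.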
NATURALISTIC STUDIES
Naturalistic observations of behavior, conducted in the laboratory and in the field, employ behaviors that occur naturally in the organisms studied (for example, locomotion, balance, or predator defense).75,92 In many of these studies, a toxic agent such as lead interferes with learning or learning retention and with the subsequent performance of learned tasks. Under natural conditions, animals have somewhat predictable or stereotyped ways of behaving that can be quantified. Such behaviors may be directly relevant to their survival and successful reproduction, so toxicants that affect them can have far-reaching effects on fitness. Some behaviors examined include pecking accuracy and pecking rate in pigeons,93 activity rates in mice,94,95 nest site defense in falcons,96 monkey behavior,97 dove courtship sequences,98 begging behavior,
and food manipulation in terns,75 and web-weaving in spiders.99 In most of these studies, the effect was clearly demonstrable by directly observing individuals. The advantage of naturalistic behaviors is that they are important for fitness and have been shaped, and perhaps optimized, by evolution. Thus predator avoidance is a natural part of an animal's behavioral repertoire, while pushing a button may not be. Conversely, operant conditioning paradigms afford tighter control of experimental situations. Yet natural behaviors such as locomotion,76 exploration, righting ability, depth perception, thermoregulation, aggression, avoidance,75 learning, and parental recognition are all amenable to laboratory and field experimentation where variables can be controlled.75,100 Experiments with herring gulls injected with lead in the wild indicated that the effects observed were similar to, and as severe as, those found in the laboratory. Recovery, however, was quicker, and parental behavior partially ameliorated the behavioral deficits, allowing the chicks' partial recovery of cognitive function.92 While most neurotoxicology studies in animals examine the direct effect of exposure, some multigeneration studies have yielded important results,101,102 showing that the offspring and even grandchildren of treated animals may manifest behavioral deficits. Exposure of one or both parents can affect behavior in offspring; if both parents are exposed, the impact is greater than if either one is exposed alone.101 It seems reasonable to conclude that animal behavioral models will continue to be useful for understanding many aspects of behavioral toxicology, for developing useful questions and approaches for clinical application, and for validating generalizations developed in humans. Conversely, for some of the higher functions, humans will remain the primary test subjects, and improved epidemiologic studies employing both old and new psychometric approaches will be fruitful. These must avoid type II errors as rigorously as type I errors are avoided.103 Such studies must be opportunistic, recognizing exposures that have already occurred, while animal models allow the use of controlled exposures and the testing of new paradigms. Neurobehavioral studies in animals also afford the opportunity to design comprehensive studies of mixtures.104
Sensory Systems

Vision

The visual system is both a target for neurotoxic agents and a crucial function for testing. Intact visual systems are required for accurate performance on many tests used to assess higher-order cognitive function. Neurotoxicants may affect visual functions directly and thus confound interpretation of performance on tests of cognitive function unless visual function is assessed separately.105 Visual evoked potentials use electroencephalographic techniques to measure brain wave responses to light. Neurobehaviorists test such functions as visual acuity, alteration of visual fields, color vision, contrast sensitivity, and critical flicker fusion. For example, Mergler and colleagues106 reported loss of color vision and contrast sensitivity among workers exposed chronically to organic solvent mixtures.107,108 Neuro-optic pathways are vulnerable to the effects of styrene, which impairs color discrimination. A recent meta-analysis of styrene's effects on color vision supports increased errors in performing a color discrimination task, with an estimated increase in the color confusion index of 2.23% after 20 ppm exposure over eight working years.109 Color vision loss associated with solvents is typically characterized as a blue/yellow deficit, associated with reduced function of the blue cones or their associated ganglion cells.110,111 Because color vision loss also occurs with age, Benignus et al.109 equated the loss associated with 20 ppm styrene exposure over 8 working years to 1.7 additional years of age. Campagna and colleagues112 found that color vision loss is dose-dependent and can be detected at exposures above 4 ppm. Other investigations also support loss of color vision with toluene113 and with mixtures of solvents (e.g., xylene, methyl ethyl ketone, acetone).114 The Critical Flicker Fusion task tests CNS discriminatory ability by determining the point at which lights flickering at an increasingly rapid rate
appear to fuse into a constant source. Lead-exposed workers showed impairment on this task.115 Contrast sensitivity reflects the ability to detect subtle differences between lighter and darker areas of a stimulus (i.e., luminance). Detecting differences in contrast allows detection of words on a page and is the basis for perception of the stimuli used to assess higher-order cognitive functions, particularly in computer-based testing. Several studies of workers chronically exposed to neurotoxicants reveal reduction in contrast sensitivity,116 particularly for mid-spatial frequencies.117 However, contrast sensitivity is also affected by visual acuity and by diseases such as diabetes. These different approaches thus evaluate the receptive capability of the eye itself and ultimately the ability of the brain to process and respond to information transmitted from the eye.
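The styrene figures cited above lend themselves to a back-of-envelope calculation. The sketch below assumes, purely for illustration, that the color confusion index (CCI) increase scales linearly with cumulative ppm-years; the constants simply restate the 2.23% per 160 ppm-years and 1.7-year age equivalences cited above, and the function names are our own.

```python
# Illustrative arithmetic for the styrene/color-vision estimates:
# a 2.23% CCI increase after 20 ppm over 8 working years, equated
# to about 1.7 additional years of age.  Linearity in cumulative
# ppm-years is an assumption made here, not a claim of the source.

CCI_PCT_PER_PPM_YEAR = 2.23 / (20 * 8)  # 2.23% spread over 160 ppm-years
AGE_YEARS_PER_CCI_PCT = 1.7 / 2.23      # aging equivalence of 1% CCI

def cci_increase_pct(ppm, years):
    """Estimated % increase in CCI for a cumulative styrene exposure."""
    return ppm * years * CCI_PCT_PER_PPM_YEAR

def age_equivalent_years(ppm, years):
    """Express the exposure as equivalent additional years of age."""
    return cci_increase_pct(ppm, years) * AGE_YEARS_PER_CCI_PCT

# Reproducing the cited scenario: 20 ppm over 8 working years
pct = cci_increase_pct(20, 8)            # 2.23
extra_age = age_equivalent_years(20, 8)  # 1.7
```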
Hearing

Hearing evaluation is a necessary precursor to neurobehavioral testing, since many tests rely on hearing for accurate performance. As with the eye, some tests evaluate the external receptor, while others determine the response of the brain to sound. Certain neurotoxic chemicals, for instance the antibiotics streptomycin and kanamycin, damage the auditory nerve pathway. More subtle changes in the ability to detect loudness, pitch, and timbre are the domain of the psychoacoustician and in special cases can be evaluated as part of a neurobehavioral assessment. The working environment of those who are routinely exposed to neurotoxicants often includes exposure to noise, which may damage the sensory cells of the inner ear. Neurotoxicants may also directly interfere with hearing through effects on the central or peripheral nervous system. Morata et al.118 suggested that noise and organic solvents such as carbon disulfide, toluene, and trichloroethylene may interact to produce hearing loss and perceptual impairments. Morioka et al.119 reported that the upper limit of hearing was reduced among workers exposed to styrene.
Olfaction

Unlike virtually all other mammals, humans rely relatively little on olfaction to find food or detect danger. Nonetheless, olfaction has been shown to influence human appetite and sexual development,120 and we are capable of distinguishing the odor of our mates from those of other individuals. Disruption of the olfactory sense, either loss of olfaction (hyposmia or anosmia) or hypersensitivity to odors, has been associated with exposure to neurotoxicants. The University of Pennsylvania Smell Identification Test (UPSIT) is a standardized multiple-choice scratch-and-sniff test assessing the ability to correctly identify odors.121 Decrements in the sense of smell were documented with the UPSIT for paint-manufacturing workers122 exposed to solvents. The authors hypothesized that these deficits are related to peripheral effects on olfactory neurons causing dysfunction. Olfactory threshold tests determine the lowest concentration at which an odorant can be reliably detected. Exposure to metals, for example cadmium, also reduces olfactory acuity, as demonstrated by increased olfactory thresholds (decreased sensitivity).123 While neurotoxicant occupational exposures can result in loss of olfactory acuity by acting directly on the olfactory neurons, individuals exposed to chemical odors accidentally, either at work or in communities, sometimes report a heightened sensitivity to odors.124 This sensitivity may arise as a conditioned response in which symptoms of irritation, precipitated initially by a chemical exposure, are later associated with the odors that accompany much lower concentrations of the original and similar chemicals.125 Although heightened sensitivity to odors is reported, this hypersensitivity has yet to be validated using standardized olfactory threshold testing.121,126

Taste

Although the food industry conducts extensive subjective research on tastes, there is little objective literature on the impact of chemicals on taste sensitivity. Many chemicals have specific "tastes," while others seem to induce abnormal tastes, such as the metallic taste that characterizes lead poisoning (but is not a lead taste) and the garlic-like taste that occurs with selenium but is not a selenium taste. There is a close linkage between olfaction and taste, albeit through different peripheral receptors, and diminished olfactory sensitivity or discrimination will interfere with taste. Taste actually lends itself to objective study more readily than olfaction, since one can control and determine the concentration of a substance in solution more easily than in air.

Touch and Vibratory Sensation

Physical examination of light touch and pain sensation and of temperature and two-point discrimination can be elaborate and time-consuming, but in the hands of an experienced neurologist it can detect subtle nervous system malfunction. However, evaluation of touch is actually quite complex: in addition to skin receptors, there are receptors in underlying tissues and muscle. The sensory perceptual examination of the Halstead-Reitan Neuropsychological Battery is used to assess accuracy of fingertip touch and the ability to perceive numbers and shapes from tactile sensations.127 Finger and toe vibratory threshold, assessed with a device that allows the amplitude and frequency of vibration to be manipulated ("Vibrometer"),128–130 has been sensitive to subtle changes in threshold among workers exposed to solvents131 and pesticides.132 Vibration threshold may also be altered among workers who use vibrating hand-held tools.133 Although the pressure applied by the patient may confound measurement, a physical device can be used to control the pressure applied. Specific protocols for evaluating vibration thresholds are available in a manual published by the Agency for Toxic Substances and Disease Registry (ATSDR), which describes recommended neurobehavioral test batteries for environmental health field studies.134

Temperature

The ability to discriminate slight changes in temperature is also affected by chemical exposure. Devices that provide objective control of temperature, combined with a forced-choice paradigm, allow the clinician or researcher to evaluate this modality.129

Position Sense and Vestibular Function

The dorsal columns of the spinal cord carry position information to the brain; the labyrinth and vestibular apparatus detect the positions of the eyes, head, and body; and the sensorimotor system compensates by adjusting muscle tone to maintain posture. Vestibular function depends on the saccular and utricular maculae, which sense linear acceleration of the head, and the semicircular canals, which sense angular acceleration. Visual and proprioceptive impulses also feed this system. Disruption of either the sensory components or central vestibular function can cause dizziness and vertigo. Tests for sway,135 straight-line walking, and the Romberg test are traditional ways of measuring the performance of these tasks. In addition to testing position sense, these tests depend on intact motor and vestibular functions. A force platform system is recommended by NIOSH as a more precise way to measure subtle changes in postural sway under conditions that separate the effects of vision, proprioception, and vestibular function.136,137 Postural equilibrium, controlled by the vestibular system, was shown to be affected by a 0.015% blood alcohol concentration.138 Other acute exposures to neurotoxicants such as acetone and methyl ethyl ketone have not shown increases in postural sway.139 However, acute measures of exposure to chlorpyrifos, an organophosphate pesticide, were associated with greater postural sway under more challenging conditions, including eyes-closed and soft-surface conditions.140 Similarly, acute indicators of exposure to lead have been shown to have subclinical but significant effects on postural sway in children and adults.141–143

Motor Function

Motor deficits may be due to muscle disease, disorders of the motor cortex or pathways, changes in the reflex pathways controlling tone, or central disorders (cerebellum, basal ganglia) that interfere with volition, fine-tuning, and coordination of motor function. The dopaminergic system is a major regulator of tone and voluntary movements and is affected by a variety of xenobiotics. A physical examination can detect changes in muscle mass (particularly asymmetry) and physical weakness. Behavioral tests focus on the motor system as a manifestation of central function, for example, reaction time (see below), rapid alternating movements, and fine muscle control. Many compounds that produce acute intoxication (e.g., alcohol) affect sensory-motor function, producing alterations of gait and posture. Some neurotoxicants affect motor nerves, leading to reduced strength, coordination, and fine muscle control. Finger Tapping and the Grooved Pegboard144 are tests with normative standards that are frequently employed to assess loss of fine motor coordination and speed due to neurotoxicants.145 Loss of the ability to perform previously learned motor sequences (apraxia) may be an indication of neurotoxicity, and some of the animal paradigms appear directly analogous to this deficit.
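The forced-choice threshold procedures mentioned earlier for vibration and temperature are commonly implemented as staircase (up-down) algorithms. The following is a minimal sketch with a simulated deterministic observer; the step factor, starting amplitude, and observer model are illustrative assumptions, not the Vibrometer or ATSDR protocol.

```python
# Hypothetical sketch of a simple 1-up/1-down staircase for estimating
# a detection threshold: decrease the stimulus after each detection,
# increase it after each miss, and average the amplitudes at which the
# response direction reverses.

def staircase_threshold(detects, start=4.0, factor=2.0, n_reversals=6):
    """Estimate a detection threshold with a simple up-down staircase.

    detects(amplitude) -> bool simulates the subject's response.  The
    estimate is the geometric mean of the reversal amplitudes.
    """
    amplitude = start
    reversals = []
    last_direction = None  # "down" after a detection, "up" after a miss
    while len(reversals) < n_reversals:
        direction = "down" if detects(amplitude) else "up"
        if last_direction and direction != last_direction:
            reversals.append(amplitude)  # response changed: record reversal
        last_direction = direction
        amplitude = amplitude / factor if direction == "down" else amplitude * factor
    product = 1.0
    for r in reversals:
        product *= r
    return product ** (1.0 / len(reversals))

# Deterministic simulated observer with a true threshold of 1.0 units:
# the staircase converges to bracket the threshold between 0.5 and 1.0.
estimate = staircase_threshold(lambda a: a >= 1.0)
```

Real protocols use probabilistic observers, smaller step sizes, and transformed (e.g., 2-down/1-up) rules, but the reversal-averaging logic is the same.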
Neurobehavioral Testing
Basal Ganglia
The basal ganglia and cerebellum constitute the extrapyramidal motor system, often a target of toxic chemicals. The functional relationships of the basal ganglia to the striatum and cerebral cortex are described in standard texts. Damage to these ganglia or the corticostriatal-pallidal-thalamic-cortical loop is associated with a variety of disorders including ataxias, tremors, akinesia or dyskinesia, athetosis, dystonia, and myoclonus. This system is characterized by the variety of neurotransmitters (e.g., γ-aminobutyric acid [GABA], dopamine) associated with particular functional components. Toxic damage by MPTP to the substantia nigra, for example, is known to produce parkinsonism.146 Selected neurobehavioral tests of fine motor function may detect early damage to this system.
Cerebellar Function
The cerebellum refines motor function and contributes to balance, posture and tone, repetitive movement, coordination, and spatial location. Gross cerebellar dysfunction is manifest as staggering gait, swaying or stumbling; ataxias involving movements of specific limbs, in which the timing of contraction of antagonistic muscle groups is disrupted; and loss of controlled rapid alternating movements, dysdiadochokinesia. MeHg targets the cerebellum, producing ataxia at relatively low doses (Fig. 20-3).
Cognitive Evaluation
Complete neurobehavioral examination is an interdisciplinary endeavor requiring the participation of the physician, neurologist, psychologist, and electrophysiologist. A complete examination will include an interview, a physical examination, and one or more sensory and neurobehavioral tests, supplemented where necessary by electrophysiology. Interview. The interview provides the examiner with an important opportunity to observe the mood, affect, and behavior of the individual. This can be supplemented by a structured psychiatric interview such as the Diagnostic Interview Schedule147 or the Structured Clinical Interview for the Diagnostic and Statistical Manual148 and a mental status examination.149 The interview allows one to explore the contribution of “organic” and “psychologic” pathology and to detect anxiety, depression, changes in intellectual function, and other changes in performance. Personality, Mood, and Affect. A number of epidemiologic studies indicate that personality changes are among the earliest indicators of neurobehavioral toxicity. Erethism, attributable to inorganic mercury (see above), is probably the classic example of this. Mood changes associated with solvents150,151 and classroom hyperactivity attributed to lead152 are additional examples where mood and personality in general may be altered without necessarily showing specific focal changes. While a number of instruments document these complaints and compare an individual patient’s symptoms to a normative group, the cause of these symptoms cannot be ascertained. That is, such symptoms may be secondary to other cognitive deficits or may be a primary effect of exposure to neurotoxicants. This is further illustrated by the Orebro Q-16, a questionnaire shown to be sensitive to but not specific for neurotoxicity symptoms due to solvent exposure.153 Standardized symptom checklists such as the Symptom Checklist-90,154 the Beck Depression Inventory,155 and the State-Trait Anxiety Inventory156 are screening tools for psychiatric symptoms that offer comparison of the patient’s symptom reports to those of other patient and nonpatient normative groups. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2)157 is a more extensive questionnaire used to assess psychopathology. Although the MMPI-2 requires more time to administer (at least 1 hour), the advantages of this instrument are scales to assess the validity of the patient’s responses (e.g., denial or exaggeration of symptoms) as well as subtle and obvious items associated with clinical scales of psychopathology.
In the presence of uncertainty, a major rationale for neurobehavioral testing is the prevailing assumption that subtle changes in cognitive function may be the most sensitive indicator of exposure to toxicants.158 Just as liver function tests can measure cell damage, conjugation, or metabolic ability, so neurobehavioral tests have distinct target functions, as outlined below. Moreover, there is increasing recognition that levels of exposure formerly thought safe or unlikely to produce health effects are now known to have far-reaching consequences for important behavioral functions. Most evident among these is the impact of low-level lead exposure on hyperactivity and intellectual development in children.159 Weiss160 argues effectively that even small decrements in intellectual function may shift the population distribution such that more individuals will fall below the normal range of function (i.e., IQ < 70). An extensive literature documents the sensitivity of neurobehavioral tests to acute and chronic neurotoxicant exposures such as lead, organic solvents, and pesticides.145,161,162 However, this literature is by no means uniform and is significantly affected by the adequacy of documentation of exposure to neurotoxicants among the individuals tested.
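Weiss’s population-shift argument can be illustrated with a back-of-the-envelope calculation. The sketch below assumes IQ is normally distributed with mean 100 and standard deviation 15 and posits a hypothetical 5-point population-wide decrement; the numbers are illustrative, not taken from the cited study.

```python
from statistics import NormalDist

# IQ tests are conventionally normed to mean 100, SD 15.
normal_iq = NormalDist(mu=100, sigma=15)
shifted_iq = NormalDist(mu=95, sigma=15)  # hypothetical 5-point population-wide decrement

# Fraction of the population falling below IQ 70 before and after the shift.
before = normal_iq.cdf(70)   # ≈ 2.3%
after = shifted_iq.cdf(70)   # ≈ 4.8%

print(f"below 70 before shift: {before:.2%}")
print(f"below 70 after shift:  {after:.2%}")
print(f"relative increase: {after / before:.1f}x")
```

Under these assumptions, a mean shift of only 5 points roughly doubles the fraction of the population falling below IQ 70, which is the essence of Weiss’s argument.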
Psychometric Tests in Neurobehavioral Evaluation
Although an interview and a mental status examination can detect many gross changes, psychometric tests are useful to extend the sensitivity of the examination by detecting and quantifying subclinical effects. Psychometric tests for which there is a long history of validation and a database of normative data can be particularly useful in evaluating an individual. Many new tests, lacking such normative data, may be difficult to interpret on an individual basis, but may be useful in large-scale screenings or epidemiologic studies. The following is a discussion of the core functions recommended for assessment of a patient exposed to neurotoxicants. There are many tests in the literature. Those cited as illustrative of various functions are those that have normative data to allow interpretation of an individual’s performance. Unlike statistical group comparisons in a research context, individual assessment of dysfunction is dependent upon comparison to baseline or pre-exposure test results for the individual or to normative standards for a group of individuals of similar age, gender, education, and ethnicity. It is important to be alert to cultural and language biases in evaluating test results. For example, if an individual’s performance on a test of a particular function is markedly lower (e.g., one to two standard deviations) than his or her estimated pre-exposure ability, the clinician may suspect deficits due to exposure. In addition to demographics, neurobehavioral performance is also effort dependent. That is, an individual’s performance will be affected by his or her motivation to perform well or, conversely, to perform poorly. Secondary gain related to workers’ compensation or other litigation may influence the individual even at a subconscious level. Therefore, part of a thorough examination should include an assessment of motivation.
530
Environmental Health
This can be performed directly with the use of tests that use a “forced choice” method to detect negative response bias, defined as below-chance performance (e.g., Test of Memory Malingering),163 or through analysis of discrepancies within neurobehavioral tests, such as poorer performance on simple relative to more complex operations (e.g., recognition memory poorer than recall).164,165 Slick et al.166 review the methods for detection of malingered performance and offer criteria to diagnose definite, probable, and possible malingered neurocognitive dysfunction. In addition to the test criteria mentioned above, the clinical interview is an important source of data for determining motivation. For example, if self-reported symptoms, observed behavior during the interview, or background information are discrepant with neurobehavioral test performance, then malingering may be suspected.
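The “below-chance” logic of forced-choice symptom-validity testing is a simple binomial calculation: on a two-alternative test, pure guessing yields about 50% correct, so a score far below that is unlikely without deliberate wrong answers. The sketch below uses hypothetical item counts and scores, not the actual scoring rules of any published instrument.

```python
from math import comb

def p_at_or_below(k: int, n: int, p: float = 0.5) -> float:
    """Probability of k or fewer correct answers out of n trials
    if the examinee were purely guessing (binomial lower tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical 50-item, two-alternative forced-choice recognition test.
# Guessing alone yields about 25 correct; 15 correct is far below chance.
tail = p_at_or_below(15, 50)
print(f"P(<= 15 correct by guessing) = {tail:.4f}")
```

Under these assumptions the probability of scoring 15 or lower by chance is well under 1%, so such a score suggests the examinee recognized the correct answers and chose the wrong ones.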
Overall Intellectual Ability
Tests of cognitive verbal ability are generally more familiar to the patient and include such tests as vocabulary, comprehension, and reading (e.g., the revised Wechsler Adult Intelligence Scale [WAIS-R],167 the National Adult Reading Test168). These tests are regarded as most resistant to the effects of neurotoxicants since they reflect abilities that are well rehearsed and long-standing.169 If an individual’s verbal abilities have declined significantly, this usually reflects serious or chronic damage. Such deficits can occur with significant head injury or stroke, but generally not with exposure to neurotoxicants unless the latter has occurred over a number of years at significant levels,162 producing a well-defined dementia such as “painter’s syndrome”170 or chronic toxic encephalopathy. Thus, the patient’s performance on a vocabulary test is frequently used as an estimate of pre-exposure function if pre-exposure testing is unavailable. While this is a standard in the literature, a recent investigation directly comparing actual pre-exposure and current vocabulary scores revealed that exposures to neurotoxicants may have a more significant impact on tests of highly practiced skills (e.g., vocabulary) than previously assumed.171 Tests of cognitive nonverbal function, such as the Raven’s Progressive Matrices,172 are generally more complex and reflect abilities not dependent on verbal skills. These tests are useful in situations where estimates of ability, unbiased by verbal skills, are needed.
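Comparing current performance against a normative standard or an estimated pre-exposure baseline amounts to a standardized-deviation (z-score) check. A minimal sketch, using hypothetical T-score norms (mean 50, SD 10) and an illustrative cutoff; no real test’s norms are implied:

```python
from statistics import NormalDist

def z_score(observed: float, norm_mean: float, norm_sd: float) -> float:
    """Standardized deviation of an observed score from the normative
    (or estimated pre-exposure) mean for comparable age and education."""
    return (observed - norm_mean) / norm_sd

# Hypothetical values: current score of 32 on a scale normed to mean 50, SD 10.
z = z_score(observed=32, norm_mean=50, norm_sd=10)
percentile = NormalDist().cdf(z)  # fraction of the normative group scoring lower

print(f"z = {z:.1f}, percentile ≈ {percentile:.1%}")
if z <= -1.5:  # illustrative cutoff in the one-to-two-SD range discussed above
    print("markedly below estimated pre-exposure ability")
```

A score 1.8 SD below the normative mean places the patient at roughly the 4th percentile, the kind of discrepancy that would prompt the clinician to consider an exposure-related deficit (after weighing effort, demographics, and confounders).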
Psychomotor Functions
Psychomotor function requires integration of sensory perceptual processes, such as vision or hearing, with motor responses. For example, simple reaction time, the latency (milliseconds) of a button press in response to a visual or auditory cue, provides the simplest method for assessing psychomotor function. The time between presentations of the stimuli is varied and may affect performance. At a more complex level, a patient may be asked to place pegs into holes as quickly as possible,173 a task requiring more motor skill than reaction time. Tests of psychomotor function such as Digit Symbol167 have consistently been among the most sensitive indicators of deficits due to neurotoxicants.169 The patient records symbols with their corresponding numbers according to a key while being timed. While memory substantially aids performance, it is not necessary since the key is always present.
Attention/Concentration
A precursor to performance on neurobehavioral tasks is the ability to scan the environment, orient to the appropriate stimulus, and sustain attention to a task, with more complex tasks requiring relatively greater levels of sustained attention. Neurotoxicants can disrupt this ability, as demonstrated with such tests as Digit Span of the WAIS-R, in which the individual is asked to repeat an increasing string of digits (digits forward) or to reverse the digits (digits backward) immediately following their verbal presentation.172 Trails A and B, in which the individual connects numbers or numbers and letters in sequence, test psychomotor skills, visual attentiveness, and the ability to think flexibly under time pressure. Tests of vigilance require sustained attention over relatively longer periods of time, as in continuous performance tests in which the individual responds to a specific target presented among
similar nontarget stimuli. Vigilance tasks are sensitive to low-level effects of alcohol174,175 and to the interaction of neurotoxicants, fatigue, and variation in the interstimulus interval.
Memory and Learning
Tests of memory/learning assess a patient’s short-term memory by presenting stimuli (e.g., words, digits, pictures) visually or auditorily and asking the patient either to recall or recognize these stimuli immediately or after a delay (e.g., 30 minutes). The California Verbal Learning Test involves the presentation of a list of words that the patient is asked to recall.176 For other memory tests, pictures of abstract drawings or actual objects are presented to the subject, who must then reproduce these drawings from memory (e.g., Benton Visual Retention Test).177 Short-term memory loss is one of the most frequent clinical complaints of patients exposed to neurotoxicants,135 and it has been substantiated in studies of neurobehavioral deficits due to solvents, lead, mercury, and pesticides.172 If a patient’s performance on a short-term memory task is well below his or her general ability as assessed by a vocabulary test, then complaints of memory problems may be substantiated.
Temporal Properties of Performance
One of the most subtle measures of acute neurobehavioral deficits is a slowing in function.178,179 While peripheral neuropathies are characterized by slowing of nerve impulse conduction, it is the slowing of central functions that is evaluated in neurobehavioral testing. Whether this can be thought of as an “increased resistance” in the CNS, or the need for adaptation wherein alternative pathways are sought for particular functions, is a subject for future research. It is not known whether cells die, interconnections shrink or wither, or biochemical communication is inhibited, but probably all of these mechanisms apply.
Test Batteries in Behavioral Neurotoxicology
A number of investigators and clinicians have developed test batteries for use in human behavioral neurotoxicology.159,180,181 A review by Anger182 provides a guide to their features and limitations. In general, these batteries include tests representative of several basic functions. For example, the World Health Organization recommended a core battery of tests (Neurobehavioral Core Test Battery) to assess the following functions: psychomotor, cognitive nonverbal, cognitive verbal, memory/learning, perceptual speed, and mood.170 Tests are categorized as representative of a particular function but often involve more than one function for adequate performance (e.g., reaction time and visuospatial perception affect psychomotor function).127 The Adult Environmental Neurobehavioral Test Battery (AENTB) is another screening battery recommended for evaluation of environmental exposures that are presumed to be lower than those occurring at the workplace.183 In the late 1970s, researchers recognized the potential of computers to challenge the nervous system in a repeatable, objective fashion and to score performance in real time. Epidemiologic studies have been significantly enhanced by the use of computerized neurobehavioral test batteries such as the widely applied Neurobehavioral Evaluation System184 and the Behavioral Assessment and Research System (BARS).185 The advantages of the computer are (a) consistency of application, (b) reduced need for highly trained testers, and (c) automatic recording of data in real time. Disadvantages have been (a) capital costs for purchasing several computers, (b) many target populations not being computer literate, (c) lack of motivation and stimulation provided by a live examiner and loss of opportunity to observe performance, and (d) lack of normative data for interpretation of individual performance.
The Cambridge Neuropsychological Test Automated Battery (CANTAB)186 is a computerized battery of nonverbal tests of attention, memory, and executive function that are largely language independent and culture free. CANTAB is comparable to tests used in the animal literature and has been used extensively with patients who have brain injury or neurodegenerative diseases, with children, and in tests of therapeutic agents. CANTAB may prove to be of particular interest for the extrapolation of neurobehavioral findings in animals, where higher exposure concentrations are possible, to analogous performance in humans. The disadvantage of computer batteries is the lack of feedback and social interaction through which an examiner can maintain a subject’s motivation.
BEHAVIORAL TERATOLOGY
The developing nervous system undergoes dramatic growth and expansion of function, not only prior to birth, but throughout the first decade of life. Normal brain formation requires the orchestration of various signals and processes to achieve cell differentiation, migration, positioning, process-formation, and synapse formation, at the right time. Anatomical changes such as increasing myelinization occur during the first years of life, and associations are formed that make possible complex motor patterns, fine-tuning of coordination, concept formation, pattern recognition, and more highly learned tasks such as speech and communication. For some tasks, such as learning language, there appear to be “critical periods” during which learning proceeds more rapidly and effectively. Young children find it easier to learn new languages than adults, but this “window” does not have sharp edges. Animals or humans that are isolated from speakers during a critical period may find it difficult or impossible to learn speech at a later time. There may be critical periods for development of other functions as well.187 As organisms mature, their locomotory ability, learning, and knowledge should increase appropriately for their age. There is increasing evidence that even low-level chemical exposure may have profound impact on the orderly acquisition of nervous system function. The magnitude of such changes is not fully appreciated, and the field of behavioral teratology is in a rapid growth phase.
Neural Cell Adhesion Molecules
These cell surface proteins play crucial roles in the migration and connection of cellular elements of the developing nervous system. Xenobiotics can cause dysmorphogenesis and serious neurological impairment. The expression of NCAMs is timed, resulting in the increase and decrease of signals at different times in development. Their expression is implicated also in learning and memory and in immune responses. In rats, MeHg altered the temporal expression of NCAM and the polysialylation of NCAM on day 30, but not on day 15 or 60.188 In lead-injected baby gulls, synaptosomal polysialylated NCAM expression was altered on day 34, and N-cadherin was reduced on day 34 and day 44, but by day 55 there were no differences in N-cadherin or polysialylated NCAM expression.189 These parallel results identify one potential mechanism for the developmental neurotoxicity of these metals.
Lead and Child Development
Probably the best-documented behavioral teratology is associated with lead. At blood lead levels formerly thought innocuous (i.e., below 25 µg/dL), children still show depressed intellectual development,190 and more subtle effects may occur at levels below 15 µg/dL. Elementary school children with higher body burdens of lead were rated by their teachers as being more easily distracted, less persistent, less independent and organized, more hyperactive and impulsive, more easily frustrated, and showing poorer overall functioning, compared with children in the lower lead groups. Needleman et al.’s study shows remarkable dose-response relationships between dentine lead levels and poor school ratings. Children with higher lead did more poorly on verbal and digit span components of IQ tests.190 As late as the 1970s, the average blood lead level in urban America was approximately 15 µg/dL (0.6 µmol/L). The removal of lead from gasoline has resulted in a decline of blood lead levels in less than a generation to a national average of around 2 µg/dL. This has unmasked the low-level toxicity, revealing that impairment of cognitive development by lead is continuous, even below 10 µg/dL, with no evidence of a threshold yet identified.191 Although the main impact of lead is related to the peak of exposure in the 18–30 month range, subsequent exposure at least to age 7 is also associated with IQ decrement.192
Methylmercury
All forms of mercury are toxic, and MeHg is one of the most toxic forms. The classic case of Minamata disease involved over 2000 people in fishing families who became ill in the 1950s from eating fish from Minamata Bay (Kyushu Island, Japan), which had been contaminated by industrial effluent. The syndrome and epidemic have been graphically illustrated.193 Victims developed a range of symptoms, and many babies were born with congenital Minamata disease: blindness and profound mental and physical retardation. Another large outbreak of organomercury poisoning occurred in Iraq in people who ate seed grain treated with an organomercurial fungicide. Similar outbreaks occurred in Guatemala and also affected a family in New Mexico (see Fig. 20-3 for symptoms in the Iraq epidemic). In North America and Europe, the growing source of concern is mercury released from power plants, which is transported in the atmosphere and falls out some distance from its source. Inorganic mercury in the fallout is converted to MeHg by anaerobic bacteria in the sediments of lakes and rivers. MeHg is readily bioavailable and undergoes bioamplification up the food chain, resulting in high levels (> 100 ppb) in the tissues of many kinds of fish that people consume. Adults who consume fish daily experience elevated blood mercury levels and may become symptomatic, while mothers who consume fish frequently transfer MeHg to the fetus, where it reaches a higher concentration than in the mother.194 At high levels, this impacts neurobehavioral development. Several long-term studies of populations that consume large amounts of fish and whales have yielded somewhat different results. The Faroe Islands study195 showed an impact, particularly on the Boston Naming Test and on auditory evoked potentials, related to the child’s cord blood mercury.
The Seychelles Islands study has not documented effects attributable to prenatal mercury exposure.196 Other studies in New Zealand and the Amazon support some relationships of MeHg to neurodevelopmental effects.197 In addition to effects on cognitive function, prenatal exposure to MeHg may influence locomotor activity mediated by dopamine.198
Polychlorinated Biphenyls
It has long been known that PCBs interfere with locomotion and learning in rodents199 and with learning and cognition in monkeys.200 Interference with cellular metabolism, neurotransmitters, and thyroid hormone has also been proposed.201,202 Several epidemiologic studies have assessed neurobehavioral deficits in populations exposed to PCBs and related compounds.203 There is evidence of developmental neurotoxicity, including low IQ, from the Japanese Yusho incident, involving prenatal exposure to PCBs and furans.204 Several years later, a similar event, the Yu-Cheng incident in Taiwan, resulted in heavy PCB exposure (exceeding 1 g in some cases) as well as exposure to dibenzofurans. Children born to exposed mothers had multiple defects at birth and showed developmental delays and lowered performance on neurological examination and standard tests of cognition. The fact that these abnormalities did not correlate well with measures of postbirth maternal exposure205 illustrates the importance of measuring fetal exposure in determining neurodevelopmental defects. Linkage to a persistent chemical is evidenced by the poor performance of children born to exposed mothers more than 6 years after the exposure.206 Jacobson and colleagues studied babies born to women who ate PCB-contaminated fish from the Great Lakes. Some of these women had elevated serum and milk PCB levels, and their babies showed slowed neurobehavioral development,207 which has persisted for several years. Some of the abnormalities are predicted by cord serum PCB levels.208 Rogan and Gladen studied 931 children of mothers who did not, as a group, have unusually high PCB exposure. Children with higher prenatal PCB exposure were more likely to be hypotonic and hyporeflexic at birth and showed poorer psychomotor performance on the Bayley Scales. These changes were not related to postnatal PCB exposure209 and did not persist after age 5.210 The Oswego Newborn and Infant Development project examined the behavioral effects in human newborns, infants, and children of mothers who had consumed fish from Lake Ontario.211 Fish from this lake are contaminated with a wide range of toxic chemicals, including PCBs, dioxin, dieldrin, lindane, chlordane, cadmium, mercury, and mirex. Newborns were classified into high, medium, and low maternal exposure groups and were tested on the Neonatal Behavioral Assessment Scale on their first and second days after birth. The groups did not differ demographically, but after many confounders were eliminated, the high-exposure babies showed a greater number of abnormal reflexes and less mature autonomic responses than babies in the other groups. This confirms Jacobson et al.’s original findings.207,212 This study also found a dose-response relationship between fish consumption and decreased habituation to mildly aversive stimuli, similar to results found in laboratory rats fed Lake Ontario salmon.213
CONFOUNDERS OF BEHAVIORAL PERFORMANCE
Neurobehavioral evaluation requires the concentration and cooperation of the subject, yet these behaviors too may be diminished in chemical-exposed individuals. Interpretation of test results in the individual must take into account a variety of confounders that are only briefly mentioned here. Many of the confounders have a global effect, that is, they interfere with all aspects of performance rather than with particular subtests. Subjects who have a high level of anxiety may find it difficult to concentrate on complex tasks, particularly on tests of vigilance. Lack of familiarity with the test context or with the expectations, particularly if the testing is not conducted in one’s first language, will certainly interfere with performance. Subjects who believe they are being evaluated for poisoning may be hesitant about participating in so many “psychologic” tests that may suggest that the examiners don’t believe their complaints are “real.” Subjects may also have personal reasons for performing suboptimally. However, most subjects do attempt to do their best. Computerized batteries proved baffling to subjects who were not familiar with the use of computers, although this confounder is gradually diminishing as computer use expands in all sectors of society.
Age and Gender
Age is a universal confounder in neurobehavior. Depending on the modality, performance improves during childhood, peaks in the teens and twenties, and then declines. Many neurobehavioral functions decline steadily with age.14,213 In addition to well-known effects on short-term memory, aging produces alterations in cognitive function as well, although this varies greatly among individuals, from frank dementia as in Alzheimer’s disease to very subtle changes. Many well-established tests have age-adjusted scoring. Reaction time increases and performance on psychomotor tasks decreases with age. In rats, age may also indirectly impair performance by enhancing the negative effects of stress.214 Dopaminergic neurons also degenerate with age, resulting in some cases of late-onset parkinsonism. Oxidation of dopamine produces reactive oxygen species, which may enhance the degeneration of these neurons. Glutathione blocks the dopamine-induced apoptosis. On average, males and females differ in a variety of nervous system functions related to language, fine motor skill, and spatial perception. These differences are far from deterministic, and most tests do not have sex-adjusted scoring. Some differences related to early brain development were influenced by sex hormones in utero, while others reflect gender-specific learned skills.
Physical Condition
Lack of sleep, drowsiness, a recent full meal, or recent use of drugs, alcohol, or tobacco may also have global effects on performance. Examiners should elicit subjective evaluations of wakefulness and
should carefully observe the subject. A pretest questionnaire should determine the time at which alcohol, cigarettes, or specific medications were used. Unrelated illnesses may affect performance. Diabetes or other metabolic states may interfere with alertness. Dementias due to other causes such as head injuries will complicate interpretation of test results.
Learning and Experience
Learning poses an additional confounding problem in interpreting neurobehavioral tests, particularly when tests are to be repeated in a prospective study. The time interval between testing, the individual subject’s learning ability, and the test’s complexity will alter the learning curve or practice effect of repeat testing. This phenomenon needs to be quantified to interpret accurately changes in test performance over time. One method to deal with practice effects is to provide practice sessions for all tests to reduce the impact of a learning curve. Familiarity with computers enhances performance on the computerized batteries. Educational level may confound some tests.
Language and Culture
Perhaps the most important problems are the inherent intellectual and cultural biases of many of the tests. Designed for white, English-speaking, educated, middle-class patients, the tests may require major modifications before being applied to less educated and/or non-English-speaking cohorts, much less to worker populations from distant cultures. Straight word-for-word translations are not necessarily adequate for overcoming cultural biases. Studying cultural impacts on performance should be viewed as a challenge for the coming decade.
STRESS
Stress is a very general term for any agent or condition that alters the status quo. Organisms adapt to stress in many ways. Cold stress, for example, stimulates thermoregulatory responses. Strain represents the body’s pathophysiological response when adaptive mechanisms are exceeded or fatigued. Although the measurement of catecholamines is used as a metric of stress, not all physiological, psychological, or behavioral effects are mediated by catecholamines. Adding stress to an exposure model can enhance or unmask subject responses.215 The combination of lead plus restraint (stress) of pregnant females results in increased catecholamine excretion in offspring.216
FUTURE DIRECTIONS
Building on the foundation of clinical psychology and neurobiology, behavioral toxicologists have assembled a variety of test approaches that yield important information about nervous system response to toxic chemicals. In many cases, the mechanisms are uncertain and the pathologic lesion unrecognized. The molecular, biochemical, and microanatomic changes are being revealed. New ways of probing receptors and new breeds of transgenic or “knockout” animals that lack a particular gene offer the opportunity to identify specific mechanisms. A neurotoxicant may act on a discrete target such as the basal ganglia or may disrupt associations between different parts of the brain, interfering with intellectual functions such as cognition and memory. These all provide an active domain for research in a variety of disciplines using a variety of models. New test equipment requires validation on a variety of populations and interpretation depends on improving exposure assessment as well. As the field of neurobehavioral testing matures and tests become validated on increasing numbers of “normal” individuals, one may achieve greater certainty in evaluating subtle abnormalities. Neuronal peptides and nervous system development were two research needs identified in 1980 and still central today. The interaction of xenobiotics with cytokines, genes, gene products, cell differentiation, apoptosis, cell assembly, and neuronal connections
during development is basic to improving the understanding of neurobehavioral development and behavioral teratology. As with general toxicology (Chap. 20, Principles of Toxicology), the effects of mixtures and the interactions of chemicals with stress are important but challenging research areas. Nutritional state can modify both the response to neurotoxicants and performance on tests. The relation of omega-3 fatty acids to dementia and cognitive function is controversial, since cases had higher levels of PUFAs than controls.217 One possible explanation for the different results for MeHg exposure and neurobehavioral outcome in the Faroes and Seychelles studies is the greater diversity of fresh fruits and vegetables available in the tropical Seychelles compared with the temperate Faroes. Most mixture research is dyadic (two agents at a time), but even using controls and several doses, and deciding whether to pretreat or coadminister, can result in many combinations, for each of which several animals must be employed. Gene chip technology allows the same approach to be applied to identifying gene expression effects: which genes are up-regulated and which are down-regulated. Neuroimaging studies, particularly PET scans and functional MRI, are exciting horizons that neurotoxicologists are just beginning to explore. The development of small-animal imaging systems should advance this field rapidly. Studies that can localize the distribution of xenobiotics or their metabolites in specific brain regions are desirable. The blood-brain barrier itself needs much more examination. This is actually a system involving the blood-brain interface and the blood-cerebrospinal fluid interface. The characteristics of metal transport across or sequestration in the barrier have implications for toxicity.
Barriers change with age and can be disrupted by chemicals.218 Stress and experience induce structural modifications in the healthy brain (plasticity), and plasticity and neuronal replacement are important research topics. Toxicants can interfere with the plasticity of the brain, resulting in long-term impairment of learning. This may be a much more sensitive, yet harder to measure, endpoint than structural or behavioral measures.219

REFERENCES
Principles of Toxicology 1. Oser BL. Toxicology then and now. Regul Toxicol Pharmacol. 1987;7:427–43. 2. Gallo M. History and scope of toxicology. In: Klassen CD, ed. Casarett and Doull’s Toxicology. 6th ed. New York: McGraw-Hill; 1996:3–10. 3. Sinclair U. The Jungle. New York: Viking; 1946 (originally published 1905). 4. Pennie WD. Custom cDNA microarrays: technologies and applications. Toxicology. 2002;181–182:551–4. 5. Balbus JM. Ushering in the new toxicology: toxicogenomics and the public interest. Environ Health Perspect. 2005;113:818–22. 6. Klassen CD, ed. Casarett and Doull’s Toxicology. 6th ed. New York: McGraw-Hill; 2001. 7. Rozman KK, Klassen CD. Absorption, distribution, and excretion of toxicants. In: Klassen CD, ed. Casarett and Doull’s Toxicology. 6th ed. New York: McGraw-Hill; 2001:107–32. 8. Ford A. Clinical Toxicology. Philadelphia: WB Saunders; 2001. 9. Goldfrank L, Flomenbaum N, Lewin N, Howland MA, Hoffman R, Nelson L. Toxicologic Emergencies. 7th ed. New York: McGraw-Hill; 2002. 10. Scientific Group on Methodologies for the Safety Evaluation of Chemicals. Alternative testing methodologies (SGOMSEC 13-IPCS 29). Environ Health Perspect. 1998;106(2):405–412. 11. Snodin DJ. An EU perspective on the use of in vitro methods in regulatory pharmaceutical toxicology. Toxicol Lett. 2002;127:161–8.
Toxicology
533
12. Mendelsohn ML. Can chemical carcinogenicity be predicted by short-term tests? Ann N Y Acad Sci. 1988;534:115–26. 13. Anderson KS, Labaer J. The sentinel within: exploiting the immune system for cancer biomarkers. J Proteome Res. 2005;4:1123–33. 14. Wozniak AL, Bulayeva NN, Watson CS. Xenoestrogens at picomolar to nanomolar concentrations trigger membrane estrogen receptor-α-Ca2+ fluxes and prolactin release in GH3/B6 pituitary tumor cells. Environ Health Perspect. 2005;113:431–9. 15. Carson R. Silent Spring. New York: Houghton Mifflin; 1962. 16. Smith KJ, Hurst CG, Moeller RB, Skelton HG, Sidell FR. Sulfur mustard: its continuing threat as a chemical warfare agent, the cutaneous lesions induced, progress in understanding its mechanism of action, its long-term health effects, and new developments for protection and therapy. J Amer Acad Dermatol. 1995;32:765–76. 17. Lioy PJ. Exposure assessment: utility and application within homeland or public security. J Expo Anal Environ Epidemiol. 2004;14:427–8. 18. Kipen HM, Gochfeld M. Mind and matter: OEM and the World Trade Center. Occup Environ Med. 2002;59:145–6; McClellan RK, Deitchman SD. Role of the occupational and environmental medicine physician. In: Upfal MJ, Krieger GR, Phillips SD, Guidotti TL, Weissman D, eds. Terrorism: Biological, Chemical, and Nuclear. Clin Occup Environ Med. 2003;2(2):181–90. 19. Mayr E. What Evolution Is. New York: Basic Books; 2001. 20. Gould SJ. The Structure of Evolutionary Theory. Cambridge: Belknap Harvard; 2002. 21. Lewontin RC. Directions in evolutionary biology. Annu Rev Genet. 2002;36:1–18. 22. Nebert DW, Negishi M. Multiple forms of cytochrome P-450 and the importance of molecular biology and evolution. Biochem Pharmacol. 1982;31:2311–17. 23. Dawkins R. The Blind Watchmaker: Why the Evidence of Evolution Reveals a Universe without Design. New York: WW Norton; 1985. 24. Fitch WM, Margoliash E. Constructing phylogenetic trees. Science. 1967;155:279–84. 25.
Cory-Slechta DA, Virgolini MB, Thiruchelvam M, Weston DD, Bauter MR. Maternal stress modulates the effects of developmental lead exposure. Environ Health Perspect. 2004;112:717–30. 26. Singh BR, Tu AT, eds. Natural Toxins 2: Structure, Mechanism of Action and Detection. New York: Plenum Press; 1996. 27. Fleming LE, Kirkpatrick B, Backer LC, Bean JA, Wanner A, Dalpra D, et al. Initial evaluation of the effects of aerosolized Florida red tide toxins (brevetoxins) in persons with asthma. Environ Health Perspect. 2005;113:650–7. 28. Ames BN, Gold LS. Dietary pesticides (99.99% all natural). Proc Natl Acad Sci USA. 1990;87:7777–81. 29. Agency for Toxic Substances and Disease Registry. Toxicological Profile: Chromium. USPHS, ATSDR/TP-88/10. Atlanta: Centers for Disease Control; 1989. 30. Nieboer E, Jusys AA. Biologic chemistry of chromium. In: Nriagu JO, Nieboer E, eds. Chromium in the Natural and Human Environments. New York: John Wiley & Sons; 1988:21–80. 31. Dawson JH. Probing structure-function relations in heme-containing oxygenases and peroxidases. Science. 1988;240:433–9. 32. Vahter M. Mechanisms of arsenic biotransformation. Toxicology. 2002;181–182:211–7. 33. d’Itri FM. Mercury contamination—what we have learned since Minamata. Environ Monit Assess. 1991;19:165–82. 34. Wenger GR, McMillan DE, Chang LW. Behavioral effects of trimethyltin in two strains of mice. Toxicol Appl Pharmacol. 1984;73:78–88. 35. Davis D, Safe S. Immunosuppressive activities of polychlorinated dibenzofuran congeners: quantitative structure-activity relationships and interactive effects. Toxicol Appl Pharmacol. 1988;94:141–9. 36. Walker NJ, Crockett PW, Nyska A, Brix AE, Jokinen MP, Sells DM, et al. Dose-additive carcinogenicity of a defined mixture of “dioxin-like compounds.” Environ Health Perspect. 2005;113:43–8.
37. Schnellmann RG. Toxic responses of the kidney. In: Klaasen CD, ed. Casarett and Doull’s Toxicology. New York: McGraw-Hill; 2001:491–514. 38. Ashby J, Tennant RW. Prediction of rodent carcinogenicity for 44 chemicals: results. Mutagenesis. 1994;9:7–15. 39. Rettie AE, Jones JP. Clinical and toxicological relevance of CYP2C9: drug-drug interactions and pharmacogenetics. Annu Rev Pharmacol Toxicol. 2005;45:477–94. 40. Wang Y, Liu H, Zhao C, Liu H, Cai Z, Jiang G. Quantitative structure-activity relationship models for prediction of the toxicity of polybrominated diphenyl ether congeners. Environ Sci Technol. 2005;39:4961–6. 41. Schwartz J, Dockery DW, Neas LM. Is daily mortality associated specifically with fine particles? J Air Waste Manage Assoc. 1996;46:927–39. 42. Park SK, O’Neill MS, Vokonas PS, Sparrow D, Schwartz J. Effects of air pollution on heart rate variability: the VA normative aging study. Environ Health Perspect. 2005;113:304–9. 43. Kunzli N, Jerrett M, Mack WJ, Beckerman B, LaBree L, Gilliland F, et al. Ambient air pollution and atherosclerosis in Los Angeles. Environ Health Perspect. 2005;113:201–6. 44. Environmental Protection Agency. Introduction to Indoor Air Quality: A Reference Manual. EPA/400/3-91/003. Washington, DC: Environmental Protection Agency; 1991. 45. Goldstein BD, Melia RJW, du V Florey C. Indoor nitrogen oxides. Bull N Y Acad Med. 1981;58:873–82. 46. Laumbach RJ, Kipen HM. Bioaerosols and sick building syndrome: particles, inflammation, and allergy. Curr Opin Allergy Clin Immunol. 2005;5:135–9. 47. American Conference of Governmental Industrial Hygienists. Threshold Limit Values and Biological Exposure Indices for 2004–2005. Cincinnati: American Conference of Governmental Industrial Hygienists; 2004. 48. Knize MG, Felton JS. Formation and human risk of carcinogenic heterocyclic amines formed from natural precursors in meat. Nutr Rev. 2005;63:158–65. 49.
MacDonald RS, Guo J, Copeland J, Browning JD, Jr, Sleper D, Rottinghaus GE, et al. Environmental influences on isoflavones and saponins in soybeans and their role in colon cancer. J Nutr. 2005;135:1239–42. 50. Lioy P. Total human exposure analysis: a multidisciplinary science for reducing human contact with contaminants. Environ Sci Technol. 1990;24:938–45. 51. Georgopoulos PG, Lioy PJ. Conceptual and theoretical aspects of human exposure and dose assessment. J Expo Anal Environ Epidemiol. 1994;4:253–85. 52. Environmental Protection Agency. Estimating Exposure to Dioxin-Like Compounds. EPA/600/6-88/005Ca,Cb,Cc. Washington DC: Environmental Protection Agency; 1994. 53. Zweig G. The vanishing zero: the evolution of pesticide analyses. Essays Toxicol. 1970;2:156–98. 54. Plog B, Quinlan P. Fundamentals of Industrial Hygiene. 5th ed. Chicago: National Safety Council; 2002. 55. Caussy D, Gochfeld M, Gurzau F, Neagu C, Ruedel H. Lessons from case studies of metals: investigating exposure, bioavailability, and risk. Ecotox Environ Safety. 2003;56:45–51. 56. Umbreit TH, Hesse EJ, Gallo MA. Bioavailability of dioxin in soil from a 2,4,5-T manufacturing site. Science. 1986;232:497–9. 57. Haas NS, Shih R, Gochfeld M. A patient with postoperative mercury contamination of the peritoneum. J Toxicol Clin Toxicol. 2003;41:175–80. 58. ATSDR. Toxicological Profile for Lead (update). Atlanta, GA: Agency for Toxic Substances and Disease Registry; 1999. 59. ATSDR. Toxicological Profile for Cadmium (update). Atlanta, GA: Agency for Toxic Substances and Disease Registry; 1999.
60. Åkesson A, Berglund M, Schutz A, Bjellerup P, Bremme K, Vahter M. Cadmium exposure in pregnancy and lactation in relation to iron status. Amer J Public Health. 2002;92:284–7. 61. Stern AH. A revised probabilistic estimate of the maternal methyl mercury intake dose corresponding to a measured cord blood mercury concentration. Environ Health Perspect. 2005;113:155–63. 62. Chan PK, O’Hara GP, Hayes AW. Principles and methods for acute and subchronic toxicity. In: Hayes AW, ed. Principles and Methods of Toxicology. New York: Raven Press; 1982:1–51. 63. Stevens KP, Gallo MA. Practical considerations in the conduct of chronic toxicity studies. In: Hayes AW, ed. Principles and Methods of Toxicology. New York: Raven Press; 1982:53–77. 64. Van de Wiele T, Vanhaecke L, Boeckaert C, Peru K, Headley J, Verstraete W, et al. Human colon microbiota transform polycyclic aromatic hydrocarbons to estrogenic metabolites. Environ Health Perspect. 2005;113:6–10. 65. Brittebo EB. Metabolism of xenobiotics in the nasal olfactory mucosa: implications for local toxicity. Pharmacol Toxicol. 1993;72(suppl 3):50–2. 66. Parkinson A. Biotransformation of xenobiotics. In: Klassen CD, ed. Casarett and Doull’s Toxicology. 6th ed. New York: McGraw-Hill; 2001:133–224. 67. Gerlach M, Riederer P, Przuntek H, Youdim MBH. MPTP mechanisms of neurotoxicity and their implications for Parkinson’s disease. Eur J Pharmacol. 1991;208:273–86. 68. Guengerich FP. Mammalian Cytochrome P-450. Boca Raton, FL: CRC Press; 1987. 69. Guengerich FP. Reactions and significance of cytochrome P-450 enzymes. J Biol Chem. 1991;266:10019–22. 70. Tucker GT. Clinical implications of genetic polymorphism in drug metabolism. J Pharm Pharmacol. 1994;46(suppl 1):417–24. 71. Meyer UA. The molecular basis of genetic polymorphisms of drug metabolism. J Pharm Pharmacol. 1994;46(suppl 1):409–15. 72. Guengerich FP. Catalytic selectivity of human cytochrome P-450 enzymes: relevance to drug metabolism and toxicity. Toxicol Lett.
1994;70:133–8. 73. Conney AH. Pharmacological implications of microsomal enzyme induction. Pharmacol Rev. 1967;19:317–66. 74. Wrighton SA, Stevens JC. The human hepatic cytochromes P-450 involved in drug metabolism. Crit Rev Toxicol. 1992;22:1–21. 75. Shimada T, Yamazaki H, Mimura M, et al. Interindividual variations in human liver cytochrome P-450 enzymes involved in the oxidation of drugs, carcinogens and toxic chemicals: studies with liver microsomes of 30 Japanese and 30 Caucasians. J Pharmacol Exp Ther. 1994;270:414–23. 76. Guengerich FP, Shimada TL. Oxidation of toxic and carcinogenic chemicals by human cytochrome P-450 enzymes. Chem Res Toxicol. 1991;4:391–407. 77. Falls JG, Blake BL, Cao Y, Levi PE, Hodgson E. Gender differences in hepatic expression of flavin-containing monooxygenase isoforms (FMO1, FMO3, and FMO5) in mice. J Biochem Toxicol. 1995;10: 171–7. 78. Ishii T, Fujishiro M, Masuda M, Nakajima J, Teramoto S, Ouchi Y, et al. Depletion of glutathione S-transferase P1 induces apoptosis in human lung fibroblasts. Exp Lung Res. 2003;29:523–36. 79. Ye Z, Parry JM. A meta-analysis of 20 case-control studies of the glutathione-s-transferase M1 (GSTM1) status and colorectal cancer risk. Med Sci Monit. 2003;9:SR83–91. 80. Lash LH, Zalups RK. Alterations in renal cellular glutathione metabolism after in vivo administration of a subtoxic dose of mercuric chloride. J Biochem Toxicol. 1996;11:1–9. 81. Lin MC, Wang EJ, Patten C, Lee MJ, Xiao F, Reuhl KR, et al. Protective effect of diallyl sulfone against acetaminophen-induced hepatotoxicity in mice. J Biochem Toxicol. 1996;11:11–20. 82. Vahter M, Berglund M, Akesson A, Liden C. Metals and women’s health. Environ Res. 2002;88:145–55.
83. Sugita M, Tsuchiya K. Estimation of variation among individuals of biological half-time of cadmium calculated from accumulation data. Environ Res. 1995;68:31–38. 84. Hahn ME, Karchner SI, Franks DG, Merson RR. Aryl hydrocarbon receptor polymorphisms and dioxin resistance in Atlantic killifish (Fundulus heteroclitus). Pharmacogen. 2004;14:131–43. 85. Kahn AT, Weis JS. Effect of methylmercury on egg and juvenile viability in two populations of killifish Fundulus heteroclitus. Environ Res. 1987;44:272–8. 86. Calabrese EJ, Baldwin LA. Hormesis: a generalizable and unifying hypothesis. Critical Rev Toxicol. 2001;31:353–424. 87. Upton AC. Radiation hormesis: data and interpretations. Crit Rev Toxicol. 2001;31:681–95. 88. Gochfeld M, Burger J. Good fish/bad fish: a composite benefit-risk by dose curve. Neurotoxicology. 2005;26(4):511–20. 89. National Research Council. Drinking Water and Health. Vol 6. Washington DC: National Academy Press; 1986. 90. National Research Council. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII Phase 2 (2005). Washington DC: National Academy Press; 2005. 91. Schneiderman MA, DeCouflé P, Brown CC. Thresholds for environmental cancer: biological and statistical considerations. Ann N Y Acad Sci. 1979;329:92–130. 92. Environmental Protection Agency. Proposed guidelines for carcinogen risk assessment. Fed Regist. 1984;49:46294–301. 93. Bianchi C, Brollo A, Raman L, Zuch C. Asbestos-related mesothelioma in Monfalcone, Italy. Am J Ind Med. 1993;24:149–60. 94. Thier R, Bruning T, Roos PH, Rihs HP, Golka K, Ko Y, et al. Markers of genetic susceptibility in human environmental hygiene and toxicology: the role of selected CYP, NAT and GST genes. Int J Hyg Environ Health. 2003;206:149–71. 95. Roddam PL, Rollinson S, Kane E, Roman E, Moorman A, Cartwright R, et al. Poor metabolizers at the cytochrome P450 2D6 and 2C19 loci are at increased risk of developing adult acute leukemia. Pharmacogen. 2000;10:605–15. 96.
Taioli E, Gaspari L, Benhamou S, et al. Polymorphisms in CYP1A1, GSTM1, GSTT1, and lung cancer below the age of 45 years. Int J Epidem. 2003;32:60–3. 97. Johnson DR. Role of renal cortical sulfhydryl groups in development of mercury-induced renal toxicity. J Toxicol Environ Health. 1982;9:119–26. 98. Bursch W, Oberhammer F, Schulte-Hermann R. Cell death by apoptosis and its protective role against disease. Trends Pharmacol Sci. 1992;13:245–51. 99. Schulte-Hermann R, Timmermann-Trosiener I, Barthel G, Bursch W. DNA synthesis, apoptosis, and phenotypic expression and determinants of growth of altered foci in rat liver during phenobarbital promotion. Cancer Res. 1990;50:5127–35. 100. Marsman DS, Barrett JC. Apoptosis and chemical carcinogenesis. Risk Anal. 1994;14:321–6. 101. Thompson CB. Apoptosis in the pathogenesis and treatment of disease. Science. 1995;267:1456–60. 102. Sweet LI, Passino-Reader DR, Meier PG, Omann GM. Xenobiotic-induced apoptosis: significance and potential application as a general biomarker of response. Biomarkers. 1999;4:237–53. 103. Schecter A. Dioxins and Health. New York: Plenum Press; 1994. 104. Lucier GW, Portier CJ, Gallo MA. Receptor mechanisms and dose-response models for the effects of dioxins. Environ Health Perspect. 1993;101:36–44. 105. Bertazzi PA, Pesatori AC, Consonni D, Tironi A, Landi MT, Zocchetti C. Cancer incidence in a population accidentally exposed to 2,3,7,8-tetrachlorodibenzo-para-dioxin. Epidemiology. 1993;4:398–406. 106. Kosuda LL, Greiner DL, Bigazzi PE. Mercury-induced renal autoimmunity: changes in RT6+ T-lymphocytes of susceptible and resistant rats. Environ Health Perspect. 1993;101:178–85.
107. Dambach DM, Durham SK, Laskin JD, Laskin DL. Distinct roles of NF-kappaB p50 in the regulation of acetaminophen-induced inflammatory mediator production and hepatotoxicity. Toxicol Appl Pharmacol. 2005;211(2):157–65. 108. Burkhart-Schultz K, Thomas CB, Thompson CL, Stout CL, Brinson E, Jones IM. Characterization of in vivo somatic mutations at the hypoxanthine phosphoribosyltransferase gene of a human control population. Environ Health Perspect. 1993;103:68–74. 109. Nicklas JA, O’Neill JP, Hunter TC, Falta MT, Lippert MJ, Jacobson-Kram D, et al. In vivo ionizing irradiations produce deletions in the hprt gene of human T-lymphocytes. Mutat Res. 1991;250:383–91. 110. Brandt-Rauf P, Marion M-J, DeVivo I. Mutant p21 protein as a biomarker of chemical carcinogenesis in humans. In: Mendelsohn ML, Peeters JP, Normandy MJ, eds. Biomarkers and Occupational Health. Washington DC: Joseph Henry Press; 1995:163–173. 111. Levine AJ, Finlay CA, Hinds PW. P53 is a tumor suppressor gene. Cell. 2004;116:S67–9. 112. Harris CC. p53: at the crossroads of molecular carcinogenesis and risk assessment. Science. 1993;262:1980–1. 113. Aguilar F, Hussain SP, Cerutti P. Aflatoxin B1 induces the transversion of G → T in codon 249 of the p53 tumor suppressor gene in human hepatocytes. Proc Natl Acad Sci U S A. 1993;90:8586–90. 114. Denissenko MF, Pao A, Tang M-S, Pfeifer GP. Preferential formation of benzo[a]pyrene adducts at lung cancer mutational hotspots in p53. Science. 1996;274:430–2. 115. Thomas MJ, Thomas JA. Toxic responses of the reproductive system. In: Klassen CD, ed. Casarett and Doull’s Toxicology. 6th ed. New York: McGraw-Hill; 2001:673–711. 116. Grandjean P, Budtz-Jorgensen E, Jorgensen PJ, Weihe P. Umbilical cord mercury concentration as biomarker of prenatal exposure to methylmercury. Environ Health Perspect. 2005;113:905–8. 117. Needleman HL, Schell A, Bellinger D, Leviton A, Allred EN. The long-term effects of exposure to low doses of lead in childhood.
N Engl J Med. 1990;322:83–88. 118. Colborn T, Dumanoski D, Myers JP. Our Stolen Future. New York: Dutton; 1996. 119. Guillette LJ, Jr, Gross TS, Masson GR, Matter JM, Percival HF, Woodward AR. Developmental abnormalities of the gonad and abnormal sex hormone concentrations in juvenile alligators from contaminated and control lakes in Florida. Environ Health Perspect. 1994;102:680–8. 120. Adlercreutz H. Phytoestrogens: epidemiology and a possible role in cancer protection. Environ Health Perspect. 1995;103(suppl 7):103–12. 121. Calafat AM, Kuklenyik Z, Reidy JA, Caudill SP, Ekong J, Needham LL. Urinary concentrations of bisphenol A and 4-nonylphenol in a human reference population. Environ Health Perspect. 2005;113:391–5. 122. Fan KQ, You L, Brown-Borg H, Brown S, Edwards RJ, Corton JC. Regulation of phase I and phase II steroid metabolism enzymes by PPAR alpha activators. Toxicology. 2004;204:109–21. 123. Miyamoto J, Burger J. Report from a SCOPE/IUPAC project: implications of endocrine active substances for humans and wildlife. Pure Appl Chem. 2003;75:1617–2615. 124. Arnold SF, Robinson MK, Notides AC, Guillette LJ, Jr, McLachlan JA. A yeast estrogen screen for examining the relative exposure of cells to natural and xenoestrogens. Environ Health Perspect. 1996;104:544–8. 125. Sies H. Oxidative stress: introductory remarks. In: Sies H, ed. Oxidative Stress. New York: Academic Press; 1985:1–10. 126. Hassoun EA, Stohs SJ. Chromium-induced production of reactive oxygen species, DNA single-strand breaks, nitric oxide production, and lactate dehydrogenase leakage in J774A.1 cell cultures. J Biochem Toxicol. 1995;10:315–22. 127. Barlow BK, Lee DW, Cory-Slechta DA, Opanashuk LA. Modulation of antioxidant defense systems by the environmental pesticide maneb in dopaminergic cells. Neurotoxicology. 2005;26:63–75.
128. Thiruchelvam M, Prokopenko O, Cory-Slechta DA, Richfield EK, Buckley B, Mirochnitchenko O. Overexpression of superoxide dismutase or glutathione peroxidase protects against the paraquat + maneb-induced Parkinson disease phenotype. J Biol Chem. 2005;280:22530–39. 129. Tappel AL. Lipid peroxidation damage to cell components. Fed Proc. 1973;32:1870–4. 130. Melin AM, Perromat A, Clerc M. In vivo effect of diosmin on carrageenan and CCl4-induced liver peroxidation in rat liver microsomes. J Biochem Toxicol. 1996;11:27–32. 131. Laskin DL, Heck DE, Gardner CR, Fedor LS, Laskin JD. Distinct patterns of nitric oxide production in hepatic macrophages and endothelial cells following acute exposure of rats to endotoxin. J Leukoc Biol. 1994;56:751–8. 132. Bredt DS, Snyder SH. Isolation of nitric oxide synthase, a calmodulin-requiring enzyme. Proc Natl Acad Sci USA. 1990;87:682–5. 133. Dawson TM, Dawson VL, Snyder SH. A novel neuronal messenger molecule in brain: the free radical, nitric oxide. Ann Neurol. 1992;32:297–311. 134. Dawson VL, Dawson TM, Bartley DA, Uhl GR, Snyder SH. Mechanisms of nitric oxide-mediated neurotoxicity in primary brain cultures. J Neurosci. 1993;13:2651–61. 135. Ichihara S, Yamada Y, Fujimura T, Nakashima N, Yokota M. Association of the polymorphism of the endothelial constitutive nitric oxide synthase gene with myocardial infarction in the Japanese population. Amer J Cardiol. 1998;81:83–6. 136. Armitage P. Multistage models of carcinogenesis. Environ Health Perspect. 1985;63:195–201. 137. EPA (U.S. Environmental Protection Agency). Report of the EPA Workshop on the Development of Risk Assessment Methodologies for Tumor Promoters. EPA/600/9-87/013. Washington, DC: Environmental Protection Agency; 1987. 138. Cohen SM, Ellwein LB. Genetic errors, cell proliferation, and carcinogenesis. Cancer Res. 1991;51:6493–505. 139. Hammond EC, Selikoff IJ, Seidman H. Asbestos exposure, cigarette smoking and death rates. Ann N Y Acad Sci. 1979;330:473–90. 140.
Thiruchelvam M, Brockel BJ, Richfield EK, Baggs RB, Cory-Slechta DA. Potentiated and preferential effects of combined paraquat and maneb on nigrostriatal dopamine systems: environmental risk factors for Parkinson’s disease. Brain Res. 2000;873:225–34. 141. Thiruchelvam M, McCormack A, Richfield EK, Baggs RB, Tank AW, DiMonte DA, et al. Age-related irreversible progressive nigrostriatal dopaminergic neurotoxicity in the paraquat and maneb model of the Parkinson’s disease phenotype. Eur J Neurosci. 2003;18:589–600. 142. Ross RK, Yuan J-M, Yu MC, Wogan GN, Qian G-S, Tu J-T, et al. Urinary aflatoxin biomarkers and risk of hepatocellular carcinoma. Lancet. 1992;339:943–6. 143. Conney AH, Burns JJ. Metabolic interactions among environmental chemicals and drugs. Science. 1972;178:576–86. 144. Gochfeld M. Developmental defects in common terns of western Long Island, NY. Auk. 1975;92:58–65. 145. Hauser R, Williams P, Altshul L, Calafat AM. Evidence of interaction between polychlorinated biphenyls and phthalates in relation to human sperm motility. Environ Health Perspect. 2005;113:425–30. 146. Monosson E. Chemical mixtures: considering the evolution of toxicology and chemical assessment. Environ Health Perspect. 2005;113:383–90. 147. White JF, Carlson GP. Epinephrine-induced cardiac arrhythmias in rabbits exposed to trichloroethylene: potentiation by caffeine. Fundam Appl Toxicol. 1982;2:125–9. 148. Cory-Slechta DA, Virgolini MB, Thiruchelvam M, Weston DD, Bauter MR. Maternal stress modulates the effects of developmental lead exposure. Environ Health Perspect. 2004;112:717–30.
149. Guo Z, Wang M, Tian G, Burger J, Gochfeld M, Yang CS. Age- and gender-related variations in the activities of drug-metabolizing and antioxidant enzymes in the white-footed mouse (Peromyscus leucopus). Growth Dev Aging. 1993;57:85–100. 150. Fan Z, Lioy P, Weschler C, Fiedler N, Kipen H, Zhang J. Ozone-initiated reactions with mixtures of volatile organic compounds under simulated indoor conditions. Environ Sci Technol. 2003;37:1811–21. 151. Hill AB. The environment and disease: association or causation? Proc R Soc Med. 1965;58:295–300. 152. Newman LS. To Be2+ or not to Be2+: immunogenetics and occupational exposure. Science. 1993;262:197–8. 153. Lauwerys R, Bernard A. Preclinical detection of nephrotoxicity: description of the tests and appraisal of their health significance. Toxicol Lett. 1989;46:13–30. 154. Barrier M, Mirkes PE. Proteomics in developmental toxicology. Reprod Toxicol. 2005;19:291–304. 155. Schulte PA, Perera FP. Molecular Epidemiology: Principles and Practices. San Diego: Academic Press; 1993. 156. Mendelsohn ML, Peeters JP, Normandy MJ, eds. Biomarkers and Occupational Health: Progress and Perspectives. Washington DC: Joseph Henry Press; 1995. 157. Peakall D. Animal Biomarkers as Pollution Indicators. London: Chapman & Hall; 1992. 158. Sexton K, Needham LL, Pirkle JL. Human biomonitoring of environmental chemicals. Am Sci. 2004;92:38–45. 159. Blomberg A, Mudway I, Svensson M, Hagenbjork-Gustafsson A, Thomasson L, Helleday R, et al. Clara cell protein as a biomarker for ozone-induced lung injury in humans. Eur Respir J. 2003;22:883–8. 160. Halatek T, Gromadzinska J, Wasowicz W, Rydzynski K. Serum Clara-cell protein and beta2-microglobulin as early markers of occupational exposure to nitric oxides. Inhal Toxicol. 2005;17:87–97. 161. Vahter M, Berglund M, Akesson A, Liden C. Metals and women’s health. Environ Res. 2002;88:145–55. 162. Tsaih SW, Korrick S, Schwartz J, Lee ML, Amarasiriwardena C, Aro A, et al.
Influence of bone resorption on the mobilization of lead from bone among middle-aged and elderly men: the Normative Aging Study. Environ Health Perspect. 2001;109:995–9. 163. Lindberg A, Bjornberg KA, Vahter M, Berglund M. Exposure to methylmercury in non-fish eating people in Sweden. Environ Res. 2004;96:28–33. 164. National Research Council. Biomarkers in Pulmonary Toxicology. Washington, DC: National Academy Press; 1989. 165. National Research Council. Biomarkers in Reproductive Toxicology. Washington, DC: National Academy Press; 1992. 166. National Research Council. Biomarkers in Immunotoxicology. Washington, DC: National Academy Press; 1992. 167. Santella RM, Grinberg-Funes RA, Young TL, Singh VN, Wang LW, Perera FP. Cigarette smoking related polycyclic aromatic hydrocarbon-DNA adducts in peripheral mononuclear cells. Carcinogenesis. 1992;13:2041–5. 168. Costa M, Zhitkovich A, Toniolo P. DNA-protein cross-links in welders: molecular implications. Cancer Res. 1993;53:460–5. 169. Amin RP, Witz G. DNA-protein crosslink and DNA strand break formation in HL-60 cells treated with trans,trans-muconaldehyde, hydroquinone, and their mixtures. Int J Toxicol. 2001;20:69–80. 170. Mendelsohn ML. The current applicability of large scale biomarker programs to monitor cleanup workers. In: Mendelsohn ML, Peeters JP, Normandy MJ, eds. Biomarkers and Occupational Health. Washington, DC: Joseph Henry Press; 1995:9–19. 171. Harley N. Toxic effects of radiation and radioactive materials. In: Klassen C, ed. Casarett & Doull’s Toxicology. New York: McGraw-Hill; 2001:917–44. 172. Kimbrough RD. Determining exposure and biochemical effects in human population studies. Environ Health Perspect. 1983;48:77–79.
173. Institute of Medicine. Veterans and Agent Orange: Update 2004. Washington DC: National Academies Press; 2005. 174. Fiedler N, Giardino N, Natelson B, Ottenweller JE, Weisel C, Lioy P, et al. Responses to controlled diesel vapor exposure among chemically sensitive Gulf War veterans. Psychosom Med. 2004;66:588–98. 175. Landrigan PJ, Lioy PJ, Thurston G, Berkowitz G, Chen LC, Chillrud SN, et al.; NIEHS World Trade Center Working Group. Health and environmental consequences of the World Trade Center disaster. Environ Health Perspect. 2004;112:731–9. 176. Rall D, Hogan MD, Huff JE, Schwetz BA, Tennant RW. Alternatives to using human experience in assessing health. Ann Rev Public Health. 1987;8:355–85. 177. Resnik DB, Portier C. Pesticide testing on human subjects: weighing benefits and risks. Environ Health Perspect. 2005;113:813–7. 178. Black H. A cleaner bill of health. Environ Health Perspect. 1996;104:488–90. 179. Kanitz S, Franco Y, Patrone V, Caltabellotta M, Raffo E, Riggi C, et al. Association between drinking water disinfection and somatic parameters at birth. Environ Health Perspect. 1996;104:516–21. 180. Alavanja M, Goldstein I, Susser M. A case-control study of gastrointestinal and urinary tract cancer mortality and drinking water chlorination. In: Jolley RL, Gorchev H, Hamilton DH, Jr, eds. Water Chlorination: Environmental Impact and Health. Ann Arbor, MI: Ann Arbor Scientific Publishing; 1990. 181. Loranger S, Demers G, Kennedy G, Forget E, Zayed J. The pigeon (Columba livia) as a monitor for manganese contamination from motor vehicles. Arch Environ Contam Toxicol. 1994;27:311–7. 182. Gochfeld M. Why epidemiology of endocrine disruptors warrants the precautionary principle. Pure Appl Chem. 2003;75:2521–9. 183. Sokal RR, Sneath P. Principles of Numerical Taxonomy. San Francisco: WH Freeman; 1963. 184. Pognan F. Genomics, proteomics, and metabonomics in toxicology: hopefully not “fashionomics.” Pharmacogenom. 2004;5:879–93. 185.
Hwang KB, Kong SW, Greenberg SA, Park PJ. Combining gene expression data from different generations of oligonucleotide arrays. BMC Bioinformatics. 2004;5:159–62. 186. Donaldson K, Brown D, Cloutter A, Duffin R, MacNee W, Renwick L, et al. The pulmonary toxicology of ultrafine particles. J Aerosol Med. 2002;15:213–20. 187. Oberdörster G, Oberdörster E, Oberdörster J. Nanotoxicology: an emerging discipline evolving from studies of ultrafine particles. Environ Health Perspect. 2005;113:823–39. 188. Lewis M, Bendersky M, eds. Mothers, Babies, and Cocaine: The Role of Toxins in Development. Mahwah, NJ: Lawrence Erlbaum Associates; 1995. 189. Frumkin H. Health, equity, and the built environment. Environ Health Perspect. 2005;113:290–1. 190. Arcury TA, Quandt SA, Russell GB. Pesticide safety among farmworkers: perceived risk and perceived control as factors reflecting environmental justice. Environ Health Perspect. 2002;110(suppl 2):233–40.
General References TOXLINE http://toxnet.nlm.nih.gov/cgi-bin/sis/htmlgen?TOXLINE. Aldridge WN. Mechanisms and Concepts in Toxicology. London: Taylor and Francis; 1996. Arias IM, Jakoby WB, Popper H, Schachter D. The Liver: Biology and Pathobiology. 4th ed. New York: Raven Press; 2000. ATSDR Toxicological Profiles. Atlanta, Georgia: Agency for Toxic Substances and Disease Registry, Division of Toxicology. (Includes a search engine.) Ayres J, Maynard R, Richards R, eds. Air Pollution and Health. London: World Scientific Publishing Company; 2005. Ballantyne B, Marrs T, Turner P. General and Applied Toxicology. 2nd ed. London: Macmillan Press Ltd; 2001. BEIR VI. Health Effects of Exposure to Radon. Washington DC: National Academy Press; 1999.
Borlak J. Handbook of Toxicogenomics: Strategies and Applications. Hoboken: John Wiley and Sons; 2005. Crosby DG. Environmental Toxicology and Chemistry. New York, NY: Oxford University Press; 1998. Cunningham MJ. Genetic and Proteomic Applications in Toxicity Testing (Methods in Pharmacology and Toxicology). Totowa: Humana Press; 2003. Davies KJA, Ursini F, eds. The Oxygen Paradox. Italy: CLEUP University Padova; 1995. Davis D. When Smoke Ran Like Water. New York: Basic Books; 2002. Forman HJ, Cadenas E, eds. Oxidative Stress and Signal Transduction. New York: Chapman & Hall; 1997. Francis BM. Toxic Substances in the Environment. New York: John Wiley & Sons; 1994. Fan AM, Chang LW, eds. Toxicology and Risk Assessment: Principles, Methods, and Applications. 2nd ed. New York: Marcel Dekker; 2000. Galli CL, Marinovich M, Goldberg AM, eds. Modulation of Cellular Responses in Toxicity. Berlin: Springer; 1995. Gallo MA, Hesse EJ, MacDonald GJ, Umbreit TH. Interactive effects of estradiol and 2,3,7,8-tetrachloro-dibenzo-p-dioxin on hepatic cytochrome P-450 and mouse uterus. Toxicol Lett. 1986;32:123–32. Gossel TA, Bricker JD. Principles of Clinical Toxicology. 3rd ed. New York: Raven Press; 1994. Hardman JG, ed. Goodman and Gilman’s The Pharmacological Basis of Therapeutics. 10th ed. New York: McGraw-Hill; 2001. Hayes AW. Principles and Methods of Toxicology. 4th ed. New York: Raven Press; 2001. Hayes WJ, Laws ER, Jr. Handbook of Pesticide Toxicology. Vols 1–3. New York: Academic Press; 1997. Hughes WW. Essentials of Environmental Toxicology: The Effects of Environmentally Hazardous Substances on Human Health. Washington, DC: Taylor and Francis; 1996. Johnson BL, DeRosa CT. The toxicologic hazard of superfund hazardous waste sites. Rev Environ Health. 1997;12:235–51. Josephy D, Mannervik B. Molecular Toxicology. New York: Oxford University Press; 2005. Kiran R, Varma MN. Biochemical studies on endosulfan toxicity in different age groups of rats. Toxicol Lett.
1988;44:247–52. Klaassen CD, ed. Casarett and Doull’s Toxicology: The Basic Science of Poisons. 6th ed. New York: McGraw-Hill; 2001. Landis WG, Yu M-H. Introduction to Environmental Toxicology: Impacts of Chemicals Upon Ecological Systems. Boca Raton, FL: Lewis Publishers; 1995:328. Loomis TA, Hayes AW. Loomis’s Essentials of Toxicology. 4th ed. San Diego: Academic Press; 1996. Lu FC. Basic Toxicology: Fundamentals, Target Organs, and Risk Assessment. 4th ed. Washington DC: Taylor and Francis; 2002. Malins DC, Ostrander GK, eds. Aquatic Toxicology: Molecular, Biochemical, and Cellular Perspectives. Boca Raton, FL: Lewis Publishers; 1994. Markowitz G, Rosner D. Deceit and Denial: The Deadly Politics of Industrial Pollution. Berkeley, CA: University of California Press; 2002. Mehlman MA, Upton AC, eds. The Identification and Control of Environmental and Occupational Diseases: Asbestos and Cancers. Princeton, NJ: Princeton Scientific Publishing; 1994. Mommsen TP, Moon TW. Environmental Toxicology (Biochemistry and Molecular Biology of Fishes). Amsterdam: Elsevier Science; 2005. Committee on the Toxicological Effects of Methylmercury, Board on Environmental Studies and Toxicology, National Research Council. Toxicological Effects of Methylmercury. Washington DC: National Academy Press; 2000. National Research Council. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII Phase 2. Washington DC: National Academy Press; 2005. Packer L, Cadenas E, eds. Oxidative Stress and Disease. New York: Marcel Dekker; 1999.
538
Environmental Health
Parker MG. Nuclear Hormone Receptors: Molecular Mechanisms, Cellular Functions, Clinical Abnormalities. London: Academic Press; 1991. Peterson LE, Abrahamson S, eds. Effects of Ionizing Radiation: Atomic Bomb Survivors and their Children (1945–1995). Washington DC: National Academy Press; 1998. Robertson DG, Lindon J. Metabonomics in Toxicity Assessment. London: CRC Press; 2005. Schwarzenbach RP, Gschwend PM, Imboden DM. Environmental Organic Chemistry. New York, NY: Wiley; 1993. Salsburg DS. Statistics for Toxicologists. New York: Marcel Dekker; 1986. Schardein JL. Chemically Induced Birth Defects. 3rd ed. New York: Marcel Dekker; 2000. Sen CK, Sies H, Baeuerle P, eds. Antioxidant and Redox Regulation of Genes. San Diego: Academic Press; 2000. Sipes IG, McQueen CA, Gandolfi AJ. Comprehensive Toxicology. 13 vols. New York: Pergamon; 1997. Timbrell JA. Introduction to Toxicology. 3rd ed. London: Taylor and Francis; 2002. Walker CH, Hopkin SP, Sibly RM, et al. Principles of Ecotoxicology. 2nd ed. London: Taylor and Francis; 2001. Wexler P. Information Resources in Toxicology. 3rd ed. New York: Elsevier; 2000. Williams PL, Burson JL. Industrial Toxicology. New York: Van Nostrand; 1985.
Exposure Barrett JC. Prevention of environmentally related disease. Environ Health Perspect. 1994;102:812–3. Lioy PJ, Waldman JM, Greenberg A, Harkov R, Pietarinen C. The Total Human Environmental Exposure Study (THEES) to benzo(a)pyrene: comparison of the inhalation and food pathways. Arch Environ Health. 1988;43:304–12. Miller FJ, Graham JA. Research needs and advances in inhalation dosimetry identified through the use of mathematical dosimetry models of ozone. Toxicol Lett. 1988;44:231–46. Morris RD. Chlorination, chlorination by-products and cancer: a meta-analysis. Am J Public Health. 1992;82:955–62. Nieuwenhuijsen MJ, ed. Exposure Assessment in Occupational and Environmental Epidemiology. New York: Oxford University Press; 2003.
Carcinogenesis Dragani T, Manenti G, Gariboldi M, Falvella S, Pierotti M, Della Porta G. Genetics of hepatocarcinogenesis in mouse and man. In: Zervos C, ed. Oncogene and Transgenics Correlates of Cancer Risk Assessments. New York: Plenum Press; 1992:67–80. Drinkwater N, Bennett L. Genetic control of carcinogenesis in experimental animals. Prog Exp Tumor Res. 1991;33:1–20. Haseman J, Lockhart A. The relationship between use of the maximum tolerated dose and study sensitivity for detecting rodent carcinogenicity. Fund Appl Toxicol. 1994;22:382–91. Huff J. Chemicals and cancer in humans: first evidence in experimental animals. Environ Health Perspect. 1993;100:201–10. Huff J, Boyd J, Barrett JC. Cellular and molecular mechanisms of hormonal carcinogenesis: environmental influences. New York: John Wiley & Sons; 1996. Li JJ, Li SA, Gustafsson J-A, Niendi S, Sekely LI, eds. Hormonal Carcinogenesis. Vols 1–2. New York: Springer Verlag; 1996.
Biological, Chemical, Radiation Weapons, Disasters, and Preparedness Currance PL, Clements B, Bronstein A. Emergency Care for Hazardous Materials Exposure. St Louis, London: Mosby Publishing; 2005. Hoenig SL. Handbook of Chemical Warfare and Terrorism. Westport, CT: Greenwood Press; 2002.
Jackson BA, Baker JC, Ridgely MS, Bartis JT, Linn HI. Protecting Emergency Responders Vol 3: Safety Management in Disaster and Terrorism Response. Washington DC: National Institute for Occupational Safety and Health; 2004. Miller J, Broad WJ, Engelberg S. Germs: Biological Weapons and America’s Secret War. New York: Simon & Schuster; 2001. National Research Council. Review of the U.S. Army’s Health Risk Assessments for Oral Exposure to Six Chemical Agents. Washington DC: National Academy Press; 1999. Upfal MJ, Krieger GR, Phillips SD, Guidotti TL, Weissman D. Terrorism: biological, chemical and nuclear. Clinics Occup Environ Med. 2003;2(2).
Neurobehavioral Toxicology 1. Lotti M. Neurotoxicology: the Cinderella of neuroscience. Neurotoxicology. 1996;17:313–22. 2. Weiss B. Tools for the assessment of behavioral toxicity. In: Xintaras C, Johnson BL, de Groot I, eds. Behavioral Toxicology: Early Detection of Occupational Hazards. Washington, DC: U.S. Department of Health, Education and Welfare, National Institute of Occupational Safety and Health; 1974:444–9. 3. Xintaras C, Johnson BL, de Groot I. Behavioral Toxicology: Early Detection of Occupational Hazards. Washington DC: U.S. Department of Health, Education and Welfare; 1974. 4. Zenick H, Reiter LW. Behavioral Toxicology, an Emerging Discipline. Research Triangle Park, NC: U.S. Environmental Protection Agency; 1977. 5. Valciukas JA, Lilis R. Psychometric techniques in environmental research. Environ Res. 1980;21:275–97. 6. Lotti M. Central neurotoxicity and behavioral effects of anticholinesterases. In: Ballantyne B, Marrs TC, eds. Clinical and Experimental Toxicology of Organophosphates and Carbamates. London: Butterworth-Heinemann; 1992:75–83. 7. Aldridge WN. The biological basis and measurement of thresholds. Ann Rev Pharmacol Toxicol. 1986;26:39–58. 8. Johnson MK. The delayed neuropathy caused by some organophosphorus esters: mechanism and challenge. CRC Crit Rev Toxicol. 1975;3:289–316. 9. Lotti M, Moretto A. Organophosphate-induced delayed polyneuropathy. Toxicol Rev. 2005;24:37–49. 10. Cohn J, Cory-Slechta DA. Lead exposure potentiates the effects of NMDA on repeated learning. Neurotoxicol Teratol. 1994;16:455–65. 11. Cohen SA, Muller WE. Age-related alterations of NMDA-receptor properties in the mouse forebrain: partial restoration by chronic phosphatidylserine treatment. Brain Res. 1992;584:174–80. 12. France CP, Lu Y, Woods JH. Interactions between N-methyl-D-aspartate and CGS 19755 administered intramuscularly and intracerebroventricularly in pigeons. J Pharmacol Exp Ther. 1990;225:1271–77. 13. Aamodt SM, Nordeen EJ, Nordeen KW. 
Blockade of NMDA receptors during song model exposure impairs song development in juvenile zebra finches. Neurobiol Learn Mem. 1996;65:91–8. 14. Cory-Slechta DA, Pokora MJ, Preston RA. The effects of dopamine agonists on fixed interval schedule-controlled behavior are selectively altered by low-level lead exposure. Neurotoxicol Teratol. 1996;18:565–75. 15. Husi H, Ward MA, Choudhary JS, Blackstock WP, Grant SG. Proteomic analysis of NMDA receptor-adhesion protein signaling complexes. Nature Neurosci. 2000;3:661–9. 16. Singh AK, Jiang Y. Developmental effects of chronic low-level lead exposure on voltage-gated calcium channels in brain synaptosomes obtained from the neonatal and the adult rats. Comp Biochem Physiol C Pharmacol Toxicol Endocrinol. 1997;118: 75–81.
17. Anger WK, Storzbach D, Amler RW, Sizemore OJ. Human behavioral neurotoxicology: workplace and community assessments. In: Rom W, ed. Environmental and Occupational Medicine. 3rd ed. New York: Lippincott-Raven; 1998:329–50. 18. Baker EL, Letz R, Fidler A. A computer-administered neurobehavioral evaluation system for occupational and environmental epidemiology. J Occup Med. 1985;27:206–12. 19. Virgolini MB, Chen K, Weston DD, Bauter MB, Cory-Slechta DA. Interactions of chronic lead exposure and intermittent stress: consequences for brain catecholamine systems and associated behaviors and HPA axis function. Toxicol Sci. 2005;87(2):469–82. 20. LoPachin RM, Ross JF, Reid ML, Das S, Mansukhani S, Lehning EJ. Neurological evaluation of toxic axonopathies in rats: acrylamide and 2,5-hexanedione. Neurotoxicology. 2002;23:95–110. 21. Singer R, Valciukas JA, Lilis R. Lead exposure and nerve conduction velocity: the differential time course of sensory and motor nerve effects. Neurotoxicology. 1983;4:193–202. 22. Sills RC, Harry GJ, Valentine WM, Morgan DL. Interdisciplinary neurotoxicity inhalation studies: carbon disulfide and carbonyl sulfide research in F344 rats. Toxicol Appl Pharmacol. 2005;207(2 Suppl):245–50. 23. Yokoyama K, Araki S, Nishikitani M, Sato H. Computerized posturography with sway frequency analysis: application in occupational and environmental health. Ind Health. 2002;40:14–22. 24. Hafeman DM, Ahsan H, Louis ED, Siddique AB, Slavkovich V, Cheng Z, et al. Association between arsenic exposure and a measure of subclinical sensory neuropathy in Bangladesh. J Occup Environ Med. 2005;47:778–84. 25. Mooney SM, Siegenthaler JA, Miller MW. Ethanol induces heterotopias in organotypic cultures of rat cerebral cortex. Cereb Cortex. 2004;14:1071–80. 26. Gilman S. Medical progress: advances in neurology. N Engl J Med. 1992;326:1608–16. 27. Morrow LA, Callender T, Lottenberg S, Buchsbaum MS, Hodgson MJ, Robin N. 
PET and neurobehavioral evidence of tetrabromoethane encephalopathy. J Neuropsychiatry Clin Neurosci. 1990;2:431–5. 28. Thiruchelvam M, McCormack A, Richfield EK, Baggs RB, Tank AW, DiMonte DA, et al. Age-related irreversible progressive nigrostriatal dopaminergic neurotoxicity in the paraquat and maneb model of the Parkinson’s disease phenotype. Eur J Neurosci. 2003;18:589–600. 29. Yolton K, Dietrich K, Auinger P, Lanphear BP, Hornung R. Exposure to environmental tobacco smoke and cognitive abilities among U.S. children and adolescents. Environ Health Perspect. 2005;113:98–103. 30. Laties VG, Merigan WH. Behavioral effects of carbon monoxide on animals and man. Annu Rev Pharmacol Toxicol. 1979;19:357–92. 31. O’Hanlon JF. Preliminary studies of the effects of carbon monoxide on vigilance in man. In: Weiss B, Laties VG, eds. Behavioral Toxicology. New York: Plenum Press; 1975:61–75. 32. Seppalainen AM. Neurophysiological findings among workers exposed to organic solvents. Scand J Work Environ Health. 1981;7(suppl 4):29–33. 33. Lindstrom K, Martelin T. Personality and long term exposure to organic solvents. Neurobehav Toxicol. 1980;2:89–100. 34. Flodin U, Edling C, Axelson O. Clinical studies of psycho-organic syndromes among workers with exposure to solvents. Am J Ind Med. 1984;5:287–95. 35. National Institute for Occupational Safety and Health. Organic solvent neurotoxicity. NIOSH Curr Intelligence Bull. 1987;48:1–39. 36. van der Hoek JA, Verberk MM, van der Laan G, Hageman G. Routine diagnostic procedures for chronic encephalopathy induced by solvents: survey of experts. Occup Environ Med. 2001;58:382–5. 37. Ahmadi A, Jonsson P, Flodin U, Soderkvist P. Interaction between smoking and glutathione S-transferase polymorphisms in solvent-induced chronic toxic encephalopathy. Toxicol Indust Health. 2002;18:289–96.
38. Korbo L, Ladefoged O, Lam HR, Ostergaard G, West MJ, Arlien-Soberg P. Neuronal loss in hippocampus in rats exposed to toluene. Neurotoxicology. 1996;17:359–66. 39. Cherry N, Waldron HA, Wells GG, Wilkinson RT, Wilson HK, Jones S. An investigation of the acute behavioural effects of styrene on factory workers. Br J Ind Med. 1980;37:234–40. 40. Campagna D, Gobba F, Mergler D, Moreau T, Galassi C, Cavalleri A, et al. Color vision loss among styrene-exposed workers: neurotoxicological threshold assessment. Neurotoxicology. 1996;17:367–74. 41. Benignus VA, Geller AM, Boyes WK, Bushnell PJ. Human neurobehavioral effects of long-term exposure to styrene: a meta-analysis. Environ Health Perspect. 2005;113:532–8. 42. Cavanaugh JV. Peripheral neuropathy caused by chemical agents. CRC Crit Rev Toxicol. 1980;2:365–76. 43. Teisinger J. New advances in the toxicology of carbon disulfide. Am Ind Hyg Assoc J. 1974;35:55. 44. Vigliani EC. Carbon disulfide poisoning in viscose rayon factories. Br J Ind Med. 1954;11:235. 45. Tsunoda M, Konno N, Nakano K, Liu Y. Altered metabolism of dopamine in the midbrain of mice treated with tributyltin chloride via subacute oral exposure. Environ Sci. 2004;11:209–19. 46. Rice DC. Sensory and cognitive effects of developmental methyl mercury exposure in monkeys, and a comparison to effects in rodents. Neurotoxicology. 1996;17:139–54. 47. El-Fawal HAN, Gong Z, Little AR, Evans HL. Exposure to methyl mercury results in serum autoantibodies to neurotypic and gliotypic proteins. Neurotoxicology. 1996;17:531–40. 48. Valciukas JA, Lilis R, Fischbein A, Selikoff IJ. Central nervous system dysfunction due to lead exposure. Science. 1978;201:465–7. 49. Agency for Toxic Substances and Disease Registry (ATSDR). The Nature and Extent of Lead Poisoning in Children in the United States: A Report to Congress. Atlanta: U.S. Public Health Service; 1988. 50. Schwartz BS, Lee BK, Bandeen-Roche K, Stewart W, Bolla K, Links J, et al. 
Occupational lead exposure and longitudinal decline in neurobehavioral test scores. Epidemiology. 2005;16:106–13. 51. Schmitt TJ, Zawia N, Harry GJ. GAP-43 mRNA expression in the developing rat brain: alterations following lead-acetate exposure. Neurotoxicology. 1996;17:407–14. 52. Cory-Slechta DA. Lead-induced impairments in complex cognitive function: offerings from experimental studies. Child Neuropsychol. 2003;9:54–75. 53. Rogan WJ, Gladen BC, McKinney JD, Carreras N, Hardy P, Thullen J, et al. Neonatal effects of transplacental exposure to PCBs and DDE. J Pediatr. 1986;109:335–41. 54. Jacobson JL, Jacobson SW. Intellectual impairment in children exposed to polychlorinated biphenyls in utero. N Engl J Med. 1996;335:783–9. 55. Kuriyama SN, Talsness CE, Grote K, Chahoud I. Developmental exposure to low-dose PBDE-99: effects on male fertility and neurobehavior in rat offspring. Environ Health Perspect. 2005;113: 149–54. 56. Langston JW, Ballard P, Tetrud JW, Irwin I. Chronic parkinsonism in humans due to a product of meperidine-analog synthesis. Science. 1983;219:979–80. 57. Silbergeld EK. Mechanisms of lead neurotoxicity, or looking beyond the lamppost. FASEB J. 1992;6:3201–06. 58. Cory-Slechta DA, Pokora MJ, Fox RAV, O’Mara DJ. Lead-induced changes in dopamine D1 sensitivity: modulation by drug discrimination training. Neurotoxicology. 1996;17:445–58. 59. Cory-Slechta DA, Pokora MJ, Johnson JL. Postweaning lead exposure enhances the stimulus properties of N-methyl-D-aspartate: possible dopaminergic involvement? Neurotoxicology. 1996;17:509–22. 60. Cory-Slechta DA. Relationships between lead induced learning impairments and changes in dopaminergic, cholinergic, and glutaminergic neurotransmitter system functions. Annu Rev Pharmacol Toxicol. 1995;3:391–415.
61. Simons TJB. Lead-calcium interactions in cellular lead toxicity. Neurotoxicology. 1993;14:77–86. 62. Leret ML, Garcia-Uceda F, Antonio MT. Effects of maternal lead administration on monoaminergic, GABAergic, and glutamatergic systems. Brain Res Bull. 2002;58:469–73. 63. Ceccatelli S, Grandison L, Scott REM, Pfaff DW, Kow L-M. Estradiol regulation of nitric oxide synthase mRNAs in rat hypothalamus. Neuroendocrinology. 1996;64:357–63. 64. Franck J, Nylander I, Rosén A. Met-enkephalin inhibits 5-hydroxytryptamine release from the rat ventral spinal cord via δ opioid receptors. Neuropharmacology. 1996;35:743–8. 65. Hauser GJ, Danchak MR, Colvin MP, Hopkins RA, Wocial B, Myers AK, et al. Circulating neuropeptide Y in humans: relation to changes in catecholamine levels and changes in hemodynamics. Neuropeptides. 1996;30:159–65. 66. Silva AP, Xapelli S, Grouzmann E, Cavadas C. The putative neuroprotective role of neuropeptide Y in the central nervous system. Curr Drug Targets CNS Neurol Disord. 2005;4:331–47. 67. Yang RC, Shih HC, Hsu HK, Chang HC, Hsu C. Estradiol enhances the neurotoxicity of glutamate in GT1-7 cells through an estrogen receptor-dependent mechanism. Neurotoxicology. 2003;24:65–73. 68. Wang C, Sadovova N, Fu X, Schmued L, Scallet A, Hanig J, et al. The role of the N-methyl-D-aspartate receptor in ketamine-induced apoptosis in rat forebrain culture. Neuroscience. 2005;132:967–77. 69. Ribeiro RCJ, Apriletti JW, West BL, Wagner RL, Fletterick RJ, Schaufele F, et al. The molecular biology of thyroid hormone action. Ann N Y Acad Sci. 1995;758:366–89. 70. Fritsche E, Cline JE, Nguyen NJ, Scanlan TS, Abel J. Polychlorinated biphenyls disturb differentiation of normal human neural progenitor cells: clue for involvement of thyroid hormone receptors. Environ Health Perspect. 2005;113:871–6. 71. Leighton JK. Application of emerging technologies in toxicology and safety assessment: regulatory perspectives. Int J Toxicol. 2005;24:153–5. 72. Griffin DR. 
The Question of Animal Awareness: Evolutionary Continuity of Mental Experience. New York: Rockefeller University Press; 1976. 73. Lorenz K. On Aggression. New York: Harcourt, Brace & World; 1966. 74. Laties VG. How operant conditioning can contribute to behavioral toxicology. Environ Health Perspect. 1978;26:29–35. 75. Burger J, Gochfeld M. Early postnatal lead exposure: behavioral effects in common tern chicks (Sterna hirundo). J Toxicol Environ Health. 1985;16:869–6. 76. Reiter L. Use of activity measures in behavioral toxicology. Environ Health Perspect. 1978;26:9–20. 77. Brown DR. Neonatal lead exposure in the rat: decreased learning as a function of age and blood lead concentration. Toxicol Applied Pharmacol. 1975;32:628–37. 78. Ogilvie DM. Sublethal effects of lead acetate on the Y-maze performance of albino mice (Mus musculus L.). Can J Zoology. 1977;55:771–5. 79. Kopf SR, Baratti CM. Memory modulation by post-training glucose or insulin remains evident at long retention intervals. Neurobiol Learn Mem. 1996;65:189–91. 80. Zhao WQ, Bennett P, Rickard N, Sedman GL, Gibbs ME, Ng KT. The involvement of Ca2+/calmodulin-dependent protein kinase in memory formation in day-old chicks. Neurobiol Learn Mem. 1996;66:24–35. 81. Bock J, Wolf A, Braun K. Influence of the N-methyl-D-aspartate receptor antagonist DL-2-amino-5-phosphonovaleric acid on auditory filial imprinting in the domestic chick. Neurobiol Learn Mem. 1996;65:177–88. 82. Lincoln J, Coopersmith R, Harris EW, Cotman CW, Leon M. NMDA receptor activation and early olfactory learning. Brain Res. 1988;467:309–12.
83. Burger J, Gochfeld M. Lead and behavioral development: parental compensation for behaviorally impaired chicks. Pharmacol Biochem Behav. 1996;55:339–49. 84. Cory-Slechta DA, Weiss B, Cox C. Delayed behavioral toxicity of lead with increasing exposure concentration. Toxicol Appl Pharmacol. 1983;71:342–52. 85. Rice DC, Gilbert SG. Early chronic low-level methyl mercury poisoning in monkeys impairs spatial vision. Science. 1982;206:759–71. 86. Rice DC. Behavioral deficit (delayed matching to sample) in monkeys exposed from birth to low levels of lead. Toxicol Appl Pharmacol. 1984;75:337–45. 87. Dietz DD, McMillan DE, Mushak P. Effects of chronic lead administration on acquisition and performance of serial position sequences by pigeons. Toxicol Appl Pharmacol. 1979;47:377–84. 88. Laties VG, Evans HL. Methyl mercury–induced changes in an operant discrimination in the pigeon. J Pharmacol Exp Ther. 1980;214:620–8. 89. Uphouse L, Andrade M, Caldarola-Pastuszka M, Jackson A. 5-HT1A receptor antagonists and lordosis behavior. Neuropharmacology. 1996;35:489–95. 90. Mhyre TR, Chesler EJ, Thiruchelvam M, Lungu C, Cory-Slechta DA, Fry JD, Richfield EK. Heritability, correlations and in silico mapping of locomotor behavior and neurochemistry in inbred strains of mice. Genes Brain Behav. 2005;4:209–28. 91. Reeves R, Thiruchelvam M, Baggs RB, Cory-Slechta DA. Interactions of paraquat and triadimefon: behavioral and neurochemical effects. Neurotoxicology. 2003;24:839–50. 92. Burger J, Gochfeld M. Behavioral impairments of lead-injected young herring gulls in nature. Fundam Appl Toxicol. 1994;23:553–61. 93. Barthalamus GT, Leander JD, McMillan DE, Mushak P, Krigman MR. Chronic effects of lead on schedule-controlled pigeon behavior. Toxicol Appl Pharmacol. 1977;41:459–71. 94. Crofton KM, Howard JL, Moser VC, Gill MW, Reiter LW, Tilson HA, et al. Interlaboratory comparison of motor activity experiments: implications for neurotoxicological assessments. Neurotoxicol Teratol. 
1991;13:599–609. 95. Silbergeld E, Goldberg A. A lead-induced behavioral disorder. Life Sci. 1973;13:1275–83. 96. Fox GA, Donald T. Organochlorine pollutants, nest-defense behavior and reproductive success in merlins. Condor. 1980;82:81–4. 97. Bushnell PJ, Bowman RE, Allen JR, Marlar RJ. Scotopic vision deficits in young monkeys exposed to lead. Science. 1977;196:333–35. 98. McArthur MLB, Fox GA, Peakall DB, Philogene BJR. Ecological significance of behavioral and hormonal abnormalities in breeding ring doves fed an organochlorine chemical mixture. Arch Environ Contam Toxicol. 1983;12:343–53. 99. Witt PN. Drugs alter web-building of spiders: a review and evaluation. Behav Sci. 1971;16:98–113. 100. Laties V, Cory-Slechta DA. Some problems in interpreting the behavioral effects of lead and methyl mercury. Neurobehav Toxicol. 1979;1:129–35. 101. Brady K, Herrera Y, Zenick H. Influence of parental lead exposure on subsequent learning ability of offspring. Pharmacol Biochem Behav. 1975;3:561–5. 102. Dahlgren RB, Linder RL. Effects of dieldrin in penned pheasants through the third generation. J Wildlife Manag. 1974;39:320–30. 103. Jacobson JL, Jacobson SW. Methodological issues in research on developmental exposure to neurotoxic agents. Neurotoxicol Teratol. 2005;27:395–406. 104. Wormley DD, Ramesh A, Hood DB. Environmental contaminantmixture effects on CNS development, plasticity, and behavior. Toxicol Appl Pharmacol. 2004;197:49–65. 105. Hudnell HK, Boyes WK, Otto DA, House DE, Creason JP, Geller AM, et al. Battery of neurobehavioral tests recommended to
ATSDR: solvent-induced deficits in microelectronic workers. Toxicol Ind Health. 1996;12:235–43. 106. Mergler D, Bowler R, Cone J. Colour vision loss among disabled workers with neuropsychological impairment. Neurobehav Toxicol. 1990;12:669–72. 107. Mergler D, Blain L. Assessing color vision loss among solvent-exposed workers. Amer J Ind Med. 1987;12:195–203. 108. Mergler D, Huel G, Bowler R, Frenette B, Cone J. Visual dysfunction among former microelectronics assembly workers. Arch Environ Health. 1991;46:326–34. 109. Benignus V, Geller AM, Boyes WK, Bushnell PJ. Human neurobehavioral effects of long-term exposure to styrene: a meta-analysis. Environ Health Perspect. 2005;113:532–8. 110. Greenstein V, Sarter B, Hood D, Noble K, Carr R. Hue discrimination and S cone pathway sensitivity in early diabetic retinopathy. Invest Ophthalmol Vis Sci. 1990;1008–14. 111. Pacheco-Cutillas M, Sahraie A, Edgar D. Acquired colour vision defects in glaucoma: their detection and clinical significance. Br J Ophthalmol. 1999;83:1396–402. 112. Campagna D, Gobba F, Mergler D, Moreau T, Galassi C, Cavalleri A, et al. Color vision loss among styrene-exposed workers: neurotoxicological threshold assessment. Neurotoxicology. 1996;17:367–74. 113. Cavalleri A, Gobba F, Nicali E, Fiocchi V. Dose-related color vision impairment in toluene-exposed workers. Arch Environ Health. 2000;55:399–404. 114. Semple S, Dick F, Osborne A, Cherrie JW, Soutar A, Seaton A, et al. Impairment of colour vision in workers exposed to organic solvents. Occup Environ Med. 2000;57:582–7. 115. Williamson AM. The development of a neurobehavioral test battery for use in hazard evaluations in occupational settings. Neurotoxicol Teratol. 1990;12:509–14. 116. Boeckelmann I, Pfister EA. Influence of occupational exposure to organic solvent mixtures on contrast sensitivity in printers. J Occup Environ Med. 2003;45:25–33. 117. Frenette B, Mergler D, Bowler R. Contrast-sensitivity loss in a group of former microelectronics workers with normal visual acuity. Optom Vis Sci. 1991;68:556–60. 118. Morata TC, Dunn DE, Sieber WK. Occupational exposure to noise and ototoxic organic solvents. Arch Environ Health. 1994;49:359–65. 119. Morioka I, Kuroda M, Miyashita K, Takeda S. Evaluation of organic solvent ototoxicity by the upper limit of hearing. Arch Environ Health. 1999;54:341–6. 120. Burger J, Gochfeld M. A hypothesis on the role of pheromones on age of menarche. Med Hypotheses. 1985;17:39–46. 121. Doty RL. The Smell Identification Test. Haddon Heights, NJ: Sensonics, Inc; 1995. 122. Schwartz BS, Ford DP, Bolla KI, Agnew J, Rothman N, Bleecker ML. Solvent-associated decrements in olfactory function in paint manufacturing workers. Amer J Ind Med. 1990;18:697–706. 123. Rose C, Heywood P, Costanzo R. Olfactory impairment after chronic occupational cadmium exposure. J Occup Med. 1992;34:600–5. 124. Shusterman D. Critical review: the health significance of environmental odor pollution. Arch Environ Health. 1992;47:76–87. 125. Van den Bergh O, Stegen K, Van Diest I, Raes C, Stulens P, Eelen P, et al. Acquisition and extinction of somatic symptoms in response to odours: a Pavlovian paradigm relevant to multiple chemical sensitivity. Occup Environ Med. 1999;56:295–301. 126. Caccappolo E, Kipen H, Kelly-McNeil K, Knasko S, Hamer RM, Natelson B, Fiedler N. Odor perception: multiple chemical sensitivities, chronic fatigue and asthma. J Occup Environ Med. 2000;42:629–38. 127. Lezak MD. Neuropsychological Assessment. New York: Oxford University Press; 1995. 128. Gerr FE, Hershman D, Letz R. Vibrotactile threshold measurement for detecting neurotoxicity: reliability and determination of age- and height-standardized normative values. Arch Environ Health. 1990;45:148–54. 129. Bove F, Litwak MS, Arezzo JC, Baker EL. Quantitative sensory testing in occupational medicine. Semin Occup Med. 1986;1:185–8. 130. Mergler D. Behavioral neurophysiology: quantitative measures of sensory toxicity. In: Neurotoxicology: Approaches and Methods. New York: Academic Press, Inc.; 1995:727–36. 131. Demers RY, Markell BL, Wabeke R. Peripheral vibratory sense deficits in solvent-exposed painters. J Occup Med. 1991;33:1051–4. 132. McConnell R, Keifer M, Rosenstock L. Elevated quantitative vibrotactile threshold among workers previously poisoned with methamidophos and other organophosphate pesticides. Am J Ind Med. 1994;25:325–34. 133. Lundstrom R, Nilsson T, Burstrom L, Hagberg M. Exposure-response relationship between hand-arm vibration and vibrotactile perception sensitivity. Am J Ind Med. 1999;35:456–64. 134. ATSDR. Neurobehavioral Test Batteries for Use in Environmental Health Field Studies. Atlanta, GA: Department of Health and Human Services, Public Health Service; 1992. 135. Kilburn KH, Warshaw RH, Hanscom B. Are hearing loss and balance dysfunction linked in construction iron workers? Br J Ind Med. 1992;49:138–41. 136. Sauter SL, Henning RH, Chapman JL, Smith TJ, Quackenboss JJ. The use of force platforms for assessment of standing steadiness in neurobehavioral toxicology: a feasibility analysis. NIOSH Contract No. 80-2903; 1980. 137. U.S. Department of Health and Human Services. NIOSH Health Hazard Evaluation Report 90-0149-2522. Cincinnati, OH; 1995:1–31. 138. Bhattacharya A, Morgan R, Shukla R, Ramakrishnan HK, Wang L. Non-invasive estimation of afferent inputs for postural stability under low levels of alcohol. Ann Biomed Eng. 1987;15:533–50. 139. Dick RB, Setzer JV, Taylor BJ, Shukla R. Neurobehavioural effects of short duration exposures to acetone and methyl ethyl ketone. Br J Ind Med. 1989;46:111–21. 140. Dick RB, Steenland K, Krieg EF, Hines CJ. Evaluation of acute sensory-motor effects and test sensitivity using termiticide workers exposed to chlorpyrifos. Neurotoxicol Teratol. 2001;23:381–93. 141. Bhattacharya A, Shukla R, Dietrich K, Bornschein R, Berger O. Effect of early lead exposure on children’s postural balance. Dev Med Child Neurol. 1995;37:861–78. 142. Chia AE, Chua LH, Ng TP, Foo SC, Jeyaratnam J. Postural stability of workers exposed to lead. Occup Environ Med. 1994;51:768–71. 143. Dick RB, Pinkerton LE, Krieg EF, Biagini RE, Deddens JA, Brightwell WS, et al. Evaluation of postural stability in workers exposed to lead at a secondary lead smelter. Neurotoxicology. 1999;20:595–608. 144. Heaton RK, Grant I, Matthews CG. Comprehensive Norms for an Expanded Halstead-Reitan Battery: Demographic Corrections, Research Findings, and Clinical Applications. Odessa, FL: Psychological Assessment Resources, Inc.; 1991. 145. Anger WK. Worksite behavioral research: results, sensitive methods, test batteries and the transition from laboratory data to human health. Neurotoxicology. 1990;11:627–70. 146. Langston JW, Ballard P, Tetrud JW. Chronic parkinsonism in humans due to a product of meperidine-analog synthesis. Science. 1983;219:979–80. 147. Robins LN, Helzer JE. Diagnostic Interview Schedule (DIS) Version III-R. St. Louis, MO: Washington University School of Medicine; 1991. 148. Spitzer RL, Williams JBW, Gibbon M, First MB. Structured Clinical Interview for DSM-III-R, Non-Patient Edition (SCID-NP, Version 1.0). Washington, DC: American Psychiatric Press; 1995. 149. Spitzer RL, Williams JBW, Gibbon M. User’s Guide for the Structured Clinical Interview for DSM-III-R. Washington DC: American Psychiatric Press; 1990.
150. Morrow LA, Ryan CM, Goldstein G, Hodgson MJ. A distinct pattern of personality disturbance following exposure to mixtures of organic solvents. J Occup Med. 1989;31:743–6. 151. Kilburn KH, Seidman BC, Warshaw R. Neurobehavioral and respiratory symptoms of formaldehyde and xylene exposure in histology technicians. Arch Environ Health. 1985;40:229–33. 152. Needleman H, Gunnoe C, Leviton A, Reed R, Peresie H, Maher C, et al. Deficits in psychologic and classroom performance of children with elevated dentine lead levels. N Engl J Med. 1979;300:689–95. 153. Hogstedt C, Andersson K, Hane M. A questionnaire approach to the monitoring of early disturbance in central nervous function. In: Aitio A, Riihimaki V, Vainio H, eds. Biological Monitoring and Surveillance of Workers Exposed to Chemicals. Washington, DC: Hemisphere; 1984. 154. Derogatis L. SCL-90-R Manual II. Towson, Maryland: Clinical Psychometric Research; 1983. 155. Beck AT, Rush AJ, Shaw BF, Emery G. Cognitive Therapy of Depression. New York, NY: Guilford; 1979. 156. Spielberger CD, Gorsuch RL, Lushene R, Vagg PR, Jacobs GA. Manual for the State-Trait Anxiety Inventory (Form Y). Palo Alto, CA: Consulting Psychologists Press; 1983. 157. Hathaway SR, McKinley JC. Minnesota Multiphasic Personality Inventory 2. Minneapolis: University of Minnesota Press; 1989. 158. Zenick H, Reiter LW. Behavioral Toxicology, an Emerging Discipline. Washington, DC: Environmental Protection Agency; 1977. 159. Baker EL, Letz R. Solvent neurobehavioral testing in monitoring hazardous workplace exposures. J Occup Med. 1986;28:126–9. 160. Weiss B. Neurobehavioral properties of chemical sensitivity syndromes. Neurotoxicology. 1998;19:259–68. 161. Johnson BL, Baker EL, ElBatawi M, Gilioli R, Hanninen H, Seppalainen AM. Prevention of Neurotoxic Illness in Working Populations. New York: John Wiley and Sons; 1987. 162. Hartman DE. Neuropsychological Toxicology: Identification and Assessment of Human Neurotoxic Syndromes. 
New York: Pergamon Press; 1988. 163. Tombaugh TN. Test of Memory Malingering (TOMM). North Tonawanda, New York: Multi-Health Systems, Inc.; 1996. 164. Meyers JE, Volbrecht ME. A validation of multiple malingering detection methods in a large clinical sample. Arch Clin Neuropsychology. 2003;18:261–76. 165. Bianchini KJ, Houston RJ, Greve KW, Irvin TR, Black FW, Swift DA, et al. Malingered neurocognitive dysfunction in neurotoxic exposure: an application of the slick criteria. J Occup Environ Med. 2003;45:1087–99. 166. Slick DJ, Sherman E., Iverson GL. Diagnostic criteria for malingered neurocognitive dysfunction: proposed standards for clinical practice and research. Clin Neuropsychol. 1999;13:545–61. 167. Wechsler D. WAIS-R Manual. San Antonio, Texas: Psychological Corporation; 1981. 168. Nelson HE. National Adult Reading Test (NART): Test Manual. Nelson, U.K.: NFER; 1982. 169. Gamberale F. Use of behavioral performance tests in the assessment of solvent toxicity. Scand J Work Environ Health. 1985;1165–74. 170. World Health Organization. Organic Solvents and the Central Nervous System. Copenhagen, Oslo: World Health Organization; 1985. 171. Michelsen H, Lundberg I. Neuropsychological verbal tests may lack “hold” properties in occupational studies of neurotoxic effects. Occup Environ Med. 53:478–83;1996. 172. Dick RB. Neurobehavioral assessment of occupationally relevant solvents and chemicals in humans. In: Chang LW, Dyer RS, eds. Handbook of Neurotoxicology. New York,: Marcel Dekker, Inc.; 1995:217–22. 173. Matthews CG, Klove H. Instruction Manual for the Adult Neuropsychology Test Battery. Madison, WI: University of Wisconsin Medical School; 1964.
174. Jansen AAI, de Grier JJ, Slangen JL. Alcohol effects on signal detection performance. Neuropsychobiology. 1985;14:83–7. 175. Gustafson R. Alcohol, reaction time, and vigilance settings: importance of length of intersignal interval. Percept Mot Skills. 1986;63:424–6. 176. Delis DC, Kramer JH, Kaplan E, Ober BA. California Verbal Learning Test Manual. San Antonio, TX: The Psychological Corporation; 1987. 177. Benton A. Revised Visual Retention Test Manual: Clinical and Experimental Applications. New York, NY: Psychological Corporation; 1974. 178. Echeverria D, Fine L, Langolf G, Schork T, Sampaio C. Acute behavioural comparisons of toluene and ethanol in human subjects. Br J Ind Med. 1991;48:750–61. 179. Rahill AA, Weiss B, Morrow PE, Frampton MW, Cox C, Gibb R, et al. Human performance during exposure to toluene. Aviat Space Environ Med. 1996;67:640–7. 180. Hutchinson LJ, Amler RW, Lybarger JA, Chappel W. Neurobehavioral test batteries for use in environmental health field study. Atlanta:U.S. Department of Health and Human Services; 1992. 181. Hanninen H, Lindstrom K. Behavioral Test Battery for Toxicopsychological Studies. Helsinki: Institute of Occupational Health; 1979. 182. Anger WK. Neurobehavioural tests and systems to assess neurotoxic exposures in the workplace and community. Occup Environ Med. 2003;60:531–8. 183. Anger WK, Letz R, Chrislip DW, et al. Neurobehavioral test methods for environmental health studies of adults. Neurotoxicol Teratol. 1994;16:489–97. 184. Baker E, Letz R, Fidler A. A computer-administered neurobehavioral evaluation system for occupational and environmental epidemiology. J Occup Med. 1985;27:206–12. 185. Anger WK, Rohlman DS, Sizemore OJ, Kovera CA, Gibertini M, Ger J. Human behavioral assessment in neurotoxicology: producing appropriate test performance with written and shaping instructions. Neurotoxicol Teratol. 1996;18:371–9. 186. Fray PJ, Robbins TW. CANTAB battery: proposed utility in neurotoxicology. Neurotoxicol Teratol. 
1996;18:499–504. 187. Burger J, Gochfeld M. Lead and behavioral development: effects of varying dosage and schedule on survival and performance of young common terns (Sterna hirundo). J Toxicol Environ Health. 1988;24:173–82. 188. Dey PM, Gochfeld M, Reuhl KR. Developmental methylmercury administration alters cerebellar PSA-NCAM expression and Golgi sialyltransferase activity. Brain Res. 1999;845:139–51. 189. Dey PM, Burger J, Gochfeld M, Reuhl KR. Developmental lead exposure disturbs expression of synaptic neural cell adhesion molecules in herring gull brains. Toxicology. 2000;146:137–47. 190. Needleman HL, Gatsonis CA. Low-level lead exposure and the IQ of children: a meta-analysis of modern studies. JAMA. 1990;263:673–8. 191. Canfield RL, Henderson CR, Jr, Cory-Slechta DA, Cox C, Jusko TA, Lanphear BP. Intellectual impairment in children with blood lead concentrations below 10 microg per deciliter. N Engl J Med. 2003;348:1517–26. 192. Chen A, Dietrich KN, Ware JH, Radcliffe J, Rogan WJ. IQ and blood lead from 2 to 7 years of age: are the effects in older children the residual of high blood lead concentrations in 2-year-olds? Environ Health Perspect. 2005;113:597–601. 193. Smith WE, Smith AM. Minamata Disease. New York, Holt: Rinehart and Winston; 1975. 194. Stern AH. A revised probabilistic estimate of the maternal methyl mercury intake dose corresponding to a measured cord blood mercury concentration. Environ Health Perspect. 2005;113:155–63. 195. Grandjean P, White RF, Weihe P, Jorgensen PJ. Neurotoxic risk caused by stable and variable exposure to methylmercury from seafood. Ambul Pediatr. 2003;3:18–23.
20 196. Myers GJ, Davidson PW, Cox C, Shamlaye CF, Palumbo D, Cernichiari E, et al. Prenatal methylmercury exposure from ocean fish consumption in the Seychelles child development study. Lancet. 2003;361:1686–92. 197. National Research Council. Toxicological Effects of Methylmercury. Washington DC: National Academy Press; 2000. 198. Gimenez-Llort L, Ahlbom E, Dare E, Vahter M, Ogren S, Ceccatelli S. Prenatal exposure to methylmercury changes dopamine-modulated motor activity during early ontogeny: age and gender-dependent effects. Environ Toxicol Pharmacol. 2001;9:61–70. 199. Tilson HA, Davis GJ, MaLachlan JA, Lucier GW. The effects of polychlorinated biphenyls given prenatally on the neurobehavioral development of mice. Environ Res. 1979;18:466–74. 200. Schantz SL, Levin ED, Bowman RE, Heironimus MP, Laughlin NK. Effects of perinatal PCB exposure on discrimination-reversal learning in monkeys. Neurotoxicol Teratol. 1989;1:243–50. 201. Maier WE, Kodavanti PRS, Harry GJ, Tilson HA. Sensitivity of adenosine triphosphatases in different brain regions to polychlorinated biphenyl congeners. J Appl Toxicol. 1994;14:225–9. 202. Kodavanti PRS, Ward TR, McKinney JD, Tilson HA. Increased [3H]phorbol ester binding in rat cerebellar granule cells by polychlorinated biphenyl mixtures and congeners: structure-activity relationships. Toxicol Appl Pharmacol. 1995;130:140–8. 203. Krasnegor NA, Otto DA, Bernstein JH, Burke R, Chappell W, Eckerman DA, et al. Neurobehavioral test strategies for environmental exposures in pediatric populations. Neurotoxicol Teratol. 1994;16:499–509. 204. Harada M. Intrauterine poisoning: clinical and epidemiologial studies and significance of the problem. Bull Inst Const Med Kumanoto Univ. 1976;25(suppl):1–69. 205. Yu M, Hsu C, Gladen BC, Rogan WJ. In utero PCB/PCDF exposure: relation of developmental delay to dysmorphology and dose. Neurotoxicol Teratol. 1991;13:195–202. 206. Chen YJ, Gue Y, Hsu C, Rogan WJ. 
Cognitive development of YuCheng (“oil disease”) children prenatally exposed to heat-degraded PCBs. JAMA. 1992;268:3213–8. 207. Jacobson SW, Fein GG, Jacobson JL, Schwartz PM, Dowler JK. The effect of intrauterine PCB exposure on visual recognition memory. Child Dev. 1985;56:853–60. 208. Jacobson JL, Jacobson SW, Humphrey JB. Effects of in utero exposure to polychlorinated biphenyls and related contaminants on cognitive functioning in young children. J Pediatr. 1990;116:38–45. 209. Rogan WJ, Gladen BC. PCBs, DDE, and child development at 18 and 24 months. Ann Epidemiol. 1991;1:407–13. 210. Gladen BC, Rogan WJ. Effects of perinatal polychlorinated biphenyls and dichlorodiphenyl dichloroethene on later development. J Pediatr. 1991;119:58–63. 211. Lonky J, Relhman J, Darvill T, Mather J, Sr, Daly H. Neonatal behavioral assessment scale performance in humans influenced by maternal consumption of environmentally contaminated Lake Ontario fish. J Great Lakes Res. 1996;22:198–212. 212. Daly HB. The evaluation of behavioral changes produced by consumption of environmentally contaminated fish. In: Issacson RL, Jensen KR, eds. The Vulnerable Brain and Environmental Risks. Malnutrition and Hazard Assessment. Vol 1. New York: Plenum Press; 1992,151–71. 213. Doty RL, Shaman PL, Applebaum SL, Giberson R, Siksorski L, Rosenberg L. Smell identification ability: changes with age. Science. 1984;226:1441–3. 214. Mabry TR, McCarty R, Gold PE, Foster TC. Age and stress history effects on spatial performance in a swim task in Fischer-344 rats. Neurobiol Learn Mem. 1996;66:1–10. 215. Fiedler N, Giardino N, Natelson B, Ottenweller JE, Weisel C, Lioy P, et al. Responses to controlled diesel vapor exposure among chemically sensitive Gulf War veterans. Psychosom Med. 2004 Jul–Aug;66(4):588–98.
Toxicology
543
216. Cory-Slechta DA, Virgolini MB, Thiruchelvam M, Weston DD, Bauter MR. Maternal stress modulates the effects of developmental lead exposure. Environ Health Perspect. 2004;112: 717–30. 217. Laurin D, Verreault R, Lindsay J, Dewailly E, Holub BJ. Omega-3 fatty acids and risk of cognitive impairment and dementia. J Alzheimer Dis. 2003;5:315–22. 218. Zheng W, Aschner M, Ghersi-Egrea JF. Brain barrier systems: a new frontier in metal neurotoxicological research. Tox Appl Pharmacol. 2003;192:1–11. 219. Wallace CS, Reitzenstein J, Withers GS. Diminished experiencedependent neuroanatomical plasticity: evidence for an improved biomarker of subtle neurotoxic damage to the developing rat brain. Environ Health Perspect. 2003;111:1294–8.
General References Anastasi A. Psychological Testing. New York: Macmillan; 1976. Anger WK, Storzbach D, Amler RW, Sizemore OJ. Human behavioral neurotoxicology: workplace and community assessments. In: Rom W, ed. Environmental and Occupational Medicine. 3rd ed. New York: Lippincott-Raven; 1998;329–50. Annau Z. Neurobehavioral Toxicology. Baltimore: Johns Hopkins University Press; 1986. Bender L. A Visual Motor Gestalt Test and its Clinical Use. New York: American Orthopsychiatric Association; 1938. Berent S, Albers JW. Neurobehavioral Toxicology: Neuropsychological and Neurological Perspectives (Studies on Neuropsychology, Development, and Cognition). London: Psychology Press; 2005. Bondy SC, Campbell A. Developmental neurotoxicology. J Neurosci Res. 2005;81:605–12. Camhi JM. Neuroethology. Sunderland, MA: Sinauer Associates Press; 1984. Chang LW, Dyer RS. Handbook of Neurotoxicology. New York: Marcel Dekker; 1995. Chang LW, Slikker W, Jr. eds. Neurotoxicology: Approaches and Methods. San Diego: Academic Press; 1995. Davis DD, Templer DI. Neurobehavioral functioning in children exposed to narcotics in utero. Addict Behav. 1988;13:275–83. Dun NJ, Perlman RL, eds. Neurobiology of Acetylcholine. New York: Plenum Press; 1987. Grandjean P. Symposium synthesis: application of neurobehavioral methods in environmental and occupational health. Environ Res. 1993;60:57–61. Hartman DE. Neuropsychological Toxicology. New York: Pergamon Press; 1988. Hook GER, Lucier GW. Human developmental neurotoxicity. 1994;102(suppl 2):115–161. Huber F, Markl H. Neuroethology and Behavioral Physiology. New York: Springer Verlag; 1983. Hunting KL, Matanoski GM, Larson M, Wolford R. Solvent exposure and the risk of slips, trips and falls among painters. Am J Industr Med. 1991;20:353–370. Hutchinson LJ, Amler RW, Lybarger JA, Chappell W. Neurobehavioral test batteries for use in environmental health field study. Atlanta: U.S. 
Department of Health and Human Services, Agency for Toxic Substances and Disease Registry; 1992. Johnson BL, ed. Prevention of Neurotoxic Illness in Working Populations. New York: John Wiley & Sons; 1987. Kandel ER, Schwartz JH, Jessell TM. Principles of Neural Sciences. 4th ed. Norwalk CT: Appleton; 2000. Kilburn KH, Warshaw RH. Neurobehavioral testing of subjects exposed residentially to groundwater contaminated from an aluminum diecasting plant and local referents. J Toxicol Environ Health. 1993;39: 483–96.
544
Environmental Health
Lindstrom K, Martelin T. Personality and long term exposure to organic solvents. Neurobehav Toxicol. 1980;2:89–100. LoPachin RM, Jones RC, Patterson TA, Slikker W, Jr, Barber DS. Application of proteomics to the study of molecular mechanisms in neurotoxicology. Neurotox. 2003;24:751–75. Lotti M, Moretto A. Organophosphate-induced delayed polyneuropathy. Toxicol Rev. 2005;24:37–49. Lucchini R, Albini E, Benedetti L, Alessio L. Neurobehavioral science in hazard identification and risk assessment of neurotoxic agents—what are the requirements for further development? Int Arch Occup Environ Health. 2005;78:427–37. Marchetti C. Molecular targets of lead in brain neurotoxicity. Neurotox Res. 2003;5:221–36. Marlow M, Stellern J, Errera J, Moon C. Main and interaction effects of metal pollutants on visual-motor performance. Arch Environ Health. 1985;40:221–4. Mutti A, Mazzucchi A, Rustichelli P, Frigeri G, Arfini G, Franchini I. Exposure-effect and exposure-response relationships between occupational exposure to styrene and neuropsychological functions. Am J Ind Med. 1984;5:275–81. Prozialeck WC, Grunwald GB, Dey PM, Reuhl KR, Parrish AR. Cadherins and NCAM as potential targets in metal toxicity. Toxicol Appl Pharmacol. 2002;182:255–65. Reiter L. An introduction to neurobehavioral toxicology. Environ Health Perspect. 1978;26:5–7. Schmid C, Rotenberg JS. Neurodevelopmental toxicology. Neurol Clin. 2005;23:321–36. Seppalainen AN, Lindstrom K, Martelin T. Neurophysiological and psychological picture of solvent poisoning. Am J Ind Med. 1980;1:31–42. Slikker W. Jr, Chang LW. Handbook of Developmental Neurotoxicology. New York: Academic Press; 1998.
Tilson HA, Sparber SB. Neurotoxicants and Neurobiological Function. New York: John Wiley & Sons; 1987. Tilson HA, Cabe PA, Mitchell CL. Behavioral and neurological toxicity of polybrominated biphenyls in rats and mice. Environ Health Perspect. 1978;23:257–63. Tinbergen N. The Study of Instinct. New York: Oxford University Press; 1974. Valciukas JA. Foundations of Environmental and Occupational Neurotoxicology. New York: Van Nostrand Reinhold; 1991. Valciukas JA, Lilis R. Psychometric techniques in environmental research. Environ Res. 1980;21:275–97. Weiss B. Behavioral toxicology and environmental health science: opportunity and challenge for psychology. Am Psychol. 1983;38:1174. Weiss B. Experimental implications of behavior as a criterion of toxicity. In: Weiss B, Laties VG, eds. Behavioral Pharmacology: The Current Status. New York: Alan R. Liss; 1985:467–72. Weiss B, Laties VG. Behavioral Pharmacology: The Current Status. New York: Alan R. Liss; 1985. Wallace DR. Overview of molecular, cellular, and genetic neurotoxicology. Neurol Clin. 2005;23:307–20. Winlow W, Vinogradova OS, Sakharov DA. Signal Molecules and Behavior. New York: Manchester University Press; 1991. Xintaras C, Johnson BL, de Groot I. Behavioral Toxicology: Early Detection of Occupational Hazards. DHEW (NIOSH) 74–126. NIOSH, Washington, DC: U.S. Department of Health, Education and Welfare; 1974. Yerkes RM. The mental life of monkeys and apes: a study of ideational behavior. Behav Monogr. 1916;3:1–145. Yesavage JA, Dolhert N, Taylor JL. Flight simulator performance of younger and older aircraft pilots: effects of age and alcohol. J Am Geriatr Soc. 1994;42:577–82.
Environmental and Ecological Risk Assessment
21
Michael Gochfeld • Joanna Burger
Risk assessment is a formalized process for characterizing and estimating the magnitude of harm resulting from some condition—usually exposure to one or more hazardous substances in the environment. This chapter addresses what risk assessment is, what it is used for, and how it is done. “Environmental risk assessment” usually refers to human health risks, while “ecological risk assessment” refers to damage to natural or artificial ecosystems, wildlife species, and endangered species. The two share common properties but also show important differences.1,2 Environmental risk assessment interfaces with environmental toxicology and exposure assessment, while ecological risk interfaces with ecotoxicology. Risk assessments are used in a wide variety of contexts, for example, to establish no-effect concentrations,3 which can inform cleanup levels4 or sediment quality standards,5 or to compare alternative remediation strategies. Ecological risk is also applied to the probability of extinction of species or populations (population viability analysis) due to chance6 or pollution,7 and to the likelihood that exotic species will become invasive.8 Increasingly, governments and the public have realized that it is critical to protect the health and well-being of ecological systems, both for their own value and for the ecological services that they provide for humans, including safe drinking water, clean air, fertile land for agriculture, unpolluted waters for fisheries, erosion control and stabilization of coastal environments, and places for recreation and other aesthetic pursuits so important to people.9 Ecological risk has been linked with the growing interest in restoring damaged habitats.10 Moreover, changes in ecosystem health can have direct effects on human health by changing human exposure to disease organisms.11 Risk assessment for genetically modified crops bridges human health and ecological concerns.12 Harmonization of ecological and human health risk assessment has 
been done on a few occasions (see below).13,14 Risk assessments are intended to provide objective information to inform public policy decisions.15 Their utility for individual risk is variable. Risk assessments are used in other walks of life, from bridge construction to finance to medical errors,16 and more recently to terrorism.17 Risk assessment is primarily a scientific endeavor, while risk management refers to those actions taken by society to ameliorate risks. Risk management takes into account human values and fiscal concerns and determines what risk assessments need to be done and how they are to be used, but the methods and outcomes of risk assessment should not be biased by these concerns.18 Risk management may involve policy decisions that set particular standards for contaminants in air, water, soil, or food, or it may reflect particular decisions on whether and how much to remediate a hazardous waste site. There is ongoing controversy as to whether risk assessment can remain value-free or whether that is an illusion. In 1983, the modern environmental risk assessment approach was codified by the National
Research Council’s “red book” on Risk Assessment in the Federal Government,18 which laid out a four-step approach: hazard identification, dose-response assessment, exposure assessment, and risk characterization. It emphasized that risk assessment was value free and suggested the existence of a firewall between risk assessment and risk management.18 Over the ensuing decade it became apparent that divorcing risk assessment from risk management was seldom possible, and the Presidential/Congressional Commission on Risk Assessment and Risk Management (PCCRARM) completely reversed the separation approach by declaring that risk assessment was an integral component of risk management and that values influenced what risks were assessed, how they were assessed, and how the results were used (Fig. 21-1).19,20 Even more radically, PCCRARM placed stakeholders in the center of the entire process, suggesting that they become involved in setting the context for risk assessment, participating in decisions about what questions should be answered and the methodologies employed, and contributing to the interpretation and subsequent risk management decisions. Stakeholders included all persons and agencies with an interest, broadly interpreted, in the outcome. PCCRARM also defined risk management broadly as “the process of identifying, evaluating, selecting, and implementing actions to reduce risk to human health and to ecosystems. The goal of risk management is scientifically sound, cost-effective, integrated actions that reduce or prevent risks while taking into account social, cultural, ethical, political, and legal considerations.” No more comprehensive definition has been proposed. Protecting human health does not necessarily protect ecosystems and their component communities and organisms from harm.21 Humans may be less or more susceptible to certain chemicals than either wild or experimental animals. 
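The four-step paradigm can be sketched as the screening-level arithmetic regulators use: exposure assessment yields a chronic daily intake, which is combined with a dose-response value (a reference dose for noncancer endpoints, a slope factor for carcinogens) to characterize risk. The sketch below is illustrative only; the function names and every numeric value are hypothetical, not regulatory values.

```python
def chronic_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """Exposure assessment: average dose in mg per kg body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def hazard_quotient(cdi, reference_dose):
    """Noncancer risk characterization: a quotient above 1 flags concern."""
    return cdi / reference_dose

def excess_cancer_risk(cdi, slope_factor):
    """Cancer risk characterization: lifetime excess probability."""
    return cdi * slope_factor

# Hypothetical soil-ingestion scenario (illustrative numbers only):
cdi = chronic_daily_intake(conc_mg_per_kg=50.0,     # contaminant in soil
                           intake_kg_per_day=1e-4,  # ~100 mg soil ingested/day
                           body_weight_kg=70.0)
print("HQ =", round(hazard_quotient(cdi, reference_dose=3e-4), 2))
print("excess lifetime cancer risk =", excess_cancer_risk(cdi, slope_factor=1.5))
```

The same intake feeds both characterizations; in practice each input would itself carry an uncertainty distribution rather than a point value.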
Also, the process of remediating contaminated soil may seriously disrupt fragile ecosystems, while conversely, the establishment of new wetland ecosystems is being used for wastewater treatment to prevent environmental contamination.22 Risk assessment involves target populations, either real or hypothetical, and the question of how much increased risk will occur if a group of people or a natural ecosystem is exposed to a certain amount of a hazardous substance or condition over a certain period of time. Major descriptions of the risk assessment process18,19 and its role in policy1 have been published (see General References), and various refinements have been added to take into account the great uncertainties attached to risk estimation. Although risk assessment can provide probabilities with great apparent precision, they are accompanied by such broad uncertainties that their utility is often compromised. Since risk assessment is imperfect, or often gives results that are unpopular, there have been many attempts to refine it. Most of these have focused on reducing the worst-case assumptions, replacing conservative default values with “realistic” values, which in some cases may make things seem less risky. Only a few of these innovations can be dealt with here. The U.S. National Academy of Sciences’ National Research Council has several committees investigating various aspects of risk assessment to enhance its scientific quality and its effectiveness in informing public policy on the environment and health. Several important volumes have been published, including the Committee on Risk Assessment Methodology’s volume on Issues in Risk Assessment23 and the Committee on Risk Characterization’s Understanding Risk,1 the latter focusing specifically on the transfer of risk information to policy.
Copyright © 2008 by The McGraw-Hill Companies, Inc.
APPLICATIONS OF RISK ASSESSMENT
There is a rapidly growing literature on specific applications of risk assessment.24,25 The social implications of the risk assessment process have been discussed by many authors, including Lowrance,26 Imperato and Mitchell,27 and Jasanoff.28 Particularly since 2001, in the aftermath of 9/11, risk assessment has been applied to terrorist events and the consequences of biological, chemical, or radiological attacks. Risk assessment was developed primarily by regulatory agencies to provide a rationale for setting enforceable standards for toxic chemicals in air, water, food, soils, consumer products, and wastes, including cleanup of hazardous waste sites, and determining “how clean is clean.” It is now used to set priorities, to compare risks, to identify research needs, and to generate information for cost-benefit analysis. Determining priorities for action is the first step in policy making. Comparative risk assessment is part of prioritization, but it has also become mandated by the 1990 amendments to the Clean Air Act, which direct the EPA to allow a regulated industry to “trade” among various risks as an alternative to reducing the risks of one chemical.
Figure 21-1. Risk management framework from the Presidential/Congressional Commission on Risk Assessment and Risk Management report. The framework encourages managing risks in a broader context, involving stakeholders who are concerned or affected by the risk management process, and using an iterative approach rather than a preordained outcome. (The framework’s stages are Problem/context, Risks, Options, Decisions, Actions, and Evaluation, with “Engage stakeholders” at the center.)
Veterinary Applications
The policy implications of veterinary risk assessment can be as far-reaching as for human health, and this extends to wildlife zoonoses as well.29 One part of the growing world trade movement is the Agreement on the Application of Sanitary and Phytosanitary Measures to reduce importation of plant and animal pests and diseases. In this context, risk assessment is required mainly for risks to human consumers of the plants and animals,30 but it has also been applied to animal health. It is not clear how risk assessment was used in dealing with avian flu in Asia, by whom, and how the decision to kill millions of domestic fowl was made. However, although foot and mouth disease resulted in Britain slaughtering thousands of cattle, formal risk assessment was applied to noninfected potential candidates, allowing a more restrictive cull.31 Risk assessment has been applied to several other major livestock diseases, including bovine spongiform encephalopathy,32 parasite movements,33 West Nile virus,34 and mosquito-borne bluetongue disease of livestock.35 It has been used to assess a gill disease in salmon farming36 and the likelihood of rabies entering Britain with pets.37 It is also applied to the environmental contamination by veterinary chemicals excreted by livestock and pets,38 which can be studied using the ecological risk paradigm of mesocosms.39
Terrorism and Preparedness
Since 2001, the United States has focused heavily on all aspects of terrorism and preparedness for natural, nonnatural (industrial, transportation), and deliberate (terrorist) disasters,40 and risk assessment has been used in several ways, including estimating the likelihood and magnitude of terrorist events, the design and vulnerability of infrastructure,41 food supply,42 drinking water,43 consequences of infectious disease outbreaks, and industrial chemical hazards.44 Since absolute protection (detection, interdiction) is presumably not possible, and even subabsolute protection is prohibitively expensive, risk assessments offer the opportunity of identifying priorities for investing limited resources in prevention or consequence reduction, for example, in protecting critical infrastructure.41 National security and preparedness policy, including detainment versus human rights, vaccination versus individual liberty, and redundancy versus cost-containment, can benefit from risk management/risk assessment appraisal.45 More specific applications include the risk of sheltering in place versus fleeing,46 and information and data system integrity.47
Military Applications The French mathematician, Poisson, was a progenitor of risk assessment, who developed the distribution that bears his name, for estimating rare events. This was first applied to the probability of Prussian army soldier injuries from being kicked by horses. Broader considerations of risk in military strategy and tactics are not considered here. Risk assessment has been applied to excess injury rates among occupying forces48 and former international peacekeepers,49 to risks from forced anthrax vaccination,50 to psychiatric hospitalization,51 as well as how to monitor troops for exposure to chemicals.52 Despite a long history of mysterious syndromes associated with troops in battle, the failure to predict the physical and/or psychological risks and consequences of exposures to Agent Orange, oil well fires, prophylactic drugs, and other exposures, are reflected in the problems reported by veterans of the Vietnam and Gulf Wars.53
Future Land Use of Contaminated Sites Economic and demographic factors constantly change land-use priorities dictating the reuse and redevelopment of former industrial and agricultural sites, many of which are highly contaminated. Risk assessment needs to be closely linked with future land-use decisions. In the aftermath of the Cold War, The Department of Energy’s widespread
legacy of radioactive and chemical waste on its nuclear weapons complex represents the largest reuse challenge. Cleanup decisions and goals are linked to future land use, requiring assessment of (1) ecological versus human health, (2) worker versus public health, and (3) priorities among competing contaminated areas, in order to prioritize remediation and use limited remediation dollars cost-effectively.54 Stakeholders who would potentially use or be affected by remediation and land use are concerned about environmental health risks, even more than property value risks.55 Groundwater contamination may limit redevelopment more than soil contamination.56 An iterative balancing of future land-use options with their associated risk scenarios and implications is still in its infancy.54
Energy and Transportation The increasing recognition of a crisis in energy, reliance on fossil fuel impacted by international markets, offers a fertile area for risk assessment and risk-risk balancing. Nuclear energy suffered setbacks in the United States, and no new nuclear plants have been built in a generation, yet limited, costly, and polluting fossil fuel, may increase the attractiveness of nuclear energy in the future—particularly after past disasters are forgotten. Despite intense efforts at risk communication, the nuclear industry seems no closer to gaining public acceptance in the United States than a generation ago, sharing many of the features of genetically modified organisms.57 Balancing the hazards of handling, transporting, and storing nuclear waste against the various social, economic, and health consequences of other fuels is important. Both nuclear accidents and nuclear wastes are problematic, and the future of Yucca Mountain as a permanent waste repository, has engendered repeated risk investigations involving radiation, engineering, and geology.58 A Monte Carlo analysis allowed estimate of peak-of-the-mean exposure for a 10,000-year compliance period,59 but there is also the requirement of a million-year security for the repository, and such assurance seems unachievable. Likewise, increased use of compressed natural gas instead of gasoline or diesel for public buses imposes an increased risk of explosion and increased fatality rate from those rare events.61
BALANCING RISKS AND COSTS
There are several ways of applying environmental risk assessment in making policy decisions. One can estimate risks associated with a variety of hazards (for example, different hazardous waste sites) and use them to prioritize remediation, starting with those sites that pose the greatest risk to the greatest number. One can compare an estimated risk with a level of so-called acceptable risk (see below) and decide whether or not to take an action. One can treat the reduction of risk as a benefit and perform a cost-benefit analysis for any proposed solution, recognizing that benefit in terms of lives, health, or environmental quality is not easily compared with monetary costs. In another mode, one can contrast the risks from two or more alternative decisions (e.g., to clean up or not to clean up, or to ban or not to ban) and choose the path with the lowest risk. This is called risk-risk balancing. Remediation workers at hazardous waste sites do face risks beyond those related to the chemical, biological, or radiological hazards themselves, and worker risk is sometimes used to balance the risk reduction of costly cleanups. Such risk balancing usually fails to acknowledge that the same workers would face risks at other jobs if a particular remediation project is abandoned.62 Risk assessment is or can be used in applications for siting permits for hazardous facilities such as nuclear plants, liquefied natural gas depots, municipal solid waste incinerators, and hazardous waste sites.63 Because of the wide confidence limits around many risk estimates (uncertainty) and the controversies over how to do risk assessments, many management applications of risk analyses may be premature. Nonetheless, risk analysis has played an important role in many governmental decisions, such as management of dioxin-contaminated soil64 and the setting of safe drinking water standards.65
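The cost-benefit mode described above can be sketched by treating risk reduction as the benefit and ranking options by cost per expected case avoided. All option names and numbers below are hypothetical.

```python
def cost_per_case_avoided(cleanup_cost, baseline_risk, residual_risk, population):
    """Treat risk reduction as the benefit: dollars per expected case avoided."""
    cases_avoided = (baseline_risk - residual_risk) * population
    return cleanup_cost / cases_avoided

# Hypothetical remediation options for one site (illustrative numbers only):
options = {
    "cap in place":    cost_per_case_avoided(2e6, 1e-4, 5e-5, 50_000),
    "full excavation": cost_per_case_avoided(2e7, 1e-4, 1e-6, 50_000),
}
for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.0f} per case avoided")
# Risk-risk balancing would also count remediation-worker injuries against
# the residents' risk reduction, which this simple ranking omits.
```

The ranking makes the text's caveat concrete: the cheaper option per case avoided is not necessarily the one with the lowest residual risk.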
Environmental and Ecological Risk Assessment
547
The dangers of allowing risk management to intrude on risk assessment are highlighted by the Office of Management and Budget’s (OMB) attack on the Occupational Safety and Health Administration’s risk-based cadmium standard. Not only did OMB display “a fundamental lack of understanding” about risk assessment, but it used a flawed approach to try to second-guess the risk assessment.66
Balancing Benefits and Risks of Fish Consumption There is an extensive literature on the benefits of eating fish for both children and adults. The benefits accrue from the low-fat, nutritious protein, from the omega-3 polyunsaturated fatty acids (PUFAs), and probably from the avoidance of less healthy food choices. Various agencies have estimated the distribution of food consumption in different populations and have used average consumption (downwardly biased by noneaters and seldom-eaters) in determining the safety of eating fish. The contaminants in fish are mainly methylmercury and PCBs, and two populations incur these risks: those few adults who eat a lot of fish, and a vulnerable subgroup, including pregnant women and young children, who may be at risk even from moderate consumption. Among the high-end fish eaters are nutritionally conscious people who abjure red meat, recreational anglers, and subsistence fishers. The EPA uses 19 kg/year as the average fish consumption for the general population (including those who eat no fish) and 55 kg/year as the subsistence consumption level. However, many fishermen interviewed along the Savannah River in South Carolina reported exceeding 55 kg/year, some even exceeding 100 kg of fish in a year.67 A patient who reported eating an average of about 18 meals of fish per week, with a preference for swordfish and tuna, estimated her annual consumption at over 250 kg/year, enough to make her symptomatic from mercury.68 Pregnant women, on the other hand, are advised to avoid certain fish entirely and to eat less than 12 oz (340 g) per week of other fish. Even at that rate, consuming fish averaging 0.2 ppm mercury (wet weight), a 60-kg woman would exceed the EPA reference dose of 0.1 µg/kg/day. Most published studies have been of contaminants in recreationally caught fish, while most people obtain their fish from commercial sources.
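The mercury arithmetic in this paragraph can be made concrete. A minimal sketch in Python, taking the 12-oz (340 g) advisory limit, the 0.2-ppm average concentration, and the 60-kg body weight from the text; the variable names and the choice to average the weekly intake over 7 days are illustrative assumptions:

```python
# Mercury dose from fish consumption versus the EPA reference dose (RfD).
# Input values are taken from the text; averaging over 7 days is an
# illustrative assumption.
FISH_G_PER_WEEK = 340.0   # 12 oz advisory limit, in grams
HG_PPM = 0.2              # µg methylmercury per g of fish, wet weight
BODY_WEIGHT_KG = 60.0     # adult woman, from the text
EPA_RFD = 0.1             # µg/kg/day reference dose for methylmercury

weekly_intake_ug = FISH_G_PER_WEEK * HG_PPM          # 68 µg MeHg per week
daily_dose = weekly_intake_ug / 7 / BODY_WEIGHT_KG   # µg/kg/day

print(f"daily dose = {daily_dose:.3f} µg/kg/day; RfD = {EPA_RFD}")
```

The result, about 0.16 µg/kg/day, reproduces the text's point that even the advisory limit exceeds the 0.1 µg/kg/day reference dose at this fish concentration.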
Dioxins and PCBs accumulate in fish and are higher in farm-raised Atlantic salmon from Europe than in the same salmon farmed in South America or in wild-caught Pacific salmon; even average-frequency consumers would elevate their dioxin intake into the health-risk range.69 Hence it is necessary to provide good guidance on fish that are low in contaminants.70 Further research on both the dose-response curves for benefits and the harm curves for MeHg and organochlorines will produce a composite benefit-risk by dose curve that can make risk communication more accurate and more meaningful.71 Moreover, the FDA advisory includes the reassurances: “There is no harm in eating more than 12 ounces of fish in 1 week as long as you don’t do it on a regular basis” and “Just make sure you average 12 ounces of fish a week.” Both of these statements are unsupported and misleading. Fish remain a healthful and important source of nutrition, but wise choices of which fish to eat and how much to consume are essential.
Environmental Equity Although it has long been known that the most hazardous workplace or community exposures are not uniformly distributed, and that persons in lower socioeconomic groups are most likely to encounter such hazards in their work or homes, only since 1990, when Bullard’s book Dumping in Dixie72 appeared, has attention focused on “environmental justice” or “equity.” This inequity is not universal, for depending on the economic history of a community or country, industrialization can mean prosperity as well as hazard.73
ACCEPTABLE RISK
One common goal of environmental risk assessment is to identify whether a particular exposure scenario or environmental level of an agent is “acceptable” or whether a target population can continue to
be exposed to a current level without unacceptably high consequences. This requires society to identify levels of harm that it considers “acceptable” and to recognize that what may seem acceptable to a risk manager or regulator may not seem “acceptable” to a target population. What constitutes unacceptably high risk to one person (e.g., skydiving) may be a provocative challenge to another. The risk estimate can be used to establish an appropriate regulatory approach or policy that will protect the public from greater exposure.74 The process of establishing “acceptable risk” is a social and human-values decision, not a scientific one. For cancer, it has become traditional to state that an exposure to a hazard is acceptable if it does not elevate the lifetime cancer death rate by more than one in a million exposed people. If we accept for the sake of argument that approximately 20% of people die of cancer, a 1-in-a-million, or 10−6, elevation of risk means that instead of 200,000 out of a million people dying of cancer, the number will be 200,001. Clearly, this immeasurably small elevation of risk cannot be identified by any current or projected epidemiologic methods. Nor is it easy to communicate such an infinitesimally small increment. By contrast, regulations regarding occupational exposures or natural hazards (e.g., radon) tolerate a much higher risk (on the order of 10−4), but this too is immeasurably small, particularly compared to the risks faced by asbestos workers, more than 30% of whom eventually died of an asbestos-related cancer,75 or by chromate workers, for whom the lifetime lung cancer risk may exceed 1 in 10.76 Most epidemiologists are content if they can identify a 50% increase in risk, whereas 1 in 10,000 translates into a 0.01% increase in risk. Nonetheless, these “acceptable” levels play an important role in risk management.
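The “one in a million” arithmetic above can be sketched in a few lines; the 20% background cancer mortality is the text’s working assumption:

```python
# A 1e-6 excess lifetime risk applied to one million exposed people,
# against a ~20% background cancer mortality (figures from the text).
population = 1_000_000
background_rate = 0.20   # roughly 20% of people die of cancer
excess_risk = 1e-6       # "one in a million" acceptable-risk criterion

baseline_deaths = population * background_rate
deaths_with_excess = population * (background_rate + excess_risk)

print(f"{baseline_deaths:.0f} -> {deaths_with_excess:.0f} expected cancer deaths")
```

The single additional expected death among a million people is why the text calls this increment immeasurable by any epidemiologic method.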
Recognizing that the 1 in a million risk may be more theoretical than practical, one increasingly sees agencies using a 1 in 100,000 or even 1 in 10,000 risk for nonoccupational exposures. Before one determines whether a risk is acceptable, it is necessary to define an endpoint. Table 21-1 provides a spectrum of endpoints ranging from early death from cancer to emotional disturbances. There is a tendency to treat the first entries as the most consequential, and indeed risk assessment has been preoccupied with cancer. Yet some people are disabled by their emotional reactions to hazardous exposures. Society must determine how safe it wants to be77 and how much it is prepared to sacrifice for that level of security. Unfortunately, the persons who most often decide whether or not to invest in environmental safety are usually not those most at risk. Although the Environmental Protection Agency (EPA) sets a risk level of 10−6 excess cancer deaths as the cutoff between acceptable and unacceptable, some persons argue that this is an unrealistically small level, since most of the risks that most people willingly face (e.g., driving an automobile) are much higher. Indeed, the cancer risk of living in a home with 4 pCi of radon per liter of air has been estimated on the order of between 1 in 100 and 1 in 1000 excess cancers, but even in the radon belt, many people choose not to test their homes or remediate elevated levels.78 Courts have become involved in the risk debate, at least insofar as it involves interpreting biomedical evidence.79 Once a risk estimate has been calculated, it becomes a major challenge to communicate the risk to responsible officials, the media, and potentially at-risk individuals, for the manner in which individuals perceive risk often bears little resemblance to the actual magnitude of their risk.80

TABLE 21-1. SPECTRUM OF ADVERSE CONSEQUENCES CONSIDERED BY RISK ASSESSORS IN APPROXIMATELY DECLINING ORDER OF SEVERITY
Shortening of life (mortality)
  Cancer versus other causes
Illness or injury leading to disability
  Acute versus chronic
  Permanent versus temporary disability
  Serious versus minor disability
Illness or injury with temporary disability followed by recovery
  Chronic versus acute
  Serious versus minor disability
Physical discomfort without disability
Psychological disorder with behavioral consequences
  Posttraumatic stress disorder
  Anxiety reaction
  Stress reaction
  Chronic frustration and anger
Emotional discomfort
As Low as Reasonably Achievable (ALARA) When risk cannot reasonably be reduced to what is deemed an acceptable level, regulation employs the ALARA (As Low as Reasonably Achievable) approach or, in the United Kingdom, ALARP (As Low as Reasonably Practicable), while control of emissions may require the BACT (Best Available Control Technology). These have the advantage of being performance rather than specification standards.
ENVIRONMENTAL RISK ASSESSMENT PARADIGM FOR CARCINOGENS
The basic four-step approach to environmental risk assessment for carcinogens as outlined below was codified by the National Research Council in 1983.18 In the late 1970s, there was heavy emphasis on improving dose-response information; by the early 1980s the research emphasis had shifted to improving understanding of the appropriate mathematical models for low-dose extrapolation. By the late 1980s, it was realized that the exposure assessment phase required much research attention. In the 1990s, attention focused on understanding the mechanisms by which agents produce disease and modifying the generic risk assessment process accordingly.81 In the late 1990s, susceptibility emerged as a major concern for risk assessors. By 2000, emphasis had again shifted to understanding the magnitude and impact of uncertainty and how it should be analyzed and communicated.82 Risk assessment has now come full circle, with increasing discussion of improving hazard assessment and of selecting appropriate mathematical models for low-dose extrapolation83 and for estimating uncertainty.84 Establishing the guidelines for carcinogen risk assessment has been a painstaking process, as indicated in Table 21-2: the EPA Guidance finally released in March 2005 began wending its way through the federal bureaucracy in 1976.
Hazard Identification The first step is to define the hazard and establish the endpoint that will be used in the risk assessment. This means identifying a toxic substance or mixture and naming one or more endpoints (e.g., lung cancer, neurotoxicity) that are of concern. For a hazardous waste site, it begins with a list of the chemicals (usually from the list of 129 priority pollutants) that have been identified in the soil. One or a few chemicals are then selected based on their quantity, mobility, coherence with health reports, or actual exposure measurements.
Dose-Response Assessment This usually involves extensive review of the toxicological and/or epidemiologic literature to ascertain whether dose-response curves can be constructed for the endpoints of concern or whether specific thresholds have been determined. Unfortunately, for many compounds the only dose-response data are from old studies that used dosing levels appropriate for determining an LD-50, but not appropriate for low-dose extrapolation. Almost inevitably, one must rely on extrapolation to a lower dose.
Exposure Assessment Estimating exposure to the target population is essential.86 This requires measurements or models of contaminants from sources,
TABLE 21-2. CHRONOLOGY OF LANDMARKS IN CARCINOGEN RISK ASSESSMENT (EPA 2005)85
1976: EPA issued “Interim Procedures and Guidelines for Health Risk Assessments of Suspected Carcinogens”
1983: National Research Council published “Risk Assessment in the Federal Government”
1986: Publication of the first Guidelines for Carcinogen Risk Assessment
1996: EPA published for public comment the Proposed Guidelines for Carcinogen Risk Assessment; the proposed guidelines underwent Science Advisory Board (SAB) review in 1997
1999: EPA’s Science Advisory Board and Children’s Health Protection Advisory Committee (CHPAC) reviewed the Draft Final Revised Guidelines
2001: In November, EPA published in the Federal Register a Notice of Intent to finalize the cancer guidelines and extended the opportunity to provide comment
2003: In March, EPA released the draft Final Cancer Guidelines and the draft Supplemental Guidance for Assessing Susceptibility from Early-Life Exposure to Carcinogens for public comment
2004: In March, EPA received SAB comments on the Supplemental Guidance
2004/5: In response to an SAB recommendation, EPA extended the analysis supporting the Supplemental Guidance
2005: In March, EPA published the final Cancer Guidelines and Supplemental Guidance
through environmental media (including fate and transport), contact with the receptor, bioavailability and absorption, and finally an estimate of dose to a target organ or tissue or cell. Exposure assessment must take into account the measured or estimated concentration of a substance (air, water, food, soil) and all applicable routes of exposure (inhalation, ingestion, skin absorption) (see Table 20-2). This requires knowing how individuals behave: where they spend their time, what they eat, how much they drink, and many other variables which can be incorporated into increasingly sophisticated models.87 In many cases, the actual site-specific or case-specific data are unavailable, and default values are used. Preferably, direct observations or questionnaires can be used to obtain case-specific exposure estimates. For example, our studies of risks from consuming fish contaminated by metals or radionuclides required site-specific estimates of how much fish fishermen actually consume.67 Increasing attention focuses on estimating probability distributions for all aspects of exposure including behavior influencing hours at home, minutes of showering and water use,88 and food consumption.89 Exposure assessment must incorporate estimates of bioavailability and increasingly relies on physiologically based pharmacokinetic models to provide estimates of the internal dose, the dose actually delivered to the target organ.90 Obtaining empirical distributions relevant to exposures (drinking water, air volumes, market baskets, etc.) for different populations and age groups is costly but important.
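The exposure-assessment logic above is often reduced to a standard intake equation for the average daily dose. The sketch below uses the general EPA-style form; all numeric inputs are hypothetical defaults rather than site-specific data:

```python
# Average (chronic) daily dose from ingestion, EPA-style intake equation:
#   ADD = (C x IR x EF x ED) / (BW x AT)
# All example values below are hypothetical.
def average_daily_dose(conc, intake_rate, exp_freq, exp_dur, body_wt, avg_time):
    """Return dose in mg/kg/day.

    conc        contaminant concentration (mg/L for drinking water)
    intake_rate volume ingested per day (L/day)
    exp_freq    exposure frequency (days/year)
    exp_dur     exposure duration (years)
    body_wt     body weight (kg)
    avg_time    averaging time (days; a 70-year lifetime for carcinogens)
    """
    return (conc * intake_rate * exp_freq * exp_dur) / (body_wt * avg_time)

# Hypothetical example: 0.005 mg/L in tap water, 2 L/day, 350 days/year
# for 30 years, 70-kg adult, averaged over a 70-year lifetime.
cdi = average_daily_dose(0.005, 2.0, 350, 30, 70.0, 70 * 365)
print(f"chronic daily intake = {cdi:.2e} mg/kg/day")
```

In practice each input would itself be a measured or default distribution, which is what the probabilistic (Monte Carlo) exposure models described in the text sample from.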
Risk Characterization This involves a quantitative estimate of the exposure level at which a particular level of excess risk exists. In the case of cancer, one constructs a dose-response curve based on animal studies of cancer and performs a low-dose extrapolation to estimate the dose that would produce a particular excess of cancer (for example, a 10−6 increase, resulting in one additional case per million exposed people). The toxicological data used may involve the presence of tumors, the number of tumors per animal, or the time-to-tumor from initial dosing. A variety of biological models of carcinogenesis have been advocated, each leading to selection of a different mathematical extrapolation. The one-hit model assumes a linear relationship between dose and outcome,
starting at the smallest dose above zero (i.e., no threshold), with the slope of the line determined from the available studies; it is basically drawn from our understanding of radiation and cancer. Multistage models take into account our understanding of chemical carcinogenesis as a process involving initiation and promotion.91 Crump92 proposed a linearized multistage model, now widely used. This is contrasted with the linear no-threshold (LNT) model,93 the Armitage-Doll multistage model, and more recently the Moolgavkar-Venzon-Knudson model, which emphasizes mutational events94 involved in initiation rather than in promotion. The LNT model derived from radiation carcinogenesis has been considered controversial, particularly when applied to chemical carcinogenesis. Recently (June 2005), the U.S. National Academy of Sciences Committee on the Biological Effects of Ionizing Radiation issued its BEIR VII report,95 reaffirming that the LNT model is the correct model to use for radiation and cancer. Since most toxicological studies and occupational epidemiology cohorts have been dosed or exposed at levels far above those encountered by the general public, it is necessary to extrapolate from these high doses to presumed effects at low doses, using one of the above models. The Carcinogen Assessment Group of the EPA has prepared cancer potency estimates referred to as slope factors (also labeled q1* values) for a number of common carcinogens using the linearized multistage model, which is currently considered the most generally applicable extrapolation approach for chemical carcinogens.81 These estimate the number of excess cancers associated with a unit increase in dose. These and other valuable data are available in EPA’s IRIS database (http://www.epa.gov/iriswebp/iris/index.html).
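How a slope factor is applied at low dose can be sketched in a few lines. The q1* value below is an arbitrary placeholder, not the IRIS value for any real chemical; in the linearized multistage model the low-dose excess risk is approximately q1* × dose, with the exact form being 1 − exp(−q1* × dose):

```python
import math

# Low-dose use of a cancer slope factor (q1*). The value of q1* here is a
# hypothetical placeholder, not taken from IRIS.
q1_star = 1.5               # slope factor, (mg/kg/day)^-1

# Dose corresponding to a one-in-a-million (1e-6) excess lifetime risk:
dose_at_1e6 = 1e-6 / q1_star

linear_risk = q1_star * dose_at_1e6               # linearized estimate
exact_risk = 1 - math.exp(-q1_star * dose_at_1e6) # exact one-hit form

print(dose_at_1e6, linear_risk, exact_risk)
```

At regulatory doses the linear approximation and the exact expression agree to many decimal places, which is why a single slope factor suffices for low-dose decisions.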
Other models such as dose-distribution models give higher estimates of dose and are considered less protective of the public health, although future research may validate their use for some substances. It is likely that where several models give similar estimates of risk for a substance and one model gives a very divergent estimate of risk, one can safely rely on the evidence of the concordant estimates.
CARCINOGEN CLASSIFICATIONS
The International Agency for Research on Cancer (IARC) has a five-tier system for classifying chemicals on the basis of human cancer evidence, and the EPA system is analogous (Table 21-3).
Group 1. Known human carcinogen: adequate evidence in humans
Group 2A. Probable human carcinogen: limited evidence in humans but sufficient evidence in animals
Group 2B. Possible human carcinogen: limited evidence in humans and less than sufficient evidence in animals
Group 3. Not classifiable: this category is used most commonly for agents, mixtures, and exposure circumstances for which the evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals
Group 4. Probably not carcinogenic in humans: evidence suggests not a carcinogen in either humans or animals
The EPA uses similar categories, but they are labeled A, B1, B2, C, D, and E (Table 21-3).
Interspecies Extrapolations One of the controversial aspects of risk management is the utility of risk assessments based on animal toxicology without supporting human epidemiology. The evolutionary relatedness among animals, their common derivation, and the high degree of homology among protein structures provide the basis for the principle of extrapolating from animals to humans. It is important to realize, however, that such basic phenomena as the presence of enzymes and consequent metabolism vary not only among species, but among strains of a species,
TABLE 21-3. COMPARISON OF THE IARC AND EPA CARCINOGEN CLASSIFICATIONS

Classification | Evidence in Humans | Evidence in Animals | IARC Group | EPA Group
Known human carcinogen | Adequate epidemiologic evidence | Sufficient | 1 | A
Probable human carcinogen | Limited evidence; inadequate | Sufficient | 2A | B1, B2
Possible human carcinogen | Limited | Limited or less than sufficient | 2B | C
Not classifiable | Inadequate evidence | Inadequate or limited | 3 | D
Probably not a human carcinogen | Substantial negative evidence | Substantial negative evidence in at least two species | 4 | E
between sexes, and over the course of the lifespan.96 The response of experimental animals (or of human subjects) may vary with many factors. In some cases, the fact that a toxic substance produces the same effect (e.g., bladder cancer or leukemia) in several species of animals makes one confident that interspecies extrapolation is valid. For a carcinogen that produces cancer in many species, but in each case involving a different organ system, extrapolation is more uncertain. Finally, a substance may be a carcinogen in one species but not in another. Almost all known human carcinogens are also known animal carcinogens, and it is prudent to assume that animal carcinogens, particularly those that cause cancer in both sexes and in more than one species, are probable human carcinogens as well. Where data are available to estimate cancer potency for a single chemical from both animal studies and human epidemiologic studies, there is a high correlation,97 validating the use of animal toxicological data. The basic problem is that one does not know a priori whether humans are more or less susceptible to the agent than the experimental animal used in a study. Incorporating a safety factor of 10 assumes that humans are no more than 10 times more sensitive. However, humans are as likely to be less sensitive as more sensitive, and in many cases the sensitivity is not known. Thus, in the case of 2,3,7,8-TCDD (dioxin), guinea pigs are about 1000 times more sensitive than rats, while it is not clear whether humans are closer to guinea pigs or to rats.
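One widely used convention for the animal-to-human dose step is body-surface-area scaling, in which dose per kilogram of body weight scales as BW^(3/4), so the human equivalent dose carries a (BW_animal/BW_human)^(1/4) factor. The sketch below assumes that convention and a hypothetical rat NOAEL; it is one approach among several, not the only defensible one:

```python
# Human equivalent dose (HED) via body-surface-area (BW^(3/4)) scaling.
# The rat dose and body weights below are hypothetical illustration values.
def human_equivalent_dose(animal_dose, animal_bw_kg, human_bw_kg=70.0):
    """Convert an animal dose (mg/kg/day) to a predicted human dose."""
    return animal_dose * (animal_bw_kg / human_bw_kg) ** 0.25

hed = human_equivalent_dose(animal_dose=10.0, animal_bw_kg=0.25)  # 0.25-kg rat
print(f"human equivalent dose = {hed:.2f} mg/kg/day")
```

For a 0.25-kg rat the scaling factor is roughly 4-fold, comparable in spirit to the 10-fold interspecies uncertainty factor discussed in the text.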
Interpreting the Model To minimize animal pain and stress, the quest for alternatives to animal models that are valid for risk assessment is a priority of the National Toxicology Program. Before selecting a model, one establishes a level of acceptable risk. This enables one to ask what dose of a chemical would increase the cancer risk by one case in a million. The mathematical model allows one to extrapolate downward until one reaches the very low dose associated with a one-in-a-million excess risk. If one uses the LNT model, this dose will be much lower than if one uses the probit model. Environmentalists seeking to prevent any unnecessary exposure to carcinogens will tend to favor the model giving the lowest allowable dose, while an industrialist responsible for controlling exposures in and around his or her factory will feel more comfortable if the less conservative probit model is used, thus relaxing his or her burden somewhat. Unfortunately, much of the debate over which model to use has focused on the political and economic consequences of the choice rather than on the scientific basis. This is perhaps inevitable, since there has been only slow progress toward determining the biological basis for model selection for specific chemicals. Understanding the mechanisms by which toxic substances produce their effects at the molecular and cellular level can help clarify the risk approach. In addition to estimating a critical dose, one can calculate the 95% confidence limits around an estimate. To ensure protection of public health, one reports the upper 95% confidence limit as the
“upper bound” of the risk estimate. In addition, one can make a science-policy decision to use a no-threshold model in cancer risk assessment.93 In other countries, a no-threshold model is used for genotoxic carcinogens, but not necessarily for other carcinogens such as promoters.98 Thus the compromise solution is the linearized multistage model.92 This takes into account the multistage process of carcinogenesis, recognizing that a single hit may not be sufficient to cause a cancer. The dose estimated by the linearized multistage model is intermediate between the doses generated by the other two models. The EPA has also selected a “one-in-a-million excess risk,” often indicated as a 10−6 excess risk, as the point at which it will make decisions to regulate exposures.
Modeling Endpoints Other than Cancer There are attempts to harmonize risk assessment for noncancer and cancer endpoints.99 Although the method described under Risk Assessment for Noncancer Endpoints is still prevalent, there are attempts to bring dose-response extrapolation to bear on other endpoints. A model for developmental toxicology incorporates many parameters to estimate cell kinetic rates and the populations of cells with normal and abnormal kinetics, from which developmental abnormalities can be inferred.100
RISK ASSESSMENT FOR NONCANCER ENDPOINTS
Risk analyses for noncancer endpoints use a variety of approaches, usually based on the highest dose known to produce no effect (the no observed adverse effect level [NOAEL]) or, if only the control dose had no effect, the lowest dose known to produce an adverse effect (the lowest observed adverse effect level [LOAEL]).101 Ideally, one would use data from epidemiologic studies including the most sensitive human subpopulations, in which there had been a lifetime of adequate exposure by appropriate routes as well as a lifetime of follow-up (to ensure that events with long latency are not missed). In such a study, the exposure would be documented for all subjects for all years, and any outcome would be correctly diagnosed and recorded. These conditions are never met. One must rely either on incomplete epidemiologic studies or on animal studies. Since most published animal studies were not designed for risk assessment, one must be cautious in interpreting them, and new studies with more appropriate designs would be helpful. Studies with very short-term exposure or with short-term follow-up are usually not incorporated in risk assessments. Several terms need to be understood.
Benchmark Dose. An alternative to relying on a NOAEL, the benchmark dose is the lower confidence limit on a dose that produces an effect in some percent of test animals (usually set at 1, 5, or 10%). It is derived from modeling. It is usually less conservative (less protective), and is linked to an assumption that a certain percent of
illness (up to 10% of the population) is tolerable. Estimates of benchmark doses need to account for exposure uncertainty.102
Conservative. When applied to models or standards, more conservative equates to more protective. This has no relation to, and is often opposite to, the political meaning of conservative.
NOEL and NOAEL. In toxicological studies, these are the no observable effect level and the no observed adverse effect level, respectively. The NOAEL is the dose at which there was no biologically or statistically significant adverse effect. Often there may be measurable effects that are not known to have adverse consequences (hence the term NOEL). It is sensitive to the doses chosen for the study.
LOAEL. Lowest observed adverse effect level. In some studies, even the lowest dose induced a significant adverse effect. Rather than discard such studies, one uses these data but treats the LOAEL differently from a NOAEL, incorporating an additional 10X uncertainty factor. Since most toxicological data used in risk assessment were not collected with risk assessment in mind, one is often confronted with LOAELs rather than NOAELs.
Reference Dose (RfD), or Acceptable Daily Intake (ADI). The RfD is established by the Environmental Protection Agency based on risk assessments for noncancer and nongenetic endpoints.37,103 This is a daily dose, usually expressed in µg/kg/day, to which one could be exposed every day (usually for a 50-year lifetime) without experiencing any adverse effect.
Safety Factor (SF), or Uncertainty Factor (UF). A margin of safety (often arbitrary) introduced into the regulatory process to account for uncertainties in the biomedical database. Since they do not necessarily ensure safety, most authors prefer to label them uncertainty factors. Various UFs are used to calculate an RfD from a NOAEL or LOAEL.
The most common default values for these are:
To use animal data to protect humans,38,104 UF = 10
To protect the most sensitive human individuals, UF = 10
To calculate a chronic or lifetime RfD from a study using only a subacute or acute exposure, UF = 10
If the RfD is based on a LOAEL rather than a NOAEL, UF = 10
Although the choice of these values of 10 may be arbitrary, subsequent data analyses have tended to support their utility.105 However, if all four conditions hold, then the combined UF equals 10,000, resulting in an RfD that is four orders of magnitude lower than the LOAEL. Some authors believe that this is overly and unreasonably protective and have countered with lower default values or none at all. In some cases, substance-specific research may point to a different safety factor.
Making the Calculations In using these levels, one selects the highest NOAEL or the lowest LOAEL for a nontrivial endpoint reported in the literature as the starting point for calculations. One also makes certain assumptions about exposure. The standard human target is the 70-kg adult male. However, if susceptible subpopulations include children, females, or ethnic groups, a more appropriate body mass should be chosen. Exposure is assumed to occur over a 50- or 70-year life span, but in many cases involving childhood exposure, a different critical period is selected. The acceptable daily intake, or reference dose, is calculated by dividing the NOAEL by a denominator comprising all applicable uncertainty factors multiplied together.105
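The calculation described above can be sketched as follows; the NOAEL value is hypothetical, and the four 10-fold factors are the default UFs listed earlier:

```python
# RfD = NOAEL (or LOAEL) divided by the product of the applicable 10x
# uncertainty factors. The 5 mg/kg/day NOAEL used below is hypothetical.
def reference_dose(noael, animal_data=True, sensitive_humans=True,
                   subchronic_study=False, loael_used=False):
    """Return an RfD in the same units as the NOAEL (e.g., mg/kg/day)."""
    uf = 1
    if animal_data:
        uf *= 10   # animal-to-human extrapolation
    if sensitive_humans:
        uf *= 10   # protect the most sensitive individuals
    if subchronic_study:
        uf *= 10   # subacute/acute study used for a chronic RfD
    if loael_used:
        uf *= 10   # LOAEL used instead of a NOAEL
    return noael / uf

# Chronic animal NOAEL of 5 mg/kg/day with two default UFs:
print(reference_dose(5.0))
# Worst case, all four UFs apply (combined UF = 10,000):
print(reference_dose(5.0, True, True, True, True))
```

The worst-case result, four orders of magnitude below the starting dose, is exactly the situation that critics of the default UFs call overly protective.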
Physiologically Based, Pharmacokinetic Models (PBPK) The above methodology has been criticized as being overconservative, introducing too many arbitrary UFs. Since the value of “10” is a default value, critics seek more “realistic” estimates.99 The use of benchmark doses divided by uncertainty factors derived from geometric standard deviations has been proposed as less overconservative.106 The use of pharmacokinetic models may offer a way of avoiding reliance on the default values.107 For example, using the concentration of tetrachloroethylene in the air in a shower, a PBPK model can predict the concentration delivered to the brain after both inhalation and dermal exposure.
Instead of relying on an RfD (daily intake), one can calculate a reference target tissue level (RTTL).108 Research is underway on using PBPK approaches to convert external measurements (concentrations in air) to dose to target.90 The utility and believability of PBPK will be enhanced through independent evaluation of the assumptions, the input distributions, and parameters used.109,110
Receptor Kinetics and Gene Expression Recent advances in molecular and cell biology point to the role of receptor-mediated gene expression in cells as part of toxicological responses. Xenobiotics may bind to (or inhibit) receptors intended for naturally occurring ligands. Increasing knowledge of these mechanisms, and modeling of them, will improve understanding of the shape of the dose-response curve.111 Xenobiotics can influence gene expression, upregulating or downregulating messengers or receptors, and can act on cell cycles, messenger cascades, and DNA repair; each mechanism has different implications for risk assessment. The utility of toxicogenomics in risk assessment will likely focus on understanding susceptibility and toxic mechanisms.112
RADIATION RISK ASSESSMENT
Radiation risks must be interpreted against a background of omnipresent radiation from cosmic, terrestrial, and internal sources. Typical background exposure ranges from about 100 mrem to 500 mrem/year, depending largely on altitude, and standards are based on levels above this background, even though there is a substantial cancer burden attributable to background and a recent reaffirmation that radiation cancer risk follows a linear no-threshold model (BEIR VII, 2005).95 Although radiation can be measured with precision, its impact is often difficult to assess. For example, the risk of thyroid cancer attributable to Chernobyl fallout could not be estimated until the background thyroid cancer rate was known, and this estimate remains elusive.113
BIOLOGICAL AGENT RISK ASSESSMENT
Although risk assessment has developed mainly in the realm of chemicals and toxicology, biologic agents also lend themselves to risk assessment. Arthropod-borne diseases involve life cycles in host animals, insect vectors, and human patients. There are probabilities associated with each life-cycle transition, for example, the probability that an infected mosquito will bite a person, the probability of a high virus titer in its saliva, and the probability of an infectious dose being delivered. Similarly, the efficacy of various barriers in blocking exposure (for example, behavior and insect repellents), or the effectiveness of UV light and respiratory protection against tuberculosis transmission, can be analyzed. Risk assessment has long been an integral part of food safety and drinking water quality assurance, and it is used in evaluating agricultural practices.114
INFORMATION USED IN RISK CHARACTERIZATION
552
Environmental Health
Epidemiology and experimental toxicology provide information for risk assessment. Epidemiological studies concern the appropriate species, humans, but they are often limited by uncertainty about exposure and by lack of power to detect small increases in risk. It is difficult to detect with statistical confidence an increase in risk (incidence of disease or death) unless it exceeds by more than 100% that which occurs in a reference or control group, so methods such as meta-analysis are often needed to increase power. Meta-analysis, though controversial, is proving a valuable tool in interpreting epidemiologic studies and draws on the fundamental scientific process of building confidence from replicated studies. Moreover, because epidemiologic studies often lack power, insisting on a 0.05 level of significance introduces a strong bias against finding associations between exposure and outcome. These conservative features argue for the precautionary principle.115 Goldstein and Carruth116 point to the emerging role of the precautionary approach in World Trade Organization policies, where risk assessment is already entrenched, and question the assertion that precaution is the antithesis of risk assessment. Rather, precaution is part of a spectrum to be invoked when uncertainty is high and the adverse consequences large.117 Toxicological studies usually have the advantage of accurate exposure and dose measurements, but in many cases there are uncertainties in converting animal doses to human exposures, and it can be hard to establish the relevance of an observed effect for predicting human disease or disability.
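The statistical-power limitation noted above (that increases in risk well under 100% are hard to detect) can be illustrated with a normal-approximation power calculation for comparing disease rates in an exposed and a reference group. This is a sketch; the baseline rate and sample sizes are hypothetical choices for illustration, not values from the text.

```python
# Approximate power to detect a given relative risk, two-group comparison
# of proportions, normal approximation, two-sided alpha = 0.05.
from math import sqrt, erf

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def power_two_proportions(p0, rr, n_per_group, alpha_z=1.96):
    """Power to detect relative risk `rr` over baseline rate `p0`
    with `n_per_group` subjects in each group."""
    p1 = p0 * rr
    pbar = (p0 + p1) / 2.0
    se0 = sqrt(2.0 * pbar * (1.0 - pbar) / n_per_group)        # SE under H0
    se1 = sqrt((p0 * (1 - p0) + p1 * (1 - p1)) / n_per_group)  # SE under H1
    z = (p1 - p0 - alpha_z * se0) / se1
    return phi(z)

# Hypothetical baseline rate of 1% and 5000 subjects per group:
for rr in (1.2, 1.5, 2.0, 3.0):
    print(f"RR {rr}: power ~ {power_two_proportions(0.01, rr, 5000):.2f}")
```

Under these assumptions a doubling of risk (RR = 2) is readily detectable, while a 20% increase (RR = 1.2) is very likely to be missed, consistent with the 100% rule of thumb above.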
Dose-Duration and Risk
Typically risk assessment focuses on lifetime exposure and lifetime risk, and in many cases it assumes that exposure occurs at a more or less constant daily level over a period of years or even a 50- or 70-year lifetime. Radiation risk assessments are more likely to apportion risk over time, recognizing that childhood exposure has different impacts from adult exposure, but even there the dose distribution over time is used only qualitatively. The relationship between duration and dose is complicated, and largely unstudied. A person with a bottle of 30 tablets who takes one each day would have a very different experience than if all 30 tablets were consumed at one sitting. Peak doses may exceed thresholds that are never reached by small daily doses, even over a long time period. On the other hand, repeated doses may produce diseases that do not occur in those who survive an acute dose. More data on dose-duration relationships are a major research need. At the same time, age is a major variable influencing susceptibility and response,118 and particularly during development and childhood there are critical windows during which exposure may exert an effect that does not occur earlier or later in life.
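The tablet example above can be sketched numerically: with first-order elimination, the same total dose produces a very different peak body burden when taken at once versus spread over 30 days. The one-compartment model and the 6-hour half-life are illustrative assumptions, not data from the text.

```python
# Peak body burden: 30 tablets at one sitting vs. one tablet per day,
# one-compartment model with first-order elimination (assumed half-life 6 h).
from math import exp, log

HALF_LIFE_H = 6.0                  # assumed elimination half-life, hours
K = log(2) / HALF_LIFE_H           # first-order elimination rate constant

def peak_burden(doses):
    """Simulate hourly body burden for a list of (hour, dose) events."""
    burden, peak = 0.0, 0.0
    for hour in range(30 * 24):
        burden *= exp(-K)                           # decay over one hour
        burden += sum(d for t, d in doses if t == hour)
        peak = max(peak, burden)
    return peak

acute = peak_burden([(0, 30.0)])                          # all 30 at once
chronic = peak_burden([(24 * d, 1.0) for d in range(30)])  # one per day

print(f"peak burden, acute:   {acute:.1f} tablet-equivalents")
print(f"peak burden, chronic: {chronic:.2f} tablet-equivalents")
```

With these assumptions the acute peak is roughly 30 times the chronic peak, even though the cumulative dose is identical, which is the point of the peak-versus-threshold argument above.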
The Maximally Tolerated Dose
Critics of risk assessment point to the reliance on worst-case approaches, such as the maximally tolerated dose (MTD) used in laboratory studies, as leading inevitably to overestimates of risk. The MTD is the highest dose "that does not alter the animals' longevity or well-being" from unrelated effects.23 The National Research Council's Committee on Risk Assessment Methodology (CRAM) concluded that the MTD is useful mainly for qualitatively identifying carcinogens and was not intended to be the only data used for quantitatively estimating risk.23
Extreme Value Theory
Risk assessments are regularly used to estimate low-probability/high-consequence events. Extreme value theory is a branch of statistics focusing on extreme deviations from the median of probability distributions. Although used mainly actuarially, it is useful for evaluating highly unusual events such as 100-year floods. The Netherlands has built its dikes just tall enough to retain a 10,000-year flood event, or to be breached only once in 100,000 years,119 recognizing that such an event could nonetheless occur next year. Extreme value approaches could be applied to estimate risks to the most highly sensitive (supersensitive) individuals, who may be much more than 10 times as susceptible as the median of the population.
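The return-period arithmetic behind "such an event could occur next year" can be made explicit: a 10,000-year event has an annual exceedance probability of 1/10,000, yet over a long horizon the chance of seeing at least one such event is far from negligible. This is pure probability arithmetic; no site-specific flood statistics are implied.

```python
# Probability of at least one exceedance of an N-year event over a horizon,
# assuming independent years with constant annual exceedance probability.
def prob_at_least_one(return_period_years, horizon_years):
    p_annual = 1.0 / return_period_years
    return 1.0 - (1.0 - p_annual) ** horizon_years

for horizon in (1, 100, 1000, 10000):
    p = prob_at_least_one(10_000, horizon)
    print(f"P(>=1 exceedance of a 10,000-year event in {horizon:>5} yr) = {p:.4f}")
```

Over a horizon equal to the return period itself, the exceedance probability approaches 1 - 1/e, or about 63%, which is why design return periods far longer than the planning horizon are chosen.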
Uncertainty
Uncertainty permeates the risk assessment process. Even when the mechanism by which an agent produces disease is well understood, uncertainty is introduced by the dose-response data from animal studies, the exposure assessment, and the risk estimation process. These uncertainties may be multiplicative, leading to orders-of-magnitude differences in estimates depending on the assumptions one uses. It is important to distinguish uncertainty introduced by inadequate data or choice of methodology from the inherent variability among individuals.120 Thus, in a population for which an average exposure can be estimated, some individuals will behave in ways that minimize, and others in ways that maximize, their potential exposures. Similarly, individuals vary in their susceptibility to different hazards. A
variety of approaches is being suggested to reduce the uncertainty in risk assessments. Monte Carlo (randomization) simulations have been used to produce a distribution of estimates.121,122 However, formal uncertainty analysis is probably less important than efforts to reduce the uncertainties through toxicological and epidemiologic research enhanced by the increasing availability of biomarkers and through careful site-specific studies of exposure.
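The Monte Carlo approach mentioned above can be sketched in a few lines: when a risk estimate is the product of several uncertain factors, the uncertainties multiply and the resulting distribution of estimates spans orders of magnitude. The lognormal factors and their spreads below are illustrative assumptions only, not values from any actual assessment.

```python
# Monte Carlo propagation of multiplicative uncertainties in a risk estimate.
import random

random.seed(1)  # fixed seed for reproducibility

def simulate_risk(n=100_000):
    risks = []
    for _ in range(n):
        slope = random.lognormvariate(mu=-12.0, sigma=1.0)   # potency per unit dose
        exposure = random.lognormvariate(mu=0.0, sigma=0.8)  # concentration factor
        intake = random.lognormvariate(mu=0.0, sigma=0.5)    # behavioral factor
        risks.append(slope * exposure * intake)
    return sorted(risks)

risks = simulate_risk()
n = len(risks)
p5, p50, p95 = risks[n // 20], risks[n // 2], risks[-n // 20]
print(f"5th percentile:  {p5:.2e}")
print(f"median:          {p50:.2e}")
print(f"95th percentile: {p95:.2e}")
print(f"95th/5th ratio:  {p95 / p5:.0f}x")
```

Even with only three modestly uncertain factors, the 90% interval here spans roughly two orders of magnitude, which is why point estimates of risk can be so misleading without an accompanying distribution.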
Susceptibility
The past decade has seen burgeoning interest in individual variability in susceptibility to hazardous agents (see Chap. 18-A). This variability can be genetic or acquired. Age, gender, and race influence susceptibility.123,124 Acquired variability may reflect overall health status, concurrent exposures, diet, and lifestyle. Major research breakthroughs are being made in understanding the contribution of genetic variability, particularly in the P-450 enzyme system, to susceptibility. Most of this research focuses on the consequences of single-nucleotide polymorphisms, although the largest sources of variance in susceptibility are polygenic interactions, pleiotropism, and epigenetic factors.
Individual versus Collective Risk
The process of risk assessment is concerned with the collective risk facing a target population rather than with an individual. Policy makers, likewise, are concerned with protecting groups from unacceptable exposures and risks. However, many decisions regarding risks are made at the individual level, and even when a group is exposed, its members try to interpret and respond to the risk as individuals. It might seem reasonable to assume that, once a risk has been estimated for a group, any individual within that group faces the average risk. However, within a population the risk is distributed unevenly, depending on variation in exposure and susceptibility, such that any individual may have a risk much lower or higher than the estimate.
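The point above can be illustrated with a small simulation: if individual risks vary lognormally around a collective estimate (an assumption chosen purely for illustration), most individuals actually fall below the "average" risk, while a minority carry risks far above it.

```python
# How a collective (mean) risk estimate relates to individual risks,
# assuming lognormally distributed individual variation.
import random
from math import log

random.seed(7)

AVERAGE_RISK = 1e-5   # hypothetical collective (mean) lifetime risk
SIGMA = 1.0           # assumed spread of individual susceptibility/exposure

# Choose mu so the lognormal mean equals AVERAGE_RISK: mean = exp(mu + sigma^2/2)
mu = log(AVERAGE_RISK) - SIGMA**2 / 2
people = sorted(random.lognormvariate(mu, SIGMA) for _ in range(100_000))

below_mean = sum(r < AVERAGE_RISK for r in people) / len(people)
print(f"fraction of individuals below the 'average' risk: {below_mean:.2f}")
print(f"median individual risk: {people[len(people) // 2]:.1e}")
print(f"99th percentile risk:   {people[int(len(people) * 0.99)]:.1e}")
```

Under these assumptions about two-thirds of the population sits below the group average, while the top 1% carries several times the average risk, so "the average risk" describes no particular person.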
Limitations of Risk Assessment
Some of the limitations of risk assessment are inherent in the underlying toxicological and epidemiologic databases, the lack of adequate exposure data, or incomplete outcome ascertainment. Specific issues alluded to above include: (a) For the most part, risk estimates have been and will continue to be based on published animal research. Until recently, however, toxicological research on animals was not designed with quantitative risk assessment in mind, hence the choice of doses and number of animals used may have been appropriate for descriptive purposes but not for the low-dose extrapolations used in risk assessment. (b) Many of the endpoints of concern in humans have not been adequately studied in animal models. (c) The uncertainties inherent in extrapolating from animals to humans have engendered controversy. (d) Human epidemiologic studies of adequate power are usually too sparse to contribute to risk assessment, hence the continued necessity of relying on animal models. (e) Human exposure data are often inadequate. (f) In cancer-risk assessments, estimates differ dramatically depending on which mathematical model is used. (g) Risk estimates based on collective exposure are not easily translated into individual risk. (h) The temporal aspects of dose, peak exposures, and duration are generally ignored in chemical risk assessment and only superficially considered in radiation risk assessment. (i) There is continuing debate over what constitutes an acceptable risk level, which often overrides biomedical estimates of risk. Although these concerns complicate the performance and application of risk assessments, the process has become increasingly robust, so that it serves useful functions in ordering priorities, in comparing the risks of different solutions, and in providing data for the establishment of policy.
RISK PERCEPTION
Some individuals engage in extremely risky behavior on a regular basis as part of their job and may receive hazardous duty pay in recognition
21
Environmental and Ecological Risk Assessment
553
TABLE 21-4. RISK PERCEPTION DICHOTOMIES

Acceptable, or Reduces Apparent Riskiness | Unacceptable, or Increases Apparent Riskiness
Assumed voluntarily or self-imposed | Borne involuntarily or imposed by others
Adverse effect immediate | Outcome delayed
Alternatives not available, a necessity | Alternatives available, a luxury
Risk certain | Risk uncertain
Occupational exposure | Community exposure
Familiar hazard | Feared or "dread" hazard
Consequences reversible | Consequences irreversible
Some benefit gained from assuming risk | No apparent benefit to persons at risk
Hazard associated with perceived good | Someone else profits at "my expense"
of this risk. Others take risks for recreation or thrill. At the opposite pole of such risk-taking behavior are risk-averse individuals. One might predict that a risk-taking individual, such as a skydiver or a mercenary soldier, would willingly undertake other risks such as smoking, driving without a seat belt, tolerating radon in the home, or living next to a hazardous waste dump, while risk-averse individuals would take public transportation to a 9-to-5 job, wear a hard hat while walking near tall buildings, frequently check their homes for radon, have their cars undergo frequent safety inspections, and shun all activities or exposures that enhance their risk of becoming ill or injured. However, risk perception is not that simple; it is influenced by many factors other than knowledge of risks.125 Some individuals who willingly take great risks fear having their drinking water contaminated even at immeasurably low levels. A person may complain of being sensitive to mold, yet continue to smoke. The fact that, in general, individuals tend to overestimate negligible risks and underestimate severe ones is a source of frustration to risk analysts and policy makers alike, and it has engendered the rapidly growing field of "risk perception research."126 Perceived risks to natural resources such as forest biodiversity are shaped more by underlying value systems than by specific knowledge of impacts.127 Unfortunately, many efforts to understand risk perception and improve risk communication are aimed at marketing a particular viewpoint, that is, trying to convince people to accept a particular level of risk that is politically or economically expedient. Although not usually recognized, the field has its roots in the study of "marketing," which emerged in the 1950s and 1960s to understand the factors motivating human purchasing decisions. Although the risk perception literature rarely references its parent discipline, many common principles can be recognized.
Nonetheless, important advances and generalizations have been developed.128 Lowrance26 is credited with popularizing the understanding of risk perception and of what constitutes acceptable risk. He described the series of "dichotomies," originally proposed by Fischhoff, that influence human perception of risk. Some of these are shown in Table 21-4. The goal of risk perception research is to understand how individuals appreciate risks, how they make their risk-taking and risk-avoiding decisions, and how to bring their understanding of specific risks into congruence with the actual levels of risk. This will reduce anxiety where risk is overestimated and may influence behavior to prevent significant exposure where risk is underestimated. All too often, risk managers have the goal of reducing anxiety and encouraging people to accept exposures, particularly those that would be costly to mitigate. However, examples of the need to enhance awareness of and response to underestimated exposures include convincing people to have their homes tested for radon (and mitigated if the radon level is high) and educating people about the hazards of smoking tobacco. Although people in different parts of the world and at different socioeconomic levels face different kinds of risks and make risk decisions driven by different factors, there are some universal features of risk perception. For example, Hinman et al.129 showed a remarkable concordance between United States and Japanese respondents regarding the things they dread (with nuclear accidents, radiation waste, and nuclear
war at the high end for both countries); however, there was less concordance between the countries in knowledge about the 30 hazards tested. Comparisons of the lay public versus "experts" consistently reveal that the former view technology as riskier than do the latter, apparently independent of the particular technology and its risks.130 Among scientists, those in the life sciences and those in academia tend to perceive greater risks from nuclear waste than do physical scientists or those in industry or government.131 The latter are also more willing to impose risks on others. Not surprisingly, employees of a nuclear plant perceived a lower risk of accidents than did the general public.132 Demographic factors influence perception in complex ways. In some studies more educated people, who may have a better understanding of science and technology, are more accepting of technological hazards,133 but the fact that people of lower socioeconomic status and education fear such developments relates in part to their perception that they personally are at greater risk.134 Perceived risk for any hazard correlates with one's perception of personal risk from that hazard but is tempered by any benefits derived from it.135,136
RISK COMMUNICATION
As with risk perception, there is a growing body of research on methods for imparting risk information to the public as well as to individuals. Risk assessment and risk management are highly charged fields where politics and emotion mix freely with science, and ultimately whom you trust determines your views of risk.137 The landmark book Improving Dialogue with Communities: A Risk Communication Manual for Government identified the challenges that government officials face when bringing news (particularly unpleasant news) about a local environmental hazard to communities.138 Public utilities and corporations have also invested in improving communication with their neighboring communities, both to comply with Superfund Amendments and Reauthorization Act (SARA) Title III and with community right-to-know laws in some states, and to create channels of communication in case of an accident. Although this was true when written in 1996, many communities have apparently lost interest in SARA Title III, and the local emergency planning committees established under SARA have become inactive. Moreover, SARA was predicated on public access to information about hazards in the community, while the rising industrial secrecy engendered by homeland security fundamentally undermines public access to hazard information.
Models of Risk Communication
In many circumstances, risk communication has been a one-way path between the "expert" and the "public," following a source-receiver model in which the recipient is passive and is expected to respond to the message in a predicted manner. Very frequently the anticipated response does not materialize. Two-way communication (sometimes called a convergence model)139 is necessary so that receivers can inform the sender which parts of the problem are important to them, thereby shaping the message they receive. In communicating with non-English speakers, the primary focus is usually on correct translation, but we
found that direct classroom interaction was more effective than relying on a well-translated illustrated pamphlet in communicating about fish consumption to pregnant Latinas.140
Risk Comparisons
Risk assessors, not realizing that individuals must put risk into very personal contexts, often lament that the public reacts irrationally to risks. Risk comparison is an approach to communicating risk by contrasting unfamiliar risks with familiar ones. Common reference points include the risk of driving so many miles in a car, the radiation risk of a transcontinental air flight, and the lung cancer risk from smoking a pack of cigarettes per day. The implication is that a risk lower than these should be acceptable. Yet individuals may rationally accept the necessary (and predictable) risk of a transcontinental flight while shunning the perceived (and uncertain) risk of having a communication tower constructed in their community. The perception is colored by the personal gain and utility of the former and the lack of vested interest in the latter. Aesthetic considerations also color perceptions of risk. A nonsmoker may take the one-pack-a-day comparison as evidence of high and unacceptable risk. The concept of risk comparison thus often makes more sense to the communicator than to the communicatee. The dichotomies shown in Table 21-4 help us understand this apparent paradox.
Temporal Characterization of Risk
As an alternative to risk comparison, very low lifetime risks can be translated into a time frame that may help some people grasp their significance. Thus a one-in-100,000 lifetime risk in a town of 2000 people translates into one death in 3500 years (50 × 70-year lifetimes).141
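The arithmetic behind that example is worth making explicit: the lifetime risk times the population gives expected deaths per lifetime, and dividing the lifetime length by that figure gives the expected interval between deaths.

```python
# One-in-100,000 lifetime risk in a town of 2000 people, 70-year lifetime:
# how many years, on average, between attributable deaths?
lifetime_risk = 1 / 100_000
population = 2_000
lifetime_years = 70

deaths_per_lifetime = lifetime_risk * population        # expected deaths per 70 years
years_per_death = lifetime_years / deaths_per_lifetime  # expected years per death
print(f"one expected death every {years_per_death:,.0f} years")
```

This reproduces the figure in the text: 0.02 expected deaths per 70-year lifetime, or one death in 3500 years.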
Media Coverage of Risk
One often gains the impression that newspaper and television coverage exaggerates the hazards of everyday life, with stories that bear little relationship to the actual magnitude of public health hazard.142 Nonetheless the media are an important source of hazard and risk information for many people, and the media therefore could play a crucial role in providing a balanced perspective on risk. Although many toxicologists shun journalists for fear of being misquoted, there is a substantial basis for believing that environmental news coverage can be improved if a dialogue between toxicologists, risk assessors, and reporters can be developed.143
Stakeholders and Citizens’ Advisory Boards (CABs)
Various organizations, including large corporations and the Department of Energy, form advisory boards representing various types of stakeholders. These boards can voice concerns to which the organization can respond proactively. CABs can focus attention on the risks that they view as significant and can identify acceptable alternatives or programs. At some factories they actually participate in fence-line monitoring programs. Their actual contribution to policy outcomes, however, is variable.144 Improving the involvement and usefulness of communities, particularly minority communities, in agency decisions, as well as the need to evaluate risk communication methodologies, have been identified as high priorities for risk communication research.145
ECOLOGICAL RISK ASSESSMENT
Ecological Risk Assessment has emerged as the discipline for evaluating the risk of stressors to ecological systems (including their component organisms), and it has borrowed heavily from the four-step human risk assessment paradigm:13,23 hazard identification, dose-response assessment, exposure assessment, and risk characterization.146,147 The use of ecological risk assessment is now common, as evidenced by the more than 100 guidance documents published in the last decade.148 While early risk assessors searched for commonalities across risk methodologies, recent guidance has stressed the importance of site-specific information.
Evaluating risk to ecological systems is far more complex than human health risk assessment because of the complexities of ecosystems. Ecosystems include both abiotic (soil, air, water) and biotic components, and the latter include a wide range of species with different life spans (from minutes to hundreds of years), different life history strategies (some have few offspring, others lay millions of eggs), different life stages (e.g., egg, larva, adult), and vastly different susceptibilities to stressors. It is for this reason that ecological risk assessment must be conducted with a particular objective in mind, with a particular range of species of concern. Moreover, whereas human health risk assessments yield a probability of adverse effect, there is no comparable single metric for ecological risk assessments. The hazard identification phase of ecological risk assessment therefore requires input from public policy makers, risk managers and regulators, and the general public.1,149 All of these stakeholders must work in an iterative framework to provide the background, scope, and objectives for an ecological risk assessment. Most of the federal agencies involved in ecological risk assessment have acknowledged the importance of this initial phase and of including stakeholders. The inclusion of a range of managers, regulators, and interested and affected parties in the design and implementation of risk assessments has improved their usefulness.150–152 Moreover, because of the complexities within ecosystems, the endpoints or measures of risk must be carefully defined and selected. This is not a trivial aspect, and a range of endpoints is often used. In human risk assessments one has to worry about only one species; in ecological assessments the structure and function of the system, as well as the survival of component species, are of concern. Selecting the target endpoint is challenging.
Is it a particular ecosystem function, such as productivity or the amount of energy or matter channeled through the system, or is it the size of a component population? One critical difference between human and ecological risk assessment is that, whereas the health and well-being of each individual human is important, for ecological systems (except for endangered species) it is population viability that is of concern. There is still a lack of dose-response and exposure data for plants and animals in most ecosystems. Most estimates of exposure come from measuring the levels of chemicals in various tissues. There are few monitors available for wild animals, and the cost would be prohibitive for obtaining either large sample sizes or data on many different species. Replicating ecosystems on a scale suitable for research has required aquarium-level (microcosm) and pond-level (mesocosm) models,64,153 the latter still restricted to a few research stations. Technological advances in the development of tools such as chemical-specific hazard quotients for risk characterization,148 species sensitivity distribution methods,154 and GIS155 have brought more quantification to the process. A recent phase in ecological risk assessment is a focus on large spatial scales, a so-called landscape approach.65,156,157 The problems that ecological systems face often can be examined only on a regional basis, where the health and well-being of metapopulations can be assessed. In addition, researchers are trying to estimate the resiliency of ecosystems, or the time required for them to recover from a disturbance or contamination.66,158 As with human health risk assessment, it is often challenging to determine whether a risk assessment has achieved its purpose, since long-term studies are required to determine whether predictions have been borne out.
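The chemical-specific hazard quotient mentioned above is simple to sketch: it is the ratio of an estimated exposure concentration to a toxicity reference value, with values at or above 1 flagging a chemical for closer review. The chemicals, concentrations, and reference values below are hypothetical.

```python
# Hazard-quotient screening: HQ = exposure concentration / reference value.
def hazard_quotient(exposure_conc, reference_value):
    """Both arguments must be in the same units (e.g., mg/kg soil, mg/L water)."""
    return exposure_conc / reference_value

# chemical: (measured concentration, ecological reference value) -- illustrative
site_data = {
    "chemical A": (0.8, 4.0),
    "chemical B": (12.0, 5.0),
    "chemical C": (0.05, 1.0),
}

for chem, (conc, ref) in site_data.items():
    hq = hazard_quotient(conc, ref)
    flag = "REVIEW" if hq >= 1.0 else "ok"
    print(f"{chem}: HQ = {hq:.2f}  [{flag}]")
```

As a screening tool the HQ deliberately ignores the shape of the dose-response curve; it only ranks chemicals against their reference values, which is why it supports, rather than replaces, a full site-specific assessment.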
Most ecological risk assessments deal with only one chemical or a class of chemicals, such as the antifouling algaecide irgarol159 or metals,160 yet organisms in nature are exposed to several stressors or chemicals of concern. Further, exposure usually involves several organisms at a time.161 Consideration of a wider range of chemicals and a wider range of organisms has led to attempts to protect most organisms, most of the time. In some cases, screening risk assessments and probabilistic risk models are used.162 Although human and ecological risk assessments often proceed independently for a given site, the selection of indicators that can be used to assess both human and ecological health has the greatest utility.163,164 This has led to development of methods to integrate a
framework for human health and the environment.147 This allows for the integrated assessment of all exposed species, rather than concentrating only on humans, and is more apt to lead to sustained effort and future biomonitoring.
NATURAL RESOURCE DAMAGE ASSESSMENT
Natural Resource Damage Assessment (NRDA) is a legal and regulatory strategy to monetize damage to natural resources, mainly from contamination. Penalties assessed can be used, and often are required to be used, to offset the specific damage by remediating contamination, rehabilitating damaged habitats, reintroducing organisms, or providing offsets such as the purchase of alternative habitat. The information required for NRDA is often the same as that used in a risk assessment,165 although it is only damage, not risk, that can be penalized. Nonetheless, ecological risk assessment could play a major role in achieving sound NRDA decisions.
CONSERVATION MEDICINE
Although the recognition of animals as hosts and vectors of human disease is a century old, conservation medicine employs a comprehensive approach to evaluating risks, prevention, and control at the intersection of human, animal, and ecosystem health,166 combining principles of epidemiology and epizootiology. Several examples are considered below; others include Hantavirus and avian influenza. Among animal epidemics, chytridiomycosis has gained prominence for causing or contributing to worldwide population crashes among many species of frogs.167
West Nile Virus
Many human viral diseases exist in wild animal reservoirs. The risk assessment process for West Nile Virus (WNV) must take into account the impact not only on humans but on its avian hosts. Widespread declines of bird populations have been predicted, although the main victims thus far are members of the Corvidae family, the crows and jays,168 which are especially vulnerable to WNV. Risk assessment should influence pest control policies (repellents, sprays, environmental controls) and the choice of which mosquito species to target.169 Bird populations may be more vulnerable to widespread, ill-advised insecticide spraying than to the virus itself, while the efficacy of most spray programs has not been documented.
Organochlorines and Avian Reproduction
Organochlorines (OCs) are persistent chemicals in the environment and in animals and bioaccumulate in the food chain. OCs, particularly DDT, induce metabolic enzymes that normally metabolize steroids, including estrogen; DDT was the first endocrine disruptor identified. Birds that ate large fish or other birds accumulated high levels in their tissues and suffered reproductive failure, most noticeable as the laying of eggs with very thin shells (due to rapid transit through the oviduct) and as increased embryolethality. The most conspicuous victims were Bald Eagles, Ospreys, Peregrine Falcons, and Brown Pelicans. The banning and restriction of OC use led to a decline in residues and improved reproduction;170 however, continued persistence in local food chains still threatens these species in some areas.171 Other examples of secondary poisoning are numerous, and risk assessment for agrochemicals must include assessment of impact on nontarget organisms.
Diclofenac and Vulture Declines
Vultures are large birds that scavenge animal remains. They play important roles in community sanitation as well as in the "burial" rituals of certain religious communities in India. These large soaring birds had been conspicuous parts of the Asian skyscape, but beginning in the mid-1990s a precipitous decline in vulture populations, amounting to total disappearance in some places, was noted. Where once several dozen birds would be in view, none could be found. Investigations of infectious and chemical agents led to the discovery that an anti-inflammatory drug used in cattle was highly nephrotoxic to the vultures that eventually ate these cattle.172 Risk assessments for veterinary pharmaceuticals generally take into account human consumers; the vultures demonstrate that there are additional receptors.
RISK ASSESSMENT AND JUNK SCIENCE
The applications of risk assessment to policy making are widespread and contentious. Some stakeholders oppose risk assessment on ethical grounds, some because of lack of understanding, and some because of vested interests in outcomes. Critics of regulation have popularized the term "junk science" in an attempt to discredit the basis of many risk assessments. Although the criticisms may actually be targeted at the values and acceptable risk levels rather than at the data and analyses, the "junk science" rubric is intended to discredit the results and overemphasize the uncertainties, thereby dissuading policy makers from using risk results.173 The tobacco industry, for example, misrepresented the scientific method in its oft-repeated plaint that "there is no proof that smoking causes cancer." The accumulation of scientific studies to the contrary supports the alternative statement that "there is no doubt that smoking causes cancer." Junk science thus works both ways: those who misrepresent the nature of the scientific method to the courts, the media, and even the peer review process are engaged in junking science. Misunderstanding and misuse of hormesis is a long-standing and growing example of junk science.174 Policy makers also have to be alert for shills, for example, scientists who reported findings developed for them by the tobacco industry.175
VENUES FOR RISK-RELATED RESEARCH
The University and the Corporation are the traditional bastions of research. Risk-related research has also been performed extensively within regulatory agencies such as the Food and Drug Administration and the Environmental Protection Agency. Increasingly, however, risk-related research is being performed by nongovernmental organizations and by environmental consulting firms, and much of this research enters the peer-reviewed literature. As university researchers find themselves increasingly constrained by limits imposed by institutional review boards, certain kinds of research may be relegated to nonacademic centers, where protection of human subjects may not receive the same priority. Since the essence of science is falsifiability and reproducibility, research that cannot be replicated because of ethical concerns or policies (for example, the immersion of "volunteers" in a bath of hexavalent chromium, a known human carcinogen176) is of questionable value and should not be used for risk management. The issue of what kinds of research data can be accepted as science transcends modern risk assessment.
HARMONIZATION
The desirability and limitations of harmonizing methods for human and ecological risk assessment have been mentioned above. Harmonizing scientific, political, and judicial interpretations of risk information should be considered as well. Courts in different jurisdictions have interpreted risk differently, ranging from ignoring it to requiring demonstration of a twofold excess relative risk in epidemiologic studies, although most individuals would consider even a 10% increase in a serious risk unacceptable. Harmonizing radiological and chemical risk assessment is challenging, and for radiation the protection of nonhuman organisms thus far depends on the protection of humans, despite evidence of ecosystem sensitivity to the contrary.177 Finally, international harmonization, at least among North America, Europe, and Japan, would be a valuable exercise. Currently, the precautionary approach is
viewed more favorably in Europe than in the United States, although the difference in impact may be slight. And even within Europe, countries differ in how they approach acceptable risk.119
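The gap between a court's twofold relative-risk threshold and a 10% excess risk can be made concrete with the standard epidemiologic attributable-fraction calculation (a minimal illustrative sketch; the formula is textbook epidemiology, not drawn from the references cited in this chapter):

```python
def attributable_fraction(relative_risk):
    """Fraction of cases among the exposed attributable to the
    exposure: (RR - 1) / RR."""
    return (relative_risk - 1.0) / relative_risk

# RR = 2.0, the threshold some courts have required, corresponds to
# "more likely than not" causation: half the cases among the exposed
# are attributable to the exposure.
print(attributable_fraction(2.0))            # 0.5

# RR = 1.1, a 10% excess, still attributes roughly 9% of cases among
# the exposed, which many individuals would consider unacceptable
# for a serious outcome.
print(round(attributable_fraction(1.1), 2))  # 0.09
```

The twofold threshold thus encodes a legal standard of proof (attributable fraction above 50%), not a public health judgment about what level of excess risk is acceptable.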
FUTURE PRIORITIES
Risk assessment continues to evolve on many fronts, from the basic four-part paradigm to bioterrorism,178 and new mathematical approaches are linked to expanding data sets. Although risk assessment is criticized as being both over- and underconservative,179 involvement of stakeholders at all stages, coupled with enhanced methods, should converge on greater acceptability. New metrics such as quality-adjusted life years180 may enhance both the estimation and communication of risk. As with toxicology in general, risk assessment for mixtures is an essential development. Accounting for the duration-dose trade-off is beginning to attract more attention,181 both for research and for application to standard-setting policy. The spatial analysis and depiction of risks is a rapidly growing field.182,183 Brownfields redevelopment is one of the few urban health initiatives to retain high political visibility in the United States,184 and enhanced use of risk approaches will facilitate wise and economic use of contaminated lands. Risk assessment has its detractors as well as its exploiters.
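The way a quality-adjusted-life-year metric folds duration and health state into a single number can be sketched as follows (a hypothetical illustration; the utility weights and durations are invented for this example and are not drawn from reference 180):

```python
def qalys(health_intervals):
    """Sum quality-adjusted life years over (years, utility) intervals,
    where a utility of 1.0 is full health and 0.0 is death."""
    return sum(years * utility for years, utility in health_intervals)

# Hypothetical comparison: 10 years in full health versus 10 years
# living with a chronic, exposure-related illness weighted at 0.7.
full_health = qalys([(10, 1.0)])    # 10.0 QALYs
with_illness = qalys([(10, 0.7)])   # about 7.0 QALYs
print(full_health - with_illness)   # QALYs lost to the illness, about 3
```

A metric of this form lets a risk manager compare an intervention that extends life against one that improves its quality on a common scale, which is what makes it attractive for both estimation and communication of risk.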
REFERENCES
1. National Research Council. Understanding Risk—Informing Decisions in a Democratic Society. Washington, DC: National Academy Press; 1996. 2. Burger J, Gochfeld M. Ecological and human health risk assessment: a comparison. In: Di Giulio RT, Monosson E, eds. Interconnections Between Human and Ecosystem Health. New York: Chapman Hall; 1996: 127–48. 3. Lin BL, Tokai A, Nakanishi J. Approaches for establishing predicted no-effect concentrations for population-level ecological risk assessment in the context of chemical substances management. Environ Sci Technol. 2005;39:4833–40. 4. Raber E, Carlsen T, Folks K, Kirvel R, Daniels J, Bogen K. How clean is clean enough? Recent developments in response to threats posed by chemical and biological warfare agents. Int J Environ Health Res. 2004;14:31–41. 5. Leung KM, Bjorgesaeter A, Gray JS, Li WK, Lui GC, Wang Y, et al. Deriving sediment quality guidelines from field-based species sensitivity distributions. Environ Sci Technol. 2005;39:5148–56. 6. RAMAS. Linking spatial data with population viability analysis. Version V software. http://www.ramas.com/ramas.htm. 7. Tanaka Y. Ecological risk assessment of pollutant chemicals: extinction risk based on population-level effects. Chemosphere. 2003;53:421–5. 8. Reed RN. An ecological risk assessment of nonnative boas and pythons as potentially invasive species in the United States. Risk Anal. 2005;25:753–66. 9. Burger J, Carletta MA, Lowrie K, Miller KT, Greenberg M. Assessing ecological resources for remediation and future land uses on contaminated lands. Environ Manage. 2004;34:1–10. 10. Cairns J. Restoration ecology: a major opportunity for ecotoxicologists. Environ Toxicol Chem. 1991;10:429–32. 11. Morse SS. Factors in the emergence of infectious diseases. Emerg Infect Dis. 1995;1:7–15. 12. von Krauss MP, Casman EA, Small MJ. Elicitation of expert judgments of uncertainty in the risk assessment of herbicide tolerant oilseed crops. Risk Anal. 2004;24:1515–27. 13.
Di Giulio RT, Monosson E. Interconnections between Human and Ecosystem Health. London: Chapman & Hall; 1996.
14. Cacela D, Lipton J, Beltman D, Hansen J, Wolotira R. Associating ecosystem service losses with indicators of toxicity in habitat equivalency analysis. Environ Manage. 2005;35:343–51. 15. Burger J, Gochfeld M, Powers CW, Waishwell L, Warren C, Goldstein BD. Science, policy, stakeholders, and fish consumption advisories: developing a fish fact sheet for the Savannah River. Environ Manage. 2001;27:501–14. 16. Inoue K, Koizumi A. Application of human reliability analysis to nursing errors in hospitals. Risk Anal. 2004;24:1459–73. 17. Elad D. Risk assessment of malicious biocontamination of food. J Food Prot. 2005;68:1302–5. 18. National Research Council. Risk Assessment in the Federal Government. Washington, DC: National Academy Press; 1983. 19. Presidential/Congressional Commission on Risk Assessment and Risk Management. Risk Assessment and Risk Management in Regulatory Decision Making. Washington, DC: U.S. Government Printing Office; 1997. 20. United States Environmental Protection Agency. Risk assessment guidelines for carcinogenicity, mutagenicity, complex mixtures, suspect developmental toxicants, and estimating exposures. Fed Regist. 1986;51:33992–34054. 21. Burger J. How should success be measured in ecological risk assessment? The importance of “predictive accuracy.” Environ Health Toxicol. 1994;42:367–76. 22. Benyamine M, Backstrom M, Sanden P. Multi-objective environmental management in constructed wetlands. Environ Monit Assess. 2004;90:171–85. 23. National Research Council. Committee on Risk Assessment Methodology: Issues in Risk Assessment. Washington, DC: National Academy Press; 1993. 24. Derby SL, Keeney RL. Risk analysis: understanding “how safe is safe enough?” Risk Anal. 1981;1:217–24. 25. National Research Council. Report of the Commission on Risk Assessment and Risk Management. Washington, DC: National Research Council; 1996. 26. Lowrance WW. Of Acceptable Risk. Los Altos, CA: William Kaufmann; 1976. 27. Imperato PJ, Mitchell G. Acceptable Risk.
New York: Viking; 1985. 28. Jasanoff S, ed. Learning from Disaster. Philadelphia: University of Pennsylvania Press; 1994. 29. Daszak P, Tabor GM, Kilpatrick AM, Epstein J, Plowright R. Conservation medicine and a new agenda for emerging diseases. Ann N Y Acad Sci. 2004;1026:1–11. 30. Giovannini A, MacDiarmid S, Calistri P, Conte A, Savini L, Nannini D, et al. The use of risk assessment to decide the control strategy for bluetongue in Italian ruminant populations. Risk Anal. 2004;24:1737–53. 31. Honhold N, Taylor NM, Wingfield A, Einshoj P, Middlemiss C, Eppink L, et al. Evaluation of the application of veterinary judgment in the preemptive cull of contiguous premises during the epidemic of foot-and-mouth disease in Cumbria in 2001. Vet Rec. 2004;155:349–55. 32. Grist EP. Transmissible spongiform encephalopathy risk assessment: the UK experience. Risk Anal. 2005;25:519–32. 33. Cringoli G, Rinaldi L, Veneziano V, Musella V. Disease mapping and risk assessment in veterinary parasitology: some case studies. Parassitologia. 2005;47:9–25. 34. Barker CM, Reisen WK, Kramer VL. California state mosquitoborne virus surveillance and response plan: a retrospective evaluation using conditional simulations. Am J Trop Med Hyg. 2003;68:508–18. 35. Purse BV, Baylis M, Tatem AJ, Rogers DJ, Mellor PS, Van Ham M, et al. Predicting the risk of bluetongue through time: climate models of temporal patterns of outbreaks in Israel. Rev Sci Tech. 2004;23:761–5. 36. Douglas-Helders GM, Saksida S, Nowak BF. Questionnaire-based risk assessment for amoebic gill disease (AGD) and evaluation of
freshwater bathing efficacy of reared Atlantic salmon Salmo salar. Dis Aquat Organ. 2005;63:175–84. 37. Jones RD, Kelly L, Fooks AR, Wooldridge M. Quantitative risk assessment of rabies entering Great Britain from North America via cats and dogs. Risk Anal. 2005;25:533–42. 38. Wajsman D, Ruden C. Identification and evaluation of computer models for predicting environmental concentrations of pharmaceuticals and veterinary products in the Nordic environment. J Expo Anal Environ Epidemiol. 2006;16(1):85–97. 39. Van den Brink PJ, Tarazona JV, Solomon KR, Knacker T, Van den Brink NW, Brock TC, et al. The use of terrestrial and aquatic microcosms and mesocosms for the ecological risk assessment of veterinary medicinal products. Environ Toxicol Chem. 2005;24:820–9. 40. Jederberg WW. Issues with the integration of technical information in planning for and responding to nontraditional disasters. J Toxicol Environ Health A. 2005;68:877–88. 41. Apostolakis GE, Lemon DM. A screening methodology for the identification and ranking of infrastructure vulnerabilities due to terrorism. Risk Anal. 2005;25:361–76. 42. Elad D. Risk assessment of malicious biocontamination of food. J Food Prot. 2005;68:1302–5. 43. Meinhardt PL. Water and bioterrorism: preparing for the potential threat to U.S. water supplies and public health. Annual Rev Public Health. 2005;26:213–37. 44. Hauschild VD, Bratt GM. Prioritizing industrial chemical hazards. J Toxicol Environ Health A. 2005;68:857–76. 45. Gofin R. Preparedness and response to terrorism: a framework for public health action. Eur J Public Health. 2005;15:100–4. 46. Jetter JJ, Whitfield C. Effectiveness of expedient sheltering in place in a residence. J Hazard Mater. 2005;119:31–40. 47. Ryan JCH, Ryan DJ. Proportional hazards in information security. Risk Anal. 2005;25:141–9. 48. Lehtomäki K, Pääkkönen RJ, Rantanen J. Risk analysis of Finnish peacekeeping in Kosovo. Risk Anal. 2005;25:389–96. 49. Thoresen S, Mehlum L. Risk factors for fatal accidents and suicides in peacekeepers: is there an overlap? Milit Med. 2004;169:988–93. 50. Sulsky SI, Grabenstein JD, Delbos RG. Disability among U.S. army personnel vaccinated against anthrax. J Occup Environ Med. 2004;46:1065–75. 51. Booth-Kewley S, Larson GE. Predictors of psychiatric hospitalization in the Navy. Milit Med. 2005;170:87–93. 52. May LM, Weese C, Ashley DL, Trump DH, Bowling CM, Lee AP. The recommended role of exposure biomarkers for the surveillance of environmental and occupational chemical exposures in military deployments: policy considerations. Milit Med. 2004;169:761–7. 53. Boyd KC, Hallman WK, Wartenberg D, Fiedler N, Brewer NT, Kipen HM. Reported exposures, stressors, and life events among Gulf War registry veterans. J Occup Environ Med. 2003;45:1247–56. 54. Burger J, Powers C, Greenberg M, Gochfeld M. The role of risk and future land use in cleanup decisions at the Department of Energy. Risk Anal. 2004;24:1539–49. 55. Burger J. Assessing environmental attitudes and concerns about a contaminated site in a densely populated suburban environment. Environ Monit Assess. 2005;101:147–65. 56. Kaufman MM, Murray KS, Rogers DT. Surface and subsurface geologic risk factors to ground water affecting brownfield redevelopment potential. J Environ Qual. 2003;32:490–9. 57. Groth E, III. The debate over food biotechnology in the United States: is a societal consensus achievable? Sci Eng Ethics. 2001;7:327–46. 58. Brown RV. Logic and motivation in risk research: a nuclear waste test case. Risk Anal. 2005;25:125–40. 59. Mohanty S, Codell RB. Ramifications of risk measures in implementing quantitative performance assessment for the proposed radioactive waste repository at Yucca Mountain, Nevada, USA. Risk Anal. 2004;24:537–46.
60. Vastag B. Federal ruling requires million-year guarantee of safety at Yucca Mountain nuclear waste site. J Natl Cancer Inst. 2004;96: 1656–8. 61. Chamberlain S, Modarres M. Compressed natural gas bus safety: a quantitative risk assessment. Risk Anal. 2005;25:377–87. 63. Greenberg MR, Anderson RF. Hazardous Waste Sites: The Credibility Gap. New Brunswick, NJ: Center for Urban Policy Research; 1984. 64. Environmental Protection Agency. Office of Research and Development: Health Assessment Document for 2,3,7,8-Tetrachlorodibenzop-dioxin (TCDD) and Related Compounds. Washington, DC: Environmental Protection Agency; 1994. 65. National Research Council, Safe Drinking Water Committee. Drinking Water and Health. Selected Issues in Risk Assessment. Vol 9. Washington, DC: National Academy Press; 1989. 66. Crump KS, Gentry R. A response to OMB’s comments regarding OSHA’s approach to risk assessment in support of OSHA’s final rule on cadmium. Risk Anal. 1993;13:487–9. 67. Burger J, Gaines KF, Gochfeld M. Ethnic differences in risk from mercury among Savannah River fishermen. Risk Anal. 2001;21: 533–44. 68. Gochfeld M. Cases of mercury exposure, bioavailability, and absorption. Ecotoxicol Environ Saf. 2003;56:174–9. 69. Foran JA, Carpenter DO, Hamilton MC, Knuth BA, Schwager SJ. Risk-based consumption advice for farmed Atlantic and wild Pacific Salmon contaminated with dioxins and dioxin-like compounds. Environ Health Perspect. 2005;113:552–6. 70. Burger J, Stern AH, Gochfeld M. Mercury in commercial fish: optimizing individual choices to reduce risk. Environ Health Perspect. 2005;113:266–71. 71. Gochfeld M, Burger J. Good fish/bad fish: a composite benefit-risk by dose curve. Neurotoxicology. 2005;26(4):511-20. 72. Bullard RD. Dumping in Dixie: Race, Class, and Environmental Quality. Boulder, CO: Westview; 1990. 73. Cutter SL, Holm D, Clark L. The role of geographic scale in monitoring environmental justice. Risk Anal. 1996;16:517–26. 74. Goldstein BD. 
Risk assessment/risk management is a three step process: in defense of EPA’s risk assessment guidelines. J Am Coll Toxicol. 1988;7:543–9. 75. Selikoff IJ, Seidman H. Asbestos-associated deaths among insulation workers in the United States and Canada, 1967-1987. Ann N Y Acad Sci. 1991; 643:1–14. 76. Park RM, Bena JF, Stayner LT, Smith RJ, Gibb HJ, Lees PSJ. Hexavalent chromium and lung cancer in the chromate industry: a quantitative risk assessment. Risk Anal. 2004;24:1099–108. 77. Fischoff B, Slovic P, Lichtenstein S, Read S, Combs B. How safe is safe enough? A psychometric study of attitudes towards technological risks and benefits. Policy Sci 1978;8:127–52. 78. Sandman PM. Hazard Versus Outrage: The Case of Radon. New Brunswick, NJ: Rutgers Environmental Communication Research Program; 1988. 79. Henefin MS, Kipen H, Poulter SR. Reference guide on medical testimony. In: Reference Manual on Scientific Evidence. 2nd ed. Washington DC: Federal Judicial Center; 2000: 439-484. http:// www. fjc.gov/public/pdf.nsf/lookup/sciman00.pdf/$file/sciman00.pdf. 80. Kasperson RE, Renn O, Slovic P, Brown HS, Emel J, Goble R, et al. The social amplification of risk: a conceptual framework. Risk Anal. 1988;8:177–87. 81. Environmental Protection Agency. Integrated Risk Information System (IRIS) Database. Washington, DC. Environmental Protection Agency; 1996. 82. Chess C, Calia J, O’Neill KM. Communication triage: an anthrax case study. Biosecur Bioterror. 2004;2:106–11. 83. Lutz WK, Gaylor DW, Conolly RB, Lutz RW. Nonlinearity and thresholds in dose-response relationships for carcinogenicity due to sampling variation, logarithmic dose scaling, or small differences in
individual susceptibility. Toxicol Appl Pharmacol. 2005;207(2 Suppl):565–9. 84. Bailar JC, III, Bailer AJ. Risk assessment—the mother of all uncertainties. Disciplinary perspectives on uncertainty in risk assessment. Ann N Y Acad Sci. 1999;895:273–85. 85. Environmental Protection Agency. Proposed guidelines for carcinogen risk assessment. Fed Reg. 1984, final release March 2005;49:46294–301. 86. Lioy P. Assessing total human exposure to contaminants. Environ Sci Technol. 1990;24:938–45. 87. Roy A, Georgopoulos PG, Ouyang M, Freeman N, Lioy P. Environmental, dietary, demographic, and activity variables associated with biomarkers of exposure for benzene and lead. J Expo Anal Environ Epidemiol. 2003;13:417–26. 88. Wilkes CR, Mason AD, Hern SC. Probability distributions for showering and bathing water-use behavior for various U.S. subpopulations. Risk Anal. 2005;25:317–38. 89. Burger J, Boring S, Dixon C, Lord C, McMahon M, Ramos R, et al. Exposure of South Carolinians to commercial meats and fish within their meat and fish diet. Sci Total Environ. 2002;287:71–81. 90. Simmons JE, Evans MV, Boyes WK. Moving from external exposure concentration to internal dose: duration extrapolation based on physiologically based pharmacokinetic derived estimates of internal dose. J Toxicol Environ Health A. 2005;68:927–50. 91. Krewski D, Van Ryzin J. Dose response models for quantal response toxicity data. In: Csorgo M, Dawson D, Rao JNK, Saleh E, eds. Current Topics in Probability and Statistics. New York: North-Holland; 1981. 92. Crump KS, Howe RB. The multistage model with a time-dependent dose pattern: application to carcinogenic risk assessment. Risk Anal. 1984;4:163–76. 93. Armitage P. Multistage models of carcinogenesis. Environ Health Perspect. 1985;63:195–201. 94. Moolgavkar SH, Knudson AG, Jr. Mutation and cancer: a model for human carcinogenesis. J Natl Cancer Inst. 1981;66:1037–52. 95. National Research Council, Committee on Biological Effects of Ionizing Radiation. BEIR VII Report: Health Risks from Exposure to Low Levels of Ionizing Radiation. Washington, DC: National Academy Press; 2005. 96. Guo Z, Wang M, Tian G, Burger J, Gochfeld M, Yang CS. Age- and gender-related variations in the activities of drug-metabolizing and antioxidant enzymes in the white-footed mouse (Peromyscus leucopus). Growth Dev Aging. 1993;57:85–100. 97. Allen BC, Crump KS, Shipp AM. Correlation between carcinogenic potency of chemicals in animals and humans. Risk Anal. 1988;8:531–44. 98. IARC. General principles for evaluating the carcinogenic risk of chemicals. In: IARC Monographs on the Evaluation of Carcinogenic Risk of Chemicals to Humans. Suppl 4. Lyon, France: International Agency for Research on Cancer; 1982. 99. Clewell HJ, Crump KS. Quantitative estimates of risk for noncancer endpoints. Risk Anal. 2005;25:285–90. 100. Leroux BG, Leisenring WM, Moolgavkar SH, Faustman EM. A biologically-based dose-response model for developmental toxicology. Risk Anal. 1996;16:449–58. 101. Farland W, Dourson M. Noncancer health endpoints: approaches to quantitative risk assessment. In: Cothern CR, ed. Comparative Environmental Risk Assessment. Boca Raton, FL: Lewis; 1993: 87–106. 102. Budtz-Jorgensen E, Keiding N, Grandjean P. Effects of exposure imprecision on estimation of the benchmark dose. Risk Anal. 2004;24:1689–96. 103. Barnes DG, Dourson M. Reference dose (RfD): description and use in health risk assessments. Regul Toxicol Pharmacol. 1988;8:471–86. 104. Environmental Protection Agency. IRIS: Integrated Risk Information System. Washington, DC: Environmental Protection Agency; 1992.
105. Hallenbeck WH. Quantitative evaluation of human and animal studies. In: Hallenbeck WH, Cunningham KM, eds. Quantitative Risk Assessment for Environmental and Occupational Health. Chelsea, MI: Lewis; 1987: 43–60. 106. Gaylor DW, Kodell RL. A procedure for developing risk-based reference doses. Regul Toxicol Pharmacol. 2002;35(2 Pt 1):137–41. 107. Clewell HJ, III, Jarnot BM. Incorporation of pharmacokinetics in noncancer risk assessment: example with chloropentafluorobenzene. Risk Anal. 1994;14:265–76. 108. Rao HV, Brown DR. A physiologically-based pharmacokinetic assessment of tetrachloroethylene in groundwater for a bathing and showering determination. Risk Anal. 1993;13:37–50. 109. Teeguarden JG, Waechter JM, Jr, Clewell HJ, III, Covington TR, Barton HA. Evaluation of oral and intravenous route pharmacokinetics, plasma protein binding, and uterine tissue dose metrics of bisphenol A: a physiologically based pharmacokinetic approach. Toxicol Sci. 2005;85:823–38. 110. Clark LH, Setzer RW, Barton HA. Framework for evaluation of physiologically-based pharmacokinetic models for use in safety or risk assessment. Risk Anal. 2004;24:1697–717. 111. Kohn MC, Portier CJ. Effects of the mechanisms of receptor-mediated gene expression on the shape of the dose-response curve. Risk Anal. 1993;13:565–72. 112. Oberemm A, Onyon L, Gundert-Remy U. How can toxicogenomics inform risk assessment? Toxicol Appl Pharmacol. 2005;207(2 Suppl):592–8. 113. Catelinois O, Laurier D, Verger P, Rogel A, Colonna M, Ignasiak M, et al. Assessment of the thyroid cancer risk related to Chernobyl fallout in eastern France. Risk Anal. 2005;25:243–52. 114. Stine SW, Song I, Choi CY, Gerba CP. Application of microbial risk assessment to the development of standards for enteric pathogens in water used to irrigate fresh produce. J Food Prot. 2005;68:913–8. 115. Gochfeld M. Why epidemiology of endocrine disruptors warrants the precautionary principle. Pure Appl Chem. 2003;75:2521–9. 116.
Goldstein B, Carruth RS. The precautionary principle and/or risk assessment in World Trade Organization decisions: a possible role for risk perception. Risk Anal. 2004;24:491–9. 117. Burger J. Making decisions in the 21st century: scientific data, weight of evidence, and the precautionary principle. Pure Appl Chem. 2003;75:2505–14. 118. Hattis D, Goble R, Russ A, Chu M, Ericson J. Age-related differences in susceptibility to carcinogenesis: a quantitative analysis of empirical animal bioassay data. Environ Health Perspect. 2004;112:1152–8. 119. Ale JM. Tolerable or acceptable: a comparison of risk regulation in the United Kingdom and in the Netherlands. Risk Anal. 2005;25:231–41. 120. Hattis D, Burmaster DE. Assessment of variability and uncertainty distributions for practical risk analyses. Risk Anal. 1994;14:713–30. 121. Thompson KM, Burmaster DE, Crouch EAC. Monte Carlo techniques for quantitative uncertainty analysis in public health risk assessments. Risk Anal. 1992;12:53–64. 122. Zheng J, Frey HC. Quantitative analysis of variability and uncertainty with known measurement error: methodology and case study. Risk Anal. 2005;25:663–75. 123. Hattis D, Russ A, Goble R, Banati P, Chu M. Human interindividual variability in susceptibility to airborne particles. Risk Anal. 2001;21:585–99. 124. Ginsberg G, Hattis D, Sonawane B, Russ A, Banati P, Kozlak M, et al. Evaluation of child/adult pharmacokinetic differences from a database derived from the therapeutic drug literature. Toxicol Sci. 2002;66:185–200. 125. Slovic P, Finucane ML, Peters E, et al. Risk as analysis and risk as feelings: some thoughts about affect, reason, risk, and rationality. Risk Anal. 2004;24:311–22. 126. Slovic P, Fischoff B, Lichtenstein S. Why study risk perception? Risk Anal. 1982;2:83–94.
127. McFarlane B. Public perceptions of risk to forest biodiversity. Risk Anal. 2005;25:543–53. 128. Covello VT, Flamm WG, Rodricks JV, Tardiff RG, eds. The Analysis of Actual vs. Perceived Risks. New York: Plenum Press; 1983. 129. Hinman GW, Rosa EA, Kleinhesselink RR, Lowinger TC. Perceptions of nuclear and other risks in Japan and the United States. Risk Anal. 1993;13:449–55. 130. Savadori L, Savio S, Nicotra E, Rumiati R, Finucane M, Slovic P. Expert and public perception of risk from biotechnology. Risk Anal. 2004;24:1289–99. 131. Barke RP, Jenkins-Smith HC. Politics and scientific expertise: scientists, risk perception and nuclear waste policy. Risk Anal. 1993;13:425–39. 132. Kivimäki M, Kalimo R. Risk perception among nuclear power plant personnel: a survey. Risk Anal. 1993;13:421–4. 133. Pilisuk M, Acredolo C. Fear of technological hazards: one concern or many? Soc Behav. 1988;3:17–24. 134. Savage I. Demographic influences on risk perceptions. Risk Anal. 1993;13:413–20. 135. Gregory R, Mendelsohn R. Perceived risk, dread, and benefits. Risk Anal. 1993;13:259–64. 136. Slovic P, Finucane ML, Peters E, MacGregor DG. Risk as analysis and risk as feelings: some thoughts about affect, reason, risk, and rationality. Risk Anal. 2004;24:311–22. 137. Slovic P. Trust, emotion, sex, politics, and science: surveying the risk-assessment battlefield. Risk Anal. 1999;19:689–701. 138. Hance BJ, Chess C, Sandman PM. Improving Dialogue with Communities: A Risk Communication Manual for Government. Trenton, NJ: NJ Department of Environmental Protection; 1988. 139. Bradbury JA. Risk communication in environmental restoration programs. Risk Anal. 1994;14:357–63. 140. Burger J, McDermott MH, Chess C, Bochenek E, Perez-Lugo M, Pflugh KK. Evaluating risk communication about fish consumption advisories: efficacy of a brochure versus a classroom lesson in Spanish and English. Risk Anal. 2003;23:791–803. 141. Weinstein ND, Kolb K, Goldstein BD.
Using time intervals between expected events to communicate risk magnitudes. Risk Anal. 1996;16:305–8. 142. Greenberg MR, Sachsman DB, Sandman PM, Salomone KL. Network evening news coverage of environmental risk. Risk Anal. 1987;9:119–26. 143. Sandman P, Sachsman D, Greenberg M, Gochfeld M. Environmental Risk and the Press. New Brunswick, NJ: Transaction Books; 1987. 144. Lynn FM, Busenberg GJ. Citizen advisory committees and environmental policy: what we know, what’s left to discover. Risk Anal. 1995;15:147–62. 145. Chess C, Salomone KL, Hance BJ. Improving risk communication in government: research priorities. Risk Anal. 1995;15:127–36. 146. Environmental Protection Agency. Risk Assessment and Management: Framework for Decision Making. Washington DC: Environmental Protection Agency; 1984. 147. Suter GW, II, Vermeire T, Munns WR, Jr., Sekizawa J. An integrated framework for health and ecological risk assessment. Toxicol Appl Pharmacol. 2005; 207(2 Suppl):611-6. 148. Sorensen MT, Gala WR, Margolin JA. Approaches to ecological risk characterization and management: selecting the right tools for the job. Human Ecol Risk Assess. 2004;10:245–69. 149. Norton SB, Rodier DR, Gentile JH, van der Schalie WH, Wood WP, Slimak MW. A framework for ecological risk assessment at the EPA. Environ Toxicol Chem. 1992;11:1663–72. 150. Burger J, Gochfeld M, McGrath LF, Powers CW, Waishwell L, Warren C, et al. Science, policy, stakeholders, and fish consumption advisories: developing a fish fact sheet for the Savannah River. Environ Manage. 2000;27:501–14. 151. Burger J, Gochfeld M, Kosson D, Powers CW, Friedlander B, Eichelberger J, et al. Science, policy, and stakeholders: developing
a consensus science plan for Amchitka Island, Aleutians, Alaska. Environ Manage. 2005;35:557–68. 152. Goldstein BD, Erdal S, Burger J, Faustman EM, Freidlander BR, Greenberg M, et al. Stakeholder participation: experience from the CRESP program. Environ Epidem Toxicol. 2000;2:103–11. 153. Bartell SM, Gardner RH, O’Neill RV. Ecological Risk Estimation. Boca Raton, FL: Lewis Press; 1992. 154. Fisher DJ, Burton DT. Comparison of two U.S. Environmental Protection Agency species sensitivity distribution methods for calculating ecological risk criteria. Human Ecol Risk Assess. 2004;9:675–90. 155. Hayes EH, Landis WG. Regional ecological risk assessment of a near shore marine environment: Cherry Point, WA. Human Ecol Risk Assess. 2004;10:299–325. 156. Graham RL, Hunsaker CT, O’Neill RV, Jackson BL. Ecological risk assessment at the regional scale. Ecol Applic. 1991;1:196–206. 157. Xu X, Lin H, Fu Z. Probe into the method of regional ecological risk assessment—a case study of wetland in the Yellow River Delta in China. J Environ Manage. 2004;70:253–62. 158. Gochfeld M, Burger J. Evolutionary consequences for ecological risk assessment and management. Environ Monit Assess. 1993;28:161–8. 159. Hall LW, Jr, Gardinali P. Ecological risk assessment for Irgarol 1051 and its major metabolite in United States surface waters. Human Ecol Risk Assess. 2004;10:525–45. 160. Pekey H, Karakas D, Ayberk S, Tolun L, Bakoglu M. Ecological risk assessment using trace elements from surface sediments of Izmit Bay (Northeastern Marmara Sea) Turkey. Mar Pollut Bull. 2004;48:946–53. 161. Matsinos YG, Wolff WF. An individual-oriented model for ecological risk assessment of wading birds. Ecol Model. 2003;170:471–8. 162. Mukhtasor TH, Veitch B, Bose N. An ecological risk assessment methodology for screening discharge alternatives of produced water. Human Ecol Risk Assess. 2004;10:505–24. 163. Burger J, Gochfeld M. On developing bioindicators for human and ecological health. Environ Monit Assess.
2000;66:23–46. 164. Burger J, Gochfeld M. Bioindicators for assessing human and ecological health. In: Wiersma GB, ed. Environmental Monitoring. Boca Raton, FL: CRC Press; 2004: 541–61. 165. McCay DF. Development and application of damage assessment modeling: example assessment for the North Cape oil spill. Mar Pollut Bull. 2003;47:341–59. 166. Aguirre AA, Ostfeld RS, Tabor GM, House C, Pearl MC. Conservation Medicine: Ecological Health in Practice. New York: Oxford;2002. 167. Daszak P, Tabor GM, Kilpatrick AM, Epstein J, Plowright R. Conservation medicine and a new agenda for emerging diseases. Ann N Y Acad Sci. 2004;1026:1–11. 168. Marra PP, Griffing SM, McLean RG. West Nile virus and wildlife health. Emerg Infect Dis. 2003;9:898–9. 169. Kilpatrick AM, Kramer LD, Campbell SR, Alleyne EO, Dobson AP, Daszak P. West Nile virus risk assessment and the bridge vector paradigm. Emerg Infect Dis. 2005;11:425–9. 170. Grier JW. Ban of DDT and subsequent recovery of reproduction in bald eagles. Science. 1982;218:1232–5. 171. Elliott JE, Miller MJ, Wilson LK. Assessing breeding potential of peregrine falcons based on chlorinated hydrocarbon concentrations in prey. Environ Pollut. 2005;134:353–61. 172. Oaks JL, Gilbert M, Virani MZ, Watson RT, Meteyer CU, Rideout BA, et al. Diclofenac residues as the cause of vulture population decline in Pakistan. Nature. 2004;427:630–3. 173. Michaels D, Monforton C. Manufacturing uncertainty: contested science and the protection of the public’s health and environment. Am J Public Health. 2005;95(suppl 1):S39–48. 174. Kayajanian G. Arsenic, cancer, and thoughtless policy. Ecotoxicol Environ Saf. 2003;55:139–42. 175. Friedman LC, Daynard RA, Banthin CN. How tobacco-friendly science escapes scrutiny in the courtroom. Am J Public Health. 2005;95(suppl 1):S16–20.
176. Corbett GE, Finley BL, Paustenbach DJ, Kerger BD. Systemic uptake of chromium in human volunteers following dermal contact with hexavalent chromium (22 mg/L). J Expo Anal Environ Epidemiol. 1997;7:179–89. 177. Hinton TG, Bedford JS, Congdon JC, Whicker FW. Effects of radiation on the environment: a need to question old paradigms and enhance collaboration among radiation biologists and radiation ecologists. Radiat Res. 2004;162:332–8. 178. Goldstein BD. Advances in risk assessment and communication. Annual Rev Public Health. 2005;26:141–63. 179. Finkel AM. Disconnect brain and repeat after me: “risk assessment is too conservative.” Ann N Y Acad Sci. 1997;837:397–417. 180. Ponce RA, Wong EY, Faustman EM. Quality adjusted life years (QALYs) and dose-response models in environmental health policy analysis—methodological considerations. Sci Total Environ. 2001; 274:79–91. 181. Boyes WK, Evans MV, Eklund C, Janssen P, Simmons JE. Duration adjustment of acute exposure guideline level values for trichloroethylene using a physiologically-based pharmacokinetic model. Risk Anal. 2005;25:677–86. 182. Mayer HJ, Greenberg MR, Burger J, Gochfeld M, Powers C, Kosson D, et al. Using integrated geospatial mapping and conceptual site models to guide risk-based environmental clean-up decisions. Risk Anal. 2005;25:429–46. 183. Omumbo JA, Hay SI, Snow RW, Tatem AJ, Rogers DJ. Modelling malaria risk in East Africa at high-spatial resolution. Trop Med Int Health. 2005;10:557–66. 184. Greenberg M, Lee C, Powers C. Public health and brownfields: reviving the past to protect the future. Am J Public Health. 1998;88:1759–60.
General References on Risk Assessment Baker S, Driver J, McCallum D, eds. Residential Exposure Assessment, A Sourcebook. New York: Kluwer Academic/Plenum Publishers; 2001. Bates DV. Environmental Health Risks and Public Policy. Seattle: University of Washington Press; 1994. Blair A, Burg J, Foran J, Gibb H, Greenland S, Morris R, et al. Guidelines for application of meta-analysis in environmental epidemiology. Regul Toxicol Pharmacol. 1995;22:189–97. Boehm G, Nerb J, McDaniels T, Spada H, eds. Environmental Risks: Perception, Evaluation and Management. Oxford, UK: Elsevier; 2001. Conway RA. Environmental Risk Analyses for Chemicals. New York: Van Nostrand; 1982. Environmental Protection Agency. Reducing Risk: Setting Priorities and Strategies for Environmental Protection. Washington, DC: Environmental Protection Agency; 1990. Environmental Protection Agency. Risk Assessment Guidance for Superfund. Washington, DC: Environmental Protection Agency; 1991. Environmental Protection Agency. Guidelines for exposure assessment. Fed Regist. 1992, May 29;57:22888–938. Environmental Protection Agency. Health Effects Assessment Summary Tables. Washington, DC: Environmental Protection Agency; 1992. Environmental Protection Agency. Health Assessment Document for 2,3,7,8-Tetrachlorodibenzo-p-dioxin (TCDD) and Related Compounds. Vols 1–3. Washington, DC: Environmental Protection Agency; 1994. Faustman EM, Omenn GS. Risk assessment. In: Klaassen CD, ed. Casarett and Doull’s Toxicology. 6th ed. New York: McGraw-Hill; 2001: 83–104. Finkel AM, Golding D. Worst Things First? The Debate over Risk-Based National Environmental Priorities. Baltimore: Johns Hopkins University Press; 1994. Goldring D, Krimsky S. Theories of Risk. New York: Praeger; 1992. Goldsmith DF. Risk assessment applied to environmental medicine. In: Brooks S, Gochfeld M, Herzstein J, Schenker M, Jackson R, eds. Environmental Medicine. St. Louis: CV Mosby; 1995: 30–6.
Goldstein BD. The maximally exposed individual: an inappropriate basis for public health decision making. Environ Forum. 1989, November–December; 13–16. Guzelian PS, Henry CJ, Olin SS. Similarities and Differences between Children and Adults: Implications for Risk Assessment. Washington, DC: International Life Sciences Institute; 1992. Hallenbeck WH, Cunningham KM. Quantitative Risk Assessment for Environmental and Occupational Health. Chelsea, MI: Lewis Press; 1987. Hawkins NC, Graham JD. Expert scientific judgment and cancer risk assessment: a pilot study of pharmacokinetic data. Risk Anal. 1988;8:615–25. (Expert opinion is polarized on formaldehyde.) Imperato PJ. On Acceptable Risk. Viking, New York: 1985. Jayjock MA. How much is enough to accept hormesis as the default? Or “At what point, if ever, could/should hormesis be employed as the principal dose-response default assumption in risk assessment?” Hum Exp Toxicol. 2005;24:245–7. Krewski D, Brown C, Murdoch D. Determining “safe” levels of exposure: safety factors of mathematical models. Fund Appl Toxicol. 1984;4: S383–94. Krimsky S. Hormonal Chaos: The Scientific and Social Origins of the Environmental Endocrine Hypothesis. Baltimore, MD: Johns Hopkins University Press; 2000. Kunreuther H, Gowda MVR, eds. Integrating Insurance and Risk Management for Hazardous Wastes. Boston: Kluwer; 1990. Long FA, Schweitzer GE. Risk assessment at hazardous waste sites. Am Chem Soc Symp. 1982;204. Lowrance WW. Of Acceptable Risk. Los Altos, CA: William Kaufmann; 1976. Lucier GW. Risk assessment: good science for good decisions. Environ Health Perspect. 1993;101:366. National Research Council. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press; 1983. National Research Council, Committee on Pesticides in the Diets of Infants and Children, Commission on Life Sciences. Pesticides in the Diets of Infants and Children. Washington, DC: National Academy Press; 1993. 
National Research Council, Committee on Risk Assessment of Hazardous Air Pollutants, Commission on Life Sciences. Science and Judgement in Risk Assessment. Washington, DC: National Academy Press; 1994. National Research Council. A Risk-Management Strategy for PCB-Contaminated Sediments. Washington DC: National Academy Press; 2001. Needleman HL, Gaszonis CA. Low-level lead exposure and the IQ of children. JAMA. 1990;263:673–8. Nicholson WJ, ed. Management of assessed risk for carcinogens. Ann NY Acad Sci. 1981;363:1–300. Oftedal P, Brogger A. Risk and Reason: Risk Assessment in Relation to Environmental Mutagens and Carcinogens. New York: Alan R. Liss; 1986. Olin S, Farland W, Park C, Rhomberg L, Scheuplein R, Starr T, et al. eds. Low-Dose Extrapolation of Cancer Risks. Washington, DC: International Life Sciences Institute Press; 1995. Presidential/Congressional Commission on Risk Assessment and Risk Management. Framework for Environmental Health Risk Management, Final Report. Washington, DC: The Commission; 1997. Risk Analysis. An International Journal of the Society for Risk Analysis. New York: Plenum Press; 1981 to present. Saxena J. Hazard Assessment of Chemicals. Vols 1–2. New York: Academic Press; 1986. Schecter A, Gasiewicz TA. Dioxins and Health. 2nd ed. New York: Plenum Press; 2003. Schecter A, Papke O, Tung KC, Staskal D, Birnbaum L. Polybrominated diphenyl ethers contamination of United States food. Environ Sci Technol. 2004;38:5306–11.
Schecter A, Pavuk M, Papke O, Ryan JJ, Birnbaum L, Rosen R. Polybrominated diphenyl ethers (PBDEs) in U.S. mothers’ milk. Environ Health Perspect. 2003;111:1723–9. Sielken RL. Quantitative cancer risk assessment for TCDD. Food Chem Toxicol. 1987;25:257–67. Silbergeld EK. Risk assessment and risk management: an uneasy divorce. In: May D, Hollander R, eds. Acceptable Evidence: Science and Values in Risk Assessment. New York: Oxford University Press; 1991: 99–114. Silbergeld EK, Patrick TE. Environmental exposures, toxicologic mechanisms, and adverse pregnancy outcomes. Am J Obstet Gynecol. 2005;192(suppl 5):S11–21. Smith ERA. Energy, the Environment and Public Opinion. Lanham MD: Rowman and Littlefield; 2002. Whyte AV, Burton I. Environmental Risk Assessment. New York: John Wiley & Sons; 1980.
General References on Quantitative Risk Assessment Allen BC, Crump KS, Shipp AM. Correlation between carcinogenic potency of chemicals in animals and humans. Risk Anal. 1988;8:531–44. Andersen ME, Clewell HI, Gargas ML, Smith FA, Reitz RH. Physiologically based pharmacokinetics and the risk assessment process for methylene chloride. Toxicol Appl Pharmacol. 1987;87:185–205. Armitage P. Multistage models of carcinogenesis. Environ Health Perspect. 1985;63:195–201. Bailar JC, III, Needleman J, Berney BL, McGinnis JM. Assessing Risks to Health: Methodological Approaches. Westport CT: Auburn House; 1993. Crump KS. A critical analysis of a dose-response assessment for TCDD. Food Chem Toxicol. 1988;26:79–83. Crump KS, Krewski D, van Landingham C. Estimates of the proportion of chemicals that were carcinogenic or anticarcinogenic in bioassays conducted by the National Toxicology Program. Environ Health Perspect. 1999;107:83–8. Finkel AM. Dioxin: are we safer now than before? Risk Anal. 1988;8:161–6. Gerrity TR, Henry CJ. Principles of Route-to-Route Extrapolation for Risk Assessment. Amsterdam: Elsevier; 1990. Knight FH. Risk, Uncertainty and Profit. New York: Harbor Torchbooks; 1921. Moolgavkar SH, Knudsen AG, Jr. Mutation and cancer: a model for human carcinogenesis. J Nat Cancer Inst. 1981;66:1037–52. National Research Council. Science and Judgment in Risk Assessment. Washington, DC: National Academy Press; 1994. Purchase IFH, Auton TR. Thresholds in chemical carcinogenesis. Reg Toxicol Pharmacol. 1995;22:199–205. Safe Drinking Water Committee, National Academy of Science. Drinking Water and Health. Vol. 6. Washington, DC: National Academy Press; 1986. Upton AC. The question of thresholds for radiation and chemical carcinogenesis. Cancer Invest. 1989;7:267–76.
General References on Risk Perception and Risk Communication Burger J, Gochfeld M. Fishing a superfund site: dissonance and risk perception of environmental hazards by fishermen in Puerto Rico. Risk Anal. 1991;11:269–77. Burger J, Gochfeld M. Ecological and human health risk assessment: a comparison. In: Di Giulio RT, Monosson E, eds. Interconnections between Human and Ecosystem Health. London: Chapman & Hall; 1996:127–48. Chess C, Calia J, O’Neill KM. Communication triage: an anthrax case study. Biosecur Bioterror. 2004;2:106–11. Covello VT, Flamm WG, Rodricks JV, Tardiff RG, eds. The Analysis of Actual vs. Perceived Risks. New York: Plenum Press; 1983. Davies JC, Covello VT, Allen FW, eds. Risk Communication: Proceedings of the National Conference on Risk Communication. Washington, DC: Conservation Foundation; 1986.
Environmental and Ecological Risk Assessment
Depoe SP, Delicath JW, Elsenbeer MA, eds. Communication and Public Participation in Environmental Decision Making. New York: State University of NY Press; 2004. Epple D, Slovic P. Taxonomic analysis of perceived risk: modeling individual and group perceptions within homogeneous hazard domains. Risk Anal. 1988;8:435–56. Johnson B, Covello V, eds. Social and Cultural Construction of Risk. Boston: Reidel; 1987. Johnson BB, Chess C. How reassuring are risk comparisons to pollution standards and emission limits? Risk Anal. 2003;3:999–1007. Johnson BB, Chess C. Communicating worst-case scenarios: neighbors’ views of industrial accident management. Risk Anal. 2003;23:829–40. Kahneman D, Slovic P, Tversky A. Judgement under Uncertainty: Heuristics and Biases. New York: Cambridge University Press; 1982. National Research Council. Regulating Pesticides in Food: The Delaney Paradox. Washington, DC: National Academy Press; 1987. National Research Council. Improving Risk Communication. Washington, DC: National Academy Press; 1989. National Research Council. Issues in Risk Assessment. Washington, DC: National Academy Press; 1993. National Research Council. Pesticides in the Diets of Infants and Children. Washington, DC: National Academy Press; 1993. National Research Council. Building Consensus through Risk Assessment and Management of the Department of Energy’s Environmental Remediation Program. Washington, DC: National Academy Press; 1994. National Research Council. Science and Judgement in Risk Assessment. Washington, DC: National Academy Press; 1994. Sandman P, Sachsman D, Greenberg M, Gochfeld M. Environmental Risk and the Press. New Brunswick, NJ: Transaction Books; 1987. Short JF, Jr. Social dimensions of risk: the need for a sociological paradigm and policy research. Am Sociol. 1987;22:167–72. Slovic P. Informing and educating the public about risk. Risk Anal. 1986;6:403–15. Slovic P. Perception of risk. Science. 1987;236:280–5. Slovic P, Fischoff B, Lictenstein S. 
Facts versus fears: understanding perceived risk. In: Kahneman D, Slovic P, Tversky A, eds. Judgment under Uncertainty: Heuristics and Biases. New York: Cambridge University Press; 1982. von Winterfeldt D, John RS, Borcherding K. Cognitive components of risk ratings. Risk Anal. 1981;1:277–88.
General References on Applications of Risk Assessment Carnegie Commission. Risk and the Environment: Improving Regulatory Decision Making. New York: Carnegie Commission; 1993. Denison RA, Silbergeld EK. Risks of municipal solid waste incineration: an environmental perspective. Risk Anal. 1988;8:343–57. Ditz DW. Hazardous waste incineration at sea. EPA decision making on risk. Risk Anal. 1988;8:499–508 (criticizes EPA for underestimating risk). Gough M. Science policy choices and the estimation of cancer risk associated with exposure to TCDD. Risk Anal. 1988;8:337–42. Jasanoff S. Learning from Disaster: Risk Management after Bhopal. Philadelphia: University of Pennsylvania Press; 1994. Kroes R. Contribution of toxicology toward risk assessment of carcinogens. Arch Toxicol. 1987;60:224–8. (Genotoxic carcinogens get no threshold model, others get threshold model.) Kunreuther H, Lathrop JW. Siting hazardous facilities: lessons from LNG. Risk Anal. 1981;1:289–302. Kunreuther H, Slovic P. Decision making in hazard and resource management. In: Kates RW, Burton I, eds. Geography, Resources and Environment. Vol. 2. Chicago: University of Chicago Press; 1986: 153–87. National Research Council. Pesticides in the Diets of Infants and Children. Washington, DC: National Academy Press; 1993. Rycroft TW, Regens JL, Dietz T. Incorporating risk assessment and benefit-cost analysis in environmental management. Risk Anal. 1988;8: 415–20.
Environmental Health
General References on Ecological Risk Assessment Barnthouse LW. The role of models in ecological risk assessment: a 1990s perspective. Environ Toxicol Chem. 1992;11:1751–60. Bartell SM, Gardner RH, O’Neill RV. Ecological Risk Estimation. Boca Raton, FL: Lewis Press; 1992. Burger J, Gochfeld M. Temporal scales in ecological risk assessment. Arch Environ Contam Toxicol. 1992;23:484–8. Cairns J, Niederlehner BR, Orvos DR, eds. Predicting Ecosystem Risk. Princeton: Princeton Scientific Publishing; 1992. Commission on Risk Assessment and Risk Management. Report of the Commission on Risk Assessment and Risk Management. Washington, DC: National Academy Press; 1996. Dell’Omo G. Behavioral Ecotoxicology. New York: John Wiley; 2002. Di Giulio RT, Monosson E. Interconnections between Human and Ecosystem Health. London: Chapman & Hall; 1996. Environmental Protection Agency. Ecological Risk Assessment: Federal Guidelines. Washington DC: USEPA; 2000. Foran JA, Forenc SA, eds. Multiple Stressors in Ecological Risk and Impact Assessment. Pensacola, FL: SETAC; 2000. Hoffman DJ, Rattner BA, Burton GA, Jr, Cairns J, Jr, eds. Handbook of Ecotoxicology. 2nd ed. Boca Raton: Lewis; 2003. Linthurst RA, Bourdeau P, Tardiff RC, eds. Methods to Assess the Effects of Chemicals on Ecosystems. SCOPE Monograph No. 53. New York: John Wiley & Sons; 1995. National Research Council. Risk Assessment in the Federal Government: Managing the Process. Washington, DC: National Academy Press; 1983. National Research Council. Ecological Knowledge and Environmental Problem Solving. Washington, DC: National Academy Press; 1986. National Research Council. Animals as Sentinels of Environmental Health Hazards. Washington, DC: National Academy Press; 1991. Norton SB, Rodier DR, Gentile JH. A framework for ecological risk assessment at the EPA. Environ Toxicol Chem. 1992;11:1663–72. National Research Council. Risk and Decisions about Disposition of Transuranic and High-Level Radioactive Waste. 
Washington DC: National Academy Press; 2005. Pastorok RA, Bartell SM, Ferson S, Ginzburg LR, eds. Ecological Modeling in Risk Assessment: Chemical Effects on Populations, Ecosystems, and Landscapes. Boca Raton: Lewis, 2002. Peakall D. Animal Biomarkers as Pollution Indicators. London: Chapman & Hall; 1992.
Presidential/Congressional Commission on Risk Assessment and Risk Management. Framework. Washington DC: U.S. Government Printing Office, 1997. http://www.riskworld.com/Nreports/1996/risk_rpt/ Rr6me001.htm. Römbke J, Moltmann JF. Applied Ecotoxicology. Boca Raton, FL: Lewis Publishers; 1996. Stahl RG. Risk Management: Ecological Risk-Based Decision-Making. Pensacola FL: SETAC Press; 2001. Suter GW, II. Endpoints for regional ecological risk assessment. Environ Manag. 1990;14:9–23. Suter GW, II. Ecological Risk Assessment. Boca Raton, FL: Lewis Publishers; 1993. Travis CC, Morris JM. The emergence of ecological risk assessment. Risk Anal. 1992;12:167–9. National Research Council: Issues in Risk Assessment. Washington, DC: National Academy Press, 1993.
Susceptibility
Alavanja M, Aron J, Brown C, Chandler J. From biochemical epidemiology to cancer risk assessment. J Natl Cancer Inst. 1987;78:633–43. Armitage P, Doll R. Age distribution of cancer. Br J Cancer. 1954;8:1–12. Finkel AM. A quantitative estimate of the variations in human susceptibility to cancer and its implications for risk management. In: Olin S, Farland W, Park C, Rhomberg L, Scheuplein R, Starr T, Wilson J, eds. Low-Dose Extrapolation of Cancer Risks. Washington, DC: International Life Sciences Institute Press; 1995: 297–328. Fraumeni JF, Jr, ed. Persons at High Risk of Cancer: An Approach to Cancer Etiology and Control. New York: Academic Press; 1975. Goodlett CR, Peterson SD. Sex differences in vulnerability to developmental spatial learning deficits induced by limited binge alcohol exposure in neonatal rats. Neurobiol Learn Mem. 1995;64:265–75. Greenberg GN, Dement JM. Exposure assessment and gender differences. J Occup Med. 1994;36:908–12. Harris CC. Interindividual variation among humans in carcinogen metabolism, DNA adduct formation, and DNA repair. Carcinogenesis. 1989;10:1563–6. Nebert D. Possible clinical importance of genetic differences in drug metabolism. Br Med J. 1981;283:537–42.
22 Biomarkers
Michael D. McClean • Thomas F. Webster
Biological markers, or biomarkers, are indicators of events occurring in a biological system. While exposure refers to contact between a substance and the surface of the human body via inhalation, ingestion, or dermal contact, biomarkers provide information about the activity of a substance once it is absorbed. Whether the agent of interest is the original substance to which the individual was exposed or a metabolite, biological monitoring can provide useful information about exposure, early health effects, and susceptibility. Figure 22-1 presents a conceptual model for exposure-related disease. Biological monitoring is typically conducted by analyzing biological materials such as blood, urine, hair, breath, milk, and saliva, whereas other options such as lung tissue, liver tissue, adipose tissue, and bone are considerably more invasive and rarely available. The usefulness of different materials strongly depends on the compound of interest.

BIOMARKERS OF EXPOSURE
Biomarkers provide exposure measurements that are potentially more biologically relevant than personal exposure measurements. Personal exposure represents the total amount of a substance that is available for absorption, but only a portion of the total passes across the skin, gastrointestinal tract, and/or respiratory tract. Internal dose is the amount of a substance that has been absorbed and is, therefore, available to undergo metabolism, transport, storage, or elimination. Similarly, only a portion of the internal dose is eventually transported to the critical target site. Biologically effective dose represents the amount of a substance or metabolite that reaches the site of toxic action and could, therefore, result in an adverse effect. One of the key advantages associated with using biomarkers to assess exposure is that measurements of internal dose and biologically effective dose integrate personal exposures over multiple exposure routes (inhalation, ingestion, and dermal contact). Additionally, exposures often vary widely over time such that repeated personal measurements (e.g., air samples) would be necessary to characterize average long-term exposure; however, a single biological measurement can often provide information about average long-term exposure, while also incorporating individual-specific differences in metabolism or other biological processes that may also affect dose. A common example of a biomarker of internal dose is the measurement of alcohol in either exhaled breath or blood to determine the amount of alcohol an individual has consumed. Ethanol affects the central nervous system such that most individuals begin to show measurable signs of mental impairment at approximately 0.05% blood alcohol concentration, and motor function continues to deteriorate with increasing concentrations. 
Additionally, ethanol is volatile and transfers from blood to the alveolar air sacs such that ethanol is also detectable in exhaled breath in proportion to the concentration in blood. Accordingly, the measurement of ethanol in exhaled breath provides a useful and easily obtained measure of internal dose.
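The breath-to-blood proportionality described above can be sketched numerically. This is an illustration, not part of the text: the 2100:1 blood-to-breath partition ratio used here is the conventional forensic assumption, and true individual values vary.

```python
# Hedged sketch: estimating blood alcohol concentration (BAC) from a breath
# ethanol measurement. The 2100:1 blood:breath partition ratio is the
# conventional forensic assumption, not a value given in this chapter.

PARTITION_RATIO = 2100.0  # mL of alveolar breath holding as much ethanol as 1 mL of blood

def estimate_bac_percent(breath_ethanol_g_per_liter: float) -> float:
    """Convert grams of ethanol per liter of end-expiratory breath to an
    estimated BAC in g/dL (the familiar 'percent' scale)."""
    blood_g_per_liter = breath_ethanol_g_per_liter * PARTITION_RATIO
    return blood_g_per_liter / 10.0  # g/L of blood -> g/dL

# A breath reading near 0.000238 g/L maps to roughly 0.05 g/dL, the level
# at which the text notes measurable mental impairment typically begins.
print(round(estimate_bac_percent(0.000238), 3))
```

In practice, evidential breath testers build this assumed ratio into their reported units, which is why a single breath measurement can stand in for a blood draw as a measure of internal dose.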
A common example of a biomarker of biologically effective dose is the analysis of DNA adducts (addition products) in peripheral blood samples. When certain substances such as polycyclic aromatic hydrocarbons (PAHs) are absorbed, hepatic metabolism leads to the formation of highly reactive epoxides. One specific example is the transformation of benzo[a]pyrene, classified as a probable human carcinogen, to benzo[a]pyrene-diol-epoxide, which can then covalently bind to guanine in DNA. Following reaction with genetic material, DNA adducts can increase the risk of mutation and may thereby initiate the carcinogenic process. DNA adducts are typically measured in peripheral white blood cells as a surrogate measure for adduct burden in other (inaccessible) target tissues. DNA adducts are typically considered a biomarker of exposure since a metabolite of the substance to which exposure occurred is measured at the site of toxic action (i.e., DNA); however, animal studies have shown that carcinogenic potency correlates well with adduct burden, such that DNA adducts are often used as a surrogate for cancer risk and, therefore, more as a biomarker of effect. Exposure assessment plays a critical role in environmental and occupational epidemiology. Traditional methods for assessing exposure, such as questionnaires and measurements of environmental media (air, water, food, etc.), are prone to error, particularly for retrospective studies. Random error in assessing exposure reduces, on average, a study's ability to link exposure to disease and tends to bias results toward the null. Biomarkers, if properly utilized, can improve exposure assessment. Examples include studies of aflatoxin and liver cancer, phthalates and reproductive effects, and the neurotoxicity of lead, methylmercury, and PCBs.1–5 Biomarkers of exposure are not a panacea, however.
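The point that random, nondifferential exposure error biases results toward the null ("regression dilution") can be demonstrated with a small simulation. All numbers here are invented for illustration; with noise variance equal to the exposure variance, the estimated slope is attenuated by about half.

```python
# Hypothetical simulation (not from the text) of how random error in an
# exposure measure attenuates an estimated exposure-response slope.
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least-squares slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var = sum((a - mx) ** 2 for a in x)
    return cov / var

n = 20000
true_slope = 2.0
true_exposure = [random.gauss(0, 1) for _ in range(n)]
outcome = [true_slope * x + random.gauss(0, 1) for x in true_exposure]
# Exposure measured with random error of the same variance as the signal:
measured = [x + random.gauss(0, 1) for x in true_exposure]

slope_true = ols_slope(true_exposure, outcome)   # close to 2.0
slope_measured = ols_slope(measured, outcome)    # attenuated toward 1.0
# Expected attenuation factor = var(X) / (var(X) + var(error)) = 0.5 here,
# so the noisy exposure measure roughly halves the estimated effect.
```

A better biomarker of exposure, in this framing, is one that shrinks the error variance and so moves the estimated slope back toward its true value.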
The best measure of exposure depends on both the compound and outcome of interest.6 Concentrations in serum or other readily available biological materials may not always provide good measures of levels in target tissues that are difficult or impossible to sample, for example, bone or brain. In addition, the timing of exposure is of key importance both for developmental effects, where in utero exposure often matters most, and for cancer, where relevant exposures may have taken place decades before diagnosis. Failure to take such considerations into account can cause otherwise careful biomarker studies to provide biased results. Biomarkers of exposure have many uses besides etiologic research. Comparing internal doses with exposure estimates based on measurements in environmental media and questionnaires can help to identify important routes of exposure. Recent examples include studies of asphalt,7 phthalates,8 and polybrominated diphenyl ethers (PBDEs).9 When properly validated, biomarkers can be helpful in assessing risk of disease, for example, blood lead and serum cholesterol. Periodic cross-sectional studies of the population (biomonitoring) can supply information on exposure trends, telling us whether environmental policies are effective or providing warnings of potential problems. For instance, the dramatic decline in blood lead concentrations over the last several decades is a notable achievement, while the continued presence of elevated levels in certain segments of the population shows where effort is still urgently needed. Using banked breast milk samples, PBDEs (brominated fire retardants commonly used in consumer products) were found to have increased exponentially in people over the last several decades.10 This discovery catalyzed both new research and a phaseout of certain uses. Comparing levels of compounds across different segments of the population (for example, by age, sex, race/ethnicity, occupation, or geography) can provide information about equity and exposure of potentially vulnerable populations. Despite its limitations, the U.S. National Health and Nutrition Examination Survey (NHANES) is a notable example of biomonitoring.11

[Figure 22-1. Conceptual model for exposure-related disease: personal exposure → internal dose → biologically effective dose → early effect → altered structure or function → clinical disease, with genetic susceptibility influencing the transitions along the pathway.]

Copyright © 2008 by The McGraw-Hill Companies, Inc.

BIOMARKERS OF EFFECT
Biomarkers of exposure have had the most success in environmental and occupational epidemiology, but there is also considerable interest in other types of biomarkers. Reduced acetylcholinesterase activity is a classic biomarker of effect resulting from exposure to organophosphate pesticides. While DNA adducts are often considered markers of biologically effective dose, mutations in particular genes, a possible result of DNA adducts, can provide markers of early effects in the process leading to cancer. For example, a specific mutation in p53, an important tumor suppressor gene, has been found in liver cancer patients from areas with high aflatoxin exposure.1 Sperm counts have been extensively used as markers in studies of male reproductive toxicants. Measurement of anogenital distance is a recent innovation in this field.12 Imaging technology, particularly functional MRI scanning of the brain, holds promise for neurotoxicology.13

BIOMARKERS OF SUSCEPTIBILITY
People may be more susceptible to a given exposure for a number of reasons, such as genetics, age, or preexisting health conditions. Biomarkers of susceptibility primarily focus on genetics: variations in specific genes can modify an exposure-disease relationship, for example, by shifting the dose-response curve, leading to the concept of gene-environment interaction. Two types of genes have received the most attention: genes coding for enzymes involved in DNA repair and in biotransformation (metabolism) of xenobiotics.14 A classic example of the latter is the "slow acetylator" phenotype: individuals who possess a variant of the N-acetyltransferase gene (NAT2) have increased risk of bladder cancer following exposure to aromatic amines or smoking. Similarly, XRCC1 is an example of a DNA repair enzyme with known polymorphisms that alter the risk of lung cancer.15 In addition to genetic factors, preexisting health conditions can also increase susceptibility by altering certain exposure-disease relationships. For instance, the risk of liver cancer due to aflatoxin exposure is significantly increased among individuals who test positive for hepatitis B surface antigen.1
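The idea of a genotype modifying an exposure-disease relationship can be made concrete with stratified rate ratios. This is a hypothetical sketch; the incidence rates below are invented for illustration and do not come from the chapter.

```python
# Hypothetical illustration of gene-environment interaction: the rate ratio
# for an exposure differs across strata of a susceptibility genotype.
# All rates are invented (per 100,000 person-years).

def rate_ratio(exposed_rate: float, unexposed_rate: float) -> float:
    """Incidence rate in the exposed divided by the rate in the unexposed."""
    return exposed_rate / unexposed_rate

rates = {
    ("fast acetylator", "unexposed"): 10.0,
    ("fast acetylator", "exposed"): 20.0,   # rate ratio 2 in the less susceptible group
    ("slow acetylator", "unexposed"): 10.0,
    ("slow acetylator", "exposed"): 60.0,   # rate ratio 6 in the susceptible group
}

for genotype in ("fast acetylator", "slow acetylator"):
    rr = rate_ratio(rates[(genotype, "exposed")], rates[(genotype, "unexposed")])
    print(f"{genotype}: rate ratio = {rr:.1f}")
# Unequal stratum-specific rate ratios indicate effect modification
# (gene-environment interaction) on the multiplicative scale.
```

Under numbers like these, pooling the two genotypes would average away the strong effect in the susceptible subgroup, which is why susceptibility biomarkers are analyzed as stratifying variables rather than ignored.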
SUMMARY
A primary goal of biomarker research is to characterize the relationship between a biological measurement and the actual biological phenomenon of interest. However, this process is complicated by several factors, such as inter- and intra-individual variability and inter- and intralaboratory variability. It is difficult to characterize a normal range of values in the general population when there are so many factors that can potentially affect biological measurements. Most biomarkers are experimental and measured via analytical techniques that are specialized and expensive, further complicating the process of replicating results in multiple studies that evaluate different populations over time. Furthermore, biomarker research poses a number of important ethical issues. How should results be communicated to participants, particularly when the interpretation and implications for health are unclear? Could insurance agencies or employers potentially use genetic data to discriminate against susceptible individuals? Despite these challenges, biomarkers have already proven their value in environmental and occupational epidemiology. Valid and reliable biomarkers can become effective screening tools that facilitate the process of monitoring for exposure and disease.
REFERENCES
1. Groopman JD, Kensler TW. Role of metabolism and viruses in aflatoxin-induced liver cancer. Toxicol Appl Pharmacol. 2005; 206(2):131–7. 2. Duty SM, Silva MJ, Barr DB, Brock JW, Ryan L, Chen Z, et al. Phthalate exposure and human semen parameters. Epidemiology. 2003;14(3):269–77. 3. Needleman HL, Schell A, Bellinger D, Leviton A, Allred EN. The long-term effects of exposure to low doses of lead in childhood. An 11-year follow-up report. N Engl J Med. 1990;322:83–8. 4. Grandjean P, Budtz-Jorgensen E, White RF, Jorgensen PJ, Weihe P, Debes F, et al. Methylmercury exposure biomarkers as indicators of neurotoxicity in children aged 7 years. Am J Epidemiol. 1999;150(3): 301–5. 5. Vreugdenhil HJ, Lanting CI, Mulder PG, Boersma ER, WeisglasKuperus N. Effects of prenatal PCB and dioxin background exposure on cognitive and motor abilities in Dutch children at school age. J Pediatr. 2002;140(1):48–56. 6. Checkoway H, Pearce N, Kriebel D. Research Methods in Occupational Epidemiology. Oxford: Oxford University Press; 2004. 7. McClean MD, Rinehart RD, Ngo L, Eisen EA, Kelsey KT, Wiencke JK, et al. Urinary 1-hydroxypyrene and polycyclic aromatic hydrocarbon exposure among asphalt paving workers. Ann Occup Hyg. 2004;48(6):565–78. 8. Duty SM, Ackerman RM, Calafat AM, Hauser R. Personal Care Product Use Predicts Urinary Concentrations of Some Phthalate Monoesters. Environ Health Perspect. 2005 (in press).
9. Wu N, Webster T, Hermann T, Paepke O, Tickner J, Hale R, et al. Associations of PBDE levels in breast milk with food consumption and indoor dust concentrations. Organohalogen Compounds 2005;67:654–7. 10. Norén K, Meironyté D. Certain organochlorine and organobromine contaminants in Swedish human milk in perspective of past 20–30 years. Chemosphere. 2000;40(9–11):1111–23. 11. Centers for Disease Control (CDC), U.S. Department of Health and Human Services. Third National Report on Human Exposure to Environmental Chemicals. 2005. http://www.cdc.gov/exposurereport/. 12. Swan SH, Main KM, Liu F, Stewart SL, Kruse RL, Calafat AM, et al. Decrease in anogenital distance among male infants with prenatal phthalate exposure. Environ Health Perspect. 2005;113(8):1056–61. 13. Janulewicz P, Palumbo C, White R. Role of neuroimaging. In: Bellinger F, ed. Human Developmental Neurotoxicology. Forthcoming. 14. Kelada SN, Eaton DL, Wang SS, Rothman NR, Khoury MJ. The role of genetic polymorphisms in environmental health. Environ Health Perspect. 2003;111(8):1055–64. 15. Ratnasinghe D, Yao SX, Tangrea JA, Qiao YL, Andersen MR, Barrett MJ, et al. Polymorphisms of the DNA repair gene XRCC1 and lung cancer risk. Cancer Epidemiol Biomarkers Prev. 2001;10(2):119–23.
23 Asbestos and Other Fibers
Kaye H. Kilburn
ASBESTOS
Asbestos-Associated Diseases

Prevention of asbestosis and reduction in lung cancer mortality in asbestos-exposed subjects have occurred in the last generation in the United States. Massive medical evidence compelled the primary industry to quit using asbestos by making its use excessively expensive as risks became uninsurable. This led to successive waves of court awards for liability and punitive damages, and settlements negotiated for user workers, co-contaminated workers, and bystanders. Jury awards for mesotheliomas were frequently $1 million, and for asbestosis ranged from thousands to hundreds of thousands of dollars: an effective way to stem an epidemic. Unfortunately, widespread substitution of human-made fibers in construction may predestine a repeat performance.
Clinical Recognition of Asbestosis

Asbestosis is a fibrotic disease of the lung caused by asbestos exposure after a suitable latent period. Cellular infiltrates and fibrosis surround small bronchioles and limit forced expiratory flow, impairing pulmonary function. Asbestosis is diagnosed from chest radiographs by diffuse, irregular opacities in the lung fields or by circumscribed or diffuse pleural thickening, which are defined by international criteria.1 Asbestos exposure produces no acute symptoms. Pathological fibrosis of lung or pleura is well advanced by the time expiratory airway obstruction permits “early diagnosis”; radiographic abnormality follows, and only then do workers have breathlessness on exertion or cough productive of phlegm. Usually asbestosis has incubated for two decades or more from the first exposure; this is called the “latent” period. Asbestos and cigarette smoking synergize to impair function and produce fibrosis and carcinoma.
History

Although the first use of asbestos by humans is lost in antiquity, it is mentioned by Pliny, who referred to asbestos as linum vivum, “living linen”; Roman slaves who worked the asbestos mines grew breathless and died prematurely. Asbestos has properties of incombustibility, durability, and resistance to friction, which have made it useful for insulation and heat protection in modern industry. H. Montague Murray,2 a London physician, recognized a new disease in the badly scarred lungs of an asbestos worker, presumably from a textile factory, who died after a brief illness characterized by extreme breathlessness. Murray connected the workplace exposure to the scarring in testimony before the British Government Commission on Occupational Disability in 1907 and, with the cause now recognized, hopefully predicted few future cases. His singular finding was ignored until 1924, when Cooke3 described pulmonary fibrosis in a woman who had worked for 20 years in an asbestos textile factory. The illness was widely
regarded as a manifestation of tuberculosis, the plague of those times, and thus was largely ignored. Cooke4 also introduced the name “pulmonary asbestosis,” classifying it as a pneumoconiosis, one of the dust diseases (as named by Zenker 60 years earlier). He suggested optimistically that recognition would lead to prevention. After further scattered reports of asbestosis in individual workers, an epidemiologic investigation of the workers in British asbestos textile factories was conducted by Merewether and Price5 in 1930. They systematically associated factory dust containing asbestos with radiographic findings of asbestosis in card room workers, as reported by Pancoast et al.6 in 1918. Gloyne’s autopsy studies of the workers’ lungs7 showed lesions of membranous and respiratory bronchioles. Among later supporting studies in the 1930s, the Metropolitan Life Insurance Company study by Lanza et al.8 reported that two-thirds of the x-ray films of 126 “randomly selected” persons with three or more years of employment showed asbestosis. In 1938, Dreessen et al.9 studied 511 employees of asbestos textile factories in North Carolina and found a low prevalence of abnormalities in the x-ray films, largely among newly hired hands with short exposures. However, when several dozen workers who had been discharged from these factories were traced, many of their x-ray films showed characteristic asbestosis.10–12 Dreessen’s study and the associated reports made it clear that asbestosis produced abnormalities in the chest x-ray and shortness of breath. In the 1930s, additional reports of insulators, boilermakers, and men in other trades who manufactured or used asbestos showed that they had abnormal x-ray films, shortness of breath, and in some cases, rales in the chest, clubbing of the digits, and cyanosis. However, World War II intervened before the prevalence was measured or exposure controlled.
Thus, knowledge of the pervasiveness of asbestosis waited until the 1960s and 1970s, when studies in the shipbuilding and construction trades showed chest x-rays were abnormal in many exposed workers. Large studies of asbestos miners and millers13 showed that airway obstruction and reduction in vital capacity and diffusing capacity occurred before the chest x-ray abnormalities.
Lung Cancer Lung cancer in individuals exposed to asbestos was reported in the 1930s, but the causal connection developed slowly. Merewether14 reported in 1947 that 13.5% of the asbestos textile workers studied in 1931 had died of lung cancer within 16 years. Hueper15 by 1942 concluded in his textbook that asbestos was a more important cause of lung cancer than arsenic or radium. Richard Doll,16 in a well-designed 1955 study of a textile factory cohort (a defined population), noted the long latency of lung cancer and its importance as a cause of death, and found that occupational exposure to asbestos increased lung cancer deaths 10-fold above the expected rate. Mancuso and Coulter17 confirmed Doll's findings in the United States, and Selikoff18 first reported a large excess of lung cancer among major users of
Environmental Health
asbestos—insulators. Findings indicating that users were in danger vastly increased the numbers of persons at risk for lung cancer and made control urgent. In their 20-year prospective study of mortality rates among almost 18,000 insulators begun in 1967, Selikoff and Seidman,19 by 1979, found excessive death rates not only for lung cancer, mesothelioma, and asbestosis but also for cancers of the gastrointestinal tract, larynx, oropharynx, and kidney (Tables 23-1 and 23-2), and synergistic interactions between cigarette smoking and asbestos in these cancers. Age-standardized rates per 100,000 person-years are as follows: individuals who neither worked with asbestos nor smoked cigarettes had a calculated death rate of 11.3; asbestos workers who did not smoke had a rate of 58.4. Smokers in general (not asbestos workers) showed a rate of 122.6, whereas those who had both types of exposure, cigarettes and asbestos, had a rate of 601.6.
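The synergy in these rates can be checked arithmetically: relative to the baseline rate of 11.3, asbestos alone multiplies lung cancer mortality about 5-fold and smoking alone about 11-fold, while the combined rate of 601.6 (about a 53-fold excess) approximates the product of the two individual relative risks rather than their sum. A minimal sketch of that comparison (the rates are from the text; the multiplicative-versus-additive framing is added here for illustration):

```python
# Age-standardized lung cancer death rates per 100,000 person-years
# (Selikoff and Seidman data quoted in the text).
baseline = 11.3       # neither asbestos work nor cigarettes
asbestos_only = 58.4  # asbestos workers who never smoked
smoking_only = 122.6  # smokers not exposed to asbestos
both = 601.6          # asbestos workers who smoked

rr_asbestos = asbestos_only / baseline  # ~5.2
rr_smoking = smoking_only / baseline    # ~10.8
rr_both = both / baseline               # ~53.2

# A multiplicative (synergistic) model predicts roughly the product
# of the individual relative risks; an additive model predicts far less.
predicted_multiplicative = rr_asbestos * rr_smoking   # ~56.1
predicted_additive = rr_asbestos + rr_smoking - 1     # ~15.0

print(f"observed combined relative risk: {rr_both:.1f}")
print(f"multiplicative prediction: {predicted_multiplicative:.1f}")
print(f"additive prediction: {predicted_additive:.1f}")
```

The observed combined relative risk sits close to the multiplicative prediction, which is the arithmetic content of the "synergistic interaction" the text describes.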
Mesothelioma, a Twentieth-Century Tumor Klemperer and Rabin20 in 1931 described this rare tumor, characteristically spread on the pleural surface, and reasoned that the responsible carcinogen must penetrate to these inaccessible pleural and peritoneal surfaces. Reports of mesotheliomas in subjects exposed to asbestos were rare in the 1940s and 1950s until Wagner et al.,21 in 1960, reported 47 mesotheliomas in people who had worked in or lived near the crocidolite works in South Africa 15 years earlier. Such a clear association between a rare tumor and a causal agent was unprecedented, and it was quickly corroborated by other studies. Although Wagner's study lacked control subjects, his germinal observations were confirmed by population-based data. Consequently, the diagnosis of a mesothelioma stimulates a search for asbestos exposure that is seldom unfulfilled. Latent intervals of 30–40 years are characteristic, so many recent and current patients were exposed in U.S. shipyards that built and repaired a two-ocean navy and shipping fleet during World War II.
Asbestos Minerals
Fibers and Fibrils "Asbestos" is a general name for naturally occurring fibrous minerals that include serpentine and amphibole fibers but excludes fibrous forms of other minerals such as wollastonite, brucite, gypsum, and calcite. Chrysotile, the only serpentine asbestos, occurs in "cobs" about the size of the palm of a hand, in pockets often within platelike and nonfibrous silica deposits. The fibers can be seen with an optical microscope; the fibrils that compose them are of micrometer size, and, therefore, single fibrils, isolated or in tissue, are ordinarily visible only with an electron microscope. Fortunately, there are crude associations among "dustiness" (the gravimetric measurement of the total airborne concentration), the visible fibers recognized with the optical microscope (particularly with the help of phase contrast or polarized light), and the concentrations of fibrils measured with the electron microscope. For industrial hygiene, the rough relationships between total dust measured gravimetrically from air samples and fibers visible with the light microscope have produced reasonable dose-response relationships in miners and millers of asbestos, asbestos textile workers, and workers producing asbestos cement pipe. Estimates of maximal human exposure to fine fibers range widely, from several hundred fibrils per cubic meter of air to several hundred million per cubic meter.
Sources The commercially important asbestos fibers are chrysotile, amosite, and crocidolite. Chrysotile, or white asbestos, is mined mainly in Canada's Quebec province and in the Ural Mountains of the former Soviet Union; U.S. mines in Arizona and California produced small quantities. The three amphiboles are crocidolite, or blue asbestos, which is highly associated with mesothelioma; amosite (named for the Asbestos Mines of South Africa), called brown asbestos because of its iron content; and anthophyllite, which is found in Finland. Actinolite, or fibrous tremolite, contaminates other minerals, including crocidolite and talc from many sources, particularly the Gouverneur district of New York State.
Mining and milling expose far fewer human subjects to asbestos than do thermal insulation and construction materials: surfacing materials, preformed thermal insulating products, textiles, cementitious (concrete-like) products, paper products, roofing felts, asbestos-containing compounds, flooring tile and sheet goods, wall coverings, and paints and coatings. Dispersal of the fibrils into the air of "massive containers"—ships and industrial facilities such as aluminum refineries, copper smelters, glass and fiberglass factories, paper mills, and powerhouses—exposes all workers, well beyond those who handle asbestos products.

TABLE 23-1. LESS COMMON MALIGNANT NEOPLASMS: DEATHS AMONG 17,800 ASBESTOS INSULATION WORKERS IN THE UNITED STATES AND CANADA, JANUARY 1, 1967–DECEMBER 31, 1986

Site of Cancer                        Expected     Observed        Ratio o/e
Causing Death                         Deaths(a)   DC(b)  BE(c)    DC       BE(d)
Increased incidence at these sites:
  Larynx                                10.57      17     18      1.61     1.70*
  Oropharynx                            22.02      38     48      1.73**   2.18***
  Kidney                                18.87      32     37      1.70**   1.96***
  Pancreas                              39.52      92     54      2.33***  1.37*
  Esophagus                             17.80      29     30      1.63*    1.68*
  Stomach                               29.36      34     38      1.16     1.29
  Colon/rectum                          88.49     125    121      1.41***  1.37**
  Gall bladder/bile ducts                5.37      13     14      2.42**   2.61**
No increased incidence at these sites:
  Urinary bladder                       20.77      17     22      0.82     1.06
  Prostate                              52.56      59     61      1.12     1.16
  Liver                                 11.06      31     12      2.80***  1.08
  Brain tumors (all)                    26.35      40     33      1.52*    1.25
  Cancer of brain                       22.55      29     27      1.29     1.20
  Leukemia                              28.74      32     33      1.11     1.15
  Lymphoma                              43.24      33     39      0.76     0.90

Source: From Selikoff IJ, Seidman H. Asbestos-associated deaths among insulation workers in the United States and Canada, 1967–1987. Ann NY Acad Sci. 1991;643:1–14.
(a) Expected deaths are based upon white male, age-specific death rates of the U.S. National Center for Health Statistics, 1967–1986.
(b) DC: Number of deaths as recorded from death certificate information only.
(c) BE: Best evidence. Number of deaths categorized after review of best available information (autopsy, surgical, clinical). Where no such data were available, the death certificate diagnosis was used.
(d) Calculated for information only, since it utilizes "best evidence" vs. "death certificate" diagnoses, which are not strictly comparable because of different ascertainment and verification.
Probability range: * P < 0.05; ** P < 0.01; *** P < 0.001.

TABLE 23-2. PULMONARY PARENCHYMAL ASBESTOSIS OF PROFUSION 1/0 OR MORE (INTERNATIONAL LABOR ORGANIZATION CRITERIA) IN 419 MIDWESTERN INSULATORS, BY HISTORY OF CIGARETTE SMOKING

Smoking Category    Mean Age    Number with Asbestosis/    Percent    Risk Ratio
                    (Years)     Number in Population
Nonsmokers            40              7/97                    7.2
Ex-smokers            44             29/131                  22.1         3.1
Current smokers       48.2           37/191                  19.4         2.7
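The risk ratios in Table 23-2 follow directly from its prevalence columns: each smoking group's prevalence of asbestosis is divided by the nonsmokers' prevalence. A quick arithmetic check (all numbers are taken from the table):

```python
# Prevalence of parenchymal asbestosis (ILO profusion 1/0 or more)
# among 419 midwestern insulators, by smoking category (Table 23-2).
groups = {
    "nonsmokers": (7, 97),
    "ex-smokers": (29, 131),
    "current smokers": (37, 191),
}

# Prevalence = cases / number examined in each category.
prevalence = {name: cases / n for name, (cases, n) in groups.items()}
baseline = prevalence["nonsmokers"]  # ~7.2%, the referent group

for name, p in prevalence.items():
    print(f"{name}: {p:.1%} (risk ratio {p / baseline:.1f})")
```

The computed ratios reproduce the table's values of about 3.1 for ex-smokers and 2.7 for current smokers relative to never-smokers.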
Asbestos and Other Fibers
Use of Asbestos in Industry and Construction Asbestos serves in heat insulation, friction-resistant products, and construction.22 As heat insulation, asbestos cloth is used in blankets, gloves, suits, and boiler packing and is combined with magnesia in pipe insulation. Asbestos combined with portland cement, or "blue mud," was widely used to free-form insulation around pipes and boilers. Friction products included brake shoes and pads, clutch facings, and other woven products that must resist both friction and the heat it generates. The major tragedy of asbestos exposure since World War II was in workers in the construction trades, where, for reasons never stated but including availability, cheapness, and binding properties, asbestos has been used in drywall, in sprayed ceilings, paint, floor tile, and ceiling tile, and as filler in many other products. Analogous to fine sand, the inert material in paint, asbestos was added to products whether or not it conferred useful properties.
Peak Use of Asbestos Asbestos use peaked in the United States in the early 1970s. Yearly consumption rose steadily from 1890 to 1950, was virtually level at more than 7000 metric tons per year from 1950 through 1969, and fell precipitously in the 1980s. The profile is similar for other developed nations, where asbestos was widely used in construction.
Patterns of Use Asbestos litigation and regulation since the mid-1970s have excluded asbestos from many consumer products, from building materials, and lastly from brakes and friction goods. Pioneering synthetic brake materials, the Volvo Corporation of Sweden produced pads and shoes that, although twice as expensive, lasted three or four times as long as asbestos ones. Progress has been uneven. Asbestos insulating products continued to be installed in New Jersey schools (without warning labels) well into the early 1980s, and in 1986 an inventory of a U.S. Navy warehouse for ship fittings disclosed 130 products containing asbestos. Many of these were gaskets and other relatively low-exposure items, but others included thermal insulation and blankets. Because asbestos use accompanied the intense industrialization of the twentieth century, a key concern is that developing countries will probably continue letting economic determinants take precedence over health. Even in Israel, asbestos pipe containing crocidolite was manufactured through the 1970s, and asbestos remains in place in sugar mills and oil refineries in Mexico, Brazil, and China. In February 2005, WR Grace & Co. and seven executives were indicted on federal charges that they knowingly put workers and the public in danger through exposure to vermiculite ore contaminated with tremolite in Libby, Montana.24 Earlier, the U.S. Justice Department alleged that Grace fraudulently transferred money before filing a Chapter 11 bankruptcy.25 Asbestos firms have paid injured workers $20 billion and have repeatedly sought "relief from asbestos claims" by appeals to the Supreme Court to halt trials26 and via new legislation.27 Costs of present and future claims are estimated at $200 billion, and 10 major asbestos product manufacturers filed Chapter 11 bankruptcies in 2000–2001.26
Removal of Asbestos Fragmentation and degeneration due to heat and vibration increase the liberation of fibrils from asbestos during renovation, removal, or repair.28 The highest doses for workers may be generated during the removal of asbestos without proper procedures, which include wetting the material down and restricting the area to properly suited, well-trained personnel using air-supply respirators.22 If the asbestos is placed in plastic bags and buried, the hazard is minimized. These safeguards have been neglected in many asbestos-removal efforts, where levels of more than 100 fibers per milliliter have been measured.
Biological Effects of Asbestos
Molecular Effects Asbestos fibrils in vitro hemolyze red blood cells and generate reactive oxygen- and nitrogen-derived species that damage DNA.29 Chrysotile also mediates the uptake of exogenous DNA into monkey cells in such a way that the genes on the DNA are expressed.24 In several cultured human cell lines, chrysotile and amosite induce changes including DNA breakage.29–33 Asbestos and synthetic fibers induced rat alveolar macrophages to form and release tumor necrosis factor-α.34 Amphiboles induce cytotoxic effects in cultured macrophages, causing hyperplasia and squamous cell metaplasia.30 Chrysotile was more toxic than amosite in normal and transformed epithelial cell lines and in plasminogen activation.33 Crocidolite fibers induced loss of wild-type alleles, decreased apoptosis, and accelerated tumor growth, invasion, and lymphoid dissemination.34 Amosite fibers disrupt mitochondria and induce apoptosis.35
Cellular Effects
Heppleston36,37 and Allison38 two decades ago showed that macrophages that had phagocytosed asbestos fibrils generated inflammatory signals. Macrophages caused fibrosis in animal models39,40 and in the human lung, where asbestos caused recruitment and proliferation.41 In contrast, quartz was disturbingly lethal for cells. Observations of cells in vitro and in permeable chambers implanted in the peritoneum of rats showed that asbestos causes macrophages to produce peptides that stimulate fibroblasts (fibronectin and others) to replicate and produce collagen.42–44
Target Organ Processing of asbestos in the lung45,46 evidently begins in the airways, particularly the small airways on which fibrils impinge. Short-term clearance depends on fiber size and type; chrysotile, for example, clears from guinea pig lungs faster than amosite.47 The probable scenario in the airways is that fibrils pass between the epithelial cells, cross the basement membrane, and lodge in the connective tissue, attracting macrophages. Macrophages on the airway surfaces may phagocytose some fibrils, but many fibrils are simply carried away on the mucociliary escalator. Others, apparently a small minority, are coated with iron-rich protein and become asbestos (ferruginous) bodies (Fig. 23-1).
Airway walls thicken beneath the epithelium, and cells are attracted to the alveolar side of membranous small airways. Next is bridging, via the lymphatic vessels, between the peribronchiolar scars, linking them like a lattice. The previous assumption that this linked-up network "shrank" the lung to reduce volume is no longer tenable; volumes lost to shrunken zones are compensated for by areas of emphysema.48 Interstitial fibrosis is seen only with advanced asbestosis, particularly in subjects who have smoked cigarettes.

Figure 23-1. Asbestos (ferruginous) bodies in lung tissue consist of an asbestos core with an iron protein coat that makes them appear tan or brown (×600).

Transport of Fibrils Fibrils are transported to other sites via regional lymphatic vessels and into the pleural space. Hillerdal49 suggested that the fibrils, absorbed in small airways and alveoli, move via the lymphatic vessels or within cells to the pleural surface, cross the pleural "space," and impinge on the parietal pleura, where they are retained by macrophages. Here they send signals to fibroblasts and to mesothelial cells to proliferate and produce collagen.50 This produces the characteristic hyaline plaques of the parietal pleura. When pleural effusion intercedes, the pleurae may fuse, producing dense adhesions. In some instances, fibrosis invades the lung from the pleural surfaces via the perivenous lymphatics. Retrograde flow may occur because symphysis obliterates the pleural space so that it is no longer accessible as a sump for the fibrils, which move into the peripheral lung and remain in the perivenous lymphatics. Whatever the mechanism, such fibrous strands, as seen on cut surfaces of the lung or on high-resolution computed tomograms, are most dense at the pleura and attenuate progressively toward the hilum.

Immune Responses Association of rheumatoid factor with asbestosis51,52 has posed unresolved questions: first, whether subjects who develop rheumatoid factor are more susceptible to the clinical disease after asbestos exposure; second, whether immunoglobulin synthesis is stimulated by asbestos; and third, whether such elevations enhance the development of asbestosis. Alterations in populations of T lymphocytes have also been associated with asbestos exposure and with asbestosis,53,54 posing questions similar to those for the role of immune globulins. Antinuclear antibodies were more frequent, and of higher titer, in tremolite-exposed residents of Libby, Montana than in controls.55

Summary Macrophages export peptides that stimulate fibroblasts to proliferate and produce collagen in cell systems and in diffusion chambers implanted in animals.37,38 Fibronectin and at least one other fibroblast-stimulating factor can be stimulated by asbestos in cells.54–57 Implantation of asbestos and human-made fibers in the pleural space of experimental animals, and refinements of this technique with milling and sizing of the fibrils, led Stanton and Wrench58 to propose that the physical properties—the diameters and lengths of the fibers or fibrils—were responsible for mesothelioma. Intracellular asbestos fibrils interfere with chromosome aggregation in mitosis, although whether this interference is linked to neoplasia is unclear.24 It appears that physical, surface, and chemical properties of the fibrils may all be important in cell proliferation and in forming tumors.

Human Exposure
Workers A contained air space into which asbestos has been dispersed, such as a textile factory, ship, power station, smelter, or refinery where there is heat conservation, concentrates the dose. Open-air spraying of asbestos insulation on the structural steel of high-rise buildings in New York City exposed sprayers heavily, and asbestos was detected in ambient air as far away as Cape May, New Jersey. The sprayers themselves, within the skeleton of the building, were at greatest risk. During mining and milling, moisture and nonasbestos rock bind fibers and impair the discharge of fibrils into the air. In comparison, textile operations, in which the fibers are carded and spun, generate dry fibrils into the workplace air, as does the spraying of asbestos insulation. Obviously, partially bound asbestos-containing materials are less hazardous than those that readily release fibers or generate fibrils into the air. Cleaning brake drums with compressed air disturbs many fine fibrils (Fig. 23-2), as does the removal of insulation that has been cooked on boilers or steam lines. If prevalence of asbestosis reflects cumulative exposure, insulators, sheet metal workers, boilermakers, and pipefitters are at high risk, whereas electricians, carpenters, laborers, and mechanics have had less exposure and less disease after 15–25 years. Ships with perforated plates for decks maintain fibrils in the air space, much as asbestos textile factories do.59 Thus, in taking a patient's history, determining the patient's involvement in asbestos heat-conservation or heat-protection work is essential. For example, asbestosis has been diagnosed in cafeteria and office workers employed at asbestos pipe plants where they had shared a building with production workers for 15–20 years.
Secondary Human Exposure Family Members Families of amosite factory workers60 in Paterson, New Jersey, showed effects from exposure to asbestos brought home by workers on their persons and clothes. About 48% of wives, 21% of daughters, and 42% of sons showed parenchymal or pleural evidence of asbestosis. Shifting to a less intense work exposure, that of shipyards, another family study showed that 11.3% of wives, 2.1% of daughters, and 7.6% of sons had signs of asbestosis.61 In the past, consumer electrical goods contained asbestos, but to date there has been no evidence of disease from exposure at the levels of fibrils released from electric irons, electric hair dryers, or even asbestos-containing artificial logs used in fireplaces. Nevertheless, discontinuance of such exposure is the responsibility of the Consumer Product Safety Commission.
Schools and Other Buildings Passive bystander asbestos exposure—as occurs to people in buildings with asbestos as heat insulation on steam pipes, boilers, and ducts leading to the rooms and sprayed on ceilings and walls or as construction materials—has been the subject of contentious discussion, rule making, and litigation for the past two decades.62,63 Surveys by the U.S. Environmental Protection Agency (EPA) in 1985 estimated that 31,000 schools and 733,000 public and commercial
buildings contained friable, easily crumbled asbestos-containing material.22 After mesotheliomas were diagnosed in 3 Los Angeles school custodians, 205 school maintenance workers and custodians with 10 years on the job were examined: 16% had pleural and 13% had parenchymal signs of asbestosis.64 New Jersey custodians (S Levine, personal communication) had similar prevalences. In neither of these studies were custodians with prior exposure to asbestos excluded, nor was it possible to ascertain which ones were working on the maintenance of boilers and heat- and power-generating facilities. In Boston schools,65 52 custodians showed signs of asbestosis related to workplace exposure. Teachers and students sharing these air spaces may show signs as well, but neither group has been studied. Many school boards have had asbestos removed from the schools; some have issued bonds for this purpose, and some have sued the suppliers of asbestos-containing products to recover the costs. In response to pressure from consumer groups and legislatures, the EPA has recommended removal of such products when air levels of asbestos are 0.1 to 0.01 fiber/mL.22 Our society has chosen to make changes based on the known harmfulness of asbestos, without assessing morbidity and mortality rates from each particular exposure.61,62

Asbestosis
Diagnosis The diagnosis of asbestosis requires, first, a history of exposure, usually occupational or as a bystander in a trade in which asbestos has been used; second, a suitable latent period since the start of exposure; and third, typical pulmonary or pleural abnormalities on the chest x-ray. The minimal latent period, for the exposures of the past 30 years in developed countries, is 10 years; prevalences of asbestosis climb at 20, 30, and 40 years. Among shipbuilding and construction trades workers who were virtually continuously exposed for 25–35 years, the prevalence of asbestosis, including pleural disease, is 25–35%. Full-size posteroanterior (PA) chest radiographs show irregular opacities in the lower half of the lung fields near the lateral pleural surfaces. Pleural signs are circumscribed or diffuse areas of pleural thickening, the so-called hyaline plaques of the parietal pleura, which are seen easily when located laterally or on the diaphragm but may also be located posteriorly or anteriorly and seen face on (en face) or in profile. Descriptions of the patterns of changes in chest radiographs as a result of asbestosis have been progressively enhanced and detailed by the International Labor Organization (ILO) working committees since 1919. The 1980 revision1 included a set of standard radiographs with the major ILO categories portrayed (Fig. 23-3, A to E). Pleural changes are described by their location, thickness, and extent (Fig. 23-3F). Use of the ILO classification scheme has improved communication among investigators in various countries and has become the medical-legal criterion for recognizing asbestosis.

Recent studies of several thousand exposed workers showed that the functional implications of pleural and pulmonary signs are similar: both impair expiratory flow and produce air trapping.66–68 Furthermore, despite radiographic distinctions between circumscribed plaquelike and diffuse pleural thickening, the only physiological difference was a greater impairment when diffuse thickening surrounded the base of the lung. Spiral CT scans have enhanced recognition and help map pulmonary and pleural asbestosis, but it is not clear that they are more sensitive than the full-size PA x-ray.

Pathology
Figure 23-2. A workman removing insulation containing asbestos shows a cavalier lack of caution. He should be protected with an air-supply respirator venting into his impervious clothing and gloves. He must gather all material into double-thickness, strong plastic bags for safe disposal.
British pathologist Roodhouse Gloyne7 described the classic cellular aggregates and cell proliferation around the small airways, the terminal and respiratory bronchioles, a diagnostic gold standard. The primacy of this lesion was obscured in later descriptions of fibrosis throughout the lung, with dense aggregates of macrophages and asbestos bodies in the surviving alveoli, which led to characterization of asbestosis as interstitial fibrosis. However, newer human pathological descriptions and animal experiments confirm the membranous and respiratory bronchioles as the focus of fibrosis (Fig. 23-4). Subsequent bridging extends between the bronchioles, creating a latticework; additional interstitial fibrosis may develop as the process advances.48 Another distinctive lesion involves the perivenous lymphatics, visualized on extended-scale computer-assisted tomograms of subjects showing increased markings in the lung bases.69,70 Fibrosis is well visualized peripherally and attenuates toward the hilum, opposite to ordinary vascular and bronchial markings, which attenuate toward the pleural surfaces. Accentuated secondary lobular septa occur at about 1-cm intervals along the lateral margins of the lung, where they are recognized as "laddering" on the chest radiograph. A small percentage of asbestos-exposed subjects have pleural effusions; healing of these effusions may obliterate the costophrenic angles and produce diffuse pleural scarring.71,72 Undoubtedly, fibrils migrate to the pleura from their locus of deposition in small airways or alveoli.49 Whether they are translocated as free fibrils or within macrophages after phagocytosis is unknown. In either case, they exit the lung to the pleural space and move with the lymph flow to the parietal pleural lymphatic vessels.49 Here they apparently stimulate macrophages, or stimulate the retention of macrophages in the outer layers of the pleura, and stimulate fibroblastic proliferation.
Thus, circumscribed thickening (plaques) is found in the lower two-thirds of the lateral dorsal and ventral parietal pleura and
on the dome of the diaphragm. Plaques are disks of dense hyalinized collagenous connective tissue up to several millimeters thick.

Figure 23-3. The International Labor Organization (ILO) classification for pneumoconiosis has provided criteria for asbestosis on chest x-ray films since 1959, using a scheme from 1916. Classification is based on a standard 14 × 17-inch posteroanterior radiograph of a technical quality that distinguishes details in the lungs. In 1980, copies of radiographs were supplied for normal (A, 0/0) and for the three major categories of profusion of opacities for each size (s, t, and u): B, 1/1 = slight opacities, notable in the outer lung regions; C, 2/2 = moderate opacities, partly obscuring the pulmonary vessels; and D, 3/3 = opacities so profuse as to obscure the pulmonary vessels. E shows the standard films for opacities 3–10 mm in diameter (u/u). F shows a circumscribed plaque (UL), diffuse pleural plaques (UR), a calcified diaphragmatic plaque (LR), and a calcified wall (LL). Technical quality: With modern x-ray equipment, dedicated technicians can take nearly ideal, maximally inflated chest radiographs in all instances except morbid obesity, severe infirmity, or distortion of the chest cage or internal organs. The common correctable error is underinflation, which is recognized when the right side of the diaphragm is above the ninth intercostal space. Such films must be repeated after the subject is instructed in holding a deep breath. Films of high quality can be ensured if a qualified reader has suboptimal films repeated before the subject leaves the x-ray unit. The 12-point scale: The profusion of opacities was classified into one of four major categories by comparison with the standard radiographs, and a number, 0 to 3, was written to the left of the slash. If, during this rating, the major category above or below was seriously considered as an alternative, this was recorded on the right side of the slash; thus, "2/1" represents a profusion of major category 2 but with category 1 having been seriously considered. Profusion without serious doubt, in the middle of the major category, was recorded as 2/2. If the category above was seriously considered, profusion was recorded as 2/3.
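The ILO profusion notation has a natural machine representation as an ordered 12-point scale. A minimal sketch (the scale ordering and the "seriously considered alternative" reading rule are from the text; the helper names are invented for illustration):

```python
# The ILO 12-point profusion scale, from least to most profuse.
# The digit left of the slash is the major category assigned; the value
# right of the slash records an alternative that was seriously considered.
SCALE = ["0/-", "0/0", "0/1",
         "1/0", "1/1", "1/2",
         "2/1", "2/2", "2/3",
         "3/2", "3/3", "3/+"]

def profusion_rank(reading: str) -> int:
    """Position of a reading on the 12-point scale (0 = least profuse)."""
    return SCALE.index(reading)

def major_category(reading: str) -> int:
    """The major category actually assigned (digit left of the slash)."""
    return int(reading.split("/")[0])

# "2/1" means major category 2 with category 1 seriously considered;
# it ranks below an unhesitating "2/2", which ranks below "2/3".
assert major_category("2/1") == 2
assert profusion_rank("2/1") < profusion_rank("2/2") < profusion_rank("2/3")
```

Such an ordering is what allows epidemiologic tables such as Table 23-2 to define asbestosis as "profusion 1/0 or more."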
Clinical Features The principal symptom is insidious shortness of breath on exertion, which gradually worsens and may precede abnormalities on the radiograph. Cough with phlegm production is common and, when present for three months in each of two successive years, defines chronic bronchitis. Bronchitis increases in prevalence as the duration of asbestos exposure exceeds 20 years, even in workers who have never smoked. Physical examination of the chest reveals decreased breath sounds. Wheezing on forced expiration increases in frequency as the lesions on x-ray films become more profuse. Fine crepitant rales may be heard after the radiographic changes are moderately advanced (ILO category 2/2 and greater) but are rare earlier. Peripheral cyanosis and finger clubbing, typical of advanced asbestosis, are uncommon and should arouse suspicion of other causes. Asbestos skin "warts" are rare today but were common in insulators who handled asbestos daily.
Physiological Impairment The key pathological lesions of asbestos in the lungs are narrow and constricted membranous and respiratory bronchioles. These lesions, in turn, reduce mid and terminal flow rates,73,74 that is,
obstruct expiratory air flow, causing the earliest physiological finding in asbestosis.13,66 Over 1700 workers who had never smoked cigarettes were studied to define the effects of asbestos alone.13,66 Their small airways were obstructed before the irregular opacities of asbestosis appeared on the posteroanterior chest radiograph (Fig. 23-5). Such airflow limitation produced air trapping, noted as an increased ratio of residual volume (RV) to total lung capacity (TLC), and the increased residual volume reduced the vital capacity. This reduced vital capacity led to the concept that asbestosis is a restrictive lung disease similar to idiopathic pulmonary fibrosis. However, reduced vital capacity is not a reliable measure of restrictive disease, which is defined solely by loss of total lung volume in the absence of obstruction.75 These effects are exaggerated by cigarette smoking (Fig. 23-6). Gas dilution using helium or nitrogen ignores the volume of air that is not connected, that is, trapped air, and so does not measure all of the total lung capacity, just as happens in patients with emphysema. Thus radiographic75 or body plethysmographic methods must be used for accurate measurement of total lung capacity.76 Radiographs must be obtained when the lungs are fully inflated, while in body plethysmography the expiratory reserve volume must be measured carefully. Total lung capacity is slightly increased by the effects of cigarette smoking; 85% of workers exposed to asbestos also smoked cigarettes for many years.
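The argument about vital capacity can be made concrete with the standard relationship VC = TLC − RV: if air trapping raises residual volume while total lung capacity is unchanged, vital capacity falls even though no true restriction (loss of TLC) exists. A sketch with illustrative, hypothetical volumes (the relationship is standard physiology; the specific numbers are invented for the example):

```python
# Illustrative, hypothetical lung volumes in liters.
# VC = TLC - RV is the standard definition; the numbers are invented.
def vital_capacity(tlc: float, rv: float) -> float:
    return tlc - rv

tlc = 6.0        # total lung capacity, unchanged in this scenario
rv_normal = 1.8  # normal residual volume
rv_trapped = 2.7 # increased residual volume from air trapping

vc_normal = vital_capacity(tlc, rv_normal)    # 4.2 L
vc_trapped = vital_capacity(tlc, rv_trapped)  # 3.3 L

rv_tlc_normal = rv_normal / tlc    # RV/TLC = 0.30
rv_tlc_trapped = rv_trapped / tlc  # RV/TLC = 0.45

# VC falls, mimicking "restriction," although TLC, the defining
# measure of restrictive disease, has not fallen at all.
print(f"RV/TLC: {rv_tlc_normal:.0%} -> {rv_tlc_trapped:.0%}")
print(f"VC: {vc_normal:.1f} L -> {vc_trapped:.1f} L at constant TLC")
```

This is why gas-dilution methods, which miss the trapped volume, understate TLC, and why radiographic or plethysmographic volumes are needed to distinguish obstruction with air trapping from true restriction.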
Not only is there a strong correlation between the profusion of irregular opacities and physiological impairment, but airways obstruction can be measured in workers after 15 years of exposure in the absence of radiographic lesions. Airway obstruction also characterizes the physiological pattern in those workers who show only pleural signs of asbestosis, either circumscribed or diffuse,78 on chest x-rays, as was hypothesized by Fridriksson et al.79 and confirmed by many physiological studies. Subjects with both pleural and parenchymal asbestosis on chest x-rays are more impaired than those with either pleural or pulmonary changes alone.
Pleural Effusions and Their Sequelae The recognition that pleural effusions occur with asbestosis, without other proximate causes, awaited tuberculosis becoming rare in American workers. During the past two decades, there have been reports71–72 of pleural effusions that last weeks to months; are without bacterial flora, stigmata of tuberculosis, or malignant cells; and have a benign course. Such effusions may precede diffuse pleural thickening with adhesions between the visceral and parietal pleura
Figure 23-4. A. This terminal bronchiole, 16 or more bifurcations from the trachea, shows a division to the left. It has a greatly thickened fibrotic wall that causes small airway obstruction. B. In this more advanced stage, such bronchioles have been eliminated and alveoli are fibrotic, producing the irregular opacities seen on chest x-rays (original magnification ×100).
Cigarette Smoking Interaction Men with asbestosis who smoked cigarettes had an increasing profusion of irregular opacities (Table 23-2).77 Cigarette smoking produced obstructive lesions of the small airways and caused emphysema by departitioning the distal portion of the lung. Because of these well-known effects of cigarette smoke on the airways, obstruction in asbestos workers was attributed to cigarette smoke. However, studies of large numbers of workers who never smoked showed that airway obstruction was characteristic of asbestosis alone.13,66–68
Progressive impairment of airflow and air trapping occur with an increasing profusion of opacities on radiographs59,66,68 (Figs. 23-5, A and B, and 23-6, A and B). Limitation of expiratory flow from 25 to 75% of vital capacity (FEF25–75) indicates increased airway obstruction, with increased air trapping, within a normal total lung capacity. As flow (FEF25–75) decreased, both forced expiratory volume in 1 second (FEV1) and vital capacity decreased; lung volume was maintained. Further proof of this relationship came from the 46 men with severe asbestosis (ILO profusions of 2/3 and greater) among 8000 asbestos-exposed workers, none of whom had reduced thoracic gas volume or TLC. Four additional subjects who appeared to have restrictive disease had had a lobe or more of lung removed for cancer. In summary, “a small, tight lung” does not characterize asbestosis. Rather, it is an airway obstructive disease in which total lung volume, the measure of restrictive lung disease, has increased by 10% due to cigarette smoking.68 Gas transfer capacity, that is, the diffusing capacity for carbon monoxide measured during a single 10-second breath-hold, does not decrease until air trapping has reduced vital capacity. Thus decreased diffusing capacity is not an early sign of asbestosis.
Figure 23-5. A. Mid-flows (FEF25–75) in 1777 men who were never smokers, as a percentage of predicted (adjusted for height and age), are shown as box plots against ILO categories 0/0 to 3/3, with the median line, the 25–75% limits as the box bottoms and tops, and whiskers equal to 1.5 times the interquartile range, rolled back to where there are data. Regression equation: FEF25–75 percent predicted = 99.01 – 4.92 ILO category (p < 0.0001, R2 = 2.4%). B. Residual volume/total lung capacity (RV/TLC) is plotted against ILO categories as in A. RV/TLC = 36.6 + 2.54 ILO category (p < 0.0001, R2 = 6.7%).
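As an illustrative sketch only (not the original study's analysis code), the regression lines reported in Figure 23-5 for never smokers can be evaluated directly; coding the ILO profusion category as a single ordinal number is an assumption here:

```python
def fef25_75_pct_predicted(ilo_category: float) -> float:
    """FEF25-75 as % of predicted in never smokers (Figure 23-5A).

    Assumes the ILO profusion category is coded as an ordinal number.
    """
    return 99.01 - 4.92 * ilo_category


def rv_tlc_pct(ilo_category: float) -> float:
    """RV/TLC (%) in never smokers (Figure 23-5B)."""
    return 36.6 + 2.54 * ilo_category


# Increasing profusion predicts lower mid-flows and more air trapping.
print(round(fef25_75_pct_predicted(0), 2))  # 99.01
print(round(rv_tlc_pct(3), 2))              # 44.22
```

Note the small R2 values reported in the figure: the slopes are highly significant across almost 1800 men, but ILO category alone explains only a few percent of the variance in any one individual's flows.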
23
Asbestos and Other Fibers
575
Figure 23-7. The lacelike white shadows are calcified pleural plaques seen en face in both posterior lung fields and in profile as plateaus on the diaphragms.
Figure 23-6. A. Mid-flows (FEF25–75) in 4550 men who were current smokers, as a percentage of predicted (adjusted for height, age, and duration of cigarette smoking), are plotted against ILO categories as in Figure 23-5. FEF25–75 percent predicted = 89.4 – 4.93 ILO category (p < 0.0001, R2 = 3.5%). B. Residual volume/total lung capacity (RV/TLC) is plotted against ILO categories for current smokers. RV/TLC = 41.2 + 2.04 ILO category (p < 0.0001, R2 = 7.9%).
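A small comparison sketch (illustrative only; ordinal coding of the ILO category is assumed, as above) of the never-smoker and current-smoker regression lines from Figures 23-5 and 23-6 makes the smoking interaction concrete: at every profusion level in the reported range, current smokers start from a lower FEF25–75 and a higher RV/TLC:

```python
# (intercept, slope) pairs for the reported regression lines
FEF_NEVER = (99.01, -4.92)    # Figure 23-5A, never smokers
FEF_CURRENT = (89.4, -4.93)   # Figure 23-6A, current smokers
RVTLC_NEVER = (36.6, 2.54)    # Figure 23-5B
RVTLC_CURRENT = (41.2, 2.04)  # Figure 23-6B


def predict(line, ilo_category):
    """Evaluate a reported regression line at a given ILO category."""
    intercept, slope = line
    return intercept + slope * ilo_category


for ilo in range(4):
    # Smokers show lower mid-flows and more air trapping at each category.
    assert predict(FEF_CURRENT, ilo) < predict(FEF_NEVER, ilo)
    assert predict(RVTLC_CURRENT, ilo) > predict(RVTLC_NEVER, ilo)
```

The nearly identical FEF25–75 slopes (–4.92 vs. –4.93) suggest that smoking shifts the baseline downward rather than steepening the decline with profusion.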
Mesotheliomas spread rapidly over the surfaces, displacing or engulfing vital organs rather than invading them (Fig. 23-8). In the peritoneum or pleura, the bumpy growths are white or light yellow and vascular, without necrosis. Histological sections show either dense fibroblastic connective tissue, with stromal cells forming tubular structures resembling capillaries, small vessels, or glands, or a combination.80 They are sometimes difficult to distinguish from metastatic adenocarcinoma of the lung, pancreas, colon, or stomach, but ultrastructural and histochemical studies are helpful.81 These otherwise rare tumors serve as sentinel or signal neoplasms, strongly suggesting asbestos exposures. Similarly, nasal sinus carcinomas are sentinels for exposure to nickel carbonyl or wood dust from certain tropical hardwood trees. Angiosarcoma of
and obliteration of costophrenic angles, as many workers with these signs have histories of pleural effusion. Follow-up of some reported subjects has been likewise confirmatory.71 It is inferred that fibrosis, more dense in the periphery of the lung and attenuating toward the hilum, which is recognized with extended-scale, computer-augmented tomography69,70 (Fig. 23-7), may be due to asbestos pleural effusions. These workers are more functionally impaired than are others with pleural asbestos disease,78 suggesting that they develop thick pleural encasement of the lungs that may require surgical removal (decortication) to relieve lung trapping, but this is not confirmed.
Mesothelioma The pleura and peritoneum are lined with mesothelial cells, derived from mesoderm, that may develop into connective tissue cells or endothelial cells. Asbestos is translocated to mesothelial cells and initiates tumors that grow rapidly, have an excellent blood supply, and thus rarely show necrosis. Mesothelioma invades nerves to cause pain and kills by interfering with breathing. Microscopic metastases are frequent but rarely clinically important. These tumors arise in response to asbestos fibers or fibrils that have either penetrated the lung to reach the pleural space or penetrated the bowel wall to reach the peritoneum.
Figure 23-8. Pleural mesothelioma in a 35-year-old housewife. Her father, a shipyard employee, had brought dusty work clothes home from the shipyard to be cleaned. He died of lung cancer. Her mother died of pleural mesothelioma.
the liver in the United States suggests exposure to vinyl chloride monomer, but similar tumors of the liver in Africa are associated with aflatoxin exposure. Historically, the scrotal cancers in chimney sweeps, causally related to coal tar by Percival Pott, were sentinels for coal tar exposure. In the general population, the incidence of mesothelioma varies from 1:1000 to 1:10,000 deaths, but in heavily exposed insulators it caused 8–10% of deaths. To these must be added mesothelioma deaths from contamination of the home by asbestos brought in by shipyard or asbestos factory workers, and deaths of subjects who had only brief exposures. Although the latency period for mesothelioma averages 35–40 years, it may be as brief as 5 years. There is no relation to cigarette smoking, nor is there convincing evidence for a dose-response or an enhanced risk from intensive or prolonged exposure to asbestos. Amphiboles may be more potent than chrysotile. Thus in the shipbuilding trades, the incidence of mesothelioma is related to the number of workers at risk in all of the trades, whereas the prevalence of asbestosis is higher in heavily exposed workers such as pipe coverers, pipe-fitters, and boilermakers. Experiments implanting fibers of various types and sizes into the pleural space of rats and guinea pigs showed Stanton and Wrench that fibrous glass, rock wool, palygorskite, and brucite caused mesothelioma.58 In Turkey, erionite, a fibrous zeolite, has been associated with an extraordinary prevalence of mesotheliomas around Cappadocia.82,83 Before zeolite can be accepted as a cause, however, it must be noted that fibrous tremolite has also been found in this area and is a contaminant of natural products used for building material. Thus fibrous tremolite, an amphibole, may be responsible for these mesotheliomas, as in vermiculite-exposed Libby, Montana. The animal experiments caution against widespread adoption of human-made substitutes for asbestos.
Carbon fibers, because of their size and shape, may share the potential for inducing mesothelioma, and they and vitreous fibers may also produce pulmonary fibrosis, although they have not been tested. Management of mesothelioma is discouraging to the patient and frustrating to the physician. Survival after diagnosis is usually one year; patients rarely live five years. Used alone, radiotherapy, chemotherapy, and surgery offer no advantage over the natural course. Surgical debulking of the tumor and multi-drug chemotherapy with doxorubicin (adriamycin), cyclophosphamide, and cisplatin increase the life span after diagnosis by about one year. Because mesotheliomas invade nerves, pain relief is a major concern in palliation.
Lung Cancer The major public health concern and principal cause of death from asbestos in developed countries is lung cancer. Some groups of asbestos-exposed workers have a lung cancer mortality rate as high as one in five. Cocausality with cigarette smoking unquestionably makes the relative risk 50–100 times higher in the asbestos-exposed smoker than in the non–asbestos-exposed subject who has never smoked.19,84 Because 65–85% of workers exposed to asbestos have smoked and, in 1990, about 50% continued to do so,85 their excessive risk of lung cancer calls for intervention to quit smoking. Although the smoking rate for males in the general population is now less than 30%, and despite clear evidence that stopping smoking reduces the risk of cancer,86 more than 50% of asbestos workers surveyed from 1987 to 199477,78,85 in the construction, shipbuilding, and metal trades continued to smoke. Workers with an extreme risk of cancer have only one hope, one practical approach: to quit smoking. Treatment of lung cancer has advanced so little in the past 40 years that only 5–8% of patients survive five years after discovery. Although investigations with genetic markers and surface antigens suggest that we may be on the verge of earlier clinical recognition, the logistics of extending expensive and time-consuming methods to several million asbestos-exposed workers, active and retired, are practically impossible.
The latency for lung cancer in asbestos-exposed workers appears to be similar to that in non–asbestos-exposed workers, with incidence peaking around age 60.19,84,87 Because of a rapidly decreasing risk with the cessation of smoking, efforts to induce asbestos-exposed persons to stop smoking are a public health priority. Similar risk reduction must apply to bystanders and household-exposed groups, as well as persons with lesser exposures in buildings containing asbestos. Although proof that the tumors are due to asbestos is considered difficult in the absence of radiographically demonstrated asbestosis, a recent study showed that almost all lungs removed for cancer from asbestos-exposed individuals show microscopic fibrosis.88 One lesson from history is clear: when asbestos exposure was sufficiently great that asbestosis was a principal cause of death, opportunities to survive long enough to develop lung cancer were diminished. For example, a German asbestos industry study in Dresden showed that one-fourth of the deaths after World War II in asbestos workers were due to asbestosis; less than 2% had lung cancer.89 After the war, extensive industrial hygiene controls reduced the risk of fatal asbestosis, so asbestos workers lived longer and died of lung cancer. The dismal prospect for treatment of lung cancer and the large number of people exposed to asbestos who still smoke make smoking cessation among blue-collar workers who have breathed asbestos the public health priority. Workers can be motivated by a physician who directs personal attention to effects of cigarette smoke, such as signs of chronic bronchitis and emphysema, and emphasizes the higher cancer risks when smoking is combined with asbestos exposure.87 This strategy should be extended in the United States and most of the developed world, as during the past 20 years asbestos exposure has been progressively reduced.
Stopping smoking can substantially reduce the lung cancer risk of the millions of people exposed in the previous era, improving public health and decreasing medical and societal costs (lost wages and dependent families).
Other Asbestos-Related Neoplasms Many other neoplasms have been attributed to asbestos. Careful long-term studies of large numbers of insulators and asbestos workers are required so that contributions from other factors can be ascertained, because many individuals in the study populations also smoked cigarettes, used alcohol, or encountered other occupational carcinogens. Asbestos exposure is associated with neoplasms of the pancreas and kidney, certain types of lymphoma, and neoplasms of the esophagus, mouth, and colon. The most extensive study, which served to anchor the experience, is that of the heat and frost insulators, a cohort of 17,800 workers studied by Selikoff and Seidman19 since January 1967. In this group, the ratio of observed to expected cancers of the esophagus, larynx, kidney, pharynx, and buccal mucosa was greater than 2, whereas the ratio for cancers of the stomach, colon, and rectum was greater than 1.5. Thus it appears that these common epithelial cancers are caused by asbestos exposure, despite causal interactions with cigarette smoke and alcohol (Table 23-2). In British workers certified by medical panels as having asbestosis in 1980, mortality from asbestos disease was elevated 2.6 times the expected rate, and that due to lung cancer was 9.1 times the expected rate.84 Criteria were sufficient exposure and the presence of two of four conditions (pulmonary [radiological] abnormality, pulmonary functional impairment, basal rales, and finger clubbing); 39% died of lung cancer, 9% of mesothelioma, and 20% of asbestosis. Selikoff and Seidman,19 studying all deaths in United States and Canadian insulators in the 1967 to 1987 interval, found an excess of deaths 1.4 times the expected number, with deaths from cancer 3.0 times the expected number. Lung cancer accounted for 23.6% of deaths; mesothelioma, 9.3%; and asbestosis, 8.6%.
Societal Impact Beginning in the 1970s, workers’ compensation claims and tort litigation were undertaken on behalf of workers with mesothelioma, lung cancer, and asbestosis who were threatened by death or showed impairment of function.
Fifty different state laws and the absence of record linkage contribute to the lack of accurate figures on the numbers of plaintiffs who have successfully threaded through the legal maze of workers’ compensation. This system was a social construct to avoid litigation and provide compensation without adversarial confrontation. It was focused on workplace injuries. The trade-off for the worker (plaintiff) was to give up all legal redress for injury or illness. In practice, obtaining workers’ compensation may be more difficult than pursuing third-party litigation. Costs have been shifted from industry to society after workers’ resources have been exhausted, through public assistance, Social Security disability, Medicare, and Medicaid. In the mid-1970s, civil actions (torts) were filed against major asbestos suppliers and manufacturers on behalf of patients with asbestos disease. More than a decade of such litigation has made the use of asbestos expensive because insurance is difficult or impossible to buy. Juries awarded large sums to a small fraction of plaintiffs with mesothelioma and lung cancer and some with pulmonary asbestosis. Smaller awards or settlements were made for pulmonary impairment associated with asbestosis in the lungs. The associated (nonpulmonary) neoplasms and pleural asbestosis have fared less well, with smaller jury awards and less frequent settlements. The coresponsibility of cigarette smoking has not been accepted by the tobacco companies, nor has litigation succeeded against them for their contribution to the lung cancer death toll. In 1978, the U.S. Congress asked for an appraisal of workers’ compensation programs for occupationally related lung disease, which included asbestosis, byssinosis, and black lung, as the prelude to an omnibus bill. However, by 2005 no omnibus bill had passed.
Although on paper the situation is worse than in 1978, verdicts in the courts have collectively pushed more than a dozen major asbestos firms, led by Johns-Manville, into bankruptcy proceedings because they lost insurance and bore large costs in fighting and settling asbestos cases. Meanwhile, installation and use of new asbestos products have virtually ceased in the United States. More members of the exposed workforce know the hazards and hygiene of asbestos removal. Exposure has certainly decreased in developed countries. Currently, the burden of asbestosis is on the individual who tries to obtain Social Security, county welfare, public assistance, disability compensation, or Medicare payments. The likelihood of obtaining such help apparently depends on luck. Regulations to control exposure in the workplace were enacted in 1977; a temporary standard allowed workers to be employed in environments that contained up to 2 fibers/ml of air, or 2 million fibers/m3.90,91 The National Institute for Occupational Safety and Health (NIOSH) has recommended to the Occupational Safety and Health Administration (OSHA) a 0.1 fiber/ml industrial exposure limit in the United States, which was adopted in 1990 as a time-weighted average value. In view of the temporizing slowness of this approach, it is reassuring that the use of asbestos in the United States has steadily fallen since 1978, that it has been proscribed in consumer products, that in California a home cannot be sold without an asbestos inspection and amelioration of any problems found, and that there is a public sense that asbestos should be avoided. Asbestos products are not being installed in new construction because of EPA rules. The EPA expanded its asbestos ban to most uses in July 1989.92 Brake blocks, pipe, and shingles were banned in 1996 after three phase-out stages.
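The unit arithmetic behind these exposure limits is simple (a sketch for orientation, not regulatory guidance): 1 m3 = 10^6 ml, so the 1977 temporary limit of 2 fibers/ml is the 2 million fibers/m3 quoted above, and the 0.1 fiber/ml value adopted in 1990 is a 20-fold tightening:

```python
ML_PER_M3 = 1_000_000  # 1 cubic meter = 10**6 milliliters

def fibers_per_m3(fibers_per_ml: float) -> float:
    """Convert an airborne fiber concentration from fibers/ml to fibers/m3."""
    return fibers_per_ml * ML_PER_M3

print(fibers_per_m3(2.0))  # 1977 temporary standard -> 2000000.0
print(round(2.0 / 0.1))    # fold reduction to the 0.1 fiber/ml limit -> 20
```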
After 1990, only 10% of products had been phased out, but most products exceeding the EPA standards ceased production by November 1993, including brake linings, friction materials, flooring felt, and tile. Disappearance of asbestosis will be slow because of its long latency and some continuing exposure from asbestos that is in place. Although in many jurisdictions removal is done legally by specially trained workers in disposable suits using air-supplied respirators, some fly-by-night removal companies use laborers uninformed of the dangers of asbestos and lacking instructions for safe handling. Such avoidance of responsibility also characterized earlier eras. Such unconscionable disregard of human suffering underscores the need for tighter controls and genuine accountability. Criminal penalties may be needed, as were threatened in Libby, Montana in 2005. Brakes and clutch facings needed for safety can be free of asbestos
materials, and although they cost more than the products replaced, they need to be replaced less often and avoid asbestos diseases for workers.93 Asbestos use in the United States fell from 240,000 metric tons in 1984 to 85,000 metric tons in 1987.84,85,92 NATURAL NONASBESTOS AND MANUFACTURED FIBERS
Natural Nonasbestos Fibers According to the Mine Safety and Health Administration, about 150 minerals are fibrous or contain fibers. A fiber is an elongated polycrystalline unit resembling cotton or animal hair. Mineralogists define fibers as particles with an aspect ratio (length to diameter) equal to or greater than 10 to 1. “Asbestiform” denotes a type of silicate fiber that has high tensile strength, an extreme aspect ratio (i.e., high length/diameter ratio), flexibility, heat resistance, and aggregation of fibrils into bundles; chrysotile is a good example. The Occupational Safety and Health Administration has defined the asbestos fiber as being greater than 5 µm in length with an aspect ratio of 3 to 1 or greater. The pulmonary toxicity of natural and human-made fibers is determined by the dose, dimensions, and durability of the fiber. Fibers with long residence time because of high durability are more toxic than those with shorter residence time. Mesotheliomas are produced in animal models by pleural or peritoneal injection of fibers such as amosite, crocidolite, chrysotile, anthophyllite, tremolite, attapulgite, erionite (zeolite), borosilicate glass, aluminum silicate glass, mineral wool, aluminum oxide, potassium titanate, silicon carbide, sodium aluminum carbonate, and wollastonite.58 Amphiboles are more durable than chrysotile in solution and in animal tissues, including lung. Whether talc, a sheetlike silicate, is toxic to the lung is unclear because most North American deposits are significantly contaminated by fibers of tremolite, anthophyllite, and crystalline quartz.94 Pure cosmetic talc, that is, talc with minimal fiber content, produces few, if any, toxic reactions. Thus the toxicity of talc appears due to fiber contamination and perhaps free silica content. Vermiculite, a family of hydrated magnesium-aluminum-iron silicates, is sheetlike.
The mineral is expanded by heat after removal from the mines and used for insulation and for fillers in paint, plasters, rubber, and other materials. The health hazard from vermiculite is attributable to its contamination with fibrous tremolite.95 Vermiculite workers and residents of Libby, Montana had numerous mesotheliomas, lung cancers, and asbestosis from tremolite. Zeolites, a group of crystalline and hydrated aluminum silicate minerals, consist of extremely fine tubes of mordenite or erionite. The tubes are 10–20 µm in length and less than 1–3 µm in diameter. Naturally occurring deposits of zeolites are distributed worldwide, but adverse health effects have been investigated near Karain, Turkey, in central Anatolia.82,83 Although mesotheliomas, pleural thickening, and plaques were attributed to exposure to erionite, it appears that it was contaminated with chrysotile and tremolite.82 Airborne fibers in Karain, which average less than 0.01 fiber/m3 with a peak level of 1.38 fibers/m3, are significantly below the current standard for asbestos fibers. Either this is an unrecognized hazard from low-level airborne erionite exposure, or the disease is due to tremolite.94,95 Wollastonite deposits are scattered around the world. A study in Finland showed that workers from a limestone-wollastonite quarry had a high frequency of pleural thickening and pulmonary fibrosis. Fibrosis was observed in only 3% of a worker cohort in the United States, but these subjects’ reductions in expiratory airflow were related to dust levels.96 Further studies of effects of erionite and wollastonite on human populations are needed, but the data suggest caution in handling these materials.
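The two fiber definitions given earlier in this section, the mineralogists' aspect-ratio rule and the OSHA counting rule, can be written as simple predicates. This is an illustrative sketch, not a regulatory counting protocol:

```python
def is_fiber_mineralogical(length_um: float, diameter_um: float) -> bool:
    """Mineralogists' definition: aspect ratio (length/diameter) >= 10 to 1."""
    return length_um / diameter_um >= 10.0


def is_fiber_osha(length_um: float, diameter_um: float) -> bool:
    """OSHA definition: length > 5 um AND aspect ratio >= 3 to 1."""
    return length_um > 5.0 and length_um / diameter_um >= 3.0


# The definitions disagree at the margins: a 6 x 1 um particle is a fiber
# to OSHA but not to a mineralogist; a 4 x 0.2 um particle is the reverse.
assert is_fiber_osha(6, 1) and not is_fiber_mineralogical(6, 1)
assert is_fiber_mineralogical(4, 0.2) and not is_fiber_osha(4, 0.2)
```

Which definition is applied matters for exposure assessment, since the OSHA rule excludes the very short, thin fibers that the aspect-ratio rule counts.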
Human-Made Fibers The physical characteristics of fibers made by humans from slag, rock, glass, ceramics, and carbon vary greatly with their manufacture.97 Carbon fibers are used in making sailboat masts and aircraft
components (as in the Stealth bomber). The same considerations of dose, dimensions, and durability that apply to natural fibers extend to human-made filaments98 with a range of diameters. It is clear that little human respiratory hazard should be predicted for fibers with diameters greater than 10 µm because these fibers do not split into fragments that are respirable. Current commercial fibrous glasses are highly heterogeneous, with some fiber diameters of 1 µm or less. Both rotary spinning and flame attenuation produce fibers less than 1 µm (Fig. 23-9, A and B). Currently the National Institute for Occupational Safety and Health94 recommends that fibrous glass exposure be limited to 3 fibers/m3. These fibers are defined as less than 3.5 µm in diameter and equal to or greater than 10 µm in length. Rotary spinning, the process analogous to that for making cotton candy, requires less energy and is replacing flame attenuation for producing fine fibers. The thermal coefficient, a measure of insulating capacity, is increased as fiber diameters are reduced (Fig. 23-9C). Where high thermal coefficients are needed with low weight, such as aerospace applications, uniformly fine fiberglass is preferred. Fine fiberglass is also used in refrigerator doors and in insulating industrial construction and homes because it is mixed (heterogeneously) with larger fibers. This usage exposes production and construction workers to some respirable fibers.
Effects of Nonrespirable Fibers
Nonrespirable fibers irritate the skin,99 causing itching, burning, and irritation of the conjunctivae and the nasal and pharyngeal passages.100 Removal from exposure stops these symptoms, as it does for natural irritants such as peach fuzz or stinging nettle. Striking dermatographism (histamine wheal and flare) precludes further exposure in some people.
Effects of Respirable Fibers
Longer fibers resist phagocytosis and produce ferruginous bodies after variable periods of residence. Shorter fibers are phagocytized and release peptides that stimulate recruitment of cells for production of collagen and other fibers.101 Intrapleural injection of fibers produces mesothelioma in animals.58 Inhalation, even for a long period, in rodents102–104 and in monkeys103 produces macrophage accumulations and granulomas containing fibrous glass but little fibrosis; in rats, however, plaques develop on the visceral pleura.103 Insulators who use materials with a high thermocoefficient and low weight, as in the construction of cabins of aircraft or space vehicles, should be studied to learn whether these fine fibers imitate asbestos. In the past, many of these workers were exposed to asbestos used in these applications or in similar work. Furthermore, the manufacturing sites for fiberglass have been rich in asbestos insulation. Finally, the duration of human exposure in many of these facilities has been less than the 20 or 25 years105 that is the usual “latent period” for effects of asbestos exposure. Therefore, it is logical that the hazard from fine fiberglass is analogous to that from asbestos, as has been demonstrated recently.106 Workers were studied in a midwestern appliance plant where refrigerator doors, and previously entire cabinets, were insulated with fiberglass sheeting and loose rotary-spun fiberglass.106 Spirometry and lung volumes were measured, respiratory and occupational questionnaires were administered, and chest x-ray films were read for pneumoconiosis using International Labor Office 1980 criteria in 284 men and women with exposures of 20 years or more to heterogeneous fine fiberglass. Electron microscope measurements of fiber size in several samples showed that 49–83% had diameters under 5 µm. Air samples were examined only by light microscopy, so the low levels found, 0.1–0.4 fibers/ml, are meaningless.
Expiratory flows were reduced, including FEV1 (mean 90.3% of predicted [pr]), FEF25–75 (85.5% pr), and FEF75–85 (76.2% pr). Forced vital capacity was significantly reduced (92.8% pr) and total lung capacity was significantly increased (109.2% pr). In white male smokers, a group large enough for comparisons, pulmonary function reductions paralleled the appearance of irregular opacities. Forty-three
Figure 23-9. A. Rotary process of producing fine fiberglass used both centrifugal force and air jets to attenuate the glass. Heterogeneous fiber diameters result. B. Flame attenuation provides heat and drive force to pull the fibers into smaller diameters. C. Insulating capacity, the reciprocal of thermal conductivity, is increased as fiber diameter is reduced.
workers (15.1%) had evidence of pneumoconiosis on chest radiographs: 26 of these (9.1%) had no known exposure to asbestos and 17 (6.0%) had some exposure. Our best judgment was that in 36 (13.0%) pulmonary opacities or pleural abnormalities were due to fiberglass.
Commercial rotary-spun fiberglass used for insulating appliances produced human disease similar to asbestosis. Radiographic studies of workers at seven fibrous glass and mineral wool facilities 40 years ago demonstrated that 10% had small irregular opacities of low profusion, 0/1 to 1/1.17 Physiological testing was not done.17,104,105 Mortality rates for fibrous glass workers have been studied without regard to the respirability (size) of the fibers; generally there have been no excess deaths from malignant or nonmalignant respiratory disease.107 A 17-plant study in the United States under the auspices of the Thermal Insulation Manufacturers Association and a 72-plant, seven-country European study by the European Insulation Manufacturers Association found risks for lung cancer above those of control populations: a 12% increase in glass wool workers and a 36% increase in mineral wool workers.107 Serious questions were raised about the suitability of national versus regional versus area controls for tracking cancer mortality rates, so the question remains unsettled.108–110 Because society’s members are contaminated by many chemicals, suitable comparison groups are difficult to locate. Human mesotheliomas from fiberglass have not been identified, although construction workers installing loose fiberglass had exposures of 7 fibers/ml.109
Public Health Considerations Research Because of the analogous dimensions and respirability of fine fibrous glass and other man-made fibers, and their variable durability, additional studies are needed of health effects and mortality in populations that have been exposed for at least 20-year latent periods without exposure to asbestos.111 If these studies confirm those above, they will help identify safe alternatives to asbestos. Meanwhile, the association of mesothelioma with siliceous filaments in sugarcane factory workers in India112 raises the possibility of a “natural fiber” of plant origin mimicking asbestos exposure, or perhaps asbestos exposure has not been adequately ruled out. Better exposure histories, examination of lung tissue, and determination of fiber content by scanning electron microscopy and energy-dispersive analysis will be needed to address the many questions of competing etiology that arise as asbestos is replaced.
Control Measures It seems ironic that we are witnessing widespread adoption of fibrous glass without the key information needed to determine the human health risks.98,100,108–111,113 Clearly, determination of the health hazards of fine human-made fibers is a high priority before their production and application in industry produce a problem for this new century that mimics that from asbestos. Meanwhile, it is prudent to handle respirable fibers with the same precautions as asbestos.110
REFERENCES
1. International Labour Office. U/C International Classification of Radiographs of Pneumoconiosis in Occupational Safety and Health Series. Geneva: International Labour Office; 1980. 2. Murray HM. Report of the Departmental Committee on Compensation for Industrial Disease. London: HM Stationery Office; 1907. 3. Cooke WE. Fibrosis of the lungs due to the inhalation of asbestos dust. Br Med J. 1924;2:147. 4. Cooke WE. Pulmonary asbestosis. Br Med J. 1927;2:1024–6. 5. Merewether ERA, Price CV. Report on Effects of Asbestos Dust on the Lungs and Dust Suppression in the Asbestos Industry. London: HM Stationery Office; 1930. 6. Pancoast HK, Miller TG, Landish HRM. A roentgenologic study of the effects of dust inhalation upon the lungs. Am J Roentgenol. (N.S.) 1918;5:129–38. 7. Gloyne SR. The morbid anatomy and histology of asbestosis. Tubercle (London). 1933;14:445–51, 493–7, 550–9.
Asbestos and Other Fibers
8. Lanza AJ, McConnell WJ, Fehnel JW. Effects of the inhalation of asbestos dust on the lungs of asbestos workers. Public Health Rep. 1935;50:1–48. 9. Dreessen WC, Dallavalle JM, Edwards TI, et al. A study of asbestosis in the asbestos textile industry. Public Health Bull. 1938;241: 1–147. 10. Donnelly J. Pulmonary asbestosis: incidence and prognosis. J Ind Hyg. 1936;18:222–8. 11. Shull JR. Asbestosis: a roentgenologic review of 71 cases. Radiology. 1936;27:279–92. 12. McPheeters SB. A survey of a group of employees exposed to asbestos dust. J Ind Hyg. 1936;18:229–39. 13. Becklake MR, Fournier-Massey G, McDonald JC, Siemiatycki J, Rossiter CA. Lung function in relation to chest radiographic changes in Quebec asbestos workers. I. Methods, results and conclusions. Bull Physio Pathol Resp. 1970;6:637–59. 14. Merewether ERA. Annual Report of the Chief Inspector of Factories. London: HM Stationery Office; 1947. 15. Hueper WC. Occupational Tumors and Allied Diseases. Springfield, IL: Charles C Thomas; 1942. 16. Doll R. Mortality from lung cancer in asbestos workers. Br J Ind Med. 1955;12:81–6. 17. Mancuso TF, Coulter EJ. Methodology in industrial health studies: the cohort approach, with special reference to an asbestos company. Arch Environ Health. 1963;6:210–22. 18. Selikoff IJ. Asbestos disease in the United States, 1918–1975. Rev Fr Mal Resp. 1976;4:7–24. 19. Selikoff IJ, Seidman H. Asbestos associated deaths among insulation workers in the United States and Canada, 1967–1987. Ann N Y Acad Sci. 1991;643:1–14. 20. Klemperer P, Rabin CB. Primary neoplasms of the pleura: a report of five cases. Arch Pathol. 1931;11:385–412. 21. Wagner JC, Sleggs CA, Marchand P. Diffuse pleural mesothelioma and asbestos exposure in the North Western Cape Province. Br J Ind Med. 1960;17:260–71. 22. U.S. Environmental Protection Agency. Guidance for Controlling Asbestos-Containing Materials in Buildings. EPA 560/5-85-024. Office of Pesticides and Toxic Substances:Washington, DC; June 1985. 
23. Spurny KR. On the release of asbestos fibers from weathered and corroded asbestos cement products. Environ Res. 1989;48:100–16. 24. Schneider A. W.R. Grace Indicted in Libby Asbestos Deaths. St Louis Post-Dispatch: February 8, 2005. 25. Moss M, Appel A. Company’s Silence Countered Safety Fears about Asbestos. New York Times, CL: 51809; July 9, 2001. 26. Girion L. Firms Hit Hard as Asbestos Claims Rise. Los Angeles Times. December 17, 2001. 27. Girion L. Halt of Asbestos Trial Sought. Los Angeles Times. September 10, 2002. 28. Appel JD, Fasy TM, Kohtz DS, Kohtz JD, Johnson EM. Asbestos fibers mediate transformation of monkey cells by exogenous plasmid DNA. Proc Natl Acad Sci USA. 1988;85:7670–74. 29. Levresse V, Renier A, Levy F, Broaddus VC, Jaurand M. DNA breakage in asbestos-treated normal and transformed (TSV40) rat pleural mesothelial cells. Mutagenesis. 2000;15:239–44. 30. Mossman BT, Craighead JE, MacPherson BV. Asbestos-induced epithelial changes in organ cultures of hamster trachea: inhibition by retinyl methyl ether. Science. 1980;207:311–3. 31. Wade MJ, Lipsin LE, Tucker RW, Frank AL. Asbestos cytotoxicity in a long-term macrophage-like cell culture. Nature. 1976;264:444–6. 32. Neugut AI, Eisenberg D, Silverstein M, Pulkribek P, Weinstein IB. Effects of asbestos epithelial cell lines. Environ Res. 1978;17: 256–65. 33. Ljungman AJ, Lindahl M, Tagnession C. Asbestos fibers and manmade mineral fibers: induction and release of tumor necrosis factor from rat alveolar macrophages. Occup Environ Med. 1994;15:777–83.
Environmental Health
34. Vaslet C, Messier N, Kane A. Accelerated progression of asbestos-induced mesotheliomas in heterozygous p53+/− mice. Toxicol Sciences. 2002;68:331–8. 35. Kamp DW, Panduri V, Weitzman SA, Chandel N. Asbestos-induced alveolar epithelial cell apoptosis: role of mitochondrial dysfunction caused by iron-derived free radicals. Mol Cell Biochem. 2002;234–235:153–60. 36. Heppleston AG. Silica and asbestos: contrasts in tissue response. Ann NY Acad Sci. 1979;330:725–44. 37. Heppleston AG. The fibrogenic action of silica. Br Med Bull. 1969;25:282–7. 38. Allison AC. Pathogenic effects of inhaled particles and antigens. Ann NY Acad Sci. 1974;221:299–308. 39. Davis JMG. The effects of chrysotile asbestos dust on lung macrophages maintained in organ culture. Br J Exp Pathol. 1967;48:379–85. 40. Davis JMG, Beckett ST, Bolton RE, Collings P, Middleton AP. Mass and number of fibres in the pathogenesis of asbestos-related lung disease in rats. Br J Cancer. 1978;37:673–88. 41. Spurzem JR, Saltini C, Rom W, Winchester RJ, Crystal RG. Mechanisms of macrophage accumulation in the lungs of asbestos-exposed subjects. Am Rev Respir Dis. 1987;136:276–80. 42. Wagner JC, Burns J, Munday DE, McGee J. Presence of fibronectin in pneumoconiotic lesions. Thorax. 1982;37:54–6. 43. Rom WN, Bitterman PB, Rennard SI, Catin A, Crystal RG. Characterization of the lower respiratory tract inflammation of nonsmoking individuals with interstitial lung disease associated with chronic inhalation of inorganic dusts. Am Rev Respir Dis. 1987;136:1429–34. 44. Davis HV, Reeves AL. Collagen biosynthesis in rat lungs during exposure to asbestos. Am Ind Hyg Assoc J. 1971;32:599–602. 45. Wagner JC, Berry G, Skidmore JW, Timbrell V. The effects of the inhalation of asbestos in rats. Br J Cancer. 1974;29:252–69. 46. Wagner JC. Asbestosis in experimental animals. Br J Ind Med. 1963;20:1–12. 47. Churg A, Wright JL, Gilks B, DePaoli L. 
Rapid short-term clearance of chrysotile compared to amosite asbestos in the guinea pig. Am Rev Respir Dis. 1989;139:A214. 48. Craighead JE, Abraham JL, Churg A, et al. Asbestos-associated disease. Arch Pathol Lab Med. 1982;106:544–97. 49. Hillerdal G. The pathogenesis of pleural plaques and pulmonary asbestosis: possibilities and impossibilities. Eur J Respir Dis. 1980;61:129–38. 50. Rennard SI, Jaurand M-C, Bignon J, et al. Role of pleural mesothelial cells in the production of the submesothelial connective tissue matrix of lung. Am Rev Respir Dis. 1984;130:267–74. 51. Turner-Warwick M, Parkes WR. Circulating rheumatoid and antinuclear factors in asbestos workers. Br Med J. 1970;3:492–95. 52. Kagan E, Solomon A, Cochrane JC, Kuba P, Rocks PH, Webster I. Immunological studies of patients with asbestosis. II. Studies of circulating lymphoid cell numbers and humoral immunity. Clin Exp Immunol. 1977;28:268–75. 53. Kagan E, Solomon A, Cochrane JC, et al. Immunological studies of patients with asbestosis. I. Studies of the cell-mediated immunity. Clin Exp Immunol. 1977;28:261–7. 54. Bitterman P, Rennard SI, Ozaki T, Adelberg S, Crystal RG. PGE2: a potential regulator of fibroblast replication in normal alveolar structures. Am Rev Respir Dis. 1983;127:271A. 55. Pfau J, Sentissi J, Weller G, Putnam E. Assessment of autoimmune responses associated with asbestos exposure in Libby, Montana, USA. Environ Health Persp. 2005;113:25–30. 56. Rennard SI, Crystal RG. Fibronectin in human bronchopulmonary lavage fluid: elevation in patients with interstitial lung disease. J Clin Invest. 1981;69:113–22. 57. Rennard SI, Bitterman PB, Crystal RG. Pathogenesis of granulomatous lung disease. IV. Mechanisms of fibrosis. Am Rev Respir Dis. 1984;30:492–6. 58. Stanton MF, Wrench C. Mechanisms of mesothelioma induction with asbestos and fibrous glass. J Natl Cancer Inst. 1972;48:797.
59. Kilburn KH, Warshaw RH, Thornton JC. Asbestosis, pulmonary symptoms and functional impairment in shipyard workers. Chest. 1985;88:254–9. 60. Anderson HA, Lilis R, Daum SM, Selikoff IJ. Asbestosis among household contacts of asbestos factory workers. Ann N Y Acad Sci. 1979;330:387–99. 61. Kilburn KH, Lilis R, Anderson HA, et al. Asbestos disease in family contacts of shipyard workers. Am J Public Health. 1985;75:615–7. 62. Nicholson WJ, Swoszowski EJ, Jr, Rohl AN, Todaro JD, Adams A. Asbestos contamination in United States schools from use of asbestos in surfacing materials. Ann N Y Acad Sci. 1979;330:587–96. 63. Sawyer RN, Swoszowski EJ, Jr. Asbestos abatement in schools: observations and experiences. Ann N Y Acad Sci. 1979;330:765–75. 64. Balmes JR, Warshaw R, Chong S, Kilburn KH. Effects of occupational exposure to asbestos containing materials in public schools. Am Rev Respir Dis. 1984;129:A174. 65. Oliver LC, Sprunce NL, Green RE. Asbestos-related disease in public school custodians. Am Rev Respir Dis. 1989;139:A211. 66. Kilburn KH, Warshaw RH, Einstein K, Bernstein J. Airway disease in non-smoking asbestos workers. Arch Environ Health. 1985;40:293–5. 67. Kilburn KH, Warshaw RH. Correlation of pulmonary functional impairment with radiographic asbestosis (ILO category). Am Rev Respir Dis. 1989;139:A210. 68. Kilburn KH, Warshaw RH. Airways obstruction from asbestos exposure: effects of asbestosis and smoking. Chest. 1994;106:1061–70. 69. Wollmer P, Jakobsson K, Albin M, et al. Measurement of lung density by x-ray computed tomography. Chest. 1987;91:865–9. 70. Aberle DR, Gamsu G, Ray CS. High-resolution CT of benign asbestos-related disease: clinical and radiographic correlation. Am J Radiol. 1988;151:883–91. 71. Gaensler EA, Kaplan AI. Asbestos pleural effusion. Ann Intern Med. 1971;74:178–91. 72. Epler GR, McLoud TC, Gaensler EA. Prevalence and incidence of benign asbestos pleural effusion in a working population. JAMA. 1982;247:617–22. 73. 
Morris JF, Koski A, Johnson LC. Spirometric standards for healthy nonsmoking adults. Am Rev Respir Dis. 1971;103:57–67. 74. Morris JF, Koski A, Breese JD. Normal values and evaluation of forced end-expiratory flow. Am Rev Respir Dis. 1975;111:755–62. 75. Kilburn KH, Warshaw RH. Measuring lung volumes in advanced asbestosis: comparability of plethysmographic and radiographic versus helium rebreathing and single breath methods. Respir Med. 1993;87:115–20. 76. Kilburn KH, Warshaw RH. Total lung capacity in asbestosis: a comparison of radiographic and body plethysmographic methods. Am J Med Sci. 1993;305:84–7. 77. Kilburn KH, Lilis R, Anderson HA, Miller A, Warshaw RH. Interaction of asbestos, age and cigarette smoking in producing radiographic evidence of diffuse pulmonary fibrosis. Am J Med. 1986;80:377–81. 78. Kilburn KH, Warshaw RH. Pulmonary functional consequences of pleural asbestos disease: circumscribed and diffuse. Chest. 1990;98:965–72. 79. Fridriksson HV, Hedenstrom H, Hillerdal G, Malmberg P. Increased lung stiffness in persons with pleural plaques. Eur J Respir Dis. 1981;62:412–24. 80. Suzuki Y. Pathology of human malignant mesotheliomas. Semin Oncol. 1980;8:268–2. 81. Suzuki Y, Churg J, Kannerstein M. Ultrastructure of human malignant mesothelioma. Am J Pathol. 1976;85:241–62. 82. Baris YI, Sakin AA, Ozesmi M, et al. An outbreak of pleural mesothelioma and chronic fibrosing pleurisy in the village of Karain Urgup in Anatolia. Thorax. 1978;33:181–92. 83. Lilis R. Fibrous zeolites and endemic mesothelioma in Cappadocia, Turkey. J Occup Med. 1981;23:548–58. 84. Berry G. Mortality of workers certified by pneumoconiosis medical panels as having asbestosis. Br J Ind Med. 1981;38:130–7.
85. Kilburn KH, Warshaw RH. Effects of individually motivated smoking cessation on male blue collar workers. Am J Public Health. 1990;80:1334–7. 86. Hammond EC, Selikoff IJ, Seidman H. Asbestos exposure, cigarette smoking and death rates. Ann N Y Acad Sci. 1979;330:473–90. 87. Selikoff IJ, Seidman H, Hammond EC. Mortality effects of cigarette smoking among amosite asbestos factory workers. J Natl Cancer Inst. 1980;65:507–13. 88. Kipen HM, Lilis R, Suzuki Y, Valciukas JA, Selikoff IJ. Pulmonary fibrosis in asbestos insulation workers with lung cancer: a radiological and histopathological evaluation. Br J Ind Med. 1987;44:96–100. 89. Jacob G, Anspach M. Pulmonary neoplasia among Dresden asbestos workers. Ann N Y Acad Sci. 1965;132:536–48. 90. Peto J. Dose-response relationships for asbestos-related disease: implications for hygiene standards. II. Mortality. Ann NY Acad Sci. 1979;330:195–203. 91. Berry G, Lewinsohn HC. Dose-response relationships for asbestos-related disease: implications for hygiene standards. I. Morbidity. Ann N Y Acad Sci. 1979;330:184–94. 92. EPA announces final regulation to ban new asbestos products. Washington, DC: U.S. Environmental Protection Agency, Office of Public Affairs (A107); 1989. 93. Erdinc M, Erdinc E, Cok G, Polatli M. Respiratory impairment due to asbestos exposure in brake-lining workers. Environ Res. 2003;91:151–6. 94. Lockey JE, Moatamed F. Health implications of non-asbestos fibers. In: Gee B, ed. Occupational Lung Diseases. New York: Churchill Livingstone; 1984:75–98. 95. Hassell PA, Sluis-Cremer GK. X-ray findings, lung function and respiratory symptoms in black South African vermiculite workers. Am J Ind Med. 1989;15:21–9. 96. Hanke W, Sepulveda M-J, Watson A, Jankovic J. Respiratory morbidity in wollastonite workers. Br J Ind Med. 1984;41:474–9. 97. Kilburn KH. Flame-attenuated fiberglass: another asbestos? Am J Ind Med. 1982;3:121–5. 98. Stanton MF. Fiber carcinogenesis: is asbestos the only hazard? 
J Natl Cancer Inst. 1974;52:633–4. 99. Bjornberg A. Glass fiber dermatitis. Am J Ind Med. 1985;8:395–400. 100. National Institute for Occupational Safety and Health. Criteria for a Recommended Standard: Occupational Exposure to Fibrous Glass. Publication No. DHEW (NIOSH) 77-152. U.S. Public Health Service, Department of Health, Education, and Welfare; 1977. 101. Maroudas NG, O'Neill CH, Stanton MF. Fibroblast anchorage in carcinogenesis by fibres. Lancet. 1973;1:807–9. 102. Gross P, Kaschak M, Tolker EB, Babyak MA, de Treville RTP. The pulmonary reaction to high concentrations of fibrous glass dust. Arch Environ Health. 1970;20:696–704. 103. Mitchell RI, Donofrio DJ, Moorman WJ. Chronic inhalation toxicity of fibrous glass in rats and monkeys. J Am Coll Toxicol. 1986;5:545–74. 104. Smith DM, Ortiz LW, Archuleta RF, Johnson NF. Long-term health effects in hamsters and rats exposed chronically to man-made vitreous fibres. Ann Occup Hyg. 1987;31:731–54. 105. Enterline PE, Marsh GM, Esmen NA. Respiratory disease among workers exposed to man-made fibers. Am Rev Respir Dis. 1983;128:1–7. 106. Kilburn KH, Powers B, Warshaw RH. Pulmonary effects of exposure to fine fibreglass: irregular opacities and small airway obstruction. Br J Ind Med. 1992;49:714–20. 107. Nasr AN, Ditchek T, Scholtens PA. The Prevalence of Radiographic Abnormalities in the Chests of Fiber Glass Workers: Occupational Exposure to Fibrous Glass. Publication No. USPHS NIOSH 76-151. U.S. Department of Health, Education, and Welfare; 1976. 108. Enterline PE, Marsh GM, Stone RA, Henderson VL. Mortality among a cohort of U.S. man-made fiber workers. J Occup Med. 1990;32:594–604. 109. Doll R. Overview and conclusions. Symposium on man-made mineral fibers, Copenhagen, October 1986. Ann Occup Hyg. 1987;31:805–19. 110. Simonato L, Fletcher AC, Cherrie JW, et al; International Agency for Research on Cancer. Historical cohort study of MMMF production workers in seven European countries: extension of the follow-up. Ann Occup Hyg. 1987;31:603–23. 111. Hallin N. Report on Mineral Wool Dust in Construction Sites. Stockholm, Sweden: Bygghalsan, The Construction Industry's Organization for Working Environment, Safety and Health; 1981. 112. Das PB, Fletcher AG, Jr, Deodhare SG. Mesothelioma in an agricultural community of India: a clinicopathological study. Aust N Z J Surg. 1976;46:218–26. 113. Infante PF, Schuman LD, Dement J, Huff J. Fibrous glass and cancer: commentary. Am J Ind Med. 1994;26:559–84.
24 Coal Workers' Lung Diseases
Gregory R. Wagner • Michael D. Attfield
Historical Perspective

Lung disease among underground coal miners has been a recognized occupational hazard since at least the mid-seventeenth century. Miners' black lung, now called coal workers' pneumoconiosis (CWP), was first documented among Scottish coal miners in 1837.1 Although the disease was thought to be disappearing in Britain at the turn of the twentieth century, wider use of chest radiographs following World War I showed pneumoconiosis, similar to silicosis, among coal miners in South Wales. By 1934, British physicians were beginning to accept coal dust as an occupational exposure that could result in disability and death. In 1942, the Committee on Industrial Pulmonary Diseases of the Medical Research Council introduced the term "coal workers' pneumoconiosis."2,3 In marked contrast, appreciation of CWP as an occupational disease and public health problem occurred much later in the United States, as did legislation to prevent or compensate CWP and associated respiratory disease. One reason for the relatively late recognition of CWP as a distinct disease entity in the United States was the early emphasis placed on the etiological role of silica in pneumoconiosis. The Hawk's Nest tragedy (1932–1934), in which more than 400 workers died of acute silicosis and tuberculosis after working on the tunnel at Gauley Bridge, West Virginia, reinforced the prevalent theory that silica content was the critical etiological agent in pneumoconiosis. The first systematic study of U.S. coal miners was conducted by the Public Health Service between 1928 and 1931 in the anthracite coal fields of eastern Pennsylvania.4 Because of the relatively high silica content of the dust and the similarity to silicosis, the term "anthracosilicosis" was used to describe the pneumoconiosis found among those miners. Of 2711 men studied, 23% were found to be affected. The prevalence of pneumoconiosis was related to the number of years underground, the dust concentration (particles per cubic meter of air), and the free silica content of the dust. 
"Pulmonary infection" was more frequent among miners with higher dust exposure and more than 15 years underground. Among miners over age 55, pulmonary tuberculosis was as much as 10 times more common than in the general population.5 Little additional progress was made in the United States until 1954, when the Public Health Service published a bibliography of American and British reports on respiratory disease among coal miners.6 Following this, various clinical and epidemiologic studies7–9 further documented the importance of CWP. At the direction of Congress, the Public Health Service began a comprehensive survey of the Appalachian coal fields in 1963. Of 2549 working miners and 1191 nonworking miners examined, 9% of the working and 18% of the nonworking miners were found to have radiographic evidence of pneumoconiosis.10 This study, published in 1968, together with the disastrous November 20, 1968, Farmington, West Virginia, mine explosion that killed 78 miners, triggered increased pressure from miners, their union (the United Mine Workers of America), and public health advocates, and led to passage of the Federal Coal Mine Health and Safety Act of 1969 (Public Law 91-173).11 This was the first
American mining law to recognize the importance of both health and safety hazards and to provide a mandate for strong preventive measures. Since that time, awareness has grown that CWP is not the only occupational pulmonary disease affecting coal miners. The study by Rogan and colleagues12 was the first to show a clear link between chronic airflow obstruction and dust exposure, independent of CWP status, while Rae et al.13 demonstrated that the prevalence of respiratory symptoms was related to the level of dust exposure. Emphysema is increased in coal miners14 and is related both to retained dust in the lung and to cumulative dust exposure.15,16
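The exposure-response findings cited here rest on a simple metric, cumulative dust exposure, conventionally computed as the sum over a working life of mean respirable-dust concentration multiplied by time in each job (mg/m3-years). A minimal sketch of that arithmetic follows; the job history is invented for illustration and is not drawn from any study cited in the text.

```python
# Cumulative respirable-dust exposure (mg/m3-years): the sum of
# (mean concentration x years held) across each job in a work history.
# The example history below is hypothetical.

def cumulative_exposure(job_history):
    """job_history: iterable of (mean_mg_per_m3, years) tuples."""
    return sum(concentration * years for concentration, years in job_history)

# Hypothetical miner: 10 years at the face before the 1969 Act
# (on the order of 6 mg/m3), then 20 years under the 2 mg/m3 standard.
history = [(6.0, 10), (2.0, 20)]
print(cumulative_exposure(history))  # 100.0 (mg/m3-years)
```

Epidemiological analyses relate this cumulative index, rather than any single concentration measurement, to outcomes such as radiographic CWP category or loss of lung function.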
Legislation

Although the Federal Coal Mine Health and Safety Act of 1969 was a landmark piece of legislation, it was by no means the first or last legislation to deal with occupational risks of mining (Table 24-1). The 1969 Act addressed several issues specifically and has served as a model for subsequent occupational safety and health legislation. The provisions included the following:17
• Mandatory health standards to be prescribed by the Secretary of Health and Human Services (HHS)
• Right of entry for inspection (Department of Interior) and investigation (HHS)
• Power to close mining operations, issue abatement orders, and penalize operators for noncompliance
• A respirable dust standard of 3 mg/m3, to be reduced to 2 mg/m3 three years after passage of the Act
• Medical surveillance of underground coal miners through entry and periodic chest x-ray examinations
• Rights of miners (transfer rights) with evidence of pneumoconiosis to work in a low-dust area (now < 1 mg/m3) with increased dust monitoring. If job transfer is necessary, there is no loss of pay (rate retention)
• Autopsies on deceased miners, administered by the National Institute for Occupational Safety and Health (NIOSH) through the National Coal Workers' Autopsy Study
• Compensation for miners with total disability and for dependents of those miners who die of lung disease from coal mine employment
• Research and training
The medical surveillance provisions of the Act were implemented through specifications developed by the NIOSH Appalachian Laboratory for Occupational Safety and Health in August 1970. Since that date, more than 350,000 examinations have been performed. Subsequently, Title IV of the 1969 Act has been amended twice by Congress, each time modifying the requirements that qualify miners for benefits and making coal operators responsible for providing trust funds to pay these benefits. In 1977, the 1969 Act was revised and largely incorporated into a new, comprehensive mining law—the Federal Mine Safety and Health Act of 1977 (Pub L No 91-173, amended by Pub L No 95-164)18—which extended many of the provisions of the 1969 Act to metal and nonmetal miners. Significant new responsibilities were given to the Department of Labor (Mine Safety and Health Administration) for establishing health standards and mine inspections and to HHS (NIOSH) for research and surveillance in noncoal mines.

TABLE 24-1. COAL MINING HEALTH AND SAFETY LEGISLATION IN THE UNITED STATES
1865: Bill is introduced to create Federal Mining Bureau. It is not passed.
1910: Bureau of Mines is established but specifically denied right of inspection.
1941: Bureau of Mines is granted authority to inspect, but it is not given authority to establish or enforce safety codes (Title I, Federal Coal Mine Safety Act).
1946: Federal Mine Safety Code for Bituminous Coal and Lignite Mines is issued by the Director, Bureau of Mines (agreement between the Secretary of the Interior and the United Mine Workers of America) and included in the 1946 (Krug-Lewis) UMWA Wage Agreement.
1947: Congress requests coal mine operators and state agencies to report compliance with the Federal Mine Safety Code; 33% compliance is reported.
1952: Title II of the Federal Coal Mine Safety Act is passed. All mines employing 15 or more persons underground must comply with the act. Enforcement is limited to issuing orders of withdrawal for imminent danger or for failure to abate violations within a reasonable time.
1966: Amendments to the 1952 law are passed. Mines employing under 15 employees are included under the 1952 Act; stronger regulatory powers are given to the Bureau of Mines, such as the provision permitting the closing of a mine or section of a mine because of an unwarrantable failure to correct a dangerous condition.
1969: Federal Coal Mine Health and Safety Act is passed. The hazards of pneumoconiosis are, for the first time, given prominence, in addition to those of accidents.
1972: Black Lung Benefits Act of 1972 is passed. Several sections of Title IV are amended, liberalizing the awarding of compensation benefits.
1977: Federal Mine Safety and Health Act of 1977 is passed. It amends the Coal Mine Health and Safety Act of 1969, largely by adding health and safety standard setting, inspection, and research provisions for metal and nonmetal miners, while leaving the 1969 act largely intact. This act also consolidates health and safety compliance activities for general industry (OSHA) and mining (MSHA) in the Department of Labor.
1977: Black Lung Benefits Revenue Act of 1977 is passed. This provides for an excise tax on the sale of coal by the producer to establish trust funds to pay black lung benefits.
1977: Black Lung Benefits Reform Act of 1977 is passed, to improve and further define provisions for awarding black lung benefits. Additionally, it mandates that a detailed study of occupational lung disease be undertaken by the Department of Labor and NIOSH.
Source: With permission from Key MM, Kerr LE, Bundy M, eds. Pulmonary Reactions to Coal Dust. New York: Academic Press; 1971.

Definition of CWP

CWP is a specific occupational lung disease arising from the prolonged inhalation of coal mine dust. Black lung is a generic term that has been used legislatively and popularly to mean any lung disease that may arise from coal mine employment; this includes both pathologically defined CWP and obstructive airway disease among coal miners. CWP occurs in two forms: (1) simple (chronic) CWP and (2) complicated CWP, or progressive massive fibrosis (PMF). The characteristic lesion of simple CWP is the coal macule, a focal collection of dust-laden macrophages at the division of the respiratory bronchioles together with associated focal emphysema.19 Micro- and macronodules of simple CWP usually are smaller than 1 cm in diameter. Complicated CWP, or PMF, consists of solid, heavily pigmented masses generally greater than 2 cm in diameter, commonly located in the apical region of the lung and occurring on a background of simple CWP.

Environmental Exposures

Significant exposure to coal mine dust may occur not only underground but also in surface strip and auger mines, in coal preparation plants, and in coal-handling operations. U.S. coal reserves are extensive, covering some 400,000 square miles across the country (Fig. 24-1). Coal in the United States may be classified by four ranks: lignite, subbituminous, bituminous, and anthracite, reflecting the degree of metamorphosis of the coal. Anthracite deposits, which are mined on a limited basis only in northeastern Pennsylvania, are associated with the highest rates of pneumoconiosis. Bituminous coals, which are mined from central Pennsylvania westward to Utah, are less fibrogenic than anthracite, there being a gradient in toxicity from low-volatile bituminous (more fibrogenic) to subbituminous coal (less fibrogenic). Lignite, which also is mined on a limited basis, has not been adequately studied epidemiologically.

Figure 24-1. Coal deposits in the United States. Legend: A, Appalachia; EI, Eastern Interior; WI, Western Interior; TG, Texas Gulf; PR, Powder River; FU, Fort Union; GR, Green River; FC, Four Corners.

Workers engaged in face work and coal preparation often have the highest exposures to respirable coal dust and thus the highest rates of CWP. Drillers and other workers involved in tasks that generate free silica dust are also at risk of contracting silicosis. Prior to 1970, dust concentrations in face jobs in underground mines ranged from 6 to 10 mg/m3. Subsequent to the 1969 Act,11 dust levels were limited first to 3 mg/m3 and then to 2 mg/m3. Overall, the regulations brought about a marked reduction in dust exposures in coal mines,20 although not without problems being reported periodically.21 Recent evidence suggests that miners are not being uniformly protected from developing disease.22 New technological developments, such as the machine-mounted continuous respirable dust monitor,23 may help to identify and control future overexposures. Surface coal mining accounts for an increasing fraction of coal mined in the United States, while the underground mining workforce has been decreasing. Surface mining is prevalent in the western states and in some parts of Appalachia as "mountaintop removal." Surface miners generally experience lower levels of dust exposure than their counterparts underground.24 Some surface mine jobs, however, can involve very high exposures to silica, especially if dust control measures are missing or ineffective. Drillers, in particular, are at risk of both acute and chronic silicosis, and severe cases have been reported.25

Pathophysiology

Pathologically defined simple CWP consists, at a minimum, of the characteristic coal macule lesion(s).17,19 These may occur as microscopic manifestations of CWP associated with little or no functional impairment. With greater dust deposition in the lung, micronodules (less than 7 mm in diameter) and nodules (larger than 8 mm but less than about 1 cm) are found, predominantly in the upper lung zones (Fig. 24-2). These nodules consist of collagen in addition to a preponderance of reticulin. With increased profusion of nodular lesions in the lung come greater functional abnormalities, but until marked, CWP often is not associated with significant respiratory symptoms or limiting impairment.

Figure 24-2. Whole lung section showing simple CWP with associated focal emphysema but otherwise preserved lung architecture.

The presence of simple CWP is a significant risk factor for development of PMF, and the probability increases with the severity of simple CWP (Fig. 24-3).26,27 PMF lesions usually occur in the posterior portion of the upper lobes and in the superior segment of the lower lobes. Unlike silicotic lesions, they cut easily and may have cavities containing inky fluid. The margins may be rounded or irregular, with fibrous strands extending into adjacent lung tissue. Caplan's syndrome, consisting of pulmonary nodules associated with rheumatoid arthritis, occurs rarely in coal miners. The nodules, Caplan's lesions, are similar to large (up to 5 cm) silicotic nodules on gross examination, usually have smooth borders and concentric internal laminations, and, in contrast to PMF lesions, often have little dust contained within the lesion.19

Figure 24-3. Whole lung section showing progressive massive fibrosis with cavitation involving the superior segments of the lung on a background of simple CWP and extensive emphysema.

Although other forms of emphysema occur in coal miners as they do in the general population, focal emphysema is integral to the coal macule (Fig. 24-2). Focal emphysema is associated with local loss of elastic fibers and alterations in capillary density. The panlobular, irregular, centrilobular, and bullous emphysema associated with these massive lesions is often extensive and destructive; it frequently results in marked pulmonary impairment.19 Increasing pathological and physiological evidence has strengthened the view that coal mine dust exposure causes centrilobular emphysema.28–30 Chronic bronchitis, characterized pathologically by hypertrophy and hyperplasia of the bronchial mucous glands with an associated increase in the goblet cells of the small airways, occurs as a result of dust exposure.31 Clinically defined as the chronic production of phlegm, chronic bronchitis is a frequent clinical finding among coal miners,32 and its prevalence and incidence are related to dust exposure.13,33 Physiologically, miners with simple CWP have been found to have increased residual volumes, decreased maximal expiratory flow rates, reduction in PaO2, increased alveolar-arterial oxygen differences, and slight hyperventilation, especially with exercise.34,35 These findings may be nonexistent or slight in those in the earliest stages of CWP but become progressively more significant with advancing disease. In PMF (again varying with the extent of the lesions), moderate-to-severe airway obstruction is manifested by markedly reduced flow rates, decreased diffusing capacity, perfusion defects, and reduced PaO2, together with obstructive and restrictive mechanical changes in the lung.34 These findings often are marked. Pulmonary hypertension with cor pulmonale is a frequent consequence of advanced PMF.
Clinical Features There are no pathognomonic signs or symptoms of CWP. In the early stages of CWP, workers may be asymptomatic and without functional impairment. Chronic cough and phlegm are, however, associated with prolonged inhalation of coal dust. These symptoms may or may not be associated with functional impairment. As CWP progresses, shortness of breath and functional impairment become more common, yet some miners with advanced simple CWP remain symptom free. Those with PMF, especially those with large lesions, typically present with cough, phlegm, and shortness of breath. The chest radiograph is the standard method for detection of CWP. Although the radiographic examination is somewhat limited in sensitivity, the correlation between the profusion of CWP pathologically and radiographically is reasonably good.36 An internationally developed and accepted method of radiograph classification distributed by the International Labor Office can be used to describe the extent, size, shape, and distribution of radiographic opacities and also to describe pulmonary, cardiac, pleural, and other thoracic abnormalities that may appear on a chest radiograph.37 This classification divides simple pneumoconiosis into four major subcategories (0, 1, 2, and 3), each of which is subdivided into three categories (i.e., 1/0, 1/1, and 1/2), resulting in an approximation to a continuous scale. PMF is divided into three categories (A, B, and C), depending on lesion size. Although designed as a tool for public health surveillance and epidemiological investigation, this classification also has been adopted internationally to describe CWP clinically and is used for compensation purposes in some jurisdictions. Computerized tomography (CT or HRCT) is used routinely where available to clarify ambiguous findings on the standard chest radiograph and to investigate the possibility of cancer in miners with large opacities. 
Use of CT for routine screening is not currently recommended, although it is employed in some countries for medical monitoring of dust-exposed workers. There is no internationally accepted standardized method for classifying CT studies in dust-exposed workers; methods have been proposed but not yet validated.38
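The quasi-continuous scale described above can be sketched as an ordered list. This is a minimal illustration, not an official ILO tool; the comparison helper `compare_profusion` is hypothetical and invented here for clarity.

```python
# ILO small-opacity profusion scale: four major categories (0-3), each
# subdivided into three subcategories, giving a 12-step quasi-continuous
# ordinal scale. The value before the slash is the category the reader
# chose; the value after it is the alternative seriously considered
# ("-" and "+" mark the ends of the scale).
PROFUSION_SCALE = [
    "0/-", "0/0", "0/1",
    "1/0", "1/1", "1/2",
    "2/1", "2/2", "2/3",
    "3/2", "3/3", "3/+",
]

def compare_profusion(a: str, b: str) -> int:
    """Order two readings on the 12-step scale: returns -1, 0, or +1."""
    ia, ib = PROFUSION_SCALE.index(a), PROFUSION_SCALE.index(b)
    return (ia > ib) - (ia < ib)
```

Treating profusion as positions on this ordered scale is what allows epidemiological analyses to handle the classification as an approximation to a continuous variable.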
Epidemiology Mortality patterns among coal miners have been studied extensively and have generally shown increased standard mortality ratios (SMRs) for accidents, nonmalignant respiratory disease, pulmonary tuberculosis, and stomach cancer.39–43 Mortality rates by major radiographic category have shown significant excesses for those with complicated CWP over those with category 0,44 particularly for miners who developed PMF early in their working life.45 Little evidence has been found for a gradient of increasing mortality with increasing category of simple CWP, although Miller and Jacobsen showed reduced survival among those with simple CWP compared to those with category 0.45 Mortality from all nonviolent causes was found to be related to cumulative dust exposure.45 Importantly, mortality from bronchitis and emphysema was also related to dust exposure, an observation confirmed by Kuempel et al., using both underlying and contributing causes of death.46 The latter study also showed a relationship between mortality from pneumoconiosis and cumulative dust exposure. In the main, mortality from lung cancer in coal miners is not increased, but there is widely varying evidence regarding a link between CWP and lung cancer. In studies where excesses were found, lack of control for confounding factors may have been responsible.47 Using detailed case-control methods, Ames and colleagues were unable to detect a CWP-lung cancer relationship. By contrast, stomach cancer mortality has been almost uniformly increased in coal mining cohorts in both Britain and the United States,39,40,43 and a relationship with dust exposure has been detected.45 Ong and coworkers 48 have hypothesized, supported by laboratory mutagenesis data, that compounds in coal may undergo intragastric nitrosation or interaction with exogenous chemicals or both to form carcinogenic compounds that may with time cause stomach cancer. 
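The SMR comparisons cited above rest on a simple calculation: observed deaths in the cohort divided by the deaths expected from reference-population rates. The sketch below uses invented numbers purely for illustration.

```python
# A standardized mortality ratio (SMR) compares deaths observed in a
# cohort with the number expected if reference-population death rates
# applied to the cohort's age- and period-specific person-years.
def smr(observed_deaths: float, expected_deaths: float) -> float:
    """SMR > 1 indicates excess mortality relative to the reference."""
    return observed_deaths / expected_deaths

# Invented example: 130 stomach-cancer deaths observed where 100 were
# expected from national rates gives an SMR of 1.30, i.e., a 30% excess.
example = smr(130, 100)
```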
The Meyer hypothesis,49 which posits that miners with good lung clearance are at increased risk of stomach cancer because of ingestion of cleared dust while those with impaired clearance get nonmalignant lung disease, has been invoked as one explanation of the increased mortality from stomach cancer in coal miners. This hypothesis was confirmed in one analysis using CWP as an indicator of impaired clearance,50 but not in another using airway obstruction as the indicator.51 Morbidity studies of coal miners have dealt with various outcomes relating to nonmalignant pulmonary disease. Preeminent among these has been the association between radiographic evidence of CWP and dust exposure. In 1959, the Pneumoconiosis Field Research (PFR), a scientific study initiated by the National Coal Board of Great Britain, began a massive, long-term cohort study of 26 collieries. After 10 years of study, analysis of the respirable dust and radiographic findings provided clear dose-response relationships, which resulted in new dust standards in the United States and in Great Britain.52 These findings were confirmed in a subsequent study of 10 of the original collieries (Fig. 24-4).53 Free silica content in respirable samples was found not to influence pneumoconiosis risk, once cumulative exposure to mixed mine dust was taken into account. Despite this, it was found that a small number of miners with rapid progression had higher exposure to free silica, suggesting the development of silicosis rather than CWP.54 Further examination of these data has led to a warning that even brief overexposures to silica in the coal mine can be hazardous.55 Coal rank, in addition to mixed mine dust exposure, has consistently been found to be an important predictor of CWP prevalence and incidence.56–58 A substantial degree of variation exists between mines that cannot be accounted for by dust exposure and other environmental factors.59
Findings from similar studies in the United States conducted by NIOSH are consistent with the British pneumoconiosis field research data (Fig. 24-5).57,60 Because of the strong association between PMF and respiratory impairment and increased mortality, the attack rate of PMF has been of particular interest. The risk of developing PMF increases with
Figure 24-4. Lines (a) and (b) are estimates of probabilities of developing category 2 or 3 of simple pneumoconiosis over an approximately 35-year working life at the coalface, in relation to the mean dust concentration experienced during that period. (a) is based on 10 years of data, Interim Standards Study, Pneumoconiosis Field Research. (b) is an update of (a) based on 20 years of data, Pneumoconiosis Field Research. (Source: Data from Hurley JF, et al. Simple Pneumoconiosis and Exposure to Respirable Dust: Relationships from Twenty-Five Years’ Research at Ten British Coal Mines. Institute of Occupational Medicine, Report No. TM/79/13.)
increasing radiographic category of CWP,61 and with progression of CWP.27 These studies are important because they provide the basis for recommending removal of a miner with radiographic evidence of CWP from areas of high dust exposure, as is implemented in the federal regulations associated with the 1969 Act.11 It is important to note, however, that there is the potential for PMF to develop directly from a background of category 0 in response to dust exposure.62 This indicates that the incidence of PMF cannot be controlled merely by the prevention of simple CWP. The attack rate of PMF does not appear to depend on presence of pulmonary tuberculosis, as once suspected.17,19,63 Smoking has not been found to affect CWP development,64 nor did bronchitis appear to play a role.65 The exposure-response relationship for CWP and dust exposure is similar for current coal
Figure 24-5. Ten-year predicted incidence and progression of CWP for various starting categories. (Source: The Division of Respiratory Disease Studies/NIOSH.)
Coal Workers’ Lung Diseases
miners and ex-miners, although ex-miners had more disease owing to higher exposures.66 Although rounded-type small radiographic opacities have been traditionally studied in connection with CWP, there is evidence that small irregular opacities also increase in prevalence with degree of dust exposure.67,68 Small irregular opacities may be linked with lung function deficits.69 While radiographic evidence of CWP has been the major focus of epidemiological research on CWP, much attention has also been paid to coal dust exposure and other nonmalignant lung diseases (including bronchitis, obstructive airway disease, and emphysema). Unlike CWP, these diseases are known to be of multifactorial etiology, including a major influence of cigarette smoking among smokers. Hence their interpretation and significance in terms of occupational exposure have been associated with some controversy. There is now overwhelming evidence of an exposure-response relationship for cumulative dust exposure and diminished ventilatory function. This has been found in cross-sectional studies,12,70–72 and in longitudinal studies.73,74 Smoking was not found to potentiate the effect of dust exposure, nor was presence of CWP a prerequisite for ventilatory function loss.
Although the average effect of dust exposure obtained from the exposure-response analyses may appear small, this appearance is misleading, and there is evidence that some miners suffer important deficits in ventilatory function from their work.26,75 Severe declines in underground coal miners are associated with increased mortality,76 and may not only be due to dust exposure but also arise from other airborne factors, such as water used for dust suppression.77 There is no epidemiologic evidence that the effects of smoking and dust exposure differ in nature.78 More recent evidence suggests that new recruits to mining suffer large initial declines in ventilatory function, followed by lesser long-term declines.33,79,80 Respiratory symptoms associated with chronic bronchitis have been shown to be related to cumulative dust exposure and its surrogates, in both smokers and never smokers.13,32,81 The presence of emphysema, as detected on the chest radiograph, is linked with extent of cumulative dust exposure.82 This finding is consistent with the results of several pathologic studies, which indicate that emphysema is associated with both retained dust and cumulative exposure (or its surrogates) during life.15,16,83
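The cumulative dust exposure metric underlying these exposure-response analyses is, in essence, dust concentration integrated over time worked. A minimal sketch follows; the job history shown is invented for illustration.

```python
# Cumulative exposure = sum over jobs of (respirable dust concentration,
# mg/m3) x (time worked at that concentration, years), yielding
# mg/m3-years, the dose metric used in exposure-response analyses.
def cumulative_exposure(job_history):
    """job_history: iterable of (concentration_mg_m3, years) tuples."""
    return sum(conc * years for conc, years in job_history)

# Invented career: 10 years at the coalface near the 2.0 mg/m3 standard,
# then 15 years in a lower-dust job at 1.0 mg/m3.
dose = cumulative_exposure([(2.0, 10), (1.0, 15)])  # 35.0 mg/m3-years
```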
Prevention The key to preventing CWP is prevention of prolonged inhalation of significant concentrations of coal mine dust. This can be accomplished by the control of respirable coal mine dust through proper ventilation, use of water spray dust suppression, and enclosure of mining operations.84 Secondary prevention strategies, for example, removal of miners with early evidence of CWP to low-dust jobs, can assist in reducing the incidence of severe disease. Both strategies were mandated by Congress in the Federal Coal Mine Health and Safety Act of 1969 and have been implemented with substantial but incomplete success in underground operations of the U.S. coal industry. Since passage of the 1969 Act, respirable dust levels have been reduced for most high-risk jobs to meet the 2.0 mg/m3 standard. Recent evidence suggests that control of dust for prevention of CWP will have similar rates of success with other coal-mining-related lung diseases.85 NIOSH CWP surveillance of U.S. miners has documented decreases in radiographic prevalence of CWP (category 1 or greater) over the period 1987–2001 from 20% to about 5% in miners with 25 or more years in mining. In contrast, prevalence rates for miners with less than 20 years tenure in mining have remained relatively stable, ranging from about 1–4% in 2001 for those with 0–9 to 15–19 years of tenure (Fig. 24-6). Despite these gains in prevention, concern has been expressed about the adequacy of control measures,86 and there remains evidence of excessive risk for underground coal miners in certain localities, as manifested by cases of rapid progression of CWP.22 In addition, although dust concentrations in surface mines have averaged less than half those of underground mining, high exposures to coal dust and free silica may occur for those who drill, crush, and prepare coal for transport. NIOSH has described several cases of acute or accelerated
Figure 24-6. Trends in coal workers’ pneumoconiosis prevalence by tenure among examinees employed at underground coal mines, U.S. National Coal Workers’ X-Ray Surveillance Program, 1987–2002.
silicosis in young (<35 years old) drillers, and has recommended the use of wet drilling and exhaust ventilation as effective prevention measures.25 In response to evidence of limitations in the effectiveness of current U.S. efforts to fully control lung disease in coal miners, NIOSH produced comprehensive recommendations for addressing this problem.87 This criteria document makes the following recommendations:
• Control of respirable coal mine dust to 1 mg/m3
• Improved engineering control and work practices
• Improved hazard surveillance
• Extension of health screening and surveillance to include tests of pulmonary function for all coal miners—both underground and surface
In response to the NIOSH recommendations, the U.S. Secretary of Labor empaneled an Advisory Committee on the Elimination of Pneumoconiosis Among Coal Mine Workers.88 This committee reviewed the scientific data on the causes of disease persistence and issued 20 recommendations. These include recommendations for improved dust control and for inspection and enforcement of limits on exposure to coal mine dust, including silica dust. A strengthened program of medical screening and health surveillance was also endorsed. Ultimately, improved prevention depends on adoption and application of these recommendations.
REFERENCES
1. Thomson W. On black expectoration and deposition of black matter in the lungs. Medico-Chirurgical Transactions. 1837;20:230. 2. Medical Research Council of Great Britain. Chronic Pulmonary Diseases in South Wales Coal Miners. Special Report Series 243. London: Medical Research Council of Great Britain; 1942. 3. Medical Research Council of Great Britain. Chronic Pulmonary Diseases in South Wales Coal Miners. Special Report Series 244. London: Medical Research Council of Great Britain; 1943. 4. Sayers RR, Bloomfield JJ, Dallavalle JM. Anthraco-Silicosis (Miners’ Asthma): A Preliminary Report of a Study Made in the Anthracite Region of Pennsylvania. Special Bulletin No. 41. Harrisburg, PA: Pennsylvania Department of Labor and Industry; 1934. 5. House of Representatives Subcommittee of the Committee of Labor. An Investigation Relating to Health Conditions of Workers Employed in Construction and Maintenance of Public Utilities. 74th Congress, HJ Res. 449. Washington, DC: 74th Congress; 1936. 6. Doyle HN, Noehren TH. Pulmonary Fibrosis in Soft Coal Miners: An Annotated Bibliography on the Entity Recently Described as Soft Coal Pneumoconiosis. U.S. Public Health Service Bibliography, Ser. 11. Washington, DC: U.S. Public Health Service; 1954.
7. Levine MD, Hunter MB. Clinical study of pneumoconiosis of coal workers in Ohio river valley. JAMA. 1957;163:1–9. 8. Lieben J, Pendergrass E, McBride WW. Pneumoconiosis study in central Pennsylvanian coal miners. J Occup Med. 1961;5:376–88. 9. Stoeckle JD, Hardy HL, King WB, Nemiah JC. Respiratory disease in U.S. soft-coal miners: clinical and etiological considerations. A study of 30 cases. J Chron Dis. 1961;15:887–905. 10. Lainhart WS, Felson B, Jacobson G, Pendergrass EP. Pneumoconiotic lesions in bituminous coal miners and metal miners. Arch Environ Health. 1968;16:207–10. 11. Federal coal mine health and safety act. Public Law 91-173., 1969;2917. 12. Rogan JM, Attfield MD, Jacobsen M, et al. Role of dust in the working environment in development of chronic bronchitis in British coal miners. Br J Ind Med. 1973;30:216–17. 13. Rae S, Walker DD, Attfield MD. Chronic bronchitis and dust exposure in British coalminers. In: Walton WH, ed. Inhaled Particles III. Old Woking, Surrey, England: Unwin Brothers; 1971:883–96. 14. Ryder RC, Lyons JP, Campbell H, Gough J. Emphysema and coal workers’ pneumoconiosis. BMJ. 1970;3:481–7. 15. Leigh J, Driscoll TR, Cole BD, Beck RW, Hull BP, Yang J. Quantitative relation between emphysema and lung mineral content in coalworkers. Br J Ind Med. 1994;51;400–7. 16. Ruckley VA, Fernie JM, Chapman JS, et al. Comparison of radiographic appearances with associated pathology and lung dust content in a group of coal workers. Br J Ind Med. 1984;41:459–67. 17. Lee DHK. Historical aspects. In: Key MM, Kerr LE, Bundy M, eds. Pulmonary Reactions to Coal Dust, 1953–1977. New York, NY: Academic Press; 1971: 9. 18. Federal mine safety and health act of 1977. Public Law 91-173. Amended by Public Law 95–164, 1977;101. 19. Kleinerman J, Green FHY, Laqueur W, et al. Pathology standards for coal workers’ pneumoconiosis. Arch Pathol Lab Med. 1979;103: 375–432. 20. Parobeck PS, Jankowski RA. 
Assessment of the respirable dust levels in the nation’s underground and surface coal mining operations. Am Ind Hyg Assoc J. 1979;40:910–5. 21. Mine Safety and Health Administration. Report of the Statistical Task Team of the Coal Mine Respirable Dust Task Group. Washington DC: U.S. Department of Labor; 1993. 22. Antao VC, Petsonk EL, Sokolow LZ, et al. Rapidly progressive coal workers’ pneumoconiosis in the United States: Geographic clustering and other factors. Occup Environ Med. 2005;62:670–74. 23. National Institute for Occupational Safety and Health. Machinemounted continuous respirable dust monitor. Technol News. 1997;463:1–2. 24. Piacitelli GM, Amandus HA, Dieffenbach A. Respirable dust exposures in U.S. surface coal mines (1982–1986). Arch Environ Health. 1990;45;202–9. 25. National Institute for Occupational Safety and Health. Request for Assistance in Preventing Silicosis and Deaths in Rock Drillers. NIOSH Alert. DHHS (NIOSH) Publication No. 92-107. Cincinnati, OH: National Institute for Occupational Safety and Health; 1992. 26. Hurley JF, Soutar CA. Can exposure to coalmine dust cause a severe impairment of lung function? Br J Ind Med. 1986;43;150–7. 27. McLintock JS, Rae S, Jacobsen M. The attack rate of progressive massive fibrosis in British miners. In: Walton WH, ed. Inhaled Particles III, Old Woking, Surrey, England: Unwin Brothers; 1971:933–52. 28. Worth G. Emphysema in Coal Workers. Am J Ind Med. 1984;6: 401–3. 29. Soutar CA. Update on lung disease in coal miners. Br J Ind Med. 1987;44:145–8. 30. Ruckley VA, Seaton A. Emphysema in coalworkers. Thorax. 1981;36:716. 31. Douglas AN, Lamb D, Ruckley VA. Bronchial gland dimensions in coalminers: influence of smoking and dust exposure. Br J Ind Med. 1982;37:760–4.
32. Kibelstis JS, Morgan EJ, Reger R, et al. Prevalence of bronchitis and airway obstruction in American bituminous coal miners. Am Rev Respir Dis. 1973;108:886–93. 33. Seixas NS, Robins TG, Attfield MD, Moulton LH. Exposure-response relationships for coal mine dust and obstructive lung disease following enactment of the Federal Coal Mine Health and Safety Act of 1969. Am J Ind Med. 1992;21:715–34. 34. Lapp NL, Seaton A. Pulmonary function in coal workers’ pneumoconiosis. In: Key MM, Kew LE, Bundy M, eds. Pulmonary Reactions to Coal Dust. New York: Academic Press; 1971:153–85. 35. Rasmussen DL, Laqueur WA, Futterman HD. Pulmonary impairment in Southern West Virginia coal miners. Am Rev Respir Dis. 1968;98:658–67. 36. Wagner GR, Attfield MD, Parker JE. Chest radiography in dust-exposed miners: promise and problems, potential and imperfections. In: Banks DE, ed. Occupational Medicine: State of the Art Reviews. Vol 8. No. 1. Philadelphia: Hanley and Belfus; 1993:127–41. 37. International Labour Office. International Classification of Radiographs of Pneumoconiosis (2000 edition). Occupational Safety and Health Series, No. 22. Geneva: International Labour Office; 2002. 38. Kusaka Y, Hering KG, Parker JE. International Classification of HRCT for Occupational and Environmental Respiratory Diseases. Tokyo: Springer-Verlag; 2005. 39. Stocks P. On the death rates from cancer of the stomach and respiratory diseases in 1949–53 among coal miners and other residents in counties of England and Wales. Br J Cancer. 1962;16:592–8. 40. Enterline PE. Mortality rates among coal miners. Am J Public Health. 1964;54:758–68. 41. Carpenter RG, Cochrane AL, Clarke WG, Jonathan G, Moore F. Death rates of miners and ex-miners with and without coalworkers’ pneumoconiosis in South Wales. Br J Ind Med. 1993;50(7):577–85. 42. Cochrane AL, Carpenter RG, Moore F, Thomas J. The mortality of miners and ex-miners in the Rhondda Fach. Br J Ind Med. 1964;21:38–45. 43. Rockette H.
Mortality Among Coal Miners by the UMWA Health and Retirement Funds. DHEW (NIOSH) publication 77-155. Washington, DC: U.S. Department of Health, Education, and Welfare; 1977. 44. Ortmeyer CE, Costello J, Morgan WKC, Swecker S, Petersen MR. The mortality of Appalachian coal miners. Arch Environ Health. 1974;29:67–72. 45. Miller BG, Jacobsen M. Dust exposure, pneumoconiosis, and mortality of coal miners. Br J Ind Med. 1985;42:723–33. 46. Kuempel ED, Stayner LT, Attfield MD, Buncher CR. Exposure-response analysis of mortality among coal miners in the United States. Am J Ind Med. 1995;28:167–84. 47. Ames RG, Amandus H, Attfield M, Green FY, Vallyathan V. Does coal mine dust present a risk for lung cancer? A case-control study of U.S. coal miners. Arch Environ Health. 1983;38:331–3. 48. Ong TM, Whong WZ, Ames RG. Gastric cancer in coal miners: an hypothesis of coal mine dust causation. Med Hypotheses. 1985;12:159–65. 49. Meyer MB, Luk GD, Sotelo JM, Cohen BH, Menkes HA. Hypothesis: the role of the lung in stomach carcinogenesis. Am Rev Respir Dis. 1980;121:887–92. 50. Swaen GMH, Meijers JMM, Slangen JJM. Risk of gastric cancer in pneumoconiotic coal miners and the effect of respiratory impairment. Occup Environ Med. 1995;52:606–10. 51. Ames RG, Gamble JF. Lung cancer, stomach cancer, and smoking status among coal miners. Scand J Work Environ Health. 1983;9:443–8. 52. Jacobsen M, Rae S, Walton WH, Rogan JM. The relation between pneumoconiosis and dust exposure in British coal mines. In: Walton WH, ed. Inhaled Particles III. Old Woking, Surrey, England: Unwin Brothers; 1971:903–9. 53. Hurley JF, Burns J, Copland L, Dodgson J, Jacobsen M. Coalworkers’ simple pneumoconiosis and exposure to dust at 10 British coalmines. Br J Ind Med. 1982;39:120–7.
54. Seaton A, Dodgson J, Dick JA, Jacobsen M. Quartz and pneumoconiosis in coalminers. Lancet. 1981;1272–5. 55. Buchanan D, Miller BG, Soutar CA. Quantitative relations between exposure to respirable quartz and risk of silicosis. Occup Environ Med. 2005;60:159–64. 56. Walton WH, Dodgson J, Hadden GG, Jacobsen M. The effect of quartz and other non-coal dusts in coalworkers’ pneumoconiosis. In: Walton WH, ed. Inhaled Particles IV, Volume 2. Old Woking, Surrey, England: Unwin Brothers; 1977:669–89. 57. Attfield MD, Morring K. An investigation into the relationship between coal workers’ pneumoconiosis and dust exposure in U.S. coal miners. Am Ind Hyg Assoc J. 1992;53:486–92. 58. Reisner MTR, Robock K. Results of epidemiological, mineralogical, and cytotoxicological studies on the pathogenicity of coal-mine dusts. In: Walton WH, ed. Inhaled Particles IV. Oxford: Pergamon Press; 1977:703–16. 59. Crawford NP, Bodsworth FL, Dodgson J. A study of the apparent anomalies between dust levels and pneumoconiosis at several British collieries. Ann Occup Hyg. 1982;26:725–44. 60. Attfield MD, Seixas NS. Prevalence of pneumoconiosis and its relationship to dust exposure in a cohort of U.S. bituminous coal miners and ex-miners. Am J Ind Med. 1995;27:137–51. 61. Cochrane AL. The attack rate of progressive massive fibrosis. Br J Ind Med. 1962;19:52–64. 62. Hurley JF, Maclaren WM. Dust-Related Risks of Radiological Changes in Coalminers Over a 40-Year Working Life: Report on Work Commissioned by NIOSH. Report No. TM/79/09. Edinburgh, Scotland: Institute of Occupational Medicine; 1987. 63. Dick JA. The Role of Pulmonary Tuberculosis in the Causation of Progressive Massive Fibrosis in Coal Workers in Great Britain. Vth International Pneumoconiosis Conference, Caracas, Venezuela, 29 October–3 November 1978. Bremerhaven: Wirtschaftverlag NW; 1985:409–21. 64. Jacobsen M, Burns J, Attfield MD. Smoking and coalworkers’ simple pneumoconiosis. In: Walton WH, ed. Inhaled Particles IV.
Oxford: Pergamon Press; 1977:759–72. 65. Muir DCF, Burns J, Jacobsen M, Walton WH. Pneumoconiosis and chronic bronchitis. Br J Ind Med. 1977;2:424–7. 66. Soutar CA, Maclaren WM, Annis R, Melville AWT. Quantitative relations between exposure to respirable coalmine dust and coalworkers’ simple pneumoconiosis in men who have worked as miners but have left the industry. Br J Ind Med. 1986;43:29–36. 67. Amandus HE, Lapp NL, Jacobson G, Reger RB. Significance of irregular small opacities in radiographs of coalminers in the USA. Br J Ind Med. 1976;33:13–7. 68. Collins HPR, Dick JA, Bennett JG, et al. Irregularly shaped small shadows on chest radiographs, dust exposure, and lung function in coalworkers’ pneumoconiosis. Br J Ind Med. 1988;45:43–55. 69. Cockcroft AE, Wagner JC, Seal EME, Lyons JP, Campbell MJ. Irregular opacities in coalworkers’ pneumoconiosis—correlation with pulmonary function and pathology. Ann Occup Hyg. 1982;26:767–87. 70. Hankinson JL, Reger RB, Fairman RP, Lapp NL, Morgan WKC. Factors influencing expiratory flow rates in coal miners. In: Walton WH, ed. Inhaled Particles IV. Oxford, England: Pergamon Press; 1977:737–55. 71. Soutar CA, Hurley JF. Relation between dust exposure and lung function in miners and ex-miners. Br J Ind Med. 1986;43:307–20. 72. Attfield MD, Hodous TK. Pulmonary function of U.S. coal miners related to dust exposure estimates. Am Rev Respir Dis. 1992;14:605–9. 73. Love RG, Miller BG. Longitudinal study of lung function in coal miners. Thorax. 1982;37:193–7. 74. Attfield MD. Longitudinal decline in FEV1 in United States coalminers. Thorax. 1985;40:132–7. 75. Marine WM, Gurr D, Jacobsen M. Clinically important respiratory effects of dust exposure and smoking in British coal miners. Am Rev Respir Dis. 1988;137:106–12.
76. Beeckman LF, Wang ML, Petsonk EL, Wagner GR. Rapid declines in FEV1 and subsequent respiratory symptoms, illnesses, and mortality in coal miners in the United States. Am J Respir Crit Care Med. 2001;163:633–9. 77. Wang ML, Petsonk EL, Beeckman LF, Wagner GR. Clinically important FEV1 declines among coal miners: an exploration of previously unrecognized determinants. Occup Environ Med. 1999;56:837–44. 78. Attfield MD, Hodous TK. Does regression analysis of lung function data obtained from occupational epidemiologic studies lead to misleading inferences regarding the true effect of smoking? Am J Ind Med. 1995;27:281–91. 79. Seixas NS, Robins TG, Attfield MD, Moulton LH. Longitudinal and cross sectional analyses of exposure to coal mine dust and pulmonary function in new miners. Br J Ind Med. 1993;50:929–37. 80. Henneberger PK, Attfield MD. Coal mine dust exposure and spirometry in experienced miners. Am J Respir Crit Care Med 1996;153: 1560–6. 81. Leigh J, Wiles AN, Glick M. Total population study of factors affecting chronic bronchitis prevalence in the coal mining industry of New South Wales, Australia. Br J Ind Med. 1986;43:263–71. 82. Wagner GR, Attfield MD. Radiographic appearances of emphysema in coal miners: its relationship to pathologic abnormality and dust exposure. Epidemiology. 1995;6:S117.
83. Leigh J, Outhred KG, McKenzie HI, Glick M, Wiles AN. Quantified pathology of emphysema, pneumoconiosis, and chronic bronchitis in coal workers. Br J Ind Med. 1983;40:258–63. 84. National Institute for Occupational Safety and Health. Handbook for Dust Control in Mining. Pittsburgh, PA: National Institute for Occupational Safety and Health; 2003. 85. Soutar CA, Hurley JF, Miller BG, Cowie HA, Buchanan D. Dust concentrations and respiratory risks in coalminers: key risk estimates from the British Pneumoconiosis Field Research. Occup Environ Med. 2004;61:477–81. 86. Weeks JL. The fox guarding the chicken coop: monitoring exposure to respirable coal mine dust, 1969–2000. Am J Public Health 2003;93:1236–44. 87. National Institute for Occupational Safety and Health. Criteria for a Recommended Standard: Occupational Exposure to Coal Mine Dust. Washington, DC: National Institute for Occupational Safety and Health; 1995. 88. U.S. Department of Labor. Report of the Secretary of Labor’s Advisory Committee on the Elimination of Pneumoconiosis among Coal Mine Workers. Washington, DC: U.S. Department of Labor; 1996.
25 Silicosis
Stephen Levin • Ruth Lilis
Silicosis is a fibrotic lung disease produced by the inhalation of dust containing free crystalline silicon dioxide (SiO2). Free silica and silicates represent a large part of the earth’s crust. Silicon and oxygen are the two most important elements in the crust; about 27.7% of its composition is silicon, and 46.6% is oxygen. Free silica, the most widespread naturally occurring substance known to have a fibrogenic effect on the lungs, occurs in crystalline and amorphous forms. The crystalline forms that are fibrogenic are quartz, tridymite, and cristobalite; cryptocrystalline forms (consisting of minute crystals) are flint, chert, opal, and chalcedony. There are numerous forms of amorphous silica. At high temperatures (800–1000°C), quartz, the most common crystalline form of free silica, is converted into tridymite, and at even higher temperatures (1100–1400°C) it is transformed into cristobalite. Flint, chert, opal, chalcedony, and amorphous forms of free silica, including kaolin and diatomaceous earth, are also transformed into tridymite and cristobalite at these temperatures. This effect of high temperatures is of importance, since both tridymite and cristobalite are more potent than quartz in producing pulmonary fibrosis.
History Silicosis undoubtedly originated in antiquity with the mining and processing of metals and building stone. Agricola, in his book De Re Metallica (1556), was probably the first to recognize the adverse effects of inhaled dust. The first monograph on miners’ diseases, Von der Bergsucht by Paracelsus in 1567, included a classic description of “miners’ phthisis.” Van Diemerbroeck described how the lungs of stonecutters dying of “asthma” cut like masses of sand (Anatomi Corporis Humani, 1672). Bernardino Ramazzini included a description of diseases of stonemasons and miners in De Morbis Artificum Diatriba (1700). In England, the disease (phthisis) was described in flint knappers, needle pointers, knife grinders, fork sharpeners, and cutters of sandstone. John Scott Haldane (1923) described the cellular storage and retention of dust, including the long-term retention of silica, and recommended better ventilation of mines and factories. The distinction between tuberculosis and silicosis followed Koch’s discovery of the tubercle bacillus in 1882. The earliest description of silicosis in the United States, in the nineteenth century, was of employees of a cutlery plant; the disease was then detected among miners. Tunnel work generated numerous cases of silicosis. The tunnel at Gauley Bridge in West Virginia, where many workers contracted both acute and chronic silicosis in the 1930s, attracted much public attention. This resulted in the initiation of dust suppression and respiratory protection methods, improved industrial hygiene, and the introduction of laws for compensation of silicosis victims. Although the magnitude of the silicosis risk was gradually reduced in tunnel drilling and mining operations, significant silica exposure continued to occur in other industrial operations, such as foundries, the manufacture and use of silica flour, the production of detergent soaps with a high content of free silica, and sandblasting.
Work Exposures Mines. The quartz content of the ores mined and the intensity of exposure to dusts determine the relative risks of working in the following situations: metal ore mines, especially gold, copper, tin, silver, nickel, tungsten, uranium, and platinum; coal mines (drilling through rock or work in areas with narrow seams); mines or quarries for silicates (talc, kaolin, bentonite, mica, clays, etc.), slate, graphite, and fluorspar and their processing; drilling for exploration; and crushing operations. A recent study among South African gold miners with an average length of service of 21 years reported a silicosis prevalence of 19%.1 Quarries. Quarries of materials with high free crystalline silica content (quartz, sandstone, granite, slate, porphyry, etc.) and the processing of such materials place workers at risk for silicosis. Sandstone is almost pure silica; granite may have a variable silica content, 20–70%; and slate usually is approximately 40% silica. The cottage industry producing slate pencils in India has produced numerous cases of severe silicosis.2 Tunnels. Tunnel drilling and other excavations in rocks with high SiO2 content may represent a severe hazard, especially since ventilation usually is poor. Among the earliest studies of silicosis in the United States were those of disease in subway and tunnel builders in New York City in the mid-1920s. Cases of silicosis also have been traced to the excavation of deep foundations in sandstone in Australia. In northeastern Brazil, a high incidence of silicosis in pit diggers has been reported.3 Highway Repair.
A recent study examined highway construction trends, silicosis surveillance case data, and environmental exposure data and found that a large population of highway workers is at risk of developing silicosis from exposure to crystalline silica.4 High levels of silica exposure have been identified in this setting: respirable quartz concentrations during operations involving disturbance of concrete were reported to range up to 280 times the National Institute for Occupational Safety and Health (NIOSH) Recommended Exposure Limit of 0.05 mg/m3, assuming exposure for an 8- to 10-hour workday.5 Stone Masonry. Stonemasons may be subjected to significant and seldom well-controlled silica exposure. Sandstone and granite are the most important materials. Foundry Work. A significant risk of silica exposure is associated with the mixture of sand and clays used for molds; the temperature of the molten metal poured into the molds fuses some sand to the surface of the castings and converts some quartz into tridymite or even cristobalite. Sometimes the molds are dusted with powders of high
Copyright © 2008 by The McGraw-Hill Companies, Inc.
free-silica content, which adds a significant risk. The separation of castings from molds and cores, by shaking or knocking or automatically on vibrating tables, generates dangerous concentrations of dust. Fettling, the process by which the remnants of molds are removed from the castings by various abrading and polishing techniques, carries a substantial risk. Grinding. Grinding and polishing with sandstone or other abrasive materials of high silica content have largely been replaced by less hazardous procedures, since these methods have resulted in numerous severe cases of silicosis. Nevertheless, grinding with such synthetic materials as Carborundum does not totally eliminate the risk, since remnants of the silica-containing mold are a source of airborne silica dust. Crushed sand, sandstone, and quartzite have been used for metal polishes and sandpaper. Sandblasting. Sandblasting, used in foundries; in construction work, especially for the polishing of metal surfaces before painting and for cleaning building stone; and in the etching of glass and plastics, is an extremely hazardous occupation with high levels of exposure to very fine particles. Steel shot, iron garnet, and Carborundum are sometimes used instead of sand, but these replacement materials have not undergone adequate study in animal models, especially under conditions comparable to long-term occupational exposure.6 Sandblasting of relatively small objects can be done in enclosed chambers operated from the outside. A hazardous exposure persists, however, for workers entering the sandblasting booths to remove the objects or clean the floors. Sandblasting in construction work or shipbuilding is much more difficult to enclose; hence adequate respiratory protection of all persons in the work area is essential.
Sandblasting was banned in the United Kingdom in 1951 and in the European Economic Community in 1966 but is still widely used in the United States, where cases of rapidly progressing silicosis attributed to this type of exposure have been reported.7,8 The NIOSH recommends that silica sand be prohibited as abrasive blasting material and that less hazardous materials be used in blasting operations.4 Refractory Brick Manufacture. Manufacture of refractory brick and other refractory products (especially the acid refractories) carries a high risk of silicosis. Quartzite, sandstone, sands, or grits with a high quartz content are crushed, milled, shaped, dried, and fired at high temperatures, and a proportion of quartz is converted to tridymite and cristobalite. Bricklaying. Bricklaying and dismantling or repair of refractory bricks in ovens, furnaces, kilns, and boilers carry a high risk of silicosis, especially because of the presence of cristobalite along with the quartz. Pottery. The pottery industry may generate significant risks when the raw materials (mostly clays) contain free silica, even though use of powdered flint, which was a major source of silica in the pottery industry in Great Britain, has been discontinued. Glazes with variable contents of quartz also are used; firing at high temperatures (up to 1400°C) may create another source of significant silica exposure. In the United States, wollastonite, a calcium metasilicate, is used instead of flint, quartz, sand, and china clay, and, therefore, the health hazard in this industry is less than that reported in Great Britain in the past. Glass. Glass industry workers, especially those grinding and polishing with fine quartz, and sandblasters of glass have considerable silica exposure. Manufacture of Abrasive Soaps. The manufacture of soaps containing fine sand (silica flour) has in the past been a cause of rapidly progressing silicosis, “abrasive soap pneumoconiosis.” Fillers. 
Fillers used in the paint, rubber, plastic, and paper manufacturing industries may include silica flour, a finely ground, highly toxic quartz. It is sometimes incorrectly labeled as amorphous silica.9
Rapidly progressing silicosis has resulted from the production of silica flour in Australia10 and the United States.11 Enamel. Vitreous enameling, using mixtures of pulverized materials containing quartz at high temperatures, may present a significant risk; enamel spraying is particularly hazardous. Diatomaceous Earth. Calcined diatomaceous earth carries a significant risk, since part of the amorphous silica is transformed through calcination into cristobalite and tridymite. It is used in filters, absorbents, and abrasives and may generate significant exposure and risk of silicosis. Ceramic Fiber Insulation. Ceramic fiber insulation is being used increasingly as a refractory lining for heat-treating and preheating furnaces in the iron and steel industry. Studies have shown that the fibers undergo partial conversion to cristobalite when exposed to high temperatures.
Other Exposures In rural African women, cases of pneumoconiosis (“Transkei Silicosis”) were identified and attributed to silica inhalation during hand grinding of maize between rocks (sandstone). The criteria for diagnosis included rural domicile, radiographic and lung biopsy evidence of pneumoconiosis, no exposure to mining or industry, and no evidence of active tuberculosis.12 Five cases of silicosis, four of them with progressive massive fibrosis, have been reported in workers from two dental supply factories.13 Forty-two Brazilian stone carvers were examined with chest x-rays and high resolution CT studies; the prevalence of silicosis was 54%.14
Occurrence Accurate data on the occurrence of silicosis in various industries and in different parts of the world are difficult to obtain and hard to compare, in part because of different notification systems. Cross-sectional surveys of exposed populations, such as miners, indicate the prevalence of the disease. The attack rate or incidence of the disease is less well known. The incidence of silicosis undoubtedly increased in the majority of the industrialized countries until the 1950s. Methods of dust suppression and control that had been developed and applied mainly in large industrial facilities then led to a decrease in silicosis incidence. Dust control became more rigorous as the hazards were recognized, but smaller industries and new industrial processes continued to expose workers to dangerous levels of silica. In industrialized countries with intensive mining, such as West Germany, silicosis is still one of the most important problems of occupational medicine; as many as 3500 new cases and approximately 1500 deaths due to silicosis occurred annually in the 1960s, five times more than the total number of fatal work accidents. France reported a similar incidence and mortality from silicosis. In India, silicosis was diagnosed as soon as systematic examinations of miners were initiated in the 1950s and 1960s. In the Bihar mining area, 34% of those examined were found to have advanced silicosis. Similarly, in Japan a high prevalence of silicosis (63%) was found in some metal ore miners. Much of the available information is based on compensation cases. Because the criteria for compensation differ from country to country, only general trends can be detected. In the United Kingdom, for example, 721 persons were awarded industrial injury compensation for silicosis in 1957; in 1969, only 162 new awards for silicosis were made. Mining, quarrying, and slate industries had not shown a significant downward trend in silicosis rates, however. 
In the United States, the incidence of silicosis has decreased in the Vermont granite quarries,15 but metal mining is still an important cause of silicosis. A survey of more than 76% of the workforce in 50 metal mines, conducted by the Public Health Service and the Bureau of Mines between 1958 and 1961, revealed a silicosis prevalence of 3.4%. In one-third of cases, complicated silicosis was present. Prevalence was related to silica content of the rock, occupation, and length of exposure. Trasko in 1956 estimated the total number of silicosis
cases in 20 states to be about 6000.16 Miners and foundry workers were each represented by more than 1600 cases, but the number of cases was probably underestimated. In a British study of foundry workers,17 the prevalence of simple pneumoconiosis was 34% among fettlers and 14% in foundry floor workers. Similar data for the United States are not available. In 1971, milling of bentonite (sodium montmorillonite) was found to have produced severe silicosis in Wyoming18; a silicosis risk in this industry had not been suspected in the past. In 1983, the National Institute for Occupational Safety and Health estimated that approximately 3.2 million workers in 238,000 plants in the United States were potentially exposed to crystalline silica. Watts et al. (1984)19 analyzed respirable silica exposures in metal and nonmetal mines in the United States (41,502 samples taken from 1974 to 1981). Workers in sandstone, clay, shale, and various nonmetallic mineral mills had the highest exposures to silica dust. Crushing, grinding, sizing, and bagging operations and general labor tasks were associated with the highest exposures. In 1984, the U.S. Mine Safety and Health Administration identified approximately 2400 work sites in coal mines where the level of 5% silica in respirable dust had been exceeded, representing the work environment of 15,000–20,000 coal miners (about 10% of U.S. coal miners). Floor and roof samples were found to contain 18–82% quartz; coal itself contained only 1–4%.20 Continuous mining machines; cutting of roof, floor, and inclusion rock bands; and roof-bolting operations were the major sources of silica exposure.21 The median silica content of respirable dust in 1743 personal air samples collected by the U.S. Occupational Safety and Health Administration in U.S. foundries from 1974 to 1981 ranged from 7.3% to 12.0%. Of 10,850 samples collected in iron and steel foundries, 23% had concentrations in excess of 0.20 mg/m3 respirable silica.
Reports of a high (37%) prevalence of silicosis in workers in silica flour mills, with a significant proportion of cases developing massive fibrosis,7 and reports of acute silicosis in sandblasters in the Louisiana Gulf area22 point to the fact that silicosis continues to be an important occupational health risk, although the number of individuals affected probably has been reduced substantially. A recent report from the Centers for Disease Control and Prevention presented evidence for a decline in silicosis mortality and incidence in the United States during the period from 1968 to 2002,23 although evidence for considerable underreporting and underrecognition of silicosis and silicosis mortality has been reported as well.24,25 The International Agency for Research on Cancer (IARC) has reported that the estimated risk of death from silicosis by age 65, after 45 years of exposure at 0.1 mg/m3 silica (the current standard in many countries), was 13 per 1000, while the estimated risk at an exposure of 0.05 mg/m3 was 6 per 1000. Both of these risks are above the risk of 1 per 1000 typically deemed acceptable by the Occupational Safety and Health Administration (OSHA).26
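The exposure figures quoted in this chapter can be related directly to the NIOSH Recommended Exposure Limit with simple arithmetic. The following sketch is illustrative only; the constant and function names are assumptions, not an established tool:

```python
# Hedged sketch: relate a measured respirable crystalline silica
# concentration (full-shift average, mg/m^3) to the NIOSH Recommended
# Exposure Limit of 0.05 mg/m^3 cited earlier in the chapter.

NIOSH_REL_MG_M3 = 0.05  # NIOSH REL for respirable crystalline silica

def rel_multiple(concentration_mg_m3: float) -> float:
    """Return how many times the NIOSH REL a concentration represents."""
    return concentration_mg_m3 / NIOSH_REL_MG_M3

# The highway-construction measurements cited earlier reached about
# 280 times the REL, i.e., roughly 0.05 * 280 = 14 mg/m^3:
print(rel_multiple(14.0))  # 280.0
```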
Effects on Health Classic silicosis is a chronic and slowly progressive disease. Acute silicosis and silicoproteinosis (alveolar lipoproteinosis-like silicosis) occur in epidemic outbreaks under circumstances of heavy silica exposure. Sandblasting, abrasive soap manufacture, tunnel drilling, and refractory brick manufacture have been the major sources of such outbreaks. Dust concentration, particle size (particles in the 0.1–2 µm range reach the respiratory bronchioles and alveoli), and duration of dust exposure define the hazard. Thus high concentrations of fine dust overburden the limited direct clearance capacity of the distal zones of the lung, and longer exposures increase the risk of developing silicosis. The interactions of concentration, particle size, and duration of exposure are the main determinants of the attack rate, latency period, incidence, rate of progression, and outcome of the disease. In industrial processes in which silica-containing materials are heated at temperatures exceeding 800°C so that transformation into tridymite and cristobalite occurs, the higher fibrogenic potency of these forms of SiO2 results in a higher attack rate and more severe silicosis. In the superficial layers of refractory brick that have been repeatedly subjected to contact with molten metal, cristobalite may reach a concentration of 94%. Flux-calcination of
diatomaceous earth also results in high cristobalite concentrations (up to 35%). In experimental studies and in an investigation of human subjects with silicosis, silica particles have been shown to initially produce an alveolitis, characterized by sustained increases in the total number of alveolar cells, including macrophages, lymphocytes, and neutrophils. Advances in genomics and proteomics have provided tools for developing a better understanding of the molecular mechanisms involved in the pathogenesis of silicosis. In vitro and in vivo animal studies, as well as investigations in humans, strongly support the role of macrophage products in the development and progression of silicosis. Such products include enzymes and reactive oxygen species, including superoxide, hydrogen peroxide, and nitric oxide, which may cause lung damage; cytokines which recruit and/or activate polymorphonuclear leukocytes and thus result in further oxidant damage to the lung; and fibrogenic factors which induce fibroblast proliferation and collagen synthesis.27 Evidence has accumulated that implicates reactive oxygen species in the initial activation of alveolar macrophages (AMs),28,29 and that grinding or fracturing quartz particles breaks Si–O bonds and generates Si• and SiO• radicals on the surface of the cleavage planes.
Upon contact with water, these silica-based radicals can generate hydroxyl radicals (•OH) directly, causing lipid peroxidation, membrane damage, and cell death.30 Silica has been shown to induce apoptosis, marked by DNA fragmentation and increased levels of cytosolic histone-bound DNA fragments in human alveolar macrophages, mediated by activation of the interleukin-converting enzyme family of proteases.31 In rats chronically exposed by inhalation to nonoverload levels of crystalline silica dust, activation of nuclear factor-kappaB (NF-κB)/DNA binding in bronchoalveolar lavage cells was evident after five days of silica inhalation and increased linearly with continued exposure. Parameters of pulmonary damage, inflammation, and alveolar type II epithelial cell activity rapidly increased to a significantly elevated but stable new level through the first 41 days of exposure and increased at a steep rate thereafter. Pulmonary fibrosis was measurable only after this explosive rise in lung damage and inflammation, as was the steep increase in tumor necrosis factor-alpha (TNF-α) and interleukin-1 production from bronchoalveolar lavage cells and the dramatic rise in lavageable alveolar macrophages. Indicators of oxidant stress and pulmonary production of nitric oxide exhibited a time course similar to that for lung damage and inflammation, with the steep rise correlating with initiation of pulmonary fibrosis.
Staining for inducible nitric oxide synthetase and nitrotyrosine was localized in granulomatous regions of the lung and bronchial associated lymphoid tissue.32 NF-κB is induced in alveolar macrophages by silica exposure in a dose-dependent way, with a consequent increase in the expression of the TNF-α gene.33 Inducible nitric oxide synthase-derived nitric oxide has been shown in other studies to contribute to the pathogenesis of silica-induced lung disease.34 Crystalline and amorphous silica can directly upregulate the early inflammatory mediator COX-2, prostaglandin E (PGE) synthase, and the downstream antifibrotic prostaglandin E2 in primary human lung fibroblasts.35 The alveolar macrophage plays a prominent role in lung inflammation via the production of oxygen radicals, enzymes, arachidonic acid metabolites, and cytokines. Studies in miners with and without significant silicosis report that gene-environment interactions involving cytokine polymorphisms play a significant role in silicosis by modifying the extent of and susceptibility to disease.36 Silica exposure in mice has been found to induce a significant increase in interstitial macrophages with an antigen presenting cell phenotype, as well as an increase in the antigen presenting cell activity of alveolar macrophages.37 Bronchoalveolar lavage in silicosis and coal workers’ pneumoconiosis showed a large influx of mononuclear phagocytes, increased production of oxidants, fibronectin, neutrophil chemotactic factor, interleukin-6, and TNF-α.38 Macrophage-derived growth promoting activity factors were shown to have characteristics consistent with platelet-derived growth factor, insulin-like growth factor-1, and fibroblast growth factor-like molecules.39 An effect of age on macrophage function has been reported. In young rats, silica induced a significant increase in bronchoalveolar
lavage TNF-α and lactate dehydrogenase, as well as in cell numbers, which correlated with increased collagen deposition and silicotic nodule formation. In old rats, however, no changes in bronchoalveolar lavage or lung parameters were observed following silica instillation. These in vivo results were also confirmed in vitro, where silica failed to induce TNF release in alveolar macrophages obtained from old animals.40 Transforming growth factor-alpha (TGF-α), a cytokine with potent mitogenic activity for epithelial and mesenchymal cells, may play a role in the lung remodeling of silicosis. TGF-α may be critical in directing the proliferation of the type II pneumocytes that characterize silicosis.41 Transforming growth factor-beta 1 (TGF-β1) was demonstrated in fibroblasts and macrophages located at the periphery of silicotic granulomas and in fibroblasts adjacent to hyperplastic type II pneumocytes.42 Silica causes release of TNF from mononuclear phagocytes. Experimental studies indicate that silica can upregulate the TNF gene and thus increase TNF gene transcription in exposed cells.43 Inhalation of crystalline silica particles produces a rapid increase in the rate of synthesis and deposition of lung collagen. Lungs of silica-exposed rats showed increased alveolar wall collagen and fibrotic nodules at 79 and 116 days of exposure with increased collagenase and gelatinase activity. Matrix metalloproteinases were significantly elevated in alveolar macrophages after a 40-day exposure. Stromelysin expression was demonstrated in alveolar macrophages and cells within fibrotic nodules.44 Silica-induced fibrosis is unique among all the animal models and most human fibrotic lung disease thus far examined in that the excess collagen deposited in the lung contains normal ratios of the two major collagen types of the lung, types I and III; nevertheless it is biochemically different from normal lung collagen.
The difference seems to be due to altered intermolecular cross-links; there is an increased hydroxylysine content of collagen. Dysfunctional cross-links are more likely to be derived from hydroxylysine. Hydroxylysine replaces lysine in the primary structure of a specific collagen α-chain to form the altered cross-links. In the alveolar spaces of rats exposed to very high concentrations of quartz or cristobalite, a material similar to that found in human alveolar lipoproteinosis, together with a significant increase in the number of type II alveolar cells, has been detected. This alveolar material is acellular and has a high phospholipid content with osmiophilic bodies similar to those present as inclusions in type II alveolar cells. Phosphatidylcholine and phosphatidylglycerol are components of the increased amounts of surfactant found in the alveolar spaces under such circumstances. Significant increases in surfactant production associated with type II epithelial cell hypertrophy and hyperplasia were shown to be associated with a proportional enhancement of surfactant proteins (SP-A and SP-B) and phospholipids.45 Thus it seems that two different types of reactions can occur as a result of the penetration of silica particles into alveolar spaces: triggering of a fibrogenic reaction by altered macrophages or production of excess phospholipids by type II alveolar cells. The rate at which silica particles accumulate in the alveoli is of great importance; exposure to high concentrations results in lipoproteinosis; exposure to relatively lower concentrations of silica, over longer periods of time, leads to the development of typical nodular fibrosis. Most silicosis cases are of the classic nodular type, characterized by the presence of collagenous and hyaline nodules.
Pathology Silicotic nodules are readily felt in the lung and seen on the cut surface. Their size usually varies between 2 and 6 mm; they are hard, grayish, and more frequent in the apical and posterior parts of the lung. Sectioned nodules show a characteristic whorled pattern. The hilar lymph nodes most often are enlarged and also contain silicotic nodules. Large fibrotic masses tend to be located mostly in the upper and posterior parts of the lungs; they are the result of coalescence of individual nodules when their profusion is high. Cavitation in large fibrotic masses can occur and most often is due to complicating tuberculous infection; cavitation due to ischemic necrosis is relatively rare
in silicosis. Emphysema frequently is present when large fibrotic masses have developed. Enlargement of the right chambers of the heart and the pulmonary artery can be found in advanced silicosis. In a classic example of nodular disease in gold miners, the quartz content of the lungs is 2.5–3 g of the total 7–10 g dust content; in foundry workers it is between 1 and 2 g with approximately 10 g total dust content. In contrast, in stellate or diffuse fibrosis in hematite miners, the total dust content may be 60 g with 3.5 g quartz, and in coal miners, 40–55 g with 1–1.5 g quartz.46 Silicotic nodules initially appear in the area of the respiratory bronchiole and around arterioles. The nodules consist of concentric layers of collagen; hyalinization of the collagen occurs with time and progresses from the center to the periphery of the nodule; reticulin fibers usually are present in the periphery. A cellular peripheral layer is characteristic of relatively early lesions; it consists mostly of fibroblasts and macrophages. Particles of silica can be found in the center of the nodules; polarized light is particularly useful to visualize the birefringent SiO2 particles. The alveoli around the silicotic nodule most often are normal, although scar emphysema occasionally can be observed; centrilobular emphysema is not a feature of silicosis.47,48 Small pulmonary arterioles and venules are involved in the fibrotic process and are often obliterated. With continuous exposure, the silicotic nodules grow and new nodules appear. Progression may continue even after exposure has been discontinued, especially when the dust is characterized by high silica concentration and small particle size. Coalescence of nodules occurs when the profusion of silicotic nodules has increased beyond a critical level. Dense hyalinized collagen masses develop in which individual nodules can still be identified, especially at the periphery. 
These lesions destroy the normal architecture of the lung; necrosis in the avascular center can occur even in the absence of tuberculous infection, although the latter is a frequent complication. In rapidly developing silicosis, caused by exposure to high concentrations of fine silica particles, the characteristic pathological features consist of the rapid development of numerous small nodules, together with areas of diffuse fibrosis and the rapid coalescence of nodules into large fibrotic masses. Acute or hyperacute silicosis resembles idiopathic alveolar lipoproteinosis and has been associated with extremely high exposures to pure or almost pure free silica and very small particle sizes. The term silicolipoproteinosis has been proposed for this condition.49 Exposures in the manufacture of abrasive soap, quartz milling, the grinding of quartzite and sandstone to produce silica flour, and sandblasting with quartzite have been associated with silicolipoproteinosis. In this form of silicosis, the lungs are firm and edematous. A few silicotic nodules can be present; alveolar walls are infiltrated by mononuclear and plasma cells or thickened by fibrosis, and alveoli are filled with an eosinophilic PAS-positive lipid and proteinaceous fluid with numerous fine granules and desquamated cells. The latter are mostly type II alveolar cells, containing osmiophilic lamellar bodies. Diffuse interstitial pulmonary fibrosis is present, but silicotic nodules are rare or absent. These lesions have been reproduced in experimental animals exposed to inhalation of high concentrations of fine quartz particles.50,51 Proteinuria and renal failure have been associated with silica exposure from sandblasting or refractory bricks.52,53 This appears to represent the effect of high levels of silicon dioxide crystals transferred to the kidney after pulmonary deposition.
Clinical Features Classic nodular silicosis sometimes can be completely asymptomatic, although relatively numerous silicotic nodules can be present on the chest x-ray film. In most such cases, no abnormalities can be detected on physical examination. As the disease progresses, cough, sputum production, and dyspnea on exertion gradually develop in most cases. In some there is only a dry cough; in others small amounts of mucoid sputum are produced. An increased susceptibility to repeated respiratory
infections develops in many patients and can result in larger amounts of mucopurulent sputum. In the advanced stages of silicosis, distortion of the normal architecture of the bronchi develops, especially when coalescence into massive fibrosis has taken place. Rhonchi and wheezes can be detected in such cases and paroxysms of coughing can occur. Shortness of breath develops gradually as the disease progresses; initially it is limited to heavy exercise, but later it manifests itself with moderate or even minor efforts. Physical signs are practically absent in the initial stages of silicosis. With the development of massive fibrosis or of a major infectious complication such as tuberculosis, abnormalities on percussion and auscultation (rales, rhonchi, areas of reduced or increased resonance) and cyanosis can develop. Cor pulmonale is the most frequent complication of silicosis in industrialized countries. Pulmonary hypertension with a loud second pulmonic sound and corresponding electrocardiographic signs can be detected; overt congestive heart failure with hepatomegaly and peripheral edema is less frequent and is thought to occur mainly in cases with significant associated emphysema or marked chronic bronchitis. In patients with “acute” silicosis similar to idiopathic alveolar lipoproteinosis, symptoms develop rapidly over a period of several weeks or months; time from onset of exposure to first symptoms can vary from less than 1 year to a few years. Fatigue, cough, sputum production (mostly mucoid), chest pain of pleuritic type, rapidly progressive shortness of breath, weight loss, and rapid deterioration are characteristic for such cases. Shortness of breath at rest, cyanosis, and abnormalities on percussion and auscultation with presence of crepitations are noted frequently. The rapid and fatal course of the disease leads to death in hypoxic respiratory failure.54
Radiographic Findings The radiographic changes in silicosis are essential for the diagnosis and classification of the disease, for the evaluation of its progression, and for the detection of important complications, such as tuberculosis, emphysema, and cor pulmonale. Nevertheless, it should be emphasized that pathological changes precede, often by several years, the appearance of the earliest radiographic changes, since to be
detected on the standard posteroanterior chest film, the pathological changes (silicotic nodules) have to reach a certain size, profusion, and radiological density. Because of this radiological latency period of silicosis, a normal chest x-ray film does not exclude the existence of the pathological process of silicosis in a person with significant exposure. Nevertheless, the disease seldom is symptomatic in this stage of radiological latency, with the notable exceptions of “acute silicosis,” alveolar lipoproteinosis, and chronic bronchitis due to silica. The earliest radiographic changes consist of fine linear-reticular opacities, often described as “lace-like,” in the upper and middle lung fields and extending to the periphery. These linear-reticular opacities increase in thickness with time. The most characteristic radiographic abnormalities are silicotic nodules (Fig. 25-1), which usually appear initially in the middle and upper right lung fields. The earliest discrete round opacities are small, with a diameter of 1–3 mm and of low radiopacity. The diameter of silicotic nodules increases with time, as does their profusion and radiopacity, and they become more visible in most of the lung fields, with the exception of the lower lateral areas. The International Labour Office’s Classification of Radiographs of Pneumoconioses (1980) grades simple silicosis according to the profusion of the opacities, from 1/0 to 3/+, and to the size of most of the nodules, “p” for less than 1.5 mm, “q” for between 1.5 and 3 mm, and “r” for opacities with a diameter of more than 3 mm but less than 10 mm. The nodules often are seen against a background of a linear-reticular pattern. As the number of rounded opacities increases, the profusion progresses, and eventually coalescence of nodules, initially in small limited areas in the upper lateral parts of the lung fields, becomes apparent.
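The ILO size grading just described amounts to a simple threshold rule. The following sketch encodes it for illustration; the function name is an assumption, and the assignment of the exact boundary values 1.5 mm and 3 mm is a choice made here, since the text gives ranges rather than boundary conventions:

```python
# Sketch of the ILO (1980) size codes for small rounded opacities as
# described in the text; handling of diameters exactly at 1.5 mm and
# 3 mm is an assumption.

def ilo_small_opacity_code(diameter_mm: float) -> str:
    """Map a rounded-opacity diameter (mm) to the ILO size code."""
    if diameter_mm < 1.5:
        return "p"
    if diameter_mm <= 3.0:
        return "q"
    if diameter_mm < 10.0:
        return "r"
    return "not a small opacity (10 mm or more)"

print(ilo_small_opacity_code(2.0))  # q
```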
Figure 25-1. Simple silicosis. Small, rounded opacities (q; diameter approximately 3 mm) in upper lung fields, bilaterally.
At this stage, when coalescence into large opacities is suspected (and their size is relatively small, less than 5 cm in diameter), they are classified as Ax. This marks the point at which simple silicosis progresses to complicated silicosis. As the large opacity becomes definite, it is classified according to size into category "A" (less than 5 cm), "B" (one or more opacities with a diameter of more than 5 cm but with a combined area of less than the equivalent of the right upper zone), and "C" (one or more opacities whose combined area exceeds the equivalent of the right upper zone). The large opacities in silicosis usually are bilateral and most often located in the upper, but also in the middle, lung fields (Figs. 25-2 and 25-3). When the opacities are observed over time, contraction may be noted, and migration to the enlarged hilar opacities is not unusual. Distortion of the pulmonary and mediastinal structures is frequent in this stage, as are emphysematous changes, including bullae, in the rest of the lung. Hilar lymph node enlargement is observed quite consistently in silicosis; calcification of the periphery of the lymph nodes, "eggshell calcification," may be present occasionally. Pleural adhesions also may be found; quite characteristic are the longitudinal pleural plicatures extending from the diaphragmatic pleura along the interlobar fissures. In rapidly progressive silicosis, the radiological latency period (a few months to 2 years) is much shorter than in classic silicosis. The radiological abnormalities are different from those of classic (nodular) silicosis, a fact that may have contributed to the underestimation of the incidence of this form of silicosis. Early changes consist of a diffuse haziness of reticular, irregular opacities in the middle and lower lung fields. Rounded and linear opacities develop rapidly over the entire lung fields. Occasionally, very small opacities are the main feature. The hilar shadows are only moderately enlarged. Rapid coalescence and large opacities, sometimes involving an entire lobe, can be observed in some cases; in others the numerous small, rounded opacities do not coalesce, and death ensues rapidly. Alveolar lipoproteinosis is characterized by diffuse, hazy infiltrates found most often in the lower lung fields, particularly above the diaphragm. Changes similar to those characteristic of pulmonary edema are sometimes present; in other cases, small rounded opacities indicating alveolar filling can be observed.
Figure 25-2. Small, rounded opacities (r; diameter approximately 3–10 mm) predominantly in upper and middle lung fields; large opacity due to coalescence of nodules in left upper lung field (size B, according to International Classification of Radiographs of Pneumoconioses).
Pulmonary Function
With classic silicosis, the typical change in pulmonary function is a gradual reduction in lung volume, beginning with reduction in vital capacity. The functional changes are less than would be predicted from the radiographic evidence. Airway obstruction, however, often is present because chronic bronchitis frequently coexists, especially in foundry workers, brickworkers, hematite miners, and workers in user industries. The diffusing capacity is normal until relatively late in the course of the disease. Thus, in classic silicosis there is a decrease in total lung capacity, vital capacity, and residual volume, with arterial blood oxygen tension normal or slightly decreased. A mixed pattern of restrictive and obstructive ventilatory dysfunction is found most often in advanced, complicated silicosis. Evidence has accumulated suggesting that chronic exposure to silica dust at levels that do not cause disabling silicosis may cause the development of chronic bronchitis, emphysema, and/or small airways disease that can lead to airflow obstruction, even in the absence of radiological silicosis.55 Ventilation-perfusion imbalance occurs in the more advanced stages of the disease. Impairment of gas exchange and signs of cor pulmonale can develop. The coexistence of chronic bronchitis with airway obstruction results in reduced forced expiratory volume in one second (FEV1), reduced flow at 25–75% of vital capacity (FEF25–75), and in increased airway resistance. With severe obstruction, arterial blood oxygen tension is reduced and carbon dioxide tension increased. Acute silicolipoproteinosis almost always causes marked restrictive dysfunction with reduced diffusing capacity and arterial desaturation.
Complications
The complications of silicosis include tuberculosis, cor pulmonale, and Caplan's syndrome. Tuberculosis has been the most persistent problem over the past 150 years. There is no doubt that involvement of the lungs by silicosis increases the susceptibility to tuberculosis infection. In contrast, there is no added risk of tuberculosis after exposure to asbestos or other nonsilica dusts. Thus, patients with silicosis in whom tuberculosis is suspected on the basis of a positive tuberculin test and a suggestive x-ray film should be treated with antituberculous chemotherapy, because demonstration of mycobacteria by smear or culture is difficult in silicotuberculosis, and the disease sometimes advances rapidly. The high risk of tuberculosis in subjects with silicosis has been quantified in a cohort study of 1153 gold miners with and without silicosis, followed for seven years. The annual incidence of tuberculosis was 981/100,000 in the 335 men without silicosis and 2707/100,000 in the 818 men with silicosis.56 Tuberculosis mortality in the United States is still seen in excess among workers exposed to silica dust.57 Chronic bronchitis is not infrequent in some occupational groups exposed to silica dust, such as foundrymen.11 Bronchitis due to acute or subacute infections of the distorted bronchi associated with advanced silicosis has been well characterized. Emphysema is considered a component of the silicotic process. Small areas of scar emphysema can be found around nodules; coalescence of nodules into fibrotic masses often produces larger areas of emphysema, often bullous, mostly in the lower lung fields.
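The cohort figures above (ref. 56) can be reduced to a single rate ratio; this minimal arithmetic check (variable names are mine) makes the roughly 2.8-fold excess explicit:

```python
# Annual tuberculosis incidence per 100,000, as reported in the
# gold-miner cohort cited in the text (ref. 56)
rate_with_silicosis = 2707     # 818 men with silicosis
rate_without_silicosis = 981   # 335 men without silicosis

# Incidence rate ratio: how many times higher the rate is in silicotics
rate_ratio = rate_with_silicosis / rate_without_silicosis
print(round(rate_ratio, 2))    # prints 2.76
```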
In a group of 1553 South African gold miners who had undergone autopsy examination between 1974 and 1987, it was found that a miner with 20 years in high-dust occupations had 3.5 times higher odds of significant emphysema at autopsy than a miner not in a dusty occupation.58 In a study of 207 workers evaluated for possible pneumoconiosis using high-resolution CT scans for detection, typing, and grading of emphysema, a significant excess of emphysema was found in those with pneumoconiosis and in smokers with silica exposure (as compared to those with asbestos exposure). Thus silica exposure was shown to be a significant contributing factor to the development of emphysema.59 Cor pulmonale is a well-recognized complication of silicosis; the massive involvement of the pulmonary vasculature in the fibrotic process with obliteration of numerous arterioles eventually results in a marked increase in resistance and consequently in pulmonary artery pressure. Right ventricular heart failure with overt clinical signs is seen less frequently, although it is not unusual. In such cases, death due to congestive heart failure can occur. In cases with coexistent emphysema and chronic bronchitis with marked airflow obstruction or complicating tuberculosis, right ventricular heart failure is encountered more frequently.
Figure 25-3. Rounded opacities (r/q) in upper and middle lung fields; bilateral multiple large opacities due to coalescence of nodules.
The presence of cor pulmonale at death was analyzed in a study of 732 South African gold miners. Marked emphysema was the highest risk factor, with an odds ratio of 21.32 (95% confidence interval (C.I.) 5.02–90.7), followed by extensive silicosis (odds ratio 4.95, 95% C.I. 2.92–8.38).60 Epidemiological studies have shown strong associations between silica exposure and several autoimmune diseases, including rheumatoid arthritis,61 scleroderma, and systemic lupus erythematosus. Mice develop silicosis and exacerbated autoimmunity following crystalline silica exposure, including increased levels of autoantibodies, proteinuria, circulating immune complexes, pulmonary fibrosis, and glomerulonephritis, possibly resulting from silica-induced alterations in immunoglobulin levels, increased TNF-α, increased B1a B cells, and CD4+ T cells, with decreased regulatory T cells.62 In rats, silicosis was associated with elevated IgG and IgM levels in blood and bronchoalveolar lavage fluid, relative to nonexposed controls. Draining lung-associated lymph nodes were the most important sites for increased IgG and IgM production, with lungs contributing to a lesser degree.63 Caplan's syndrome, the association between rheumatoid arthritis and silicosis, is rare. It is characterized by the appearance of large nodules (more than 1 cm in diameter) on a background of preexisting silicotic nodules. The larger nodules of Caplan's syndrome occasionally cavitate. Renal lesions have been described in cases in which heavy occupational exposure to free silica has led to silicolipoproteinosis. Glomerular and tubular lesions have been described. Proteinuria and hypertension were associated with these renal lesions. The silica content of the kidney was found to be high in such cases.
End-stage renal disease has been found at higher rates among silicotic and nonsilicotic ceramic workers exposed to silica,64 and epidemiological studies in silicotics have demonstrated an increased risk of end-stage renal disease.65
Silica and Cancer
The question of a possible carcinogenic effect of silica has received increasing attention. Experimental studies on the possible carcinogenic effect of crystalline silica have been conducted on rats, mice, and hamsters, using various routes of administration: inhalation, intratracheal instillation, intrapleural and intraperitoneal injection. Findings from these studies were negative in mice and hamsters. In rats, the incidence of adenocarcinoma of the lung and squamous cell carcinoma was significantly increased, and intraperitoneal injections caused malignant lymphomas, cirrhosis, liver cell adenomas, and carcinomas.66 Epidemiological studies have in the past been conducted on numerous silica-exposed groups, such as metal ore miners, coal miners, and workers in the granite and stone industry, the ceramics, glass, and related industries, foundries, and in persons diagnosed as having silicosis. There were methodological difficulties with many of these early studies; confounding by cigarette smoking and insufficient information on exposure to other carcinogens, such as radon (mostly in mining and quarrying operations), polycyclic aromatic hydrocarbons (mostly in foundries), and arsenic (in metal ore mining and possibly in the ceramics and glass industries), were the most important issues of concern. Metal ore mining has not been associated with an increased incidence of respiratory cancer in some studies67,68; in other cohorts of metal ore miners, mortality rates for respiratory cancer were found to be 20–50% above levels in the general population.69,70 In most studies of coal miners, no increased incidence of lung cancer was detected. Earlier studies of granite workers generally yielded negative findings.71,72 A more recent mortality study of Vermont granite workers analyzed industrial hygiene data collected from 1924 to 1977 in conjunction with mortality data to examine quantitative exposure-response for silica, lung cancer, and other lung diseases.
A clear relationship with cumulative exposure was found for mortality from lung cancer, tuberculosis, pneumoconiosis, nonmalignant lung disease, and kidney cancer. Exposure at 0.05 mg/m3 from ages 20 to 64 was associated with a lifetime excess risk of lung cancer for white males of 27/1,000.73 A cohort study of workers in the stone and quarry industry in Germany compensated for silicosis between 1988 and 2000 found an increased risk of developing lung cancer, relative to the mortality rates of the general population of Germany.74 A similar study of the mortality of Australian workers compensated for silicosis found, after adjusting for smoking, a standardized mortality ratio for lung cancer of 1.9 (95% C.I. 1.5–2.3).75 In a case-control study utilizing industry/occupation information on death certificates, those deceased who were postulated to have had detectable crystalline silica exposure had a significantly increased risk for silicosis, COPD, pulmonary tuberculosis, and rheumatoid arthritis. In addition, a significant trend of increasing risk with increasing silica exposure was observed for these same conditions and for lung cancer.
Those postulated to have had the greatest crystalline silica exposure had a significantly increased risk for silicosis, lung cancer, COPD, and pulmonary tuberculosis only.76 In the ceramics and pottery industry, a moderately increased mortality from respiratory cancer has been detected in some studies.77,78 A number of reports on foundry workers have pointed to slightly to moderately increased respiratory cancer mortality.79–81 In a nested case-control analysis within a cohort mortality study of North American industrial sand workers, a causal relationship was found between lung cancer and quartz exposure after allowance for cigarette smoking.82 An increased lung cancer risk was found among Japanese tunnel workers after control for smoking.83 A dose-related increase in lung cancer risk was reported among diatomaceous earth workers, independent of radiographically evident silicosis.84 A mortality study of 716 cases of silicosis, diagnosed from 1940 through 1983, was undertaken as part of the North Carolina pneumoconiosis surveillance program for workers in dusty trades. Five hundred forty-six death certificates were obtained among 550 deceased. Mortality for lung cancer was increased among whites (SMR 2.6); the SMR was 2.3 in those without other exposure to known carcinogens. Age- and smoking-adjusted rates in silicotics were 3.9 times higher than in nonsilicotic metal miners.85 The association between silicosis and lung cancer mortality was studied in 9912 (369 silicotics and 9543 nonsilicotics) white metal miners examined by the U.S. Public Health Service from 1959–1961 and followed through 1975. The SMR for lung cancer was 1.73 (95% C.I. 0.94–2.90) in silicotics and 1.18 (C.I. 0.98–1.42) in nonsilicotics. Confounding from exposure to radon or other carcinogens, such as arsenic, could not be ruled out.86 A mortality study of 3328 gold miners in South Dakota found an SMR for lung cancer of 1.13 (C.I. 0.94–1.36).
No positive exposure-response trend was evident with cumulative exposure. Silicosis and tuberculosis were significantly increased (SMRs of 3.44 and 2.61, respectively).87 A statistically significant relationship between silica exposure and esophageal cancer has been reported among workers employed in underground caissons; the study controlled for smoking and alcohol use.88 DNA binding to crystalline silica surfaces may be important in silica carcinogenesis by anchoring DNA and its target nucleotides to within a few Angstroms of sites of oxygen radical production on the silica surface.89 Silica exposure induced significant suppression of lung p53 mRNA in mice.90 In a study of the p53 and K-ras gene mutations in the lung cancers of workers with silicosis, distinctive mutation distributions were evident in the exons, differing from those of lung cancers not associated with silicosis.91 As summarized by the American Thoracic Society, "The balance of evidence indicates that silicotic patients have increased risk for lung cancer. It is less clear whether silica exposure in the absence of silicosis carries increased risk for lung cancer."92 In 1997, the IARC classified inhaled crystalline silica as a human carcinogen (Group 1). The IARC found sufficient evidence in humans for the
carcinogenicity of inhaled crystalline silica in the form of quartz or cristobalite from occupational sources (Group 1). It found inadequate evidence in humans for the carcinogenicity of amorphous silica (Group 3). There was sufficient evidence in experimental animals for the carcinogenicity of quartz and cristobalite. There was inadequate evidence in experimental animals for the carcinogenicity of uncalcined diatomaceous earth and synthetic amorphous silica.93
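The standardized mortality ratios (SMRs) quoted throughout this section are ratios of observed deaths to the number expected from reference-population rates; a minimal sketch, with hypothetical counts chosen only to land near the 1.73 point estimate reported for silicotic miners:

```python
def smr(observed: int, expected: float) -> float:
    """Standardized mortality ratio: deaths observed in the exposed
    cohort divided by deaths expected from reference-population rates."""
    return observed / expected

# Hypothetical illustration (not the study's actual counts): 26 observed
# lung-cancer deaths against an expected 15 gives an SMR near 1.73
print(round(smr(26, 15), 2))  # prints 1.73
```

An SMR of 1.0 means mortality matches the reference population; the confidence intervals quoted in the text indicate how precisely each ratio is estimated.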
Diagnosis
A history of exposure to free silica is important for the diagnosis of silicosis. A detailed work history is necessary, with appropriate attention to occupations held in the past, since the latency period for the appearance of characteristic chest x-ray abnormalities is often decades, especially with relatively low silica concentrations in the airborne dust. The other essential element for a correct diagnosis of silicosis is a good quality chest x-ray film. Nodular silicosis is not difficult to recognize, although nodular opacities can be found in many other diseases. Enlarged hilar opacities are quite characteristic for silicosis. The sensitivity of the chest radiograph was evaluated in 557 gold miners in South Africa by comparing profusion of rounded opacities with pathological findings (average 2.7 years between chest x-ray and pathological examination). The sensitivity of the chest x-ray (using ILO category 1/1 or greater as a positive diagnosis of silicosis) was found to be 0.393, 0.371, and 0.236 (for three independent readers). A large proportion of those with moderate silicosis were not diagnosed radiologically. The authors concluded that the sensitivity of the chest radiograph could be improved by using 1/0 as a cutoff point for a positive diagnosis of silicosis (for exposure to relatively low-dust concentrations) and 0/1 for workers exposed to high average concentrations of silica dust.94 Computed tomography scanning in the early detection of silicosis was shown to be significantly more informative than the chest radiograph. Thirteen of 32 subjects classified as normal on standard chest x-rays were found to be abnormal on CT scanning using conventional and high-resolution techniques, as were four of six subjects classified as "indeterminate" on the standard chest radiograph.
In addition, the CT scans added six cases of confluence of small opacities to the three cases detected with standard chest x-rays.95 The profusion of opacities on high-resolution CT scans has been demonstrated to correlate with functional impairment. The presence of branching centrilobular structures and nodules may be helpful in early recognition of silicosis.96 Pulmonary function tests are not particularly helpful in the diagnosis of silicosis since they can be entirely normal in the presence of well-developed nodular opacities. When abnormalities are present, they are most often of a mixed, obstructive-restrictive type, although cases with only restrictive or obstructive dysfunction also can be found. The differential diagnosis has to exclude conditions such as sarcoidosis, miliary tuberculosis, carcinomatous lymphangitis, pulmonary hemosiderosis, rheumatoid lung, fibrosing alveolitis, alveolar microlithiasis, and histoplasmosis. Massive fibrosis seldom presents difficulties in diagnosis, although early in its development, when a single large opacity is detected, differential diagnosis with lung cancer can be a problem. The presence of nodular opacities around the large opacity most often facilitates the correct diagnosis of silicosis with coalescent, massive fibrosis. The diagnosis of tuberculosis in the presence of silicosis is difficult; this complication should always be considered and frequent sputum cultures should be obtained. The diagnosis of acute silicosis is more difficult than that of classic nodular silicosis because the radiographic changes are less characteristic and the clinical course more rapid. Idiopathic alveolar proteinosis, acute allergic alveolitis, and tuberculosis have to be considered in the differential diagnosis. A careful occupational history with evidence of exposure to high silica dust levels is extremely important for the diagnosis of this form of silicosis.
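The chest-radiograph sensitivities reported above are computed against the pathological gold standard in the usual way; a sketch with hypothetical counts (the 39/61 split below is illustrative, not the study's actual data):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Fraction of pathologically confirmed silicosis cases that the
    chest radiograph (e.g., ILO category 1/1 or greater) detects."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical: 39 of 100 autopsy-confirmed cases read as positive,
# comparable in magnitude to the 0.393 reported for one reader
print(round(sensitivity(39, 61), 2))  # prints 0.39
```

Lowering the cutoff (e.g., to 1/0 or 0/1, as the authors suggest) converts some false negatives into true positives, raising sensitivity at the cost of specificity.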
Treatment
Although poly-2-vinyl-pyridine-1-oxide and polybetaine prevent silicosis in experimental animals, possibly by altering the surface charge on silica particles, the results of clinical trials have been unrewarding. Treatment of patients with silicosis by the inhalation of powdered aluminum was undertaken in the 1950s, but aluminum itself carries the risk of diffuse interstitial fibrosis. Thus neither of these forms of prophylactic treatment can be recommended. Potential future therapeutic strategies that have been proposed include inhibition of cytokines such as interleukin-1 and TNF-α, and the use of antioxidants.97 There is no specific treatment for established silicosis; therapy of complications, such as bronchitis and pneumonitis, is important to prevent rapid deterioration of functional status. Prompt treatment of silicotuberculosis with regimens in which isoniazid, rifampin, and ethambutol are given together is most satisfactory. The treatment should be vigorous, carefully monitored, and longer than that for uncomplicated tuberculosis. Appropriate treatment for congestive heart failure always has to include the management of coexisting chronic obstructive bronchitis. No specific treatment is useful for rapidly progressive silicosis. In contrast, lipoproteinosis due to silica can be treated by bronchopulmonary lavage, which may be helpful in clearing the alveoli of the deposited particles,98 and by steroid therapy to suppress the inflammatory reaction.
Prognosis
The prognosis for nodular silicosis is relatively good, particularly if the progression of the disease is slow. For rapidly progressing silicosis, early death is almost the rule. Lipoproteinosis may resolve spontaneously without treatment or may improve rapidly after removal of free silica from the lung by bronchopulmonary lavage. There is some evidence that lipoproteinosis proceeds to diffuse fibrosis if left untreated.99
Control and Prevention
The recognition of the silicosis hazard and stringent dust control engineering measures are essential. Frequent monitoring of airborne dust levels is needed to ensure a safe working environment. The effectiveness of dust control measures in preventing silicosis has been emphasized dramatically by the reduction in silicosis in Great Britain and the European Economic Community since sandblasting was outlawed. A special effort is necessary to avoid exposure to cristobalite and tridymite, which are produced in the calcining of silica within diatomaceous earth, fuller's earth, and particularly in the regrinding of broken or salvaged refractory brick, in the scaling of boilers, and in steel foundries. Reduction of exposures to quartz that exceed the threshold limit value of 10/(%SiO2 + 2) mg/m3 would reduce the silicosis attack rate considerably. NIOSH has proposed a further reduction of the time-weighted average silica exposure to 50 µg/m3. The effects of dust levels on other workers in the area must be considered because even if sandblasters or brick grinders are protected by appropriate respirators, workers in other trades within the same area may be affected. Failure to apply occupational standards to workplaces employing five or fewer workers also has resulted in cases of silicosis. In addition, it appears essential to regard silica in quantities of 5% or less within other rock, such as limestone, kaolin, gypsum, graphite, or portland cement, as important and capable of producing disease if total dust concentrations are as high as they often are in mining or other operations. The problem of silica exposure in foundries is well known and may require changes in technology to bring it under control. Personal respiratory protection is valuable when it is otherwise impossible to control environmental dust levels.
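The respirable-dust threshold limit value cited above is a function of the quartz percentage in the dust; assuming the usual reading of the formula, 10 divided by (%SiO2 + 2), in mg/m3 (function name mine):

```python
def quartz_tlv_mg_per_m3(percent_sio2: float) -> float:
    """Threshold limit value for respirable dust containing quartz:
    10 / (%SiO2 + 2) mg/m3, as cited in the text."""
    return 10.0 / (percent_sio2 + 2.0)

print(round(quartz_tlv_mg_per_m3(100), 3))  # prints 0.098 (pure quartz)
print(round(quartz_tlv_mg_per_m3(5), 2))    # prints 1.43 (5% quartz dust)
```

At 100% quartz this evaluates to roughly 0.1 mg/m3 (about 100 µg/m3), twice the 50 µg/m3 time-weighted average that NIOSH has proposed.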
REFERENCES
1. Churchyard GJ, Ehrlich R, teWaterNaude JM, et al. Silicosis prevalence and exposure-response relations in South African goldminers. Occup Environ Med. 2004;61(10):811–6. 2. Jain SM, Sepha GC, Khare KC, Dubey VS. Silicosis in slate pencil workers. Chest. 1977;71:423–6. 3. Holanda MA, Martins MP, Felismino PH, Pinheiro VG. Silicosis in Brazilian pit diggers: relationship between dust exposure and radiologic findings. Am J Ind Med. 1995;27:367–78. 4. Valiante DJ, Schill DP, Rosenman KD, Socie E. Highway repair: a new silicosis threat. Am J Public Health. 2004;94(5):876–80. 5. Linch KD. Respirable concrete dust-silicosis hazard in the construction industry. Appl Occup Environ Hyg. 2002;17(3):209–21. 6. Hubbs A, Greskevitch M, Kuempel E, Suarez F, Toraason M. Abrasive blasting agents: designing studies to evaluate relative risk. J Toxicol Environ Health A. 2005;68(11–12):999–1016. 7. Buechner HA, Ansari A. Acute silico-proteinosis, a new pathologic variant of acute silicosis in sandblasters, characterized by histologic features resembling alveolar proteinosis. Dis Chest. 1969;55:274–84. 9. Banks DE, Morring KL, Boehlecke BE. Silicosis in the 1980s. Am Ind Hyg Assoc J. 1981;42:77–9. 10. Zimmerman PV, Sinclair RA. Rapidly progressive fatal silicosis in a young man. Med J Aust. 1981;2:704–6. 11. Banks DE, Morring KI, Boehlecke BE. Silicosis in silica flour workers. Am Rev Respir Dis. 1981;124:445–50. 12. Grobbelaar JP, Bateman ED. Hut lung: a domestically acquired pneumoconiosis of mixed aetiology in rural women. Thorax. 1991;46:334–40. 13. de la Hoz RE, Rosenman K, Borczuk A. Silicosis in dental supply factory workers. Respir Med. 2004;98(8):791–4. 14. Antao VC, Pinheiro GA, Kavakama J, Terra-Filho M. High prevalence of silicosis among stone carvers in Brazil. Am J Ind Med. 2004;45(2):194–201. 15. Ashe HB, Bergstrom DE. Twenty-six years' experience with dust control in the Vermont granite industry. Ind Med Surg. 1964;33:973–8. 16. Trasko VM.
Some facts on the prevalence of silicosis in the United States. Arch Ind Health. 1956;14:379–86. 17. Lloyd-Davies TAL. Respiratory Disease in Foundry Men. London: HM Stationery Office; 1971. 18. Phibbs BP, Sundin RE, Mitchell RS. Silicosis in Wyoming bentonite workers. Am Rev Respir Dis. 1971;103:1–17. 19. Watts WF, Parker DR, Johnson RL, Jensen KL. Analysis of Data on Respirable Quartz Dust Samples Collected in Metal and Nonmetal Mines and Mills. Information Circular 8967. Washington, DC: Bureau of Mines, U.S. Department of the Interior; 1984. 20. Jankowski RA, Nesbit RE, Kissel FN. Concepts for controlling quartz dust exposure of coal mine workers. In: Peng SS, ed. Coal Mine Dust Conference Proceedings. Cincinnati: American Conference of Governmental Industrial Hygienists; 1984:126–36. 21. IARC. Silica and some silicates. In: Evaluation of the Carcinogenic Risk of Chemicals to Humans. Vol. 42. Lyon, France: International Agency for Research on Cancer; 1987:39–143. 22. Hughes JM, Jones RN, Gilson JC, et al. Determinants of progression in sandblasters' silicosis. In: Walton WH, ed. Inhaled Particles V. Oxford: Pergamon Press; 1983:701. 23. Centers for Disease Control and Prevention (CDC). Silicosis mortality, prevention, and control—United States, 1968–2002. MMWR. 2005;54(16):401–5. 24. Rosenman KD, Reilly MJ, Henneberger PK. Estimating the total number of newly-recognized silicosis cases in the United States. Am J Ind Med. 2003;44(2):141–7. 25. Goodwin SS, Stanbury M, Wang ML, Silbergeld E, Parker JE. Previously undetected silicosis in New Jersey decedents. Am J Ind Med. 2003;44(3):304–11.
26. 't Mannetje A, Steenland K, Attfield M, et al. Exposure-response analysis and risk assessment for silica and silicosis mortality in a pooled analysis of six cohorts. Occup Environ Med. 2002;59(11): 723–8. 27. Lapp NL, Castranova V. How silicosis and coal-workers’ pneumoconiosis develop—a cellular assessment. Occup Med. 1993;8:35–56. 28. Zhang Z, Shen HM, Zhang QF, Ong CN. Involvement of oxidative stress in crystalline silica-induced cytotoxicity and genotoxicity in rat alveolar macrophages. Environ Res. 2000;82(3):245–52. 29. Barrett EG, Johnston C, Oberdorster G, Finkelstein JN. Antioxidant treatment attenuates cytokine and chemokine levels in murine macrophages following silica exposure. Toxicol Appl Pharmacol. 1999;158(3):211–20. 30. Castranova V. Generation of oxygen radicals and mechanisms of injury prevention. Environ Health Perspect. 1994;102 Suppl 10:65–8. 31. Iyer R, Holian A. Involvement of the ICE family of proteases in silica-induced apoptosis in human alveolar macrophages. Am J Physiol. 1997;273(4 Pt 1):L760–7. 32. Castranova V, Porter D, Millecchia L, Ma JY, Hubbs AF, Teass A. Effect of inhaled crystalline silica in a rat model: time course of pulmonary reactions. Mol Cell Biochem. 2002;234–5(1–2): 177–84. 33. Rojanasakul Y, Ye J, Chen F, et al. Dependence of NF-kappaB activation and free radical generation on silica-induced TNF-alpha production in macrophages. Mol Cell Biochem. 1999;200(1–2): 119–25. 34. Zeidler P, Hubbs A, Battelli L, Castranova V. Role of inducible nitric oxide synthase-derived nitric oxide in silica-induced pulmonary inflammation and fibrosis. J Toxicol Environ Health A. 2004;67(13): 1001–26. 35. O’Reilly KM, Phipps RP, Thatcher TH, Graf BA, Van Kirk J, Sime PJ. Crystalline and amorphous silica differentially regulate the cyclooxygenase-prostaglandin pathway in pulmonary fibroblasts: implications for pulmonary fibrosis. Am J Physiol Lung Cell Mol Physiol. 2005;288(6):L1010–6. Epub 2005 Jan 21. 36. 
Yucesoy B, Vallyathan V, Landsittel DP, et al. Association of tumor necrosis factor-alpha and interleukin-1 gene polymorphisms with silicosis. Toxicol Appl Pharmacol. 2001;172(1):75–82. 37. Migliaccio CT, Hamilton RF Jr, Holian A. Increase in a distinct pulmonary macrophage subset possessing an antigen-presenting cell phenotype and in vitro APC activity following silica exposure. Toxicol Appl Pharmacol. 2005;205(2):168–76. Epub 2005 Jan 21. 38. Vanhee D, Gosset P, Boitelle A, Wallaert B, Tonnel AB. Cytokines and cytokine network in silicosis and coal workers’ pneumoconiosis. Eur Respir J. 1995;8:834–42. 39. Melloni B, Lesur O, Bouhadiba T, Cantin A, Begin R. Partial characterization of the proliferative activity for fetal lung epithelial cells produced by silica-exposed alveolar macrophages. J Leukoc Biol. 1994;55:574–80. 40. Corsini E, Giani A, Peano S, Marinovich M, Galli CL. Resistance to silica-induced lung fibrosis in senescent rats: role of alveolar macrophages and tumor necrosis factor-alpha (TNF). Mech Ageing Dev. 2004;125(2):145–6. 41. Absher M, Sjostrand M, Baldor LC, Hemenway DR, Kelley J. Patterns of secretion of transforming growth factor-alpha (TGF-alpha) in experimental silicosis. Acute and subacute effects of cristobalite exposure in the rat. Reg Immunol. 1993;5:225–31. 42. Williams AO, Flanders KC, Saffiotti U. Immunohistochemical localization of transforming growth factor-beta 1 in rats with experimental silicosis, alveolar type II hyperplasia, and lung cancer. Am J Pathol. 1993;142:1831–40. 43. Savici D, He B, Geist LJ, Monick MM, Hunninghake GW. Silica increases tumor necrosis factor (TNF) production, in part, by upregulating the TNF promoter. Exp Lung Res. 1994;20:613–25. 44. Scabilloni JF, Wang L, Antonini JM, Roberts JR, Castranova V, Mercer RR. Matrix metalloproteinase induction in fibrosis and
fibrotic nodule formation due to silica inhalation. Am J Physiol Lung Cell Mol Physiol. 2005;288(4):L709–17. Epub 2004 Dec 17. 45. Lesur O, Veldhuizen RA, Whitsett JA, Hull WM, Possmayer F, Cantin A, et al. Surfactant-associated proteins (SP-A, SP-B) are increased proportionally to alveolar phospholipids in sheep silicosis. Lung. 1993;17:63–74. 46. Nagelschmidt G. The relationship between lung dust and lung pathology in pneumoconiosis. Br J Ind Med. 1960;17:247–59. 47. Gardner LV. Pathology of so-called acute silicosis. Am J Public Health. 1933;23:1240–49. 48. Heppleston AG. The fibrogenic action of silica. Br Med Bull. 1969;25:282–7. 49. Parkes WR. Diseases due to free silica. In: Occupational Lung Disorders. 2nd ed. London: Butterworth; 1982:134–74. 50. Gross P, deTreville RTP. Alveolar proteinosis: its experimental production in rodents. Arch Pathol. 1968;86:255–61. 51. Heppleston AG. A typical reaction to inhaled silica. Nature. 1967;213:199–200. 52. Saldanha LF, Rosen VJ. Silicon nephropathy. Am J Med. 1975;59:95–103. 53. Giles RD, Sturgill BC, Suratt PM, Bolton WK. Massive proteinuria and acute renal failure in a patient with acute silico-proteinosis. Am J Med. 1978;64:336–42. 54. Ruttner JR, Heer HR. Silikose und Lungenkarzinom. Schweiz Med Wochenschr. 1969;99:245–9. 55. Hnizdo E, Vallyathan V. Chronic obstructive pulmonary disease due to occupational exposure to silica dust: a review of epidemiological and pathological evidence. Occup Environ Med. 2003;60(4):237–43. 56. Cowie RL. The epidemiology of tuberculosis in gold miners with silicosis. Am J Respir Crit Care Med. 1994;150:1460–2. 57. Bang KM, Weissman DN, Wood JM, Attfield MD. Tuberculosis mortality by industry in the United States, 1990–1999. Int J Tuberc Lung Dis. 2005;9(4):437–42. 58. Hnizdo E, Sluis-Cremer GK, Abramowitz JA. Emphysema type in relation to silica dust exposure in South African gold miners. Am Rev Respir Dis. 1991;143:1241–7. 59. Begin R, Filion R, Ostiguy G.
Emphysema in silica- and asbestosexposed workers seeking compensation. A CT scan study. Chest. 1995;108:647–55. 60. Murray J, Reid G, Kielkowski D, de-Beer M. Cor pulmonale and silicosis: a necropsy-based case-control study. Br J Ind Med. 1993;50: 544–8. 61. Rosenman KD, Moore-Fuller M, Reilly MJ. Connective tissue disease and silicosis. Am J Ind Med. 1999;35(4):375–81. 62. Brown JM, Pfau JC, Holian A. Immunoglobulin and lymphocyte responses following silica exposure in New Zealand mixed mice. Inhal Toxicol. 2004;16(3):133–9. 63. Huang SH, Hubbs AF, Stanley CF, et al. Immunoglobulin responses to experimental silicosis. Toxicol Sci. 2001;59(1):108–17. 64. Rapiti E, Sperati A, Miceli M, et al. End stage renal disease among ceramic workers exposed to silica. Occup Environ Med. 1999; 56(8):559–61. 65. Rosenman KD, Moore-Fuller M, Reilly MJ. Kidney disease and silicosis. Nephron. 2000;85(1):14–9. 66. Williams AO, Knapton AD. Hepatic silicosis, cirrhosis, and liver tumors in mice and hamsters: studies of transforming growth factor beta expression. Hepatology. 1996;23(5):1268–75. 67. Brown DP, Kalplan SD, Zumwalde RD, Kaplowitz M, Archer VE. Retrospective cohort mortality study of underground gold mine workers. In: Goldsmith DF, Winn DM, Shy CM, eds. Silica, Silicosis, and Cancer. Controversy in Occupational Medicine. New York: Praeger; 1986:335–50. 68. Lawler AB, Mandel JS, Scuman LM, Lubin JH. Mortality study of Minnesota iron ore miners: preliminary results. In: Wagner WL, Rom WN, Merchant JA, eds. Health Issues Related to Metal and Nonmetallic Mining. Boston: Butterworths; 1983:211–26.
25 69. Muller J, Wheeler WC, Gentleman JF, Suranyi G, Kusiak RA. Study of Mortality of Ontario Miners, 1955–1977. Pt 1. Toronto: Ontario Ministry of Labour/Ontario Workers’ Compensation Board/Atomic Energy Control Board of Canada; 1983. 70. Costello J. Mortality of metal miners. A retrospective cohort and case-control study. In: Proceedings of an Environmental Health Conference, Park City, Utah, 6–9 April 1982. Morgantown, WV: National Institute of Occupational Safety and Health; 1982. 71. Davis LK, Wegman DH, Monson RR, Froines J. Mortality experience of Vermont granite miners. Am J lnd Med. 1983;4:705–23. 72. Costello J, Graham WGB. Vermont granite workers’ mortality study. In: Goldsmith DF, Winn DM, Shy CM, eds. Silica, Silicosis, and Cancer. Controversy in Occupational Medicine. New York: Praeger; 1986:437–40. 73. Attfield MD, Costello J. Quantitative exposure-response for silica dust and lung cancer in Vermont granite workers. Am J Ind Med. 2004;45(2):129–38. 74. Ulm K, Gerein P, Eigenthaler J, Schmidt S, Ehnes H. Silica, silicosis and lung-cancer: results from a cohort study in the stone and quarry industry. Int Arch Occup Environ Health. 2004;77(5): 313–8. Epub 2004 May 20. 75. Berry G, Rogers A, Yeung P. Silicosis and lung cancer: a mortality study of compensated men with silicosis in New South Wales, Australia. Occup Med (Lond). 2004;54(6):387–94. Epub 2004 Sep 3. 76. Calvert GM, Rice FL, Boiano JM, Sheehy JW, Sanderson WT. Occupational silica exposure and risk of various diseases: an analysis using death certificates from 27 states of the United States. Occup Environ Med. 2003;60(2):122–9. 77. Thomas TL. A preliminary investigation of mortality among workers in the pottery industry. Int J Epidemiol. 1982;27:175–80. 78. Forastiere F, Lagorio S, Michelozzi P, et al. Silica, silicosis, and lung cancer among ceramic workers: a case-referent study. Am J Ind Med. 1986;10:363–70. 79. Sherson D, Iversen E. 
Mortality among foundry workers in Denmark due to cancer and respiratory and cardiovascular disease. In: Goldsmith DF, Winn DM, Shy CM, eds. Silica, Silicosis, and Cancer. Controversy in Occupational Medicine. New York: Praeger; 1986: 403–14. 80. Fletcher AC. The mortality of foundry workers in the United Kingdom. In: Goldsmith DF, Winn DM, Shy CM, eds. Silica, Silicosis, and Cancer. Controversy in Occupational Medicine. New York: Praeger; 1986:385–401. 81. Silverstein M, Maizlish N, Park R, Silverstein B, Brodsky L, Mirer F. Mortality among ferrous foundry workers. Am J Ind Med. 1986;10:27–43. 82. Hughes JM, Weill H, Rando RJ, Shi R, McDonald AD, McDonald JC. Cohort mortality study of North American industrial sand workers. II. Case-referent analysis of lung cancer and silicosis deaths. Ann Occup Hyg. 2001;45(3):201–7. 83. Yucesoy B, Vallyathan V, Landsittel DP, et al. Association of tumor necrosis factor-alpha and interleukin-1 gene polymorphisms with silicosis. Toxicol Appl Pharmacol. 2001;172(1):75–82; Nakagawa H, Nishijo M, Tabata M. et al. Dust exposure and lung cancer mortality
84.
85.
86. 87.
88.
89.
90.
91.
92. 93.
94.
95.
96.
97.
98. 99.
Silicosis
601
in tunnel workers. J Environ Pathol Toxicol Oncol. 2000;19(1–2): 99–101. Checkoway H, Hughes JM, Weill H, Seixas NS, Demers PA. Crystalline silica exposure, radiological silicosis, and lung cancer mortality in diatomaceous earth industry workers. Thorax. 1999;54(1): 56–9. Amandus HE, Shy C, Wing S, Blair A, Heineman EF. Silicosis and lung cancer in North Carolina dusty trades workers. Am J Ind Med. 1991;20:57–70. Amandus HE, Costello J. Silicosis and lung cancer in U.S. metal miners. Arch Environ Health. 1991;46:82–9. Steenland K, Brown D. Mortality study of gold miners exposed to silica and nonasbestiform amphibole minerals: an update with 14 more years of follow-up. Am J Ind Med. 1995;27:217–29. Yu IT, Tse LA, Wong TW, Leung CC, Tam CM, Chan AC. Further evidence for a link between silica dust and esophageal cancer. Int J Cancer. 2005;114(3):479–83. Saffiotti U, Daniel LN, Mao Y, Shi X, Williams AO, Kaighn ME. Mechanisms of carcinogenesis by crystalline silica in relation to oxygen radicals. Environ Health Perspect. 1994;102 (Suppl 10): 159–63. Ishihara Y, Iijima H, Matsunaga K, Fukushima T, Nishikawa T, Takenoshita S. Expression and mutation of p53 gene in the lung of mice intratracheal injected with crystalline silica. Cancer Lett. 2002;177(2):125–8. Liu B, Guan R, Zhou P, et al. A distinct mutational spectrum of p53 and K-ras genes in lung cancer of workers with silicosis. J Environ Pathol Toxicol Oncol. 2000;19(1–2):1–7. Beckett WS (Chair). Report of the ATS Committee on Adverse Effects of Crystalline Silica Exposure. 1996:12. International Agency for Research on Cancer (IARC). Silica. Crystalline Silica-Inhaled in the Form of Quartz or Cristobalite from Occupational Sources (Group 1). IARC Monogr Eval Carcinog Risks Hum. 1997;68:1–475. Hnizdo E, Murray J, Sluis-Cremer GK, Thomas RG. Correlation between radiological and pathological diagnosis of silicosis: an autopsy population based study. Am J Ind Med. 1993;24: 427–45. 
Begin R, Ostiguy G, Fillion R, Colman N. Computed tomography scan in the early detection of silicosis. Am Rev Respir Dis. 1991;144: 697–705. Antao VC, Pinheiro GA, Terra-Filho M, Kavakama J, Muller NL. High-resolution CT in silicosis: correlation with radiographic findings and functional impairment. J Comput Assist Tomogr. 2005; 29(3):350–6. Rimal B, Greenberg AK, Rom WN. Basic pathogenetic mechanisms in silicosis: current understanding. Curr Opin Pulm Med. 2005; 11(2):169–73. Ramieriz RJ, Keiffer RE, Ball WC. Bronchopulmonary lavage in man. Ann Intern Med. 1965;63:819–28. Hudson AR, Halprine GM, Miller JA, Kilburn KH. Pulmonary interstitial fibrosis following alveolar proteinosis. Chest. 1974;65:700–2.
This page intentionally left blank
26 Health Significance of Metal Exposures
Philippe Grandjean
The term metal has distinct meanings in physics and chemistry. In environmental medicine, arsenic and selenium are often considered part of the metals group. Nutritionists often refer to trace metals as those constituting less than 1 g of the human body, an arbitrary limit that would exclude iron. Although “toxic metals” is a common term, all metals may exert toxic effects; the dose and duration of exposure determine whether or not toxicity ensues. Frequently, heavy metals (with a density of 4 g/cm3 and above) are considered most important with regard to adverse health effects. This belief stems from the observation that the toxicity of metals tends to increase toward the right and bottom of the periodic table, where the atomic weight of the elements increases. However, increased atomic number and increased density are of little medical significance in themselves and would not account for the toxic potential of beryllium. Instead, the relative toxicity on a molar basis would seem to be related to the affinity for various ligands and the resulting biochemical activity. On the basis of such considerations, metals may be separated into hard metals (class A), with a lower affinity toward sulfur and nitrogen than toward oxygen, and soft metals (class B), where the opposite is the case.1 Among the metals considered in this chapter, aluminum and beryllium belong to the generally less toxic class A, while the other metals are either borderline or class B metals. In contrast to organic compounds, which may be broken down by detoxification processes, metals remain metals. However, some changes may occur due to oxidation/reduction, as with mercury vapor and chromate, and most metals become bound to organic compounds, notably proteins such as metallothionein. Some metals form rather stable organometal compounds with a covalent bond between carbon and the metal. Some of these compounds, such as tetraethyl lead and tributyltin, are dealkylated in the body.
On the other hand, methylation in the liver is an important part of arsenic and selenium kinetics. These metabolic processes affect the toxicity and may vary between individuals. When metals are present as airborne particles, their retention in the airways is governed by physical principles related to the aerodynamic diameter of the particles. Some metal compounds are corrosive and exert their effect on the mucous membranes; such is the case with osmium tetroxide and zinc chloride. In other situations, systemic effects, whether mediated by oral or respiratory intake, are most important and will then depend on the amount absorbed. Solubility of metal compounds is of major significance and, in the gut, some interaction between metals may occur. Thus, zinc and copper tend to inhibit each other’s absorption. The same appears to be true for iron and cobalt, but the absorption of several divalent metals is increased in iron deficiency. In addition, phosphate and other components may decrease the absorption due to formation of insoluble
compounds. The variability is illustrated by the fact that gastrointestinal absorption of lead sulfide is barely detectable, while a soluble compound ingested during a fasting period may result in a 50% absorption rate. Exposure potentials have increased considerably due to the development of metallurgy and associated processes and due to contamination from energy production. Chemical elements that are rare in the earth’s crust may now result in heavy exposures of workers, neighbors, and consumers. In comparison with atmospheric emissions from natural sources, air pollution with lead from human activities is more than 10-fold greater, and the amounts of cadmium, zinc, and several other metals in anthropogenic air pollution are also comparatively large. Table 26-1 shows a provisional grouping of some metals according to their abundance and annual production rate. Although only major tendencies appear from such a crude grouping, the rarer metals seem to cause much less prevalent exposures than do the metals that are common in the earth’s crust. However, production figures tend to increase, in particular for aluminum, molybdenum, nickel, and rare earths. When the intake of iron, zinc, or other essential metals is insufficient, signs of deficiency may develop. Many such cases have occurred as part of multiple nutrient deficiencies or as a result of long-term parenteral nutrition. Refined food in general tends to be an insufficient source of essential minerals. When toxicity is compared, the rarer metals appear to be more toxic than elements that are more common components of the earth’s crust and the “natural” environment. In Table 26-1, the molar limits for occupational exposure have been used for classifying metals into three groups with different toxicities. A similar grading of the toxicity could be based on LD50 values from animal experiments.
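The molar classification behind Table 26-1 can be illustrated with a short calculation. A minimal sketch, in which the mass-based limit values are illustrative assumptions (roughly the ACGIH figures for aluminum metal dust and mercury vapor), not data from this chapter:

```python
# Convert a mass-based occupational exposure limit (mg/m3) to the molar
# scale (mol/m3) used for the three-group classification in Table 26-1.
# The limit values passed in below are illustrative, not chapter data.

MOLAR_MASS_G_PER_MOL = {"Al": 26.98, "Hg": 200.59}

def molar_limit(limit_mg_m3: float, metal: str) -> float:
    """Return the limit in mol/m3 (1 mg = 1e-3 g)."""
    return limit_mg_m3 * 1e-3 / MOLAR_MASS_G_PER_MOL[metal]

def toxicity_group(limit_mol_m3: float) -> str:
    """Group boundaries as given in Table 26-1."""
    if limit_mol_m3 > 1e-4:
        return "I. High"
    if limit_mol_m3 >= 1e-6:
        return "II. Medium"
    return "III. Low"

print(toxicity_group(molar_limit(10.0, "Al")))    # aluminum dust, ~3.7e-4 mol/m3
print(toxicity_group(molar_limit(0.025, "Hg")))   # mercury vapor, ~1.2e-7 mol/m3
```

With these example limits, aluminum falls in group I and mercury in group III, consistent with their placement in the table.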
In preventive medicine, the target organ, sometimes referred to as the critical organ, is of special importance, as the earliest effects of metal toxicity are said to originate from this location. As a consequence, if effects in the target organ can be prevented, no other toxicity should be expected. However, prevention becomes somewhat more difficult when considering that the critical effect of respiratory exposure to some chromate or nickel compounds is respiratory cancer; such stochastic effects may be fully prevented only if exposures are effectively eliminated. Other complex problems relate to the prevention of contact dermatitis in individuals who have developed metal allergies; even oral intake of the offending metal can induce or worsen the hand eczema in these patients. Individual susceptibility must therefore be taken into account. In this regard, interactions between metals are also of importance. Thus, at least in experimental studies, zinc supplements may prevent cadmium toxicity, and selenium may potentially protect against mercury toxicity.
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Environmental Health
TABLE 26-1. NATURAL OCCURRENCE, PRODUCTION, AND HEALTH SIGNIFICANCE OF METALS, AS INDICATED BY APPROXIMATE GROUPING OF RELEVANT PARAMETERS

Abundance in Earth’s Crust
I. Common (>10^-2 mol/kg): Al, Fe, Mg, Mn, Ti
II. Medium (10^-4–10^-2 mol/kg): Ba, Be, Co, Cr, Cu, Ni, V, Zn, Zr
III. Rare (<10^-4 mol/kg): Ag, As, Cd, Hg, Mo, Os, Pb, Pt, Sb, Se, Sn, Ta, Te, Tl, U, W, rare earths

Annual Production
I. Large (>10^11 mol/yr): Al, Cu, Fe, Mg, Mn, Zn
II. Medium (10^9–10^11 mol/yr): Ba, Cr, Mo, Ni, Pb, Sb, Sn, Ti, Zr
III. Low (<10^9 mol/yr): Ag, As, Be, Cd, Co, Hg, Os, Pt, Se, Ta, Te, Tl, U, V, W, rare earths

Occupational Exposure Limit
I. High (>10^-4 mol/m3): Al, Fe, Mg, Ti, Zn
II. Medium (10^-6–10^-4 mol/m3): As, Ba, Be, Cd, Co, Cr, Cu, Mn, Mo, Ni, Sb, Se, Sn, Ta, V, W, Zr, rare earths
III. Low (<10^-6 mol/m3): Ag, Hg, Os, Pb, Pt, Te, Tl, U

Significance of Daily Oral Intake
I. Deficiency recorded: Cr, Cu, Fe, Mg, Se, Zn
II. Unknown or no significance: Ag, Al, Be, Mn, Mo, Ni, Os, Pt, Sb, Sn, Ta, Te, Ti, Tl, V, W, Zr, rare earths
III. Environmental toxicity recorded: As, Ba, Cd, Co, Hg, Pb, U

As preventive efforts become more efficient, the patterns of adverse effects change and, in fact, become more difficult to recognize. Most metals accumulate in the body, and storage depots or “slow compartments” may slowly release metals to the blood or may actually be the site of delayed toxicity. The resulting insidious, delayed effects are often hard to detect, even for the patient. In the absence of pathognomonic symptoms and a history of a recent hazardous exposure, an etiologic diagnosis may be almost impossible to verify. The diagnosis of metal poisoning has frequently been supported by the detection of increased or toxic levels of the metal in blood or urine. These methods have since been refined and have become routine parameters for biological monitoring of metal exposures.2 Special care is needed when collecting blood and urine samples to avoid external contamination.3 Recent developments include more sensitive analyses, methods for in vivo detection of cadmium and mercury in kidney and liver, measurement of lead levels in calcified tissues, and assessment of various biochemical abnormalities that indicate early biological effects of metal exposures. Biological monitoring will become an essential part of future preventive activities with regard to environmental and occupational metal exposures. However, because metals are ubiquitous and often disseminated through a multitude of pathways, the sources of human exposures must be known before a preventive strategy can be planned. Attention must also be paid to intakes from mineral supplements and from the use of metals in pharmaceuticals and traditional medicines. In the following pages, the metals of greatest public health significance are dealt with in alphabetical order. The general outline includes: environmental occurrence, uses, and exposure sources; absorption and fate in the human organism; essential functions and toxic effects in humans; preventive measures; and applicable exposure limits.
Relevant publications from the International Programme on Chemical Safety and from the International Agency for Research on Cancer are mentioned, but otherwise references have been limited to a few recent key studies or reports. For more detailed information and references to additional literature sources, the most recent edition of the standard handbook should be consulted.4

ALUMINUM

Exposures Aluminum is a silvery-white, light, ductile metal with high resistance to corrosion; it is used in light metal alloys, in particular with magnesium. Kitchenware, aluminum foil, and automobile bodies are important uses, and the aircraft industry is one of the major consumers. The most intense occupational exposures occur in aluminum refineries, where the metal is produced by electrolysis of aluminum oxide dissolved in molten cryolite. Refinery and foundry workers, welders, and grinders working with aluminum or its alloys may be exposed to high levels of aluminum fumes or particles. Aluminum
chloride is used in petroleum processing and in the rubber industry, and alkyl compounds are used as catalysts in the production of polyethylene. Other aluminum compounds are also widely used, notably for flocculation of drinking water.5 Aluminum compounds in soil are soluble at low pH values (below 6), for example, as caused by acidification. Soft drinking water may also dissolve traces of aluminum flocculants used in municipal water treatment. In such cases, the aluminum concentration may occasionally exceed 1 mg/L, but otherwise the concentrations in water are usually well below 100 µg/L, and drinking water is then an insignificant source of exposure. Among food items, meat products and vegetables may exhibit relatively high levels; the total daily intake through food and beverages is generally about 10 mg. Sources of excess exposure include aluminum silicate used as an anticaking agent and aluminum powder used for decorating pastry. Small amounts may be released from aluminum pots and pans at low pH, especially when acidic foods are stored in them. Ulcer patients may ingest several grams of aluminum hydroxide every day in their antacid medicine. Aluminum is barely absorbed from the gastrointestinal tract, probably because sparingly soluble aluminum phosphate is formed. Patients who ingest aluminum-containing antacids appear to absorb about 0.1% of the amount ingested. Concurrent intake of fruit juices may substantially increase the absorption. Inhalation of fine aluminum dust can lead to retention in the alveoli, and the concentration of this metal in the lungs increases with advancing age. When released to the blood, aluminum appears to be effectively excreted, almost entirely in the urine.
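The ~0.1% absorption figure implies that even gram-level antacid doses deliver only milligram amounts of aluminum systemically. A rough sketch, assuming a hypothetical intake of 3 g of aluminum hydroxide per day:

```python
# Back-of-the-envelope absorbed dose for an antacid user. The 3 g/day
# intake is a hypothetical illustration; the 0.1% gastrointestinal
# absorption fraction is the figure quoted in the text.

MW_AL = 26.98                               # g/mol, aluminum
MW_AL_OH_3 = MW_AL + 3 * (15.999 + 1.008)   # ~78.0 g/mol for Al(OH)3

intake_g = 3.0                                    # hypothetical daily Al(OH)3 dose
elemental_al_g = intake_g * MW_AL / MW_AL_OH_3    # ~1.04 g elemental aluminum
absorbed_mg = elemental_al_g * 0.001 * 1000.0     # 0.1% absorbed -> ~1 mg/day
print(round(elemental_al_g, 2), "g Al ingested,", round(absorbed_mg, 1), "mg absorbed")
```

The calculation only illustrates orders of magnitude; actual absorption varies widely, for example with concurrent citrate intake as noted above.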
Effects Salts of aluminum are irritants because acid is liberated on hydrolysis. Thus, conjunctivitis, eczema, and upper airway irritation may result, and even local necrosis of the cornea has been recorded. A form of pneumoconiosis, sometimes called aluminum lung or aluminosis, is associated with severe exposures to aluminum oxide; the most frequent symptoms are dyspnea and dry cough. Unilateral pneumothorax has been seen more often than expected in workers exposed to aluminum dust. Aluminum exposure may cause neurotoxicity, particularly in patients undergoing dialysis.5 Because these patients excrete aluminum poorly in the urine, the metal accumulates in the body from the small amounts present in dialysis water and from aluminum hydroxide gels used to decrease phosphate absorption in the gut. Under these circumstances, aluminum accumulates particularly in the brain and seems to be at least a partial cause of dialysis dementia. The early symptoms are speech impairment and dysphasia, followed by myoclonic movements, seizures, and progressive global dementia with prominent symptoms from the parietal lobe. This disease appears to be irreversible, and survival beyond a few years is uncommon. The introduction of calcium-based phosphate binders and reverse osmosis for water purification has effectively eliminated this problem.
In addition, aluminum seems to accumulate, although to a much lesser extent, in the brain of patients with Alzheimer’s disease. This accumulation may be a phenomenon secondary to the disease development, and the possible causative role of aluminum has not yet been determined. A different type of encephalopathy may develop as an apparent result of heavy occupational aluminum exposure. Thus, aluminum is undoubtedly neurotoxic, but the extent to which this occurs in individuals with normal kidney function has yet to be clarified. Dialysis osteomalacia is a complication that has occurred rarely in patients undergoing long-term dialysis treatment; it causes development of sclerosis and osteoporosis, leading to skeletal pains and multiple fractures. The occurrence of this disease was closely associated with long-term aluminum accumulation. Bone toxicity has also been described in patients receiving chronic parenteral nutrition containing aluminum-contaminated casein hydrolysate and in patients who had ingested large doses of aluminum-containing antacids for extended periods.
Prevention Aluminum measurements of serum are extensively used in the monitoring of patients undergoing dialysis treatment. Although high levels of aluminum may be accurately estimated by most laboratories, reference levels have decreased significantly, indicating improved contamination control in the laboratories. Serum levels below 10 µg/L (0.37 µmol/L) are usually considered normal. In the past, serum concentrations in dialysis patients could exceed 50 µg/L (1.85 µmol/L), and the risk of adverse effects of aluminum was much increased if the serum level exceeded 100 µg/L (3.7 µmol/L). Aluminum has a short biological half-life in the blood of individuals with normal kidney function, rendering aluminum measurements of serum samples of limited value in occupational health practice. However, urinary excretion of aluminum reflects short-term exposures, while a better indication of chronic accumulation is the excretion after an exposure-free interval of several days. A measure of the body burden is the concentration in a bone biopsy from the iliac crest. Aluminum toxicity in dialysis patients can be prevented by using dialysis water with an aluminum concentration below 10 µg/L (0.37 µmol/L) after reverse osmosis or other effective treatment. Also, substitution of oral aluminum-containing phosphate binders by calcium-based compounds has been widely instituted. Solutions used for parenteral nutrition should be examined for aluminum content, and low-level products should be preferred. Desferrioxamine has only limited therapeutic use as an aluminum chelator. In the United States, aluminum is regulated as an inert dust, with an exposure limit of 15 mg/m3 for total dust and 5 mg/m3 for respirable particles. This limit may not entirely protect against adverse effects.
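The paired µg/L and µmol/L values quoted above for serum aluminum follow directly from the molar mass of aluminum (~26.98 g/mol); a minimal conversion sketch:

```python
# ug/L -> umol/L for serum aluminum: dividing ug/L by the molar mass in
# g/mol gives umol/L directly, because the metric prefixes cancel.

AL_MOLAR_MASS = 26.98  # g/mol

def ug_l_to_umol_l(conc_ug_l: float) -> float:
    return conc_ug_l / AL_MOLAR_MASS

for conc in (10, 50, 100):  # the threshold values discussed in the text
    print(f"{conc} ug/L = {ug_l_to_umol_l(conc):.2f} umol/L")
```

The same one-line conversion applies to any metal once its molar mass is substituted.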
The limits recommended by the American Conference of Governmental Industrial Hygienists (ACGIH) are: 10 mg/m3 for aluminum metal and oxide, 5 mg/m3 for aluminum pyro-powders and welding fumes, and 2 mg/m3 for soluble aluminum salts and (unstable) aluminum alkyls.

ANTIMONY
Antimony is used for various alloys with lead and other metals, for semiconductors, and for thermoelectric devices, and antimony compounds are widely employed, especially as pigments and (antimony trioxide) as a flame retardant in textiles. Occupational antimony exposures mainly occur in the nonferrous mining and refining of the metal and in the production of pewter, solder, storage battery plates, and babbitt metal. Coal combustion and waste incineration are additional major sources of anthropogenic emission. Exposure to antimony compounds has been reported in the glass industry and from the production of abrasives, textile dyeing, and handling of pigments and catalysts. Antimony-containing pharmaceuticals (e.g., against leishmaniasis and schistosomiasis) are still in wide use in certain parts of the world.
Adverse health effects seen in relation to occupational antimony exposures are difficult to evaluate, because concomitant exposures to arsenic often occur. While cardiotoxicity has been documented as a side effect of antimony pharmaceuticals, electrocardiogram changes related to occupational exposures have only occasionally been reported. An increased mortality from ischemic heart disease in antimony smelter workers was suggested by one study, in which the difficulty of obtaining a proper control population was emphasized. More commonly, antimony compounds have given rise to irritation of the mucous membranes, irritant eczema, and even chemical burns and perforation of the nasal septum. In particular, antimony trioxide frequently causes so-called antimony spots, that is, small, erythematous papules that develop under intense itching on exposed, moist skin areas in hot environments; they are fortunately short-lasting. A benign pneumoconiosis is related to antimony exposures. The lung cancer risk may be increased by exposures to this metal, and a relation to an increased frequency of abortions has been reported in one study. In the presence of strong acid, stibine (SbH3) may be formed; storage battery workers and metal etchers may be exposed to this hazard. This gas is very toxic and causes severe hemolysis, shock, central nervous system (CNS) symptoms, and even death due to anuria. Most absorbed antimony is rather rapidly excreted, Sb(V) mostly in the urine and Sb(III) mostly via the gastrointestinal tract. A slow compartment seems to exist, probably reflecting accumulation in liver and kidneys. Severe toxicity has not been documented at urine levels below 1 mg/L (8.2 µmol/L), but biological monitoring of antimony levels in blood and urine has so far been used only rarely. The limit for occupational antimony exposures is 0.5 mg/m3; for stibine, this level corresponds to 0.1 ppm.
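The correspondence between 0.5 mg/m3 and 0.1 ppm for stibine can be checked with the standard gas conversion (molar volume of ~24.45 L/mol at 25°C and 1 atm); a sketch:

```python
# ppm -> mg/m3 for a gas at 25 C and 1 atm: mg/m3 = ppm * MW / 24.45.
MOLAR_VOLUME_L = 24.45               # L/mol at 25 C, 1 atm
MW_STIBINE = 121.76 + 3 * 1.008      # ~124.8 g/mol for SbH3

def ppm_to_mg_m3(ppm: float, mw_g_mol: float) -> float:
    return ppm * mw_g_mol / MOLAR_VOLUME_L

print(round(ppm_to_mg_m3(0.1, MW_STIBINE), 2))  # ~0.51, i.e. about 0.5 mg/m3
```

The small discrepancy against the rounded limit reflects only the choice of reference temperature.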
Antimony exposure can also be assessed by measurement of antimony levels in hair, which have been found to increase after treatment with antimony compounds. However, monitoring of hair is not recommended, as there is always a risk of external contamination by the metal, which would not be distinguishable from absorbed antimony.

ARSENIC
Exposures Arsenic occurs widely in the environment, and dissolved arsenic compounds in groundwater can cause severe exposures from deep wells, especially in certain parts of South America, West Bengal, and Taiwan. Some crustaceans may contain as much as 100 mg/kg, but most arsenic in seafood occurs as less harmful organic complexes. Other food items usually contain little arsenic. Major sources of environmental pollution are primary metal smelters and coal burning.6 Occupational exposure to arsenic occurs in the following branches of industry: metal smelting, where arsenic occurs as a contaminant or by-product; production and use of various alloys, especially with lead and copper; semiconductor industry; production and use of wood treatment (chromated copper arsenate) and agricultural pesticides (e.g., calcium and lead arsenate); production of opal glass; certain kinds of enameling; production of pharmaceuticals; production of paints and coatings; leather tanning and the taxidermist industry; and the production, handling, analysis, etc., of arsenic and arsenic compounds. When arsenic-containing ores are heated, arsenic trioxide (As2O3, white arsenic) is formed, and this compound constitutes the main product for the arsenic-consuming industry. Experimental studies suggest that this As(III) is more toxic than the As(V), which occurs in arsenate compounds, for example, wood treatment products. Arsine (AsH3) is particularly toxic. However, little is known about the speciation of arsenic in occupational exposures. Easily soluble arsenic compounds may be absorbed rather efficiently through the respiratory and gastrointestinal tracts; absorption through the skin has also been documented. As(V) seems to be partially converted to As(III). Methylation occurs in the liver, and the methylated arsenic species usually constitute the main part of the urinary arsenic excretion after exposure to inorganic arsenic compounds.
The methylation process varies between species and between human populations.7 Such variations may suggest genetic differences in the enzymes responsible for the methylation of arsenic, but the methylation rate may also be influenced by such factors as the arsenic species absorbed, dose level, age, nutrition, and disease. The extent to which variation in arsenic methylation affects its toxicity, including carcinogenicity, is not known. Arsenobetaine and arsenocholine from fish and crustaceans are relatively rapidly excreted unchanged in the urine. The biological half-life of inorganic arsenic in the body averages about four days; after an acute exposure to inorganic arsenic, the arsenic excretion in urine is therefore increased for a week or more. An additional, somewhat slower excretion occurs through hair, nails, and skin cells. Both skin and lungs may constitute a “slow” arsenic compartment with a long biological half-life.
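The ~4-day half-life is consistent with urinary excretion remaining elevated for a week or more; under simple first-order elimination, the fraction of an acute dose still in the body can be sketched as:

```python
# First-order elimination with the ~4-day biological half-life quoted
# above for inorganic arsenic (a simplification: the slow skin/lung
# compartment mentioned in the text is ignored).

HALF_LIFE_DAYS = 4.0

def fraction_remaining(t_days: float, t_half: float = HALF_LIFE_DAYS) -> float:
    return 0.5 ** (t_days / t_half)

print(round(fraction_remaining(4), 2))   # 0.5 after one half-life
print(round(fraction_remaining(7), 2))   # ~0.3 still present after a week
```

With roughly 30% of the dose remaining after seven days, urinary arsenic would indeed still be clearly elevated at that point.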
Effects Acute intoxication due to ingestion of arsenic trioxide or lead arsenate first causes vomiting, colic, and diarrhea, followed by fever, cardiotoxicity, peripheral edema, and shock, which can lead to death within 12–48 hours. Patients who survive an acute intoxication usually exhibit anemia and leukopenia and may experience peripheral nervous damage 1–2 weeks later. Late effects include loss of hair and nail deformities. Recovery from peripheral neurotoxicity is slow and may take several months. Neonatal exposure to arsenic from contaminated milk supplements has caused severe developmental neurotoxicity that resulted in permanent cognitive deficits. Anecdotal evidence suggests that long-term intake of small amounts of arsenic can lead to a decrease in acute toxicity, but the mechanism of this apparent tolerance is not known, nor is its possible implication for chronic toxicity. Another kind of acute poisoning may occur following inhalation of the extremely toxic arsine (AsH3), which smells like garlic. This compound is formed when arsenic (frequently present as an impurity) comes into contact with strong acid, and prolonged inhalation of 10 ppm or more of arsine is lethal. The patient first suffers dizziness, headache, and pains in the stomach, arms, and legs, with subsequent hemolysis, jaundice, and kidney damage, which may lead to death. Under chronic exposure conditions, neuropathy, mainly of sensorimotor type, may develop and cause paresthesias in the extremities and neuralgic pains, but muscle weakness, especially in the fingers, and motor incoordination may also occur. These effects may occur as a late result of an acute exposure or as a result of long-term exposure to arsenic, in which case chronic skin symptoms may occur at the same time. A subclinical neuropathy, detectable by neurophysiological methods, has been described in relation to relatively low arsenic exposures.
Accidental arsenic exposure from contaminated milk powder caused over 100 deaths in infants and lasting mental retardation and other neurological effects in survivors. Long-term exposure to inorganic arsenic compounds can cause chronic eczema, hyperpigmentation of the skin, and hyperkeratosis, especially on foot soles and palms. Development of skin cancer may be seen at a later time: squamous cell carcinomas mostly at the hyperkeratoses on the extremities, basal cell carcinomas in any region. Vascular effects may result in Raynaud’s phenomenon, acrocyanosis, and necroses (“blackfoot disease”). In addition, epidemiological studies in Taiwan, Chile, and Argentina have shown increased incidence of bladder cancer and lung cancer.7 Similar findings have emanated from studies of pesticide production workers, sprayers, smelter workers, residents near polluting industries, and patients treated with arsenicals. The incidence of other cancer forms may be increased as well, though this evidence is less certain. In most studies, the exposures were mixed, and the effects of As(III) and As(V) cannot be separated. Arsenic may act synergistically with tobacco smoke. Teratogenic effects have also been reported.
Prevention Biological monitoring of arsenic levels in blood is of limited interest, because arsenic is rapidly cleared from the blood. Hair analysis has been employed in forensic medicine, but the risk of external contamination precludes the use of this method in the surveillance of
dust exposures in industry. Measurement of arsenic levels in urine may be used for the evaluation of current exposures, because a major part, about 60% at steady state, of the absorbed arsenic is excreted in the urine. However, due to the somewhat variable proportion excreted by this route, the daily variations related to the short biological half-life, and the contribution of arsenic compounds from food items, urine tests for total arsenic are useful only on a group basis. If the excretion is above 1 mg/L (13 µmol/L), the result can be used as an indication of arsenic intoxication. Normally the arsenic content in urine is below 100 µg/L (1.3 µmol/L), but levels more than twice that high may be seen after a good seafood meal. After exposure to inorganic arsenic compounds, the urinary arsenic usually consists of no more than 25% inorganic arsenic, one-third of the rest being monomethylarsonate and two-thirds being dimethylarsinate (cacodylate). Background levels of these compounds would probably be below 20 µg/L (0.27 µmol/L), unless exposures from contaminated wells were prevalent. The organoarsenicals from seafood do not affect the urinary excretion of the methylated compounds. The limit for airborne arsenic and inorganic arsenic compounds is 0.01 mg/m3; for organic compounds, 0.5 mg/m3; and for arsine the limit is 0.05 ppm, which corresponds to 0.2 mg/m3. However, on the basis of the carcinogenic effects, the National Institute for Occupational Safety and Health (NIOSH) has recommended a limit of 0.002 mg/m3 for all arsenic compounds; exposures below this limit would result in minor or undetectable increases of arsenic levels in urine. A WHO/FAO expert group has suggested a limit for daily intake of inorganic arsenic of 0.002 mg/kg body weight. The U.S. Environmental Protection Agency uses a drinking water limit of 10 µg/L.

BERYLLIUM
Beryllium, the fourth lightest element, is extracted from beryl ore and is found at low concentrations in the Earth’s crust. Beryllium is used for coating of cathode-ray tubes (e.g., for radar equipment), in electrical or electronic instruments, and in nuclear reactors. Moreover, beryllium is used in many light metal alloys for the space and aircraft industry and for the nuclear industry. Most of the environmental pollution is due to combustion of fossil fuels.8 Most beryllium salts are practically insoluble at neutral pH, and absorption after oral intake is therefore limited. Skin contact may result in allergic dermatitis. Inhalation of beryllium dusts is the major hazard. Once absorbed, excretion is slow. An acute, severe exposure to airborne beryllium may result in an inflammation of mucous membranes and in a chemical pneumonia. Chronic beryllium disease (sometimes called berylliosis) has similarities to sarcoidosis, and the differential diagnosis may require documentation of beryllium exposure.8 This pulmonary granulomatosis can evolve after an acute phase, following a long but variable latency period. Often the diagnosis is made several years after cessation of exposure. Cases have occurred in household contacts and in subjects with only short-term exposures. The most frequent symptom is dyspnea on exertion. The chest x-ray usually reveals a mixture of small, rounded, and irregular opacities. Pulmonary function tests show decreased diffusion, later followed by more generalized pulmonary impairment. Granulomas may also occur in the liver and other organs, but the Kveim test for sarcoidosis is negative, while the lymphocyte blast transformation test is positive for beryllium. The course of the disease is irregular, and some form of predisposition seems to affect the pathogenesis. Although steroid treatment is beneficial, no complete recovery has been recorded.
Beryllium is regarded as a human carcinogen based on epidemiological evidence from refining, machining, and production of beryllium metal and alloys, where beryllium-exposed individuals suffer lung cancer more frequently than expected.9 Biological monitoring is of limited interest and plays no role in the prevention of excess beryllium exposures. The limit for occupational beryllium exposure is 0.002 mg/m3, but efforts have been made to decrease this limit by a factor of 10. A peak value of 0.025 mg/m3 applies to short exposures.
CADMIUM
Exposures Cadmium concentration in agricultural soils is increasing because of the deposition of airborne cadmium particles and because of the cadmium content of phosphate fertilizers and sewage sludge used for fertilization. Cadmium is a relatively mobile metal in soils, and many crops retain relatively high cadmium levels. In particular, tobacco leaves are high in cadmium. The total daily intake of cadmium via food varies according to dietary habits, but averages range from less than 10 to more than 50 µg/day. Cereals, mollusks and crustaceans, wild mushrooms, and beef kidney are main sources of increased dietary cadmium exposure.10 The most important application of this metal is cadmium plating for corrosion treatment of metals, especially iron and steel. Brazing is still carried out with solders containing cadmium. Rechargeable nickel-cadmium batteries are increasingly used in modern-day electronic products. To a limited degree, cadmium is also used in certain copper alloys and in bearing metal. Cadmium rods are used in nuclear power plants. Cadmium sulfide and selenide are used as pigments in enamel, ceramics, glass, plastic, and leather. Many of these uses are now being restricted. However, considerable occupational cadmium exposure may still be a result of various work processes, such as welding or cutting of metals with cadmium-containing coatings, spray-painting with cadmium pigments, or primary production of copper and zinc from cadmium-containing ores. Raw phosphate often contains significant amounts of cadmium, and exposures may occur during the production of phosphate fertilizers. This metal has a melting point of 320°C, and dangerous fumes are generated at rather low temperatures. Pulmonary absorption depends on particle size and solubility, while 2–10% of oral intake is transferred to the bloodstream. Uptake by the liver induces the synthesis of metallothionein, which binds cadmium.
When released to the blood, the complex is subsequently filtered through the kidney glomeruli; most is reabsorbed by the tubule cells, and an accumulation in the kidney cortex takes place. In general, about one-half of the human body burden of cadmium is located in the liver and kidneys. The liver is the main storage organ for cadmium in the body, but the highest concentration is eventually reached in the kidneys. The biological half-life is 10–20 years, and cadmium accumulation in the body therefore seems to occur during the major part of a lifetime.
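The slow accumulation implied by a 10–20-year half-life can be illustrated with a simple first-order model. This is a minimal sketch, not a toxicokinetic model from the text: the function name, the 15-year half-life (chosen from the middle of the cited 10–20-year range), and the example intake are illustrative assumptions.

```python
import math

def cadmium_body_burden(daily_absorbed_ug: float, years: float,
                        half_life_years: float = 15.0) -> float:
    """First-order accumulation under constant intake: with elimination
    rate k = ln(2) / t_half, the burden after t years is
    (annual intake / k) * (1 - exp(-k * t)).  Returns micrograms.
    The 15-year default sits within the 10-20-year range cited above."""
    k = math.log(2) / half_life_years
    annual_intake_ug = daily_absorbed_ug * 365.0
    return (annual_intake_ug / k) * (1.0 - math.exp(-k * years))
```

For example, an absorbed dose of 1 µg/day (roughly a 20 µg/day dietary intake at ~5% absorption) accumulates to a body burden on the order of 7 mg after 50 years, still short of the ~7.9 mg plateau, which is consistent with accumulation continuing over most of a lifetime.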
Effects Acute cadmium poisoning most frequently occurs after inhalation of cadmium fume, for example, when cutting cadmium-plated steel with an oxyacetylene torch. After a latency of a few hours, the first symptoms may suggest metal fume fever, but a toxic pneumonitis then develops. Recovery is often slow and may take months; several years after acute cadmium pneumonitis, progressive pulmonary fibrosis has been observed. Oral cadmium exposure may cause acute poisoning, for example, when large amounts of the metal have been released from solder materials in soft-drink machines or from ceramic glazes of kitchenware. The chronic form of cadmium poisoning is a result of long-term accumulation in the body, where the kidneys constitute the target organ. The toxic damage mostly seems to occur in the proximal tubules, but glomerular changes often appear at a later stage and may even in some cases be the first indication of cadmium-induced nephropathy. The first sign of kidney dysfunction is usually an increased excretion of low-molecular-weight proteins in the urine, notably β2-microglobulin and the more stable α1-microglobulin (protein HC). In case of glomerular dysfunction, larger proteins also occur in the urine. Low-level environmental exposure to cadmium is related to increased urinary leakage of small proteins, and subjects above 60 years of age and patients with diabetes may be at an increased risk of cadmium nephrotoxicity.11 Subsequent losses of protein and minerals, and disturbances of vitamin D metabolism, may lead to skeletal changes, as seen most
Health Significance of Metal Exposures
dramatically in Japan, where a large number of cadmium-exposed patients suffered osteomalacia with skeletal pains and pseudofractures, the so-called itai-itai disease. Environmental cadmium exposures at low levels are associated with decreased bone mineral density and increased risk of bone fractures. Thus, environmental cadmium pollution may accelerate age-related declines of both renal function and bone density. Long-term inhalation of cadmium can lead to emphysema, and this outcome may also be of importance in regard to cadmium retention in the lungs of smokers. Experimental animal studies show that pulmonary exposure to cadmium compounds may cause lung cancer, and epidemiological evidence on cancer of the lung, prostate, and kidneys has confirmed that cadmium should be regarded as a carcinogenic metal.9 Additional animal experiments suggest that this metal may be a teratogen.
Prevention The cadmium level in the blood is an indication of the current exposure (during the last few months) and is frequently used for biological monitoring. Levels up to 10 µg/L (89 nmol/L) may occur in heavy smokers, while never-smokers usually show levels below 1 µg/L (9 nmol/L). For industrial exposures, a recommended limit for blood cadmium is 5 µg/L (44 nmol/L), but this limit will not protect against kidney damage under long-term exposure conditions. Urinary excretion of cadmium is limited in the beginning, and immediate increases occur only under rather heavy exposures. High urinary cadmium levels are found when the kidneys have accumulated large amounts of cadmium over a long period and then start to leak. If the exposure continues, tubular and perhaps glomerular dysfunction develops, and relatively large amounts of cadmium are then excreted in the urine. The most recent data from Sweden suggest that the earliest effects may occur at a urine-cadmium excretion below 1 µg/L,11 and levels continuously above this limit should therefore be avoided. β2-Microglobulin and protein HC may be assessed in urinary samples as part of monitoring efforts, but excess levels are found only in case of early or imminent kidney damage, that is, when preventive efforts have failed. The occupational exposure limit in the United States is 0.005 mg/m3 due to the carcinogenic risk.9 Many countries have enacted regulations concerning cadmium release from ceramic glazes and other materials that may leach cadmium to food and beverages. The International Organization for Standardization (ISO) has adopted a limit for cadmium release from ceramic flatware of 0.17 mg/dm2, with higher limits for hollowware. With regard to dietary intake of cadmium, a WHO/FAO expert group several years ago suggested a Provisional Tolerable Weekly Intake (PTWI) limit of 7 µg/kg body weight per week.
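As a quick check of how the PTWI relates to the dietary figures cited earlier, the weekly limit can be converted to a daily allowance per body weight. The function below is an illustrative sketch; only the 7 µg/kg per week figure comes from the text.

```python
def cadmium_ptwi_daily_ug(body_weight_kg: float,
                          ptwi_ug_per_kg_week: float = 7.0) -> float:
    """Daily cadmium intake corresponding to the WHO/FAO Provisional
    Tolerable Weekly Intake of 7 ug per kg body weight per week."""
    return ptwi_ug_per_kg_week * body_weight_kg / 7.0

# For a 70-kg adult the PTWI corresponds to 70 ug/day, above the dietary
# averages cited earlier (less than 10 to more than 50 ug/day).
```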
Since then, kidney function in the elderly and diabetes patients has turned out to be more vulnerable than expected, and lifelong cadmium accumulation from environmental exposures would seem to eventually cause adverse effects, perhaps at intakes substantially below the PTWI. However, even the current PTWI already seems to be exceeded by some population groups, and prevention of cadmium pollution from all sources would therefore seem to be a major environmental priority.

CHROMIUM
Exposures Chromium most commonly occurs as trivalent compounds. Divalent compounds are rather unstable, and hexavalent chromates are reduced to trivalent compounds in the presence of oxidizable substances. Only scattered information is available on environmental exposures to chromium. In the United States, daily intake through food is usually below 100 µg, but higher intakes occur in northern Europe. The chemical form of chromium present in food and drinking water is largely unknown, but soluble chromates likely predominate. Occupational exposures to chromium occur in several branches of industry: production of chromium and chromium compounds,
Environmental Health
stainless steel and other metal alloys; chromium plating of metals; production of heat-resistant bricks with chromate additives; use of chromates as pigments and bichromates for tanning; welding of chromium-plated metals and chromium-containing alloys; development of photographic emulsions; and production and usage of wood preservatives. The main consumption of chromium is in the steel industry, and stainless steel usually contains between 8% and 18% chromium. In addition, chromate present in cement results in considerable cutaneous exposures. The gastrointestinal uptake of Cr(VI) is a few percent, while the absorption of Cr(III) is much less; organic complexes of chromium may be more easily absorbed. The fate of inhaled chromium particles and the transfer within the body depends on the particle size and solubility of the compounds. Excretion is mainly via the urine.
Effects Chromium is an essential trace metal (as glucose tolerance factor) for several species, including humans. Glucose intolerance, weight loss, and peripheral neuropathy in patients undergoing long-term intravenous nutrition may be cured by Cr(III) supplements. Chromium deficiency in humans is otherwise unknown, and the daily chromium need is unclear. The toxicity of the various chromium compounds varies, partly in relation to the different solubilities.12 In general, hexavalent compounds are more easily soluble than the trivalent compounds. The chromate ion is strongly oxidizing and is capable of passing through biological membranes. Trivalent chromium is less toxic, apparently due to the lower solubility and lower biological mobility. The major effects include corrosion of skin and mucous membranes, allergic responses, and carcinogenicity. Long-term inhalation of Cr(VI) compounds in chromium-plating workshops has in the past caused severe corrosion of the nasal mucous membranes with defects in the nasal septum. These effects are now seen more rarely. Chromate may cause circumscribed ulcers (chrome holes) at the knuckles, nail roots, or other exposed skin areas. Even though they may be quite deep, they are almost painless. Healing often takes several weeks and leaves a depressed scar, but the ulcers are apparently not related to development of skin cancer. Chromium is one of the best known allergens in the occupational environment, and chromate is frequently the most common cause of allergic contact dermatitis among males. Cement eczema is a common occupational disease in construction workers, and chromate is the most frequent cause of allergic hand eczema in occupational health, with high prevalence rates in tanners, furriers, and workers exposed to chromates in photographic laboratories and in relation to wood treatment. Chromate has also been identified as a cause of asthma, probably mediated by a type I allergic reaction. 
Chromite mining has apparently caused several cases of a benign pneumoconiosis. Chromium is a well-documented human carcinogen, and occupational exposures resulting from the production of ferrochrome and of chromates have caused an increased frequency of cancer in the respiratory tract.12,13 An increased occurrence of lung cancer in welders may be due to the content of insoluble chromates in welding fumes from stainless steel. Although trivalent chromium compounds may be involved in the carcinogenesis, exposures to such compounds have not been shown to cause cancer in epidemiological studies.
Prevention Biological monitoring of chromium levels in the urine is useful to follow the exposure to soluble, hexavalent chromium compounds. The biological half-time in plasma is a few days. When external contamination of the sample has been avoided, the upper reference level is usually about 0.5 µg/L (10 nmol/L). Plasma chromium levels parallel the urinary excretion, but chromium concentrations in erythrocytes or whole blood reflect longer-term chromate exposures. Exposure to trivalent compounds or sparingly soluble chromates will not result in detectable changes in body fluids available for biological monitoring.
The exposure limit for airborne chromates and chromic acid is a ceiling value of 0.1 mg/m3; for soluble chromic and chromous salts, the limit is 0.5 mg/m3; and for chromium metal and insoluble chromium salts, 1 mg/m3. Cr(VI) compounds are regarded as carcinogenic, and a permissible exposure limit of 0.001 mg/m3 has been suggested by NIOSH. Skin contact with Cr(VI) compounds should be avoided, and any skin contamination should be removed immediately with soap and water. This problem is even more important for patients with chromate allergy, who may have to avoid contact with leather products and plastic articles with leachable chromate pigments. The sulfur on matchsticks contains chromate as well. On the other hand, chromium alloys release only insignificant amounts because of oxide formation in the surface layer. In some countries the addition of 0.4% ferrous sulfate to cement is required by law, because it effectively reduces the chromate to insoluble Cr(III) compounds. WHO has for many years recommended a drinking water limit of 0.05 mg/L for total chromium, but lower concentrations can easily be maintained in most places.

COBALT
Human cobalt exposures from natural sources are very limited, and daily intake through food has usually been estimated at somewhat below 50 µg. Cobalt levels in drinking water are usually low and of little concern, and atmospheric levels are frequently undetectable. Occupational exposures, however, have become prevalent.14 The most important use is hard metal, which consists of various metal carbides (mainly tungsten carbide) cemented by a cobalt binder. Cobalt has also found considerable use in alloys, to which it adds a high melting point, tensile strength, and resistance to corrosion. Cobalt compounds are increasingly used as catalysts, including desiccators in paints. Cobalt pigments are used in ceramic and glass products. The alloys are extensively used in the electrical, automobile, and aircraft industries, and cobalt is also used for electroplating. Absorption in the gastrointestinal tract varies but probably averages about 25% for soluble compounds; absorption is higher when cobalt is ingested in the form of vitamin B12 and in iron deficiency, which increases the uptake of cobalt. Ingestion of excessive amounts of cobalt will induce vomiting and diarrhea. Cobalt is an essential micronutrient and has important actions as an enzyme activator and as a component of vitamin B12. Cobalt deficiency has not been documented in humans, but enzootic deficiency is a potential problem in certain regions of the United States, Australia, Scotland, and other parts of the world. Thus, cobalt is added to cattle feed and sometimes to fertilizers. Respiratory exposure to cobalt dust may lead to airway irritation, asthma, and measurable systemic absorption. Cemented carbide production workers may develop a pneumoconiosis called hard metal lung, frequently following long-term exposures of more than 10 years. The pathogenesis of cobalt-induced pulmonary disease is not known in detail, but some individual hypersensitivity may predispose to the pulmonary reactions.
Some studies suggest that cobalt exposure may lead to an increased risk of lung cancer. Cutaneous exposures to cobalt are common. Small concentrations of this metal are present in cement, and cobalt may contaminate cutting oils and may leach from metal objects. Cobalt allergy is frequent and often occurs in connection with allergy toward nickel or chromate. Hand eczemas in patients with such cross-reactions have a relatively poor prognosis. An outbreak of cardiomyopathy, sometimes complicated by pericardial effusion, was reported in Quebec City about 1970.15 This disease occurred exclusively in beer drinkers, and subsequent investigations showed that the local brewery added cobalt sulfate to the beer. The same practice was discovered in Omaha, Minneapolis, and Brussels, where similar epidemics occurred. Although probably not solely due to the addition of about 1 mg of cobalt to each liter of beer, the epidemics faded after discontinuation of the addition of cobalt. Several similar cases have been linked to industrial cobalt exposures.
Biological monitoring may be of some use.14 The kinetics of cobalt in the organism show the existence of two fast compartments with half-lives of up to 2 days, while about 10% of absorbed cobalt is excreted much more slowly. Urinary cobalt excretion levels are normally below 2 µg/L (30 nmol/L), unless the individual takes a mineral supplement. Following occupational exposures, urinary excretion levels may be 100-fold the normal upper limit, but the levels may change rapidly due to the short half-life. Thus, more information may be obtained on the average long-term exposure by measuring the cobalt level in urine or blood on a Monday morning, after an exposure-free period. Occupational exposures to cobalt metal fume and dust should be limited as much as possible while respecting the current exposure limit of 0.1 mg/m3. Due to the increasing awareness concerning hard metal disease, a limit of 0.05 mg/m3 for cobalt metal, dust, and fume has been proposed by ACGIH. Even this limit may not sufficiently protect a worker with pulmonary hypersensitivity, however.

COPPER
Copper is a widely used metal that has both beneficial and adverse health effects. This metal is used in electrical equipment, in alloys, and in plumbing and heating systems. Acidic, soft water may leach copper from the tubing. The daily intake through food averages about 1 mg or more. Copper is an essential element that is necessary for various metalloenzymes, and possible signs of copper deficiency in humans have been documented in depletion experiments. Accidental intake of large amounts of this metal results in acute gastrointestinal symptoms. Copper sulfate has therefore been used as an emetic, but the potential absorption of toxic quantities of the metal limits its usefulness. Copper appears to play an etiological role in the development of so-called Indian childhood cirrhosis, but other factors, such as genetic predisposition, are thought to be of importance. Anecdotal evidence suggests that infants given formula reconstituted with copper-contaminated tap water can develop a chemical hepatitis, but this possible risk has not been confirmed. Wilson’s disease (hepatolenticular degeneration) causes accumulation of copper in the liver related to insufficient formation of the copper-binding ceruloplasmin; these patients and the heterozygous carriers may be particularly sensitive to excess copper exposures. Also, patients with other preexisting liver disease or undergoing hemodialysis may be more susceptible to copper storage disease. Occupational exposures to copper fume and fine dust can cause metal fume fever, and copper dust is a respiratory irritant. Serum concentrations are affected by ceruloplasmin levels and increase during pregnancy and during contraceptive hormone treatment. Excretion is mainly via the bile. The exposure limit for copper dusts or mists is 1 mg/m3, and for copper fume, 0.1 mg/m3, although ACGIH has suggested 0.2 mg/m3 for fumes. WHO recommends a limit for drinking water of 2 mg/L. The recommended daily dietary intake is 2.3 mg for adults.
IRON
Iron is necessary for life but may also cause toxicity at excess exposures. Iron deficiency with anemia is the most prevalent metal deficiency syndrome in humans, especially among women of reproductive age and certain groups of small children. Several nutrients interfere with iron absorption, but absorption is always increased in case of deficiency. Ingestion of iron supplements in considerable excess (above 30 mg/kg body weight) may cause acute gastrointestinal lesions followed by metabolic acidosis, toxic hepatitis, and shock. Chronic iron overload, as in hereditary hemochromatosis, leads to hemosiderosis, potential liver cirrhosis, and increased cancer risk. Foundry workers, grinders, and welders are exposed to considerable quantities of iron oxide fume, which accumulates in the lungs and may result in siderosis, a benign pneumoconiosis. Hematite miners have exhibited an excess incidence of lung cancer; although iron may
not be the primary cause, an interaction between the iron dust and other factors, such as radon and asbestos, is possible. The exposure limit for iron oxide fume is 10 mg/m3, but ACGIH has recommended a limit of half that value, and 1 mg/m3 for soluble iron salts. Recommendations for daily iron intakes suggest that iron supplements are necessary for large population groups, but the supplements should always be stored in child-proof containers. Iron pentacarbonyl may be formed when carbon monoxide comes in contact with iron at high partial pressures. This liquid is extremely toxic and, when the vapor is inhaled, results in almost immediate headache, dyspnea, and dizziness. The symptoms then fade, only to return after several hours when pulmonary consolidation and cerebral degeneration are progressing. The ACGIH exposure limit is 0.1 ppm.

LEAD
Exposures Lead has a wide spectrum of applications. Metallic lead is used in various alloys, and several inorganic compounds have important uses. Production of the organolead compounds tetraethyllead and tetramethyllead as octane boosters in gasoline has now almost ceased. The extensive use of lead resulted in considerable redistributions in the biosphere, particularly as a result of air pollution from leaded gasoline. Calculations of natural lead exposures suggest that current environmental lead exposures average 10- to 100-fold above typical exposure levels in premetallurgical times. Dietary lead intakes have decreased considerably in many countries as a result of the phase-out of lead additives in gasoline. Daily oral intakes of lead are below 100 µg for adults, often averaging about 10 µg. The major sources of environmental lead exposure include gasoline additives, lead-based paint, lead-soldered food cans, ceramic glazes, and industrial pollution. Drinking water levels may be of particular concern in soft water areas; where lead pipes are still in use, the highest lead concentrations occur in the “first draw” water in the morning. The melting point of lead is 327°C, and hazardous evaporation results when the temperature exceeds about 500°C. This fact is of importance where lead is melted or molded in factories and workshops. Various inorganic compounds are used as pigments and desiccators, for corrosion treatment and enameling, and as an additive to glass and a stabilizer in polyvinyl chloride (PVC) plastic. Lead compounds that are used in ceramic glazes are usually fritted (i.e., aggregated as larger particles by preheating).
Occupational lead exposure occurs in particular in the following processes: primary production of lead from lead ores; secondary lead production from used automobile batteries and scrap metal; production of batteries; welding and flame cutting of lead-containing or minium-treated alloys; molding of lead-containing alloys in foundries; soldering with lead solder, if the temperature is too high; production of and spray painting with paints containing lead pigments and desiccators; addition of lead stearate as a stabilizer in PVC plastic; batch mixing with lead compounds for the production of crystal glass; and grinding and sandblasting of lead alloys and coatings. High exposures have also been documented in instructors from indoor shooting ranges, in workers producing leaded panes, and in gunsmiths. Inorganic lead compounds are absorbed only to a minor degree in the gastrointestinal tract of adults, usually about 10% or slightly less, somewhat higher during fasting and somewhat lower when excess calcium, phosphate, and phytate are present. However, the immature gastrointestinal tract is relatively permeable to lead, and balance studies in small children have suggested that oral intake may result in absorption rates of 30–50%. Almost all lead in the blood is bound to the erythrocytes, and the lead content of serum or plasma is so low that it cannot be reliably measured by conventional analytical methods. Measurements therefore refer to the lead content of whole blood (or erythrocytes). Due to the low solubility of lead phosphate, lead accumulates in calcified tissues. About 95% of the lead burden of an adult person is located in
the skeleton, with a very long biological half-life related to the slow tissue remodeling rate. Skeletal lead is more mobile in children. Much less lead is present in the soft tissues, and the half-life there is generally about 2 months. The brain probably constitutes an exception: lead that has passed through the blood-brain barrier has a biological half-life of more than a year. The placenta does not constitute any major barrier to lead passage, and the fetus is therefore exposed to lead through the mother. Some lead is excreted into the gastrointestinal tract, but the major excretion is via the urine. Only low concentrations of lead have been detected in human milk.
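The absorption figures above imply very different absorbed doses in adults and small children for the same oral intake. A minimal sketch of that arithmetic follows; the fractions (about 10% in adults, 30–50% in small children) come from the text, while the function names and interface are illustrative.

```python
ADULT_FRACTION = 0.10                # "about 10% or slightly less" in adults
CHILD_FRACTION_RANGE = (0.30, 0.50)  # balance studies in small children

def absorbed_lead_adult_ug(oral_intake_ug: float) -> float:
    """Absorbed lead dose (ug) for an adult at the ~10% fraction."""
    return oral_intake_ug * ADULT_FRACTION

def absorbed_lead_child_ug(oral_intake_ug: float) -> tuple:
    """(low, high) absorbed lead dose (ug) for a small child at 30-50%."""
    lo, hi = CHILD_FRACTION_RANGE
    return (oral_intake_ug * lo, oral_intake_ug * hi)

# A 100-ug oral intake yields ~10 ug absorbed in an adult but ~30-50 ug
# in a small child, a three- to five-fold difference.
```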
Effects Lead is an important enzyme inhibitor. Of major clinical importance are the chronic effects on blood cells and the nervous system. Anemia is a typical symptom in classic lead poisoning. Lead inhibits the Na-K-ATPase in the cell membranes of the erythrocytes and thereby makes them less stable, with a shortened life span as a result. Quantitatively less important is the interference with hemoglobin synthesis, several steps of heme formation being inhibited by lead. Most sensitive is the enzyme aminolevulinic acid dehydratase (ALAD), which is inhibited already at low blood lead concentrations, 50 µg/L (0.25 µmol/L) and above. The erythrocyte ALAD activity correlates very closely with the lead content in the blood, but in occupational lead exposure, the activity of this enzyme may become very low. Less sensitive to lead is the incorporation of ferrous ion into protoporphyrin IX to form heme. When this reaction is inhibited, zinc substitutes for iron, and the resulting zinc protoporphyrin (ZPP) is bound to the hemoglobin molecule instead of heme, rendering it unable to carry oxygen. Each erythrocyte thus contains an amount of ZPP that reflects the lead exposure at the time the cell was formed. A blood sample contains erythrocytes formed within the last 4 months or so, and the ZPP concentration in the blood is therefore an indication of the average lead exposure within this time interval. The measurement may be carried out with a portable fluorometer in a few seconds. In adult men, the ZPP concentration increases significantly when the blood-lead concentration averages above 250 µg/L (1.25 µmol/L). In women, the threshold is somewhat lower because of the increased sensitivity related to lower iron stores in the body. In children, the threshold for ZPP increase seems to be about 150 µg/L (0.75 µmol/L).
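The blood-lead figures in this section can be converted between mass and molar units with the molar mass of lead (about 207 g/mol); the parenthetical molar values in the text are rounded. A small sketch, with the ZPP thresholds taken from the text and the names and layout being illustrative:

```python
PB_MOLAR_MASS_G_PER_MOL = 207.2

def blood_lead_umol_per_l(ug_per_l: float) -> float:
    """Convert a whole-blood lead concentration from ug/L to umol/L."""
    return ug_per_l / PB_MOLAR_MASS_G_PER_MOL

# Blood-lead thresholds for a significant ZPP increase, as cited (ug/L).
ZPP_THRESHOLD_UG_PER_L = {"adult_men": 250.0, "children": 150.0}

def zpp_increase_expected(group: str, blood_lead_ug_per_l: float) -> bool:
    """True if the average blood lead exceeds the cited ZPP threshold."""
    return blood_lead_ug_per_l > ZPP_THRESHOLD_UG_PER_L[group]
```

For example, 400 µg/L corresponds to about 1.93 µmol/L.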
An increased amount of ZPP in the blood can also be caused by iron deficiency alone, but iron deficiency may at the same time make the patient more sensitive to the toxic effects of lead. Lead affects both the central and the peripheral nervous system. Cases of encephalopathy in adults have been caused by consumption of moonshine whiskey distilled in old car radiators. More insidiously, a chronic toxic encephalopathy may develop. Typically, the lead-poisoned worker is taken to the doctor or the hospital by his wife, who is worried about his failing health and his unbearable irritability. Clinical examination and neuropsychological testing frequently show that attention, concentration, memory, and abstraction are affected. Early effects, detectable by neuropsychological tests, may develop when blood lead concentrations exceed 400 µg/L (2.0 µmol/L) for extensive periods. Prospective studies suggest that decreasing performance may occur in men when a lead level of 300 µg/L (1.5 µmol/L) is exceeded.16 Children are more susceptible to the central nervous effects, and severe cases of encephalopathy with seizures can still occur, for example, as a result of ingesting lead-containing paint flakes from peeling walls. More commonly, adverse effects are detected in children with elevated levels of lead in the blood in the absence of any past history of acute lead toxicity. Attention, visuospatial performance, and other brain functions are sensitive to lead toxicity, and deficits may be detected on IQ tests. Although measurable deficits were thought to occur only at blood-lead concentrations above 100 µg/L (0.5 µmol/L), more recent studies at lower exposure levels have revealed effects also below this limit, and the dose-effect curve may even be steeper at such low exposures.17 The adult patient with acute lead poisoning has a weak handshake and a decreased function of the extensor muscles of the
forearm (“lead palsy,” Teleky’s sign). Decreased nerve conduction velocity has been documented in chronically exposed workers. Related subjective symptoms may include muscle weakness, fatigue, pains in the extremities, and sometimes even tremor. The earliest detectable effects on nerve conduction velocity appear to occur when blood lead levels exceed 400 µg/L (2.0 µmol/L). Children appear to be somewhat more sensitive also with regard to the peripheral nervous system effects. Acute lead exposure may also affect kidney function, but this effect appears to be reversible. Symptoms from the gastrointestinal tract include anorexia, dysphagia, constipation, or in some cases diarrhea and occur as a result of chronic exposures as well as acute intoxication. In severe poisoning, colicky pains occur, and several such patients have been subjected to surgery for a suspected appendicitis or ulcer. Under chronic exposure conditions and with poor oral hygiene, the accumulation of lead sulfide can cause formation of a blue-gray seam at the gingival edge, the so-called lead seam. Some studies have suggested that severe lead exposure may result in a decreased life span, in particular due to an increased incidence of stroke. A similar tendency has also been postulated in relation to kidney disease, and kidney cancer has been suggested by animal studies. Although lead may be a weak cancer promoter and augment the development of other diseases, current lead exposure levels would probably not cause a detectable increase in cause-specific mortality, although the influence on individual health could be considerable. Teratogenic effects are well documented, and some reports have indicated toxic effects on spermatozoa.
Prevention The current lead exposure of an individual is best reflected in the lead concentration of whole blood. Prevention of adverse health effects requires that blood lead levels be maintained below 400 µg/L (2.0 µmol/L), and the Occupational Safety and Health Administration (OSHA) lead standard includes the provision that blood lead concentrations should be kept below 300 µg/L (1.5 µmol/L) in male and female workers who intend to have children. Long-term exposure may be evaluated by measuring the ZPP level in the blood, and this test can efficiently be used for screening purposes. Medical surveillance is required as an additional safeguard and must be made available to all employees exposed above the action level of 30 µg/m3 for more than 30 days a year. Blood-lead examination must be carried out at least every 6 months, and every 2 months if the blood-lead level exceeds 400 µg/L (2.0 µmol/L). The medical removal protection provision requires that workers with a blood-lead level above 500 µg/L (2.5 µmol/L), or when otherwise indicated by the medical surveillance, be removed from exposure without loss of wages or benefits until the level has returned to 400 µg/L (2.0 µmol/L) or below. If the air-lead level cannot be kept below 50 µg/m3, engineering control measures must be initiated. Regular air monitoring is required if levels exceed the action limit of 30 µg/m3. The standard also includes provisions for employee information and respirator use. The goal of the U.S. Centers for Disease Control and Prevention is to reduce children’s blood-lead concentrations below 100 µg/L (0.5 µmol/L). If many children exceed this level in a local area, communitywide interventions (primary prevention) should be considered. Interventions for individual children should begin at blood-lead concentrations of 150 µg/L (0.75 µmol/L).
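The tiered surveillance scheme described above can be summarized as a small decision function; this is a sketch of the thresholds named in the text, and the function and its return strings are illustrative, not OSHA wording:

```python
# Sketch of the blood-lead surveillance tiers described in the text
# (illustrative only; thresholds in µg/L whole blood).
def surveillance_action(blood_lead_ug_per_l: float) -> str:
    if blood_lead_ug_per_l > 500:
        return "medical removal until level returns to 400 µg/L or below"
    if blood_lead_ug_per_l > 400:
        return "blood-lead examination every 2 months"
    return "blood-lead examination at least every 6 months"

print(surveillance_action(420))  # -> blood-lead examination every 2 months
```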
These limits appear high in the perspective of recent epidemiological findings,17 and a wise prevention approach is to minimize lead exposures to the greatest extent possible. An FAO/WHO expert group has recommended that the weekly oral intake of lead remain below a provisional tolerable weekly intake (PTWI) of 0.025 mg/kg body weight. This limit is likely protective for adults, but it may be insufficient to prevent adverse effects in children. The action level set by the U.S. Environmental Protection Agency for lead in drinking water is 15 µg/L, and WHO recommends a limit of 10 µg/L. Some countries have adopted a limit for lead in wine (250 µg/L); milligram quantities may occur in a vintage bottle if the lead cap has been eroded. Lead caps are no longer used. Also, specific limits may apply to ceramic glazes. The lead release is
usually measured by means of a 5% acetic acid test, in which the release during boiling for 30 minutes is measured three times. Exposures are also limited by setting standards for the lead content of paints. Major efforts have been initiated to remove old, peeling lead paint as part of the restoration of houses with a lead hazard.

MANGANESE
Manganese has a wide range of applications, ferromanganese being the main product, with 90% of this production used in various metal alloys, including welding rods. Other applications include dry batteries (manganese dioxide) and pigments for the glass and ceramics industry. Methylcyclopentadienyl manganese tricarbonyl (MMT) is increasingly used as an octane booster in gasoline as other additives are phased out. Occupational exposures to manganese may occur in primary production and in the various user industries, especially when manganese-containing alloys are welded. Daily intakes through food usually average about 2–3 mg but may vary considerably, depending on the intake of cereals and rice, which are high in manganese. High levels in drinking water occur in some regions, although low limits are set for technical reasons. Increasing use of MMT in gasoline may cause atmospheric manganese levels above 1 µg/m3 in cities, and similar levels may be encountered near ferromanganese plants. The gastrointestinal absorption of manganese appears to be below 5% of the amount ingested, although it is higher at lower intakes and in case of iron deficiency; considerable excretion occurs through the bile, some of which is reabsorbed. Manganese is an essential element in metalloenzymes and as an enzyme activator, but deficiency states are unlikely to occur under normal circumstances. Characteristic manganese-related diseases appear to be relatively rare. Two different pictures may emerge: pulmonary and neurological pathologies. In acute respiratory exposure to manganese, a chemical pneumonitis may develop with cough, phlegm, fever, and changes on the chest x-ray. Also, manganese aerosols may cause metal fume fever (as described under Zinc). However, pulmonary effects are unlikely to occur at manganese exposures below 0.3 mg/m3. Manganism is a central nervous system disease with clinical manifestations somewhat similar to those of Parkinson’s disease.
This chronic intoxication has primarily been described in miners and in workers in ore processing plants and foundries. The onset is delayed and sometimes occurs after the exposure has ceased. The first symptoms are nonspecific, such as fatigue, headache, irritability, and memory difficulties. The more characteristic signs then develop insidiously: stiff movements, a hoarse and low voice, a stiffened facial expression, muscular hypertonia, and tremor. At least partial, and temporary, recovery may be obtained by treatment with L-dopa. Severe manganism appears to affect only a small proportion of exposed individuals, and individual vulnerability may, therefore, be of importance. Recent studies have suggested that the early, nonspecific symptoms occur at an increased frequency in welders and other workers with increased exposures to manganese, and perhaps also in subjects with increased environmental manganese exposures. In patients with compromised liver function, manganese may be less effectively excreted, and accumulation in the basal ganglia has been demonstrated, thus suggesting that manganese may contribute to the development of the encephalopathy seen in severe liver disease.18 Also in regard to this metal, the developing brain may be more vulnerable. Deficits on developmental tests of brain functions were associated with increased manganese concentrations measured in the children’s cord blood at birth.19 This study was carried out in a population without apparent excess exposures and presumably without serious iron deficiency. Although much remains to be learned about manganese toxicity, this metal should prudently be regarded as a developmental neurotoxicant. Biological monitoring for manganese is of some interest and needs further exploration. In the blood, some of the manganese has a half-life of about 1 month. Urine analyses are not useful, except perhaps in case of MMT exposure, but analysis of hair samples has occasionally been used for screening purposes.
Health Significance of Metal Exposures
611
The limit for occupational manganese exposures has been lowered by ACGIH to 0.2 mg/m3. The WHO drinking water limit of 0.4 mg/L has been determined on the basis of technical considerations. Because of the beneficial effects of trace amounts of manganese, a daily dietary intake of about 2.5–5 mg of this metal has been recommended.

MERCURY
Exposures Natural evaporation of mercury is the major source of atmospheric pollution. Cinnabar, that is, mercury sulfide, has been used since ancient times as a pigment and constitutes the most important mercury ore. Inorganic mercury in the aquatic environment tends to settle into the sediment, where certain microorganisms are able to methylate mercury, possibly as a means of detoxication. The methylmercury generated then accumulates in fish, particularly in species at the higher trophic levels, and particularly high methylmercury concentrations are reached by marine carnivores. Increased human exposures, therefore, occur in individuals who frequently eat fish, with the highest exposures seen in the Arctic, where meat from marine mammals is included in the diet. Mercury is used for a variety of instruments, including thermometers, manometers, polarographs, and electrical equipment. Mercury is also used in the production of fluorescent light tubes, as a catalyst in the chemical industry, including the production of chlorine, and in amalgams for dental practice. Mercury may evaporate at room temperature, and the rate depends on the surface area, the temperature, and the ventilation. Thus, increased amounts will evaporate if mercury is scattered on the floor as small droplets. The amount that evaporates at 40°C is four times the amount that evaporates at 20°C. At saturation, the air at 20°C contains 15 mg/m3, which is more than 100 times the occupational exposure limit. Mercury compounds are now less frequently used, but some organomercury compounds have important uses. They contain a covalent bond between mercury and carbon, and the organic part of the molecule is often an alkyl group or an alkoxyalkyl group. The former compounds are more toxic, because they are more easily absorbed and more slowly metabolized. Organomercury compounds have been used as fungicides on seed grains. Methylmercury was extensively used for this purpose in the past, until its environmental effects were discovered.
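The saturation figure of about 15 mg/m3 at 20°C can be checked with the ideal gas law; this sketch assumes a literature value of roughly 0.17 Pa for the vapor pressure of mercury at 20°C:

```python
# Ideal-gas estimate of the saturated mercury vapor concentration at 20 °C.
# The vapor pressure (~0.17 Pa at 20 °C) is an assumed literature value.
P_VAP = 0.17      # Pa
M_HG = 200.59     # g/mol
R = 8.314         # J/(mol*K)
T = 293.15        # K

# C = p*M/(R*T) gives g/m3; multiply by 1000 for mg/m3.
conc_mg_per_m3 = P_VAP * M_HG / (R * T) * 1000.0
print(round(conc_mg_per_m3))  # ~14 mg/m3, consistent with the ~15 mg/m3 quoted
```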
Thimerosal is used as a preservative, for example, in vaccines, but this application is now declining due to safety concerns. This compound is metabolized in the body to ethylmercury, which has toxic properties similar to those of methylmercury but is less stable. Mercury emissions originate from the various uses of mercury and also from fossil fuel combustion, as some types of coal contain relatively high mercury concentrations. In addition, incineration of municipal and hospital waste may be an important point source. The various uses of mercury and mercury compounds result in occupational exposures in a range of occupations. Also, the industrial use of mercury may lead to releases to the environment, in particular through sewage water. Localized problems relating to contamination of river systems and bays have been caused by such releases from chloralkaline plants, paper and pulp industries, and pesticide factories. In the most serious poisoning event, Minamata Bay in Japan became severely contaminated with methylmercury from a factory that used mercury as a catalyst in the production of vinyl chloride.20 Inhalation of metallic mercury results in an almost complete absorption of the vapors in the alveoli. Small amounts are released from dental amalgam fillings, especially from those in the molar teeth, which are subjected to the highest pressures during chewing. However, only negligible absorption of the metal takes place in the gastrointestinal tract, unless some is retained, for example, in diverticula or the appendix. Inorganic mercury compounds from aerosols may be absorbed through the lungs as well, and some absorption (about 5–10%) also takes place in the gastrointestinal tract. A higher absorption rate has been demonstrated in newborn rats, but data on humans are lacking. The organomercury compounds are also absorbed when taken in by this
612
Environmental Health
route, methylmercury almost completely. Occupational exposures are frequently of a mixed type, and absorption patterns may, therefore, vary. In the blood, inorganic mercury is almost evenly distributed between plasma and erythrocytes, while about 90% of organomercury compounds are bound to the cells. Mercury vapor and methylmercury are lipophilic and may pass biological membranes, including the blood-brain barrier and placenta, and result in considerable deposition in the central nervous system and the fetus, respectively. The vapor dissolved in the blood and tissues rapidly becomes oxidized. Mercuric ions become bound to some extent to metallothionein and accumulate in the kidneys. Excretion takes place mainly through feces and urine, but significant amounts may be eliminated in sweat. The presence of ethanol in the blood influences the equilibrium between dissolved mercury vapor and mercury ions. Thus, after ethanol ingestion, mercury vapor may be detected in the expired air in individuals with high levels of mercuric ions in the blood. When selenium is present in the blood, a complex is formed that results in a longer half-life but also decreased toxicity, as judged from animal experiments. Methylmercury is slowly metabolized in the liver and by gut bacteria and is then eliminated as inorganic mercury.
Effects Acute poisoning with mercury vapor may cause a severe airway irritation, chemical pneumonitis, and pulmonary edema in severe cases. Ingestion of inorganic compounds results in symptoms of gastrointestinal corrosion and irritation, such as vomiting, bloody diarrhea, and stomach pains. Subsequently, shock and acute kidney dysfunction with uremia may ensue. Cutaneous exposure to mercury compounds may result in local irritation, and mercury compounds are among the most common allergens in patients with contact dermatitis. Chronic intoxication may develop a few weeks after the onset of a mercury exposure, more commonly if the exposure has lasted for several months or years. The symptoms depend on the degree of exposure and the kind of mercury in question. The symptoms may involve the oral cavity, the nervous system, and the kidneys. Severe exposure to inorganic mercury causes an inflammation of gingiva and oral mucosa, which become tender and bleed easily. Salivation is increased, most obviously so in subacute cases. Often the patient complains of a metallic taste in the mouth. Especially when oral hygiene is bad, a gray border is formed on the gingival edges. Mercury may damage both the peripheral and the central nervous system. In exposures to mercury vapor, the central nervous system is the critical organ, and the classic triad of symptoms includes erethism, intention tremor, and the gingivitis described above. The fine intention tremor of fingers, eyelids, lips, and tongue may progress to spasms of arms and legs. A jerky micrographia is typical as well. The changes in the central nervous system result in psychological effects known as erethism: restlessness, irritability, insomnia, concentration difficulties, decreased memory, and depression, sometimes in combination with shyness, unusual psychological vulnerability, anxiety, and total neglect concerning economic problems and daily needs. 
Newer studies suggest that early stages of erethism may occur, and this syndrome has been dubbed “micromercurialism” by Russian authors. The main problem here appears to be decreased memory, and headache, dizziness, and irritability may also be part of the picture. Similar nonspecific symptoms are described by patients who attribute their ill health to mercury from their dental fillings. Although slight adverse effects are difficult to rule out in susceptible subjects, little evidence is available to support this notion.21 Nephrotoxic effects include proximal tubular damage, as indicated by an increased urinary excretion of small proteins, for example, β2-microglobulin. Glomerular damage seems to be caused by an autoimmune reaction to mercury complexes in the basal membrane, and mercury-related cases of nephrotic syndrome have been traced to this pathogenesis. In children, a different syndrome is seen, the so-called “pink disease” or acrodynia, diagnosed most frequently in children treated with teething powders that contained calomel and also occasionally seen in children who had inhaled mercury vapor (e.g., from broken
thermometers). A generalized eruption develops, and the hands and feet show a characteristic, scaly, reddish appearance. In addition, the children are irritable, sleep badly, fail to thrive, sweat profusely, and have photophobia. This condition was extremely common until the middle of the twentieth century, when the etiology was finally found and teething powders were phased out. Intoxications with alkoxyalkyl or aryl compounds are similar to intoxications with inorganic mercury compounds, because these organomercurials are relatively unstable. Alkylmercury compounds, such as methylmercury, result in a different syndrome. The earliest symptoms in adults are paresthesias in the fingers, the tongue, and the face, particularly around the mouth. Later on, disturbances occur in the motor functions, resulting in ataxia and dysarthria. The visual field is constricted, and in severe cases the result may be total blindness. Similarly, impaired hearing may progress to complete deafness. This syndrome has been caused by methylmercury-contaminated fish in Minamata, Japan, and by methylmercury-treated grain used for baking or animal feed in Iraq and elsewhere. Children are more susceptible to the toxic effects of methylmercury than are adults, and congenital methylmercury poisoning may result in a cerebral palsy syndrome, even though the mother appeared healthy or experienced only minor symptoms due to the exposure. In various populations with a high consumption of large marine fish or marine mammals, methylmercury intakes may approach the levels that resulted in such serious disease in Japan and Iraq. While no clear-cut cases of intoxication have been reported in these populations, delays in cognitive development have been reported in children with increased prenatal exposures to methylmercury from the mother’s seafood diet.22 Methylmercury may therefore share a developmental neurotoxicity potential with lead, thus causing decrements in IQ levels.
Recent evidence suggests that the vulnerability to such toxicity extends into the teenage years. Although the developing brain is considered the critical target organ in regard to methylmercury, recent evidence has suggested that mercury from fish and seafood may promote or predispose to the development of heart disease. Thus, studies in the United States and Europe have demonstrated a higher risk of cardiovascular death at increased exposures, that is, at hair-mercury concentrations above 2 µg/g. In this regard, methylmercury seems to counteract the beneficial effect of the essential fatty acids in fish. This evidence is as yet inconclusive but deserves attention, because it suggests that a narrow definition of subpopulations at risk, that is, pregnant women and small children, might leave out other vulnerable groups. For preventive purposes, therefore, the population at large should be considered at risk. Sufficient evidence exists that methylmercury chloride is carcinogenic to experimental animals, but in the absence of comprehensive epidemiological data, methylmercury is considered only a possible human carcinogen (class 2B).9
Prevention Biological monitoring is useful in the diagnosis of mercury exposure and in the control of occupational exposure levels. In the blood, inorganic mercury has a half-life of about 30 days, and methylmercury has a half-life about twice that long. Unfortunately, blood levels do not reflect mercury retained in the brain, where mercury deposited after vapor inhalation has a half-life of several years. Urine levels are usually preferred as an indicator of occupational exposures. Long-term mercury vapor exposures should respect a time-weighted average limit of 25 µg/m3 and a corresponding urinary mercury excretion limit of 50 µg/g creatinine (28 µmol/mol creatinine). Induction of slight tremor by mercury vapor has been reported at urinary excretion levels of 50 µg/L (0.25 µmol/L) and above. With regard to methylmercury, the earliest effects in adults, such as paresthesias, appear to occur when blood concentrations are above 200 µg/L (1 µmol/L). Methylmercury is incorporated in hair, and hair mercury analyses have proved useful for screening, although permanent-wave treatment of the hair may render the results unreliable. Methylmercury toxicity has been seen at hair levels above 50 µg/g (0.25 µmol/g). To protect against developmental neurotoxicity, WHO recommends a PTWI of 1.6 µg/kg
body weight, and the U.S. EPA similarly recommends a Reference Dose of 0.1 µg/kg body weight. Taking into account that the former limit is for 1 week and the latter for 1 day, the two limits are fairly similar, and a prudent approach would seem to be to minimize the exposure as much as possible while maintaining a diet that includes seafood in appropriate quantities. The Reference Dose corresponds to a hair-mercury concentration of about 1 µg/g.20 This level is frequently exceeded in fish-consuming populations, especially if the diet includes predatory fish. Preventive measures should include the limitation of mercury released to the environment from industrial operations. Important nonindustrial sources are discarded batteries (for cameras and watches), fluorescent light tubes and bulbs, and thermometers. Some countries have instituted a practice of collecting and recycling the mercury from such consumer products. Mercury exposures from dental amalgam fillings should be minimized, but alternative restorative materials should be used only if their safety and durability are known to be superior to amalgam. Thimerosal is being phased out as a pharmaceutical preservative but still occurs in certain vaccines. A concentration limit of 0.5 mg/kg has traditionally been used for fish and seafood products, but it would seem insufficient to ensure that exposures are kept below the Reference Dose while maintaining a diet that includes one or two seafood meals per week. In addition, fish species that may exceed this limit (e.g., swordfish and shark) are usually only required to comply with a limit of 1.0 mg/kg. Because fish contamination cannot easily be controlled, a better way of decreasing methylmercury exposures is to advise the population to eat low in the food chain, preferably smaller and younger fish that contain less mercury. The current occupational exposure limits are 0.1 mg/m3 as a ceiling value for inorganic mercury and 0.01 mg/m3 for organic (alkyl) mercury.
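The remark above that the weekly WHO limit and the daily EPA limit are fairly similar can be verified with simple arithmetic:

```python
# Comparing the WHO PTWI (per week) with the U.S. EPA Reference Dose (per day)
# for methylmercury, using the values quoted in the text.
PTWI_UG_PER_KG_WEEK = 1.6
RFD_UG_PER_KG_DAY = 0.1

ptwi_as_daily = PTWI_UG_PER_KG_WEEK / 7
print(round(ptwi_as_daily, 2))  # -> 0.23 µg/kg/day
print(round(ptwi_as_daily / RFD_UG_PER_KG_DAY, 1))  # -> 2.3, i.e., about twice the RfD
```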
NIOSH has recommended a time-weighted average limit for inorganic mercury at 0.05 mg/m3.

MOLYBDENUM
The largest deposit of molybdenite, the major molybdenum ore, is in Climax, Colorado. Most molybdenum is consumed in alloys, but various compounds are also employed as catalysts and pigments. Considerable experimental evidence is available on the essential functions of molybdenum, but little information has been gathered on its toxic potential. The human intake of this metal appears to be below 0.2 mg per day, unless significant contamination occurs. Absorption of molybdenum in food may be about 25–50% in humans, and excretion is mainly through urine; the biological half-life in the blood is probably only a few hours, although some molybdenum may be retained in the liver and other tissues for a longer time. Molybdenum serves as a constituent of three oxidases, including xanthine oxidase, but deficiency states have not been reported in humans. Molybdenum poisoning in livestock may produce “teart disease” with anemia, growth retardation, and bone abnormalities, especially if the copper intake is low. In humans, the frequent occurrence of gout-like symptoms in some Armenian villages has been linked to a high intake of molybdenum, possibly via abnormalities of uric acid metabolism. Pulmonary fibrosis has been reported in experimental animals, and a few cases of pneumoconiosis have been seen in workers exposed to sparingly soluble forms of molybdenum. The current ACGIH exposure limits are 0.5 mg/m3 for soluble compounds and 10 mg/m3 for insoluble molybdenum compounds. A dietary intake of 0.15–0.5 mg of this metal per day has been recommended as safe and adequate for adults.

NICKEL
Exposures Nickel is a ubiquitous trace metal, but ores of sufficient quality occur only in a few places, notably at Sudbury, Ontario. Nickel is particularly used for alloys but also for surface
treatment of metals, as a catalyst in the electronics industry, and in the production of nickel-cadmium batteries. Nickel exposures occur in the production trades and in the various user industries (e.g., when welding stainless steel). The nickel intake through food may average about 0.1–0.2 mg per day, but it varies considerably, because high contents may be encountered in legumes, cereals, nuts, and chocolate. Nickel may leach into food and beverages from nickel-plated or nickel-containing kitchen utensils. Gastrointestinal absorption of nickel from food is about 1%, but absorption from an aqueous solution taken on an empty stomach may be about 25%. Internal exposures may result from implantation of orthopedic prostheses and from intravenous infusion of nickel-contaminated solutions.
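The contrast between the two absorption routes can be illustrated with a rough absorbed-dose calculation; the 0.15 mg/day dietary intake used here is a mid-range illustrative assumption, not a figure from the text:

```python
# Rough absorbed-dose comparison for nickel, using the absorption figures above.
DIET_MG_PER_DAY = 0.15  # assumed mid-range dietary nickel intake

absorbed_from_food_ug = round(DIET_MG_PER_DAY * 0.01 * 1000, 1)   # ~1% absorbed from food
absorbed_from_water_ug = round(DIET_MG_PER_DAY * 0.25 * 1000, 1)  # ~25% from water, empty stomach

print(absorbed_from_food_ug)   # -> 1.5 µg/day
print(absorbed_from_water_ug)  # -> 37.5 µg/day
```

The same intake can thus yield roughly a 25-fold higher absorbed dose when taken in solution on an empty stomach, which is the rationale for the dietary advice given under Prevention.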
Effects Apart from airway irritation, nickel apparently has limited acute toxicity in humans; the important adverse effects relate to allergic eczema and respiratory cancers.23 Nickel carbonyl may cause acute pulmonary disease and systemic toxicity. Respiratory exposure to nickel compounds in nickel production plants results in an increased risk of nasal and respiratory cancer.13 An increased respiratory cancer risk has also been seen in welders, but the contribution of nickel in welding fumes is unclear. Most respiratory cancers in refinery workers have been primary carcinomas of the lung, but nasal cancers may be 100-fold as frequent as otherwise expected. The risk is not limited to sparingly soluble compounds, such as nickel subsulfide, but also relates to easily soluble nickel compounds that may occur as aerosol exposures. Nickel allergy is the most frequent cause of contact eczema in women. The development of allergy is frequently provoked by earrings, but metal buttons, bracelets, and watches are frequent causes as well. More rarely, the primary allergy develops due to an occupational exposure. However, hand eczema often results as a consequence of exposures at work if nickel allergy is already present, as indicated, for example, by earlobe dermatitis in the past. Some studies suggest that about 10–15% of women become allergic to nickel and that almost half of them at some point develop hand eczema, in some cases so severe that the patient has to give up working. A much smaller proportion of the male population appears to be allergic to nickel. Nickel allergy is probably increasing in prevalence worldwide, and it most frequently develops during the teenage years. The hand eczema in a nickel-allergic patient may develop or progress as a result of increased nickel intake through food and beverages.24 In addition, inhalation allergy has resulted in asthmatic symptoms in a few recorded cases.
Nickel carbonyl (Ni(CO)4) is a liquid that can evaporate at room temperature. Nickel carbonyl is produced in the Mond refining process of nickel. In addition, it may be formed or used in other branches of industry, such as electronics, oil refining, and plastics. After an acute exposure, dyspnea, headache, dizziness, vomiting, and substernal and hypogastric pain may occur, followed by a virtually symptom-free interval of 12–36 hours. Severe pulmonary symptoms then develop, and physical examination suggests pneumonia. The intoxication can lead to cerebral toxicity and death within 3–10 days. Pulmonary cancer has been reported in animal experiments, but the epidemiological evidence is uncertain on this point.
Prevention Exposure to soluble nickel compounds and nickel carbonyl, which is metabolized to form nickel ions and carbon monoxide, may be evaluated by analysis of nickel concentrations in plasma and urine. The biological half-life in the body and the release from particles retained in the lungs will depend on the solubility of the nickel compounds concerned. Nickel present in the blood seems to be cleared relatively rapidly by the kidneys, and animal experiments suggest a half-life of a few days. Limits for plasma levels must, therefore, depend on the nickel speciation in the exposure. Nickel levels in plasma are usually below 1 µg/L (17 nmol/L) in individuals without occupational exposures, at least when analysis of uncontaminated samples has been carried out by an experienced laboratory.
Limits for occupational exposure are 1 mg/m3 for nickel metal and soluble nickel compounds, and 0.001 ppm for nickel carbonyl. The ACGIH limits are 1 mg/m3 for nickel metal, nickel sulfide roasting, fumes, and dust; 0.1 mg/m3 for soluble compounds; and 0.05 ppm for nickel carbonyl. NIOSH has recommended that the permissible exposure limit for nickel be reduced to 0.015 mg/m3. Specific preventive measures apply with regard to nickel-induced contact dermatitis. Primary prevention would mean that nickel-containing or nickel-plated metals should not be used in products that come into contact with the skin. Unfortunately, current fashions and the usefulness of nickel in cheap alloys (including coinage metal) seem to strongly oppose such measures. Contact with such products should be limited, if not totally avoided, in patients who have already developed allergy toward nickel. Many dermatologists have had some success in advising their patients to refrain from eating oatmeal, legumes, nuts, and chocolate and from using nickel-plated kitchen utensils; beverages should not be ingested on an empty stomach. Some countries have enacted legislation concerning the acceptable degree of nickel release from metal objects that may come into contact with the skin. The degree of nickel leaching from white metal objects may be determined by Fisher’s test (dimethylglyoxime and ammonium hydroxide), which enables the allergic patient to identify and discard objects that could provoke an outbreak of dermatitis.

OSMIUM
Environmental exposures are of limited significance, and the information on kinetics in the human body is incomplete. Of main interest is osmium tetroxide (osmic acid), which is used for various laboratory purposes, mainly as a fixative for tissue sections. The highly volatile osmium tetroxide may also be formed by oxidation of the finely divided metal. Inhalation of osmium tetroxide causes immediate irritation of the mucous membranes with cough and shortness of breath. These symptoms may last for several hours after a short exposure. Osmium tetroxide also has corrosive effects on the eyes, as indicated by severe irritation and lacrimation. After these symptoms have ceased, the patient may see large halos around lights until the tissue damage has been completely repaired. Skin contact results in irritant dermatitis. Repeated respiratory exposures have allegedly caused headache, insomnia, chronic airway irritation, and gastrointestinal disturbance. The permissible limit for occupational exposures to osmium tetroxide is 0.002 mg/m3.

PLATINUM
Platinum is used in jewelry, in dentistry, and in chemical and electrical industries. Platinum compounds are employed in electroplating, in photography, and as a catalyst in the petroleum and pharmaceutical industries. Exposures to hexachloroplatinic acid and platinum tetrachloride are most frequent. When inhaled, the platinum compounds may cause upper airway irritation with violent sneezing, dyspnea, wheezing, and even cyanosis. Platinum rhinorrhea and platinum asthma are more typical clinical pictures; they fade away shortly after the worker has left work for the day. Skin contact with chlorinated platinum salts may result in a scaly erythema, sometimes urticaria, mostly confined to the hands and forearms.25 These allergic manifestations have been called platinosis. Long-term effects, such as lung fibrosis, are unlikely, but a worker with a past history of platinosis may not be able to work with platinum again without suffering a severe reaction to minute amounts of platinum salts in the atmosphere. Some platinum compounds, notably cis-diamminedichloroplatinum (cisplatin), inhibit cell growth in tumors and have therefore been used as cytostatic agents, especially for testicular cancer. Environmental exposures result from industrial emissions and from the platinum catalysts used in automobile catalytic converters. About 1 µg of the metal was lost per kilometer of driving with a pellet-type
catalyst, but much lower losses have been achieved with the newer monolith-type catalysts. The limit for occupational exposures is 0.002 mg/m3 for soluble platinum salts, and ACGIH has adopted a limit of 1 mg/m3 for platinum metal. Limited information exists concerning biological monitoring, but platinum allergy can be diagnosed by specific IgE antibodies.
SELENIUM
Selenium is often referred to as a metalloid, although it shares some chemical properties with sulfur. Selenium is usually a by-product obtained from primary copper production. This element has found considerable use in semiconductor technology and other electronic applications, in photocopy machines, as pigments in paints and glass, as an ingredient in certain alloys, in antidandruff shampoos, and several other applications. Perhaps the most intensive exposures occur in sulfide ore refineries, but harmful exposures may also result when selenium-containing rectifiers are overloaded or when scrap metal is melted. Environmental selenium exposures vary geographically, with average daily intakes through the diet varying from a low of 30–50 µg in Scandinavia, Egypt, and New Zealand to a high of about 300 µg in Venezuela. Increased levels may occur due to emissions from coal combustion and manufacturing industries, but geological factors are generally most important. Some plants concentrate selenium and may contain concentrations up to several thousand parts per million. Effects of selenium used to be a concern mainly with regard to domestic animals. Acute poisoning (blind staggers) and chronic toxicity (alkali disease) have been known in livestock for over 50 years. Later, selenium deficiency was discovered as the cause of white muscle disease in ruminants, hepatosis dietetica in swine, and exudative diathesis in chickens. Soluble selenium compounds are almost completely absorbed from the gastrointestinal tract. Absorption through the skin may occur as well. The selenium concentrations in blood and urine seem to reflect recent absorption. Part of the selenium in the blood is associated with a glutathione peroxidase, and the activity of this enzyme is associated with the selenium levels. Selenium compounds are metabolized in the liver, in part by reduction and methylation.
Dimethylselenide is an intermediary metabolite that is exhaled when the formation of this compound at high exposures exceeds the further formation of trimethylselenonium ions, which are excreted in the urine. The kinetics depends on the absorption level and perhaps on individual differences and on interfering substances, such as arsenic, cadmium, and mercury. Inhalation of selenium results in mucous membrane irritation, gastrointestinal symptoms, increased body temperature, headache, and malaise. Garlicky breath from dimethylselenide is frequently present. This symptom was already noted by the housekeeper of Berzelius, who discovered this element. In fact, most of the systemic toxicity may be due to the liberation of this metabolite from the liver. Selenium dioxide forms caustic selenous acid in contact with water and is, therefore, highly irritant and may produce burns and pulmonary edema. The nail beds become tender; deformed nails develop; and skin, teeth, and hair may be dyed red from precipitation of amorphous selenium. Hydrogen selenide is more toxic than hydrogen sulfide; immediate symptoms are related to the irritant properties. Seleniferous food has been related to vague symptoms, but lack of proper reference groups and other deficiencies hamper the interpretation of the data. Because selenium is an essential trace element, deficiency may occur. The most serious form was first described as Keshan disease, an endemic, juvenile cardiomyopathy in selenium-low areas of China. Low selenium intakes may also predispose to the development of cancer and arteriosclerosis. In addition, clinical improvement has been recorded in other groups of patients, including some on parenteral nutrition and some with lipidoses of the central nervous system. However, much needs to be discovered in these areas before conclusions concerning minimal daily intakes can be made, although a daily intake of 0.05–0.2 mg is currently recommended.
In regard to prevention of toxic effects, urine-selenium concentrations should be kept below 0.1 mg/L. Analysis of exhaled air for dimethylselenide could be considered but has not been widely applied. The occupational exposure limit is 0.2 mg/m3 for selenium and its inorganic compounds and 0.05 ppm for selenium hexafluoride, an airway irritant. In Finland, where the dietary intake of selenium was among the lowest in the world, selenium has been added to fertilizers to increase the selenium concentration of agricultural products.
SILVER
Major uses of silver in the past, such as jewelry, silverware, and photographic emulsions, still continue to be important, but a range of other applications has increased the demand due to developments in coatings and alloy technology. Silver solder is also in use, although the adverse effects related to the cadmium content have necessitated a change of ingredients. Argyria is a bluish discoloration of the skin due to deposition of silver metal particles. A localized form is due to penetration of particles through the stratum corneum, but generalized argyria is due to absorption of silver compounds into the body. Argyrosis of the respiratory tract has been diagnosed by bronchoscopy, but ocular argyrosis, especially as evidenced by conjunctival discoloration, may be more easily detected. These signs occur as a result of occupational exposures but may also be caused by oral or dermal pharmaceuticals containing silver; they appear to be relatively benign. The current exposure limit is 0.01 mg/m3 for silver metal and soluble silver compounds, but ACGIH has recommended a limit of 0.1 mg/m3 for silver metal.
THALLIUM
Thallium has important uses in various industrial processes, including the fabrication of phosphorescent pigments and glassware, and as a catalyst in organic synthesis. Environmental thallium pollution occurs near mines and refineries because zinc, cadmium, and copper ores usually contain thallium. Cement production and coal burning also cause thallium emissions. Historically, extensive application of thallium rodenticides has caused pollution problems and serious poisonings.26 Exposure to thallium from food is normally of no significance. Thallium compounds are without taste and odor; lethal doses may be less than 1 g. Gastrointestinal absorption is almost complete, and uptake through the skin has also led to several cases of intoxication. Within a few days, acute gastrointestinal effects are followed by peripheral neuropathy with muscle weakness and “burning feet syndrome.” The associated mental disturbances include irritability, concentration difficulties, and somnolence. Hair loss (alopecia) occurs about 1–3 weeks after the acute exposure. Thus, the characteristic triad of gastroenteritis, polyneuropathy, and hair loss is seen only at a rather late stage of the intoxication. Characteristic lunular stripes may develop on the nails. In survivors of severe poisoning, some nervous system damage may remain after recovery. Inhalation of thallium-containing dust at work over longer time periods may be associated with vague symptoms of joint pains, anorexia, fatigue, and trembling, as well as partial hair loss and polyneuropathy. A large-scale study of the population residing near a thallium-polluting cement factory in Germany found that elevated urine-thallium concentrations were associated with polyneuritic symptoms, sleep disorders, headache, fatigue, and other nonspecific symptoms.27 Excretion is mainly via the gastrointestinal tract and the kidney. Urine levels of thallium may remain high for several weeks, even after plasma concentrations have decreased.
A biological half-life of a couple of weeks seems to apply to humans. A slow excretion takes place through hair and nails, which may provide a profile of recent thallium levels in the body. The limit for occupational exposure to soluble thallium compounds is 0.1 mg/m3.
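The first-order kinetics implied by a half-life of "a couple of weeks" can be sketched numerically. A minimal sketch, assuming a one-compartment model; the 14-day half-life and the 10% cut-off are illustrative assumptions, not figures from the text:

```python
import math

def fraction_remaining(t_days, half_life_days=14.0):
    """Fraction of an absorbed thallium dose still in the body after
    t_days, assuming simple first-order (one-compartment) elimination
    with an illustrative 14-day half-life."""
    return 0.5 ** (t_days / half_life_days)

def days_to_fraction(fraction, half_life_days=14.0):
    """Days until the body burden falls to the given fraction."""
    return half_life_days * math.log2(1.0 / fraction)

# After four weeks (two half-lives), a quarter of the dose remains;
# ~90% elimination takes close to seven weeks.
print(fraction_remaining(28.0))          # 0.25
print(round(days_to_fraction(0.10), 1))  # 46.5
```

This slow clearance is consistent with the observation that urine levels may remain elevated for several weeks after an acute exposure.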
Health Significance of Metal Exposures
TIN
Tin has been used for many centuries in brass and pewter, and current uses include tin plating (which consumes about half of total tin production), tin foil, collapsible tubes, and pipes. Cans for food products are often plated with tin on the inner side. To prevent leaching into acidic contents (especially when a can is left open for a few days), tin-lined cans used for food are usually protected with lacquer. The dietary intake of tin is variable, although mostly about 1–4 mg/day. A large number of organotin compounds are in use (e.g., dioctyltin as a stabilizer in PVC; triorganotins as pesticides, in particular fungicides and antifouling agents; and various compounds as catalysts). Ingestion of 50 mg of tin results in vomiting, but gastrointestinal absorption is only a few percent. Organotin compounds are more readily absorbed, including through the skin. Inhalation of tin dust is usually not a matter of major concern. However, a benign pneumoconiosis, called stannosis, has been described, in which pulmonary function abnormalities are minor, if detectable at all. The organotin compounds, including dialkyl and trialkyl compounds, are strong skin irritants. The systemic toxicity in experimental animals has been studied in some detail; the neurotoxic potential is higher in trialkyltins than in dialkyltins, and it decreases with the length of the alkyl chains. Some compounds may be immunotoxic, and tributyltin marine antifouling paints may cause serious endocrine disruption in certain marine organisms. Human health effects of seafood contamination by tributyltin are unclear.28 More than 200 human cases of poisoning, half of them fatal, were described after the application of an ointment containing organotin compounds (mainly diethyltin) against staphylococcal infections. The symptoms included headache, vomiting, dizziness, visual disturbances, convulsions, and paresis. The limit for occupational exposure to tin is 2 mg/m3; for organotin compounds it is 0.1 mg/m3.
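The EFSA group tolerable daily intake (TDI) for the sum of organotin compounds, 0.25 µg/kg body weight per day, cited later in this section, translates into a per-person limit by simple arithmetic. A minimal sketch; the 60 kg body weight and 200 g portion size are illustrative assumptions:

```python
def daily_limit_ug(body_weight_kg, tdi_ug_per_kg=0.25):
    """Per-person daily intake limit implied by a body-weight-based
    TDI (default: the EFSA group TDI for organotin compounds)."""
    return body_weight_kg * tdi_ug_per_kg

def max_food_conc_ug_per_kg(body_weight_kg, portion_kg, tdi_ug_per_kg=0.25):
    """Food concentration at which one daily portion reaches the TDI."""
    return daily_limit_ug(body_weight_kg, tdi_ug_per_kg) / portion_kg

# For a 60 kg adult, the limit is 15 µg/day; a daily 200 g seafood
# portion would reach it at about 75 µg organotins per kg of seafood.
print(daily_limit_ug(60))                        # 15.0
print(round(max_food_conc_ug_per_kg(60, 0.2)))   # 75
```

Concentrations of that order occur only in heavily contaminated seafood, which is why the TDI is unlikely to be exceeded by ordinary diets.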
Biological monitoring seems to be of limited use, although urinary tin excretion may be worth studying more closely. Preventive measures should include eye protection and prevention of skin contact with organotin compounds. Although tributyltin will not be used in marine paints after 2008, the marine contamination will remain for many years. The European Food Safety Authority uses a tolerable daily intake limit for the sum of all organotin compounds of 0.25 µg/kg body weight.28 This limit is likely to be exceeded only in case of frequent intake of heavily contaminated seafood.
URANIUM
Uranium is a radioactive metal that may cause serious chemical toxicity. Most natural uranium is 238U, which has a half-life of almost 5 billion years. It is extracted from ores that may contain less than 1% of the metal. The main use is as fuel in nuclear power plants, but small amounts are used as pigments and catalysts. Uranium enrichment to secure fissile uranium results in depleted uranium as a by-product. Because of its high specific gravity (about 1.7 times that of lead), depleted uranium is used as a component of munitions in military conflicts, thereby leading to uranium aerosol exposures of munitions producers, military personnel, and civilians. Human exposure occurs from production and use of this metal, but uranium may also leach into drinking water from natural deposits and industrial sources, such as mill tailings. Gastrointestinal absorption varies with solubility, and perhaps 20% of uranium from food and water is absorbed. Tetravalent uranium is oxidized in the organism to hexavalent ions, which are filtered through the glomeruli. At low pH, uranyl ions (UO22+) will be reabsorbed in the tubules, where they may cause cell damage or necrosis. Less-soluble uranium compounds from respiratory exposures will tend to accumulate in the lungs. Such accumulation, especially if the uranium is enriched with 235U, would tend to cause health effects associated with alpha-radiation. However, the excess cancer risk in uranium miners seems to be mainly due to radon gas and radon progeny. Uranium in drinking water has been linked to
excess excretion of β2-microglobulin in the urine as an indication of early tubular dysfunction, resulting in kidney damage. WHO recommends a drinking-water limit of 9 µg/L, based on a high allowance of 50% of the total tolerated daily uranium intake. The standard for occupational exposure to uranium and insoluble compounds is 0.25 mg/m3, and for soluble compounds is 0.05 mg/m3, while ACGIH has recommended 0.2 mg/m3 for all uranium.
VANADIUM
Vanadium is frequently used in various alloys, often in the form of ferrovanadium, which accounts for the majority of vanadium consumption. Vanadium oxides are important catalysts in the inorganic and organic chemical industries, and other vanadium compounds are used in the electronics, ceramics, glass, and pigment industries. Vanadium is primarily used for steel production (e.g., for automobile parts, springs, and ball bearings). Occupational exposure to vanadium may also occur during primary production of other metals when the ores contain considerable amounts of vanadium; certain qualities of oil also contain much vanadium, and unexpected exposures may occur when servicing burners and filters. The daily intake through food is usually below 0.1 mg, and gastrointestinal absorption may be less than 1%. Vanadium is an essential element for chickens and rats, but its possible essentiality to humans has not been determined. Environmental exposures have not been reported to cause significant toxicity. Pentavalent vanadium compounds are more toxic than the tetravalent compounds. Vanadium pentoxide (V2O5) dust and fume cause conjunctivitis, rhinitis, and other irritation of the mucous membranes and, in severe cases, dyspnea and chemical pneumonitis. Some workers may become particularly sensitive to these actions, while others seem to show some adaptation. Vanadium-induced cough may be particularly bothersome, since it may last for several days. Chronic bronchitis has been recorded as an apparent long-lasting effect following long-term exposures. Animal studies have indicated that vanadium could induce systemic effects, such as fatty degeneration of liver and kidneys, polycythemia, and cardiotoxicity, at high doses. In humans, a lowering of serum cholesterol levels has been demonstrated, as well as a reduction of cystine incorporation in fingernails. After oral intake of vanadium, records indicate that the tongue may be covered by a green layer.
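Urinary vanadium concentrations may be reported in either mass or molar units, so a quick conversion is useful when comparing values. A minimal sketch; the atomic mass is a standard value, not a figure from the text:

```python
# Mass-to-molar conversion for urinary vanadium.
V_G_PER_MOL = 50.94  # atomic mass of vanadium, g/mol

def mg_per_l_to_umol_per_l(mg_per_l):
    """Convert a vanadium concentration from mg/L to µmol/L."""
    return mg_per_l / V_G_PER_MOL * 1000.0

# A urinary level of 0.5 mg/L corresponds to roughly 10 µmol/L.
print(round(mg_per_l_to_umol_per_l(0.5), 1))  # 9.8
```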
Vanadium is efficiently excreted via the urine; about one-half of the absorbed quantity is excreted within the first two days, but the existence of a slower compartment with a half-life of several weeks has been suggested. Analysis of urine samples for vanadium may be useful to indicate the acute exposure levels, and levels below 0.5 mg/L (about 10 µmol/L) are believed to reflect safe exposures. The ceiling limits for occupational exposures to vanadium pentoxide are 0.5 mg/m3 for dust and 0.1 mg/m3 for fume, while NIOSH has recommended a limit of 0.05 mg/m3 for both.
ZINC
Zinc is a common and essential metal with a low toxic potential. This metal is added to bronze, brass, and various other alloys to improve corrosion resistance, and it is used for galvanizing steel and other iron products. In the presence of carbon dioxide and humidity, a surface film of alkaline zinc carbonate is formed, which protects against corrosion. Various zinc compounds are used in the chemical, ceramic, pigment, plastic, rubber, and fertilizer industries; most frequently used are zinc oxide, carbonate, sulfate, chloride, and some organic compounds. The most significant occupational exposures occur during alloy founding, galvanizing, zinc smelting, and welding, especially of galvanized metals. The daily intake of zinc varies considerably but typically ranges from 10 to 15 mg, with seafood and meat being high in zinc. Also, soft drinking water may contain high concentrations of zinc leached from the water pipes. The average oral intake from this source is several
milligrams. The gastrointestinal absorption is difficult to evaluate, because the major excretion route is via the gut. The relative zinc absorption varies with the speciation and the presence of phytate, calcium, phosphate, and vitamin D. Under normal circumstances, the absorption is probably about 25–50%, while under zinc deficiency the absorption can increase substantially. Zinc is an essential metal, and more than 20 zinc-dependent enzymes have been identified. Zinc deficiency in children has resulted in endocrine disturbances with retarded growth and delayed puberty. This condition may be completely cured when zinc therapy is instituted. Acrodermatitis enteropathica, a rare familial skin disease, has been found to be related to deficient zinc absorption. In addition, recent research has suggested that zinc supplements may be beneficial in certain dermatological conditions and in accelerating wound healing in surgical patients. Zinc also seems to offer some protection against cadmium toxicity; however, in the occupational setting, cadmium is a frequent impurity in zinc and may result in serious adverse effects. Oral zinc poisoning has occurred in a few instances due to zinc release from galvanized food containers. Symptoms included nausea, vomiting, stomach pains, and diarrhea. Inhalation of high concentrations of zinc oxide may cause metal fume fever, a condition that may also be caused by freshly formed oxides of several other metals, including copper, magnesium, manganese, and nickel. This condition is also referred to by other names, such as metal shakes or zinc chills. The metal oxide particles tend to aggregate after their formation and then pass less easily into the lungs; therefore, only freshly formed particles cause the disease. A few hours after the exposure, the first symptoms may be a slight feeling of malaise, dry cough and sore throat, and a sweetish, metallic taste in the mouth.
About 6–8 hours later, the patient develops an influenza-like syndrome with chills, muscle pains, headache, and medium-grade fever, followed by sweating and recovery. Blood tests show leukocytosis, an increased sedimentation rate, and elevated lactate dehydrogenase. Depending on the extent of the exposure, the attack usually lasts less than 24 hours, and the patient usually returns to work the next morning. Many workers have experienced repeated, almost weekly spells of metal fume fever, and chronic damage could conceivably occur. However, this question is difficult to address, and no evidence currently suggests that repeated attacks of metal fume fever leave sequelae. In fact, the patient develops a temporary resistance after each spell, and metal fume fever is therefore most often seen on Mondays, which accounts for the name Monday morning fever. Zinc chloride has been in extensive use as a flux in soldering. In contact with water, hydrochloric acid is liberated, and the result is painful burns. Zinc chloride is also used in smoke bombs, and inhalation of the fume has caused corrosive effects in the airways with pulmonary edema and, in the survivors, bronchopneumonia. Biological monitoring is of dubious value, because the serum zinc concentration appears to be well regulated, and the urine represents only a minor part of the total amount excreted. Metal fume fever seems to be caused by zinc fume levels of about 15 mg/m3, but only scattered information is available on effects of exposures below that level. The exposure limit for zinc oxide is 2 mg/m3, and it is 1 mg/m3 for zinc chloride. With regard to the beneficial effects, recommended values for daily requirements are 15 mg for adults, 20 mg for pregnant women, and 25 mg for lactating women.
REFERENCES
1. Nieboer E, Richardson DHS. The replacement of the nondescript term “heavy metals” by a biologically and chemically significant classification of metal ions. Environ Pollut (Ser B). 1980;1:3–26.
2. Elinder C-G, Friberg L, Kjellström T, Nordberg G, Oberdoerster G. Biological Monitoring of Metals. Geneva: World Health Organization; 1994.
3. Cornelis R, Heinzow B, Herber RF, et al. Sample collection guidelines for trace elements in blood and urine. IUPAC Commission of Toxicology. J Trace Elem Med Biol. 1996;10:103–27.
4. Nordberg GF, Fowler BA, Nordberg M, Friberg L, eds. Handbook on the Toxicology of Metals. 3rd ed. Amsterdam: Elsevier (to be published, 2007).
5. International Programme on Chemical Safety. Aluminium. Environmental Health Criteria 194. Geneva: World Health Organization; 1997.
6. International Programme on Chemical Safety. Arsenic and Arsenic Compounds. Environmental Health Criteria 224. 2nd ed. Geneva: World Health Organization; 2001.
7. National Research Council. Arsenic in Drinking Water. Washington, DC: National Academy Press; 1999.
8. International Programme on Chemical Safety. Beryllium. Environmental Health Criteria 106. Geneva: World Health Organization; 1990.
9. International Agency for Research on Cancer. Beryllium, Cadmium, Mercury, and Exposures in the Glass Manufacturing Industry. Monographs on the Evaluation of Carcinogenic Risks to Humans. Vol. 58. Lyon: International Agency for Research on Cancer; 1993.
10. International Programme on Chemical Safety. Cadmium. Environmental Health Criteria 134. Geneva: World Health Organization; 1992.
11. Jarup L, Alfven T. Low level cadmium exposure, renal and bone effects—the OSCAR study. Biometals. 2004;17:505–9.
12. Costa M. Toxicity and carcinogenicity of Cr(VI) in animal models and humans. Crit Rev Toxicol. 1997;27:431–42.
13. International Agency for Research on Cancer. Chromium, Nickel and Welding. Monographs on the Evaluation of Carcinogenic Risks to Humans. Vol. 49. Lyon: International Agency for Research on Cancer; 1990.
14. Lauwerys R, Lison D. Health risks associated with cobalt exposure—an overview. Sci Total Environ. 1994;150:1–6.
15. Alexander CS. Cobalt-beer cardiomyopathy. Am J Med. 1972;53:395–417.
16. International Programme on Chemical Safety. Inorganic Lead. Environmental Health Criteria 165. Geneva: World Health Organization; 1995.
17. Lanphear BP, Hornung R, Khoury J, et al. Low-level environmental lead exposure and children’s intellectual function: an international pooled analysis. Environ Health Perspect. 2005;113:894–9.
18. Krieger D, Krieger S, Jansen O, Gass P, Theilmann L, Lichtnecker H. Manganese and chronic hepatic encephalopathy. Lancet. 1995;346:270–4.
19. Takser L, Mergler D, Hellier G, Sahuquillo J, Huel G. Manganese, monoamine metabolite levels at birth, and child psychomotor development. Neurotoxicology. 2003;24:667–74.
20. National Research Council. Toxicological Effects of Methylmercury. Washington, DC: National Academy Press; 2000.
21. International Programme on Chemical Safety. Inorganic Mercury. Environmental Health Criteria 118. Geneva: World Health Organization; 1991.
22. Grandjean P, Cordier S, Kjellström T, Weihe P, Budtz-Jørgensen E. Health effects and risk assessments. In: Pirrone N, Mahaffey KR, eds. Dynamics of Mercury Pollution on Regional and Global Scales: Atmospheric Processes and Human Exposures around the World. Norwell, MA: Springer; 2005:499–523.
23. International Programme on Chemical Safety. Nickel. Environmental Health Criteria 108. Geneva: World Health Organization; 1991.
24. Nielsen GD, Jepsen LV, Jørgensen PJ, Grandjean P, Brandrup F. Nickel-sensitive patients with vesicular hand eczema: oral challenge with a diet naturally high in nickel. Br J Dermatol. 1990;122:299–308.
25. International Programme on Chemical Safety. Platinum. Environmental Health Criteria 125. Geneva: World Health Organization; 1991.
26. International Programme on Chemical Safety. Thallium. Environmental Health Criteria 182. Geneva: World Health Organization; 1996.
27. Brockhaus A, Dolgner R, Ewers U, Kramer U, Soddemann H, Wiegand H. Intake and health effects of thallium among a population living in the vicinity of a cement plant emitting thallium containing dust. Int Arch Occup Environ Health. 1981;48:375–89.
28. European Food Safety Authority. Opinion of the Scientific Panel on Contaminants in the Food Chain on a request from the Commission to assess the health risks to consumers associated with exposure to organotins in foodstuffs. The EFSA Journal. 2004;102:1–119.
27
Diseases Associated with Exposure to Chemical Substances: Organic Compounds
Stephen Levin • Ruth Lilis
ORGANIC SOLVENTS
Organic solvents comprise a large group of compounds (alcohols, ketones, ethers, esters, glycols, aldehydes, aliphatic and aromatic saturated and unsaturated hydrocarbons, halogenated hydrocarbons, carbon disulfide, etc.) with a variety of chemical structures. Their common characteristic, related to their widespread use in many industrial processes, is the ability to dissolve and readily disperse fats, oils, waxes, paints, pigments, varnishes, rubber, and many other materials.1,2 Solvent exposure affects many persons outside industrial and occupational settings. The use of solvents in household products and in arts, crafts, and hobbies has significantly increased the population that may be affected by repeated exposure. Moreover, the deliberate inhalation of solvents as a form of addiction (“sniffing”) occurs, especially in younger population groups. Some solvents are well known for their specific toxic effects on the liver, kidney, and bone marrow,3 and a few organic solvents have specific toxicity for the nervous system. Carbon disulfide may induce a severe toxic encephalopathy with acute psychosis;3 methyl alcohol may induce optic neuritis and atrophy; methyl chloride and methyl bromide may cause severe acute, even fatal, toxic encephalopathy. Exposures to n-hexane, methyl-n-butyl ketone (MBK),4,5,6 and carbon disulfide have produced peripheral neuropathy. Most organic solvents share some common nonspecific toxic effects, the most important of which are those on the central nervous system (CNS). The depressant narcotic effects of organic solvents have long been recognized; numerous members of this heterogeneous group of chemical compounds have been used as inhalation anesthetics (chloroform, ethyl ether, trichloroethylene, etc.).
The sequence of stages of anesthesia achieved with volatile solvents is of interest: the cerebral cortex is affected first, while the lower centers of reflex activity in the brain stem and medulla oblongata, which control vital cardiovascular and respiratory functions, are the last to be depressed. This characteristic sequence makes it possible to use volatile anesthetic compounds for medical purposes. The earliest manifestations of the anesthetic effects of solvents are slight disturbances in psychomotor coordination. These may progress to more pronounced incoordination and, if exposure continues, through an excitation stage of variable duration, to loss of consciousness.
Occupational exposure to solvents may reproduce the entire sequence of medical anesthesia, up to loss of consciousness, and even death through paralysis of vital cardiovascular and respiratory centers. While such severe cases of occupational solvent poisoning are relatively uncommon under normal conditions, they may occur with unexpected accidental overexposure. The initial manifestations of CNS depression are frequent in workers handling solvents or mixtures of solvents in various industrial processes. A low boiling point, with generation of significant airborne concentrations of vapor, large surfaces from which evaporation may take place, lack of appropriate enclosure and/or exhaust ventilation systems, relatively high temperature of the work environment, and physical exercise required by the actual work performed (increasing the ventilatory volume per minute and thus the amount of solvent vapor absorbed) may all contribute to uptake of sufficient solvent to induce prenarcotic CNS symptoms. Early prenarcotic effects are dizziness, nausea, headache, slight incoordination, paresthesia, increased perspiration, tachycardia, and hot flushes. These symptoms are mostly subjective and transitory, and their causal relationship with solvent exposure has, therefore, often been overlooked. The transitory nature of prenarcotic symptoms is due to the common characteristics of the metabolic model for solvents: once exposure ceases after the end of the work shift, the body burden of solvents is usually rapidly depleted, mostly eliminated through exhalation. The prenarcotic symptoms subside as the concentration of solvent in blood and in the CNS decreases. With exposure to higher concentrations or with longer exposure, more marked incoordination and a subjective feeling of drunkenness may occur. The risk of accidents is increased, even with early prenarcotic symptoms and more so with more pronounced symptoms. 
While acute overexposure of higher magnitude with loss of consciousness is generally accepted as a serious condition (with possible persistent aftereffects, including neurological deficit), the long-term effect of repeated episodes of slight prenarcotic symptoms has remained unexplored until relatively recently, although it had been recognized that such symptoms are an expression of functional changes in some cortical neurons. It had been suspected for some time that repeated functional change may lead to permanent impairment of neuronal functions, and various possible mechanisms had been considered, including interference with cell membrane or neurotransmitter functions or even
neuronal loss. Since no regeneration of neurons occurs, neuronal loss can result in permanent, irreversible neurological damage. The diffuse nature of such effects and the lack of major, well-localized neurological deficits have contributed to the relatively slow recognition of chronic, irreversible, solvent-induced neurological impairment. Repeated exposure to organic solvents may result in the gradual development of persistent symptoms, such as headache, tiredness, fatigue, irritability, memory impairment, diminished intellectual capacity, difficulty in concentration, emotional instability, depression, sleep disturbances, alcohol intolerance, and loss of libido and/or potency. These symptoms, often reported by workers with repeated solvent exposure and mentioned in many studies on chronic effects of solvents, had received little attention until relatively recently, probably because of their nonspecific nature. Nevertheless, the term toxic encephalosis was proposed as early as 1947.7 More recently, the term psycho-organic syndrome has been used for this cluster of symptoms related to long-term solvent exposure. Effects on the CNS, including the diencephalic centers of the autonomic system with their interrelationships with endocrine functions, are probably important components in the development of the syndrome. The chronic neurotoxicity of solvents related to long-term exposure has received increasing attention. Research has been particularly active in the Scandinavian countries. Epidemiological studies of exposed workers and control groups have significantly contributed to recognition of the association between the psycho-organic syndrome and exposure to solvents; neurobehavioral and electrophysiologic methods, including electroencephalographic (EEG), visual evoked potential (VEP), and nystagmographic investigations, have added objective, quantitative measures for the assessment of CNS functions.
In case-control studies,8 neuropsychiatric disease has been found to occur more frequently among solvent-exposed workers than in age-matched controls. In a large study in Denmark, in which solvent-exposed painters were compared with nonexposed bricklayers,9 the painters had a relative risk of 3.5 for disability due to cryptogenic presenile dementia. With modern methods of investigation and brain imaging, including computed tomographic (CT) scan, MRI, and cerebral blood flow studies, diffuse cerebral cortical atrophy has been demonstrated in cases of chronic solvent poisoning.10,11,12 Thus, recent studies converge to indicate that long-term exposure to solvents may lead to chronic, irreversible brain damage. The clinical expression is that of intellectual impairment and decrements in performance, which can be detected by means of neurobehavioral testing; EEG abnormalities are frequent and characterized mostly by a diffuse low-wave pattern. The underlying pathological changes are represented by cortical atrophy; these changes can be of varying severity, with extreme cases of severe diffuse cerebral and cerebellar cortex atrophy in chronic poisoning due to solvent sniffing addiction.13 The axons and myelin sheaths may also be affected by organic solvents. This is well known for peripheral nerves, and peripheral neuropathy has been well documented with exposure to such solvents as carbon disulfide, MBK, and n-hexane. Specific CNS effects are also known to occur with carbon disulfide. Other solvents capable of producing peripheral neuropathy such as n-hexane and MBK have an effect on both long and short axons, and axonal degeneration of fibers in the anterior and lateral columns of the spinal cord, cerebellar vermis, spinocerebellar tracts, optic tracts, and tracts in the hypothalamus can also occur.
In the past decade, advances in genomics and proteomics have enabled extensive investigation into the mechanisms through which organic compounds exert their toxic and genotoxic effects, pointing to potentially useful biomarkers of exposure and toxicity, as well as possible interventions to protect against the adverse effects of exposure.

ALIPHATIC HYDROCARBONS
Aliphatic hydrocarbons are mostly derived from petroleum by distillation or cracking; their chemical structure is relatively simple, since they are linear carbon chains of various lengths with a certain number
of hydrogen atoms attached. They are either saturated (alkanes or paraffins) or unsaturated (alkenes or olefins, with one or several double bonds; and alkynes or acetylenes, with one or more triple bonds). The aliphatic hydrocarbons occur in mixtures that have numerous industrial uses: natural gas; heating fuel; jet fuel; gasoline; solvents for a variety of materials such as pigments, dyes, inks, pesticides, herbicides, resins, and plastic materials; in degreasing and cleaning; in the extraction of natural oils from seeds; and increasingly as raw material for the synthesis of numerous compounds in the chemical industry. Compounds with a low number of carbon atoms are gases (methane, ethane, propane, butane). Compounds with a higher number of carbon atoms (up to eight) are highly volatile liquids at room temperature, whereas those with longer carbon chains have higher boiling temperatures and usually do not generate dangerous air concentrations. Compounds with more than 16 carbon atoms are solids. The only adverse effect attributed to the lower members of the group is the indirect one they might exert when present in high concentrations, displacing oxygen. Toxic effects of paraffins (alkanes) are significant for the highly volatile liquid compounds from pentane through octane. These compounds are potent depressants of the CNS, and overexposure may result in deep anesthesia with loss of consciousness, convulsions, and death. Such high levels of exposure are infrequent under usual circumstances, but they may occur accidentally. Moderate irritation of mucous membranes of the airways and conjunctivae is a common but less severe effect; defatting of the skin might contribute to dermatitis, with repeated contact. Aspiration of liquid mixtures of aliphatic hydrocarbons into the airways or accidental ingestion of such liquids usually results in chemical pneumonitis, often severe and necrotizing.
N-hexane exposure may result in toxic peripheral neuropathy, affecting both the sensory and motor components of peripheral nerves, initially in the lower extremities, but eventually, with longer exposure, also in the upper extremities. Paresthesia, numbness, and tingling progressing from distal to proximal, distal hypoesthesia (touch, pain), followed by muscle weakness due to motor deficit, with difficulty in walking and eventual muscular atrophy, and diminished or absent deep tendon reflexes, are the characteristic clinical findings. Electromyographic abnormalities indicating peripheral nerve lesions, including abnormal fibrillation patterns and significant decreases in nerve conduction velocities (sensory and motor) are usually detected. Axonal degeneration and secondary demyelination have been found to be the underlying pathological abnormalities. Abnormalities in visual-, auditory-, and somatosensory-evoked potentials have been reported after experimental n-hexane exposure; longer latencies and central conduction times were interpreted as reflecting neurotoxic effects at the level of the cerebrum, brain stem, and spinal cord.14 N-hexane peripheral neuropathy, first described by Japanese investigators15 in 1969, has since been repeatedly reported from various European countries and the United States. It has also been reproduced in animal experiments at concentrations as low as 250 ppm. Outbreaks of toxic peripheral neuropathy due to n-hexane have continued to be reported. Such cases have occurred in press-proofing workers in Taiwan, associated with exposure to a solvent mixture with a high (60%) n-hexane content. The outbreak of peripheral neuropathy cases had been preceded by a gradual change (to a high n-hexane content) in the solvent mixture used to clean rollers of press-proofing machines.16 In an offset printing plant with 56 workers, 20 (36%) developed symptomatic peripheral neuropathy due to exposure to n-hexane.
Optic neuropathy and CNS involvement were uncommon and autonomic neuropathy was not encountered.17 Cases of n-hexane subacute, predominantly motor, peripheral neuropathy have also been reported in young adults and in children after several months of glue sniffing. Although functional improvement after discontinuation of toxic exposure has been reported, in some cases full recovery has not been observed, even after long-term (16 years) follow-up.18,19,20 Experiments involving exposure of rats to high concentrations of n-hexane have revealed adverse effects on the seminiferous epithelium; repeated exposures resulted in severe, irreversible testicular lesions.21 Cellular changes were observed in the
myocardium as a result of administering n-hexane to rats. These cellular changes were considered to be responsible for the decreased threshold for ventricular fibrillation.22 A significant suppression was observed in the serum immunoglobulin (IgG, IgM, and IgA) levels in n-hexane-exposed workers.23 The main n-hexane metabolites are 2-hexanol and 2,5-hexanedione, a good biomarker of occupational exposure to n-hexane.24 N-hexane is metabolized to the gamma-diketone 2,5-hexanedione (2,5-HD), a derivative that covalently binds to lysine residues in neurofilament proteins (NF) to yield 2,5-dimethylpyrrole adducts. Pyrrolylation is an absolute requirement in neuropathogenesis.25 Effects of chronic exposure to n-hexane on some nerve-specific marker proteins in rats’ central and peripheral nervous systems were studied after high exposure (2000 ppm n-hexane for 24 weeks). The level of neuron-specific enolase (NSE), creatine kinase-B (CK-B), and beta-S100 protein decreased significantly in the distal sciatic nerve, while the markers remained unchanged in the CNS.26 N-hexane accumulates in adipose tissue where it persists longer (estimated half-life, 64 hours); complete elimination from fat tissue after cessation of exposure has been estimated to require at least 10 days.27 In humans chronically exposed to a mixture of hexane isomers with concentrations ranging from 10 ppm to 140 ppm, the urinary 2,5-hexanedione excretion ranged from 0.4 mg/L to 21.7 mg/L.28 Urinary concentrations of 2,5-hexanedione in subjects not exposed to n-hexane or related hydrocarbons were found to range from 0.12 mg/L to 0.78 mg/L.29 The urinary 2,5-hexanedione excretion reaches its highest level 4–7 hours after the end of exposure.30 Biological monitoring to assess worker exposure to toxic chemicals has gained increasing recognition, especially for occupations characterized by highly variable exposure levels.
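The adipose-tissue kinetics quoted above (estimated half-life of 64 hours) imply simple first-order washout once exposure stops. A minimal sketch, assuming first-order elimination (the function name and the print format are illustrative, not from the source):

```python
def fraction_remaining(hours_elapsed, half_life_hours=64.0):
    """First-order elimination: fraction of the n-hexane depot still
    stored in adipose tissue after a given time without further
    exposure, given the cited 64-hour half-life (an assumption of
    this sketch, taken from the estimate quoted in the text)."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# After 10 days (240 h), roughly 7% of the depot remains, which is
# consistent with the text's estimate that complete elimination
# requires at least 10 days.
print(f"{fraction_remaining(240):.3f}")
```

Ten days is only about 3.75 half-lives, which is why "at least 10 days" is a lower bound rather than a point of complete clearance.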
The American Conference of Governmental Industrial Hygienists has recommended biological exposure indices (BEIs—levels of a biological indicator after an 8-hour exposure to the current threshold limit value [TLV]) for a limited number of widely used chemicals; n-hexane is one of these. 2,5-Hexanedione was found to be significantly correlated with a score of electroneuromyographic abnormalities. There is general agreement that, for practical purposes, the urinary concentrations of 2,5-hexanedione can predict the likelihood of subclinical peripheral neuropathy in persons exposed to n-hexane.31,32 4,5-Dihydroxy-2-hexanone as a metabolite of n-hexane has recently been identified in rats and in humans; it is excreted in amounts that at times exceed those of 2,5-hexanedione. It has been suggested that this metabolite indicates a route of detoxification.33 While there is no definitive evidence that other aliphatic hydrocarbons, such as pentane, heptane, or octane, have similar effects, some case reports suggest an association. In inhalation exposure of rats to n-hexane vapors at 900, 3000, and 9000 ppm, reproductive parameters were unaffected over two generations.34 The potential of commercial hexane to produce chromosome aberrations was evaluated in vitro and in vivo. No increase in chromosome aberrations was observed in either test system.35 The current OSHA permissible exposure limit (PEL) for n-hexane remains at the 500 ppm level adopted in 1971, despite evidence that this level is not protective for workers. Commercial hexane that had been used in industrial processes where workers had peripheral neuropathy was found to contain 2-methyl pentane, 3-methyl pentane, and methyl cyclopentane, in addition to n-hexane. The neurotoxicity of these compounds has been tested in rats, and significant effects on peripheral nerves of a similar type but of lesser magnitude than those of n-hexane were detected.
The order of neurotoxicity was found to be n-hexane > methyl cyclopentane > 2-methyl pentane = 3-methyl pentane.36 Other solvent mixtures, such as one containing 80% pentane, 5% hexane, and 14% heptane, have produced cases of peripheral neuropathy in humans. White spirit mixtures containing more than 10% n-nonane have been shown by neurophysiological and morphological criteria to produce axonopathy in rats after 6 weeks of daily exposure. Since the various members of the group are most often used in mixtures, a time-weighted average (TWA) of 100 ppm (350 mg/m3) has been proposed.3 Peripheral neuropathy similar to that associated with hexane has been found to result from exposure to MBK. DiVincenzo et al.37
Diseases Associated with Exposure to Chemical Substances
identified the metabolites of n-hexane and of MBK; the similarity of chemical structure between the metabolites of these two neurotoxic agents suggested the possibility of a common mechanism in the very similar peripheral neuropathy:

n-hexane → 2-hexanol → 2,5-hexanediol
methyl n-butyl ketone → 5-hydroxy-2-hexanone → 2,5-hexanedione
It is now well established that 2,5-hexanedione is the most toxic metabolite. The biochemical mechanism of 2,5-hexanedione neurotoxicity is related to its covalent binding to lysine residues in neurofilament protein and cyclization to pyrroles. Pyrrole oxidation and subsequent protein cross-linking then lead to the accumulation of neurofilaments in axonal swellings, the histopathologic hallmark of gamma-diketone peripheral neuropathy. Massive accumulation of neurofilaments has been shown to occur within the axoplasm of peripheral and some central nervous system fibers.38 MBK has not been shown to cause reproductive toxicity39 and was not mutagenic in the Ames test or in a mitotic gene-conversion assay in bacteria. Mammalian mutagenicity test results were also negative.40 Ethyl-n-butyl ketone (EBK, 3-heptanone) administered in relatively high doses for 14 weeks by gavage produced a typical central peripheral distal axonopathy in rats, with giant axonal swelling and hyperplasia of neurofilaments. Methyl-ethyl ketone (MEK) potentiated the neurotoxicity of EBK and increased the urinary excretion of two neurotoxic gamma-diketones, 2,5-heptanedione and 2,5-hexanedione. The neurotoxicity of EBK seems to be due to its metabolites, 2,5-heptanedione and 2,5-hexanedione. Methyl-ethyl ketone is a widely used industrial solvent to which there is considerable human exposure. The potential to cause developmental toxicity was tested in mice. Mild developmental toxicity was observed after exposure to 3000 ppm, which resulted in reduction of fetal body weight. There was no significant increase in the incidence of any single malformation, but several malformations not observed in the concurrent control group were found at a low incidence: cleft palate, fused ribs, missing vertebrae, and syndactyly.41 MEK potentiates EBK neurotoxicity by inducing the metabolism of EBK to its neurotoxic metabolites.
Commercial-grade methyl-heptyl ketone (MHK, 5-methyl-2-octanone) also produced toxic neuropathy in rats, clinically and morphologically identical to that resulting from n-hexane, methyl-n-butyl ketone (MBK), and 2,5-hexanedione. The MHK mixture was found by gas chromatography-mass spectrometry to contain 5-nonanone (12%), MBK (0.8%) and C7–C10 ketones and alkanes (15%), besides 5-methyl-2-octanone. Purified 5-nonanone produced clinical neuropathy, whereas purified 5-methyl-2-octanone was not neurotoxic; given together with 5-nonanone, it potentiated the neurotoxic effect. In vivo conversion of 5-nonanone to 2,5-nonanedione was demonstrated.42 The toxicity of 5-nonanone was shown to be enhanced by simultaneous exposure to MEK. This effect is attributed to the microsomal enzyme-inducing properties of MEK. The neurotoxicity of methyl-n-butyl ketone has been shown to be enhanced by other aliphatic monoketones, such as MEK, methyl-n-propyl ketone, methyl-n-amyl ketone, and methyl-n-hexyl ketone; the longer the carbon chain of the aliphatic monoketone, the stronger the potentiating effect on methyl-n-butyl ketone neurotoxicity.43 Neuropathological studies have shown that the susceptibility of nerve fibers to linear aliphatic hydrocarbons and ketones is proportional to fiber length and the diameter of the axon. Fibers in the peripheral and central nervous systems undergo axonal degeneration, with shorter and smaller fibers generally being affected later. The long ascending and descending tracts of the spinal cord, the spinocerebellar, and the optic tracts can be affected. Giant axonal swelling, axonal transport malfunction, and secondary demyelination are characteristic features of this central peripheral distal axonopathy.
The unsaturated olefins (with one or more double bonds), such as ethylene, propylene, and butylene, and the diolefins, such as 1,3-butadiene and 2-methyl-1,3-butadiene, mainly obtained through cracking of crude oil, are of importance as raw materials for the manufacture of polymers, resins, plastic materials, and synthetic rubber. Their narcotic effect is more potent than that of the corresponding saturated linear hydrocarbons, and they have moderate irritant effects. 1,3-Butadiene, a colorless, flammable gas, is a by-product of the manufacture of ethylene; it can also be produced by dehydrogenation of n-butane and n-butene. Major uses of 1,3-butadiene are in the manufacture of styrene-butadiene rubber, polybutadiene rubber and neoprene rubber, acrylonitrile-butadiene-styrene resins, methyl methacrylate-butadiene-styrene resins, and other copolymers and resins. It is also used in the production of rocket fuel. In studies of chronic 1,3-butadiene inhalation, malignant tumors developed at multiple sites in rats and mice, including mammary carcinomas and uterine sarcomas in rats and hemangiosarcomas, malignant lymphomas, and carcinomas of the lung in mice.44 An excess of brain tumors following 1,3-butadiene exposure has been found in B6C3F1 mice.45 Other important effects were atrophy of the ovaries and testes. Ovarian lesions produced in mice exposed by inhalation to 1,3-butadiene included loss of follicles, atrophy, and tumors (predominantly benign, but also malignant granulosa cell tumors).46 A macrocytic megaloblastic anemia, indicating bone marrow toxicity, was also found in inhalation experiments on mice.47 Hepatotoxicity has been reported in rats exposed to 1,3-butadiene and its metabolite, 3-butene-1,2-diol, through a depletion of hepatic and mitochondrial glutathione.48 Evaluation of the human carcinogenicity of 1,3-butadiene hinges on evidence regarding leukemia risks from one large and well-conducted study and two smaller studies.
The smaller studies neither support nor contradict the evidence from the larger study. The larger, United States-Canada study shows that workers in the styrene-butadiene rubber industry experienced an excess of leukemia and that those with apparently high 1,3-butadiene exposure had higher risk than those with lower exposure.49 The standardized mortality ratio for non-Hodgkin’s lymphoma was found to be increased in a large cohort of employees at a butadiene-production facility. There were, nevertheless, no clear exposure group or latency period relationships.50 1,3-Butadiene is metabolized to 1,2-epoxy-3-butene. This metabolite has been shown to be carcinogenic in skin-painting experiments on mice. 1,3-Butadiene has been found to be mutagenic in in vitro tests on Salmonella and genotoxic to mouse bone marrow in vivo in the sister chromatid exchange (SCE) test. Glutathione-S-transferase theta-1 (GSTT1) and cytochrome P450 2E1 (CYP2E1) polymorphisms have been shown to influence diepoxybutane-induced SCE frequency in human lymphocytes.51 A second metabolite of 1,3-butadiene is 1,2,3,4-diepoxybutane, also shown to be genotoxic in various test systems in vitro.52 Binding of 14C-labeled 1,3-butadiene to liver DNA was demonstrated in mice and rats.53 1,3-Butadiene is metabolized to several epoxides that form DNA and protein adducts, most resulting from 3-butene-1,2-diol metabolism to 3,4-epoxy-1,2-butanediol.54 Butadiene diepoxide, an active metabolite, induces cell cycle perturbation and arrest even with short-term exposure that does not produce other pathologic cellular effects.55 The International Agency for Research on Cancer concluded in 1999 that 1,3-butadiene is a probable carcinogen in humans (Group 2A). The National Institute for Occupational Safety and Health has recommended that the present OSHA standard of 1000 ppm TWA for 1,3-butadiene be reexamined, since carcinogenic effects in rodents (mice) have been observed at exposure levels of 650 ppm.
To minimize the carcinogenic risk for humans, it was recommended that exposures be reduced to the lowest possible level. Isoprene (2-methyl-1,3-butadiene), a naturally occurring volatile compound and close chemical relative of 1,3-butadiene, has been studied in inhalation experiments on rats. A mutagenic metabolite, isoprene diepoxide, was tentatively identified in all tissues examined.56 The principal member of the series of aliphatic hydrocarbons with triple bonds—alkynes—is acetylene (HC≡CH), a gas at normal temperature. Acetylene is widely used for welding, brazing, metal
buffing, metallizing, and other similar processes in metallurgy. It is also a very important raw material for the chemical synthesis of plastic materials, synthetic rubber, vinyl chloride, vinyl acetate, vinyl ether, acrylonitrile, acrylates, trichloroethylene, acetone, acetaldehyde, and many others. While the narcotic effect of acetylene is relatively low and becomes manifest only at high concentrations (15%) not found under normal circumstances, the frequent presence of impurities in acetylene represents the major hazard. Phosphine is the most common impurity in acetylene, but arsine and hydrogen sulfide may also be present. The hazard is especially significant in acetylene-producing facilities or when acetylene is used in confined, poorly ventilated areas.

ALICYCLIC HYDROCARBONS
Alicyclic hydrocarbons are saturated (cycloalkanes, cycloparaffins, or naphthenes) or unsaturated cyclic hydrocarbons, with one or more double bonds (cycloalkenes or cycloolefins). The most important members of the group are cyclopropane, cyclopentane, methylcyclopentane, cyclohexane, methylcyclohexane, ethylcyclohexane, cyclohexene, cyclopentadiene, and cyclohexadiene. These compounds are present in crude oil and its distillation products. Cyclopropane is used as an anesthetic. Most of the members of the group are used as solvents and, in the chemical industry, in the manufacture of a variety of other organic compounds, including adipic, maleic, and other organic acids; methylcyclohexane is a good solvent for cellulose ethers. Their toxic effects are similar to those of their linear counterparts, the aliphatic hydrocarbons, but they have more marked narcotic effects; the irritant effect on skin and mucosae is similar.
COMMERCIAL MIXTURES OF PETROLEUM SOLVENTS
Mixtures of hydrocarbons obtained through distillation and cracking of crude oil are gasoline, petroleum ether, rubber solvent, petroleum naphtha, mineral spirits, Stoddard solvent, kerosene, and jet fuels. These are all widely used commercial products. The composition of these mixtures is variable: all contain aliphatic saturated and nonsaturated hydrocarbons, alicyclic saturated and nonsaturated hydrocarbons, and smaller amounts of aromatic hydrocarbons such as benzene, toluene, xylene, and polycyclic hydrocarbons; the proportion of these components varies. The boiling range varies from 30–60°C for petroleum ether to 175–325°C for kerosene; the hazard of overexposure is higher with the more volatile mixtures with lower boiling temperatures. The toxic effects of these commercial mixtures of hydrocarbons are similar to those of the individual hydrocarbons: the higher the proportion of volatile hydrocarbons in the mixture, the greater the hazard of acute CNS depression, with possible loss of consciousness, coma, and death resulting from acute overexposure. Exposure to high concentrations, when not lethal, is usually followed by complete recovery. Nevertheless, irreversible brain damage may occur, especially after prolonged coma. The underlying pathologic change is represented by focal microhemorrhages. The irritant effects on the respiratory and conjunctival mucosae are generally moderate. Exposure to lower concentrations over longer periods is common; the potential effects of aromatic hydrocarbons, especially benzene, have to be considered under such circumstances. Bone marrow depression with resulting low red blood cell counts and leukopenia with neutropenia and/or low platelet counts can develop, and medical surveillance should include periodic blood counts for the early detection of such effects; cessation of exposure to mixtures containing aromatic hydrocarbons is necessary when such abnormalities occur.
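Because these products are variable mixtures, industrial-hygiene practice commonly evaluates the combined exposure of components with similar toxic effects by the standard additive rule, summing each measured concentration divided by its exposure limit; a sum of 1.0 or more means the mixture limit is exceeded. A minimal sketch, using hypothetical measured concentrations against the per-compound ppm figures quoted in this chapter (the function name is illustrative):

```python
def mixture_exposure_index(exposures):
    """Additive mixture rule: sum of (measured concentration /
    exposure limit) over all components, both in the same units
    (e.g., ppm). An index >= 1.0 means the combined limit for the
    mixture is exceeded."""
    return sum(conc / limit for conc, limit in exposures)

# Hypothetical 8-h TWA measurements (ppm) against the limits quoted
# in the text for a 350 mg/m3 TWA (pentane 120, hexane 100, heptane 85):
samples = [
    (30.0, 120.0),   # pentane
    (40.0, 100.0),   # hexane
    (20.0, 85.0),    # heptane
]
index = mixture_exposure_index(samples)
print(f"index = {index:.2f}, exceeded = {index >= 1.0}")
```

Note that each component here is well below its own limit, yet the combined index approaches 1.0; this is the point of the additive rule.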
Long-term effects of benzene exposure include increased risk of leukemia; therefore, exposure should be carefully monitored and controlled so that the recommended standard for benzene not be exceeded.
Chronic effects on the central and peripheral nervous systems with exposure to commercial mixtures of hydrocarbons have received more attention only in recent years. Since some of the common components of such mixtures have been shown to produce peripheral neuropathy and to induce similar degenerative changes of axons in the CNS, such effects might also result from exposure to mixtures of hydrocarbons. Long-term exposure to solvents, including commercial mixtures of hydrocarbons, has been associated, in some cases, with chronic, possibly irreversible CNS impairment. Such effects have been documented by clinical, electrophysiological, neurobehavioral, and brain-imaging techniques. Accidental ingestion and aspiration of gasoline or other mixtures of hydrocarbons can occur, mainly during siphoning, and result in severe chemical pneumonitis, with pulmonary edema, hemorrhage, and necrosis. Gasoline and other hydrocarbon mixtures used as engine fuel have a variety of additives to enhance desired characteristics. Lead tetraethyl probably has the highest toxicity. Workers employed in the manufacture of this additive and in mixing it with gasoline have the highest risk of exposure, and their protection has to be extremely thorough. Ethylene dibromide (EDB) is another additive with important toxicological effects, which has received increased attention recently. Skin irritation, related to the defatting properties of these solvents, and consequent increased susceptibility to infections, is frequent when there is repeated contact with such mixtures of hydrocarbons or with individual compounds. Chronic dermatitis is a common finding in exposed workers; protective equipment and appropriate work practices are essential in its prevention.
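Exposure limits for these hydrocarbons are quoted interchangeably in ppm and mg/m³ (for example, 100 ppm ≈ 350 mg/m³ for hexane, as noted earlier). A sketch of the standard vapor-concentration conversion, assuming 25 °C and 1 atm (ideal-gas molar volume 24.45 L/mol) and rounded molecular weights; the small discrepancies against the rounded ppm figures in the text come from this rounding:

```python
MOLAR_VOLUME_L = 24.45  # L/mol for an ideal gas at 25 degC, 1 atm

def mg_m3_to_ppm(conc_mg_m3, mol_weight_g):
    """Convert a vapor concentration from mg/m3 to ppm (v/v)."""
    return conc_mg_m3 * MOLAR_VOLUME_L / mol_weight_g

# 350 mg/m3 of each alkane (molecular weights in g/mol):
for name, mw in [("pentane", 72.15), ("hexane", 86.18), ("heptane", 100.21)]:
    print(f"{name}: {mg_m3_to_ppm(350, mw):.0f} ppm")

# Benzene (78.11 g/mol): the 3.2 mg/m3 standard is about 1 ppm.
print(f"benzene: {mg_m3_to_ppm(3.2, 78.11):.1f} ppm")
```

The same relation run in reverse (ppm × MW / 24.45) recovers the mg/m³ figures.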
Prevention and Surveillance Exposure to airborne aliphatic hydrocarbons should be controlled so as not to exceed a concentration of 350 mg/m3 as a TWA. This concentration is equivalent to 120 ppm pentane, 100 ppm hexane, and 85 ppm heptane. For the commercial mixtures, a similar TWA has been recommended, except for petroleum ether (the most volatile mixture) for which a TWA of 200 mg/m3 is recommended.3 Exposure to benzene should not exceed the recommended standard of 1 ppm (3.2 mg/m3), given the marked myelotoxicity of benzene and the increased incidence of leukemia. There is a definite need to monitor for the presence and amount of aromatic hydrocarbons in mixtures of petroleum solvents. Medical surveillance programs should aim at the early detection of such adverse effects as toxic peripheral neuropathy, chronic CNS dysfunction, hematological effects, and dermatitis. Since accidental overexposure may result in rapid loss of consciousness and death (CNS depression), adequate and prompt therapy for such cases is urgent. Education of employees and supervisory personnel concerning potential health hazards, safe working practices (including respirator use when necessary), and first-aid procedures is essential.

AROMATIC HYDROCARBONS
Aromatic hydrocarbons are characterized by a benzene ring in which the six carbon atoms are arranged as a hexagon, with a hydrogen atom attached to each carbon (C6H6). According to the number of benzene rings and their binding, the aromatic hydrocarbons are classified into three main groups:

1. Benzene and its derivatives: toluene, xylene, styrene, etc.
2. Polyphenyls: two or more noncondensed benzene rings (diphenyls, triphenyls).
3. Polynuclear aromatic hydrocarbons: two or more condensed benzene rings (naphthalene, anthracene, phenanthrene) and the carcinogenic polycyclic hydrocarbons (benzo[a]pyrene, methylcholanthrene, etc.).

Distillation of coal in the coking process was the original source of aromatic hydrocarbons; an increasing proportion is now
derived from petroleum through distillation, dehydrogenation of cycloparaffins, and catalytic cyclization of paraffins.
Benzene Benzene is a clear, colorless, volatile liquid with a characteristic odor; the relatively low boiling temperature (80°C) accounts for its high volatility and the potential for rapidly increasing air concentrations. Commercial-grade benzene contains variable amounts (up to 50%) of toluene, xylene, and other constituents that distill below 120°C. More important is the fact that commercial grades of other aromatic hydrocarbons, toluene and xylene, also contain significant proportions of benzene (up to 15% for toluene); this also applies to commercial mixtures of petroleum distillates, such as gasoline and aromatic petroleum naphthas, where the proportion of benzene may reach 16%. Benzene exposure is, therefore, a more widespread problem than would be suggested by the number of employees categorized as handling benzene as such. Many others exposed to mixtures of hydrocarbons or commercial grades of toluene and xylene may also be exposed to significant concentrations of benzene. Production of benzene has continuously expanded. It is estimated that more than 2 million workers are exposed to benzene in the United States.3 In recent years, there has been increasing concern with respect to benzene in hazardous waste-disposal sites. Benzene has been found in almost one-third of the 1177 National Priorities List hazardous waste sites. Other environmental sources of exposure include gasoline filling stations, vehicle exhaust fumes, underground gasoline storage tanks that leak, wastewater from industries that use benzene, and groundwater next to landfills that contain benzene. Urban structural fires yield benzene as a predominating combustion product.57 An important use of benzene in some parts of the world is as an additive in motor fuel, including gasoline. In Europe, gasolines have been found to contain up to 5% benzene; in the United States, levels up to 2% have been reported.
An association between acute childhood leukemia and residence near auto repair garages and gasoline stations has been reported.58 Environmental levels of benzene in areas with intense automotive traffic have been found to range from 1 to 100 ppb. Urban air in high vehicular traffic zones with high levels of benzene and ultra-fine particulates is associated with elevated levels of chromosome strand breaks and other indicators of oxidative DNA damage in mononuclear blood cells of residents.59 DNA and protein adduct levels in liver and bone marrow in mice exposed to benzene showed a dose-dependent increase at doses mimicking human environmental (nonoccupational) exposure.60 Consumer products that contain benzene include glues, adhesives, some household cleaning products, paint strippers, some art supplies, and gasoline. Increasing focus has been directed toward the well-documented benzene content of cigarette smoke and the health risks associated with direct smoking and exposure to second-hand smoke.61 Exposure to benzene may occur in the distillation of coal in the coking process; in oil refineries; and in the chemical, pharmaceutical, and pesticides industries, where benzene is widely used as a raw material for the synthesis of products. Exposure may also occur with its numerous uses as a solvent, in paints, lacquers, and glues; in the linoleum industry; for adhesives; in the extraction of alkaloids; in degreasing of natural and synthetic fibers and of metal parts; in the application and impregnation of insulating material; in rotogravure printing; in the spray application of lacquers and paints; and in laboratory extractions and chromatographic separations. The largest amounts of benzene are used for the synthesis of other organic compounds, mostly in enclosed systems, where exposure is generally limited to equipment leakage, liquid transfer, and repair and maintenance operations. 
Exposures with the use of benzene as a solvent or solvent component present a more difficult problem, since enclosure of such processes and adequate control of airborne concentrations have not been easily achieved. Inhalation of the vapor is the main route of absorption; skin penetration is of minor significance. Benzene retention is highest in lipid-rich organs: in adipose tissue and bone marrow, benzene concentrations
may reach a level 20 times higher than the blood concentration; its persistence in these tissues is also much longer. Elimination is through the respiratory route (45–70% of the amount inhaled); the rest is excreted as urinary metabolites. Benzene is metabolized in the liver to a series of phenolic and ring-opened products and their conjugates, in part by the P450 mixed-function microsomal oxidases; the first intermediate in its biotransformation is benzene epoxide, a precursor of several active metabolites proposed to be responsible for the carcinogenic effect of benzene. The metabolites of benzene include phenol, catechol, hydroquinone, p-benzoquinone, and trans, trans-muconaldehyde. Recent studies have demonstrated that polymorphisms in the genes encoding enzymes involved in benzene activation or detoxification, NAD(P)H:quinone oxidoreductase 1 (NQO1), CYP2E1, and GSTT1, together with P450 enzyme-inducing ethanol consumption,62 might contribute to the development of benzene hematotoxicity in exposed workers63 and mice.64,65 The role of the aryl hydrocarbon receptor (AhR) is suggested by studies showing that mice lacking AhR exhibit no hematotoxicity after exposure to high concentrations of benzene.66 Trans, trans-muconaldehyde (MUC), a six-carbon diene-dialdehyde, is a microsomal, hematotoxic ring-opened metabolite of benzene. MUC is metabolized to compounds formed by oxidation and reduction of the aldehyde group(s). MUC and its aldehydic metabolites 6-hydroxy-trans, trans-2,4-hexadienal and 6-oxo-trans, trans-hexadienoic acid are mutagenic, in that order of potency. The order of mutagenic activity correlates with reactivity toward glutathione, suggesting that alkylating potential is important in the genotoxicity of these compounds.67 The triphenolic metabolite of benzene, 1,2,4-benzenetriol (BT), is readily oxidized to its corresponding quinone. During this process, active oxygen species are formed that may damage DNA and other macromolecules.
BT increases the frequency of micronuclei formation. BT also increases the level of 8-hydroxy-2′-deoxyguanosine (8-OH-dG), a marker of active oxygen-induced DNA damage. Thus BT can cause structural chromosomal changes and point mutations indirectly by generating oxygen radicals. BT may, therefore, play an important role in benzene-induced leukemia.68 Catechol and hydroquinone were found to be highly potent in inducing sister chromatid exchange and delaying cell division; these effects were much more marked than those of benzene and phenol.69 Exposure to high airborne concentrations of benzene results in CNS depression with acute, nonspecific, narcotic effects. With very high exposure (thousands of ppm), loss of consciousness and depression of the respiratory center or myocardial sensitization to endogenous epinephrine with ventricular fibrillation may result in death. Recovery from acute benzene poisoning is usually complete if removal from exposure is prompt; in cases of prolonged coma (after longer exposure to high concentrations), diffuse or focal EEG abnormalities have been observed for several months after recovery, together with such symptoms as dizziness, headache, fatigue, and sleep disturbances. Chronic benzene poisoning is a more important risk, since it can occur with much lower exposure levels. It can develop insidiously over months or years, often without premonitory warning symptoms, and result in severe bone marrow depression. Benzene is a potent myelotoxic agent. Hematologic abnormalities detected in the peripheral blood do not always correlate with the pattern of bone marrow changes. Relatively minor deviations from normal in the blood count (red blood cells [RBCs], white blood cells [WBCs], or platelets) may coexist with marked bone marrow changes (hyperplastic or hypoplastic), and abnormalities are sometimes first found after cessation of exposure.
Benzene-induced aplastic anemia can be fatal, with hemorrhage secondary to the marked thrombocytopenia and increased susceptibility to infections due to neutropenia. The number of reported cases of severe chronic benzene poisoning with aplastic anemia gradually decreased after World War II, because of better engineering controls, progressive reduction of the PELs, and efforts to substitute less toxic solvents for benzene in numerous industrial processes. The mechanism of aplastic anemia appears to involve the concerted action of several metabolites on early stem and
progenitor cells, as well as on early blast cells, to inhibit maturation and amplification. Red blood cell, white blood cell, and platelet counts may initially increase, but more often anemia, leukopenia, and/or thrombocytopenia are found. The three cell lines are not necessarily affected to the same degree, and all possible combinations of hematological changes have been found in cases of chronic benzene poisoning. In some older reports, the earliest abnormalities have been described as reduction in the number of white blood cells and relative neutropenia; in later studies, lower than normal red blood cell counts and macrocytosis with hyperchromic anemia have been found more often to be the initial hematologic abnormalities.70,71 Thrombocytopenia has also been frequently reported.72 The bone marrow may be hyperplastic or hypoplastic. Compensatory replication of primitive progenitor cells in the bone marrow of mice during benzene exposure has been reported as a response to cytotoxicity among more differentiated cell types.73 All hematologic parameters (total white blood cells, absolute lymphocyte count, platelets, red blood cells, and hematocrit) were decreased among benzene-exposed workers compared to controls, with the exception of the red blood cell mean corpuscular volume (MCV), which was higher among exposed subjects.74 In a study of 250 workers exposed to benzene, white blood cell and platelet counts were significantly lower than in 140 controls, even for exposure below 1 ppm in air, the current workplace standard. Progenitor cell colony formation significantly declined with increasing benzene exposure and was more sensitive to the effects of benzene than was the number of mature blood cells. 
Two genetic variants in key metabolizing enzymes, myeloperoxidase and NAD(P)H:quinone oxidoreductase, influenced susceptibility to benzene hematotoxicity.75 In another study, polymorphism in myeloperoxidase was shown to influence benzene-induced hematotoxicity in exposed workers.76 Benzene has been shown to suppress hematopoiesis by suppression of the cell cycle through p53-mediated overexpression of p21, a cyclin-dependent kinase inhibitor.77 Nitric oxide has been shown to be a contributor to benzene metabolism, especially in the bone marrow, and can form nitrated derivatives that may, in part, account for bone marrow toxicity.78 The stromal macrophage that produces interleukin-1 (IL-1), a cytokine essential for hematopoiesis, is a target of benzene toxicity. Hydroquinone, a bone marrow toxin, inhibits the processing of pre-interleukin-1 alpha (IL-1 alpha) to the mature cytokine in bone marrow macrophages.79 Benzene and hydroquinone have been demonstrated to induce myeloblast differentiation and hydroquinone to induce growth in myeloblasts in the presence of IL-3.80 The stromal macrophage, a target of benzene toxicity, secretes IL-1, which induces the stromal fibroblast to synthesize hematopoietic colony-stimulating factors. The processing of pre-IL-1 to IL-1 is inhibited by para-benzoquinone in stromal macrophages of mice.81 Benzene is an established animal and human carcinogen. Leukemia secondary to benzene exposure has been repeatedly reported since the 1930s. All types of leukemia have been found; myelogenous leukemia (chronic and acute) and erythroleukemia (Di Guglielmo’s disease) apparently occur more frequently, but acute and chronic lymphocytic or lymphoblastic leukemia is represented as well. Malignant transformation of the bone marrow has been noted years after cessation of exposure, an added difficulty in the few epidemiological studies on long-term effects of benzene exposure.
In Italy, where the large shoe-manufacturing industry had used benzene-based glues for many years, at least 150 cases of benzene-related leukemia were known by 1976.82 In Turkey, more than 50 cases of aplastic anemia and 34 cases of leukemia have been reported from the shoe-manufacturing industry.83 Epidemiological studies in the United States rubber industry have indicated a more than threefold increase in leukemia deaths; occupations with known solvent exposure (benzene widely used in the past and still a contaminant of solvents used) showed a significantly higher leukemia mortality than other occupations. Lymphatic leukemia showed the highest excess mortality. The risk of leukemia was much higher in workers exposed 5 years or more (SMR of 2100). Four additional cases of leukemia occurred among employees not encompassed by the definition of the cohort.84 In Japan, the incidence of leukemia among Hiroshima and
Nagasaki survivors was found to be significantly increased by occupational benzene exposure in the years subsequent to the bomb.85 In a large cohort of 74,828 benzene-exposed and 35,805 nonexposed workers in 12 cities in China, deaths due to lymphatic and hematopoietic malignancies and lung cancer increased significantly with increasing cumulative exposure to benzene.86 Experimental studies have demonstrated carcinogenic effects of benzene in experimental animals; in addition to leukemias,87 benzene has produced significant increases in the incidence of Zymbal gland carcinomas in rodents, cancer of the oral cavity, hepatocarcinomas, and possibly mammary carcinomas and lymphoreticular neoplasias.88 In experimental studies on mice, in addition to a high increase in leukemias, a significant increase in lymphomas was found.89 The National Toxicology Program conducted an oral administration experimental study in which malignant lymphoma and carcinomas in various organs, including skin, oral cavity, alveoli/bronchioli, and mammary gland in mice, and carcinomas of the skin, oral cavity, and Zymbal gland in rats were found with significantly increased incidence. Thus NTP concluded that there was clear evidence of carcinogenicity of benzene in rats and mice.90 The Environmental Protection Agency (EPA) has come to the same conclusion. The International Agency for Research on Cancer (IARC) has acknowledged the existence of limited evidence for chronic myeloid and chronic lymphocytic leukemia. In addition, it was noted that studies had suggested an increased risk of multiple myeloma,91 while others indicate a dose-related increase in total lymphatic and hematopoietic neoplasms. The carcinogenicity of benzene is most likely dependent upon its conversion to phenol and hydroquinone, the latter being oxidized to the highly toxic 1,4-benzoquinone in the bone marrow. Many recent studies have explored the mechanism by which these benzene metabolites act.
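The SMR values quoted in the epidemiological studies above are observed-to-expected death ratios scaled by 100; an SMR of 2100 therefore means 21 times the expected leukemia mortality. A minimal sketch (the counts below are illustrative, not the studies' actual figures):

```python
def smr(observed_deaths, expected_deaths):
    """Standardized mortality ratio on the conventional base-100 scale:
    SMR = 100 means mortality equal to that of the reference population."""
    return 100.0 * observed_deaths / expected_deaths

# Illustrative counts only: 21 observed leukemia deaths against 1.0
# expected would reproduce the SMR of 2100 cited for workers exposed
# 5 years or more.
print(smr(21, 1.0))  # → 2100.0
```

The expected count comes from applying reference-population death rates to the cohort's person-years, which is why SMRs from different cohorts are not directly comparable.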
The modified base 8-hydroxy-deoxyguanosine (8-OH-dG) is a sensitive marker of DNA damage due to hydroxyl radical attack at the C8 position of guanine. A biomonitoring study of 65 filling station attendants in Rome, Italy, found the urinary concentration of 8-OH-dG to be significantly correlated with benzene exposure calculated on the basis of repeated personal samples collected during 1 year.92 Exposure to low, medium, and high concentrations of benzene resulted in a dose-dependent increase in levels of 8-OH-dG and lymphocyte micronuclei in benzene-exposed workers.93 It has been shown that DNA adducts (guanine nucleoside adducts) are formed by incubation of rabbit bone marrow with 14C-labeled benzene; p-benzoquinone, phenol, hydroquinone, and 1,2,4-benzenetriol also form adducts with guanine.94 The differential formation of DNA adducts by p-benzoquinone and hydroquinone and their respective mutagenic activities have been characterized.95 Benzene and its metabolites do not function well as mutagens, but are highly clastogenic, producing chromosome aberrations, sister chromatid exchange, and micronuclei.96 Exposure of human lymphocytes and cell lines to hydroquinone has been shown to cause various forms of genetic damage, including aneusomy and the loss and gain of chromosomes. Chromosomal aberrations in lymphocytes of benzene-exposed workers have been well documented;97 they were shown to persist even years after cessation of toxic exposure.98 The “stable” aberrations are more persistent and have been considered to be the origin of leukemic clones.
A more recent study demonstrated an increased incidence of chromosomal aberrations (particularly chromatid gaps and breaks) among long-term Turkish shoe manufacturing workers when compared to a control group.99 The occurrence of a significant excess of DNA damage in peripheral lymphocytes of human subjects with occupational exposure to low levels of benzene (12 gasoline station attendants) compared with controls, independent of the ages or smoking habits of the subjects, was demonstrated by the alkaline single cell gel electrophoresis (Comet) assay. Exposed subjects showed an excess of heavily damaged cells.100 High benzene exposure has been shown to induce aneuploidy of chromosome 9 in nondiseased workers, with trisomy being the most prevalent form, as determined by fluorescence in situ hybridization (FISH) and interphase cytogenetics.101
Cytogenetic effects of benzene have been reproduced in animal models. In rats exposed to 1000 and 100 ppm, a significant increase in the proportion of cells with chromosomal abnormalities was detected; exposure to 10 and 1 ppm resulted in elevated levels of cells with chromosomal abnormalities that showed evidence of being dose-related, although they were not statistically significant.102 A dose-related increase in the frequency of micronucleated cells in tissue cultures from rat Zymbal glands (a principal target for benzene carcinogenesis in rats) was reported.103 A significant increase in sister chromatid exchanges in bone marrow cells of mice exposed to 28 ppm benzene for 4 hours has been reported.104 Benzene induced a dose-dependent increase in the frequencies of chromosomal aberrations in bone marrow and spermatogonial cells. The damage was greater in bone marrow than in spermatogonial cells.105 Using fluorescence in situ hybridization with chromosome-specific painting probes (FISH painting), chromatid-type aberrations in mice were significantly increased 24 and 36 hours after a single high-dose benzene exposure, while chromosome-type aberrations were elevated above control values 36 hours and 15 days after exposure, showing that at least part of the benzene-induced chromatid exchanges were converted into potentially stable chromosome aberrations.106 The target cells for leukemogenesis are the pluripotent stem cells or early progenitor cells that carry the CD34 antigen (CD34+ cells).
Following benzene exposure in mice, aneuploid cells were more frequent in the hematopoietic stem cell compartment than in mature hematopoietic subpopulations.107 Hydroquinone, a benzene metabolite, increases the level of aneusomy of chromosomes 7 and 8 in human CD34-positive blood progenitor cells.108 Catechol and hydroquinone have been shown to act in synergy to induce loss of chromosomes 5, 7, and 8, as found in secondary myelodysplastic syndrome and acute myelogenous leukemia.109 Human CD34+ cells have been shown to be sensitive targets for 1,4-benzoquinone toxicity that use the p53 DNA damage response pathway in response to genotoxic stress. Apoptosis and cytotoxicity were dose-dependent, and there was a significant increase in the percentage of micronucleated CD34+ cells in cultures treated with 1,4-benzoquinone.110 The role of gene-environment interaction in benzene-induced chromosomal damage has been investigated: the polymorphic genes GSTM1, GSTT1, and GSTP1, coding for GST, have been shown to exhibit differential metabolism of hydroquinone, associated with different frequencies of micronuclei and sister chromatid exchanges induced by hydroquinone in human lymphocytes.111 Genotype-dependent chromosomal instability can be induced by hydroquinone doses that are not acutely stem cell toxic.112 DNA-protein crosslinking and DNA strand-breaks were induced by trans, trans-muconaldehyde and hydroquinone, with synergistic interactive effects of the two agents in combination.113 1,4-Benzoquinone has been shown to inhibit topoisomerase II catalysis, most probably by binding to an essential SH group,114 with a consequent increase in topoisomerase II-mediated DNA cleavage, primarily by enhancing the forward rate of scission.
In vitro, the compound induced cleavage at DNA sites proximal to a defined leukemic chromosomal breakpoint.115 1,4-Benzoquinone and trans, trans-muconaldehyde were shown to be directly inhibitory, whereas all of the phenolic metabolites were shown to inhibit topoisomerase II activity in vitro following bioactivation using a peroxidase activation system116 and in vivo in the bone marrow of treated mice.117 The effect of p53 heterozygosity on the genomic and cellular responses of target tissues in mice to toxic insult has been demonstrated. Examination of mRNA levels of p53-regulated genes involved in cell cycle control (p21, gadd45, and cyclin G) or apoptosis (bax and bcl-2) showed that during chronic benzene exposure, bone marrow cells from p53+/+ mice expressed significantly higher levels of a majority of these genes compared to p53+/− bone marrow cells.118 The ability of the benzene metabolites hydroquinone and trans, trans-muconaldehyde to interfere with gap-junction intercellular communication, a characteristic of tumor promoters and nongenotoxic carcinogens and shown to result in perturbation of hematopoiesis, has been proposed as a possible mechanism for benzene-induced hematotoxicity and development of leukemia.119 Recent studies suggest that benzene’s metabolites, catechol and phenol, may mediate benzene toxicity
through metabolite-mediated alterations in the c-Myb signaling pathway, overexpression of which is believed to play a key role in the development of a wide variety of leukemias and tumors.120 Covalent binding of the benzene metabolites p-benzoquinone and p-biphenoquinone to critical thiol groups of tubulin has been shown to inhibit microtubule formation under cell-free conditions, possibly interfering with the formation of a functional spindle apparatus in the mitotic cell, thus leading to the abnormal chromosome segregation and aneuploidy induction reported for benzene.121 An effect of MUC, hydroquinone (HQ), and four MUC metabolites on gap-junction intercellular communication has been demonstrated.122 Toxic effects on reproductive organs have received increased attention. In subchronic inhalation studies, histopathological changes in the ovaries (characterized by bilateral cyst formation) and in the testes (atrophy and degenerative changes, including a decrease in the number of spermatozoa and an increase in abnormal sperm forms) have been reported.123 Benzene was shown to be a transplacental genotoxicant in mice, where it was found to significantly increase micronuclei and sister chromatid exchange in fetal liver when administered at a high dose (1318 mg/kg) to mice on days 14 and 15 of gestation.124 Levels of pregnanediol-3-glucuronide, follicle-stimulating hormone, and estrone conjugate in the urine of female benzene-exposed workers were significantly lower than those in a nonexposed control group.125 Exposure to benzene at high concentrations (42.29 mg/m3) induced increases in the frequencies of numerical aberrations for chromosomes 1 and 18 and of structural aberrations for chromosome 1 in sperm of exposed workers.126 There is little information on developmental toxicity of benzene in humans.
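Exposure levels in this literature are quoted interchangeably in ppm and mg/m3; the two are related through the ideal-gas molar volume (24.45 L/mol at 25°C and 1 atm). A small conversion sketch under those standard conditions:

```python
BENZENE_MW = 78.11  # molecular weight of benzene, g/mol

def mg_m3_to_ppm(mg_m3, mol_weight, molar_volume=24.45):
    """Convert an airborne concentration in mg/m3 to ppm (v/v), using
    the ideal-gas molar volume of 24.45 L/mol at 25 degC and 1 atm."""
    return mg_m3 * molar_volume / mol_weight

def ppm_to_mg_m3(ppm, mol_weight, molar_volume=24.45):
    """Inverse conversion: ppm (v/v) to mg/m3."""
    return ppm * mol_weight / molar_volume

# The 42.29 mg/m3 benzene exposure cited above is roughly 13 ppm:
print(round(mg_m3_to_ppm(42.29, BENZENE_MW), 1))  # → 13.2
```

The same arithmetic shows that the 1 ppm occupational standard corresponds to about 3.2 mg/m3 of benzene.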
Case reports have documented that normal infants without chromosomal aberrations can be born to mothers with an increased number of chromosomal aberrations;127 other investigators have reported increases in the frequency of sister chromatid exchanges and chromatid breaks in children of women exposed to benzene and other solvents during pregnancy. In animal experiments in vivo, benzene has not been found to be teratogenic; a decrease in fetal weight and an increase in skeletal variants have been associated with maternal toxicity. The embryotoxicity of toluene, xylene, benzene, styrene, and its metabolite, styrene oxide, was evaluated using the in vitro culture of postimplantation rat embryos. Toluene, xylene, benzene, and styrene all have a concentration-dependent embryotoxic effect on the developing rat embryo in vitro, at concentrations of 1.00 µmol/mL for styrene, 1.56 µmol/mL for benzene, and 2.25 µmol/mL for toluene. There was no evidence of synergistic interaction among the solvents.128 The immunotoxicity of benzene in rats was demonstrated by a reduction in the number of B-lymphocytes after 2 weeks of exposure at 400 ppm and a subsequent reduction in thymus weight and spleen B-, CD4+/CD5+, and CD5+ T-lymphocytes at 4 weeks.129 Rapid and persistent reductions in femoral B-, splenic T- and B-, and thymic T-lymphocytes, along with a marked increase in the percentage of femoral B-lymphocytes and thymic T-lymphocytes in apoptosis, were induced in mice exposed to benzene at 200 ppm.130 Para-benzoquinone has been shown to inhibit mitogen-induced IL-2 production by human peripheral blood mononuclear cells.131 Hydroquinone, in concentrations comparable to those found in cigarette tar, is a potent inhibitor of IL-2-dependent T-cell proliferation.132
Prevention and Control Prevention of benzene poisoning and of malignant transformation of the bone marrow is based on engineering control of exposure. The TLV for benzene has been repeatedly reduced in the last several decades.2,3 In 1987, the OSHA occupational exposure standard for benzene was revised to 1 ppm TWA, with a 5 ppm short-term exposure limit (STEL). The National Institute for Occupational Safety and Health (NIOSH) has recommended that the standard be revised to a TWA of 0.1 ppm, with a 15-minute ceiling value of 1 ppm. Biological monitoring through measurements of urinary metabolites of benzene is useful as a complement to air sampling for the measurement of benzene concentrations. Elevation in the total
urinary phenols (normal range 20–30 mg/L) indicates excessive benzene exposure, and 50 mg/L should not be exceeded. The urinary inorganic/total sulfate ratio may also be monitored. Biological monitoring is recommended at least quarterly but should be more frequent when exposure levels are equal to or higher than the TWA. A urinary phenol level of 75 mg/L was found in one study to correspond to a TWA exposure of 10 ppm; in other studies the urinary phenol level corresponding to 10 ppm benzene was 45–50 mg/L. Trans, trans-muconic acid in urine is potentially useful as a monitor for low levels of exposure to benzene. A gas chromatography/mass spectrometry assay was developed that detects muconic acid in urine of exposed workers at levels greater than 10 ng/ml. MUC excretion in urine has been shown to be a sensitive indicator of low levels of exposure to benzene in second-hand tobacco smoke,133,134 although interindividual variability in the rate of metabolizing benzene to MUC may introduce some limitations in the application of this metabolite as an exposure index at low benzene exposures.135 S-phenylmercapturic acid was reported to be more sensitive than MUC as a biomarker for low levels of workers’ exposure to benzene at concentrations less than 0.25 ppm.136 Preplacement and periodic examinations should include a history of exposure to other myelotoxic chemical or physical agents or medications and of other hematologic conditions. A complete blood count, a mean corpuscular volume determination, reticulocyte and platelet counts, and the urinary phenol test are basic laboratory tests. The frequency of these examinations and tests should be related to the level of exposure.3 Possible neurological and dermatological effects should also be considered in comprehensive periodic examinations. Adequate respirators should be available and should be used when spills, leakage, or other incidents of higher exposure occur.
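A shift's 8-hour TWA, against which the 1 ppm standard is judged, is simply the exposure-weighted average of the personal air samples over the shift. A minimal sketch, with hypothetical sampling data:

```python
OSHA_TWA_PPM = 1.0   # OSHA 8-hour time-weighted average limit for benzene
OSHA_STEL_PPM = 5.0  # OSHA 15-minute short-term exposure limit

def eight_hour_twa(samples):
    """8-hour TWA from (concentration_ppm, duration_hours) pairs;
    unsampled time within the shift is treated as zero exposure."""
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical personal-sampling results for one shift: 2 h at 2.0 ppm
# during a liquid-transfer task, then 6 h at 0.2 ppm background.
twa = eight_hour_twa([(2.0, 2.0), (0.2, 6.0)])
print(twa)                  # 0.65
print(twa <= OSHA_TWA_PPM)  # True: the shift average is within the TWA limit
```

Note that the 5 ppm STEL is a separate criterion, checked against each 15-minute sampling interval rather than the shift average.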
In recent years, the possibility of excessive benzene ingestion from contaminated water has received increasing attention. Benzene concentrations in water have been found to range from 0.005 ppb (in the Gulf of Mexico) to 330 ppb in contaminated well water in New York, New Jersey, and Connecticut. In 1985, the EPA proposed a maximum contamination level (MCL) for benzene in drinking water at 0.005 mg/L; this standard was promulgated in 1987.
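Because 1 ppb in dilute water is approximately 1 µg/L, the 0.005 mg/L MCL is equivalent to 5 ppb, and the contaminated well-water levels cited above exceed it more than sixtyfold. A small screening sketch under that approximation:

```python
MCL_BENZENE_MG_L = 0.005  # EPA maximum contaminant level, drinking water

def exceeds_mcl(conc_ppb):
    """Screen a water sample against the benzene MCL, treating
    1 ppb as 1 ug/L (a valid approximation for dilute solutions)."""
    return conc_ppb / 1000.0 > MCL_BENZENE_MG_L  # ppb -> mg/L

print(exceeds_mcl(330))    # contaminated well water cited above -> True
print(exceeds_mcl(0.005))  # Gulf of Mexico level cited above -> False
```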
Toluene Toluene (methylbenzene, C6H5CH3) is a clear, colorless liquid, with a higher boiling point (110°C) than benzene and, therefore, lower volatility. The production of toluene has increased markedly over the last several decades because of its use in numerous chemical synthesis processes, such as those of toluene diisocyanate, phenol, benzyl, and benzoyl derivatives, benzoic acid, toluene sulfonates, nitrotoluenes, vinyl toluene, and saccharin. More than 7 million tons are produced each year in the United States. Toluene is also used as a solvent, mostly for paints and coatings, and is often a component of mixtures of solvents. Technical grades of toluene contain benzene in variable proportions, reaching 25% in some products. Hematological effects in workers exposed to toluene have been reported in the past.1,2 Such effects were most probably due to the benzene content of toluene or to prior benzene exposure. Animal experiments indicate that pure toluene has no myelotoxic effects. Toluene has been shown to induce microsomal cytochrome P450 and mixed-function oxidases in the liver. Toluene exposure induces P450 isoenzymes CYP1A1/2, CYP2B1/2, CYP2E1, and CYP3A1, but decreases CYP2C11/6 and CYP2A1 in adult male rats. The inductive effect is more prominent in younger than in older animals and in males more than in females. Exposure to toluene does not influence renal microsomal P450-related enzyme activity in rats,137 but inhibited mixed-function oxidases in the lung.138 Exposure to toluene concentrations higher than 100 ppm results in CNS depression, with prenarcotic symptoms and in moderate eye, throat, airway, and skin irritation. These effects are more pronounced with higher concentrations. Volatile substance abuse has now been reported from most parts of the world, mainly among adolescents, individuals living in isolated communities, and in those who have ready access to such substances.
Solvents from contact adhesives, cigarette lighter refills, aerosol propellants, gasoline, and fire extinguishers containing mostly halogenated hydrocarbons may be abused by sniffing. Euphoria and behavioral changes similar to those produced by ethanol, but also hallucinations and delusions, are the most frequent acute effects. Higher doses can result in convulsions and coma. Cardiac or central nervous system toxicity can lead to death. Chronic abuse of solvents can produce severe organ toxicity, mostly of the liver, kidney, and brain.139 There is evidence that volatile substance abuse has declined in the United States. In a study of the 6-year period from 1996 through 2001 involving all cases of intentional inhalational abuse of nonpharmaceutical substances, there was a mean annual decline of 9% in reported cases, with an overall decline of 37% from 1996 to 2001. There was, however, no decline in major adverse health outcomes or fatalities.140 Numerous reports on toluene addiction (sniffing) have indicated that irreversible neurological effects are possible. Severe multifocal CNS damage,141,142,143 as well as peripheral neuropathies,144 with impairment in cognitive, cerebellar, brain stem, auditory, and pyramidal tract function has been well documented in glue sniffers. Diffuse EEG abnormalities are usually present. Cerebral and cerebellar atrophy have been demonstrated by CT scans of the brain; brain stem atrophy has also been reported. MRI following chronic toluene abuse demonstrated cerebral atrophy involving the corpus callosum and cerebellar vermis, loss of gray-white matter contrast, diffuse supratentorial white matter high-signal lesions, and low signal in the basal ganglia and midbrain.145 Toluene exposure in rats for 11 weeks resulted in a persisting motor syndrome, with shortened and widened gait and widened landing foot splay, and hearing impairment.
This motor syndrome resembles the syndrome (e.g., wide-based, ataxic gait) seen in some heavy abusers of toluene-containing products.146 Toluene can activate dopamine neurons within the mesolimbic reward pathway in the rat, an effect that may underlie its abuse potential.147 Increased sensitivity to the seizure-inducing properties of aminophylline has been reported in toluene-exposed mice.148 Subchronic exposure of rats to toluene in low concentrations (80 ppm, for 4 weeks, 5 days/week, 6 hours/day) causes a slight but persistent deficit in spatial learning and memory, a persistent decrease in dopamine-mediated locomotor activity, and an increase in the number of dopamine D2 receptors.149 Toluene exposure at a dose generally recognized as subtoxic (40 ppm) was reported to have adverse effects on catecholamine and 5-hydroxytryptamine biosynthesis.150 Selective inhibition by toluene of human GABA(A) receptors in cultured neuroblastoma cells, at concentrations comparable with brain concentrations associated with occupational exposure, has been reported.151 Toluene exposure of rats to concentrations of 100, 300, and 1000 ppm was found to produce a significant increase in three glial cell marker proteins (alpha-enolase, creatine kinase-B, and beta-S100 protein) in the cerebellum. Beta-S100 protein also increased in a dose-dependent manner in the brain stem and spinal cord. The two neuronal cell markers did not show a quantitative decrease in the CNS. This indicates that the development of gliosis, rather than neuron death, is induced by chronic exposure to toluene.152 Toluene inhalation exposure induced a marked elevation in total glial fibrillary acidic protein, a specific marker for astrocytes, in the hippocampus, cortex, and cerebellum of rats, as well as a significant increase of lipid peroxidation products (malondialdehyde and 4-hydroxyalkenals) in all brain regions.
Melatonin administration prevented these increases.153 There is evidence that the effects of toluene on neuronal activity and behavior may be mediated by inhibition of NMDA receptors.154 Progressive optic neuropathy and sensory hearing loss developed in some cases. Alterations in brain stem-evoked potentials and visual-evoked potentials have been demonstrated in relation to the length of occupational exposure to low levels of toluene.155 Toluene causes broad-frequency auditory damage, but this effect is species-specific and most likely occurs in humans at average long-term doses greater than 50 ppm.156 A morphological study in rats and mice showed the cochlear outer hair cells in the organ of Corti to be mainly affected.157 Noise exposure enhanced the loss in auditory sensitivity due to toluene,158 as did concomitant ethanol exposure in studies in
rats.159 Toluene exposure was also shown to accelerate age-related hereditary hearing loss in one genotype of mice.160 Concentrations of toluene as low as 250 ppm were able to disrupt auditory function acutely in the guinea pig.161 Hepatotoxic and nephrotoxic effects have also been found in cases of toluene addiction;162 the possibility that other toxic agents might have contributed cannot be excluded. Long-term exposure to toluene was reported to be associated with proximal renal tubule cell apoptosis.163 Sudden death in toluene sniffers has been reported and is thought to be due to arrhythmia secondary to myocardial sensitization to endogenous catecholamines,164 a mechanism of sudden death similar to that reported with trichloroethylene and other halogenated hydrocarbons. Adverse developmental effects in offspring of women who are solvent sniffers have been reported. These include CNS dysfunction, microcephaly, minor craniofacial and limb abnormalities,165 and growth retardation. Developmental disability, intrauterine growth retardation, renal anomalies, and dysmorphic features have been described in offspring of women who abuse toluene during pregnancy. Experimental results166 confirm adverse developmental effects: skeletal abnormalities and low fetal weight were observed in several animal species (mice, rabbits). In an animal model replicating the brief, high-intensity exposures characteristic of toluene sniffing in humans, brief, repeated, prenatal exposure to high concentrations of toluene was reported to cause growth restriction, malformation, and impairments of biobehavioral development in rats.167 Prenatal toluene exposure in rats results in abnormal neuronal proliferation and migration, with a significant reduction in the number of neurons within each cortical layer168 and reduced forebrain myelination169 in the brains of mature pups.
A rapid, reversible, and dose-dependent inhibition of muscarinic receptor-mediated Ca2+ signaling has been demonstrated in neural precursor cells taken from rat embryonic cortex. Since muscarinic receptors mediate cell proliferation and differentiation during neural precursor cell development, depression of muscarinic signaling may play a role in toluene’s teratogenic effect on the developing nervous system.170 Prenatal exposure to 1800 ppm toluene increased neuronal apoptosis in the cerebellum of weaned male rats sacrificed 21 days after birth.171 Adverse reproductive effects have been detected in experimental, but not human, studies. In an experimental study on rats receiving toluene by gavage (520 mg/kg body weight during days 6–19 of gestation), no major congenital malformations or neuropathologic changes were found; the numbers of implantations and stillbirths were not affected. The weights of fetuses and placentas were reduced, as were the weights of most organs. Prenatal toluene exposure produced a generalized growth retardation.172 Toluene was not embryotoxic, fetotoxic, or teratogenic for rabbits exposed during the period of organogenesis.
The highest concentration tested was 500 ppm.173 In rats exposed to toluene at a dose of 6000 ppm, 2 hours/day for 5 weeks, epididymal sperm counts, sperm motility, sperm quality, and the in vitro ability to penetrate zona-free hamster eggs were significantly reduced, while no exposure-related changes in testis weight or spermatogenesis within the testes were detected.174 Conversely, in an earlier study in rats exposed to toluene at 2000 ppm for 90 days, decreases in the weights of the epididymides and in sperm counts were observed, indicating toxicity of toluene to the male reproductive system.175 Toluene is metabolized to p-cresol, a compound shown to produce DNA adducts in myeloperoxidase-containing HL-60 cells.176 Other toluene metabolites, methylhydroquinone and methylcatechols, have been shown to induce oxidative DNA damage in the rat testis.177 Nevertheless, toluene itself has been found to be nonmutagenic and nongenotoxic. There are no indications, from human observations, that toluene has carcinogenic effects; long-term experimental studies on several animal species have been consistently negative.178
Prevention and Control The recommended TWA for toluene is 100 ppm. It is important to monitor the benzene content of technical grades of toluene and to control exposures so that the TWA of 1 ppm for benzene is not exceeded. Engineering controls, such as enclosure and exhaust ventilation, are essential for the prevention of excessive exposure; adequate respirators
Environmental Health
should be provided for unusual situations, when higher exposures might be expected.3 Biological monitoring of exposure can be achieved by measuring urinary hippuric acid, the main urinary metabolite of toluene. Excretion of hippuric acid in excess of 3 g/L indicates an exposure in excess of 100 ppm. A second important urinary metabolite of toluene is o-cresol; as for hippuric acid, the excretion of o-cresol reaches its peak at the end of the exposure period (work shift). Interindividual differences in the pattern of toluene metabolism have been found, resulting in variable ratios between urinary hippuric acid and o-cresol. For these reasons, biological monitoring should include measurements of both urinary metabolites. Simultaneous exposure by inhalation to toluene and xylene resulted in lower amounts of excreted hippuric acid and methylhippuric acid in urine, while elevated concentrations of the unmetabolized solvents were found in blood and brain during the immediate postexposure period. These results strongly suggest mutual metabolic inhibition between toluene and xylene.179 Preemployment and periodic medical examinations should encompass possible neurological, hematological, hepatic, renal, and dermatological effects. Hematological tests, as indicated for benzene, have to be used because, as noted, variable amounts of benzene may be present in commercial grades of toluene. Potential environmental toluene exposure is currently also of concern. The largest source of environmental toluene release is the production, transport, and use of gasoline, which contains 5–7% toluene by weight. Toluene in the atmosphere reacts with hydroxyl radicals; the half-time is about 13 hours. Toluene in soil or water volatilizes to air; the remaining amounts undergo microbial degradation. There is no tendency toward environmental buildup of toluene.
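As an illustration only (not a clinical or regulatory tool), the hippuric acid screening rule above can be expressed as a simple check; the function name and code structure are this sketch's own invention, and a real program would also evaluate o-cresol, as the text recommends:

```python
# Illustrative sketch of the biological monitoring rule described above:
# end-of-shift urinary hippuric acid above 3 g/L suggests a toluene
# exposure above the 100 ppm TWA. The function name and the encoding of
# the rule as code are hypothetical, not a validated algorithm.

HIPPURIC_ACID_LIMIT_G_PER_L = 3.0   # urinary hippuric acid threshold
TOLUENE_TWA_PPM = 100               # exposure level the threshold maps to

def flag_toluene_overexposure(hippuric_acid_g_per_l: float) -> bool:
    """Return True when urinary hippuric acid exceeds 3 g/L, indicating
    a likely exposure in excess of the 100 ppm TWA."""
    return hippuric_acid_g_per_l > HIPPURIC_ACID_LIMIT_G_PER_L

print(flag_toluene_overexposure(3.6))  # True: 3.6 g/L exceeds the limit
print(flag_toluene_overexposure(2.1))  # False: below the limit
```

Because metabolite ratios vary between individuals, a result just below either threshold would still warrant reviewing the second metabolite rather than clearing the worker outright.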
Toluene is a very common contaminant in the vicinity of waste-disposal sites, where average concentrations in water have been found to be 7–20 µg/L and average concentrations in soil 70 µg/L. The EPA, in a 1988 survey, found toluene in groundwater, surface water, and soil at 29% of the hazardous waste sites tested. Toluene is not a widespread contaminant of drinking water; it was present in only about 1% of groundwater sources, in concentrations lower than 2 ppb.
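The 13-hour atmospheric half-time cited above implies first-order (exponential) loss; a minimal sketch of the arithmetic, purely illustrative since real atmospheric fate also depends on hydroxyl radical levels and sunlight:

```python
# First-order decay using the ~13-hour atmospheric half-time for toluene
# cited in the text. The function is illustrative arithmetic only.

HALF_TIME_H = 13.0

def fraction_remaining(hours: float, half_time_h: float = HALF_TIME_H) -> float:
    """Fraction of an initial toluene release still airborne after
    `hours`, assuming simple exponential (first-order) loss."""
    return 0.5 ** (hours / half_time_h)

# After one day (~1.85 half-times), under a third remains:
print(round(fraction_remaining(24), 3))  # 0.278
```

This rapid turnover is consistent with the text's observation that toluene shows no tendency toward environmental buildup.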
Xylene Xylene (dimethylbenzene, C6H4[CH3]2) has three isomeric forms: ortho-, meta-, and para-xylene. Commercial xylene is a mixture of these but may also contain benzene, ethylbenzene, toluene, and other impurities. With a boiling temperature of 144°C, xylene is less volatile than benzene and toluene. It is used as a solvent and as the starting material for the synthesis of xylidines, benzoic acid, phthalic anhydride, and phthalic and terephthalic acids and their esters. Other uses are in the manufacture of quartz crystal oscillators, epoxy resins, and pharmaceuticals. In a study of two paint-manufacturing plants and 22 spray-painting operations (car painting, aircraft painting, trailer painting, and video terminal painting), the main constituents of the solvent mixtures used were xylene and toluene, with average contents of 46% and 29% by weight across the 67 air samples analyzed.180 It is estimated that 140,000 workers are potentially exposed to xylene in the United States. As with toluene, early reports on adverse effects of xylene have to be evaluated in light of the frequent presence of considerable proportions of benzene in the mixture.2 Xylene has been shown to induce liver microsomal mixed-function oxidases and cytochrome P450 in a dose-dependent manner.181 m-Xylene treatment led to elevated P450 2B1/2B2 without significantly depressing P450 2C11, and produced significant increases in activities efficiently catalyzed by both isozymes.182 The metabolism of n-hexane to its highly neurotoxic metabolite 2,5-hexanedione was shown to be markedly enhanced in rats pretreated with xylene. Xylene also increases the metabolism of benzene and toluene. Thus, when present in mixtures with other solvents, xylene can increase the adverse effects of those compounds, which exert their toxicity mainly through more toxic metabolites.
The effect on mixed-function oxidases is organ-specific, however, and inhibition of CYP isozymes in the nasal mucosa and lung following in vivo inhalation exposure to
m-xylene has been reported,183 with potential shifts in the metabolism of the carcinogen benzo[a]pyrene toward formation of DNA adducts and toxic metabolites in the lung.184 Xylene was also found to facilitate the biotransformation of progesterone and 17β-estradiol in pregnant rats by inducing hepatic microsomal mixed-function oxidases. Decreased blood levels of these hormones were thought to result in reduced weight of the fetuses.185 Xylene exposure (500 ppm) of pregnant rats on gestation days 7–20 resulted in a lower absolute brain weight and impaired performance in behavioral tests of neuromotor abilities and of learning and memory.186 The effects of lacquer thinner and its main components, toluene, xylene, methanol, and ethyl acetate, on reproductive and accessory reproductive organs in rats were studied; the vapor from the solvents was inhaled twice a day for 7 days. Both xylene and ethyl acetate caused a decrease in the weights of the testes and prostate, and reduced plasma testosterone. Spermatozoa levels in the epididymis were decreased.187 Acute effects of xylene exposure are depression of the CNS (prenarcotic and narcotic with high concentrations) and irritation of eyes, nose, throat, and skin. Acute effects of m-xylene were studied in nine volunteers exposed, at rest or while exercising, to concentrations of 200 ppm TWA, with short-term peak concentrations of 400 ppm or less. Exposure increased the dominant alpha frequency and alpha percentage in the EEG during the early phase of exposure. The effects of short-term m-xylene exposure on the EEG were minor, and no persistent deleterious effects were noted.188 Exposure to m-xylene for 4 weeks at concentrations as low as 100 ppm was reported to induce persistent behavioral alterations in the rat.189 Liquid xylene is an irritant to the skin, and repeated exposure may result in dermatitis.
Dermal exposure to m-xylene has been shown to promote IL-1 alpha and inducible nitric oxide synthase production in skin.190 Hepatotoxic and nephrotoxic effects have been found in isolated cases of excessive exposure. Nephrotoxicity has been demonstrated in rats exposed to o-xylene.191 p-Xylene reduced cell viability and increased DNA fragmentation in cell culture studies, indicating that long-term exposure may be associated with renal proximal tubule cell apoptosis.151 p-Xylene produced moderate to severe ototoxicity in rats exposed at 900 and 1,800 ppm. Increased auditory thresholds were observed at 2, 4, 8, and 16 kHz. The auditory threshold shifts (35–38 dB) did not reverse after 8 weeks of recovery, and losses of outer hair cells of the organ of Corti were found.192 Myelotoxic effects and hematologic changes have not been documented for pure xylene in humans; the possibility of benzene admixture to technical-grade xylene has to be emphasized. In animal studies, pure xylene was reported to reduce erythrocyte counts, hematocrit, and hemoglobin levels, and to increase platelet counts in rats.193 The TWA for xylene exposure is 100 ppm. The metabolites of ortho-, meta-, and para-xylene are the corresponding methylhippuric acids. A concentration of 2.05 g m-methylhippuric acid corresponds to 100 ppm (TLV) exposure to m-xylene. Prevention, control, and medical surveillance are similar to those indicated for toluene and benzene. Complete blood counts, urinalysis, and liver function tests should be part of the periodic medical examinations.
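Taking the 2.05 g correspondence above at face value and assuming simple linear proportionality between metabolite excretion and airborne exposure (an illustrative simplification, not a validated dose-reconstruction method), the arithmetic looks like this:

```python
# Illustrative only: naive linear scaling from urinary m-methylhippuric
# acid to an equivalent m-xylene exposure, using the correspondence cited
# above (2.05 g of metabolite ~ 100 ppm TLV exposure). Real biological
# monitoring relies on validated BEI tables, not simple proportionality.

REFERENCE_METABOLITE_G = 2.05   # m-methylhippuric acid at the 100 ppm TLV
REFERENCE_EXPOSURE_PPM = 100.0

def estimate_m_xylene_ppm(metabolite_g: float) -> float:
    """Crude linear estimate of m-xylene exposure (ppm) from measured
    m-methylhippuric acid, in the same units as the 2.05 g reference."""
    return REFERENCE_EXPOSURE_PPM * metabolite_g / REFERENCE_METABOLITE_G

print(round(estimate_m_xylene_ppm(2.05)))   # 100, by construction
print(round(estimate_m_xylene_ppm(1.025)))  # 50
```

Interindividual metabolic variation, noted earlier for toluene, applies here as well, which is why such a single-point estimate could only ever be a screening aid.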
Styrene Styrene (vinyl benzene, C6H5CH = CH2), a colorless or yellowish liquid, is used in the manufacture of polystyrene (styrene is the monomer; at temperatures of 200°C, polymerization to polystyrene occurs) and of copolymers with 1,3-butadiene (butadiene-styrene rubber) and acrylonitrile (acrylonitrile-butadiene-styrene, ABS). The most important exposures to styrene occur when it is used as a solvent-reactant in the manufacture of polyester products in the reinforced plastics industry. An estimated 330,000 workers are exposed yearly in the United States.194 TWA exposures can be as high as 150 to 300 ppm, with excursions into the 1000–1500 ppm range. The metabolic transformation of styrene is characterized by its conversion to styrene-7,8-oxide by the mixed function oxidases and cytochrome P450 enzyme complex. These reactions have been shown
to be organ-specific; enzymes that metabolize styrene have been demonstrated to differ in the lung and liver.195 GST polymorphism influences styrene oxide genotoxicity, with susceptibility enhanced in null-type cells (a frequency of approximately 50% in Caucasians).196 Mandelic acid (MA) and phenylglyoxylic acid (PGA) are the main urinary metabolites of styrene. In a mortality study of a cohort of styrene-exposed boat-manufacturing workers, significantly increased mortality was found for esophageal cancer and prostate cancer. Among the most highly exposed workers, urinary tract cancer and respiratory disease rates were significantly elevated. Urinary tract cancer rates increased with the duration of employment.197 Chromosome aberrations and sister chromatid exchanges were reported to be significantly increased in several studies of workers exposed to styrene. In styrene-exposed workers, the frequencies of micronucleated mononucleated lymphocytes, micronucleated binucleated lymphocytes, and micronucleated nasal epithelial cells were reported to be significantly increased when compared with nonexposed controls.198 Micronuclei levels were shown to be related to the end-of-shift urinary concentration of 4-vinylphenol and were modulated by NAD(P)H:quinone oxidoreductase polymorphism; aneuploidogenic effects, evaluated by the identification of centromeres in micronuclei using the fluorescence in situ hybridization technique, were related to before-shift urinary levels of mandelic and phenylglyoxylic acids and were influenced by GST M1 polymorphism.199 Hemoglobin and O(6)-styrene oxide-guanine DNA adducts were significantly higher in exposed workers as compared to controls and were correlated with exposure measures.
Styrene oxide-adenine DNA adducts were detected in workers but not in unexposed controls; adduct levels were affected by both acute and cumulative exposure and were associated with CYP2E1 polymorphisms.200 Epoxide hydrolase polymorphism has also been shown to affect the genotoxicity of styrene-7,8-oxide.201 DNA single-strand breaks have been found in workers exposed to styrene at relatively low levels, as determined by urinary excretion of metabolites.202 A significantly higher number of DNA strand breaks in mononuclear leukocytes of styrene-exposed workers compared with unexposed controls, correlated with years of exposure, and a significantly increased frequency of chromosomal aberrations have been reported.203 Styrene-7,8-oxide has been demonstrated to induce DNA damage, sister chromatid exchanges, and micronuclei in human leukocytes in vitro, and a strong relationship was found between DNA damage, as measured by the comet assay, and cytogenetic damage induced by styrene oxide.204 Styrene-7,8-oxide is a potent carcinogen in mice but not rats. In female mice exposed to styrene, the incidence of bronchioloalveolar carcinomas after 24 months was found to be significantly greater than in controls.205 Styrene-7,8-oxide is mutagenic in several prokaryotic and eukaryotic test systems. It has been shown to produce single-strand breaks in the DNA of various organs in mice: kidney, liver, lung, testes, and brain.206 Styrene-7,8-oxide is an alkylating agent and reacts mostly with deoxyguanosine, producing 7-alkylguanine, and with deoxycytidine, producing N-3-alkylcytosine. Recent studies have pointed to the even greater toxicity of ring-oxidized metabolites of styrene (4-vinylphenol or its metabolites).207 The International Agency for Research on Cancer (IARC) has classified styrene as a possible human carcinogen. Styrene has an irritant effect on mucous membranes (eyes, nose, throat, airways) and skin.
Inhalation of high concentrations may result in transitory CNS depression, with prenarcotic symptoms. Chronic neurotoxic effects have been reported with repeated exposure to relatively high levels in the boat-construction industry, mostly in Scandinavian countries, where styrene is widely used by brush application on large surfaces. EEG changes, performance test abnormalities, and peripheral nerve conduction velocity changes have been reported.208 Peripheral neuropathy has been described following brief but intense exposure.209 Evidence from animal studies indicates that styrene can cause sensorineural hearing loss.210 Multiple indicators of oxidative stress were identified in neuronal cells exposed to styrene oxide, suggesting oxidative stress is an important contributor to styrene’s neurotoxic effects.211 Color vision discrimination has been reported to be affected in styrene-exposed workers.212
A case-control study of styrene-exposed rubber-manufacturing workers demonstrated a significant association between recent styrene exposure and death from acute ischemic heart disease among active workers.213 Styrene is hepatotoxic and pneumotoxic in mice. Styrene oxide and 4-vinylphenol cause similar toxicities.214 Styrene exposure has been reported to be associated with increased serum prolactin levels in exposed workers.215 Clinically, hyperprolactinemia is associated with infertility, impotence, and galactorrhea, but at levels in excess of those found in this population. In mice exposed to styrene in the prepubertal period, plasma free testosterone levels were dramatically decreased following 4 weeks of styrene treatment compared with the control group.216 The majority of studies have failed to demonstrate developmental or reproductive toxicity resulting from styrene exposure. Contact allergy to styrene has been reported. Cross-reactivity on patch testing with 2-, 3-, and 4-vinyltoluene (methylstyrene) and with the metabolites styrene epoxide and 4-vinylphenol has been found.
Prevention In view of reports of persistent neurological effects with long-term exposure, the present federal standard for a styrene TWA of 100 ppm appears to be too high, and reduction has been suggested. NIOSH has proposed a TWA of 50 ppm. Biological limits of exposure have been proposed corresponding to a TLV of 50 ppm styrene. At the end of the shift, urinary MA should not exceed 800 mg/g creatinine, and the sum of MA + PGA should not be more than 1000 mg/g creatinine. In the morning, before the start of work, the values should not exceed 150 and 300 mg/g creatinine, respectively. Preemployment and periodic medical examinations should assess neurological status, liver and kidney function, and hematological parameters.
HALOGENATED HYDROCARBONS
The compounds in this group result from the substitution of one or more hydrogen atoms of a simple hydrocarbon by halogens, most often chlorine. Simple chlorinated hydrocarbons are used in a wide variety of industrial processes. The majority are excellent solvents for oils, waxes, fats, rubber, pigments, paints, varnishes, etc. In the chemical industry these compounds are used for chlorination in the manufacture of such products as plastics, pesticides, and other complex halogenated compounds.1,2 Most are nonflammable; some, such as carbon tetrachloride, have been used as fire extinguishers. (This use has been stopped because of the marked toxicity of carbon tetrachloride and the formation of highly irritant combustion products.) The most widely used simple chlorinated hydrocarbons are as follows:
Monochloromethane (methyl chloride), CH3Cl
Dichloromethane (methylene chloride), CH2Cl2
Trichloromethane (chloroform), CHCl3
Tetrachloromethane (carbon tetrachloride), CCl4
1,2-Dichloroethane (ethylene chloride), CH2ClCH2Cl
1,1-Dichloroethane, CHCl2CH3
1,1,2-Trichloroethane, CH2ClCHCl2
1,1,1-Trichloroethane (methyl chloroform), CH3CCl3
1,1,2,2-Tetrachloroethane, CHCl2CHCl2
Monochloroethylene (vinyl chloride), CHCl=CH2
1,2-Dichloroethylene (cis and trans), CHCl=CHCl
Trichloroethylene, CHCl=CCl2
Tetrachloroethylene, CCl2=CCl2
Many of the members of this series of compounds have a low boiling point and are highly volatile at room temperature; hazardous exposure levels may develop in a very short time. The application of heat is common in numerous industrial processes; air concentrations of halogenated hydrocarbons increase sharply under such circumstances. Many industrial solvents are sold as mixtures. These may sometimes contain highly toxic products, and hazardous exposure may
occur without the exposed person’s knowledge of the specific chemical composition of the solvent mixture used. Carbon tetrachloride has been generally accepted as the prototype for a hepatotoxic agent; other members of the group have similar or lesser hepatotoxicity. The majority of the compounds have a narcotic effect on the central nervous system; in this respect they are more potent than the hydrocarbons from which they are derived. Some (chloroform, trichloroethylene) were used as anesthetics until their marked toxicity was recognized. Moderate irritation of mucous membranes (conjunctivae, upper and lower airways) is also a common effect of halogenated hydrocarbons. With acute overexposure or repeated exposures of a lesser degree, toxic damage to the liver and kidney is common; the severity of these effects is largely dependent on the specific compound and on the level and pattern of exposure. Individual susceptibility may also contribute but is of lesser importance. Halogenated hydrocarbons may produce liver injury and centrilobular necrosis with or without steatosis. They also have marked nephrotoxicity; tubular cellular necrosis is the specific lesion that may lead to anuria and acute renal failure. Many of the fatalities due to acute overexposure to halogenated hydrocarbons have been attributed to this effect, although concomitant liver injury was always present.1,2 The toxicity of many halogenated solvents is associated with their biotransformation to reactive electrophilic metabolites, which can alkylate macromolecules and thus produce organ injury. The microsomal mixed function oxidases and cytochrome P450 complex of enzymes are effective in the biotransformation of halogenated solvents. The role of human microsomal cytochrome P450 IIE1 in the oxidation of a number of chemical compounds has been established. 
P450 IIE1 is a major catalyst of the oxidation of benzene, styrene, CCl4, CHCl3, CH2Cl2, CH3Cl, CH3CCl3, 1,2-dichloropropane, ethylene dichloride, ethylene dibromide, vinyl chloride, vinyl bromide, acrylonitrile, and trichloroethylene. Levels of P450 IIE1 can vary considerably among individuals.217 The P450 enzyme is highly inducible by ethanol.218 Chloroethanes (1,2-dichloroethane, 1,1,1-trichloroethane, and 1,1,2,2-tetrachloroethane) have also been shown to be metabolized by hepatic cytochrome P450. Food deprivation, more specifically a low intake of carbohydrates, and alcohol consumption enhance the metabolic transformation of the halogenated hydrocarbon solvents chloroform, carbon tetrachloride, 1,2-dichloroethane, 1,1-dichloroethylene, and trichloroethylene. Carbon tetrachloride rapidly promotes lipid peroxidation and inhibits calcium sequestration, glucose-6-phosphatase activity, and cytochrome P450. The urinary excretion of the lipid metabolites formaldehyde, malondialdehyde, acetaldehyde, and acetone was increased after administration of CCl4. The increased excretion of these lipid metabolites may serve as a noninvasive marker of xenobiotic-induced lipid peroxidation.219 Pretreatment of rats with large doses of vitamin A potentiates the hepatotoxicity of CCl4. Vitamin A enhances CCl4-induced lipid peroxidation and release of active oxygen species from Kupffer cells and possibly other macrophages activated by vitamin A.220 The in vivo formation of PGF2-like compounds (F2-isoprostanes) derived from free radical-catalyzed nonenzymatic peroxidation of arachidonic acid has been found to be considerably increased (up to 50-fold) in rats administered CCl4. F2-isoprostanes are esterified to lipids in various organs and plasma.
The measurement of F2-isoprostanes may facilitate the investigation of the role of lipid peroxidation in human disease.221 Considerable indirect evidence suggests that the cytokine tumor necrosis factor contributes to the hepatocellular damage resulting from toxic liver injury. In an experimental study, administration of a soluble tumor necrosis factor receptor lowered mortality from CCl4 from 60% to 16%. The degree of liver injury was reduced, as measured by levels of serum enzymes. There was no detrimental effect on liver regeneration. These results suggest that soluble tumor necrosis factor receptor may be of benefit in the treatment of toxic human liver disease.222 Cellular phosphatidylcholine hydroperoxide (PCOOH) and phosphatidylethanolamine hydroperoxide (PEOOH) were increased more than four-fold by exposure of cultured hepatocytes to CCl4,
1,1,1-trichloroethane, tetrachloroethylene, and 1,3-dichloropropene in a concentration of 10 mM. Peroxidative degradation of membrane phospholipids may play an important role in the cytotoxicity of some chlorinated hydrocarbons.223 It has been proposed that the nephrotoxicity of some compounds in this group is due to metabolic transformation in the kidney of the glutathione conjugates into the corresponding cysteine conjugates. The cysteine conjugates may be directly nephrotoxic or they may be further transformed in the kidney by renal cysteine conjugate β-lyase into reactive alkenyl mercaptans. Another toxic effect, more recently identified, is related to the arrhythmogenic properties of halogenated hydrocarbons. These were first reported with chloroform and trichloroethylene used as anesthetics; they have also been found to occur with occupational exposure and, more recently, in persons addicted to the euphoric effects of short-term exposure (solvent sniffers). Ventricular fibrillation secondary to myocardial sensitization to endogenous epinephrine and norepinephrine has been postulated as the mechanism underlying the arrhythmias and sudden deaths. Incorporation of halocarbons in the membrane of cardiac myocytes may block intercellular communication through modification of the immediate environment of the gap junctions. Inhibition of gap junctional communication is possibly a factor in the arrhythmogenic effects of acute halogenated hydrocarbon exposure.224 The hepatotoxicity of carbon tetrachloride has been studied extensively, both clinically and in various experimental models. The mechanisms of toxic liver injury, the underlying biochemical and enzymatic disruptions, and the corresponding ultrastructural changes have been progressively defined. Hepatic cirrhosis may follow repeated exposure to carbon tetrachloride. 
Hepatic perisinusoidal cells (PSCs) proliferate and are thought to be the principal source of extracellular matrix proteins during the development of liver fibrosis. The PSCs have been shown to be modulated into synthetically active and contractile myofibroblasts in the course of liver fibrosis.225 Simultaneous administration of trichloroethylene (TCE) and carbon tetrachloride (0.05 ml/kg) resulted in a marked potentiation of the liver injury caused by CCl4. Hepatic glutathione levels were depressed only in rats given both TCE and CCl4. The regenerative activity in the liver appeared to be delayed by TCE.226 Acetone (A), methyl ethyl ketone (MEK), and methyl isobutyl ketone (MiBK) markedly potentiate CCl4 hepatotoxicity and chloroform (CHCl3) nephrotoxicity. The potency ranking for this potentiating effect is MiBK > A > MEK for hepatotoxicity and A > MEK > MiBK for nephrotoxicity.227 An unusual type of fibrosis of the liver and spleen, including subcapsular fibrosis and the development of portal hypertension, can result from vinyl chloride exposure. Liver carcinogenicity has been documented for several compounds of this series. Hepatocellular carcinoma developing several years after acute carbon tetrachloride poisoning has been reported.228 In other cases long-term exposure, even without overt acute toxicity, may lead to the same end result. In animal studies, carbon tetrachloride has proved a potent hepatocarcinogen. Chloroform and trichloroethylene have been shown to be hepatocarcinogens in animals.229 Human data are not available; no long-term epidemiological study has been reported, and the possibility exists that instances of hepatocellular carcinoma may have occurred in workers exposed to these substances without recognition of the etiological link between exposure and malignancy. That this is a possibility has been illustrated by the example of vinyl chloride.
Hemangiosarcoma of the liver was identified as one of the possible effects of vinyl chloride exposure in 1974, and many cases have since been reported from various industrial countries. Some of these cases had occurred in prior years, but at that time the link between toxic exposure and malignancy had not been suggested. Only after the etiological association was established, both by the first human cases reported and by results of animal experiments,230 was information on many other cases published. There are indications that vinyl chloride may induce hepatoma as well as hemangiosarcoma. Vinylidene chloride has also come under close scrutiny, since animal data seem to
indicate a carcinogenic effect. Chemical enhancement of viral transformation of Syrian hamster embryo cells has been demonstrated for 1,1,1-trichloroethane, 1,2-dichloroethane, 1,1-dichloroethane, chloromethane, and vinyl chloride; other chlorinated methanes and ethanes did not show such an effect.231 Exposure to halogenated hydrocarbons and other volatile organic compounds in the general environment, from various sources including contaminated water and toxic waste-disposal sites, has received increasing attention during recent years. Methods have been developed to assess individual exposures, with personal monitors to determine ambient air levels and special equipment for the collection of expired-air samples in field settings; gas chromatography-mass spectrometry analysis has permitted adequate detection and has clarified patterns of relationships between breathing-zone concentrations and results of breath analysis. In a study of students in Texas and North Carolina, air was found to be the major source of absorption, except for two trihalomethanes, chloroform and bromodichloromethane. Estimated total daily intake from air and water ranged from 0.3 to 12.6 mg, with 1,1,1-trichloroethane at the highest concentrations.232 Monitoring of airborne levels of mutagens and suspected carcinogens, including linear and cyclic halogenated hydrocarbons, has been undertaken in many urban centers of the United States. Average concentration levels for halogenated hydrocarbons were in the 0 to 1 ppb range. Similar efforts have been undertaken regarding the monitoring of water contamination with halogenated hydrocarbons. Rivers, lakes, and drinking water from various sources have been tested. Analytical methods have been developed for the detection of volatile organic compounds, including chlorinated hydrocarbons, in fish and shellfish.
Regional data from Germany indicate that approximately 25% of the groundwater samples contained more than 1 µg/L of a single solvent, most prominently tri- and tetrachloroethene, 1,1,1-trichloroethane, and dichloromethane, but also chloroform. Since the long-term effects of low-level exposure to halogenated hydrocarbon solvents, especially with regard to carcinogenicity and mutagenicity, are not known, it is necessary to monitor current exposures from all possible sources and to reduce such exposures to a minimum to protect the health of the general population.
Carbon Tetrachloride The production of carbon tetrachloride in the United States has varied from 250 to 400 million kg in recent years. It is currently used mainly in the synthesis of dichlorodifluoromethane (fluorocarbon 12) and trichlorofluoromethane (fluorocarbon 11); a small proportion is still applied as a fumigant and pesticide for certain crops (barley, corn, rice, rye, wheat) and for agricultural facilities, such as grain bins and granaries. Airborne concentrations of carbon tetrachloride in the general environment have been found to vary from 0.05 to 18 ppb. In rural areas, levels of CCl4 were lower, in the range of 80–120 ppt. The photodecomposition of tetrachloroethylene results in the formation of about 8% (by weight) carbon tetrachloride233 and is thought to be possibly responsible for a significant proportion of atmospheric carbon tetrachloride. Carbon tetrachloride has also been found in rivers, lakes, and drinking water. Through 1983, about 95% of all surface water supplies contained less than 0.5 µg/L; in drinking water, detectable levels (>0.2 µg/L) were present in 3% of 945 samples tested. The toxicity of carbon tetrachloride is enhanced by its metabolic transformation in the liver. Induction of mixed-function microsomal enzymes significantly increases CCl4 toxicity, while inhibition of the enzymatic system decreases its toxicity. The induction of mixed-function oxidases can be downregulated by genes that are strongly, rapidly, and transiently induced in most cells on exposure to various stress agents.234 The toxic effect of carbon tetrachloride is due to a metabolite, a free radical (CCl3) that appears to produce peroxidation of the unsaturated lipids of cellular membranes.
Plasma concentrations of the oxidation products 8-hydroxy-2′-deoxyguanosine, malondialdehyde, and isoprostanes and urinary concentrations of isoprostanes were increased in CCl4-treated rats.235 Metabolism of CCl4 to the more toxic metabolite is thought to occur in the endoplasmic
Diseases Associated with Exposure to Chemical Substances
631
reticulum. Cytochrome P450 is destroyed in the process. As the metabolite accumulates, carbon tetrachloride can produce disruption of all elements of the hepatocyte: plasma membrane, endoplasmic reticulum, mitochondria, lysosomes, and nucleus. The consequent cellular destruction is reflected in zonal (centrilobular) necrosis, which can be accompanied by steatosis. The corresponding clinical manifestation is hepatocellular jaundice; in severe cases hepatic failure and death may occur. With lesser exposure, less extensive subclinical pathologic changes may result; nonspecific symptoms, such as fatigability, loss of appetite, and nausea, may be present without jaundice. Food restriction appears to enhance the hepatotoxicity of CCl4.236 Elevated serum enzymes (SGOT, SGPT, LDH), bilirubin, and sometimes alkaline phosphatase, as well as increased bromsulphalein retention, reduction of prothrombin, and increased urinary urobilin excretion, may be found. Studies have found that 47 different genes were either upregulated or downregulated more than two-fold by CCl4 compared with dimethyl formamide, a chemical that does not cause liver cell damage.237 The expression of genes involved in cell death, cell proliferation, metabolism, DNA damage, and fibrogenesis was upregulated following carbon tetrachloride exposure in mice.238 Repeated toxic insults may lead to the development of postnecrotic cirrhosis. The renin-angiotensin system239 and the proinflammatory cytokine tumor necrosis factor-alpha240 have been shown to contribute to carbon tetrachloride-induced hepatic fibrosis.
Metallothionein, a small protein involved in the regulation of zinc homeostasis, was shown to improve the recovery of liver fibrosis in a mouse model.241 Protection against the hepatotoxic effects of carbon tetrachloride by a wide range of antioxidants has been demonstrated, some by inhibition of overexpression of the IL-6 gene and its associated protein242 or through inhibition of the cytochrome P450 system that activates CCl4 into its active metabolite, the trichloromethyl radical.243 CCl4 administration has been shown to cause histopathological damage in the kidney, including glomerular and tubular degeneration, interstitial mononuclear cell infiltration and fibrosis, and vascular congestion in the peritubular blood vessels in the renal cortex. These changes can be prevented by concomitant administration of antioxidants.244 Intraperitoneal administration of CCl4 has been demonstrated to cause lung injury in mice.245 Chronic exposure to carbon tetrachloride has been demonstrated to cause immunosuppression in mice.246,247 Individual variation in the response to CCl4 is now better understood. Carbon tetrachloride hepatotoxicity was found to be much less severe in old rats than in young adult rats, as assessed by serum hepatic enzymes and disappearance of hepatic microsomal cytochrome P450.248 Previous mixed-function microsomal enzyme induction has been shown to enhance CCl4 toxicity through enhanced metabolic transformation to the active intermediate free radical. Alcohols, ketones, and some other chemical compounds enhance carbon tetrachloride toxicity: ethanol, isopropyl alcohol, butanol, acetone, PCBs and PBBs, chlordecone, and trichloroethylene have all been shown to potentiate CCl4 toxicity, mostly by hepatic enzyme induction.
In accidentally exposed workers, chronic ethanol abuse increased the hepatotoxicity of CCl4.249 Mice without the cytochrome P450 enzyme CYP2E1 are resistant to CCl4 hepatotoxicity.250 Carbon tetrachloride metabolites form irreversible covalent bonds to hepatic macromolecules, and binding of radiolabeled CCl4 to DNA also occurs.251 Carbon tetrachloride is considered to be an Ames (Salmonella) assay negative carcinogen, but has been shown to be a bacterial mutagen under special conditions.252 Experimental evidence of carcinogenicity in mice and rats has accumulated. Liver tumors, including hepatocellular carcinomas, developed in various strains of mice, and benign and malignant liver tumors developed in rats.253 The carcinogenicity of CCl4 is thought to derive from its cell proliferative effects.
Prevention and Control The federal OSHA permissible exposure limit (PEL) for carbon tetrachloride is 2 ppm. Replacement by less toxic substances, engineering controls, and enclosed processes are necessary. Respiratory protection should be available for emergency situations. Medical surveillance
632
Environmental Health
must include careful evaluation of liver and kidney function, central and peripheral nervous system function, and the skin. The World Health Organization has adopted a guideline for permissible CCl4 concentration of 0.003 mg/L in drinking water.
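Occupational limits such as the PEL above are typically compared against a time-weighted average (TWA) of measured concentrations over the shift. A minimal sketch of the 8-hour TWA computation; the shift samples are hypothetical and the 2-ppm limit is taken from the text:

```python
# 8-hour time-weighted average (TWA) from (concentration ppm, hours) samples.
# Shift data here are invented for illustration.

def twa_8h(samples):
    """TWA = sum(conc_i * hours_i) / 8 for a full 8-hour shift."""
    total_ppm_hours = sum(ppm * hours for ppm, hours in samples)
    return total_ppm_hours / 8.0

PEL_PPM = 2.0  # carbon tetrachloride limit cited in the text

# e.g., 3 ppm for 2 h near an open process, 1 ppm for the remaining 6 h:
shift = [(3.0, 2.0), (1.0, 6.0)]
exposure = twa_8h(shift)
print(f"8-h TWA = {exposure:.2f} ppm; limit exceeded: {exposure > PEL_PPM}")
```

Note that short excursions above the limit can still yield a compliant TWA, which is why ceiling values and short-term limits are specified separately for some substances.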
Chloroform Chloroform is a colorless, very volatile liquid, with a boiling point of 61°C. Most of the more than 300 million pounds produced annually in the United States is used in the manufacture of fluorocarbons. Chloroform has also been used in cosmetics and numerous products of the pharmaceutical industry; the FDA banned these uses in 1976. Another application of chloroform has been as an insecticidal fumigant for certain crops, including corn, rice, and wheat. Chloroform residues have been detected in cereals for weeks after fumigation. They have also been found in food products, such as dairy produce, meat, oils and fats, fruits, and vegetables, in amounts ranging from 1 to more than 30 mg/kg. The presence of chloroform in the water of rivers and lakes, in ground water, and in sewage treatment plant effluents has been documented at various locations. In drinking water, concentrations of 5–90 µg/L have been detected. Chlorination of water is thought to be responsible for the presence of chloroform in water. Chloroform has toxic effects similar to those of carbon tetrachloride, but fewer severe cases have been reported after industrial exposure. Chloroform undergoes metabolic transformation; one of the metabolites has been shown to be phosgene (COCl2). Metabolism by microsomal cytochrome P450 is obligatory for the development of chloroform-induced hepatic, renal, and nasal toxicity.254 Induction of cytochrome P450 results in increased chloroform hepatotoxicity. MBK (methyl n-butyl ketone) and 2,5-hexanedione, the common metabolite of MBK and n-hexane, enhance chloroform hepatotoxicity by induction of cytochrome P450. Extensive covalent binding to liver and kidney proteins has been found in direct relationship with the extent of hepatic centrilobular and renal proximal tubular necrosis.
Effects on immune function have been reported.255 Neither chloroform nor its metabolites had been thought to be directly DNA reactive, although more recent studies have demonstrated adducts formed by oxidative and reductive metabolites of chloroform in vivo in rats.256 In glutathione-depleted hepatocytes from female rats, chloroform treatment at high doses resulted in a small dose-dependent increase in malondialdehyde-deoxyguanosine adducts and DNA strand breakage.257 A statistically significant increase in the frequency of micronucleated cells was detected in rats given a single p.o. dose of chloroform (3.32 baseline).258 Using gas exposure methodology, chloroform has been shown to be mutagenic in Salmonella.259 The carcinogenicity of chloroform is, nevertheless, still generally thought to be secondary to induced cytolethality and regenerative cell proliferation.260,261 The National Cancer Institute report on the carcinogenic effect of chloroform in animals (hepatocellular carcinomas in mice and renal tumors in rats) draws attention again to the lack of long-term epidemiologic observations. As with other carcinogens, industrial exposure must not exceed the limit of detection, and appropriate engineering methods must be used to protect the health of employees. NIOSH has recommended a ceiling of 2 ppm. Environmental exposure of the general population to chloroform in water and food must also be reduced to a minimum, given the fact that sufficient experimental evidence for the carcinogenicity of chloroform has accumulated.
Trichloroethylene Trichloroethylene (TCE) is a colorless, volatile liquid with a boiling point of 87°C. Trichloroethylene was thought to be much less toxic than carbon tetrachloride and was used, to a large extent, to replace CCl4 in many industrial processes. It is one of the most important chlorinated solvents. Its main applications have been as a dry-cleaning agent and a metal degreaser. In smaller amounts, it is used in extraction of fats and other natural products, in the manufacture of adhesives and industrial paints, and in the chemical industry, mainly in the production of fluorocarbons.
NIOSH has estimated that 3.5 million workers in the United States are occupationally exposed to trichloroethylene; about 100,000 are exposed full time. Trichloroethylene is absorbed rapidly through the respiratory route, and only a relatively small fraction of the amount inhaled is eliminated unchanged in the exhaled air. The metabolic transformation of trichloroethylene has been shown to proceed through formation of a complex with cytochrome P-450; several pathways can then follow: destruction of heme; formation of chloral, which can be reduced to trichloroethanol or oxidized to trichloroacetic acid; formation of trichloroethylene oxide, which then decomposes into carbon monoxide and glyoxylate; and formation of metabolites that bind irreversibly to protein, RNA, and DNA. The relative proportion of these four different metabolic pathways can vary. Species differences in TCE metabolism have been demonstrated. Following a single oral dose of TCE of 1.5–23 mmol/kg, peak blood concentrations of trichloroethylene, trichloroacetate, and trichloroethanol were much greater in mice than in rats.262 Studies with human hepatocytes show interindividual differences in the capacity for cytochrome P450-dependent metabolism of TCE; increased CYP2E1 activity may increase susceptibility to TCE-induced toxicity in the human.263 Dichloroacetate, an inducer of hepatic tumors in mice, has been found to be an important metabolite of TCE in the mouse.264 The levels of protein and DNA adducts vary from species to species and may contribute to species differences found in carcinogenicity bioassays. In some studies in rodents, no direct evidence of formation of liver DNA adducts could be detected. In other studies, covalent binding to liver and kidney RNA and to DNA in kidney, testes, lung, pancreas, and spleen was found.
Chloral hydrate, a metabolite of trichloroethylene, was shown to be mutagenic in vitro and in vivo and induced sister chromatid exchanges and chromosomal aberrations.265 Significant increases in the average frequency of both DNA breaks and micronucleated cells were found in the kidney of rats following a single oral dose of TCE at one-half the LD50.266 Dichlorovinylcysteine, a metabolite of TCE thought to be responsible for the nephrocarcinogenicity of trichloroethylene, has been found to induce DNA double-strand breaks followed by increased poly(ADP-ribosyl)ation of nuclear proteins in cultured renal cells from male Wistar rats.267 In humans, most trichloroethylene is metabolized to trichloroacetic acid and trichloroethanol. The urinary excretion of these metabolites can be used for biologic monitoring of trichloroethylene exposure; trichloroethanol excretion reaches its peak 24 hours after exposure, while trichloroacetic acid reaches its highest urinary level 3 days after exposure. Trichloroethylene has a depressant effect on the CNS; prenarcotic and narcotic symptoms can develop in rapid sequence with high concentrations of vapor. TCE is also an irritant to the skin, conjunctivae, and airways. Acute intentional trichloroethylene exposure was reported to cause neurological and cardiovascular toxicity, with palsies of the third, fifth, and sixth cranial nerves.268 Hepatotoxicity and nephrotoxicity of trichloroethylene are much lower than those of carbon tetrachloride; there are few reports of acute fatal toxic hepatitis and only isolated reports of acute renal failure due to TCE.
Among 70 workers exposed to trichloroethylene, significant differences between the exposed and controls were found for urinary levels of the nephrotoxicity markers N-acetylglucosaminidase and albumin, and for formic acid.269 In TCE-exposed rats, proximal tubular damage with significantly increased concentrations of N-acetyl-beta-D-glucosaminidase and low-molecular-weight proteins in urine was detected.270 Trichloroethylene can enhance the hepatotoxicity of carbon tetrachloride, possibly by potentiating lipid peroxidation. Hepatotoxicity with moderate, long-term exposure has not been found in humans. Severe generalized dermatitis has been reported following TCE exposure; the susceptibility to such skin reactions was influenced by tumor necrosis factor genotype.271 TCE, through its metabolite
trichloroacetaldehyde, promotes T-cell activation and related autoimmunity in mice exposed via drinking water.272 Exposure to concentrations of trichloroethylene in the occupational range can accelerate an autoimmune response and can lead to autoimmune disease in mice. The mechanism of this autoimmunity appears to involve, at least in part, activated CD4+ T cells that then produce inflammatory cytokines.273 Cardiac arrest274 and sudden deaths in young workers exposed to TCE have been reported repeatedly and have been attributed to ventricular fibrillation, through myocardial sensitization to increased levels of epinephrine. Recent studies have demonstrated the capacity of TCE to alter Ca2+ dynamics in cardiomyocytes.275 Chronic effects on the central and peripheral nervous system have been described in TCE-exposed workers.276 Long-term exposure to low concentrations of TCE among people who consumed contaminated drinking water was found to be associated with neurobehavioral deficits.277 TCE has been shown to alter the fatty acid composition of mitochondria in neural cells in the rat.278 Visual evoked potential (VEP) amplitudes were significantly decreased in rabbits exposed to TCE via inhalation compared with VEPs obtained prior to exposure; a significant increase in VEP amplitude followed exposure at 700 ppm.279 Persistent mid-frequency hearing loss has been demonstrated in rats exposed to TCE, noted especially at 8 and 16 kHz.280 Cochlear histopathology revealed a loss of spiral ganglion cells.281 Brainstem auditory evoked potentials were depressed in TCE-exposed rats, with high-frequency hearing loss predominating.282 Dichloroacetylene, a metabolite of TCE, has been reported to cause trigeminal nerve dysfunction in the rat.283 Trichloroethylene has been reported to be a hepatocarcinogen in experimental animals.
An increased incidence of hepatocellular carcinomas was found in mice, but this effect was not observed in rats, possibly due to differential rates of peroxisome proliferation induction. TCE metabolites were shown to bind to DNA and proteins in a dose-dependent manner in mouse liver.284 Kidney adenocarcinomas, testicular Leydig cell tumors, and possibly leukemia were found to be significantly increased in some experimental studies in rats. Epidemiological data have accumulated which suggest that TCE may be carcinogenic in humans. In a study of cancer incidence among 2050 male and 1924 female workers in Finland, those who were exposed to TCE had an increased overall cancer incidence when compared with that of the Finnish general population. Excesses of cancer of the stomach, liver, prostate, and lymphohematopoietic tissues were found.285 Among workers exposed for at least 1 year to TCE, renal cell/urothelial cancers occurred in excess. Occupational exposure to trichloroethylene was reported to be associated with elevated risk for non-Hodgkin’s lymphoma among a large cohort of Danish workers.286 Associations of astrocytic brain tumors with trichloroethylene exposure among workers have been reported.287 A study of cancer mortality and morbidity among 1421 men exposed to TCE found no significant increase in cancer incidence or mortality at any site, except for a doubling of the incidence of nonmelanocytic skin cancer without correlation with exposure categories.288 Trichloroethylene has been shown to induce congenital cardiac malformations in Sprague-Dawley rats when females were given TCE in drinking water before and during pregnancy.289 Residence near trichloroethylene-emitting sites was reported to be associated with an increased risk of congenital heart defects in the offspring of older women.290 Trichloroacetic acid may be the cardiac teratogenic metabolite.
Trichloroethylene had no effect on reproductive function in mice at doses up to one-tenth of the oral LD50.291 TCE exposure does not produce dominant lethal mutations in mice. Trichloroethylene oxide, an intermediate metabolite of TCE formed by mixed-function oxidase metabolism, has been reported to be highly embryotoxic in the Frog Embryo Teratogenesis Assay.292 Evidence of toxic effects of TCE on male reproductive function has accumulated. Inhalation of TCE by male rats caused a significant reduction in absolute testicular weight and altered the activity of marker testicular enzymes associated with spermatogenesis and germ cell maturation, along with marked histopathological changes showing depletion of germ cells and spermatogenic arrest.293 TCE exposure led to impairment of sperm fertilizing ability in mice, attributed to TCE
metabolites, chloral hydrate and trichloroethanol.294 Male rats exposed to TCE in drinking water exhibited a dose-dependent decrease in the ability to fertilize oocytes from untreated females, in the absence of treatment-related changes in combined testes/epididymides weight, sperm concentration, or sperm motility. Oxidative damage to sperm proteins was detected.295 Cytochrome P450-dependent formation of reactive intermediates in the epididymis and efferent ducts and subsequent covalent binding of cellular proteins may be involved in the male reproductive toxicity of TCE in the rat.296 Reduced oocyte fertilizability was found in rats following exposure to trichloroethylene; oocytes from exposed females had a reduced ability to bind sperm plasma membrane proteins.297 Medical surveillance of populations currently exposed or exposed in the past is necessary, with special attention to long-term and potential carcinogenic effects, neurological effects, and liver and kidney function abnormalities. The present federal standard for a permissible level of occupational TCE exposure is 50 ppm. The IARC has classified TCE as probably carcinogenic to humans. A lower exposure limit has been proposed in view of information on carcinogenicity in animals. Exposure of the general population to TCE has received increasing attention. In 1977, the FDA proposed a regulation prohibiting the use of TCE as a food additive; this included the use of TCE in extraction processes in the manufacture of decaffeinated coffee and of spice oleoresins. Trichloroethylene has been found in at least 460 of 1179 hazardous waste sites on the National Priorities List. Federal and state surveys have shown that between 9% and 34% of water supply sources in the United States are contaminated with TCE; the concentrations are, on the average, 1–2 ppb or less. 
Higher levels have been found in the vicinity of toxic waste-disposal sites; under such circumstances concentrations of several hundred up to 27,000 ppb have been detected. In 1989 the EPA established a drinking water standard of 5 ppb. A relationship between trichloroethylene exposure via drinking water during pregnancy and central nervous system defects, neural tube defects, and oral cleft defects was found (odds ratio ≥ 1.50).298 Long-term, low-level exposure to a mixture of common organic groundwater contaminants (benzene, chloroform, phenol, and trichloroethylene) was shown to induce significant increases in hepatocellular proliferation in F344 rats, in the absence of histopathological lesions or an increase in liver enzyme levels in serum.299 Synergy between TCE and CCl4 when administered in drinking water has been demonstrated in the rat.300
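An odds ratio like the one quoted for birth defects (≥ 1.50) is computed from a 2×2 table of exposure versus outcome. A minimal sketch of the calculation; the counts are invented for illustration and are not data from the cited study:

```python
# Odds ratio for a 2x2 exposure/outcome table.
#                 cases   controls
#   exposed         a        b
#   unexposed       c        d
# All counts below are hypothetical.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR = (a/b) / (c/d) = (a*d) / (b*c)."""
    return (a * d) / (b * c)

# e.g., 30 exposed cases, 70 exposed controls, 20 unexposed cases, 80 unexposed controls:
print(f"OR = {odds_ratio(30, 70, 20, 80):.2f}")
```

An odds ratio above 1 indicates that the outcome occurred more often among the exposed; in practice a confidence interval excluding 1 is needed before the association is treated as meaningful.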
Perchloroethylene Perchloroethylene (PCE, tetrachloroethylene) is used in the textile industry for dry cleaning, processing, and finishing. More than 70% of all dry-cleaning operations in the United States use PCE. Another important use is in metal cleaning and degreasing. PCE is also a raw material for the synthesis of fluorocarbons. PCE is similar in most respects to trichloroethylene. Its hepatotoxicity, initially thought to be very low, has been well documented, with abnormal levels of liver enzymes after exposure and persistence of elevated urinary urobilinogen and serum bilirubin in asymptomatic persons. An arrhythmogenic effect of PCE has also been well documented in humans; premature ventricular contractions in young adults were frequent with high blood levels of PCE and disappeared completely after removal from exposure. Alteration of Ca2+ dynamics in cardiomyocytes is a common mechanism of cardiotoxic halogenated hydrocarbons’ action.301 In a collaborative European study, renal effects of PCE exposure in dry cleaners were assessed by a battery of tests, and the findings compared with those of matched controls. Increased high molecular weight protein in urine was frequently associated with tubular alterations, including changes consistent with diffuse abnormalities along the nephron, in workers exposed to low levels of PCE (median 15 ppm). Generalized membrane disturbances were thought to account for the increased release of laminin fragments, fibronectin and glycosaminoglycans, for high molecular weight proteinuria, and
for increased shedding of epithelial membrane components from tubular cells at different locations along the nephron (brush border antigens and Tamm-Horsfall glycoprotein). These findings of early renal changes indicate that dry cleaners need to be monitored for chronic renal changes.302 Deaths due to massive PCE overexposure have occurred, especially in small dry-cleaning establishments. Optic neuritis with residual tunnel vision has been described in an owner of a dry-cleaning shop exposed to PCE.303 Increases in the brain content of an astroglial protein (S-100) and of glutamine synthetase, a biomarker for astroglial hypertrophy, provide biochemical evidence of astroglial proliferation secondary to neuronal damage. Neurotoxic effects of PCE have been demonstrated in rodents. Effects on color vision in humans have been described. Abnormal chromatic responses and reduced contrast sensitivity were found in a two-and-a-half-year-old boy following prenatal exposure to PCE.304 The metabolism of PCE is characterized by a cytochrome P450-catalyzed oxidative reaction that generates tri- and dichloroacetate as metabolites, compounds associated with hepatic toxicity and carcinogenicity. A glutathione conjugation pathway is associated with generation of reactive metabolites selectively in the kidneys and with PCE-induced renal toxicity and carcinogenicity. For biological monitoring of exposure to PCE, measurements of urinary trichloroacetic acid and blood levels of PCE can be used. A blood level of 1 mg/L found 16 hours after exposure corresponds to a TWA exposure of less than 50 ppm. Such an exposure was found to result in no adverse effects on the CNS, liver, or kidney. The excretion of urinary trichloroacetic acid is slow and, therefore, not very useful for biological monitoring. Concentrations of PCE in exhaled air may prove useful after recent exposure.
PCE is an animal carcinogen that produces increased incidence of renal adenomas, adenocarcinomas, mononuclear cell leukemia, and hepatocellular tumors. In chronic inhalation studies, PCE increased the incidence of leukemia in rats and hepatocellular adenomas and carcinomas in mice. Epidemiological studies on workers exposed to PCE are considered inconclusive. Liver cancer and leukemia, of a priori concern because of results in experimental animals, have not been found with increased frequency in dry-cleaning personnel. Rates for esophageal cancer and bladder cancer were elevated by a factor of two. The confounding effect of alcohol and cigarette smoking is to be considered, and other solvents may have played a role in bladder cancer incidence.305 The IARC and the EPA have classified PCE as a category 2B carcinogen. The NIOSH has designated PCE as a carcinogen and has recommended that occupational exposure be limited to the lowest feasible limit. In 1986, the ACGIH recommended a TLV-TWA of 50 ppm. Mutagenicity tests with PCE have been negative. No increase in the rate of chromosomal aberrations or sister chromatid exchange has been found in workers occupationally exposed to PCE. In rats treated by gavage, malformations suggestive of teratogenicity were represented by microphthalmia (TCE, PCE); full-litter resorption and delayed parturition were caused by PCE.306 Contamination of the general environment with PCE has been documented. PCE exposure in 28 dry-cleaning establishments and in 25 homes occupied by dry cleaners in Modena, Italy, showed wide variations in PCE concentrations from establishment to establishment (2.6–221.5 mg/m3, 8-hour TWA personal sampling values). PCE concentrations inside the homes were significantly higher than in 29 houses selected as controls; alveolar air samples collected at home suggest that nonoccupational exposure to PCE exists for family members.307 PCE may be formed in small amounts through chlorination of water.
It has been found in drinking water in concentrations of 0.5–5 µg/L. In trace amounts, it has also been detected in foodstuffs. The EPA has recommended that PCE in drinking water not exceed 0.5 mg/L.
Methyl Chloroform Methyl chloroform (1,1,1-trichloroethane) has recently gained widespread use because of its relatively low toxicity. It is mostly used as
a dry-cleaning agent, vapor degreaser, and aerosol vehicle and in the manufacture of vinylidene chloride. Hepato- and nephrotoxicity are low, but narcotic effects and even fatal respiratory depression have been reported. Cardiac arrhythmias due to myocardial sensitization to epinephrine have sometimes led to fatal outcomes. Methyl chloroform, rather than its metabolites, produces the arrhythmias. Fatal cases of 1,1,1-trichloroethane poisoning have occurred. Intentional inhalation of typewriter correction fluid has resulted in deaths. 1,1,1-Trichloroethane and trichloroethylene are the components of this commercial product. Decrease in the availability of toluene-based glues, because of measures to combat glue sniffing, has resulted in abuse of more accessible solvents, such as 1,1,1-trichloroethane. In subchronic inhalation experiments, 1,1,1-trichloroethane was shown to lead to a decrease in DNA concentration in several brain areas of Mongolian gerbils. These results were interpreted as indicating decreased cell density in sensitive brain areas.308 Technical-grade methyl chloroform often contains vinylidene chloride; elimination of this contaminant seems desirable in view of its potential carcinogenic and mutagenic risk.
Vinyl Trichloride Vinyl trichloride (1,1,2-trichloroethane) is a more potent narcotic than methyl chloroform and a potent hepatotoxic and nephrotoxic agent. Significant increases in hepatocellular carcinomas and adrenal pheochromocytomas have been found in mice, but not in rats. DNA adduct formation in vivo was found to occur to a greater extent in mouse liver than in rat liver.309 The IARC (1987) has classified 1,1,2-trichloroethane in group 3 (not classifiable as to its carcinogenicity in humans). The EPA (1988) has included 1,1,2-trichloroethane in category C (possible human carcinogen). The permissible level for occupational exposure to 1,1,2-trichloroethane is 10 ppm. The EPA (1987) has recommended that the concentration in drinking water not exceed 3 µg/L.
Tetrachloroethane Tetrachloroethane (1,1,2,2-tetrachloroethane) is the most toxic of the chlorinated hydrocarbons. It is an excellent solvent and has been widely used in the past in the airplane industry, from which numerous cases of severe and even fatal toxic liver injury have been reported. This has prompted its replacement by other, less toxic solvents in most industrial processes. Toxic liver damage due to tetrachloroethane is known to have been associated with the development of cirrhosis of the liver. 1,1,2,2-Tetrachloroethane has produced hepatocellular carcinomas in mice. In rats, no significant increase in hepatocellular carcinomas was found. It has been recommended by NIOSH that occupational exposure to 1,1,2,2-tetrachloroethane not exceed 1 ppm.
Vinyl Chloride Vinyl chloride, an unsaturated, asymmetrical chlorinated hydrocarbon, has found widespread use in the production of the polymer polyvinyl chloride. Although its industrial use had expanded in the 1940s and 1950s, it was not until 1973 that its hepatotoxicity and carcinogenicity310 were recognized. The acute narcotic effects had long been known; some rather unusual chronic effects had been reported in the 1960s, their main feature being Raynaud’s syndrome involving the fingers and hands, skin changes described as similar to those of scleroderma, and bone abnormalities with resorption and spontaneous fractures of the distal phalanges. This syndrome was reported under the name vinyl chloride acroosteolysis. In 1973 unusual hepatosplenic changes were described in vinyl chloride-exposed workers in Germany. Soon thereafter, the first cases of hemangiosarcoma of the liver were reported in workers of one vinyl chloride-polyvinyl chloride polymerization plant in the United States,311 and the search for similar cases elsewhere led to the identification of some 90 such otherwise rare tumors in workers of this industry in many industrialized countries.
The nonmalignant pathological changes in the liver are characterized by activation of hepatocytes, smooth endoplasmic reticulum proliferation, activation of sinusoidal cells including lipocytes, nodular hyperplasia of hepatocytes and sinusoidal cells, dilation of sinusoidal spaces, network-like collagen transformation of the sinusoidal walls, moderate portal fibrosis, and subcapsular fibrosis. An increased risk of developing liver fibrosis has been found by ultrasonography in asymptomatic workers who had high exposure to vinyl chloride.311 Portal hypertension has been the prominent feature in some cases of nonmalignant vinyl chloride liver disease; esophageal varices and bleeding have occurred. Fatty degenerative changes in the hepatocytes and focal necrosis have sometimes been observed and are thought to be more pronounced in cases studied shortly after cessation of toxic exposure. The dilation of sinusoidal spaces and the proliferative changes of sinusoidal cells are precursors of the malignant transformation and the appearance of angiosarcomas. While the pathological characteristics of hemangiosarcomas may differ, and several types (sinusoidal, papillar, cavernous, and anaplastic) have been described, the biological characteristics are similar, with rapid growth and a downhill clinical course. No effective therapeutic approach has been identified. Hemangiosarcoma of the liver is a very rare tumor, and therefore the identification of vinyl chloride as the etiologic carcinogen was facilitated. Excess lung cancers, lymphomas, and brain tumors have also been reported in some epidemiological studies. A significant mortality excess in angiosarcoma (15 cases) and cancer of the liver and biliary tract was found in a cohort of 10,173 men who had worked for at least one year in jobs with vinyl chloride exposure.
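Mortality excesses of this kind are expressed as standardized mortality ratios (SMRs): observed deaths divided by the deaths expected if reference-population rates applied to the cohort, multiplied by 100, so an SMR of 180 means 80% more deaths than expected. A minimal sketch of the computation; all counts, person-years, and rates are hypothetical:

```python
# SMR = 100 * observed / expected. Expected deaths come from applying
# reference-population death rates to the cohort's person-years, stratum
# by stratum (e.g., age bands). All numbers below are invented.

def smr(observed: int, strata) -> float:
    """strata: iterable of (person_years, reference_rate_per_person_year)."""
    expected = sum(py * rate for py, rate in strata)
    return 100.0 * observed / expected

# e.g., 30 observed deaths against three hypothetical age strata:
strata = [(40_000, 1e-4), (25_000, 2e-4), (10_000, 1e-3)]
print(f"SMR = {smr(30, strata):.0f}")
```

Stratifying before summing is what makes the comparison "standardized": it prevents an older or younger cohort age structure from masquerading as an exposure effect.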
The SMR for cancer of the brain was 180.312 In experimental animals exposed to vinyl chloride, carcinomas of the liver (hepatomas) also occur; sometimes both hemangiosarcoma and hepatoma have been found in the same animal. Malignant tumors of kidney, lung, and brain have also been found with increased incidence. Vinyl chloride is a transplacental carcinogen in the rat. It is metabolically activated by liver microsomal enzymes to intermediates, beginning with chloroethylene oxide, a potent mutagen, that bind covalently to proteins and nucleic acids. Polymorphisms in cytochrome P450 2E1, aldehyde dehydrogenase 2, GSTT1, and a DNA-repair gene, x-ray repair cross-complementing group 1, were shown to influence the risk of DNA damage elicited by vinyl chloride exposure in workers.313 The toxic active metabolite of vinyl chloride is, according to several groups of investigators, most probably the epoxide chloroethylene oxide.
The electrophilic epoxide may react with cellular macromolecules, including nucleic acids; covalent and noncovalent binding occurs. The vinyl chloride epoxide metabolite appears to represent an optimal balance between stability that allows it to reach the DNA target and reactivity that leads to DNA binding and thus to the carcinogenic effect. Proven sites of alkylation are adenine, cytosine, and guanine moieties of nucleic acids and sulfhydryl groups of protein. Covalent binding with hepatocellular proteins can lead to liver necrosis; it has been observed that after microsomal enzyme induction, high doses of vinyl chloride may result in acute necrosis of the liver. Binding to DNA is considered potentially important for mutagenicity and carcinogenicity. Ethenocytosine (epsilon C) is a highly mutagenic exocyclic DNA lesion induced by the carcinogen vinyl chloride. 3,N4-ethano-2′-deoxycytidine, 3-(hydroxyethyl)-2′-deoxyuridine, and 3,N4-etheno-2′-deoxycytidine are also formed in
Diseases Associated with Exposure to Chemical Substances
cells treated with vinyl chloride.314 1,N6-ethenodeoxyadenosine (edA) and 3,N4-ethenodeoxycytidine (edC) are two mutagenic adducts associated with exposure to vinyl chloride. Four cyclic etheno adducts—1,N6-ethenodeoxyadenine (epsilon A), 3,N4-ethenocytosine (epsilon C), N2,3-ethenoguanine (N2,3-epsilon G), and 1,N2-ethenoguanine (1,N2-epsilon G)—have been reported from human cells and tissues treated with the vinyl chloride metabolite chloroacetaldehyde.315 N2,3-ethenoguanine (epsilon G), a cyclic base derivative in DNA, was shown to specifically induce G→A transitions during DNA replication in Escherichia coli.316 Under normal circumstances, altered DNA molecules are eliminated through physiological enzymatic systems.317 With defective function of repair mechanisms, cell populations modified by the toxic metabolite develop with increasing metabolic autonomy and eventual malignant growth. Repair enzyme concentration has been demonstrated to be lower in the target cell population for angiosarcoma, the nonparenchymal cells, than in hepatocytes.318 Higher adduct concentrations in young rats may contribute to their greater susceptibility to VC-induced hepatic angiosarcoma as well as their particular susceptibility to hepatocellular carcinoma.319 Cytogenetic studies in workers have indicated that vinyl chloride produces chromosomal aberrations.320 The level of DNA single-strand breaks and other measures of DNA damage were increased in the peripheral lymphocytes of workers with high exposures to vinyl chloride.321,322 A decrease in exposure levels in the workplace was associated with a fall in the frequency of sister chromatid exchanges found in the lymphocytes of active workers.323 The p53 tumor suppressor gene is often mutated in a wide variety of cancers, including angiosarcoma of the liver.
Anti-p53 antibodies have been detected in sera of patients with a variety of cancers and can predate diagnosis of certain tumors such as angiosarcoma, making possible the identification of individuals at high cancer risk among vinyl chloride-exposed workers.324 A significant association between cumulative vinyl chloride exposure and anti-p53 expression has been reported among workers,325 as well as a strong dose-response relationship between Asp13 p21 and mutant p53 protein levels and VC exposure in workers.326 Activation of the Ki-ras 2 gene by GC→AT transition at the second base of codon 13 in human liver angiosarcoma associated with exposure to vinyl chloride has recently been reported. Experiments in rats exposed to vinyl chloride and developing liver angiosarcomas and hepatocellular carcinomas showed other sites of mutations affecting the Ha-ras gene in the hepatocellular carcinomas and the N-ras A gene in angiosarcomas. The nature of the ras gene affected by a given carcinogen depends on host factors specific to cell types. The molecular pathways leading to tumors in humans and rats are different, and differences are detected within a given species between different cell types.327 Mutations of ras oncogenes and expression of their encoded p21 protein products are thought to have an important role in carcinogenesis. In five patients with angiosarcoma of the liver and heavy past exposure to vinyl chloride, four were found to have the mutation (Asp 13 c-Ki-ras) and to express the corresponding mutant protein in their tumor tissue and serum. In 45 VC-exposed workers with no evidence of liver neoplasia, 49% were positive for the mutant p21 in their serum.
In 28 age-, gender-, and race-matched, unexposed controls, results were all negative.328 Prolonged VC exposure at 1100 ppm did not adversely affect embryo-fetal development or reproductive capability over two generations in rats.329 Active research on the metabolic transformations of vinyl chloride has also resulted in a better understanding of the metabolic transformations of other chlorinated hydrocarbons, identification of reactive intermediate products (epoxides), and structural reasons for higher or lower reactivity. Tetrachloroethylene, 1,2-trans-dichloroethylene, and 1,2-cis-dichloroethylene have been found not to be mutagenic, while trichloroethylene, 1,1-dichloroethylene, and vinyl chloride are mutagenic. The respective epoxides have been found to be symmetrical and relatively stable for the first group but asymmetrical, unstable, and highly reactive for the second.
Environmental Health
The federal standard for exposure to vinyl chloride is 1 ppm averaged over an 8-hour period; a ceiling of 5 ppm, averaged over any period of no more than 15 minutes, must not be exceeded. Air-supplied respirators should be available and are required when exposure levels exceed these limits.
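The 8-hour figure behind the 1 ppm standard is a duration-weighted mean of interval air samples. A minimal sketch, with an invented shift profile (the concentrations and durations are hypothetical, not regulatory guidance):

```python
# 8-hour time-weighted average (TWA) from interval air samples.
# Sample values below are hypothetical and purely illustrative.

def eight_hour_twa(samples):
    """samples: iterable of (concentration_ppm, duration_hours) covering 8 h."""
    return sum(c * t for c, t in samples) / 8.0

shift = [(0.4, 4.0), (1.6, 2.0), (0.2, 2.0)]  # invented shift profile
twa = eight_hour_twa(shift)  # (0.4*4 + 1.6*2 + 0.2*2) / 8 = 0.65 ppm
print(twa <= 1.0)  # True: within the 1 ppm 8-hour standard
# Note: compliance also requires that no 15-minute period average above
# the 5 ppm ceiling, which samples this coarse cannot verify.
```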
Vinyl Bromide Vinyl bromide is used in the chemical, plastic, rubber, and leather industries. Experimental studies have shown that vinyl bromide has produced angiosarcoma of the liver, lymph node angiosarcoma, lymphosarcoma, and bronchioloalveolar carcinoma in rats exposed to 50 and 25 ppm by inhalation. Mutagenicity of vinyl bromide has also been reported.330,331 DNA damage following vinyl bromide exposure was found in the stomach, liver, kidney, bladder, lung, and brain of mice.332 On the basis of these data, NIOSH and OSHA jointly recommended that vinyl bromide be considered a potential carcinogen for humans and be controlled in a way similar to vinyl chloride, with a recommended exposure standard of 1 ppm.
Vinylidene Chloride Vinylidene chloride (1,1-dichloroethylene or DCE), like other vinyl halides, is used mainly in the plastics industry; it is easily polymerized and copolymerized to form plastic materials and resins with valuable properties. An increased incidence of necrosis of the liver in mice and chronic renal inflammation in rats exposed to vinylidene chloride by gavage has been reported.333 DCE undergoes biotransformation by NADPH-cytochrome P450 to several reactive species which conjugate with glutathione (GSH). Further activation of these conjugates occurs in renal tubular cells.334 DCE requires cytochrome P450-catalyzed bioactivation to electrophilic metabolites (1,1-dichloroethylene oxide, 2-chloroacetyl chloride, and 2,2-dichloroacetaldehyde) to exert toxic effects. Conjugation of GSH with 1,1-dichloroethylene oxide leads to formation of mono- and diglutathione adducts. Species differences were detected; microsomes from mice were sixfold more active than those from rats. The epoxide is the major metabolite of DCE that is responsible for GSH depletion, suggesting that it may be involved in hepatotoxicity of DCE; mice are more susceptible than rats.335 DCE-mediated mitochondrial dysfunction preceded the onset of hepatotoxicity.336 DCE exposure in mice elicits lung toxicity that selectively targets bronchiolar Clara cells. The toxicity is mediated by its metabolites. The cytochrome P450 enzymes CYP2E1 and CYP2F2 catalyze the bioactivation of DCE to the epoxide in murine lung.337 An immunosuppressive effect in sera of mice treated with 1,1-dichloroethylene was found, with increased levels of tumor necrosis factor-alpha and IL-6 thought to contribute to this effect.338 In experimental studies, vinylidene chloride has been found to be carcinogenic in rats and mice: angiosarcoma of the liver, adenocarcinoma of the kidney, and other malignant tumors have been produced in
inhalation experiments. In a recent study, DCE caused renal tumors in male mice after inhalation. Renal tumors were not observed in female mice or in rats of either sex. Kidney microsomes from male mice biotransformed DCE to chloroacetic acid. Cytochrome P450 2E1 was detected in male mouse kidney microsomes; the expression of this protein was regulated by testosterone and correlated well with the ability to oxidize p-nitrophenol, a specific substrate for cytochrome P450 2E1. In kidney microsomes from rats of both sexes and in six samples of human kidney (male donors), no p-nitrophenol oxidase was detected. The data suggest that cytochrome P450 2E1 or a P450 enzyme with very similar molecular weight and substrate specificities is expressed only in male mouse kidney and bioactivates DCE.339 Workers occupationally exposed to vinylidene chloride have not been shown to have excessively high cancer mortality; nevertheless, the possibility of a carcinogenic risk for humans exposed to vinylidene chloride cannot yet be excluded. Vinylidene chloride has been shown to be mutagenic in several assay systems. Embryotoxicity and fetal malformations have been observed in rats and rabbits after inhalation exposure to maternally toxic concentrations. In studies using a chick model, significantly more embryonic deaths occurred in the DCE-treated group than in controls.340 Vinylidene chloride has not been shown to produce chromosomal aberrations or sister chromatid exchanges. In some experiments, vinylidene chloride has induced unscheduled DNA synthesis in rat hepatocytes and has alkylated DNA and induced DNA repair in mouse liver and kidney; the validity of these results has been questioned. The IARC has concluded that no evaluation of the carcinogenic risk of vinylidene chloride in humans could be made. The recommended exposure standard for vinylidene chloride is 1 ppm.
Ethylene Dichloride Ethylene dichloride (1,2-dichloroethane, ClCH2-CH2Cl) is a colorless liquid at room temperature; with a boiling temperature of 83.4°C, it is highly volatile. Ethylene dichloride has a rapidly increasing volume of annual production; approximately 10–13 billion pounds were manufactured in the United States in recent years. Most of it (approximately 75%) is used in the production of vinyl chloride; it has also found applications in the manufacture of trichloroethylene, PCE, vinylidene chloride, ethylene amines, and ethylene glycol. It is a frequent constituent of antiknock mixtures of leaded gasoline and a component of fumigant insecticides. Other uses are as an extraction solvent, as a dispersant for nylon, viscose rayon, styrene-butadiene rubber, and other plastics, as a degreasing agent, as a component of paint and varnish removers, and in adhesives, soaps, and scouring compounds. The main route of absorption is by inhalation; absorption through the skin is also possible. Ethylene dichloride is metabolized by cytochrome P450; chloroacetaldehyde and chloroacetic acid are the resulting metabolites. Microsomal cytochrome P450 and nuclear cytochrome P450 have been shown to metabolize ethylene dichloride. The possibility that the metabolic transformation of ethylene dichloride by nuclear cytochrome P450 may in part mediate its mutagenicity and carcinogenicity has been considered. Covalent alkylation of DNA by ethylene dichloride has been demonstrated. DNA damage following ethylene dichloride exposure was found in the stomach, liver, kidney, bladder, lung, brain, and bone marrow of mice.341 Narcotic and irritant effects occur during or soon after acute overexposure. Studies of workers with prolonged, unprotected exposure to ethylene dichloride found lower neuropsychological functioning in the domains of processing speed, attention, cognitive flexibility, motor coordination and speed, verbal memory, verbal fluency, and visuospatial abilities.
These workers also showed disturbed mood and impaired vision.342 Hepatotoxic and nephrotoxic effects become apparent several hours after acute exposure and can be severe, with centrilobular hepatic necrosis, jaundice, or proximal renal convoluted tubular necrosis and anuria; fatalities with high exposure levels have been reported.2,3 Chronic ethanol consumption increased 1,2-dichloroethane liver toxicity in rats.343 A hemorrhagic tendency in acute ethylene dichloride poisoning has also been reported; disseminated intravascular coagulopathy and hyperfibrinolysis have been found in several cases. Experiments on rats and mice fed ethylene
dichloride in corn oil revealed a statistically significant excess of malignant and benign tumors. Glutathione conjugation is important in the metabolic transformation of 1,2-dichloroethane. The metabolic pathways for 1,2-dichloroethane biotransformation are saturable; saturation occurs earlier after ingestion than after inhalation. Such differences in metabolic transformation have been thought to explain differences in results of experimental carcinogenicity studies, positive after oral administration but negative in inhalation experiments. An increased frequency of sister chromatid exchanges has been found in the lymphocytes of workers with exposure to low levels of ethylene dichloride.344 A statistically significant increase in sister chromatid exchanges was detected in bone marrow cells of mice after acute 1,2-dichloroethane exposure. Ethylene dichloride has been found to be mutagenic in a variety of bacterial systems and to enhance the viral transformation of Syrian hamster embryo cells. Testing for teratogenic effects and dominant lethal effects in mice was negative.345 Environmental surveys conducted by the EPA have detected 1,2-dichloroethane in groundwater sources in the vicinity of contaminated sites in concentrations of about 175 ppb (geometric mean). In a survey of 14 river basins in heavily industrialized areas in the United States, 1,2-dichloroethane was present in 53% of more than 200 surface water samples. In drinking water, the compound has been detected at concentrations ranging from 1 to 64 µg/L.346 The OSHA PEL for occupational exposure is 1 ppm. The MCL for drinking water has been regulated by the EPA at 0.005 mg/L. The EPA has classified 1,2-dichloroethane for its carcinogenic potential in group 2B.
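Comparing the detected drinking-water levels with the MCL requires only a unit conversion, since the MCL is stated in mg/L and the measurements in µg/L. A hypothetical sketch of that check:

```python
# Hypothetical compliance check against the EPA MCL for 1,2-dichloroethane
# cited above (0.005 mg/L). The tested values come from the text's reported
# drinking-water range of 1-64 ug/L.

MCL_MG_PER_L = 0.005
MCL_UG_PER_L = MCL_MG_PER_L * 1000.0  # 0.005 mg/L = 5 ug/L

def exceeds_mcl(conc_ug_per_l: float) -> bool:
    """True if a measured concentration (in ug/L) is above the MCL."""
    return conc_ug_per_l > MCL_UG_PER_L

print(exceeds_mcl(1.0))   # False: low end of the reported range
print(exceeds_mcl(64.0))  # True: high end exceeds the 5 ug/L limit
```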
Ethylene Dibromide Ethylene dibromide (1,2-dibromoethane, BrCH2CH2Br) is a colorless liquid with a boiling point of 131°C. One of the most important uses is in antiknock compounds added to gasoline to prevent the deposition of lead on the engine cylinder. It has also been used as a fumigant for grains, fruit, and vegetables, as a soil fumigant, as a special solvent, and in organic synthesis. EDB has an irritant effect on the skin, with possible development of erythema, blistering, and ulceration after prolonged contact. It is also a potent eye and respiratory mucosal irritant. Systemic effects include CNS depression; after accidental ingestion, hepatocellular necrosis and renal proximal tubular epithelium necrosis have been reported. Cases of fatal EDB poisoning have been reported. In experimental studies, hepatotoxicity and nephrotoxicity have been found at exposure levels of 50 ppm in all animals tested (rats, guinea pigs, rabbits, and monkeys). EDB has been shown to produce significant decreases in cytochrome P450 levels in liver, kidney, testes, lung, and small intestine microsomes. Hepatic microsomal mixed-function oxidase activities decreased in parallel with the cytochrome P450 content. Dibromoalkane cytotoxicity is due to lipid peroxidation as well as cytochrome P450-dependent formation of toxic bromoaldehydic metabolites which can bind with cellular macromolecules. Dibromoethane-GSH conjugates also contribute to EDB cytotoxicity.347 The liver toxicity of several halogen compound mixtures has been studied. Carbon tetrachloride (CT) and trichlorobromomethane (TCBM) undergo dehalogenation via the P450-dependent enzyme system. 1,2-dichloroethane (DCE) and 1,2-dibromoethane (EDB) are mainly conjugated with the cytosolic GSH by means of GSH S-transferase. The mixture of TCBM and EDB shows a more than additive action on lipid peroxidation and liver necrosis.
TCBM, like CT, reduces hepatic levels of GSH-S-transferase, increasing the amount of EDB available for P450-dependent metabolism, with the production of toxic metabolites. The toxicity of mixtures of halogen compounds can be partly predicted. When their metabolism is quite different, a synergistic toxicity can occur if one pathway interferes with a detoxification mechanism of the other compound.348 EDB exerts a toxic effect on spermatogenesis in bulls, rams, and rats, with oligospermia and degenerative changes in spermatozoa. Effects of EDB on spermatogenesis have been studied in 46 men employed in papaya fumigation; the highest measured exposure was 262 ppb, and the geometric mean was 88 ppb. When compared with a nonexposed reference group, there were statistically significant
decreases in sperm count, in percentage of viable and mobile sperm, and in the proportion of sperm with specific morphologic abnormalities.349 A teratogenic effect is suspected; in rats and mice an increased incidence of CNS and skeletal malformations was found to be related to EDB exposure. GSH S-transferase occurs abundantly in the human fetal liver. 1,2-dibromoethane is metabolized with high efficiency. Significant bioactivation with a possibility of only limited detoxification via cytochrome P450-dependent oxidation suggests that the human fetus may be at greater risk from EDB toxicity than the adult.350 GSH S-transferase (GST) from human fetal liver was purified and at least five isozymes of GST were found. All the isozymes of GST in human fetal liver metabolized EDB. Bioactivation of EDB by the GST isozyme P-3 resulted in toxicity to cultured rat embryos. The central nervous system, optic and olfactory system, and the hind limb were most significantly affected. A dose-dependent increase of renal malformations was detected in EDB-treated chick embryos.351 EDB may be classified as a suspected developmental toxicant in humans.352 The embryotoxic effects of EDB bioactivation, mediated by purified rat liver GST, were investigated using rat embryos in culture. EDB activation caused a significant reduction in general development structures. Most affected were the central nervous system and the olfactory system.353 The carcinogenicity of EDB has been well documented in several bioassays on rats and mice exposed through various routes, including inhalation of 10 and 40 ppm. An increased incidence of various malignant tumors occurred in one or both sexes of one or both species tested. Among these were tumors of the mammary gland and nasal cavity, alveolar bronchiolar carcinomas, hemangiosarcomas, and tumors of the adrenal cortex and kidney.
An epidemiological study354 of a relatively small group of EDB-exposed workers suggests an increase in total mortality and total deaths from malignant diseases in the population with higher exposure. Mutagenic effects of EDB have been detected in several test systems. EDB is considered to be a bifunctional alkylating agent because of the two replaceable bromine atoms. It may form covalent bonds with cellular constituents; the reaction with DNA is thought to be especially important, with possible covalent cross-links between DNA strands. Irreversible binding of EDB to DNA and RNA has been demonstrated. A complex between reduced glutathione and EDB seems to be implicated in the covalent binding of EDB to DNA; this is unusual in that glutathione seems to play a role in the bioactivation of the carcinogen, as opposed to its more typical detoxification reactions. The major DNA adduct (greater than 95% of the total) resulting from the bioactivation of EDB by conjugation with GSH is S-(2-[N7-guanyl]ethyl)GSH. Other adducts are present at much lower levels.355 At least two pathways for 1,2-dibromoethane-induced mutagenicity, dependent on the DNA repair enzyme alkyltransferase, via reaction of EDB with alkyltransferase at its cysteine acceptor site, have been demonstrated.356 Evidence for deregulation by EDB of the genes controlling cell cycling has been reported.357 Environmental exposure of the general population to EDB has recently received increased attention. Several uses of EDB—as an antiknock additive in leaded gasoline, for soil fumigation, fumigation of citrus and other fruit to prevent insect infestation, and treatment of grain-milling equipment—have resulted in contamination of air, water, fruit, grain, and derived products. EDB has been found in groundwater in areas where it had been extensively used for soil fumigation. In the air of major cities, levels of EDB ranging from 16 to 59 ppt have been detected.
Citrus fruits that had been fumigated were found to contain amounts of EDB of several hundred parts per billion; in lychee fruit (imported to Japan from Taiwan) levels varying from 0.14 to 2.18 ppm were detected.358 An important and rather widespread contamination problem is that of EDB residues in commercial flour; levels from 8 ppb to 4 ppm were detected. In some ready-to-eat food products levels up to 260 ppb were found. In 1983, the EPA introduced regulations to discontinue the use of EDB for soil fumigation, grain fumigation, treatment of grain-milling equipment, and postharvest fruit fumigation. In 1984, the EPA recommended guidelines for acceptable levels of the chemical in food for human consumption, based on samplings of grain stocks
and packaged foods in markets. It was recommended that EDB concentrations in grain intended for human consumption not exceed 90 ppb; for flour the residue level should not be higher than 150 ppb, and for ready-to-eat products it should not be more than 30 ppb. These guidelines have been critically reviewed and requests for even lower acceptable levels have been made. The proposed OSHA-TWA standard for EDB exposure is 100 ppb. NIOSH has recommended 45 ppb.
Methyl Chloride and Methyl Bromide Methyl chloride and methyl bromide are gases at normal temperatures. Methyl chloride (CH3Cl) is used in the chemical industry as a chlorinating agent but mainly as a methylating agent; it is also used in oil refineries for the extraction of greases and resins, as a solvent in the synthetic rubber industry, and as an expanding agent in the production of polystyrene foam. In recent years, methyl chloride has been used primarily in the production of methyl silicone polymers and resins and organic lead additives for gasoline. Methyl bromide (CH3Br) is used as a fumigant for soil, grain, warehouses, and ships. Other important uses are as a methylating agent, a herbicide, a fire-extinguishing agent, a degreaser, in the extraction of oils, and as a solvent in aniline dye manufacture. Currently most of the methyl bromide produced in the United States is used to manufacture pesticides. Methyl chloride and methyl bromide are irritants; exposure to high concentrations may result in toxic pulmonary edema. They are potent depressants of the CNS; with high exposure, toxic encephalopathy with visual disturbances, tremor, delirium, convulsions, and coma may occur and may be fatal. Inhibition of creatine kinase activities in the brain appears to be a sensitive indicator of methyl bromide intoxication and may be related to the genesis of its neurotoxicity.359 Permanent neurological deficits have been reported after recovery from acute toxic encephalopathy caused by methyl chloride and methyl bromide. Hepatotoxic and nephrotoxic effects may also occur. Fatal poisonings after accidental exposure to high concentrations of methyl bromide, used as a fumigant, have occurred. In California in recent years, the most frequent cause of methyl bromide-related fatalities has been unauthorized entry into structures under fumigation.
Toxic acute pulmonary edema, with hemorrhage, has been the most frequently reported lesion in such cases.360 Systemic methyl bromide poisoning developed in nine greenhouse workers after acute inhalational exposure on two consecutive days. Measurements of CH3Br at the site within hours after the accident suggested that exposure on the second day may have been in excess of 200 ppm (800 mg/m3). Two patients needed intensive care for several weeks because of severe myoclonus and tonic-clonic generalized convulsions which could be suppressed effectively only by thiopental. Prior subchronic exposure to methyl bromide and high serum bromide (Br−) concentrations are likely to have contributed to the severity of the symptoms.361 Methyl bromide nonfatal poisoning in a young woman due to leakage of old fire extinguishers was characterized by major action and intention myoclonus on the day following exposure, associated with an initial plasma bromide level of 202 mg/L, 40-fold above the commonly accepted tolerance limit, which decreased slowly to normal levels within 2 months.362 A case of early peripheral neuropathy, confirmed with nerve conduction velocity testing that demonstrated axonal neuropathy, and central nervous system toxicity as a result of acute predominantly dermal exposure to methyl bromide has been reported.363 Worker and community notification of the hazard whenever fumigation takes place is absolutely necessary.364 Methyl chloride, methyl bromide, and methyl iodide are alkylating agents; all three are direct mutagens in in vitro tests. Monohalogenated methanes (methyl chloride, methyl bromide, and methyl iodide) produced the DNA adducts 7-methylguanine and O6-methylguanine in exposed rats.365 [14C]-methyl bromide was administered to rats orally or by inhalation. DNA adducts were detected in the liver, lung, and stomach. [14C]-3-methyladenine, [14C]-7-methylguanine, and [14C]-O6-methylguanine were identified.
A systemic DNA-alkylating potential of methyl bromide was thus demonstrated.366
Sister chromatid exchange (SCE) was determined in the lymphocytes of methyl bromide fumigators as an additional biomonitoring parameter. The determination of blood protein adducts can be applied for evaluation of environmental exposure.367 A hitherto unknown GST in human erythrocytes displays polymorphism: three quarters of the population (conjugators) possess, whereas one quarter (nonconjugators) lack, this specific activity. Individuals with nonfunctional GSTT1 entirely lack the capacity to metabolize methyl chloride.368 A standard method for identification of conjugators and nonconjugators with the use of methyl bromide and gas chromatography (headspace technique) has been developed. Methyl bromide, ethylene oxide, and dichloromethane (methylene chloride) were incubated in vitro with whole blood samples of conjugators and nonconjugators. All three substances led to a marked increase of SCEs in the lymphocytes of nonconjugators. A protective effect of the GST activity in human erythrocytes against the cytogenetic toxicity of these chemicals in vitro is thus confirmed.369 The formation of formaldehyde from dichloromethane (methylene chloride) is influenced by the polymorphism of GST theta, in the same way as the metabolism of methyl bromide, methyl chloride, methyl iodide, and ethylene oxide. Carcinogenicity of dichloromethane in long-term inhalation exposure of rodents has been attributed to metabolism of the compound via the GST-dependent pathway. Extrapolation of the results to humans for risk assessment should consider the newly discovered polymorphic enzyme activity of GST theta.370 Methyl chloride has produced a teratogenic effect (heart malformation) in offspring of pregnant mice exposed by inhalation. Methyl chloride and methyl bromide have been shown to produce testicular degeneration. The hemoglobin adduct methyl cysteine has been proposed as a biological indicator of methyl bromide exposure.
The NIOSH recommends that methyl chloride and methyl bromide be considered as potential occupational carcinogens. The IARC (1986) found the evidence of carcinogenicity in humans and animals inconclusive. The 1987 TLV for methyl chloride is 50 ppm; for methyl bromide, it is 5 ppm. The U.S. Clean Air Act mandated a phase-out of the import and manufacture of methyl bromide because of its effects on the ozone layer of the atmosphere, beginning in 2001 and culminating with a complete ban, except for quarantine and certain pre-shipment uses and exempted critical uses, in January 2005.
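The ppm figures above can be put on a mass basis with the standard vapor conversion mg/m3 = ppm × molecular weight / 24.45, using the molar volume of an ideal gas at 25°C and 1 atm. A sketch follows; the molecular weights are rounded textbook values supplied here, not taken from the text.

```python
# Convert a vapor concentration from ppm (v/v) to mg/m3 at 25 C and 1 atm.
# Molar volume ~24.45 L/mol; molecular weights are rounded approximations.

MOLAR_VOLUME_L = 24.45  # L/mol at 25 C, 1 atm

def ppm_to_mg_m3(ppm: float, mol_weight: float) -> float:
    return ppm * mol_weight / MOLAR_VOLUME_L

MW_CH3CL = 50.49  # methyl chloride, approximate
MW_CH3BR = 94.94  # methyl bromide, approximate

print(ppm_to_mg_m3(50, MW_CH3CL))   # 50 ppm TLV -> roughly 103 mg/m3
print(ppm_to_mg_m3(5, MW_CH3BR))    # 5 ppm TLV -> roughly 19 mg/m3
print(ppm_to_mg_m3(200, MW_CH3BR))  # roughly 777 mg/m3, consistent with
                                    # the "200 ppm (800 mg/m3)" figure
                                    # quoted for the greenhouse incident
```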
Chloroprene Chloroprene (2-chloro-1,3-butadiene, H2C=CCl–CH=CH2) is a colorless, flammable liquid with a low boiling point of 59.4°C. The major use is as a monomer in the manufacture of the synthetic rubber neoprene, since it can polymerize spontaneously at room temperature. The annual neoprene production in the United States is approximately 400 million pounds. Inhalation of vapor and skin absorption are the routes of absorption. It is metabolized to the monoepoxides 2-chloro-2-ethenyloxirane and (1-chloroethenyl)oxirane, a demonstrated mutagen, together with electrophilic chlorinated aldehydes and ketones.371 The epoxide intermediate of chloroprene may cause DNA damage in K-ras and H-ras proto-oncogenes of B6C3F1 mice following inhalation exposure. Mutational activation of these genes may be a critical event in the pathogenesis of forestomach neoplasms induced in the B6C3F1 mouse.372 Chloroprene is an irritant of skin and mucosa (eyes, respiratory tract); it is a potent CNS depressant and has definite liver and kidney toxicity. In rats, exposure to 80 ppm chloroprene or higher concentrations caused degeneration and metaplasia of the olfactory epithelium, and exposure to 200 ppm caused anemia, hepatocellular necrosis, and reduced sperm motility.373 Hair loss has also been associated with chloroprene exposure in humans. An excess of lung cancer and skin cancer in workers has been reported by Russian investigators; the mean age of chloroprene-exposed workers with cancer was significantly lower than that in other groups.374 A more recent retrospective cohort mortality study of chloroprene-exposed workers found an elevated risk of liver cancer.375 The methodological limitations of these studies preclude firm conclusions on the carcinogenicity of chloroprene. A cohort study of
chloroprene production and polymerization workers376 gave negative results with regard to lung cancer but raised the possibility of an increased incidence of gastrointestinal cancer and hematopoietic and lymphatic cancer. Methodological difficulties of this latter study make it impossible to reach definitive conclusions. Chloroprene is classified in Group 2B (possibly carcinogenic to humans) by the International Agency for Research on Cancer on the basis of sufficient evidence for carcinogenicity at multiple organ sites in both mice and rats exposed by inhalation. The results of the studies from China, Armenia, and Russia suggest an excess risk of liver cancer.377 Based on animal experimental studies, chloroprene is listed in the National Toxicology Program’s Report on Carcinogens as reasonably anticipated to be a human carcinogen. An immunosuppressive effect of chloroprene is suspected. Chloroprene produces degenerative changes in male reproductive organs. Reproductive capacity in male mice and rats was affected after inhalation of chloroprene in concentrations of 12–150 ppm. Reductions in the number and mobility of sperm and testicular atrophy have been observed in rats after chloroprene exposure. In experiments on rats and mice, it was also found to be embryotoxic. Although chloroprene has been shown to be mutagenic in several test systems, the genotoxicity of 2-chloro-1,3-butadiene is controversial. A recent mutagenicity study detected a mutagenic effect that increased linearly with the age of the chloroprene sample. Major byproducts of chloroprene, probably responsible for the mutagenic properties of aged chloroprene, were identified as cyclic chloroprene dimers.378 Chromosome aberrations have been reported in bone marrow cells of exposed rats. In several groups of chloroprene-exposed workers, an increased incidence of chromosome aberrations in peripheral blood lymphocytes was noted. Prevention.
Occupational exposure to chloroprene should be limited to a maximum concentration of 1 ppm. Protective equipment to exclude the possibility of skin absorption, safety goggles, and air-supplied respirators are necessary to minimize exposure. Medical surveillance must be aimed not only at detection of short-term toxic and irritant effects but also at long-term effects on the CNS, liver and kidney function, reproductive abnormalities, and cancer risk.
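Exposure limits in this chapter are quoted sometimes in ppm and sometimes in mg/m3 (as for trimellitic anhydride below). For a vapor, the two are related through the molar volume of an ideal gas, 24.45 L/mol at 25°C and 1 atm. A minimal sketch of this standard conversion follows; the molecular-weight values are illustrative and not taken from this text:

```python
# Convert an airborne exposure limit from ppm (by volume) to mg/m3 for a vapor,
# using the ideal-gas molar volume at 25 degrees C and 1 atm.
MOLAR_VOLUME_25C = 24.45  # liters per mole

def ppm_to_mg_per_m3(ppm: float, mol_weight: float) -> float:
    """mg/m3 = ppm x molecular weight (g/mol) / molar volume (L/mol)."""
    return ppm * mol_weight / MOLAR_VOLUME_25C

# Illustrative example: chloroprene (C4H5Cl, MW about 88.5 g/mol)
# at the 1-ppm occupational limit cited above.
limit_mg_m3 = ppm_to_mg_per_m3(1.0, 88.5)
print(round(limit_mg_m3, 2))  # about 3.62 mg/m3
```

The same function applies to the other vapor limits in this chapter, for example the 200-ppm methanol standard (MW 32.04 g/mol, roughly 262 mg/m3).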
Fluorocarbons
Fluorocarbons are hydrocarbons in which hydrogen atoms have been substituted by fluorine, often with additional chlorine or bromine. Most of them are nonflammable gases, and some are liquids at room temperature. Contact with open flame or heated metallic objects results in decomposition products, some of which are highly irritant, especially with chlorofluorocarbons (hydrogen fluoride, hydrogen chloride, phosgene, chlorine). The fluorocarbons are used as refrigerants (Freon is one of the most widely used trademarks), as aerosol propellants, in fire extinguishers, for degreasing of electronic equipment, in the production of polymers, and as expanding agents in the manufacture of plastic foam. The use of perfluorocarbons emulsified in water as blood substitutes (artificial blood) is an area of intensive investigation. Exposure to fluorocarbons in chemical plant operations and production is generally low but highly variable; high exposures can occur in areas without proper ventilation, during tank farm operations, tank and drum filling, and cylinder packing and shipping. Exposure to fluorocarbons can also occur during manufacturing, servicing, or leakage of refrigeration equipment. The use of fluorocarbons as solvents in the electrical and electronic industry can generate higher exposures, especially when open containers are used. Emission of fluorocarbons from plastic foams, where they have been entrapped during foam blowing, is another source of exposure. Use of fluorocarbons in sterilization procedures for reusable medical equipment, mostly with ethylene oxide, does not usually generate major exposures. Fluorocarbons, especially trichlorofluoromethane (FC 11), have been used in the administration of certain
Diseases Associated with Exposure to Chemical Substances
drugs by inhalation, mostly sympathomimetics and corticosteroids, for the treatment of asthma. Fluorocarbons with the widest use are the following:
Bromotrifluoromethane
Dibromodifluoromethane
Dichlorodifluoromethane
Dichloromonofluoromethane
Dichlorotetrafluoroethane
Fluorotrichloromethane
1,1,1,2-Tetrachloro-2,2-difluoroethane
1,1,2,2-Tetrachloro-1,2-difluoroethane
1,1,2-Trichloro-1,2,2-trifluoroethane
Bromochlorotrifluoroethane
Chlorodifluoromethane
Chloropentafluoroethane
Chlorotrifluoroethylene
Chlorotrifluoromethane
Difluoroethylene
Fluoroethylene
Hexafluoropropylene
Octafluorocyclobutane
Tetrafluoroethylene
Irritative effects of fluorocarbons are mild; after exposure to decomposition products, such effects may be severe. A bronchoconstrictive effect after inhalation of fluorocarbons has been demonstrated to occur at concentrations higher than 1000 ppm. Four cases of toxic pneumonitis due to direct inhalation of industrial fluorocarbon used as a waterproofing spray were reported.379 Narcotic effects occur at high concentrations. Liver and kidney toxicity have been reported with fluoroalkenes, thought to be more toxic than fluoroalkanes. Fatalities have been reported after acute overexposure to high concentrations of fluorocarbons used as refrigerants; in some of these cases, simultaneous exposure to methyl chloride or to phosgene (a decomposition product of fluorocarbons) made it difficult to assess the contribution of fluorocarbon exposure to the lethal outcome. Perfluorooctane sulfonate is a degradation product of sulfonyl-based fluorochemicals that are used extensively in industrial and household applications. It is environmentally persistent, and humans and wildlife are exposed to this class of compounds from several sources.
Toxicity tests in rodents have raised concerns about its potential developmental, reproductive, and systemic effects, and exposure to perfluorooctane sulfonate has been shown to affect the neuroendocrine system in rats380 and to increase the permeability of cell and mitochondrial membranes.381 Inhibitory effects of perfluorooctane sulfonate on gap junctional intercellular communication, necessary for normal cell growth and function, have been demonstrated in rats.382 A wide range of birth defects, including cleft palate, anasarca, ventricular septal defect, and enlargement of the right atrium, were seen in both rats and mice exposed to this compound,383 but not in rabbits. A related compound, perfluorooctanoic acid, a potent peroxisome proliferator reported to increase the incidence of hepatic, pancreatic, and Leydig cell adenomas in rats, caused significant atrophy of the thymus and spleen in mice.384 A significant increase in the number of deaths from bronchial asthma was observed in Great Britain and found to coincide in time with the introduction and use of bronchodilator aerosols with fluorocarbon propellants. After withdrawal of these products from over-the-counter sale, the number of deaths from bronchial asthma decreased significantly.385 Numerous deaths due to inhalation of fluorocarbon FC 11 (trichlorofluoromethane) have occurred. Addiction to fluorocarbon propellants in bronchodilator aerosols has been reported.386 Experimental evidence from studies on various animal species, documenting the arrhythmogenic properties of fluorocarbons, has established that sudden deaths due to cardiac arrhythmias, most probably through a mechanism similar to that identified for many chlorinated
Environmental Health
hydrocarbons, can occur with exposure to fluorocarbons. Trifluoroiodomethane (CF3I) and 1,1,2,2,3,3,3-heptafluoro-1-iodopropane (C3F7I) were shown to be cardiac sensitizers to adrenaline in dogs.387 Mutagenicity tests were conducted on a series of fluorocarbons in two in vitro systems. Chlorodifluoromethane (FC 22), chlorofluoromethane (FC 31), chlorodifluoroethane (FC 142b), and trifluoroethane (FC 143a) gave positive results in one or two of the tests. Potential carcinogenicity was considered, and limited carcinogenicity bioassays have indicated that FC 31 and FC 133 were potent carcinogens.388 Tetrafluoroethylene, used in the production of Teflon, was shown to have hepatocarcinogenic activity in mice after 2 years of exposure.389 Perfluorooctane sulfonate and perfluorooctanoic acid have been shown to have adverse developmental effects in rodents.390 Fluorocarbons released into the atmosphere accumulate at high altitudes, where they may interact with and degrade the ozone layer, allowing greater amounts of ultraviolet light to penetrate to the earth’s surface. The problem of ozone layer depletion is thought to be more specifically related to the fully halogenated, nonhydrogenated fluorocarbons, which undergo photodissociation in the upper atmosphere and initiate free radical reactions with ozone. Regulatory action has been taken to eliminate the use of fluorocarbon aerosol products in the United States. Other aspects of fluorocarbon use are still under consideration.
ALCOHOLS AND GLYCOLS
Alcohols are characterized by the substitution of one hydrogen atom of hydrocarbons by a hydroxyl (–OH) group; glycols are compounds with two such hydroxyl groups. Both are used extensively as solvents. Under usual industrial exposure conditions, alcohols and glycols do not represent major acute health hazards, mostly because their volatility is much lower than that of most other solvents. Cases of severe poisoning with methyl alcohol or ethylene glycol are usually caused by accidental ingestion. They have an irritative effect on mucous membranes; the narcotic effect is much less prominent than with the corresponding hydrocarbons or halogenated hydrocarbons. Glycols are liquids with low volatility; the low vapor pressure prevents significant air concentrations, except when the compounds are heated or sprayed. Inhalation or skin contact does not usually result in absorption of toxic amounts; accidental ingestion accounts for the majority of poisoning cases. Glycols are used mainly as solvents and, because of their low freezing point, in antifreeze mixtures.
Methyl Alcohol
Methyl alcohol (methanol, wood alcohol, CH3OH) is used in the chemical industry in the manufacture of formaldehyde, methacrylates, ethylene glycol, and a variety of other compounds such as plastics, celluloid, and photographic film.2,3 It is also used as a solvent for lacquers, adhesives, industrial coatings, inks, and dyes and in paint and varnish removers. It is used in antifreeze mixtures, as an additive to gasoline, and as an antidetonant additive for aircraft fuel. In an experimental study, 26 human volunteers were exposed for 4 hours to 200 ppm methanol vapor in a randomized, double-blind design using a whole-body exposure chamber. No significant differences in serum formate concentrations between exposed and control groups were detected. It was concluded that at 200 ppm, methanol exposure does not contribute substantially to endogenous formate quantities.391 Methyl alcohol is a moderate irritant and a depressant of the CNS. Systemic toxicity due to inhalation and skin absorption of methyl alcohol has been reported at very high exposure levels, when large amounts were being handled in enclosed spaces. Accidental ingestion of methyl alcohol can be fatal; after a latency period of several hours (longer with smaller amounts), neurological abnormalities, visual disturbances, nausea, vomiting, abdominal pain, metabolic acidosis, and coma may occur in rapid sequence.
Toxic retrobulbar optic neuritis is a specific effect of methyl alcohol and may result in permanent blindness due to optic atrophy. In a rat model, functional changes preceded structural alterations. Histopathological changes were most pronounced in the outer retina, with evidence of inner segment swelling, photoreceptor mitochondrial disruption, and the appearance of fragmented photoreceptor nuclei in the outer nuclear layer. The nature of both the functional and structural alterations observed is consistent with formate-induced inhibition of mitochondrial energy production, resulting in photoreceptor dysfunction and pathology.392 Bilateral putaminal necrosis is often recognized radiologically in severe methanol toxicity. A case of bilateral putaminal and cerebellar cortical lesions demonstrable on CT and MRI has been reported.393 Putaminal and white matter necrosis and hemorrhage were found at autopsy in a case of fatal methanol poisoning.394 Nephrotoxic effects and toxic pancreatitis have also been reported. Methyl alcohol is slowly metabolized to formaldehyde and formic acid; the extent to which these metabolites are responsible for the specific toxic effects has not been completely clarified. Acute renal injury has been described following acute methanol poisoning.395 Formate metabolism to CO2 is governed by tissue H4folate and 10-formyltetrahydrofolate dehydrogenase (10-FTHFDH) levels. 10-FTHFDH was found to be present in rat retina, optic nerve, and brain. It was concluded that, in rats, target tissues possess the capacity to metabolize formate to CO2 and may be protected from formate toxicity through this folate-dependent system.396 Non-primate laboratory animals do not develop the characteristic human methanol toxicities even after a lethal dose.397 In humans, methanol causes systemic and ocular toxicity after acute exposure. The folate-reduced (FR) rat is an excellent animal model that mimics characteristic human methanol toxicity.
Blood methanol levels were not significantly different in FR rats compared with folate-sufficient rats. FR rats, however, had elevated blood and vitreous humor formate and abnormal electroretinograms at 48 hours postdose, suggesting that formate is the toxic metabolite in methanol-induced retinal toxicity.398 Methanol exposure during the growth spurt period in rats adversely affects the developing brain, the effect being more pronounced in folate-deficient rats as compared to rats with adequate levels of folate in the diet, suggesting a possible role of folic acid in methanol-induced neurotoxicity.399 In long-term exposure studies with rats, methyl alcohol was demonstrated to be carcinogenic for various organs and tissues.400 Methanol is believed to be teratogenic on the basis of rodent studies; the teratogenicity may result from the enzymatic biotransformation of methanol to formaldehyde and formic acid, with increased biological reactivity and toxicity. A protective role for the antioxidant glutathione (GSH) has been described.401 Formaldehyde is the most embryotoxic methanol metabolite and elicits the entire spectrum of lesions produced by methanol.402 Cell death plays a prominent role in methanol-induced dysmorphogenesis.403 Methyl groups from (14)C-methanol are incorporated into mouse embryo DNA and protein. Methanol exposure may increase genomic methylation under certain conditions, which could lead to altered gene expression.404 The management of methanol poisoning includes standard supportive care, the correction of metabolic acidosis, the administration of folinic acid, the provision of an antidote to inhibit the metabolism of methanol to formate, and selective hemodialysis to correct severe metabolic abnormalities and to enhance methanol and formate elimination. Although both ethanol and fomepizole are effective, fomepizole is the preferred antidote for methanol poisoning.405
Prevention.
The federal standard for methanol exposure is 200 ppm.3 Warning signs must be posted wherever methyl alcohol is stored or can be present in the working environment, with emphasis on the extreme danger of blindness if swallowed. Employees’ education and training must be thorough. Medical surveillance with attention to visual, neurological, hepatic, and renal functions is necessary. Formic acid in urine and methyl alcohol in blood can be used for the assessment of excessive exposure.
Allyl Alcohol
Allyl alcohol (H2C = CHCH2OH) is a liquid with a boiling point of 96.0°C. It is used in the manufacture of allyl esters and of monomers for synthetic resins and plastics, in the synthesis of a variety of organic compounds, in the pharmaceutical industry, and as a herbicide and fungicide. Absorption occurs through inhalation and percutaneous penetration. Allyl alcohol is a potent irritant for the eyes, the respiratory system, and the skin. Muscle pain underlying the site of skin absorption, lacrimation, photophobia, blurring of vision, and corneal lesions have been reported.2 Allyl alcohol exhibits periportal necrotic hepatotoxicity in rats, due to its bioactivation to acrolein and subsequent protein sulfhydryl loss and lipid peroxidation.406 This effect is enhanced by exposure to bacterial endotoxins and by caffeine, via increased bioactivation of allyl alcohol through the P450 mixed-function oxidase system.407 The marked irritant properties of allyl alcohol probably prevent exposures in humans high enough to produce the liver and kidney toxicity found in experimental animals but not reported in humans. Prevention. The federal standard (PEL) for allyl alcohol is 2 ppm. Protective equipment is very important, given the possible skin absorption; the material of choice is neoprene.
Isopropyl Alcohol
Isopropyl alcohol (CH3CHOHCH3, isopropanol) is a colorless liquid with a boiling point of 82.3°C and high volatility. It is used in the production of acetone and isopropyl derivatives. Other important uses are as a solvent for oils, synthetic resins, plastics, perfumes, dyes, and nitrocellulose lacquers and in the extraction of sulfonic acid from petroleum products. Isopropyl alcohol has many applications in the pharmaceutical industry, in liniments, skin lotions, mouthwashes, cosmetics, rubbing alcohol, etc. Isopropyl alcohol absorption takes place mainly by inhalation, although skin absorption is also possible. The irritant effects are slight; dermatitis has seldom been reported. A fatal accidental burn of a neonate by isopropyl alcohol has, however, been reported.408 Depressant (narcotic) effects have been observed in cases of accidental or intentional isopropyl alcohol ingestion. Coma and renal tubular degenerative changes have occasionally resulted in death. Acetone has been found in the exhaled air and in urine; isopropyl alcohol concentrations in blood can be measured. In the early 1940s, an unusual clustering of neoplasms of the respiratory tract—malignant tumors of the paranasal sinuses, lung, and larynx—was reported in workers in isopropyl alcohol manufacturing. It was thought that the carcinogenic compounds were associated with the “strong acid process” and especially with heavier hydrocarbon oils (tars) containing polyaromatic compounds. In the more modern direct catalytic hydration (weak acid process) of propylene, the isopropyl oil seems to contain compounds with lower molecular weight, although the precise composition is not known. Attempts to identify the carcinogen(s) in experimental studies have not been successful,3 and the question of a carcinogen present in the manufacture of isopropyl alcohol is still open. Prevention. The federal standard for a permissible level of isopropyl alcohol exposure is at present 400 ppm.
Ethylene Chlorhydrin
Ethylene chlorhydrin (CH2ClCH2OH)—synonyms: glycol chlorohydrin, 2-chloroethanol, β-chloroethyl alcohol—is a very toxic compound.2 It is used in the synthesis of ethylene glycol and ethylene oxide and in a variety of other reactions, especially when the hydroxyethyl group (–CH2CH2OH) has to be incorporated into molecules. Other uses are as a special solvent for cellulose acetate and esters, resins, and waxes, and for the separation of butadiene from
hydrocarbon mixtures. Agricultural applications include seed treatment and application to accelerate the sprouting of potatoes. Ethylene chlorhydrin is absorbed through inhalation and readily through the skin. It is an irritant to the eyes, airways, and skin. Exposure to high concentrations may result in toxic pulmonary edema. Systemic effects are marked: depression of the CNS, hypotension, visual disturbances, delirium, coma and convulsions, hepatotoxic and nephrotoxic effects with nausea, vomiting, hematuria, and proteinuria. Death may occur as a result of pulmonary edema or cerebral edema. Even cases with slight or moderate initial symptoms may be fatal. Prevention. The federal standard for the limit of permissible exposure is 5 ppm. The use of ethylene chlorhydrin other than in enclosed systems should be completely eliminated. Protective clothing should use materials impervious to this compound; rubber is readily penetrated and has to be excluded. Protective clothing must be changed regularly so that no deterioration will jeopardize its effectiveness.
Ethylene Glycol
Ethylene glycol (OHCH2CH2OH) is a viscous colorless liquid, used mainly in antifreeze and hydraulic fluids but also in the manufacture of glycol esters, resins, and other derivatives and as a solvent. CNS depression, nausea, vomiting, abdominal pain, respiratory failure, and renal failure with oliguria, proteinuria, and oxalate crystals in the urinary sediment are manifestations of ethylene glycol poisoning.2 In a case of acute ethylene glycol poisoning, the CT scan obtained three days after ethylene glycol ingestion showed low-density areas in the basal ganglia, thalami, midbrain, and upper pons. The neurologic findings were consistent with the abnormalities seen on CT.409 In addition, hepatic damage due to calcium oxalate deposition has been reported.410,411 Calcium oxalate monohydrate crystals, and not the oxalate ion, are responsible for the membrane damage and cell death observed in normal human and rat proximal tubular (PT) cells; calcium oxalate monohydrate accumulation in the kidney appears to be responsible for the renal toxicity associated with ethylene glycol exposure.412 Glycolic acid is the metabolite found in the highest concentrations in blood; serum and urine levels of glycolic acid correlate with clinical symptoms.413 The enzyme responsible for the first step of this metabolism is alcohol dehydrogenase. It is estimated that 50 deaths occur annually in the United States from accidental ingestion of ethylene glycol. Treatment of ethylene glycol poisoning consists of emergent stabilization, correction of metabolic acidosis, inhibition of further metabolism, and enhancement of the elimination of both the unmetabolized parent compound and its metabolites. The prevention of ethylene glycol metabolism is accomplished by the use of antidotes that inhibit alcohol dehydrogenase. Historically, this has been done with intoxicating doses of ethanol. A recent alternative to ethanol therapy is fomepizole, or 4-methylpyrazole.
Like ethanol, fomepizole inhibits alcohol dehydrogenase; however, it does so without producing serious adverse effects.414 Hemodialysis has been successfully used in the treatment of accidental ethylene glycol poisoning by ingestion. The therapeutic use of 4-methylpyrazole (fomepizole), an alcohol dehydrogenase inhibitor, has been recommended for the management of accidental or suicidal ethylene glycol poisoning.
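In suspected toxic alcohol ingestion (ethylene glycol or methanol), clinicians commonly screen with the serum osmolal gap alongside the anion gap. The sketch below uses the standard calculated-osmolarity formula (2 × Na + glucose/18 + BUN/2.8 + ethanol/4.6, conventional US units), which is general clinical practice rather than a method described in this chapter; the laboratory values are illustrative only:

```python
def calculated_osmolarity(na_meq_l: float, glucose_mg_dl: float,
                          bun_mg_dl: float, ethanol_mg_dl: float = 0.0) -> float:
    """Standard estimate of serum osmolarity (mOsm/L) from routine chemistries."""
    return 2 * na_meq_l + glucose_mg_dl / 18 + bun_mg_dl / 2.8 + ethanol_mg_dl / 4.6

def osmolal_gap(measured_mosm_kg: float, na_meq_l: float, glucose_mg_dl: float,
                bun_mg_dl: float, ethanol_mg_dl: float = 0.0) -> float:
    """Measured minus calculated osmolarity. A gap above roughly 10 suggests
    unmeasured osmoles such as ethylene glycol or methanol; note the gap
    narrows late in the course, once the parent alcohol has been metabolized."""
    return measured_mosm_kg - calculated_osmolarity(
        na_meq_l, glucose_mg_dl, bun_mg_dl, ethanol_mg_dl)

# Illustrative values: Na 140 mEq/L, glucose 90 mg/dL, BUN 14 mg/dL,
# measured osmolality 312 mOsm/kg -> calculated 290, gap 22 (suspicious).
print(round(osmolal_gap(312, 140, 90, 14), 1))  # 22.0
```

An elevated gap supports, but does not by itself confirm, the diagnosis; definitive confirmation rests on direct serum ethylene glycol or methanol measurement.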
Prevention. No federal standard for ethylene glycol exposure has been established. The American Conference of Governmental Industrial Hygienists (ACGIH) recommended a TLV of 100 ppm. The most important preventive action is to alert employees to the extreme hazard of ingestion. Adequate respiratory protection should be provided wherever the compound is heated or sprayed. Increasing use of glycols as deicing agents for aircraft and airfield runways has generated concern about surface water contamination that may result from runoff. Degradation of ethylene glycol in river water is complete within 3–7 days (depending on temperature); degradation of diethylene glycol is somewhat slower. At low temperatures (8°C or less), both glycols degrade at a minimal rate.415
Diethylene Glycol
Diethylene glycol is similar in its effects to ethylene glycol; its importance is mainly historical, since more than 100 deaths occurred in the United States when it was used in the manufacture of an elixir of sulfanilamide. Fatal cases were caused by renal proximal tubular necrosis and renal failure.2 Diethylene glycol is teratogenic in rodents.416
ETHYLENE GLYCOL ETHERS AND DERIVATIVES
The most important alkyl glycol derivatives are ethylene glycol monoethyl ether (ethoxyethanol, cellosolve, EGEE) (CH3CH2OCH2CH2OH) and its acetate; ethylene glycol monomethyl ether (methoxyethanol, methyl cellosolve, EGME) (CH3OCH2CH2OH) and its acetate; and ethylene glycol monobutyl ether (butoxyethanol, butyl cellosolve) (CH3CH2CH2CH2OCH2CH2OH).2 These compounds are colorless liquids with wide applications as solvents for resins, lacquers, paints, varnishes, coatings (including epoxy resin coatings), dyes, inks, adhesives, and plastics. They are also used in hydraulic fluids, as anti-icing additives, in brake fluids, and in aviation fuels. EGME is used in the formulation of adhesives, detergents, pesticides, cosmetics, and pharmaceuticals. Inhalation, transcutaneous absorption, and gastrointestinal absorption are all possible. These derivatives are irritants for the mucous membranes and skin. The acetates are more potent irritants. Corneal clouding, usually transitory, may occur. Acute overexposure may result in marked narcotic effects and encephalopathy; pulmonary edema and severe kidney and liver toxicity are also possible. At lower levels of exposure, CNS effects result in such symptoms as fatigue, headache, tremor, slurred speech, gait abnormalities, blurred vision, and personality changes. Anemia is another possible effect; macrocytosis and immature forms of leukocytes can be found. Exposure to ethylene glycol monomethyl ether has also been associated with pancytopenia. In animal experiments, butyl cellosolve has been shown to produce hemolytic anemia. Exposure to EGME and to EGEE has been shown to result in adverse reproductive effects in mice, rats, and rabbits.
These effects include testicular atrophy, degenerative testicular changes,417 abnormal sperm head morphology, and infertility in males, effects shown to be antagonized by concomitant exposure to toluene and xylene.418 The toxic effect of EGME on the male reproductive system may be strongly associated with the disproportion of testicular germ cells, with a depletion of haploid cells and a disproportionate ratio of diploid and tetraploid cells.419 Glycol ethers produce hematotoxicity and testicular toxicity in animals, which are dependent on both the alkyl chain length and the animal species used. Levels of spermatogenesis-involved proteins were increased by ethylene glycol monomethyl ether, including GST, testis-specific heat shock protein 70-2, glyceraldehyde 3-phosphate dehydrogenase, and phosphatidylethanolamine-binding protein.420 An increased frequency of spontaneous abortions, disturbed menstrual cycles, and subfertility has been demonstrated in EGME-exposed women working in the semiconductor industry.421 Ethylene glycol monobutyl ether (2-butoxyethanol) ingestion causes metabolic acidosis, hemolysis, hepatorenal dysfunction, and coma.422 Butoxyacetic acid, formed from ethylene glycol monobutyl ether as a result of dehydrogenase activity, is a potent hemolysin. 2-Methoxyethanol (ME) produces testicular lesions in rats, characterized primarily by degeneration of spermatocytes undergoing meiotic division, with minimal or no hemolytic changes. In guinea pigs, a single dose or multiple (3 daily) doses of 200 mg ME/kg were given, and animals were examined 4 days after the start of treatment. Spermatocyte degeneration was observed in stage III/IV tubules but was much less severe than in rats.423 The stage-specific effect of a single oral dose (500 mg/kg body weight) of ethylene glycol monomethyl ether was characterized during one cycle of the seminiferous epithelium in rats.
Maximum peritubular membrane damage and germinal epithelium distortion were observed in stages IX–XII. Cell death occurred during conversion of zygotene to pachytene spermatocytes
(stage XIII) and between dividing spermatocytes and step I spermatids (stage late XIII–XIV).424 Exposure of pregnant animals resulted in increased rates of embryonic deaths and in various congenital malformations. The acetate esters of ethylene glycol monomethyl ether and of EGEE have produced similar adverse male reproductive effects. Ethylene glycol monomethyl ether is metabolized to the active compound methoxyacetic acid, which readily crosses the placenta and impairs fetal development. Pregnant mice were exposed to EGME from gestational days 10–17, and offspring were examined on gestational day 18. Significant thymic atrophy and cellular depletion were found in EGME-exposed fetal mice, with decreased CD4+8+ thymocytes and increased percentages of CD4−8− thymocytes. In addition, fetal liver prolymphocytes were also sensitive targets of EGME exposure.425 Methoxyacetic acid (MAA), a teratogenic toxin, is the major metabolite of EGME. Electron paramagnetic resonance (EPR) spin-labeling techniques were used to gain insight into the mechanism of MAA toxicity. The results suggested that MAA may lead to teratological toxicity by interacting with certain protein components, that is, transport proteins, cytoskeleton proteins, or neurotransmitter receptors.426 MAA was shown to induce sister chromatid exchanges in human peripheral blood cells.427 A cross-sectional study of 97 workers exposed to ethylene glycol monomethyl ether, with semen analysis in 15, did not reveal abnormalities other than possibly smaller testicular size.428 The occurrence of adverse male reproductive effects in humans cannot be excluded on the basis of this study. Ethylene glycol monomethyl ether, EGEE, ethylene glycol n-butyl ether, and their aldehyde and acid derivatives were tested for mutagenicity with the Ames test, with and without the rat S9 mix.
Ethylene glycol n-butyl ether and the aldehyde metabolite of ethylene glycol monomethyl ether, methoxyacetaldehyde, were found to be mutagenic in Salmonella typhimurium strain TA97a, with and without S9 mix.429 Administration of EGME and its metabolite methoxyacetaldehyde (MALD), in concentrations of 35–2500 mg/kg for EGME and 25–1000 mg/kg for MALD, did not cause any chromosomal aberrations in mice after acute or subchronic exposure by the oral route.430 Prevention. Federal standards for PELs are EGEE, 200 ppm; EGEE acetate, 100 ppm; ethylene glycol monomethyl ether (EGME), 25 ppm; EGME acetate, 25 ppm; and ethylene glycol monobutyl ether, 50 ppm. The ACGIH has recommended a TLV of 25 ppm for EGME and 100 ppm for EGEE; this latter TLV was lowered in 1981 to 50 ppm. In 1982 it was proposed that the TWA exposure limits for both these compounds and their acetates be reduced to 5 ppm in view of the testicular effects observed in recent animal studies.
ORGANIC ACIDS, ANHYDRIDES, LACTONES, AND AMIDES
These compounds have numerous industrial applications. Their common clinical characteristic is an irritant effect on eyes, nose, throat, and the respiratory tract. Skin irritation can be severe, and some of the acids (formic, acetic, oxalic, and others) can produce chemical burns. Accidental eye penetration may result in severe corneal injury and consequent opacities. Toxic pulmonary edema can occur after acute overexposure to high concentrations.
Phthalic Anhydride
Phthalic anhydride (C6H4(CO)2O) is a crystalline, needlelike white solid. It is used in the manufacture of benzoic and phthalic acids, as a plasticizer for vinyl resins, alkyd and polyester resins, in the production of diethyl and dimethyl phthalate, phenolphthalein, phthalamide, methyl aniline, and other compounds. Phthalic anhydride as dust, fumes, or vapor is a potent irritant for the eyes, respiratory system, and skin; with prolonged skin contact, chemical burns are possible. Repeated exposure may result in
chronic industrial bronchitis. Phthalic anhydride is also a potent sensitizing substance: occupational asthma can be severe, and hypersensitivity pneumonitis has been reported. Phthalic and maleic anhydrides stimulated vigorous expression of IL-5, IL-10, and IL-13 but relatively low levels of the type 1 cytokines interferon-gamma and IL-12 following topical application to BALB/c strain mice.431 Prolonged topical exposure of mice to phthalic and maleic anhydrides in each case resulted in the development of a predominantly Th2-type cytokine secretion phenotype, consistent with the ability of these materials to provoke asthma and respiratory allergy through a type 2 (possibly IgE-mediated) mechanism.432 Skin sensitization may result in eczematiform dermatitis. Prevention. The federal standard for phthalic anhydride is a TLV of 1 ppm. Enclosure of technological processes where phthalic anhydride is used and protective clothing, including gloves and goggles, are necessary; respiratory protection must be available. Periodic examinations should focus on possible sensitization and chronic effects, such as bronchitis and dermatitis.
Maleic Anhydride
Maleic anhydride (C4H2O3, the cyclic anhydride of maleic acid) is used mainly in the production of alkyd and polyester resins; it has also found application in siccatives. Maleic anhydride can produce severe chemical burns of the skin and eyes. It is also a sensitizing substance and can lead to clinical manifestations similar to those described for phthalic anhydride. The 1987 TLV is 0.25 ppm.
Trimellitic Anhydride
Trimellitic anhydride (1,2,4-benzenetricarboxylic acid, cyclic 1,2-anhydride, C9H4O5) is used as a curing agent for epoxy resins and other resins, in vinyl plasticizers, polyesters, dyes and pigments, paints and coatings, agricultural chemicals, surface-active compounds, pharmaceuticals, etc. Chemical pneumonitis has been reported after an epoxy resin containing trimellitic anhydride was sprayed on heated pipes. Respiratory irritation after exposure to high concentrations of trimellitic anhydride was reported in workers engaged in the synthesis of this compound. It was also found that in some cases sensitization occurs after variable periods following onset of exposure (sometimes years); allergic rhinitis, occupational asthma, and hypersensitivity pneumonitis can be manifestations of sensitization. Trimellitic anhydride as the etiologic agent in cases of sensitization was confirmed by inhalation challenge tests.433 Human leukocyte antigen (HLA) class II alleles were demonstrated to be risk factors contributing to individual susceptibility in workers.434 Trimellitic anhydride inhalation challenge of sensitized rats caused challenge concentration-related allergic airway inflammation, asthma-like changes in breathing pattern, and increased nonspecific airway responsiveness.435 Dermal sensitization in mice is associated with increased IgE levels in serum and bronchoalveolar lavage fluid, with increased cell numbers and neutrophils after intratracheal challenge.436 Trimellitic anhydride has been shown to activate rat lymph nodes, with secretion of type 2 cytokines, including the expression of IL-5 and IL-13, which in the presence of only very low levels of IL-4 may provide an IgE-independent mechanism for the development of chemical respiratory allergy.437
Prevention.
NIOSH recommended in 1978 that trimellitic anhydride be considered an extremely toxic agent, since it can produce severe irritation of the respiratory tract, including pulmonary edema and chemical pneumonitis; sensitization, with occupational asthma or hypersensitivity pneumonitis, can occur at lower levels. Guidelines for engineering controls and protective equipment have been outlined by NIOSH.3 The current TLV is 0.04 mg/m3.
Diseases Associated with Exposure to Chemical Substances
Beta-Propiolactone Beta-propiolactone (C3H4O2, a four-membered cyclic ester) is a colorless liquid with important applications in the synthesis of acrylate plastics; it is also used as a disinfectant and as a sterilizing agent against viruses. It is easily absorbed through the skin; inhalation is also important. Beta-propiolactone is a very potent irritant. In animal experiments it has been found to produce hepatocellular necrosis, renal tubular necrosis, convulsions, and circulatory collapse. Beta-propiolactone is a direct-acting alkylating agent and forms DNA adducts. It is mutagenic in a wide variety of in vitro and in vivo systems, in both somatic and germ cells.438 In several animal studies it has also been shown to be carcinogenic; skin cancer, hepatoma, and gastric cancer have been induced. Reports on systemic or carcinogenic effects in humans are not available. Beta-propiolactone is included in the federal standard for carcinogens; no exposure should be allowed to occur. The IARC has classified beta-propiolactone as possibly carcinogenic to humans (Group 2B).439 Protective equipment designed to prevent all skin contact or inhalation is necessary; this includes full-body protective clothing and full-face air-supplied respirators. Showers at the end of the shift are absolutely necessary. The 1987 TLV is 0.05 ppm.
N,N-Dimethylformamide N,N-dimethylformamide, HCON(CH3)2, is a colorless liquid with a boiling point of 153°C. It is miscible with water and organic solvents at 25°C. It has excellent solvent properties for numerous organic compounds and is used in processes where solvents with low volatility are necessary. Its major applications are in the manufacture of synthetic fibers and resins, mainly polyacrylic fibers and butadiene. It is absorbed through inhalation and through the skin and is irritating to the eyes, mucous membranes, and skin.2 Adverse effects of absorption include loss of appetite, nausea, vomiting, abdominal pain, hepatomegaly, and other indications of liver injury. Clusters of testicular germ cell tumors have been reported among airplane manufacturing employees and tannery workers.440,441 An increased incidence of cancer (oropharyngeal and melanoma) was reported in a cohort of formamide-exposed workers.442 DMF exposure was not associated with SCE frequency in peripheral lymphocytes of exposed workers,443 but occupational exposures to acrylonitrile and DMF induced increases in the frequencies of chromosomal aberrations and sister chromatid exchanges in peripheral blood lymphocytes.444 Inhalation exposure to DMF increased the incidence of hepatocellular adenomas and carcinomas in rats and the incidence of hepatocellular adenomas, carcinomas, and hepatoblastomas in mice.445 Results of in vitro and in vivo genotoxicity assays have been consistently negative.446 Dimethylformamide administered to mice and rats 5 days/week for 18 months did not produce effects on the estrous cycle. Compound-related morphological changes were observed only in the liver. 
Centrilobular hepatocellular hypertrophy and centrilobular single-cell necrosis were found in rats and mice.447 Dimethylformamide exposure did not result in adverse effects on semen or the menstrual cycle in cynomolgus monkeys exposed for 13 weeks to concentrations up to 500 ppm.448 DMF caused cranial and sternebral skeletal malformations in mice449 and rats.450 N,N-dimethylformamide is metabolized by microsomal cytochrome P450, mainly to N-(hydroxymethyl)-N-methylformamide, which further breaks down to N-methylformamide. Measurement of N-methylcarbamoylated hemoglobin in blood is a useful biomarker of exposure to N,N-dimethylformamide in health-risk assessment.451 Measurement of the urinary excretion of N-acetyl-S-(N-methylcarbamoyl)cysteine (AMCC) and N-methylformamide (NMF) has been used for biological monitoring in the occupational setting.452 The federal standard for a PEL is 10 ppm (30 mg/m3).
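The mg/m3 equivalents quoted alongside these ppm limits follow the standard vapor conversion at 25°C and 1 atm (molar volume 24.45 L/mol): mg/m3 = ppm × molecular weight / 24.45. A minimal sketch of the arithmetic (the function name is ours for illustration; the molecular weights are standard values):

```python
# Sketch of the standard ppm -> mg/m3 conversion for vapors at 25 degC, 1 atm.
# mg/m3 = ppm * MW / 24.45, where 24.45 L/mol is the molar volume of an
# ideal gas under those conditions. Function name is illustrative, not
# from the text.

MOLAR_VOLUME_L = 24.45  # L/mol at 25 degC and 1 atm


def ppm_to_mg_m3(ppm: float, mol_weight: float) -> float:
    """Convert a vapor concentration in ppm to mg/m3."""
    return ppm * mol_weight / MOLAR_VOLUME_L


# N,N-dimethylformamide (MW 73.09): the 10 ppm PEL
print(round(ppm_to_mg_m3(10, 73.09)))  # 30, matching the stated 30 mg/m3

# N,N-dimethylacetamide (MW 87.12): the 10 ppm PEL
print(round(ppm_to_mg_m3(10, 87.12)))  # 36 (the standard lists 35 mg/m3)
```

The formaldehyde standard quoted later in this chapter (1 ppm, 1.2 mg/m3; MW 30.03) follows the same relation.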
Environmental Health
N,N-Dimethylacetamide N,N-dimethylacetamide, CH3CON(CH3)2, is a colorless liquid that is easily absorbed through the skin. Inhalation is a less important route of absorption, since the volatility is low. N,N-dimethylacetamide is used as a solvent in a variety of industrial processes. Hepatotoxicity is the most severe adverse effect; hepatocellular degenerative changes and jaundice have been reported in exposed workers. Experimental studies have also indicated hepatotoxicity as the prominent effect in rats and dogs. With high exposure, depressant neurotoxic effects become evident. Dimethylacetamide has been shown in experimental studies to produce testicular changes in rabbits and rats. Its hepatotoxicity was comparable to and possibly higher than that of dimethylformamide.453 Developmental toxicity (soft tissue and skeletal abnormalities) of dimethylacetamide was detected in rabbits following inhalation exposure.454 The federal standard for a PEL is 10 ppm (35 mg/m3). Protective equipment to exclude percutaneous absorption is necessary, as are eye and respiratory protection if high vapor concentrations are possible.
Acrylamide Acrylamide (CH2 = CHCONH2) is a white crystalline material with a melting point of 84.5°C and a tendency to sublime; it is readily soluble in water and in some other common polar solvents. Large-scale production started in the early 1950s; the major industrial applications are as a vinyl monomer in the production of high-molecular polymers such as polyacrylamides. These have many applications, including the clarification and treatment of municipal and industrial effluents and potable water; in the oil industry (for fracturing and flooding of oil-bearing strata); as flocculants in the production of ores, metals, and coal; as strengtheners in the paper industry; for textile treatment, etc. Acrylamide is of major concern because of its extensive use in molecular biology laboratories, where, in the United States, 100,000–200,000 persons are potentially exposed in chromatography, electrophoresis, and electron microscopy.455 Acrylamide is present in tobacco smoke, and concern has arisen regarding human exposures through its presence in some prepared foods, especially high-carbohydrate foods cooked at high temperatures, such as French fries and potato chips.456 Although the pure polyacrylamide polymers are nontoxic, the problem of residual unreacted acrylamide exists, since up to 2% residual monomer is acceptable for some industrial applications. The FDA has established a maximum 0.05% residual monomer level for polymers used in paper or cardboard in contact with food; similar levels are accepted for polymers used in clarification of potable water. Since acrylamide has cumulative toxic effects, it has been recommended that the general population not be exposed to daily levels in excess of 0.0005 mg/kg. 
The initial indication of a marked neurotoxic effect of acrylamide came when a recently introduced acrylamide production method (from acrylonitrile) was first used in 1953; several workers experienced weakness in their extremities, with numbness and tingling, strongly suggestive of toxic peripheral neuropathy. Cases of acrylamide neuropathy have since been reported from Japan, France, Canada, and Great Britain. Acrylamide is readily absorbed through the skin, which is considered an important route of absorption. Respiratory absorption and ingestion of acrylamide are also important; severe cases of acrylamide poisoning have resulted from ingestion of contaminated water in Japan. Acrylamide is metabolized to the epoxide glycidamide through CYP2E1-mediated epoxidation; glycidamide adducts to hemoglobin and to DNA have been identified in animals and humans. Dosing rats and mice with glycidamide typically produced higher levels of DNA adducts than observed with acrylamide. Glycidamide-derived DNA adducts of adenine and guanine were formed in all tissues examined, including both target tissues identified in rodent carcinogenicity bioassays and nontarget tissues.457 This metabolite may be involved in the reproductive and carcinogenic effects of acrylamide. The neurotoxicity of acrylamide and that of glycidamide were shown to differ in rats, suggesting that acrylamide itself is primarily responsible for peripheral neurotoxicity.458
Acrylamide poisoning in occupationally exposed workers has occurred after relatively short periods of exposure (several months to a year). Erythema and peeling of skin, mainly in the palms but also on the soles, usually precede neurologic symptoms; excessive fatigue, weight loss, and somnolence are followed by a slowly progressive symmetrical peripheral neuropathy. The characteristic symptoms include muscle weakness, unsteadiness, paresthesia, signs of sympathetic nervous system involvement (cold, blue hands and feet, excessive sweating), impairment of superficial sensation (touch, pain, temperature) and position sense, diminished or absent deep tendon reflexes in legs and arms, and the presence of Romberg’s sign. Considerable loss of muscle strength may occur, and muscular atrophy, usually starting with the small muscles of the hands, has been reported. This toxic neuropathy has a distal to proximal evolution; the earliest and most severe changes are in the distal segments of the lower and upper extremities, and progression occurs with involvement of more proximal segments (“stocking and glove” distribution). Signs indicating CNS involvement are somnolence, vertigo, ataxic gait, and occasionally slight organic mental syndrome. EEG abnormalities have also been described. Sensory nerve conduction velocities have been found to be more affected than motor nerve conduction velocities; potentials with markedly prolonged distal latencies are described. Recovery after cessation of exposure is slow; it may take several months to 2 years. Workers exposed to acrylamide and N-methylolacrylamide during grouting work reported a higher prevalence of symptoms during the exposure period than they did in an examination 16 months later. A statistically significant reduction in the mean sensory NCV of the ulnar nerve was observed 4 months postexposure when compared with the values of a control group, and the mean ulnar distal delay was prolonged. 
Both measures were significantly improved when measured one year later. Exposure-related improvements were observed from 4 to 16 months postexposure for both the median (motor and sensory NCV and F-response) and ulnar (sensory NCV, F-response) nerves. A significant reversible reduction in the mean sensory amplitude of the median nerve was also observed, while the mean sensory amplitude of the sural nerve was significantly reduced after 16 months.459 Experimental acrylamide neuropathy has been produced in all mammals studied; medium- to large-diameter fibers and long fibers are more susceptible to the primary giant axonal degeneration and secondary demyelination characteristic of acrylamide neuropathy. CNS pathology consists of degenerating fibers in the anterior and lateral columns of the spinal cord, gracile nucleus, cerebellar vermis, spinocerebellar tracts, CNS optic nerve tracts, and tracts in the hypothalamus. Changes in somatosensory evoked potentials have been found to be useful in the early detection of acrylamide neurotoxicity. They precede abnormalities of peripheral nerve conduction and behavioral signs of intoxication. Deterioration of visual capacity, with an increased threshold for visual acuity and flicker fusion and prolonged latency in VEPs, was reported in monkeys. These abnormalities were detected before overt signs of toxicity became apparent. Acrylamide preferentially damages P retinal ganglion cells in macaques, with marked effects on visual acuity, contrast discrimination, and shape discrimination.460 An underlying mechanism of acrylamide peripheral neuropathy has been found to be impaired retrograde transport of material from the more distal parts of the peripheral nerve. The buildup of retrogradely transported material has been shown to be dose-related. 
Changes in retrograde axonal transport are thought to play an initial and important role in the development of toxic axonopathies, possibly the primary biochemical event in acrylamide neuropathy. Local disorganization of the smooth endoplasmic reticulum, forming a complex network of tubules intermingled with vesicles and mitochondria, is thought to be responsible for the focal stasis of fast-transported proteins. These seem to be the earliest changes detectable in axons damaged by acrylamide. Acrylamide reduced microtubule-associated proteins (MAP1 and MAP2) in the rat extrapyramidal system. The effect was more marked in the caudate-putamen than in other components of the extrapyramidal system. The loss of MAPs occurs first in dendrites and proceeds toward the perikarya. The depletion of microtubule-associated proteins in the extrapyramidal system appears to be an early biochemical event preceding peripheral neuropathy.461 In addition, acrylamide
also produces necrosis of cerebellar Purkinje cells after high-dose (50 mg/kg) administration in rats.462 Acrylamide has been found to depress fast anterograde transport of protein, resulting in reduced delivery of protein to the axon and distal nerve degeneration.463 Acrylamide has been reported to produce effects on neurotransmitter and neuropeptide levels in various areas of the brain. Elevated levels of 5-hydroxyindoleacetic acid in all regions of the rat brain were interpreted as being the result of an increased serotonin turnover. Changes in the affinity and number of dopamine receptor sites have also been found. Elevated levels of some neuropeptides were detected mainly in the hypothalamus. Significant decreases in plasma levels of testosterone and prolactin were found after repeated acrylamide administration. In recent studies, acrylamide intoxication was associated with early, progressive nerve terminal degeneration in all CNS regions and with Purkinje cell injury in the cerebellum.464 Acrylamide produced testicular atrophy, with degenerative changes in the epithelial cells of seminiferous tubules. Acrylamide treatment produced significant increases in chromosomal structural aberrations in late spermatids-spermatozoa of mice. Chromosomal damage was consistent with alkylation of DNA-associated protamines. A dose-dependent depletion of mature spermatids after treatment of spermatogonia and a toxic effect upon primary spermatocytes were also detected.465 Acrylamide (i.p.) produced a meiotic delay in spermatocytes of mice. This was predominantly due to prolongation of interkinesis. 
Acrylamide toxicity appears to increase Leydig cell death and perturb gene expression levels, contributing to sperm defects and various abnormal histopathological lesions, including apoptosis, in rat testis.466 Acrylamide is highly effective in breaking chromosomes in germ cells of male mice, resulting both in early death of conceptuses and in the transmission of reciprocal translocations to live-born progeny. This effect has been demonstrated after topical application and absorption through the skin.467 Acrylamide-induced germ cell mutations in male mice require CYP2E1-mediated epoxidation of acrylamide.468 Acrylamide exposure in male mice caused a dose-dependent increase in the frequency of morphologic abnormalities in preimplantation embryos. Single-cell eggs, growth retardation, and blastomere lysis were detected after paternal treatment with acrylamide. A more than 100-fold elevation of chromatin adducts in sperm was observed during the first and second weeks after treatment.469 The disturbances in cell division caused by acrylamide suggest that acrylamide might induce aneuploidy by interfering with proper functioning of the spindle; errors in chromosome segregation may also occur.470 Genotoxic effects of acrylamide and glycidamide have also been detected in several in vitro and/or in vivo unscheduled DNA synthesis assays.471 Acrylamide showed mutagenic potency in Salmonella, and both the chromosomal aberration assay and micronucleus assay indicated that acrylamide has genotoxic potency; the chromosomal aberration frequencies were observed to be proportional to acrylamide concentrations, and acrylamide significantly increased micronuclei in peripheral blood cells of mice.472 The DNA strand-breaking effect of acrylamide in rat hepatocytes was enhanced by depletion of glutathione.473 Acrylamide has been shown to exert a wide spectrum of diverse effects on DNA of normal cells, including mostly DNA base modifications and apoptosis, and may also impair DNA repair.474 
Oncogenicity studies on rats treated with acrylamide in drinking water for 2 years have been positive for a number of tumors (central nervous system, thyroid, mammary gland, uterus in females, and scrotal mesothelioma in males). Acrylamide increased DNA synthesis in the target tissues for tumor development (thyroid, testicular mesothelium, adrenal medulla) in the rat. In contrast, cell growth was not altered in the liver and adrenal cortex (non-target tissues for acrylamide carcinogenesis).475 In a mortality study involving a cohort of 371 employees exposed to acrylamide, an excess in total cancer deaths was due to excess in digestive and respiratory cancer in a subgroup that had previous exposure to organic dyes.476 IARC has classified acrylamide in Group 2A, probably carcinogenic to humans.477
Control and Prevention Engineering designs that prevent the escape of both vapor and dust into the environment are necessary; enclosure, exhaust ventilation,
and automated systems must be used to minimize exposure. Prevention of skin and eye contact is especially important in handling of aqueous solutions, and closed systems are to be preferred. Measurements of hemoglobin adducts were developed as a way to monitor exposure to acrylamide and have been successfully applied in a field study of occupationally exposed workers.478 A study of 41 workers heavily exposed to acrylamide and acrylonitrile in Xinxiang, China, was undertaken because of frequent signs and symptoms indicating neuropathy. Hemoglobin adducts of acrylamide were significantly correlated with a “neurotoxicity index” based on signs and symptoms of peripheral neuropathy, vibration thresholds, and electroneuromyography measurements.479 The present recommended TWA for acrylamide exposure is 0.3 mg/m3. Skin exposure has to be carefully avoided by the use of appropriate protective clothing and work practices. Showers and eyewash fountains should be available for immediate use if contamination occurs. Preemployment and periodic medical examinations with special attention to skin, eyes, and nervous system are necessary. It is essential that employees be warned of the potential health hazards and the importance of personal hygiene and careful work practices. Frequent inspection of fingers and hands by medical or paramedical personnel is useful in detecting peeling of skin, which usually precedes clinical neuropathy. ALDEHYDES
Aldehydes are aliphatic or aromatic compounds with the general structure R-CHO, in which a carbonyl group is bonded to a hydrogen atom and to an organic radical.
The aldehydes are highly reactive substances and are used extensively throughout the chemical industry. Formaldehyde is a gas that is readily soluble in water; the other aldehydes are liquids. The common characteristic of aldehydes is their strong irritative effect on the skin, eyes, and respiratory system. Acute overexposure may result in toxic pulmonary edema. Sensitization to aldehydes is possible, and allergic dermatitis and occupational asthma can occur. Unlike formaldehyde, glutaraldehyde has not been shown to increase neoplasia in rodent studies.480
Formaldehyde Formaldehyde (HCHO) is a colorless gas with a strong odor, which is readily soluble in water; the commercial solutions may contain up to 15% methanol to prevent polymerization. It has numerous industrial applications in the manufacture of textiles, cellulose esters, dyes, inks, latex, phenol, urea, melamine, pentaerythritol, hexamethylenetetramine, thiourea, resins, and explosives and as a fungicide, disinfectant, and preservative. More than half of the formaldehyde used in the United States goes into the manufacture of plastics and resins: urea-formaldehyde, phenolic, polyacetal, and melamine resins. Among the many other uses is the manufacture of 4,4′-methylene dianiline and 4,4′-methylene diphenyl diisocyanate. Some relatively small-volume uses of formaldehyde are in agriculture, for seed treatment and as a soil disinfectant; in cosmetics and deodorants; in photography; and in histopathology. Formaldehyde has been found to be a relatively common contaminant of indoor air; it originates in urea-formaldehyde resins used in the production of particle board or in urea-formaldehyde foam used for insulation. Such insulation was applied in the United States in approximately 500,000 houses during the period 1975–1980. Concentrations of formaldehyde in residential indoor air have varied from 0.01 to 31.7 ppm. Significant concentrations of formaldehyde have been found in industrial effluents, mainly from the production of urea-, melamine-, and phenol-formaldehyde resins, and also from users of such resins (e.g., plywood manufacturers). In water, formaldehyde undergoes rapid
degradation and, therefore, does not represent a major source of absorption. Formaldehyde is also readily degraded in soil. Bioaccumulation does not occur.481 Other sources of formaldehyde exposure for the general population are cigarette smoke (37–73 µg per cigarette) and small amounts in food, especially after the use of hexamethylenetetramine as a food additive. Formaldehyde resins applied to permanent-press textiles can emit formaldehyde when stored. Fingernail hardeners containing formaldehyde are a relatively recent addition to the potential sources of formaldehyde exposure. Measurement of formaldehyde levels in the air in office buildings in Taiwan raised concern about increases in lifetime cancer risk.482 Japanese anatomy students dissecting cadavers were exposed to formaldehyde levels in excess of the recommended level of 0.5 ppm set by the Japan Society for Occupational Health.483 The normal endogenous concentration of formaldehyde in the blood is approximately 0.1 mM in rats, monkeys, and humans. Absorption occurs through inhalation. Skin and eye contact may result in chemical burns. Guinea pigs exposed by inhalation to formaldehyde (1 ppm for 8 hours) developed increased airway resistance and enhanced bronchial reactivity to acetylcholine, mediated through leukotriene biosynthesis.484 Smooth muscle reactivity in the airways was altered, despite the absence of epithelial damage or inflammation histologically.485 Chronic formaldehyde exposure enhanced bronchoconstrictive responses to ovalbumin antigen challenge in ovalbumin-sensitized guinea pigs.486 Acute overexposure to very high concentrations may result in pulmonary edema. Sensitization resulting in allergic dermatitis is not uncommon; occupational asthma is also possible. 
With repeated exposures over 10 days, formaldehyde affected the learning behavior and the memory of male and female rats.487 Formaldehyde induced oxidative frontal cortex and hippocampal tissue damage in rats; a protective effect of vitamin E against oxidative damage was found.488 Formaldehyde carcinogenicity assays have revealed that inhalation exposure to concentrations of 14.3 ppm resulted in a significantly increased incidence of nasal squamous cell carcinomas in rats of both sexes. Induction of nasal carcinomas in rats exhibited a nonlinear relationship with formaldehyde dose, the rates increasing rapidly with increasing exposure concentrations.489 Formaldehyde-related increases in cell proliferation are thought to play an important role in formaldehyde carcinogenicity. In mice only a very small number of squamous cell carcinomas developed, and the incidence was not statistically significant. Dysplasia and squamous metaplasia of the respiratory epithelium, rhinitis, and atrophy of the olfactory epithelium were observed in mice; similar lesions were seen in rats, and goblet cell hyperplasia, squamous atypia, and papillary hyperplasia were also found. A 2-year experimental study on rats investigated the effects of formaldehyde in drinking water. Although pathologic changes in the gastric mucosa were found in the high-dose rats, no gastric tumors or tumors at other sites were detected.490 A cohort study of 2490 employees in a chemical plant manufacturing and using formaldehyde found an elevated proportional mortality for digestive tract cancer in white males; the small numbers make it difficult to draw conclusions. No deaths from cancers of the nose or nasal sinuses had occurred. The duration of employment was relatively short. The studies had a very limited power to detect excess mortality from nasal cancer. 
In a large retrospective cohort mortality study of more than 11,000 workers exposed to formaldehyde in the garment industry, significant excess mortality from cancer of the buccal cavity and connective tissue was found. The incidence of such cancers as leukemia and lymphoma was higher than expected, without reaching the level of statistical significance. Nasopharyngeal cancer mortality was statistically significantly increased in a cohort study of United States industrial workers exposed to formaldehyde, and was also increased in two other U.S. and Danish cohort studies. Five of seven case-control studies also found elevated risk for formaldehyde exposure. Leukemia mortality, primarily myeloid-type, was increased in six of seven cohorts of embalmers, funeral-parlor workers, pathologists, and anatomists. A greater incidence of leukemia in two cohorts of U.S. industrial workers491 and U.S. garment workers,492 but not in a third cohort of United Kingdom chemical workers, has been reported. An IARC Working Group concluded that there is
sufficient evidence in humans that formaldehyde causes nasopharyngeal cancer, and strong but not sufficient evidence for a causal association between leukemia and occupational exposure to formaldehyde. Overall, the Working Group concluded that formaldehyde is carcinogenic to humans (Group 1), on the basis of sufficient evidence in humans and sufficient evidence in experimental animals, a higher classification than previous IARC evaluations.493 Formaldehyde is mutagenic to bacteria, yeast, and Drosophila. Recently, formaldehyde-induced mutagenesis has been demonstrated in Chinese hamster ovary cells, primarily point mutations with single-base transversions.494 Formaldehyde is metabolized to carbon dioxide and formate. Studies using 14C-formaldehyde have demonstrated the presence of 14C-labeled cellular macromolecules. In microarray studies of the nasal epithelium of rats exposed to formaldehyde by nasal inhalation, multiple genetic pathways were found to be dysregulated by formaldehyde exposure, including those involved in DNA synthesis/repair and regulation of cell proliferation.495 Formaldehyde has been reported to react with nucleic acids and has been found to be among the most potent of DNA-protein cross-link inducers, compared with aldehydes of greater carbon chain length.496 DNA-protein cross-links were induced, along with cell proliferation, squamous metaplasia, and squamous cell carcinomas, in the nasal lateral meatus (a high tumor site in bioassays) of F344 rats exposed to formaldehyde.497 DNA damage in human lymphocyte cultures was demonstrated using the comet assay for DNA alterations.498 Sister chromatid exchanges in lymphocytes of formaldehyde-exposed anatomy students showed a small but statistically significant increase when compared with preexposure findings in the same persons.499 Nasal respiratory cell samples collected from formaldehyde-exposed sawmill and shearing press workers showed a significantly higher frequency of micronucleated cells than 
found among unexposed controls.500 DNA-protein cross-links were found with significantly greater frequency in the white blood cells of 12 formaldehyde-exposed workers than in the white blood cells of 8 unexposed controls.501 A significant increase in the frequency of micronucleated buccal cells in buccal smears taken from anatomy and pathology staff exposed to formaldehyde has been reported.502 Similar findings were reported in a study of mortuary students.503 Evidence has accumulated indicating that formaldehyde is an important metabolite of a number of halogenated hydrocarbons, mediated through GST theta activity, including dichloromethane, methyl bromide, methyl chloride, and carbon tetrachloride.504,505 Formaldehyde administered to male rats at 10 mg/kg body weight/day for 30 days caused a significant fall in sperm motility, viability, and count.506 When mouse and rat embryos were exposed to formaldehyde, embryonic viability and growth parameters decreased and dysmorphogenesis increased in a dose-dependent fashion.402 The federal standard for formaldehyde is 1 ppm (1.2 mg/m3). Engineering controls are essential to control exposure. Protective equipment to prevent skin contact, adequate respirators for situations in which higher exposure could result, proper work practices, and continuous education programs for employees are necessary. The EPA and OSHA, in their consideration of available epidemiological and toxicological studies, now regard formaldehyde as a possible human carcinogen, although the evidence in humans is limited and controversial.
Acrolein Acrolein (H2C = CHCHO), a clear liquid, is used in the production of plastics, plasticizers, acrylates, synthetic fibers, and methionine; it is produced when oils and fats containing glycerol are heated, and it is a component of cigarette smoke. Acrolein is one of the strongest irritants. Skin burns and severe irritation of eyes and respiratory tract, including toxic pulmonary edema, are possible. Inhalation of smoke containing acrolein, the most common toxin in urban fires after carbon monoxide, causes vascular injury with noncardiogenic pulmonary edema containing edematogenic eicosanoids such as thromboxane, leukotriene B4, and the sulfidopeptide leukotrienes. Thromboxane is probably responsible for the pulmonary hypertension which occurs after the inhalation of acrolein
smoke.507 Acrolein caused dose-dependent cytotoxicity to human alveolar macrophages, as demonstrated by the induction of apoptosis and necrosis.508 Acrolein is produced at the subcellular level by the lipid peroxidation caused by a wide range of agents that induce intracellular oxidative stress. Acrolein itself acts as a strong peroxidizing agent.509 Acrolein has been demonstrated in neurofibrillary tangles in the brain in Alzheimer’s disease and is toxic to hippocampal neurons in culture.510 Acrolein has been implicated in the pathogenesis of atherosclerosis. Glutathione and GST were protective against acrolein-induced toxicity in rat aortic smooth muscle cells,511 and glutathione has been demonstrated to reduce many of the toxic effects of acrolein exposure. Acrolein inhibited T-cell and B-cell proliferation and reduced the viability of mouse lymphocytes in vitro.512 Acrolein is embryotoxic and teratogenic in rats and chick embryos after intra-amniotic administration. Initial reactions between acrolein and protein generate adducts containing an electrophilic center that can participate in secondary deleterious reactions (e.g., cross-linking). Inactivation of these reactive protein adducts with hydralazine, a nucleophilic drug, counteracts acrolein toxicity.513 Acrolein is genotoxic, causes DNA single-strand breaks, and is a highly potent DNA cross-linking agent in human bronchial epithelial cells.514 Acrolein forms cyclic deoxyguanosine adducts when it reacts with DNA in vitro and in S. typhimurium cultures. 2-Chloroacrolein and 2-bromoacrolein are very potent direct mutagens not requiring metabolic activation in S. 
typhimurium strains.515 Acrolein was shown to be mutagenic in bacterial systems.516 Acrolein was not found to be a developmental toxicant or teratogen at doses not toxic to the does, when administered via stomach tube to pregnant white rabbits.517 Acrolein is not a selective reproductive toxin in the rat.518 The federal PEL for acrolein is 0.1 ppm. The IARC has concluded that there is inadequate evidence in humans for the carcinogenicity of acrolein and inadequate evidence in experimental animals (Group 3).519 Environmentally relevant concentrations of acrolein can induce bronchial hyperreactivity in guinea pigs through a mechanism involving injury to cells present in the airways. There is evidence that this response is dependent on leukotriene biosynthesis.520 Other widely used aldehydes are acetaldehyde and furfural. They have irritant effects but are less potent in this respect than formaldehyde and acrolein. Evidence for carcinogenic potential in experimental animals is convincing for formaldehyde and acetaldehyde, limited for crotonaldehyde, furfural, and glycidaldehyde, and very weak for acrolein.521
ESTERS
Esters are organic compounds that result from the substitution of a hydrogen atom of an acid (organic or inorganic) with an organic group. They constitute a very large group of substances with a variety of industrial uses in plastics and resins, as solvents, and in the pharmaceutical, surface coating, textile, and food-processing industries. Narcotic CNS effects and irritative effects (especially with the halogenated esters such as ethyl chloroformate, ethyl chloroacetate, and the corresponding bromo- and iodo-compounds) are common to most esters. Sensitization has been reported with some of the aliphatic monocarboxylic halogenated esters. Some of the esters of inorganic acids have specific, potentially severe toxicity.
Dimethylsulfate Dimethylsulfate, (CH3)2SO4, is an oily fluid. It is used mainly for its methylating capacity; it is used as a solvent in the separation of mineral oils and as a reactant in producing polyurethane resins. Absorption is mainly through inhalation, but skin penetration is also possible. Toxic effects are complex and severe; many fatalities have occurred. After a latency period of several hours, the irritant effects on the skin, eyes, and respiratory system become manifest; toxic pulmonary edema is not unusual. Vesication of the skin and ulceration can occur. Eye irritation usually results in conjunctivitis, keratitis, photophobia, palpebral edema, and blepharospasm. Irritation of the upper airways may also be severe, with dysphagia and sometimes edema of the glottis. Dyspnea, cough, and
Diseases Associated with Exposure to Chemical Substances
shallow breathing are the signs of toxic pulmonary edema. If the patient survives this critical period, 48 hours later the signs and symptoms of hepatocellular necrosis and renal tubular necrosis may become manifest. At very high levels of exposure, neurotoxic effects are prominent, with somnolence, delirium, convulsions, temporary blindness, and coma. Dimethylsulfate is an alkylating agent. In experimental studies on rats, it has been shown to be carcinogenic. Prenatal exposure has also produced tumors of the nervous system in offspring. The IARC has concluded that there is sufficient evidence of dimethylsulfate carcinogenicity in animals and that it must be assumed to be a potential human carcinogen. In inhalation experiments on rodents, embryotoxic and teratogenic effects have also been observed. The federal standard for a permissible level of dimethylsulfate exposure is 0.1 ppm. Diethylsulfate, methylchlorosulfonate, ethylchlorosulfonate, and methyl-p-toluene sulfonate have effects similar to those of dimethylsulfate, and the same extreme precautions in their handling are necessary. The skin, eyes, and respiratory tract should be protected continuously when there may be exposure to dimethylsulfate or the other esters that have similar effects. Contaminated areas should be entered only by trained personnel with impervious protective clothing and air-supplied respirators.2
KETONES
The chemical characteristic of this series of compounds known as ketones is the presence of the carbonyl group; their general structure is R–CO–R′, in which the carbonyl carbon links two hydrocarbon radicals.
Ketones are excellent solvents for oils, fats, collodion, cellulose acetate, nitrocellulose, cellulose esters, epoxy resins, pigments, dyes, natural and synthetic resins (especially vinyl polymers and copolymers), and acrylic coatings. They are also used in the manufacture of paints, lacquers, and varnishes and in the celluloid, rubber, artificial leather, synthetic rubber, lubricating oil, and explosives industries. Other uses are in metal cleaning, rapidly drying inks, airplane dopes, as paint removers and dewaxers, and in hydraulic fluids. The most important members of the ketone group, because of their extensive use, are as follows:
Acetone  CH3COCH3
Methyl-ethyl-ketone  CH3COCH2CH3
Methyl-n-propyl ketone  CH3(CH2)2COCH3
Methyl-n-butyl ketone  CH3CO(CH2)3CH3
Methyl isobutyl ketone  CH3COCH2CH(CH3)2
Methyl-n-amyl ketone  CH3CO(CH2)4CH3
Methyl isoamyl ketone  CH3CO(CH2)2CH(CH3)2
Diisobutyl ketone  (CH3)2CHCH2COCH2CH(CH3)2
Cyclohexanone  C6H10O
Mesityl oxide  CH3COCH=C(CH3)2
Isophorone (3,5,5-trimethyl-2-cyclohexen-1-one)  C9H14O
Methyl isobutyl ketone is used in the recovery of uranium from fission products. It has also found applications as a vehicle for herbicides, such as 2,4,5-T, and insecticides. Many of the ketones are valuable raw materials or intermediates in the chemical synthesis of other compounds. For example, approximately 90% of the two billion pounds of acetone produced each year is used by the chemical industry for the production of methacrylates and higher ketones. The major route of absorption is through inhalation of vapor; with some of the ketones, such as methyl ethyl ketone (MEK) and methyl-n-butyl ketone (MBK), skin absorption may contribute significantly to the total amount absorbed if work practices allow for extensive contact (immersion of hands, washing with the solvents). All the ketones are moderate mucous membrane irritants (eyes and upper airways); at higher concentrations CNS depression with
Environmental Health
prenarcotic symptoms progressing to narcosis may occur. A specific neurotoxic effect of MBK, peripheral neuropathy, was reported in 1975522 in workers exposed in the plastic coatings industry. In 1976, similar cases were identified among spray painters. Cases of peripheral neuropathy were also found in furniture finishers exposed to methyl-n-butyl ketone (MBK) and in workers employed in a dewaxing unit in a refinery, where the reported exposure was to MEK. The toxic sensorimotor peripheral neuropathy caused by MBK exposure is very similar to that caused by other neurotoxic substances, such as acrylamide and n-hexane. Typically, sensory dysfunctions (touch, pain, temperature, vibration, and position) are the initial changes, affecting the hands and feet. Distal sensory neuropathy can be the only finding in some affected persons; in more severe cases, motor impairment (muscle weakness, diminished or abolished deep tendon reflexes) in the distal parts of the lower and then the upper extremities becomes manifest. With progression, and in more severe cases, both the sensory and motor deficits may also affect the more proximal segments of the extremities; muscle wasting may be present in severe cases. Electromyographic abnormalities and slowing of nerve conduction velocity can be detected in the vast majority of cases; these electrophysiological abnormalities are useful for early detection, since they most often precede clinical manifestations. The clinical course is protracted, and cessation of toxic exposure does not result in recovery in all cases; progressive dysfunction was observed to occur for several months after exposure had been eliminated.
Animal experiments have demonstrated that exposure to methyl-n-butyl ketone results in peripheral neuropathy in all tested species; moreover, mixed exposure to MEK and MBK (in a 5:1 ratio) resulted in a more rapid development of peripheral neuropathy in rats than exposure to MBK alone, indicating a potentiating effect of MEK. These experimental data are of importance for human exposure, since mixtures of solvents are often used. MBK produces primary axonal degeneration, with a marked increase in the number of neurofilaments, reduction of neurotubules, axonal swelling, and secondary thinning of the myelin sheath. Spencer and Schaumburg523 have identified similar changes in certain tracts of the CNS, the distal regions of long ascending and descending pathways in the spinal cord and medulla oblongata, and preterminal and terminal axons in the gray matter. For this reason, they have proposed central-peripheral distal axonopathy as a more appropriate term for this type of neurotoxic effect. The “dying back” axonal disease therefore seems not to be limited to the peripheral nerves but to be quite widespread in the CNS. Debate continues regarding the relationship between giant axonal swellings in CNS and PNS tissues, containing neurofilamentous masses, and axon atrophy.524 Recovery from peripheral neuropathy is slow; it is thought that recovery of similar lesions within the CNS is unlikely to occur and might result in permanent deficit, such as ataxia or spasticity. The predominant metabolite of MBK is 2,5-hexanedione. A similar type of giant axonal neuropathy was reproduced in animals exposed to this metabolite. 2,5-Hexanedione is also the main metabolite of n-hexane, another solvent with markedly similar neurotoxicity.
Other metabolites of MBK are 5-hydroxy-2-hexanone, 2-hexanol, and 2,5-hexanediol; all have been shown to produce typical giant axonal neuropathy in experiments on rats.3 The transformation of MBK to its toxic metabolites is mediated by the liver mixed-function oxidase system. MEK potentiates the neurotoxicity of MBK by induction of the microsomal mixed-function enzyme system. It is generally accepted that 2,5-hexanedione, the gamma-diketone metabolite of MBK, has the most marked neurotoxic effect of all MBK metabolites. Another ketone, ethyl-n-butyl ketone (EnBK, 3-heptanone), has also been reported to produce typical central-peripheral distal axonopathy in rats. MEK potentiated EnBK neurotoxicity; the excretion of two neurotoxic γ-diketones, 2,5-heptanedione and 2,5-hexanedione, was increased. Technical-grade methyl-heptyl ketone (MHK) was also found to produce toxic neuropathy in rats; the effect was shown to be due to 5-nonanone. Metabolic studies have demonstrated the conversion of 5-nonanone to 2,5-nonanedione, MBK, and 2,5-hexanedione. Other γ-diketones, 2,5-heptanedione and 3,6-octanedione, have also produced neuropathy.
Nephrotoxic (degenerative changes in proximal convoluted tubular cells) and hepatotoxic effects have been detected in experimental exposure of several animal species to the following ketones: isophorone (at 50 ppm), mesityl oxide (at 100 ppm), methyl isobutyl ketone (at 100 ppm), cyclohexanone (at 190 ppm), and diisobutyl ketone (at 250 ppm). The potential for MEK to cause developmental toxicity was tested in mice. Mild developmental toxicity was observed after exposure to 3000 ppm, which resulted in reduction of fetal body weight. There was no significant increase in the incidence of any single malformation, but several malformations not observed in the concurrent control group were found at a low incidence: cleft palate, fused ribs, missing vertebrae, and syndactyly.525 A recent study of developmental toxicity in rats found no adverse reproductive effects after exposure to 2000 ppm.526
Prevention Appropriate engineering controls, mainly enclosure and exhaust ventilation, and adequate work practices preventing spillage and vapor generation are essential to maintain exposure to ketones below the exposure limits. Adequate respiratory protection is recommended for situations in which excessive concentrations are possible (maintenance and repair, emergencies, installation of engineering controls, etc.). Appropriate protective clothing is necessary, and skin contact must be avoided. All ketones are flammable or combustible, and employees should be informed of this risk as well as of the specific health hazards. Warning signs in the work areas and on vessels and special educational programs for employees, especially new employees, are necessary as part of a comprehensive prevention program. The NIOSH recommends that occupational exposure to ketones be controlled so that the TWA concentration does not exceed the following exposure limits:
MBK  1 ppm
Isophorone  4 ppm
Mesityl oxide  10 ppm
Cyclohexanone  25 ppm
Diisobutyl ketone  25 ppm
Methyl isobutyl ketone  50 ppm
Methyl isoamyl ketone  50 ppm
Methyl-n-amyl ketone  100 ppm
Methyl-n-propyl ketone  150 ppm
MEK  200 ppm
Acetone  250 ppm
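These recommended limits are time-weighted averages, so a shift's exposure is the duration-weighted mean of the sampled concentrations over an 8-hour day, TWA = Σ(Ci × ti)/8. The sketch below illustrates the standard calculation; the sampled shift concentrations are hypothetical, while the limit values are the NIOSH figures listed above:

```python
# Illustrative 8-hour TWA calculation against NIOSH recommended limits.
# The sampled shift concentrations below are hypothetical examples.

NIOSH_TWA_LIMITS_PPM = {
    "MBK": 1, "isophorone": 4, "mesityl oxide": 10,
    "cyclohexanone": 25, "MEK": 200,
}

def twa_8h(samples):
    """samples: list of (concentration_ppm, duration_h); TWA = sum(C*t)/8."""
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical MEK shift: 3 h at 150 ppm, 4 h at 220 ppm, 1 h unexposed.
shift = [(150, 3), (220, 4), (0, 1)]
exposure = twa_8h(shift)              # (450 + 880 + 0) / 8 = 166.25 ppm
compliant = exposure <= NIOSH_TWA_LIMITS_PPM["MEK"]
print(round(exposure, 2), compliant)  # 166.25 True
```

Note that a shift can exceed the TWA limit during individual sampling intervals (here 220 ppm) and still average below it; ceiling or short-term limits, where they exist, must be checked separately.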
The marked neurotoxicity of at least one member of this group (MBK), the slow recovery in cases of distal axonal degeneration, and the possibility that irreversible damage may occur, possibly also in the central nervous system, indicate the need for appropriate protection and medical surveillance.3 Neurophysiological methods (electromyography and nerve conduction velocity measurements) are indicated wherever MBK, mixtures of MEK and MBK, or other neurotoxic ketones are used. Liver-function tests and indicators of renal function should be included in the periodic medical examination, along with the physical examination and medical history.
ETHERS
Ethers are organic compounds characterized by the presence of a –C–O–C– group. They are volatile liquids, used as solvents and in the chemical industry in the manufacture of a variety of compounds. Some of the halogenated ethers are potent carcinogens (see Halogenated Ethers). While all ethers have irritant and narcotic properties, dioxane (a cyclic diether, C4H8O2) has marked specific toxicity.
Diethylene Dioxide (Dioxane) Dioxane is a colorless liquid with a boiling temperature of 101.5°C. It has applications as a solvent similar to those indicated for the ethylene glycol ethers; it is also a good solvent for rubber, cellulose acetate and other cellulose derivatives, and polyvinyl polymers. Dioxane has been used in the preparation of histologic slides as a dehydrating agent.
Absorption is mainly through inhalation but also through the skin. Dioxane is slightly narcotic and moderately irritant. The major toxic effect is kidney injury, with acute renal failure due to tubular necrosis; in some cases, renal cortical necrosis was reported. Centrilobular hepatocellular necrosis is also possible. 1,4-Dioxane was not genotoxic in vitro but was an inducer of micronuclei in the bone marrow of rats and a carcinogen for both rats and mice. Together with the previously reported in vivo induction of DNA strand breaks in the rat liver, these data raise the possibility of a genotoxic action for 1,4-dioxane.527 Dioxane has been shown to have genotoxic effects in both the mouse bone marrow and liver, inducing micronuclei formed primarily from chromosomal breakage. Dioxane decreased cell proliferation in both the liver and bone marrow.528 Dioxane has been shown to be carcinogenic (by oral administration) in rats and guinea pigs. Several long-term studies with 1,4-dioxane have shown it to induce liver tumors in mice, and nasal and liver tumors in rats, when administered in amounts from 0.5 to 1.8% in drinking water.529 IARC in 1999 classified 1,4-dioxane as possibly carcinogenic to humans (Group 2B),530 and the National Toxicology Program in 2002 concluded that 1,4-dioxane is reasonably anticipated to be a human carcinogen.531
Prevention The federal standard for the PEL is 100 ppm; because of the high toxicity, the ACGIH recommended 50 ppm. Protective equipment, appropriate work practices, and medical surveillance are similar to those indicated for the ethylene glycol ethers.
Carbon Disulfide Carbon disulfide (CS2) is a colorless, very volatile liquid (boiling temperature, 46°C). It is used in the production of viscose rayon and cellophane.3 Other important applications include the manufacture of carbon tetrachloride, neoprene cement, and rubber accelerators; the fumigation of grain; various extraction processes; use as a solvent for sulfur, iodine, bromine, phosphorus, and selenium; in paints, varnishes, and paint and varnish removers; and in rocket fuel. Absorption is mainly through inhalation; skin absorption has been demonstrated but is practically negligible. After inhalation, at least 40–50% of carbon disulfide is retained, while 10–30% is exhaled; less than 1% is excreted unchanged in the urine. Oxidative metabolic transformation of carbon disulfide is mediated by microsomal mixed-function oxidase enzymes. The monooxygenated intermediate is carbonyl sulfide (COS); the end product of this metabolic pathway is CO2, with generation of atomic sulfur.532 Atomic sulfur is able to form covalent bonds. Carbon disulfide is a very volatile liquid, and high airborne vapor concentrations can easily occur; under such circumstances, specific toxic effects on the central nervous system are prominent and may result in severe acute or subacute encephalopathy. The clinical symptoms include headache, dizziness, fatigue, excitement, depression, memory deficit, indifference, apathy, delusions, hallucinations, suicidal tendencies, delirium, acute mania, and coma. The outcome may be fatal; in less severe cases, incomplete recovery may occur, with persistent psychiatric symptoms indicating irreversible CNS damage. Many such severe cases of carbon disulfide poisoning occurred in the past, during the second half of the nineteenth century, in the rubber industry in France and Germany; as early as 1892, the first cases in the rubber industry were reported from the United States. Acute mania often led to admission to hospitals for the insane.
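The retention figures above (40–50% of inhaled CS2 retained) permit a rough retained-dose estimate from air concentration, breathing rate, and exposure duration. The sketch below is illustrative only: the 10-ppm air concentration and the light-work ventilation rate are assumed values, and the ppm-to-mg/m3 conversion uses the standard 24.45 L/mol molar volume at 25°C:

```python
# Rough retained-dose estimate for inhaled CS2, using the 40-50% retention
# noted in the text. Air concentration and ventilation rate are assumptions.

CS2_MOLAR_MASS = 76.14   # g/mol
MOLAR_VOLUME_25C = 24.45 # L/mol at 25 C and 1 atm

def ppm_to_mg_m3(ppm, molar_mass):
    """Standard vapor conversion: mg/m3 = ppm x MW / 24.45 (at 25 C)."""
    return ppm * molar_mass / MOLAR_VOLUME_25C

def retained_dose_mg(conc_mg_m3, ventilation_m3_h, hours, retention):
    """Retained dose = concentration x ventilation x time x retention fraction."""
    return conc_mg_m3 * ventilation_m3_h * hours * retention

conc = ppm_to_mg_m3(10, CS2_MOLAR_MASS)        # 10 ppm is about 31 mg/m3
low = retained_dose_mg(conc, 1.25, 8, 0.40)    # assumed light-work ventilation
high = retained_dose_mg(conc, 1.25, 8, 0.50)
print(round(low), round(high))                 # roughly 125 and 156 mg per shift
```

Such first-order estimates ignore workload variation and dermal uptake (negligible for CS2, per the text), but they show why even the 10-ppm standard still corresponds to a substantial absorbed daily dose.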
With the rapid development of the viscose rayon industry, cases of carbon disulfide poisoning became more frequent, and Alice Hamilton repeatedly called attention to this health hazard in the rubber and rayon viscose industries.533 The first exposure standard for carbon disulfide in the United States was adopted in 1941. As late as 1946, cases of carbon disulfide psychosis were reported as still being admitted to state institutions for the mentally ill,2 often without any mention of carbon disulfide as the etiological agent. Chronic effects of carbon disulfide exposure were recognized later, when the massive overexposures leading to acute psychotic effects had been largely eliminated.
Peripheral neuropathy of the sensorimotor type, initially involving the lower extremities but often also the upper extremities, with distal to proximal progression, can lead in severe forms to marked sensory loss, muscle atrophy, and diminished or abolished deep tendon reflexes. CNS effects can also often be detected in cases of toxic carbon disulfide peripheral neuropathy; fatigue, headache, irritability, somnolence, memory deficit, and changes in personality are the most frequent symptoms.2,534 Persistence of peripheral neurotoxic effects over three years after cessation of exposure and even longer persistence of CNS effects have been reported.535 CS2 exposure was reported to induce polyneuropathy and cerebellar dysfunction, along with parkinsonian features, in viscose rayon plant workers. Brain MRI studies showed multiple lesions in the cerebral white matter and basal ganglia.536 Optic neuritis has often been reported. Constriction of visual fields has been found in less severe cases. CS2 exposure enhanced human hearing loss in a noisy environment, mainly affecting hearing in the lower frequencies.537 Electromyographic changes and reduced nerve conduction velocity have been useful in the early detection of carbon disulfide peripheral neuropathy.538 Behavioral performance tests have been successfully applied for the early detection of CNS impairment. Neuropsychiatric effects, detected by psychological questionnaires and psychiatric assessment, have been found in workers with occupational exposure to carbon disulfide.539 In rats exposed to CS2 inhalation (200 and 800 ppm for 15 weeks), auditory brain stem responses were found to be delayed, suggesting a conduction dysfunction in the brain stem.540 In CS2-exposed rats, VEPs (flash and pattern reversal) were shown to be decreased in amplitude with an increase in latency. 
Repeated exposures had a more marked effect than acute exposure.541 Carbon disulfide peripheral neuropathy is characterized by axonal degeneration, with multifocal paranodal and internodal areas of swelling, accumulation of neurofilaments, abnormal mitochondria, and eventually thinning and retraction of myelin sheaths. Such axonal degeneration has been detected also in the central nervous system, mostly in long-fiber tracts. A marked reduction in met-enkephalin immunostaining in the central amygdaloid nuclei and the globus pallidus has been measured, with a parallel elevation in the lateral septal nucleus and the parietal cortex. These findings suggest that the enkephalinergic neuromodulatory system could play a role in CS2 neurotoxicity.542 A six-year observational cohort study of the effect of carbon disulfide on brain MRI abnormalities in rayon-manufacturing workers found an increased risk of hyperintense spots in T2-weighted images, which point to so-called silent cerebral infarctions, among the exposed group compared with nonexposed controls.543 Carbon disulfide neuropathy is of the type described as central-peripheral distal axonopathy, very similar to those produced by n-hexane and methyl-n-butyl ketone. Covalent binding of the highly reactive sulfur to enzymes and proteins essential for the normal function of axonal transport is thought to be the mechanism of axonal degeneration leading to carbon disulfide peripheral neuropathy. CS2 is a member of the class of neuropathy-inducing xenobiotics known as “neurofilament neurotoxicants.” Current hypotheses propose direct reaction of CS2 with neurofilament lysine epsilon-amine moieties as a step in the mechanism of this neuropathy. A lysine-containing dipeptide and bovine serum albumin, when incubated with 14C-labeled CS2, exhibited stable incorporation of radioactivity. A specific intramolecular cross-link was also detected.544 Covalent cross-linking of proteins by CS2 has been demonstrated in vitro.
In carbon disulfide inhalation studies in rats, carbon disulfide produced dose-dependent intra- and intermolecular protein cross-linking in vivo, with cross-linking in neurofilament proteins prior to the onset of lesions, thought to contribute to the development of the neurofilamentous axonal swellings characteristic of carbon disulfide neurotoxicity. Magnetic resonance microscopy demonstrated that carbonyl sulfide, the primary metabolite of CS2, targets the auditory pathway in the brain. Decreases in auditory brain stem-evoked responses and decreased cytochrome oxidase activity in the posterior colliculus and parietal cortex were reported.545 Carbon disulfide interference with vitamin B6 metabolism has also been considered as a possible mechanism contributing to its neurotoxicity. Carbon disulfide reacts with pyridoxamine in vitro, with formation of a salt of pyridoxamine dithiocarbonic acid.
With the recognition of carbon disulfide peripheral neuropathy, efforts to further reduce the exposure limits were made. As the incidence of carbon disulfide peripheral neuropathy decreased, previously unsuspected cardiovascular effects of long-term carbon disulfide exposure, even at lower levels, became apparent. Initially, cerebrovascular changes, with clinical syndromes including pyramidal, extrapyramidal, and pseudobulbar manifestations, were reported with markedly increased incidence and at relatively young ages in workers exposed to carbon disulfide. A significant increase in deaths due to coronary heart disease was documented in workers with long-term carbon disulfide exposure at relatively low levels, and this led to the lowering of the TLV to 10 ppm in Finland in 1972. A higher prevalence of hypertension and higher cholesterol and lipoprotein levels have also been found in workers exposed to carbon disulfide and most probably contribute to the higher incidence of atherosclerotic cerebral, coronary, and renal disease. A high prevalence of retinal microaneurysms was found in Japanese and Yugoslavian workers exposed to carbon disulfide; retinal microangiopathy was more frequent with longer carbon disulfide exposure. A six-year follow-up study of the Japanese cohort demonstrated persistence of elevated prevalences of hypertension, elevated cholesterol and lipoprotein levels, and retinal microaneurysms among the exposed workers compared with controls.546 Adverse effects of carbon disulfide exposure on reproductive function, and more specifically on spermatogenesis, have been reported in exposed workers, with significantly lower sperm counts and more abnormal spermatozoa than in nonexposed subjects.
DNA damage induced by carbon disulfide in mouse sperm was detected by the comet assay.547 CS2 exposure in male rayon workers was associated with dose-related increases in miscarriage rates.548 The toxic effect on spermatogenesis was confirmed in experiments on rats, where marked degenerative changes in the seminiferous tubules and degenerative changes in the Leydig cells, with almost complete disappearance of spermatogonia, were found. Effects on follicle development and implantation of blastocysts were identified in an embryotoxicity study in mice.549 Carbon disulfide has a high affinity for nucleophilic groups, such as sulfhydryl, amino, and hydroxy groups. It binds with the amino groups of amino acids and proteins to form thiocarbamates; these tend to undergo cyclic transformation, and the resulting thiazolidines have been shown to chelate zinc and copper (and possibly other trace metals), which are essential for the normal function of many important enzymes. The high affinity for sulfhydryl groups can also result in interference with enzymatic activities. Effects of carbon disulfide on catecholamine metabolism have been reported. The concentration of norepinephrine in the brain decreased in rats exposed to carbon disulfide, while dopamine levels increased in both the brain and the adrenal glands. The possibility that carbon disulfide might interfere with the conversion of dopamine to norepinephrine has been considered; the converting enzyme dopamine-β-hydroxylase contains copper, and the copper-chelating effect of carbon disulfide probably results in its inhibition. Carbon disulfide has been shown to produce a loss of cytochrome P450 and to affect liver microsomal enzymes. This effect is thought to be related to the highly reactive sulfur (resulting from the oxidative desulfuration of carbon disulfide), which binds covalently to microsomal proteins.
Intraperitoneal injection of CS2 in rats produced several high-molecular-weight proteins, eluted from erythrocyte membranes, which were not present in control animals. The high-molecular-weight proteins were shown to be alpha,beta heterodimers. The production of multiple heterodimers was consistent with the existence of several preferred sites for cross-linking. Dimer formation showed a cumulative dose response in CS2-treated rats.550 CS2 has been shown to produce inter- and intramolecular cross-linking of the low-molecular-weight component of the neurofilament triplet proteins.551 Long-term exposure to carbon disulfide was reported to cause damage to human buccal cell DNA, detected with the comet assay.552 Approximately 70–90% of absorbed carbon disulfide is metabolized. Several metabolites are excreted in the urine. Among these, thiocarbamide and mercaptothiazolinone have been identified. The
urinary metabolites of carbon disulfide have been found to catalyze the iodine-azide reaction (i.e., the reduction of iodine by sodium azide). The speed of the reaction is accelerated in the presence of carbon disulfide metabolites, and this is indicated by the time necessary for the disappearance of the iodine color. A useful biological monitoring test has been developed553 from these observations; departures from normal are found with exposures exceeding 16 ppm. It has been recommended that workers with an abnormal iodine-azide test reaction at the end of a shift, in whom there is no recovery overnight, should be removed (temporarily) from carbon disulfide exposure.
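The removal criterion just described (an abnormal end-of-shift iodine-azide reaction that has not normalized by the next morning) amounts to a simple two-test decision rule. A schematic sketch follows; the boolean test flags are assumed inputs, since the text gives no numeric cutoff for the reaction time:

```python
# Schematic encoding of the recommended removal rule: a worker with an abnormal
# end-of-shift iodine-azide test that has NOT recovered by the next morning is
# temporarily removed from CS2 exposure. The abnormal/normal classification of
# each test is assumed to be made elsewhere (no numeric cutoff in the text).

def remove_from_exposure(end_of_shift_abnormal: bool,
                         next_morning_abnormal: bool) -> bool:
    """True only when the abnormality persists overnight (no recovery)."""
    return end_of_shift_abnormal and next_morning_abnormal

print(remove_from_exposure(True, True))    # True  - abnormal, no overnight recovery
print(remove_from_exposure(True, False))   # False - recovered overnight
print(remove_from_exposure(False, False))  # False - normal end-of-shift test
```

The design point is that a single abnormal end-of-shift result reflects the day's exposure (the test is an integrative index), while persistence into the next morning is what triggers removal.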
Prevention The present federal standard for a permissible level of carbon disulfide exposure is 10 ppm. Prevention of exposure should rely on engineering controls, mostly enclosed processes and exhaust ventilation. When unexpected overexposure can occur, appropriate respiratory protection must be available and used.3 Skin contact should be avoided, and protective equipment should be provided; adequate shower facilities and strict personal hygiene practices are necessary. Worker education on the health hazards of carbon disulfide exposure and the importance of adequate work practices and personal hygiene must be part of a comprehensive preventive medicine program. Medical surveillance should encompass neurologic (behavioral and neurophysiological), cardiovascular (electrocardiogram and ophthalmoscopic examination), renal function, and reproductive function assessment. The iodine-azide test is useful for biological monitoring: it is an integrative index of daily exposure.
AROMATIC NITRO- AND AMINO-COMPOUNDS
Aromatic nitro- and amino-compounds make up a large group of substances characterized by the substitution of one or more hydrogen atoms of the benzene ring by the nitro- (–NO2) or amino- (–NH2) radicals; some of the compounds have halogens (mainly chlorine and bromine) or alkyl radicals (CH3, C2H5, etc.). Substances of this group have numerous industrial uses in the manufacture of dyes, pharmaceuticals, rubber additives (antioxidants and accelerators), explosives, plastic materials, synthetic resins, insecticides, and fungicides. New industrial uses are continuously found in the chemical synthesis of new products.2 The physical properties of the aromatic nitro- and amino-compounds influence the dimension of the hazards they may generate. Some are solids, and some are fluids with low volatility; most are readily absorbed through the skin, and dangerous toxic levels can easily be reached in persons thus exposed. A common toxic effect of most of these compounds is the production of methemoglobin and thus interference with normal oxygen transport to the tissues. This effect is thought to result not from a direct action of the chemical on hemoglobin but from the effect of intermediate metabolic products, such as para-aminophenol, phenylhydroxylamine, and nitrosobenzene. The microsomal mixed-function oxidase system is directly involved in these metabolic transformations. Methemoglobin (Met Hgb) results from the oxidation of divalent Fe2+ in hemoglobin to trivalent Fe3+. Methemoglobin is a ferrihemoglobin (Hgb Fe3+OH), as opposed to hemoglobin, which is a ferrohemoglobin. Methemoglobin cannot serve in oxygen transport, since oxygen is bound (as –OH) in a strong bond and cannot easily be detached. The transformation of hemoglobin into methemoglobin is reversible; reducing agents, such as methylene blue, favor the reconversion. In humans, methemoglobin is normally present in low concentrations, not exceeding 0.5 g/100 ml whole blood.
An equilibrium exists between hemoglobin and methemoglobin, the latter being continuously reduced by intracellular mechanisms in which a methemoglobin reductase-diaphorase has a central place. The production of methemoglobin after exposure to and absorption of nitro- and amino-aromatic compounds results in hypoxia, especially when higher concentrations of Met Hgb (in excess of 20–25% of total Hgb) are reached. The most prominent and distinctive
symptom is cyanosis (apparent when Met Hgb exceeds 1.5 g/100 ml); most of the other symptoms and signs are due to the effects of hypoxia on the central nervous and cardiovascular systems. With high levels of methemoglobinemia, coma, arrhythmias, and death may occur. After cessation of exposure, recovery is usually uneventful, taking place in a matter of hours or days, depending on the specific compound. Methemoglobinemia develops more rapidly with aromatic amines, such as aniline, than with nitro-aromatic compounds; with the latter, the reconversion of methemoglobin into hemoglobin is slower (several days). While the methemoglobin-forming effect is of an acute type, several significant chronic toxic effects have resulted from exposure to some of the members of this group. Liver toxicity, with hepatocellular necrosis, can be prominent, especially for polynitro-aromatic derivatives. Aplastic anemia is another severe effect, sometimes associated with the hepatotoxic effect, especially with trinitrotoluene. The major nitro- and amino-aromatic compounds include:

Aniline: C6H5NH2
Nitrobenzene: C6H5NO2
Dinitrobenzene: C6H4(NO2)2
Trinitrobenzene: C6H3(NO2)3
Dinitrotoluene: C6H3(CH3)(NO2)2
Trinitrotoluene: C6H2(CH3)(NO2)3
Nitrophenol: C6H4(OH)NO2
Dinitrophenol: C6H3(OH)(NO2)2
Tetranitromethylaniline (tetryl): C6H2(NO2)3N(CH3)NO2
Toluylenediamine: C6H3(CH3)(NH2)2
Xylidine: C6H3(CH3)2NH2
Phenylenediamine: C6H4(NH2)2
4,4′-Diaminodiphenylmethane (methylene dianiline): NH2(C6H4)CH2(C6H4)NH2
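The methemoglobin thresholds quoted above can be related to one another with simple arithmetic. The sketch below is illustrative only; the total hemoglobin value of 15 g/100 ml is an assumed representative figure, not from the text.

```python
# Relating the Met Hgb thresholds discussed above.
# Assumption (not from the text): total hemoglobin of 15 g/100 ml.

TOTAL_HGB = 15.0          # g/100 ml whole blood (assumed)
NORMAL_MET_HGB = 0.5      # g/100 ml, upper normal limit (from text)
CYANOSIS_MET_HGB = 1.5    # g/100 ml, level at which cyanosis is apparent (from text)
HYPOXIA_FRACTION = 0.20   # 20-25% of total Hgb produces marked hypoxia (from text)

def met_hgb_fraction(met_hgb_g_per_dl, total_hgb=TOTAL_HGB):
    """Return methemoglobin as a fraction of total hemoglobin."""
    return met_hgb_g_per_dl / total_hgb

# The cyanosis threshold of 1.5 g/100 ml is 10% of an assumed 15 g/100 ml total:
print(f"{met_hgb_fraction(CYANOSIS_MET_HGB):.0%}")      # prints: 10%
# Marked hypoxia begins around 20% of total, i.e. about 3 g/100 ml here:
print(f"{HYPOXIA_FRACTION * TOTAL_HGB:.1f} g/100 ml")   # prints: 3.0 g/100 ml
```

Note that cyanosis thus appears well before the hypoxic range is reached, consistent with the clinical course described above.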
Diazo-positive metabolites (DPM) have been proposed as biological indicators of aromatic nitro- and amino-compound absorption, including that of trinitrotoluene.
Nitrobenzene Nitrobenzene is a major chemical intermediate used mainly in the production of aniline. It is easily absorbed through the skin and the respiratory route and is known to have resulted in numerous cases of industrial poisoning. Its toxicity is higher than that of aniline, and liver and kidney damage are not unusual, although most often these are transitory. Anemia of moderate degree and Heinz bodies in the red blood cells may also be found. A major part of the absorbed dose is excreted into the urine: 10–20% of the dose is excreted as 4-nitrophenol, the concentration of which may be used for biological monitoring. Nitrobenzene was tested by inhalation exposure in one study in mice and in two studies in rats. In mice, the incidences of alveolar-bronchiolar neoplasms and thyroid follicular-cell adenomas were increased in males. In one study in rats, the incidences of hepatocellular neoplasms, thyroid follicular-cell adenomas and adenocarcinomas, and renal tubular-cell adenomas were increased in treated males. In treated females, the incidences of hepatocellular neoplasms and endometrial stromal polyps were increased. In a study using male rats only, the incidence of hepatocellular neoplasms was increased. IARC has concluded that nitrobenzene is possibly carcinogenic to humans (Group 2B).554
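The 10–20% urinary excretion figure above allows a rough back-estimate of the absorbed nitrobenzene dose from measured 4-nitrophenol. The following is a minimal sketch under stated assumptions: the urinary value is a hypothetical illustration, and real biomonitoring would account for collection timing and toxicokinetics.

```python
# Rough back-estimate of absorbed nitrobenzene from total urinary
# 4-nitrophenol, using the 10-20% excretion fraction quoted above.
# The 10 mg urinary value below is a hypothetical illustration.

MW_NITROBENZENE = 123.11   # g/mol
MW_4_NITROPHENOL = 139.11  # g/mol

def absorbed_dose_mg(urinary_4np_mg, excreted_fraction):
    """Estimate absorbed nitrobenzene (mg) from total urinary
    4-nitrophenol (mg), correcting for molar mass."""
    moles_4np = urinary_4np_mg / MW_4_NITROPHENOL
    # only this fraction of the absorbed dose appears as 4-nitrophenol:
    moles_nb = moles_4np / excreted_fraction
    return moles_nb * MW_NITROBENZENE

# Hypothetical 24-h urinary 4-nitrophenol of 10 mg; the 10-20% range
# brackets the estimated absorbed dose:
low = absorbed_dose_mg(10.0, 0.20)   # high-end excretion -> lower dose estimate
high = absorbed_dose_mg(10.0, 0.10)  # low-end excretion -> higher dose estimate
print(f"estimated absorbed dose: {low:.0f}-{high:.0f} mg")  # prints: estimated absorbed dose: 44-88 mg
```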
Dinitrobenzene Dinitrobenzene, especially the meta-isomer, is more toxic than both aniline and nitrobenzene. Liver injury, sometimes severe, may occur, even resulting in hepatocellular necrosis. Dinitrobenzene is a cerebellar neurotoxicant in rats,555 causing gliovascular lesions in the rat brainstem, with the nuclei of the auditory pathway being particularly affected.556 Dinitrobenzene is a testicular toxin, producing a lesion in the seminiferous tubules of the rat.557 Germ cell apoptosis in rat testis was evident after administration of 1,3-dinitrobenzene.558
Diseases Associated with Exposure to Chemical Substances
651
Nitrotoluene The nitrotoluenes can cause liver toxicity and nephropathy in rats. 2-Nitrotoluene (o-nitrotoluene) decreased sperm motility in mice. o-Nitrotoluene, administered in the feed for up to 2 years, caused clear evidence of cancer at multiple sites in rats and mice, including mesotheliomas, subcutaneous skin neoplasms, mammary gland fibroadenomas, and liver neoplasms in males; subcutaneous skin neoplasms and mammary gland fibroadenomas in females; and hemangiosarcomas and carcinomas of the cecum in both sexes.559 The cecal tumors have a morphology and a molecular profile of oncogenes and tumor suppressor genes characteristic of human colon cancer.560 o-Nitrotoluene causes hemangiosarcomas in mice, probably via p53 and beta-catenin mutations.561 2-Nitrotoluene exposure has been shown to be carcinogenic in rats and was associated with hemoglobin and DNA adduct formation.562 IARC in 1995 considered the nitrotoluenes not classifiable as to their carcinogenicity to humans (Group 3).563
Dinitrotoluene Dinitrotoluenes are used primarily as chemical intermediates in the production of toluene diamines and diisocyanates. Exposure to technical-grade dinitrotoluene can cause cyanosis (due to methemoglobinemia), anemia, and toxic hepatitis. The dinitrotoluenes are skin sensitizers. Hepatotoxicity in animals has been consistently demonstrated. 2,4-Dinitrotoluene was tested by oral administration in mice; tumors of the renal tubular epithelium were observed in males. In studies in rats, the incidence of various tumors of the integumentary system was increased in males. The incidence of hepatocellular carcinomas was increased in treated males and females in one study. The incidence of fibroadenomas of the mammary gland was increased in females in both studies. 2,6-Dinitrotoluene was tested for carcinogenicity in male rats; an increase in the incidence of hepatocellular neoplastic nodules and carcinomas was found.564 A cohort study of workers from a munitions factory in the United States found an increased risk for cancer of the liver and gallbladder among workers exposed to a mixture of 2,4- and 2,6-dinitrotoluenes, based on six cases. Recent studies have demonstrated that dinitrotoluene forms adducts with hemoglobin, the levels of which correlated with symptoms of toxicity among exposed workers, suggesting the possible usefulness of adduct assays as a biomonitoring approach.565 IARC in 1996 concluded that 2,4- and 2,6-dinitrotoluenes are possibly carcinogenic to humans (Group 2B).566
Trinitrotoluene Trinitrotoluene (TNT) has produced thousands of cases of industrial poisoning. The first reported cases occurred during World War I, and several hundred fatalities were reported from the ammunition industry in Great Britain and the United States. During World War II, there were another several hundred cases and a smaller number of fatalities in both countries.2 Absorption takes place through the skin and also through the respiratory and gastrointestinal routes. 2-Amino-4,6-dinitrotoluene and its isomers are the most common metabolites of 2,4,6-trinitrotoluene; p53 accumulation has been demonstrated in 2-amino-4,6-dinitrotoluene-treated cells, providing evidence of the potential carcinogenic effects of this metabolite.567 Functional disturbances of the gastrointestinal, central nervous, and cardiovascular systems, and skin irritation or eczematous lesions, may precede the development and clinical manifestations of toxic liver injury or aplastic anemia. Abdominal pain, loss of appetite, nausea, and hepatomegaly may be the first indications of toxic hepatitis. According to available records, toxic hepatitis developed in approximately one of 500 workers exposed, but the fatality rate was around 30%, and higher in some reported series. High urinary coproporphyrin levels are a feature of TNT-induced toxic hepatitis. Acute liver failure may develop rapidly and may be fatal. Massive subacute hepatocellular necrosis has
been found in fatal cases. A chronic, protracted course with development of cirrhosis was observed in other cases. Postnecrotic cirrhosis, becoming clinically evident as long as 10 years after apparent recovery from TNT-induced acute toxic hepatitis, has also been reported. Acute hemolytic anemia has been reported after TNT exposure of workers with glucose-6-phosphate dehydrogenase deficiency. Early equatorial cataracts were described in workers exposed to TNT. No adequate studies of the carcinogenicity of trinitrotoluene in humans have been reported. The levels of 4-amino-2,6-dinitrotoluene hemoglobin adducts were found to be statistically significantly associated with the risk of hepatomegaly, splenomegaly, and cataract formation among trinitrotoluene-exposed workers.568 Mutagenicity has been demonstrated in a Salmonella microsuspension system.569 In workers exposed to 2,4,6-trinitrotoluene, increased bacterial mutagenic activity was found in the urine. IARC in 1996 deemed 2,4,6-trinitrotoluene not classifiable as to its carcinogenicity to humans (Group 3), due to inadequate evidence in humans and animals.570 The effects of TNT on the male reproductive system in Fischer 344 rats included germ cell degeneration, the disappearance of spermatozoa in seminiferous tubules, and a dramatic decrease in sperm number in both the testis and epididymis. TNT increased the formation of 8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodG) in sperm, reflecting oxidative damage, whereas plasma testosterone levels did not decrease.571 Urinary metabolites of trinitrotoluene are 4-aminodinitrotoluene and 2-aminodinitrotoluene; they can be used for biological monitoring of exposed workers. Complete blood counts, bilirubin, prothrombin, liver enzyme (SGOT, SGPT, etc.) levels, and urinary coproporphyrins have been recommended in the medical surveillance of exposed workers.
Toluylenediamine Toluylenediamine can produce severe toxic liver damage, with massive hepatic necrosis.
Xylidine Xylidine has been shown to produce severe toxic hepatitis; postnecrotic cirrhosis has developed in experimental animals.
4,4′-Diaminodiphenylmethane More than 200 million pounds of 4,4′-diaminodiphenylmethane (methylene dianiline, MDA) are manufactured each year in the United States. It is widely used in the production of isocyanates and polyisocyanates, which are the basis for polyurethane foams. Other uses are as an epoxy hardener, as a curing agent for neoprene in the rubber industry, and as a raw material in the production of nylon and polyamide-imide resins. 4,4′-Diaminodiphenylmethane was the cause of an epidemic outbreak (84 cases) of toxic hepatitis with jaundice in Epping, England, in 1965 (an episode since known as “Epping jaundice”). The accidental spillage of the chemical from a plastic container and contamination of flour used for bread was the cause of this epidemic. Both the contaminated bread and the pure aromatic amine produced similar lesions in mice. In 1974, the first industrial outbreak of 13 cases of toxic hepatitis caused by 4,4′-diaminodiphenylmethane was reported. The aromatic amine had been used as an epoxy resin hardener for the manufacture of insulating material. The pattern of illness was similar to that described for the Epping epidemic, with abrupt onset, epigastric or right upper quadrant pain, fever, and jaundice. The duration of the illness ranged from one to seven weeks. Skin absorption had been important in some of the cases. Another small outbreak of methylene dianiline poisoning occurred when six of approximately 300 men who applied epoxy resins as a surface coat for concrete walls at the construction site of a nuclear power electricity-generating plant contracted toxic hepatitis two days to two weeks after starting work. The clinical picture was similar to the cases previously described. Methylene dianiline has been shown to produce hepatocellular necrosis in all animals tested, although there are species differences. Cirrhosis has developed in rats
and dogs in several experimental series. Nephrotoxicity has also been demonstrated in animal experiments. 4,4′-Diaminodiphenylmethane causes contact allergy. MDA can initiate vascular smooth muscle cell proliferation and vascular medial hyperplasia in rats.572 Limited data suggest that workers in the textile, dye, and rubber industries experience a higher incidence of gallbladder and biliary tract cancer than control groups.3 In view of the very large number of chemicals used, however, a direct association with MDA has not been established. Long-term observations on workers exposed only to chemicals of this group are almost nonexistent, and therefore no firm conclusions can be drawn about its carcinogenicity in humans. In a chronic feeding experiment on rats and mice, MDA was found to produce thyroid carcinoma, hepatocellular carcinoma, lymphomas, and pheochromocytomas. MDA is specifically activated to DNA-damaging reactive species by hepatocytes and thyroid cells in both rats and humans.573 NIOSH recommended that MDA be considered a potential human carcinogen and that exposures be controlled to the lowest feasible limit. IARC concluded that there is sufficient evidence for a carcinogenic effect of 4,4′-methylenedianiline in experimental animals to consider it a carcinogenic risk to humans, and the National Toxicology Program in 2002 considered MDA reasonably anticipated to be a human carcinogen.574
Dinitrochlorobenzenes Dinitrochlorobenzenes (DNCBs) are potent skin sensitizers,2 acting via induction of the type 1 cytokines interferon-gamma and IL-12,575 and are known testicular toxins in animals. Respiratory sensitization is not thought to occur. The toxicity of orally administered dinitrochlorobenzene in mice and rats included lesions affecting the liver, kidney, testis, and hematopoietic system. The liver was the most responsive to DNCB, as evidenced by a dose-related increase in relative liver weight in rats and mice and centrilobular hypertrophy of hepatocytes in mice. The kidney lesion was characterized by hyaline droplets in the renal tubular epithelial cells only in male rats. Testicular and hematopoietic lesions appeared at higher doses.576 Dinitrochlorobenzene caused a significant increase in sister chromatid exchange in cultured human skin fibroblasts.577 Mutagenicity has been demonstrated in Salmonella test systems.578
Paraphenylenediamine and Para-aminophenol Paraphenylenediamine and para-aminophenol are dye intermediates and are used mostly in the fur industry. They are potent skin and respiratory sensitizers. Severe occupational asthma is not unusual in exposed workers.2 Paraphenylenediamine was shown to induce sister chromatid exchanges in ovary cells of Chinese hamsters. Para-aminophenol was mutagenic in E. coli test systems.579 Para-aminophenol causes nephrotoxicity but not hepatotoxicity in the rat. Renal epithelial cells of the rat were shown to be intrinsically more susceptible to para-aminophenol cytotoxicity than are hepatocytes.580
4,4′-Methylene-Bis-Ortho-Chloroaniline 4,4′-Methylene-bis-ortho-chloroaniline (MOCA) is used mainly in the production of solid elastomeric parts, as a curing agent for epoxy resins, and in the manufacture of polyurethane foam. Absorption through inhalation and skin contact is possible. In rats, liver and lung cancer have followed the feeding of MOCA. Occupational exposure to MOCA was associated with an increased risk of bladder cancer. MOCA forms adducts with DNA, both in vitro and in vivo. Micronuclei frequencies were higher in the urothelial cells and lymphocytes of MOCA-exposed workers than in controls.581 An increased frequency of sister chromatid exchange was seen in a small number of workers exposed to MOCA. MOCA is comprehensively genotoxic. DNA adducts are formed by reaction with N-hydroxy-MOCA, and MOCA is genotoxic in bacteria and mammalian cells; the same major MOCA-DNA adduct is formed in the target tissues for carcinogenicity in animals (rat liver and lung; dog urinary bladder) as that found in urothelial cells from a man with
known occupational exposure to MOCA. IARC has classified MOCA as probably carcinogenic to humans (Group 2A).582 The National Toxicology Program in 2002 listed MOCA as an agent reasonably anticipated to be a human carcinogen. MOCA is included in the federal standard for carcinogens; all contact must be avoided.
Tetranitromethylaniline (Tetryl) Tetryl is a yellow solid used in explosives and as a chemical indicator.2 It can be absorbed by inhalation and through the skin. It is a potent irritant and sensitizer; allergic dermatitis can be extensive and severe. Anemia with hypoplastic bone marrow has occurred. In animal experiments, hepatotoxic and nephrotoxic effects have been detected.
Prevention and Control Adequate protective clothing and strict personal hygiene with careful cleaning of the entire body, including hair and scalp, are essential to minimize skin absorption, which is particularly hazardous with this group of substances. Clean work clothes should be supplied at the beginning of every shift. Soiled protective equipment must be immediately discarded. Adequate shower facilities and a mandatory shower at the end of the shift, as well as immediately after accidental spillage, are necessary. Respirators must be available for unexpected accidental overexposure. Medical surveillance should comprise dermatological examination and hematological, liver, and kidney function evaluation. Workers must be informed of the health hazards and educated and trained to use appropriate work practices and first-aid procedures for emergency situations.

ALIPHATIC AMINES
Aliphatic and alicyclic amines are derivatives of ammonia (NH3) in which one hydrogen atom (primary amines) or more (secondary or tertiary amines) are substituted by alkyl, alicyclic, or alkanol radicals (the last yielding the ethanolamines). They have a characteristic fishlike odor; most are gases or volatile liquids. They are widely used in industry; one of the most important applications is as “hardeners” (cross-linking agents) and catalysts for epoxy resins. Other uses are in the manufacture of pharmaceutical products, dyes, rubber, pesticides, fungicides, herbicides, emulsifying agents, and corrosion inhibitors. The amines form strongly alkaline solutions that can be very irritating to the skin and mucosae. Chemical burns of the skin can occur. Skin sensitization and allergic dermatitis have been reported.2 Some of the amines can produce bronchospasm, and cases of amine asthma have been documented.2 Corneal lesions may result from accidental contact with liquid amines or solutions of amines.
Prevention Appropriate engineering controls, protective clothing, eye protection (goggles), air-supplied respirators when concentrations exceeding the federal standard for exposure limits (from 3 to 10 ppm for various amines) are expected, and training programs for employees are necessary to prevent adverse effects due to exposure to these compounds.
ORGANIC NITROSO-COMPOUNDS
The organic nitroso-compounds comprise nitrosamines and nitrosamides, in which the nitroso-groups (–N=O) are attached to nitrogen atoms,
and C-nitroso-compounds in which the nitroso-groups are attached to carbon atoms. Nitrosamines are readily formed by the reaction of secondary amines with nitrous acid (nitrite in an acid medium).
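In schematic form, with R denoting an alkyl group, the nitrosation reaction described above can be written as:

```latex
% Nitrosation of a secondary amine by nitrous acid
% (nitrite under acidic conditions):
R_2NH + HNO_2 \longrightarrow R_2N\text{--}NO + H_2O
```

The acid medium is what converts nitrite to nitrous acid, which is why nitrosamine formation is favored in environments such as the stomach.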
A large number of N-nitroso-compounds are known; several dialkyl, heterocyclic, and aryl alkylnitrosamines show marked toxic activity, as do the N-nitrosamides N-nitrosomethylurea and N-methyl-N′-nitro-N-nitrosoguanidine. The nitrosamides are unstable in an alkaline medium, yielding the corresponding diazoalkanes; they are extensively used in synthetic organic chemistry for alkylating reactions. Toxicological interest in the N-nitroso-compounds was first aroused in 1954, when Barnes and Magee583 reported on the hepatotoxicity of dimethylnitrosamine. This compound had recently been introduced into a laboratory as a solvent, and two cases of clinically overt liver damage were etiologically linked to it. A search of the literature at that time revealed only a single short report of the toxic properties of dimethylnitrosamine (DMN). Hamilton and Hardy had reported in 1949 that the use of DMN in an automobile factory had been followed by illness in some of the exposed workers. Experiments on dogs showed DMN to be capable of producing severe liver injury.
As a solvent, DMN is highly toxic and dangerous to handle, although its volatility is relatively low. The absence of a specific odor or irritant properties may favor the absorption of toxic amounts without any warning; contamination of skin and clothes may pass unnoticed. Information on the industrial uses of nitrosamines is incomplete. A relatively large patent literature indicates many potential applications. The manufacture of rubber, dyes, lubricating oils, explosives, insecticides and fungicides, the electrical industry, and the industrial applications of hydrazine chemistry appear to be the main uses for nitrosamines. The use of DMN as an intermediate in the manufacture of 1,1-dimethylhydrazine is well known. N-nitrosodiphenylamine is used in the rubber industry as a vulcanizing retarder, and dinitrosopentamethylene-tetramine is used as a blowing agent in the production of microcellular rubber. DMN produced severe liver injury in rats, rabbits, mice, guinea pigs, and dogs. Centrilobular and midzonal necrosis, depletion of glycogen and fat deposition, and dilation of sinusoidal spaces were the prominent changes in the acute stage. Hemorrhagic peritoneal exudate and bleeding into the lumen of the gut were striking features; such changes are not encountered in liver injury caused by carbon tetrachloride, phosphorus, or beryllium. Repeated doses were found to result in fibrosis of the liver. Increases in fibrosis-related gene transcripts, including alpha-SMA, transforming growth factor-beta 1, connective tissue growth factor, tissue inhibitor of metalloproteinase-1, and procollagen I and III, have been identified in the livers of dimethylnitrosamine-intoxicated rats.584 DMN was shown to induce, besides typical centrilobular necrosis, veno-occlusive lesions in the liver in animals followed for longer periods after a high, nearly lethal dose.
Prolonged oral administration of relatively low doses of dimethylnitrosamine resulted in gross, nodular cirrhosis of the liver; with lower doses, longer survival of the animals was achieved, and several malignant liver-cell-type tumors occurred. Tumor necrosis factor alpha and its receptor were shown to play a role in DMN-related hepatotoxicity in the mouse.585 The hepatocarcinogenicity of DMN was reported in 1956.586 The metabolic degradation of DMN in the liver proceeds through enzymatic oxidative demethylation, yielding a carcinogenic metabolite. In 1962 Magee and Farber,587 by administering 14C-DMN to rats, were able to demonstrate the methylation of nucleic acids in the liver, especially at the N7 position of guanine. Thus an alteration of the genetic information in the hepatocyte was detected and was considered the basis for the carcinogenic effect. This was the first experimental proof of such a molecular alteration of DNA by a carcinogen. The discovery of the role of drug-metabolizing microsomal enzymes in the biotransformation of DMN into a carcinogen opened an important field of investigation. Similar pathways were found to be effective for another compound of this group, diethylnitrosamine.589 The activation of DMN via microsomal metabolism occurs in the hepatocytes, although liver tumors arise from non-parenchymal cells, suggesting intercellular transport of the carcinogenic metabolites.588 The acute hepatotoxic effect of N-nitroso compounds is also caused by the alkylating intermediate metabolites. The acute toxicity is due to alkylation of proteins and enzymes, while the carcinogenic effect is related to the alkylation of nucleic acids. Several fundamentally important observations were also made by Druckrey and coworkers:589

1. A carcinogenic effect of a single dose of some of these compounds was demonstrated (tumors developed after various latency periods), and the kidney, liver, esophagus, stomach, and CNS were the main organs in which the primary tumors were detected.
2. The site of the primary malignant tumor was found to be, for certain compounds, in a clear relationship with the administered dose.
3. DMN was shown to be a more potent carcinogen than diethylnitrosamine.
4. The transplacental carcinogenicity of DMN was demonstrated; hepatocarcinogenicity was detected in offspring of treated pregnant rats.
5. Di-n-butyl-nitrosamine induced hepatocellular carcinoma and cirrhosis of the liver when administered orally in relatively high amounts. With the gradual decrease of the dose, fewer hepatocellular carcinomas and more cancers of the esophagus and the urinary bladder were found.

Diamylnitrosamine resulted in hepatocellular carcinoma when given in high doses. Subcutaneous injections resulted in squamous cell and alveolar cell carcinoma of the lung, in addition to relatively few hepatocellular carcinomas. This finding was thought to be important, since it indicated that lung cancer can develop not only after inhalation of carcinogens but also as a result of absorption of carcinogens through other routes. Cyclic N-nitroso compounds (N-nitrosopyrrolidine, -morpholine, -carbethoxypyperazine) were also found to produce hepatocellular carcinomas. Heterocyclic nitrosamines (N-nitrosoazetidine, N-nitrosohexamethyleneimine, N-nitrosomorpholine, N-nitrosopyrrolidine, and N-nitrosopiperidine) result in characteristic hepatic centrilobular necrosis; they have also been shown to produce a high incidence of tumors of the liver and other organs. The earliest change in the liver is the development of foci of altered hepatocytes, demonstrated histochemically by changes in the activities of glucose-6-phosphate dehydrogenase and glycogen phosphorylase, and in the glycogen content. Proliferating cells have been detected by immunohistochemical reaction for proliferating cell nuclear antigen. The number and size of foci of altered hepatocytes increased in a time- and dose-related manner.590 Pancreatic cancer developed in Syrian hamsters after subcutaneous administration of three nitrosamines, including N-nitroso-2,6-dimethylmorpholine. Ras-oncogene activation was investigated in bladder tumors of male rats given N-butyl-N-(4-hydroxybutyl)nitrosamine. Enhanced expression of p21 was detected in all tumors.
The tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is a potent carcinogen in laboratory animals. Analysis of DNA for K-ras mutation showed a G → A transition at codon 12 of the K-ras oncogene in tumor cells derived from pancreatic duct cells treated with NNK.591 In human esophageal cancers, no ras gene mutations but a relatively high prevalence of p53 gene mutations have been reported. A high prevalence of point mutations in Ha-ras and p53 genes was found in N-nitrosomethylbenzylamine (NMBA)-induced esophageal tumors in rats. The prevalent mutations were G → A.592 The carcinogenic properties of N-nitroso compounds are associated with their ability to alkylate DNA, in particular to form O6-alkylguanine and O4-alkylthymine.593 The carcinogenicity of NMBA has been shown to be reduced by dietary factors (e.g., strawberries, blackberries, grape seed extract) that decrease the formation of DNA-damaging intermediates.594 The dialkylnitrosamines, stable compounds, are decomposed only by enzymatic action and result in cell damage after having undergone an enzymatic activation process in organs that have adequate enzymatic systems. The toxic, mutagenic, teratogenic, and carcinogenic effects of nitroso-compounds all depend on this biologic activation by enzymatic reactions. Inhibition of hepatic microsomal enzymatic systems by a protein-deficient diet has been shown to result in a decrease in dimethylnitrosamine toxicity, confirming that the hepatotoxic effect is dependent on microsomal enzymatic activation. The predominant effect of the dialkylnitrosamines is liver injury, the characteristic lesion being a hemorrhagic type of centrilobular necrosis. This specificity of action is related to the fact that these compounds require metabolic transformation-activation for their toxic effect.
The enzymatic systems effective for these metabolic transformations are present in highest amounts in the microsomal fraction of the liver, but also in the kidney, lung, and esophagus. Species differences have been documented; these metabolic differences parallel differences in the main site of effects— toxic, carcinogenic, or both. In contrast to the relative chemical stability of nitrosamines, the nitrosamides show varying degrees of instability. Many of these compounds yield diazoalkanes when treated with alkali, and they are extensively used in the synthetic chemical industry.
The nitrosamides differ in their effects from the nitrosamines; they have a local irritation effect at the site of administration, and some have marked local cytopathic action, sometimes resulting in severe tissue necrosis. N-methyl-N-nitrosourethane causes severe necrotic lesions of the gastric mucosa and also periportal liver necrosis. In addition to their local action, some of the nitrosamides have a radiomimetic effect on organs with rapid cell turnover, with the bone marrow, lymphoid tissue, and small intestine being injured most. Several substances of the nitrosamide group are known to induce cancer at the site of chronic application. Morpholine is widely used in industry as a solvent for waxes, dyes, pigments, and casein; it has also found applications in the rubber industry.
As an anticorrosive agent and as an emulsifier (after reaction with fatty acids), morpholine is used in the manufacture of cleaning products. Long considered a relatively nontoxic substance, morpholine was also used in the food industry, in the coating of fresh fruit and vegetables (fatty acid salts of morpholine), and for anticorrosive treatment of metals (including those to be used in the food industry). Industrial occupational exposure and household exposure are therefore quite frequent. Absorption of morpholine through the oral route may, in the presence of nitrites from alimentary sources, result in the production of hazardous gastric levels of nitrosamine. In the rubber industry, efforts have been made to replace amino-compounds that can generate N-nitrosamines in accelerators with “safe” amino components. Derivatives of the dithiocarbamate and sulfenemide class were synthesized and found to be suitable for industrial application. The organic N-nitroso-compounds are characterized by marked acute liver toxicity; chronic absorption of smaller amounts has been shown to result in cirrhosis in experimental animals. Initial reports of human cases of postnecrotic cirrhosis, however, have not been followed by other reports on human effects. Suitable epidemiological data are not yet available on the real incidence of toxic liver damage, cirrhosis of the liver, hepatocellular carcinoma, and other malignant tumors in industrially exposed populations. Altered p53 expression has been demonstrated in the early stages of N-nitrosomorpholine-induced rodent hepatocarcinogenesis.595 Ethanol has been shown to enhance the hepatocarcinogenesis of N-nitrosomorpholine, related to increased ornithine decarboxylase activity and cell proliferation in enzyme-altered lesions.596 The presence of nitrosamines in cutting oils has been reported.1 The formation of nitrosamines had been suspected, since nitrites and aliphatic amines are known constituents of some cutting fluids.
Concentrations of nitrosamines up to 3% have been found in randomly selected cutting oils; metal machining operators using cutting oils may, therefore, be significantly exposed to nitrosamines. Semisynthetic cutting oils and the synthetic cutting fluids most often contain amines as a soluble base and nitrites as additives. NIOSH estimated that almost 800,000 persons are occupationally exposed in the manufacture and use of cutting fluids, and issued guidelines for industrial hygiene practices in an effort to minimize skin and respiratory exposure.
Environmental Nitrosamines The possibility that exposure to compounds of the nitrosamine group may occur in situations other than the industrial environment was revealed by an outbreak of severe liver disease in sheep in Norway in 1960. Severe necrosis of the liver was the main pathologic feature.
Diseases Associated with Exposure to Chemical Substances
The sheep had been fed fish meal preserved with nitrite. This suggested that nitrosamines may have resulted from the reaction between secondary and tertiary amines present in the fish meal and the nitrites added as a preservative. The presence of dimethylnitrosamine at levels of 30–100 ppm was detected. Subsequently, the presence of nitrosamines in small amounts in food for human consumption has been documented. Smoked fish, smoked sausage, ham and bacon, mushrooms, some fruits, and alcoholic beverages (from areas in Africa with a high incidence of esophageal cancer) have been shown to contain various amounts of nitrosamines (0.5–40 µg/kg). Nitrosamines can be formed in the human stomach from secondary amines and nitrites. The methylation of nucleic acids of the stomach, liver, and small intestine in rats given 14C-methyl urea and sodium nitrite simultaneously has also been demonstrated, and malignant liver and esophageal tumors in rats have resulted from simultaneous feeding of morpholine or N-methylbenzylamine and sodium nitrite. Several bacterial species—E. coli, E. dispar, Proteus vulgaris, and Serratia marcescens—can form nitrosamines from secondary amines. The bacterial reduction of nitrate to nitrite in the human stomach has also been shown. Tobacco-specific nitrosamines have been identified and have received considerable attention. Nicotine and the minor tobacco alkaloids give rise to tobacco-specific N-nitrosamines (TSNAs) during tobacco processing and during smoking (≤ 25 µg/g), and in the mainstream smoke of cigarettes (1.3 TSNA/cigarette). In mice, rats, and hamsters, three TSNAs, N-nitrosonornicotine (NNN), 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK), and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL), are powerful carcinogens; two TSNAs are moderately active carcinogens, and two TSNAs appear not to be carcinogenic. The TSNAs are procarcinogens that require metabolic activation.
The active forms react with cellular components, including DNA, and with hemoglobin. The Hb adducts serve as biomarkers in smokers or tobacco chewers, and the urinary excretion of NNAL is an indicator of TSNA uptake. The TSNAs contribute to the increased risk of upper digestive tract cancer in tobacco chewers and lung cancer in smokers. In laboratory animals, DNA adduct formation and carcinogenicity of tobacco-specific N-nitrosamines are closely correlated.597 The high incidence of cancer of the upper digestive tract in the Indian subcontinent has been causally associated with chewing of betel quid mixed with tobacco. Betel quid is the source of four N-nitrosamines from the Areca alkaloids; two of these are carcinogenic.598 Human cytochrome P450 2A subfamily members play important roles in the mutagenic activation of essentially all betel quid-related N-nitrosamines tested.599 The TSNAs NNN and NNK are metabolites of nicotine and are the major carcinogens in cigarette smoke. In fetal human lung cells exposed to NNN and NNK, a dose-dependent increase in DNA single-strand breaks was observed. In combination with enzymatically generated oxygen radicals, strand breakage increased by approximately 50% for both NNN and NNK.600 The tobacco-specific nitrosamine NNK produces DNA single-strand breaks (SSB) in hamster and rat liver. DNA SSB reached a maximum at 12 hours after treatment and persisted 2–3 weeks, reflecting deficient repair of some DNA lesions.601 Chromosomal abnormalities were significantly more frequent in the peripheral blood lymphocytes of women following in vitro exposure to NNK compared with those of men, suggesting a greater risk of tobacco-related malignancy for women.602 NNK injected subcutaneously or instilled intratracheally into pregnant hamsters resulted in a high incidence of respiratory tract tumors in offspring; target organs included the adrenal glands and the pancreas.
The results suggested that NNK, at doses comparable to the cumulative exposure during a 9-month period in women, is a potent transplacental carcinogen in hamsters.603 Evidence of nitrosamine-induced DNA damage was found in the increased levels of 8-oxodeoxyguanosine and 8-hydroxydeoxyguanosine (8-OH-dG) in tissue DNA of mice and rats treated with the tobacco-specific nitrosamine NNK. These lesions were detected in lung DNA and liver DNA, but not in rat kidney (a nontarget tissue). These findings support the role of oxidative DNA damage in NNK lung tumorigenesis.604 NNK produced pulmonary tumors in adult mice treated with
a single dose (100 mg/kg i.p.). Progression of pulmonary lesions was noted from hyperplasia through adenomas to carcinomas (to 54 weeks). DNA was isolated from 20 hyperplasias, and activation of the K-ras gene was found in 17 lesions, 85% of the mutations involving a GC → AT transition within codon 12, a mutation consistent with base mispairing produced by the formation of the O6-methylguanine adduct.605 NNK stimulated cell proliferation in normal human bronchial epithelial cells and small airway epithelial cells in culture through activation of nuclear factor κB, which in turn up-regulated cyclin D1 expression.606 4-(Methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) is a potent carcinogen in adult rodents and is variably effective transplacentally, depending on species. NNK was tested in infant mice; at 13–15 months, 57% of NNK-exposed male offspring had hepatocellular tumors; a lower occurrence (14%) was found in female offspring. In addition, primary lung tumors were also found in 57% of males and 37% of females. These results call attention to the possibility that human infants may be especially vulnerable to tumor initiation by tobacco smoke constituents.607 The number of lung tumors and forestomach tumors in mice given 6.8 ppm N-nitrosodiethylamine was considerably increased when 10% ethanol was also added. Ethanol increased lung tumor multiplicity 5.5-fold when N-nitrosopyrrolidine was given. It is thought that coadministered ethanol increases the tumorigenicity of nitrosamines by blocking hepatic first-pass clearance.608 Numerous epidemiological studies have established that asbestos causes occupational lung cancer and mesothelioma; a cocarcinogenic effect of cigarette smoking on the incidence of lung cancer in asbestos workers has been well documented.
In an experimental study on rats, chrysotile asbestos was administered intratracheally, N-bis(hydroxypropyl)nitrosamine (DHPN) was injected intraperitoneally, and the animals were exposed to smoke from 10 cigarettes/day for their entire life span. Lung tumors were detected in one of 31 rats receiving only asbestos; they occurred in 22% of rats receiving DHPN alone and in 60% of the rats receiving DHPN and asbestos. Thus the cocarcinogenic effect of asbestos with nitrosamine exposure was clearly demonstrated in an animal model.609 The Areca-derived 3-(methylnitrosamino)propionitrile (MNPN), when tested on mouse skin, produced multiple distant tumors in the lungs. When applied by swabbing the oral cavity, strong organ-specific carcinogenicity resulted in nasal tumors, lung adenomas, liver tumors, and papillomas of the esophagus, with relatively few oral tumors.610 Certain environmentally relevant nitrosamines specifically induce malignant tumors of the urinary bladder in several animal species. Butyl-3-carboxypropylnitrosamine, methyl-3-carboxypropylnitrosamine, and methyl-5-carboxypropylnitrosamine were found to be beta-oxidized by mitochondrial fractions to butyl-2-oxopropylnitrosamine or methyl-2-oxopropylnitrosamine. By this reaction, water-soluble carboxylated nitrosamines of low genotoxic potential are converted into rather lipophilic 2-oxopropyl metabolites with high genotoxic and carcinogenic potency.611 In northeast Thailand, the consumption of raw freshwater fish and salt-fermented fish results in repeated exposure to liver fluke (Opisthorchis viverrini) infection and ingestion of nitrosamine-contaminated food. A high prevalence of cholangiocarcinoma is known to exist in this region. Syrian golden hamsters receiving subcarcinogenic doses of dimethylnitrosamine (DMN) together with fluke infection developed cholangiocarcinomas.
Nitrosamines are considered to be genotoxicants, while liver flukes are assumed to play an epigenetic role.612 Samples of food frequently consumed in Kashmir, a high-risk area for esophageal cancer, revealed high levels of N-nitrosodimethylamine, N-nitrosopiperidine, and N-nitrosopyrrolidine in smoked fish, sun-dried spinach, dried mixed vegetables, and dried pumpkin.613 A reduction of the high exposures to N-nitrosamines in the rubber and tire industry is possible by using vulcanization accelerators that contain amine moieties that are both difficult to nitrosate and, on nitrosation, yield noncarcinogenic N-nitroso compounds. The toxicological and technological properties of some 50 benzothiazole sulfenamides derived from such amines have been evaluated.614
Laboratory research conducted over the last 30 years has identified the organic nitroso-compounds as some of the most potent carcinogens, mutagens, and teratogens for a variety of animal species. The possibility of nitrosamine formation from nitrites (or nitrates) and secondary or tertiary amines in the stomach, and the possibility of a similar effect attributable to microorganisms normally present in the gut and frequently in the urinary tract, suggest a potential hazard for the population at large. The identification of tobacco-specific nitrosamines and of nitrosamines in betel and in foodstuffs in areas with high cancer incidence emphasizes the growing importance of this group of chemical carcinogens.
EPOXY COMPOUNDS
Epoxy compounds are cyclic ethers characterized by the presence of an epoxide ring.
These ethers, with an oxygen attached to two adjacent carbons, readily react with amino, hydroxyl, and carboxyl groups and also with inorganic acids to form relatively stable compounds. The epoxide group is very reactive and can form covalent bonds with biologically important molecules. Industrial applications have expanded rapidly in the manufacture of epoxy resins, plasticizers, surface-active agents, solvents, etc. Most epoxy resins are prepared by reacting epichlorhydrin with a polyhydroxy compound, most frequently bisphenol A, in the presence of a curing agent (cross-linking agents—“hardeners,” mainly polyamines or anhydrides of polybasic acids, such as phthalic anhydride). Catalysts include polyamides and tertiary amines; diluents such as glycidyl ethers, styrene, styrene oxide, or other epoxides are sometimes used to achieve lower viscosity of uncured epoxy resin systems. Epoxy compounds can adversely affect the skin, the mucosae, the airways, and the lungs; some have hepatotoxic and neurotoxic effects. Most epoxy compounds are very potent irritants (eyes, airways, skin), and they can produce pulmonary edema. Skin lesions can be due to the irritant effect or to sensitization. Respiratory sensitization can also occur. Carcinogenic effects in experimental models have been demonstrated for several epoxy compounds.
Epichlorhydrin Epichlorhydrin (1-chloro-2,3-epoxypropane, CH2OCH-CH2Cl) is a colorless liquid with a boiling point of 116.4°C. The most important uses are in the manufacture of epoxy resins, surface-active agents, insecticides and other agricultural chemicals, coatings, adhesives, plasticizers, glycidyl ethers, cellulose esters and ethers, paints, varnishes, and lacquers.2,3 Absorption through inhalation and skin is of practical importance. Epichlorhydrin is a strong irritant of the eyes, respiratory tract, and skin. Obstructive airway disease was found to be related to epichlorhydrin exposure in workers; GST polymorphism influenced the risk of airway obstruction.615 Skin contact may result in dermatitis, occasionally with marked erythema and blistering. Skin sensitization with allergic dermatitis has also been reported. Severe systemic effects have been reported in a few cases of human overexposure: these included nausea, vomiting, dyspnea, abdominal pain, hepatomegaly, jaundice, and abnormal liver function tests. In experimental studies, nephrotoxic effects have been found; an adverse effect on liver mixed-function microsomal enzymes has also been reported. In experiments on rats, epichlorhydrin was found to
significantly decrease the cytochrome P450 content of microsomes isolated from the liver, kidney, testes, lung, and small intestine mucosa. An excess of lung cancer was observed among a small number of workers employed in the production of epichlorohydrin. A nested case-control study within this population found a weak association between epichlorohydrin and lung cancer. Another nested case-control study based on the same cohort found a weak association with central nervous system tumors. A small excess of lung cancer was observed in another cohort, but in a third no excess of cancer was observed. In a case-control study of lung cancer nested within another cohort of chemical workers, a significantly decreased risk of lung cancer was associated with epichlorohydrin exposure. All results were based on relatively small numbers. Epichlorohydrin by mouth caused papillomas and carcinomas of the forestomach and, by inhalation, induced papillomas and carcinomas of the nasal cavity in rats. It produced local sarcomas in mice after subcutaneous injection and was active in a mouse-lung tumor bioassay by intraperitoneal injection.616 Epichlorhydrin is considered a bifunctional alkylating agent; it reacts with nucleophilic molecules by forming covalent bonds; cross-linking bonds may also be formed. These chemical characteristics are believed to be of importance for its carcinogenic, mutagenic, and reproductive effects. Epichlorhydrin forms adducts with DNA.617 Chromosomal aberrations have been found in exposed workers. Workers with high epichlorhydrin exposure also had significantly higher sister chromatid exchange frequencies than those with low or no exposure.618 Epichlorhydrin exposure in vitro had significant effects on sister chromatid exchange frequencies in lymphocyte cultures of human subjects.619 Several experimental studies suggest that interference with male reproductive function can result from epichlorhydrin exposure.
In rats, epichlorhydrin was found to produce progressive testicular atrophy, reduction of sperm concentration, and an increase in the number of morphologically abnormal spermatozoa. Testicular function was studied in epichlorhydrin-exposed workers; no effects were demonstrated. Epichlorhydrin did not produce teratogenic effects in rats, rabbits, or mice.
Prevention The recommended standard3 for exposure to epichlorhydrin is 2 mg/m3 (0.5 ppm), with a ceiling of 19 mg/m3 (5 ppm) not to exceed 15 minutes. IARC concluded in 1999 that epichlorhydrin is probably carcinogenic to humans (Group 2A).
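The paired mg/m3 and ppm values quoted for such exposure limits follow from the standard vapor-concentration conversion. A minimal Python sketch, assuming the conventional molar volume of 24.45 L/mol (25°C, 1 atm) and a molecular weight of 92.52 g/mol for epichlorhydrin; these constants are supplied here for illustration, not taken from the text:

```python
# Converting an occupational exposure limit between mg/m3 and ppm (v/v).
# At 25 degrees C and 1 atm, 1 mol of an ideal gas occupies ~24.45 L, so:
#   ppm = (mg/m3) * 24.45 / MW
MOLAR_VOLUME_L_PER_MOL = 24.45   # assumed molar volume at 25 C, 1 atm
MW_EPICHLOROHYDRIN = 92.52       # assumed molecular weight of C3H5ClO, g/mol

def mg_m3_to_ppm(mg_m3: float, mw: float) -> float:
    """Convert a vapor concentration from mg/m3 to ppm (v/v)."""
    return mg_m3 * MOLAR_VOLUME_L_PER_MOL / mw

def ppm_to_mg_m3(ppm: float, mw: float) -> float:
    """Convert a vapor concentration from ppm (v/v) to mg/m3."""
    return ppm * mw / MOLAR_VOLUME_L_PER_MOL

# The 2 mg/m3 recommended standard works out to roughly 0.5 ppm,
# and the 19 mg/m3 ceiling to roughly 5 ppm, matching the rounded
# equivalences given in the text.
print(round(mg_m3_to_ppm(2.0, MW_EPICHLOROHYDRIN), 2))   # 0.53
print(round(mg_m3_to_ppm(19.0, MW_EPICHLOROHYDRIN), 2))  # 5.02
```

The same relation, with the appropriate molecular weight, reproduces the ppm/mg-per-cubic-meter pairs quoted for the other compounds in this section.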
Ethylene Oxide Ethylene oxide (1,2-epoxyethane, (CH2)2O) is a colorless gas used in the organic synthesis of ethylene glycol and glycol derivatives, ethanolamines, acrylonitrile, polyester fibers and film, and surface-active agents; it has been used as a pesticide fumigant and for sterilization of surgical equipment. Ethylene oxide is highly reactive and potentially explosive; it is relatively stable in aqueous solutions or when diluted with halogenated hydrocarbons or carbon dioxide. Ethylene oxide is a high-volume production chemical; production capacity in the United States was 6.1 billion pounds a year in 1981. Exposure to ethylene oxide is very limited in chemical plants, where it is produced and used for intermediates, mostly in closed systems. Maintenance and repair work, sampling, loading and unloading, and accidental leaks can generate exposure. Although only a small proportion of ethylene oxide is used in the health care and medical equipment manufacturing industries, and even less for sterilization of equipment in medical care facilities, NIOSH has estimated that more than 75,000 employees in sterilization areas have been exposed; concentrations as high as hundreds of parts per million were found on occasion, mostly in the vicinity of malfunctioning or inadequate equipment. Absorption occurs through inhalation. Ethylene oxide is a strong irritant, especially in aqueous solutions. Severe dermatitis and even
chemical burns, marked eye irritation, and toxic pulmonary edema have occurred with high concentrations. The presence of lens opacities in combination with loss of visual acuity was found to be significantly increased among sterilization workers exposed to ethylene oxide, when compared with unexposed controls.620 Allergic dermatitis may develop. With high levels of exposure, CNS depression with drowsiness, headaches, and even loss of consciousness has occurred. Six workers accidentally exposed acutely to ethylene oxide experienced nausea, vomiting, chest tightness, shortness of breath, dizziness, cough, and ocular irritation. One worker had transient loss of consciousness.621 A number of cases of sensory-motor peripheral neuropathy have been reported in personnel performing sterilization with ethylene oxide. Removal from exposure resulted in gradual improvement over several months. A cluster of 12 operating-room nurses and technicians developed symptoms after a five-month exposure to high levels of ethylene oxide in disposable surgical gowns. All patients reported a rash on the wrist where contact was made with the gowns, headaches, and hand numbness with weakness. Ten of the 12 patients complained of memory loss. Neurologic evaluation revealed neuropathy on examination in 9 of the 12 patients, elevated vibration threshold in 4 of 9, abnormal pressure threshold in 10 of 11, atrophy on head MRI in 3 of 10, and neuropathy on conduction studies in 4 of 10. Neuropsychological testing demonstrated mild cognitive impairment in four of six patients. Sural nerve biopsy in the most severely affected patient showed findings of axonal injury.622 Distal axonal degenerative changes have been shown in rats exposed to 500 ppm for 13 weeks.
In rats chronically exposed to ethylene oxide (500 ppm for 6 hours/day, 3 days/week for 15 weeks), the distal portions of the sural nerve showed degenerative changes in myelinated fibers and fewer large myelinated fibers in the distal peroneal nerve, with a decrease in the velocity of anterograde axonal transport.623 Studies of sterilization personnel have found that mortality from lymphatic and hematopoietic cancer was only marginally elevated, but a significant trend was found, especially for lymphatic leukemia and non-Hodgkin's lymphoma, in relation to estimated cumulative exposure to ethylene oxide. For exposure at a level of 1 ppm over a working lifetime (45 years), a rate ratio of 1.2 was estimated for lymphatic and hematopoietic cancer. Three other studies of workers involved in sterilization (two in Sweden and one in the United Kingdom) each showed nonsignificant excesses of lymphatic and hematopoietic cancer. In a study of chemical workers exposed to ethylene oxide at two plants in the United States, the mortality rate from lymphatic and hematopoietic cancer was elevated, but the excess was confined to a small subgroup with only occasional low-level exposure to ethylene oxide. Six other studies in the chemical industry (two in Sweden, one in the United Kingdom, one in Italy, one in the United States, and one in Germany) were based on fewer deaths. Four found excesses of lymphatic and hematopoietic cancer (significant in two), and in two, the numbers of such tumors were as expected from control rates.624 Ethylene oxide has been shown to be mutagenic in several assay systems, including human fibroblasts.625 Covalent binding to DNA has been demonstrated. Sterilization plant workers have been shown to exhibit evidence of DNA damage.
DNA strand breaks, alkali-labile sites of DNA, and DNA cross-links were seen in excess in peripheral mononuclear blood cells, compared with findings in unexposed controls.626 The frequency of hemoglobin adducts and sister chromatid exchanges (SCEs) in peripheral blood cells increased with cumulative exposure to ethylene oxide among hospital workers.627 Increased frequencies of HPRT mutants, chromosomal aberrations, micronuclei, and sister chromatid exchanges have been reported among sterilization plant workers.628 Chromosomal aberrations and sister chromatid exchanges have been found to occur with significantly increased frequency in workers exposed to ethylene oxide at concentrations not exceeding a TWA of 50 ppm (but with occasional excursions to 75 ppm). Exposures near or below 1 ppm among workers in a hospital sterilization unit were associated with increased hemoglobin adduct formation and SCEs, independent of smoking history.629 In general,
the degree of damage is correlated with level and duration of exposure. The induction of sister chromatid exchange appears to be more sensitive to ethylene oxide exposure than is that of either chromosomal aberrations or micronuclei. In one study, chromosomal aberrations were observed in the peripheral lymphocytes of workers two years after cessation of exposure to ethylene oxide, and sister chromatid exchanges six months after cessation of exposure.630 Adverse reproductive effects (reduced numbers of pups per litter, fewer implantation sites, and a reduced ratio of fetuses to number of implantation sites) were observed in rats exposed to 100 ppm ethylene oxide. An increased proportion of congenital malformations (mostly skeletal) was also reported. The effect occurred predominantly when exposure occurred during the zygotic period rather than during organogenesis.631 Genotoxic effects on male germ cells in postmeiotic stages have been demonstrated in both Drosophila and the mouse.632 Significant effects on fetal deaths and resorptions, malformations, crown-to-rump length, and fetal weight were found in ethylene oxide-exposed female mice.633 Testicular damage following ethylene oxide exposure in rats has been reported, with specific but reversible injury to Sertoli cells.634 Women hospital employees exposed to ethylene oxide were found to have a higher incidence of miscarriages than a comparison group. In 1981, NIOSH recommended that ethylene oxide be regarded in the workplace as a potential carcinogen and that appropriate controls be used to reduce exposure. This recommendation was based on the results of a carcinogenicity assay clearly indicating that ethylene oxide can produce malignant tumors in experimental animals. In a chronic inhalation study, mononuclear cell leukemias and peritoneal mesotheliomas were found to be significantly increased in ethylene oxide-exposed rats; both were dose-related and occurred at concentrations of 33 ppm.
Ethylene oxide induced uterine adenocarcinomas in mice in a two-year inhalation study.635 A mortality study of workers in a Swedish ethylene oxide plant636 showed an increased incidence of total cancer deaths, with leukemia and stomach cancer accounting for most of these excess cancer deaths. Other chemical exposures (including some well-known carcinogens) had also been possible in that plant. An excess of leukemia was also found in another plant, in which 50% ethylene oxide and 50% methyl formate were used for sterilization of hospital equipment.637 The small number of observed deaths and the complex chemical exposures do not allow definitive conclusions regarding the human evidence of ethylene oxide carcinogenicity, although it is entirely consistent with the experimental data. A more recent study of the mortality experience among 18,254 United States sterilization plant workers (4.9 years average exposure duration and 16 years of follow-up), with 8-hour TWAs averaging 4.3 ppm, reported a significant trend toward increased mortality with increasing length of time since first exposure for all hematopoietic cancers; among men, but not women, there was a significant increase in mortality from hematopoietic cancers.642 A follow-up study of this cohort revealed an internal exposure-response trend for hematopoietic cancers limited to males (15-year lag). The trend in hematopoietic cancer was driven by lymphoid tumors (non-Hodgkin's lymphoma, myeloma, lymphocytic leukemia), which also had a positive trend with cumulative exposure for males with a 15-year lag.638 The current (1987) TLV for ethylene oxide is 1 ppm. IARC has concluded that ethylene oxide is carcinogenic to humans (Group 1).622
Glycidyl Ethers Glycidyl ethers are characterized by the glycidyl (2,3-epoxypropyl ether) group: an epoxide ring linked through a methylene bridge to an ether oxygen.
Their most important use is for epoxy resins; the diglycidyl ether of bisphenol A, one of the basic resin ingredients, is produced by reacting bisphenol A with epichlorhydrin. Glycidyl ethers are also used as diluents, to reduce the
viscosity of uncured epoxy resin systems. These find applications in protective coatings, bonding materials, reinforced plastics, etc. NIOSH estimates that about one million workers are exposed to epoxy resins; it is difficult to reach an accurate estimate of the number exposed to glycidyl ethers, but it is probably around 100,000 workers. Evidence has accumulated indicating that the epoxy resin glycidyl methacrylate is genotoxic and forms DNA adducts. Glycidyl ethers are irritants of the skin and mucosae; dermatitis and sensitization have been reported. In experimental studies, an adverse effect on spermatogenesis and testicular atrophy have resulted from glycidyl ether exposure of several species (rats, mice, rabbits) at concentrations as low as 2–3 ppm. A potent effect on lymphoid tissue, including atrophy of the thymus and of lymph nodes, low white blood cell counts, or bone marrow toxicity, has also been reported in rats, rabbits, and dogs. Information on immunosuppressive or myelotoxic effects in humans is not available, and the possibility that such effects have not been detected in the past cannot be excluded. The present federal standard PELs are listed below:

Allyl glycidyl ether: 5 ppm
n-Butyl glycidyl ether: 25 ppm
Diglycidyl ether: 0.1 ppm
Isopropyl glycidyl ether: 50 ppm
Phenyl glycidyl ether: 1 ppm
IARC has classified phenyl glycidyl ether as possibly carcinogenic to humans (Group 2B), based on evidence of carcinogenicity in animals.639,640

REFERENCES
1. Browning E. Toxicity and Metabolism of Industrial Solvents. Amsterdam: Elsevier; 1965. 2. Finkel AJ. Hamilton and Hardy's Industrial Toxicology. 4th ed. Boston: John Wright; 1983. 3. U.S. Department of Health, Education, and Welfare, Public Health Service, CDC, NIOSH. Criteria for a Recommended Standard Occupational Exposure to: Trichloroethylene, 1978; Benzene, 1974; Carbon Tetrachloride, 1976; Carbon Disulfide, 1977; Alkanes (C5–C8), 1977; Refined Petroleum Solvents, 1977; Ketones, 1978; Toluene, 1973; Xylene, 1975; Trichloroethylene, 1978; Chloroform, 1974; Epichlorhydrin, 1976; Ethylene Dichloride (1,2-dichloroethane), 1978 (revised); Ethylene Dichloride (1,2-dichloroethane), 1976; Ethylene Dibromide, 1977; Methyl Alcohol, 1976; Isopropyl Alcohol, 1976; Acrylamide, 1976; Formaldehyde, 1977. Washington, DC: Government Printing Office. 4. Cavanagh JB. Peripheral neuropathy caused by chemical agents. CRC Crit Rev Toxicol. 1973;2:365–417. 5. Spencer PS, Schaumburg HH. A review of acrylamide neurotoxicity. II. Experimental animal neurotoxicity and pathologic mechanisms. Can J Neurol Sci. 1974;152–69. 6. Spencer PS, Schaumburg HH. Experimental neuropathy produced by 2,5-hexanedione—a major metabolite of the neurotoxic industrial solvent methyl n-butyl ketone. J Neurol Neurosurg Psychiatry. 1975;38(8):771–5. 7. Borbely F. Erkennung und Behandlung der organischen Lösungsmittelvergiftungen. Bern: Medizinischer Verlag Hans Huber; 1947. 8. Olsen J, Sabroe S. A case-reference study of neuropsychiatric disorders among workers exposed to solvents in the Danish wood and furniture industry. Scand J Soc Med. 1980;16:44–9. 9. Mikkelson S. A cohort study of disability pension and death among painters with special regard to disabling presenile dementia as an occupational disease. Scand J Soc Med. 1980;16:34–43. 10. Juntunen J, Hupli V, Hernberg S, Luisto M. Neurological picture of organic solvent poisoning in industry: a retrospective clinical study of 37 patients.
Int Arch Occup Environ Health. 1980;46(3):219–31.
11. Escobar A, Aruffo C. Chronic thinner intoxication: clinicopathologic report of a human case. J Neurol Neurosurg Psychiatry. 1980;43(11):986–94. 12. Arlien-Søborg P, Henriksen L, Gade A, Gyldensted C, Paulson OB. Cerebral blood flow in chronic toxic encephalopathy in house painters exposed to organic solvents. Acta Neurol Scand. 1982;66(1):34–41. 13. Sasa M, Igarashi S, Miyazaki T, et al. Equilibrium disorders with diffuse brain atrophy in long-term toluene sniffing. Arch Otorhinolaryngol. 1978;221(3):163–9. 14. Chang YC. Neurotoxic effects of n-hexane on the human central nervous system: evoked potential abnormalities in n-hexane polyneuropathy. J Neurol Neurosurg Psychiatry. 1987;50(3):269–74. 15. Yamamura Y. N-hexane polyneuropathy. Folia Psychiatr Neurol Jpn. 1969;23:45–57. 16. Aksoy M, Erdem S, Dincol G. Types of leukemia in chronic benzene poisoning: a study in thirty-four patients. Acta Haematol. 1976;55:65–72. 17. Chang CM, Yu CW, Fong KY, et al. N-hexane neuropathy in offset printers. J Neurol Neurosurg Psychiatry. 1994;56(5):538–42. 18. Kurihara K, Kita K, Hattori T, Hirayama K. N-hexane polyneuropathy due to sniffing bond G10: clinical and electron microscope findings. Brain Nerve (Tokyo). 1986;38(11):1011–17. 19. Hall DMB, Ramsey J, Schwartz MS, Dookun D. Neuropathy in a petrol sniffer. Arch Dis Child. 1986;61(9):900–1. 20. Oryshkevich RS, Wilcox R, Jhee WH. Polyneuropathy due to glue exposure: case report and 16-year follow-up. Arch Phys Med Rehabil. 1986;67(11):827–8. 21. De Martino C, Malorni W, Amantini MC, Barcellona PS, Frontali N. Effects of respiratory treatment with n-hexane on rat testis morphology. I. A light microscopic study. Exp Mol Pathol. 1987;46(2):199–216. 22. Khedun SM, Maharaj B, Naicker T. Hexane cardiotoxicity: an experimental study. Isr J Med Sci. 1996;32(2):123–8. 23. Karakaya A, Yucesoy B, Burgaz S, Sabir HU, Karakaya AE. Some immunological parameters in workers occupationally exposed to n-hexane. Hum Exp Toxicol.
1996;15(1):56–8. 24. Mayan O, Teixeira JP, Alves S, Azevedo C. Urinary 2,5-hexanedione as a biomarker of n-hexane exposure. Biomarkers. 2002;7(4):299–305. 25. Zhu M, Spink DC, Yan B, Bank S, DeCaprio AP. Inhibition of 2,5-hexanedione-induced protein cross-linking by biological thiols: chemical mechanisms and toxicological implications. Chem Res Toxicol. 1995;8(5):764–71. 26. Huang J, Kato K, Shibata E, Asaeda N, Takeuchi Y. Nerve-specific marker proteins as indicators of organic solvent neurotoxicity. Environ Res. 1993;63(1):82–7. 27. Perbellini L, Mozzo P, Brugnone F, Zedde A. Physiologicomathematical model for studying human exposure to organic solvents: kinetics of blood/tissue n-hexane concentrations and of 2,5-hexanedione in urine. Br J Ind Med. 1986;43(11):760–8. 28. Perbellini L, Amantini MC, Brugnone F, Frontali N. Urinary excretion of n-hexane metabolites: a comparative study in rat, rabbit and monkey. Arch Toxicol. 1982;50(3–4):203–15. 29. Fedtke N, Bolt HM. Detection of 2,5-hexanedione in the urine of persons not exposed to n-hexane. Int Arch Occup Environ Health. 1986;57(2):143–8. 30. Ahonen I, Schimberg RW. 2,5-Hexanedione excretion after occupational exposure to n-hexane. Br J Ind Med. 1988;45(2):133–6. 31. Governa M, Calisti R, Coppa G, Tagliavento G, Colombi A, Troni W. Urinary excretion of 2,5-hexanedione and peripheral polyneuropathies in workers exposed to hexane. J Toxicol Environ Health. 1987;20(3):219–28. 32. Ichihara G, Saito I, Kamijima M, et al. Urinary 2,5-hexanedione increases with potentiation of neurotoxicity in chronic coexposure
to n-hexane and methyl ethyl ketone. Int Arch Occup Environ Health. 1998;71(2):100–4. 33. Fedtke N, Bolt HM. The relevance of 4,5-dihydroxy-2-hexanone in the excretion kinetics of n-hexane metabolites in rat and man. Arch Toxicol. 1987;61(2):131–7. 34. Daughtrey WC, Neeper-Bradley T, Duffy J, et al. Two-generation reproduction study on commercial hexane solvent. J Appl Toxicol. 1994;14(5):387–93. 35. Daughtrey WC, Putman DL, Duffy J, et al. Cytogenetic studies on commercial hexane solvent. J Appl Toxicol. 1994;14(3):161–5. 36. Takeuchi Y, Ono Y, Hisanaga N. An experimental study on the combined effects of n-hexane and toluene on the peripheral nerve of the rat. Br J Ind Med. 1981;38(1):14–9. 37. DiVincenzo GD, Kaplan CJ, Dedinas J. Characterization of the metabolites of methyl n-butyl ketone, methyl iso-butyl ketone, methyl ethyl ketone in guinea pigs and their clearance. Toxicol Appl Pharmacol. 1976;36:511–22. 38. DeCaprio AP. Molecular mechanisms of diketone neurotoxicity. Chem Biol Interact. 1985;54(3):257–70. 39. Nemec MD, Pitt JA, Topping DC, et al. Inhalation two-generation reproductive toxicity study of methyl isobutyl ketone in rats. Int J Toxicol. 2004 Mar–Apr;23(2):127–43. 40. Johnson W, Jr. Safety assessment of MIBK (methyl isobutyl ketone). Int J Toxicol. 2004;23, Suppl. 1:29–57. 41. Schwetz BA, Mast TJ, Weigel RJ, Dill JA, Morrisey RE. Developmental toxicity of inhaled methyl ethyl ketone in Swiss mice. Fundam Appl Toxicol. 1991;16(4):742–8. 42. O’Donoghue JL, Krasavage WJ, DiVincenzo GD, Ziegler PA. Commercial grade methyl heptyl ketone (5-methyl-2-octanone) neurotoxicity: contribution of 5-nonanone. Toxicol Appl Pharmacol. 1982;62(6):307–16. 43. Misumi J, Nagano M. Experimental study on the enhancement of the neurotoxicity of methyl n-butyl ketone by non-neurotoxic aliphatic monoketones. Br J Ind Med. 1985;42(3):155–61. 44. U.S. Department of Health and Human Services, Public Health Service, Centers for Disease Control, National Institute for Occupational Safety and Health. NIOSH Current Intelligence Bulletin 41: 1,3-Butadiene. Washington, DC: NIOSH, Feb 9, 1984. 45. Kim Y, Hong HH, Lachat Y, et al. Genetic alterations in brain tumors following 1,3-butadiene exposure in B6C3F1 mice. Toxicol Pathol. 2005;33(3):307–12. 46. Maronpot RR. Ovarian toxicity and carcinogenicity in eight recent national toxicology program studies. Environ Health Perspect. 1987;73:125–30. 47. Irons RD, Smith CN, Stillman WS, Shah RS, Steinhagen WH, Leiderman LJ. Macrocytic-megaloblastic anemia in male NIH Swiss mice following repeated exposure to 1,3-butadiene. Toxicol Appl Pharmacol. 1986;85(3):450–5. 48. Sprague CL, Elfarra AA. Protection of rats against 3-butene-1,2-diol-induced hepatotoxicity and hypoglycemia by N-acetyl-L-cysteine. Toxicol Appl Pharmacol. 2005 Sep 15;207(3):266–74. 49. International Agency for Research on Cancer. 1,3-Butadiene. Monogr Eval Carcinog Risks Hum. 1999;71:109. 50. Downs TD, Crane MM, Kim KW. Mortality among workers at a butadiene facility. Am J Ind Med. 1987;12(3):311–29. 51. Schlade-Bartusiak K, Rozik K, Laczmanska I, Ramsey D, Sasiadek M. Influence of GSTT1, mEH, CYP2E1 and RAD51 polymorphisms on diepoxybutane-induced SCE frequency in cultured human lymphocytes. Mutat Res. 2004;558(1–2):121–30. 52. Norppa H, Sorsa M. Genetic toxicity of 1,3-butadiene and styrene. IARC Sci Publ. 1993;127:185–93. 53. deMeester C. Genotoxic properties of 1,3-butadiene. Mutat Res. 1988;195(1–4):273–81. 54. Boysen G, Georgieva NI, Upton PB, et al. Analysis of diepoxide-specific cyclic N-terminal globin adducts in mice and rats after inhalation exposure to 1,3-butadiene. Cancer Res. 2004;64(23):8517–20. 55. Schmiederer M, Knutson E, Muganda P, Albrecht T. Acute exposure of human lung cells to 1,3-butadiene diepoxide results in G1 and G2 cell cycle arrest. Environ Mol Mutagen. 2005;45(4):354–64. 56. Dahl AR, Birnbaum LS, Bond JA, Gervasi PG, Henderson RF. The fate of isoprene inhaled by rats: comparison to butadiene. Toxicol Appl Pharmacol. 1987;89(2):237–48. 57. Austin CC, Wang D, Ecobichon DJ, Dussault G. Characterization of volatile organic compounds in smoke at municipal structural fires. J Toxicol Environ Health A. 2001;63(6):437–58. 58. Steffen C, Auclerc MF, Auvrignon A, et al. Acute childhood leukemia and environmental exposure to potential sources of benzene and other hydrocarbons; a case-control study. Occup Environ Med. 2004;61(9):773–8. 59. Avogbe PH, Ayi-Fanou L, Autrup H, et al. Ultrafine particulate matter and high-level benzene urban air pollution in relation to oxidative DNA damage. Carcinogenesis. 2005;26(3):613–20. Epub 2004 Dec 9. 60. Turteltaub KW, Mani C. Benzene metabolism in rodents at doses relevant to human exposure from urban air. Res Rep Health Eff Inst. 2003;(113):1–26; discussion 27–35. 61. Cocco P, Tocco MG, Ibba A, et al. Trans,trans-muconic acid excretion in relation to environmental exposure to benzene. Int Arch Occup Environ Health. 2003;76(6):456–60. Epub 2003 Apr 9. 62. Marrubini G, Castoldi AF, Coccini T, Manzo L. Prolonged ethanol ingestion enhances benzene myelotoxicity and lowers urinary concentrations of benzene metabolite levels in CD-1 male mice. Toxicol Sci. 2003;75(1):16–24. Epub 2003 Jun 12. 63. Wan J, Shi J, Hui L, et al. Association of genetic polymorphisms in CYP2E1, MPO, NQO1, GSTM1, and GSTT1 genes with benzene poisoning. Environ Health Perspect. 2002;110(12):1213–8. 64. Iskander K, Jaiswal AK. Quinone oxidoreductases in protection against myelogenous hyperplasia and benzene toxicity. Chem Biol Interact. 2005;153–4:147–57. Epub 2005 Apr 7. 65. Bauer AK, Faiola B, Abernethy DJ, et al. Genetic susceptibility to benzene-induced toxicity: role of NADPH:quinone oxidoreductase-1. Cancer Res. 2003;63(5):929–35. 66. Yoon BI, Hirabayashi Y, Kawasaki Y, et al. Aryl hydrocarbon receptor mediates benzene-induced hematotoxicity. Toxicol Sci. 2002;70(1):150–6. 67. Chang RL, Wong CQ, Kline SA, Conney AH, Goldstein BD, Witz G. Mutagenicity of trans,trans-muconaldehyde and its metabolites in V79 cells. Environ Mol Mutagen. 1994;24(2):112–5. 68. Zhang L, Robertson ML, Kolachana P, Davison AJ, Smith MT. Benzene metabolite, 1,2,4-benzenetriol, induces micronuclei and oxidative DNA damage in human lymphocytes and HL60 cells. Environ Mol Mutagen. 1993;21(4):339–48. 69. Morimoto K, Wolff S. Increase in sister chromatid exchanges and perturbations of cell division kinetics in human lymphocytes by benzene metabolites. Cancer Res. 1980;40(4):1189–93. 70. Greenburg L. Benzol poisoning as an industrial hazard. VII. Results of medical examination and clinical tests made to discover early signs of benzol poisoning in exposed workers. Public Health Rep. 1926;41:1526–39. 71. Greenburg L, Mayers MR, Goldwater L, Smith AR. Benzene (benzol) poisoning in the rotogravure printing industry in New York City. J Ind Hyg Toxicol. 1939;21:295–420. 72. Savilahti M. More than 100 cases of benzene poisoning in a shoe factory. Arch Gewerbepathol Gewerbehyg. 1956;15:147–57. 73. Farris GM, Robinson SN, Gaido KW, et al. Benzene-induced hematotoxicity and bone marrow compensation in B6C3F1 mice. Fundam Appl Toxicol. 1997;36(2):119–29. 74. Rothman N, Li GL, Dosemeci M, et al. Hematotoxicity among Chinese workers heavily exposed to benzene. Am J Ind Med. 1996;29(3):236–46.
75. Lan Q, Zhang L, Li G, et al. Hematotoxicity in workers exposed to low levels of benzene. Science. 2004;306(5702):1774–6. 76. Xu JN, Wu CL, Chen Y, Wang QK, Li GL, Su Z. Effect of the polymorphism of myeloperoxidase gene on the risk of benzene poisoning. Zhonghua Lao Dong Wei Sheng Zhi Ye Bing Za Zhi. 2003; 21(2):86–9. 77. Yoon BI, Hirabayashi Y, Kawasaki Y, et al. Mechanism of action of benzene toxicity: cell cycle suppression in hemopoietic progenitor cells (CFU-GM). Exp Hematol. 2001;29(3):278–85. 78. Chen KM, El-Bayoumy K, Cunningham J, Aliaga C, Li H, Melikian AA. Detection of nitrated benzene metabolites in bone marrow of B6C3F1 mice treated with benzene. Chem Res Toxicol. 2004; 17(3):370–7. 79. Renz JF, Kalf GF. Role for interleukin-1 (IL-1) in benzene-induced hematotoxicity: inhibition of conversion of pre-IL-1 alpha to mature cytokine in murine macrophages by hydroquinone and prevention of benzene-induced hematotoxicity in mice by IL-1 alpha. Blood. 1991;78(4):938–44. 80. Hazel BA, O’Connor A, Niculescu R, Kalf GF. Induction of granulocytic differentiation in a mouse model by benzene and hydroquinone. Environ Health Perspect. 1996;104, Suppl. 6:1257–64. 81. Kalf GF, Renz JF, Niculescu R. p-Benzoquinone, a reactive metabolite of benzene, prevents the processing of pre-interleukins-1 alpha and -1 beta to active cytokines by inhibition of the processing enzymes, calpain, and interleukin-1 beta converting enzyme. Environ Health Perspect. 1996;104, Suppl. 6:1251–6. 82. Vigliani EC. Leukemia associated with benzene exposure. Ann NY Acad Sci. 1976;271:143–51. 83. Aksoy M, Erdem S, Dincol G. Types of leukemia in chronic benzene poisoning: a study in thirty-four patients. Acta Haematol. 1976;55: 65–72. 84. Rinsky RA, Young RJ, Smith AB. Leukemia in benzene workers. Am J Ind Med. 1981;2(3):217–45. 85. Ishimaru T, Okada H, Tomiyasu T, et al. Occupational factors in the epidemiology of leukemia in Hiroshima and Nagasaki. Am J Epidemiol. 1971;93:157–65. 86. 
Hayes RB, Yin SN, Dosemeci M, et al. Mortality among benzene-exposed workers in China. Environ Health Perspect. 1996;104, Suppl. 6:1349–52. 87. Snyder CA, Goldstein BD, Sellakumar AR, Albert RE. Evidence for hematotoxicity and tumorigenesis in rats exposed to 100 ppm benzene. Am J Ind Med. 1984;5(6):429–34. 88. Maltoni C, Conti B, Cotti G. Benzene: a multipotential carcinogen; results of long-term bioassays performed at the Bologna Institute of Oncology. Am J Ind Med. 1983;4(5):589–630. 89. Cronkite EP. Benzene hematotoxicity and leukemogenesis. Blood Cells. 1986;12:129–37. 90. NTP. Toxicology and Carcinogenesis Studies of Benzene. Research Triangle Park, NC: National Toxicology Program; 1986. 91. Rinsky RA, Alexander B, Smith MD, et al. Benzene and leukemia: an epidemiological risk assessment. N Engl J Med. 1987;316:1044–50. 92. Lagorio S, Tagesson C, Forastiere F, Iavarone I, Axelson O, Carere A. Exposure to benzene and urinary concentrations of 8-hydroxydeoxyguanosine, a biological marker of oxidative damage to DNA. Occup Environ Med. 1994;51(11):739–43. 93. Liu L, Zhang Q, Feng J, Deng L, Zeng N, Yang A, Zhang W. The study of DNA oxidative damage in benzene-exposed workers. Mutat Res. 1996;370(3–4):145–50. 94. Rushmore T, Snyder R, Kalf G. Covalent binding of benzene and its metabolites to DNA in rabbit bone marrow mitochondria in vitro. Chem Biol Interact. 1984;49(1–2):133–54. 95. Gaskell M, McLuckie KI, Farmer PB. Comparison of the mutagenic activity of the benzene metabolites, hydroquinone and para-benzoquinone in the supF forward mutation assay: a role for minor
DNA adducts formed from hydroquinone in benzene mutagenicity. Mutat Res. 2004;554(1–2):387–98. 96. Snyder R, Witz G, Goldstein BD. The toxicology of benzene. Environ Health Perspect. 1993;100:293–306. 97. Forni A, Cappellini A, Pacifico E, Vigliani EC. Chromosome changes and their evolution in subjects with past exposure to benzene. Arch Environ Health. 1971;23:285–391. 98. Forni A. Benzene-induced chromosome aberrations: a follow-up study. Environ Health Perspect. 1996;104, Suppl. 6:1309–12. 99. Tunca BT, Egeli U. Cytogenetic findings on shoe workers exposed long-term to benzene. Environ Health Perspect. 1996;104(6):1313–7. 100. Andreoli C, Leopardi P, Crebelli R. Detection of DNA damage in human lymphocytes by alkaline single cell gel electrophoresis after exposure to benzene or benzene metabolites. Mutat Res. 1997;377(1):95–104. 101. Zhang L, Rothman N, Wang Y, et al. Interphase cytogenetics of workers exposed to benzene. Environ Health Perspect. 1996;104, Suppl. 6:1325–9. 102. Styles J, Richardson CR. Cytogenetic effects of benzene: dosimetric studies on rats exposed to benzene vapour. Mutat Res. 1984;135(3):203–9. 103. Angelosanto FA, Blackburn GR, Schreiner CA, Mackerer CR. Benzene induces a dose-responsive increase in the frequency of micronucleated cells in rat Zymbal glands. Environ Health Perspect. 1996;104, Suppl. 6:1331–6. 104. Tice RR, Vogt TF, Costa DL. Cytogenetic effects of inhaled benzene in murine bone marrow. In: Genotoxic Effects of Airborne Agents. Environ Sci Res. 1982;25:257–75. 105. Ciranni R, Barale R, Adler ID. Dose-related clastogenic effects induced by benzene in bone marrow cells and in differentiating spermatogonia of Swiss CD1 mice. Mutagenesis. 1991;6(5):417–21. 106. Stronati L, Farris A, Pacchierotti F. Evaluation of chromosome painting to assess the induction and persistence of chromosome aberrations in bone marrow cells of mice treated with benzene. Mutat Res. 2004;545(1–2):1–9. 107. Giver CR, Wong R, Moore DH, II, Pallavicini MG. Dermal benzene and trichloroethylene induce aneuploidy in immature hematopoietic subpopulations in vivo. Environ Mol Mutagen. 2001;37(3):185–94. 108. Smith MT, Zhang L, Jeng M, et al. Hydroquinone, a benzene metabolite, increases the level of aneusomy of chromosomes 7 and 8 in human CD34-positive blood progenitor cells. Carcinogenesis. 2000;21(8):1485–90. 109. Stillman WS, Varella-Garcia M, Irons RD. The benzene metabolites hydroquinone and catechol act in synergy to induce dose-dependent hypoploidy and -5q31 in a human cell line. Leuk Lymphoma. 1999;35(3–4):269–81. 110. Abernethy DJ, Kleymenova EV, Rose J, Recio L, Faiola B. Human CD34+ hematopoietic progenitor cells are sensitive targets for toxicity induced by 1,4-benzoquinone. Toxicol Sci. 2004;79(1):82–9. Epub 2004 Feb 19. 111. Silva Mdo C, Gaspar J, Duarte Silva I, Faber A, Rueff J. GSTM1, GSTT1, and GSTP1 genotypes and the genotoxicity of hydroquinone in human lymphocytes. Environ Mol Mutagen. 2004;43(4):258–64. 112. Gowans ID, Lorimore SA, McIlrath JM, Wright EG. Genotype-dependent induction of transmissible chromosomal instability by gamma-radiation and the benzene metabolite hydroquinone. Cancer Res. 2005;65(9):3527–30. 113. Amin RP, Witz G. DNA-protein crosslink and DNA strand break formation in HL-60 cells treated with trans,trans-muconaldehyde, hydroquinone and their mixtures. Int J Toxicol. 2001;20(2):69–80. 114. Hutt AM, Kalf GF. Inhibition of human DNA topoisomerase II by hydroquinone and p-benzoquinone, reactive metabolites of benzene. Environ Health Perspect. 1996;104, Suppl. 6:1265–9.
115. Lindsey RH Jr, Bromberg KD, Felix CA, Osheroff N. 1,4-Benzoquinone is a topoisomerase II poison. Biochemistry. 2004;43(23):7563–74. 116. Frantz CE, Chen H, Eastmond DA. Inhibition of human topoisomerase II in vitro by bioactive benzene metabolites. Environ Health Perspect. 1996;104, Suppl. 6:1319–23. 117. Eastmond DA, Schuler M, Frantz C, Chen H, Parks R, Wang L, Hasegawa L. Characterization and mechanisms of chromosomal alterations induced by benzene in mice and humans. Res Rep Health Eff Inst. 2001;(103):1–68; discussion 69–80. 118. Boley SE, Wong VA, French JE, Recio L. p53 heterozygosity alters the mRNA expression of p53 target genes in the bone marrow in response to inhaled benzene. Toxicol Sci. 2002;66(2):209–15. 119. Rivedal E, Witz G. Metabolites of benzene are potent inhibitors of gap-junction intercellular communication. Arch Toxicol. 2005;79(6):303–11. Epub 2005 Feb 3. 120. Wan J, Winn LM. The effects of benzene and the metabolites phenol and catechol on c-Myb and Pim-1 signaling in HD3 cells. Toxicol Appl Pharmacol. 2004;201(2):194–201. 121. Pfeiffer E, Metzler M. Interaction of p-benzoquinone and p-biphenoquinone with microtubule proteins in vitro. Chem Biol Interact. 1996;102(1):37–53. 122. Rivedal E, Witz G. Metabolites of benzene are potent inhibitors of gap-junction intercellular communication. Arch Toxicol. 2005;79(6):303–11. Epub 2005 Feb 3. 123. Ward CO, Kuna RA, Snyder NK, Alsaker RD, Coate WB, Craig PH. Subchronic inhalation toxicity of benzene in rats and mice. Am J Ind Med. 1985;7:457–73. 124. Xing SG, Shi X, Wu ZL, et al. Transplacental genotoxicity of triethylenemelamine, benzene, and vinblastine in mice. Teratog Carcinog Mutagen. 1992;12(5):223–30. 125. Chen H, Wang X, Xu L. Effects of exposure to low-level benzene and its analogues on reproductive hormone secretion in female workers. Zhonghua Yu Fang Yi Xue Za Zhi. 2001;35(2):83–6. 126. Liu XX, Tang GH, Yuan YX, Deng LX, Zhang Q, Zheng LK. 
Detection of the frequencies of numerical and structural chromosome aberrations in sperm of benzene series-exposed workers by multi-color fluorescence in situ hybridization. Yi Chuan Xue Bao. 2003;30(12):1177–82. 127. Messerschmitt J. Bone-marrow aplasias during pregnancy. Nouv Rev Fr Hematol. 1972;12:115–28. 128. Brown-Woodman PD, Webster WS, Picker K, Huq F. In vitro assessment of individual and interactive effects of aromatic hydrocarbons on embryonic development of the rat. Reprod Toxicol. 1994;8(2):121–35. 129. Robinson SN, Shah R, Wong BA, Wong VA, Farris GM. Immunotoxicological effects of benzene inhalation in male Sprague-Dawley rats. Toxicology. 1997;119(3):227–37. 130. Farris GM, Robinson SN, Wong BA, Wong VA, Hahn WP, Shah R. Effects of benzene on splenic, thymic, and femoral lymphocytes in mice. Toxicology. 1997;118(2–3):137–48. 131. Geiselhart LA, Christian T, Minnear F, Freed BM. The cigarette tar component p-benzoquinone blocks T-lymphocyte activation by inhibiting interleukin-2 production, but not CD25, ICAM-1, or LFA-1 expression. Toxicol Appl Pharmacol. 1997;143(1):30–6. 132. Li Q, Geiselhart L, Mittler JN, Mudzinski SP, Lawrence DA, Freed BM. Inhibition of human T lymphoblast proliferation by hydroquinone. Toxicol Appl Pharmacol. 1996;139(2):317–23. 133. Yu R, Weisel CP. Measurement of the urinary benzene metabolite trans,trans-muconic acid from benzene exposure in humans. J Toxicol Environ Health. 1996;48(5):453–77. 134. Cocco P, Tocco MG, et al. trans,trans-Muconic acid excretion in relation to environmental exposure to benzene. Int Arch Occup Environ Health. 2003;76(6):456–60. Epub 2003 Apr 9.
135. Gobba F, Rovesti S, Borella P, Vivoli R, Caselgrandi E, Vivoli G. Inter-individual variability of benzene metabolism to trans,trans-muconic acid and its implications in the biological monitoring of occupational exposure. Sci Total Environ. 1997;199(1–2):41–8. 136. Qu Q, Shore R, Li G, et al. Validation and evaluation of biomarkers in workers exposed to benzene in China. Res Rep Health Eff Inst. 2003;(115):1–72; discussion 73–87. 137. Nakajima T, Wang RS. Induction of cytochrome P450 by toluene. Int J Biochem. 1994;26(12):1333–40. 138. Furman GM, Silverman DM, Schatz RA. Inhibition of rat lung mixed-function oxidase activity following repeated low-level toluene inhalation: possible role of toluene metabolites. J Toxicol Environ Health A. 1998;54(8):633–45. 139. Flanagan RJ, Ives RJ. Volatile substance abuse. Bull Narc. 1994;46(2):49–78. 140. Spiller HA. Epidemiology of volatile substance abuse (VSA) cases reported to U.S. poison centers. Am J Drug Alcohol Abuse. 2004;30(1):155–65. 141. Knox JW, Nelson JR. Permanent encephalopathy from toluene inhalation. N Engl J Med. 1966;275:1494–6. 142. Fornazzari L, Wilkinson DA, Kapur BM, Carlen PL. Cerebellar, cortical and functional impairment in toluene abusers. Acta Neurol Scand. 1983;67(6):319–29. 143. Streicher HA, Gabow PA, Moss AH, Kano D, Kaehny WD. Syndromes of toluene sniffing in adults. Ann Intern Med. 1981;94(6):758–62. 144. Uzun N, Kendirli Y. Clinical, socio-demographic, neurophysiological and neuropsychiatric evaluation of children with volatile substance addiction. Child Care Health Dev. 2005;31(4):425–32. 145. Kamran S, Bakshi R. MRI in chronic toluene abuse: low signal in the cerebral cortex on T2-weighted images. Neuroradiology. 1998;40(8):519–21. 146. Pryor GT. A toluene-induced motor syndrome in rats resembling that seen in some human solvent abusers. Neurotoxicol Teratol. 1991;13(4):387–400. 147. Riegel AC, French ED. 
Abused inhalants and central reward pathways: electrophysiological and behavioral studies in the rat. Ann N Y Acad Sci. 2002;965:281–91. 148. Chan MH, Chen HH. Toluene exposure increases aminophylline-induced seizure susceptibility in mice. Toxicol Appl Pharmacol. 2003;193(2):303–8. 149. von Euler G, Ogren SO, Li XM, Fuxe K, Gustafsson JA. Persistent effects of subchronic toluene exposure on spatial learning and memory, dopamine-mediated locomotor activity and dopamine D2 agonist binding in the rat. Toxicology. 1993;77(3):223–32. 150. Soulage C, Perrin D, Berenguer P, Pequignot JM. Sub-chronic exposure to toluene at 40 ppm alters the monoamine biosynthesis rate in discrete brain areas. Toxicology. 2004;196(1–2):21–30. 151. Meulenberg CJ, Vijverberg HP. Selective inhibition of gamma-aminobutyric acid type A receptors in human IMR-32 cells by low concentrations of toluene. Toxicology. 2003;190(3):243–8. 152. Huang J, Asaeda N, Takeuchi Y, et al. Dose dependent effects of chronic exposure to toluene on neuronal and glial cell marker proteins in the central nervous system of rats. Br J Ind Med. 1992;49(4):282–6. 153. Baydas G, Reiter RJ, Nedzvetskii VS, et al. Melatonin protects the central nervous system of rats against toluene-containing thinner intoxication by reducing reactive gliosis. Toxicol Lett. 2003;137(3):169–74. 154. Cruz SL, Mirshahi T, Thomas B, Balster RL, Woodward JJ. Effects of the abused solvent toluene on recombinant N-methyl-D-aspartate and non-N-methyl-D-aspartate receptors expressed in Xenopus oocytes. J Pharmacol Exp Ther. 1998;286(1):334–40. 155. Vrca A, Bozicevic D, Bozikov V, Fuchs R, Malinar M. Brain stem evoked potentials and visual evoked potentials in relation to the
length of occupational exposure to low levels of toluene. Acta Med Croatica. 1997;51(4–5):215–9. 156. Schaper M, Demes P, Zupanic M, Blaszkewicz M, Seeber A. Occupational toluene exposure and auditory function: results from a follow-up study. Ann Occup Hyg. 2003;47(6):493–502. 157. Lataye R, Campo P, Loquet G. Toluene ototoxicity in rats: assessment of the frequency of hearing deficit by electrocochleography. Neurotoxicol Teratol. 1999;21(3):267–76. 158. Lataye R, Campo P. Combined effects of a simultaneous exposure to noise and toluene on hearing function. Neurotoxicol Teratol. 1997;19(5):373–82. 159. Campo P, Lataye R, Cossec B, Villette V, Roure M, Barthelemy C. Combined effects of simultaneous exposure to toluene and ethanol on auditory function in rats. Neurotoxicol Teratol. 1998;20(3):321–32. 160. Johnson AC. The ototoxic effect of toluene and the influence of noise, acetyl salicylic acid, or genotype. A study in rats and mice. Scand Audiol Suppl. 1993;39:1–40. 161. McWilliams ML, Chen GD, Fechter LD. Low-level toluene disrupts auditory function in guinea pigs. Toxicol Appl Pharmacol. 2000;167(1):18–29. 162. Park CK, Kwon KT, Lee DS, et al. A case of toxic hepatitis induced by habitual glue sniffing. Taehan Kan Hakhoe Chi. 2003;9(4):332–6. 163. Al-Ghamdi SS, Raftery MJ, Yaqoob MM. Toluene and p-xylene induced LLC-PK1 apoptosis. Drug Chem Toxicol. 2004;27(4):425–32. 164. Reinhardt DF, Azar A, Maxfield ME, Smith PE, Mullin LS. Cardiac arrhythmias and aerosol “sniffing.” Arch Environ Health. 1971;22:265. 165. Hersh JH, Podruch PE, Rogers G, et al. Toluene embryopathy. J Pediatr. 1985;106:922–7. 166. Courtney KD, Andrews JE, Springer J, et al. A perinatal study of toluene in CD-1 mice. Fundam Appl Toxicol. 1986;6:145–54. 167. Bowen SE, Batis JC, Mohammadi MH, Hannigan JH. Abuse pattern of gestational toluene exposure and early postnatal development in rats. Neurotoxicol Teratol. 2005;27(1):105–16. 168. Gospe SM, Jr, Zhou SS. Prenatal exposure to toluene results in abnormal neurogenesis and migration in rat somatosensory cortex. Pediatr Res. 2000;47(3):362–8. 169. Gospe SM, Jr, Zhou SS. Toluene abuse embryopathy: longitudinal neurodevelopmental effects of prenatal exposure to toluene in rats. Reprod Toxicol. 1998;12(2):119–26. 170. Wu M, Shaffer KM, Pancrazio JJ, et al. Toluene inhibits muscarinic receptor-mediated cytosolic Ca2+ responses in neural precursor cells. Neurotoxicology. 2002;23(1):61–8. 171. Dalgaard M, Hossaini A, Hougaard KS, Hass U, Ladefoged O. Developmental toxicity of toluene in male rats: effects on semen quality, testis morphology, and apoptotic neurodegeneration. Arch Toxicol. 2001;75(2):103–9. 172. Gospe SM, Jr, Saeed DB, Zhou SS, Zeman FJ. The effects of high-dose toluene on embryonic development in the rat. Pediatr Res. 1994;36(6):811–5. 173. Klimisch HJ, Hellwig J, Hofmann A. Studies on the prenatal toxicity of toluene in rabbits following inhalation exposure and proposal of a pregnancy guidance value. Arch Toxicol. 1992;66(6):373–81. 174. Ono A, Kawashima K, Sekita K, et al. Toluene inhalation induced epididymal sperm dysfunction in rats. Toxicology. 1999;139(3):193–205. 175. Ono A, Sekita K, Ogawa Y, et al. Reproductive and developmental toxicity studies of toluene. II. Effects of inhalation exposure on fertility in rats. J Environ Pathol Toxicol Oncol. 1996;15(1):9–20. 176. Gaikwad NW, Bodell WJ. Formation of DNA adducts in HL-60 cells treated with the toluene metabolite p-cresol: a potential biomarker for toluene exposure. Chem Biol Interact. 2003;145(2):149–58.
177. Nakai N, Murata M, Nagahama M, et al. Oxidative DNA damage induced by toluene is involved in its male reproductive toxicity. Free Radic Res. 2003;37(1):69–76. 178. Huff J. Absence of carcinogenic activity in Fischer rats and B6C3F1 mice following 103-week inhalation exposures to toluene. Int J Occup Environ Health. 2003;9(2):138–46. 179. Tardif R, Plaa GL, Brodeur J. Influence of various mixtures of inhaled toluene and xylene on the biological monitoring of exposure to these solvents in rats. Can J Physiol Pharmacol. 1992;70(3):385–93. 180. Chen JD, Wang JD, Jang JP, Chen YY. Exposure to mixtures of solvents among paint workers and biochemical alterations of liver function. Br J Ind Med. 1991;48(10):696–701. 181. Toftgard R, Halpert J, Gustafsson JA. Xylene induces a cytochrome P-450 isozyme in rat liver similar to the major isozyme induced by phenobarbital. Mol Pharmacol. 1983;23(1):265–71. 182. Backes WL, Sequeira DJ, Cawley GF, Eyer CS. Relationship between hydrocarbon structure and induction of P450: effects on protein levels and enzyme activities. Xenobiotica. 1993;23(12):1353–66. 183. Vaidyanathan A, Foy JW, Schatz R. Inhibition of rat respiratory-tract cytochrome P-450 isozymes following inhalation of m-xylene: possible role of metabolites. J Toxicol Environ Health A. 2003;66(12):1133–43. 184. Park SH, Schatz RA. Effect of low-level short-term o-xylene inhalation on benzo[a]pyrene (BaP) metabolism and BaP-DNA adduct formation in rat liver and lung microsomes. J Toxicol Environ Health A. 1999;58(5):299–312. 185. Ungvary G, Varga B, Horvath E, Tatrai E, Folly C. Study on the role of maternal sex steroid production and metabolism in the embryotoxicity of para-xylene. Toxicology. 1981;19(3):263–8. 186. Hass U, Lund SP, Simonsen L, Fries AS. Effects of prenatal exposure to xylene on postnatal development and behavior in rats. Neurotoxicol Teratol. 1995;17(3):341–9. 187. Yamada K. 
Influence of lacquer thinner and some organic solvents on reproductive and accessory reproductive organs in the male rat. Biol Pharm Bull. 1993;16(4):425–7. 188. Seppalainen AM, Laine A, Salmi T, Verkkala E, Riihimaki V, Luukkonen R. Electroencephalographic findings during experimental human exposure to m-xylene. Arch Environ Health. 1991;46(1):16–24. 189. Gralewicz S, Wiaderna D. Behavioral effects following subacute inhalation exposure to m-xylene or trimethylbenzene in the rat: a comparative study. Neurotoxicology. 2001;22(1):79–89. 190. Gunasekar PG, Rogers JV, Kabbur MB, Garrett CM, Brinkley WW, McDougal JN. Molecular and histological responses in rat skin exposed to m-xylene. J Biochem Mol Toxicol. 2003;17(2):92–4. 191. Morel G, Bonnet P, Cossec B, et al. The role of glutathione and cysteine conjugates in the nephrotoxicity of o-xylene in rats. Arch Toxicol. 1998;72(9):553–8. 192. Gagnaire F, Marignac B, Langlais C, Bonnet P. Ototoxicity in rats exposed to ortho-, meta- and para-xylene vapours for 13 weeks. Pharmacol Toxicol. 2001;89(1):6–14. 193. d’Azevedo PA, Tannhauser M, Tannhauser SL, Barros HM. Hematological alterations in rats from xylene and benzene. Vet Hum Toxicol. 1996;38(5):340–4. 194. NIOSH. Criteria for a Recommended Standard: Occupational Exposure to Styrene. Cincinnati, OH: U.S. Department of Health and Human Services, National Institute of Occupational Safety and Health, Robert A. Taft Laboratories; 1983:250. 195. Carlson GP. Comparison of the susceptibility of wild-type and CYP2E1 knockout mice to the hepatotoxic and pneumotoxic effects of styrene and styrene oxide. Toxicol Lett. 2004;150(3):335–9. 196. Shield AJ, Sanderson BJ. Role of glutathione S-transferase mu (GSTM1) in styrene-7,8-oxide toxicity and mutagenicity. Environ Mol Mutagen. 2001;37(4):285–9.
197. Ruder AM, Ward EM, Dong M, Okun AH, Davis-King K. Mortality patterns among workers exposed to styrene in the reinforced plastic boatbuilding industry: an update. Am J Ind Med. 2004;45(2):165–76. 198. Godderis L, De Boeck M, Haufroid V, et al. Influence of genetic polymorphisms on biomarkers of exposure and genotoxic effects in styrene-exposed workers. Environ Mol Mutagen. 2004;44(4):293–303. 199. De Palma G, Mozzoni P, Scotti E, et al. Genetic polymorphism of biotransforming enzymes and genotoxic effects of styrene. G Ital Med Lav Ergon. 2003;25, Suppl. 3:63–4. 200. Vodicka P, Koskinen M, Stetina R, et al. The role of various biomarkers in the evaluation of styrene genotoxicity. Cancer Detect Prev. 2003;27(4):275–84. 201. Laffon B, Perez-Cadahia B, Pasaro E, Mendez J. Effect of epoxide hydrolase and glutathione S-transferase genotypes on the induction of micronuclei and DNA damage by styrene-7,8-oxide in vitro. Mutat Res. 2003;536(1–2):49–59. 202. Shamy MY, Osman HH, Kandeel KM, Abdel-Moneim NM, El SK. DNA single strand breaks induced by low levels of occupational exposure to styrene: the gap between standards and reality. J Environ Pathol Toxicol Oncol. 2002;21(1):57–61. 203. Somorovska M, Jahnova E, Tulinska J, et al. Biomonitoring of occupational exposure to styrene in a plastics lamination plant. Mutat Res. 1999;428(1–2):255–69. 204. Laffon B, Pasaro E, Mendez J. Genotoxic effects of styrene-7,8-oxide in human white blood cells: comet assay in relation to the induction of sister-chromatid exchanges and micronuclei. Mutat Res. 2001;491(1–2):163–72. 205. Cruzan G, Cushman JR, Andrews LS, et al. Chronic toxicity/oncogenicity study of styrene in CD-1 mice by inhalation exposure for 104 weeks. J Appl Toxicol. 2001;21(3):185–98. 206. Solveig-Walles SA, Orsen I. Single-strand breaks in DNA of various organs of mice induced by styrene and styrene oxide. Cancer Lett. 1983;21(1):9–15. 207. Cruzan G, Carlson GP, Turner M, Mellert W. 
Ring-oxidized metabolites of styrene contribute to styrene-induced Clara-cell toxicity in mice. J Toxicol Environ Health A. 2005;68(3):229–37. 208. Harkonen H, Lindstrom K, Seppalainen AM, et al. Exposure-response relationship between styrene exposure and central nervous functions. Scand J Work Environ Health. 1978;4:53–9. 209. Fung F, Clark RF. Styrene-induced peripheral neuropathy. J Toxicol Clin Toxicol. 1999;37(1):91–7. 210. Loquet G, Campo P, Lataye R. Comparison of toluene-induced and styrene-induced hearing losses. Neurotoxicol Teratol. 1999;21(6):689–97. 211. Vettori MV, Caglieri A, Goldoni M, et al. Analysis of oxidative stress in SK-N-MC neurons exposed to styrene-7,8-oxide. Toxicol In Vitro. 2005;19(1):11–20. 212. Gobba F, Cavalleri A. Evolution of color vision loss induced by occupational exposure to chemicals. Neurotoxicology. 2000;21(5):777–81. 213. Matanoski GM, Tao XG. Styrene exposure and ischemic heart disease: a case-cohort study. Am J Epidemiol. 2003;158(10):988–95. 214. Turner M, Mantick NA, Carlson GP. Comparison of the depletion of glutathione in mouse liver and lung following administration of styrene and its metabolites styrene oxide and 4-vinylphenol. Toxicology. 2005;206(3):383–8. 215. Luderer U, Tornero-Velez R, Shay T, Rappaport S, Heyer N, Echeverria D. Temporal association between serum prolactin concentration and exposure to styrene. Occup Environ Med. 2004;61(4):325–33. 216. Takao T, Nanamiya W, Nazarloo HP, Asaba K, Hashimoto K. Possible reproductive toxicity of styrene in peripubertal male mice. Endocr J. 2000;47(3):343–7.
Environmental Health
217. Guengerich FP, Kim DH, Iwasaki M. Role of human cytochrome P-450 IIE1 (P-450 IIE1) in the oxidation of many low molecular weight cancer suspects. Chem Res Toxicol. 1991;4(2):168–79. 218. Raucy JL, Kraner JC, Lasker JM. Bioactivation of halogenated hydrocarbons by cytochrome P450 2E1. Crit Rev Toxicol. 1993;23(1):1–20. 219. Bagchi D, Bagchi M, Hassoun E, Stohs SJ. Carbon tetrachloride-induced urinary excretion of formaldehyde, malondialdehyde, acetaldehyde and acetone in rats. Pharmacology. 1993;47(3):209–16. 220. elSisi AE, Earnest DL, Sipes IG. Vitamin A potentiation of carbon tetrachloride hepatotoxicity: role of liver macrophages and active oxygen species. Toxicol Appl Pharmacol. 1993;119(2):295–301. 221. Morrow JD, Awad JA, Kato T, et al. Formation of novel noncyclooxygenase-derived prostanoids (F2-isoprostanes) in carbon tetrachloride hepatotoxicity. An animal model of lipid peroxidation. J Clin Invest. 1992;90(6):2502–7. 222. Czaja MJ, Xu J, Alt E. Prevention of carbon tetrachloride-induced rat liver injury by soluble tumor necrosis factor receptor. Gastroenterology. 1995;108(6):1849–54. 223. Suzuki T, Nezu K, Sasaki H, Miyazawa T, Isono H. Cytotoxicity of chlorinated hydrocarbons and lipid peroxidation in isolated rat hepatocytes. Biol Pharm Bull. 1994;17(1):82–6. 224. Toraason M, Breitenstein MJ, Wey HE. Reversible inhibition of intercellular communication among cardiac myocytes by halogenated hydrocarbons. Fundam Appl Toxicol. 1992;18(1):59–65. 225. Schmitt-Graff A, Chakroun G, Gabbiani G. Modulation of perisinusoidal cell cytoskeletal features during experimental hepatic fibrosis. Virchows Arch A Pathol Anat Histopathol. 1993;422(2):99–107. 226. Steup DR, Hall P, McMillan DA, Sipes IG. Time course of hepatic injury and recovery following coadministration of carbon tetrachloride and trichloroethylene in Fischer-344 rats. Toxicol Pathol. 1993;21(3):327–34. 227. Raymond P, Plaa GL. Ketone potentiation of haloalkane-induced hepato- and nephrotoxicity. I.
Dose-response relationships. J Toxicol Environ Health. 1995;45(4):465–80. 228. Tracey JP, Sherlock P. Hepatoma following carbon tetrachloride poisoning. NY State J Med. 1968;68:2202–4. 229. U.S. Department of Health, Education and Welfare, Public Health Service. CDC, NIOSH. Current Intelligence Bulletins: Bull. 2, Trichloroethylene, June 6, 1975; Trichloroethylene, February 28, 1978; Bull. 28, Vinyl Halides Carcinogenicity, September 21, 1978; Bull. 25, Ethylene Dichloride, April 19, 1978; Bull. 1, Chloroprene, January 20, 1975; Bull. 9, Chloroform, March 15, 1976; Bull. 21, Trimellitic Anhydride (TMA), February 3, 1978; Bull. 8, 4,4'-Diaminodiphenylmethane (DDM), January 30, 1976; Bull. 15, Nitrosamines in Cutting Fluids, October 6, 1976; Bull. 30, Epichlorohydrin, October 12, 1978. Washington, DC: GPO. 230. Maltoni C. Predictive value of carcinogenesis bioassays. Ann NY Acad Sci. 1976;271:431–47. 231. Hatch GG, Mamay PD, Ayer ML, Castro BC, Nesnow S. Chemical enhancement of viral transformation in Syrian hamster embryo cells by gaseous and volatile chlorinated methanes and ethanes. Cancer Res. 1983;43(5):1945–50. 232. Wallace L, Zweidinger R, Erikson M, et al. Monitoring individual exposure: measurements of volatile organic compounds in breathing zone air, drinking water, and exhaled breath. Environ Int. 1982;8(1–6):269–82. 233. Singh BH, Lillian D, Appleby A, Lobban L. Atmospheric formation of carbon tetrachloride from tetrachloroethylene. Environ Lett. 1975;10:253–6. 234. Taieb D, Malicet C, Garcia S, et al. Inactivation of stress protein p8 increases murine carbon tetrachloride hepatotoxicity via preserved CYP2E1 activity. Hepatology. 2005;42(1):176–82. 235. Kadiiska MB, Gladen BC, Baird DD, et al. Biomarkers of oxidative stress study II: are oxidation products of lipids, proteins, and DNA
markers of CCl4 poisoning? Free Radic Biol Med. 2005;38(6):698–710. 236. Seki M, Kasama K, Imai K. Effect of food restriction on hepatotoxicity of carbon tetrachloride in rats. J Toxicol Sci. 2000;25(1):33–40. 237. Holden PR, James NH, Brooks AN, Roberts RA, Kimber I, Pennie WD. Identification of a possible association between carbon tetrachloride-induced hepatotoxicity and interleukin-8 expression. J Biochem Mol Toxicol. 2000;14(5):283–90. 238. Jiang Y, Liu J, Waalkes M, Kang YJ. Changes in the gene expression associated with carbon tetrachloride-induced liver fibrosis persist after cessation of dosing in mice. Toxicol Sci. 2004;79(2):404–10. Epub 2004 Mar 31. 239. Kanno K, Tazuma S, Chayama K. AT1A-deficient mice show less severe progression of liver fibrosis induced by CCl(4). Biochem Biophys Res Commun. 2003 Aug 15;308(1):177–83. 240. Simeonova PP, Gallucci RM, Hulderman T, et al. The role of tumor necrosis factor-alpha in liver toxicity, inflammation, and fibrosis induced by carbon tetrachloride. Toxicol Appl Pharmacol. 2001;177(2):112–20. 241. Jiang Y, Kang YJ. Metallothionein gene therapy for chemical-induced liver fibrosis in mice. Mol Ther. 2004;10(6):1130–9. 242. Gao J, Dou H, Tang XH, Xu LZ, Fan YM, Zhao XN. Inhibitory effect of TCCE on CCl4-induced overexpression of IL-6 in acute liver injury. Acta Biochim Biophys Sin (Shanghai). 2004;36(11):767–72. 243. Sheweita SA, El-Gabar MA, Bastawy M. Carbon tetrachloride changes the activity of cytochrome P450 system in the liver of male rats: role of antioxidants. Toxicology. 2001;169(2):83–92. 244. Ogeturk M, Kus I, Colakoglu N, Zararsiz I, Ilhan N, Sarsilmaz M. Caffeic acid phenethyl ester protects kidneys against carbon tetrachloride toxicity in rats. J Ethnopharmacol. 2005;97(2):273–80. Epub 2005 Jan 12. 245. Paakko P, Anttila S, Sormunen R, et al. Biochemical and morphological characterization of carbon tetrachloride-induced lung fibrosis in rats. Arch Toxicol. 1996;70(9):540–52. 246. Guo TL, McCay JA, Brown RD, et al. Carbon tetrachloride is immunosuppressive and decreases host resistance to Listeria monocytogenes and Streptococcus pneumoniae in female B6C3F1 mice. Toxicology. 2000;154(1–3):85–101. 247. Jirova D, Sperlingova I, Halaskova M, Bendova H, Dabrowska L. Immunotoxic effects of carbon tetrachloride—the effect on morphology and function of the immune system in mice. Cent Eur J Public Health. 1996;4(1):16–20. 248. Rikans LE, Hornbrook KR, Cai Y. Carbon tetrachloride hepatotoxicity as a function of age in female Fischer 344 rats. Mech Ageing Dev. 1994;76(2–3):89–99. 249. Manno M, Rezzadore M, Grossi M, Sbrana C. Potentiation of occupational carbon tetrachloride toxicity by ethanol abuse. Hum Exp Toxicol. 1996;15(4):294–300. 250. Wong FW, Chan WY, Lee SS. Resistance to carbon tetrachloride-induced hepatotoxicity in mice which lack CYP2E1 expression. Toxicol Appl Pharmacol. 1998;153(1):109–18. 251. Dias Gomez MI, Castro JA. Covalent binding of carbon tetrachloride metabolites to liver nuclear DNA, proteins, and lipids. Abstract No. 223. Toxicol Appl Pharmacol. 1970;45:315. 252. Araki A, Kamigaito N, Sasaki T, Matsushima T. Mutagenicity of carbon tetrachloride and chloroform in Salmonella typhimurium TA98, TA100, TA1535, and TA1537, and Escherichia coli WP2uvrA/pKM101 and WP2/pKM101, using a gas exposure method. Environ Mol Mutagen. 2004;43(2):128–33. 253. International Agency for Research on Cancer. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans. Some Halogenated Hydrocarbons. Vol 20. Lyon, France; 1979. 254. Constan AA, Sprankle CS, Peters JM, et al. Metabolism of chloroform by cytochrome P450 2E1 is required for induction of toxicity
in the liver, kidney, and nose of male mice. Toxicol Appl Pharmacol. 1999;160(2):120–6. 255. Ban M, Hettich D, Bonnet P. Effect of inhaled industrial chemicals on systemic and local immune response. Toxicology. 2003;184(1):41–50. 256. Gemma S, Testai E, Chieco P, Vittozzi L. Bioactivation, toxicokinetics and acute effects of chloroform in Fischer 344 and Osborne Mendel male rats. J Appl Toxicol. 2004;24(3):203–10. 257. Beddowes EJ, Faux SP, Chipman JK. Chloroform, carbon tetrachloride and glutathione depletion induce secondary genotoxicity in liver cells via oxidative stress. Toxicology. 2003;187(2–3):101–15. 258. Robbiano L, Mereto E, Migliazzi Morando A, Pastore P, Brambilla G. Increased frequency of micronucleated kidney cells in rats exposed to halogenated anaesthetics. Mutat Res. 1998;413(1):1–6. 259. Araki A, Kamigaito N, Sasaki T, Matsushima T. Mutagenicity of carbon tetrachloride and chloroform in Salmonella typhimurium TA98, TA100, TA1535, and TA1537, and Escherichia coli WP2uvrA/pKM101 and WP2/pKM101, using a gas exposure method. Environ Mol Mutagen. 2004;43(2):128–33. 260. Larson JL, Sprankle CS, Butterworth BE. Lack of chloroform-induced DNA repair in vitro and in vivo in hepatocytes of female B6C3F1 mice. Environ Mol Mutagen. 1994;23(2):132–6. 261. Hard GC, Boorman GA, Wolf DC. Re-evaluation of the 2-year chloroform drinking water carcinogenicity bioassay in Osborne-Mendel rats supports chronic renal tubule injury as the mode of action underlying the renal tumor response. Toxicol Sci. 2000;53(2):237–44. 262. Larson JL, Bull RJ. Species differences in the metabolism of trichloroethylene to the carcinogenic metabolites trichloroacetate and dichloroacetate. Toxicol Appl Pharmacol. 1992;115(2):278–85. 263. Lipscomb JC, Garrett CM, Snawder JE. Cytochrome P450-dependent metabolism of trichloroethylene: interindividual differences in humans. Toxicol Appl Pharmacol. 1997;142(2):311–8. 264. Templin MV, Parker JC, Bull RJ. Relative formation of dichloroacetate and trichloroacetate from trichloroethylene in male B6C3F1 mice. Toxicol Appl Pharmacol. 1993;123(1):1–8. 265. Beland FA. NTP technical report on the toxicity and metabolism studies of chloral hydrate (CAS No. 302-17-0). Administered by gavage to F344/N rats and B6C3F1 mice. Toxic Rep Ser. 1999;(59):1–66, A1–E7. 266. Robbiano L, Baroni D, Carrozzino R, Mereto E, Brambilla G. DNA damage and micronuclei induced in rat and human kidney cells by six chemicals carcinogenic to the rat kidney. Toxicology. 2004;204(2–3):187–95. 267. McLaren J, Boulikas T, Vanvakas S. Induction of poly(ADP-ribosyl)ation in the kidney after in vivo application of renal carcinogens. Toxicology. 1994;88(1–3):101–12. 268. Szlatenyi CS, Wang RY. Encephalopathy and cranial nerve palsies caused by intentional trichloroethylene inhalation. Am J Emerg Med. 1996;14(5):464–6. 269. Green T, Dow J, Ong CN, et al. Biological monitoring of kidney function among workers occupationally exposed to trichloroethylene. Occup Environ Med. 2004;61(4):312–7. 270. Mensing T, Welge P, Voss B, Fels LM, Fricke HH, Bruning T, Wilhelm M. Renal toxicity after chronic inhalation exposure of rats to trichloroethylene. Toxicol Lett. 2002;128(1–3):243–7. 271. Dai Y, Leng S, Li L, et al. Genetic polymorphisms of cytokine genes and risk for trichloroethylene-induced severe generalized dermatitis: a case-control study. Biomarkers. 2004;9(6):470–8. 272. Gilbert KM, Whitlow AB, Pumford NR. Environmental contaminant and disinfection by-product trichloroacetaldehyde stimulates T cells in vitro. Int Immunopharmacol. 2004;4(1):25–36. 273. Griffin JM, Gilbert KM, Lamps LW, Pumford NR. CD4+ T-cell activation and induction of autoimmune hepatitis following
Diseases Associated with Exposure to Chemical Substances
trichloroethylene treatment in MRL+/+ mice. Toxicol Sci. 2000;57(2):345–52. 274. Wernisch M, Paya K, Palasser A. [Cardiovascular arrest after inhalation of leather glue.] Wien Med Wochenschr. 1991;141(3):71–4. 275. Hoffmann P, Heinroth K, Richards D, Plews P, Toraason M. Depression of calcium dynamics in cardiac myocytes—a common mechanism of halogenated hydrocarbon anesthetics and solvents. J Mol Cell Cardiol. 1994;26(5):579–89. 276. Rasmussen K, Jeppesen HJ, Sabroe S. Solvent-induced chronic toxic encephalopathy. Am J Ind Med. 1993;23(5):779–92. 277. Reif JS, Burch JB, Nuckols JR, Metzger L, Ellington D, Anger WK. Neurobehavioral effects of exposure to trichloroethylene through a municipal water supply. Environ Res. 2003;93(3):248–58. 278. Okamoto T, Shiwaku K. Fatty acid composition in liver, serum and brain of rat inhalated with trichloroethylene. Exp Toxicol Pathol. 1994;46(2):133–41. 279. Blain L, Lachapelle P, Molotchnikoff S. Evoked potentials are modified by long term exposure to trichloroethylene. Neurotoxicology. 1992;13(1):203–6. 280. Crofton KM, Zhao X. Mid-frequency hearing loss in rats following inhalation exposure to trichloroethylene: evidence from reflex modification audiometry. Neurotoxicol Teratol. 1993;15(6):413–23. 281. Fechter LD, Liu Y, Herr DW, Crofton KM. Trichloroethylene ototoxicity: evidence for a cochlear origin. Toxicol Sci. 1998;42(1):28–35. 282. Rebert CS, Day VL, Matteucci MJ, Pryor GT. Sensory-evoked potentials in rats chronically exposed to trichloroethylene: predominant auditory dysfunction. Neurotoxicol Teratol. 1991;13(1):83–90. 283. Albee RR, Nitschke KD, Mattsson JL, Stebbins KE. Dichloroacetylene: effects on the rat trigeminal nerve somatosensory evoked potential. Neurotoxicol Teratol. 1997;19(1):27–37. 284. Kautiainen A, Vogel JS, Turteltaub KW. Dose-dependent binding of trichloroethylene to hepatic DNA and protein at low doses in mice. Chem Biol Interact. 1997;106(2):109–21. 285. Anttila A, Pukkala E, Sallmen M, Hernberg S, Hemminki K. Cancer incidence among Finnish workers exposed to halogenated hydrocarbons. J Occup Environ Med. 1995;37(7):797–806. 286. Raaschou-Nielsen O, Hansen J, McLaughlin JK, et al. Cancer risk among workers at Danish companies using trichloroethylene: a cohort study. Am J Epidemiol. 2003;158(12):1182–92. 287. Heineman EF, Cocco P, Gomez MR, et al. Occupational exposure to chlorinated aliphatic hydrocarbons and risk of astrocytic brain cancer. Am J Ind Med. 1994;26(2):155–69. 288. Axelson O, Selden A, Andersson K, Hogstedt C. Updated and expanded Swedish cohort study on trichloroethylene and cancer risk. J Occup Med. 1994;36(5):556–62. 289. Dawson BV, Johnson PD, Goldberg SJ, Ulreich JB. Cardiac teratogenesis of halogenated hydrocarbon-contaminated drinking water. J Am Coll Cardiol. 1993;21(6):1466–72. 290. Yauck JS, Malloy ME, Blair K, Simpson PM, McCarver DG. Proximity of residence to trichloroethylene-emitting sites and increased risk of offspring congenital heart defects among older women. Birth Defects Res A Clin Mol Teratol. 2004;70:808–14. 291. Cosby NC, Dukelow WR. Toxicology of maternally ingested trichloroethylene (TCE) on embryonal and fetal development in mice and of TCE metabolites on in vitro fertilization. Fundam Appl Toxicol. 1992;19(2):268–74. 292. Fort DJ, Stover EL, Rayburn JR, Hull M, Bantle JA. Evaluation of the developmental toxicity of trichloroethylene and detoxification metabolites using Xenopus. Teratog Carcinog Mutagen. 1993;13(1):35–45. 293. Kumar P, Prasad AK, Mani U, Maji BK, Dutta KK. Trichloroethylene induced testicular toxicity in rats exposed by inhalation. Hum Exp Toxicol. 2001;20(11):585–9.
294. Xu H, Tanphaichitr N, Forkert PG, Anupriwan A, Weerachatyanukul W, Vincent R, Leader A, Wade MG. Exposure to trichloroethylene and its metabolites causes impairment of sperm fertilizing ability in mice. Toxicol Sci. 2004;82(2):590–7. Epub 2004 Sep 16. 295. DuTeaux SB, Berger T, Hess RA, Sartini BL, Miller MG. Male reproductive toxicity of trichloroethylene: sperm protein oxidation and decreased fertilizing ability. Biol Reprod. 2004;70(5):1518–26. Epub 2004 Jan 21. 296. DuTeaux SB, Hengel MJ, DeGroot DE, Jelks KA, Miller MG. Evidence for trichloroethylene bioactivation and adduct formation in the rat epididymis and efferent ducts. Biol Reprod. 2003;69(3):771–9. Epub 2003 Apr 30. 297. Berger T, Horner CM. In vivo exposure of female rats to toxicants may affect oocyte quality. Reprod Toxicol. 2004;18(3):447. 298. Bove FJ, Fulcomer MC, Klotz JB, Esmart J, Dufficy EM, Savrin JE. Public drinking water contamination and birth outcomes. Am J Epidemiol. 1995;141(9):850–62. 299. Constan AA, Yang RS, Baker DC, Benjamin SA. A unique pattern of hepatocyte proliferation in F344 rats following long-term exposures to low levels of a chemical mixture of groundwater contaminants. Carcinogenesis. 1995;16(2):303–10. 300. Steup DR, Wiersma D, McMillan DA, Sipes IG. Pretreatment with drinking water solutions containing trichloroethylene or chloroform enhances the hepatotoxicity of carbon tetrachloride in Fischer 344 rats. Fundam Appl Toxicol. 1991;16(4):798–809. 301. Hoffmann P, Heinroth K, Richards D, Plews P, Toraason M. Depression of calcium dynamics in cardiac myocytes—a common mechanism of halogenated hydrocarbon anesthetics and solvents. J Mol Cell Cardiol. 1994;26(5):579–89. 302. Mutti A, Alinovi R, Bergamaschi E, et al. Nephropathies and exposure to perchloroethylene in dry-cleaners. Lancet. 1992;340(8813):189–93. 303. Onofrj M, Thomas A, Paci C, Rotilio D. Optic neuritis with residual tunnel vision in perchloroethylene toxicity. J Toxicol Clin Toxicol. 1998;36(6):603–7.
304. Till C, Rovet JF, Koren G, Westall CA. Assessment of visual functions following prenatal exposure to organic solvents. Neurotoxicology. 2003 Aug;24(4–5):725–31. 305. Weiss NS. Cancer in relation to occupational exposure to perchloroethylene. Cancer Causes Control. 1995;6(3):257–66. 306. Narotsky MG, Kavlock RJ. A multidisciplinary approach to toxicological screening: II. Developmental toxicity. J Toxicol Environ Health. 1995;45(2):145–71. 307. Aggazzotti G, Fantuzzi G, Righi E, et al. Occupational and environmental exposure to perchloroethylene (PCE) in dry cleaners and their family members. Arch Environ Health. 1994;49(6):487–93. 308. Karlsson JE, Rosengren LE, Kjellstrand P, Haglid KG. Effects of low-dose inhalation of three chlorinated aliphatic organic solvents on deoxyribonucleic acid in gerbil brain. Scand J Work Environ Health. 1987;13(5):453–8. 309. Mazzullo M, Colacci A, Grilli S, et al. 1,1,2-Trichloroethane: evidence of genotoxicity from short-term tests. Jpn J Cancer Res. 1986;77:532–9. 310. Creech JL, Jr, Johnson MN. Angiosarcoma of liver in the manufacture of polyvinyl chloride. J Occup Med. 1974;16:150. 311. Hsiao TJ, Wang JD, Yang PM, Yang PC, Cheng TJ. Liver fibrosis in asymptomatic polyvinyl chloride workers. J Occup Environ Med. 2004;46(9):962–6. 312. Wong O, Whorton MD, Foliart DE, Ragland D. An industry-wide epidemiologic study of vinyl chloride workers, 1942-1982. Am J Ind Med. 1991;20(3):317–34. 313. Wong RH, Wang JD, Hsieh LL, Cheng TJ. XRCC1, CYP2E1 and ALDH2 genetic polymorphisms and sister chromatid exchange frequency alterations amongst vinyl chloride monomer-exposed polyvinyl chloride workers. Arch Toxicol. 2003;77(8):433–40. Epub 2003 May 9.
314. Nair J, Barbin A, Guichard Y, Bartsch H. 1,N6-ethenodeoxyadenosine and 3,N4-ethenodeoxycytidine in liver DNA from humans and untreated rodents detected by immunoaffinity/32P-postlabeling. Carcinogenesis. 1995;16(3):613–7. 315. Dosanjh MD, Chenna A, Kim E, Fraenkel-Conrat H, Samson L, Singer B. All four known cyclic adducts formed in DNA by the vinyl chloride metabolite chloroacetaldehyde are released by a human DNA glycosylase. Proc Natl Acad Sci USA. 1994;91(3):1024–8. 316. Cheng KC, Preston BD, Cahill DS, Dosanjh MK, Singer B, Loeb LA. Reverse chemical mutagenesis: identification of the mutagenic lesions resulting from reactive oxygen species-mediated damage to DNA. Proc Natl Acad Sci USA. 1991;88(22):9974–8. 317. Singer B, Hang B. Mammalian enzymatic repair of etheno and para-benzoquinone exocyclic adducts derived from the carcinogens vinyl chloride and benzene. IARC Sci Publ. 1999;(150):233–47. 318. Swenberg JA, Bogdanffy MS, Ham A, et al. Formation and repair of DNA adducts in vinyl chloride- and vinyl fluoride-induced carcinogenesis. IARC Sci Publ. 1999;(150):29–43. 319. Morinello EJ, Ham AJ, Ranasinghe A, Nakamura J, Upton PB, Swenberg JA. Molecular dosimetry and repair of N(2),3-ethenoguanine in rats exposed to vinyl chloride. Cancer Res. 2002;62(18):5189–95. 320. Heath CW, Jr, Dumont CR, Gamble J, Waxweiler RJ. Chromosomal damage in men occupationally exposed to vinyl chloride monomer and other chemicals. Environ Res. 1977;14:68–72. 321. Lei YC, Yang HT, Ma YC, Huang MF, Chang WP, Cheng TJ. DNA single strand breaks in peripheral lymphocytes associated with urinary thiodiglycolic acid levels in polyvinyl chloride workers. Mutat Res. 2004;561(1–2):119–26. 322. Awara WM, El-Nabi SH, El-Gohary M. Assessment of vinyl chloride-induced DNA damage in lymphocytes of plastic industry workers using a single-cell gel electrophoresis technique. Toxicology. 1998;128(1):9–16. 323. Fucic A, Barkovic D, Garaj-Vrhovac V, et al.
A nine-year follow up study of a population occupationally exposed to vinyl chloride monomer. Mutat Res. 1996;361(1):49–53. 324. Trivers GE, Cawley HI, DeBenedetti VM, et al. Anti-p53 antibodies in sera of workers occupationally exposed to vinyl chloride. J Natl Cancer Inst. 1995;87(18):1400–7. 325. Mocci F, De Biasio AL, Nettuno M. Anti-p53 antibodies as markers of carcinogenesis in exposures to vinyl chloride. G Ital Med Lav Ergon. 2003;25 Suppl(3):21–3. 326. Marion MJ. Critical genes as early warning signs: example of vinyl chloride. Toxicol Lett. 1998;102–3:603–7. 327. Froment O, Boivin S, Barbin A, Bancel B, Trepo C, Marion MJ. Mutagenesis of ras proto-oncogenes in rat liver tumors induced by vinyl chloride. Cancer Res. 1994;54(20):5340–5. 328. DeVivo I, Marion MJ, Smith SJ, Carney WP, Brandt-Rauf PW. Mutant c-Ki-ras p21 protein in chemical carcinogenesis in humans exposed to vinyl chloride. Cancer Causes Control. 1994;5(3):273–8. 329. Thornton SR, Schroeder RE, Robison RL, et al. Embryo-fetal developmental and reproductive toxicology of vinyl chloride in rats. Toxicol Sci. 2002;68(1):207–19. 330. Bartsch H, Malaveille C, Barbin A, et al. Alkylating and mutagenic metabolites of halogenated olefins produced by human and animal tissues. Proc Am Assoc Cancer Res. 1976;17:17. 331. Nivard MJ, Vogel EW. Genetic effects of exocyclic DNA adducts in vivo: heritable genetic damage in comparison with loss of heterozygosity in somatic cells. IARC Sci Publ. 1999;(150):335–49. 332. Sasaki YF, Saga A, Akasaka M, et al. Detection of in vivo genotoxicity of haloalkanes and haloalkenes carcinogenic to rodents by the alkaline single cell gel electrophoresis (comet) assay in multiple mouse organs. Mutat Res. 1998;419(1–3):13–20.
333. National Toxicology Program. Carcinogenesis bioassay of vinylidene chloride (CAS No. 75-35-4) in F344 rats and B6C3F1 mice (gavage study). Natl Toxicol Program Tech Rep Ser. 1982;228:1–184. 334. Ban M, Hettich D, Huguet N, Cavelier L. Nephrotoxicity mechanism of 1,1-dichloroethylene in mice. Toxicol Lett. 1995;78(2):87–92. 335. Dowsley TF, Forkert PG, Benesch LA, Bolton JL. Reaction of glutathione with the electrophilic metabolites of 1,1-dichloroethylene. Chem Biol Interact. 1995;95(3):227–44. 336. Martin EJ, Racz WJ, Forkert PG. Mitochondrial dysfunction is an early manifestation of 1,1-dichloroethylene-induced hepatotoxicity in mice. J Pharmacol Exp Ther. 2003;304(1):121–9. 337. Simmonds AC, Reilly CA, Baldwin RM, et al. Bioactivation of 1,1-dichloroethylene to its epoxide by CYP2E1 and CYP2F enzymes. Drug Metab Dispos. 2004;32(9):1032–9. 338. Ban M, Hettich D, Goutet M, Binet S. Serum-borne factor(s) of 1,1-dichloroethylene and 1,2-dichlorobenzene-treated mice inhibited in vitro antibody forming cell response and natural killer cell activity. Toxicol Lett. 1998;94(2):93–101. 339. Speerschneider P, Dekant W. Renal tumorigenicity of 1,1-dichloroethene in mice: the role of male-specific expression of cytochrome p450 2E1 in the renal bioactivation of 1,1-dichloroethene. Toxicol Appl Pharmacol. 1995;130(1):48–56. 340. Goldberg SJ, Dawson BV, Johnson PD, Hoyme HE, Ulreich JB. Cardiac teratogenicity of dichloroethylene in a chick model. Pediatr Res. 1992;32(1):23–6. 341. Sasaki YF, Saga A, Akasaka M, et al. Detection of in vivo genotoxicity of haloalkanes and haloalkenes carcinogenic to rodents by the alkaline single cell gel electrophoresis (comet) assay in multiple mouse organs. Mutat Res. 1998;419(1–3):13–20. 342. Bowler RM, Gysens S, Hartney C. Neuropsychological effects of ethylene dichloride exposure. Neurotoxicology. 2003;24(4–5):553–62. 343. Cottalasso D, Domenicotti C, Traverso N, Pronzato M, Nanni G.
Influence of chronic ethanol consumption on toxic effects of 1,2-dichloroethane: glycolipoprotein retention and impairment of dolichol concentration in rat liver microsomes and Golgi apparatus. Toxicology. 2002;178(3):229–40. 344. Cheng TJ, Chou PY, Huang ML, Du CL, Wong RH, Chen PC. Increased lymphocyte sister chromatid exchange frequency in workers with exposure to low level of ethylene dichloride. Mutat Res. 2000;470(2):109–14. 345. Lane BW, Riddle BL, Borzelleca JF. Effects of 1,2-dichloroethane and 1,1,1-trichloroethane in drinking water on reproduction and development in mice. Toxicol Appl Pharmacol. 1982;63(3):409–21. 346. Toxicological Profile for 1,2-Dichloroethane. Agency for Toxic Substances and Disease Registry: U.S. Public Health Service; 1989. 347. Khan S, Sood C, O’Brien PJ. Molecular mechanisms of dibromoalkane cytotoxicity in isolated rat hepatocytes. Biochem Pharmacol. 1993;45(2):439–47. 348. Danni O, Aragno M, Tamagno E, Ugazio G. In vivo studies on halogen compound interactions. IV. Interaction among different halogen derivatives with and without synergistic action on liver toxicity. Res Commun Chem Pathol Pharmacol. 1992;76(3):355–66. 349. Ratcliffe JM, Schrader SM, Steenland K, Clapp DE, Turner T, Hornung RW. Semen quality in papaya workers with long term exposure to ethylene dibromide. Br J Ind Med. 1987;44(5):317–26. 350. Kulkarni AP, Edwards J, Richards IS. Metabolism of 1,2-dibromoethane in the human fetal liver. Gen Pharmacol. 1992;23(1):1–5. 351. Naprstkova I, Dusek Z, Zemanova Z, Novotna B. Assessment of nephrotoxicity in the chick embryo: effects of cisplatin and 1,2-dibromoethane. Folia Biol (Praha). 2003;49(2):78–6. 352. Mitra A, Hilbelink DR, Dwornik JJ, Kulkarni A. A novel model to assess developmental toxicity of dihaloalkanes in humans: bioactivation of 1,2-dibromoethane by the isozymes of human fetal liver
glutathione S-transferase. Teratog Carcinog Mutagen. 1992;12(3):113–27. 353. Mitra A, Hilbelink DR, Dwornik JJ, Kulkarni A. Rat hepatic glutathione S-transferase-mediated embryotoxic bioactivation of ethylene dibromide. Teratology. 1992;46(5):439–46. 354. Ott MG, Scharnweber HC, Langner RR. The Mortality Experience of 161 Employees Exposed to Ethylene Dibromide in Two Production Units. Midland, Mich. Report submitted to NIOSH by the Dow Chemical Co.; March 1977. 355. Cmarik JL, Humphreys WG, Bruner KL, Lloyd RS, Tibbetts C, Guengerich FP. Mutation spectrum and sequence alkylation selectivity resulting from modification of bacteriophage M13mp18 DNA with S-(2-chloroethyl)glutathione. Evidence for a role of S-(2-(N7-guanyl)ethyl)glutathione as a mutagenic lesion formed from ethylene dibromide. J Biol Chem. 1992;267(10):6672–9. 356. Liu L, Hachey DL, Valadez G, et al. Characterization of a mutagenic DNA adduct formed from 1,2-dibromoethane by O6-alkylguanine-DNA alkyltransferase. J Biol Chem. 2004;279(6):4250–9. Epub 2003 Nov 25. 357. Santucci MA, Mercatali L, Brusa G, Pattacini L, Barbieri E, Perocco P. Cell-cycle deregulation in BALB/c 3T3 cells transformed by 1,2-dibromoethane and folpet pesticides. Environ Mol Mutagen. 2003;41(5):315–21. 358. Sekita H, Takeda M, Uchiyama M. Analysis of pesticide residues in foods: 33. Determination of ethylene dibromide residues in litchi (lychee) fruits imported from Formosa. Eisei Shikenjo Hokoku. 1981;99:130–2. 359. Hyakudo T, Hori H, Tanaka I, Igisu H. Inhibition of creatine kinase activity in rat brain by methyl bromide gas. Inhal Toxicol. 2001;13(8):659–69. 360. Yang RS, Witt KL, Alden CJ, Cockerham LG. Toxicology of methyl bromide. Rev Environ Contam Toxicol. 1995;142:65–85. 361. Hustinx WN, van de Laar RT, van Huffelen AC, Verwey JC, Meulenbelt J, Savelkoul TJ. Systemic effects of inhalational methyl bromide poisoning: a study of nine cases occupationally exposed to inadvertent spread during fumigation. Br J Ind Med. 1993;50(2):155–9. 362. Hoizey G, Souchon PF, Trenque T, et al. An unusual case of methyl bromide poisoning. J Toxicol Clin Toxicol. 2002;40(6):817–21. 363. Lifshitz M, Gavrilov V. Central nervous system toxicity and early peripheral neuropathy following dermal exposure to methyl bromide. J Toxicol Clin Toxicol. 2000;38(7):799–801. 364. Fuortes LJ. A case of fatal methyl bromide poisoning. Vet Hum Toxicol. 1992;34(3):240–1. 365. Xu DG, He HZ, Zhang GG, Gansewendt B, Peter H, Bolt HM. DNA methylation of monohalogenated methanes of F344 rats. J Tongji Med Univ. 1993;13(2):100–4. 366. Gansewendt B, Foest U, Xu D, Hallier E, Bolt HM, Peter H. Formation of DNA adducts in F-344 rats after oral administration or inhalation of [14C]methyl bromide. Food Chem Toxicol. 1991;29(8):557–63. 367. Goergens HW, Hallier E, Muller A, Bolt HM. Macromolecular adducts in the use of methyl bromide as a fumigant. Toxicol Lett. 1994;72(1–3):199–203. 368. Lof A, Johanson G, Rannug A, Warholm M. Glutathione transferase T1 phenotype affects the toxicokinetics of inhaled methyl chloride in human volunteers. Pharmacogenetics. 2000;10(7):645–53. 369. Hallier E, Langhof T, Dannappel D, et al. Polymorphism of glutathione conjugation of methyl bromide, ethylene oxide and dichloromethane in human blood: influence on the induction of sister chromatid exchanges (SCE) in lymphocytes. Arch Toxicol. 1993;67(3):173–8. 370. Hallier E, Schroder KR, Asmuth K, Dommermuth A, Aust B, Goergens HW. Metabolism of dichloromethane (methylene chloride) to formaldehyde in human erythrocytes: influence of polymorphism of glutathione transferase theta (GST T1-1). Arch Toxicol. 1994;68(7):423–7.
371. Munter T, Cottrell L, Golding BT, Watson WP. Detoxication pathways involving glutathione and epoxide hydrolase in the in vitro metabolism of chloroprene. Chem Res Toxicol. 2003;16(10):1287–97. 372. Sills RC, Hong HL, Boorman GA, Devereux TR, Melnick RL. Point mutations of K-ras and H-ras genes in forestomach neoplasms from control B6C3F1 mice and following exposure to 1,3-butadiene, isoprene or chloroprene for up to 2 years. Chem Biol Interact. 2001;135–6:373–86. 373. Melnick RL, Elwell MR, Roycroft JH, Chou BJ, Ragan HA, Miller RA. Toxicity of inhaled chloroprene (2-chloro-1,3-butadiene) in F344 rats and B6C3F(1) mice. Toxicology. 1996;108(1–2):79–91. 374. Khachatryan EA. The occurrence of lung cancer among people working with chloroprene. Probl Oncol. 1972;18:85. 375. Zaridze D, Bulbulyan M, Changuina O, Margaryan A, Boffetta P. Cohort studies of chloroprene-exposed workers in Russia. Chem Biol Interact. 2001;135–6:487–503. 376. Pell S. Mortality of workers exposed to chloroprene. J Occup Med. 1978;20:21–9. 377. Rice JM, Boffetta P. 1,3-Butadiene, isoprene and chloroprene: reviews by the IARC monographs programme, outstanding issues, and research priorities in epidemiology. Chem Biol Interact. 2001;135–6:11–26. 378. Westphal GA, Blaszkewicz M, Leutbecher M, Muller A, Hallier E, Bolt HM. Bacterial mutagenicity of 2-chloro-1,3-butadiene (chloroprene) caused by decomposition products. Arch Toxicol. 1994;68(2):79–84. 379. Wallace GM, Brown PH. Horse rug lung: toxic pneumonitis due to fluorocarbon inhalation. Occup Environ Med. 2005;62(6):414–6. 380. Austin ME, Kasturi BS, Barber M, Kannan K, MohanKumar PS, MohanKumar SM. Neuroendocrine effects of perfluorooctane sulfonate in rats. Environ Health Perspect. 2003;111(12):1485–9. 381. Hu W, Jones PD, DeCoen W, et al. Alterations in cell membrane properties caused by perfluorinated compounds. Comp Biochem Physiol C Toxicol Pharmacol. 2003;135(1):77–88. 382. Hu W, Jones PD, Upham BL, Trosko JE, Lau C, Giesy JP.
Inhibition of gap junctional intercellular communication by perfluorinated compounds in rat liver and dolphin kidney epithelial cell lines in vitro and Sprague-Dawley rats in vivo. Toxicol Sci. 2002; 68(2):429–36. 383. Thibodeaux JR, Hanson RG, Rogers JM, et al. Exposure to perfluorooctane sulfonate during pregnancy in rat and mouse. I: maternal and prenatal evaluations. Toxicol Sci. 2003;74(2):369–81. Epub 2003 May 28. 384. Yang Q, Xie Y, Depierre JW. Effects of peroxisome proliferators on the thymus and spleen of mice. Clin Exp Immunol. 2000; 122(2):219–26. 385. Clayton GD, Clayton FE, eds. Patty’s industrial hygiene and toxicology. Toxicology. Vol 2B. 3 ed. rev. New York: John Wiley; 1981. 386. Brennon PD. Addiction to aerosol treatment. Br Med J. 1983;287: 1877. 387. Dodd DE, Vinegar A. Cardiac sensitization testing of the halon replacement candidates trifluoroiodomethane (CF3I) and 1,1,2,2,3,3,3-heptafluoro-1-iodopropane (C3F7I). Drug Chem Toxicol. 1998;21(2):137–49. 388. Longstaff E, Robinson M, Bradbrook C, Styles JA, Purchase IF. Genotoxicity and carcinogenicity of fluorocarbons: assessment by short-term in vitro tests and chronic exposure in rats. Toxicol Appl Pharmacol. 1984;72(1):15–31. 389. Hong HH, Devereux TR, Roycroft JH, Boorman GA, Sills RC. Frequency of ras mutations in liver neoplasms from B6C3F1 mice exposed to tetrafluoroethylene for two years. Toxicol Pathol. 1998; 26(5):646–50. 390. Lau C, Butenhoff JL, Rogers JM. The developmental toxicity of perfluoroalkyl acids and their derivatives. Toxicol Appl Pharmacol. 2004;198(2):231–41.
391. d’Alessandro A, Osterloh JD, Chuwers P, Quinlan PJ, Kelly TJ, Becker CE. Formate in serum and urine after controlled methanol exposure at the threshold limit value. Environ Health Perspect. 1994;102(2):178–81. 392. Seme MT, Summerfelt P, Henry MM, Neitz J, Eells JT. Formate-induced inhibition of photoreceptor function in methanol intoxication. J Pharmacol Exp Ther. 1999;289(1):361–70. 393. Chen JC, Schneiderman JF, Wortzman G. Methanol poisoning: bilateral putaminal and cerebellar cortical lesions on CT and MR. J Comput Assist Tomogr. 1991;15(3):522–4. 394. Feany MB, Anthony DC, Frosch MP, Zane W, De Girolami U. August 2000: two cases with necrosis and hemorrhage in the putamen and white matter. Brain Pathol. 2001;11(1):121–2, 125. 395. Verhelst D, Moulin P, Haufroid V, Wittebole X, Jadoul M, Hantson P. Acute renal injury following methanol poisoning: analysis of a case series. Int J Toxicol. 2004;23(4):267–73. 396. Neymeyer VR, Tephly TR. Detection and quantification of 10-formyltetrahydrofolate dehydrogenase (10-FTHFDH) in rat retina, optic nerve, and brain. Life Sci. 1994;54(22):PL395–9. 397. Lee EW, Garner CD, Terzo TS. A rat model manifesting methanol-induced visual dysfunction suitable for both acute and long-term exposure studies. Toxicol Appl Pharmacol. 1994;128(2):199–206. 398. Garner CD, Lee EW, Terzo TS, Louis-Ferdinand RT. Role of retinal metabolism in methanol-induced retinal toxicity. J Toxicol Environ Health. 1995;44(1):43–56. 399. Aziz MH, Agrawal AK, Adhami VM, Ali MM, Baig MA, Seth PK. Methanol-induced neurotoxicity in pups exposed during lactation through mother: role of folic acid. Neurotoxicol Teratol. 2002;24(4):519–27. 400. Soffritti M, Belpoggi F, Cevolani D, Guarino M, Padovani M, Maltoni C. Results of long-term experimental studies on the carcinogenicity of methyl alcohol and ethyl alcohol in rats. Ann N Y Acad Sci. 2002;982:46–69. 401. Harris C, Dixon M, Hansen JM. 
Glutathione depletion modulates methanol, formaldehyde and formate toxicity in cultured rat conceptuses. Cell Biol Toxicol. 2004;20(3):133–45. 402. Hansen JM, Contreras KM, Harris C. Methanol, formaldehyde, and sodium formate exposure in rat and mouse conceptuses: a potential role of the visceral yolk sac in embryotoxicity. Birth Defects Res A Clin Mol Teratol. 2005;73(2):72–82. 403. Degitz SJ, Rogers JM, Zucker RM, Hunter ES, III. Developmental toxicity of methanol: Pathogenesis in CD-1 and C57BL/6J mice exposed in whole embryo culture. Birth Defects Res A Clin Mol Teratol. 2004;70(4):179–84. 404. Huang YS, Held GA, Andrews JE, Rogers JM. (14)C methanol incorporation into DNA and proteins of organogenesis stage mouse embryos in vitro. Reprod Toxicol. 2001;15(4):429–35. 405. Barceloux DG, Bond GR, Krenzelok EP, Cooper H, Vale JA. American Academy of Clinical Toxicology Ad Hoc Committee on the Treatment Guidelines for Methanol Poisoning. American Academy of Clinical Toxicology practice guidelines on the treatment of methanol poisoning. J Toxicol Clin Toxicol. 2002;40(4):415–46. 406. Maddox JF, Roth RA, Ganey PE. Allyl alcohol activation of protein kinase C delta leads to cytotoxicity of rat hepatocytes. Chem Res Toxicol. 2003;16(5):609–15. 407. Karas M, Chakrabarti SK. Caffeine potentiation of allyl alcohol-induced hepatotoxicity. II. In vitro study. J Environ Pathol Toxicol Oncol. 2001;20(2):155–64. 408. Brayer C, Micheau P, Bony C, Tauzin L, Pilorget H, Samperiz S, Alessandri JL. Neonatal accidental burn by isopropyl alcohol. Arch Pediatr. 2004;11(8):932–5. 409. Morgan BW, Ford MD, Follmer R. Ethylene glycol ingestion resulting in brainstem and midbrain dysfunction. J Toxicol Clin Toxicol. 2000;38(4):445–51. 410. Krenova M, Pelclova D. Course of intoxications due to concurrent ethylene glycol and ethanol ingestion. Przegl Lek. 2005;62(6):508–10.
411. Krenova M, Pelclova D, Navratil T, et al. Experiences of the Czech toxicological information centre with ethylene glycol poisoning. Biomed Pap Med Fac Univ Palacky Olomouc Czech Repub. 2005;149(2):473–5. 412. Guo C, McMartin KE. The cytotoxicity of oxalate, metabolite of ethylene glycol, is due to calcium oxalate monohydrate formation. Toxicology. 2005;208(3):347–55. 413. Hewlett TP, McMartin KE, Lauro AJ, Ragan FA, Jr. Ethylene glycol poisoning: the value of glycolic acid determinations for diagnosis and treatment. J Toxicol Clin Toxicol. 1986;24(5):389–402. 414. Brent J. Current management of ethylene glycol poisoning. Drugs. 2001;61(7):979–88. 415. Evans W, David EJ. Biodegradation of mono-, di-, and triethylene glycols in river waters under controlled laboratory conditions. Water Res. 1974;8(2):97–100. 416. Ballantyne B, Snellings WM. Developmental toxicity study with diethylene glycol dosed by gavage to CD rats and CD-1 mice. Food Chem Toxicol. 2005;43(11):1637–46. 417. Miller ER, Ayres JA, Young JT, McKenna MJ. Ethylene glycol monomethyl ether. I. Subchronic vapor inhalation study in rats and rabbits. Fundam Appl Toxicol. 1983;3(1):49–54. 418. Yu IJ, Lee JY, Chung YH, et al. Co-administration of toluene and xylene antagonized the testicular toxicity but not the hematopoietic toxicity caused by ethylene glycol monoethyl ether in Sprague-Dawley rats. Toxicol Lett. 1999;109(1–2):11–20. 419. Yoon CY, Hong CM, Cho YY, et al. Flow cytometric assessment of ethylene glycol monoethyl ether on spermatogenesis in rats. J Vet Med Sci. 2003;65(2):207–12. 420. Yamamoto T, Fukushima T, Kikkawa R, Yamada H, Horii I. Protein expression analysis of rat testes induced testicular toxicity with several reproductive toxicants. J Toxicol Sci. 2005;30(2):111–26. 421. Correa A, Gray RH, Cohen R, et al. Ethylene glycol ethers and risks of spontaneous abortion and subfertility. Am J Epidemiol. 1996;143(7):707–17. 422. McKinney PE, Palmer RB, Blackwell W, Benson BE. 
Butoxyethanol ingestion with prolonged hyperchloremic metabolic acidosis treated with ethanol therapy. J Toxicol Clin Toxicol. 2000;38(7):787–93. 423. Ku WW, Ghanayem BI, Chapin RE, Wine RN. Comparison of the testicular effects of 2-methoxyethanol (ME) in rats and guinea pigs. Exp Mol Pathol. 1994;61(2):119–33. 424. Vachhrajani KD, Dutta KK. Stage specific effect during one seminiferous epithelial cycle following ethylene glycol monomethyl ether exposure in rats. Indian J Exp Biol. 1992;30(10):892–6. 425. Holladay SD, Comment CE, Kwon J, Luster MI. Fetal hematopoietic alterations after maternal exposure to ethylene glycol monomethyl ether: prolymphoid cell targeting. Toxicol Appl Pharmacol. 1994;129(1):53–60. 426. Lee J, Trad CH, Butterfield DA. Electron paramagnetic resonance studies of the effects of methoxyacetic acid, a teratologic toxin, on human erythrocyte membranes. Toxicology. 1993;83(1–3):131–48. 427. Arashidani K, Kawamoto T, Kodama Y. Induction of sister-chromatid exchange by ethylene glycol monomethyl ether and its metabolite. Ind Health. 1998;36(1):27–31. 428. Cook RR, Bodner KM, Kolesar RC, et al. A cross-sectional study of ethylene glycol monomethyl ether process employees. Arch Environ Health. 1982;37(6):346–51. 429. Hoflack JC, Lambolez L, Elias Z, Vasseur P. Mutagenicity of ethylene glycol ethers and of their metabolites in Salmonella typhimurium his-. Mutat Res. 1995;341(4):281–7. 430. Au WW, Morris DL, Legator MS. Evaluation of the clastogenic effects of 2-methoxyethanol in mice. Mutat Res. 1993;300(3–4):273–9. 431. Dearman RJ, Filby A, Humphreys IR, Kimber I. Interleukins 5 and 13 characterize immune responses to respiratory sensitizing acid anhydrides. J Appl Toxicol. 2002;22(5):317–25.
Diseases Associated with Exposure to Chemical Substances
432. Dearman RJ, Warbrick EV, Humphreys IR, Kimber I. Characterization in mice of the immunological properties of five allergenic acid anhydrides. J Appl Toxicol. 2000;20(3):221–30. 433. Leach CL, Hatoum NS, Ratajczak HV, Zeiss CR, Garvin PJ. Evidence of immunologic control of lung injury induced by trimellitic anhydride. Am Rev Respir Dis. 1988;137(1):186–90. 434. Taylor AN. Role of human leukocyte antigen phenotype and exposure in development of occupational asthma. Curr Opin Allergy Clin Immunol. 2001;1(2):157–61. 435. Arts J, de Koning M, Bloksma N, Kuper C. Respiratory allergy to trimellitic anhydride in rats: concentration-response relationships during elicitation. Inhal Toxicol. 2004;16(5):259–69. 436. Sailstad DM, Ward MD, Boykin EH, Selgrade MK. A murine model for low molecular weight chemicals: differentiation of respiratory sensitizers (TMA) from contact sensitizers (DNFB). Toxicology. 2003;194(1–2):147–61. 437. Hopkins JE, Naisbitt DJ, Humphreys N, Dearman RJ, Kimber I, Park BK. Exposure of mice to the nitroso metabolite of sulfamethoxazole stimulates interleukin 5 production by CD4+ T-cells. Toxicology. 2005;206(2):221–31. 438. Brault D, Bouilly C, Renault D, Thybaud V. Tissue-specific induction of mutations by acute oral administration of N-methyl-N′-nitro-N-nitrosoguanidine and beta-propiolactone to the Muta Mouse: preliminary data on stomach, liver and bone marrow. Mutat Res. 1996;360(2):83–7. 439. IARC. β-Propiolactone [57-57-8]. Monogr Eval Carcinog Risks Hum. 1999;4(Suppl. 7):1. 440. Ducatman AM, Conwill DE, Crawl J. Germ cell tumors of the testicles among aircraft repairmen. J Urol. 1986;136(4):834–6. 441. Levin SM, Baker DB, Landrigan PJ, Monaghan SV, Frumin E, Braithwaite M. Testicular cancer in leather tanners exposed to dimethylformamide. Lancet. 1987;2(8568):1153. 442. Chen JL, Fayerweather WE, Pell S. Cancer incidence of workers exposed to dimethylformamide and/or acrylonitrile. J Occup Med. 1988;30(10):813–8. 443. 
Cheng TJ, Hwang SJ, Kuo HW, Luo JC, Chang MJ. Exposure to epichlorohydrin and dimethylformamide, glutathione S-transferases and sister chromatid exchange frequencies in peripheral lymphocytes. Arch Toxicol. 1999;73(4–5):282–7. 444. Major J, Hudak A, Kiss G, et al. Follow-up biological and genotoxicological monitoring of acrylonitrile- and dimethylformamide-exposed viscose rayon plant workers. Environ Mol Mutagen. 1998;31(4):301–10. 445. Senoh H, Aiso S, Arito H, et al. Carcinogenicity and chronic toxicity after inhalation exposure of rats and mice to N,N-dimethylformamide. J Occup Health. 2004;46(6):429–39. 446. IARC. Dimethylformamide. Monogr Eval Carcinog Risks Hum. 1999;71 Pt 2:545–74. 447. Malley LA, Slone TW, Jr, et al. Chronic toxicity/oncogenicity of dimethylformamide in rats and mice following inhalation exposure. Fundam Appl Toxicol. 1994;23(2):268–79. 448. Hurtt ME, Placke ME, Killinger JM, Singer AW, Kennedy GL Jr. 13-week inhalation toxicity study of dimethylformamide (DMF) in cynomolgus monkeys. Fundam Appl Toxicol. 1992;18(4):596–601. 449. Fail PA, George JD, Grizzle TB, Heindel JJ. Formamide and dimethylformamide: reproductive assessment by continuous breeding in mice. Reprod Toxicol. 1998;12(3):317–32. 450. Saillenfait AM, Payan JP, Beydon D, Fabry JP, Langonne I, Sabate JP, Gallissot F. Assessment of the developmental toxicity, metabolism, and placental transfer of N,N-dimethylformamide administered to pregnant rats. Fundam Appl Toxicol. 1997;39(1):33–43. 451. Kafferlein HU, Ferstl C, Burkhart-Reichl A, et al. The use of biomarkers of exposure of N,N-dimethylformamide in health risk assessment and occupational hygiene in the polyacrylic fibre industry. Occup Environ Med. 2005;62(5):330–6. 452. Kim HA, Kim K, Heo Y, Lee SH, Choi HC. Biological monitoring of workers exposed to N,N-dimethylformamide in synthetic leather
manufacturing factories in Korea. Int Arch Occup Environ Health. 2004;77(2):108–12. Epub 2003 Dec 9. 453. Kennedy GL Jr, Sherman H. Acute and subchronic toxicity of dimethylformamide and dimethylacetamide following various routes of administration. Drug Chem Toxicol. 1986;9(2):147–70. 454. Klimisch HJ, Hellwig J. Developmental toxicity of dimethylacetamide in rabbits following inhalation exposure. Hum Exp Toxicol. 2000;19(12):676–83. 455. Costa LG, Deng H, Gregotti C, et al. Comparative studies on the neuro- and reproductive toxicity of acrylamide and its epoxide metabolite glycidamide in the rat. Neurotoxicology. 1992;13(1):219–24. 456. Konings EJ, Baars AJ, van Klaveren JD, et al. Acrylamide exposure from foods of the Dutch population and an assessment of the consequent risks. Food Chem Toxicol. 2003;41(11):1569–79. 457. Doerge DR, da Costa GG, McDaniel LP, Churchwell MI, Twaddle NC, Beland FA. DNA adducts derived from administration of acrylamide and glycidamide to mice and rats. Mutat Res. 2005;580(1–2):131–41. 458. Costa LG, Deng H, Calleman CJ, Bergmark E. Evaluation of the neurotoxicity of glycidamide, an epoxide metabolite of acrylamide: behavioral, neurochemical and morphological studies. Toxicology. 1995;98(1–3):151–61. 459. Kjuus H, Goffeng LO, Heier MS, et al. Effects on the peripheral nervous system of tunnel workers exposed to acrylamide and N-methylolacrylamide. Scand J Work Environ Health. 2004;30(1):21–9. 460. Lynch JJ, III, Silveira LC, Perry VH, Merigan WH. Visual effects of damage to P ganglion cells in macaques. Vis Neurosci. 1992;8(6):575–83. 461. Chauhan NB, Spencer PS, Sabri MI. Acrylamide-induced depletion of microtubule-associated proteins (MAP1 and MAP2) in the rat extrapyramidal system. Brain Res. 1993;602(1):111–8. 462. Jortner BS, Ehrich M. Comparison of toxicities of acrylamide and 2,5-hexanedione in hens and rats on 3-week dosing regimens. J Toxicol Environ Health. 1993;39(4):417–28. 463. Sickles DW. Toxic neurofilamentous axonopathies and fast anterograde axonal transport. III. Recovery from single injections and multiple dosing effects of acrylamide and 2,5-hexanedione. Toxicol Appl Pharmacol. 1991;108(3):390–6. 464. LoPachin RM, Balaban CD, Ross JF. Acrylamide axonopathy revisited. Toxicol Appl Pharmacol. 2003;188(3):135–53. 465. Pacchierotti F, Tiveron C, D’Archivio M, et al. Acrylamide-induced chromosomal damage in male mouse germ cells detected by cytogenetic analysis of one-cell zygotes. Mutat Res. 1994;309(2):273–84. 466. Yang HJ, Lee SH, Jin Y, et al. Toxicological effects of acrylamide on rat testicular gene expression profile. Reprod Toxicol. 2005;19(4):527–34. 467. Gutierrez-Espeleta GA, Hughes LA, Piegorsch WW, Shelby MD, Generoso WM. Acrylamide: dermal exposure produces genetic damage in male mouse germ cells. Fundam Appl Toxicol. 1992;18(2):189–92. 468. Ghanayem BI, Witt KL, El-Hadri L, et al. Comparison of germ cell mutagenicity in male CYP2E1-null and wild-type mice treated with acrylamide: evidence supporting a glycidamide-mediated effect. Biol Reprod. 2005;72(1):157–63. Epub 2004 Sep 8. 469. Holland N, Ahlborn T, Turteltaub K, Markee C, Moore D, II, Wyrobek AJ, Smith MT. Acrylamide causes preimplantation abnormalities in embryos and induces chromatin-adducts in male germ cells of mice. Reprod Toxicol. 1999;13(3):167–78. 470. Adler ID, Zouh R, Schmid E. Perturbation of cell division by acrylamide in vitro and in vivo. Mutat Res. 1993;301(4):249–54. 471. Butterworth BE, Eldridge SR, Sprankle CS, Working PK, Bentley KS, Hurtt ME. Tissue-specific genotoxic effects of acrylamide and acrylonitrile. Environ Mol Mutagen. 1992;20(3):148–55.
472. Yang HJ, Lee SH, Jin Y, Choi JH, Han CH, Lee MH. Genotoxicity and toxicological effects of acrylamide on reproductive system in male rats. J Vet Sci. 2005;6(2):103–9. 473. Puppel N, Tjaden Z, Fueller F, Marko D. DNA strand breaking capacity of acrylamide and glycidamide in mammalian cells. Mutat Res. 2005;580(1–2):71–80. 474. Blasiak J, Gloc E, Wozniak K, Czechowska A. Genotoxicity of acrylamide in human lymphocytes. Chem Biol Interact. 2004;149(2–3):137–49. 475. Lafferty JS, Kamendulis LM, Kaster J, Jiang J, Klaunig JE. Subchronic acrylamide treatment induces a tissue-specific increase in DNA synthesis in the rat. Toxicol Lett. 2004;154(1–2):95–103. 476. Sobel W, Bond GG, Parsons TW, Brenner FE. Acrylamide cohort mortality study. Br J Ind Med. 1986;43(11):785–8. 477. IARC. Acrylamide [79-06-1]. Monogr Eval Carcinog Risks Hum. 1994;60:389. 478. Costa LG, Manzo L. Biochemical markers of neurotoxicity: research strategies and epidemiological applications. Toxicol Lett. 1995;77(1–3):137–44. 479. Calleman CJ, Wu Y, He F, et al. Relationships between biomarkers of exposure and neurological effects in a group of workers exposed to acrylamide. Toxicol Appl Pharmacol. 1994;126(2):361–71. 480. van Birgelen AP, Chou BJ, Renne RA, et al. Effects of glutaraldehyde in a 2-year inhalation study in rats and mice. Toxicol Sci. 2000;55(1):195–205. 481. U.S. Dept. of Health and Human Services. Public Health Service. Centers for Disease Control: NIOSH Current Intelligence Bulletin 34. Formaldehyde: Evidence of Carcinogenicity. Washington, DC: U.S. Government Printing Office, 1981. 482. Wu PC, Li YY, Lee CC, Chiang CM, Su HJ. Risk assessment of formaldehyde in typical office buildings in Taiwan. Indoor Air. 2003;13(4):359–63. 483. Tanaka K, Nishiyama K, Yaginuma H, et al. Formaldehyde exposure levels and exposure control measures during an anatomy dissecting course. Kaibogaku Zasshi. 2003;78(2):43–51. 484. Leikauf GD. 
Mechanisms of aldehyde-induced bronchial reactivity: role of airway epithelium. Res Rep Health Eff Inst. 1992;49(1):1–35. 485. Swiecichowski AL, Long KJ, Miller ML, Leikauf GD. Formaldehyde-induced airway hyperreactivity in vivo and ex vivo in guinea pigs. Environ Res. 1993;61(2):185–99. 486. Kita T, Fujimura M, Myou S, et al. Potentiation of allergic bronchoconstriction by repeated exposure to formaldehyde in guinea-pigs in vivo. Clin Exp Allergy. 2003;33(12):1747–53. 487. Malek FA, Moritz KU, Fanghanel J. A study on the effect of inhalative formaldehyde exposure on water labyrinth test performance in rats. Ann Anat. 2003;185(3):277–85. 488. Gurel A, Coskun O, Armutcu F, Kanter M, Ozen OA. Vitamin E against oxidative damage caused by formaldehyde in frontal cortex and hippocampus: biochemical and histological studies. J Chem Neuroanat. 2005;29(3):173–8. 489. Monticello TM, Swenberg JA, Gross EA, et al. Correlation of regional and nonlinear formaldehyde-induced nasal cancer with proliferating populations of cells. Cancer Res. 1996;56(5):1012–22. 490. Til HP, Woutersen RA, Feron VJ, Hollanders VH, Falke HE, Clary JJ. Two-year drinking water study of formaldehyde in rats. Food Chem Toxicol. 1989;27(2):77–87. 491. Hauptmann M, Lubin JH, Stewart PA, et al. Mortality from lymphohematopoietic malignancies among workers in formaldehyde industries. J Natl Cancer Inst. 2003;95:1615–23. 492. Pinkerton LE, Hein MJ, Stayner LT. Mortality among a cohort of garment workers exposed to formaldehyde: an update. Occup Environ Med. 2004;61(3):193–200. 493. IARC. Formaldehyde, 2-Butoxyethanol and 1-tert-Butoxy-2-propanol. 2004;88.
494. Graves RJ, Trueman P, Jones S, Green T. DNA sequence analysis of methylene chloride-induced HPRT mutations in Chinese hamster ovary cells: comparison with the mutation spectrum obtained for 1,2-dibromoethane and formaldehyde. Mutagenesis. 1996;11(3):229–33. 495. Hester SD, Benavides GB, Yoon L, et al. Formaldehyde-induced gene expression in F344 rat nasal respiratory epithelium. Toxicology. 2003;187(1):13–24. 496. Kuykendall JR, Bogdanffy MS. Efficiency of DNA-histone crosslinking induced by saturated and unsaturated aldehydes in vitro. Mutat Res. 1992;283(2):131–6. 497. Casanova M, Morgan KT, Gross EA, Moss OR, Heck HA. DNA-protein cross-links and cell replication at specific sites in the nose of F344 rats exposed subchronically to formaldehyde. Fundam Appl Toxicol. 1994;23(4):525–36. 498. Andersson M, Agurell E, Vaghef H, Bolcsfoldi G, Hellman B. Extended-term cultures of human T-lymphocytes and the comet assay: a useful combination when testing for genotoxicity in vitro? Mutat Res. 2003;540(1):43–55. 499. Yager JW, Cohn KL, Spear RC, Fisher JM, Morse L. Sister chromatid exchanges in lymphocytes of anatomy students exposed to formaldehyde-embalming solution. Mutat Res. 1986;174(2):135–9. 500. Ballarin C, Sarto G, Giacomelli L, Bartolucci GB, Clonfero E. Micronucleated cells in nasal mucosa of formaldehyde-exposed workers. Mutat Res. 1992;280(1):1–7. 501. Shaham J, Bomstein Y, Meltzer A, Kaufman Z, Palma E, Ribak J. DNA-protein crosslinks, a biomarker of exposure to formaldehyde—in vitro. Carcinogenesis. 1996;17(1):121–5. 502. Burgaz S, Erdem O, Cakmak G, Erdem N, Karakaya A, Karakaya AE. Cytogenetic analysis of buccal cells from shoe-workers and pathology and anatomy laboratory workers exposed to n-hexane, toluene, methyl ethyl ketone and formaldehyde. Biomarkers. 2002;7(2):151–61. 503. Titenko-Holland N, Levine AJ, Smith MT, et al. 
Quantification of epithelial cell micronuclei by fluorescence in situ hybridization (FISH) in mortuary science students exposed to formaldehyde. Mutat Res. 1996;371(3–4):237–48. 504. Hallier E, Schroder KR, Asmuth K, Dommermuth A, Aust B, Goergens HW. Metabolism of dichloromethane (methylene chloride) to formaldehyde in human erythrocytes: influence of polymorphism on glutathione transferase theta (GST T1-1). Arch Toxicol. 1994;68(7):423–7. 505. Dennis KJ, Ichinose T, Miller M, Shibamoto T. Gas chromatographic determination of vapor-phase biomarkers formed from rats dosed with CCl4. J Appl Toxicol. 1993;13(4):301–3. 506. Majumder PK, Kumar VL. Inhibitory effects of formaldehyde on the reproductive system of male rats. Indian J Physiol Pharmacol. 1995;39(1):80–2. 507. Janssens SP, Musto SW, Hutchison WG, et al. Cyclooxygenase and lipoxygenase inhibition by BW-755C reduces acrolein smoke-induced acute lung injury. J Appl Physiol. 1994;77(2):888–95. 508. Li L, Hamilton RF, Jr, Taylor DE, Holian A. Acrolein-induced cell death in human alveolar macrophages. Toxicol Appl Pharmacol. 1997;145(2):331–9. 509. Awasthi S, Boor PJ. Lipid peroxidation and oxidative stress during acute allylamine-induced cardiovascular toxicity. J Vasc Res. 1994;31(1):33–41. 510. Lovell MA, Xie C, Markesbery WR. Acrolein is increased in Alzheimer’s disease brain and is toxic to primary hippocampal cultures. Neurobiol Aging. 2001;22(2):187–94. 511. Cao Z, Hardej D, Trombetta LD, Trush MA, Li Y. Induction of cellular glutathione and glutathione S-transferase by 3H-1,2-dithiole-3-thione in rat aortic smooth muscle A10 cells: protection against acrolein-induced toxicity. Atherosclerosis. 2003;166(2):291–301.
512. Roux E, Ouedraogo N, Hyvelin JM, Savineau JP, Marthan R. In vitro effect of air pollutants on human bronchi. Cell Biol Toxicol. 2002;18(5):289–99. 513. Burcham PC, Fontaine FR, Kaminskas LM, Petersen DR, Pyke SM. Protein adduct-trapping by hydrazinophthalazine drugs: mechanisms of cytoprotection against acrolein-mediated toxicity. Mol Pharmacol. 2004;65(3):655–64. 514. Kuykendall JR, Bogdanffy MS. Efficiency of DNA-histone crosslinking induced by saturated and unsaturated aldehydes in vitro. Mutat Res. 1992;283(2):131–6. 515. Eder E, Deininger C, Deininger D, Weinfurtner E. Genotoxicity of 2-halosubstituted enals and 2-chloroacrylonitrile in the Ames test and the SOS-chromotest. Mutat Res. 1994;322(4):321–8. 516. Parent RA, Caravello HE, San RH. Mutagenic activity of acrolein in S. typhimurium and E. coli. J Appl Toxicol. 1996;16(2):103–8. 517. Parent RA, Caravello HE, Christian MS, Hoberman AM. Developmental toxicity of acrolein in New Zealand white rabbits. Fundam Appl Toxicol. 1993;20(2):248–56. 518. Parent RA, Caravello HE, Hoberman AM. Reproductive study of acrolein on two generations of rats. Fundam Appl Toxicol. 1992;19(2):228–37. 519. IARC. Acrolein. 1995;63:337. 520. Leikauf GD. Mechanisms of aldehyde-induced bronchial reactivity: role of airway epithelium. Res Rep Health Eff Inst. 1992;(49):1–35. 521. Feron VJ, Til HP, de Vrijer F, Woutersen RA, Cassee FR, van Bladeren PJ. Aldehydes: occurrence, carcinogenic potential, mechanism of action and risk assessment. Mutat Res. 1991;259(3–4):363–85. 522. Allen N, Mendell JR, Billmaier DJ, et al. Toxic polyneuropathy due to methyl n-butyl ketone. Arch Neurol. 1975;32:209–18. 523. Spencer PS, Schaumburg HH. Ultrastructural studies of the dying-back process. IV. Differential vulnerability of PNS and CNS fibers in experimental central-peripheral distal axonopathies. J Neuropathol Exp Neurol. 1977;36:300–20. 524. LoPachin RM, Lehning EJ. 
The relevance of axonal swellings and atrophy to gamma-diketone neurotoxicity: a forum position paper. Neurotoxicology. 1997;18(1):7–22. 525. Schwetz BA, Mast TJ, Weigel RJ, Dill JA, Morrissey RE. Developmental toxicity of inhaled methyl ethyl ketone in Swiss mice. Fundam Appl Toxicol. 1991;16(4):742–8. 526. Nemec MD, Pitt JA, Topping DC, et al. Inhalation two-generation reproductive toxicity study of methyl isobutyl ketone in rats. Int J Toxicol. 2004;23(2):127–43. 527. Rosenkranz HS, Klopman G. 1,4-Dioxane: prediction of in vivo clastogenicity. Mutat Res. 1992;280(4):245–51. 528. Roy SK, Thilagar AK, Eastmond DA. Chromosome breakage is primarily responsible for the micronuclei induced by 1,4-dioxane in the bone marrow and liver of young CD-1 mice. Mutat Res. 2005;586(1):28–37. 529. Goldsworthy TL, Monticello TM, Morgan KT, et al. Examination of potential mechanisms of carcinogenicity of 1,4-dioxane in rat nasal epithelial cells and hepatocytes. Arch Toxicol. 1991;65(1):1–9. 530. IARC. 1,4-Dioxane. 1999;71:589. 531. National Toxicology Program. 1,4-Dioxane. Rep Carcinog. 2002;10:110–1. 532. Dalvy RA, Neal RA. Metabolism in vivo of carbon disulfide to carbonyl sulfide and carbon dioxide in the rat. Biochem Pharmacol. 1978;27:1608. 533. Hamilton A. The making of artificial silk in the United States and some of the dangers attending it. In U.S. Department of Labor, Division of Labor Standards: Discussion of Industrial Accidents and Diseases. Bulletin No. 10, Washington, DC: U.S. Government Printing Office, 1937, 151–60.
Environmental Health
534. Lilis R. Behavioral effects of occupational carbon disulfide exposure. In: Xintaras C, Johnson BL, de Groot I, eds. Behavioral Toxicology, Early Detection of Occupational Hazards. Washington, DC: U.S. Dept. of HEW, Public Health Service, Centers for Disease Control, National Institute for Occupational Safety and Health; 1974: 51–9. 535. Huang CC. Carbon disulfide neurotoxicity: Taiwan experience. Acta Neurol Taiwan. 2004;13(1):3–9. 536. Huang CC, Yen TC, Shih TS, Chang HY, Chu NS. Dopamine transporter binding study in differentiating carbon disulfide induced parkinsonism from idiopathic parkinsonism. Neurotoxicology. 2004;25(3):341–7. 537. Chang SJ, Shih TS, Chou TC, Chen CJ, Chang HY, Sung FC. Hearing loss in workers exposed to carbon disulfide and noise. Environ Health Perspect. 2003;111(13):1620–4. 538. Seppalainen AM, Tolonen MT. Neurotoxicity of long-term exposure to carbon disulfide in the viscose rayon industry—a neurophysiological study. Work Environ Health. 1974;11:145–53. 539. Krstev S, Perunicic B, Farkic B, Banicevic R. Neuropsychiatric effects in workers with occupational exposure to carbon disulfide. J Occup Health. 2003;45(2):81–7. 540. Hirata M, Ogawa Y, Okayama A, Goto S. Changes in auditory brainstem response in rats chronically exposed to carbon disulfide. Arch Toxicol. 1992;66(5):334–8. 541. Herr DW, Boyes WK, Dyer RS. Alterations in rat flash and pattern reversal evoked potentials after acute or repeated administration of carbon disulfide (CS2). Fundam Appl Toxicol. 1992;18(3):328–42. 542. de Gandarias JM, Echevarria E, Mugica J, Serrano R, Casis L. Changes in brain enkephalin immunostaining after acute carbon disulfide exposure in rats. J Biochem Toxicol. 1994;9(2):59–62. 543. Nishiwaki Y, Takebayashi T, O’Uchi T, et al. Six year observational cohort study of the effect of carbon disulphide on brain MRI in rayon manufacturing workers. Occup Environ Med. 2004;61(3):225–32. 544. DeCaprio AP, Spink DC, Chen X, Fowke JH, Zhu M, Bank S. 
Characterization of isothiocyanates, thioureas, and other lysine adduction products in carbon disulfide-treated peptides and protein. Chem Res Toxicol. 1992;5(4):496–504. 545. Sills RC, Harry GJ, Valentine WM, Morgan DL. Interdisciplinary neurotoxicity inhalation studies: carbon disulfide and carbonyl sulfide research in F344 rats. Toxicol Appl Pharmacol. 2005;207(Suppl 2):245–50. 546. Takebayashi T, Nishiwaki Y, Uemura T, et al. A six year follow up study of the subclinical effects of carbon disulphide exposure on the cardiovascular system. Occup Environ Med. 2004;61(2):127–34. 547. Tang GH, Xuan DF. Detection of DNA damage induced by carbon disulfide in mice sperm with single-cell gel electrophoresis assay. Zhonghua Lao Dong Wei Sheng Zhi Ye Bing Za Zhi. 2003;21(6):440–3. 548. Patel KG, Yadav PC, Pandya CB, Saiyed HN. Male exposure mediated adverse reproductive outcomes in carbon disulphide exposed rayon workers. J Environ Biol. 2004;25(4):413–8. 549. Wang ZP, Xie KQ, Li HQ. Effect of carbon disulfide exposure at different phases on the embryonic development in mid-pregnancy of female mice. Zhonghua Lao Dong Wei Sheng Zhi Ye Bing Za Zhi. 2005;23(2):139–41. 550. Valentine WM, Graham DG, Anthony DC. Covalent cross-linking of erythrocyte spectrin by carbon disulfide in vivo. Toxicol Appl Pharmacol. 1993;121(1):71–7. 551. Valentine WM, Amarnath V, Amarnath K, Rimmele F, Graham DG. Carbon disulfide mediated protein cross-linking by N,N-diethyldithiocarbamate. Chem Res Toxicol. 1995;8(1):96–102. 552. Chen XQ, Tan XD. Studies on DNA damage in workers with long-term exposure to lower concentration of carbon disulfide. Zhonghua Yu Fang Yi Xue Za Zhi. 2004;38(1):36–8.
553. Djuric D, Surducki N, Berkes I. Iodine-azide test on urine of persons exposed to carbon disulfide. Br J Ind Med. 1965;22:321–3. 554. IARC. Nitrobenzene. 1996;65:381. 555. Miller RT. Dinitrobenzene-mediated production of peroxynitrite by neuronal nitric oxide synthase. Chem Res Toxicol. 2002;15(7):927–34. 556. Mulheran M, Ray DE, Lister T, Nolan CC. The effect of 1,3-dinitrobenzene on the functioning of the auditory pathway in the rat. Neurotoxicology. 1999;20(1):27–39. 557. Irimura K, Yamaguchi M, Morinaga H, Sugimoto S, Kondou Y, Koida M. Collaborative work to evaluate toxicity on male reproductive organs by repeated dose studies in rats 26. Detection of 1,3-dinitrobenzene-induced histopathological changes in testes and epididymides of rats with 2-week daily repeated dosing. J Toxicol Sci. 2000;25 Spec No:251–8. 558. Strandgaard C, Miller MG. Germ cell apoptosis in rat testis after administration of 1,3-dinitrobenzene. Reprod Toxicol. 1998;12(2):97–103. 559. Dunnick JK, Burka LT, Mahler J, Sills R. Carcinogenic potential of o-nitrotoluene and p-nitrotoluene. Toxicology. 2003;183(1–3):221–34. 560. Sills RC, Hong HL, Flake G, et al. o-Nitrotoluene-induced large intestinal tumors in B6C3F1 mice model human colon cancer in their molecular pathogenesis. Carcinogenesis. 2004;25(4):605–12. Epub 2003 Dec. 561. Hong HL, Ton TV, Devereux TR, et al. Chemical-specific alterations in ras, p53, and beta-catenin genes in hemangiosarcomas from B6C3F1 mice exposed to o-nitrotoluene or riddelliine for 2 years. Toxicol Appl Pharmacol. 2003;191(3):227–34. 562. Jones CR, Beyerbach A, Seffner W, Sabbioni G. Hemoglobin and DNA adducts in rats exposed to 2-nitrotoluene. Carcinogenesis. 2003;24(4):779–87. 563. IARC. 2-Nitrotoluene, 3-Nitrotoluene and 4-Nitrotoluene. 1996;65:409. 564. IARC. 2,4-Dinitrotoluene, 2,6-Dinitrotoluene and 3,5-Dinitrotoluene. 1996;65:309. 565. Jones CR, Liu YY, Sepai O, Yan H, Sabbioni G. Hemoglobin adducts in workers exposed to nitrotoluenes. 
Carcinogenesis. 2005;26(1):133–43. Epub 2004 Oct 7. 566. IARC. 2,4-Dinitrotoluene, 2,6-Dinitrotoluene and 3,5-Dinitrotoluene. 1996;65:309. 567. Banerjee H, Hawkins Z, Dutta S, Smoot D. Effects of 2-amino-4,6-dinitrotoluene on p53 tumor suppressor gene expression. Mol Cell Biochem. 2003;252(1–2):387–9. 568. Sabbioni G, Liu YY, Yan H, Sepai O. Hemoglobin adducts, urinary metabolites and health effects in 2,4,6-trinitrotoluene exposed workers. Carcinogenesis. 2005;26(7):1272–9. Epub 2005 Apr 7. 569. George SE, Huggins-Clark G, Brooks LR. Use of a Salmonella microsuspension bioassay to detect the mutagenicity of munitions compounds at low concentrations. Mutat Res. 2001;490(1):45–56. 570. IARC. 2,4,6-Trinitrotoluene. 1996;65:449. 571. Homma-Takeda S, Hiraku Y, Ohkuma Y, et al. 2,4,6-trinitrotoluene-induced reproductive toxicity via oxidative DNA damage by its metabolite. Free Radic Res. 2002;36(5):555–66. 572. Dugas TR, Kanz MF, Hebert VY, et al. Vascular medial hyperplasia following chronic, intermittent exposure to 4,4′-methylenedianiline. Cardiovasc Toxicol. 2004;4(1):85–96. 573. Martelli A, Carrozzino R, Mattioli F, Brambilla G. DNA damage induced by 4,4′-methylenedianiline in primary cultures of hepatocytes and thyreocytes from rats and humans. Toxicol Appl Pharmacol. 2002;182(3):219–25. 574. National Toxicology Program. 4,4′-Methylenedianiline and its dihydrochloride salt. Rep Carcinog. 2002;10:152–3. 575. Dearman RJ, Warbrick EV, Humphreys IR, Kimber I. Characterization in mice of the immunological properties of five allergenic acid anhydrides. J Appl Toxicol. 2000;20(3):221–30.
27 576. Yamazaki K, Ohnishi M, Aiso S, et al. Two-week oral toxicity study of 1,4-Dichloro-2-nitrobenzene in rats and mice. Ind Health. 2005; 43(2):308–19. 577. DeLeve LD. Dinitrochlorobenzene is genotoxic by sister chromatid exchange in human skin fibroblasts. Mutat Res. 1996;371(1–2):105–8. 578. Catterall F, King LJ, Ioannides C. Mutagenic activity of the glutathione S-transferase substrate 1-chloro-2,4-dinitrobenzene (CDNB) in the Salmonella mutagenicity assay. Mutat Res. 2002;520(1–2): 119–24. 579. Yoshida R, Oikawa S, Ogawa Y, et al. Mutagenicity of p-aminophenol in E. coli WP2uvrA/pKM101 and its relevance to oxidative DNA damage. Mutat Res. 1998;418(1):59. 580. Li Y, Bentzley CM, Tarloff JB. Comparison of para-aminophenol cytotoxicity in rat renal epithelial cells and hepatocytes. Toxicology. 2005;209(1):69–76. Epub 2005 Jan 21. 581. Murray EB, Edwards JW. Micronuclei in peripheral lymphocytes and exfoliated urothelial cells of workers exposed to 4,4′-methylenebis(2-chloroaniline) (MOCA). Mutat Res. 1999;446(2):175–80. 582. IARC. 4,4′-Methylenebis(2-Chloroaniline) (MOCA). 1993:271. 583. Barnes JM, Magee PN. Some toxic properties of dimethylaitrosamine. Br J Ind Med. 1954;11:167. 584. Hsu YC, Chiu YT, Lee CY, Lin YL, Huang YT. Increases in fibrosis-related gene transcripts in livers of dimethylnitrosamineintoxicated rats. J Biomed Sci. 2004;11(3):408–17. 585. Kitamura K, Nakamoto Y, Akiyama M, et al. Pathogenic roles of tumor necrosis factor receptor p55-mediated signals in dimethylnitrosamine-induced murine liver fibrosis. Lab Invest. 2002; 82(5):571–83. 586. Magee PN, Barnes JM. Carcinogenic nitroso compounds. Adv Cancer Res. 1956;10:163. 587. Magee PN, Farber E. Toxic liver injury and carcinogenesis: methylation of rat-liver nucleic acids by dimethylnitrosamine in vivo. Biochem J. 1962;83:114. 588. Frei E, Kuchenmeister F, Gliniorz R, Breuer A, Schmezer P. 
N-nitrososdimethylamine is activated in microsomes from hepatocytes to reactive metabolites which damage DNA of non-parenchymal cells in rat liver. Toxicol Lett. 2001;123(2–3):227–34. 589. Druckrey H, Preussman R, Ivankovic S, Schmahl D. Organotrope carcinogene Wirkungen bei 65 verschiedenen N-Nitroso-Verbindungen an BD-Ratten. Z Krebsforsch. 1967;69:103–201. 590. Enzmann H, Zerban H, Kopp-Schneider A, Loser E, Bannach P. Effects of low doses of N-nitrosomorpholine on the development of early stages of hepatocarcinogenesis. Carcinogenesis. 1995;16(7): 1513–8. 591. Baskaran K, Laconi S, Reddy MK. Transformation of hamster pancreatic duct cells by 4-(methylnitrosamino)-1-butanone (NNK), in vitro. Carcinogenesis. 1994;15(11):2461–6. 592. Lozano JC, Nakazawa H, Cros MP, Cabral R, Yamasaki H. G → A mutations in p53 and Ha-ras genes in esophageal papillomas induced by N-nitrosomethylbenzylamine in two strains of rats. Mol Carcinog. 1994;9(1):33–9. 593. Georgiadis P, Xu YZ, Swann PF. Nitrosamine-induced cancer: O4alkylthymine produces sites of DNA hyperflexibility. Biochemistry. 1991;30(50):11725–32. 594. Carlton PS, Kresty LA, Siglin JC, Morse MA, Lu J, Morgan C, Stoner GD. Inhibition of N-nitrosomethylbenzylamine-induced tumorigenesis in the rat esophagus by dietary freeze-dried strawberries. Carcinogenesis. 2001;22(3):441–6. 595. Wirnitzer U, Topfer R, Rosenbruch M. Altered p53 expression in early stages of chemically induced rodent hepatocarcinogenesis. Toxicol Pathol. 1998;26(5):636–45. 596. Tatsuta M, Iishi H, Baba M, Yano H, Iseki K, Uehara H, Nakaizumi A. Enhancement by ethyl alcohol of experimental hepatocarcinogenesis induced by N-nitrosomorpholine. Int J Cancer. 1997;71(6):1045–8. 597. Hecht SS. DNA adduct formation from tobacco-specific Nnitrosamines. Mutat Res. 1999;424(1–2):127–42.
Diseases Associated with Exposure to Chemical Substances
673
598. Hoffmann D, Brunnemann KD, Prokopczyk B, Djordjevic MV. Tobacco-specific N-nitrosamines and Areca-derived N-nitrosamines: chemistry, biochemistry, carcinogenicity, and relevance to humans. J Toxicol Environ Health. 1994;41(1):1–52. 599. Miyazaki M, Sugawara E, Yoshimura T, Yamazaki H, Kamataki T. Mutagenic activation of betel quid-specific N-nitrosamines catalyzed by human cytochrome P450 coexpressed with NADPHcytochrome P450 reductase in Salmonella typhimurium YG7108. Mutat Res. 2005;581(1–2):165–71. Epub 2005 Jan 12. 600. Weitberg AB, Corvese D. Oxygen radicals potentiate the genetic toxicity of tobacco-specific nitrosamines. Clin Genet. 1993;43(2): 88–91. 601. Jorquera R, Castonguay A, Schuller HM. DNA single-strand breaks and toxicity induced by 4-(methyl-nitrosamino)-1-(3-pyridyl)-1butanone or N-nitrosodimethylamine in hamster and rat liver. Carcinogenesis. 1994;15(2):389–94. 602. Hill CE, Affatato AA, Wolfe KJ, et al. Gender differences in genetic damage induced by the tobacco-specific nitrosamine NNK and the influence of the Thr241Met polymorphism in the XRCC3 gene. Environ Mol Mutagen. 2005;46(1):22–9. 603. Schuller HM, Jorquera R, Lu X, Riechert A, Castonguay A. Transplacental carcinogenicity of low doses of 4-(methylnitrosamino)-1-(3pyridyl)-1-butanone administered subcutaneously or intratracheally to hamsters. J Cancer Res Clin Oncol. 1994;120(4):200–3. 604. Chung FL, Xu Y. Increased 8-oxodeoxyguanosine levels in lung DNA of A/J mice and F344 rats treated with the tobacco-specific nitrosamine 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone. Carcinogenesis. 1992;13(7):1269–72. 605. Belinsky SA, Devereux TR, Foley JF, Maronpot RR, Anderson MW. Role of the alveolar type II cell in the development and progression of pulmonary tumors induced by 4-(methylnitrosamino)-1(3-pyridyl)-1-butanone in the A/J mouse. Cancer Res. 1992;52(11): 3164–73. 606. Ho YS, Chen CH, Wang YJ, et al. 
Tobacco-specific carcinogen 4(methylnitrosamino)-1-(3-pyridyl)-1-butanone (NNK) induces cell proliferation in normal human bronchial epithelial cells through NFkappaB activation and cyclin D1 up-regulation. Toxicol Appl Pharmacol. 2005;205(2):133–48. Epub 2005 Jan 8. 607. Anderson LM, Hecht SS, Kovatch RM, Amin S, Hoffmann D, Rice JM.Tumorigenicity of the tobacco-specific carcinogen 4-(methylnitrosamino)-1-(3 pyridyl)-1-butanone in infant mice. Cancer Lett. 1991;58(3):177–81. 608. Anderson LM, Carter JP, Driver CL, Logsdon DL, Kovatch RM, Giner-Sorolla A. Enhancement of tumorigenesis by Nnitrosodiethylamine, N-nitrosopyrrolidine and N6-(methylnitroso)adenosine by ethanol. Cancer Lett. 1993;68(1):61–6. 609. Yoshimura H, Takemoto K. Effect of cigarette smoking and/or N-bis(2-hydroxypropyl)nitrosamine (DHPN) on the development of lung and pleural tumors in rats induced by administration of asbestos. Sangyo Igaku. 1991;33(2):81–93. 610. Prokopczyk B, Rivenson A, Hoffmann D. A study of betel quid carcinogenesis. IX. Comparative carcinogenicity of 3-(methylnitrosamino) propionitrile and 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanone upon local application to mouse skin and rat oral mucosa. Cancer Lett. 1991;60(2):153–7. 611. Janzowski C, Landsiedel R, Golzer P, Eisenbrand G. Mitochondrial formation of beta-oxopropyl metabolites from bladder carcinogenic omega-carboxyalkylnitrosamines. Chem Biol Interact. 1994;90(1): 23–33. 612. Pairojkul C, Shirai T, Hirohashi S, et al. Multistage carcinogenesis of liver-fluke-associated cholangiocarcinoma in Thailand. Princess Takamatsu Symp. 1991;22:77–86. 613. Siddiqi MA, Tricker AR, Kumar R, Fazili Z, Preussmann R. Dietary sources of N-nitrosamines in a high-risk area for oesophageal cancer— Kashmir, India. IARC Sci Publ. 1991(105):210–3.
674
Environmental Health
614. Wacker DC, Spiegelhalder B, Preussmann R. New sulfenamide accelerators derived from ‘safe’ amines for the rubber and tyre industry. IARC Sci Publ. 1991(105):592–4. 615. Luo JC, Cheng TJ, Kuo HW, Chang MJ. Decreased lung function associated with occupational exposure to epichlorohydrin and the modification effects of glutathione s-transferase polymorphisms. J Occup Environ Med. 2004;46(3):280–6. 616. IARC. Epichlorohydrin. 1999;71:603. 617. Singh US, Decker-Samuelian K, Solomon JJ. Reaction of epichlorohydrin with 2′-deoxynucleosides: characterization of adducts. Chem Biol Interact. 1996;99(1–3):109–28. 618. Cheng TJ, Hwang SJ, Kuo HW, Luo JC, Chang MJ. Exposure to epichlorohydrin and dimethylformamide, glutathione S-transferases and sister chromatid exchange frequencies in peripheral lymphocytes. Arch Toxicol. 1999;73(4–5):282–7. 619. Bukvic N, Bavaro P, Soleo L, Fanelli M, Stipani I, Elia G, Susca F, Guanti G. Increment of sister chromatid exchange frequencies (SCE) due to epichlorohydrin (ECH) in vitro treatment in human lymphocytes. Teratog Carcinog Mutagen. 2000;20(5):313–20. 620. Deschamps D, Leport M, Cordier S, et al. Toxicity of ethylene oxide on the crystalline lens in an occupational milieu. Difficulty of epidemiologic surveys of cataract. J Fr Ophthalmol. 1990;13(4): 189–97. 621. Lin TJ, Ho CK, Chen CY, Tsai JL, Tsai MS. Two episodes of ethylene oxide poisoning—a case report. Kaohsiung J Med Sci. 2001; 17(7):372–6. 622. Brashear A, Unverzagt FW, Farber MO, Bonnin JM, Garcia JG, Grober E. Ethylene oxide neurotoxicity: a cluster of 12 nurses with peripheral and central nervous system toxicity. Neurology. 1996; 46(4):992–8. 623. Nagata H, Ohkoshi N, Kanazawa I, Oka N, Ohnishi A. Rapid axonal transport velocity is reduced in experimental ethylene oxide neuropathy. Mol Chem Neuropathol. 1992;17(3):209–17. 624. IARC. Ethylene Oxide. 1994;60:73. 625. Kolman A, Bohusova T, Lambert B, Simons JW. 
Induction of 6-thioguanine-resistant mutants in human diploid fibroblasts in vitro with ethylene oxide. Environ Mol Mutagen. 1992;19(2):93–7. 626. Oesch F, Hengstler JG, Arand M, Fuchs J. Detection of primary DNA damage: applicability to biomonitoring of genotoxic occupa-
627.
628.
629.
630. 631.
632. 633. 634.
635.
636.
637. 638. 639.
640.
tional exposure and in clinical therapy. Pharmacogenetics. 1995;5 Spec No: S118–22. Schulte PA, Boeniger M, Walker JT, et al. Biologic markers in hospital workers exposed to low levels of ethylene oxide. Mutat Res. 1992;278(4):237–51. Tates AD, Grummt T, Tornqvist M, et al. Biological and chemical monitoring of occupational exposure to ethylene oxide. Mutat Res. 1991;250(1–2):483–97. Mayer J, Warburton D, Jeffrey AM, et al. Biological markers in ethylene oxide-exposed workers and controls. Mutat Res. 1991;248(1): 163–76. IARC. Ethylene Oxide. Vol 60. 1994: 73. Polifka JE, Rutledge JC, Kimmel GL, Dellarco V, Generoso WM. Exposure to ethylene oxide during the early zygotic period induces skeletal anomalies in mouse fetuses. Teratology. 1996;53(1):1–9. Vogel EW, Natarajan AT. DNA damage and repair in somatic and germ cells in vivo. Mutat Res. 1995;330(1–2):183–208. Weller E, Long N, Smith A, et al. Dose-rate effects of ethylene oxide exposure on developmental toxicity. Toxicol Sci. 1999;50(2):259–70. Kaido M, Mori K, Koide O. Testicular damage caused by inhalation of ethylene oxide in rats: light and electron microscopic studies. Toxicol Pathol. 1992;20(1):32–43. Picut CA, Aoyama H, Holder JW, Gold LS, Maronpot RR, Dixon D. Bromoethane, chloroethane and ethylene oxide induced uterine neoplasms in B6C3F1 mice from 2-year NTP inhalation bioassays: pathology and incidence data revisited. Exp Toxicol Pathol. 2003; 55(1):1–9. Hogstedt C, Rohlen BS, Berndtsson O, Axelson O, Ehrenberg L. A cohort study of mortality and cancer incidence in ethylene oxide production workers. Br J Ind Med. 1979;36:276–80. Hogstedt C, Malmquist N, Wadman B. Leukemia in workers exposed to ethylene oxide. JAMA. 1979;241:1132–3. Steenland K, Stayner L, Greife A, et al. Mortality among workers exposed to ethylene oxide. N Engl J Med. 1991;324(20):1402–7. Steenland K, Stayner L, Deddens J. Mortality analyses in a cohort of 18 235 ethylene oxide exposed workers: follow up extended from 1987 to 1998. 
Occup Environ Med. 2004;61(1):2–7. IARC. Phenyl Glycidyl Ether. 1999;71:1525.
Polychlorinated Biphenyls
28
Richard W. Clapp
The group of chemicals termed polychlorinated biphenyls is part of the larger class of chlorinated organic hydrocarbon chemicals. There are 209 individual compounds (congeners) with varying numbers and locations of chlorine on the two phenyl rings, and with varying degrees of toxicity and adverse human and ecological effects.1 Some of the PCBs are structurally similar to dioxins and furans, and these congeners may cause similar health effects.2 The higher chlorinated PCBs are particularly persistent in the environment,3 although not all potential congeners were manufactured, and there was a shift toward lower-chlorinated PCB mixtures in later years. In 1976, the U.S. Congress passed the Toxic Substances Control Act, which led to the ban on production of PCBs in the United States. PCBs were first produced by the Monsanto Company in the late 1920s in two U.S. states for use in electrical products; polychlorinated biphenyls were initially found to have properties that made them desirable in electrical transformers and capacitors because of their insulating and low-flammability characteristics.4 Subsequently, PCBs were used in hydraulic fluids, microscope oil, paints, surface coatings, inks, adhesives, carbonless copy paper, and chewing gum, among other products. Because of leaks in the production process, spills or leaks from transformers and other products, fires and incineration of PCB products, and improper disposal of PCB-containing wastes in landfills, there is widespread contamination from PCBs in the environment and wide distribution in the food chain and human adipose tissue.1 There have been some dramatic examples of leakage and spills, including the Hudson River in New York, New Bedford Harbor in Massachusetts, and the town of Anniston, Alabama, among many others.
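The count of 209 congeners follows from the combinatorics of chlorine substitution on the biphenyl skeleton (10 substitutable ring positions, up to 10 chlorines). As an illustrative check only (the code and its symmetry model, independent ring flips plus a ring swap, are our own and not from the text), a brute-force enumeration in Python reproduces the figure:

```python
from itertools import product

# The 10 substitutable positions: indices 0-4 are one phenyl ring
# (positions 2-6), indices 5-9 are the other ring.
def symmetry_images(pattern):
    """Yield all patterns equivalent under biphenyl symmetry:
    each ring may flip independently, and the two rings may swap."""
    a, b = pattern[:5], pattern[5:]
    for ra in (a, a[::-1]):          # flip ring A or not
        for rb in (b, b[::-1]):      # flip ring B or not
            yield ra + rb            # rings in original order
            yield rb + ra            # rings swapped

def canonical(pattern):
    # Pick one representative per equivalence class.
    return min(symmetry_images(pattern))

# Enumerate every chlorine-substitution pattern (each position either
# carries a chlorine or not) and count distinct molecules that carry
# at least one chlorine.
congeners = {canonical(p) for p in product((0, 1), repeat=10) if any(p)}
print(len(congeners))  # 209
```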
Indeed, PCBs have been found in mammalian blood and adipose tissue samples throughout the world,5 including remote Arctic populations with limited industrial production or use of these compounds.6 The likely source of PCB exposure in these remote settings is ingestion of PCBs accumulated through the food chain, especially in fish and marine mammals. Because of the many adverse health effects and widespread distribution of PCBs in the environment, these compounds have not been made in the United States since 1977,7 and they are being phased out under the recent Stockholm Convention on Persistent Organic Pollutants (POPs). There are environmental and occupational exposure limits for PCBs in the United States that set allowable levels in workplaces, in drinking water sources, during transport or disposal, in discharges to sewage treatment plants, and in food consumed by infants and adults. The current OSHA occupational limits are 1 mg/m3 for PCB mixtures with 42% chlorine and 0.5 mg/m3 for mixtures with 52% chlorine over an 8-hour day. Presumably, these limits would protect workers exposed during spills from old equipment containing PCBs. The U.S. Food and Drug Administration recommends that drinking water not contain more than 0.5 parts per billion PCBs and that foods such as milk, eggs, poultry fat, fish, shellfish, and infant formula not contain more than 3 parts per million PCBs (3 mcg/g on a lipid basis).1
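For illustration only, the 8-hour occupational limits above are time-weighted averages, and the arithmetic can be sketched as follows. The function and the sample measurements are hypothetical and invented for the example; this is not a compliance tool or OSHA guidance:

```python
# OSHA PELs for PCBs as cited in the text, keyed by the percent
# chlorine of the commercial mixture (8-hour time-weighted averages).
PEL_MG_M3 = {42: 1.0, 52: 0.5}

def eight_hour_twa(samples):
    """samples: list of (concentration in mg/m3, duration in hours).
    Assumes the durations cover the full 8-hour shift."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical worker near leaking 42%-chlorine transformer fluid:
# 2 hours at 2.5 mg/m3 during cleanup, then 6 hours at 0.1 mg/m3.
twa = eight_hour_twa([(2.5, 2), (0.1, 6)])
print(round(twa, 3), twa <= PEL_MG_M3[42])
```

Here the short, high-concentration task averages out to 0.7 mg/m3 over the shift, under the 1 mg/m3 limit for a 42% chlorine mixture but above the 0.5 mg/m3 limit that would apply to a 52% chlorine mixture.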
PCBs are chemically similar to other compounds such as dioxins and furans, and human exposures are often to mixtures of these related compounds. For example, in transformer fires such as the one that occurred in the State Office Building in Binghamton, New York,8 or in the two major contaminated rice oil poisoning incidents in Japan9 and Taiwan,10,11 the exposures were to a combination of PCBs, dioxins (PCDDs), dibenzofurans (PCDFs), and possibly some other chlorinated compounds. This makes it complex to determine the causes of the health effects observed in these situations.

CHEMICAL PROPERTIES OF PCBs
PCBs were produced by the catalyzed addition of chlorine to the basic double benzene ring structure; any number of chlorine atoms from 1 to 10 can be added, typically resulting in commercial mixtures of dozens of congeners. These mixtures can be oily or solid, and colorless to light yellow, with no characteristic smell or taste. The commercial products were primarily six or seven mixtures classified by their percentage of chlorine. The major manufacturer, Monsanto, called its PCB product Aroclor and assigned identifying numbers based on the chlorine content of the congener mixtures.1 Other manufacturers, such as Bayer in Europe, used other names and numbering schemes (Clophen A60, Kanechlor 500, etc.). These products resist degradation in the environment and have low solubility in water, but they are soluble in oils and certain organic solvents. They are lipophilic and, therefore, bioaccumulate in fatty tissue in humans and other species.

GLOBAL CONTAMINATION
Beginning with the production of PCBs in Anniston, Alabama, in the late 1920s, later production in other parts of the United States and Europe in the middle of the last century, and extending beyond the curtailment of production for nearly all uses in 1977, there have been many examples of environmental contamination by these persistent compounds. In Sweden, widespread PCB contamination was documented in the 1960s, and in North America, surveys documented contamination of human breast milk and fish around the Great Lakes beginning in the 1970s. Two major episodes of PCB poisoning from contaminated rice oil occurred in 1968 in Japan and in 1979 in Taiwan. Cohorts of PCB-exposed manufacturing workers were established and follow-up studies were conducted in the United States,12 Italy,13 and Sweden14 in the 1970s, and considerable human and environmental exposure was described in the areas where these plants were located. For example, the Hudson River in New York and the Housatonic River in Massachusetts have been contaminated by PCBs from manufacturing plants in Hudson Falls and Pittsfield. These rivers have had fish consumption warnings posted for decades
Copyright © 2008 by The McGraw-Hill Companies, Inc.
because of the high concentrations of PCBs found there. More recently, studies of offspring of mothers exposed to PCBs through their diet, primarily through fish consumption, have been conducted around the Great Lakes in North America15–19 and in Europe,20 and these have added to the literature about health effects in children. The health effects identified in these recent studies include disruption of reproductive function; neurobehavioral and developmental deficits in newborns and in children exposed to PCBs in utero; systemic effects such as liver disease; and effects on the thyroid and immune systems.21,22
Mechanisms of Toxicity

The primary mechanism of toxicity for dioxin-like coplanar PCBs appears to be the induction of gene product expression after initial binding to the aryl hydrocarbon (Ah) receptor in the cytosol of mammalian cells. The most sensitive effects of this process are the alteration of cytochrome P450 1A1 and 1A2 expression and the induction of ethoxyresorufin O-deethylase (EROD), which produce a series of downstream effects.1 These can result in a variety of adverse responses in different tissues, which may vary by sex and developmental stage in different animal species. Other PCBs appear to be estrogenic and affect reproductive and endocrine systems. PCB effects on the neurological system may occur through other mechanisms that are not currently understood. Furthermore, although PCB mixtures appear to have both tumor-initiating and tumor-promoting capabilities in experimental animal studies, the mechanism of carcinogenicity is not currently known.
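Receptor-mediated enzyme induction of this kind is commonly summarized with a sigmoidal (Hill-type) dose-response curve. The sketch below is purely illustrative; the function and its parameter values (maximal effect, EC50, Hill coefficient) are invented for the example and are not drawn from the text or from any fitted EROD data:

```python
def hill_induction(dose, emax=1.0, ec50=1.0, n=1.0):
    """Fraction of maximal enzyme induction at a given dose
    (generic Hill model; all parameters are illustrative)."""
    return emax * dose**n / (ec50**n + dose**n)

# At dose == EC50 the response is half-maximal, and the curve
# saturates at high dose -- the qualitative shape expected for
# receptor-mediated induction.
for d in (0.1, 1.0, 10.0, 100.0):
    print(d, round(hill_induction(d), 3))
```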
Human Health Effects

The earliest reports of adverse human health effects of PCBs were dermal effects in exposed workers, who were diagnosed with rashes and chloracne. Typically, the exposures are to mixtures of PCBs and other chlorinated compounds, making it difficult to isolate the effects specific to PCBs. For example, the two major outbreaks of PCB poisoning, with clinical syndromes called Yusho and Yu-cheng, were examples of mixed exposures to cooking oil contaminated with PCBs and other chlorinated compounds such as polychlorinated dibenzofurans.23 Nevertheless, these two episodes provided much early evidence of PCB-related health effects, including chloracne, other skin abnormalities, hyperpigmentation, swelling of the eyelids, and eye discharge. Furthermore, the offspring of PCB-exposed mothers exhibited dark skin pigmentation, pigmented nails, and abnormal dentition. Chronic effects of PCB poisoning in Yusho victims included headache, joint swelling and pain, numbness of extremities, irregular menstruation, and low birth weight in offspring. Children also were found to have growth retardation and various other developmental effects, which were later investigated in other studies.24 Worker cohorts exposed to PCBs in the manufacture of capacitors and transformers were followed, and several types of cancer were found to be elevated. For example, deaths due to melanoma of the skin were increased (24 observed/13.7 expected) in capacitor and transformer workers.
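Figures reported as observed/expected deaths are conventionally summarized as a standardized mortality ratio (SMR). As an illustration using the melanoma figures above (the helper functions are our own; Byar's approximation is one common epidemiological shortcut for a Poisson confidence interval, not necessarily the method used in the cited studies):

```python
import math

def smr(observed, expected):
    """Standardized mortality ratio: observed / expected deaths."""
    return observed / expected

def byar_ci(observed, expected, z=1.96):
    """Approximate 95% CI for an SMR via Byar's approximation."""
    o = observed
    lo = o * (1 - 1/(9*o) - z/(3*math.sqrt(o)))**3 / expected
    hi = (o+1) * (1 - 1/(9*(o+1)) + z/(3*math.sqrt(o+1)))**3 / expected
    return lo, hi

# Melanoma deaths in the capacitor/transformer cohorts cited above:
print(round(smr(24, 13.7), 2))   # ~1.75, i.e., roughly a 75% excess
lo, hi = byar_ci(24, 13.7)
print(round(lo, 2), round(hi, 2))
```

Because the lower confidence bound stays above 1.0, the excess is unlikely to be a chance finding under this approximation.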
Lymphoma (10 observed/5.7 expected) and brain cancer (13 observed/3.7 expected) deaths were also increased in transformer workers, and liver and biliary tract cancer deaths were elevated in some capacitor workers’ studies.7 Electrical workers with potential exposure to PCBs have also been shown to have excess deaths due to melanoma of the skin12,25 and brain cancer.26 These studies of workers were the scientific basis for the current classification of PCBs as “probable” human carcinogens.27 Other studies of PCB exposure and cancer in non-occupationally exposed persons indicated increased incidence of non-Hodgkin’s lymphoma.28,29 A recent Swedish study of testicular cancer suggested that risk is increased by prenatal exposure to two subgroups of estrogenic and enzyme-inducing PCBs.30 A number of studies of breast cancer cases have been carried out, with equivocal results.31 Some breast cancer studies estimated exposure from fat or blood samples taken shortly before diagnosis,32 and some also combined all congeners or looked
at large groups of PCB congeners.33 More recently, one study looked at specific congeners and found increased risk in women with higher blood concentrations of the dioxin-like PCBs.34 Another study of breast cancer suggested that exposure to PCBs was associated with an increased risk of the disease in women with a specific CYP1A1 polymorphism (the m2 genotype).35 Abnormal thyroid function has been found in offspring of Dutch women exposed to PCBs, dioxins, and furans, and there is increasing evidence that exposure during the perinatal period can result in deficits in learning and cognitive development during childhood.36 Strong correlations between Great Lakes fish consumption and PCB levels in umbilical cord blood and breast milk have been found. Menstrual cycle length and time-to-pregnancy have been investigated in relation to Lake Ontario fish consumption with inconsistent results, but newborn neurological development was abnormal in offspring of women in the high-exposure category.37

PUBLIC HEALTH IMPACTS OF ENVIRONMENTAL EXPOSURE
The health impacts of low-level environmental exposure are controversial. Studies of low-dose effects are sometimes contradictory, as in the case of blood pressure and serum PCB levels. Similarly, some studies of neurobehavioral effects in offspring of PCB-exposed mothers have been questioned.38 Nevertheless, the effect on memory, attention, and IQ in children at the population level is significant enough to warrant limiting exposure to PCBs through the food chain. Clinicians seeking to provide guidance to worried patients, for example, should inquire about dietary fish consumption and residence near potential PCB contamination sites. Exposure in the United States can be reduced by adherence to current regulations governing disposal of PCB-containing waste. The primary consideration is safe transport of PCB-containing waste followed by either burial in an approved landfill, incineration at temperatures greater than 1500°C, or, preferably, chemical treatment and dechlorination of the PCBs.39 International efforts to manage and dispose of PCBs are underway and are being coordinated by the United Nations Environment Program.40 The Stockholm Convention on Persistent Organic Pollutants took effect in May 2004 and requires the participating parties to eliminate the use of PCBs by 2025 and to accomplish environmentally sound PCB waste management worldwide by 2028. The first steps toward establishing inventories of PCBs and standard methods for phasing out and eliminating wastes are already being taken. A critical parallel effort is adherence to the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal. As these efforts move forward, the global PCB pollution caused by past uses and practices can be expected to diminish, and human exposures and health effects will decline further.

REFERENCES
1. ATSDR, Agency for Toxic Substances and Disease Registry. Toxicological Profile for Polychlorinated Biphenyls. Atlanta: U.S. Department of Health and Human Services; 2000. 2. International Agency for Research on Cancer. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans. Polychlorinated Dibenzo-dioxins and Polychlorinated Dibenzofurans. Vol 69. Lyon, France: World Health Organization; 1997. 3. Cogliano VJ. Assessing the cancer risk from environmental PCBs. Environ Health Perspect. 1998;106(6):317–23. 4. Gilpin RK, Wagel DJ, Solch JG. Production, distribution and fate of polychlorinated dibenzo-p-dioxins, dibenzofurans and related organohalogens in the environment. In: Schecter A, Gasiewicz TA, eds. Dioxins and Health. 2nd ed. Hoboken, NJ: Wiley-Interscience; 2003: 55–87.
5. Schecter A. Exposure assessment: measurement of dioxins and related chemicals in human tissues. In: Schecter A, ed. Dioxins and Health. New York: Plenum Press; 1993: 449–85. 6. Dewailly E, Nantel AJ, Weber JP, Meyer F. High levels of PCBs in breast milk of Inuit women from arctic Quebec. Bull Environ Contam Toxicol. 1989;43:641–6. 7. Nicholson WJ, Landrigan PJ. Human health effects of polychlorinated biphenyls. In: Schecter A, ed. Dioxins and Health. New York: Plenum Press; 1994: 487–524. 8. Schecter A, Tiernan T. Occupational exposure to polychlorinated dioxins, polychlorinated furans, polychlorinated biphenyls, and biphenylenes after an electrical panel and transformer accident in an office building in Binghamton, NY. Environ Health Perspect. 1985;60:305–13. 9. Matsuda Y, Yoshimura H. Polychlorinated biphenyls and dibenzofurans in patients with Yusho and their toxicological significance: review. Am J Ind Med. 1984;5:31–44. 10. Rogan WJ, Gladen BC. Study of human lactation for effects of environmental contaminants: the North Carolina Breast Milk and Formula Project and some other ideas. Environ Health Perspect. 1988;60:215–21. 11. Chen Y-CJ, Guo Y-L, Hsu C-C, et al. Cognitive development of Yu-cheng (oil disease) children prenatally exposed to heat-degraded PCBs. JAMA. 1992;268:3213–8. 12. Sinks T, Steele G, Smith AB, et al. Mortality among workers exposed to polychlorinated biphenyls. Am J Epidemiol. 1992;136:389–98. 13. Bertazzi PA, Riboldi L, Persatori A, Radice L, Zocchetti C. Cancer mortality of capacitor manufacturing workers. Am J Ind Med. 1987;11:165–76. 14. Gustavsson P, Hogstedt C, Rappe C. Short-term mortality and cancer incidence in capacitor manufacturing workers exposed to polychlorinated biphenyls (PCBs). Am J Ind Med. 1986;10:341–4. 15. Jacobson JL, Jacobson SW, Humphrey HEB. Effects of in utero exposure to polychlorinated biphenyls and related contaminants on cognitive functioning in young children. J Pediatr. 1990a;116:38–45. 16.
Jacobson JL, Jacobson SW, Humphrey HEB. Effects of exposure to PCBs and related compounds on growth and activity in children. Neurotoxicol Teratol. 1990b;12:319–26. 17. Jacobson JL, Jacobson SW. Intellectual impairment in children exposed to polychlorinated biphenyls in utero. N Engl J Med. 1996;335:783–9. 18. Lonky E, Reihman J, Darvill T, Mather J, Daly H. Neonatal behavioral assessment scale performance in humans influenced by maternal consumption of environmentally contaminated Lake Ontario fish. J Great Lakes Res. 1996;22:198–212. 19. Stewart PW, Reihman J, Lonky EI, et al. Cognitive development in preschool children prenatally exposed to PCBs and MeHg. Neurotoxicol Teratol. 2003;25(1):11–22. 20. Weisglas-Kuperus N, Sas TC, Koopman-Esseboom C. Immunologic effects of background prenatal and postnatal exposure to dioxins and polychlorinated biphenyls in Dutch infants. Pediatr Res. 1995;38(3):404–10. 21. Hauser P. Resistance to thyroid hormone: implications for neurodevelopmental research. Toxicol Ind Health. 1998;14:85–101. 22. Hagmar L, Hallberg T, Leja M, Nilsson A, Schultz A. High consumption of fatty fish from the Baltic Sea is associated with changes in human lymphocyte subset levels. Toxicol Lett. 1995;77:335–42.
23. Longnecker M, Korrick S, Moysich K. Health effects of polychlorinated biphenyls. In: Schecter A, Gasiewicz T, eds. Dioxins and Health. 2nd ed. New York: Plenum Press; 2003. 24. Guo Y-L, Lambert GH, Hsu C-C. Growth abnormalities in the population exposed in utero and early postnatally to polychlorinated biphenyls and dibenzofurans. Environ Health Perspect. 1995;103(Suppl 6):117–22. 25. Loomis D, Browning SR, Schenck AP, Gregory E, Savitz DA. Cancer mortality among electric utility workers exposed to polychlorinated biphenyls. Occup Environ Med. 1997;54:720–8. 26. Yassi A, Tate R, Fish D. Cancer mortality in workers employed at a transformer manufacturing plant. Am J Ind Med. 1994;25(3):425–37. 27. International Agency for Research on Cancer. IARC Monographs on the Evaluation of the Carcinogenic Risk of Chemicals to Humans. Suppl 7. Update of IARC monographs volumes 1–42. Lyon, France: World Health Organization; 1987. 28. Rothman N, Cantor KP, Blair A, et al. A nested case-control study of non-Hodgkin lymphoma and serum organochlorine residues. Lancet. 1997;350:240–4. 29. Cole JS, Severson RK, Lubin J, et al. Organochlorines in carpet dust and non-Hodgkin lymphoma. Epidemiology. 2005;16(4):516–25. 30. Hardell L, van Bavel B, Lindstrom G, et al. Concentrations of polychlorinated biphenyls in blood and the risk of testicular cancer. Int J Androl. 2004;27:282–90. 31. Laden F, Collman G, Iwamoto K, et al. 1,1-Dichloro-2,2-bis(p-chlorophenyl)ethylene and polychlorinated biphenyls and breast cancer: a combined analysis of five U.S. studies. J Natl Cancer Inst. 2001;93(10):768–75. 32. Wolff MS, Toniolo PG, Lee EW, et al. Blood levels of organochlorine residues and risk of breast cancer. J Natl Cancer Inst. 1993;85:648–52. 33. Krieger N, Wolff MS, Hiatt RA, et al. Breast cancer and serum organochlorines: a prospective study among white, black and Asian women. J Natl Cancer Inst. 1994;86(8):589–99. 34. Demers A, Ayotte A, Brisson J, et al.
Plasma concentrations of polychlorinated biphenyls and the risk of breast cancer: a congener-specific analysis. Am J Epidemiol. 2002;155:629–35. 35. Zhang Y, Wise JP, Holford TR, et al. Serum polychlorinated biphenyls, cytochrome P-450 1A1 polymorphisms, and risk of breast cancer in Connecticut women. Am J Epidemiol. 2004;160:1177–83. 36. Koopman-Esseboom C, Morse DC, Weisglas-Kuperus N, et al. Effects of dioxins and polychlorinated biphenyls on thyroid hormone status of pregnant women and their infants. Pediatr Res. 1994;36:468–73. 37. Mendola P, Buck GM, Sever LE, Zielezny M, Vena JE. Consumption of PCB-contaminated freshwater fish and shortened menstrual cycle length. Am J Epidemiol. 1997;146:955–60. 38. Kimbrough RD, Doemland ML, Krouskas CA. Analysis of research studying the effects of polychlorinated biphenyls and related compounds on neurobehavioral development in children. Vet Hum Toxicol. 2001;43:220–28. 39. Costner P. Non-combustion technologies for the destruction of PCBs and other POPs wastes: civil society, international conventions and technological solutions. Greenpeace International. Amsterdam, Netherlands; 2004. 40. United Nations Environment Program. Consultation Meeting on PCB Management and Disposal under the Stockholm Convention on Persistent Organic Pollutants. Proceedings. Geneva, Switzerland; 2004.
Polychlorinated Dioxins and Polychlorinated Dibenzofurans
29
Yoshito Masuda • Arnold J. Schecter
Polychlorinated dibenzo-p-dioxins (PCDDs) have been described as the most toxic man-made chemicals known. They are synthetic, lipophilic, and very persistent. They are also relatively controversial. Toxicological studies of 2,3,7,8-tetrachlorodibenzo-p-dioxin (2,3,7,8-TCDD), known as the most toxic congener among the PCDDs and usually called Dioxin, demonstrate dose-dependent toxic responses, as do studies of other PCDDs and related chemicals such as the polychlorinated dibenzofurans (PCDFs), which frequently accompany polychlorinated biphenyls (PCBs). (Both PCDFs and PCBs are chemically and biologically similar to PCDDs.) However, the findings from human studies, at least until recently, have been less consistent. The animal health effects include but are not limited to: death several weeks after dosing, usually accompanied by a "wasting" or weight-loss syndrome; increase in cancers (found in all animal cancer studies); increased reproductive and developmental disorders, including fetal death in utero, malformations, and, in offspring dosed in utero, endocrine disruption with altered thyroid and sex hormone blood levels; immune deficiency, sometimes leading to death of newborn rodents, especially following dosing with infectious agents; liver damage, including transient increase in serum liver enzymes as well as the hepatocyte lesions characteristic of exposure to chlorinated organics (enlarged cells, intracytoplasmic lipid droplets, increased endoplasmic reticulum, enlarged and pleomorphic mitochondria with altered structure of the cristae mitochondriales, and enlarged dense intramitochondrial granules); central and peripheral nervous system changes, including altered behavior and change in nerve conduction velocity; altered lipid metabolism with increase in serum lipids; and skin disorders, including rash and chloracne (acne caused by chlorinated organic chemicals). Some effects are species specific. 
Other findings have been reported but with less frequency or consistency.1 Findings reported in some human studies are similar to those from animal studies. These include an increase in cancers of certain types, including soft tissue sarcomas, Hodgkin's lymphoma, non-Hodgkin's lymphoma, lung cancer, and liver cancer; adverse reproductive and developmental effects following intrauterine and nursing exposure, such as lower birth weight and smaller head circumference for gestational age, decreased cognitive abilities, behavioral impairment, and endocrine disruptions including altered thyroid hormone levels; immune deficiency; liver damage; altered lipid metabolism with increase in serum lipids; altered nerve conduction velocity; altered sex ratio in children born to dioxin-exposed women (more females than males); increase in diabetes or altered glucose metabolism in exposed chemical workers and sprayers of dioxin-contaminated Agent Orange herbicide; and behavioral changes including anxiety, difficulty sleeping, and decreased sexual ability in males.1–10 Some of the human health effects are subtle, such as those reported in the
Dutch studies. These effects are not likely to be detected by the clinician in individual patients but only in a larger population-based study. Skin disorders including rash and chloracne are also observed in some exposed persons. PCDDs and PCDFs are not manufactured as such, but are usually found as unwanted contaminants of other synthetic chemicals or as products of incineration of chlorinated organics. PCDDs consist of two benzene rings connected by a third middle ring containing two oxygen atoms in the para position. PCDFs have a similar structure, but the middle connecting ring contains only a single oxygen atom. PCBs consist of a biphenyl (two connected phenyl rings) with no oxygen. When chlorine atoms are in the 2, 3, 7, and 8 positions, PCDDs and PCDFs are extremely toxic. The most toxic congener, 2,3,7,8-TCDD, is defined as having a "Dioxin toxic equivalency factor" (TEF) of 1.0; other toxic PCDDs and PCDFs have TEFs from 0.0001 to 1 (Table 29-1). PCDDs and PCDFs without chlorines in the 2, 3, 7, and 8 positions are devoid of dioxin-like toxicity. Some PCBs also have dioxin-like toxicity, as shown in Table 29-1. The Dioxin toxic equivalency (TEQ) approximates the toxicity of the total mixture. The TEQ is determined by multiplying the measured level of each congener by the congener's TEF and then adding the products. The total Dioxin toxicity of a mixture is the sum of the TEQs from the PCDDs, the PCDFs, and the dioxin-like PCBs. There are characteristic levels and patterns of PCDD and PCDF congeners found in human tissues that correspond to levels of industrialization and contamination in a given country. At the present time, seven toxic PCDD and 10 toxic PCDF congeners, as well as 12 PCBs, can usually be identified in the tissue of persons living in more industrialized countries. The measurement of the individual congeners is done by capillary column gas chromatography coupled to high-resolution mass spectrometry. 
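The TEQ arithmetic just described (level × TEF, summed over congeners) can be sketched in a few lines of code. This is an illustrative calculation only: the TEF values below are taken from Table 29-1, but the selection of congeners and the measured levels are hypothetical.

```python
# Illustrative TEQ calculation: TEQ = sum of (congener level x congener TEF).
# TEF values are from the WHO 1997 scheme reproduced in Table 29-1;
# the measured levels are hypothetical example values (pg/g lipid).
TEFS = {
    "2,3,7,8-TetraCDD": 1.0,
    "1,2,3,7,8-PentaCDD": 1.0,
    "2,3,4,7,8-PentaCDF": 0.5,
    "2,3,7,8-TetraCDF": 0.1,
    "3,3',4,4',5-PentaCB (#126)": 0.1,
    "OctaCDD": 0.0001,
}

measured = {  # hypothetical congener levels, pg/g lipid
    "2,3,7,8-TetraCDD": 3.0,
    "2,3,4,7,8-PentaCDF": 10.0,
    "3,3',4,4',5-PentaCB (#126)": 40.0,
    "OctaCDD": 500.0,
}

def teq(levels, tefs):
    """Total dioxin toxic equivalency: sum of level * TEF for each congener."""
    return sum(level * tefs[congener] for congener, level in levels.items())

# 3*1 + 10*0.5 + 40*0.1 + 500*0.0001 = 12.05 pg TEQ/g lipid
print(teq(measured, TEFS))
```

Note how the abundant but weakly dioxin-like OctaCDD contributes almost nothing to the total, while the small amount of 2,3,4,7,8-PentaCDF dominates; the same effect explains why PCDFs accounted for most of the TEQ in the Yusho rice oil despite the much larger mass of PCBs.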
Extraction, chemical cleanup, and the use of known chemical standards have markedly improved the specificity and sensitivity of such measurements in recent years. Intake of 1–6 pg/kg body weight (BW)/day of TEQ of dioxin-like chemicals (PCDDs, PCDFs, and PCBs) is characteristic of adult daily intake in the United States at the present time.12 Intake of TEQ is mostly from food, especially meat, fish, and dairy products. Fruits and vegetables have very low levels of Dioxins, which come from surface deposition. Air and water contain very low levels of the fat-soluble Dioxins and are believed to contribute little to human intake, as food has been demonstrated in several studies to account for more than 90% of human exposure. Nursing infants in the United States consume approximately 35–65 pg/kg BW/day of TEQ during the first year of life. The U.S. Environmental Protection Agency (EPA) has used a value of 0.006 pg/kg BW/day of TEQ over a 70-year lifetime as a dose believed to possibly lead to an excess of one cancer per 1 million population. The EPA Dioxin Reassessment
Copyright © 2008 by The McGraw-Hill Companies, Inc.
TABLE 29-1. PCDD, PCDF, AND PCB CONGENERS WITH TEF

PCDDs/PCDFs/PCBs                    WHO TEF*
2,3,7,8-TetraCDD                    1
1,2,3,7,8-PentaCDD                  1
1,2,3,4,7,8-HexaCDD                 0.1
1,2,3,6,7,8-HexaCDD                 0.1
1,2,3,7,8,9-HexaCDD                 0.1
1,2,3,4,6,7,8-HeptaCDD              0.01
OctaCDD                             0.0001

2,3,7,8-TetraCDF                    0.1
1,2,3,7,8-PentaCDF                  0.05
2,3,4,7,8-PentaCDF                  0.5
1,2,3,4,7,8-HexaCDF                 0.1
1,2,3,6,7,8-HexaCDF                 0.1
1,2,3,7,8,9-HexaCDF                 0.1
2,3,4,6,7,8-HexaCDF                 0.1
1,2,3,4,6,7,8-HeptaCDF              0.01
1,2,3,4,7,8,9-HeptaCDF              0.01
OctaCDF                             0.0001

3,4,4',5-TetraCB (#81)              0.0001
3,3',4,4'-TetraCB (#77)             0.0001
3,3',4,4',5-PentaCB (#126)          0.1
3,3',4,4',5,5'-HexaCB (#169)        0.01
2,3,3',4,4'-PentaCB (#105)          0.0001
2,3,4,4',5-PentaCB (#114)           0.0005
2,3',4,4',5-PentaCB (#118)          0.0001
2',3,4,4',5-PentaCB (#123)          0.0001
2,3,3',4,4',5-HexaCB (#156)         0.0005
2,3,3',4,4',5'-HexaCB (#157)        0.0005
2,3',4,4',5,5'-HexaCB (#167)        0.00001
2,3,3',4,4',5,5'-HeptaCB (#189)     0.0001

*Data from the report of an expert meeting (1997) at the World Health Organization. Source: Van den Berg M, Birnbaum L, Bosveld ATC, et al. Toxic equivalency factors (TEFs) for PCBs, PCDDs, PCDFs for humans and wildlife. Environ Health Perspect. 1998;106:775–92.
draft document is considering a change from 0.006 to 0.01 pg/kg BW/day of TEQ as a cancer reference dose. Some European countries and Japan use values between 1 and 10 pg/kg BW/day of TEQ as their reference value or tolerable daily intake (TDI). These different values are all based on review of the same published animal and human literature and each involves certain assumptions and safety factor considerations including extrapolation between animal species and from animals to humans. From a public health perspective, however, it is noteworthy that the U.S. daily intake of Dioxins, especially in the presumably more sensitive nursing infant, exceeds reference values.13–15 The PCBs, which, unlike PCDDs and PCDFs, were deliberately manufactured, are also found in most countries as environmental contaminants in humans, wildlife, and environmental samples. They were used as electrical and thermal insulating fluids for electrical transformers and capacitors, as hydraulic fluids in carbonless copying paper, and in microscope oil. Higher levels are found in more industrialized countries. One of the most well-known PCB and PCDF contaminations is the rice oil poisoning (the 1968 Yusho incident) in Japan where PCBs and PCDFs contaminated rice oil used for cooking. We describe this incident in detail later because it clearly documented the human toxicity of dioxinlike chemicals as early as 1968. An almost identical incident, known as Yucheng, occurred in Taiwan in 1979. Dioxins became of concern because of a number of well-known incidents. One of the most well-known is the spraying of Agent Orange herbicide in Vietnam. Repeated spraying of concentrated solutions of 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), the latter contaminated with the most toxic Dioxin, 2,3,7,8-TCDD, over jungles and rice crops in the south of Vietnam between 1962 and 1971 during the Vietnam War has been a
concern to those exposed: the Vietnamese and U.S. Vietnam veterans. Jungles were sprayed to deprive enemy troops of cover, and crops were sprayed to deprive enemy troops and civilians of food. Areas around base camps as well as naval areas were sprayed for similar reasons. Elevated Dioxin levels have been found in fat tissue, blood, and milk decades afterward in Vietnamese exposed to Agent Orange and in some exposed American Vietnam veterans.16,17 The highest levels of Dioxins in breast milk ever measured were in Vietnamese women who were nursing during the spraying of Agent Orange. The half-life of elimination of 2,3,7,8-TCDD is believed to be between 7 and 11 years in humans. Vietnamese studies concerning adverse reproductive consequences and increases in cancers following potential exposure to Agent Orange are limited. Other well-known PCDD and PCDF incidents include the Seveso, Italy, explosion of 1976; Times Beach, Missouri; Love Canal, New York; the Binghamton State Office Building PCB transformer fire incident of 1981; the rice oil poisoning incidents in Japan (Yusho) and in Taiwan (Yucheng); the Coalite exposures in England; Nitro, West Virginia; several German industrial exposures; and the Ufa, Russia, exposures.1 Recent epidemiology studies from the United States, Europe, and Japan show increased rates of cancer in workers who were more highly exposed to dioxin-like chemicals and also in consumers of the contaminated rice oil. In addition, one study of German chemical workers exposed to PCDDs found an increase in mortality from ischemic cardiovascular disease as well as from cancer in the more highly exposed members of the cohort.18–22 Recent Dutch studies found reproductive and developmental alterations in children born to women in the general population with higher levels of TEQs. 
These latter examinations are among the first studies to document human health effects of Dioxins at levels found in the general population in industrial countries.4,6 The levels of Dioxins in the Dutch population are similar to, but slightly higher than, those found in the United States and other industrial countries. Rogan and coworkers have previously described developmental findings in North Carolina children born to women in the general population with higher levels of PCBs. They have also described more striking and persistent findings in children whose mothers had high levels of PCBs and PCDFs from the Taiwan Yucheng rice oil poisoning.23–27 Recent research has documented the discovery of a Dioxin receptor in the cytoplasm of human as well as other mammalian cells. The Dioxins, which appear not to be directly genotoxic but which can initiate or promote cancer, as shown in all animal studies investigating Dioxins and cancer, bind to the aryl hydrocarbon (Ah) receptor in the cytoplasm. The complex then moves into the nucleus. The exact mechanisms by which the many adverse health outcomes are produced are not known.28 To illustrate the human health consequences of PCDF exposure, we review the Yusho incident, which has provided a substantial amount of public health and medical information.10 Yusho, which means "oil disease" in Japanese, occurred in western Japan in 1968. This poisoning was caused by ingestion of commercial rice oil (used for home cooking) that had been contaminated with PCBs, PCDFs, polychlorinated quaterphenyls (PCQs), and a very small amount of PCDDs. About 2000 people became ill and sought medical care. The marked increase of PCDFs in the rice oil is believed to have occurred in the following way. Although PCBs are usually contaminated with only small amounts of PCDFs, the commercial PCBs used as a heat-transfer medium for deodorizing the rice oil were heated above 200°C, and the PCBs were gradually converted into PCDFs and PCQs. 
The PCBs with increased PCDF concentration leaked into the rice oil through holes formed in a heating pipe because of inadequate welding.29 Yusho patients ingested more than 40 different PCDF congeners in the rice oil, but only a small number of PCDF congeners persisted in their tissues. High concentrations of 2,3,4,7,8-pentachlorodibenzofuran (2,3,4,7,8-pentaCDF), up to 7 ppb, were observed in tissue samples in 1969, a year after the incident.30 Although the levels of PCDF congeners declined significantly, they remained elevated for a substantial period of time. In 1986, the levels of PCDF congeners were observed to be up to 40 times higher than those of
the general population, and at the present time they are still elevated. PCDF concentrations in the liver were almost as high as those in adipose tissue, but PCB concentrations were much lower in the liver than in the adipose tissue, so partitioning was not simply a passive process. In calculating the toxic contribution of PCDDs, PCDFs, and PCBs in a Yusho patient using the TEFs, 2,3,4,7,8-pentaCDF was found to have accounted for most of the dioxin-like toxicity (TEQ) in the liver and adipose tissue of patients. The toxicity of individual congeners of PCDFs and PCBs was compared to 2,3,7,8-TCDD toxicity by the use of the TEFs. Total TEQ in the rice oil was calculated to be 0.98 ppm, of which 91% was from PCDFs, 8% from PCBs, and 1% from PCDDs. Thus, more than 90% of the dioxin-like toxicity in Yusho was considered to have originated from PCDFs rather than the more plentiful PCBs. Therefore, at the present time, Yusho is considered to have been caused primarily by ingestion of PCDFs.29 On average, the total amounts of PCBs, PCQs, and PCDFs consumed by the 141 Yusho patients surveyed were 633, 596, and 3.4 mg, respectively. During the latent period, the time between first ingestion of the oil and onset of illness, the average total amounts consumed were 466, 439, and 2.5 mg of PCBs, PCQs, and PCDFs, respectively. The smallest amounts consumed that caused Yusho were 111, 105, and 0.6 mg of PCBs, PCQs, and PCDFs, respectively. In Yusho, it took on average about 3 months for clinical effects to be readily detected. Most patients were affected within the 9-month period from February 1968, when the contaminated rice oil was shipped to the market by the Kanemi rice oil producing company, to October 1968, when the epidemic of Yusho was reported to the public. Prominent signs and symptoms of Yusho are summarized in Table 29-2.
Pigmentation of nails, skin, and mucous membranes; distinctive follicles; acneiform eruptions; increased eye discharge; and increased sweating of the palms were frequently noted. Common symptoms included pruritus and a feeling of weakness or fatigue.31 The most notable initial signs of Yusho were dermal lesions such as follicular keratosis, dry skin, marked enlargement and elevation of the follicular orifices, comedo formation, and acneiform eruption.32 Acneiform eruptions developed on the face, cheeks, jaw, back, axillae, trunk, external genitalia, and elsewhere (Fig. 29-1). Dark pigmentation of the corneal limbus, conjunctivae, gingivae, lips, oral mucosa, and nails was a specific finding of Yusho. The severity of the dermal lesions was proportional to the concentrations of PCBs and PCDFs in the blood and adipose tissue. The skin symptoms diminished gradually in the 10 years after onset, probably related to the decreasing PCDF concentrations in the body, while continual subcutaneous cyst formation with secondary infection persisted in a relatively small number of the most severely affected patients. The most prominent ocular signs immediately after onset were hypersecretion of the meibomian glands and abnormal pigmentation of the conjunctiva. Cystic swelling of the meibomian glands filled with yellow infarctlike contents was observed in typical cases33 (Fig. 29-2). These signs markedly subsided in the 10 years after the onset of Yusho. Eye discharge was a persistent complaint in many patients. A brownish pigmentation of the oral mucosa was one of the characteristic signs of Yusho. Pigmentation of the gingivae and lips was observed in many victims during 1968 and 1969. This pigmentation persisted for a considerable period of time and was still observed in most patients in 1982. Radiographic examination of the mouth of
TABLE 29-2. PERCENT DISTRIBUTION OF SIGNS AND SYMPTOMS OF YUSHO PATIENTS EXAMINED BEFORE OCTOBER 31, 1968

Symptoms                                      Males (N = 98)   Females (N = 100)
Increased eye discharge                            88.8              83.0
Acnelike skin eruptions                            87.6              82.0
Dark brown pigmentation of nails                   83.1              75.0
Pigmentation of skin                               75.3              72.0
Swelling of upper eyelids                          71.9              74.0
Hyperemia of conjunctiva                           70.8              71.0
Distinctive hair follicles                         64.0              56.0
Feeling of weakness                                58.4              52.0
Transient visual disturbance                       56.2              55.0
Pigmented mucous membrane                          56.2              47.0
Increased sweating of palms                        50.6              55.0
Itching                                            42.7              52.0
Numbness in limbs                                  32.6              39.0
Headache                                           30.3              39.0
Stiffened soles in feet and palms of hands         24.7              29.0
Vomiting                                           23.6              28.0
Swelling of limbs                                  20.2              41.0
Red plaques on limbs                               20.2              16.0
Diarrhea                                           19.1              17.0
Hearing difficulties                               18.0              19.0
Fever                                              16.9              19.0
Jaundice                                           11.2              11.0
Spasm of limbs                                      7.9               8.0

Data from Professor Kuratsune. Source: Kuratsune M. Epidemiologic investigations of the cause of the "Strange disease". In: Kuratsune M, Yoshimura H, Hori Y, Okumura M, Masuda Y, eds. YUSHO: A Human Disaster Caused by PCBs and Related Compounds. Fukuoka, Japan: Kyushu University Press; 1996:26–37.
Figure 29-1. Acneiform eruption on the back of a Yusho patient (female, age 33, photographed in December, 1968). Photograph courtesy of Dr. Asahi. (Source: Adapted from Asahi M, Urabe H. A case of "Yusho"-like skin eruptions due to halogenated PCB-analogue compounds. Chemosphere. 1987;16:2069–72.)
Figure 29-2. The lower eyelid of a 64-year-old Yusho patient, 13 years after onset. White cheesy secretions were noted from the ducts of the meibomian glands when the eyelid was manually squeezed. Photograph courtesy of Dr. Ohnishi. (Source: Adapted from Ohnishi Y, Kohno T. Ophthalmological aspects of Yusho. In: Kuratsune M, Yoshimura H, Hori Y, Okumura M, Masuda Y, eds. YUSHO: A Human Disaster Caused by PCBs and Related Compounds. Fukuoka, Japan: Kyushu University Press; 1996:206–9.)
Yusho patients demonstrated anomalies in the number of teeth and in the shape of the roots, as well as marginal bone resorption at the roots. Irregular menstrual cycles were observed in 58% of female patients in 1970. This was not related to elevated tissue levels. Thyroid function was investigated in 1984, 16 years after onset. The serum triiodothyronine and thyroxine levels were significantly higher than those of the general population, while thyroid-stimulating hormone levels were normal. The serum bilirubin concentration in the
patients correlated inversely with the blood levels of PCBs and with the serum triglyceride concentration, which was characteristically increased in the poisoning. Marked elevation of serum triglycerides was one of the abnormal laboratory findings peculiar to Yusho in its early stages. A significant positive correlation was observed between serum triglyceride levels and blood PCB concentrations in 1973. Significantly elevated levels of triglycerides persisted in Yusho patients for 15–20 years after exposure to PCBs and PCDFs. From the follow-up data of three Yucheng patients and five Yusho patients,34 fat-based concentrations of TEQ and PCBs in the Yusho patients with severe-grade illness were estimated to have decreased from 40 ppb and 75 ppm, respectively, in 1969, to 0.6 ppb and 2.3 ppm, respectively, in 1999 (Fig. 29-3). Estimated median half-lives of three PCDFs and six PCBs were 3.0 and 4.6 years, respectively, in the first 15 years after the incident, and 5.4 and 14.6 years, respectively, in the following 15 years. The typical Yusho symptoms of acneiform eruption, dermal pigmentation, and increased eye discharge resolved very gradually over a period of about 10 years. However, enzyme- and/or hormone-mediated signs such as high serum triglycerides, high serum thyroxine, and immunoglobulin disorders have persisted for more than 30 years.35 Blood samples of 152 residents of Fukuoka, where several hundred Yusho patients live, were examined in 1999 for TEQ and PCB concentrations.36 Their mean levels were 28 pg/g lipid (range 9.2–100) and 0.4 µg/g lipid (range 0.06–1.7), respectively. Mean values of TEQ and PCBs in Yusho patients were only six and two times higher, respectively, than those in controls in 1999, as shown in Fig. 29-3. A statistically significant excess mortality was observed for malignant neoplasms of all sites. This was also the case for cancer of the liver in males. However, excess mortality for such cancer was not statistically significant in females. 
It is still too early to draw any firm conclusion from this mortality study. However, the Yusho rice oil poisoning incident was one of the first to demonstrate human health effects caused by the dioxinlike PCDFs and PCBs. These effects are somewhat similar to those noted in laboratory animals and wildlife from PCDD, PCDF, or PCB exposure.10
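The concentration declines quoted above can be related to the reported half-lives through the standard first-order elimination formula, t½ = t · ln 2 / ln(C0/Ct). The sketch below uses the 75 ppm (1969) and 2.3 ppm (1999) fat-based PCB figures from the text; treating the whole 30-year decline as a single exponential is a simplification, since elimination was reported to slow in the later years.

```python
import math

def apparent_half_life(c0, ct, years):
    """Apparent first-order half-life (in years) from two concentrations
    measured `years` apart: t_half = years * ln(2) / ln(c0 / ct)."""
    return years * math.log(2) / math.log(c0 / ct)

# Fat-based PCB concentration in severely affected Yusho patients:
# roughly 75 ppm in 1969, declining to about 2.3 ppm by 1999 (30 years).
t_half = apparent_half_life(75, 2.3, 30)
print(round(t_half, 1))  # about 6 years averaged over the whole period
```

The single-exponential average of roughly 6 years sits between the faster early-phase and slower late-phase estimates given in the text, which is exactly what a two-phase elimination would produce.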
[Figure 29-3 is a semilogarithmic plot of fat-based concentrations (ppt) against time since exposure (0–30 years), tracing the decline of total PCBs, 2,2',4,4',5,5'-HexaCB, 2,3,4,7,8-PentaCDF, and TEQ in Yusho patients, with estimated half-lives of 2.9, 7.7, and 4.5 years and with 1999 control levels shown for comparison.]
Figure 29-3. Estimated changes of PCB/TEQ concentrations in Yusho patients over the 30 years from 1969 to 1999. (Source: Adapted from Masuda Y. Fate of PCDF/PCB congeners and changes of clinical symptoms in patients with Yusho PCB poisoning for 30 years. Chemosphere. 2001;43:925–30; Masuda Y. Behavior and toxic effects of PCBs and PCDFs in Yusho patients for 35 years. J Dermatol Sci. 2005;1:511–20.)
Reduction of PCDDs, PCDFs, and related chemicals in the environment can be and has been addressed in a variety of ways. One way is preventing the manufacture of certain chemicals such as PCBs. Another is banning the use of certain phenoxyherbicides such as 2,4,5-T, which is contaminated with the most toxic Dioxin, 2,3,7,8-TCDD. Improving municipal, toxic waste, and hospital incinerators so that they produce less Dioxin is another approach, as is not burning certain chlorine-containing compounds, such as the very common polyvinyl chlorides. The use of unleaded gasoline avoids the chlorinated scavengers found in leaded gasoline, which may facilitate formation of Dioxins. Cigarette smoke contains a small amount of Dioxins. Cessation of smoking and provision of smoke-free workplaces, eating establishments, airports, and other public spaces help prevent Dioxin formation and exposure. In Europe, over the past decade, PCDD and PCDF levels appear to be declining in human tissue, including breast milk and blood. This decline coincides in time with regulations, and enforcement of regulations, designed to decrease PCDD and PCDF formation, especially with respect to incineration. Since intrauterine exposure cannot be prevented on an individual basis, and breast-feeding, which involves substantial Dioxin transfer to the child, is otherwise desirable, worldwide environmental regulations with strong enforcement are clearly indicated as a preventive public health measure.

REFERENCES
1. Schecter A, Gasiewicz TA, eds. Dioxins and Health. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc.; 2003. 2. Institute of Medicine. Veterans and Agent Orange: Health Effects of Herbicides Used in Vietnam. Washington, D.C.: National Academy Press; 1994. 3. Institute of Medicine. Veterans and Agent Orange: Update 1996. Washington, D.C.: National Academy Press; 1996. 4. Huisman M, Koopman-Esseboom C, Fidler V, et al. Perinatal exposure to polychlorinated biphenyls and dioxins and its effect on neonatal neurological development. Early Hum Dev. 1995;41:111–27. 5. Koppe JG, Pluim HJ, Olie K. Breast milk, dioxins and the possible effects on health of newborn infants. Sci Total Environ. 1991;106:33–41. 6. Koopman-Esseboom C, Morse DC, Weisglas-Kuperus N, et al. Effects of dioxins and polychlorinated biphenyls on thyroid hormone status of pregnant women and their infants. Pediatr Res. 1994;36:468–73. 7. Henriksen GL, Ketchum NS, Michalek JE, Swaby JA. Serum dioxins and diabetes mellitus in veterans of Operation Ranch Hand. Epidemiology. 1997;8(3):252–8. 8. Sweeney MH, Hornung RW, Wall DK, Fingerhut MA, Halperin WE. Prevalence of diabetes and elevated serum glucose levels in workers exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). Organohalogen Compds. 1992;10:225–6. 9. Mocarelli P, Brambilla P, Gerthoux PM. Change in sex ratio with exposure to dioxin. Lancet. 1996;348:409. 10. Kuratsune M, Yoshimura H, Hori Y, Okumura M, Masuda Y, eds. YUSHO: A Human Disaster Caused by PCBs and Related Compounds. Fukuoka, Japan: Kyushu University Press; 1996. 11. Van den Berg M, Birnbaum L, Bosveld ATC, et al. Toxic equivalency factors (TEFs) for PCBs, PCDDs, PCDFs for humans and wildlife. Environ Health Perspect. 1998;106:775–92. 12. Schecter A, Startin J, Wright C, et al. Congener-specific levels of dioxins and dibenzofurans in U.S. food and estimated daily dioxin toxic equivalent intake. Environ Health Perspect. 1994;102(11):962–66. 13. U.S. Environmental Protection Agency. 
Exposure Factors Handbook. EPA/600/8-89/043. Washington, DC: U.S. Environmental Protection Agency, Office of Health and Environmental Assessment; 1989. 14. U.S. Environmental Protection Agency. Estimating Exposure to Dioxin-Like Compounds (Review Draft). Washington, DC: U.S. Environmental Protection Agency, Office of Health and Environmental Assessment; 1994. 15. U.S. Environmental Protection Agency. Health Assessment Document for 2,3,7,8-Tetrachlorodibenzo-p-Dioxin (TCDD) and Related Compounds
(Review Draft). Washington, DC: U.S. Environmental Protection Agency, Office of Health and Environmental Assessment; 1994. 16. Schecter A, Dai LC, Thuy LTB, et al. Agent Orange and the Vietnamese: the persistence of elevated dioxin levels in human tissue. Am J Public Health. 1995;85(4):516–22. 17. Schecter A, McGee H, Stanley J, Boggess K, Brandt-Rauf P. Dioxins and dioxin-like chemicals in blood and semen of American Vietnam veterans from the state of Michigan. Am J Ind Med. 1996;30(6):647–54. 18. Fingerhut MA, Halperin WE, Marlow DA, et al. Cancer mortality in workers exposed to 2,3,7,8-tetrachlorodibenzo-p-dioxin. N Engl J Med. 1991;324:212–18. 19. Flesch-Janys D, Berger J, Gurn P, et al. Exposure to polychlorinated dioxins and furans (PCDD/F) and mortality in a cohort of workers from a herbicide-producing plant in Hamburg, Federal Republic of Germany. Am J Epidemiol. 1995;142(11):1165–75. 20. Manz A, Berger J, Dwyer JH, et al. Cancer mortality among workers in chemical plant contaminated with dioxin. Lancet. 1991;338:959–64. 21. Saracci R, Kogevinas M, Bertazzi PA, et al. Cancer mortality in workers exposed to chlorophenoxy herbicides and chlorophenols. Lancet. 1991;338:1027–32. 22. Zober A, Messerer P, Huber P. Thirty-four-year mortality follow-up of BASF employees exposed to 2,3,7,8-TCDD after the 1953 accident. Int Arch Occup Environ Health. 1990;62:139–57. 23. Rogan WJ, Gladen BC, McKinney JD, et al. Neonatal effects of transplacental exposure to PCBs and DDE. J Pediatr. 1986;109:335–41. 24. Rogan WJ, Gladen BC, McKinney JD, et al. Polychlorinated biphenyls (PCBs) and dichlorodiphenyl dichloroethene (DDE) in human milk: effects on growth, morbidity, and duration of lactation. Am J Public Health. 1987;77:1294–7. 25. Rogan WJ, Gladen BC. PCBs, DDE, and child development at 18 and 24 months. Ann Epidemiol. 1991;1:407–13. 26. Gladen BC, Rogan WJ. Effects of perinatal polychlorinated biphenyls and dichlorodiphenyl dichloroethene on later development. J Pediatr. 
1991;119:58–63. 27. Guo Y-L L, Yu M-L M, Hsu C-C. The Yucheng rice oil poisoning incident. In: Schecter A, Gasiewicz TA, eds. Dioxins and Health. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc; 2003:893–919. 28. Martinez JM, DeVito MJ, Birnbaum LS, Walker NJ. Toxicology of dioxins and related compounds. In: Schecter A, Gasiewicz TA, eds. Dioxins and Health. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc.; 2003:137–57. 29. Masuda Y. Causal agents of Yusho. In: Kuratsune M, Yoshimura H, Hori Y, Okumura M, Masuda Y, eds. YUSHO: A Human Disaster Caused by PCBs and Related Compounds. Fukuoka, Japan: Kyushu University Press; 1996:47–80. 30. Masuda Y. The Yusho rice oil poisoning incident. In: Schecter A, Gasiewicz TA, eds. Dioxins and Health. 2nd ed. Hoboken, NJ: John Wiley & Sons, Inc.; 2003:855–91. 31. Kuratsune M. Epidemiologic investigations of the cause of the “Strange disease”. In: Kuratsune M, Yoshimura H, Hori Y, Okumura M, Masuda Y, eds. YUSHO: A Human Disaster Caused by PCBs and Related Compounds. Fukuoka, Japan: Kyushu University Press; 1996:26–37. 32. Asahi M, Urabe H. A case of “Yusho”-like skin eruptions due to halogenated PCB-analogue compounds. Chemosphere. 1987;16:2069–72. 33. Ohnishi Y, Kohno T. Ophthalmological aspects of Yusho. In: Kuratsune M, Yoshimura H, Hori Y, Okumura M, Masuda Y, eds. YUSHO: A Human Disaster Caused by PCBs and Related Compounds. Fukuoka, Japan: Kyushu University Press; 1996:206–9. 34. Masuda Y. Fate of PCDF/PCB congeners and changes of clinical symptoms in patients with Yusho PCB poisoning for 30 years. Chemosphere. 2001;43:925–30. 35. Masuda Y. Behavior and toxic effects of PCBs and PCDFs in Yusho patients for 35 years. J Dermatol Sci. 2005;1:511–20. 36. Masuda Y, Haraguchi K, Kono S, Tsuji H, Päpke O. Concentrations of dioxins and related compounds in the blood of Fukuoka residents. Chemosphere. 2005;58:329–44.
Brominated Flame Retardants
30
Daniele F. Staskal • Linda S. Birnbaum
INTRODUCTION
The incidence of fire-related injuries, deaths, and economic damage has decreased over the past 25 years, partly because of fire-prevention policies requiring flame-retardant chemicals in many industrial products. Brominated flame retardants (BFRs) have routinely been added to consumer products for several decades to reduce fire-related incidents. They represent a major industry involving high-production chemicals with a wide variety of uses, yet not all BFRs are alike; often the only thing they have in common is the presence of bromine. Concern about this emerging class of chemicals has been raised following a rapid increase of levels in the environment, wildlife, and people, in combination with reports of developmental toxicity, reproductive toxicity, neurotoxicity, and endocrine disruption. Despite these concerns, little information is available on their sources, environmental behavior, and toxicity. Because of this limited knowledge, few risk assessments have been completed.

PRODUCTION AND USE
More than 175 different types of flame retardants are commercially available and can be generally divided into classes that include halogenated organic (usually brominated or chlorinated), phosphorus- or nitrogen-containing, and inorganic flame retardants. The BFRs are currently the largest market group because of their low cost and high efficiency. Some, such as the polybrominated biphenyls (PBB), are no longer being produced because of recognized toxicity and accidental poisoning.1 “Tris-BP” was also removed from the market after its original use as a flame retardant on children’s clothing because it was shown to have mutagenic and nephrotoxic effects.2 Over 75 BFRs are recognized; however, five BFRs constitute the overwhelming majority of BFR production. Tetrabromobisphenol A (TBBPA), hexabromocyclododecane (HBCD), and three commercial mixtures of polybrominated diphenyl ethers, or biphenyl oxides, known as decabromodiphenyl ether (DecaBDE), octabromodiphenyl ether (OctaBDE), and pentabromodiphenyl ether (PentaBDE), are used as additive or reactive components in a variety of polymers. The spectrum of final applications is very broad, but includes domestic and industrial equipment such as TVs, mobile phones, computers,
Note: The information in this document has been subjected to review by the National Health and Environmental Effects Research Laboratory, U.S. Environmental Protection Agency, and approved for publication. Approval does not signify that the contents reflect the views of the Agency, nor does mention of trade names or commercial products constitute endorsement or recommendation for use. Partial funding provided by the NHEERL-DESE Training in Environmental Sciences Research, EPA CT 826513.
furniture, insulation boards, carpet padding, mattresses, and upholstered textiles. About 90% of electrical and electronic appliances contain BFRs. Information on global production and usage of BFRs is supplied by the Bromine Science and Environmental Forum.3
ENVIRONMENTAL PREVALENCE
Global environmental studies indicate that these chemicals are ubiquitous in sediment and biota and undergo long-range transport.4,5 All of the major BFRs (PBDEs, HBCD, and TBBPA) have been documented in air, sewage sludge, sediment, invertebrates, birds, and mammals (including humans). Environmental trends show that levels are increasing and that the specific congener patterns found in biota often do not mimic what is used in commercial products. This suggests breakdown or transformation of the flame retardant products during manufacture, use, disposal, or biomagnification in the food web. Full documentation and specific concentrations in the various media can be found in special issues of the journals Chemosphere4 and Environment International.5
HEALTH EFFECTS
No known health effects have been reported in humans following exposure to BFRs currently in production; however, no investigative studies have been conducted. PBB and Tris-BP, two BFRs with known human health effects, are no longer produced. Proposed health effects of BFRs are based on fish and mammalian toxicity data primarily available for the five major BFRs. Thorough reviews and extended references for toxicological studies can be found in the provided references.1,2,4–9 PBDEs. There are 209 potential PBDE congeners, of which approximately 25 are found in commercial mixtures ranging from trisubstituted up to the fully brominated deca-congener. The lower brominated congeners tend to be well absorbed following oral ingestion, are not well metabolized, and primarily distribute to lipophilic tissues in the body; they therefore appear to have a long half-life in humans (>2 years). These also appear to be the most toxic congeners. Both the technical PBDE products and individual congeners can induce phase I and phase II detoxification enzymes in the liver. Several of the individual congeners have been tested in a variety of developmental neurotoxicity studies in rodents. Mice dosed during critical windows of development demonstrate effects on learning and memory that extend into adulthood. Rodents have also been exposed to PBDEs using a standardized protocol that detects endocrine disruption during puberty, and the results demonstrate that both male and female rats are sensitive to their effects.10 The most consistently reported
Copyright © 2008 by The McGraw-Hill Companies, Inc. Click here for terms of use.
effect following exposure to PBDEs in animal studies is a decrease in circulating thyroid hormones. This could be particularly harmful during development, as small changes in these essential hormones have been associated with cognitive deficits in children. DecaBDE is the only BFR that has been extensively studied for cancer effects.6 The 2-year bioassays concluded that there was some evidence of carcinogenicity in rodents, demonstrated by an increased incidence of hepatocellular and thyroid gland follicular cell adenomas or carcinomas. TBBPA. Rodent studies indicate that TBBPA is not acutely toxic and has a low rate of absorption paired with a high rate of metabolism; long-term exposure data are unavailable. The majority of adverse effects of TBBPA have been found in vitro, demonstrated by damage to hepatocytes, immunotoxicity in culture, and neurotoxicity in cerebellar granule cells. Disruption of thyroid homeostasis appears to be the primary toxic effect in rodent studies, further adding evidence to the endocrine disruption potential of the BFRs. HBCD. Toxicity data for HBCD are extremely limited; however, a handful of studies have shown effects on circulating thyroid hormones as well as developmental neurotoxicity following a single neonatal exposure.
HUMAN EXPOSURE
Environmental sources of TBBPA, HBCD, and PBDEs have not been isolated, but are believed to include leaching from a wide range of final consumer applications (e.g., plastics and foam). These chemicals have all been detected in air, water, soil, and food. Body burdens (blood, adipose, and breast milk) have also been established, indicating that most people have low-level exposures. While it is generally assumed that the major route of exposure for adult humans is through dietary intake, primarily through foods of animal origin, there is increasing evidence that indoor dust and indoor air may also play major roles. Nursing infants are believed to receive the highest daily exposure because breast milk may have relatively high concentrations of these chemicals, a concerning trend since these chemicals appear to be most toxic to developing systems.
REGULATIONS
Regulations vary among countries; some areas, such as Europe, banned the use of some PBDEs in mid-2004. There are currently no federal regulations in the United States; however, individual states have legislation banning or restricting the use of some of the mixtures. The sole U.S. producer of the PentaBDE and OctaBDE mixtures voluntarily phased out production at the end of 2004. DecaBDE, HBCD, and TBBPA are part of the High Production Volume (HPV) initiative through the International Council of Chemical Associations, in which the chemical industry will provide data, hazard assessments, and production information for these chemicals. Up-to-date information on regulatory action can be found on the Bromine Science and Environmental Forum website.3
REFERENCES
1. Agency for Toxic Substances and Disease Registry. Toxicological Profile: Polybrominated Biphenyls and Polybrominated Diphenyl Ethers (PBBs and PBDEs). September 2004. Report number: PB2004-107334. 2. Birnbaum LS, Staskal DF. Brominated flame retardants: cause for concern? Environ Health Perspect. 2004;112:9–17. 3. Bromine Science and Environmental Forum. Available: www.bsef.com. Accessed May 2005. 4. Brominated Flame-Retardants in the Environment. Chemosphere. 2002;46:5. 5. State-of-Science and Trends of BFRs in the Environment. Environment International. 2003;29:6. 6. National Toxicology Program. Toxicology and Carcinogenesis Studies of Decabromodiphenyl Oxide (CAS No. 1163-19-5) in F344/N Rats and B6C3F1 Mice. May 1986. Report: TR-309. 7. U.S. Environmental Protection Agency. Integrated Risk Information System: Deca-, Octa-, Penta-, and Tetrabromodiphenyl Ethers. Available: www.epa.gov/iris/. Accessed May 2005. 8. WHO/IPCS. Environmental Health Criteria 162: Brominated Diphenyl Ethers; 1994. 9. WHO/IPCS. Environmental Health Criteria 172: Tetrabromobisphenol A and Derivatives; 1994. 10. Stoker TE, Laws SC, Crofton KM, et al. Assessment of DE-71, a commercial polybrominated diphenyl ether (PBDE) mixture in the EDSP male and female pubertal protocols. Toxicol Sci. 2004;78:6144–55.
Multiple Chemical Sensitivities
31
Mark R. Cullen
INTRODUCTION
During the 1980s a curious clinical syndrome emerged in occupational and environmental health practice, characterized by apparent intolerance to low levels of man-made chemicals and odors. Although still lacking a widely agreed-upon definition or necessarily permanent designation,1 the disorder idiosyncratically occurs in individuals who have experienced single or recurring episodes of a typical chemical intoxication or injury, such as solvent or pesticide poisoning or reaction to poor indoor air quality. Subsequently, an expansive array of divergent environmental contaminants in air, food, or water may elicit a wide range of symptoms at doses far below those which typically produce toxic reactions. Although these symptoms are not associated with objective impairment of the organs to which they are referable, the complaints may be impressive and cause considerable dysfunction and disability for the sufferer. Although such reactions to chemicals are doubtless not new, there is an unmistakable impression that multiple chemical sensitivities, or MCS as the syndrome is now most frequently called*, is occurring and presenting to medical attention far more commonly than in the past. Although no longitudinal data are available, it has become prevalent enough to have attracted its own group of specialists—clinical ecologists or environmental physicians—and substantial public controversy. Unfortunately, despite widespread debate over who should treat patients suffering with the disorder and who should pay for it, research has progressed only modestly in the last two decades. Neither the cause(s), pathogenesis, optimal treatment, nor strategies for prevention have been adequately elucidated. This sorry state of affairs notwithstanding, MCS is clearly occurring and causing significant morbidity in the workforce and general populations.
It is the goal of the sections which follow to describe what has been learned about the disorder in the hope of improving recognition and management in the face of uncertainty and stimulating further constructive scientific engagement of this timely problem.
Definition and Diagnosis Although, as noted, there has yet to be general consensus on a single definition of MCS, certain features can be described which allow differentiation from other well-characterized entities.2 These include:
1. Symptoms appear to begin after the occurrence of a more typical occupational or environmental disease such as an intoxication or chemical insult. This ‘initiating’ problem may be one episode, such as a smoke inhalation, or repeated, as in solvent intoxication. Often the preceding events are mild and may blur almost imperceptibly into the syndrome which follows.
2. Symptoms, often initially very similar to those of the initiating illness, begin to occur after reexposures to lower levels of the same or related compounds, in environments previously well tolerated, such as the home, stores, etc.
3. Generalization of symptoms occurs such that multiple organ-system complaints are involved. Invariably these include symptoms referable to the central nervous system such as fatigue, confusion, headache, etc.
4. Generalization of precipitants occurs such that low levels of chemically diverse agents become capable of eliciting the responses, often at levels orders of magnitude below accepted TLVs or guidelines.
5. Work-up of complaints fails to reveal impairment of organs which would explain the pattern or intensity of complaints.
6. Absence of psychosis or systemic illness which might explain the multiorgan symptoms.
Note: *The term Idiopathic Environmental Intolerance has recently been introduced by some investigators.
While not every patient will fit this description in its entirety, it is very important to consider each point before “labeling” a patient with MCS or including them in any study population. Each of the criteria serves to rule out other disorders with which MCS may be confused: panic or a related somatization disorder, classic sensitization to environmental antigens (e.g., occupational asthma), pathologic sequelae of organ system damage (e.g., reactive airways dysfunction syndrome after a toxic inhalation), or a masquerading systemic disease (e.g., cancer with paraneoplastic phenomena). On the other hand, it is important to recognize that MCS is not a diagnosis of exclusion, nor should exhaustive and therapeutically disruptive (see below) tests be required in most cases. While many variations will be encountered, MCS has a quite unmistakable character which should allow prompt recognition in skilled hands. In practice the most difficult diagnostic problems with MCS fall into two categories. The first occurs with patients early in their course, in whom it is often challenging to separate MCS from the more clear-cut occupational or environmental health problem that usually precedes it. For example, patients who have experienced untoward reactions around organic solvents may find that their reactions persist even when they have been removed from high exposure areas or after these exposures have been abated; clinicians may assume that high exposures which could be remedied are still occurring and pay direct attention to that, an admirable but unhelpful error. This is especially troublesome in the office setting, where MCS may be seen as a complication of nonspecific building-related illness (NSBRI). Whereas the office worker with NSBRI typically responds promptly to steps which improve indoor air quality, a patient who has acquired MCS may continue to experience symptoms despite the far lower exposures involved.
Again, attempts to further improve the air quality may be frustrating to patient and employer alike.
Later in the disorder, confusion often is created by patient reactions to chronic illness. The MCS patient who has been symptomatic for many months is often depressed and anxious, as are many medical patients with chronic diseases to which they have not adapted. This may lead to a focus exclusively on psychiatric aspects, in which the chemically stimulated symptoms are viewed as a component. Without diminishing the importance of recognizing and treating these complications of MCS or the evidence that MCS itself has psychological origins, the underlying symptomatic responses to chemical exposures, and the belief system they engender, must be recognized to facilitate appropriate management. Focusing exclusively on psychological aspects while ignoring the patient’s perception of his or her illness is therapeutically counterproductive.
Pathogenesis The sequence of events which leads in some individuals from a self-limited episode or episodes of occupational or environmental illness to the development of potentially disabling symptomatic responses to very low levels of ubiquitous chemicals is presently unknown. Several theories have been offered, including the following: 1. The clinical ecologists and their adherents initially attributed the illness to immune dysfunction caused by excessive cumulative burden of xenobiotic material in susceptible hosts.3,4 According to this view, contributing factors may include relative or absolute nutritional deficiencies (e.g., vitamins, antioxidants, essential fatty acids, etc.), the presence of subclinical infections such as candida or other yeasts, or other life stresses. In this view, the role of the “initiating” illness is important only insofar as it may contribute heavily to this overload. 2. Critics of clinical ecology have invoked a primarily psychological view of the disorder, characterizing it in the spectrum of somatoform illnesses.5,6 Variations of this view include the concept that MCS is a variant of classic posttraumatic stress disorder or a conditioned (“nocebo”) response to an unpleasant experience. In these views, the initiating illness plays an obviously more central role in the pathogenesis of the disorder. Host factors may also be important, especially the predisposition to somaticize. 3. More recently, several theories have emerged which invoke a synthesis of biologic and neuropsychologic mechanisms. Central in these theories is the role of altered chemoreception of odor and irritation stimuli in the nose,7 resulting in altered CNS responses to otherwise minimally noxious stimuli.
A model of sensitization or “kindling” of limbic pathways, analogous to mechanisms postulated to explain drug addictions and other CNS adaptations, has also been proposed.8 The rich network of neural connections between the nasal epithelium and the CNS provides an intriguing theoretical basis for these hypotheses. Unfortunately, despite considerable literature generated on the subject, little compelling clinical or experimental science has been published to conclusively prove any of these views. Limitations of published clinical studies include failure to rigorously define the population on which tests have been performed and problems with identifying appropriately matched groups of referent subjects for comparison. Neither subjects of research nor observers have been blind to subjects’ status or research hypotheses. In the end, much of the published data must be characterized as anecdotal. More problematic still, legitimate debate over the etiologic basis of the disorder has been heavily clouded by dogma. Since major economic decisions may hinge on the terms in which an individual case or cases generally are viewed (e.g., patient benefit entitlements, physician reimbursement acceptance, etc.), many patients as well as their physicians may have very strong views of the illness, which have inhibited scientific progress as well as patient care. It is essential to an understanding of MCS itself that the above theories are extant and
often well known to patients, who may have very strong views themselves. As such, MCS differs markedly from other environmentally related disorders, like progressive massive fibrosis in miners, in which uncertainty about pathogenesis has not interfered with efforts to study the problem or manage its victims. That notwithstanding, recent reports have shed light on several of these possibilities. Evidence is mounting that the immunologic manifestations earlier reported are spurious; controlled analyses have failed to show consistent patterns of difference in a wide range of immune functions.9 While our rapidly expanding knowledge of immunology implies that differences may emerge based on future science, for now this theoretical consideration seems least relevant to MCS pathogenesis. On the other hand, studies of the physiologic and psychophysical responses of nasal epithelium in affected subjects suggest this “organ”—viewed as the upper respiratory epithelia and their neural connections in the CNS—as a more reasonable candidate for the locus of injury or abnormal response. Regarding psychological theories, delineation of the limitations of previous studies10 and newer contributions11 speak to the high likelihood that psychological phenomena are at least involved in, if not central to, the pathogenesis of MCS.
Epidemiology Several population-based studies have appeared since the last edition, enhancing our knowledge of the responses of large populations to low-level chemical exposures and of clinical MCS. Kreutzer et al, surveying a representative group of Californians, reported a cross-sectional rate of self-reported MCS of almost 6%, many of whom reported having been diagnosed by a physician.12 Using a different instrument, Meggs found slightly under 4% claimed to be chemically sensitive in North Carolina.13 In a survey of military and reserve personnel from Iowa active during the Persian Gulf Conflict of 1990–91, Black and colleagues reported over 5% of returning veterans met stringent questionnaire criteria for MCS; 2.6% of those Iowan military who did not serve in the war area also met the case definition.14 Some patterns are apparent from these and other sources.15 Compared to other occupational disorders, women are affected more than men. MCS appears to occur more commonly in midlife (especially the fourth and fifth decades), although no age group appears exempt from risk. While previous clinical reports had suggested that the economically disadvantaged and nonwhites were underrepresented, population-based data suggest that neither SES nor race/ethnicity is an important predictor. Neither classic allergic manifestations nor any familial factor has proved important to date. In addition to these demographic features, some insights may be gleaned about the settings in which the illness occurs. Although many cases develop after nonoccupational exposures, e.g., in cars, homes, etc., several groups of chemicals appear to account overwhelmingly for the majority of initiating events—organic solvents, pesticides, and respiratory irritants. While this may be a function of the broad usage of these materials in our workplaces and general environment, the impression is that they are overrepresented.
The other special setting in which many cases occur is the so-called tight building, with victims of NSBRI occasionally evolving into classic MCS. Although the two illnesses have a great deal in common, their epidemiologic features readily distinguish them. NSBRI typically affects most individuals sharing a common (“sick”) environment and responds characteristically to environmental improvement; MCS occurs in isolation and does not abruptly respond to quantitative modifications of the environment. A final issue of considerable interest is whether MCS is, in fact, a truly new disorder or whether it has only recently come to attention because of widespread interest in the environment as a source of human disease. Views on this are split, largely along the same lines as opinion regarding the pathogenesis of the disorder. Those who suspect a primarily biologic role for environmental agents, including the clinical ecologists, would argue that MCS is uniquely a twentieth-century phenomenon with rapidly rising incidence because of increased
chemical contamination of the environment.16 Contrarily, those who invoke primarily psychological mechanisms have argued that only the societal context of the disease is in any sense new. According to this view, the social perception of the environment as a hostile agent has resulted in the evolution of new symbolic content to the age-old problem of psychosomatic disease, changing the perception of patient and doctor but not the fundamental disease mechanism.17,18
Natural History Although MCS has yet to be subjected to careful clinical study sufficient to delineate its course or outcome, anecdotal experience with large numbers of patients has shed some preliminary light on this issue, which may be of great importance in appropriate management. Based on this information, the general pattern of illness appears to be one of initial progression as the process of generalization evolves, followed by cyclical periods of amelioration and exacerbation. While these cycles are generally perceived by the patient to be related to improvement or contamination of his or her environment, the pattern seems to have some life of its own as well, although the basis for it is far from clear. Once the disorder is established, there is a tendency for more chronic symptoms to supervene as well, with less obvious temporal relationship to exposures. The two most typical patterns are fatigue—many patients meet clinical criteria for chronic fatigue syndrome—and muscle pain, clinically indistinguishable from fibromyalgia in many cases.19 The overlap among the three disorders, both clinically and epidemiologically, has encouraged the thinking that they may share a common final pathway or even pathogenesis, but this has not been proved. This disease history has two important ramifications. First, other than during the early stages in which the process initially emerges, there is little evidence to suggest that the disease is in any sense progressive.15 Patients do not tend to deteriorate from year to year, nor have obvious complications such as infections or organ system failure resulted. There is no evidence of mortality from MCS, although many patients become convinced that progression and death are inevitable based on the profound change in perception of health which the disorder engenders.
While this observation may provide the basis for a sanguine prognosis and reassurance, it has been equally clear from described clinical experience that true remission of symptoms is also rare. While various good outcomes have been described, these are usually premised on improved patient function and sense of well-being, rather than reduced reactivity to environmental stimuli. The underlying tendency to react adversely to chemical exposures continues, although symptoms may become sufficiently tolerable to allow return to a near-normal lifestyle. In sum, MCS would appear to be a disorder with well-defined upper and lower bounds in outcome. While neither limit has been confirmed by large well-characterized series, it is probably not premature to include this assumption in planning treatment and assisting in vocational rehabilitation.
Clinical Management Very little is known about treatment of MCS. A vast array of modalities has been proposed and tried, but none has been subjected to the usual scientific standard for determining efficacy: a controlled clinical trial. As with other aspects, theories of treatment follow closely the theories of pathogenesis. Clinical ecologists, convinced that MCS represents immune dysfunction caused by excessive body burdens of xenobiotics, focus much of their attention on reducing burden by strict avoidance of chemicals; some have advocated extreme steps resulting in complete alterations in patient lifestyle. This approach is often accompanied by efforts to determine “specific” sensitivities by various forms of skin and blood testing—none as yet validated by acceptable standards—and by therapies akin to desensitization with a goal of inducing “tolerance.” Coupled with this are a variety of strategies to bolster underlying immunity with dietary supplementation and other metabolic supports. A most radical approach involves efforts to eliminate toxins from the body by chelation or accelerated turnover of fat (where some toxicants are stored).
Those inclined to a more psychological view of the disorder have explored alternative approaches consistent with their theories. Supportive individual or group therapies and more classic behavioral methods have been described.20 However, as with the more biological theories, the efficacy of these approaches remains anecdotal. Although none of these modalities is likely to be directly dangerous, limitations to present knowledge would suggest that they would best be reserved for settings in which well-controlled trials are being undertaken. In the meantime, certain treatment principles have emerged which can be justified based on present knowledge and experience. These include: 1. Taking steps to limit to the extent possible the search for the mysterious “cause” of the disease is an important first aspect of treatment. Many patients will have had considerable work-up by the time MCS is considered and will equate, not irrationally, extensive testing with extensive pathology. Uncertainty feeds this cycle as well as the patients’ common underlying fear that they have been irrevocably poisoned. 2. Whatever the theoretical proclivity of the clinician, it is crucial that the existing knowledge and uncertainty about MCS be explained to the patient, including specifically that the cause is unknown. The patient must be reassured that the possibility of a psychological basis does not make the illness less real, less serious, or less worthy of treatment. Reassurance that the disease will not lead inexorably to death, as many patients imagine, is also valuable, coupled with caution that with current knowledge cure is an unrealistic treatment objective. 3. Steps to remove the patient from the most obviously offensive aspects of their environment are almost always necessary, especially if the patient still lives or works in the same environment where the initiating illness occurred. 
While radical avoidance is probably counterproductive given the goal of improving function, protection from daily misery is important for establishing a strong therapeutic relationship which the patient needs. In general, this requires some vocational change which will also require attention to sufficient benefits to make this choice viable for the patient. For cases which occur as a consequence of an occupational illness, however mild, workers’ compensation may be available; most jurisdictions do not require detailed understanding of disease pathogenesis but can be invoked viewing MCS as a complication of a disorder which is accepted by local convention as work related. 4. Having established this foundation of support, subsequent therapy should be targeted at improved function. Obviously psychological problems, like adjustment difficulties, anxiety or depression, should be treated aggressively, as should coexistent pathology like atopic manifestations. Unfortunately, since these patients do not tolerate chemicals readily, nonpharmacologic approaches may be necessary. Beyond these measures, patients need direction, counseling, and reassurance in order to begin the challenging process of adjusting to an illness without established treatment. To the extent consistent with tolerable symptoms, patients should be encouraged to expand the range of their activities and should be discouraged from passivity, dependence, or resignation which intermittently recur throughout the course of the illness. It is worth emphasizing that there are no data to suggest, let alone prove, that intermittent chemical exposures capable of inducing transient symptoms otherwise adversely modify the future course of the illness. 5. Although it is appropriate to provide patients with all available factual information about MCS as well as fairly representing the view of the clinician, it must be recognized that many patients will get desperate and will try available alternative treatment modalities, sometimes several at once or in a sequence. It is probably not reasonable to strongly resist such efforts or to undermine a therapeutic relationship on this account but rather to hold steadily to a single coherent perspective, treating such “treatments” as yet another troublesome aspect of a troublesome condition.
Prevention It goes without saying that primary prevention cannot be seriously considered, given present knowledge of the pathogenesis of the disorder or the host factors which render certain individuals susceptible to it. At this time, the most reasonable approach is to reduce the opportunities in the workplace and ambient environment for the kinds of acute exposures which would appear to precipitate MCS in some hosts, especially to solvents and pesticides. Reduction in the proportion of poorly ventilated offices would also appear likely to help. Secondary prevention would appear to offer some greater control opportunity, although no intervention has been studied. On the possibility that psychological factors may play a role in victims of environmental mishaps, careful early management of individuals exposed to toxic substances would seem advisable, even if the exposure was relatively trivial and the prognosis from a biologic perspective is good. For example, patients seen in clinics or emergency rooms after acute exposures should have some exploration of their reactions to the events and should probably receive close follow-up where undue fears of long-term effects or recurrence are expressed. Equally important, efforts must be made on behalf of such patients to ensure that preventable recurrences do not occur, since this may be an important pathway leading to MCS by whichever mechanism is truly responsible.
REFERENCES
1. Cullen MR. The worker with multiple chemical sensitivities: an overview. Occup Med. 1987;2:655–61. 2. Kreutzer R. Idiopathic environmental intolerance. Occup Med. 2000; 15:511–8. 3. Levine AS, Byers VS. Multiple chemical sensitivities: a practicing clinician’s point of view: clinical and immunologic research findings. Toxicol Health. 1992;8:95–109. 4. Dietert RR, Hedge A. Chemical sensitivity and the immune system: a paradigm to approach potential immune involvement. Neurotoxicology. 1998;19:253–7. 5. Brodsky CM. Psychological factors contributing to somatoform diseases attributed to the workplace. The case of intoxication. J Occup Med. 1983;25:459–64. 6. Gothe CJ, Molin C, Nilsson CG. The environmental somatization syndrome. Psychosomatics. 1995;36:1–11. 7. Meggs WJ, Cleveland CH. Rhinolaryngoscopic examination of patients with the multiple chemical sensitivity syndrome. Arch Environ Health. 1993;48:14–8.
8. Bell IR, Miller CS, Schwartz GE. An olfactory-limbic model of multiple chemical sensitivity syndrome: possible relationships to kindling and affective spectrum disorders. Biol Psychiatry. 1992;32: 218–42. 9. Mitchell CS, Donnay A, Hoover DR, Margolick JB. Immunologic parameters of multiple chemical sensitivity. Occup Med. 2000;15: 647–65. 10. Brown-DeGagne A-M, McGlone J, Santor DA. Somatic complaints disproportionally contribute to Beck Depression inventory estimates of depression severity in individuals with multiple chemical sensitivity. J Occup Environment Med. 1998;40: 862–9. 11. Black DW. The relationship of mental disorders and idiopathic environmental intolerance. Occup Med. 2000;15:557–70. 12. Kreutzer R, Neutra RR, Lashuay N. Prevalence of people reporting sensitivities to chemicals in a population based survey. Am J Epid. 1999;150:1–12. 13. Meggs WJ, Dunn KA, Bloch RM, Goodman PE, Davidoff AL. Prevalence and nature of allergy and chemical sensitivity in a general population. Arch Environ Health. 1996;51:275–82. 14. Black DW, Doebbeling BN, Voelker MD, et al. Multiple chemical sensitivity syndrome: symptom prevalence and risk factors in a military population. Arch Intern Med. 2000;160: 1169–76. 15. Cullen MR, Pace PE, Redlich CA. The experience of the Yale occupational and environmental medicine clinic with MCS 1986–91. In: Mitchell FL, ed. Multiple Chemical Sensitivity: A Scientific Overview. Princeton: Princeton Scientific; 1995:15–20. 16. Ashford NA, Miller CS. Chemical Exposures: Low Levels and High Stakes. 2nd ed. New York: John Wiley and Sons; 1998. 17. Brodsky CM. Multiple chemical sensitivities and other “environmental illnesses”: a psychiatrist’s view. Occup Med. 1987;2:695–704. 18. Shorter E. From Paralysis to Fatigue: A History of Psychosomatic Illness in the Modern Era. New York: Macmillan; 1992: 233–323. 19. Donnay A, Ziem G. 
Prevalence and overlap of chronic fatigue syndrome and fibromyalgia syndrome among 100 new patients with multiple chemical sensitivity syndrome. J Chronic Fatigue Syndrome. 1999;5:71–80. 20. Staudenmeyer H. Psychological treatment of psychogenic idiopathic environmental intolerance. Occup Med. 2000;15:627–46.
Pulmonary Responses to Gases and Particles
32
Kaye H. Kilburn
This chapter defines the functional zones of the human lung, describes responses to occupationally polluted air, reviews the adverse health effects caused by environmental air pollution, and considers indoor air pollution.
FUNCTIONAL ZONES OF HUMAN LUNG
The lungs’ two regions are the conducting airways and the gas-exchanging alveolar zone. In the former, a mucociliary escalator removes deposited particles. The alveolar zone, which includes alveolarized respiratory bronchioles and alveolar ducts, lacks this ability1 (Fig. 32-1). The two zones differ greatly in defenses and susceptibility to damage. For example, water-soluble gases such as sulfur dioxide and ammonia adsorb to water in proximal conducting airways, while relatively insoluble ozone and nitrogen dioxide damage the non-mucous-covered alveolar zone (Table 32-1). The airways selectively filter particles. Thus large particles (50 µm in diameter) lodge in the nose or pharynx, but particles less than 10 µm (and usually less than 5 µm) reach the alveolar zone.2 Fungal spores with diameters of 17–20 µm affect only proximal conducting airways (Fig. 32-2), while the 1 µm diameter spores of Micropolyspora faeni affect alveoli as well (Fig. 32-3). As a first approximation, reactions to particles can be predicted from their size, which is best defined by the mass median diameter, and from their solubility in water. The site of lodgment of fibers and fibrils is predicted from aerodynamic diameter, not from length.
OCCUPATIONALLY POLLUTED AIR
Acute Alveolar Reactions

Asphyxiant Gases
Asphyxiant gases divide into two groups: (1) simple asphyxiants, represented by carbon dioxide, methane, and fluorocarbons, which displace oxygen from alveoli to cause death, and carbon monoxide, which combines with hemoglobin more avidly than oxygen; and (2) chemically reactive poisons of mitochondrial cytochromes: hydrogen cyanide, hydrogen sulfide, and sodium azide. Their properties, exposure sources, toxicity, and applicable standards for occupational exposure in the United States are listed in Table 32-1. Carbon dioxide stimulates respiration at concentrations less than 10% but depresses it at higher concentrations, at which it is anesthetic and lethal. Hazards occur when people enter poorly ventilated chambers, often underground. For example, carbon dioxide, methane, ammonia, and hydrogen sulfide are generated from manure collected from cattle feeding lots or from sewage and in
wells, pits, silos, holds of ships, or abandoned mine shafts. Workers entering these areas often collapse after a few breaths. Tragically, the first person to attempt rescue often dies of asphyxiation before it is realized that the exposure is lethal. Arc welding is hazardous in small compartments: the arc itself does not require oxygen, but it burns organic material with oxygen to produce carbon monoxide, and if the space is poorly ventilated, lethal quantities of carbon monoxide accumulate. Methane, as coal damp, is an asphyxiant and an explosion hazard for miners. Community contamination with hydrogen sulfide has occurred from coal seams in Gillette, Wyoming; from evaporative (salt crystallization) chemistry in Trona, California; and from petroleum refining in Ponca City, Oklahoma, Lovington and Artesia, New Mexico, and Nipomo, California. However, the most serious incident of this type was the Bhopal, India, disaster of 1984, in which methyl isocyanate (used in manufacturing the insecticide carbaryl (Sevin)) escaped from a 21-ton liquid storage tank, killing more than 2300 people and injuring more than 30,000. Hydrogen sulfide inhalation has produced nausea, headache, shortness of breath, sleep disturbances, and throat and eye irritation at concentrations of 0.003–11 mg/m3 during a series of intermittent air pollution episodes. Hydrogen sulfide concentrations of 150 ppm quickly paralyze the sense of smell, so that victims may be unaware of the danger. Instantaneous death has occurred at levels of 1400 mg/m3 (1000 ppm) to 17,000 mg/m3 (12,000 ppm). As the level of hydrogen sulfide increases in the ambient environment, symptoms vary from headache, loss of appetite, burning eyes, and dizziness at low concentrations, to low blood pressure, arm cramps, and unconsciousness at moderate concentrations, to pulmonary edema, coma, and death at higher concentrations.
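The paired mg/m3 and ppm figures above are related by the standard gas conversion mg/m3 = ppm × molecular weight / 24.45 at 25 °C and 1 atm. A minimal sketch, with function name and constants chosen for illustration:

```python
# Illustrative sketch (not from the chapter): convert a gas concentration in
# ppm to mg/m3 at 25 degrees C and 1 atm, where 1 mol of ideal gas occupies 24.45 L.
MOLAR_VOLUME_L = 24.45

def ppm_to_mg_per_m3(ppm: float, mol_weight: float) -> float:
    """mg/m3 = ppm x molecular weight / molar volume (L/mol)."""
    return ppm * mol_weight / MOLAR_VOLUME_L

H2S_MW = 34.08  # g/mol for hydrogen sulfide

print(round(ppm_to_mg_per_m3(1000, H2S_MW)))   # 1394, close to the text's 1400 mg/m3
print(round(ppm_to_mg_per_m3(12000, H2S_MW)))  # 16726, close to the text's 17,000 mg/m3
```

The round-number pairs in the text (1400 mg/m3 for 1000 ppm; 17,000 mg/m3 for 12,000 ppm) are consistent with this formula.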
The recommended occupational standard for carbon dioxide is 0.5%; for carbon monoxide it is 50 ppm for an 8-hour workday, with a single exposure to 200 ppm considered dangerous, producing chronic as well as acute impairment of the central nervous system (CNS). Because hydrogen sulfide is highly toxic even at low concentrations, the Occupational Safety and Health Administration (OSHA) has not set a time-weighted average for an 8-hour day; instead, 20 ppm has been set as a maximum 15-minute exposure. New federal regulations are under review.
Oxidant Gases
A potent oxidizing agent, ozone is a bluish pungent gas generated by electrical storms, arcs, and ultraviolet light. Ozone and nitrogen oxides are important in environmental air pollution. At high altitudes, the ozone shield protects the earth against solar ultraviolet radiation. Excess ozone is found aboard high-flying long-distance aircraft, particularly over the North Pole, if cabin adsorption is inadequate or absent. Otherwise, exposure to oxidant gases occurs mainly from welding, near generation of electricity, and in the chemical industry (Table 32-1). Nitrogen dioxide, with its pungent odor, is seen in fuming nitric acid, in silos containing alfalfa, and in the manufacture of feeds, fertilizers, and explosives. Although ozone and nitrogen dioxide irritate mucous membranes and the eyes, greater damage is produced in the distal zone of the lung: the respiratory bronchioles and alveolar ducts. These gases enter alveolar epithelial cells, produce swelling, and secondarily affect the capillary endothelial cells. The thin alveolar membranes are made permeable to plasma fluids and proteins, which leads to pulmonary edema after exposure to large concentrations. Exposure to nitrogen oxides, principally nitrogen dioxide generated in silos by silage, in animal feed processing, and in nitrocellulose film fires in movie theaters, has caused subacute necrotizing bronchiolitis in victims who survive the acute pulmonary edema. Sulfur dioxide may also cause alveolar edema but is extremely irritating, so unless doses are unbearably high, the nose and upper airways reduce the concentrations reaching the alveoli.
Figure 32-1. Diagram showing the possible fates and influence of inhaled aerosols and ingested materials. Alv, alveolus; Alv macro, alveolar macrophages; GIT, gastrointestinal tract; Ins, insoluble particles; NP, nasopharynx; RB, red blood cell; RES, reticuloendothelial system; S, soluble particles; TB, terminal bronchioles; TLN, thoracic lymph nodes. (Adapted from Kilburn KH. A hypothesis for pulmonary clearance and its implications. Am Rev Respir Dis. 1968;98:449–63. Courtesy of the Editor of the American Review of Respiratory Diseases.)
Irritant Gases
The irritant gases include fluorine, bromine, chlorine, hydrochloric acid, hydrogen fluoride, phosgene (phosgene and chlorine were used as poison gases in World War I), sulfur dioxide, ammonia, and dimethyl sulfate. Oxides of vanadium, osmium, cadmium, and platinum, as finely divided fumes, act like gases. The sources are generally industrial processing, although inadvertent production may occur. In addition, bromine and chlorine are injected to sterilize municipal water supplies, so that large amounts of concentrated gas are transported into and stored in densely populated cities. One desulfurization process for petroleum uses vanadium pentoxide as a catalyst for hydrogen sulfide, and although portions of this are regenerated, workplace and environmental exposures occur. When liquid ammonia is injected into soil, or put to industrial use in making amphetamines or fertilizers, workers may be exposed to large quantities. Irritant gases in large quantities damage alveolar lining cells and capillary endothelial cells, causing alveolotoxic pulmonary edema; when removal of fluid by lymphatic drainage fails, they may also severely damage the epithelial surfaces of airways. Edema fluid moves up into the terminal bronchioles and hence into the conducting airways, to be auscultated as rales.
Recent observations show that the brain is the target of chlorine, hydrochloric acid, ammonia, formaldehyde, and hydrogen sulfide. Sensitive measurements show impaired balance, reaction time, visual fields, verbal recall, problem solving, and decision-making and high frequencies of headache, memory loss, dizziness, and other symptoms.3
Particles
Particles causing alveolar edema include small fungal spores such as those of M. faeni, bacterial endotoxins, and metal fumes (particles), particularly vanadium pentoxide, osmium, platinum, cadmium, and cobalt. Particles of hyphae and spores may be generated from vegetable crops used as food, fiber, or forage; as aerosols from sewage or animal fertilizer; or from petroleum desulfurization. Acute inhalation of high concentrations produces pulmonary edema, noted clinically in minutes to hours.
Mixtures
Mixtures created by combustion of fuel, such as diesel exhaust in mines and welding fumes, particularly in compartments with limited ventilation, may reach edemagenic levels of ozone, nitrogen dioxide, formaldehyde, and acrolein. Again, if combustion or arcing takes place in spaces without adequate ventilation, pulmonary edema or acute airways obstruction is more likely.
Therapy
Physiological therapy of pulmonary edema is oxygen delivered under positive pressure by mask or endotracheal tube, which restores oxygen to alveoli blocked by foam and improves systemic oxygenation. Diuretics, fluid restriction, or adrenal corticosteroids are secondary measures. Speed is crucial. If breathing is impaired or the patient is unconscious, intubation and artificial ventilation are lifesaving.
Control
Personnel must don self-contained breathing apparatus or air-supplied respirators before entering areas where harmful gases may collect and work in such areas only with adequate provision for air exchange. Would-be rescuers of afflicted individuals should wear an individual air or oxygen supply and be attached to a safety harness by which they can be retrieved safely by fellow workers. Appropriate advice and rules should be posted for personnel and reviewed frequently.
TABLE 32-1. PROPERTIES, SOURCES, AND TOXICITY OF COMMON GASES

Name | Formula | Color and Odor | Sources of Exposure | Acute Health Effects | OSHA TWA* (ppm) | IDLH† (ppm) | Chronic Health Effects

Asphyxiant Gases
Carbon dioxide | CO2 | c, o | M, We, FC | A, H, D, Ch | 5000 | 50,000 |
Carbon monoxide | CO | c, o | CS, T, FC | A, H, Cv, Co | 50 | 1500 | Np
Methane | CH4 | c, o, f | Ng, D | A | | |
Carbon disulfide | CS2 | c, so | CM | H, D | 20 | 500 | Np
Hydrogen sulfide | H2S | c, re | Ae, D, Ng, P | A, Pe, D, H, Co | 20 (ceiling) | 300 |

Oxidant Gases
Ozone | O3 | c, po | S, EA, W, AC | T, Pe, Mm, Tp | 0.1 | 10 | AO
Nitrogen oxides | NO | rb, po | W | T, Mm, Pe, Tp | 25 | 100 |
Nitrogen oxides | NO2 (N2O4) | rb, po | CS, W, FC | Ch | 5 | 50 |

Irritant Gases
Sulfur dioxide | SO2 | c, po | P | T, Mm, Tp, Pe, Ch | 5 | 100 | AO
Formaldehyde | HCHO | c, po, p | CS, CM | T, Mm, Ch, Tp | 2 | 100 | AO, Ca, Np
Acetaldehyde | CH3CHO | c, po, p | CS, CM | T, Mm, Ch, Tp | | | AO, N
Acrolein | CH2=CHCHO | c, po, p | CS, CM | T, Mm, Ch, Tp | 0.1 | 5 | AO, Np
Ammonia | NH3 | c, po | Ae, Af, CM | A, Pe, Mm, Tp, T, Ch | 50 | 500 | Np
Chlorine | Cl2 | gy, po | CM | Pe, Mm, Ch, T, H, D, L | 1 | 25 | AO, Np
Bromine | Br2 | rb, po | CM | Pe | 1 | 10 | AO
Fluorine | F2 | y, po | CM | Pe | 0.1 | 25 | AO
Hydrogen fluoride | HF | c, po | CM | Pe, T, Mm, B | 3 | 20 | AO
Hydrogen bromide | HBr | c, po | CM | Pe, T, Mm, B | 3 | 50 | AO
Hydrogen chloride | HCl | c, po | CM | Pe, T, Mm, B | 5 | 100 | AO, Np
Trichloroethylene | C2HCl3 | c, so | CM | Co, Np | 100 | 1000 | C
Phosgene | COCl2 | c, ol-po | CM | Pe, T, Mm, Ch, Tp, B | 0.1 | 2 | AO
Carbon tetrachloride | CCl4 | c, so | CM | H, D, Pe | 10 | 300 | L, Np
Chloroform | CHCl3 | c, so | CM | H, D, Pe, Co | 50 | 1000 | L, Np
Vinyl chloride | CH2=CHCl | c, so, p | CM | H, D, Mm | 1 | 5 | Ca, AOL, Np
Vinylidene chloride | CH2=CCl2 | c, so, p | CM | B, Mm | 10 | 50 | Ca

Color and Odor: c, colorless; f, flammable; gy, green-yellow; o, odorless; p, polymerizes; po, pungent; rb, red-brown; re, rotten eggs; so, sweet.
Sources of Exposure: AC, aircrew; Ae, animal excreta; Af, agrifertilizer; CM, chemical manufacture; CS, cigarette smoke; D, dumps; EA, electric arcs; FC, fuel combustion; M, mining; Ng, natural gas; P, petroleum drilling, refining; S, stratosphere; T, tunnels; W, welding; We, wells.
Health Effects: A, asphyxiant; AO, airways obstruction; AOL, acro-osteolysis; B, burns, skin; Ca, cancer; Ch, cough; Co, coma; Cv, depressed heart rate; D, dizziness; H, headache; L, liver; Mm, mucous membrane irritation; Np, neuropsychological toxin; Pe, pulmonary edema; T, tearing; Tp, tracheal pain.
*TWA, time-weighted average. †IDLH, level of immediate danger to life or health.
Prevention
Opportunities for gas leakage and accumulation should be minimized by industrial hygiene surveillance; the above advice postulates that every effort has been made to reduce leakage and maximize avoidance.
Chronic Alveolar Disease
Extrinsic allergic alveolitis, lipoproteinosis, and granulomatous alveolitis are disorders of the alveolar cells and spaces caused by inhalation of chemically active particles.
Nongranulomatous Alveolitis (Allergic Pneumonitis)
The original description of extrinsic allergic alveolitis, or farmer’s lung, implicated inhalation of fungal spores and vegetable material from hay or grain dust,4 which recruited cells to the alveoli. Some exposed farmers developed shortness of breath. Frequently, they had precipitating serum antibodies to crude preparations of fungi. However, antibodies were also found in asymptomatic farmers. Farmer’s lung occurred in areas where animal feeds were stored wet, with the consequent enhanced generation of fungal spores. Classic descriptions came from Northwest England,5 Scotland, and the north-central U.S. dairy states.6 Both the size of the spores (less than 7 µm to be respirable but less than 3 µm to reach alveoli) and their solubility influence the disorder. Fungal toxins, including endotoxins, are important in the pathogenesis of farmer’s lung, and hypersensitivity may be responsible for part of the pathological picture. Whether this is type IV allergy or also type III is not clear. Initial high-dose exposure to spores
frequently produces both airway narrowing and acute pulmonary edema7 (Fig. 32-3) requiring hospitalization and oxygen therapy. After repeated exposure and development of precipitating antibodies, many cells may be recruited into alveoli. This pneumonitis can be lethal with repeated heavy exposure. Or the reaction may clear completely during absence from exposure. Adrenal corticosteroids frequently help resolve the acute phase but do not affect the chronic fibrotic stage.
Molds and Mycotoxin
Previously, mold and mycotoxin disease occurred at work, but in the 1990s infantile hemosiderosis was linked to a mold, Stachybotrys chartarum, growing in Cleveland homes with excess humidity (CDC 1994, CDC 1997, Etzel 1998, Dearborn 2002). A decade later, mold disease of adults and children appeared nationwide, with greatest prevalence in a swath across the deep south from Florida and Texas to Arizona and California (Kilburn 2003). Patients developed headaches, fatigue, memory loss, impaired concentration, dizziness, and deficient balance, together with flulike nasal and pulmonary congestion and phlegm tinged with blood. They saw black mold growing on walls, floors, and ceilings; smells were musty; and they felt better away from home. An infant mouse model replicated the findings in human infants (Yiki et al. 2001). Numerous schoolrooms were affected, and teachers and children sickened. Mold growth was seen on lift samples and confirmed by culture. Indoor air samples showed more colony-forming units than did outdoor air samples. Molds included Stachybotrys chartarum (atra), Aspergillus, Penicillium, Cladosporium, and other genera. Patients had serum antibodies to molds and mycotoxins, particularly trichothecenes
Figure 32-2. Effects of exposure to thermophiles.
and satratoxins, but not aflatoxin; neither patterns nor titers distinguished affected patients from asymptomatic control subjects. For a general source, see Straus 2004. Neurobehavioral testing found impaired balance, slowed reaction time, decreased strength, excess errors in distinguishing colors, and visual field defects. Hearing and blink reflex latency were usually not affected. Verbal recall for stories, cognition for problem solving and multitasking, and perceptual motor function were frequently impaired, as was the ability to see missing items in pictures (Kilburn 2003). The abnormalities resembled those produced by exposures to hydrogen sulfide, chlorpyrifos,8 and chlorine.9
Berylliosis
Beryllium, a dense, corrosion-resistant metal, produces fulminant chemical pneumonia when inhaled as a soluble salt in large doses. Inhalation of fumes or fine particles leads to chronic granulomatous alveolitis. Originally, beryllium disease was interpreted as an accelerated sarcoidosis.10 First recognized in workers making phosphors for fluorescent lamps, berylliosis is recognized pathologically by noncaseating granulomas with giant cells and the absence of necrosis. Specific helper-inducer T cells accumulate in the lung, identifying berylliosis as a hypersensitivity disease.11 Insidious shortness of breath was accompanied by characteristic x-ray changes, which led to hospitalization in a tuberculosis sanitarium. Patients with accelerated sarcoidosis were brought to the attention of Dr. Harriet Hardy, who isolated beryllium as the cause from among the 42 materials used by the original workers.10 Subsequently, the problem was recognized in workers from other fluorescent light and electrical factories that used beryllium nitrate phosphors. Some patients with advanced disease died. Those with less advanced berylliosis gradually improved but had residual interstitial fibrosis.12 Because beryllium is irreplaceable in nuclear reactors and in exotic alloys for spacecraft, exposure to beryllium fumes continues for engineers and skilled workers.
Chronic interstitial fibrosis occurs after exposure to hard metal (tungsten carbide), silicon carbide, rare earths, copper (as sulfate in vineyard sprayer’s lung), aluminum, beryllium, and cadmium, and may follow a granulomatous sarcoidlike response. Aluminum has been associated with fibrosis in workers making powdered aluminum for paints,18 but this is infrequent and must be differentiated radiographically or by lung biopsy from asbestosis and silicosis. Powdered tungsten and carbon are fluxed with cobalt to make hard metal. Animals exposed to cobalt alone show the lesions seen in workers19: proliferation of alveolar and airway cells.20 As in berylliosis, removing the worker from exposure leads to prompt improvement; reexposure causes exacerbation. The similarity of hard metal disease to farmer’s lung and alveolar lipoproteinosis suggests that lung lavage may be helpful. Adrenal corticosteroids help reverse the airways obstruction. Cadmium is unusual in producing both pulmonary edema (acute respiratory distress syndrome), particularly when fumes are generated during silver (cadmium) soldering, and pulmonary fibrosis, which is fine and non-nodular, in cadmium refinery workers.21 Because of the frequency of asbestos exposure and asbestosis among metal smelting and refinery workers,22 caution is advised in attributing pulmonary fibrosis to cadmium alone. Nodular infiltrates resembling those of berylliosis, hard metal disease, and silicosis have been reported among dental technicians and workers machining alloys of exotic metals. Because these illnesses occur infrequently among exposed workers (e.g., only 12.8% of 425 workers exposed to hard metal had radiographic evidence of disease),23 individual immune response or susceptibility factors appear to be important.
Asthma, Acute Airway Reactivity
Acute airway narrowing, or asthma, is defined by shortness of breath or impaired breathing, usually accompanied by wheezing, that is relieved spontaneously or with therapy. Its spectrum includes acute responses that develop within a few minutes of exposure in a sensitized individual and those needing several hours of exposure to reach their peak, as with cotton dust.24 Asthma is the fastest-growing lung disease of the twenty-first century. Although the causes are not agreed upon, chemical exposures are important at work and at home. It is estimated that asthma increased by 60% in the 1980s.25 Asthma currently affects 5–10% of children and 5–10% of adults in the United States.26 African Americans are three times more likely to die of asthma than are whites.25 They are also frequent victims of environmental inequality, meaning living in chemical “soups.” Asthma is a disease of increasing mortality in the United States and across the world, particularly in developed countries such as Sweden, Denmark, and New Zealand.
Lipoproteinosis
In this disorder, alveolar spaces are filled with neutral lipids resembling pulmonary surfactant and its apoproteins.13 Inorganic particles and Mycobacterium tuberculosis have been causally associated.14 Thus areas of lipoproteinosis are frequently found in lung biopsies or in lungs at autopsy from workers exposed to silica15 and to many other particles. Diagnosis is made by sampling alveolar fluids by minibronchial lavage through the fiberoptic bronchoscope or by lung biopsy. Treatment is removal by lung lavage. Both granulomatous and nongranulomatous alveolitis occur from inhaling moldy plant debris. Animal experiments suggest that granulomas may be due to poorly digestible chitins, the complex carbohydrates forming the walls of spores and of plant cells.16 Chronic farmer’s lung may produce lipoproteinosis17 and pulmonary fibrosis.
Figure 32-3. Effects of exposure to pigeon proteins.
In 1979, asthma mortality in the United States was 1.2 per 100,000, and it had risen to 1.5 by 1983–1984.27 In African Americans, the corresponding figures were 1.8 in 1979 and 2.5 in 1984. Neither the causes of this startling increase nor the reasons the disease became epidemic were identified. Many authors25 have elaborated on the mechanisms, amid multiple and complex causes. Air pollution and the synthetic chemical triggers for asthma have increased manyfold in the past 50 years, while buildings and homes have become tighter containers for them. The prevention of asthma, which is the most effective control measure, depends on reducing exposures to ambient air pollution and other chemical causes.28,29 Because the processing of cereal grains and flour maximizes opportunities for exposure, farmers, grain handlers, millers, and bakers probably constitute the largest worldwide group with reactive airways disease.30,31 Fortunately, exposures that produce the highest prevalence of airways reactivity, such as diisocyanates and cotton dust, have been controlled in the United States or their use reduced.32 An estimated 8 million workers in the world are exposed to welding gases and fumes. Such exposure produces symptoms but practically no acute airway response and relatively mild impairment of function, detectable 10 or 11 years after the beginning of exposure and greater in cigarette smokers.33
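The mortality figures quoted earlier in this section imply relative increases of roughly 25% overall and nearly 40% among African Americans. A minimal sketch of that arithmetic (the function name is illustrative, not from the chapter):

```python
# Illustrative sketch: relative rise implied by the asthma mortality
# rates quoted in the text, expressed per 100,000 population.
def relative_increase(before: float, after: float) -> float:
    return (after - before) / before

print(round(relative_increase(1.2, 1.5), 2))  # 0.25, a 25% rise overall, 1979 to 1983-1984
print(round(relative_increase(1.8, 2.5), 2))  # 0.39, about a 39% rise among African Americans
```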
Diagnosis
Acute or reactive airway response is recognized by an increased resistance to expiratory flow, from contraction of smooth muscle or swelling in airway walls, causing tightness in the chest, shortness of breath, and wheezing, quickly or insidiously. Nonproductive cough is frequent, but as mucus secretion is stimulated, the cough becomes productive. Generalized wheezing is heard low and posterior as the lung empties during forced expiration. Alternatively, scattered localized wheezing may be heard. The lungs’ appearance on chest x-ray film is usually normal, with abnormalities seen only from preexisting disease. Occasionally, severe hyperinflation causes increased radiolucency and low and flattened diaphragms, suggesting emphysema. A second exception is accentuated venous markings and a prominent minor lung fissure, suggesting pulmonary edema. Symptoms occur within a few hours of beginning work, are more frequent on Monday or the first day back after a holiday, and gradually increase during the work shift.24 The diagnosis is confirmed by finding decreased expiratory flow when comparing measurements at the middle or end of the shift with those made before entry to the workplace. Cross-shift decrements measured at work are optimal, but a laboratory exposure challenge may be substituted,4,7 with workers’ exposure long enough to simulate the workplace.

Mechanisms
Acute airway responses may be nociceptive, inflammatory, or immune. The reactive segment of a workforce includes but is not limited to atopic individuals, those with IgE antibodies. In the instance of toluene diisocyanate (TDI), which has been well studied, reactivity to low doses does not appear to correlate with atopic status.34 Etiologies of many workplace exposures are imperfectly understood because flour and dusts from cotton, grain, coal, and foundries are complex mixtures. Single agent-specific causes include metal fumes from zinc, copper, magnesium, aluminum, osmium, and platinum; endotoxins from Gram-negative bacteria; and possibly fungal toxins. Many organic, naturally occurring food, fodder, and fiber plant products contain endotoxin. Concentrations increase with senescence of plants and thus are maximal at harvest time, as with cotton, and for rye after frost.

Control, Surveillance, and Prevention
The first principle of control and prevention is to reduce exposure for all workers by improved industrial hygiene. This approach controlled byssinosis (cotton dust disease) in the United States: dust was reduced in cotton textile mills beginning in 1973, so that after 15 years there was debate on whether byssinosis had ever existed. In contrast, it continues to be a problem in the waste cotton industry35 and in developing countries36 lacking adequate engineering controls. The second principle of control and prevention is to remove reactive individuals from exposure. Reactivity is judged from symptoms or objectively from impaired function after challenge. Often individuals who react sharply to inhaled agents select themselves out of such work. Because removing impaired workers from cotton textile mills did not improve their function, at least in the short term, longitudinal surveillance must be added to the acute cross-shift measurement, so that workers with accelerated functional deterioration over 6 or 12 months are removed from exposure before they have suffered impairment that interferes with their ability to work. Annual and semiannual surveillance by pulmonary function testing was mandated by the cotton dust standard invoked in 197837 under the Occupational Safety and Health Act of 1970.

Chronic Airway Disease: Chronic Bronchitis

Definition
Chronic bronchitis is defined by the presence of phlegm or sputum production for more than 3 months in each of 2 succeeding years. Chronic bronchitis is the most common respiratory disease in the world.38
Effects of Cigarette Smoking
The prevalence of chronic bronchitis is mainly due to cigarette smoking, a plague of the twentieth century after World War I. Although the habit is on the wane in the United States, it is entrenched in Europe and has taken developing nations by storm, where the peak prevalence of cigarette smoking may not yet have been reached. Certainly there is no evidence that not smoking has become the accepted social behavior there, as it has in the United States. Chronic bronchitis has such a high prevalence in blue-collar cigarette smokers, particularly 20 years or more after they start smoking, that it often takes careful analysis to uncover occupational chronic bronchitis.39 Occupational effects are best assessed by studying large populations of individuals who have never smoked.40 Alternatively, the effects of cigarette smoking and occupational exposure can be partitioned by adjusting predicted values for expiratory flows for duration of smoking, using standard regression coefficients.41 Similarly, accelerated functional deterioration or increased prevalence of symptoms across years of occupational exposure, after adjusting for the cumulative effects of smoking, may show the effects of occupational exposure. Considering the additive effects of cigarette smoking, occupational dusts and fumes, and atmospheric air pollution, a decrement in forced expiratory volume in one second (FEV1) exceeding 21–25 mL/year in a person who has never smoked is excessive. Cigarette smoking alone in men increases the age-associated decrement 40%, or 9 mL/year.41 Women show no such effect, probably because they smoke fewer cigarettes daily, but still show increased lung cancers. In groups of men who smoke, decrements in FEV1 of more than 30 mL/year suggest occupational or environmental exposures. Airborne particle burdens increase age-related decrements.
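The decrement arithmetic above can be sketched as follows; the function names and the exact values applied (a baseline near the midpoint of 21–25 mL/year, a 40% smoking increase of about 9 mL/year, and the 30 mL/year threshold for smokers) are illustrative readings of the figures quoted in the text:

```python
# Hypothetical sketch of the FEV1-decrement reasoning in the paragraph above;
# names and thresholds are illustrative, taken from the figures quoted in the text.
def expected_fev1_loss(baseline_ml_per_yr: float, smoker: bool) -> float:
    """Expected annual FEV1 loss; smoking adds ~40% to the age-related decrement."""
    return baseline_ml_per_yr * 1.4 if smoker else baseline_ml_per_yr

def suggests_occupational_exposure(observed_ml_per_yr: float, smoker: bool) -> bool:
    """Loss beyond 21-25 mL/yr (never-smokers) or 30 mL/yr (smokers) is excessive."""
    threshold = 30.0 if smoker else 25.0
    return observed_ml_per_yr > threshold

print(round(expected_fev1_loss(22.5, smoker=True), 1))  # 31.5, about 9 mL/year above baseline
print(suggests_occupational_exposure(35, smoker=True))  # True
```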
Occupational Exposures Causal Agents Covering the important occupational exposures comprehensively is an encyclopedic job. However, Table 32-2 provides an index of the categories of materials and the types of reactions, drawn from patient reports and descriptive epidemiology. Causative agents are grouped logically so the reader can add new materials and reactions to them. Such reports have been published infrequently in the last decade.
Occupational exposure to many dusts including those containing silica, coal, asbestos, and cotton (including flax and hemp) dust, and exposures during coking, foundry work, welding, and papermaking increase the prevalence or lower the age of appearance of chronic bronchitis. Although it is clear that high exposures to silica and to asbestos produce characteristic pneumoconiosis, lower doses cause airways obstruction. Symptoms and airways obstruction from cotton and other
vegetable dusts have been well studied for more than a century.42 In 1960s studies in British textile mills (using American-grown cotton), the severity of Monday-morning asthma (byssinosis) and of shortness of breath and tightness in the chest correlated with concentrations of respirable cotton dust in workplace air.43 Similarly, exposure to welding gases and fumes accelerates reductions in expiratory flows.33 Shipbuilding, construction, and coal mining, associated with asbestosis and to a lesser extent with silicosis, are also strongly correlated with chronic bronchitis.44,45 The common thread is inhalation of respirable particles, with inflammation stimulated by one or more chemically active species contained in or adsorbed onto them, as in foundry work.46 Clinical signs are cough with mucus, due to goblet cell hyperplasia in small airways and hyperplasia of mucous glands in large bronchi, and exertional dyspnea due to small airways obstruction.47 Although inhalation of 200–400 ppm of sulfur dioxide by rats or guinea pigs models chronic bronchitis, these levels exceed by two orders of magnitude the exposures of workers in smelting or metal-roasting operations, and by three orders of magnitude most ambient air pollution exposures. The important difference is that human exposures include quantities of respirable particles. Similarly, exposure to chlorine, fluorine, bromine, phosgene, and the vapors of hydrogen fluoride and hydrogen chloride produces bronchitic reactions. Discontinuous pulses of damage produce cycles of injury and repair rather than chronic bronchitis. Gases are adsorbed on particles in many occupational exposures, such as welding, metal roasting, smelting operations, and foundries, especially where compressed air jets are used for cleaning. Gas molecules adsorbed on particles deposit in small airways.2 This deposition is studied in animal models with gases and pure carbon. Carbon, by itself an innocuous particle, adsorbs gas molecules and creates a nidus of damage in the airway lining, because the particle is difficult to remove and the adsorbed gas molecules leach into cells.48 Perhaps the best examples are the adsorption of ozone, nitrogen dioxide, and hydrocarbons on respirable particles40 in Los Angeles, Mexico City, Athens, and other cities28,29 where large amounts of fossil fuel are combusted with limited atmospheric exchange because of mountains, prevailing winds, and weather conditions.

TABLE 32-2. PARTICLES AFFECTING HUMAN LUNGS: CLASSES AND EXAMPLES

Source | Persons Affected | Airways | Alveoli | Reference

Bacteria
Aerobacter cloaceae, Phialophora species | Air conditioner, humidifier workers | + | + | Friend JAR. Lancet. 1:297, 1977
Escherichia coli endotoxin | Textile workers (mill fever) | + | | Pernis B, et al. Br J Ind Med. 18:120, 1961
Pseudomonas sp. | Sewer workers | + | + | Rylander R. Schweiz Med Wochenschr. 107:182, 1977

Amoeba
Acanthamoeba castellani, Acanthamoeba polyphaga, Naegleria gruberi | Air conditioning, humidifier workers | + | + | Edwards JH, et al. Nature. 264:438, 1976

Fungi
Aspergillus sp., Micropolyspora faeni | Farmers | | + | Emanuel DA, et al. Am J Med. 37:392, 1964
Aspergillus clavatus | Malt workers | | + | Channell S, et al. Q J Med. 38:351, 1969
Cladosporium sp. | Combine operators | + | + | Darke CS, et al. Thorax. 31:294–302, 1976
Verticillium sp., Alternaria sp. | Mushroom workers | + | + | Lockey SD. Ann Allergy. 33:282, 1974
Micropolyspora faeni, Penicillium casei | Cheese washers | | + | Minnig H, deWeck AL. Schweiz Med Wochenschr. 102:1205, 1972
Penicillium frequentans | Cork workers (suberosis) | + | + | Avila R, Villar TG. Lancet. 1:620, 1968
Thermoactinomyces (vulgaris) sacchari | Sugar cane workers (bagassosis) | | + | Seabury J, et al. Proc Soc Exp Biol Med. 129:351, 1968

Vegetable Origin
Barley dust | Farmers | + | | McCarthy PE, et al. Br J Ind Med. 42:106–10, 1985
Carbon black | Production workers | + | | Crosbie WA. Arch Environ Health. 41:346–53, 1986
Castor bean (ricin) | Oil mill workers | + | | Panzani R. Int Arch Allergy. 11:224–236, 1957
Cinnamon | Cinnamon workers | + | | Uragoda CG. Br J Ind Med. 41:224–7, 1984
Coffee bean | Roasters | + | | Freedman SD, et al. Nature. 192:241, 1961
Cotton, hemp, flax, jute, kapok | Textile workers | + | | Roach SA, Schilling RSF. Br J Ind Med. 17:1, 1960; Van Toorn DW. Thorax. 25:399–405, 1970; Jamison JP, et al. Br J Ind Med. 43:809–13, 1986; Buck MG, et al. Br J Ind Med. 43:220–6, 1986
Flour dust | Millers | + | | Awad el Karim MA, et al. Arch Environ Health. 41:297–301, 1986
Grain dust | Farmers | + | | Tse KS, et al. Arch Environ Health. 27:74, 1973; Warren P, et al. J Allergy Clin Immunol. 53:139, 1974
Gum arabic, gum | Printers | + | | Gelfand HH. J Allergy. 14:208, 1954
Papain | Preparation workers | + | | Flindt MLH. Lancet. 1:430, 1978
Proteolytic enzymes—Bacillus subtilis (subtilisin, alcalase) | Detergent workers | + | + | Pepys J, et al. Lancet. 1:1181, 1969
Soft paper | Paper mill workers | + | | Enarson DA, et al. Arch Environ Health. 39:325–30, 1984; Thoren K, et al. Br J Ind Med. 46:192–5, 1989
Tamarind seed powder | Weavers | + | | Murray R, et al. Br J Ind Med. 14:105, 1957
Tea | Tea workers | + | | Zuskin E, Skuric Z. Br J Ind Med. 41:88–93, 1984
Tobacco dust | Cigarette, cheroot factory workers | + | | Viegi G, et al. Br J Ind Med. 43:802–8, 1986; Huuskonen MS, et al. Br J Ind Med. 41:77–83, 1984
Wood dust | Those who work with Canadian red cedar, South African boxwood, rosewood (Dalbergia sp.) | + | | Chan-Yeung M, et al. Am Rev Respir Dis. 108:1094–102, 1973; Vedal S, et al. Arch Environ Health. 41:179–83, 1986
Wood dust | Furniture workers | + | + | Carosso A, et al. Br J Ind Med. 44:53–6, 1987; Gerhardsson MR, et al. Br J Ind Med. 42:403–5, 1985

Animal Origin
Ascaris lumbricoides | Zoologists | + | | Hansen K. Occupational Allergy. Springfield, IL: Charles C Thomas, 1958
Ascidiacea | Oyster culture workers | + | | Nakashima T. Hiroshima J Med Sci. 18:141, 1969
Dander | Farmers, fur workers, grooms | + | | Squire JR. Clin Sci. 9:127, 1950
Egg protein | Turkey and chicken farmers | + | | Smith AB, et al. Am J Ind Med. 12:205–18, 1987
Feathers | Poultry workers | + | | Boyer RS, et al. Am Rev Respir Dis. 109:630–5, 1974
Furs | Furriers | + | | Zuskin E, et al. Am J Ind Med. 14:189–96, 1988
Insect chitin (Sitophilus granarius) | Flour workers | + | | Lunn JA, Hughes DTD. Br J Ind Med. 24:158, 1967
Mayfly | Outdoor enthusiasts | + | | Figley KD. J Allergy. 11:376, 1940
Screwfly | Screwworm controllers | + | | Gibbons HL, et al. Arch Environ Health. 10:424–30, 1965
King crab | Processors | + | | Orford RR, Wilson JT. Am J Ind Med. 7:155–69, 1985
Pancreatic enzymes | Preparation workers | + | + | Colten HR, et al. N Engl J Med. 292:1050–3, 1975; Flood DFS, et al. Br J Ind Med. 42:43–50, 1985
Rat serum and urine | Laboratory workers | + | + | Taylor AN, et al. Lancet. 2:847, 1977; Agrup G, et al. Br J Ind Med. 43:192–8, 1986
Swine confinement | Farm workers | + | + | Donham KJ. Am J Ind Med. 5:367–75, 1984

Chemicals: Inorganic
Beryllium | Metal workers | | + | Saltini C, et al. N Engl J Med. 320:1103–9, 1989
Calcium hydroxide–tricalcium silicate | Cement workers | + | | Eid AH, El-Sewefy AZ. J Egypt Med Assoc. 52:400, 1969
Chromium | Casters | + | | Dodson VN, Rosenblatt EC. J Occup Med. 8:326, 1966
Copper sulfate and lime | Vineyard sprayers | | + | Pimentel JC, Marques F. Thorax. 24:678–88, 1969
Hard metal | Sintering and finishing workers | + | + | Meyer-Bisch C, et al. Br J Ind Med. 46:302–9, 1989
Vanadium pentoxide | Refinery workers | + | | Zenz C, et al. Arch Environ Health. 5:542, 1962
Nickel sulfate | Platers | + | | McConnell LH, et al. Ann Intern Med. 78:888, 1973
Platinum chloroplatinate | Photographers | + | | Pepys J, et al. Clin Allergy. 2:391, 1972
Titanium chloride | Pigment workers | + | | Redline S, et al. Br J Ind Med. 43:652–6, 1986
Titanium oxide | Paint factory workers | + | + | Oleru UG. Am J Ind Med. 12:173–80, 1987
Tungsten carbide (cobalt); hard metal | Hard metal workers | + | + | Coates EO, Watson JHL. Ann Intern Med. 75:709, 1971
Zinc, copper, magnesium fumes | Welders, bronze workers (metal fume fever) | + | | Gleason RP. Am Ind Hyg Assoc J. 29:461, 1968
Iron, chromium, nickel (oxides) | Welders | + | + | Kilburn KH. Am J Indust Med. 87:62–9, 1989

Chemicals: Organic
Aminoethyl ethanolamine | Solderers | + | | McCann JK. Lancet. 1:445, 1964
Azodicarbonamide | Plastic injection molders | + | | Whitehead LW, et al. Am J Ind Med. 11:83–92, 1987
Chlorinated biphenyls | Transformer manufacturers | + | | Shigematsu N, et al. Environ Res. 1978
Colophony (pine resin) | Solderers | + | | Fawcett IW, et al. Clin Allergy. 6(4):577, 1976
Diazonium salts | Chemical workers | + | | Perry KMA. Occupational lung diseases. In: Perry KMA, Sellers TH, eds. Chest Diseases. London: Butterworth; 1963:518
Diisocyanates—toluene, diphenylmethane | Production workers, foundry workers | + | | Brugsch HG, Elkins HG. N Engl J Med. 268:353–7, 1963; Zammit-Tabona M, et al. Am Rev Respir Dis. 128:226–30, 1983
Formaldehyde (Permapress, urethane foam) | Histology technicians, office workers | + | | Popa V, et al. Dis Chest. 56:395, 1969; Alexandersson R, et al. Arch Environ Health. 43:222, 1988; Dally KA, et al. Arch Environ Health. 36:277–84, 1981
Paraphenylenediamine | Solderers | + | | Perry KMA. Occupational lung diseases. In: Perry KMA, Sellers TH, eds. Chest Diseases. London: Butterworth; 1963:518
Paraquat | Sprayers | | + | Bainova A, et al. Khig-i zdravespazane. 15:25, 1972
Penicillin, ampicillin | Production workers, nurses | + | | Davies RJ, et al. Clin Allergy. 4:227, 1974
Parathion | Sprayers | + | | Ganelin RS, et al. JAMA. 188:108, 1964
Piperazine | Chemists | + | | Pepys J. Clin Allergy. 2:189, 1972
Polymer fumes (polytetrafluoroethylene) | Teflon manufacturers, users | + | + | Harris DK. Lancet. 2:1008, 1951; Lewis CE, Kirby GR. JAMA. 191:103, 1965
Polyvinyl chloride | Fabrication workers | + | | Ernst P, et al. Am J Ind Med. 14:273–9, 1988
Synthetic fibers: nylon, polyesters, dacron | Textile workers | | + | Pimentel JC, et al. Thorax. 30:204, 1975
Rubber (neoprene) | Injection press operators | + | | Thomas RJ, et al. Am J Ind Med. 9:551–9, 1986
Tetrazene | Detonator workers | + | | Burge PS, et al. Thorax. 39:470, 1984
Vinyl chloride (phosgene, hydrogen chloride) | Meat wrappers (asthma) | + | | Sokol WN, et al. JAMA. 226:639, 1973
Vinyl chloride (phosgene, hydrogen chloride) | Firefighters | + | | Dyer RE, Esch VH. JAMA. 235:393, 1976
Vinyl chloride (phosgene, hydrogen chloride) | Polymerization plant workers | + | + | Arnaud A, et al. Thorax. 33:19, 1978

The prevalence of occupational chronic bronchitis has declined in the postindustrial era in the United States, Great Britain, and Northern Europe. Byssinosis and chronic bronchitis from cotton dust have been on the wane since the early 1970s.49,50 A similar decline in prevalence in workers in foundries, coke ovens, welding, and other dusty trades is attributed to improved air hygiene, often dictated by economic or processing imperatives.44–46 Workers in Eastern Europe, China, India, Southeast Asia, and South America are now plagued by these “solved” problems.
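The orders-of-magnitude comparison for sulfur dioxide can be checked directly; a sketch taking ~300 ppm as the midpoint of the animal-model range, with the occupational and ambient levels back-calculated from the stated ratios (illustrative values, not measurements):

```python
import math

ANIMAL_MODEL_PPM = 300  # midpoint of the 200-400 ppm range used in rats/guinea pigs

# Representative levels back-calculated from the text's ratios: two orders of
# magnitude below the animal models for smelting/metal-roasting work, three
# orders of magnitude below for ambient air pollution.
occupational_ppm = ANIMAL_MODEL_PPM / 10**2   # 3 ppm
ambient_ppm = ANIMAL_MODEL_PPM / 10**3        # 0.3 ppm

for label, level in [("occupational", occupational_ppm), ("ambient", ambient_ppm)]:
    orders = math.log10(ANIMAL_MODEL_PPM / level)
    print(f"{label}: {level} ppm, {orders:.0f} orders of magnitude below the model")
```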
Control Measures Control measures for chronic bronchitis depend on avoiding exposure—to cigarette smoke, to contaminated respirable particles in coal mines, smelters, and foundries, and to worldwide air pollution from fossil fuel combustion. Poverty is often associated with more asthma and chronic bronchitis, as is residence in cities near freeways and fuel-burning buildings. As the standard of living rises, the prevalence of chronic bronchitis falls.28 Control ultimately depends on improving the population’s general health and curtailing its exposure to respirable particles.
Surveillance Effects of a personal, occupational, or atmospheric air pollution control program are best assessed by surveying symptoms and pulmonary functional performance of samples of the affected population. Most essential data—the prevalence of chronic bronchitis and measurement of expiratory airflow—are easily obtained and can be appraised frequently. Decreasing exposure reduces the prevalence of cough and phlegm and the rate of deterioration of expiratory airflow.
Prevention The prevention of chronic bronchitis centers on avoiding the release of respirable particles into the human air supply. Cigarette smoking cannot be condoned. Air filtration helps, but enclosing particle generation away from human noses, as in cotton textile mills, is best. Socioeconomic measures include cleaner combustion of fossil fuels, reduction of human crowding, provision of central heating, and an improved standard of living.
Natural History The natural history of chronic bronchitis in urban dwellers has been investigated since the early nineteenth century.51,52 Chronic inhalation of polluted air stimulates mucus production and cough with phlegm, which define chronic bronchitis epidemiologically.53 Chronic bronchitis identified by cough and sputum was studied in more than 1,000 English civil servants and transport workers over a decade.53,54 Approximately the same proportion were symptomatic at the end of the decade as at its beginning, although some individuals had left and others had entered the symptomatic group over the interval.54 Chronic bronchitis prevalence increases with age in both females and males. The male predominance may be entirely due to cigarette smoking. The latent period before deterioration of expiratory airflow may be long if chronic bronchitis begins in childhood or early adulthood but short if it begins in late middle age.53,54 Chronic bronchitis may also have an abrupt onset, unassociated with cigarette smoking or occupational exposure.55 This form is more common in women, is preceded by a viral or chemical respiratory illness, and is more likely to respond to treatment with broad-spectrum antibiotics. Afterward there is chronic phlegm production and more rapid than expected airflow limitation, with deterioration of pulmonary function. When shortness of breath accompanies the cardinal symptoms, airflow limitation is generally present, and the yearly decrements in function are usually twice as large as predicted. For many individuals who smoke and have had an insidious onset of shortness of breath, expiratory airflow declines more steeply after age 50.54
Epidemiology Since the early 1960s, atmospheric air pollution has been recognized as an important cause of chronic bronchitis.56 Studies in London,54 Groningen, The Netherlands,57 and Cracow, Poland,58 firmly established that episodic severe pollution increased mortality and that chronic levels of atmospheric air pollution were associated with increased prevalence of and morbidity from chronic bronchitis. Mortality from asthma and chronic bronchitis fell in Japan when sulfur dioxide air pollution decreased.59 In 1986, restudy of Italian schoolchildren showed that previously reduced expiratory flows rose to the levels of controls when air pollution decreased.60
Neoplastic Disease of Airways Lung cancer from occupational exposure to uranium (radon), asbestos, chromate pigments, and arsenic was described before the worldwide epidemic of lung cancer from cigarette smoking. Unfortunately, early reports often failed to mention cigarette smoking, delaying secure attribution of cause until the study of large numbers of individuals who had never smoked. The causal linkage of asbestos to lung cancer without smoking is firm. The histological types, including adenocarcinoma, squamous cell, undifferentiated, and small-cell or oat-cell carcinoma, are the same as those seen in the general population. One sentinel disorder is small-cell carcinoma after exposure to chloromethyl ethers. The association of lung cancer with exposure to polycyclic aromatic hydrocarbons in coke oven workers and roofers is established and follows Percival Pott’s attribution of scrotal skin cancers in chimney sweeps to coal tar in London over 200 years ago. Similarly, occupational exposures to radon, radium, and uranium in mining and metalworking cause lung cancers.
A recent example is uranium-mining Navajo Indians on the Colorado Plateau, who, despite a low prevalence of smoking and a low consumption of cigarettes among those who smoked, had a tenfold increase (observed over expected) in lung cancer.61 Sentinel nasal sinus cancers and excessive lung cancers have resulted from nickel refining, in the calcination of impure nickel and copper sulfide to nickel oxide or in the carbonyl process.62 Lung cancer may be caused by other exposures to nickel, to chromium, and to arsenic, but the data are less convincing than for asbestos and radon.63 Recent studies of copper smelter workers and aluminum refinery workers found asbestosis at a prevalence between 8% and 25%, using the International Labor Organization (ILO) criteria for x-ray diagnosis.22 The factor common to higher pulmonary disease prevalence and lung cancer mortality among metal smelter workers was asbestos used for heat insulation; for patching of calciners, retorts, and roasters; and for heat protection for personnel. The contribution of asbestos must be taken into account before attributing cancer or irregular opacities in the lung to the useful metals.
ENVIRONMENTAL AIR POLLUTION
History The famous fogs along the Thames in the City of London, chronicled by Sir Arthur Conan Doyle in the Sherlock Holmes stories 100 years ago, underscored a problem dating from the beginning of the Industrial Revolution and described by John Evelyn in 1661. Deaths from such ambient air pollution were first recognized in the Meuse Valley of Belgium during a thermal inversion in December 1930.56 Sixty people died. In Donora, Pennsylvania, a town of about 14,000 people along the Monongahela River with steel mills, coke ovens, a zinc production plant, and a chemical plant manufacturing sulfuric acid, a continuous temperature inversion created a particularly malignant fog that caused many illnesses and 20 deaths in October 1948. Deaths occurred from the third day. In December 1952, a particularly vicious episode in London produced excessive deaths in infants, young children, and elderly persons with cardiorespiratory disease. Particle loads were high: 4.5 mg/m3 for smoke and 3.75 mg/m3 for sulfur dioxide. A 1953 episode in New York City underscored this twentieth-century plague. Repeated episodes in Tokyo, Yokohama, New Orleans, and Los Angeles led to investigation of the health effects of environmental air pollution in the 1960s and early 1970s. Air pollution swept across the Northern Hemisphere between November 27 and December 10, 1962. Excessive respiratory symptoms were observed in Washington, D.C., New York City, Cincinnati, and Philadelphia. London had 700 excess deaths due to high sulfur dioxide levels, and in Rotterdam, sickness, absenteeism, and increased hospital admissions occurred, with a fivefold increase in sulfur oxides. Hamburg, West Germany, reported increased sulfur dioxide and dust and increased heart disease mortality. In Osaka, 60 excess deaths were linked to high pollution levels. Currently, several cities stand out.
Mexico City, the world’s capital of air pollution, lies in an enclosed valley at an altitude of 7,000 feet, with over 25 million people and extreme levels of pollution. Athens, situated like Los Angeles with a mountain backdrop to the prevailing westerly winds, has experienced such serious pollution as to jeopardize some of its monuments of antiquity. Adverse health effects from air pollution have been observed in São Paulo and Cubatão, Brazil, which have many diesel vehicles, a heavy petrochemical industry, and fertilizer plants. Brazil is experimenting with ethyl alcohol as fuel for internal combustion engines. As more countries industrialize and automobilize, the lessons of Donora, London, and New York are being ignored in Bangkok and Beijing.
Sources The major source of air pollution is fossil fuel combustion.63 During this century, automotive gasoline-based transportation has become the predominant contributor, with a shift away from coal for space heating and industrial production. In fact, the internal combustion automobile engine is the major source of both particles and gases, including hydrocarbons. The interaction of atmospheric gases with hydrocarbons under sunlight (photocatalysis) produces ozone and nitrogen dioxide. Adding these to the direct products of combustion in air produces the irritating, acrid smog, a term coined for the mixture of smoke and fog. Thus, the horizon of many cities shows a burnished copper glow from nitrogen oxides. The smog in Los Angeles has remained practically static for 30 years; efforts to ameliorate the problem have simply kept pace with the additional population and its motor vehicle exhaust.64,65 Now Bakersfield, Fresno, and Riverside have more bad air days than does Los Angeles. In certain areas, such as the northeastern United States, industrial processing, coking, and steel production, as well as paper mills and oil refineries, contribute their selective and somewhat specific flavor to the problem.66 As in occupational exposures, the particles are of respirable size and adsorb gases. Fly ash, from the combustion of coal in power stations, from space heating, and in industry, consists of fused glass spheres with adsorbed metals and acidic gases.67 Adsorbed chemicals increase particle toxicity, and particle size determines the
zones injured in the lung.68 Hypertension, coronary artery occlusive disease, and myocardial infarction are all linked to nanoparticles of fly ash in the air.67–69 From inflammation in the lungs, nanoparticle effects70 spread to arteries and arterioles, causing hypertension. The role of inflammation due to particle burdens in coronary artery disease worldwide has been developed in the past decade.71 Even childhood cancer rates in developed countries have climbed since 1970 throughout Europe.72 Additional childhood cancers were correlated with Chernobyl. Air pollution is also linked to genetic alterations in infants in New York.73 Waste incineration has increased the burden in the air, and a greater population has nearly exhausted the available canyons and open spaces for garbage landfills around major cities. Although selective incineration under properly controlled conditions may help solve the solid waste problem, it otherwise increases the burden of particles and gases in the atmosphere.74 Moreover, nature may be responsible for freak episodes of air pollution. In 1986, release of carbon dioxide from Lake Nyos in West Africa killed 1,700 people as they slept, and already the lake may be partly recharged.75
Regulated Pollutants Since 1970 in the United States, carbon monoxide, hydrocarbons, sulfur dioxide, nitrogen oxides, and ozone have been regulated by the Environmental Protection Agency (EPA). In various urban areas, ozone and oxidant concentrations have been defined above which occupants are alerted to limit physical activity. Although respirable particles, particularly fly ash, hydrocarbons, and coated carbon particles from diesel engines, are the principal components of visible pollution, considerable attention has recently focused on acids and chlorofluorocarbons.76 Chlorofluorocarbons manufactured as refrigerants and used to power “convenience” aerosols liberate chlorine into the stratosphere, where it destroys ozone and reduces the shield against ultraviolet radiation.76,77 Carbon dioxide from increased combustion of fuel and from destruction of tropical rain forests, among other causes, has accumulated in the atmosphere, leading to global warming, the so-called greenhouse effect.77,78 This constitutes an entirely different but potentially very serious complication of environmental air pollution. Northern Europeans, particularly in Sweden, Norway, and the city of Cologne, West Germany, have been greatly concerned with the problem of acid rain, which is precipitation carrying large amounts of acidic gases combined with water.79 The acidity of these solutions has etched limestone buildings and acidified lakes and reservoirs, killing aquatic life and changing natural habitats. Ozone loss (which increases the risk of skin cancer),80 acid rain, and global warming are likely to produce future human health problems.
Modifiers The effects of particles and gases in the atmosphere are lessened by dilution from wind and rain and made worse by thermal inversion. Studies in Tokyo showed that heat acted together with ozone to produce respiratory symptoms in schoolchildren.28,29,69 Unpremeditated experiments show that stopping automotive transportation in a city such as New York for a day or two ameliorates problems from rising levels of air pollutants. Thus it appears obvious that clean transportation in urban areas would greatly reduce air pollution. Because combustion of diesel fuel and gasoline in automobiles and trucks is the major problem, designing cleaner engines and fuels is essential to improving air quality.81 Alternative fuels emphasizing methanol and ethanol, alone or mixed with gasoline, may be important and are included in the EPA plans for cleaner air in the United States in this decade. Prudence is essential. One additive, methyl tertiary-butyl ether (MTBE), increased respiratory illnesses in winter and asthma, and ruined some engines. The organic manganese additive MMT has also caused human toxicity. Almost 20 years of retrofit (regressive) engineering, the installation of catalytic converters, has been less satisfactory. Although converters have kept air pollution from increasing in Los Angeles, it is unclear whether this technology would help in Mexico City, Athens, or São Paulo.
Effects Toxicity is determined by particle size, adsorption, and respiratory deposition profiles.82 Respirable particles are those capable of depositing beyond the ciliated conducting airways of the human lung.1,2 Air pollution causes symptoms, impaired pulmonary function, respiratory diseases, and mortality. Acute symptoms, including eye irritation, nasal congestion, and chest tightness, appear to be due to the oxidant gases, aldehydes, and hydrocarbons largely in the gaseous phase, including peroxyacetyl nitrate.83 In the most sensitive 7–10% of the population, exposure to these gases decreases expiratory airflow, with wheezing and cough. Symptoms increase with exercise and are usually relieved within a few hours of removal from exposure. Large studies of European populations exposed to air pollution have shown that airways obstruction varied on days of greater or lesser levels of sulfur dioxide and particles.57,60 However, the question of reversibility is unanswered. Whether, or how quickly, airflow limitation is relieved by removal from exposure has not been tested. Meanwhile, it is justified to assume that the effects resemble those of cigarette smoking, with irreversible airways obstruction. The prevalence of chronic bronchitis in the exposed population is one of the most reliable indicators of exposure to the gases and particles of atmospheric air pollution.83 A number of classic studies—in Groningen, The Netherlands; Cracow, Poland; London; Tokyo; and Los Angeles—have shown that the prevalence of chronic bronchitis rises with the level and duration of air pollution.84 This is best studied in individuals who have never smoked and in children. The production of excess mucus, necessitating coughing for its removal, appears to be essentially a protective mechanism for the respiratory tract. Both clinical and experimental data show goblet cell metaplasia in small airways47 and goblet cell and mucous gland hyperplasia in large conducting airways.
This latter finding is the consistent pathological accompaniment of chronic bronchitis in autopsies from exposed populations.85 Deaths from the air pollution disasters, and from current levels of air pollution, struck infants who died of pneumonia and adults with cardiorespiratory disease, particularly chronic bronchitis and emphysema. Recent genetic effects on newborns73 and excess cancer rates in children extend this range.72 Those with precarious respiratory function are highly susceptible to additional insult and, by analogy, constitute, in the picturesque lumberjack term, “standing dead timber,” susceptible to the “strong wind” provided by a prolonged period of increased air pollution. Other results of severe air pollution include the retardation of children’s mental development by airborne lead,86 which constituted the principal reason for first reducing lead tetraethyl and similar additives and finally removing them from gasoline and motor fuel in the United States during the 1970s. The clear inverse relationship between lead and population intelligence is being verified again in Mexico City.87 Myocardial infarction from acute coronary artery occlusion is a twentieth-century epidemic disease76,88 that reached a first zenith in the 1960s, when, together with lung cancer deaths, it was linked to cigarette smoking.88 Unfortunately, in the United States, cigarette smoking has fallen but myocardial infarction rates continue to climb. Fine particles and nanoparticles from fossil fuel air pollution produce acute inflammation in arteries and arterioles that is the nidus for inflammatory cells, followed by plaque formation, cholesterol deposition, rupture, and occlusion.69–71,88

INDOOR AIR POLLUTION
Pulmonary Responses to Gases and Particles
701
Living Agents
Illness and excessive symptoms from indoor exposures to sick buildings have increased rapidly in a generation. Illness associated with exposure indoors has been observed repeatedly89,90 and reviewed at length.91,92 Sometimes, as in Pontiac, Michigan, investigations into bacterial and fungal contamination had fruitful results. For example, legionnaires' disease was discovered from an investigation of illnesses occurring at a convention of the American Legion at the Bellevue-Stratford Hotel in Philadelphia. Its etiology was a bacterium since named Legionella pneumophila.93 Episodes of the tight building syndrome became more frequent after the energy crisis of 1973, but many investigations failed to find a bacterial or fungal source, so the search turned to chemical contamination.
Sources of Chemicals
Indoor air receives gases, vapors, and some particles generated by the activities therein (Table 32-3). Their concentrations reflect the amounts generated or released in the volume, the number of air exchanges, and the purity of makeup air. Thus, human effluents, chiefly carbon dioxide and mercaptans, combine with combustion products of space heating and cooking, cigarette smoke, and contributions from air-conditioning systems. Added to these are outgassing of building construction, adhesive, and decorating materials to make a potent witches' brew. If the building has sufficient air exchanges, the concentration gradient may be reversed and the building atmosphere made hospitable. On the other hand, reducing the air exchanges to conserve heat or cold can build up noxious odors, vapors, and gases. Location of air intakes, types of filtration, and refrigeration and heating systems may all decrease the quality of indoor air and increase volatile organic chemicals (VOCs). New evidence shows that pesticides such as chlordane and organophosphates such as chlorpyrifos, sprayed indoors in xylene-water mixtures, can cause excessive neurobehavioral symptoms and measurable brain injury.3 A causal analysis of personal factors and indoor air quality in Sweden found that total hydrocarbon concentrations plus smoking, psychosocial factors, and static electricity were significantly correlated with eye, skin, and upper airway irritation, headache, and fatigue.94 Hyperactivity and sick leave due to airway diseases were important chronic effects in this study, but atopy, age, and sex were not correlated with symptoms. The studies were extended to 129 teaching and maintenance personnel of six primary schools in Uppsala,94 a Swedish city 50 km from Stockholm. All buildings had elevated carbon dioxide levels of more than 800 ppm, indicating a poor air supply. Mean indoor VOCs ranged from 70 µg/m3 to 180 µg/m3. The arithmetic mean was 130 µg/m3, the mean for aromatics was 39 µg/m3, and formaldehyde was below the detection limit of 10 µg/m3. Chronic sick building symptoms were not related to carbon dioxide levels but instead were correlated with VOCs, as well as with wall-to-wall carpeting, hyperactivity, and psychosocial factors.
Formaldehyde. Because many building materials are bonded with formaldehyde-phenol resins, formaldehyde, which is also a constituent of cigarette smoke and permanent-press fabrics, has been consistently found in indoor air,95 and most studies find it a major contaminant. Thus, prohibition of smoking indoors makes air safer as well as more pleasant. Cooking with natural gas generates nitrogen oxides that rival formaldehyde in their capacity to irritate.
Asbestos. During the late 1970s and early 1980s, concern for release of asbestos from construction materials into indoor air stimulated measurement of fiber levels.96 Generally, these have been well below occupational levels, usually between 0.01 and 0.0001 fibers/mL. However, during repair or renovation of home or school heating systems, with maximal conservation of air, levels may reach 0.2–1.0 fibers/mL. The experience with asbestos has raised concerns about bystander exposure to fibrous glass, which has been widely used in insulation.
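The relationship described above, in which indoor concentrations reflect the amounts generated, the number of air exchanges, and the room volume, is the familiar one-box ventilation mass balance. A minimal numerical sketch follows; the generation rate, room volume, and air-change rates are assumed illustration values, not figures from the text.

```python
# One-box mass balance for a well-mixed indoor space at steady state:
# concentration = generation rate / ventilation flow (volume x air changes/hour).
# All numbers below are assumed for illustration only.

def steady_state_conc(gen_rate_ug_per_h, volume_m3, air_changes_per_h):
    """Steady-state concentration (ug/m3) for a well-mixed single zone."""
    ventilation_flow_m3_per_h = volume_m3 * air_changes_per_h
    return gen_rate_ug_per_h / ventilation_flow_m3_per_h

# A hypothetical 250 m3 office emitting 30,000 ug/h of total VOCs:
normal = steady_state_conc(30_000, 250.0, 1.0)   # one air change per hour
tight  = steady_state_conc(30_000, 250.0, 0.2)   # ventilation cut to save energy

print(normal, tight)        # 120.0 600.0
assert tight == 5 * normal  # cutting air exchanges fivefold raises levels fivefold
```

The arithmetic makes the text's point directly: tightening a building to one-fifth the air exchanges raises contaminant levels fivefold, moving a VOC burden from the unremarkable range into one associated with complaints.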
Freon and Chlorofluorocarbons. The leakage of freon, a refrigerant used in air-conditioning systems, is particularly noxious because phosgene, a poison gas used in World War I, is generated at ignition points such as electric arcs, burning cigarettes, and open flames. This problem was first identified aboard nuclear submarines, which remained submerged for long periods. Phosgene was generated at the
Environmental Health

TABLE 32-3. SELECTED GUIDELINES FOR AIR CONTAMINANTS OF INDOOR ORIGIN

Contaminant*                  Concentration        Exposure Time    Comments
Acetone—O                     —                    —                —
Ammonia—O                     —                    —                —
Asbestos                      —                    —                Known human carcinogen; best available control technology
Benzene—O                     —                    —                Known human carcinogen; best available control technology
Carbon dioxide                4.5 g/m3             Continuous       —
Chlordane—O                   5 µg/m3              Continuous       —
Chlorine                      —                    —                —
Cresol—O                      —                    —                —
Dichloromethane—O             —                    —                —
Formaldehyde—O                120 µg/m3            Continuous       West German and Dutch guidelines
Hydrocarbons, aliphatic—O     —                    —                —
Hydrocarbons, aromatic—O      —                    —                —
Mercury                       —                    —                —
Ozone—O                       100 µg/m3            Continuous       —
Phenol—O                      —                    —                —
Radon                         0.01 working level   Annual average   Background 0.002–0.004 working level
Tetrachloroethylene—O         —                    —                —
Trichloroethane—O             —                    —                —
Turpentine—O                  —                    —                —
Vinyl chloride—O              —                    —                Known human carcinogen; best available control technology
Source: Reprinted with permission from American National Standards Institute/American Society of Heating, Refrigeration, Air-Conditioning Engineers. Standard 62-1981—Ventilation for Acceptable Indoor Air Quality. New York: The Society; 1981: 48, which states: “If the air is thought to contain any contaminant not listed (in various tables), guidance on acceptable exposure . . . should be obtained by reference to the standards of the Occupational Safety and Health Administration. For application to the general population the concentration of these contaminants should not exceed 1/10 of the limits that are used in industry . . . In some cases, this procedure may result in unreasonable limits. Expert consultation may then be required . . . These substances are ones for which indoor exposure standards are not yet available.” ∗Contaminants marked “O” have odors at concentrations sometimes found in indoor air. The tabulated concentrations do not necessarily result in odorless conditions.
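Table 32-3 lists guidelines as mass concentrations, while the text quotes carbon dioxide in ppm; the standard ideal-gas conversion at 25 °C and 1 atm bridges the two. This is a generic sketch: the 24.45 L/mol molar volume and the molar mass of CO2 are standard physical constants, not values taken from the table.

```python
# Convert between volume mixing ratio (ppm) and mass concentration (mg/m3)
# for a gas at 25 degrees C and 1 atm, where one mole occupies about 24.45 L.

MOLAR_VOLUME_L = 24.45  # L/mol at 25 C, 1 atm

def ppm_to_mg_m3(ppm, molar_mass_g_mol):
    return ppm * molar_mass_g_mol / MOLAR_VOLUME_L

def mg_m3_to_ppm(mg_m3, molar_mass_g_mol):
    return mg_m3 * MOLAR_VOLUME_L / molar_mass_g_mol

CO2 = 44.01  # g/mol

print(round(ppm_to_mg_m3(800, CO2)))   # the schools' 800 ppm is about 1440 mg/m3
print(round(mg_m3_to_ppm(4500, CO2)))  # the table's 4.5 g/m3 is about 2500 ppm
```

On these conversions, the 800 ppm level flagged in the Uppsala schools sits well below the tabulated 4.5 g/m3 continuous guideline, consistent with treating CO2 as a ventilation index rather than a toxicant.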
burning tips of cigarettes. Paint solvents contributed most to indoor pollution on board these vessels, so regulations prohibit painting less than 30 days before putting to sea.
Radon. Another concern indoors is radon and its daughter products, which may concentrate in indoor air due to building location (e.g., the granite deposits of the Reading Prong in Pennsylvania, New York, and New Jersey) or be released from concrete and other building materials.97 As with asbestos, the human health hazards of large exposures to radon products are well known from the miners of Schneeberg, Germany, and Jáchymov, Czechoslovakia. The long-term health impact of low doses of radon products from basements, particularly from building materials or the substrata of rock, is poorly understood.98 Thus decisions as to legislation and rule making have wavered in the breezes of indecision.
In summary, living organisms (bacteria and molds) and their products cause disease in buildings, spread by heating and air-conditioning systems. Also, solvents such as trichloroethylene, widespread contaminants of culinary water dispersed into the air by showering and other water use, cause chronic neurobehavioral impairment.99–101 Low-level exposures appear to cause cancer after latent periods of 20 or 25 years.72
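The working-level units used for radon above can be related to the more familiar radon gas concentration. By convention, 1 WL corresponds to the short-lived decay products in equilibrium with 100 pCi/L of radon; indoors the daughters are only partly in equilibrium, so an equilibrium factor (typically around 0.4 to 0.5) is applied. The concentrations below are assumed examples, not measurements from the text.

```python
# Radon gas concentration (pCi/L) to working level (WL) of its decay products.
# 1 WL = daughters in equilibrium with 100 pCi/L of radon; the equilibrium
# factor accounts for daughters lost to plate-out and ventilation indoors.

def working_level(radon_pci_per_l, equilibrium_factor=0.5):
    return radon_pci_per_l * equilibrium_factor / 100.0

print(working_level(2.0))  # 0.01 WL, the annual-average guideline in Table 32-3
print(working_level(0.8))  # 0.004 WL, the upper end of the background range cited
```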
Effects
Symptoms. Initially, temporary ill effects from indoor exposure begin minutes to hours after exposure and diminish or disappear in a few hours or overnight after leaving the building. They recur on reentry. Symptoms include fatigue, a feeling of exhaustion, headache, and sometimes anorexia, nausea, lack of concentration, and lightheadedness. As occupants talk about their problems, irritability and recent memory loss may be noted along with irritation of the eyes and throat. Although demonstrating physiological changes may be difficult because the changes are slight, interpretation may be aided by comparing exposed people's observed functions to those predicted.99 As the methods for proving these diagnoses have improved, concerns over possible mass hysteria, or "crowd syndrome," have eased. Recommended investigational methods for these problems all include follow-up measurements, so that subjects serve as their own controls.
Investigation. Use a standard inventory of symptoms and obtain information on as many occupants of a structure as possible. Affective disorder inventories such as the Profile of Mood States are useful. This information should be accompanied by mapping of affected and unaffected subjects' work areas and their locations in the building. Air sampling should recognize chemical groups such as aldehydes, solvents, mercaptans, oxidant gases, chlorofluorocarbons (freons), carbon monoxide, and pesticides (organochlorines and organophosphates), as well as molds and mycotoxins plus bacteria and endotoxins. The decision to use physiological tests for pulmonary or neurological function should be made after reviewing the exposures and the symptom inventories.
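The comparison of observed with predicted function mentioned above (ref. 99) is usually reported as percent of predicted from a population-based regression. The sketch below is hypothetical: the linear model, its coefficients, and the scores are invented placeholders, not the published prediction equations.

```python
# Percent-predicted comparison: a subject's observed test score against the
# score predicted for someone of the same age and education. The regression
# coefficients here are invented placeholders, not published values.

def predicted_score(age_yr, education_yr, intercept=55.0, b_age=-0.25, b_edu=1.2):
    # Hypothetical linear prediction: performance declines with age,
    # rises with years of education.
    return intercept + b_age * age_yr + b_edu * education_yr

def percent_predicted(observed, predicted):
    return 100.0 * observed / predicted

pred = predicted_score(age_yr=44, education_yr=12)   # 55 - 11 + 14.4 = 58.4
print(round(percent_predicted(46.7, pred), 1))       # 80.0 (% of predicted)
```

Expressing each person's result as a percentage of their own predicted value is what lets an investigation treat subjects as their own controls across follow-up measurements.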
Control and Prevention
Provision for adequate air exchange, with entrainment of fresh air not contaminated by motor vehicle exhaust or effluents from surrounding industrial activities, is most prudent for prevention. Removal of contaminants in air by hoods with back- or down-draft suction works for welding, painting, and similar operations. Internal filtration of air removes particles in cotton textile mills and metal machining operations but is rarely useful in indoor pollution, where total particle burdens are rarely more than 0.2 or 0.3 mg/m3 and nanoparticles and VOCs are incriminated. On high-altitude aircraft, activated charcoal absorbers for ozone are workable, as they are on submarines. However, the cost of these for buildings, compared with the cost of air exchanges, is prohibitively high. Freon, formaldehyde, solvents, and asbestos should be controlled to as low levels as possible in the indoor environment. These concerns compete with energy conservation. The problems of indoor air pollution in "sick buildings," especially neurobehavioral impairment associated with VOCs, pesticides, molds, and mycotoxins, demand attention as workers are forced to retire early for neurobehavioral disability.
REFERENCES
1. Kilburn KH. A hypothesis for pulmonary clearance and its implications. Am Rev Respir Dis. 1968;98:449–63.
2. Kilburn KH. Particles causing lung disease. Environ Health Perspect. 1984;55:97–109.
3. Kilburn KH. Chemical Brain Injury. New York: John Wiley; 1998.
4. Pepys J. Hypersensitivity disease of the lungs due to fungi and organic dusts. In: Kolos A, ed. Monograph in Allergy. Vol 4. New York: Karger; 1969, 1–147.
5. Morgan DC, Smyth JT, Lister RW, et al. Chest symptoms in farming communities with special reference to farmer's lung. Br J Ind Med. 1975;32:228–34.
6. Roberts RC, Wenzel FJ, Emanuel DA. Precipitating antibodies in a midwest dairy farming population toward the antigens associated with farmer's lung disease. J Allergy Clin Immunol. 1976;57:518–24.
7. Schlueter DP. Response of the lung to inhaled antigens. Am J Med. 1974;57:476–92.
8. Kilburn KH. Evidence for chronic neurobehavioral impairment from chlorpyrifos. Environ Epidemiol Toxicol. 1999;1:153–62.
9. Kilburn KH. Effects of chlorine and its cresylate byproducts on brain and lung performance. Arch Environ Health. 2005;58:746–55.
10. Hardy HL, Tabershaw IR. Delayed chemical pneumonitis in workers exposed to beryllium compounds. J Ind Hyg Toxicol. 1946;28:197–211.
11. Saltini C, Winestock K, Kirby M, Pinkston P, Crystal RG. Maintenance of alveolitis in patients with chronic beryllium disease by beryllium-specific helper T cells. N Engl J Med. 1989;320:1103–9.
12. Hardy HL. Beryllium poisoning—lessons in control of man-made disease. N Engl J Med. 1965;273:1188–99.
13. Passero MA, Tye RW, Kilburn KH, Lynn WS. Isolation and characterization of two glycoproteins from patients with alveolar proteinosis. Proc Natl Acad Sci USA. 1973;70:973–6.
14. Davidson JM, MacLeod WM. Pulmonary alveolar proteinosis. Br J Dis Chest. 1969;63:13–28.
15. Heppleston AG, Wright NA, Stewart JA. Experimental alveolar lipoproteinosis following the inhalation of silica. J Pathol. 1970;101:293–307.
16. Smetana HF, Tandon HG, Viswanataan R, Venkitasubrunarian TA, Chandrasekhary S, Randhawa HS. Experimental bagasse disease of the lung. Lab Invest. 1962;11:868–84.
17. Seal RME, Hapke EJ, Thomas GO, Meek JC, Hayes M. The pathology of the acute and chronic stages of farmer's lung. Thorax. 1968;23:469–89.
Pulmonary Responses to Gases and Particles
18. Mitchell J, Mann GB, Molyneux M, Lane RE. Pulmonary fibrosis in workers exposed to finely powdered aluminum. Br J Ind Med. 1961;18:10–20.
19. Coates EO, Watson JHL. Diffuse interstitial lung disease in tungsten carbide workers. Ann Intern Med. 1971;75:709–16.
20. Schepers GWH. The biological action of particulate cobalt metal. AMA Arch Ind Health. 1955;12:127–33.
21. Smith TJ, Petty TL, Reading JC, Lakshminarayan S. Pulmonary effects of chronic exposure to airborne cadmium. Am Rev Respir Dis. 1976;114:161–9.
22. Kilburn KH. Re-examination of longitudinal studies of workers. Arch Environ Health. 1989;44:132–3.
23. Meyer-Bisch C, Pham QT, Mur JM, et al. Respiratory hazards in hard-metal workers: a cross-sectional study. Br J Ind Med. 1989;46:302–9.
24. Merchant JA, Lumsden JC, Kilburn KH, et al. Dose response studies in cotton textile workers. J Occup Med. 1973;15:222–30.
25. Lichtenstein LM. Allergy and the immune system. Sci Am. 1993;269:116–24.
26. Knicker WT. Deciding the future for the practice of allergy and immunology. Ann Allergy. 1985;55:106–13.
27. Sly RM. Mortality from asthma 1979–1984. J Allergy Clin Immunol. 1988;82:705–17.
28. Kim JJ, Shannon MW, Best D, et al. Ambient air pollution: health hazard to children. Pediatrics. 2004;114:1699–1707.
29. Guo YL, Lin YC, Sung SL, et al. Climate, traffic-related air pollutants and asthma prevalence in middle school children in Taiwan. Environ Health Perspect. 1999;107:1001–1006.
30. Manfreda J, Cheang M, Warren CPW. Chronic respiratory disorders related to farming and exposure to grain dust in a rural adult community. Am J Ind Med. 1989;15:7–19.
31. Anto JM, Sunyer J, Rodriguez-Roisin R, Suarez-Cervera M, Vasquez L. Community outbreaks of asthma associated with inhalation of soybean dust. N Engl J Med. 1989;320:1097–102.
32. Musk AW, Peters JM, Wegman DH. Isocyanates and respiratory disease: current status. Am J Ind Med. 1988;13:331–49.
33. Kilburn KH, Warshaw RH. Pulmonary function impairment from years of arc welding. Am J Med. 1989;87:62–9.
34. Diem JE, Jones RN, Hendrich DJ, et al. Five-year longitudinal study of workers employed in a new toluene diisocyanate manufacturing plant. Am Rev Respir Dis. 1982;126:420–8.
35. Engelberg AL, Piacitelli GM, Petersen M, et al. Medical and industrial hygiene characterization of the cotton waste utilization industry. Am J Ind Med. 1985;7:93–108.
36. Pei-lian L, Christiani DC, Ting-ting Y, et al. The study of byssinosis in China: a comprehensive report. Am J Ind Med. 1987;12:743–53.
37. Occupational Health and Safety Standards. Cotton Dust. 29 CFR 1910 § 1910.1043, (K) (J-4).
38. Ciba Guest Symposium. Terminology, definitions and classification of chronic pulmonary emphysema and related conditions. Thorax. 1959;14:286.
39. Kilburn KH, Warshaw RH. Effects of individually motivating smoking cessation on male blue collar workers. Am J Public Health. 1990;80:1334–7.
40. Hodgkin JE, Abbey DE, Euler GL, Magie AR. COPD prevalence in non-smokers in high and low photochemical air pollution areas. Chest. 1984;86:830–8.
41. Miller A, Thornton JC, Warshaw RH, Bernstein J, Selikoff IJ, Teirstein AS. Mean and instantaneous expiratory flows, FVC and FEV1: prediction equations from a probability sample of Michigan, a large industrial state. Bull Eur Physiopathol Respir. 1986;22:589–97.
42. Schilling RSF, Hughes JPW, Dingwall-Fordyce I, Gilson JC. An epidemiological study of byssinosis among Lancashire cotton workers. Br J Ind Med. 1955;12:217–26.
43. McKerrow CB, McDermott M, Gilson JC, Schilling RSF. Respiratory function during the day in cotton workers: a study in byssinosis. Br J Ind Med. 1958;15:75–83.
44. Lowe CR, Khosla T. Chronic bronchitis in ex-coal miners working in the steel industry. Br J Ind Med. 1972;29:45–9.
45. Sluis-Cremer GK, Walters LG, Sichel HS. Ventilatory function in relation to mining experience and smoking in a random sample of miners and non-miners in a Witwatersrand town. Br J Ind Med. 1967;24:13–25.
46. Davies TAL. A Survey of Respiratory Disease in Foundrymen. London: HM Stationery Office; 1971.
47. Karpick RJ, Pratt PC, Asmundsson T, Kilburn KH. Pathological findings in respiratory failure. Ann Intern Med. 1970;72:189–97.
48. Boren HG, Lake S. Carbon as a carrier mechanism for irritant gases. Arch Environ Health. 1964;8:119–24.
49. Merchant JA, Lumsden JC, Kilburn KH, et al. An industrial study of the biological effects of cotton dust and cigarette smoke exposure. J Occup Med. 1973;15:212–21.
50. Kilburn KH. Byssinosis 1981. Am J Ind Med. 1981;2:81–8.
51. Higgins ITT, Cochrane AL, Gilson JC, Wood CH. Population studies of chronic respiratory disease. Br J Ind Med. 1959;16:255–68.
52. Oswald NC, Harold JT, Martin WJ. Clinical pattern of chronic bronchitis. Lancet. 1953;2:639–43.
53. Fletcher CM. Chronic bronchitis, its prevalence, nature and pathogenesis. Am Rev Respir Dis. 1959;80:483–94.
54. Fletcher CM, Peto R, Tinker C, Speizer FE. The Natural History of Chronic Bronchitis and Emphysema. Oxford: Oxford University Press; 1976.
55. Gregory J. A study of 340 cases of chronic bronchitis. Arch Environ Health. 1971;22:428–39.
56. Goldsmith JR. Effects of air pollution on human health. In: Stern AC, ed. Air Pollution. 2nd ed. New York: Academic Press; 1968, 547–615.
57. Van der Lende R, Kok T, Peset R, et al. Long-term exposure to air pollution and decline in VC and FEV1. Chest. 1981;80:23S–26S.
58. Kryzyanowski M, Jedrychowski W, Wysocki M. Factors associated with the change in ventilatory function and the development of chronic obstructive pulmonary disease in the 13-year follow-up of the Cracow study. Am Rev Respir Dis. 1986;134:1011–90.
59. Imai M, Yoshida K, Kitabatake M. Mortality from asthma and chronic bronchitis associated with changes in sulfur oxides air pollution. Arch Environ Health. 1986;41:29–35.
60. Arossa W, Pinaci SS, Bugiani M, et al. Changes in lung function of children after an air pollution decrease. Arch Environ Health. 1987;42:170–4.
61. Samet JM, Kutvirb DM, Waxweiler RJ, Kay CR. Uranium mining and lung cancer in Navajo men. N Engl J Med. 1984;310:1481–4.
62. Sunderman FW, Jr. Recent progress in nickel carcinogenesis. Toxicol Environ Chem. 1984;8:235–52.
63. Comar CL, Nelson N. Health effects of fossil fuel combustion products: report of a workshop. Environ Health Perspect. 1975;12:149–70.
64. Health and Welfare Effects Staff Report. Ambient Air Quality Standard for Ozone. Sacramento, CA: Research Division, Air Resources Board; 1987.
65. South Coast Air Quality Management District. Seasonal and Diurnal Variation in Air Quality in California's South Coast Air Basin. El Monte, CA; 1987.
66. Rahn KA, Lowenthal DH. Pollution aerosol in the Northeast: northeastern-midwestern contributions. Science. 1985;228:275–84.
67. Fisher GL, Chang DPY, Brummer M. Fly ash collected from electrostatic precipitators: microcrystalline structures and the mystery of the spheres. Science. 1976;192:553–5.
68. Nel A. Air pollution-related illness: effects of particles. Science. 2005;308:804–6.
69. Raloff J. Nano hazards: exposure to minute particles harms lungs, circulatory system. Science News. 2005;167:179–80.
70. Kearney P, Whelton M, Reynolds K, et al. Global burden of hypertension: analysis of worldwide data. Lancet. 2005;365:217–23.
71. Hansson GK. Inflammation, atherosclerosis and coronary heart disease. N Engl J Med. 2005;352:1686–95.
72. Steliarova-Foucher E, Stiller C, Kaatsch P, et al. Geographical patterns and time trends of cancer incidence and survival among children and adolescents in Europe since the 1970s (the ACCIS project): an epidemiological study. Lancet. 2004;364:2097–105.
73. Pollution is linked to fetal harm. New York Times. February 6, 2005.
74. Anderson MS. Assessing the effectiveness of Denmark's waste tax. Environment. 1998;40:10–5, 38–41.
75. Kerr RA. Nyos, the killer lake, may be coming back. Science. 1989;244:1541–2.
76. Hively W. How bleak is the outlook for ozone? Am Sci. 1989;77:219–24.
77. Rohter L. Antarctica, warming, looks even more vulnerable. New York Times, Science Times, Jan 23, 2005.
78. Houghton RA, Woodwell GM. Global climate change. Sci Am. 1989;260:36–44.
79. La Bastille A. Acid rain—how great a menace? National Geographic. 1981;160:652–80.
80. Jones RR. Ozone depletion and cancer risk. Lancet. 1987;2:443–6.
81. Dahl R. Heavy traffic ahead: car culture accelerates. Environ Health Perspect. 2005;113:A239–45.
82. Natusch FS, Wallace JR. Urban aerosol toxicity: the influence of particle size. Science. 1974;186:695–9.
83. World Health Organization Regional Office for Europe, Copenhagen. Air Quality Guidelines for Europe. Geneva: WHO Regional Publications, European Series, No. 23; 1987.
84. National Research Council. Epidemiology and Air Pollution. Washington, DC: National Academy Press; 1985.
85. Reid L. Measurement of the bronchial mucous gland layer: a diagnostic yardstick in chronic bronchitis. Thorax. 1960;15:132–41.
86. Needleman HL, Gunnoe C, Leviton A, et al. Deficits in psychologic and classroom performance of children with elevated dentine lead levels. N Engl J Med. 1979;300:689–95.
87. Grove N. Air—an atmosphere of uncertainty. National Geographic. 1987;171:502–37.
88. Kilburn KH. Stop inhaling smoke: prevent coronary heart disease. Arch Environ Health. 2003;58:68–73.
89. Arnow PM, Fink JN, Schlueter DP, et al. Early detection of hypersensitivity pneumonitis in office workers. Am J Med. 1978;64:236–42.
90. National Academy Press. Indoor Pollutants. Washington, DC: The Press; 1981.
91. Spengler JD, Sexton K. Indoor air pollution: a public health perspective. Science. 1983;221:9–17.
92. Morey PR. Microbial agents associated with building HVAC systems. Presented at The California Council—American Institute of Architects' National Symposium on Indoor Pollution: The Architect's Response. San Francisco; Nov. 9, 1984.
93. Norback D, Michel I, Widstroem J. Indoor air quality and personal factors related to sick building syndrome. Scand J Work Environ Health. 1990;16:121–8.
94. Norback D, Torgen M, Edling C. Volatile organic compounds, respirable dust, and personal factors related to the prevalence and incidence of sick building syndrome in primary schools. Br J Ind Med. 1990;47:733–41.
95. Konopinski VJ. Formaldehyde in office and commercial environments. Am Ind Hyg Assoc J. 1983;44:205–8.
96. Board on Toxicology and Environmental Health Hazards, Commission on Life Sciences, National Research Council. Asbestiform Fibers: Nonoccupational Health Risks. Washington, DC: National Academy Press; 1984.
97. Archer VE. Association of lung cancer mortality with Precambrian granite. Arch Environ Health. 1987;42:87–91.
98. Stebbings JH, Dignam JJ. Contamination of individuals by radon daughters: a preliminary study. Arch Environ Health. 1988;43:149–54.
99. Kilburn KH, Thornton JC, Hanscom BE. Population-based prediction equations for neurobehavioral tests. Arch Environ Health. 1998;53:257–63.
100. Kilburn KH. Is neurotoxicity associated with environmental trichloroethylene? Arch Environ Health. 2002;57:121–6.
101. Kilburn KH. Do duration, proximity, and a lawsuit affect chlorinated solvent toxicity? Arch Environ Health. 2002;57:113–20.
The Recent Mold Disease
1. CDC. Acute pulmonary hemorrhage/hemosiderosis among infants—Cleveland, January 1993–November 1994. Morbidity Mortality Weekly Report. 1994;43:881–83.
2. CDC. Pulmonary hemorrhage/hemosiderosis among infants—Cleveland, Ohio, 1993–1996. Morbidity Mortality Weekly Report. 1997;46:33–35.
3. Etzel RA, Montana E, Sorenson WG, et al. Acute pulmonary hemorrhage in infants associated with exposure to Stachybotrys atra and other fungi. Arch Pediatr Adolesc Med. 1998;152:757–62.
4. Dearborn DG, Dahms BB, Allan TM, et al. Clinical profile of 30 infants with acute pulmonary hemorrhage in Cleveland. Pediatrics. 2002;110:627–37.
5. Kilburn KH. Indoor mold exposure associated with neurobehavioral and pulmonary impairment: a preliminary report. Arch Environ Health. 2003;58:390–98.
6. Yike I, Miller MJ, Tomasheefski J, et al. Infant rat model of Stachybotrys chartarum in the lungs of rats. Mycopathologia. 2001;154:139–52.
7. Straus DC. Sick Building Syndrome. New York: Elsevier Academic Press; 2004.
33
Pesticides

Marion Moses
Pesticides are among the few toxic substances deliberately added to our environment. They are, by definition, toxic and biocidal, since their purpose is to kill or harm living things. Pesticides are ubiquitous global contaminants found in air, rain, snow, soil, surface and ground water, fog, even the Arctic ice pack. All living creatures tested throughout the world are contaminated with pesticides—birds, fish, wildlife, domestic animals, livestock, and human beings, including newborn babies. The term pesticide is generic, and different classes are named for the pests they control: insecticides (e.g., ants, aphids, beetles, bugs, caterpillars, cockroaches, mosquitoes, termites), herbicides (e.g., weeds, grasses, algae, woody plants), fungicides (e.g., mildew, molds, rot, plant diseases), acaricides (mites, ticks), rodenticides (rats, gophers, vertebrates), piscicides (fish), avicides (birds), and nematocides (microscopic soil worms).
HISTORY
Use of sulfur and arsenic as pesticides dates back to ancient times. Botanicals such as nicotine (tobacco extract) date from the sixteenth century, and pyrethrum (from a type of chrysanthemum) from the nineteenth century. In the United States, Paris green (copper-aceto-arsenite) was first used in 1867 to control the Colorado potato beetle. In 1939 there were 32 pesticide products registered in the United States, primarily inorganic compounds containing arsenic, copper, lead, mercury, nicotine, pyrethrums, and sulfur. Widespread use of petrochemical-based synthetic pesticides began in the 1940s. Swiss chemist Paul Mueller discovered the insecticidal properties of dichlorodiphenyltrichloroethane (DDT) in 1939. Dusting of Allied troops during World War II to kill body lice averted a typhus epidemic, making it the first war in history in which more soldiers died of wounds than of disease. DDT was marketed for commercial use in the United States in 1945. German scientists experimenting with nerve gas during World War II synthesized the first organophosphate insecticide, parathion, marketed in 1943. The phenoxy herbicides 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) were introduced in the 1940s, carbaryl and other N-methyl carbamate insecticides in the 1950s, the synthetic pyrethroid insecticides in the 1960s, and genetically modified products (plant-incorporated protectants, PIPs) in the 1990s. The first serious challenge to synthetic pesticides was the 1962 publication of Silent Spring by wildlife biologist Rachel Carson.1 She documented environmental persistence, bioaccumulation in human and animal tissues, severe toxic effects on birds, fish, and other nontarget species, and potentially devastating ecological, wildlife, and human health effects of DDT and related chlorinated hydrocarbon insecticides.
In 1970, authority for administration and enforcement of the federal pesticide law was transferred from the U.S. Department of Agriculture to the newly created Environmental Protection Agency (EPA).
PRODUCTION AND USE
In 2001 there were 18 major basic producers of pesticides in the United States, 100 smaller producers, 150–200 major formulators, 2000 smaller formulators, 250–300 major distributors, 16,900 smaller distributors and establishments, and 40,000 commercial pest control companies. In 2002, average production of conventional pesticides (herbicides, insecticides, fungicides, rodenticides, and fumigants) in the United States was 1.6 billion pounds. Exports averaged 400 million pounds, and imports 100 million pounds. Total sales were $9.3 billion, including exports of $1.6 billion, and imports of $1.0 billion. The United States is the world’s largest pesticide user, accounting for 24% of the estimated 5 billion pounds used worldwide. About 5 billion pounds of other chemicals regulated as pesticides were used in 2001—approximately 2.6 billion pounds of chlorine compounds, 797 million pounds of wood preservatives, 363 million pounds of disinfectants, and 314 million pounds for other uses.2 California, which accounts for 25% of all U.S. pesticide use, mandates reporting of all agricultural and commercial pesticide use, including structural fumigation, pest control, and turf applications. It does not require reporting of home and garden use and most industrial and institutional uses. Total use reported in 2004 was 175 million pounds.3 EPA broadly classifies pesticides as general or restricted use. Some pesticides may be general for some uses, and restricted for others. Restricted-use pesticides must be applied by a state-certified applicator or by someone under the supervision of a certified applicator. The states vary enormously in the quality of their education and training programs for pesticide applicators. Usually one person on each farm or in each company is certified, most often a supervisor or manager. 
In actual practice, most workers applying pesticides are not certified and work “under the supervision of a certified applicator.” Many are minimally or poorly trained, and turnover is high.
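The production and trade figures above can be cross-checked against the stated 24% U.S. share of world use: apparent domestic use (production minus exports plus imports) should land near that share. The sketch below simply reruns the arithmetic on the quoted numbers.

```python
# Cross-check of the quoted figures (billions of pounds of conventional
# pesticides, circa 2001-2002).

world_use = 5.0     # estimated worldwide use
us_share = 0.24     # stated U.S. share of world use
production = 1.6    # U.S. production
exports, imports = 0.4, 0.1

use_from_share = world_use * us_share                   # 1.2 billion pounds
apparent_domestic_use = production - exports + imports  # 1.3 billion pounds

# The two independent estimates agree to within about 0.1 billion pounds.
assert abs(use_from_share - apparent_domestic_use) < 0.15
```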
Agricultural Use
By the 1950s, synthetic chemical pesticides were major pest control agents in agriculture in the United States. In 2001, agriculture accounted for 76% of conventional pesticide use (herbicides, insecticides, and fungicides), with major use in corn, soybeans, and cotton. Of the estimated average 722 million pounds used, almost 60% were herbicides, 21% insecticides, 7% fungicides, and 14% all other types. The top 15 pesticides used in 2001 were glyphosate, atrazine, metam sodium,
acetochlor, 2,4-D, malathion, methyl bromide, dichloropropene, metolachlor-s, metolachlor, pendimethalin, trifluralin, chlorothalonil, copper hydroxide, and chlorpyrifos (Lorsban). In California, almost 90% of reported use is in agriculture. Sulfur, favored by both conventional and organic farmers, accounted for 30% of use (53.2 million pounds) in 2004. Pesticides other than sulfur for which 1 million or more pounds were used were petroleum oils (unclassified), metam sodium, methyl bromide, 1,3-dichloropropene, mineral oil, glyphosate, chloropicrin, copper sulfate, sulfuryl fluoride, copper hydroxide, petroleum distillates, sodium chlorate, chlorpyrifos, calcium hydroxide, propanil, diuron, trifluralin, propargite, and maneb. Eight crops accounted for 58% of use: grapes, almonds, process tomatoes, strawberries, carrots, oranges, cotton, and rice. Agricultural pesticide use in Canada and Western Europe is similar to that of the United States. Patterns in Latin America, the Asia-Pacific region, and Africa are similar to those of the 1950s with insecticides accounting for 60–80% of use and herbicides 10–15%.
States. About 20% of all termite jobs are in Texas alone, where estimates are that consumers spend more than $1 billion annually for services. About 30 million homes were treated with chlordane for subterranean termites before it was banned in 1988. Chlorpyrifos (Dursban), which largely replaced chlordane, is itself under restriction for subterranean termite control, being replaced by other chemicals including imidacloprid (Premise), fipronil (Termidor), and chlorfenapyr (Phantom). Baiting systems are also increasing in use including sulfluramid (Terminate, Firstline), hexaflumuron (Sentricon), hydramethylnon (Subterfuge), and diflubenzuron (Advance, Exterra). The fumigant sulfuryl fluoride (Vikane) has replaced methyl bromide for tenting structures for control of dry-wood termites.
Over-the-Counter Products
Major nonagricultural uses of pesticides include wood preservation; lawn, landscape, and turf maintenance; rights-of-way (highways, railroads, power lines); and structural, industrial, public health, and home and garden use.
About 71 million pounds of pesticides were sold directly to the consumer as aerosols, foggers, pest strips, baits, pet products, and lawn and garden chemicals in 1993.2 Home use pesticides include the herbicides 2,4-D, glyphosate (Roundup), and simazine; home use insecticides include carbaryl (Sevin), dichlorvos (DDVP), methoxychlor, malathion, pyrethrins, pyrethroids, and propoxur (Baygon), and the fungicides, maneb, captan, benomyl, and chlorothalonil (Daconil). The organophoshates diazinon and chlorpyrifos (Dursban) were the most widely used insecticides until banned for indoor and outdoor home use and direct sale to consumers in 2001.
Wood Preservatives
Industrial Use
About 797,000 million pounds of wood preservatives are used annually in the United States. The largest single use is creosote on railroad ties. Pentachlorophenol and copper-chromium-arsenate are used for preservation of utility poles, dock pilings, and lumber for construction purposes.
Fungicides are widely used as mildewcides; preservatives and antifoulants in paints, glues, pastes, and metalworking fluids; and in fabrics for tents, tarpaulins, sails, tennis nets, and exercise mats. Carpets are routinely treated with insecticides for protection against insects and moths. Pesticides are used in many consumer products including cosmetics, shampoos, soaps, household disinfectants, cardboard and other food packaging materials, and in many paper products. The pulp and paper products industry uses large amounts of slimicides. Water for industrial purposes and in cooling towers is treated with herbicides and algicides to prevent growth of weeds, algae, fungi, and bacteria. Canals, ditches, reservoirs, sewer lines, and other water channels are similarly treated. The EPA estimates that 111 million pounds of active-ingredient conventional pesticides, about 13% of the total, were used in the industrial/commercial government sector market in 2001. The most commonly used in 2001 were 2,4-D, glyphosate, copper sulfate, penidmethalin, chlorothalonil, chlorpyrifos, diuron, MSMA, triclopyr, and malathion.
Nonagricultural Use
Home and Garden The EPA estimates that 102 million pounds of active ingredient pesticides were used in the home and garden sector in 2001, about 11% of conventional pesticide use. The most common were 2,4-D, glyphosate (Roundup), pendimethalin, diazinon, MCPP, carbaryl (Sevin), malathion, DCPA, and benefin.2 All residential use of diazinon was banned in 2004.
Lawn, Landscape, Turf, Golf Courses If home lawns were a single crop, they would be the largest in the United States, covering some 50,000 square miles (the size of Pennsylvania). The use of lawn and turf pesticides is widespread, and about $30 billion is spent annually.4 About 40% of lawns are treated, with 32 million pounds of pesticide applied by householders themselves and an additional 38 million pounds by commercial firms. Herbicides account for 70% of use, insecticides 32%, and fungicides 8%. There are about 14,000 golf courses in the United States, and many are intensively chemically managed, especially those used year-round in southern states. Herbicides and fungicides are the most widely used.
Maintenance of Right-of-Way Herbicides are extensively used for maintenance of rights-of-way along highways, power transmission lines, and railroads. County and state agencies can be major users. The California Transportation Agency (CalTrans) is the largest single pesticide user in the state, treating 25,000 miles of highway with herbicides annually.
Structural Use A major nonagricultural use of pesticides is pest control in homes, apartments, offices, retail stores, commercial buildings, sports arenas, and other structures. Common practice is to contract for regular spraying for cockroaches, ants, and other indoor pests. Subterranean and drywood termites are major structural pests. Estimates are that one million termite treatments are performed in 500,000 U.S. households annually.
Public Health Use The major public health use of pesticides in the United States is the treatment of drinking water and sewage. In 2001, the EPA estimated that 2.6 billion pounds of chlorine/hypochlorites were used for water treatment: 1.57 billion pounds for disinfection of potable water and wastewater, and 1 billion pounds for disinfection of recreational water. There has been an increase in mosquito control spraying in the United States in response to West Nile virus. Common practice is to spray ultra-low-volume (ULV) formulations, using less than three ounces per acre, of a synthetic pyrethroid insecticide (usually permethrin or d-phenothrin) or the organophosphates malathion or naled. Ground applications are also used. The Centers for Disease Control and Prevention has issued a fact sheet for the public regarding larvicides and adulticides, and recommendations for repellents.5
Malaria Control Worldwide, the biggest public health use of pesticides is in malaria control. DDT is still in use in some countries, but ULV spraying of synthetic pyrethroids is more widely used. Pyrethroid-impregnated bed nets, shown to reduce childhood mortality and morbidity, are being used as a preventive measure in many countries.6 Cost, distribution, and the need for net retreatment every 6–12 months are barriers to full implementation in endemic areas. The U.S. Centers for Disease Control and Prevention is testing several nets that
theoretically retain lethal concentrations of insecticide for the life of the net, 3–5 years.7
Aircraft Use Cargo holds, passenger cabins, and other areas of aircraft are sprayed with a wide variety of insecticides. A controversial policy is the spraying of occupied cabins with aerosol insecticides, usually synthetic pyrethroids. U.S. airlines have abandoned this practice within U.S. borders, but continue it on international flights to destinations that require such spraying by law, including Australia and some Caribbean countries.
Active and Inert Ingredients Pesticide products are mixtures of active and inert ingredients. The Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) defines an active ingredient as one that prevents, destroys, repels, or mitigates a pest, or is a plant regulator, defoliant, desiccant, or nitrogen stabilizer. It must be identified by name on the label with its percentage by weight. In 2000, there were about 900 active ingredient pesticides registered by the EPA. The total number of registered products is not known because the EPA allows multiple registrations of the same active ingredient in brand-name products under different "house labels." Estimates are that 100,000 to 400,000 registered products are on the market, many of them similar products formulated by different companies. Table 33-5 lists the major classes and types of chemicals used as pesticides in the United States. Inert ingredients are all other ingredients in the product not active as a pesticide, including solvents, surfactants, carriers, thickeners, wetting/spreading/dispersing agents, propellants, microencapsulating agents, and emulsifiers. "Inert" does not mean nontoxic: inert ingredients can be chemically and biologically active, and some are classified as both active and inert. Isopropyl alcohol, for example, may be an active ingredient in antimicrobial products and an inert ingredient used as a solvent in others. Typical solvents include xylene, deodorized kerosene, 1,1,1-trichloroethane, methylene chloride, and mineral spirits. Over-the-counter aerosol pesticide products may contain carcinogenic solvents such as trichloroethylene and methylene chloride as "inert" ingredients.
There are 1200 inert ingredients registered with the EPA, categorized into four lists based on potential adverse effects on human health. List 1 has nine ingredients of toxicological concern, which are the only inerts required to be named on the label: isophorone; adipic acid, bis(2-ethylhexyl) ester; phenol; ethylene glycol monoethyl ether; phthalic acid, bis(2-ethylhexyl) ester; hydroquinone; and nonylphenol. List 2 contains 55 potentially toxic inerts with a high priority for testing. The largest number of ingredients is on List 3, about 1500 of unknown toxicity. List 4A contains 160 inerts generally regarded as safe. List 4B contains 310 ingredients with sufficient data to conclude that current use will not adversely affect public health or the environment. When an inert reaches List 4B, no further regulatory action is anticipated. Except for List 1, pesticide registrants can withhold the names of inert ingredients and list only percentages, because of industry claims of confidentiality based on the FIFRA trade secret provisions. Environmental groups filed a lawsuit in federal court against the EPA in 1994 under the Freedom of Information Act, demanding public disclosure of inert ingredients. The court ruled in 1996 that pesticide companies must disclose inert ingredients in six pesticide products: Aatrex 80W (atrazine), Weedone LV4 (2,4-D), Roundup® (glyphosate), Velpar (hexazinone), Garlon 3A (triclopyr), and Tordon 101 (picloram and 2,4-D).
Pesticide Formulations There are four basic types of pesticide formulations: (a) foggers, bombs, and aerosols; (b) liquids and sprays; (c) powders, dusts, and granules; and (d) baits and traps.
Contaminants. Many technical pesticide products contain pesticide metabolites and process contaminants. Pesticides manufactured from chlorinated phenols, such as 2,4-D and pentachlorophenol, contain dibenzodioxins and dibenzofurans. Hexachlorobenzene contaminates the fungicides chlorothalonil, Dacthal, pentachloronitrobenzene, and pentachlorophenol. DDT is a contaminant of the miticide dicofol (Kelthane). Many pesticides are contaminated with nitrosamines, including trifluralin, glyphosate, and carbaryl. The ethylenebisdithiocarbamate fungicides contain the metabolite ethylene thiourea (ETU),
TABLE 33-1. PESTICIDE RESIDUES IN BLOOD AND URINE: U.S. ADULTS 1999–2003 (GEOMETRIC MEANS)

Chemical | Units | All Ages | Age 20–59 | Male | Female | Mex-Amer | Black | White
2,5-DCP (2,5-dichlorophenol, PDB∗ metabolite) | µg/g creat | 5.38 | 5.36 | 5.25 | 5.5 | 12.9 | 10.7 | 3.6
p,p′-DDE (metabolite of DDT) | ng/g lipid | 260 | 297 | 249 | 270 | 674 | 295 | 217
DEP (organophosphate metabolite) | µg/g creat | 0.924 | 0.88 | 0.86 | 1 | 1.09 | 1.07 | 0.931
DMTP (organophosphate metabolite) | µg/g creat | 1.64 | 1.47 | 1.61 | 1.66 | 1.6 | 1.45 | 1.68
beta-HCH (hexachlorocyclohexane, in lindane) | ng/g lipid | 15.0 | 16.9 | NC† | 17.2 | 25.9 | NC† | NC†
1-Naphthol (metabolite of carbaryl, Sevin)‡ | µg/g creat | 1.52 | 1.64 | 1.33 | 1.73 | 1.34 | 1.22 | 1.6
2-Naphthol (naphthalene metabolite) | µg/g creat | 0.421 | 0.47 | 0.39 | 0.46 | 0.5 | 0.54 | NC†
OPP (ortho-phenylphenol fungicide/disinfectant) | µg/g creat | 0.441 | 0.45 | 0.38 | 0.51 | 0.49 | 0.38 | 0.438
2,4,6-TCP (trichlorophenol)§ | µg/g creat | 2.54 | 2.32 | 2.24 | 2.88 | 2.43 | 2.13 | 2.59
3,5,6-TCPy (metabolite of chlorpyrifos/Dursban)¶ | µg/g creat | 1.58 | 1.41 | 1.48 | 1.69 | 1.46 | 1.47 | 1.66
TNA (trans-nonachlor, metabolite of chlordane) | ng/g lipid | 18.3 | 20.8 | 17.7 | 18.8 | NC† | 20.3 | 19.1

∗Paradichlorobenzene (mothballs). †Not calculated because too many samples were below the limit of detection. ‡Also found in tobacco smoke and certain polyaromatic hydrocarbons. §Metabolite of several pesticides including lindane and hexachlorobenzene. ¶3,5,6-trichloro-2-pyridinol, major metabolite of chlorpyrifos. Source: Third National Report on Human Exposure to Environmental Chemicals. CDC. July 2005. http://www.cdc.gov/exposurereport/.
and carbon disulfide is a biodegradation product. Lengthy storage can also increase the toxicity of contaminants and metabolites in pesticide formulations, including sulfotepp in diazinon, and isomalathion and O,O,S-trimethylphosphorothioate in malathion.
Intermediates. Pesticide intermediates can be highly toxic. Methyl isocyanate (MIC), the chemical that poisoned and killed thousands of people in Bhopal, India, in 1984, is an intermediate in the manufacture of the N-methyl carbamate insecticides aldicarb (Temik) and carbaryl (Sevin).
EXPOSURE TO PESTICIDES
Occupational Exposure to Pesticides The EPA estimates there are 351,600 pest control operators/exterminators certified to apply pesticides commercially, and 965,692 certified private applicators, most of whom are individual farmers. Pesticide law allows noncertified applicators to work "under the supervision of" a certified applicator. Thus, there are thousands more noncertified applicators working with commercial pest control firms and on farms; there is no estimate of their actual numbers, or of their qualifications and training. Those who handle concentrated formulations (mixers, loaders, and applicators) have the highest exposure. Batch processing used in pesticide manufacturing requires little direct contact, and exposures are usually lower. Farm workers who cultivate and harvest crops are exposed to dislodgeable pesticide residues on leaf surfaces, on the crop itself, in the soil, or in duff (decaying plant and organic material that collects under vines and trees). Field workers are exposed to overspray from crop-dusting aircraft and drift from airblast and other ground-rig sprayers. Farm worker families, especially migrant workers, who often live in camps, are surrounded by fields that are sprayed.
Children’s Exposures Children’s exposures to pesticides are magnified by their greater likelihood of direct exposure from skin contact with contaminated floors, carpets, lawns, and other surfaces due to their crawling, toddling, and exploring activities. They can swallow significant amounts from ingesting contaminated house dust, and mouthing and chewing pesticide-contaminated objects. Their higher respiratory rate, larger skin surface for their size, and less mature immune and detoxifying systems put them at greater risk than adults at comparable exposure levels. Farm worker children are at high risk of exposure because they may work in the fields or be taken to the fields by their parents and exposed to pesticide drift from nearby fields and from take-home contamination by their parents.8,9
Absorption of Pesticides Pesticides are readily absorbed through the skin, the respiratory tract (inhalation), and the gastrointestinal tract (ingestion). The eyes can be
a significant route of exposure in splashes and spills. The rate of absorption of pesticides into the body is product specific and depends on the properties of the active ingredient and the inert ingredients in a particular formulation. The skin, not the respiratory system as is commonly believed, is the chief route of absorption. Fumigants, which are gases (a property that accounts in part for their greater toxicity), are a notable exception. Inhalation can be an important route of exposure in the home from the use of aerosols, foggers, bug bombs, and moth control products, but the dermal route is still the most important, especially in children. A new method to estimate residue transfer of pesticides by dermal contact and indirect ingestion uses riboflavin (vitamin B2), a highly fluorescent, water-soluble, nontoxic tracer compound, as a surrogate for pesticide residues. Coupled with video imaging and computer quantification, the system measures transfer of pesticide residues to the skin. It is especially useful in estimating surface-to-skin and skin-to-mouth residue transfer in children from carpets, upholstery, and other surfaces inside the home.10,11
Biomonitoring A biomonitoring program of the U.S. general population, begun in 1999–2000 by the Centers for Disease Control and Prevention (CDC), uses blood and urine samples from participants in the National Health and Nutrition Examination Survey (NHANES). The CDC's first report of the findings, The National Report on Human Exposure to Environmental Chemicals, was issued in 2001. The only pesticide data in that report were urinary dialkylphosphate metabolites (DAPs). The second report, with the same title, was issued in 2003. It included much more data on pesticides, including selected organophosphates, organochlorines, N-methyl carbamates, herbicides, pest repellents, and disinfectants. Table 33-1 summarizes selected data in adults from the 2003 report. Racial differences are most striking for DDE residues: levels in Mexican Americans were 3.1 times those in whites and 2.3 times those in blacks, and levels in blacks were 36% greater than in whites but less than half those in Mexican Americans. DDT is still widely used in Mexico for malaria control, but efforts to ban its use are in progress.12
TOXICOLOGY
The U.S. EPA ranks pesticides into four categories based on acute toxicity (Table 33-2). Most of the rest of the world uses the World Health Organization (WHO) classification (Table 33-3).
Organophosphates Organophosphates are responsible for the majority of occupational poisonings and deaths from pesticides in the United States and throughout the world. There are many reports of severe poisoning and fatalities from accidental and suicidal ingestion of these compounds. Even less toxic organophosphates can be deadly. Malathion
TABLE 33-2. ENVIRONMENTAL PROTECTION AGENCY PESTICIDE TOXICITY CATEGORIES BY MEDIAN LETHAL DOSE (LD50) IN MG/KG BODY WEIGHT IN THE RAT∗

Toxicity Class and Signal Word Required on Label | Oral (mg/kg) | Dermal (mg/kg) | Inhalation (mg/L) | Eye Effects | Skin Effects
I Highly toxic, DANGER | <50 | <200 | <0.2 | Corneal opacity (irreversible) | Corrosive
II Moderately toxic, WARNING | 50–500 | 200–2000 | 0.2–2 | Corneal opacity (reversible within 7 days) | Severe irritation
III Minimally toxic, CAUTION | 500–5000 | 2000–20,000 | 2–20 | Irritation | Moderate irritation
IV Least toxic, CAUTION | >5000 | >20,000 | >20 | No irritation | Mild irritation

∗The median lethal dose (LD50) is the amount that will kill 50% of the exposed animals. The lower the median lethal dose, the more hazardous the chemical.
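The category cutoffs above amount to a simple threshold lookup. A minimal sketch (an illustrative helper of our own, not an official EPA tool; only the oral-route breakpoints are shown, and values falling exactly on a boundary are assigned to the less toxic class):

```python
# Sketch: assign an EPA toxicity category (I-IV) and its signal word
# from a rat oral LD50, using the Table 33-2 breakpoints.
# Illustrative only, not an official EPA implementation.

ORAL_BREAKPOINTS = [
    (50, "I", "DANGER"),       # <50 mg/kg: highly toxic
    (500, "II", "WARNING"),    # 50-500 mg/kg: moderately toxic
    (5000, "III", "CAUTION"),  # 500-5000 mg/kg: minimally toxic
]

def epa_category(oral_ld50_mg_per_kg):
    """Return (category, signal word) for a rat oral LD50 in mg/kg."""
    for upper_limit, category, signal_word in ORAL_BREAKPOINTS:
        if oral_ld50_mg_per_kg < upper_limit:
            return category, signal_word
    return "IV", "CAUTION"     # >5000 mg/kg: least toxic
```

An LD50 of 1 mg/kg (roughly aldicarb's) falls in Category I, while 1200 mg/kg falls in Category III. Note that Categories III and IV carry the same signal word, CAUTION, so the label's signal word alone does not distinguish them.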
TABLE 33-3. WORLD HEALTH ORGANIZATION–RECOMMENDED CLASSIFICATION OF PESTICIDES BY HAZARD BY MEDIAN LETHAL DOSE (LD50) IN MG/KG BODY WEIGHT IN THE RAT∗

Hazard Class | Oral, Solids | Oral, Liquids | Dermal, Solids | Dermal, Liquids
IA Extremely hazardous | <5 | <20 | <10 | <40
IB Highly hazardous | 5–50 | 20–200 | 10–100 | 40–400
II Moderately hazardous | 50–500 | 200–2000 | 100–1000 | 400–4000
III Slightly hazardous | >500 | >2000 | >1000 | >4000

∗The median lethal dose (LD50) is the amount that will kill 50% of the exposed animals. The lower the median lethal dose, the more hazardous the chemical.
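The WHO scheme differs from the EPA's mainly in distinguishing solid from liquid formulations, with stricter numeric cutoffs for liquids. A minimal sketch of the lookup (again our own illustrative helper, not WHO code; the function and argument names are assumptions):

```python
# Sketch: WHO hazard class from a rat LD50 (mg/kg), distinguishing
# exposure route and physical state per the Table 33-3 breakpoints.
# Illustrative helper, not an official WHO implementation.

# Upper LD50 limits for classes IA, IB, and II; anything at or above
# the last limit falls into class III ("slightly hazardous").
WHO_LIMITS = {
    ("oral", "solid"): (5, 50, 500),
    ("oral", "liquid"): (20, 200, 2000),
    ("dermal", "solid"): (10, 100, 1000),
    ("dermal", "liquid"): (40, 400, 4000),
}
WHO_CLASSES = ("IA Extremely hazardous", "IB Highly hazardous",
               "II Moderately hazardous")

def who_class(ld50, route, state):
    """Classify a rat LD50 by route ('oral'/'dermal') and state ('solid'/'liquid')."""
    for upper_limit, label in zip(WHO_LIMITS[(route, state)], WHO_CLASSES):
        if ld50 < upper_limit:
            return label
    return "III Slightly hazardous"
```

Because liquids are held to stricter cutoffs, the same compound can change class with formulation: an oral LD50 of 15 mg/kg classifies as IB for a solid but IA for a liquid.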
contaminated with a toxic isomerization product, isomalathion, caused five deaths and 2800 poisonings among malaria sprayers in Pakistan in 1975. In the state of Washington there were 26 reports of severe poisoning in workers applying Phosdrin (mevinphos) in 19 different apple orchards in 1993. In 1989 in Florida, 185 farm workers were severely poisoned when sent to work in a cauliflower field 12 hours after it had been sprayed with Phosdrin, when the legal reentry interval was 4 days. Washington State banned the pesticide in 1993; its federal registration was cancelled in 1994, and the EPA ban took effect in 1995. Severe poisonings have occurred from wearing laundered uniforms previously contaminated with parathion, which was banned in 2002.
Providing emergency care to patients who attempt suicide by ingesting organophosphate insecticides can itself result in poisoning. Two emergency medical technicians were poisoned after giving mouth-to-mouth resuscitation to an attempted suicide victim who ultimately died. Ten hospital emergency room workers and paramedics became symptomatic after contact with a patient who had ingested an organophosphate insecticide, requiring temporary closure of the emergency department.13
Signs and symptoms of organophosphate poisoning occur soon after exposure, from minutes to hours. Mild poisoning results in fatigue, headache, dizziness, nausea, vomiting, chest tightness, excess sweating, salivation, abdominal pain, and cramping. In moderate poisoning the victim usually cannot walk and has generalized weakness, difficulty speaking, muscular fasciculations, and miosis. Central nervous system effects also occur, including restlessness, anxiety, tremulousness, insomnia, excessive dreaming, nightmares, slurring of speech, confusion, and difficulty concentrating. Coma and convulsions accompany severe poisoning, which can result in death without proper treatment.14 The organophosphates are readily metabolized and excreted, and with early and proper treatment most poisoned workers will recover.
In accidental or suicidal ingestion, recovery depends on the amount ingested, the interval before emergency resuscitation, and the appropriateness of treatment. Although recovery usually appears to be complete, long-term neurological effects can occur (vide infra).
Organophosphates are similar to nerve gases and exert their toxic action by inhibiting the enzyme acetylcholinesterase at synaptic sites in muscles, glands, autonomic ganglia, and the brain, resulting in a buildup of the neurotransmitter acetylcholine. Enzymes that hydrolyze choline esters in humans are found in red blood cells (RBCs) ("true" cholinesterase) and in plasma ("pseudocholinesterase," derived from the liver). Decreased RBC and plasma cholinesterase activity is an indicator of excess absorption of organophosphates, and testing activity levels is an excellent tool for monitoring worker exposure and diagnosing poisoning. A 10–40% reduction in cholinesterase activity usually results in latent poisoning without clinical manifestations. A 50–60% reduction usually results in mild poisoning. A reduction of 70–80% results in moderate poisoning, and 90% or more indicates severe poisoning that can be fatal without treatment. The rate of reduction in cholinesterase activity is an important determinant of poisoning: a rapid reduction over a few minutes or hours can produce marked signs and symptoms, whereas a gradual drop of the same magnitude over days or weeks may produce minimal or no symptoms. In worker-monitoring programs, a reduction in RBC enzyme activity of 25% or more, or in plasma cholinesterase of 40% or more from a pre-exposure or "baseline" level, is evidence of
excess absorption. Workers should be removed from further exposure until activity recovers to at least 80% of baseline. Atropine, which blocks the effects of acetylcholine, is the antidote for organophosphate pesticide poisoning. Pralidoxime (2-PAM), if given within 24–48 hours of exposure, can reactivate cholinesterase and restore enzyme function. After this time, "aging" of the enzyme-pesticide complex occurs, making it refractory to reactivation.15,16 Genetic factors, especially paraoxonase (PON1) activity levels, can affect metabolism and detoxification of organophosphates and may account for differing susceptibility to poisoning,17,18 especially in children.19 Alkylphosphate metabolites of organophosphates are excreted in the urine and can be useful as a measure of recent absorption in exposure assessment and biomonitoring. Levels peak within 24 hours of exposure and usually are not detectable 48 hours or more after exposure ceases.
Action on Chlorpyrifos. First marketed in 1975, chlorpyrifos became one of the most widely used organophosphates in the United States. Registered under the trade names Dursban and Lorsban, it is found in hundreds of "house label" products. In April 1995, the EPA fined the basic manufacturer, DowElanco, $732,000 for failing to report to the agency adverse health effects known to the company over the previous decade. An EPA review of chlorpyrifos for reregistration resulted in an agreement with the registrant to withdraw flea control, total-release fogger, paint additive, and pet care (shampoos, dips, sprays) products.
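The worker-monitoring thresholds described above (removal at 25% or greater RBC depression or 40% or greater plasma depression from baseline, return at 80% of baseline) can be expressed as a short calculation. A minimal sketch, with hypothetical function names and illustrative activity values; this is not a validated clinical tool:

```python
# Sketch: cholinesterase monitoring arithmetic from the text.
# Removal criterion: RBC activity depressed >= 25% from the worker's
# pre-exposure baseline, or plasma activity depressed >= 40%.
# Return-to-work criterion: activity back to >= 80% of baseline.
# Hypothetical helpers with illustrative units; not a clinical tool.

def percent_depression(baseline, current):
    """Percent fall in enzyme activity relative to the individual baseline."""
    return 100.0 * (baseline - current) / baseline

def should_remove(rbc_baseline, rbc_now, plasma_baseline, plasma_now):
    """True if the worker should be removed from further exposure."""
    return (percent_depression(rbc_baseline, rbc_now) >= 25.0
            or percent_depression(plasma_baseline, plasma_now) >= 40.0)

def may_return(baseline, current):
    """True once activity has recovered to at least 80% of baseline."""
    return current >= 0.8 * baseline
```

For example, a worker whose RBC activity falls from a baseline of 12.0 to 8.4 (a 30% depression) meets the removal criterion even if plasma activity has fallen only 20%. Because the "latent poisoning" band in the text (10–40% depression) overlaps the 25% removal threshold, monitoring relies on each worker's individual baseline rather than on symptoms.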
N-Methyl-Carbamate Insecticides The N-methyl-carbamate insecticides are similar to the organophosphates in their acute toxic effects and mechanism of action; however, their inhibition of acetylcholinesterase is readily reversible. Signs and symptoms appear earlier, and workers are more likely to remove themselves from excess exposure. Except for an aldicarb (Temik)-related tractor accident death of a farm worker reported in 1984, no deaths from occupational exposure have been reported in the United States, though there are reports from other countries.20 Atropine is also the antidote for N-methyl-carbamate poisoning, but 2-PAM is not recommended unless there is concomitant exposure to an organophosphate. Testing RBC and plasma cholinesterase activity is less useful in poisoning with the carbamates because carbamylation of the enzyme, unlike phosphorylation, is readily reversible, and reactivation can occur in vitro during transport of the specimen to the laboratory. Poisoning of grape girdlers in California after prolonged exposure to methomyl (Lannate)-contaminated soil was unusual in that significant depression of cholinesterase activity did occur.
Chlorinated Hydrocarbon Insecticides Most organochlorine insecticides, including DDT, aldrin, endrin, dieldrin, chlordane, heptachlor, and toxaphene (Table 33-4), are no longer used in the United States. They are central nervous system stimulants, and in toxic doses cause anxiety, tremors, hyperexcitability, confusion, agitation, generalized seizures, and coma that can result in death. Those in current use (dienochlor, endosulfan [Thiodan], and methoxychlor) are readily metabolized and excreted and do not persist in the environment.
712
Environmental Health
TABLE 33-4. PESTICIDES BANNED OR SEVERELY RESTRICTED IN THE UNITED STATES BY YEAR AND ACTION TAKEN

Alar (daminozide) | 1990 | Ban food use, nursery/plants allowed
Aldrin/Dieldrin | 1974 | Ban all use except termites
Aldrin/Dieldrin | 1989 | Termite use cancelled
Azinphosmethyl | 2002 | Cancellation 23 crop uses
Azinphosmethyl | 2005 | Cancellation 9 crop uses; time-limited registration 10 remaining crop uses∗
Bendiocarb (Ficam) | 2001 | All use voluntary cancellation
Benomyl (Benlate) | 2002 | All use voluntary cancellation
BHC | 1978 | All use cancelled
Cadmium | 1987 | Home lawn, golf fairway use cancelled
Cadmium | 1990 | Golf tees/greens use cancelled
Calcium arsenate | 1989 | Non-wood uses cancelled
Captafol | 1987 | All use cancelled
Captan | 1999 | Residential lawn use cancelled; sod farms, golf courses allowed
CCA† | 2003 | Residential use cancelled
Chlordimeform | 1989 | All use cancelled
Chlordane | 1978 | All use except termites cancelled
Chlordane | 1988 | Termite use cancelled
Chlorpyrifos | 2001 | Ban OTC sales directly to the public
Clopyralid | 2002 | Ban lawn/turf use, Washington and California
Cyanazine | 1999 | All use cancelled
Cyhexatin | 1987 | All use cancelled
DBCP | 1979 | Ban all use except pineapple in Hawaii
DBCP | 1989 | Pineapple use cancelled
DDT | 1972 | Ban all use except health emergencies
Diazinon | 1986 | Ban golf courses, sod farms
Diazinon | 2001 | Ban OTC sales directly to the public
Diazinon | 2002 | Ban all indoor use
Diazinon | 2004 | Ban all outdoor nonagricultural use
Dicofol | 1998 | Residential use cancelled
Dinoseb | 1986 | Ban all use after emergency suspension
Endrin | 1985 | All use cancelled
EPN | 1983 | Mosquito larvicide use cancelled
EPN | 1987 | All use cancelled
Ethylene dibromide | 1984 | Grain fumigant use cancelled
Ethylene dibromide | 1987 | Papaya fumigation use cancelled
Ethylene dibromide | 1989 | Citrus export fumigation use cancelled
Fenamiphos | 2002 | Voluntary phase-out all uses
Folpet | 1999 | Ban except paints/coatings/sealants
Fonofos | 1999 | All use cancelled
Heptachlor | 1983 | Most seed treatment use cancelled
Heptachlor | 1988 | Most termite use cancelled
Heptachlor | 1994 | Most remaining uses cancelled
Heptachlor | 1995 | Technical product export cancelled
Heptachlor | 1999 | Ban fire ant use, domestic production
Hexachlorobenzene | 1984 | All use cancelled
Kepone | 1977 | All use cancelled
Lead arsenate | 1987 | All use cancelled
Lindane (γ-HCH) | 1986 | Ban indoor smoke fumigation use
Lindane (γ-HCH) | 1990 | Many uses cancelled; seed treatment, lice/scabies use allowed
Mancozeb | 1992 | Home garden, turf, fruit use cancelled
Mirex | 1977 | Cancelled except pineapple in Hawaii
Mirex | 1987 | All use cancelled
Monocrotophos | 1988 | All use cancelled
Nitrofen (TOK) | 1983 | All use cancelled
Parathion | 1991 | All use cancelled except nine field crops
Parathion | 2002 | All use cancelled
Phosdrin | 1994 | All use cancelled
2,4,5-T, Silvex | 1979 | Emergency suspension
2,4,5-T, Silvex | 1985 | All use cancelled
Sodium arsenite | 1989 | Ant bait use cancelled; grapes, seed okra, cotton use allowed
Sodium arsenite | 1993 | All use cancelled
Toxaphene | 1982 | Cancelled except in Puerto Rico, Virgin Islands
Toxaphene | 1990 | All use cancelled
Vinclozolin | 2005 | Lettuce use cancelled; phaseout all other uses
Zineb | 1990 | All use cancelled

∗Almonds, apples, blueberries, Brussels sprouts, cherries, crab apples, nursery stock, parsley, pears, pistachios, and walnuts.
†Chromated copper arsenate, a wood preservative.
Lindane. The only persistent chlorinated hydrocarbon insecticide still on the market in the United States is lindane (γ-hexachlorocyclohexane, γ-HCH). It is available by prescription only, for lice and scabies (formerly available over-the-counter as Kwell®, now discontinued). Generalized seizures have occurred in children and adults after dermal application for lice and scabies. Prescriptions for lindane decreased 67% from 1998 to 2003, when the Food and Drug Administration (FDA) required dispensing it in 1–2 ounce single-use packets. The CDC recommends that lindane not be used for persons weighing less than 110 pounds (50 kg), that treatment not be repeated, and that it not be used unless other treatments have failed.21
Kepone. The most serious outbreak of chlorinated hydrocarbon poisoning in the United States occurred at a plant manufacturing chlordecone (Kepone) in Hopewell, Virginia, in 1974. The plant was closed in 1975, and the registration was cancelled in 1976, but a consumption advisory is still in effect for Kepone-contaminated fish in the James River estuary.
Hexachlorobenzene. More than 3000 cases of acquired porphyria cutanea tarda occurred in Turkey in the late 1950s from consumption of hexachlorobenzene-treated wheat seed illegally sold for food use. Turkey banned hexachlorobenzene in 1959, and all use
was cancelled in the United States in 1984. Hexachlorobenzene is a contaminant of Dacthal, chlorothalonil, pentachloronitrobenzene, and pentachlorophenol.
Pyrethrums/Pyrethrins/Synthetic Pyrethroid Insecticides Pyrethrums are the crushed petals of a type of chrysanthemum that contains insecticidal chemicals called pyrethrins. Pyrethrin formulations contain the active pyrethrins, solvent-extracted from the flowers, and are more acutely toxic than pyrethrums. Pyrethroids are synthetic analogs of natural pyrethrins. The synergist piperonyl butoxide is added to most pyrethrin and pyrethroid formulations to prolong their residual action. Pyrethrins and pyrethroids slow the closing of the sodium activation gate in nerve cells. Pyrethroids with the alpha-cyano moiety (cyfluthrin, lambda-cyhalothrin, cyphenothrin, cypermethrin, esfenvalerate, fenvalerate, fenpropathrin, fluvalinate, tralomethrin) are more toxic than those without this functional group (permethrin, d-phenothrin, resmethrin). The pyrethrins and pyrethroids are readily metabolized and excreted and do not bioaccumulate in humans or in the environment. They are less acutely toxic than most organophosphate insecticides; most are in toxicity categories III and IV (Table 33-5). Many household aerosols and pet care products contain pyrethrins and synthetic
TABLE 33-5. SELECTED PESTICIDES IN CURRENT USE IN THE UNITED STATES BY CATEGORY OF USE AND CHEMICAL CLASS

Insecticides
  Chitin inhibitors: Diflubenzuron, hexaflumuron, noviflumuron
  Chlorinated hydrocarbons: Dicofol (Kelthane), dienochlor (Pentac), endosulfan (Thiodan), lindane, methoxychlor
  N-methyl carbamates: Aldicarb (Temik), carbaryl (Sevin), carbofuran (Furadan), methomyl (Lannate), propoxur (Baygon)
  Organophosphates: Acephate (Orthene), azinphos-methyl (Guthion), chlorpyrifos (Dursban/Lorsban), diazinon, dichlorvos (DDVP), dimethoate, malathion, methidathion, methyl parathion, tetrachlorvinphos
  Pyrethrins
  Pyrethroids (synthetic): Cyfluthrin (Tempo), cypermethrin (Demon), deltamethrin, fenvalerate, lambda-cyhalothrin (Karate), permethrin (Dragnet), phenothrin, resmethrin
  Pyrethrums
  Sulfite esters: Propargite (Omite)

Rodenticides
  Anticoagulants: Brodifacoum, bromadiolone, chloro/diphacinone, warfarin
  Phosphine gas releasers: Aluminum/zinc phosphide

Herbicides
  Acetanilides: Alachlor (Lasso)
  Amides: Propachlor, propanil
  Arsenicals: Cacodylic acid
  Bipyridyls: Paraquat, mepiquat, diquat
  Carbamates/thiocarbamates: Cycloate, EPTC, molinate, pebulate, thiobencarb
  Dinitroanilines: Trifluralin (Treflan), pendimethalin (Prowl)
  Diphenyl ethers: Oxyfluorfen (Goal)
  Organophosphates: DEF, merphos
  Phenoxyaliphatic acids: 2,4-D, dicamba, MCPA
  Phosphonates: Fosamine (Krenite), glyphosate (Roundup)
  Phthalates: Dacthal, endothall
  Substituted phenols: Dinocap, dinitrophenol, pentachlorophenol
  Substituted ureas: Diuron, linuron, monuron
  Sulfanilimides: Oryzalin (Surflan)
  Sulfonylureas: Chlorsulfuron (Glean), sulfometuron (Oust)
  Triazines: Atrazine, cyanazine, simazine
  Triazoles: Amitrole

Fungicides
  Carboximides: Captan, iprodione (Rovral), vinclozolin (Ronilan)
  Dithio/thiocarbamates: Maneb, mancozeb, nabam, ferbam, thiram
  Heterocyclic nitrogens (imidazole derivatives): Imazalil
  Substituted benzenes: Chlorothalonil (Daconil), chloroneb, hexachlorobenzene, pentachloronitrobenzene
  Triazines: Anilazine (Dyrene)
  Triazoles: Triadimefon (Bayleton)

Fumigants
  Halogenated hydrocarbons: 1,3-Dichloropropene (Telone-II), methyl bromide, naphthalene, para-dichlorobenzene
  Oxides/aldehydes: Ethylene oxide, formaldehyde
  Sulfur compounds: Sulfur dioxide, sulfuryl fluoride (Vikane)
  Thiocarbamates: Metam-sodium

Wood Preservatives
  Arsenic, copper, creosote, boric acid/polyborates, copper/zinc naphthenate, pentachlorophenol
pyrethroids and piperonyl butoxide, and they are widely used by exterminators for treatment of homes and buildings. Characteristic symptoms of exposure to synthetic pyrethroids are transient facial and skin paresthesias and dysesthesias such as burning, itching, and tingling sensations, which disappear soon after exposure ceases and can be exacerbated by sweating and washing with warm water. Signs and symptoms of mild to moderate poisoning include dizziness, headache, nausea, anorexia, and fatigue. Severe poisoning results in coarse muscular fasciculations in the large muscles of the extremities and generalized seizures. Recovery is usually rapid after exposure ceases. There are no specific antidotes to poisoning, and treatment is supportive.22 Pyrethrins cross-react with ragweed and other pollens, and members of this class of chemicals, including the synthetic pyrethroids, are potential allergens and skin sensitizers.
Fatalities. A fatality in a child was associated with sudden irreversible bronchospasm after use of a pyrethrin shampoo.23 A 43-year-old woman with a history of asthma and ragweed allergy experienced an anaphylactic reaction after using a pyrethrin lice shampoo.24 A 36-year-old woman with a history of asthma developed severe shortness of breath 5 minutes after she began washing her dog with a 0.05% pyrethrin shampoo, and was in cardiopulmonary arrest within 5 minutes.
Herbicides

Glyphosate (Roundup, Rodeo), the most widely used herbicide in the United States, is much less acutely toxic than paraquat, the herbicide it primarily replaced, and is sold over-the-counter. Occupational illnesses involving glyphosate products, mostly irritant and skin reactions, are among the most frequently reported in agricultural and landscape maintenance workers in California.25 Ocular effects are reported in factory workers.26 A toxic inert ingredient in some formulations, polyoxyethylenamine (POEA), is linked to fatalities from accidental or suicidal ingestion.27

Action on Glyphosate. New York State charged Monsanto, the registrant of glyphosate, with deceptive and misleading advertising, challenging unsubstantiated safety and health claims for Roundup and other products. In 1996, the company agreed to discontinue the use of terms such as “biodegradable” and “environmentally friendly.”

Atrazine, the second most widely used herbicide, is also not acutely toxic, is sold over-the-counter, and is used for lawn and turf management in some states. It is persistent in soil, is a widespread groundwater contaminant, and causes mammary cancer and other tumors in rodents. Atrazine is under review by the EPA as an endocrine disruptor. The widely used chlorophenoxy herbicides, including 2,4-D, dicamba, and MCPA, are also not acutely toxic but can be fatal if ingested.28

Paraquat and Other Bipyridyls

Unlike most herbicides, which have relatively low acute toxicity, paraquat (Gramoxone) is an epithelial toxin that can cause severe injury to the eyes, skin, nose, and throat, resulting in ulceration, epistaxis, and severe dystrophy or complete loss of the fingernails. Acute poisoning, from suicidal or accidental ingestion, can result in hepatic and renal failure; the patient may recover only to die of asphyxiation due to a relentlessly progressive pulmonary fibrosis. Death usually occurs 1–3 weeks after ingestion, depending on the dose and treatment. Dermal exposure to paraquat has also caused fatal pulmonary fibrosis. Deaths have been reported in farmers and landscape maintenance workers and from application to the skin for treatment of lice and scabies. There is no antidote to paraquat poisoning, and most patients who absorb or ingest an amount sufficient to cause severe organ toxicity do not survive.29 Its toxic action is most likely due to lipid peroxidation from reaction with molecular oxygen to form a superoxide ion. Diquat, a related compound used mainly for aquatic weed control, is much less toxic.

Fungicides

Most of the widely used fungicides are in toxicity category IV, the least acutely toxic. Many cause contact dermatitis and can be potent allergens and sensitizers (vide infra). Many are also known or suspected carcinogens, including benomyl, captan, chlorothalonil, maneb, and mancozeb.

Fumigants

Fumigants are among the most toxic pesticide products. As gases, they are rapidly absorbed into the lungs and distributed throughout the body. Most are alkylating agents, mutagens, and carcinogens, and are neurotoxic and hepatotoxic. They are responsible for many deaths, especially methyl bromide. The central nervous system, lungs, liver, and kidneys can be severely affected. Pulmonary edema can occur and is a frequent cause of death.

Methyl Bromide. Severe neurotoxic and behavioral effects, including toxic psychosis, can result from poisoning with methyl bromide. Mental and behavioral changes can occur soon after acute poisoning or from low-level chronic exposure. There are many reports of permanent sequelae after recovery from acute methyl bromide poisoning. Anxiety, difficulties in concentration, memory deficits, changes in personality, and other behavioral effects occur and can be progressive and irreversible. Methyl bromide is also a potent ozone depleter. The United States is a signatory to the Montreal Protocol, an international agreement to phase out all use of the fumigant by 2001. The phaseout was extended to 2005 and then waived for agricultural uses in the United States.

Phenolic and Cresolic Pesticides

These highly toxic pesticides include pentachlorophenol, dinoseb, DNOC, and dinocap. They are uncouplers of oxidative phosphorylation, and poisoning produces anorexia, flushing, severe thirst, weakness, profuse diaphoresis, and hyperthermia, which can progress to coma and death. Aspirin is contraindicated in treatment. Many occupational deaths have occurred from these compounds, as well as deaths of infants in a newborn nursery in France where sodium pentachlorophenate was mistakenly added to a wash solution for diapers.

Insect Repellents

N,N-diethyl-m-toluamide (deet; OFF!, Skintastic), developed by the military for troops in the field, was first marketed in 1954 and is estimated to be used by 30 million people annually. It is applied directly to the skin, and use has been increasing, especially for children, because of concerns regarding ticks that carry Lyme disease and mosquitoes that carry West Nile virus. Deet is neurotoxic, and signs and symptoms of mild poisoning include headache, restlessness, irritability, crying spells in children, and other changes in behavior. Severe poisoning results in toxic encephalopathy, with slurring of speech, tremors, generalized seizures, and coma. Generalized seizures have occurred in children even when deet was used according to label directions, and fatalities have occurred in children and adults within hours of repeated dermal exposure. Anaphylactic shock, though rare, has also been reported, resulting in a requirement for the signal word “Warning” on the label.

Surveillance Data

The number of pesticide-related illnesses and deaths in the United States is unknown. Annual data are available from the Poison Control Center Toxic Exposure Surveillance System (TESS) and from the California Pesticide Illness Surveillance Program (PISP), but there is no systematic national collection. Reports are also available from the
National Center for Health Statistics, and from the Sentinel Event Notification System for Occupational Risks (SENSOR), a collaboration between the National Institute for Occupational Safety and Health (NIOSH) and seven states. In 2003, TESS reported 99,522 pesticide-related incidents, 4.2% of total reports. About 51% of the incidents were in children less than 6 years old. There were 41 fatalities, including 16 suicides.30 In the California PISP, 1232 reports were investigated in 2003, of which 803 were suspected or confirmed. Agricultural pesticide use accounted for 405 of the cases and nonagricultural pesticides for 395, of which 69% were occupational. Eight patients were admitted to hospitals and 70 lost time from work.31 SENSOR reported 1009 cases of acute pesticide-related illness from 1998 to 1999, a rate of 1.17 incidents per 100,000 full-time equivalents (FTEs). The rate in agriculture, 18.2 per 100,000 FTEs, was 34 times higher than the nonagricultural rate of 0.53. Insecticides were responsible for 49% of all illnesses, which were of low severity in 69.7% of cases, moderate in 29.6%, and severe in 0.4% (four cases), with three fatalities.32
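The SENSOR comparison above is simple arithmetic on incidence rates; a minimal sketch using the figures quoted in the text:

```python
def rate_ratio(rate_exposed, rate_unexposed):
    """Ratio of two incidence rates expressed per 100,000 FTEs."""
    return rate_exposed / rate_unexposed

agriculture = 18.2      # incidents per 100,000 FTEs (SENSOR, 1998-1999)
nonagriculture = 0.53   # incidents per 100,000 FTEs

# Rounds to the "34 times higher" figure reported in the text.
print(round(rate_ratio(agriculture, nonagriculture)))  # 34
```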
Reentry Poisoning

Dermal absorption of dislodgeable residues on the crops they harvest has caused systemic poisoning in thousands of farm workers. California is the only state that enforces mandatory reporting of pesticide illness, so most information on reentry poisonings is from that state. The earliest poisoning incidents were in crops with high foliar contact, such as grapes, peaches, and citrus, that had been sprayed with Toxicity Category I organophosphates such as parathion, phosdrin, and azinphos-methyl (Guthion). One of the largest outbreaks of pesticide-related dermatitis in California occurred in 1986 among 198 farm workers picking oranges sprayed with propargite (Omite-CR). About 52% of the workers sustained severe chemical burns. No violations of reentry intervals or application rates were found. A new inert ingredient that prolonged residue degradation had been added to the formulation, and subsequent field degradation studies showed that the proper reentry interval should have been 42 days, not 7. Omite-CR was banned for any use in California but is still used in other states. The establishment of waiting periods before workers may be sent into treated fields, called reentry intervals or restricted entry intervals (REIs), decreased reentry poisonings in California to 117 in 1993, compared with an average of 168 from 1989 through 1992. Prior to 1989, the average number of field residue cases per year had been 279.
Drift Episodes

Drift is the movement of pesticides away from the site of application. Approximately 85–90% of pesticides applied as broadcast sprays drift off target and can affect birds, bees, fish, and other species, as well as human beings. Significant concentrations can drift a mile or more; lower concentrations can drift many miles, depending on droplet size, wind conditions, ambient temperature, and humidity. Pesticide exposures to bystanders and community residents from drift are increasing with the building of residential housing adjacent to agricultural fields and golf courses. Off-gassing and drift from fields where methyl bromide, chloropicrin, and metam sodium are used to fumigate the soil have resulted in evacuation of residents in surrounding communities. Problems are also increasing in urban areas with growing chemical treatment of lawns, sports areas, parks, and recreation areas. The state of California reported 256 drift-related exposures in 2003, involving 33 episodes; one episode, resulting from improper soil injection of chloropicrin, was responsible for 166 of the cases. In 2002 there were 478 exposures involving 39 episodes. A law enacted in California in 2005, prompted by rural agricultural drift incidents, requires responsible parties to pay for emergency medical treatment of injuries to bystanders and offers incentives to provide immediate medical aid before cases are litigated.
Pesticides
715
Developing Countries

The majority of pesticide poisonings and deaths occur in low-income and developing countries, which account for 25% of pesticide use but 50% of acute poisonings and 75% of deaths. WHO estimates that the total number of acute unintentional poisonings in the world is between 3 and 5 million cases annually, with 3 million severe poisonings and 20,000 deaths, and that intentional poisonings number 2 million, with 200,000 resulting in death by suicide. Suicide is reported to be responsible for most deaths, but this may reflect biased reporting, minimization of occupational hazards, and faulty assumptions that inappropriately attribute blame to victims. A South African study found that hospital and health authorities greatly underestimate occupational cases and overestimate suicides. The assumption that lack of awareness is responsible for most poisonings was not borne out when reporting was supervised and intensified: reports increased almost tenfold during an intervention period, and the risks for women were underestimated during routine notifications.33 In most countries there is easy access to pesticides, poor regulation and enforcement, and inadequate or unavailable medical facilities; even government distribution programs can contribute to poisonings. A survey of six Central American countries found 98% underreporting of pesticide poisoning and estimated 400,000 poisonings per year (1.9% of the population), of which 76% were work related.34,35,36 Suicide is reported as the fifth leading cause of death in China, and 58% of suicides are by pesticide ingestion. Phasing out WHO Class I and II pesticides (USEPA Toxicity Category I) would greatly reduce acute poisoning and death where pesticides are readily available and where laws and policies are insufficient to protect workers and the public.37,38
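The 98% underreporting figure implies a large scaling factor between reported and true case counts; a minimal illustrative sketch (the reported-case figure of 8,000 is hypothetical, chosen only to be consistent with the 400,000 estimate above):

```python
def estimated_total(reported_cases, underreporting_fraction):
    """Scale reported cases up for underreporting.

    underreporting_fraction: share of true cases that never reach
    official statistics (0.98 in the Central American survey cited above).
    """
    return reported_cases / (1.0 - underreporting_fraction)

# With 98% underreporting, each reported case stands for ~50 true cases.
# 8,000 reported cases is a hypothetical figure for illustration.
print(round(estimated_total(8_000, 0.98)))  # 400000
```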
HEALTH EFFECTS
Asthma

Exposure to pesticides can trigger or exacerbate asthma, induce bronchospasm, or increase bronchial hyperreactivity. Pesticides that inhibit cholinesterase can provoke bronchospasm through increased cholinergic activity. At high doses, certain pesticides can act as airway irritants. Low levels that are insufficient to cause acute poisoning can trigger severe reactions, even in those without a previous diagnosis of asthma. Pesticides linked to asthma, wheezing, and hyperreactive airway disease include the antimicrobials chlorine and chloramine; the fumigants metam sodium and ethylene oxide; the fungicides captafol, chlorothalonil, maneb/mancozeb, and other ethylenebisdithiocarbamates; the herbicides alachlor, atrazine, EPTC, and paraquat; and the insecticides carbofuran, chlorpyrifos, dichlorvos, malathion, pyrethrins, pyrethrum, and synthetic pyrethroids. The Children’s Health Study, a population-based study in southern California, found that children exposed to pesticides were more likely to have asthma diagnosed by the age of five.39 Wheezing in Iowa farm children was associated with herbicide exposure, but most studies show farmers’ children to be at lower risk of allergic disease, including hay fever.40 A study in New Zealand found no adverse effects on asthmatic children from community spraying of the biological insecticide Bacillus thuringiensis (Bt). In a pesticide fire, respiratory symptoms in the affected surrounding community were highest in preschool children and asthmatics.
Work Related. SENSOR found that 3.4% of 534 cases of work-related asthma in Michigan and New Jersey, reported from 1995 to 1998, were pesticide related. From 1993 to 1995, 2.6% of 1101 cases of occupational asthma reported in California, Massachusetts, Michigan, and New Jersey were pesticide related.41 Dyspnea and cough were found in over 78% of workers on apricot farms where large amounts of sulfur were used.42 Outdoor
workers exposed to pesticides had an increase in asthma mortality.43 Decreased risk was found in animal farmers in Denmark, Germany, Switzerland, and Spain who had a lower prevalence of wheezing, shortness of breath, and asthma than the general population.44 No increase in asthma emergency room visits to public hospitals was found in New York City during urban spraying of pyrethroid pesticides for West Nile Virus control.45 Some household aerosol sprays trigger symptoms and impair lung function in asthmatics,46 and use of mosquito coils inside the home was associated with a higher prevalence of asthma.47,48
Swimming Pools. Swimming pools are treated with sodium hypochlorite, which is 1% chlorine. A major chlorination by-product found in the air of indoor chlorinated pools is nitrogen trichloride (a chloramine). An increase in asthma was found in children who regularly attend indoor pools,49 and bronchial hyperresponsiveness and airway inflammation occur in swimmers with long-term repeated exposure during training and competition.50 Serum levels of Clara cell protein, an anti-inflammatory biomarker, are significantly lower in children who are indoor pool swimmers.51 Air contamination can trigger asthma in pool workers who do not enter the water.
Chronic Health Effects

Epidemiological studies in populations with occupational and environmental exposure to pesticides show increased risk of cancer, birth defects, adverse effects on reproduction and fertility, and neurological damage. The increased risk can occur without any evidence of past acute health effects or poisoning and from long-term exposure to low levels not considered toxicologically significant. Constraints in chronic disease epidemiology of pesticides include difficulty in assessing and documenting exposure; simultaneous exposure to other pesticides (and inert ingredients); the changing nature of exposures over time; and potential additive and synergistic effects from multiple exposures, especially exposures to the fetus, infants, and children at critical periods in development.
Data Sources

Data are now being reported from the Agricultural Health Study (AHS), a prospective cohort of 52,395 farmers, 4916 licensed commercial applicators, and 32,347 spouses of farmer applicators from Iowa and North Carolina, with data collection from 1993 to 1997 and continuing surveillance conducted by the National Cancer Institute (NCI).52,53 The NCI also collects data from farmer/farm worker studies in five northeastern states, six southern states, seven midwestern states, and six western states. The Midwest Health Study conducted by NIOSH collects data from Iowa, Michigan, Minnesota, and Wisconsin. The National Health Interview Survey, a household survey of the U.S. civilian noninstitutionalized population conducted annually since 1957, includes pesticide use data. NOMS (National Occupational Mortality Surveillance) is a collaborative study of NIOSH, NCI, and the National Center for Health Statistics using pooled death certificate data from 26 states. Useful data are also available from NHATS (National Human Adipose Tissue Survey), which collected fat tissue from 1967 to 1983 from 20,000 autopsy cadavers and surgical patients for analysis of 20 organochlorine pesticides. Potential adverse long-term effects of pesticides include cancer in adults and children and effects on the nervous and reproductive systems. In the discussion that follows, only studies in which potential pesticide exposure was included as a risk factor and in which the findings were statistically significant are included.
Pesticides and Cancer

A large number of pesticide active ingredients are known or suspected animal carcinogens. Based on the evidence for cancer in humans, the
EPA classifies pesticides into seven categories: A, human carcinogen; B, probable human carcinogen; C, possible human carcinogen; D, not classifiable as to human carcinogenicity; E, no evidence of carcinogenic risk to humans; L, likely human carcinogen; and NL, not likely human carcinogen. Epidemiological studies done in the United States and other countries report significant increased risk of certain cancers with pesticide exposure in children and adults.
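For readers who handle these classifications in datasets, the seven EPA letter codes listed above can be captured in a simple lookup table; a minimal sketch mirroring the text:

```python
# EPA carcinogenicity classifications as enumerated above (code -> meaning).
EPA_CANCER_CLASS = {
    "A": "human carcinogen",
    "B": "probable human carcinogen",
    "C": "possible human carcinogen",
    "D": "not classifiable as to human carcinogenicity",
    "E": "no evidence of carcinogenic risk to humans",
    "L": "likely human carcinogen",
    "NL": "not likely human carcinogen",
}

print(EPA_CANCER_CLASS["B"])  # probable human carcinogen
```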
Cancer in Children

Pesticide use in the home has shown the most consistent increase in risk for several childhood cancers in the United States and other countries. In most studies, the risk is higher in children younger than five and for use during pregnancy. Parental occupation as a farmer or farm worker has been shown to increase risk for certain kinds of cancers.54,55,56

Parental Occupational Exposure. Increased risk of bone cancer was found in California for paternal pesticide exposure and in Australia for maternal exposure; of brain cancer for parental exposure in the United States/Canada and in Norway and Sweden; of Hodgkin’s disease in children of Iowa farmer applicators; of kidney cancer for parental exposure in England/Wales; of leukemia for periconceptional exposure in the United States/Canada, for maternal exposure during pregnancy in China and Germany, and for parental exposure in Sweden; of neuroblastoma in New York related to paternal creosote exposure and maternal insecticide exposure, and in the children of Iowa farmer applicators; and of non-Hodgkin lymphoma for maternal exposure during pregnancy in Germany.
Home Exposure.57 Increased risk of bone cancer was found in California/Washington for home extermination (boys only); of brain cancer in Los Angeles for use of pet flea/tick foggers and sprays, in Missouri for use of bombs/foggers, pet flea collars, any termite treatment, garden diazinon and carbaryl use, yard herbicides, and pest strips, in Denver for pest strip use, and in Washington State for home use during pregnancy; of leukemia in California for professional extermination in the third trimester and for any use three months prior to, during, and 1 year after pregnancy, in Los Angeles for parental garden use (higher if maternal) and for indoor use once a week or more, in Denver for pest strip use, in the United States/Canada for maternal home exposure and postnatal rodent control, in England/Wales for propoxur mosquito control, and in Germany for home garden use; of neuroblastoma in the United States/Canada for garden herbicides; of non-Hodgkin lymphoma in Denver for home extermination, in the United States/Canada for frequent home use and for home extermination, and in Germany for professional home treatment; of soft tissue sarcoma in Denver for yard treatment; and of Wilms’ tumor in the United States/Canada for home extermination.

Environmental Exposure. Increased risk of hematopoietic cancer (leukemia/lymphoma) was found in the Netherlands for swimming in a pesticide-polluted pond, and of leukemia for maternal residence in a propargite use area in California and within one-half mile of dicofol and metam sodium use.
Cancer in Adults58,59,60

Farmers. Increased risk of brain cancer was found in U.S. applicators, in women in China, and in men in Italy; of colorectal cancer in Italy; of kidney cancer in Canada and Italy; of leukemia in Illinois, Iowa, Minnesota, Nebraska, Denmark, France, Italy, and Sweden; of liver/biliary cancer in the United States; of lung cancer in Missouri and in the Agricultural Health Study cohort related to use of chlorpyrifos, metolachlor, pendimethalin, and diazinon; of malignant melanoma (skin) in Norway and Sweden; of multiple myeloma in the Agricultural Health Study cohort, in U.S. midwestern states, and in Norway; of non-Hodgkin lymphoma in U.S. midwestern states, in New York (women), in Wisconsin, and in Canada, Italy, and Sweden; of pancreatic cancer in
Iowa, Louisiana, and Italy; of prostate cancer in Agricultural Health Study cohort applicators related to use of methyl bromide, in Canada, and in Italy related to use of dicofol and DDT (North Dakota farmers with prostate cancer who did not use pesticides had a median survival 8 months longer than users); of soft tissue sarcoma in Kansas; of stomach cancer in Italy; and of testicular cancer in Swedish farmers using deet repellent. Glyphosate exposure was not associated with increased risk in the Agricultural Health Study.61

Farm Workers. Increased risk of Hodgkin’s disease was found in Italy; of leukemia in California and in wives of pesticide-licensed farmers in Italy; of lung cancer in heavily exposed men and women in Costa Rica; of malignant melanoma (skin) in Australia and Scotland; of multiple myeloma in the United States; and of non-Hodgkin lymphoma in California.

Pesticide Applicators. Increased risk of bladder cancer was found in the United States; of colorectal cancer in Iceland; of leukemia in the United States, in Australia, and in Iceland (women); of liver/biliary cancer in DDT malaria sprayers in Italy; of multiple myeloma in DDT malaria sprayers in Italy62 and in herbicide applicators in the Netherlands; of pancreatic cancer in U.S. aerial applicators and Australian DDT malaria sprayers; of prostate cancer63 in Florida and Sweden; of soft tissue sarcoma in herbicide sprayers in Europe and Canada; and of testicular cancer in Florida pest control operators.

Factory Workers. Increased risk of bladder cancer was found in workers manufacturing the carcinogenic pesticide chlordimeform in Denmark and Germany, and in a U.S. bladder cancer cohort; of kidney cancer in Michigan pentachlorophenol workers and in an international herbicide cohort; of leukemia in U.S. alachlor workers and U.S.
formaldehyde workers; of liver/biliary cancer in DDT workers; of lung cancer in Alabama herbicide workers, in California diatomaceous earth workers, in Illinois chlordane workers, in Michigan DBCP workers, and in an English pesticide cohort; of non-Hodgkin lymphoma in U.S. atrazine and arsenic workers and in German and Swedish phenoxy herbicide workers; of nasal cancer in U.S. chlorophenol workers, in English herbicide workers, in European male and female formaldehyde workers, and in Filipino formaldehyde workers; of soft tissue sarcoma in Alabama herbicide workers, in U.S. chlorophenol workers, in Danish pesticide workers, and in herbicide workers in Europe and Canada; of stomach cancer in Maryland arsenical workers; and of testicular cancer in methyl bromide workers in Michigan.

Other Occupational Exposure. U.S. agricultural extension agents were at increased risk for brain cancer, colorectal cancer, Hodgkin’s disease, kidney cancer, leukemia, multiple myeloma, non-Hodgkin lymphoma, and prostate cancer. U.S. forestry soil conservationists were at increased risk for colorectal cancer, kidney cancer, multiple myeloma, non-Hodgkin lymphoma, and prostate cancer. Golf course superintendents were at increased risk for brain cancer, colorectal cancer, multiple myeloma, and prostate cancer. Increased risk of bladder cancer was found for pesticide exposure in Spain; of Hodgkin’s disease in Swedish creosote workers; of lung cancer in China; of non-Hodgkin lymphoma in herbicide-exposed forest workers and for pesticide exposure in Australia and Sweden; of pancreatic cancer in Spain related to DDT exposure; of soft tissue sarcoma in Sweden related to phenoxy herbicide exposure; and of stomach cancer related to herbicide exposure in Sweden.

Home Exposure.
Increased risk of lung cancer was found in China; of nasal cancer in the Philippines for daily burning of insecticide coils; of prostate cancer for home and garden use in Canada; and of soft tissue sarcoma for self-reported herbicide use in the United States.

Environmental Exposure. Increased risk of brain cancer was found in women in Massachusetts living near cranberry bogs; of pancreatic
cancer for residents in a dichloropropene use area in California; of soft tissue sarcoma from community chlorophenol contamination in Finland; of soft tissue sarcoma in men living near hexachlorobenzene emissions in Spain (thyroid cancer was also increased); and of stomach cancer in a high-pesticide-use village in Hungary.
Breast Cancer (Female)

Early studies finding an increase in the risk of breast cancer associated with serum and fat levels of the DDT metabolite DDE and other pesticides have not always been supported by larger cohort and case-control studies, especially those using historical samples gathered before diagnosis. Levels in the body at the time of cancer diagnosis may not reflect actual past exposures, and body stores depend on intake, changes in body size, and metabolism, among other conditions.64–66 A summary of pesticide-related studies in which the findings were statistically significant follows.

Serum and Fat (Adipose) DDE. Increased risk related to serum DDE levels was found in New York, in North Carolina blacks, and in Belgium, Canada, Colombia, and Mexico; and related to fat levels in Connecticut, New York, and Germany. Decreased risk related to serum DDE levels was found in California, Maryland, New England, New York, and Brazil; and related to fat levels in five European countries (Germany, Spain, the Netherlands, Northern Ireland, Switzerland). No association with serum DDE levels was found in California, in a U.S. meta-analysis, in Missouri, in the Nurses’ Health Study, in Long Island, NY, in Denmark, or in Vietnam; or with fat levels in Connecticut, in a national U.S. study, or in Sweden and Vietnam.

Other Pesticide Serum and Fat Levels. Increased risk was found related to serum hexachlorobenzene in Missouri, to serum β-HCH (an isomer found in lindane) in Connecticut, and to serum dieldrin in Denmark; and related to β-HCH in autopsy fat in Finland, to breast fat aldrin and lindane in Spain, and to hexachlorobenzene in postmenopausal women with ER+ tumors in Sweden. No association was found related to serum chlordane and dieldrin in Long Island, NY, to transnonachlor in New York, to β-HCH in Connecticut and Norway, or to breast fat levels of oxychlordane, transnonachlor, and hexachlorobenzene in Connecticut.

Estrogen Receptor Status.
Two studies found an increase in risk related to estrogen receptor-positive tumors: a national U.S. study related to DDE levels and a Swedish study related to hexachlorobenzene levels in postmenopausal women. A study in Canada found an increase in risk related to DDE and estrogen receptor-negative tumors. No association with estrogen receptor status was found in two Connecticut studies of DDE and oxychlordane, or in one study of DDE in Belgium.

Occupational Exposure. A few studies have examined occupation as a risk factor for breast cancer. Increased risk was found in farmers in North Carolina, and no association with atrazine exposure in Kentucky. Decreased risk was found for Florida licensed pest control operators and in a national study of applicators.

Environmental Exposure. Increased risk was found in Kentucky for residing in a triazine herbicide area. Decreased risk was found in California for residing in areas of agricultural use of probable human carcinogens and mammary carcinogens. No association was found for California teachers living within a half mile of agricultural pesticide use.
Neurological Effects

Although there is a dearth of data on chronic neuropathological and neurobehavioral effects of pesticides, available studies show adverse effects in two areas: long-term sequelae of acute poisoning and organophosphate-induced delayed neuropathy.
Long-term Sequelae of Acute Poisoning

The percentage of acutely poisoned individuals who develop clinically significant sequelae is not known. Early reports document that organophosphate pesticides can cause profound mental and psychological changes.44 Follow-up studies in persons poisoned by organophosphates suggest that long-term neurological sequelae occur even when recovery appeared to be complete, and even single episodes of severe poisoning may be associated with a persistent decrement in function. The neuropsychological status of 100 persons poisoned by organophosphate pesticides (mainly parathion) an average of 9 years earlier differed significantly from that of control subjects in measures of memory, abstraction, and mood. Twice as many had scores consistent with cerebral damage or dysfunction, and personality scores showed greater distress and complaints of disability.45 Other studies find that auditory attention, visual memory, visual-motor speed, sequencing, problem solving, motor steadiness, reaction time, and dexterity are significantly poorer among poisoned cohorts. Visual disturbances were reported by 10 of 117 individuals 3 years after occupational organophosphate poisoning (mainly from parathion and phosdrin). One-fourth of workers examined 10–24 months after hospitalization for acute organophosphate poisoning had abnormal vibrotactile thresholds.
Parkinson’s Disease

An association between pesticide exposure and Parkinson’s disease was first suggested in 1978. The role of toxic chemicals in the human pathology of the disease was highlighted in 1983 with a report of parkinsonism in an addict exposed to MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine), a street drug contaminant. The toxic mode of action of the insecticide rotenone, leading to degeneration of dopaminergic neurons, is similar to that of MPTP and has become the first animal model of pesticide-induced Parkinson’s disease.67 Heptachlor, and perhaps other organochlorine insecticides, exerts selective effects on striatal dopaminergic neurons and may play a role in the etiology of idiopathic Parkinson’s disease. Low doses of permethrin can reduce the amount of dopamine transporter immunoreactive protein in the caudate-putamen, and triadimefon induces developmental dopaminergic neurotoxicity. As more pesticides are studied, many are shown to predispose dopaminergic cells to proteasomal dysfunction, which can be further exacerbated by environmental exposure to certain neurotoxic compounds such as dieldrin. Examination of the brains of addicts decades after MPTP exposure shows activated microglia (cells found in areas of neural damage and inflammation), suggesting that even a brief toxic exposure can produce long-term damage to the brain. It is postulated that certain pesticides may produce a direct toxic action on the dopaminergic tracts of the substantia nigra and contribute to the development of Parkinson’s disease in humans, depending on genetic variants (vide infra), exposure conditions, family history, and other factors. Silent neurotoxicity produced by developmental insults can be unmasked by challenges later in life, and there is potential for cumulative neurotoxicity over the life span.
Human Studies68,69 Farmers. Increased risk of Parkinson’s disease related to pesticide exposure was found in farmers in Italy and Australia, and in women in China and Taiwan. A nonsignificant increase in risk was found in Washington State for self-reported crop use of paraquat and other herbicides. No association was found for exposure to herbicides/pesticides in Kansas, to agricultural fungicides in Michigan, to insecticides/herbicides/rodenticides in India, to insecticide/herbicide and paraquat use in Canada, or to pesticides/herbicides in Finland. Farm Workers. Increased risk was found in Washington State and British Columbia orchard workers, and in pesticide-exposed sugar plantation workers (nonsmokers, non-coffee drinkers) in Hawaii; for
herbicide/insecticide exposure (adjusted for smoking) in Michigan; and for paraquat exposure in Taiwan. Decreased risk was found in French workers (smokers), and there was no association with pesticides in Quebec. Other Occupational or Unstated. Increased risk was found related to insecticide exposure in Washington State (diagnosis before age 50); to occupations in pest control for black males in the NOMS study; to herbicide/insecticide exposure in Germany; in a French elderly cohort; to wood preservatives in Germany; and to any occupational handling in Sweden. No association with herbicide/pesticide exposure was found in Pennsylvania, Australia, Quebec, Spain, and Italy. Home Exposure. Increased risk was found for residents of a fumigated house in Washington State (diagnosed before age 50), and for home wood paneling for more than 15 years in Germany. No association was found for home use, or for self-reported use in another study in Washington State. Environmental Exposure. An increase in mortality was found for living in a pesticide use area in California. Pesticide Serum and Tissue Levels. The mean plasma level of DDE in Greenland Inuits (men and women) with Parkinson’s disease was almost threefold higher than in controls. The mean lindane level in substantia nigra autopsy samples of Parkinson’s patients was four-and-a-half times higher than in nonneurological controls. Dieldrin residues were significantly higher in postmortem brain samples from patients with Parkinson’s than in those from patients with Alzheimer’s disease and from nonneurological controls. Another study found that dieldrin was significantly decreased in parkinsonian brains when analyzed by lipid weight. Genetic Interactions.70 Genetic susceptibility to Parkinson’s may be mediated by pesticide metabolism and degradation enzymes in the cytochrome P450 system. A study in France found a threefold increase in risk in CYP2D6 poor metabolizers exposed to pesticides that was not present in unexposed controls.
In a Kansas study, those with pesticide exposure and at least one copy of the CYP2D6 29B+ allele had an 83% predicted probability of Parkinson’s with dementia. An Australian study found that those with regular exposure to pesticides who were poor CYP2D6 metabolizers had an eightfold increase in risk of Parkinson’s, and carriers of the variant allele a threefold increase. Polymorphism of the CYP2D6 gene is common in Caucasians but very rare in Asians, and was not found to be a significant factor in Parkinson’s disease in a large study in China. Animal studies show that polymorphisms at positions 54 (M54L) and 192 (Q192R) in paraoxonase (PON1) can affect metabolism and detoxification of pesticides. A study in Finland found no association between sporadic Parkinson’s disease in humans and PON1 variation at these alleles.
Other Neurological Disease Dementia. Most studies of pesticides as a risk factor for dementia report small, nonsignificant increases in risk or no association. Increased risk of Alzheimer’s disease was found for exposure to pesticides in Canada and France, of mild cognitive dysfunction in the Netherlands, and of presenile dementia for self-reported use in the United States. Other studies in the United States did not report any association with pesticides. Amyotrophic Lateral Sclerosis. An ongoing mortality study of Dow Chemical Company workers in Michigan reported three deaths from amyotrophic lateral sclerosis (ALS), all in workers whose only common exposure was to 2,4-D (1947–49, 1950–51, 1968–86). A nonsignificant increased risk was found for pesticide exposure in Italy. Other investigations are anecdotal case reports: from Brazil, of two men exposed to aldrin, lindane, and heptachlor; from England, of the death of a man exposed to chlordane and pyrethrins; and
from Italy, of a conjugal cluster 30 months apart in which no association was found for pesticide levels in the couple’s artesian well. Variant Creutzfeldt-Jakob Disease. The only reports are two studies from England that found no association with PON1 (paraoxonase) alleles. Eye Disorders. A study in the United States found a significant 80% increase in risk of retinal degeneration related to cumulative days of fungicide use; a follow-up of 89 poisoning cases in France found two cases of visual problems along with other neurological sequelae; and a study of 79 workers in India exposed to fenthion found macular lesions in 15% and three cases of paracentral scotoma and peripheral field constriction. Other investigations are anecdotal case reports, including blindness related to methyl bromide in an agricultural applicator in California and in a suicidal ingestion of carbofuran in Tennessee. Guillain-Barré Syndrome. There are no well-designed studies of pesticide exposure as a risk factor for Guillain-Barré syndrome, only sporadic reports. Multiple System Atrophy. Increased risk was found for occupational pesticide exposure in the United States and Italy. A death record review in the United States implicated pesticides/toxins in 11% of cases. A prevalence study in France found no association with occupational pesticide exposure. Progressive Supranuclear Palsy. No association with pesticides was found in a U.S. study. A report from Canada cites multiple insecticide exposure in two cases. Vascular Dementia (Stroke). Increased risk related to occupational pesticide exposure was found in Canada.
Pesticide-Induced Delayed Neuropathy Certain organophosphates are known to produce a delayed neuropathy 1–3 weeks after apparent recovery from acute poisoning, known as organophosphate-induced delayed neuropathy (OPIDN). It is characterized by a distal sensory-motor axonopathy with myelin degeneration, resulting in muscle weakness, ataxia, and paralysis. A purely sensory neuropathy is not seen in OPIDN, and in all reported cases the sensory component, if present, is much milder than the motor component. The delayed neurotoxic action is related not to cholinesterase inhibition but to the binding (phosphorylation) of a specific enzyme in nervous tissue called neurotoxic esterase, or neuropathy target esterase (NTE).71 OPIDN has been reported from exposure to mipafox in a research chemist in 1953, in leptophos (Phosvel) manufacturing workers in 1977, and more recently from high exposures to methamidophos (Monitor), chlorpyrifos (Dursban, Lorsban), trichlorfon, and dichlorvos. Most reported cases are from suicidal or accidental ingestion of large doses. The hen bioassay of NTE inhibition in brain is required by the EPA for screening new organophosphate insecticides for delayed neuropathic effects.
Reproductive Effects Maternal and paternal pesticide exposure has been found to be a risk factor for infertility, sterility, spontaneous abortion, stillbirth, and birth defects.72–76 Several studies document that a high percentage of women use pesticides in the home during pregnancy.
Data Sources Pesticide-related data are available from the Collaborative Perinatal Project, a 1959–1965 cohort of about 56,000 pregnant women and their children at 12 medical centers; and from the Child Health and Development Study, a 1959–1967 cohort of 20,754 pregnancies in San Francisco Bay Area Kaiser members.
Pesticides
719
Fetal Loss Spontaneous Abortion. Increased risk was found in wives of pesticide applicators in Minnesota; in wives of farmers licensed to use pesticides in Italy; in Canada related to exposure to thiocarbamates, glyphosate, and phenoxy herbicides; and in DBCP-exposed workers in Israel and in India. Maternal occupational exposure increased the risk in China (threatened abortion); in Canada; in farm couples in India; in female flower workers and wives of male workers in Colombia; in Filipino farmers using conventional rather than less-pesticide-intensive methods; and after the toxic release incident in Bhopal, India. Decreased risk was found in wives of New Zealand sprayers. No associations with pesticides were found in Germany, in Italy, in U.S. crop duster pilots, in wives of DDT sprayers in Mexico, in a 17-year follow-up of wives of DBCP-exposed workers in Israel, or with DDE blood levels in Florida.
Stillbirth Increased risk was found in the United States for maternal and paternal home use and for maternal occupational exposure; in women exposed to pesticides and germicides in Canada; in Hispanics living near an arsenate pesticide factory in Texas; in Canadians living in a high pesticide use area; and in female farm workers in Canada and male farm workers in Spain. No associations were found in Colombia flower workers, in wives of Minnesota pesticide applicators, for home pesticide use in California, or in long-term follow-up of women involved in the toxic release in Bhopal, India.
Birth Defects Farmers. Increased risk of cleft lip/palate was found for agricultural chemical users in Iowa and Michigan; decreased risk of cleft lip/palate was found for paternal exposure in England/Wales; no association was found for limb reduction defects in farmers in New York State, for neural tube defects in male and female farmers exposed to mancozeb in Norway, or for any major defect in Filipino farmers using high-pesticide-input methods. Farm Workers. Increased risk was found of cleft lip/palate in Finland for female farm workers exposed in the first trimester; of limb reduction defects in California if either or both parents were farm workers; and of any major defect in female flower workers and wives of male workers in Colombia, and in female cotton field workers in India. Pesticide Applicators. Increased risk of cardiac defects and of any major defect was found for paternal occupation as a licensed applicator in Minnesota, and of central nervous system defects for paternal occupation as a glyphosate or phosphine applicator in Minnesota. No association was found for cleft lip/palate in New Zealand herbicide sprayers, for limb reduction defects in United States crop-dusters, or for any major defect in male malaria DDT sprayers in Mexico. Other Occupational or Unstated. Increased risk was found of cardiac defects and eye defects in foresters in Canada; of limb reduction defects for maternal exposure in Washington State; of neural tube defects in China for maternal exposure in the first trimester (a very high risk, in a study done before folate supplementation was instituted); of cryptorchidism (undescended testicles) for paternal occupational exposure in China, and in the Netherlands for paternal but not maternal exposure; of hypospadias for paternal exposure in Italy; and of any major defect in Spain for paternal paraquat exposure. Decreased risk was found for cardiac defects in women exposed in the first trimester in Finland.
No association was found with maternal exposure for septal defects or hypoplastic left heart syndrome in Finland; with eye defects and parental benomyl exposure in a large multicenter study in Italy; or with hypospadias and parental exposure in Norway.
720
Environmental Health
Home Use. Increased risk of cleft lip/palate was found in California for maternal periconceptional home use; of cardiac defects in California for periconceptional home use and maternal use of insect repellent; of cardiac defects in the Baltimore-Washington Infant Study, including transposition of the great arteries, for maternal exposure in the first trimester to rodenticides, herbicides, or any pesticide, as well as of total anomalous pulmonary venous return, with an attributable risk of 5.5% for ventricular septal defect; of neural tube defects in California if the mother was the user, which was borderline significant for commercial home application; and of limb reduction defects in California for periconceptional home use, and in Australia for home use during the first trimester, which increased further with more than one use. No association was found between maternal first trimester exposure and Down syndrome in Texas. Environmental Exposure. Increased risk was found of cleft lip/palate, kidney defects, and neural tube defects for living in a high pesticide use area in Canada; of limb reduction and neural tube defects for maternal residence in high pesticide use areas in California; and of potentially benomyl-related eye defects (anophthalmia/microphthalmia) in rural areas in England. No association with pesticide exposure was found for neural tube defects in a study on the United States–Mexico border or for a cluster in California, or with any major defect after the toxic release of methyl isocyanate from a pesticide factory in Bhopal, India.
Fertility77 Sterility. Two pesticides, chlordecone (Kepone) and DBCP (dibromochloropropane), are well-documented causes of sterility in male factory workers. Kepone was banned in 1976 and DBCP in 1979. The cases in DBCP workers occurred without any related acute illness; in Kepone workers, only in the severely poisoned. Studies of workers exposed to ethylene dibromide (EDB), a soil fumigant related to DBCP, found lowered sperm counts and impaired fertility. EDB replaced DBCP in 1979 but was itself banned in 1984, and most of its uses were replaced by methyl bromide. Fecundability. “Time to pregnancy” is easily collected data that include both partners and can be gathered in any randomly selected population. A fecundability ratio is determined, comparing the number of months it takes exposed versus unexposed couples to conceive when not using birth control. Pesticide-related effects have not shown a clear pattern, as the results of recent studies show. A significant decrease in fertility related to pesticide exposure was found in the Netherlands, but without a dose-response; another study found lower fertility during spray season. Females exposed to pesticides in Canada had significantly lower fertility, and Danish female greenhouse sprayers a nonsignificant decrease. Nonsignificantly lower fertility related to DDE serum levels was found in the Collaborative Perinatal Project78 and in a study of former malaria sprayers in Italy.79 No carbaryl-associated effects on fertility were found in U.S. factory workers. A significant increase in fecundability ratios (greater fertility) was found in male farmers and farm workers in Denmark, France, and Italy.
Biomonitoring Males. High LH and FSH levels, an indication of testicular failure, were found in Chinese pesticide factory workers, in German workers with short-term exposure to pesticides, in Israeli workers 17 years after cessation of exposure to DBCP, and in lindane factory workers. No increase was found in Minnesota herbicide applicators. Decreased levels of testosterone were found in Chinese pesticide factory workers, Danish farmers, lindane factory workers, and black farmers in North Carolina exposed to DDT. No significant association with pesticide exposure was found in Minnesota herbicide applicators. The fungicide vinclozolin and the herbicide molinate act as antiandrogens in animal studies, but no significant association between pesticide exposure and fertility was found in male vinclozolin and molinate factory workers.
Semen. 2,4-D was found in 50% of seminal fluid samples from Canadian farmers. Detectable levels of hexachlorobenzene, lindane, DDT, and dieldrin were found in German men, with the highest levels in chemistry students. DDE, aldrin, endosulfan, and isomers of hexachlorocyclohexane (α-, β-, γ-, δ-) were detected in men in India, and DDE and ε-HCH in Poland. A study in France found no DDE in the semen of fertile and subfertile men, and no difference in blood levels, but serum DDE was higher in the mothers of the subfertile men.80 Sperm Counts. In Denmark, a report that a self-selected group of organic farmers attending a convention had higher sperm counts than traditional farmers created quite a stir. A well-designed study using a random sample of a larger number of farmers did not support the earlier findings: the mean sperm count in organic farmers was 10% higher than in traditional farmers, but the difference was not significant. A study of pesticide exposure in Danish farmers found a mean of 197 million/mL before pesticide exposure, decreasing 22% to 152 million/mL after exposure, but the difference was not significant. Lower counts were found in farmers in Argentina using 2,4-D or with any farm pesticide exposure. Except for well-documented studies in DBCP workers, the only pesticide-related study of sperm counts in the United States was in molinate factory workers, where no association was found.81
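The time-to-pregnancy comparison behind the fecundability ratio described above can be sketched in a few lines. The function name and the crude per-cycle estimate (the reciprocal of mean months to conception) are illustrative only; published analyses use adjusted discrete-time survival models:

```python
def fecundability_ratio(exposed_months, unexposed_months):
    """Illustrative fecundability ratio from time-to-pregnancy data.

    Each list holds months to conception for couples not using birth
    control. The per-cycle conception probability is crudely estimated
    as the reciprocal of the mean time to pregnancy; a ratio below 1.0
    suggests reduced fertility in the exposed group.
    """
    p_exposed = len(exposed_months) / sum(exposed_months)
    p_unexposed = len(unexposed_months) / sum(unexposed_months)
    return p_exposed / p_unexposed

# Exposed couples averaging 8 months versus unexposed couples
# averaging 4 months gives a ratio of 0.5 (halved fecundability).
```

Real studies estimate this ratio with regression models that adjust for age, parity, smoking, and similar covariates, but the interpretation is the same: an exposed-to-unexposed comparison of per-cycle probability of conception.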
Adipose Tissue (Fat). The highest reported level of DDT in mothers at delivery is 5,900 ppb in fat tissue of Kenyan women, who also had high levels of HCH (30 ppb). Very high levels of DDE (4,510 ppb) were also found in tissue of Mexican women at delivery. Ovarian Follicular Fluid. Trace amounts of chlordane, DDE, and hexachlorobenzene were found in follicular fluid from Canadian women undergoing in vitro fertilization (IVF); endosulfan and mirex in 50% or more of samples in another study; and hexachlorobenzene and lindane in the fluid of German women. Amniotic Fluid. A study done in Florida at a time of heavy agricultural DDT use found 14 ppb in black babies and 6 ppb in white babies. A recent study found low levels of DDE and hexachlorocyclohexane in California women in their second trimester of pregnancy. Meconium. Meconium is a newborn baby’s intestinal contents, the first “bowel movement,” an accumulation of intestinal epithelial cells, mucus, and bile. Alkylphosphate metabolites of organophosphate pesticides were found in a recent New York study; DDE, DDT, dieldrin, and HCH isomers in an early study in Japan. A collaborative study in Australia and the Philippines found lindane, pentachlorophenol, chlordane, DDE, chlorpyrifos, and malathion in all samples, with levels much higher in the Filipino babies. Diazinon and parathion were found only in the Filipino babies. A study done in Germany found DDE in 5% of samples collected in 1997. Placenta. DDE was found in samples collected in 1965 from women living in high agricultural production areas of California, and DDE and β-HCH in samples from Japanese women in the 1970s. Levels of DDT and lindane in stillborn babies were not different from those in live births in India. A study in Mexico found that pesticide exposure increased the prevalence of atypical placental villi. Testes. In Greece, autopsies of suicide victims who died from ingesting pesticides found paraquat, fenthion, and methidathion in the testes.
Endocrine Disruptors “The dose makes the poison,” attributed to the sixteenth-century Swiss alchemist Paracelsus, is the key concept in toxicology, in the
scientific basis of dose-response models used in the determination of thresholds, and in the regulation of allowable levels of exposure to toxic chemicals. The ability to detect increasingly lower levels of chemical contaminants in biological samples, and a rethinking of dose-response when the exposure is to the developing fetus, have had a profound impact on risk assessment. Challengers of a rigid dose-response model state that it is not relevant to the fetus during critical periods of development in the first days and weeks of pregnancy, and contend that small disturbances in hormonal function by xenobiotics, called “endocrine disruptors,” can lead to profound effects, which may not be manifested until adulthood.82 Many pesticides can be considered potential endocrine disruptors based on animal findings in tests of the pituitary, adrenal, thyroid, testes, ovaries, reproductive outcome, and transgenerational effects, and on in vitro screening in nonmammalian species. These include chlorinated hydrocarbons, organophosphates, synthetic pyrethroids, triazine and carbamate herbicides, and fungicides. Questions have been raised about the relevance of findings in wildlife to human populations. Human studies have conflicting findings on the role of pesticides in conditions often attributed to endocrine disruptors, such as decreasing quantity and quality of sperm, increasing incidence of breast and prostate cancer, cryptorchidism and hypospadias, and other effects, as described above.83 A recent review concludes that “At this time, the evidence supporting endocrine disruption in humans with background-level exposures is not strong.”84 The EPA is developing protocols for screening and testing pesticides to determine whether they are endocrine disruptors, and research is ongoing.
REGULATION AND CONTROLS
Legislation The Federal Insecticide Act of 1910, a labeling law to prevent adulteration, was repealed by the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) of 1947. FIFRA was administered by the U.S. Department of Agriculture (USDA) until 1970, when control passed to the Environmental Protection Agency (EPA). Most pesticides now on the market were approved by the USDA from the 1940s through the early 1970s, without the chronic toxicity, health, and environmental fate data required by current law. Pesticides must be registered with the EPA before they can be sold. Registration is contingent upon submission by the registrant (manufacturer) of scientific evidence that, when used as directed, the pesticide will effectively control the indicated pest(s); that it will not injure humans, crops, livestock, wildlife, or the environment; and that it will not result in illegal residues in food and feed. About 25–30 new active pesticide ingredients are registered annually. FIFRA amendments in 1972 required all pesticides to meet new health and safety standards for oncogenicity/carcinogenicity, chronic toxicity, reproductive toxicity, teratogenicity, gene mutation, chromosomal aberrations, DNA damage, and delayed neurotoxicity by 1975. Failure to meet the new standards resulted in 1988 FIFRA amendments requiring the EPA to undertake a comprehensive reregistration review of the 1138 active-ingredient pesticides first registered before November 1984. Reregistration Eligibility Decisions (REDs) summarize the reviews of these older chemicals. The review process produced a large number of voluntary cancellations, but a large percentage of these older pesticides have still not met the new standards.
In 1996, the Food Quality Protection Act (FQPA) amended both FIFRA and the Federal Food, Drug, and Cosmetic Act (administered by the FDA), requiring a reassessment of all food tolerances (the maximum amount of pesticide residues allowed on food) and replacement of FIFRA’s cost-benefit analysis with a “reasonable certainty of no harm” standard, mandating three additional steps to determine the new health-based standard: (a) Take into account aggregate exposure from food, water, and home and garden uses; (b) Add an additional tenfold margin of safety (or higher if necessary) to protect infants and
children; (c) Consider cumulative risks from all pesticides that have a common mechanism of toxicity. Organophosphate insecticides, for example, share the same basic mechanism of toxicity and biological activity.85 To date, 7000 of the 9721 tolerances requiring reassessment have been completed.86 Human Studies. Intense controversy continues to surround the use of human beings in risk assessment of pesticides.87 An initial study approved by the EPA was called the Children’s Environmental Exposure Research Study (CHEERS). It offered $970, a free camcorder, a bib, and a T-shirt to parents whose infants were exposed to pesticides if the parents completed the two-year study. The requirements for participation were living in Duval County, Florida; having a baby under 3 months old or 9–12 months old; and “spraying pesticides inside your home routinely.” The study was being paid for in part by the American Chemistry Council, whose members include pesticide registrants. The EPA withdrew approval for the study but is still reconsidering the issue of some form of human testing.
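The additional tenfold children’s margin of safety in step (b) compounds with the conventional uncertainty factors used to derive a reference dose from animal data. A minimal arithmetic sketch, with a hypothetical helper name and illustrative default factors (tenfold interspecies, tenfold intraspecies, plus the extra tenfold FQPA factor), not the statutory text:

```python
def reference_dose(noael_mg_per_kg_day,
                   interspecies=10, intraspecies=10, fqpa_children=10):
    """Illustrative reference dose (RfD) calculation.

    Divides an animal no-observed-adverse-effect level (NOAEL) by the
    product of uncertainty factors, including the additional tenfold
    FQPA margin for infants and children. Factor values are assumptions
    for illustration, not regulatory defaults for any pesticide.
    """
    return noael_mg_per_kg_day / (interspecies * intraspecies * fqpa_children)

# A NOAEL of 1.0 mg/kg/day yields 0.001 mg/kg/day with the extra
# children's factor, versus 0.01 mg/kg/day without it.
```

The point of the sketch is that the FQPA factor multiplies the existing uncertainty factors, so the allowable exposure can drop by an order of magnitude for pesticides to which children are exposed.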
Worker Protection Standards Chemical workers who manufacture and formulate pesticides are covered by the Occupational Safety and Health Act (OSH Act), passed in 1970. Agricultural workers were specifically excluded from the law, including its Hazard Communication Standard (right-to-know) provisions. The EPA issued Worker Protection Standards (WPS) under section 170 of FIFRA in 1992, with full implementation by October 1995. The EPA estimates that about four million workers on farms and in nurseries, greenhouses, and forestry are covered by the rules. The regulations require restricted entry intervals (REIs) for all pesticides: 48 hours for all toxicity category I products, which can be extended up to 72 hours for organophosphates applied outdoors in arid areas; 24 hours for toxicity category II products; and 12 hours for all other products, later amended to exempt cut-rose workers. The rules require posting of warning signs for certain applications, worker education and training, and providing pesticide-specific materials upon request. The WPS are based on acute toxicity only, and no rules specifically address exposures to pesticides that are known or suspected carcinogens or teratogens. The rules apply to adult workers, without any modifications or consideration of exposures to children and pregnant women.
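The restricted entry intervals above amount to a small lookup keyed on acute toxicity category. A sketch with a hypothetical function name, covering only the general rule as stated in the text (the cut-rose exemption and other later amendments are omitted):

```python
def restricted_entry_hours(toxicity_category, organophosphate_arid_outdoor=False):
    """Illustrative REI lookup per the Worker Protection Standards text.

    Category I products: 48 h, extendable to 72 h for organophosphates
    applied outdoors in arid areas; category II: 24 h; all other
    products: 12 h. A simplified sketch, not a regulatory reference.
    """
    if toxicity_category == 1:
        return 72 if organophosphate_arid_outdoor else 48
    if toxicity_category == 2:
        return 24
    return 12
```

In practice the controlling REI is whatever appears on the product label, which may be longer than these floor values.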
Federal and State Administration and Enforcement The EPA delegates administration and enforcement of FIFRA to the states through working agreements. In most states, enforcement authority is in departments of agriculture. The pesticide label is the keystone of FIFRA enforcement, and any use inconsistent with the label is illegal. The label must contain the following: brand name, chemical name, percentage active ingredient(s) and inert ingredient(s); directions for use; pests that it is effective against; crops, animals, or sites to be treated; dosage, time, and method of application; restricted entry interval, preharvest interval; protective clothing and equipment required for application; first-aid and emergency treatment; name and address of the manufacturer; and toxicity category. The toxicity category and associated signal word (Table 33-2) must also be on the label.
Other Agencies Other Federal agencies with responsibilities for enforcement of pesticide regulations include the Food and Drug Administration (FDA), the USDA, and the Federal Trade Commission (FTC). The EPA sets the maximum legal residues of pesticides (called tolerances) allowed to be on food at the time of retail sale, but does not enforce them. The FDA is responsible for enforcement of tolerances in fruits, vegetables, grains, feed, and fiber; and the USDA, for meat, poultry, and fish. The FTC protects consumers against false and deceptive advertising claims by pesticide distributors and professional applicators—the FTC has brought only three actions in the past 10 years.
Banned, Suspended, and Severely Restricted Pesticides Table 33-4 lists selected pesticides that have been banned, suspended, or severely restricted for use in the United States. Many pesticides that are banned or severely restricted in the United States, Canada, and Western Europe are widely used in developing countries. An Executive Order requires the United States to inform developing countries if an exported pesticide is banned in the United States and to obtain official approval before it can be exported.
REFERENCES
1. Carson R. Silent Spring. Boston: Houghton Mifflin; 1962. 2. Kiely T, et al. Pesticides Industry Sales and Usage: 2000 and 2001 Market Estimates. Washington, D.C.: Environmental Protection Agency, OPP, EPA 733-R-04-001; 2004. http://www.epa.gov/oppbead1/pestsales/. 3. California Environmental Protection Agency. Annual Report of Pesticide Use in 2003 by Chemical and by Commodity. Sacramento: Department of Pesticide Regulation; 2005. http://www.cdpr.ca.gov/docs/pur/purmain.htm. 4. Graham W. The Grassman. The New Yorker, August 19:34–7; 1996. 5. www.cdc.gov/ncidod/dvbid/westnile/qa/pesticides.htm; www.cdc.gov/ncidod/dvbid/westnile/qa/insect_repellent.htm. 6. Lengeler C. Insecticide-treated bed nets and curtains for preventing malaria. Cochrane Database Syst Rev. 2005;(2):CD000363 (update of Cochrane Database Syst Rev. 2000;(2):CD000363). 7. http://www.cdc.gov/malaria/control_prevention/vector_control.htm. 8. Fenske RA, et al. Lessons learned for the assessment of children’s pesticide exposure: critical sampling and analytical issues for future studies. Env Health Persp. 2005;113(10):1455–62. 9. Kimmel CA, et al. Lessons learned for the National Children’s Study from the NIEHS/USEPA Centers for Children’s Environmental Health and Disease Prevention research. Env Health Persp. 2005;113(10):1414–8. 10. Cohen Hubal EA, et al. Characterizing residue transfer efficiencies using a fluorescent imaging technique. J Expo Anal Env Epid. 2004;15(3):261–70. 11. Ivancic WA, et al. Development and evaluation of a quantitative video-fluorescence imaging system and fluorescent tracer for measuring transfer of pesticide residues from surfaces to hands with repeated contacts. Ann Occ Hyg. 2004;48(6):519–32. 12. Centers for Disease Control. Third National Report on Human Exposure to Environmental Chemicals. CDC. July 2005. http://www.cdc.gov/exposurereport/. 13. Stacey R, et al. Secondary contamination in organophosphate poisoning: analysis of an incident. Quart J Med. 2004;97(2):75–80. 14. 
Simpson WM, et al. Recognition and management of acute pesticide poisoning. Am Fam Phys. 2002;65(8):1599–604. 15. Robenshtok E, et al. Adverse reaction to atropine and the treatment of organophosphate intoxication. Isr Med Assoc J. 2002;4(7):535–9. 16. Eddleston M, et al. Oximes in acute organophosphorus pesticide poisoning: a systematic review of clinical trials. QJM. 2002;95(5):275–83. 17. Akgur SA, et al. Human serum paraoxonase (PON1) activity in acute organophosphorous insecticide poisoning. Forensic Sci Int. 2003;133(1–2):136–40. 18. Mackness B, et al. Paraoxonase and susceptibility to organophosphorus poisoning in farmers dipping sheep. Pharmacogenetics. 2003;13(2):81–8. 19. Furlong CE, et al. Role of paraoxonase (PON1) status in pesticide sensitivity: genetic and temporal determinants. Neurotoxicology. 2005;26(4):651–9. 20. Tsatsakis AM, et al. Acute fatal poisoning by methomyl caused by inhalation and transdermal absorption. Bull Env Contam Toxicol. 2001;66(4):415–20.
21. Centers for Disease Control. Unintentional topical lindane ingestions–United States, 1998–2003. MMWR. 2005;54(21):533–5. 22. Bradberry SM, et al. Poisoning due to pyrethroids. Toxicol Rev. 2005;24(2):93–106. 23. Wax PM, et al. Fatality associated with inhalation of a pyrethrin shampoo. J Toxicol Clin Toxicol. 1994;32(4):457–60. 24. Culver CA, et al. Probable anaphylactoid reaction to a pyrethrin pediculicide shampoo. Clin Pharm. 1988;7:846–9. 25. Goldstein DA, et al. An analysis of glyphosate data from the California Environmental Protection Agency Pesticide Illness Surveillance Program. J Toxicol Clin Toxicol. 2002;40(7):885–92. 26. Acquavella JF, et al. Human ocular effects from self-reported exposures to Roundup herbicides. Hum Exp Toxicol. 1999;18(8):479–86. 27. Lee HL, et al. Clinical presentations and prognostic factors of a glyphosate-surfactant herbicide intoxication: a review of 131 cases. Acad Emerg Med. 2000;7(8):906–10. 28. Bradberry SM, et al. Poisoning due to chlorophenoxy herbicides. Toxicol Rev. 2004;23(2):65–73. 29. Huang CJ, et al. Subacute pulmonary manifestation in a survivor of severe paraquat intoxication. Am J Med Sci. 2005;330(5):254–56. 30. Watson WA, et al. 2003 annual report of the American Association of Poison Control Centers Toxic Exposure Surveillance System. Am J Emerg Med. 2004;22(5):335–404. TESS reports are available online at: http://www.aapcc.org. 31. State of California. The California Pesticide Illness Surveillance Program-2003. Sacramento: Dept. of Pesticide Regulation, 2005. http://www.cdpr.ca.gov/docs/whs/pisp.htm. 32. Calvert GM, et al. Acute occupational pesticide-related illness in the U.S., 1998–1999: surveillance findings from the SENSOR-pesticides program. Am J Ind Med. 2004;45(1):14–23. 33. Roberts DM, et al. Influence of pesticide regulation on acute poisoning deaths in Sri Lanka. Bull WHO. 2004;81(11):789–98. 34. Wesseling C, et al. Acute pesticide poisoning and pesticide registration in Central America. 
Toxicol Appl Pharmacol. 2005;207(Suppl 2): 697–705. 35. Mancini F, et al. Acute pesticide poisoning among female and male cotton growers in India. Int J Occ Env Health. 2005;11(3):221–32. 36. London L, et al. Pesticide usage and health consequences for women in developing countries: out of sight, out of mind? Int J Occ Env Health. 2002;8(1):46–59. 37. Konradsen F, et al. Reducing acute poisoning in developing countriesoptions for restricting the availability of pesticides. Toxicology. 2003;192:2–3:249–61. 38. Clarke EE. The experience of starting a poison control centre in Africa-the Ghana experience. Toxicology. 2004;198(1–3):267–72. 39. Salam MT, et al. Early-life environmental risk factors for asthma: findings from the Children’s Health Study. Env Health Persp. 2004;112(6):760–5. 40. Braun-Fahrlander C. Allergic diseases in farmers’ children. Pediatr Allergy Immunol. 2000;11(Suppl 13):19–22. 41. Centers for Disease Control. Surveillance for work-related asthma in selected U.S. states using surveillance guidelines for State Health Departments-California, Massachusetts, Michigan, New Jersey, 1993–1995. MMWR. 1999;48(SS-1):2–20. 42. Koksal N, et al. Apricot sulfurization: an occupation that induces an asthma-like syndrome in agricultural environments. Am J Ind Med. 2003;43(4):447–53. 43. Beard J, et al. Health impacts of pesticide exposure in a cohort of outdoor workers. Env Health Persp. 2003;111(5):724–30. 44. Radon K, et al. Respiratory symptoms in European animal farmers. Eur Resp J. 2001;17(4):747–54. 45. Karpati AM, et al. Pesticide spraying for West Nile virus control and emergency department asthma visits in New York City, 2000. Env Health Persp. 2004;112(11):1183–7. 46. Salome CM, et al. The effect of insecticide aerosols on lung function, airway responsiveness and symptoms in asthmatic subjects. Eur Resp J. 2004;16(1):38–43.
33 47. Nriagu J, et al. Prevalence of asthma and respiratory symptoms in south-central Durban, South Africa. Eur J Epid. 1999;15(8):747–55. 48. Azizi BHO, et al. The effects of indoor environmental factors on respiratory illness in primary school children in Kuala Lumpur (Malaysia). Int J Epid. 1991;20(1):144–50. 49. Bernard A, et al. Lung hyperpermeability and asthma prevalence in schoolchildren: unexpected associations with the attendance at indoor chlorinated swimming pools. Occ Env Med. 2003;60(6): 385–94. 50. Helenius IJ, et al. Respiratory symptoms, bronchial responsiveness, and cellular characteristics of induced sputum in elite swimmers. Allergy. 1998;53(4):346–52. 51. Lagerkvist BJ, et al. Pulmonary epithelial integrity in children: relationship to ambient ozone exposure and swimming pool attendance. Env Health Persp. 2004;112(17):1768–71. 52. Alavanja MC, et al. The agricultural health study. Env Health Persp. 1996;104(4):362–9. 53. Blair A, et al. Disease and injury among participants in the Agricultural Health Study. J Agric Saf Health. 2005;11(2):141–50. 54. Buffler PA, et al. Environmental and genetic risk factors for childhood leukemia: appraising the evidence. Cancer Invest. 2005;23(1): 60–75. 55. Reynolds P, et al. Agricultural pesticide use and childhood cancer in California. Epidemiology. 2005;16(1):93–100. 56. Flower KB, et al. Cancer risk and parental pesticide application in children of agricultural health study participants. Env Health Persp. 2004;112(5):631–5. 57. Pogoda JM, et al. Household pesticides and risk of pediatric brain tumors. Env Health Persp. 1997;105(11):1214–20. 58. Jaga K, et al. The epidemiology of pesticide exposure and cancer: a review. Rev Env Health. 2005;20(1):15–38. 59. Alavanja MC, et al. Cancer incidence in the agricultural health study. Scand J Work Environ Health. 2005(Suppl 1):39–45; discussion 5–7. 60. Fritschi L, et al. Occupational exposure to pesticides and risk of nonHodgkin’s lymphoma. Am J Epid. 
2005;162(9):849–57. 61. DeRoos AJ, et al. Cancer incidence among glyphosate-exposed pesticide applicators in the Agricultural Health Study. Env Health Persp. 2005;113:49–54. 62. Cocco P, et al. Long-term health effects of the occupational exposure to DDT. A preliminary report. Ann N Y Acad Sci. 1997;837: 246–56. 63. VanMaele-Fabry G, et al. Prostate cancer among pesticide applicators: a meta-analysis. Int Arch Occup Env Health. 2004;77(8): 559–70. 64. Wolff MS, et al. Improving organochlorine biomarker models for cancer research. Can Epid Biomark Prev. 2005;14(9):2224–36. 65. Snedeker SM. Pesticides and breast cancer risk: a review of DDT, DDE, and dieldrin. Env Health Perspect. 2001;109 Suppl 1: 35–47. 66. Houghton DL, et al. Organochlorine residues and risk of breast cancer. J Am Coll Toxicol. 1995;14(2):71–89. 67. Hirsch EC, et al. Animal models of Parkinson’s disease in rodents induced by toxins: an update. J Neural Transm Suppl. 2003;65:89–100.
Pesticides
723
68. Firestone JA, et al. Pesticides and risk of Parkinson disease: a population-based case-control study. Arch Neurol. 2005;62(1):91–5. 69. Gorell JM, et al. Multiple risk factors for Parkinson’s disease. J Neurol Sci. 2004;217(2):169–74. 70. Elbaz A, et al. CYP2D6 polymorphism, pesticide exposure, and Parkinson’s disease. Ann Neurol. 2004;55(3):430–4. 71. Fournier L, et al. Lymphocyte esterases and hydroxylases in neurotoxicology. Vet Hum Toxicol. 1996;38(3):190–5. 72. Bradman A, et al. Characterizing exposures to nonpersistent pesticides during pregnancy and early childhood in the National Children’s Study: a review of monitoring and measurement methodologies. Env Health Persp. 2005;113(8):1092–9. 73. Needham LL. Assessing exposure to organophosphorus pesticides by biomonitoring in epidemiologic studies of birth outcomes. Env Health Persp. 2005;113:494–8. 74. Bhatia R, et al. Organochlorine pesticides and male genital anomalies in the child health and development studies. Env Health Persp. 2005;113(2):220–4. 75. Hanke W, et al. The risk of adverse reproductive and developmental disorders due to occupational pesticide exposure: an overview of current epidemiological evidence. Int J Occ Med Env Health. 2004; 17(2):223–43. 76. Regidor E, et al. Paternal exposure to agricultural pesticides and cause specific fetal death. Occ Env Med. 2004;61(4):334–9. 77. Gracia CR, et al. Occupational exposures and male infertility. Am J Epid. 2005;162(8):729–33. 78. Law DC, et al. Maternal serum levels of polychlorinated biphenyls and 1,1-dichloro-2,2-bis(p-chlorophenyl)ethylene (DDE) and time to pregnancy. Am J Epid. 2005;162(6):523–32. 79. Cocco P, et al. Reproductive outcomes in DDT applicators. Env Res. 2005;98(1):120–6. 80. Charlier CJ, et al. Comparative study of dichlorodiphenyldichloroethylene in blood and semen of two young male populations: lack of relationship to infertility, but evidence of high exposure of the mothers. Reprod Toxicol. 2005;20(2):215–20. 81. 
Tomenson JA, et al. An assessment of fertility in male workers exposed to molinate. J Occ Env Med. 1999;41(9):771–87. 82. Pflieger-Bruss S, et al. The male reproductive system and its susceptibility to endocrine disrupting chemicals. Andrologia. 2004;36(6): 337–45. 83. Longnecker MP, et al. An approach to assessment of endocrine disruption in the National Children’s Study. Env Health Persp. 2003; 111:1691–7. 84. Barlow SM. Agricultural chemicals and endocrine-mediated chronic toxicity or carcinogenicity. Scand J Work Environ Health. 2005; 31(Suppl 1):141–5; discussion 119–22. 85. USGAO. Children and Pesticides. New Approach to Considering Risk is Partly in Place. GAO/HEHS-00-175. Washington D.C: GPO; 2000. 86. USEPA. Office of Pesticide Programs FY 2004 Annual Report. 735R-05-001, 2005. http://www.epa.gov/opp/. 87. USEPA. Human Testing; Proposed Plan and Description of Review Process. Fed Reg. February 8, 2005;70(5):6661–7.
This page intentionally left blank
34
Temperature and Health
Edwin M. Kilbourne
THERMOREGULATION
Humans are a homeothermic (warm-blooded) species. Although the temperature of the arms, legs, and superficial areas (acral body parts) may vary greatly, the body maintains a relatively constant deep body (core) temperature. Substantial deviations from normal core body temperatures cause adverse effects ranging from minor annoyance to life-threatening illness. Although far less affected by temperature changes than the core, acral body parts can be adversely affected by cold temperatures, particularly if the exposure is prolonged or repeated.1 Body temperature is affected by five fundamental physical processes:
1. Metabolism—Heat is generated by the biochemical reactions of metabolism.
2. Evaporation—Heat is lost by evaporation of moisture from the skin and respiratory passages.
3. Conduction—Heat is transferred to or from matter with which the body is in contact.
4. Convection—Heat transfer by conduction is greatly facilitated when the body is immersed in a fluid medium (gas or liquid) because of the ability of the fluid to flow over body surfaces. Conduction in this context is called convection.
5. Thermal radiation—Heat may be gained or lost through thermal radiation. The body radiates heat into cold surroundings or gains heat from objects that radiate infrared and other wavelengths of electromagnetic radiation (for example, the sun or a hot stove). The process is independent of the temperature of matter in contact with the body.1
ADVERSE EFFECTS OF HEAT
Heat Stress
Heat stress may result from alteration of any of the five physical processes involved in determining body temperature. For example, increased metabolic heat production caused by strenuous physical activity may stress the runner in a long-distance race or the soldier undertaking military maneuvers. A steel worker may experience heat stress because of the radiant heat emitted from a furnace at the workplace. At a hazardous waste site, a worker who must wear a heavy, impermeable suit may develop heat stress as the air in the suit becomes humid (decreasing evaporative cooling) and warm (limiting heat loss by conduction/convection).
People seek to relieve heat stress by altering one or more of the processes by which the body gains or loses heat. They may rest (lowering metabolic heat production), move to the shade (avoiding radiant solar heat), sit in front of a fan (increasing convective and evaporative heat loss), or swim (facilitating heat loss by conduction/convection through water). The acute physiological response to heat stress includes perspiration and dilation of the peripheral blood vessels. Perspiration increases cutaneous moisture, allowing greater evaporative cooling. Peripheral vasodilation reroutes blood flow toward the extremities and body surfaces, thereby enhancing transmission of heat from the body’s core to peripheral body parts, from which it can be more readily lost.2,3 With continuing exposure to heat stress, a process of physiological adaptation takes place. Although maximal adaptation may take weeks, significant acclimatization occurs within a few days of the first exposure.4,5
Indices of Heat Stress
In most circumstances, there are four principal environmental determinants of heat stress: ambient (dry-bulb) temperature, humidity, air speed, and thermal radiation. A number of heat indices have been developed to combine some or all of these separate factors into a single number indicating how hot “it feels” and, by implication, to quantify the net pathophysiological significance of a given set of environmental conditions. The original “effective temperature” (ET) index is read from a nomogram reflecting dry-bulb and wet-bulb temperatures, as well as air speed. The ET was derived empirically, based on reports of thermal sensations from subjects placed in a wide variety of conditions of temperature, humidity, and air movement. As originally formulated, ET attempted to quantify the dry-bulb temperature of still, saturated air that would produce the same subjective thermal effect as the conditions being evaluated.6 A revision of the ET, the corrected effective temperature (CET), was developed to take radiant heat into account by substituting globe thermometer temperature for dry-bulb temperature. (The globe thermometer is a dry-bulb thermometer with the bulb placed at the center of a 6-inch-diameter thin copper sphere, the outside of which is painted matte black.) Because of concern that the original ET was too sensitive to the effect of humidity at low temperatures and not sensitive enough to humidity at high temperatures, a reformulated version of ET has been published.7,8 The wet-bulb globe temperature (WBGT) is a heat stress index calculated as a weighted average of wet-bulb, globe, and dry-bulb thermometer temperatures:
Outdoors:
WBGT = 0.7 Twb + 0.2 Tg + 0.1 Tdb
Indoors:
WBGT = 0.7 Twb + 0.3 Tg
1. Burns involve acute destruction of skin and other tissues. They are caused by a variety of noxious physical and chemical influences, including both extremely high and extremely low temperatures. Burns present a unique set of problems and issues and thus are not discussed further in this chapter.
where Twb is the temperature read by a naturally convected wet-bulb thermometer, Tg is the globe thermometer temperature, and Tdb is the
dry-bulb temperature. Its formulae were chosen to yield values close to those of the ET for the same conditions.9 The WBGT has been used to assess the danger of heatstroke or heat exhaustion for persons exercising in hot environments. Curtailing certain types of activities when the WBGT is high decreases the incidence of serious heat-related illness among military recruits.10 Current standards and recommendations for limiting heat stress in the workplace are frequently expressed in terms of WBGT, although a person’s degree of acclimatization, energy expenditure, and the amount of time spent performing the stressful task are factored in as well.11 The “Botsball” or wet-globe thermometer (WGT) consists of a thermal probe within a black sphere 6 cm in diameter, the surface of which is covered with black cloth kept wet by water from a reservoir. The WGT is smaller and lighter than the equipment required to take WBGT readings and has a shorter stabilization time. These attributes facilitate its use to measure conditions in an employee’s personal workspace. WGT readings approximate those of WBGT, and mathematical formulae for approximating the WBGT from the WGT have been proposed.12,13 Outside military and occupational contexts, R.G. Steadman’s scheme of apparent temperature (AT) is favored by meteorologists and climatologists in the United States, Canada, Australia, and other countries. In the United States, the dry-bulb temperature and humidity components are used alone during hot weather to generate an approximation of the AT referred to as the “heat index.” Like ET, WGT, and WBGT, AT functions as a measure of the heat stress associated with a given set of meteorological conditions (Fig. 34-1). Unlike the effective temperature, which was derived empirically, AT is the product of mathematical modeling, based on principles of physics and physiology.
The AT for a given set of conditions of temperature, humidity, air speed, and radiant heat energy is equal to the dry-bulb temperature with the same predicted thermal impact on an adult walking in calm air of “moderate” humidity with surrounding objects at the same temperature as ambient air (no net heat gain or loss by radiation).14 For public health purposes, heat stress indices may be helpful in assessing the danger posed by particular weather conditions, but they are limited by underlying assumptions regarding metabolic heat production, clothing, body shape and size, and other factors. Moreover, most indices are also limited in that they yield instantaneous values that do not reflect the time course of a community’s heat exposure, which may be critical to the occurrence (or not) of adverse health effects.
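The indoor and outdoor WBGT formulas above, together with the U.S. National Weather Service's polynomial approximation of Steadman's heat index (the Rothfusz regression, intended only for roughly 80°F and above at moderate-to-high humidity), can be sketched in Python. The function names are illustrative, not from any standard library:

```python
def wbgt_outdoor(t_wb: float, t_g: float, t_db: float) -> float:
    """Outdoor wet-bulb globe temperature.
    t_wb: natural wet-bulb temperature, t_g: globe thermometer
    temperature, t_db: dry-bulb temperature (all in the same units)."""
    return 0.7 * t_wb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_wb: float, t_g: float) -> float:
    """Indoor WBGT: with no direct solar load, the dry-bulb term is dropped."""
    return 0.7 * t_wb + 0.3 * t_g

def heat_index_f(t_f: float, rh: float) -> float:
    """NWS heat index via the Rothfusz regression, in degrees Fahrenheit.
    t_f: dry-bulb temperature (degrees F), rh: relative humidity (percent)."""
    return (-42.379 + 2.04901523 * t_f + 10.14333127 * rh
            - 0.22475541 * t_f * rh - 6.83783e-3 * t_f ** 2
            - 5.481717e-2 * rh ** 2 + 1.22874e-3 * t_f ** 2 * rh
            + 8.5282e-4 * t_f * rh ** 2 - 1.99e-6 * t_f ** 2 * rh ** 2)
```

For a 95°F day at 50% relative humidity, the regression gives a heat index of roughly 105°F, consistent with the published NWS heat-index tables.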
Heatstroke
The most serious illness caused by elevated temperature is heatstroke. Its hallmark is a core body temperature of 105°F (40.6°C) or greater; elevations to 110°F (43.3°C) or higher are not uncommon. Mental status is altered, and initial lethargy proceeds to confusion, stupor, and finally unconsciousness. Classically, sweating is said to be absent or diminished, but many victims of clear-cut heatstroke perspire profusely. The outcome is often fatal, even when patients are brought quickly to medical attention. Death-to-case ratios of 40% or more have been reported.15,16,17 Heatstroke is a medical emergency requiring immediate steps to lower core body temperature. A patient can be cooled with an ice-water bath, ice massage, or specialized evaporative cooling procedures. Further treatment is supportive and directed toward potential complications of hyperthermia, including fluid and electrolyte abnormalities, rhabdomyolysis, and bleeding diathesis. Maximal recovery may occur quickly or may not occur for a period of days or weeks, and there may be permanent neurological residua.15,16
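The paired Fahrenheit/Celsius values quoted above (105°F ≈ 40.6°C; 110°F ≈ 43.3°C) follow from the standard conversion, which can be checked directly:

```python
def f_to_c(temp_f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

# Heatstroke temperature thresholds quoted in the text:
print(round(f_to_c(105.0), 1))  # 40.6
print(round(f_to_c(110.0), 1))  # 43.3
```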
Heat Exhaustion
Heat exhaustion is a milder illness than heatstroke, due primarily to unbalanced or inadequate replacement of the water and salts lost in perspiration. It typically occurs after several days of heat stress. Body temperature is normal to moderately elevated but rarely exceeds 102°F (38.9°C). The symptoms, primarily dizziness, weakness, and fatigue, are those of circulatory distress. Treatment is supportive and directed toward normalizing fluid and electrolyte balance.16,17
Heat Syncope and Heat Cramps
Heat syncope and heat cramps occur principally in persons exercising in the heat. Heat syncope is a transient fall in blood pressure with an associated loss of consciousness. Consciousness typically returns promptly in the recumbent posture. The disorder is thought to arise from circulatory instability due to cutaneous vasodilation in response to heat stress. Prevention is accomplished by avoiding strenuous exercise in the heat unless one is well trained and acclimatized.18 Heat cramps are muscle cramps, particularly in the legs, that occur during or shortly after exercise in a hot environment. They are thought to arise from transient fluid and electrolyte abnormalities. Heat cramps decrease in frequency with athletic training and acclimatization to hot weather. Increasing salt intake may be helpful.16
[Figure 34-1. Nomogram for approximating apparent temperature (vertical axis: apparent temperature in degrees Celsius; horizontal axis: relative humidity in percent; isopleths: dry-bulb temperature in degrees Celsius). Based on data from Steadman RG. A universal scale of apparent temperature. J Climate Appl Meteorol. 1984;23:1674–87. Draw a vertical line upward from the relative humidity value (horizontal axis) to meet the dry-bulb temperature isopleth; the vertical-axis value directly left of this point is the approximate apparent temperature.]
Reproductive Effects
Among men, frequent or prolonged exposure to heat can result in elevated testicular temperatures, causing a substantial decrease in sperm count.19 Occupational exposure to heat has been associated with delayed conception.20 Measures to enhance scrotal cooling have been shown to increase both the numbers and quality of spermatozoa.21 Data continue to accumulate suggesting that heat stress during pregnancy may cause neural tube defects.22–25 Current data associating this group of birth defects with exposure to environmental heat may not be sufficient to prove cause and effect. However, they are sufficiently strong to make it advisable that women who are pregnant or who may become pregnant avoid environments and physical activities that are likely to result in a substantial increase in core temperature.
EPIDEMIOLOGY OF HEAT-RELATED ILLNESS
Heat Waves
Prolonged spells of unusually hot weather can cause dramatic increases in mortality, particularly in the urban areas of temperate regions. Although such events are especially frequent in North America, a lethal summer heat wave killed thousands in Europe during the summer of 2003, underscoring the need for an international view of prevention. During the heat wave of 1980 in St. Louis, Missouri, some 300 more persons died than would have been expected on the basis of death rates observed before and after the heat wave.26 More recently, in the summer of 1995, record-breaking heat resulted in the loss of more than 700 lives in Chicago, largely in the course of a single week.27 In fact, more than 150 excess deaths occurred in a single day.28 A surprisingly small proportion of heat wave–related mortality is identified as being caused by or precipitated by the heat. In general, recognized heat-related deaths account for anywhere from none to less than two-thirds of the heat-wave mortality increase.29 The connection of heat with many heat wave–related deaths simply goes unrecognized. Retrospective reviews of death certificates and clinical records have shown that increases in three categories largely account for the heat-related excess: deaths due to cardiovascular, cerebrovascular, and respiratory diseases.29 As a practical matter, it may be difficult or impossible for a physician to distinguish the myocardial infarctions or strokes that would have occurred anyway from those occurring because of the heat. Frequently, the overall health effects of the heat are most evident in the office of the medical examiner or coroner, where elevated
mortality due to the heat presents as an abrupt increase in the number of sudden unattended deaths. (Such cases are generally referred to the medical examiner or coroner.) In severe heat, the sheer volume of such cases may preclude conducting an in-depth investigation of each one, and the absence of such data may further complicate the task of distinguishing those that are heat related. Finally, although efforts have been made to standardize postmortem diagnosis of heat-related cases, both the requirements for investigation and the interpretation of findings remain at the discretion of individual medical examiners. Nevertheless, the reported increases in numbers of deaths apparently due to cerebrovascular disease (largely stroke) and cardiovascular disease (principally ischemic heart disease) are biologically plausible. Some studies suggest that heat stress induces some degree of blood hypercoagulability.30,31 Thus, external heat may favor the development of thrombi and emboli and may cause an increase in fatal strokes and myocardial infarctions. The increase in mortality during heat waves is paralleled by an increase in nonspecific measures of morbidity. During hot weather, the numbers of hospital admissions and emergency room visits increase.26,32 Excess mortality due to heat waves occurs primarily in urban areas; suburban and rural areas are at far less risk.26,32 The urban predominance of adverse health consequences of the heat may be explained, in part, by the phenomenon of the urban “heat island.”33 The masses of stone, brick, concrete, asphalt, and cement that are typical of urban architecture absorb much of the sun’s radiant energy, functioning as heat reservoirs and reradiating heat during nights that would otherwise be cooler. In many urban areas, there are few trees to provide shading. In addition, tall buildings may effectively decrease wind velocity, decreasing in turn the cooling, convective, and evaporative effects of moving air.
Other factors contributing to the severity of heat-related health effects in cities include the relative poverty of some urban areas.26,32 Poor people are less able to afford cooling devices such as air conditioners and the energy needed to run them.
Impact on the Elderly
The elderly are at particularly high risk of severe, heat-related health effects. Except for infancy and early childhood, the risk of death due to heat increases throughout life as a function of age (Fig. 34-2). In St. Louis and Kansas City, Missouri, during the 1980 heat wave, about 71% of heatstroke cases occurred in persons age 65 and over, although this group constituted only about 15% of the population.27 A similar predominance of elderly casualties has been noted during other heat waves.34
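The disproportion in the St. Louis and Kansas City data can be made concrete: with about 71% of heatstroke cases arising in a group making up about 15% of the population, the implied rate ratio for persons 65 and over relative to everyone younger is roughly 14. A sketch of the arithmetic (the 71% and 15% figures are from the text; the function itself is illustrative):

```python
def rate_ratio(case_share: float, pop_share: float) -> float:
    """Ratio of the disease rate in a subgroup to the rate in the rest of
    the population, given the subgroup's share of cases and of population
    (treats each group's rate as its case count over its population size)."""
    return (case_share / pop_share) / ((1.0 - case_share) / (1.0 - pop_share))

# Persons aged 65+ in the 1980 St. Louis / Kansas City heat wave:
# 71% of heatstroke cases, about 15% of the population.
print(round(rate_ratio(0.71, 0.15), 1))  # roughly 13.9
```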
[Figure 34-2. U.S. heat-related deaths (ICD-9 codes E900.0–E900.9) for the years 1979–1998: death rates per 10 million person-years by age group, from <1 year through 85+ years.]
The predisposition to heat-related illness among the elderly may be explained, in part, by impaired physiological responses to heat stress. Vasodilation in response to heat requires increased cardiac output, but persons older than 65 are less likely to have the capacity to increase cardiac output and decrease systemic vascular resistance during hot weather.35 Moreover, the body temperature at which sweating begins increases with increasing age.36 The elderly are more likely to have underlying diseases or to be taking medications (major tranquilizers and anticholinergics) that have been reported to increase the risk of heatstroke.37,38,39,40 Finally, the elderly perceive differences in temperature less well than younger persons do. This attribute may render an older person less able to regulate his or her thermal environment.41
Other Factors Affecting Risk
Although their death rates due to heat are lower than those of the elderly, infants and young children are also at increased risk from the heat. Healthy babies kept in a hot area have been found to run temperatures as high as 103°F (39.4°C), and in babies with mild fever-causing illnesses, added heat stress may lead to frank heatstroke.42 Sensitivity to heat is greatest in children less than 1 year old and decreases quickly up to the age of about 5–9 years (Fig. 34-2). The risk of both fatal and nonfatal heatstroke is increased in infants and young children.43 Children with congenital abnormalities of the central nervous system and with diarrheal illness appear to be particularly vulnerable.42,43 Parents may contribute to risk by failing to give enough hypotonic fluid during the heat and by dressing or covering the child too warmly.43,44 Temperatures may approach 140°F (60°C) in cars parked in sunlight in warm weather, and the great hazard of leaving infants and young children in parked cars has been emphasized repeatedly.45,46 From news media accounts alone, one study identified 171 heat-related fatalities during the years 1995–2002 among children in stationary motor vehicles.47 Death rates due to heat in the United States are generally higher in males than in females. This trend is most evident among young adults and is much less evident at the extremes of age. The reasons for the apparent increased risk of males are not known, but differences between the sexes in patterns of thermal exposure (for example, in choice of occupation, recreational activities, and risk-taking behavior) may be maximal during young adult life and could be the causal factor. During an urban heat wave, the rate of heatstroke is disproportionately high in areas of low socioeconomic status. The association in the United States of black race with relatively low socioeconomic status may well explain the disproportionately high heatstroke rates of blacks in the United States.27,32 No biologically based vulnerability of any particular race has been shown. Chronic illnesses resulting in loss of the ability to care for oneself or in a bedfast or relatively immobile lifestyle are more frequent in heatstroke patients than in control subjects. No specific chronic disease is known to be as effective a predictor of heatstroke as this more general characterization.27,37 Socially isolated persons appear to be a high-risk group. In studies of the 1995 and 1999 heat waves in Chicago, factors such as living alone, not having access to transportation, or being confined to bed indicated an increased risk for the combined category of heatstroke death and death due to heat-related exacerbation of underlying cardiovascular disease.27,48 Persons with a history of prior heatstroke maintain thermal homeostasis in a hot environment less well than comparable volunteers who have never suffered heatstroke.49 Whether heatstroke damages the body’s ability to regulate its temperature, or thermoregulatory abnormalities antedate the first heatstroke, is not known. Although frequently referred to as a risk factor, the extent to which obesity contributes to heatstroke risk is unclear. Obese subjects exercising in a hot environment showed a greater increase in rectal temperature and heart rate than did lean subjects.50,51 Soldiers in the U.S. Army who died from exertional heatstroke during basic training in World War II were more likely to be obese than their peers.52 However, studies of heatstroke and fatal cardiovascular disease among the relatively sedentary, older persons principally at risk during urban summer heat waves have failed to demonstrate high body mass index as a risk factor.27,37 Neuroleptic (“major tranquilizing”) drugs have been strongly implicated in increasing risk from the heat in both animal and human studies.37,38,40,41,53 Neuroleptic drugs appear to impair thermoregulatory function in both directions, sensitizing to cold as well as to heat. Anticholinergics decreased the heat tolerance of human volunteers in laboratory tests: persons treated with anticholinergics while exposed to heat had a decrease or cessation of sweating and a rise in rectal temperature.39 Many commonly used prescription drugs (e.g., tricyclic antidepressants, antiparkinson agents) and nonprescription drugs (e.g., antihistamines, sleeping pills) have prominent anticholinergic effects, and in one study the use of such drugs was more common in heatstroke victims than in control subjects.37
PREVENTION OF HEAT-RELATED ILLNESS
In most parts of the United States, heat waves severe enough to threaten health do not occur every year; several relatively mild summers may intervene between major heat waves. The erratic occurrence of heat waves hinders prevention planning: it is logistically difficult to have adequate resources available when needed without wasting those resources when a heat wave does not materialize. Programs to prevent heat-related illness should concentrate on measures whose efficacy is supported by empirical data. Many heatstroke-prevention efforts for the community at large have been based on the distribution of electric fans to persons at risk. Nevertheless, systematic studies of urban heat waves have failed to demonstrate any protective effect of electric fans.27,37 Indices of heat stress predict a diminished cooling effect of air movement as dry-bulb temperature increases,6,14 and physiologic experimentation confirms the inability of increased air movement to improve heat tolerance at high temperatures.51 Fans thus appear unlikely to offer protection from heat under the conditions of very high ambient temperature at which heat-related health effects are most likely to occur. Accordingly, the distribution of free fans during heat waves as a public health measure should be abandoned. Air conditioning, on the other hand, is the single most effective intervention for the prevention of heatstroke.
In separate studies, the availability of home air conditioning was associated with a 70% decrease in fatality from the combined endpoint of either heatstroke or cardiovascular disease27 and a 98% decrease in fatal heatstroke.37 Moreover, both studies showed additional major reductions in risk (50% and 75%, respectively) from simply spending more time in air-conditioned places.27,37 Thus, such strategies as setting up air-conditioned heat-wave shelters and air conditioning the lobbies of apartment buildings with lower socioeconomic–status tenants may be effective in preventing heat wave–related illness and death. Even when shelters cannot be provided, elderly and other persons at high risk can be encouraged to spend a few hours each day at public air-conditioned places, such as movie theaters and shopping malls. Heatstroke is an occupational risk for an estimated six million Americans who work in “hot” industries (e.g., foundries, glassworks, and mines). To prevent heat-related illness among the occupationally exposed, the U.S. National Institute for Occupational Safety and Health (NIOSH) recommends acclimatizing new workers and those returning from leave, arranging frequent rest periods in a cool environment, scheduling hot operations for the coolest part of the day, making drinking water readily available, conducting preemployment and periodic medical examinations, and instructing workers and supervisors about preventive measures and early recognition of heat-related illnesses.11
COLD WEATHER
Seasonal Trends in Mortality Human mortality is highly seasonal. In the United States, the death rate is greatest in late winter (usually February) and lowest in late summer (August) (Fig. 34-3). A similar seasonal pattern of mortality
34
Temperature and Health
729
occurs in other countries in the temperate zones of both the Northern and Southern Hemispheres, although the mortality curves of the two hemispheres are 6 months out of phase.1 The wintertime increase in the death rate is most marked in the elderly and becomes increasingly prominent with advancing age. Among persons aged 45 years and younger, however, the pattern is reversed; the death rate is lower in the winter and greater in the summer.54 The extent of seasonal variation in mortality varies greatly by cause of death. The death rates for diseases of heart, cerebrovascular disease, pneumonia and influenza, and chronic obstructive pulmonary disease show substantial increases in the winter. In contrast, the occurrence of death due to malignant neoplasms remains virtually constant throughout the year.54 Some of the seasonal winter increase in deaths due to major chronic diseases such as stroke and myocardial infarction may reflect seasonal changes in underlying risk factors for vascular diseases. For example, blood pressure in humans fluctuates seasonally and is higher in the winter.55 Cold stress can enhance the coagulation of blood, possibly contributing to the winter excess of deaths due to stroke and ischemic heart disease.56 In addition, many types of exercise are practiced seasonally, with sedentary periods tending to occur in winter.57 The winter death increase cannot be attributed entirely to the direct effect of cold exposure. In the United States, the increase occurs even in states noted for their relatively mild winter temperatures (e.g., Florida and Arizona) at approximately the same magnitude as in colder states (e.g., Michigan and Montana).54 Low winter humidity may contribute to the winter death excess, since it favors the transmission of certain infectious agents, notably influenza.58 In addition, winter increases in deaths due to some types of unintentional injuries may reflect seasonal increases in certain behaviors. 
For example, deaths due to fire are more common in the winter, perhaps as a result of the use of fireplaces and heating devices. Finally, the peaks and valleys in the U.S. death rate have not always come in mid-to-late winter and late summer, respectively, as they usually do now. In the early part of the twentieth century, the peak was usually in February or March, and the nadir was in June rather than August.59 This change in seasonal pattern is further evidence that temperature is not the only determinant of seasonality in mortality.
Figure 34-3. Mean deaths per day by month among U.S. residents during the years 2004 and 2005.
Cold Stress and its Indices The two most important adaptive physiological responses to the cold are vasoconstriction and shivering. Peripheral vasoconstriction causes a rerouting of some blood away from cutaneous and other superficial vascular beds toward deeper tissues where the blood’s heat is less easily lost. In addition, blood is rerouted from the superficial veins of the limbs to the venae comitantes of the major arteries. Such rerouting activates a “countercurrent” mechanism by which arterial blood warms venous blood before the venous blood returns to the core. Conversely, venous blood cools arterial blood so that it gives up less heat when it reaches the periphery. The result is a fall in the temperature of superficial body parts in defense of core temperature.1,60 Humidity and radiant heat energy are less important in the evaluation of cold environments than in that of hot ones. Thus the popular “wind chill” index of Siple expresses the intensity of cooling expected from a cold environment as a function only of ambient temperature and wind speed:

H = (10.45 + 10√s − s)(33 − t)

where H is the wind chill expressed in kcal/m²/h, s is the wind speed in m/sec, and t is the ambient temperature in degrees Celsius.61 The value of H permits comparison of the cooling effect of various temperature and wind speed combinations. The subjective thermal perception associated with any given value of H is influenced greatly by one’s level of activity and the type and amount of clothing worn. Often, the wind-chill effect is described in terms of a wind-chill equivalent temperature. This is the temperature that would produce the same intensity of cooling as the temperature–wind speed combination under consideration if the wind speed were some relatively low reference value. A wind-chill equivalent temperature can be calculated from a modification of the Siple formula:

teq = 33 − [(10.45 + 10√s − s)(33 − t)] / (10.45 + 10√sref − sref)
where teq and t are the wind-chill equivalent and ambient temperatures in degrees Celsius, and s and sref the actual and reference wind speeds in m/sec. Wind-chill equivalent temperatures in degrees Celsius for a reference wind speed of 2 m/sec are shown in Fig. 34-4. The wind-chill formula of Siple has been criticized as being too sensitive to changes in wind speed when wind speed is low, and not sensitive enough to changes in wind speed at higher velocities.62 The formula is clearly only an approximation since, for any temperature, the wind chill is maximal at winds of 25 m/sec (56 mph) and actually decreases as wind speed goes even higher, a physical impossibility.
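The Siple cooling-power formula and the wind-chill equivalent temperature can be expressed directly in code. The following is a minimal sketch in Python; the function names are mine, not from the text:

```python
import math

def siple_cooling_power(t, s):
    """Siple-Passel cooling power H in kcal/m^2/h for ambient
    temperature t (degrees Celsius) and wind speed s (m/sec)."""
    return (10.45 + 10 * math.sqrt(s) - s) * (33 - t)

def wind_chill_equivalent(t, s, s_ref=2.0):
    """Temperature that would produce the same cooling power at the
    reference wind speed s_ref (2 m/sec, as in Fig. 34-4)."""
    ref_factor = 10.45 + 10 * math.sqrt(s_ref) - s_ref
    return 33 - siple_cooling_power(t, s) / ref_factor

# The wind-speed factor (10.45 + 10*sqrt(s) - s) peaks at s = 25 m/sec
# and declines at higher speeds -- the physically impossible behavior
# for which the formula is criticized in the text.
```

For example, an ambient temperature of −10°C with a 10 m/sec wind yields a wind-chill equivalent temperature of roughly −28°C at the 2 m/sec reference speed.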
730
Environmental Health
Figure 34-4. Nomogram of wind chill (dry-bulb temperature isopleths, in degrees Celsius, plotted against wind speed). A line corresponding to the value of the wind speed and drawn directly upward from the horizontal axis will intersect the curve of the measured dry-bulb temperature at a height corresponding to the wind-chill equivalent temperature, which can be read from the vertical axis. The curves are based on the formula of Siple and Passel61 and on a comparison to wind chill in relatively “still” air moving with a speed of 2 m/sec.
ILLNESSES CAUSED BY COLD
Hypothermia Hypothermia refers to a core body temperature below 35°C (95°F). The condition may be purposefully induced (e.g., to decrease oxygen consumption during surgery). More notably, hypothermia also occurs unintentionally as a result of exposure to cold environmental conditions (so-called accidental hypothermia). Unintentional hypothermia is a problem of considerable public health importance. As body temperature drops, consciousness becomes clouded, and the patient appears confused or disoriented. Intense vasoconstriction causes pallor of the skin. Shivering is maximal in the higher range of hypothermic core temperatures, but decreases markedly in intensity as body temperature falls further and hypothermia itself impairs thermoregulation. In severe hypothermia (body temperature below about 90°F or 30°C), consciousness is lost, respirations may become imperceptibly shallow, and the pulse may not be palpable.1 At such low temperatures, the myocardium becomes irritable and ventricular fibrillation is common. The patient may appear dead even though he or she may yet be revived with proper treatment. Persons found apparently dead in circumstances suggesting the possibility of hypothermia should be treated for this condition until death can be confirmed. In particular, the potential for recovery of cold-water drowning victims should not be underestimated, since there have been reports of virtually complete recovery in patients who were without an effective heartbeat for periods as long as two hours.63 Hypothermia occurs both as a direct consequence of overexposure to the cold (primary hypothermia) and as the apparent result of thermoregulatory failure due principally to other severe illness (e.g., sepsis, myocardial infarction, central nervous system damage, metabolic derangements). Cold exposure may also contribute to such secondary hypothermia.
Primary hypothermia has a better prognosis than hypothermia occurring as a result of concomitant illness.64 Death is also more likely in patients who present with a particularly low body temperature.65 Treatment of hypothermia depends on its severity. Noninvasive, external rewarming is appropriate for mildly hypothermic patients who have a perfusing cardiac rhythm. Invasive rewarming procedures, such as body cavity lavage, may be required if hypothermia is severe.
However, rapid rewarming or sudden alterations in other metabolic variables may precipitate ventricular fibrillation. Cardiopulmonary bypass with extracorporeal rewarming of the blood is a definitive treatment and may be required in patients with severe hypothermia or in patients who have no effective cardiac function.66,67 All but very mild hypothermia cases require intensive supportive medical care.
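The treatment rules in the preceding paragraphs (noninvasive external rewarming for mild cases with a perfusing rhythm, invasive rewarming for severe cases, and extracorporeal rewarming when effective cardiac function is absent) can be summarized as a decision sketch. This is only an illustration of the text’s logic, not clinical guidance; the function name and return strings are my own:

```python
def rewarming_strategy(core_temp_c: float, perfusing_rhythm: bool) -> str:
    """Map the chapter's hypothermia-treatment rules to a strategy.
    Illustrative only; the 35 C and 30 C thresholds are those in the text."""
    if core_temp_c >= 35.0:
        return "not hypothermic"
    if not perfusing_rhythm:
        # No effective cardiac function: cardiopulmonary bypass with
        # extracorporeal rewarming of the blood is definitive treatment.
        return "extracorporeal rewarming (cardiopulmonary bypass)"
    if core_temp_c < 30.0:
        # Severe hypothermia: invasive rewarming, e.g., body cavity lavage.
        return "invasive rewarming"
    # Mild hypothermia with a perfusing cardiac rhythm.
    return "noninvasive external rewarming"
```

In all branches, the text’s caveats still apply: rapid rewarming risks ventricular fibrillation, and all but very mild cases need intensive supportive care.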
Frostbite Local tissue injury as a result of exposure to cold may be seen in hypothermia cases but often occurs independently of it. Frostbite involves actual freezing of tissue. It affects primarily acral body parts (i.e., distal extremities, ears, and nose) and can occur over a period of minutes to hours in severe cold. Severe frostbite may result in tissue necrosis requiring amputation. Frostbite injuries may become particularly frequent during a spell of unusually cold weather.68
Nonfreezing Local Tissue Injury Perniosis, also called chilblains, is characterized by tender and/or pruritic, erythematous, or violaceous papules occurring in the skin of acral body parts, particularly the hands. When severe, the lesions may blister or ulcerate. The condition is typically present only during the colder months of the year, and women are afflicted more frequently than are men.69 The underlying pathophysiology may involve cold-induced ischemia of involved areas or a cold-mediated inflammatory reaction. Vasodilators (for example, nifedipine) may be useful both in treatment of the lesions and in prevention of recurrences.70 A condition known as “cold water immersion injury” or “trench foot” (when it affects the lower extremity) results from continuous exposure of body parts (most frequently, the lower extremities) to wet and above-freezing cold conditions for a period of days to weeks. Local tissue injury occurs, possibly from reduced blood flow due to prolonged vasoconstriction. When affected extremities are warmed, they at first become swollen and numb. Later, a phase of painful hyperemia develops. Still later, muscle weakness and atrophy and fibrosis may occur, and there may be other long-lasting sequelae, including persistent pain, hypoesthesia, or increased sensitivity to the cold.71 Anyone with prolonged continuous exposure to cold water and/or cold wet clothing is at risk. The condition is prevented by fully rewarming and drying the body at frequent intervals.
EPIDEMIOLOGY OF COLD INJURY
Hypothermia in the Elderly The extent to which indoor cold causes clinically significant hypothermia has been increasingly appreciated in recent years. In particular, the special vulnerability of elderly persons to this condition has been recognized. After the first year or so of life, the rate of death due to effects of the cold increases steadily with advancing age (Fig. 34-5). In the United States, approximately 700–1000 deaths due to cold exposure occur each year. More than half of these cases occur in persons aged 60 years or older,72 although persons in this age group comprise less than 17% of the population.73 The extent of hypothermia morbidity is difficult to measure; a nationwide study of hypothermia in New Zealand found an incidence of hypothermia hospital admissions that was 12 times the hypothermia death rate. However, hypothermia hospitalizations primarily involved infants, whereas hypothermia deaths occurred primarily among the elderly and among males 13–65 years old.74 A wintertime survey conducted in Great Britain of 1020 persons age 65 and over revealed that relatively few (0.5%) persons surveyed had hypothermic morning deep-body temperatures (<35°C) and none had hypothermic evening temperatures. Nevertheless, a substantial number (10%) had near-hypothermic temperatures (<35.5°C but >35°C).75 In contrast, 3.6% of 467 patients more than 65 years old admitted to London hospitals in late winter and early spring were hypothermic.76 The fact that hypothermia is relatively common among elderly persons admitted to hospitals, although virtually absent in the community, has been interpreted as showing that most elderly Britons with hypothermia are quickly hospitalized. The apparent cold sensitivity of the elderly may be due to physiological factors.
Collins and others found that a high proportion of persons age 65 and older failed to develop physiologically significant vasoconstriction in response to a controlled cold environment and that the proportion of such persons increased with the age of the cohort examined. These elderly subjects with abnormal vasoconstriction tended to have relatively low core temperatures.77 The basal metabolic rate (BMR) declines substantially with age, requiring elderly people to battle cold stress from a relatively low level of basal thermogenesis.78 Shivering, a mechanism by which metabolic thermogenesis can be increased, may be impaired in some older persons.79 Voluntary muscular activity also releases heat, but the elderly are more prone than others to debilitating chronic illnesses that limit mobility. Metabolic heat produced through the oxidation of brown fat is less available to the elderly, in whom this type of adipose tissue is less abundant than in children and younger adults.80
Elderly persons appear to perceive cold less well than younger persons and may voluntarily set thermostats to relatively low temperatures.81 In addition, the high cost of energy, together with the relative poverty of some elderly people, may discourage their setting thermostats high enough to maintain comfortable warmth.82
Drugs Predisposing to Hypothermia Ethanol ingestion is an important predisposing factor for hypothermia. The great majority of patients in many hypothermia case series are middle-aged alcoholic men.83,84 Ethanol produces vasodilation, interfering with the peripheral vasoconstriction that is an important physiological defense against the cold. Although ethanol-containing beverages are sometimes taken in cold surroundings for the subjective sense of warmth they produce, this practice is dangerous. Ethanol also predisposes to hypothermia by inhibiting hepatic gluconeogenesis, and thus producing hypoglycemia in carbohydrate-depleted persons (e.g., many chronic alcoholics). Ethanol-induced hypoglycemia has been clearly shown to produce hypothermia in healthy volunteers.85 Treatment with the neuroleptic drugs (phenothiazines, butyrophenones, and thioxanthenes) also predisposes to hypothermia. Chlorpromazine, the prototype drug of this group, has been used to induce hypothermia pharmacologically.86 Chlorpromazine suppresses shivering, probably by a central mechanism, and causes vasodilation. The hypothermic action of drugs of this class becomes more pronounced with decreasing ambient temperature.87
Other Hypothermia Risk Factors Infants under 1 year of age have a higher rate of death due to cold than do older children (Fig. 34-5). Neonates, especially premature or small-for-gestational-age babies, are at particularly high risk. Although the mechanisms for maintaining thermal homeostasis (vasoconstriction and thermogenesis by shivering) are present at birth, they function less effectively than in older children. Infants have a relatively large ratio of heat-losing surface to heat-generating volume, and the layer of insulating subcutaneous fat is relatively thin. Perhaps most importantly, a baby is unable to control his or her own environment. Babies are totally dependent on others to keep them warm, and if sufficient warmth is not provided, hypothermia results. Hypothermia in infants can be a substantial public health problem in areas with severe winter weather. During December and January of the winters of 1961–1962 and 1962–1963, 110 hypothermic (T < 90°F [32.2°C]) babies were admitted to hospitals in Glasgow, Scotland. Mortality in this group was 46%.88 Hypothermia, however, is not only a problem in cold climates. In tropical climates,
[Figure 34-5 plot: deaths per 10 million person-years, logarithmic scale, by age group from <1 year to 85+ years.]
Figure 34-5. U.S. cold-related deaths (ICD-9 Codes E901.0–E901.9) for the years 1979–1998, rates by age group indicated.
hypothermia among babies and young children can also be a problem in winter. Children and infants suffering from protein-calorie malnutrition are particularly susceptible.89 In older children and young adults, lethal hypothermia is relatively infrequent (Fig. 34-5). However, persons in this age group are still susceptible to an overwhelming cold stress. Unintentional immersion in very cold water can lead rapidly to hypothermia.90 Cold and wet weather may be especially dangerous, because the insulating properties of clothing are markedly reduced by moisture.91 The rate of death due to cold is greater in males than in females in all age groups. Behavioral differences (for example, in choice of occupation and recreational activities) resulting in increased frequency of exposure to cold may account for the particularly great relative risk of males during the teenage years through late middle age but do not fully explain the apparent difference between the sexes in susceptibility. Homelessness is an important hypothermia risk factor. Substantial proportions of hypothermia case series involve persons without a fixed address.92,93
Epidemiology of Frostbite Serious frostbite injury occurs predominantly among males and is less frequent among babies, young children, and the elderly than among other age groups. Alcohol intoxication plays a role in about half of the cases in each of several case series. Other factors frequently contributing to frostbite injury are psychiatric illness, vehicular failure or crash, and drug use. Hypothermia is frequently present. In one case series, 12% of frostbite patients had hypothermia (temperature less than 32°C).94 Finnish conscripts were found to be at increased risk of frostbite if they did not wear scarves or headgear with earflaps or if they did wear supposedly protective ointments.95
PREVENTION OF COLD-RELATED ILLNESS
Hypothermia is best prevented by limiting the cold stress of susceptible populations. Thus, programs to help the elderly poor receive financial assistance for wintertime heating bills may be helpful. In some areas, governmental agencies and/or utility companies have been involved in establishing programs that provide either direct financial aid toward the payment of elderly people’s energy bills or provisions for deferred payment. Awareness of the problem of neonatal hypothermia by pediatricians and communication of this concern to new parents may help prevent hypothermia in infants. Children and young adults who are at low risk from the cold should nevertheless take appropriate precautions when venturing into a cold environment. Clothing should provide sufficient insulation, and care should be taken that it does not get wet. One should especially guard against immersion in cold water. To avoid frostbite in below-freezing temperatures, skin exposure should be minimized.
REFERENCES
1. Collins KJ. Hypothermia: The Facts. New York: Oxford University Press; 1983. 2. Rowell LE. Human adjustments and adaptations to heat stress. Where and how? In: Folinsbee LI, Wagner JA, Borgia JF, et al., eds. Environmental Heat Stress: Individual Human Adaptations. New York: Academic Press; 1978:3–27. 3. Nadel ER, Roberts MF, Wenger CB. Thermoregulatory adaptations to heat and exercise: comparative responses of men and women. In: Folinsbee LI, Wagner JA, Borgia JF, et al., eds. Environmental Heat Stress: Individual Human Adaptations. New York: Academic Press; 1978:29–38. 4. Bonner RM, Harrison MH, Hall CJ, Edwards RJ. Effect of heat acclimatization on intravascular responses to acute heat stress in man. J Appl Physiol. 1976;41:708–13.
5. Wyndham CH, Rogers GG, Senay LC, Mitchell D. Acclimatization in a hot, humid environment: cardiovascular adjustments. J Appl Physiol. 1976;40:779–85. 6. Yaglou CP. Temperature, humidity, and air movement in industries: the effective temperature index. J Ind Hyg. 1927;9:297–309. 7. American Society of Heating, Refrigerating, and Air Conditioning Engineers (ASHRAE). Handbook of Fundamentals. Atlanta: ASHRAE; 1981. 8. American Society of Heating, Refrigerating, and Air Conditioning Engineers (ASHRAE). 1989 ASHRAE Handbook: Fundamentals. I-P ed. Atlanta: ASHRAE; 1989. 9. Lee DHK. Seventy-five years of searching for a heat index. Environ Res. 1980;22:331–56. 10. Minard D, Belding HS, Kingston JR. Prevention of heat casualties. JAMA. 1957;165:1813–8. 11. National Institute for Occupational Safety and Health. Criteria for a Recommended Standard: Occupational Exposure to Hot Environments. Revised Criteria 1986. Washington, DC: Government Printing Office; 1986. 12. Beshir MY, Ramsey JD, Burford CL. Threshold values for the botsball: a field study of occupational heat. Ergonomics. 1982;25:247–54. 13. Onkaram B, Stroschein LA, Goldman RF. Three instruments for assessment of WBGT and a comparison with WGT (botsball). Am Ind Hyg Assoc J. 1980;41:634–41. 14. Steadman RG. A universal scale of apparent temperature. J Climate Appl Meteorol. 1984;23:1674–87. 15. Hart GR, Anderson RJ, Crumpler CP, et al. Epidemic classical heat stroke: clinical characteristics and course of 28 patients. Medicine. 1982;61:189–97. 16. Knochel JP. Environmental heat illness: an eclectic review. Arch Intern Med. 1974;133:841–64. 17. Knochel JP. Heat stroke and related heat stress disorders. Dis Mon. 1989;35:301–77. 18. National Institute for Occupational Safety and Health. Criteria for a Recommended Standard: Occupational Exposure to Hot Environments. Washington, DC: U.S. Department of Health, Education, and Welfare; 1972. 19. Levine RI. Male fertility in hot environments [Letter]. JAMA.
1984;252:3250–1. 20. Rachootin P, Olsen J. The risk of infertility and delayed conception associated with exposures in the Danish workplace. J Occup Med. 1983;25:394–402. 21. Jung A, Eberl M, Schill WB. Improvement of semen quality by nocturnal scrotal cooling and moderate behavioural change to reduce genital heat stress in men with oligoasthenoteratozoospermia. Reproduction. 2001;121:595–603. 22. Miller P, Smith DW, Shepard TH. Maternal hyperthermia as a possible cause of anencephaly. Lancet. 1978;1:519–21. 23. Layde PM, Edmonds LD, Erickson JD. Maternal fever and neural tube defects. Teratology. 1980;21:105–8. 24. Lynberg MC, Khoury MJ, Lu X, Cocian T. Maternal flu, fever, and the risk of neural tube defects: a population-based case-control study. Am J Epidemiol. 1994;140(3):244–55. 25. Suarez L, Felkner M, Hendricks K. The effect of fever, febrile illnesses, and heat exposures on the risk of neural tube defects in a Texas-Mexico border population. Birth Def Res (Part A). 2004;70:815–9. 26. Jones TS, Liang AP, Kilbourne EM, et al. Morbidity and mortality associated with the July 1980 heat wave in St. Louis and Kansas City, Missouri. JAMA. 1982;247:3327–31. 27. Semenza JC, Rubin CH, Falter KH, et al. Heat-related deaths during the July 1995 heat wave in Chicago. N Engl J Med. 1996;335:84–90. 28. Centers for Disease Control and Prevention. Heat-related mortality—Chicago, July 1995. MMWR. 1995;44:577–80. 29. Kilbourne EM. Heat waves and hot environments. In: Noji E, ed. The Public Health Consequences of Disasters. New York: Oxford University Press; 245–69.
30. Keatinge WR, Coleshaw SRK, Easton JC, Cotter F, Mattock MB, Chelliah R. Increased platelet and red cell counts, blood viscosity, and plasma cholesterol levels during heat stress, and mortality from coronary and cerebral thrombosis. Am J Med. 1986;81:795–800. 31. Strother SV, Bull JMC, Branham SA. Activation of coagulation during therapeutic whole body hyperthermia. Thromb Res. 1986;43:353–60. 32. Applegate WB, Runyan JW, Jr, et al. Analysis of the 1980 heat wave in Memphis. J Am Geriatr Soc. 1981;29:337–42. 33. Clarke IF. Some effects of the urban structure on heat mortality. Environ Res. 1972;5:93–104. 34. Austin MO, Berry JW. Observations on one hundred cases of heatstroke. JAMA. 1956;161:1525–9. 35. Sprung CL. Hemodynamic alterations of heat stroke in the elderly. Chest. 1979;75:362–6. 36. Crowe JP, Moore RE. Physiological and behavioral responses of aged men to passive heating. J Physiol. 1973;236:43P–45P. 37. Kilbourne EM, Choi K, Jones TS, et al. Risk factors for heatstroke. A case-control study. JAMA. 1982;247:3332–6. 38. Wise TN. Heatstroke in three chronic schizophrenics: case reports and clinical considerations. Compr Psychiatry. 1973;14:263–7. 39. Littman RE. Heat sensitivity due to autonomic drugs. JAMA. 1952;149:635–6. 40. Adams BE, Manoguerra AS, Lilja GP, Long RS, Ruiz E. Heatstroke: associated with medications having anticholinergic effects. Minn Med. 1977;60:103–6. 41. Collins KJ, Exton-Smith AN, Dore C. Urban hypothermia: preferred temperature and thermal perception in old age. Br Med J. 1981;282:175–7. 42. Cardullo HM. Sustained summer heat and fever in infants. J Pediatr. 1949;35:24–42. 43. Danks DM, Webb DW, Allen J. Heat illness in infants and young children. A study of 47 cases. Br Med J. 1962;2:287–93. 44. Bacon C, Scott D, Jones P. Heatstroke in well-wrapped infants. Lancet. 1979;1:422–5. 45. Gibbs LI, Lawrence DW, Kohn MA. Heat exposure in an enclosed automobile. J La State Med Soc. 1995;147:545–6. 46.
Centers for Disease Control and Prevention. Heat-related illnesses and deaths—United States, 1994–1995. MMWR. 1995;44:465–8. 47. Guard A, Gallagher SS. Heat related deaths to young children in parked cars: an analysis of 171 fatalities in the United States, 1995–2002. Injury Prevention. 2005;11:33–7. 48. Naughton MP, Henderson A, Mirabelli MC, et al. Heat-related mortality during a 1999 heat wave in Chicago. Am J Prev Med. 2002;22:221–7. 49. Shapiro Y, Magazanik A, Udassin R, et al. Heat intolerance in former heatstroke patients. Ann Intern Med. 1979;90:913–6. 50. Bar-Or O, Lundegren HM, Buskirk ER. Heat tolerance of exercising obese and lean women. J Appl Physiol. 1969;26:403–9. 51. Haymes EM, McCormick RJ, Buskirk ER. Heat tolerance of exercising lean and obese prepubertal boys. J Appl Physiol. 1975;39:457–61. 52. Schickele E. Environment and fatal heat stroke: an analysis of 157 cases occurring in the army in the U.S. during World War II. Mil Surg. 1947;98:235–56. 53. Kollias J, Bullard RW. The influence of chlorpromazine on physical chemical mechanisms of temperature regulation in the rat. J Pharmacol Exp Ther. 1964;145:373–81. 54. Feinleib M. Statement of Manning Feinleib. In: Deadly Cold: Health Hazards due to Cold Weather. Washington, DC: Government Printing Office; 1984:85–125. 55. Giaconi S, Ghione S, Palumbo C, et al. Seasonal influences on blood pressure in high normal to mild hypertensive range. Hypertension. 1989;14:22–7. 56. Keatinge WR, Coleshaw SRK, Cotter F, Mattock M, Murphy M, Chelliah R. Increases in platelet and red cell counts, blood viscosity and arterial pressure during mild surface cooling: factors in mortality from coronary and cerebral thrombosis in winter. Br Med J. 1984;289:1405–8.
57. Dannenberg AL, Keller JB, Wilson PWF, Castelli WP. Leisure time physical activity in the Framingham offspring study. Am J Epidemiol. 1989;129:76–88. 58. Schulman JL, Kilbourne ED. Experimental transmission of influenza virus in mice: II. Some factors affecting incidence of transmitted infection. J Exp Med. 1963;118:267–75. 59. Rosenwaike I. Seasonal variation of deaths in the United States, 1951–1960. J Am Stat Assoc. 1966;61:706–19. 60. Maclean D, Emslie-Smith D. Accidental Hypothermia. Oxford: Blackwell Scientific Publications; 1977. 61. Siple PA, Passel CF. Measurement of dry atmospheric cooling in subfreezing temperatures. Proc Am Philos Soc. 1945;89:177–99. 62. Steadman RG. Indices of wind chill of clothed persons. J Appl Meteorol. 1971;10:674–83. 63. Young RSK, Zaineraris EL, Dooling EC. Neurological outcome in cold water drowning. JAMA. 1980;244:1233–5. 64. Miller JW, Danzl DF, Thomas DM. Urban accidental hypothermia: 135 cases. Ann Emerg Med. 1980;9:456–60. 65. Danzl DF, Pozos RS, Auerbach PS, et al. Multicenter hypothermia survey. Ann Emerg Med. 1987;16:1042–55. 66. Danzl DF, Pozos RS. Accidental hypothermia. N Engl J Med. 1994;331:1756–60. 67. Anonymous. Treatment of hypothermia. Med Let Drugs Ther. 1994;36:116–7. 68. Bishop HM, Collin J, Wood RAM, Morris PJ. Frostbite in Oxfordshire: the impact of a severe winter on an unprepared civilian population. Injury. 1984;15:379–80. 69. Goette DK. Chilblains (perniosis). J Am Acad Dermatol. 1990;23:257–62. 70. Rustin MHA, Newton JA, Smith NP, Dowd PM. The treatment of chilblains with nifedipine: the results of a pilot study, a double-blind placebo-controlled randomized study, and a long-term open trial. Br J Dermatol. 1989;120:267–75. 71. Mills WJ, Jr, Mills WJ, III. Peripheral non-freezing cold injury: immersion injury. Alaska Med. 1993;35:117–28. 72. National Center for Health Statistics. Public Use Mortality Data Tapes for the Years 1983–1993. Hyattsville, MD. 73. U.S. Bureau of the Census.
Decennial Census for 1990. 74. Taylor NAS, Griffiths RF, Cotter JD. Epidemiology of hypothermia: fatalities and hospitalisations in New Zealand. Aust N Z J Med. 1994;24:705–10. 75. Fox RH, Woodward PM, Exton-Smith AN, et al. Body temperatures in the elderly: a national study of physiological, social, and environmental conditions. Br Med J. 1973;1:200–6. 76. Goldman A, Exton-Smith AN, Francis G, O’Brien A. A pilot study of low body temperatures in old people admitted to hospital. J R Coll Physicians Lond. 1977;11:291–306. 77. Collins KJ, Dore C, Exton-Smith AN, et al. Accidental hypothermia and impaired temperature homeostasis in the elderly. Br Med J. 1977;1:353–6. 78. Shock NW, Watkin DM, Yiengst MJ, et al. Age differences in the water content of the body as related to basal oxygen consumption in males. J Gerontol. 1963;18:1–8. 79. Collins KJ, Easton JC, Exton-Smith AN. Shivering thermogenesis and vasomotor responses with convective cooling in the elderly. J Physiol. 1981;320:76P. 80. Heaton JM. The distribution of brown adipose tissue in the human. J Anat. 1972;112:35–9. 81. Watts AJ. Hypothermia in the aged: a study of the role of cold sensitivity. Environ Res. 1971;5:119–26. 82. Morgan R, King D, Blair A. Urban hypothermia. Many elderly people cannot keep warm in winter without financial hardship (Letter). Br Med J. 1996;312:124. 83. Centers for Disease Control and Prevention. Exposure-related hypothermia deaths—District of Columbia, 1972–1982. MMWR. 1982;31:669–71.
84. Weyman AE, Greenbaum DM, Grace WJ. Accidental hypothermia in an alcoholic population. Am J Med. 1974;56:13–21. 85. Haight JSJ, Keatinge WR. Failure of thermoregulation in the cold during hypoglycemia induced by exercise and ethanol. J Physiol. 1973;229:87–97. 86. Courvoisier S, Fournel J, Ducrot R, Kolsky M, Koetschet P. Propriétés pharmacodynamiques du chlorhydrate de chloro-3-(dimethylamino-3′-propyl)-10-phenothiazine (4,560 R.P.); étude expérimentale d’un nouveau corps utilisé dans l’anesthésie potentialisée et dans l’hibernation artificielle. Arch Int Pharmacodyn Ther. 1953;92:305–61. 87. Higgins EA, Lampietro PF, Adams T, Holmes DD. Effects of a tranquilizer on body temperature. Proc Soc Exp Biol Med. 1964;115:1017–9. 88. Arneil GC, Kerr MM. Severe hypothermia in Glasgow infants in winter. Lancet. 1963;2:756–9. 89. Cutting WAM, Samuel GA. Hypothermia in a tropical winter climate. Indian Pediatr. 1971;8:752–7.
90. Bullard RW, Rapp GM. Problems of body heat loss in water immersion. Aerospace Med. 1970;41:1269–77. 91. Pugh LGC. Clothing insulation and accidental hypothermia in youth. Nature. 1966;209:1281–6. 92. Centers for Disease Control and Prevention. Hypothermia-related deaths—Cook County, Illinois, November 1992—March 1993. MMWR. 1993;42:917–9. 93. Centers for Disease Control and Prevention. Hypothermia-related deaths—New Mexico, October 1993—March 1994. MMWR. 1995;44: 933–5. 94. Valnicek SM, Chasmar LR, Clapson JB. Frostbite in the prairies. A 12-year review. Plast Reconstr Surg. 1993;92:633–41. 95. Lehmuskallio E, Lindholm H, Koskenvuo K, Sarna S, Friberg O, Viljanen A. Frostbite of the face and ears: epidemiological study of risk factors in Finnish conscripts. Br Med J. 1995;311: 1661–3.
35
Ionizing Radiation
Arthur C. Upton
Since the discovery of the x-ray, in 1895, studies of the health effects of ionizing radiation have received continuing impetus from the expanding uses of radiation in medicine, science, and industry, as well as from the peaceful and military applications of atomic energy.1 The extensive knowledge of the effects of ionizing radiation generated by these studies has prompted strategies for protection against radiation that have been influential in shaping measures for protection against other hazardous physical and chemical agents as well.
PHYSICAL PROPERTIES OF IONIZING RADIATION

Ionizing radiations differ from other forms of radiant energy in being able to disrupt atoms and molecules on which they impinge, giving rise to ions and free radicals in the process. Ionizing radiations include (a) electromagnetic radiations of short wavelength and high energy (e.g., x-rays and gamma rays) and (b) particulate radiations, which vary in mass and charge (e.g., electrons, protons, neutrons, alpha particles, and other atomic particles). Ionizing radiation, impinging on a living cell, collides randomly with atoms and molecules in its path, giving rise to ions and free radicals and depositing enough localized energy to damage genes, chromosomes, or other vital macromolecules. The distribution of such events along the path of the radiation—that is, the quality or linear energy transfer (LET) of the radiation—varies with the energy and charge of the radiation, as well as the density of the absorbing medium.2 Along the path of an alpha particle, for example, the collisions occur so close together that the radiation typically loses all of its energy in traversing only a few cells, whereas along the path of an x-ray the collisions are far enough apart that the radiation may be able to traverse the entire body (Fig. 35-1). Because the biological effects of ionizing radiation result from the deposition of energy in exposed cells, doses of ionizing radiation are customarily expressed in terms of energy deposition (Table 35-1). On traversing a given cell, a densely ionizing radiation (e.g., an alpha particle) is more likely than a sparsely ionizing radiation (e.g., an x-ray) to deposit enough energy in a critical site, such as a gene or chromosome, to injure the cell.3–6 Hence an additional dose unit (the equivalent dose) is used in radiation protection to enable different types of radiation to be normalized in terms of their relative biological effectiveness (RBE). The equivalent dose (expressed in sievert [Sv]) is the dose in gray (Gy) multiplied by an appropriate weighting factor to adjust for differences in RBE; that is, 1 Sv of alpha radiation is that dose (in gray) of alpha radiation that is equivalent in biological effectiveness to 1 Gy of gamma rays (Table 35-1).

The uptake, distribution, and retention of an internally deposited radionuclide vary, depending on the physical and chemical properties of the element in question. Once deposited, the amount of radioactivity remaining in situ decreases with time as a result of both physical decay and biological removal. The physical half-lives of the different radionuclides vary, from less than a second in some to billions of years in others.2,3 Biological half-lives also vary, tending to be longer with radionuclides that localize in bone (e.g., radium, strontium, plutonium) than with those that are deposited predominantly in soft tissue (e.g., iodine, cesium, tritium).4

SOURCES AND LEVELS OF IONIZING RADIATION IN THE ENVIRONMENT

Life has evolved in the continuous presence of natural background radiation. The major sources of natural background radiation to which the human population is exposed are (a) cosmic rays, which originate in outer space; (b) terrestrial radiations, which emanate from the thorium, uranium, radium, and other radioactive constituents of the earth's crust; (c) internal radiation, which is emitted by the potassium-40, carbon-14, radium, and other radionuclides normally present in living cells; and (d) radon and its daughter elements, which are inhaled in indoor air (Table 35-2). The dose from cosmic rays varies appreciably with altitude, being higher by a factor of 2 in the mountains than at sea level and being higher by orders of magnitude at jet aircraft altitudes.7 Likewise, the dose from internally deposited radium may be higher by a factor of 2 or more in geographic regions where the earth's crust is rich in this element.5,7 The dose to the bronchial epithelium from radon also may vary by an order of magnitude or more, depending on the concentration of radon in indoor air, and it typically exceeds by far the dose from all other sources combined.5,7 In cigarette smokers, moreover, portions of the bronchial epithelium may receive additionally as much as 0.2 Sv (20 rem) per year from the polonium that is normally present in cigarette smoke.7

In addition to natural background radiation, populations in the modern world are exposed to radiation from various artificial sources as well. The largest such source is the use of x-rays in medical diagnosis (Table 35-2). Lesser sources include (a) radioactive minerals in building materials, phosphate fertilizers, and crushed rock; (b) radiation-emitting components of TV sets, video display terminals, smoke detectors, and other consumer products; (c) radioactive fallout from nuclear weapons and nuclear accidents; and (d) radionuclides released in the production of nuclear power (Table 35-2).
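The equivalent-dose definition and the half-life behavior discussed above can be sketched numerically. This is only an illustrative sketch: the weighting factor of 20 for alpha particles and the iodine-131 half-lives used below are commonly cited values assumed for the example, not figures taken from this chapter.

```python
# Two relations from the text, with illustrative (assumed) values:
#   equivalent dose (Sv) = absorbed dose (Gy) x radiation weighting factor
#   1/T_effective = 1/T_physical + 1/T_biological

def equivalent_dose_sv(absorbed_dose_gy: float, weighting_factor: float) -> float:
    """Equivalent dose in sievert from absorbed dose in gray."""
    return absorbed_dose_gy * weighting_factor

def effective_half_life(t_phys: float, t_biol: float) -> float:
    """Combine physical and biological half-lives (same time units)."""
    return (t_phys * t_biol) / (t_phys + t_biol)

# 0.01 Gy of alpha radiation with an assumed weighting factor of 20:
print(equivalent_dose_sv(0.01, 20))              # 0.2 Sv
# Iodine-131: ~8-day physical half-life and an assumed ~80-day
# biological half-life in the thyroid:
print(round(effective_half_life(8.0, 80.0), 1))  # ~7.3 days
```

The effective half-life is always shorter than either component, which is why retained activity falls off faster than physical decay alone would suggest.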
Additional doses of radiation are received by workers in various occupations, depending on their particular work assignments and working conditions. The average annual effective dose received occupationally by monitored workers in the United States is lower than the dose from natural background radiation, and in any given year less
Copyright © 2008 by The McGraw-Hill Companies, Inc.
than 1% of such workers receive a dose that approaches the maximum permissible yearly limit of 50 mSv (5 rem).5,9 Radiation accidents have been another source of exposure for workers and members of the public.10–12 In spite of elaborate precautions, some 285 nuclear reactor accidents (excluding the Chernobyl accident) were reported in various countries between 1945 and 1987, resulting in the exposure of more than 1,350 persons and 33 fatalities.10 In the Chernobyl accident alone, enough radioactivity was released to require the evacuation of tens of thousands of people and farm animals from the surrounding area and to result in a collective committed effective dose to the Northern Hemisphere of 600,000 person-Sv (60,000,000 person-rem).4,11,12 The large amounts of radioactive iodine (>600 PBq) that were released in the accident10 have since been implicated in an increase in the incidence of thyroid cancer in Byelorussia and the Ukraine, as noted below. More numerous than reactor accidents, although less catastrophic, are accidents involving medical and industrial sources.12 In 1987, for example, a cesium-137 radiotherapy source that was inadvertently dismantled by junk dealers severely contaminated parts of Goiania, Brazil, exposing more than 120 persons, 54 of whom required hospitalization and four of whom were injured fatally.12

Figure 35-1. Differences among various types of ionizing radiation in penetrating power in tissue.2 (The figure compares ranges in tissue: alpha, 210Po at 5.3 MeV, range 0.037 mm; beta, 14C at 0.154 MeV maximum energy, maximum range 0.29 mm, typically less; beta, 32P at 1.71 MeV maximum energy, maximum range 8 mm, typically less; gamma, 125I at 0.035 MeV, average distance to collision 33 mm; gamma, 60Co at 1.33 MeV, average distance to collision 164 mm.)

TABLE 35-2. AVERAGE AMOUNTS OF IONIZING RADIATION RECEIVED ANNUALLY FROM DIFFERENT SOURCES BY A MEMBER OF THE U.S. POPULATION

Source                            Dose* (mSv)   (mrem)   (%)
Natural Background
  Radon†                          2.0           200      55
  Cosmic                          0.27          27       8
  Terrestrial                     0.28          28       8
  Internal                        0.39          39       11
  Total natural                   2.94          294      82
Artificial
  X-ray diagnosis                 0.39          39       11
  Consumer products               0.14          14       4
  Occupational                    0.10          10       3
  Nuclear fuel cycle              <0.01         <1.0     <0.3
  Nuclear fallout                 <0.01         <1.0     <0.03
  Miscellaneous‡                  <0.01         <1.0     <0.03
  Total artificial                0.63          63       18
Total natural and artificial      3.57          357      100

*Average effective dose to soft tissues, excluding bronchial epithelium. †Average effective dose to bronchial epithelium alone. ‡Department of Energy facilities, smelters, transportation, etc. Source: Adapted from National Council on Radiation Protection and Measurements. Ionizing Radiation Exposure of the Population of the United States. NCRP Report 93. Bethesda, MD: National Council on Radiation Protection and Measurements; 1987;7 and National Academy of Sciences Advisory Committee on the Biological Effects of Ionizing Radiation. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII Phase 2. Washington, DC: National Academy of Sciences, National Academies Press; 2006.8

TABLE 35-1. QUANTITIES AND DOSE UNITS OF IONIZING RADIATION

Quantity Being Measured     Definition                                              Dose Unit*
Absorbed dose               Energy deposited in tissue                              Gray (Gy)
Equivalent dose             Absorbed dose weighted for the relative biological
                            effectiveness of the radiation                          Sievert (Sv)
Effective dose              Equivalent dose weighted for the sensitivity of the
                            exposed organ(s)                                        Sievert (Sv)
Collective effective dose   Effective dose applied to a population                  Person-Sv
Committed effective dose    Cumulative effective dose to be received from a
                            given intake of radioactivity                           Sievert (Sv)
Radioactivity               One atomic disintegration per second                    Becquerel (Bq)

*
The units of measure listed are those of the International System, introduced in the 1970s to standardize usage throughout the world.3 They have largely supplanted the earlier units; namely the rad (1 rad =100 ergs/g = 0.01 Gy); the rem (1 rem = 0.01 Sv); and the curie (1 Ci = 3.7 × 1010 disintegrations per second = 3.7 × 1010 Bq).
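The legacy-to-SI conversions in the footnote above reduce to fixed multiplicative factors, which a few lines of code make explicit. The 5-rem example is the annual occupational limit quoted elsewhere in the chapter; nothing else here is assumed beyond the footnote's factors.

```python
# Fixed conversion factors from the footnote:
#   1 rad = 0.01 Gy, 1 rem = 0.01 Sv, 1 Ci = 3.7e10 Bq
RAD_TO_GY = 0.01
REM_TO_SV = 0.01
CI_TO_BQ = 3.7e10

def rem_to_millisievert(rem: float) -> float:
    """Convert a dose in rem to millisievert."""
    return rem * REM_TO_SV * 1000.0

def curie_to_becquerel(ci: float) -> float:
    """Convert activity in curies to becquerels."""
    return ci * CI_TO_BQ

# The 5-rem annual occupational limit expressed in SI units:
print(rem_to_millisievert(5))   # 50.0 mSv
print(curie_to_becquerel(1))    # 3.7e+10 Bq
```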
RADIATION EFFECTS

Types of Effects

In radiation protection, it is customary to distinguish between effects for which there are dose thresholds and effects for which there may be no dose thresholds. The former—so-called nonstochastic (or deterministic) effects—include various tissue reactions that are elicited only by doses large enough to kill many cells in the affected organs.13 The latter, by contrast—which include the mutagenic and carcinogenic effects of radiation—are viewed as stochastic (or probabilistic) phenomena of a type that may be produced by a subtle change within a single cell in an affected organ and which may therefore be expected to increase in frequency as linear-nonthreshold functions of the dose of radiation.3–6,8
Effects on Genes and Chromosomes

Any molecule in the cell may be damaged by ionizing radiation, but damage to a single gene, unless properly repaired, may permanently alter or kill the cell. Such damage may be caused by the radiation energy that is deposited within an affected cell itself, or it may be caused by the effects of radiation on one or more of its neighboring cells (the so-called "bystander effect").14 A dose that is large enough
to kill the average dividing cell (1–2 Sv) suffices to cause dozens of lesions in its DNA. Most such lesions tend to be reparable, depending on the effectiveness of the cell's repair processes, but residual damage, expressed in the form of mutations, appears to increase as a linear-nonthreshold function of the dose in human somatic cells and the cells of other organisms. The frequency of such mutations approximates 10−5–10−6 per locus per Sv, depending on the genetic locus and conditions of irradiation.4,8 Chromosomal aberrations also increase in frequency with the dose of ionizing radiation, approximating 0.1 aberration per cell per Sv in the low-to-intermediate dose range (Fig. 35-2). The dose-dependent increase in the frequency of such aberrations, which has been reported to be detectable in radiation workers and persons residing in areas of elevated natural background radiation levels, may be of use as a biological dosimeter in radiation accident victims.16,17 The yields of mutations and chromosome aberrations produced by a given dose of low-LET radiation are lower at low dose rates than at high dose rates; but the weight of evidence suggests that there may be no threshold in the dose-response relationship for these effects.5,8,18 Extensive studies of the children of the A-bomb survivors have been largely negative thus far, but the findings are not incompatible statistically with the results of experiments on laboratory animals, in which heritable mutagenic effects of radiation have been well documented.5,8 On the basis of the available data, it is estimated that a dose in excess of 1.0 Sv would be required to double the frequency of heritable mutations in the human species, and that less than 1% of all genetically related human diseases are attributable to natural background radiation (Table 35-3).
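The per-Sv frequencies quoted in this paragraph lend themselves to simple expected-value arithmetic under the linear-nonthreshold assumption the text describes. The 0.5 Sv dose chosen below is purely illustrative, not a value from the chapter.

```python
# Expected yields implied by the frequencies quoted in the text:
# ~0.1 dicentric aberration per cell per Sv (low-to-intermediate doses),
# and ~1e-6 to 1e-5 mutations per locus per Sv.
ABERRATIONS_PER_CELL_PER_SV = 0.1
MUTATIONS_PER_LOCUS_PER_SV = (1e-6, 1e-5)  # quoted range

def expected_aberrations(dose_sv: float) -> float:
    """Expected dicentrics per cell, linear-nonthreshold assumption."""
    return ABERRATIONS_PER_CELL_PER_SV * dose_sv

dose = 0.5  # Sv, illustrative
print(expected_aberrations(dose))  # 0.05 aberrations per cell
lo, hi = (rate * dose for rate in MUTATIONS_PER_LOCUS_PER_SV)
print(f"{lo:.1e} to {hi:.1e} mutations per locus")
```

Frequencies this low are why aberration scoring in lymphocytes, rather than direct mutation counting, is the practical biological dosimeter mentioned in the text.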
Figure 35-2. Frequency of dicentric chromosome aberrations in human lymphocytes in relation to dose, dose rate, and quality of irradiation in vitro. (The figure plots dicentrics per cell, 0–2.0, against dose, 0–5 Gy, for neutrons of 0.7, 7.0, and 14.7 MeV; 250 kVp x-rays at 1.0 Gy/min and at 0.2 Gy/h; and 60Co gamma rays at 0.5 Gy/min and at 0.15 Gy/h.) (Source: Modified from Lloyd DC, Purrott RJ. Chromosome aberration analysis in radiological protection dosimetry. Radiat Protect Dosim. 1981;1:19–28.15)
Cytotoxic Effects

As noted early in the twentieth century by Bergonié and Tribondeau, cells generally vary in radiosensitivity in proportion to their rate of proliferation and inversely in relation to their degree of differentiation. Cells of only a few types (e.g., lymphocytes and oocytes) are radiosensitive in a nonproliferative state. The percentage of clonogenic human cells retaining the ability to proliferate decreases exponentially with increasing dose, acute exposure to 1–2 Sv typically sufficing to reduce the surviving population by 50%. Successive exposures tend to be less than fully additive in their cytotoxicity if they are sufficiently separated in time, owing to repair of radiation damage during the interim.4,6,8

Through cytotoxic effects on dividing cells, intensive irradiation can give rise to a wide variety of acute and chronic tissue reactions, depending on the tissue or organ irradiated, the dose, and the conditions of exposure.4 In such reactions—exemplified by erythema of the skin, depression of the blood count, impairment of fertility, and cataract of the lens—interference with normal cell replacement in the exposed area leads to hypoplasia, functional disturbances, and atrophy of the affected part. If enough stem cells remain viable to repopulate the tissue in question, regeneration may ensue within days or weeks; however, a second wave of degenerative changes may occur months or years later, as a result of residual damage and gradually progressive radiation-induced
TABLE 35-3. ESTIMATES OF THE RISKS OF GENETIC DISORDERS IN CHILDREN THAT ARE ATTRIBUTABLE TO IRRADIATION OF THEIR PARENTS

                                            Natural Incidence    Risk per Sv      Risk from Natural Background
                                            per Million          per Million      Irradiation per Million
Disease Class                               Liveborn Children    Liveborn Children    Liveborn Children* (No.)   (%)
Autosomal dominant and X-linked diseases    16,500               ~750–1500            22–45                      ~0.2
Autosomal recessive diseases                7500                 ~0                   <1                         <1
Chromosomal diseases                        4000                 †                    †                          †
Chronic multifactorial diseases             650,000‡             ~250–1200            8–36                       ~0.004
Congenital abnormalities                    60,000               ~2000§               60§                        0.1
Total                                       738,000              ~4000                ~90–140                    ~0.02

*Based on an assumed dose rate of 1 mSv per year and a genetic doubling dose of 1 Gy. †Risk of chromosomal diseases is assumed to be subsumed under the risk of autosomal dominant and X-linked diseases and, in part, under the risk of congenital abnormalities. ‡Frequency in the general population. §Estimated on the basis of mouse data, without recourse to the doubling-dose method. (Based on data from NAS, 2006 and Sankaranarayanan, 2001.19)
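The natural-background column of Table 35-3 can be reproduced from its footnote with one extra assumption: a reproductive generation of roughly 30 years, which is not stated in the footnote and is used here only for illustration.

```python
# Reproducing the "risk from natural background" column of Table 35-3.
# Footnote: assumed dose rate of 1 mSv per year.
DOSE_RATE_SV_PER_YEAR = 0.001
GENERATION_YEARS = 30  # assumed reproductive generation, not in the footnote

def background_risk(risk_per_sv_per_million: float) -> float:
    """Cases per million liveborn children from natural background."""
    cumulative_dose_sv = DOSE_RATE_SV_PER_YEAR * GENERATION_YEARS  # 0.03 Sv
    return risk_per_sv_per_million * cumulative_dose_sv

# Autosomal dominant and X-linked diseases, ~750-1500 per million per Sv:
print(background_risk(750), background_risk(1500))  # 22.5 45.0, i.e., the "22-45" entry
```

With this assumption, the quoted risk range of ~750–1500 per million per Sv times a cumulative parental dose of 0.03 Sv gives 22–45 cases per million, matching the table's entry.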
fibrosis of the exposed connective tissue and vasculature.4,20 Depending on their anatomical location and severity, such changes can cause a dose-dependent decrease in the long-term survival of the affected individuals.21,22
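The exponential decline in clonogenic survival described in this section can be sketched with a simple halving-dose model. The halving dose of 1.5 Sv used below is the midpoint of the 1–2 Sv range quoted in the text and is an illustrative assumption, not a measured constant.

```python
# Exponential cell-survival sketch: the text states that an acute
# exposure of 1-2 Sv typically halves the clonogenic population.
D_HALF = 1.5  # Sv per halving of survival; assumed midpoint of 1-2 Sv

def surviving_fraction(dose_sv: float) -> float:
    """Fraction of clonogenic cells surviving an acute dose."""
    return 0.5 ** (dose_sv / D_HALF)

print(surviving_fraction(1.5))  # 0.5 after one halving dose
print(surviving_fraction(4.5))  # 0.125 after three halving doses
```

This single-exponential form ignores the repair between well-separated exposures noted in the text, which would make fractionated doses less than fully additive.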
THE ACUTE RADIATION SYNDROME
Intensive irradiation of the hemopoietic system, gastrointestinal tract, lungs, or brain can cause the acute radiation syndrome. This syndrome may take one of several forms, depending on the size and anatomical distribution of the dose (Table 35-4). In each of the forms, anorexia, nausea, and vomiting typically occur within minutes or hours after irradiation, to be followed by a symptom-free interval that lasts until the onset of the main phase of the illness.
Carcinogenic Effects

Cancers of various types have been observed to increase in frequency with the dose of ionizing radiation in atomic bomb survivors, radiotherapy patients, early radiologists, radium dial painters, uranium miners, and other irradiated human populations.4,5,23 Such growths have not appeared until years or decades after irradiation, and none has exhibited features identifying it as having been produced specifically by radiation, as opposed to some other cause. The causal connection between such cancers and previous irradiation can, therefore, be inferred only from appropriate epidemiological analysis of the dose-incidence relationship.5,6,8
The most extensive dose-response data available thus far have come from the study of atomic bomb survivors, in whom the overall incidence of cancer has increased roughly in proportion with the radiation dose (Fig. 35-3). The magnitude of the dose-dependent increase varies, however, from one type of cancer to another, and not all types of cancer appear to have been affected. The most extensive data available to date concerning dose-response relationships for individual types of cancer pertain to leukemia, cancer of the female breast, and cancer of the thyroid gland.

Leukemia. The frequencies of all major types of leukemia, except chronic lymphocytic leukemia, have been observed to increase with dose after exposure of the whole body or a major part of the hemopoietic system. In A-bomb survivors and other irradiated populations, the increases have appeared within 2–5 years after exposure; have been dose-dependent, averaging approximately 1–3 cases per 10,000 persons per year per Sv to the bone marrow over the first 25 years after irradiation; and have persisted for 15 years or longer, depending on the type of leukemia, age at irradiation, and other variables.5,8 A comparable excess has been reported in radiation workers, based on combined analyses of different occupational cohorts.5,24,25 While the data do not suffice to define the shape of the dose-incidence relationship precisely, they appear to be most consistent with a linear-quadratic function.5,6 Leukemia has also been observed to be increased in frequency in children who were x-irradiated prenatally through the abdominal radiographic examination of their mothers, the increase approximating 25 cases per 10,000 per Sv per year during the first 10 years of life.5,8,25 Although no such increase was evident in prenatally exposed A-bomb survivors, the difference is not statistically significant in view of the limited numbers of such survivors.25 Irradiation of maternal or
TABLE 35-4. MAJOR FORMS AND FEATURES OF THE ACUTE RADIATION SYNDROME

Time after Irradiation: Cerebral Form (>50 Sv to Brain); Gastrointestinal Form (10–20 Sv to Intestines); Hemopoietic Form (2–10 Sv to Bone Marrow); Pulmonary Form (>6 Sv to Lungs)

First day:
  Cerebral form: nausea, vomiting, diarrhea, headache, disorientation, ataxia, coma, convulsions, death
  Gastrointestinal form: nausea, vomiting, diarrhea
  Hemopoietic form: nausea, vomiting, diarrhea
  Pulmonary form: nausea, vomiting

Second week:
  Gastrointestinal form: nausea, vomiting, diarrhea, fever, erythema, prostration, death

Third–sixth weeks:
  Hemopoietic form: weakness, fatigue, anorexia, fever, hemorrhage, epilation, recovery or death

Second–eighth months:
  Pulmonary form: cough, dyspnea, fever, chest pain, respiratory failure

Source: Data from United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Sources, Effects, and Risks of Ionizing Radiation, Report to the General Assembly, with Annexes. New York: United Nations; 1988.11
paternal germ cells also has been postulated to account for excesses of leukemia that have been observed in children subsequently conceived by some exposed individuals, but the weight of evidence argues against this hypothesis.25

Figure 35-3. Dose-response relationship for the relative risk of solid cancer, all types combined, in atomic bomb survivors, 1958–1994. The dashed curves represent ±1 standard error for the smoothed curve. The unity baseline corresponds to the zero-dose value for survivors within 3 km of the bombs, and the horizontal dotted line represents the alternative zero-dose baseline when the more distal survivors are also included. The inset shows the same information for the fuller dose range. (The figure plots relative risk against gamma-ray dose equivalent, 0–0.5 Sv in the main panel and 0–2.0 Sv in the inset.) (Source: Modified from Pierce DA, Preston DL. Radiation-related cancer risks at low doses among atomic bomb survivors. Radiat Res. 2000;154:178–86.23)

Breast Cancer. The incidence of breast cancer has appeared to increase in proportion to the radiation dose in women surviving A-bomb irradiation, women given radiotherapy to the breast for acute postpartum mastitis, women fluoroscoped repeatedly in the treatment of pulmonary tuberculosis with artificial pneumothorax, and women employed as radium dial painters.4,5,26 In all four groups, the excess did not become evident until at least 5–10 years after irradiation, depending on age at the time of exposure, and it has persisted for the duration of follow-up. The excess, averaged over all ages, has also been of similar magnitude in each group, in spite of marked differences among the groups in the rapidity with which the total doses of radiation were received, implying that successive small doses were highly additive in their cumulative carcinogenic effects.4,5,26 Susceptibility decreases markedly with increasing age at the time of irradiation, little excess being detectable in women exposed beyond the age of 40.27,28 Following irradiation in childhood, moreover, the resulting cancers are similar in age distribution to those occurring in the general population, implying that expression of the carcinogenic effects of radiation on the breast depends on the hormonal stimulation associated with sexual maturation.27,28 In those A-bomb survivors who were the first to develop tumors, the excess was disproportionately large, suggesting that such women may have represented a genetically susceptible subgroup.28

Thyroid Gland. Dose-dependent excesses of thyroid cancer have been observed in A-bomb survivors, patients treated with x-rays for various benign conditions in childhood, Marshall Islanders and others exposed during childhood to radioactive fallout from nuclear weapons tests, and children exposed to radionuclides from the Chernobyl accident.4,5 The cancers have consisted mainly of papillary carcinomas and have typically been preceded by a latent period of 10 years or longer, after which their frequency has remained elevated for the duration of follow-up. Children appear to be several times more susceptible to the induction of such tumors than adults, and females several times more susceptible than males.4,5 The dose-incidence relationship after therapeutic x-irradiation of the neck in infancy has been observed to be consistent with a linear-nonthreshold function, corresponding to approximately four additional cancers per 10,000 persons per Sv per year, with an excess evident at doses as low as 65 mSv.4,5,8 No excess has been detectable in persons who have received as much as 0.5 Gy to the thyroid from iodine-131 administered for diagnostic purposes, however, which implies that the radiation emitted by this radionuclide is appreciably less carcinogenic to the thyroid than external x- or gamma radiation, possibly because of spatial and temporal differences in the distribution of the radiation within the gland.4,5

Assessment of the Risks from Low-Level Exposure. Although existing evidence does not suffice to define precisely the dose-incidence relationship for the carcinogenic effects of low-level radiation or to exclude the possibility that a threshold for such effects may exist in the millisievert dose range, the available epidemiologic and experimental data argue against the likelihood of such a threshold, in spite of evidence that cells have some capacity to adapt to low-level radiation.4–6,8 Attempts to estimate the risks of radiation-induced cancers from low doses have, therefore, generally been based on the assumption that the overall incidence of cancer varies as a linear-nonthreshold function of the dose. Extrapolations based on the linear-nonthreshold model have yielded risk estimates for cancers of different organs (Table 35-5).
These estimates imply that less than 3% of all cancers in the general population are attributable to natural background radiation, although a larger percentage—perhaps up to 10%—of lung cancers may be attributable to inhalation of indoor radon.4,5,8,30 The extent to which a cancer arising in a previously irradiated individual can be attributed to the radiation that he or she may have received cannot be determined with certainty; however, it may be assumed to increase with the radiation dose in question, all other
TABLE 35-5. ESTIMATED LIFETIME RISKS OF CANCER ATTRIBUTABLE TO 0.1 SV (10 REM) LOW-DOSE-RATE IRRADIATION*

                             Excess Cancer Deaths per 100,000
Type or Site of Cancer       (No.)    (%)†
Colon                        95       5
Lung                         85       3
Bone marrow (leukemia)       50       10
Stomach                      50       8
Breast                       45       2
Urinary bladder              25       4
Esophagus                    10       3
Liver                        15       8
Gonads                       15       3
Thyroid                      5        5
Bone                         3        3
Skin                         2        2
Remainder                    100      2
Total                        500      2

*Modified from International Commission on Radiological Protection. Recommendations of the International Commission on Radiological Protection. ICRP Publication 60. Ann ICRP 21, No. 1–3. Oxford: Pergamon Press; 1991; and Puskin JS, Nelson CB. Estimates of radiogenic cancer risks. Health Phys. 1995;69:93–101.29
†Percentage increase in spontaneous "background" risk expected for a nonirradiated population.
things being equal.31,32 On the basis of this assumption, one may arrive at a crude estimate of the probability of causation, given sufficient knowledge of the dose, when the dose was received, and the extent to which other causal factors also may have been involved.31,32

EFFECTS OF PRENATAL IRRADIATION
Apart from the relatively high susceptibility of the unborn child to the carcinogenic effects of ionizing radiation, noted above, the embryo is also highly susceptible to the teratogenic effects of radiation. Thus, although the latter are generally considered to be nonstochastic in nature, exposure to as little as 0.25 Sv during critical stages of organogenesis has sufficed to cause malformations of many types in laboratory animals,33,34 and similar developmental disturbances have been reported to follow intensive prenatal irradiation in humans.4,5,8,34 Noteworthy examples of the latter include a dose-dependent increase in the frequency of severe mental retardation and dose-dependent decreases in IQ and school performance scores in A-bomb survivors who were irradiated between the 8th and 15th weeks (and to a lesser extent the 16th and 25th weeks) after conception.4,5,8,34 Furthermore, unlike mutagenic and carcinogenic effects, which are expressed in only a small percentage of exposed individuals, some disturbance of growth and development may be projected to affect all who are exposed at a vulnerable stage to a dose that exceeds the relevant threshold. Thus, while only a small percentage of the individuals who were exposed prenatally to atomic bomb radiation at a critical stage in brain development (i.e., 8–26 weeks after conception) exhibited severe mental retardation, a larger percentage exhibited less marked decrements in intelligence and school performance, implying that there was a dose-dependent downward shift in the distribution of intelligence levels within the entire cohort.8,12
and increases resistance to the cytotoxic, genetic, and carcinogenic effects of a subsequent, larger “test” dose of radiation.4,6,8,12,36,37 The adaptive response to radiation resembles in many respects adaptive responses elicited by other toxicants,37 and it undoubtedly accounts in part for the decrease in the biological effectiveness of X-rays and gamma-rays that generally occurs as the dose rate is reduced. These features of the adaptive response have prompted some observers to postulate that the dose-response relationships for the genetic and carcinogenic effects of ionizing radiation is biphasic or “hormetic” in nature; that is, that it increases with the dose at moderate-to-high levels of exposure but decreases with the dose at low levels of exposure.36,38 This hypothesis, far reaching in its implications for radiation protection, remains to be validated, however, and the weight of existing evidence argues against it.6,8,18,39,40
RADIATION PROTECTION
With the abandonment of the threshold dose-response hypothesis for the mutagenic and carcinogenic effects of radiation, the goal of minimizing the risks of such effects has become preeminent in radiation protection. In pursuit of this goal, the following guidelines have been recommended for any activity involving exposure to ionizing radiation: (a) justification, that is, the activity should not be considered justifiable unless it produces a sufficient benefit to those who are exposed, or to society at large, to offset any harm it may cause; (b) optimization, that is, the dose and/or likelihood of exposure should be kept as low as is reasonably achievable (ALARA), all relevant economic and social factors considered; and (c) dose limits, that is, the likelihood of exposure and the resulting dose to any individual should be subject to control by operating limits.3 The dose limits that have been recommended (Table 35-6) are intended to restrict exposures sufficiently to completely prevent nonstochastic effects in any organ of the body, even in the most sensitive members of the population.3 Although the limits are not expected to protect completely against the mutagenic and carcinogenic effects of radiation, since there may be no thresholds for such effects, the limits are judged to be low enough to prevent the risks of mutagenic and carcinogenic effects from reaching levels that are socially unacceptable.3,41 Implicit in the above guidelines are requirements that any facility dealing with ionizing radiation (a) be properly designed; (b) be carefully planned and its operating procedures be overseen, including dose calibration; (c) have in place a well-conceived radiation protection program; (d) ensure that its workers are adequately trained and supervised; and (e) maintain a well-developed and well-rehearsed emergency preparedness plan, to be able to respond promptly and effectively in the event of a malfunction, spill, or other type of radiation accident.3,41 Since 
the doses received from medical radiographic examinations and from indoor radon constitute the most important controllable sources of exposure to ionizing radiation for members of the general public, measures to limit these exposures are also called for.3,41 Other potential sources of exposure against which protection is warranted are those posed by the millions of cubic feet of radioactive and mixed wastes (mine and mill tailings, spent nuclear fuel, waste from the decommissioning of nuclear power plants, dismantled industrial and medical radiation sources, radioactive pharmaceuticals and reagents, heavy metals, polyaromatic hydrocarbons, and other contaminants), which increasingly tax the existing storage capacities at numerous waste sites.42,43
ADAPTIVE RESPONSES AND HORMESIS
A brief exposure to a small, “conditioning” dose of x-rays or gamma rays has been observed experimentally to elicit an adaptive response that enhances growth and survival, augments the immune response,
SUMMARY
The health effects of ionizing radiation are widely diverse, ranging from rapidly fatal injuries to cancers, birth defects, and hereditary disorders months or decades later. The nature, frequency, and severity
35
Ionizing Radiation
741
TABLE 35-6. RECOMMENDED LIMITS OF EXPOSURE TO IONIZING RADIATION FOR RADIATION WORKERS AND MEMBERS OF THE PUBLIC∗

Type of Exposure                                                    Maximum Permissible Dose (mSv)
A. Occupational Exposures
   1. For protection against stochastic effects
      a. Annual effective dose                                      50
      b. Cumulative effective dose                                  Age × 10
   2. For protection against nonstochastic effects in individual organs
      a. Lens of the eye (annual effective dose)                    150
      b. All other organs (annual effective dose)                   500
   3. Planned special exposures (effective dose)†                   100
   4. Emergency exposure                                            —‡
B. Public Exposures
   1. Continuous or frequent exposure (effective dose per year)     1
   2. Infrequent exposure (effective dose per year)                 5
   3. Remedial action recommended if:
      a. Annual effective dose would exceed                         5
      b. Effective dose from radon would exceed                     0.007 Jhm⁻³
C. Education and Training Exposures§
   1. Annual effective dose                                         1
   2. Annual equivalent dose to lens of the eye, skin, extremities  50
D. Exposure of the Embryo and Fetus
   1. Total equivalent dose                                         5
   2. Equivalent dose in any one month                              0.5

∗Including natural background radiation exclusive of that from internally deposited radionuclides.
†Sum of internal and external exposures, excluding medical irradiation.
‡Effective dose in any one planned event; cumulative effective dose in planned special exposures should not exceed 100 mSv (10 rem) over a working lifetime.
§Short-term exposure to more than 100 mSv (10 rem) is justified only in lifesaving emergency situations.
Source: From National Council on Radiation Protection and Measurements. Limitation of Exposure to Ionizing Radiation. NCRP Report No. 116. Bethesda, MD: National Council on Radiation Protection and Measurements; 1993.11
of the effects depend on the quality of the radiation in question, as well as on the dose and conditions of exposure. For most effects, radiosensitivity varies with the rate of proliferation and inversely with the degree of differentiation of the exposed cells; as a result, the embryo and growing child are especially vulnerable to radiation injury. Although many types of effects require relatively high levels of exposure, the genotoxic and carcinogenic effects of ionizing radiation appear to increase in frequency as linear-nonthreshold functions of the dose. To minimize the risks of the latter, therefore, exposures to ionizing radiation need to be limited accordingly.
REFERENCES

1. Upton AC. Historical perspective on radiation carcinogenesis. In: Upton AC, Albert RE, Burns FJ, Shore RE, eds. Radiation Carcinogenesis. New York: Elsevier Science Publishing; 1986:1–10.
2. Shapiro J. Radiation Protection: A Guide for Scientists and Physicians. 3rd ed. Cambridge, MA: Harvard University Press; 1972.
3. International Commission on Radiological Protection. 1990 Recommendations of the International Commission on Radiological Protection. ICRP Publication 60. Ann ICRP. 21, Nos. 1–3. Oxford: Pergamon Press; 1991.
4. Mettler FA, Jr, Upton AC. Medical Effects of Ionizing Radiation. New York: WB Saunders; 1995.
5. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Sources and Effects of Ionizing Radiation. Report to the General Assembly, with Annexes. New York: United Nations; 2000.
6. National Council on Radiation Protection and Measurements (NCRP). Evaluation of the Linear-Nonthreshold Dose-Response Model for Ionizing Radiation. NCRP Report No. 136. Bethesda, MD: National Council on Radiation Protection and Measurements; 2001.
7. National Council on Radiation Protection and Measurements. Ionizing Radiation Exposure of the Population of the United States. NCRP Report No. 93. Bethesda, MD: National Council on Radiation Protection and Measurements; 1987.
8. National Academy of Sciences Advisory Committee on the Biological Effects of Ionizing Radiation. Health Risks from Exposure to Low Levels of Ionizing Radiation: BEIR VII Phase 2. Washington, DC: National Academies Press; 2006.
9. National Council on Radiation Protection and Measurements. Exposure of the U.S. Population from Occupational Radiation. NCRP Report No. 101. Bethesda, MD: National Council on Radiation Protection and Measurements; 1989.
10. Lushbaugh CC, Fry SA, Ricks RC. Nuclear radiation accidents: preparedness and consequences. Br J Radiol. 1987;60:1159–83.
11. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Sources, Effects, and Risks of Ionizing Radiation. Report to the General Assembly, with Annexes. New York: United Nations; 1988.
12. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Sources and Effects of Ionizing Radiation. UNSCEAR 1993 Report to the General Assembly, with Annexes. New York: United Nations; 1993.
13. Fry RJM. Deterministic effects. Health Physics. 2001;80:338–43.
14. Mothersill C, Seymour C. Radiation-induced bystander effects: past history and future directions. Radiat Res. 2001;155:759–67.
742
Environmental Health
15. Lloyd DC, Purrott RJ. Chromosome aberration analysis in radiological protection dosimetry. Radiat Protect Dosim. 1981;1:19–28.
16. International Atomic Energy Agency (IAEA). Biological Dosimetry: Chromosomal Aberration Analysis for Dose Assessment. Technical Report No. 260. Vienna: International Atomic Energy Agency; 1986.
17. Edwards AA. The use of chromosomal aberrations in human lymphocytes for biological dosimetry. Radiat Res. 1997;148:S39–S44.
18. Vilenchik MM, Knudson AG, Jr. Inverse radiation dose-rate effects on somatic and germ-line mutations and DNA damage rates. Proc Natl Acad Sci USA. 2000;97:5381–6.
19. Sankaranarayanan K. Estimation of the hereditary risks of exposure to ionizing radiation: history, current status, and emerging perspectives. Health Phys. 2001;80:363–9.
20. Carnes BA, Gavrilova N, Grahn D. Pathology effects at doses below those causing increased mortality. Radiat Res. 2002;158:187–94.
21. Preston DL, Shimizu Y, Pierce DA, Suyama A, Mabuchi K. Studies of mortality in atomic bomb survivors, Report 13: solid cancer and noncancer disease mortality, 1950–1997. Radiat Res. 2003;160:381–407.
22. Carnes BA, Grahn D, Hoel D. Mortality of atomic bomb survivors predicted from laboratory animals. Radiat Res. 2003;160:159–67.
23. Pierce DA, Preston DL. Radiation-related cancer risks at low doses among atomic bomb survivors. Radiat Res. 2000;154:178–86.
24. Cardis E, Gilbert ES, Carpenter L, et al. Effects of low doses and low dose rates of external ionizing radiation: cancer mortality among nuclear industry workers in three countries. Radiat Res. 1995;142:117–32.
25. Wakeford R. The cancer epidemiology of radiation. Oncogene. 2004;23:6404–28.
26. Preston DL, Mattsson A, Holmberg E, Shore R, Hildreth NG, Boice JD, Jr. Radiation effects on breast cancer risk: a pooled analysis of eight cohorts. Radiat Res. 2002;158:220–35.
27. Mettler FA, Upton AC, Kelsey CA, Ashby RN, Rosenberg RD, Linver MN. Benefits versus risks from mammography: a critical reassessment. Cancer. 1996;77:903–9.
28. Land CE, Tokunaga M, Koyama K, et al. Incidence of female breast cancer among atomic bomb survivors, Hiroshima and Nagasaki, 1950–1990. Radiat Res. 2003;160:707–17.
29. Puskin JS, Nelson CB. Estimates of radiogenic cancer risks. Health Phys. 1995;69:93–101.
30. National Academy of Sciences/National Research Council. Health Effects of Exposure to Radon. Washington, DC: National Academy Press; 1998.
31. Rall JE, Beebe GW, Hoel DG, et al. Report of the National Institutes of Health Ad Hoc Working Group to Develop Radioepidemiological Tables. NIH Publication No. 85-2748. Washington, DC: Government Printing Office; 1985.
32. Wakeford R, Antell BA, Leigh WJ. A review of probability of causation and its use in a compensation scheme for nuclear industry workers in the United Kingdom. Health Phys. 1998;74:1–9.
33. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Ionizing Radiation: Sources and Biological Effects. 1982 Report to the General Assembly, with Annexes. New York: United Nations; 1982.
34. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Genetic and Somatic Effects of Ionizing Radiation. Report to the General Assembly, with Annexes. New York: United Nations; 1986.
35. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR). Sources and Effects of Ionizing Radiation. Report to the General Assembly, with Annexes. New York: United Nations; 1994.
36. Calabrese EJ, Baldwin LA. Radiation hormesis: the demise of a legitimate hypothesis. Human Exper Toxicol. 2000;19:76–84.
37. McBride WH, Chiang CS, Olson JL, et al. A sense of danger from radiation. Radiat Res. 2004;162:1–19.
38. Luckey TD. Radiation Hormesis. Boca Raton: CRC Press; 1991.
39. Wojcik A. The current status of the adaptive response to ionizing radiation in mammalian cells. Human Ecol Risk Assess. 2000;6:281–300.
40. Upton AC. Radiation hormesis: data and interpretations. Crit Rev Toxicol. 2001;31:681–95.
41. National Council on Radiation Protection and Measurements. Limitation of Exposure to Ionizing Radiation. NCRP Report No. 116. Bethesda, MD: National Council on Radiation Protection and Measurements; 1993.
42. National Academy of Sciences/National Research Council. The Nuclear Weapons Complex. Washington, DC: National Academy Press; 1989.
43. U.S. Department of Energy (USDOE). U.S. Department of Energy Interim Mixed Waste Inventory Report: Waste Streams, Treatment Capacities, and Technologies. DOE/NBM-1100. Washington, DC: Government Printing Office; 1993.
36
Nonionizing Radiation
Arthur L. Frank
The term nonionizing radiation refers to several forms of electromagnetic radiation of wavelengths longer than those of ionizing radiation. As wavelength lengthens, the energy value of electromagnetic radiation decreases, and all nonionizing forms of radiation have less energy than cosmic, gamma, and x-radiation. In order of increasing wavelength, nonionizing radiation includes ultraviolet (UV) radiation, visible light, infrared radiation, microwave radiation, and radiofrequency radiation. The latter two are often treated as a single category. The energy, frequency, and wavelength range for electromagnetic forces are shown in Table 36-1. All forms of electromagnetic radiation have the same velocity of 3 × 10¹⁰ cm/s in a vacuum. Radiation is emitted continuously from the sun over a wide range from 290 nm in the ultraviolet range to more than 2000 nm in the infrared range with a maximum intensity at about 480 nm in the visible range. The radiation from the sun is modified as it passes through the earth’s atmosphere. Ozone, which is found in the upper atmosphere, absorbs the highest energy ultraviolet radiation. Infrared radiation is absorbed by water vapor, and other wavelengths are altered by passage through smoke, dust, and gas molecules. All objects above absolute zero temperature emit radiation, much of it as infrared radiation. At low temperatures, only long wavelength radiation is emitted, but as the temperature of the objects increases, shorter wavelength radiation is emitted. Heated metal gives off a red glow; if heating continues, the metal becomes “white hot” as energy throughout the whole visible spectrum is given off. Heated gases may give off wavelengths in the ultraviolet, visible, or infrared regions. Ultraviolet radiation is given off with the use of extremely high-temperature welding equipment such as carbon or electric arcs.
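The wavelength-energy-frequency relationships summarized above (and tabulated in Table 36-1) follow from E = hc/λ and ν = c/λ. The sketch below is our own illustration of those relations, using the 3 × 10⁸ m/s velocity quoted in the text; the function names are not from the source:

```python
# Illustrative check of the Table 36-1 columns: photon energy E = h*c/lambda
# and frequency nu = c/lambda. Constants and function names are ours.
PLANCK_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 3.0e8                   # speed of light, m/s (3 x 10^10 cm/s)

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a wavelength given in nm."""
    return PLANCK_EV_S * C_M_S / (wavelength_nm * 1e-9)

def frequency_thz(wavelength_nm):
    """Frequency in THz for a wavelength given in nm."""
    return C_M_S / (wavelength_nm * 1e-9) / 1e12

# The visible-light boundaries quoted in Table 36-1:
print(round(photon_energy_ev(400), 1))   # -> 3.1 (eV at 400 nm)
print(round(photon_energy_ev(700), 1))   # -> 1.8 (eV at 700 nm)
print(round(frequency_thz(400)))         # -> 750 (THz at 400 nm)
```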
The biological effect of radiation exposure depends on the type and duration of exposure and on the amount of absorption by the organism. The carcinogenic and other effects of ionizing radiation are discussed in Chapter 35.
ULTRAVIOLET RADIATION
The sun is the major source of ultraviolet radiation although there are artificial sources such as electric arc lights, welding arcs, plasma jets, and special ultraviolet bulbs. The amount of ultraviolet radiation reaching the earth from the sun varies with season, time of day, latitude, altitude, and specific atmospheric conditions. Intensity is greatest at midday and is greater in summer than in winter. In a summer month, about as much ultraviolet radiation reaches the earth’s surface as in the entire period from autumn to spring equinoxes. Total ultraviolet exposure is greater on a cloudy day due to reflection, and snow reflects about 75% of ultraviolet radiation. Therefore, sunburn may be more severe on a cloudy than a clear day and may be especially severe in those spending a great deal of time on snow. Window glass and light clothing efficiently filter out ultraviolet radiation.
There is a wide range of potential occupational exposures1,2 to ultraviolet radiation in both outdoor work and industrial settings (Table 36-2).
Biological Effects
The organs primarily affected by ultraviolet radiation are the skin and eyes, since the radiation has little ability to penetrate. Ultraviolet radiation is strongly absorbed by nucleic acids and proteins, and the effects in humans are largely chemical rather than thermal. Mutations resulting from ultraviolet exposure occur in organisms such as plants and flies but not in humans, again because of low penetration. Short-term effects on humans include acute changes in the skin. There are four types of changes: (a) darkening of pigment, (b) erythema (sunburn), (c) increase in pigmentation (tanning), and (d) changes in cell growth. Ultraviolet radiation does not penetrate subcutaneous tissue. The corneum, or outermost layer of skin, which is about 0.03 mm thick, absorbs the shortest wavelength ultraviolet radiation. The longer the wavelength, the deeper the radiation penetrates; the longest ultraviolet radiation passes through the corneum and corium into the Malpighian layer. The darkening of preformed pigment occurs immediately and is particularly noted at wavelengths between 300 and 400 nm. The erythema (sunburn) does not begin for at least one-half hour, and there are several peaks within the ultraviolet spectrum with variable times of maximum effect, ranging from 12 hours for radiation at 254 nm to 48 hours for radiation at 297 nm. Darker skin has a protective effect, and estimates for the darkest skin shades suggest a two- to tenfold higher threshold value for erythema production. Subsequent exposure reduces the threshold value for erythema production. The increase in pigmentation (tanning) results from a migration of melanin pigment into more superficial skin cells and also from an increased production of melanin pigment. Ultraviolet radiation works as a catalyst to oxidize tyrosine to dihydroxyphenylalanine (DOPA), which is a precursor of melanin. Changes in skin cell growth follow exposure to ultraviolet radiation.
There occurs a cessation of cell growth, followed 24 hours later by an increase in cell division. At this time there is intracellular and intercellular edema that thickens the skin. Eventually there is shedding of cells by scaling. Severe reactions can be seen with blistering, desquamation, and even ulceration of the skin. Ultraviolet radiation also causes acute effects on the tissue of the eye. Exposure can lead to keratitis, inflammation of the cornea, and conjunctivitis. The keratitis may develop after a latency of several hours and returns to normal in a few days. Since the cornea possesses a large number of nerve endings, even small amounts of inflammation can be painful. The effect in the eye is independent of skin color, and no protective adaptation of the eye appears to develop with repeated exposures. Long-term effects of ultraviolet exposure include an increased rate of aging of skin with degeneration of skin tissue and a decrease
Copyright © 2008 by The McGraw-Hill Companies, Inc.
TABLE 36-1. ENERGY, FREQUENCY, AND WAVELENGTH RANGE FOR ELECTROMAGNETIC FORCES

Type of Radiation                              Energy Range       Frequency Range     Wavelength Range
Ionizing (includes cosmic, gamma, and x-ray)   >12.4 eV           >3000 THz           <100 nm
Ultraviolet                                    6.2–3.1 eV         1500–750 THz        200–400 nm
Visible                                        3.1–1.8 eV         750–429 THz         400–700 nm
   Violet                                                                             400–424 nm
   Blue                                                                               424–491 nm
   Green                                                                              491–575 nm
   Yellow                                                                             575–585 nm
   Orange                                                                             585–647 nm
   Red                                                                                647–700 nm
Infrared                                       1.8 eV–1.2 meV     429 THz–300 GHz     700 nm–1 mm
Microwave                                      1.2 meV–1.2 µeV    300 GHz–300 MHz     1 mm–1 m
Radio frequency                                1.2 µeV–1.2 neV    300 MHz–300 kHz     1 m–1 km
Source: Adapted from NIOSH Technical Report. Ionizing Radiation. Washington DC: NIOSH Publication No. 78–142; 1978.
in elasticity. Late effects of ultraviolet radiation on the eye include the development of cataracts. The most serious chronic effect of ultraviolet exposure is skin cancer. More than 90% of skin cancers occur on parts of the body exposed to sunlight. Approximately 40% of all cancers in the United States are skin cancers, and in general they are the most common malignancy in light-skinned populations. Rates for skin cancer vary from less than 2 cases per 100,000 in dark-skinned populations to more than 100 per 100,000 in South African whites and Australians.3 The incidence of skin cancer on a worldwide basis correlates with decreasing latitude. Great excesses of skin cancer occur among persons with outdoor occupations such as agricultural, forestry, and marine activity. Most skin cancers in humans are of epithelial cell origin, most commonly basal cell carcinomas, followed in frequency by squamous cell carcinomas and then melanomas. Some individuals, for example, those with xeroderma pigmentosum, have particular sensitivity to ultraviolet radiation and are at increased risk for developing disease on exposure. Photosensitivity reactions occur after exposure to a variety of chemicals and drugs, including dyes, phenothiazines, sulfonamides, and sulfonylureas. Ultraviolet radiation has an important role in the prevention of rickets. Vitamin D is produced by the action of ultraviolet radiation on 7-dehydrocholesterol or related steroidal compounds.
TABLE 36-2. OCCUPATIONAL EXPOSURE TO ULTRAVIOLET RADIATION

Aircraft workers
Barbers
Bath attendants
Construction workers
Drug makers
Electricians
Farmers
Fishermen
Food irradiators
Foundry workers
Glass blowers
Metal casting inspectors
Oil field workers
Railroad track workers
Ranchers
Seamen
Steel mill workers
Tobacco irradiators
Vitamin D makers
Welders

Protection
Protection measures against ultraviolet radiation include administrative controls, equipment design, and personal protection. Administrative actions include educating and instructing individuals who will be exposed, posting of notices, limiting access in the workplace, and regulating exposure time. Equipment design includes placement of ultraviolet glass shields. Personal protection includes the use of shields, goggles, and appropriate clothing. Polyvinyl chloride can be used for gloves, and the use of barrier creams is also possible. Exposure during recreation, such as winter sports and sunbathing, including use of tanning beds, should be done in moderation, especially by fair-skinned persons.

Recommended Values for Protection against Ultraviolet Radiation. Based on regulations adopted from the American Conference of Governmental Industrial Hygienists in 1976, the federal limits in the United States are as follows:
1. For the near-ultraviolet spectral region (320–400 nm), total irradiance incident upon the unprotected skin or eye should not exceed 1 mW/cm² for periods greater than 10³ seconds (approximately 16 minutes), and for exposure times less than 10³ seconds should not exceed 1 J/cm².
2. For the actinic ultraviolet spectral region (200–315 nm), radiant exposure incident upon the unprotected skin or eye should not exceed the values given in Table 36-3 within an 8-hour period.

TABLE 36-3. THRESHOLD LIMIT VALUES (TLV) FOR ULTRAVIOLET RADIATION

Wavelength (nm)    TLV (mJ/cm²)    Relative Spectral Effectiveness (Sλ)
200                100.0           0.03
210                40.0            0.075
220                25.0            0.12
230                16.0            0.19
240                10.0            0.30
250                7.0             0.43
254                6.0             0.5
260                4.6             0.65
270                3.0             1.0
280                3.4             0.88
290                4.7             0.64
300                10.0            0.30
305                50.0            0.06
310                200.0           0.015
315                1000.0          0.003
3. To determine the effective irradiance of a broadband source weighted against the peak of the spectral effectiveness curve (270 nm), the following weighting formula should be used:

Eeff = Σ Eλ Sλ ∆λ

where
Eeff = effective irradiance relative to a monochromatic source at 270 nm, in W/cm²
Eλ = spectral irradiance, in W/cm²/nm
Sλ = relative spectral effectiveness (unitless)
∆λ = bandwidth, in nanometers

4. Permissible exposure time in seconds for exposure to actinic ultraviolet radiation incident upon the unprotected skin or eye may be computed by dividing 0.003 J/cm² by Eeff in W/cm². The exposure time may also be determined using Table 36-4, which provides exposure times corresponding to effective irradiances in µW/cm².
VISIBLE LIGHT
Visible light4,5 is radiation with a wavelength between 400 and 700 nm. The sun is the major source of visible light, but it can also be produced by heating tungsten or other filaments and by electrical discharge in a gas such as mercury or neon. Any ultraviolet radiation given off is largely absorbed by the glass enclosing the bulb. The abnormal biological effects of visible radiation are generally not serious. A flash of light will bleach visual pigments, causing “spots” in the visual field. Intense visible light, such as one may experience by staring directly into the sun for extended periods, may cause coagulation of the retina, and the scotoma that results may be permanent. Snow blindness results from overexposure to sunlight and is characterized by conjunctivitis and keratitis accompanied by photophobia. Use of appropriate lenses will protect against the above effects. Of potentially greater seriousness are injuries caused by lasers. Laser stands for light amplification by stimulated emission of radiation. Lasers are used in industry, communications, surveying, construction, medicine, and electronics. There are many types of laser apparatus, but all are characterized by their ability to produce an intense, monochromatic, coherent beam in which all waves are parallel and all are in phase. There are three types of lasers: (a) continuous, (b) pulsed, and (c) Q-switched, which are pulsed, but the beam is turned on and off at a rapid rate to produce a beam with higher peak power of shorter duration than the pulsed variety.
TABLE 36-4. THRESHOLD LIMIT VALUES FOR ULTRAVIOLET RADIATION

Duration of Exposure per Day    Effective Irradiance, Eeff (µW/cm²)
8 h                             0.1
4 h                             0.2
2 h                             0.4
1 h                             0.8
30 min                          1.7
15 min                          3.3
10 min                          5.0
5 min                           10.0
1 min                           50.0
30 s                            100.0
10 s                            300.0
1 s                             3000.0
0.5 s                           6000.0
0.1 s                           30,000.0
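The weighting formula and the permissible-time rule above (items 3 and 4) can be sketched in a few lines. This is our own illustration, not an official implementation; the function names and the sample spectrum are hypothetical, and only a few Sλ values from Table 36-3 are included:

```python
# Minimal sketch of the ACGIH actinic-UV calculation described in items 3 and 4.
# Relative spectral effectiveness S_lambda for a few wavelengths (Table 36-3):
S_LAMBDA = {254: 0.5, 270: 1.0, 280: 0.88}

def effective_irradiance(spectral_irradiance, bandwidth_nm=1.0):
    """Eeff = sum of E_lambda * S_lambda * delta_lambda, in W/cm^2.

    spectral_irradiance maps wavelength (nm) to E_lambda in W/cm^2/nm;
    wavelengths missing from S_LAMBDA are treated as zero effectiveness.
    """
    return sum(e * S_LAMBDA.get(wl, 0.0) * bandwidth_nm
               for wl, e in spectral_irradiance.items())

def permissible_seconds(e_eff):
    """Permissible daily exposure time: 0.003 J/cm^2 divided by Eeff (W/cm^2)."""
    return 0.003 / e_eff

# A hypothetical source emitting 1e-7 W/cm^2/nm in a 1-nm band at 270 nm
# gives Eeff = 0.1 uW/cm^2, the Table 36-4 entry for roughly an 8-hour day:
e_eff = effective_irradiance({270: 1e-7})
print(permissible_seconds(e_eff))   # roughly 30,000 s, i.e., about 8.3 h
```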
Because the laser is a light beam, it follows all the laws of optics and can be manipulated like other light beams. When focused on a spot, a laser can produce enormous heat for drilling and related purposes. Burns to the skin or the eye may occur with exposure to lasers; if the beam strikes the retina, it can cause blindness. Lasers also emit ultraviolet radiation, which can cause corneal damage, and infrared radiation, which can cause opacification of the lens. Threshold values have been proposed for a wide variety of laser equipment.
ILLUMINATION
Units for Expressing Amount of Light
The amount of visible radiation (light) emitted by a luminous object, such as an electric light bulb, is measured in terms of candle power, based on a standard international candle. The amount of illumination that falls on a surface from a light source is expressed in terms of foot-candles. One foot-candle of illumination is the intensity of illumination at any point on a surface 1 foot away from a light source of 1 candle power. The illumination falling on a surface varies inversely as the square of the distance from the light source. The total amount of light that falls on 1 square foot of surface, all points of which are 1 foot from a light source of 1 standard candle, is called 1 lumen, the lumen being the unit used to measure light flux. The brightness of the light source or of an object reflecting light is usually expressed in terms of foot-lamberts or candles per square inch. One foot-lambert is equivalent to 1 lumen emitted per square foot of the light source. One candle per square inch is the candle power emitted per square inch of light source and is equivalent to 452 foot-lamberts.
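The unit relationships above can be illustrated numerically. The helper names below are ours, not standard photometric terminology; the sketch simply applies the inverse-square law and the 452 foot-lamberts per candle-per-square-inch conversion stated in the text:

```python
# Illustrative photometric conversions. One foot-candle is the illumination
# 1 ft from a 1-candle-power source; illumination falls off with the square
# of the distance; 1 candle/in^2 = 452 foot-lamberts.

def foot_candles(candle_power, distance_ft):
    """Illumination on a surface from a point source (inverse-square law)."""
    return candle_power / distance_ft ** 2

def candles_per_sq_inch_to_foot_lamberts(c):
    """Brightness conversion: 1 candle/in^2 = 452 foot-lamberts."""
    return 452.0 * c

# A 1-candle-power source gives 1 foot-candle at 1 ft, but only 1/4 at 2 ft:
print(foot_candles(1, 1))                        # -> 1.0
print(foot_candles(1, 2))                        # -> 0.25
print(candles_per_sq_inch_to_foot_lamberts(2))   # -> 904.0
```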
General Principles of Illumination
Intensity of Illumination. Sufficient illumination is essential for visual acuity, maximum speed of seeing, prevention of eye fatigue and eye strain, and thus for efficient work and prevention of accidents. Definite proof that poor illumination leads to permanent eye injury is lacking, but the character of the illumination may affect psychological reactions. Most authorities agree that high levels of illumination, except under such unusual circumstances as direct viewing of the sun, do not produce harmful effects on the eye. The human eye is adapted for vision outdoors, where foot-candle levels may range from 1000 in the shade to 10,000 in the sun. Standards of illumination usually are set in terms of the amount of illumination that falls on the work area. Since vision depends on the light reaching the eye, however, the important consideration is not the amount of illumination on the desk or workbench but the amount of light reflected to the eye. For example, if there are 50 foot-candles of illumination falling on a white object, which reflects about 80% of the visible light, then 40 foot-candles of illumination are reflected toward the eye. If the same amount of light falls on a dark object, which reflects 20% of the light, only 10 foot-candles of illumination are reflected to the eye. Hence it is necessary to specify different standards of illumination for different circumstances, depending on the amount of light reflected from each work area. Authorities differ on the amount of illumination essential for vision. Visual acuity and speed of vision increase markedly with an increase in illumination up to about 10 foot-candles, and then increase more slowly up to about 20 foot-candles. Hence 15–20 foot-candles can be accepted as a bare minimum level of illumination for vision under optimum conditions.
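The reflected-light arithmetic in the example above is simply incident illumination times surface reflectance; the helper name below is ours:

```python
# Sketch of the worked example in the text: light reaching the eye equals
# the incident foot-candles multiplied by the surface's reflectance.

def reflected_foot_candles(incident_fc, reflectance):
    """Foot-candles reflected toward the eye from a surface."""
    return incident_fc * reflectance

print(reflected_foot_candles(50, 0.80))   # -> 40.0 (white surface, 80% reflectance)
print(reflected_foot_candles(50, 0.20))   # -> 10.0 (dark surface, 20% reflectance)
```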
When the reflection factor is reduced, as in work on dark colors or when the contrast in color between the object and its background is reduced, higher levels of illumination are necessary for good visual acuity and speed of vision. Higher levels are required also for continuous and fine eye work. Persons with poor vision or eye defects require more illumination than those with normal eyesight. Generally it is recommended that when the contrast in color and brightness between the object and the immediate
background is good and when the object being viewed is the size of normal print, the lighting for continuous eye work should supply a minimum of 30 foot-candles on the object. Where poor contrast exists or the size of the object is small, the minimum illumination requirement should be set at 50 foot-candles. Higher levels are necessary under certain conditions. In 1965, the American National Standards Institute (ANSI), in cooperation with the Illuminating Engineering Society, published an American Standard Practice for Industrial Lighting, including a list of currently recommended, much higher levels of illumination; the 1965 standards were reconfirmed in 1970 by the American National Standards Institute and adopted by the United States Department of Labor as the standards to be used under the 1971 Occupational Safety and Health Act.

Brightness and Glare. The amount of light reaching the eye from a light source or by reflection from an object is commonly designated as the brightness of the source or object and is usually expressed in foot-lamberts. Although the eye can adapt to very high levels of brightness, such as daylight outdoors, it cannot tolerate great contrasts in brightness between the central field of vision and the surrounding area. Such contrasts interfere with vision and may produce an uncomfortable sensation. In viewing an object against its surroundings, the visual acuity is greatest when the surrounding area has the same brightness as the central field of vision. The brightness of this central field should never be less than that of the surroundings. Brightness contrasts are produced also when bright light sources are in the field of view. If the eye is adapted to a high level of illumination and the contrast is not great, a bright light in the field of vision does not produce discomfort.
The degree of the glare sensation depends on the distance of the eye from the light source and on the brightness of the light source in relation to that of the object on which the eye is focused. Excessive reflection from shiny surfaces, so-called reflected glare, produces an uncomfortable sensation and may completely obliterate the outline of an object. The effect of glare on vision increases sharply in older age groups; bare light bulbs should never be permitted in the field of vision.

Differences in Illumination. Great differences in illumination between one work space and another or between a work area and a hallway are dangerous if people are required to move from one space to the other. When passing from a brightly lighted area to one with a low level of illumination, the visual acuity is markedly decreased until dark adaptation has occurred. Although some adaptation occurs fairly rapidly, it requires at least one-half hour for adequate readjustment of vision to dim light. The greater the light adaptation, the slower the dark adaptation that follows. During the readjustment period, the ability of the eye to see clearly is so reduced that the danger of accident is increased. Adaptation requires only a few minutes when passing from a dimly lighted space to one at a high level of illumination.
Color of Light, Surroundings, and Surface Finish
A contrast in color between the object and its immediate background is important; the more definite the color contrast, the greater the visual acuity and speed of vision. The value of color contrast is due partly to the dissimilarity in color and partly to differences in the amount of light reflected by the different colors. Recognition of an object becomes most difficult when a black object is viewed against a black background; here, differences in texture and shadows are necessary for vision. Higher levels of illumination are required where the color contrast is reduced. The color and finish of the walls, ceiling, furniture, and machinery are of great importance in illumination because the amount of light reflected is determined chiefly by the color.
Recommendations: Artificial and Natural Lighting
Artificial Illumination. It is evident from the above discussion that a basic amount of general illumination must be supplied to all areas of a room to prevent great contrasts in brightness. Local or supplemental lighting, in addition to general lighting, is necessary when very high levels of illumination are required, when illumination is needed in specific areas not accessible to general lighting, where the light must come from a particular angle, where hand readjustments are needed, where shadows are required for the prevention of reflected glare, and in various other circumstances. Supplementary lighting sources should be arranged so that other persons in the vicinity are not exposed to excessively bright spots of light. Lighting fixtures fall into four types:
1. Totally indirect units give diffuse illumination with no shadows or glare, but they are uneconomical and accumulate dirt. Good reflection from the ceiling is necessary, but excessive brightness of the ceiling must be avoided.
2. Direct units are economical but cause shadows, produce glare, and give spot rather than diffuse illumination. They are used chiefly with high ceilings or for local lighting.
3. Semi-indirect units are satisfactory when equipped with diffusers, when ceiling reflection is adequate, and when they are properly placed to avoid too much brightness in the field of view.
4. Large units have a lower candle power per square inch; for example, long tubular fluorescent lights give less concentrated lighting than round tungsten-filament bulbs, which have a higher brightness per unit area (Table 36-5). Large units with moderate brightness also may cause discomfort if placed directly in the field of view.
Natural Illumination. Daylight, if properly arranged, may be a very effective source of good illumination in a room. Much more difficulty is encountered in designing for daylighting than for artificial lighting, however. The amount of daylight reaching a room varies with the location and orientation of the building, with the presence of surrounding buildings, and with the time of day, season, weather, and degree of atmospheric pollution. Furthermore, while artificial lighting can be evenly spaced throughout a room and directed as desired, daylight is available only from certain areas, and its distribution is more difficult to control. Because of these variable factors, only a few general recommendations for providing daylight illumination can be given. Windows facing south give maximum heat in cold climates but considerable glare; those facing north are advised for buildings in warm climates. The glass area should be at least 20% of the floor area of the room. The tops of the windows should be as near the ceiling as possible, since the higher the windows, the more effectively the light
TABLE 36-5. BRIGHTNESS OF NATURAL AND ARTIFICIAL LIGHT SOURCES
Light Source                                   Foot-Lamberts    Candles per Square Inch
Sun as observed at earth's surface             450,000,000      1,000,000.0
Full moon, clear sky                           1,500            3.3
1000 W type H-6 mercury lamp                   104,000,000      230,000.0
400 W type H-1 mercury lamp                    443,000          980
Brightest spot on bulb of:
  500 W tungsten-filament lamp                 131,000          290.0
  100 W tungsten-filament lamp                 58,800           130.0
  40 W tungsten-filament lamp                  24,800           55.0
30 W fluorescent, 1-inch tube (white)          2,400            5.3
40 W fluorescent, 1 1/2-inch tube (white)      1,750            3.9
100 W fluorescent, 2 1/8-inch tube (white)     2,180            4.8
reaches the opposite side of the room. An increase in the height of a window produces a much greater increase in illumination than a proportional increase in width. Windows on two sides of the room are desirable, but where windows are only on one side of a room, the glass area should extend the full length of the room if possible. It is recommended that windows should not be in the field of view under normal working conditions. The size and position of monitors and skylights also must be related to the size of the building. Since direct sunlight often produces excessive brightness, it is necessary to provide some means of sunlight control, such as venetian blinds, shades, louvers, outside projectors, and glass block. A complete discussion of recommended practices for daylighting in schools, factories, offices, and homes has been published by the Committee on Daylighting of the Illuminating Engineering Society. There has been an increase in research activity related to lighting in the past several years. Lighting patterns have been studied in nursing home settings6 (where they seemed to make little difference) and for their effects on food intake patterns7,8 (where lighting does seem to make a difference). Lighting also seems to have a potential beneficial effect in some hospital settings, such as neonatal intensive care units9–11 and nursing stations.12 There is also increased interest in how light may affect circadian rhythms.13 With increasing long-distance travel, this may become an increasingly important area of useful research.
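The two numeric columns of Table 36-5 report the same quantity, luminance, in two units. Since one foot-lambert equals 1/π candle per square foot, one candle per square inch corresponds to 144π ≈ 452 foot-lamberts. A short sketch (the function name is ours, for illustration only) shows that the table's columns are mutually consistent:

```python
import math

# 1 cd/in^2 = 144 cd/ft^2, and 1 cd/ft^2 = pi foot-lamberts,
# so 1 cd/in^2 = 144 * pi ~ 452.4 foot-lamberts.
FL_PER_CD_PER_SQ_IN = 144 * math.pi

def to_foot_lamberts(cd_per_sq_in):
    """Convert luminance from candles per square inch to foot-lamberts."""
    return cd_per_sq_in * FL_PER_CD_PER_SQ_IN

# 500 W tungsten-filament lamp: 290 cd/in^2 -> about 131,000 fL,
# matching the rounded table entry.
```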
Nonionizing Radiation
TABLE 36-6. OCCUPATIONAL EXPOSURE TO INFRARED RADIATION
Bakers           Foundry workers
Blacksmiths      Glass workers
Chemists         Solderers
Cooks            Steel mill workers
Electricians     Welders
INFRARED RADIATION
Infrared radiation, of longer wavelength than visible light, ranges in wavelength from 700 nm to 1 mm. All objects above absolute zero radiate some infrared radiation, and objects of higher temperature radiate to objects of lower temperature; the sensation of heat from a hot stove results from this. Infrared radiation is the most important part of the spectrum for the production of heat. It causes dilation of the capillary bed of the skin and, if strong enough, can cause a burn. Infrared radiation can also damage the eye and is a cause of cataract development among glassblowers and others. Occupational exposures to infrared radiation are listed in Table 36-6.
Extremely Low Frequency Electromagnetic Fields Louis Slesin
Extremely low frequency (ELF) electromagnetic fields (EMFs) are in the 0–300 Hz frequency range. Transmission and distribution power lines, which operate at 60 Hz in the United States and at 50 Hz in most other countries—together with electric appliances—are the most common sources of ELF EMF exposures. The strength of the magnetic field, measured in microtesla (µT) or milligauss (mG), is a function of the electric current flowing in the power line: the greater the current, the higher the magnetic field (note that 1 µT = 10 mG). The electric field is proportional to the voltage of the line and is measured in volts per meter (V/m). The intensities of the electric and magnetic fields decrease as one moves away from the source. Because the current flowing in a power line changes over time, so does the associated magnetic field. The only reliable way to estimate the magnetic field is to use a gaussmeter.1 Both single- and three-axis meters are available; the three-axis units automatically calculate the vector sum of the field in all directions. A large-scale survey of ELF EMF exposures in the United States in the mid-1990s estimated that more than half of the American population is exposed to an average magnetic field of less than 1 mG, while approximately 6% are exposed to a 24-hour average of more than 3 mG and approximately 0.5%, or about one million Americans, are exposed to more than 10 mG.2 The largest exposures occurred in the workplace and the lowest were in bed at home. Electric appliances can have a very high magnetic field very close to the unit, but the fields decrease even more rapidly with distance than those of power lines. In general, the greater the current draw, the higher the fields. The appliances that can entail the highest exposures are microwave ovens and hair dryers, as well as some types of electric blankets.
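The vector sum a three-axis meter computes, and the µT/mG conversion noted above, amount to a few lines of arithmetic. A minimal sketch (the function names are ours, not part of any metering software):

```python
import math

MG_PER_UT = 10.0  # 1 microtesla = 10 milligauss

def resultant_mg(bx, by, bz):
    """Vector sum (in mG) of the three orthogonal field components
    measured by a three-axis gaussmeter."""
    return math.sqrt(bx**2 + by**2 + bz**2)

def mg_to_ut(b_mg):
    """Convert a field from milligauss to microtesla."""
    return b_mg / MG_PER_UT

# Components of 3, 4, and 12 mG give a resultant of 13 mG (1.3 uT).
```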
The potential health effects of exposure to ELF EMFs remain controversial. On the one hand, there is now a consensus among epidemiologists that children exposed to ambient magnetic fields of 3–4 mG show an increased incidence of leukemia. Yet, on the other hand, there is still no accepted mechanism to explain how ELF EMFs can induce or promote cancer and only inconsistent support for a cancer link from laboratory and animal studies. In 2001, a committee assembled by the International Agency for Research on Cancer (IARC) classified ELF magnetic fields as a Group 2B carcinogen—that is, they are “possibly carcinogenic to humans.”3 Three years earlier, another expert group advised the U.S. National Institute of Environmental Health Sciences (NIEHS) that it too considered ELF magnetic fields to be possible human carcinogens.4 In each case, the driving force for the designation was the possible risk of childhood leukemia. The following year, 2002, at the end of an 8-year project at a cost of more than $7 million, the California Department of Health Services issued a report,1 which concluded that EMFs are likely linked to the development of not only childhood leukemia, but also of adult brain cancer, amyotrophic lateral sclerosis (ALS), and miscarriages.
Epidemiological Studies
Nancy Wertheimer and Ed Leeper first pointed to a link between high-current electrical facilities and childhood cancer in 1979.6 Over the next 20 years, a host of follow-up studies were carried out, most of which, but not all, showed an increased risk of childhood leukemia among those exposed to milligauss-level magnetic fields.
Environmental Health
More recently, two independent meta-analyses,7 with data from the best of these epidemiological studies, have found that the risk of a child developing leukemia doubles following exposure to magnetic fields above 3–4 mG. Associations have also been reported among those living near power lines. For instance, in June 2005, the Childhood Cancer Research Group in the United Kingdom found a 70% increase in leukemia among children living within 200 meters of a power line. The team concluded like many others in the past: “We have no satisfactory explanation for our results in terms of causation by magnetic fields, and the findings are not supported by convincing laboratory data or any accepted biological mechanism.”8 The lack of support from laboratory and animal experiments clearly weakens the association but may be due to our still primitive understanding of what aspect of EMF exposure is responsible for the increased cancer risk. Most experimental studies have used a pure sinusoidal EMF at 50–60 Hz, rather than real-world fields that include more complex waveforms. For instance, short-lived electromagnetic transients with high peak-power have long been cited as being more biologically active than simple sine waves, but unfortunately, very little research has been done to investigate the potential effects of complex fields. As Kenneth Olden, the director of the NIEHS, reported to the U.S. Congress in 1999 at the end of a 6-year federal research program known as EMF RAPID: “The human data are in the ‘right’ species, are tied to ‘real life’ exposures and show some consistency that is difficult to ignore.”9 Studies of EMF-exposed workers show less consistency than the childhood residential studies, but they too show a pattern of excess leukemia, as well as brain cancer. Meta-analyses sponsored by the U.S. 
electric utility industry point to small, but statistically significant, associations for both types of cancer.10 Here again, the difficulties associated with identifying the appropriate exposure parameters have frustrated attempts at reaching firm conclusions. Most occupational studies have focused on worker exposure to magnetic fields, but Anthony Miller observed much higher risk estimates when he took into account exposures to electric fields. For instance, in a study of electric utility workers, Miller reported “significant elevations” in the risk of leukemia for relatively high exposures to both magnetic and electric fields.11 In a later analysis, Paul Villeneuve and Miller found that workers who had worked for at least 20 years and had considerable exposure to electric fields above a threshold of 10–20 V/m had an increased risk of leukemia which was 8–12 times the expected rate.12 Unfortunately, there have not been any follow-up studies to see whether other workers have experienced similar, elevated risks of leukemia and whether the risks are dependent on exposures above a certain threshold. The importance of an exposure threshold got a big boost when a prospective study by De-Kun Li of Kaiser Permanente in Oakland, California, showed that women who were exposed to 16 mG or more on a typical day had up to a sixfold increased risk of spontaneous miscarriage.13 Prior to this, EMF-miscarriage studies had found mixed results. Li’s result also awaits replication and confirmation. The lack of a clear resolution is common for a number of possible ELF EMF health risks. The case for ALS is quite strong,14 while that for Alzheimer’s is more mixed.15 For female breast cancer, it is still hard to reach any firm conclusion: Some occupational studies show an association,16 while others do not.17 For residential exposures, all the recent efforts have shown no elevated risk,18 but this may be at least partially due to the fact that exposures at home were quite low. 
There is a stronger association for male breast cancer and EMF exposures.19
Biophysical Bases for the Epidemiological Links The lack of a mechanism of interaction continues to undermine the acceptance of the EMF–cancer link. Nevertheless, two lines of laboratory research offer support for the epidemiological findings. The first stems from experiments carried out at the University of Washington, Seattle, where Henry Lai and N. P. Singh have shown
that power-frequency magnetic fields can induce single- and double-strand breaks in the DNA of rats.20 A number of others have also found this genotoxic effect: Sweden's Britt-Marie Svendenstal21 did so using mice, and Austria's Hugo Rüdiger22 did so with in vitro studies. Even though the quantum energy at 50/60 Hz is far below that needed to break a chemical bond, there may be alternative explanations based on epigenetic changes. Meanwhile, researchers are trying to extend the work in the hope that it throws some light on the nature of the interaction. For instance, Rüdiger found the strongest effect on DNA with an intermittent EMF exposure (field on for 5 minutes, followed by 10 minutes with the field off) and at a relatively low intensity—350 mG (35 µT). Lai and Singh argue that free radicals are the key to understanding the EMF–DNA interaction: they have shown that free radical scavengers can block DNA strand breaks.23 They have won support from an Italian group that has also linked the breaks to the formation of free radicals.24 A second possible mechanism centers on melatonin, a natural hormone and powerful antioxidant produced by the pineal gland at night. Visible light has long been known to stop the flow of melatonin from the pineal; EMFs can have a similar, albeit weaker, effect. In 1987, Richard Stevens proposed that electric power, by increasing both light-at-night and EMF exposures, could be responsible for the increased incidence of breast cancer in industrially advanced societies.25 While laboratory studies exposing humans and animals to pure sinusoidal 50/60 Hz magnetic fields have yielded mixed results, surveys of people exposed to EMFs on the job26 and at home27—that is, in real-world environments—have been much more consistent in showing suppression of melatonin. Jim Burch of Colorado State University has helped elucidate some of the complexities of the EMF–melatonin interaction.
He has found that electric utility workers exposed to circularly or elliptically polarized fields have lower melatonin levels, but he did not see a similar reduction among those exposed to linearly polarized EMFs.28 EMFs can not only inhibit the production of melatonin by the pineal, they can also block its oncostatic action. In the 1980s, David Blask found that melatonin can inhibit the growth of MCF-7 breast cancer cells.29 Then, about 10 years later, Robert Liburdy showed that a very weak (1.2 µT or 12 mG) 60 Hz field can counteract the antiproliferative action of melatonin on breast cancer cells.30 Four other labs have succeeded in repeating Liburdy's experiment. A Japanese group, the fifth to document this effect, went on to show how a low-intensity magnetic field can disrupt a cell's signaling system.31 In a series of animal exposure studies, a team led by Wolfgang Löscher of Germany's Hannover Medical School has shown that a 50 Hz magnetic field can promote breast cancer in rats initiated by the carcinogen 7,12-dimethylbenz[a]anthracene (DMBA).32 An effort by the U.S. National Toxicology Program failed to replicate this finding,33 but subsequent experiments by Löscher explained the apparent discrepancy. He identified a genetic component to the effect: two substrains of the rats can respond differently to the tumor-promoting effects of the magnetic fields.34
RADIO FREQUENCY AND MICROWAVE RADIATION
Radiofrequency and microwave (RF/MW) radiation covers the 3 kHz–300 GHz frequency band of the electromagnetic spectrum. The most common sources of public exposure to RF/MW radiation are mobile phones and their associated towers (see following sections). Television and radio stations use more powerful signals to broadcast their programs. Other high-power sources include radars and satellite uplinks. (Satellite dishes are passive: they only collect microwave signals, much as a magnifying glass focuses the sun's rays.) The military is a major user of RF/MW radiation for communications, radar, and electronic warfare. A multitude of industrial applications make use of the radiation's heating properties—for instance, RF heaters and sealers are used to make products as diverse as loose-leaf plastic binders and car seats; other applications include laminating wood veneers. Microwaves are also used in hyperthermia for the treatment of cancer. The ambient intensity of the radiation is measured in milliwatts per square centimeter (mW/cm2) or watts per square meter (W/m2) (1 mW/cm2 = 10 W/m2). Specific absorption rates (SARs) are used to quantify the energy delivered to tissues and are measured in watts per kilogram (W/Kg). RF/MW meters are more expensive than those that measure 50/60 Hz fields. SARs are difficult to estimate and must be converted to intensity limits for enforcement. For adult humans, an average whole-body SAR of 4 W/Kg is approximately equivalent to a power density of 10 mW/cm2 at 30–300 MHz.
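These unit relationships are easy to check numerically. The sketch below also includes the standard far-field plane-wave relation S = E²/377 Ω, which is textbook RF physics added here as background rather than something stated above; the function names are illustrative only:

```python
def mw_per_cm2_to_w_per_m2(s_mw_cm2):
    # 1 mW/cm^2 = 1e-3 W / 1e-4 m^2 = 10 W/m^2
    return s_mw_cm2 * 10.0

def v_per_m_to_uw_per_cm2(e_v_per_m):
    # Far-field plane wave: S = E^2 / 377 ohms gives W/m^2;
    # 1 W/m^2 = 100 uW/cm^2.
    return e_v_per_m**2 / 377.0 * 100.0

# Example: a 4 V/m field corresponds to roughly 4.2 uW/cm^2.
```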
Epidemiological Studies
There are far fewer high-quality epidemiological studies of RF/MW-exposed populations than of populations exposed to power-frequency fields. Stanislaw Szmigielski of the Center for Radiobiology and Radiation Safety in Warsaw is the only researcher ever to run a major epidemiological study of military personnel occupationally exposed to RF/MW radiation. Overall, he found that exposed soldiers had twice the expected rate of cancer, a statistically significant finding. For leukemia and lymphoma, the incidence was six times that of the controls, with even higher rates for younger (20–50-year-old) servicemen.35 In the United States, an early effort36 to investigate those exposed to radar in the military was marred by the selection of controls—some of these had been exposed to radar radiation. A later study37 of navy personnel found a suggestion of a leukemia risk, but this study also suffered from poor exposure assessment. Problems with estimating exposures have similarly set back epidemiological studies of possible risks associated with radio and TV broadcast radiation. Nevertheless, studies in a number of different countries have implicated various types of broadcast radiation in leukemia, especially among the young. In Australia, Bruce Hocking, an occupational physician, found higher rates of leukemia among children living near a TV tower in Sydney.38 Helen Dolk saw a similar pattern in the United Kingdom among adults near TV and FM transmitters outside of Birmingham,39 but her follow-up study of those living near other transmitter sites in England was ambiguous.40 A team from South Korea identified a significantly higher mortality rate from leukemia among young adults (under 30 years of age) living in the vicinity of AM broadcast towers.41 A higher than expected incidence of childhood leukemia has also been found near the high-power shortwave transmitters operated by Vatican Radio outside Rome.
Convincingly, there was a significant decline in cancer risk with increasing distance from the Vatican antennas.42 In an earlier study, Sam Milham had reported higher rates of leukemia and lymphoma among amateur radio operators,43 but the nature of their electromagnetic exposures (as well as their exposure to other possibly toxic substances) is not clear. The focus of RF/MW epidemiology has now turned to mobile phones and, to a lesser extent, to their associated towers (see below). This is also the case for in vitro and animal studies.
MOBILE PHONES AND TOWERS
The most important public health issue related to nonionizing electromagnetic radiation is the widespread use of mobile (cellular) phones. More than two billion people around the world are now regular users of these hand-held devices. Whether or not there are any deleterious effects of long-term exposure to microwave radiation remains an open question. The International Agency for Research on Cancer (IARC) is coordinating the Interphone Project,44 in which epidemiologists from 13 countries (the United States is not among them) are investigating possible mobile phone cancer risks. Each country is running its own case-control study. The combined data—projections point to a total
of more than 5100 cases of benign and malignant brain tumors, as well as more than 1100 cases of acoustic neuromas and more than 100 cases of malignant parotid gland tumors—will then be analyzed together. The results are due by 2008. A number of participating teams have already published their findings—these include Denmark,45 Germany,45A and Sweden.46 More significantly, five northern European countries have pooled their data and seen a statistically significant 39% increase in gliomas, on the side of the head on which the phone was used, among those who had used cell phones for at least ten years.47 A similar ipsilateral, long-term risk has been seen for acoustic neuroma, a benign tumor of the acoustic nerve, by the Swedish group48 and in a five-country meta-analysis.49 A second team of Swedish researchers, led by Lennart Hardell and Kjell Hansson Mild, has also seen a risk of acoustic neuromas, as well as of certain types of brain tumors.50 In addition, they found indications that there are also health risks associated with the use of cordless phones. These same researchers have reported a higher brain tumor risk among those using phones in rural areas compared with those using them in urban environments.51 These potential risks should become clearer when the complete Interphone results become available. Nevertheless, there are reasons to suspect that even then considerable uncertainty will remain. First, only a relatively small number of people participating in the Interphone study will have used mobile phones for more than 10 years, an important limitation given that some cancers have a latency of 15–20 years. In addition, epidemiologists have a hard time estimating exposures to mobile phone radiation. This is further complicated by changes in technology—for instance, the transition from analog to digital, as well as the variety of possible signal types, such as CDMA (code division multiple access) and TDMA (time division multiple access).
While much of the concern over mobile phones has been focused on cancer and acoustic neuromas, a number of other effects have also been reported. These include:
• DNA Breaks: Lai and Singh, who have shown that ELF EMFs can cause DNA breaks, had previously found a similar effect at RF/MW frequencies.52 These findings touched off a major controversy with clear implications for the safety of mobile phones. As a result, Motorola commissioned a series of studies in Joe Roti Roti's lab at Washington University in St. Louis. He failed to see any RF/MW-induced DNA breaks.53 More recently, however, others have found that mobile phone signals, at relatively low intensities, can damage DNA both in live animals54 and in cell cultures.55 Here again, there are many contradictory results, and the nature of this genotoxic effect remains uncertain.
• Increasing the Permeability of the Blood-Brain Barrier (BBB): This microwave effect also remains unresolved. The issue is as controversial today as it was when first reported 30 years ago by Allan Frey.56 Leif Salford and Bertil Persson of Sweden's University of Lund are the latest to point to changes in the BBB following low-level microwave exposure.57 More recently, they have observed cellular damage in the brains of exposed rats after only a 2-hour exposure to very-low-intensity mobile phone radiation.58 They attribute the neuronal damage to changes in the BBB. Pierre Aubineau of France's University of Bordeaux has also observed BBB leakage in rat brains.59
• Changes in Brain Activity and Sleep Patterns: A Swiss group led by Alexander Borbély and Peter Achermann at the University of Zurich has shown that a single 30-minute peak exposure to a 1 W/Kg (in the head) microwave signal simulating that from a GSM mobile phone had an immediate effect on the brain's electrical activity that lasted through most of the night's sleep.60 This group has found that "pulse modulation is crucial for RF EMF-induced alterations in brain physiology."
• Activation of Stress Proteins: Dariusz Leszczynski of Finland's Radiation and Nuclear Safety Authority has found that nonthermal mobile phone signals can cause changes in the expression of heat-shock proteins. Leszczynski has suggested that these effects, "when occurring repeatedly over a long period of time, might become a health hazard because of the possible accumulation of brain tissue damage."61
• Changes in Reaction Times: A number of research teams have seen improvements in cognitive functions and reaction times in psychological tests—by small but significant amounts—following exposure to mobile phone radiation.62,63 There are inconsistencies among these reports, and their relevance to human health remains unclear.
In light of the uncertain health impacts associated with the use of mobile phones, a number of expert panels have recommended a precautionary approach to their use by children. The first of these was a U.K. group headed by Sir William Stewart, which, in its report issued in 2000, discouraged widespread use by children "because of their developing nervous system, the greater absorption of energy in the tissue of the head and longer lifetime of exposure."64 In a follow-up report issued in 2004, Stewart reaffirmed this recommendation.65 Similarly, a French panel has also advised that parents limit the use of mobile phones by children.66 Numerous devices are being marketed to reduce radiation exposures from hand-held phones. Practically all are useless. Hands-free sets are the single exception: they allow you to move the phone away from your head and your eyes. Although unrelated to radiation exposure, it is worth pointing out that the use of a mobile phone while driving a motor vehicle substantially increases—by a factor of four, according to one estimate67—the risk of an accident. The use of a hands-free set has been shown to do little to improve the reaction time for applying brakes.68,69 Mobile phone towers entail much lower exposures than the phones—on the order of 1000 times less—but on the other hand, the towers are transmitting all the time. Such low exposures make epidemiological studies very difficult to carry out. Nevertheless, in 2003, the U.K. government funded a study of leukemia and other cancers among children living near these towers—the first effort of its kind. One provocation study, carried out in the Netherlands, found, to the surprise of many observers, that RF/MW radiation of only 1 V/m has an impact on the "well-being" of those exposed.70 But an attempt to repeat this finding failed.70A Claims of electrosensitivity among certain populations remain controversial and unresolved.
EMF AND RF/MW EXPOSURE LIMITS
The United States has no federally enforceable standards governing ambient exposures to any type of ELF EMFs or RF/MW radiation. Two, sometimes competing, groups set voluntary exposure limits: the International Commission on Non-Ionizing Radiation Protection, better known as ICNIRP,71 and the International Committee on Electromagnetic Safety (ICES), a group working under the aegis of the Institute of Electrical and Electronics Engineers (IEEE), based in Piscataway, New Jersey. There are two federal limits for specific products: one governing mobile phones and the other microwave ovens. The Federal Communications Commission (FCC) has adopted the IEEE exposure limit for hand-held mobile phones: an SAR of 1.6 W/Kg averaged over 1 g of tissue. This is stricter than the ICNIRP limit of 2.0 W/Kg averaged over 10 g of tissue. The averaging volume may seem of little consequence, but in fact going from 1 g to 10 g results in a two- to threefold increase in allowable exposures. In 2005, the IEEE relaxed its mobile phone limit to match the ICNIRP standard, except that the IEEE exempts the pinna while ICNIRP does not.71A The FCC has not yet indicated whether it might adopt the weaker standard.
More than 30 years ago, the Food and Drug Administration adopted an emission standard of 1 mW/cm2 at 5 cm from the door of a new microwave oven and 5 mW/cm2 once it leaves the store. Both the ICNIRP and the IEEE guidelines are based only on acute hazards and do not address possible long-term risks, such as cancer. At ELF frequencies, the standards seek to protect against shocks and burns, while for RF/MW radiation, they are designed to protect against thermal hazards. Both groups have discounted the well-documented childhood leukemia risk at power frequencies. The ICNIRP standard specifies a general public limit of 1 G (1000 mG) and 5 kV/m for power-frequency magnetic and electric fields, respectively. For workers, the limits are 5 G and 10 kV/m. The IEEE limits are more lenient: approximately 9 G and 5 kV/m for the public and 27 G and 20 kV/m for those in "controlled" environments, respectively (these latter limits are essentially equivalent to occupational standards).72 Given the enormous gulf between the general public limit of 1–9 G and the apparent threshold of 3–4 mG for a leukemia risk among children, there have been calls for a precautionary approach to reduce exposures to ELF EMFs,73 similar to those for mobile phones and children. Others, for instance those at the World Health Organization, have opposed such proposals, arguing that they would undermine the scientific basis of exposure standards.74 In the United States, precautionary policies are framed in terms of "prudent avoidance," a term first applied to EMF health risks by a team at Carnegie Mellon University in the late 1980s in a report prepared for the Congressional Office of Technology Assessment (OTA—now disbanded).75 Prudent avoidance, like other precautionary policies, may be defined in various ways.
The OTA proposed that ELF EMF exposures may be reduced by rerouting power lines and redesigning electrical systems and appliances when these actions entail "modest costs." Prudent avoidance is a low-cost variation of the ALARA (as low as reasonably achievable) strategy devised to limit exposures to ionizing radiation. At RF/MW frequencies, the ICNIRP76 and IEEE71A standards are similar. Both are frequency dependent, based on the assumption that the threshold for ill effects is a whole-body SAR of 4 W/Kg (averaged over a 6-minute interval). Each then applies a safety factor of 10 to determine the occupational or controlled exposure limits, which are thus based on an SAR of 0.4 W/Kg. For exposures of the general public, each adds another safety factor of 5, for a resulting SAR of 0.08 W/Kg. When converted to ambient exposure limits, the two sets of guidelines are frequency dependent to take into account the changes in energy absorption. When plotted as a function of frequency, the limits have a well-like shape. At their most restrictive frequencies (10–400 MHz for ICNIRP and 100–400 MHz for IEEE), these SARs translate to 1 mW/cm2 for workers and 200 µW/cm2 for the general public. Above 400 MHz, the ICNIRP public exposure limit rises to 1 mW/cm2 at 2 GHz (and to 5 mW/cm2 for workers). The IEEE is less strict above 400 MHz, rising to 10 mW/cm2 at 15 GHz for uncontrolled exposures and at 2 GHz for controlled exposures. For frequencies below approximately 100 MHz, limits for contact currents are specified. The guidelines also include looser limits for partial-body exposures: ICNIRP and the IEEE allow a 25-fold (for the head and trunk) and a 20-fold increase, respectively. These less strict limits do not apply to the eyes or the testes, however. (The mobile phone exposure limits of 1.6 W/Kg [IEEE] and 2.0 W/Kg [ICNIRP] are derived by multiplying the 0.08 W/Kg guideline by 20 or 25, respectively.)
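The cascade of safety factors described above can be laid out explicitly. This is a sketch of the arithmetic only, with variable names of our own choosing:

```python
THRESHOLD_SAR = 4.0                 # W/Kg, assumed whole-body threshold for ill effects

occupational = THRESHOLD_SAR / 10   # safety factor of 10 -> 0.4 W/Kg
general_public = occupational / 5   # additional factor of 5 -> 0.08 W/Kg

# Partial-body (head and trunk) relaxations yield the mobile phone limits:
icnirp_local = general_public * 25  # 2.0 W/Kg, averaged over 10 g of tissue
ieee_local = general_public * 20    # 1.6 W/Kg, averaged over 1 g of tissue
```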
Some national governments, notably those of Italy and Switzerland, have adopted precautionary limits for both ELF EMFs and RF radiation. In addition, China and Russia have their own sets of limits that are significantly stricter than those of ICNIRP and the IEEE. In 2000, for instance, Switzerland adopted a 10 mG (1 µT) exposure standard for magnetic fields from new power lines, substations, and electric railway lines in places where people spend time, a level that is 100 times stricter than the ICNIRP guidelines. For RF/MW radiation from mobile phone towers, the Swiss have an
ambient limit of 4.2 µW/cm2 (4 V/m), which is 100–150 times stricter than ICNIRP and the IEEE. Italy’s standard for mobile phone towers is 10 µW/cm2 (6 V/m). Individual countries cannot easily set strict standards for mobile phones because they would be seen as barriers to trade. Nevertheless, TCO Development, an arm of the Swedish white-collar union, is advocating an SAR limit of 0.8 W/kg averaged over 10 g of tissue for mobile phones, with an additional specification on the communications efficiency.77 Using a similar strategy (promoting a precautionary limit that is technologically feasible), TCO prompted an industrywide reduction of operator exposures from video display terminals (VDTs). Today, essentially all large manufacturers market TCO-compliant displays, and concerns over radiation emissions have faded away. While this is good news for data entry workers and the huge number of other computer users, it has left unresolved the question as to whether or not EMFs from cathode ray tubes can lead to miscarriages and other adverse pregnancy outcomes.78
FUTURE RESEARCH
Health research on ELF EMFs and RF/MW radiation has come to a standstill in the United States. The only organizations doing any work are EPRI, the research arm of the electric utility industry, and the U.S. Air Force. Both organizations have clear conflicts of interest. EPRI is spending most of its budget trying to show that childhood leukemia is attributable to contact currents rather than to the magnetic fields from its members’ power lines. The Air Force is developing crowd-control weapons, for example “active denial technology,” one of a growing number of “nonlethal weapons.” Active denial uses millimeter waves (~100 GHz) to cause heat-induced pain to disperse crowds. Air Force researchers argue that skin heating does not cause any long-term ill effects.79 The Air Force has played a leading role in the development of the IEEE safety standards, but few have raised objections to its simultaneous promotion of weapons and exposure limits. The U.S. National Toxicology Program has proposed to undertake a major series of RF/MW animal studies. At this writing, the studies are due to begin in the near future, possibly by late 2007. The situation in Europe is very different, with both the EC and individual countries sponsoring their own sets of health studies.
REFERENCES
Nonionizing Radiation
1. Hughes D. Hazards of Occupational Exposure to Ultraviolet Radiation. Occupational Hygiene Monograph No. 1. Leeds, England: University of Leeds Industrial Services; 1978. 2. Occupational Exposure to Ultraviolet Radiation. NIOSH Criteria Document. Washington, DC: U.S. Department of HEW, NIOSH Publication No. 73-11009; 1973; 108. 3. Urbach F. Geographic distribution of skin cancer. J Surg Oncol. 1971;3:219–34. 4. American National Standards Institute. Practice of Industrial Lighting A 11.1, 1965 (reaffirmed 1970). Practice for Office Lighting A 132.1, 1966. Guide for School Lighting A 23.1, 1962 (reaffirmed 1970). New York: The Institute. 5. Illuminating Engineering Society, Committee on Daylighting. Recommended Practice of Daylighting. Baltimore: The Society; 1950. 6. Schnelle JF, et al. The nursing home at night: effects of an intervention on noise, light, and sleep. J Am Ger. 1999;47:430–38. 7. DeCastro JM. Effect of ambience on food intake and food choice. Nutrition. 2004;20:821–38. 8. Wansink B. Environmental factors that increase the food intake and consumption volume of unknowing consumers. Ann Rev Nutr. 2004;24: 455–79.
9. Walsh-Sukys M, et al. Reducing light and sound in the neonatal intensive care unit: an evaluation of patient safety, staff satisfaction and costs. J Perinatology. 2001;21:230–5. 10. Rea M. Lighting for caregivers in the neonatal intensive care unit. Clin Perinatol. 2004;31:229–42. 11. White RD. Lighting design in the neonatal intensive care unit: practical applications of scientific principles. Clin Perinatol. 2004;31: 323–30. 12. Hunter CM. Bright ideas. Some rules of thumb for interior lighting design and selection. Hlth Fac Mngt. 2002;15:26–30. 13. Pauley SM. Lighting for the human circadian clock: recent research indicates that lighting has become a public health issue. Med Hypoth. 2004;63:588–96.
Extremely Low Frequency Electromagnetic Fields 1. For a list of gaussmeters, go to: http://www.microwavenews.com/ EMF1.html. 2. Zaffanella LE, Kalton GW. EMF RAPID Program, Project #6 Report: Survey of Personal Magnetic Field Exposure, Phase II: 1000-Person Survey. Lee, MA: Enertech Consultants; 1998. Full text available at: http://www.emf-data.org/rapid6-report.html. 3. IARC Monographs on the Evaluation of Carcinogenic Risks to Humans. Non-Ionizing Radiation, Part 1: Static and Extremely Low Frequency (ELF) Electric and Magnetic Fields, Vol. 80. Lyon, France: International Agency for Research on Cancer; 2002. 4. Portier CJ, Wolfe MS, eds. Assessment of Health Effects from Exposure to Power-Line Frequency Electric and Magnetic Fields: Working Group Report. Research Triangle Park, NC: National Institute of Environmental Health Sciences; 1998. Full text available at: http:// www.niehs.nih.gov/emfrapid/html/WGReport/PDF_Page.html. 5. Neutra R, DelPizzo V, Lee G. An Evaluation of the Possible Risks From Electric and Magnetic Fields (EMFs) From Power Lines, Internal Wiring, Electrical Occupations and Appliances. Oakland, CA: California EMF Program; June 2002. Full text available at: http://www.dhs.ca.gov/ps/deodc/ehib/emf/RiskEvaluation/ riskeval.html. 6. Wetheimer N, Leeper E. Electrical wiring configurations and childhood cancer. Am J Epidemiol. 1979;109:273–84. 7. Ahlbom A, Day N, Feychting M, Roman E, et al. A pooled analysis of magnetic fields and childhood leukemia. Br J Cancer. 2000;83: 692–8; Greenland S, Sheppard AR, Kaune WT, Poole C, Kelsh MA. A pooled analysis of magnetic fields, wire codes, and childhood leukemia. Epidemiology. 2000;11:624–34. 8. Draper G, Vincent T. Kroll ME, Swanson J. Childhood cancer in relation to distance from high voltage power lines in England and Wales: a case-control study. Br Med J. 2005;330:1290–2. 9. NIEHS Report on Health Effects from Exposure to Power-Line Frequency Electric and Magnetic Fields. 
Research Triangle Park, NC: NIEHS, 1999. NIEHS Publication No.99-4493. Full text available at: http://www.niehs.nih.gov/emfrapid/html/ EMF_DIR_RPT/ Report_18f.htm. 10. Kheifets LI, Afifi AA, Buffler PA, Zhang ZW. Occupational electric and magnetic field exposure and brain cancer: a meta-analysis. J Occup Environ Med. 1995;37:1327–41; Kheifets LI, Afifi AA, Buffler PA, et al. Occupational electric and magnetic field exposure and leukemia: a meta-analysis. J Occup Environ Med. 1997;39:1074–91; Kheifets LI, Gilbert ES, Sussman SS, et al. Comparative analyses of the studies of magnetic fields and cancer in electric utility workers: studies from France, Canada and the United States. Occup Environ Med. 1999;56:567–74. 11. Miller AB, To T, Agnew DA, Wall C, Green LM. Leukemia following occupational exposure to 60 Hz electric and magnetic fields among Ontario electric utility workers. Am J Epidemiol. 1996;144: 150–60.
Environmental Health
12. Villeneuve PJ, Agnew DA, Miller AB, et al. Leukemia in electric utility workers: the evaluation of alternative indices of exposure to 60 Hz electric and magnetic fields. Am J Ind Med. 2000;37: 607–17. 13. Li D-K, Odouli R, Wi S, et al. A population-based prospective cohort study of personal exposure to magnetic fields during pregnancy and the risk of miscarriage. Epidemiology. 2002;13:9–20. 14. Li CY, Sung FC. Association between occupational exposure to power-frequency electromagnetic fields and amyotrophic lateral sclerosis: a review. Am J Ind Med. 2003;43:212–20. 15. Feychting M, Jonsson F, Pedersen NL, Ahlbom A. Occupational magnetic field exposure and neurodegenerative disease. Epidemiology. 2003;14:413–9; Hakansson N, Gustavsson P, Johansen C, Floderus B. Neurodegenerative diseases in welders and other workers exposed to high levels of magnetic fields. Epidemiology. 2003;14:420–6; Qiu C, Fratiglioni L, Karp A, Winblad B, Bellander T. Occupational exposure to electromagnetic fields and risk of Alzheimer’s disease. Epidemiology. 2004;15:687–94. 16. Kliukiene J, Tynes T, Andersen A. Residential and occupational exposures to 50-Hz magnetic fields and breast cancer in women: a population-based study. Am J Epidemiol. 2004;159:852–61; Labreche F, Goldberg MS, Valois MF, et al. Occupational exposures to extremely low frequency magnetic fields and postmenopausal breast cancer. Am J Ind Med. 2003;44:643–52. 17. Forssen UM, Rutqvist LE, Ahlbom A, Feychting M. Occupational magnetic fields and female breast cancer: a case-control study using swedish population registers and new exposure data. Am J Epidemiol. 2005;161:250–9. 18. Davis S, Mirick DK, Stevens RG. Residential magnetic fields and the risk of breast cancer. Am J Epidemiol. 2002;155:446–54; Schoenfeld ER, O’Leary ES, Henderson K, et al. Electromagnetic fields and breast cancer on Long Island: a case-control study. Am J Epidemiol. 2003;158:47–58; London SJ, Pogoda JM, Hwang KL, et al. 
Residential magnetic field exposure and breast cancer risk: a nested case-control study from a multiethnic cohort in Los Angeles County, California. Am J Epidemiol. 2003;158:969–80. 19. Erren TC. A meta-analysis of epidemiologic studies of electric and magnetic fields and breast cancer in women and men. Bioelectromagnetics Suppl. 2001;5:S105–19. 20. Lai H, Singh NP. Acute exposure to a 60 Hz magnetic field increases DNA strand breaks in rat brain cells. Bioelectromagnetics. 1997;18:156–65; Singh N, Lai H. 60 Hz magnetic field exposure induces DNA crosslinks in rat brain cells. Mutat Res. 1998;400: 313–20. 21. Svedenstal BM, Johanson KJ, Mattsson MO, Paulsson LE. DNA damage, cell kinetics and ODC activities studied in CBA mice exposed to electromagnetic fields generated by transmission lines. In Vivo. 1999;13:507–13; Svedenstal BM, Johanson KJ, Mild KH. DNA damage induced in brain cells of CBA mice exposed to magnetic fields. In Vivo. 1999;13:551–2. 22. Ivancsits S, Diem E, Pilger A, Rüdiger HW, Jahn O. Induction of DNA strand breaks by intermittent exposure to extremely-low-frequency electromagnetic fields in human diploid fibroblasts. Mutat Res. 2002;519:1–13; Ivancsits S, Diem E, Jahn O, Rüdiger HW. Intermittent extremely low frequency electromagnetic fields cause DNA damage in a dose-dependent way. Int Arch Occup Environ Health. 2003;76: 431–6. 23. Lai H, Singh NP. Magnetic-field-induced DNA strand breaks in brain cells of the rat. Environ Health Perspect. 2004;112:687–94. 24. Wolf FI, Torsello A, Tedesco B, et al. 50-Hz extremely low frequency electromagnetic fields enhance cell proliferation and DNA damage: possible involvement of a redox mechanism. Biochim Biophys Acta. 2005;1743:120–9. 25. Stevens RG. Electric power use and breast cancer: a hypothesis. Am J Epidemiol. 1987;125:556–61.
26. Burch JB, Reif JS, Yost MG, Keefe TJ, Pitrat CA. Nocturnal excretion of a urinary melatonin metabolite among electric utility workers. Scand J Work Environ Health. 1998;24:183–9; Juutilainen J, Stevens RG, Anderson LE, et al. Nocturnal 6-hydroxymelatonin sulfate excretion in female workers exposed to magnetic fields. J Pineal Res. 2000;28:97–104. 27. Davis S, Kaune WT, Mirick DK, Chen C, Stevens RG. Residential magnetic fields, light-at-night, and nocturnal urinary 6-sulfatoxymelatonin concentration in women. Am J Epidemiol. 2001;154:591–600. 28. Burch JB, Reif JS, Noonan CW, Yost MG. Melatonin metabolite levels in workers exposed to 60-Hz magnetic fields: work in substations and with 3-phase conductors. J Occup Environ Med. 2000;42: 136–42. 29. Blask DE, Hill SM. Effects of melatonin on cancer: studies on MCF-7 human breast cancer cells in culture. J Neural Transm Suppl. 1986;21:433–49. 30. Harland JD, Liburdy RP. Environmental magnetic fields inhibit the antiproliferative action of tamoxifen and melatonin in a human breast cancer cell line. Bioelectromagnetics. 1997;18:555–62. 31. Ishido M, Nitta H, Kabuto M. Magnetic fields (MF) of 50 Hz at 1.2 microT as well as 100 microT cause uncoupling of inhibitory pathways of adenylyl cyclase mediated by melatonin 1a receptor in MF-sensitive MCF-7 cells. Carcinogenesis. 2001;22:1043–8. 32. Thun-Battersby S, Mevissen M, Löscher W. Exposure of spraguedawley rats to a 50-hertz, 100-microtesla magnetic field for 27 weeks facilitates mammary tumorigenesis in the 7,12-dimethylbenz[a]-anthracene model of breast cancer. Cancer Res. 1999;59:3627–33. 33. National Toxicology Program. NTP studies of magnetic field promotion (dmba initiation) in female sprague-dawley rats (wholebody exposure/gavage studies. Natl Toxicol Program Tech Rep Ser. 1999;489:1–48. 34. Fedrowitz M, Kamino K, Löscher W. 
Significant differences in the effects of magnetic field exposure on 7,12-dimethylbenz(a) anthracene-induced mammary carcinogenesis in two substrains of Sprague-Dawley rats. Cancer Res. 2004;64:243–51. 35. Szmigielski S. Cancer morbidity in subjects occupationally exposed to high frequency (radiofrequency and microwave) electromagnetic radiation. Sci Total Environ. 1996;180:9–17. 36. Robinette CD, Silverman C, Jablon S. Effects upon health of occupational exposure to microwave radiation (radar). Am J Epidemiol. 1980;112:39–53. 37. Garland FC, Shaw E, Gorham ED, et al. Incidence of leukemia in occupations with potential electromagnetic field exposure in United States navy personnel. Am J Epidemiol. 1990;132: 293–303. 38. Hocking B, Gordon IR, Grain HL, Hatfield GE. Cancer incidence and mortality and proximity to TV towers. Med J Aust. 1996;165:601–5; Hocking B, Gordon I. Decreased survival for childhood leukemia in proximity to television towers. Arch Environ Health. 2003;58: 560–4. 39. Dolk H, Shaddick G, Walls P, et al. Cancer incidence near radio and television transmitters in Great Britain. I. Sutton coldfield transmitter. Am J Epidemiol. 1997;145:1–9. 40. Dolk H, Elliott P, Shaddick G, Walls P, Thakrar B. Cancer incidence near radio and television transmitters in Great Britain. II. All high power transmitters. Am J Epidemiol. 1997;145:10–7. 41. Park SK, Ha M, Im H-J. Ecological study on residences in the vicinity of AM radio broadcasting towers and cancer death: preliminary observations in Korea. Int Arch Occup Environ Health. 2004;77:387–94. 42. Michelozzi P, Capon A, Kirchmayer U, et al. Adult and childhood leukemia near a high-power radio station in Rome, Italy. Am J Epidemiol. 2002;155:1096–103.
43. Milham S. Increased mortality in amateur radio operators due to lymphatic and hematopoietic malignancies. Am J Epidemiol. 1988;127: 50–4. 44. See: http://www.iarc.fr/ENG/Units/RCAd.html. 45. Christensen HC, Schuz J, Kosteljanetz M, et al. Cellular telephones and risk for brain tumors: a population-based, incident case-control study. Neurology. 2005;64:1189–95. 45A. Schuz J, Bohller E, Berg G, et al. Cellular phones, cordless phones and the risk of glioma and meningioma (Interphone Study Group, Germany). Am J Epidemiol. 2006;163:512–20. 46. Lönn S, Ahlbom A, Hall P, Feychting M. Long-term mobile phone use and brain tumor risk. Am J Epidemiol. 2005;161:526–35. 47. Lahkola A, Auvinen A, Raitanen J, et al. Mobile phone use and risk of glioma in 5 north European countries. Int J Cancer. 2007;120: 1769–75. 48. Lönn S, Ahlbom A, Hall P, Feychting M. Mobile phone use and the risk of acoustic neuroma. Epidemiology. 2004;15:653–9. 49. Schoemaker M, Swerdlow A, Ahlbom A, et al. Mobile phone use and risk of acoustic neuroma: results of the Interphone case-control study in five north European countries. Br Cancer J. 2005;93: 842–8. 50. Soderqvist D, et al. Long-term use of cellular phones and brain tumors—increased risk associated with use for 10 years. Occup Environ Med. 2007; published online April 4. 51. Hardell L, Carlberg M, Hansson Mild K. Use of cellular telephones and brain tumor risk in urban and rural areas. Occup Environ Med. 2005;62:390–4. 52. Lai H, Singh NP. Acute low-intensity microwave exposure increases DNA single-strand breaks in rat brain cells. Bioelectromagnetics. 1995;16:207–10; Lai H, Singh NP. Single- and double-strand DNA breaks in rat brain cells after acute exposure to radiofrequency electromagnetic radiation. Int J Radiat Biol. 1996;69:513–21. 53. Malyapa RS, Ahern EW, Straube WL, et al. Measurement of DNA damage after exposure to 2450 MHz electromagnetic radiation. Radiat Res. 1997;148:608–17; Malyapa RS, Ahern EW, Straube WL, et al. 
Measurement of DNA damage after exposure to electromagnetic radiation in the cellular phone communication frequency band (835.62 and 847.74 MHz). Radiat Res. 1997;148:618–27; Lagroye I, Anane R, Wettring BA, et al. Measurement of DNA damage after acute exposure to pulsed-wave 2450 MHz microwaves in rat brain cells by two alkaline comet assay methods. Int J Radiat Biol. 2004;80:11–20. 54. Aitken RJ, Bennetts LE, Sawyer D, Wiklendt AM, King BV. Impact of radiofrequency electromagnetic radiation on DNA integrity in the male germline. Int J Androl. 2005;28:171–9. 55. Diem E, Schwarz C, Adlkofer F, Jahn O, Rüdiger H. Non-thermal DNA breakage by mobile-phone radiation (1800 MHz) in human fibroblasts and in transformed GFSH-R17 rat granulosa cells in vitro. Mutat Res. 2005;583:178–83. 56. Frey AH, Feld SR, Frey B. Neural function and behavior: defining the relationship. Ann N Y Acad Sci. 1975;247:433–9. 57. Salford LG, Brun A, Sturesson K, Eberhardt JL, Persson BR. Permeability of the blood-brain barrier induced by 915 MHz electromagnetic radiation, continuous wave and modulated at 8, 16, 50, and 200 Hz. Microsc Res Tech. 1994;27:535–42. 58. Salford LG, Brun AE, Eberhardt JL, Malmgren L, Persson BR. Nerve cell damage in mammalian brain after exposure to microwaves from gsm mobile phones. Environ Health Perspect. 2003;111:881–3. 59. Tore F, Dulou P-E, Haro E, Veyret B, Aubineau P. Two-hour exposure to 2 W/Kg, 900-MHz GSM microwaves induces plasma protein extravasation in rat brain and dura mater. Fifth International Congress of the European Bioelectromagnetics Association, Helsinki, Finland, 2001 September 6–8:43–5.
60. Huber R, Treyer V, Schuderer J, et al. Exposure to pulse-modulated radio frequency electromagnetic fields affects regional cerebral blood flow. Eur J Neurosci. 2005;21:1000–6; Huber R, Treyer V, Borbély AA, et al. Electromagnetic fields, such as those from mobile phones, alter regional cerebral blood flow and sleep and waking EEG. J Sleep Res. 2002;11:289–95. 61. Leszczynski D, Joenvaara S, Reivinen J, Kuokka R. Non-thermal activation of the hsp27/p38MAPK stress pathway by mobile phone radiation in human endothelial cells: molecular mechanism for cancer- and blood-brain barrier-related effects. Differentiation. 2002;70:120–9. 62. Preece AW, Iwi G, Davies-Smith A, et al. Effects of a 915 MHz simulated mobile phone signal on cognitive function in man. Int J Radiat Biol. 1999;75:447–56. 63. Koivisto M, Revonsuo A, Krause C, et al. Effects of 902 MHz electromagnetic field emitted by cellular telephones on response times in humans. Neuroreport. 2000;11:413–5; Krause CM, Sillanmaki L, Koivisto M, et al. Effects of electromagnetic field emitted by cellular phones on the EEG during a memory task. Neuroreport. 2000;20:761–4; Krause CM, Sillanmaki L, Koivisto M, et al. Effects of electromagnetic fields emitted by cellular phones on the electroencephalogram during a visual working memory task. Int J Radiat Biol. 2000;76: 1659–67. 64. Independent Expert Group on Mobile Phones. Mobile Phones and Health. Didcot, Oxon (U.K.): National Radiological Protection Board; 2000. Full text available free at: http://www.iegmp.org.uk/ report/ index.htm. 65. National Radiological Protection Board (NRPB). Mobile Phones and Health 2004. Documents of the NRPB. 2004;15(5):1–114. 66. Mobile Telephones, Their Base Stations and Health. Paris, France: Direction Générale de la Santé, 2001. English summary available at: http://www.sante.gouv.fr/htm/dossiers/telephon_mobil/resum_uk.htm. 67. Redelmeier DA, Tibshirani RJ. Association between cellulartelephone calls and motor vehicle collisions. 
N Engl J Med. 1997;336: 453–8. 68. Consiglio W, Driscoll P, Witte M, Berg WP. Effect of cellular telephone conversations and other potential interference on reaction time in a braking response. Accid Anal Prev. 2003;35:495–500. 69. Strayer DL, Drews FA, Johnston WA. Cell phone-induced failures of visual attention during simulated driving. J Exp Psychol Appl. 2003;9:23–32. 70. Zwamborn AP, Vossen SH, van Leersum BJ, Ouwens MA, Mäkel WN. Effects of Global Communications System Radiofrequency Fields on Well Being and Cognitive Functions of Human Subjects With and Without Subjective Complaints. Netherlands Organization for Applied Scientific Research (TNO), Report No. FEL030C148, 2003; Health Council of the Netherlands. TNO Study on the Effects of GSM and UMTS Signals on Well Being and Cognition. The Hague: Health Council of the Netherlands, Publication No. 2004/13E; 2004. 70A. Regel S, Negovetic S, Roosli M, et al. UMTS base station-like exposure, well-being and cognitive performance. Environ Health Perspect. 2006;114:1270–5. 71. ICNIRP exposure guidelines can be downloaded at no charge from http://www.icnirp.de/downloads.htm. 71A. Institute of Electrical and Electronics Engineers. C95.1-2005 IEEE Standard for Safety Levels with Respect to Human Exposure to Radio Frequency Electromagnetic Fields, 3 kHz to 300 GHz. 72. Institute of Electrical and Electronics Engineers. C95.6-2002 IEEE Standard for Safety Levels with Respect to Human Exposure to Electromagnetic Fields 0 to 3 kHz. 73. Jamieson D, Wartenberg D. The precautionary principle and electric and magnetic fields. Am J Public Health. 2001;91:1355–8. 74. Foster KR, Vecchia P, Repacholi MH. Science and the precautionary principle. Science. 2000;288:979–81.
75. Office of Technology Assessment. Biological Effects of Power Frequency Electric and Magnetic Fields—Background Paper (No.OTA-BP-E-53). Washington, DC: Government Printing Office; 1989. 76. International Commission on Non-Ionizing Radiation Protection. Guidelines for Limiting Exposure to Time-Varying Electric, Magnetic, and Electromagnetic Fields (up to 300 GHz). Health Physics. 1998;74:494–522. 77. For more on TCO Development’s initiative, go to: http://www. mobilelabelling.com. 78. Goldhaber MK, Polen MR, Hiatt RA. The risk of miscarriage and birth defects among women who use visual display terminals during pregnancy. Am J Ind Med. 1988;13:695–706. 79. Walters TJ, Ryan KL, Nelson DA, Blick DW, Mason PA. Effects of blood flow on skin heating induced by millimeter wave irradiation in humans. Health Phys. 2004;86:115–20.
General References Ahlbom A, Cardis E, Green A, et al. Review of the epidemiologic literature on EMF and health. Environ Health Perspect. 2001;109:911–33. Ahlbom A, Green A, Kheifets L, Savitz D, Swerdlow A. Epidemiology of health effects of radiofrequency exposure. Environ Health Perspect. 2004;112:1741–54. Barnes F, Greenebaum B. Handbook of Biological Effects of Electromagnetic Fields. 3rd ed, Vol. 1, Vol. 2. Boca Raton, FL: CRC Press; 2006. Becker R, Selden G. The Body Electric: Electromagnetism and the Foundation of Life. New York: William Morrow; 1985. Bowman JD, Kelsh MA, Kaune WT. Manual for Measuring of Occupational Electric and Magnetic Field Exposures (NIOSH publication No.98-154). Cincinnati, OH: National Institute for Occupational Safety and Health; 1998. Brodeur P. Currents of Death. New York: Simon and Schuster; 1989. Brodeur P. The Zapping of America. New York: Norton; 1977. Carpenter DO, Ayrapetyan S, eds. Biological Effects of Electric and Magnetic Field: Sources and Mechanisms; Beneficial and Harmful Effects. Vol. 1, Vol. 2. San Diego, CA: Academic Press; 1994. Feychting M, Ahlbom A, Kheifets L. EMF and health. Ann Rev Public Health. 2005;26:165–89. Goldsmith JR. Epidemiologic evidence relevant to radar (microwave) effects. Environ Health Perspect. 105 Suppl 1997;6:1579–87. Hamblin DL, Wood AW. Effects of mobile phone emissions on human brain activity and sleep variables. Int J Radiat Biol. 2002;78: 659–69.
International Commission on Non-Ionizing Radiation Protection (ICNIRP). General approach to protection against non-ionizing radiation. Health Phys. 2002;82:540–8. ICNIRP. Guidance on determining compliance of exposure to pulsed and complex non-sinusoidal waveforms below 100 kHz with ICNIRP guidelines. Health Phys. 2003;84:383–7. Kundi M, Mild K, Hardell L, Mattsson MO. Mobile telephones and cancer: a review of epidemiological evidence. J Toxicol Environ Health B Crit Rev. 2004;7:351–84. Kuster N, Balzano, Lin JC. Mobile Communications Safety. London: Chapman & Hall; 1997. Leeper E. Silencing the Fields: A Practical Guide To Reducing AC Magnetic Fields. Boulder, CO: Symmetry Books; 2001. Löscher W, Liburdy RP. Animal and cellular studies on carcinogenic effects of low frequency (50/60-Hz) magnetic fields. Mutat Res. 1998;410: 185–220. National Research Council. Possible Health Effects of Exposure to Residential Electric and Magnetic Fields. Washington, DC: National Academy Press; 1997. McKinlay AF, Repacholi MH, eds. Exposure metrics and dosimetry for EMF epidemiology: proceedings of an international workshop. Radiat Prot Dosimetry. 1999;83(1–2):1–194. National Council on Radiation Protection and Measurements (NCRP). A Practical Guide to the Determination of Human Exposure to Radiofrequency Fields. Bethesda, MD: NCRP; 1993. NCRP. Biological Effects and Exposure Criteria for Radiofrequency Electromagnetic Fields. Bethesda, MD: NCRP; 1986. National Radiological Protection Board (NRPB). Review of the scientific evidence for limiting exposures to electromagnetic fields (0-300 GHz). Documents of the NRPB. 2004;15(3):1–215. Portier CJ, Wolfe MS, eds. Assessment of Health Effects from Exposure to Power-Line Frequency Electric and Magnetic Fields. Research Triangle Park, NC: National Institute of Environmental Health Sciences; 1998. Reilly JP. Electrical Stimulation and Electropathology. New York, NY: Cambridge University Press; 1992. Reilly JP. 
An analysis of differences in the low-frequency electric and magnetic field exposure standards of ICES and ICNIRP. Health Phys. 2005;89:71–80. Steneck NH. The Microwave Debate. Cambridge, Mass: MIT Press; 1984. Stevens RG, Wilson BW, Anderson LE, eds. The Melatonin Hypothesis: Breast Cancer and Use of Electric Power. Columbus, OH: Battelle Press; 1997.
Effects of the Physical Environment: Noise as a Health Hazard
37
Aage R. Moller
Noise is hazardous to health mainly because it can damage the ear, but it may also influence other bodily functions. A temporary or permanent decrease in hearing acuity, such as that from noise exposure (noise-induced hearing loss, NIHL), may impair speech communication. Noise can also mask speech and warning signals and thus poses a risk to safety and to the general health of workers. The most apparent and best-known health risk from noise is damage to hearing, so this will be addressed first. The other effects of noise are dealt with later in the chapter.
EFFECT OF NOISE ON HEARING
In this chapter, we use the word noise to describe sound that may be damaging to hearing, because this word has traditionally had negative connotations and thus will be identified more readily with health hazards. The potential of noise to damage hearing, however, is entirely related to its physical properties. The amount of NIHL that is acquired is related to the intensity and duration of the noise exposure and to the character of the noise (spectrum and time pattern). The character of the noise, whether continuous or transient, also plays a role: different types of noise pose different degrees of risk to hearing even when their overall intensity is the same, and impulsive sounds such as those from gunshots generally pose a greater risk than continuous noise. Low-frequency sounds are considered to be less damaging than high-frequency sounds of the same physical intensity. Therefore, when noise intensity is measured with a sound-level meter to predict its effect on hearing, a frequency weighting is used. The commonly used weighting (A-weighting) gives energy at low frequencies less weight than energy at high frequencies. The importance of the temporal pattern of noise is more difficult to represent in standard measurements of noise level. Since it is the physical characteristics of the sound that determine its potential for causing hearing loss, the origin of the sound has no influence upon the degree of risk it presents for hearing damage; sounds to which people are exposed during recreational activities pose as great a risk to hearing as noise associated with work activities such as in industry. Activities where people are exposed to gunshot noise pose a particularly high risk of inducing NIHL. There is great variation in an individual person’s susceptibility to noise-induced hearing loss and, therefore, only the
average probability for acquiring a hearing loss can be predicted on the basis of knowledge about the physical characteristics of noise and the duration of exposure to noise.
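The A-weighting mentioned above can be written down explicitly. The sketch below uses the standard analog-filter formula from the IEC sound-level-meter standard (the pole frequencies 20.6, 107.7, 737.9, and 12194 Hz come from that standard, not from this chapter), normalized so the correction at 1000 Hz is 0 dB.

```python
import math

def a_weighting_db(f_hz):
    """A-weighting correction in dB at frequency f_hz (0 dB at 1 kHz)."""
    def response(f):
        f2 = f * f
        return (12194.0 ** 2 * f2 ** 2) / (
            (f2 + 20.6 ** 2)
            * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
            * (f2 + 12194.0 ** 2)
        )
    # Normalize to the 1 kHz response so the curve passes through 0 dB there.
    return 20.0 * math.log10(response(f_hz) / response(1000.0))

# Low frequencies are down-weighted, reflecting their lower damage
# potential: the correction is roughly -19 dB at 100 Hz and 0 dB at 1 kHz.
```

A sound-level meter applies this weighting to each frequency band before summing the energy, which is why A-weighted levels are reported in dB(A).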
Temporary Threshold Shift and Permanent Threshold Shift
The first effect noticed when an ear is exposed to sounds above a certain intensity and for a certain time is a reduction in the ear’s sensitivity (elevated hearing threshold). This reduction in hearing is greatest immediately after the exposure and decreases gradually after the exposure has ended. If the noise has not been too loud or the exposure too long, hearing will gradually return to its original level. This kind of hearing loss is known as temporary threshold shift (TTS) (Fig. 37-1). TTS may be experienced after single exposures to high-intensity sounds such as those from explosions and from gunfire. If the noise is more intense than a certain value and/or the exposure time is longer than a certain duration, the hearing threshold never returns to its original value and a permanent threshold shift (PTS) has occurred. PTS is the stable threshold shift that remains after recovery from TTS (Fig. 37-1). PTS dominates in people who have been exposed to such noise for many years, and the TTS component after the end of the exposure is small. Individual variation in NIHL for the same noise exposure is considerable, and the curves in Figure 37-1 represent the average course of hearing loss. While TTS probably results from temporary impairment of the function of the sensory cells in the cochlea (a part of the inner ear), PTS has been associated with irreversible damage to these cells. 
However, research has shown that the cause of NIHL (TTS and PTS) is more complex than just morphological changes in hair cells.1 It is thus interesting that prior exposure to sounds of moderate levels can decrease the TTS caused by exposure to more intense noise at a later time.2,3 It has also been shown that exposure to noise causes morphological changes in the auditory nervous system (cochlear nucleus).4 Whether these changes in the nervous system are caused by a direct effect of overstimulation or by the deprivation of input caused by the injury to cochlear hair cells is not clear, but the changes in the nervous system most likely contribute to the symptoms of NIHL. Animal studies have disclosed that activation of a particular neural circuit in the brainstem (the olivocochlear bundle) may protect the ear from noise-induced hearing loss.5 While the damage to the cochlear hair cells can be seen when the cells are examined histologically under high-power magnification, these other
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Environmental Health
Figure 37-1. Schematic diagram illustrating how noise can affect hearing. The graph shows the hearing loss (threshold shift) at 4000 Hz a certain time (horizontal axis) after noise exposure. Noise with intensity below a certain value is expected to give rise to a temporary threshold shift (90 dB, 7 days curve), while a louder noise (100 dB, 7 days) results in a permanent threshold shift. A very intense noise (120 dB, 7 days) gives rise to a considerable permanent shift in threshold.27 (Modified from Miller JD. Effects of noise on people. J Acoust Soc Am. 1974;56:729–64.)
changes caused by noise exposure are more difficult to evaluate quantitatively. The basilar membrane of the cochlea, along which the sensory cells are located, is a complex and intricate organ that performs spectral analysis of sounds so that specific groups of sensory cells become activated in accordance with the spectrum of a sound. There are two types of sensory cells (hair cells) in the cochlea, inner and outer hair cells. Although they are similar in appearance, they have totally different functions. The inner hair cells convert sounds into a neural code in the individual fibers of the auditory nerve, whereas the outer hair cells’ function is mechanical; they act as “motors” that amplify the motion of the basilar membrane and thereby increase the sensitivity of the ear. Destruction of outer hair cells from noise exposure is more extensive than destruction of inner hair cells.6 Destruction of outer hair cells reduces the sensitivity of the ear because the “cochlear amplifiers” have been destroyed; it also impairs the automatic gain control of the cochlea, resulting in an abnormal perception of loudness (recruitment of loudness). Damage to hair cells cannot be reversed.1
NATURE OF NOISE-INDUCED HEARING LOSS
Hearing loss from noise exposure is normally greatest in the frequency range around 4 kHz. When the exposure time to noise is increased, or the level of the noise is increased (100 dB versus 90 dB), the magnitude of the hearing loss increases and the frequency range of the hearing loss widens. Most of the hearing loss that is expected after 40 years of noise exposure is already acquired during the first 10 years of the exposure (Fig. 37-2).
Measurement of Hearing Hearing loss is measured in decibels (dB)a relative to a normative average hearing threshold based on the hearing thresholds of young people who have had no known exposure to noise. Slightly different standards for “normal” hearing are used in different parts of the world. The difference between the hearing threshold of an individual and the “standard” hearing threshold is known as the “hearing threshold level” (HTL) and is measured in decibels. When the hearing level is plotted on the vertical axis as a function of the frequency tested, the resulting graph is known as an audiogram. (Usually, the hearing thresholds are determined only in the frequency range of 125 Hz–8 kHz, although a young person with normal hearing can hear sounds in the frequency range of about 18 Hz–20 kHz.) Hearing loss caused by exposure to pure tones, or to noise whose energy is limited to a narrow range of frequencies, is largest in a frequency range approximately one-half octave above that at which the tone or noise has its highest energy. The reason is that nonlinearities of the cochlea shift the maximal deflection of the basilar membrane toward the base of the cochlea when the intensity is increased.1 The hearing loss shown in Figure 37-2 is typical for individuals who have been exposed to noise in various manufacturing industries
aThe abbreviation dB (decibel) denotes a logarithmic measure, used here as a measure of sound pressure; 1 dB is one-tenth of a bel, where 1 bel corresponds to a ratio of 1:10. The reason for using a logarithmic measure of sound pressure to measure hearing thresholds is that the subjective sensation of sound intensity is approximately related to the logarithm of the sound pressure.
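The decibel relationships in the footnote can be illustrated with a short sketch (illustrative only; the function names are ours, but the 20-µPa reference pressure is the standard reference for sound in air):

```python
import math

P_REF = 20e-6  # standard reference sound pressure in air: 20 micropascals

def spl_db(pressure_pa: float) -> float:
    """Sound pressure level in dB relative to 20 uPa (20 log10 of the pressure ratio)."""
    return 20 * math.log10(pressure_pa / P_REF)

def hearing_threshold_level(measured_db: float, normative_db: float) -> float:
    """HTL: difference between an individual's threshold and the normative ('standard') threshold."""
    return measured_db - normative_db

# A tenfold increase in sound pressure corresponds to 20 dB:
print(round(spl_db(2e-4), 6))  # 20.0
# A threshold measured 40 dB above the normative threshold gives HTL = 40 dB:
print(hearing_threshold_level(40.0, 0.0))  # 40.0
```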
37 Effects of the Physical Environment: Noise as a Health Hazard
Figure 37-3. Individual age-corrected hearing levels at 4 kHz as a function of the total amount of noise exposure (immission level) for 581 individuals. Each square represents an individual person, and the solid lines are the mean values of the thresholds.
Figure 37-2. Median estimated noise-induced permanent threshold shift plotted as a function of frequency for two exposure levels (assuming 8-hour daily exposure) and four durations of exposure.
where the noise tends to be of a broad spectrum and continuous in nature. The reason that the hearing loss is greatest around 4 kHz is that the ear canal acts as a resonator that amplifies sounds in the frequency range around 3 kHz, and the half-octave shift causes the greatest hearing loss to occur around 4 kHz (the exact frequency of the largest hearing loss in an individual person depends on the length of the ear canal, which varies among individuals).1
Individual Variation in Noise Susceptibility The susceptibility to NIHL varies among individuals, and different people who are exposed to exactly the same noise for exactly the same period of time may suffer different degrees of hearing loss. Some people can tolerate high-intensity noise for a lifetime and not suffer any noticeable degree of hearing loss, while other people may acquire a substantial hearing loss from exposure to much less intense noise (Fig. 37-3). Notice that the average hearing loss as a result of exposure to continuous noise with a sound intensity of 90 dB(A) for 20 years in the study depicted in Fig. 37-2 is less than 5 dB at 4,000 Hz, but that many people in this study experienced 30- to 40-dB hearing loss. The noise immission level combines the two characteristics of noise, duration and intensity, which are assumed to be of the greatest importance in defining its potential for harm. The noise immission level E = L + 10 log(T), where L is the A-weighted sound level that is exceeded during 2% of the exposure time and T is the exposure time in months. For example, exposure to 85-dB noise during 20 years of work corresponds to 85 + 10 log(20 × 12) = 85 + 10 log 240 ≈ 85 + 24 = 109. For continuous noise, L deviates only slightly from the A-weighted sound intensity, but for noise that contains transient or intermittent components (i.e., noise that varies considerably in intensity) the difference between these two values is great. Susceptibility to noise exposure varies among individuals, and attempts have been made to estimate an individual’s susceptibility to PTS from the degree of TTS evidenced on exposure to a test sound that is not loud enough to cause permanent hearing loss, but the results have been discouraging. It appears that there is only a weak correlation
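The worked example in the text can be checked with a few lines of code (a minimal sketch; the function name is ours):

```python
import math

def noise_immission_level(level_dba: float, months: float) -> float:
    """E = L + 10 log10(T): L is the A-weighted sound level in dB(A),
    T the exposure time in months."""
    return level_dba + 10 * math.log10(months)

# 85 dB(A) for 20 years of work, as in the text:
print(round(noise_immission_level(85, 20 * 12)))  # 109
```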
between susceptibility to PTS and the degree of TTS in any individual person. The only way to determine an individual’s susceptibility to noise-induced hearing loss is to test, at frequent intervals, the hearing of those who are exposed to loud noise. Studies in animals have pointed toward some factors that may predispose to noise-induced hearing loss. For example, rats that were genetically predisposed to high blood pressure acquired a higher degree of hearing loss from noise exposure than normal rats when both groups were exposed to noise for their entire lifetimes.7 Although these findings have not been duplicated in humans, the results of some studies in humans support a relationship between high blood pressure and hearing loss from noise exposure.8 Alterations in cochlear blood flow may also affect susceptibility to noise-induced hearing loss.9 Research along these lines has provided important knowledge but has not resulted in efficient ways to assess an individual’s susceptibility to noise-induced hearing loss or to decrease it effectively. Since NIHL usually first affects the hearing threshold at frequencies around 4 kHz, which is above the range essential for the perception of speech, NIHL often goes unnoticed until it becomes severe. Common hearing tests can easily reveal hearing loss before it affects the ability to understand speech. A beginning hearing loss may indicate that the person in question is particularly susceptible to noise-induced hearing loss. It is, therefore, important to do frequent hearing tests in workers who are exposed to noise. Such testing is part of modern hearing conservation programs.
In addition to obtaining pure-tone audiograms, it is important to determine a person’s ability to understand speech, because there are great individual variations in the relationship between the tone audiogram and the ability to understand speech.1 However, determination of speech discrimination is not standardized, and the outcome depends on whether the tests are done in quiet or with a background of noise.10 Other effects of noise exposure on people with noise-induced hearing loss may include ringing in the ears (tinnitus)11 and headaches.
NOISE STANDARDS
To reduce the risk of noise-induced hearing loss, recommendations of acceptable noise levels have been established and appear in the form of “noise standards.” Different countries have adopted different
standards, and the ways in which the standards are enforced also differ. All presently accepted standards use a single value that is a combination of noise level and the duration of the exposure to calculate the risk of noise-induced permanent hearing loss. Some of these standards include correction factors regarding the nature of the sound (for instance, impulsive versus continuous sounds). Some standards take normal age-related hearing loss (presbycusis) into account while others do not.
Present Noise Standards In the United States, legislation that covers noise includes the Federal Aviation Act of 1958, the 1969 Amendment of the Walsh-Healy Public Contracts Act, the Occupational Safety and Health Act of 1970, the Noise Control Act of 1972, and the Mine Safety and Health Act of 1978. These acts require certain agencies to regulate noise. In Europe, legislation in various countries regarding the limitation of industrial noise has largely been guided by recommendations made by the International Organization for Standardization (ISO).12 The maximal noise level and duration accepted in most industrial countries is either 85 or 90 dB(A),b for 8 hours a day, 5 days a week. In Europe the 85-dB(A) level is more common. In the United States 90 dB(A) is the accepted level stated by the Occupational Safety and Health Administration (OSHA), although certain measures have to be taken if workers are exposed to noise levels above 85 dB(A). The National Institute for Occupational Safety and Health (NIOSH) has more recently issued a recommendation that sets 85 dB(A) as the limit of accepted exposure level.13
Noise Level and Exposure Time Noise standards are based on exposure for 8 hours per day. If the exposure time is shorter, a higher level of noise can be tolerated. To estimate how much higher a level of noise can be tolerated when the duration of the exposure is less than 8 hours per day, a conversion factor is used. Europe has long used a 3-dB “doubling factor,” while the United States has used a 5-dB doubling factor. Research indicates that a doubling factor of 5 dB may be adequate for relatively low noise levels, but that a smaller doubling factor (3 dB, i.e., equal energy) more correctly reflects the hazard presented by noise of a high level. NIOSH also now recommends a 3-dB doubling factor for calculation of the time-weighted average exposure to noise.13 A 3-dB doubling factor implies that a reduction of the exposure time by a factor of 2 (e.g., from 8 to 4 hours) allows a 3-dB higher sound level to be accepted. Thus 88 dB(A) for 4 hours is assumed to have the same effect on hearing as 85 dB(A) for 8 hours. If the exposure time is 2 hours per day, a 6-dB higher sound level is assumed to be acceptable, and so on. This way of calculating an acceptable noise level reflects “the equal energy principle,” which assumes that it is the total energy of the noise that determines the risk of permanent hearing loss. In the United States, standards have been tightened by stating that no worker should be exposed to continuous noise above 115 dB(A) or impulsive noise above 140 dB(A), independent of the duration of exposure. This sets a ceiling on the acceptable combination of noise intensity and exposure time. Because the level of noise exposure usually varies during a workday, noise exposure is often described by its equivalent level (Leq), which is defined as the level of continuous noise that has the same average energy as the noise that is measured during a workday.
The equivalent level is measured by summing the total noise energy to which a person is exposed and dividing it by the duration of exposure. The calculation of this equivalent level assumes that the equal energy principle discussed above is valid.
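The doubling factor and the equal-energy (Leq) calculation described above can be sketched as follows (illustrative; assumes an 85-dB(A), 8-hour criterion and a 3-dB exchange rate, as in the NIOSH recommendation; function names are ours):

```python
import math

def permissible_level(hours: float, criterion_db: float = 85.0,
                      doubling_factor_db: float = 3.0) -> float:
    """Sound level allowed for a given daily duration: each halving of the
    8-hour exposure time allows `doubling_factor_db` more decibels."""
    return criterion_db + doubling_factor_db * math.log2(8.0 / hours)

def leq(segments) -> float:
    """Equivalent continuous level (Leq) from (level_dB, hours) segments:
    total energy averaged over the total duration (equal-energy principle)."""
    total_hours = sum(h for _, h in segments)
    energy = sum(h * 10 ** (level / 10) for level, h in segments)
    return 10 * math.log10(energy / total_hours)

print(permissible_level(4))  # 88.0 -> 4 h at 88 dB(A) is equivalent to 8 h at 85 dB(A)
print(permissible_level(2))  # 91.0
print(round(leq([(85, 4), (88, 4)]), 1))  # a mixed day: 86.8 dB(A)
```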
bThe (A) after dB indicates that the noise spectrum has been weighted to place less emphasis on low frequencies than on high frequencies. This is done because low-frequency sounds generally pose less risk of causing hearing loss than do high-frequency sounds.
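The A-weighting mentioned in this footnote is defined by a standard analytic curve (as standardized in IEC 61672); the following is a sketch of that curve, with a function name of our own choosing:

```python
import math

def a_weighting_db(freq_hz: float) -> float:
    """A-weighting correction in dB at a given frequency: approximately 0 dB
    at 1 kHz and strongly negative at low frequencies, reflecting the reduced
    risk posed by low-frequency sound."""
    f2 = freq_hz ** 2
    ra = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(ra) + 2.00

print(round(a_weighting_db(1000), 1))  # 0.0
print(round(a_weighting_db(100), 1))   # -19.1: low frequencies are de-emphasized
```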
The fact that the present noise standards are based on a simplified measure of noise, namely the A-weighted measure dB(A), adds to the uncertainty in predicting the risk of hearing loss from exposure to a given noise.
MEASUREMENT OF NOISE
Sound level meters are available in many different forms, from simple devices consisting of a microphone, an amplifier with integrating circuits, and a display of a single value, to sophisticated instruments. Noise level meters are now standardized by the International Electrotechnical Commission (IEC) (IEC 61672:1999), the International Organization for Standardization (ISO), and the American National Standards Institute (ANSI) S1.4-1971 (R1976) or S1.4-1983 (R2001) with Amd.S1.4A-1985, S1.43-1997 (R2002). (Type 0 is used in laboratories, Type 1 is used for precision measurements in the field, and Type 2 is used for general-purpose measurements; standards are available from www.ansi.org.) Most sound level meters have at least one spectral weighting, namely A-weighting. The most sophisticated devices have many options regarding weighting functions, spectral filtering (1/3- and 1-octave bands), and integration times, and provide readings of Leq. Measurements of sound levels are usually made at a location where people work, but the sound level at the entrance of the ear canal of a person in that location will be different because the head and the outer ear amplify sounds at frequencies between 2 and 5 kHz by as much as 10–15 dB.1 If the noise contains much energy in that frequency range, the sound that actually reaches the ear may be as much as 10–15 dB higher than the reading on a sound-level meter placed in the person’s location when the person is not present. The noise level is often different at different locations, and when a person walks around, the exposure varies and it becomes difficult to estimate the average exposure. Noise dosimeters have been developed to improve the accuracy of determination of the average noise exposure. These devices, worn by the person, function in a similar way as radiation monitors.
They register the sound level near the ear or, sometimes, at other locations on the body and integrate the energy over an entire working day.
Impulsive Noise. Noise-level meters were earlier designed to integrate sound over about 100 milliseconds (ms) in order to provide a reading that was in accordance with the perceived loudness of sounds. This integration time is appropriate for assessing the subjective intensity (or annoyance) of a sound, but it is not appropriate for assessing the risk noise poses to hearing, because the ear (cochlea) has a much shorter integration time than the brain, and injury from noise exposure occurs in the cochlea. The more sophisticated sound-level meters (so-called impulse sound-level meters)14 have an integration time that is appropriate for measurement of impulsive sounds.
What Degree of Hearing Loss Is Acceptable? Because the great individual variation in susceptibility to noise-induced hearing loss makes it impossible to predict what hearing loss an individual will acquire when exposed to a certain noise, noise standards at best merely predict the percentage of people in a population with normal hearing who will acquire less than a certain specified (acceptable) hearing loss when exposed to noise no louder than a certain value.15,16 The presently applied standards allow that a certain (small) percentage of a normal-hearing population will acquire a permanent hearing loss (threshold elevation) that is greater than a certain value. In the beginning of the era in which efforts were made to reduce (or prevent) noise-induced hearing loss, the “acceptable hearing loss” was defined as the level of hearing loss at which an individual begins to experience difficulty in understanding everyday speech in a quiet environment. This definition was based on the American Academy of Ophthalmology and Otolaryngology (AAOO) guidelines for evaluation
of hearing impairment (revised in 1979 by the AAO, from 1959 and 1973),14 which state that the ability to understand normal everyday speech at a distance of about 1.5 m (5 ft) does not noticeably deteriorate as long as the hearing loss does not exceed an average value of 25 dB at the frequencies 500 Hz, 1 kHz, and 2 kHz. That degree of hearing loss was regarded as a just-noticeable handicap for which a worker in the United States was entitled to receive worker’s compensation for loss of earning power. These recommendations have not been updated by the American Academy of Otolaryngology (AAO), but the American Medical Association (AMA)17 has more recently provided its own guidelines, which follow the AAO 1979 guidelines. It is puzzling that the degree of hearing loss given in the AAO (1979) recommendation to describe the hearing level at and above which disability occurs was later designated as acceptable. The estimated percentage of individuals who acquire hearing loss in excess of such hearing loss (Table 37-1) depends on the noise exposure. It has been argued that these guidelines should be modified to include hearing loss at 3,000 Hz.17 The AMA guidelines include 3,000 Hz: if the average hearing loss at 500, 1,000, 2,000, and 3,000 Hz is equal to or less than 25 dB, no impairment rating is assigned. These values are based on studies of workers in the weaving industry, and research indicates that the number of people with noise-induced hearing losses may be higher in other industries. Raising the daily average exposure from 85 to 90 dB(A), however, roughly doubles the risk of hearing impairment, regardless of which data are used as a basis. The NIOSH 1998 Criteria Document13 (a revision of the 1972 criteria document) states that “an increase of 15 dB in the hearing threshold level (HTL) at 500, 1000, 2000, 3000, 4000, or 6000 Hz in either ear as determined by two consecutive audiometric tests” is a criterion for significant threshold shift.
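The averaging rules just described can be stated compactly in code (an illustrative sketch; the function names are ours, and thresholds are in dB HTL):

```python
def impairment_rated(htl_500, htl_1k, htl_2k, htl_3k=None):
    """AAO (1979) / AMA low-fence rule: an impairment rating is assigned only
    if the average HTL exceeds 25 dB. The AAO guideline averages 500, 1000,
    and 2000 Hz; the AMA guideline also includes 3000 Hz."""
    values = [htl_500, htl_1k, htl_2k] + ([htl_3k] if htl_3k is not None else [])
    return sum(values) / len(values) > 25

def niosh_significant_shift(baseline, followup):
    """NIOSH 1998 criterion: a 15-dB increase in HTL at 500, 1000, 2000, 3000,
    4000, or 6000 Hz in either ear (here: dicts of frequency -> HTL for one ear)."""
    return any(followup[f] - baseline[f] >= 15 for f in baseline)

print(impairment_rated(20, 25, 30))  # False: average is exactly 25 dB, not above the fence
print(niosh_significant_shift({500: 5, 4000: 10}, {500: 10, 4000: 30}))  # True: 20-dB shift at 4 kHz
```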
Using a criterion of the equivalent of an 8-hour exposure to 85 dB(A) noise, it is estimated that 8% of exposed individuals will acquire excess hearing loss over a 40-year working lifetime, while exposure to 90 dB(A) noise results in an average excess risk of 29%.
Effect of Age-Related Hearing Loss. Hearing loss from causes other than noise interacts with noise-induced hearing loss in a complex way. For instance, the “normal” progressive hearing loss that occurs with age (presbycusis) is not directly additive to hearing loss from noise. The 1998 NIOSH criteria document13 no longer recommends age correction to take presbycusis into account. If one attempts to determine the hearing loss from noise alone by subtracting the hearing loss from aging, a paradoxical result will in many cases become evident, namely, that the noise-induced hearing loss will decrease with age and with the duration of the exposure to noise. The reasons for the
TABLE 37-1. ESTIMATED RISK OF HEARING LOSS AFTER 40 YEARS WORKING LIFETIME∗

Reporting       Average Daily       Excess
Organization    Exposure (dBA)      Risk∗∗ (%)
ISO             90                  21
                85                  10
                80                   0
EPA             90                  22
                85                  12
                80                   5
NIOSH           90                  29
                85                  15
                80                   3

∗Data from NIOSH (Anonymous. National Institute for Occupational Safety and Health (NIOSH) Criteria for a Recommended Standard: Occupational Exposure to Noise. Revised criteria 1998. Publication No. 98-126; 1998.)
∗∗Percentage with hearing loss greater than 25 dB at 500, 1000, and 2000 Hz after subtracting the percentage who would normally incur such impairment in an unexposed population.
paradoxical findings are that subtracting presbycusis from the total hearing loss to get the PTS assumes that these two factors add in a linear way, which they do not. Presbycusis also varies greatly from individual to individual, which adds to the uncertainty in predictions of an individual person’s hearing loss. In the recent NIOSH recommendation, age-related hearing loss is not added to the allowed hearing loss.13
Models of Noise-Induced Hearing Loss. Elaborate models of noise-induced hearing loss have been used for prediction of hearing loss from noise exposure, mainly for medicolegal purposes, but some models have also been used to predict previous exposure on the basis of hearing loss.18 While such models may provide valid predictions of the average hearing loss or average noise exposure, the large individual variation makes the accuracy of such predictions low when they are applied to individual people. It is, therefore, questionable to use such models to predict the hearing loss that an individual will acquire, or to estimate what noise exposure a person has had on the basis of his or her hearing loss, as has been done for medicolegal purposes.
PREVENTION OF NOISE-INDUCED HEARING LOSS
It has been advocated that noise standards be modified to reduce the number of people who acquire a hearing loss that can be regarded as a social handicap. The maximal tolerable noise level for an 8-hour exposure is around 75 dB(A) if significant noise-induced hearing loss is to be eliminated.19 The main obstacles to adopting a lower noise level are economic: the cost of having all workplaces comply with such regulations has been considered prohibitive. However, a much less expensive alternative,20 having all new equipment comply with regulations, has not been considered. It is not the noise level that machinery emits that is important, but rather the noise level to which workers are exposed. Moving people to less noisy locations, and changing the way machinery is operated, can therefore reduce exposure levels and thereby the risk of noise-induced hearing loss.
Personal Protection Two types of personal protection are in common use: earmuffs, which are attached to a helmet or worn on a headband, and earplugs. Earmuffs can be removed more easily than earplugs and are therefore better suited for intermittent use, such as when people walk in and out of noisy areas (for example, at airports). On the other hand, earplugs are more practical for people who spend long periods of time in noisy environments. The sound attenuation of different types of earplugs and earmuffs depends on the type of device and how well it fits the individual person. When measured in the laboratory, earplugs are found to attenuate sound more than earmuffs: insert ear protectors provide approximately 20 dB of attenuation at 125 and 250 Hz, 20–25 dB for frequencies from 500–2000 Hz, and approximately 40 dB at 4000 and 8000 Hz. Some types of earplugs provide 4–5 dB more. Earmuffs provide less attenuation: 10–15 dB at 125 and 250 Hz, 20–25 dB at 500 Hz, and 35–40 dB for 1000–8000 Hz. Studies of hearing loss in workers who were exposed to high-intensity noise (shipyard) showed that those who wore earplugs had better protection than those who wore earmuffs.21 The gain that is achieved in practice from wearing ear protection may be less than anticipated. The efficacy of ear protectors depends not only on their sound attenuation determined in the laboratory but also on compliance with their use, which is difficult to control and poorly documented. The beneficial effect is much reduced if the protective devices are not worn all the time.21 Wearing ear protectors for long periods may be inconvenient, especially in hot environments, and ear protectors impair speech communication, which makes it more difficult for people to hear alarm signals or other acoustic signs of danger.
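As a rough illustration of how such attenuation figures translate into the level that reaches the ear, octave-band noise levels can be reduced by the protector's attenuation and recombined by energy summation (the attenuation values below are simplified midpoints of the earplug ranges quoted above, and the function names and example spectrum are ours):

```python
import math

# Simplified earplug attenuation (dB) per octave band, based on the
# approximate ranges quoted in the text.
EARPLUG_ATTENUATION = {125: 20, 250: 20, 500: 22, 1000: 22, 2000: 22, 4000: 40, 8000: 40}

def overall_level(band_levels_db) -> float:
    """Combine octave-band levels (dB) into one overall level by summing energies."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in band_levels_db))

def protected_level(noise_bands: dict) -> float:
    """Estimated overall level reaching the ear when the earplugs are worn."""
    return overall_level(level - EARPLUG_ATTENUATION[f] for f, level in noise_bands.items())

# Hypothetical broadband industrial noise spectrum (octave-band levels in dB):
noise = {125: 85, 250: 88, 500: 90, 1000: 92, 2000: 90, 4000: 88, 8000: 85}
print(round(overall_level(noise.values()), 1))  # about 97.4 dB unprotected
print(round(protected_level(noise), 1))         # about 75.1 dB at the ear
```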
For ethical reasons, it is not possible to do studies making use of a controlled situation in which participants who wear ear protectors are randomized against control participants who do not. Nilsson and Lindgren’s study, in which the hearing loss in groups of people wearing ear protectors was compared with the hearing loss in people not wearing ear protectors,22 found that people who did not wear ear protectors were almost twice as likely to acquire a hearing threshold shift of 15 dB or more as those who used earmuffs. Other studies showed similar results.23 Earmuffs are easier to remove and may not always be worn when indicated.22 When the efficacy of ear protectors was studied in shipyards, where intense continuous noise combined with superimposed impulsive noise presents an extreme hazard to hearing,22 those who were exposed to low-intensity noise suffered more hearing loss than did those in the high-intensity noise group. This surprising result is likely due to workers’ different habits of wearing ear protectors: many more of the workers exposed to high-intensity noise than of those exposed to low-intensity noise wore ear protectors.22 Active noise cancellation can also reduce the sound that reaches the ear. Such devices pick up the sound with a microphone, reverse it, and apply it through an earphone; the amplification is set so that the applied sound cancels out the sound that reaches the inside of the headset.
HEARING CONSERVATION PROGRAMS
Hearing conservation programs are based on an understanding of the effect of noise on the ear, measurement of noise levels in the workplace and personal exposures (using dosimeters), and measurement of hearing (audiometry). Knowledge about noise standards, promotion of noise reduction at the source, and promotion of personal protection (ear protectors) are also important factors in reducing the risk of acquiring NIHL. Regulations on noise-induced hearing loss by OSHA24 state that hearing conservation programs must be designed so that people who are exposed to noise levels of 85 dB(A) (8-hour weighted average) or more can be identified and that measures must be taken to reduce the noise. If these measures do not result in a reduction of the noise level to 90 dB(A) or lower, workers must participate in a hearing conservation program, and employers must make personal hearing protection devices (ear protectors) available to such workers and perform hearing tests at specified intervals during employment. If a hearing loss averaging 10 dB over the frequencies 2, 3, and 4 kHz is detected, the person must be referred for further evaluation, and action must be taken to avoid further deterioration of hearing. The progress of hearing deterioration can usually be halted by moving the person to a less noisy environment, thus preventing the hearing loss from progressing to a social handicap.
EFFECTS OF NOISE ON OTHER BODILY FUNCTIONS
The effects of noise on bodily functions other than hearing are poorly understood. It has been reported that noise exposure can cause an increase in blood pressure and changes (usually increases) in the secretion of pituitary hormones. Some retrospective studies (e.g., Jonsson and Hansson25) of the effects of noise exposure on the blood pressure of industrial workers found that workers who were exposed to industrial noise had higher systolic and diastolic blood pressures, while other studies (e.g., Sanden and Axelsson26) found no relationship between noise-induced hearing loss and blood pressure in shipyard workers. However, there is evidence that individuals with a predisposition to circulatory disease acquire more PTS when exposed to noise than people in general. The observed correlation between PTS and elevated blood pressure may thus be the result of a higher susceptibility of people with hypertension. Studies in rats have shown that animals with a hereditary predisposition to hypertension developed considerably greater degrees of hearing loss from exposure to noise than did rats without this predisposition.8
If the results of these experiments in spontaneously hypertensive rats8 can be applied to humans, then the results of the study of hypertension reported by Jonsson and Hansson25 may have to be reevaluated. By using hearing loss as the criterion for degree of noise exposure, they may inadvertently have selected workers who were predisposed to hearing loss because of their hypertension, and not vice versa, as was intended.
EFFECTS OF SOUNDS ABOVE AND BELOW THE AUDIBLE FREQUENCY RANGE (ULTRASOUND AND INFRASOUND)
Sounds that are not audible to humans because their frequencies are above or below our audible frequency range are known as ultrasound and infrasound, respectively. There is no evidence to indicate that exposure to sounds that are not audible can damage the ear, and there is little evidence that such sounds have other untoward effects. Ultrasounds are rapidly attenuated when transmitted in air and, therefore, decrease rapidly in intensity with distance from the source. Although very high intensities of ultrasound can kill furred animals such as mice, rats, and guinea pigs because of the buildup of heat from sound absorption in the fur, such an effect could not occur in humans because bare skin cannot absorb enough energy to cause damage. Exposure to low-frequency sounds (infrasound) of high intensity has been reported to cause various diffuse symptoms such as headache, nausea, and fatigue. The results of some experiments indicate that infrasound may give rise to a decrease in blood pressure, possibly mediated through stimulation of the vestibular part of the inner ear.
REFERENCES
1. Moller AR. Hearing: Its Physiology and Pathophysiology. San Diego: Academic Press; 2000.
2. Miller JM, Watson CS, Covell WP. Deafening effects of noise on the cat. Acta Otolaryngol Suppl. 1963;176:1–91.
3. Canlon B, Borg E, Flock A. Protection against noise trauma by preexposure to a low level acoustic stimulus. Hear Res. 1988;34:197–200.
4. Morest DK, Bohne BA. Noise-induced degeneration in the brain and representation of inner and outer hair cells. Hear Res. 1983;9:145–52.
5. Rajan R, Johnstone BM. Contralateral cochlear destruction mediates protection from monaural loud sound exposures through the crossed olivocochlear bundle. Hear Res. 1989;39:263–78.
6. Borg E, Engstrom B. Noise level, inner-hair cell damage, audiometric features and equal-energy hypothesis. J Acoust Soc Am. 1989;86:1776–82.
7. Borg E, Moller AR. Noise and blood pressure: effects of lifelong exposure in the rat. Acta Physiol Scand. 1978;103:340–2.
8. Borg E. Noise, hearing, and hypertension. Scand Audiol. 1981;10:125–6.
9. Borg E, Canlon B, Engstrom B. Noise-induced hearing loss. Literature review and experiments in rabbits. Scand Audiol. 1995;24(Suppl 40):1–147.
10. Smoorenburg GF. Speech reception in quiet and in noisy conditions by individuals with noise-induced hearing loss in relation to their tone audiogram. J Acoust Soc Am. 1992;91:421–37.
11. Moller AR. Pathophysiology of tinnitus. In: Sismanis A, ed. Otolaryngologic Clinics of North America. Amsterdam: WB Saunders; 2003:249–66.
12. Anonymous. Determination of Occupational Noise Exposure and Estimation of Noise-Induced Hearing Impairment, ISO-1999. International Organization for Standardization: Acoustics. Geneva, Switzerland; 1990.
13. Anonymous. National Institute for Occupational Safety and Health (NIOSH) Criteria for a Recommended Standard: Occupational Exposure to Noise. Revised criteria 1998. Publication No. 98-126. 1998.
14. Anonymous. American Academy of Ophthalmology and Otolaryngology (AAOO), Committee on Hearing and Equilibrium, and the American Council of Otolaryngology, Committee on Medical Aspects of Noise. Guide for evaluation of hearing handicap. JAMA. 1979;241:2055–9.
15. Kryter KD. Impairment to hearing from exposure to noise. J Acoust Soc Am. 1973;53:1211–34.
16. Moller AR. Noise as a health hazard. Ambio. 1975;4:6–13.
17. Demeter SL, Andersson GBJ. Chapter 11: Ear, nose, throat, and related structures. In: Guides to the Evaluation of Permanent Impairment. 5th ed. American Medical Association; 2003.
18. Dobie RA. Medical-Legal Evaluation of Hearing Loss. New York: Van Nostrand Reinhold; 1993.
19. von Gierke HE, Johnson DL. Summary of present damage risk criteria. In: Henderson D, Hamernik RP, Dosanjh DS, Mills JH, eds. Effects of Noise on Hearing. New York: Raven Press; 1976:547–60.
20. Moller AR. Noise as a health hazard. Scand J Work Environ Health. 1977;3:73–9.
21. Erlandsson B, Hakanson H, Ivarsson A, Nilsson P. The difference in protection efficiency between earplugs and earmuffs. Scand Audiol (Stockh). 1980;9:215–21.
22. Nilsson R, Lindgren F. The effect of long term use of hearing protectors in industrial noise. Scand Audiol (Stockh). 1980;Suppl 12:204–11.
23. Dobie RA. Prevention of noise-induced hearing loss. Arch Otolaryngol Head Neck Surg. 1995;121:385–91.
24. Anonymous. Occupational Safety and Health Administration (OSHA). Occupational noise exposure: hearing conservation amendment, final rule. Fed Regist. 1983;48:9738–85.
25. Jonsson A, Hansson L. Prolonged exposure to a stressful stimulus (noise) as a cause of raised blood-pressure in man. Lancet. 1977;1:86–7.
26. Sanden A, Axelsson A. Comparison of cardiovascular responses in noise-resistant and noise-sensitive workers. Acta Otolaryngol (Stockh). 1981;Suppl 377:75–100.
27. Miller JD. Effects of noise on people. J Acoust Soc Am. 1974;56:729–64.
General References

Dobie RA. Medical-Legal Evaluation of Hearing Loss. New York: Van Nostrand Reinhold; 1993.
Kryter KD. The Effects of Noise on Man. 2nd ed. New York: Academic Press; 1985.
Lipscomb DM, ed. Hearing Conservation in Industry, Schools, and the Military. Boston: Little, Brown & Company; 1988.
Moller AR. Hearing: Physiology and Disorders of the Auditory System. Amsterdam: Academic Press; 2006.
Salvi RJ, Henderson D, Hamernik RP, Colletti V. Basic and Applied Aspects of Noise-Induced Hearing Loss. New York: Plenum Press; 1985.
Ergonomics and Work-Related Musculoskeletal Disorders
38
W. Monroe Keyserling • Thomas J. Armstrong
Ergonomics is the study of humans at work; it seeks to understand the complex relationships among people, machines, job demands, and work methods so that gaps between task demands and human capacities can be minimized in activities of work and daily living.1 All human activities, regardless of their nature, place both physical and mental demands on the worker. As long as these demands are kept within reasonable limits, performance will be satisfactory and health will be maintained. However, if stresses are excessive, undesirable outcomes may occur in the form of errors, accidents, injuries, and/or a decrement in health. Occupational ergonomics is a discipline concerned with evaluating stresses that occur in the work environment and the ability of people to cope with these stresses. Its goal is to design facilities (e.g., factories and offices), furniture, equipment, tools, and job demands to be compatible with human dimensions, capabilities, and expectations. Ergonomics is a multidisciplinary science with four major areas of specialization:

Cognitive Ergonomics (sometimes called engineering psychology) is concerned with the information-processing requirements of work. Major applications include designing displays (e.g., gauges, warning buzzers, signs, instructions), controls (e.g., knobs, buttons, joysticks, steering wheels), and software to enhance human performance while minimizing the likelihood of error.2–3

Anthropometry is concerned with the measurement and statistical characterization of body size in the context of workplace and task dimensions. Anthropometric data provide important information to the designers of clothing, furniture, machines, tools, and workstations.4–6

Work Physiology is concerned with the responses of the cardiovascular system, pulmonary system, and skeletal muscles to the metabolic demands of work. This discipline is concerned with the prevention of whole-body and/or localized fatigue that results from a mismatch between job demands and worker capacities.7

Biomechanics is concerned with the transfer of forces through the musculoskeletal system and the corresponding deformation of tissues.8 Many mechanical stresses can cause overt injuries (e.g., a concussion when a worker is struck in the head by a dropped object). In most cases, overt injury hazards are readily recognized and can be controlled through safety engineering techniques such as machine guarding and personal protective equipment.9 Other stresses are more subtle and can cause chronic or cumulative injuries and disorders. These stresses may be external (e.g., a vibrating tool that causes white finger syndrome) or internal (e.g., tension in a tendon when the attached muscle contracts).
This chapter is concerned primarily with physical work activities and prevention of work-related musculoskeletal disorders (WRMSDs). Typical examples of WRMSDs include:

• A poultry worker develops numbness and tingling in the hand and fingers due to the repetitive hand motions associated with dismembering chickens.
• A farm worker experiences pain in the lower back attributed to the awkward stooping posture required to harvest vegetables.
• A nurse’s aide suffers a back strain when transferring a patient from a hospital bed to a wheelchair.

It is important to note that the health problems described above typically are not the result of an accident. (An accident is defined as an unanticipated, sudden, and discrete event that results in an undesired outcome such as property damage, injury, or death.9) Instead, they can generally be classified as overexertion or overuse disorders and syndromes caused by performing work tasks that are regular and predictable requirements of the job. Anthropometry, work physiology, and biomechanics are the ergonomic disciplines most relevant to the development of programs for ameliorating overexertion injuries and chronic musculoskeletal disorders. The following sections present some of the tools used for measuring and analyzing physical work requirements so that they can be compared with recommended human capacities, as shown in Fig. 38-1. The analysis considers not only the ability to perform a given task once, but also the ability to perform it repeatedly and safely, day in and day out, over the course of many years. Readers who desire additional information pertaining to cognitive ergonomics are directed to the References section for a short list of general survey texts.2–3

ANTHROPOMETRY
Anthropometry is concerned with measuring the size of the human body and using this information to design facilities, equipment, tools, and personal protective equipment (e.g., gloves, respirators, etc.) to accommodate the physical dimensions of the user. As illustrated in Fig. 38-2, most anthropometric design problems are nontrivial due to the large variation in body dimensions within the working population. In this example, a designer must specify the height of an overhead conveyor used to transport parts between two areas of a plant. If the conveyor is too high, short workers would not be able to load or unload parts without elevating the shoulder to an extended reach posture. On the other hand, if the conveyor is too low, tall workers could sustain head injuries from collisions with hung parts.
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Environmental Health
Figure 38-1. Ergonomics entails analysis of jobs so that job demands can be compared with worker capacities and job/task design enhancements.13
Figure 38-2. For safety reasons, an overhead conveyor should be higher than the stature of a tall worker (95th percentile male illustrated). This can create a difficult reach for a short worker (5th percentile female illustrated).

Suppose that the designer’s primary goal is to avoid head injuries to tall workers. To accomplish this, she decides to provide sufficient overhead clearance to accommodate 95 percent of the U.S. male population by positioning the conveyor so that the lowest point of the hung parts is 190.5 cm (75 in) above the floor. (Note: This dimension is computed using nude stature data for a 95th percentile male from Table 38-1 and adding 2.5 cm [1 in] as an adjustment for shoes.)5,10 With this design, a short worker (a 5th percentile female is illustrated) can reach the parts only by raising the shoulder into an elevated, awkward position, which may cause fatigue and/or
musculoskeletal injury in the shoulder region.11 In this situation, there is no simple solution that will simultaneously satisfy the needs of persons who are very tall or very short. The characterization of body size must consider the large variations in dimensions from person to person and from population to population. Consequently, statistical methods are used to analyze body dimensions, and the results are typically reported as means and standard deviations for various body segments.12 Extensive tables of these statistics are available in reference texts.4–6,10,12 Table 38-1 and Fig. 38-3 present a summary of useful body dimensions for anthropometric applications. In the following sections, several examples are presented that illustrate how awkward postures can contribute to the onset of fatigue and of musculoskeletal and nerve disorders. Body posture is frequently determined by the physical dimensions of a workstation and the location and orientation of equipment and tools. Anthropometric methods can be used during the design of workstations to avoid situations that require the use of awkward working postures.

FATIGUE
Repeated or sustained exertions are associated with a constellation of performance impairments and symptoms collectively referred to as fatigue. Fatigue is typically characterized as “whole body” or “localized.” Whole-body fatigue is associated with activities in which the workload is distributed concurrently over many parts of the body (e.g., legs, torso, and arms), causing high rates of energy expenditure, such as walking briskly, shoveling snow, or stacking containers. Localized fatigue is associated with tasks in which one segment of the body performs repeated or sustained work, for example, forearm fatigue when using a hand tool, shoulder fatigue associated with overhead work, or back fatigue resulting from sustained trunk flexion. Fatigue not only affects how workers feel; it also affects their ability to manipulate parts and tools precisely, and it may increase their risk of an accident. Symptoms of localized fatigue include localized discomfort, a sense of tiredness, reduced strength, reduced motor control, and tremor.14 In addition, there are circulatory, biochemical, and electrical changes within the muscle tissue. Localized fatigue entails both physiological and biomechanical processes.14,15 Muscle contraction results in consumption of substrates and accumulation of byproducts. During low-level exertions, blood flow increases and substrate and byproduct concentrations are maintained at levels that permit continued work. During high-level exertions, increased muscle pressure and deformation of the vascular bed impede circulation, and concentrations of substrates and metabolites become excessive. In addition to increased muscle pressure, there also is deformation of connective tissues, which causes pain and may be a precursor to chronic soft tissue injuries. Localized fatigue can develop over periods as short as several seconds or as long as several hours. Similarly, recovery occurs within periods of seconds, minutes, or hours and should be complete after a night of rest or, in extreme cases, after a few days.
Altering the work activity will generally provide prompt relief of fatigue symptoms. For example, a seated operator usually gets relief from stretching, changing seat position, standing up, or from a night of rest. If altering the work activity or posture does not provide prompt relief, if symptoms persist from one day to the next, or if the symptoms interfere with activities of work or daily living, then the affected person should be referred to a health-care provider for evaluation. Chronic localized fatigue may be a harbinger of more clinically significant muscle, tendon, and nerve disorders.16–20 Localized fatigue may also have an impact on work performance because fatigue may limit the endurance time prior to the onset of objectionable discomfort during a sustained exertion. Numerous laboratory studies have shown that endurance time increases as the intensity (forcefulness of the exertion) decreases. At an exertion level of 100% of a muscle’s maximum strength, the endurance time is only a few seconds before exhaustion occurs. Reducing the exertion level to 50% maximum strength extends the endurance time to approximately
TABLE 38-1. BODY DIMENSIONS FOR THE 5TH, 50TH, AND 95TH PERCENTILES OF THE U.S. CIVILIAN POPULATION
(All dimensions in cm; the multiplier expresses each link length as a proportion of stature, as in Fig. 38-3.)

                               U.S. Civilian Females        U.S. Civilian Males
Dimension         Multiplier    5th    50th    95th          5th    50th    95th
Stature           1.0          150.4   161.8   173.0        163.6   175.5   188.0
Floor-knee        0.285         42.9    46.1    49.3         46.6    50.0    53.6
Floor-hip         0.53          79.7    85.8    91.7         86.7    93.0    99.6
Floor-elbow       0.63          95.2   101.7   108.5        103.1   110.4   117.7
Floor-shoulder    0.818        123.6   132.1   140.9        133.8   143.4   152.9
Floor-eye         0.936        141.4   151.2   161.2        153.1   164.1   174.9
Floor-finger      0.377         57.0    60.9    64.9         61.7    66.1    70.5
Floor-wrist       0.485         73.3    78.3    83.5         79.3    85.0    90.6
Sag. plane-shld.  0.129         19.5    20.8    22.2         21.1    22.6    24.1
Shoulder-elbow    0.186         28.1    30.0    32.0         30.4    32.6    34.8
Elbow-wrist       0.146         22.1    23.6    25.1         23.9    25.6    27.3
Wrist-finger      0.108         16.3    17.4    18.6         17.7    18.9    20.2
Foot length       0.152         23.0    24.5    26.2         24.9    26.6    28.4
Foot breadth      0.056          8.5     9.0     9.6          9.2     9.8    10.5

Stature from Wagner D, Birt JA, Snyder MD, Duncanson JP, eds. Human Factors Design Guide for Acquisition of Commercial Off-the-Shelf Subsystems, Non-Developmental Items, and Developmental Systems. Document number PB96-191267INZ. Atlantic City International Airport, NJ: Federal Aviation Administration Technical Center; 1996. Link lengths from Drillis R, Contini R. Body Segment Parameters. Report No. 1166-03 (Office of Vocational Rehabilitation, Dept. of HEW). New York: NYU School of Engineering and Science; 1966.
1 minute.15,19,21–22 These studies led investigators to conclude that exertions below 15% of maximum strength could be sustained indefinitely without fatigue. This conclusion was also supported by studies showing that intramuscular blood flow is unimpeded in exertions below 15% of maximum strength.15,23 It is not desirable for workers to exert themselves to the point of exhaustion used in laboratory experiments (i.e., task termination), as objectionable levels of discomfort are experienced well before this endpoint.21,24–25 Because significant fatigue occurs within parts of muscles at even the lowest levels of exertion, it is recommended that work activities be designed so that workers can rest or alter their work activities.16–28
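The intensity-endurance relationship described above (a few seconds at 100% of maximum strength, roughly one minute at 50%) is often summarized by an empirical curve. The Rohmert-style formula below is drawn from the general ergonomics literature, not from this chapter; it is a sketch for illustration only, and the function name is ours.

```python
# A Rohmert-style empirical curve for static endurance time versus
# exertion intensity. This specific formula is an assumption drawn from
# the general ergonomics literature, not from this chapter; it is shown
# only to illustrate the trend the text describes.

def endurance_minutes(f_mvc: float) -> float:
    """Approximate endurance time (in minutes) of a sustained static
    exertion held at fraction f_mvc of maximum voluntary contraction.
    The curve is meaningful only for roughly 0.15 < f_mvc <= 1.0."""
    if not 0.15 < f_mvc <= 1.0:
        raise ValueError("model applies only to ~15-100% of maximum strength")
    return -1.5 + 2.1 / f_mvc - 0.6 / f_mvc**2 + 0.1 / f_mvc**3

print(round(endurance_minutes(1.0) * 60))  # a few seconds at 100% MVC
print(round(endurance_minutes(0.5), 1))    # about 1 minute at 50% MVC
```

Consistent with the laboratory findings cited in the text, the curve predicts only seconds of endurance at 100% of maximum strength, about one minute at 50%, and rapidly growing endurance as intensity falls toward 15%.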
Figure 38-3. Link lengths of body segments expressed as a proportion of stature. (Adapted from Drillis R, Contini R. Body Segment Parameters. Report No. 1166-03 [Office of Vocational Rehabilitation, Dept. of HEW.] New York: NYU School of Engineering and Science; 1966.)
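The proportions of Fig. 38-3 and Table 38-1 lend themselves to simple calculation. The sketch below reproduces the conveyor-clearance arithmetic from the anthropometry example (188.0 cm stature plus a 2.5 cm shoe allowance) and estimates a link length from a stature multiplier; the function names are illustrative, and multiplier-based estimates only approximate the table's measured values.

```python
# Using Table 38-1: link lengths estimated as (multiplier x stature),
# plus the overhead-conveyor clearance example from the text.
# Function names are illustrative only.

LINK_MULTIPLIERS = {          # proportion of stature (Fig. 38-3)
    "floor-knee": 0.285,
    "floor-hip": 0.53,
    "floor-elbow": 0.63,
    "floor-shoulder": 0.818,
    "floor-eye": 0.936,
}

def segment_length(stature_cm: float, link: str) -> float:
    """Estimate a link length (cm) as multiplier * stature."""
    return LINK_MULTIPLIERS[link] * stature_cm

def conveyor_clearance(stature_cm: float, shoe_cm: float = 2.5) -> float:
    """Overhead clearance so hung parts pass above a worker's head."""
    return stature_cm + shoe_cm

# 95th percentile U.S. male stature (Table 38-1) is 188.0 cm:
print(conveyor_clearance(188.0))  # 190.5 cm, matching the text

# Floor-shoulder estimate for a 5th percentile female (150.4 cm stature);
# the table's measured value is 123.6 cm, so the multiplier is close
# but not exact:
print(round(segment_length(150.4, "floor-shoulder"), 1))
```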
Bystrom and Fransson-Hall29 concluded that intermittent hand exertions (5 seconds of static work at 10–40% maximum strength alternating with rest periods of 3–7.5 seconds) with mean contraction intensity greater than 17% maximum strength were unacceptable, while continuous exertions greater than 10% maximum strength were also unacceptable. These results, based on strength, electromyography, blood potassium and lactate concentrations, muscle blood flow, heart rate, and perceived strain during the exertions and up to 24 hours following the exertions, provide guidance for design of work activities. They also demonstrate how fatigue is a complex process involving multiple physiological and biomechanical mechanisms, which are dependent on the intensity and temporal qualities of exertion. Fatigue can be assessed through the intensity, location, and consistency of discomfort.24–26 Advantages of using discomfort for assessing fatigue are (a) it is relevant to how workers feel, (b) it provides information about many tissues and parts of the body, and (c) it requires minimum equipment. Disadvantages are (a) it requires worker cooperation, (b) it can be hard to separate the symptoms of work-related fatigue from other causes, and (c) it is necessary to study multiple workers to control intra- and intersubject variability. In its simplest form, assessment of effort or discomfort entails asking workers how they feel; however, it is important to ask the question in a way that does not suggest they should be experiencing discomfort or pain. It also is important to ask the question so that the actual areas of discomfort can be identified. Several scales and procedures have been suggested and used for this purpose. The Borg scale of perceived exertion showed a high correlation with heart rate among subjects running on a treadmill.30 As illustrated in Fig. 38-4a, the Borg scale uses ordinal numbers with verbal anchors. 
It can be argued that perceived exertion data should be treated as ordinal data and analyzed using nonparametric statistics, but many investigators utilize parametric analyses; Borg characterized these scales as categorical with analog properties.30 An alternative to the Borg scale is a body map and visual analog scale, shown in Figs. 38-4b and c. Visual analog scales are lines with verbal anchor points at various locations.26,31 The subjects place a mark at the location on the line that corresponds to the level of their perceived discomfort or effort at a specific body location. Studies by Harms-Ringdahl et al.26 found that subject ratings of elbow pain using the 10-point Borg scale and a visual analog scale agreed well; Ulin et al.31 reported similar findings for subjects using powered hand tools. The scale shown in Fig. 38-4c was used to rate back discomfort during
Figure 38-4. (A) Ten-point Borg perceived exertion scale, (B) body map used to identify areas of localized discomfort, and (C) visual analog discomfort scale.
sustained trunk flexion.32 A mark placed at a distance of 20% of the scale length, measured from the left anchor point, corresponded to “distracting discomfort” (the point where a worker would choose to take a momentary rest pause if permitted by job demands) for an average subject. A mark placed at a distance of 50% of the scale length corresponded to the maximum level of discomfort considered acceptable for routine work activities. As a practical matter, the visual analog scale may be the easiest to use in work settings where subjects do not have time to read and contemplate all of the verbal anchor points. Another technique for evaluating localized fatigue in muscles involves the use of electromyography (EMG). EMG uses electrodes, preamplifiers, amplifiers, rectifiers, frequency analyzers, and recorders to measure the electrical responses of muscles to work.14,33 EMG measurements are considered by some to be more objective than discomfort surveys. However, it is difficult to obtain reliable EMG measurements on some subjects and in certain work environments, and the intra- and intersubject variability of those measurements may be quite high. EMG responses are most useful after the affected muscles have been identified using discomfort surveys or other methods. Use of EMG is beyond the scope of this discussion. To summarize, localized fatigue is characterized as discomfort and/or performance decrement caused by repeated or sustained exertions. The development of and recovery from localized fatigue can occur in seconds, minutes, or hours. Failure to control localized fatigue through the proper design of work equipment and/or the effective management of work activities may lead to more severe, long-lasting pain or recognized medical conditions. Workers who experience persistent symptoms should be referred to a qualified health-care provider.

CHRONIC WORK-RELATED MUSCULOSKELETAL DISORDERS
Musculoskeletal disorders are a leading cause of worker impairment, lost work, and compensation. There is strong evidence that both personal and work-related factors are important in the pathogenesis of these disorders.17,20,34,35 The World Health Organization uses the term “work related” to characterize disorders that involve both personal and work-related factors, and distinguishes them from occupational
diseases, where the entire cause is attributed to work exposures.34 Frequently cited personal factors associated with chronic musculoskeletal disorders include a history of certain injuries or illnesses, age, vitamin deficiencies, gender, and obesity. Work factors include repeated or sustained exertions, high forces, certain postures, mechanical contact stresses, low temperature, and vibration. Personal factors are important and should be considered during clinical assessments of patients and controlled in studies of the causes and amelioration of WRMSDs. Other terms, such as cumulative trauma disorders, repetitive strain injuries, overexertion injuries, and overuse syndromes, are sometimes used in place of the term work-related musculoskeletal disorders (WRMSDs).17,35 These terms are not intended to be used in place of specific diagnoses such as tendinitis, epicondylitis, bursitis, fibromyalgia, or carpal tunnel syndrome for the upper extremity; or sciatica, spondylosis, or osteoarthrosis for the lower back. WRMSDs refer to a group of disorders that have work activities as a common factor. Some important characteristics of WRMSDs include the following:

1. Their pathogenesis involves both mechanical and physiological processes.
2. Weeks, months, or years may be required for them to develop.
3. Weeks, months, or years may be required for recovery, and in extreme cases recovery may never be complete.
4. Their symptoms are often nonspecific, poorly localized, and episodic.
5. They often go unreported.

Mechanical processes refer to the deformation and, in some cases, damage caused by exertions and movements of the body. Physiological processes refer to pain, metabolic repair, and adaptive responses that result from deformation of tissue. The symptoms of WRMSDs set them apart from acute injuries such as lacerations or fractures, where there is a conspicuous event and a conspicuous effect that can be observed by a health-care provider, supervisor, or coworker.
Work factors associated with WRMSDs are often overlooked, and the symptoms may not be observable by a third party. Workers themselves may not recognize an association between what they do at work and how they feel. This may explain why WRMSDs are often not reported. Even when workers do suspect an association with their job, they may be reluctant to report their condition to an employer for fear of losing their job. These factors contribute to the great differences in reported WRMSD rates that sometimes occur between one site and another. WRMSDs are observed frequently in the upper extremity and the lower back, as discussed in the following sections.

WORK-RELATED MUSCULOSKELETAL DISORDERS OF THE UPPER EXTREMITY
Morbidity

One of the earliest references to these disorders is that of Bernardino Ramazzini,36 who in 1713 attributed “diseases reaped by certain workers” to “violent and irregular motions and unnatural postures.” Gray37 in 1893 described “washerwomen’s sprain,” commonly referred to as de Quervain’s disease: tendinitis of the extrinsic abductor and extensor muscles of the thumb near the radial styloid process.38 Reports of insurance claims due to tendinitis were recorded in the early twentieth century. For example, in 1927 Zollinger39 reviewed Swiss insurance records and reported 929 cases of crepitant tenosynovitis attributed to repeated strain. The work-relatedness and compensability of musculoskeletal disorders have been surrounded by controversy since the implementation of workers’ compensation laws in the early part of the twentieth century. Conn reported that the State of Ohio amended its workers’ compensation rules in 1931 to include musculoskeletal disorders, following 12 years of debate in the state legislature.40
The controversy centers on how causation is divided between personal and work-related factors. WRMSDs are now compensable in most states of the United States; however, the rules and reporting behavior may vary considerably from one state to another. In most cases, workers must initiate a claim against their employer to receive compensation. This often results in an adversarial relationship, which may inhibit some workers from taking action. While workers’ compensation claims provide valuable documentation of the severity of WRMSDs, only in a few cases have investigators been able to develop meaningful generalizations from compensation data.41,42 The reporting of WRMSDs increased substantially at the end of the twentieth century in the United States, reaching a peak of 41.1 lost-time cases per 10,000 workers in 1994 and then decreasing to 23.8 cases in 2001, according to the Bureau of Labor Statistics.43 Due to changes in the guidelines for reporting work-related illnesses and injuries implemented in 2004,44 newer data for musculoskeletal disorders cannot be directly compared with the rates reported above. In summary, the overall prevalence, incidence, and severity of various WRMSDs of the upper extremity in the United States are difficult to determine. The data that are available suggest that while there is significant underreporting, these disorders are a major cause of impairment and work disability.
Individual and Work-Related Risk Factors

Commonly cited individual risk factors include age, female gender, acute trauma, rheumatoid arthritis, diabetes mellitus, hormonal factors, wrist size or shape, and vitamin deficiency.20,35 These factors should be evaluated as possible causes in each reported case; however, their sensitivity and specificity are not sufficient for use in a screening test at the time of employment to identify workers at risk. Even if such tests were available, affirmative action regulations would require employers to show that workplace modifications to accommodate “at-risk” workers are unfeasible before those workers could be denied employment. Attempts to use individual risk factors for worker selection or screening should be regarded as experimental and must include appropriate safeguards for risks and rights. It may be advisable to monitor workers with recognized personal risk factors and to counsel these individuals in regard to the potential risks of certain types of work. Work-related factors include repeated and sustained exertions, forceful exertions, certain postures, mechanical contact stress, vibration, low temperatures, and work organization.17,20,35,45 It has been shown that the prevalence of WRMSDs increases with exposure to certain risk factors; however, it is not known at what level the risk becomes significantly elevated for a single work factor or a combination of work factors. While it is not yet possible to state specific design standards for equipment and work procedures, it is possible to identify some of the most conspicuous risk factors, to identify possible work-related causes when new cases are reported, and to modify jobs in order to accommodate affected workers and prevent future cases.
Control of Upper Extremity WRMSDs

The process for controlling WRMSDs includes (a) surveillance of worker health, (b) treatment and follow-up of new cases, (c) inspections and analyses of workplaces and jobs for possible risk factors, (d) proactive design of new jobs, (e) corporate support, including strong management commitment, (f) education of company personnel and health-care providers, and (g) worker involvement. Detailed information on ergonomics program management is beyond the scope of this chapter; however, several useful publications are listed in the References section.46–49 Employers and workers are referred to their respective trade organizations, unions, and workers’ compensation carriers for guidance on implementation of a control program.
Surveillance of Worker Health

Surveillance includes (a) identifying and evaluating, to the extent possible, all musculoskeletal disorders for possible work-relatedness, (b) periodically reviewing available medical records for musculoskeletal disorders, and (c) proactively surveying the workplace for risk
factors at the time of program implementation or following a substantive change in work equipment or procedures. While analysis of available injury and illness data is recommended, it is often difficult to identify areas or processes with statistically elevated risks because of the small numbers of workers involved.50,51 In addition, at least several months are generally required for the effects of a given job, method, or tool change to stabilize. Unfortunately, most work populations are not stable enough to rigorously evaluate all possible factors: there is turnover in the workforce due to work and nonwork causes, changes in production schedules, plant shutdowns, and so on. For this reason it is recommended that all cases be identified and investigated. Surveillance may also be supplemented with worker surveys and medical examinations. Surveys provide information about overall discomfort or morbidity patterns; however, most survey instruments do not yet have sufficient sensitivity or specificity to be used for case screening or medical diagnosis.52,53
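The small-numbers problem mentioned above can be illustrated with a short calculation. Under an assumed Poisson model (our assumption, with hypothetical rates chosen only for illustration), even a department with an elevated incidence will often record zero cases in a surveillance year:

```python
import math

# Why small workgroups make elevated rates hard to detect: under a
# Poisson model, the probability of recording zero cases in a year is
# exp(-expected cases). The model and the rates below are hypothetical
# assumptions for illustration, not data from this chapter.

def prob_zero_cases(n_workers: int, annual_rate_per_worker: float) -> float:
    """P(no cases observed in one year) for a Poisson case count."""
    expected_cases = n_workers * annual_rate_per_worker
    return math.exp(-expected_cases)

# Hypothetical department: 30 workers with an elevated incidence of
# 2 cases per 100 worker-years (expected cases = 0.6 per year).
print(round(prob_zero_cases(30, 0.02), 2))  # ~0.55
```

In this hypothetical example, more than half of surveillance years would show no cases at all, which is one reason the text recommends identifying and investigating every case rather than relying on statistical comparisons alone.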
Job Analysis The number of jobs at a given work site may vary from only a few to several thousand, and there may not be sufficient resources to examine them all in detail. The level of detail required for analyzing jobs depends on the purpose of the analysis. In some cases, it will be better to obtain a little information about a lot of jobs rather than a lot of information about a few jobs. A walk-through inspection of the production facility may be sufficient to confirm that the production process has not changed since a previous study or to find out about the types of equipment, materials, and methods used. In some cases, these walk-through inspections may be supplemented with critiques of representative jobs.53 If high levels of exposure to risk factors are found, it may be desirable to perform more-detailed analyses to quantify those stresses, understand their causes, and design interventions.45,50 Job analysis is divided into four steps: (a) documentation of the job, (b) analysis of stresses, (c) design of interventions, and (d) evaluation of intervention effectiveness. Documentation is the collection of the information necessary to identify and quantify risk factors for WRMSDs. Documentation is based on traditional industrial engineering work methods analysis and entails collection of data for a systematic evaluation of the job.54,55 The following items are determined during job documentation:
• Objective: why the job is performed
• Standards: production quantity and quality expectations
• Staffing: the number of workers performing the job
• Method: the steps required to perform each task
• Workstation layout: blueprints or a sketch of the workplace with dimensions that can be used to determine reach distances
• Materials: parts and substances used in the production process
• Tools: devices used to accomplish the work
• Environment: conditions at and near the workstation
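The documentation items above can be captured in a simple record structure. This is only an illustrative sketch; the field names and example values are this sketch's own, not a standard schema.

```python
# Illustrative container for the job-documentation items listed above.
from dataclasses import dataclass, field

@dataclass
class JobDocumentation:
    objective: str            # why the job is performed
    standards: str            # production quantity and quality expectations
    staffing: int             # number of workers performing the job
    method: list              # steps required to perform each task
    workstation_layout: str   # blueprint/sketch reference with dimensions
    materials: list = field(default_factory=list)  # parts and substances used
    tools: list = field(default_factory=list)      # devices used to do the work
    environment: str = ""     # conditions at and near the workstation

doc = JobDocumentation(
    objective="Pack notebooks into cases",
    standards="cases per hour at specified quality",
    staffing=1,
    method=["erect case", "load notebooks", "close case", "push to taper"],
    workstation_layout="see Fig. 38-5",
)
print(doc.staffing)  # 1
```

A record like this makes it easy to compare the same job before and after an intervention.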
Analysis of Work Factors The ergonomic assessment of work factors entails characterization of stresses that may contribute to WRMSDs. Stresses can be identified, ranked, and rated from observations by the analyst.56–58 Jobs may be analyzed from direct workplace observations and measurements, or from videotapes. An advantage of videotapes is that they may be played repeatedly and/or slowed down. A disadvantage is that it may be hard to see the entire job in a videotape. Worker ratings also may be used to supplement observations; however, care should be exercised not to ask workers leading questions.59 Quantitative physical measurements, such as the cycle time, weights of tools, and locations of work objects, should also be taken. In some cases, these can be supplemented with physiological and biomechanical measurements such as muscle activity and joint position.50,60–62 The best method of assessing work stresses depends on the purpose of the analysis and available resources.
Environmental Health

[Figure 38-5 layout: taping machine; flat cartons (0.6 × 0.4 m); notebooks (0.10 × 0.22 × 0.30 m); erect carton (0.22 w × 0.30 d × 0.30 h m); conveyor (0.30 w × 0.80 h m); pack station (0.50 d × 0.8 h m); reach distances of 0.600 m and 0.800 m.]
Figure 38-5. Workstation for packing notebooks. Bundles of notebooks weighing 25N move from right to left to the worker who puts two bundles into each case.
Repeated and Sustained Exertions The number of exertions per hour or shift can be estimated from the work standard and methods analysis. Assessments of repeated exertions should take into consideration the frequency and speed of exertions as well as the recovery time between exertions. An exertion is defined as a movement or action to gain control of or to work on an object (e.g., picking up a part, placing a part into a machine, twisting a screw once with a screwdriver, pressing a control to activate a machine, etc.). Figure 38-5 shows a workstation for packing notebooks
into cases. Table 38-2 provides a time line of the steps required to perform this job. The job entails a series of reaches and grasps to obtain and erect the case (a corrugated carton) and to transfer notebooks. Each step that involves an exertion is identified by the letter “E.” This job requires 12 exertions during a work cycle with a duration of 13.7 seconds (a rate of 0.88 exertions/s). It is also possible to calculate the time spent working (exertions) versus the total cycle time. The ratio of the work time to the total time is called the duty cycle. For the notebook job, this is computed as 70% (9.6 sec/13.7 sec) for the left hand and 66% (9.0 sec/13.7 sec) for the right hand. An observational scale for rating repeated exertions is presented in Fig. 38-6.58 The verbal anchor points consider the frequency of motion, recovery time, and the speed of motion. This repetition scale has been adopted by ACGIH Worldwide as a measure of Hand Activity Level (HAL) and is used as a basis of a Threshold Limit Value (TLV) for hand-intensive work described later in this chapter.63
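The exertion-rate and duty-cycle arithmetic described above can be sketched as follows, using the counts and times given in the text:

```python
# Exertion frequency and duty cycle for the notebook-packing job.

def exertion_rate(n_exertions: int, cycle_time_s: float) -> float:
    """Exertions per second over one work cycle."""
    return n_exertions / cycle_time_s

def duty_cycle(work_time_s: float, cycle_time_s: float) -> float:
    """Fraction of the cycle spent exerting (work time / total time)."""
    return work_time_s / cycle_time_s

cycle = 13.7  # total cycle time, seconds
print(round(exertion_rate(12, cycle), 2))   # 0.88 exertions/s
print(round(duty_cycle(9.6, cycle) * 100))  # 70 (% duty cycle, left hand)
print(round(duty_cycle(9.0, cycle) * 100))  # 66 (% duty cycle, right hand)
```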
Forceful Exertions Forceful exertions can be identified by inspecting the work methods for steps that involve resisting gravity, surface finishing operations (e.g., grinding, polishing, or trimming), or tool reaction forces (using a manual or powered tool to tighten a screw or nut). On most jobs, exerted forces are not constant throughout the work cycle; there are usually distinct periods of exertion and periods of recovery, and the magnitude of exertion varies from step to step. Returning to the notebook-packing job (Fig. 38-5), each step was inspected for exertion of force (identified by the letter "E"). The magnitude of the force was estimated based on task attributes. Force estimates can be expressed in conventional force units or normalized on a scale of 0–100% or 0–10.
TABLE 38-2. THE STEPS AND START TIMES FOR EACH STEP OF THE NOTEBOOK-PACKING JOB. STEPS ARE LISTED ON THE SAME TIME LINE FOR THE LEFT AND RIGHT HANDS. MAJOR ACTIONS INCLUDE (A) GETTING AND ERECTING CASES (CORRUGATED CARTONS), (B) LOADING BUNDLES OF NOTEBOOKS INTO THE CASE, (C) CLOSING THE CASE, AND (D) PUSHING IT INTO A TAPING MACHINE. EACH DISCERNABLE EXERTION IS IDENTIFIED WITH THE LETTER "E."

Start times span 0.00–13.66 seconds over the 13.7-second work cycle; 12 steps for each hand are marked "E."

Left hand (steps in order): Reach for flat case; Pinch/move flat case; Pinch/move; Move (fold) flap; Release/reach for flap; Press/move (fold) flap; Press/move; Release/reach for flap; Press/move flap; Pinch/hold case; Release/reach for flap; Grasp/move flap; Release/reach for notebooks; Pinch/move notebooks to case; Release/reach; Pinch/move notebooks to case; Release/reach for flap; Press/move flap; Press/hold flaps closed; Release/reach for flap; Press/move flap; Press/move (guide); Reach for next carton.

Right hand (steps in order): Reach for flat case; Pinch/move flat case; Pinch/move; Hold/move (fold) flap down; Release/reach for flap; Press/move (fold) flap down; Release/reach for bottom of case; Pinch/move (rotate) case; Release/reach for flap; Press/move flap; Release/reach for flap; Grasp/move flap; Reach for notebooks; Pinch/move notebooks to case; Release/reach; Pinch/move notebooks to case; Release/reach for flap; Press/move flap; Release/reach; Grasp/move flap; Move/hold flap; Hold/flap down; Move (push) case to taping machine; Release/reach for next case.
38
Ergonomics and Work-Related Musculoskeletal Disorders
769
[Figure 38-6 content: 0–10 rating scales for hand repetition (hand activity level), with verbal anchors running from "hands idle most of the time; no regular exertions" (0) through "consistent, conspicuous, long pauses; slow motions," "slow steady motion/exertion; frequent brief pauses," "steady motion/exertion; infrequent pauses," and "rapid steady motion/exertion; infrequent pauses" to "rapid steady motion or continuous exertion; difficulty keeping up" (10); hand force (average/peak), anchored "none at all" (0) to "greatest imaginable" (10); contact stress (average/peak) for the finger/hand, wrist/palm, forearm, and elbow, anchored "none at all" to "greatest imaginable"; and posture (average/peak) for wrist flexion/extension, wrist radial/ulnar deviation, forearm, elbow, shoulder, neck, and back, anchored "neutral" (0) to "greatest imaginable" (10).]

Figure 38-6. Form for recording observational ratings of physical job stresses. (Adapted from Latko W. Development and evaluation of an observational method for quantifying exposure to hand activity and other physical stressors in manual work. Ph.D. Dissertation, Dept. of Industrial and Operations Engineering. Ann Arbor, MI: The University of Michigan; 1997.)
For example, it was estimated that the pinch force required to “get and erect the cases” was 5N based on size and weight of the carton and how it was handled. Normalized for a female with 50N pinch strength, 5N corresponds to 10% on a 0–100% scale or to 1 on a 0–10 scale. The force to move a bundle of notebooks to the case (“Pinch/ move notebooks to case”) was 25N per hand, corresponding to normalized values of 50% or 5. Considering all of the steps required to perform this job, the peak force is associated with the “Pinch/move notebooks to case” step. As shown later in this chapter, the ACGIH TLV63 considers peak force when evaluating a job. Forces can be determined from biomechanical calculations, estimated from knowledge of task attributes and observations, estimated using worker ratings or measured using instrumental methods. Fig. 38-6 shows a 10-point visual analog scale for estimating peak and average hand force.58 Assessments of force requirements should take the following factors into consideration:
• The magnitude of weight, resistance, and reaction forces
• The effects of friction
• Balance (well-balanced tools require lower exertions than poorly balanced tools)
• Posture (pinch grips require higher exertions than power grips)
• Pace
• Gloves
Jobs that require workers to get, hold, or use heavy objects will require more force than jobs that require workers to get, hold, or use light objects in the same way. Ratings should be adjusted upward if objects or glove surfaces are slippery or if objects are poorly balanced or supported with the ends of the fingers. Ratings also should be increased for rapid movements or if stiff or bulky gloves are used. Forceful exertions can be averaged across the entire length of the work cycle, but they must be weighted for time durations. As pointed out before, most people cannot sustain an average force exertion
greater than 10–20% of maximum strength without excessive fatigue. For example, consider a job in which a worker gets parts one at a time and installs them onto a passing unit on an assembly line. The hand force required to reach for the part is negligible, the force required to transfer the part is 20% of maximum muscle strength, and the force required to install the part is 60% of maximum strength. The average force across all tasks (including nonexertion tasks and recovery time) is 10% of maximum strength. Using a 10-point scale for rating force, this job would be given average and peak ratings of 1 and 6, respectively. Force can also be assessed using electromyography and direct measurements. Jonsson has proposed a method in which the normalized EMG measurements (0–100% of maximum) are presented as a cumulative frequency histogram, called an amplitude probability distribution.60,61 Armstrong et al.64 utilized force gauges under keyboards to measure forces exerted during typing.
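The time-weighted averaging in the assembly-line example above can be sketched as follows. The phase durations are hypothetical (the text gives only the forces and the resulting 10% average); they were chosen so the weighted average matches that figure.

```python
# Time-weighted average and peak hand force for the assembly-line example.
# Forces are % of maximum strength; durations (seconds) are assumed values.

phases = [
    ("reach for part", 2.0, 0),   # negligible force
    ("transfer part",  1.0, 20),
    ("install part",   0.5, 60),
    ("recovery/idle",  1.5, 0),
]

total_time = sum(t for _, t, _ in phases)
avg_force = sum(t * f for _, t, f in phases) / total_time
peak_force = max(f for _, _, f in phases)

print(avg_force)                        # 10.0 (% of maximum strength)
print(avg_force / 10, peak_force / 10)  # 1.0 and 6.0 on the 0-10 rating scale
```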
Posture Stresses Stressful postures can be identified by inspecting work elements for steps that involve repeated or sustained maximum reaches, elevation of the elbows, reaching behind the torso, full elbow flexion, full forearm rotation, ulnar/radial wrist deviation, wrist flexion, full wrist extension, or a pinch grip. Posture analysis may be performed by directly observing the job or from videotapes that are played back in slow motion. The analysis should examine each joint, for example, neck, shoulder, elbow, wrist, and hand (see Fig. 38-6). Posture stress ratings increase as deviations from neutral position, duration, and frequency increase. Posture, like force, varies from the beginning to the end of the work cycle. Consequently, the maximum and average values should be considered in the analysis. Postures can also be assessed with computers using goniometers attached to the joint of interest.62
Localized Mechanical Stresses Localized contact stresses can be calculated as the force acting on the body, divided by the area of contact. Consequently, the average contact stress will be lower if the weight of the arm is distributed over a padded surface than if it rests on the sharp edge of a work surface. Stresses may not be uniformly distributed due to the irregular shapes of the workstation and tools, as well as the bones. Stresses can be identified by inspecting work methods for steps that involve contact of the body with external objects. Average and peak stresses should be considered in the analysis. Contact forces can be rated based on observations (see Fig. 38-6); however, there may be significant variation from one rater to the next.

Low Temperature Exposure to low temperature affects how workers hold and use tools, as well as peripheral circulation and the neurological symptoms of existing nerve disorders. Adverse effects may occur when the skin temperature falls below 20°C. Exposure to low temperatures can be identified by inspecting work methods for steps that result in exposure to cold air, tools, and/or materials. Rankings may be based on temperature, but should be adjusted for thermal conductivity and protective equipment. Ratings also should be adjusted for clothing, as the skin temperature of the fingers is affected by the body's core temperature.

Vibration Vibration refers to the cyclical displacement of an object and has properties of frequency and amplitude. Available evidence suggests that vibration exerts a direct action on soft tissue and that it affects the way workers hold and use work objects.16,65 Vibration is often reported as a velocity or an acceleration. Acceleration is often reported because accelerometers are widely used to measure vibration.65 Instrumental measurements are beyond the scope of this discussion, but vibration exposure can be identified by inspecting work methods for steps that involve the use of stationary or hand-held power tools, impact tools, or controls connected to vibrating equipment. In the absence of proper instrumentation, ratings may be based on the duration and amplitude of contact with vibrating objects. For example, a grinding or buffing job would probably be rated higher in terms of vibration stress than an assembly job that requires periodic use of a powered wrench.

Worker Input Assessment of ergonomic stresses may be supplemented by worker interviews. Interviews should be carefully designed to avoid suggesting to workers how they should feel. Also, it is important that all workers be asked the same questions. One way of doing this is through the use of surveys in which workers rate discomfort or perceived exertion. An example of a survey in which a visual analog scale was used to assess the weights used in an automobile trim shop is shown in Fig. 38-7.59 These data show a significant increase in ratings toward "too heavy" as the tool mass increases above 2 kg. It cannot be said that workers will not develop a WRMSD if they use tools less than 2 kg, but in the absence of better data, worker ratings may be used as a design or selection benchmark. Designing lifting tasks to match acceptable levels of perceived exertion has been reported to reduce the risk of overexertion disorders of the back.66 While this has not been shown for upper limb disorders, it is a tenable hypothesis.

Exposure Limits for Jobs with Upper Extremity Exertions A number of tools or procedures have been proposed that can be used to quantify the risk factors described above so that they can be compared with worker capacities or recommended exposure limits. A discussion of all of the tools is beyond the scope of this chapter. Please refer to the reference list for additional information on tools and job analysis methods.46,58,63,67–72
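The localized contact-stress calculation described above (force divided by contact area) can be sketched with hypothetical values; the forearm weight and contact areas below are assumptions chosen only for illustration.

```python
# Localized contact stress = force / contact area. A small contact area
# (sharp edge) produces a much higher stress than a padded surface.

def contact_stress(force_n: float, area_cm2: float) -> float:
    """Average contact stress in N/cm^2."""
    return force_n / area_cm2

forearm_weight_n = 15.0  # assumed resting weight of the forearm, in newtons
print(contact_stress(forearm_weight_n, 20.0))  # 0.75 N/cm^2 on a padded surface
print(contact_stress(forearm_weight_n, 2.0))   # 7.5 N/cm^2 on a sharp edge
```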
ACGIH TLV for Monotask Handwork ACGIH Worldwide defines itself as a "scientific organization," not a "standards setting body." The ACGIH maintains committees of qualified experts who review peer-reviewed literature and develop guidelines known as Threshold Limit Values (TLVs). TLVs are intended to help health professionals make decisions about safe levels of worker exposure. Information regarding the TLVs can be found in the ACGIH TLV guide and their documentation.63 The TLV for HAL applies to monotask hand work that is performed for four or more hours per shift. It considers HAL and peak finger force (Fp) as described above. "Monotask" means that the worker repeats a similar set of motions or exertions for 4 or more hours. At the present time, the TLV does not consider work durations beyond 8 hours, awkward posture, mechanical contact stress, vibration, and/or psychosocial stresses; these are left to professional
[Figure 38-7 content: worker ratings on a 0–10 scale, anchored "too light" (low end), "right" (middle), and "too heavy" (high end), plotted against tool mass from 0 to 7 kg; mean ratings increased from 4.7 (±1.3) to 6.9 (±2.3) to 9.1 (±1.2) with increasing tool mass.]

Figure 38-7. Ratings of 33 tools by 22 workers show that tools in excess of 2 kg were considered "too heavy" in an automobile trim shop. (Source: Reference 59, Armstrong TJ, Punnett L, Ketner P. Subjective worker assessments of hand tools used in automobile assembly. American Industrial Hygiene Association Journal. 1989;50:639–45. Reproduced with permission. http://www.aiha.org.)
judgment. TLV users are cautioned that some transient discomfort is a normal part of all physical activity (see the discussion of fatigue) and may not be prevented by the TLV. Also, there are many nonoccupational exposures that may contribute to the development of MSDs. The TLV is shown graphically in Fig. 38-8. Peak finger force (Fp) is plotted on the vertical axis and HAL is plotted on the horizontal axis using scales ranging between 0 and 10. The TLV is depicted as a line that goes from a peak finger force of 7 for an HAL value of 1 to a peak finger force of zero for an HAL value of 10. There is also an action limit (AL) that goes from a peak finger force of 5 for an HAL value of 1 to zero for an HAL value of 10. Work with HAL levels less than 1.0 is not considered repetitive work, and the TLV does not apply. Exposures to combinations of HAL and Fp should not exceed the TLV. When this occurs, job modifications are indicated to reduce HAL and/or Fp. These modifications should be based on the results of the job analysis methods described above. Exposures that exceed the AL should trigger a control program that includes surveillance of workers for MSDs; education of managers, supervisors, and workers; early reporting of symptoms; and appropriate health care for workers who have developed MSDs.

Figure 38-8. ACGIH TLV for hand activity level provides guidance for determining acceptable peak finger force (Fp) levels for a given hand activity level (HAL). (Adapted from American Conference of Governmental Industrial Hygienists [ACGIH]. 2005 TLVs and BEIs. Cincinnati: ACGIH Worldwide; 2005.)

The TLV can be applied using observations and ratings as described above. If additional information and certainty are desired, it can also be applied from a work methods analysis like the one shown for the notebook-packing job in Table 38-2. The ACGIH provides a table for estimating HAL based on exertion frequency and duty cycle (see Table 38-3). In the notebook-packing job, the frequency of exertions was estimated as 0.88/sec and the duty cycle as 70%. Using Table 38-3, the HAL can be estimated as 5. The peak finger force during notebook packing was estimated as 5. In Fig. 38-8, the maximum acceptable hand force for an HAL of 5 is 3.8. Clearly, the TLV is exceeded, and engineering controls should be implemented to reduce peak finger force and/or HAL.

Intervention and Evaluation of Control Methods

Jobs ranked high in terms of ergonomic stresses should be redesigned to minimize those stresses. Possible strategies may focus on the redesign or modification of methods, tools, workstations, and production processes. Worker training may help workers to select and use tools properly or to properly adjust their workstations, but in many cases ergonomic stresses result from the work requirements and cannot be reduced through training. As has been previously stated, there are not yet specification standards for acceptable levels of ergonomic stress. Therefore, it is necessary to evaluate interventions to ascertain their effectiveness. Evaluation may be accomplished through reanalysis of the job, measures of localized discomfort or exertion, and ongoing surveillance of health data. These procedures have been described above.

OCCUPATIONAL LOW BACK PAIN

Low back pain is a nonspecific condition that refers to perceptions of acute or chronic pain and discomfort in or near the lumbosacral spine that can be caused by inflammatory, degenerative, neoplastic, gynecologic, traumatic, metabolic, and other types of disorders.73 A large number of disease conditions have been associated with low back pain, including sciatica, lumbago, spondylosis, osteoarthrosis, and degenerative disc disease.74 However, most episodes of work-related back pain cannot be associated with a specific lesion. Therefore, in most epidemiologic studies of occupational low back pain, the specific cause is not identified. Typically, all categories are grouped together as an idiopathic condition with similar reported symptoms.75 There is no objective measurement of back pain; it can only be assessed using subjective self-reports such as pain/discomfort diagrams and visual-analog scales. In some cases, back pain may affect activities of daily living and/or result in occupational disability (lost work time or work restrictions). However, disability is a complex process that is affected by a variety of occupational, socioeconomic, and personal factors (e.g., physical job demands, psychosocial climate, compensation systems and insurance benefits, personality, etc.). As a result, there is a poor correlation between back pain and disability.73,76
TABLE 38-3. HAL CAN BE ESTIMATED FROM EXERTION FREQUENCY (EXERTIONS PER SECOND) OR PERIOD (CYCLE DURATION IN SECONDS) AND WORK DUTY CYCLE TIME∗

Frequency (range)    Period (range)       0–20%  20–40%  40–60%  60–80%  80–100%
0.12/s (0.09–0.18)   8.0 s (5.66–11.31)   1      1       3†      4†      6†
0.25/s (0.18–0.35)   4.0 s (2.83–5.66)    2      2       3       4†      6†
0.5/s (0.35–0.71)    2.0 s (1.41–2.83)    3      4       5       5       6
1.0/s (0.71–1.41)    1.0 s (0.71–1.41)    4      5       5       5       7
2.0/s (1.41–2.83)    0.5 s (0.35–0.71)    4†     5       6       6       8

∗Adapted from American Conference of Governmental Industrial Hygienists (ACGIH). 2005 TLVs and BEIs. Cincinnati: ACGIH Worldwide; 2005.
†Values extrapolated by author—not from ACGIH.
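The Table 38-3 lookup and TLV comparison worked through in the text can be sketched as follows. The TLV and action-limit lines are approximated here as straight lines between the endpoints given in the text; the ACGIH documentation tabulates the exact values, which is why the text reads 3.8 where this linear form gives about 3.9.

```python
# Sketch of the ACGIH hand-activity TLV check for the notebook-packing job.
# TLV line: Fp = 7 at HAL = 1 down to Fp = 0 at HAL = 10.
# Action limit (AL): Fp = 5 at HAL = 1 down to Fp = 0 at HAL = 10.

def tlv_peak_force(hal: float) -> float:
    """Approximate maximum acceptable normalized peak finger force (0-10)."""
    return 7.0 * (10.0 - hal) / 9.0

def action_limit_force(hal: float) -> float:
    return 5.0 * (10.0 - hal) / 9.0

hal = 5  # from Table 38-3 at 0.88 exertions/s and a 70% duty cycle
fp = 5   # normalized peak finger force estimated for the job

print(round(tlv_peak_force(hal), 1))  # 3.9 (the figure in the text reads 3.8)
print(fp > tlv_peak_force(hal))       # True: the TLV is exceeded
```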
Because the causes of back pain are so poorly understood, it is difficult to specify a treatment plan. In most episodes, people with back pain are able to continue working and cope with the problem without seeking medical treatment. Most disabling cases are temporary and typically resolve themselves within a few weeks using only conservative treatment such as reduced physical activity and OTC pain medication. More invasive interventions such as surgery should not be considered during the first three months unless indicated by a specific diagnosis.77 Low back pain is one of the most common and costliest health problems in industrialized societies. Studies in the United States and Scandinavia have shown that 60–80% of adults experience at least one episode of back pain during their adult working life (ages 18–65).78,79 Other studies have found the one-month prevalence rate to be approximately 35% and the one-year prevalence rate to be approximately 50%.80,81 (It is important to note that most episodes reported in these prevalence studies did not result in occupational disability.) It is estimated that over 2% of U.S. 
workers file injury claims for back pain each year.80,82,83 When workers' compensation indemnity payments and other indirect costs are added to medical expenditures, the total cost of occupational low back pain in the United States is estimated to be $50–$100 billion per year.80 Occupational risk factors associated with the development of back pain include the following:
• Forceful exertions during manual materials handling, such as lifting, pushing, and/or pulling of heavy loads20,84–93
• Awkward trunk postures, such as flexion, lateral bending, axial twisting, and/or prolonged sitting20,88–91,93,94–96
• Whole body vibration, usually transmitted through a vibrating seat or platform20,89,90,97,98
• Repetitive or prolonged exposure to any of the above risk factors75,88,89,92,99
• Work-related psychological or psychosocial stress80,100,101
• Slips and falls87,102
Note: For the first five risk factors listed above, workers are typically exposed on a continuing or ongoing basis, and it may be difficult to associate a back complaint with a specific incident or accident. Back pain complaints associated with slips and falls are different in the sense that the complaint can almost always be associated with a specific event. Truck drivers experience elevated rates of back pain when compared to other occupational groups.91,99 Many truck drivers load and unload their own rigs; this activity often requires heavy lifting combined with awkward posture (e.g., trunk flexion when bending down to grasp an object on the floor of the trailer). Truck drivers also spend a considerable portion of their workday in a sustained seated posture and may be exposed to high levels of whole body vibration if the vehicle and seat suspension systems do not adequately isolate the driver from roadway bumps and shocks. Finally, slips and falls are common in the truck driving population due to the need to regularly ingress and egress tractors and trailers, working and walking outdoors on slippery surfaces in inclement weather, and maneuvering hand trucks on ramps and other irregular surfaces, sometimes with impeded vision (due to the size of packages that can partially block the visual field).102 Other high-risk occupations include nurses and nurses aides, garbage collectors, warehouse workers, and mechanics.103,104 All of these occupations require heavy lifting and associated materials handling tasks.

Lifting and Back Pain Because of the hazards associated with manual lifting, the National Institute for Occupational Safety and Health (NIOSH) developed guidelines for evaluating lifting tasks in 1981.85 These guidelines were updated in 1993 in a monograph titled Applications Manual for the Revised NIOSH Lifting Equation.105 This document discusses risk factors associated with lifting and describes procedures for analyzing and designing manual tasks to keep biomechanical, physiological, and psychophysical loads within acceptable limits. To use the NIOSH lifting guidelines, it is necessary to measure the following eight task variables:
1. Load Weight (L)—measured in kilograms.
2. Horizontal Location (H)—the distance from the midpoint of a line connecting the ankles to a point on the floor directly below the load center, as shown in Fig. 38-9. This distance is measured in centimeters at the origin and destination of the lift.
3. Vertical Location (V)—the location of the hands at the origin of the lift, measured vertically from the floor or working surface in centimeters. See Fig. 38-9.
4. Vertical Travel Distance (D)—the vertical displacement of the object (origin to destination) over the course of the lift, measured in centimeters.
5. Asymmetry Angle (A)—angular displacement of the load from the front of the body (the midsagittal plane) at the origin and destination of the lift, measured in degrees as shown in Fig. 38-10.
6. Lifting Frequency (F)—the average number of lifts per minute.
7. Duration of lifting activities—measured in hours.
8. Coupling Classification (C)—quality of the hand-to-object coupling (i.e., gripping surface), classified as good, fair, or poor.
These variables are substituted into the NIOSH Lifting Equation to compute the Recommended Weight Limit (RWL):

RWL = 23 kg × HM × VM × DM × AM × FM × CM    (39-1)

where: HM is the horizontal multiplier computed as (25/H), where H is the horizontal location (defined above). Table 38-4 presents values of HM for various horizontal locations. If H is ever less than 25 cm (10 in), the

Figure 38-9. Definition of horizontal and vertical locations of the hands when using the NIOSH Lifting Equation. (Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.)
[Figure 38-10 labels: forward direction; angle of asymmetry (A) used in the NIOSH equation; center of object during twisting of the trunk; centerline of shoulders; frontal plane (prior to any twisting of the trunk); midpoint between ankles; sagittal plane (prior to any twisting of the trunk).]

Figure 38-10. Definition of asymmetry angle when using the NIOSH Lifting Equation. View is looking down on the worker from above. The trunk is twisted clockwise from its original forward-facing orientation. (Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.)
TABLE 38-4. VALUES OF THE HORIZONTAL MULTIPLIER (HM) FOR VARIOUS HORIZONTAL DISTANCES (H)∗

H (in)  HM      H (cm)  HM
<10     1.00    <25     1.00
11      0.91    28      0.89
12      0.83    30      0.83
13      0.77    32      0.78
14      0.71    34      0.74
15      0.67    36      0.69
16      0.63    38      0.66
17      0.59    40      0.63
18      0.56    42      0.60
19      0.53    44      0.57
20      0.50    46      0.54
21      0.48    48      0.52
22      0.45    50      0.50
23      0.43    52      0.48
24      0.42    54      0.46
25      0.40    56      0.45
>25     0.00    58      0.43
                60      0.42
                63      0.40
                >63     0.00

∗Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.
multiplier is set to a value of 1.0. If H exceeds 63 cm (25 in), HM is set to zero since this is greater than the reach capability of some workers. VM is the vertical multiplier computed as [1 − (0.003 × |V − 75|)], where V is the vertical location (defined above). Values of VM for various vertical locations are presented in Table 38-5. VM is set to zero if the vertical location is higher than 175 cm since this exceeds the vertical reach capability of some workers. DM is the distance multiplier computed as [0.82 + (4.5/D)], where D is the vertical travel distance (defined above). DM cannot exceed a value of 1.0 even if the actual vertical travel distance is less than 25 cm. Table 38-6 presents values of DM for selected travel distances. AM is the asymmetric multiplier computed as (1 − 0.0032 × A), where A is the angle of asymmetry (defined above). Values of AM for selected asymmetry angles are presented in Table 38-7. FM is the frequency multiplier from Table 38-8. The purpose of the frequency multiplier is to adjust for fatigue that results from frequent and/or prolonged lifting. Note that it is necessary to consider both the frequency and duration of lifting activities in order to use Table 38-8. It is also necessary to consider the vertical location since low lifts involve lowering and raising the weight of the trunk and the head. This requires additional energy and may contribute to fatigue. (Note: NIOSH has not yet determined multipliers for jobs where the duration of lifting activities exceeds 8 hours.) CM is the coupling multiplier. A "good" coupling (CM = 1.0) exists if the object is equipped with handles or hand-hole cutouts of sufficient size and clearance to accommodate a large hand. For loose objects without handles, a "good" coupling exists if the shape of the object allows the fingers to be comfortably wrapped around the object.
A “fair” coupling (CM = 0.95) exists if the object has no handles or hand-holes, but the size, shape, and rigidity are such that the worker can
TABLE 38-5. VALUES OF THE VERTICAL MULTIPLIER (VM) FOR VARIOUS VERTICAL LOCATIONS (V)∗
TABLE 38-7. VALUES OF THE ASSYMETRIC MULTIPLIER (AM) FOR VARIOUS ANGLES (A)∗
Vertical Locations (V) (in)
Angle (A) (degree)
0 5 10 15 20 25 30 35 40 45 50 55 60 65 70 >70
Vertical Multiplier (VM) 0.78 0.81 0.85 0.89 0.93 0.96 1.00 0.96 0.93 0.89 0.85 0.81 0.78 0.74 0.70 0.00 160 170 175 >175
Vertical Locations (V) (cm) 0 10 20 30 40 50 60 70 80 90 100 110 120 130 140 150 0.75 0.72 0.70 0.00
Vertical Multiplier (VM) 0.78 0.81 0.84 0.87 0.90 0.93 0.96 0.99 0.99 0.96 0.93 0.90 0.87 0.84 0.81 0.78
Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.
comfortably clamp the fingers under the object (such as when lifting a corrugated case from the floor). “Poor” coupling (CM = 0.90) exists whenever the object is difficult to grasp (e.g., slippery surfaces, sharp edges, nonrigid shape, etc.). Large objects that require a hand separation distance of more than 40 cm are considered to have a poor coupling. Once all the multipliers have been determined, use Equation 39-1 to compute the Recommended Weight Limit for the lifting task. To estimate the relative level of stress associated with a lifting task,
TABLE 38-6. VALUES OF THE DISTANCE MULTIPLIER (DM) FOR VARIOUS LIFT TRAVEL DISTANCES (D)∗

D (cm):  ≤25  40   55   70   85   100  115  130  145  160  175  >175
DM:      1.00 0.93 0.90 0.88 0.87 0.87 0.86 0.85 0.85 0.85 0.85 0.00

D (in):  ≤10  15   20   25   30   35   40   45   50   55   60   65   70   >70
DM:      1.00 0.94 0.91 0.89 0.88 0.87 0.87 0.86 0.86 0.85 0.85 0.85 0.85 0.00

∗Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.

TABLE 38-7. VALUES OF THE ASYMMETRIC MULTIPLIER (AM) FOR VARIOUS ANGLES (A)∗

A (degrees):  0    15   30   45   60   75   90   105  120  135  >135
AM:           1.00 0.95 0.90 0.86 0.81 0.76 0.71 0.66 0.62 0.57 0.00

∗Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.

NIOSH defines the Lifting Index (LI) as the ratio of the Load Weight (L) to the computed Recommended Weight Limit:

LI = L/RWL    (38-2)
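The multiplier formulas and the Lifting Index described above can be sketched in Python. This is a minimal sketch, not an official implementation: the load constant LC = 23 kg and the horizontal-multiplier form HM = 25/H (H in cm) are standard parts of the revised NIOSH equation that fall outside this excerpt, and FM and CM are assumed to have been looked up separately from Table 38-8 and the coupling rules.

```python
def rwl(H, V, D, A, FM, CM, LC=23.0):
    """Recommended Weight Limit (kg) for a lifting task.

    H = horizontal location, V = vertical location, D = vertical travel
    distance (all in cm); A = angle of asymmetry (degrees).  FM and CM
    are read from Table 38-8 and the coupling rules in the text.
    LC (23 kg) and HM = 25/H are assumptions taken from the revised
    NIOSH equation, not from this excerpt.
    """
    HM = 0.0 if H > 63 else 25.0 / max(H, 25.0)   # HM = 1.0 for H <= 25 cm; 0 beyond reach
    VM = 0.0 if V > 175 else 1.0 - 0.003 * abs(V - 75.0)
    DM = 0.82 + 4.5 / max(D, 25.0)                # DM capped at 1.0 (D < 25 treated as 25)
    AM = 0.0 if A > 135 else 1.0 - 0.0032 * A
    return LC * HM * VM * DM * AM * FM * CM

def lifting_index(load_weight, recommended_limit):
    """Lifting Index (Equation 38-2): LI = L / RWL."""
    return load_weight / recommended_limit
```

For an ideal lift (H = 25 cm, V = 75 cm, D = 25 cm, A = 0°, FM = CM = 1.0), every multiplier equals 1.0 and the RWL is the full load constant of 23 kg; doubling the horizontal distance to 50 cm halves the limit.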
The LI can be used to compare the relative hazard of two or more jobs, or to prioritize lifting jobs for ergonomic interventions. There is limited epidemiological evidence that the rate of back injury increases as the LI increases from 1.0 to 2.0.92 NIOSH suggests that jobs should be designed to achieve an LI of 1.0 or less. For additional information on using the NIOSH Lifting Equation, including numerous detailed examples, refer to the Applications Manual for the Revised NIOSH Lifting Equation.105
The most effective way to reduce injuries and disorders associated with manual lifting is to implement engineering controls (i.e., changes in equipment, workstation layout, work methods, etc.) that reduce exposure to one or more of the risk factors discussed above. Possible approaches are briefly outlined below:
1. Reduce the weight of lifted items. For example, is it possible to put fewer parts in a tote pan or to reduce the size and weight of bags containing granular or powdered materials?
2. If the weight of the load cannot be reduced, provide mechanical assistance (e.g., a hoist or articulating arm) to reduce the forces exerted by workers.
3. Eliminate low reaches by delivering objects to the worker at knee height or above. Provide an adjustable-height lift table to allow the worker to pick up objects without excessive trunk flexion.
4. Reduce horizontal reach distances by eliminating or relocating barriers that prevent a worker from getting as close to the object as safely possible prior to starting the lift. Forward reaches should require no trunk flexion.
5. Reduce carrying distances by changing the workstation layout or by installing mechanized equipment (e.g., conveyors). Eliminate twisting by changing the layout or changing the task sequence.
If engineering controls do not reduce the LI to less than 1.0, administrative controls should be considered.
A rotation scheme that allows workers to alternate between jobs with heavy lifting requirements and jobs with insignificant lifting requirements reduces cumulative exposure to lifting stresses. Although NIOSH does not endorse the use of worker selection tests, a limited number of studies have indicated that strength testing and/or aerobic capacity testing may be used to identify workers who can perform work activities with moderate to high lifting without significantly increasing their risk of work-related injury.106–107 Employee selection testing, however, is a
TABLE 38-8. VALUES OF THE FREQUENCY MULTIPLIER (FM) FOR VARIOUS LIFTING FREQUENCIES (F)∗

                         Work Duration
Frequency (F)     ≤8 hrs          ≤2 hrs          ≤1 hr
(lifts/min)     V<30   V≥30     V<30   V≥30     V<30   V≥30
≤0.2            0.85   0.85     0.95   0.95     1.00   1.00
0.5             0.81   0.81     0.92   0.92     0.97   0.97
1               0.75   0.75     0.88   0.88     0.94   0.94
2               0.65   0.65     0.84   0.84     0.91   0.91
3               0.55   0.55     0.79   0.79     0.88   0.88
4               0.45   0.45     0.72   0.72     0.84   0.84
5               0.35   0.35     0.60   0.60     0.80   0.80
6               0.27   0.27     0.50   0.50     0.75   0.75
7               0.22   0.22     0.42   0.42     0.70   0.70
8               0.18   0.18     0.35   0.35     0.60   0.60
9               0.00   0.15     0.30   0.30     0.52   0.52
10              0.00   0.13     0.26   0.26     0.45   0.45
11              0.00   0.00     0.00   0.23     0.41   0.41
12              0.00   0.00     0.00   0.21     0.37   0.37
13              0.00   0.00     0.00   0.00     0.00   0.34
14              0.00   0.00     0.00   0.00     0.00   0.31
15              0.00   0.00     0.00   0.00     0.00   0.28
>15             0.00   0.00     0.00   0.00     0.00   0.00

(V is the vertical location in inches; 30 in = 75 cm.)

∗Adapted from Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994.
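For illustration, the frequency-multiplier lookup can be mechanized as follows. This is a hypothetical helper, not part of the NIOSH manual; the values and frequency classes are as transcribed from Table 38-8, with frequencies rounded up to the next tabulated class and V given in inches.

```python
import bisect

# FM columns keyed by (work-duration category in hours, vertical location >= 30 in),
# indexed by frequency class: <=0.2, 0.5, 1, 2, ..., 15 lifts/min (>15 -> FM = 0).
FREQ_CLASSES = [0.2, 0.5] + [float(f) for f in range(1, 16)]   # 17 class upper bounds
FM_TABLE = {
    (8, False): [0.85, 0.81, 0.75, 0.65, 0.55, 0.45, 0.35, 0.27, 0.22, 0.18, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    (8, True):  [0.85, 0.81, 0.75, 0.65, 0.55, 0.45, 0.35, 0.27, 0.22, 0.18, 0.15, 0.13, 0.00, 0.00, 0.00, 0.00, 0.00],
    (2, False): [0.95, 0.92, 0.88, 0.84, 0.79, 0.72, 0.60, 0.50, 0.42, 0.35, 0.30, 0.26, 0.00, 0.00, 0.00, 0.00, 0.00],
    (2, True):  [0.95, 0.92, 0.88, 0.84, 0.79, 0.72, 0.60, 0.50, 0.42, 0.35, 0.30, 0.26, 0.23, 0.21, 0.00, 0.00, 0.00],
    (1, False): [1.00, 0.97, 0.94, 0.91, 0.88, 0.84, 0.80, 0.75, 0.70, 0.60, 0.52, 0.45, 0.41, 0.37, 0.00, 0.00, 0.00],
    (1, True):  [1.00, 0.97, 0.94, 0.91, 0.88, 0.84, 0.80, 0.75, 0.70, 0.60, 0.52, 0.45, 0.41, 0.37, 0.34, 0.31, 0.28],
}

def frequency_multiplier(lifts_per_min, duration_hr, v_inches):
    """Return FM for a given lifting frequency, work duration, and vertical location.

    Durations above 8 hours are not covered (NIOSH has not determined
    multipliers for them), so durations between 2 and 8 hours map to the
    <=8 hr column.
    """
    if lifts_per_min > 15:
        return 0.0
    idx = bisect.bisect_left(FREQ_CLASSES, lifts_per_min)   # round up to next class
    duration = 1 if duration_hr <= 1 else (2 if duration_hr <= 2 else 8)
    return FM_TABLE[(duration, v_inches >= 30)][idx]
```

For example, 10 lifts/min over a full shift with the load starting above 30 in yields FM = 0.13, while the same frequency for under an hour yields 0.45.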
nontrivial process that requires extensive analysis of job demands and validation of all screening criteria. Another type of administrative control claimed to be beneficial in preventing back injuries is the back belt, a personal protective device worn by personnel who perform manual materials-handling tasks. Studies of the effectiveness of back belts have been limited. Laboratory research has indicated that back belts may be effective in reducing biomechanical risk factors during lifting.108 However, based on a small number of epidemiological studies, NIOSH concluded that there is insufficient evidence to support the claim that back belts prevent injuries to healthy workers.109–110 A study of airline baggage handlers found that worker compliance in wearing back belts is poor, with 58% of subjects discontinuing use after 8 months. Furthermore, workers who discontinued wearing belts had higher injury experience than a control group who never wore belts.111
Awkward Posture and Back Pain

Awkward trunk posture during work can be caused by poor workstation layout. The neutral position of the trunk occurs when it is in a vertical upright position with no axial twisting. Trunk flexion (forward bending in the sagittal plane) can usually be attributed to one of two causes: (a) reaching down to grasp an object that is lower than knuckle height (the level of the hands when standing upright with the arms hanging vertically), or (b) reaching forward to grasp an object that is too far in front of the body. Lateral bending (in the frontal plane) and axial twisting are usually associated with reaching for objects that are located either to the side of or behind a worker’s body. Laboratory and field studies have shown that these nonneutral postures are associated with local muscle fatigue and excessive rates of back pain.85,95–97,112,113 Nonneutral postures can prove particularly challenging for persons with preexisting and chronic back pain, and workplace accommodations to reduce postural demands are essential to successful placement.32 Workstations should be designed to avoid trunk postures that deviate more than 30° from the neutral, upright position, and highly dynamic trunk motions should be avoided. Because working posture is a function of an individual’s anthropometry, a workstation layout that is good for one person may not be
appropriate for workers who are considerably larger or smaller. For this reason, adjustability should be incorporated into the workstation wherever possible.
Seated Work and Back Pain

Due to the rapid growth of service and information industries and technological advances in manufacturing methods, an increasing number of workers are spending a major fraction of their workday in a seated posture.114 Sitting provides many ergonomic benefits, such as a reduction in the amount of body weight borne by the tissues of the feet and lower extremities, a reduction in whole-body energy expenditure due to decreased muscle activity, and stabilization of the body for tasks that require precise manual dexterity. The primary disadvantage of sitting is increased stress on the spine.8 Clinical and epidemiological studies have shown that prolonged sitting is associated with increased rates of lower back pain.94,115 A possible explanation is that when a person moves from a standing to a sitting posture, the pelvis rotates backward, flattening the normal lordotic curve of the lower spine.116,117 This flattening has the following effects: compression on the anterior portion of the disc, tension on the posterior portion of the disc, increased intradiscal pressure, tension on the apophyseal joint ligaments, and tension on the erector spinae muscles.118,119 These stresses affect the supply of nutrients to the disc and surrounding tissue and may be related to the development of back disorders. Spinal stresses associated with sitting are affected by the design of the chair. An important design consideration is the angle between the backrest and the seatpan. As this angle is increased, pelvic rotation and lumbar flattening are reduced.117 This can be accomplished by tilting the seatpan slightly below the horizontal or by rotating the backrest in a rearward direction from the vertical. Jobs that require the worker to lean forward while sitting (e.g., sewing, many bench assembly tasks, microscope work, etc.) should have forward-slanting seatpans.
A study of full-time sewing-machine operators found that comfort was enhanced and fatigue reduced by tilting the seatpan to slant forward at an angle 15° below the horizontal.120 Furthermore, laboratory experiments have demonstrated that intradiscal pressures can be reduced up to 50% by increasing the included angle between
the seatpan and backrest from 90° to 110°. Adding a lumbar support to the backrest also reduces intradiscal pressure.121 The height and shape of the seatpan are also important considerations in chair design. If the seat is too high, the worker’s feet dangle, causing pressure on the underside of the thigh. This can interfere with circulation and cause swelling in the feet and lower legs. If the seat is too low, the thighs do not make good contact with the seatpan and an excessive amount of body weight will be borne by the ischial tuberosities and surrounding tissue. This may cause considerable discomfort, particularly if sitting on an unpadded seat or if sitting for a prolonged period. To accommodate the range of body sizes found in the working population, it is suggested that seat height be adjustable from 38 cm to 53 cm, measured from the floor to the front of the seatpan.121 This adjustment should be easy to perform and not require any special tools. Ease of adjustment is particularly important if the chair is used by more than one person (e.g., where the same workstation is used by both day-shift and night-shift workers). Where feasible, the workstation and job demands should be designed to avoid prolonged seated postures.122 For example, the job can include occasional tasks that must be performed away from the primary workstation. This allows the worker to periodically stand up and walk during the shift. When selecting or designing a work seat, it is important to match the characteristics of the chair to the requirements of the job. For example, workers who must periodically reach behind or to the side of the body typically prefer seatpans that swivel, while workers who perform precision assembly tasks prefer seatpans that are stable.123

MANAGEMENT ISSUES
As discussed above, stresses that result in musculoskeletal disorders of the upper limb and low back can frequently be controlled through redesign of equipment, tools, and/or work methods. While these interventions may reduce the incidence and severity of WRMSDs, it is unrealistic to prevent all disorders. They are multifactorial in nature and their causation is not yet understood well enough to achieve zero risk. In addition, there will continue to be some cases due to individual factors. It is necessary to provide accommodations for people who experience these impairments so that they do not become long-term disability cases. The details of such a program are beyond the scope of this chapter, but are described elsewhere.46–49 It suffices to say that the worker, supervisor, engineer, and health-care provider must work together as a team to determine what the worker can do, and to find or modify jobs to accommodate any limitations. Control of WRMSDs involves health professionals, supervisors, engineers, and workers. Thus, an ergonomics program is best managed by a team of persons from each of these areas. The team should meet regularly to review health data and new and old cases, set goals, recommend allocation of resources to control ergonomic stresses, and review the progress of ergonomic interventions.

SUMMARY
Many worker health and safety problems can be attributed to failure to anticipate the capacity and behavior of the entire work population. Fatigue, accidents, and back and upper limb disorders are all too common examples of these problems. Ergonomics is the application of epidemiology, anthropometry, biomechanics, physiology, psychology, and engineering to the evaluation and design of work for preventing injury and illness while maximizing productivity. Ergonomics is not yet an exact science; therefore, all interventions should include appropriate evaluations to ascertain their effectiveness.

REFERENCES
1. Human Factors and Ergonomics Society. What is HFES? Santa Monica, CA: Human Factors and Ergonomics Society; 2005. 2. Kantowitz BH, Sorkin RD. Human Factors: Understanding People— System Relationships. New York: John Wiley & Sons; 1983.
3. Wickens CD, Hollands JG. Engineering Psychology and Human Performance. 3rd ed. Upper Saddle River, NJ: Prentice-Hall; 2000. 4. VanCott HP, Kincade RG, eds. Human Engineering Guide to Equipment Design. Washington DC: U.S. Government Printing Office; 1972. 5. Pheasant S. Bodyspace—Anthropometry, Ergonomics, and Design. London: Taylor and Francis; 1986. 6. Kroemer KHE, Kroemer HB, Kroemer-Elbert KE. Ergonomics: How to Design for Ease and Efficiency. Englewood Cliffs, NJ: Prentice-Hall; 1994. 7. Rodahl K. The Physiology of Work. London: Taylor and Francis; 1989. 8. Chaffin DB, Andersson GBJ, Martin BJ. Occupational Biomechanics. 3rd ed. New York: John Wiley & Sons; 1999. 9. Keyserling WM. Occupational safety: prevention of accidents and overt trauma. In: Levy BS, Wegman DH, eds. Occupational Health—Recognizing and Preventing Work-Related Disease. 4th ed. Philadelphia: Lippincott Williams & Wilkins; 2000: 196–206. 10. Wagner D, Birt JA, Snyder MD, Duncanson JP, eds. Human Factors Design Guide for Acquisition of Commercial Off-the-Shelf Subsystems, Non-Developmental Items, and Developmental Systems. Document number PB96-191267INZ. Atlantic City International Airport, NJ: Federal Aviation Administration Technical Center; 1996. 11. Hagberg M. Local shoulder muscular strain—symptoms and disorders. J Human Ergol. 1982;11:99–108. 12. Roebuck JA, Kroemer KHE, Thomson WG. Engineering Anthropometry Methods. New York: John Wiley and Sons; 1975. 13. Drillis R, Contini R. Body Segment Parameters. Report No. 1166-03 (Office of Vocational Rehabilitation, Dept. of HEW). New York: NYU School of Engineering and Science; 1966. 14. Basmajian JV, De Luca CJ. Muscles Alive, Their Functions Revealed by Electromyography. 5th ed. Baltimore, MD: Williams & Wilkins; 1985. 15. Lieber RL, Friden J. Skeletal muscle metabolism, fatigue and injury. In: Gordon SL, Blair SJ, Fine LJ, eds. Repetitive Motion Disorders of the Upper Extremity.
Rosemont, IL: American Academy of Orthopaedic Surgeons; 1995: 287–300. 16. Friden J, Lieber RL. Biomechanical injury to skeletal muscle from repetitive loading: eccentric contractions and vibration. In: Gordon SL, Blair SJ , Fine LJ, eds. Repetitive Motion Disorders of the Upper Extremity. Rosemont, IL: American Academy of Orthopaedic Surgeons; 1995: 301–12. 17. Armstrong TJ, Buckle P, Fine LJ, et al. A conceptual model for work-related neck and upper-limb musculoskeletal disorders. Scand J Work Environ Health. 1993;19(2):73–84. 18. Johansson H, Sojka P. Pathophysiological mechanism involved in genesis and spread of muscular tension in occupational muscle pain and in chronic musculoskeletal pain syndromes—a hypothesis. Med Hypoth. 1991;35:196–203. 19. Rohmert W. Problems in determining rest allowances. Part 1: Use of modern methods to evaluate stress and strain in static muscular work. Appl Ergonomics. 1973;4(2):91–5. 20. National Research Council and Institute of Medicine. Musculoskeletal Disorders and the Workplace—Low Back and Upper Extremities. Washington, DC: National Academy Press; 2001. 21. Carlson BR. Level of maximum isometric strength and relative load—isometric endurance. Ergonomics. 1969;12(3):429–35. 22. Armstrong TJ. Circulatory and Local Muscle Responses to Static Manual Work. Ann Arbor, MI: The University of Michigan; 1976. 23. Lind AR, McNicol GW. Local and central circulatory responses to sustained contractions and the effect of free or restricted arterial inflow on post-exercise hyperaemia. J Physiol. 1967;192:575–93. 24. Corlett EN, Bishop RP. The ergonomics of spot welders. Appl Ergonomics. 1978;9:23–31. 25. Saldana N, Herrin GD, Armstrong TJ, Franzblau A. A computerized method for assessment of musculoskeletal discomfort in the workforce: a tool for surveillance. Ergonomics. 1994;37(6):1097– 112.
26. Harms-Ringdahl K. On assessment of shoulder exercise and load-elicited pain in the cervical spine: biomechanical analysis of load—EMG—methodological studies of pain provoked by extreme position. Scand J Rehab Med. 1986;Suppl 14:4–34. 27. Sjogaard G, Kiens B, Jorgensen K, Saltin B. Intramuscular pressure, EMG and blood flow during low-level prolonged static contraction in man. Acta Physiol Scand. 1986;128:475–84. 28. Bjorkesten M, Jonsson B. Endurance limit of force in long-term intermittent static contractions. Scand J Work Environ Health. 1977;3:23–7. 29. Bystrom S, Fransson-Hall C. Acceptability of intermittent handgrip contractions based on physiological response. Hum Factors. 1994;36(1):158–71. 30. Borg G. Perceived exertion as an indicator of somatic stress. Scand J Rehab Med. 1970;2(3):92–8. 31. Ulin S, Ways CM, Armstrong TJ, Snook SH. Perceived exertion and discomfort versus work height with a pistol-shaped screwdriver. Am Ind Hygiene Assn J. 1990;51(11):588–94. 32. Keyserling WM, Sudarsan SP, Martin BJ, Haig AJ, Armstrong TJ. Effects of low back disability status on lower back discomfort during sustained and cyclical trunk flexion. Ergonomics. 2005;48(3):219–33. 33. Chaffin DB. Localized muscle fatigue—definition and measurement. J Occup Med. 1973;15(4):346–54. 34. WHO Expert Committee. Identification and Control of Work-Related Diseases. Geneva: World Health Organization Technical Report Series; 1985: 3–11. 35. Hagberg M, Silverstein B, Wells R, et al. Work Related Musculoskeletal Disorders (WMSDs): A Reference Book for Prevention. London: Taylor & Francis Ltd.; 1995. 36. Ramazzini B. Diseases of Workers (De Morbis Artificum). Chicago, Illinois: The University of Chicago Press; 1713. 37. Gray H. Gray’s Anatomy. New York: Bounty Books; 1893. 38. de Quervain. Ueber eine form von chronischer tendovaginitis. Correspondenz-Blatt f Aertzte. 1895;25:389–94. 39. Zollinger F.
A few remarks on the question of tubercular tendovaginitis and bursitis after an accident. Archiv fur Orthopadische und Unfall-Chirurgie. 1927;24:456–67. 40. Conn HR. Tenosynovitis. Ohio State Med J. 1931;27:713–16. 41. Fine LJ, Silverstein BA, Armstrong TJ, Anderson C. The detection of cumulative trauma disorders of the upper extremities in the workplace. J Occup Med. 1986;28(8):674–78. 42. Franklin GM, Haug J, Heyer N, Checkoway H, Peck N. Occupational carpal tunnel syndrome in Washington State, 1984-1988. Am J Public Health. 1991;81(6):741–6. 43. BLS. Injuries, Illnesses, and Fatalities in 2001. Washington, DC: U.S. Department of Labor, Bureau of Labor Statistics; 2003. 44. OSHA. Forms for Recording Work-Related Illnesses and Injuries. Washington, DC: U.S. Department of Labor, Occupational Safety and Health Administration; 2004. 45. Armstrong TJ, Radwin RG, Hansen DJ, Kennedy KW. Repetitive trauma disorders: job evaluation and design. Hum Factors. 1986;28(3):325–36. 46. NIOSH. Elements of Ergonomic Programs—A Primer Based on Workplace Evaluations of Musculoskeletal Disorders. Publication No. 97-117. Cincinnati: DHHS National Institute for Occupational Safety and Health; 1997. 47. Jones RJ. Corporate ergonomics program of a large poultry processor. Am Industrial Hygiene Assn J. 1997;58:132–7. 48. Mansfield JA, Armstrong TJ. Library of Congress workplace ergonomics program. Am Ind Hygiene Assn J. 1997;58:138–44. 49. Cohen R. Ergonomics program development: prevention in the workplace. Am Ind Hygiene Assn J. 1997;58:145–50. 50. Armstrong TJ, Foulke JA, Joseph BS, Goldstein SA. Investigation of cumulative trauma disorders in a poultry processing plant. Am Ind Hygiene Assn J. 1982;43(2):103–16. 51. Armstrong TJ. Control of upper-limb cumulative trauma disorders. Appl Occup Environ Hygiene. 1996;11(4):275–81.
52. Katz JN, Larson MG, Fossel AH, Liang MH. Validation of a surveillance case definition of carpal tunnel syndrome. Am J Public Health. 1991;81(2):189–93. 53. Katz JN, Larson MG, Sabra A, et al. The carpal tunnel syndrome: diagnostic utility of the history and physical examination findings. Ann Intern Med. 1990;112:321–7. 54. Barnes RM. Motion and Time Study: Design and Measurement of Work. 7th ed. New York: John Wiley; 1978. 55. Niebel B, Frievalds A. Methods, Standards, and Work Design. 11th ed. Boston: WCB McGraw-Hill; 2002. 56. Armstrong TJ, Latko WA. Physical stressors: their characterization, assessment, and relationship with physical work requirements (chapter 6, pp. 87–98). In: Repetitive Motion Disorders of the Upper Extremity. Rosemont, IL: American Academy of Orthopedic Surgeons; 1995. 57. Latko WA, Armstrong TJ, Foulke JA, Herrin GD, Rabourn RA, Ulin SS. Development and evaluation of an observational method for assessing repetition in hand tasks. Am Ind Hygiene Assn J. 1997;58:278–85. 58. Latko W. Development and evaluation of an observational method for quantifying exposure to hand activity and other physical stressors in manual work. Ph.D. Dissertation, Dept. of Industrial and Operations Engineering. Ann Arbor, MI: The University of Michigan; 1997. 59. Armstrong TJ, Punnett L, Ketner P. Subjective worker assessments of hand tools used in automobile assembly. Am Ind Hygiene Assn J. 1989;50:639–45. 60. Jonsson B. The static load component in muscle work. Eur J Appl Physiol. 1988;57:305–10. 61. Jonsson B. Quantitative electromyographic evaluation of muscular load during work. Scand J Rehab Med. 1978;Suppl 6:69–74. 62. Marras WS, Schoenmarklin RW. Wrist motions in industry. Ergonomics. 1993;36(4):341–51. 63. American Conference of Governmental Industrial Hygienists (ACGIH). 2005 TLVs and BEIs. Cincinnati: ACGIH Worldwide; 2005. 64. Armstrong TJ, Foulke JA, Martin BJ, Gerson J, Rempel DM. Investigation of applied forces in alphanumeric keyboard work.
Am Ind Hyg Assoc J. 1994;55(1):30–5. 65. Pelmear PL, Taylor W, Wasserman DE. Hand-Arm Vibration: A Comprehensive Guide for Occupational Health Professionals. New York, NY: Van Nostrand Reinhold; 1992. 66. Snook SH, Campanelli RA, Hart JW. A study of three preventive approaches to low back injury. J Occup Med. 1978;20:478–81. 67. Karhu O, Harkonen R, Sorvali P, Vepsalainen P. Observing working postures in industry: examples of OWAS application. Appl Ergonomics. 1981;12:13–7. 68. Keyserling WM, Armstrong TJ, Punnett L. Ergonomic job analysis: a structured approach for identifying risk factors associated with overexertion injuries and disorders. Appl Occ Env Hyg. 1991;6:353–63. 69. Keyserling WM, Stetson DS, Silverstein BA, Brouwer ML. A checklist for evaluating ergonomic risk factors associated with upper extremity cumulative trauma disorders. Ergonomics. 1993;36:807–31. 70. McAtamney L, Corlett E. RULA: a survey method for the investigation of work-related upper limb disorders. Appl Ergonomics. 1993;24:91–9. 71. Moore JS, Garg A. The Strain Index: a proposed method to analyze jobs for risk of distal upper extremity disorders. Am Ind Hyg Assoc J. 1995;56:443–58. 72. Colombini D. An observational method for classifying exposure to repetitive movements of the upper limbs. Ergonomics. 1998;41:1261–89. 73. Snook SH. Back risk factors: an overview. In: Violante F, Armstrong T, Kilbom A. Occupational Ergonomics: Work Related Musculoskeletal Disorders of the Upper Limb and Back. London: Taylor and Francis; 2000.
74. Burdorf A. Assessment of Postural Load on the Back in Occupational Epidemiology. Alblasserdam, The Netherlands: Thesis Rotterdam; 1992. 75. Kelsey JL, Hochberg MC. Epidemiology of musculoskeletal disorders. Ann Rev Pub Health. 1988;9:379–401. 76. Snook SH, Webster BS, McGorry RW, Fogleman MT, McCann KB. The reduction of chronic, non-specific low back pain through the control of early morning lumbar flexion: a randomized controlled trial. Spine. 1998;23:2601–7. 77. Waddell G. A new clinical model for the treatment of low back pain. Spine. 1987;12:632–44. 78. Berquist-Ullman M, Larsson U. Acute low back pain in industry. Acta Orthop Scand (Stockholm). 1977;Suppl 170. 79. Battie MC, Bigos SJ, Fisher LD, Spengler DM, Hansson TH, Nachemson AL. The role of spinal flexibility in back pain complaints within industry—a prospective study. Spine. 1990;15:768–73. 80. Frymoyer JW, Cats-Baril WL. An overview of the incidences and costs of low back pain. Orthop Clin North Am. 1991;22:263–71. 81. Papageorgiou AC, Croft PR, Ferry S, Jayson MIV, Silman AJ. Estimating the prevalence of low back pain in the general population. Spine. 1995;20:1889–94. 82. Spengler DM, Bigos SJ, Martin NA, Zeh J, Fisher L, Nachemson A. Back injuries in industry: a retrospective study. I. Overview and cost analysis. Spine. 1986;11:241–56. 83. Battie MC. Minimizing the impact of back pain: workplace strategies. Semin Spine Surg. 1992;4:20–8. 84. Snook SH. Approaches to the control of back pain in industry: job design, job placement and education/training. Spine: State of the Art Rev. 1987;2:45–59. 85. NIOSH. Work Practices Guide for Manual Lifting. Cincinnati: National Institute for Occupational Safety and Health (Pub No. 81-122); 1981. 86. Bigos SJ, Spengler DM, Martin NA, et al. Back injuries in industry— A retrospective study: Part II—injury factors. Spine. 1986;11: 246–51. 87. National Council on Compensation Insurance. Workers’ Compensation Back Pain Claim Study. 
New York: National Council on Compensation Insurance; 1992. 88. Waters TR, Putz-Anderson V, Garg A, Fine LJ. Revised NIOSH equation for the design and evaluation of manual lifting tasks. Ergonomics. 1993;36:749–76. 89. NIOSH. Musculoskeletal Disorders and Workplace Factors—A Critical Review of Epidemiologic Evidence for Work-Related Disorders of the Neck, Upper Extremity, and Low Back. Bernard BP, ed. Cincinnati: National Institute for Occupational Safety and Health (Pub No. 97-141); 1997. 90. Alcouffe J, Manillier P, Brehier M, Fabin C, Faupin F. Analysis by sex of low back pain among workers from small companies in the Paris area: severity and occupational consequences. Occ Env Med. 1999;56:696–701. 91. Magnusson ML, Pope MH, Wilder DG, Areskoug B. Are occupational drivers at an increased risk for developing musculoskeletal disorders? Spine. 1996;21:710–7. 92. Waters TR, Baron SL, Piacitelli LA, et al. Evaluation of the revised NIOSH lifting equation—a cross-sectional epidemiologic study. Spine. 1999;24:386–94. 93. Suadicani P, Hansen K, Fenger AM, Gyntelberg F. Low back pain in steel plant workers. Occ Med. 1994;44:217–21. 94. Magora A. Investigation of the relation between low back pain and occupation. Indus Med Surg. 1972;41:5–9. 95. Punnett L, Fine LJ, Keyserling WM, Herrin GD, Chaffin DB. Back disorders and non-neutral trunk postures of automobile assembly workers. Scand J Work Environ Health. 1991;17:337–46.
96. Burdorf A, Govaert G, Elders L. Postural load and back pain of workers in the manufacturing of pre-fabricated concrete elements. Ergonomics. 1991;34:909–18. 97. Frymoyer JW. Back pain and sciatica. New Eng J Med. 1988;318:291–300. 98. Boshuizen HC, Bongers PM, Hulshof CTJ. Self-reported back pain in tractor drivers exposed to whole-body vibration. Int Arch Occup Environ Health. 1990;62:109–15. 99. Kelsey JL, Hardy RJ. Driving motor vehicles as a risk factor for acute herniated lumbar intervertebral disc. Am J Epidemiology. 1988;102:63–73. 100. Bigos S, Spengler DM, Martin NA, Zeh J, Fisher L, Nachemson A. Back injuries in industry: a retrospective study III. Employee-related factors. Spine. 1986;11:252–6. 101. Burdorf A, van Riel M, Brand T. Physical load as risk factor for musculoskeletal complaints among tank terminal workers. Am Ind Hyg Assoc J. 1997;58:489–97. 102. Keyserling WM, Monroe KA, Woolley C, Ulin SS. Ergonomic considerations in trucking delivery operations: an evaluation of hand trucks and ramps. Am Ind Hyg Assoc J. 1999;60:22–31. 103. Klein BP, Jensen RC, Sanderson LM. Assessment of workers’ compensation claims for back sprains/strains. JOM. 1984;26:443–8. 104. Estryn-Behar M, Kaminski M, Peigne E, Maillard MF, Pelletier A, Berthier C. Strenuous working conditions and musculoskeletal disorders among female hospital workers. Int Arch Occ Env Health. 1990;62:47–57. 105. Waters TR, Putz-Anderson V, Garg A. Applications Manual for the Revised NIOSH Lifting Equation. Publication No. 94-110. Cincinnati: National Institute for Occupational Safety and Health; 1994. 106. Keyserling WM, Herrin GD, Chaffin DB. Isometric strength testing as a means of controlling medical incidents on strenuous jobs. JOM. 1980;22:332–6. 107. Ayoub MM, Mital A. Manual Materials Handling. London: Taylor and Francis; 1989. 108. Giorcelli RJ, Hughes RE, Wassell JT, Hsiao H. The effect of wearing a back belt on spine kinematics during asymmetric lifting of large and small boxes.
Spine. 2001;26:1794–8. 109. National Institute for Occupational Safety and Health. Workplace use of back belts: review and recommendations. NIOSH Publication No. 94-122. Cincinnati: National Institute for Occupational Safety and Health; 1994. 110. Wassell JT, Gardner LI, Landsittel DP, Johnston, JJ, Johnston JM. A prospective study of back belts for prevention of back injury. JAMA. 2000;284:2727–32. 111. Riddell CR, Congleton JJ, Huchingson RD, Montgomery JF. An evaluation of a weight lifting belt and back injury training class for airline baggage handlers. Appl Ergonomics. 1992;23:319–29. 112. Andersson GBJ, Ortengren R, Herberts F. Quantitative electromyographic studies of back muscle activity related to posture and loading. Orthopedic Clinics N Am. 1977;8:85–96. 113. Snook SH, Ciriello VM. The design of manual handling tasks: revised tables of maximum acceptable weights and forces. Ergonomics. 1991;34:1197–213. 114. Bendix T. Low back pain and seating. In: Luder R, Noro K, eds, Hard Facts About Soft Machines. London: Taylor and Francis; 1994. 115. Andersson GBJ. Epidemiologic aspects of low back pain in industry. Spine. 1981;6:53–60. 116. Keegan JJ. Alterations of the lumbar curve related to posture and sitting. J Bone Joint Surg. 1953;35A:589–603. 117. Andersson GBJ, Ortengren R, Nachemson A, Elfstrom S. The influence of backrest inclination and lumbar support on lumbar lordosis. Spine. 1979;4:52–8. 118. Adams MA, Hutton WC, Scott JRR. The resistance to flexion of the lumbar intervertebral joint. Spine. 1980;5:245–53. 119. Holm S, Nachemson A. Variation in nutrition of the canine intervertebral disc induced by motion. Spine. 1983;8: 866–74.
120. Yu C, Keyserling WM. Evaluation of a new work seat for industrial sewing operations: results of three field studies. Appl Ergonomics. 1989;20:17–25. 121. Andersson GBJ, Ortengren R, Nachemson A, Elfstrom G. Lumbar disc pressure and myoelectric back muscle activity during sitting, studies on an experimental chair. Scand J Rehab Med. 1974;3:104–14.
Ergonomics and Work-Related Musculoskeletal Disorders
779
122. Grandjean E. Fitting the Task to the Man—A Textbook of Occupational Ergonomics. 4th ed. London: Taylor & Francis; 1988. 123. Yu C, Keyserling WM, Chaffin DB. Development of a workseat for industrial sewing operations: results of a laboratory study. Ergonomics. 1988;31:1765–86.
This page intentionally left blank
39
Industrial Hygiene Robert F. Herrick
BACKGROUND
Within the scope of public health practice, industrial hygiene is the health profession devoted to the recognition, evaluation, and control of hazards in the working environment. These include chemical hazards, physical hazards, biological hazards, and ergonomic factors that cause or contribute to injury, disease, impaired function, or discomfort. Throughout the world, the profession that addresses these hazards is known as occupational hygiene; however, the United States has not yet adopted this newer, more accurate term. In this chapter, the term industrial hygiene is used as the equivalent of occupational hygiene. Industrial hygiene principles have evolved over many years with accelerated development since the Industrial Revolution. Industrial hygiene is a young profession, which traces its name to Hygeia, the goddess of health and prevention, daughter of Aesculapius, god of medicine in Greek mythology. The modern history of industrial hygiene starts with the organization of manufacturing processes into industrial sectors. This history was chronicled by Theodore Hatch, who summarized the “Major Accomplishments in Occupational Health in the Past Fifty Years” on the 50th anniversary of the Division of Occupational Health of the U.S. Public Health Service in 1964. Hatch noted that, prior to World War I (about 1914), the United States was a rural, agricultural society, where the industrial processes were few and conducted by manual labor. The only plastic available was celluloid, petroleum refining dumped most of the product to waste, and Henry Ford had just introduced the radical concept of a $5 daily wage. This was the industrial world that Alice Hamilton discovered when she began to trace the health problems she found among immigrant families back to the husbands’ workplaces. In the 50 years that Hatch reviewed, industrial hygiene had emerged as one of the core disciplines in public health. 
In 1964, he attributed the progress that had been made in improving workplace conditions to the application of the principle of “epidemiologic assessment of occupational health hazards.” Progress in the identification of these hazards resulted from “. . . the joining of skills from the health sciences and medicine on the one hand, and from the physical sciences and engineering on the other, with the two groups cemented together by biostatistics and epidemiology.”1 This approach taken by the pioneers in industrial hygiene resulted in remarkable progress. Not only did they identify important questions, they had the vision to develop interdisciplinary approaches to solve them. This vision places industrial hygiene in the larger field of public health. The industrial hygienist’s work in the recognition, evaluation, and control of hazardous exposures in the work environment is a practice of primary prevention, and the identity of industrial hygienists as public health practitioners is clear. Prevention is the key to a safe and healthful workplace, and industrial hygiene is a practice of primary prevention. The steps that are involved in the prevention of occupational and environmental diseases are hazard recognition, hazard evaluation, and hazard control/intervention.
Hazard Recognition
The human health hazard resulting from an occupational exposure is determined by both the toxicity of an agent or factor and the extent or magnitude of human exposure. Successful industrial hygiene practice has been defined to include a step that is in some ways preliminary to hazard recognition: the anticipation of hazardous exposures and conditions before they actually occur. Toxicological testing in animals produces information that is an important component of hazard anticipation and early recognition. In combination with human health data that may be generated through environmental/occupational medicine and surveillance programs, or through epidemiological studies, this information provides the basis for a strategy of hazard anticipation and recognition. For established workplace conditions, surveillance of both exposure and disease provides clues and hypotheses for further evaluation.
Hazard Evaluation
Hazard evaluation is a type of risk assessment, developed from the information gained in the hazard recognition and identification process and the characteristics of the (exposed) population at risk. The series of steps in reaching a conclusion about the degree of hazard associated with a particular exposure or work condition is known as hazard evaluation. Hazard evaluations are essential to determine the need for control measures to minimize exposures and to identify clues to the etiology of an adverse health condition observed in a worker or group of workers.
Hazard Control/Intervention
Primary prevention involves identification and evaluation of environmental hazards that are factors or cofactors in disease production, followed by application of methods to reduce or eliminate human exposures. This is the classical public health approach. Principles and methods for controlling occupational hazards include a range of techniques from substitution or elimination of hazardous materials and processes, to engineering and administrative exposure controls, to exposure reduction using personal protective equipment at the level of individual workers.
Recognition of Occupational and Environmental Hazards

Principles of Hazard Recognition
Hazard recognition involves a systematic review of a worker’s occupational environment to identify exposures and potential exposures. This review should include information on the materials used and produced, the characteristics of the workplace including the equipment used, and the nature of each worker’s interaction with the sources of workplace hazards. Specific information is obtained on the
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Environmental Health
raw materials used in a process, materials produced or stored, and the by-products formed during the production process. Sources of this information are described in the following section. Hazard recognition also includes gathering information on the types of equipment used in the workplace, the cycle of operation and/or frequency of exposure, and the operational methods and work practices used. Information such as this is available from industrial hygiene reference sources.2 A workplace review for the purpose of hazard recognition also includes identification of health and safety controls in place, including use of personal protective equipment. The Occupational Safety and Health Administration (OSHA) Hazard Communication Standard provides a valuable information resource for hazard identification and evaluation. This standard requires employers to: (a) develop a written hazard communication program, (b) maintain a list of all hazardous chemicals in the workplace, (c) make available to workers Material Safety Data Sheets (MSDSs) for each hazardous chemical, (d) place labels on containers as to the chemical identity and precautions in handling, and (e) provide workers with education and training in the handling of hazardous chemicals.
Classification of Hazards
For purposes of hazard control and disease prevention, contaminants are classified largely on the basis of their physical and chemical characteristics, as these characteristics determine the route of exposure. Workers may be exposed to contaminants by inhalation, by absorption through the skin, by ingestion, or by injection, as in the case of accidental puncture wounds. Inhalation and skin absorption are the primary routes of exposure for most materials in the occupational environment. In cases where poor hygiene practices such as consumption of food and beverages in contaminated work areas are allowed, ingestion may be an important source of exposure. Environmental agents can be classified as either physical hazards or health hazards.
Physical Hazards. In its hazard communication standard, OSHA classifies materials such as explosives, flammable or combustible liquids, oxidizers, compressed gases, organic peroxides, pyrophoric materials, unstable (reactive) chemicals, or water-reactive chemicals as physical hazards. Workplace exposures such as excessive noise, ionizing and nonionizing radiation, and temperature extremes are further examples of physical hazards. There is a rapidly growing recognition that ergonomic factors are important causes of injury in the workplace. Repetitive motions, conducted in awkward positions, result in a variety of chronic trauma disorders, including carpal tunnel syndrome.
Health Hazards. Chemical and biological materials capable of producing adverse acute or chronic health effects are defined as health hazards. Exposures to chemical mists, vapors, gases, or airborne particles (dusts and fumes) occur through inhalation, ingestion, or by absorption through the skin.
OSHA classifies hazardous chemicals as carcinogens, toxic or highly toxic agents, reproductive toxins, irritants, corrosives, sensitizers, hepatotoxins, nephrotoxins, agents that act on the hematopoietic system, and agents that damage the lungs, skin, eyes, or mucous membranes. Biological hazards include exposures to infectious or immunologically active agents such as molds, fungi, and bacteria.
Types of Airborne Contaminants
Aerosols. Liquid droplets or solid particles in a size range that allows them to remain dispersed in air for a prolonged period of time are known as aerosols. Aerosols are also known as airborne particulate matter. The hazard associated with airborne particulate matter is determined by three factors: (a) the biological activity of the material, (b) concentration of the airborne material, and (c) airborne particle size. Particle size is an important determinant of hazard, because it strongly influences the site of deposition within the respiratory system. Many occupational diseases, including silicosis and asbestosis,
are associated with material deposited in specific regions of the respiratory tract. Criteria3 have been developed to define critical size fractions most closely associated with various health effects and are defined as follows:
• Inhalable fraction: This is the fraction of airborne particulate matter that can present a hazard when deposited any place within the respiratory tract. Most particles of diameter less than 100 µm are considered inhalable.
• Thoracic fraction: Those particles that are hazardous when deposited anywhere within the lung airways and the gas-exchange region. Particles in this size range are generally less than 25 µm in diameter.
• Respirable fraction: Those particles that are a hazard when deposited in the gas-exchange region of the lungs. These particles are less than 10 µm in diameter.
Gases and Vapors. In general, materials are considered gases if they are predominantly in the gaseous state at temperatures and pressures normally found in ambient or occupational environments. Vapors are the gaseous form of substances normally present in the solid or liquid state at room temperature and pressure. Liquids undergo phase transformation to the vapor state by the process of evaporation and mix with the surrounding atmosphere. In the workplace environment, organic solvents volatilize to form vapors at normal temperatures and pressures. In many industrial applications, solvents are heated, which results in increased vaporization and elevated airborne solvent concentrations.
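As an illustration (not from the source text), the three size-fraction cutoffs listed above can be encoded as a simple classifier. Note that the actual ACGIH/ISO criteria are smooth sampling-efficiency curves rather than sharp diameter cutoffs, so this sketch uses only the simplified values quoted in the text:

```python
def size_fractions(diameter_um):
    """Classify a particle by diameter (micrometers) into the simplified
    size fractions quoted above: inhalable (<100 um), thoracic (<25 um),
    and respirable (<10 um). A smaller particle belongs to every coarser
    fraction as well."""
    fractions = []
    if diameter_um < 100:
        fractions.append("inhalable")
    if diameter_um < 25:
        fractions.append("thoracic")
    if diameter_um < 10:
        fractions.append("respirable")
    return fractions

# A 5-um particle can reach the gas-exchange region of the lungs;
# a 50-um particle is inhalable but deposits before reaching the thorax.
print(size_fractions(5.0))
print(size_fractions(50.0))
```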
Measures of Airborne Concentration
A number of terms and units are used to describe airborne concentrations and exposures to contaminants. The form of the contaminant and the sampling and analytical method used to measure the airborne concentration dictate the choice of terms that are used. The following terms are used to describe airborne concentrations and exposure:
• ppm (ppb): parts of vapor or gas per million (or billion) parts of contaminated air by volume at room temperature and pressure.
• mppcf: millions of particles of a particulate per cubic foot of air.
• mg/m3 (µg/m3): milligrams (or micrograms) of a substance per cubic meter of air.
• Vapor %: parts of vapor or gas per 100 parts of contaminated air by volume at room temperature and pressure.
• Fibers/cc: a measure of the number of fibers longer than 5 µm per cubic centimeter of air. This measure is used for asbestos and other fibers.
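For gases and vapors, ppm and mg/m3 are related through the molar volume of an ideal gas (24.45 L/mol at 25°C and 1 atm). The following sketch of the standard conversion is illustrative; the benzene example is an assumption added here, not taken from the text:

```python
MOLAR_VOLUME_25C = 24.45  # liters per mole of ideal gas at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm, molecular_weight):
    """Convert a gas or vapor concentration from ppm (by volume) to mg/m3."""
    return ppm * molecular_weight / MOLAR_VOLUME_25C

def mg_per_m3_to_ppm(mg_per_m3, molecular_weight):
    """Inverse conversion: mg/m3 back to ppm by volume."""
    return mg_per_m3 * MOLAR_VOLUME_25C / molecular_weight

# Benzene (molecular weight 78.11 g/mol): 1 ppm is about 3.19 mg/m3 at 25 C.
print(round(ppm_to_mg_per_m3(1.0, 78.11), 2))
```

This conversion applies only to gases and vapors; aerosol concentrations are measured gravimetrically and reported directly in mg/m3.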
Sources of Hazard Information
Toxicological Reviews. There are many sources of information on hazardous properties of materials found in the workplace environment. These reviews and evaluations are prepared by private organizations as well as government agencies in the United States and internationally. The U.S. National Institute for Occupational Safety and Health (NIOSH) prepares several sets of criteria and recommendations for limiting exposure to occupational hazards. These are not themselves legally enforceable, but NIOSH recommendations are transmitted to OSHA, where they can be used in promulgating legal standards. The Agency for Toxic Substances and Disease Registry (ATSDR) of the U.S. Department of Health and Human Services develops toxicological profiles for compounds commonly found at hazardous waste sites. The National Institute of Environmental Health Sciences (NIEHS) of the U.S. Department of Health and Human Services prepares an Annual Report on Carcinogens, which reviews and evaluates information on evidence of carcinogenicity. The report provides a listing of chemicals classified on the basis of the strength of the evidence of carcinogenic risk. The American
Conference of Governmental Industrial Hygienists (ACGIH) prepares a listing of Threshold Limit Values (TLVs) and Biological Exposure Indices (BEIs), which are updated annually. Several international organizations review scientific information for purposes of evaluating risks resulting from human exposure to chemicals. The International Agency for Research on Cancer (IARC) prepares critical reviews of information on evidence of carcinogenicity for chemicals. The International Programme on Chemical Safety (IPCS) is a joint venture of the United Nations Environment Program, the International Labor Organization, and the World Health Organization. This program develops Environmental Health Criteria Documents, which are summaries and evaluations of the information on toxic effects of specific chemicals and groups of chemicals. A number of information sources are now available on CD-ROM and the World Wide Web. For example, the OSHA Standards, Letters of Interpretation, Environmental Protection Agency (EPA) Standards, and Hazardous Substances Databanks from the National Library of Medicine, Medline, TOXLINE, etc. can all be accessed directly from a personal computer. The use of Internet-based resources is one of the industrial hygienist’s greatest information and communication assets. The following list is a sample of some of the readily available information on the World Wide Web. It is not an exhaustive list, and due to the dynamic nature of internet-based resources, general search tools (such as Google) should be used to locate the most current information sources on any subject. The OSHA “Safety/Health Topics” webpage contains a wide variety of information on hundreds of topics to provide users relevant reference materials including standards, directives, training materials, etc. The site also covers timely topics such as molds, ergonomics, and anthrax. 
The OSHA eTools webpage offers a wide range of downloadable “Expert Advisors” and “eTools,” with expert advisors available on topics such as asbestos, confined spaces, fire safety, hazard awareness, lead in construction, respiratory protection, and lockout/tagout. The National Library of Medicine (http://www.nlm.nih.gov/) includes links to Medline (a large database of peer-reviewed publications in medicine and toxicology); the Hazardous Substances Data Bank (HSDB), which includes approximately 4500 detailed reviews; the Integrated Risk Information System (IRIS), which includes cancer slope factors and a variety of other information; the Toxicology Database of Peer Reviewed Toxicology Publications (TOXLINE); Developmental and Reproductive Toxicology Data (DART); the Chemical Carcinogenesis Research Information System (CCRIS); and the Toxic Release Inventory (TRI) of chemicals released by companies into the environment, searchable by name, location, etc. Other information sources include INCHEM, from the World Health Organization (WHO) in conjunction with the International Labour Organization (ILO) and the United Nations (UN); this database includes chemical safety data with references (www.inchem.org). INTOX provides toxicological information (www.intox.org). Additional information is found at Work Safe (Australia) (www.worksafe.gov.au); the Canadian Centre for Occupational Health and Safety (CCOHS) (www.ccohs.ca); and the North Carolina Occupational Safety and Health Education and Research Center (OEM Web Resource, http://occhealthnews.net/index2.htm).
Resource Hotlines. A number of emergency response services are in operation, some of which are primarily intended to provide information on environmental aspects of chemical hazards. These services are good sources of information on the toxicity and risk of exposure to a wide range of chemicals, regardless of whether exposure takes place in an environmental or an occupational setting. NIOSH operates a toll-free technical service to provide information on workplace hazards. The service is staffed by technical information specialists who can provide information on NIOSH activities, recommendations, and services, or any aspect of occupational safety and health. The number is not a hotline for medical emergencies, but is a source of information and referrals on occupational hazards. The NIOSH toll-free number is 800-35-NIOSH (800-356-4674). CHEMTREC is a 24-hour hotline to the Chemical Transportation Emergency Center operated by the Chemical Manufacturers Association (800-262-8200). CHEMTREC assists in the identification of unknown chemicals and provides advice on proper emergency response methods and procedures. It does not provide emergency treatment information other than basic first aid, however. CHEMTREC also facilitates contact with chemical manufacturers when further information is required. The National Pesticides Telecommunications Network Hotline is jointly operated through Oregon State University and the U.S. Environmental Protection Agency (800-858-7378). The hotline provides information on pesticide-related health effects for approximately 600 active ingredients contained in over 50,000 products manufactured in the United States since 1947. It is also a source of information on pesticide product formulations, basic safety practices, health and environmental effects, and cleanup and disposal procedures. Several other hotlines and information lines are available for response to information requests on toxic materials and environmental issues. The Toxic Substances Control Act (TSCA) Assistance Information Service (TAIS) provides information and publications about toxic substances, including lead and asbestos (202-554-1404). The EPA also operates a National Response Center, which is a source of information on oil discharges and releases of hazardous substances (800-424-8802). In addition, each of the 10 U.S. EPA Regional Offices has a hotline telephone number.
EVALUATION OF HAZARDS
The series of steps followed to assess the hazard associated with a particular exposure or work condition is known as hazard evaluation. Hazard evaluations are essential to determine the need for control measures to minimize exposures. They are also conducted in search of clues to the etiology of an adverse health condition observed in a worker or group of workers. Hazard evaluation is founded upon the information gained in the hazard recognition and identification process just described and requires knowledge and information on:
1. Workplace activities and processes, and potential exposures to contaminants
2. Properties of contaminants and potential routes of human exposure
3. The actual magnitude and frequency of worker exposures to a contaminant. In the absence of quantitative exposure information, estimates of the potential for human exposure are often useful for hazard evaluation
4. Potential adverse health effects resulting from an exposure and the approximate level of exposure at which adverse effects occur
While the techniques for evaluation are tailored to each type of hazard, the principles of evaluation can be generalized. Exposure is evaluated in its role as an underlying cause of disease, so in these investigations, exposure may be regarded as the measure of contact with the potential causal agent(s).

Measurements of Environmental Contaminants
Over the range of types of exposures (gases and vapors, aerosols, and biological and physical agents), there are two general classes of measurement techniques. One class is termed the extractive methods, in which the contaminants of interest are removed from the environment for laboratory analysis. With these methods, a sampling device is used to collect the contaminants, usually from air in the vicinity of the worker’s breathing zone. This sort of exposure measurement is termed a personal sample, as it attempts to characterize the composition of the environment at the point the worker contacts it by inhalation. Because of the importance of inhalation exposures, most measurement methods assess airborne contaminants. However, methods to measure contamination of surfaces, as well as the exposure of the skin, are available. These methods are described later in this section. A large number of sampling and analytical methods are available for measurement of personal exposures. Both NIOSH and OSHA
develop and publish methods, and these are considered to be standard for workplace exposure measurements.4,5 Direct measurements of contaminants in the atmosphere comprise the second general class of techniques. These approaches are described as monitoring methods, and they have been developed from instrumental methods first used in the laboratory. Examples of these monitoring methods are devices that perform automated chemical analysis or make measurements based upon chromatographic or spectrophotometric approaches. These monitoring methods can measure continuously and report results immediately, which allows the examination of the pattern of exposure as it changes over time. This can be a substantial improvement over the information provided by extractive sampling methods, which accumulate material over the time of sampling and give a result that is time integrated over that period.
Dermal and Surface Contamination
As methods for hazard evaluation have progressed, it has become apparent that inhalation is only one of the significant routes of exposure. Contamination on the skin, as well as on surfaces, may be a source of dermal exposure in the workplace. One approach to measuring this exposure is the placement of cloth patches on the outside of workers’ clothing, or providing workers with thin cotton inspectors’ gloves, which are worn while they perform tasks where dermal exposure is of interest. A similar method has been used in which the patches are placed under the workers’ clothing to measure the quantity of a contaminant that penetrates the protective clothing. Analysis of the patches or glove material then provides an estimate of the exposure. This approach has also been used to evaluate the performance of protective gloves by wearing the cotton glove under the protective glove. Another technique used to estimate dermal exposure of the hands is to rinse the hands, or both the inside surface of the protective gloves and the hands, after a worker has performed the task of interest. The rinse solution is collected and analyzed for the contaminant. A dermal exposure sampling strategy that removes the outermost layer of skin (the stratum corneum) has been used in studies of workers exposed to metals, acrylates, and jet fuel. This approach is intended to measure the contaminant that is absorbed through the skin, rather than the amount that is deposited on the skin surface.6 The measurement of surface contamination in a working environment can provide another, less direct indication of dermal exposure. Techniques for wipe sampling have been developed to measure surface contamination. These methods have been widely used in industries where exposure may result from resuspension of settled aerosols. For example, surface sampling for lead has been extensively done in industries such as foundries and lead smelters. In cases where the exposure of interest is nonvolatile, such as most metals and organic chemicals such as polychlorinated biphenyls (PCBs), surface contamination is a useful measure of the likelihood of exposure from skin absorption or from ingestion resulting from eating or smoking with contaminated hands. There is a standardized surface wiping method specified by OSHA that describes techniques for collecting samples from contaminated surfaces.5

Measurements in Biological Media
Comprehensive hazard evaluation includes the assessment of exposure by several routes. In addition to pulmonary absorption, materials may be cleared from the respiratory tract and swallowed, resulting in uptake from the gastrointestinal tract. Many industrial materials can also be absorbed directly through the skin. Measurement of contaminants in biological media reflects the contributions from these multiple routes of exposure, as well as the variability in absorption, distribution, and metabolism among exposed individuals. Progress in biological monitoring has been driven by the uncertainties in the relationship between measurements of contamination in the workplace environment, such as those made with conventional industrial hygiene air sampling methods, and the actual quantity of a toxic material that may be present in the body. The measurement made in biological media may be of a particular chemical itself or its metabolites. Another type of measurement used to evaluate workplace exposure is reversible biological change, which is characteristically induced by chemical exposure. These measurements can be made in blood, urine, exhaled breath, or other media. Biological monitoring methods are usually used to complement measurements of inhalation exposures, as they provide information on the total exposure from all sources (nonoccupational and workplace) and by all routes (i.e., skin and gastrointestinal absorption). The medium that is selected for sampling can be chosen to suit a particular purpose, as materials such as organic solvents may be eliminated by several pathways. There are reference values for measurements made in biological media: the ACGIH has prepared these values, known as BEIs, with documentation for their measurement and interpretation of results, for approximately 45 chemicals.3

Interpreting Exposure Measurement Information
Exposure measurements are usually compared with a legal or recommended exposure limit. There are several sources of exposure limits, as discussed earlier. While the exposure limit values vary between sources, virtually all these limits are specific to the airborne concentration of a single chemical, and the vast majority set a level that is not to be exceeded as a time-integrated average over an 8- to 10-hour work shift. While these data lend themselves to determining compliance with a limit, measurements of levels below a limit should not be considered conclusive for purposes of hazard evaluation. None of the exposure limit values, including the OSHA Permissible Exposure Limits (PELs), which are legally enforceable, are intended to be used as fine lines to distinguish between safe and dangerous working conditions. When interpreting the results of exposure measurements, an environment should not be considered free from risk simply because exposure levels are below the limit value. In the case of individual workers in the environment, reported symptoms should not be considered nonwork related only because measured exposure levels are below a limit. The extent of individual variability in response to workplace exposure is not well known, and a conservative approach to the interpretation of exposure is appropriate. The ACGIH has thoroughly described the factors that must be considered in interpreting exposure information, including simultaneous exposure to mixtures of toxic agents, variability in the composition and levels of exposure over time, exposure by multiple routes, and unusual working conditions.3

Mixed Exposures
Most exposures in the working environment are comprised of mixtures of potentially hazardous materials. In general, very little is known about the combined effects of exposure to multiple agents. The combined effects of some materials that act upon the same organ system are recognized in the ACGIH TLVs for Mixtures. A common example is an atmosphere containing a mixture of solvents. While each solvent may have neurotoxic effects, it may be that no single chemical exceeds its recommended exposure limit. The hazard should be evaluated in consideration of the additive effect of the exposure. The TLVs provide guidelines for assessing the effect of exposure when the components of a mixture have similar toxicological properties.
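The additive rule for mixtures of agents with similar toxicological properties can be sketched as follows. The solvent concentrations and limit values below are hypothetical, chosen only to show how a mixture can exceed the combined limit even though every component is below its own TLV:

```python
def mixture_exposure_index(components):
    """Additive mixture rule for agents with similar toxic effects:
    sum each measured concentration divided by its own exposure limit.
    An index greater than 1 indicates the mixture limit is exceeded."""
    return sum(conc / limit for conc, limit in components)

# Hypothetical three-solvent atmosphere: (measured ppm, TLV ppm).
# Every component is below its individual TLV...
mixture = [(40.0, 100.0), (30.0, 50.0), (10.0, 25.0)]
index = mixture_exposure_index(mixture)
# ...yet the additive index is above 1, so the mixture limit is exceeded.
print(round(index, 2))
```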
Exposure Variability
The variability in exposure can be broken down into components. The characteristics of a contaminant in the environment are described by its composition and intensity. The composition (that is, the chemical makeup and the distribution of particle sizes) changes through time. The intensity of exposure, expressed as its concentration (such as parts of benzene vapor per million parts of air, or number of asbestos fibers per cubic centimeter of air), may also change through time, resulting in a highly variable exposure over a workday. Exposure variability is also introduced by the characteristics of the individuals in the exposure environment. Even for jobs at fixed workstations, where workers perform similar tasks, there can be substantial exposure differences between individuals because of personal work practices. When interpreting exposure information for hazard evaluation, these sources of exposure variability must be considered. For example,
consider two workplaces where benzene exposure is of concern. In one workplace, there is a steady concentration of 1 ppm, so the exposure of a worker spending a full shift in this area would be measured as 1 ppm as an 8-hour time-weighted average exposure. A worker in the second workplace could be in an environment in which the level of exposure to benzene varies widely from periods of no detectable exposure to very high but short-term peaks of exposure. For example, if this second worker experienced a single, high, peak exposure level of 48 ppm of the solvent, for only 10 minutes a day, then spent the remainder of the shift in an unexposed area, this worker’s 8-hour time-weighted average exposure would also be 1 ppm. Classifying these two workers as equally exposed could result in an erroneous conclusion in hazard evaluation. When exposure varies widely over time, the time course of exposure must be considered in order to develop an appropriate hazard-control strategy. Industrial hygiene sampling methods can be used to measure the high, short-term exposure and identify the work activities that cause it, as well as to measure exposure integrated over the time of sampling.
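The time-weighted averages for these two workers can be reproduced with a short calculation (a sketch: 480 minutes is the 8-hour shift, and any time not listed in the exposure segments is treated as zero exposure):

```python
SHIFT_MINUTES = 480.0  # an 8-hour shift

def twa_8hr(segments):
    """8-hour time-weighted average from (concentration_ppm, minutes)
    exposure segments; unlisted time counts as zero exposure."""
    return sum(conc * minutes for conc, minutes in segments) / SHIFT_MINUTES

steady = twa_8hr([(1.0, 480)])   # constant 1 ppm for the full shift
peaked = twa_8hr([(48.0, 10)])   # one 10-minute peak at 48 ppm, rest unexposed
print(steady, peaked)            # both average to 1.0 ppm
```

The identical averages illustrate why the TWA alone can mask very different exposure patterns.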
Exposure by Multiple Routes

While inhalation is an important route of exposure for many occupational hazards, skin exposure may also be a significant route of entry for industrial chemicals. Most exposure guidelines and limits include notations indicating cases in which skin contact may be a significant route of exposure; in the case of the ACGIH TLVs, this notation appears for approximately 10% of the chemicals listed. Unlike measurements of airborne contaminants, the interpretation of information obtained by measuring dermal contact is complicated by the absence of guidelines or reference values. Measurement of skin contact does not necessarily provide a direct indication of the quantity of a chemical that may be absorbed, because the relationship between the material found on the skin and the absorbed amount depends on several factors: the physical and chemical properties of the material, the anatomical area of contact, the duration of contact, and the characteristics of the exposed individual. The importance of dermal exposure should not be underestimated, however, as in some occupational settings materials such as pesticides have been shown to enter the body primarily by dermal absorption. In these cases, measurements in biological media can be very helpful in hazard evaluation because they integrate the contributions of exposure from a number of routes.
Unusual Working Conditions

Any interpretation of exposure information should recognize that there is uncertainty associated with both the measurement of exposure and the limit value to which it is compared. Information on exposure should be interpreted in view of the overall conditions in the working environment. For example, exposure measurements are generally made with the expectation that individuals are in the working environment for the "normal" 8-hour day and 40-hour work week, yet many jobs operate on schedules that vary from this. The potential effect of extended shift duration on occupational exposure is rarely recognized in exposure limits, however. Of the more than 600 materials for which there are OSHA PELs, only the lead standard specifies that the maximum allowable daily exposure level be adjusted downward in proportion to the time by which the length of the daily exposure exceeds 8 hours. For purposes of hazard evaluation and decisions about the need for exposure controls, however, duration of exposure should be considered in any exposure situation.

CONTROL OF HAZARDS
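The proportional downward adjustment described above for the lead standard can be sketched as follows. This is a simplified illustration, not the regulatory text; the function name is ours, and the 50 µg/m³ figure is the lead standard's 8-hour TWA PEL used here only as an example.

```python
def adjusted_daily_limit(eight_hour_limit, shift_hours):
    """Scale an 8-hour exposure limit down in proportion to a longer shift.

    Shifts of 8 hours or less keep the original limit; longer shifts get
    the limit reduced by the factor 8 / shift_hours.
    """
    if shift_hours <= 8:
        return eight_hour_limit
    return eight_hour_limit * 8.0 / shift_hours

# Example: the OSHA lead PEL of 50 µg/m³ (8-hour TWA) on a 10-hour shift.
print(adjusted_daily_limit(50.0, 8))   # 50.0
print(adjusted_daily_limit(50.0, 10))  # 40.0
```

The design choice is simply dose conservation: a lower concentration sustained over a longer shift delivers the same daily inhaled burden as the full limit over 8 hours.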
Principles and Limitations of Controls

Recalling the public health basis of industrial hygiene practice, exposure control is a means of primary prevention. The elimination or reduction of hazards to the extent feasible is the primary means of
prevention for occupational disease and injury. The strategy for effective hazard control is an ordered hierarchy of control solutions:
1. First, prevent or contain hazardous workplace emissions at their source.
2. Next, remove the emissions from the pathway between the source and the worker.
3. Last, control the exposure of the worker with barriers between the worker and the hazardous work environment.7
This strategy mandates the use of environmental controls as the primary means of exposure prevention. These controls may take several forms and are frequently used in combination as part of an overall prevention strategy. Specific control methods include substitution of less hazardous materials, modification of the working environment to contain the source of the hazard, isolation of the worker from the hazardous environment, removal of the hazardous substance by ventilation, modification of work practices to reduce exposure, and use of personal protective equipment to reduce exposure. Note that the use of protective equipment, including respirators, is intentionally mentioned last: personal protective equipment should be considered the least preferable means of hazard control, implemented only when other means of control are not feasible or effective.
Material Substitution

The practice of reducing risk in the workplace by removing a toxic material and replacing it with a less toxic substitute is well established. Elimination or reduction of extremely toxic materials, such as asbestos as an insulating material, or benzene in solvents, adhesives, and gasoline, illustrates the principle of substitution. These examples also illustrate the risk of replacing one hazard with another. As more is learned about their toxicity, some of the materials used to replace asbestos as an insulating material, such as artificial mineral fibers and fibrous glass, are suspected of having effects similar to those of asbestos. The replacement of benzene with another chemical with similar solvent properties, such as hexane, may reduce the risk of exposure to a carcinogen but increase the hazard of exposure to a neurotoxin. Substitution is an important method of primary prevention of workplace exposures, but it should be practiced with a recognition of the effect the replacement material may have on the work environment. The result of substitution should not be the replacement of one hazard with another.
Process Modification

The application of engineering control technology to modify the design of industrial processes is a very effective method of intervention to reduce exposures. Spray painting is an example of a process in which changing technology, the use of airless atomization systems instead of compressed air spray guns, has substantially reduced solvent exposures. Many common industrial processes, such as material handling procedures, can be redesigned to minimize the release of contaminants. Exposure control should be included as a central element in the design of a new industrial process or in the modification of existing operations; the anticipation and control of potential hazards at the design stage is more efficient than the redesign of existing systems.
Isolation

By considering exposure to be the result of personal contact with a source of contamination, we can easily see the effectiveness of isolation in interrupting the pathway between the source of a hazard and the worker. This approach can be implemented in two ways: by enclosure to isolate a source from the working environment, or by isolating the workers from a contaminated environment. Both approaches may be part of a comprehensive exposure-control strategy; however, containment of the source is generally preferable. The glove box used in
handling infectious materials is a common example of containment for hazard control. This approach is particularly well suited to control individual point sources of contaminants, or physical hazards such as noise. By preventing the release of a hazardous agent into the work environment, exposure is controlled at the source. Isolation of the workers from the contaminated environment may be preferable, and more feasible, in cases where contaminants are released from multiple sources dispersed through the work environment. While this approach does not prevent the release of the hazard into the environment, it is possible to protect workers through isolation. The use of clean air-supplied control rooms in chemical production facilities is an example of isolation of workers from general environmental contamination.
Ventilation

Ventilation is a very common method of workplace hazard control. There are two general types: dilution ventilation (also known as general or comfort ventilation) and local exhaust ventilation. There is some amount of dilution ventilation in any indoor space, even if it is only the natural infiltration of outside air, but most workplaces also require local exhaust ventilation to capture contaminants at or near their source and remove them from the work environment. Although the two types are frequently used together, they are very different in design and performance.
Dilution Ventilation. Dilution (also known as general) ventilation is the replacement of contaminated air with fresh air. In its most simple form, general ventilation is provided by the natural entry of outdoor air through windows, doors, and other openings. Most indoor workplaces require some means of mechanical air movement, such as roof ventilators or wall fans, to supplement the natural airflow. Where there are no industrial processes, the human occupants of office buildings may be the primary source of indoor pollution, and general building air provided by a heating, ventilation, and air conditioning (HVAC) system may be the only means of controlling the carbon dioxide, water vapor, particulate material, and biological aerosols that result from human occupancy. Guidelines for general dilution ventilation are provided by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), which specifies minimum ventilation rates and indoor air quality for an acceptable work environment; these guidelines, however, are based more on building occupants' perceptions of comfort than on prevention of adverse health effects.
Dilution ventilation is generally not sufficient to provide effective control in workplaces where there are sources of contamination in addition to human occupancy. This is clearly the case where major industrial processes are conducted, but it can also be true where the contaminant sources are limited to office equipment such as photocopy machines. The volume of air needed to dilute contaminants to acceptable levels is usually large, requiring large and expensive air handling systems to move the air, as well as to heat and cool it. These systems may reduce the amount of contaminant present in the work environment, but they do not control its release. Local exhaust systems, described in the following section, are generally preferable for a variety of reasons.
Local Exhaust Ventilation. Local exhaust systems differ fundamentally from dilution systems. Rather than allowing contaminants to escape and then reducing their concentration by dilution with clean air, local exhaust ventilation systems capture air contaminants at the source and prevent their dispersion into the environment. By interrupting the pathway between the source of the contaminant and the worker, these local systems control emissions and prevent exposures. These systems typically include a hood, which may partially enclose the source and facilitate the entry of contaminated air into the exhaust system. The force to move air into the system is provided by a fan, connected to the hood with ductwork. Many systems also include an air cleaning device, such as a filter, to remove contaminants
before the air is released to the environment. The design and testing of local exhaust systems is a specialized aspect of industrial hygiene, and the ACGIH’s manual8 and text by Burgess2 should be consulted as sources of further information.
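The large air volumes demanded by dilution control can be illustrated with the standard steady-state, perfect-mixing model from ventilation engineering: at equilibrium, the room concentration equals the contaminant generation rate divided by the clean airflow. This relationship is not stated in the text above, and the numbers here are illustrative only.

```python
def required_dilution_airflow(generation_rate_mg_min, target_mg_m3):
    """Airflow (m^3/min) needed so a steadily emitted contaminant settles
    at a steady-state concentration no higher than the target.

    Assumes perfect mixing and no capture at the source, so all removal
    is by dilution with clean air.
    """
    return generation_rate_mg_min / target_mg_m3

# A source emitting 100 mg/min, diluted to a 1 mg/m^3 target, needs
# 100 m^3/min of clean supply air; a local exhaust hood capturing the
# same emission at the source would typically need far less air.
print(required_dilution_airflow(100.0, 1.0))  # 100.0
```

The inverse dependence on the target concentration is why dilution becomes impractical for toxic materials with low acceptable levels, and why local exhaust is generally preferred.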
Personal Protective Equipment

Personal protective devices are at the lowest level of the hierarchy of exposure control methods. These devices are intended to provide a barrier between workers and contaminated environments. They include equipment to protect the eyes (safety glasses, goggles, and face shields); the skin (gloves, aprons, and full body suits made of impervious materials); and the respiratory tract (a wide variety of respiratory protective devices). The selection and use of these devices is largely driven by the particular application, and a large number of choices are available to protect against chemical, physical, and biological hazards.9 The specification and use of respiratory protective devices is more complex than is the case for other personal protective equipment, because of legal requirements as well as the importance of matching the choice of respiratory protection to both the hazard and the individual respirator user. OSHA has a specific regulation for respirator use (Code of Federal Regulations, 29 CFR 1910.134), and some OSHA standards for specific air contaminants, such as asbestos and lead, include requirements for respiratory protection programs. Respiratory protective devices may be classified into two general types: respirators that operate by removing contaminants from air by filtration, adsorption, or chemical reaction are known as air-purifying respirators, while respirators that supply air from a source other than the surrounding environment (such as a cylinder of compressed air) are known as atmosphere-supplying respirators. Both types of respirators are tested and certified for use by NIOSH. NIOSH has developed a full respirator decision logic, a recommended procedure to guide the selection and use of respiratory protection.
The correct choice of a respirator requires consideration of the particular contaminants that may be present and the concentrations at which they will be found in the working environment. OSHA has an internet-based eTool that guides users through the respirator selection process at http://www.osha.gov/SLTC/etools/respiratory/index.html, and NIOSH updated its respirator decision guidelines in 2005.10 The ability of an individual worker to wear a respirator in a manner that provides adequate protection must also be determined as part of a respirator selection and use program. Individual workers vary widely in the degree of protection that a respirator will provide in actual use in the working environment, and the decision logic therefore includes requirements for comprehensive respiratory protection programs with respirator fit testing to ensure that each respirator performs effectively for the individual user.
Education and Training

Worker education and training are essential components of effective programs of primary prevention and exposure control. Any of the control strategies just described functions best when workers understand the physical and chemical hazards associated with their work, as well as the methods for controlling these hazards. The OSHA substance-specific regulations (for asbestos, lead, arsenic, cotton dust, etc.) require worker education and training, although these regulations often lack detailed training specifications. Training requirements are also contained in several OSHA process-specific standards, such as the respiratory protection standard, the blood-borne pathogens standard, and the standard concerning process safety management for highly hazardous materials. In addition, the OSHA Hazard Communication Standard, promulgated in 1985, establishes generic training requirements for hazardous substances. The Hazard Communication Standard requires chemical manufacturers and importers to provide hazard information to users of their products, in the form of MSDSs and product labels. The standard also requires that employees be provided with information and training on hazardous chemicals in the workplace. Training must include
information concerning requirements of the OSHA standard, identification of hazardous materials in the work area, information on the company's written hazard communication program, methods for detecting the presence or release of hazardous chemicals in the work area, the specific hazards of chemicals in the workplace, measures to protect workers from exposure to hazardous chemicals, and details concerning the employer's hazard-labeling system for chemicals in the workplace. Although the sort of training required by the Hazard Communication Standard is not legally required for all occupational hazards, it contains the elements of a model program that can be adapted to a variety of workplace situations where hazard control is needed.

REFERENCES
1. Hatch T. Major accomplishments in occupational health in the past fifty years. Ind Hyg J. 1964;25:108–13.
2. Burgess WA. Recognition of Health Hazards in Industry. 2nd ed. New York: John Wiley & Sons; 1995.
3. American Conference of Governmental Industrial Hygienists (ACGIH). Threshold Limit Values for Chemical Substances and Physical Agents in the Workroom Environment with Intended Changes for 2007. Cincinnati: American Conference of Governmental Industrial Hygienists; 2007.
4. National Institute for Occupational Safety and Health. NIOSH Manual of Analytical Methods. 4th ed. Cincinnati: National Institute for Occupational Safety and Health; 1994.
5. U.S. Department of Labor, Occupational Safety and Health Administration. OSHA Technical Manual. 4th ed. Washington, DC: Government Printing Office; 1996.
6. Nylander-French LA. Occupational dermal exposure assessment. In: Harrison R, ed. Patty's Industrial Hygiene. New York: John Wiley & Sons, Inc.; 2003.
7. Burgess WA. Philosophy of management of engineering controls. In: Cralley LJ, Cralley LV, Harris RJ, eds. Patty's Industrial Hygiene and Toxicology. 3rd ed. New York: John Wiley & Sons; 1994.
8. American Conference of Governmental Industrial Hygienists (ACGIH). Industrial Ventilation: A Manual of Recommended Practice for Design. 25th ed. Cincinnati: American Conference of Governmental Industrial Hygienists; 2007.
9. Forsberg K. Quick Selection Guide to Chemical Protective Clothing. 5th ed. Cincinnati: American Conference of Governmental Industrial Hygienists; 2007.
10. National Institute for Occupational Safety and Health. NIOSH Respirator Selection Logic, 2005. DHHS (NIOSH) Publication No. 2005-100.
General References

Burgess WA, Ellenbecker MJ, Treitman RD. Ventilation for Control of the Work Environment. 2nd ed. New York: John Wiley & Sons; 2004.
Di Nardi SR, ed. The Occupational Environment: Its Evaluation, Control and Management. 2nd ed. Fairfax, VA: American Industrial Hygiene Association; 2003.
U.S. Department of Labor, Occupational Safety and Health Administration. 29 CFR Part 1910, Air Contaminants, Final Rule. Fed Reg. January 19, 1989;54(12):2651–2.
40
Surveillance and Health Screening in Occupational Health Gregory R. Wagner • Lawrence J. Fine
INTRODUCTION
This chapter will discuss surveillance and health screening in occupational health and the common principles that guide program performance.
SURVEILLANCE IN OCCUPATIONAL HEALTH

Surveillance in occupational health, as in other public health endeavors, involves the systematic and ongoing collection, evaluation, interpretation, and reporting of health-relevant information for purposes of prevention. Surveillance can help establish the extent of a problem, track trends, identify new problems or causes, help set priorities for preventive interventions, and provide the means to evaluate the adequacy of those interventions. Surveillance programs can focus on an enterprise, an industry, or the general population. At the national level, surveillance data can be used to identify high-risk industries. One of the few sources of national data is collected by the Bureau of Labor Statistics (BLS) in the Department of Labor, which each year surveys a representative sample of private sector employers with more than 11 employees.1 The number of occupational illnesses and injuries is collected from each surveyed employer. This system is periodically revised to improve the classification of occupational diseases and to collect more information about the etiology of diseases and injuries. The most effective workplace surveillance systems have both health and hazard (or exposure) components. While hazard surveillance may be less common than health surveillance, it is vital: it provides the opportunity to identify and intervene on hazardous exposures before an injury or disorder develops. Both health and hazard surveillance efforts are often characterized by their speed and practicality, and indications of abnormality generally need confirmation or further validation.

Health Surveillance

Health surveillance within an enterprise often involves analysis of the information gathered in baseline or pre-placement examinations and periodic screening testing. In addition, administrative records such as health insurance data, work absence records, workers' compensation claims, or worksite "incident reports" may provide insight into the health of the workforce. Records from poison control centers and from emergency room visits have been used for population-based occupational injury surveillance as well. Population-based workforce data can be analyzed for rates of disease or injury, so areas of unusual occurrence within an enterprise, a community, or a country can be identified and investigated. Some conditions, such as silicosis, are so characteristically occupational that all cases should be investigated. These are known as sentinel events.2

Hazard Surveillance

Hazard surveillance (systematic monitoring of the workplace for hazardous exposures) is an important part of occupational surveillance activities. The identification of potentially harmful levels of exposure to hazardous substances or conditions before work-related diseases or injuries have developed or been recognized provides the opportunity for prevention through workplace redesign and implementation of engineering or administrative controls to reduce risk. Hazard surveillance information can be collected by worker interview, walk-through inspections, or environmental sampling. On the basis of hazard surveillance and other health surveillance information, jobs can be prioritized for more intensive evaluation to identify hazardous exposures. The purpose of the more complete evaluation is to assess the nature of the exposures precisely and to evaluate possible methods of reducing them. Sometimes exposures identified by hazard surveillance will be so clearly hazardous, and ways to reduce the level of exposure so obvious, that more sophisticated evaluation will be unnecessary. In most contemporary U.S. workplaces, where hazardous exposures involve only small groups of workers, serious work-related health problems are infrequent. It is particularly difficult to detect an increased occurrence of common diseases that may be caused by occupational and nonoccupational factors (alone or in combination) on the basis of health surveillance alone. In contrast, with hazard surveillance data, hazards may be readily identified regardless of the number of exposed workers. The ability of a hazard surveillance system to identify hazardous exposures depends on the overall accuracy of the methods used to identify the nature and the intensity of the exposures.

TYPES AND PURPOSES OF WORKPLACE HEALTH EXAMINATIONS
Pre-placement Examinations

After an offer of employment is made, but before or soon after work is initiated, workers may undergo selective or comprehensive health examinations. Ethically and legally, these examinations may not be used to exclude the worker from employment, but they may be used to guide proper placement of the worker, identify educational and training needs, assist in the selection of personal protective equipment,
Copyright © 2008 by The McGraw-Hill Companies, Inc.
and identify necessary work-station design or other kinds of accommodations needed for workers with disabilities. These examinations are more likely to take place when known hazardous exposures are anticipated, and some are mandated by legal health standards. For example, each coal miner is mandated by the Mine Safety and Health Act (MSHA) to have a chest radiograph prior to starting underground coal mine work. Pre-placement examinations provide an opportunity for education concerning work hazards and an orientation to occupational health services.
Medical Screening

Medical screening examinations attempt to identify health effects of work exposures at an earlier stage than they would ordinarily be detected by the worker without the examination. In general, after a positive screening test is confirmed, available, acceptable interventions must be able either to reverse the detected abnormality or to reduce the severity of the outcome. Screening is intended to benefit the screened individuals. Screening programs may also indirectly benefit other similarly exposed workers if the detection of work-related health effects triggers an investigation of the workplace and efforts to reduce hazardous exposures or change unsafe working conditions. If large groups are tested periodically, the resulting data can be analyzed to identify group trends as part of a surveillance program, as described above. Screening examinations may include administration of questionnaires, physical examinations, and clinical tests such as tests of pulmonary or liver function. Screening examinations should be voluntary and are intended to benefit the individual worker who is screened. Therefore, the screening tests used in these one-time or periodic examinations should be evaluated to ensure that they are effective for the screening objectives and pose minimal risk.
Biological Monitoring

Biological monitoring involves the measurement of workplace agents or their metabolites in biological specimens, usually blood or urine, for the purpose of monitoring the level of exposure and absorption. It is a common adjunct to medical monitoring or screening. This approach to exposure assessment is particularly useful when dermal absorption is possible. Biological monitoring should not be used to replace careful assessment of exposure conditions by other effective methods such as environmental air measurements.
Susceptibility Screening

Another type of screening, one in which ethical issues are particularly important, is the attempted identification of individuals who may be more susceptible to workplace toxins because of individual characteristics, such as genetic or phenotypic factors common in the general population. There is currently no regulatory mandate to perform any such testing, and the performance of examinations for genetic or other susceptibility factors raises significant legal issues. Few validated tests are currently available, and the predictive value of proposed tests is limited. Employers continue to have a legal and ethical responsibility to maintain a workplace free of recognized hazards for the entire workforce, not just the least susceptible.
General Health Appraisal

Some employers offer limited or comprehensive health examinations at work as a component of an overall effort to promote employee health. These examinations may include structured questionnaires (investigating diet, exercise, tobacco use, etc.) and medical testing (e.g., blood pressure, cholesterol, BMI calculation) to appraise risk and assist in general health promotion counseling. Although general health appraisal examinations have traditionally been separate from examinations focused on occupational risk factors, there is growing interest in exploring the value of integrating programs for protection of the workforce from occupational hazards with efforts at individual health promotion.3
TABLE 40-1. SELECTED EXPOSURES WITH OSHA-MANDATED MEDICAL EXAMINATIONS

Acrylonitrile
Arsenic, inorganic
Asbestos
Benzene
Blood-borne pathogens
Cadmium
Coke oven emissions
Ethylene oxide
Noise
Lead
Legally Mandated Medical Examinations

Some OSHA and MSHA standards mandate medical examinations as part of a comprehensive approach to prevention. For example, people exposed to asbestos or cotton dust in general industry must be offered periodic pulmonary examinations; lead-exposed workers must undergo periodic blood lead analyses; and workers exposed to excessive noise must be offered periodic audiometry. Examinations either focus on the primary "target organ" of the toxin, as with asbestos, or involve biological monitoring, as with lead. Table 40-1 lists selected substances from among approximately 30 OSHA standards requiring medical screening or surveillance. Generally, examinations are required if a worker is exposed above a specific level, often one-half of the 8-hour permissible exposure limit (PEL).4 For example, OSHA requires baseline and annual audiometry in employees exposed to noise at an average of 85 dBA or above for a typical 40-hour work week. NIOSH recommends health examinations for a broader list of agents than those covered by OSHA or MSHA standards.
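The "one-half of the PEL" trigger described above can be sketched as a simple check. This is a simplification for illustration; individual OSHA standards define their own action levels, and the benzene figures below (PEL of 1 ppm, action level of 0.5 ppm) are used here only as an example.

```python
def surveillance_required(twa_exposure, pel, action_fraction=0.5):
    """True when an 8-hour TWA exposure reaches the action level,
    taken here as a fixed fraction of the PEL."""
    return twa_exposure >= pel * action_fraction

# Illustrative benzene check: PEL 1 ppm, action level 0.5 ppm.
print(surveillance_required(0.6, 1.0))  # True
print(surveillance_required(0.4, 1.0))  # False
```

Setting the trigger below the PEL builds a safety margin into the program: workers enter medical surveillance well before their exposures approach the legal limit.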
ETHICAL ISSUES IN HEALTH EXAMINATIONS IN THE WORKPLACE
The relationship between the health-care provider and the examinee in occupational settings differs from the traditional physician-patient relationship, in which the health-care provider serves only the interests of the patient and the provider's only loyalty is to the patient. When the employer hires or contracts for the occupational health-care provider, the provider may have difficulty resolving conflicts of interest between the employer and the employee-patient. This conflict is one of the most important ethical concerns of occupational health.5 Ethical codes have been developed by professional organizations such as the American College of Occupational and Environmental Medicine (ACOEM) and the International Commission on Occupational Health (ICOH).6,7 Rothstein has proposed a Bill of Rights of Examinees.5 The ICOH codes explicitly deal with many of the issues related to screening and surveillance activities, and ACOEM has a position on medical surveillance in the workplace.8 All of these codes recognize the need to maintain the confidential nature of most medical-screening information. The legal responsibility to maintain the confidentiality of medical information is reinforced by the Americans with Disabilities Act (ADA)9 and mandated by the Health Insurance Portability and Accountability Act (HIPAA).10 All medical information must be collected confidentially and stored in separate, secure medical files. Under the ADA, management may be informed of workers' restrictions that limit their ability to perform job duties. In addition to the ADA and HIPAA, other federal and state laws or regulations, such as the Occupational Safety and Health Act, Department of Transportation examinations for interstate truck drivers, and state laws on human immunodeficiency virus (HIV) or drug testing, deal with the issue of medical confidentiality. While the OSHA
TABLE 40-2. CRITICAL INFORMATION CONCERNING MEDICAL EXAMINATIONS IN THE WORKPLACE

The purpose and nature of the examination and any risks
Who is employing the health-care provider
Policies and practices to protect the confidentiality of the collected data
Who will be provided with the results of the examination
How the information will be used, including what actions will be taken to further evaluate possible hazardous workplace exposures
How the worker will be notified of individual and group test results
How the worker may have access to his or her health records
How medical follow-up may be obtained if the test results are positive

Data from Rothstein MA. Legal and ethical aspects of medical screening. Occup Med. 1996;11(1):31–9.

TABLE 40-3. COMPONENTS OF A MEDICAL SURVEILLANCE PROGRAM

Exposure assessment and identification of most likely adverse health effects
Selection of medical tests based on evaluation of test characteristics
Identification of employees to be tested and testing frequency
Training of testing staff
Analysis and interpretation of individual and group test results
Actions based on test results
Verification of test results
Notification of employees and the employer while protecting confidentiality
Additional tests or treatment and steps to reduce an individual's exposure
Exposure evaluation and reduction of hazardous exposures
Maintenance of records
Evaluation for adequate quality control and revision based on program performance
mandates various pre-placement and periodic medical examinations that employers must offer employees, employees retain the right to refuse to participate in these OSHA-mandated examinations unless participation is specified in an employee-employer contract. Maintaining the confidentiality of medical data is not only important from legal and ethical perspectives but is also critical in facilitating employee participation in the program. One of the best ways to address the ethical issues in workplace examinations, and to ensure a high level of voluntary participation in a workplace screening program, is to educate workers carefully about the program. Rothstein has suggested a number of issues that should be addressed in any education effort5 (Table 40-2).

OTHER PROGRAM DESIGN ISSUES
The value of preventive examinations at work depends on a number of program planning and design elements.11 The purpose (or purposes) of any program should be clear both to those performing examinations and to the intended beneficiaries. As a rule, programs for screening and surveillance respond to the presence of hazardous exposures in the workplace and focus on the workers most likely to be exposed. Examinations should be selected to identify early evidence of significant health effects that might result from these exposures. Tests of reasonable sensitivity, specificity, and predictive value must be available, and a process for confirmation of abnormal tests and medical follow-up incorporated into the program. Most questionnaires and some of the medical tests commonly included in occupational screening programs have not been extensively evaluated for their ability to distinguish those with adverse effects from those without. An efficient medical screening program should detect most individuals with subclinical adverse health effects (high sensitivity) while mislabeling few truly healthy individuals (high specificity). Tests must be free of any significant risk for the screened subjects, since the main use of the test is to identify subclinical disease before an employee would normally seek health care. Tests must also be acceptable to the screened population. OSHA or MSHA standards or NIOSH recommendations can help guide program development. International organizations such as the International Labour Organization (ILO) and the World Health Organization (WHO) have developed materials that are useful in designing occupational health surveillance programs.12,13 Table 40-3 summarizes design elements for workplace health examinations. An individual responsible for oversight of the program should be identified. Trained and qualified technical and professional staff should perform all components of the examinations.
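The interplay of sensitivity, specificity, and prevalence determines a test's predictive value, and the arithmetic is worth making concrete. The following sketch uses invented numbers (the workforce size, prevalence, and test characteristics are hypothetical, chosen only for illustration):

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard test characteristics from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)  # fraction of diseased workers correctly flagged
    specificity = tn / (tn + fp)  # fraction of healthy workers correctly cleared
    ppv = tp / (tp + fp)          # chance that a positive result is a true case
    npv = tn / (tn + fn)          # chance that a negative result is truly negative
    return sensitivity, specificity, ppv, npv

# Hypothetical workforce of 10,000 with 1% prevalence of a subclinical effect,
# screened with a test of 90% sensitivity and 95% specificity:
tp, fn = 90, 10        # among the 100 workers with the effect
tn, fp = 9405, 495     # among the 9900 workers without it
sens, spec, ppv, npv = screening_metrics(tp, fp, fn, tn)
print(f"sensitivity={sens:.2f}  specificity={spec:.2f}  PPV={ppv:.2f}")
# PPV is only 90/585, about 0.15: at low prevalence most positive results are
# false positives, even with good test characteristics.
```

The low positive predictive value at low prevalence is the quantitative reason a confirmation process for abnormal results and medical follow-up must be built into the program design.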
Adequate maintenance and calibration of equipment is necessary to obtain valid test results that can be compared with one another over time to track health status in individuals and groups. Individually identifiable health information must be stored in a way that meets legal and ethical obligations to protect confidentiality.
Selection of tests and test frequency can be a challenge. Professional organizations may provide guidelines to assist in equipment selection and test performance,14 and comprehensive health regulations or guidelines in some instances specify test standards and frequency. When programs are being designed de novo and are not in response to a legal mandate, tests should be performed frequently enough to identify problems that may arise between test cycles sufficiently early to intervene effectively, and scheduling should also take into consideration the likelihood that not every worker will participate in each test cycle. Interpretation of changes in test results in any individual over time must take into consideration expected fluctuations in individuals or populations as well as variability related to equipment, technician performance, and similar factors. Test selection and frequency are often resource dependent. The adequacy of surveillance programs attempting to track trends or determine the success of interventions depends on reasonable levels of workforce participation. If a program provides workers with the type of information listed in Table 40-2, a high level of participation is more likely. Privacy and confidentiality must be assured. Consent to any testing must be provided, and all programs must be free of any hint of coercion. Individuals who participate in programs for medical screening and surveillance should be given their own individual test results, and counseling should be available to answer any resultant questions and advise on any follow-up that might be appropriate. Participants should also have access to the results of analyses of group data and be informed of any actions taken in response to problems identified.

DATA ANALYSIS
Effective health screening and surveillance programs depend on data analysis, although this analysis does not need to be sophisticated. In some instances, confirmation of any occurrence of an abnormal, potentially occupational condition, such as tuberculosis in a health-care setting, should immediately stimulate further evaluation and response. In other settings, calculation of rates and analysis of trends are needed to target work areas requiring intervention. Screening and surveillance can identify problems but do not prevent them. The analysis of the data and the response to findings are critical steps for reducing the burden of disease and injury in individuals and groups. One of the features of an effective surveillance program is the use of a standard coding system for recording health outcomes. Standardized coding permits more homogeneous disease categories comparable across an industry or among industries with common exposures. For example, the ILO disseminates a standardized method for classifying chest x-rays for the presence of pneumoconiosis15 and the
WHO disseminates the International Classification of Diseases, facilitating common coding of medical records.16 Surveillance systems generally have to be as cost-effective as possible to be widely used. The principal advantage of using existing data sources, such as workers' compensation records, is low cost. Supplementing an existing surveillance system with an additional component, such as symptom questionnaires, should be considered when observations of the workplace suggest that there are potentially hazardous common exposures but the existing surveillance data suggest that there are no problems. The apparent absence of problems commonly occurs for one of two reasons: the exposures are not high enough to cause any health complaints, or problems are underreported. Underreporting is likely to be more common where there are obstacles or disincentives to reporting a possible disorder to supervisors or health professionals. For example, if an organization gives awards to departments without lost-time injuries or work-related disorders, either supervisors or coworkers may discourage reporting. More active collection of surveillance data is indicated when substantial exposures are common but there is no existing health surveillance information to determine whether a problem exists; in many sectors of the economy, for example, OSHA logs are not required. Symptom questionnaires are used frequently for workforce surveillance and may be administered by a number of methods. The analysis of questionnaire data requires some training. Generally, a case definition must be established prior to analysis. The purpose of such definitions is to improve the uniformity or consistency of the data collected, thereby improving the quality of the surveillance data. The goal is to ensure that cases have a common set of characteristics. Symptom questionnaires are generally not used to establish a clinical diagnosis unless supplemented by other, more definitive health examinations.
The analysis of health surveillance data is conceptually similar to the analysis of epidemiological research data.17 In both, issues of misclassification and random or systematic errors in assessing either exposures or health outcomes should be considered. Errors due to misclassification are likely to be more common in surveillance data than in epidemiological research data. When the goal of the analysis is to determine whether a specific group of workers or jobs is associated with an elevated risk, use of an internal comparison group from the same organization rather than some external comparison is useful, since the identification and reporting of cases within an organization are likely to be similar across groups. While random and systematic errors in surveillance data limit the conclusions that can be drawn, these limitations are less important than in hypothesis-testing epidemiological research, since the goal of surveillance analysis is the identification of a possible problem rather than the definitive test of a hypothesis. Changes in requirements for case reporting may occur over time in surveillance systems, making longitudinal analyses difficult. Frequently in the analysis of surveillance data, the variation in risk between jobs, departments, or industries is so large that real differences in risk can be characterized by simple statistical analyses and are unlikely to be explained principally by errors in the classification of disease, confounding factors, or random errors. Nevertheless, surveillance data should always be interpreted cautiously, given their limitations. The goal of the analysis of surveillance data is to trigger further investigation if a problem is detected, not to definitively establish its presence or absence. The magnitude of the occupational injury or disease problem can be estimated at the national, state, or enterprise level.
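The internal-comparison approach described above reduces, in its simplest form, to comparing incidence rates across groups within the organization. A minimal sketch (the department names, case counts, and person-years are all invented for illustration):

```python
def incidence_rate(cases, person_years):
    """Cases per 100 worker-years of observation."""
    return 100.0 * cases / person_years

# Hypothetical surveillance counts by department, with shipping chosen as
# the internal reference group:
departments = {
    "assembly": (24, 800.0),   # (cases, person-years)
    "packing":  (9, 600.0),
    "shipping": (5, 500.0),    # internal reference group
}

reference_rate = incidence_rate(*departments["shipping"])
for name, (cases, py) in departments.items():
    rate = incidence_rate(cases, py)
    # A rate ratio well above 1.0 flags the department for further
    # investigation; it does not by itself establish a causal problem.
    print(f"{name}: {rate:.1f} per 100 worker-years, "
          f"rate ratio {rate / reference_rate:.1f}")
```

Because all three departments report cases through the same organizational channels, differences in reporting practice are less likely to explain a threefold rate ratio than they would be with an external comparison population.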
Local surveillance systems are typically based on one or more of the following data sources: (a) the OSHA 200 log, an important source of data for the BLS surveillance system;18 (b) in-plant medical records or logs; or (c) workers' compensation records. Analytic methods such as capture-recapture, which use different data sources to examine the same outcome in the same population, can help improve the validity of estimates of the magnitude of disease occurrence.19 Analyses of surveillance data for the purpose of determining the magnitude of a problem may also suggest a possible cause for the problem.
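The capture-recapture idea mentioned above can be sketched with the simplest two-source estimator, the Lincoln-Petersen formula with Chapman's small-sample correction (the case counts below are invented for illustration):

```python
def capture_recapture_estimate(n1, n2, m):
    """Chapman's bias-corrected two-source capture-recapture estimate.

    n1: cases identified in source 1 (e.g., in-plant medical logs)
    n2: cases identified in source 2 (e.g., workers' compensation records)
    m:  cases identified in both sources
    """
    if m == 0:
        raise ValueError("sources share no cases; the estimate is undefined")
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical counts: 60 cases in one source, 45 in the other, 30 in both.
print(round(capture_recapture_estimate(60, 45, 30)))  # → 90 estimated total cases
```

Because each source misses some cases, the estimated total (about 90 here) exceeds the 75 distinct cases actually observed, giving a sense of how much either source alone undercounts. The method assumes the two sources capture cases independently, which should be checked before relying on the estimate.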
Since resources for evaluating exposures and implementing possible prevention strategies are commonly limited, surveillance data identifying the magnitude of the problem should be used to guide resource allocation for further investigation and preventive activities. The goal of many surveillance systems is to track trends over time in the number of workers exposed to occupational hazards or the number of workers with injuries, disorders, and diseases. A major use of trend data is to qualitatively evaluate the effectiveness of prevention activities. However, an important limitation of surveillance data is that changes in the rate of disorders may be due to changing levels of exposure or to changes in the reporting of disorders independent of their level of occurrence. Despite the limitations of surveillance data systems, the opportunity they provide for evaluation of preventive efforts is often unique, because large-scale research evaluations of intervention programs are difficult and costly to undertake.

CONCLUSIONS
Occupational health surveillance can contribute to improved prevention of occupational disease and injury. Health examinations at work are the “inputs” for programs aimed at early identification of adverse effects to reduce disease in individuals and for programs of surveillance designed to identify new hazards, track trends, and evaluate the adequacy of interventions for groups of workers. Hazard surveillance is another significant element in comprehensive occupational disease and injury prevention efforts. The development and conduct of any successful program that includes health examinations must address critically important ethical issues including those of worker autonomy and confidentiality. The results of health examinations and hazard information, thoughtfully analyzed, can help target preventive interventions. Surveillance systems can contribute to prevention but do not, in themselves, prevent disease or injury. This is done through the recognition and control of hazardous exposures at work.
REFERENCES
1. U.S. DOL. BLS Home Page. Injuries, Illnesses, and Fatalities. Accessed September 21, 2005 at http://www.bls.gov/iif/home.htm#tables.
2. Rutstein DD, Mullan RJ, Frazier TM, et al. Sentinel events (occupational) for physician recognition and public health surveillance. Am J Pub Health. 1983;73:1054–62.
3. Sorensen G, Barbeau E. Steps to a Healthier U.S. Workforce: Integrating Health and Safety and Health Promotion: State of the Science. 2004. Accessed September 21, 2005 at http://www.cdc.gov/niosh/steps/pdfs/NIOSH-post-symprevision.pdf
4. Jones DL. Occupational health services and OSHA compliance. Occup Med. 1996;11(1):57–68.
5. Rothstein MA. Legal and ethical aspects of medical screening. Occup Med. 1996;11(1):31–9.
6. American College of Occupational and Environmental Medicine. Code of Ethical Conduct. Arlington Heights: ACOEM; 1993. Accessed September 21, 2005 at http://www.acoem.com/code/default.asp
7. ICOH. International Code of Ethics for Occupational Health Professionals (Rev. 2002). Rome, Italy; 2002. Accessed September 21, 2005 at http://www.icoh.org.sg/core_docs/code_ethics_eng.pdf
8. American College of Occupational and Environmental Medicine. ACOEM Position on Medical Surveillance in the Workplace. 1989 Report. Arlington Heights: ACOEM.
9. U.S. Department of Justice. ADA Home Page. Accessed September 21, 2005 at http://www.usdoj.gov/crt/ada/adahom1.htm
10. HHS Office for Civil Rights. HIPAA Home Page. Accessed September 22, 2005 at http://www.hhs.gov/ocr/hipaa
11. Maizlish NA, ed. Workplace Health Surveillance: An Action-Oriented Approach. New York: Oxford University Press; 2000.
12. ILO. Technical and Ethical Guidelines for Workers' Health Surveillance. Occupational Safety and Health Series No. 72. Geneva; 1998. Accessed September 21, 2005 at http://www.ilo.org/public/english/support/publ/pindex.htm
13. Wagner GR. Screening and Surveillance of Workers Exposed to Mineral Dusts. Geneva: World Health Organization; 1996.
14. Miller MR, Hankinson J, Brusasco V, et al. Standardisation of spirometry. No. 2 in series: ATS/ERS task force: standardization of lung function testing. Eur Respir J. 2005;26:319–38. Accessed September 22, 2005 at http://www.thoracic.org/adobe/statements/pft2.pdf
15. International Labour Office. International classification of radiographs of pneumoconiosis. In: Occupational Safety and Health Series, No. 22. 2000 ed. Geneva: International Labour Office; 2002.
16. WHO. International Statistical Classification of Diseases and Related Health Problems. 10th Revision, Version for 2003. Geneva; 2003. Online version accessed September 21, 2005 at http://www.who.int/classifications/icd/en/
17. Checkoway H, Pearce N, Kriebel D. Chapter 8: Occupational health surveillance. In: Research Methods in Occupational Epidemiology. 2nd ed. New York: Oxford University Press; 2004.
18. U.S. Department of Labor, OSHA. Occupational Illness and Injury Recording and Reporting Requirements. Standard Number 1904; 1952. Final Rule. U.S. Federal Register 66:5916–6135, January 19, 2001. Accessed September 21, 2005 at http://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=FEDERAL_REGISTER&p_id=16312
19. Rosenman KD, Reilly MJ, Henneberger PK. Estimating the total number of newly-recognized silicosis cases in the United States. Am J Ind Med. 2003;44:141–7.
Workers with Disabilities
41
Nancy R. Mudrick • Robert J. Weber • Margaret A. Turk
FRAMEWORK FOR DEFINING DISABILITY
The term disability is defined in various ways. In some contexts it is defined in terms of health conditions; in other contexts it is defined in terms of functional limitations; and in still other settings it is defined in terms of activity and role limitations. These varying definitions of disability have in some cases been codified into law, into standardized data collection instruments, and into the practice framework of professionals and organizations that serve people with disabilities. One consequence of the different ways in which disability is defined is that before the characteristics and needs of people with disabilities can be discussed, the parameters of the disability definition being used must be addressed. Whatever the specific components of the definition, there does appear to be some consensus that a person with a disability is someone who experiences limitations in function as a consequence of a permanent physical or mental impairment or a chronic health or mental health condition in interaction with the person’s environment. The health condition or impairment may be one that is visible, or it may be invisible. Onset may occur at any age or it may be present at birth. Finally, the severity of disability may vary, even among people with the same condition or impairment, such that some individuals may find it difficult to participate in many life activities, while others experience the effects of disability in a single area. Among the many definitions of disability used by professionals, government programs, service agencies, and individuals with disabilities, there are three that are most dominant. The first definition involves the extent of limitation in the Activities of Daily Living (ADL) and Instrumental Activities of Daily Living (IADL). 
The second construct for defining disability is based upon a model developed by Saad Nagi that defines disability in terms of the interaction of environment, functional limitation, and impairment.1,2 The third definition is embodied in the International Classification of Functioning, Disability, and Health (known as ICF) of the World Health Organization (WHO). A fourth measure, used in epidemiological contexts, does not define disability, but it tries to account for the severity of disability by measuring what is referred to as “disability adjusted life years” (DALY).
ADL and IADL

The ADL scale measures disability in terms of limitations in the Activities of Daily Living. This scale was developed by Katz and coworkers in the 1950s and has been used extensively by researchers studying the elderly.3 The ADL scale asks about the need for assistance in the activities of eating, bathing, dressing, transfer, and toileting. A related measure, developed by Lawton and Brody in 1969, is the IADL scale.3 The items in this scale ask about the need for assistance in such activities as everyday household chores, managing finances, shopping, and getting around outside one's home. The scales are now used to define levels of disability among all adults.4,5 Both the
ADL and IADL approaches measure disability by examining tasks or activities that are limited or prevented by an impairment or health condition. The items do not directly address work, although people with ADL and IADL limitations report low rates (approximately 25%) of employment.5
Functional Limitation Model

Saad Nagi's work has served as the basis for a model with three components: impairment, functional limitation, and disability.1 Impairment is defined as the chronic or permanent anatomical or physiological problem (i.e., health condition) that results from injury or illness. Functional limitations are the restrictions or functional inabilities that result from an impairment, such as the inability to climb stairs or to lift objects weighing more than 20 pounds. Finally, disability is defined as the consequence of functional limitation for the activities of normal or expected roles. Although people have many different roles in their lives, it is the work role that has been most often used to assess whether impairments and functional limitations are disabling. Someone whose employment is affected by functional limitations is often labeled disabled. More recent elaboration of the model has included consideration of the impact of environment on role performance and quality of life.2 One implication of this model is that the determination of disability rests on the particular activities required by different roles, as well as on the presence or absence of environmental barriers that support or impede role (work) performance. In this framework, it is possible for two people with the same impairments and functional limitations to be rated differently in terms of disability.
ICF

The ICF is part of the "family" of international classifications developed by the WHO that are intended to provide codes that can be applied internationally to describe health conditions and compare prevalence of morbidity, mortality, and health outcomes.6 The ICF is intended to complement the WHO International Classification of Diseases-10 (ICD), which focuses on the diseases and disorders often used to classify death, by offering a classification system that describes health conditions and related health outcomes as a means of describing population health.7 The ICF is a substantially revised and modified version of the first disability-related classification system issued by the World Health Organization in 1980, the International Classification of Impairments, Disabilities, and Handicaps (ICIDH).8 The ICIDH used a framework consisting of four main categories: disease, impairment, disability, and handicap. Disease was not really defined in the ICIDH but was implicitly based upon the definitions contained in the ICD. While the ICIDH was similar to the Nagi model because it separated the medical condition from its functional and social consequences, it was not as well accepted. Part of the reason was the lack of conceptual
clarity in the different classifications and categories and the use of the term "handicap."9,2 The World Health Organization's new ICF is a "biopsychosocial model" (p. 9) of disability that is a synthesis of the two previously dominant disability models, the medical model and the social model.7 It is structured with two major parts, each with two components. Part 1 is titled Functioning and Disability, and within it the two components are (a) Body functions and structure and (b) Activities and participation. Part 2 is called Contextual Factors, and within it the components are (c) Environmental factors and (d) Personal factors. Within each of the components are domains, and within the domains are the categories that are the units of classification. The exact classification coding is not detailed here; the overall conceptual model is illustrated in Fig. 41-1. What is significant about the ICF is its use of information about body function and structure along with information about the individual's activities, participation, and environmental circumstances to arrive at a coded description of health outcome. This may have limited utility in determining interventions for a specific individual; however, it has the potential to provide a global description of the prevalence of disability in a population that takes into account not only medical condition but also societal barriers and individual expectations for social participation. Since the ICF is in the early stages of implementation, it is too early to evaluate its utility, utilization, or impact in the United States and internationally.
Disability Adjusted Life Years (DALYs)

As part of the Global Burden of Disease study, the World Health Organization, in collaboration with the World Bank, supported the development of a measure that provides a quantitative estimate of the burden of disease and disability across populations in different nations. The measure, disability adjusted life years, or DALY, is a single number calculated for an individual based upon a formula that considers the impact of a health condition on life expectancy, the age of the individual, and a measure of the quality of life for someone with that condition or disability. When aggregated, disability adjusted life years for a nation or population represent the gap between the population's health and a hypothetical ideal. The DALY measure has generated considerable controversy, and there are a number of critiques of its underlying assumptions and calculation methodology.10 DALY calculations are built upon an estimate of the negative impact of various diseases and chronic conditions upon quality of life, made by a panel of experts mostly without disabilities, a fact defended by the DALYs' developer because the goal is to indicate social valuations, not individual valuations, which may be affected by a person's ability to adjust over time.11 DALYs weight the impact of conditions by age; young children and older persons receive smaller weights for a condition than working-age persons. In their summary of the ethical issues raised by the DALYs,
Figure 41-1. Disability model of the International Classification of Functioning, showing the interaction of a health condition (disorder or disease) with body functions and structures, activities, and participation, in the context of environmental and personal factors. (Source: World Health Organization, ICF: Introduction. p. 18.)
Gold, Stevenson, and Fryback point out that if DALYs are used to determine health intervention investments, the computational structure gives lower priority to those with preexisting disability or illness, those who are young or elderly, and those whose poor health may be related to low social class.10 Thus, while DALYs are viewed as a way of determining an appropriate allocation of resources for health care and rehabilitation, the assumptions built into the measure must be understood. Also, the DALY methodology ties life with a disability to a lower quality of life, at odds with the other disability definitions, which treat disability as a characteristic compatible with rights to equal access in the built and social environments.
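The DALY combines years of life lost to premature mortality (YLL) with years lived with disability (YLD). The following is a simplified sketch only: it omits the age weighting and time discounting of the original Global Burden of Disease formulation discussed above, and all counts and weights are invented for illustration:

```python
def dalys(deaths, years_lost_per_death, cases, disability_weight, duration_years):
    """Simplified DALY = YLL + YLD (no age weighting or time discounting)."""
    yll = deaths * years_lost_per_death               # years of life lost
    yld = cases * disability_weight * duration_years  # years lived with disability
    return yll + yld

# Hypothetical condition: 10 deaths each losing 30 years of expected life,
# plus 200 nonfatal cases with disability weight 0.2 lasting 5 years:
print(dalys(10, 30, 200, 0.2, 5))  # → 500.0 (300 YLL + 200 YLD)
```

The disability weight (0 for full health, 1 for a state equivalent to death) is exactly the expert-panel valuation the critiques target: changing it changes the apparent burden, and thus any resource allocation based on the measure.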
Disability Defined for Workers

The most common definition of disability in the research literature on employment is that of work disability. Work disability is present when an individual reports that a mental or physical condition "limits the kind or amount of work, or prevents work" (understood to be paid work). This work disability definition comes out of the Nagi functional limitation model and has been used to identify people with disabilities in surveys since 1966.12 It also is implicit in the definition of disability utilized by the Social Security Administration to determine eligibility for income support. As a result of its wide usage, many of the statistics on the prevalence of disability in the United States are actually reports of work disability. While the work disability construct has been useful because it focuses on role performance, not medical condition, it is increasingly inadequate as the demand for information about the labor market experience of people with disabilities increases. This is because disability is only recorded if employment is limited or prevented by a chronic condition or impairment. If an assistive device or a workplace accommodation enables someone to work without limits in the kind or amount of work, then no disability may be deemed present. This may be an appropriate outcome—the presence of a disability is not noteworthy because it is not a relevant fact about the skills and value of the worker. However, it prevents a full estimation of the number of working Americans with chronic health or other conditions that constitute disabilities. A second problem with the work disability construct is that it may not sufficiently distinguish work from occupation.
Asked whether they are limited in the kind or amount of work they can do, do respondents answer with reference to their usual occupation or kind of work, or do they view this as asking about limitations to any kind of work, even work they would not consider doing if there were no disability? The work disability construct may be vulnerable to the attitudes, aspirations, life experiences, and opportunities of the respondents.
DEMOGRAPHIC CHARACTERISTICS
In recognition of the problems posed by the work disability construct, the U.S. Census Bureau has introduced some additional indicators of disability, both in the decennial census and the more frequent Current Population Survey. These indicators ask whether the respondent experiences a disability in broad physiological categories (sensory, physical, or mental) and whether the individual's condition results in limitation in self-care, going outside the home, and/or employment. The prevalence of disability based upon these several indicators is displayed in Table 41-1. Table 41-1 indicates that the percentage with any disability is higher than the percentage of persons who report that disability results in employment disability. Overall, nearly 12% of the U.S. population aged 16–64 report employment disability, with a small difference between men and women (the percentage is slightly higher for men). The presence of any disability varies by race and ethnicity (Table 41-1), with higher prevalence rates among African-Americans, Latinos, and American Indians and Alaskan Natives. The differences in disability prevalence are larger across race than between the genders. The prevalence for whites is 16.8%, while it ranges from 24% to 27% for the other race/ethnic groups. Disability is
associated with a substantially higher prevalence of poverty compared to persons without disability, across all age groups. For working-age persons (16–64 years), the poverty rate is 9.6% for those without disability compared with 18.8% for persons with disability.13 Work disability prevalence increases with age and decreases with increasing years of education. Among those 16–24 years, 4.1% report work disability (2.8% severe work disability); among those 45–54 years, 13.0% report work disability (9.4% severe); and among those 55–69 years, 21.6% report work disability (15.6% severe for ages 55–64 and 8.4% severe for ages 65–69).14 Differences in the prevalence of work disability by demographic characteristic illustrate that work disability is associated with other social, economic, and environmental factors. Age is a factor because health may decline with age, and because older persons have had more years in which to experience an impairment or health problem. While those with higher levels of education may work in occupations that pose fewer risks to health, it is also the case that the occupations associated with higher levels of education are less physical, so that impairments may have only a modest impact on the ability to continue working. Racial differences in the prevalence of disability may reflect racial differences in education and occupation. With respect to poverty status, disability affects earnings and income; however, for a variety of social and economic reasons associated with low income, people with low incomes also experience impairments and health problems that have a disabling impact to a greater degree than persons at higher levels of income.

TABLE 41-1. PERCENTAGE OF THE POPULATION AGE 16–64 WITH DISABILITY BY GENDER, RACE, AND HISPANIC ORIGIN: 2000∗

Total population: 18.6% with any disability (33,153,211 persons)

Men (17,139,019 with any disability):
- Any disability: 19.6
- Employment disability: 13.0
- Difficulty going outside home: 6.4
- Sensory disability: 2.7
- Physical disability: 6.0
- Mental disability: 3.9
- Self-care disability: 1.7

Women (16,014,192 with any disability):
- Any disability: 17.6
- Employment disability: 10.9
- Difficulty going outside home: 6.4
- Sensory disability: 1.9
- Physical disability: 6.4
- Mental disability: 3.7
- Self-care disability: 1.9

Race and Hispanic origin (any disability):
- White alone: 16.8
- Black or African American alone: 26.4
- American Indian and Alaska Native alone: 27.0
- Asian alone: 16.9
- Native Hawaiian and other Pacific Islander alone: 21.0
- Some other race alone: 23.5
- Two or more races: 25.1
- Hispanic or Latino (of any race): 24.0
- White alone, not Hispanic or Latino: 16.2

∗Source: U.S. Census Bureau (2003). Disability Status: 2000, Census 2000 Brief, C2KBR-17, March, Table 1 and Table 2.

Table 41-2 displays the most prevalent conditions reported by people age 18–69 who also report work limitation. Musculoskeletal conditions, especially involving the back, are the most common; heart disease and arthritis and related joint disorders also are prevalent. Most of the conditions listed in Table 41-2 are conditions with onset in midlife or later. Some of the conditions are the consequence of disease, while others are the result of injury, on or off the job. For many of the conditions there may have been a period of acute illness; however, it is not the case that all people with disabilities are sick. Finally, it is possible for people to experience more than one disabling condition.13 An additional risk faced by people with a disability is the onset of a secondary condition that is a consequence of or related to the primary condition.15

EMPLOYMENT AND DISABILITY
The labor force participation rates of people with disabilities are substantially lower than those of people without disabilities, and of those who are employed, a large proportion work part-time. Data from the 2004 Current Population Survey show that 28.7% of men and 23.7% of women ages 16–64 with a work disability are in the labor force. This contrasts with labor force participation rates for men and women without disabilities of 87.1% and 74.2%, respectively.16 Among persons with disabilities who are employed, 13.3% work full-time; 62.1% of employed persons without disabilities work full-time. The unemployment rate for people with a work disability is also higher than for other labor force participants. Across both men and women aged 16–64 with a work disability, the unemployment rate was 15.2% in 2004, compared to 5.8% for men and women without a work disability.16 Another national survey of people with disabilities, conducted by Louis Harris and Associates for the National Organization on Disability, found that 63% of respondents with disabilities said they preferred to be working; however, only 28% actually had a job or were self-employed.17 When people with disabilities are asked what they believe are the most significant barriers to employment, they name factors that include the limitations imposed by their impairments, the absence of transportation and accommodations, labor market discrimination, and the concern that employment might cause them to lose their disability

TABLE 41-2. MOST PREVALENT CONDITIONS CAUSING WORK LIMITATION, 1992

Main Cause of Limitation                                                     Number of People (1,000s)   % of People with Work Limitation
All conditions                                                               19,023                      100.0
Orthopedic impairments, deformities, and disorders of the spine or back      2181                        11.5
Heart disease                                                                2071                        10.9
Arthritis and allied disorders                                               1818                        9.6
Orthopedic impairments, deformities, and disorders of other extremities      1775                        9.3
Intervertebral disc disorders                                                1479                        7.8
Asthma and other respiratory diseases                                        1072                        5.6
Diseases of the nervous system                                               1029                        5.4
Mental disorders, excluding learning disability and mental retardation       925                         4.9
Speech, hearing, and visual impairments (including disorders of the eye)     743                         3.8
Diabetes                                                                     624                         3.3
Cancer                                                                       529                         2.8

Source: LaPlante M, Carlson D. Disability in the United States: Prevalence and Causes, 1992. Disability Statistics Report (7). Washington, DC: U.S. Department of Education, National Institute on Disability and Rehabilitation Research; 1996:120–3, Table 7a.
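The percentage column of Table 41-2 is simply each condition's count divided by the total of 19,023 thousand people with work limitation. A quick sketch recomputing that column (condition names are shortened here for readability; recomputed values agree with the published column to within rounding):

```python
# Recompute the "% of People with Work Limitation" column of Table 41-2
# from the raw counts (in thousands). Small differences from the published
# column are possible where the source rounded differently.

counts = {
    "Orthopedic impairments of the spine or back": 2181,
    "Heart disease": 2071,
    "Arthritis and allied disorders": 1818,
    "Orthopedic impairments of other extremities": 1775,
    "Intervertebral disc disorders": 1479,
    "Asthma and other respiratory diseases": 1072,
    "Diseases of the nervous system": 1029,
    "Mental disorders": 925,
    "Speech, hearing, and visual impairments": 743,
    "Diabetes": 624,
    "Cancer": 529,
}
total = 19023  # all conditions, in thousands

for condition, n in counts.items():
    pct = 100 * n / total
    print(f"{condition}: {pct:.1f}%")
```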
Environmental Health
and health benefits.18 Among those who are employed, nearly one-half feel their jobs do not utilize the full extent of their talents or abilities.18 About 22% of people with disabilities also report that they have encountered job discrimination, mostly in the form of being refused a job interview, refused a job, or denied a workplace accommodation due to disability.17 Economists also have measured wage differences between people with and without disabilities to assess what portion of the wage differential is due to productivity differences associated with limitations, and what portion can be attributed to wage discrimination.19,20 Baldwin and Johnson find larger discriminatory wage differentials among men whose impairments are subject to greater prejudice compared to those with impairments associated with less prejudice.20 Most of the impairments classified in the less-prejudiced group are invisible impairments, such as back or spine problems and heart trouble, while those in the more-prejudiced group tend to be visible impairments, such as paralysis or missing legs, feet, arms, hands, or fingers. Also included in the group subject to more prejudice are persons with mental illness, alcohol or drug problems, and cancer. Although wage discrimination has been observed, Johnson and Baldwin conclude that employment discrimination is the more significant problem for workers with disabilities.20

POLICIES FOR WORKERS WITH DISABILITIES
Policies for workers with disabilities can be placed in one of three categories: (a) policies to protect employment by prohibiting employment discrimination, (b) policies to enable employment through rehabilitation and training, and (c) policies to replace income for workers no longer able to work. While these policies are implemented at the federal, state, community, and firm level, only the federal policies in these areas will be described in the following sections.
Nondiscrimination

The principal source of protection from discrimination in employment on the basis of disability is Title I of the 1990 Americans with Disabilities Act (ADA), which applies to employers of 15 or more employees. The enforcement structure relies on the Equal Employment Opportunity Commission (EEOC) and the methodology developed to enforce the 1964 Civil Rights Act. The ADA defines persons protected from disability-based employment discrimination as (a) people with a mental or physical impairment that substantially limits a major life activity, (b) persons with a record of such impairment, and (c) those who are perceived to have such an impairment. Title I protects a "qualified person with a disability," who is someone who can perform, with or without "reasonable accommodation," the "essential functions" of the job. An employer who can show that a requested accommodation would cause the firm undue hardship will not be required to hire the individual and make the accommodation. Undue hardship is judged in terms of the expense of the accommodation in the context of the size of the firm, the extent of change required, and the potential hazard posed to self or other people. These provisions of the ADA require that employers articulate the essential functions of their jobs, take action to ensure that the job application process does not improperly screen out people with disabilities, and have a means through which they can respond to requests for reasonable accommodation. Between July 1992, when the ADA went into effect, and September 30, 2004, there were 204,997 complaints of discrimination filed with the EEOC under Title I of the ADA.21 Data from the EEOC through September 2003 indicate that of the complaints filed, 31.5% were about discharge, 17.8% involved failure to provide reasonable accommodation, 8.7% involved terms and conditions of employment, and 7.8% charged harassment.
Altogether, 90% of the complaints were about issues that arise from current employment (e.g., promotion, discipline, wages, demotion). Only 6% of all complaints charged failure to hire.22 This pattern of complaints reflects the large percentage
of people with disabilities who experience the onset of disability in midlife, many of whom are employed at the onset of disability.19 For employers, this means that some people who already work for them will become people with disabilities on whose behalf they must comply with the requirements of the ADA.
Rehabilitation and Training

Rehabilitation services were first authorized under federal and state law to enable veterans with disabilities to obtain and maintain employment. The system of services has been expanded to include all persons with disabilities, with the aim of assisting first-time employment for persons with disabilities as well as the maintenance of employment for workers after disability onset. Some rehabilitation services are financed by private insurance, under an individual's health insurance or through workers' compensation, but many rehabilitation services are financed by public funds at the state and federal level. Until July 2000, the Rehabilitation Act was the main vehicle for specifying federal policy and expenditures for rehabilitation services. Various titles under this act supported programs providing job counseling, retraining, and the provision of prosthetic and assistive devices, with much of the service activity delivered through a network of state agencies. The Workforce Investment Act of 1998 (PL 105-220) consolidated most of the federal employment and training programs into a single statute, with much of the prior Rehabilitation Act incorporated into Title IV. The Workforce Investment Act (WIA) requires states to establish One-Stop service delivery systems to provide employment assistance to all workers, including those with disabilities. State vocational rehabilitation agencies are to be an integral part of the services available through the One-Stop centers, ideally with on-site vocational rehabilitation staff. Assessments of the new structure, which serves workers with disabilities in a manner integrated with other workers seeking employment assistance, indicate that the One-Stops have yet to become fully accessible or to have staff sufficiently familiar with disability issues.
Moreover, performance measures for One-Stops appear to create a disincentive to serve people for whom obtaining or maintaining employment may be difficult.23,24 The services of the One-Stop centers are not well known among people with disabilities. Fewer than 50% of those surveyed by the National Organization on Disability/Harris poll had heard of the One-Stops, and of those, only 26% had used the services.17 Of those who used special equipment or assistive devices, nearly 50% reported learning about them through their doctors.17
Income Support

The main source of replacement income for workers whose disabilities prevent continued employment is Social Security Disability Insurance (DI). Coverage for DI is earned at the same time that workers earn coverage for the social security retirement benefit, with part of the social security payroll tax (FICA) directed to the DI Trust Fund. To be eligible, a worker must have contributed to Social Security for 20 of the past 40 quarters (essentially 5 of the past 10 years) and have a condition meeting the medical criteria that prevents "substantial gainful activity." The disability need not be the result of a work injury or be work related. Because it defines disability as the inability to engage in substantial gainful activity (measured as earnings in excess of $900 per month in 2007, $1500 if statutorily blind), and includes a 5-month waiting period before the start of benefits, DI essentially requires complete labor force withdrawal to establish eligibility. The DI benefit amount is based on a formula that is a variant of the one used to calculate monthly social security retirement benefits. After 2 years as a DI beneficiary, health insurance coverage under Medicare is available. Less than 0.5% of DI beneficiaries on the program at a point in time ever leave DI for employment.25 The Social Security Administration has several policy provisions within DI and other service initiatives to try to increase the rates of reemployment. One initiative involves placing a "Navigator" at the One-Stop centers to help people with disabilities access needed services, training, or other information
to facilitate employment. Another initiative is a new structure for financing rehabilitation services, called the "Ticket to Work," that reached full implementation by the Social Security Administration in 2005. The Ticket to Work Program and Work Incentives Improvement Act of 1999 involves a voucher for rehabilitation services (i.e., a ticket) sent to the DI beneficiary. That individual uses the ticket to contract for services with a traditional state vocational rehabilitation agency or one of a number of other organizations registered with the Social Security Administration as an employment network. In contrast to the past, the service providers are paid based on outcome, with partial payment early and full payment occurring after the client has been employed for 60 months. It is too soon to judge whether this alternative structure for financing rehabilitation facilitates rehabilitation and the return to work. Early evaluation results indicate beneficiaries are slow to utilize the tickets, and some of the providers are considering withdrawing because they cannot cover their costs.25 Workers with work-related disabilities usually receive workers' compensation. Workers' compensation programs are state laws that require employers to carry insurance to compensate workers for injuries or illness obtained on the job. In some states, employers purchase workers' compensation insurance from private insurance carriers or self-insure. In other states, employers must purchase the insurance through a state-run insurance fund. The intent of workers' compensation is to replace lost earning capacity. People with a permanent impairment may receive a lump sum or a monthly payment in perpetuity; otherwise workers are paid a portion of their wage for the period they are out of work recovering from the injury or illness. Workers' compensation also pays the medical expenses associated with the treatment of the work-related condition.
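Because workers' compensation replaces a portion of the pre-injury wage up to a weekly maximum, the cash benefit can be sketched as a capped fraction of earnings. The two-thirds replacement rate and the $800 cap below are illustrative assumptions only; each state sets its own parameters.

```python
# Hypothetical sketch of a temporary total disability cash benefit.
# States set their own replacement rate and weekly maximum; the
# two-thirds rate and $800 cap here are illustrative assumptions.

REPLACEMENT_RATE = 2 / 3
STATE_WEEKLY_MAX = 800.0

def weekly_benefit(pre_injury_weekly_wage: float) -> float:
    """Replace a portion of lost wages, subject to the state cap."""
    return min(REPLACEMENT_RATE * pre_injury_weekly_wage, STATE_WEEKLY_MAX)

print(round(weekly_benefit(900.0), 2))   # 600.0 — two-thirds of wage, under the cap
print(round(weekly_benefit(1500.0), 2))  # 800.0 — capped at the state maximum
```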
A small proportion of workers with disabilities also receives income support from the Supplemental Security Income Program (SSI), an income-tested public assistance program for low-income persons with disabilities. Many SSI recipients have no work history or have work attachment insufficient to meet DI eligibility, or earnings so low they are dually eligible for DI and SSI. The SSI program is administered by the Social Security Administration and financed out of general revenues (some states add a supplemental amount). SSI recipients are eligible for health-care coverage under the public assistance Medicaid program.
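The DI threshold tests described in this section (the recent-work requirement of 20 of the past 40 quarters, and earnings below the substantial-gainful-activity limit of $900 per month in 2007, $1500 if statutorily blind) can be sketched as a simple screen. The function name and structure are illustrative only; actual determinations also turn on a detailed medical review that is not modeled here.

```python
# Illustrative sketch of the DI threshold tests described in the text.
# Real determinations require a medical review; names and structure
# here are hypothetical.

SGA_LIMIT_2007 = 900         # monthly "substantial gainful activity" ceiling
SGA_LIMIT_BLIND_2007 = 1500  # higher ceiling for statutorily blind workers

def meets_di_screen(covered_quarters_last_40: int,
                    monthly_earnings: float,
                    statutorily_blind: bool = False) -> bool:
    """Apply the recent-work test and the SGA earnings test."""
    recent_work_test = covered_quarters_last_40 >= 20  # ~5 of the past 10 years
    sga_limit = SGA_LIMIT_BLIND_2007 if statutorily_blind else SGA_LIMIT_2007
    below_sga = monthly_earnings < sga_limit
    return recent_work_test and below_sga

print(meets_di_screen(24, 400))   # True: passes both threshold tests
print(meets_di_screen(12, 400))   # False: fails the 20-of-40-quarters test
```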
ROLE OF CLINICIANS
Clinicians play an integral role in the diagnosis, rehabilitation, and return-to-work planning for workers with disabilities. Rehabilitation is a dynamic process that is most effective when provided through a comprehensive transdisciplinary team approach. Rehabilitation deals not only with physical restoration but recognizes the importance of psychosocial health and support. The disability evaluation process requires medical evaluation in preparation for hiring, the development and direction of a rehabilitation and return-to-work plan, and the determination of impairment or disability. Of particular importance is the recognition that a worker with a disability is not ill or in poor health, but rather can participate in ongoing health maintenance and prevention of further disabilities and secondary conditions.
Acute and Chronic Medical Care

Acute medical management of worker injuries offers the clinician a range of challenges with distinct differences from those posed by chronic management. However, the attitudes, knowledge, and skills required of the practitioner in each area have broad overlap. The process of decision-making and support of worker needs should vary principally with respect to the weight assigned to input elements in each circumstance. The most obvious feature of acute management is the possibility of the abrupt onset of a catastrophic problem which forces an emergency course of treatment. In that circumstance the physician
Workers with Disabilities
provides triage, diagnosis, and treatment, directing care until stability is restored. This process should ideally lead to a transition into rehabilitation, where worker involvement and empowerment become key elements. The physician's role in the acute management of problems with a less dramatic presentation, such as back injury or repetitive use disorders, is established through the dynamic interaction among worker, physician, employer, and the compensation system. The clinician should be attuned to the worker's interests as the worker, individually or through referral and transfer, moves through triage, diagnosis, treatment, and rehabilitation. Where the worker has knowledge of medical issues, he or she can influence the clinician's role even more significantly. Knowledge of similar work site problems and workers' experiences increases the likelihood that the worker will utilize the health-care system more effectively, a de facto self-selection of the physician role by the worker. Acute and chronic management of occupational problems share the need for specific knowledge of the signs, symptoms, etiology, treatment, rehabilitation, and prognosis of problems encountered in the workplace. While acute management may favor greater proficiency in some diagnostic and intervention procedure skills, it is in the realm of attitude that clinicians most differ across the spectrum of acute to chronic care. In the chronic phase, maintenance, adjustment, support and accommodation, and prevention of secondary conditions are more prominent than direct intervention. Worker values, rather than medical pathways, determine the proper course. The physician serves to facilitate and to advise, not to effect change, and to intervene only when new factors emerge.
Rehabilitation

Rehabilitation programs are termed comprehensive when they function in an integrated manner to address the full spectrum of medical, functional, and psychosocial needs of the client throughout the time of need. They are transdisciplinary when they are organized across traditional disciplinary boundaries for services. Core members of comprehensive teams include a rehabilitation physician and nurse; physical, occupational, and vocational specialists; a psychologist; and a social worker. The team develops rehabilitation goals through formal team conferences that meet regularly to update goals, to discover and address evolving issues, and to devise means to leverage the team via transdisciplinary synergy. The team approach offers benefits through the initiative and knowledge resulting from wide participation, and from the efficiency of extending patient learning, reinforcement of skills, endurance, and confidence building throughout the full day through the close integration of the nursing unit with all services in the total plan. The rehabilitation process can be separated into two broad conceptual categories. The first is composed of those cases in which the anticipated rehabilitation outcome is the ability to resume life and work roles with little or no accommodation. The second involves cases for which significant accommodation and perhaps residual impairment are likely. In each instance, rehabilitation proceeds through three general phases: establishment of goals, worker-focused programming, and transition to the workforce. Rehabilitation goal setting requires the translation of the medical prognosis into a function-based worker profile, the identification of resources, and the integration of worker options and preferences into practical outcome targets.
Defining goals also separates expectations into broad programmatic categories, and promotes a rationale and consistency when selecting among options related to service intensity, intervention risks, and accommodations. Goal setting requires an understanding of both the personal and material resources available for rehabilitation. The worker-focused phase is the process usually identified as medical rehabilitation—the transdisciplinary delivery of medical, physical, psychosocial, and vocational services to maximize the physical and psychological function of the worker in the context of the established goals. Here specific skill acquisition, adjustment and the determination of specific vocational targets, and accommodation requirements are emphasized.
The transition phase is relatively straightforward where function is well restored. Work site assessment is helpful in ensuring that worker preparation is appropriate, and in identifying where return to work can be facilitated by minor or temporary accommodations or a phased return. Transition is often a longer process where major accommodation is required. Frequently a gradual shift occurs from a medically directed team management approach to a vocationally directed one. Here skill assessment and training for the worker, resource acquisition, job site analysis, and negotiation for job site accommodation and its funding take center stage.
Psychological Care

Psychological variables have an effect on the rehabilitation process and outcome, and can modify the expression of disability or determine the impact on function. In particular, cognitive impairment and functional limitation as a result of brain injury or mental retardation can determine initiation of or return to work capability. Issues of role or performance change can be difficult for the worker with the disability, the family or other support system, and the employer. Psychological evaluation of cognitive functioning involves standard intelligence and achievement tests, and batteries or individualized approaches for neuropsychological testing. The evaluation provides information regarding cognitive performance of executive functions, relative strengths and weaknesses, possible direction for cognitive-related services, and useful strategies for cognitive compensation. A worker with a disability may require psychological support during the acute medical phase and through the rehabilitation process. A psychological assessment is a combination of standardized testing and interview information that determines attributes (e.g., personality, intellectual, and cognitive factors) that may influence the rehabilitation process and outcome, and that identifies the presence or magnitude of certain other psychological factors (e.g., depression, anxiety, anger, disinhibition, denial) that may have an impact on return to work. Following a medical event, such psychological forces begin to play an increasingly important role in the overall level of disability. Coincidental and independent sources of distress from family, employer, or vocational settings can be incorporated into the sense of impairment. The impact of the patient's altered social behavior on families can be substantial. Psychological intervention can be direct counseling, skill building in coping and adjustment strategies, and training in social skills.
Families and other support systems should be a part of this process to better understand the psychological status of the patient, reinforce appropriate behaviors or coping strategies, and maintain personal psychological health to continue what may be a prolonged course of impairment and disability.
ASSESSING THE ABILITY TO WORK AND ACCOMMODATIONS TO ENABLE WORK
Determination of work capability requires evaluation of the worker as well as the workplace. The physician becomes involved with issues of work disability in the context of medical evaluations of workers in preparation for hire, the development and direction of a return-to-work plan, or the determination of impairment or disability. A workplace assessment involves a job description and on-site evaluation. In the recent past, a number of employers have engaged in selection screening, using medical criteria to identify and maintain a healthier workforce. This involves worker fitness evaluations (e.g., current health, ability to perform job functions, required modifications) and risk evaluations (e.g., prediction of increased risk for illness or injury based on health history, work history, or behavioral patterns). In most instances, few data exist to support such determinations.26,27 The disability evaluation process may require a clinician to assume one or more of three different roles that are potentially in conflict. The physician may act as an advocate and counselor to the patient, a source of information for the agencies that determine
benefits, and an adjudicator and certifier of impairment or disability.28 As advocate and counselor, the physician can advise the patient of disease- or injury-specific issues related to initiation of or return to work. In this role, the physician can outline the process of rehabilitation and discuss possible accommodations. The physician also may provide information about the advantages and potential pitfalls of the various compensation and rehabilitation programs and make referrals to appropriate services. For those patients who have applied for benefits, the physician will likely be asked to provide medical records and documentation of impairment. During the phases of reporting on initial, interim, and maximal medical improvement (MMI, the achievement of maximal benefit from intervention with stabilization of impairment), the physician is asked to complete return-to-work status reports, including a date of MMI. Evaluation of the patient's impairment places the physician in the role of certifier. Impairments can be expressed in terms of functional loss of a body unit or of the whole person. The impairment rating system most commonly employed for musculoskeletal impairments is the American Medical Association (AMA) Guides to the Evaluation of Permanent Impairment.29,30 The AMA Guides are anatomically based (description and quantifiable physical examination measurements) and diagnosis related (history plus objective diagnostic findings). However, there are concerns related to validity and reliability,31,32 the inference of functional limitation from anatomically based impairment scales or findings,29 and the issue of pain as it relates to impairment.33 In cases involving a dispute between claimant and insurer concerning MMI determination or impairment rating, a physician examiner unfamiliar with the case can review the case records, examine the patient, and render a second opinion, referred to as an Independent Medical Examination (IME).
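When a worker has more than one ratable impairment, the AMA Guides combine ratings with a combined-values approach rather than simple addition, so the whole-person total can never exceed 100%. A sketch of the pairwise formula A + B(1 − A) that underlies the Guides' Combined Values Chart (the published chart rounds to whole percents, so small differences from this continuous calculation are possible):

```python
# Combine whole-person impairment ratings with the pairwise formula
# A + B*(1 - A), which underlies the AMA Guides' Combined Values Chart.
# The published chart rounds to whole percents; values here are continuous.

def combine(*ratings: float) -> float:
    """Combine whole-person impairment ratings given as fractions in [0, 1]."""
    total = 0.0
    for r in sorted(ratings, reverse=True):  # conventionally largest first
        total = total + r * (1 - total)      # each new rating applies to what remains
    return total

# Two 30% whole-person impairments combine to 51%, not 60%:
print(round(combine(0.30, 0.30), 2))  # 0.51
```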
A number of standardized assessment tools are available to assist the physician in determining physical performance expectations for a disabled worker. A Functional Capacity Evaluation (FCE) is a comprehensive assessment of an individual's strength, flexibility, endurance, and job-specific functional abilities, and is perhaps the most valid predictor of appropriate activity restrictions throughout the rehabilitation course and at MMI. When no specific job is available, a more generic Functional Capacity Assessment (FCA) can provide more global information to assist in job placement. A Job Description is a formal listing of the essential job functions and provides the basis of specific performance requirements. A Job Site Evaluation (JSE) determines optimal ergonomic design and validates performance requirements of the job. A JSE in conjunction with an FCE may be useful in determining work restrictions, need for accommodation, and employer/employee willingness to comply. There is a wide range of possible workplace modifications. Many modifications made for the worker with disabilities also may benefit other workers or even customers. Architectural barriers can be modified with ramps, railings, more easily opened doors, modified bathroom fixtures, and space to accommodate a wheelchair turning radius, to name a few. Work site adjustments, like ergonomic seating, placement of equipment for ease of use or reach, telephone headsets, use of switches, or lift-assist devices, can allow improved productivity. Print adjustment (larger size, Braille, raised lettering) and improved lighting will assist persons with vision impairment, as well as an aging population. Amplification systems, telephone devices for the deaf (TDD), and the use of vibration or lighting to alert individuals to surrounding activities are among the accommodations helpful to hearing-impaired workers.
Schedules to assist with cognitive and physical performance (e.g., focused tasks, routine breaks, schedule) and to allow for needed health activities (e.g., intermittent catheterization for healthy urinary management, allowance for position change) must also be considered. The Job Accommodation Network (JAN), a federally-financed consultation service, provides specific advice and information about various methods of accommodation.34 Independent Living Centers, regional ADA information centers, and many of the organizations that focus on a specific condition (e.g., National Spinal Cord Injury Association) also provide advice regarding workplace accommodation.
PREVENTION AND WELLNESS
With disability ranking among the nation's largest public health problems, prevention is pertinent. Primary prevention of unintentional injuries, occupationally related injuries or exposures, and other medical or health-related etiologies of disability is a part of the national agenda. However, primary prevention of other health issues or secondary conditions among persons with disabilities should also be acknowledged. This requires the use of traditional public health prevention strategies and the clinician's index of suspicion regarding possible secondary conditions. Secondary prevention is aimed at early recognition of disability or disability-producing activities, with reduction of risk factors for work disabilities and improvement in the quality of life. Appropriate modification of the workplace for a worker with a disability who has initiated or returned to work also is a secondary prevention strategy. Tertiary prevention is centered on the rehabilitation aspects of a return-to-work plan. Despite the medical complications and implications of disabling conditions, workers with disabilities are not ill or in poor health. There has been a paradigm shift from illness and disease to health and wellness. It is important to recognize health promotion for the worker with a disability, in spite of the disabling condition.
CONCLUSION
Workers with disabilities make an important contribution to the support of their families and to the economy as a whole. While the care, services, and policies for workers with disabilities in the past often worked to push these individuals out of the labor force, there is now a stronger focus on facilitating continued employment. This focus is reinforced by the ADA, which prohibits discrimination and requires workplace accommodation, and by the increasingly wide acceptance of the functional/environmental model to define disability and disability policy. Challenges remain to achieve fuller inclusion in the workforce of workers with disabilities. Key issues include the need to increase insurance coverage for accommodations and assistive devices crucial to the maintenance of function, and reduce the inaccessibility and inadequacy of transportation systems so that workers with disabilities can reach the workplace. There also is a continuing need to educate employers, co-workers, and other professionals about the positive qualities of the lives and abilities of people with disabilities.
REFERENCES
1. Nagi SZ. Disability and Rehabilitation: Legal, Clinical, and Self-Concepts and Measurement. Columbus, OH: Ohio State University Press; 1969.
2. Pope AM, Tarlov AR. Disability in America: Toward a National Agenda for Prevention. Institute of Medicine, Committee on a National Agenda for the Prevention of Disabilities, Division of Health Promotion and Disease Prevention. Washington, D.C.: National Academy Press; 1991:77.
3. Katz S. Assessing self-maintenance: activities of daily living, mobility, and instrumental activities of daily living. J Am Geriatr Soc. 1983;31(12):721–7.
4. LaPlante M, Miller K. People with Disabilities in Basic Life Activities in the U.S. Disability Statistics Abstract, No. 3, April. Disability Statistics Program, University of California, San Francisco. Washington, D.C.: U.S. Department of Education, National Institute on Disability and Rehabilitation Research; 1992.
5. McNeil J. Americans with Disabilities: 1991–92. U.S. Bureau of the Census, Current Population Reports, P70-33. Washington, D.C.: U.S. Government Printing Office; 1993.
6. World Health Organization. ICF Introduction. http://www3.who.int/icf/intros/ICF-Eng-Intro.pdf. Accessed November 21, 2004.
7. World Health Organization. Toward a Common Language for Functioning, Disability, and Health: ICF. Geneva: World Health Organization; 2002. http://www3.who.int/icf/beginners/bg.pdf. Accessed November 21, 2004.
8. World Health Organization. International Classification of Impairments, Disabilities, and Handicaps. Geneva: World Health Organization; 1980.
9. Haber LD. Issues in the definition of disability and the use of disability survey data. In: Levine DB, Zitter M, Ingram L, eds. Disability Statistics: An Assessment. Report of a workshop, National Research Council. Washington, D.C.: National Academy Press; 1990: Appendix B, 35–51.
10. Gold MR, Stevenson D, Fryback DG. HALYs and QALYs and DALYs, oh my: similarities and differences in summary measures of population health. Ann Rev Pub Health. 2002;23:115–34.
11. Murray CJL, Acharya AK. Understanding DALYs. J Health Econ. 1997;16:703–30.
12. A version of this work disability question appears in the 1966 Survey of the Disabled, the 1972 Survey of Health and Work Characteristics, the 1978 Survey of Disability and Work, the various editions of the Survey of Income and Program Participation, the various waves of the Panel Study of Income Dynamics, the Current Population Survey, and the 1980 and 1990 U.S. Census.
13. Waldrop J, Stern SM. Disability Status: 2000. Census 2000 Brief, C2KBR-17. U.S. Census Bureau: Department of Commerce; 2003:10.
14. U.S. Census Bureau. Disability Selected Characteristics of Persons 16 to 74: 2004. http://www.census.gov/hhes/www/disable/cps/cps104.html, Table 1. Accessed November 30, 2004.
15. Kinne S, Patrick DL, Doyle DL. Prevalence of secondary conditions among people with disabilities. Am J Pub Health. 2004;94:443–5.
16. U.S. Census Bureau. Labor Force Status—Work Disability Status of Civilians 16 to 74 Years Old, by Educational Attainment and Sex: 2004. http://www.census.gov/hhes/www/disability/cps/cps204.html, Table 2. Accessed July 30, 2005.
17. National Organization on Disability. 2004 N.O.D./Harris Survey of Americans with Disabilities. Washington, D.C.: National Organization on Disability; 2004. Available at http://www.nod.org/Resources/harris2004/harris2004_data.pdf.
18. Louis Harris & Associates. N.O.D./Harris Survey of Americans with Disabilities. New York: Louis Harris and Associates, Inc.; 1994.
19. Baldwin ML, Johnson WG. Dispelling the myths of work disability. In: Thomason T, Burton JF, Jr, Hyatt DE, eds. New Approaches to Disability in the Workplace. University of Wisconsin-Madison, Madison, WI: Industrial Relations Research Association; 1998:39–61.
20. Johnson WG, Baldwin M. The Americans with Disabilities Act: will it make a difference? Policy Studies J. 1994;21(4):775–88.
21. Equal Employment Opportunity Commission. Americans with Disabilities Act of 1990 (ADA). Charges FY1992–FY2004. http://www.eeoc.gov/stats/ada-charges.html. Accessed July 27, 2005.
22. McMahon BT, Edwards R, Rumrill PD, Hursh N. An overview of the national EEOC ADA research project. WORK: J Prevent Disability Rehab. 2005;24(1):1–7.
23. National Council on Disability. National Council on Disability Recommendations: Workforce Investment Act Reauthorization. Washington, D.C. http://www.ncd.gov/newsroom/publications/2005/pdf/workforce_investment.pdf. Accessed March 17, 2005.
24. U.S. Government Accountability Office. Workforce Investment Act: Labor Has Taken Several Actions to Facilitate Access to One-Stops for Persons with Disabilities, but These Efforts May Not Be Sufficient. GAO-05-54; 2004.
25. Thornton C, Livermore G, Stapleton D, et al. Evaluation of the Ticket to Work Program: Initial Evaluation Report. Mathematica Policy Research, Inc. and Cornell University. http://www.mathematica.org/publications/PDFs/evalttw.pdf. Accessed February 2004.
802
Environmental Health
26. Rothstein MA. Medical Screening and the Employee Health Cost Crisis. Washington DC: Bureau of National Affairs: 1989. 27. Derr PG. Ethical considerations in fitness and risk evaluations. In: Himmelstein JS, Pransky GS, ed. Worker Fitness and Risk Evaluations, State Art Rev Occup Med. Philadelphia: Hanley & Belfus; 1988. 28. Carey TS, Hadler NM. The role of the primary physician in disability determination for social security insurance and workers’ compensation. Ann Intern Med. 1986;104:706–10. 29. Rondinelli RD. Practical aspects of impairment rating and disability determination. In: Braddom RL, ed. Physical Medicine & Rehabilitation. Philadelphia: WB Saunders Company, 1996.
30. Andersson GGJ, Cocchiarella L. Guides to the Evaluation of Permanent Impairment. 5th ed. Chicago, American Medical Association; 2000. 31. Lankhorst GJ, Van de Stadt RJ, Van der Korst JK. The natural history of idiopathic low back pain. Scand J Rehabil Med. 1985;17:1–4. 32. Matheson LN. Symptom magnification syndrome structured interview: rationale and procedure. J Occup Rehabil. 1991;1:43–56. 33. Osterweis M, Kleinman A, Mechanic D, eds. Pain and Disability: Clinical, Behavioral, and Public Policy Perspectives. Washington D.C.: National Academy Press; 1987. 34. Job Accommodation Network, a service of the Office of Disability Employment Policy of the U.S. Department of Labor, located at West Virginia University and accessible at http://www.jan.wvu.edu/.
42
Environmental Justice: From Global to Local Howard Frumkin • Enrique Cifuentes • Mariana I. Gonzalez
INTRODUCTION
Environmental health, in its broadest sense, connotes places that are free of exposures that threaten human health and that promote healthy, wholesome lives. Such places may be defined on a very small scale— a home, a workplace, or a neighborhood—or on a much larger scale—a river system, a metropolitan area, or the entire earth. Healthy environments are not equally distributed across populations. Within the United States, the term “environmental racism” emerged in the 1980s, reflecting evidence of disparities across racial groups (and ethnic and income groups as well) in exposures to environmental toxins.1,2 Indeed, there is increasing recognition that members of ethnic and racial minorities, whether in the workplace or in community settings, sustain disproportionate risk from chemical, physical, biological, and psychological hazards. These disparities, in turn, are related to health disparities, which have been defined as differences in health—or likely determinants of health—that are systematically associated with different levels of underlying social advantage or position in a social hierarchy. Braveman et al.3 explain that social advantage or position is reflected by economic resources, occupation, education, racial/ethnic group, gender, sexual orientation, and other characteristics associated with greater resources, influence, prestige, and social inclusion. “Environmental justice” is a complementary term. While it explicitly refers to fair and equitable access to healthy environments, it also evokes broader underlying themes important in public health: access to information, community-based participatory decision-making, and social justice.4 Environmental justice is global in scope. The underlying notion is that economic and social disadvantages carry an increased risk of harm related to environmental exposures, a pattern that emerges both within nations and across national boundaries. 
The concept of environmental justice—or distributive and procedural justice with respect to environmental goods—has a long history,5 rooted in the teachings of major religions and the practices of ancient societies.6–8 In recent years, environmental justice has been recognized as a subset of human rights. In the early 1970s, the United Nations Conference on the Human Environment declared that “Man has the fundamental right to freedom, equality and adequate conditions of life, in an environment of a quality that permits a life of dignity and well-being”.9 Twenty years later, the UN Draft Principles on Human Rights and the Environment began with these three statements:
1. Human rights, an ecologically sound environment, sustainable development and peace are interdependent and indivisible.
2. All persons have the right to a secure, healthy and ecologically sound environment. This right and other human rights, including civil, cultural, economic, political and social rights, are universal, interdependent and indivisible.
3. All persons shall be free from any form of discrimination in regard to actions and decisions that affect the environment.
(DDHRE, UN 1994)
Within the United States, environmental justice concerns have focused on ethnic and racial minorities, including African-American, Hispanic, and Native American communities, and on poor and immigrant communities.10 A robust literature, including empirical data, policy analysis and commentary, and government reports, is now available (see the Further Reading section). On a global scale, environmental justice concerns have focused on indigenous peoples, communities, regions, and even entire nations in poor regions of the world. The environmental justice literature at the global scale is sparser, consisting of a small number of journal articles, grey literature, and internet documents. In several respects, environmental justice issues in wealthy and poor countries are comparable. In both settings, environmental justice issues arise both in the workplace and in the ambient environment. And in both wealthy and poor countries, affected populations live and work in patterns that distinguish them from the general population and from each other. They are likely to be composed of ethnic minorities. They tend to have less education, lower income, poorer housing, worse health status, and less access to services such as health care and legal support, compared with majority groups. The spatial scale of environmental justice is different in wealthy and poor countries. In wealthy countries, landmark environmental justice struggles have typically arisen in disadvantaged local communities, such as Warren County, North Carolina, Anniston, Alabama, or Calcasieu Parish, Louisiana. Such local struggles certainly occur in poor countries, but environmental justice struggles also arise on the national scale, when entire nations become favored destinations for hazardous waste or hazardous industries.

ENVIRONMENTAL HEALTH DISPARITIES: MECHANISMS
One or more of several mechanisms may contribute to the increased risk that underlies environmental justice concerns. These include excessive exposures, greater susceptibility, inadequate technical resources, and inadequate implementation of public health policies. These mechanisms operate in both the workplace and the general environment.
Excessive Exposures

In wealthy countries, certain populations work in relatively more dangerous industries and/or jobs than others. Classic examples include the Gauley Bridge, West Virginia, mining disaster of 1935, in which hundreds of minority workers succumbed to acute silicosis after working at unprotected tunnel-drilling jobs;11 the steel industry of the mid-twentieth century, in which black workers were far more likely than white workers to be assigned to the most dangerous “topside” jobs on coke ovens, with attendant carcinogenic exposures;12,13 and the agricultural sector, in which predominantly minority workers sustain excessive exposures to pesticides and other hazards.14,15 Excessive exposures can also be seen in the workplaces of poor countries.16 In part, this reflects the central economic role of agriculture and primary extractive industries—often dangerous and polluting activities—in poor countries. In part, it reflects unique environmental conditions, such as tropical weather, which increases the risk of heat stress among workers. Challenges also arise from work practices, such as work weeks that greatly exceed 40 hours and the use of imported, obsolete production machinery. Perhaps most significantly, industry in developing countries has weaker legal and technical resources, so hazardous exposures tend to be less controlled and therefore more intense. Exposures may reach levels not seen in developed nations for many decades.
This pattern has been documented across the developing world, in Africa,17,18 China,19 and other parts of Asia,20–23 and Latin America and the Caribbean,24,25 for hazards ranging from asbestos26 to musculoskeletal hazards,27 from shiftwork28 to job insecurity.29 Particular concerns exist for susceptible working populations such as women,30,31 children,32–35 and workers in the informal sector.36 Moving beyond workplaces to the general environment, at-risk communities in both wealthy and poor countries also confront excessive hazardous exposures. In the United States, minority neighborhoods are more likely than white neighborhoods to be located near environmental hazards such as polluting factories and hazardous waste sites.37,38 A considerable body of work in recent years, much of it in the form of correlational studies, small area case studies, and ecological studies using Geographic Information Systems (GIS) or similar techniques, has demonstrated this pattern1,2,39–41 with respect to a range of exposures, including air pollutants;39,42 hazardous waste sites;1,43,44 water pollution (Calderon, 1993); and lead, mostly from substandard housing45,46 but also from road traffic.47 Some environmental disparities are not toxicologic in nature; examples include squalid neighborhoods,48 scarcity of healthy food alternatives in retail stores,49,50 and inadequate public transit.51,52 In poor countries, environmental conditions can be far worse, and hazardous exposures far more widespread.
Levels of urban air pollutants in many cities are well above those in the dirtiest North American and European cities.53 The resulting risk of respiratory and cardiovascular disease is especially severe for vulnerable populations such as children.54 Indoor air pollution is also a persistent problem in both urban and rural areas, due to the use of biomass fuels, coal, and other dirty-burning fuels in inefficient stoves or open hearths.55,56 Many poor nations lack potable drinking water, and the management of stormwater and sewage is hampered by scarce resources, informal settlements, and other factors.57 As a result, water supplies commonly carry microbiological and chemical contaminants.58,59 This contributes to the staggering burden of diarrheal disease in poor countries; in what has been called a “silent emergency,” a child in a poor nation dies every 15 seconds of a water-borne disease, the equivalent of 20 jumbo jets crashing every day.60 An additional burden is naturally occurring contaminants such as arsenic, which has caused a public health emergency in Bangladesh.61 Deficient solid waste management practices in poor countries compound problems with water, causing exposures to gastrointestinal and respiratory disease risks.62,63 Hazardous wastes are often mismanaged,64 a problem that is compounded by the export of hazardous wastes from wealthy to poor countries.65–67 As urbanization proceeds rapidly in poor countries, all these environmental health problems, together with inadequate housing, transport, health-care
services, and other infrastructure, create enormous challenges in urban environmental health.68,69 Country-level case studies have explored the interrelationships and impacts of these problems.70,71
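The "jumbo jet" equivalence cited above can be checked with simple arithmetic. The assumed aircraft capacity of roughly 290 passengers is an illustrative figure not given in the text:

```python
# One child death from water-borne disease every 15 seconds
seconds_per_day = 24 * 60 * 60          # 86,400 seconds in a day
deaths_per_day = seconds_per_day // 15  # 5,760 deaths per day

# Assumed capacity of a "jumbo jet" (~290 passengers; assumption, not from the text)
jumbo_capacity = 290
equivalent_jets = deaths_per_day / jumbo_capacity

print(deaths_per_day)           # 5760
print(round(equivalent_jets))   # 20
```

A death every 15 seconds thus yields about 5,760 deaths per day, consistent with the quoted figure of roughly 20 fully loaded large aircraft.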
Greater Susceptibility

Independent of excessive hazardous exposures, members of minority and poor communities may be especially susceptible to the effects of hazardous exposures. One or more of several mechanisms may operate: increased baseline risk of certain diseases to which occupational and environmental exposures further contribute; increased probability of other exposures that may combine with workplace or environmental exposures to harm health; increased genetic susceptibility; and increased general susceptibility to disease through stress, poverty, and decreased social supports. Increased baseline risk of certain diseases: Many common illnesses are multifactorial in etiology. While occupational and environmental exposures may contribute to illness, so may a range of genetic, social, environmental, and lifestyle factors. If members of minority groups, poor people, and/or people in poor nations carry an increased baseline risk for some of these illnesses, then workplace and environmental exposures could pose special hazards for these groups. For example, in the United States, lung cancer incidence is approximately 50% higher in black men than in white men.72 The black excess is not fully explained by differences in smoking; although the prevalence of smoking is higher among black men than among white men, blacks initiate smoking at a later age and smoke fewer cigarettes than do whites. Based on this increased risk, blacks may be especially susceptible to the effects of further exposures to lung carcinogens in the workplace or general environment.
Similarly, asthma prevalence and mortality differ by race and ethnicity; the prevalence is 122 per 1000 in blacks and 104 per 1000 in whites, and asthma mortality is approximately three times higher in blacks than in whites.73 The racial and ethnic disparities may imply that minority populations are especially susceptible to the effects of any of the hundreds of environmental agents known to cause or aggravate asthma.74 Other examples are relevant on a global scale. People in poor nations may suffer from anemia because of nutritional deficiencies; accordingly, they are more susceptible to the effects of lead, which interferes with heme synthesis. Asian populations have a high prevalence of hepatitis B antigenemia, which may increase susceptibility to hepatotoxins. Endemic parasitic disease in some poor nations reduces work capacity and immune competence, and increases susceptibility to a range of diseases.75 Increased probability of other exposures that may combine with workplace or environmental exposures to harm health: Members of poor communities may be excessively exposed to risk factors that aggravate the effects of workplace or environmental exposures. Examples include exposures in the home environment, behavioral factors such as alcohol consumption, and cultural practices. Poor people and members of minority groups are at risk of living in substandard housing.76,77 Depending on circumstances, this housing may entail exposures to biomass fuel smoke, lead dust, secondary tobacco smoke, and antigens such as cockroaches and dust mites, compounded by inadequate ventilation.78 Such home exposures could aggravate the effects of workplace and ambient environmental exposures to respiratory hazards. Behavioral factors such as alcohol abuse vary to some extent by ethnicity,79–81 and heavy drinking can increase the risk of injuries and compound the potential toxicity of liver toxins. 
Finally, cultural practices may entail hazardous exposures; examples include lead-containing cosmetics such as kohl and surma,82,83 and traditional medications that contain mercury.84 Increased genetic susceptibility: It has long been recognized that several single-gene disorders vary in frequency among different racial and ethnic groups. Among blacks, disorders that are relatively prevalent include glucose-6-phosphate dehydrogenase (G6PD) deficiency, hemoglobinopathies (HbS and HbC), and alpha and beta thalassemias.85 Moreover, differences in the ability to metabolize certain drugs, related to polymorphisms of one or more
gene loci, have been associated with specific racial and ethnic backgrounds. One example is debrisoquin hydroxylase (also known as CYP2D6), a cytochrome P450 enzyme that catalyzes the oxidation of more than 30 drugs. Compared to whites, blacks and Asians have fewer abnormalities of this enzyme.86 Such abnormalities may either increase or decrease risk, depending upon the metabolic fate of a particular chemical. Increasingly, with growing success at mapping the human genome, individual genes have been associated with specific racial or ethnic groups, and with specific diseases. Cancer risk is a special area of interest with respect to genetic polymorphisms. Some genes that alter cancer risk have been reported to vary by race. For example, mutations of the CYP1A1 gene, which is involved with the metabolism of polycyclic aromatic hydrocarbons, are thought to increase the risk of lung cancer among smokers; this abnormality is more common among blacks than whites.87–89 While genetic differences in disease susceptibility are increasingly recognized, their practical significance in occupational and environmental health remains limited. In the first place, few genetic factors have been clearly demonstrated to increase the risk of specific occupational diseases. Workers with G6PD deficiency are susceptible to hemolytic crises following exposure to oxidants such as naphthalene and trinitrotoluene,90 but this event is unusual. Perhaps more importantly, job applicants have been excluded from certain jobs because of purported genetic risks, a practice that has been recognized as racial or ethnic discrimination.91–93 Hence, although genetic bases for susceptibility are being increasingly recognized, an emphasis on primary prevention—decreasing exposures to levels that are safe for all persons—remains the preferred approach.
Increased general susceptibility to disease through stress, poverty, and decreased social supports: Both in the United States and on a global scale, poor communities suffer poor baseline health due to poor nutrition, highly prevalent infectious diseases, poor access to immunization and other health-care services, stress, and other factors, as reflected in high infant mortality rates, low life expectancy, and other health indicators.94 Poverty, and the income inequality that accompanies it, are bad for health.95,96 As a result, these populations may be less resilient following a wide range of hazards, including infectious, chemical, physical, and radiologic exposures.
Inadequate Technical Resources

Poor nations face severe shortages of trained personnel essential to environmental and occupational health practice. Adequately trained industrial hygienists, safety professionals, and environmental engineers who would be able to recognize, assess, and control hazards are in short supply. So are epidemiologists with skills in surveillance, who would be able to monitor disease and injury trends and identify problem areas. Health-care providers such as occupational physicians and nurses are scarce, preventing adequate diagnosis and treatment of environmental and occupational illnesses. In addition to these human resources, equipment such as environmental measurement devices, analytical laboratories, and even vehicles to permit travel to field locations are often unavailable. These shortages of technical resources pose an obstacle to effective environmental and occupational health practice.
Inadequate Implementation of Public Health Policies

In both wealthy and poor countries, poor people and members of minority communities are less able to rely on protective public health actions. For example, research in the 1980s documented a pattern of selective enforcement of environmental laws across the United States, with less stringent enforcement in minority communities.97 Minority workers are more likely to be employed in small and/or marginal firms, firms that are less likely to implement workplace safeguards because of scarcity of resources and know-how. And in poor nations, public health policies—epidemiologic and hazard surveillance, provision of information, regulatory enforcement, and private sector voluntary actions—are less consistently applied.
THE GLOBAL CONTEXT
At the global level, environmental justice is a subset of global environmental affairs, which in turn reflect demographic changes, trends in resource use, and practices in manufacturing, transportation, energy, and other sectors. Population growth in most of the world’s poor countries, population shifts from rural to urban areas with the resulting growth of large cities, economic liberalization with rapid growth in the manufacturing and service sectors (in at least some countries), and depletion of key resources, all play a role in defining the environmental conditions that affect health.98 Several processes are illustrative: the growth of multinational companies, the development of free trade zones, and the promulgation of multilateral free trade agreements.
Multinational Companies

Multinational companies have increased in size, wealth, and international reach over recent decades. Half of the 100 largest economies around the globe are not countries but rather multinational corporations (MNCs). The 500 largest MNCs now account for 70% of world trade, 30% of all manufacturing exports, and 80% of technical and management services.99 Many MNCs have the resources and expertise to implement environmental and occupational safety and health practices in their facilities worldwide. Indeed, in many instances the MNC facilities in poor countries are leading local examples of sound practice.100 However, MNCs have also taken advantage of lax regulations to avoid standards of practice that prevail in rich countries. Examples include petrochemical industry operations in Ecuador101 and Nigeria,102,103 mining operations in Indonesia,104,105 and assembly plants in northern Mexico106 (see case studies in this chapter). Moreover, there is typically a set of domestic firms associated with MNCs, supplying components, packaging, and other inputs, and these firms may not implement optimal environmental and workplace practices.
Free Trade Agreements

Multilateral free trade agreements (FTAs) evolved throughout the world after World War II. To the extent that these agreements promote economic development in poor countries, they improve living standards and environmental performance. And to the extent that these agreements incorporate related social issues such as working conditions and environmental protection, they may motivate governments and private firms to protect environmental and occupational health. On the other hand, trade agreements that omit environmental and workplace standards may permit and even aggravate health risks that accompany development.107,108 Several major trade agreements are illustrative. The World Trade Organization (WTO) succeeded the General Agreement on Tariffs and Trade (GATT), a system that originated in 1948 as an effort to liberalize and normalize world trade. With the creation of the WTO, the scope of this system expanded from trade in goods to include trade in services and intellectual property as well. Both GATT and the WTO have been generally silent on issues of worker safety, environmental health, and justice, restricting their domain to problems that bear directly on trade.109 The process of European economic integration, in contrast, has extended well beyond trade issues to incorporate a wide range of social considerations, including occupational safety and health. Since the 1957 creation of the European Economic Community (now the European Union, or EU), foundational documents such as the Single European Act, the Social Charter, and the Framework Directive have established the intent to include environmental and workplace issues in the European trade regime. On the environmental side, a series of multiyear Action Programmes began in 1972, setting the stage for over 200 pieces of environmental legislation addressing such issues as waste management, water pollution, and air pollution.
The EU has also integrated environmental considerations into laws and policies in other sectors such as agriculture, energy, transport, and tourism.
The Sixth Action Programme for the Environment, adopted in July 2002, identified four priority areas, of which one is environmental health (the others are climate change, nature and biodiversity, and the management of natural resources and waste). In this context, important environmental health initiatives such as the Precautionary Principle and a regulatory system known as REACH (Registration, Evaluation and Authorization of Chemicals) have arisen.110–112 On the workplace side, action on occupational health dates from the 1952 formation of the European Coal and Steel Community. Since then, an extensive organizational infrastructure has arisen, including a Health and Safety Directorate within the European Commission, a multipartite Advisory Committee on Safety, Hygiene and Health Protection at Work (ACSH), and two agencies that conduct research and provide information and technical assistance, the European Foundation for the Improvement of Living and Working Conditions, based in Dublin, and the European Agency for Safety and Health at Work, based in Bilbao. Together these entities have worked with national counterparts on capacity building (such as national exchanges of workplace inspectors), policy development, and technical innovation. The 2002 EU document on occupational health, “Adapting to change in work and society” (available at http://europa.eu.int/comm/employment_social/health_safety/index_en.htm), introduced several new elements, including an enhanced emphasis on psychosocial aspects of work; a consolidated approach to risk prevention combining legislation, social dialogue and partnerships, best practices, corporate social responsibility, and economic incentives; and an explicit statement of the value of occupational health policy in economic competitiveness.
Several challenges persist in the European approach to environmental and occupational health—reconciling national sovereignty with coordinated progress, monitoring compliance with Community directives, reconciling differences between more and less progressive countries, integrating the countries of the former Soviet Union, and sharing scarce technical expertise and resources. However, Europe provides the world’s most advanced example of linking environmental and occupational health with free trade. The North American Free Trade Agreement (NAFTA), ratified by the United States, Mexico, and Canada in 1992 and entered into force in 1994 for implementation over the following decade, was designed to abolish most trade restrictions among the three countries, addressing labor rights and environmental protection through its “side agreements” (NAFTA, 1994). NAFTA is intermediate between the WTO and the EU in terms of its inclusion of labor issues. The process that led to NAFTA differed from the European experience in several ways. NAFTA had a shorter history and was negotiated rapidly. There was limited interest in incorporating social and environmental issues into the process. Labor unions and their allies, especially in the United States and Canada, vigorously opposed NAFTA and campaigned more to block the treaty altogether than for specific labor-friendly provisions. Some environmental groups, in contrast, participated in negotiations, perhaps accounting for the relatively greater emphasis on environmental practices. Moreover, all three governments were reluctant to relinquish any sovereignty over their respective labor and environmental laws. Accordingly, the focus in NAFTA is on dispute resolution, some information exchange, and promoting each country’s compliance with its own labor and environmental laws, rather than on joint research, training, standard-setting, technology development, and related initiatives.
Neither the main trade agreement nor the side agreements express a shared commitment to upgrading or harmonizing environmental and occupational health laws or practices. Given these limitations, while NAFTA’s contribution to advancing environmental and occupational health in North America has yet to be fully evaluated, it is likely to be limited.113–120 Assessing the impact of NAFTA is complex, since environmental and occupational health reflects many other forces—economic, technological, and political—in addition to NAFTA itself.121 In some sectors, there is evidence of deepening environmental health problems. For example, along the United States–Mexico border, the volume of freight transport has increased air pollution, and rapid growth has aggravated water
pollution and water scarcity. In other sectors, such as fisheries, evidence suggests little impact of NAFTA. There is little evidence that economic growth following the adoption of NAFTA has led to major investments in environmental services or infrastructure, or major improvements in environmental indicators. However, the prediction of pollution havens also did not materialize to the extent predicted, with possible exceptions in specific industry sectors such as denim.122 On the labor side, the effect of NAFTA on workplace safety and health is also complex. Few data are available that would permit tracking workplace conditions and health outcomes, and associating any observed trends with NAFTA. Several cases have been brought under the NAFTA grievance procedure, alleging serious health and safety hazards in Mexican facilities and failure of the Mexican government to enforce applicable laws. Even when these cases are found in favor of the complainants, advocates maintain that the solutions are ineffective, amounting only to calls for consultation.123
Free Trade Zones and Maquiladoras

Many governments have established special zones to promote global economic activity, typically located near seaports, airports, and/or national borders.124 From their origin in the early 1970s to the 1990s, about 200 free trade zones (FTZs) had been established, employing approximately 4 million people. In the next 10 years, especially with the rise of China as a global manufacturing and export powerhouse, FTZs proliferated. By 2003, FTZs employed approximately 42 million people, about 30 million of these in China.125 Free trade zones may offer any or all of several benefits to manufacturing and trading firms, including tax relief; low or absent customs duties; reduced export controls; land, water, energy, and infrastructure subsidies; a plentiful and tightly controlled labor supply with low wages and weak institutions; and limited enforcement of labor and environmental laws.99,126 The economic activity in FTZs is typically labor-intensive, centering on manufacturing, although service work is also increasingly common. The workforces are often predominantly female. Workers may face physical hazards such as repetitive motions, awkward work positions, and noise, with risks of musculoskeletal disorders and hearing loss; chemical exposures; and stresses such as highly regimented work routines and dangerously high production quotas. Maquiladoras are assembly plants south of the United States–Mexico border which typically import components from the United States and other countries, complete assembly and other value-added processes, and re-export the products. The maquiladoras produce a wide variety of products, including electrical and electronic equipment, automobile parts, toys, clothing, and others (see case studies below). Health and environmental studies in maquiladoras are scarce, but do suggest a high burden of musculoskeletal disorders, stress, and other health burdens.
The Export of Hazards and the Race to the Bottom Scholars, public health practitioners, and labor advocates have for some years recognized that increasing international trade may threaten worker health and safety. In the 1980s, considerable attention was devoted to the “export of hazard.”127–131 Concern grew out of observations of double standards;132 critics anticipated that industries from developed nations would relocate plants to developing nations because of lower labor costs, more lax regulatory environments, and in some cases proximity to raw materials and/or markets.133 In doing so, they would fail to follow the workplace and environmental safeguards required in their countries of origin, creating “pollution havens” and exposing people in developing nations to relatively greater risks. Case studies of such products as pesticides134–137 and hazardous wastes,138–140 and high-profile disasters such as the Bhopal explosion,141–145 fed concern that developing nations faced serious risks from rapid industrialization. Interestingly, these concerns recapitulated longstanding disparities within developed nations. For example, the migration of industry from northern to southern states—that is, from wealthier to poorer regions—in the United States during the nineteenth and early twentieth centuries may have demonstrated a similar pattern.146,147
The same process was also postulated as a threat to environmental and occupational health in industrial nations. North American labor unions forcefully argued this point during the NAFTA debates in the early 1990s.148–152 Local and national governments, they maintained, would hesitate to enforce regulations for fear of driving plants from their jurisdictions to lower-wage areas and losing needed jobs—a phenomenon known as “regulatory chill.” Workers and communities, perceiving the same dilemma, would refrain from pressing for safer workplaces and lower emissions. And firms in developed nations, increasingly facing international competitors and seeking the lowest possible costs, would play one location against another. Standards of practice would descend in developed nations toward those of developing nations, exactly as predicted by the factor price equalization theorem that is central to the economics of free trade. This “race to the bottom” would threaten environmental and occupational health in both developed and developing nations. An opposing set of arguments, based on economic theory and empirical data, suggested a more optimistic prognosis: that free trade would spur economic development, which would in turn lead to improved environmental performance. Moreover, liberalization of trade and investment would lead to the spread of greener technologies, especially as higher consumer expectations for “green” products and processes emerged in an open, competitive market. Finally, increasing foreign direct investment was predicted to make funds available for upgrading industrial facilities.119,153–156 Further research will help clarify when, and in what circumstances, trade liberalization advances environmental and occupational health. GLOBAL SOLUTIONS
Several strategies may combine to advance environmental and occupational health in poor nations, in the process rectifying disparities that exist on a global scale. Some of these are policy initiatives, to which environmental and occupational health professionals can contribute as advisors and advocates. Others, such as training and research, fall within the traditional domain of public health practice.
Policy Initiatives Policy initiatives include those that are official and legally binding, such as regulatory standards promulgated by governments, and those that are voluntary. Official standards may be developed in the context of trade agreements, but as noted above, with the exception of the European Union, this linkage is rare and controversial. More typically, standards are promulgated by national government agencies with jurisdiction over labor and environment. Two sources of standards are available to the governments of developing nations for this purpose. First, many adopt specific exposure standards used in industrialized nations, especially those of the United States, Germany, Japan, Russia, and the Nordic countries. Of note, such standards are only as effective as their enforcement mechanisms, and enforcement often lags well behind promulgation. Second, developing nations may model their policies on relevant international norms, such as the Conventions of the International Labor Office,157 or on treaties such as the Montreal Protocol on Substances that Deplete the Ozone Layer,158 the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes and their Disposal,159 or the Stockholm Convention on Persistent Organic Pollutants.160 In general, such norms are legally binding when ratified by member nations. In some cases, such as the Montreal Protocol, global environmental diplomacy has been highly effective in reducing a hazard. However, such efforts are often compromised by the failure of important nations to ratify them and by limited compliance among nations that do, limiting their impact. Voluntary standards for environmental and occupational health are also available. A principal example is the International Organization for Standardization (ISO), which has promulgated internationally recognized standards on quality management (ISO 9000) and environmental management (ISO 14000). The ISO considered promulgating a
Environmental Justice: From Global to Local
standard for occupational health and safety management systems in the mid-1990s, but this proposal was set aside in late 1996 due to strong opposition from businesses, labor organizations, and some governments. Other voluntary standards or Codes of Practice have been promulgated by international agencies. For example, the Organisation for Economic Co-operation and Development has published Guiding Principles for Chemical Accident Prevention, Preparedness and Response (www2.oecd.org/guidingprinciples/index.asp), the United Nations Environment Programme has promulgated a Code of Ethics on the International Trade in Chemicals (www.chem.unep.ch/ethics/english/CODEEN.html), the ILO has promulgated numerous Codes of Practice on workplace health and safety (www.ilo.org/public/english/protection/safework/cops/english/index.htm), and the International Programme on Chemical Safety (IPCS, a joint program of the ILO, UNEP, and WHO) has published Health and Safety Guides for nearly 100 chemicals (www.inchem.org/pages/hsg.html) and International Chemical Safety Cards, including handling recommendations, for over 700 chemicals (www.inchem.org/pages/icsc.html). The recommendations in these documents are readily available to authorities and practitioners in developing nations, and are often based on a careful review of the evidence available at the time they are prepared. They may be of special utility to multinational firms that wish to establish a standard of practice in each of their operating locations. However, the resources to implement them may be out of reach, especially for smaller firms and governments in developing nations. Some voluntary standards have been issued by industry groups.
For example, the International Chamber of Commerce developed the 16-point “Business Charter for Sustainable Development,” which was endorsed during its 1991 Rotterdam conference.161 The American Chemistry Council (formerly the Chemical Manufacturers Association) introduced its Responsible Care Program in 1988 to improve the industry’s safety and environmental performance (www.responsiblecareus.com/about.asp). Segments of some industries, such as coffee and chocolate production, have adopted “fair trade” practices that include both occupational health and environmental safeguards.162 These principles are widely available and may be used by environmental and occupational health professionals in developing nations as useful benchmarks. Voluntary standards and codes have also been issued by nongovernmental organizations. Although these lack official status, they may be effective in the context of public education, consumer campaigns, stockholder campaigns, and similar efforts. One example is the Ceres Principles (formerly the Valdez Principles, which grew out of the response to the Exxon Valdez disaster) (www.ceres.org/coalitionandcompanies/principles.php). These principles were promulgated in 1989 by the Coalition for Environmentally Responsible Economies (Ceres), a national network of banks, investment funds, brokers, environmental organizations, and other public interest groups working to advance environmental stewardship. They aim to promote environmentally sustainable operation by companies, and include commitments to protection of the biosphere, sustainable use of natural resources, reduction and safe disposal of waste, energy conservation, and environmental restoration.
There is an explicit commitment to risk reduction—striving to “minimize the environmental, health, and safety risks to our employees and the communities in which we operate through safe technologies, facilities, and operating procedures, and by being prepared for emergencies”—and to providing safe products and services. And there are procedural commitments, such as to informing the public and conducting audits. Advocates promote the implementation of Ceres principles by working through stockholders and by encouraging socially responsible investment. Other nongovernmental organizations have issued standards that include environmental performance and safe working conditions, and have organized consumer campaigns to promote compliance with these standards through market pressure. Particular attention has been directed at the apparel, carpet, and toy industries. Examples include standards issued by the Clean Clothes Campaign (www.cleanclothes.org/codes/index.htm), the Fairtrade Labelling Organizations International (FLO, at www.fairtrade.net), and Co-op America (www.coopamerica.org/).
Environmental Health
A final kind of policy relates to investment patterns. Lending institutions, such as the World Bank, and private lenders can link investment in industrial development to the implementation of sound workplace and environmental policies. Increasingly, major projects are contingent on environmental impact statements and proper provisions for environmental safeguards.
Initiatives by Public Health Professionals Within the realm of public health work, professionals can promote environment and health in developing nations in several ways. These include training, technical assistance, collaborative research, and advocacy. Training is an essential activity for occupational health professionals, given the shortages of expertise in industrial hygiene, environmental and safety engineering, occupational and environmental medicine and nursing, and related fields. Approaches to training include formal academic study in institutions in industrialized nations, short courses, and distance learning through newsletters and electronic means. One notable example is the extensive training efforts of the Finnish Institute of Occupational Health, through its ILO/FINNIDA African Safety and Health Project and its ILO/FINNIDA Asian-Pacific Regional Programme on Occupational Safety and Health. Regional newsletters (www.ttl.fi/AfricanNewsletter and www.ttl.fi/AsianPacificNewsletter for Africa and Asia, respectively) published by the Institute are distributed to thousands of readers in developing nations of Africa and Asia, covering such topics as information retrieval, small-scale enterprises, and specific industries. In the United States, the Fogarty International Center of the National Institutes of Health introduced an international training program in environmental and occupational health in 1995.163 This program brings trainees from developing nations to U.S. institutions for intensive study. Through large-scale efforts such as these, and through more limited training initiatives, needed expertise in occupational safety and health can be transferred to developing countries. Technical assistance is another important area of effort for public health professionals. 
Joint investigations of outbreaks, consultancies on environmental and occupational health problems, and direct technology transfer can all advance the protection of workers in developing countries. Technical assistance may occur through private firms, professional associations, nongovernmental organizations, multilateral organizations such as the International Labor Organization, and/or government efforts. One example is the work of the Maquiladora Health and Safety Support Network (www.mhssn.org), which assists labor organizations and employers along the United States–Mexico border and in Asia to recognize and remediate workplace hazards. Collaborative research is a third important activity for environmental and occupational health professionals. Would-be researchers in developing nations face daunting challenges: university salaries below subsistence levels, requiring outside employment; lack of infrastructure such as libraries, computers, analytical testing capability, and laboratory equipment; lack of domestic sources of research funding; lack of research mentors and collaborators; and the need to address diverse content areas rather than build in-depth specialization. Despite these challenges, there remain important research needs in developing nations.164–168 One goal of such research, as anywhere in the world, is the discovery of unknown exposure-response associations and disease mechanisms. Just as important, however, is the traditional public health research function of documentation. Many workplace and environmental hazards are well understood, and their effects easily predicted. However, it may require in-country data demonstrating that a hazard is taking a toll on local workers and communities to stimulate government action to control the hazard. Finally, environmental and occupational health professionals in developing nations, with the strong support of colleagues in developed nations, need to engage in advocacy.
In countries where expertise is rare, professionals rarely have the luxury of remaining only practitioners, or researchers, or teachers, or policy makers; they must play all these roles. Steps must be taken to identify and correct
workplace hazards, and working people must be cared for when injured or ill. Relevant data must be assembled, through primary or secondary research. Students must be taught to perform these functions. However, for lasting changes to be made, practical experience, data, and moral conviction must be laid before “those who need to know,” including government officials, company officials, and worker representatives, and systematic approaches to protecting public health must be implemented. U.S. SOLUTIONS
Solutions to domestic environmental justice concerns are in some cases parallel to those used globally, but in other cases reflect the unique social and political circumstances of the United States. A landmark event was the promulgation, in 1994, of a Presidential Executive Order, “Federal Actions to Address Environmental Justice in Minority Populations and Low-Income Populations” (www.fs.fed.us/land/envjust.html). This Order required each Federal agency to “make achieving environmental justice part of its mission by identifying and addressing, as appropriate, disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low-income populations. . .” Among its provisions, it established an Interagency Working Group on Environmental Justice, required each federal agency to develop an agency strategy to advance environmental justice, and mandated the inclusion of “diverse populations” in environmental health research. Perhaps the most active agency in advancing environmental justice has been the Environmental Protection Agency (EPA). EPA established an Office of Environmental Justice in 1992, established environmental justice authorities and activities throughout the agency, and formed an advisory committee, the National Environmental Justice Advisory Council, which has become an important national forum for discussion and debate, including grassroots voices. EPA also makes environmental justice grants and operates an environmental justice internship program. These activities are described on the agency’s environmental justice web site (www.epa.gov/compliance/environmentaljustice/index.html). Grassroots efforts in environmental justice have played a major role in advancing these issues in the United States. A large number of local groups, often formed around specific concerns such as a hazardous waste site or a polluting industrial facility, have arisen.
These groups have coalesced at periodic National People of Color Environmental Leadership Summits (http://www.ejrc.cau.edu/EJSUMMITwelcome.html) and in national networks such as the National Black Environmental Justice Network (http://www.nbejn.org/) and the Indigenous Environmental Network (http://www.ienearth.org/). In addition, academic units such as the Environmental Justice Resource Center at Clark Atlanta University (www.ejrc.cau.edu) and the Deep South Center for Environmental Justice at Xavier University in New Orleans (relocated to Dillard University following Hurricane Katrina; see www.dscej.com/) have supported these community groups through unique academic-community partnerships. An important means of partnership between researchers and communities has been community-based participatory research.169–176 Through this technique, researchers and community members together identify the need for research, define the most important research questions, agree on the methods to be used, and pursue the research collaboratively. Ideally, this approach helps target environmental health research toward scientific questions important both to communities and to scientists, and results in a more responsive, relevant body of scientific knowledge. Litigation has been used extensively by communities in an effort to seek redress of alleged environmental justice offenses.177–180 Plaintiffs have used a number of legal tools, including environmental laws, civil rights laws (especially Title VI of the Civil Rights Act of 1964), common law property claims, and constitutional challenges. Defendants have included both polluters and government agencies that
permitted siting or operation of emitting facilities. Interestingly, litigation practices in the United States have increasingly been applied globally, especially in developing nations, as illustrated in the case study on Texaco in Ecuador.181,182 A final important strategy in environmental justice is building diversity in environmental health and related professions. An example is the Minority Youth Environmental Training Institute, a project of the National Hispanic Environmental Council (http://www.nheec.org/). This program targets Hispanic teenagers for a 10-day intensive training experience, during which they learn various aspects of environmental sciences, meet role models, and are encouraged to consider careers in environmental sciences, including environmental health. At the university level, the Association of Academic Environmental Health Programs (www.aehap.org) has joined the Environmental Justice and Health Union (www.ejhu.org) to encourage minority-serving institutions to offer accredited environmental health training. At the graduate level, the Agency for Toxic Substances and Disease Registry, working with the Minority Health Professions Foundation (www.minorityhealth.org), has funded fellows and residents to conduct research designed to fill identified data gaps on the health effects of toxic chemicals. Efforts such as these, across the entire spectrum of academic training, should help increase diversity in the environmental and occupational health professions, and thereby help advance environmental justice. DISCUSSION
This chapter has addressed environmental justice both in the United States and on a global scale. The concept of environmental justice has evolved differently on the two scales, but there is a consistency between the two. Both domestically and internationally, disadvantaged populations—as defined by race, ethnicity, socioeconomic status, and/or other attributes—tend to sustain disproportionate exposures to hazards, both in the workplace and in the ambient environment. Relative to the general population, these vulnerable populations bring fewer resources to bear in addressing the hazards they face—less biological reserve with which to withstand and recover, fewer technical resources, less political power, and less access to legal remedies. As a result, these vulnerable populations suffer adverse health consequences following exposures. The disparities that give rise to these circumstances are collectively known as environmental injustice. While environmental injustice may occur without intent, firms and governments have at times taken actions that create, exploit, and/or aggravate disparities (Kamuzora 2006). Across the globe, there is increasing recognition of the moral and practical dimensions of environmental injustice, and of the need to address these issues.183,184 Public health has a long history of focusing attention on vulnerable populations, to target efforts at disease prevention and health promotion.185 In recent years, this focus has become a central strategy in environmental and occupational health, both in the United States and across the globe—a trend that has the potential to transform the environmental movement.186 From surveillance to technical assistance, from research to training, as public health professionals link with affected communities and workers, governments and nongovernmental organizations, and private firms, they will be increasingly able to remediate disparities and improve health for all people. CASE STUDIES
The five case studies selected for this chapter were drawn from a limited number of reports, often unpublished, and are based on the DPSEEA framework (Driving force-Pressure-State-Exposure-Effect-Action).187 The case studies relate to agriculture, mining, the maquiladora industry, and the petroleum industry, whose effects are especially marked in rural areas and in areas inhabited by indigenous, minority, and other marginalized populations. The case studies are drawn from both North and South America.
The first two case studies focus on farm workers in the Valle de San Quintín, in the Mexican state of Baja California, and in the Yakima Valley of Washington state. Agricultural work is one of the most dangerous occupations. In the case of San Quintín, the majority of migrant and seasonal farm workers come from the poorest communities of Oaxaca. While working in San Quintín, they are housed in crowded conditions in substandard structures, often without access to potable water, sewage disposal, and other environmental and health services. Agricultural workers in the United States are predominantly Hispanic, with about 77% from Mexico.188 The Yakima Valley is illustrative. The majority of farm workers there come from the Purépecha region of Michoacán. With no access to labor unions or other mechanisms of social insurance, they have enjoyed few protections on the job. The third case study describes small-scale mining in Ecuador. Artisanal mining is an activity steeped in poverty. These mines are generally located in geographically isolated communities, with little or no governmental regulation. Mine shafts often originate inside or next to workers’ homes. Entire families may be involved, with children starting work in the mines before finishing primary school. Children who do not work with their families may be hired directly by mine owners. Still other children may start by doing jancheo work (gathering mineral rocks from stockpiles and dumps), either on their own or together with their mothers, as a way of contributing to the family income. The fourth case study refers to workers in the maquiladora industry along Mexico’s northern border. As described in the text, environmental challenges in this region, on both sides of the border, are well documented. Within the maquiladora plants, workers face numerous hazards as well. The final case study focuses on indigenous communities in the Amazonas region, affected by petroleum exploitation.
Between 1971 and 1992, Texaco discharged an estimated 15 million gallons of crude petroleum and 20 billion gallons of toxic waste into pristine rainforest.189 The resulting environmental damage has been compared to the 1989 Exxon Valdez disaster in Alaska. Together these case studies illustrate the diversity of affected populations and economic sectors, common features such as the social disadvantages and resulting vulnerability these populations suffer, and the diversity of responses they and their advocates have mounted. SAN QUINTÍN FARMWORKERS, BAJA CALIFORNIA NORTE, MEXICO
The Valley of San Quintín, located south of the municipality of Ensenada in Baja California Norte, is known for its production of fruits and vegetables for export. This production depends on the labor of migrant workers, many of whom arrive as part of a seasonal cycle that brings workers from Mexico’s west coast (especially from the Mixtec area of the Valley of Oaxaca).190,191 Every year, from March to July, thousands of workers arrive in the valley to tend the fields on properties that belong to 39 families. Workers are paid the equivalent of only five to seven dollars a day and do not receive even the minimal work benefits required under the law. Most workers are unaware of legal rights such as holiday pay, social security, and disability, and even if they do learn about them, they often cannot or do not access them. Pressures that play a role include relative geographic isolation, the lack of planning policies, an annual population growth rate of 11.9%, tensions among different ethnic groups, the growing use of modern technology in agricultural activities, the introduction of other economic activities (for the most part aquaculture and tourism in the lakeside district), problems of land ownership, and the depletion of aquifers. These have resulted in rapid, unplanned growth of human settlements and deficient infrastructure for basic services such as drinking water and sewage treatment systems. Living conditions and services are marginal. Of the field workers, 66.7% live in camp barracks and 33.3% live in informal communities known as colonias. Over 80% of housing is constructed
from improvised, nondurable materials; few homes have piped water, sewage service is almost nonexistent, and electrical service is limited. Available piped water often fails to meet standards for human consumption, and is contaminated with fecal coliform bacteria and components of agricultural chemicals such as ammonia and phosphorus. In addition to posing health risks, this water flows downstream and eventually to the ocean, raising concerns about environmental pollution.191 Additional environmental pressures arise from production practices. Pesticide use is common, and many of the chemicals used contaminate the surrounding ecosystem. Plastic sheeting is used to protect the growing fruit and vegetables, but it fragments and is often improperly disposed of. The plastic waste interferes with the movement of moisture and nutrients in surface soil and with the recharge of the aquifers. Water with high concentrations of salt is used to irrigate crops, causing deterioration of the land, while treated and untreated wastewater often mix and recirculate in the fields. Other contributing factors are the domestic combustion of gas and biomass with resulting emissions of particles, carbon monoxide (CO), and volatile organic compounds, the emissions of dioxins, furans, mercury, and other pollutants from the burning of refuse, and a lack of waste disposal services and appropriate sites for the disposal of solid waste. The migrant workers’ assignments differ according to gender, age, and ethnicity.192 Mature men perform the heaviest field activities, including fumigating, irrigating, and working as stewards, camperos, and drivers. Women and children pick the fruits and vegetables. Mestizo women from the state of Sinaloa are generally hired for the packing process.192 Prevalent health problems among the farmworkers include acute respiratory infections, acute and chronic diarrhea, and tuberculosis.
The incidence of pesticide toxicity is unknown, and possible long-term sequelae such as cancer have not been well characterized in this population. Several actions have resulted from these problems. Members of the migrant population, led by women and with the support of organizations of residents of the colonias, have sought improved services including water, electricity, transportation, medical care, and education.193 In the 1990s, day nurseries, primary schools, and chapels were set up in some of the camps. Local nongovernmental organizations also successfully requested establishment of a rural Clinic-Hospital by Mexico’s Instituto Mexicano del Seguro Social (IMSS), located in Delegación Vicente Guerrero. More recently, local groups, the Mexican federal government, and the home states of the migrants collaborated in a “Vete Sano, Regresa Sano” (Leave Healthy, Return Healthy) public health initiative, in order to provide health protection for the migrants. More information on this initiative can be found at www.bajacalifornia.gob.mx/informe/1er_informe/part_social.htm. THE YAKIMA VALLEY, WASHINGTON STATE
The state of Washington is known for its agricultural products, including apples, cherries, asparagus, pears, berries, and hops. Much of this output, valued at more than a half billion dollars annually, comes from the Yakima Valley in the central part of the state. However, the plight of farm workers in the Yakima Valley stands in sharp contrast with the prosperity they help generate.194–197 Many of these workers are Mexican migrants, who arrive for the spring, summer, and fall seasons and depart during the winter months seeking employment elsewhere in the United States. This workforce has been mostly male since the early twentieth century, but in recent decades the proportion of women workers has increased.194,198 The farm work of Mexican immigrants is considered highly “flexible.” Characteristics include low pay (rising far more slowly than inflation), job insecurity, and absence of health insurance, pension contributions, occupational health protection, and other benefits typically provided to other workers.194,198 Job positions in the region range widely, from field croppers to fumigators, pesticide mixers,
assistant foremen, and foremen,195,198 and a worker may be assigned to a variety of jobs without adequate cross-training or supervision. Another important aspect is the role of unions such as the Teamsters and United Farm Workers (UFW). These unions attacked the apple industry in Washington when producers in the state continued to accept Purépecha Indians (from the Mexican state of Michoacán) to maintain the flow of migrant workers.198 Pesticides are used extensively in Yakima Valley agriculture, and worker exposures have been carefully described in a series of studies at the University of Washington.196,199–203 Exposures during farm work are intensified by long hours, absence of hand-washing facilities, and the lack of protective equipment and training.204 There is also evidence of take-home exposures, which may affect children. Among workers who handle pesticides, there is considerable concern about potential toxicity.195 Potential health effects include irritation and inflammation of the eyes and mucous membranes, allergic reactions, respiratory symptoms, neurologic toxicity following higher exposures, and nonspecific symptoms such as nausea and fatigue. Long-term effects may include cancer, neurotoxicity, and reproductive damage. Environmental problems include the dispersion and persistence in the environment of many of the pesticides that are used. This concern is heightened by the recirculation of irrigation water, although there are currently efforts to eliminate this form of contamination. A wide range of solutions is available, including training of both farmers and farm workers, interventions by community health workers, and promulgation and enforcement of regulations.205 CHILD ARTISANAL MINERS
IN NAMBIJA, ECUADOR Nambija, meaning “the place no one can find,” is a remote goldmining settlement of about 2000 people in the mountains of southern Ecuador, near the Peruvian border. The population is predominantly of indigenous Saraguro and mestizo background, many of whom migrated to Nambija in order to work as gold miners. The settlement consists of hundreds of dilapidated wooden dwellings high on a mountain that has been extensively damaged by years of mining operations.206 Mining is embedded in the social fabric of Nambija. Entire families, including the children, work at gold mining. Homes are often built directly over the openings of small mine tunnels, reflecting the “cottage industry” quality of the work. This small-scale, informal gold mining, known as “artisanal mining,” is common in Latin America and elsewhere in the world.207 While artisanal mines may operate productively with techniques that protect health and the environment, more typically they are characterized by low technology, low productivity, unstable employment, high workforce turnover, low pay, little sanitation, little health and safety protection for workers, and poor environmental performance. Mining families often have no legal claim to the land they work. Legal and institutional oversight is rare. As many as 13 million people are estimated to work at artisanal mining worldwide, accounting for the bulk of production of minerals such as emeralds and tungsten, and for as much as a quarter of world gold production.207,208 For the rural poor, artisanal mining is an alternative to unemployment and misery. It can be a force for local economic development. However, the health, environmental, and social costs of artisanal mining are high, as exemplified in Nambija. Workers extract gold from ore using liquid mercury, which forms an amalgam with gold that can be separated from ground ore. The mercury is then boiled off, leaving a residue of gold.
Mercury toxicity has been well documented in the adults and children of Nambija.206,209,210 Children become miners in different ways, starting as early as the age of five. Some may be hired directly by mine owners, while others begin informally with jancheo (the act of gathering mineralized rocks from stockpiles and dumps), either on their own or together with their mothers as a way of contributing to family income. Mine work may occupy all of their time or part of it. In either situation,
schooling suffers. As children grow into the teen years, they become fully integrated into the mining workforce, graduating from jancheo to the full complement of tasks. Hazards other than mercury threaten the health and safety of artisanal miners in Nambija. Toxic chemicals such as cyanide and acids are also used. Injuries are common, the result of rock falls and subsidence, falls from heights, misuse of explosives, and the use of grinding machinery and hand tools. Inhalation of silica dust poses a risk of respiratory disease. Explosions and noisy machinery can damage hearing, and tasks such as hauling large loads and repetitive motions can cause musculoskeletal injuries. Women and children are especially susceptible to the effects of some chemicals, and may also be subject to abuse such as threats and physical and psychological assault from adult miners. Social problems such as alcoholism, violence, and prostitution are persistent in this setting.207,211,212 Environmental impacts are also extensive.213–218 In streams and rivers downstream of the mining area, mercury is converted to organic forms and bioaccumulates. Land degradation, deforestation, and siltation of waterways are common as the result of nonsustainable excavation practices. On the local scale, while most homes have electrical service, well under half receive piped drinking water, and of these, many receive unprocessed water directly from the source. Fewer than one in five homes have sanitary services.219 Gender roles are important in addressing problems of child labor. Children begin working in the mines alongside their mothers, who care for them and their families. Strengthening women’s involvement in the community’s social life advances social development, including concern for child labor, and may increase the chance of limiting it.220,221 WOMEN MAQUILADORA WORKERS ON MEXICO’S
NORTHERN BORDER With the end of the Bracero initiative, a temporary worker program, in 1964, large numbers of Mexican workers were deported from the United States. The population along Mexico’s northern border swelled, and social problems such as unemployment rose.222 Partially in response to this problem, the Mexican government established its Border Industrialization Program in 1965, to encourage foreign (usually U.S.) companies to site assembly plants south of the border. Key to the program is a provision that allows firms to import components and raw materials without paying customs duties and to export finished products paying customs only on the value added—the labor—in Mexico. The assembly plants, known as maquiladoras, grew slowly at first. In the 1980s, Mexico joined GATT and liberalized its trade restrictions, and the peso was repeatedly devalued, lowering the cost of Mexican labor. By 2000, when maquiladora employment peaked, over 3000 maquiladoras employed approximately 1.3 million workers, accounting for nearly 10% of Mexico’s formal sector employment and 40% of Mexico’s exports.223 (Maquiladora employment declined during 2001–2003 due to a downturn in the U.S. economy and competition from other low-wage countries, but some rebound occurred starting in 2003.) While three out of four maquiladoras are located in Mexico’s border states (Tamaulipas, Coahuila, Chihuahua, Sonora, Baja California, and Nuevo León), one in four is now located farther south, in such booming cities as Monterrey.223,224 The concentration of economic activity along the border has made this region a magnet for Mexicans seeking employment.222,225 At the same time, with the continued growth of globalization, Mexico has faced competition from other low-wage countries. In the years after 2000, at least 170 maquiladoras closed their operations in Mexico to move to China and other Asian countries.
This migration, representing a loss of over 200,000 jobs, was widely noted in Mexico,222 as the maquiladoras had emerged as a major factor in the national economy.226 The maquiladoras produce a wide variety of products, including electrical and electronic equipment, automobile parts, toys, clothing, and others.227 Labor-intensive assembly processes pose physical hazards such as repetitive motion, awkward work positions, and noise,
Environmental Justice: From Global to Local
with risks of musculoskeletal disorders and hearing loss. Chemicals such as solvents, acids, and metals are used in cleaning metal parts, fabricating electronic components, and such operations as painting and gluing, affecting not only workers but the ambient environment as well. While epidemiologic surveillance data are unavailable, surveys of maquiladora facilities and communities have suggested that hazards are common (U.S. GAO, 1993; Takaro et al. 1999). Health problems reported include injuries, nonspecific symptoms such as headache, insomnia, and dizziness, neurologic symptoms such as paresthesias, adverse reproductive outcomes, and urinary tract disorders.106,224,226,228–234 Work practices such as inadequate breaks increase potential hazards, and with the workforce consisting predominantly of women, the presence of reproductive hazards is a special concern. Other features of the United States–Mexico border aggravate the effects of these exposures. Many of the workers are migrants from elsewhere in Mexico, who arrive without financial resources, education, job skills, or experience. Housing and health services are inadequate, general environmental health risks such as water contamination are prevalent,235,236 and there is little job security due to the constant influx of new arrivals in search of work. Hence, while stress levels are high among maquiladora employees, they are also high among other workers in the same locations,229 suggesting that general features of the economic and social environment affect people both in and out of the foreign facilities. Environmental enforcement has never been rigorous in the region, and given the impulse to retain plants in the face of global competition, emissions control, waste management, and other environmental practices are often suboptimal. This combination of forces has made progress in environmental and occupational health extremely difficult to achieve. 
Gender issues in the maquiladora industry deserve special mention. Much of the workforce consists of young women. While this represents newfound economic opportunity and independence for many women, it also offers opportunities for exploitation. There is evidence of wage discrimination, assignment to the most tedious tasks, and less labor protection and social security, especially in situations where women need flexibility in work hours because of responsibilities at home.237 There is also evidence of gender disparities in access to the most desirable jobs. While men hold 74% of technical positions and 64% of administrative positions, they account for fewer than 50% of production line jobs.224 TEXACO AND THE ECUADORIAN INDIANS
Ecuador’s vast jungle basin, known as the “Oriente,” consists of more than 100,000 km2 of tropical rainforest at the headwaters of the Amazon River. The region is home to some 500,000 people, including eight indigenous peoples such as the Cofán, Secoya, Siona, Huarorani, and Quichua. The region has historically been poor, with few health services and with high levels of malnutrition, infant mortality, and infectious disease. Traditional lifestyles relied on hunting, fishing, and agriculture.238 Oil deposits were discovered in the Oriente in the 1960s, and a consortium of oil companies, led by Texaco (later ChevronTexaco) and including Gulf and the national oil company CEPE (now known as Petroecuador), commenced operations. These included exploration, drilling and processing of oil, and construction of a 498-mile pipeline, the SOTE (Sistema Oleoducto TransEcuatoriano), across the Andes to the Pacific coast. These operations have greatly reshaped the region, creating an extensive network of roads, pipelines, and oil facilities. More than two billion barrels of crude oil have been extracted from the Ecuadorian Amazon.238,239,240 According to local people who eventually sued ChevronTexaco, considerable environmental damage occurred. Road construction in the jungles resulted in deforestation of about 2.5 million acres.189 At peak operation, ChevronTexaco was releasing some 4.3 million gallons per day of toxic wastewater directly into waterways and pits rather than reinjecting it into subsoil formations or treating it.189 Contaminants included hydrocarbons such as benzene and polycyclic
aromatic hydrocarbons, metals such as mercury and arsenic, and salts. Oil spills from the pipeline were common; more than 60 major ruptures have been documented since 1972, discharging 614,000 barrels of oil—a quantity more than twice as large as the Exxon Valdez spill, and more than seven times the cumulative spillage from the 800-mile trans-Alaskan pipeline, which came on line in 1977 and carries more than twice the flow of oil (Knudson, 2003). Over 600 open, unlined sludge pits were abandoned.189 Extensive contamination of streams and rivers followed—the same streams and rivers used for drinking, cooking, and bathing.241 Burning of gas flares at hundreds of well sites released organic pollutants, particulate matter, and carbon dioxide into the air. ChevronTexaco, in responding to lawsuits, counters that its “employees work hard to ensure that our operations around the world are managed in a safe and environmentally sound manner;” details of the company’s position can be found at http://www.texaco.com/sitelets/ecuador/en/. Social changes have also occurred. Internal migration occurred as indigenous people relocated in search of employment and/or when forced out of damaged local environments. Oil cities arose and became known as settings of violence, prostitution, and alcohol abuse. Some ethnic groups, such as the Cofán, declined precipitously, while at least one isolated indigenous group, the Tetetes of Lago Agrio, apparently disappeared, a casualty of destabilized social circumstances and disease. Several studies attempted to quantify the impact of the oil extraction and associated environmental damage on the health of local populations. These studies were distinguished by their “popular epidemiology” framework,238 an approach that helped orient the research to local needs and priorities, and overcome some barriers to conventional epidemiologic research.
The results suggested lower levels of some cancers among indigenous people than among nonindigenous people.242,243 However, there were possible associations between proximity to oil fields and several cancer sites in adults244,245 and leukemia in children.246 Residence near contaminated streams was also associated with nonspecific symptoms such as eye and throat irritation and fatigue,247 and with spontaneous abortions (but not stillbirths).248 An interesting feature of the Ecuadorian situation is the use of litigation, beginning in U.S. courts and moving to Ecuadorian courts when the U.S. courts declined jurisdiction. This may signal a global trend. The efficacy of this approach, and its impact on environmental health, remains to be seen. REFERENCES
1. Commission for Racial Justice, United Church of Christ. Toxic Wastes and Race in the United States: A National Report on the Racial and Socio-Economic Characteristics of Communities with Hazardous Waste Sites. New York, NY: Public Data Access; 1987. 2. Bullard RD. Dumping in Dixie: Race, Class, and Environmental Quality. Boulder, CO: Westview; 1990. 3. Braveman PA, Egerter SA, Cubbin C, Marchi KS. An approach to studying social disparities in health and health care. Am J Public Health. 2004;94:2139–48. 4. Lee C. Environmental justice. In: Frumkin H, ed. Environmental Health: From Global to Local. San Francisco: Jossey Bass; 2005. 5. Clay R. Still moving towards environmental justice. Environ Health Perspect. 1999;107:107–10. 6. Riechmann J. Tres principios basicos de justicia ambiental. Revista internacional de filosofía política. 2003;21:103–20. 7. Lloyd M, Bell L. Toxic disputes and the rise of environmental justice in Australia. Int J Occup Environ Health. 2003;9:14–23. 8. Cairncross E, Nicol E. South African incinerators: waste disposal or dumping the waste burden on the poor? Symposium on Environmental Justice—sharing lessons learnt in industrialized and developing countries. 2005;S80.
9. UNEP (United Nations Environment Program). Declaration of the United Nations Conference on the Human Environment. Stockholm, 1972. Accessed May 18, 2006 at http://www.unep.org/Documents.multilingual/Default.asp?DocumentID=97&ArticleID=1503&l=en. 10. Taylor DE. The rise of the environmental justice paradigm: injustice framing and the social construction of environmental discourses. Am Behav Sci. 2000;43(4):508–80. 11. Cherniack M. The Hawk’s Nest Incident: America’s Worst Industrial Disaster. New York: Vail-Ballou; 1986. 12. Lloyd JW. Long-term mortality study of steelworkers. V. Respiratory cancer in coke plant workers. J Occup Med. 1971;13:53–68. 13. Mazumdar S, Redmond C, Sellecito W, Sussman N. An epidemiological study of exposures to coal tar pitch volatiles among coke oven workers. APCA J. 1975;25:382–89. 14. Kahn E. Pesticide related illness in California farm workers. J Occup Med. 1976;18(10):693–6. 15. Wilk VA. The Occupational Health of Migrant and Seasonal Farmworkers in the United States. Washington: Farmworkers Justice Fund; 1986. 16. Ahasan MR, Partanen T. Occupational health and safety in the least developed countries—a simple case of neglect. J Epidemiol. 2001;11(2):74–80. 17. Joubert DM. Occupational health challenges and success in developing countries: a South African perspective. Int J Occup Environ Health. 2002;8(2):119–24. 18. Rongo LM, Barten F, Msamanga GI, Heederik D, Dolmans WM. Occupational exposure and health problems in small-scale industry workers in Dar es Salaam, Tanzania: a situation analysis. Occup Med (Oxford). 2004;54(1):42–6. 19. Pringle TE, Frost SD. “The absence of rigor and the failure of implementation”: occupational health and safety in China. Int J Occup Environ Health. 2003;9(4):309–16. 20. Laskar MS, Harada N, Rashid HA. The present state and future prospects of occupational health in Bangladesh. Industrial Health. 1999;37(1):116–21. 21. Rajgopal T. Occupational health in India—current and future perspective.
J Indian Med Assoc. 2000;98(8):432–3. 22. Siriruttanapruk S, Anantagulnathi P. Occupational health and safety situation and research priority in Thailand. Industrial Health. 2004;42(2):135–40. 23. Baig LA, Rasheed S, Zameer M. Health and safety measures available for young labourers in the cottage industries of Karachi. JCPSP. 2005;15(1):7–10. 24. Bedrikow B, Algranti E, Buschinelli JT, Morrone LC. Occupational health in Brazil. Int Arch Occup Environ Health. 1997;70(4):215–21. 25. Giuffrida A, Iunes RF, Savedoff WD. Occupational risks in Latin America and the Caribbean: economic and health dimensions. Health Pol Plann. 2002;17(3):235–46. 26. Harris LV, Kahwa IA. Asbestos: old foe in 21st century developing countries [editorial]. Sci Total Environ. 2003;307(1–3):1–9. 27. Kawakami T, Batino JM, Khai TT. Ergonomic strategies for improving working conditions in some developing countries in Asia. Ind Health. 1999;37(2):187–98. 28. Fischer FM. Shiftworkers in developing countries: health and wellbeing and supporting measures. J Human Ergology. 2001;30(1–2):155–60. 29. Quinlan M, Mayhew C, Bohle P. The global expansion of precarious employment, work disorganization, and consequences for occupational health: placing the debate in a comparative historical context. Int J Health Serv. 2001;31(3):507–36. 30. Loewenson RH. Women’s occupational health in globalization and development. Am J Industrial Med. 1999;36(1):34–42. 31. Ngai P. Made in China: Women Factory Workers in a Global Workplace. Durham and Hong Kong: Duke University Press and Hong Kong University Press; 2005.
32. Banerjee SR. Occupational health hazards of working children. J Indian Med Assoc. 1995;93(1):22. 33. Fassa AG, Facchini LA, Dall’agnol MM, Christiani DC. Child labor and health: problems and perspectives. Int J Occup Environ Health. 2000;6(1):55–62. 34. Scanlon TJ, Prior V, Lamarao ML, Lynch MA, Scanlon F. Child labour [editorial]. BMJ. 2002;325(7361):401–3. 35. Gharaibeh M, Hoeman S. Health hazards and risks for abuse among child labor in Jordan. J Ped Nursing. 2003;18(2):140–7. 36. Loewenson RH. Health impact of occupational risks in the informal sector in Zimbabwe. Int J Occup Environ Health. 1998;4(4):264–74. 37. Environmental Protection Agency. Environmental Equity: Reducing Risk for All Communities. EPA230-R-92-008. Washington: USEPA; 1992. 39. Mohai P, Bryant B. Environmental racism: reviewing the evidence. In: Bryant B, Mohai P, eds. Race and the Incidence of Environmental Hazards. Boulder: Westview Press; 1992: 163–76. 40. Sexton K, Gong H, Bailar JC, et al. Air pollution health risks: do class and race matter? Toxicol Ind Health. 1993;9:843–78. 41. Bullard RD, ed. Unequal Protection: Environmental Justice and Communities of Color. San Francisco: Sierra Club Books; 1996. 42. Lena TS, Ochieng V, Carter M, Holguin-Veras J, Kinney PL. Elemental carbon and PM(2.5) levels in an urban community heavily impacted by truck traffic. Environ Health Persp. 2002;110(10):1009–15. 43. Bullard RD. Unplanned environs: the price of unplanned growth in boomtown Houston. California Sociologist. 1984;7:85–101. 44. White HL. Hazardous waste incineration and minority communities. In: Bryant B, Mohai P, eds. Race and the Incidence of Environmental Hazards. Boulder: Westview Press; 1992: 126–39. 45. Lanphear BP, Matte TD, Rogers J, et al. The contribution of lead-contaminated house dust and residential soil to children’s blood lead levels. A pooled analysis of 12 epidemiologic studies. Environ Res. 1998;79(1):51–68. 46. Meyer PA, Pivetz T, Dignam TA, et al.
Surveillance for elevated blood lead levels among children—United States, 1997–2001. MMWR. 2003;52(SS10):1–21. 47. Macey GP, Her X, Reibling ET, Ericson J. An investigation of environmental racism claims: testing environmental management approaches with a geographic information system. Environ Management. 2001;27:893–907. 48. McCarthy M. Social determinants and inequalities in urban health. Rev Environ Health. 2000;15:97–108. 49. Morland K, Wing S, Diez Roux A, Poole C. Neighborhood characteristics associated with the location of food stores and food service places. Am J Prev Med. 2002;22(1):23–29. 50. Moore LV, Diez Roux AV. Associations of neighborhood characteristics with the location and type of food stores. Am J Public Health. 2006; 96: 325–31. 51. Federal Transit Administration. Transportation: Environmental Justice and Social Equity. Conference Proceedings. Washington: United States Department of Transportation; 1995. 52. Bullard RD, Johnson GS, eds. Just Transportation: Dismantling Race and Class Barriers to Mobility. Gabriola Island, BC: New Society Publishers; 1997. 53. Cohen AJ, Ross Anderson H, Ostro B, et al. The global burden of disease due to outdoor air pollution. J Toxicol Environ Health Part A. 2005;68(13–14):1301–7. 54. Romieu I, Samet JM, Smith KR, Bruce N. Outdoor air pollution and acute respiratory infections among children in developing countries. J Occup Environ Med. 2002; 44(7):640–9. 55. Ezzati M, Kammen DM. The health impacts of exposure to indoor air pollution from solid fuels in developing countries: knowledge, gaps, and data needs. Environ Health Persp. 2002;110(11): 1057–68.
56. Smith KR, Mehta S. The burden of disease from indoor air pollution in developing countries: comparison of estimates. Int J Hygiene Environ Health. 2003;206(4–5):279–89. 57. Silveira AL. Problems of modern urban drainage in developing countries. Water Sci Technol. 2002;45(7):31–40. 58. Bandara NJ. Water and wastewater related issues in Sri Lanka. Water Sci Technol. 2003;47(12):305–12. 59. Gundry S, Wright J, Conroy R. A systematic review of the health outcomes related to household water quality in developing countries. J Water Health. 2004;2(1):1–13. 60. Morris K. “Silent emergency” of poor water and sanitation. Lancet. 2004;363(9413):954. 61. Smith KR. Environmental health–for the rich or for all. Bull World Health Org. 2000;78:1135–36. 62. Makoni FS, Ndamba J, Mbati PA, Manase G. Impact of waste disposal on health of a poor urban community in Zimbabwe. East African Med J. 2004;81(8):422–6. 63. Boadi KO, Kuitunen M. Environmental and health impacts of household solid waste handling and disposal practices in third world cities: the case of the Accra metropolitan area, Ghana. J Environ Health. 2005;68(4):32–6. 64. Orloff K, Falk H. An international perspective on hazardous waste practices. Int J Hygiene Environ Health. 2003;206(4–5):291–302. 65. O’Neill K. Out of the backyard: the problems of hazardous waste management at a global level. J Environ Dev. 1998;7(2):138–63. 66. Asante-Duah DK, Nagy IV. International Trade in Hazardous Waste. London: Spon Press; 1998. 67. Clapp J. Toxic Exports: The Transfer of Hazardous Wastes from Rich to Poor Countries. Ithaca: Cornell University Press; 2001. 68. Harpham T, Tanner M. Urban Health in Developing Countries: Progress and Prospects. New York: St. Martin’s Press; 1995. 69. Hardoy JE, Mitlin D, Satterthwaite D. Environmental Problems in an Urbanizing World. London: Earthscan; 2001. 70. Anwar WA. Environmental health in Egypt. Int J Hyg Environ Health. 2003;206(4–5):339–50. 71. Economy EC.
The River Runs Black: The Environmental Challenge to China’s Future. A Council on Foreign Relations Book. Ithaca: Cornell University Press; 2004. 72. Gadgeel SM, Kalemkerian GP. Racial differences in lung cancer. Cancer Metast Rev. 2003;22(1):39–46. 73. Rhodes L, Bailey CM, Moorman JE. Asthma Prevalence and Control Characteristics by Race/Ethnicity—United States, 2002. MMWR. 2004;53(07):145–8. 74. Chan-Yeung M, Malo JL. Occupational asthma. New Eng J Med. 1995;333:107–12. European Commission. Adapting to change in work and society. 2002. Accessed May 18, 2006 at http://europe.osha.eu.int/systems/strategies/future/com2002_en.pdf. 75. Van Ee JH, Polderman AM. Physiological performance and work capacity of tin mine labourers infested with schistosomiasis in Zaire. Trop Geogr Med. 1984;36(3):259–66. 76. Rosenbaum E. Race and ethnicity in housing: turnover in New York City, 1978–87. Demography. 1992;29:467–86. 77. Krieger J, Higgins DL. Housing and health: time again for public health action. Am J Public Health. 2002;92(5):758–68. 78. Malveaux FJ, Fletcher-Vincent SA. Environmental risk factors of childhood asthma in urban centers. Environ Health Persp. 1995;103 Suppl 6:59–62. 79. Caetano R, Kaskutas LA. Changes in drinking patterns among whites, blacks and Hispanics, 1984–1992. J Stud Alcohol. 1995;56:558–65. 80. Lamarine RJ. Alcohol abuse among Native Americans. J Community Health. 1988;13:143–55. 81. Centers for Disease Control. Alcohol-related hospitalizations—Indian Health Service and tribal hospitals, United States, May 1992. MMWR. 1992;41:757–60.
82. Al-Ashban RM, Aslam M, Shah AH. Kohl (surma): a toxic traditional eye cosmetic study in Saudi Arabia. Public Health. 2004;118(4): 292–8. 83. Mojdehi GM, Gurtner J. Childhood lead poisoning through kohl. Am J Public Health. 1996;86(4):587–8. 84. Riley DM, Newby CA, Leal-Almeraz TO, Thomas VM. Assessing elemental mercury vapor exposure from cultural and religious practices. Environ Health Persp. 2001;109(8):779–84. 85. Polednak AP. Racial and Ethnic Differences in Disease. New York: Oxford University Press; 1989. 86. Evans WE, Relling MV, Rahman A, et al. Genetic basis for a lower prevalence of deficient CYP2D6 oxidative drug metabolism phenotypes in black Americans. J Clin Invest. 1993;91:2150–54. 87. Crofts F, Cosma GN, Currie D, et al. A novel CYP1A1 gene polymorphism in African-Americans. Carcinogenesis. 1993;14: 1729–31. 88. Shields PG, Caporaso NE, Falk RT, et al. Lung cancer, race, and a CYP1A1 genetic polymorphism. Cancer Epidemiol Biomark Prev. 1993;2:481–5. 89. Garte S, Gaspari L, Alexandrie AK, et al. Metabolic gene polymorphism frequencies in control populations. Cancer Epidemiol Biomarkers Prev. 2001;10(12):1239–48. 90. Calabrese EJ, Moore G, Brown R. Effects of environmental oxidant stressors on individuals with a G-6-PD deficiency with particular reference to an animal model. Environ Health Persp. 1979;29: 49–55. 91. Severo R. Genetic tests by industry raise questions on rights of workers. New York Times, 3 February 1980; p A1. 92. Hoiberg A, Ernst J, Uddin DE. Sickle cell trait and glucose-6-phosphate dehydrogenase deficiency. Effects on health and military performance in black Navy enlistees. Arch Int Med. 1981;141:1485–88. 93. Murray RF. Tests of so-called genetic susceptibility. J Occup Med. 1986;28:1103–07. 94. World Health Organization. The world health report 2005–make every mother and child count. 2005. Accessed May 18, 2006 at http://www.who.int/whr/2005/en/index.html. 95. Marmot M, Wilkinson RG. Social Determinants of Health. 
Oxford: Oxford University Press; 1999. 96. Leon D, Walt G, eds. Poverty, Inequality and Health: An International Perspective. Oxford: Oxford University Press; 2001. 97. Lavelle M, Coyle M. Unequal protection: the racial divide on environmental law. National Law J. 1992;S1–S12. 98. Spiegel JM, Labonte R, Ostry SA. Understanding “globalization” as a determinant of health determinants. Int J Occup Environ Health. 2004;10:360–67. 99. Brown G. Protecting workers’ health and safety in the globalizing economy through international trade treaties. Int J Occup Environ Health. 2005;11:207–09. 100. Harrison M. Beyond the fence line: corporate social responsibility. Clin Occup Environ Med. 2004;4(1):1–8. 101. San Sebastián M, Hurtig AK. Oil exploitation in the Amazon basin of Ecuador: a public health emergency. Pan-Am J Public Health. 2004;15(3):205–11. 102. Ikein A. The Impact of Oil on a Developing Country: The Case of Nigeria. New York: Praeger; 1990. 103. Hutchful E. Oil companies and environmental pollution in Nigeria. In: Ake C, ed. Political Economy of Nigeria. London: Longman Press; 1985. 104. Perlez J, Rusli E. Spurred by illness, Indonesians lash out at U.S. mining giant. New York Times, September 8, 2004; p 1. 105. Perlez J, Bonner R. The Cost of Gold. Below a mountain of wealth, a river of waste. New York Times, December 27, 2005; p 1. 106. Moure-Eraso R, Wilcox M, Punnett L, MacDonald L, Levenstein C. Back to the future: sweatshop conditions on the Mexico-U.S. border. II. Occupational health impact of maquiladora industrial activity. Am J Industrial Med. 1997;31(5):587–99.
107. Shaffer ER, Brenner JE. International trade agreements: hazards to health? Int J Health Serv. 2004;34(3):467–81. 108. Frumkin H. Across the water and down the ladder: occupational health in the global economy. Occup Med. 1999;14(3):637–63. 109. LaDou J. World Trade Organization, ILO conventions, and workers’ compensation. Int J Occup Environ Health. 2005;11(2):210–1. 110. Grant W, Matthews D, Newell P. The Effectiveness of European Union Environmental Policy. New York: Palgrave Macmillan; 2001. 111. McCormick J. Environmental Policy in the European Union. New York: Palgrave Macmillan; 2001. 112. Jordan A. Environmental Policy in the European Union. 2nd ed. London: Earthscan; 2005. 113. Lee J. NAFTA and the environment. The Mandala Project. American University, School of International Service, Trade and Environment Database. Accessed May 18, 2006 at http://www.american.edu/TED/maquila.htm. 114. García C, Simpson A. Globalization at the Crossroads: Ten Years of NAFTA in the San Diego/Tijuana Border Region. San Diego: Environmental Health Coalition; 2004. Accessed May 18, 2006 at http://www.environmentalhealth.org/globalizationFINALRELEASED.10.18.04.pdf. 115. Hufbauer GC, Esty D, eds. NAFTA and the Environment: Seven Years Later. Washington: Institute for International Economics; 2000. 116. Commission for Environmental Cooperation. The Environmental Effects of Free Trade. Papers Presented at the North American Symposium on Assessing the Linkages between Trade and Environment. Montréal: CEC; 2000. Available at http://www.cec.org/files/PDF/ECONOMY/symposium-e.pdf. 117. Commission for Environmental Cooperation. Free Trade and the Environment: The Picture Becomes Clearer. Montreal: CEC; 2002. Available at http://www.cec.org/files/PDF/ECONOMY/symposiume.pdf. 118. Commission for Environmental Cooperation. Understanding and Anticipating Environmental Change in North America: Building Blocks for Better Public Policy. Montreal: CEC; 2003.
Available at http://www.cec.org/files/pdf/ECONOMY/Trends_en.pdf. 119. Mayrand K, Paquin M. The CEC and NAFTA effects on the environment: discussion paper. Montreal: Unisféra International Center; 2003. Accessed May 18, 2006 at http://www.unisfera.org/IMG/pdf/Unisfera-NAFTA_effects.pdf. 120. Vaughan S. How green is NAFTA? Measuring the impacts of agricultural trade. Environment. 2004;46:26–42. 121. Commission for Labor Cooperation. Labor Markets in North America: Main Changes Since NAFTA. Washington: Commission for Labor Cooperation; 2003. Available at http://www.naalc.org/english/pdf/labor_markets_en_1.pdf. 122. Abel A, Philips T. The relocation of El Paso’s stonewashing industry and its implications for trade and the environment. In: Commission for Environmental Cooperation. The Environmental Effects of Free Trade. Papers Presented at the North American Symposium on Assessing the Linkages between Trade and Environment. Montréal: CEC; 2000. Available at http://www.cec.org/files/pdf/ECONOMY/symposium-e.pdf. 123. Brown GD. NAFTA’s 10-year failure to protect Mexican workers’ health and safety. Berkeley: Maquiladora Health and Safety Support Network; 2004. Available at http://mhssn.igc.org/NAFTA_2004.pdf. 124. Burns JG. Free trade zones: global overview and future prospects. Industry, Trade, and Technology Review. 1995. 125. Boyenge J. ILO database on export processing zones. International Labour Organization; 2003. Accessed May 18, 2006 at http://www.ilo.org/public/english/dialogue/sector/themes/epz/epz-db.pdf. 126. Smith EA. Cultural and linguistic factors in worker notification to blue collar and no-collar African-Americans. Am J Ind Med. 1993;23:37–42.
127. Ives JH. The Export of Hazard: Transnational Corporations and Environmental Control Issues. Boston, MA: Routledge & Kegan Paul; 1985. 128. International Labor Rights Education and Research Fund. Trade’s Hidden Costs: Worker Rights in a Changing World Economy. Washington, DC: ILRERF; 1988. 129. Gaventa JP. From the Mountains to the Maquiladoras: A Case Study of Capital Flight and Its Impact on Workers. New Market, TN: Highlander Center; 1990. 130. Jeyaratnam J. The transfer of hazardous industries. J Soc Occup Med. 1990;40(4):123–6. 131. Hecker S, Hallock M. Labor in a global economy: perspectives from the U.S. and Canada. Eugene, OR: Labor Education and Research Center; 1991. 132. Castleman BI. The double standard in industrial hazards. Public Health Reviews. 1980;9(3–4):169–84. 133. Van Liemt G. Economic globalization: labour options and business strategies in high labour cost countries. Int Labour Rev. 1992;131(4/5):453–70. 134. Weir D, Schapiro M. Circle of Poison: Pesticides and People in a Hungry World. San Francisco, CA: Institute for Food and Development Policy; 1981. 135. Bull D. Growing Problem: Pesticides and the Third World Poor. UK: Oxfam; 1982. 136. Uram C. International regulation of the sale and use of pesticides. Northwestern J International Law Bus. 1990;10:460–78. 137. Smith C. Pesticide exports from U.S. ports, 1997–2000. Int J Occup Environ Health. 2001;7:266–74. 138. Third World Network. Toxic terror: dumping of hazardous wastes in the third world. Penang, Malaysia: Third World Network; 1989. 139. Hilz C. The International Toxic Waste Trade. New York, NY: Van Nostrand Reinhold; 1992. 140. Hess J, Frumkin H. The international trade in toxic waste: the case of Sihanoukville, Cambodia. Int J Occup Environ Health. 2000;6(4):331–44. 141. Aydelotte C. Bhopal tragedy focuses on changes in chemical industry. Occup Health Saf. 1985;54(3):33–5,50,59. 142. Weiss B, Clarkson TW.
Toxic chemical disasters and the implications of Bhopal for technology transfer. Milbank Q. 1986;64(2):216–40. 143. Bhopal Working Group. The public health implications of the Bhopal disaster. Report to the Program Development Board, American Public Health Association. Am J Public Health. 1987;77(2):230–6. 144. Murti CR. Industrialization and emerging environmental health issues: lessons from the Bhopal disaster. Toxicol Ind Health. 1991;7(5–6):153–64. 145. Broughton E. The Bhopal disaster and its aftermath: a review. Environ Health: A Global Access Science Source. 2005;4:6. Accessed May 18, 2006 at http://www.ehjournal.net/content/4/1/6. 146. Beardsley EH. A History of Neglect: Health Care for Blacks and Mill Workers in the 20th Century South. Knoxville: University of Tennessee Press; 1987. 147. Cobb JC. The Selling of the South: The Southern Crusade for Industrial Development, 1936–1990. 2nd ed. Champaign-Urbana: University of Illinois Press; 1993. 148. Kochan L. The Maquiladoras and Toxics: The Hidden Costs of Production South of the Border. Publication No. 186-PO690-5. Washington, DC: American Federation of Labor and Congress of Industrial Organizations; 1990. 149. Witt M. An injury to one is un agravio a todos: the need for a Mexico-U.S. health and safety movement. New Sol. 1991;28–31. 150. McGaughey W. A U.S.-Mexico-Canada Free Trade Agreement: Do We Just Say No? Minneapolis: Thistlerose Publications; 1992.
Environmental Justice: From Global to Local
151. Moody K, McGinn M. Unions and Free Trade: Solidarity vs. Competition. Detroit: Labor Notes; 1992. 152. Cavanagh J, Gershman J, Baker K, Helmke G. Trading Freedom: How Free Trade Affects Our Lives, Work and Environment. San Francisco, CA: Institute for Food and Development Policy; 1992. 153. Grossman GM, Krueger AB. Environmental Impacts of a North American Free Trade Agreement. National Bureau of Economic Research Working Paper 3914. Cambridge, MA: NBER; 1991. 154. Grossman GM, Krueger AB. Economic growth and the environment. Quart J Economics. 1995;110:353–77. 155. Copeland BR, Taylor MS. Trade, growth and the environment. J Economic Lit. 2004;42:7–71. 156. Taylor MS. Unbundling the pollution haven hypothesis. Adv Economic Analysis Policy. 2004;4(2):Article 8. 157. International Labour Organization. Fundamental ILO Conventions. 2000. Accessed May 18, 2006 at http://www.ilo.org/public/english/standards/norm/whatare/fundam/. 158. United Nations Environment Programme. The Montreal protocol on substances that deplete the ozone layer. 2000. Accessed May 18, 2006 at http://www.unep.org/ozone/Montreal-Protocol/MontrealProtocol2000.shtml. 159. United Nations Environment Programme. Basel convention on the control of transboundary movements of hazardous wastes and their disposal. 1989. Accessed May 18, 2006 at http://www.basel.int/. 160. United Nations Environment Programme. Stockholm Convention on Persistent Organic Pollutants. 2001. Accessed May 18, 2006 at http://www.pops.int/. 161. Ember L. Environment protection: global companies set new endeavor. Chem Engin News. 1991;69:4. 162. Global Exchange. Home page. Updated: March 2006. Accessed May 18, 2006 at http://www.globalexchange.org/campaigns/fairtrade/. 163. Claudio L. Building self-reliance in environmental science: The ITREOH experience. Environ Health Persp. 2003;111(9):A460–3. 164. Partanen TJ, Hogstedt C, Ahasan R, et al.
Collaboration between developing and developed countries and between developing countries in occupational health research and surveillance. Scand J Work Environ Health. 1999;25(3):296–300. 165. Loewenson R. Epidemiology in the era of globalization: skills transfer or new skills? Int J Epidemiol. 2004;33(5):1144–50. 166. Rantanen J, Lehtinen S, Savolainen K. The opportunities and obstacles to collaboration between the developing and developed countries in the field of occupational health. Toxicology. 2004;198(1–3):63–74. 167. Nuwayhid IA. Occupational health research in developing countries: a partner for social justice. Am J Public Health. 2004;94(11):1916–21. 168. Rosenstock L, Cullen MR, Fingerhut M. Advancing worker health and safety in the developing world. J Occup Environ Med. 2005;47(2):132–6. 169. Israel BA, Eng E, Schulz AJ, Parker EA, eds. Methods in Community-Based Participatory Research for Health. San Francisco: Jossey-Bass; 2005. 170. Kimmel CA, Collman GW, Fields N, Eskenazi B. Lessons learned for the National Children’s Study from the National Institute of Environmental Health Sciences/U.S. Environmental Protection Agency Centers for Children’s Environmental Health and Disease Prevention Research. Environ Health Persp. 2005;113(10):1414–8. 171. Minkler M. Community-based research partnerships: challenges and opportunities. J Urban Health. 2005;82(Suppl 2):ii3–12. 172. Horowitz CR, Arniella A, James S, Bickell NA. Using community-based participatory research to reduce health disparities in East and Central Harlem. Mount Sinai J Med. 2004;71(6):368–74. 173. Viswanathan M, Ammerman A, Eng E, et al. Community-based participatory research: assessing the evidence. Evidence Report: Technology Assessment (Summary). 2004;99:1–8.
Environmental Health
174. Leung MW, Yen IH, Minkler M. Community-based participatory research: a promising approach for increasing epidemiology’s relevance in the 21st century. Int J Epidemiol. 2004;33(3):499–506. 175. Israel BA, Parker EA, Rowe Z, et al. Community-based participatory research: lessons learned from the Centers for Children’s Environmental Health and Disease Prevention Research. Environ Health Persp. 2005;113(10):1463–71. 176. Minkler M, Wallerstein N. Community-Based Participatory Research for Health. San Francisco: Jossey-Bass; 2002. 177. Colopy JH. The Road Less Traveled: Pursuing Environmental Justice Through Title VI of the Civil Rights Act of 1964. 13 Stan. Envtl. L.J. 1994;125:180–85. 178. Cole LW. Environmental justice litigation: Another stone in David’s sling. Fordham Urban Law J. 1994;21(3):523–46. 179. Fisher M. Environmental racism claims brought under Title VI of the Civil Rights Act. Environmental Law. 1995;25:285–334. 180. Latham Worsham JB. Disparate Impact Lawsuits Under Title VI, Section 602. Boston College Environmental Affairs Law Review. 2000;27:631–706. 181. Rutherford L. Redressing U.S. corporate environmental harms abroad through transnational public law litigation: generating a global discourse on the international definition of environmental justice. Georgetown International Environmental Law Review. 2002;14(4):807–36. 182. Sharma DC. 2005—By order of the court. Environmental cleanup in India. Env Health Persp. 2005;113(6):A395–7. 183. Lloyd-Smith ME, Bell L. Toxic disputes and the rise of environmental justice in Australia. Int J Occup Environ Health. 2003;9:14–23. 184. Mamo C, Marinacci CH, Demaria M, Mirabelli D, Costa G. Factors other than risks in the workplace as determinants of socioeconomic differences in health in Italy. Int J Occup Environ Health. 2005;11:70–6. 185. Ezzati M, Utzinger J, Cairncross S, Cohen AJ, Singer BH.
Environmental risks in the developing world: exposure indicators for evaluating interventions, programmes, and policies. J Epidemiol Commun Health. 2005;59(1):15–22. 186. Shabecoff PA. A Fierce Green Fire: The American Environmental Movement. New York: Hill & Wang; 1993. 187. Kjellström T, Corvalán C. Framework for the development of environmental health indicators. World Health Stat Q. 1995;48(2):144–54. 188. Reeves M, Shafer KS. Greater risks, fewer rights: U.S. farmworkers and pesticides. Int J Occup Environ Health. 2003;9:30–39. 189. Koenig K. Chevron-Texaco on trial. World Watch Magazine, January/February 2004; pp 10–19. 190. Montaño O. La otra California, Valle de San Quintín, tierra de inmigrantes. Universidad Obrera de México; 2001. 191. Nolasco M. La relacion hombre-medio-tecnologia en la Frontera Norte [The human-environment-technology relationship on the north border]. Ecologica, June 1997. Accessed May 18, 2006 at http://www.planeta.com/ecotravel/mexico/ecologia/97/0797frontera1.html. 192. Cornejo A. Familias completas de jornaleros son explotadas en el valle agrícola, San Quintín, la eterna miseria. La Jornada, August 2000. 193. Velasco L. Organizational experiences and female participation of Oaxacan indigenous peoples in Baja California. Paper delivered at conference on Indigenous Mexican Migrants in the U.S.: Building Bridges between Researchers and Community Leaders, sponsored by the Latin American and Latino Studies Department (LALS), University of California, Santa Cruz, October 11–12, 2002. Accessed May 18, 2006 at http://lals.ucsc.edu/conference/papers/English/Velasco.html. 194. Maurer S. National and transnational logics in the Yakima borderlands. Paper delivered at the Women and Globalization Conference, Center for Global Justice, San Miguel de Allende, Mexico,
July 27–August 3, 2005. Accessed May 18, 2006 at http://www.globaljusticecenter.org/papers2005/maurer_eng.htm. 195. Thompson B, Coronado G, Puschell K, Allen E. Identifying constituents to participate in a project to control pesticide exposure in children of farm workers. Environ Health Persp. 2001;109(Suppl 3):443–8. 196. Berg B. Dangerous path of pesticides. Fred Hutchinson Cancer Research Center News. 6 March 2003. Accessed May 18, 2006 at http://www.fhcrc.org/about/pubs/center_news/2003/mar6/sart1.html. 197. Arcury T, Quandt S, Dearry A. Farmworker pesticide exposure and community-based participatory research: rationale and practical applications. Environ Health Perspect. 2001;109:420–34. 198. Krissman F. ¿Manzanas y naranjas?: Como el reclutamiento de indígenas mexicanos divide los mercados laborales agrícolas en el oeste de EU. Paper prepared for Conference on “Indigenous Mexican Immigrants in California: Building Bridges Between Researchers and Community Leaders,” University of California, Santa Cruz, October 11–12, 2002. Accessed May 18, 2006 at http://lals.ucsc.edu/conference/papers/Spanish/KrissmanEspanol.pdf. 199. Strong LL, Thompson B, Coronado GD, et al. Health symptoms and exposure to organophosphate pesticides in farmworkers. Am J Ind Med. 2004;46(6):599–606. 200. Coronado GD, Thompson B, Strong L, Griffith WC, Islas I. Agricultural task and exposure to organophosphate pesticides among farmworkers. Environ Health Persp. 2004;112(2):142–7. 201. Thompson B, Coronado GD, Grossman JE, et al. Pesticide take-home pathway among children of agricultural workers: study design, methods, and baseline findings. JOEM. 2003;45:42–53. 202. Curl CL, Fenske RA, Kissel JC, et al. Evaluation of take-home organophosphorus pesticide exposure among agricultural workers and their children. Environ Health Persp. 2002;110:A787–92. 203. Woodward K. Pathways to pesticide exposure. PHS project tracks paths of pesticide residue among highly exposed agricultural workers in Yakima Valley. Fred Hutchinson Cancer Research Center. 2004. Available at http://www.fhcrc.org/about/pubs/center_news/2004/mar4/sart4.html. 204. Reeves M, Katten A, Guzmán M. Fields of Poison 2002: California Farmworkers and Pesticides. San Francisco: Californians for Pesticides Reform; 2002. Available at http://www.panna.org/campaigns/docsWorkers/CPRreport.pdf. 205. Forst L, Lacey S, Yun H, et al. Effectiveness of community health workers for promoting use of safety eyewear by Latino farm workers. Am J Ind Med. 2004;46:607–13. 206. Counter SA, Buchanan LH, Ortega F. Mercury levels in urine and hair of children in an Andean gold-mining settlement. Int J Occup Environ Health. 2005;11:132–7. 207. International Institute for Environment and Development. Artisanal and small-scale mining. Chapter 13. In: Breaking New Ground: Mining, Minerals, and Sustainable Development. London: Earthscan: International Institute for Environment and Development; 2002. 208. International Labour Organization. Social and Labour Issues in Small-Scale Mines. Geneva: ILO; 1999. Accessed May 18, 2006 at http://www.natural-resources.org/minerals/cd/docs/ilo/TMSSM_1999.pdf. 209. Counter SA, Buchanan LH, Ortega F, Laurell G. Elevated blood mercury and neuro-otological observations in children of the Ecuadorian gold mines. J Toxicol Environ Health Part A. 2002;65(2):149–63. 210. Counter SA, Buchanan LH, Laurell G, Ortega F. Blood mercury and auditory neuro-sensory responses in children and adults in the Nambija gold mining area of Ecuador. Neurotoxicol. 1998;19(2):185–96. 211. Mosquera C, Valencia R, Rivera G. El Rol de los Trabajadores en la Lucha Contra el Trabajo Infantil Minero: Guía para Acción Institucional. Lima: OIT-IPEC; 2005. Accessed May 18, 2006 at http://www.oit.org.pe/ipec/boletin/documentos/guia_mineria_trabajadores.pdf.
212. Bonfim E. Los niños mineros: un problema aun oculto en Ecuador. 2004. Available at http://www.rebelion.org/noticia.php?id=5125. 213. McMahon G. An Environmental Study of Artisanal, Small, and Medium Mining in Bolivia, Chile, and Peru. Washington: World Bank; 1999. 214. Tarras-Wahlberg NH, Flachier A, Fredriksson G, et al. Environmental impact of small-scale and artisanal gold mining in southern Ecuador: implications for the setting of environmental standards and for the management of small-scale mining operations. AMBIO J Human Environ. 2000;29:484–91. 215. Veiga MM. Introducing New Technologies for Abatement of Global Mercury Pollution in Latin America. Rio de Janeiro: UNIDO/UBC/CETEM/CNPq; 1997. Accessed May 18, 2006 at http://www.facome.uqam.ca/pdf/veiga_01.pdf. 216. Douglas A, Forster CB. Price of gold: environmental costs of the new gold rush. Ecologist. 1993;23:91–2. 217. Mol JH, Ouboter PE. Downstream effects of erosion from small-scale gold mining on the instream habitat and fish community of a small neotropical rainforest stream. Conservation Biol. 2004;18:201–14. 218. Peterson DG, Heemskerk M. Deforestation and forest regeneration following small-scale gold mining in the Amazon: the case of Suriname. Environ Conservation. 2001;28:117–26. 219. Harari R, Forastiere F, Axelson O. Unacceptable occupational exposure to toxic agents among children in Ecuador. Am J Ind Med. 1997;32:185–9. 220. Centro Desarrollo y Autogestión (DyA), Programa Internacional para la Erradicación de Trabajo Infantil (IPEC), Organización Internacional del Trabajo (OIT). Línea de Base: Trabajo infantil en la minería artesanal del oro en Ecuador. Lima: Sistema de Información Regional sobre Trabajo Infantil; 2002. Accessed May 18, 2006 at http://www.oit.org.pe/ipec/documentos/lb_mineria_ecuador.pdf. 221. Hinton J, Veiga M, Beinhoff C. Women and artisanal mining: gender roles and the road ahead.
In: Hilson G, ed. The Socio-Economic Impacts of Artisanal and Small-Scale Mining in Developing Countries. Oxford: Taylor & Francis; 2003. 222. Comas A. Las maquiladoras en México y sus efectos en la clase trabajadora. Globalización: Revista Mensual de Economía, Sociedad y Cultura, November 2002. Accessed May 18, 2006 at http://www.rcci.net/globalizacion/2002/fg296.htm. 223. INEGI (Instituto Nacional de Estadística, Geografía, e Informática). Estadística de la industria maquiladora de exportación (EIME). 2005. Accessed May 18, 2006 at http://www.inegi.gob.mx/est/default.asp?c=1807. 224. Comité Fronterizo de Obreras. Algunos datos de la industria maquiladora de exportación con base en cifras del Instituto Nacional de Estadística, Geografía e Informática (INEGI). June 2005. Accessed May 18, 2006 at http://www.cfomaquiladoras.org/dataprincipalabril05.htm. 225. Rodríguez OL. The city that makes the maquila: the case of Ciudad Juárez (México). [La ciudad que hace la maquila: el caso de Ciudad Juárez (México)]. Scripta Nova, Revista Electrónica de Geografía y Ciencias Sociales. 2002;6:119(53). Accessed May 18, 2006 at http://www.ub.es/geocrit/sn/sn119–53.htm. 226. Kourous G. La salud y la seguridad laboral en las maquiladoras: el bienestar de los trabajadores está en juego. Borderlines 47; 1998. Accessed May 18, 2006 at http://americas.irc-online.org/borderlines/spanish/1998/bl47esp/bl47seg.html. 227. Frumkin H, Hernandez-Avila M, Torres F. Maquiladoras: a case study of free trade zones. Occup Environ Health. 1995;1:96–109. 228. Harlow SD, Becerril LA, Scholten JN, Sanchez Monroy D, Sanchez RA. The prevalence of musculoskeletal complaints among women in Tijuana, Mexico: sociodemographic and occupational risk factors. Int J Occup Environ Health. 1999;5(4):267–75.
229. Guendelman S, Jasis M. The health consequences of maquiladora work: women on the U.S.-Mexico border. Am J Public Health. 1993;83:37–44. 230. Jasis M, Guendelman S. Maquiladoras y mujeres fronterizas: ¿Beneficio o daño a la salud obrera? Salud Pública de México. 1993;35:620–29. 231. Guendelman S, Samuels S, Ramirez M. Women who quit maquiladora work on the U.S.-Mexico border: assessing health, occupation, and social dimensions in two transnational electronics plants. Am J Industrial Med. 1998;33(5):501–9. 232. Guendelman S, Samuels S, Ramirez-Zetina M. [The relationship between health and job quitting in female workers of the electronics assembly industry in Tijuana]. Salud Publica de Mexico. 1999;41(4):286–96. 233. Meservy D, Suruda AJ, Bloswick D, Lee J, Dumas M. Ergonomic risk exposure and upper-extremity cumulative trauma disorders in a maquiladora medical devices manufacturing plant. JOEM. 1997;39(8):767–73. 234. Environmental Health Coalition. Border Environmental Justice Campaign. Accessed May 18, 2006 at http://www.environmentalhealth.org/border.html. 235. Warner DC. Health issues at the U.S.-Mexican border. JAMA. 1991;265:242–7. 236. Derechos Humanos en Mexico. Globalización, migración y explotación en la industria maquiladora. El caso de la frontera de Tamaulipas. Estudios Fronterizos, January–February 2000. Accessed May 18, 2006 at http://www.derechoshumanosenmexico.org/informesenword/infglbl.doc. 237. United Nations. Commission on the Status of Women, United Nations, New York, February 28–March 17, 2000. Accessed May 18, 2006 at http://www.un.org/womenwatch/daw/csw/. 238. San Sebastián M, Hurtig A. Oil development and health in the Amazon basin of Ecuador: the popular epidemiology process. Soc Sci Med. 2005;60:799–807. 239. Jochnick C, Normand R, Zaidi S. Rights violations in the Ecuadorian Amazon: the human consequences of oil development. Health Hum Rights. 1994;1:82–100. 240. Almeida A.
Reseña sobre la historia ecológica de la Amazonía ecuatoriana. In: Martínez E, ed., El Ecuador post petrolero, Quito: Acción Ecológica, 2000, pp 27–38. 241. Kimerling J. Oil development in Ecuador and Peru: law, politics and the environment. In: Hall A, ed. Amazonia at the Crossroads: The Challenge of Sustainable Development. London: Institute of Latin American Studies; 2000. 242. San Sebastian M, Armstrong B, Cordoba JA, Stephens C. Exposures and cancer incidence near oil fields in the Amazon basin of Ecuador. Occup Environ Med. 2001;58(8):517–22. 243. San Sebastian M, Hurtig AK. Cancer among indigenous people in the Amazon Basin of Ecuador, 1985–2000. Rev Panam Salud Publica. 2004b;16:328–33. 244. Hurtig AK, San Sebastian M. Geographical differences in cancer incidence in the Amazon basin of Ecuador in relation to residence near oil fields. Int J Epidemiol. 2002a;31(5):1021–7. 245. Hurtig AK, San Sebastian M. Gynecologic and breast malignancies in the Amazon basin of Ecuador, 1985–1998. Int J Gynecol Obstet. 2002b;76:199–201. 246. Hurtig AK, San Sebastián M. Incidence of childhood leukemia and oil exploitation in the Amazon basin of Ecuador. Int J Occup Environ Health. 2004;10:245–50. 247. San Sebastian M, Armstrong B, Stephens C. Health of women living near oil wells and oil production stations in the Amazon region of Ecuador. Rev Panam Salud Publica. 2001;9:375–84. 248. San Sebastian M, Armstrong B, Stephens C. Outcomes of pregnancy among women living in the proximity of oil fields in the Amazon basin of Ecuador. Int J Occup Environ Health. 2002;8: 312–9.
Further Reading Adamson J, Evans M, Stein R. The Environmental Justice Reader: Politics, Poetics & Pedagogy. Tucson, AZ: University of Arizona Press; 2002. Agyeman J. Sustainable Communities and the Challenges of Environmental Justice. New York, NY: New York University Press; 2005. Agyeman J, Bullard RD, Evans B. Just Sustainabilities: Development in an Unequal World. Cambridge, MA: MIT Press; 2003. Bryant B, Mohai P. Race and the Incidence of Environmental Hazards: A Time for Discourse. Boulder, CO: Westview Press Inc.; 1992. Bryant B. Environmental Justice: Issues, Policies, and Solutions. Washington D.C.: Island Press; 1995. Bullard RD, Johnson GS, Torres AO. Sprawl City: Race, Politics, and Planning in Atlanta. Washington D.C.: Island Press; 2000. Bullard RD. Dumping in Dixie: Race, Class, and Environmental Quality. Boulder, CO: Westview Press Inc.; 1990. Bullard RD. The Quest for Environmental Justice. San Francisco, CA: Sierra Club Books; 2005. Bullard RD. Unequal Protection: Environmental Justice & Communities of Color. San Francisco, CA: Sierra Club Books; 1994. Calderon RL, Johnson CC, Jr, Craun GF, et al. Health risks from contaminated water: do class and race matter? Toxicol Industrial Health. 1993;9:879–900. Camacho D. Environmental Injustices, Political Struggles: Race, Class and the Environment. Durham, NC: Duke University Press; 1998. Cole LW, Foster SR. From the Ground Up: Environmental Racism and the Rise of the Environmental Justice Movement. New York, NY: New York University Press; 2001. Colopy JH. The road less traveled: pursuing environmental justice through Title VI of the Civil Rights Act of 1964. Stanford Environ Law J. 1994;13(125):1–89. Corburn J. Street Science: Community Knowledge and Environmental Health Justice. Cambridge, MA: MIT Press; 2005. Doyle T. Environmental Movements in Majority and Minority Worlds: A Global Perspective. New Brunswick, NJ: Rutgers University Press; 2005.
Draft Principles On Human Rights And The Environment, E/CN.4/Sub.2/1994/9, Annex I. 1994. Accessed May 18, 2006 at http://www1.umn.edu/humanrts/instree/1994-dec.htm. Edelstein MR. Contaminated Communities: The Social and Psychological Impacts of Residential Toxic Exposure. Boulder, CO: Westview Press Inc.; 1988. Environmental Protection Agency. Environmental Equity: Reducing Risk for All Communities. EPA230-R-92-008. Washington: USEPA; 1992. Faber D. The Struggle for Ecological Democracy: Environmental Justice Movements in the United States. New York, NY: Guilford Press; 1998.
Foreman CH. The Promise and Peril of Environmental Justice. Washington D.C.: Brookings Institution Press; 1998. Gerrard MB. Whose Backyard, Whose Risk: Fear and Fairness in Toxic and Nuclear Waste Siting. Cambridge, MA: MIT Press; 1994. Institute of Medicine. Toward Environmental Justice: Research, Education and Health Policy Needs. Washington D.C.: National Academy Press; 1999. Kamuzora M. Non-decision making in occupational health policies in developing countries. Int J Occup Environ Health. 2006;12:65–71. Knudson T. State of Denial: A special report on the environment. Chapter One: Staining the Amazon. The tropics suffer to satisfy state’s thirst for oil. Sacramento Bee, 27 April 2003. Accessed May 18, 2006 at http://www.sacbee.com/static/live/news/projects/denial/. Lerner S. Diamond: A Struggle for Environmental Justice in Louisiana’s Chemical Corridor. Cambridge, MA: MIT Press; 2005. Lester JP, Allen DW, Hill KM. Environmental Injustice in the United States: Myths and Realities. Boulder, CO: Westview Press Inc.; 2001. North American Commission for Environmental Cooperation. Significant biodiversity loss across North America. 2002. Accessed May 18, 2006 at http://www.cec.org/news/details/index.cfm?varlan=english&ID=2441. North American Free Trade Agreement. 1994. Accessed May 18, 2006 at http://www.nafta-sec-alena.org/DefaultSite/index_e.aspx?DetailID=78. Pellow DN, Brulle RJ. Power, Justice, and the Environment. Cambridge, MA: MIT Press; 2005. Richter M. Nambija gold rush. Accessed May 18, 2006 at http://www.geographie.uni-erlangen.de/mrichter/. Roberts JT, Toffolon-Weiss MM. Chronicles from the Environmental Justice Frontline. Cambridge, UK: Cambridge University Press; 2001. Severo R. Air force rejects cadets with sickle trait. New York Times, 4 February 1980; A1. Severo R. Dispute arises over Dow studies on genetic damage in workers. New York Times, 5 February 1980; A1. Severo R. Federal mandate for gene tests disturbs U.S.
job safety official. New York Times, 6 February 1980; A1. Severo R. Screening of blacks by DuPont sharpens debate on gene tests. New York Times, 4 February 1980; A1. Soliman MR, Derosa CT, Mielke HW, Bota K. Hazardous wastes, hazardous materials and environmental health inequity. Toxicol Indust Health. 1993;9:901–12. Takaro TK, Gonzalez Arroyo M, Brown GD, Brumis SG, Knight EB. Community-based survey of maquiladora workers in Tijuana and Tecate, Mexico. Int J Occup Environ Health. 1999;5(4):313–5. U.S. General Accounting Office. U.S.-Mexico Trade: The Work Environment at Eight U.S.-Owned Maquiladora Auto Parts Plants. GAO/GGD-94-22. November 1993.
43
The Health of Hired Farmworkers Don Villarejo • Marc B. Schenker
OVERVIEW
For several decades, migrant and seasonally employed, hired farm laborers were identified as a “special population” in need of programs of government and/or philanthropic assistance. Thus, Migrant Health, Migrant Education, Migrant Job-Training, Migrant Legal Services, and, more recently, Migrant Head Start, were developed to respond to the needs of workers who often traveled great distances, often with their entire families, in search of farm work. In the first years of these programs, only U.S.-born “migrant” workers were eligible to be served. Subsequently, it was recognized that those workers who were employed on a “seasonal” basis in agriculture had very similar characteristics and needs, and the requirement of being U.S.-born was dropped from eligibility standards. The composition of the hired farm labor force has dramatically changed since those early years of the “migrant” programs. For this reason, it is important to be clear about the population of interest in this chapter. The term “farmworker” can refer to three groups: farmers, unpaid family workers (usually members of the farm family), and hired workers. This paper is concerned with hired farmworkers, defined as persons who are employed on a farm to perform tasks that directly result in the production of an agricultural commodity intended for sale. Postharvest processing tasks are excluded from this definition. Note also that the nature of the employer is not specified in this definition. Individuals performing farm tasks might be working for a farmer, labor contractor, packer/shipper, or another type of labor market intermediary. There is a marked absence of reliable data on the number of such workers, either today or at any time in the past. Thus, epidemiology in this population is severely restricted by the lack of reliable denominator data. 
In 1992, the authoritative federal Commission on Agricultural Workers (CAW) estimated the number of persons employed as hired farm laborers in the United States at 2.5 million individuals.1 Most jobs filled by farm laborers are short term so that the corresponding employment figure (sometimes described as full-time-equivalents or FTE), equal to the annual average of monthly employment, is much lower, perhaps numerically half or less of the CAW estimate. In contrast, the self-employment of farmers and unpaid family members is estimated to be about 2.0 million.2 Thus, directly hired farmworker and agricultural service (contract) worker employment was an estimated 37% of the national total of farm employment in 1992. The relative importance of hired farm laborers in U.S. agriculture has increased in recent years. For example, in California, where virtually all hired farmworker employment and/or payroll is reported to both the state Labor and Employment Agency, and to the Workers Compensation Insurance Rating Bureau, the proportion of the total amount of all work on farms that was performed by farmers and unpaid family members dropped from 40% in 1950 to just 15% in 2001.3 Correspondingly, the share performed by hired workers increased from 60% to 85% of the total.3
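The 37% share cited above follows from back-of-envelope arithmetic on the figures in the text. The sketch below reproduces it; the exact FTE fraction (0.47) is an assumed value, chosen only to be consistent with the chapter's description of FTE employment as "numerically half or less" of the CAW head count:

```python
# Rough check of the hired-labor share of U.S. farm employment, 1992.
# Head count and family-labor figures are from the text; the FTE
# fraction is an assumption ("numerically half or less").
caw_head_count = 2.5e6   # CAW estimate of persons doing hired farm work
fte_fraction = 0.47      # assumed: somewhat less than half
family_fte = 2.0e6       # farmers and unpaid family members (self-employed)

hired_fte = caw_head_count * fte_fraction
share = hired_fte / (hired_fte + family_fte)
print(f"hired share of total farm employment: {share:.0%}")  # ~37%
```

With an FTE fraction of exactly one-half, the share rises to about 38%, so the published 37% implies a conversion factor slightly below half.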
As a result of this trend and other factors (see below), the reported employment of directly hired and contract farm laborers in California increased significantly in recent years.4 There is also evidence that a smaller, but nevertheless significant, increase of hired farm laborer employment during the past several decades occurred in some other important farm states, such as Oregon and Washington, but geographic variation exists in the United States.5 Three major factors account for the greater utilization of hired workers in U.S. agriculture. First, there has been a substantial growth in the importance of labor-intensive crops in the nation’s agriculture. For over 20 years, there has been a steady increase in the proportion of U.S. crop farm cash receipts derived from the sale of fruits and nuts, vegetables, and nursery and greenhouse products (F-V-N crops). As reported by the Census of Agriculture, in 1974, F-V-N commodities were just over one-sixth (17.3%) of farm cash receipts from crop sales.6 By 2002, the F-V-N share had increased to more than two-fifths (43.3%) of the total for all crops.7 U.S. farmers now receive more than twice as much from the sale of nursery and greenhouse crops as from wheat production ($14.7 billion vs. $5.9 billion in 2002).7 Since gross agricultural cash receipts may be reduced when production is high, owing to lower commodity prices when increases of supply exceed demand, an independent, and possibly more accurate, measure of the growth in fruit and vegetable production is the change of the physical output, measured in tons. U.S. fruit and vegetable production, in tons harvested, nearly doubled (+95%) between the population census years 1970 and 2000.8 Since population growth was about 38% during this period,9 increased per capita consumption of some fresh commodities and increased exports of many commodities accounted for more than half of the growth of production. 
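The "more than half" claim above can be verified from the two growth rates just cited (a simple arithmetic sketch, not a calculation from the source itself): production that merely kept pace with population would have grown about 38%, so the portion of the 95% growth beyond that reflects per capita consumption and exports.

```python
# Fruit and vegetable production vs. population growth, 1970-2000
# (growth rates as cited in the text).
production_growth = 0.95   # tons harvested, +95%
population_growth = 0.38   # U.S. population, +38%

# Fraction of the production increase not explained by population
# growth alone, i.e., attributable to per capita consumption and exports:
beyond_population = (production_growth - population_growth) / production_growth
print(f"growth beyond population: {beyond_population:.0%}")  # 60%
```

Sixty percent is indeed "more than half," as the text states.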
Greater utilization of some other commodities, such as wine grapes and processing tomatoes, accounts for the remainder of the growth in production.

Second, sharply increased farm size is associated with supplementing farmer and family labor with hired labor. Among fruit and vegetable producers, the increase in size concentration is particularly dramatic. Between 1974 and 1992, the number of U.S. farms reporting 500 or more acres of harvested vegetables increased from 919 to 1416, and the corresponding aggregate acreage of vegetables harvested grew from 1,145,703 to 2,028,928. For land in orchards, between 1974 and 2002, the number of U.S. farms with at least 500 acres of trees and/or vines grew from 972 to 1522, and the corresponding aggregate acreage in orchards increased from 1,334,105 to 2,152,941.

Third, the steady, long-term decline in farming as an occupation has led to a greater reliance on hired labor to supplement or replace family labor. This is reflected in Census of Agriculture reports on workers directly employed for 150 days or more by U.S. farms (these longer-term hired laborers are described as “regular” workers by some economists). In 1974, nearly one in 10 (9.6%) U.S. farms reported employing at least some of their laborers for this duration, and the aggregate total was 712,715 such workers.6 By 2002, the proportion of farms employing “regular” hired workers had increased, and the total of such workers had grown to 927,708.10 Thus, in 28 years, the share of U.S. farms directly hiring workers for at least 150 days had increased, and the number of such workers had climbed by 30%.

Copyright © 2008 by The McGraw-Hill Companies, Inc.

Interestingly, the increase in the hiring of “regular” workers by some farms has not been associated with increased direct hiring of short-term workers on U.S. farms. Between 1974 and 2002, the aggregate number of workers reportedly hired for less than 150 days fell sharply, from 4,502,517 to 2,108,762. This figure is the aggregate of individual farm operators’ reports of the number of persons hired for less than 150 days, and a hired laborer who works on two or more farms is likely to be counted multiple times. For this reason, economists often refer to this figure as a count of jobs; it should not be regarded as a count of individuals. At the same time, there has been a very sharp increase in the use of labor contractors and other labor market intermediaries, especially from the mid-1980s to the present. Between 1974 and 2002, the number of U.S. farms reportedly utilizing labor contractors nearly doubled, from 119,385 to 228,692. Nominal contract labor expenses by farm operators grew in the same period by a phenomenal 575%, from roughly $512 million to $3.5 billion. The greatest growth in labor contractor utilization in this period was in states with a very high proportion of immigrants in the farm labor force, such as Arizona, Florida, and California. There is some evidence that the increased reliance on labor contractors is associated with the enactment of the Immigration Reform and Control Act of 1986, a law that imposed fines and possible imprisonment on employers who “knowingly hired” unauthorized immigrants (the employer sanctions provision of the law). For the first time, every U.S. employer was required to demand that every employee, and every prospective hire, document his or her eligibility for employment in the United States. A government-issued reporting form, Form I-9, requires both the worker and the employer to attest to the verification of employment eligibility. This shift to labor market intermediaries may have significant health implications because it insulates farm operators from the farmworkers working on their farms.

Data from the U.S. Department of Agriculture’s Quarterly Survey of Agricultural Labor show that direct-hire short-term employees (<150 days) are far fewer in number than those directly hired and working 150 days or more, even during periods of peak employment (Table 43-1).11 In contrast, contract workers are far more numerous than direct-hire short-term employees during all 4 survey months, and greatly exceed their number at peak periods. During 1985, the first year for which strictly comparable data were reported, the number of direct-hire short-term workers was greater than the number of contract workers in all 4 survey months, and was twice as numerous in 2 of those months. Thus, by 2003, the situation had been completely reversed.12

Environmental Health

TABLE 43-1. FARM EMPLOYMENT, CALIFORNIA, 2003, BY CATEGORY OF EMPLOYMENT, USDA

Category of Employment                     January    April      July       October
Direct-hire workers, 150 days or more      190,000    185,000    203,000    179,000
Direct-hire workers, less than 150 days     40,000     35,000     32,000     51,000
Agricultural service (contract) workers     75,000     67,000    125,000    118,000
Total                                      305,000    287,000    360,000    348,000

It is also significant that there are very substantial regional differences in all of these developments. The Pacific and Northwest Region has especially benefited from the increased importance of fruit, vegetable, and nursery crop production. While Midwestern and Northeastern states have experienced substantial economic stress in the farm sector, the western states have enjoyed a boom in both production and net returns. For example, of the nation’s net increase in land in orchards of 1,282,318 acres between 1974 and 2002, the Pacific and Northwest states alone accounted for 1,271,516 acres. In the case of harvested vegetable acres, the net increase on all U.S. farms was 628,976 acres, while for the Pacific and Northwest states it was 546,934 acres. Thus, the vast majority of the economic value of the increased production of both of these types of crops was captured by the western states. Correspondingly, the employment of hired farmworkers increased most markedly in that region.

Less well appreciated has been the remarkable increase in dairy production in the western states during this same period. California is now the nation’s leader in fluid milk production and is expected to surpass Wisconsin in cheese output within the next several years. Overall, the Pacific and Northwest states saw a doubling of the number of milk cows between 1974 and 2002, from 1,246,533 to 2,590,308. In the same states and period, the number of dairy farms fell from 8821 to 4813; correspondingly, the average number of milk cows per dairy farm shot up from 141 to 538 and continues to rise. Many more “milkers” have been hired to assist with the continuous tasks of caring for and milking the animals.

U.S.-HIRED FARMWORKERS—CHARACTERISTICS
The U.S. Department of Labor conducts a large-scale, ongoing national survey of workers employed in seasonal agricultural crop services, the National Agricultural Workers Survey (NAWS). Hired livestock laborers and certain other types of hired farmworkers are excluded from the survey. Begun in 1988 to assess the effects of the Immigration Reform and Control Act on the supply and characteristics of U.S. agricultural workers, the NAWS provides a detailed body of knowledge about this population, including demographic information, patterns of employment, working conditions, use of social services, and, beginning in 1999, information about health-care access. The most recent published report of findings from the NAWS is based on 6472 personal interviews conducted between October 1, 2000, and September 30, 2002, in 80 or more randomly selected counties throughout the United States.13 The NAWS finds that the characteristic hired crop farm laborer is a young, low-income, foreign-born (mostly Mexican) male with low educational attainment who has only recently migrated to the United States (Table 43-2).

TABLE 43-2. CHARACTERISTICS OF U.S.-HIRED CROP FARMWORKERS, 2000–02, NAWS, N = 6,472

Characteristic                       Finding              Trend from 1990–92
Age (median)                         31 years             Unchanged
Male                                 79%                  Increasing
Foreign-born                         78%                  Increasing
Educational attainment (median)      7 years              Unchanged
Undocumented immigrant               53%                  Increasing
Household income (median)            $10,000–$12,499      Unchanged
Yearly farm work (median)            34 weeks             Increasing
Indigenous migrant                   N/A                  Increasing

Most U.S.-hired farmworkers are characterized by low socioeconomic status (SES), a characteristic long associated with adverse health outcomes. One surprising finding from the NAWS is that 16% of all workers interviewed were “newcomers,” having been in the United States for less than 1 year at the time of the interview. All but a handful of these were young men, and over 90% of them told government interviewers that they had entered the country without immigration authorization. When asked about their ethnicity, the vast majority of all NAWS participants (83%) self-identified as Latino or Hispanic. Thirty percent live in poverty, as measured by their total household income in the year prior to the interview and using federal poverty income standards for the corresponding household size and period.

An overriding fact about the participants in the NAWS survey is that more than half (53%) told government interviewers that they lacked immigration authority to work in the United States, that is, they were undocumented. Apart from legal issues associated with their employment, the immigration status of such workers, by itself, creates enormous barriers to accessing services and to their willingness to report abuses and other misconduct to appropriate authorities. In the post-September 11, 2001, period of border security anxiety, few undocumented workers choose to place themselves at risk of deportation by seeking out government agencies or service providers.

One of the more difficult-to-measure characteristics of this population is the recent increase in the number of indigenous migrants from southern Mexico and Central America. Mayan, Mixtec, Zapotec, Triqui, and other native peoples are coming to the United States in very large numbers seeking employment.
There is evidence that these recent arrivals, many of whom speak neither Spanish nor English, relying instead on their own indigenous language, are displacing traditional mestizo (mixed-race) immigrants in agricultural jobs in some regions of the United States.14 The NAWS does not report indigenous ethnicity but does note that the share of foreign-born workers coming from the southern Mexican states of Guerrero, Oaxaca, Chiapas, Puebla, Morelos, and Veracruz doubled to 19% as compared with the 1993–94 NAWS findings. Most indigenous migrants originate from these states. An important finding from the NAWS is that roughly half of all crop workers interviewed said they could not read or speak English “at all” (53% and 44%, respectively). A substantial additional fraction said they could read or speak English only “a little” (20% and 26%, respectively). This finding has important implications for all aspects of their employment and access to services. Cultural and linguistic barriers are substantial, and are increasing with the arrival of large numbers of indigenous migrants for whom Spanish is a second language.

Nearly three-quarters (72%) of crop farmworkers told NAWS interviewers that they had only one farm employer in the previous year. Those working for a labor contractor may actually perform farm tasks on many farms, but their formal employer is just the contractor. The NAWS found that the share of all crop farmworkers employed by a labor contractor was 21%, a sharp increase over the 11% who reported to NAWS in federal fiscal year (FY) 1989 that they were so employed. This finding of increased utilization of contractors in recent years is consistent with a wide range of other data on farm labor employment.

U.S.-HIRED FARMWORKERS—ORGANIZATIONS
Historically, labor unions have proved extremely difficult to organize among hired farmworkers. Migrant status, seasonal employment, high turnover within the labor force, and immigration status are usually cited as leading factors contributing to this difficulty. Some successes since the early 1960s, notably by the United Farmworkers Union, raised the hope that this historic difficulty would at last be overcome; however, those early successes have proved short-lived. At present, only five unions are known to have active labor agreements with farm employers.
The Health of Hired Farmworkers
The total number of farm laborers under active union contract is about 28,000, a little over 1% of the estimated national total of persons eligible.15 A less well-known aspect of farm labor organizing has been the growth of organizations based on home-village networks, which are of particular importance among indigenous migrants from southern Mexico and Central America.15 This binational networking provides both support systems for recent immigrants and leadership to address common problems.
U.S.-HIRED FARMWORKERS—HEALTH STATUS
Relatively little is known about the health status of U.S.-hired farmworkers. There have been no national, cross-sectional assessments of the health status of this population that included a reasonably comprehensive physical examination, and few studies have been done in localized areas.16 Many of the hazards of agricultural work are common to everyone working in the industry. However, studies of health among farm owners and managers, which themselves have been fewer than for other industrial sectors, may not reflect the status of farmworkers because of differences in age, ethnicity, economic status, work conditions, family structure, and health-related behaviors. We have therefore focused this chapter on research that has specifically addressed the health of hired farmworkers. Studies of mortality and morbidity in this population are hampered by the fact that an unknown, but presumably large, number of Mexican-born workers seek treatment in Mexico or return to their home community after an injury or a period of work in the United States. The magnitude of this movement and its effect on mortality rates are unknown.
Mortality

An early study of mortality in California analyzed all deaths from 1979 to 1981.17 Deaths from falls and machinery accidents (SMR = 380) and from other accidents (SMR = 310) were significantly elevated among farmworkers, as they are among farmers. Deaths from chronic obstructive pulmonary disease were also significantly elevated in this population (SMR = 147), and overall mortality was significantly elevated as well (SMR = 166; 95% CI, 160–172). A more recent report summarized proportionate mortality from the death certificates of 26,148 farmworkers in 24 states for 1984–1993.18 Elevated proportionate mortality was found for injuries, tuberculosis, mental disorders, cerebrovascular disease, respiratory diseases, ulcers, hypertension, and cirrhosis. Reduced mortality was found for other infectious diseases, endocrine disorders, nervous system diseases, pneumoconiosis, arteriosclerotic heart disease, and all cancers combined. The increased mortality from injuries and respiratory disease is also seen among farm owners and managers, and reflects the occupational hazards of farm work. The increase in tuberculosis mortality reflects higher endemic rates in the countries sending farmworkers to the United States, lack of adequate medical care, and the housing conditions of migrant and seasonal farmworkers. Several studies have identified an increased risk of tuberculosis among hired farmworkers.19,20 The Centers for Disease Control and Prevention estimates that 1% of tuberculosis cases in the United States occur among migrant farmworkers. Risk factors include being male, foreign-born, or Hispanic, and having a history of alcohol abuse or homelessness.20
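The standardized mortality ratios (SMRs) cited above are ratios of observed to expected deaths, scaled by 100, with confidence intervals typically derived from the Poisson distribution. A minimal sketch of the calculation follows; the observed and expected counts are hypothetical, chosen only to reproduce an SMR of 166, and are not the counts from the cited study (which, being based on far more deaths, yields a much narrower CI):

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio (x100) with an approximate 95% CI for the
    observed count, using Byar's approximation to the Poisson distribution."""
    smr = 100.0 * observed / expected
    o_lo = observed * (1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))) ** 3
    o_hi = (observed + 1) * (1 - 1 / (9 * (observed + 1)) + z / (3 * math.sqrt(observed + 1))) ** 3
    return smr, 100.0 * o_lo / expected, 100.0 * o_hi / expected

# Hypothetical example: 83 deaths observed vs. 50 expected from
# reference-population rates gives SMR = 166, i.e., 66% excess mortality.
smr, lo, hi = smr_with_ci(83, 50)
print(f"SMR = {smr:.0f} (95% CI {lo:.0f}-{hi:.0f})")
```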
Morbidity

The California Agricultural Workers Health Survey (CAWHS) was a large-scale, population-based assessment of the health of hired farmworkers conducted in 1999.21 The CAWHS was a household survey based on sampling in agricultural regions of California. A total of 970 participants (an 83% participation rate) completed a lengthy, structured interview, and two-thirds of these (652) also completed a physical examination and a private risk-behavior interview. The physical examination findings of the CAWHS reveal a high prevalence of adverse chronic health outcomes. Among male participants in all but the youngest age group, obesity (body mass index >30) was found at a significantly higher rate than in the general U.S. population (Fig. 43-1).22 The percentage of male workers who exhibited healthful weight was low in all age groups, and was just 4% in the 45–54 age group (compared with a corresponding figure of 29% among U.S. males of the same age group).22 Female CAWHS participants also showed a significantly higher prevalence of obesity. The prevalence of high serum cholesterol and high blood pressure among CAWHS participants was also elevated. Nearly one-fifth (18%) of male workers exhibited at least two of the three risk factors for chronic disease (obesity, high serum cholesterol, high blood pressure). Anemia was also common among both men and women. The CAWHS and other studies of hired farmworkers show elevated rates of dental disease.23 This is consistent with the lack of preventive health care in this population, due most significantly to economic barriers to receiving nonemergency health services.
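The obesity measure used in the CAWHS comparison is the standard body mass index (BMI). A minimal illustration of the computation and the conventional adult categories (the cutoffs are the standard WHO ones, and the function names are ours, not from the chapter):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Conventional adult cutoffs; the CAWHS analysis classified BMI > 30 as obese.
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "healthful weight"
    if value < 30:
        return "overweight"
    return "obese"

# Example: a 95-kg worker who is 1.70 m tall has a BMI of about 32.9.
print(bmi_category(bmi(95, 1.70)))  # obese
```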
Figure 43-1. Obesity, male hired farm workers, 1999, California, CAWHS, N = 415. (Bar chart comparing obesity prevalence, in percent, for CAWHS 1999 participants and the U.S. 1999–2000 population across age groups 20–34, 35–44, 45–54, and 55–64 years.)

Infectious Diseases

In addition to tuberculosis, other infectious diseases are increased in the hired farmworker population. Some of these infections reflect diseases endemic to sending communities or countries that are carried to the United States. Reports of intestinal parasites24 and malaria25 reflect diseases from sending populations in Latin America. A recent report found that the seroprevalences of the tapeworm infections Taenia solium cysticercosis (1.8%) and T. solium taeniasis (1.1%) were highest among hired farmworkers in a sample of Hispanic residents of Ventura County, California. Seropositivity was seen only in adults, and prevalences were similar to those in Latin American countries where the disease is endemic.26 HIV and AIDS are another concern because many hired farmworkers, particularly solo males, engage in high-risk behaviors. A few local or statewide studies report a high prevalence of syphilis in this population.27 The CAWHS documented elevated rates among men and women of many high-risk behaviors associated with STDs.21 These included sex with intravenous (IV) drug users, sex with prostitutes, and low frequency of condom use. Many respiratory infections have also been associated with agricultural work.28 Mycobacterium tuberculosis infection results from person-to-person transmission, whereas M. bovis infection may be acquired from infected cattle. Numerous respiratory infections can result from exposure to infected animals, including bacterial agents (e.g., anthrax, brucellosis, leptospirosis, psittacosis, Q fever, and tularemia), viruses (e.g., equine morbillivirus, swine influenza, and avian influenza), and parasites (e.g., ascariasis, echinococcosis). Exposure to soil and its contaminants in agriculture may be a source of fungal infections such as coccidioidomycosis, histoplasmosis, and blastomycosis. There are few epidemiologic studies addressing the risk of infection among farmworkers, but these respiratory infections are likely to be increased in the farmworker population because their work in the farm environment carries the potential for exposure.
Respiratory Disease

There are numerous studies documenting increased respiratory disease among farmers and farmworkers.29 Respiratory diseases occur in agriculture from exposure to a wide range of toxicants, including organic and inorganic dusts, allergens, microorganisms, mycotoxins, decomposition and silo gases, pesticides, fertilizers, fuels, and welding fumes. The diseases include the respiratory infections discussed above, as well as airway disorders, interstitial lung disease, and acute toxic injuries. A full discussion of this topic is beyond the space available here; however, recent research has begun to address these diseases among farmworker populations. Exposures occurring in western and southern agricultural settings have been of particular interest because the labor-intensive crops harvested in those regions employ large numbers of farmworkers. Very high dust exposures occur in these settings and are independently associated with an increase in chronic respiratory symptoms (cough, phlegm, wheezing) and with increased airflow obstruction.30,31 Organic dusts may be associated with occupational asthma from farming exposures, while inorganic dusts can result in both airway obstruction and interstitial disease.32,33
Injuries and Musculoskeletal Disorders

Agriculture ranks with mining and construction as one of the three industries with the highest rates of fatal occupational injuries.34,35 The annual rate of fatal occupational injuries in agriculture is over five times the rate in the private sector as a whole, and there has been only a slight decline in the rate over the past decade.35 The rate of fatal occupational injuries is slightly lower among Hispanic than non-Hispanic agricultural workers (15.8 versus 20.2 fatalities per 100,000 employed workers), but it is significantly elevated in both groups. The risk of fatal injury increases markedly for workers over 55 years of age, and the leading cause of fatal injuries is farm tractors. Motor vehicles, including unsafe farm transportation vehicles, are a particular hazard for farmworkers. Nonfatal occupational injury rates among farmworkers are highest between the ages of 45 and 59 years. There are an estimated 140,000 nonfatal disabling injuries in agriculture annually, although the exact number, and the percentage occurring among farmworkers, is unknown.36 The risk of agricultural injury is approximately 5–10 per 100 persons per year, but is higher in certain risk groups. Falls, sprains, machinery, and animals are among the most common causes. The most recent NAWS found that 24% of farmworkers surveyed reported at least one musculoskeletal problem in the past 12 months.13 The prevalence of musculoskeletal conditions has increased over the past decade, and it also increases with years worked in agriculture. A significant missing component of surveys of foreign-born U.S.-hired farmworkers is those workers who have permanently left the United States and returned to their home communities. Mines and colleagues sought to find hired farmworkers who had permanently returned to Mexico after concluding a lengthy period of employment in the United States.
The Bi-National Health Survey (BHS) was based on a census of seven villages in Zacatecas, and sought out both returnees and those still working on U.S. farms.37 One of the important findings of the BHS is the high prevalence (42%) of persistent pain, which participants invariably attributed to
their workplace exposures while working on U.S. farms. Half of these returnees preferred the Mexican health-care system to the U.S. system. Those too ill or injured to continue working had decided to return to their home villages, where their families provide care.
Adverse Reproductive Outcomes

Maternal health-care services for migrant farmworker women have been assessed with respect to prenatal care, weight gain during pregnancy, and birth outcomes; national goals were found not to be met.38 Linguistic and cultural barriers have proven formidable, particularly among indigenous migrants who speak neither English nor Spanish and whose native language has no written form.39 Counteracting this effect is the Hispanic epidemiologic paradox, described below.
PESTICIDE TOXICITY

It has been more than a decade since the Worker Protection Standard (WPS) regulations were implemented, requiring additional protection for U.S.-hired farmworkers from the hazards of workplace exposure to agricultural pesticides. In California, the Department of Pesticide Regulation (DPR) sought to assess the effects of these important new workplace safety standards by conducting in-field inspections of compliance. This internal DPR review found a widespread lack of full compliance: half of pesticide handler firms were out of compliance, and just under three-fourths of field worker employers (farm operators and labor contractors) had not fully complied with WPS standards. There are no comparable compliance surveys at the national level. A recent report on WPS compliance among 267 migrant farm laborer families whose usual place of residence is South Texas found that only about half (46%) had received pesticide safety training as required by the WPS.40

There are no national surveillance data on pesticide illnesses, and no estimates of risk by farmer versus farmworker status. The most comprehensive surveillance system is in California, where pesticide illness is a reportable condition and all reported cases are investigated by the County Agricultural Commissioners. In California there are an estimated 0.024 fatalities and 1.38 hospitalizations per 100,000 person-years due to pesticides. Most fatalities are nonoccupational (e.g., due to intentional ingestion).41 Excluding antimicrobials reduces these rates to 0.019 deaths and 0.92 hospitalizations per 100,000 person-years. These rates are similar to other estimates for the United States.42–44 The chronic effects of pesticide exposure among farmworkers are largely unknown. Concerns have been raised about cancer, adverse reproductive outcomes, and neurologic disorders, but hard data are largely lacking. The Agricultural Health Study is addressing the chronic effects of agrochemical exposures, but its study population is composed of farmers, spouses, and pesticide applicators, not farmworkers.45

CANCER

Epidemiologic and toxicologic data suggest that chronic low-level exposure to pesticides may be associated with an increased risk of some cancers.46 As noted, there are few data on specific risks among farmworkers because of the difficulties of studying this population. One recent investigation used California Cancer Registry data for a case-control study of breast cancer among farm labor union members.47 Mills and colleagues found an increased risk of breast cancer among hired farmworker women likely to have had greater exposure to certain agricultural chemicals. Several studies have observed an increased risk of prostate cancer among farming populations.46 A cancer registry-based study of incident prostate cancers among Hispanic farmworkers found evidence of increased prostate cancer risk associated with high levels of exposure to a variety of agricultural pesticides.48 A study of mammography screening among Hispanic women living in Lower Rio Grande Valley farmworker communities found that lack of health insurance was the primary reason women over 50 had not had a mammogram.49 This is consistent with the lack of preventive health care among farmworkers, directly attributable to lower socioeconomic status. Cancer risk among Hispanic farmworkers due to occupational exposures should be considered in the context of overall cancer risk in this population. In general, Hispanic immigrants have lower overall cancer morbidity and mortality, particularly for smoking-related malignancies. However, cervical cancer rates are higher among the Hispanic than the non-Hispanic population.50

RISK BEHAVIORS, ACCULTURATION
Acculturation is the phenomenon that results when groups of individuals from different cultures come into continuous first-hand contact, with subsequent changes in the original cultural patterns of either or both groups. Among immigrant farmworkers in the United States, several health-related behavioral changes have been associated with increased duration of residence in the United States and with specific acculturation scales.51 Studies of Hispanic immigrant populations, including farmworkers, show that several adverse health behaviors are associated with acculturation, including cigarette smoking, alcohol use, illegal drug use, and an unhealthy diet.52,53 Some of these acculturation-associated changes have been observed in women but not men; more research is needed to better characterize this phenomenon and to focus preventive efforts. Interestingly, recent immigrants are most likely to do farm work at the time when their health-behavior profile may be at its best: with increased duration of residence in the United States, some adverse behaviors increase as the likelihood of doing farm work decreases. The CAWHS also found a relationship between obesity prevalence and duration of U.S. residence among foreign-born hired farmworkers.21 The prevalence of obesity in each of three age groups was greater among those who had been U.S. residents for 15 years or longer than among those who had been in the country less than 15 years. A study of mental health among Mexican migrant farmworkers in California found lifetime prevalences of any psychiatric disorder of 26.7% among men and 16.8% among women.54 Lifetime prevalence of any psychiatric disorder was lower among migrants than among Mexican-Americans or the U.S. population as a whole, suggesting that acculturation may increase the likelihood of psychiatric disorders.
The Hispanic epidemiologic paradox refers to the observation that immigrant Hispanics have many health outcomes similar to, or better than, those of the non-Hispanic population despite lower socioeconomic status.55 Thus, many health indicators among Hispanics, such as infant mortality, life expectancy, mortality from cardiovascular diseases, and mortality from major types of cancer, are better than would be expected on the basis of socioeconomic status. However, some of these outcomes worsen with longer residence in the United States. For example, adverse pregnancy outcomes (preterm birth, low birthweight) double among women who have lived in the United States for more than 5 years.56 While the existence of the paradox has been challenged, the explanation for these observations is likely multifactorial and relates in part to the changes in health behaviors that occur with increased residence time in the United States (acculturation). A population at particular risk of adverse outcomes is solo male farmworkers, many of whom migrate for work. This population is at particular risk for substance abuse and STDs. In the CAWHS study, more than one-fourth (28%) of male workers routinely engaged in binge drinking (an average of five or more drinks per episode), and nearly one-fourth (23%) reported having used drugs at some point in their lives (Table 43-3).
TABLE 43-3. RISK BEHAVIORS, MALE HIRED FARMWORKERS, 1999, CALIFORNIA, CAWHS, N = 413

Risk Behavior                               Finding
Consume alcohol                             64%
Drinks per episode—median                   4
Average five or more drinks per episode     28%
Drinks per month—median                     20
Alcohol use while at farm job               7%
Drug use—ever                               23%
Threatened with violence at work            2%
The available evidence indicates that most hired farmworkers seek medical care only when it becomes absolutely necessary. Among male participants in the CAWHS, nearly one-third (31.8%) said they had never had a doctor or clinic visit, half (49.5%) said they had never been to a dentist, and more than two-thirds had never had an eye-care visit. There are only a few reports regarding health-care services for the children of hired farmworkers. Findings include evidence of late immunization,63 child abuse and neglect,64 iron deficiency,65 psychiatric disorders,66 and large numbers of children with untreated dental caries.67 A study of nearly all of the children in a predominantly farmworker community in California found that 70% of the children required a medical referral.68
HOUSING CONDITIONS
The contribution of poor housing conditions to adverse health outcomes among farmworkers is unknown, but is likely to be significant. Crowded and substandard housing is likely to increase the risk of diseases due to aerosol transmission and poor sanitation conditions. In the CAWHS study, about 30% of participants were found to reside in unconventional dwellings such as sheds, garages, and temporary structures—even under trees in open fields or orchards. Recent studies have documented pesticide exposures inside the housing of farmworkers with small children.57–60 Exposures may be from agricultural applications or from residential pesticides. In a study of 41 homes of farmworkers with children <7 years old in North Carolina and Virginia, agricultural pesticides, residential pesticides, or both were found in 95% of homes.57 A study of risk behaviors for pesticide exposure among pregnant women living in farmworker households found that between 25% and 60% of women demonstrated risky behaviors related to handwashing, bathing, protective clothing, house cleaning, and other factors that could increase indoor pesticide exposures.61
SUMMARY
The health of hired farmworkers in this country is affected by several factors, each having an influence on acute and chronic conditions in the population. A major impact is the poverty affecting this group. Poverty has diverse impacts, including lack of access to health care, decreased use of preventive health services (e.g., dental care, vision care, vaccinations), and poor housing conditions. Interestingly, the Hispanic epidemiologic paradox counteracts some of the expected effects of lower socioeconomic status in the population, with many health indicators being similar to those of the white population. Some health outcomes, such as infectious disease, obesity, and diabetes, reflect the lower socioeconomic status of the population. A second major influence is the hazards of agricultural work. Agricultural hazards cover a broad spectrum that includes physical stresses (e.g., trauma, heat, cold), infectious agents, chemical hazards, psychosocial stresses, and the effects of repetitive trauma. The effects of agricultural work on numerous health outcomes have been documented, although studies among hired farmworkers have been done much less frequently. Data on this population are further limited by the lack of effective surveillance systems, a paucity of studies on chronic health effects, and the mobility of the population. Health status is further compromised by inadequate medical care, limited workers' compensation for occupational injuries, and lack of legal rights due to undocumented immigration status. Finally, behavioral and other changes associated with acculturation, disruption of families, and migration have an important impact on the health of hired farmworkers. Many of the behavioral changes associated with acculturation, particularly among women, are reflected in worsening of health status after longer residence in the United States. Improvement in the health of hired farmworkers will require attention to all of these factors.
Many of the occupational health hazards of farming also affect farmers and farm family members, but specific attention to the health status of hired farmworkers is needed because of the unique conditions under which they labor.
U.S.-HIRED FARMWORKERS—ACCESS TO CARE
Most hired farmworkers lack any form of health insurance. Among NAWS participants, only 8% said they have health insurance provided by their employer, and only 15% said they or members of their immediate family used Medicaid in the year prior to the interview, despite the fact that a very large share meet the income-eligibility requirements. In the CAWHS sample, about one-ninth of the participants (11%) said they had employer-provided health insurance, but a larger share, one-sixth (16%), said their employer offered this benefit. Fewer workers had the coverage than were offered it because they said they could not afford to pay the share of the premium that their employer required. Lacking any form of health insurance, most hired farmworkers report paying "out-of-pocket" for their most recent health-care visit.62 Faced with formidable financial barriers in seeking health-care services, some workers prefer to rely on home remedies or to return to Mexico when care is needed. Importantly, when asked about their most recent health-care visit, male undocumented immigrants in the CAWHS sample were far more likely to say they had never gone to a doctor or clinic as compared with those who were documented or citizens.62 However, among females there was no significant difference on this measure of access to health-care services between undocumented workers and the other groups.62 The reason is that for many years, California has provided emergency Medi-Cal services for undocumented pregnant women, and extends the care from prenatal examinations through 4 months after giving birth. This additional form of coverage has clearly been of great benefit to undocumented women and their newborn children.
REFERENCES
1. Report of the Commission on Agricultural Workers. Washington, DC: Commission on Agricultural Workers; 1992: 1.
2. United States Department of Agriculture. Farm Labor. Washington, DC: National Agricultural Statistics Service; Quarterly, 2001.
3. California Department of Employment Development. Agricultural Employment Estimates. Sacramento, CA: Labor Market Information Division; 2001.
4. Villarejo D. California's Agricultural Employers: Twenty-five Years Later. In: Symposium to Observe 25th Anniversary of the Agricultural Labor Relations Act; 2000.
5. Larson A. Migrant and Seasonal Farmworker Enumeration Project. Office of Migrant Health, Health Resources and Services Administration, U.S. Department of Health and Human Services; 2000.
6. 1974 Census of Agriculture. United States. Summary and State Data. Washington, DC: Bureau of the Census; 1977: Table 8.
7. 2002 Census of Agriculture. United States. Summary and State Data. Washington, DC: USDA, National Agricultural Statistics Service; 2004: Table 2.
8. USDA. Agricultural Statistics. Washington, DC: Economic Research Service.
9. Cited 2005; Available from http://www.census.gov.
10. USDA. 1997 Census of Agriculture. Washington, DC: National Agricultural Statistics Service; 1997: Table 5.
11. United States Department of Agriculture. Farm Labor. Washington, DC: National Agricultural Statistics Service; Quarterly, 2003.
12. United States Department of Agriculture. Farm Labor. Washington, DC: National Agricultural Statistics Service; Quarterly, 1985.
13. Findings from the National Agricultural Workers Survey (NAWS): A Demographic and Employment Profile of United States Farmworkers. Washington, DC: U.S. Department of Labor, Office of the Assistant Secretary for Policy; 2005.
14. Zabin C, Kearney M, Garcia A, Runsten D, Nagengast C. Mixtec Migrants in California Agriculture. San Diego: California Institute for Rural Studies and Center for U.S.-Mexican Studies, UC San Diego; 1993.
15. Wells M, Villarejo D. Promise unfulfilled: unions, immigration and the farm workers. Politics Society. 2004;32(3):291–326.
16. Villarejo D. The health of U.S. hired farm workers. Annu Rev Public Health. 2003;24:175–93.
17. California Occupational Mortality, 1979–1981. Health Data and Statistics Branch, Health Demographics Section, California Department of Health Services; 1987.
18. Colt JS, et al. Proportionate mortality among U.S. migrant and seasonal farmworkers in twenty-four states. Am J Ind Med. 2001;40(5):604–11.
19. Ciesielski SD, et al. The epidemiology of tuberculosis among North Carolina migrant farm workers [published erratum appears in JAMA 1991 Jul 3;266(1):66]. JAMA. 1991;265(13):1715–9.
20. Schulte JM, et al. Tuberculosis cases reported among migrant farm workers in the United States, 1993–97. J Health Care Poor Underserved. 2001;12(3):311–22.
21. Villarejo D, et al. Suffering in Silence: A Report on the Health of California's Agricultural Workers. Davis, CA: California Institute for Rural Studies; 2000: 48.
22. Health Statistics. United States. Atlanta, GA: Centers for Disease Control and Prevention; 2002.
23. Lukes SM, Miller FY. Oral health issues among migrant farmworkers. J Dent Hyg. 2002;76(2):134–40.
24. Bechtel GA. Parasitic infections among migrant farm families. J Community Health Nurs. 1998;15(1):1–7.
25. United States Centers for Disease Control. Morb Mortal Wkly Rep. 1990;39(6):91–4.
26. DeGiorgio C, et al. Sero-prevalence of Taenia solium cysticercosis and Taenia solium taeniasis in California, USA. Acta Neurol Scand. 2005;111(2):84–8.
27. United States Centers for Disease Control. Morb Mortal Wkly Rep. 1992;41(39):723–5.
28. McCurdy S. Agricultural respiratory infections. Am J Respir Crit Care Med. 1998;158:S46–52.
29. Schenker M, ed. Respiratory health hazards in agriculture. Am J Respir Crit Care Med. 1998;S1–76.
30. Nieuwenhuijsen MJ, et al. Exposure to dust, noise and pesticides, their determinants and the use of protective equipment among California farm operators. Appl Occup Environ Hyg. 1996;11:1217–25.
31. Nieuwenhuijsen MJ, et al. Personal exposure to dust, endotoxin and crystalline silica in California agriculture. Ann Occup Hyg. 1999;43(1):35–42.
32. Schenker M. Exposures and health effects from inorganic agricultural dusts. Environ Health Perspect. 2000;108(Suppl 4):661–4.
33. Pinkerton KE, et al. Distribution of particulate matter and tissue remodeling in the human lung. Environ Health Perspect. 2000;108(11):1063–9.
34. Schenker MB. Preventive medicine and health promotion are overdue in the agricultural workplace. J Pub Health Pol. 1996;17(3):275–305.
35. U.S. Department of Health and Human Services. Worker Health Chartbook, 2004. NIOSH; 2004: 279–80.
36. McCurdy SA, Carroll DJ. Agricultural injury. Am J Ind Med. 2000;38(4):463–80.
37. Mines R, Mullenax N, Saca L. The Binational Farmworker Health Survey: An In-Depth Study of Farmworker Health in Mexico and the United States. Davis, CA: California Institute for Rural Studies; 2001: 28.
38. United States Centers for Disease Control. Morb Mortal Wkly Rep. 1997;46(13):283–6.
39. Bade B. Problems surrounding health care utilization for Mixtec migrant farmworker families in Madera, California. Davis, CA: California Institute for Rural Studies; 1993.
40. Shipp EM, et al. Pesticide safety training and access to field sanitation among migrant farmworker mothers from Starr County, Texas. J Agric Saf Health. 2005;11(1):51–60.
41. Mehler LN, O'Malley MA, Krieger RI. Acute pesticide morbidity and mortality: California. Rev Environ Contam Toxicol. 1992;129:51–66.
42. Klein-Schwartz W, Smith GS. Agricultural and horticultural chemical poisonings: mortality and morbidity in the United States. Ann Emerg Med. 1997;29(2):232–8.
43. Swinker M, et al. Pesticide poisoning cases in North Carolina, 1990–1993. A retrospective review. NC Med J. 1999;60(2):77–82.
44. Caldwell ST, et al. Hospitalized pesticide poisonings decline in South Carolina, 1992–1996. J SC Med Assoc. 1997;93(12):448–52.
45. Alavanja MC, et al. The agricultural health study. Environ Health Perspect. 1996;104(4):362–9.
46. Blair A, Zahm SH. Agricultural exposures and cancer. Environ Health Perspect. 1995;103(Suppl 8):205–8.
47. Mills PK, Yang R. Breast cancer risk in Hispanic agricultural workers in California. Int J Occup Environ Health. 2005;11(2):123–31.
48. Mills PK, Yang R. Prostate cancer risk in California farm workers. J Occup Environ Med. 2003;45(3):249–58.
49. Palmer RC, et al. Correlates of mammography screening among Hispanic women living in lower Rio Grande Valley farmworker communities. Health Educ Behav. 2005;32(4):488–503.
50. From the Centers for Disease Control and Prevention. Invasive cervical cancer among Hispanic and non-Hispanic women—United States, 1992–1999. JAMA. 2003;289(1):39–40.
51. Cuellar I, Arnold B, Maldonado R. Acculturation rating scale for Mexican Americans-II: a revision of the original ARSMA scale. Hisp J Behav Sci. 1995;17(3):275–304.
52. Kasirye O. Acculturation in a Rural Latino Population and its Association with Selected Health-Risk Behaviors [unpublished master's thesis]. Davis, CA: University of California, Davis; 2003.
53. Bethel JW, Schenker MB. Acculturation and smoking patterns among Hispanics: a review. Am J Prev Med. 2005;29(2):143–8.
54. Alderete E, et al. Lifetime prevalence of and risk factors for psychiatric disorders among Mexican migrant farmworkers in California. Am J Public Health. 2000;90(4):608–14.
55. Markides KS, Coreil J. The health of Hispanics in the southwestern United States: an epidemiologic paradox. Public Health Rep. 1986;101(3):253–65.
56. Guendelman S, English PB. Effect of United States residence on birth outcomes among Mexican immigrants: an exploratory study. Am J Epidemiol. 1995;142(Suppl 9):S30–8.
57. Quandt SA, et al. Agricultural and residential pesticides in wipe samples from farmworker family residences in North Carolina and Virginia. Environ Health Perspect. 2004;112(3):382–7.
58. Bradman MA, et al. Pesticide exposures to children from California's Central Valley: results of a pilot study. J Expo Anal Environ Epidemiol. 1997;7(2):217–34.
59. McCauley LA, et al. The Oregon migrant farmworker community: an evolving model for participatory research. Environ Health Perspect. 2001;109(Suppl 3):449–55.
60. Lu C, et al. Pesticide exposure of children in an agricultural community: evidence of household proximity to farmland and take home exposure pathways. Environ Res. 2000;84(3):290–302.
61. Coye M, Goldman L. Summary of environmental data: McFarland childhood cancer cluster investigation, Phase III Report. California Department of Health Services, Environmental Epidemiology and Toxicology Program; 1991.
62. Villarejo D, Lighthall D, Williams D III, et al. Access to Health Care for California's Hired Farm Workers: A Baseline Report. Berkeley: California Program on Access to Care, California Policy Research Center, University of California Berkeley; 2001.
63. Lee CV, McDermott SW, Elliott C. The delayed immunization of children of migrant farm workers in South Carolina. Public Health Rep. 1990;105(3):317–20.
64. Larson OW III, Doris J, Alvarez WF. Migrants and maltreatment: comparative evidence from central register data. Child Abuse Negl. 1990;14(3):375–85.
65. Ratcliffe SD, et al. Lead toxicity and iron deficiency in Utah migrant children. Am J Public Health. 1989;79(5):631–3.
66. Kupersmidt JB, Martin SL. Mental health problems of children of migrant and seasonal farm workers: a pilot study. J Am Acad Child Adolesc Psychiatry. 1997;36(2):224–32.
67. Nurko C, et al. Dental caries prevalence and dental health care of Mexican-American workers' children. ASDC J Dent Child. 1998;65(1):65–72.
68. McFarland Child Health Screening Project, 1989. Emeryville, CA: California Department of Health Services; 1992.
44
Women Workers
Karen Messing
In the United States, women are 46% of the paid workforce1 and have one-third of the compensated occupational health and safety problems, resulting in 81% of claims on a per-hour basis.2 Although employed women live longer than unemployed women and housewives,3 risk factors present in some jobs may adversely affect women's health. Action to improve women's occupational health has been slowed by a notion that women's jobs are safe2 and that any health problems identified among women workers can be attributed to unfitness for the job, hormonal factors, or unnecessary complaining. In the past, little research in occupational health concerned women.4,5 However, the rise in the number of women in the labor force has sensitized public health practitioners, workers, and scientists to the necessity of including women's concerns in their occupational health activities.6 Recently, various institutions and governments have become interested in women's occupational health, and the amount of research specifically on women is growing.7,8 Methods for examining women's occupational health are being developed, and gender comparisons are becoming more common.9,10 However, as interest grows, it is also necessary to consider the implications of routinely using sex (biological differences) or gender (socially based differences) as explanatory variables in occupational health research. From an equity perspective, it is also important to understand the causes of sex and gender differences in occupational health so that they are not used erroneously to justify job segregation or inequitable health promotion measures.5 Potential causes of sex differences in occupational health outcomes are multiple and are discussed below: job and employment patterns, biological specificity, and societal attitudes.
In the discussion about women that follows, many of the remarks will apply to some degree to other groups that have been subject to discrimination because of age, race/ethnicity, or social class.11–13 Belonging to any of these categories may affect exposure to workplace hazards and create a context that affects responses to the hazards. Since each of these has its own interactions with work environment and health effects, only women will be discussed here.
WOMEN WORKERS AND THEIR JOBS
Women are in different industries from men. Men are more prominent in primary (raw materials) and secondary (production) sectors of the economy, while women are more often in the tertiary (service) sector.14 Women are more likely to work for small companies.15 Women usually work in specific types of jobs in all countries where this has been studied. For example, in Québec, Canada, only one profession (retail salesperson) is found among the 10 most common jobs of women as well as among the 10 most common men’s jobs. Both women and men most often work among a majority of their own sex (Table 44-1). In an analysis of tendencies in employment, Asselin17 classified professions as very disproportionately male if the
proportion of women in the profession was less than half their proportion in the labor force. By this criterion, she found that well over half of jobs were very disproportionately assigned to one sex (221 of 506 professions were very disproportionately male, while, by analogous criteria, 66 were very disproportionately female). Thus, women’s jobs differ from men’s. Further, even within the same job title, men and women are often assigned to different tasks18–20 and, therefore, exposed to different working conditions. For example, women in retail sales in Europe more often sell cosmetics and shoes, while men more often sell automobiles and electronic equipment.21 This type of unequal distribution of the sexes across jobs and tasks is called horizontal segregation. There is also vertical segregation: women’s concentration in the lower ranks can be inferred from the fact that, on the average, women earn 71.6% of what men do for full-time, full-year work.22 Women are still in a distinct minority in senior management positions.23 Women work at specific schedules. Almost three times as many women work part-time as men.24 Women’s work also tends to be intermittent: 37.9% of women have spent every year since their first full-time job working at least part-time or part year compared to 72.9% for men.25 Slightly more women than men hold multiple jobs (part-time or full-time).26 A growing literature confirms that women’s specific job situations result in a distinctive pattern of exposures. 
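Asselin's half-share criterion can be sketched as a small classifier. This is an illustrative sketch only, not her actual procedure: the overall labor-force share of women and the example proportions below are assumed values for illustration, not figures from the Québec analysis.

```python
# Sketch of Asselin's criterion: a profession is "very disproportionately male"
# if the proportion of women in it is less than half women's share of the labor
# force, and analogously for "very disproportionately female".
LABOR_FORCE_SHARE_WOMEN = 0.46  # assumed overall share; the Québec figure may differ

def classify(prop_women, labor_force_share=LABOR_FORCE_SHARE_WOMEN):
    """Label a profession by comparing its sex composition to the labor force."""
    if prop_women < labor_force_share / 2:
        return "very disproportionately male"
    if (1 - prop_women) < (1 - labor_force_share) / 2:
        return "very disproportionately female"
    return "mixed"

print(classify(0.02))  # e.g., a trade with 2% women -> very disproportionately male
print(classify(0.98))  # e.g., a trade with 98% women -> very disproportionately female
print(classify(0.59))  # e.g., retail sales-like mix -> mixed
```

Under this rule, only professions whose composition is reasonably close to the overall labor-force split (roughly 23–73% women with the assumed 46% share) count as mixed, which is why so few of the 506 professions escape both "very disproportionate" labels.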
Even within the same jobs, physical and psychosocial exposures differ between the sexes,27 and these are associated with differences in symptoms.28 For example, women in the United States are more often exposed to risk factors for carpal tunnel syndrome,20 such as highly repetitive work on assembly lines29 and at computer keyboards.30,31 Women's low position in the hierarchy also exposes them preferentially to awkward and difficult work postures.32 Women are a majority among those who suffer from indoor air problems,33 and they are especially likely to be exposed to asthmagenic substances at work.34 In addition, equipment and workspaces may have been designed using criteria derived from male dimensions. Women are shorter on average than men and are proportioned differently.35 If thought has not been given to making equipment adjustable, women may find, for example, that personal protective equipment is too large for them, that tool handles are too big, or that counters are too high.36,37 It has been found that women and men use different methods to accomplish the same tasks in offices;38 some differences may be attributable to workspace design. The fact that women have breasts has not been taken into account in most biomechanical models, so some lifting and carrying equipment has not been designed well for women.39 When women are forced to work in awkward positions due to workspaces designed for larger people, they suffer more musculoskeletal symptoms than men in the same jobs.40 Lortie showed that many female hospital orderlies had adapted their lifting methods to their particular aptitudes; they had found ways to change lifting tasks into pushing and pulling tasks.41 However, in
Copyright © 2008 by The McGraw-Hill Companies, Inc.
TABLE 44-1. PRINCIPAL PROFESSIONS OF WOMEN AND MEN, QUÉBEC, CANADA, 200116

     Profession (Women)                               % of Women     Profession (Men)                     % of Men
 1   Secretary                                        97.7           Information technology professions   74.2
 2   Retail saleswoman, sales clerk                   58.7           Truck driver                         97.7
 3   Cashier, teller                                  86.5           Retail salesman, sales clerk         41.3
 4   Accounting clerk                                 87.8           Director, retail sales               63.3
 5   Nurse                                            91.0           Janitor                              79.2
 6   Primary school teacher, kindergarten teacher     86.0           Mechanic, auto repair                99.1
 7   Educational technician, primary or kindergarten  95.7           Manual materials handler             90.6
 8   Office clerk                                     83.2           Driver, delivery                     92.8
 9   Waitress, bartender                              97.1           Salesman, wholesale nontechnical     69.6
10   Counter food server, assistant cook              60.5           Construction helper                  71.0
     20 top professions of women                      74.8           20 top professions of men            67.3
a rigid, repetitive sorting task, where there was little control over task parameters and where certain dimensions of the workstation caused problems for shorter workers, women had more work accidents than did men.42 A conclusion from the above results is that attention should be given to increasing the ability of workers to adjust all the parameters of their work context, so that it can be adapted to their particular characteristics and situations. Sexual stereotyping and discrimination may also affect exposures for women and men. Discrimination against women in the workplace puts them at greater risk for adverse psychological outcomes.43,44 Sexual segregation may in itself have a negative effect on health.45,46 In addition, in workplaces where physically demanding operations are required, it was found that women overexerted themselves in response to a perception that they did not do their share of heavy jobs, while men could be induced to overexert themselves by appeal to their gallantry.47 These differences in exposure patterns by sex support caution in using job titles to estimate exposure for both genders if a job exposure matrix has not previously been validated separately by gender. In addition, it is unwise to adjust relationships between job title and disease incidence for gender, thus treating gender as a confounder when it may be a proxy for specific exposures.9,48 Stratification by gender (as well as race, age, etc.) is desirable where numbers permit.
WOMEN AND BIOLOGY
A number of studies have found a difference in symptoms between women and men workers even after controlling for workplace exposures to risk factors for musculoskeletal28 or toxic33 effects. These remaining differences could be due to residual unreported or unanalyzed exposures, extraprofessional exposures, or physiological susceptibility. Biological sex differences are too extensive to be thoroughly discussed here, but a few major concerns will be mentioned. Women and men differ, on average, on almost all anthropometric dimensions: height, weight, body segment length, sitting height, etc.35 Their physical strength also differs; on average, women in a standing position can push or pull 56% of the weight that men can push or pull49 and can lift about half of the weight men lift. Sex differences in other kinds of strength have also been described.50,51 However, differences within a sex are larger than average between-sex differences, and the degree of sex difference varies with the details of the task.52,53 Also, some evidence shows that young women benefit more than men from strength training,54 presumably because they are less likely to have tried to increase their upper-body strength previously. The importance of strength for women's occupational health is not clear. It is possible that increasing upper-body strength is associated with fewer musculoskeletal symptoms in the upper body,55 but conclusive evidence for this has not been found.
A number of hormonal and physiological sex differences have been found or suggested that may lead to differences in susceptibility to disease, including occupational disease. These were reviewed by Wizemann and Pardue.56 Some attention in the occupational health and safety literature has been given to differences in musculoskeletal symptoms (thoughtfully reviewed by Punnett and Herbert57), in indoor air quality (reviewed by Hodgson and Storey58), in respiratory health59 and lung disease (reviewed carefully by Camp et al.37), in metabolism of certain solvents,60 and in reactions to lead.61,62 However, caution should be exercised in considering biological differences where within-sex differences may exceed between-sex differences.5 Also, in studying sex differences, there is a risk of false positives or false negatives, that is, of discovering sex differences where none in fact exist, or of missing true effects. These risks arise in any large set of studies from the fact that, randomly, one in 20 studies of sex differences will result in statistical significance at the 0.05 level, and that, on the other hand, study power may be insufficient to reveal differences. In addition, since not all studies carefully check all relevant characteristics of their samples, differences in age, fitness, or nutrition, or even differences in sample size, may be interpreted as sex differences. The public interest in sex differences may then result in hasty conclusions that have detrimental effects on public health policy and on the body of scientific knowledge.5
WOMEN IN SOCIETY
Women's social roles affect their movements in and out of employment. Thus, the "healthy worker effect" (defined as a tendency for workers to be healthier than the general population) manifests itself in specific ways with women workers. Their reproductive health may affect their employment status, so that reproductive ill health may be more characteristic of working women than of those not working.63 Also, women have been found to be more likely than men to leave work because of a health problem but less likely to fail to be hired because of a health problem.64,65 They are also less likely to receive compensation for a work-related health problem66,67 and less likely to be assigned appropriate retraining even if compensated,68 all factors that may affect their readiness to leave a job if they become ill. Women's extraprofessional activities in the home can combine with paid work to produce health effects.69,70 Twice as many women as men report doing over 5 hours of housework per week71 and 55% more women take care of small children or the elderly.72 The extra time includes many tasks that can prolong or repeat exposures at work and result in accumulated fatigue, toxic exposures, or musculoskeletal symptoms.73 Attention should be paid to the fact that indicators of family status such as marital status or the presence of young children can have different associations with the health status of male versus female workers.73,74
RESEARCH METHODS APPROPRIATE FOR STUDYING WOMEN WORKERS
For a long time, researchers neglected women workers, and many still do.5,9,75 When studies do include women, they often do so inappropriately.5,9,76 Descriptors representing the place of people in society (gender, race, class) pose a special problem for research in occupational health. These categories may be associated with specific probabilities of some biological characteristics (hormonal status, blood groups, nutritional status), but they also represent probabilities of different occupational and extraprofessional exposures. If researchers simply adjust ("control") their analyses for sex/gender, or if they include sex/gender as a variable along with other exposure variables, the effects of gender-specific exposures may disappear from sight, and gender or sex in itself may falsely appear to contribute to a health problem. Therefore, information on women's problems should be sought specifically. However, problems in data collection can preclude women from being studied. For example, studies of the effects of agricultural exposures can be forced to eliminate women since only the husband of a farm family is identified as a farmer in most provincial records,77,78 although women farmers are also exposed to pesticides and the like.79 Also, many death certificates have not contained information on women's professions, in part because once a woman has retired she may be considered to be a housewife. Many registries still do not code or do not publish results by sex in their reports. Thus, a priority for research in women's (and men's) occupational health is the establishment of appropriate databases. Getting information on women's occupational health problems poses certain additional challenges. Compensated work accidents and injuries are the usual statistics used to assess occupational health.
Men have more compensated industrial accidents and illnesses per worker than women, although the male/female difference is attenuating as information systems and analytical tools improve.80,81 Health surveys show more occupational illnesses and accidents than those represented in the official compensation statistics, especially for women.82,83 Some of the difference may be due to women having less access to compensation, since they are less often in jobs where they are represented by unions.84 Also, some jurisdictions use methods of compensation and of record-keeping that understate injuries and illnesses of women workers.85 Some of the difference in compensation rate is a result of the difference in jobs and tasks—when comparisons are made within the same industry, sometimes women have more accidents than men, sometimes fewer.86 It is easy to recognize that a leg broken in the workplace is an occupational problem, but an allergy or inflammation that develops more slowly is not readily associated with the job. Women average more industrial disease (as distinguished from accidents) than men,87 and their problems may be underestimated since many industrial diseases go unrecognized. Women’s illness and injury rates may also be artificially lowered by a technical factor. Because women tend to work fewer hours than men at paid jobs, accident rates of women appear lower when, as is usual, the rates are calculated per worker rather than per hour worked. 
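The distortion introduced by per-worker rates can be made concrete with a small sketch. The numbers below are hypothetical, not data from the cited studies: two groups with identical injury risk per hour worked show different per-worker rates when one group averages fewer paid hours.

```python
# Hypothetical illustration: equal per-hour injury risk, unequal hours worked.
injuries_a, workers_a, hours_per_worker_a = 80, 1000, 2000   # full-time-like group
injuries_b, workers_b, hours_per_worker_b = 48, 1000, 1200   # part-time-like group

def per_worker_rate(injuries, workers):
    """Injuries per 100 workers (the usual published statistic)."""
    return 100.0 * injuries / workers

def per_hour_rate(injuries, workers, hours_per_worker):
    """Injuries per 100,000 hours actually worked."""
    return 100_000.0 * injuries / (workers * hours_per_worker)

# Per-worker rates differ (8.0 vs 4.8 per 100 workers)...
print(per_worker_rate(injuries_a, workers_a))   # 8.0
print(per_worker_rate(injuries_b, workers_b))   # 4.8
# ...but per-hour rates are identical (4.0 per 100,000 hours for both groups).
print(per_hour_rate(injuries_a, workers_a, hours_per_worker_a))
print(per_hour_rate(injuries_b, workers_b, hours_per_worker_b))
```

The group working fewer hours appears 40% "safer" on a per-worker basis even though its hourly risk is the same, which is exactly the artifact described above for women workers.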
Of 14 studies comparing women and men, reviewed in 1994, only two gave information on person-hours worked.88 Studies still report work-related injuries on a per-worker basis.81 Some researchers have suggested that certified sick leaves might be a useful indicator of occupational health problems for both sexes, as a complement to the usual methods that would detect occupation-related illness.89 They found that sick leaves of nurses were related to various indicators of work load and to shift work.90 Research instruments and standards that have been derived with male populations have sometimes been used without further validation on female populations.9 An example is strength testing done with instruments validated only for male populations.91 Some occupational prestige scales and social class scales use the husband's job to ascribe a score to the wife.92 This causes problems when data on health are adjusted for social class, since some social class influences on health may be mediated through common family-revenue-dependent factors
such as nutrition, and others may be specific to the individual situation, such as education-related, health-protective behavior. Occupational health researchers trained in medicine have often limited their interest to pathologies rather than to indicators, signs, or symptoms of deterioration in physical or mental states, reasoning that the presence of pathology guarantees that the problem examined is worthy of serious consideration. However, a requirement for diagnosed pathology may be premature when studying women’s occupational health. Since the aggressors present in women’s traditional work have been understudied, and the effects of even well-known conditions on women workers are often unknown, identification of occupational disease in women’s work is embryonic. The requirement for pathology has two further consequences. First, it forces the researcher to consider events that are rare among populations still at work. This requirement for populations of considerable size is a particular obstacle to identifying women’s occupational health problems because women often work in very small workplaces.15 Second and more important, the risks found in women’s jobs are often undramatic and diffuse. In fact, obvious danger can be a reason for excluding women from particular jobs. Thus, epidemiological studies that seek to link isolated, identifiable risk factors such as chemical exposures to well-defined pathologies are not well adapted to discovering other types of problems that occur in women’s jobs, such as aches and pains from prolonged static standing or unhappiness about discrimination.

WOMEN’S OCCUPATIONAL HEALTH PROBLEMS
Women’s most common self-reported health problems are musculoskeletal problems, headaches, allergies including skin problems, and hypertension.93,94 Women are also much more likely than men to report psychological distress.95 Workplace conditions are relevant to all these conditions. These subjects cannot all be reviewed thoroughly here, but a few points of interest can be mentioned in relation to some common health problems of women in the workplace.
Musculoskeletal Disorders The major research area in women’s occupational health is probably musculoskeletal problems.57 Women’s working conditions, particularly repetitive work, prolonged standing, and carrying heavy loads, may underlie some of these musculoskeletal problems. Whereas loads carried by men are usually inanimate, those carried by women are usually patients or children, who can interfere actively with the process, increasing the risk of injury. Nursing assistants and other medical personnel are particularly at risk for back injuries.96 In many jobs assigned to women (as well as some assigned to men), the work cycle is under 10 seconds long, and the same movements are repeated many thousands of times in a day.97 These movements can individually make trivial demands on the human body, but the enormous degree of repetition makes tiny details of the setup assume primary importance. A chair of the wrong height or a counter of the wrong width may cause constant overuse of the same tendons or joints, yet the observer sees no problem;98 such problems are increased among those with heavy family responsibilities.99 Many women’s jobs require static effort, exerted when muscles are contracted for long periods. For example, cleaning (dusting high surfaces, bending over toilets) requires long periods spent bent over or reaching up.19 This type of effort creates musculoskeletal and circulatory problems due to interference with circulation. Women’s jobs in North America as sales clerks, cashiers, tellers, and receptionists require long hours of standing without moving very much,100 a position which is uncomfortable101 and is associated with varicose veins.102 (In Europe, Asia, Africa, and Latin America, these workers usually work sitting down.)
Health Effects of Stress Any discussion with women (and often men) workers tends to identify “stress” as an important occupational health problem, which may be related to the headaches, hypertension, and psychological distress
commonly reported by women. We can ask whether women “really” have more such problems, but it is undeniable that women consult more health practitioners and take more medication for mental problems than men.80 Several studies now link psychological distress, anxiety, and depression to women’s workplace conditions.103,104 Examining women’s work-related mental problems involves several challenges. First, there is a history of disbelieving women’s reports of their physical ills and ascribing women’s physical problems to mental causes.105 For example, women workers who complained of symptoms of exposure to neurotoxic agents have often been accused of “hysteria.”106 Similar uncertainty has surrounded discussions of women’s high proportion of cases of sick building syndrome,107 multiple chemical sensitivity,108 and musculoskeletal disorders.109 Therefore, it is necessary to be sure that physical causes have been excluded before ascribing women’s problems to stress. Second, there is a problem in identification of mentally stressful working conditions of both women and men. Questionnaires such as the well-known Job Content Questionnaire110 use questions on repetitiveness and monotony (among others) to identify psychological job demands. However, repetition and monotony are found in jobs that require repeated physical movements such as assembly line production and data processing; it is hard to tell whether the effects found are from the psychological or physical stress, or both. 
Stressful working conditions have been associated with both physical and mental outcomes for women.111,112 The work of Karasek and others has related several workplace variables (degree of job control, level of demand) to effects on the cardiovascular system, and these effects hold for both women and men.112,113 Unfortunately, most scientists who have studied heart disease by occupation have restricted their samples to men.112,113 Although coronary artery disease is the most common cause of death among women, and as many women as men report hypertension, heart disease is still thought of as a man’s problem.114 Several professions that are commonly held by women are among the 10 professions with the highest diastolic blood pressure: laundry and dry cleaning operatives, food service workers, private child-care workers, and telephone operators.115 Stress from family responsibilities can combine with job strain to produce risks of cardiovascular disorders.116 For women, it is important to be careful in considering combined effects of home and job. Stress arising from interference between work and family can generate a number of health problems,69,117 including mental health problems.118 The characteristics of both paid work119 and of the home situation118 may facilitate or interfere with women’s efforts to reconcile work and family, so that difficulties in reconciling the two cannot be considered as a personal characteristic of women, but should be incorporated into workplace stressor assessment.
Violence Although women suffer much less violence at work than men, they are found in some jobs with a high risk of violence, such as healthcare worker, food server, bar attendant, convenience store clerk, and gas station attendant.120–122 Violence against women in the workplace can also arise from spillover from their domestic situation, with a violent partner carrying the violence into the work situation.121
Occupational Cancers In the 1990s, researchers noted that women, especially African-American women, had been largely excluded from studies on occupational cancers.4 Much effort has since been expended to stimulate interest in women’s occupational cancers, and women have been increasingly included in studies, although not yet enough for much information to have accumulated.75,123 Risks are becoming apparent, for example, among cleaners, hairdressers, agricultural workers, health-care workers, laboratory workers, and others exposed to chemical and physical risks.123 In addition, several studies have identified teachers as a group particularly likely to contract breast cancer. This has been explained as a result of delayed childbirth among this occupational group.123,124 This points up the necessity of examining the whole issue of delayed
childbirth and its effects both on certain cancers and on fertility as an occupational health issue. It may be that childbirth is delayed in some occupations because of an incompatibility between professional and family responsibilities. This is another example of how women’s social situations cannot be dissociated from their likelihood of suffering from occupational disease.
Reproductive Problems Specific to Women Menstrual symptoms are among the most commonly diagnosed disorders of women. During the mid-1980s, several researchers suggested that menstrual symptoms might be useful for the study of occupational effects on reproductive health, as well as indicative of health problems that should be addressed.125,126 Parameters of the menstrual cycle that can be studied in relation to occupation include regularity and length of cycle, length and volume of flow, and symptoms of pain and discomfort associated with the periods. The latter symptoms are common and can be studied in normal populations. Abnormalities of the menstrual cycle have been explored in relation to occupational exposures to mercury,127 lead,128 pesticides,129 synthetic hormones,130 organic solvents,131,132 carbon disulfide,133 chemical exposures of hairdressers,134 physically challenging work such as ballet dancing,135 exposure to cold temperatures, irregular schedules,136 and shift work.137 Dysmenorrhea, or painful menstruation, occurs with increased prostaglandin production; prostaglandin release by the endometrium during menstruation gives rise to abnormal uterine activity that produces ischemia and cramping pelvic pain. There may be other associated symptoms such as leg- or backache or gastrointestinal upset. Premenstrual syndrome (PMS) is a less well-defined diagnostic category that refers to a group of symptoms thought to occur during the days preceding the onset of menses. Since its diagnosis requires making an association with an event (menstruation) that has not yet occurred, reports of prevalence are not consistent.138 One prospective study has suggested a link between PMS and productivity loss among female workers;139 no studies have investigated productivity variations at any other time in the cycle. Prevalence estimates of perimenstrual symptoms vary greatly between studies, according to age, parity, contraceptive methods, and other demographic characteristics.
Dysmenorrhea was not associated with work in the reinforced plastics industry140 or with exposure to toluene,141 but has been found in association with cold exposure, time pressure,142 and mercury exposure.127 It was found at a high level among hairdressers, who are exposed to chemicals and work standing for prolonged periods.143 Studies of the prevalence and etiology of back pain, a common occupational health problem among hospital workers, may be confounded if perimenstrual back pain is not taken into account.142 Pregnancy alters the shape of the body and thus the interaction with the work site.144,145 The high frequency of falls at work during pregnancy, especially in food service work, may be due to awkwardness resulting from a change in body shape,146 but also to failure to adapt the work process. A study of precautionary leave or reassignment of pregnant workers exposed to dangerous working conditions showed that ergonomic considerations were the most common reason for granting such leave, with chemical and physical exposures following.147 In jurisdictions where no such program exists, women may risk health damage due to exposures during pregnancy. For example, it has been found that, during pregnancy, certain working conditions (noise, lifting weights) are associated with higher blood pressure.148 However, most research on pregnancy has been limited to fetal effects, and little information exists on the effects of conditions during pregnancy on the woman herself. Information is also lacking on the relation between working conditions and age at menopause or menopausal symptoms.
Age at menopause can be an indicator of exposure to environmental pollution, as shown by its relationship to smoking and a possible relationship to exposure to some chemicals.149,150 Belonging to a lower socioeconomic class is also associated with earlier menopause,151 so one might expect to find an association with manual work, but one large study showed no relation between heavy physical work and early menopause.152
REFERENCES
1. United States Department of Labor. http://data.bls.gov/PDQ/outside.jsp?survey=ln consulted August 4, 2005 for the second quarter of 2005. 2. McDiarmid MA, Gucer PW. The “GRAS” status of women’s work. J Occup Environ Med. 2001;43(8):665–9. 3. Rose KM, Carson AP, Catellier D, et al. Women’s employment status and mortality: the atherosclerosis risk in communities study. J Womens Health (Larchmt). 2004;13(10):1108–18. 4. Zahm SH, Pottern LM, Lewis DR, Ward MH, White DW. Inclusion of women and minorities in occupational cancer epidemiologic research. J Occup Med. 1994;36(8):842–7. 5. Messing K, Stellman JM. Sex, gender and health: the importance of considering mechanism. Environ Res. 2006;101(2):149–62. 6. Messing K, Östlin P. Gender Equality, Work and Health: A Review of the Evidence. Geneva: World Health Organisation; 2006. 7. Kogevinas M, Zahm SH. Introduction: epidemiologic research on occupational health in women. Am J Ind Med. 2003;44(6):563–4. 8. Messing K, de Grosbois S. Women workers confront one-eyed science: building alliances to improve women’s occupational health. Women Health. 2001;33(1–2):125–41. 9. Messing K, Punnett L, Bond M, et al. Be the fairest of them all: challenges and recommendations for the treatment of gender in occupational health research. Am J Ind Med. 2003;43(6):618–29. 10. Kennedy SM, Koehoorn M. Exposure assessment in epidemiology: does gender matter? Am J Ind Med. 2003;44(6):576–83. 11. Krieger N. Embodying inequality: a review of concepts, measures, and methods for studying health consequences of discrimination. Int J Health Serv. 1999;29(2):295–352. 12. Wegman DH. Older workers. Occup Med. 1999;14(3):537–57. 13. Chaturvedi N. Ethnicity as an epidemiological determinant—crudely racist or crucially important? Int J Epidemiol. 2001;30:925–7. 14. Stellman J, Lucas A. Women’s occupational health: international perspectives. In: Goldman M, Hatch MC, eds. Women and Health. New York: Academic Press; 2000: 514–22. 15.
Arcand R, Labrèche F, Stock S, Messing K, Tissot F. Travail et santé. In: Enquête sociale et de santé 1998. 2nd ed. Montréal: Institut de la statistique du Québec; 2001: 525–70. Available at: http://www.stat.gouv.qc.ca/publications/sante/e_soc-sante98.htm 16. Institut de la statistique du Québec. Les 20 principales professions féminines et masculines, Québec, 1991 et 2001; 2003. http://www.stat.gouv.qc.ca/donstat/societe/march_travl_remnr/cat_profs_sectr_activ/professions/recens2001/tabwebprof_juin03-1.htm consulted August 2, 2005. Translated by the author. 17. Asselin S. Professions: convergence entre les sexes? Données sociodémographiques en bref. 2003;7(3):6–8. 18. Messing K, Dumais L, Courville J, Seifert AM, Boucher M. Evaluation of exposure data from men and women with the same job title. J Occup Med. 1994;36(8):913–7. 19. Messing K, Chatigny C, Courville J. “Light” and “heavy” work in the housekeeping service of a hospital. Appl Ergon. 1998;29(6):451–9. 20. McDiarmid M, Oliver M, Ruser J, Gucer P. Male and female rate differences in carpal tunnel syndrome injuries: personal attributes or job tasks? Environ Res. 2000;83(1):23–2. 21. McGauran A-M. Vive la différence: the gendering of occupational structures in a case study of Irish and French retailing. Women Studies Int Forum. 2000;23(5):613–27: Table 1, 615. 22. Status of Women Canada. Women and Men in Canada: A Statistical Glance. Ottawa: Statistics Canada; 2003:23. 23. Status of Women Canada. Women and Men in Canada: A Statistical Glance. Ottawa: Statistics Canada; 2003:16. 24. Status of Women Canada. Women and Men in Canada: A Statistical Glance. Ottawa: Statistics Canada; 2003:17.
25. Simpson W. Labour Market Intermittency and Earnings in Canada. Income and Labour Dynamics Working Paper Series. Statistics Canada Product No 75F0002M, Catalogue number 97-12. Ottawa: Statistics Canada; 1997. Quoted in: Townson M. Women in Non-Standard Jobs: The Public Policy Challenge. Ottawa: Status of Women Canada; 2003. 26. Bureau of Labor Statistics. News. July 8, 2005. 15. www.bls.gov/cps/ 27. Hooftman WE, van der Beek AJ, Bongers PM, van Mechelen W. Gender differences in self-reported physical and psychosocial exposures in jobs with both female and male workers. J Occup Environ Med. 2005;47(3):244–52. 28. Karlqvist L, Tornqvist EW, Hagberg M, Hagman M, Toomingas A. Self-reported working conditions of VDU operators and associations with musculoskeletal symptoms: a cross-sectional study focusing on gender differences. Int J Industr Ergon. 2002;30(4–5):277–94. 29. Dumais L, Messing K, Seifert AM, Courville J, Vézina N. Make me a cake as fast as you can: determinants of inertia and change in the sexual division of labour of an industrial bakery. Work, Employment Society. 1993;7(3):363–82. 30. Punnett L, Bergqvist U. Musculoskeletal disorders in visual display unit work: gender and work demands. Occup Med. 1999;14(1):113–24, iv. 31. Ekman A, Andersson A, Hagberg M, Hjelm EW. Gender differences in musculoskeletal health of computer and mouse users in the Swedish workforce. Occup Med (Lond). 2000;50(8):608–13. 32. Leijon O, Bernmark E, Karlqvist L, Harenstam A. Awkward work postures: association with occupational gender segregation. Am J Ind Med. 2005;47(5):381–93. 33. Brasche S, Bullinger M, Morfeld M, Gebhardt HJ, Bischof W. Why do women suffer from sick building syndrome more often than men?—subjective higher sensitivity versus objective causes. Indoor Air. 2001;11(4):217–22. 34. Le Moual N, Kennedy SM, Kauffmann F. Occupational exposures and asthma in 14,000 adults from the general population. Am J Epidemiol. 2004;160(11):1108–16. 35.
Chamberland A, Carrier R, Forest F, Hachez G. Anthropometric Survey of the Land Forces. (98-01897). North York, Ontario, Canada: Defence and Civil Institute of Environmental Medicine; 1998. 36. Messing K, Stevenson JM. Women in Procrustean beds: strength testing and the workplace. Gender Work Organization. 1996;3(3): 156–67. 37. Camp PG, Dimich-Ward H, Kennedy SM. Women and occupational lung disease: sex differences and gender influences on research and disease outcomes. Clin Chest Med. 2004;25(2): 269–79. 38. Wahlstrom J, Svensson J, Hagberg M, Johnson PW. Differences between work methods and gender in computer mouse use. Scand J Work Environ Health. 2000;26(5):390–7. 39. Tate AJ. Some limitations in occupational biomechanics modelling of females. Proceedings of a colloquium held at the Université du Québec à Montréal. March 27–28, 2003. Report presented to Women’s Health Bureau, Health Canada. Montréal: CINBIOSE; 2004. 40. Dahlberg R, Karlqvist L, Bildt C, Nykvist K. Do work technique and musculoskeletal symptoms differ between men and women performing the same type of work tasks? Appl Ergon. 2004;35(6): 521–9. 41. Lortie M. Analyse comparative des accidents déclarés par des préposés hommes et femmes d’un hôpital gériatrique. J Occup Accident., 1987;9:59–81. 42. Courville J, Vézina N, Messing K. Analyse des facteurs ergonomiques pouvant entraîner l’exclusion des femmes du tri des colis postaux. Le travail humain. 1992;55:119–34. 43. Bond MA, Punnett L, Pyle JL, Cazeca D, Cooperman M. Gendered work conditions, health, and work outcomes. J Occup Health Psychol. 2004;9(1):28–45.
44. Bildt C, Michelsen H. Gender differences in the effects from working conditions on mental health: a 4-year follow-up. Int Arch Occup Environ Health. 2002;75(4):252–8. 45. Hensing G, Alexanderson K. The association between sex segregation, working conditions, and sickness absence among employed women. Occup Environ Med. 2004;61(2):e7. 46. Leijon M, Hensing G, Alexanderson K. Sickness absence due to musculoskeletal diagnoses: association with occupational gender segregation. Scand J Public Health. 2004;32(2):94–101. 47. Messing K, Elabidi D. Desegregation and occupational health: how male and female hospital attendants collaborate on work tasks requiring physical effort. Pol Prac Health Safety. 2003;1(1):83–103. 48. Messing K, Tissot F, Saurel-Cubizolles MJ, Kaminski M, Bourgine M. Sex as a variable can be a surrogate for some working conditions: factors associated with sickness absence. J Occup Environ Med. 1998;40(3):250–60. 49. Das B, Wang Y. Isometric pull-push strengths in workspace: 1. Strength profiles. Int J Occup Saf Ergon. 2004;10(1):43–58. 50. Hayward B, Griffin MJ. Repeatability of grip strength and dexterity tests and the effects of age and gender. Int Arch Occup Environ Health. 2002;75:111–9. 51. Peebles L, Norris B. Filling “gaps” in strength data for design. Appl Ergonomics. 2003;34:73–88. 52. Fothergill DM, Grieve DW, Pheasant ST. Human strength capabilities during one-handed maximum voluntary exertions in the fore and aft plane. Ergonomics. 1991;34(5):563–73. 53. Fothergill DM, Grieve DW, Pinder AD. The influence of task resistance on the characteristics of maximal one- and two-handed lifting exertions in men and women. Eur J Appl Physiol Occup Physiol. 1996;72(5–6):430–9. 54. Ivey FM, Tracy BL, Lemmer JT, et al. Effects of strength training and detraining on muscle quality: age and gender comparisons. J Gerontol A Biol Sci Med Sci. 2000;55(3):B152–7;discussion B158–9. 55. Skargren E, Oberg B. 
Effects of an exercise program on musculoskeletal symptoms and physical capacity among nursing staff. Scand J Med Sci Sports. 1996;6(2):122–30. 56. Wizemann T, Pardue ML, ed. Exploring the biological contributions to human health: Does sex matter? Washington DC: National Academy Press; 2001. 57. Punnett L, Herbert R. Work-related musculoskeletal disorders: is there a gender differential, and if so, what does it mean? In: Goldman M, Hatch MC, eds. Women and Health. New York: Academic Press; 2000: 474–92. 58. Hodgson M, Storey E. Indoor air quality. In: Goldman M, Hatch MC, ed. Women and Health. New York: Academic Press; 2000: 503–13. 59. Dimich-Ward H, Camp PG, Kennedy SM. Gender differences in respiratory symptoms—does occupation matter? Environ Res. 2006; 101(2):175–83. 60. Ernstgard L, Gullstrand E, Lof A, Johanson G. Are women more sensitive than men to 2-propanol and m-xylene vapours? Occup Environ Med. 2002;59(11):759–67. 61. Counter SA, Buchanan LH, Ortega F. Gender differences in blood lead and hemoglobin levels in Andean adults with chronic lead exposure. Int J Occup Environ Health. 2001;7(2):113–8. 62. Oishi H, Nomiyama H, Nomiyama K, Tomokuni K. Comparison between males and females with respect to the porphyrin metabolic disorders found in workers occupationally exposed to lead. Int Arch Occup Environ Health. 1996;68(5):298–304. 63. Joffe M. Biases in research on reproduction and women’s work. Int J Epidemiol. 1985;14(1):118–23. 64. Nordander C, Ohlsson K, Balogh I, Rylander L, Palsson B, Skerfving S. Fish processing work: the impact of two sex dependent exposure profiles on musculoskeletal health. Occup Environ Med. 1999;56(4):256–64.
65. Lea C, Hertz-Picciotto I, Anderson A, et al. Gender differences in the healthy worker effect among synthetic vitreous fiber workers. Am J Epidemiol. 1999;150:1099–106. 66. Lippel K. Workers’ compensation and stress. Gender and access to compensation. Int J Law Psychiatry. 1999;22(1):79–89. 67. Lippel K. Compensation for musculoskeletal disorders in Quebec: systemic discrimination against women workers? Int J Health Serv. 2003;33(2):253–81. 68. Lippel K, Demers D. Invisibilité: facteur d’exclusion: les femmes victimes de lésions professionnelles. Revue Canadienne de droit et société. 1996;11:87–134. 69. van Hooff MLM, Geurts SAE, Taris TW, et al. Disentangling the causal relationships between work-home interference and employee health. Scand J Work Environ Health. 2005;31:15–29. 70. Walters V, McDonough P, Strohschein L. The influence of work, household structure, and social, personal and material resources on gender differences in health: an analysis of the 1994 Canadian National Population Health Survey. Soc Sci Med. 2002;54(5):677–92. 71. Stone LO, Swain S. The 1996 Census Unpaid Work Data Evaluation Study. Ottawa: Status of Women Canada; 2000. www.swc-cfc.gc.ca/ 72. Zukewich N, Normand J, Lindsay C, et al. Women in Canada: a gender-based statistical report (89-503-XPE). Ottawa: Statistics Canada; 2000. 73. de Fatima Marinho de Souza M, Messing K, Menezes PR, Cho HJ. Chronic fatigue among bank workers in Brazil. Occup Med (Lond). 2002;52(4):187–94. 74. Akerlind I, Alexanderson K, Hensing G, Leijon M, Bjurulf P. Sex differences in sickness absence in relation to parental status. Scand J Soc Med. 1996;24(1):27–35. 75. Zahm SH, Blair A. Occupational cancer among women: where have we been and where are we going? Am J Ind Med. 2003;44(6):565–75. 76. Niedhammer I, Saurel-Cubizolles MJ, Piciotti M, Bonenfant S. How is sex considered in recent epidemiological publications on occupational risks? Occup Environ Med. 2000;57(8):521–7. 77.
Semenciw RM, Morrison HI, Riedel D, Wilkins K, Ritter L, Mao Y. Multiple myeloma mortality and agricultural practices in the Prairie provinces of Canada. J Occup Med. 1993;35(6):557–61. 78. McDuffie H, Pahwa P, Spinelli JJ, et al. Canadian male farm residents, pesticide safety handling practices, exposure to animals and non-Hodgkin’s lymphoma (NHL). Am J Ind Med. 2002;Aug(Suppl 2):54–61. 79. Meeker B, Carruth A, Holland CB. Health hazards and preventive measures of farm women. Emerging issues. AAOHN J. 2002;50(7):307–14. 80. Gluck JV, Oleinick A. Claim rates of compensable back injuries by age, gender, occupation, and industry. Do they relate to return-to-work experience? Spine. 1998;23(14):1572–87. 81. Islam S, Velilla AM, Doyle EJ, Ducatman AM. Gender differences in work-related injury/illness: analysis of workers compensation claims. Am J Ind Med. 2001;39(1):84–91. 82. Stock S, Tissot F, Messing K, Goudreau S. Can 1998 Quebec Health Survey data help us estimate underreporting of workers’ compensation lost-time claims for musculoskeletal disorders of the neck, back and upper extremity? Proceedings of the 4th International PREMUS Conference, July 14, 2004. Zurich, Switzerland. 2004;2:573–4. 83. Smith G, Wellman HM, Sorock GS, et al. Injuries at work in the U.S. adult population: contributions to the total injury burden. Am J Public Health. 2005;95(7):1213–19. 84. United States Bureau of Labor Statistics, 2005. http://www.bls.gov/news.release/union2.t01.htm consulted August 3, 2005. 85. Hébert F, Duguay P, Massicotte P. Les indicateurs de lésions indemnisées en santé et en sécurité du travail au Québec: analyse par secteur d’activité économique en 1995–1997 (A-333). Montréal: Institut de recherche Robert-Sauvé en santé et de sécurité du travail du Québec; 2003.
86. Smith PM, Mustard CA. Examining the associations between physical work demands and work injury rates between men and women in Ontario, 1990–2000. Occup Environ Med. 2004;61(9):750–6. 87. Khan J, Jansson B. Risk level assessment and occupational health insurance expenditure: a gender imbalance. J Socio-Economics. 2001;30:539–47. 88. Messing K, Courville J, Boucher M, Dumais L, Seifert AM. Can safety risks of blue-collar jobs be compared by gender? Safety Sci. 1994;18:95–112. 89. Alexanderson K. Sickness absence: a review of performed studies with focus on levels of exposures and theories utilized. Scand J Soc Med. 1998;26(4):241–9. 90. Bourbonnais R, Mondor M. Job strain and sickness absence among nurses in the province of Quebec. Am J Ind Med. 2001;39:194–202. 91. Stevenson JM, Greenhorn DR, Bryant JT, Deakin JM, Smith JT. Gender differences in performance of a selection test using the incremental lifting machine. Appl Ergon. 1996;27(1):45–52. 92. Blishen BR, Carroll WK, Moore C. The 1981 socioeconomic index for occupations in Canada. Canadian Rev Soc Anthropol. 1987;24:465–88. 93. Statistics Canada. Women in Canada: A Statistical Report. Cat. No. 89-503E. Ottawa: Statistics Canada; 1995:37–53. 94. Levasseur M, Goulet L. Problèmes de santé. In: Enquête sociale et de santé 1998. 2nd ed. Montréal: Institut de la statistique du Québec; 2001: 273–95, Table 13.4. Available at: http://www.stat.gouv.qc.ca/publications/sante/e_soc-sante98.htm 95. Legaré G, Préville M, Massé R, Poulin C, St-Laurent D, Boyer R. Santé mentale. In: Enquête sociale et de santé 1998. 2nd ed. Montréal: Institut de la statistique du Québec; 2001: 333–53. Available at: http://www.stat.gouv.qc.ca/publications/sante/e_soc-sante98.htm 96. Guo H-R, Tanaka S, Cameron LL, et al. Back pain among workers in the United States: national estimates and workers at high risk. Am J Ind Med. 1995;28:591–602. 97. Vézina N, Tierney D, Messing K. When is light work heavy?
Components of the physical workload of sewing machine operators which may lead to health problems. Appl Ergon. 1992;23: 268–76. 98. Brisson C, Vézina M, Vinet A. Health problems of women employed in jobs involving psychological and ergonomic stressors: the case of garment workers in Québec. Women Health. 1992;18(3):49–66. 99. Kaergaard A, Andersen JH. Musculoskeletal disorders of the neck and shoulders in female sewing machine operators: prevalence, incidence, and prognosis. Occup Environ Med. 2000;57(8):528–34. 100. Tissot F, Messing K, Stock S. Standing, sitting and associated working conditions in the Quebec population in 1998. Ergonomics. 2005;48(3):249–69. 101. Messing K, Fortin S, Rail G, Randoin M. Standing still: why North American workers are not insisting on seats despite known health benefits. Int J Health Serv. 2005;35(4):745–63. 102. Tüchsen F, Krause N, Hannerz H, Burr H, Kristensen TS. Standing at work and varicose veins. Scand J Work Environ Health. 2000;26(5): 414–20. 103. Schonfeld I. An updated look at depressive symptoms and job satisfaction in first-year women teachers. J Occup Organizational Psychol. 2000;73: 363–71. 104. Sanne B, Mykletun A, Dahl AA, Moen BE, Tell GS. Occupational differences in levels of anxiety and depression: the Hordaland health study. J Occup Environ Med. 2003;45(6):628–38. 105. Macintyre S, Ford G, Hunt K. Do women “over-report” morbidity? Men’s and women’s responses to structured prompting on a standard question on long standing illness. Soc Sci Med. 1999;48(1): 89–98. 106. Brabant C, Mergler D, Messing K. Va te faire soigner, ton usine est malade: la place de l’hystérie de masse dans la problématique de la santé des femmes au travail [Go take care of yourself, your factory
is sick: the place of mass hysteria in the problem of women’s health at work]. Sante Ment Que. 1990;15(1):181–204. 107. Bullinger M, Morfeld M, von Mackensen S, Brasche S. The sick-building-syndrome—do women suffer more? Zentralbl Hyg Umweltmed. 1999;202(2–4):235–41. 108. Ford CV. Somatization and fashionable diagnoses: illness as a way of life. Scand J Work Environ Health. 1997;23(Suppl 3):7–16. 109. Lucire Y. Neurosis in the workplace. Med J Aust. 1986;145:323–7. 110. Karasek R, Theorell T. Healthy Work. New York: Basic Books; 1990. 111. Melamed S, Fried Y, Froom P. The joint effect of noise exposure and job complexity on distress and injury risk among men and women: the cardiovascular occupational risk factors determination in Israel study. J Occup Environ Med. 2004;46(10):1023–32. 112. Gallo LC, Bogart LM, Vranceanu AM, Walt LC. Job characteristics, occupational status, and ambulatory cardiovascular activity in women. Ann Behav Med. 2004;28(1):62–73. 113. Belkic KL, Landsbergis PA, Schnall PL, Baker D. Is job strain a major source of cardiovascular disease risk? Scand J Work Environ Health. 2004;30(2):85–128. 114. Mosca L, Ferris A, Fabunmi R, Robertson RM. Tracking women’s awareness of heart disease: an American Heart Association national study. Circulation. 2004;109(5):573–9. 115. Leigh JP. A ranking of occupations based on the blood pressures of incumbents in the National Health and Nutrition Examination Survey I. J Occup Med. 1991;33:853–61. 116. Brisson C, Laflamme N, Moisan J, et al. Effect of family responsibilities and job strain on ambulatory blood pressure among white-collar women. Psychosom Med. 1999;61(2):205–13. 117. Blane D, Berney L, Montgomery SM. Domestic labour, paid employment and women’s health: analysis of life course data. Soc Sci Med. 2001;52(6):959–65. 118. Escribà-Agüir V, Tenías-Burillo JM. Psychological well-being among hospital personnel: the role of family demands and psychosocial work environment. Int Arch Occup Environ Health.
2004;77(6):401–8. 119. Prévost J, Messing K. Stratégies de conciliation d’un horaire de travail variable avec des responsabilités familiales. Le travail humain. 2000;64:119–43. 120. Gerberich SG, Church TR, McGovern PM, et al. An epidemiological study of the magnitude and consequences of work related violence: the Minnesota Nurses’ Study. Occup Environ Med. 2004;61(6):495–503. 121. Moracco KE, Runyan CW, Loomis DP, Wolf SH, Napp D, Butts JD. Killed on the clock: a population-based study of workplace homicide, 1977-1991. Am J Ind Med. 2000;37(6):629–36. 122. Loomis D, Wolf SH, Runyan CW, Marshall SW, Butts JD. Homicide on the job: workplace and community determinants. Am J Epidemiol. 2001;154(5):410–7. 123. Blair A, Zahm SH, Silverman DT. Occupational cancer among women: research status and methodologic considerations. Am J Ind Med. 1999;36(1):6–17. 124. Rubin CH, Burnett CA, Halperin WE, Seligman PJ. Occupation as a risk identifier for breast cancer. Am J Public Health. 1993;83: 1311–5. 125. Mergler D, Vézina N. Dysmenorrhea and cold exposure. J Reprod Med. 1985;30:106–11. 126. Harlow SD. Function and dysfunction: a historical critique of the literature on menstruation and work. Health Care Women Int. 1986;7: 39–50. 127. Yang JM, Chen QY, Jiang XZ. Effects of metallic mercury on the perimenstrual symptoms and menstrual outcomes of exposed workers. Am J Ind Med. 2002;42(5):403–9. 128. Tang N, Zhu ZQ. Adverse reproductive effects in female workers of lead battery plants. Int J Occup Med Environ Health. 2003;16(4): 359–61. 129. Farr SL, Cooper GS, Cai J, Savitz DA, Sandler DP. Pesticide use and menstrual cycle characteristics among premenopausal women in the Agricultural Health Study. Am J Epidemiol. 2004;160(12): 1194–204.
834
Environmental Health
130. Mills JL, Jefferys JL, Stolley PD. Effects of occupational exposure to estrogen and progesteogens and how to detect them. J Occup Med. 1984;26:269–72. 131. Cho SI, Damokosh AI, Ryan LM, et al. Effects of exposure to organic solvents on menstrual cycle length. J Occup Environ Med. 2001;43(6):567–75. 132. Cho SI, Damokosh AI, Ryan LM, et al. Effects of exposure to organic solvents on menstrual cycle length. J Occup Environ Med. 2001;43(6):567–75. 133. Zhou SY, Liang YX, Chen ZQ, Wang YL. Effects of occupational exposure to low-level carbon disulfide (CS2) on menstruation and pregnancy. Ind Health. 1988;26:203–14. 134. Kersemaekers WM, Roeleveld N, Zielhuis GA. Reproductive disorders due to chemical exposure among hairdressers. Scand J Work Environ Health. 1995;21(5):325–34. 135. Stokic E, Srdic B, Barak O. Body mass index, body fat mass and the occurrence of amenorrhea in ballet dancers. Gynecol Endocrinol. 2005;20(4):195–9. 136. Messing K, Saurel-Cubizolles MJ, Bourgine M, Kaminski M. Menstrual-cycle characteristics and work conditions of workers in poultry slaughterhouses and canneries. Scand J Work Environ Health. 1992;18(5):302–9. 137. Hatch MC, Figa-Talamanca I, Salerno S. Work stress and menstrual patterns among American and Italian nurses. Scand J Work Environ Health. 1999;25(2):144–50. 138. Gurevitch M. Rethinking the label: who benefits from the PMS construct? Women Health. 1995;23(2):67–98. 139. Dean BB, Borenstein JE. A prospective assessment investigating the relationship between work productivity and impairment with premenstrual syndrome. J Occup Environ Med. 2004;46(7): 649–56. 140. Lemasters G, Hagen A, Samuels SJ. Reproductive outcomes in women exposed to solvents in 36 reinforced plastics companies. I. Menstrual dysfunction. J Occup Med. 1985;27:490–4. 141. Ng TP, Foo SC, Yoong T. Menstrual function in workers exposed to toluene. Br J Ind Med. 1992;49:799–803.
142. Tissot F, Messing K. Perimenstrual symptoms and working conditions among hospital workers in Quebec. Am J Ind Med. 1995;27(4): 511–22. 143. Blatter BM, Zielhuis GA. Menstrual disorders due to chemical exposure among hairdressers. Occup Med (Lond). 1993;43(2): 105–6. 144. Paul JA, van Dijk FJH, Frings-Dresen MHW. Work load and musculoskeletal complaints during pregnancy. Scand J Work Environ Health. 1994;20:153–9. 145. Paul JA, Frings-Dresen MHW. Standing working posture compared in pregnant and non-pregnant conditions. Ergonomics. 1994;37(9): 1563–75. 146. Dunning K, LeMasters G, Levin L, Bhattacharya A, Alterman T, Lordo K. Falls in workers during pregnancy: risk factors, job hazards, and high risk occupations. Am J Ind Med. 2003;44(6): 664–72. 147. Malenfant R. Le droit au retrait préventif de la travailleuse enceinte ou qui allaite: à la recherche d’un consensus. Sociologie et Sociétés. 1993;25(1):61–75. 148. Saurel-Cubizolles MJ, Kaminski M, Du Mazaubrun C, Bréart G. Les conditions de travail professionnel des femmes et l’hypertension artérielle en cours de grossesse. Rev Epidemiol Sante Publique. 1991;39:37–43. 149. Stanosz S, Kuligowski D, Pieleszek A. Concentration of dihydroepiandrosterone, dihydroepiandrosterone sulphate and testosterone during premature menopause in women chronically exposed to carbon disulphide. Med Pr. 1995;46(4):340. 150. Hardy R, Kuh D, Wadsworth M. Smoking, body mass index, socioeconomic status and the menopausal transition in a British national cohort. Int J Epidemiol. 2000;29(5):845–51. 151. Wise LA, Krieger N, Zierler S, Harlow BL. Lifetime socioeconomic position in relation to onset of perimenopause. J Epidemiol Community Health. 2002;56(11):851–60. 152. Cassou B, Derriennic F, Monfort C, Dell’Accio P, Touranchet A. Risk factors of early menopause in two generations of gainfully employed French women. Maturitas. 1997;26(3):165–74.
45 Health Hazards of Child Labor

Susan H. Pollack • Philip J. Landrigan
Child labor or youth work is defined in the United States as employment of children younger than 18 years of age. While adolescents under age 18 are usually thought of as students, by senior year of high school 75% of U.S. teens are also working in a formal setting as employees.1 More than five million U.S. children and adolescents are estimated to be legally employed after school, on weekends, and during the summer (U.S. Department of Labor). Several million more are believed to be employed under conditions that violate wage, hour, and safety regulations, and an uncounted additional segment work in areas that are not even covered by child labor laws.2 Even as freshmen at age 14, almost a quarter of students hold jobs, and work in informal arrangements such as yard work, babysitting, or family and community agriculture is common much earlier.3 Despite the existence of laws intended to protect them, the number of young U.S. workers under age 18 who die each year has remained relatively constant in recent years at about 68 per year; rates of occupational death among younger teens are actually rising,1 and more than 200,000 teens continue to be injured on the job every year.4 Child and adolescent work-related injuries and exposures, and their resulting health effects, are not just remnants of Dickensian history but remain an important public health issue in the twenty-first century, as the following cases illustrate. Each case represents a sentinel health event, a single event that serves as a marker for a whole group of youth at potential risk of exposure, injury, or death. Case 1: A 16-year-old boy cleaning a grill in a Kentucky fast-food restaurant collapsed and died, despite rapid emergency medical response. He had no history of cardiac problems, solvent huffing, or other drug use. The cleaning solution was an unknown mixture of substances, but analysis was impossible because the solution was discarded during the emergency response. 
Concern remains that the cleaning mixture, when heated, may have released fumes that caused a fatal arrhythmia. Since fast food is one of the major industries hiring youth, and most youth who work in food service also end up performing cleaning tasks of some type with cleaning chemicals at the end of their shift, this unresolved case continues to cause concern. (Kentucky Fatality Assessment and Control Evaluation program, personal communication 1994 and subsequent discussions with county coroner.) Case 2: A 17-year-old boy running a thriving T-shirt printing business out of his bedroom presented to an adolescent clinic one cold Pittsburgh winter with fatigue and elevated liver enzymes. His symptoms and physical findings were felt to be the result of his solvent exposure. His family had inadequate resources to purchase a spray booth but was unwilling to see him give up his lucrative business. As a compromise, they agreed to have him move the business out of his bedroom into another room in the house. (L. Sanders, MD, Children’s Hospital of Pittsburgh, personal communication 1993.) Case 3: As a volunteer summer church youth-group project, twins are helping elderly people with home maintenance in an area of their state known for old houses with lead-based paint. The youth group comes in, scrapes and repaints houses in crews that work for
about a week each. No one has considered the lead exposure risk to volunteers scraping and generating lead paint dust. No specific training is provided, nor is monitoring done. This scenario is repeated in Pennsylvania, Kentucky, and numerous other states. Case 4: Just before Christmas in the early 1990s, a carload of Kentucky teens returning to their factory work from an off-site lunch break crashed, and all were killed. A few years later, a 17-year-old driving a truck to deliver newspapers on a rainy Sunday morning on a rural road failed to negotiate a curve and hit a tree. A substitute driver, he died after a week in the ICU without ever regaining consciousness. In 2005, the Kentucky legislature was considering passage of a version of Graduated Driver Licensing that permits new drivers to carry two passengers and has an exemption for driving to work. Case 5: A 16-year-old girl with diabetes was admitted to the inpatient hospital service with poor control of her blood sugar. During her hospitalization, it was discovered that she was working after school for a pizza maker. If orders came in fast and they were busy, she was not permitted to take a dinner break; consequently, her eating schedule was inconsistent. It was suggested that she speak with her employer about the legal requirement for such a break and the need to uphold that law for her medical well-being. She and her mother agreed, and after a meeting with the employer, more regular dinner breaks were made available to her. Case 6: A 16-year-old Ohio jockey was killed 5 weeks into a successful career when his horse broke a leg and fell on him in a November 2005 race. Most experts agreed that neither age nor experience would have made a difference in the outcome. Case 7: A 15-year-old boy was killed in a tobacco field when the rear wheel of the tractor he was driving went over the edge of a small ravine and the tractor rolled twice and landed on him. 
The tractor had no rollover protection system (ROPS) and no seat belt. He and another 15-year-old had been plowing all day on a farm without adult supervision. His 14-year-old brother, racing to the field to help when he heard of the crash, nearly rolled another tractor himself.

HISTORICAL PERSPECTIVE
Copyright © 2008 by The McGraw-Hill Companies, Inc.

Child labor has a long history. In the Middle Ages, children worked in agriculture and as apprentices to artisans.5 In Colonial America, children who helped out on their own farms and households commonly were hired out to perform similar tasks for neighbors, a practice that has continued in rural areas almost without change. Under these conditions, proximity to family and social relationships provided some degree of protection for the child worker.6 Child labor underwent major expansion and restructuring during the eighteenth century as a consequence of the industrial revolution’s need for large numbers of workers. Most mill owners preferred to hire children rather than adults because child workers were cheaper, more tractable, and, as labor unions developed, less likely to strike.7 Families
sent children as young as 11, especially girls, to work in the mills, where the wages they could earn far exceeded the income of their parents at home on rural farms. These young girls often were victims of sexual exploitation outside of the workplace in addition to exploitation inside the factories, where they commonly labored for 12 or more hours a day, 6 days a week.8 Depiction of the horrors of child labor in the literature9,10 and art of the eighteenth and nineteenth centuries sparked great popular revulsion against the worst abuses, but the practice nevertheless continued. In Britain, concern over the plight of working children stimulated passage of the first legislation protecting the health of all workers.5,7 The 1802 Health and Morals of Apprentices Act fixed the maximum number of hours of work for apprentices, forbade night work, and ordered the walls of factories to be washed twice each year and workrooms to be ventilated. In the United States, concerns about working children led to the enactment of compulsory education laws in the eighteenth and nineteenth centuries. For example, an 1874 New York State law mandated schooling for all 8- to 14-year-old children and proscribed work on school days.7 Despite federal and state legislation, child labor continued to be a major problem during the first third of the twentieth century, largely because of inadequate enforcement of existing statutes. The need for enforcement was demonstrated by the death of 146 women and children in the 1911 Triangle Shirtwaist fire in New York City, only 8 years after the passage of landmark child labor and fire protection legislation.11 Between 1916 and 1930, Congress enacted three major pieces of child labor legislation, but the U.S. Supreme Court invalidated all three. Finally, in 1938, Congress passed the Fair Labor Standards Act (FLSA), which remains the major federal legislation governing child labor today. 
Major reductions in child labor occurred during the 40 years after passage of the FLSA. Although provisions of the Act helped to produce this decrease, automation, structural shifts in the American industrial economy, reductions in family size, and restrictive immigration policies all contributed to the declining use of child laborers. After World War II, widespread emphasis on the personal and societal value of education and a generally strong economy combined to further decrease the prevalence of child labor in most sectors of the economy. The major exception was agricultural employment, which was exempted from many of the provisions of the FLSA. Consequently, the employment of children in agriculture remained common and is still relatively underregulated today.

CURRENT YOUTH EMPLOYMENT
The estimated 5 million legally working American youth under age 18 do not fully include several additional populations: an estimated 1.3 million youth living and working on family farms and ranches; migrant farmworker children working in the fields and adding to the piecework rate for which their fathers are paid (including legal and illegal immigrants as well as U.S.-born citizens); and children working in a variety of small family businesses. Although legal employment primarily includes youth ages 14 and above, farming, newspaper delivery, and certain other jobs are legal for even younger children, and children as young as age 11 do appear in workers’ compensation databases. Both the number of teens employed and the hours worked per week tend to increase with age, in part because of the increased hours and job types permitted under federal regulation once they reach ages 16 and 17. Data from the 1988 census about the 50% of 16- and 17-year-olds then working at some point during the year indicated that they were working an average of more than 20 hours per week for almost half the year.2 High frequency of job change has been reported as the norm for many high school workers, especially in the food service industry, but middle school students were noted to keep their summer jobs into the school year.3 Little specific information was known about workers under high school age until recently. In October 2001, an anonymous survey about
employment, injury, work-related habits, and school performance was administered to middle school students in five school districts and one urban school in Wisconsin, chosen to be representative of the state as a whole, resulting in an analysis of replies from 10,366 students.3 About 58% of middle school students ages 10–14 reported working during the previous summer (2001), and 60% of those students reported working at the same job during the school year. Two-thirds of those who reported working were 12–13 years old. A third of the students were working more than 10 hours per week, including 5% working 40 or more hours a week. A third of the middle school employees were working between 7 and 11 p.m., and 6% reported working after 11 p.m. (8% of females). Weller et al.12 conducted anonymous classroom surveys of lower-income, predominantly Hispanic South Texas middle school students in sixth through eighth grade, in which 3008 students (56%) reported current or recent employment. Of the respondents, 63% were Hispanic. Only half of Hispanic children but 66% of white children reported current or recent employment. South Texas students reported working an average of 8 hours per week, but 12% worked more than 10 hours per week. During the 1990s, increasing numbers of adolescents were being pushed into the workplace through the nationwide School to Work initiative and through local gang prevention, violence prevention, and juvenile justice-related job programs.13 It is unclear what effects the economic climate of the decade beginning in 2000 will have, though there is some suggestion that, with cuts in summer job programs and poor economic times for at least a segment of the U.S. adult population, fewer youth jobs may be available.

LAWS PROTECTING THE HEALTH/SAFETY OF WORKING YOUTH AND THE WORK PERMIT SYSTEM
The Fair Labor Standards Act (FLSA) of 1938 remains the major piece of legislation regulating the employment of youth under age 18. Under the FLSA, no child under the age of 16 years may work during school hours, and a ceiling is set on the number of hours of employment permissible for each school day and school week. Employment in any hazardous nonagricultural occupation is prohibited for anyone less than 18 years old, and specific prohibitions are listed in the Hazard Orders (HOs). Thus, no one under age 18 may work in mining, logging, brick and tile manufacture, roofing, or excavating, or as a helper on a vehicle or on power-driven machinery. Work with meat-processing machinery and delicatessen slicers is specifically prohibited.14 In agriculture, where the restrictions are much less stringent, hazardous work is prohibited only until age 16, and all work on family farms is totally exempted. According to the law, however, no child under age 16 working on a nonfamily farm is allowed to drive a tractor with an engine over 20 horsepower or to handle or apply pesticides and herbicides.15 The intent of the FLSA and HOs is to protect the safety of working youth. Despite a call by the National Academy of Sciences in 1998 for a national surveillance system to monitor adolescent occupational injuries, in 2006 there is still no centralized mechanism for evaluating how well the FLSA actually protects children in the workplace. Yet even without data to show that current law protects as well as it was designed to, a number of initiatives have been undertaken in the past decade to weaken the HOs, leading to passage of a provision permitting limited driving of motor vehicles on the job and to the elimination of box-crusher restrictions in supermarkets. Critics of the FLSA and HOs complain that job processes have changed while the restrictions have not been updated. 
It is also true that new processes and machines have been invented, and many (such as the new stand-up mowers used in lawn care) are not even addressed under the current HOs. Although the FLSA provides a broad framework for the regulation of child labor, most administration of the law occurs on a state level, largely through the work permit system. Work permits are issued to children by state and local school systems. This authority was
placed within the schools to allow for discretion in the issuance and rescission of a work permit based on a student’s academic performance. In reality, however, most school systems, overwhelmed by more pressing responsibilities, virtually never exercise their discretionary authority. Administration of the FLSA in most states also suffers from a lack of centralized data collection on the number or types of work permits issued or the industries in which children are employed. Thus, in most states, only meager information is available on the number and ages of employed children or on the nature of their employment.

ILLEGAL CHILD LABOR, ENFORCEMENT
AND SITUATIONS WHICH VIOLATE OR AVOID THE LAW

While a small subset of child labor in the United States involves undocumented or “illegal” immigrants in fields, garment sweatshops, poultry plants, and construction, it is important to realize that the majority of illegal child labor is the employment of U.S. citizens under conditions that violate the wage, hour, and/or safety laws.16–18 Examples include clocking workers out and failing to pay for time spent cleaning up an establishment at the end of a shift, and failure to abide by the Hazard Orders. Despite the FLSA, illegal employment of children continues to occur in all industrial sectors and often exists under sweatshop conditions. A sweatshop is defined as any establishment that routinely and repeatedly violates wage, hour, or child labor laws and the laws protecting occupational safety and health. Traditionally, these shops have been considered fringe establishments, such as those in the garment and meat-packing industries.19–22 Increasingly, however, restaurants and grocery stores, not typically considered to be sweatshops, also sometimes satisfy the definition. In an effort to quantify the magnitude of illegal child labor in the absence of readily available national statistics, the General Accounting Office (GAO) surveyed the directors of state labor departments in 1987. The GAO found that, in Chicago, half of the approximately 5000 restaurants met the criteria for sweatshops, and about 25,000 workers were employed in such establishments. In New Orleans, 25% of the 100 apparel firms (employing 5000 workers) were estimated to be sweatshops. In Los Angeles and New York, anywhere from 500 to 2000 or more sweatshops were thought to exist.23 The problem is not confined to large urban areas. In 1987, several high school students employed by a chain restaurant in a small West Virginia town quit after trying unsuccessfully to negotiate with the manager to stop keeping them past midnight on school nights. 
The critical importance of child labor law violations lies in their continued link to adolescent occupational injuries and deaths. In Suruda and Halperin’s early study of 1984–87 Occupational Safety and Health Administration (OSHA) adolescent fatality investigations, 41% of the deaths occurred while adolescents were engaged in work that was specifically prohibited under the FLSA, and employer citations for safety violations were issued in 70% of those adolescent death investigations.24 In Suruda’s 2005 review of construction fatalities among workers ages 16–19, 76 teens under age 18 were killed during the 5 years of the study.25 Half of the deaths among these workers occurred in situations that were in apparent violation of existing child labor laws; 15 involved age violations (workers under age 16) and 28 involved violations of specific HOs (some involved both). Teen deaths were noted to occur at small, nonunion firms, many of which were exempt both from federal child labor law enforcement and from routine OSHA inspections because of their small size. The risk of work in situations not well covered under the law or by inspection programs was echoed by Derstine26 in a study of 80 fatal adolescent injuries in 1992–1993. Of those 80, 31 occurred among children working in a family business; half of those were among children less than age 14, and 28 were in agriculture. In a study of North Carolina adolescent occupational fatalities based on medical examiners’ reports, Dunn and Runyan also found that 86% of workers under age 18 who incurred fatal injuries were engaged in work that appeared to violate the FLSA.27
ADOLESCENT OCCUPATIONAL INJURIES AND FATALITIES

Injuries and deaths related to adolescent employment in the United States have been characterized quite extensively in the past two decades (Table 45-1). Approximately 70 youth die on the job every year.4 In small states, hundreds of teen workers are known to incur occupational injuries each year, while in large states adolescent occupational injuries number in the thousands.
Occupational Fatalities from Injuries and Exposures

Numbers and rates of fatalities by age: In an October 2005 paper, Windau et al.1 of the Bureau of Labor Statistics provide the most comprehensive and current broad summary of adolescent occupational injuries and fatalities in the United States. However the data are examined, the overall picture is still one of substantial risk of death and injury, with much work to be done in prevention, education, and enforcement of current child labor laws. The number of U.S. young worker fatalities averaged 68 per year from 1992 to 2000, decreased in 2001–2002, increased in 2003 (primarily among workers under age 16), then decreased again in 2004 to almost half the 1992 number,37 a decrease larger than that for adult workers over the same period. Fatality rates per 100,000 full-time equivalent (FTE) workers were examined for the decade 1994–2004. During that time, the rate for workers older than 15 as a whole decreased 3%, driven primarily by decreases in death rates among workers over age 55. For workers ages 15–17, the death rate fluctuated, initially declining until 1998, rising to the highest rate ever recorded, 3.8 per 100,000, in 1999, falling to 2.3 in 2002, and ending at 2.7 in 2004. Examining occupational deaths from 1994 to 2004 among teen workers more closely, workers ages 16 and 17 had a fatality rate of 3.0 per 100,000 while 15-year-old workers had a rate of 4.7 per 100,000. Over the period, overall death rates of workers in most adult age groups declined by 1–5%, while the rate for 16- to 17-year-olds declined about 1% and the rate for 15-year-olds actually increased 9%. When 5-year periods were used for analysis, overall worker fatality rates declined 14% between 1994–1998 and 1999–2003 while rates for 15- to 17-year-olds declined only 6%; as a result, the fatality rate for 15- to 17-year-olds approached that for workers ages 18–34. 
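The rates above are expressed per 100,000 full-time equivalent (FTE) workers rather than per employed teen, because most teens work part time and a simple head count would understate their per-hour risk. A minimal sketch of the calculation follows; the numbers and the 2000-hour FTE convention are illustrative assumptions, not BLS data:

```python
def fatality_rate_per_100k(deaths, total_hours_worked, hours_per_fte=2000):
    """Fatality rate per 100,000 full-time-equivalent (FTE) workers.

    The FTE adjustment matters for teen workers: dividing deaths by a
    simple head count of mostly part-time workers would understate risk.
    """
    fte_workers = total_hours_worked / hours_per_fte
    return deaths / fte_workers * 100_000

# Illustrative numbers only: 30 deaths among 2,000,000 teens averaging
# 1,000 hours/year each -> 1,000,000 FTE workers -> 3.0 per 100,000.
rate = fatality_rate_per_100k(30, 2_000_000 * 1_000)
print(round(rate, 1))  # 3.0
```

This also shows why head-count and FTE rates diverge: the same 30 deaths spread over 2,000,000 employed teens would look like 1.5 per 100,000 workers, half the hours-adjusted figure.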
Death by industry and by cause/mechanism: Agriculture is the leading industry responsible for adolescent occupational fatalities, followed by construction.25 More than half the adolescent occupational fatalities in the United States from 1998 to 2002 were related to transportation incidents.1 This represented a 14% increase from the previous 5-year period and was a result of both vehicle-related incidents on farmland and public roadways and incidents of workers being struck by vehicles. Vehicles or farm machinery were responsible for almost all of the deaths involving workers younger than age 14, and in about 25% of those cases the child was the driver or operator. There was a 17% increase in deaths among youth riding on farm vehicles as passengers or outside helpers. In most cases, victims fell from and were struck by the same farm vehicle. Deaths also increased for youth riding on other vehicles and as pedestrians. Among 16- to 17-year-old workers, 12% were killed driving a car or truck, while another 12% were killed in retail work, mostly as homicides. Work-related homicide decreased 44% between the two time periods. Deaths from being struck by objects and from fires and explosions also declined. Deaths related to falls increased because of an increased number of falls from scaffolds. The risk in construction was also seen in the doubling of deaths that occurred while installing building materials, mostly on construction sites, at a time when most other work-activity fatalities declined. Occupational exposure–related deaths: From 1980 to 1989, NIOSH found electrocutions were the third leading cause of occupational fatality among 16- and 17-year-olds, with higher rates than those for adult workers.4 Contact with an energized power line
accounted for more than half the cases. Windau in 2005 noted a doubling of electrocution deaths, which accounted for 5% of fatalities among farm workers under age 18.1 In a study of fatal construction injuries among 15- to 19-year-olds from 1984 to 1998, Suruda also noted the importance of electrocution, which led to more deaths than roof falls did and was more common among youth deaths than adult deaths.25 Poisonings constitute a small but persistent share of adolescent occupational fatalities. Dunn and Runyan27 reviewed 1980–1989 North Carolina Medical Examiner records of 71 youth less than age 20 years and found one poisoning death of an adolescent under age 18. Castillo et al.28 reviewed 670 nonmilitary deaths of 16- and 17-year-olds in the National Traumatic Occupational Fatalities database during the same time frame (1980 to 1989). Poisonings were responsible for 3.0% of deaths (20 males, 0 females). For 16- and 17-year-olds, the risk of occupational poisoning death was 1.5 times that for adult workers. In California during that same period, Schenker et al.29 found that the odds ratio for death by accidental poisoning on farms, compared with off-farm settings, was 1.8 for 10- to 14-year-olds. Belville et al.30 found that 2 of 31 occupational fatalities in New York State workers’ compensation awards from 1980 to 1987 were caused by exposure to toxicants. Both victims were aged 17 years; one was asphyxiated by carbon monoxide at a trucking storage depot, and one died of gas inhalation (probably hydrogen sulfide in a manure pit) while working on a dairy farm. In a review of 104 deaths of youth less than age 18 from 1984 to 1987 that resulted in OSHA investigations, Suruda and Halperin24 found 12 deaths from asphyxiation. Abuse of substances available at work was implicated in three of those deaths: one youth sniffing trichloroethane and two inhaling nitrous oxide. Deaths while cleaning tanks (enclosed-space fatalities) have been reported in Canadian and Colorado adolescents.
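Several of the studies above summarize relative risk as an odds ratio. As a reminder of how such a figure comes out of a 2×2 table, here is a minimal sketch; the counts are hypothetical and are not taken from Schenker et al. or any study cited here:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio from a 2x2 table:
    a = exposed cases,   b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    """
    return (a * d) / (b * c)

# Hypothetical counts chosen so the cross-product works out:
# (9 * 180) / (90 * 10) = 1620 / 900 = 1.8
print(odds_ratio(9, 90, 10, 180))  # 1.8
```

An odds ratio of 1.8, as reported for on-farm poisoning deaths among 10- to 14-year-olds, means the odds of that outcome were nearly twice as high in the exposed group.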
Nonfatal Occupational Injury

As evidenced by Table 45-1 below, workers’ compensation files have provided some of the most comprehensive information available in the United States on adolescent occupational injury, despite limitations that tilt the data toward undercounts. Reasons for undercounting include failure to recognize injured “students” as workers, lack of knowledge among teen workers and their families about workers’ compensation eligibility, and differences between states in indemnity requirements. Because of the part-time nature of much teen work, accurate injury rates are difficult to derive, but data from the state of Washington suggest that adolescent occupational injury rates may exceed those of adult workers when adjusted for hours worked.31
TABLE 45-1. OCCUPATIONAL INJURY TO WORKING TEENS

Washington: 4450 accepted workers’ compensation claims per year for workers aged 11–17 (reference 31)
California: 2104 injuries in 1991 (reference 32)
New York: more than 1000 per year received workers’ compensation (more than 8 lost work days required to qualify) (reference 30)
Texas: more than 1000 per year reported to workers’ compensation (reference 33)
Connecticut: almost 800 per year (reference 34)
Massachusetts: 400 per year treated in emergency departments, with 5% of the state population under surveillance, thus potentially 8000 per year (reference 35)
Minnesota: almost 750 per year (reference 36)
Rhode Island: an average of 500 per year accepted by workers’ compensation (reference 37)
Kentucky: more than 400 per year reported to workers’ compensation (reference 38)
Available emergency department data also provide a glimpse into the important role of work in the epidemiology of adolescent injury. A Massachusetts study35 of adolescent emergency department visits for treatment of injuries found that 26% of the injuries with a known location among 17-year-olds occurred at work, and that work was the single most common location of injury in this age group, as it was among a surveyed group of 16- to 17-year-old Saskatchewan high school students.39 In Massachusetts, 1 of every 30 adolescents aged 16–17 in the population received treatment in an emergency department for a work-related injury each year. In a 2005 study of CHIRPP data (Canadian Hospitals Injury Reporting and Prevention Program) for Canadian children, Lipskie and Breslin found 999 children ages 5–17 who had suffered an occupational injury between 1995 and 1998. Occupational injuries increased with age and were concentrated in two main areas: clerical/service and manual labor.40
Injury severity: Ehrlich41 used the requirement for significant surgery as a proxy measure for severity in a study of West Virginia workers' compensation claims from 1996 to 2000 and found that workers under 20 had an increased relative risk of lacerations, fractures, and amputations compared with adults over age 20, and had injuries that resulted in significant surgical procedures more often than adults. However, application of this study to child labor is limited by its inclusion of 18- and 19-year-olds.
Injury by commercial versus family business: In a community-based telephone survey of work and injuries among teenage agricultural workers in Washington state, Bonauto et al.42 found that injury rates among both Hispanic and non-Hispanic teenage agricultural workers who were working for an agricultural business owned by a family member were higher than among those who were working for an agricultural business not owned by a family member.
Since teens employed in family farm businesses also were found to have worked more seasons and fewer hours per week, they would theoretically have more experience and less fatigue, but it may be that the type of tasks for which they are responsible poses the risk. Those tasks were found to include driving, animal care, and mechanic work.
Nonfatal Occupational Exposure
In a Washington state study of 1988–1991 workers' compensation awards to 17,800 adolescent workers ages 11–17 years, 4.9% of the awards resulted from toxic exposures.31,32 While workers' compensation records have provided the bulk of useful data on adolescent occupational injuries from many states, it is now clear from the work of Woolf et al.43 that Poison Control Center data can provide useful knowledge on both the substances responsible for adolescent occupational poisoning and the patterns of occupational poisoning. The Toxic Exposure Surveillance System database compiled by the American Association of Poison Control Centers was analyzed for 1993–97 cases. Of the workplace toxic exposures in their system, 3% (8779 cases) involved adolescent workers under age 18, and the proportion of cases that were teens increased over that time. There were 2 deaths, approximately 877 life-threatening cases, and an additional 14.2% were considered severe injuries. Approximately a third involved toxic inhalations, 27% involved eye exposures, 24% involved skin exposures, and 19% involved ingestions. The most common agents were alkaline corrosives (13%), gases and fumes (12%), and cleaning agents (9%). Issues that arose from those data, and also from Kentucky workers' compensation data, include the importance of poisoning from cleaning agents among youth working in food service. Because so many of those poisoned were doing cleaning tasks in occupations that would not automatically suggest exposure to cleaning agents, the understanding has evolved that it is important for parents, emergency care providers, and researchers to examine tasks, and not just occupational classifications, when examining adolescent occupational exposures/injuries.
Noise is one of the few specific areas of adolescent occupational exposure that not only has been studied but also for which an intervention program has been designed, implemented, and evaluated.44 From 1985 to 1988, audiometric assessment of 872 high school vocational agriculture students was conducted in 12 central Wisconsin schools. Students with active involvement in farm work had more than twice the risk for mild and early noise-induced hearing loss compared with their peers who had no involvement in farm work. Greater exposure paralleled the degree of hearing loss in at least one ear. Approximately 50% of adolescents employed on farms tested had evidence of some hearing loss in at least one ear; this was true for 74% in one of the higher-exposure groups. As with adult farmers who drive tractors, the left ear was most affected in adolescents doing farm work. (Farmers normally look over their right shoulder while driving, shielding the right ear with the head while placing the left ear closer to the engine noise.) Few students employed on farms drove only tractors with enclosed cabs, which have been shown to be less noisy. Only 9% of students employed on farms reported the use of hearing protection devices (HPDs). Hearing deficits documented in the first year generally persisted into the second year. The use of amplified music and exposure to noise from snowmobiles or motorcycles had only weak associations with the prevalence of noise-induced hearing loss. The Wisconsin findings have been echoed in North Carolina, where at least 30% of male adolescents in 4-H who were employed on farms reported being exposed to loud noises. Among 562 North Carolina adolescents with nonfarming work experience, 27% reported working around loud noises. Pechter noted noise exposure in construction and kitchen jobs in Massachusetts (Elise Pechter, unpublished report to NIOSH, 1998). Additional exposures of concern include pesticide poisonings, green tobacco sickness (nicotine poisoning), repetitive motion problems, occupational dermatitis, and exposure to second-hand tobacco smoke and pulmonary sensitizers. Further discussion of these is beyond the scope of this chapter.45
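The dose arithmetic that links sound levels and durations to risk can be sketched using the NIOSH recommended exposure limit (85 dBA for 8 hours, with a 3-dB exchange rate), which this chapter does not cite explicitly; the sound levels and hours below are hypothetical illustrations, not measured values:

```python
def allowed_duration_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Hours of exposure allowed at a given sound level under the NIOSH
    recommended exposure limit (85 dBA for 8 h; each +3 dB halves the time)."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

def daily_noise_dose(exposures):
    """exposures: list of (sound level in dBA, hours at that level).
    A dose above 1.0 means the recommended daily limit is exceeded."""
    return sum(hours / allowed_duration_hours(level) for level, hours in exposures)

# Hypothetical day of farm work: 2 h on an open tractor at 94 dBA
# plus 4 h of barn chores at 85 dBA (illustrative levels only).
dose = daily_noise_dose([(94.0, 2.0), (85.0, 4.0)])
print(round(dose, 2))  # 2.5, i.e., two and a half times the recommended daily dose
```

The halving of allowed time for every 3-dB increase is why a few hours of unshielded tractor driving can dominate a teen's daily noise dose.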
OPTIONS AND SUCCESSFUL MODELS FOR PREVENTION
As in all injury interventions, the best approach to prevention includes a combination of education, engineering, and policy and enforcement options, together with the surveillance necessary to measure how each of these, alone and in combination, is working.
Education: Classroom and on-the-job training are both necessary and need to involve parents, schools, and youth workers themselves as well as employers. One example is the Wisconsin hearing conservation program,44 devised in response to the studies of hearing loss among high school farm youth described above. A 4-year hearing conservation program was designed, implemented, and evaluated in 34 Wisconsin junior and senior high schools. Its primary goal was to protect hearing by promoting the use of HPDs. Intervention group students (n = 375) received a five-component educational intervention over the course of 4 school years and three summers, including yearly hearing tests, whereas control students (n = 378) received only a baseline hearing test that was repeated in years 2 and 3. Agricultural and industrial arts teachers also were offered free hearing tests in year 2 in hopes that, through their involvement in the study, they also would increase their encouragement of students to use hearing protection. The educational intervention was modeled after an ideal industrial hearing-conservation program. In addition to the hearing tests, it included (a) classroom-style education, including basics of anatomy and physiology of the ear, a videotape of youth with hearing loss from noise in agricultural settings, and examples of music with deleted frequencies simulating hearing loss; (b) frequent reminders provided through periodic school visits and mailings home (every 6 weeks throughout the first 2 study years); (c) noise-level assessments done by students on their own farms using a sound-level meter; and (d) distribution of a variety of types of HPDs, provided and replaced on a regular basis through the 4 years.
Baseline HPD use was 23% among the intervention students versus 24% among controls. After the intervention, self-report of planned future use of HPDs was 81% among
intervention students versus 43% among controls. Students rated the most important interventions as the provision of free earmuffs and earplugs (94%), yearly audiometric exams (90%), and educational mailings to their homes (77%). Two factors believed by the research team to have been instrumental to the success of the program were the opportunity for the students to test sound levels at their own farm (almost two-thirds of compliant intervention students reported that this was a factor) and the repeated opportunity through the course of the study to practice fitting the earplugs correctly. A second prevention example is a school-based interactive program, Health and Safety Awareness for Working Teens, implemented as part of school-to-work programs in Washington State. Teachers found it useful and easy to use, and it led to increased student knowledge.46 A third educational effort that resulted in safer work practices involved the creation of active participatory stations in which students had to conduct farm tasks with simulated disabilities caused by farm injuries.47
Enforcement: More inspectors, bigger fines, and serious judges would make employers less likely to risk the lives and well-being of teen employees.
Policy: Motor vehicles and teen drivers do not mix well, for play or for work, and exemptions from teen driving laws should be limited, as the data and logic behind those laws are no different on the way to work.
Engineering/Process change: Burn prevention can be improved by scheduling filter changes above grease for times when the grease is cold, not hot.
ACKNOWLEDGMENTS
The lead author would like to thank Ms Amelia Jones for her assistance in the preparation of this manuscript. This chapter is dedicated to all the children killed and injured at work since 1980; to the memory of the two New Jersey police officers who were killed on December 25, 2005, when they drove off a foggy drawbridge that had been opened while they were setting flares to protect the public; and to the memory of the 12 West Virginia coal miners who died after an underground explosion on January 1, 2006, when time ran out to save them.
REFERENCES
1. Windau J, Meyer S. Occupational injuries among young workers. Monthly Labor Rev. 2005:11–23. 2. Institute of Medicine (Committee on the Health and Safety Implications of Child Labor). Protecting Youth at Work—Health, Safety, and Development of Working Children and Adolescents in the United States. Washington, DC: National Academy Press; 1998. 3. Zierold KM, Garman S, Anderson H. Summer work and injury among middle school students aged 10–14 years. Occup Environ Med. 2004;61:518–22. Available at http://oem.bmjjournals.com. Accessed Nov 18, 2005. 4. National Institute for Occupational Safety and Health. Alert: Request for Assistance in Preventing Deaths and Injuries of Adolescent Workers. DHHS (NIOSH) Publication No. 95–125; May 1995. 5. Hunter D. The Diseases of Occupations. 5th ed. London: The English Universities Press Ltd; 1974. 6. Postol T. Child labor in the United States: its growth and abolition. Am Educator. 1989;13(2):30–1. 7. Trattner WI. Crusade for the Children: A History of the National Child Labor Committee and Child Labor Reform in America. Chicago: Quadrangle Books; 1970. 8. Rossner J. Emmeline. New York: Pocket Books; 1980. 9. Dickens C. Hard Times. London; 1854. 10. Trollope F. The Life and Adventures of Michael Armstrong, the Factory Boy. London: Colburn; 1840. 11. Wertheimer BM. “We Were There.” The Story of Working Women in America. New York: Pantheon Books; 1977.
12. Weller NF, Cooper SP, Tortolero SR, et al. Work-related injury among south Texas middle school students: prevalence and patterns. Southern Med J. 2003;96(12):1213–20. 13. Davis L, Pollack S. School to work opportunities act (letter). Am J Public Health. 1995;85:590. 14. Child Labor Requirements in Nonagricultural Occupations under the Fair Labor Standards Act, Child Labor Bulletin No. 101. Washington, DC: U.S. Department of Labor, U.S. Government Printing Office, Employment Standards Administration, Wage and Hour Division; U.S. Department of Labor Wage and Hour Publication No. 1330; 1985. 15. Child Labor Requirements in Agriculture under the Fair Labor Standards Act, Child Labor Bulletin No. 102. Washington, DC: U.S. Department of Labor, U.S. Government Printing Office, Employment Standards Administration, Wage and Hour Division; U.S. Department of Labor Wage and Hour Publication No. 1295; 1984. 16. Corbin T. Child Labor Law Survey of Teenagers. Albany: New York State Department of Labor, Division of Research and Statistics, Working Paper No. 5; 1988. 17. New York State Department of Labor. Hearings on Child Labor Law Review. Albany, Buffalo, Manhattan, Hauppauge, L.I., and Syracuse; 1988. 18. Landrigan PJ. The hazards to children of industrial homework. Testimony before the U.S. Department of Labor. New York; Mar 29, 1989. 19. U.S. General Accounting Office. “Sweatshops” and child labor violations: a growing problem in the United States. W. Gainer before the Capitol Hill Forum on the Exploitation of Children in the Workplace; 1989. 20. Bagli CV. Child labor and sweatshops—growing problems in the city. NY Observer. 1988. 21. Bagli CV. Some “hard workers” in garment district are just 12 or 14. NY Observer. 1989. 22. Powell M. Babes in toil-land: child labor and the city’s sweatshops. NY Newsday. 1989. 23. U.S. General Accounting Office. Sweatshops in the U.S.: Opinions on their Extent and Possible Enforcement Actions. Washington, DC; 1988 (Publ. No.
GAO/HRD-88-130 BR). 24. Suruda A, Halperin W. Work-related deaths in children. Am J Ind Med. 1991;19:739–45. 25. Suruda A, Philips P, Lillquist D, et al. Fatal injuries to teenage construction workers in the U.S. Am J Ind Med. 2003;44:510–4. 26. Derstine B. Youth workers at risk of fatal injuries. Presented at the 122nd Annual Meeting of the American Public Health Association. Washington, DC; 1994. 27. Dunn K, Runyan C. Deaths at work among children and adolescents. Am J Dis Child. 1993;147:1044–7. 28. Castillo DN, Landen DD, Layne LA. Occupational injury deaths of 16- and 17-year-olds in the United States. Am J Public Health. 1994;84:646–9. 29. Schenker MB, Lopez R, Wintemute G. Farm-related fatalities among children in California, 1980–1989. Am J Public Health. 1995;85:89–92. 30. Belville R, Pollack SH, Godbold J, et al. Occupational injuries among working adolescents in New York state. JAMA. 1993;269:2754–59.
31. Miller M. Occupational Injuries among Adolescents in Washington State, 1988–91: A Review of Workers’ Compensation Data. Olympia, WA: Safety and Health Assessment and Research for Prevention, Washington State Department of Labor and Industries; Technical Report No. 35-1-1995; 1995. 32. Bush D, Baker R. Young Workers at Risk: Health and Safety Education and the Schools. Berkeley, CA: Labor Occupational Health Program; 1994. 33. Cooper SP, Rothstein MA. Health hazards among working children in Texas. South Med J. 1995;88:550–4. 34. Banco L, Lapidus G, Braddock M. Work-related injury among Connecticut minors. Pediatrics. 1992;89:957–60. 35. Brooks DR, Davis LK, Gallagher SS. Work-related injuries among Massachusetts children: a study based on emergency department data. Am J Ind Med. 1993;24:313–24. 36. Parker DL, Carl WR, French LR, et al. Characteristics of adolescent work injuries reported to the Minnesota Department of Labor and Industry. Am J Public Health. 1994;84:606–11. 37. Horwitz IB, McCall BP. Occupational injury among Rhode Island adolescents: an analysis of workers’ compensation claims, 1998–2002. J Occup Environ Med. 2005;47(5):473–81. 38. Pollack SH, Scheurich-Pane SL, Bryant S. The nature of occupational injury among Kentucky adolescents. Presented at the Occupational Injury Symposium. Sydney, Australia; Feb 26, 1996. 39. Glor ED. Survey of comprehensive accident and injury experience of high school students in Saskatchewan. Can J Public Health. 1989;80:435–40. 40. Lipskie T, Breslin FC. A descriptive analysis of Canadian youth treated in emergency departments for work-related injuries. Chronic Dis Can. 2005;26(4):107–13. 41. Ehrlich PF, McClellan WT, Helmcamp JC, et al. Understanding work-related injuries in children: a perspective in West Virginia using the state-managed workers’ compensation system. J Pediatr Surg. 2004;39:768–72. 42. Bonauto DK, Keifer M, Rivara FP, et al.
A community-based telephone survey of work and injuries in teenage agricultural workers. J Agric Safety Health. 2003;9(4):303–17. 43. Woolf A, Alpert HR, Garg A, et al. Adolescent occupational toxic exposures—a national study. Arch Pediatr Adolesc Med. 2001;155:704–10. 44. Broste SK, Hansen DA, Strand RL, et al. Hearing loss among high school farm students. Am J Public Health. 1989;79(5):619–22. 45. Pollack SH. Adolescent occupational exposures and pediatric-adolescent take-home exposures. Pediatr Clin North Am (Children’s Environmental Health). 2001;48:xv–xxxiii. 46. Linker D, Miller ME, Freeman KS, et al. Health and safety awareness for working teens. Fam Commun Health. 2005;28(3):225–38. 47. Reed DB, Kidd PS. Collaboration between nurses and agricultural teachers to prevent adolescent agricultural injuries: the agricultural disability awareness and risk education model. Pub Health Nurs. 2004;(4):323–30.
46
Occupational Safety and Health Standards
Eula Bingham • Celeste Monforton
Until 1970, there was almost total reliance on state and local governments and the forces of the market to improve working conditions related to occupational injuries, death, and disease. For more than 50 years, state governments had attempted to inspect workplaces and to advise employers about hazards. Few of these programs, however, had adequate enforcement authority to compel abatement of dangerous conditions. In some states, no attempt was made by government to change workplace conditions, either by enforcement or by persuasion. Variations in state legislation resulted in comprehensive, strong regulation in some states (e.g., New York and Illinois) and nonexistent regulation in others (e.g., Mississippi). The doctrine of states’ rights and a tradition of state regulatory activity in the area of labor standards protected this status quo.
Another traditional approach was to trust market and private sector mechanisms to provide worker protection. Workers’ compensation insurance carriers made some attempt to improve workplace safety for economic reasons. Many carriers provided consultative service to their clients and charged lower rates to large companies that were successful in reducing injuries. Then, as now, insurance companies’ consultative resources are limited and are not available to all who may need them; while it may be possible to provide economic incentives to large firms by basing their premium rates on accident experience, it is not possible to provide this same incentive to small firms, which have too few employees to generate a statistically significant accident record. More importantly, these economic incentives are inadequate where health problems are concerned because occupational diseases are not often diagnosed as workplace related.
Occupational diseases often have complex origins; many years may elapse between exposure and the appearance of symptoms, making physicians and compensation boards reluctant to attribute the symptoms to time spent with specific employers or to the exposure to particular working conditions.1 A third approach evolved to cope with occupational safety and health problems; industry-based organizations filled the vacuum by producing guidelines for safe work practices for various types of industrial equipment and processes and for “acceptable” exposure limits to certain harmful substances. These “consensus standards” were adopted by the Occupational Safety and Health Administration (OSHA) in 1972 as federal standards. Thus a long series of private, voluntary efforts and a slowly evolving pattern of government initiatives (e.g., the Walsh-Healey Act [1936], which authorized sanctions against federal contractors who violated standards) tested a variety of approaches to improving safety and health. These experiences served as the basis for broad federal legislation. As legislators had a record of approaches that had not
worked, it became clear that voluntary-compliance approaches and consensus guidelines would have to be backed by a technically experienced federal enforcement staff and that inadequate workplace safety and health efforts at the state level would have to be reshaped to meet national standards of effectiveness. The economic realities of the marketplace had overwhelmed voluntary efforts and the weak incentives of workers’ compensation programs, and the states appeared unable to act effectively because of the need to compete among themselves for industry and jobs.2
The Occupational Safety and Health Act (OSH Act) was signed into law in 1970 and was designed to address workplace hazards faced by private-sector employees in most industries, including manufacturing, services, construction, and agriculture. The Federal Mine Safety and Health Act of 1977 (Mine Act) is a comparable law aimed specifically at workers employed in underground and surface coal, metal, and nonmetal mining operations.3 Both laws feature a strong standards-setting authority vested in the Secretary of Labor. The standards-setting process was open to labor, industry, and public inputs at all stages. The word “standard” connotes uniformity, consensus, and regulatory power. OSHA and MSHA standards are an attempt, through the federal government’s regulatory powers, to set a minimum level of protection for workers against specified hazards and to achieve that level through enforcement, education, and persuasion. Sections 6 and 3(8) of the OSH Act govern the standards-setting process. They contain three major schemes under which standards can be promulgated: (a) a short-lived authority for adoption of existing consensus standards, (b) development and promulgation of new or amended standards, and (c) promulgation of temporary emergency standards.
Likewise, the Mine Act includes provisions allowing the agency to promulgate new and emergency temporary standards and to amend existing standards; however, no authority was given to MSHA to adopt consensus standards. Instead, Congress stipulated by statute a number of interim standards, including mandates for mine operators to reduce coal mine dust levels to prescribed levels, offer chest radiographs to underground coal miners to detect pneumoconiosis, follow roof control and ventilation plans, and conduct safety exams and methane checks on every workshift.
CONSENSUS STANDARDS
At the time the OSH Act was passed, a large body of consensus standards was already in existence, developed as guidelines by such groups as the American National Standards Institute (ANSI), the
National Fire Protection Association (NFPA), and the American Conference of Governmental Industrial Hygienists (ACGIH). The standards represented industry’s agreement on certain reasonable exposures, work practices, and equipment specifications. To establish as rapidly as possible a body of occupational safety and health rules already familiar to employers, Congress required adoption of these standards, but recognized that many were seriously out of date. The legislative history of the OSH Act emphasized that the standards would need to be constantly improved, and recognized that new standards were especially needed to prevent occupational illnesses. Many of the consensus standards contained provisions that were irrelevant to safety and health (e.g., several pages of specifications for the wood to be used in ladders). The standards were adopted wholesale, however, without significant deletions, in the interest of speed. Competing priorities made it impossible to evaluate and amend the body of standards within the 2-year deadline allowed by Congress. Thus, OSHA began with initial standards derived from previous industry use, which had these key weaknesses: they were unduly complex and obsolete (one standard, for example, prohibited the use of ice in drinking water, a rule that dated from a time when ice was cut from contaminated rivers); certain standards were only tangentially related to the safety or health of workers (for example, the requirement for coat hooks in toilet stalls); the consensus standards were guidelines, not designed for enforcement and the adjudicatory process, so provisions that should have been advisory became inflexible law; and threshold limit values reflected industry consensus as to acceptable practice, were not necessarily designed for the greatest protection of workers, and often lacked documentation. By 1978, OSHA removed the most inappropriate of these rules from the books.
At that time, 1110 standards provisions were proposed for deletion; after participation by labor and the business community, 927 were finally eliminated.
PERMANENT STANDARDS
The authority for setting permanent workplace safety and health standards is provided in Section 6(b) of the OSH Act and Section 101 of the Mine Act, and follows a multistep process:
1. Initiating the standards development project: The Secretary of Labor may begin the process on the basis of recommendations from the National Institute for Occupational Safety and Health (NIOSH) or other governmental agencies, petitions of private parties, research findings from any source, accident and injury data, congressional input, or court decisions.
2. Drafting the proposal: Agency staff assemble all the supporting documents, draft the preamble and regulatory text, and prepare economic and environmental impact statements to fulfill the requirements of the National Environmental Policy Act of 1969, the Regulatory Flexibility Act, the Small Business Regulatory Enforcement Fairness Act, the Paperwork Reduction Act, and various Presidential Executive Orders.
3. Appointing an advisory committee: An advisory committee may, at the discretion of the Secretary, be formed to provide expertise and guidance to the agency. The statute prescribes its composition, which includes representatives of labor and industry, the safety and health professions, and recognized experts from government or the academic world.
4. Revising and reviewing the draft proposed rule: Agency scientists, engineers, and technical experts prepare the proposed rule, in consultation with attorneys from the Solicitor’s Office, and may seek review by other agencies. In most instances, a proposed rule must also be approved by the White House’s Office of Management and Budget (OMB).
5. Publishing the proposal in the Federal Register: The public is invited to comment. The broad issues debated during the
comment period include the agency’s determination that (a) the hazard presents a significant risk to workers; (b) the proposed standard will substantially reduce the risk; and (c) the means of compliance are technologically and economically feasible.
6. Conducting informal hearings: Nearly always, public hearings are held to allow further public comment.
7. Analyzing the rulemaking record: Following the public hearings and after the end of the public comment period, the staff analyzes the entire rulemaking record. Major issues requiring policy decisions are defined and presented to the assistant secretary. Alternate approaches, if appropriate, are presented.
8. Preparing the final standard: The staff develops a proposed final standard based on the record of rule making and submits the document for internal review and final approval by the assistant secretary and OMB.
9. Publishing the final rule: The completed final standard is published in its entirety in the Federal Register. An interested party may challenge the validity of the standard by filing, within 60 days, a petition for review of the standard in a federal circuit court of appeals.
This process is only an outline. The length of time between steps may stretch for months or years. At times, proposals are abandoned after first hearings or public comment, and the decision to proceed with a rule making is reevaluated. If appropriate, an entirely new proposal is developed.
TEMPORARY EMERGENCY STANDARDS
Both the OSH Act and Mine Act give the Secretary of Labor authority to issue “. . . an emergency temporary standard to take immediate effect upon publication in the Federal Register if employees are exposed to grave danger from exposure to substances or agents determined to be toxic or physically harmful or from new hazards.” These standards are promulgated without the extensive public participation characteristic of permanent standards. The statutes require that an emergency temporary standard (ETS) be replaced with a permanent standard within 6 months. An ETS may be used as a “proposed standard” in the permanent standards proceedings. This provision of the act has been chilled by unfavorable court decisions. The last time OSHA issued an ETS was a 1983 update to its asbestos standard, but the emergency rule faced a legal challenge and was rejected by the court.4 As a result, OSHA was required to proceed with its usual notice-and-comment rulemaking.
NATURE OF OSHA AND MSHA STANDARDS
OSHA standards are written to control risks even if exposure continues throughout a person’s working life. The effectiveness of the technology available for controlling exposures and the characteristics of the hazard in the particular workplace determine how compliance with the standard will be achieved. The standards are variable in several areas. The technical content necessarily differs according to the hazard being regulated, although it is possible to group related problems in a single standard. A specification approach or a performance approach may be employed, or the two approaches may be combined. Specification standards tell precisely what protection an employer must provide. This approach has been used most often in developing safety standards. The advantage of specification standards is that they tell the employer exactly what must be done to “be in compliance” with the regulation. The disadvantage of specification standards is that they tend to be inflexible and may restrict an employer’s efforts to provide equivalent protection using alternative—and sometimes more satisfactory—methods. In certain instances, employers may be granted
a variance if the agency determines that the alternative means of compliance will provide equivalent protection to workers. The trend in OSHA and MSHA regulations is toward performance standards that set an exposure limit but leave the means of compliance largely to the decision of the employer. This greater degree of flexibility allows the employer to consider alternative methods and equipment and choose those most suited to their particular industry and worksite. Performance standards, however, do not give the employer carte blanche to substitute less effective means of protection (such as personal protective equipment) for engineering controls of dangerous emissions or other hazards. Health standards are generally addressed by way of the performance, rather than the specification, approach. While many large corporations prefer performance standards in both the safety and health areas, small employers tend to prefer more specification in workplace standards. In some instances, OSHA publishes an appendix to the standard which provides employers an acceptable “specific” method of compliance.5
HAZARDOUS WASTE AND EMERGENCY RESPONSE
An estimated 1.8 million workers are potentially exposed to hazardous waste or toxic materials as a routine part of their jobs or from spills or emergency incidents. This includes firefighters, police officers, and emergency responders. In 1986, Congress amended the Resource Conservation and Recovery Act of 1976 (RCRA) and the Comprehensive Environmental Response, Compensation and Liability Act of 1980 (CERCLA) under a law entitled the Superfund Amendments and Reauthorization Act (SARA). Among other things, SARA required OSHA to issue standards to ensure the health and safety of employees engaged in hazardous waste operations.6 OSHA’s standard, often referred to as HAZWOPER,7 is designed to protect workers in several distinct settings: cleanup operations at uncontrolled hazardous waste-disposal sites; cleanup at recognized hazardous waste sites (e.g., EPA Superfund sites); routine operations at hazardous waste treatment, storage, and disposal facilities; and emergency response activities at sites where hazardous substances have been or may be released. The HAZWOPER standard requires employers to develop and implement a written safety and health program that identifies, evaluates, and establishes a means to control workers’ exposure to hazards at these sites. Moreover, the rule contains explicit training requirements, including at least 40 hours of initial training and three days of field experience for workers who are directly involved in cleanup work, and 24 hours of initial training and one day of field experience for workers who are occasionally at these sites. Annual refresher training is also required. Persons conducting the training must issue written certificates confirming that the student successfully completed the training, and anyone who does not have this written certification is prohibited from working at a hazardous waste operation.

THRESHOLD LIMIT VALUES, PERMISSIBLE EXPOSURE LIMITS, AND ACTION LEVELS
Older occupational health standards (still used in developing countries) were based on threshold limit values (TLVs) developed by the ACGIH. In this system, maximum exposure limits were usually set based on the level of a contaminant known to produce acute effects, allowing some margin for safety and considering what was readily achievable by employers. Unfortunately, such limits do not protect against long-term chronic or subclinical effects on the body, such as changes in blood chemistry, liver function, or the reaction time of the central nervous system. In addition, these values were derived mainly for healthy, young, adult white males, not for the diverse makeup of working populations. In addition, TLVs were not designed to address the problem of irreversible health problems such as cancer.
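Limits of this kind are typically expressed as 8-hour time-weighted averages (TWAs). A minimal sketch of the calculation follows; the sampling results, the contaminant, and the 50 µg/m³ limit are all hypothetical values chosen for illustration:

```python
# Sketch: comparing one shift's air-sampling results against an assumed
# 8-hour time-weighted-average (TWA) exposure limit. All data hypothetical.

# Personal air samples: (duration in hours, measured concentration in µg/m³)
samples = [(2.0, 30.0), (3.0, 65.0), (3.0, 55.0)]

TWA_LIMIT = 50.0  # assumed 8-hour TWA limit for a hypothetical contaminant

def eight_hour_twa(samples):
    """TWA = sum(C_i * T_i) / 8; unsampled time counts as zero exposure."""
    return sum(conc * hours for hours, conc in samples) / 8.0

twa = eight_hour_twa(samples)
print(f"8-hour TWA: {twa:.1f} µg/m³ (limit {TWA_LIMIT}); exceeded: {twa > TWA_LIMIT}")
```

With these assumed samples the TWA works out to 52.5 µg/m³, so the hypothetical limit of 50 µg/m³ would be exceeded even though one individual sample (30 µg/m³) was well below it; this is why full-shift sampling, not spot readings, underlies compliance determinations.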
Occupational Safety and Health Standards
Permissible exposure limits (PELs) are used in OSHA health standards. PELs are based on consideration of the health effects of hazardous substances. The lead standard, for example, contains a PEL of 50 µg of lead per cubic meter of air, averaged over an 8-hour period. In 1989, OSHA attempted to establish PELs for 164 unregulated substances and to update the limits for 212 toxic air contaminants that were originally adopted by OSHA in 1971.8 OSHA’s PEL update faced a strong legal challenge, and the Court of Appeals eventually vacated the rule.9 As a result, most of the PELs currently enforced by OSHA are outdated and do not reflect current scientific knowledge of the health effects of these contaminants.

MEDICAL REMOVAL PROTECTION
Medical removal protection (MRP) is a protective, preventive health mechanism complementing the medical surveillance portion of some OSHA standards. The lead standard, for example, calls for temporary removal for medical purposes of any worker having an elevated blood lead level. During the period of removal, the employer must maintain the worker’s earnings, seniority, and other employment rights and benefits as though the worker had not been removed. Under the Mine Act, coal miners with chest x-ray evidence of pneumoconiosis, as determined by NIOSH, are given the option of transferring to a less dusty job and maintaining their regular rate of pay. Medical removal protection is essential; without it, the major cost of health hazards falls directly on the worker and the worker’s family in the event of illness, death, or lost wages. Without a requirement for the protection of workers’ wages and job rights, removal could easily take the form of transfer to a lower-paying job, temporary layoff, or termination. A worker who participates in the medical surveillance program might risk losing his or her livelihood. The alternative has sometimes been to resist participation and, thereby, lose the protection that surveillance offers. An interesting leveraging effect of MRP is its role as an economic incentive for employers to comply with the workplace standards. For example, employers who do not comply with the lead standard will have a greater number of removals and thus will have higher labor costs over a long period, while employers who invest in control technology will experience savings from lowered removal costs.

COMPLIANCE
To comply with the PELs, employers first conduct an industrial hygiene survey, including environmental sampling. This process identifies contaminants, their sources, and the severity of exposure. The employer then devises methods to reduce exposure to permissible levels. Methods commonly employed by industrial hygienists to control exposures fall into three basic categories: engineering controls, work practice controls (including administrative controls), and personal protective equipment. Engineering controls employ mechanical means or process redesign to reduce exposure. The contaminant may be eliminated, contained, diverted, diluted, or collected at the source. Examples of this type of control include process isolation or enclosure, such as is used in uranium fuel processing. Employee isolation or machine and process enclosure are also used to protect workers from excessive fumes or noise. Closed material-handling systems, product substitution, and exhaust ventilation are also commonly employed. Work practice controls rely on employees to perform certain activities in a carefully specified manner so that exposures are reduced or eliminated. For example, employers may instruct workers to keep lids on containers, to clean up spills immediately, or to observe specific, required hygiene practices. Such work practices are often required to complement engineering controls. This is particularly true in cases where engineering controls cannot provide complete compliance with the standard. Noise hazards are often controlled by a combination of
Environmental Health
engineering steps and work practices limiting the amount of time workers are exposed to excessive noise levels. Personal protective equipment controls exposure by isolating the employee from the emission source. Respirators are a common type of personal protective equipment, used when protection from an inhaled contaminant is required. Personal protective equipment is used to supplement engineering controls and work practices. Often overlooked is the great importance of personal hygiene, which includes the use of protective clothing to shield both the worker and the worker’s family, the provision of shower facilities, and the cleaning of protective clothing so that contaminants are not transferred to others. Engineering control is the best method for effective and reliable control of worker exposure to many substances. It acts at the source of the emission and eliminates or reduces employee exposure without reliance on self-protective action by the employee. Work practices also act on the source of the emission, but rely on employee behavior, which requires supervision, motivation, and education for effectiveness. While personal protective equipment provides a cheaper alternative to engineering controls, it does so at the expense of safety and reliability. The equipment does not eliminate the source of the exposure, often fails to provide the degree of protection required (or fails to provide it with certainty in all cases), and may create additional hazards by interfering with vision, hearing, and mobility. Individual differences in employees also affect the acceptability of personal protective equipment. For example, some employees develop infections from some ear-protection devices and respirator facepieces, and some who have impaired breathing cannot safely or comfortably use respirators.
Additionally, personal protective equipment is made in standard sizes and facial configurations that may not properly fit female workers and unusually large or small workers. OSHA should progress from a reactive priority-setting system to an information-based approach. Highest priority must be given to hazards that cause irreversible adverse health effects. Court decisions have required the agency to establish a “reasonably necessary” approach, that is, to determine the number of workers affected and the number protected by a new regulation. This has been translated into a risk assessment requirement. For example, OSHA’s cancer policy10 could be modified to increase the speed with which particular carcinogens are regulated, with priorities shaped according to the population of workers exposed, current exposure levels, and the potency of a substance. Consideration should be given to the ways in which these substances are used in actual operations and to the likelihood of substantial accidental exposures. These same criteria can be applied to other health hazards. In the safety standards area, a parallel process must occur, which should include guidance in the establishment of standards for reducing deaths due to inappropriately designed lock-out procedures, for reducing musculoskeletal injuries, and for controlling the development of stress-related diseases associated with newer technologies. Development of so-called generic standards (for example, hazard identification) extends protection to many workers. These types of standards are difficult to promulgate because of the divergent industrial sectors and numerous employers coming under the regulation. Critics of occupational safety and health standards encourage the use of theoretical economic models based on cost-benefit analysis. Common sense indicates that the numbers of workers exposed, the severity of hazards, and the technological feasibility must be considered in setting standards.
These factors should be explicit in OSHA’s priority-setting processes. Precise costs and benefits, however, cannot be measured. The costs of standards compliance can be estimated with some precision. New equipment, engineering modifications, and work practices have readily measurable costs. Industry, however, sometimes overestimates these costs by orders of magnitude in its testimony against standards: actual costs for vinyl chloride standards compliance turned out to be but a fraction of those indicated in public testimony.11 More recently, even with the thoroughly worked and reworked estimates of the costs to comply with the cotton dust standard, it appears
that costs were overestimated by both the government and industry. OSHA has never had the authority to require facilities to open their financial books in preparing economic feasibility impact studies, so it must be content with voluntarily divulged economic data. The benefits of regulation, however, are more difficult to calculate. One cannot count the accidents that were avoided the way one can count the accidents and injuries that actually occurred. One cannot precisely identify the health benefits that will accrue in 10, 20, or 30 years from current reduced exposures to toxic substances or carcinogens. The data for prediction do not exist, and causality mechanisms in occupational disease are too complex to be defined with the same certainty as the costs of a new ventilating system. The largest problem with cost-benefit analysis, however, is not lack of information; it is the impossibility of weighing lives spared against the dollar costs of prevention. Workers are coming to realize that hazardous-pay differentials are in fact based on a dangerously false assumption that lives can be valued and, in effect, “prorated” on a cash basis. Public debate over regulatory costs can begin to clarify this issue and to uncover the hidden social costs of failure to regulate out of deference to faulty labor market mechanisms. These hidden social costs include not only loss of life and health of workers, but also increased incidence of illness and death among families of workers exposed to some substances such as lead and asbestos, and disruption of family and community life due to death and disability of workers and to local environmental effects of industrial contaminants.12

GLOBAL STANDARDS
Particularly important for OSHA, MSHA, and NIOSH is participation in international occupational health and safety forums to achieve full awareness of available research and enforcement experience, including that of the Commission of the European Communities, the International Labor Organization, the World Health Organization, and many foreign national governments. It is critical that the United States share information internationally and encourage other nations to adopt effective health and safety standards. Without comparable standards in other countries, U.S. industries can choose to export hazardous processes such as asbestos milling or pesticide formulation. This is doubly unacceptable because it not only exposes foreign workers to hazardous conditions but also tends to export jobs along with the hazards. Indeed, failure to participate in global efforts for health and safety standards could set U.S. workers back if U.S. occupational safety and health standards are considered a barrier to free trade under trade agreements, for example, the North American Free Trade Agreement (NAFTA) or the Central American Free Trade Agreement (CAFTA).

CONCLUSION
Standards alone will not guarantee healthful, safe working conditions. Enforcement inspections to determine whether compliance exists are essential. Training and education of workers and employers are also necessary. Government cannot provide direct, constant enforcement of employee protection; this effort must be assisted by employer and employee participation. Workers’ rights to a safe and healthful workplace are facilitated in part by the existence of employer standards and by federal and state enforcement activities, but most of all by the workers’ own knowledge and vigilance. The OSH Act and Mine Act recognize this fact. These statutes reinforce workers’ rights, with guarantees against reprisals by employers, when workers file complaints and obtain abatement of health and safety hazards. Whether improvements come from voluntary employer action, from direct enforcement, or from labor-management negotiations, health and safety standards are essential to define the necessary levels of protection and the acceptable means of attaining them.
REFERENCES

1. Protecting the Health of Eighty Million Americans: A National Goal for Occupational Health. Special Report to the Surgeon General of the United States Public Health Service; 1965.
2. Page JA, O’Brien M. Bitter Wages. New York: Grossman Publishers; 1973. See also Chapter 1: Evolution of the Occupational Safety and Health Act of 1970. In: Mintz BW. OSHA: History, Law, and Policy. Washington, DC: The Bureau of National Affairs; 1984.
3. The precursor to the Mine Act of 1977 was the Federal Coal Mine Health and Safety Act of 1969 (Public Law 91-173), which also included provisions for health and safety standards.
4. In November 1983, OSHA published an ETS to immediately reduce the permissible exposure limit for asbestos from 2.0 fibers/cc to 0.5 fibers/cc. The ETS was challenged in the U.S. Court of Appeals for the 5th Circuit. In March 1984, the Appeals Court ruled that the ETS was invalid and OSHA was prohibited from enforcing it. A final rule promulgated through the normal rulemaking process was published in June 1986.
5. 29 Code of Federal Regulations 1910.1001, Asbestos, Appendix F: Work practices and engineering controls for automotive brake and clutch inspection, disassembly, repair, and assembly; 29 Code of Federal Regulations 1910.269, Electric Power Generation, Transmission, and Distribution, Appendix B: Working on Exposed Energized Parts.
6. Section 126, Superfund Amendments and Reauthorization Act of 1986.
7. 29 Code of Federal Regulations, Part 1910.120.
8. Final Rule on Air Contaminants. Federal Register. 1989;54:2332.
9. AFL-CIO v. OSHA, 965 F.2d 962; 1992.
10. In 1980, OSHA published a Carcinogen Policy that was designed to expedite the process for issuing health standards for carcinogenic substances. Identification, Classification, and Regulation of Potential Occupational Carcinogens. 45 Federal Register 5002; January 22, 1980. Federal Register. 1983;48:241.
11. Gauging Control Technology and Regulatory Impacts in Occupational Safety and Health: An Appraisal of OSHA’s Analytic Approach. U.S. Office of Technology Assessment, Report No. OTA-ENV-635; September 1995.
12. Heinzerling L, Ackerman F. Priceless: On Knowing the Price of Everything and the Value of Nothing. New York: The New Press; 2004.
Ensuring Food Safety
47
Douglas L. Marshall • James S. Dickson
INTRODUCTION
The objective of food processing and preparation is to provide safe, wholesome, and nutritious food to the consumer. The responsibilities for accomplishing this objective lie with every step in the food chain, beginning with food production on the farm and continuing through processing, storage, distribution, retail sale, and consumption. Producing safe food is a continuum in which each party has certain obligations to meet and certain reasonable expectations of the other parties involved in the process. No single group is solely responsible for producing safe food, and no single group is without obligations in assuring the safety of food. Food producers have a reasonable expectation that the food they produce will be processed in such a manner that further contamination is minimized. Food producers are an integral part of the food production system, but are not solely responsible for food safety. It is not practical to deliver fresh unprocessed food that is completely free of microorganisms, whether the food in question is of animal or plant origin. The environment in which the food is produced precludes the possibility that uncontaminated food can be grown or produced. However, appropriate methods can be utilized to reduce, to the extent possible, this level of background contamination. These methods are referred to as “Good Agricultural Practices” (GAPs).1 Likewise, producers have an obligation to use these same reasonable practices to prevent hazards from entering the food chain. As an example, when dairy cattle are treated with antibiotics for mastitis, producers have an obligation to withhold milk from those animals from the normal production lot. Milk from these animals must be withheld for the specified withdrawal time so that antibiotic residues will not occur in milk delivered to dairies. In contrast, production of salmonellae-free poultry in the United States has been an elusive goal for poultry producers.
While it is not a reasonable expectation for producers to deliver salmonellae-free birds to poultry processors, it is reasonable to expect producers to use good livestock management practices to minimize the incidence of Salmonella within a flock. Food processors have reasonable expectations that raw materials delivered to the processing facility are of reasonable quality and not contaminated with violative levels of any drugs or pesticides. In addition, processors have a reasonable expectation that processed food will be properly handled through the distribution and retail chain, and that it will be properly prepared by the consumer. The latter is particularly important, as processors retain responsibility for products labeled with the processor’s name even though the food is no longer under the processor’s control once it leaves the processing facility. Processors’ obligations are to process raw foods in a manner that minimizes growth of existing microorganisms as well as additional contamination during processing. These obligations extend from general facility maintenance to the use of the best available methods and technologies to process a given food.
Clearly, consumers have an important role in the microbiological safety of foods. However, it is not reasonable to expect every consumer to have a college degree in food science or microbiology. Consumers have a reasonable expectation that the foods they purchase have been produced and processed under hygienic conditions. They also have a reasonable expectation that foods have not been held under unsanitary conditions and have not been adulterated by the addition of any biological, chemical, or physical hazards. In addition, consumers have an expectation that foods will be appropriately labeled, so that the consumer has information available on both the composition and nutritional aspects of products. These expectations are enforced by regulations that govern production, processing, distribution, and retailing of foods in the United States. The vast majority of foods meet or exceed these expectations, and the average consumer has relatively little to be concerned about regarding the food they consume. Some consumers have advocated additional expectations, which may or may not be reasonable. For example, some would argue that raw foods should be free of infectious microorganisms. Initially, this would appear to be reasonable; however, in many cases technologies or processes do not exist in a legal or practical form to assure that raw foods are not contaminated with infectious agents. Two recent examples are the outbreaks of Cyclospora epidemiologically linked to imported raspberries and of Escherichia coli O157:H7 in raw ground beef. With the exception of irradiation, technologies do not exist to assure that either of these foods would be absolutely free of infectious agents while still retaining the desirable characteristics associated with raw food. Therefore, in some cases, the expectation that raw foods should be free of infectious agents may not be reasonable. Consumers have several obligations regarding food safety.
As part of the food production-to-consumption chain, consumers have obligations similar to those of food processors: namely, not holding foods under unsanitary conditions prior to consumption and not adulterating foods with biological, chemical, or physical agents. Improper food handling can increase food-borne illness risks by allowing infectious bacteria to increase in numbers or by allowing cross contamination between raw and cooked foods. In addition, consumers have an obligation to use reasonable care in preparing foods for consumption, as do personnel in food service operations. As an example, consumers should cook poultry until it is “done” (internal temperature at or above 68°C) to eliminate any concerns with salmonellae. Consumer education on the basics of food safety in the home should be a priority. Every consumer should understand that food is not sterile, and the way food is handled in the kitchen may affect the health of individuals consuming it. Although our long-term goal is to reduce or eliminate food-borne disease hazards, in the near term we need to remind consumers of what some of the potential risks are and how consumers can avoid them. In the end, it is the consumer who decides what they will or will not consume.
Copyright © 2008 by The McGraw-Hill Companies, Inc.
COMMON FOOD-BORNE DISEASE HAZARDS
Contrary to popular consumer perception about the risk of chemicals in foods, the major hazards associated with food-borne illness are clearly of biological origin.2 The Centers for Disease Control and Prevention (CDC) has published summaries of food-borne diseases by etiology for the years 1993 through 1997 (Table 47-1).3 CDC groups food-borne disease agents into four categories: bacterial, parasitic, viral, and chemical. In greater than 95% of all reported outbreaks, food-borne illnesses were caused by microorganisms or their toxins. Fully 97% of reported cases are likewise linked to a microbial source. Only around 3% of the outbreaks and less than 1% of cases can be truly linked to chemical (heavy metals, monosodium glutamate, and other chemicals) contamination of foods. Furthermore, 97% of reported deaths are due to microbial sources. These data are from reported outbreaks. CDC estimates for the actual number of cases of food-borne disease caused by microbial agents are much higher due to underreporting (Table 47-2).
Bacterial agents are by far the leading cause of reported illness, with total numbers estimated as high as 76 million cases per year and deaths as high as 5000 annually in the United States.4 Costs are estimated to be $9.7 billion annually in medical expenses and lost productivity in the United States.4 The high incidence of food-borne disease is paralleled in other developed countries.5 Enteric viruses are now recognized as the leading cause of food-borne infections, although the bacteria are better known. Predominant bacterial agents are Campylobacter spp., Salmonella spp., Shigella spp., and Clostridium perfringens. Food-borne bacterial hazards are classified based on their ability to cause infections or intoxications. Food-borne infections are usually the predominant type of food-borne illness reported. Food-borne outbreaks most often occur with foods prepared at food service establishments and at home (Table 47-3). Improper holding temperatures and poor personal hygiene are the leading factors contributing to reported outbreaks (Table 47-4). Bacterial hazards are further classified based upon the severity of risk.6 Severe hazards are those capable of causing widespread
TABLE 47-1. REPORTED FOOD-BORNE DISEASES IN THE UNITED STATES, 1993–1997∗

                                 Outbreaks           Cases            Deaths
Etiologic Agent                  No.      %       No.       %       No.      %
Bacterial
  Bacillus cereus                 14     0.5        691     0.8       0    0.0
  Brucella                         1     0.0         19     0.0       0    0.0
  Campylobacter                   25     0.9        539     0.6       1    3.4
  Clostridium botulinum           13     0.5         56     0.1       1    3.4
  Clostridium perfringens         57     2.1       2772     3.2       0    0.0
  Escherichia coli                84     3.1       3260     3.8       8   27.6
  Listeria monocytogenes           3     0.1        100     0.1       2    6.9
  Salmonella                     357    13.0     32,610    37.9      13   44.8
  Shigella                        43     1.6       1555     1.8       0    0.0
  Staphylococcus aureus           42     1.5       1413     1.6       1    3.4
  Streptococcus, Group A           1     0.0        122     0.1       0    0.0
  Streptococcus, other             1     0.0          6     0.0       0    0.0
  Vibrio cholerae                  1     0.0          2     0.0       0    0.0
  Vibrio parahaemolyticus          5     0.2         40     0.0       0    0.0
  Yersinia enterocolitica          2     0.1         27     0.0       1    3.4
  Other bacterial                  6     0.2        609     0.7       1    3.4
  Total bacterial                655    23.8     43,821    50.9      28   96.6
Parasitic
  Giardia lamblia                  4     0.1         45     0.1       0    0.0
  Trichinella spiralis             2     0.1         19     0.0       0    0.0
  Other parasitic                 13     0.5       2261     2.6       0    0.0
  Total parasitic                 19     0.7       2325     2.7       0    0.0
Viral
  Hepatitis A                     23     0.8        729     0.8       0    0.0
  Norwalk/Norwalk-like             9     0.3       1233     1.4       0    0.0
  Other viral                     24     0.9       2104     2.4       0    0.0
  Total viral                     56     2.0       4066     4.7       0    0.0
Chemical
  Ciguatoxin                      60     2.2        205     0.2       0    0.0
  Heavy metals                     4     0.1         17     0.0       0    0.0
  Monosodium glutamate             1     0.0          2     0.0       0    0.0
  Mushrooms                        7     0.3         21     0.0       0    0.0
  Scombrotoxin                    69     2.5        297     0.3       0    0.0
  Shellfish                        1     0.0          3     0.0       0    0.0
  Other chemical                   6     0.2         31     0.0       0    0.0
  Total chemical                 148     5.4        576     0.7       0    0.0
Unknown etiology                1873    68.1     35,270    41.0       1    3.4
Grand total                     2751   100.0     86,058   100.0      29  100.0

∗Olsen SJ, MacKinon LC, Goulding JS, Bean NH, Slutsker L. Surveillance for foodborne disease outbreaks—United States, 1993–1997. MMWR. 2000;49(SS01):1–51.
TABLE 47-2. REPORTED AND ESTIMATED∗ ILLNESSES, FREQUENCY OF FOOD-BORNE TRANSMISSION, AND HOSPITALIZATION AND CASE-FATALITY RATES FOR KNOWN FOOD-BORNE PATHOGENS, UNITED STATES†

                                  Estimated      % Food-Borne   Hospitalization   Case-Fatality
Disease or Agent                  Total Cases    Transmission   Rate              Rate
Bacterial
  Bacillus cereus                      27,360        100           0.006             0.0000
  Botulism, food-borne                     58        100           0.800             0.0769
  Brucella spp.                          1554         50           0.550             0.0500
  Campylobacter spp.                2,453,926         80           0.102             0.0010
  Clostridium perfringens             248,520        100           0.003             0.0005
  Escherichia coli O157:H7             73,480         85           0.295             0.0083
  E. coli, non-O157 STEC               36,740         85           0.295             0.0083
  E. coli, enterotoxigenic             79,420         70           0.005             0.0001
  E. coli, other diarrheogenic         79,420         30           0.005             0.0001
  Listeria monocytogenes                 2518         99           0.922             0.2000
  Salmonella Typhi‡                       824         80           0.750             0.0040
  Salmonella, nontyphoidal          1,412,498         95           0.221             0.0078
  Shigella spp.                       448,240         20           0.139             0.0016
  Staphylococcus food poisoning       185,060        100           0.180             0.0002
  Streptococcus, food-borne            50,920        100           0.133             0.0000
  Vibrio cholerae, toxigenic               54         90           0.340             0.0060
  V. vulnificus                            94         50           0.910             0.3900
  Vibrio, other                          7880         65           0.126             0.0250
  Yersinia enterocolitica              96,368         90           0.242             0.0005
  Subtotal                          5,204,934
Parasitic
  Cryptosporidium parvum              300,000         10           0.150             0.005
  Cyclospora cayetanensis              16,264         90           0.020             0.0005
  Giardia lamblia                   2,000,000         10           n/a               n/a
  Toxoplasma gondii                   225,000         50           n/a               n/a
  Trichinella spiralis                     52        100           0.081             0.003
  Subtotal                          2,541,316
Viral
  Norwalk-like viruses             23,000,000         40           n/a               n/a
  Rotavirus                         3,900,000          1           n/a               n/a
  Astrovirus                        3,900,000          1           n/a               n/a
  Hepatitis A                          83,391          5           0.130             0.0030
  Subtotal                         30,883,391
Grand Total                        38,629,641

∗Numbers in italics are estimates; others are measured.
†Data from http://www.cdc.gov/ncidod/eid/vol5no5/mead.htm and http://www.cdc.gov/epo/mmwr/preview/mmwrhtml/ss4901a1.htm.
‡More than 70% of cases acquired abroad.
epidemics. Moderate hazards are those that have the potential for extensive spread, with possible severe illness, complications, or sequelae in susceptible populations. Mild hazards can also cause outbreaks but have limited ability to spread. Those involved with food production, processing, and service should pay careful attention to controlling these biological hazards by (a) destroying or minimizing the hazard, (b) preventing contamination of food with the hazard, or (c) inhibiting growth or preventing toxin production by the hazard. Control steps will follow in later sections of this chapter. When investigating food-borne disease outbreaks, the most important factor is time.7 Prompt reporting of an outbreak is essential to identifying implicated foods and stopping potentially widespread epidemics. Initial work in the investigation should be inspection of the premises where the outbreak occurred. Look for obvious sources, including sanitation and worker hygiene. Food preparation, storage, and serving should be carefully monitored. Interview
those involved in the outbreak. Obtain case histories of victims and healthy individuals. Discuss health history and work habits of food handlers. Collect appropriate specimens for laboratory analysis, including stool samples, vomitus, and swabs of rectum, nose, and skin. Attempt to collect suspect foods, including leftovers or garbage if necessary. Specific tests for pathogens or toxins will depend on potential etiological agents and food type. Analysis of data should include case histories, illness specifics (incubation time, symptoms, and duration), lab results, and attack rates. All food-borne disease outbreaks should be reported to local and state health officers and to the CDC.
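The food-specific attack-rate comparison at the heart of such an analysis can be sketched as follows; the food items and the ill/well counts are hypothetical examples, not data from any actual outbreak:

```python
# Sketch: food-specific attack rates in a retrospective outbreak cohort.
# For each food: ((ill, well) among those who ate it, (ill, well) among those who did not).
# All counts are hypothetical.
foods = {
    "potato salad": ((46, 14), (4, 36)),
    "roast beef":   ((25, 25), (25, 25)),
}

def attack_rate(ill, well):
    """Attack rate = number ill / total persons in the group."""
    return ill / (ill + well)

for food, ((ill_e, well_e), (ill_u, well_u)) in foods.items():
    ar_exposed = attack_rate(ill_e, well_e)
    ar_unexposed = attack_rate(ill_u, well_u)
    risk_ratio = ar_exposed / ar_unexposed  # a high ratio implicates the food
    print(f"{food}: exposed {ar_exposed:.0%}, unexposed {ar_unexposed:.0%}, "
          f"risk ratio {risk_ratio:.1f}")
```

Under these assumed counts, potato salad shows a high attack rate among those who ate it and a low rate among those who did not (risk ratio about 7.7), while roast beef shows no difference (risk ratio 1.0); that contrast is the pattern an investigator looks for when implicating a vehicle.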
Bacterial Infections

Predominant bacterial infections transmitted via foods are salmonellosis, campylobacteriosis, yersiniosis, vibriosis, and shigellosis.8 Most causative agents are Gram-negative rod-shaped bacteria that are
850
Environmental Health
TABLE 47-3. PLACES WHERE FOOD-BORNE OUTBREAKS OCCURRED, 1993–1997∗ Place Home Deli, café, restaurant School Picnic Church Other Unknown
Number
Percentage
582 1185 91 34 63 664 99
21.3 43.1 3.3 1.2 2.3 24.1 3.6
∗
Olsen SJ, MacKinon LC, Goulding JS, Bean NH, Slutsker L. Surveillance for food-borne disease outbreaks—United States, 1993–1997. MMWR. 2000;49 (SS01):1–51.
inhabitants of the intestinal tract of animals. Indeed, federal and most state regulatory agencies consider foods of animal origin (meat, poultry and eggs, fish and shellfish, and milk and dairy products) potentially hazardous foods. One look at epidemiological data confirms this suspicion. That said, fresh produce (fruits and vegetables) is increasingly being implicated in outbreaks of both bacterial and viral agents.
Salmonellosis
Salmonella resides primarily in the intestinal tract of animals (humans, birds, wild animals, farm animals, and insects).9 Many people are permanent, often asymptomatic, carriers. Salmonellosis varies with species and strain, susceptibility of the host, and total number of cells ingested. Several dozen serotypes cause food-borne outbreaks. Incubation time is typically 24–36 hours, though it may be longer or shorter. Symptoms include nausea, vomiting, abdominal pain, and diarrhea, which may be preceded by headache, fever, and chills. Weakness and prostration may occur. Duration is 1–4 days with a low mortality rate (0.1%). High-risk groups, the very young and the elderly, may have a considerably higher mortality rate (3.8%).10 The condition needed for an outbreak is the ingestion of live cells (on the order of 10,000) present in the food. For high-fat foods such as chocolate, 50 cells may be a sufficient infectious dose because protective enrobement of the cells by fat allows survival in highly acidic gastric fluid during gastrointestinal transit. Foods primarily involved in outbreaks include meat, poultry, fish, eggs, and milk products. S. enteritidis may be present in raw eggs even when the shells are sound.11 Most often, the bacterium is transferred from a raw food to a processed food via cross-contamination. Control of Salmonella in foods can be accomplished in several ways. Avoidance of contamination, by using only healthy food handlers and adequately cleaned and sanitized food contact surfaces, utensils, and equipment, works best. Heat treatment of foods by cooking or pasteurization is sufficient to kill Salmonella. Refrigeration at or below 5°C is sufficient to prevent growth, as the minimum temperature for growth is 7–10°C. The prevalence of salmonellosis as a food-borne disease has prompted regulatory agencies to adopt a zero tolerance for the genus in ready-to-eat foods. Presence of the bacterium in these foods (luncheon meats, dairy products, pastries, produce, etc.) renders them unwholesome and unfit for consumption. These foods must then be destroyed or reprocessed to eliminate the pathogen.

TABLE 47-4. CONTRIBUTING FACTORS LEADING TO FOOD-BORNE OUTBREAKS, 1993–1997∗

Factor                         Number    Percentage
Improper holding temperature      938        37.0
Inadequate cooking                274        10.8
Contaminated equipment            400        15.8
Food from unsafe source           153         6.0
Poor personal hygiene             490        19.3
Other                             282        11.1

∗Olsen SJ, MacKinnon LC, Goulding JS, Bean NH, Slutsker L. Surveillance for food-borne disease outbreaks—United States, 1993–1997. MMWR. 2000;49(SS01):1–51.
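Thermal destruction of vegetative pathogens such as Salmonella by cooking or pasteurization follows approximately first-order kinetics, summarized by the decimal reduction time D (the time at a given temperature that reduces the viable population tenfold). The sketch below uses this standard model; the D-values shown are illustrative assumptions, not regulatory figures:

```python
def survivors(n0, minutes, d_value):
    """First-order thermal death: every D minutes at the reference
    temperature reduces the viable population tenfold."""
    return n0 * 10 ** (-minutes / d_value)

def time_for_log_reduction(log_cycles, d_value):
    """Process time needed for a given number of decimal reductions."""
    return log_cycles * d_value

# Illustrative numbers only: 10^6 cells/g and an assumed D = 0.5 min at
# the cooking temperature. The same arithmetic underlies the 12-D
# "botulinum cook" applied to canned low-acid foods, using D for
# C. botulinum spores at the retort temperature.
print(round(survivors(1e6, 3.5, 0.5), 4))   # ~0.1 cells/g after a 7-log kill
print(time_for_log_reduction(12, 0.2))      # 12-D process time, minutes
```

The practical point is that required process time scales linearly with the number of log reductions demanded, so a heavier initial contamination load directly lengthens the safe cooking or canning process.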
Shigellosis
Four species are associated with food-borne transmission of dysentery: S. dysenteriae, S. flexneri, S. boydii, and S. sonnei.12 The disease is characterized by an incubation period of 1–7 days (usually less than 4 days). Symptoms range from mild diarrhea to very severe diarrhea with blood, mucus, and pus. Fever, chills, and vomiting also occur. Duration is long, typically 4 days to 2 weeks. Shigella spp. have a very low infectious dose of around 10–200 cells. Foods most often associated with shigellosis are any that have been contaminated with human fecal material, with salads frequently implicated. Control is best focused on worker hygiene and avoidance of human waste.
Vibriosis
Most vibrios are obligate halophiles found in coastal waters and estuaries.13 Consequently, most food-borne outbreaks are associated with consumption of raw or undercooked shellfish (oysters, crabs, shrimp) and fish (sushi or sashimi).14 V. parahaemolyticus causes most vibriosis outbreaks in developed countries and is primarily food borne. V. cholerae is primarily water borne but has been associated with foods of aquatic origin.15 Because V. cholerae is halotolerant, it can survive and grow in nonsalt foods. Hence, the bacterium has been spread through foods of terrestrial origin in addition to nonsaline fresh water. V. vulnificus is capable of causing very serious infections leading to septicemia, with a mortality rate (30–40%) that is the highest of all food-borne infectious agents.16 Fortunately, the incidence of V. vulnificus infection is extremely low. Consumption of raw oysters harvested from warm waters (U.S. Gulf Coast) by high-risk individuals (chronic alcoholics, the severely immunocompromised) is the factor most often involved in fatalities.17 Several other Vibrio species may be pathogenic.17 The incubation period for vibriosis is 2–48 hours, usually about 12 hours. Symptoms include abdominal pain, watery diarrhea, usually nausea and vomiting, mild fever, chills, headache, and prostration. Duration is usually 2–5 days. Cholera is characterized by profuse rice-water stools. V. vulnificus infections can include septicemia and cellulitis of the extremities. Prevention of vibriosis includes cooking shellfish and fish, harvesting shellfish from approved waters, preventing cross-contamination, and chilling foods to less than 10°C.18
Escherichia coli
There are six pathogenic types of E. coli associated with food-borne illness.19 The infectious dose for most strains is high (10⁶–10⁸ cells), although for enterohemorrhagic strains it may be much lower (2–45 cells). Enteropathogenic (EPEC) strains are a serious problem in developing countries but rare in the United States. These strains are a leading cause of neonatal diarrhea in hospitals. Likewise, diffusely adherent (DAEC) and enteroaggregative (EAEC) E. coli strains are associated with childhood diarrhea. Enteroinvasive (EIEC) strains have an incubation period of 8–24 hours, with 11 hours most often seen. Symptoms are similar to Shigella infections, with bloody diarrhea lasting for several days. Enterotoxigenic (ETEC) strains are a notable cause of traveler's diarrhea. Onset of illness from these strains is 8–44 hours, with 26 hours typical. Symptoms are similar to cholera, with watery diarrhea, rice-water stools, shock, and sometimes vomiting, lasting a short 24–30 hours. Enterohemorrhagic or verotoxigenic strains (EHEC) are the most serious E. coli found in foods, especially in developed countries. E. coli O157:H7 is the predominant serotype among these shiga-like toxin-producing bacteria, although other serotypes are found. EHEC strains cause three syndromes.20,21 Hemorrhagic colitis (red, bloody stools) is usually the first syndrome seen. Hemolytic uremic syndrome (HUS), which is the leading cause of renal failure in children, is characterized by blood clots in the kidneys and can lead to coma and death in children and the elderly. Rarely, individuals may develop thrombotic thrombocytopenic purpura (TTP), which is similar to HUS but causes brain damage and has a very high mortality rate. Verotoxigenic strains have an incubation period of 3–4 days. Symptoms include bloody diarrhea and severe abdominal pain without fever. Duration ranges from 2 to 9 days. Vehicles of transmission include untreated water, cheese, salads, and raw vegetables. For O157:H7, ground beef, raw milk, and raw apple juice or cider are common vehicles. Prevention of E. coli outbreaks includes treatment of water supplies and proper cooking of food. Complete cooking of hamburgers is necessary for destruction of verotoxigenic strains.
Yersiniosis
Most environmental Yersinia enterocolitica strains are avirulent; however, pathogenic strains are often isolated from porcine or bovine foods.22 The disease is predominantly serious in the very young and the elderly and is more common in Europe and Canada than in the United States. The incubation period is 24 hours to several days, with symptoms including severe abdominal pain similar to acute appendicitis, fever, headache, diarrhea, malaise, nausea, vomiting, and chills. It is not uncommon for children involved in outbreaks to undergo unnecessary appendectomies. Duration is usually long: one week to perhaps several months. The majority of foods involved in yersiniosis outbreaks are pork and other meats. Milk, seafood, poultry, and water may also serve as vehicles. Control is achieved by adequate pasteurization and cooking and by avoiding cross-contamination. Refrigeration is not adequate because the bacterium is psychrotrophic.
Campylobacteriosis
Three species are linked to food-borne disease: C. jejuni, C. coli, and C. laridis.23 C. jejuni is most often associated with poultry, C. coli with swine, and C. laridis with shellfish. C. jejuni gastroenteritis is the most frequent infection among the bacterial agents of food-borne disease (Table 47-1). Campylobacters and the related pathogens Arcobacter spp. and Helicobacter pylori are microaerophilic and are thus sensitive both to normal atmospheric oxygen concentrations (21% O2) and to very low oxygen concentrations (less than 3%); growth is favored by 5% O2. Disease characteristics are an incubation period of 1–10 days, usually 3–5 days. Symptoms include fever, abdominal pain, vomiting, bloody diarrhea, and headache, which last from 1 day to several weeks. Relapses are common. The infectious dose is low, 10–500 cells. Foods linked to outbreaks include raw milk, animal foods, raw meat, and fresh mushrooms. Control is achieved by adequate cooking, pasteurization, and cooling and by avoiding cross-contamination. Although gastroenteritis is the predominant clinical presentation of campylobacteriosis, chronic sequelae may occur. Guillain-Barré syndrome, a severe neurological condition, and Reiter's syndrome, a reactive arthritis, are rare but serious consequences of campylobacteriosis. H. pylori is associated with chronic peptic ulcers.
Clostridium perfringens
C. perfringens is a moderate thermophile showing optimal growth at 43–47°C, with a maximum of 55°C.27 Large numbers of viable cells (>10⁸) must be consumed, which then pass through the stomach into the intestine. The abrupt change in pH from stomach to intestine triggers sporulation, which releases an enterotoxin. Furthermore, the bacterium can grow in the intestine, leading to a toxicoinfection. The illness is characterized by an incubation period of 8–24 hours. Symptoms are abdominal pain, diarrhea, and gas; a cardinal symptom is explosive diarrhea. Fever, nausea, and vomiting are rare. Duration is short, 12–24 hours. Because of the large infectious dose, foods often associated with outbreaks are cooked meats and poultry that have been poorly cooled, such as gravy (an anaerobic environment at the bottom of the pot), stews, and sauces. Outbreaks frequently occur in food service establishments where large quantities of food are made and poorly cooled. Control is best achieved by rapidly cooling cooked food to less than 7°C, holding hot foods at greater than 60°C, and reheating leftovers to greater than 71°C.
Other Bacterial Food-Borne Infections
Many other bacteria have been linked to food-borne diseases, including Plesiomonas shigelloides (raw seafood), Aeromonas hydrophila (raw seafood), Arizona hinshawii (poultry), Streptococcus pyogenes (milk, eggs), and perhaps Enterococcus faecalis.28 Their contribution to food-borne illness appears to be minimal, but they may contribute to opportunistic infections.
Nonbacterial Food-Borne Infections
Numerous infectious viruses and parasitic worms are capable of causing food-borne illness. All are easily controlled by proper heat treatment of foods. Difficulty with laboratory confirmation of viral agents as causes of food-borne illness leads to probable underreporting.29,30
Infectious Hepatitis
Hepatitis A virus is a fairly common infectious agent with an incubation period of 10–50 days, mean of 4 weeks.31 Symptoms include loss of appetite (anorexia), fever, malaise, nausea, and abdominal distress. Approximately 50% of cases develop jaundice, which may lead to serious liver damage. The duration is several weeks to months. The infectious dose is quite low, less than 100 particles. The long incubation period and duration of the disease mean that affected individuals will shed virus for a prolonged period. Foods handled by an infected worker or those that come in contact with human feces are likely vehicles (raw shellfish, salads, sandwiches, and fruits). Filter-feeding mollusks concentrate virus particles from polluted waters. Control is achieved by cooking food, stressing personal hygiene, and avoiding shellfish harvested from polluted waters.
Listeriosis
Listeria monocytogenes emerged as a cause of food-borne disease in 1981.24,25 Susceptible humans include pregnant women and their fetuses, newborn infants, the elderly, and individuals immunocompromised by cancer, chemotherapy, or AIDS. The disease has a high mortality rate (30%). The incubation period is variable, ranging from 1 day to a few weeks.26 In healthy individuals, symptoms are mild fever, chills, headache, and diarrhea. In serious cases, septicemia, meningitis, encephalitis, and abortion may occur. The duration is variable. The infectious dose is unknown, but for susceptible individuals it may be as low as 100–1000 cells. Foods associated with listeriosis are milk, soft cheeses, meats, and vegetables. Like Y. enterocolitica, the bacterium is psychrotrophic and will grow, though slowly, at refrigeration temperatures. Control is best achieved by avoiding cross-contamination and adequately cooking food.
Enteroviruses
Noroviruses, members of the calicivirus family, are now considered the leading cause of food-borne gastroenteritis in the United States.2 Other enteric viruses (Coxsackie, ECHO, rotavirus, astrovirus, parvovirus, and adenovirus) are almost certainly involved as well, but our ability to isolate them from infected consumers and foods is limited. Incubation period is typical for infectious organisms, 27–72 hours.31 Symptoms are usually mild and self-limiting and include fever, headache, abdominal pain, vomiting, and diarrhea. Duration is from 1–6 days. The infectious dose for these agents is thought to be very low, 1–10 particles. Foods associated with transmission of viral agents are raw shellfish, vegetables, fruits, and salads. Control is primarily achieved by cooking and personal hygiene.
852
Environmental Health
Parasites
Nematodes (roundworms) linked to food-borne illness in humans include Trichinella spiralis, Ascaris lumbricoides, Trichuris trichiura, Enterobius vermicularis, Anisakis spp., and Pseudoterranova spp.32 T. spiralis can invade skeletal muscle and cause damage to vital organs leading to fatalities. Incubation period of trichinosis is 2–28 days, usually 9 days. Symptoms include nausea, vomiting, diarrhea, muscle pains, and fever. Several days duration is common. Foods linked to the disease are raw or undercooked pork and wild game meat (beaver, bear, and boar). Control in pork is accomplished by (a) cooking to 60°C for 1 minute, (b) freezing at –15°C for 20 days, –23°C for 10 days, or –30°C for 6 days, or (c) following USDA recommendations for salting, drying, and smoking sausages or other cured pork products. Anisakis simplex and Pseudoterranova decipiens are found in fish and are potential problems for consumers of raw fish. The incubation period is several days with irritation of throat and digestive tract as primary symptoms. Control of these nematodes is by thoroughly cooking fish or by freezing fish prior to presenting for raw consumption. A. lumbricoides is commonly transmitted by use of improperly treated water or sewage fertilizer on crops. Cestoda (tapeworms) are common in developing countries. Examples include Taenia saginata (raw beef), Taenia solium (raw pork), and Diphyllobothrium latum (raw fish).32 Incubation period is 10 days to several weeks with usually mild symptoms including abdominal cramps, flatulence, and diarrhea. In severe cases weight loss can be extreme. Control methods are limited to cooking and freezing. Salting has been suggested as an additional control technique. Protozoa cause a large number of food-borne and waterborne outbreaks each year. 
Entamoeba histolytica, Toxoplasma gondii, Cyclospora cayetanensis, Cryptosporidium parvum, and Giardia lamblia cause dysentery-like illness that can be fatal.32 Incubation period is a few days to weeks, leading to diarrhea. Duration can be several weeks, with chronic infections lasting months to years. Foods that have contacted feces or contaminated water are common vehicles. Control is best achieved by proper personal hygiene and by water and sewage treatment.
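The pork-freezing options for Trichinella control cited earlier in this section (–15°C for 20 days, –23°C for 10 days, or –30°C for 6 days) can be encoded as a small lookup. The helper below is a hypothetical sketch for illustration, not a regulatory tool:

```python
# Freezing schedules for Trichinella control in pork, as cited in the
# text: -15 C for 20 days, -23 C for 10 days, or -30 C for 6 days.
FREEZE_SCHEDULE = [(-15.0, 20), (-23.0, 10), (-30.0, 6)]  # (deg C, days)

def required_days(freezer_temp_c):
    """Return the minimum holding time (days) for a freezer held at or
    below the given temperature; None if the freezer is too warm for
    any listed schedule."""
    days = None
    for temp, d in FREEZE_SCHEDULE:
        if freezer_temp_c <= temp:
            days = d  # colder schedules permit shorter holding times
    return days

print(required_days(-18.0))  # meets the -15 C schedule: 20 days
print(required_days(-25.0))  # meets the -23 C schedule: 10 days
print(required_days(-10.0))  # too warm for any schedule: None
```

The pattern generalizes: the colder the freezer, the shorter the required hold, and a freezer warmer than the mildest listed schedule gives no safe holding time at all.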
Staphylococcus aureus enterotoxin
Certain strains of S. aureus produce a heat-stable enterotoxin that is resistant to denaturation during thermal processing (cooking, canning, pasteurization).33 The bacterium is salt (10–20% NaCl) and nitrite tolerant, which enables survival in cured meat products (luncheon meats, hams, sausages, etc.). Conditions that favor optimum growth, that is, high-protein and high-starch foods, also favor toxin production. S. aureus competes poorly with other microorganisms, so if competitors are removed by cooking and S. aureus is introduced, noncompetitive proliferation is possible. The toxin affects the vagus nerve in the stomach, causing uncontrolled vomiting shortly after consumption (1–6 hours). Other symptoms include nausea, retching, severe abdominal cramps, and diarrhea, which clear in 12–48 hours. Fortunately, fatalities are rare. Sources of the bacterium are usually the nasal passages, skin, and wound infections of food handlers. Hence, suspect foods are those rich in nutrients, high in salt, and handled during preparation, with ham, salami, cream-filled pastries, and cooked poultry common vehicles. Control is accomplished by preventing contamination, through personal hygiene and avoidance of hand-food contact. Refrigeration below 5°C prevents multiplication, and heating foods to greater than 60°C will kill the bacterium but will not destroy the toxin. Prolific growth of the bacterium is possible in the 5–40°C range. Problems with the bacterium occur most frequently with foods prepared at home or at food service establishments where gross temperature abuse has occurred.
Prions
Prions are small, misfolded proteins found in animal nervous tissue (brain, spinal cord).31 They are capable of forming holes in the brains of affected animals, leading to neurological deficits. In cattle, prions are associated with bovine spongiform encephalopathy (BSE), and consumers of beef from affected animals are at risk of contracting the human form of the disease, called variant Creutzfeldt-Jakob disease (vCJD). Although this link is tenuous, a few human cases in Europe are thought to have resulted from consumption of contaminated nervous tissue in beef. The disease is characterized by progressive brain dysfunction ultimately leading to death. Little is known about the incubation period or the infectious dose, as this is a newly emerged condition. Meat and milk from affected animals are not considered a transmission risk.

FOOD-BORNE BACTERIAL INTOXICATIONS
Food-borne microbial intoxications are caused by a toxin in the food or production of a toxin in the intestinal tract. Normally the microorganism grows in the food prior to consumption. There are several differences between food-borne infections and intoxications. Intoxicating organisms normally grow in the food prior to consumption, which is not always true for infectious microorganisms. Microorganisms causing intoxications may be dead or nonviable in the food when consumed; only the toxin need be present. Microorganisms causing infections must be alive and viable when food is consumed. Infection-causing microorganisms invade host tissues, and symptoms usually include headache and fever. Toxins usually do not cause fever, and toxins act by widely different mechanisms.
Bacillus cereus enterotoxin
This spore-forming bacterium produces a cell-associated enterotoxin that is released when cells lyse upon entering the digestive tract.34 There are two distinct disease syndromes. The diarrheal syndrome occurs 8–16 hours after consumption. Symptoms include abdominal pain and watery diarrhea, with vomiting and nausea rarely seen. Duration is a short 12–24 hours. Foods linked to transmission of this syndrome are pudding, sauces, custards, soups, meat loaf, and gravy. The second, emetic, syndrome is similar to S. aureus intoxication. The incubation period is very short, 1–5 hours. Symptoms commonly are nausea and vomiting, with diarrhea rare. Duration again is short, less than 1 day. This syndrome is commonly linked to consumption of fried rice from Oriental restaurants. Other foods include mashed potatoes and pasta. The infectious dose for both syndromes is thought to be at least 500,000 cells. Because the bacterium forms spores, prevention of outbreaks depends on proper temperature control. Hot foods should be held at greater than 65°C, leftovers should be reheated to greater than 72°C, and chilled foods should be quickly cooled to less than 10°C.
Botulism
This rare disease is caused by consumption of neurotoxins produced by Clostridium botulinum.35 This spore-forming bacterium grows anaerobically and sometimes produces gas that can swell improperly processed canned foods. The bacterium produces several types of neurotoxins that are differentiated serologically. The toxins are heat-labile exotoxins. Two main food-poisoning groups (proteolytic and nonproteolytic) are found in nature. Nonproteolytic strains can be psychrotrophic and grow at refrigeration temperatures without the food showing obvious signs of spoilage (no swollen cans or off odor). Incubation period is 12–48 hours but may be shorter or longer. Early symptoms, which may be absent, include nausea, vomiting, and occasionally diarrhea. Other symptoms are dizziness, fatigue, headache, constipation, blurred vision, double vision, difficulty in swallowing, breathing, and speaking, dry mouth and throat, and swollen tongue. Later, paralysis of the muscles and then of the heart and respiratory system can lead to death from respiratory failure. Duration is 3–6 days for fatal cases, several months for nonfatal cases. Treatment of suspect cases is by immediate administration of antisera, which can be useful if given early. Respiratory assistance is usually required. Foods frequently linked to botulism are inadequately home-canned foods, primarily low-acid vegetables, preserved meats, and fish (more
common in Europe), cooked onions, and leftover baked potatoes. The bacterium generally will not grow at a pH of less than 4.6 or at a water activity below 0.85. Thus, high-acid foods, like tomatoes and some fruits, are generally safer than low-acid foods, like corn, green beans, peas, and muscle foods. Control is by applying a minimum botulinum cook (12 D) to all thermally processed foods held in hermetically sealed containers. Each particle of food must reach 120°C and be held at that temperature for about 3 minutes to achieve a 12 D process. Consumers should reject swollen or putrid cans of food. Properly cured meats (hams, bacon, luncheon meats) should not support growth and toxin production by the bacterium. A related illness caused by C. botulinum is infant botulism. The bacterium can colonize and grow in the intestinal tract of some newborn infants who have not yet developed a desirable competing microflora. The toxin is then slowly released in the intestine, leading to weakness, lack of sucking, and limpness. Evidence suggests that infant botulism may be associated with sudden infant death syndrome. Consumption of honey by young infants has been linked to this form of the disease.

CHEMICAL INTOXICATIONS
Chemical hazards are minimally important as etiological agents of food-borne disease (Table 47-1). It should be noted that a number of chemicals, whether naturally occurring or intentionally added, have tolerance limits in foods. These limits are published in the Code of Federal Regulations, Title 21. Informal limits are available through FDA Compliance Policy Guidelines (Center for Food Safety and Applied Nutrition, Washington, D.C.). Prohibited substances (CFR 21, Part 189) are not allowed in human foods either because they have been shown to be a public health risk or because they have not been shown to be safe using sound scientific data.36 Safe food additives are oftentimes referred to as Generally Recognized as Safe (GRAS) substances. There are no documented occurrences of food-borne disease associated with the proper use of insecticides, herbicides, fungicides, fertilizers, food additives, package material migration chemicals, and other industrial use chemicals. Most human-made chemicals associated with food-borne disease find their way into foods by nonintentional means. Accidental or inadvertent contamination with heavy metals, detergents, or sanitizers can occur.37 Although infrequently reported to CDC, most chemical intoxications are likely to be short in duration with mild symptoms. CDC does not attempt to link exposure to these chemicals with chronic diseases. There are measurable levels of pesticides, herbicides, fungicides, fertilizers, and veterinary drugs and antibiotics in most foods. In the vast majority of instances where these residues are found, levels are well below tolerance. Heavy metal poisonings have occurred primarily due to leaching of lead, copper, tin, zinc, or cadmium from containers or utensils in contact with acidic foods. Although usually considered minor contributors to human illness, toxic chemicals in foods may be significant contributors to morbidity and mortality of consumers. 
A number of toxic chemicals found in foods are of microbial origin. For example, mycotoxins are secondary metabolites produced by fungi.38 The aflatoxins were the first fungal metabolites in foods to be regulated by the U.S. government. Grains and nut products are common carriers of these and other mold toxins. Other fungal toxins, not associated with microscopic molds, include the toxic alkaloids of certain mushrooms. Direct consumption of wild mushrooms that are frequently confused with edible domesticated species can lead to acute toxicity.39 No current food processing or sanitation method can render these mushrooms acceptable as human food. A number of seafood toxins are naturally associated with shellfish and some predatory reef fish.40 Again, the ultimate cause of these intoxications is traced to the presence of microorganisms. Under favorable environmental conditions, populations of planktonic algae (dinoflagellates) become high (an algal bloom) in shellfish-growing waters. The algae are removed from the water column during filter feeding by molluscan shellfish (oysters, clams, mussels, cockles, and scallops). The shellfish
then concentrate the algae and associated toxins in their edible flesh. Four primary shellfish intoxications have been identified: amnesic shellfish poisoning (ASP), diarrhetic shellfish poisoning (DSP), neurotoxic shellfish poisoning (NSP), and paralytic shellfish poisoning (PSP). ASP has been linked to mussels; DSP to mussels, oysters, and scallops; NSP to oysters and clams; and PSP to all of the shellfish mentioned. Control of shellfish toxins is best accomplished by monitoring harvest waters for the toxic algae. Postharvest control is not presently possible; however, depuration or relaying may be of some use. Some marine fish harvested from temperate or tropical climates may contain toxic chemicals. Scombroid fish (anchovy, herring, marlin, sardine, tuna, bonito, mahi mahi, mackerel, bluefish, and amberjack) subjected to time/temperature abuse during storage can support growth of bacteria that produce histidine decarboxylase.40 This enzyme converts free histidine in the fish tissues to histamine. High histamine levels lead to an allergic-type response among susceptible consumers. Prompt and continued refrigeration of these fish after harvest will limit microbial growth and enzyme activity. Fish most often associated with histamine scombrotoxicity are mahi mahi, tuna, mackerel, bluefish, and amberjack. Another form of naturally occurring chemical food poisoning, found in tropical and subtropical fish, is ciguatera. Like shellfish toxicity, ciguatera results when fish bioconcentrate dinoflagellate toxins through the food chain. Thus, large predatory fish at the top of the food chain can accumulate enough toxin to produce a paralysis-type response among consumers. Fish associated with ciguatera poisoning are grouper, barracuda, snapper, jack, mackerel, and triggerfish. Again, monitoring of harvest waters is the essential control step to avoid human illness.

PHYSICAL HAZARDS
Consumers frequently report physical defects in foods, of which the presence of foreign objects predominates.6 Glass is the leading object reported and is evidence of a manufacturing or distribution error. Most physical hazards are not particularly dangerous to the consumer, but their obvious presence in a food is disconcerting. Most injuries are cuts, choking, and broken teeth. Control of physical hazards in foods is often difficult, especially when these hazards are a normal constituent of the food, such as bones and shells. Good manufacturing practices (GMPs) and employee awareness are the best measures to prevent physical hazards. Metal detectors and x-ray machines may be installed where appropriate.

ADMINISTRATIVE REGULATION
Several regulatory groups are involved in the regulation of food safety and quality standards, from local and state agencies to international agencies. Since there is tremendous variation within and between local and state agencies, this discussion will be confined to the national and international agencies that regulate food. At the national level, two federal agencies regulate the vast majority of food produced and consumed in the United States; namely, the U.S. Department of Agriculture (USDA)41 and the Food and Drug Administration (FDA).42
U.S. Department of Agriculture
USDA has responsibility for certification, grading, and inspection of all agricultural products. All federally inspected meat and meat products, including animals, facilities, and procedures, are covered under a series of meat inspection laws that began in 1906 and have been modified on several occasions, culminating in the latest revisions in 1996.43 These laws cover only meat that is in interstate commerce, leaving legal jurisdiction over intrastate meats to individual states. In states that have state-inspected meats in addition to federally inspected meats, the regulations require that the state inspection program be "equivalent" to the federal program. Key elements in meat inspection are examination of live animals for obvious signs of
clinical illness and examination of gross pathology of carcasses and viscera for evidence of transmissible diseases. The newest regulations also require the implementation of an HACCP system and microbiological testing of carcasses after chilling. Eggs and egg products are also covered by USDA inspection under the Egg Products Inspection Act of 1970.44 This act mandates inspection of egg products at all phases of production and processing. USDA inspection of meat processing is continuous; that is, products cannot be processed without an inspector or inspectors present to verify the operation.
U.S. Food and Drug Administration
FDA has responsibility for ensuring that foods are wholesome, safe, and stored under sanitary conditions, as outlined by the Food, Drug, and Cosmetic Act of 1938. This act has been amended to include food additives, packaging, and labeling. The last two issues relate not only to product safety and wholesomeness but also to nutritional labeling and economic fraud. FDA is also empowered to act if pesticide residues exceed tolerances set by the U.S. Environmental Protection Agency. Unlike USDA inspection, FDA inspection is discontinuous: food-processing plants are required to maintain their own quality control records, while inspectors make random visits to facilities.
Milk Sanitation Perhaps one of the greatest public health success stories of the twentieth century has been the pasteurization of milk. The U.S. Public Health Service drafted a model milk ordinance in 1924, which has been adopted by most local and state regulatory authorities and has become known as the Grade A PMO (Grade A Pasteurized Milk Ordinance).45 This ordinance covers all phases of milk production, including but not limited to animal health, design and construction of milk-processing facilities, equipment, and most importantly, the pasteurization process itself. The PMO sets quality standards for both raw and processed milk, in the form of cooling requirements and bacteriological populations. The PMO also standardizes the pasteurization requirements for fluid milk, which insures that bacteria of public health significance will not survive in the finished product. From a historical perspective, it is interesting to note that neither the public nor the industry initially embraced pasteurization, but that constant pressure from public health officials finally succeeded in making this important advance in public health almost universal.
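As an illustration of how standardized pasteurization parameters are applied, the minimum time-temperature check can be sketched as follows. The vat (63°C for 30 minutes) and HTST (72°C for 15 seconds) values are commonly cited minimums; the current edition of the PMO should be consulted for authoritative figures.

```python
# Illustrative check of fluid-milk pasteurization parameters against
# commonly cited minimums (verify against the current PMO).
PASTEURIZATION_MINIMUMS = {
    "vat": (63.0, 30 * 60),   # 63 C held for 30 minutes
    "htst": (72.0, 15),       # 72 C for 15 seconds (high-temperature short-time)
}

def meets_pasteurization_minimum(method: str, temp_c: float, seconds: float) -> bool:
    """Return True if the time-temperature pair meets the minimum for the method."""
    min_temp, min_time = PASTEURIZATION_MINIMUMS[method]
    return temp_c >= min_temp and seconds >= min_time

print(meets_pasteurization_minimum("htst", 72.5, 16))  # True
print(meets_pasteurization_minimum("htst", 71.0, 20))  # False: temperature too low
```

Note that both conditions must be met simultaneously; a higher temperature does not compensate for a shorter hold time in this simple check.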
International Administration The Codex Alimentarius Commission, created by the Food and Agriculture Organization and the World Health Organization, has the daunting task of implementing food standards on an international scale.46 These standards apply to both general and specific food categories and also set limits for pesticide residues in foods. Acceptance of these standards is voluntary and at the discretion of individual governments, but acceptance of the standards requires that the country apply them equally to both domestically produced and imported products. The importance of international standards is growing daily as international trade in food expands. Many countries find that they are both importing and exporting foods, and a common set of standards is critical in establishing trade without the presence of nontariff trade barriers.
Prerequisite Programs In order to achieve the goal of producing a safe food product, food processors should have in place a variety of fundamental programs covering the general operation of the process and the processing facility. These programs are considered “prerequisites,” as without these basic programs in place, it is impossible to produce safe and wholesome foods, irrespective of the available technology, inspection process, or
microbiological testing. These prerequisite programs fall generally under the term “good manufacturing practices,” but also include sanitation, equipment and facility design, personal hygiene issues, and pest control.
Good Manufacturing Practices GMPs cover a broad range of activities within the food-processing establishment. Although there is general guidance in the Code of Federal Regulations,47 GMPs are established by the food processor, and are specific to their own operation. There is also general guidance on GMPs available from a variety of organizations representing specific commodities or trades. Specific applications of GMPs are discussed in the following sections, but GMPs also apply to activities that affect not only the safety of the product, but also the quality. As an example, a refrigerated holding or storage temperature may be set by a GMP at a point below that which is actually required for product safety, but is set at that point for product quality reasons. Conversely, if a raw material or partially manufactured product, which under normal circumstances would be kept refrigerated, were subsequently found to be at a higher temperature, it would be deemed to be out of compliance with the GMP. GMPs may also focus on the actual production processes and controls within those processes. GMPs may be viewed as rules that assure fitness of raw materials and ingredients, rules that maintain the integrity of processed foods, and rules to protect the finished product (foods) from deterioration during storage and distribution. Other GMPs may address the presence of foreign materials in the processing area, such as tramp metal from equipment maintenance or broken glass from a shattered light bulb. These GMPs are established to provide employees with specific guidance as to the company's procedures for addressing certain uncommon but unavoidable issues. While GMPs, by their nature, cover broad areas of operation, the individual GMP is usually quite specific, presenting complete information in a logical, stepwise fashion.
An employee should be able to retrieve a written GMP from a file, and should be able to perform the required GMP function with little or no interpretation of the written material.
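The distinction drawn above between a GMP limit (set for quality) and the stricter food-safety requirement can be made concrete with a small sketch. The 4°C and 7°C limits below are hypothetical values chosen for illustration, not figures from any regulation.

```python
# Hypothetical limits: the company GMP holding temperature (4 C, chosen
# for product quality) is stricter than the food-safety limit (7 C).
GMP_LIMIT_C = 4.0      # hypothetical company GMP limit
SAFETY_LIMIT_C = 7.0   # hypothetical safety limit

def evaluate_holding_temp(temp_c: float) -> str:
    """Classify a measured holding temperature against both limits."""
    if temp_c <= GMP_LIMIT_C:
        return "in compliance"
    if temp_c <= SAFETY_LIMIT_C:
        return "out of GMP compliance (quality), still within safety limit"
    return "out of compliance (safety)"

print(evaluate_holding_temp(5.5))
```

A product at 5.5°C is safe but still out of GMP compliance, which is exactly the situation the text describes: the GMP, not the safety limit, is the operative standard on the plant floor.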
Training and Personal Hygiene Personnel who are actually involved in food-processing operations should also understand the necessity for proper cleaning and sanitation, and not simply rely on the sanitation crew to take care of all issues.48 In addition, all employees must be aware of basic issues of personal hygiene, especially when they are in direct contact with food or food-processing equipment. Some key elements, such as hand washing and clean clothing and gloves, should be reemphasized on a periodic basis. An important aspect of this is an emphasis on no "bare-handed" contact with the edible product, using utensils or gloves to prevent this from occurring. This information has been outlined by the U.S. Food and Drug Administration in the Good Manufacturing Practices section of the Code of Federal Regulations.
Pest Control Pests, such as insects and rodents, present both physical and biological hazards.49 While the consumer would undoubtedly object to the proverbial "fly in the soup," the concerns with the introduction of biological hazards into the foods by pests are even greater. Integrated Pest Management (IPM) includes the physical and mechanical methods of controlling pests within the food-processing environment and the surrounding premises. At a minimum, the processing environment and the area surrounding the processing plant should be evaluated by a competent inspector for both the types of pests likely to be present, and the potential harborages for such pests. A comprehensive program should be established that addresses flying insects, crawling insects, and rodents, the objective being to prevent access to the processing environment. Given that it is impossible to completely deny
pest access to the processing environment, internal measures should be taken to reduce the numbers of any pests that enter the processing area. Since it is undesirable to have poisonous chemicals in areas surrounding actual food production, active pest-reduction methods should be mechanical in nature (traps, insect electrocutors, etc.). Record keeping is an important aspect of pest management. Documentation of pest management activities should include maps and maintenance schedules for rodent stations, bait stations, insect electrocutors, an inventory of pesticides on the premises, and reports of inspections and corrective actions. There should be standard operating procedures for applying pesticides, and they should only be applied by properly trained individuals. Many food-processing establishments contract with external pest control operators to address their pest control needs.
SANITATION
Sanitation is the fundamental program for all food-processing operations, irrespective of whether they are converting raw products into processed food or preparing food for final consumption. Sanitation impacts all attributes of processed foods, from organoleptic properties of the food to the safety and quality of the food itself. From a food processor’s perspective, an effective sanitation program is essential to producing quality foods with reasonable shelf lives. Without an effective program, even the best operational management and technology will ultimately fail to deliver the quality product that consumers demand. Sanitation programs are all-encompassing, focusing not only on the details of soil types and chemicals, but on the broader environmental issues of equipment and processing-plant design. Many foodborne microorganisms, both spoilage organisms and bacteria of public health significance, can be transferred from the plant environment to the food itself.50 Perhaps one of the most serious of these microorganisms came to national and international attention in the mid-1980s, when Listeria monocytogenes was found in processed dairy products. Listeria was considered to be a relatively minor veterinary pathogen until that time, and not even considered a potential food-borne agent. However, subsequent research demonstrated that L. monocytogenes was a serious human health concern, and more importantly was found to be widely distributed in nature. In many food-processing plants, Listeria were found to be in the general plant environment, and subsequently efforts have been made to improve plant sanitation, through facility and equipment design as well as focusing more attention on basic cleaning and sanitation.
Sanitary Facility Design Some of the basic considerations of food-processing facility design include the physical separation of raw and processed products, adequate storage areas for nonfood items (such as packaging materials), and a physical layout that minimizes employee traffic between raw and processed areas. While these considerations are easily addressed in newly constructed facilities, they may present challenges in older facilities that have been renovated or added on to. Exposed surfaces, such as floors, walls, and ceilings, in the processing area should be constructed of material that allows for thorough cleaning. Although these surfaces are not direct food contact surfaces, they contribute to overall environmental contamination in the processing area. These surfaces are particularly important in areas where food is open to the environment, and the potential for contamination is greater when temperature differences in the environment result in condensation.51 As an example, a large open cooking kettle will generate some steam that may condense on surfaces above the kettle. This condensate may, without proper design and sanitation, drip back down into the product carrying any dirt and dust from overhead surfaces back into the food. Other obvious considerations are basic facility maintenance as well as insect and rodent control programs, as all of these factors may contribute to contamination of food.
Sanitary Equipment Design Many of the same considerations for sanitary plant design also apply to the design of food-processing equipment. Irrespective of its function, processing equipment must protect food from external contamination and from undue conditions that will allow existing bacteria to grow. The issue of condensate as a form of external contamination has already been raised. Opportunities for existing bacteria to reproduce may be found in the so-called “dead spaces” within some equipment. These areas can allow food to accumulate over time under conditions that allow bacteria to grow. These areas then become a constant inoculation source for additional product as it moves through the equipment, increasing the bacteriological population within the food. Other considerations of food equipment design include avoiding construction techniques that may allow product to become trapped within small areas of the equipment, creating the same situation that occurs in the larger dead spaces within the equipment. As an example, lap seams that are tack welded provide ample space for product to become trapped. Not only does this create a location for bacteria to grow and contaminate the food product, it also creates a point on the equipment that is difficult if not impossible to clean.
Cleaning and Sanitizing Procedures Cleaning and sanitizing processes can be generically divided into five separate steps that apply to any sanitation task.52 The first step is removal of residual food, waste materials, and debris. This is frequently referred to as a "dry" cleanup. The dry cleanup is followed by a rinse with warm (48–55°C) water to remove material that is only loosely attached to surfaces and to hydrate material that is more firmly attached to surfaces. Actual cleaning follows the warm-water rinse and usually involves the application of cleaning chemicals and some form of scrubbing force, either with mechanical brushes or with high-pressure hoses. The nature of the residual food material will determine the type of cleaning compound applied. After this, surfaces are rinsed and inspected for visual cleanliness. At this point, the cleaning process is repeated on any areas that require further attention. Carbohydrates and lipids can generally be removed with warm to hot water and sufficient mechanical scrubbing. Proteins require the use of alkaline cleaners, while mineral deposits can be removed with acid cleaners. Commercially available cleaning compounds generally contain materials to clean the specific type of food residue of concern, as well as surfactants and, as necessary, sequesterants that allow cleaners to function more effectively in hard water.53 When surfaces are visually clean, a sanitizer is applied to reduce or eliminate remaining bacteriological contamination. Inadequately cleaned equipment cannot be sanitized, as the residual food material will protect bacteria from the sanitizer. One of the most common sanitizing agents widely used in small- and medium-sized processing facilities is hot water. Most regulatory agencies require that when hot water is used as the sole method of sanitization, the temperature must be at or above 85°C.
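The soil-to-cleaner pairings and the hot-water sanitizing minimum described above can be captured in a small lookup. This is an illustrative sketch; commercial cleaning compounds blend these chemistries with surfactants and sequesterants, as noted in the text.

```python
# Illustrative mapping of residual food soil type to the cleaning
# chemistry described in the text.
CLEANER_FOR_SOIL = {
    "carbohydrate": "warm-to-hot water with mechanical scrubbing",
    "lipid": "warm-to-hot water with mechanical scrubbing",
    "protein": "alkaline cleaner",
    "mineral": "acid cleaner",
}

# Common regulatory minimum cited above for hot water as the sole sanitizer.
HOT_WATER_SANITIZE_MIN_C = 85.0

def select_cleaner(soil_type: str) -> str:
    """Return the cleaning approach suited to a given soil type."""
    return CLEANER_FOR_SOIL[soil_type]

def hot_water_sanitizes(temp_c: float) -> bool:
    """Check whether a hot-water temperature meets the sanitizing minimum."""
    return temp_c >= HOT_WATER_SANITIZE_MIN_C

print(select_cleaner("protein"))   # alkaline cleaner
print(hot_water_sanitizes(82.0))   # False: below the 85 C minimum
```

The lookup also reflects the text's ordering constraint: sanitizing only follows once the appropriate cleaner has removed residual soil, since soil shields bacteria from the sanitizer.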
While heat sanitization is effective, it is not as economical as chemical sanitizers because of the energy costs required to maintain the appropriate temperature. Chlorine-containing sanitizers are economical and effective against a wide range of bacterial species, and are widely used in the food industry.54 Typically, the concentrations of chlorine applied to equipment and surfaces are in the 150–200 parts per million range. Chlorine sanitizers are corrosive and can, if improperly handled, release chlorine gas into the environment. Iodine-containing sanitizers are less corrosive than chlorine sanitizers, but are also somewhat less effective. These sanitizers must be used at slightly acidic pH values to allow for the release of free iodine. The amber color of iodine sanitizers can give an approximate indication of concentration, but can also leave residual stains on treated surfaces. Quaternary ammonium compounds (QACs) are noncorrosive and demonstrate effective bactericidal action against a wide range of microorganisms. These sanitizers are generally more costly and not as effective as chlorine compounds, but they are stable and provide residual antimicrobial activity on sanitized surfaces. Food-processing plants
will frequently alternate between chlorine and QAC sanitizers to prevent development of resistant bacterial populations or will use chlorine sanitizers on regular production days and then apply QACs during periods when the facility is not operating (for example, over a weekend). Another element in food plant sanitation programs is the personnel who perform the sanitation operations as well as the employees who work in the processing area. Sanitation personnel should be adequately trained to understand the importance of their function in the overall processing operation in addition to the training necessary to properly use the chemicals and equipment necessary for them to perform their duties.

HAZARD ANALYSIS CRITICAL CONTROL POINT SYSTEM (HACCP)
The basic concept of HACCP was developed in the late 1950s and early 1960s as a joint effort to produce food for the manned space program. The U.S. Air Force Space Laboratory Project Group, the U.S. Army Natick Laboratories, and the National Aeronautics and Space Administration contributed to the development of the process, as did the Pillsbury Company, which had a major role in developing and producing the actual food products. Since that time, the HACCP system has evolved and been refined, but still focuses on the original goal of producing food that is safe for consumption.6 Since development, HACCP principles have been used in many different ways. However, recent interest in the system has been driven by changes in the regulatory agencies, specifically the U.S. Department of Agriculture—Food Safety and Inspection Service (USDA-FSIS), and the U.S. Food and Drug Administration. USDA-FSIS recently revised the regulations that govern meat inspection to move all federally inspected meat plants to an HACCP-based system of production and inspection.43 FDA has also changed the regulations for fish and seafood, again moving this to an HACCP-based system for production.55 It is likely, given current trends by federal agencies, that most commercially produced foods will be produced under HACCP systems within the next 10 years. The goal of an HACCP system is to produce foods that are free of biological, chemical, and physical hazards.56 HACCP is a preventative system, designed to prevent problems before they occur, rather than try to fix problems after they occur. Biological hazards fall into two distinct categories: those that can potentially cause infection and those that can potentially cause intoxications. Infectious agents require the presence of viable organisms in the food and may not, depending on the organisms and the circumstances, require that the organism actually reproduce in the food.
As an example, Escherichia coli O157:H7 has an extremely low infectious dose for humans (possibly less than 100 viable cells), and as such the mere presence of the bacterium in foods is a cause for concern. In contrast, organisms involved in intoxications usually require higher numbers of the organism in the food to produce sufficient amounts of toxin to cause clinical illness in humans. However, some of the toxins involved in food-borne diseases are heat stable, so that absence of viable organisms in the food is not necessarily an indication of the relative safety of the food. Staphylococcus aureus is a good example: it typically requires 1,000,000–10,000,000 cells per gram of food to produce sufficient toxin to cause illness in humans.57 However, because the toxin itself is extremely heat stable, cooking the food will eliminate the bacterium but not the toxin, and the food can still potentially cause an outbreak of food-borne illness. Chemical hazards include chemicals that are specifically prohibited in foods, such as cleaning agents, as well as food additives that are allowed in foods but only at regulated concentrations. Foods containing prohibited chemicals or food additives in levels higher than allowed are considered adulterated. Adulterated foods are not allowed for human consumption and are subject to regulatory action by the appropriate agency (USDA or FDA). Chemical hazards can be minimized by assuring that raw materials (foods and packaging
materials) are acquired from reliable sources that provide written assurances that the products do not contain illegal chemical contaminants or additives. During processing, adequate process controls should be in place to ensure that an approved additive is not used at levels exceeding maximum legal limits for both the additive and the food product. Other process controls and GMPs should also insure that industrial chemicals, such as cleaners or lubricants, will not contaminate food during production or storage.47 Physical hazards are extraneous material or foreign objects that are not normally found in foods, such as wood, glass, or metal fragments. Physical hazards typically affect only a single individual or a very small group of individuals, but because they are easily recognized by the consumer, are sources of many complaints. Physical hazards can originate from food-processing equipment, packaging materials, the environment, and from employees. Physical contaminants can be minimized by complying with GMPs and by employee training. While some physical hazards can be detected during food processing (e.g., metal by the use of metal detectors), many nonferrous materials are virtually impossible to detect by any means and so control often resides with employees.
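Returning to the intoxication example above, the time needed for a population to grow from a low initial contamination level to the toxin-relevant range can be estimated with a back-of-envelope exponential-growth calculation. The 30-minute doubling time used here is a hypothetical illustration, not a measured value for any organism.

```python
import math

# Time for a population to grow from n0 to n_target cells/gram, assuming
# unchecked exponential growth at a fixed doubling time. The doubling
# time is a hypothetical value chosen for illustration.
def hours_to_reach(n0: float, n_target: float, doubling_time_min: float) -> float:
    doublings = math.log2(n_target / n0)          # number of doublings required
    return doublings * doubling_time_min / 60.0   # convert minutes to hours

t = hours_to_reach(1e2, 1e6, 30.0)
print(f"{t:.1f} hours")  # roughly 6.6 hours under these assumptions
```

The point of the sketch is qualitative: under favorable temperature-abuse conditions, the gap between a harmless contamination level and a toxin-producing population can be a matter of hours, which is why temperature control figures so prominently in HACCP plans.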
HACCP Plan Development Prior to the implementation of HACCP, a review should be conducted of all existing prerequisite programs. Deficiencies in these programs should be addressed prior to the implementation of HACCP, because an HACCP plan presumes that these basic programs are fully functional and effective. Development of an HACCP plan begins with the formation of an HACCP team.58 Individuals on this team should represent diverse sections within a given operation, from purchasing to sanitation. The team is then responsible for development of the plan. Initial tasks that the team must accomplish are to identify the food and method of distribution, and to identify the consumer and intended use of the food. Having done this, the HACCP team should construct a flow diagram of the process and verify that this diagram is accurate. The development of an HACCP plan is based on seven principles or steps in logical order (Table 47-5).59 With the flow diagram as a reference point, the first principle or step is to conduct a hazard analysis of the process. The HACCP team identifies all biological, chemical, and physical hazards that may occur at each step during the process. Once the list is completed, it is reviewed to determine the relative risk of each potential hazard, which helps identify significant hazards. Risk is the interaction of “likelihood of occurrence” with “severity of occurrence.” As an extreme example, a sudden structural failure in the building could potentially contaminate any exposed food with foreign material. However, likelihood of the occurrence of such an event is small. In contrast, if exposed food is held directly below surfaces that are frequently covered with condensate, then the likelihood of condensate dripping on exposed food is considerably higher. An important point in the determination of significant hazards is a written explanation by the HACCP team regarding how the determination of “significant” was made. 
This documentation can provide a valuable reference in the future, when processing methods change or when new equipment is added to the production line. The second principle in the development of an HACCP plan is the identification of critical control points (CCPs) within the system.

TABLE 47-5. SEVEN HACCP PRINCIPLES
1. Hazard analysis
2. Identify critical control points (CCPs)
3. Establish critical limits for each CCP
4. Monitor CCPs
5. Establish corrective action
6. Verification
7. Record keeping
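The "likelihood of occurrence" and "severity of occurrence" interaction used in hazard analysis is often expressed as a simple scoring product. The ordinal scales and the significance threshold below are illustrative choices, not values prescribed by HACCP guidance; the two hazards scored are the ones discussed in the text.

```python
# Score likelihood and severity on small ordinal scales (1 = low, 3 = high)
# and take the product; a threshold separates "significant" hazards.
def risk_score(likelihood: int, severity: int) -> int:
    """Both inputs on a 1 (low) to 3 (high) ordinal scale."""
    return likelihood * severity

hazards = [
    # (description, likelihood, severity) -- scores are illustrative
    ("structural failure over open product", 1, 3),
    ("condensate dripping onto exposed food", 3, 2),
]

SIGNIFICANT = 4  # illustrative threshold, set by the HACCP team
for name, likelihood, severity in hazards:
    score = risk_score(likelihood, severity)
    flag = "significant" if score >= SIGNIFICANT else "not significant"
    print(f"{name}: score {score} ({flag})")
```

Whatever scales the team chooses, the written rationale behind each score is what matters for the documentation requirement described above.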
A CCP is a point, step, or procedure where control can be applied and a food safety hazard can be prevented, eliminated, or reduced to acceptable levels.59 An example of a CCP is the terminal heat process applied to canned foods after cans have been filled and sealed. This process, when properly conducted according to FDA guidelines, effectively eliminates a potential food safety hazard, Clostridium botulinum. Once CCPs have been identified, the third principle in the development of an HACCP plan is to establish critical limits for each CCP. These limits are not necessarily the ideal processing parameters, but the minimum acceptable levels required to maintain the safety of the product. Again, in the example of a canned food, the critical limit is the minimum time and temperature relationship to insure that each can has met the appropriate standards required by FDA. The fourth principle, following in logical order, is to establish appropriate monitoring requirements for each critical control point. The intent of monitoring is to insure that critical limits are being met at each critical control point. Monitoring may be on a continuous or discontinuous basis. Presence of a physical hazard, such as metal, can be monitored continuously by passing all of the food produced through a metal detector. Alternately, presence of foreign material can be monitored on a continuous basis by visual inspection. Discontinuous inspection may involve taking analytical measurements, such as temperature or pH, at designated intervals during the production day. Some analytical measurements can be made on a continuous basis by the use of data-recording equipment, but it is essential that continuous measures be checked periodically by production personnel. The fifth principle in the development of an HACCP plan is to establish appropriate corrective actions for occasions when critical limits are not met.
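Discontinuous monitoring of a CCP against its critical limit might be sketched as follows. The retort-temperature limit is hypothetical; each record captures what was measured, when, and by whom, and any reading outside the limit is flagged for the corrective-action step discussed next.

```python
# Sketch of discontinuous CCP monitoring: periodic readings are checked
# against the critical limit. The limit shown is hypothetical, not an
# FDA-prescribed value.
CRITICAL_LIMIT_MIN_TEMP_C = 121.0  # hypothetical retort minimum

def check_reading(timestamp: str, temp_c: float, operator: str) -> dict:
    """Build one monitoring record: what, when, who, and limit status."""
    return {
        "timestamp": timestamp,
        "temp_c": temp_c,
        "operator": operator,
        "within_limit": temp_c >= CRITICAL_LIMIT_MIN_TEMP_C,
    }

log = [
    check_reading("08:00", 122.5, "J. Smith"),
    check_reading("09:00", 119.8, "J. Smith"),  # deviation: below the limit
]
deviations = [r for r in log if not r["within_limit"]]
print(len(deviations))  # 1
```

A practical consequence, noted in the text: the product made between the 08:00 and 09:00 readings is implicated by the deviation, so more frequent readings shrink the volume of product affected by any single deviation.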
Corrective actions must address the necessary steps to correct the process that is out of control (such as increasing the temperature on an oven) as well as address disposition of the product that was made while the process was out of control. A literal interpretation of the HACCP system and a CCP is that when a CCP fails to meet the critical limits, then the food product is potentially unsafe for human consumption. As a result, food produced while the CCP was not under control cannot be put into the normal distribution chain without corrective actions being taken to that product. Typically this means that the product must be either reworked or destroyed, depending on the nature of the process and the volume of product that was produced while the CCP was out of control. This argues for frequent monitoring, so that the actual volume of product produced during each monitoring interval is relatively small. The sixth principle in the development of an HACCP plan is verification. Verification can take many forms. Microbiological tests of finished products can be performed to evaluate the effectiveness of an HACCP plan. Alternately, external auditors can be used to evaluate all parts of the HACCP plan, to insure that the stated goals and objectives are being met. An HACCP plan must also be periodically reviewed and updated, to reflect changes in production methods and use of different equipment. Another critical aspect of verification is education of new employees on the HACCP plan itself. As HACCP is phased in to many food-processing environments, many employees who are unfamiliar with the concepts and goals of HACCP will have to be educated on the necessity of following the plan. In one sense, USDA-FSIS regulations have guaranteed that meat processors will follow HACCP plans, as the penalty for not following the HACCP plan can be as severe as the loss of inspection at an establishment. 
However, HACCP is an excellent system for monitoring and improving production of food products, and many food processors will discover that HACCP plans offer many benefits, well above and beyond the legal requirements of the regulatory agencies. The seventh principle in the development of an HACCP plan is the establishment of effective record-keeping procedures. In many respects, an HACCP plan is an elaborate record-keeping program. Records should document what was monitored, when it was monitored and by whom, and what was done in the event of a deviation. Reliable records are essential from both business and regulatory perspectives. From the business perspective, HACCP records allow a processor to develop an accurate longitudinal record of production
practices and deviations. Reviewing HACCP records may provide insight on a variety of issues, from an individual raw material supplier whose product frequently results in production deviations, to an indication of an equipment or environmental problem within a processing plant. From a regulatory perspective, records allow inspectors to determine if a food processor has been fulfilling commitments made in the HACCP plan. If a processor has designated a particular step in the process as a CCP, then they should have records to indicate that the CCP has been monitored on a frequent basis and should also indicate corrective actions taken in the event of a deviation.

FOOD PRESERVATION
Normal microflora of foods are characterized by food type and growing/handling practices. Foods of plant origin have flora on outer surfaces. Animals too have flora on surfaces, but also have intestinal flora and secretion flora. Outside sources, such as soil, dust, water, humans, and equipment, can be significant sources of disease-causing microbes. Use of diseased animals for foods is dangerous because they often carry human pathogens. It should be noted that the inner tissues of plants and animals are generally sterile; however, cabbage inner leaves have lactobacilli, and animal intestinal tracts have numerous microbes. Pathogens found on fruits and vegetables are of soil origin (Clostridium, Bacillus) or from contaminated water, fertilizer, or food handlers. Some grain and nut products are naturally contaminated in the field with mycotoxin-producing molds. Soil is also a source of contamination of foods of animal origin. Animal feces can harbor coliforms, Clostridium perfringens, enterococci, and enteric pathogens. Milk from infected udders (mastitis) can carry disease-causing Streptococcus pyogenes and Staphylococcus aureus. Nonmastitic udders can shed Brucella, Rickettsia, and viruses. Outside sources of contamination that are not normally associated with food can be important in terms of food safety. Soil and dust contain very large numbers and a large variety of microbes. Many microorganisms responsible for food spoilage come from these sources. Contamination is by direct contact with soil, water, or by airborne dust particles. Air can carry microorganisms from other sources such as sneezing, coughing, dust, and aerosols. Pathogens, mold spores, yeasts, and spoilage bacteria can then be disseminated. Organic debris from plants or animals is an excellent source. Microorganisms can grow on walls, floors, and other surfaces and act as a source of contamination during food processing and preparation.
Airborne particles can be removed by filtration or by electrostatic precipitation. Treated sewage may be used for fertilizer, although due to large amounts of toxic compounds, like heavy metals, it is not used often for this purpose.29 Sewage can be an excellent source of pathogens including all enteric gram negative bacteria, enterococci, Clostridium, viruses, and parasites. Sewage that contaminates lakes, streams, and estuaries has been linked to many seafood outbreaks. In addition, water used for food must be safe for drinking and must be treated and free of pathogens. Furthermore, water must not contain toxic wastes. Water in food processing is typically used for washing, cooling, chilling, heating, ice, or as an ingredient. Stored water (reservoirs) and underground water (wells) are usually self-purifying. Numbers and types of microorganisms found in foods depend on (a) the general environment from which the food was obtained, (b) the quality of raw food, (c) sanitary conditions under which the food was processed or handled, and (d) adequacy of packaging, handling, and storage of foods. General methods of food preservation are shown in Table 47-6. The Hurdle Concept uses multiple methods (multibarrier approach) to food preservation and is the most common. Examples include pasteurized milk (heat, refrigeration, and packaging) or canned beans (heat, anaerobiosis, and packaging).
Environmental Health

TABLE 47-6. METHODS OF FOOD PRESERVATION
Asepsis: Keeping microorganisms out of food ("aseptic packaging")
Removal: Limited applications, difficult to do; filtration
Anaerobiosis: Sealed, evacuated container; vacuum packaging
High temperatures: Sterilization, canning, pasteurization
Low temperatures: Refrigeration, freezing
Dehydration: Drying or tying up water with solutes and hydrophilic colloids; lowers water activity (aw)
Chemical preservatives: Natural, developed, or added (propionic acid, nisin, spices); acids lower pH
Irradiation: X-rays (ionizing) or UV (nonionizing)
Mechanical destruction: Grinding, high pressures; not widely used
Combinations: Most frequently employed; multiple hurdle concept

Principles of Food Preservation
Principles of food preservation rely on preventing or delaying microbial decomposition.60,61 This can be accomplished by asepsis or removal; by preventing growth or activity of microbes with low temperatures, drying, anaerobic conditions, or preservatives; or by killing or injuring microbes with heat, irradiation, or some preservatives. A second principle is to prevent or delay self-decomposition, which is done by destruction or inactivation of enzymes (blanching) or by preventing or delaying autoxidation (antioxidants). The last principle is to prevent physical damage caused by insects, animals, and mechanical forces, which in turn prevents entry of microorganisms into food; physical barriers (packaging) are the primary means of protection.

To control microorganisms in foods, many methods of food preservation depend not on the destruction or removal of microbes but rather on delaying the initiation of growth or hindering growth once it has begun. For food preservation to succeed, one must be able to manipulate the microbial growth curve. Several steps can lengthen the lag phase or positive acceleration phase of a population: (a) preventing introduction of microbes by reducing contamination (fewer organisms give a longer lag phase), (b) avoiding addition of actively growing organisms that may be found on unclean containers, equipment, and utensils, and (c) creating unfavorable environmental conditions for growth. The last step is the most important in food preservation and can be accomplished with low water activity, extremes of temperature, irradiation, low pH, adverse redox potential, and added inhibitors and preservatives (Table 47-6). Some of these steps may only damage or injure microorganisms; hence, the need for multiple barriers becomes essential.60 For each of these steps to be effective, other factors must be considered. For example, the number of organisms present determines kill rate: smaller numbers give faster kill rates. Vegetative cells are most resistant to lethal treatments when in late lag or stationary phase and least resistant when in log phase of growth.

Asepsis/Removal
Keeping microorganisms out of food is often difficult during food production; processing and post-processing are much easier places to apply asepsis. Protective coverings of foods such as skins, shells, and hides are often removed during processing, thereby exposing previously sterile food to contaminating microbes. Raw agricultural commodities normally carry a natural bioburden upon entering the processing plant. Packaging is the most widely used form of asepsis and includes wraps, packages, cans, and the like. Removal of microorganisms from foods is not very effective. Washing of fruits and vegetables can remove some surface microorganisms; however, if wash water becomes dirty, it can add microbes to the food. Trimming is an effective way to remove spoiled or damaged parts. Filtration is good for clear liquids (juices, beer, soft drinks, wine, and water) but of little value for solid foods. Centrifugation, such as that used in sedimentation or clarification steps, is not useful for removing bacteria or viruses.
Modified Atmosphere Conditions
Altering the atmosphere surrounding a food can be a useful way to control microbes. Examples include packaging with vacuum, CO2, N2, or combinations of inert gases with or without oxygen. Some CO2 accumulation is possible during fermentations or vegetable respiration. It is important to note that vacuum packaging can lead to favorable environments for proliferation of anaerobic pathogens such as Clostridium botulinum.
High Temperature Preservation
Use of high-temperature processing is based on destroying microbes, though it may only injure certain thermoduric microbes. Not all microorganisms are killed; spore formers, in particular, usually survive.61 Other barriers are combined with a thermal process to achieve adequate safety and product shelf life. Commercial sterilization, used in the canning process, destroys all viable microbes that can spoil the product. Thermophilic spores may survive but will not grow under normal storage conditions. Several factors affect the heat resistance of microorganisms in foods.61 Species variability, the ability to form spores, and the condition of the microbial population all affect heat resistance. Environmental factors, such as variability in food composition and the presence of other preservative measures, also dictate thermal resistance. For example, heat resistance increases with decreasing water activity; hence, moist heat is more effective than dry heat. High-fat foods tend to increase the resistance of cells. A larger initial number of microorganisms means a greater apparent heat resistance. Older (stationary phase) cells are more resistant to heat than younger cells. Resistance increases as growth temperature increases; a microbe with a high optimum temperature for growth will generally have a high heat resistance. Addition of other inhibitors, such as nitrite, will decrease resistance. Likewise, high-acid foods (pH less than 4.6) will not generally support growth of pathogens. The time-temperature relationship is a very important factor governing heat resistance of a microbial population: as temperature increases, the time needed for a given kill decreases. The relationship also depends on the type and size of the food container. Larger containers require longer process times; metal conducts heat better than glass, which can shorten process times. Microorganisms are killed by heat at a rate nearly proportional to the numbers present.
This is a log order of death, which means that at a constant temperature the same percentage of a population dies in a given time interval regardless of the population size (Fig. 47-1). For example, 90% die in 30 seconds, 90% of the remainder die in the next 30 seconds, and so on. Thus, as the initial number of organisms increases, the time required for the reduction of all organisms at a given temperature also increases.

[Figure 47-1. Typical heat inactivation curve for a bacterial population: survivors (log/mL) versus heating time (sec) at 121°C. D = 30 sec.]

Food microbiologists express this time-temperature relationship by calculating a number of constants. D value is the time required to reduce a population by one log cycle at a given temperature. Thermal Death Time (TDT) is the time needed to kill a given number of organisms at a given temperature. Thermal Death Point (TDP) is the temperature needed to kill a given number of organisms in a set time (usually 10 minutes). In food canning, the time-temperature profile must be calculated for each size container, for each food type, and for each retort used. When done correctly, these time-temperature conditions provide a large margin of safety, since one rarely knows the numbers and types of microbes in a given container, but one must assume that C. botulinum is present. To ensure safety, inoculated pack studies are done using Clostridium sporogenes PA 3679, which is six times more heat resistant than C. botulinum. A known number of PA 3679 spores are added to cans fitted with thermocouples. Cans are then processed at 120°C (250°F) and held for various time periods. Survivors are enumerated to construct a thermal death curve for that particular food, and a D value is calculated. For canned foods, a 12D margin of safety is used: heat at a given temperature is applied for a time equal to 12 × D, that is, 12 log-cycle reductions of PA 3679. Therefore, if a can had 10^9 spores, only 1 in 1000 cans would have a viable spore. Thus, the probability of survival for C. botulinum would be 1 in 10^12 if a can is heated at 250°F for 3 minutes. A minimum botulinum cook is one where every particle of food in a container reaches 250°F and remains at that temperature for at least 3 minutes. Several factors affect heat transfer and penetration into food packages. Food type (liquids, solids, size, and shape) determines mixing effects during heating.
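The D-value arithmetic above can be checked with a few lines of code. This is a sketch of the standard log-linear death model (the function names are mine, not from the text): survivors after heating for time t at a fixed temperature follow log10(N) = log10(N0) − t/D.

```python
def survivors(n0, t, d_value):
    """Log-linear thermal death: one log cycle is lost every D units of time."""
    return n0 * 10 ** (-t / d_value)

def process_time(d_value, log_reductions=12):
    """Holding time for a 12D (botulinum-cook style) margin of safety."""
    return d_value * log_reductions

# Figure 47-1 example: D = 30 sec at 121°C. One D drops the count one log cycle.
print(survivors(1e6, 30, 30))  # ~1e5 survivors

# 12D logic from the text: a can with 10^9 spores heated for 12 x D leaves
# 10^(9-12) = 10^-3 expected survivors per can, i.e., ~1 in 1000 cans.
print(survivors(1e9, process_time(30), 30))
```

Note how the model reproduces the text's claim directly: the kill is proportional to the number present, so absolute sterility is never reached, only an acceptably small survival probability.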
Conduction occurs with solid foods (pumpkin) and results in slow heat transfer because there is no mixing of contents. Convection gives liquids (juice) faster heat transfer due to mixing by currents or mechanical agitation. A combination of conduction and convection is observed with particulates in liquid (peas), though heating is primarily by convection and depends on the viscosity of the liquid component. Container size, shape, and composition are important: tall, thin cans transfer heat faster than short, round cans, and large cans take longer than small cans. Metal (tin, steel, and aluminum) containers transfer heat faster than glass, resulting in shorter process times. Plastics can have rapid heat transfer due to their thinness. Retort pouches, which are laminates of foil and plastic, have rapid heat transfer; however, pinhole problems can occur. Preheating foods prior to filling containers and preheating the retort will shorten process time. Rotation or agitation of cans during processing increases convection, giving faster heating. Canning is the preservation of foods in hermetically sealed containers, usually by heat treatments. The typical sequence in canning is as follows. Freshly harvested, good-quality foods are washed to remove soils. Next, a blanch or mild heat treatment is applied to set the color of fruits and vegetables, inactivate enzymes, purge dissolved gases, and kill some microorganisms. Clean containers are then filled, leaving some head space. Hot packing is the filling of preheated food to give faster processing, although cold packing can also be done. Containers are sealed under vacuum and placed into a retort, which is sealed and heated with pressurized steam. After heating, cans should be rapidly cooled to avoid overcooking and to prevent growth of thermophiles. Cooling is done by submerging cans in a sanitized water bath, which can cause problems if pinhole leaks allow water to enter containers.
Less severe heat processing is pasteurization, which usually involves heating at less than 100°C. Pasteurization has two purposes: to destroy all pathogens normally present in a product and to reduce the numbers of spoilage microorganisms. This thermal process kills some but not all microorganisms present in the food. Pasteurization is used when more rigorous heat treatments might alter food quality; for example, overheated milk will coagulate, brown, and burn. It is appropriate when spoilage microorganisms are not heat resistant and when surviving microbes can be controlled by other methods. Another reason for pasteurization is to kill competing microorganisms to allow for a desirable fermentation with starter cultures; pasteurization is used in this way in the manufacture of cheeses, wines, and beers. Milk pasteurization may use any of three equivalent treatments. Low Temperature Long Time (LTLT) treatment uses 145°F (63°C) for 30 minutes. High Temperature Short Time (HTST) uses 161°F (72°C) for 15 seconds. Ultra High Temperature or ultrapasteurized (UHT) treatment uses 138°C for only 2 seconds; UHT processes are used for shelf-stable products. Most cooking methods heat foods at or below 100°C: baking, roasting, simmering, boiling, and frying (the oil is hotter, but the internal temperature of the food rarely reaches 100°C). All pathogens are usually killed except spore formers. Microwaving does not exceed 100°C and can result in uneven heating; microwave cooking should therefore allow an equilibration time after removal from the oven for more even heating.62,63
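One way to see why 63°C for 30 minutes and 72°C for 15 seconds count as equivalent treatments is the z-value relationship used in thermal processing: each rise of z °C cuts the required holding time tenfold. Back-calculating an apparent z from the LTLT and HTST figures is an illustration of the relationship, not a regulatory calculation; the function name is mine:

```python
import math

def z_value(temp1, time1, temp2, time2):
    """z (°C): the temperature rise that reduces the required heating time
    tenfold, from two equivalent time-temperature treatments."""
    return (temp2 - temp1) / math.log10(time1 / time2)

# LTLT: 63°C for 30 min (1800 s); HTST: 72°C for 15 s
z = z_value(63, 1800, 72, 15)
print(round(z, 1))  # ~4.3°C per tenfold reduction in holding time
```

So the two milk treatments lie on (approximately) the same lethality line; UHT is a harsher process aimed at shelf stability, not just pasteurization equivalence.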
Low-Temperature Preservation
Low temperatures retard chemical reactions, and refrigeration slows microbial growth rates. Freezing prevents growth of most microorganisms by lowering water activity. Several psychrotrophic pathogens (Listeria monocytogenes, Yersinia enterocolitica, and nonproteolytic Clostridium botulinum) are able to multiply at refrigeration temperatures.64 Among the factors influencing chill storage, the temperature of the compartment is critical.65 The temperature of food products should be held as low as possible. Relative humidity should be high enough to prevent dehydration but not so high as to favor growth of microorganisms. Air velocity in coolers helps to remove odors, control humidity, and maintain uniform temperatures. The atmosphere surrounding food during chill storage can also affect microbial growth, and modified atmosphere packaging can help ensure safe chill-stored foods. Some plant foods respire, removing O2 and releasing CO2. Ultraviolet irradiation can be used to kill microorganisms on surfaces and in the air during chill storage of foods. For chill storage to be effective in controlling microorganisms, cooling should be rapid, and the temperature should be maintained as low as possible for refrigerated foods (less than 40°F). Thawing of frozen foods presents special problems because drip loss provides ample nutrients for microorganisms. Thawing should be done as rapidly as possible and the food used as quickly as possible to avoid opportunity for microbial growth. Often, thawing is done at room temperature over many hours, exposing surfaces to ambient temperatures for extended periods. Another problem is incomplete thawing of large food items (turkeys): when a large item that is not completely thawed is cooked, the internal temperature may not reach levels lethal to even the most heat-sensitive enteric pathogen.
In fact, a spike in the number of salmonellosis and campylobacteriosis outbreaks occurs around the Thanksgiving and Christmas holidays because of consumption of undercooked turkey and stuffing.
Drying
Foods can be preserved by removing or binding water. Any treatment that lowers water activity can reduce or eliminate growth of microorganisms. Some examples include sun drying, heating, freeze drying, and addition of humectants. Humectants act not by removing water but rather by binding water to make it unavailable to act as a solvent. Humectants in common use are salt, sugars, and sugar alcohols (sorbitol). Intermediate moisture foods are those that have 20–40% moisture and a water activity of 0.75–0.85. Examples include soft candies, jams, jellies, honey, pepperoni, and country ham. These foods often require antifungal agents for complete stability.
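How strongly a humectant lowers water activity can be roughed out with Raoult's law, which treats the solution as ideal. Real foods deviate substantially from ideality, so this is only an order-of-magnitude sketch, and the function is my own illustration:

```python
def raoult_aw(g_water, g_solute, solute_molar_mass, particles_per_formula=1):
    """Idealized Raoult's-law estimate of water activity:
    aw ~ mole fraction of water. Electrolytes contribute one particle per ion."""
    n_water = g_water / 18.02                     # moles of water
    n_solute = (g_solute / solute_molar_mass) * particles_per_formula
    return n_water / (n_water + n_solute)

# 100 g water + 10 g NaCl (58.44 g/mol, dissociating into ~2 particles)
print(round(raoult_aw(100, 10, 58.44, 2), 3))  # ~0.94
```

Even this idealized estimate shows why substantial solute loads are needed to push aw into the 0.75–0.85 range of intermediate moisture foods.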
Preservatives
Food preservatives can be extrinsic (intentionally added), intrinsic (normal constituents of food), or developed (produced during fermentation).60,61 Factors affecting preservative effectiveness include
(a) concentration of inhibitor, (b) kind, number, and age of microorganisms (older cells more resistant), (c) temperature, (d) time of exposure (if long enough some microbes can adapt and overcome inhibition), and (e) chemical and physical characteristics of food (water activity, pH, solutes, etc.). Preservatives that are cidal are able to kill microorganisms when large concentrations of the substances are used. Static activity results when sublethal concentrations inhibit microbial growth. Some examples of inorganic preservatives are NaCl, nitrate and nitrite, and sulfites and SO2. NaCl lowers water activity and causes plasmolysis by withdrawing water from cells. Nitrites and nitrates are curing agents for meats (hams, bacons, sausages, etc.) to inhibit C. botulinum under vacuum packaging conditions. Sulfur dioxide (SO2), sulfites (SO3), bisulfite (HSO3), and metabisulfites (S2O5) form sulfurous acid in aqueous solutions, which is the antimicrobial agent. Sulfites are widely used in the wine industry to sanitize equipment and reduce competing microorganisms. Wine yeasts are resistant to sulfites. Sulfites are also used in dried fruits and some fruit juices. Sulfites have been used to prevent enzymatic and nonenzymatic browning in some fruits and vegetables (cut potatoes). Nitrites can react with secondary and tertiary amines to form potentially carcinogenic nitrosamines during cooking; however, current formulations greatly reduce this risk. Nitrates in high concentrations can result in red blood cell functional impairment; however, at approved usage levels they are safe.66,67 Sulfiting agents likewise can cause adverse respiratory effects to susceptible consumers, particularly asthmatics.68,69 Therefore, use of these two classes of agents is strictly regulated. A number of organic acids and their salts are used as preservatives. 
These include lactic acid and lactates, propionic acid and propionates, citric acid, acetic acid, sorbic acid and sorbates, benzoic acid and benzoates, and methyl and propyl parabens (benzoic acid derivatives). Benzoates are most effective when undissociated and therefore require low pH values (2.5–4.0) for activity. The sodium salt of benzoic acid is used for ease of solubility in foods. When esterified (parabens), benzoates are active at higher pH values. Benzoates are primarily used in high-acid foods (jams, jellies, juices, soft drinks, ketchup, salad dressings, and margarine). They are active against yeasts and molds, but minimally so against bacteria. They can be used at levels up to 0.1%. Sorbic acid and sorbate salts (potassium sorbate is most effective) are effective at pH values less than 6.5, a higher pH range than benzoates. Sorbates are used in cheeses, non-yeast baked goods, beverages, jellies, jams, salad dressings, dried fruits, pickles, and margarine. They inhibit yeasts and molds but few bacteria, C. botulinum being a notable exception. They prevent yeast growth during vegetable fermentations and can be used at levels up to 0.3%. Propionic acid and propionate salts (calcium propionate is most common) are active against molds at pH values less than 6. They have limited activity against yeasts and bacteria. They are widely used in baked products and cheeses. Propionic acid is found naturally in Swiss cheese at levels up to 1%. Propionates can be added to foods at levels up to 0.3%. Acetic acid is found in vinegar at levels up to 4–5%. It is used in mayonnaise, pickles, and ketchup, primarily as a flavoring agent. Acetic acid is most active against bacteria but has some activity against yeasts and molds, though less than sorbates or propionates. Lactic acid, citric acid, and their salts can be added as preservatives, to lower pH, and as flavorants; they are also developed during fermentation. These organic acids are most effective against bacteria.
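The pH dependence of benzoates and sorbates follows from the Henderson-Hasselbalch relationship, since it is mainly the undissociated acid that crosses the microbial membrane and acts as a preservative. A sketch, using a textbook pKa for benzoic acid (≈4.2) that is not given in this chapter:

```python
def undissociated_fraction(ph, pka):
    """Henderson-Hasselbalch: fraction of a weak organic acid remaining in the
    active, undissociated form at a given pH."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Benzoic acid (pKa ~4.2) is mostly undissociated, hence active, at pH 3...
print(round(undissociated_fraction(3.0, 4.2), 2))   # ~0.94
# ...but largely dissociated, hence weakly active, near neutral pH.
print(round(undissociated_fraction(6.0, 4.2), 2))   # ~0.02
```

This is why benzoates are restricted to high-acid foods, while sorbates (pKa ≈ 4.76) and propionates (pKa ≈ 4.87) retain useful activity somewhat higher on the pH scale.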
Some antibiotics may be found in foods; although medical compounds are not allowed in human food, trace amounts used for animal therapy may occasionally be found. Bacteriocins, which are antimicrobial peptides produced by microorganisms, can also be found in foods. An example of an approved bacteriocin is nisin, which is allowed as an additive in process cheese food. Some naturally occurring antimicrobial proteins (the enzyme lysozyme and the iron-binding protein lactoferrin) can be used as preservatives in limited applications where denaturation is not an issue. Some spices, herbs, and essential oils have antimicrobial activity, but such high levels are needed that the food becomes unpalatable. Ethanol has excellent preservative ability but is underutilized because of social stigma. Wood smoke, whether natural or applied in liquid form, contains several phenolic antimicrobial compounds in addition to formaldehyde.
Wood smoke is most active against vegetative bacteria and some fungi; bacterial endospores are resistant. Activity is correlated with phenolic content. Carbon dioxide gas can dissolve in food tissues, lowering pH and inhibiting microbes. Developed preservatives produced during fermentation include organic acids (primarily lactic, acetic, and propionic), ethanol, and bacteriocins. All added preservatives must meet government standards for direct addition to foods; most are classified as GRAS (generally recognized as safe).
Irradiation
Foods can be processed or preserved with a number of types of radiation. Nonionizing radiations used include ultraviolet, microwave, and infrared; these function by exciting molecules. Ionizing radiations include gamma rays, x-rays, β-rays, protons, neutrons, and α-particles. Neutrons make food radioactive, while β-rays (low-energy electrons), protons, and α-particles have little penetrating ability and are of little practical use in foods. Ionizing gamma rays, x-rays, and high-energy electrons produce ions by breaking molecules and can be lethal to microorganisms. Ultraviolet (260 nm) lamps are used to disinfect water, meat surfaces, utensils, air, walls, ceilings, and floors. UV can control film yeasts in brines during vegetable fermentations. UV effectiveness is dose dependent: longer exposure times increase effectiveness, and intensity depends on lamp power, distance to the object, and the amount of interfering material in the path. For example, humidity greater than 60% reduces intensity. UV will not penetrate opaque materials and is good only for surface decontamination. Infrared heats products but has little penetrating power. Microwaves cause rapid oscillation of dipole molecules (water), which produces heat, and they have excellent penetrating power. However, there are problems with the time-temperature relationship because microwaves cause foods to reach high temperatures too quickly; moreover, microwave-treated foods rarely exceed 100°C. Thus, instances of microbial survival in these foods have been reported.62,63 X-rays have excellent penetrating ability but are quite expensive and are not widely used in the food industry. Gamma rays from radioactive sources (Cs-137 and Co-60) have good penetration and are widely used to pasteurize and sterilize foods. Electron beam generators are also gaining appeal as ionizing radiation sources for processing foods. Food irradiation is much more widespread in countries other than the United States.
There is much untapped potential to use ionizing radiation to reduce or eliminate microbial pathogens in foods.70,71 This technology remains underexploited due to consumer wariness about its safety.72–74
Fermentation
A number of foods use beneficial microorganisms in the course of their processing.61 Bread, cheeses, pickles, sauerkraut, some sausages, and alcoholic beverages are made by the conversion of sugar to organic acids, ethanol, or carbon dioxide. These three by-products not only serve as desirable flavors but also provide a significant antimicrobial barrier to pathogens. There have been instances where poorly fermented foods have been linked to food-borne illness. Furthermore, cheese made from unpasteurized milk has a distinctly higher risk of carrying pathogens than cheese made from pasteurized milk. Proper acid development and avoidance of cross contamination are essential control steps in manufacturing fermented foods. Alcoholic beverages have not been linked to food-borne disease other than excess consumption leading to ethanol toxicity.

SUMMARY
Because of the predominance of hazardous biological contaminants found in raw foods, most food-processing unit operations are designed to reduce or eliminate these hazards. Successful implementation of these processing steps can greatly minimize the risk of food-borne disease transmission. Unsuccessful implementation or failure to recognize the
need for interventions sets the stage for production of potentially dangerous products. Because of the varied nature of foods, it is imperative that prudent processors understand the inherent risks of their products and ensure the proper application of interventions to reduce these risks. This fundamentally sound recommendation will help keep processed foods competitive in the marketplace and will help maintain and enhance consumer confidence in the safety of the food supply. The intent of food processing is to deliver safe and wholesome products to the consumer. Basic food safety programs, including GMPs and sanitation, are the minimum requirements to achieve this goal. HACCP is a logical extension of these programs and focuses on preventing hazards before they occur rather than waiting for a failure and then addressing the problem. HACCP provides the most comprehensive approach to food safety in the processing environment, but it is not foolproof. Perhaps the most challenging aspect is that, even with the best-designed and best-implemented HACCP plan, it may not always be possible to "prevent, eliminate or reduce to acceptable levels" the pathogen of concern. This is particularly true of foods that are purchased by the consumer in their raw state and then cooked. A specific example is Escherichia coli O157:H7 in ground beef. Irrespective of the preventive efforts of the processor, it is not possible to ensure that the product is free of the bacterium, and there is no "acceptable level" of this organism in ground beef.

REFERENCES
1. Food and Drug Administration. Guide to minimize microbial food safety hazards for fresh fruits and vegetables. Available via the Internet at http://www.cfsan.fda.gov/~dms/prodguid.html; 1998. 2. Mead PS, Slutsker L, Dietz V, et al. Food-related illness and death in the United States. Emerg Infect Dis. 1999;5:607–25. 3. Olsen SJ, MacKinon LC, Goulding JS, Bean NH, Slutsker L. Surveillance for foodborne disease outbreaks—United States, 1993–1997. MMWR. 2000;49(SS01):1–51. 4. Snowdon JA, Buzby JC, Roberts TA. Epidemiology, cost, and risk of foodborne disease. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:31–51. 5. Todd ECD. Foodborne disease in Canada—a 10 year summary from 1975–1984. J Food Prot. 1992;55:123–32. 6. Pierson MD, Corlett DA. HACCP; Principles and Applications. New York: Chapman and Hall; 1992. 7. Bryan FL. Risks of practices, procedures and processes that lead to outbreaks of foodborne diseases. J Food Prot. 1988;51:663–73. 8. Cliver DO, Riemann HP. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002. 9. Tauxe RV. Salmonella: a postmodern pathogen. J Food Prot. 1991;54: 563–8. 10. Gray JT, Fedorka-Cray PJ. Salmonella. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:53–68. 11. Humphrey JJ, Baskerville A, Mawer S, Rowe B, Hopper S. Salmonella enteritidis phage type 4 from the contents of intact eggs: a study involving naturally infected hens. Epidemiol Infect. 1989;103:415–23. 12. Lampel KA, Maurelli AT. Shigella. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002: 69–77. 13. Hackney CR, Dicharry A. Seafood-borne bacterial pathogens of marine origin. Food Technol. 1988;42(3):104–9. 14. Holmberg SD. Cholera and related illnesses caused by Vibrio species and Aeromonas. In: Gorbach SL, Bartlett JG, Blacklow NR, eds. Infectious Disease. Philadelphia: WB Saunders Co.; 1992:605–11. 15. 
Popovic T, Olsvik O, Blake PA, Wachsmuth K. Cholera in the Americas: foodborne aspects. J Food Prot. 1993;56:811–21. 16. Tacket CO, Brenner F, Blake PA. Clinical features and an epidemiological study of Vibrio vulnificus infections. J Infect Dis. 1984;149:558–61. 17. Sakazaki R. Vibrio. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:127–36.
18. Recommendations by the National Advisory Committee on Microbiological Criteria for Foods. Microbiological criteria for raw molluscan shellfish. J Food Prot. 1992;55:463–80. 19. Fratamico PM, Smith JL, Buchanan RL. Escherichia coli. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:79–101. 20. Tarr PI. Escherichia coli O157:H7: overview of clinical and epidemiological issues. J Food Prot. 1994;57:632–7. 21. Padhye NV, Doyle MP. Escherichia coli O157:H7: epidemiology, pathogenesis, and methods for detection in food. J Food Prot. 1992;55:555–65. 22. Kapperud G. Yersinia enterocolitica. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:113–8. 23. Altekruse SF, Swerdlow DL. Campylobacter jejuni and related organisms. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:103–12. 24. Smith JL, Fratamico PM. Factors involved in the emergence and persistence of food-borne diseases. J Food Prot. 1995;58:696–716. 25. Farber JM, Peterkin PI. Listeria monocytogenes, a food-borne pathogen. Microbiol Rev. 1991;55:476–511. 26. Harris LJ. Listeria monocytogenes. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002: 137–50. 27. Labbe RG, Juneja VK. Clostridium perfringens. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:119–26. 28. Cliver DO. Infrequent microbial infections. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:151–9. 29. Cliver DO. Viral foodborne disease agents of concern. J Food Prot. 1994;57:176–8. 30. Cliver DO. Epidemiology of viral foodborne diseases. J Food Prot. 1994;57:263–6. 31. Cliver DO. Viruses. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:161–75. 32. Dubey JP, Murrell KD, Cross JH. Parasites. In: Cliver DO, Riemann HP, eds. 
Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:177–90. 33. Wong ACL, Bergdoll MS. Staphylococcal food poisoning. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:231–48. 34. Griffiths MW, Schraft H. Bacillus cereus food poisoning. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:261–70. 35. Parkinson H, Ito K. Botulism. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:249–59. 36. Biehl ML, Buck WB. Chemical contaminants: their metabolism and their residues. J Food Prot. 1989;50:1058–73. 37. Taylor SL. Chemical intoxications. In: Cliver DO, Riemann HP, eds. Foodborne Diseases, 2nd ed. London: Elsevier Science Ltd; 2002: 305–16. 38. Chu FS. Mycotoxins. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002: 271–304. 39. Gecan JS, Cichowicz SM. Toxic mushroom contamination of wild mushrooms in commercial distribution. J Food Prot. 1993;56:730–4. 40. Johnson EA, Schantz EJ. Seafood toxins. In: Cliver DO, Riemann HP, eds. Foodborne Diseases. 2nd ed. London: Elsevier Science Ltd; 2002:211–29. 41. Department of Agriculture, Food Safety and Inspection Service. Agency Mission and Organization. Code of Federal Regulations, Title 9, Animals and Animal Products, Part 300, 2003. 42. Food and Drug Administration, Department of Health and Human Services. Product jurisdiction. Code of Federal Regulations, Title 21, Food and Drugs, Part 3, 2003. 43. Department of Agriculture, Food Safety and Inspection Service. Pathogen reduction; hazard analysis and critical control point (HACCP)
systems; action: final rule. 9 CFR Parts 304, 308, 310, 320, 327, 381, 416, and 417. Fed Reg. July 25, 1996;61(144):38805.
44. Department of Agriculture, Food Safety and Inspection Service. Inspection of eggs and egg products (Egg Products Inspection Act). Code of Federal Regulations, Title 9, Animals and Animal Products, 2003; Part 590.
45. U.S. Food and Drug Administration. Grade "A" Pasteurized Milk Ordinance, 2001 Revision. 2002. Accessed from the U.S. Food and Drug Administration web page, http://vm.cfsan.fda.gov/~ear/pmo01toc.html.
46. Food and Agriculture Organization. Understanding the Codex Alimentarius. 2003. Accessed from the Codex Alimentarius Commission web page, http://www.fao.org/docrep/w9114e/w9114e00.htm.
47. Food and Drug Administration, Department of Health and Human Services. Current Good Manufacturing Practice in Manufacturing, Packing, or Holding Human Food. Code of Federal Regulations, Title 21, Food and Drugs, 2003; Part 110.
48. Marriott NG. Personal hygiene and sanitary food handling. In: Marriott NG, ed. Principles of Food Sanitation. 4th ed. Gaithersburg, MD: Aspen; 1999:60–74.
49. Marriott NG. Pest control. In: Marriott NG, ed. Essentials of Food Sanitation. New York: Chapman and Hall; 1997:129–49.
50. FDA/MIF/IICA. Recommended guidelines for controlling environmental contamination in dairy plants. Dairy Food Environ Sanitation. 1988;8:52–6.
51. Gabis D, Faust RE. Controlling microbial growth in food processing environments. Food Technol. 1988;42(12):81–3.
52. Ingham SC, Ingham BH, Buege DR. Sanitation Programs and Standard Operating Procedures for Meat and Poultry Plants. Elizabethtown, PA: American Association of Meat Processors; 1996.
53. Marriott NG. Cleaning compounds. In: Principles of Food Sanitation. 4th ed. Gaithersburg, MD: Aspen; 1999:114–38.
54. Marriott NG. Sanitizers. In: Marriott NG, ed. Principles of Food Sanitation. 4th ed. Gaithersburg, MD: Aspen; 1999:139–57.
55. Department of Health and Human Services, Food and Drug Administration. Procedures for the safe and sanitary processing and importing of fish and fishery products; final rule. 21 CFR Parts 123 and 1240. Fed Reg. 1995;60(242):65096–65202.
56. Stevenson KE, Bernard DT. HACCP: Establishing Hazard Analysis Critical Control Point Programs. Washington, D.C.: The Food Processors Institute; 1995.
57. Noleto AL, Bergdoll MS. Production of enterotoxin by a Staphylococcus aureus strain that produces three identifiable enterotoxins. J Food Prot. 1982;45:1096–7.
58. American Meat Institute Foundation. HACCP. The Hazard Analysis Critical Control Point System in the Meat and Poultry Industry. Washington, D.C.: American Meat Institute Foundation; 1994. 59. National Advisory Committee on Microbiological Criteria for Foods. Hazard analysis and critical control point principles and applications guidelines. J Food Prot. 1998;61:1246–59. 60. Potter NN, Hotchkiss JH. Food Science. 5th ed. New York: Chapman and Hall; 1995. 61. Jay JM. Modern Food Microbiology. 5th ed. New York: Chapman and Hall; 1996. 62. Sawyer CA, Naidu YM, Thompson S. Cook/chill foodservice systems: microbiological quality and endpoint temperature of beef loaf, peas and potatoes after reheating by conduction, convection and microwave radiation. J Food Prot. 1983;46:1036–43. 63. Fruin JT, Guthertz LS. Survival of bacteria in food cooked by microwave oven, conventional oven, and slow cookers. J Food Prot. 1982;45:695–8. 64. Lechowich RV. Microbiological challenges of refrigerated foods. Food Technol. 1988;42(12):84–9. 65. Scott VN. Interaction of factors to control microbial spoilage of refrigerated foods. J Food Prot. 1989;52:431–5. 66. Nitrite Safety Council. A survey of nitrosamines in sausages and drycured meat products. Food Technol. 1980;34:45–53. 67. Hotchkiss JH, Cassens RG. Nitrate, nitrite, and nitroso compounds in foods. Food Technol. 1987;41(4):127–34. 68. Stevenson DD, Simon RA. Sensitivity to ingested metabisulfites in asthmatic subjects. J Allergy Clin Immunol. 1981;68:26. 69. Schwartz HJ. Sensitivity to ingested metabisulfite: variations in clinical presentation. J Allergy Clin Immunol. 1983;71:487–9. 70. Ingram M, Roberts TA. Ionizing irradiation. In: Microbial Ecology of Foods. Vol I. New York: Academic Press; 1980:46–7. 71. Radomyski T, Murano EA, Olson DG, Murano PS. Elimination of pathogens of significance in food by low-dose irradiation: a review. J Food Prot. 1994;57:73–86. 72. WHO. Wholesomeness of Irradiated Food. 
World Health Organization Technical Report Series, No. 659. Geneva: WHO; 1981. 73. Institute of Food Technologists. Radiation preservation of foods. Food Technol. 1983;37:55–60. 74. Skala JH, McGown EL, Waring PP. Wholesomeness of irradiated foods. J Food Prot. 1987;50:150–60.
Water Quality Management and Water-Borne Disease Trends
48
Patricia L. Meinhardt^a
Water is a necessity for human survival, and access to safe drinking water is a required cornerstone of public health. In concert with improved pasteurization and refrigeration of foods and childhood immunizations, modernized sanitation methods and access to potable water have increased the life span and improved the general health of American citizens more than any other advancement in the field of medicine.1 Conscientious water quality management and access to renewable water resources are vital to every sector of our industrialized society and every sector of our nation's agricultural economy.2 Early American settlements were located near water, and water reserves were generally sufficient for our country's development and prosperity during initial phases of growth. However, even during these early periods of U.S. history, there were recorded instances where communities disappeared as a result of declining or contaminated water supplies. Currently, there is a water crisis in the United States that has resulted from population growth and urbanization placing pressure on fixed sources of freshwater available locally and, at times, regionally. These water access pressures have resulted in insufficient quantity and deteriorating quality of water supplies in many regions of the United States. These water quantity and water quality challenges arise from the fact that the amount of water in the world is fixed at approximately 3.59 × 10^20 gallons in all. Of this amount, only about 0.2% is freshwater that is readily available for human use. Through the hydrologic cycle, freshwaters run to the sea and become saline, but evaporation of water from the sea and precipitation on land restores these freshwaters continuously, so that the quantity of freshwater is also relatively fixed and limited.
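The fixed-stock figures above can be made concrete with a rough back-of-the-envelope calculation. The total-stock and 0.2% figures come from the text; the world population value is an assumed round number for illustration only:

```python
# Rough illustration of the fixed global water stock described above.
# The 3.59e20-gallon total and the 0.2% readily available freshwater
# fraction come from the text; the population figure is an assumption.

TOTAL_WATER_GALLONS = 3.59e20       # approximate global water stock
READILY_AVAILABLE_FRESH = 0.002     # ~0.2% readily available for human use
WORLD_POPULATION = 8e9              # assumed round number for illustration

fresh_gallons = TOTAL_WATER_GALLONS * READILY_AVAILABLE_FRESH
per_capita = fresh_gallons / WORLD_POPULATION

print(f"Readily available freshwater: {fresh_gallons:.2e} gallons")
print(f"Stock per person:             {per_capita:.2e} gallons")
```

The point of the sketch is not the absolute numbers, which are approximations, but that the usable stock is a fixed quantity divided among a growing population.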
Ongoing stewardship and an increasing prioritization of water quality management will be essential in order to ensure access to a water supply that provides both the quantity and quality necessary to preserve this precious environmental resource. Preservation of water quality and prevention of water-borne disease is a complicated task requiring a coordinated effort from many diverse stakeholders, including health-care providers, local and national public health authorities, water utility practitioners and engineers, water quality and regulatory specialists, environmental scientists and engineers, basic science researchers, and water consumers. In order to work together to maintain and improve water quality management in the United States, each stakeholder must understand (a) the basic parameters of water use and sources; (b) the challenges of water source protection and water contamination; (c) the trends in water-borne disease and the health effects associated with exposure to contaminated waters; and (d) the provision of safe drinking water and the treatment of wastewaters. The intent of this chapter is to provide an overview of each of these essential components of water quality management in the United States and the subsequent impact on water-borne disease and public health.

^a This chapter is an edited version of the chapters previously prepared for earlier editions by John B. Conway and David A. Okun.

FUNCTIONS OF WATER
The uses of water in any community are numerous and diverse, and the requirements for the quantity and quality of water for these multiple functions are wide ranging and multifaceted. Conventionally, it has been both convenient and economical to provide a single water supply sufficient in quantity to serve all uses and suitable in quality to meet community drinking water standards, even though only a small fraction of the total water supply used in a community is actually used for drinking water. The uses and applications for water are numerous and include but are not limited to (a) drinking and food preparation purposes; (b) personal hygiene activities including bathing and laundering; (c) residential and commercial heating and air conditioning; (d) urban irrigation and street cleaning; (e) recreational venues including swimming and wading pools, waterparks, and hot tubs and spas; (f) amenity purposes such as public fountains and ornamental ponds; (g) power production from hydropower and steam generation; (h) commercial and industrial processes including bottled water and food production; (i) residential and commercial fire protection; (j) agricultural purposes including irrigation and aquaculture; and (k) the process of carrying away human and industrial wastes from all manner of establishments and community facilities. The quantities required for each type of multiple water use in the United States vary substantially. The allocation of water use in U.S. communities is presented in Table 48-1 and reflects the fact that 40% of water use is residential use. In a typical American community, the average per capita consumption is between 50 and 100 gallons per day as illustrated in Table 48-2. In summer months, this demand may increase by 50% resulting from such activities as increased urban irrigation. It has been suggested that the U.S. 
per capita water usage could be substantially reduced by water conservation practices; these dramatic water savings are illustrated in Table 48-3. It is important to note that, by comparison, per capita consumption in Asia and Africa may be as little as 13 gallons per day.

TYPES OF WATER SYSTEMS
To service their residents, communities require sources of water, transmission pumps and mains, treatment plants, and distribution systems for delivering water to each user. Transmission systems and treatment plants need to be designed for the maximum water usage day, which occurs generally in the summer months and is about 150% of the average daily demand. In addition, each distribution system should meet the peak demand during the day, which may be 150–300% of the maximum daily demand, being larger for smaller communities where the peak is determined by requirements for fire protection. Concomitant requirements include a sewerage system for collecting the wastewaters from each user in the community and treatment facilities for rendering the wastewaters suitable for disposal or reuse. Currently, some 80% of the U.S. population in more than 60,000 communities is served by water supply and sewerage systems. The remaining population, not always in rural areas, is served by individual wells and on-site disposal systems, generally septic tanks and tile fields for percolation of the septic tank effluents.

Copyright © 2008 by The McGraw-Hill Companies, Inc.

TABLE 48-1. WATER USE ALLOCATION IN U.S. COMMUNITIES

Use                  %
Residential         40
Commercial          15
Industrial          25
Public               5
Unaccounted for     15
Total              100

PROPERTIES OF WATER
Water is a unique and remarkable substance. "Pure" water is a clear, colorless, tasteless, and odorless fluid. It is also a strong solvent and in nature washes gases from the atmosphere, dissolves minerals and humic substances from the soil through which it flows, and carries substantial quantities of silt as it moves through the environment. Many of the natural and man-made uses of water affect its quality, and, accordingly, water is seldom appropriate for human use without some kind of treatment. In addition, a varied array of microorganisms find their way into waters and, depending on environmental conditions, may replicate or expire. Some of these microorganisms are beneficial or at least not harmful, while others may be pathogenic to man and other animals. Many scourges of mankind have been water-borne, and the potential for spread of enteric disease is always present, even today. At normal atmospheric pressure, water freezes at 0°C and boils at 100°C. Because water reaches its greatest density at 4°C, ice floats on the surface, keeping bodies of water from freezing solid, an important phenomenon that keeps aquatic creatures alive and permits lakes and reservoirs to serve as sources of water even at subfreezing temperatures. The specific heat of water is high, resulting in the ameliorating effects of large water bodies on global and regional climate and temperatures. The surface tension of water is also high, resulting in the concentration of many water contaminants on its surface in monomolecular layers. Water is an important constituent of all living matter, constituting approximately 70% of the weight of the human body. It is a very effective and efficient medium for transferring nutrients and removing waste materials from the human body as well as maintaining thermostability through heat transfer and evaporation. The water intake of a typical adult varies from 1 to 3 quarts per day, about half of which is lost through evaporation from the skin and lungs and the other half through excretion of feces and urine. Managing human excreta properly is the central challenge if adequate sanitation is to be maintained and the spread of water-borne disease is to be prevented. Therefore, one primary goal of water quality management involves protecting water supplies for human use from damage by human use.

TABLE 48-2. ALLOCATION OF INTERIOR RESIDENTIAL WATER USE

Use                      %
Drinking and cooking      5
Bathing                  30
Toilet flushing          40
Laundering               15
Dishwashing               5
Miscellaneous             5
Total                   100

SOURCES OF WATER
Water may be abstracted for use from any one of a number of points in its movement through the hydrological cycle illustrated in Fig. 48-1. The most suitable water source to be developed for use by any community depends on the quantity and quality of the source under consideration for development. The selection of the most appropriate water source for human use in a specific region may result from a wide variety of options available, including the most common sources of water listed below.3

Rainwater. Rainwater is the source of all freshwater in the world. It may be collected directly from roofs and other prepared catchment systems and stored in cisterns for later use. Since catchment areas for the direct capture of rainwater are necessarily limited in size, such water supplies are useful only for individual households or small communities. Households in the Southwest are examples of the former, and the paved catchments of Gibraltar are examples of the latter. The quality of rainwater is generally reasonable, but it may be contaminated by gases and particles that are washed out of the atmosphere or by the accumulation of dust and other debris in catchment systems. For example, the gaseous sulfur and nitrogen oxides emitted from power plants that use fossil fuels react with atmospheric water, forming dilute solutions of sulfuric and nitric acids. The precipitation of these acids, or "acid rain," has resulted in serious environmental impacts on surface water quality and on the biota that depend upon water in affected areas.

Surface Water. The earliest sources of water for large communities in the United States were rivers and lakes, which readily provided the quantity needed for economic growth and development. However, the large drainage areas required for such run-of-river or lake supplies inevitably subjected them to activities such as urban and industrial development that resulted in degradation of the quality of water.
In the United States, water supplies for the cities of Philadelphia, Cincinnati, and New Orleans are typical of run-of-river supply sources. Unfortunately, such water supplies have historically been the source of water-borne epidemics (e.g., in the nineteenth century) and may still pose water-borne disease hazards where treatment is inadequate. The development of filtration processes and of disinfection by chlorination at about the turn of the twentieth century rendered such surface waters suitable for community water supplies. However, since the onset of the chemical revolution beginning in the mid-twentieth century, waters obtained from large watersheds, such as those of the Ohio and Mississippi rivers, have inevitably contained numerous synthetic organic chemicals used in industry and agriculture. Some of these water-borne chemical agents have been identified as carcinogenic, mutagenic, teratogenic, or otherwise harmful to human health. Many of these chemical compounds are not readily removed in wastewater or water treatment processes, nor are they naturally degraded in the environment during passage downstream. As a result, it was the identification of many synthetic organic chemicals in the lower Mississippi River that provided the impetus for the passage of the Safe Drinking Water Act (PL 93-523) in 1974. The groundwork for this important act was provided by the Community Water Supply Survey conducted by the Public Health Service (PHS) in 1969,4 which indicated that many public water supply systems, particularly those serving small communities, were not providing adequate water service and were not in a position to meet appropriate drinking water standards.

TABLE 48-3. WATER-SAVING IMPACT OF AVERAGE PERSONAL WATER USE EMPLOYING WATER CONSERVATION PRACTICES OR FIXTURES VERSUS TYPICAL WATER USE*,†

Activity (Frequency)                    Circumstances                  Water Used
Toilet (four flushes per day)           Conventional toilet            3.5–7 gallons per flush
                                        Ultra-low-flush toilet         1.6 gallons per flush
Shower (once a day for 5 minutes)       Conventional showerhead        3–8 gallons per minute
                                        Low-flow showerhead            2.5 gallons per minute
Bath (once a day)                       Full bathtub                   36 gallons
                                        Tub 1/4 to 1/3 full            9–12 gallons
Shaving (once a day)                    Open tap                       5–10 gallons
                                        1 full basin                   1 gallon
Brushing teeth (twice a day)            Open water tap                 2–5 gallons
                                        Brush and rinse                1/4–1/2 gallon
Washing hands (twice a day)             Open water tap                 2 gallons
                                        1 full basin                   1 gallon
Cooking‡ (washing produce)              Open water tap                 5–10 gallons
                                        1 full kitchen basin           1–2 gallons
Automatic dishwasher                    Standard cycle                 10–15 gallons
  (once per day, full load)             Short cycle                    8–13 gallons
Manual dishwashing (once a day)         Open water tap                 30 gallons
                                        Full basin, wash and rinse     5 gallons
Laundry§ (1/3 load a day)               Portion of full load           35–50 gallons per full load
                                        Full load                      10–15 gallons for a full load
Car washing (twice a month)             Hose with shut-off nozzle      100 gallons per month
                                        5 full, 2-gallon buckets       20 gallons per month

* Numbers are based on approximate, average household use, since water use will vary with individual habits and lifestyles, differing water pressure, and the age and model of appliances.
† The average per capita consumption of water by Americans is 50 gallons per day, with 40 gallons attributed to interior residential use and an additional 10 gallons for outdoor use.
‡ The real cooking figure will be higher to include boiling water, rinsing utensils, and other uses.
§ The laundry figure is based on two full loads per person, per week.
A safer option is the use of smaller watersheds, which do not have naturally sustained flows during all periods of the year but which, by storing wet-weather flows in reservoirs, can provide substantial quantities of water for use during dry periods. Such small watersheds are generally found in upland areas and are often free of the major urban and industrial development that may result in the type of chemical pollution that has become such a growing concern for U.S. communities.
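The storage principle described above can be sketched as a simple monthly mass balance. All of the numbers below (capacity, inflows, demand) are hypothetical values chosen only to illustrate how a reservoir carries wet-season flows into dry months, not data for any real watershed:

```python
# Minimal monthly reservoir mass balance illustrating how storing
# wet-weather flows lets a small watershed meet a steady demand.
# All quantities are hypothetical, in millions of gallons (MG).

CAPACITY = 500.0   # reservoir capacity (assumed)
DEMAND = 100.0     # steady monthly demand (assumed)

# Hypothetical monthly inflows: wet winter, dry summer.
inflows = [220, 200, 180, 150, 90, 40, 20, 20, 40, 90, 150, 200]

storage = CAPACITY / 2  # assume the reservoir starts half full
shortfall = 0.0
for inflow in inflows:
    storage = min(storage + inflow, CAPACITY)  # spill any excess
    supplied = min(DEMAND, storage)            # deliver what is available
    shortfall += DEMAND - supplied
    storage -= supplied

print(f"Total annual inflow: {sum(inflows)} MG")
print(f"Total annual demand: {12 * DEMAND} MG")
print(f"Unmet demand:        {shortfall} MG")
```

Even though several dry months bring far less inflow than the 100 MG monthly demand, the stored wet-season water covers the deficit, which is exactly the role the text assigns to upland reservoirs.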
Figure 48-1. The water cycle. (Adapted from Fair GM, Geyer JC, Okun DA. Elements of Water Supply and Wastewater Disposal. New York: John Wiley & Sons; 1971.)
Communities such as Boston, New York, and San Francisco are examples of cities that have developed upstream water sources. These upstream sources have often been of such high quality that until recently the only treatment required has been disinfection. Pressure from development of previously protected watersheds is now threatening to degrade them, and special efforts will continue to be required in the future to protect such important watersheds and preserve them. The watershed area serving New York City is currently undergoing extensive evaluation, and increased protection measures are under consideration for implementation. Natural and human-made lakes may improve or degrade waters drawn from their originating watersheds. Improvement of water quality may result from storage in a lake, which provides opportunities for coagulation and sedimentation of the colloidal and suspended solids that are tributary to the lake from rivers and streams. Some measure of disinfection is accomplished by exposure to sunlight, provided that there is time for biochemical stabilization of organic matter and for degradation of water-borne microorganisms. Furthermore, storage in a lake or reservoir attenuates high levels of contaminants that may result from natural rainstorms or from man-made contamination events on the watershed, such as spills from tankers or transportation accidents. On the other hand, storage in lakes or reservoirs may degrade water quality through processes such as eutrophication, biomagnification, and thermal stratification. Eutrophication, or overnourishing of a water body, occurs naturally as a result of the influx of nutrient materials, particularly phosphorus and nitrogen, which support the growth of algae. In a standing body of water with adequate sunlight, these nutrients tend to accumulate in algae, and as the algae settle, the lake tends slowly to fill over time.
Urban and agricultural development on a watershed adds significantly to sediment and nutrient input to a lake; the former reduces the capacity of the lake and the latter accelerates the process of eutrophication to the point where many of the former uses of the lake are adversely affected. The increasing concentrations of algae are difficult to remove in water treatment, and they often impart unpleasant tastes and odors to the water. Another impact of storage is the bioaccumulation of small concentrations of chemicals and other contaminants that are absorbed by aquatic life in the lake. This bioaccumulation process may affect the quality of aquatic life fished from these lakes and may also increase the levels of adverse contaminants beyond what they would be in an actively flowing river. Lake quality and water source protection are further affected by thermal stratification. During the summer months, warmer and lighter water accumulates in the upper layers of most lakes. The water density difference resulting from this temperature differential is sufficient to prevent the lower layers of the lake from obtaining atmospheric oxygen. The following processes may result: (a) organic matter reduces the dissolved oxygen in the lower levels of the lake, often resulting in anaerobic conditions; (b) hydrogen sulfide and carbon dioxide begin to accumulate; (c) increasing acidity leads to increasing dissolution of such metals as iron and manganese; and (d) microorganisms tend to accumulate at the thermocline, the zone of rapidly changing temperature and density that separates the upper and lower layers of the lake. As a result of these hydrological phenomena, source water that might otherwise have been satisfactory for water supply becomes exceedingly problematic for use as a community water source.

Groundwater.
Groundwaters are recharged by percolation of rainwater and runoff through the ground and are withdrawn by means of natural springs, wells, or infiltration galleries (horizontal wells). Groundwaters tend to be more highly mineralized than surface waters resulting from the solution of minerals that the groundwaters come in contact with as these waters percolate through the ground layers. However, these groundwaters are generally of higher sanitary quality since (a) they are not as likely to be subject to microbial pollution as surface water sources and (b) passage of water through soil strata often serves to improve their bacteriological quality. On the other hand, groundwater pollution, particularly from toxic waste discharges and leaching of landfills, has become a major problem in the United States and considerable care is
required to protect such valuable water resources. Unfortunately, once a groundwater aquifer is polluted with chemical contaminants, serious financial investment and many years of time may be required for the contamination to be remediated, if remediation is even possible. In general, it is far more difficult to determine the yields of groundwater sources than of surface water sources. Yields of groundwaters are a function of the volume and size of soil interstices, and such determinations depend on extensive hydrogeological exploration, including the construction of test wells and the conduct of pumping tests. Accordingly, groundwaters have not generally been placed into service for community water supplies as often as surface water resources and are used primarily by smaller communities. On the other hand, some groundwater supplies have been over-pumped or "mined," with water withdrawals exceeding recharge. This overdraft has resulted in a steady lowering of the elevation of the water surface underground, or water table, diminishing the amount that can be withdrawn and increasing the cost of pumping. Such excessive water withdrawals have also caused subsidence of the ground surface above, threatening structures and increasing the potential for flooding in some communities. The combined use of groundwaters in association with surface waters as a source of community water requirements continues to be explored as an option but requires engineering planning and hydrological study. In many communities, underground reservoirs may have major advantages over surface water reservoirs: (a) no water is lost through evaporation; (b) water quality is often less likely to be deleteriously affected by natural or urban and industrial pollution; (c) underground reservoirs do not require the expropriation of large areas of surface land; and (d) these waters may be located nearer to the community's points of use than are surface impoundments.
In this combined scheme, water would be drawn from surface water sources during wet periods, when groundwater reservoirs are also being recharged, and during dry periods water would be tapped from underground reservoirs. A special category of underground water source is the artesian aquifer, a confined aquifer under pressure that is recharged at a higher elevation some distance away. When this water resource is tapped by a well, the water in the well rises above the confining layer and may often be free flowing. Flowing springs, for example, originate from artesian aquifers since they are under pressure. It is important to note that artesian aquifers are less likely to be contaminated with either microbial or chemical pollutants than unconfined aquifers. Wells are constructed in a variety of ways and configurations, depending on the nature of the aquifer from which the water is to be withdrawn. Special precautions are required to ensure that wells are protected from surface water runoff by being properly encased, with the protective casing extending above the ground surface. After construction, wells must be disinfected before being tested for water quality and placed into service for human consumption. Sampling of water from a well is pointless if the sanitary survey indicates that the well is not protected from contamination by surface runoff. A sample taken during a dry period may reveal good-quality water, but the water will inevitably become contaminated by surface water runoff if the wellhead and base structure are not adequately protected from contamination.

Ocean and Brackish Waters. These water sources are unsuitable for most communities' water supplies, but in conditions of dire necessity, freshwater can be obtained from them by use of one of several desalination processes. The most appropriate method for desalination of seawater is thermal distillation.
Distillation is widely used in oil-rich areas where water is extremely limited, such as the Middle East and the West Indies. For brackish waters, where the salt content is less than 10% that of seawater, reverse osmosis or electrodialysis may be used, but all desalination methods are energy intensive. As the cost of energy continues to increase relative to other costs, desalination is not likely to be a feasible option for most community water supplies except in situations where serious investment in providing water can be justified, such as for tourism or for individual, military, or political purposes.
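The source-classification logic implied above can be sketched in a few lines. The 10% brackish threshold follows the text; the seawater salinity of roughly 35,000 mg/L total dissolved solids is a commonly cited reference value, used here only as an assumption for illustration:

```python
# Sketch of the salinity-based process selection described above.
# Seawater is assumed to be ~35,000 mg/L total dissolved solids (TDS),
# a common reference value; the 10% brackish cutoff follows the text.

SEAWATER_TDS = 35_000                    # mg/L, assumed reference value
BRACKISH_LIMIT = 0.10 * SEAWATER_TDS     # "less than 10% that of seawater"

def candidate_processes(tds_mg_per_l: float) -> list[str]:
    """Return plausible desalination processes for a given salinity."""
    if tds_mg_per_l < BRACKISH_LIMIT:
        # Brackish sources: membrane/electrochemical processes are feasible.
        return ["reverse osmosis", "electrodialysis"]
    # Seawater-strength sources: thermal distillation, per the text.
    return ["thermal distillation"]

print(candidate_processes(1_500))   # e.g., a brackish well
print(candidate_processes(35_000))  # e.g., an open-ocean intake
```

All of the options remain energy intensive, as the text notes; the sketch only shows which processes are even in play for a given source.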
Water Reclamation and Reuse. Far more attractive than desalination in water-scarce areas is the reclamation of wastewaters for reuse for nonpotable purposes.5 Water reuse is becoming increasingly attractive in communities where water resources are limited, since a substantial portion of a community's water needs is dedicated to urban irrigation and other nonpotable uses. Impetus for water reuse has also come from the increasingly rigorous requirements for wastewater treatment, which often lead to production of an effluent of too high a quality, at too high a cost, to be discarded. Early water reuse technology developed from wastewater disposal by irrigation, a practice widely followed in Europe for more than a century. In the United States, early water reuse was exemplified by the utilization of effluent from the Baltimore wastewater treatment facilities in the Bethlehem Steel Sparrows Point plant in the 1930s. The modern American approach to water reuse is exemplified by the development of distribution systems for nonpotable waters for a variety of purposes, including urban irrigation and residential and industrial use. Such dual distribution systems were pioneered in Colorado Springs, Colorado; Pomona and Irvine, California; and St. Petersburg, Florida. In these instances, the nonpotable distribution systems carry secondary wastewater effluent additionally treated by the coagulation, filtration, and disinfection processes used for treatment of potable waters drawn from polluted sources. The main difference between the potable and nonpotable waters is that the nonpotable waters are not free of the chemical contaminants that are inevitably present in such wastewaters, that are not removed in wastewater treatment, and that may be hazardous if ingested over a long period of time.
As early as 1958, the United Nations Economic and Social Council stated, "No higher quality water, unless there is a surplus of it, should be used for a purpose that can tolerate a lower grade."6 This conservation policy is being considered, and in some cases adopted, in water-short areas of the United States. In Florida, for example, where consumptive use permits are required for all abstractions of water, a permit will not be issued if a lower-quality water can be used and is available for use. Nonpotable reuse is becoming so widely adopted that the American Water Works Association has published a Manual on Dual Distribution Systems7 and some 14 states have adopted regulations for water reclamation and reuse. In San Diego, California, at least one residential home builder offers a collection system for gray water from showers, bathtubs, and washing machines that can then be used to flush toilets and water lawns, with the cost of this additional plumbing priced at less than $2000.

Selection of Water Sources. Topography, climate, availability of untapped water resources, population density, land use, and myriad other characteristics differentiate one community's water source options from another's; no community's situation is precisely like any other's. Therefore, a community government and its planning engineers faced with the need to provide a community's water supply must recognize that each situation is unique. The guiding principle in the selection of a water source is provided in the National Interim Primary Drinking Water Regulations promulgated by the U.S. Environmental Protection Agency (EPA) in 1976: Production of water that poses no threat to the consumer's health depends on continuous protection. Because of human frailties associated with protection, priority should be given to the selection of the purest source.
Polluted sources should not be used unless other sources are economically unavailable, and then only when personnel, equipment, and operating procedures can be depended on to purify and otherwise continuously protect the drinking water supply.8 (Emphasis added.)
Earlier drinking water standards established by the U.S. Public Health Service presented a similar focus. The primary concern for water quality had originally been prevention of the transmission of water-borne infectious diseases, many of which had effectively been addressed with conventional filtration and disinfection with chlorine.
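The effectiveness of chlorine disinfection mentioned above is conventionally described with first-order die-off kinetics (Chick's law). The sketch below uses an assumed illustrative rate constant, not a regulatory or organism-specific value; real rates depend on the organism, chlorine dose, pH, and temperature:

```python
import math

# First-order (Chick's law) die-off: N(t) = N0 * exp(-k * t).
# The rate constant k below is an assumed illustrative value only.

def surviving_fraction(k_per_min: float, minutes: float) -> float:
    """Fraction of organisms surviving after a given contact time."""
    return math.exp(-k_per_min * minutes)

k = 0.23  # per minute, assumed for illustration
for t in (10, 20, 30):
    frac = surviving_fraction(k, t)
    logs = -math.log10(frac)
    print(f"t = {t:2d} min: surviving fraction {frac:.4f} "
          f"({logs:.1f}-log inactivation)")
```

The exponential form is why contact time matters so much in treatment design: each additional increment of contact time multiplies the reduction already achieved rather than adding to it.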
However, many cities throughout the United States opted for run-of-river supplies that were conveniently available even though they did not constitute the "purest" source. The relatively new threat to human health arising from the "chemical revolution," with its creation of many new long-lasting synthetic organic chemicals, has given new meaning to the concern for selecting the "purest source." This is particularly important since detection methods for monitoring many of these chemical agents are either not yet available or economically infeasible for use by many communities. Community governments are given options in the selection of water sources, and prudence dictates a search for the "purest source." This search might entail development of groundwaters or of upstream sources free of urban and industrial pollution. However, where these water resources are not adequate in quantity to provide all the water required in a community, consideration should be given to developing high-quality sources for potable purposes while using reclaimed wastewaters for nonpotable purposes.

Protection of Water Sources. Where high-quality sources of water supply are available, whether surface or underground in nature, they are subject to despoliation from development and lack of protection of the watershed or recharge areas. In the United States, only in rare instances is the land of a community watershed or recharge area under the control of the water purveyor; these areas are generally the responsibility of the local authorities that have planning jurisdiction. Even where a community water purveyor owns the land, or the local authority that has dominion over the land is served by the water supply, the pressure for development of the watershed can lead to degradation of the water supply for the serviced community that resides many miles away from the watershed area.
In a landmark case, water companies in the state of Connecticut attempted to sell portions of their wholly-owned protected watersheds for development. After considerable study, state legislation was enacted that forbade the sale of watershed lands for development. In response to a suit against the State of Connecticut by the water companies, the U.S. District Court upheld the state’s position: … the obvious purpose of the legislation is the protection of the health and welfare of the State’s inhabitants … watershed properties are critical to water purity … the State is ensuring the ability of the water companies to provide pure water to its customers.9
More generally in the United States, local government authorities have planning jurisdiction over watershed lands and recharge areas and work closely with their water purveyor in developing land use strategies that protect the integrity of the community’s water supplies. Such strategies include regulations specifying maximum densities, limits on impervious areas, setbacks from the banks of streams and reservoirs, and definitions of permissible activities on and near the watershed.10,11 The promulgation and enforcement of such regulations require strong stewardship and leadership on the part of elected and appointed government officials, as these actions to protect water supply sources may sharply curtail the opportunities for financial profit from development of watershed or recharge areas. In selecting appropriate water supply sources for a community, the greatest attention is generally given to the numerical limits for specific contaminants in the posttreatment potable water. It may be more prudent to emphasize the “sanitary survey” of the entire system delivering potable water, which would ensure high-quality water for consumers through appropriate protective handling and distribution of water to the end user. The drinking water regulations of 1976 state: Knowledge of physical defects or of the existence of other health hazards in the water supply system is evidence of a deficiency in protection of the water supply. Even though water quality analyses have indicated that the quality requirements have been met, the deficiencies must be corrected before the supply can be considered safe.8 (Emphasis added.)
Environmental Health
These water supply deficiencies include pollution of the water source, inadequate water treatment, cross-connections with sources of contamination, inadequate capacity resulting in low pressure, and insufficient operation of the water treatment facilities, including inadequate disinfection and failure to provide standby facilities in the event of power or other equipment failure. In contention is whether the discharge of a pollutant upstream from a community water intake is itself a “deficiency in protection of the water supply.” While many laws exist that are intended to prevent the discharge of toxic substances into the environment in general, and into water bodies in particular, implementation of these laws is uncertain at best. Little assurance can be given that a water supply drawn from a source that drains large urban and industrial areas will, in fact, be free of potentially harmful chemical and microbial agents. Currently, the best course of action is to avoid discharging human, animal, and industrial wastes above water supply intakes and to avoid installing water supply intakes below point-source waste discharges.
POTABLE WATER QUALITY AND REGULATION
Initially in the United States, protection of the nation’s public health was the responsibility of the individual states, with federal initiatives in place only for interstate activities. The U.S. Public Health Service Drinking Water Standards were first adopted in 1914 to protect the health of the traveling public across the country. These standards were often adopted by individual states and eventually became applicable to water supplies throughout the United States. Initially these standards had limited application, with the primary emphasis on physical and bacterial parameters: the first to ensure esthetic quality and the second to prevent the transmission of water-borne disease. These public health standards were updated periodically, and in 1962 they were extensively revised to include water-borne chemical and radiologic agents for the first time. Initially, the only chemical agents for which limits were established were heavy metals. Recognition of the problem of water-borne synthetic organic chemicals surfaced with the establishment of an upper limit for carbon chloroform extract (CCE). This limit served as a comprehensive, gross surrogate for all synthetic organic chemicals, although it could not distinguish compounds that were innocuous from those harmful to human health. These initial standards required that water supply systems (a) provide adequate capacity to meet peak demands without development of low pressures; (b) assess the quality of water at the free-flowing outlet of the water consumer; and (c) administer the water system facilities under the responsible charge of personnel whose qualifications are acceptable to the regulatory agency. It was not until passage of the Safe Drinking Water Act (SDWA) in 1974 that public water supply systems in the United States came under the federal aegis of the EPA.
Under this law, the EPA was authorized to set national standards to protect drinking water and source water from naturally occurring and man-made water contaminants. The Safe Drinking Water Act passed in 1974 was amended in 1986 and 1996, and the numerous amendments, regulations, and proposed rules added to the SDWA are summarized in Table 48-4.12
Provisions of the Safe Drinking Water Act (SDWA) The SDWA authorized the EPA to set drinking water standards in the United States and to develop regulations to control the level of contaminants in the nation’s drinking water. These drinking water standards are part of the Safe Drinking Water Act’s “multiple barrier” approach to drinking water protection, which includes (a) assessing and protecting drinking water sources; (b) protecting wells and collection systems; (c) ensuring that water is treated by qualified operators; (d) ensuring the integrity of water distribution systems; and (e) providing information to the public on the quality of their drinking water. In most cases, the EPA delegates responsibility for implementing drinking water
TABLE 48-4. ENVIRONMENTAL PROTECTION AGENCY REGULATIONS REGARDING DRINKING WATER IN THE UNITED STATES FROM 1974–2003*

Regulation | Year
Safe Drinking Water Act (SDWA) | 1974
Interim Primary Drinking Water Standards | 1975
National Primary Drinking Water Standards | 1985
SDWA amendments | 1986
Surface Water Treatment Rule (SWTR) | 1989
Total Coliform Rule | 1989
Lead and Copper Regulations | 1990
SDWA Amendments | 1996
Information Collection Rule | 1996
Interim Enhanced SWTR | 1998
Disinfectants and Disinfection By-Products (D-DBPs) Regulation | 1998
Contaminant Candidate List | 1998
Unregulated Contaminant Monitoring Regulations | 1999
Groundwater Rule (proposed) | 2000
Lead and Copper Rule (action levels) | 2000
Long Term 1 Enhanced SWTR | 2002
Long Term 2 Enhanced SWTR | 2003
Stage 2 D-DBP Rule | 2003

*Provided courtesy of the Centers for Disease Control and Prevention and accessible at http://www.cdc.gov/mmwr/preview/mmwrhtml/ss5308a4.htm.
standards to states and Indian nations. These drinking water standards apply to (a) public water systems that provide water for human consumption through at least 15 service connections or (b) systems that regularly serve at least 25 individuals. Public water systems include such entities as municipal water companies, homeowner associations, schools, businesses, campgrounds, and shopping malls. There are two categories of drinking water standards under the Safe Drinking Water Act:
• National Primary Drinking Water Regulation (NPDWR): This primary standard is a legally enforceable drinking water standard that applies to public water systems. Primary standards protect drinking water quality by limiting the levels of specific contaminants that can adversely affect public health and are known or anticipated to occur in water. These standards take the form of either (a) maximum contaminant levels (MCLs), the maximum permissible level of a contaminant in water delivered to any user of a public water system, or (b) treatment techniques (TTs), which are set in place of an MCL when there is no reliable method that is economically and technically feasible to measure a contaminant at particularly low concentrations.13 The National Primary Drinking Water Regulations, MCLs, potential health effects, and sources of each contaminant in drinking water are presented in Table 48-5.
• National Secondary Drinking Water Regulation (NSDWR): This secondary standard is a nonenforceable guideline addressing contaminants that may cause (a) aesthetic effects such as undesirable tastes or odors; (b) cosmetic effects that do not damage human health but are still undesirable; or (c) technical effects that may damage water equipment or reduce the effectiveness of treatment for other contaminants in drinking water.
The EPA recommends secondary standards to water systems but does not require systems to comply; however, states may choose to adopt them as enforceable standards. The EPA has established National Secondary Drinking Water Regulations that set nonmandatory water quality standards for 15 contaminants as “secondary maximum contaminant levels,” or SMCLs.13 These secondary contaminants are not considered to present a risk to human health at the SMCLs listed in Table 48-6.
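To make the distinction between an enforceable limit and a measured result concrete, the following minimal Python sketch (an illustration only, not an EPA tool; the function name and the small lookup table are ad hoc, with the numeric limits transcribed from Table 48-5) flags sample results that exceed a primary MCL:

```python
# Illustrative sketch: screening laboratory results against a few
# National Primary Drinking Water Regulation MCLs from Table 48-5.
# All values are in mg/L. Contaminants governed by treatment
# techniques (TTs) rather than numeric MCLs are not modeled here,
# and the lead entry is technically a TT action level, not an MCL.

MCLS_MG_PER_L = {
    "arsenic": 0.010,
    "benzene": 0.005,
    "nitrate (as N)": 10.0,
    "lead (action level)": 0.015,
}

def mcl_violations(sample: dict) -> list:
    """Return the contaminants whose measured level exceeds its limit."""
    return [
        name
        for name, measured in sample.items()
        if name in MCLS_MG_PER_L and measured > MCLS_MG_PER_L[name]
    ]

sample = {"arsenic": 0.012, "benzene": 0.001, "nitrate (as N)": 4.2}
print(mcl_violations(sample))  # arsenic exceeds its 0.010 mg/L MCL
```

A real compliance determination would also have to handle treatment-technique contaminants, tap-sample action levels, and monitoring schedules, none of which reduce to a single numeric comparison.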
TABLE 48-5. NATIONAL PRIMARY DRINKING WATER REGULATIONS AND MAXIMUM CONTAMINANT LEVELS (MCLs)a

Contaminant | MCLGb (mg/L)c | MCL or TTb (mg/L)c | Potential Health Effects from Ingestion of Water | Sources of Contaminant in Drinking Water

Microorganisms
Cryptosporidium sp. | zero | TTd | Gastrointestinal illness (e.g., diarrhea, vomiting, gastrointestinal distress) | Human and animal fecal waste
Giardia lamblia | zero | TTd | Gastrointestinal illness (e.g., diarrhea, vomiting, gastrointestinal distress) | Human and animal fecal waste
Heterotrophic plate count (HPC) | n/a | TTd | HPC is an analytic method used to measure the variety of bacteria that are common in water. Lower concentrations of bacteria in drinking water indicate better maintenance of the water treatment system | HPC measures a range of bacteria that are naturally occurring in the environment
Legionella sp. | zero | TTd | Legionnaires’ disease, a type of pneumonia | Found naturally in water; multiplies in heating systems
Total Coliforms (including fecal coliform and E. coli) | zero | 5.0%e | Not a health threat in itself; this indicator determines whether other potentially harmful bacteria may be present in water.f | Coliforms are naturally present in the environment as well as in feces; fecal coliforms and E. coli originate only from human and animal fecal waste
Turbidity | n/a | TTd | Turbidity is a measure of the cloudiness of water and indicates water quality and filtration effectiveness (e.g., whether disease-causing organisms are present). Higher turbidity levels are often associated with higher levels of disease-causing microorganisms such as viruses, parasites, and some bacteria, which can cause symptoms such as nausea, gastrointestinal distress, diarrhea, and associated headaches | Soil runoff
Viruses (enteric) | zero | TTd | Gastrointestinal illness (e.g., diarrhea, vomiting, gastrointestinal distress) | Human and animal fecal waste

Disinfection By-Products
Bromate | zero | 0.010 | Increased risk of cancer | By-product of drinking water disinfection
Chlorite | 0.8 | 1.0 | Anemia; infants and young children: nervous system effects | By-product of drinking water disinfection
Haloacetic acids (HAA5) | n/ag | 0.060 | Increased risk of cancer | By-product of drinking water disinfection
Total Trihalomethanes (TTHMs) | noneh/n/ag | 0.10/0.080 | Liver, kidney, or central nervous system disorders; increased risk of cancer | By-product of drinking water disinfection

Disinfectants (columns show MRDLG and MRDL, in mg/L)
Chloramines (as Cl2) | 4b | 4.0b | Eye/nose irritation; stomach discomfort; anemia | Water additive used to control microbes
Chlorine (as Cl2) | 4b | 4.0b | Eye/nose irritation; stomach discomfort | Water additive used to control microbes
Chlorine dioxide (as ClO2) | 0.8b | 0.8b | Anemia; infants and young children: nervous system effects | Water additive used to control microbes

Inorganic Chemicals
Antimony | 0.006 | 0.006 | Increase in blood cholesterol; decrease in blood sugar | Discharge from petroleum refineries; fire retardants; ceramics; electronics; solder
Arsenic | 0h | 0.010 as of 01/23/06 | Skin damage or circulatory system dysfunction; possible increased risk of cancer | Erosion of natural deposits; runoff from orchards; runoff from glass and electronics production wastes
Asbestos (fiber >10 micrometers) | 7 million fibers per liter | 7 MFL | Increased risk of developing benign intestinal polyps | Decay of asbestos cement in water mains; erosion of natural deposits
Barium | 2 | 2 | Increase in blood pressure | Discharge of drilling wastes; discharge from metal refineries; erosion of natural deposits
Beryllium | 0.004 | 0.004 | Intestinal lesions | Discharge from metal refineries and coal-burning factories; discharge from electrical, aerospace, and defense industries
TABLE 48-5. NATIONAL PRIMARY DRINKING WATER REGULATIONS AND MAXIMUM CONTAMINANT LEVELS (MCLs)a (Continued)

Contaminant | MCLGb (mg/L)c | MCL or TTb (mg/L)c | Potential Health Effects from Ingestion of Water | Sources of Contaminant in Drinking Water
Cadmium | 0.005 | 0.005 | Kidney damage | Corrosion of galvanized pipes; erosion of natural deposits; discharge from metal refineries; runoff from waste batteries and paints
Chromium (total) | 0.1 | 0.1 | Allergic dermatitis | Discharge from steel and pulp mills; erosion of natural deposits
Copper | 1.3 | TTi; action level=1.3 | Short-term exposure: gastrointestinal distress. Long-term exposure: liver or kidney damage. Individuals with Wilson’s disease should consult their personal physician if the amount of copper in their water exceeds the action level | Corrosion of household plumbing systems; erosion of natural deposits
Cyanide (as free cyanide) | 0.2 | 0.2 | Nerve system or thyroid dysfunction | Discharge from steel/metal factories; discharge from plastic and fertilizer factories
Fluoride | 4.0 | 4.0 | Bone disease; in children may lead to mottled dentition | Water additive which promotes strong teeth; erosion of natural deposits; discharge from fertilizer and aluminum factories
Lead | zero | TTi; action level=0.015 | Infants and children: delays in physical or mental development, including deficits in attention span and learning abilities. Adults: kidney damage; high blood pressure | Corrosion of household plumbing systems; erosion of natural deposits
Mercury (inorganic) | 0.002 | 0.002 | Kidney damage | Erosion of natural deposits; discharge from refineries and factories; runoff from landfills and croplands
Nitrate (measured as nitrogen) | 10 | 10 | Infants below the age of 6 months drinking water containing nitrate in excess of the MCL could become seriously ill and, if untreated, exposure may lead to death. Symptoms include shortness of breath and blue-baby syndrome | Runoff from fertilizer use; leaching from septic tanks, sewage; erosion of natural deposits
Nitrite (measured as nitrogen) | 1 | 1 | Infants below the age of 6 months who drink water containing nitrite in excess of the MCL could become seriously ill and, if untreated, exposure may lead to death. Symptoms include shortness of breath and blue-baby syndrome | Runoff from fertilizer use; leaching from septic tanks, sewage; erosion of natural deposits
Selenium | 0.05 | 0.05 | Hair or fingernail loss; numbness in fingers or toes; circulatory disorders | Discharge from petroleum refineries; erosion of natural deposits; discharge from mines
Thallium | 0.0005 | 0.002 | Hair loss; blood, kidney, intestine, or liver disorders | Leaching from ore-processing sites; discharge from electronics, glass, and drug factories

Organic Chemicals
Acrylamide | zero | TTj | Nervous system or blood disorders; increased risk of cancer | Added to water during sewage/wastewater treatment
Alachlor | zero | 0.002 | Eye, liver, kidney, or spleen disorders; anemia; increased risk of cancer | Runoff from herbicide used on row crops
Atrazine | 0.003 | 0.003 | Cardiovascular system or reproductive disorders | Runoff from herbicide used on row crops
Benzene | zero | 0.005 | Anemia; decrease in blood platelets; increased risk of cancer | Discharge from factories; leaching from gas storage tanks and landfills
Benzo(a)pyrene (PAHs) | zero | 0.0002 | Reproductive difficulties; increased risk of cancer | Leaching from linings of water storage tanks and distribution lines
Carbofuran | 0.04 | 0.04 | Disorders of blood, nervous, or reproductive system | Leaching of soil fumigant used on rice and alfalfa
Carbon tetrachloride | zero | 0.005 | Liver dysfunction; increased risk of cancer | Discharge from chemical plants and other industrial activities
Chlordane | zero | 0.002 | Liver or nervous system disorders; increased risk of cancer | Residue of banned termiticide
Chlorobenzene | 0.1 | 0.1 | Liver or kidney dysfunction | Discharge from chemical and agricultural chemical factories
2,4-D | 0.07 | 0.07 | Kidney, liver, or adrenal gland disorders | Runoff from herbicide used on row crops
Dalapon | 0.2 | 0.2 | Minor kidney changes | Runoff from herbicide used on rights of way
1,2-Dibromo-3-chloropropane (DBCP) | zero | 0.0002 | Reproductive difficulties; increased risk of cancer | Runoff/leaching from soil fumigant used on soybeans, cotton, pineapples, and orchards
o-Dichlorobenzene | 0.6 | 0.6 | Liver, kidney, or circulatory system disorders | Discharge from industrial chemical factories
p-Dichlorobenzene | 0.075 | 0.075 | Anemia; liver, kidney, or spleen damage; changes in blood function | Discharge from industrial chemical factories
1,2-Dichloroethane | zero | 0.005 | Increased risk of cancer | Discharge from industrial chemical factories
1,1-Dichloroethylene | 0.007 | 0.007 | Liver dysfunction | Discharge from industrial chemical factories
cis-1,2-Dichloroethylene | 0.07 | 0.07 | Liver dysfunction | Discharge from industrial chemical factories
trans-1,2-Dichloroethylene | 0.1 | 0.1 | Liver dysfunction | Discharge from industrial chemical factories
Dichloromethane | zero | 0.005 | Liver dysfunction; increased risk of cancer | Discharge from drug and chemical factories
1,2-Dichloropropane | zero | 0.005 | Increased risk of cancer | Discharge from industrial chemical factories
Di(2-ethylhexyl) adipate | 0.4 | 0.4 | Weight loss, liver dysfunction, or possible reproductive difficulties | Discharge from chemical factories
Di(2-ethylhexyl) phthalate | zero | 0.006 | Reproductive difficulties; liver dysfunction; increased risk of cancer | Discharge from rubber and chemical factories
Dinoseb | 0.007 | 0.007 | Reproductive difficulties | Runoff from herbicide used on soybeans and vegetables
Dioxin (2,3,7,8-TCDD) | zero | 0.00000003 | Reproductive difficulties; increased risk of cancer | Emissions from waste incineration and other combustion; discharge from chemical factories
Diquat | 0.02 | 0.02 | Cataract formation | Runoff from herbicide use
Endothall | 0.1 | 0.1 | Stomach and intestinal disorders | Runoff from herbicide use
Endrin | 0.002 | 0.002 | Liver dysfunction | Residue of banned insecticide
Epichlorohydrin | zero | TTj | Increased cancer risk and, over a long period of time, gastrointestinal disorders | Discharge from industrial chemical factories; an impurity of some water treatment chemicals
Ethylbenzene | 0.7 | 0.7 | Liver or kidney disorders | Discharge from petroleum refineries
Ethylene dibromide | zero | 0.00005 | Disorders of liver, stomach, reproductive system, or kidneys; increased risk of cancer | Discharge from petroleum refineries
Glyphosate | 0.7 | 0.7 | Kidney dysfunction; reproductive difficulties | Runoff from herbicide use
Heptachlor | zero | 0.0004 | Liver damage; increased risk of cancer | Residue of banned termiticide
Heptachlor epoxide | zero | 0.0002 | Liver damage; increased risk of cancer | Breakdown of heptachlor
Hexachlorobenzene | zero | 0.001 | Liver or kidney disorders; reproductive difficulties; increased risk of cancer | Discharge from metal refineries and agricultural chemical factories
Hexachlorocyclopentadiene | 0.05 | 0.05 | Kidney or stomach dysfunction | Discharge from chemical factories
Lindane | 0.0002 | 0.0002 | Liver or kidney disorders | Runoff/leaching from insecticide used on cattle, lumber, gardens
Methoxychlor | 0.04 | 0.04 | Reproductive difficulties | Runoff/leaching from insecticide used on fruits, vegetables, alfalfa, livestock
Oxamyl (Vydate) | 0.2 | 0.2 | Slight nervous system effects | Runoff/leaching from insecticide used on apples, potatoes, and tomatoes
Polychlorinated biphenyls (PCBs) | zero | 0.0005 | Skin changes; thymus gland disorders; immune system deficiencies; reproductive or nervous system difficulties; increased risk of cancer | Runoff from landfills; discharge of waste chemicals
TABLE 48-5. NATIONAL PRIMARY DRINKING WATER REGULATIONS AND MAXIMUM CONTAMINANT LEVELS (MCLs)a (Continued)

Contaminant | MCLGb (mg/L)c | MCL or TTb (mg/L)c | Potential Health Effects from Ingestion of Water | Sources of Contaminant in Drinking Water
Pentachlorophenol | zero | 0.001 | Liver or kidney disorders; increased cancer risk | Discharge from wood-preserving factories
Picloram | 0.5 | 0.5 | Liver dysfunction | Herbicide runoff
Simazine | 0.004 | 0.004 | Blood disorders | Herbicide runoff
Styrene | 0.1 | 0.1 | Liver, kidney, or circulatory system disorders | Discharge from rubber and plastic factories; leaching from landfills
Tetrachloroethylene | zero | 0.005 | Liver dysfunction; increased risk of cancer | Discharge from factories and dry cleaners
Toluene | 1 | 1 | Nervous system, kidney, or liver disorders | Discharge from petroleum factories
Toxaphene | zero | 0.003 | Kidney, liver, or thyroid disorders; increased risk of cancer | Runoff/leaching from insecticide used on cotton and cattle
2,4,5-TP (Silvex) | 0.05 | 0.05 | Liver disorders | Residue of banned herbicide
1,2,4-Trichlorobenzene | 0.07 | 0.07 | Changes in adrenal glands | Discharge from textile finishing factories
1,1,1-Trichloroethane | 0.2 | 0.2 | Liver, nervous system, or circulatory disorders | Discharge from metal degreasing sites and other factories
1,1,2-Trichloroethane | 0.003 | 0.005 | Liver, kidney, or immune system dysfunction | Discharge from industrial chemical factories
Trichloroethylene | zero | 0.005 | Liver disorders; increased risk of cancer | Discharge from metal degreasing sites and other factories
Vinyl chloride | zero | 0.002 | Increased risk of cancer | Leaching from PVC pipes; discharge from plastic factories
Xylenes (total) | 10 | 10 | Nervous system damage | Discharge from petroleum factories; discharge from chemical factories

Radionuclides
Alpha particles | noneh/zero | 15 picocuries per liter (pCi/L) | Increased risk of cancer | Erosion of natural deposits of certain minerals that are radioactive and may emit a form of radiation known as alpha radiation
Beta particles and photon emitters | noneh/zero | 4 millirems per year | Increased risk of cancer | Decay of natural and man-made deposits of certain minerals that are radioactive and may emit forms of radiation known as photons and beta radiation
Radium 226 and Radium 228 (combined) | noneh/zero | 5 pCi/L | Increased risk of cancer | Erosion of natural deposits
Uranium | zero | 30 ug/L as of 12/08/03 | Increased risk of cancer; kidney toxicity | Erosion of natural deposits
Notes
a Modified and provided courtesy of the Environmental Protection Agency and accessible at http://www.epa.gov/safewater/mcl.html.
b Definitions: Maximum Contaminant Level (MCL): The highest level of a contaminant that is allowed in drinking water. MCLs are set as close to MCLGs as feasible using the best available treatment technology and taking cost into consideration. MCLs are enforceable standards. Maximum Contaminant Level Goal (MCLG): The level of a contaminant in drinking water below which there is no known or expected risk to health. MCLGs allow for a margin of safety and are nonenforceable public health goals. Maximum Residual Disinfectant Level (MRDL): The highest level of a disinfectant allowed in drinking water. There is convincing evidence that addition of a disinfectant is necessary for control of microbial contaminants. Maximum Residual Disinfectant Level Goal (MRDLG): The level of a drinking water disinfectant below which there is no known or expected risk to health. MRDLGs do not reflect the benefits of the use of disinfectants to control microbial contaminants. Treatment Technique (TT): A required process intended to reduce the level of a contaminant in drinking water.
c Units are in milligrams per liter (mg/L) unless otherwise noted. Milligrams per liter are equivalent to parts per million.
d EPA’s surface water treatment rules require systems using surface water or groundwater under the direct influence of surface water to (a) disinfect their water, and (b) filter their water or meet criteria for avoiding filtration, so that the following contaminants are controlled at the following levels:
• Cryptosporidium: (as of 1/1/02 for systems serving >10,000 and 1/14/05 for systems serving <10,000) 99% removal
• Giardia lamblia: 99.9% removal/inactivation
• Viruses: 99.99% removal/inactivation
• Legionella: No limit, but EPA believes that if Giardia and viruses are removed/inactivated, Legionella will also be controlled
• Turbidity: At no time can turbidity (cloudiness of water) go above 5 nephelometric turbidity units (NTU); systems that filter must ensure that the turbidity goes no higher than 1 NTU (0.5 NTU for conventional or direct filtration) in at least 95% of the daily samples in any month. As of January 1, 2002, turbidity may never exceed 1 NTU, and must not exceed 0.3 NTU in 95% of daily samples in any month.
• HPC: No more than 500 bacterial colonies per milliliter.
• Long Term 1 Enhanced Surface Water Treatment (effective date: January 14, 2005): Surface water systems or groundwater-under-the-direct-influence (GWUDI) systems serving fewer than 10,000 people must comply with the applicable Long Term 1 Enhanced Surface Water Treatment Rule provisions (e.g., turbidity standards, individual filter monitoring, Cryptosporidium removal requirements, updated watershed control requirements for unfiltered systems).
• Filter Backwash Recycling: The Filter Backwash Recycling Rule requires systems that recycle to return specific recycle flows through all processes of the system’s existing conventional or direct filtration system or at an alternate location approved by the state.
e More than 5.0% of samples are total coliform-positive in a month. (For water systems that collect fewer than 40 routine samples per month, no more than one sample can be total coliform-positive per month.) Every sample that is total coliform-positive must be analyzed for either fecal coliforms or E. coli; if two consecutive samples are total coliform-positive and one is also positive for E. coli or fecal coliforms, the system has an acute MCL violation.
f Fecal coliform and E. coli are bacteria whose presence indicates that the water may be contaminated with human or animal wastes. Disease-causing microbes (pathogens) in these wastes can cause diarrhea, cramps, nausea, headaches, or other symptoms, and may pose a special health risk for infants, young children, and people with severely compromised immune systems.
g Although there is no collective MCLG for this contaminant group, there are individual MCLGs for some of the individual contaminants:
• Trihalomethanes: bromodichloromethane (zero); bromoform (zero); dibromochloromethane (0.06 mg/L). Chloroform is regulated with this group but has no MCLG.
• Haloacetic acids: dichloroacetic acid (zero); trichloroacetic acid (0.3 mg/L). Monochloroacetic acid, bromoacetic acid, and dibromoacetic acid are regulated with this group but have no MCLGs.
h MCLGs were not established before the 1986 Amendments to the Safe Drinking Water Act. Therefore, there is no MCLG for this contaminant.
i Lead and copper are regulated by a treatment technique that requires systems to control the corrosiveness of their water. If more than 10% of tap water samples exceed the action level, water systems must take additional steps. For copper, the action level is 1.3 mg/L, and for lead it is 0.015 mg/L.
j Each water system must certify, in writing, to the state (using third-party or manufacturer’s certification) that when acrylamide and epichlorohydrin are used in drinking water systems, the combination (or product) of dose and monomer level does not exceed the levels specified, as follows:
• Acrylamide = 0.05% dosed at 1 mg/L (or equivalent)
• Epichlorohydrin = 0.01% dosed at 20 mg/L (or equivalent)
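The monthly total coliform screen described in note e is algorithmic enough to sketch in code. The following Python fragment is an illustration under the stated assumptions (the function name is ad hoc, and the acute-violation logic for E. coli and fecal coliform confirmation samples is deliberately not modeled):

```python
# Hedged sketch of the monthly Total Coliform Rule screen from note e:
# a system collecting 40 or more routine samples in a month exceeds the
# MCL if more than 5.0% are total-coliform-positive; a system collecting
# fewer than 40 samples exceeds it if more than one sample is positive.

def coliform_mcl_exceeded(total_samples: int, positive_samples: int) -> bool:
    """Return True if the month's results exceed the total coliform MCL."""
    if total_samples < 40:
        # Small systems: no more than one positive sample is allowed.
        return positive_samples > 1
    # Larger systems: the positive fraction must not exceed 5.0%.
    return positive_samples / total_samples > 0.05

print(coliform_mcl_exceeded(60, 4))  # 4/60 = 6.7% positive -> True
print(coliform_mcl_exceeded(20, 1))  # small system, one positive -> False
```

Note the boundary behavior: a large system at exactly 5.0% positive (e.g., 2 of 40) does not exceed the MCL, since the rule is triggered only by *more than* 5.0%.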
TABLE 48-6. NATIONAL SECONDARY DRINKING WATER REGULATIONS AND SECONDARY MAXIMUM CONTAMINANT LEVELS (SMCLs)*

Contaminant | Secondary MCL† | Noticeable Effects above the Secondary MCL
Aluminum | 0.05–0.2 mg/L | Colored water
Chloride | 250 mg/L | Salty taste
Color | 15 color units | Visible tint
Copper | 1.0 mg/L | Metallic taste; blue-green staining
Corrosivity | Noncorrosive | Metallic taste; corroded pipes and fixture staining
Fluoride | 2.0 mg/L | Tooth discoloration
Foaming agents | 0.5 mg/L | Frothy, cloudy; bitter taste; strong odor
Iron | 0.3 mg/L | Rusty color; sediment; metallic taste; reddish or orange staining
Manganese | 0.05 mg/L | Black to brown color; black staining; bitter metallic taste
Odor | 3 TON (threshold odor number) | “Rotten-egg,” musty or chemical smell
pH | 6.5–8.5 | Low pH: bitter metallic taste; corrosion. High pH: slippery feel; soda taste; deposit formation
Silver | 0.1 mg/L | Skin discoloration; graying of the white portion of the eye
Sulfate | 250 mg/L | Salty taste
Total Dissolved Solids (TDS) | 500 mg/L | Hardness; deposits; colored water; staining; salty taste
Zinc | 5 mg/L | Metallic taste

*Modified and provided courtesy of the Environmental Protection Agency and accessible at http://www.epa.gov/safewater/mcl.html.
† mg/L is milligrams of substance per liter of water.
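Because SMCLs are simple numeric guidelines, an aesthetic-quality screen is easy to sketch. The Python below is illustrative only: the parameter names are ad hoc, only a handful of the 15 SMCLs from Table 48-6 are included, and the range entries (aluminum, pH) are modeled as simple low/high bounds:

```python
# Illustrative sketch: flagging results outside selected secondary
# maximum contaminant levels (SMCLs) from Table 48-6. SMCLs are
# nonenforceable aesthetic guidelines, not health-based limits.
# Each entry is (low, high); single-limit parameters use low=None.

SMCL_BOUNDS = {
    "aluminum (mg/L)": (0.05, 0.2),
    "chloride (mg/L)": (None, 250.0),
    "iron (mg/L)": (None, 0.3),
    "pH": (6.5, 8.5),
    "zinc (mg/L)": (None, 5.0),
}

def smcl_flags(sample: dict) -> list:
    """Return parameters that fall outside their secondary guideline."""
    flags = []
    for name, value in sample.items():
        low, high = SMCL_BOUNDS[name]
        if (low is not None and value < low) or value > high:
            flags.append(name)
    return flags

sample = {"iron (mg/L)": 0.5, "pH": 7.1, "chloride (mg/L)": 120.0}
print(smcl_flags(sample))  # iron above 0.3 mg/L: expect rusty color, metallic taste
```

A state that adopts SMCLs as enforceable standards would apply the same comparisons, but with regulatory consequences attached to the exceedances.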
Development of US Drinking Water Regulations Regulations addressing the protection of drinking water and the public’s health continue to be added to the original Safe Drinking Water Act of 1974, as illustrated in Table 48-4. In 1996, an important amendment to the SDWA required the EPA to develop a process for setting drinking water standards in the United States that includes determining whether setting a standard is appropriate for a particular water contaminant and, if so, what that drinking water standard should entail. As part of the 1996 amendment, standard-setting procedures now require peer-reviewed science and supporting data to allow for an intensive technological evaluation of the contaminant under consideration. This standard-setting process includes evaluation of many factors, such as (a) the occurrence of the contaminant in the environment; (b) the probability of human exposure and the risks of adverse health effects in the general population and in sensitive subpopulations; (c) the availability of analytical methods of detection; (d) the technical feasibility of the regulation; and (e) the impact of regulation on water systems, the economy, and public health.13 In order to set new drinking water standards for contaminants not already regulated by the SDWA, the EPA must first make determinations about which water contaminants to regulate. This regulatory determination is a formal decision on whether to issue a national primary drinking water regulation for a specific water contaminant.
The decision “to regulate” a water contaminant is based upon (a) the projected adverse health effects from the contaminant and the public health risk; (b) the extent of occurrence of the contaminant in drinking water and the likelihood that the water contaminant occurs in public water systems at levels of concern; and (c) a determination as to whether regulation of the contaminant would present a “meaningful opportunity” for reducing risks to human health.13 Water contaminants that meet these criteria for possible regulatory consideration were included in the National Drinking Water Contaminant Candidate List (CCL) that was originally published March 2, 1998. The CCL catalogs contaminants that (a) are not already regulated under SDWA; (b) may have adverse human health effects; (c) are known to or anticipated to occur in public water systems; and (d) may require regulations under the SDWA.
The Safe Drinking Water Act requires that the EPA periodically publish a National Drinking Water Contaminant Candidate List or CCL. The first CCL, published in March of 1998, contained 60 water contaminants, and the second CCL was published in February of 2005 after the agency decided to continue research on the list of contaminants on the first CCL. The Drinking Water Contaminant Candidate List published in 2005 (the second CCL) includes the Microbial Contaminant Candidates and the Chemical Contaminant Candidates presented in Table 48-7. The second CCL includes 51 water contaminants of the original 60 unregulated contaminants from the first CCL published in 1998, including nine microbiological contaminants and 42 chemical contaminants or contaminant groups. In July 2003, the EPA announced its final determination for a subset of nine water contaminants from the first CCL and concluded that sufficient data and information were available to make the determination not to regulate Acanthamoeba, aldrin, dieldrin, hexachlorobutadiene, manganese, metribuzin, naphthalene, sodium, and sulfate. Therefore, these nine water contaminants were not included in the updated 2005 Contaminant Candidate List.13 A second cycle of preliminary regulatory determinations from the second CCL is underway, and final regulatory determinations will be announced in August of 2006. It is important to note that future drinking water contaminant regulations are not limited to making regulatory determinations for only those contaminants on the CCLs. If information becomes available indicating that a specific water contaminant presents a public health risk, a decision to regulate a previously unregulated contaminant may occur in the interest of public health.13
Determination of New Drinking Water Standards In order to propose and finalize a new drinking water standard or National Primary Drinking Water Regulation for a drinking water contaminant candidate on the CCL, the EPA follows a regulatory process that includes (a) conducting studies to develop analytical methods for detecting a new water contaminant; (b) determining whether the contaminant occurs in drinking water; (c) evaluating the treatment technologies necessary to remove the specific contaminant
TABLE 48-7. DRINKING WATER CONTAMINANT CANDIDATE LIST—FEBRUARY, 2005 (SECOND CCL)∗
Microbial Contaminant Candidates
Adenoviruses
Aeromonas hydrophila
Caliciviruses
Coxsackieviruses
Cyanobacteria (blue-green algae), other freshwater algae, and their toxins
Echoviruses
Helicobacter pylori
Microsporidia (Enterocytozoon and Septata)
Mycobacterium avium intracellulare (MAC)
Chemical Contaminant Candidates
1,1,2,2-tetrachloroethane
1,2,4-trimethylbenzene
1,1-dichloroethane
1,1-dichloropropene
1,2-diphenylhydrazine
1,3-dichloropropane
1,3-dichloropropene
2,4,6-trichlorophenol
2,2-dichloropropane
2,4-dichlorophenol
2,4-dinitrophenol
2,4-dinitrotoluene
2,6-dinitrotoluene
2-methylphenol (o-cresol)
Acetochlor
Alachlor ESA and other acetanilide pesticide degradation products
Aluminum
Boron
Bromobenzene
DCPA mono-acid degradate
DCPA di-acid degradate
DDE
Diazinon
Disulfoton
Diuron
EPTC (s-ethyldipropylthiocarbamate)
Fonofos
p-Isopropyltoluene (p-cymene)
Linuron
Methyl bromide
Methyl-t-butyl ether (MTBE)
Metolachlor
Molinate
Nitrobenzene
Organotins
Perchlorate
Prometon
RDX
Terbacil
Terbufos
Triazines and degradation products of triazines
Vanadium
∗Provided courtesy of the Environmental Protection Agency and accessible at http://www.epa.gov/safewater/mcl.html.
from drinking water; and (d) investigating the potential health effects resulting from exposure to the specific water contaminant. This regulatory process allows the federal agency to determine if a new drinking water regulation or primary standard needs to be developed for a water contaminant on the CCL, whether a drinking water guidance or health advisory should be released, or if no action is necessary at all for the water contaminant under regulatory consideration.13 Drinking water contaminant candidates on the 2005 CCL presented in Table 48-7 have been divided into priorities for future regulation based upon their occurrence in drinking water, as determined by the National Contaminant Occurrence Database, and their potential human health effects. Beginning in August 1999, a National Contaminant Occurrence Database (NCOD) was developed that stores data on regulated and unregulated microbial, chemical, radiological, and physical contaminants as well as other types of contaminants that are likely to be present in finished, raw, and source waters of public water systems in the United States and its territories. In addition, the SDWA mandated that the National Academy of Sciences (National Research Council) conduct studies on the potential health effects associated with contaminants found in drinking water. A series of nine reports has been published under the title Drinking Water and Health with the first edition published in 1977.14 The original report is comprised of a 939-page compendium of the myriad health effects associated
Water Quality Management and Water-Borne Disease Trends
with exposure to microbial, radiological, particulate, inorganic, and organic chemical contaminants present in drinking water including risk assessments for development of human cancer resulting from exposure to chemical contaminants in drinking water. The subsequent publications through 1989 have compiled data on (a) the risks associated with chlorination and disinfection by-products in drinking water; (b) the toxicological profiles of drinking water contaminants; (c) the epidemiological trends, risk assessments, and pharmacokinetics of several drinking water contaminants; and (d) suggested no-adverse-response levels (SNARLs) for acute and chronic exposures to selected chemical contaminants in drinking water. As part of the process of proposing and finalizing a National Primary Drinking Water Regulation or primary drinking water standard for a specific water contaminant on the Drinking Water Contaminant Candidate List or CCL, the EPA reviews available health-effects studies and drinking water occurrence data. The EPA then sets a Maximum Contaminant Level Goal (MCLG) for the water contaminant under regulatory consideration, which is defined as the maximum level of a contaminant in drinking water at which no known or anticipated adverse health effects occur and which allows for an adequate margin of safety.13 MCLGs are nonenforceable public health goals, and since MCLGs consider only public health risk and not the limits of detection and water treatment technology, they may be set at a level that many water utility systems cannot meet in the United States. It is very important to note that MCLGs consider the health-effects risk from water-borne exposure to contaminants by sensitive subpopulations including infants, children, geriatric populations, and those with compromised immune systems. 
The MCLG for each type of water contaminant is determined as follows:
• Noncarcinogenic Chemical Contaminants: For chemical compounds that may lead to adverse noncancerous health effects, the MCLG is based on the reference dose (RFD), which is an estimate of the amount of a chemical that a human may be exposed to on a daily basis that is not anticipated to cause adverse health effects over a lifetime. As part of this RFD calculation, sensitive subpopulations are included and the level of uncertainty may span an order of magnitude. The RFD is multiplied by a typical adult body weight (70 kg) and divided by daily water consumption (estimated at 2 liters) to provide a Drinking Water Equivalent Level (DWEL). The DWEL is multiplied by a percentage of the total daily exposure to the chemical compound contributed by exposure to drinking water (often 20%), and this calculation provides the final determination of the MCLG for the noncarcinogenic chemical under consideration.13
• Carcinogenic Chemical Contaminants: If there is credible evidence that a chemical compound may cause cancer and there is no dose below which the chemical agent is considered safe, the MCLG for the chemical compound is set at zero by the EPA. If the chemical compound is carcinogenic but a safe dose can be determined, the MCLG is set at a level above zero that is considered to be safe.13
• Microbial Contaminants: For water-borne microbial contaminants that may present public health hazards, the MCLG is set at zero since in many cases ingesting a single protozoan, virus, or bacterium may cause adverse health effects, particularly in sensitive populations most at risk for morbidity and mortality from water-borne exposure. The EPA continues to conduct health effects studies to determine whether there is a safe level above zero for specific water-borne microbial contaminants. 
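The MCLG logic described above can be summarized in a short sketch. This is an illustration only, not an EPA procedure: the function names and the example RFD value are hypothetical, and actual determinations involve uncertainty factors and case-by-case judgment.

```python
# Illustrative sketch of the MCLG rules described above (not an EPA tool).

ADULT_BODY_WEIGHT_KG = 70.0   # typical adult body weight used by EPA
DAILY_WATER_INTAKE_L = 2.0    # estimated daily drinking water consumption

def dwel(rfd_mg_per_kg_day):
    """Drinking Water Equivalent Level (mg/L) derived from a reference dose."""
    return rfd_mg_per_kg_day * ADULT_BODY_WEIGHT_KG / DAILY_WATER_INTAKE_L

def mclg_noncarcinogen(rfd_mg_per_kg_day, drinking_water_share=0.20):
    """MCLG = DWEL x fraction of total daily exposure from drinking water."""
    return dwel(rfd_mg_per_kg_day) * drinking_water_share

def mclg_carcinogen(safe_level_mg_per_l=None):
    """Zero when no safe dose exists; otherwise the determined safe level."""
    return 0.0 if safe_level_mg_per_l is None else safe_level_mg_per_l

MCLG_MICROBIAL = 0.0  # set at zero: a single organism may cause illness

# Hypothetical RFD of 0.1 mg/kg/day:
#   DWEL = 0.1 * 70 / 2 = 3.5 mg/L
#   MCLG = 3.5 * 0.20   = 0.7 mg/L
```

The worked numbers in the final comment simply trace the two-step arithmetic in the noncarcinogen bullet above for an invented reference dose.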
To date, this “safe level above zero” has not been established.13 Once an MCLG is determined for a specific water-borne contaminant, the EPA may establish an enforceable primary drinking water standard, which in most cases is set as an MCL, defined as the maximum permissible level of a contaminant in water which is delivered
Environmental Health
to any user of a public water system.13 The MCL is established as close to the MCLG as is feasible, as directed by the Safe Drinking Water Act, which delineates the following conditions: (a) the level that may be achieved with the use of the best available technology (BAT), treatment techniques, and other means which EPA finds are acceptable after examination for efficiency under field conditions and not solely under laboratory conditions, and (b) the level that may be achieved taking the cost of treatment into consideration. If there is no reliable method that is economically and technically feasible to measure a water contaminant at particularly low concentrations, a treatment technique (TT) is established rather than an MCL. A treatment technique is defined as an enforceable procedure or level of technological performance which public water systems must follow to ensure control of a specific water contaminant. Two current examples of treatment technique rules include the Surface Water Treatment Rule that addresses disinfection and filtration and the Lead and Copper Rule that optimizes corrosion control.13 After establishing either an MCL or TT for a specific water-borne contaminant, the EPA must complete an economic cost-benefit analysis to determine whether the benefits of any proposed primary drinking water standard under consideration justify the cost of treatment based upon affordable technology for large public water systems. Considering this economic evaluation, the EPA may adjust the MCL for a particular class or group of public water systems to a level that “maximizes health risk reduction benefits at a cost that is justified by the benefits.”13 Each state in the United States has been authorized to grant variances for new drinking water standards to small water systems serving up to 3300 people, if the system cannot afford to comply with a new drinking water regulation and the small system installs EPA-approved variance technology. 
States can also grant variances to public water systems serving 3301–10,000 residents but only with EPA approval. However, the SDWA does not allow small water systems to be granted variances for microbial contaminants in drinking water. It is imperative that small public water systems receive special consideration from the EPA since more than 90% of all public water systems in the United States are categorized as small, and these small systems face the greatest challenges in providing potable water at affordable rates for their residents. Therefore, the 1996 Amendments to the SDWA provided states with affordable options that are appropriate for small public water systems so that they may comply with primary drinking water standards. As part of this special consideration, the EPA must identify treatment technologies that achieve primary drinking water standard compliance and that are affordable for public water systems serving fewer than 10,000 people when setting new primary drinking water standards. These special considerations may include packaged or modular systems and point-of-entry/point-of-use treatment devices under the control of the small public water system as part of a state variance. When such alternative technologies cannot be developed and implemented, the EPA must identify affordable technologies that maximize water contaminant reduction and protect the public’s health.13 In addition, under certain circumstances, exemptions from drinking water standards may be granted to allow additional time to develop alternative compliance options or establish financial support. However, after the exemption period expires, the public drinking water system must be in compliance with the primary drinking water standard. Most importantly, the terms of any state variance or exemption to a primary drinking water standard must guarantee no unreasonable risk to the public health. 
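The small-system variance provisions described above amount to a simple population-based decision rule, sketched here for clarity. The function and the returned labels are illustrative, not regulatory terms.

```python
def small_system_variance_route(population_served, microbial_contaminant):
    """Which variance route, if any, the SDWA small-system provisions
    described above would allow (schematic illustration only)."""
    if microbial_contaminant:
        return "none"  # the SDWA bars variances for microbial contaminants
    if population_served <= 3300:
        return "state variance"  # requires EPA-approved variance technology
    if population_served <= 10000:
        return "state variance with EPA approval"
    return "no small-system variance"
```

For example, a system serving 5000 people seeking relief on a chemical standard would need EPA approval in addition to the state-granted variance, while no system of any size could obtain a variance for a microbial contaminant.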
Primary drinking water standards go into effect 3 years after they are finalized by the EPA. If capital improvements are required by the public water system in order to comply with the new drinking water standard, the EPA’s administrator or the local state may allow this 3-year implementation period to be extended up to 2 additional years.13 Every 5 years, the EPA repeats the cycle of revising the Drinking Water Contaminant Candidate List or CCL by making regulatory determinations for five water contaminants and identifying up to 30 contaminants for unregulated monitoring. Every 6 years, the EPA
also reevaluates existing primary drinking water regulations to determine if modifications are necessary.13 The U.S. regulatory process and primary drinking water standard-setting process aim to ensure the safety and quality of water for all U.S. residents and can be expected to remain in a continuous state of change for the foreseeable future.
TYPES OF WATER CONTAMINATION
Although the United States has one of the safest water supplies in the world, those responsible for the continued safety of our nation’s water are faced with constant and newly emerging environmental challenges to water quality. Multiple reservoirs of infection, modes of transmission, and sources of contamination from microbial, chemical, and radiologic water-borne agents present continuous challenges to protecting the public from water-related disease resulting from exposure to contaminated waters.1 The Centers for Disease Control and Prevention, in addition to other organizations responsible for water quality and safety in the United States, has focused its efforts on four critical areas of water contamination that are of public health significance and warrant water protection measures.1
• Drinking Water Contamination: Water-related disease associated with exposure to drinking water may result from microbial, chemical, or radiologic contamination of drinking water supplies. Microbial contamination of drinking water frequently results from animal or human sewage contamination of source water that has not been adequately treated by available disinfection or filtration procedures. In addition, other challenges to water safety from microbial contamination include infectious pathogens such as Cryptosporidium parvum that are resistant to routine water treatment technologies. Chemical contamination of drinking water may result from multiple sources ranging from agricultural run-off to leakage from underground storage tanks to industrial discharges and chemical spills. Chemical contamination of drinking water may be generated from naturally occurring phenomena (e.g., arsenic, lead, and copper from erosion of natural deposits) or from human activities (e.g., nitrate contamination from fertilizer use or discharge of chemical solvents from industrial processes). 
Radiologic contamination of drinking water may also present a public health problem arising from naturally occurring or man-made contamination from various industrial and power-generating processes.1
• Recreational Water Pollution: According to a report from the Environmental Protection Agency, as many as 40% of U.S. beaches, rivers, estuaries, and lakes may be polluted with microbial or chemical contaminants.15 Over the past several decades, exposure to water-borne contaminants in venues such as swimming and wading pools, lakes and ponds, rivers and canals, decorative fountains, hot tubs, and springs has resulted in recreational water outbreaks. Recreational water exposure to infectious pathogens such as E. coli O157:H7, Cryptosporidium parvum, and Naegleria fowleri has the potential to cause serious morbidity and mortality, particularly in susceptible populations such as children.1
• “Special Uses” Water Contamination: Water used for special purposes other than drinking and recreational activities may also become contaminated, posing a public health threat and acting as a potential source of contamination. The use of water is essential to medicine, agriculture, aquaculture, and commercial food and bottled water production in the United States. Unfortunately, water-borne pathogens may thrive on biofilms in dental and medical devices (such as catheters and dialysis machines) and grow on piping in air-conditioning systems and cooling towers (e.g., Legionella pneumophila). Agricultural wastewater may become contaminated with both microbial
and chemical contaminants including infectious zoonotic pathogens, pesticides and herbicides, and nitrates from fertilizer use. In addition, animal hormones and pharmaceuticals used in agriculture may retain their biological activity when ingested by humans exposed to agricultural wastewater. Ingestion of fish and seafood found in waters contaminated with high concentrations of enteric bacteria, parasites, and viruses, marine toxins, and chemical contaminants such as mercury may also cause adverse health effects, particularly in susceptible populations such as pregnant women and developing fetuses.1
• Intentional Water Contamination: Another area of public health concern and focus of water protection countermeasures in the United States is the possibility of intentional acts of water terrorism.1,16 This possible water contamination scenario could potentially involve a community-wide water-borne disease outbreak or a cluster of water-related cases from chemical or radiologic toxicity in the general population or in sensitive subgroups most at risk for disease and death. Therefore, the possibility of water contamination from a covert or overt terrorist event remains a public health threat in the United States that could result in water-related disease from biological, chemical, or radiological agents.1,16
Microbial Contaminants in Water
Acute microbial infection was the primary cause of death in one in five Americans in the early 1900s with a significant percentage of this morbidity and mortality resulting from unchecked exposure to water-borne cholera and typhoid.17,18 During the past century, protection efforts by water utilities and public health agencies have played a major role in preventing microbial pollution of potable water supplies in the United States.1 By the early 1980s in America, most water utility practitioners and public health specialists were relatively comfortable controlling microbial contamination with existing water quality monitoring standards and conventional treatment methodologies.18–20 However, the decade of the 1990s has been characterized as “the decade of the microbe” with the emergence and reemergence of infectious pathogens as a serious challenge to both food and water safety.19,21,22
This “new” microbial era presents many formidable tasks to those responsible for water safety as well as the medical community that may be unaware or unfamiliar with the growing list of potential water-borne pathogens to consider when evaluating patients, particularly vulnerable populations most susceptible to water-borne disease.23 One very difficult aspect of protecting potable water and preventing water-related disease resulting from microbial contaminant exposure is the fact that water-borne pathogen disease trends are constantly changing and evolving as public health threats. The list of emerging and reemerging infectious diseases resulting from exposure to water-borne pathogens has grown exponentially during the past decade.24,25 The potential threat of bioterrorist assault on water reserves adds other water-borne pathogens to this growing list.18,26,27 Currently, the water-borne microbial agents of public health and clinical significance in the United States may be divided into three categories: (a) bacterial pathogens, (b) protozoan parasites, and (c) enteric viruses.1 The primary infectious pathogens that have either been identified as transmissible through contaminated water or have been increasingly suspected of water-borne transmission based upon growing epidemiologic evidence are presented in Table 48-8.1,20,24,28,29,30
Chemical Contaminants in Water
Historically in the United States, the primary concern of those responsible for providing safe drinking water has been protection from microbial contamination, and when microbial contamination of drinking water occurs, there is no question that public health is threatened.1,31 Preventing microbial contamination of drinking water supplies is still a primary focus for those local, state, and federal authorities responsible for supplying potable water to U.S. residents. However, during the past two decades, the chemical contamination of drinking water and chemical pollution of watersheds and water reserves has become a growing problem for public health specialists and water utility practitioners. In addition, the presence of chemical contaminants in drinking water has become a widely recognized concern of the public.32 Chemical terrorism with specific attacks on our national infrastructure, including drinking water resources, presents another possible public health threat.16
TABLE 48-8. SELECTED WATER-BORNE PATHOGENS OF PUBLIC HEALTH SIGNIFICANCE∗,†
Bacterial Pathogens
Campylobacter sp.
Pathogenic Escherichia coli
Diarrheagenic Escherichia coli
Salmonella sp.
Shigella sp.
Vibrio cholerae
Yersinia enterocolitica
Legionella sp.
Mycobacterium avium
Leptospira sp.
Helicobacter pylori
Protozoan Parasites
Entamoeba histolytica
Giardia lamblia
Cryptosporidium parvum
Acanthamoeba sp.
Naegleria fowleri
Balantidium coli
Microsporidia sp.
Cyclospora cayetanensis
Toxoplasma gondii
Enteric Viruses
Hepatitis A and E
Norwalk and Norwalk-like
Rotavirus
Adenoviruses
∗The selected pathogens included in this table have either been identified as transmissible through contaminated water or have been increasingly suspected of water-borne transmission based upon growing epidemiologic evidence. Last JM. Public Health and Human Ecology. Stamford, CT: Appleton and Lange; 1998; Ford TE, MacKenzie WR. How safe is our drinking water? Postgrad Med. 2000;108:11–4; Huffman DE, Rose JB. The continuing threat of waterborne pathogens. In: Cotruvo J, Craun GF, Hearne N, eds. Providing Safe Drinking Water in Small Systems. Boca Raton, FL: CRC Press, Inc.; 1999:11–8; Highsmith AK, Crow SA. Waterborne disease. In: Encyclopedia of Microbiology. Vol 4. San Diego, CA: Academic Press, Inc.; 1992; Guidelines for Drinking-Water Quality. 2nd ed. Geneva, Switzerland: World Health Organization; 1993; Drinking Water and Disease: What Every Healthcare Provider Should Know. Washington, DC: Physicians for Social Responsibility; 2000.
†Modified from Recognizing Waterborne Disease and the Health Effects of Water Pollution: Physician On-line Reference Guide accessible at www.WaterHealthConnection.org. (Last JM. Public Health and Human Ecology. Stamford, CT: Appleton and Lange; 1998.)
Primary drinking water standards with maximum contaminant level goals (MCLGs) and maximum contaminant levels (MCLs) have been established in the United States for 80 chemical and radiologic agents as of 2005 as detailed in Table 48-5 and include regulatory standards for inorganic and organic contaminants, disinfectants and disinfection by-products, and radionuclides. The potential health effects from ingestion of these drinking water contaminants as determined by the Environmental Protection Agency and the primary sources of these contaminants in U.S. drinking waters are also presented in Table 48-5 for review. Unfortunately from an ecological point of view, water and water sediments are the final repositories or “sinks” for thousands of pounds of industrial and agricultural chemicals in the United States that are used by our modern society. An estimated 70,000 chemicals are in commercial use in the United States with approximately 700 new chemical agents synthesized each year.1,33 Each chemical agent may represent a “potential contaminant or parents of daughter contaminants born of reactions of these compounds with other compounds in the aquatic environment.”1,34 Of the 70,000 industrial and agricultural chemical agents in use in the United States, approximately 500 compounds have been evaluated for carcinogenic potential with the vast majority never being subjected to thorough toxicity testing for human health effects.1,33 Regulatory agencies and water quality specialists are often faced with determining the significance and fate of these chemical compounds in the aquatic environment only after these compounds have been developed and are in use.1,34 Water utility practitioners are often tasked with removing these chemical agents from water resources while hundreds more new chemical compounds are being synthesized each year. 
For example, more than 1000 specific synthetic organic chemicals (SOCs) at nanogram to microgram per liter concentrations have been identified in drinking water supplies in the United States. These compounds result from industrial and municipal discharges, urban road runoff, and reaction of chlorine in water treatment with natural organics. Most of the synthetic organic chemicals identified in drinking water have not been evaluated for potential human health effects, and the National Research Council indicates that only about 10% of the organic chemicals in water have even been identified.14 If potential human health effects have been established for specific chemical contaminants, these health effect profiles have generally been based on animal studies conducted on individual chemical contaminants. Therefore, there is uncertainty as to the actual risk posed to humans who may be ingesting very low concentrations of combinations of chemical contaminants over an extended period during their lifetime. The few epidemiological investigations assessing the synergistic effects of exposure to multiple chemical agents in drinking water have been far from definitive and require ongoing reassessment of the risk. MCLs are generally not available for thousands of chemical agents that might be in the public water supply since the scientific studies needed to determine their health effects are simply not available. Also, the analytic burden of assessing the presence and concentration of all water-borne chemicals would be inordinately great, since such determinations require experienced analysts and sophisticated instrumentation. 
Most large water supply laboratories are equipped with atomic absorption spectrophotometers that will identify heavy metals in water; however, gas chromatograph-mass spectrometers are required for determination of synthetic organics in water but are many times more costly and far fewer laboratories are equipped for these types of determinations. What can be stated with certainty is that the situation with regard to acceptable levels of chemical contaminants in drinking water is constantly changing and will undergo continuous reassessment.
Radiologic Contaminants in Water Radiologic contamination of public water supplies may be naturally occurring or result from man-made activity. The radiologic agents of importance that are regulated in drinking water are presented in
Table 48-5 and include alpha particles, beta particles and photon emitters, Radium 226 and Radium 228, and Uranium.13 Radium 226 is among the more important of the naturally occurring radionuclides and is found in groundwater as a result of geological conditions such as erosion of natural deposits. On the other hand, man-made radiologic contamination of water generally affects surface waters as a result of fallout from weapons testing and releases from nuclear power plants and users of radioactive materials.35 The establishment of limits for radioactivity in water suffers from the same uncertainties as those inherent in establishment of limits for many chemical contaminants of water; that is, the assumptions that there is no threshold below which any dose is considered to be harmless and that the human health effects are proportional to the dose. In addition, when establishing limits, the cost of achieving certain levels of radioactivity in a water supply must be weighed against the expected risks and benefits in reduced radiation exposures to the resident population served by the water supply.13 Naturally occurring radium contamination in drinking water is often of greater concern than man-made radioactive contamination, particularly since naturally occurring radiologic contamination disproportionately affects small water supplies that draw from groundwaters. For example, radium concentrations as high as 50 pCi/L have been reported, and some 500 community water supply systems in the United States deliver water that exceeds the radium standard. If other sources of water cannot be found, the radium can be removed by ion exchange, although this increases the concentration of sodium and may be of concern to that portion of the population requiring low-sodium diets. Radon, a daughter product of radium, is a naturally occurring radionuclide in groundwater, and surveys indicate that approximately 70% of groundwater supplies in the United States have detectable radon. 
Accordingly, radon is not likely to pose a problem for larger community supplies, where treatment and storage allow the gas to dissipate before delivery, but it may be a problem for individual or very small supplies.
WATER-BORNE DISEASE TRENDS AND SURVEILLANCE
One of the most critical outcomes of appropriate water quality management and conscientious source water protection is the prevention of water-borne diseases and the effects of water pollution on the health of the U.S. population. Monitoring water-borne disease trends resulting from exposure to biological, chemical, or radiologic contaminants in both drinking water and recreational waters provides valuable surveillance data that (a) reveals deficiencies in water quality management, (b) exposes breaches of the “multiple barrier” protection approach to ensuring safe drinking water, and (c) provides credible information for improving water quality regulations at the local, state, and federal levels. Previously in the United States, during the period of 1920 to 1970, monitoring data regarding water-borne disease outbreaks (WBDOs) were collected by various researchers and different federal agencies.36 However, since 1971, the Centers for Disease Control and Prevention (CDC), in coordination with the Environmental Protection Agency and the Council of State and Territorial Epidemiologists, has maintained a collaborative nationwide surveillance system that tracks the occurrences and causes of WBDOs associated with U.S. drinking water.37 In 1978, characterization and tabulation of waterborne disease outbreaks associated with recreational water exposure were added to this national surveillance system.38 The U.S. 
waterborne disease surveillance system incorporates surveillance data from each state, territory, and locality regarding water-borne disease outbreaks resulting from both microbial and chemical contaminant exposure associated with drinking water, recreational water, and other types of water exposure.37,38 This historical surveillance database is used by federal agencies, including the CDC and EPA, to (a) identify the types of water systems, the underlying deficiencies of water systems, and the etiologic agents associated with outbreaks from drinking water; (b) identify the etiologic agents, types of aquatic venues, water-treatment systems, and deficiencies associated with recreational water-borne disease outbreaks; (c) evaluate and reassess the adequacy of treatment technologies and prevention strategies for providing safe drinking water and recreational waters; and (d) establish national research priorities based upon water-related outbreak trends that may provide the basis for improved water-quality regulations in the future.37,38 These drinking water and recreational water surveillance activities are also utilized to (a) characterize the epidemiology of WBDOs, (b) identify changing trends in the etiologic agents that cause WBDOs in the United States, (c) determine why outbreaks occurred in a specific water venue or community, and (d) prevent water-borne disease transmission. The results of these surveillance activities are currently reported in the MMWR Surveillance Summaries published by the CDC, with one summary dedicated to drinking water–associated outbreaks and a second dedicated to recreational water–associated outbreaks.37,38 It is important to note that the data reported in these surveillance summaries represent only a portion of the burden of human disease associated with drinking water and recreational water exposure, since endemic water-borne disease risks are not included and reliable estimates for the number of unrecognized WBDOs are not available, potentially leading to underreporting of outbreaks and cases of water-related disease.37,38 In addition, other factors may lead to underreporting of water-borne disease cases and outbreaks, including but not limited to the following: (a) not all water-related outbreaks are detected, investigated, and subsequently reported to the CDC or EPA; (b) inadequate diagnosis and underreporting of water-borne disease cases by medical practitioners often confound water-borne disease surveillance programs and chemical exposure registries; (c) the sensitivity of this surveillance system has not been assessed; (d) water-borne outbreaks occurring in national parks, tribal lands, and military bases may not always be reported to state 
or local authorities; (e) availability of laboratory testing, requirements for reporting diseases, and the financial resources available to local health departments for surveillance and investigation of probable outbreaks may restrict reporting; and (f) this surveillance system is passive and the accuracy of the data depends solely on the conscientious reporting of the agencies involved (i.e., state, local, and territorial health departments).1,37,38 For these reasons, the true incidence and prevalence of water-borne disease outbreaks in the United States resulting from microbial, chemical, or radiological contamination of drinking and recreational water is probably greater
Water Quality Management and Water-Borne Disease Trends
than is reflected in these U.S. national surveillance systems.1,37,38 Even with these restrictions, it is extremely valuable to review the state of water-borne disease associated with exposure to drinking water and recreational waters contaminated with microbial, chemical, or radiologic agents as a “window” to the effectiveness of water quality management in the United States. A brief review of the most recent waterborne disease outbreak surveillance data reported by the CDC and EPA for drinking water and recreational water is summarized below.
Water-Borne Disease Trends Associated with Drinking Water

The most recent MMWR Surveillance Summary detailing water-borne disease outbreaks associated with drinking water was published in 2004 and summarizes data collected from the reporting period of 2001–2002.37 To interpret the surveillance data in this summary, the following definitions are important. According to the CDC and EPA, two criteria must be met in order for an event to be defined as a drinking water–associated disease outbreak. First, two or more individuals must have experienced similar symptoms after exposure to the contaminated drinking water; that is, an outbreak is not an individual case of a specific water-borne disease. However, this first criterion is waived for a single case of laboratory-confirmed primary amebic meningoencephalitis (PAM) or for a single case of water-borne chemical poisoning if associated water-quality data indicate contamination by the chemical compound in question.37 The second criterion developed by the CDC and EPA states that epidemiologic evidence must implicate drinking water as the probable source of the water-related illness or disease. It is important to note the following: (a) reported outbreaks caused by contaminated water or ice at a point of use, such as contaminated water faucets or serving containers, are not classified as drinking water–associated outbreaks, and (b) WBDOs associated with cruise ships are not reported in the CDC/EPA surveillance summaries.37 Water-borne disease outbreaks are reported by the CDC and EPA according to different types of drinking water systems; this classification scheme is summarized in Fig. 48-2. Public water systems are classified as either community or noncommunity systems and are
Drinking water systems
• Public water systems, public or private ownership (subject to EPA* regulations)
  • Community
  • Noncommunity
    • Transient (e.g., gas stations, parks, resorts, campgrounds, restaurants, and motels with their own water systems)
    • Nontransient (e.g., schools, factories, office buildings, and hospitals with their own water systems)
• Individual water systems (might be subject to state or local regulations)
  • Use of nonpublic sources: privately owned home or farm wells, springs, or surface-water sources; streams, ponds, or shallow wells not intended for drinking
  • Bottled water (commercial bottled water is regulated by FDA†; persons might also fill their own containers)§
* Environmental Protection Agency. † Food and Drug Administration. § In certain instances, bottled water is used in lieu of a community supply or by noncommunity systems.
Figure 48-2. Classification of water systems used for reporting water-borne disease outbreaks by the CDC and EPA. (Courtesy of the Centers for Disease Control and Prevention. Adapted from Blackburn B, Craun GF, Yoder JS, et al. Surveillance for waterborne-disease outbreaks associated with drinking water—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):23–45.)
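The CDC/EPA two-criteria outbreak definition described above can be read as a simple decision rule. The sketch below is illustrative only: the function and argument names are hypothetical, not part of any CDC or EPA system, and it encodes the "two or more ill persons" criterion together with its single-case waivers.

```python
# Illustrative sketch of the CDC/EPA drinking water-associated outbreak
# definition; names are hypothetical, not an official API.

def is_drinking_water_outbreak(n_ill, diagnosis, epi_implicates_water,
                               water_quality_confirms_chemical=False):
    """Apply the two CDC/EPA criteria described in the text."""
    # Criterion 2: epidemiologic evidence must implicate drinking water
    # as the probable source of the illness.
    if not epi_implicates_water:
        return False
    # Criterion 1: two or more persons with similar symptoms...
    if n_ill >= 2:
        return True
    # ...waived for a single laboratory-confirmed case of PAM, or for a
    # single case of chemical poisoning when water-quality data confirm
    # the chemical in question.
    if diagnosis == "primary amebic meningoencephalitis":
        return True
    if diagnosis == "chemical poisoning" and water_quality_confirms_chemical:
        return True
    return False
```

By this rule, a lone case of giardiasis is not counted as an outbreak even when drinking water is implicated, whereas a single laboratory-confirmed PAM case is.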
Environmental Health
regulated under the SDWA. Of the approximately 161,000 public water systems in the United States, 33% are community systems and 67% are noncommunity systems, the latter including 88,000 transient systems and 20,000 nontransient systems (refer to Fig. 48-2 for details).37,39 Despite representing the minority of water systems in number, community water systems serve 273 million U.S. residents, or more than 93% of the U.S. population.39 In addition, although 91% of public water systems are supplied by groundwater, more Americans (66.2%) have their drinking water supplied by public systems served by surface water. Finally, approximately 17 million U.S. residents, or only 6.0%, rely upon private, individual water systems.39 Drinking water–associated outbreaks involving water not intended for drinking, such as lakes, springs, and creeks used by campers, are also classified as individual systems, as are sources such as bottled water.37 Each drinking water system evaluated for a water-borne disease outbreak is also classified by the underlying deficiency that caused the outbreak; deficiencies include (a) untreated surface water; (b) untreated groundwater; (c) treatment deficiencies such as temporary interruption of disinfection, chronically inadequate disinfection, or inadequate or no filtration; (d) distribution system deficiencies such as cross-connection contamination, contamination of water mains during construction or repair, or contamination of a water storage facility; and (e) unknown or miscellaneous deficiencies, including contaminated bottled water or a water source not intended for drinking such as irrigation water.37 Water-borne disease outbreaks associated with drinking water categorized by year and etiologic agent are presented in Fig. 48-3 spanning the reporting period of 1971–2002.37 The number of water-borne disease outbreaks associated with drinking water by year and type of water system affected is presented in Fig. 
48-4 for the period of 1971–2002.37 Both figures illustrate a significant decline in the number of reported water-borne disease outbreaks associated with drinking water in the United States since early reporting in 1971. Drinking water–associated outbreaks classified by etiologic agent, type of water system, water source, and underlying system deficiency from the most recent reporting period of 2001–2002 are presented in Fig. 48-5. During this most recent reporting period, a total of 31 outbreaks
associated with drinking water were reported in 19 states, causing illness in an estimated 1020 persons and resulting in 51 hospitalizations and seven deaths.37 Of these 31 water-borne disease outbreaks, 61.3% resulted from a known infectious water-borne pathogen, 16.1% were attributed to water-borne chemical poisoning, and 22.6% resulted from an unknown etiology (Fig. 48-5).37 Of the 31 total outbreaks, 19.4% were caused by Legionella species, 16.1% by water-borne viruses, 16.1% by water-borne parasites, and 9.7% by water-borne bacteria other than Legionella species.37 During the 2001–2002 reporting period, six drinking water–associated disease outbreaks were attributed to Legionella species; these caused illness in 80 exposed individuals and resulted in 41 hospitalizations and four deaths.37 During this same reporting period, five outbreaks affecting 727 persons were attributed to water-borne viral infections, all determined to be of norovirus etiology. Illnesses from these five water-borne viral outbreaks resulted in two hospitalizations and one death.37 In addition, five drinking water–associated outbreaks affecting 30 individuals were attributed to parasitic infection (three Giardia intestinalis outbreaks, one Cryptosporidium sp. outbreak, and one Naegleria fowleri outbreak), resulting in five hospitalizations and two deaths, with both deaths caused by water-borne Naegleria fowleri infection. Three outbreaks affecting 27 individuals were attributed to bacterial infections other than Legionella species and included one Escherichia coli O157:H7 outbreak, one Campylobacter jejuni outbreak, and an outbreak involving coinfection with two different bacteria, Campylobacter jejuni and Yersinia enterocolitica. 
These three water-borne bacterial outbreaks resulted in three hospitalizations and no deaths.37 Seven drinking water–associated water-borne disease outbreaks affecting 117 persons were reported that involved acute gastrointestinal illness (AGI) of unknown etiology; these resulted in no hospitalizations or deaths, and the suspected etiologic agents were never confirmed.37 Five outbreaks affecting 39 individuals were attributed to chemical contamination of drinking water, with two outbreaks resulting from
Figure 48-3. Number of water-borne disease outbreaks associated with drinking water by year and etiologic agent (Legionella species*, AGI†, chemical, viral, parasitic, and bacterial) in the United States, 1971–2002. (Courtesy of the Centers for Disease Control and Prevention. Adapted from Blackburn B, Craun GF, Yoder JS, et al. Surveillance for waterborne-disease outbreaks associated with drinking water—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):23–45.)
* Beginning in 2001, Legionnaires disease was added to the surveillance system, and Legionella species were classified separately.
† Acute gastrointestinal illness of unknown etiology.
Figure 48-4. Number of water-borne disease outbreaks* associated with drinking water by year and type of water system (individual, noncommunity, and community) in the United States, 1971–2002. (Courtesy of the Centers for Disease Control and Prevention. Adapted from Blackburn B, Craun GF, Yoder JS, et al. Surveillance for waterborne-disease outbreaks associated with drinking water—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):23–45.)
* Excludes outbreaks of Legionnaires disease.
Note: Individual = private or individual water systems (9% of U.S. population, or 24 million users); Community = systems that serve >25 users year round (91% of U.S. population, or 243 million users); Noncommunity = systems that serve <25 users and transient water systems such as restaurants, highway rest areas, and parks (millions of users yearly).
Etiologic agent (n = 31): Legionella species, 19.4%; unidentified, 22.6%; chemical, 16.1%; viral, 16.1%; parasitic, 16.1%; bacterial*, 9.7%.
Water system (n = 25)†: individual, 40.0%; noncommunity, 32.0%; community, 28.0%.
Water source (n = 25)†: groundwater, 92.0%; surface water, 8.0%.
Deficiency§ (n = 25)†: untreated groundwater, 40.0%; treatment deficiency, 28.0%; distribution system, 20.0%; miscellaneous, 12.0%.
* Other than Legionella species. † Excludes outbreaks attributed to Legionella species. § No outbreaks were attributed to untreated surface water.
Figure 48-5. Drinking water–associated outbreaks by etiologic agent, water system, water source, and water system deficiency in the United States, 2001–2002. (Courtesy of the Centers for Disease Control and Prevention. Adapted from Blackburn B, Craun GF, Yoder JS, et al. Surveillance for waterborne-disease outbreaks associated with drinking water—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):23–45.)
excessive levels of copper and a third from elevated levels of copper and other metals. One water-borne chemical outbreak was caused by ethylene glycol contamination of the water supply of a school and another by ethyl benzene, toluene, and xylene contamination of bottled water. However, illnesses from these five water-borne chemical outbreaks associated with drinking water resulted in no hospitalizations or deaths.37 Although there has been improvement in the reduction of water-borne disease outbreaks associated with contaminated drinking water, the medical and public health consequences of these types of outbreaks may lead to significant morbidity and mortality for those most susceptible to water-related disease as detailed above.1
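As a quick arithmetic check, the 2001–2002 drinking water outbreak percentages quoted above correspond to whole-number counts out of the 31 reported outbreaks. In the minimal sketch below, the counts are inferred from the totals and percentages reported in the text, not taken from a separate data source:

```python
# Outbreak counts for 2001-2002 drinking water-associated outbreaks,
# inferred from the totals and percentages reported in the text (n = 31).
counts = {
    "Legionella species": 6,         # reported as 19.4%
    "viral (norovirus)": 5,          # reported as 16.1%
    "parasitic": 5,                  # reported as 16.1%
    "bacterial, non-Legionella": 3,  # reported as 9.7%
    "chemical": 5,                   # reported as 16.1%
    "unknown etiology (AGI)": 7,     # reported as 22.6%
}
total = sum(counts.values())  # 31 outbreaks in all
pct = {k: round(100.0 * v / total, 1) for k, v in counts.items()}

# Known infectious etiologies (Legionella + viral + parasitic + bacterial)
# together account for the 61.3% share quoted in the text.
infectious_pct = round(100.0 * (6 + 5 + 5 + 3) / total, 1)
```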
Water-Borne Disease Trends Associated with Recreational Waters

Unlike drinking water, which is regulated federally, recreational waters are protected by regulations that state and local governments establish and enforce against both naturally occurring and man-made contaminants.38 Because standards for operating, disinfecting, and filtering water venues such as public swimming and wading pools are set by state and local health departments, regulations vary throughout the United States. In 1986, the EPA published guidelines for microbiologic water quality that applied to recreational freshwater such as lakes and rivers as well as marine water; however, states have latitude in their recreational water regulations and in health advisory guidelines for warning signs to alert potential bathers to contaminated freshwater.38,40 Unfortunately, contaminated freshwater venues used for recreational activities may require weeks or even months to improve or return to acceptable levels of safety. In either treated or freshwater venues, prompt identification of potential sources of water contamination and appropriate remedial action are necessary to protect the safety of recreational waters. The most recent MMWR Surveillance Summary detailing water-borne disease outbreaks associated with recreational waters
was published in 2004 and summarizes data collected from the reporting period of 2001–2002.38 To interpret the surveillance data in this summary, the following definitions are important. According to the CDC and EPA, two criteria must be met in order for an event to be defined as a recreational water–associated disease outbreak. First, two or more individuals must have experienced similar symptoms after exposure to water or air encountered in a recreational water venue. However, this first criterion is waived for a single case of laboratory-confirmed primary amebic meningoencephalitis (PAM), a single case of wound infection, or a single case of chemical poisoning if associated water-quality data indicate contamination by the chemical compound in question.38 The second criterion developed by the CDC and EPA for reporting states that epidemiologic evidence must implicate recreational water or the recreational water setting as the probable source of the water-related disease or illness.38 Under these definitions, recreational settings or venues include swimming pools, wading pools, whirlpools, hot tubs, spas, waterparks, interactive fountains, and freshwater and marine surface waters. When recreational water outbreaks are analyzed, they are separated by type of venue: (a) untreated venues include fresh and marine waters, and (b) treated venues include the remaining settings such as swimming pools, wading pools, whirlpools, hot tubs, spas, waterparks, and interactive fountains.38 Water-borne disease outbreaks associated with recreational water categorized by year and type of illness are presented in Fig. 48-6 spanning the reporting period of 1978–2002.38 The number of water-borne disease outbreaks of gastroenteritis associated with recreational water by year and water type (treated versus untreated freshwater) is presented in Fig. 
48-7 for the period of 1978–2002.38 Both figures illustrate a significant increase in the number of reported water-borne disease outbreaks associated with recreational water in the United States since early reporting in 1978, particularly in treated recreational water venues. Recreational water–associated outbreaks leading to gastroenteritis classified by etiologic agent and type of exposure (treated versus untreated freshwater) from the reporting period of 1993–2002 are presented in Fig. 48-8.
Figure 48-6. Number of water-borne disease outbreaks associated with recreational water by year and illness (gastroenteritis, meningoencephalitis†, dermatitis, and other*) in the United States, 1978–2002.38 (Source: Courtesy of the Centers for Disease Control and Prevention.)
* Includes keratitis, conjunctivitis, otitis, bronchitis, meningitis, hepatitis, leptospirosis, Pontiac fever, and acute respiratory illness.
† Also includes data from a report of ameba infections (Source: Visvesvara GS, Stehr-Green JK. Epidemiology of free-living ameba infections. J Protozool. 1990;37:25S–33S).
Figure 48-7. Number of water-borne disease outbreaks of gastroenteritis associated with recreational water by water type (treated or untreated freshwater) in the United States, 1978–2002. (Source: Courtesy of the Centers for Disease Control and Prevention. Adapted from Yoder JS, Blackburn BG, Craun GF, et al. Surveillance for recreational water-associated outbreaks—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):1–21.)
Etiologic agent (n = 122): Cryptosporidium species, 39.3%; AGI*, 17.2%; E. coli†, 15.6%; Shigella§, 11.5%; norovirus, 9.0%; Giardia intestinalis, 5.7%; other¶, 1.6%.
Type of exposure (n = 122): treated water, 52.5%; freshwater, 47.5%.
Etiologic agent in treated water (n = 64): Cryptosporidium species, 65.6%; AGI*, 9.4%; Shigella§, 7.8%; E. coli†, 6.3%; norovirus, 4.7%; Giardia intestinalis, 3.1%; other¶, 3.1%.
Etiologic agent in freshwater (n = 58): AGI*, 25.9%; E. coli†, 25.9%; Shigella§, 15.5%; norovirus, 13.8%; Cryptosporidium species, 10.3%; Giardia intestinalis, 8.6%.
* Acute gastrointestinal illness of unknown etiology. † Includes Escherichia coli O157:H7, E. coli O26:NM, and E. coli O121:H19. § Includes Shigella sonnei and Shigella flexneri. ¶ Includes outbreaks of Salmonella and Campylobacter.
Figure 48-8. Water-borne disease outbreaks of gastroenteritis associated with recreational water exposure by etiologic agent and type of exposure in the United States, 1993–2002. (Source: Courtesy of the Centers for Disease Control and Prevention.38)
During the 2001–2002 reporting period, a total of 65 outbreaks associated with recreational water exposure were reported in 23 states, leading to water-borne illness in 2536 individuals and resulting in 61 hospitalizations and eight deaths.38 Of these 65 reported recreational water outbreaks, the following illness pattern emerged: (a) 46.2% resulted in outbreaks of gastrointestinal disease; (b) 32.3% resulted in outbreaks of dermatitis; (c) 12.3% resulted in outbreaks of meningoencephalitis; and (d) 9.2% resulted in outbreaks of acute respiratory illness, including one outbreak of unknown etiology, one outbreak of Pontiac fever, and four outbreaks caused by chemical contaminants from water-borne exposure.38 Of the 65 recreational water outbreaks reported in 2001–2002, 32.3% were associated with untreated freshwater and 67.7% resulted from exposure to treated water such as chlorinated water. Of the 30 outbreaks of water-borne gastroenteritis resulting from recreational water activities, 40.0% were associated with freshwater or surface marine water exposure and 60.0% were associated with exposure to treated water venues.38 Of these 30 gastroenteritis-related recreational water outbreaks, 40.0% were caused by parasites, 20.0% resulted from water-borne bacteria, 16.7% resulted from exposure to water-borne viruses, and the remaining 23.3% were of unknown etiology (refer to Fig. 48-8 for details). Cryptosporidium species remained the most common cause of recreational water outbreaks associated with treated water exposure, and toxigenic E. coli serotypes and norovirus were the most commonly identified sources of outbreaks associated with untreated freshwater exposure. 
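The recreational water illness-pattern percentages above likewise round-trip back to whole outbreak counts out of the 65 reported. A small sketch (counts inferred from the text, not independently sourced):

```python
# Recover integer outbreak counts from the 2001-2002 recreational water
# illness percentages quoted in the text (65 total reported outbreaks).
TOTAL_OUTBREAKS = 65
shares = {
    "gastroenteritis": 46.2,           # percent of all outbreaks
    "dermatitis": 32.3,
    "meningoencephalitis": 12.3,
    "acute respiratory illness": 9.2,
}
counts = {illness: round(pct * TOTAL_OUTBREAKS / 100)
          for illness, pct in shares.items()}
```

The recovered counts sum back to the 65-outbreak total, confirming the four illness categories are exhaustive for this reporting period.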
The etiologic agents that were suspected or identified in nongastroenteritis-related recreational outbreaks during this reporting period included Pseudomonas aeruginosa, Naegleria fowleri, Legionella species, Bacillus species, Staphylococcus species, avian schistosomes, and chlorine-based pool chemicals and products.38 During this reporting period, eight cases of laboratory-confirmed primary amebic meningoencephalitis (PAM) were attributed to Naegleria fowleri infection from recreational water exposure; all eight cases were fatal (a 100% mortality rate) and resulted from summertime contact with contaminated lake or river water.38 During 2001–2002, a total of 21 outbreaks of dermatitis were identified, resulting from swimming in freshwater lakes and rivers and from public and private pool and spa use.38 Although there has been improvement in the reduction of water-borne disease outbreaks associated with contaminated drinking water, the trend for morbidity and, at times, mortality associated with exposure to contaminated recreational waters, both in treated venues and in untreated freshwater and surface marine waters, is disturbing. The resulting medical and public health consequences of these types of recreational water–associated outbreaks may lead to significant morbidity and mortality for those most susceptible to water-related diseases as detailed below.1
Susceptible Populations and Water-Related Disease When assessing the impact of water-borne disease on the general population, it is important to recognize the fact that certain individuals may be at greater risk for the morbidity and mortality that may result from exposure to microbial, chemical, or radiologic contaminants in both drinking and recreational water.1,41 The U.S. national drinking water standards address this important issue and intend “to protect the
Figure 48-9. Susceptible subpopulations at greatest risk for water-related disease from exposure to water-borne biologic, chemical, and radiologic contaminants. (Source: Modified from Recognizing Waterborne Disease and the Health Effects of Water Pollution: A Physician On-line Reference Guide, accessible at www.WaterHealthConnection.org.)
general public as well as those groups of individuals who may be more sensitive than the general population to the harmful effects of contaminants in drinking water.”42 Susceptible or vulnerable subpopulations may experience medical sequelae at lower levels of exposure to specific contaminants in water than the general population.1 The variability of host susceptibility presents several significant challenges to managing and preventing water-related disease in the vulnerable or sensitive populations most at risk, including but not limited to the following:
• The segment of the population identified as at increased risk from lower levels of exposure to water-borne microbial, chemical, or radiologic contaminants currently represents 20% of the U.S. population. This percentage is expected to increase as the average American life span increases and immunocompromised individuals survive longer.1,41
• There are many biological factors that influence susceptibility to specific water-borne contaminants. Susceptible or high-risk populations may develop severe and fatal systemic disease from the same water-borne exposure that may present as an asymptomatic or mild illness in the general population.1,43 For example, the case fatality ratio for pregnant women from an infection of hepatitis E during a water-borne disease outbreak is 10 times greater than that for the general population.1,41
• An individual’s susceptibility to water-borne contaminants does not remain constant or fixed over time.1,44 During an individual’s lifetime, susceptibility changes with age from a highly susceptible developing fetus to a low-risk status as a healthy adult to increased susceptibility as an elderly patient with chronic disease.1
• Even members of the general population not specifically designated as high risk or vulnerable subgroups may at various times in their life become more susceptible to water-borne contaminant exposure.1 Intermittent illnesses or accidental trauma may shift the susceptibility status of a healthy low-risk individual to that of a susceptible patient requiring special consideration and protection from the adverse health effects of water contaminant exposure.1,44
• Specific individuals may experience water-related disease due to a greater level of exposure to water-borne contaminants than the general population. This enhanced level of exposure may be due to biological factors such as higher ratios of skin surface to body mass in children, resulting in a proportionally greater body burden of water contaminants than in adults.1,45
Selected susceptible subpopulations who may be at greater risk for developing water-related disease from exposure to water-borne biological, chemical, and radiologic contaminants than the general population are presented in Fig. 48-9 for review.1,18,30,41,42,46 It is important to define and characterize which members of the U.S. 
population may be considered susceptible or sensitive subpopulations in order to determine whether their specific risk profile for developing water-related disease warrants special health precautions. Individuals in these sensitive subgroups warrant special attention and risk reduction education in order to prevent the adverse health outcomes that may result from their increased risk of developing water-related diseases. Since the primary targets for water quality management and water
Pregnant women and developing fetuses
Neonates, infants, and children
Geriatric patients, including nursing home residents
Immunosuppressed individuals, including HIV and AIDS patients
Patients undergoing immunosuppressive therapy, including organ transplant recipients
Patients treated with chemotherapeutic agents, including cancer patients
Patients with preexisting clinical disorders or chronic diseases resulting in impairment of the renal, hepatic, or immunologic system
safety regulations in the United States may not completely eliminate the risk of developing water-related disease from biological, chemical, and radiologic contaminants in water, the following special health precautions and risk reduction behaviors are warranted for susceptible populations most at risk: (a) updated health advisories for healthcare providers detailing prevention guidelines for specific “at risk” groups (Fig. 48-9) when water contamination events occur or when there is concern that water exposure may lead to disease; (b) concise patient information addressing risk characterization and risk reduction activities to reduce overall risk of exposure to water-borne contaminants; (c) risk communication information addressing the specific risk of disease from water-borne contaminant exposure and “avoidance behavior” guidelines for “at risk” groups; and (d) recommendation for alternative sources of drinking water for high-risk patients when appropriate.1 Implementation of these risk-reduction activities may lead to improved health outcomes for the susceptible populations at increased risk for suffering from the morbidity and mortality that may result from exposure to water contamination.1
TREATMENT OF WATER AND WASTEWATER
Treatment of waters to make them suitable for subsequent use by humans requires physical, chemical, and biological processes.3 Some portion of this purification may take place in nature, but when natural cleansing cannot ensure a suitable level of water quality, engineered processes in water treatment plants are frequently necessary.40–47 Increasingly sophisticated engineering processes are required to address the contamination that impairs water quality under growing human-made pressures; nature’s purification capacity may also be overwhelmed by population growth and expansion in the face of fixed natural resources, including limited water resources. The unit processes used for purifying drinking water are briefly described below, while those used for treating wastewater are described in the Wastewater Collection and Disposal section of this chapter.
Distillation. The processes of evaporation and condensation maintain the hydrologic cycle (Fig. 48-1). Engineered distillation is used for desalination and for other applications where water of special quality may be needed in a community. Distillation produces the purest water of any of the processes listed below, with only volatile organics persisting after treatment.
Gas Exchange. In the gas exchange process, oxygen is added to water, and dissolved gases such as carbon dioxide and hydrogen sulfide are removed. This reduces unpleasant tastes and odors and may also assist in the oxidation of iron and manganese, rendering these compounds more easily removable. Aeration is an important natural process restoring water quality in polluted rivers and other bodies of water and is also used in engineered water purification and wastewater treatment.
Sedimentation. Under the action of gravity, many water-borne particulates, including bacteria, settle to the bottom of a body of water. Because the settling velocities of these small particles are low, turbulence or swift currents interfere with sedimentation, so the process is effective only in slow-moving bodies of water such as lakes. In engineered water purification facilities, special tanks that minimize extraneous currents are used to encourage the settling of even small, slowly settling particles. The engineered process of coagulation assists sedimentation.
Adsorption. While some adsorption takes place during filtration, special media designed to adsorb water contaminants are often also employed during water purification. Activated carbon, both in granular form as filters and in powdered form as an additive to water, is used to adsorb unpleasant tastes and odors and a wide variety of organic chemical contaminants.
Coagulation. During the coagulation process, colloidal and suspended particles are brought together to form large “flocs” that settle more easily during purification.
This process occurs naturally in lakes and other bodies of water, but it is also an important engineered process in water purification, where it is promoted by the addition of coagulants such as aluminum sulfate or synthetic polymers. The resultant “floc” layer is subsequently removed by sedimentation, filtration, or both. Flocculation. In nature, mixing is induced by the velocity of flow in rivers or by wind, thermal, or density-induced currents in lakes and reservoirs; the resulting interparticle contacts cause aggregation. This process of flocculation is engineered in water treatment plants, where, in coordination with coagulation, it aids the formation of large floc particles that are more easily removed from treated water.
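The intensity of mixing that drives flocculation is conventionally characterized by the velocity gradient G, estimated from the power dissipated in the tank as G = √(P/μV) (Camp's relation, a standard result not spelled out in this chapter). A minimal sketch, with assumed illustrative numbers for tank volume and paddle power:

```python
import math

def velocity_gradient(power_w: float, volume_m3: float,
                      mu_pa_s: float = 1.0e-3) -> float:
    """Camp velocity gradient G = sqrt(P / (mu * V)), in s^-1.

    power_w   -- mixing power dissipated in the tank (W)
    volume_m3 -- tank volume (m^3)
    mu_pa_s   -- dynamic viscosity of water (Pa*s); ~1.0e-3 at 20 C
    """
    return math.sqrt(power_w / (mu_pa_s * volume_m3))

# Hypothetical example: a 500 m^3 flocculation basin stirred with
# 1.5 kW of paddle power (values assumed for illustration only).
G = velocity_gradient(1500.0, 500.0)
print(f"G = {G:.0f} s^-1")
```

A G on the order of tens of s⁻¹, as here, is typical of gentle flocculation mixing; much higher gradients would shear the growing flocs apart.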
Filtration. During the filtration process, water passes through a granular medium, and fine particulates are removed by adhesion to the grains and by sedimentation in the pore spaces of the medium. Removal of particles by filtration is not accomplished by straining, as the particles removed are generally much smaller than the spaces between the grains of the medium. In some instances, biological growth on the filter may assist with removal of water-borne particles and with biochemical degradation of the adsorbed organic matter. In nature, the process of filtration occurs as water percolates through the soil.
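The hydraulic loading rates quoted later in this chapter (roughly 4–10 m/h for conventional rapid filters and about 0.15 m/h for slow sand filters) translate directly into the filter area a plant must provide. A small illustrative calculation, with an assumed community demand:

```python
def filter_area_m2(flow_m3_per_day: float, loading_m_per_h: float) -> float:
    """Required filter plan area = flow / hydraulic loading rate."""
    return flow_m3_per_day / (loading_m_per_h * 24.0)

flow = 10_000.0                      # m^3/day, assumed community demand
rapid = filter_area_m2(flow, 7.5)    # rapid sand filter, ~7.5 m/h
slow = filter_area_m2(flow, 0.15)    # slow sand filter, ~0.15 m/h
print(f"rapid: {rapid:.0f} m^2, slow: {slow:.0f} m^2, "
      f"ratio: {slow / rapid:.0f}x")
```

The roughly 50-fold area penalty of the slow sand filter matches the comparison drawn for slow sand filtration later in the section.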
Ion Exchange. In the ion exchange process of water purification, resins from both natural and synthetic sources are used to remove specific ionic contaminants. The most common products are zeolites, which remove the hardness-producing ions calcium and magnesium and replace them with sodium. Disinfection. A wide variety of disinfection procedures are employed during water purification to destroy pathogenic microorganisms that may cause water-borne disease. Sterilization of water is neither intended nor necessary in large-scale water purification. The most common disinfection procedure for water purification is chlorination. Other water treatment processes are available for specific applications, such as treatment to help prevent corrosion and to manage the solids or sludge that accumulate during the treatment process. The handling and disposal of waste sludge is a difficult problem, particularly at wastewater treatment plants, where the sludge is often noxious and can constitute a health hazard itself. Additional processes may be required where a community water system must remove substances such as ammonia, phosphorus, or radioactivity, or water contaminants specific to the community’s geological setting or industrial activities. In general, one or more of the unit processes mentioned above will be employed in a specific community water treatment system, depending upon the level and types of contaminants present in the source water. Where the target for water treatment is potable water, the selection of the treatment process depends on the quality of the water source. For example, groundwaters may require only aeration and disinfection, while heavily polluted surface waters may require all the unit processes described above.
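The rule of thumb just stated, that source-water quality dictates the treatment train, can be sketched as a simple lookup. The categories and trains below are illustrative assumptions distilled from the text, not a design standard; real selection requires engineering study of the specific source:

```python
def treatment_train(source: str) -> list[str]:
    """Map a rough source-water category to a typical sequence of
    unit processes (illustrative only)."""
    trains = {
        "protected groundwater": ["aeration", "disinfection"],
        "polluted surface water": [
            "coagulation", "flocculation", "sedimentation",
            "filtration", "disinfection",
        ],
    }
    return trains[source]

print(treatment_train("protected groundwater"))
# -> ['aeration', 'disinfection']
```

The point of the sketch is only that the number of unit processes grows with the pollution burden of the source, as the text states.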
Treatment processes for community wastewaters, which may include industrial wastewater discharges, are likewise selected to produce an effluent that protects the receiving water and the “downstream” uses of the released water by neighboring communities. If the effluent is to be discharged into an ocean, fewer processes are likely to be required than if the effluent is intended for discharge into a small, fragile stream or for reuse for nonpotable purposes later.
ENGINEERED WATER PURIFICATION PROCESSES
The conventional sequence of engineered processes for the purification of surface water for potable purposes includes flocculation and
Environmental Health
Figure 48-10. Typical structural profile of a water treatment facility in the United States. (Adapted from Fair GM, Geyer JC, Okun DA. Elements of Water Supply and Wastewater Disposal. New York: John Wiley & Sons; 1971.)
coagulation, sedimentation, filtration, and disinfection. This type of engineered treatment is intended to remove color, turbidity, microorganisms, colloidal particles, and some dissolved substances. Some of these processes may be omitted where the waters are drawn from a protected source and are free of unwanted color and turbidity. However, this type of conventional treatment is not applicable to dissolved synthetic organic chemicals and is only moderately effective in the removal of heavy metals and radioactivity. If these contaminants are present in a community water source, additional processes such as adsorption on granular activated carbon may be required. The following sections describe, and Fig. 48-10 illustrates, the principal unit processes required in most water purification plants in the United States. Coagulation. The coagulation process often includes chemical addition, rapid mixing, and flocculation. The process removes finely divided suspended material, colloidal material, microorganisms, and, to some extent, dissolved substances of larger molecular size by producing flocs large enough to be removed by sedimentation, filtration, or both. The raw source water being processed may be highly colored or colorless, with high or low turbidity. The water-borne particles responsible for color and turbidity are not discernible by the naked eye, but after coagulation the individual floc particles, ranging from 1 to 2 mm in diameter, are easily observed. The principal coagulants used for water purification are alum (aluminum sulfate), available in solid or liquid form, and ferric salts such as ferric sulfate. These aluminum and iron salts in solution form trivalent aluminum and ferric ions that react with alkalinity, which may be naturally present or may be added to the process with lime or soda ash.
The addition of these coagulants reduces the pH of the treated water, and optimum coagulation occurs only within a particular pH range. The aluminum hydroxide formed is essentially insoluble and forms a loosely bound gelatinous structure. The process is one in which the trivalent ions interact with the contaminant materials in the water to reduce the forces of repulsion among them, allowing larger and larger aggregates to form by accretion. As natural colloids are largely negatively charged, the positive ions are effective in neutralizing them and allowing the coagulation process to proceed. When large amounts of coagulants are required for treatment, a reduction in pH may result and the treated water may become somewhat more corrosive. The use of natural or synthetic polymers may reduce coagulant requirements tenfold. The proper amounts of coagulant and polymer and the optimum pH for a community treatment facility are determined by jar tests or by pilot plant studies in the design phase of a new facility. Jar tests are then run routinely at the treatment facility as a guide to the adjustment of the chemical concentrations needed in response to changing temperature and the variable quality of the raw source water. Chemical feeding equipment is selected on the basis of the required chemical, the required precision of feeding, and the variety
of concentrations necessitated by changes in water quality, which may be gradual, and in flow, which may be sudden and frequent. After the necessary chemicals are added to the water, it is customary to use rapid mixing equipment to ensure that the treatment chemicals are distributed uniformly. Turbine or propeller mixers are commonly used, but if the water is to be pumped, the chemicals may be added on the suction side of the pump, using the pump as the mixer. Hydraulic mixing may also be used, and the water pipes themselves can serve if there are sufficient bends to ensure the necessary turbulence, or stators may be inserted in a pipe. The time required for rapid mixing of the treatment chemicals is only seconds. The coagulation process is aided by flocculation produced in special tanks, where mechanical paddles or diffused air stirs the water gently, promoting contact among suspended particles; the resulting large flocs then settle easily. The parameter of concern in the design of such tanks is the velocity gradient, or the velocity variation across an element of water. In practice, the velocities in flocculation tanks vary from about 1 m/s at the entrance to the tanks, decreasing to about 0.2 m/s near the outlet, with a retention time of 30 minutes. Specific requirements vary with different raw waters and variations in temperature, so the flocculator is designed to accommodate the worst situation, which generally occurs in winter. Sedimentation. The effluent from the flocculation tanks, with large but variable-sized flocs, is subsequently led into sedimentation tanks, where the flocs are encouraged to settle. The detention time, again depending on the quality of the water to be treated, varies from 2–6 hours. The required capacity is divided into two or more units to permit one unit to be out of service without requiring shutdown of the plant.
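The detention times above fix the tank volume for a given plant flow, and a chosen tank depth then fixes the plan area and surface overflow rate. A minimal sizing sketch, with an assumed plant flow:

```python
def sedimentation_tank(flow_m3_per_h: float, detention_h: float,
                       depth_m: float):
    """Size a settling tank: volume from detention time, plan area
    from depth. The surface overflow rate (m/h) equals depth divided
    by detention time."""
    volume = flow_m3_per_h * detention_h   # m^3
    area = volume / depth_m                # plan area, m^2
    overflow = flow_m3_per_h / area        # m/h
    return volume, area, overflow

# Assumed example: 400 m^3/h plant flow, 4 h detention, 4.5 m depth,
# values chosen to sit within the ranges quoted in the text.
volume, area, overflow = sedimentation_tank(400.0, 4.0, 4.5)
print(f"{volume:.0f} m^3, {area:.0f} m^2, overflow {overflow:.3f} m/h")
```

Note that only flocs whose settling velocity exceeds the overflow rate are reliably captured, which is why quiescent tanks and well-grown flocs matter.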
Commonly, these tanks are rectangular in design and approximately 4–5 meters in depth. When the rate of accumulation of floc at the bottom of these tanks is expected to be too great for easy manual removal, mechanical sludge collectors may be installed. When the sludge can be stored at the bottom of the tank for several months without creating problems in the treatment process and then removed manually, mechanical sludge collectors can be dispensed with, saving the cost of equipment and maintenance. Another useful treatment configuration, particularly where facility space is at a premium, is the upward-flow or sludge-blanket clarifier, in which the water after flocculation moves up through a floc blanket that is suspended in the tank by the upward velocity. The water impurities are removed in the blanket and the resultant effluent is clear. These upward-flow units were initially developed for softening water, but their economy has encouraged their use in conventional water treatment as well. A compact arrangement for an upflow unit in combination with the flocculator is in concentric tanks, with the flocculator in the center and sedimentation on the outside.
However, unless first-class supervision of water treatment operation can be ensured, the conventional horizontal-flow sedimentation tank is preferred, as it is far less subject to disturbance. An improvement in the efficiency of sedimentation tanks can be obtained by increasing the area on which the floc can settle. Initially, this was accomplished by installing intermediate bottoms in horizontal-flow tanks. A more effective approach has been the installation of a series of sloping plates or tubes in the top of the sedimentation tank, through which the floc-bearing waters must flow before reaching the effluent weirs. The flocs settle on the plates or in the tubes and then fall to the bottom of the tank. Such settler modifications can be installed in existing tanks to improve their performance. Management of Sludge. The material that falls to the bottom of water treatment tanks as sludge is now identified as a residual, to encourage its recovery as a by-product of the treatment process. At one time water treatment sludge was returned to the river from which the community’s source water was drawn, but today this residual is considered a pollutant and must be reclaimed or disposed of properly. Discharge of the residual into the community sewer system for final handling and disposal at the wastewater treatment plant is an expeditious solution. Otherwise, transport by truck to a landfill or other acceptable place of disposal may be required. In such instances, dewatering of the sludge is appropriate to reduce the weight of the product and the cost of transportation. For this purpose, sand drying beds, vacuum filtration, filter presses, or centrifuges, all processes similar to those used for handling sludge in wastewater-treatment plants, may be used. Filtration.
Floc particles that escape the sedimentation tank are removed by filtration. The conventional filter in a water treatment process is approximately 1 meter in depth and composed of sand grains varying in size from 0.5 to 1.0 mm. This granular material rests on a bed of graded gravel or on a specially designed underdrain system made of porous plates or false bottoms of various types with small orifices to ensure uniform distribution of backwash water. As water passes down through the filter bed, the floc settles in the interstices or is adsorbed onto the surface of the sand grains. When the amount of floc accumulated in the filter is large enough to impede the flow of water, the filter is backwashed with filtered water, sometimes accompanied by air. A filter run may last 48 hours, and the filter-washing process about 10 minutes. Approximately 3–5% of the filtered water is required for backwashing, and the contaminated backwash water can be retained and returned to the plant influent. The cleansing of the filter is accomplished by expanding the sand bed with water introduced into the bottom of the filter. On completion of this wash, the sand settles back into place with the finest particles at the top and the coarsest at the bottom. This configuration limits the effectiveness of the filter, as the top layer tends to remove most of the floc particles and the remainder of the depth of the filter remains unused. One approach currently accepted to alleviate this situation is the use of dual-media or even multimedia filters, in which granular materials of different specific gravity are employed. Most common is the dual-media filter, in which coarser anthracite grains with a specific gravity of about 1.5 rest on top of a silica sand layer with a specific gravity of 2.65.
In backwashing such a filter, the larger grains of the lighter anthracite always remain on the top of the filter, permitting the full depth of the bed to be more effectively used during the water treatment process. The rate of application to the water treatment filters ranges from 4 to 10 m/h, depending on the quality of the water. The facility water engineer selects the sizes and loadings of the pretreatment and filter units to minimize the overall cost of treatment. When raw source waters are of high quality and coagulation and sedimentation are not required, as is the case when water is drawn from upland reservoirs with low turbidity and color problems, direct filtration is used with the addition of a very small amount of coagulant and coagulant aids. Such direct filtration processes are widely practiced in the treatment
Water Quality Management and Water-Borne Disease Trends
of water for swimming pools and for many industrial uses. Since filters are periodically taken out of service for washing, there must be multiple backup units. Also, because of the many valves and other fittings required in each filter, there is a limit to their size requiring larger water-treatment plants to house many separate filter units. A preferred mode for operating sand filters during water treatment would be upflow so that the incoming source water is met initially by the largest sand grains. Such upflow filters or a combination of upflow and downflow filters with the filter drains in the center are widely used in Europe. The reluctance to adopt such upflow units in the United States arises from the fact that an upflow filter constitutes a cross-connection. In a conventional filter, the unfiltered water is always separated from the filtered water by the bed. The underdrain system contains only filtered water that is used for backwashing. The dirtied wash water on top of the filters after washing should not mix with the filtered water. Any wash water that remains on top of the filter is refiltered, so there is never an occasion when the underdrains receiving the filtered water can be contaminated by unfiltered or wash water. On the other hand, the purified effluent from upflow filters occupies the same space above the filters as is occupied by the wash water during washing, and contamination of the filtered water is quite possible. In many European plants, where filters are used primarily to remove iron and manganese from groundwaters and there is no bacterial contamination, such cross-connections are of little consequence. Also, in the reclamation of wastewaters where the production of potable water is not intended, upflow filters may be effectively utilized. The earliest filters, introduced in the middle of the nineteenth century for use without pretreatment by coagulation and sedimentation, were slow sand filters. 
The rate of application to the slow sand filter is approximately 0.15 m/h, requiring an area about 50 times greater than that of conventional filters. The slow sand filter operates through the creation on its top layer of a film of material removed from the water, including microorganisms, termed the schmutzdecke. It is this living filter that removes color, turbidity, and bacteria from the treated water. The top layer is easily clogged, however, and the top 3–5 cm of sand are removed periodically for washing. Immediately after removal of the schmutzdecke, the performance of the filter may be somewhat poorer, but it is quickly restored. Several cleanings take place before the washed sand is returned to the filter. To permit somewhat greater loads on slow sand filters, it is customary to precede slow sand filtration with pretreatment by rapid sand filters or microstrainers, drums made of finely woven steel mesh that remove algae and other large particles, permitting the slow sand filters to operate for longer periods between cleanings. Chemical coagulants are not used in this process. Diatomaceous earth filters are used for industrial water supplies and many other specialized applications. The water to be filtered is mixed with diatomaceous earth and forced through a porous septum in a pressure shell, forming a filtering layer several millimeters thick on the surface of the septum. When the filter is clogged, the flow is reversed, the diatomaceous earth is dislodged and washed away, and a new cycle of operation is initiated. Filters of considerable capacity can be provided in a small space, and such units are particularly suitable for mobile installations, swimming pools, and many industrial water supplies. However, they are not well suited for handling coagulated waters because the filters clog quickly.
More importantly, the small thickness of diatomaceous earth does not provide the security against breakthroughs of unfiltered water that is provided by the meter depth of sand in conventional filters; therefore, they have not been widely adopted in the municipal practice of water treatment. Communities with hard water, which generally originates from groundwater, often use filters containing ion-exchange resins for softening. For treatment plants drawing upon highly polluted sources, granular activated carbon filters to adsorb chemicals are also being introduced. Such filter units, identified as “point-of-use” treatment devices, have been introduced for home use for attachment to the household supply or even to a single water faucet. These “point-of-use” treatment devices must be properly maintained. If the water treatment media are not replaced or recharged at regular intervals, the treatment devices may do more harm than good. These types of filters are of
little value in removing bacteria and may actually increase the bacterial content of water through bacterial overgrowth within the filter itself. Home units for water softening are less necessary today with the availability of synthetic detergents; however, if home water softening is desired, the softener is best attached to the household hot water tank and washing machines rather than used to soften the entire water supply of the household. Disinfection. Disinfection with chlorine has been the single most important process for ensuring the bacteriological safety of potable water supplies in the United States and other industrialized nations. Water-borne epidemics from bacterial contamination have been reduced in industrialized countries as a result of this advance, and the bacterial water-borne outbreaks that have occurred have been traced to failures in chlorination. To be effective in water treatment, a water disinfectant should possess the following properties: (a) it destroys the bacteria, viruses, and amebic cysts in water within a reasonable time, despite all variations in water temperature, composition, and concentration of microbial contaminants; (b) it leaves the treated water nontoxic and palatable to humans and domestic animals; (c) it is reasonable in cost and can be stored, transported, handled, and applied easily and safely; (d) its residual concentration in the treated water is easily determinable, preferably automatically; and (e) it persists in treated water sufficiently that the disappearance of the residual serves as a warning of recontamination.
As it is not feasible to continuously monitor bacteriological or virological quality of water and there is a need to have the results before the water is distributed to the consumer, the ability to detect a residual concentration of a known bactericidal disinfectant after exposure for a certain time at a certain pH and temperature is a key quality-control test for disinfectants in drinking water. The use of chlorine or one of its derivatives meets these requirements most economically; however, other alternative methods of disinfection are sought for two reasons: (a) chlorine added to some waters imparts an undesirable taste and odor, particularly where phenol is present; and (b) the reaction of chlorine with organic matter even where these organics are not themselves of health significance has resulted in the formation of a wide range of reaction by-products. The problems created by the use of chlorine as a disinfectant have been exacerbated because chlorine is a useful oxidant that can remove taste and odors economically and, when added at the beginning of a water treatment process, reduce the concentration of microorganisms that can cause difficulty in sedimentation tanks and filters later in the water treatment process. The wide use of prechlorination, particularly with waters drawn from polluted sources such as the Mississippi and Ohio rivers, has resulted in many water systems not meeting the trihalomethane standard. This problem may be effectively addressed by the use of better sources of raw water or by adequate treatment before the disinfecting dose of chlorine is added. Humic organics, phenols, and other precursors of trihalomethanes should be removed before the addition of chlorine to reduce this problem; however, this generally requires abandoning the process of prechlorination. 
While the removal of natural organics through coagulation, sedimentation, and filtration can be readily accomplished, the removal of synthetic organic chemicals in polluted waters is much more difficult. The adoption of substitutes for chlorine must be initiated with great care since other disinfectants may themselves produce by-products with toxicologic profiles of which far less is known than of the trihalomethanes. Also, some of the other methods of disinfection such as chloramination do not provide the same level of microbiological safety as chlorine treatment. Nevertheless, pursuit of all methods of disinfection may reveal a combination that provides the required safety of water disinfection while minimizing the undesirable side effects of by-product production. For example, the use of other methods of disinfection, such as those described below with chlorine added primarily to ensure bacterial safety by providing the water with a measurable chlorine residual, may be suitable combinations for drinking
water treatment. Boiling water will provide disinfection, but this process is suitable only as an emergency measure for individual consumers and is not a reasonable community approach to providing drinking water. A wide range of other methods of disinfection, including the use of strong oxidants, is available and may be combined with chlorine. Sunlight is a natural disinfectant, and irradiation by ultraviolet light is an engineered process that can be tailored to disinfection of water. A mercury vapor arc lamp emitting invisible light at 2537 Å (253.7 nm), applied to a water source free of light-absorbing substances, particularly suspended matter that would shield microorganisms from the light, is a useful method of disinfection. Unfortunately, there is no way of continuously monitoring the effectiveness of the process and, as such, this application has not found use in municipal potable water supply practice in the United States, although it has been used in the Soviet Union. Silver ions are bactericidal at concentrations as low as 15 µg/L, but this disinfection process is quite slow. Larger concentrations that would speed up the process are unacceptable because of possible side effects from silver toxicity. Furthermore, silver ions are neither viricidal nor cysticidal at acceptable concentrations, and silver is an expensive disinfection option. Nevertheless, silver-coated sand may be appropriate for specialized installations requiring water disinfection. Copper ions are strongly algicidal, and copper sulfate is often used for algae control in lakes and reservoirs; however, copper is not bactericidal. Pathogenic bacteria do not survive in highly acid or alkaline waters, below a pH of 3 or above a pH of 11. A water treatment process that shifts the pH of source water to such levels, as may occur with excess lime treatment, will accrue some disinfection benefit. Otherwise, the use of acids or alkalis as disinfectants is not feasible. Oxidizing Chemicals.
Oxidizing chemicals used for water treatment include the halogens (chlorine, bromine, and iodine), ozone, and other oxidants such as potassium permanganate and hydrogen peroxide. Potassium permanganate has found wide use as a replacement for chlorine for taste and odor control, but it is not as effective a disinfectant. Ozone is useful for destroying odors and color and is also an effective disinfectant, but it suffers from the fact that it leaves no residual suitable for monitoring. Among the halogen compounds, gaseous chlorine and a wide variety of chlorine compounds are economically feasible for use in water treatment. Bromine and iodine have been employed on a limited scale for the disinfection of swimming pool waters, as well as in tablets for disinfecting small quantities of drinking water in the field and in remote conditions. Ozone. The combination of ozone for pretreatment, which provides some disinfection, followed by chlorination has become a popular sequence in Europe and is beginning to be employed in the United States to reduce the level of trihalomethanes in finished drinking water. In this process, ozone is produced on-site by the corona discharge of high-voltage electricity into dry air or oxygen. Ozone is corrosive and toxic and, in conjunction with hydrocarbons from automobile exhaust, is responsible for oxidant pollution and upper and lower respiratory irritation in many individuals. In the vicinity of a water treatment plant using ozone, the environmental effect can often be seen on surrounding vegetation. Nevertheless, ozone is used effectively and efficiently as an oxidant, a deodorant, a decolorant, and a disinfectant in both drinking water and wastewaters.
Because the production of ozone is expensive and energy-intensive, and because ozone residuals disappear rapidly from water and so are not available for quality control, ozone is used for special applications rather than as a general replacement for chlorine in water treatment. Intensive research into the characteristics, biocidal efficiency, and reaction products of ozone continues; however, ozone treatment is not expected to replace chlorine entirely because it leaves no residual for ongoing water quality monitoring.
Chlorine. Chlorinated lime (CaClOCl) was the first chlorine disinfectant used for treating public water supplies. It is a hygroscopic white powder that rapidly absorbs moisture and carbon dioxide from the air, with a resultant loss of chlorine, and it has largely been replaced by hypochlorites and by elemental chlorine (Cl2), which is produced by the electrolysis of brine and liquefied for storage and transport in steel cylinders. Liquid chlorine is still by far the most common form of chlorine used for water supply and wastewater disinfection. Calcium hypochlorite (Ca(OCl)2) is stable and is used for small installations, since it is easily stored in solid form in small containers, with 1–3% solutions prepared as needed. Sodium hypochlorite (NaOCl) is also used for small installations and increasingly for large installations where the transportation of liquid chlorine is considered too hazardous because of the danger of leakage and environmental contamination. Chlorine is heavier than air and extremely toxic, requiring all handling and dosing of liquid and gaseous chlorine to be conducted with extreme care. Chlorine dioxide (ClO2) is used in special instances, particularly where tastes and odors may be a problem, and is produced directly in water by the reaction of elemental chlorine with sodium chlorite (NaClO2). The on-site generation of hypochlorite by electrolysis of brine may be appropriate for communities in isolated locations where power is available but the delivery of chlorine may be difficult. When chlorine or its derivatives are added to water in the absence of ammonia or organic nitrogen, hypochlorous acid (HOCl), hypochlorite ion (OCl–), or both are formed, with the distribution between the two depending upon pH. These compounds are referred to in practice as free available chlorine.
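The pH-dependent split between HOCl and OCl– follows directly from the acid dissociation of hypochlorous acid (pKa about 7.5 at 25°C, a standard value not given in this chapter). A short sketch of the HOCl fraction as a function of pH:

```python
def fraction_hocl(ph: float, pka: float = 7.5) -> float:
    """Fraction of free available chlorine present as HOCl (the more
    effective disinfecting species), from the Henderson-Hasselbalch
    relation; pKa of HOCl is ~7.5 at 25 C."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (6.0, 7.0, 8.0):
    print(f"pH {ph}: {fraction_hocl(ph):.0%} HOCl")
```

Because HOCl is the stronger disinfectant, free chlorination is markedly more effective toward the lower end of the typical treatment pH range.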
When ammonia or organic nitrogen is present, monochloramine (NH2Cl), dichloramine (NHCl2), and nitrogen trichloride (NCl3) may be formed, the distribution among the species again being a function of pH. Generally, the first two of these compounds prevail and are referred to as chloramines or combined available chlorine. Since the disinfecting power of each of these species varies widely, the chemistry of chlorination must be fully understood so that chlorine may be used effectively and disinfection assured in finished drinking water. Although the purpose of adding chlorine is to destroy microorganisms, most of the substances in water that react with chlorine are inert organic materials, both natural and human-made, as well as other reducing substances. If organic matter and other compounds that exert a chlorine demand can be removed from water by treatment before the addition of chlorine, both the required chlorine dose and the formation of chlorinated organic compounds will be reduced. When ammonia or its salts are present in water during chlorination, chloramines are formed: monochloramine is formed in the pH range of 6–8, while dichloramine predominates at lower pH values. The chloramines appear as part of the residual chlorine, but they are considerably less effective disinfecting agents than hypochlorous acid. To ensure adequate disinfection, a free residual must be formed, and this requires the addition of more than enough chlorine to react with all the ammonia and organic compounds present in the water under treatment. The great advantage of obtaining free available chlorine is that most tastes and odors that can be oxidized by chlorine are destroyed, and rigorous disinfection, even to the point of inactivating viruses, can be ensured as long as the proper combination of chlorine residual concentration, pH, contact time, and temperature is conscientiously observed.
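The pH dependence of free chlorine described above can be illustrated numerically. Hypochlorous acid dissociates to hypochlorite with a pKa of roughly 7.5 at 25°C, so the fraction present as the more biocidal HOCl falls sharply as pH rises. A minimal Python sketch (illustrative, not design guidance):

```python
# Free-chlorine speciation vs. pH (illustrative sketch, not design guidance).
# HOCl dissociates to OCl- with pKa ~ 7.5 at 25 degrees C; the HOCl fraction
# follows from the Henderson-Hasselbalch relationship.

def hocl_fraction(ph: float, pka: float = 7.5) -> float:
    """Fraction of free available chlorine present as HOCl at a given pH."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (6.0, 7.0, 7.5, 8.0, 9.0):
    print(f"pH {ph:.1f}: {hocl_fraction(ph):.0%} HOCl")
```

At pH 6 nearly all of the free chlorine is HOCl, while at pH 9 only a few percent remains, which is why disinfection efficiency drops in alkaline waters.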
While hypochlorites used in water treatment facilities can be added with solution feeders, special equipment is required for adding elemental chlorine. Chlorine is transported in liquid form in steel cylinders, but chlorine gas also exists within the cylinder; it is this gas that is drawn off, dissolved in a stream of feed water, and delivered into the water to be treated. For significant rates of use, particularly in wastewater treatment where the amounts of chlorine used are substantially greater than in water supply disinfection, the chlorine may be withdrawn from the steel tank as a liquid and vaporized in special evaporation equipment. Below 9.5°C, chlorine combines
Water Quality Management and Water-Borne Disease Trends
with water to form chlorine hydrate or chlorine ice that may obstruct feeding equipment in treatment facilities. Therefore, it is important that chlorine feeding equipment and the water that may come in contact with the gas be maintained above this critical temperature. Because chlorine is highly toxic, it must be handled with great care and under adequate safeguards. Concentrations of 30 ppm or more induce coughing, and exposures for 30 minutes to concentrations of 40–60 ppm are very dangerous with 1000 ppm being rapidly fatal in humans. Since chlorine gas is heavier than air, it may concentrate in tunnels and lower levels of buildings at the water treatment facilities, exposing workers and the public. Therefore, special facilities are provided for handling chlorine with separate entrances to feeding and weighing rooms, special automatic ventilation, and safety equipment including appropriate personal protective equipment. Since chlorine is the most important safeguard for microbiological safety of drinking water, no breakdown in chlorine feeding can be tolerated. Thus, units must be adequate in size and be duplicated so that failure of any single unit would not interfere with continuous chlorination. An ample number of filled cylinders must be available with at least two cylinders on-line at all times so that an empty cylinder can be replaced without interfering with the chlorination process. Most chlorinators operate under vacuum to prevent leakage of chlorine gas with the vacuum created by the feed water being pumped under pressure. The pressure of water required for this feed water line must be substantially greater than the water pressure in the line being fed and this requires separate pumps. Failure of these pumps resulting from a power disruption would lead to cessation of the chlorination process. Therefore, suitable alarms with provision for standby power generation are required to ensure continuous water treatment operation. 
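Disinfection adequacy is commonly tracked as the product of residual concentration and contact time (the CT concept implicit in the combination of residual, pH, contact time, and temperature discussed above). A minimal sketch, assuming a placeholder required CT value; actual requirements come from regulatory tables and vary with pH, temperature, and target organism:

```python
# Hypothetical CT (concentration x time) adequacy check. The required CT value
# below is a placeholder; real values come from regulatory tables and depend on
# pH, temperature, and the target organism.

def ct_achieved(residual_mg_per_l: float, contact_min: float) -> float:
    """CT actually provided, in mg*min/L."""
    return residual_mg_per_l * contact_min

required_ct = 15.0  # mg*min/L -- illustrative, not a regulatory value
provided = ct_achieved(residual_mg_per_l=0.5, contact_min=30.0)
print(f"CT provided: {provided} mg*min/L, adequate: {provided >= required_ct}")
```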
Portable chlorinators that operate off the pressure in the cylinders may be used for emergency chlorination of water mains, wells, tanks, and reservoirs in the field. After chlorine is added, sufficient contact time must be provided; the time required depends upon the quality of the source water but is generally about 30 minutes. In water treatment facilities, this process may be completed in a clear well, and in wastewater treatment plants special chlorine contact chambers are constructed. Chlorination is now routinely automated to permit automatic variation of dosage to account for variations in flow and chlorine demand and to maintain a constant chlorine residual. The chlorine dosages and residuals are recorded, and it is common to maintain an alarm to give warning of any departure from the required chlorine residual. Corrosion Inhibition. Treated drinking water may be more corrosive because of the addition of coagulants and chlorine, both of which reduce the pH of the water under treatment. Also, many water sources in the United States are naturally quite soft and corrosive. To avoid corrosion of pipelines, hot water heaters, and plumbing fittings, it is general practice to reduce the corrosivity of finished water, either by adding sufficient alkalinity and raising the pH to render the water noncorrosive or by adding a hexametaphosphate sequestering agent, which forms a light coating in the pipes and mitigates the effect of any corrosion that might occur. Corrosion control is also important to minimize lead concentrations where lead is present in household plumbing and may represent a health hazard. Adsorption. Depending on the source of its water, a community water treatment plant may use one or more of the processes described above. However, none of these processes is directed against the synthetic organic compounds that pollute many U.S.
water resources resulting from drainage from urban and industrial activities. Some removal of these organics can be expected when powdered activated carbon is used for taste and odor control. The synthetic organic compounds in water were initially characterized by passing the water sample through activated carbon filters on which the organics are adsorbed and then dissolving these organics with chloroform. The 1962 U.S.
Environmental Health
Public Health Service Drinking Water Standards set a limit of 0.2 mg/L for this carbon chloroform extract. It was recognized that many of the organics adsorbed on the filter were of no health concern and that many organics that might be of health concern were not adsorbed at all. The use of special granular activated carbon (GAC) filters for treating water drawn from polluted sources is now being introduced in an attempt to remove some, if not all, of these refractory organic chemicals. Pilot water treatment plants have been built, but there is little experience with long-term, full-scale use of these special GAC filters. They have limited capacity, require recharging, and may release contaminants into the finished water being treated. In time, the larger cities that draw drinking water from polluted sources, such as the lower Delaware, Hudson, Ohio, and Mississippi rivers, are likely to incorporate GAC filters into their treatment processes. Smaller communities that draw from polluted sources will be constrained in their adoption of GAC filters by their limited operating and monitoring capabilities. One beneficial effect of requiring such additional treatment may be that water purveyors now drawing on polluted sources will examine other options for drinking water treatment. For example, Vicksburg, Mississippi, which had been drawing its water supply from the Mississippi River prior to passage of the SDWA, switched to groundwater. This possibility exists for other cities facing this environmental dilemma and may be more attractive than trying to monitor and remove the myriad synthetic organic compounds present in these rivers. Also, the cost of installing and operating GAC filters, along with the cost of monitoring, may make higher-quality sources a more attractive option.
Water Distribution Systems A water supply system, including a community’s treatment plant, is designed to meet the average demand for water on the maximum day of use for a locality. Water use varies from hour to hour and may reach a peak during an event such as firefighting. Accordingly, a water distribution system must be based on peak water demand requirements and include: (a) high-lift pumps that deliver treated water to the distribution system, (b) transmission mains for the treated water, (c) piping in municipal streets that serves residential homes and businesses, (d) hydrants in the distribution system for firefighting, and (e) service reservoirs that store treated water. Each water customer is served by a connection to the water main, generally through a dedicated meter. To ensure continuity of water service, water distribution pumps are selected so that if any single pump is down for repairs, the remaining pumps can handle the community’s water needs. Also, it is customary to provide standby power, generally through diesel engines, to ensure continuity of water service. Distribution system piping is most commonly cement-lined ductile cast-iron pipe and is generally 6 inches or more in diameter, the minimum size necessary for fire protection. The water distribution pipe network is designed with sufficient interconnections so that if any one pipe breaks, water service, including fire protection, can be provided via other routes. Water distribution systems are designed to maintain a minimum pressure of 20 psi (about 1.4 kg/cm2) during peak flow demands to permit service to be maintained at least to the second floor of residences without creating a negative pressure that might pollute the water supply. Higher buildings need to be served by their own pumping stations in order to receive water service.
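For reference, the 20-psi minimum pressure quoted above converts to metric units as follows (standard conversion factors; the helper function is illustrative):

```python
# Unit conversions for the 20-psi minimum distribution pressure quoted in the
# text. Conversion factors are standard (1 psi = 6.894757 kPa = 0.070307 kgf/cm2).

PSI_TO_KPA = 6.894757
PSI_TO_KGF_CM2 = 0.070307

def psi_to_kpa(psi: float) -> float:
    """Convert pounds per square inch to kilopascals."""
    return psi * PSI_TO_KPA

print(f"20 psi = {psi_to_kpa(20):.0f} kPa = {20 * PSI_TO_KGF_CM2:.2f} kgf/cm2")
```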
Elevated service reservoirs are present in many communities to maintain these pressures by storing water for peak demand use, firefighting, or emergencies. The introduction of dual water supply systems of potable and nonpotable water requires that the two systems be kept physically separate and easily distinguishable. This is accomplished by using different materials and colors for the pipe network and hydrants and different-shaped valve boxes.7 The operation and maintenance of a community’s water distribution system is the responsibility of a local water supply authority, so that water services are available and dependable in an emergency. It is to the credit of the water industry in the United States that power failures occur with considerably greater frequency than failures of water service.
WASTEWATER COLLECTION AND DISPOSAL
With increasing population growth and pressure from urbanization and industrialization, human waste products have increased in volume and type, and their impact on the environment in the United States has intensified significantly. Human waste products include night soil and wastewaters; each exerts its own stressors on the environment and presents unique public health challenges. Human night soil principally affects soil, and wastewaters principally affect water, but both may have a serious impact on the environment and on the health and general well-being of every community in the United States. Night Soil Collection and Disposal. The expression night soil describes human body wastes, excreta or excrement, or the combination of feces and urine voided by humans. The term derives from the historical practice of carting away accumulations of human ordure at night. In most industrialized countries, night soil no longer exists as such, since human excreta are flushed away by water into community or individual sewerage systems. The disposal of human night soil is a problem of economy, convenience, personal hygiene, and public health. The danger of exposure to infectious disease is proportional to the concentration of causative agents in night soil, which is the source of a wide variety of gastrointestinal infections. The safe disposal of human night soil has important public health implications, and the necessary operations to address this public health service are commonly left to local government in the United States. The two components of night soil—feces and urine—vary significantly in amount but only slightly in composition, depending upon the diet and age distribution of the general population and the consumption of water and other liquids. Human fecal matter contains food residues, bile and intestinal secretions, cellular substances from the alimentary tract, and expelled microorganisms in large numbers.
The average per capita amount of fecal matter excreted daily is estimated at approximately 90 grams, ranging up to an average of 150 grams for adult males in the United States. On the basis of wet solids, fecal matter contains about 1% nitrogen, much the same relative amount of phosphoric acid, and approximately one-fourth that weight of potash. The number of coliform organisms alone is well in excess of 100 × 10⁹, and there is a wide variety of other microorganisms in human fecal discharges, including bacterial cells that make up approximately one-fourth of the weight of human feces. The infective capacity of human feces is illustrated by the isolation of more than 100 × 10⁹ Salmonella typhosa from some carriers of typhoid fever bacilli and of millions of cysts of Entamoeba histolytica from carriers of amebic dysentery. Similar numbers of virus units of poliomyelitis have been isolated from the stools of infected individuals. The principal components of human urine are water, urea, and mineral ash. The weight of urine excreted by humans is about 1000 grams per capita daily, and up to 1500 grams in adult males. Compared with fecal material, urine is richer in fertilizing elements, with daily per capita production of nitrogen at 10 times that of fecal material, phosphoric acid at twice that of fecal matter, and potash at eight times that in human feces. It is therefore no surprise that urine constitutes the most agriculturally valuable part of human excreta. At the same time, human urine is normally sterile and destroys many bacterial species in fecal matter when left in contact for any length of time. As chemical fertilizers became economical and standard practice in industrial parts of the world, the use of human excreta as fertilizer was abandoned during the last century.
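The per-capita figures above imply the following back-of-the-envelope daily nitrogen loads (a sketch using only the numbers quoted in the text):

```python
# Back-of-the-envelope daily nitrogen loads per person, using only the figures
# quoted in the text: ~90 g feces/day at ~1% nitrogen (wet basis), with urine
# carrying roughly 10 times the fecal nitrogen.

FECES_G_PER_DAY = 90.0
FECAL_N = 0.01 * FECES_G_PER_DAY   # ~0.9 g nitrogen/day in feces
URINE_N = 10 * FECAL_N             # ~9 g nitrogen/day in urine

total_n = FECAL_N + URINE_N
print(f"Fecal N: {FECAL_N:.1f} g/day, urinary N: {URINE_N:.1f} g/day, "
      f"total: {total_n:.1f} g/day")
```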
However, with increasing cost of chemical fertilizer production, primarily resulting from the high energy costs involved in manufacturing, interest in using human wastes for agricultural fertilizer is being reconsidered in the United States. The circulation of enteric human pathogens in the environment from night soil exposure is a function of many conditions including (a) the prevalence of the causative agent in diagnosed cases and secondary carriers, (b) the rate of survival of the excreted pathogen in different ecosystems and climates, (c) the nature of the infection and the minimum infective dose necessary for infection, and (d) the host
susceptibility and immune status of the population potentially exposed to the pathogen. Wastewater Disposal and Water Pollution. In the United States, household wastes from kitchens, bathrooms, and laundries are conveniently flushed away as domestic wastewater, and manufacturing wastes are discarded as industrial wastewaters. The system of underground pipes and appurtenances into which wastewaters are discharged is collectively the community’s sewerage system. Municipalities initially constructed sewers to protect their city streets and low-lying areas from inundation by flooding rainstorms, not to carry away human body wastes. The original sewers were designed as storm-water drains, not sanitary sewers. Water carriage of human waste did not come into purposeful use until the nineteenth century. At that time, the Industrial Revolution and the explosive growth of urban communities placed a heavy burden on existing waste removal, which depended on the manual transport of human waste out of cities. As a consequence of this pressure, storm-water drains were pressed into service for domestic waste removal, leading to the combined sewerage system. During summer periods, the streams into which the sewers emptied began “to seethe and ferment under a burning sun” as the oxygenating capacity of the natural waters was surpassed. One resolution was the construction of intercepting sewers along the banks of larger bodies of water. These conduits transported human waste beyond the community being serviced to other points of possible disposal. Storm waters were spilled, together with their share of municipal wastes, into the otherwise unprotected waters. This weakness in the design of the combined system of sewerage has yet to be resolved in older cities in the United States.
Separate systems of sewerage did not come into significant use until the beginning of the twentieth century, when the treatment of wastewater was introduced, resulting in (a) the protection of water reserves within the community against human and industrial pollution and (b) the treatment of all wastewaters without the complication of rainwater removal. Understandably, the need for reducing the burden of human waste pollution imposed upon freshwaters and oceans was established initially in densely settled industrial communities in the United States. Waste removal techniques progressed from the separation of gross, generally settleable pollution constituents (primary treatment) to the separation of fine or dissolved, generally nonsettleable, pollution components by biological treatment (secondary treatment), and ultimately to the removal of small concentrations of specific classes of residual pollutants (tertiary treatment). The disposal of domestic wastewater and industrial wastes involves collection through the plumbing systems of residential homes and other buildings followed by delivery to public sewers; collection and treatment of communal and industrial wastewaters; and disposal of these treated wastes onto or into receiving waters. Modern wastewater treatment in the United States began in the 1920s and for a half-century was devoted to protecting the best uses of the receiving waters into which the wastewaters were being discharged, using the following classifications: Class A—drinking water for human consumption and protection of shellfish beds; Class B—bathing waters; Class C—aquatic life; Class D—industrial and agricultural water supply; and Class E—navigation and disposal of wastewaters without nuisance. Standards were established for each of these classes, and the treatment subsequently required was established to maintain these standards.
For example, treatment facilities discharging to Class A and B waters were required to provide bacterial removal, since drinking and bathing had rigorous bacterial standards that were not applicable to waters for other purposes. Dissolved oxygen levels did not need to be as high in waters used for industrial and agricultural purposes as in waters for aquatic life. Accordingly, the treatment to remove biochemical oxygen demand needed to be greater when discharges were released to Class C waters, intended for protecting aquatic life, than when they were released to Class D waters. These standards were the responsibility of the individual states, and some states were more rigorous in their implementation than others. Accordingly, some streams were allowed to become
highly polluted and unfit for any use. The environmental movement of the 1960s targeted this problem, leading to passage of Public Law 92-500, the Federal Water Pollution Control Act Amendments of 1972, which with its 1977 amendments is known collectively as the Clean Water Act. One of the goals of this act stated that water quality in the nation’s waters should provide for the protection and propagation of fish, shellfish, and wildlife and provide for recreation in and on the waters, all to be achieved by 1983. This eliminated the prior Class D and Class E classifications. Another national goal presented in the act was that the discharge of pollutants into navigable waters be eliminated by 1985, a goal recognized by many professionals as potentially unattainable. Unfortunately, Public Law 92-500 placed no emphasis on the preservation of receiving waters for potable water supplies. One important provision of the Clean Water Act is the requirement for National Pollutant Discharge Elimination System (NPDES) permits for all sewered, so-called point-source, discharges. The permits list the conditions that have to be met for pollution discharges, and together they provide a useful tool for wastewater management. The problems of nonpoint-source wastewaters, such as urban and agricultural runoff, however, remain less tractable. Wastewater Drainage of Buildings. The plumbing systems of dwellings and other buildings are the terminus of the water supply and the beginning of wastewater disposal, as illustrated in Fig. 48-11. The central components of house drainage systems are a vertical stack and a connecting horizontal house drain network leading to the house sewer, which in turn leads to the street sewer or to an on-site method of disposal. For tightness, all piping with the exception of the house sewer is metallic or rigid plastic.
Each fixture drains into the system through a trap in which a sealing depth of water prevents air within the piping from seeping into the building. Usually malodorous, this air may at times contain toxic and flammable contaminants, and the seals of traps are therefore intended to remain intact. To prevent their being siphoned by aspiration or blown out by back-pressure from water rushing through or past them in pipes or stacks, these traps are vented. For full safety, water inlets must discharge well above the high-water mark of the fixture to keep its waters from being sucked or forced back into the water system by backflow. If an adequate air gap cannot be provided, special backflow preventers must be installed in the supply pipe. Although water supply systems are normally under higher pressure than drainage systems, pressures are reduced drastically at times of high draft, such as during fires or when water pipes break. The pressure in the water system may then drop below atmospheric pressure, and the resulting negative (relative to barometric) pressure differential may pull dangerous pollutants into the water system. Wastewater Drainage of Towns. Sewerage systems, whether separate or combined, are in a sense vascular systems of underground conduits that collect the spent water of a community for subsequent treatment and appropriate disposal.50 Sewers generally originate in a high-lying portion of a community, run progressively downhill, and increase in size as they accumulate more wastewaters from larger and larger tributary areas. In the United States, street sewers are at least 8 inches in diameter and house sewers at least 6 inches. Sanitary and combined sewers are laid deep enough in the ground to drain the lowest fixtures in the properties served. However, when basements or lower levels are very deep, as is the case for most tall buildings, wastewaters are lifted to the street sewer by pumps or ejectors.
Sewers are generally of vitrified tile or concrete, with joints of premolded rubber or plastic to maintain water tightness. The slopes on which sewers are laid are generally set by the existing street grades. If a community is flat, sewers must still be laid on minimum grade, becoming quite deep, and pumping stations must lift the wastewater back to a minimum depth, often leading to an expensive wastewater system. Alternative systems such as vacuum or pressure sewers to avoid the need for laying sewers to grade may have appropriate application in these special situations. For inspection and cleaning, sewer access openings are generally built into the wastewater system at changes in grade and
Figure 48-11. Typical residential plumbing system and domestic wastewater disposal in the United States.
direction, and also at intermediate points in long, straight runs of network lines. Rainwater enters combined or storm sewers through street inlets, with catch basins necessary for combined sewerage systems. The outlets of catch basins to their sewers are trapped to contain air in the sewer and to prevent sand and gravel from entering the system; street inlets in separate storm systems, however, are left untrapped. Quantity and Composition of Wastewater. During dry weather periods, the volume of wastewater amounts to about 70% of the water used by a community, with this flow fluctuating by day, week, and season.3 Peak wastewater flows may be as much as 200% higher than the average daily flow. Associated industrial uses may introduce still greater differences and fluctuations. In wet weather, and for some period thereafter, groundwater adds to this wastewater flow, which is also affected by the tightness of the sewer system and the water content of the surrounding soil. Intercepting sewers for combined sewer systems are designed to carry as much water as can be economically and technologically justified for a community system. In localities where rainfall is steady and gentle, interceptors are designed for up to six times the dry-weather flow, since spills are rare. However, in communities where rain and snowstorms
are intense and of short duration, the frequency and volume of storm-water overflow are not altered significantly by oversizing interceptors to carry more than the peak dry-weather flow. The wastewater deposited in sewer systems originally shares the fundamental quality of the drinking water supply but is quickly contaminated by the human and industrial waste load imposed upon it, by the influx of groundwater, and, in combined sewers, by varying quantities of rainwater and street wash. The longer the wastewater flows or remains stagnant, the more its constituents disintegrate, with (a) fecal matter and paper breaking down; (b) bacteria and other saprophytes multiplying significantly; (c) respiration of living organisms and incidental biochemical changes reducing the oxygen originally dissolved in the water; and (d) fresh sewage first growing stale and then turning anaerobic or septic. Wastewater is obnoxious to the senses as it putrefies and dangerous to public health as it contains untreated pathogenic microorganisms. In general, wastewater is analyzed for the purpose of ascertaining or predicting the effects of its discharge on the bodies of water into which it is to be released and for evaluating the performance of wastewater treatment processes. A routine test for biochemical oxygen
demand (BOD) measures the oxygen requirements of bacteria and other microorganisms as they feed upon and bring about the decomposition of organic matter in the wastewater. These BOD requirements are important since they determine whether the receiving body of water remains aerobic (oxygen present) or becomes anaerobic (oxygen exhausted) after the release of wastewater. Therefore, the BOD test is a measure of the putrescible load placed on wastewater treatment works and on the bodies of water into which a wastewater treatment plant empties the community’s treated wastewater. In the United States, the per capita contribution of 5-day 20°C BOD to domestic wastewater averages 54 grams, of which 42 grams is in suspension, 19 grams is settleable from suspension, and 12 grams is dissolved. Industrial wastes may add appreciably to these amounts, and their relative impact on the water system is expressed in terms of the number of individuals that would exert an equivalent BOD load. Especially high BOD loads are added to municipal wastewater systems by such industries as breweries, canneries, distilleries, packing houses, milk plants, tanneries, and textile mills. Industrial Wastewaters. Because BOD characterizes only organic wastes typical of human discharges, where industrial wastes are present the chemical oxygen demand (COD) or total organic carbon (TOC) determination is also a useful indicator. Where industrial synthetic organic compounds or heavy metals are released in industrial effluent, these compounds should also be monitored in wastewater streams and treatment plant effluents, particularly where industrial wastes are discharged into waters that will subsequently be used for drinking or that will provide an aquatic environment for edible fish.
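The practice of expressing an industrial BOD load as a number of equivalent individuals can be sketched directly from the 54-gram per-capita figure; the 270 kg/day discharge used here is a hypothetical example:

```python
# Population-equivalent calculation for an industrial BOD load, using the
# 54 g/day per-capita BOD5 figure from the text. The 270 kg/day discharge is a
# made-up example value.

PER_CAPITA_BOD_G = 54.0  # 5-day, 20 degrees C BOD per person per day

def population_equivalent(bod_load_kg_per_day: float) -> float:
    """Number of people exerting the same BOD load as the given discharge."""
    return bod_load_kg_per_day * 1000.0 / PER_CAPITA_BOD_G

print(f"{population_equivalent(270.0):,.0f} population equivalents")
```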
Many industrial wastewaters interfere with treatment processes by imposing heavy loads on wastewater treatment plants or by impairing biological treatment through the presence of toxic components. Accordingly, industries may be required to pretreat their wastes before being permitted to discharge them into a municipal sewerage system. In addition, these industries are often required to reimburse the local municipality for handling these industrial wastewaters, generally in accordance with their volume and toxicity. Many industrial wastewaters are discharged directly into receiving waters, thereby requiring the NPDES permits created by the Clean Water Act. Facilities covered by EPA’s baseline NPDES general permit for storm-water discharges associated with industrial activity are subject to reporting requirements for chemicals classified as “water priority chemicals” and must monitor their storm-water discharges for these compounds.51 The Water Priority Chemicals list currently contains 234 compounds, along with the corresponding EPA-approved methods of analysis required for sampling a facility’s storm-water discharges. The EPA has thus undertaken two regulatory approaches to this environmental problem.51 The first is based upon the requirement to use the best available technology (BAT) economically achievable for pollution control, with guidelines established by EPA for at least 20 industrial categories. The second is based upon monitoring released chemical compounds in storm-water discharge, now covering a list of 234 individual “priority pollutants.”51 Establishing standards for these chemical pollutants, as well as monitoring procedures, is a formidable and expensive task, particularly since there are approximately 70,000 chemical compounds in industrial use in the United States.

WASTEWATER TREATMENT PROCESSES
With few exceptions, water purification and wastewater treatment processes are alike in concept and in technique; the two processes differ only in the amounts of pollutants that must be removed and in the degree of purification that must be accomplished at the end of the engineering process.3 The key operations in wastewater treatment processes are directed toward the separation of the imposed load of human and industrial wastes received by the “carrying” water.
Water Quality Management and Water-Borne Disease Trends
893
Wastewater Treatment Wastewater solids constitute sewage sludge, or residuals, and the desired phase separation or mass transfer of removable solids is accomplished by a number of different techniques, including physical, chemical, and biological unit operations. Moreover, since wastewater is very rich in nutrients, air or oxygen must be introduced into treatment processes if the wastewaters are to be kept fresh and odorless during treatment. This aeration, or gas transfer, is itself a form of aerobic mass transfer, since it also removes the gases and odors of decomposition. By contrast, anaerobic conditions may favor the degradation of putrescible matter in the dewatering and stabilization of sewage sludge. The common unit operations of wastewater treatment and their useful combinations are as follows:

Preliminary Treatment. Screens or comminutors are often placed at the influent of wastewater treatment plants to remove or macerate materials and other large objects that may interfere with subsequent unit processes farther down the treatment line. Similarly, grit chambers remove heavy sand and grit that may create problems in the treatment process and in the streams or other bodies of water receiving the wastewater effluent.

Sedimentation. The main workhorse of wastewater treatment plants is the settling tank, where settleable waste solids are removed by sedimentation. These tanks are similar to the sedimentation tanks used in water purification, except that, because the settled waste sludge quickly becomes putrescible, mechanical sludge-removal equipment is always included. Primary sedimentation tanks hold waste sewage for 1–2 hours, during which time 50–70% of the influent suspended solids, carrying 30–50% of the influent BOD, are deposited on the tank bottom.
The resultant sludge is bulky, since it is approximately 95% water, and putrescible, because its solids are volatile. Intermediate, secondary, or final sedimentation tanks remove the flocs or sludge formed in biological treatment of wastewaters. When wastewater treatment was first introduced, a recognized goal was the provision of at least primary treatment in all the industrialized countries. In the United States, Public Law 92-500 mandates a minimum of secondary treatment (i.e., biological treatment). In general, primary sedimentation is a precursor to secondary biological treatment of wastewater.

Chemical Coagulation and Flocculation. Chemical coagulation and flocculation of wastewater are similar to the processes used in water purification, although the dose of aluminum or iron salts required may be as high as 100 mg/L. Reductions as high as 80–90% in suspended solids and 70–80% in BOD are obtained with these processes; however, the resultant sludge from chemical wastewater treatment is generally more problematic than the sludge created by primary treatment.

Biological Treatment. Biological treatment units for wastewater are designed to encourage a high rate of growth and activity of scavenging microorganisms. Biological treatment has dual benefits: (a) conversion of finely divided, colloidal, and dissolved organic matter into settleable cell substance by biosynthesis; and (b) reduction of the energy level of the remaining organic matter by biological degradation or oxidation. However, for the process to be effective, the incoming wastes must not be toxic to the bacteria and other microorganisms that accomplish the treatment. As noted previously, secondary or biological treatment is the minimum treatment required of U.S. communities, with few exceptions.
Biological treatment removes approximately 85% of BOD, resulting in an effluent BOD of approximately 30 mg/L. Two principal biological treatment processes are in general use in the United States: (a) trickling filtration and (b) activated-sludge aeration.
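The removal percentages quoted above can be checked with a simple concentration mass balance. This is only an illustrative sketch: the 200 mg/L raw-sewage BOD is an assumed typical value, not a figure from the text.

```python
def effluent_conc(influent_mg_per_L, removal_fractions):
    """Apply successive removal fractions to an influent concentration."""
    c = influent_mg_per_L
    for r in removal_fractions:
        c *= (1.0 - r)
    return c

# Assumed typical raw-sewage BOD of 200 mg/L; primary sedimentation
# removes 30-50% of BOD per the text -- take 35% here.
raw_bod = 200.0
after_primary = effluent_conc(raw_bod, [0.35])      # ~130 mg/L

# The text's overall figure: ~85% BOD removal leaves ~30 mg/L.
overall = effluent_conc(raw_bod, [0.85])            # ~30 mg/L

# Implied removal, by the secondary stage, of what primary leaves behind:
secondary_fraction = 1.0 - overall / after_primary  # ~0.77
```

The calculation makes explicit that the familiar "85% removal" figure is an overall plant performance, of which primary sedimentation contributes roughly a third.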
894
Environmental Health
Diagrams for treatment works illustrating high-rate trickling filter treatment operations and activated-sludge treatment units are presented in Fig. 48-12. A third treatment option is the rotating biological contactor, which establishes biological growths on a fixed medium without requiring the large areas necessary for trickling filter or activated-sludge operations. The typical wastewater treatment operations used in the United States are described below and presented in the diagrams in Fig. 48-12:
Three variants of the conventional activated-sludge process (described below under Activated Sludge Units) illustrate its flexibility. In modified aeration, the period of aeration is shortened and the concentration of suspended solids in the mixed liquor is reduced. Less air is required, but the degree of final treatment is also reduced. In step aeration, or step loading, the returned sludge is added to a fraction of the in-flowing sewage, the remainder being introduced at equal distances along the path of the mixed liquor. In this process, the returning sludge renews its activity without being overwhelmed. In complete mixing, the influent is introduced transverse to the wastewater flow. This avoids “shock loading” of the sludge even more effectively than step loading. Sludge may be kept in circulation within the aeration unit until it is no longer degradable, a practice favored in small plants or when the organic substances in the wastes under treatment are completely soluble, as in the treatment of milk-processing wastes.
• Trickling Filters. Structurally, trickling filters are beds of stone or plastic media, 1–4 meters deep, with extensive surfaces to which microorganisms adhere as zoogleal slimes or biomasses. These biomasses are supplied with nutrients from waste products trickling over the beds from top to bottom and with oxygen from air sweeping up or down through the filter bed. The wastewaters are distributed over circular filters from arms rotating over the bed, propelled by their own jets, and over rectangular filters from rows of fixed nozzles. The filter effluent is collected by a system of underdrains large enough to carry the flows from the bed and to transmit enough air to the zoogleal slimes to ensure aerobic conditions. The biomass that builds up in the filters is balanced by sloughing into the filter effluent and is captured in the secondary settling tank. The wastewater effluent is frequently recycled for dilution of the influent and greater efficiency. For highly contaminated wastes or high-volume loading, two or more treatment units may be placed in series. After sedimentation, trickling filters can produce effluents containing less than 20 mg of BOD and suspended solids per liter. The performance of trickling filters is not significantly affected by transient shocks of strong or toxic wastes, implying that the filter slimes have a large reserve capacity that is not easily destroyed by serious influent challenges. As a result, trickling filters are sometimes introduced as “shock absorbers” in advance of activated sludge units, which are less rugged in their response to taxing challenges from varied and significantly contaminated or toxic wastewaters. The sludge produced by this wastewater treatment process is approximately 0.05–0.1% of the original wastewater flow under treatment. Ordinarily, this sludge contains 92–95% water and 60–70% organic matter on a dry weight basis.
Because this type of treatment plant occupies a large expanse of area, trickling filters are generally not used in large municipalities.
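The sludge-quantity figures quoted above translate directly into daily volumes and dry-solids masses. A minimal sketch, in which the 10,000 m³/day plant size is a hypothetical example and a sludge density of about 1 t/m³ is assumed:

```python
def daily_sludge(flow_m3_per_day, sludge_frac, moisture_frac):
    """Daily sludge volume (m^3/d) and dry-solids mass (t/d),
    assuming a sludge density of roughly 1 t/m^3."""
    vol = flow_m3_per_day * sludge_frac
    dry = vol * (1.0 - moisture_frac)
    return vol, dry

# Hypothetical 10,000 m^3/day plant; the text's ranges are
# 0.05-0.1% of flow and 92-95% water.
vol_lo, dry_lo = daily_sludge(10_000, 0.0005, 0.95)  # ~5 m^3/d, ~0.25 t/d solids
vol_hi, dry_hi = daily_sludge(10_000, 0.001, 0.92)   # ~10 m^3/d, ~0.8 t/d solids
```

Even at the high end, the dry-solids load is modest; it is the water content that makes sludge handling bulky, a point developed further under Sludge Management.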
• Stabilization Ponds. A system of stabilization ponds is constructed in porous or tight soil as simple basins approximately 1 meter deep, which allows exposure of large surfaces to air and light. Putrescible wastewaters are held in stabilization ponds for several weeks, during which time settleable solids sink to the bottom of the pond, where the organic matter decomposes. Under favorable climatic conditions, carbon dioxide, nitrogen, phosphorus, and other nutrients are released into the water during decomposition and stimulate profuse algal growths. During daylight hours, oxygen produced by photosynthesis maintains aerobic conditions in the ponds, while at night carbon dioxide is lost to the atmosphere. In this wastewater process, seepage and evaporation are not significant. Except in winter at high latitudes, when they are covered by ice, properly dimensioned stabilization ponds remain aerobic, and both BOD and coliform counts are reduced to acceptable levels. Climatic and operational factors affect the performance of stabilization ponds so significantly that allowable wastewater loadings cannot be predicted with certainty. Depending upon environmental circumstances, winter loadings may be no more than 20 persons per 1000 m2, with summer loadings as high as 400 persons per 1000 m2. The green alga Chlorella is a common bloom, and its small spherical cells are not easily separated from the wastewater effluent; nevertheless, the incentive remains to convert waste nutrients into useful algal proteins that can be harvested safely and economically as animal feed. Because of their large area requirements, stabilization ponds are introduced where waste volumes are not large and land is not too costly.
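The loading figures above imply the following pond areas; the town of 2,000 persons is a hypothetical example, and winter loading governs the design where it applies:

```python
def pond_area_m2(population, persons_per_1000_m2):
    """Stabilization-pond area required at a given areal loading."""
    return population / persons_per_1000_m2 * 1000.0

# Hypothetical town of 2,000; the text's loadings are
# 20 (winter) to 400 (summer) persons per 1000 m^2.
winter = pond_area_m2(2000, 20)    # 100,000 m^2 = 10 ha
summer = pond_area_m2(2000, 400)   # 5,000 m^2
```

The twenty-fold spread between summer and winter capacity explains why climatic factors dominate pond design and why ponds suit small communities with inexpensive land.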
• Activated Sludge Units. Structurally, activated sludge units are tanks 10–15 ft deep in which the wastewater is mixed and aerated together with previously formed biomasses, or flocs, returned from the final settling tank to the tank influent. These flocs act like trickling-filter slimes, and aerobic conditions are maintained by the injection of compressed air or oxygen or by absorption of oxygen from the atmosphere at the air-water interface, which is continuously renewed by mechanical stirring or air diffusion. The flocculant solids and the activated sludge are then removed in final settling tanks. The biomass that builds up in the aeration unit is maintained by returning a useful amount of sludge to the process from the final settling tank; recycling is therefore built into the activated sludge process. Transfer of organic matter to the zoogleal flocs by adsorption and subsequent stabilization and oxidation takes several hours. Sludge return of approximately 25% by volume of the incoming sewage produces about 2500 mg of suspended solids per liter of the mixed liquor. The activated sludge wasted from the process is large in bulk because of its high water content, and it is highly putrescible because the sludge consists principally of living cells. Modern activated sludge treatment facilities are inherently flexible, allowing variation in returned sludge and air in quantities and methods that meet the changing needs of a community. Three variants of the conventional process, described above, serve as examples.
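The 25% sludge-return figure quoted above can be reproduced with a flow-weighted solids balance at the aeration-tank inlet. The 12,500 mg/L return-sludge concentration below is an assumed value, chosen to match the text's 2500 mg/L result; the influent's own suspended solids are neglected.

```python
def mixed_liquor_ss(return_ratio, return_ss_mg_L, influent_ss_mg_L=0.0):
    """Mixed-liquor suspended solids from a flow-weighted mass balance:
    one volume of influent blended with `return_ratio` volumes of return sludge."""
    return (influent_ss_mg_L + return_ratio * return_ss_mg_L) / (1.0 + return_ratio)

# 25% return by volume with an assumed return-sludge concentration
# of 12,500 mg/L gives the text's mixed-liquor figure:
mlss = mixed_liquor_ss(0.25, 12_500)   # 2500 mg/L
```

The balance shows why operators can control the mixed-liquor solids simply by varying the sludge-return rate, which is the flexibility the text emphasizes.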
Tertiary Treatment. In many instances in the United States, secondary treatment is insufficient to maintain water quality in receiving streams and lakes, and tertiary treatment is required to preserve water safety. When a tertiary treatment operation involves physical and chemical processes, it is characterized as advanced waste treatment (AWT). Often tertiary treatment is required to remove additional BOD, which can be accomplished (a) by adding a second stage of biological treatment or (b) by carrying the process to nitrification, which oxidizes oxygen-demanding ammonia in the wastewater, relieving oxygen pressures on receiving streams and water bodies. Other tertiary treatment processes are designed specifically for removal of phosphorus and/or nitrogen from a community’s wastewater. Phosphorus is generally removed chemically, while nitrogen can be removed biologically or by ammonia stripping, a gas-exchange process. These two water-borne nutrients may stimulate eutrophication, or fertilization, of receiving lakes and other still or slow-moving bodies of water, and their removal from effluent wastewater may control eutrophication of receiving waters. Unfortunately, these nutrients may also originate in nonpoint sources, such as runoff from fertilized urban and agricultural lands, which are much more difficult to control. Where wastewater reclamation is intended after tertiary treatment, filtration may be introduced for polishing the wastewater effluent, leading to increased clarity and reduction of the chlorine demand for disinfection. In some special instances of tertiary treatment, filters may be employed with activated carbon to reduce the color and the
Figure 48-12. Typical wastewater treatment operations in the United States. (A) Trickling filter operation including comminution, plain sedimentation, contact treatment with recirculation, final settling, digestion, and drying of sludge. (B) Activated-sludge operation including coarse screening, grit removal, plain sedimentation, contact treatment, final settling, dehydration of sludge by centrifugation on vacuum filters, and final incineration. (Adapted from Fair GM, Geyer JC, Okun DA. Elements of Water Supply and Wastewater Disposal. New York: John Wiley & Sons; 1971.)
concentration of synthetic organic compounds in the resulting wastewater effluent. As the efficiency of removal of water-borne pollutants increases, the cost of removing each additional unit of pollution rises steeply. After secondary treatment achieves 85% removal, an additional 10% removal by tertiary treatment may cost more than removal of the first 40% of water contaminants. In fact, the step from 97% to 99% removal may cost as much as the entire effort from 0 to 97%. The operational and energy costs of tertiary treatment may thus be exceedingly high for some communities. Therefore, municipal authorities are often tasked with demonstrating ample justification in public benefits, including improved public health, before selecting tertiary treatment options.
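The escalating cost of incremental removal can be illustrated with a deliberately simple model. The cost function below is an assumption made purely for illustration (the text gives no cost function): it supposes that each remaining unit of pollutant is proportionally harder to remove, so that relative cost grows as r/(1 − r).

```python
def relative_cost(removal_fraction):
    """Illustrative (assumed) relative cost of reaching a given removal
    fraction r, modeled as cost ~ r / (1 - r)."""
    r = removal_fraction
    return r / (1.0 - r)

first_40 = relative_cost(0.40)                                  # ~0.67
extra_10_after_85 = relative_cost(0.95) - relative_cost(0.85)   # ~13.3
up_to_97 = relative_cost(0.97)                                  # ~32.3
last_2 = relative_cost(0.99) - relative_cost(0.97)              # ~66.7
```

Under this assumed model the extra 10% after secondary treatment costs roughly twenty times the first 40%, and the final 97% to 99% step is of the same order as (here about twice) the whole effort up to 97%, which is the qualitative point the text makes.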
Disinfection of Wastewaters Disinfection of wastewater through chlorination is required only where wastewater effluents are to be discharged into waters used for drinking, bathing, or shellfish aquaculture. Chlorination of wastewater effluents may create three problems: (a) chloramines are formed during the process which may be toxic to aquatic life; (b) chlorinated hydrocarbons of potential health significance may be formed in reaction with organics; and (c) beneficial microorganisms as well as pathogens are destroyed, thereby reducing the ability of the receiving water to biochemically stabilize the organic matter remaining in the wastewater effluent being discharged. Therefore, disinfection of wastewater effluents needs to be evaluated carefully in each instance based upon the benefits versus these risks to the water ecosystem. An alternative to chlorination of wastewater effluents is ultraviolet light, which eliminates the negative impact on aquatic life and the creation of chlorinated hydrocarbons. However, to be effective, UV light requires an effluent of consistently low turbidity generally requiring tertiary filtration as well.
Sludge Management Wastewater sludge consists of the settled solids removed from the wastewater flow during its passage through primary sedimentation tanks, with or without the benefit of coagulating chemicals and biological treatment. Sludge accumulates most of the living organisms that find their way into wastewaters and often teems with ciliated protozoa that feed upon bacteria, accelerating the die-away of bacterial pathogens in the sludge. In addition, sludge dehydration deprives bacterial pathogens of the moisture needed for survival. Fresh primary-tank solids are the most dangerous to human health; solids from biological treatment units less so; solids that have been subjected to biological decomposition still less; and air-dried solids the least hazardous to public health. Heat-dried sludge solids are generally microbiologically safe because of the heat lability of several important microbial pathogens. Although the period of survival of enteric viruses in sludge is still unknown, enteric bacteria such as the typhoid bacillus survive for approximately one week, viable cysts of E. histolytica have been isolated from sludge held for 10 days at 30°C, and viable hookworm eggs have been isolated after 41 days. Even in sludge held for 6 months, a 10% survival rate for Ascaris sp. eggs was noted. Heat treatment of sludge is therefore important: pulverized sludge heated to 103°C for three minutes destroyed all Ascaris sp. eggs.

Sludge Treatment. Generally speaking, wastewater sludge is of little economic value and is disposed of in the cheapest fashion possible by each community. In normal circumstances, it is neither feasible nor economical to dispose of the large volumes of community wastewater sludge generated without dewatering it, destroying residual organic constituents, or both.
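The leverage of dewatering follows from simple arithmetic: because sludge is mostly water, the volume to be handled is inversely proportional to the dry-solids fraction. A minimal sketch, assuming the dry-solids mass and density are unchanged by dewatering:

```python
def volume_ratio(moisture_before_pct, moisture_after_pct):
    """Ratio of sludge volume after dewatering to volume before,
    assuming the dry-solids mass is conserved and density is ~constant."""
    return (100.0 - moisture_before_pct) / (100.0 - moisture_after_pct)

# Lowering moisture from 98% to 96% doubles the solids fraction
# (2% -> 4%) and therefore halves the volume to be handled:
ratio = volume_ratio(98.0, 96.0)   # 0.5
```

This is why a seemingly small change in moisture content dominates the economics of sludge handling and disposal.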
The importance of reducing the water content of wastewater sludge is illustrated by the fact that lowering the moisture content of sludge from 98% to 96% doubles the proportion of solid matter and consequently halves the volume of sludge to be handled and disposed. Dewatering and destruction of organic matter are the primary objectives of sludge treatment, and the wide range of options for the handling of community wastewater sludge is illustrated in Fig. 48-13.52 Sludge Digestion. Wastewater sludge is an abundant source of food for saprophytic bacteria, and different groups of living organisms use
Figure 48-13. Flow diagram of handling options for wastewater treatment plant sludge with arrows indicating possible flow paths. (Adapted from Okun DA, Ponghis G. Community Wastewater Collection and Disposal. Geneva: World Health Organization; 1975.)
different types of nutrients originally contained in the sludge or available after decomposition. As the nutritive value is exhausted, the wastewater sludge becomes stable and in its final state of degradation is inoffensive to sight and smell. As the wastewater sludge is well digested, the end products of digestion are gases, liquids, and residues of mineral and conservative organic substances. Losses by gasification and liquefaction, destruction of water-binding colloids, and physical compaction of solids reduce the bulk of the sludge and prepare it for the dewatering process. The organic solids in wastewater sludge digest under both aerobic and anaerobic conditions as is illustrated in swamps and river deposits. In the preparation of wastewater sludge for land disposal, it is simpler and more economical to digest the solids anaerobically. The principal gas released during aerobic decomposition of these types of organic compounds is carbon dioxide, while during anaerobic digestion it is combustible methane (65–80% by volume). The potential heat energy of the resultant methane is a prime factor in the economy of anaerobic sludge digestion since methane may be burned under a boiler or in a gas engine. The power released from this anaerobic process as heat and mechanical energy is used for heating buildings and digestion units in wastewater treatment facilities as well as for air compression, pumping, and minor laboratory purposes on-site. On a per capita basis, the normal daily volume of methane gas generated from this decomposition process is about 0.03 m3 from primary settling tanks and nearly the same amount again from biological treatment units. Ground garbage and some organic industrial wastes
may actually increase the methane gas yield appreciably from this operation, with the fuel value of this gas at approximately 24,000 kJ/m3. Anaerobic sludge digestion units are heated, covered, insulated tanks in which wastewater sludge is stored until it is dense, essentially odorless, and readily dehydrated. The temperature of the sludge mass is kept at an optimal operating value of approximately 35°C. In modern, high-volume wastewater treatment installations, digestion is promoted by stirring as well as by heating. Digestion tank capacity requirements range from 0.07 m3 per capita for sludge from primary treatment up to twice that for all the sludge from an activated sludge plant. If the selected sludge treatment is mechanical dewatering followed by heat-drying or incineration, digestion is not necessary, or digester capacities may be much less than those indicated above because digestion need not be carried to completion. The destruction of organic matter at high temperatures and pressures by wet combustion is also finding some application in wastewater treatment operations. Sludge Drying. For small wastewater treatment facilities, the most cost-effective and most common method of dewatering sludge is drying in open air. Digested sludge is run or pumped onto beds of sand and gravel or other suitable porous material, where part of the sludge moisture evaporates at the surface and part seeps through the supporting bed into underdrains. Drying times vary with climate and the character of the wastewater sludge. The required area is about 0.1 m2 per capita for well-digested primary sludge and twice that amount for biological sludge. When the sludge has lost enough moisture to become a spadable cake, it is removed from the drying beds for final disposal. In wastewater treatment facilities of moderate to large size, it is cost-effective to dewater sludge mechanically. Sludge Disposal.
Disposal of wastewater sludge is a challenge for every community, and some disposal practices are being evaluated based upon the possibility of additional environmental pollution from some disposal practices. For example, some seacoast towns pump wet sludge to the ocean; others load partially dewatered sludge onto vessels and transport the product to dumping grounds at sea. Dewatered sludge is a suitable material for disposal in a properly designed landfill by itself or in combination with municipal refuse. Wet sludge can provide useful moisture, humus, and nutrients for composting operations. The use of properly treated sludge as a fertilizer may be warranted as a measure of nitrogen and phosphorus conservation and soil building. To this purpose, some municipalities dispose of wet sludge to local farmers for use as fertilizer; however, this practice is questioned by many professionals. Tank trucks with fixed nozzles that plow and discharge the liquid sludge into the soil have become popular. In general, only commercially dry (heat-dried to less than 10% moisture) activated sludge has been found sufficiently marketable in the United States for use on lawns and golf greens to meet the expense of dewatering and heating. Because of the low cost and the convenience of chemical fertilizers, the production of heat-dried sludge for sale is seldom economically feasible. In the few instances where heat-dried activated sludge is marketed, as from Milwaukee, Wisconsin, the capital investment in the facilities needed for sludge preparation for sale has already proven to be financially sound. Where suitable sites for sludge disposal are not economically available, the sludge must be incinerated leaving only the resultant ash for disposal. In some communities, incineration of sludge with municipal refuse has been feasible as a disposal mechanism. 
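The per-capita figures quoted earlier for digester gas (about 0.03 m3 per day from primary tanks plus nearly as much again from biological units, at roughly 24,000 kJ/m3), digester capacity, and drying-bed area can be rolled into a rough sizing sketch. The town size and plant type below are assumptions chosen for illustration.

```python
POP = 10_000                   # hypothetical town served by an activated sludge plant
GAS_M3_PER_CAP = 0.03 + 0.03   # primary tanks + biological units (text figures)
FUEL_KJ_PER_M3 = 24_000        # digester-gas fuel value (text figure)
DIGESTER_M3_PER_CAP = 0.14     # all sludge, activated sludge plant (text figure)
BED_M2_PER_CAP = 0.1 + 0.1     # drying beds, primary + biological sludge (text)

gas_m3_day = POP * GAS_M3_PER_CAP                     # ~600 m^3/day of gas
energy_MJ_day = gas_m3_day * FUEL_KJ_PER_M3 / 1000.0  # ~14,400 MJ/day recoverable
digester_m3 = POP * DIGESTER_M3_PER_CAP               # ~1,400 m^3 of digester volume
drying_beds_m2 = POP * BED_M2_PER_CAP                 # ~2,000 m^2 of drying beds
```

Even for a modest town, the recoverable gas energy is substantial, which is the economic rationale the text gives for heating buildings and driving plant equipment with digester gas.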
The ultimate disposal of wastewater sludges, particularly if they contain infectious pathogens, heavy metals, and synthetic organic chemicals, has raised many questions and public health concerns. All of the methods of disposal, whether by discharge to sea, application to agricultural land, disposal in landfills, composting, or incineration have come under criticism. The EPA has been addressing this problem since passage of the 1977 Clean Water Act Amendments. The EPA regulations include (a) contaminant limits for heavy metals and synthetic organic compounds in sludge in milligrams per kilogram; (b) loading rates in kilograms per hectare for various land applications;
and (c) technology, monitoring, and reporting requirements. The ultimate choice for disposal will likely involve land application and landfills for communities that have such land available to them; for larger cities that do not have these options, incineration is generally the best choice. Unfortunately, application to land and disposal in landfills may significantly affect water reserves and supplies.
Wastewater Disposal Outfall sewers in each community are used to discharge treated wastewater into the municipality’s receiving bodies of water. If outfall sewers are to be effective, they must be designed and positioned to disperse the treated wastewater effluent quickly and thoroughly throughout the receiving water. In running streams, this task is not difficult; in lakes, tidal estuaries, and the ocean, however, it is not simple. Outfall sewer locations must be selected with consideration of the location of water purification plant intakes, shellfish layings and aquaculture operations, recreational beaches, and recreational boating areas. Proper positioning requires analysis of the water movements of the receiving body, such as patterns of normal currents, wind-induced and tidal movements, and the eddy diffusion created by differences in density between the treated sewage and the receiving water. Treated wastewaters are generally warmer and lighter than the water into which they are discharged. For example, disposal of treated wastewater near the surface of a receiving body of water, especially the brackish waters of tidal estuaries, may result in the wastewater lying on top of the diluting water, not mixing appreciably, and forming a contamination slick noticeable for many miles. The temperature-density equilibrium of this disposal process is so delicate that every situation and season must be handled separately to prevent these environmental complications. Under some conditions, subsurface discharge of wastewaters into a deep freshwater lake may build up a large mass of undispersed wastes around the outfall. However, this technique is frequently used and is often an effective method of dispersal, since the lighter wastewater rises like a smoke plume through the receiving body of water, resulting in appropriate dispersion.
This process may be enhanced by discharging the wastes through a number of outlets or diffusers spaced apart to prevent interference. The purification accomplished in receiving streams can be improved by engineering works that supply water for dilution during periods of low flow, lengthen the time of downstream passage of the receiving water, or introduce air into the flowing water, either directly by injection or indirectly by agitation. In low-water situations, water is released from upland reservoirs in the same or neighboring catchment areas, or water is pumped back or recycled from the more voluminous flows of lower river reaches or other watercourses. Travel times and self-purification are normally lengthened by impoundments within a polluted stretch of the receiving stream. Compressed air has been introduced with some success into critical reaches of polluted streams from stationary compressors and piping or from floating barges.

Where running water has been introduced into kitchens, bathrooms, laundries, and outbuildings of farms and residences with no public sewer system, the household wastewaters must be disposed of on-site. Usually, this is done through septic tanks or cesspools, which involve simple settling and subsurface leaching of wastewater effluents. For these systems to be effective, the amount of sewage cannot be large in relation to the leaching area, and the receiving soil must be porous. Where the volume of wastewater is high or the soil is nonporous, more sophisticated and costly treatment methods patterned after municipal processes must be introduced. Of special concern is the contamination of nearby wells by both chemical and biological agents. Septic tanks derive their name from the septic, or anaerobic, condition created by the decomposition of the settling solids or accumulating sewage sludge.
All septic tanks must be emptied of accumulated sludge periodically, and this septage is generally disposed of in community wastewater treatment plants. The ability of soil to absorb settled sewage is explored by digging test holes, filling them with water, and clocking the time required
for the water to drop a given distance in the stratum in which leaching is to take place. In some states, soil profiles are used to determine the ability of the soil to absorb settled sewage. Septic tanks and tile fields may be suitable for truly rural areas; however, this method has also been adopted by housing developments, where tile fields may become clogged and septic tanks may overflow, creating a local health hazard. Housing developments constructed in peri-urban areas not accessible to municipal sewerage systems have led to the proliferation of package plants for wastewater treatment. These plants do conform to most modern practices, often providing tertiary treatment, and may be obliged by NPDES permits to meet exacting wastewater effluent standards. However, their operation and maintenance become the responsibility of the homeowners, who have limited capacity to manage such facilities. Even where private utility companies are employed to operate these package plants, their performance record may suffer. The caliber of personnel and the cost of monitoring required by small treatment plants are often similar to those of large treatment facilities. It is therefore preferable that new housing developments be small enough to permit septic tanks and that, when greater densities are planned, sewerage service from a nearby large municipality be required. In particular, package plants should be avoided in water supply watersheds.
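The percolation test described above is commonly reduced to an allowable application rate by the empirical rule q = 5/√t gallons per square foot per day, with t the observed minutes per inch of water-level drop. That rule comes from the U.S. Public Health Service Manual of Septic Tank Practice, not from this chapter, and the dwelling figures below are assumptions for illustration.

```python
import math

def application_rate_gpd_ft2(minutes_per_inch):
    """Allowable sewage application rate from percolation time, using the
    empirical rule q = 5 / sqrt(t) (U.S. PHS Manual of Septic Tank Practice)."""
    return 5.0 / math.sqrt(minutes_per_inch)

def absorption_area_ft2(gallons_per_day, minutes_per_inch):
    """Leaching (trench-bottom) area required for a given daily flow."""
    return gallons_per_day / application_rate_gpd_ft2(minutes_per_inch)

# Hypothetical 3-bedroom dwelling at an assumed 150 gal/bedroom/day,
# in soil with a percolation time of 10 minutes per inch:
area = absorption_area_ft2(3 * 150, 10)   # ~285 ft^2 of trench bottom
```

The inverse-square-root form captures the text's point that slowly draining (tight) soils demand disproportionately large leaching areas, and beyond some percolation time a tile field is simply infeasible.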
Wastewater Reclamation Wastewaters are a water resource, and their reclamation for reuse serves both to conserve limited quantities of freshwater and to reduce the load of pollution on receiving bodies of water. The following water services have already been provided by wastewater reclamation: (a) irrigation, both agricultural and urban; (b) industrial use, for both process and cooling purposes; (c) recreational use, through establishment of lakes and ponds; and (d) nonpotable residential and commercial use, including toilet flushing. Reclamation of wastewater for potable purposes is currently not recommended, as U.S. drinking water practice requires that priority be given to the purest water sources for drinking water. However, wastewater reclamation for irrigation and land disposal of wastewaters may provide viable opportunities for reuse, since reclaimed wastewaters for irrigation of growing crops or lawns may be beneficial. Land disposal of wastewater may be useful in smaller communities where ample land is available and soil conditions are appropriate. The nutrients in wastewaters that would be problematic if discharged to a body of water may, on land, constitute an important fertilizer, particularly as chemical fertilizers become more costly. Each community situation is unique, requiring particular rates of application and specific pretreatment. In addition, since wastewaters are produced year-round but cannot be applied to land during periods of heavy rainfall or freezing, seasonal storage is also required. Where wastewaters are to be reused, the treatment needs to be tailored to the specific reuse plan, with more intensive treatment and more stringent standards as the uses become of greater public health concern.
The California Department of Health has prepared Wastewater Reclamation Criteria, which guides the regulation of many hundreds of reclamation projects in the state.53 The highest degree of treatment required is for nonpotable distribution systems including urban irrigation, toilet flushing, industrial use, spray irrigation of food crops, and nonrestricted recreational impoundments (i.e., those that permit body contact). Essential to such reclamation use of wastewater is the reliability of operation of the treatment facilities and continuous monitoring of effluent quality with a capacity to automatically reject wastewater effluent that does not meet the bacterial, turbidity, and chlorine residual standards.53
PROTECTION OF WATER QUALITY AND PUBLIC HEALTH
Access to potable water is vital to basic human survival and is an essential cornerstone of public health. Water also plays a critical role in all aspects of the nation’s complex industrial society, and access to uncontaminated water is essential to food processing, crop production, and livestock health. In June 2005, the Administrator of the
Environmental Protection Agency predicted that safeguarding the country’s water supply would be one of the pressing environmental concerns of the twenty-first century. EPA Administrator Stephen L. Johnson stated, “I believe water, over the next decade and further, will be the environmental issue that we as a nation and, frankly, as a world will be facing. Keeping the nation’s water safe and secure is an area of vulnerability for the United States and also an opportunity for us.” (Emphasis added.)55 Conscientious stewardship of water requires vigorous water source protection, enduring water pollution control, and aggressive water quality management in order to ensure access to a water supply that provides both the quantity and quality necessary to preserve this environmental resource and prevent water-related disease. Contamination of water by infectious pathogens, chemical compounds, or radiologic agents has the potential to affect the health of millions of residents of the United States. However, preservation of water quality and prevention of water-borne disease are complicated tasks requiring a coordinated effort from many diverse disciplines, ranging from water engineers to practicing health-care providers. The scope of water quality management is expansive, as illustrated in this chapter, and includes water source protection, water purification engineering, wastewater treatment and pollution control, and water-borne disease prevention. Any successful strategy to ensure water quality and safety in the United States must include a multidisciplinary team effort of educated partners working together to address the many significant challenges facing our nation in the future as we protect our precious resource of water. ACKNOWLEDGMENT
The author wishes to extend a special thanks to Ms Laura Campbell for her information technology and graphic design expertise during preparation of this manuscript. REFERENCES
1. Last JM. Public Health and Human Ecology. Stamford, CT: Appleton and Lange; 1998. 2. Meinhardt PL. Recognizing Waterborne Disease and the Health Effects of Water Pollution: Physician On-line Reference Guide. American Water Works Association and Arnot Ogden Medical Center, 2002. Accessed on July 24, 2005 at www.waterhealthconnection.org. 3. Fair GM, Geyer JC, Okun DA. Elements of Water Supply and Wastewater Disposal. New York: John Wiley & Sons; 1971. 4. U.S. Public Health Service. Community Water Supply Survey, 1969. Summarized in McCabe L, et al. Study of community water supply systems. J Am Water Works Assoc. 1970;62:670. 5. Camp, Dresser and McKee, Inc. Guidelines for Water Reuse. Cooperative Agreement 600/8-80–036. Washington, DC: Environmental Protection Agency; 1980. 6. United Nations Economic and Social Council. Water for Industrial Use. Report No. E-3058 ST/ECA/50. New York: United Nations Economic and Social Council; 1958. 7. American Water Works Association. Manual on Dual Distribution Systems, No. M24. Denver: American Water Works Association; 1983. 8. Environmental Protection Agency. National Interim Primary Drinking Water Regulations. Washington, DC: Environmental Protection Agency; 1976. 9. U.S. District Court, District of Connecticut: Bridgeport Hydraulic Co. et al. vs. The Council on Water Company Lands of the State of Connecticut et al., Civil No. B-75–212, December, 1977. 10. University of North Carolina. Protecting Drinking Water Supplies through Watershed Management: A Guidebook for Devising Local Programs. Chapel Hill, NC: Center for Urban and Regional Studies; 1982. 11. Burby RJ, Okun DA. Land use planning and health. Ann Rev Public Health. 1983;4:47–67.
12. Centers for Disease Control and Prevention. Surveillance for waterborne-disease outbreaks associated with drinking water—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53:SS-8. 13. Environmental Protection Agency. List of Drinking Water Contaminants and MCLs. Accessed on August 1, 2005 at http://www.epa.gov/safewater/mcl.html. 14. National Research Council. Drinking Water and Health. Washington, DC: National Academy Press: Vol 1, 1977; Vols 2 and 3, 1980; Vol 4, 1982; Vol 5, 1983; Vol 6, 1986; Vols 7 and 8, 1987; Vol 9, 1989. 15. Environmental Protection Agency. Water Quality Conditions in the United States: A Profile from the National Quality Inventory Report to Congress. EPA number 841-F-00-006. Page 1–2. June, 2000. Accessible at http://www.epa.gov/305b/98report. 16. Meinhardt PL. Physician Preparedness for Acts of Water Terrorism: Physician On-line Readiness Guide. Environmental Protection Agency and Arnot Ogden Medical Center, 2003. Accessed on August 10, 2005 at http://www.waterhealthconnection.org/bt/index.asp. 17. Putnam SW, Wiener JB. Seeking Safe Drinking Water. Cambridge, Massachusetts: Harvard University Press; 1995. Accessed at http://www.waterandhealth.org/drinkingwater/12749.html on October 11, 2001. 18. Meinhardt PL, Casemore DP, Miller KB. Epidemiologic aspects of human cryptosporidiosis and the role of waterborne transmission. Epidemiol Rev. 1996;18:118–36. 19. Identifying Future Drinking Water Contaminants. Workshop on Emerging Drinking Water Contaminants. Washington, DC: National Academy Press; 1999. 20. Ford TE, MacKenzie WR. How safe is our drinking water? Postgrad Med. 2000;108:11–4. 21. Colwell RR. Safe drinking water. In: Cotruvo J, Craun GF, Hearne N, eds. Providing Safe Drinking Water in Small Systems. Boca Raton, Florida: CRC Press, Inc; 1999:7–10. 22. Eberhart-Phillips J. Outbreak Alert: Responding to the Increasing Threat of Infectious Diseases.
Oakland, California: New Harbinger Publications, Inc; 2000. 23. Microbial Pollutants in Our Nation’s Water. American Society of Microbiology, Office of Public Affairs. Washington, DC, 1999. Accessible at: http://www.asm.org/ASM/files/CCPAGECONTENT/DOCFILENAME/0000005987/waterreport[1].pdf. 24. Huffman DE, Rose JB. The continuing threat of waterborne pathogens. In: Cotruvo J, Craun GF, Hearne N, eds. Providing Safe Drinking Water in Small Systems. Boca Raton, Florida: CRC Press, Inc.; 1999:11–8. 25. Strausbaugh LJ. Emerging infectious diseases: a challenge to all. Am Family Phys. 1997;55:111–7. 26. Environmental Protection Agency, Office of Water. EPA actions to safeguard the nation’s drinking water supplies. October, 2001. Accessed at http://www.epa.gov/safewater/security/secf.html on July 31, 2002. 27. Centers for Disease Control and Prevention. Public health emergency preparedness and response. April, 2000. Accessed at http://www.bt.cdc.gov/Agent/AgentList.asp on August 10, 2005. 28. Highsmith AK, Crow SA. Waterborne disease. In: Encyclopedia of Microbiology. Vol 4. San Diego, CA: Academic Press, Inc.; 1992. 29. World Health Organization. Guidelines for Drinking-Water Quality. 2nd ed. Geneva, Switzerland; 1993. 30. Physicians for Social Responsibility. Drinking Water and Disease: What Every Healthcare Provider Should Know. Washington, DC; 2000. Accessible at: http://www.psr.org/site/DocServer/Drinking_Water_and_Disease_Primer.pdf?docID=559. 31. Olin SS. Exposure to Contaminants in Drinking Water. Preface. Washington, DC: International Life Sciences Institute; 1998. 32. Brown JP, Jackson RJ. Water pollution. In: Brooks S, Gochfeld M, Jackson R, Herztein J, Shenker M, eds. Environmental Medicine. St. Louis, Missouri: Mosby; 1995:479–87.
33. Philip RB. Environmental Hazards and Human Health. Boca Raton, Florida: CRC Press, Inc: 1995;1–3. 34. Identifying Future Drinking Water Contaminants. Workshop on Emerging Drinking Water Contaminants. Washington, DC: National Academy Press; 1999. 35. Cothern CR. Radioactivity in Drinking Water. EPA 570/9–81–002. Washington, DC: Environmental Protection Agency; 1981. 36. Craun GF, ed. Waterborne Diseases in the United States. Boca Raton, FL: CRC Press, Inc.; 1986. 37. Blackburn B, Craun GF, Yoder JS, et al. Surveillance for waterbornedisease outbreaks associated with drinking water—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):23–45. 38. Yoder JS, Blackburn BG, Craun GF, et al. Surveillance for recreational water-associated outbreaks—United States, 2001–2002. In Surveillance Summaries, October 22, 2004. MMWR. 2004;53(No. SS-8):1–21. 39. Environmental Protection Agency, Office of Water. Factoids: Drinking Water and Groundwater Statistics for 2003. Washington, DC: Environmental Protection Agency, Office of Water, 2003. EPA publication no. 816K03001. Available at http://www.epa.gov/safewater/ data/pdfs/data_factoids_2003.pdf. 40. Environmental Protection Agency. Bacteriological Ambient Water Quality Criteria Marine and Fresh Recreational Waters. Cincinnati, OH: National Service Center for Environmental Publications; EPA publication no. 440584002: 1986. 41. Gerba CP, Rose JB, Haas CN. Sensitive populations: who is at the greatest risk? Int J Food Microb. 1996:30;13–123. 42. Environmental Protection Agency, Office of Water. Report to Congress: EPA studies on sensitive populations and drinking water contaminants. December, 2000. 43. ILSI Risk Science Institute Pathogen Risk Assessment Working Group. A conceptual framework to assess the risks of human disease following exposure to pathogens. Risk Anal. 1996;16;841–8. 44. Reiser K. General principles of susceptibility. 
In: Brooks S, Gochfeld M, Jackson R, Herztein J, Shenker M, eds. Environmental Medicine. St. Louis, Missouri: Mosby; 1995:351–60. 45. Olin SS. Exposure to Contaminants in Drinking Water. Washington, DC: International Life Sciences Institute; 1998:40. 46. Environmental Protection Agency. National Drinking Water Advisory Council-Health Care Provider Outreach and Education Working Group: Draft Report. October, 1999. 47. American Water Works Association. Water Quality and Treatment. New York: McGraw-Hill; 1971. 48. American Society of Civil Engineers, American Water Works Association. Conference of State Sanitary Engineers: Water Treatment Plant Design. Denver: American Water Works Association; 1989. 49. Water Pollution Control Federation and American Society of Civil Engineers. Wastewater Treatment Plant Design. New York: American Society of Civil Engineers; 1977. 50. Water Pollution Control Federation and American Society of Civil Engineers. Design and Construction of Sanitary and Storm Sewers. Alexandria, VA: Water Pollution Control Federation; 1969. 51. Environmental Protection Agency. Analytical Methods for U.S. EPA Priority Pollutants and 301(h) Pesticides in Estuarine and Marine Sediments. 1986. EPA number 503690004. Accessible at http://yosemite.epa.gov/water/owrccatalog.nsf/0/8812b6e14862d4a685256b06007230fb?OpenDocument. 52. Okun DA, Ponghis G. Community Wastewater Collection and Disposal. Geneva: World Health Organization; 1975. 53. California Department of Health Services. Wastewater Reclamation Criteria. California Administrative Code, Title 22, Division 4, 1978. Accessed on August 20, 2005 at http://www.waterboards.ca.gov/recycling/. 54. Water Safety Tops EPA Chief’s List. Los Angeles Times, June 5, 2005. Accessed on July 1, 2005 at latimes.com.nation.
Hazardous Waste: Assessing, Detecting, and Remediation
49
William A. Suk
INTRODUCTION
The past century of industrial, military, and commercial activity worldwide has resulted in hundreds of thousands of hazardous waste sites where organic compounds and metals have contaminated surface and subsurface soils, sediments, and ground and surface waters. In order to reduce risks to human and ecologic systems, considerable time and money have been spent remediating these sites since the passage of major environmental legislation (e.g., Superfund). Hazardous waste management is undoubtedly one of the most important environmental issues. Despite the common agreement that industrial production without waste is our long-term goal, there will be an ongoing need for proper management of wastes for years to come. Further, there is a need to continue to sharpen the cause-and-effect relationships between a polluted environment and poor public health. The health effects of exposure to hazardous wastes often take the form of insidious and subtle manifestations in children and adults. The challenge is to better understand these contaminants and to determine under which conditions and at which levels they pose a threat to human health and the environment. DEFINITIONS OF WASTE
Classifications and Properties of Waste Wastes may be classified by their physical, chemical, and biological characteristics. An important classification criterion is their consistency. Solid wastes are waste materials containing less than approximately 70% water. This class includes municipal solid wastes such as household garbage, industrial wastes, mining wastes, and oil-field wastes. Liquid wastes are usually wastewaters, including municipal and industrial wastewaters, that contain less than 1% suspended solids. Such wastes may contain high concentrations (greater than 1%) of dissolved species, such as salts and metals. Solid waste, as defined under the Resource Conservation and Recovery Act (RCRA), is any solid, semisolid, liquid, or contained gaseous material discarded from industrial, commercial, mining, or agricultural operations and from community activities. Solid waste includes garbage, construction debris, commercial refuse, sludge from water supply or waste treatment plants, material from air pollution control facilities, and other discarded materials. Solid waste does not include solid or dissolved materials in irrigation return flows or industrial discharges. Sludge is a class of waste intermediate between solid and liquid wastes. Sludges usually contain between 3% and 25% solids, while the rest of the material is water and dissolved species. These materials, which have a slurry-like
consistency, include municipal sludge, which is produced during secondary treatment of wastewaters, and sediments found in storage tanks and lagoons. Federal regulations classify wastes into three different categories, based on hazard criteria: (a) nonhazardous, (b) hazardous, and (c) special. Nonhazardous wastes are those that pose no immediate threat to human health and/or the environment, for example, municipal wastes such as household garbage and many high-volume industrial wastes. Hazardous wastes are of two types: (a) those that have characteristic hazardous properties, that is, ignitability, corrosivity, or reactivity, and (b) those that contain leachable toxic constituents. Other hazardous wastes include liquid wastes, which are identified with a particular industry or industrial activity. The third category, special wastes, is classified generically by origin and is regulated with waste-specific guidelines. Examples include mine spoils, oil-field wastes, spent oils, and radioactive wastes. In the United States, all hazardous wastes are regulated under Subtitle C of RCRA. Hazardous waste has been defined as any of a myriad of substances that are toxic to living organisms. For all practical purposes, toxic waste and hazardous waste are interchangeable terms. As indicated, hazardous waste is defined as solid waste that is acutely toxic or possesses one or more of the following characteristics: ignitability, corrosivity, reactivity, or toxicity.1 Traditionally, when discussing radioactive or medical waste, the term “mixed waste” is used (see the section on Radioactive and Mixed Wastes). Toxic substances occur naturally in soil, water, and air; however, thousands of toxic substances are anthropogenic. The anthropogenic substances are of particular concern because of the quantities that are produced, their dissemination and persistence, and because, historically, their release into the environment has not been well controlled.
Furthermore, most anthropogenic compounds are organic and are readily absorbed by living organisms.
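The consistency classes described above (solid, liquid, sludge) can be encoded as a rough classifier. The ordering of the checks and the handling of wastes falling between categories are our own assumptions; the thresholds themselves come from the text.

```python
def classify_by_consistency(water_pct: float, suspended_solids_pct: float) -> str:
    """Rough waste-consistency classes from the text:
    solid  : less than ~70% water
    liquid : wastewater with < 1% suspended solids
    sludge : ~3-25% solids (slurry-like, intermediate)"""
    solids_pct = 100.0 - water_pct
    if water_pct < 70.0:
        return "solid"
    if suspended_solids_pct < 1.0:
        return "liquid"
    if 3.0 <= solids_pct <= 25.0:
        return "sludge"
    return "intermediate/unclassified"

# Examples: mining spoil (50% water), municipal wastewater (0.5% SS),
# and secondary-treatment sludge (5% solids).
```

Real waste streams are heterogeneous, so such a classifier is only a screening aid; regulatory classification under RCRA depends on origin and hazard characteristics, not consistency alone.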
Hazardous Wastes Hazardous waste is a subset of solid waste that poses substantial or potential threats to public health or the environment. A waste is hazardous if it is specifically listed as a hazardous waste, exhibits one or more of the characteristics of hazardous waste (i.e., ignitability, corrosivity, reactivity, and/or toxicity), is generated by the treatment of hazardous waste, or is contained in a hazardous waste. Some environmental laws list specific materials as hazardous waste. Hazardous waste can exist in the form of a solid, liquid, or sludge and can include materials such as polychlorinated biphenyls (PCBs), chemicals, explosives, gasoline, diesel fuel, organic solvents, asbestos, acids, metals, and pesticides. Environmental laws also list materials that must be treated and managed as hazardous.
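Two of the four RCRA hazard characteristics have simple numeric screening cutoffs in the federal regulations (40 CFR 261): ignitability for liquids (flash point below 60 °C) and corrosivity for aqueous wastes (pH ≤ 2 or ≥ 12.5). A sketch of these two checks; the thresholds come from the regulation, not from this chapter, and reactivity and toxicity (which require the TCLP leachate test) are not reducible to single numbers this way.

```python
def is_ignitable_liquid(flash_point_c: float) -> bool:
    """40 CFR 261.21: a liquid waste is characteristically ignitable
    if its flash point is below 60 degrees C (140 degrees F)."""
    return flash_point_c < 60.0

def is_corrosive_aqueous(ph: float) -> bool:
    """40 CFR 261.22: an aqueous waste is characteristically corrosive
    if its pH is <= 2.0 or >= 12.5."""
    return ph <= 2.0 or ph >= 12.5

# Examples: spent solvent with a 40 C flash point is ignitable;
# a pH 13 caustic rinse water is corrosive; neutral water is neither.
```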
Copyright © 2008 by The McGraw-Hill Companies, Inc.
The true amount of hazardous waste generated is not known, although the approximate amount is 400 million tons a year. The Organization for Economic Cooperation and Development (OECD) estimates that, on average, a consignment of hazardous wastes crosses the frontier of an OECD nation every 5 minutes of every day, all year. More than 2 million tons of those wastes are estimated to cross the national frontiers of OECD European countries annually on the way to disposal sites. Other movements are illegal, motivated by the possibility of important gains in transferring the problem to places where controls or standards are less strict. Another motive may be that the vast territory and scant resources of importing countries make any attempt at serious surveillance impossible. Some countries also prefer to manage their hazardous waste problem by transporting it at lower cost to other countries. The quantity of generated wastes of all kinds is still increasing, and the rapid pace of industrialization worldwide will necessitate careful attention. In response to growing recognition of the health and environmental risks associated with hazardous wastes, governments have brought into force a series of national laws to control the generation, handling, storage, treatment, transport, disposal, and recovery of these wastes. To mitigate such potential threats, urgent measures should be taken to avoid or reduce the generation of hazardous wastes, optimize environmentally sound recovery of wastes, reduce to a minimum or eliminate transboundary movements of hazardous wastes, manage wastes in an environmentally sound and efficient way, and dispose of wastes as close as possible to the place where they are generated.
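The OECD figures quoted above can be sanity-checked with simple arithmetic. Note the two estimates cover different scopes (the 5-minute figure is for all OECD nations, the 2-million-ton figure for OECD Europe only), so this is an order-of-magnitude check, not a derivation.

```python
# One consignment crosses an OECD frontier every 5 minutes, all year.
minutes_per_year = 365 * 24 * 60               # 525,600 minutes
consignments_per_year = minutes_per_year // 5  # ~105,000 border crossings

# ~2 million tons/year cross OECD European frontiers (figure from the text).
tons_crossing = 2_000_000
avg_tons_per_consignment = tons_crossing / consignments_per_year  # ~19 tons
```

An average of roughly 19 tons per consignment is consistent with truck- or container-scale shipments, so the two quoted figures are at least mutually plausible.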
In exceptional cases, exporting hazardous wastes to a country capable of eliminating them properly may be safer for human health and the environment if adequate storage or treatment is not possible in the generating country or until appropriate technology and adequate infrastructure are available. Increased international cooperation is necessary to help developing countries manage and treat the wastes they generate in an environmentally sound way. There have been a number of conferences and workshops to assess and evaluate hazardous waste exposures and to provide a framework for future research and collaborative efforts to address these problems.2,3,4,5 Thousands of new chemicals are being developed and introduced annually into commerce. Only a small fraction of these substances have been tested for toxicity. Hundreds of millions of tons of hazardous waste are generated annually and the quantities are increasing.6 A small fraction of toxic waste in the environment is from household use; the greatest production comes from industry, particularly the chemical and petroleum industries.7 Another leading generator, the agricultural chemical manufacturing industry, produces chemicals, such as pesticides, that by their very nature are toxic not only to their targets, but also to other life forms. The magnitude of problems created by toxic substances is immense and ubiquitous, while the impact is, to a great extent, unknown. TRANSPORTATION OF HAZARDOUS WASTES
Toxic substances and other contaminants know no borders and, as such, the issues surrounding them have gained a presence in international forums. In 1972, 70 governments met in Sweden for the United Nations Conference on the Human Environment. This conference brought environmental issues to an international level. Since that time, more than 170 international environmental treaties have been signed,8 demonstrating the global commitment to the issue. In 1976, the United Nations Environment Programme’s International Register of Potentially Toxic Chemicals was established. This register collects information on hazardous waste and distributes it to anyone who requests it. The Basel Convention of 1989 established the control of transboundary movements of hazardous wastes and their disposal. With more than 100 signatories to this treaty, the movement of wastes is now managed throughout much of the world. A pivotal conference sponsored by the United Nations and held in Rio de Janeiro, Brazil, in 1992, the United Nations Conference on Environment and Development (Rio Earth Summit), focused on the issues of biodiversity and sustainable
development.9 This report included a chapter on both toxic waste and hazardous waste, thus demonstrating the priority of the effective control and management of such releases into the environment. Protection of the environment in conjunction with economic development is closely related in any proposals that support future global welfare. POLICIES MANAGING THE FATE OF TOXIC SUBSTANCES
In the 1960s, the United States Congress began establishing environmentally oriented laws. In 1966, the Division of Environmental Health Science was established in the Department of Health, Education and Welfare to study the health effects of environmental agents. In 1969, it was elevated to institute status (the National Institute of Environmental Health Sciences [NIEHS]), emphasizing the importance of environmental influences on human health. In the same year, the United States Congress passed the National Environmental Policy Act, requiring federal agencies to assess the impact of their actions on the environment. A year later, the U.S. Environmental Protection Agency (EPA) was established. EPA is responsible for working with state and local governments to control and prevent pollution in the areas of solid and hazardous waste, pesticides, water, air, drinking water, and toxic and radioactive substances. Since that time, numerous other acts, including the Toxic Substances Control Act of 1976 (TSCA), have been passed with the goal of maintaining a healthy environment. TSCA requires that producers of toxic substances be held accountable for the release of these substances into the environment. In 1976, RCRA gave EPA authority to control hazardous waste from “cradle to grave.” This control includes the minimization, generation, transportation, treatment, storage, and disposal of hazardous waste. RCRA also set forth a framework for the management of nonhazardous solid wastes. RCRA focuses only on active and future facilities and does not address abandoned or historical sites. The National Toxicology Program was established in 1978 as an interagency organization to provide toxicological information on potentially hazardous chemicals to regulatory and research agencies and to the public.
The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA, also known as Superfund) was passed in 1980 to address immediate and long-term threats to the public health and the environment from abandoned or active sites contaminated with hazardous or radioactive materials. Under the Superfund program, EPA has the authority to clean up the nation’s worst hazardous waste sites, using money from a trust fund supported primarily by a tax on chemical feedstocks used by manufacturers. Companies or individuals responsible for the wastes are identified by EPA, if possible, and made to pay for the cleanups. The Superfund Amendments and Reauthorization Act (SARA) of 1986 reauthorized CERCLA to continue cleanup activities around the country. Several site-specific amendments, definitions, clarifications, and technical requirements were added to the legislation, including additional enforcement authorities. Also under SARA, the Superfund Hazardous Substances Basic Research Program (Superfund Basic Research Program) was established. The Superfund Basic Research Program is a multidisciplinary program administered by the NIEHS. This program is committed to advancing the state of the science in reducing the amount and toxicity of hazardous substances and, ultimately, preventing adverse human health effects.10
ASSESSING AND DETECTING ADVERSE HEALTH EFFECTS OF HAZARDOUS WASTES
Studies of the adverse health effects of hazardous waste must contend with many challenges. Exposure is usually ill defined and often misclassified, historical data may not be available or are otherwise problematic, and mixed chemical exposures are likely and may not always be uniform across a population. The exposed population is often small or incompletely determined. Resources for study may also be limited. The endpoints to be studied may be uncertain, leading to consideration of multiple endpoints.
Sediment Sediment—the “muck” at the bottom of rivers and other bodies of water—is composed of materials transported and then deposited by water or wind and represents a surprisingly rich and productive environment. The organisms that live in it form the base of a food chain that stretches all the way up to humans. Areas of sediment contamination occur in coastal and inland waterways, in clusters around larger municipal and industrial centers, and in regions affected by agricultural and urban runoff. The EPA’s Report to Congress on Contaminated Sediment (prepared in conjunction with the NOAA, the Army Corps of Engineers, and other federal, state, and local agencies) states that sediment contamination exists in every region and state of the country and that approximately 10% of the sediment underlying U.S. surface waters is sufficiently contaminated with toxic pollutants to pose potential risks to fish and to humans and wildlife who eat fish. Much of the contaminated sediment in the United States was polluted years ago by improper disposal or run-off of chemicals including PCBs, pesticides, and mercury which have since been banned or restricted. Sediments constitute a major source of persistent bioaccumulative toxic chemicals which may pose threats to ecological and human health even after contaminants are no longer released from point and nonpoint sources. Documented adverse ecological effects of contaminants in sediments include skin lesions, increased tumor frequency, and reproductive toxicity in fish; reproductive failure in fish-eating birds and mammals; and decreased biodiversity in aquatic ecosystems. Threats to human health occur when sediment contaminants bioaccumulate in fish and shellfish tissues consumed by humans. Fish advisories have been issued for more than 1500 water bodies in 46 states for pollutants such as mercury, dioxins, PCBs, PAHs, and pesticides such as chlordane and chlorpyrifos. 
More than 10 federal statutes provide authority to the USEPA program offices to address the problem of contaminated sediment. USEPA has ongoing efforts to prevent further sediment contamination, to develop methodologies to improve the assessment of sediment contaminants, and to design remediation technologies to clean up existing sediment contamination.
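The bioaccumulation pathway described above (sediment contaminants accumulating in fish tissue consumed by humans) is often screened with an equilibrium bioconcentration factor (BCF) and a simple intake equation. The BCF, water concentration, fish consumption rate, and body weight below are hypothetical illustrative values, not figures from this chapter.

```python
def fish_tissue_conc_mg_per_kg(c_water_mg_per_l: float, bcf_l_per_kg: float) -> float:
    """Equilibrium bioconcentration estimate: C_fish = BCF * C_water."""
    return c_water_mg_per_l * bcf_l_per_kg

def daily_intake_mg_per_kg_day(c_fish_mg_per_kg: float,
                               ingestion_kg_per_day: float,
                               body_weight_kg: float) -> float:
    """Screening-level chronic daily intake from fish consumption."""
    return c_fish_mg_per_kg * ingestion_kg_per_day / body_weight_kg

# Hypothetical screening numbers: 1e-5 mg/L of a persistent organochlorine
# in water, BCF ~ 1e5 L/kg, 20 g fish/day, 70-kg adult.
c_fish = fish_tissue_conc_mg_per_kg(1e-5, 1e5)       # 1.0 mg/kg in tissue
cdi = daily_intake_mg_per_kg_day(c_fish, 0.02, 70.0)  # mg/kg-day
```

The example illustrates why fish advisories matter: a water concentration far below analytical interest can still produce tissue concentrations of toxicological concern when the BCF is large.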
Bioavailability The bioavailability of an environmental contaminant—the degree to which it can be assimilated by an organism—is a critical factor in decision-making processes related to both public health and remediation strategies. While it seems like a simple concept, bioavailability has the potential to affect:
Risk assessment—Incomplete understanding of the bioavailability of a contaminant is a significant factor complicating the evaluation of the risk of exposure to a toxic contaminant.11 Historically, exposure to hazardous materials in the environment has been quantified through the use of standard laboratory analytical techniques geared toward determining the total amount of material found in the sample under consideration. It may not be appropriate to measure total concentrations in the environment, as different contaminants or different species of an elemental contaminant may exhibit different levels of mobility, both in the environment and once inhaled, ingested, or placed in contact with the skin. If a contaminant is sequestered in the soil or sediment, or has limited mobility in tissues, it may not represent a significant human or environmental risk.
Identification of remediation goals—Bioavailability is one of the many complex elements that may be taken into consideration when cleanup criteria are determined for a contaminated site. The question “How clean is clean enough?” is made even more difficult by an incomplete understanding of the bioavailability of contaminants at a site and the factors that influence bioavailability.
Selection of appropriate remediation strategies—Bioremediation is often less expensive and more efficient than conventional remediation techniques. However, for bioremediation methods to work, the contaminant must be available to the bacteria or plant used in the cleanup effort.
Fate and transport—Bioavailability of contaminants in soils is a complex process that is influenced by the interplay of both chemical and biological factors. For example, the chemical process of adsorption is generally thought to decrease the bioavailability of certain contaminants in soils, while some evidence suggests that the presence of bacteria in soils may increase the accessibility of soil contaminants to living organisms.
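The role of bioavailability in risk assessment can be made concrete with a screening dose calculation in which the total measured concentration is scaled by the fraction actually available for uptake. The soil concentration, ingestion rate, body weight, and bioavailable fraction below are hypothetical illustrative values.

```python
def absorbed_dose_mg_per_kg_day(total_conc_mg_per_kg: float,
                                intake_kg_per_day: float,
                                body_weight_kg: float,
                                bioavailable_fraction: float) -> float:
    """Screening absorbed dose for an ingested soil contaminant, scaled
    by the fraction of the total concentration available for uptake."""
    if not 0.0 <= bioavailable_fraction <= 1.0:
        raise ValueError("bioavailable fraction must be in [0, 1]")
    return (total_conc_mg_per_kg * intake_kg_per_day *
            bioavailable_fraction / body_weight_kg)

# Hypothetical: 400 mg/kg metal in soil, a child ingesting 100 mg soil/day
# (1e-4 kg), 15 kg body weight, 60% bioavailable vs. the default 100%.
adjusted = absorbed_dose_mg_per_kg_day(400.0, 1e-4, 15.0, 0.6)
default = absorbed_dose_mg_per_kg_day(400.0, 1e-4, 15.0, 1.0)
```

Here the bioavailability-adjusted dose is 40% lower than the default, which is exactly the kind of difference that shifts "how clean is clean enough" at a remediation site.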
Biomarkers The majority of diseases are the consequence of both environmental exposures and genetic factors. To understand the relationship between exposure and adverse health effects, scientists are working to identify biomarkers—key molecular or cellular events that link a specific environmental exposure to a health outcome. The identification, validation, and use of biomarkers in environmental medicine and biology will depend fundamentally on an increased understanding of the mechanism of action and the role of molecular and biochemical functions in disease processes.12 For environmentally induced diseases, molecular biomarkers will play a key role in understanding the relationships between exposure to toxic environmental chemicals and the development of chronic human diseases, and in identifying those individuals at increased risk for disease. Although much progress has been made to identify potential biomarkers, the challenge remains to validate, in a robust manner, the accuracy, reproducibility, specificity, and sensitivity of biomarkers, and to assess the feasibility and cost-effectiveness of applying biomarkers in large population-based studies. Such validated biomarkers will be invaluable in the prevention, early detection, and early treatment of disease. There are three broad categories of molecular biomarkers commonly used in the field of environmental health:
Biomarkers of exposure quantify body burden of chemicals or metabolites and are usually applied early in the exposure-disease paradigm. These markers are powerful tools for epidemiologists, allowing relatively accurate measurement of external and/or internal dose of an environmental agent. However, the applicability of biomarkers of exposure is often limited by their relatively short half-life, providing information on exposure over a period of days to months compared to the natural history of the disease, which spans years or decades.
There are noteworthy exceptions to the transient nature of exposure biomarkers, such as pesticide residues in body fat and blood that can persist over months and years. Nevertheless, the timing of sample acquisition for measurement of environmental exposures and the study of interactions with genetic susceptibilities is a critical factor in study design. Biomarkers of effect detect functional change in the biological system under study, and allow investigators to predict the outcome of exposure. DNA damage (e.g., adducts, chromosomal aberrations, loss of heterozygosity at specific chromosome loci) is frequently used as a biomarker of effect, although there is often no clear delineation from biomarkers of exposure. For example, DNA adducts can be interpreted as biomarkers both of exposure and biological effect. Biomarkers of susceptibility indicate the interindividual variation in mechanistic processes on the continuum between exposure and effect. An individual’s susceptibility to environmentally mediated disease may arise from genetic causes or from nongenetic factors such as age, gender, disease state, or dietary intake. Genetic polymorphisms may function as biomarkers of susceptibility, but it is important to keep in mind that it is ultimately the phenotype that determines the final response to the hazardous insult.
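The half-life limitation described above can be made concrete with a simple first-order elimination model. The sketch below uses hypothetical half-life values for illustration only; real kinetics vary by analyte.

```python
# First-order elimination: fraction of an exposure biomarker still
# measurable t days after exposure, given its half-life.
# The numeric half-lives below are hypothetical, for illustration only.

def fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of the initial biomarker level remaining after t_days."""
    return 0.5 ** (t_days / half_life_days)

# A metabolite with a 30-day half-life is essentially undetectable a
# year after exposure, so sample timing is critical in study design.
print(fraction_remaining(365, 30))        # ~0.0002 (0.02% remains)

# A persistent residue stored in body fat (say, a 5-year half-life)
# still reflects exposures from years earlier.
print(fraction_remaining(365, 5 * 365))   # ~0.87 (87% remains)
```

This is why a short-half-life marker can only support exposure reconstruction over days to months, while persistent residues permit retrospective assessment over years.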
Agent-Specific Problems at Hazardous Waste Sites New regulations for disposal of wastes on land require periodic monitoring not only of metals but also of organic pollutants (Table 49-1).
Environmental Health

TABLE 49-1. MAJOR GROUPS OF ORGANIC CHEMICALS (Pollutants by Group: Origins and Comments)

• Aldrin/dieldrin, heptachlor, DDT/DDE/DDD, lindane, toxaphene, malathion, 2,4-dichlorophenoxyacetic acid (2,4-D), hexachlorobutadiene: Pesticides and herbicides. Some are found in household chemicals. Many chlorinated pesticides have been banned from use.
• Benzo[a]pyrene, benzo[a]anthracene, phenanthrene: Motor oils and diesel fuel. These occur naturally as by-products of fuel combustion.
• Polychlorinated biphenyls (PCBs): Electrical and chemical manufacturing. PCBs are banned.
• Dioxins, furans: By-products from the synthesis of phenol-based pesticides, such as 2,4-D.
• Phenol: Household products and disinfectants.
• Pentachlorophenol: Wood preservative. Pentachlorophenol is very persistent in the environment.
• Benzene, methylene chloride, methyl ethyl ketone, tetrachloroethylene, trichloroethylene, hexachlorobutadiene: Some household products such as paints. These chlorinated solvents are very volatile.
• Vinyl chloride, bis(2-ethylhexyl)phthalate, tricresyl phosphate, dimethylnitrosamine, benzidine, 3,3'-dichlorobenzidine: Plastics and plasticizers.
Source: Title 40, Code of Federal Regulations. Part 257.
Arsenic. Arsenic is listed first on the ATSDR 2001 CERCLA Priority List of Hazardous Substances and is found at over 70% of all Superfund sites. While both natural and anthropogenic sources contribute to arsenic contamination of soil, sediment, and water, contamination at Superfund sites primarily results from the disposal of arsenic-containing compounds from industrial and mining practices. For example, due to improper industrial disposal, lake sediments from the Aberjona watershed area of Boston contain as much as 1–2% arsenic by weight. Although exposure to arsenic has been associated with a variety of human health effects, the toxicity of arsenic is particularly difficult to characterize as a single element, because its chemistry is complex, and there are many arsenic compounds. Specifically: • Arsenic is considered a probable human carcinogen of the lung, skin, and bladder. Arsenic is unique in that it is the only known agent that increases lung cancer following systemic (drinking water) rather than inhalation exposure. • Arsenic exposure has been implicated in lymphoma, nasopharyngeal, stomach, colon, kidney, and prostate cancers. • There is a strong synergistic association between arsenic exposure and cigarette smoking for the risk of lung cancer. • One of the major concerns is that there is not an established dose-response curve for arsenic-induced cancer. • Arsenic can contribute substantially to the development of vascular diseases. Remediation of arsenic-contaminated sites is complicated by several factors: • As an element, arsenic cannot be destroyed or broken down by biological or normal physical processes into simpler, less toxic substances. • At Superfund sites, arsenic is generally present in complex mixtures, often with high levels of organic compounds. • Some natural geologic formations contain high levels of arsenic that can leach into groundwater. 
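The drinking-water exposure behind these concerns can be put in dose terms with the standard screening calculation (average daily dose = concentration × intake / body weight). This is a sketch using common default assumptions (2 L/day intake, 70-kg adult); the function name and defaults are illustrative, not regulatory prescriptions.

```python
def average_daily_dose(conc_ppb: float,
                       intake_l_per_day: float = 2.0,
                       body_weight_kg: float = 70.0) -> float:
    """Chronic average daily dose (mg/kg-day) from drinking water.

    conc_ppb is the contaminant concentration in ug/L (ppb); dividing
    by 1000 converts it to mg/L before applying ADD = C * IR / BW.
    """
    return (conc_ppb / 1000.0) * intake_l_per_day / body_weight_kg

# Doses at the former and revised drinking-water limits for arsenic
# (50 ppb and 10 ppb, discussed below):
print(average_daily_dose(50))  # ~0.0014 mg/kg-day
print(average_daily_dose(10))  # ~0.00029 mg/kg-day (a five-fold reduction)
```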
The body of research and the associated data concerning the potential health effects of exposure to arsenic were taken into consideration by the USEPA in its review and action to reduce the Maximum Contaminant Level (MCL) of arsenic in drinking water from 50 ppb to 10 ppb. Lead. Lead can be found in all parts of our environment. Much of it comes from human activities including burning fossil fuels, mining, and manufacturing. Due to past major reductions and now the
elimination of lead in gasoline, there has been a significant decrease in public exposure to lead in outdoor air. Remaining air pollution sources include lead smelters, incineration of lead batteries, and burning lead-contaminated waste oil. However, the most common sources of current lead exposure come from old homes containing lead-based paints and lead-contaminated soil. Because lead persists in the environment, it continues to be a contaminant of concern to the USEPA and ATSDR. Lead has been found in at least 70% of the National Priorities List sites identified by the USEPA. Lead is listed second on the 2001 ATSDR Priority List and is one of six “Criteria Air Pollutants” for which the USEPA has developed health-based national air-quality standards. Lead overexposure is a leading cause of workplace illness. Exposure to high levels of lead can damage the blood, brain, nerves, kidneys, reproductive organs, and the immune system. Lead poisoning is still the leading environmentally induced illness in children. Children are particularly susceptible to the harmful effects of lead because they are undergoing rapid neurological and physical development. Even repeated exposure to small doses can be a problem, because lead accumulates in the body. Lower levels that are more commonly associated with current exposures can result in impaired cognitive functioning, subtle neurobehavioral effects, and developmental effects in children, and have been associated with higher blood pressure in middle-aged men. Decades of research have been devoted to ascertaining the health effects associated with lead exposure and the underlying mechanisms for these detrimental effects. Even so, more research is needed. As the tools have become more sophisticated and sensitive, questions that could not even be considered in the past can now be studied. With this increased sensitivity, subtle health effects are now being detected. 
Because lead is persistent in the environment, continued research focused on low-level health effects and methods of prevention, including environmental remediation, is still necessary. Dioxin. “Dioxin” is a term commonly used to refer to the chemical 2,3,7,8-tetrachlorodibenzo-p-dioxin or TCDD. In all, there are 210 isomers of polychlorinated dibenzodioxins (PCDDs) and polychlorinated dibenzofurans (PCDFs)—collectively these compounds are often referred to as dioxin-like compounds or “dioxins.” The toxicity of dioxins varies with the position and number of chlorine atoms— many dioxins are only slightly toxic and some are nontoxic. However, animal studies have shown that TCDD is very toxic—it causes cancer and is a known endocrine disruptor that can alter reproductive, developmental, and immune function. Dioxins are among the 12 manmade chemicals targeted for global phase-out by the UN Treaty on Persistent Organic Pollutants (POPs).
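Because congeners differ so widely in potency, regulators commonly summarize a dioxin mixture as a single toxic equivalency (TEQ) value: each congener's concentration is weighted by a toxic equivalency factor (TEF) expressing its potency relative to TCDD. The sketch below uses placeholder concentrations and TEF values, not the official WHO figures.

```python
def toxic_equivalency(concentrations: dict, tefs: dict) -> float:
    """TEQ (TCDD-equivalents) = sum over congeners of concentration * TEF."""
    return sum(conc * tefs[congener] for congener, conc in concentrations.items())

# Hypothetical sample concentrations (pg/g) with placeholder TEFs.
# TCDD itself has a TEF of 1 by definition; a far less toxic congener
# contributes little to the TEQ even at a much higher concentration.
sample = {"2,3,7,8-TCDD": 2.0, "OCDD": 500.0}
tefs = {"2,3,7,8-TCDD": 1.0, "OCDD": 0.0003}

print(toxic_equivalency(sample, tefs))  # 2.0 + 0.15 = 2.15 pg TEQ/g
```

The design choice here mirrors the regulatory assumption of dose additivity: the mixture is treated as if it were an equivalent amount of TCDD alone.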
Dioxins are chemical contaminants that have no commercial use. They are formed as by-products in the burning of chlorine-based chemical compounds with hydrocarbons. Municipal waste incineration, forest fires, backyard trash burning, and manufacturing processes to produce herbicides and paper contribute to the production of dioxins. As a consequence, trace amounts of dioxins and furans are present in virtually all global ecosystems. Because dioxins are present in low levels as environmental contaminants in food, people are constantly exposed to them through ingestion. Even though they are not found at high concentrations in food, over time, dioxins accumulate in human tissues because they are not readily excreted or metabolized. Factors impacting the remediation of dioxin-contaminated sites include the following: • Dioxins are stable to heat, acids, and alkali • Dioxins bind tightly to soil and are virtually insoluble in water. This increases the difficulty of soil remediation but decreases the extent of groundwater contamination • Dioxins can be broken down by ultraviolet light—most have a half-life of 1–3 years • Dioxin uptake by plants from soil is limited—no detectable amounts of dioxin are found in grain and soybeans. Polychlorinated Biphenyls. PCBs are a family of 209 chemical compounds for which there are no known natural sources. Each consists of two benzene rings and 1–10 chlorine atoms; PCBs vary in degrees of toxicity. Importantly, PCB-contaminated sites are usually contaminated with mixtures of PCBs, and the toxicity of any mixture is dependent upon the interactions of the individual congeners. Because of their stability, resistance to fire, and electrical insulating properties, PCBs were widely used in a variety of industrial applications. Unfortunately, the very characteristics of PCBs that made them applicable for industrial uses make them problematic in the environment. PCBs are very persistent. 
They are generally unalterable by microorganisms or by chemical reaction. According to the ATSDR, PCBs have been found in approximately one-third of the National Priorities List sites identified by the USEPA. PCBs are extremely toxic—they are listed fifth on the ATSDR’s 2001 CERCLA Priority List of Hazardous Substances. PCBs have been demonstrated to cause a variety of adverse health effects in animal studies. PCBs not only cause cancer but can adversely affect the immune, reproductive, nervous, or endocrine systems. Studies in humans provide supportive evidence for potential carcinogenic and noncarcinogenic effects of PCBs as well. It has been suggested that many of the adverse health effects associated with PCB exposure are a result of its ability to mimic the body’s natural hormones (e.g., estrogen), and that this “endocrine (hormone) disruption” can lead to infertility, certain types of cancer, and other hormone-related disorders. Volatile Organic Compounds. Organic compounds that evaporate easily are collectively referred to as volatile organic compounds (VOCs). VOCs are widely used as cleaning and liquefying agents in fuels, degreasers, solvents, polishes, cosmetics, drugs, and dry-cleaning solutions. VOCs can have direct adverse effects on human health. Many VOCs have been classified as toxic and carcinogenic. VOCs of particular significance to human and environmental health include benzene, toluene, ethylbenzene, and xylene (BTEX), methyl t-butyl ether (MTBE), ethylene chloride, chlorobenzene, trichloroethylene (TCE), and perchloroethylene. Most VOCs found in the environment result from human activity—as the result of spills or inappropriate disposal, or as uncontrolled emissions from industrial processes. When VOCs are spilled or improperly disposed of, a portion will evaporate, but some will soak into the ground. Water can transport VOCs in soil, potentially carrying them to the groundwater table. 
When VOCs migrate underground to nearby wells, they can end up in drinking-water supplies.
Hazardous Waste: Assessing, Detecting, and Remediation
VOC contamination is recognized as a critical issue for both air and water: • USEPA estimates that VOCs are present in one-fifth of the nation’s water supplies. • Because VOCs are considered a precursor for ground-level ozone (smog), they are one of the six “Criteria Air Pollutants” for which the USEPA has developed health-based national air quality standards. • Remediation of VOC-contaminated soils and groundwater is complicated because it is common for the component organic pollutants to exist as separate liquid phases. Also, the migration of the dissolved plume is unique to each site. VOC contaminant transport is governed by the quantity of VOC in the plume; its relation to biological and chemical properties of soils and groundwater; the hydraulic properties of the geologic materials; and any structural features which can act as barriers or conduits for fluids. Therefore, it is difficult to generalize properties of VOC transport from one site to another. Mercury. Exposure to mercury occurs from inhalation, ingestion, and absorption. Primary sources of exposure are spills, incineration, contaminated water and food, and dental or medical treatments. Mercury is listed third on the ATSDR 2001 CERCLA Priority List of Hazardous Substances. Mercury is found at approximately 50% of all Superfund sites. Mercury enters aquatic and terrestrial systems from the atmosphere primarily in an inorganic form. However, under conditions that favor bacterial sulfate reduction, inorganic mercury is methylated to form methylmercury, a potent neurotoxin that bioaccumulates in fish. Wetlands, lake sediments, and anoxic bottom waters are three locations where methylmercury is rapidly formed as an incidental by-product of bacterial sulfate-reduction. As a consequence of atmospheric deposition of inorganic mercury, its metabolized form, methylmercury, can be found in fish from lakes remote from the initial point sources of contamination. 
Mercury contaminants are present in the environment in three forms—elemental mercury, inorganic mercury salts (e.g., salts of chlorine or sulfur), and organic mercury compounds such as methylmercury. The nervous system is very sensitive to all forms of mercury. Exposure to high levels of elemental, inorganic, or organic mercury can permanently damage the brain, kidneys, and developing fetus. Methylmercury and elemental mercury vapors are more harmful than other forms, because more mercury in these forms reaches the brain. Effects on brain function may result in irritability, shyness, tremors, or changes in vision, hearing, or memory. The human cancer data available for all forms of mercury are inadequate to draw conclusions as to its carcinogenic potential. On March 1, 2002, the Food and Drug Administration (FDA) announced that it will soon schedule a meeting of its Foods Advisory Committee to review issues surrounding methylmercury in commercial seafood. This review will include a reexamination of FDA’s most recent Consumer Advisory for pregnant women and women of childbearing age who may become pregnant. SBRP research will play an important role in these proceedings. Soils contaminated with mercury present unique challenges for remediation due to the variety of chemical forms in which mercury can occur and because of the challenge in meeting cleanup concentration goals set by regulation or risk assessment. Phytoremediation is not a viable option for mercury-contaminated soils. While thermal treatment (retorting) based on the unique volatility of mercury is listed by the USEPA as the Best Demonstrated Available Technology for mercury-contaminated wastes, its typically high costs, limited capacity, and potential for atmospheric releases have restricted wide application of this technology. Mixtures. Historically, toxicity and carcinogenicity testing as well as mechanistic research on environmental chemicals have focused on
single agents. Over the years, this single-agent approach has been critical in providing information that has led to a better understanding of the interactions of exposure and susceptibility over time. Indeed, the setting of standards for single substances is seen as an important and generally accepted tool in the protection of human health. However, it is becoming increasingly recognized that humans are not exposed to single chemicals. Rather, humans are exposed either concurrently or sequentially, by various routes of exposure, to a large number of chemicals from a wide variety of sources over varying periods of time. Therefore, researchers, environmental policy-makers, and public health officials are faced with the challenge of designing and implementing strategies to reduce human disease and dysfunction resulting from exposure to chemical mixtures.13 Scientific approaches that have been used to assess the effects of single chemicals on biological systems are inadequate to address the potential health consequences that may arise from exposure to chemical mixtures. Several factors contribute to the uncertainty of our understanding of the toxic effects of environmental exposure to chemical mixtures:14 • Many of the effects of exposure are subtle and difficult to quantify. • Many environmental contaminants are changed to metabolites or conjugates in the body, and these new products may also have biologic activity that may or may not be similar to the parent compound. Thus, even a single compound may become a functional mixture. • A single environmental contaminant may lead to different effects when exposure occurs at different ages. Researchers need to design studies that will evaluate long-term, delayed, and potential trans-generational health effects resulting from environmental or occupational exposures. 
• Humans may be exposed to a nearly infinite number of combinations of contaminants, and we do not know what dose ranges or which biologic endpoints should be studied.

RISK ASSESSMENT
Risk assessment is a structured methodology that is used to evaluate the possible effects of hazardous waste sites on human health and ecosystem health. The USEPA uses this process both to evaluate the extent of a problem at a Superfund site and to inform decision-makers from the preremedial through the postremedial phases of a Superfund site cleanup. An integral component of risk assessment is exposure assessment, which is the process of measuring or estimating exposures to chemical contaminants. The general goal of risk/exposure assessment research is to improve and validate the measurements, modeling, instrumentation, and study designs that are used to analyze the health risks and exposure pathways from Superfund sites. Some key areas of research include epidemiological studies that evaluate the relationship between exposure and disease in a population; the development of new risk assessment tools; use of models and biomarkers to measure exposure and effect; and studies elucidating the environmental pathways in which environmental contaminants are transported from the release site to possible points of contact with humans. The advances made in these studies can assist remedial project managers and other decision-makers in protecting the environment and meeting the public health needs of the communities affected by Superfund sites. The USEPA’s Superfund statutory authority mandates that it protect both human and ecological health at hazardous waste sites. The protection of human health has received more attention by the public, the USEPA, and other federal and state site managers. However, increased emphasis has recently been placed on the development of technologies and data to better assess ecological health. Now, a risk assessment is prepared for each site that includes separate assessments of human health and the ecological impacts of a site.
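For noncarcinogenic effects, one common screening step in such risk assessments is the hazard index: the estimated dose of each contaminant is divided by its reference dose, and the quotients are summed, with values above 1.0 flagging the exposure for closer evaluation. The sketch below uses placeholder doses and reference doses, not regulatory values.

```python
def hazard_index(doses: dict, reference_doses: dict) -> float:
    """Sum of hazard quotients (dose / reference dose) over contaminants."""
    return sum(dose / reference_doses[chem] for chem, dose in doses.items())

# Placeholder site doses and reference doses, both in mg/kg-day.
doses = {"arsenic": 0.0003, "mercury": 0.0006}
rfds = {"arsenic": 0.0003, "mercury": 0.0003}

hi = hazard_index(doses, rfds)
print(hi)  # 3.0: well above 1.0, so this hypothetical site would be flagged
```

Note the simplifying assumption built into this screen: dose additivity across chemicals, which ignores the congener interactions discussed in the mixtures section above.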
REMEDIATION
Remediation research covers the spectrum of technologies being developed for the cleanup of groundwater, sediments, soil, and other environmental media contaminated with hazardous substances. With primary prevention as the goal, researchers are developing innovative biological, chemical, and physical methods that effectively reduce the amount and toxicity of hazardous wastes. Remediation research also includes development of new and improved methods of hazardous waste containment, recovery, and separation. This broad area of research includes laboratory and bench studies, and applied field research once a technology has reached an advanced level. To develop novel remediation technologies, basic knowledge regarding the physical and chemical processes involved in each strategy is needed. For example, an in-depth understanding of sorption and desorption processes is necessary for many remediation technologies. Kinetic data, such as the rates and extent of hazardous waste conversion, are needed for thermal, chemical oxidation, and supercritical fluid technologies. The development of efficient and economical remediation strategies requires collaboration among a wide spectrum of diverse fields. For example, a microbiologist alone does not have all of the knowledge required to design and implement a bioremediation system, but requires support from experts in fields such as ecology, soil science, hydrogeology, geologic engineering, geophysics, and geochemistry. Remediation can be very practical, frequently with direct applications to Superfund sites, including field testing and patented cleanup technologies. The knowledge gained from understanding remediation processes not only serves as the basis for subsequent basic or applied research in these areas, but also provides a foundation for practical benefits such as lower cleanup costs on hazardous waste sites and improvements in human and ecological health risk assessments. 
NEED FOR MULTI-/INTERDISCIPLINARY RESEARCH
Research focused on hazardous wastes is driven by the need to protect human health; however, a positive outcome can be attained only if the full life cycle of the contaminant is understood.15 It is evident that hazardous substances are capable of moving through the environment from one stratum to another, interacting with microbes, plants, animals, and humans. Each step of the process must be elucidated and related to the steps before and after. Thus many scientific disciplines must be integrated. In doing so, the procedures of characterizing and evaluating risks of hazardous wastes can be scrutinized and revised as directed by new research findings. From a public health perspective, disease prevention and reduction of risk and exposure are fundamentally affected by the bioavailability and transformation of hazardous wastes in various media. Therefore, it is important to support the development of environmental technologies that allow for the treatment of environmental contaminants so that potential human health effects are ameliorated, or indeed prevented.16 Basic and applied research needs to be funded on the premise that these research developments will one day be used to decrease or prevent the risk to human health associated with hazardous wastes. It is important to recognize that the cleanup of contaminated soils, sediments, and
groundwater is not only for improvement of the environment, but it is also a means by which human exposure and human health risks can be reduced. To this end, promoting and strengthening basic and applied research in environmental technologies integrated within a framework of health-related research and development is essential.

RADIOACTIVE AND MIXED WASTES
Approximately 800,000 cubic feet of low-level radioactive waste was disposed of in 1993, a 45% decrease from the preceding year. Industry efforts to minimize waste generation and to reduce the volume of waste by compaction and incineration have contributed to the decrease. The Nuclear Regulatory Commission (NRC) has developed a classification system for low-level waste (LLW) based on its potential hazards, and has specified disposal and waste form requirements for each of the three general classes of waste—A, B, and C. Class A waste contains lower concentrations of radioactive material than Class C waste. The volume and radioactivity of waste vary from year to year based on the types and quantities of waste shipped each year. The disposal of high-level radioactive waste requires a determination of acceptable health and environmental impacts over thousands of years. Current plans call for the ultimate disposal of the waste in solid form in a licensed, deep, stable geologic structure. There are basically two types of by-product materials. The first type is produced by a nuclear reactor. More precisely, this is any radioactive material or material made radioactive by exposure incident to the process of producing or using special nuclear material. The second type is produced by the uranium and thorium mining process as well as the tailings or wastes produced by the extraction or concentration of uranium or thorium from ore processed primarily for its source material content, including discrete surface wastes resulting from uranium solution-extraction processes. The radioactive waste material that results from the reprocessing of spent nuclear fuel, including liquid waste produced directly from reprocessing and any solid waste derived from the liquid that contains a combination of transuranic and fission product nuclides in quantities that require permanent isolation, is referred to as high-level waste (HLW). 
HLW is also a mixed waste because it has highly corrosive components or has organics or heavy metals that are regulated under RCRA. HLW may include other highly radioactive material that NRC, consistent with existing law, determines by rule requires permanent isolation.
Definitions of Radioactive Wastes

Radioactive waste is solid, liquid, or gaseous waste that contains radionuclides. The Department of Energy (DOE) manages four categories of radioactive waste: high-level waste, transuranic waste, low-level waste, and uranium mill tailings. HLW is highly radioactive material from the reprocessing of spent nuclear fuel. HLW includes spent nuclear fuel, liquid waste, and solid waste derived from the liquid. HLW contains elements that decay slowly and remain radioactive for hundreds or thousands of years. HLW must be handled by remote control from behind protective shielding to protect workers. Transuranic (TRU) waste contains human-made elements heavier than uranium that emit α-radiation. TRU waste is produced during reactor fuel assembly, weapons fabrication, and chemical-processing operations. It decays slowly and requires long-term isolation. TRU waste can include protective clothing, equipment, and tools. Uranium mill tailings are by-products of uranium mining and milling operations. Tailings are radioactive rock and soil containing small amounts of radium and other radioactive materials. When radium decays, it emits radon, a colorless, odorless, radioactive gas. Released into the atmosphere, radon gas disperses harmlessly, but the gas is harmful if a person is exposed to high concentrations for long periods of time under conditions of limited air circulation.
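The contrast drawn above between slowly and rapidly decaying materials follows directly from first-order decay kinetics. The sketch below uses the well-known half-lives of radium-226 (about 1,600 years) and radon-222 (about 3.8 days) purely for illustration.

```python
import math

def activity_fraction(t: float, half_life: float) -> float:
    """Fraction of initial radioactivity remaining after time t.

    t and half_life must be in the same units; uses A/A0 = exp(-lambda*t)
    with decay constant lambda = ln(2)/half_life.
    """
    return math.exp(-math.log(2) / half_life * t)

# Radium-226 (half-life ~1,600 years) persists for centuries...
print(activity_fraction(100, 1600))  # ~0.96 of the activity after a century

# ...while its daughter radon-222 (half-life ~3.8 days) is largely gone
# within a month, which is why ventilation limits radon exposure so well.
print(activity_fraction(30, 3.8))    # ~0.004 after 30 days
```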
LLW is any radioactive waste not classified as high-level waste, transuranic waste, or uranium mill tailings. LLW often contains small amounts of radioactivity dispersed in large amounts of material. It is generated by uranium-enrichment processes, reactor operations, isotope production, medical procedures, and research and development activities. LLW is usually made up of rags, papers, filters, tools, equipment, discarded protective clothing, dirt, and construction rubble contaminated with radionuclides. Mixed waste is defined as radioactive waste contaminated with hazardous waste regulated by the RCRA. A large portion of the Department of Energy’s mixed waste is mixed low-level waste found in soils. No mixed waste can be disposed of without complying with RCRA’s requirements for hazardous waste and meeting RCRA’s Land Disposal Restrictions, which require waste to be treated before disposal in appropriate landfills. Meeting regulatory requirements and resolving mixed waste questions related to different regulations is one of DOE’s most significant waste management challenges.
CONCLUSION
The uncertainties and unknowns surrounding exposures present a huge challenge for decision-makers, especially for those dealing with hazardous waste sites. Accordingly, a basic, mechanistic understanding of the cellular, molecular, and biochemical processes that are affected by the exposures can enhance the scientific base used in the decision process.15,17 There are many aspects to developing a fuller understanding of the relationship between exposures and disease processes such as the identification of the causative agent(s); determination of the minimum dose where adverse health effects are manifested; and elucidation of the mechanisms by which these substances cause toxicity. The more we learn, the better understanding we will have of carcinogenesis, cardiovascular toxicity, reproductive toxicity, neurotoxicity, and other toxic effects. Clearly, these are all important public health concerns.

REFERENCES
1. Anderson FR, Mandelker DR, Tarlock AD. Environmental Protection: Law and Policy. Boston: Little, Brown; 1984: 558. 2. Carpenter DO, Suk WA, Blaha K, Cikrt M. Hazardous wastes in eastern and central Europe. Environ Health Perspect. 1996;104(Suppl 3):244. 3. Carter DE, Pena C, Varady R, Suk WA. Environmental health and hazardous waste issues related to the U.S.-Mexico border. Environ Health Perspect. 1996;104(Suppl 6):590. 4. Carpenter DO, Cirkt M, Suk WA. Hazardous wastes in Eastern and Central Europe: technology and health effects. Environ Health Perspect. 1999;107(4):249. 5. Suk WA, Carpenter DO, Cikrt M, Smerhovsky Z. Metals in Eastern and Central Europe: health effects, sources of contamination and methods of remediation. Int J Occup Med Environ Health. 2001;14(2):151. 6. Rummel-Bulska I. The Basel Convention: a global approach for the management of hazardous wastes. In: Andrews JS, Frumkin H, Johnson BL, et al., eds. Hazardous Waste and Public Health: International Congress on the Health Effects of Hazardous Waste. Princeton, New Jersey: Princeton Publishing Company; 1993: 139–45. 7. Darnay AJ, ed. Statistical Record of the Environment. Washington, DC: Gale Environmental Library; 1991. 8. Brown LR, Denniston D, Flavin C, et al. State of the World. New York: W.W. Norton; 1995: 172. 9. Report of the United Nations Conference on Environment and Development, Rio de Janeiro, Brazil, June 3–14, 1992. New York: United Nations, 1992; Chapter 19.
10. Anderson B, Thompson C, Suk WA. The Superfund Basic Research Program—making a difference—past, present, and future. Int J Hygiene Environ Health. 2002;205(1–2):137. 11. Smith CM, Christiani DC, Kelsey KT, eds. Chemical Risk Assessment and Occupational Health. Current Applications, Limitations, and Future Prospects. Westport, Connecticut and London: Auburn House; 1994. 12. Suk WA, Wilson SH. Overview and future of molecular biomarkers of exposure and early disease in environmental health. In: Wilson SH, Suk WA, eds. Biomarkers of Environmentally Associated Disease. Boca Raton, FL: CRC Press LLC/Lewis Publishers; 2002:3. 13. Suk WA, Olden K, Yang RSH. Chemical mixtures research: significance and future perspectives. Environ Health Perspect. 2002;110(Suppl 6):891.
14. Suk WA, Olden K. Multidisciplinary research: strategies for assessing chemical mixtures to reduce risk of exposure and disease. Eur J Oncol. 2003;133–42. 15. Suk WA, Anderson BE, Thompson CL, Bennett DA, VanderMeer DC. Creating multidisciplinary research opportunities: a unifying framework model helps researchers to address the complexities of environmental problems. Environ Sci Technol. 1999;4(6)241. 16. Young L, Suk WA, eds. Biodegradation: its role in reducing toxicity and exposure to environmental contaminants. Environ Health Perspect. 1995;103(Suppl 5):3–123. 17. Suk WA, Olden K. Environmental health and hazardous waste: research, policy and needs. Curr World Leaders, Int Issues. 1996; 39(6):11.
Aerospace Medicine
50
Roy L. DeHart
Aerospace medicine is “that specialty of medical practice within preventive medicine that focuses on the health of a population group defined by the operating aircrews and passengers of air and space vehicles, together with the support personnel who are required to operate and maintain them.”1 The practice of aerospace medicine tends to reverse the usual order of traditional or curative medicine. Normally the physician is treating abnormal physiology (illness) in a normal (terrestrial) environment. The physician concerned with the care of the aviator or astronaut most frequently deals with a normal (perhaps supernormal) individual in an abnormal (aeronautical) environment. Since its earliest beginnings, flight has required people to adapt to or to protect themselves from multiple environmental stressors. Progress in flight has required continuing improvement in adaptation or in the devices used for protection. Such progress has always been marked by the sacrifices made by those who push the envelope of aeronautical and astronautical activity. On December 17, 1903, on a windswept beach in Kitty Hawk, North Carolina, the Wright brothers succeeded in accomplishing sustained powered flight for 12 seconds over a distance of 40 m. In less than 15 years, thousands of these powered flying machines swarmed over the battlefields of the “Great War.” During this rapid expansion of military aviation, the seed of aviation medicine sprouted, took root, and grew. The department of space medicine was officially established at the United States Air Force School of Aerospace Medicine under the directorship of Dr Hubertus Strughold on February 9, 1949.2 The first human-operated flight in space, circumnavigating the globe, was performed by Soviet cosmonaut Yuri Gagarin on April 12, 1961. In February 1962, American astronauts joined the Soviets with the successful orbital flight of John Glenn. 
Biomedical oversight for the United States’ space program is headquartered at the National Aeronautics and Space Administration’s (NASA) facility at the Johnson Space Center, Houston, Texas. Following successful lunar flights and space laboratory missions, the United States entered into a nearly routine operation with the space transportation system or “shuttle.” The losses of Challenger in 1986 and Columbia in 2003, however, are a reminder of the operational hazards of space flight.
THE SPECIALTY OF AEROSPACE MEDICINE
Shortly after World War II, the Aero Medical Association initiated activities for the establishment of a training program for medical specialists in the field of aviation medicine. In 1953, the American Board of Preventive Medicine (ABPM) approved the decision to authorize certification in aviation medicine. The first group of physicians was certified in the specialty that same year. As of 2005, 1376 physicians have been certified in the specialty. With the advent of space flight, both the association and the specialty changed names to appropriately reflect activities in both the
aeronautical and astronautical environments. The name of the specialty was officially changed by the ABPM to Aerospace Medicine. In 2000, the ABPM initiated the development of a Certificate of Added Competency in Undersea and Hyperbaric Medicine. This is of interest to aerospace medicine because of its relation to the hyperbaric environment, which is used to treat dysbarism or aviator’s bends.
TRAINING AND EDUCATION
Few physicians have the opportunity to gain experience in aerospace medicine until their postgraduate years. Typically, physicians are introduced to the specialty via one of two routes. Those practitioners with an interest in aviation may turn to the Federal Aviation Administration (FAA) for orientation and training as an aviation medical examiner (AME) to support general aviation. Each year the FAA conducts postgraduate educational courses for new physicians who are becoming AMEs and refresher training for established AMEs. The second route is via the military, as the three services conduct their own training programs for flight surgeons. These courses are basically introductory and focus on the clinical preventive medical aspects of evaluation and care of the aviator. Historically, most physicians who have entered the field of aerospace medicine have done so via the military route.
Residency Programs
Aerospace medicine is one of the smallest specialty training programs in the United States, both with regard to training sites and number of residents. Its program is similar in structure to other training programs in preventive medicine. Two programs are under Department of Defense (DoD) sponsorship. The Air Force program is headquartered at the United States Air Force School of Aerospace Medicine, San Antonio, Texas, and the Navy program is managed at the Naval Operational Medical Institute, Pensacola, Florida. There are two civilian university residency programs available: Wright State University, College of Medicine, Dayton, Ohio, and the University of Texas, College of Medicine, Galveston. Both enjoy affiliation agreements with NASA. Fewer than 50 residents are in training at any one time, with 25–30 candidates sitting for the specialty board examination annually.
THE AEROSPACE ENVIRONMENT
The characteristic that distinguishes aerospace medicine from other medical fields is the complex environment in which flight takes place. Stressors that impinge on humans in this unique environment, either singularly or in combination, include hypoxia, reduced atmospheric pressure, thermal extremes, brief and sustained acceleration fields, ionizing radiation, null gravity fields, and maintenance of situational awareness. For men and women to perform successfully in this potentially
hazardous environment, the principles of preventive medicine apply in the selection, health maintenance, and engineering protection of the aircrew.
The Biosphere
The chemical and physical properties of the atmosphere vary with altitude. Although these properties are frequently described in terms of altitude, it must be appreciated that the atmosphere is dynamic: specific characteristics are altered by season, the earth’s rotation, and latitude. For practical purposes, the components of the atmosphere and their relative percentages remain constant up to an altitude of approximately 90 km. The major constituents of the atmosphere are nitrogen (78%) and oxygen (21%). The remaining 1% consists of argon, carbon dioxide, helium, krypton, xenon, hydrogen, and methane. The actual percentages of these constituents vary with the water content of the atmosphere, which is altitude-dependent; as one ascends, the air becomes drier. Regardless of the altitude within the aeronautical frame of reference, the fractional concentration of oxygen at sea level is essentially the same as that found at 90 km. The difference is that the partial pressure of oxygen is much reduced at altitude, and the physiological availability of oxygen is reduced accordingly. One constituent of the atmosphere has received considerable attention in recent years because of concern over the adverse health effects should its concentration be reduced: ozone. Ozone is produced in the upper atmosphere by the photodissociation of molecular oxygen. It attains maximum density at an altitude of approximately 22 km but is present in measurable concentrations from 10 to 35 km. Reduction in the ozone concentration increases the level of ultraviolet radiation reaching the earth’s surface (see Chap. 3). At sea level, the column of air overhead creates an atmospheric pressure of 760 mm Hg, 760 torr, or 1013.2 millibars. As one ascends in altitude, there is less of a column of air and thus less air pressure; this relationship is not linear, however: the density of the air decreases exponentially.
Consequently, at a height of 5.5 km the air density is one-half that found at sea level, and at 11 km the density is one-quarter. In practice the actual heights are somewhat greater because of the effects of temperature.
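The exponential fall-off described above can be sketched numerically. The following is an illustrative model only, using the ~5.5 km "halving height" quoted in the text as an assumed constant; real atmospheric profiles vary with temperature, season, and latitude.

```python
# Simplified isothermal model of ambient pressure vs. altitude (illustrative
# sketch, not a standard-atmosphere calculation). Pressure and density are
# assumed to halve every ~5.5 km of ascent, per the figure quoted in the text.

SEA_LEVEL_PRESSURE_MMHG = 760.0
HALVING_HEIGHT_KM = 5.5  # assumed altitude interval over which pressure halves

def approx_pressure_mmhg(altitude_km: float) -> float:
    """Approximate ambient pressure, halving every ~5.5 km of ascent."""
    return SEA_LEVEL_PRESSURE_MMHG * 0.5 ** (altitude_km / HALVING_HEIGHT_KM)

for h in (0.0, 5.5, 11.0):
    print(f"{h:5.1f} km: ~{approx_pressure_mmhg(h):6.1f} mm Hg")
```

Running the sketch reproduces the text's figures: full pressure at sea level, about half at 5.5 km, and about a quarter at 11 km.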
To avoid discomfort and potential hazard in flying at altitude, the most logical solution is to carry one’s terrestrial environment along. Although it is seldom described in these terms, this is the principle applied in many aircraft systems, particularly passenger-carrying aircraft. The body of the aircraft becomes a pressure vessel in which the air pressure and oxygen availability are similar to those at sea level. For a number of practical reasons, balancing passenger comfort and the avoidance of clinical hypoxia against the additional cost of maintaining a sea-level environment, the actual cabin altitude for most commercial aircraft is set at approximately 2500 m. Although passengers will note some pressure changes in the ears or sinuses, the change is gradual and rarely causes pain or discomfort. In most cases, the passenger is not even aware of these pressure changes. The cabin altitude is set so that most passengers are able to fly without experiencing any hypoxic symptoms. Occasionally, passengers with a compromised pulmonary or cardiovascular system may require supplemental oxygen, since their reserve is inadequate to compensate for these relatively small changes in oxygen partial pressure. In the absence of a pressurized cabin, the aviator may be forced to adapt by wearing a self-contained pressure system. Although the public is most familiar with “space suits” from television reporting, similar suits have been used for over a half century by military aviators flying high-altitude missions. Provided the ambient pressure is adequate, supplemental oxygen systems permit high-altitude flying and provide a safety factor for passengers on commercial airliners. Most systems employ an oxygen storage system of either pressurized gas or liquid oxygen. The source of oxygen is then connected through a regulator or metering device to an oxygen mask worn by the user. Another less commonly used oxygen storage system uses solid chemicals that, when activated, release oxygen.
Two devices have been developed to provide onboard oxygen generation systems. The fuel-cell concept has been developed for space flight and is basically an electrolysis system freeing oxygen from water. A second system uses the reversible absorption properties of fluomine for oxygen. In this technology, pressurized air is forced over a fluomine bed, and the pressure is then reduced, allowing the absorbed oxygen to be released. Other techniques have included the molecular sieve device, which is used to filter oxygen from air; a similar technology employs a permeable membrane that passes oxygen preferentially to other constituents of the atmosphere.
Oxygen Systems
Hypoxia, which may have any one of several causes, has devastating effects on normal physiological function. In aviation, this oxygen deficiency is due to a reduction in the partial pressure of oxygen in inspired air, which falls at altitude as the barometric pressure falls. The alveolar partial pressure of oxygen is the most critical factor in this problem. In aviation, two factors must be considered in understanding hypoxia at altitude. Not only may the partial pressure of oxygen be low (for example, the available oxygen in the ambient air at 6 km is reduced by half), but the total ambient pressure may be insufficient to permit gas exchange at the alveoli. Considering that water vapor pressure at normal body temperature is 47 mm Hg and the residual alveolar carbon dioxide pressure is 40 mm Hg, for any air exchange to occur in the lung the ambient pressure must exceed 87 mm Hg. Even if the aviator is breathing 100% oxygen, an ambient pressure no higher than 87 mm Hg cannot overcome the gas pressures already present at the alveoli, and no oxygen can be delivered. Hypoxia is particularly dangerous because its signs and symptoms produce little discomfort and no pain. Between 2000 and 3000 m, the subtle symptoms may include deficiencies in night vision and some drowsiness. Unfortunately, intellectual impairment can be an early manifestation of hypoxia, compromising the ability of an individual to behave rationally. Thinking is slow and calculations are difficult. Both memory and judgment are faulty, and reaction time is delayed. This condition can be rapidly treated by administering oxygen at altitudes between 3000 and 10,000 m, by adding positive-pressure oxygen up to 14,000 m, or by enclosing the individual in a pressurized system with available oxygen at altitudes out to space.
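The 87 mm Hg threshold above follows from simple subtraction: water vapor and alveolar carbon dioxide claim their share of ambient pressure before any oxygen can remain. A minimal sketch of that arithmetic (function name is illustrative; this is the text's simplified reasoning, not a clinical alveolar gas equation):

```python
# Pressure left over for oxygen at the alveolus when breathing 100% oxygen,
# per the simplified reasoning in the text: water vapor (47 mm Hg at body
# temperature) and alveolar CO2 (40 mm Hg) are subtracted from ambient
# pressure first.

WATER_VAPOR_MMHG = 47.0   # at normal body temperature
ALVEOLAR_CO2_MMHG = 40.0  # residual alveolar carbon dioxide pressure

def alveolar_o2_on_pure_oxygen(ambient_mmhg: float) -> float:
    """Pressure available for oxygen at the alveolus on 100% O2 (mm Hg)."""
    return max(0.0, ambient_mmhg - WATER_VAPOR_MMHG - ALVEOLAR_CO2_MMHG)

print(alveolar_o2_on_pure_oxygen(760.0))  # sea level: 673.0 mm Hg remains
print(alveolar_o2_on_pure_oxygen(87.0))   # at the 87 mm Hg threshold: 0.0
```

Below an ambient pressure of 87 mm Hg the function returns zero: even pure oxygen cannot reach the alveoli, which is why pressurization rather than supplemental oxygen is required at extreme altitude.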
Biodynamics
The first powered-flight aviation death occurred in the United States when an army lieutenant sustained fatal injuries while flying with Orville Wright. Since that initial accident, there has been an ever-increasing sophistication in the science of aircraft accident prevention and aircrew and passenger protection. Acceleration occurs whenever the velocity of an object changes, either in direction or in magnitude. For convenience, transitory acceleration in aerospace applications is expressed in terms of “g,” where one g corresponds to an acceleration of approximately 9.8 m/s². Transitory acceleration is of such short duration that the body does not reach a steady-state status. Protection from transitory acceleration has generally centered on two technologies: the development of restraint devices, such as lap belts and shoulder harnesses, and the design of crew space to reduce the possibility of contact. Accident protection technology has been employed in the design of the airframe to absorb energy and in improvements in seat structure to reduce mechanical failure. Primarily in military aviation, escape systems have been designed that often impart a new acceleration field. Ejection seats and capsules are designed to carry the occupant free of the aircraft envelope even on the ground at zero speed or in adverse conditions during uncontrolled descent. These new components of acceleration are specifically designed to remain within human tolerance. During World War I, fighter pilots began reporting visual changes when they engaged in a pull-out or during aerial combat.
Research using a human centrifuge demonstrated the effects of blackout during sustained acceleration in 1935. Sustained acceleration is achieved when the body has sufficient time to reach equilibrium with the effects of the acceleration. In this context, g has been used to reflect a ratio of weight. Consequently, a 175-pound pilot flying a maneuver in which he or she sustains 4 g would experience an increase in apparent body weight from 175 to 700 pounds. In such an acceleration field, a flight helmet with equipment weighing 10 pounds effectively weighs 40 pounds. Because any mass exposed to such a field experiences a proportionate increase in weight, acceleration has dynamic effects on the body’s hydrostatic blood column and thus on cardiovascular function. For example, the hydrostatic column from the heart to the eye is 30 cm in a normal terrestrial environment; when exposed to a +6 g acceleration, it becomes equivalent to a column of 180 cm. In this example, the body’s blood pressure would be unable to overcome the hydrostatic pressure, and blood flow to the level of the eyes would cease. Because of the normal internal pressure of the eyeball, the pilot experiences blackout, in which vision is lost but consciousness is maintained. When tested on a centrifuge using a standard protocol, the typical aviator, relaxed and without any protective devices, experiences blackout between 4 and 5.5 g. The same aviator, when allowed to strain to increase blood pressure, is able to increase tolerance by an additional 0.5–1.5 g. Two critical factors affect the degree of tolerance: the rate of onset of the acceleration and its duration. Further protection is available using mechanical devices such as an anti-g suit, basically a lower-torso garment with bladders that press on the abdomen, thighs, and calves. These bladders inflate when a sensor is stimulated by acceleration. Such devices increase g tolerance by about 2 g.
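The blackout arithmetic above can be made concrete by converting the heart-to-eye blood column into units of blood pressure. This is a sketch only: the blood density and cm-to-mm Hg conversion are textbook approximations I am assuming, not figures from this chapter.

```python
# Why +6 g causes blackout: the hydrostatic pressure of the heart-to-eye
# blood column scales linearly with the g load. Converting that column to
# mm Hg lets it be compared with systolic blood pressure.
# Assumptions: blood density ~1.06 g/cm^3; 1 mm Hg supports ~1.36 cm of water.

BLOOD_DENSITY = 1.06      # g/cm^3 (approximate)
CM_H2O_PER_MMHG = 1.36    # conversion factor (approximate)

def column_pressure_mmhg(column_cm: float, g_load: float) -> float:
    """Hydrostatic pressure of a blood column under a sustained g load."""
    return column_cm * g_load * BLOOD_DENSITY / CM_H2O_PER_MMHG

HEART_TO_EYE_CM = 30.0  # heart-to-eye distance from the text
for g in (1, 4, 6):
    p = column_pressure_mmhg(HEART_TO_EYE_CM, g)
    # At +6 g the required pressure exceeds a typical systolic pressure,
    # so perfusion at eye level stops.
    print(f"+{g} g: ~{p:5.0f} mm Hg needed to perfuse the eye")
```

At +1 g the column costs roughly 23 mm Hg, easily met by normal blood pressure; at +6 g it rises to roughly 140 mm Hg, above a typical systolic pressure, which is consistent with the blackout threshold described in the text.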
Research performed over half a century ago demonstrated that an anti-g suit properly worn during performance of a straining maneuver can increase g tolerance from approximately 4 g to about 9 g. Another mechanism used to enhance acceleration tolerance for pilots has included body positioning to orient the long axis of the body more perpendicular to the acceleration vector. Positive pressure breathing is also shown to be helpful in increasing tolerance, as it increases intrathoracic pressure. The biomechanical force environments in aerospace systems can be enormous, with generation of severe noise and vibration. Human exposure to these forces may affect performance and contribute to adverse health outcomes. Prevention is the key to proper management of these stressors.
Spatial Disorientation
The complex neurosensory system that we terrestrials use to maintain orientation in the three dimensions of our normal existence is inadequate for the dynamic three-dimensional environment of aerospace. Vision is by far the most important sensory modality for maintaining spatial orientation. Visual information processing, however, is modulated by the vestibular system and, to some degree, by proprioception and motion cues. The role of vestibular function in maintaining spatial orientation is not as clearly defined or evident as that of vision. Once we are deprived of visual cues, the vestibular system becomes a major source of orientation cues in our normal environment. The visual-vestibular interface is important in fine-tuning our spatial orientation. However, an individual with a nonfunctional vestibular system is able to perform well as long as visual cues are adequate. In the environment of flight, the aviator is exposed to far more complex motion inputs than the physiological system is designed to process. Not infrequently, visual cues may conflict with the apparent motion and velocity cues processed by the vestibular system. These conflicting cues may lead to severe spatial disorientation or induce episodes of motion sickness. In flight, the visual system may be subject to various illusions, which may cause the pilot to perceive a position in free space that is inaccurate. At night or in inclement weather, the pilot may not have any external visual cues at all.
Vestibular illusions are often severe and may produce a fatal outcome. These illusions are generally produced by velocity changes that generate input from the semicircular canals and otolith organs. Disorientation accidents in military aircraft account for approximately 15% of fatal mishaps. Measures that may be employed to prevent these accidents include modifying flight procedures to reduce the opportunity for disorientation; improving the ease of interpretation of information presented by flight instruments; increasing proficiency in instrument flying, which will permit the pilot to overcome false sensory input; and educating the pilot regarding physiological frailty and the need for dependence on and acceptance of flight instrument information.
Space
The transition from the terrestrial to the space environment is not a well-demarcated line but rather a continuum that varies with altitude depending on the parameter discussed. Human-operated flight in near-earth orbit at altitudes in excess of 240 km requires a self-contained vehicle sealed from the near vacuum of space. At this altitude, the air density is so low that there is no practical method of compressing the gases to supply both pressure and oxygen to the craft’s inhabitants. Although the sun’s radiation may heat the vehicle, occupants must be protected from the extreme cold of the ambient environment. While in orbital flight, the astronaut experiences a nearly gravity-free, or weightless, environment. This occurs when the gravitational force vector is counterbalanced by the centrifugal force imparted to the vehicle as it travels tangential to the earth’s surface. Long-term exposure to this near-null-gravity environment has important biomedical ramifications that are not yet fully defined. The earth’s atmosphere serves as an insulator shielding us from many of the potential dangers of space radiation. Once a person is in space, this protection is no longer available, and ionizing radiation must be a concern. Three types of radiation present hazards: primary cosmic radiation, geomagnetically trapped radiation (the Van Allen belts), and radiation produced by solar flares. The environment of space is similar in many ways to the aeronautical environment; however, the duration of exposure is much more prolonged in space, and null gravity is unique.
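The force balance behind orbital weightlessness can be sketched numerically: in a circular orbit, gravity supplies exactly the centripetal acceleration required, so vehicle and occupants are in continuous free fall. The constants below are standard approximations, not values from the text.

```python
import math

# Circular-orbit speed at which gravitational attraction exactly balances
# the required centripetal acceleration, leaving occupants effectively
# weightless. Constants are standard approximate values (assumptions).

EARTH_GM = 3.986e14       # Earth's gravitational parameter GM, m^3/s^2
EARTH_RADIUS_M = 6.371e6  # mean Earth radius, m

def orbital_velocity_m_s(altitude_km: float) -> float:
    """Speed for a circular orbit at the given altitude: v = sqrt(GM / r)."""
    r = EARTH_RADIUS_M + altitude_km * 1000.0
    return math.sqrt(EARTH_GM / r)

# Near-earth orbit at the ~240 km altitude mentioned in the text:
print(f"{orbital_velocity_m_s(240) / 1000.0:.2f} km/s")  # just under 8 km/s
```

At any lower speed the vehicle would descend; at this speed the "fall" toward Earth exactly matches the curvature of the surface, which is why orbiting astronauts feel no weight despite gravity being nearly full strength at that altitude.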
OPERATIONAL AEROSPACE MEDICINE
The physician practicing aerospace medicine as a clinical specialty must be an astute clinician in the office setting and also a practitioner able to grasp the nuance of the environment of flight. The stressors impinging on aircrew vary with the type of flight vehicle, whether a single-seat private plane or a multicrew space habitat. Consequently, the physician serving as an AME or flight surgeon (FS) must be cognizant of the aircrew’s flight environment. For ease of discussion, these operational flight environments are defined as civil aviation, military aviation, and space operations.
Civil Aviation
This category of flight operations includes commercial aviation and private or recreational flying. Airlines represent an international industry, with aircraft worldwide transporting nearly 1.6 billion passengers over 1300 billion air miles per year, compared with some 650 million passengers emplaned in the United States. With the deregulation of the airline industry in the United States, air commuter and air taxi operations have grown to fill the vacuum left when airlines pulled out of small airport terminals. Most large corporations in the United States either own or lease aircraft for business purposes. Other commercial activities include air ambulance service, flight training, aerial application, air cargo, and the growing industry of commercial parcel delivery.3 In the United States there are approximately 460,000 active pilots, 167,000 general aviation aircraft, 10,000 air carrier aircraft, and 18,000 airports.4
The magnitude of preventive medicine intervention by the aerospace physician takes on added meaning when one realizes that all U.S. licensed aviators are required to have an initial medical examination prior to issuance of their license and periodic assessments as long as they continue to fly. To examine these aviators, the FAA has designated 4800 physicians as AMEs. These physicians have undergone special training conducted under the auspices of the FAA; they may have had experience as military flight surgeons and frequently are private pilots themselves. The examination is performed to a rigorous protocol, and detailed physical standards have been promulgated. The periodicity and sophistication of the examination are dictated in part by the class of the license exercised by the aviator. The airline captain must meet a more stringent standard, more frequently, than is required of the private pilot. In all cases, the medical examination is reviewed by medical personnel at the FAA’s Civil Aeromedical Institute (CAMI). Approximately 1800 medical examinations are received by the office each business day. This represents one of the largest longitudinal medical databases in the country; unfortunately, the resources to use the tools of epidemiology to fully study this wealth of data have not been available. Another employment category required to meet flight medical standards is the air traffic controller. These 15,000 federal employees stationed throughout the United States must meet, as a minimum, the physical standards required of pilots, and just as with pilots, these examinations are repeated periodically. CAMI also has responsibility for conducting research to address issues of health and safety for flight deck and cabin crew as well as for the private aviator and passengers.
Toward this goal, the institute has conducted research and recommended standards on emergency aircraft lighting, egress systems, restraint systems, breathing equipment, emergency breathing devices, and flotation systems.
Military Aviation
The Air Force has by far the widest range of aeronautical activities. Low and slow describes some Air Force missions, while others reach the fringes of space. Current fighter aircraft are capable of readily exceeding the physiological tolerance of the pilot with rapid-onset, high-g maneuvers. The response of fighter aircraft is so fast that controls are now electronic rather than hydraulic or mechanical. Large transport aircraft are capable of nearly endless flight with air-to-air refueling. With rest facilities and multiple crews, the aircraft can simply keep on flying; the only restrictions are the crew rest requirements of its human operators. Since the 1950s, it had been predicted that aeronautical design would take aircraft performance beyond that of the pilot. That time has arrived: aeronautical engineers are now forced to curtail the performance characteristics of aircraft because the human operator can fail. Army aviation medicine has for some years concentrated on the unique facets of rotary-wing operations and pilot adaptability. In past years, helicopter crashes that were survivable in terms of impact force frequently ended in fire and the death of the occupants. With intense research and redesign, this hazard has been significantly reduced. The military necessity of helicopter operations in adverse weather conditions and at night has created human-factor challenges that have been only partly addressed by technology. The unique challenge for naval aviation medicine is related to aircraft carrier operations. The flight surgeon is responsible not only for the health maintenance of the flight crews but also for maintaining health surveillance for the 5000 people on board the carrier. The word “independent” has been used to describe a prominent characteristic of this medical service.
The flight surgeon is the public health officer for this isolated community and oversees all aspects of hygiene, epidemiological surveillance, health maintenance, and medical disaster preparedness aboard ship. The Navy has celebrated the sixtieth anniversary of the Thousand Aviator Program, one of the first large cohort, longitudinal health surveillance programs undertaken in the United States. More than 1000 aviators and aviation cadets were examined using psychological and physiological assessment procedures. This ongoing study has
reviewed cardiovascular status, overall morbidity and mortality rates, and the effects of the aviation experience on the overall health of the individual.
Space Operations
The United States’ piloted space program has enjoyed successes; unfortunately, it has also experienced disasters that continue to remind one that space operations are neither routine nor free from potential catastrophic failure. On February 1, 2003, the Space Shuttle Columbia broke up on reentering Earth’s atmosphere, and all seven crew members were lost. Foam insulation had fallen from the external fuel tank during launch, damaging the left wing; on reentry, hot gases entered the wing, resulting in the craft’s destruction. The Soviet Union likewise has experienced success and disaster in space. As experience has accumulated with human-days in space and the monitoring of increasing numbers of astronauts in the space environment, medical concerns have focused on the physiological effects of null gravity. Based on current experience with short-duration flights, the biomedical challenges include space adaptation syndrome (space motion sickness), cardiovascular deconditioning, loss of red cell mass, and bone mineral loss. For Space Transportation System (shuttle) operations, the first two concerns are primary. Space adaptation syndrome has been experienced by up to one-third of shuttle crews. This syndrome occurs in the early segments of orbital flight and may adversely affect early mission performance. Fluid shifts and deconditioning effects occur even during the relatively short duration of shuttle orbital missions. Performance during orbit does not appear to be compromised, but with the increasing g load upon reentry, performance decrements are possible. As preparations proceed for a continuous habitat in space, the remaining biomedical challenges will become important. Russia has successfully maintained cosmonauts in orbit for over a year. The International Space Station introduces additional challenges for maintaining astronauts on long-duration missions.
The environmental control systems must be able to maintain potable water and uncontaminated air reliably for long periods. Microbial overgrowth must be prevented. Food and sanitation issues need to be addressed, with resupply providing only one solution. Health maintenance surveillance and emergency medical treatment will require attention. Crew work-rest cycles and psychological considerations remain challenges, as do biologically efficient extravehicular activities. On January 14, 2004, President Bush announced a new vision for further space exploration. First, the United States will complete work on the International Space Station by 2010, meeting its commitment to more than 15 international partners. Second, the nation will begin development of a new manned enterprise to explore worlds beyond our orbit, the Crew Exploration Vehicle, with its first mission by 2014 and a one-month stay on the lunar surface. Third, a 30-month round-trip flight to Mars will follow success with the lunar habitat.5 With current propulsion systems, the Mars mission will require optimal employment of orbital mechanics. Approximately half of the mission will be spent in transit from Earth to Mars and return; the other half will be spent in residence on the Martian surface awaiting the time window for initiating the return launch. There is an enormous gap between the nation’s current knowledge and available technology and what will be required for a successful Mars trip. NASA has begun to develop the Bioastronautics Roadmap to assist in defining the problems, and developing the solutions, that must precede any such long-duration space flight and human habitation on a planet.6 In the fall of 2004, SpaceShipOne became the first civilian venture to enter suborbital space flight. A space tourism industry developed by the private sector could be in business by 2008, with up to 100,000 paying passengers a year taking suborbital flights by 2020.
Anticipating this possibility, the FAA has begun the development of general medical guidance for operators of manned suborbital commercial space flights. This guidance will identify and prioritize the minimum medical requirements necessary to promote the safety of paying passengers who
intend to participate in these flights. Suborbital space flight may expose passengers to a far more hazardous environment than that experienced on traditional flights.

PERSONNEL, PASSENGERS, AND PATIENTS
In general, the people most involved in the aerospace industry are flight crews, cabin personnel, ground staff, passengers, who represent the chief revenue source for commercial aviation, and patients, who may be transported either by an airline or air ambulance service.
Personnel

U.S. flag carrier airlines are responsible for the direct employment of approximately 650,000 workers, including 75,000 flight deck and 90,000 cabin crew members.4 The remaining employees make up the maintenance teams, counter servicing and baggage personnel, and those engaged in administration and management. The preventive health surveillance and medical monitoring of these individuals are provided via a variety of health service mechanisms. A number of the larger airlines maintain modern, sophisticated medical departments providing both occupational and aviation medicine services to the workforce. Other airlines have elected to keep only a minimal medical presence in-house and to contract for or otherwise provide services to employees. Smaller airlines have found it effective to retain the periodic services of an aeromedical consultant and to contract out health services. Less common is contracting all health services without the benefit of corporate medical oversight. Airlines providing comprehensive aviation medical services will provide many, if not all, of the services detailed in Table 50-1.7 Many of the activities for either flight crew or ground personnel are clinical preventive medicine services. The sophistication of the preemployment examination depends on the job description of the future employee. In part because of the enormous training investment in pilots, airlines try to select pilots who are free of active disease, who have few precursors to chronic illness, and who do not exhibit high-risk lifestyle behavior. Although many pilots earn their livelihood in commercial aviation, most aviators in the United States are private pilots who fly for recreation or business. Whether the aircraft is a wide-bodied, multiengine, commercial passenger airliner, a high-performance jet fighter, or a single-engine private aircraft, the aviation environment and its potential adverse effects on human physiology remain.
Although the level to which stress is imposed on the aviator is determined in large measure by the flight profile of the aircraft, all aviators are exposed to some adverse environmental factors associated with flight. Prevention or amelioration of adverse effects resulting from the flight environment continues to be a key component of the practice of aerospace medicine. Flight personnel whose health and well-being may be compromised by illness or by self-imposed stress compromise their performance as aviators and thus have a potential adverse effect on flight safety.

TABLE 50-1. AIRLINE AVIATION MEDICAL SERVICES
Preemployment medical examination
Employee assistance program
Drug abuse testing
Psychological profile or personality inventory
Physiological training
Wellness or health maintenance program
Acute care
Emergency response service
Periodic medical assessment
Job-related illness or injury monitoring
Return-to-work assessment
Aircraft accident team

Illness and Disease. Aviation is among those few avocations or vocations where the incapacitation of the operator could have dire effects. Once airborne, the aircraft is dependent on the pilot to safely complete the flight. Although there are many assists to the aviator both in the aircraft and on the ground, the number of aircraft capable of fully automated flight is small. Consequently, public safety dictates that the potential for pilot incapacitation be minimized. There are many physical afflictions an aviator may have without undue risk to flight safety. However, certain medical conditions are currently considered incompatible with safe flight. The clinical skills of the aerospace medicine specialist are most tested in diagnosing occult disease and determining the risk such a condition may impose on flight safety and the aviation activities of the aviator. Unexplained loss of consciousness and epilepsy are examples of conditions that may create an unacceptable risk to the pilot and to the public. Diabetes mellitus requiring medication and exertional angina are other examples where the risk to public safety may take precedence over the individual pilot's desire to continue flying.

Therapeutic Medications. Physicians write over 2 billion prescriptions for therapeutic medications each year in the United States. An even greater number of over-the-counter medications are purchased annually. With this degree of drug ingestion among the U.S. population, it is most probable that medication is being taken by a substantial percentage of aviators. Both therapeutic effects and adverse side effects may create situations that adversely affect flight performance. Common side effects of medications include drowsiness and loss of concentration. A pilot on a long, uneventful flight must be vigilant to fight boredom and inattention. He or she may also be experiencing mild hypoxia. If one adds to this scenario the side effects of medication, the results could be tragic. Most studies have shown that adverse effects of medications are enhanced by the flight environment.
The Department of Defense, because it supervises the health care of its pilots, simply removes the aviator from flight duty until completion of the therapeutic regimen. For long-term or chronic disease requiring therapy, such as mild hypertension, limited prescription medications are available, provided a prior trial has demonstrated that the pilot experiences no adverse side effects. In the civilian sector, such control of health care is essentially nonexistent. This is true even for commercial airlines that may attempt to monitor the health status of their pilots. Consequently, both the physician providing treatment and the pilot taking medication must be educated about the potential dangers of adverse side effects in flight.

Nontherapeutic Drugs. Two commonly used nontherapeutic drugs are cigarettes and alcohol. Although the incidence of alcohol-related aircraft accidents has fallen in response to an extensive educational effort on the part of the FAA, alcohol continues to be associated with approximately 11% of general aviation accidents. Alcohol and altitude are synergistic, both in their effects upon the central nervous system and with respect to slowing metabolic clearance rates. Ground-based simulation and actual in-flight performance have demonstrated that blood alcohol levels as low as 0.04% (40 mg/dL) adversely compromise flight performance. Habitual cigarette smokers commonly have blood carbon monoxide levels in excess of 5%. This represents a reduction in the blood oxygen level equal to that of a nonsmoker at an altitude of 2200 m. Consequently, aviators who smoke are placing their bodies physiologically at a higher altitude than indicated, and this compromises altitude tolerance.

Work-Rest Cycles. Numerous factors in the aerospace environment enhance the onset of fatigue. One of the more significant of these factors is the erratic schedule many aviators maintain while flying.
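The blood alcohol figure cited above, 0.04% (40 mg/dL), reflects a simple unit conversion: the percent notation means grams of ethanol per 100 mL (1 dL) of blood. A minimal sketch of that conversion (the function name is illustrative, not from any standard library):

```python
import math

def bac_percent_to_mg_per_dl(bac_percent: float) -> float:
    """Convert blood alcohol concentration from percent (g/dL) to mg/dL.

    A BAC of 0.04% means 0.04 g of ethanol per 100 mL (1 dL) of blood,
    so converting grams to milligrams multiplies by 1000.
    """
    return bac_percent * 1000.0

# The 0.04% threshold discussed in the text corresponds to 40 mg/dL.
print(math.isclose(bac_percent_to_mg_per_dl(0.04), 40.0))
```

The same factor of 1000 applies at any concentration, e.g., 0.08% is 80 mg/dL.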
Weather remains the greatest cause for flight schedule disruption in private, business, or commercial aviation. Although larger, more expensive aircraft are now equipped with electronic measures to reduce the impact of weather on flight schedules, problems remain. There are regulatory controls, work rules, and common sense methods in place to reduce inadvertent or intentional fatigue factors.
TABLE 50-2. COMPARATIVE ACCIDENT DATA FOR AIR, ROAD, AND RAIL TRAVEL
Passenger Fatalities per 100 Million Passenger-Miles

Mode             1980    1985    1990    1995    2000    2003
Air carrier      0.03   14.49    0.79    2.97    1.22    0.31
Motor vehicle    3.3     2.5     2.1     1.7     1.6     1.5
Bus              0.8     1.3     0.6     0.5     0.3     0.6
Railroad         0.04    0.03    4.0     0.0     5.0     2.0

Source: National Transportation Statistics, Bureau of Transportation Statistics, Department of Transportation, 2004.
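A rate expressed per 100 million passenger-miles, as in Table 50-2, is a simple ratio of fatalities to exposure. As a hedged illustration (the numbers below are hypothetical and not drawn from the table):

```python
def fatality_rate_per_100m_pax_miles(fatalities: float, passenger_miles: float) -> float:
    """Passenger fatalities per 100 million passenger-miles of travel.

    Rate = fatalities / (passenger-miles / 100,000,000).
    """
    return fatalities / (passenger_miles / 1e8)

# Hypothetical example: 500 fatalities over 250 billion passenger-miles
# yields a rate of 0.2 per 100 million passenger-miles.
rate = fatality_rate_per_100m_pax_miles(500, 250e9)
print(rate)
```

Normalizing by passenger-miles rather than trips is what allows the table to compare modes with very different trip lengths on one scale.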
Although a pilot may fly only the prescribed number of hours over a particular time period, there is no assurance that there will be either the opportunity or ability to obtain adequate rest in the interval. The excitement of a new place, insomnia in a strange bed, circadian rhythm asynchrony, and work-related anxiety may contribute to restless sleep and inadequate rest. Then a new workday begins, which may, in fact, be in the middle of the pilot’s biological night. Such circumstances are not infrequent and do lead to both acute and chronic fatigue for aircrew members. For the private pilot, time schedules are frequently self-imposed, which initially may have been realistic but become severely disrupted with the passage of a storm front. Frequently, the individual attempts to reach the next destination, ignoring the length of time without rest and the manifestations of fatigue. Fatigue is rarely cited as the primary cause of an aircraft accident; however, it often appears as a contributing factor. Aging. For a number of years, the FAA has had in place the Age 60 Rule. This rule directs that air transport pilots flying for commercial airlines may not serve as pilots beyond age 60 years. This is not a medical regulation but one promulgated through operations. There is no such age limitation for other categories of flying. All others, regardless of age, may continue aviation activities as pilots as long as a current medical certificate is maintained and other evaluation requirements of the license are met. The Age 60 Rule had its origin in 1959 before sophisticated medical diagnostic techniques were available, and it predated the advanced simulators, which are now able to measure subtle performance decrements. It was recognized that the risk for sudden incapacitation in flight increased with age, particularly cerebral vascular accidents and heart attacks. 
The wisdom at the time held that such a rule was necessary to reduce the potential for such events by controlling the population at risk. Although the rule is currently being sustained in the courts, considerable epidemiological evidence is being put forward in an attempt to overturn what some have described as age discrimination. Southwest Airlines, JetBlue Airways, and the Professional Pilots Federation all pointed out to the court in 2005 that older pilots are still capable and that major overseas carriers allow pilots to fly commercial aircraft beyond age 60.
Passengers

Commercial airlines have both an obligation and a commitment to provide safe, reliable, and comfortable service to their passengers. In general this is the experience of millions of passengers flying each year. Table 50-2 provides comparative accident data for air, road, and rail travel. Terrorists took control of four large commercial passenger planes in a coordinated attack on September 11, 2001. Two aircraft were deliberately crashed into the two towers of the World Trade Center in New York City; one aircraft was crashed into the Pentagon in Washington, D.C.; and the final aircraft was crashed in a field in rural Pennsylvania. Nevertheless, travel by domestic airlines remains one of the safest forms of transportation.

Safety. Many of the safety features in modern commercial aircraft go unrecognized by the passengers. The number of emergency exits is specified to ensure rapid evacuation in case of an emergency. Both
airline seats and seat belts are designed to sustain considerable impact force in order to protect and restrain the passenger. Other than the preflight demonstration, few passengers have seen the emergency oxygen masks, which are available at every seat location in aircraft flying at substantial altitudes. Emergency lighting has been designed to provide illumination in case of power failure, and floor level track lighting leads to the emergency evacuation routes. The most important safety feature is not equipment but the cabin attendant. Although most passengers look to these individuals to make the flight more comfortable by providing service and assistance, the cabin attendant’s primary purpose is to provide safety instructions and to help passengers in case of emergencies. Since the events of September 11, 2001, passengers have not only been exposed to the physical stressors of flight, but to social and emotional predeparture stress as well. The “hassle factor” of flying has become everybody’s burden to bear—even those in first class. The cabin crew has witnessed a significant increase in the tension, anger, and acting-out of frustrated passengers and has given the name “air rage” to this behavior. According to the largest U.S. flight attendants’ union, there are 4000 reports of air rage each year. In part due to the stress related to commercial flight, it may not be the best mode of transportation for everyone, although it is recognized as the safest mode of travel. Certain pulmonary, cardiovascular, and neuropsychiatric conditions may best be left to surface travel. Medical problems are rare and in-flight deaths rarer, but untoward events do occur. A major airline reported that 1.5 medical diversions are expected for each one billion passengers flown. 
In the larger commercial airliners there is a requirement that the aircraft have both a major medical kit and an automatic defibrillator on board (Table 50-3).8,9 In surveying its passengers, an airline was able to identify a physician among passengers on larger aircraft 85% of the time. In recent years a condition known as "economy class syndrome" has entered the aviation lexicon. This refers to the development of deep vein thrombosis in passengers who remain seated in the tight confines of the cabin for long periods of time. As a preventive measure, the airlines are providing information about the syndrome and recommending that passengers take several preventive steps to reduce the risk. Passengers are encouraged to remove constrictive stockings, exercise the feet and legs while seated, move about the cabin as conditions permit, and maintain hydration.

TABLE 50-3. EMERGENCY MEDICAL KIT

1. Without Defibrillator/Monitor or Monitor
Medication: epinephrine 1:1000; antihistamine; dextrose 50% injection, 50 mL; nitroglycerin tablet or spray; major analgesic (injection); moderate analgesic p.o.; sedative anticonvulsant (injection); antiemetic injection; bronchial dilator inhaler; atropine injection; adrenocortical steroid injection; diuretic injection; oxytocin injection; sodium chloride 0.9%; ASA p.o.
Equipment: stethoscope; sphygmomanometer (electronic preferred); oropharyngeal airways (3 sizes); syringes (appropriate range of sizes); needles (appropriate range of sizes); IV catheters (appropriate range of sizes); antiseptic wipes; disposable gloves; needle disposal box; urinary catheter; IV administration set; venous tourniquet; sponge gauze (4 × 4); adhesive tape; surgical mask; flashlight and batteries; blood glucose strips; emergency tracheal catheter (or large-gauge IV cannula); cord clamp; BLS cards; bag-valve mask; a list of contents
2. With Defibrillator/Monitor or Monitor Alone
Same as list 1, adding lidocaine and epinephrine 1:10,000; ACLS cards

Circadian Asynchronization ("Jet Lag"). Transmeridian flights commonly are disruptive to the passenger's wake-sleep cycle. There is considerable individual variability in response to disruption of the normal body rhythm. Time shifts of 3–4 hours often will alter the body's homeostasis. The recovery time is dependent not only on the number of time zones crossed but also on the direction of flight. Body cycle disruptions occurring after crossing six or more time zones appear to be relatively persistent when one is flying east, lasting upward of 11 days; symptoms from flying west persist for no more than 1 or 2 days. Measures recommended to reduce the impact of this circadian asynchronization include adjusting daily activities several days before the flight, changing meals to the new time, eating light meals, avoiding alcohol, and using hypnotics during and following the flight, as well as allowing specific rest periods on arrival at the destination. More recent work suggests bright light and melatonin may help in resetting the "body clock."
Patients

There are few absolute contraindications to transporting patients by air. Patients who suffer from dysbarism, acute myocardial infarction, pneumothorax, or air embolism can be moved with relative safety, provided appropriate precautions are taken and preparations made. Assuming that maximum effort has been made to stabilize the patient, the question should be asked, "Are the benefits of air transportation real, and do they justify the clinical risks and financial costs?" The DoD has the greatest experience with transporting seriously ill and injured patients. The military aeromedical evacuation system employing large transport aircraft represents the nation's main resource for fixed-wing medical transport. Commercial air ambulance services are available in all large communities in the United States. Most visible is the medical center helicopter used to transfer critically ill and injured patients and neonates to tertiary medical facilities.

Medical conditions requiring particular insight into the physiology and environment of flight are air embolism and pressure change–induced decompression sickness, or dysbarism. In the transfer of such patients, it is imperative that pressure changes routinely experienced in flight be avoided. Some aircraft, such as the Hercules C-130, can be overpressurized to maintain the cabin below sea level pressure provided flight is at a relatively low altitude. Airline companies are frequently called upon to make special provisions for the transfer of ill or injured patients in the normal cabin environment of an airliner. Provided such a transfer does not represent a hazard to other passengers, stretchers are available that extend over three airline seats. The patient must be accompanied by at least one attendant. The expense is significant because of the block of seats required by the stretcher apparatus.
Prevention is the hallmark of aeromedical support to personnel, passengers, and patients: prevention of disease and risk behaviors that might compromise the longitudinal health of aircrew personnel; prevention of injury or death to passengers through safety design of aircraft and safe airline operations; and prevention of further complications to the air-transported patient through planning, training, and equipping aeromedical transportation systems.

COMMUNITY AND INTERNATIONAL HEALTH
Aerospace flight operations have the potential for disrupting the environment and serving as a mechanism for the introduction of disease. Within the United States, regulations have helped reduce the impact
of flight operations on the environment. The potential for disease transmission has been reduced with the implementation of international sanitary regulations and other control mechanisms.
Disease Transmission

The spread of epidemics by movement of populations has been well-documented throughout history. In days past, an infected individual traveling by land or sea usually became symptomatic, and thus the disease was apparent before the person reached his or her destination. With today's high-speed jet traffic, it is not only possible but likely that an individual infected with a communicable disease could be asymptomatic yet incubating the disease at the time of arrival at the destination. Today it is possible to fly to nearly any destination on the globe within 24 hours. Thibeault10 implicates the aircraft in the spread of cholera, penicillin-resistant gonorrhea, influenza, rubella, and Lassa fever. Shilts, in And the Band Played On, describes how a flight attendant, with his ability to move rapidly from city to city, may have served as a vector of the human immunodeficiency virus.11 While passengers are crowded into a small cubic air volume, the aircraft is designed with an air-conditioning and ventilation system that maintains a low bacteria count. Even with the use of maximum-efficiency HEPA filters, infections have occurred among both crewmembers and passengers. Such infections have been documented for tuberculosis, influenza, and most recently severe acute respiratory syndrome (SARS). As the SARS epidemic spread and became global in the spring of 2003, commercial aircraft became identified as a major source of cross-border spread. Five commercial international flights were associated with transmission of SARS from symptomatic patients to passengers and crew. The notification of potentially exposed passengers and studies of the risk of transmission were complicated by difficulties in tracing contacts.12 A highly effective spread of SARS occurred onboard Air China flight 112 from Hong Kong to Beijing on March 15, 2003.
Recognizing the potential importance of the aircraft as a mechanism to spread disease and vectors, the first sanitary convention for aerial navigation convened in 1933. The convention's focus was curtailment of the spread of yellow fever, including limiting the distribution of the mosquito vector Aedes aegypti. This convention eventually became the World Health Organization (WHO) Committee on Hygiene and Sanitation in Aviation. International airlines are required to comply with the International Health Regulations published by WHO, which primarily address the following:

1. Promulgation of the application of epidemiological principles
2. Enhancement of sanitation at international airports
3. Reduction or elimination of factors contributing to the spread of disease
4. Elimination of disease vector transportation
5. Enhancement of epidemiological techniques to halt the introduction or establishment of a foreign disease
Vector Control

Disinsection (insect-killing) procedures vary from airline to airline. The principal objective of these procedures is to kill mosquitoes and other insect vectors of disease. At one time, it was common when one was flying to or from tropical areas to have cabin attendants pass through the aircraft with activated aerosol cans spraying insecticide. Another procedure, which was less obvious, was to disseminate an insecticide vapor from several fixed stations in the aircraft. Current regulations permit residual treatment of the aircraft with permethrin. A common practice was the "blocks-away" disinsection technique, in which insecticide would be introduced into the passenger cabin immediately after the aircraft was closed and was taxiing for takeoff. An alternative method was to use aerosol insecticide prior to arrival at the destination airport. In any case, to be effective, insecticides must be used before unloading passengers, cargo, and luggage. It is becoming more common for live animal cargo to be transported by air. The issues of disease and vectors must be addressed with such cargo.
Large pieces of expensive equipment are also being transported by air. When the equipment has been used in the field, it is extremely difficult to ensure that all fomite contamination has been removed prior to air transportation to another country. Washing and steam cleaning of the exterior of such equipment have become regular practice. The use of some form of pesticide is commonly required before the equipment is allowed to be unloaded after it has crossed international borders.
Airline Community Health

A commercial airliner, whether traveling domestic or international routes, provides a partially closed, self-contained environment. Air is brought on board, filtered, condensed, warmed, and, if necessary, neutralized for irritants such as ozone and oxides of nitrogen. Potable water must be available, as well as beverages safe for human consumption. The catering service must provide food items, which frequently include both preprepared meals and other items requiring some degree of preparation. Provisions must be made for the handling of solid and hazardous waste. Toilet facilities must be provided that require retention tanks to hold sewage until servicing can be provided on the ground. Arrangements for the collection of trash and sewage and its proper disposal on arrival must be made. These details may prove relatively simple in the domestic environment but may become extremely complex with international flights. In some international situations, all food products must be incinerated at the destination airport to ensure no introduction of a plant or animal disease.

THE ENVIRONMENT

Noise

One of the more noticeable features of aerospace operations is noise. The Department of Transportation estimates that approximately 3% of the U.S. population, 9 million persons, have been exposed to a potentially hazardous level of aircraft noise. The Environmental Protection Agency (EPA) is authorized under the Noise Pollution and Abatement Act (1970) and the Noise Control Act (1972) to institute noise control abatement procedures around airports. The FAA has also been assigned responsibilities to reduce environmental noise. Regulatory requirements set goals and timelines for airport operators to submit and comply with noise compatibility programs. Since the implementation of these laws, efforts have been undertaken by airframe manufacturers to control aircraft noise at its source. Numerous design changes have been made in engines, primarily to reduce noise. Airports may require specific landing and departure patterns, including engine power adjustments, to comply with abatement controls. Some airports have found it necessary to curtail nighttime operations to satisfy objections by the community surrounding the airport. All levels of government have taken an active role in ensuring the compatibility of land use around airfields, both with regard to safety and noise control.

The "Greenhouse Effect" and Ozone Depletion

Aerospace operations contribute approximately 1% to the nation's total emissions of hydrocarbons, oxides of nitrogen, and carbon monoxide. In certain areas such as Atlanta and Chicago, where aircraft operations are intense, emission levels have increased by approximately 3%. Under the Clean Air Act, airlines have markedly reduced the practice of in-flight fuel dumping. Economics have also dictated a change in this policy. The principal environmental problem of the fuel is its contribution to photochemical pollution. The formation of the condensation trail, or contrail, results from the emissions of the aircraft's engines condensing and freezing in the cold ambient temperature at altitude. It has been suggested that heavy jet traffic may cause weather changes in areas surrounding major airport hubs. Ozone depletion is receiving an appropriate international response. In the 1970s there was much concern that oxides of nitrogen would serve as catalysts for ozone depletion at the high altitudes of supersonic transport (SST) flights. It was estimated that an SST fleet of 100 aircraft would decrease the ozone layer by 10%. This concern played an important role in the decision by this country to withdraw from the SST commercial competition. With additional research and a better understanding of high-altitude atmospheric chemistry, the fears of ozone depletion from this source were shown to be exaggerated.

THE FUTURE
Early in the twenty-first century, all projections point to more people flying higher and faster. The technology of aerospace systems will continue to improve, and the degree of automation of both air and space craft will continue to increase. Both the British and French SST aircraft have been taken out of commercial service. A catastrophic accident may have contributed to the fleets' demise, but there were also real environmental and economic issues that played a role in the decision. In 2005, there are no SSTs flying; however, a feasibility study to consider development of a new SST is being jointly sponsored by the French and Japanese. If bigger is better, then we should all take delight in the introduction by Airbus of its forthcoming A380. This behemoth will surpass the Boeing 747 as the world's largest passenger airliner. This new double-decker, 555-seat (or more) transport is expected to enter service in the summer of 2008. However, there are currently few airport terminals in the world that can accommodate such an aircraft. The logistics and human-factors challenges of boarding and disembarking 500–600 passengers are enormous. All of the issues addressed earlier in the chapter may be compounded by such an airborne biomass. Large numbers of men and women will be required to maintain and operate the expanding fleet of aerospace vehicles. New exotic materials will be introduced by the aerospace industry, requiring special medical surveillance programs to ensure the safety and health of those working with these new substances. The challenges to public health and the environment will continue. With the continued expansion of international commerce via rapid air and space transport, the potential for transporting disease, vectors, and fomites will continue. Increasing air traffic in finite, three-dimensional space will result in some compromise to environmental factors.
Airports will continue to expand, challenging community aesthetics and introducing social and environmental concerns. With all of the opportunities and challenges of the future, aerospace medicine will continue to have an important niche in the ecology of health services.

REFERENCES
1. Directory of Graduate Medical Education Programs. Chicago: American Medical Association; 1995.
2. Peyton G. Fifty Years of Aerospace Medicine. Washington, DC: U.S. Government Printing Office; 1967.
3. DeHart RL. Occupational and environmental medical support to the aviation industry. In: DeHart RL, Davis JR, eds. Fundamentals of Aerospace Medicine. 3rd ed. Philadelphia: Lippincott Williams and Wilkins; 2002.
4. Berry MA. Civil aviation medicine. In: DeHart RL, Davis JR, eds. Fundamentals of Aerospace Medicine. 3rd ed. Philadelphia: Lippincott Williams and Wilkins; 2002.
5. Institute of Medicine. Review of NASA's Longitudinal Study of Astronaut Health. Washington, DC: National Academies Press; 2004.
6. NASA's Bioastronautics Critical Path Road Map: Interim Report. Washington, DC: National Academies Press; 2005.
7. Kay GG. Guidelines for the psychological evaluation of air crew personnel. In: The aviation industry. Occup Med: State Art Rev. Philadelphia: Hanley and Belfus; 2002.
8. Rayman RB. Inflight medical kits. Aviat Space Environ Med. 1998;69:1007–9.
9. Air Transport Medicine Committee, Aerospace Medical Association. Emergency medical kit for commercial airlines: an update. Aviat Space Environ Med. 2002;73:612–13.
10. Thibeault C. The impact of the aerospace industry on the environment and public health. In: DeHart RL, Davis JR, eds. Fundamentals of Aerospace Medicine. 3rd ed. Philadelphia: Lippincott Williams and Wilkins; 2002.
11. Shilts R. And the Band Played On. New York: St. Martin's Press; 1987.
12. Bell DM. Public health interventions and SARS spread, 2003. Emerg Infect Dis. 2004;10:210–6.
General References

Cummin ARC, Nicholson AN, eds. Aviation Medicine and the Airline Passenger. London: Arnold; 2002.
DeHart RL. Health issues of air travel. Annu Rev Public Health. 2003;24:133–51.
DeHart RL, Davis JR, eds. Fundamentals of Aerospace Medicine. 3rd ed. Philadelphia: Lippincott Williams & Wilkins; 2002.
Ernsting J, Nicholson AN, Rainford DJ, eds. Aviation Medicine. 3rd ed. Oxford: Butterworth-Heinemann; 1999.
Green KB, ed. The aviation industry. Occup Med: State Art Rev. 2002;17:2.
Hawkins FH. Human Factors in Flight. 2nd ed. Aldershot: Ashgate Publishing Co.; 1993.
Institute of Medicine. Safe Passage: Astronaut Care for Exploration Missions. Washington, DC: National Academies Press; 2001.
Rayman RB, Hastings JD, Kruyer WB, Levy RA. Clinical Aviation Medicine. 3rd ed. New York: Castle Connolly Medical; 2000.
Suggested Websites Aerospace Medical Assoc: www.asma.org Aircraft Owners and Pilots Association—Air Safety Foundation: www. aopa.org/asf/ FAA: Civil Aerospace Medical Institute (CAMI): www.cami.jccbi.gov FAA: Office of Aviation Medicine: www.faa.gov/avr/aam Flying Physicians Assoc: www.fpadrs.org International Academy of Aviation and Space Medicine: www.iaasm.org NASA: www.nasa.gov Naval Operational Medicine Institute: www.nomi.med.navy.mil Society for Human Performance in Extreme Environments: www.hpee.org U.S. Army School of Aviation Medicine: www.USASAM.amedd.army.mil USAF School of Aerospace Medicine: www.sam.brooks.af.mil
Housing and Health
51
John M. Last
All humans need protection against the elements, somewhere to store food and prepare meals, and a secure place to raise offspring. The effects of housing conditions on health have been known since antiquity. Deplorable living and sanitary conditions in urban slums became a political issue in the nineteenth century, when accounts by journalists, novelists, and social reformers aroused public opinion. Osler’s Principles and Practice of Medicine (1892) and Rosenau’s Preventive Medicine and Hygiene (1913) noted the association between overcrowding and common serious diseases of the time such as tuberculosis and rheumatic fever. Housing remains a sensitive political issue in many communities because it is often unsatisfactory, insufficient, or inadequately served by essential infrastructure, among other reasons.

OVERVIEW OF HOUSING CONDITIONS IN THE WORLD
Housing conditions have greatly improved in the affluent industrial nations throughout the second half of the twentieth century, but more than two-thirds of the households in the world are in developing countries, the great majority of them in rural areas. The most prevalent indoor environment in the world is the same now as throughout history—huts in rural communities.1 This is changing, as urbanization transforms the distribution of populations in the developing world, where the proportion living in urban areas by the beginning of the new millennium had reached almost 50%.2 The urban population will comprise 65% or more by 2025 (UN World population and urbanization trends, http://www.un.org/popin/wdtrends.htm). Many cities are already very large (Table 51-1). Many new urban dwellers in developing countries have terrible living conditions, crowded into periurban slums. They often lack sanitation, clean water supplies, access to health care, and other basic services such as elementary education. The proportion of people in such circumstances ranges from 20% to more than 80% in many cities throughout Africa, Latin America, and southern, southeastern, and southwestern Asia. The plight of children is especially deplorable; infant mortality rates exceed 100 per 1000 live births in many places.3 Children may be abandoned by parents who cannot provide for them: they become street children who must fend for themselves from ages as young as 5 or 6 years. Many turn to crime and child prostitution to survive. Shantytowns and periurban slums endanger the health and security of many millions in Latin America, Africa, and much of Asia. They are ideal breeding places for disease and social unrest. Accurate numbers are impossible to obtain because the missing services include enumeration by census-takers and because situations change so rapidly, but in Mexico City, Lima, Santiago, Rio de Janeiro, São Paulo, and Bogota, well over half the total population live in the periurban slums.
In the late 1990s there were more than 40 million
periurban slum-dwellers in these six cities alone. Worldwide, an estimated 100 million people are entirely homeless, living on the streets without possessions, often from infancy onward.4 Although this is a problem mainly in developing countries, homeless people have increased in numbers in the most affluent industrial nations in recent decades, often forced out of their homes by hard economic times. Public health departments in large cities in North America and Europe have been obliged to spend increasing proportions of their budgets on emergency shelters for growing numbers of homeless destitute families. The weak and the vulnerable suffer disproportionately when social safety nets are inadequate, as is often the situation in the United States. One consequence is homelessness. About 842,000–850,000 people are homeless in any given week in the United States, and about 3.5 million are homeless for some period every year. Two-thirds are adults, predominantly single men; 11% are families with children, and 23% are children under 18 years. Most of the children are under 5. About 39% of homeless people in America suffer from mental disorders, and 66% have alcohol and/or substance abuse problems (http://www.nrchmi.samhsa.gov/facts/facts_question_2.asp). Many millions, 17 million refugees (http://www.unhcr.org) and more than 25 million internally displaced people (http://www.idpproject.org/statistics.htm), live in refugee communities in Africa or the Middle and Far East, where housing conditions are usually worse than in periurban slums, or are homeless in cities in upper- and middle-income countries. Refugee communities may have health services, but these are seldom adequate; supplies and continuity of services are often precarious; the safety and security of the inhabitants are often threatened by hostilities; and their long-term prospects for a better life are poor.
The Israeli Defense Forces’ policy of demolishing homes in refugee communities where suicide bombers had lived has been a deplorable example of actions by a civilized nation aggravating an already terrible social predicament. The genocidal policies of the Sudanese government toward the estimated 1.5 million displaced people in the Darfur region of Sudan have been even more deplorable. Industrially developed nations are experiencing other challenging new health problems related to housing conditions. Rising land values and the need to provide cheap housing for expanding populations have led to proliferation of high-rise, high-density apartment housing. Publicly supported housing projects economize by restricting living space and providing few amenities. This kind of dwelling creates new sets of problems: emotional tensions attributable to living too close to the neighbors, inadequate play areas for children, poor services, and defective elevators and communal washing machines. When adverse climates make heating or air conditioning desirable, and when buildings must be sealed against inclement weather, efficient exhaust ventilation is important as a way to reduce the risk of sick building syndrome. Only a small minority of people, predominantly the educated professional classes (such as many readers of this book), enjoy comfortable, aesthetically pleasing, healthy living conditions.
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Environmental Health
TABLE 51-1. WORLD’S LARGEST CITIES (CONURBATIONS), 2000–2004

City (conurbation) | Population
1. Tokyo, Japan (incl. Yokohama, Kawasaki) | 34,000,000
2. Mexico City, Mexico (incl. Nezahualcóyotl, Ecatepec, Naucalpan) | 22,350,000
3. Seoul, South Korea (incl. Bucheon, Goyang, Incheon, Seongnam, Suweon) | 22,050,000
4. New York, USA (incl. Newark, Paterson) | 21,800,000
5. Sao Paulo, Brazil (incl. Guarulhos) | 20,000,000
6. Mumbai (Bombay), India (incl. Kalyan, Thane, Ulhasnagar) | 19,400,000
7. Delhi, India (incl. Faridabad, Ghaziabad) | 19,000,000
8. Los Angeles, USA (incl. Riverside, Anaheim) | 17,750,000
9. Jakarta, Indonesia (incl. Bekasi, Bogor, Depok, Tangerang) | 16,850,000
10. Osaka, Japan (incl. Kobe, Kyoto) | 16,750,000
11. Calcutta, India (incl. Haora) | 15,350,000
12. Cairo, Egypt (incl. Al-Jizah, Shubra al-Khaymah) | 15,250,000
13. Manila, Philippines (incl. Kalookan, Quezon City) | 14,550,000
14. Karachi, Pakistan | 13,800,000
15. Moscow, Russia | 13,650,000
16. Shanghai, China | 13,400,000
17. Buenos Aires, Argentina (incl. San Justo, La Plata) | 13,350,000
18. Dacca, Bangladesh | 12,750,000
19. Rio de Janeiro, Brazil (incl. Nova Iguaçu, São Gonçalo) | 12,000,000
20. London, UK | 11,950,000
21. Tehran, Iran (incl. Karaj) | 11,650,000
22. Istanbul, Turkey | 11,250,000
23. Lagos, Nigeria | 10,800,000
24. Beijing, China | 10,700,000
25. Paris, France | 9,900,000
26. Chicago, USA | 9,700,000
27. Lima, Peru | 8,350,000
28. Bogota, Colombia | 8,150,000
29. Washington, USA (incl. Baltimore) | 8,050,000
30. Nagoya, Japan | 8,000,000

Source: UN Statistical Office, 2005.
INDOOR ENVIRONMENT
Indoor climate and indoor air pollution, biological exposure factors, and various physical hazards encountered inside the home are encompassed by the term indoor environment. The indoor climate may be the same as that out of doors, or it may be modified by heating, cooling, or adjustment of humidity levels, and often in sealed modern buildings, by all of these.
Physical Hazards

Physical hazards in the indoor environment include toxic gases, respirable suspended particulates, asbestos fibers, ionizing radiation, notably radon and “daughters,” nonionizing radiation, and tobacco smoke.
Indoor air may be contaminated with dusts, fumes, pollen, and microorganisms. The principal indoor air pollutants in industrially developed nations are summarized in Table 51-2. Many of these pollutants are harmful to health. Some occur mainly in sealed office buildings, and others, such as tobacco smoke, in private dwellings. In developing countries, indoor air pollution with products of biomass fuel combustion is a pervasive problem (Table 51-3). The fumes from cooking fires include high concentrations of respiratory irritants that cause chronic obstructive pulmonary disease (COPD) and that sometimes contain carcinogens. Premature death from COPD is common among women who from their childhood have spent many hours every day close to primitive cooking stoves, inhaling large quantities of toxic fumes.5 The toxic gases specified in Table 51-2 come from many sources. Formaldehyde is emitted as an off-gas from particle board, carpet adhesives, and urea-formaldehyde foam insulation; it is a respiratory and conjunctival irritant and sometimes causes asthma. It is not emitted in sufficient concentrations to constitute a significant cancer risk. Although rats exposed to formaldehyde do demonstrate increased incidence of nasopharyngeal cancer, there is only weak evidence of elevated cancer incidence or mortality rates even among persons occupationally exposed to far higher concentrations than occur in domestic settings. Nonetheless, urea-formaldehyde foam insulation has been banned in many jurisdictions on the basis of the evidence for carcinogenicity in rats. Gases and vapors from volatile solvents, such as cleaning fluids, have diverse origins. 
There is a wide range of other pollutants, such as many organic substances, oxides of nitrogen, sulfur, carbon, ozone, benzene, and terpenes.6 All such toxic substances can be troublesome, especially in sealed air-conditioned buildings and most of all when the air is recirculated to conserve energy used to heat or cool the building. In combination with fluorescent lighting, these gases and suspended particulate matter can produce an irritating photochemical smog that may cause chronic conjunctivitis and nasal congestion. Imperfect ventilation can become a serious hazard if it leads to accumulation or recirculation of highly toxic gas such as carbon monoxide. This is especially likely when coal or coke is used as cooking or heating fuel in cold weather, and vents to the outside are closed to conserve heat. Asbestos was used for many years as a fire retardant and insulating substance in both domestic and commercial buildings. Its dangers to health have led to restriction or banning of its use and to expensive renovations aimed at removing it (see Chap. 23). Fibrous glass insulation may present hazards similar to those of asbestos but less severe. Ionizing radiation, in particular radon and “daughters,” can be a health hazard, especially if houses are sealed and air recirculated, in which case there is greater opportunity for higher concentrations to accumulate. Sources of radon include trace amounts of radioactive material incorporated in cement used to construct basements. Radon can also be emitted from soil or rocks in the environment where the houses are built. 
Extremely low-frequency electromagnetic radiation (ELF) has attracted much attention since the observation of cancer incidence at higher rates than expected among children living close to high-voltage power lines.7 No convincing relationship has been demonstrated between childhood cancer and exposure to ELF from domestic appliances, with the possible exception of electric blankets.8 Microwave ovens and television screens are safe. The nature of the relationship, if any, between ELF and cancer remains controversial, however. Tobacco smoke is often the greatest health hazard attributable to physical factors in the indoor environment. Infants and children are significantly more prone to respiratory infections, and nonsmoking spouses are more prone to chronic respiratory illnesses and to tobacco-related respiratory cancer when living in the same house as a habitual cigarette smoker. Cigarette smoking is a hazard in another way as well: about 20–25% of deaths in domestic fires are a result of smoking.
51
Housing and Health
921
TABLE 51-2. SOURCES AND POSSIBLE CONCENTRATIONS OF INDOOR POLLUTANTS

Pollutant | Sources* | Range of Concentrations
Respirable particles | Tobacco smoke, stoves, aerosol sprays | 0.05–0.7 mg/m3
Carbon monoxide | Combustion equipment, stoves, gas heaters | 1–115 mg/m3
Nitrogen dioxide | Gas cookers, cigarettes | 0.05–1.0 mg/m3
Sulfur dioxide | Coal combustion | 0.02–1.0 mg/m3
Carbon dioxide | Combustion, respiration | 600–9,000 mg/m3
Formaldehyde | Particle board, carpet adhesives, insulation | 0.06–2.0 mg/m3
Other organic vapors (benzene, toluene, etc.) | Solvents, adhesives, resin products, aerosol sprays | 0.01–0.1 mg/m3
Ozone | Electric arcing, UV light sources | 0.02–0.4 mg/m3
Radon and “daughters” | Building materials | 10–3,000 Bq/m3
Asbestos | Insulation, fireproofing | 1+ fiber/cm3
Mineral fibers | Appliances | 100–10,000/m3

*Tobacco smoke, benzene, radon and daughters, asbestos, and possibly formaldehyde are carcinogens; most others on this list are respiratory or conjunctival irritants. Carbon dioxide is an asphyxiant; carbon monoxide is a lethal poison.
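Concentration ranges like those in Table 51-2 are given in mg/m3, while exposure guidelines for gases are often quoted in ppm. The standard conversion at 25 °C and 1 atm divides by the pollutant’s molecular weight and multiplies by the 24.45 L/mol molar volume of an ideal gas; the short sketch below is illustrative (the CO molecular weight is a standard constant, not a figure from the text).

```python
def mg_per_m3_to_ppm(conc_mg_m3: float, mol_weight: float) -> float:
    """Convert a gas concentration from mg/m3 to ppm,
    assuming 25 degrees C and 1 atm (molar volume 24.45 L/mol)."""
    return conc_mg_m3 * 24.45 / mol_weight

# Carbon monoxide (molecular weight 28.01 g/mol) at the upper end of the
# indoor range listed in Table 51-2, 115 mg/m3:
print(round(mg_per_m3_to_ppm(115, 28.01)))  # about 100 ppm
```

Note that the conversion applies only to gases and vapors; particulate and fiber concentrations (mg/m3, fibers/cm3) have no ppm equivalent.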
Biological Hazards

Biological hazards in the indoor environment include many varieties of pathogenic microorganisms. Mycobacterium tuberculosis survives for long periods in dark and dusty corners. Legionella lives in dilapidated water-cooled air-conditioning systems, stagnant water pipes, and shower stalls, especially in warm moist environments. Mites that live on mattresses, cushions, and infrequently swept floors cause asthma, as may many organic dusts and pollens. Many other infections, especially those spread by the fecal-oral route, occur most often when homes are dirty, open to flies, or infested with cockroaches or rats. Food storage and cooking facilities should be kept scrupulously clean at all times because many varieties of disease-carrying vermin are attracted by filth and because food scraps can be an excellent culture medium for many pathogens that cause food poisoning or other diseases.
Socioeconomic Conditions

Socioeconomic conditions are related to the quality of housing in many ways, some already mentioned. Crowding always is greater among the poor than among the rich; this increases risks of transmitting communicable diseases and often imposes additional emotional stress that probably contributes to domestic violence. Street accidents involving children are more common in poor than in wealthy neighborhoods because the children often have no other place than the street to play. Poor people generally live in poorly equipped and maintained homes, adding to the risk of domestic accidents ranging from falls down poorly lit stairwells to electrocution. Lead poisoning
is a particular hazard for children in dilapidated houses where they are likely to ingest dried-out flakes of lead-based paint. Emissions from factory smelter stacks contribute to environmental lead and other toxic metal contamination and are more often present in poor than in well-to-do neighborhoods because the former are more often located in or close to heavily industrialized areas.
HOUSING CONDITIONS AND MENTAL HEALTH
Many descriptive studies by social epidemiologists and psychiatrists have demonstrated a consistent association between mental disorders and urban living conditions.9 There is also a close relationship between mental health and social class.10 Those who cannot cope with the competitive pressures of industrial and commercial civilization because they suffer from such disorders as schizophrenia, alcoholism, or mental retardation and have inadequate family and social support systems drift downward to the lowest depths of the slums or become homeless street people. There are estimated to be between 500,000 and 2 million homeless mentally ill persons in the United States.11 Schizophrenia and alcoholism have maximum prevalence in slums and “skid row” districts, and depression, manifested by attempted and accomplished suicide, is clustered in neighborhoods where a high proportion of the people live in single-room rented apartments.12 Adolescent delinquency, vandalism, and underachievement at school have high prevalence in dormitory suburbs occupied mainly by low-paid workers, where recreational facilities for young people are often
TABLE 51-3. INDOOR AIR POLLUTION FROM BIOMASS FUEL COMBUSTION IN DEVELOPING COUNTRIES

Location (fuel) | SPM (mg/m3) | BaP (mg/m3) | CO (mg/m3) | NO2 (µg/m3) | Other
Nigeria, Lagos | — | — | 1,076 | 15,168 | —
Papua New Guinea | 0.84 | — | 35.5 | — | SO2, 38 ppm; benzene, 66 ppm; HCHO, 1.2 ppm
Kenya Highlands | 4.0 | 145 | — | — | BaH, 224 µg/m3; phenols, 1.0 µg/m3; acetic acid, 4.6 µg/m3
India, Ahmedabad (cattle dung) | 16.0 | 8,250 | — | 144 | SO2, 242 µg/m3
India, Ahmedabad (dung and wood) | 21.1 | 9,320 | — | 326 | SO2, 269 µg/m3
India, Gujarat | 2.7–10 | 2,220–6,070 | — | — | —
India, Gujarat (monsoon) | 56.6 | 19,300 | — | — | —

BaP = benz-a-pyrene; SPM = suspended particulate matter.
Data from de Koning HW, Smith KR, Last JM. Biomass fuel combustion and health. Bull WHO. 1985;63:11–26; Air Quality Guidelines. Regional Reports Series 23. Copenhagen: World Health Organization; 1987.
922
Environmental Health
inadequate and schools are often of inferior quality. Bad housing does not cause these problems; they are usually symptoms of a more complex social pathology. A different set of factors contributes to the syndrome called “suburban neurosis,” which occurs among women who remain housebound for much of the time while their husbands are at work and their children are at school;13 this condition has been alleviated by television, which by bringing faces and voices into the house relieves loneliness. It has also been alleviated by changing work patterns, with increasing proportions of married women joining the workforce.

HOUSING STANDARDS
Public health workers are directly concerned about the quality of housing because of the many ways it can affect health. Local health officials have special powers to intervene when health is threatened by inadequate housing conditions. A handbook frequently revised by the Centers for Disease Control and Prevention and the American Public Health Association, Housing and Health: APHA-CDC Recommended Minimum Housing Standards,14 sets out specific details on basic equipment and facilities, fire safety, lighting, ventilation, thermal requirements, sanitation, space requirements (occupancy standards), and the special requirements for rooming houses. This valuable reference spells out general guidelines that can be used by local authorities as the basis for regulations, but there are no universal legally enforceable standards until local jurisdictions introduce them. Health Principles of Housing,15 a WHO manual, gives guidance on a wide range of behavioral factors that can influence health in relation to housing conditions, for example, by providing guidelines on ways to reduce psychological and social stresses by ensuring privacy and comfort, and on the housing needs of populations at special risk such as pregnant women, the handicapped, and the elderly infirm. Both booklets should be part of the library of every local health officer.
STATISTICAL INDICATORS OF HOUSING CONDITIONS
Health planning requires every kind of information pertinent to community health, including statistics on housing conditions. Useful information is routinely collected at the census on density of occupancy (persons per bedroom), cooking and refrigerating facilities, and sanitary conditions. Tables derived from small-area analysis of census data showing housing statistics enable health planners to identify neighborhoods at high risk of diseases associated with crowding and poor sanitation. Census tables also enable health planners to identify less obvious health hazards, such as proportions of elderly persons living alone, whether in small apartments or multiple-room dwellings that were family homes before others in the family moved away or died, leaving an elderly person as sole resident. Once such neighborhoods are identified, public health nurses and other community health workers can more easily locate and visit individuals at risk, who may need but have not yet asked for help. In addition to census tables, there are other useful sources of information on neighborhoods with a high incidence of social pathology. Fire departments record false alarms and fires deliberately lit; police departments record details of vandalism and calls to settle domestic disturbances; and schools record absenteeism and truancy. All can be analyzed by area, thus pinpointing high-risk neighborhoods; this method has been used as part of a program aimed at improving the chances of getting a good start in life for children from disadvantaged homes. There is a high correlation between these indicators of social pathology in a neighborhood, such as a high-rise, high-density apartment complex for low-income families, and the incidence of emotional disturbances and similar behavioral upsets among young and teenaged children.16
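The small-area screening described above amounts to a simple per-tract computation: flag any census tract whose crowding or social-isolation indicators exceed chosen cutoffs, then direct outreach there. The sketch below is a minimal illustration of the method; the tract names, field names, and thresholds are hypothetical, not values from the text.

```python
# Illustrative small-area screen: flag census tracts at elevated risk based
# on crowding (persons per bedroom) and the share of elderly persons living
# alone. All field names and cutoffs are invented for illustration.

def flag_high_risk(tracts, max_density=1.5, max_elderly_alone=0.20):
    """Return names of tracts exceeding either screening threshold."""
    flagged = []
    for t in tracts:
        crowded = t["persons_per_bedroom"] > max_density
        isolated = t["elderly_alone_share"] > max_elderly_alone
        if crowded or isolated:
            flagged.append(t["name"])
    return flagged

tracts = [
    {"name": "Tract A", "persons_per_bedroom": 2.1, "elderly_alone_share": 0.05},
    {"name": "Tract B", "persons_per_bedroom": 0.9, "elderly_alone_share": 0.25},
    {"name": "Tract C", "persons_per_bedroom": 1.1, "elderly_alone_share": 0.10},
]
print(flag_high_risk(tracts))  # ['Tract A', 'Tract B']
```

The same pattern extends to the other area-linked indicators mentioned above (false fire alarms, vandalism reports, truancy) once they are aggregated to the same tract boundaries.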
HEALTHY COMMUNITIES AND HEALTHY CITIES
As part of the initiative for “Health for all by the year 2000” that followed resolutions passed at the World Health Assembly in 1977,17 health planners in many nations, notably in the European region of the World Health Organization (WHO), began active planning for health promotion (to be distinguished from disease prevention). Health promotion (see Chap. 1) requires action by many individuals and groups not usually identified with care of the sick or prevention of disease. The definition of health promotion, “the process of enabling people to increase control over and improve their health,” implies that people may often have to take action aimed at improving their living conditions. The Healthy Cities movement is a coordinated program involving community health workers, local elected officials in urban affairs, and a wide variety of community groups who collectively seek to upgrade living conditions. Initially, some of the participating cities were relatively healthy places to live (e.g., Toronto, Canada), while others (e.g., Liverpool, England) were not. The Healthy Cities initiative emphasizes activities that could be expected to enhance good health, such as provision of improved recreational facilities, services for children and their mothers (including basic education for the mothers as well as the children), and aggressive action to eradicate urban wasteland, industrial pollution, toxic dump sites, and other forms of urban blight.18 From modest beginnings, the Healthy Cities movement has spread all over the world and in some places has extended beyond cities to embrace rural communities.19 Since the environment in which people live, grow, work, and play so manifestly influences their health and happiness, the Healthy Cities initiative is potentially among the most valuable means at our disposal to make this environment healthful.
SPECIAL HOUSING NEEDS
Elderly and disabled people require accommodation that has been adapted to enable easier access (ramps, handrails, wide doors to permit passage of wheelchairs), to facilitate storage and preparation of food (low-placed cupboards and stoves with front-fitted switches, which are inadvisable in homes where there are small children), and with special equipment for bathing and toileting (strong handrails, wheelchair access). Special accommodation of this type is often segregated, which tends to set the occupants apart in an urban ghetto for the elderly and disabled. Integrated special housing is preferable, as examples in Denmark, Sweden, and the United Kingdom have demonstrated; in this setting, elderly, infirm, and younger disabled persons live among others who are not disabled, a situation that many of them prefer and that helps to accustom these other people to making allowances for their disabled fellow citizens.

CONCLUSION
This is a brief summary of a complex and diverse topic. The essential requirements of the domestic environment have been stressed, along with some of the obvious adverse effects of unsatisfactory housing. The home should provide more than mere shelter and a safe place to raise children. It should be the setting in which the family lives and grows together, where bonds of affection and mutual trust are formed and strengthened, where socialization into the prevailing culture and intellectual stimulation are occurring, and where privacy is available when it is wanted and needed. Doxiadis20 coined the term ekistics, meaning the science of human settlements, to encompass the many interactive factors that make living space compatible with good physical, mental, emotional, and social health and well-being. The arrangement of dwelling units, their relationship to the natural and to the human-made environment, and their interior structure and function all play a part in creating a housing environment conducive to good health. Many less easily described and unmeasurable factors,
such as the innumerable ways that people can interact, also contribute to the ambience of the living space. These intangible factors would receive more attention in a better world than this if we were really intent on applying all possible means to the end of promoting and preserving the public’s health.

REFERENCES
1. de Koning HW, Smith KR, Last JM. Biomass fuel combustion and health. Bull WHO. 1985;63:11–26.
2. Tabibzadeh I, Rossi-Espagnet A, Maxwell R. Spotlight on the Cities: Improving Urban Health in the Developing World. Geneva: World Health Organization; 1989.
3. World Resources. A Guide to the Global Environment: The Urban Environment 1996–97. (A UNEP/UNDP/World Bank/WRI Monograph.) New York: Oxford University Press; 1991.
4. UNHCR. State of the World’s Refugees, 1996. Geneva: UNHCR; 1996.
5. Last JM. Biomass fuels. In: Environmental Determinants of Health Associated with the Production, Distribution and Use of Energy. Geneva: World Health Organization; 1991.
6. Indoor air quality: organic pollutants. WHO Regional Office for Europe, Euro Reports and Studies No. 111, 1987.
7. Wertheimer N, Leeper E. Electrical wiring configurations and childhood cancer. Am J Epidemiol. 1979;109:273–84.
8. Savitz D, John EM, Kleckner RC. Magnetic field exposure from electric appliances and childhood cancer. Am J Epidemiol. 1990;131:763–73.
9. Srole L, Langner TS, Michael ST, et al. Mental Health in the Metropolis: The Mid-town Manhattan Study. New York: McGraw-Hill; 1962.
10. Dohrenwend BP, Dohrenwend BS. Social Status and Psychological Disorder: A Causal Inquiry. New York: John Wiley & Sons; 1969.
11. American Psychiatric Association. Report on the Homeless Mentally Ill. Washington, DC: The Association; 1984.
12. Hare EH. Mental illness and social conditions in Bristol. J Ment Sci. 1956;102:349–57.
13. Hare EH, Shaw GK. Mental Health on a New Housing Estate. Oxford: Oxford University Press; 1965.
14. Wood EW. Housing and Health: APHA-CDC Recommended Minimum Housing Standards. Washington, DC: APHA; 1995.
15. World Health Organization. Health Principles of Housing. Geneva: World Health Organization; 1989.
16. Offord DR, Barrette PA, Last JM. A comparison of school performance, emotional adjustment and skill development of poor and middle-class children. Can J Public Health. 1985;76:157–63.
17. Resolution 30.43. World Health Assembly. Geneva: World Health Organization; 1977.
18. Ashton J, ed. Healthy Cities. Milton Keynes, Philadelphia: Open Universities Press; 1992.
19. Lacombe R. Villes et villages en santé: l’expérience québécoise. Can J Public Health. 1989;80:3–5.
20. Doxiadis CA. Action for Human Settlements. New York: Norton; 1977.
52
Human Health in a Changing World John M. Last • Colin L. Soskolne
HUMAN HEALTH IN A CHANGING WORLD
Throughout its 4-billion-year life, Earth has undergone many changes in the distribution and abundance of life forms, including human inhabitants, and in the living and nonliving features of the ecosystems with which humans interact. Early in the twenty-first century, the United Nations Millennium Assessment Report, the collective work of over 1300 scientists worldwide, painted a disturbing picture of life-supporting ecosystems that are gravely stressed by human activities to an extent that is unsustainable even in the medium term. This conclusion has been reinforced by recent publications of the Intergovernmental Panel on Climate Change1 and the Millennium Ecosystem Assessment.2 Atmospheric composition and climate have changed many times. Sometimes air and ocean currents that determine climate and weather have been altered by tectonic plate movements. The impacts of large meteors or massive volcanic activity that block sunlight by filling the air with dust and gases such as sulfur dioxide have occasionally produced sudden climate changes leading to great extinctions.3 Variation in solar radiation, oscillation of Earth’s axis, or passing clouds of interstellar dust may induce ice ages and periods of interglacial warming.4 Minor seasonal fluctuations are associated with many intervening variables that make weather forecasting one of the most inexact of all sciences. A consensus has developed among scientists in the relevant disciplines that human activity is adversely affecting Earth’s climate;5 and there is compelling evidence that human activity is changing the biosphere in other ways besides climate.6 The changes represent a new scale of human impact on the world unlike anything in recorded history. Collectively, the changes endanger both human health and future prospects for many other living creatures. Global warming and stratospheric ozone depletion have attracted the most attention, but the changes go beyond these two processes.
The term global change covers several interconnected phenomena:7 global warming (“climate change”); stratospheric ozone attenuation; resource depletion; species extinction and reduced biodiversity; serious and widespread environmental pollution; desertification; and macro and micro ecosystem changes, including some that have led to emergence or reemergence of dangerous pathogens.8 These phenomena are mostly associated with industrial processes or result from the increased pressure of people on fragile ecosystems.9 All are interconnected and some are synergistic—some processes reinforce others. What makes these human-induced changes different from those through history is the rate of current change and its reach.10 Ecological integrity, the ability of ecosystems to withstand perturbations, is dependent on three factors: population, affluence, and technology.11–13 All three factors are interdependent and can operate synergistically, accelerating declines in systems upon which life, including human life, depends for sustenance. As declines accelerate, thresholds are exceeded as the buffering capacity of these systems is challenged, resulting in system flips or collapses. When large-scale ecosystems collapse, all life that these systems have nurtured over
centuries and millennia can then either be buffered from impacts by drawing on ecological capital from elsewhere, migrate to more hospitable locales, or succumb. The frail, the marginalized in society, and the poor generally do not have these survival options. These three variables are interdependent, and all must be considered if solutions to our downward spiral are to be found. Many independently derived indicators of ecological well-being demonstrate that declines are underway and are accelerating.14–16 There is a need for sober debate in pursuit of solutions to grave global problems, and public health practitioners must engage in such discussions by considering all three factors in the delicate balance of ecological integrity, not each one in isolation. Population: In little more than the length of an average lifetime, the population nearly quadrupled, from about 1.7 billion in 1900 to about 6.4 billion by 2000.17 It is not clear whether our numbers have reached or exceeded Earth’s carrying capacity;18 but responsible opinion inclines to the view that we have reached the limits for comfortable human existence.
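The population growth just cited lends itself to a one-line arithmetic check. The sketch below is illustrative only; the figures are the approximations quoted in this chapter.

```python
# Implied average annual growth rate for a rise from about 1.7 billion
# (1900) to about 6.4 billion (2000), assuming steady compound growth.
start, end, years = 1.7e9, 6.4e9, 100
rate = (end / start) ** (1 / years) - 1
print(f"implied average growth: {rate:.2%} per year")
# Compounding at roughly 1.3% per year is enough to almost quadruple
# a population within a single century.
```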
Earth might sustain for a while many millions more than the present number, but life for all but a small minority would be of greatly diminished quality, and long-term sustainability would be at best precarious.19 Affluence: Affluent people and nations consume renewable and nonrenewable resources far in excess of their needs, and consequently the ecological footprint of the affluent nations far exceeds available resources worldwide.20 Moreover, consumption leads to pollution from the disposal of waste, much of which ends up in the backyards of marginalized people locally21,22 or in low-income countries.23 These practices are institutionalized and hence deemed legitimate business.24 Technology: Technologies that result in war and ecological devastation are clearly harmful both to ecosystems and to human health.25 In countries where environmental legislation and regulation do not permit polluting technologies to operate, polluting technology is often exported to regions of the world where stringent environmental controls are absent, while newer-generation technologies are implemented in affluent nations to comply with local environmental standards. Evidence on the causes and consequences of global change was published by the Intergovernmental Panel on Climate Change (IPCC) in 1990 in its First Assessment Report.26,27 There was more evidence in the Second Assessment Report in 1995.28 The Third Assessment Report was published in 2001,1 with graver predictions based on more refined science than just 11 years previously. The Fourth Assessment Report is appearing in installments in 2007 (see Addendum).
Much of the information in this chapter is taken from the IPCC Reports, from Climate Change and Human Health (1996),29 and from Climate Change and Human Health: Risks and Responses (2003).30 There have been many other reports: by national governments,31 in scientific articles,32,33 and in documents produced by nongovernmental agencies such as the Union of Concerned Scientists,34 Friends of the Earth, the Worldwatch Institute,35 and the Sierra Club.
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Environmental Health
BOX 52-1. NATURE’S SERVICES∗

“The conditions and processes through which natural ecosystems, and the species that make them up, sustain and fulfil human life. They maintain biodiversity and the production of ecosystem goods, such as seafood, forage, timber, biomass fuels, natural fiber, and many pharmaceuticals, industrial products, and their precursors. The harvest and trade of these goods represent an important and familiar part of the human economy. In addition to the production of goods, ecosystem services are the actual life-support functions, such as cleansing, recycling, and renewal, and they confer many intangible aesthetic and cultural benefits as well.”

• Purification of air and water
• Mitigation of floods and droughts
• Detoxification and decomposition of wastes
• Generation and renewal of soil and soil fertility
• Pollination of crops and natural vegetation
• Control of the vast majority of potential agricultural pests
• Dispersal of seeds and the translocation of nutrients
• Maintenance of biodiversity, from which humanity has derived key elements of its agricultural, medicinal, and industrial enterprise
• Protection from the sun’s harmful ultraviolet rays
• Partial stabilization of climate
• Moderation of temperature extremes and the force of winds and waves
• Support of diverse human cultures
• Providing of aesthetic beauty and intellectual stimulation that lift the human spirit

∗Daily, Gretchen C. Nature’s Services: Societal Dependence on Natural Ecosystems. Island Press; 1997:3–4.
In the late 1980s, when concerns about global warming and other aspects of global change began to attract widespread public interest, some contrary views and rebuttals were published,36,37 sometimes but not always sponsored by organizations that opposed actions aimed at mitigating global change. As the empirical evidence mounted, these contrary views have become more muted. A recent attack on climate science came from Lomborg38 in The Skeptical Environmentalist. This book provides an example of how science can be challenged, but Lomborg is an economist and his book is devoid of sound scientific analysis. Every component of global change merits discussion, and so do some of the complex interconnections among them. Readers are urged to consult the sources cited in the bibliography. The health, social, economic, and other impacts of global change have been the topic of many important reports.39–41 There are some obvious actions we should take to enhance readiness to deal with public health aspects of global change and implications for public policy generally. A critical concern is the disconnect in most people’s minds between nature and human well-being.42 There is a belief that should we, for instance, destroy our water supplies, clean water will be produced through technologies yet to be invented. Likewise for all of Nature’s services43 that humanity has taken for granted for millennia (see Box 52-1). Costanza44,45 has demonstrated that Nature’s services, in dollar terms, amount annually to some three times global GDP. Thoughtful scientists and philosophers46 have cautioned for decades about the folly of such expectations, even if only on the basis of thermodynamic principles.47–49

GLOBAL WARMING
Svante Arrhenius recognized in 1896 that Earth’s mantle of atmosphere acts like a greenhouse, allowing passage of short-wavelength solar radiation into the biosphere while trapping longer-wavelength infrared radiation. Without the greenhouse effect, Earth’s surface temperature would swing from over 50ºC in strong sunlight to –40ºC at dawn. The concentration of greenhouse gases in the troposphere has risen rapidly since the beginning of the industrial era because several of these gases, notably carbon dioxide, are products of fossil fuel combustion and other human activities. Industrial activity and the combustion of petroleum fuels in automobiles have increased exponentially
since the 1950s, accelerated by industrial and commercial development in India, China, South Korea, Taiwan, Indonesia, Thailand, Brazil, Mexico, and other countries. Currently over 6 billion metric tons of CO2, the principal greenhouse gas, are added to the troposphere annually, in amounts that increase every year. In 2002, the last year for which a global estimate is available, the global output of CO2 was almost 7 billion metric tons.50 This is despite the promises made by most national leaders at the UN Conference on Environment and Development (UNCED)51 in Rio de Janeiro in 1992 to stabilize carbon emissions at or below 1990 levels. Moreover, tropical rain forests, perhaps the most important carbon sink (i.e., a biological system that absorbs carbon emissions, thus helping to counterbalance their impact on Earth’s temperature), are being rapidly depleted, often by slash-burning, and this adds even more carbon gases to the greenhouse. Phytoplankton, another important carbon sink, are damaged by the increased ultraviolet radiation (UVR) flux from depleting stratospheric ozone, an example of one form of global change reinforcing another. When they signed the 1995 Framework Convention on Climate Change, most national leaders reiterated their earlier promises, and the 1995 IPCC Reports added a sense of urgency to the need for action. Recently, debate over the Kyoto Accord has exposed countries whose leadership is focused primarily on the narrow good of their own citizens, failing to embrace concerted global action to remedy a global crisis already underway.
In 1995, the atmospheric concentration of carbon dioxide reached a higher level than at any time in the last 140,000 years, and the average ambient atmospheric temperature was the highest since record-keeping began.52 Global climate models using a variety of methods estimate that the average global ambient temperature will rise by about 0.5ºC in the first half of the twenty-first century and may rise 2ºC by 2100.4 These estimates were revised upward in the Third Assessment Report to as much as 4.5ºC. Moreover, these are average temperatures; the increase and the seasonal and diurnal swings are expected to be greater, as much as 6–8ºC, in temperate zones, and even more extreme near the poles. If arctic permafrost thaws as a result, a great deal of methane will be released, adding to the existing burden of atmospheric greenhouse gases and accelerating the warming process. Polar ice caps and sea ice are melting at rates greater than were predicted as recently as a decade ago, suggesting that global warming is proceeding at or near the upper limits predicted in the earlier Assessment Reports.
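The scale of the natural greenhouse effect Arrhenius described can be checked with a standard Stefan-Boltzmann radiative-balance calculation. The sketch below is illustrative and not taken from this chapter; the solar-constant and albedo values are conventional textbook approximations.

```python
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR_CONSTANT = 1361.0   # mean solar irradiance at Earth, W m^-2
ALBEDO = 0.30             # fraction of sunlight reflected back to space

def effective_temperature(solar_constant=SOLAR_CONSTANT, albedo=ALBEDO):
    """Equilibrium temperature of a planet with no greenhouse atmosphere:
    absorbed sunlight, averaged over the sphere, equals emitted infrared,
    i.e. S(1 - a)/4 = sigma * T**4."""
    absorbed = solar_constant * (1.0 - albedo) / 4.0
    return (absorbed / SIGMA) ** 0.25

t = effective_temperature()
print(f"no-greenhouse equilibrium: {t:.0f} K ({t - 273.15:.0f} C)")
# About 255 K (-19 C); the observed global mean near +15 C reflects the
# roughly 33 C of warming contributed by naturally occurring greenhouse gases.
```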
Global warming has direct and indirect, and predominantly adverse, effects on health.53 Although heat-wave deaths are dramatic and obvious,54 for instance causing at least 10,000 excess deaths in Paris in July 2003, in terms of overall health impact a more important consequence may be an increased incidence and prevalence of water-borne and vector-borne disease. Increased average ambient temperatures extend the range, distribution, and abundance of insect vectors such as mosquitoes, allow the pathogens they carry to breed more rapidly, and may enhance their virulence.55 Malaria, for instance, is expected to become prevalent in temperate zones and at altitudes in tropical and subtropical regions from which it is now absent, notably large highland cities and periurban slums in East Africa (e.g., Nairobi, Harare, Soweto) where an additional 20–30 million people will be at risk; there will be many millions more at risk of malaria annually in Indonesia and other populous South and Southeast Asian nations.56 Other tropical and subtropical vector-borne diseases also will increase in incidence, prevalence, and perhaps mortality. In North America, several arbovirus diseases (e.g., viral encephalitis and hemorrhagic dengue fever) will occur more frequently. The indirect effects of global warming include a sea-level rise of up to 50 cm by the year 2050, due to melting of polar and alpine icecaps and thermal expansion of the seawater mass. This will disrupt many coastal ecosystems, jeopardize coastal and perhaps some ocean fisheries, salinate river estuaries that are an important source of drinking water, and displace scores of millions of people from low-lying coastal regions in many parts of the world, including the Netherlands, Bangladesh, much of South China, parts of Japan, and small island states (e.g., Vanuatu, the Maldives) that face inundation and obliteration. Up to 10–15 million people along the eastern seaboard of the United States may be affected.
Many of those displaced will become “environmental refugees” in third-world megacities or drift into urban squalor in the rich industrial nations. Another effect of global climate change with implications for human health is anomalous weather—notably, more frequent, severe, and unpredictable weather emergencies such as catastrophic floods, hurricanes and tornadoes, and heat waves. Atmospheric physicists and climatologists believed that some unusual weather events in the 1986–1995 decade may have been attributable to global climate change; indeed, the warming trend is now recognized as having been in play since about 1990. These anomalous weather events have already exacted a heavy financial toll from the insurance industry and from national disaster funds in the United States and elsewhere (Table 52-1).57 Sudden flooding has caused sewer backups and contamination of drinking water supplies, resulting in public health emergencies in both the United States and Canada in the recent past. The disastrous hurricanes Katrina and Rita, which caused immense devastation and loss of life in New Orleans and elsewhere in Louisiana and in Texas in August and September 2005, may have been in part a manifestation of this trend.58–63 The European summer of 2003 included severe heat waves associated with sharp increases in mortality in several large cities, notably Paris, where a conservatively estimated 10,000 of a total excess mortality of 14,000 deaths were directly attributable to heat-wave conditions. The impact of global warming on food security could be very serious; here the interconnection of global warming with resource depletion and desertification is important.
Global warming will jeopardize the viability of crops in some of the world’s most important grain-growing regions because it will alter rainfall patterns and soil moisture levels and hasten desertification of marginal grazing and agricultural land, as it has already done in much of the West African Sahel, parts of northeastern Brazil, and elsewhere in Africa (e.g., Ethiopia, Sudan, Angola, Zimbabwe). An increase in surface-level ultraviolet radiation flux, discussed below, will make matters worse if it impairs plant reproduction or growth. Predictions are difficult when so many variables are involved, but the models developed by agronomists suggest that, while some grain crops might benefit from warmer climate and higher levels of atmospheric CO2, the overall impact is likely to be a decline in world grain crop production64 (Table 52-2).
Human Health in a Changing World
TABLE 52-1. INSURED LOSSES FROM “BILLION U.S. DOLLAR” STORM EVENTS SINCE 1987∗

Year   Event                                                       Insured Loss ($ Billion)
1987   Windstorm (Western Europe)                                    4.7
1989   Hurricane Hugo (Caribbean, United States)                     6.3
1990   Winter storms (Europe; four events in total)                 13.2
1991   Typhoon Mireille (Japan)                                      6.9
1991   Oakland fire (United States)                                  2.2
1992   Hurricane Andrew (Florida)                                   20.8
1992   Hurricane Iniki (Hawaii)                                      2.0
1993   Blizzard, “Storm of the Century” (Eastern United States)      2.0
1993   Floods (United States)                                        1.2
1995   Hurricane Luis (Caribbean)                                    1.7
1995   Hurricane Opal (United States)                                2.4
1995   Hailstorm (United States)                                     1.3
1996   Hurricane Fran (United States)                                1.8
1998   Hurricane Georges (Caribbean, United States)                  3.5
1998   Ice storm (Canada, United States)                             1.2
1998   Hailstorm                                                     1.4
1998   Floods (China)                                                1.1
1999   Winter storms (Europe)                                       10.4
1999   Typhoon Bart (Japan)                                          3.4
1999   Hailstorm (Australia)                                         1.1
1999   Tornadoes (United States)                                     1.5
1999   Hurricane Floyd (United States)                               2.2

∗From Munich Reinsurance Company. Windstorm—New Loss Dimensions of a Natural Hazard. Munich: Munich Reinsurance Company; 2000. Figures are adjusted for inflation (1999 values). (See http://www.grida.no/climate/ipcc_tar/wg2/329.htm, Table 8-3, from which earthquakes have been omitted.) Note (1): From 1970 through 1986, no claims exceeding 1 billion US $ are reported. Note (2): According to the United Nations Environment Programme (UNEP) (at http://www.rolac.unep.org/cprensa/cpb43i/cpb43i.htm), total insurance losses for 2003 and 2004 amount, respectively, to $16 billion and $35 billion.
Desertification is made worse by unsound and inappropriate agricultural methods. The “green revolution” that dramatically increased agricultural output in the 40 years following the end of World War II is over: many forms of agricultural output have remained stationary or have declined in the past 5–10 years, raising troubling questions about Earth’s carrying capacity. Global warming is the principal cause of the retreat of alpine glaciers that has been observed since the late nineteenth century. This could reduce the ice-melt component of many river systems, which contributes, through irrigation and seasonal flooding, to the productivity of food-producing regions. The deficit is compensated in part, at least, by increased rainfall, but in the long term river flow could decline. Shortages of fresh water for irrigation and drinking may be the most critical limiting factor on further population growth in many parts of the world.
STRATOSPHERIC OZONE ATTENUATION
In 1974, Molina and Rowland, two atmospheric chemists, predicted that chlorofluorocarbons (CFCs), a widely used class of chemicals, would permeate the upper atmosphere, where they would break down under the influence of solar radiation to produce chlorine monoxide.65,66 Chlorine monoxide destroys ozone; each molecule of chlorine monoxide is capable of destroying over 10,000 ozone molecules. Rowland and Molina were awarded the Nobel Prize for
TABLE 52-2. SELECTED CROP STUDY RESULTS FOR 2×CO2-EQUIVALENT EQUILIBRIUM GCM SCENARIOS∗
Each entry gives crop, yield impact (%), and comments.

Latin America
  Maize: –61 to increase. Data are from Argentina, Brazil, Chile, and Mexico; range is across Global Climate Model (GCM) scenarios, with and without CO2 effect.
  Wheat: –50 to –5. Data are from Argentina, Uruguay, and Brazil; range is across GCM scenarios, with and without CO2 effect.
  Soybean: –10 to +40. Data are from Brazil; range is across GCM scenarios, with CO2 effect.
Former Soviet Union
  Wheat: –19 to +41; Grain: –14 to +13. Range is across GCM scenarios and region, with CO2 effect.
Europe
  Maize: –30 to increase. Data are from France, Spain, and northern Europe; with adaptation and CO2 effect; assumes longer season, irrigation efficiency loss, and northward shift.
  Wheat: increase or decrease. Data are from France, United Kingdom, and northern Europe; with adaptation and CO2 effect; assumes longer season, northward shift, increased pest damage, and lower risk of crop failure.
  Vegetables: increase. Data are from United Kingdom and northern Europe; assumes pest damage increased and lower risk of crop failure.
North America
  Maize: –55 to +62; Wheat: –100 to +234. Data are from United States and Canada; range is across GCM scenarios and sites, with/without adaptation and with/without CO2 effect.
  Soybean: –96 to +58. Data are from United States; less severe or increase with CO2 and adaptation.
Africa
  Maize: –65 to +6. Data are from Egypt, Kenya, South Africa, and Zimbabwe; range is over studies and climate scenarios, with CO2 effect.
  Millet: –79 to –63. Data are from Senegal; carrying capacity fell 11–38%.
  Biomass: decrease. Data are from South Africa; agrozone shifts.
South Asia
  Rice: –22 to +28; Maize: –65 to –10; Wheat: –61 to +67. Data are from Bangladesh, India, Philippines, Thailand, Indonesia, Malaysia, and Myanmar; range is over GCM scenarios, with CO2 effect; some studies also consider adaptation.
China
  Rice: –78 to +28. Includes rainfed and irrigated rice; range is across sites and GCM scenarios; genetic variation provides scope for adaptation.
Other Asia and Pacific Rim
  Rice: –45 to +30. Data are from Japan and South Korea; range is across GCM scenarios; generally positive in north Japan, and negative in south.
  Pasture: –1 to +35. Data are from Australia and New Zealand; regional variation.
  Wheat: –41 to +65. Data are from Australia and Japan; wide variation, depending on cultivar.

∗For most regions, studies have focused on one or two principal grains. These studies strongly demonstrate the variability in estimated yield impacts among countries, scenarios, methods of analysis, and crops, making it difficult to generalize results across areas or for different climate scenarios. (See Chapter 15, Reference 10.) Note: IPCC 2001 provides an update with more specific details according to studies with explicit global economics and/or global yields, and by studies of yield and production in developed regions, nations, and subnational regions: http://www.grida.no/climate/ipcc_tar/wg2/212.htm#tab54.
Chemistry in 1995 in recognition of their work. Other atmospheric contaminants that destroy stratospheric ozone include other halocarbons and perhaps oxides of nitrogen (e.g., in exhaust emissions of high-flying supersonic jet aircraft). Volcanic eruptions sometimes release chlorine compounds into the atmosphere, so natural as well
as human-induced processes can contribute to stratospheric ozone attenuation. Rowland and Molina’s predictions soon began to come true. In 1985, Farman and coworkers observed extensive attenuation (a “hole”) in the stratospheric ozone layer over Antarctica during the Southern
Hemisphere spring.67 This has recurred annually; since 1990, seasonal ozone depletion has been observed in the Northern Hemisphere too, greatest over parts of Siberia and northeastern North America. Stratospheric ozone depletion was correlated with increased surface-level UVR flux by Kerr and colleagues at the Canadian Climate Centre in 1993.68 Ozone depletion so far is about 3–4% of total stratospheric ozone and is increasing annually. The stratospheric ozone layer protects the biosphere from exposure to lethal levels of ultraviolet radiation. The gravity of this progressive loss of stratospheric ozone was recognized almost immediately and led many industrial nations to adopt the Montreal Protocol, calling for a moratorium on the manufacture and use of CFCs.69 CFCs were widely used as solvents in the manufacture of microprocessors for computers, as foaming agents in polystyrene packing, as propellants in spray cans, and as Freon gas in air conditioners and refrigerators; their supposed chemical inertness made them a popular choice. But because they are inert, they have, on average, an atmospheric half-life of about 100 years, so stratospheric ozone depletion will continue to be a serious problem well into the twenty-second century. Stratospheric ozone must not be confused with the toxic ozone of surface-level air pollution, which contaminates fumes from some industrial processes or forms when sunlight acts on automobile exhaust fumes (“photochemical smog”). Stratospheric ozone depletion permits greater amounts of harmful UVR to enter the biosphere, where it has adverse effects on many biological systems and on human health. The principal biological effects of increased UVR are disruption of the reproductive capacity and vitality of small and single-celled organisms, notably phytoplankton at the base of marine food chains, pollen, amphibians’ eggs, many insects, and the sensitive growing ends of green leaf plants.
Increased UVR also has direct adverse effects on human health: it increases the risk of skin cancer, increases the risk of ocular cataracts, and probably impairs immune function.
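The roughly 100-year atmospheric half-life cited above implies persistence that is easy to quantify with a first-order decay sketch (an idealization; actual CFC removal kinetics vary by compound):

```python
HALF_LIFE_YEARS = 100.0  # approximate figure cited in the text

def fraction_remaining(years, half_life=HALF_LIFE_YEARS):
    """Fraction of an initially released CFC burden still airborne after
    `years`, assuming simple first-order (exponential) decay."""
    return 0.5 ** (years / half_life)

for t in (50, 100, 200):
    print(f"after {t:3d} years: {fraction_remaining(t):.0%} remains")
# after  50 years: 71% remains
# after 100 years: 50% remains
# after 200 years: 25% remains
```

Even two centuries after release, a quarter of the original burden would still be aloft, which is why depletion is expected to remain a problem well into the twenty-second century.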
RESOURCE DEPLETION
The more people there are, the greater the stress on finite and scarce resources. The resources essential for survival are fresh water for drinking and irrigation, and food. Air quality is also a major concern. Water shortages in some parts of the world are associated with conflicts, and in the next 50 years as the shortages spread to other
TABLE 52-3. PER CAPITA WATER AVAILABILITY (M3/YEAR) IN 2050∗

Country         1990   No Climate Change 2050   GFDL 2050   UKMO 2050   MPI 2050
Cyprus          1280          770                  470         180        1100
El Salvador     3670         1570                  210        1710        1250
Haiti           1700          650                  840         280         820
Japan           4430         4260                 4720        4800        4480
Kenya            640          170                  210         250         210
Madagascar      3330          710                  610         480         730
Mexico          4270         2100                 1740        1980        2010
Peru            1860          880                  830         690        1020
Poland          1470         1200                 1160        1150        1140
Saudi Arabia     310           80                   60          30         140
South Africa    1320          540                  500         150         330
Spain           2850         2680                  970        1370        1660

∗Assumptions about population growth are from the IPCC IS92a scenario based on the World Bank (1991) projections; the climate data are from the IPCC WGII TSU climate scenarios (based on transient model runs of the Geophysical Fluid Dynamics Laboratory [GFDL], Max-Planck Institute [MPI], and UK Meteorological Office [UKMO]). The results show that in all developing countries with a high rate of population growth, future “per capita” water availability will decrease independently of the assumed climate scenario.
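The footnote’s conclusion, that per capita availability in fast-growing countries falls under every climate scenario, follows from simple division. The figures below are hypothetical and chosen only to illustrate the mechanism, not taken from Table 52-3; the 1000 m3 scarcity threshold is a commonly used rule of thumb.

```python
# Hypothetical country: a fixed renewable supply divided among a
# growing population (all values illustrative).
SUPPLY_M3_PER_YEAR = 5.0e10   # assumed constant renewable fresh water
populations = {"1990": 30e6, "2050": 60e6}  # population doubles

for year, pop in populations.items():
    per_capita = SUPPLY_M3_PER_YEAR / pop
    print(f"{year}: {per_capita:,.0f} m3 per person per year")
# 1990: 1,667 m3 per person per year
# 2050: 833 m3 per person per year, below the ~1,000 m3 level often
# used to define water scarcity
```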
countries and regions, these conflicts probably will be exacerbated (Table 52-3). Threats to water security are a primary cause of some of the most intractable conflicts in the world.70 In fact, the United Nations Environment Programme (UNEP) and various scientists and organizations anticipate that countries will be at war over access to fresh water by about 2020 (http://www.unep.or.jp/ietc/Issues/Freshwater.asp; http://www.planetark.com/dailynewsstory.cfm/newsid/26728/story.htm; http://www.fdu.edu/newspubs/magazine/03su/waterwars.html; http://www.unep.or.jp/ietc/knowledge/view.asp?id=2383; http://pubs.acs.org/hotartcl/est/99/jan/interview.html). The IPCC Summary for Policymakers suggests that water shortages will be an important limiting factor on growth and development in some regions, notably much of the Middle East, South Africa, parts of Brazil, and the Southwest of the United States.71 Sea-level rise due to global warming and salination of river estuaries and water tables close to seacoasts will threaten some of the largest human settlements on Earth: Tokyo, Shanghai, Calcutta, Bombay, Jakarta, and Lagos, among others with populations of 14 million or more around 1999–2001. There will be much population movement away from coastal zones that are now at or only just above sea level. Not only will some of this inhabited land be below sea level, but its fresh water supplies will also be compromised by seepage of sea water into subsurface aquifers; many heavily populated river estuaries will thus lose much of their carrying capacity. Desertification of grazing lands and marginal cultivated agricultural land would further threaten food security. Another critical limiting factor is shortage of ocean and coastal fish stocks.
This was seen in dramatic form in the early 1990s in the collapse of many of the world’s ocean fisheries, mainly due to irresponsible overfishing; but it was aggravated by changes in marine ecosystems accompanying the disappearance of coastal wetlands, disruption of river outflows by massive dams (e.g., the Aswan High Dam), pollution with chemicals, oil spills, and so on. Other factors were changes in ocean temperature and flow of currents such as El Niño, which affect marine ecology. Fish provide about 20–25% of human protein needs, considerably more in coastal-dwelling populations in South and Southeast Asia. It is not clear where replacement protein will come from.72 Shortage of energy in industrializing nations such as India and China makes matters worse. Rising energy needs in these and other industrializing nations have led to greatly increased and often inefficient combustion of low-grade coal, which not only adds to the burden of greenhouse gases but causes considerable health-harming atmospheric pollution. Energy production and combustion have diverse impacts on health, ranging from chronic respiratory damage due to inhalation of smoke from cooking fires inside inadequately ventilated village huts in the developing world73 to the after-effects of the Chernobyl nuclear reactor disaster and the ill-defined and poorly understood effects of living close to high-voltage electric power lines.74,75 Over 50% of the world’s population now lives in urban centers. The deterioration of air quality in these centers presents an ongoing challenge to public health because the elderly, the frail, and the hypersensitive succumb when air quality deteriorates. Allergens and other triggering factors continue to increase, as demonstrated by rising rates of asthma globally. Under global warming, large urban centers will experience more inversions, and smog day advisories are expected to increase further.
In the summer of 2005, for instance, there were more than double the previous average annual number of smog days in Toronto, Canada.
SPECIES EXTINCTION AND REDUCED BIODIVERSITY
That there have been extinctions of whole populations and of specific species in the past is undisputed.76,47 As a result of human activity, unique animal and plant species are becoming extinct at an
accelerating rate. Much discussion centers on the loss of species that might have great benefit for humans if they could be studied in detail and their properties exploited, for example, as anticancer agents. This view of species extinction is anthropocentric, a narrow view that considers only the possible direct benefits of biodiversity for humans. Subtle features of biodiversity matter more, especially the loss of genetic diversity.77 It may be very hazardous to proceed on our present course of increasing reliance on monocultures of high-yielding grain crops. Entire yields could be wiped out by an epidemic plant disease to which that strain is vulnerable; whereas if a genetically diverse grain crop is struck by plant disease, some strains at least are likely to survive. We have long understood that widespread pesticide use on insects that damage crops killed large numbers of useful arthropod species such as bees and led to death or reproductive failure of many species of birds.78 Fat-soluble dioxins and PCBs that concentrate as they move through food chains have adverse effects on reproductive outcomes, for example, by causing lethal deformities, some of which might also occur in other vertebrates, including humans. We have become increasingly aware of the interdependence among many diverse species that share an ecosystem. John Donne’s phrase, “No man is an island” applies to the myriad species that share the biosphere; when the bell tolls for amphibia whose eggs are killed by rising UVR flux, or for monarch butterflies that die when their winter habitat disappears, the bell tolls for us all. Destruction of natural ecosystems could have many harmful, even lethal, consequences for humans as well as for spotted owls.
DESERTIFICATION
Conversion of marginal agricultural land into desert is a widespread problem. Land that was suitable for light grazing was inappropriately used in attempts to grow crops. Thin soil on mountain slopes that held native vegetation capable of resisting erosion in annual spring snowmelt was cleared and cultivated, leading to rapid erosion—the soil slid down steep mountain slopes leaving only bare rock on which nothing grows. Trees and shrubs have been stripped from arid zone savannah and from many mountain slopes to provide fuel wood, with the same result.79 Sometimes the climate has changed, as in parts of former tropical rain forests in Central and South America that have been cleared as grazing land for beef cattle or in attempts to grow soybeans, wheat, or rice. The hydrologic cycle from tropical rain forest to rivers and lakes to clouds that precipitate as heavy rain is disrupted when trees are cut. Within a decade or less, rainfall is reduced, and soil moisture levels decline precipitously.80 The Sahara Desert was at least partly covered with rain forest as recently as 5000 years ago; once the trees were cut, conversion to desert proceeded rapidly and has shown no signs of recovery. Similar processes are at work in other parts of the world; the consequence is declining potential to produce food. Formerly bountiful land that desertifies might take from a few hundred to a few hundred thousand years to become fertile again.
ENVIRONMENTAL POLLUTION
Environmental pollution can be localized, regional, or global; all forms adversely affect human health and the integrity of the environment. Those that fall into the category of “global change” include major environmental disasters and catastrophes: the Chernobyl nuclear accident in the former Soviet Union;81 massive oil spills in maritime accidents involving supertankers (e.g., Torrey Canyon, Exxon Valdez); and insidious permeation of the entire biosphere by stable toxic chemicals that enter and are transmitted from one species to another through marine and terrestrial food chains. International conventions82 have been developed in an attempt to control pervasive chemical exposures, but some polluting countries, for economic reasons, have elected to continue business as usual, and have opted out of these conventions.
The collapse of the former Soviet Union and its satellites revealed gross environmental destruction that could take many centuries to heal.83 This regional pollution has had adverse effects on health, such as high rates of birth defects and severe respiratory damage.84 Some forms of chemical pollution are global in scope: PCBs, dioxins, DDT, fat-soluble chemicals, persistent organic pollutants, and endocrine disrupters that travel through food chains have permeated the entire world.14,85 Heavy metals, for example, lead and mercury, occur in trace amounts in emissions from coal-burning power generators and ore smelting plants (which may emit other toxic chemicals such as arsenic). These contaminants occur in trace amounts, but the total burden worldwide, falling on land and into the sea, amounts to millions of metric tons annually. These toxic chemicals all concentrate in food chains. Lead and mercury concentrations in cormorants' feathers have been assayed in museum specimens prepared by taxidermists before the Industrial Revolution and compared with present-day levels; modern levels are up to 1,000–10,000 times greater.86 Pregnant women who eat large amounts of fish risk mercury poisoning of the fetus.

DEMOGRAPHIC CHANGE
Underlying all the above features of global change are several aspects of population dynamics. The most obvious is population growth, which since approximately the 1950s has accelerated in an unprecedented surge almost all over the world.87 After many millennia of stable world population in the hunter-gatherer era of human existence, the development of agriculture about 10,000 years ago led to the first surge in population growth and subsequently to a slow but generally steady arithmetical increase in human numbers. About 200 years ago, roughly coinciding with the Industrial Revolution and European colonization of the Americas and Oceania, the pattern of growth became approximately exponential, leading to the sharp increase of the last 100 years. This was followed by a hypergeometric population explosion that coincided with, and was probably in part caused by, the "green revolution" and greatly increased agricultural productivity in the two to three decades after World War II. The reasons for the increase are complex and controversial. The efficacy of public health measures (e.g., environmental sanitation, vaccination) played a part, but ecological and behavioral causes, such as optimism about the future and earlier age at marriage, probably were more important. In the nineteenth and early twentieth centuries, agricultural development provided more food, and the population expanded to approach the available food supply. Not all causes of the population explosion are well understood.88,89 Besides the surging increase in numbers, unprecedented movements of people have occurred since the late nineteenth century.
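The contrast between the slow arithmetical (linear) growth of the agricultural era and the roughly exponential growth of the last two centuries can be illustrated with a minimal sketch. The starting population, increment, and growth rate below are hypothetical, chosen only to show the difference in shape between the two regimes, not to model actual demographic history:

```python
# Illustrative comparison of arithmetical (linear) versus exponential
# population growth, as described in the text. All numbers are hypothetical.

def linear_growth(start, increment, periods):
    """Arithmetical increase: the same absolute number is added each period."""
    pop = [start]
    for _ in range(periods):
        pop.append(pop[-1] + increment)
    return pop

def exponential_growth(start, rate, periods):
    """Exponential increase: the population is multiplied by (1 + rate) each period."""
    pop = [start]
    for _ in range(periods):
        pop.append(pop[-1] * (1 + rate))
    return pop

if __name__ == "__main__":
    # Hypothetical regimes: start at 10 million, add 1 million per generation
    # (arithmetical) versus grow 10% per generation (exponential), 50 generations.
    lin = linear_growth(10e6, 1e6, 50)
    exp = exponential_growth(10e6, 0.10, 50)
    print(f"linear after 50 periods:      {lin[-1]:,.0f}")
    print(f"exponential after 50 periods: {exp[-1]:,.0f}")
```

Under these assumed parameters the exponential regime ends up more than an order of magnitude above the arithmetical one, which is the qualitative point of the paragraph above: the same process, once its growth becomes proportional to its current size rather than constant, runs away.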
Long-term migration has been very large, for example, an estimated 30–40 million people from Europe into the Americas and Australasia in the period from 1850 to 1910, and perhaps larger undocumented migrations within Asia, for example, of ethnic Chinese into many parts of Southeast Asia, over a longer period dating from some time in the last millennium.90 Seemingly perpetual wars and widespread political unrest have contributed to migrations, but the main factor has been economic: many who migrate have perceived that their opportunities for work and a good life would be better elsewhere than where they were born and raised. Rapid urban and industrial growth is an important parallel sociodemographic phenomenon. The proportion of people living in cities exceeded 50% of total global population in 1998.91 In rich industrial nations, urban land shortage, real estate values, new building techniques, and personal preferences have led to an enormous growth in high-rise apartment dwellings. In developing nations, megacity shantytown slums with populations of 10 million or more have proliferated; these lack sanitary and other essential services and create an ideal breeding ground for disease and social unrest. This aspect of global change has far-reaching effects on health.
The movement of large numbers of people from rural to urban regions is attributable to industrialization, mechanization of agriculture, the attraction of rural subsistence farmers and landless peasants to prospects for more lucrative work in cities, and, in many parts of the world, flight from oppression by powerful rich landowners, banditry, or overt armed conflict. We can regard these massive population movements as a biological process, a form of tropism that has attracted people toward places where they can grow and develop and away from places where growth and development were inhibited. This perspective comes close to considering humans a parasitic infestation of the biosphere,92,93 a harsh judgment, but one for which there is some empirical support. Another form of movement with important health implications is short-term international air travel. International Air Transport Association (IATA) statistics show present annual air travel between countries and continents to be 600–700 million persons, who travel on business or pleasure or for seasonal employment.94 Rapid air travel allows people who may be incubating communicable diseases to reach destinations where large numbers of people may be susceptible to the pathogens introduced in this way.
Human Health in a Changing World
EMERGING AND REEMERGING INFECTIONS

Another way in which the world has changed is in the emergence and reemergence of lethal infectious pathogens.95–98 The human immunodeficiency virus (HIV) pandemic is the most obvious; it is linked to a resurgence of two old plagues, tuberculosis and syphilis, which find fertile soil in immunocompromised hosts and now are often due to resistant strains of pathogens. These three diseases are endemic in sub-Saharan Africa, in megacity slums elsewhere in the developing world, and in the counterpart of these slums in rich industrial nations, among the homeless, disenfranchised urban underclass. Other emerging infections are due to organisms such as Ebola virus, hantavirus,99 Borrelia burgdorferi (Lyme disease), and Legionella pneumophila (Legionnaires' disease). Others are due to the expansion of old diseases, such as hemorrhagic dengue fever, into regions from which they had been eliminated generations ago, only to return now because of the combination of climate change and the introduction of hardy vector species such as Aedes albopictus, among others.30,100–107

OTHER RELEVANT CHANGES

Complex economic, social, industrial, and political factors accompany the above processes and contribute to the difficulty of finding solutions that work. Global economies have supplanted national and regional ones.108 Transnational corporations, owing allegiance to no nation and seemingly driven by the desire for a profitable balance sheet in the next quarterly report, move capital and production from places where obsolete plant and equipment, tough labor and environmental laws, and political systems may impede them to countries without these restraining influences, thus maximizing short-term profits109 without accountability for potential harms to local ecologies and populations.110 Political revolts against local and regional taxation have undermined public health and other essential services and their infrastructures in some rich industrial nations, including the United States. For many years there have been no new investments, little maintenance, no salary increases (sometimes reductions of pay and benefits), and serious staff reductions in many public health services. Television "sound-bites" and fragmentary news reporting deprive busy people of the information they need to make intelligent decisions about such matters as public health services and environmental sustainability. Many people regard elected officials with contempt, which deters them from voting—a dangerous trend that has the potential for the political agenda to be captured by determined single-issue interests. All too frequently, those who are elected lack the political courage to make the tough decisions—such as raising taxes on fossil fuels—that the state of the world demands if the climatic trends are to be halted and ultimately reversed before irreversible harm is done.

PUBLIC HEALTH RESPONSES

Perhaps never before have public health workers and their services faced such challenges as they do now.111–115 Actions of several kinds are required (Table 52-4). Some obvious and simple measures can be initiated at once, for example, the protection of fair-skinned infants and children against excessive sun exposure. We need to establish or strengthen our surveillance of insect vectors and the pathogens they can carry. It also is essential to enhance preparedness to cope with the public health consequences of disasters, including an increasing proportion that are weather related and associated with floods, hurricanes, droughts, or other extreme weather events; and we need to be prepared to cope with increasing numbers of environmental refugees. Needed research strategies are also summarized in Table 52-4.116 Effective responses are hampered by many factors. The decay of infrastructures and the erosion of morale, as dedicated staff are laid off and salaries frozen or cut, have inhibited meaningful efforts to prepare for any but immediate emergencies. Yet some obvious preparations would cost little. Disaster planning must be maintained at a high level of preparedness; nothing is more certain than that there will be increasingly frequent, more severe, and less predictable weather emergencies. Large cities and towns on flood plains or in places subject to tidal surges are increasingly vulnerable; insurance companies have recognized this by their reluctance to insure against some natural disasters. Some public health agencies and local government departments have disaster plans, but many do not. Swiss Re is one reinsurance company that in the early 1990s, while global warming was still being denied by many other corporate entities worldwide, recognized the fact through the simple reality of rising insurance claims; it has since been a staunch advocate of serious acceptance of this reality and of the need for political and social action to prepare for the severe consequences of more extreme weather events.57 The insurance industry generally has been persuaded of the reality of climate change by the great increase in claims over the past two decades. There are many simple actions that public health services can take to mitigate the adverse effects of global change on human health. For instance, weather reports now often mention the level of UVR flux and offer advice about sun avoidance. In U.S. cities prone to extreme summer heat, humidity, and severe smog, for example, Chicago, public health authorities have plans to move the most vulnerable people to air-conditioned shopping malls and the like when heat waves occur.

TABLE 52-4. PUBLIC HEALTH RESPONSES TO GLOBAL CHANGE

Monitoring
- Migrant, refugee movements
- Food production, distribution
- Acute sunburn
- Heat-related illness

Epidemiologic Surveillance
- Air quality
- Water quality
- Food safety and food security
- Vectors, pathogens
- Infectious diseases: fecal-oral, respiratory, vector-borne
- Cancer: malignant melanoma, nonmelanomatous skin cancer, other cancers
- Cataract

Surveys
- Sun-seeking, sun-avoiding behavior
- Attitudes to sustainability
- Values assessments over time

Epidemiologic Studies
- Case-control and cohort studies to assess UV risk
- Behavioral adaptation studies
- Innovative study designs that are eco-region based rather than geo-political/administrative boundary based
- RCTs∗ of sunscreen ointments
- RCTs of UV-filtering sunglasses

Public Health Action
- Advisory messages about sun exposure
- Standard-setting for protective clothing, etc.
- Health education directed at behavior change
- Health care of migrant groups
- Disaster preparedness
- Extreme weather advisories

Public Health Policy
- National food and nutrition policies
- Disaster relief and infrastructure policies
- Research priorities

∗RCT—randomized controlled trial.

GROUNDS FOR OPTIMISM
Faced with the array of problems outlined above, it would be easy to admit defeat. But there are grounds for optimism about our predicament.117 Humans are a robust, hardy species: we have demonstrated considerable ability to adapt to a wide range of harsh environments. We are resourceful, intelligent, and often at our best in a crisis. We are now, perhaps, entering the greatest crisis we have ever faced. Its insidious onset has lulled us into complacency, if we think at all about the nature of the global changes that endanger us. There is also an element of denial, akin to the reluctance of a cancer patient to accept the seriousness of the condition, or of the risk-taking adolescent to whom death or permanent disability due to dangerous behavior is unimaginable. Epidemiology and other public health sciences can do much to induce greater recognition of the need for changes in values and behavior. Precedents in the history of public health since the second half of the nineteenth century118 are encouraging. The field of eco-epidemiology has begun to examine ways of measuring the effect, if any, of ecological declines on population health as one way of informing policy.119,120 As with any new scientific path, however, the challenges of obtaining the data needed for meaningful analyses are great. Recommendations for future research have been offered,16 but great political will is needed to effect the recommended changes in data availability for the conduct of eco-epidemiology in this area. The necessary sequence for control of any public health problem is awareness that the problem exists, understanding of what causes it, the capability to control it, a sense of values that the problem matters, and political will.121 All but the last of these exist now. We have the basic science evidence that enables us to predict what is likely to happen. Empirical evidence to support the predictions is rapidly mounting and soon will comprise an incontrovertible body of knowledge and understanding that even the most obtuse self-interested group will be unable to deny or rebut. One example of global concerted action was seen in September 2000 at the United Nations (UN) Millennium Summit, where all 191 UN member states pledged to meet the eight Millennium Development Goals (MDGs) by 2015 (http://www.un.org/millenniumgoals/) (see Box 52-2). Increasing numbers of important interest groups, such as the reinsurance industry and leaders in some resource-based industries, are recognizing the need for action. Increasing numbers of thoughtful people are aware of the need to conserve resources rather than squander them wantonly as we did in the 1950s and 1960s with "disposable" products. A few more dramatic disasters consequent to the collapse or major disruption of established weather systems would help to galvanize public opinion and lead to pressure for change that even the most complacent political leaders would be unable to ignore. Environmental protection laws and regulations have been strengthened in many countries and in the European Union; in the United States these laws and regulations remained substantially intact until recently, when the administration weakened or revoked several important safeguards. This suggests that the change of values required as a necessary prerequisite for action to mitigate global change is already beginning. It will undoubtedly help if the health impacts of global change are given greater attention in the media. Public health workers and epidemiologists can contribute by emphasizing the health effects of global change and the actions needed to minimize their impact. (See Box 52-3.)

BOX 52-2. MILLENNIUM DEVELOPMENT GOALS
Health, poverty, and conservation
1. Eradicate extreme poverty and hunger
2. Achieve universal primary education
3. Promote gender equality and empower women
4. Reduce child mortality
5. Improve maternal health
6. Combat HIV/AIDS, malaria, and other diseases
7. Ensure environmental sustainability
8. Develop a global partnership for development
ENVIRONMENTAL ETHICS AND THE PRECAUTIONARY PRINCIPLE
The change of values that we believe is already under way is leading to recognition of the need to observe a code of conduct for the environment, an ethic of environmental sustainability. Environmental movements are gathering strength in many countries, and "Green" parties are increasingly seen as "respectable" and in the mainstream of politics. They are beginning to influence the political agenda despite pressure from powerful industrial and commercial interest groups that have long been able to achieve their ends by controlling political decisions. The pressure often comes first from the grassroots level, perhaps because of a proposal to establish a toxic waste dump or a polluting industry; but its origins may matter less than its increasingly successful efforts to influence the outcome. Examples such as these make for hopeful reading in Suzuki and Dressel (2002). Another hopeful sign is recognition of the Precautionary Principle in policy formulation: when there is doubt about the possible environmental harm that may arise from an industrial or commercial development, a nuclear power station, an oil refinery, an open-cast coal mine, or other environmentally damaging activities, the people and communities who will be most affected are increasingly often given the benefit of the doubt.122,123 A few years ago, this almost never happened. Now it is commonplace, especially in the European Union and, to a lesser extent, in Canada. The principle is not widely embraced in the United States. It is promising that Western culture is beginning to show interest in the value of "Traditional Knowledge" or "Indigenous Knowledge," which provided a foundational guide for aboriginal population survival over millennia. The operating principle among some of these indigenous cultures is that of "The Seventh Generation": the consequences of present-day decisions that could affect the environment are weighed for their potential impact seven generations hence, and any action that could negatively affect the seventh generation is not taken (http://www.iisd.org/pdf/seventh_gen.pdf; also http://www.ecology.info/seventh-generation.htm. Accessed May 1, 2007).

BOX 52-3. THE EARTH CHARTER, MARCH 2000

PREAMBLE
Earth, Our Home
The Global Situation
The Challenges Ahead
Universal Responsibility

PRINCIPLES
I. RESPECT AND CARE FOR THE COMMUNITY OF LIFE
1. Respect Earth and life in all its diversity.
2. Care for the community of life with understanding, compassion, and love.
3. Build democratic societies that are just, participatory, sustainable, and peaceful.
4. Secure Earth's bounty and beauty for present and future generations.
In order to fulfill these four broad commitments, it is necessary to:
II. ECOLOGICAL INTEGRITY
5. Protect and restore the integrity of Earth's ecological systems, with special concern for biological diversity and the natural processes that sustain life.
6. Prevent harm as the best method of environmental protection and, when knowledge is limited, apply a precautionary approach.
7. Adopt patterns of production, consumption, and reproduction that safeguard Earth's regenerative capacities, human rights, and community well-being.
8. Advance the study of ecological sustainability and promote the open exchange and wide application of the knowledge acquired.
III. SOCIAL AND ECONOMIC JUSTICE
9. Eradicate poverty as an ethical, social, and environmental imperative.
10. Ensure that economic activities and institutions at all levels promote human development in an equitable and sustainable manner.
11. Affirm gender equality and equity as prerequisites to sustainable development and ensure universal access to education, health care, and economic opportunity.
12. Uphold the right of all, without discrimination, to a natural and social environment supportive of human dignity, bodily health, and spiritual well-being, with special attention to the rights of indigenous peoples and minorities.
IV. DEMOCRACY, NONVIOLENCE, AND PEACE
13. Strengthen democratic institutions at all levels, and provide transparency and accountability in governance, inclusive participation in decision making, and access to justice.
14. Integrate into formal education and life-long learning the knowledge, values, and skills needed for a sustainable way of life.
15. Treat all living beings with respect and consideration.
16. Promote a culture of tolerance, nonviolence, and peace.
THE WAY FORWARD

Finally, and also promising, is the fact that the health and environment sectors are being encouraged to communicate, both in government agencies and in curricula on sustainability and health in university
training programs.8,56,124–129 Educating young people to appreciate the complexities of the link between health and environment, and moving them from linear and reductionist to systems-based and transdisciplinary approaches to problem solving, is to be encouraged.130 Transdisciplinarity is the philosophical concept of scholarly inquiry that ignores conventional boundaries among ways of thinking about and solving problems. It is based on recognition of the inherent complexity of many problems confronting humans and has evolved into a conceptual framework that embraces and seeks to mobilize all pertinent scientific and scholarly disciplines: the physical, biological, social, and behavioral sciences, ethics, moral philosophy, communication sciences, economics, politics, and the humanities. Many problems in public health require an inherently transdisciplinary approach. The social, demographic, and human health problems associated with global environmental change demand the greatest degree of transdisciplinarity. It is the antonym of reductionism.131
Environmental Health
MITIGATION OPTIONS
The 1995 and 2001 IPCC Reports71,1 discuss several mitigation options: energy-efficient industrial processes and means of transportation; reduction of emissions from human settlements; sound agricultural conservation and rehabilitation policies; forest management policies and strategies; and so on. The Framework Convention on Climate Change, adopted by most national leaders early in 1996, spells out ways in which industries that contribute heavily to greenhouse gas accumulation can and must change. These changes are not without cost, although experience has often demonstrated that conserving energy, like other conservation measures, is cost-effective. An obvious change that would benefit all who share Earth is the introduction of deterrent taxes to discourage the use of private cars for all but truly essential purposes. Political leaders everywhere are reluctant to enact this unpopular measure and are equally reluctant to spend large capital sums on new or upgraded public transport systems in an era when reducing taxes is what got many of them elected in the first place. This is unlikely to change until a climatic emergency or other environmental crisis forces large numbers of people to realize that the time for action rather than rhetoric has arrived. Public health workers should be preparing to take the initiative in the event of climatic emergencies or environmental crises and should have cogent arguments ready to state the case for action toward environmental sustainability. Our situation resembles that depicted by the clock on the cover of the Bulletin of the Atomic Scientists, its hands pointing to a few minutes before midnight. Another analogy is a calendar from which all but a few days near the end of the year have been torn. The need for action is urgent.
The Ecohealth program initiative at the Canadian International Development Research Centre (IDRC)132 introduced in 2005 the inaugural issue of Health Environment: Global Links Newsletter (www.idrc.ca/ecohealth). This four-language, biannual publication is produced to meet a need for global information and knowledge exchange. The purpose of the newsletter is to sustain momentum in building the emerging global Community of Practice on health and environment. The newsletter is sponsored by IDRC but is intended as a forum for the emerging global community of scientists and development practitioners working on health and environment linkages (http://www.idrc.ca/uploads/user-S/11231750941HealthEnvironment_Newsletter-English.pdf).
ADDENDUM
Since this chapter went into production, there have been several important reports on climate change. As attention to this particular concern grows, many more reports can be expected. The amount of literature in this field is escalating, and much can be found through Google searches. Of particular note, one influential and more recent report emerged from Britain where the government had commissioned a report by the economist Nicholas Stern, formerly of the World Bank, on the costs of action versus inaction on climate change. Stern calculated that the long-term economic cost of continuing with a “business as usual” strategy ultimately would be as much as 20% of annual GNP because of the very high cost of dealing with increasingly frequent and severe climatic extremes. In contrast, Stern calculated that the cost of action to mitigate climate change and to adapt to inevitable change would be about 1–2% of annual GNP, if action were to begin immediately. Member nations of the European Union appear to be taking the Stern Report seriously and are acting on it; the United States and Canada, as of mid-2007, do not. In February 2007, the first of several reports from the Fourth Assessment by the Intergovernmental Panel on Climate Change (IPCC) was released. This report deals with the increasingly strong scientific evidence that climate change is occurring and is, to a considerable
extent, the result of human activity, especially the combustion of carbon-based fuels, which adds to the burden of carbon dioxide in the atmosphere. Further IPCC reports are expected throughout 2007, addressing impacts of and adaptation to climate change. Details of the Stern Report are accessible at http://www.hmtreasury.gov.uk/media/999/76/CLOSED_SHORT_executive_summary.pdf, and the IPCC reports are accessible at http://www.ipcc.ch.

REFERENCES
1. Watson RT, and the Core Writing Team, eds. IPCC, 2001: Climate Change 2001: Synthesis Report. A Contribution of Working Groups I, II, and III to the Third Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom, and New York, NY, USA: Cambridge University Press; 2001: 398.
2. Millennium Ecosystem Assessment. Ecosystems and Human Well-being: Synthesis. Washington, DC: Island Press; 2005: 137.
3. Tudge C. The Time before History. New York: Scribner; 1996.
4. Eddy JA. Climate and the role of the sun. In: Rotberg RI, Rabb TK, eds. Climate and History. Princeton, NJ: Princeton University Press; 1981: 145–67.
5. Houghton JT, Filho LGM, Callander BA, Kattenburg A, Maskell K, eds. Climate Change 1995—The Science of Climate Change. Volume 1 of the Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press; 1996.
6. McMichael AJ. The biosphere, human health and sustainability (Editorial). Science. 2002;420:1093.
7. Royal Society of Canada. Canadian Global Change Program. Ottawa: Royal Society of Canada; 1992.
8. Last JM. Public Health and Human Ecology. 2nd ed. Connecticut, USA: Appleton & Lange; 1998.
9. McMichael AJ. Planetary Overload: Global Environmental Change and the Health of the Human Species. Cambridge: Cambridge University Press; 1993.
10. McMichael T. Human Frontiers, Environments and Disease: Past Patterns, Uncertain Futures. UK: Cambridge University Press; 2001.
11. Raven PH, ed. Nature and Human Society: The Quest for a Sustainable World. Proceedings of the 1997 Forum on Biodiversity. National Research Council. Washington, DC: National Academy Press; 1997.
12. Soskolne CL, Bertollini R. Global Ecological Integrity and 'Sustainable Development': Cornerstones of Public Health: A Discussion Document. Rome Division, Italy: World Health Organization, European Centre for Environment and Health; 1999: 74. Also published at http://www.euro.who.int/document/gch/ecorep5.pdf.
13. Soskolne CL, Bertollini R. Chapter 28: Global ecological integrity, global change and public health. In: Aguirre AA, Ostfeld RS, Tabor GM, et al., eds. Conservation Medicine: Ecological Health in Practice. New York: Oxford University Press; 2002: 372–82.
14. Colborn T, Dumanoski D, Myers JP. Our Stolen Future: Are We Threatening Our Fertility, Intelligence, and Survival?—A Scientific Detective Story. New York: A Dutton Book, Penguin Group; 1996.
15. Strong M. Where on Earth are We Going? Canada: Knopf; 2000.
16. Soskolne CL, Broemling N. Eco-epidemiology: on the need to measure health effects from global change. Global Change Hum Health. 2002;3(1):58–66.
17. Annual Population Statistics and Projections. New York: UN Statistical Office; 1995.
18. Rees WE. Human carrying capacity: living within global life support. In: The Encyclopedia of Global Environmental Change. London: John Wiley and Sons; 2001.
19. Cohen JE. How Many People Can the Earth Support? New York: Norton; 1995.
20. Wackernagel M, Rees W. Our Ecological Footprint: Reducing Human Impact on the Earth. Gabriola Island, British Columbia, Canada: New Society Publishers; 1996.
21. Davis D. When Smoke Ran Like Water: Tales of Environmental Deception and the Battle Against Pollution. New York: Basic Books, A Member of the Perseus Books Group; 2002.
22. The environment, the public health, and the next generation of protection. Am J Law Med. 2004;30:2,3.
23. Soskolne CL. International transport of hazardous waste: legal and illegal trade in the context of professional ethics. Global Bioeth. 2001;14(1):3–9.
24. Westra L. Ecoviolence and the Law: Supranational Normative Foundations of Ecocrime. New York: Transnational Publishers; 2004.
25. Schmidt CW. Battle scars: global conflicts and environmental health. Environ Health Perspect. 2004;112(17):A995–A1005.
26. Intergovernmental Panel on Climate Change. Scientific Assessment of Climate Change. A Report by Working Group I. Geneva: World Health Organization and United Nations Environmental Programme; 1990.
27. Intergovernmental Panel on Climate Change. Impact Assessment; A Report to IPCC from Working Group II. Canberra: Australian Government Printing Office; 1990.
28. Watson RT, Zinyowera MC, Moss RH, et al., eds. Climate Change 1995—Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analysis. Cambridge: Cambridge University Press; 1996 (Contributions of Working Group II to the Second Assessment Report of the Intergovernmental Panel on Climate Change).
29. McMichael AJ, Haines A, Sloof R, et al., eds. Climate Change and Human Health. Geneva: World Health Organization/WMO/United Nations Environmental Programme; 1996.
30. McMichael AJ, Campbell-Lendrum DH, Corvalan CF, et al. Climate Change and Human Health: Risks and Responses. Geneva, Switzerland: WHO; 2003.
31. Government-sponsored scientific committees in the United Kingdom, the Netherlands, Canada, Sweden, and Australia have produced multiple reports since approximately 1985. In the United States, the National Academy of Sciences has produced several reports.
32. Haines A, Fuchs C. Potential impacts on health of atmospheric change. J Public Health Med. 1991;13:69–80.
33. Last JM. Global change: ozone depletion, greenhouse warming and public health. Annu Rev Public Health. 1993;14:115–36.
34. Union of Concerned Scientists. World Scientists' Warning Briefing Book. Cambridge, MA: Union of Concerned Scientists; 1993.
35. Worldwatch Institute. State of the World, Annual Reports since 1985. Washington, DC: Worldwatch Institute.
36. Brookes WT. The global warming panic. Forbes. 1989;144(14):96–102.
37. Lindzen RS. Some remarks on global warming. Environ Sci Technol. 1990;24:424–6.
38. Lomborg B. The Skeptical Environmentalist: Measuring the Real State of the World. Cambridge: Cambridge University Press; 2001.
39. Our Planet, Our Health. Report of the WHO Commission on Health and Environment. Geneva: World Health Organization; 1992 (with Annexe volumes on Food and Agriculture, Energy, Urbanization, and Industry).
40. Silver CS, DeFries RS, eds. One Earth, One Future. Washington, DC: National Academy Press; 1990.
41. Yoda S, ed. Trilemma: Three Major Problems Threatening the World Survival. Report of the Committee for Research on Global Problems. Tokyo: Central Research Institute of Electric Power Industry; 1995.
42. Suzuki D. The Sacred Balance: Rediscovering Our Place in Nature. Vancouver: Greystone Books, the Douglas & McIntyre Publishing Group; 1997.
43. Daily GC, ed. Nature's Services: Societal Dependence on Natural Ecosystems. Washington, DC: Island Press; 1997.
44. Costanza R. The value of ecosystem services. Special issue of Ecol Econ. 1999;25(1):139.
45. Costanza R, d'Arge R, de Groot R, et al. The value of the world's ecosystem services: putting the issues in perspective. Ecol Econ. 1998;25:67–72.
46. Daly HE, Cobb JB, Jr. For the Common Good: Redirecting the Economy toward Community, the Environment, and a Sustainable Future. 2nd ed. Boston, USA: Beacon Press; 1994.
47. Diamond J. Collapse: How Societies Choose to Fail or Succeed. New York: Viking—A Member of Penguin Group; 2005.
48. Rees WE. Consuming the earth: the biophysics of sustainability. Ecol Econ. 1999;29:23–7.
49. Rees WE. Patch disturbance, ecofootprints, and biological integrity: revisiting the limits to growth (or why industrial society is inherently unsustainable). Chapter 8. In: Pimentel D, Westra L, Noss R, eds. Ecological Integrity: Integrating Environment, Conservation, and Health. Washington, DC: Island Press; 2000: 139–56.
50. Marland G, Boden TA, Andres RJ. Global, regional, and national CO2 emissions. In: Trends: A Compendium of Data on Global Change. Oak Ridge, TN: Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy; 2005.
51. United Nations Conference on Environment and Development (the Rio Summit). New York: United Nations; 1992 ("Agenda 21").
52. Patz JA, Epstein PR, Burke TA, Balbus JM. Global climate change and emerging infectious diseases. JAMA. 1996;275:217–23.
53. Haines A, Patz JA. Health effects of climate change. JAMA. 2004;291(1):99–103.
54. Curriero FC, Heiner K, Zeger S, Samet J, Patz JA. Analysis of heat-related mortality in 11 cities of the eastern United States. Am J Epidemiol. 2002;155(1):80–7.
55. Gubler DJ, Reiter P, Ebi KL, et al. Climate variability and change in the United States: potential impacts on vector- and rodent-borne diseases. Environ Health Perspect. 2001;109(2):223–33.
56.
Martens P. Health & Climate Change: Modelling the Impacts of Global Warming and Ozone Depletion. London: Earthscan Publications; 1998. 57. Swiss Reinsurance Company. The Great Warming—A TV Documentary. Stonehaven CCS Canada Corporation; 2003. 58. Schuster CJ, Ellis AG, Robertson WJ, et al. Infectious disease outbreaks related to drinking water in Canada, 1974–2001. Canadian J Public Health. 2005;96(4):254–8. 59. Hrudey SE, Hrudey EJ. Safe Drinking Water: Lessons from Recent Outbreaks in Affluent Nations. London, On: IWA Publishing; 2004. 60. Curriero FC, Patz JA, Rose JB, et al. Analysis of the association between extreme precipitation and waterborne disease outbreaks in the United States, 1948–1994. Am J Public Health. 2001;91:1194–9. 61. Rose JB, Epstein PR, Lipp EK, et al. Climate variability and change in the United States: potential impacts on water- and food-borne diseases caused by microbiological agents. Environ Health Perspect. 2001;109(2):211–22. 62. Rose JB, Daeschner S, Easterling DR, et al. Climate and waterborne outbreaks. J Am Water Works Assoc. 2000;92:77–87. 63. Graczyk TK, Evans BM, Shiff CJ, Karreman HJ, Patz JA. Environmental and geographical factors contributing to contamination of watershed with Cryptosporidium parvum oocystes. Environ Research. 2000;82:263–71. 64. Parry ML, Rosenzweig C. Health and climate change; food supply and risk of hunger. Lancet. 1993;342:1345–7. 65. Molina MJ, Rowland FS. Stratospheric sink for chloro-fluoromethanes; chlorine atom-catalyzed destruction of ozone. Nature. 1974;249:810–4.
936
Environmental Health
66. Rowland FS, Molina MJ. Estimated future atmospheric concentrations of CCl3F (fluorocarbon-11) for various hypothetical tropospheric removal rates. J Phys Chem. 1976;80:2049–56. 67. Farman JC, Gardiner BG, Shanklin JD. Large losses of total ozone in Antarctica reveal seasonal ClOx/NOx interaction. Nature. 1985;315: 207–10. 68. Kerr JB, McElroy CT. Evidence for large upward trends of ultraviolet-B radiation linked to ozone depletion. Science. 1993;262: 523–4. 69. United Nations Environment Programme. Montreal Protocol on Substances that Deplete the Ozone Layer. UNEP. 1987. Last amended September, 1997. Nairobi, Kenya. http:// www.unep.ch/ozone/mont_t. htm. Accessed May 1, 2007. 70. Homer-Dixon TF, Percival V. Environmental Scarcity and Violent Conflict; Briefing Book. Washington, DC and Toronto: American Association for the Advancement of Science and University of Toronto; 1996. 71. Intergovernmental Panel on Climate Change. Climate Change 1995; Impacts, Adapatations and Mitigation; Summary for Policymakers. Geneva: World Meterological Organization, World Health Organization, United Nations Environmental Programme; 1995. 72. Food and Agriculture Organization. State of the World’s Fisheries (Annual Report). Rome: Food and Agriculture Organization; 1995. 73. de Koning HW, Smith KR, Last JM. Biomass fuel combustion and health. Bull WHO. 1985;63:11–26. 74. WHO Commission on Health and Environment. Report of the Panel on Energy. Geneva: World Health Organization; 1992. 75. Nakicenovic N, Grübler A, Ishitani H, et al. Energy primer. In: Climate Change 1995; Impacts, Adaptations and Mitigation; Summary for Policymakers. Geneva: WMO, World Health Organization, United Nations Environmental Programme; 1995: 75–92. 76. Leakey R, Lewin R. The Sixth Extinction: Patterns and the Future of Humankind. Doubleday, New York: Anchor Books; 1995. 77. Wilson EO. The Diversity of Life. Cambridge, MA: Harvard University Press; 1992. 78. Carson R. Silent Spring. 
Boston: Houghton Mifflin; 1962. 79. Aggarwal AR. Cold Hearths and Barren Slopes. London: Zed Books; 1986. 80. Almandares J, Anderson PK, Epstein PR. Critical regions; a profile of Honduras. Lancet. 1993;342:1400–2. 81. Anderson TW. Health Problems in Ukraine related to the Chernobyl Accident. Washington DC: World Bank, Natural Resources Management Division; 1992. 82. Fidler DP. International Law and Public Health: Materials on and Analysis of Global Health Jurisprudence. New York: Transnational Publishers; 2000. 83. Committee on Environmental Policy. Chapter 12. Human health and environment. In: Environmental Performance Reviews: Azerbaijan. Series No. 19. United Nations, New York and Geneva: Economic Commission for Europe; 2004. ISBN 92-1-116888-0/ISSN 1020–4563. 84. Herzman C. Environment and Health in Central and Eastern Europe; a Report for the Environmental Action Programme for Central and Eastern Europe. Washington, DC: World Bank; 1995. 85. Tenenbaum DJ. POPs in Polar Bears: Organochlorines Affect Bone Density. Environ Health Perspect. 2004;112(17):A1011. 86. Nriagu JD. A history of global metal pollution. Science. 1996;272: 223–6. 87. Cohen JE. How Many People Can the Earth Support? New York: Norton; 1995; 25–31. 88. McKeown T, Brown RG. The modern rise of population. Pop Stud. 1995;9:119–37. 89. Cohen JE. How Many People Can the Earth Support? New York: Norton, 1995, pp 25–106.
90. UN Demographic Yearbooks and historical demographic records. 91. United Nations Statistical Office, World Bank, and United Nations Demographic Yearbooks give details. 92. Hern WM. Why are there so many of us? Description and diagnosis of a planetary ecopathological process. Pop Environ. 1990;12(1): 9–37. 93. Rees WE. How should a parasite value its host? Ecol Econ. 1999;25:49–52. 94. International Air Transport Authority. Annual Air Movements Statistics. Montreal: International Air Transport Authority; 1995. 95. Lederberg J. Infection emergent. JAMA. 1996;275:243–4. 96. Roizman B, ed. Infectious Diseases in an Age of Change; The Impact of Human Ecology and Behavior on Disease Transmission. Washington, DC: National Academy Press; 1995. 97. Garrett L. The Coming Plague; Newly Emerging Diseases in a World Out of Balance. New York: Farrar Straus Giroux; 1995. 98. Horton R. The infected metropolis. Lancet. 1996;347:134–5. 99. Glass G, Cheek J, Patz JA, et al. Predicting high risk areas for Hantavirus Pulmonary Syndrome with remotely sensed data: the Four Corners outbreak, 1993. J Emerg Infect Dis. 2000;6:239–46. 100. Kovats RS, Campbell-Lendrum DH, Woodward A, McMichael AJ, Cox J. Early effects of climate change: do they include changes in vector-borne disease? Philos Trans R Soc Lond B Biol Sci. 2001;356: 1057–68. 101. Campbell-Lendrum DH, Prüss-Üstün A, Corvalán C. In: McMichael AJ, Campbell-Lendrum DH, Corvalán C, et al., eds, How much disease could climate change cause? Climate Change and Health: Risks and Responses. WHO, Geneva; 2003. 102. Tong S, Bi P, Donald K, et al. Climate variability and Ross River virus transmission. J Epidemiol Commun Health. 2002;56: 617–21. 103. McMichael AJ. Human culture, ecological change and infectious disease: are we experiencing history’s Fourth Great transition? Ecosyst Health. 2001;7:107–15. 104. McMichael AJ, Woodruff RE, Hales S. Climate change and human health: present and future. Lancet. 2006;367(9513):859–69. 105. 
Patz JA, Graczyk TK, Geller N, et al. Effects of environmental change on emerging parasitic diseases. Int J Parasitol. 2000;30: 1395–405. 106. Githeko AK, Lindsay SW, Confalonieri U, et al. Climate change and vector borne diseases: a regional analysis. WHO Bull. 2000;78: 1136–47. 107. Patz JA, Hulme M, Rosenzweig C, et al. Regional warming and malaria resurgence. Nature. 2002;420:627–8. 108. McMurtry J. The Cancer Stage of Capitalism. London: Pluto Press; 1999. 109. Kennedy P. Preparing for the 21st Century. Random House: New York; 1993. 110. Cobb JB, Jr. Sustaining the Common Good: A Christian Perspective on the Global Economy. Cleveland, Ohio: The Pilgrim Press; 1994. 111. Campbell-Lendrum D, Ebi K, Pires FA, et al. Chapter 16: Volume 3, Policy Responses. Consequences and options for human health. Millenn Ecosyst Assess. 2005:467–86. 112. Patz JA, Engelberg D, Last J. The effects of changing weather on public health. Annual Rev Pub Health. 2000;21:271–307. 113. Patz JA. Public health risk assessment linked to climatic and ecological change. Hum Ecol Risk Assess. 2001;7(5):1317–27. 114. McMichael AJ, Butler CD, Folke C. New visions for addressing sustainability. Science. 2003;302:1919–20. 115. Patz J, Khaliq M. Global climate change and health: challenges for future practitioners. JAMA. 2002;287(17):2283–4. 116. National Research Council. Global Environmental Change: Research Pathways for the Next Decade. Washington, DC: National Academy Press; 1999.
52 117. Suzuki D, Dressel H. Good News for a Change: Hope for a Troubled Planet. Toronto: Stoddart Publishers; 2002. 118. Last J. New pathways in an age of ethical and ecological concern. Int J Epidemiol. 1994;23:1:1–4. 119. Sieswerda LE, Soskolne CL, Newman SC, Schopflocher D, et al. Toward measuring the impact of ecological disintegrity on human health. Epidemiology. 2001;12(1):28–32. 120. Huynen MMTE, Martens P, De Groot RS. Linkages between biodiversity loss and human health: a global indicator analysis. Int J Environ Health Res. 2004;14 (1):13–30. 121. Last JM. The future of public health. Jap J Public Health. 1991;38(10): 58–93. 122. Grandjean P, Bailar JC, Gee D, et al. Implications of the precautionary principle in research and policy-making. Am J Ind Med. 2004;45:382–5. 123. Soskolne CL. On the even greater need for precaution under global change. Int J Occup Med Environ Health. 2004;17(1): 69–76. 124. Aguirre AA, Ostfeld RS, Tabor GM, et al, eds. Conservation Medicine: Ecological Health in Practice. New York: Oxford University Press; 2002.
Human Health in a Changing World
937
125. Brown VA, Grootjans J, Ritchie J, et al, eds. Sustainability and Health: Supporting Global Ecological Integrity in Public Health. Australia: Allen & Unwin; 2005. 126. Aron JL, Patz JA. Ecosystem Change and Public Health: A Global Perspective. Baltimore: Johns Hopkins University Press; 2001. 127. Martens P, Rotmans J. Transitions in a Globalising World. Lisse: Swets & Zeitlinger Publishers; 2002. 128. Martens P, Rotmans J, eds. Climate Change: An Integrated Perspective. Springer, Netherlands; 2003. Kluwer Academic Publishers; 1999. 129. Martens P, McMichael AJ. Environmental Change, Climate and Health: Issues and Research Methods. United Kingdom: Cambridge University Press; 2002. 130. McMichael AJ. Transdisciplinarity in science. In: Somerville M, Rappport D, eds. Transdisciplinarity: Re-Creating Integrated Knowledge. Oxford: EOLS Publisher; 200:203–9. 131. Last JM. A Dictionary of Epidemiology. 4th ed. Oxford University Press 2001:179–80. 132. Lebel J. Health: An Ecosystem Approach. Ottawa, Canada: International Development Research Centre; 2003. 133. The Earth Charter Initiative (2000). http://www.earthcharter.org/
This page intentionally left blank
IV Behavioral Factors Affecting Health
Copyright © 2008 by The McGraw-Hill Companies, Inc.
Health Behavior Research and Intervention
53
Kim D. Reynolds • Donna Spruijt-Metz • Jennifer Unger
Scientists from the Department of Health and Human Services, after reviewing causes of death in the United States, concluded that about half of all deaths could be attributed to a limited number of largely preventable behaviors and exposures.1,2 These scientists estimated external (nongenetic) modifiable causes of mortality for the year 2000 and concluded that tobacco, poor diet and physical inactivity, alcohol consumption, microbial agents, toxic agents, motor vehicle-related fatalities, firearms, sexual behavior, and illicit drug use accounted for the most mortality. Their analysis led Mokdad et al. to argue for increased efforts toward prevention in our health care and public health systems.2 Responding to these disease threats, several agenda-setting documents have been produced to guide the reduction of disease risk through the modification of health behavior. Healthy People 2010 is perhaps the most critical document of this type and defines a set of comprehensive disease prevention and health-promotion objectives for the United States to be achieved by the year 2010.3 Healthy People 2010 was designed to realize two overarching goals: (a) to increase quality and years of healthy life, and (b) to eliminate health disparities. It also selected a set of 10 “leading health indicators” that will be used to measure the health of the nation over the coming years and that reflect major health concerns facing the United States in the first 10 years of the twenty-first century. These leading indicators include physical activity, overweight and obesity, tobacco use, substance abuse, responsible sexual behavior, mental health, injury and violence, environmental quality, immunization, and access to health care. Health behavior research often occurs within two broad categories. First, investigators continuously work toward a better understanding of the factors that explain and predict behavior.
A better understanding of these determinants will provide guidance for the development of interventions that have a reasonable chance of producing changes in behavior. Therefore, basic research on the determinants of health behavior will ultimately improve health promotion interventions. The second broad category involves the development of intervention strategies, usually targeting changes in behavior, with the goal of modifying health behavior as well as physiological risk factors and ultimately morbidity and mortality. This chapter will describe several intervention approaches as well as frequently utilized theories in health behavior research. We will also provide guidance to resources that may help with the development and evaluation of effective theory-based interventions.
THEORIES OF BEHAVIOR CHANGE
Rationale for the use of theory Theories are used in health behavior research in a number of ways. First, theory is used to identify variables that explain and predict behavior and as a result, guide studies conducted to provide empirical evidence on postulated determinants of behavior. Second, theories are used to guide the design of interventions. The selection of variables to target for intervention and the development of specific messages within interventions are both guided by theory. Below we describe a series of theories commonly used in health behavior research and recommend further reading on the utility and use of theory in health promotion and disease prevention.4
Health Belief Model The Health Belief Model is one of the oldest and most widely used theoretical models of health behavior.5 It was created in 1958 by researchers at the U.S. Public Health Service in an attempt to understand why many people failed to take advantage of the free tuberculosis screenings.6 The general assumption of the model is that people will perform health-promoting behaviors if they believe that these behaviors will reduce either their susceptibility to the condition or the severity of the condition, and if they believe that the benefits of performing the behavior outweigh the barriers to performance. For example, the model predicts that people will be more likely to obtain screening tests for a disease if (a) they believe that they personally are at risk for the disease; (b) they believe that the disease would seriously compromise their quality of life; (c) they believe that the screening test can really detect the disease, and that early detection would lead to better outcomes; (d) they believe that they are not blocked from obtaining the screening by financial, schedule, transportation, or other concerns; and (e) something reminds them to obtain the screening. The model has five main components. Perceived susceptibility is the individual’s estimate of the probability of getting the disease. Perceived severity is the individual’s perception of how severe the health and social consequences of the disease would be. Perceived benefits are the positive consequences that the individual believes will occur as a result of performing the health behavior. Perceived barriers are factors that make it difficult for the individual to perform the health behavior. Cues to action are objects or events that remind the individual about the health
941 Copyright © 2008 by The McGraw-Hill Companies, Inc. Click here for terms of use.
942
Behavioral Factors Affecting Health
behavior, such as billboards, TV news stories, or hearing that a friend or a celebrity has the disease. It is important to note that the susceptibility, severity, benefits, and barriers all refer to the individual’s perceptions, which may or may not be accurate. For example, people may underestimate their probability of getting a specific disease or underestimate the severity of the disease, and they may overestimate the barriers preventing them from performing a health-promoting behavior. The Health Belief Model has been applied to numerous other screening behaviors such as mammography and HIV testing, as well as other health-related behaviors such as condom use and physical activity.7 In its original conceptualization, the Health Belief Model is best suited to predict one-time performance of a single health-related action, such as a tuberculosis test. It does not address the issues inherent in long-term maintenance of behavior change. To make the model more appropriate for predicting long-term lifestyle change, it was revised in 1988,8 to include the construct of self-efficacy, or the person’s confidence in his or her ability to adopt healthy behaviors or discontinue unhealthy behaviors.9 With the addition of self-efficacy to the model, the model’s authors acknowledged that even if people feel personally susceptible to a disease, understand the severity of the disease, and are convinced that long-term behavior change will improve their prognosis, they will undertake long-term behavior change only if they believe that they have the ability to accomplish the long-term behavior change successfully. The focus of the Health Belief Model is on the individual’s perception of susceptibility to and severity of the disease, and perception of the relative benefits and barriers of the preventive behavior. Therefore, the goal of interventions is to alter the individual’s unrealistic perceptions.
For example, if a heterosexual woman believes that she is not at risk for HIV because she thinks it is a disease of gay men, the goal of counseling would be to inform her of the actual risk of contracting HIV among women who practice risky sexual or injection drug use behaviors. If she believes that HIV is not a severe health problem because some people claim to have been “cured,” the goal of counseling would be to inform her that HIV currently cannot be cured, and that people living with HIV have a compromised quality of life. If she does not believe that HIV testing has benefits, the goal of counseling would be to inform her that early detection and prompt treatment can greatly improve the health and quality of life of people with HIV. If she believes that there are too many barriers to being tested for HIV (e.g., money, no convenient place to be tested, concerns about confidentiality), the goal of counseling would be to help her brainstorm ways to overcome these barriers (e.g., free, anonymous testing services).
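The Health Belief Model is a conceptual model with no canonical quantitative form, but the relationships among its constructs can be sketched as a toy scoring function. Every field name, scale, weight, and threshold below is hypothetical, chosen only to make the decision logic concrete:

```python
from dataclasses import dataclass

@dataclass
class HealthBeliefs:
    """Perceived (not objective) constructs, each scored 0-1 for illustration."""
    susceptibility: float   # perceived probability of getting the disease
    severity: float         # perceived seriousness of its consequences
    benefits: float         # perceived benefits of the preventive behavior
    barriers: float         # perceived barriers to performing it
    self_efficacy: float    # confidence in ability to perform the behavior
    cue_present: bool       # a cue to action (billboard, reminder, friend's diagnosis)

def likely_to_act(b: HealthBeliefs, threshold: float = 1.0) -> bool:
    """Toy rule: act when perceived threat, net benefit, and self-efficacy
    together exceed an arbitrary threshold; a cue to action adds a nudge."""
    threat = b.susceptibility * b.severity   # susceptibility weighted by severity
    net_benefit = b.benefits - b.barriers    # benefits minus barriers
    score = threat + net_benefit + b.self_efficacy
    if b.cue_present:
        score += 0.25
    return score >= threshold

# A woman who feels at risk, sees the disease as serious, perceives few
# barriers to free anonymous testing, and is reminded by a news story:
print(likely_to_act(HealthBeliefs(0.7, 0.9, 0.8, 0.2, 0.6, True)))
```

Counseling of the kind described above maps onto the inputs: correcting an unrealistic susceptibility estimate raises `susceptibility`, and brainstorming around cost or confidentiality lowers `barriers`.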
Social Cognitive Theory Social cognitive theory (SCT) is a comprehensive theory of behavior that has been used to explain a wide range of human behaviors including health behaviors.10–13 SCT is one of the most widely used theories for the development of behavioral interventions.13 SCT assumes that characteristics of the environment, the person, and the behavior itself influence one another in a process referred to as reciprocal determinism. The environment is typically defined as variables that are external to the individual and include the physical environment (e.g., urban design, presence of fast food establishments) and social environment (influence of family and friends, media role models). Personal characteristics include variables internal to the individual including various attitudes and beliefs. The theory supposes
that if one of the three factors changes (environment, person, behavior), it is likely to produce changes in the other two factors. For example, a person may have a network of friends who are all sedentary. In this case, meeting and getting to know a new friend (environmental factor) who is physically active may lead to the adoption of new activities that involve more physical activity such as going for walks or trying a team sport (behavior), and this change in behavior may lead to a positive change in attitudes toward exercise (personal factors). Embedded within each of these larger factors (environment, person, behavior) are a set of more specific variables. Intervention programs are typically designed to influence one or more of these specific variables. In health behavior research, intervention developers are usually attempting to influence environmental variables, or more commonly, personal variables as a means of modifying behavior. Personal factors include behavioral capability, defined as the essential knowledge and motor skills needed to engage in a specific behavior. Outcome expectations and expectancies are, respectively, the results we anticipate when a behavior is enacted and the value we attach to that outcome. For example, if a person believes that a dietary change to include less fat will make them feel less sluggish, this might be seen as a positive outcome of that behavior change. Outcome expectancies can be both positive and negative. Goal setting and self-monitoring comprise components of self-control within the theory. That is, people frequently set goals and monitor their progress toward those goals. This is a naturally occurring process but can also be utilized by health promotion researchers by designing programs that help people set realistic yet challenging goals for health behavior change, and providing them with tools for monitoring that progress.
Observational learning involves learning skills and values related to health behaviors from observing models. This is frequently used by intervention designers to help individuals learn key skills for behavior change such as the selection of healthier foods in a restaurant or the refusal of tobacco products by adolescents. Perceived self-efficacy is the confidence a person feels in their ability to engage in a behavior. Perceived self-efficacy is usually thought of in the context of the barriers that a person must overcome to perform a behavior. Self-efficacy is higher when few barriers limit an individual’s ability to perform a behavior. Finally, SCT describes a role for the influence of emotional arousal in shaping behavior. SCT provides strong guidance in the development of interventions by identifying variables that can form the basis of intervention activities. In addition, many of the variables described in SCT (e.g., perceived self-efficacy, outcome expectancies) have been related to a diverse set of health behaviors, boosting our confidence in the use of the model to guide intervention design.
Self-Determination Theory Self-determination theory (SDT) asserts that people have three basic psychological needs: competence (feeling effective), relatedness (feeling connected to others), and autonomy (perception of self as the source of one’s own behavior).14 These needs are assumed to hold across age, gender, and culture, although the means to satisfy these needs may differ across various groups. Social, contextual, or environmental factors may either support or thwart these basic needs. SDT conceives of self-determination as a continuum (from non-self-determined to self-determined). Levels of self-determination coincide with types of motivation (from amotivation to intrinsic motivation) and regulatory styles (Fig. 53-1).
Figure 53-1. Regulatory styles and intrinsic motivation: amotivation (nonregulation), extrinsic motivation (external, introjected, identified, and integrated regulation), and intrinsic motivation (intrinsic regulation), with the quality of behavior ranging from non-self-determined to self-determined. (Source: Adapted from Deci & Ryan, 2002.)
Cognitive evaluation theory (CET) explains effects of contextual events (such as rewards, deadlines, praise) on intrinsic motivation, behavior, and experience.97,98 The CET is most useful for studying behavior for which people exhibit some interest or motivation. Organismic integration theory (OIT) examines how to transform externally regulated behaviors into self-regulated behaviors and addresses the concept of internalization, especially with respect to the development of extrinsic motivation. The continuum of self-determination (Fig. 53-1) is part of the OIT. Amotivation is a lack of intention to act. In external regulation, the motivation is to obtain rewards, avoid punishment, and satisfy external demands. In introjected regulation, behavior is performed to avoid guilt or shame, or to enhance feelings of self-worth. In identified regulation, behavior (or the outcomes of that behavior) is accepted as personally important. In integrated regulation, behavior is completely in line with personal values, goals, and needs but still done to achieve personally important outcomes. In intrinsic regulation, behavior is carried out purely for inherent interest and enjoyment.14 Causality orientations theory (COT) describes how people incorporate social influences into their motivational styles—i.e., whether they do things to please themselves (autonomously oriented), because they think they “should” (controlled orientation), or without any particular intention (impersonal orientation).99 Basic needs theory holds that people have three basic needs (as discussed in the main text). According to this mini-theory, there will be a positive relationship between goal attainment and well-being only if goal attainment satisfies a basic psychological need.100–102
Intrinsic motivation is considered the optimal state of autonomy and challenge, and is associated with feelings of satisfaction, enjoyment, competence, and a desire to persist. According to SDT, the closer a behavior is to intrinsic motivation on the self-determination continuum (Fig. 53-1), the more likely people will be to participate in that behavior. At present, SDT is made up of four mini-theories that build on these core theoretical concepts (see Fig. 53-2). SDT is a relatively new theory; however, several interventions, including interventions to reduce cardiac risk,15 improve diet,16 and enhance physical activity,17 are underway. One study trained physicians to support autonomy in a practice-based smoking cessation intervention.18 Physicians employed an autonomy-supportive style in a brief intervention with half of the nicotine-dependent subjects, and a controlling style with the other half. Results showed that subjects in the autonomy-supportive motivational interviewing (MI) condition were significantly more autonomously motivated to quit. Subjects who were more autonomously motivated to quit were more likely to have quit smoking at 6, 12, and 30 months after the intervention and to remain continuously abstinent over the 30-month period.
Figure 53-2. Four mini-theories of self-determination theory.
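The self-determination continuum of Fig. 53-1 can be rendered as an ordered enumeration. This is an illustrative sketch (the class and function names are ours, not SDT’s); only the ordering of regulatory styles is taken from the figure:

```python
from enum import IntEnum

class Regulation(IntEnum):
    """Regulatory styles ordered from non-self-determined to self-determined."""
    NONREGULATION = 0  # amotivation: no intention to act
    EXTERNAL = 1       # rewards, punishment, external demands
    INTROJECTED = 2    # avoiding guilt or shame, protecting self-worth
    IDENTIFIED = 3     # behavior accepted as personally important
    INTEGRATED = 4     # behavior aligned with personal values and goals
    INTRINSIC = 5      # inherent interest and enjoyment

def more_self_determined(a: Regulation, b: Regulation) -> Regulation:
    """SDT predicts the style closer to intrinsic motivation is the one
    more likely to sustain participation in the behavior."""
    return max(a, b)

print(more_self_determined(Regulation.EXTERNAL, Regulation.IDENTIFIED).name)
```

Using an ordered enumeration makes the theory’s central comparison (“closer to intrinsic motivation”) a direct comparison between values.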
Transtheoretical Model (Stages of Change) James Prochaska and Carlo DiClemente originally developed the transtheoretical model (TTM) in order to integrate the principles of behavioral change from the major psychotherapy and behavior change theories.19–22 The model was based on observations that people appear to go through similar stages of change no matter what kind of intervention they undergo. The TTM has two basic dimensions: stages of change and processes of change. Stages of change refers to an orderly sequence of changes through which people pass. According to the TTM, people progress through six stages of change (see Fig. 53-3) in the process of changing any health-related behavior. Some people move more quickly than others do, but the order is assumed and no stage is skipped. Processes of change refer to different techniques or intervention approaches that help people to progress through stages of change in order to achieve the desired behavioral changes. The TTM has identified 10 processes of change (see Fig. 53-4). The TTM indicates which processes of change will be most effective at each stage. Matching the stage of change in which the client finds herself with the
(1) Precontemplation is the stage in which people are not considering changing their behavior within the near future (defined as the next 6 months).21 (2) Contemplation is the stage in which people intend to change their risky behavior soon (defined as the next 6 months).21 (3) Preparation is the stage in which people intend to change in the immediate future (defined as the next month).103 (4) Action is the stage in which people have made explicit changes to their behavior, environment, and lifestyle for less than 6 months. These changes are observable and therefore seen as action-oriented.103 (5) Maintenance is the stage in which people continue their behavior changes and do not relapse into a previous stage for more than 6 months.104 (6) Termination is the stage in which people are no longer susceptible to the temptation of relapse and possess complete self-efficacy.21
Figure 53-3. Six stages of change.
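The stage definitions above amount to a simple decision rule over two quantities: how soon the person intends to change and how long any explicit change has been maintained. A minimal sketch of such a stage classifier follows; the thresholds come from the definitions above, but the function name, inputs, and units are illustrative rather than part of the TTM literature:

```python
def classify_stage(months_intending, months_since_change, relapse_proof=False):
    """Assign a transtheoretical-model stage of change (illustrative).

    months_intending: how soon the person intends to change
        (None = no intention within the near future).
    months_since_change: how long an explicit behavior change has been
        maintained (None = no change made yet).
    relapse_proof: True if the person is no longer susceptible to
        temptation and has complete self-efficacy.
    """
    if months_since_change is not None:
        if relapse_proof:
            return "termination"   # no temptation, complete self-efficacy
        if months_since_change >= 6:
            return "maintenance"   # change sustained for more than 6 months
        return "action"            # explicit change made within the past 6 months
    if months_intending is None or months_intending > 6:
        return "precontemplation"  # no change intended within the next 6 months
    if months_intending <= 1:
        return "preparation"       # intends to change in the immediate future
    return "contemplation"         # intends to change within the next 6 months

print(classify_stage(months_intending=3, months_since_change=None))  # contemplation
print(classify_stage(months_intending=None, months_since_change=8))  # maintenance
```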
Behavioral Factors Affecting Health
Ten processes of change
Experiential processes of change: (1) Consciousness raising concerns increased awareness about the risky behavior: its causes, consequences, and cures. Interventions that utilize confrontation, feedback, and interpretation may aid in this process.21 (2) Dramatic relief concerns increased emotional experiences and the subsequently reduced affect after the appropriate action is taken. Personal testimonies, role-playing, and media campaigns may aid in this process.21 (3) Environmental reevaluation concerns the cognitive and affective assessments of how the presence or absence of the problem behavior affects the person’s social environment. It also reflects the feeling that the person can serve as a role model for others. Family interventions and empathy training may aid in this process.21 (4) Self-reevaluation concerns the affective and cognitive assessments of self-image with and without the risky behavior. Healthy role models and imagery may aid in this process.21 (5) Social liberation concerns the increase in social networking and support opportunities for people isolated by their behavior change. This is very important in marginalized and depressed people in order to maintain the behavior change. Empowerment procedures and advocacy can be especially useful in health promotion interventions with impoverished or minority populations.21 Behavioral processes of change: (6) Stimulus control involves removing the cues that trigger unhealthy habits and replacing them with cues for healthy habits. Strategies that might help this change include self-help groups, avoidance of the negative stimuli, and changing the environment.21
(7) Helping relationships are defined as the combination of trust, caring, acceptance, and openness that can help people end unhealthy behaviors. Support and rapport building are important in this process, and social support is key in behavior change.21 (8) Counterconditioning involves learning the healthier alternative behaviors that substitute for the unhealthy behaviors. Such substitutes may include relaxation and desensitization.21 (9) Contingency management involves providing consequences for moving in a positive or negative direction. While this process can include punishments, self-changers are more likely to rely on rewards for good behavior changes. Reinforcements are key and may be in the form of contingency contracts and group recognition.21 (10) Self-liberation involves the belief that people can change their behavior and the commitment and recommitment to act. Public testimonies and New Year’s resolutions may aid in this process.21
Figure 53-4. Processes of change (matched to stages of change).
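Figure 53-4’s pairing of stages with processes can be expressed as a simple lookup that an intervention planner might use to select techniques for a client’s current stage. The sketch below uses one matching commonly reported in the TTM literature; exact assignments vary across publications, so the table and names are illustrative:

```python
# Illustrative stage-to-process matching (one common pairing reported in the
# TTM literature; exact assignments vary by publication).
MATCHED_PROCESSES = {
    "precontemplation": ["consciousness raising", "dramatic relief",
                         "environmental reevaluation"],
    "contemplation":    ["self-reevaluation"],
    "preparation":      ["self-liberation"],
    "action":           ["contingency management", "helping relationships",
                         "counterconditioning", "stimulus control"],
    "maintenance":      ["stimulus control", "counterconditioning",
                         "helping relationships"],
}

def recommended_processes(stage):
    """Return the processes of change matched to a client's stage."""
    return MATCHED_PROCESSES.get(stage.lower(), [])

print(recommended_processes("Preparation"))  # ['self-liberation']
```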
appropriate processes of change facilitates movement through the stages of change to achieve intervention goals. The construct of decisional balance was developed to describe the stage-related process of weighing the pros and cons of any health-related behavior.23 Decisional balance changes as people move through the stages of change.24 For instance, during precontemplation the perceived benefits of smoking outweigh the perceived risks. As the smoker progresses into the action and maintenance stages, the perceptions of the negative consequences of smoking overtake the positive. Finally, self-efficacy is the confidence, gained as people progress through the stages, that they can cope with temptations that might cause them to relapse into their unhealthy habits. This construct was added from Bandura’s self-efficacy theory.25 Temptation is the intensity of the urge to engage in habitual behavior. Three factors cause temptation: emotional distress or negative affect, craving, and positive social situations.21 TTM has been used in a number of interventions to change an array of behaviors including smoking, diet, exercise, delinquent behaviors, and condom use.26 Most often, TTM is used to classify subjects according to readiness to change the targeted behavior (stages of change). Intervention materials and activities appropriate for the individual’s readiness to change are then delivered. The materials and activities deemed appropriate for a given stage of readiness to change are determined from the processes of change described by the framers of the TTM. For instance, a British study used the TTM to recruit and classify subjects into a smoking cessation program.27 Data from a baseline questionnaire were used to categorize intervention subjects
according to stages of change for smoking cessation. Subjects received a personalized letter describing their stage of change and a packet tailored to that stage. This process was repeated three times over six months, with subjects being reclassified according to new questionnaire data at each pass.
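The repeated classify-and-mail procedure in the British study can be sketched as a loop: at each pass, each subject is re-staged from new questionnaire data and sent the packet matched to that stage. All names, the packet identifiers, and the toy staging rule below are hypothetical; the sketch only illustrates the workflow described above:

```python
# Hypothetical sketch of a stage-matched mailing protocol, repeated over
# several passes as in the smoking-cessation study described above.
PACKETS = {  # stage -> tailored packet identifier (illustrative)
    "precontemplation": "packet_pc", "contemplation": "packet_c",
    "preparation": "packet_p", "action": "packet_a", "maintenance": "packet_m",
}

def run_pass(subjects, stage_from_questionnaire):
    """Classify each subject from current questionnaire data and record the
    personalized mailing (stage letter plus stage-matched packet)."""
    mailings = []
    for subject_id, questionnaire in subjects.items():
        stage = stage_from_questionnaire(questionnaire)
        mailings.append((subject_id, stage, PACKETS[stage]))
    return mailings

# Toy staging rule; in practice the stage is derived from intention and
# behavior items on the questionnaire.
def toy_stager(questionnaire):
    return questionnaire["stage"]

subjects = {"s1": {"stage": "contemplation"}, "s2": {"stage": "action"}}
mailings = run_pass(subjects, toy_stager)  # one of three passes over six months
print(mailings)  # [('s1', 'contemplation', 'packet_c'), ('s2', 'action', 'packet_a')]
```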
Theory of Reasoned Action/Theory of Planned Behavior The theory of reasoned action28 and the theory of planned behavior29 were created to explain and predict a wide variety of human behaviors (see Fig. 53-5). Subsequently, the theories have been applied specifically to health-risk and health-protective behaviors. According to the theory of reasoned action,28 before people perform a behavior, they go through a decision-making process that leads to the formation of an intention to perform the behavior. The decision-making process involves making two types of judgments. The first type of judgment is the attitude toward the behavior, or the person’s perceptions of the pros and cons of performing the behavior. Attitudes toward the behavior consist of beliefs about the expected outcomes of the behavior and the importance of these outcomes to the individual. To form an attitude toward a behavior, a person will mentally list all the positive consequences that are likely to occur as a result of performing the behavior (e.g., for exercising, these might be weight loss, enjoyment, lowered
Figure 53-5. The theories of reasoned action and planned behavior. (Schematic: behavioral beliefs and evaluations of behavioral outcomes form the attitude toward the behavior; normative beliefs and motivation to comply form the subjective norm; attitude and subjective norm determine behavioral intention, which leads to behavior; perceived behavioral control, added by the theory of planned behavior, influences both intention and behavior.)
blood pressure, increased cardiovascular endurance, etc.) and all the negative consequences that are likely to occur as a result of performing the behavior (e.g., soreness, lack of time to do other tasks, tiredness). The person will then consider the importance of each expected outcome (e.g., perhaps weight loss is extremely important to the person, but feeling sore does not bother the person much). A mental summary of all the perceived positive and negative consequences, and the importance of each, becomes the person’s attitude toward the behavior. The second type of judgment is the person’s subjective perception of the social norms surrounding the behavior. Social norms are the person’s beliefs about what other people think the person should do. For example, if a middle-aged woman has many friends and family members who exercise, she may perceive that her social network members would want her to exercise too. If she has few friends and family members who exercise, she may perceive that her social network members would not be supportive of her efforts to exercise. Each person’s social network consists of multiple individuals, including family members, friends, coworkers, neighbors, acquaintances, comembers of other organizations, etc. The opinions of some of these individuals are very important to the person, whereas the opinions of others are not as important. When thinking about social norms, the person will decide whose opinions really matter. For example, the middle-aged woman may be very concerned about whether her husband and children are supportive of her decision to exercise, but she may not care whether or not her neighbors approve. She will form her judgment about the social norms based on her perception of how her husband and children, whose opinions are important to her, would react to her decision to exercise. 
The theory of reasoned action makes the assumption that intentions to perform a behavior will lead to performance of the behavior (i.e., the person who has an intention to exercise will in fact exercise). However, there are many situations in which intentions do not predict behavior. For example, a lack of time, transportation, childcare, money, facilities, etc., could prevent the person from exercising. Similarly, a person may intend to obtain regular health screening tests but may lack resources to pay for the tests. The theory of planned behavior29 was created to address this issue. The theory of planned behavior is identical to the theory of reasoned action, except that it adds the component of perceived behavioral control. The revised theory specifies that the intentions in the theory of reasoned action will lead to performance of the behavior only if the individual perceives that he or she is capable of performing the behavior. Interventions based on the theory of reasoned action and the theory of planned behavior can focus on one or more of the theories’ components. The ultimate goal is to increase the likelihood that an individual will perform a health-promoting behavior (or decrease the likelihood of performing a health-risk behavior). According to the theory of planned behavior, the likelihood of performing a behavior can be influenced by increasing the intention to perform the behavior and by increasing the individual’s perceived control over the behavior. Therefore, interventions can focus on increasing intentions and/or increasing
Health Behavior Research and Intervention
perceived behavioral control. Intentions can be modified by modifying the individual’s attitude toward the behavior and/or the subjective norms. To improve an individual’s attitude toward the behavior, an intervention would attempt to influence the individual’s behavioral beliefs (e.g., the belief that exercise will increase respiratory fitness) and the evaluation of the behavioral outcomes (e.g., that increased respiratory fitness will lead to a higher quality of life). To improve an individual’s subjective norms, an intervention would focus on identifying people in the social network who are supportive of the new behavior and/or helping the individual to avoid being negatively influenced by social network members who undermine their attempts to change behavior. To improve perceived behavioral control, an intervention would teach specific strategies to adopt new healthy behaviors and overcome barriers. Like the health belief model, the theories of reasoned action and planned behavior make the assumption that humans are rational actors who carefully evaluate the costs and benefits of performing a behavior and select their course of action accordingly. These models do not specifically address impulsive, spur-of-the-moment decisions about health behaviors, nor do they address people’s resistance to changing established behavioral habits. However, these models offer a useful framework to guide theory-based intervention strategies.
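In Fishbein and Ajzen’s formulation, the attitude and subjective-norm judgments described above are expectancy-value sums: attitude is the sum of each behavioral belief weighted by the evaluation of its outcome, and the subjective norm is the sum of each normative belief weighted by the motivation to comply with that referent. The small numeric sketch below uses the exercise example from the text; all belief strengths, scale ranges, and regression weights are invented for illustration:

```python
def expectancy_value(beliefs_and_weights):
    """Sum of belief strength x evaluation (or motivation-to-comply) weight."""
    return sum(belief * weight for belief, weight in beliefs_and_weights)

# Attitude toward exercising: (belief strength, outcome evaluation), -3..+3 scales.
attitude = expectancy_value([
    (3, 3),    # "exercise causes weight loss" x "weight loss is very important"
    (2, 1),    # "exercise lowers blood pressure" x mildly important
    (2, -1),   # "exercise causes soreness" x "soreness bothers me a little"
])

# Subjective norm: (normative belief, motivation to comply with that referent).
subjective_norm = expectancy_value([
    (3, 3),    # "my husband thinks I should exercise" x strong motivation to comply
    (-1, 0),   # "my neighbors disapprove" x no motivation to comply
])

perceived_control = 2  # the component added by the theory of planned behavior

# Intention as a weighted combination; the weights stand in for empirically
# estimated regression coefficients.
intention = 0.5 * attitude + 0.3 * subjective_norm + 0.2 * perceived_control
print(attitude, subjective_norm, round(intention, 2))  # 9 9 7.6
```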
BEHAVIORAL INTERVENTION STRATEGIES
Motivational Interviewing Motivational interviewing (MI) is a collaborative counseling technique aimed at helping people increase their motivation and readiness to make behavioral changes.30 MI counselors use nonjudgmental, empathetic encouragement to create a positive interpersonal collaboration that is conducive to self-examination, understanding, and change.31 One of the goals of MI is to evoke intrinsic motivation for change.32 The counselor encourages the client to explore how their current health-related behaviors might conflict with their health goals, to evaluate their own reasons for and against behavior change, to discover behavior change strategies that are personally relevant, and to convince themselves that they can make changes. Core strategies in MI include agenda setting and eliciting change talk. In agenda setting, clients determine the goals for change so that they are active and willing participants. Eliciting change talk involves having participants generate self-motivational statements and is based on the premise that people are more likely to act on plans they develop themselves.31 MI interventions can be particularly effective for populations who are at a low level of readiness to change their behavior, and can be tailored to individual needs and circumstances,33 including making them developmentally appropriate.34,35 Additionally, MI can be used to tailor interventions for cultural competence because the client sets the goals rather than the clinician. MI has been used in dietary and physical activity interventions, smoking cessation, substance abuse prevention, treatment adherence in diabetes, psychosis, and several other chronic illnesses, and HIV-risk prevention.36–39 For instance, MI has been used successfully in interventions involving minorities to increase fruit and vegetable consumption and improve dietary compliance.34 MI for addiction counseling or psychotherapy may involve multiple extended sessions.
Alternatively, in public health and medical settings, patient encounters ranging from 10 to 15 minutes have proven to be effective tools for behavior change in diverse areas including problem drinking, smoking, treatment adherence in diabetes, and weight loss.40,41
Classes and Curricula Small group classes and curricula, developed for schools and other settings, are a common intervention strategy to modify health behavior. A wide range of behaviors has been targeted by curricula (e.g., violence, stress, substance use, diet, exercise habits). There are four main caveats to using a classroom curriculum: (a) both teachers and administrators
need to be supportive, (b) instructors must be properly trained, (c) the curriculum needs to be appropriate (developmentally, culturally, behaviorally) for the target group, and (d) the curriculum chosen should be known to be effective unless it is being delivered as part of an intervention study. Identifying scientifically rigorous and effective curricula can be difficult. Some curricula are well known and well liked but have not actually been proven to produce behavior change. Organizations such as the Substance Abuse and Mental Health Services Administration (SAMHSA) offer listings of programs that have been reviewed by expert panels and chosen as model programs. Model programs are defined as “well-implemented, well-evaluated programs, meaning they have been reviewed . . . according to rigorous standards of research.”42 In 1996, the Center for the Study and Prevention of Violence (CSPV) at the University of Colorado at Boulder, with funding from the Centers for Disease Control and Prevention and several other agencies, designed and launched a national initiative to identify what they have named “Blueprint model programs.” These programs meet a strict scientific standard of program effectiveness in reducing adolescent violent crime, aggression, delinquency, and substance abuse. Currently, more than 600 programs have been reviewed, but only 11 have been deemed Blueprint model programs by an expert panel.43 An example of a highly effective Blueprint model program is Project Toward No Tobacco Use (TNT), developed by Steve Sussman.44 Project TNT is a school-based intervention using a comprehensive, classroom-based curriculum designed to prevent or reduce tobacco use in youth from the 5th through the 10th grade. This program includes life/social-skills training, peer-resistance education, classroom-based skills development, and media education to counter alcohol and tobacco advertising.
The effectiveness of Project TNT has been demonstrated in randomized controlled trials. To date, no organization has developed a system for reviewing and recognizing model programs for physical activity and nutrition, although the Guide to Community Preventive Services has reviewed various approaches to the modification of physical activity.45,46 An example of an effective intervention to reduce obesity in children is Gortmaker’s Planet Health.47 Planet Health is a school-based interdisciplinary curriculum focused on improving the health and well-being of 6th through 8th grade students while building and reinforcing skills in language arts, math, science, social studies, and physical education. Through classroom and physical education activities, Planet Health aimed to increase activity, improve dietary quality, and decrease inactivity, with a particular focus on decreased TV viewing. The intervention and curriculum were based on the social cognitive theory (discussed later) and the behavioral choice theory.48 Planet Health reduced television viewing among both girls and boys, increased fruit and vegetable consumption in girls, and lowered body mass index (BMI) in girls.49
Print Communications Several types of print communications are used to produce behavior change, including generic, tailored, and targeted communications. General-audience print communications provide uniform health information to the broadest possible audience. Tailored print communications adapt health education messages to the characteristics, needs, and interests of individuals. Targeted interventions50 provide information that is adapted to the characteristics of a particular audience based on age, gender, ethnicity, or other factors but do not provide information unique to the individual reader.51 Tailored and targeted communications typically use theoretical models of health behavior in their design. In one tailored intervention to increase mammography in women, participants were interviewed to assess stage of readiness, perceived barriers to and benefits of getting a mammogram, demographics, and risk status. From this information, personalized letters were generated. Each letter included a drawing of a woman matched to the age and race of the recipient and messages tailored to the recipient’s stage of readiness, personal barriers, and beliefs about mammography and breast cancer.52 An example of a targeted intervention to increase
mammography in Latina women used a large-scale population-based survey of Latina women to understand their screening behavior, knowledge, and attitudes about cancer, as well as their reading levels and other preferences. This information was used to develop a brightly colored booklet, including testimonials from Latinas, based on the aggregate characteristics of this population, which was sent to all participants in the intervention. Tailored interventions are often, although not always, more effective than generic nontailored interventions at producing changes in attitudes and behavior.53–55 Although more effective in many cases, tailored interventions are more costly, requiring prescreening to identify levels on the selected tailoring variables, time to incorporate the tailored messages into print communications, and, when larger samples are used, a computerized “expert system” to review the tailoring algorithm and assign tailored messages. Tailoring can be costly and, when delivered to large populations, may not always be practical.56 Print communications are often more effective when combined with other intervention strategies. One study found that meeting with a health advisor along with receiving tailored print communications was more effective than receiving the printed tailored materials alone.57
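The tailored mammography letter described above is essentially message assembly from a participant profile: each tailoring variable (stage of readiness, personal barriers, demographics for artwork matching) selects a message block from a library, and the blocks are concatenated into a personalized letter. A hypothetical sketch of such an “expert-system” assembly step follows; the message texts, variable names, and profile fields are all invented:

```python
# Hypothetical message library keyed by tailoring variables (illustrative text).
STAGE_MESSAGES = {
    "precontemplation": "Many women your age have found mammograms reassuring.",
    "contemplation": "You have been thinking about a mammogram; here is how to start.",
}
BARRIER_MESSAGES = {
    "cost": "Low-cost screening programs are available in your area.",
    "fear": "Most mammograms show no sign of cancer.",
}

def assemble_letter(profile):
    """Concatenate message blocks selected by the participant's profile."""
    blocks = [f"Dear {profile['name']},"]
    blocks.append(STAGE_MESSAGES[profile["stage"]])          # stage-matched message
    blocks.extend(BARRIER_MESSAGES[b] for b in profile["barriers"])
    # Placeholder for artwork matched to the recipient, as in the intervention.
    blocks.append(f"[drawing: woman, age group {profile['age_group']}]")
    return "\n".join(blocks)

letter = assemble_letter({
    "name": "Ms. Rivera", "stage": "contemplation",
    "barriers": ["cost"], "age_group": "50-59",
})
print(letter)
```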
Targeted Electronic Media A number of communication strategies have been developed to extend patient contact beyond the face-to-face clinical encounter.58–62 Telephone contacts by a nurse or other health care provider have been used for some time to enhance contact and intervention effectiveness in disease management and clinical care. Recently, efforts have been made to automate telephone contacts for disease management and prevention, with positive results.59–63 E-mail strategies, Internet approaches, Palm Pilots, and other electronic strategies have also been used with greater frequency in recent years. Integrated systems have been developed that utilize multiple electronic channels of communication, allowing target individuals to transmit test results to health care providers and allowing patients to receive targeted or tailored feedback from health care providers and health educators.64–66 Electronic approaches allow repeated and more frequent contact, without the need for transportation to central health care provider locations, thereby enhancing compliance with treatment regimens and producing greater change in other behaviors of interest (e.g., smoking cessation, physical activity). In addition, these approaches take advantage of rapidly evolving electronic technology to provide new and potentially effective intervention strategies, are more feasible under capitated systems of health care than under fee-for-service systems, and complement a trend toward the active involvement of patients in partnering with their health care providers.58 Evidence has been provided for the acceptance and use of these procedures by target populations64–68 and for the efficacy of these approaches in modifying behavior related to diabetes self-management60–62 and depression.59 These approaches are still relatively new, and research is needed to develop improved intervention strategies, followed by widespread dissemination of those that are most effective.
The dissemination of effective strategies is dependent on a number of factors clearly described by Glasgow and his colleagues.69 These applications must be reliable and user-friendly for clinicians and for the patients or other targets of the intervention. In addition, they must make primary care practices more efficient and allow clinicians to allocate time spent on behavioral counseling to other critical primary care responsibilities. Finally, Glasgow notes that to achieve widespread adoption, electronic intervention strategies must be cost neutral to practices. In sum, targeted electronic media hold promise for extending the availability of disease self-management and prevention intervention through clinical care settings.
Mass Media Mass media is an important tool for health promotion and includes radio, television, and other media reaching a wide heterogeneous audience. Exposure to mass media messages can inform the public about
the importance of various health issues and give them a framework for thinking about these issues.70 Mass media messages can affect people’s attitudes and behavior by providing information and by influencing their emotions.71 The effectiveness of a particular message depends on numerous factors: the characteristics of the message (e.g., the information that the message conveys about the disease or health behavior); the source (e.g., the credibility and likeability of the spokesperson); the channel (e.g., television, radio, magazines, billboard, internet); and the receiver (e.g., the characteristics of the people viewing or hearing the message, including their demographic characteristics, health status, and awareness of health issues). For a mass media message to change people’s behavior, people must notice the message, pay attention to it, understand it, remember it, retrieve the information from memory at the appropriate time, and use the information conveyed to select new behaviors.71 The advantage of mass media is that it can reach a large number of people quickly. The disadvantage is that it is difficult to create messages that are relevant and memorable to all members of the target population. The statewide antismoking media campaign in California72,73 is an example of a mass media intervention for health promotion. This long-running, highly publicized statewide media campaign disseminates antitobacco messages through various communication channels, including television, radio, print media, and billboards. The campaign’s ads vary in intended target audience (e.g., adolescents, adult smokers, specific ethnic groups) and in the tobacco-related issue addressed (e.g., preventing youth access to tobacco, encouraging smokers to call a quitline, portraying the tobacco industry as manipulative). The statewide media campaign has reached the vast majority of California youth and adults and has maintained its high visibility for over 10 years.74
Policy Policy change is another important tool for health promotion. New policies cause people to alter their behavior almost immediately, regardless of whether or not they have been convinced of the necessity of changing their behavior. For example, when communities implement restaurant smoking bans, the restaurant owners immediately prohibit smoking in their establishments (or risk paying a fine), and their patrons refrain from smoking (or risk being asked to leave the restaurant). In this way, nonsmokers in the restaurant are immediately protected from exposure to secondhand smoke, even before the smoking patrons and restaurant owners have changed their attitudes about smoking in restaurants. Over time, the public observes the lack of smoking in restaurants, and smoking in restaurants becomes viewed as an unacceptable behavior.75 In the long term, this may encourage smokers to quit or reduce their smoking so that they can dine comfortably in nonsmoking restaurants. In the short term, the policy protects restaurant staff and other customers from exposure to secondhand smoke. Policies can be set at localized levels (e.g., by specific communities, institutions, or buildings) or at higher levels (e.g., county, state, or federal governments). When policies are set at local levels, it is often easier to identify and address the specific concerns of the people who will be affected by the policy, to obtain their support for the policy, and to implement and enforce the policy. However, local-level policies affect smaller numbers of people. When policies are set at higher levels, they affect larger numbers of people, but they may be perceived as less personally relevant, and large bureaucracies may be needed to implement and enforce them. The advantage of policy interventions is that they can produce behavior change quickly, even before people’s attitudes are changed.
The disadvantages of policy interventions are that they may be viewed as draconian, and they require consistent enforcement to be effective.
Built Environment Health behavior researchers have recently given more emphasis to the environment as a predictor of health behavior and as a strategy for changing it. The built environment usually includes not only the physical structure of
the cities, towns, and rural settings in which we live76 but can also include legal- and policy-level determinants of behavior.77–80 Intervention strategies that include the built environment have substantial potential to impact an entire population,77,78 rather than just select individuals.78,79 Supportive built environments can also facilitate individual-level behavior change. For example, making more fresh fruits and vegetables available in the home, at work, and in convenience stores may help some who have recently decided to change their diet eat more fruits and vegetables. Environmental and policy approaches can continue to influence behavior over time without requiring continued and active intervention by public health professionals. For example, building neighborhoods with sidewalks and safe street crossings will increase the frequency of walking even if a health-promotion program is never delivered to people who live in those neighborhoods. Environmental and policy interventions have had success in tobacco control through programs such as taxation and bans on indoor smoking.81 In physical activity, simple interventions such as posting signage to encourage people to use stairways have been effective. In addition, urban design that fosters physical activity has been a focus of great interest for addressing the ongoing epidemic of obesity as well as increasing quality of life.82–85 The number of intervention strategies that have directly manipulated an environment is limited, due to the lack of a full understanding of the influences of environment on behavior and to the relatively high cost and long timeframe of producing changes in the built environment. Theories of environmental influence and intervention strategies using the environment are emerging areas of research.
Physicians and other professionals should be aware of the important role that various elements of the built environment may play in the formation of health behaviors and risk for disease, and of their potential use in the formulation of solutions to ongoing health-related problems. Although relevant to most health behaviors, the role of the built environment in the ongoing epidemic of obesity is of particular interest.86 Ideally, health professionals will be positioned to influence public policy toward the creation of built environments that are more conducive to physical activity.
INTERVENTION DEVELOPMENT AND EVALUATION
Design of Behavioral Interventions The design of behavioral interventions involves a series of decisions at numerous levels, including selection of the target behavior (e.g., helmet use, use of condoms) and target population (e.g., adults or children, a specific ethnic group); the theory to use in designing the intervention; the setting where participants will be identified (e.g., schools, worksites, clinics, churches) and intervention activities delivered; the assembling of staff to design, deliver, and evaluate the intervention; and, finally, the design of the intervention components and the development of specific intervention activities and communications. Describing a comprehensive strategy for the design, delivery, and evaluation of a behavioral intervention is beyond the scope of this chapter. However, we will guide the reader toward source materials for the development and evaluation of a behavioral intervention.
Sources of Information for Intervention Planning Behavioral intervention is guided by theory and by prior research on approaches that have been effective. Various approaches to intervention development have been described in the literature, and we recommend consulting these texts for assistance. A sample of these texts includes the intervention planning approach by Green and Kreuter,87 intervention mapping by Bartholomew and colleagues,88 and approaches to intervention development for youth by Perry89 and Sussman.90 Additional texts provide illustrations of effective ideas and programs.91,92 When selecting a text, we recommend using materials that take intervention-design approaches that are theory-driven,
TABLE 53-1. WEB-BASED RESOURCES FOR INTERVENTION SELECTION AND DESIGN

Title | Agency | Source
Registries of programs effective in reducing youth risk behaviors | Centers for Disease Control and Prevention (CDC), National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP) | www.cdc.gov/HealthyYouth/partners/registries.htm
School health guidelines and strategies | NCCDPHP | www.cdc.gov/HealthyYouth/publications/Guidelines.htm
Overweight and obesity state programs | NCCDPHP | http://www.cdc.gov/nccdphp/dnpa/obesity/state_programs/index.htm
Programs in brief | CDC | http://www.cdc.gov/programs/default.htm
Tobacco information and prevention source | NCCDPHP | http://www.cdc.gov/tobacco/index.htm
Coordinated school health programs | NCCDPHP | www.cdc.gov/healthyyouth/CSHP/index.htm
Improving the health of adolescents and young adults: a guide for states and communities | NCCDPHP | www.cdc.gov/healthyyouth/NationalInitiative/guide.htm
Healthy youth! Nutrition: Making it happen, school nutrition success stories | NCCDPHP | www.cdc.gov/healthyyouth/nutrition/Making-It-Happen/index.htm
Research-tested intervention programs | National Cancer Institute | http://dccps.nci.nih.gov/rtips/index.asp
meaning approaches that recommend intervention based on a behavioral theory that has been empirically tested to be associated with behavior. The development of behavioral interventions often includes art as well as science. The participation of an experienced intervention researcher can be invaluable. Some elements of intervention development can be translated across target populations and health behaviors, however, the most effective advice will likely be provided by an expert who has worked with the behaviors of primary interest to you (e.g., smoking cessation, injury prevention, obesity treatment) and in the setting in which you plan to deliver intervention activities (e.g., schools, clinics, worksites). In addition, skilled media professionals will be needed for the development of most interventions including graphic designers, professional writers or curriculum developers, and video producers to name a few.
Sources of information on effective interventions Several sources of information are available on behavioral intervention approaches that have been demonstrated to work. The Guide to Community Preventive Services (Guide) has identified approaches to behavior change and risk reduction that work.45 The Guide was developed by the U.S. Department of Health and Human Services to provide guidance on approaches to prevention for a diverse set of health behaviors and disease threats. The Guide reviews community intervention approaches and rates the evidence for each approach as either demonstrating effectiveness or being insufficient to establish effectiveness. The Guide is an excellent resource for those selecting or designing an intervention for disease prevention and health promotion, and it also notes the Healthy People 2010 objectives addressed by each intervention approach reviewed. Recommendations of the Guide to Community Preventive Services are provided in book form,45 through journal articles (summaries of reviews and recommendations in the Morbidity and Mortality Weekly Report and detailed information on each review in the American Journal of Preventive Medicine), and through a website that provides the most up-to-date information on Community Guide activities, reviews, and recommendations (www.thecommunityguide.org). Guidance for chronic disease intervention can be found in the text Promising Practices in Chronic Disease Prevention and Control: A Public Health Framework for Action.93 A number of additional web-based resources are available describing interventions within particular content areas. The Centers
for Disease Control and Prevention and the National Institutes of Health offer a particularly rich source of information. Typically these compilations will guide individuals toward evaluated interventions within a specific domain defined by disease, target population, or setting. A number of these resources are presented in Table 53-1.
Sources of information for evaluation of interventions We highly recommend the evaluation of new interventions, of old interventions that have not been carefully studied for efficacy, and of previously evaluated interventions that have been adapted for a new population or setting. Although resource constraints may preclude a rigorous evaluation, where one can be developed and used it provides guidance on the effectiveness of a program, on whether the program should be used in the future, and on how it can be adapted to increase its effectiveness. We suggest the reader consult a strong text in evaluation, including those by Rossi,94 Shadish,95 and Valente.96 Evaluation expertise can also be found through faculty in academic settings and in consulting firms with a particular emphasis on educational or behavioral research. Finally, a number of web-based resources are available. Examples include the CDC Evaluation Working Group (www.cdc.gov/eval), the Handbook for Evaluating HIV Education (www.cdc.gov/healthyyouth/publications/hiv_handbook/index.htm), and the Introduction to Program Evaluation for Comprehensive Tobacco Control Programs (www.cdc.gov/tobacco/evaluation_manual/ch3.html).
REFERENCES
1. McGinnis J, Foege W. Actual causes of death in the United States. JAMA. 1993;270(18):2207–12. 2. Mokdad A, Marks J, Stroup D, et al. Actual causes of death in the United States, 2000. JAMA. 2004;291(10):1238–45. 3. USDHHS. Healthy People 2010 (Conference edition in two volumes). Washington, DC: U.S. Department of Health and Human Services; 2000 January. 4. Glanz K, Rimer B, Lewis F. Theory, research and practice in health behavior and health education. In: Glanz K, Rimer BK, Lewis FM, eds. Health Behavior and Health Education: Theory, Research and Practice. San Francisco, CA: Jossey-Bass; 2002:22–40.
5. Janz N, Champion V, Strecher V. The health belief model. In: Glanz K, Rimer BK, Lewis FM, eds. Health Behavior and Health Education: Theory, Research and Practice. San Francisco, CA: Jossey-Bass; 2002:45–66. 6. Hochbaum G. Public participation in medical screening programs: A sociopsychological study. Washington, DC: U.S. Department of Health and Human Services; 1958. Report No.: PHS publication No. 572. 7. Janz N, Becker M. The Health Belief Model: A decade later. Health Educ Q. 1984;11:1–47. 8. Rosenstock I, Strecher V, Becker M. Social learning theory and the health belief model. Health Educ Q. 1988;15:175–83. 9. Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215. 10. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. New Jersey: Prentice Hall, Inc.; 1986. 11. Bandura A. Self-efficacy: The Exercise of Control. New York: W. H. Freeman; 1997. 12. Bandura A. Social cognitive theory: an agentic perspective. In: Annual Review of Psychology. Palo Alto, CA; 2001:1–26. 13. Baranowski T, Perry C, Parcel G. How individuals, environments, and health behavior interact: social cognitive theory. In: Glanz K, Rimer B, Lewis F, eds. Health Behavior and Health Education: Theory, Research, and Practice. 3rd ed. San Francisco, CA: Jossey-Bass; 2002:165–84. 14. Deci EL, Ryan RM. Handbook of Self-Determination Research. Rochester, NY: University of Rochester Press; 2002. 15. Sher TG, Bellg AJ, Braun L, et al. Partners for Life: a theoretical approach to developing an intervention for cardiac risk reduction. Health Educ Res. 2002;17(5):597–605. 16. Williams GC, Minicucci DS, Kouides RW, et al. Self-determination, smoking, diet and health. Health Educ Res. 2002;17(5):512–21. 17. Levy SS, Cardinal BJ. Effects of a self-determination theory-based mail-mediated intervention on adults’ exercise behavior. Am J Health Promot. 2004;18(5):345–9. 18. Williams GC, Gagne M, Ryan RM, et al.
Facilitating autonomous motivation for smoking cessation. Health Psychol. 2002;21(1):40–50. 19. Prochaska JO. Strong and weak principles for progressing from precontemplation to action on the basis of twelve problem behaviors. Health Psychol. 1994;13(1):47–51. 20. Prochaska JO, DiClemente CC. Stages and processes of self-change of smoking: Toward an integrative model of change. J Consult Clin Psychol. 1983;51(3):390–5. 21. Prochaska JO, Redding CA, Evers KE. The transtheoretical model and stages of change. In: Glanz K, Rimer BK, Lewis FM, eds. Health Behavior and Health Education: Theory, Research, and Practice. 3rd ed. San Francisco, CA: Jossey-Bass; 2002:99–120. 22. Prochaska JO, Velicer WF, Rossi JS, et al. Stages of change and decisional balance for 12 problem behaviors. Health Psychol. 1994;13(1):39–46. 23. Kramer Lafferty C, Heaney CA, Chen MS, Jr. Assessing decisional balance for smoking cessation among Southeast Asian males in the U.S. Health Educ Res. 1999;14(1):139–46. 24. Velicer WF, DiClemente CC, Prochaska JO, Brandenburg N. Decisional balance measure for assessing and predicting smoking status. J Pers Soc Psychol. 1985;48(5):1279–89. 25. Bandura A. Self-efficacy: Toward a unifying theory of behavioral change. Psychol Rev. 1977;84:191–215. 26. Prochaska JO, Velicer WF, Rossi JS, et al. Stages of change and decisional balance for 12 problem behaviors. Health Psychol. 1994;13(1):39–46. 27. Aveyard P, Griffin C, Lawrence T, Cheng KK. A controlled trial of an expert system and self-help manual intervention based on the stages of change versus standard self-help materials in smoking cessation. Addiction. 2003;98(3):345–54.
Health Behavior Research and Intervention
28. Fishbein M. Readings in Attitude Theory and Measurement. New York: Wiley; 1967. 29. Fishbein M, Ajzen I. Belief, Attitude, Intention, and Behavior: An Introduction to Theory and Research. Reading, MA: Addison-Wesley; 1975. 30. Berg-Smith SM, Stevens VJ, Brown KM, et al. A brief motivational intervention to improve dietary adherence in adolescents. The Dietary Intervention Study in Children (DISC) Research Group. Health Educ Res. 1999;14(3):399–410. 31. Resnicow K, DiIorio C, Soet JE, Ernst D, Borrelli B, Hecht J. Motivational interviewing in health promotion: it sounds like something is changing. Health Psychol. 2002;21(5):444–51. 32. Miller WR, Rollnick S. Motivational Interviewing: Preparing People for Change. 2nd ed. New York: Guilford Press; 2002. 33. Berg-Smith SM, Stevens VJ, Brown KM, et al. A brief motivational intervention to improve dietary adherence in adolescents. The Dietary Intervention Study in Children (DISC) Research Group. Health Educ Res. 1999;14(3):399–410. 34. Resnicow K, Jackson A, Wang T, et al. A motivational interviewing intervention to increase fruit and vegetable intake through Black churches: results of the Eat for Life trial. Am J Pub Health. 2001;91:1686–93. 35. Berg-Smith S, Stevens V, Brown K, Van Horn L, Gernhofer N, Peters E. A brief motivational intervention to improve dietary adherence in adolescents. The Dietary Intervention Study in Children (DISC) Research Group. Health Educ Res. 1999;14:399–410. 36. Burke BL, Arkowitz H, Menchola M. The efficacy of motivational interviewing: a meta-analysis of controlled clinical trials. J Consult Clin Psychol. 2003;71(5):843–61. 37.
Tait RJ, Hulse GK. A systematic review of the effectiveness of brief interventions with substance using adolescents by type of drug. Drug Alcohol Rev. 2003;22(3):337–46. 38. Bundy C. Changing behaviour: using motivational interviewing techniques. J R Soc Med. 2004;97 Suppl 44:43–7. 39. Britt E, Hudson SM, Blampied NM. Motivational interviewing in health settings: a review. Patient Educ Couns. 2004;53(2):147–55. 40. Emmons KM, Rollnick S. Motivational interviewing in health care settings: opportunities and limitations. Am J Prev Med. 2001;20(1):68–74. 41. Goldstein MG, DePue J, Kazura A, Niaura R. Models for provider-patient interaction: applications to health behavior change. In: Shumaker SA, Schron EB, eds. The Handbook of Health Behavior Change. New York: Springer Publishing Co; 1998:85–113. 42. Accessed March 16, 2005, at http://modelprograms.samhsa.gov/template_cf.cfm?page=model_list. 43. Accessed March 16, 2005, at http://www.colorado.edu/cspv/blueprints/model/overview.html. 44. Sussman S, Dent CW, Stacy AW, Hodgson CS, Burton D, Flay BR. Project Towards No Tobacco Use: implementation, process and posttest knowledge evaluation. Health Educ Res. 1993;8(1):109–23. 45. Zaza S, Briss P, Harris K. The Guide to Community Preventive Services: What Works to Promote Health? New York: Oxford University Press; 2005. 46. Resnicow K, Robinson T. School-based cardiovascular disease prevention studies: review and synthesis. Ann Epidemiol. 1997;S7:S14–31. 47. Wiecha JL, El Ayadi AM, Fuemmeler BF, et al. Diffusion of an integrated health education program in an urban school system: Planet Health. J Pediatr Psychol. 2004;29(6):467–74. 48. Epstein LH, Myers MD, Raynor HA, Saelens BE. Treatment of pediatric obesity. Pediatrics. 1998;101(3 Pt 2):554–70.
Behavioral Factors Affecting Health
49. Gortmaker SL, Peterson K, Wiecha J, et al. Reducing obesity via a school-based interdisciplinary intervention among youth: Planet Health. Arch Pediatr Adolesc Med. 1999;153(4):409–18. 50. de Nooijer J, Lechner L, Candel M, de Vries H. Short- and long-term effects of tailored information versus general information on determinants and intentions related to early detection of cancer. Prev Med. 2004;38(6):694–703. 51. Ryan GL, Skinner CS, Farrell D, Champion VL. Examining the boundaries of tailoring: the utility of tailoring versus targeting mammography interventions for two distinct populations. Health Educ Res. 2001;16(5):555–66. 52. Kreuter MW, Skinner CS. Tailoring: what’s in a name? Health Educ Res. 2000;15(1):1–4. 53. De Bourdeaudhuij I, Brug J. Tailoring dietary feedback to reduce fat intake: an intervention at the family level. Health Educ Res. 2000;15(4):449–62. 54. Brug J, Campbell M, van Assema P. The application and impact of computer-generated personalized nutrition education: a review of the literature. Patient Educ Couns. 1999;36:145–56. 55. Brandon T, Meade C, Herzog T, Chirikos T, Webb M, Cantor A. Efficacy and cost-effectiveness of a minimal intervention to prevent smoking relapse: dismantling the effects of amount of content versus contact. J Consult Clin Psychol. 2004;72(5):797–808. 56. Abrams DB, Mills S, Bulger D. Challenges and future directions for tailored communication research. Ann Behav Med. 1999;21:299–306. 57. Elder JP, Ayala GX, Campbell NR, et al. Interpersonal and print nutrition communication for a Spanish-dominant Latino population: Secretos de la Buena Vida. Health Psychol. 2005;24(1):49–57. 58. Hughes S. The use of non face-to-face communication to enhance preventive strategies. J Cardiovasc Nurs. 2003;18(4):267–73. 59. Datto C, Thompson R, Horowitz D, Disbot M, Oslin D. The pilot study of a telephone disease management program for depression. Gen Hosp Psychiatry. 2003;25:169–77. 60.
Piette J, Weinberger M, Kraemer F, McPhee S. Impact of automated calls with nurse follow-up on diabetes treatment outcomes in a Department of Veterans Affairs Health Care System. Diabetes Care. 2001;24(2):202–8. 61. Piette J, Weinberger M, McPhee S. The effect of automated calls with telephone nurse follow-up on patient-centered outcomes of diabetes care: a randomized, controlled trial. Med Care. 2000;38(2):218–30. 62. Piette J, Weinberger M, McPhee S, Mah C, Kraemer F, Crapo L. Do automated calls with nurse follow-up improve self-care and glycemic control among vulnerable patients with diabetes? Am J Med. 2000;108:20–7. 63. Ramelson H, Friedman R, Ockene J. An automated telephone-based smoking cessation education and counseling system. Patient Educ Couns. 1999;36:131–44. 64. Glanz K, Shigaki D, Farzanfar R, Pinto B, Kaplan B, Friedman R. Participant reactions to a computerized telephone system for nutrition and exercise counseling. Patient Educ Couns. 2003;49:157–63. 65. Kaplan B, Farzanfar R, Friedman R. Personal relationships with an intelligent interactive telephone health behavior advisor system: a multimethod study using surveys and ethnographic interviews. Med Info. 2003;71:33–41. 66. Farzanfar R, Finkelstein J, Friedman R. Testing the usability of two automated home-based patient management systems. J Med Syst. 2004;28(2):143–53. 67. Piette J. Patient education via automated calls: a study of English and Spanish speakers with diabetes. Am J Prev Med. 1999;17(2):138–41. 68. Piette J, McPhee S, Weinberger M, Mah C, Kraemer F. Use of automated telephone disease management calls in an ethnically diverse sample of low-income patients with diabetes. Diabetes Care. 1999;22:1302–9.
69. Glasgow R, Bull S, Piette J, et al. Interactive behavior change technology: a partial solution to the competing demands of primary care. Am J Prev Med. 2004;27(2S):80–7. 70. Finnegan J, Viswanath K. Communication theory and health behavior change: the Media Studies framework. In: Glanz K, Rimer BK, Lewis FM, eds. Health Behavior and Health Education: Theory, Research and Practice. San Francisco, CA: Jossey-Bass; 2002:361–88. 71. McGuire W. Attitudes and attitude change. In: Lindzey G, Aronson E, eds. Handbook of Social Psychology. New York: Random House; 1985. 72. Independent Evaluation Consortium. Interim Report: Independent Evaluation of the California Tobacco Prevention and Education Program: Wave 1 Data, 1996–1997. Rockville, MD: Gallup Organization; 1998. 73. Pierce J, Emery S, Gilpin E. The California tobacco control program: a long-term health communication project. In: Hornick R, ed. Public Health Communication: Evidence for Behavior Change. Mahwah, NJ: Lawrence Erlbaum Associates; 2002:97–114. 74. Gilpin E, White M, White V, et al. Tobacco Control Successes in California: A Focus on Young People, Results from the California Tobacco Surveys, 1990–2002. La Jolla, CA: University of California, San Diego; 2003. 75. Albers A, Siegel M, Cheng D, Biener L, Rigotti N. Relation between local restaurant smoking regulations and attitudes towards the prevalence and social acceptability of smoking: a study of youths and adults who eat out predominantly at restaurants in their town. Tob Control. 2004;13:347–55. 76. Handy SL, Boarnet MG, Ewing R, Killingsworth RE. How the built environment affects physical activity: views from urban planning. Am J Prev Med. 2002;23(1):64–73. 77. Sallis J, Bauman A, Pratt M. Environmental and policy interventions to promote physical activity. Am J Prev Med. 1998;15(4):379–97. 78. King A, Jeffery R, Fridinger F, Dusenbury L, Provence S, Hedlund S.
Environmental and policy approaches to cardiovascular disease prevention through physical activity: issues and opportunities. Health Educ Q. 1995;22:499–511. 79. Schmid T, Pratt M, Howze E. Policy as intervention: environmental and policy approaches to the prevention of cardiovascular disease. Am J Public Health. 1995;85:1207–11. 80. Pollard T. Policy prescriptions for healthier communities. Am J Health Promot. 2003;18(1):109–13. 81. Buchner D. Physical activity to prevent or reverse disability in sedentary older adults. Am J Prev Med. 2003;23:214–5. 82. Ewing R, Schmid T, Killingsworth R, Zlot A, Raudenbush S. Relationship between urban sprawl and physical activity, obesity, and morbidity. Am J Health Promot. 2003;18:47–57. 83. Saelens B, Sallis J, Frank L. Environmental correlates of walking and cycling: findings from the transportation, urban design, and planning literatures. Ann Behav Med. 2003;25(2):80–91. 84. Owen N, Humpel N, Leslie E, Bauman A, Sallis J. Understanding environmental influences on walking: review and research agenda. Am J Prev Med. 2004;27(1):67–76. 85. Frank L, Engelke P, Schmid T. Health and Community Design. Washington, DC: Island Press; 2003. 86. French S, Story M, Jeffery R. Environmental influences on eating and activity. Annu Rev Public Health. 2001;22:309–35. 87. Green L, Kreuter M. Health Promotion Planning: An Educational and Ecological Approach. New York: McGraw-Hill; 1999. 88. Bartholomew L, Parcel G, Kok G, Gottlieb N. Intervention Mapping: Designing Theory and Evidence-Based Health Promotion Programs with PowerWeb. New York: McGraw-Hill; 2001. 89. Perry C. Creating Health Behavior Change: How to Develop Community-Wide Programs for Youth. Thousand Oaks, CA: Sage Publishers; 1999.
90. Sussman S. Handbook of Program Development for Health Behavior Research & Practice. Thousand Oaks, CA: Sage Publications Inc.; 2001. 91. Brownson R, Baker E, Novick L. Community-Based Prevention: Programs that Work. Gaithersburg, MD: Aspen Publishers Inc.; 1999. 92. Kreuter M, Lezin N, Kreuter M, Green L. Community Health Promotion Ideas that Work. Boston: Jones & Bartlett; 1998. 93. CDC. Promising Practices in Chronic Disease Prevention and Control: A Public Health Framework for Action. Atlanta, GA: Department of Health and Human Services; 2003. 94. Rossi P, Lipsey M, Freeman H. Evaluation: A Systematic Approach. 7th ed. Thousand Oaks, CA: Sage Publications; 2003. 95. Shadish W, Cook T, Leviton L. Foundations of Program Evaluation: Theories of Practice. Newbury Park: Sage; 1991. 96. Valente T. Evaluating Health Promotion Programs. New York: Oxford University Press; 2002. 97. Ryan RM, Connell JP. Perceived locus of causality and internalization: examining reasons for acting in two domains. J Pers Soc Psychol. 1989;57(5):749–61.
98. Hagger MS, Chatzisarantis NLD, Biddle SJH. The influence of autonomous and controlling motives on physical activity intentions within the theory of planned behaviour. Br J Health Psychol. 2002;7(3):283–97. 99. Deci EL, Ryan RM. The general causality orientations scale: self-determination in personality. J Res Pers. 1985;19(2):109–34. 100. Frederick-Recascino CM, Schuster-Smith H. Competition and intrinsic motivation in physical activity: a comparison of two groups. J Sport Behav. 2003;26(3):240–54. 101. Wang JKC, Biddle SJH. Young people’s motivational profiles in physical activity: a cluster analysis. J Sport Exerc Psychol. 2001;23:1–22. 102. Ryan RM, Deci EL. Overview of self-determination theory: an organismic-dialectical perspective. In: Deci EL, Ryan RM, eds. Handbook of Self-Determination Research. Rochester, NY: University of Rochester Press; 2002:2–33. 103. Prochaska JO, DiClemente CC, Norcross JC. In search of how people change: applications to addictive behaviors. Am Psychol. 1992;47(9):1102–14. 104. Velicer WF, Prochaska JO. An expert system intervention for smoking cessation. Patient Educ Couns. 1999;36(2):119–29.
54
Tobacco: Health Effects and Control
Corinne G. Husten • Stacy L. Thorne
In a sense the tobacco industry may be thought of as being a specialized, highly ritualized, and stylized segment of the pharmaceutical industry. Tobacco products uniquely contain and deliver nicotine, a potent drug with a variety of physiological effects. Claude E. Teague, Jr., R.J. Reynolds, Federal Register Vol 60 (155), 1995.

Think of the cigarette pack as a storage container for a day’s supply of nicotine . . . Think of the cigarette as a dispenser for a dose unit of nicotine . . . Think of a puff of smoke as the vehicle of nicotine . . . Smoke is beyond question the most optimized vehicle of nicotine and the cigarette the most optimized dispenser of smoke. William L. Dunn, Philip Morris, 1972, Federal Register Vol 60 (155), 1995.
Realistically if our Company is to survive and prosper, over the long term, we must get our share of the youth market . . . Thus we need new brands designed to be particularly attractive to the young smoker . . . Product image factors (a) should emphasize participation, togetherness, and membership in a group, one of the group’s primary values being individuality. (b) Should be strongly perceived as a mechanism for relieving stress, tension, awkwardness, boredom, and the like. (c) Should be associated with doing one’s own thing to be adventurous, different, adult, or whatever else is individually valued. (d) Should be perceived as some sort of new experience, something arousing some curiosity, and some challenge. (e) Must become the proprietary “in” thing of the “young” group. Claude E. Teague, Jr., R.J. Reynolds. Industry documents, Bates #: 502987407–502987418, February 2, 1973.

The custom of smoking dried tobacco leaves spread from America to the rest of the world after European colonization began in the sixteenth century. Given that smoking harms nearly every organ of the body,1 coupled with its addictive properties and widespread use, tobacco is a dangerous psychoactive drug. Its effects are soothing and tranquilizing, yet there is also a stimulant action. Physiological and psychological dependence occur, and there are severe withdrawal symptoms and a craving for tobacco that make this among the most refractory of addictions. People start to use tobacco for several reasons. Many start for social reasons, and many young people perceive tobacco use as an attribute of maturity. Nicotine is the psychoactive compound in tobacco. Nicotine is absorbed quickly and reaches the brain within seconds.2 Pharmacological factors interact with stimuli in the social environment (social reinforcers) so that, after many thousands of repetitions of inhaling tobacco fumes or inserting tobacco into the mouth, tobacco use becomes firmly entrenched as a part of the tobacco user’s life. Tolerance, the need for increasing amounts to achieve the same physiological response, develops to some but not all effects of nicotine. Many tobacco users who abruptly quit experience a withdrawal syndrome of irritability, aggressiveness, hostility, depression, and difficulty in concentrating. These symptoms may last several days or even weeks and are accompanied by electroencephalographic changes; cravings for cigarettes may persist long after cessation and may be stimulated by exposure to the social reinforcers previously associated with tobacco use. Many tobacco users relapse, often within days of the quit attempt.3

TOLL OF SMOKING

Excess Mortality
Cigarette smoking has been identified as the leading cause of preventable morbidity and premature death.4–6 Up to two out of three lifelong smokers will die of a smoking-related disease.7 The estimated annual excess mortality from cigarette smoking in the United States is about 440,000.8 If current patterns of smoking persist, an estimated five million U.S. persons aged from 1 to 17 years in 1995 will die prematurely from smoking-related diseases.9 Because of its importance as a cause of morbidity and mortality in the United States, the prevalence of cigarette smoking is one of the conditions designated as reportable by states to the Centers for Disease Control and Prevention (CDC). Cigarette smoking is the first instance of a behavior, rather than a disease or illness, considered to be nationally reportable.10 Coronary heart disease (CHD), multiple cancers, and various respiratory diseases account for the majority of excess mortality related to cigarette smoking.8 Of the 480,000 deaths from ischemic heart disease in 2003, an estimated 80,300 (17%) were attributable to smoking. Furthermore, 156,000 (28%) of the 556,000 cancer deaths were attributable to smoking. 
Lung cancer caused 158,000 deaths in 2003 (28% of all cancers), and 79% of these deaths were attributed to smoking.11 Other cancers caused by smoking are those of the oral cavity, pharynx, larynx, esophagus, pancreas, bladder, kidney, cervix, stomach, and acute myeloid leukemia.1,12 Chronic obstructive pulmonary diseases (COPD), such as chronic bronchitis and emphysema, account annually for another 93,000 smoking-related deaths.8 Smokers average a 16-fold increased risk of acquiring lung cancer, a 12-fold increased risk of acquiring COPD, and a 2-fold increased risk of having a myocardial infarction (MI), compared to nonsmokers.13 Men and women who smoke lose 12.9 and 12.4 years of life, respectively.8 Historical gender differences in smoking prevalence are responsible for at least part of the gender difference in life expectancy in the United States.
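The cause-specific attributable percentages quoted above are simple ratios of smoking-attributable deaths to total deaths from each cause. As a quick check of that arithmetic, here is a minimal sketch using the 2003 figures cited in the text (the helper function is ours, for illustration only):

```python
# Illustrative check of the smoking-attributable fractions quoted in the
# text (2003 U.S. figures as cited above). The helper name is ours.

def attributable_fraction(attributable_deaths: float, total_deaths: float) -> float:
    """Share of deaths from a given cause attributed to smoking."""
    return attributable_deaths / total_deaths

# Ischemic heart disease: 80,300 of 480,000 deaths attributed to smoking.
ihd = attributable_fraction(80_300, 480_000)

# All cancers: 156,000 of 556,000 cancer deaths attributed to smoking.
cancer = attributable_fraction(156_000, 556_000)

# Lung cancer as a share of all cancer deaths: 158,000 of 556,000.
lung_share = 158_000 / 556_000

print(f"IHD: {ihd:.0%}, cancers: {cancer:.0%}, lung cancer share: {lung_share:.0%}")
```

Rounded to whole percentages, these ratios reproduce the 17% and 28% figures given in the text.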
Copyright © 2008 by The McGraw-Hill Companies, Inc.
It is estimated that 8.6 million people in the United States are living with a serious illness caused by smoking; thus, for each person who dies from a smoking-related disease, 20 are living with a smoking-attributable illness. About 10% of all current and former adult smokers have a smoking-attributable chronic disease. Of these, 59% are living with chronic bronchitis and emphysema, and another 19% have had a heart attack.14 Also, although nonsmokers live longer, smokers live more years with disability (2.5 years more for men and 1.9 years more for women).15 The good news is that smoking cessation has major and immediate health benefits.16 For persons who quit by age 30, life expectancy is essentially the same as that of a nonsmoker (Fig. 54-1).7 Even quitting late in life confers significant health benefits: adults who quit at age 65–70 can expect to increase their life expectancy by 2–3 years.17–19

Economic Costs

The annual economic toll of smoking can be divided into direct and indirect costs. Smoking-attributable healthcare expenditures totaled $75 billion in 1998.20 From 1997 to 2001, the average annual cost from lost earnings as a result of smoking-related deaths was $92 billion.8 In 1995–1999, these economic costs translated into an annual cost per smoker of approximately $3,391, or $7.18 per pack of cigarettes sold. In addition, pregnant smokers account for a sizable economic burden on the medical care system: medical expenditures associated with smoking during pregnancy were estimated to be $366 million in 1996, or $704 per maternal smoker.21 Most studies have looked solely at the societal costs of smoking (health care and productivity, Medicare and Medicaid, etc.). A recent study estimated the cost of cigarettes not only to society but also to the individual smoker and his or her family, and found that smoking costs a woman $106,000 and a man $220,000 over a lifetime, or nearly $40 per pack of cigarettes consumed.22 Part of the cost of smoking is due to cigarette-caused fires, although this is not included in the calculations cited above. In the United States, fires caused by smoking were the leading cause of fire death, resulting in 760 fatalities in 2003. These fires injured 1,520 people, and direct property damage associated with smoking-related fires exceeded $481 million in 2003.23

CARDIOVASCULAR DISEASE

Coronary Heart Disease
CHD is the leading cause of excess death and disability in the United States. In 2003, 896,000 (38%) of the 2,333,000 deaths in the United States among persons aged 35 years and older were due to diseases of the cardiovascular system and 17% were attributed to smoking.11,24 Of cardiovascular deaths, 480,000 were due to ischemic heart disease.24 In 2003, smoking was estimated to cause 39% and 34%, respectively, of ischemic heart disease deaths of men and women less than 65 years of age, with 14% and 10% being the corresponding percentages for men and women 65 years of age and older.11,24 In early investigations, cigarette smoking was observed to be associated with CHD. On the basis of this observation, cohort studies examined the nature and degree of CHD risk attributable to smoking. These studies revealed a higher incidence of myocardial infarction (MI) and death from CHD in cigarette smokers than in nonsmokers. Studies demonstrated similar findings, whether in the United States, Canada, the United Kingdom, Scandinavia, or Japan.12 In an American Cancer Society prospective study (Cancer Prevention Study II) (ACS CPS-II) with 1.2 million participants, smokers had CHD mortality approximately 85% higher than nonsmokers.13 Similarly, the 40-year follow-up of the British Physicians’ Study reported a doubling of risk for heavy smokers.25 The 1989 and 1990 Surgeon General’s reports provided a summary of studies that estimated both the risk of CHD from smoking and the decrease in risk with smoking
Figure 54-1. Effects on survival of stopping smoking cigarettes at age 25–34 (effect from age 35), age 35–44 (effect from age 40), age 45–54 (effect from age 50), and age 55–64 (effect from age 60). (Source: Doll R, Peto R, Boreham J, et al. Mortality in relation to smoking: 50 years’ observation on male British doctors. BMJ. 2004;328:1519–27. Reproduced with permission from the BMJ Publishing Group.)
cessation. The 1990 report concluded that, on the basis of both cohort and case-control studies, “cigarette smoking is firmly established as an important cause of coronary heart disease, arteriosclerotic peripheral vascular disease, and stroke. Eliminating smoking presents an opportunity for bringing about a major reduction in the occurrence of CHD, the leading cause of death in the United States.”16 The 2004 Surgeon General’s report reviewed studies published through 2002 and reaffirmed the conclusion that smoking causes CHD. The risk of death from CHD and cardiovascular disease increases directly with usual daily cigarette consumption.1,16 Even among past smokers, risk of death due to CHD and cardiovascular disease was associated with previous usual daily cigarette consumption.16 Although most early investigations of the smoking-related risk of CHD used male subjects, multiple prospective studies indicate that smoking also causes CHD among women.1,13,26–28 Data from the ACS CPS-II indicated relative risks of CHD of 3.0 among female smokers aged 35–64 years and 1.6 among female smokers aged 65 years and older.12 The Nurses’ Health Study, which examined a cohort of 121,000 women, indicated a relative risk among current smokers of 4.13 for fatal CHD, 3.88 for nonfatal MI, and 3.93 for CHD overall. The risk increased with the number of cigarettes smoked per day: the adjusted relative risk was 1.55 for former smokers, 3.12 for women smoking 1–14 cigarettes per day, and 5.48 for women smoking 15 or more cigarettes per day, compared with lifetime nonsmokers.27,29 Smokers have a higher death rate from CHD at all ages. However, since the incidence of CHD increases sharply with age for both smokers and nonsmokers, the relative risk for smoking-related CHD peaks for men at age 40–44 years and for women at age 45–49 years.30,31 The percentage of CHD deaths attributable to smoking is 84% for men aged 40–44 years and 26% for men aged 75–79 years.
The smoking-attributable percentage of CHD deaths is 85% for women aged 45–49 years and 23% for women aged 80 years and older.32 Results from cohort studies clearly demonstrate that the risk of death from CHD is increased by early smoking initiation, number of cigarettes smoked per day, and depth of smoke inhalation. For example, data from the Nurses’ Health Study show that, although the risk of CHD is increased for all smokers regardless of age of smoking initiation, the risk is higher for women who started smoking before age 15. After adjustment for potential confounders, including number of cigarettes smoked daily, the relative risk of CHD for those starting to smoke before age 15 was 9.2. Among former smokers, women who started smoking before age 15 were also at highest risk for CHD, but this finding was based on a small number of cases.27 Smoking in combination with other CHD risk factors appears to have a synergistic effect on CHD mortality. For example, in the Pooling Project, the 10-year incidence of a first major coronary event was 54 per 1000 for smokers, 92–103 per 1000 for smokers with one other risk factor (hypertension or hypercholesterolemia), and 189 per 1000 for persons with all three risk factors.33 Diabetes also confers an increased risk of CHD that is further elevated if a person smokes.34,35 In studies of women using high-dose oral contraceptives, increased cardiovascular risk was reported among women who smoke.29,36 It is unclear if this risk occurs with the newer low-dose pills. Some studies suggest this risk does not occur with these second-generation pills,37 while other studies suggest that among heavy smokers, there is an increased risk.38,39 Another study suggested that third-generation pills might increase inflammatory markers of CHD.40 The 2004 Surgeon General’s report concluded that, because of its prevalence, smoking is a major cause of CHD, particularly at younger ages.
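The smoking-attributable percentages quoted in this section follow from Levin's population attributable fraction, which combines exposure prevalence with relative risk. A minimal sketch of that standard formula (the prevalence and relative-risk values below are illustrative round numbers, not figures from the studies cited):

```python
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's population attributable fraction:
    PAF = p(RR - 1) / (1 + p(RR - 1)),
    the share of all cases in the population attributable to the exposure."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Illustrative values only: if half the population smokes and smokers have
# 3x the CHD death rate of nonsmokers, half of CHD deaths are attributable.
print(round(attributable_fraction(0.50, 3.0), 2))  # 0.5
```

This is why the attributable percentage can fall with age even while smokers remain at elevated risk: both the relative risk and smoking prevalence decline in older cohorts.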
The report also noted that smoking is associated with sudden cardiac death of all types. Smokers had a relative risk of 2.5 compared with nonsmokers, and men had a higher relative risk than women.1 A substantial proportion of the population’s burden of CHD could be avoided with smoking prevention and cessation.1 Products with lower yields of tar and nicotine as measured by a smoking machine have not been found to reduce CHD risk substantially. Additionally, by causing CHD and MI, smoking may contribute to the
development of congestive heart failure, an increasingly frequent and disabling disease with a poor prognosis.1 Data from cohort studies show that pipe and cigar smokers generally have a lower risk of a major coronary event and subsequent CHD than do cigarette smokers. The relative risk of CHD-related death for pipe and cigar smokers is in the range of 1.01–1.37 compared with nonsmokers, with deeper smoke inhalation increasing the risk.41,42 For example, in the Cancer Prevention Study I, CHD risk was 1.23 for those who reported “slight” inhalation and 1.37 for those reporting “moderate to deep” inhalation.41 The Copenhagen City Heart Study found no difference in risk for first MI among pipe, cigar/cheroot, or cigarette smokers,43 but a Swedish study reported that pipe smokers and cigarette smokers had similar risk of death from ischemic heart disease; this finding was attributed to the similar proportion of inhalers among pipe and cigarette smokers.44 A more recent cohort study found elevated risk of CVD events (RR = 1.69) and cardiovascular mortality in pipe and cigar smokers.45 The National Cancer Institute concluded that heavy cigar smokers and those who inhale deeply are at increased risk for coronary heart disease.41 Pipe and cigar smokers who are former cigarette smokers tend to inhale the smoke and to have much higher venous blood carboxyhemoglobin levels than do those who have never smoked cigarettes, and they are at higher risk for CHD.41,46,47 Smokeless tobacco use causes acute cardiovascular effects similar to those caused by cigarette smoking, such as increased heart rate and blood pressure levels.
Blood pressure is affected by the high sodium content of smokeless tobacco as well as the nicotine and licorice (which causes sodium retention).48 A large population-based study in Sweden found that smokeless tobacco users were more likely to have hypertension.49 In addition, some (but not all) studies of the effect of smokeless tobacco on lipids have shown a higher risk of hypercholesterolemia, lower high-density lipoprotein levels, and higher triglyceride levels.48 This study also showed an elevated risk of diabetes in smokeless tobacco users.48 A large Swedish cohort study found that smokeless tobacco users were 1.4 times more likely to die of cardiovascular disease than nonusers,49,50 and an analysis of both CPS I and CPS II reported that both cohorts showed increased death from CHD for smokeless users.51 Two case-control studies have not found an increased risk.48,52 The positive effect of smoking cessation on both primary and secondary prevention of CHD has been extensively studied and validated. The 1990 Surgeon General’s report evaluated this research and concluded that compared with continued smoking, cessation substantially reduces the risk of CHD among men and women of all ages.16 Subsequent cohort studies have supported these conclusions.53,54 The excess risk of CHD is reduced by about half after 1 year of abstinence and then declines gradually. After 15 years of abstinence, the risk of CHD is similar to the risk in those who have never smoked. Among persons with diagnosed CHD, smoking cessation markedly reduces the risk of recurrent MI and cardiovascular death. In many studies, this reduction has been 50% or more.16
Peripheral Vascular Disease The strongest risk factor predisposing persons to atherosclerotic peripheral arterial occlusive disease is cigarette smoking,16,36,55 which has been shown to be directly related to lower extremity atherosclerotic disease of both large and small arteries.12,16 Intermittent claudication is more frequent among smokers than nonsmokers.56 Smoking prevalence is high among victims of aortoiliac (98%) and femoropopliteal (91%) disease.57 The 2004 Surgeon General’s report concluded that smoking causes subclinical atherosclerosis.1 The Ankle Arm Index (AAI), the systolic blood pressure of the ankle divided by the systolic blood pressure of the arm, is a strong predictor of peripheral artery disease as well as coronary and cerebrovascular disease.58–60 A consistent association exists between cigarette smoking and AAI in diverse populations.1 These new findings on the relationship between
smoking and subclinical disease demonstrate the potential for preventing more advanced and clinically symptomatic disease through cessation. Limited studies of smokeless tobacco use have not demonstrated a high incidence of peripheral vascular disease in users, and an elevated risk of peripheral vascular disease is not evident in cigar or pipe smokers.61 Studies show a lower risk of peripheral arterial occlusive disease among former smokers than among current smokers. A recent cohort study found that current smoking was associated with a 50% increase in the progression of atherosclerosis over 3 years, and past smoking was associated with a 25% increase, when compared with never smokers.62 There is a consistent reduction in complications of peripheral vascular disease and improved performance and overall survival among patients who quit smoking.16 Smoking cessation also significantly reduces the risk of peripheral arterial occlusive disease for persons with diabetes, though some of the adverse effects may be cumulative and irreversible.62,63 An autopsy study of atherosclerotic plaques in smokers found that the complexity and extent of plaque in the abdominal aorta increased with the number of cigarettes smoked.64,65 Multiple cross-sectional and cohort studies have shown that smokers have a higher abdominal aortic aneurysm mortality rate than nonsmokers.1 The 2004 Surgeon General’s report concluded that smoking causes abdominal aortic aneurysm and is one of the few avoidable causes of this frequently fatal disease.1 Several studies have also shown an increased risk of aortic aneurysm among pipe and cigar smokers,12,41 and an autopsy study indicated that men who smoked cigars, pipes, or both had more complex patterns of atherosclerotic plaques than men who had never smoked cigars or pipes regularly.64 Five cohort studies that analyzed the risk of death due to aortic aneurysm for current, former, and never smokers found that among men, risk among former smokers is 2–3 
times higher than that among never smokers and about 50% lower among former smokers than current smokers. Patterns are similar for women.16
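The Ankle Arm Index defined earlier in this section is a simple ratio of systolic pressures. A minimal sketch (the 0.9 cutoff used below is a commonly cited clinical threshold for suspecting peripheral arterial disease, not a value given in this chapter):

```python
def ankle_arm_index(ankle_systolic_mmhg: float, arm_systolic_mmhg: float) -> float:
    """AAI = ankle systolic blood pressure / arm systolic blood pressure."""
    return ankle_systolic_mmhg / arm_systolic_mmhg

aai = ankle_arm_index(96.0, 120.0)
print(round(aai, 2))  # 0.8
print(aai < 0.9)      # True; values below ~0.9 are commonly read as suggestive of PAD
```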
Cerebrovascular Disease Both ischemic and hemorrhagic cerebrovascular diseases are major causes of death in the United States. Although stroke deaths have declined substantially during the past two decades, ischemic and hemorrhagic strokes accounted for approximately 158,000 (6%) of deaths in the United States in 2003.24 Each year there are more than 500,000 new and 200,000 recurrent strokes.66 The risk of stroke increases with age. Smoking has been well demonstrated as a major cause of stroke.1,12 The 2004 Surgeon General’s report noted that only hypertension is as consistently related to stroke risk as smoking. Smoking increases both the incidence of and mortality from stroke.1,67,68 A meta-analysis of 32 case-control and cohort studies found that among current smokers compared with never smokers, the relative risk of cerebral infarction was 1.9, of cerebral hemorrhage 0.7, and of subarachnoid hemorrhage 2.9; a positive dose-response relationship between number of cigarettes smoked and relative risk for stroke was also noted.1,69 One study estimated that for persons younger than 65 years of age, smoking was responsible for 51% of cerebrovascular disease in men and 55% in women.12 Smoking is also related to subclinical markers of cerebrovascular disease (white matter disease and subclinical infarcts).1 At least two cohort studies found that pipe and cigar smoking were associated with an increased risk (RR = 1.62) of stroke events.45,70 Switching from cigarettes to a pipe or cigar has little effect on reducing stroke risk.71 Analysis of CPS I and CPS II showed that in both cohorts, smokeless tobacco users had increased mortality from stroke.51 Female smokers who use high-dose oral contraceptives are reported to be at increased risk of stroke.72,73 It has been suggested that the low-dose oral contraceptives used today might not confer the risk observed for the early high-dose formulations.74 Compared with continued smoking, cessation reduces the risk of
both ischemic stroke and subarachnoid hemorrhage. After smoking
cessation, the risk of stroke returns to the level of never smokers within 5 years in some studies, though in others, not until up to 15 years of abstinence.16
Mechanisms of Cardiovascular Disease Development Related to Smoking Atherosclerosis is characterized by the deposition of lipid in the inner layers of the arteries, by fibrosis, and by thickening of the arterial wall. Atherosclerotic plaques develop over time, slowly progressing from early lipid deposition (fatty streaks) to more advanced raised fibrous lesions that decrease the arterial lumen, and finally to the lesions that are associated with clinical events. The process of plaque destabilization is thought to be associated with inflammatory changes and thrombotic events that obstruct the blood flow and result in clinical manifestations of disease, such as MI or stroke.1 The highly regulated physiologic interface between blood and arterial wall components is strongly and adversely affected by the toxic products from cigarette smoke that are added to the bloodstream.1 The smoking-related development of CHD includes at least five interrelated processes: atherosclerosis, thrombosis, coronary artery spasm, cardiac arrhythmia, and reduced oxygen-carrying capacity of the blood. The exact components of cigarette smoke that cause these changes are not known.
Endothelial Injury or Dysfunction Data from animal studies suggest that nicotine causes endothelial damage, and data from humans indicate that smoking increases the number of damaged endothelial cells and the endothelial cell count in circulating blood.1 Cigarette smoke exposure in dogs resulted in increased endothelial permeability to fibrinogen.75 Young and middle-aged smokers without disease had a significant reduction in endothelium-dependent vasodilatation compared with nonsmokers.76 Smoking also appears to stimulate smooth muscle cell proliferation and to increase the adherence of platelets to arterial endothelium. Animal studies have demonstrated that exposure of rat endothelium to blood from a person who had recently smoked two cigarettes resulted in the deposition of a large number of platelets on the endothelial surface.1
Thrombosis/Fibrinolysis Smoking may also increase thrombus formation. Fibrinogen levels are elevated in smokers, as are platelet-fibrinogen binding and other clotting abnormalities that tend to promote thrombus formation.77 Plaques from smokers more frequently have thrombosis along the walls of the arteries than plaques from nonsmokers.78 Smoking also increases the expression of tissue factor (a glycoprotein that initiates the extrinsic clotting cascade).79 The prothrombotic effect of smoking is thought to be the main underlying factor that links smoking to sudden cardiac death.80
Inflammation Current ideas about the pathogenesis of atherosclerosis increasingly emphasize a central role for inflammation.1 Smoking induces a systemic inflammatory response, as demonstrated by increases in inflammatory markers such as the blood leukocyte count.81 Smoking is also associated with elevated C-reactive protein levels, another measure of inflammatory activity. C-reactive protein level is associated with risk of CHD, stroke, and peripheral artery disease.82–85
Lipids/Lipid Metabolism A substantial body of evidence has demonstrated an association between smoking and adverse lipid profiles.1 Smokers have decreased levels of high-density lipoprotein (HDL) cholesterol and higher concentrations of total, low-density lipoprotein (LDL), and very low-density lipoprotein (VLDL) cholesterol compared with nonsmokers.16,86,87 A population-based cohort study showed decreasing HDL levels in persons
who started to smoke and increasing HDL levels in persons who had stopped smoking.88 Smoking may also promote lipid peroxidation, thought to be a key element in the development of atherosclerosis.89
Increased Oxygen Demand Cigarette smoking increases myocardial oxygen demand by increasing peripheral resistance, blood pressure, and heart rate (probably attributable to nicotine).90 In addition, the capacity of the blood to deliver oxygen is reduced by increased carboxyhemoglobin, greater viscosity, and higher coronary vascular resistance due to vasoconstrictor effects on the coronary arteries. Reduced oxygen-carrying capacity may contribute to infarction in the presence of significant atherosclerotic narrowing of the vessels.1,16 Coronary artery spasm can cause acute myocardial ischemia and may promote thrombus formation. Arrhythmias can precipitate heart attacks and can increase the case fatality rate of MI; smoking has been shown to lower the threshold for ventricular fibrillation.16
Mechanisms of Peripheral Vascular and Cerebrovascular Disease Development The strong association between smoking and peripheral vascular disease is likely mediated by the mechanisms that promote atherosclerosis as described earlier. The peripheral vasoconstrictive effects of smoking probably also play an important role.16 The association of smoking with ischemic stroke is likely mediated by the mechanisms that promote atherosclerosis and thrombus formation.91 Cigarette smoking appears to increase the risk of stroke by decreasing cerebral blood flow.92 In smokers with other risk factors for stroke, cerebral blood flow is reduced in an additive manner compared with that in nonsmokers with similar risk factors.93 The mechanism for the strong relationship between smoking and subarachnoid hemorrhage is currently unknown.16
CANCER
Lung Cancer In the United States, carcinoma of the lung is the leading cancer cause of death for both men and women.94 Lung cancer replaced breast cancer as the leading cause of cancer death among both white and black American women in 1987.94 Lung cancer mortality rates, as measured by ACS CPS-I from 1959 to 1965 and ACS CPS-II from 1982 to 1988, increased over this period from 26 to 155 per 100,000 women and from 187 to 341 per 100,000 men.13 The number of lung cancer deaths in the United States rose sharply, from 18,300 in 1950 to 61,800 in 1969, to an estimated 160,400 in 2007.94 An estimated 89,510 men and 70,880 women will die of lung cancer in 2007.94 In 2003, lung cancer accounted for 28% of cancer deaths and 6% of all deaths in the United States.24 Of all lung cancer deaths, 79% are directly attributable to smoking.11,24 Among malignant lung tumors, 90% belong to four major cell types: squamous cell, oat cell, large cell, and adenocarcinoma, which are commonly designated bronchogenic carcinoma. Smoking induces all four major histologic types of lung cancer. Initially, squamous cell carcinoma was seen most often in smokers, followed by small cell carcinoma. However, since the late 1970s, adenocarcinoma has been increasing, and is now the most common histologic type.95–98 It has been suggested that the increasing incidence of adenocarcinoma may be related to the switch to low-tar, filtered cigarettes, which may allow increased puff volume with increased deposition of smoke in the peripheral airways. Low-tar cigarettes also have increased tobacco-specific nitrosamine (a carcinogen shown to induce adenocarcinoma) levels.99–104 Lung cancer has a propensity to metastasize early and widely. Five-year survival in lung cancer patients is 15%. The survival rate is 49% for localized disease, but only 16% of lung cancer is diagnosed at this early stage.94 The survival rate from lung cancer has increased only slightly in the past 20 years.105 The rise in lung cancer rates in male smokers preceded that of female smokers. In the years 1959–1961, the male/female ratio of death rates from lung cancer was 6.7:1. Whereas the incidence rate in men appears to have peaked in 1984, the rate for women has continued to increase by 2% per year. By 1997–2001, the male/female ratio had declined to 1.7:1.106 Although the 1964 Surgeon General’s report107 was the first official U.S. statement on the relationship of smoking and lung cancer, case-control, cohort, and animal studies conducted in the 1950s showed a clear association between smoking and lung cancer.108,109 The study most influential in drawing medical attention to this relationship was a 1956 cohort study of 40,000 British physicians 36 years of age and older. This study demonstrated that the age-adjusted death rate for lung cancer increased from 7 per 100,000 for nonsmokers to 166 per 100,000 for heavy smokers.110 Other cohort studies in various parts of the world have further demonstrated the consistency, specificity, strength, and temporal nature of the association between smoking and lung cancer. The 1990 Surgeon General’s report provided an outline of the lung cancer mortality ratios for current, former, and never smokers from prospective studies. Smoker mortality rates for lung cancer ranged from 4 to 27 times those of nonsmokers.16 The relative risk has increased over time, doubling for men and quadrupling for women from ACS CPS-I, 1959–1965, to ACS CPS-II, 1982–1988.13 Strength of association was further demonstrated by the dose-response relationship.32 Figure 54-2 demonstrates the gradient of increasing risk of death
Figure 54-2. Death rates from lung cancer among persons age 60–69, by amount and duration (in years) of cigarette smoking: Duration progresses over time. ACS Cancer Prevention Study II. (Source: Thun MJ, Myers DG, Day-Lally C, et al. Age and exposure-response relationships between cigarette smoking and premature death in Cancer Prevention Study II. In: Burns DM, Garfinkel L, Samet J, eds. Changes in Cigarette-Related Disease Risks and Their Implication for Prevention and Control. Smoking and Tobacco Control Monograph No. 8. Rockville, MD: National Cancer Institute, 1997. NIH Publication No. 97-4213.)
from lung cancer as the number of cigarettes smoked per day increases. Increasing the number of cigarettes smoked per day increases the relative risk for both male and female smokers.111 There is also a direct relationship between the number of years of smoking and lung cancer mortality (Fig. 54-2). Lung cancer incidence appears to increase with the square of the amount smoked daily, but with the duration of smoking raised to a power of four or five.112 Smoking mechanics, such as the degree of inhalation, also affect lung cancer mortality.12 However, even smokers who report slight inhalation or none have a relative risk of cancer up to eightfold higher than that for nonsmokers.113 The 2004 Surgeon General’s report and 2004 International Agency for Research on Cancer (IARC) report confirmed and expanded the evidence base supporting the conclusion that smoking causes lung cancer.1,114 Both case-control and cohort studies have demonstrated some reduction in lung cancer risk in smokers who switched from nonfiltered to filtered cigarettes.12,115 For those who have always smoked filtered cigarettes, the risk of lung cancer is still very high, but may be 10% to 30% lower than that for lifelong smokers of nonfiltered cigarettes. 
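The dose and duration dependence described above (incidence roughly proportional to the square of daily amount but to the fourth or fifth power of smoking duration) implies that duration dominates risk. A minimal sketch of that proportionality (the exponent 4.5 is a midpoint assumption between four and five, and the function returns relative scale only, not absolute incidence):

```python
def relative_incidence(cigs_per_day: float, years_smoked: float,
                       amount_exp: float = 2.0, duration_exp: float = 4.5) -> float:
    """Proportionality only: incidence ~ amount**2 * duration**(4 to 5)."""
    return (cigs_per_day ** amount_exp) * (years_smoked ** duration_exp)

base = relative_incidence(20, 20)
print(relative_incidence(40, 20) / base)            # doubling daily amount: 4x
print(round(relative_incidence(20, 40) / base, 1))  # doubling duration: ~22.6x
```

The asymmetry is the quantitative reason early initiation matters so much: adding years of smoking raises risk far faster than adding cigarettes per day.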
Although initial studies suggested that low-tar cigarettes might confer reduced risk of lung cancer,116–118 recent reviews have concluded that any reduction in lung cancer risk associated with changes in the cigarette is small.101,114 The 2004 Surgeon General’s report and a National Cancer Institute monograph have both concluded that although the characteristics of cigarettes have changed during the last 50 years and yields of tar and nicotine have declined as assessed by the Federal Trade Commission test protocol, the risk of lung cancer in smokers has not declined, and changes in the design of cigarettes intended to reduce tar and nicotine yields have had no significant benefit for lung cancer risk in smokers.1,119 For persons who stop smoking cigarettes, the decrease in lung cancer mortality is related to smoking history (e.g., dose, duration, type of cigarette, and depth of inhalation) as well as the number of years since cessation. Risk reduction is gradual; after 10 years the risk is about 30–50% of that for continuing smokers.16 However, even with the longest duration of quitting, the risk remains greater than for lifetime nonsmokers.120 It is hypothesized that the absolute risk of lung cancer does not decline after cessation, but the additional risk that comes with continued smoking is avoided.
For example, biopsy specimens of nonmalignant tissues show persistent molecular damage in the respiratory epithelium of former smokers.1 Multiple case-control and cohort studies have reported an increased risk of lung cancer among those who smoke pipes, cigars, or both.42,45,121–123 The National Cancer Institute has concluded that regular cigar smoking causes lung cancer.41 For cigar smoking, the risk increases with the number of cigars smoked per day and with increasing depth of inhalation; depth of inhalation is the more powerful predictor of risk.41 However, studies suggest that cigar smokers who do not inhale have a lung cancer risk 2–5 times higher than nonsmokers, and evidence also exists that the risk of lung cancer has increased over time for cigar smokers.114 Studies have reported that risk increases with the duration of cigar smoking, and decreases with cessation.114 In general, the risk of lung cancer is less for pipe and cigar smokers (RR = 4.35) than for cigarette smokers, but substantially greater than for nonsmokers. Former cigarette smokers who switch to cigars or pipes are at higher risk than those who have only ever smoked cigars or pipes, and the latter are at higher risk than those who quit tobacco use entirely.47 An estimated 825 Americans died in 1991 from lung cancer as a result of pipe smoking.124 Among pipe smokers, lung cancer death rates also exhibit dose-response relationships.16 In one study, lung cancer risk among pipe smokers decreased with time since cessation.125 Chemical analysis of the smoke from pipes, cigars, and cigarettes shows that carcinogens are found at comparable levels in the smoke of all these tobacco products. The lower risk of lung cancer among pipe and cigar smokers compared to that in cigarette smokers is due to the smaller amount of tobacco smoked and the lower proportion who inhale.16 In Denmark and Sweden, where the style of smoking pipes and cigars involves deeper inhalation than is generally
practiced in the United States, the rate of lung cancer in pipe and cigar smokers approaches that of cigarette smokers.44,126 The 2004 Surgeon General’s report concluded that smoking causes genetic changes in cells of the lung that lead to the development of lung cancer.1 Although research during the past 25 years has led to a greatly expanded knowledge of the major factors contributing to the toxicity and carcinogenicity of cigarette smoke, the mechanisms responsible for lung tumor initiation from tobacco smoke constituents are complex and not yet completely understood. Armitage and Doll127 proposed that “k” stages are required to transform a normal cell to a malignant cell. Components of tobacco smoke are potent mutagens and carcinogens.1 Tobacco smoke contains more than 60 known carcinogens that have both cancer-initiating and cancer-promoting activity.102,114 The bronchial epithelia of smokers show progressive abnormal changes; the frequency and intensity of these changes increase with the amount smoked. The number of cells with atypical nuclei decreases with an increased number of years since smoking cessation. An association between smoking and the presence of DNA adducts has also been reported.16 For example, a carcinogen in cigarette smoke (benzo[a]pyrene) forms adducts at specific codons on the p53 tumor suppressor gene; these adducts are at the same locations as mutations associated with lung cancer.128,129 Current smokers also have significantly higher levels of PAH-DNA adducts in their lungs.1 A large body of data links exposure to tobacco carcinogens and mutations on the K-ras oncogene. Mutations at codons 12, 13, and 61 are found in adenocarcinoma of the lung, and these mutations are primarily seen in smokers.1 Recent studies have shown that DNA methylation inactivation of the promoters of tumor suppressor genes occurs frequently in smoking-related cancers.
An estimated 15–35% of lung cancer tumors have inactivation of the p16 tumor suppressor gene by DNA-methylation.130
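The Armitage-Doll multistage model mentioned above has a standard mathematical form, sketched here from the model's usual textbook statement rather than reproduced from this chapter: if malignant transformation requires k sequential stages with small, constant transition rates, the incidence rate at age (or exposure duration) t is approximately

```latex
% Armitage-Doll multistage model (standard form; rates \lambda_i assumed small and constant)
I(t) \approx \frac{\lambda_1 \lambda_2 \cdots \lambda_k}{(k-1)!} \, t^{\,k-1}
```

so incidence rises as the (k-1)th power of duration. With k of roughly 5–6, this is consistent with the fourth-to-fifth-power dependence of lung cancer incidence on smoking duration noted earlier.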
Oral, Laryngeal, and Esophageal Cancer A large number of cohort and case-control studies from many countries support the conclusion drawn by the U.S. Surgeon General and IARC that smoking is a cause of oral and laryngeal cancer, and of both adenocarcinoma and squamous cell carcinoma of the esophagus.1,114 For the heaviest smokers, the relative risk for laryngeal cancer is 20 or more compared with lifelong nonsmokers.1 The relative risks for male current smokers compared with lifelong nonsmokers ranged from 3.6 to 11.8 for oral cancer,131 and up to 14.1 for pharyngeal cancer.132 For esophageal cancer the relative risk is 7 for men and 8 for women.11 The estimated numbers of deaths attributable to smoking for these cancers, other cancers, and other diseases are shown in Table 54-1, and Table 54-2 displays the attributable risks. For both men and women, most cases of these three cancers are attributable to smoking, with strong dose-response relationships at each of these sites.12 Smokeless tobacco causes oral cancer.1,133,134 The 1986 Surgeon General’s report cited a relative risk of 50 for oral cancer for smokeless tobacco users compared with nonusers.135 Long-term use of snuff is associated with cancers of the cheek and gum.
The death rates from oral and pharyngeal cancer vary more than 100-fold across countries,136 with the highest rates among men in Sri Lanka and the western Pacific region, where tobacco is chewed in combination with betel.1 All forms of tobacco use (cigarettes, pipes, cigars, chewing tobacco, snuff, reverse smoking [the lit end is placed inside the mouth], and “pan” [tobacco, areca nuts, slaked lime, and betel leaf] chewing) increase the development of premalignant lesions and cancer of the oral cavity and pharynx.1 Cigar smoking causes oral, laryngeal, and esophageal cancer.41,42,123 In one large cohort study (CPS I), the relative risk for oral cancer was 7.9 overall and 15.9–16.7 for men smoking five or more cigars per day.41 Former cigar smokers have a lower risk than current cigar smokers, but even after 10 years, the risk is three times that of nonusers.137 Studies have shown relative risks for laryngeal cancer of 10 overall and 26 for those smoking five or more cigars per day. A relative risk of 3.6–6.5 for esophageal cancer and a dose-response relationship with the duration/intensity of cigar smoking have been demonstrated.114
Pipe smoking causes lip cancer and is also associated with oral, laryngeal, and esophageal cancers.1,114 For these sites, the mortality ratios for smokers—regardless of whether they smoke cigarettes, pipes, or cigars—are similar.1,12,41 A study in Brazil reported that pipe smokers have a relative risk of 11 for developing oral cancer, and this risk decreases with cessation, though it did not return to nonuser rates even after 10 years of abstinence (when it was still 3.4).138 Another study showed an increased risk of esophageal cancer with snus (Swedish snuff) use, but the results were not statistically significant.139 The progression from healthy mucosa to carcinoma is the result of an accumulation of genetic mutations that disrupt the normal control of cell growth.140 Several carcinogens and tobacco metabolites have been measured in saliva and the oral mucosa, as well as the urine and blood, of smokers and smokeless tobacco users.1 Studies in a number of animal species show that multiple carcinogens in tobacco smoke and smoke condensate cause premalignant papillomas and carcinomas of the esophagus and forestomach.1,141 Benzo[a]pyrene penetrates the cell membranes of the esophageal epithelium, causing papillomas and squamous cell carcinoma.142 About 50% of head and neck squamous cell carcinomas have p53 mutations; these mutations appear to increase with the number of cigarettes smoked and are augmented by alcohol use.143 Alcohol plays a synergistic role with smoking for each of these cancers,94 and together, smoking and alcohol account for most cases in the United States.1 In one study of oral cancer risk, for nonsmokers who consumed 7 oz or more of alcohol per week, the relative risk of death
Tobacco: Health Effects and Control
from oral cancer was 2.5 compared with that for nondrinkers. Those who consumed the same amount of alcohol and smoked one-half pack of cigarettes or less per day had approximately double the risk of the nonsmoking alcohol drinkers, but the relative risk rose to 24 if the smoker consumed a pack or more per day.109 Reduction in tobacco use could prevent most deaths from esophageal cancer in the United States.1 After smoking cessation, relative risk may decrease more slowly for oral cancer than for pharyngeal cancer.138,144 Smoking cessation halves the risk of oral and esophageal cancer within 5 years of quitting; the risk is reduced further with longer abstinence. The risk of laryngeal cancer is reduced after 3–4 years of abstinence, but it remains higher than that for never smokers.16 Some studies (but not all) suggest that the risk of squamous cell carcinoma of the esophagus may decrease more rapidly than adenocarcinoma after cessation.145–147
Bladder and Renal Cancer

Smoking is a well-established cause of bladder cancer, with 30 case-control studies and 10 prospective studies supporting this conclusion.1,148 As seen in Tables 54-1 and 54-2, about 45% of bladder cancer cases in men and 27% in women are attributable to smoking, accounting for more than 4800 deaths per year.11 Relative risks for bladder cancer are 2 to 3, with a clear dose-response relationship. Smoking cessation reduces the risk of bladder cancer by half after only a few years.16
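Attributable fractions such as the 45% figure above combine smoking prevalence with the relative risks for current and former smokers. As a minimal sketch of the standard prevalence-based (Levin-type) formula used in smoking-attributable mortality estimates: the bladder-cancer relative risks below are taken from Table 54-1, but the prevalence figures are illustrative assumptions, not values from this chapter.

```python
def attributable_fraction(p_current, p_former, rr_current, rr_former):
    """Population attributable fraction with two exposure levels
    (current and former smokers), per the Levin-type formula."""
    excess = p_current * (rr_current - 1) + p_former * (rr_former - 1)
    return excess / (1 + excess)

# Assumed (hypothetical) prevalences: 25% current and 25% former smokers.
# RRs for male bladder cancer (current 3.27, former 2.09) are from Table 54-1.
af = attributable_fraction(0.25, 0.25, 3.27, 2.09)
print(round(af, 2))  # prints 0.46, close to the ~45% cited for men
```

Multiplying such a fraction by the total deaths from a disease yields the smoking-attributable mortality (SAM) figures tabulated in Table 54-1.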
TABLE 54-1. RELATIVE RISK (RR) FOR DEATH ATTRIBUTED TO SMOKING AND SMOKING-ATTRIBUTABLE MORTALITY (SAM) FOR CURRENT AND FORMER SMOKERS BY DISEASE CATEGORY AND SEX, UNITED STATES, 1999–2003

| Disease Category (ICD-10) | Men, RR Current | Men, RR Former | Men, SAM | Women, RR Current | Women, RR Former | Women, SAM | Total SAM |
|---|---|---|---|---|---|---|---|
| ADULT DISEASE (PERSONS ≥35 YRS) | | | | | | | |
| Malignant Neoplasms | | | | | | | |
| Lip, oral cavity, pharynx (C00-C14) | 10.89 | 3.40 | 3671 | 5.08 | 2.29 | 1133 | 4804 |
| Esophagus (C15) | 6.76 | 4.46 | 6735 | 7.75 | 2.79 | 1611 | 8346 |
| Stomach (C16) | 1.96 | 1.47 | 1882 | 1.36 | 1.32 | 572 | 2454 |
| Pancreas (C25) | 2.31 | 1.15 | 3030 | 2.25 | 1.55 | 3443 | 6473 |
| Larynx (C32) | 14.60 | 6.34 | 2454 | 13.02 | 5.16 | 566 | 3020 |
| Trachea, lung, bronchus (C33-C34) | 23.26 | 8.70 | 78,685 | 12.69 | 4.53 | 45,268 | 123,953 |
| Cervix uteri (C53) | 0 | 0 | 0 | 1.59 | 1.14 | 448 | 448 |
| Kidney, other urinary (C64-65) | 2.72 | 1.73 | 2714 | 1.29 | 1.05 | 212 | 2926 |
| Urinary bladder (C67) | 3.27 | 2.09 | 3782 | 2.22 | 1.89 | 1071 | 4852 |
| Acute myeloid leukemia (C92.0) | 1.86 | 1.33 | 778 | 1.13 | 1.39 | 313 | 1091 |
| Cardiovascular Diseases | | | | | | | |
| Ischemic heart disease (I20-I25) | — | — | 80,304 | — | — | — | — |
| Persons aged 35–64 yrs | 2.8 | 1.64 | 30,077 | 3.08 | 1.32 | — | — |
| Persons aged >65 yrs | 1.51 | 1.21 | 50,227 | 1.60 | 1.20 | — | — |
| Other heart diseases (I00-I09, I26-I51) | 1.78 | 1.22 | 12,700 | 1.49 | 1.14 | 8317 | 21,016 |
| Cerebrovascular disease (I60-I69) | — | — | 7880 | — | — | 8104 | 15,984 |
| Persons aged 35–64 yrs | 3.27 | 1.04 | — | 4.00 | 1.30 | — | — |
| Persons aged >65 yrs | 1.63 | 1.04 | — | 1.49 | 1.03 | — | — |
| Atherosclerosis (I70-I71) | 2.44 | 1.33 | 558 | 1.83 | 1.00 | 1265 | 1823 |
| Aortic aneurysm (I71) | 6.21 | 3.07 | 2814 | 7.07 | 2.07 | 5703 | 8517 |
| Other arterial disease (I72-I78) | 2.07 | 1.01 | 751 | 2.17 | 1.12 | 504 | 1254 |
| Respiratory Diseases | | | | | | | |
| Pneumonia, influenza (J10-J18) | 1.75 | 1.36 | 4183 | 2.17 | 1.10 | 5842 | 10,025 |
| Bronchitis, emphysema (J40-J42, J43) | 17.10 | 15.64 | 6751 | 12.04 | 11.77 | 7955 | 14,706 |
| Chronic airway obstruction (J44) | 10.58 | 6.80 | 37,977 | 13.08 | 6.78 | 40,209 | 78,186 |
| Burn deaths* | — | — | NA | — | — | NA | 760 |

*Burn deaths were not stratified by sex. — = not available. Data from U.S. Environmental Protection Agency. Respiratory health effects of passive smoking: lung cancer, and other disorders. U.S. EPA Publication No. EPA/600/6-90/006, 1992; Steenland K. Passive smoking and risks of heart disease. JAMA. 1992;267:94–99.
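Because Table 54-1 reports SAM separately by sex alongside a combined total, the columns can be checked arithmetically: for each site, men’s and women’s SAM should sum to the total within rounding. A quick sketch using values transcribed from the table:

```python
# (men SAM, women SAM, total SAM) for selected cancer sites, Table 54-1,
# annual deaths, United States, 1999-2003.
sam = {
    "Lip, oral cavity, pharynx (C00-C14)": (3671, 1133, 4804),
    "Esophagus (C15)": (6735, 1611, 8346),
    "Pancreas (C25)": (3030, 3443, 6473),
    "Trachea, lung, bronchus (C33-C34)": (78685, 45268, 123953),
    "Urinary bladder (C67)": (3782, 1071, 4852),
    "Acute myeloid leukemia (C92.0)": (778, 313, 1091),
}
for site, (men, women, total) in sam.items():
    # Allow a difference of 1 death for rounding in the published totals
    # (e.g., urinary bladder: 3782 + 1071 = 4853 vs. a printed 4852).
    assert abs((men + women) - total) <= 1, site
print("all site totals consistent")
```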
Behavioral Factors Affecting Health

TABLE 54-2. ATTRIBUTABLE FRACTIONS FOR SELECTED CAUSES OF DEATH IN CIGARETTE SMOKERS, UNITED STATES, 2003

Smoking-attributable fraction, by sex and age group:

| Disease Category (ICD-10) | Men, 35–64 | Men, 65+ | Women, 35–64 | Women, 65+ |
|---|---|---|---|---|
| Malignant Neoplasms | | | | |
| Lip, oral cavity, pharynx (C00-C14) | 0.76 | 0.70 | 0.53 | 0.42 |
| Esophagus (C15) | 0.71 | 0.71 | 0.65 | 0.52 |
| Stomach (C16) | 0.27 | 0.26 | 0.12 | 0.11 |
| Pancreas (C25) | 0.27 | 0.18 | 0.28 | 0.21 |
| Larynx (C32) | 0.83 | 0.81 | 0.77 | 0.69 |
| Trachea, lung, bronchus (C33-C34) | 0.89 | 0.86 | 0.76 | 0.67 |
| Cervix uteri (C53) | NA | NA | 0.14 | 0.08 |
| Kidney, other urinary (C64-65) | 0.39 | 0.36 | 0.07 | 0.04 |
| Urinary bladder (C67) | 0.47 | 0.45 | 0.31 | 0.27 |
| Acute myeloid leukemia (C92.0) | 0.24 | 0.21 | 0.09 | 0.11 |
| Cardiovascular Diseases | | | | |
| Ischemic heart disease (I20-I25) | 0.39 | 0.14 | 0.34 | 0.10 |
| Other heart diseases (I00-I09, I26-I51) | 0.21 | 0.16 | 0.12 | 0.08 |
| Cerebrovascular disease (I60-I69) | 0.37 | 0.08 | 0.42 | 0.05 |
| Atherosclerosis (I70-I71) | 0.31 | 0.24 | 0.15 | 0.06 |
| Aortic aneurysm (I71) | 0.65 | 0.62 | 0.61 | 0.45 |
| Other arterial disease (I72-I78) | 0.22 | 0.10 | 0.22 | 0.12 |
| Respiratory Diseases | | | | |
| Pneumonia, influenza (J10-J18) | 0.22 | 0.21 | 0.22 | 0.11 |
| Bronchitis, emphysema (J40-J42, J43) | 0.89 | 0.90 | 0.82 | 0.81 |
| Chronic airway obstruction (J44) | 0.80 | 0.81 | 0.79 | 0.73 |

Data from Centers for Disease Control and Prevention. Smoking-Attributable Mortality, Morbidity, and Economic
The relationship between smoking and cancers of the ureter and renal pelvis is even stronger: smoking accounts for 70–82% of these cases in men and 37–61% in women.149 Risks attributable to smoking and the corresponding numbers of annual deaths for renal cancer are shown in Tables 54-1 and 54-2. Relative risks from a variety of studies have ranged from 1 to 5, with a clear dose-response relationship demonstrated, and reduction in risk with successful cessation.1,12 Both IARC114 and the Surgeon General1 have concluded that smoking causes renal cancer. Some studies have suggested that pipe smoking is also associated with bladder cancer.114 The urinary tract is exposed to tobacco carcinogens and their metabolites as they are cleared by the body. The urine of smokers is more mutagenic than the urine of nonsmokers.150 N-nitrosodimethylamine, a chemical in tobacco smoke, causes kidney tumors in a variety of animal models.151
Pancreatic Cancer

In 2007, an estimated 37,170 new cases of pancreatic cancer and 33,370 deaths will occur.94 The 1-year survival rate for pancreatic cancer is 24%, and the 5-year survival rate is 4%. Even for those diagnosed with local disease, the 5-year survival rate is 17%.94 The 2004 Surgeon General’s report concluded that smoking causes pancreatic cancer.1 Dose-response relationships have been found,12 with relative risks from 2 to 3 reported in most studies, but at the highest levels of smoking, relative risks range from 3 to 5.1 Attributable risk and annual smoking-related mortality rates are shown in Tables 54-1 and 54-2, respectively. Evidence shows that the risk of pancreatic cancer declines with cessation.1 The National Cancer Institute concluded that cigar smoking probably causes pancreatic cancer.41 The Veterans Study noted a 1.5 relative risk for pancreatic cancer among cigar smokers, but no increased risk for pipe smokers.152 However, other studies have suggested an increased risk of pancreatic cancer for pipe smokers.114 One study showed an increased risk of pancreatic cancer among snus users
(RR = 1.7) and two others among traditional smokeless tobacco users.139,153,154 New studies of ras mutations in pancreatic cancer support a causal role for smoking. Pancreatic cancer can be produced in animals with the tobacco-specific N-nitrosamine, NNK. Aromatic amines may also play a role.1
Stomach Cancer

In 2007, an estimated 21,260 new cases and 11,210 deaths from stomach cancer will occur in the United States.94 Nine cohort studies and 11 case-control studies support the conclusion of the Surgeon General1 and IARC114 that smoking causes gastric cancer. The Surgeon General also concluded that the evidence was suggestive, but not sufficient to infer a causal relationship between smoking and noncardia gastric cancers. The average relative risk is 1.6, with a dose-response relationship.12,155 Risk decreases with sustained cessation,16 with an average relative risk of 1.2 in former smokers. The risk of stomach cancer for former smokers approaches that of lifelong nonsmokers about 20 years after quitting.1 At least two studies have reported an increased risk of stomach cancer among smokeless tobacco users, but the results were not statistically significant.139,156 Other data show an increased risk of stomach cancer from cigar smoking, with a dose-response relationship.156 Smoking appears to increase the infectivity or add to the pathogenicity of Helicobacter pylori, a known cause of noncardia stomach cancer. Smoking may also lower the plasma and serum concentrations of certain micronutrients that may protect against H. pylori infections or gastric cancer.157
Cervical Cancer

In 2007, an estimated 11,150 new cases of cervical cancer will be diagnosed and an estimated 3670 U.S. women will die from the disease.94 Epidemiological studies have consistently shown an increased risk of
cervical cancer in cigarette smokers.16 A median relative risk of about 2.0 was found in these studies. There is a dose-response relationship with duration of smoking and number of cigarettes smoked per day.56 Human papillomavirus (HPV) is causally related to cervical cancer and appears to be necessary to its development.158,159 However, two prospective cohort studies have shown that smoking was associated with increased risk in women who were HPV-positive at entry into the study.160,161 It is postulated that smoking may increase the rate at which cancer develops in women with persistent infection or possibly increase the risk for persistent infection.1 Both the Surgeon General1 and IARC114 have concluded that smoking causes cervical cancer. In most studies, former smokers at one year after cessation are at lower risk for cervical cancer than are continuing smokers.16 Components of tobacco smoke (including NNK and nicotine)162,163 have been found in the cervical mucus, and the mucus is mutagenic in smokers.164 In addition, tobacco-related DNA adducts were higher in cervical biopsies of smokers compared with nonsmokers.165
Endometrial Cancer

In 2007, an estimated 39,080 new cases and 7400 deaths will occur from endometrial cancer.94 Both the 198912 and 2004 Surgeon General’s reports1 concluded that smoking reduces the risk of endometrial cancer in postmenopausal women. This may be due to lower estrogen production, a consequence of smokers’ lower body weight, and to altered estrogen metabolism. However, the modest decrease in the risk of endometrial cancer is far outweighed by the increase in other causes of smoking-related disease and death.1,12
Acute Myeloid Leukemia

The most common type of leukemia in U.S. adults is acute myeloid leukemia, with an estimated 13,410 cases diagnosed in 2007.94 Several literature reviews and meta-analyses have noted a significant association between current or former smoking and myeloid leukemia, with a dose-response relationship to the number of cigarettes smoked per day.166,167 There is also an association with duration of smoking. The relative risk for ever-smokers ranges from 1.3 to 1.5 compared with never-smokers. For one-pack-per-day smokers, the relative risk is 2.0. Both the Surgeon General1 and IARC114 have concluded that smoking causes myeloid leukemia. Smoking causes an estimated 12–58% of acute myeloid leukemia deaths.1 Cigarette smoke contains substances (including benzene, polonium-210, and lead-210) that are known to cause myeloid forms of leukemia. Cigarette smoke is the major source of benzene exposure in the United States (about half of all exposure).168
Other Cancers

The 2004 Surgeon General’s report concluded that the evidence is suggestive, but not sufficient to infer a causal relationship between smoking and colorectal adenomatous polyps, colorectal cancer, and liver cancer. The report concluded that the evidence is suggestive of no causal relationship between smoking and risk for prostate cancer, although some studies suggest a higher mortality rate from prostate cancer in smokers than nonsmokers.1 Several studies have found an association between smokeless tobacco use and prostate cancer.169

OTHER SMOKING-RELATED DISEASES
Chronic Obstructive Pulmonary Disease

About 12 million people in the United States have been diagnosed with chronic obstructive pulmonary disease (COPD).170 In 2000, COPD accounted for more than 725,000 hospitalizations in the United States, nearly 8 million physician office visits and hospital outpatient visits, and 1.5 million emergency room visits.170 An estimated 122,000 Americans died of COPD in 2003, and 76% of COPD deaths are attributable to smoking.11,24 The death rates from COPD
increase with age; in 2003, they were about equal for men and women.24 Mortality from COPD has paralleled lung cancer mortality, increasing progressively over the past 30 years.12 A recent decline in COPD mortality at younger ages is consistent with lower smoking prevalence among younger cohorts of Americans.12 The 2004 Surgeon General’s Report1 summarized the studies of smoking and COPD through 2003. Data from case-control and cohort studies consistently demonstrate higher COPD mortality among cigarette smokers than among nonsmokers, with a mortality ratio as high as 32 for persons smoking 25 or more cigarettes per day.16 In the Nurses’ Health Study, the relative risk for self-reported, physician-diagnosed chronic bronchitis among current smokers when compared with women who had never smoked was 2.85.171 In the 40-year follow-up of the British Physicians’ Study, the risk of COPD among smokers was found to be almost as high as the risk of lung cancer.24,25 Dose-response relationships have been consistently observed, with the risk of death from COPD influenced by the number of cigarettes smoked per day, the depth of smoke inhalation, and the age at smoking initiation.126,172 The 2004 Surgeon General’s report concluded that smoking causes COPD.
The report also concluded that the evidence was suggestive but not sufficient to infer a causal relationship between smoking and acute respiratory infections among persons with preexisting COPD.1 Abnormal lung function (especially expiratory airflow) occurs as early as 2 years after smoking initiation.173,174 Smokers exhibit a more rapid decline in forced expiratory volume at 1 second (FEV1) with age than do nonsmokers,175 and as the amount of cigarette smoking increases, the rate of decline accelerates.1 Decline in lung function begins with inflammation in the small airways, although inflammation in the lung parenchyma is also a major factor in the development of COPD.1 Symptoms of such inflammation are not always a reliable indicator of smokers who will subsequently have symptomatic COPD. However, those smokers with a fast annual decline in FEV1 appear to constitute a high-risk group for COPD development.1,175 Studies have identified the likely mechanisms by which cigarette smoking induces COPD. The current model suggests that after a long latency period, COPD develops because of a more rapid decline in lung function during adulthood or because of a reduction in maximal lung growth in childhood and adolescence.1 The age at which smoking has the greatest influence on COPD pathogenesis is unknown. Atopy and increased airway responsiveness are associated with a more rapid decrease in pulmonary function, and cigarette smoking is a cause of exaggerated airway responsiveness. Smoking also causes injurious biologic processes (oxidant stress, inflammation, and a protease/antiprotease imbalance) that result in airway and alveolar injury.
If sustained, such injury results in COPD.1,175 Cigar smokers and pipe smokers who inhale have a higher rate of decline of FEV1 than cigarette smokers and a higher prevalence of chronic cough and phlegm than never-smokers.111,176 Several large cohort studies have found that pipe smokers and cigar smokers have approximately a twofold increase in COPD mortality compared with nonsmokers, but the case fatality rate in these groups of smokers is lower than that of cigarette smokers.42,175 However, former cigarette smokers who switched to cigars or pipes were at higher risk than those who had only smoked pipes or cigars, and those who quit smoking without taking up other tobacco products had the lowest risk among tobacco users.47 A large prospective study in Scandinavia found that the apparent difference in the mortality risk associated with pipe and cigar smoking compared with that of cigarette smoking was markedly reduced after adjusting for smoke inhalation.126 The National Cancer Institute concluded that heavy cigar smokers and those who inhale deeply can develop COPD and that the reduced inhalation of smoke by cigar smokers probably explains their lower risk of COPD compared with cigarette smokers.41 In 1991, an estimated 145 persons in the United States died from COPD as a result of pipe smoking.124 After smoking cessation, the rate of COPD excess risk reduction is determined by prior smoking patterns (duration and daily consumption)