The Evidence-Based Program Application Process


The Advocacy Foundation Professional Development Series Hebrews 10:24-25

Evidence-Based Programming The Evidence-Based Program Application Process

“Helping Individuals, Organizations & Communities Achieve Their Full Potential” John C Johnson III, CEO Vol. III




The Advocacy Foundation, Inc. Helping Individuals, Organizations & Communities Achieve Their Full Potential

Professional Development Series

The Evidence-Based Program Development & Application Process

"Helping Individuals, Organizations & Communities Achieve Their Full Potential"

1735 Market Street, Suite 3750, Philadelphia, PA 19102 | 100 Edgewood Avenue, Suite 1690, Atlanta, GA 30303

John C Johnson III Founder & CEO

(878) 222-0450 Voice | Fax | SMS www.TheAdvocacyFoundation.org



Biblical Authority

______

Hebrews 10:24-25 (NASB)

24 and let us consider how to stimulate one another to love and good deeds, 25 not forsaking our own assembling together, as is the habit of some, but encouraging one another; and all the more as you see the day drawing near.



Table of Contents

Evidence-Based Programming: The Evidence-Based Program Application Process

Biblical Authority
I. Introduction
II. The National Registry of Evidence-Based Programs and Practices (NREPP)
III. The Submission Process
IV. The Review Process
V. The Substance Abuse and Mental Health Services Administration (SAMHSA)
VI. The US Department of Health & Human Services (HHS)
VII. The US Department of Juvenile Justice & Delinquency Prevention
VIII. The Early Head Start Program
IX. References

Attachments
A: A Paradigm Shift in Selecting Evidence-Based Approaches
B: Evidence-Based Programs and Practices - What Does It All Mean

Copyright © 2015 The Advocacy Foundation, Inc. All Rights Reserved.



Introduction

Evidence-based programming: What does it actually mean?
Cornell University blog (circa May 2010)

Anyone who loves detective novels (like I do) winds up being fascinated by evidence. I remember discovering the Sherlock Holmes stories as a teenager, reading how the great detective systematically used evidence to solve perplexing crimes. Holmes followed the scientific method, gathering together a large amount of evidence, deducing several possible explanations, and then finding the one that best fits the facts of the case. As everyone knows, often the "evidence-based" solution was very different from what common sense told the other people involved in the case.

In our efforts to solve human problems, we also search for evidence, but the solutions rarely turn up in such neat packages. Whether it's a solution to teen pregnancy, drug abuse, family violence, poor school performance, wasteful use of energy, or a host of other problems, we wish we had a Sherlock Holmes around to definitively tell us which solution really works. Over the past decade, efforts have grown to systematically take the evidence into consideration when developing programs to help people overcome life's challenges. But what does "evidence-based" really mean?

Take a look at these three options. Which one fits the criteria for an evidence-based program?

1. A person carefully reviews the literature on a social problem. Based on high-quality research, she designs a program that follows the recommendations and ideas of researchers.



2. A person creates a program to address a problem. He conducts an evaluation of the program in which participants rate their experiences in the program and their satisfaction with it, both of which are highly positive.

3. An agency creates a program to help its clients. Agency staff run the program and collect pretest and post-test data on participants and a small control group. The group who did the program had better outcomes than the control group.

If you answered "None of the above," you are correct. Number 3 is closest, but still doesn't quite make it. Although many people don't realize it, the term "evidence-based program" has a very clear and specific meaning.

To be called "evidence-based," the following things must happen:

1. The program is evaluated using an experimental design. In such a design, people are assigned randomly into the treatment group (these folks get the program) or a control group (these folks don't). When the program is done, both groups are compared. This design helps us be more certain that the results came from the program, and not some other factor (e.g., certain types of people decided to do the program, thus biasing the results). Sometimes this true experimental design isn't possible, and a "quasi-experimental" design is used (more on that in a later post). Importantly, the program results should be replicated in more than one study. (A minimal illustrative sketch of this design appears at the end of this section.)

2. The evaluation studies are submitted to peer review by other scientists, and often are published in peer-reviewed journals. After multiple evaluations, the program is often submitted to a federal agency or another scientific organization that endorses the program as evidence-based.



3. The program is presented in a manual so that it can be implemented locally, as close as possible to the way the program was designed. This kind of "treatment fidelity" is very important to achieve the demonstrated results of the program.

As you might already be thinking, a lot of issues come up when you consider implementing an evidence-based program. On the one hand, they have one enormous advantage: the odds are that they will work. That is, you can be reasonably confident that if implemented correctly, the program will achieve the results it says it will. A big problem, on the other hand, is that a program must meet local needs, and an evidence-based program may not be available on the topic you are interested in. We'll come back to these issues in later posts. In the meantime, I recommend this good summary prepared by extension staff at the University of Wisconsin. In addition, I'd suggest viewing this presentation by Jutta Dutterweich from Cornell's Family Life Development Center, on "Planning for Evidence-Based Programs." And check out our web links for some sites that register and describe evidence-based programs.
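To make the randomized design described in item 1 concrete, the following Python sketch randomly assigns hypothetical participants to treatment and control groups and compares average outcomes. The sample size, outcome scale, and effect size are all invented for illustration; this is not an analysis of any real program.

```python
# Illustrative sketch only: simulates the random-assignment design described above.
# All numbers (sample size, effect size, outcome scale) are hypothetical.
import random
import statistics

random.seed(42)

participants = list(range(200))            # 200 hypothetical participants
random.shuffle(participants)               # random assignment removes selection bias
treatment_ids, control_ids = participants[:100], participants[100:]

def simulated_outcome(in_treatment: bool) -> float:
    """Hypothetical outcome score; the 'program' adds a small average benefit."""
    base = random.gauss(50, 10)
    return base + (5 if in_treatment else 0)

treatment_scores = [simulated_outcome(True) for _ in treatment_ids]
control_scores = [simulated_outcome(False) for _ in control_ids]

# With random assignment, a difference in group means is evidence that the
# program, not pre-existing differences between groups, produced the change.
print("Treatment mean:", round(statistics.mean(treatment_scores), 1))
print("Control mean:  ", round(statistics.mean(control_scores), 1))
```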



The National Registry of Evidence-Based Programs and Practices (NREPP)

The National Registry of Evidence-based Programs and Practices (NREPP) is an online, searchable database of interventions designed to promote mental health or to prevent or treat substance abuse and mental disorders. The registry is funded and administered by the Substance Abuse and Mental Health Services Administration (SAMHSA), part of the U.S. Department of Health and Human Services. The goal of the Registry is to encourage wider adoption of evidence-based interventions and to help those interested in implementing an evidence-based intervention to select one that best meets their needs.

______

In the behavioral health field, there is an ongoing need for researchers, developers, evaluators, and practitioners to share information about what works to improve outcomes among individuals coping with, or at risk for, mental disorders and substance abuse. Discussing how this need led to the development of NREPP, Brounstein, Gardner, and Backer (2006) write:

It is important to note that not all prevention programs work. Still other programs have no empirically based support regarding their effectiveness. [...] Many others have empirical support, but the methods used to generate that support are suspect. This is another reason to highlight the need for and use of scientifically defensible, effective prevention programs. These are programs that clearly demonstrate that the program was well implemented, well evaluated, and produced a consistent pattern of positive results.

The focus of NREPP is on delivering an array of standardized, comparable information on interventions that are evidence based, as opposed to identifying programs that are "effective" or ranking them in effectiveness. Its peer reviewers use specific criteria to rate the quality of an intervention's evidence base as well as the intervention's suitability for broad adoption. In addition, NREPP provides contextual information about the intervention, such as the population served, implementation history, and cost data, to encourage a realistic and holistic approach to selecting prevention interventions.



As of 2010, the interventions reviewed by NREPP have been implemented successfully in more than 229,000 sites, in all 50 States and more than 70 countries, and with more than 107 million clients. Versions of its review process and rating criteria have been adopted by the National Cancer Institute and the Administration on Aging.

The information NREPP provides is subject to certain limitations. It is not an exhaustive repository of all tested mental health interventions; submission is a voluntary process, and limited resources may preclude the review of some interventions even though they meet minimum requirements for acceptance. The NREPP home page prominently states that "inclusion in the registry does not constitute an endorsement."

Predecessor System

The registry originated in 1997 and has gone through several changes since then. The predecessor to today's NREPP was the National Registry of Effective Prevention Programs (later renamed the National Registry of Effective Programs and Practices), which was developed by SAMHSA's Center for Substance Abuse Prevention as part of the Model Programs initiative. Procedures under this earlier registry were developed to review, rate, and designate programs as Model, Effective, or Promising. Based on extensive input from scientific communities, service providers, expert panels, and the public, the procedures were revised. Reviews using the new NREPP system began in 2006, and the redesigned Web site debuted in March 2007.



The Submission Process

NREPP holds an open submission period that runs November 1 through February 1. For an intervention to be eligible for a review, it must meet four minimum criteria (summarized in the sketch following this list):

1. The intervention has produced one or more positive behavioral outcomes (p ≤ .05) in mental health, mental disorders, substance abuse, or substance use disorders among individuals, communities, or populations.
2. Evidence of these outcomes has been demonstrated in at least one study using an experimental or quasi-experimental design.
3. The results of these studies have been published in a peer-reviewed journal or other professional publication, or documented in a comprehensive evaluation report.
4. Implementation materials, training and support resources, and quality assurance procedures have been developed and are ready for use by the public.

Once reviewed and added to the Registry, interventions are invited to undergo a new review 4 or 5 years after their initial review.
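Before the submission window opens, an applicant might informally check a candidate intervention against these four minimum criteria. The Python sketch below is a hypothetical pre-submission checklist; the class and field names are invented for illustration and are not an official NREPP tool.

```python
# Hypothetical pre-submission checklist; not an official NREPP tool.
from dataclasses import dataclass

@dataclass
class InterventionEvidence:
    has_positive_outcome_p05: bool      # criterion 1: outcome with p <= .05
    used_experimental_or_quasi: bool    # criterion 2: experimental or quasi-experimental design
    published_or_documented: bool       # criterion 3: peer-reviewed publication or evaluation report
    materials_ready_for_public: bool    # criterion 4: implementation, training, and QA materials

    def meets_minimum_criteria(self) -> bool:
        """All four minimum criteria must be satisfied to be eligible for review."""
        return all([
            self.has_positive_outcome_p05,
            self.used_experimental_or_quasi,
            self.published_or_documented,
            self.materials_ready_for_public,
        ])

candidate = InterventionEvidence(True, True, True, False)
print("Eligible for review:", candidate.meets_minimum_criteria())  # False: materials not ready
```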



The Review Process

The NREPP review process consists of two parallel and simultaneous review tracks, one that looks at the intervention's Quality of Research (QOR) and another that looks at the intervention's Readiness for Dissemination (RFD). The materials used in a QOR review are generally published research articles, although unpublished final evaluation reports can also be included. The materials used in an RFD review include implementation materials and process documentation, such as manuals, curricula, training materials, and written quality assurance procedures.

The reviews are conducted by expert consultants who have received training on NREPP's review process and rating criteria. Two QOR and two RFD reviewers are assigned to each review. Reviewers work independently, rating the same materials, and their ratings are averaged to generate final scores (see the illustrative sketch following the rating criteria below). While the review process is ongoing, NREPP staff work with the intervention's representatives to collect descriptive information about the intervention, such as the program goals, types of populations served, and implementation history.

The QOR ratings, given on a scale of 0.0 to 4.0, indicate the strength of the evidence supporting the outcomes of the intervention. Higher scores indicate stronger, more compelling evidence. Each outcome is rated separately because interventions may target multiple outcomes (e.g., alcohol use, marijuana use, behavior problems in school), and the evidence supporting the different outcomes may vary. The QOR rating criteria are:

1. Reliability of measures
2. Validity of measures
3. Intervention fidelity
4. Missing data and attrition
5. Potential confounding variables
6. Appropriateness of analysis

The RFD ratings, also given on a scale of 0.0 to 4.0, indicate the amount and quality of the resources available to support the use of the intervention. Higher scores indicate that resources are readily available and of high quality. These ratings apply to the intervention as a whole. The RFD criteria are:

1. Availability of implementation materials
2. Availability of training and support resources
3. Availability of quality assurance procedures
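To illustrate how two independent reviewers' ratings could be combined into a single 0.0-4.0 score for one outcome, the following Python sketch averages hypothetical criterion ratings. The unweighted mean used here is an assumption made purely for illustration; it is not NREPP's published scoring formula.

```python
# Illustrative only: combines two hypothetical reviewers' 0.0-4.0 ratings.
# The unweighted mean across criteria and reviewers is an assumption for
# illustration, not NREPP's actual scoring formula.
from statistics import mean

QOR_CRITERIA = [
    "Reliability of measures", "Validity of measures", "Intervention fidelity",
    "Missing data and attrition", "Potential confounding variables",
    "Appropriateness of analysis",
]

# Hypothetical ratings for one outcome (e.g., "alcohol use") from two reviewers.
reviewer_1 = dict(zip(QOR_CRITERIA, [3.0, 3.5, 2.5, 3.0, 2.0, 3.5]))
reviewer_2 = dict(zip(QOR_CRITERIA, [3.5, 3.0, 3.0, 2.5, 2.5, 3.0]))

per_criterion_avg = {c: mean([reviewer_1[c], reviewer_2[c]]) for c in QOR_CRITERIA}
outcome_score = round(mean(per_criterion_avg.values()), 1)

print("Quality of Research score for this outcome:", outcome_score)  # 2.9
```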

Reviewers

QOR reviewers are required to have a doctoral-level degree and a strong background in, and understanding of, current methods of evaluating prevention and treatment interventions. RFD reviewers are selected from two categories: direct services experts (including both providers and consumers of services) and experts in the field of implementation. Direct services experts must have previous experience evaluating prevention or treatment interventions and knowledge of mental health or substance abuse prevention or treatment content areas.

Products and Publications

NREPP publishes an intervention summary for each intervention it adds to the Registry. The summaries, which are accessed through the Registry's search engine, contain the following standardized information:

- A brief description of the reviewed intervention, including targeted goals and theoretical basis
- Study populations (age, gender, race/ethnicity)
- Study settings and geographical locations
- Implementation history
- Funding information
- Comparative evaluation research conducted with the intervention
- Adaptations
- Adverse effects
- List of studies and materials reviewed
- List of outcomes
- Description of measures and key findings for each outcome
- Research design of the studies reviewed
- Quality of Research and Readiness for Dissemination ratings
- Reviewer comments (strengths and weaknesses)
- Costs
- Replication studies
- Contact information
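For readers who want to track these summary fields in their own records, the following Python sketch shows one hypothetical way to represent an intervention summary as a data structure. The field names and types are assumptions for illustration and do not reflect NREPP's actual data schema.

```python
# Hypothetical record layout mirroring an NREPP intervention summary.
# Field names are illustrative only; this is not NREPP's data schema.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InterventionSummary:
    description: str                    # targeted goals and theoretical basis
    study_populations: List[str]        # age, gender, race/ethnicity
    settings_and_locations: List[str]
    implementation_history: str
    funding_information: str
    outcomes: List[str]                 # each outcome receives its own QOR rating
    qor_ratings: Dict[str, float] = field(default_factory=dict)  # outcome -> 0.0-4.0
    rfd_rating: float = 0.0             # single 0.0-4.0 rating for the whole intervention
    reviewer_comments: str = ""
    costs: str = ""
    contact_information: str = ""
```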

NREPP also maintains an online Learning Center. Offerings include learning modules on implementation and preparing for NREPP submission; a research paper on evidence-based therapy relationships; and links to screening and assessment tools for mental health and substance use.



The Substance Abuse and Mental Health Services Administration (SAMHSA)

The Substance Abuse and Mental Health Services Administration (SAMHSA) is a branch of the U.S. Department of Health and Human Services. It is charged with improving the quality and availability of prevention, treatment, and rehabilitative services in order to reduce illness, death, disability, and cost to society resulting from substance abuse and mental illnesses. The Administrator of SAMHSA reports directly to the Secretary of the U.S. Department of Health and Human Services. SAMHSA's headquarters building is located in Rockville, Maryland.

SAMHSA was established in 1992 by Congress as part of a reorganization of the Federal administration of mental health services; the new law renamed the former Alcohol, Drug Abuse, and Mental Health Administration (ADAMHA). ADAMHA had passed through a series of name changes and organizational arrangements throughout its history:

- Narcotics Division (1929-30)
- Division of Mental Hygiene (1930-43)
- Mental Hygiene Division, Bureau of Medical Services (1943-49)
- NIMH, National Institutes of Health (NIH, 1949-67)
- NIMH (1967-68)
- NIMH, Health Services and Mental Health Administration (1968-73)
- NIMH, NIH (1973)
- National Institute on Alcohol Abuse and Alcoholism, NIMH (1970-73)
- ADAMHA, established 1973

Congress directed SAMHSA to effectively target substance abuse and mental health services to the people most in need and to translate research in these areas more effectively and rapidly into the general health care system. Charles Curie was SAMHSA's Director until his resignation in May 2006. In December 2006, Terry Cline was appointed as SAMHSA's Director. Dr. Cline served through August 2008. Rear Admiral Eric Broderick served as the Acting Director upon Dr. Cline's departure, until the arrival of the succeeding Administrator, Pamela S. Hyde, J.D., in November 2009.



Organization

SAMHSA's mission is to reduce the impact of substance abuse and mental illness on America's communities. To accomplish its work, SAMHSA administers a combination of competitive, formula, and block grant programs and data collection activities. The Agency's programs are carried out through:

- The Center for Mental Health Services (CMHS), which focuses on prevention and treatment of mental disorders.
- The Center for Substance Abuse Prevention (CSAP), which seeks to prevent and reduce the abuse of illegal drugs, alcohol, and tobacco.
- The Center for Substance Abuse Treatment (CSAT), which supports the provision of effective substance abuse treatment and recovery services.
- The Center for Behavioral Health Statistics and Quality (CBHSQ), which has primary responsibility for the collection, analysis, and dissemination of behavioral health data.

Together these units support U.S. States, Territories, Tribes, communities, and local organizations through grant and contract awards. They also provide national leadership in promoting the provision of quality behavioral health services. Major activities to improve the quality and availability of prevention, treatment, and recovery-support services are funded through competitive Programs of Regional and National Significance grants. A number of supporting offices complement the work of the four Centers:

- The Office of the Administrator (OA)
- The Office of Policy, Planning, and Innovation (OPPI)
- The Office of Behavioral Health Equity (OBHE)
- The Office of Financial Resources (OFR)
- The Office of Management, Technology, and Operations (OMTO)
- The Office of Communications (OC)

Center for Mental Health Services

The Center for Mental Health Services (CMHS) is a unit of the Substance Abuse and Mental Health Services Administration (SAMHSA) within the U.S. Department of Health and Human Services. This U.S. government agency describes its role as: "CMHS leads Federal efforts to treat mental illnesses by promoting mental health and by preventing the development or worsening of mental illness when possible. Congress created CMHS to bring new hope to adults who have serious mental illnesses and to children with serious emotional disorders." As of 2012, the director of CMHS is Paolo del Vecchio.



CMHS is the driving force behind the largest US children's mental health initiative to date, which is focused on creating and sustaining systems of care. This initiative provides grants (now cooperative agreements) to States, political subdivisions of States, territories, Indian Tribes and tribal organizations to improve and expand their Systems Of Care to meet the needs of the focus population—children and adolescents with serious emotional, behavioral, or mental disorders. The Children's Mental Health Initiative is the largest Federal commitment to children’s mental health to date, and through FY 2006, it has provided over $950 million to support SOC development in 126 communities.

SAMHSA's Strategic Direction

In 2010, SAMHSA identified eight Strategic Initiatives to focus the Agency's work. Below are the eight areas and the goals associated with each:

- Prevention of Substance Abuse and Mental Illness: Create prevention-prepared communities in which individuals, families, schools, workplaces, and communities take action to promote emotional health and to prevent and reduce mental illness, substance (including tobacco) abuse, and suicide across the lifespan.
- Trauma and Justice: Reduce the pervasive, harmful, and costly public health impacts of violence and trauma by integrating trauma-informed approaches throughout health and behavioral healthcare systems; also, divert people with substance abuse and mental disorders away from criminal/juvenile justice systems and into trauma-informed treatment and recovery.
- Military Families (Active, Guard, Reserve, and Veteran): Support our service men and women, and their families and communities, by leading efforts to ensure that needed behavioral health services are accessible to them and that outcomes are successful.
- Health Reform: Broaden health coverage and the use of evidence-based practices to increase access to appropriate and high quality care; also, reduce existing disparities between the availability of services for substance abuse and mental disorders and those for other medical conditions.
- Housing and Homelessness: Provide housing for, and reduce the barriers to accessing recovery-sustaining programs for, homeless persons with mental and substance abuse disorders (and their families).
- Health Information Technology for Behavioral Health Providers: Ensure that the behavioral health provider network, including prevention specialists and consumer providers, fully participates with the general healthcare delivery system in the adoption of health information technology.
- Data, Outcomes, and Quality (Demonstrating Results): Realize an integrated data strategy that informs policy, measures program impact, and results in improved quality of services and outcomes for individuals, families, and communities.
- Public Awareness and Support: Increase understanding of mental and substance abuse prevention and treatment services to achieve the full potential of prevention and to help people recognize and seek assistance for these health conditions with the same urgency as any other health condition.

SAMHSA's budget for Fiscal Year 2010 was about $3.6 billion; it was reauthorized for FY 2011.

Controversy

In February 2004, the administration was accused of requiring the name change of an Oregon mental health conference from "Suicide Prevention Among Gay/Lesbian/Bisexual/Transgender Individuals" to "Suicide Prevention in Vulnerable Populations."

In 2002, then-President George W. Bush established the New Freedom Commission on Mental Health. The resulting report was intended to provide the foundation for the federal government's Mental Health Services programs. However, many experts and advocates were highly critical of its report, "Achieving the Promise: Transforming Mental Health Care in America".



The US Department of Health & Human Services (HHS)

The United States Department of Health and Human Services (HHS), also known as the Health Department, is a cabinet-level department of the U.S. federal government with the goal of protecting the health of all Americans and providing essential human services. Its motto is "Improving the health, safety, and well-being of America". Before the separate federal Department of Education was created in 1979, it was called the Department of Health, Education, and Welfare (HEW).

Federal Security Agency

The Federal Security Agency (FSA) was established on July 1, 1939, under the Reorganization Act of 1939, P.L. 76-19. The objective was to bring together in one agency all Federal programs in the fields of health, education, and social security. The first Federal Security Administrator was Paul V. McNutt.


The new agency originally consisted of the following major components:

1. Office of the Administrator
2. Public Health Service (PHS)
3. Office of Education
4. Civilian Conservation Corps
5. Social Security Board

Origins

The origins of these components, however, could be traced back to the early days of the Republic. On July 16, 1798, President John Adams signed an act creating the Marine Hospital Service to furnish treatment to sick and disabled American merchant seamen. On April 29, 1878, the first Federal Quarantine Act enlarged the Service's responsibilities to include prevention of epidemics from abroad. On August 14, 1912, the name was changed to the Public Health Service (PHS). On May 26, 1930, the Hygienic Laboratory of the Service was redesignated the National Institute of Health (NIH). PHS was transferred from the Treasury Department to the FSA in 1939.

Even though the first steps toward public education were taken in 1647 by the Massachusetts Bay Colony and land was set aside for public schools by the Congress of the Confederation in 1785, the idea of universal, free public schools did not become firmly established until the Civil War era. Even then, only half of the States had an efficient public school system. In 1867, Congress established the Department of Education to promote the cause of education and collect and disseminate facts and statistics about education. Until it was transferred to the FSA, the Office of Education and its predecessor organization had been part of the Department of the Interior.

The Civilian Conservation Corps (CCC) was born during the Great Depression to provide employment for American youth and advance conservation of the Nation's natural resources. It operated from April 5, 1933 until June 30, 1942. During that time, the CCC provided work training to 3 million men and advanced conservation by more than 25 years. It was an independent agency until it came to FSA.

The Nation's social security and public assistance programs also were born during the Depression with approval of the Social Security Act on August 14, 1935. The initial Act of 1935 established the Social Security Board to administer Titles I, II, III, IV, and X of the Act. It remained an independent organization until its transfer to FSA. The Social Security Act Amendments of 1939 revised and expanded basic provisions of the program and eligibility requirements and extended protection to aged wives, dependent children and certain survivors of insured workers.



Organized in 1855 and incorporated by the Kentucky Legislature in 1858, the American Printing House for the Blind was established to produce educational materials for the blind and since 1879 has received an allocation of Federal funds to help support this activity. Federal responsibility regarding the Printing House was transferred to FSA from the Treasury Department on July 1, 1939.

Established in 1935 to provide youth with work training, the National Youth Administration later trained young people for jobs in war industries. It was supervised by the Office of the Administrator from the time FSA was created in 1939 until 1942, when it was transferred to the War Manpower Commission.

Early Years

Under a Reorganization Plan that became effective on June 30, 1940, the organization of the Federal Security Agency (FSA) was enlarged:

1. The Food and Drug Administration (FDA) was transferred from the Department of Agriculture; and
2. Saint Elizabeths Hospital and Freedmen's Hospital, and
3. Federal functions relating to Howard University and the Columbia Institution for the Deaf, were transferred to FSA from the Department of the Interior.

- As a result of pressure for the Federal Government to control adulterated and misbranded foods and drugs, the Food and Drugs Act was enacted on June 30, 1906. These responsibilities were entrusted to the Bureau of Chemistry in the Department of Agriculture in 1907 and were organized into a Food, Drug and Insecticide Administration in 1927, renamed the Food and Drug Administration (FDA) in 1931. Transferred to FSA in 1940, FDA also was responsible for administering the Tea Importation Act (1897), the Filled Milk Act (1923), the Caustic Poison Act (1927), and the Food, Drug, and Cosmetic Act (1938).
- Saint Elizabeths Hospital, created by Act of Congress in 1852 as the Government Hospital for the Insane, received its first patients on January 15, 1855. Its founder was Dorothea Dix, the most prominent humanitarian of the era. The name was changed by Act of Congress in 1916.
- Freedmen's Hospital was an outgrowth of the Bureau for the Relief of Freedmen and Refugees authorized by the Act of March 3, 1865. In 1871, the hospital was transferred to the Department of the Interior.
- Howard University was established by an act of March 2, 1867, to provide higher education for Negroes.
- Education for the deaf was made available in the District of Columbia through the Columbia Institution for the Deaf, established by the Act of February 16, 1857. The name was changed to Gallaudet College in 1954.

The Vocational Rehabilitation Act Amendments of 1943 expanded functions relating to vocational rehabilitation and assigned them to the Federal Security Administrator, who established the Office of Vocational Rehabilitation on September 4, 1943, to carry out these functions. Since the original Vocational Rehabilitation Act of 1920, certain vocational rehabilitation and vocational education activities had been a responsibility of the Office of Education, first when it was part of the Department of Interior, then after it became part of FSA in 1939.

Impact of World War II

World War II had a broad impact on the social programs of FSA. Between 1941 and 1947, the Government recognized the need to maintain essential health and welfare services. The Federal Security Administrator also served as coordinator of the Office of Health, Welfare, and related Defense Activities, renamed the Office of Defense, Health, and Welfare Services in September 1941, which provided health care, education, and related services necessitated by the war effort. It was responsible for adjusting the distribution of remaining professional personnel to meet the requirements of the population. In 1943, the Office's title was again changed to the Office of Community War Services, which was abolished on June 30, 1947.

- The FDA during the war was charged with maintaining food standards to insure delivery of properly tested foods and drugs to the military establishment.
- The Public Health Service was in charge of protecting both the general population and military personnel against epidemics and carrying out medical research.

Post-WWII Organizational Changes

When the war ended, President Truman moved to "strengthen the arm of the Federal Government for better integration of services in the fields of health, education, and welfare."

1946
- Reorganization Plan No. 2 of 1946, effective July 16, 1946, abolished the three-member Social Security Board, creating in its place the Social Security Administration, headed by a Commissioner of Social Security. The plan transferred the Children's Bureau (created in 1912), exclusive of its Industrial Division, from the Department of Labor to FSA, where it became part of the Social Security Administration (SSA); the US Employees Compensation Commission, formerly an independent organization, to the Office of the Administrator of FSA; and functions of the Department of Commerce regarding vital statistics to the FSA Administrator, who delegated them to the Surgeon General of the Public Health Service.
- Legislation of major importance to the Agency also was passed in 1946: the National Mental Health Act; the Vocational Education Act; the Federal Employees Health Act; the 1946 Amendments to the Social Security Act; and the Hospital Survey and Construction Act.

1947
- In 1947, the Administrator directed the establishment of a central library, consolidating the resources of three independent libraries at the SSA, the Office of Education, and the Office of Vocational Rehabilitation. This library eventually became the central library of the Department of Health, Education, and Welfare.

1948
- By 1948, the retail price of food had risen 114 percent over the 1935-39 base, yet the monthly benefits under Social Security had not changed since the 1939 amendments had established a base level. On October 1, 1948, increases in Social Security benefits were authorized.
- Other key pieces of legislation passed in 1948 included bills creating the National Heart Institute and the National Institute of Dental Research. On June 16, 1948, the name of the National Institute of Health was changed to the National Institutes of Health.
- On June 30, 1948, the President signed the Water Pollution Bill, delegating national water pollution responsibilities to the Public Health Service.
- Also in 1948, legislation authorized the transfer of the Federal Credit Union program from the Federal Deposit Insurance Corporation to the SSA.

1949
- The Federal Property and Administrative Services Act of 1949 gave the Federal Security Administrator authority to dispose of surplus Federal property to tax-supported or nonprofit educational institutions for health or educational purposes.
- During 1949, the Federal Security Agency began the establishment of 10 FSA regional offices to replace the 11 previously operated by the SSA and consolidated those being operated by other FSA constituents into one common regional office structure. Previous to the consolidation, constituent agencies were maintaining five and, in some cases, six independent regional offices in a single city.

1950
- On May 24, 1950, Reorganization Plan No. 19 of 1950 transferred from FSA to the Department of Labor the Bureau of Employees Compensation and the Employees Compensation Appeals Board. Then, the FSA abolished the Office of Special Services that had administered the two transferred units plus the Office of Vocational Rehabilitation (OVR) and the Food and Drug Administration. The effect of this action was to elevate OVR and FDA to agency status.
- In 1950, two important national conferences required months of staff work by FSA personnel. The Mid-century White House Conference on Children and Youth was held in Washington, D.C. in December 1950. Nearly 6,000 representatives of 100,000 local and community groups throughout the country met to discuss the "spiritual values, democratic practice, and the dignity and worth of the individual." In August of that year, a Conference on Aging was called by the FSA Administrator to study the needs and problems of the older segment of the population.
- In September 1950, Congress authorized the impacted aid program (to relieve the impact on local school facilities of a heavy influx of Federal civilian and military personnel) and in FY 1951 appropriated $96.5 million for school construction under P.L. 81-815, September 23, 1950, and $23 million for school operating expenses under P.L. 81-874, September 30, 1950.
- The Social Security Act Amendments of 1950 added to the social security rolls about 10 million persons who previously had been ineligible. These persons included agricultural workers and self-employed small shop owners. Others who benefitted from the changes were the elderly and those who had job-related disabilities. This expansion of beneficiaries was made possible by revisions to the old age and survivors insurance and long-term disability insurance sections of the original Act.

1951
- In May 1951, a citizens committee, the National Mid-century Committee for Children and Youth, was established to provide national follow-up to the problems discussed at the White House Conference. Staff of the Children's Bureau worked closely with the Committee until it was dissolved in 1953.

1952
- The year 1952 was a period of transition for FSA. Despite the contributions made by the Agency during and before the Korean War, most of the defense-related activities in FSA were being phased out. The FDA continued to study chemical and bacteriological warfare agents, but other FSA components were mobilized to provide disaster relief and health care assistance to a number of foreign countries. Technical assistance, under the Federal "Point IV" and Mutual Security Agency programs, provided needed help to many underdeveloped countries. The Agency also furnished guidance for foreign representatives sent to this country to study American programs and methods in the fields of health and education. Later in the year, FSA accelerated its response to the Nation's social needs.

1953: FSA Becomes DHEW
- By 1953, the Federal Security Agency's programs in health, education, and social security had grown to such importance that its annual budget exceeded the combined budgets of the Departments of Commerce, Justice, Labor, and Interior and affected the lives of millions of people.
- Consequently, in accordance with the Reorganization Act of 1949, President Eisenhower submitted to the Congress on March 12, 1953, Reorganization Plan No. 1 of 1953, which called for the dissolution of the Federal Security Agency and elevation of the agency to Cabinet status as the Department of Health, Education, and Welfare. All of the responsibilities of the Federal Security Administrator would be transferred to the Secretary of Health, Education, and Welfare, and the components of FSA would be transferred to the Department. A major objective of the reorganization was to improve administration of the functions of the Federal Security Agency. The plan was approved April 1, 1953, and became effective on April 11, 1953.



Unlike statutes authorizing the creation of other executive departments, the contents of Reorganization Plan No. 1 of 1953 were never properly codified within the United States Code, although Congress did codify a later statute ratifying the Plan. Today, the Plan is included as an appendix to Title 5 of the United States Code. The result is that HHS is the only executive department whose statutory foundation today rests on a confusing combination of several codified and uncodified statutes.

List of Federal Security Agency Administrators

Paul V. McNutt: July 13, 1939 - September 14, 1945
Watson B. Miller: October 11, 1945 - August 26, 1947
Oscar R. Ewing: August 27, 1947 - January 20, 1953
Oveta Culp Hobby: January 21, 1953 - April 10, 1953

Department of Health, Education, and Welfare

The Department of Health, Education, and Welfare (HEW) was created on April 11, 1953, when Reorganization Plan No. 1 of 1953 became effective. HEW thus became the first new Cabinet-level department since the Department of Labor was created in 1913. The Reorganization Plan abolished the FSA and transferred all of its functions to the Secretary of HEW and all components of the Agency to the Department. The first Secretary of HEW was Oveta Culp Hobby, a native of Texas, who had served as Commander of the Women's Army Corps in World War II and was editor and publisher of the Houston Post. Sworn in on April 11, 1953, as Secretary, she had been FSA Administrator since January 21, 1953.

The six major program-operating components of the new Department were the Public Health Service, the Office of Education, the Food and Drug Administration, the Social Security Administration, the Office of Vocational Rehabilitation, and St. Elizabeth's Hospital. The Department was also responsible for three Federally-aided corporations: Howard University, the American Printing House for the Blind, and the Columbia Institution for the Deaf (Gallaudet College since 1954).

List of Secretaries of Health, Education, and Welfare

Oveta Culp Hobby: April 11, 1953 - July 31, 1955
Marion B. Folsom: August 1, 1955 - July 31, 1958
Arthur Flemming: August 1, 1958 - January 19, 1961
Abraham Ribicoff: January 21, 1961 - July 13, 1962
Anthony J. Celebrezze: July 31, 1962 - August 17, 1965
John W. Gardner: August 18, 1965 - March 1, 1968
Wilbur J. Cohen (Designate): March 22, 1968 - May 16, 1968
Wilbur J. Cohen: May 16, 1968 - January 20, 1969
Robert H. Finch: January 21, 1969 - June 23, 1970
Elliot L. Richardson: June 24, 1970 - January 29, 1973
Caspar W. Weinberger: February 12, 1973 - August 8, 1975
Forrest David Mathews: August 8, 1975 - January 20, 1977
Joseph A. Califano, Jr.: January 20, 1977 - August 3, 1979
Patricia Roberts Harris: August 3, 1979 - May 4, 1980

Department of Health and Human Services

The Department of Health, Education, and Welfare was renamed the Department of Health and Human Services (HHS) in 1979, when its education functions were transferred to the newly created United States Department of Education under the Department of Education Organization Act. HHS was left in charge of the Social Security Administration, agencies constituting the Public Health Service, and the Family Support Administration. In 1995, the Social Security Administration was removed from the Department of Health and Human Services and established as an independent agency of the executive branch of the United States Government.

HHS is administered by the Secretary of Health and Human Services, who is appointed by the President with the advice and consent of the Senate. The United States Public Health Service (PHS) is the main division of HHS and is led by the Assistant Secretary for Health. The current Secretary, Sylvia Mathews Burwell, was sworn in on June 9, 2014. The United States Public Health Service Commissioned Corps, the uniformed service of the PHS, is led by the Surgeon General, who is responsible for addressing matters concerning public health as authorized by the Secretary or by the Assistant Secretary for Health, in addition to his or her primary mission of administering the Commissioned Corps.

The Office of Inspector General (OIG) investigates criminal activity for HHS. The special agents who work for OIG have the same title series ("1811"), training, and authority as other federal criminal investigators, such as the FBI, ATF, DEA, and Secret Service. However, OIG Special Agents have special skills in investigating white collar crime related to Medicare and Medicaid fraud and abuse. Organized crime has dominated the criminal activity relative to this type of fraud. HHS-OIG investigates tens of millions of dollars in Medicare fraud each year. In addition, OIG will continue its coverage of all 50 states and the District of Columbia by its multi-agency task forces (PSOC Task Forces) that identify, investigate, and prosecute individuals who willfully avoid payment of their child support obligations under the Child Support Recovery Act. HHS-OIG agents also provide protective services to the Secretary of HHS and other department executives as necessary.



In 2002, the department released Healthy People 2010, a national strategic initiative for improving the health of Americans.

Strengthening Communities Fund

In June 2010, the Department of Health and Human Services created the Strengthening Communities Fund as part of the American Recovery and Reinvestment Act. The fund was appropriated $50 million to be given as grants to organizations in the United States engaged in capacity building programs. The grants were given to two different types of capacity builders:

- State, Local, and Tribal governments engaged in capacity building: grants will go to state, local, and tribal governments to equip them with the capacity to more effectively partner with faith-based or non-faith-based nonprofit organizations. Capacity building in this program will involve education and outreach that catalyzes more involvement of nonprofit organizations in economic recovery and builds up nonprofit organizations' abilities to tackle economic problems. State, Local, and Tribal governments can receive up to $250,000 in two-year grants.
- Nonprofit Social Service Providers engaged in capacity building: grants will be made available to nonprofit organizations that can assist other nonprofit organizations in organizational development, program development, leadership, and evaluations. Nonprofits can receive up to $1 million in two-year grants.

Organization

Internal Structure

The Department of Health and Human Services is led by the United States Secretary of Health and Human Services, a member of the United States Cabinet appointed by the President of the United States with the consent of the United States Senate. The Secretary is assisted in managing the Department by the Deputy Secretary of Health and Human Services, who is also appointed by the President. The Secretary and Deputy Secretary are further assisted by seven Assistant Secretaries, who serve as top Departmental administrators.

- Secretary of Health and Human Services
  - Deputy Secretary
    - Assistant Secretary for Health
      - Public Health Service
      - Office of the Surgeon General
      - Public Health Service Commissioned Corps
      - Agency for Healthcare Research and Quality
      - Agency for Toxic Substances and Disease Registry
      - Centers for Disease Control and Prevention
      - Food and Drug Administration
      - Health Resources and Services Administration
      - Indian Health Service
      - National Institutes of Health
      - Substance Abuse and Mental Health Services Administration
    - Assistant Secretary for Preparedness and Response
      - Office of the Assistant Secretary for Preparedness and Response
      - Biomedical Advanced Research and Development Authority
    - Assistant Secretary for Legislation
    - Assistant Secretary for Planning and Evaluation
    - Assistant Secretary for Administration
    - Assistant Secretary for Public Affairs
    - Assistant Secretary for Financial Resources
    - Office of the General Counsel
    - Office of the Inspector General
    - Administration for Children and Families
    - Administration for Community Living
    - Administration on Aging
    - Centers for Medicare and Medicaid Services

Several agencies within HHS are components of the Public Health Service (PHS), including AHRQ, ASPR, ATSDR, CDC, FDA, HRSA, IHS, NIH, SAMHSA, OGHA, and OPHS.

Budget and Finances

The Department of Health and Human Services was authorized a budget for Fiscal Year 2015 of $1.020 trillion. The budget authorization is broken down as follows (funding in billions of dollars):

Management and Finance
- Departmental Management: $1.4
- Public Health and Social Services Emergency Fund: $1.4

Operating Divisions
- Food and Drug Administration: $2.6
- Health Resources and Services Administration: $10.4
- Indian Health Service: $4.8
- Centers for Disease Control and Prevention: $6.7
- National Institutes of Health: $30.4
- Substance Abuse and Mental Health Services Administration: $3.4
- Agency for Healthcare Research and Quality: $0.4
- Centers for Medicare and Medicaid Services: $906.8
- Administration for Children and Families: $51.3
- Administration for Community Living: $2.1

TOTAL: $1,020.3

Historical Budgets

HHS.gov provides complete details on current and historical budgets for the agency. Budgets for fiscal years prior to 2014 are archived by Archive-It as requested by HHS.

Former Operating Divisions and Agencies

- Social Security Administration, made independent in 1995.
- Health Care Financing Administration, renamed the Centers for Medicare and Medicaid Services.

Programs

The Department of Health and Human Services administers 115 programs across its 11 operating divisions. Some highlights include:

- Health and social science research
- Preventing disease, including immunization services
- Assuring food and drug safety
- Medicare (health insurance for elderly and disabled Americans) and Medicaid (health insurance for low-income people)
- Health information technology
- Financial assistance and services for low-income families
- Improving maternal and infant health, including a Nurse Home Visitation program to support first-time mothers
- Head Start (pre-school education and services)
- Faith-based and community initiatives
- Preventing child abuse and domestic violence
- Substance abuse treatment and prevention
- Services for older Americans, including home-delivered meals
- Comprehensive health services for Native Americans
- Assets for Independence
- Medical preparedness for emergencies, including potential terrorism
- Child support enforcement

Health Care Reform

The 2010 United States federal budget establishes a reserve fund of more than $630 billion over 10 years to finance fundamental reform of the health care system.

Related Legislation

- 1946: Hospital Survey and Construction Act (Hill-Burton Act) PL 79-725
- 1949: Hospital Construction Act PL 81-380
- 1950: Public Health Services Act Amendments PL 81-692
- 1955: Poliomyelitis Vaccination Assistance Act PL 84-377
- 1956: Health Research Facilities Act PL 84-835
- 1960: Social Security Amendments (Kerr-Mill aid) PL 86-778
- 1961: Community Health Services and Facilities Act PL 87-395
- 1962: Public Health Service Act PL 87-838
- 1962: Vaccination Assistance PL 87-868
- 1963: Mental Retardation Facilities Construction Act/Community Mental Health Centers Act PL 88-164
- 1964: Nurse Training Act PL 88-581
- 1965: Community Health Services and Facilities Act PL 89-109
- 1965: Medicare PL 89-97
- 1965: Mental Health Centers Act Amendments PL 89-105
- 1965: Heart Disease, Cancer, and Stroke Amendments PL 89-239
- 1966: Comprehensive Health Planning and Service Act PL 89-749
- 1970: Community Mental Health Service Act PL 91-211
- 1970: Family Planning Services and Population Research Act PL 91-572
- 1970: Lead-Based Paint Poisoning Prevention Act PL 91-695
- 1971: National Cancer Act PL 92-218
- 1974: Research on Aging Act PL 93-296
- 1974: National Health Planning and Resources Development Act PL 93-641
- 1979: Department of Education Organization Act (removed education functions) PL 96-88
- 1987: Department of Transportation Appropriations Act PL 100-202
- 1988: Medicare Catastrophic Coverage Act PL 100-360
- 1989: Department of Transportation and Related Agencies Appropriations Act PL 101-164
- 1996: Health Insurance Portability and Accountability Act PL 104-191
- 2000: Child Abuse Reform and Enforcement Act PL 106-177
- 2010: Patient Protection and Affordable Care Act PL 111-148



The US Department of Juvenile Justice & Delinquency Prevention

The Office of Juvenile Justice and Delinquency Prevention (OJJDP) is an office of the United States Department of Justice and a component of the Office of Justice Programs. OJJDP sponsors research, program, and training initiatives; develops priorities and goals; and sets policies to guide federal juvenile justice issues. OJJDP also disseminates information about juvenile justice issues and awards funds to states to support local programming nationwide through the office's five organizational components. The office cooperates with other federal agencies on special projects. For example, it formed the National Gang Center along with the Office of Justice Programs (OJP) and the Bureau of Justice Assistance (BJA). The OJJDP has the National Youth Gang Center linked through the National Gang Center.

The office is headed by Administrator Robert L. Listenbee. He was appointed to the position by President Barack Obama in February 2013 and was sworn in on March 25, 2013. Before his appointment to OJJDP, Mr. Listenbee was Chief of the Juvenile Unit of the Defender Association of Philadelphia for 16 years and was a trial lawyer with the association for 27 years. In this capacity, he created a specialized unit to deal with juvenile sexual assault cases and was instrumental in developing three specialty court programs that divert youth out of the juvenile justice system and reduce their risk of residential placement.

OJJDP, a component of the Office of Justice Programs (OJP), supports states, local communities, and tribal jurisdictions in their efforts to develop and implement effective programs for juveniles.

Page 40 of 59


The office strives to strengthen the juvenile justice system's efforts to protect public safety, hold offenders accountable, and provide services that address the needs of youth and their families. Through its components, OJJDP sponsors research and program and training initiatives; develops priorities and goals; sets policies to guide federal juvenile justice issues; disseminates information about juvenile justice issues; and awards funds to states to support local programming.

Authorizing Legislation

Congress enacted the Juvenile Justice and Delinquency Prevention (JJDP) Act (Pub. L. No. 93-415, 42 U.S.C. § 5601 et seq.) in 1974. This landmark legislation established OJJDP to support local and state efforts to prevent delinquency and improve the juvenile justice system.

On November 2, 2002, Congress reauthorized the JJDP Act. The reauthorization (the 21st Century Department of Justice Appropriations Authorization Act, Pub. L. No. 107-273, 116 Stat. 1758) supports OJJDP's established mission while introducing important changes that streamline the Office's operations and bring a sharper focus to its role. The provisions of the reauthorization took effect in FY 2004 (October 2003).



JJDP Act Milestones

1974
- Act signed into law.
- Created Formula Grants program.
- Established the separation requirement.
- Established the deinstitutionalization of status offenders (DSO) requirement.

1977
- Increased and expanded DSO and separation requirements.
- Emphasized prevention and treatment.

1980
- Established jail removal requirements.

1984
- Enhanced and amended jail removal requirements.

1988
- Addressed disproportionate minority confinement (DMC) as a requirement.

1992
- Amended DSO, jail removal, and separation requirements.
- Elevated DMC to a core requirement.
- Established the Title V Incentive Grants for Local Delinquency Prevention Grants Program (Title V).
- Established new programs to address gender bias.
- Emphasized prevention and treatment, family strengthening, graduated sanctions, and risk-need assessments.

2002
- Broadened the scope of the DMC core requirement from "disproportionate minority confinement" to "disproportionate minority contact."
- Consolidated seven previously independent programs into a single Part C prevention block grant.
- Created a new Part D, authorizing research, training and technical assistance, and information dissemination.
- Added Part E, authorizing grants for new initiatives and programs.
- Reauthorized Title V.
- Required states to give funding priority in their formula and block grant allocations to evidence-based programs.
- Reauthorized the Title II Formula Grants Program.
- Revised the Juvenile Accountability Incentive Block Grants program, now called the Juvenile Accountability Block Grants program (as part of the Omnibus Crime Control and Safe Streets Act).

Related Legislation

In addition to the JJDP Act, other pieces of legislation are relevant to OJJDP and its policies and priorities; several are listed below. Search for these and others on GPO Access, which contains the text of public and private laws enacted from the 104th Congress to the present.

- Omnibus Crime Control and Safe Streets Act, Title I, Part R, Chapter 46, Subchapter XII-F: Juvenile Accountability Block Grants
- The Adoption Promotion Act of 2003 (Pub. L. No. 108-145)
- Child Abuse Prevention and Treatment Act (Title I, Pub. L. No. 108-036)
- Immigration Services and Infrastructure Improvements Act (Title II, Pub. L. No. 106-313)
- Methamphetamine Anti-Proliferation Act (Title XXXVI, Pub. L. No. 106-310)
- National Police Athletic League Youth Enrichment Act of 2000 (Pub. L. No. 106-367)
- No Child Left Behind Act of 2001 (Pub. L. No. 107-110)
- Prison Rape Elimination Act of 2003 (Pub. L. No. 108-79)
- PROTECT Act (Pub. L. No. 108-021)
- Protection of Children from Sexual Predators Act of 1998 (Pub. L. No. 105-314)
- Runaway, Homeless, and Missing Children Protection Act (Pub. L. No. 108-096)
- Strengthening Abuse and Neglect Courts Act of 2000 (Pub. L. No. 106-314)
- Violence Against Women Act of 2000 (Division B, Pub. L. No. 106-386)



The Early Head Start Program

Early Head Start is a federally funded community-based program for low-income families with pregnant women, infants, and toddlers up to age 3. It is a program that came out of the Head Start Program. The program was designed in 1994 by an Advisory Committee on Services for Families with Infants and Toddlers formed by the Secretary of Health and Human Services. "In addition to providing or linking families with needed services—medical, mental health, nutrition, and education—Early Head Start can provide a place for children to experience consistent, nurturing relationships and stable, ongoing routines."

Early Head Start programs offer three different options, and programs may offer one or more to families. The three options are: a home-based option, a center-based option, or a combination option in which families get a set number of home visits and a set number of center-based experiences. There are also locally designed options, which in some communities include family child care.

Mission Statement

The mission of the Early Head Start program is to promote healthy prenatal outcomes for pregnant women, to improve the development of young children, and to promote healthy family functioning.

Important Areas of the Early Head Start Program

1. Child Development: "Programs must support the physical, social, emotional, cognitive, and language development of each child." This also includes educating and supporting parents and fostering positive parent-child relationships. The program must provide the following services directly or refer families to outside providers:
- developmentally appropriate education services for young children, including developmentally appropriate settings, activities, and resources;
- home visits;
- parent education and parent-child activities;
- complete health and mental health services; and
- high-quality child care provided by, or in partnership with, local child care centers.

Page 46 of 59


2. Family Development: Programs must help families develop and reach goals for both parents and children. Each family works with staff to create a family development plan that addresses the family's social, economic, and child development needs. Families involved in multiple programs receive help integrating them into one plan and system of services. The services that programs must provide directly or through referral include:
- information on child development;
- complete health and mental health services, including alcohol and substance abuse treatment and assistance with quitting smoking;
- adult education, literacy, and job skills training to foster families' independence;
- help obtaining income support, safe housing, or emergency cash; and
- help with transportation to Early Head Start services so that all participants can access the program.

3. Community Building: To create a complete network of services and support for pregnant women and families with infants and toddlers, Early Head Start programs must assess a community and its services. The goal is to create a community network that supports these families by giving them access to services and making those services more efficient for all families in the community.

4. Staff Development: The quality of the staff is a key structure of the Early Head Start program. Staff members must develop supportive relationships with parents and children. Staff take part in a continuous learning process of training, supervision, and mentoring that keeps them focused on the program's main goals and helps them build better relationships with families and children. Development is focused on child development, family development, and community development.

5. Administration/Management: Administration and management of Early Head Start programs follow practices that uphold the nine principles and four cornerstones set forth in the Early Head Start initiative. All staff must be cross-trained in the areas of child, family, and community development. Relationship building is the focus and basis for interactions between children, families, and staff members.

6. Continuous Improvement: Ongoing training and technical assistance is provided by the Infant/Family Network and the EHS NRC; this, in addition to other training, mentoring, research, and evaluation, enables Early Head Start staff and services to better meet the needs of families and their children. Continuous training ensures that staff stay up to date and informed about program policies and guidance.

Page 47 of 59


7. Children with Disabilities: Early Head Start programs are responsible for coordinating with different programs and services in their areas in accordance with Part C of the Individuals with Disabilities Education Act. The Early Head Start program ensures that children with disabilities will not be excluded, that these children will receive all the services they need, and that they will be included in all program activities. This gives all children equal access to services and resources to ensure proper child development.

8. Socialization: The Early Head Start program focuses on the socialization of infants and toddlers; the most important relationship at this age is between children and their parents. Socialization between infants and toddlers and their peers is also important but is not the main focus. Socialization gives parents a chance to interact with their child, other parents, and qualified staff in a setting where they can learn more about their child's development and develop more as a family. It is one more way families can receive support and education. Socialization also helps with community and team building by bringing many different members together and increasing communication and relationships.

9. Curriculum: The Early Head Start curriculum plays an extremely important role in the development and education of young children in the program. The curriculum includes four aspects: (1) the goals designed by staff and parents for the child's growth, development, and learning; (2) the experiences and activities through which the child will achieve those goals; (3) what the staff and parents will do to support and help the child achieve those goals; and (4) the materials needed to facilitate and support implementation of the curriculum so the child can reach those goals.

Eligibility for the Program

Early Head Start is a child development program for low-income families with infants and toddlers. "Each Early Head Start program is responsible for determining its own eligibility criteria." Key factors in determining eligibility are:

Page 48 of 59


- Family income, which is evaluated against the federal poverty guidelines.
- Early Head Start programs may choose to target their services to a particular population of their community in order to meet its needs better.
- Involvement in the child welfare system. This includes children who have been physically, mentally, or emotionally abused; children who have been neglected; infants whose parents have exposed them to drugs or alcohol and who do not have a suitable caretaker; and children whose parents have died, gone to jail, or been hospitalized.

Many Early Head Start locations have programs to help rehabilitate families that have been affected by drugs or alcohol. This program is not targeted solely at the child's development; it aims to help the entire family and community develop so that relationships with the child will be healthier and improved.

Early Head Start Research and Evaluation (EHSRE) Project, 1996–2010

In 1996, the Department of Health & Human Services (DHHS) launched a large-scale evaluation of Early Head Start (EHS), randomly assigning qualifying families at 17 sites nationally to participate and comparing their social, psychological, developmental, and academic outcomes with those of a control group. Families in the control group were able to receive any other services available to them. The evaluation followed families over five time points, according to the child's age: 14 months, 24 months, 36 months, pre-kindergarten, and 5th grade.

Supportive Findings for Children's Development

Findings from the DHHS evaluation demonstrate significant, positive impacts on children's social-emotional development (e.g., reduced aggression) and on children's abilities to engage in learning activities. These results appear as early as the 24-month time point and continue through the pre-kindergarten time point. Additionally, recent findings from the 5th-grade time point reveal that children enrolled in EHS develop more complex reasoning skills and exhibit fewer behavior problems, although these results vary by the type of school children were enrolled in (high poverty versus low poverty).

Mixed Findings for Children's Development

The results for children's language development are mixed: some broad reports describe minimal to no impact, while individual academic manuscripts detail specific, complex supportive findings. Additionally, two groups seemed to benefit the most from enrollment in EHS: families enrolled during pregnancy with the child who would later be in the program, and African American children and their families. The format of the EHS program mattered as well, as children enrolled in a "mixed approach to service delivery" (home visiting and classroom education) received

Page 49 of 59


the greatest benefits. Finally, parents who attended parenting classes were more likely to engage in strategies that promote positive development. These pieces of evidence may point to a "dosage" effect: children who received the most high-quality early child care and whose parents attended parenting classes (in addition to other demographic risk factors) may benefit the most from Early Head Start.

Supportive Findings for Parenting and the Home Environment

Because Early Head Start is a "two-generation" program, its goal is to promote healthy parental development as well as a stimulating home environment. EHS demonstrated effectiveness at increasing parental support for language and literacy development, including daily reading and increased teaching activities in the home, through the pre-kindergarten time point. EHS parents also reported using fewer punitive discipline strategies with their children. Additionally, the positive impacts on parenting and parenting behaviors were seen by the 36-month time point for families at low socio-demographic risk but did not emerge until the pre-kindergarten time point for families at high socio-demographic risk.

Page 50 of 59


Page 51 of 59


References ______

1. http://evidencebasedliving.human.cornell.edu/2010/05/04/evidence-based-programming-what-does-it-actually-mean/
2. https://en.wikipedia.org/wiki/SAMSHA_National_Registry_of_Evidence-Based_Programs_and_Practices_%28NREPP%29
3. http://www.carsrp.org/publications/Prevention%20Tactics/PT09.06.10.pdf
4. http://www.evidencebasedassociates.com/reports/research_review.pdf
5. https://en.wikipedia.org/wiki/Substance_Abuse_and_Mental_Health_Services_Administration
6. https://en.wikipedia.org/wiki/United_States_Department_of_Health_and_Human_Services
7. https://en.wikipedia.org/wiki/Early_Head_Start
8. http://www.ojjdp.gov/

Page 52 of 59


Page 53 of 59


Notes ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________

Page 54 of 59


Notes ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________ ______________________________________________________________________________

Page 55 of 59


Page 56 of 59


Attachment A
A Paradigm Shift in Selecting Evidence-Based Approaches

Page 57 of 59


A Paradigm Shift in Selecting Evidence-Based Approaches for Substance Abuse Prevention
By David Sherman, M.Ed.

preventionTactics 9:6 (2010). Tactics (Tak'tiks) n. 1. a plan for promoting a desired end. 2. the art of the possible.

This Prevention Tactic will:
• review the recent history behind the designation of "evidence-based" to describe prevention approaches;
• examine the evolution of the use of evidence-based in the National Registry of Effective Programs & Practices (NREPP);
• explore how the recent changes to NREPP have impacted the process that providers use to select interventions to meet the needs of the community they serve; and
• describe, compare, and contrast the three categories of evidence-based interventions required by the Strategic Prevention Framework State Incentive Grant (SPF SIG) program.

The term "evidence-based" has become both popular and necessary in the field of prevention services. Popular, because it is used liberally to describe prevention interventions¹ and to prove their appropriateness. Necessary, because without this label, interventions have not been recognized (or funded) by government agencies, or adopted for use by prevention providers.

Defining Evidence-based

The nature of evidence is that it is both continuous and contextual. The quality of evidence can be judged along a continuum, from strong to weak. To determine the relative strength or weakness of a research study, for example, one must consider the rigor of its design and the appropriateness of the methods used to collect and analyze data. Evidence is contextual because the quality of the evidence depends on the extent to which findings can be generalized to similar populations and settings. Strong evidence that an intervention program for rural Native Americans had positive outcomes may not be relevant when deciding if this program is appropriate for an urban Hispanic population. The evidence, and what it says about a given intervention, must be viewed in light of contextual factors such as place, population, and culture.

The terms evidence-based, research-based, research-informed, science-based, blueprint programs, model programs, promising programs, and effective programs are often used interchangeably. There is no single, universally accepted definition of "evidence-based program." The determination of whether a program, practice, or policy is evidence-based varies across government agencies, research institutions, and other organizations dedicated to promoting evidence-based policy and practice. Some definitions are more stringent than others. In 2001, for example, the Institute of Medicine offered this definition:

Evidence-based practice is the integration of best research evidence with clinical expertise and patient values.

In 2005 the American Psychological Association established this definition for its members:i

Evidence-based practice in psychology (EBPP) is the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.

¹ The term "intervention" is used broadly in this document to reference programs, practices, and policies, each of which has a more discrete definition.



In general, evidence-based interventions:
• are based on a clearly stated, scientifically supported theory;
• include a detailed description of the intervention and measurement design (i.e., what intervention was used with which populations to achieve what outcomes?);
• identify measurable outcomes that have been carefully assessed, including long-term follow-ups; and
• have been tested in a scientific manner, ideally through randomized, controlled studies.

At first reading, this list might seem highly technical and rigorous. However, evidence-based interventions may include some of these attributes without encompassing all of them. Put more simply, if an intervention is designated as "evidence-based," it is grounded in theory, evaluated by some commonly accepted method, and shown to have at least some positive outcome.

The Evolution of Evidence-based Prevention Approaches

Before the mid-1990s, selection of prevention interventions was based on popular belief or practitioner recommendations. The use of evidence-based practice first appears in federal policy in 1997, in the Substance Abuse and Mental Health Services Administration's (SAMHSA) Center for Substance Abuse Prevention's (CSAP) NREPP list. NREPP used a three-tiered hierarchy to rate program interventions:
• Model – well implemented and evaluated according to rigorous standards of research.
• Promising – have been implemented and are considered to be scientifically defensible, but were not shown to have sufficient rigor and/or consistent positive outcomes required for Model programs.
• Effective – meet all the criteria of Model programs but are not currently available to be disseminated to the general public.


This hierarchy reinforced a government culture that favored accountability and the selection of “proven” programs. The 2001 Federal No Child Left Behind Act (NCLB) adopted this approach to evidence-based practice. By 2002, it was included in California’s Safe and Drug-Free Schools and Communities (SDFSC) programming.

As the requirements to use evidence-based approaches increased, however, prevention professionals noticed that the term evidence was not applied consistently. Different agencies and groups adopted different criteria to determine which programs made it onto an "evidence-based" list. For example, CSAP rated the credibility of evidence for a program on a five-point scale. The U.S. Department of Education, however, used seven criteria for judging a program's adherence to evidence-based practice. These different scales created confusion for providers and reinforced the notion that they were "picking off a list," rather than selecting a program or intervention based on sound science that was appropriate to their context. By the mid-2000s, criticism of a list-based approach began to creep into the literature. For example, a 2007 reviewii of the use of "evidence-based" prevention programs by state recipients of SDFSC funding confirmed some weaknesses with this policy. The review found that many of the lists used were out of date, and limited funding prevented the inclusion of updated information from new scientific studies. There were also concerns about the process of becoming a listed program. Some researchers, such as Gorman (2002)iii and Petrosino (2003)iv, concluded that the review processes were not transparent, that the judging criteria were ambiguous, and that the system was open to conflict of interest. Hallfors et al. concluded that:

…the greatest problem is that for most lists "evidence" about program effectiveness comes from a single small efficacy trial by program developers.


Prior to this criticism, in 2004, the Society for Prevention Research had proposed new, consistent standards of evidence for the prevention field that aimed to establish consistency and credibility in the program evaluation process. The Society hoped that "…the widespread use of these criteria will lead to consistent and high standards for determining whether programs have been scientifically shown to be efficacious, effective or ready for dissemination."v In 2007, after conducting focus groups and seeking input, SAMHSA/CSAP reconsidered the paradigm used for evidence-based practice. They revised the National Registry system, phasing out the "model" and "promising" program format.

The focus shifted toward looking at interventions on a continuum of evidence. Outcomes were viewed in terms of the program's context. Not only did the degree of evidence matter, but also whether that evidence supported a program's appropriateness in a given context. Adopting a program intervention that was effective in rural Iowa made no sense if it would not be effective in urban Los Angeles. The current NREPP system reports on interventions' descriptive characteristics, strength of evidence, and readiness for dissemination. It is designed to support service providers by:
• promoting informed decision making;
• disseminating timely and reliable information regarding interventions;
• allowing access to descriptive information about interventions; and
• providing peer-reviewed ratings of outcome-specific evidence across several dimensions.

The new system expanded opportunities for local organizations to have their intervention strategies added to the registry. Because intervention programs are no longer rated on the equivalent of an A–B–C scale, there is more emphasis placed on selecting an intervention that meets other criteria, such as the population being served and the capacity and resources available for implementation. In other words, the new NREPP listing encourages a more realistic and holistic approach to selecting prevention interventions.

Practical Considerations for Providers

What is the impact of these changes on providers? The immediate impact is that they are less restricted in their choice of interventions. They also have the freedom to make decisions locally and to select interventions that suit their context and population. With this freedom,

A Paradigm Shift

SAMHSA's new process for selecting evidence-based programs represents a paradigm shift, and providers should consider how this shift is being managed within their organization.
From: Picking off lists → To: Thinking critically about needs
From: Categorical labels → To: Ratings along a continuum
From: Relying on strength of evidence alone → To: Assessing relative importance of strength of evidence in broader context
From: Stand-alone intervention selections → To: Comprehensive community plans



however, comes a responsibility to make more informed decisions. Within the Strategic Prevention Framework SIG program, these decisions are based upon a three-stage process:
1. Match the intervention to the community's goals (Relevant)
2. Determine if the intervention is appropriate and feasible (Appropriate)
3. Ensure there is evidence that the intervention is effective (Effective)

Figure 1 depicts the three-stage process of ensuring that interventions are relevant, appropriate, and potentially effective. (For more details about selecting evidence-based interventions that align with your organization's community and goals, see the Prevention Tactic, Selecting and Implementing Evidence-Based Prevention Through the Strategic Prevention Framework (SPF) Planning, available at www.cars-rp.org/publications/preventiontactics.php)
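Read as pseudocode, the three questions are applied in order, and an intervention that fails an earlier stage never reaches the later ones. The short Python sketch below is purely illustrative; the field names, example programs, and evidence labels are hypothetical and are not drawn from the SPF SIG guidance itself.

# Illustrative sketch only: field names and example data are hypothetical,
# not taken from SAMHSA's SPF SIG materials.

def select_candidates(interventions, community_goals):
    """Apply the three-stage screen in order: Relevant -> Appropriate -> Effective."""
    selected = []
    for item in interventions:
        # Stage 1: Relevant - does it address any of the community's goals?
        if not (set(item["targets"]) & set(community_goals)):
            continue
        # Stage 2: Appropriate - is it feasible for this population and capacity?
        if not (item["fits_population"] and item["within_capacity"]):
            continue
        # Stage 3: Effective - is there credible evidence of effectiveness?
        if item["evidence"] in ("federal registry", "peer-reviewed", "documented + expert review"):
            selected.append(item["name"])
    return selected

# Hypothetical usage
interventions = [
    {"name": "Program A", "targets": ["underage drinking"], "fits_population": True,
     "within_capacity": True, "evidence": "federal registry"},
    {"name": "Program B", "targets": ["tobacco"], "fits_population": True,
     "within_capacity": True, "evidence": "peer-reviewed"},
]
print(select_candidates(interventions, ["underage drinking"]))  # ['Program A']

The ordering mirrors the prose: evidence of effectiveness is only weighed for interventions that have already been judged relevant and appropriate for the community.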

SAMHSA's Guidelines for Selecting Evidence-Based Interventions

The Strategic Prevention Framework State Incentive Grant (SPF SIG) program specifically requires implementation of evidence-based interventions that fall within one or more of the following categories:vi

A. included in Federal registries of evidence-based interventions;

B. reported (with positive effects on the primary targeted outcome) in peer-reviewed journals; and

C. documented effectiveness supported by other sources of information and the consensus judgment of informed experts.

The question of whether an intervention strategy is relevant, appropriate, and effective is viewed within the framework of which category is used in the selection process.

Figure 1. Process Description: Selecting Best Fit Prevention Interventions

Source: SAMHSA: Identifying and Selecting Evidence Based Interventions, Revised Jan. 2009



A. Federal Registries

Federal registries are accessible public resources that identify evidence-based prevention interventions. NREPP is an example of a searchable database that features interventions and programs that have been tested in communities, schools, and social service organizations across the country. Federal registries like NREPP briefly describe each intervention and provide information about its supporting evidence.

These registries, however, tend to restrict the number of interventions listed to those that are most easily evaluated using traditional, experimental methods. They often use predetermined criteria and a rating process to score the effectiveness of the listed interventions. People who are less experienced in judging research may find it difficult to compare the strength of the evaluations and ratings of the various interventions. While the use of a registry may seem easier, service providers must still think critically. Local circumstances and populations must be considered when judging the interventions rated on a national registry.

In its revised incarnation, NREPP is a searchable online database of mental health and substance abuse interventions. The NREPP website defines intervention as:

A strategy or approach intended to prevent an undesirable outcome (preventive intervention), promote a desirable outcome (promotion intervention) or alter the course of an existing condition (treatment intervention).vii

Prevention providers will find the website useful for identifying approaches to preventing and treating substance use disorders. As noted above, NREPP's criteria ensure that the interventions listed have been scientifically tested and can be readily disseminated. One of the useful features of the website is the Find Interventions page, a search engine that enables providers to define search criteria. For example, the database includes both mental health and substance abuse interventions, but with the click of a checkbox, the intervention search can be limited to only substance abuse prevention. Examples of other search criteria include:
• Areas of Interest (e.g., alcohol, environmental strategies)
• Implementation History
• Study Population (e.g., age, race/ethnicity, gender)
• Settings (e.g., urban, suburban, rural, tribal)

By narrowing search criteria, providers can spend more time reviewing and assessing the programs’ interventions.
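As a rough analogy for how narrowing criteria shortens the list to review, the sketch below filters a small, hypothetical set of intervention records by area of interest, setting, and study population. It does not query the actual NREPP website or any real interface; the records and field names are invented purely for illustration.

# Hypothetical records and filter - an analogy for narrowing search criteria,
# not an interface to the actual NREPP database.

records = [
    {"name": "Program X", "area": "alcohol", "setting": "rural", "population": "adolescents"},
    {"name": "Program Y", "area": "mental health", "setting": "urban", "population": "adults"},
    {"name": "Program Z", "area": "alcohol", "setting": "urban", "population": "adolescents"},
]

def narrow(records, **criteria):
    """Keep only the records that match every supplied criterion."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# Limit the review to substance abuse prevention for adolescents in urban settings.
shortlist = narrow(records, area="alcohol", setting="urban", population="adolescents")
print([r["name"] for r in shortlist])  # ['Program Z']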



To make it easier for providers to assess whether an intervention is appropriate for their particular context, NREPP publishes an "intervention summary" for every intervention it reviews. Each of these reports includes:
• descriptive information about the intervention and its targeted outcomes;
• Quality of Research and Readiness for Dissemination ratings;
• a list of studies and materials submitted for review; and
• contact information for the intervention developer.

The new registry is designed to be a comprehensive and interactive source of information. It provides ratings for individual outcomes targeted by an intervention, rather than a single, overall rating. Users are encouraged to read the "Key Findings" sections in the intervention summary to better understand the research results for each outcome. It is important to bear in mind that NREPP does not review all interventions that are submitted, and that some interventions are never submitted to NREPP. While the NREPP database may be a one-stop shop for some purposes, prevention providers might also consider other suitable Federal registries. The following table presents a sampling of other Federal registries listed by SAMHSA.

A Sample of Other Federal Registries

OJJDP Model Programs Guide http://www.dsgonline.com/mpg2.5/mpg_index.htm Provides descriptions of, and rates evidence for, youth-oriented interventions, many of which are relevant to the prevention of substance use and abuse.

Exemplary and Promising Safe, Disciplined and Drug-Free Schools Programs Sponsored by the U.S. Department of Education http://www.ed.gov/admins/lead/safety/exemplary01/exemplary01.pdf Provides descriptions of, and rates evidence for, educational programs related to substance use.

Guide to Clinical Preventive Services Sponsored by the Agency for Healthcare Research and Quality [AHRQ] http://www.ahrq.gov/clinic/cps3dix.htm Provides recommendations regarding screening and counseling in clinical settings to prevent the use of tobacco, alcohol, and other substances.

Guide to Community Preventive Services Sponsored by the Centers for Disease Control and Prevention [CDC] http://www.thecommunityguide.org Provides recommendations regarding generic programs and policies to prevent and reduce tobacco use and alcohol-impaired driving.

Source: Identifying and Selecting Evidence-Based Interventions. Revised Guidance Document for the Strategic Prevention Framework State Incentive Grant Program. (2009) Substance Abuse and Mental Health Services Administration (SAMHSA).



B. Peer-Reviewed Journals

Using scholarly research articles is another way to locate evidence-based programs and practices. SAMHSA recommends a careful review of all literature published on a particular intervention; it is not enough to base a decision on a single document. When using this approach, conduct a thorough search of relevant information about the intervention. The goal is to ensure that the reported outcomes are consistent and that they are applicable to your selected population, community, and context. Scholarly research requires a certain level of technical expertise to interpret results and judge the quality of the study being reported. Moreover, accessing articles can be challenging for those without ready access to university libraries or online journals.

The table below is a tool that can assist you in analyzing journal articles. The first column lists key elements of evidence presented in most peer-reviewed journals. The second column suggests questions to help you review articles and interpret the results presented.

Elements of Evidence: A defined conceptual model with outcomes that are defined and measured.
Questions to Consider:
• Does the article describe the theory or provide a conceptual model of the intervention?
• Is the theory or model linked to expectations about the way the program should work?
• Does the article describe the connection of the theory or the conceptual model to the intervention approach, activities, and expected outcomes in sufficient detail to guide your decision?

Elements of Evidence: Background on the intervention evaluated.
Questions to Consider:
• Does the intervention match the identified needs of your community?
• Does the article describe the proposed mechanism of change of the intervention?
• Are the structure and content of the intervention described in enough detail?
• Is the context or setting of the intervention described well enough to make an informed decision concerning how well it might work in the communities targeted?

Elements of Evidence: A well-described study population.
Questions to Consider:
• Does the article describe the characteristics of the study population?
• Does the study population match your local target group?

Elements of Evidence: A pre-intervention measurement of that population and the use of comparison/control groups to evaluate the outcomes.
Questions to Consider:
• Does the article describe the comparison or control groups used?
• Do those groups resemble your target group?

Elements of Evidence: Overall quality of study design and data collection methods.
Questions to Consider:
• Are competing explanations for the findings ruled out?
• Are issues related to missing data and attrition addressed and resolved?
• Did the study's methodology use a combination of strategies to measure the same outcome?

Elements of Evidence: Explanation of the analysis and presentation of the findings.
Questions to Consider:
• Is there an explanation of how the analytical plan addresses the main questions posed in the study?
• Do the analyses take into account the key characteristics of the study's methodology?
• Does the article report and clearly describe findings and outcomes?
• Are the findings consistent with the theory or conceptual model and the study's hypotheses?
• Are findings reported for all outcomes specified?

Elements of Evidence: A summary and discussion of the findings.
Questions to Consider:
• Does the discussion draw inferences and reach conclusions related to the data reported?

Adapted from Identifying and Selecting Evidence-Based Interventions. Revised Guidance Document for the Strategic Prevention Framework State Incentive Grant Program. (2009) Substance Abuse and Mental Health Services Administration (SAMHSA).



C. Other Information Sources that Document Effectiveness

This option enables you to use locally developed interventions that are most appropriate to the unique needs of your community and target population. Using this category of intervention, however, requires particular care. When selecting interventions based on other sources of information, all four of these guidelines must be met:viii

Guideline 1: The intervention is based on a theory of change that is documented in a clear logic or conceptual model;

Guideline 2: The intervention is similar in content and structure to interventions that appear in registries and/or the peer-reviewed literature;

Guideline 3: The intervention is supported by documentation that it has been effectively implemented in the past, and multiple times, in a manner attentive to scientific standards of evidence and with results that show a consistent pattern of credible and positive effects; and

Guideline 4: The intervention is reviewed and deemed appropriate by a panel of informed prevention experts that includes: well-qualified prevention researchers who are experienced in evaluating prevention interventions similar to those under review, local prevention practitioners, and key community leaders as appropriate (e.g., officials from law enforcement and education sectors or elders within indigenous cultures).

These guidelines expand the array of interventions available to prevention providers. As part of a comprehensive program, SAMHSA suggests that these types of interventions "should be considered supplements, not replacements, for traditional scientific standards used in Federal registry systems or peer-reviewed journals." ix


Advantages and Challenges of Evidence-Based Interventions

The following table compares the advantages and challenges of each of SAMHSA's three categories of evidence-based interventions.

A. Federal registries

Advantages • Offer “one-stop” convenience for those seeking quick information on the interventions listed. • Provide concise descriptions of the interventions. • Rate the strength of evidence measured against defined and accepted standards for scientific research. • Present a variety of practical information, formatted and categorized for easy access.

Challenges • List predominantly school- and family-based interventions and relatively few community, environmental, or policy interventions. • Include a limited number of interventions depending on how they are selected. • Are based on evidence that may be outdated if the registry does not provide a process for incorporating new evidence. • May be confusing to consumers seeking to compare the relative strength of evidence for similar programs included on different registries.

B. Peer-Reviewed Journals

Advantages
• Typically present detailed findings and analyses about whether or not the program or practice has an adequate level of evidence that the intervention works.
• Provide authors' contact information that facilitates further collaboration and discussion.
• In some cases, articles report and summarize meta-analyses and other types of complex analyses that examine effectiveness across interventions or intervention components. These types of analyses are potentially very useful to prevention planners.

Challenges
• Leave it to the reader to interpret results and assess the strength of the evidence presented and its relevance and applicability to a particular population, culture, or community context.
• Describe in limited detail the activities and practical implementation issues pertinent to the use of the intervention.

C. Other Documented Information Sources

Advantages
• Enable planners to consider interventions that do not currently appear on a Federal list or in the peer-reviewed literature but which have the potential to address the problem targeted.
• Provide opportunities to use locally developed or adapted interventions, provided they are supported by adequate documentation of effectiveness.
• Involve community members and prevention professionals in a systematic, evidence-based, decision-making process.

Challenges
• Place substantial responsibility on prevention providers for intervention selection decisions.
• Require providers to assemble additional documentation and assess a particular intervention as part of the larger comprehensive community prevention plan.
• Require extensive decision-making and documentation that create resource demands beyond those that are readily available to some communities.



Selecting Evidence-Based Programs and Practices

Regardless of which SAMHSA-designated selection process you follow, whether it be a review of research articles or a search of an online database, you will likely identify several suitable interventions. The following checklist may assist you in selecting an evidence-based intervention that is relevant, appropriate, and effective:x (A brief illustrative sketch of tallying these criteria follows the checklist.)

☐ The intervention or practice has been evaluated and has demonstrated effective outcomes in settings similar to yours.
☐ The intervention addresses risk factors that are relevant to your target population.
☐ The intervention has been successfully implemented with your intended target population, considering factors such as age, race and ethnicity, socio-economic status, and geographic location.
☐ The intervention or practice aligns with identified community needs.
☐ The intervention or practice fits with the capacity and support of your organization, including personnel, physical and financial resources.
☐ There is sufficient time for your organization to fully implement this strategy.
☐ The intervention or practice fits with the mission of your organization.
☐ The intervention reflects the values and practices of your community.
☐ The intervention or practice offers something different than what is currently being offered in the community.
☐ The intervention offers a manual or curriculum that will facilitate implementation. If such a guide does not exist, the principal investigator or program developer can be contacted for more information.
☐ The staffing and cost requirements of the intervention are explicit, and it is easy to apply that information to your organization's circumstances.
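One way to keep such a review organized is to record a yes/no answer for each checklist item and compare how many items each candidate meets. The sketch below is illustrative only; the abbreviated criteria labels and the candidate programs are hypothetical, and a raw tally is an organizing aid rather than a substitute for judgment about which items matter most locally.

# Illustrative only: the criteria labels abbreviate the checklist above;
# the programs and their answers are hypothetical.

CRITERIA = [
    "evaluated in similar settings", "addresses relevant risk factors",
    "tested with similar population", "aligns with community needs",
    "fits organizational capacity", "sufficient implementation time",
    "fits organizational mission", "reflects community values",
    "adds something new locally", "manual or developer support available",
    "staffing and costs are explicit",
]

def tally(answers):
    """Count how many checklist items were answered 'yes' for one candidate."""
    return sum(1 for c in CRITERIA if answers.get(c, False))

candidates = {
    "Program A": {c: True for c in CRITERIA},
    "Program B": {c: (i % 2 == 0) for i, c in enumerate(CRITERIA)},
}
for name, answers in candidates.items():
    print(name, f"{tally(answers)}/{len(CRITERIA)} criteria met")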

Conclusion

This Prevention Tactic is only one tool for understanding how to select, implement, and evaluate evidence-based interventions. How your organization applies the principles of evidence-based practice when making decisions about interventions will be determined by a number of factors, among them the capacity of your organization and the needs of the selected population and/or community. Whether your organization uses a Federal registry, peer-reviewed journals, or other documented sources of information, an understanding of how to apply the principles of evidence-based practice will enable it to make appropriate choices that will enhance the intervention's effectiveness and meet your community's needs.



Further Reading… SAMHSA's Revised Guidance Document will help providers to fully understand and apply the Strategic Planning Framework (SPF) as they identify and select evidence-based practices, programs, and policies. The document can be downloaded from the SAMHSA website at www.samhsa.gov.


Resources

i. Levant, Ronald F. 2005 Presidential Task Force on Evidence-Based Practice. American Psychological Association. July 1, 2005. Available at http://www.apa.org/practice/ebpreport.pdf

ii. Hallfors, D., Pankratz, M., and Hartford, S. Does Federal Policy Support the Use of Scientific Evidence in School-based Prevention Programs? Prevention Science (2007) 8:75–81.

iii. Gorman, D. M. (2002). Defining and operationalizing "research-based" prevention: a critique (with case studies) of the US Department of Education's Safe, Disciplined and Drug-Free Schools Exemplary Programs. Evaluation and Program Planning, 25, 295–302.

iv. Petrosino, A. (2003). Standards for evidence and evidence for standards: The case of school-based drug prevention. The Annals of the American Academy of Political and Social Science, 587, 180–207.

v. Society for Prevention Research. (2004). Standards of evidence: Criteria for efficacy, effectiveness, and dissemination. Retrieved November 13, 2009, from http://www.preventionresearch.org/StandardsofEvidencebook.pdf

vi. Identifying and Selecting Evidence-Based Interventions. Revised Guidance Document for the Strategic Prevention Framework State Incentive Grant Program. (2009) Substance Abuse and Mental Health Services Administration (SAMHSA). www.samhsa.gov; accessed 11/10/09.

vii. SAMHSA's National Registry of Evidence-based Programs and Practices. Help Glossary. www.nrepp.samhsa.gov/help-glossary.asp#intervention. Accessed March 22, 2010.

viii. Ibid., pp. 18–19.

ix. Ibid., p. 19.

x. Evidence-based Interventions and Practices. (2009). Wilder Research.




Center for Applied Research Solutions 923 College Avenue Santa Rosa, CA 95404

preventionTactics

Prevention Tactics are published periodically by CARS under its Community Prevention Initiative contract with the California Department of Alcohol and Drug Programs (ADP). The purpose of this publication is to help practitioners in the prevention field stay abreast of best practices emerging from current research and to provide practical tools and resources for implementing proven strategies. The information or strategies highlighted in Prevention Tactics do not constitute an endorsement by ADP, nor are the ideas and opinions expressed herein those of ADP or its staff.

Let's Hear From You! We welcome readers' comments on topics presented. Call us at 707.568.3800, fax us at 707.568.3810, or send an email to cpiinfo@cars-rp.org. Additional copies of this publication are available upon request or online at: www.ca-cpi.org

© 2010 by Community Prevention Initiative (CPI). Permission to reproduce is granted, provided credit is given. Edition 9:6 Author: David Sherman, M.Ed.

For more information, please visit the CARS website at: www.cars-rp.org.

This publication can be made available in Braille, large print, computer disk, or tape cassette as disability-related reasonable accommodation for an individual with a disability.


Attachment B
Evidence-Based Programs and Practices - What Does It All Mean

Page 58 of 59


Research Review
Evidence-Based Programs and Practices: What Does It All Mean?

Children's Services Council's Mission and Vision and Evidence-Based Programs

In order for CSC to truly achieve its mission and vision, we as an organization must expect programs and services we fund to be able to demonstrate through data that they are achieving positive results and "doing no harm" to the recipients. CSC must be accountable to the children, families and taxpayers of Palm Beach County. In order to do that, the best possible programs and services must be in place. This means that we are either funding programs that are already evidence-based, are on a continuum of becoming evidence-based, or are providing services that enable children to enter programs that are evidence-based. Children and families will be able to reach their full potential if we as an organization and our providers and partners offer the best possible programs and services. We must remember that we are only at the beginning of this journey and are all in it together.

In order to assist in this process, CSC has organized an evidence-based programs committee consisting of a cross section of divisions and outside consultants. Its primary purpose is twofold: (1) to gather research on nationally rated, evidence-based programs and (2) to construct an assessment tool comprised of specific criteria to rate our currently funded programs. This tool will enable us to see where programs and services fall on a continuum of effectiveness so that we can better understand program needs and also assist programs in their journey toward becoming more effective. This effort will also help agencies see where they are on the continuum and help them improve their programs. More specifically, the more information CSC and providers have, the better equipped we are either to implement a nationally rated program or to help refine current programs in order to demonstrate their effectiveness.

In January, at a Senior Executive Policy Institute, providers engaged in an activity aimed at helping the committee examine what criteria should be included in the assessment tool. There was also an inquiry regarding what CSC could do to help providers move toward becoming evidence-based. Information like this is quite helpful to committee work, and provider input and participation will continue to be needed along the way.

The Science of Investing in Evidence-Based Programs: Advocacy and Impact

It should be the charge of any social service organization not only to affect the children it serves, but to improve the lives of those whom it will never serve directly. This goal is often accomplished through advocacy (Pizzigati, Stuck, and Ness, 2002). To that end, it is critical that CSC advocate for other service providers in the county to begin or continue to research and implement evidence-based programs. Why? The answer is simple: because they are a good return on investment and, more importantly, research shows they work for the children and families they serve.

Tana Ebbole CEO

While the phrase "evidence-based programs and practices" has been a common one within the medical field, it is becoming much more widespread in other disciplines, including early childhood education, academia, and juvenile justice. However, what has become quite evident is that not all disciplines are using it in the same way; in other words, there is no consensus regarding the definition or the criteria that make a program evidence-based. This, of course, leads to the problem of miscommunication. It is critically important that each field have a common understanding of what the term means and of the issues that are relevant to this work. Children's Services Council of Palm Beach County believes it is imperative that we help inform and educate the community regarding what evidence-based programs mean to our organization and to the programs and services we fund.

What is an Evidence-Based Program?

No universal definition exists for the term "evidence-based program" (see Appendix A). Evidence-based is often used synonymously with research-based and science-based programming. Other terms commonly used are promising programs, model programs, effective programs, and exemplary programs. Each of these terms has a different meaning and each is



Table of Contents

A Message from Tana Ebbole, CEO .......... 1
What is an Evidence-Based Program? .......... 1
Evidence-Based Practice (Best Practices) versus Program: Is There a Difference? .......... 3
History of Evidence-Based Programs .......... 4
Examples of Evidence-Based Programs .......... 5
What Organizations Promote the Use of Evidence-Based Programs? .......... 5
Why Implement an Evidence-Based Program? .......... 6
Hot Topic #1: Concerns Regarding Evidence-Based Programs .......... 8
Hot Topic #2: Fidelity versus Adaptation .......... 12
References .......... 14
Frequently Asked Questions .......... 15
Appendix A: CSC's Glossary of Evidence-Based Terms .......... 18
Appendix B: National Ratings .......... 20

Acknowledgements:

Special thanks to all the reviewers for their thoughtful critique and invaluable feedback.

A Special Thanks to the Evidence-Based Programs Committee:

Current Members – CSC staff: Lisa Williams-Taylor (Chair), Maggie Dante, Patrick Freeland, Betty Scott, Regina Kanuk, Linda Traum, Jennifer Estrada, Lance Till. Consultants: Mike Levine, Alan Brown, and Gail Chadwick.

Past Members – CSC staff: Judith Brauer, Tiffany North, Debbie Labella, Beth Halleck, Carol Scott, Kelly Brill, Theresa Kanter, Cassondra Corbin-Thaddies.

Written by: Lisa Williams-Taylor Published by: Children’s Services Council of Palm Beach County 2300 High Ridge Road Boynton Beach, FL 33426 561-740-7000

defined differently by the various organizations defining them. There are at least 23 organizations that have created criteria to rate program effectiveness (see Appendix B). For example, the Substance Abuse and Mental Health Services Administration (SAMHSA) uses the term "science-based programs" and defines them as "programs which have been reviewed by experts in the field according to accepted standards of empirical research. Science-based programs are conceptually sound and internally consistent, have sound research methodology, and can prove the effects are clearly linked to the program itself and not extraneous events" (Kyler, Bumbarger, and Greenberg, 2005, p. 2). The No Child Left Behind Act uses the term "scientifically based research" program, which is defined as having "reliable evidence that the program or practice works" (U.S. Department of Education, 2003).

Each organization may have different criteria for determining whether a program is evidence-based. Reviewers for SAMHSA's National Registry of Evidence-based Programs and Practices use criteria to measure the quality of research and readiness for dissemination. They look at the following quality of research criteria: reliability, validity, fidelity, attrition and missing data, potential confounding variables, and appropriateness of analysis. The readiness for dissemination criteria are as follows: availability of implementation materials, training and support resources, and quality assurance procedures.

The Department of Education uses specific criteria noted in the No Child Left Behind Act, which include:
• "research that involves the application of rigorous, systematic, and objective procedures to obtain reliable and valid knowledge relevant to education activities and programs;
• data analysis adequate to test and justify the general conclusions drawn;
• measurements or observational methods that provide reliable and valid data across evaluators, observers, multiple measurements and observations, and studies;
• evaluated using experimental or quasi-experimental designs in which individuals, entities, programs, or activities are assigned to different conditions and with appropriate controls; and
• experimental studies are presented in sufficient detail and clarity to allow for replication or, offer the opportunity to build systematically on their findings; and has been accepted by a peer-reviewed journal or approved by a panel of independent experts through a comparably rigorous, objective, and scientific review" (No Child Left Behind Act, 2001, pp. 1964–65).

Because organizations use different criteria, it is extremely important that a funder or provider of evidence-based programs understands who is assigning a rating to the programs and how they are defining it.

"Effective Programs do not try to do everything for everyone. Their design and operation reflect clear priorities and goals in terms of the type of youth they target, what they seek to accomplish, and the kinds of services, supports, and activities they offer" (Promising & Effective Practices Network, 2001).



Research Review - Evidence-Based Programs and Practices: What Does It All Mean?

Evidence-based practice stands in contrast to approaches that are based on tradition, convention, belief, or anecdotal evidence (National Registry of Evidence-based Programs and Practices).

Common Elements
Although this may be confusing, most definitions of evidence-based do include common elements such as: a strong theoretical foundation; intended for a developmentally appropriate population; quality data collection and procedures; and evidence of effectiveness. For a program to show effectiveness, generally there must be strong evidence that the program results are the direct result of the activities of the program. This means that no other factor or factors were major contributors to the outcomes and that the changes did not happen by chance. For example, while we would expect early education programs to produce favorable effects on children, a scientifically sound evaluation is absolutely required in order to know whether they fulfill their promise (Karoly, Kilburn and Cannon, 2005). To truly say that a program is effective, there must be a strong research design testing the outcomes. This means using an experimental/randomized controlled trial (RCT) or quasi-experimental design. The experimental design is often referred to as the “gold standard” in research. While an in-depth discussion of research designs and methodology is outside the scope of this brief, it is important to note that there are specific types of studies needed in order to say that a program is working and achieving specific child-level outcomes. Without an evaluation that compares a group that received the program or intervention with another group that did not, it would be difficult to determine whether or not the program/intervention

caused the differences between the two groups of children. Also, if you just measure children before and after they receive treatment, then you cannot say that the gains they made would not have occurred even without the intervention. As Karoly and colleagues note (2005), we do not want to attribute a positive effect to a program without a comparison with what would have happened in the absence of the program, holding all other factors constant. Ultimately, we want to answer the question “Compared to what?” to determine whether a program is “effective” (p. 27). Other characteristics of a rigorous research design are an adequate sample size (meaning there were a sufficient number of research subjects who received the intervention); a measurement of sustainability; replication; and a measure of participants’ gains or changes in knowledge, attitudes, and behaviors.
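To make the “Compared to what?” logic above concrete, here is a minimal sketch using invented numbers (it is not part of the original review, and every score is hypothetical): it contrasts a naive pre/post gain with the gain estimated against a comparison group that did not receive the program.

```python
# Hypothetical pre/post scores; every number here is invented for illustration.
program_group = {"pre": [52, 48, 55, 60, 50], "post": [68, 63, 70, 74, 66]}
comparison_group = {"pre": [51, 49, 56, 59, 52], "post": [58, 55, 61, 66, 57]}

def mean(values):
    return sum(values) / len(values)

# Naive view: measure participants only before and after the program.
naive_gain = mean(program_group["post"]) - mean(program_group["pre"])

# "Compared to what?": subtract the change seen in a group that did NOT get
# the program, so maturation or outside help is not credited to the program.
comparison_gain = mean(comparison_group["post"]) - mean(comparison_group["pre"])
estimated_effect = naive_gain - comparison_gain

print(f"Naive pre/post gain:      {naive_gain:.1f}")
print(f"Gain without the program: {comparison_gain:.1f}")
print(f"Estimated program effect: {estimated_effect:.1f}")
```

With these made-up numbers the naive gain overstates the program’s contribution, which is exactly why the review insists on a control or comparison group.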

Evidence-Based Practice (Best Practices) Versus Program: Is There A Difference?
While many use the terms “programs” and “practices” interchangeably, more and more researchers and practitioners are beginning to differentiate between these terms. A practice is defined as a habitual or customary performance or operation, or something that a professional does in order to achieve a positive outcome. More specifically, according to Fixsen et al. (2005), evidence-based practices are skills, techniques, and strategies that can be used when a practitioner is interacting

Commonly Used Terms

Attrition – A gradual, natural reduction in client participation or of personnel within a program.

Experimental Design (classical experiment) – A research design where participants are randomly assigned to either an experimental group (treatment) or control group (no treatment/placebo). This allows the researchers to examine whether the intervention/treatment caused the outcomes or effect to take place (causal inference).

Fidelity – Extent to which delivery of an intervention adheres to the protocol or program model originally developed.

Quasi-Experiment – This research design is very similar to and almost meets criteria for an experimental design, but is unable to control potential factors and does not include random assignment of participants.

Replication – Process of repeating services and/or a program model undertaken by someone else using the same methodology. Commonly the location and participants will be different. Replication results either support earlier findings or question the accuracy of earlier results.




directly with a customer (p. 26). They are sometimes called core intervention components when used in a broader program context. Evidence-based programs may be defined as “organized, multi-faceted interventions that are designed to serve consumers with complex problems. Such programs, for example, may seek to integrate social skills training, family counseling, and educational assistance, where needed, in a comprehensive yet individualized manner, based on a clearly articulated theory of change, identification of the active agents of change, and the specification of necessary organizational supports” (Fixsen et al., 2005, p. 82). Programs integrate various practices within specific settings and with targeted customers. At CSC, we differentiate between these two terms as follows:

• Evidence-Based Program (EBP) – Programs comprised of a set of coordinated services/activities that demonstrate effectiveness based on research. Criteria for rating as such depend upon the organization or agency doing the rankings. EBPs may incorporate a number of evidence-based practices in the delivery of services.
• Evidence-Based Practice – An approach, framework, collection of ideas or concepts, adopted principles and strategies supported by research.

History of Evidence-Based Programs The idea of evidence-based programs is quite new overall, and it is even more recent for the social service arena. The premise of evidence-based originated in the medical field. One landmark in the movement towards evidence-based programs was the establishment of the Food and Drug Administration,

which is responsible for testing the safety of medical treatments (Leff, 2002). Another landmark was the use of randomized control studies. It was only in 1948 that the first such study took place – researching the efficacy of streptomycin in treating tuberculosis. By the 1960s the number of randomized control trials reached into the hundreds, and today there are tens of thousands occurring every day (Dodge, 2006). In the field of psychology, which does not have a governmental body examining the efficacy of treatments, it is the responsibility of those in the field to research effective programs. It really was not until the 1990s that this idea began to expand. The Alcohol, Drug Abuse and Mental Health Reorganization Act of 1992 helped create the Substance Abuse and Mental Health Services Administration (SAMHSA), whose role was to assist in disseminating research and effective programs/services regarding problem behaviors. In 1999, the American Psychological Association established a task force for the main purpose of promoting scientific treatments, also termed empirically supported treatments (Dodge, 2006, p. 477). The task force wanted to advocate for improving patient outcomes by using research and current best evidence, much like what happened years earlier in the medical field. It was during this time that a backlash began, with some psychologists pushing back against these treatments because they believed that “it infringes on their autonomy and dehumanizes clients” (Dodge, 2006, p. 477). There was concern that clients vary too much in regard to disorders, co-morbidity, personality, race, ethnicity, and culture to use a one-size-fits-all “cookie-cutter” approach (Levant, 2005). In education, the No Child Left Behind Act of 2001 was the first major


move by the education field to promote evidence-based programs. This law affects children in kindergarten through high school and stresses accountability for results and emphasizes implementing programs and practices based on scientific research (see page 3 for definitions and criteria). Prevention science has been the last discipline to welcome evidence-based programs. In 1996, the Center for the Study and Prevention of Violence in Colorado began examining various youth programs to determine which ones worked to reduce crime and violence. As can be seen from the previous examples, the areas of substance use, mental health, and juvenile justice have been working towards using evidence-based programs for the past 15 years, but a systematic review of programs in the primary prevention and early intervention areas, such as early care and education, is just now taking off.

Legislation and Evidence-Based Programs
In March 2007, Senators Salazar (D-Colorado) and Specter (R-Pennsylvania) introduced the Healthy Children and Families Act, a bill to expand the evidence-based program Nurse-Family Partnership to all 50 states. The bill would allow states to draw down federal dollars in support of their State Children’s Health Insurance Program (SCHIP). If passed, as many as 570,000 mothers and children could gain access to the program each year. Nurse-Family Partnership is a home visiting program for first-time, low-income mothers. It has been researched using randomized control trials on three different occasions, with the findings published in 1978, 1990, and 1994. Outcomes achieved range from positive birth outcomes to a reduction in maternal antisocial/criminal behavior and child abuse.



This program has been researched by both the Washington State Institute for Public Policy and the RAND Corporation and has shown a positive return on the dollar ($2.88 and $4 for every dollar invested, respectively). Economic benefits include a reduction in emergency room visits, school dropout, arrests and incarceration, and an increase in employment (Yeager, 2007).

Highlight
As of 2004, Louisiana was the only state that used Medicaid to fully fund the Nurse-Family Partnership through Targeted Case Management. Since inception, premature births have decreased by 52% and low birthweight deliveries have decreased by 22% for participating mothers (O’Connor, 2004, p. 7).
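The return-on-investment figures cited above can be read as simple benefit-cost ratios. The sketch below shows that arithmetic with hypothetical per-family dollar values; only the $2.88 and $4.00 ratios come from the review itself, and every other figure and category is assumed purely for illustration.

```python
# Hypothetical per-family figures, invented for illustration only.
cost_per_family = 10_000               # assumed cost of delivering the program
benefits_per_family = {                # assumed dollar value of avoided costs and gains
    "avoided emergency room visits": 4_000,
    "avoided arrests and incarceration": 15_000,
    "reduced school dropout": 6_000,
    "increased employment earnings": 4_000,
}

total_benefits = sum(benefits_per_family.values())
benefit_cost_ratio = total_benefits / cost_per_family
net_benefit = total_benefits - cost_per_family

# The WSIPP and RAND analyses cited above report ratios of $2.88 and $4.00 per
# dollar invested for Nurse-Family Partnership; this toy example lands nearby.
print(f"Benefits returned per dollar invested: ${benefit_cost_ratio:.2f}")
print(f"Net benefit per family:                ${net_benefit:,}")
```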

Examples of Evidence-Based Programs

1. Nurse-Family Partnership (David Olds) – A home visiting program for first-time, low-income, at-risk mothers promoting improved maternal, prenatal, and early childhood health. The outcomes achieved include the following:

• Improved Birth Outcomes: low birthweight, preterm delivery, neurodevelopmental impairment
• Improved Outcomes for At-risk Mothers: reduced rates of subsequent pregnancy, reduction in maternal behavioral problems due to substance use, reduction in school dropout rates, reduction in unemployment, reduced use of welfare and food stamps, and fewer arrests
• Improved Child Outcomes: reduced rates of childhood injury, abuse, and neglect. Long-term follow-up also shows children have fewer sexual partners, reduced cigarette smoking and alcohol use, and fewer arrests and convictions 15 years later.

2. High/Scope Perry Preschool Program (David Weikart) – A universal preschool program that utilizes an active learning environment to encourage independence, self-esteem, confidence, problem-solving skills, and social cooperation, and to promote school bonding. The outcomes achieved include the following:

• Improved Child Outcomes: reduction in need for special education classes, increased academic success (high school graduation), increased adult financial stability (employment, home ownership, monthly income, lowered incidence of use of welfare and other social services), and reduction in arrests

3. Incredible Years (Carolyn Webster-Stratton) – A program for children ages two to eight living in poverty with conduct problems that teaches children to manage anger and frustration in a healthy manner. It provides parents with effective parenting skills to work with their child’s problem behaviors, and provides teachers with appropriate classroom management skills to address and reduce problem behaviors. The outcomes achieved include the following:

• Improved Child Outcomes: positive peer association and interaction, positive behavior interactions at home and school, emotional and social competence, increase in problem-solving and anger management skills, school readiness, academic success, and prevention and reduction of aggressive and problem behaviors
• Improved Parental and School Outcomes: parents and teachers use appropriate and effective discipline practices and praise, increased parental involvement in school, and positive relationships between parents and teachers

What Organizations Promote the Use of Evidence-Based Programs?
There are many organizations that now promote the use of evidence-based programs. It is important to remember that definitions of evidence-based programs and rating standards vary greatly between organizations. Some of the most well-known organizations include (see Appendix B for links):

Office of Juvenile Justice and Delinquency Prevention (OJJDP)

SAMHSA’s National Registry of Effective Programs and Practices (NREPP)

Blueprints for Violence Prevention

Promising Practices




What Works Clearinghouse

Strengthening America’s Families

Center for Mental Health Services (2000)

Center for Substance Abuse Prevention (CSAP)

Office of the Surgeon General

Child Welfare League of America

Why Implement an Evidence-Based Program?
“To date, few of the programs identified as model or exemplary programs have been successfully implemented on a wide scale” (Elliott and Mihalic, 2004). Despite decades of research on the causes and treatments of various problems within the social service arena, children and families still find themselves in crisis. Most approaches aimed at helping these families have shown only modest effects (August et al., 2004, p. 2018). Thus, service providers have begun searching for programs with scientifically proven results. For example, SAMHSA reviewed more than 600 programs and only 11 programs were found to be effective. The National Registry of Evidence-based Programs and Practices, which examined programs in the substance use and mental health disciplines, reviewed more than

1,100 programs and only found 150 that were viewed as model, effective, or promising programs. This is very telling and shows that most programs in the prevention field have either not been sufficiently researched to draw conclusions, or have been researched and did not show positive effects.

Return on Investment
According to researchers, implementing evidence-based programs offers several advantages. First, it helps ensure that a program is based on a proven or tested theory of change: the results or client outcomes are directly related to the services received from the program. Second, evidence-based programming helps to ensure that agencies are spending resources on a proven program that works. We must be accountable to the families we serve, as well as to community stakeholders, funders, and taxpayers (Hyde, Falls, Morris, and Schoenwald, 2003). Third, funders want to invest in programs that have demonstrated outcomes, meaning a good return on investment. “In an era of increasingly tight fiscal budgets, public sector policymakers need more objective and impartial means of reviewing publicly funded programs to determine if the greatest value is being provided for the taxpayer’s dollars. No longer can these policymakers assume that programs are effective simply because the program’s supporters assert that they are effective” (Brown, 2005).

The 2003-05 Washington state operating budget required that the Washington State Institute for Public Policy (WSIPP) conduct research examining the benefits and costs of prevention and early intervention programs for children and youth. There were three main focus areas:

1. Identifying which programs produce a positive return on the dollar.
2. Developing criteria to ensure fidelity and quality of program implementation.
3. Developing recommendations for state legislation encouraging local governments to invest in evidence-based programming and providing these governments reimbursements for implementing such programs (Pizzigati, Stuck, and Ness, 2002).

What they found was that, in fact, there are some programs that do produce positive effects and also generate more benefits than costs. Conversely, they also found that some programs were not good investments and were therefore an inefficient use of taxpayer money. According to Washington State Institute for Public Policy researchers, “the market for rigorously researched prevention and early intervention programs is young, but is evolving quickly. Most high-quality evaluations have been completed only in the last two decades, and many new rigorous studies will become available in the years ahead” (Pizzigati, Stuck, and Ness, 2002).
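As a rough illustration of the kind of screening WSIPP describes (identifying which programs return more than they cost), the sketch below ranks hypothetical programs by net benefit per participant and flags poor investments. All program names and dollar figures are invented and are not WSIPP results.

```python
# Hypothetical per-participant costs and benefits; names and numbers are invented.
programs = [
    {"name": "Home visiting",         "cost": 9_000,  "benefit": 26_000},
    {"name": "Mentoring",             "cost": 3_500,  "benefit": 5_200},
    {"name": "Scare-tactic assembly", "cost": 400,    "benefit": -900},   # harmful
    {"name": "Boot camp",             "cost": 12_000, "benefit": 7_000},
]

for program in programs:
    program["net_benefit"] = program["benefit"] - program["cost"]

# Rank from best to worst investment per participant.
for program in sorted(programs, key=lambda p: p["net_benefit"], reverse=True):
    verdict = "worth funding" if program["net_benefit"] > 0 else "poor investment"
    print(f"{program['name']:<22} net benefit ${program['net_benefit']:>8,}  ({verdict})")
```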

Highlight
As of December 2006 in Florida, 405 youth had completed the Redirection Program, a program that utilizes the evidence-based programs multi-systemic therapy and functional family therapy. This program achieves the same outcomes as residential delinquency programs and, in fact, when examined closely, youth who graduated from the Broward and Escambia counties’ sites achieved significantly better outcomes, including lower re-arrest rates. Also noteworthy, the program has cost $3.1 million so far, in comparison to the $8.9 million it would have cost if these 405 youth had been placed in residential care (Rhen, 2007).




As Peter Greenwood states in his book “Changing Lives,” a culture of accountability is the only way to “keep everyone honest” because it “allows for fair comparisons between competing approaches and programs.” In the absence of such a culture, Greenwood predicts the industry will be unable to avoid the “flavor of the month phenomenon, which conjures up new, supposedly promising programs on the strength of their novelty, rather than proven track record” (Rhen, Evidence-based Associates, p. 1).

Local communities are being asked more than ever to invest in and implement proven programs. “In times of shrinking budgets and increasing federal and state deficits, policymakers and practitioners must make efficient use of prevention resources by opting for programs that have the greatest likelihood of producing positive effects” (Kyler, Bumbarger, and Greenberg, 2005). This means either implementing programs that have already been labeled evidence-based through a national process or proving that the programs they are running would be considered evidence-based if rated. These communities must prove that their programs work (i.e., that they are effective for the children and families they serve). One of the major problems is that communities and small local agencies do not have the resources necessary to prove that their programs are effective because the type of studies that need

to be conducted, namely randomized control studies or experimental designs, are very expensive. For example, on average, a three- to five-year evaluation study can cost several million dollars to fully research effectiveness.

Accountability
Implementing evidence-based programs assists agencies and organizations in moving towards accountability. Why does everyone need to be accountable? The answer is simple: too often programs continue to run without ever showing that what they do works for the children and families they serve. A program may appear on the surface to work and logically should work, but when formally evaluated it may show no results or may in fact be harmful to the population it serves. Strangely enough, the government does not always support programs that have been shown to work. In fact, there is evidence that programs that do not work are being

supported and funded, such as DARE, boot camps, and Scared Straight.

Case Study: Scared Straight
The Scared Straight Program, while implemented across the nation, has actually been shown to cause a small increase in subsequent criminal activity by participating youth. However, the Governor of Illinois recently signed legislation that required schools in Chicago to implement this program even though it is known to have harmful effects (Dodge, 2006).

The fact remains that we spend billions of dollars on social programs that may have absolutely no effect on the problems they are trying to eradicate, and in some cases may be harmful to participants.




Concerns Regarding Evidence-Based Programs
Although there appears to be widespread movement towards evidence-based programs and practices, there is some skepticism. As with any movement, there is sometimes opposition, which is critical for the success of the change movement. Opposition and thoughtful critique help leaders think about the goals and objectives of social change in a thorough and responsive way. In addition, there are many concerns that are currently being addressed or hopefully will be considered in the future. Here are a few examples:

1. Evidence-based programs do not take into account the professional experience of practitioners.
2. Evidence-based programs and practices do not exist for all identified needs or for all target populations.
3. Researching programs in order to define them as evidence-based is very expensive.
4. Implementing evidence-based programs can be very expensive.
5. Providers may not have the capacity to implement an evidence-based program.
6. Providers may believe that adaptation is needed for program success.

What About Professional Expertise?
There has been growing concern from those working in the field that definitions of “evidence-based” do not take into account the personal and professional experience of those providing services to clients. At the same time, there is really no argument that not all programs work for all individuals or families. Because there has been concern regarding implementing programs as is, without taking into consideration the providers and their knowledge and expertise, many organizations have begun adopting definitions that emphasize a balance between research and practice. For example, Buysse and Wesley from the FPG Child Development Institute at the University of North Carolina at Chapel Hill define evidence-based practice as “a decision-making process that integrates the best available research evidence with family and professional wisdom and values.” The American Psychological Association defines evidence-based practice as “the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences” (p. 5). The Institute of Medicine defines it as “the integration of the best research evidence with clinical expertise and patient values” (Levant, 2005, p. 5).

The issue that some researchers have with changing a program/practice because of professional expertise and client need is that if there is a change in the client’s behavior and the practitioner believes it was due to the program, the claim cannot be substantiated without being scientifically studied. It could have been the result of maturation or of additional assistance from family and friends (Leff, 2002). The concept is that after a program/practice is altered, it must be researched again for effectiveness.

Needs and Target Population Concerns
The research on what works in prevention and early intervention is in its infancy. As can be gathered from a historical perspective, this area of study has only been in existence for the past 60 years, and only recently has the social service prevention field begun to scientifically study programs and discuss the possibility of dissemination. In addition, this was done first for the juvenile justice, substance abuse, and mental health areas. There has been very little done in the areas of primary prevention and early intervention, such as early care and education. Thus, some skeptics note that because there are few evidence-based programs to choose from, it is unethical to refuse access to programs that are already in place. However, as asserted by Elliott (2007), would it not be unethical to provide a program




to children that either does not work or may in fact be harmful? Thus, we have to be patient and remind ourselves that we are at the forefront of this movement, and it is important to realize that what we are doing right now may in fact be effective. The only thing to do now is to research it and find out whether we are really as effective as we think we are. The positive effects resulting from these programs may not be visible for years, and the small size of many programs makes it unlikely that they alone could affect city- or county-wide risk factors . . . (Moore, 2007)

Research is Costly
There is not always the opportunity or resources needed to conduct a strong research design. Many studies of this magnitude are extremely expensive and time consuming due to the need to test for sustainability. Most evidence-based programs show that their outcomes are sustainable for at least one year after leaving the program. A program can show positive effects, but if there is not a permanent change to the recipient’s attitudes, knowledge, or behaviors after program participation has ended, then the effects are not sustainable and are, therefore, inadequate. In medical care, “an average of about 17 years is required for new knowledge generated by randomized controlled trials to be incorporated into practice…” (Leff, Conley, and Hennessy, 2006). This is a perfect example of just how long it can take to prove that a program or service works and have it ready for dissemination and use by others in the field. Another example is the Nurse-Family Partnership home visiting model. Program developer David Olds researched his program for over 21 years before allowing it to be replicated and disseminated to the general public. He has also established a national site that assists with the implementation of his program so that effectiveness is ensured.

Because demonstrating effectiveness is such a lengthy process, there are some advocates favoring implementation of programs based on having a strong theoretical foundation, background research supporting the program model and its activities, and, finally, clinical experience. There is debate whether this type of support is enough to move forward with implementing or continuing to support a program based on these factors. It is believed that while some programs may have credible evidence for their support, in the end they still may not be effective when outcomes are measured using a strong research design. In practice, this decision must be made by the funding organization. It is also critically important to understand that changes and effects may not be seen for years, and while very expensive, in the long run the cost of researching programs and finding what works and what does not will pay for itself in the positive effects it produces for the children and families it serves. In fact, many evidence-based programs have demonstrated effects decades later for the participants and have even shown positive impacts for the children of those who participated.

Implementation: A Key to Success
“Although we have taken giant strides forward in determining ‘what works’ and promoting the use of science-based programs, we have lagged behind in building the internal capacity of designers to deliver their programs. To move forward with a national prevention initiative, this gap must be addressed by funders and policymakers” (Elliott and Mihalic, 2004).

There has been some discussion and concern that the primary focus of the evidence-based movement has been on determining which programs are research-based, with much less attention given to whether or not there is capacity to implement them. Implementation occurs in stages, and there can be problems at any one of them. For example, according to Fixsen et al. (2005), there is what is termed “paper implementation,” which is when a program completes the recorded theory of change for the new program. The second stage is “process implementation,” also called the expressed or active theory of change, which involves such components as training.





The last phase is called “performance implementation” and is known as the integrated theory of change; it involves actually carrying out the program, leading to outcomes for clients (p. 6). According to Elliott and Mihalic (2004), when it comes to replicating evidence-based programs, most failures are the result of inadequate site preparation and/or capacity. Sites are simply not ready for the complexity of implementing such a program. Actually, it can take upwards of six to nine months to get a site ready for implementation (Elliott and Mihalic, 2004). One reason is that when an agency decides to implement an evidence-based program, there is almost always the need for some organizational change (Fixsen et al., 2005, p. 64). Fixsen and colleagues (2005, pp. 64-65) report that there are specific factors that are critical to such organizational change, including:

• commitment of ongoing resources and support for providing time and scheduling for coaching, participatory planning, exercise of leadership, and evolution of teamwork;
• commitment of leadership to the implementation process;
• involvement of stakeholders in planning and selection of programs to implement;
• creation of an implementation taskforce made up of consumers and stakeholders;
• suggestions for “unfreezing” current organizational practices;
• resources for extra costs, effort, equipment, manuals, materials, recruiting, access to expertise, and re-training for new organizational roles;
• alignment of organizational structures to integrate staff selection, training, performance evaluation, and ongoing training; and
• alignment of organizational structures to achieve horizontal and vertical integration.

“To be effective, any design process must intentionally be, from the beginning, a redesign process” (Felner, et al., 2001, p. 189).

While disseminating information about evidence-based programs is useful, if there is no capacity to put the program into practice, then the likelihood of achieving positive outcomes becomes quite limited. As Chinman and colleagues (2005) argue, there are many points where the prevention process can falter, each increasing the possibility of poor outcomes. According to them, some of the most critical factors and steps that need to be addressed are as follows:

• Complexity of prevention programming – Conducting needs assessments; setting goals and objectives; choosing appropriate programming that fits the local context given current resources; planning, implementing, evaluating, and sustaining programs.
• System-level factors – Differences in theoretical orientations of researchers and practitioners; differences in training; lack of coordination between agencies and systems of care; lack of community readiness either to adopt programs or implement them with fidelity.
• Resources – Lack of needed resources to implement or sustain programming, including both financial and technical.
• Adaptation – Issues concerning adapting programs to fit community characteristics; developers may not take into consideration dissemination issues, and implementers may not consider issues such as generalizability and fidelity concerns.

Going to Scale and Fidelity
There continue to be questions about whether many of these programs can be brought to scale, meaning replicated with fidelity, given real-life circumstances. Moving a program to a community setting from a research setting is not just a change in location. For example, according to August et al. (2004), there are (1) Client factors, (2) Practitioner factors, (3) Intervention Structure factors, and (4) Organizational culture/climate factors that can impact implementation success.






• Client Factors – In real-life settings, clients/patients cannot be chosen in the same rigorous manner as is typical in the research studies where these programs were tested. Other client factors include things like the cost of the program and logistical issues, such as transportation. These are not usually problems when researching these programs in a controlled setting.
• Practitioner Factors – There will probably be a high degree of variation in the education, practice orientations, and qualifications of those individuals delivering the program. These practitioners will have backgrounds that are much different from those who provided the program to recipients in the research study. Most practitioners do not have experience delivering an evidence-based program. Individuals who are part of research projects are also very committed to the program model, implement it with fidelity, and often have high job satisfaction.
• Intervention Structure Factors – In a research study, scientists have complete control over the program implementation. It is implemented according to a scripted manual and there is strict supervision. These structures are quite different, and there will probably be less support, in real-life settings.
• Organizational Culture/Climate Factors – Once a program moves out of the controlled setting, the organization or agency that decides to implement the program will have its own leadership with its own attitudes and management style, issues with financial and human resources, and organizational stress.

Possible Solutions to Implementation Concerns
August et al. (2004) report that there are in fact some important things an agency can do to help implement a program successfully. These points include making sure that there are collaborative relationships with program developers and other agencies and stakeholders. Each partner must feel ownership of the program to ensure accountability. The host organization and its staff must have a high degree of readiness and motivation for implementation. To measure this, there are readiness tools, such as the Organizational Readiness to Change Scale and the Evidence-based Practice Attitude Scale for staff. There must also be open communication among all parties. Implementers must have staff with sufficient education who are given appropriate training and supervision. Personality of staff and theoretical orientation must also be examined. There must be awareness of culture and how it can influence outcomes. Particular attention must be given to recruitment and retention of participants. The implementing organization must also consider potential problems and begin exploring ways to solve future crises.

Elliott and Mihalic (2004) offer various training recommendations drawn from past research on replication of evidence-based programs, which include the following:

• Use interactive training methods (e.g., videos, role playing).
• Be firm in explaining the formal eligibility requirements for program staff (e.g., required skills, formal training, and education).
• Hire the entire staff before training.
• Introduce the program to the staff before beginning training.
• Encourage administration to attend training.
• Expect staff turnover and begin planning and budgeting for it.
• Be ready to begin implementation right after training ends.





Fidelity versus Adaptation
“…the critical question may not be will this program fit in this local context, but how does this context have to change for us to successfully implement this program here?” (Elliott and Mihalic (2004), quoting Lisbeth Schorr)

Fidelity is defined as the degree to which program implementers provide services or a program as designed by the developer. It is usually measured by adherence to the program, dosage, quality of delivery, and participants’ acceptance of the program (Rohrbach, Grana, Sussman, and Valente, 2006, p. 308). Adaptation, or changing the program design, is usually done because it makes the program more acceptable to the local environment. For example, Rohrbach et al. (2006) have reported that school programs are often adapted and components eliminated to make them more feasible. The Life Skills Training Program is often adapted by the teachers implementing it by adding a scare-tactic component; this approach has been shown to have no effect and may in fact be harmful. Other typical adaptations include eliminating training components and changing dosage (Elliott and Mihalic, 2004). Implementers will sometimes jeopardize fidelity for the sustainability of the program. The problem is that the program may be sustained but no longer effective.

Many assumptions are made regarding implementing evidence-based programs, namely that in order to get local buy-in, the program must be changed, such as by decreasing the intensity. Elliott and Mihalic (2004) report that there is really little research that supports the need to adapt programs, but acknowledge that language and cultural adaptations may be the exceptions (p. 51). However, they also state that not every program needs separate treatments for different sexes or racial/ethnic groups, especially when the program is geared towards child and adolescent populations. There is a question, though, about whether a program that works well for rural teen mothers would work with inner-city teens. The program may or may not be generalizable to this different population and would need to be further evaluated.

Does changing or adapting the original design of an evidence-based program mean that it will not be as effective? Not necessarily; the adaptation may be as effective, more effective, or not effective at all. The problem with adaptation is that, in many cases, we just do not know if it will be as effective as the original program because it has not been experimentally evaluated. Some adaptations have been evaluated. For example, there was an adaptation of the Nurse-Family Partnership program using paraprofessionals to complete the home visits instead of nurses. What researchers found was that the program was not as effective in reaching its outcomes. Further analysis revealed that nurses completed more visits than paraprofessionals and spent more time focusing on personal health during pregnancy and on parenting an infant. Paraprofessionals visited for longer time periods than nurses and spent more time on environmental concerns (safety of the environment, including living conditions, domestic violence issues, and the ability to provide adequate food, clothing, and shelter). Paraprofessionals also experienced greater staff turnover. These differences in implementation caused the program adaptation to fail to achieve positive outcomes for participants (Korfmacher, O’Brien, Hiatt, and Olds, 1999; Hiatt, Sampson, and Baird, 1997).
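To make the fidelity dimensions named above (adherence, dosage, quality of delivery, and participant acceptance) more concrete, here is a hypothetical monitoring sketch. The ratings, weights, and threshold are invented for illustration and do not come from any published fidelity instrument.

```python
# Hypothetical fidelity ratings for one program site, each scored 0.0-1.0 by an observer.
# Dimensions follow Rohrbach et al. (2006); all values and weights are assumptions.
fidelity_ratings = {
    "adherence": 0.90,                 # sessions delivered as written in the manual
    "dosage": 0.75,                    # share of the intended number and length of sessions
    "quality_of_delivery": 0.85,
    "participant_acceptance": 0.80,
}
weights = {"adherence": 0.4, "dosage": 0.2,
           "quality_of_delivery": 0.2, "participant_acceptance": 0.2}

composite = sum(fidelity_ratings[d] * weights[d] for d in fidelity_ratings)
print(f"Composite fidelity score: {composite:.2f}")

# An assumed threshold a funder might set before any adaptation is discussed.
if composite < 0.80:
    print("Below threshold: review implementation before adapting the model.")
else:
    print("Implemented with acceptable fidelity; adaptation can be discussed.")
```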
If a program is implemented as designed with the intended population, the need for an outcome evaluation is eliminated and the evaluation becomes focused on process and program-design adherence. According to many advocates of evidence-based programs, strict fidelity is essential to program effectiveness. On the other hand, there is some research that shows that “sensitivity and flexibility in administering therapeutic interventions produces better outcomes than rigid application of manuals or principles” (Levant, 2005, p. 14). Evidence-based program developers do not necessarily disagree with this belief or this research. That is why there has been some discussion regarding moving towards researching what the core components or critical





elements of any intervention are. The federal government is supporting current research in this area. Core components are defined as the “essential and indispensable” elements of a program or practice needed in order to reach outcomes (Fixsen et al., 2005, p. 24). The goal is to highlight these and then be able to do some adaptation without decreasing effectiveness. Replication of programs will need to occur before these core components can be established. Furthermore, there is also some agreement that, if at all possible, a program should first be implemented with fidelity before adaptation begins; research shows that when this occurs, adaptations are more successful (Fixsen et al., 2005). While it is in its infancy, some research is examining the generalizability and transportability of evidence-based programs into community settings. If you are contemplating adapting a program from its original format, it is important that (1) you contact the program developer and ask about the core components (the developer may have an understanding of how important an omission may be to the outcomes), and (2) you understand the theoretical foundation that the program is premised on so that you can preserve it when making changes (Chinman, Imm, and Wandersman, 2004, p. 47).





References

August, G.J., Winters, K.C., Realmuto, G.M., Tarter, R., Perry, C., and Hektner, J.M. (2004). Moving Evidence-Based Drug Abuse Prevention Programs From Basic Science to Practice: “Bridging the Efficacy-Effectiveness Interface.” Substance Use and Misuse. 39(10-12): pp. 2017-2053.

Brown, A. (July 2005). Determining What Works (and What Doesn’t) in the Public Sector. Public Management. Civic Renewal and Community Building.

Buysse, V. and Wesley, P.W. (September 2006). Evidence-Based Practice Empowers Early Childhood Professionals and Families. FPG Child Development Institute at The University of North Carolina at Chapel Hill. FPG Snapshot #33.

Centre for Evidence-based Medicine. (2004). Glossary of EBM Terms. University Health Network. <www.cebm.utoronto.ca/glossary/index.htm>.

Chinman, M., Hannah, G., Wandersman, A., Ebener, P., Hunter, S.B., Imm, P., and Sheldon, J. (June 2005). Developing a Community Science Research Agenda for Building Community Capacity for Effective Preventive Interventions. American Journal of Community Psychology. 35(3/4): pp. 143-157.

Chinman, M., Imm, P., and Wandersman, A. (2004). Getting To Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation. Chapter Three – Question #3: Which Evidence-Based Programs Can Be Used to Reach Your Goal? (Best Practice). Rand Corporation. <http://www.rand.org/pubs/technical_reports/TR101/>.

Crane, J. and Barg, M. (April 2003). Do Early Childhood Programs Really Work? Coalition for Evidence-Based Policy.

Dodge, K. (2006). Professionalizing the Practice of Public Policy in Prevention of Violence. Journal of Abnormal Child Psychology. 34: pp. 475-479.

Elliott, D.S. and Mihalic, S. (March 2004). Issues in Disseminating and Replicating Effective Prevention Programs. Prevention Science. 5(1): pp. 47-53.

Elliott, D.S. (2007). Personal Communication via the Senior Executive Policy Institute.

Felner, R.D., Favazza, A., Shim, M., Brand, S., Gu, K., and Noonan, N. (2001). Whole school improvement and restructuring as prevention and promotion – Lessons from STEP and the project on high performance learning communities. Journal of School Psychology, 39(2), pp. 177-202.

Fixsen, D.L., Naoom, S.F., Blase, K.A., Friedman, R.M., and Wallace, F. (2005). Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231).

Hiatt, S.W., Sampson, D., and Baird, D. (1997). Paraprofessional Home Visitation: Conceptual and Pragmatic Considerations. Journal of Community Psychology. 25(1): pp. 77-93.

Hyde, P.S., Falls, K., Morris, J.A., and Schoenwald, S.K. (Fall 2003). Turning Knowledge into Practice: A Manual for Behavioral Health Administrators and Practitioners about Understanding and Implementing Evidence-Based Practices. The American College of Mental Health Administration.

Karoly, L.A., Kilburn, M.R., and Cannon, J.S. (2005). Early Childhood Interventions: Proven Results, Future Promise. Rand Corporation. <www.rand.org>.



Frequently Asked Questions (written by EBP committee)

What is an evidence-based program? An evidence-based program is comprised of a set of coordinated services/activities that demonstrate effectiveness based on research. Criteria for rating as such depend upon the organization or agency doing the rankings. EBPs may incorporate a number of evidence-based practices in the delivery of services.

Why are evidence-based programs important? Implementing evidence-based programs is important to ensure that resources are spent on programs that have a high probability of achieving desired, long-term outcomes and that incorporate principles of effective programming that bring about positive results. The advantage to both funders and providers is that EBPs eliminate the costly and time-consuming efforts of exploring and experimenting with new methods, strategies and technologies. They provide the best return on investment.

Who makes the distinction whether something is evidence-based? There are many organizations that now have procedures for rating programs in order to designate them as evidence-based. CSC will also be creating criteria with which to determine which programs are evidence-based and what is needed to move a program towards becoming evidence-based.

Why is CSC interested in EBP? Why now? CSC’s top priority has always been to see that the children and families served by its funded programs achieve the best possible outcomes. We have also, since our inception, demanded the highest level of accountability – of ourselves and those we fund – to provide the highest return on investment for our county’s taxpayers and stakeholders. As this report details, the development of evidence-based programming in social services is relatively new, but it vastly reduces hit-or-miss outcomes and results. This continuous improvement mindset is part of CSC’s leadership philosophy, core values, and behaviors. Now that CSC is at a place where it can provide a supportive infrastructure (e.g., training) for EBP development, we will move forward to assist programs/agencies with this advanced level of work.

How do we know the movement towards EBP is not just another fad? As grants change, tax dollars dwindle, and boards require higher standards for the accountability of provided resources, social service programs must adapt by better measuring program success and evidence-based outcomes. This expectation to provide evidence-based outcomes is happening nationally across many states. As a result, EBP is not being viewed as another “fad”; rather, it is an understandable expectation by funders and boards to ensure accountability of provided resources and to produce outcomes/results that truly make a difference for the clients being served.

I thought we were doing that already, what is different now? Historically, CSC has funded a few, select programs that are considered evidence-based (e.g., HIPPY). While CSC has required agencies and programs to provide data in the past, and will continue to do so in the future, this does not necessarily mean that these programs are evidence-based. The term “evidence-based” refers to programs that have theoretical underpinnings, have met specific criteria, and have been proven to be effective. It is CSC’s goal to help its currently funded agencies and programs move toward becoming evidence-based. This can be a long and laborious process; however, prior work (i.e., data collection, development of logic models and theories of change) has actually helped


move programs forward in this direction. Such prior work also allows programs to work toward continuous quality improvement, ensuring accountability and provision of the best services possible for our families.

How can we learn about evidence-based programs? And how is CSC going to help us become evidence-based? There are a number of resources available to educate individuals about evidence-based programs, including the various organizations that define and rate programs using specific criteria. Links to these organizations can be found in Appendix B of this document. The reference section of this document also provides many sources of information. CSC will be hosting training on evidence-based programs beginning in September 2007 as part of a mini-series dedicated to the topic of effectiveness. There is also a CSC Professional Development Committee preparing professional development opportunities that help staff build the knowledge, skills, and abilities necessary to work within an evidence-based program environment. CSC will make available resources for professional development, monitoring, and technical support.

Are we “ready” for evidence-based practices? We won’t know unless we try. We must follow logical steps during the planning and design stages: do a substantial amount of research about our own population and about the programs and services deemed evidence-based that may lead to the outcomes we are seeking, and determine if we can match the two.

How long does it take to become evidence-based? While there is no specific timetable for becoming evidence-based, key elements must be in place before evaluating success. This process can extend over several years. Every area and program is different, and some take longer to determine whether they are successful. Much also depends on the results sought. Lastly, considering that a program must have completed randomized control trials and have shown positive outcomes with lasting client effects, the process is long and can be costly.

What measures can we take to move ourselves towards evidence-based programming? It starts with a solid foundation. To begin with, you must ensure that your agency is clear on its mission and goals and that the program staff is committed to the goal(s), outcomes, and procedures required by the funding agency. Programs can begin by developing a sound theory of change stating what they believe will effect change for the clients; developing a logic model (a road map as to how they plan to achieve their outcomes); providing data that is submitted on time and is clean and easily manipulated for evaluation; and implementing their existing program with fidelity. Programs should be fully aware of what is in their PIE (Program Implementation and Evaluation) and should ensure that they are adhering to it. A program that wishes to move towards becoming evidence-based should be collecting data for the program and be able to understand how to implement change based on findings from the data. All members of the agency’s staff involved in the program must take advantage of ongoing professional development and periodic performance assessment. Finally, evidence-based programs not only assess themselves regularly for continuous quality improvement, but also seek ongoing feedback from clients regarding satisfaction.
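Purely as an illustration of the logic model described in this answer (not a CSC template or requirement), the sketch below represents one as a small data structure so the chain from inputs to long-term outcomes can be written down and checked against collected data. Every program detail in it is hypothetical.

```python
# Hypothetical logic model for an invented parenting program; all names are made up.
logic_model = {
    "theory_of_change": ("Coaching caregivers in positive discipline reduces "
                         "children's problem behaviors at home and in school."),
    "inputs": ["2 trained parent coaches", "curriculum manual", "program funding"],
    "activities": ["12 weekly group sessions", "2 home visits per family"],
    "outputs": ["number of sessions delivered", "families completing 10+ sessions"],
    "outcomes": {
        "initial": ["caregivers use praise and consistent discipline"],
        "intermediate": ["fewer problem behaviors reported at home"],
        "long_term": ["improved school behavior and readiness"],
    },
}

# Making each link explicit shows exactly which data a program needs to collect.
for step in ("inputs", "activities", "outputs"):
    print(f"{step}: {', '.join(logic_model[step])}")
for level, items in logic_model["outcomes"].items():
    print(f"outcomes ({level}): {', '.join(items)}")
```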

Will we be de-funded if we are not evidence-based? As we will continue to emphasize, we are all in the business of improving outcomes for our children and families. CSC has made and will continue to make data-driven decisions – on services and programs needed, outcomes sought, and who is most capable of providing those services and achieving those outcomes. The road toward becoming evidence-based only heightens the importance of clean, accurate, complete data for you, as well as for CSC, to be able to see what’s working and what’s not and to make mid-course corrections.





References (continued)

Korfmacher, J., O’Brien, R., Hiatt, S., and Olds, D. (1999). Differences in Program Implementation Between Nurses and Paraprofessionals Providing Home Visits During Pregnancy and Infancy: A Randomized Trial. American Journal of Public Health. 89(12): pp. 1847-1851.

Kyler, S.J., Bumbarger, B.K., and Greenberg, M.T. (April 2005). Technical Assistance Fact Sheets: Evidence-Based Programs. Pennsylvania State University Prevention Research Center for the Promotion of Human Development.

Leff, H.S. (2002). Section V: Insurance for Mental Health Care: Chapter 17. A Brief History of Evidence-Based Practice and a Vision for the Future. SAMHSA’s National Mental Health Information Center. <http://mentalhealth.samhsa.gov/publications/allpubs/SMA04-3938/Chapter17.asp>.

Leff, S., Conley, J., and Hennessy, K. (Summer 2006). Marketing Behavioral Health: Implications of Evidence Assessments for Behavioral Health Care in the United States. International Journal of Mental Health. 35(2): pp. 6-20.

Levant, R.F. (2005, July 1). Report of the 2005 Presidential Task Force on Evidence-Based Practice. American Psychological Association.

U.S. Department of Education (2003). Proven Methods: Questions and Answers on No Child Left Behind – Doing What Works. <www.ed.gov/nclb/methods/whatworks/doing.html>.

Yeager, C. (Summer 2007). Nurse-Family Partnership: Helping First Time Parents Succeed. In Why Evidence-Based Associates. The EB-Advocate. Evidence-Based Associates.





Appendix A: CSC’s Glossary of Evidence-Based Terms

Attrition – A gradual, natural reduction in client participation or of personnel within a program.

Client-level outcome – The actual impact, benefit, or change for an individual or group of participants as a direct correlate or effect of the program.

Comparison group – A group in quasi-experimental research that is similar to the experimental group, but which does not receive the experimental intervention (e.g., treatment, therapy, or curriculum). Comparing these groups allows the researcher to identify relationships associated with the intervention.

Control group – A group in experimental research that is similar to the experimental group, but which does not receive the experimental intervention (e.g., treatment, therapy, or curriculum). Comparing these groups allows the researcher to identify the effect of the intervention. This group is similar to the comparison group in quasi-experimental research, but is randomly assigned (Maxfield and Babbie, 2005, p. 435).

Cost-benefit analysis – An assessment of whether the cost of the intervention or program is worth the benefit by measuring both in the same unit of analysis (Centre for Evidence-Based Medicine, 2004).

Data – Information collected in a systematic manner in order to help measure performance. This collection of observations or recorded factual material will support research and evaluation efforts.

Essential elements – The crucial components of an evidence-based program. These are the components that create the benefits or outcomes for participants. Other research may refer to these as core components.

Evaluation – “The systematic collection of information about activities, characteristics, and outcomes of programs to reduce uncertainties, improve effectiveness, and make decisions with regard to what those programs are doing and effecting” (Patton, 1982).

Evaluation research – An evaluation of the effects of a program in regards to its stated outcomes or goals (Maxfield and Babbie, 2005, p. 436).

Evidence-based practice – An approach, framework, collection of ideas or concepts, adopted principles and strategies supported by research.

Evidence-based program (EBP) – Programs comprised of a set of coordinated services/activities that demonstrate effectiveness based on research. Criteria for rating as such depend upon the organization or agency doing the rankings. EBPs may incorporate a number of evidence-based practices in the delivery of services.

Experimental design (classical experiment) – A research design where participants are randomly assigned to either an experimental group (treatment) or the control group (no treatment/placebo). This allows the researchers to examine whether the intervention/treatment caused the outcomes or effect to take place (causal inference).

Experimental group – A group in experimental research that is similar to the control group, but which receives the experimental intervention (e.g., treatment, therapy, or curriculum). Comparing these groups allows the researcher to identify the effect of the intervention (Maxfield and Babbie, 2005, p. 436).

Fidelity – Extent to which delivery of an intervention adheres to the protocol or program model originally developed (Mowbray, Holter, Teague, and Bybee, 2003).
Level of significance – The degree of probability that the finding could be attributed to sampling error or that if we took another sample we might find no effect (p≤.05 = if there is 5% or less possibility that a relationship is due to chance or sampling error, we conclude the relationship is real).
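
To make the p ≤ .05 convention concrete, the following sketch runs a simple two-sample significance test on hypothetical post-test scores. The data values, group sizes, and use of the SciPy library are illustrative assumptions, not part of any rating organization's procedure.

    # Illustrative only: hypothetical post-test scores for a program group
    # and a comparison group, tested at the conventional 5% level.
    from scipy import stats

    treatment = [72, 78, 81, 69, 75, 80, 77, 74, 79, 83]    # hypothetical scores
    comparison = [68, 71, 65, 70, 66, 72, 69, 67, 73, 64]   # hypothetical scores

    result = stats.ttest_ind(treatment, comparison)
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

    # If p <= .05, there is a 5% or smaller probability that a difference this
    # large would arise from sampling error alone, so the relationship is
    # treated as statistically significant.
    if result.pvalue <= 0.05:
        print("Difference is statistically significant at the .05 level.")
    else:
        print("Difference is not statistically significant at the .05 level.")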


Logic model – A diagram that shows the interrelationships between activities and their outcomes, using arrows to indicate which sets of activities are believed to contribute to specific outcomes.

Measurement – Assessing changes in characteristics or attributes of subjects as a result of participation in a program or receipt of a treatment.

Outcome – A benefit for participants during or after their involvement with a program. Outcomes may relate to knowledge, skills, attitudes, values, behavior, condition, or status. There can be "levels" of outcomes, with initial outcomes being the first change that can be expected, leading to intermediate and longer-term outcomes that can be sustained over time.

Process evaluation – An evaluation of whether a program is implemented as planned or intended (Maxfield and Babbie, 2005, p. 438).

Program – A collection of services, activities, or projects intended to meet a public (or social) need and identified goals (e.g., Nurse-Family Partnership and Brief Strategic Family Therapy).

Qualitative research – Research involving detailed descriptions of characteristics, cases, and settings. This technique derives data from observation, interviewing, and document review, and focuses on the meanings and interpretations of the participants.

Quantitative research – Research that examines phenomena through the numerical representation of observations and statistical analysis: the systematic, scientific collection and measurement of data expressed as specific units or numbers that define, measure, and report on the relationships between variables, characteristics, or concepts.

Quasi-experiment – A research design that is very similar to, and almost meets the criteria for, an experimental design, but that cannot control all potential factors and does not include random assignment of participants.

Replication – The process of repeating services and/or a program model, undertaken by someone else using the same methodology; commonly the location and participants will differ. Replication results either support earlier findings or call the accuracy of earlier results into question.

Target population – The group of participants that a program is designed to help.

Theoretical framework – A collection of interrelated concepts that guides research, determining what will be measured and what statistical relationships will be identified.

Theory of Change – Guided by the theoretical framework, a detailed narrative that describes a process of planned social change, from the assumptions that guide its design to the long-term goals it seeks to achieve.

Variable – Anything that takes on different values; a measurable factor, characteristic, or attribute that varies over time.
• Independent variable – A variable that is actively controlled/manipulated to see whether there is a change in the dependent variable; used to measure the causal construct.
• Dependent variable – A variable used to assess the affected construct; that is, the value that changes as a result of the manipulation of the independent variable.
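
The toy example below ties together the experimental design, control group, and variable entries above: participants are randomly assigned, the independent variable is group membership, and the dependent variable is a post-test score. All participant counts, score values, and the size of the "treatment effect" are made-up assumptions for illustration.

    # Illustrative only: a toy classical experiment with random assignment.
    import random

    random.seed(1)
    participant_ids = list(range(40))        # hypothetical participants
    random.shuffle(participant_ids)          # random assignment
    treatment_group = participant_ids[:20]   # experimental group
    control_group = participant_ids[20:]     # control group

    def post_test_score(treated):
        # Hypothetical dependent variable: a noisy baseline score plus a
        # made-up effect when the independent variable (group) is "treated".
        return 60 + random.gauss(0, 5) + (8 if treated else 0)

    treatment_scores = [post_test_score(True) for _ in treatment_group]
    control_scores = [post_test_score(False) for _ in control_group]

    mean_t = sum(treatment_scores) / len(treatment_scores)
    mean_c = sum(control_scores) / len(control_scores)
    print(f"Treatment mean: {mean_t:.1f}, control mean: {mean_c:.1f}, "
          f"difference: {mean_t - mean_c:.1f}")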


Appendix B: National Ratings Organizations

Organization: Blueprints for Violence Prevention <www.colorado.edu/cspv/blueprints>
Focus: Violence Prevention
Ratings: Model; Promising
Ratings Defined:
Model: Programs that show evidence of a deterrent effect using either an experimental or quasi-experimental design, show sustained effects for at least one year post-treatment, and include replication at more than one site with demonstrated effects.
Promising: Programs that show a deterrent effect using either an experimental or quasi-experimental design.

Organization: Center for Substance Abuse Prevention (CSAP) <www.modelprograms.samhsa.gov>
Focus: Substance Abuse Prevention
Ratings: Model; Effective; Promising
Ratings Defined:
Model: Programs that are evidence-based (conceptually sound, internally consistent, methodologically sound, credible, and generalizable). Programs have utility, are well-implemented and well-evaluated, and produce a consistently positive pattern of results for the majority of intended recipients. Developers must show that the program is available for dissemination and must provide technical assistance to others wishing to implement the program (must score ≥ 4.0).
Effective: Same as above, but not currently available for wide dissemination to the general public (must score ≥ 4.0).
Promising: Programs that demonstrate some positive outcomes but require additional evidence and research showing consistent positive results (must score ≥ 3.33).

Organization: Center for Mental Health Services (1999 Report) <http://www.prevention.psu.edu/pubs/docs/CMHS.pdf>
Focus: Reducing the Risk/Effects of Psychopathology
Ratings: Effective; Promising
Ratings Defined:
Effective: Programs that are evaluated using comparison groups with either a randomized or quasi-experimental design using a control group; must have pre- and post-test data and preferably follow-up data, a written implementation manual, and must demonstrate positive outcomes.
Promising: Programs that appear promising but are not proven, meaning they lack a controlled design, contain very small samples, or have findings that are indirectly related to mental health outcomes.

Organization: Department of Education <www.ed.gov> <http://www.ed.gov/admins/lead/safety/exemplary01/exemplary01.pdf>
Focus: Reducing Substance Use, Violence, and Other Conduct Problems
Ratings: Exemplary; Promising
Ratings Defined:
Exemplary: The program is based on empirical data and demonstrates evidence of effectiveness in improving student achievement.
Promising: The program provided sufficient evidence to demonstrate promise for improving student achievement.
Programs are rated according to evidence of efficacy, quality, educational significance, and usefulness to others.


Organization: Mihalic and Aultman-Bettridge (2004). In William L. Tulk (Ed.), Policing and school crime. Englewood Cliffs, NJ: Prentice Hall Publishers.
Focus: Reducing school disciplinary problems, suspensions, truancy, and dropout, and improving academic achievement
Ratings: Exemplary/Model; Promising; Favorable
Ratings Defined:
Exemplary/Model: Programs that show evidence of a deterrent effect using either an experimental or quasi-experimental design, show sustained effects for at least one year post-treatment, and include replication at more than one site with demonstrated effects. (Based on Blueprints)
Promising: Programs that show a deterrent effect using either an experimental or quasi-experimental design. (Based on Blueprints)
Favorable: Programs that have experimental or matched control group designs and show evidence that behavioral effects are due to the intervention and not other factors, but may have weaker research designs than the standard held for Blueprints.

Organization: National Institute of Justice (NIJ) – Report by Sherman et al. (1998); Research in Brief (1998) <http://www.ncjrs.gov/pdffiles/171676.pdf>; Full report to Congress (1997) <www.ncjrs.org/works/wholedoc.htm>
Focus: Crime and drug abuse prevention
Ratings: Working/Effective; Promising
Ratings Defined:
Working/Effective: Programs that have at least two Level 3 evaluations with statistical significance tests and the preponderance of all available evidence showing effectiveness in preventing crime or reducing risk factors for crime, and whose findings can be generalized.
Promising: Programs that have at least one Level 3 evaluation and the preponderance of the remaining evidence showing effectiveness, but a low level of certainty to support generalizability.
Programs are rated according to research design and internal validity using the Maryland Scale of Scientific Methods. Level 3 is a comparison between two or more comparable units of analysis, one with and one without the program (research design), where causal direction and history are not threats to validity (internal validity).

Organization: The Gilford Center <http://www.guilfordcenter.com/provider/practices/default.htm>
Focus: Adult services; substance abuse prevention and treatment; child services, mental health, and systems of care; developmental disabilities
Ratings: Best Practice; Emerging Best Practice; Evidence-Based Practice
Ratings Defined:
Best Practice: An intervention generally accepted as successful and currently believed to improve consumer outcomes. Evidence-based practices are a type of best practice that has been established and supported by scientific evidence; the terms "best practice" and "evidence-based practice" are often used interchangeably.
Emerging Best Practice: Interventions or services that have shown benefit to consumers but have not yet been established as evidence-based practices through rigorous scientific research.
Evidence-Based Practice: An intervention for which there is consistent scientific evidence showing that it improves client outcomes.


Organization: Promising Practices Network <http://www.promisingpractices.net/programs.asp>
Focus: Children and Families
Ratings: Proven; Promising; Proven/Promising
Ratings Defined:
Proven: Programs have at least one credible, scientifically rigorous study that demonstrates improvement on at least one indicator. To be rated as proven, all of the following must be met: (1) the program must improve an indicator related to children and family outcomes; (2) at least one outcome is changed by 20%, 0.25 standard deviations, or more (see the computational sketch below); (3) at least one outcome with a substantial effect size is statistically significant at the 5% level; (4) the study design uses a convincing comparison group to identify program impacts, including a randomized control trial (experimental design) or some quasi-experimental designs; (5) the sample size of the evaluation exceeds 30 in both the treatment and comparison groups; (6) program evaluation documentation is publicly available.
Promising: Programs have at least some evidence that the program improves outcomes for children and families. To be rated as promising, all of the following must be met: (1) the program may affect intermediary variables rather than direct outcomes; (2) the change in outcome is more than 1%; (3) the outcome change is significant at the 10% level (marginally significant); (4) the study has a comparison group, but it may exhibit some weaknesses, e.g., the groups lack comparability on pre-existing variables or the analysis does not employ appropriate statistical controls; (5) the sample size of the evaluation exceeds 10 in both the treatment and comparison groups; (6) program evaluation documentation is publicly available.
Proven/Promising: The program affects more than one indicator, and the level of evidence differs across indicators.
Additional considerations play a role on a case-by-case basis; these may include attrition, the quality of outcome measures, and others.

Organization: Juvenile Justice Evaluation Center <http://www.jrsa.org/jjec/resources/evidence-based.html>
Focus: Youth Violence
Ratings: Model Programs; Promising Approaches; Innovative Approaches
Ratings Defined:
Model Programs: Programs that have demonstrated definitive success in multiple evaluations; these are sometimes referred to as exemplary programs.
Promising Approaches: Those for which evaluation evidence is suggestive of success, but not definitive.
Innovative Approaches: Those for which no evidence exists, but which may be based on prior research or evaluation.
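
To make the Promising Practices Network "Proven" size thresholds above concrete, the sketch below checks a 20% improvement and a 0.25 standard-deviation change on hypothetical group means. The data values are invented, and the combined-sample standard deviation stands in for a formally pooled one; this is an illustration, not the network's own procedure.

    # Illustrative only: percent-change and SD-unit checks on made-up data.
    import statistics

    treatment = [34, 38, 41, 36, 40, 39, 37, 42, 35, 43]   # hypothetical outcomes
    comparison = [30, 31, 29, 33, 28, 32, 30, 31, 29, 27]

    mean_t, mean_c = statistics.mean(treatment), statistics.mean(comparison)
    combined_sd = statistics.pstdev(treatment + comparison)  # stand-in for pooled SD

    pct_change = (mean_t - mean_c) / mean_c * 100    # percent improvement
    effect_size = (mean_t - mean_c) / combined_sd    # change in SD units

    print(f"Percent change: {pct_change:.1f}%  Effect size: {effect_size:.2f} SD")
    print("Meets the size threshold:", pct_change >= 20 or effect_size >= 0.25)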


Organization: Substance Abuse and Mental Health Services Administration, Center for Substance Abuse Prevention <http://www.whitehousedrugpolicy.gov/prevent/pdf/science.pdf>
Focus: Substance abuse prevention; criteria applied to high-risk youth (HRY) programs and pregnant and postpartum women and their infants (PPWI) programs
Ratings: Type 1; Type 2; Type 3; Type 4; Type 5
Ratings Defined:
Type 1: Not scientifically defensible. The program/principle has been defined or recognized publicly and has received awards, honors, or mentions.
Type 2: Not scientifically defensible. The program/principle has appeared in a non-refereed professional publication or journal. It is important to distinguish between citations found in professional publications and those found in journals.
Type 3: Expert/peer consensus process – scientifically defensible. The program's source documents have undergone thorough scrutiny in an expert/peer consensus process for the quality of implementation and evaluation methods, or a paper has appeared in a peer-reviewed journal. All dosage information and data collection processes are detailed, and all analyses are presented for review. Reviewers, trained as evaluators, code the implementation variables and activities as well as the findings.
Type 4: Qualitative or quantitative meta-analysis – scientifically defensible. The program/principles have undergone either a quantitative meta-analysis or an expert/peer consensus process in the form of a qualitative meta-analysis.
Type 5: Replications of programs/principles – scientifically defensible. Replications of the program/principle have appeared in several refereed professional journals. Evidence of a program's effectiveness is that it can be replicated across venues and populations, demonstrating credibility, utility, and generalizability.
A matrix is applied to establish the scientific credibility of a program, with overall program ratings on a scale of 1-5 by integrity and utility; programs must score 3 or greater.

Organization: What Works Clearinghouse <www.whatworks.ed.gov/>
Focus: Educational Interventions (programs, products, practices, and policies)
Ratings: Meets Evidence Standards; Meets Evidence Standards with Reservations; Does Not Meet Evidence Screens
Ratings Defined:
Meets Evidence Standards: Randomized controlled trials (RCTs) that do not have problems with randomization, attrition, or disruption, and regression discontinuity designs that do not have problems with attrition or disruption.
Meets Evidence Standards with Reservations: Strong quasi-experimental studies that have comparison groups and meet other WWC Evidence Standards, as well as randomized trials with randomization, attrition, or disruption problems and regression discontinuity designs with attrition or disruption problems.
Does Not Meet Evidence Screens: Studies that provide insufficient evidence of causal validity or are not relevant to the topic being reviewed.
In addition, the standards rate other important characteristics of study design, such as intervention fidelity, outcome measures, and generalizability.


Organization: Helping America's Youth <http://guide.helpingamericasyouth.gov/default.htm>
Focus: Prevent and reduce delinquency or other youthful (up to age 20) problem behaviors (e.g., drug and alcohol use)
Ratings: Level 1; Level 2; Level 3
Ratings Defined:
Level 1: Programs have been scientifically demonstrated to prevent delinquency, or to reduce risk factors or enhance protective factors for delinquency and other child and youth problems, using a research design of the highest quality (i.e., an experimental design with random assignment of subjects).
Level 2: Programs have been scientifically demonstrated to prevent delinquency, or to reduce risk or enhance protection for delinquency and other child and youth problems, using either an experimental or quasi-experimental research design with a comparison group; the evidence suggests program effectiveness, but is not as strong as for Level 1 programs.
Level 3: Programs display a strong theoretical base and have been demonstrated to prevent delinquency and other child and youth problems, or to reduce risk factors or enhance protective factors for them, using limited research methods (with at least single-group pre- and post-treatment measurements). The evidence associated with these programs appears promising but requires confirmation using more rigorous scientific techniques.

Organization: Communities That Care, Developmental Research and Programs. Posey, R., Wong, S., Catalano, R., Hawkins, D., Dusenbury, L., & Chappell, P. (2000). Communities That Care Prevention Strategies: A Research Guide to What Works. Seattle, WA: Developmental Research and Programs, Inc. <www.preventionscience.com>
Focus: Substance abuse, delinquency, teen pregnancy, school dropout, violence, and child and youth development
Rating: Effective
Ratings Defined:
Effective: Programs that (1) address research-based risk factors for substance abuse, delinquency, teen pregnancy, school dropout, and violence; (2) increase protective factors; (3) intervene at a developmentally appropriate age; and (4) show significant effects on risk and protective factors in controlled studies or community trials.

Organization: Office of the Surgeon General <http://www.surgeongeneral.gov/library/youthviolence/chapter5/sec2.html#ScientificStandards>
Focus: Youth Violence
Ratings: Model; Promising; Does Not Work
Ratings Defined:
Model: Rigorous experimental design (experimental or quasi-experimental); significant deterrent effects on violence or serious delinquency (Level 1), or on any risk factor for violence with a large effect of .30 or greater (Level 2); replication with demonstrated effects; and sustainability of effects.
Promising: Rigorous experimental design (experimental or quasi-experimental); significant deterrent effects on violence or serious delinquency (Level 1), or on any risk factor for violence with an effect size of .10 or greater (Level 2); either replication or sustainability of effects.
Does Not Work: Rigorous experimental design (experimental or quasi-experimental); significant evidence of null or negative effects on violence or known risk factors for violence; replication, with the preponderance of evidence suggesting that the program is ineffective or harmful.


Organization: OJJDP Model Programs Guide <http://www.dsgonline.com/mpg2.5/mpg_index.htm>
Focus: Entire continuum of youth services, from prevention through sanctions to reentry
Ratings: Exemplary; Effective; Promising
Ratings Defined:
Exemplary: In general, when implemented with a high degree of fidelity, these programs demonstrate robust empirical findings using a reputable conceptual framework and an evaluation design of the highest quality (experimental).
Effective: In general, when implemented with sufficient fidelity, these programs demonstrate adequate empirical findings using a sound conceptual framework and an evaluation design of high quality (quasi-experimental).
Promising: In general, when implemented with minimal fidelity, these programs demonstrate promising (perhaps inconsistent) empirical findings using a reasonable conceptual framework and a limited evaluation design (single-group pre-/post-test) that requires causal confirmation using more appropriate experimental techniques.
The Model Programs Guide (MPG) evidence ratings are based on the evaluation literature of specific prevention and intervention programs. The overall rating is derived from four summary dimensions of program effectiveness: (1) the conceptual framework of the program; (2) program fidelity; (3) the evaluation design; and (4) the empirical evidence demonstrating the prevention or reduction of problem behavior, the reduction of risk factors related to problem behavior, or the enhancement of protective factors related to problem behavior.


Organization: Exemplary and Promising Safe, Disciplined and Drug-Free Schools Programs 2001 <http://www.ed.gov/admins/lead/safety/exemplary01/report_pg3.html>
Focus: Safe, Disciplined and Drug-Free Schools
Ratings: Exemplary; Promising
Ratings Defined:
Exemplary: Based on empirical data, the program was effective.
Promising: There is sufficient evidence to demonstrate that the program showed promise for improving student achievement.
Ratings use the following criteria:
A. Evidence of Efficacy
• Criterion 1: The program reports relevant evidence of efficacy/effectiveness based on a methodologically sound evaluation.
B. Quality of Program
• Criterion 2 (Goals): The program's goals with respect to changing behavior and/or risk and protective factors are clear and appropriate for the intended population and setting.
• Criterion 3 (Rationale): The rationale underlying the program is clearly stated, and the program's content and processes are aligned with its goals.
• Criterion 4 (Content Appropriateness): The program's content takes into consideration the characteristics of the intended population and setting (e.g., developmental stage, motivational status, language, disabilities, culture) and the needs implied by these characteristics.
• Criterion 5 (Implementation Methods): The program implementation process effectively engages the intended population.
C. Educational Significance
• Criterion 6: The application describes how the program is integrated into schools' educational missions.
D. Usefulness to Others
• Criterion 7 (Replicability): The program provides necessary information and guidance for replication in other appropriate settings.

Organization: Federal Government/Office of Management and Budget <http://www.whitehouse.gov/omb/expectmore/perform.html>
Focus: Departments from Energy to Homeland Security to the Interior, etc. For Health and Human Services, topics included, but were not limited to: health, childcare, adoption, family planning, developmental disabilities, maternal and child health, substance abuse, mental illness, homelessness, universal newborn screenings, TANF, and immigration.
Ratings: Effective; Moderately Effective; Adequate
Ratings Defined:
Effective: This is the highest rating a program can achieve. Programs rated Effective set ambitious goals, achieve results, are well-managed, and improve efficiency.
Moderately Effective: In general, a program rated Moderately Effective has set ambitious goals and is well-managed. Moderately Effective programs likely need to improve their efficiency or address other problems in the programs' design or management in order to achieve better results.
Adequate: This rating describes a program that needs to set more ambitious goals, achieve better results, improve accountability, or strengthen its management practices.



Organization: Strengthening America's Families <http://www.strengtheningfamilies.org/html/programs_1999/Review_Criteria.html>
Focus: Behavioral parent and family skills training or behavioral family therapy, family therapy, family in-home support, and comprehensive approaches; incorporates universal, selected (at-risk), and indicated (crisis) prevention efforts
Ratings: Exemplary; Model; Promising
Ratings Defined:
Exemplary: Programs that are well-implemented, are rigorously evaluated, and have consistent positive findings (integrity ratings of "A4" or "A5").
Model: Programs that have consistent integrity ratings of "A3" and "A4".
Promising: Programs that have mixed integrity ratings but demonstrate high integrity ratings in at least 3-4 categories.
Programs are rated across 14 dimensions, receiving ratings from A1 for "very low quality" to A5 for "very high quality." Dimensions include: (1) Theory; (2) Fidelity of Interventions; (3) Sampling Strategy & Implementation; (4) Attrition; (5) Measures; (6) Missing Data; (7) Data Collection; (8) Analysis; (9) Other Plausible Threats to Validity; (10) Replications; (11) Dissemination Capability; (12) Cultural & Age Appropriateness; (13) Integrity; and (14) Utility.

Organization: Ohio State CLEX <http://www.alted-mh.org/ebpd/criteria.htm>
Focus: Youth Behavior, Mental Health, Alternative Education
Ratings: Evidence-Based; Model; Promising
Ratings Defined:
Evidence Checklist: Implementable, based on effective principles, customer satisfaction, change reports, comparison group, random assignment to control group, longitudinal impact, multiple-site replication, dosage analysis, meta-analysis, expert review and consensus.
0-2 checks – Unproven Approach: There is no documentation that the approach has ever been used, or it has been implemented successfully with no evaluation.
3-5 checks – Promising Approach: The approach has been implemented and significant impact evaluations have been conducted. The data are promising, but their scientific rigor is insufficient to suggest causality; multiple factors contribute to the success of participants.
6-10 checks – Evidence-Based: There is compelling evidence of effectiveness. Evaluations attribute participant success to the program itself and provide evidence that the approach will work for others in different environments.
Model: Meets the satisfactory standards of specific criteria as an effective program.
(A simple scoring sketch follows this entry.)
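
The sketch below is a toy scorer for the Ohio State CLEX-style evidence checklist above: it counts how many checklist items a program satisfies and maps the count to the bands described in the entry. The item names mirror the checklist, but which items a given program would check off is a hypothetical assumption.

    # Illustrative only: map a count of checked items to the CLEX-style bands.
    CHECKLIST = [
        "implementable", "based on effective principles", "customer satisfaction",
        "change reports", "comparison group", "random assignment to control group",
        "longitudinal impact", "multiple-site replication", "dosage analysis",
        "meta-analysis", "expert review and consensus",
    ]

    def classify(checked_items):
        # Count only items that actually appear on the checklist.
        checks = sum(1 for item in checked_items if item in CHECKLIST)
        if checks <= 2:
            return checks, "Unproven Approach"
        if checks <= 5:
            return checks, "Promising Approach"
        return checks, "Evidence-Based"

    # Hypothetical program satisfying six checklist items.
    example = ["implementable", "based on effective principles",
               "customer satisfaction", "change reports",
               "comparison group", "longitudinal impact"]
    print(classify(example))   # -> (6, 'Evidence-Based')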


Organization: Child Welfare League of America <http://www.cwla.org/programs/r2p/levels.htm>
Focus: Child Welfare
Ratings: Exemplary Practice; Commendable Practice; Emerging Practice; Innovative Practice
Ratings Defined:
Levels of Research Rigor – Each program or practice included in the Research to Practice (R2P) initiative has been identified as effective, with successes supported by a research component. R2P has developed the following categories to describe the level of empirical support available. All programs and practices exist within an organizational context with many factors that may influence outcomes.
Exemplary Practice: Must have: a randomized study, a control group, posttests or pre- and posttests, effects sustained for at least 1 year, and multiple replications.
Commendable Practice: Must have a majority of the following characteristics: randomized or quasi-experimental study, control or comparison group, posttests or pre- and posttests, follow-up, and replication.
Emerging Practice: Must have a majority of the following characteristics: quasi-experimental study, correlational or ex post facto study, posttest only, single-group pre- and posttest, and comparison group.
Innovative Practice: Must have a majority of the following characteristics: case study, descriptive statistics only, and treatment group only.

Organization: Child Trends <http://www.childtrends.org/what_works/clarkwww/clarkwww_intro.asp>
Focus: Life course models, teen programs, school readiness, and afterschool
Ratings: What Works; What Doesn't Work; Mixed Reviews; Best Bets
Ratings Defined:
What Works – Programs with specific evidence from experimental studies that show a significant positive impact on a particular developmental outcome.
What Doesn't Work – Programs with experimental evidence that, to date, an outcome has not been positively affected by a particular program. These findings should not be construed to mean that the program can never positively affect outcomes or that it cannot be modified to affect outcomes positively.
Mixed Reviews – Programs with experimental evidence that a program has been shown to be effective in some, but not all, studies, or that it has been found to be effective for some, but not all, groups of young people.
Best Bets – Programs with promising approaches or practices that have not been tested through experimental research but that may be important from a theoretical standpoint. These include results from quasi-experimental studies, multivariate analyses, analyses of longitudinal and survey studies, nonexperimental analyses of experimental data, and wisdom from practitioners working in the field. The term "best bets" is not intended to highlight these as the recommended practices for programs, but as promising approaches worthy of consideration by program designers or policymakers.


Organization: American Community Corrections Institute <http://www.accilifeskills.com/evidence-based/research.htm>
Focus: Substance Abuse Offenders
Ratings: No specific rating levels
Ratings Defined:
Fifteen-Point Rating Criteria for Evidence-Based Programs:
1. Theory: the degree to which programs reflect clear principles about substance abuse behavior and how it can be changed.
2. Intervention Fidelity: how the program ensures consistent delivery.
3. Process Evaluation: whether the program implementation was measured.
4. Sampling Strategy and Implementation: how well the program selected its participants and how well they received it.
5. Attrition: whether the program retained participants during evaluation.
6. Outcome Measures: the relevance and quality of evaluation measures.
7. Missing Data: how the developer addressed incomplete measurements.
8. Data Collection: the manner in which data were gathered.
9. Analysis: the appropriateness and technical adequacy of data analyses.
10. Other Plausible Threats to Validity: the degree to which the evaluation considers other explanations for program effects.
11. Replications: the number of times the program has been used in the field.
12. Dissemination Capability: whether program materials are ready for implementation by others in the field.
13. Cultural and Age Appropriateness: the degree to which the program addresses different ethnic, racial, and age groups.
14. Integrity: the overall level of confidence in the scientific rigor of the evaluation.
15. Utility: the overall pattern of program findings as they inform prevention theory and practice.





Page 59 of 59

