Last month, the Alliance released Left Out and Left Behind: NCLB and the American High School, a report that used research by Jay Greene of the Manhattan Institute that found a severe dropout problem in America’s high schools. Two weeks ago, the Business Roundtable released a report that reinforced Greene’s research and specifically took issue with the method used by the U.S. Department of Education to calculate graduation rates. It found that some of the more widely cited official government measures of school dropout rates in the U.S. substantially underestimate the number of youth who leave high schools without obtaining a regular high school diploma.
Conceding that evidence on high school dropout rates is mixed and often controversial, the Business Roundtable found that somewhere between 25 and 30 percent of America’s teenagers, including recent immigrants, fail to graduate from high school with a regular high school diploma. The study, The Hidden Crisis in the High School Dropout Problems of Young Adults in the U.S., also includes state-by-state estimates of dropout rates and examines the different ways that dropout rates are counted. It concludes by saying that there is a hidden dropout crisis in America’s high schools that “must be immediately acknowledged and addressed by national, state, and local policymakers if the nation is to achieve important educational and economic goals in the twenty-first century.”
A new report from the Urban Institute takes the graduation rate argument a step further and finds that the way a state calculates its graduation rate could have a dramatic effect on its ability to meet the accountability provisions of NCLB. The report, Counting High School Graduates when Graduates Count, found substantial differences among the three alternative graduation rate indicators that it examined.
Under NCLB, a state must use graduation rates as one of the indicators to determine whether its schools are making adequate yearly progress at the secondary level. While the law defines a “graduate” as someone who has received a high school diploma and excludes GED certificates, it still allows states some latitude in developing their own definition that must be approved by the U.S. Secretary of Education.
The report used three different methods to calculate graduation rates for states:
- A National Center for Education Statistics (NCES) method that compares the number of high school completers in a given year (excluding GEDs) against the number of students who dropped out during the previous three years;
- A method developed by Jay Greene of the Manhattan Institute that compares the number of graduates in a given year to the number of ninth-graders four years earlier; and
- A “Cumulative Promotion Index” developed by the Urban Institute’s researchers, which multiplies the proportion of 12th-graders who earn diplomas by the percentage of students in grades nine through 11 who are promoted to the next grade that same year.
In the end, the NCES method, favored by a majority of the states studied, pegged the graduation rate at 85 percent, while the rates calculated by Greene and the Urban Institute were much lower.
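As a rough illustration of how the three counting methods described above can diverge, here is a short Python sketch applied to a single hypothetical cohort. All enrollment figures are invented, and the function names and simplified formulas are assumptions for illustration, not the reports' exact calculations:

```python
# Illustrative sketch of the three graduation-rate methods (hypothetical data).

def nces_rate(completers, dropouts_9_to_12):
    """NCES-style 'leaver' rate: completers (excluding GEDs) divided by
    completers plus the dropouts counted over the cohort's high school years."""
    return completers / (completers + dropouts_9_to_12)

def greene_rate(graduates, ninth_graders_four_years_prior):
    """Greene method: graduates in a given year divided by the
    ninth-grade enrollment four years earlier."""
    return graduates / ninth_graders_four_years_prior

def cumulative_promotion_index(promotions, diplomas, grade12_enrollment):
    """Urban Institute CPI (simplified): the diplomas-per-12th-grader ratio
    multiplied by the grade 9->10, 10->11, and 11->12 promotion ratios,
    all measured in the same year.

    promotions: dict mapping (from_grade, to_grade) -> (promoted, enrolled)
    """
    cpi = diplomas / grade12_enrollment
    for promoted, enrolled in promotions.values():
        cpi *= promoted / enrolled
    return cpi

# A hypothetical cohort of 1,000 ninth-graders:
print(f"NCES:   {nces_rate(850, 150):.0%}")     # 85%
print(f"Greene: {greene_rate(700, 1000):.0%}")  # 70%
print(f"CPI:    {cumulative_promotion_index({(9, 10): (900, 1000), (10, 11): (880, 950), (11, 12): (860, 900)}, 800, 870):.0%}")  # 73%
```

Because the NCES method counts only officially reported dropouts, while the other two methods compare graduates against earlier or same-year enrollment, the same cohort can yield an 85 percent rate under one method and a rate in the low 70s under another.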
The Business Roundtable report is available at: http://www.brtable.org/pdf/914.pdf
The Urban Institute report is available at: http://www.urban.org/UploadedPDF/410641_NCLB.pdf
Princeton Review Tests the Testers
In a new report, the Princeton Review ranks the overall character and effectiveness of state accountability systems. Testing the Testers 2003 highlights good and bad accountability practices in the hope of improving the overall quality of state tests. Sadly, nearly 30 percent of states received overall scores of 65 or lower, and nearly 40 percent of the individual grades given to the bottom-performing twenty states were C or lower.
Based on twenty-two relevant indicators from every state and the District of Columbia, the report graded states in four weighted categories to determine an overall score. The categories were: academic alignment, test quality, sunshine (openness to ongoing improvement of policies and procedures surrounding the tests), and policy. Every state was assigned a number rank and a letter grade.
The top five states were 1) New York; 2) Massachusetts; 3) Texas; 4) North Carolina; and 5) Virginia. The bottom five states were 46) Wisconsin; 47) West Virginia; 48) South Dakota; 49) Rhode Island; and 50) Montana.
The complete report is available at: http://www.princetonreview.com/footer/testingtesters.asp