Endnotes: High-Stakes Testing: Educational Barometer for Success, or False Prognosticator for Failure
These are endnotes for the article, High-Stakes Testing: Educational Barometer
for Success, or False Prognosticator for Failure by Torin Togut, Esq.
Endnotes
[1] Tyack, David B., The One Best System: A History of American Urban
Education (1974); Eisner, Elliott W., (1995). Standards for American
Schools: Help or Hindrance? Phi Delta Kappan, 759-60.
[2] Ravitch, Diane, (1995). National Standards in American Education:
A Citizen's Guide.
[3] Id.
at 39-41, 46-47.
[4] Marion, Scott F. & Sheinker, Alan (January 1999). Issues and Consequences
for State-Level Minimum Competency Testing Programs. (Wyoming Report 1),
Minneapolis, MN: University of Minnesota, National Center on Educational
Outcomes. Throughout the 1970s and 1980s, more than 40 states instituted
minimum competency testing for greater accountability. Frederiksen, N. (1994).
The influence of minimum competency tests on teaching and learning. Princeton,
NJ: Educational Testing Service, Policy Information Center; Winfield,
L.F. (1990). School competency testing reforms and student achievement.
Exploring a national perspective. Educational Evaluation and Policy
Analysis, 12, 157-73.
[5] McCall,
James M., Note And Comment: Now Pinch Hitting For Educational Reform:
Delaware's Minimum Competency Testing And The Diploma Sanction, 18
J.L. & Com. 373, 375 (1999). On July 10, 1998, the Delaware General
Assembly passed the Delaware Student Testing Program (DSTP), which tests
students in grades 3, 5, 8, and 10 in English language arts and mathematics.
If students fail the DSTP, they are not promoted to the next grade. For
high school students, the failure to pass the DSTP results in denial of
their diploma. Id. at 374.
[6] Id.
[7] National Commission on Excellence in Education, A Nation at Risk:
The Imperative for Educational Reform (1983). See Moran, Rachel F.,
Symposium: Education And The Constitution: Shaping Each Other And The
Next Century, 34 Akron L.Rev. 107, 111-12 (2000).
[8] Id. See also National Governors' Association, High School
Exit Examinations: Setting High Expectations (1998).
[9] Coleman, Arthur L., Excellence and Equity in Education: High Standards
for High Stakes Tests, 6 Va.J.Soc.Policy 81 & n.6 (Fall 1998).
[10] Heubert, J.P. and Hauser, R.M., (1998). High Stakes: Testing for
Tracking, Promotion, and Graduation (NRC Report), Washington, D.C.: National
Research Council, at pp. 163-64.
[11] NRC Report at p. 163.
[12] Heubert, Jay P., J.D., Ed.D. (2000). High-Stakes Testing: Opportunities
and Risks for Students of Color, English-Language Learners, and Students
with Disabilities, published as a chapter in Pines, M., ed., The Continuing
Challenge: Moving the Youth Agenda Forward (Policy Issues Monograph
00-02, Sar Levitan Center for Social Policy Studies). Baltimore, MD: Johns
Hopkins University Press.
[13] Based upon a survey conducted in July 2002, the following states
have mandatory exit exams: Alabama, Florida, Georgia, Indiana, Louisiana,
Maryland, Minnesota, Mississippi, Nevada, New Jersey, New Mexico, New
York, North Carolina, Ohio, South Carolina, Tennessee, Texas, and Virginia.
The following states are phasing in exit exams but are not yet withholding
diplomas: Alaska, Arizona, California, Massachusetts, Utah, and Washington.
See Chudowsky, N., Kober, N., Gayler, K.S., & Hamilton, M.
(August 2002). State High School Exit Exams - A Baseline Report, Center
on Education Policy, at pp. 6-7. In 1998, 11 of 15 states below the
Mason-Dixon line required students to pass a high school exit examination.
In 2002, 10 states had minimum-competency exams, 7 had standards-based
exams, and 2 had end-of-course exams. State High School Exit Exams
- A Baseline Report, Action Summary for State and National Leaders,
Center on Education Policy (August 2002) at p. 5. Louisiana, New Mexico,
and North Carolina link test scores to promotion or retention and more
states plan to do the same in the future. See Amrein, Audrey L., &
Berliner, David C., (March 28, 2002). High-stakes testing, uncertainty,
and student learning. Education Policy Analysis Archives, 10(8)
at 8. Arizona State University.
High-stakes testing has been concentrated in states and school districts
with substantial numbers of low-income residents and minorities. See Reardon,
Sean F., Eighth-Grade Minimum Competency Testing and Early High School
Dropout Patterns 4-5 (April 1996), a paper presented at the annual
meeting of the American Educational Research Association. Students of
color are substantially more likely to fail an exit exam on the first
try than white students. The percentage of students who do not pass exit
exams on their first try ranges from 9% to 69% in mathematics, depending
upon the state, and from 5% to 42% in English and language arts. In Minnesota,
only 59% of poor students, 40% of special education students, and 30% of
English language learners passed the exam on the first attempt. See Chudowsky,
et al., (August 2002) at p. 4. The majority of students pass the
exit exam for graduation. In Indiana and Ohio, approximately 98% of the
students who completed their course requirements for graduation passed
the exit exam and received a diploma. But these statistics are deceiving
in that they do not count students who drop out of school, repeat their
senior year, move out of the state or district, or are excluded from testing
because of their disability or language status. Id.
[14] Coleman, Arthur L., Excellence and Equity in Education: High Standards
for High Stakes Tests, 6 Va.J.Soc.Policy 81 & n.29-30 (Fall 1998).
For example, the North Carolina Board of Education instituted a policy
that third-through-eighth grade students who do not achieve a designated
score on a state administered standardized test will be retained.
[15] Elul, Hagit, Making the Grade, Public Education Reform: The Use of
Standardized Testing to Retain Students and Deny Diplomas, 30 Colum. Human
Rights L. Rev. 495 n.1 (Summer 1999).
[16] New York Times (Jan. 20, 1999) at A22; New York Times
(Feb. 5, 1997), at A20; see also Jay P. Heubert & Robert M. Hauser
eds. (1999). Commission on Behav. & Soc. Sci. & Educ., Nat'l
Res. Council, High Stakes: Testing for Tracking, Promotion, and Graduation.
[17] Id.
[18] Coleman, Arthur L., Excellence and Equity in Education: High Standards
for High Stakes Tests, 6 Va.J.Soc.Policy 81 (Fall 1998). See also
Goals 2000: Educate America Act, 20 U.S.C. § 5801 (1994); Elementary
and Secondary Education Act, 20 U.S.C. § 6301 (1994); National Council
on Education Standards and Testing, Raising Standards for American
Education: A Report to Congress, the Secretary of Education, the National
Goals Panel, and the American People (1992); National Governors' Association,
From Rhetoric to Action: State Progress in Restructuring the Education
System (1991).
[19] See Hagit Elul, Making the Grade, Public Education Reform:
The Use of Standardized Testing to Retain Students and Deny Diplomas.
30 Colum Human Rights L. Rev. 495, 498 & n. 22 (Summer, 1999).
[20] The late Senator Paul Wellstone of Minnesota voiced concerns about
high-stakes testing, saying that, "Today in education there is a
threat afoot . . . the threat of high-stakes testing being grossly abused
in the name of greater accountability, and almost always to the serious
detriment of our children." High-stakes tests: A harsh agenda for
America's children. (March 13, 2000). Remarks prepared for U.S. Senator
Paul D. Wellstone. Teachers College, Columbia University.
[21] Elul, Hagit, Making the Grade, Public Education Reform: The Use of
Standardized Testing to Retain Students and Deny Diplomas, 30 Colum.
Human Rights L. Rev. 495, 500 (Summer 1999).
[22] See, e.g., More Schools Rely on Tests, but Study Raises Doubts,
New York Times (Dec. 28, 2002); Frase-Blunt, Martha, High Stakes
Testing a Mixed Blessing for Special Students, CEC Today, Vol.
7 (September 2000); McDonnell, Lorraine, The Politics of High Stakes
Testing, CRESST/UCLA (1999); WestEd Policy Brief, the High Stakes
of High-Stakes Testing, (February 2000); Johnson, Joseph F., Treisman,
Uri, and Fuller, Ed., American Association of School Administrators: Testing
in Texas, December 2000; Porter, Andrew, American Association of School
Administrators, Doing High-Stakes Assessment Right (December 2000);
National Council of Teachers of Mathematics (NCTM), High-Stakes Testing
(November 2000); Cizek, Gregory, University of North Carolina, In Defense
of Testing Series, Unintended Consequences of High-Stakes Testing,
EducationNews.org; International Reading Association, Summary of a Position
Statement of the International Reading Association: High-Stakes Assessments
in Reading (August 1999); American Psychological Association, Appropriate
Use of High-Stakes Testing in Our Nation's Schools (May 2001); American
Educational Research Association (AERA), AERA Position Statement Concerning
High-Stakes Testing in PreK-12 Education (July 2000); The National
Forum To Accelerate Middle-Grades Reform, Research & Policy, National
Forum Policy Statement on High-Stakes Testing; National Association
of School Psychologists, National Mental Health and Education Center,
Large Scale Assessments and High Stakes Decisions: Facts, Cautions
and Guidelines (2002); Nathan, Linda, The Human Face of the High-Stakes
Testing Story. Phi Delta Kappan (April 2002); Bracey, Gerald, Ph.D.,
(December 2000). High Stakes Testing, Center for Education Research,
Analysis, and Information. An Education Policy Project Briefing Paper:
University of Wisconsin-Milwaukee, Milwaukee, WI. Note: This list
of articles on high-stakes testing is far from exhaustive.
[23] Id. at 501.
[24] Id. at 16.
[25] See, e.g., Erik V. v. Causby, 977 F. Supp. 384 (E.D.N.C. 1997);
GI Forum v. Texas Educational Agency, 87 F. Supp.2d 667 (W.D. Tex.
2000). It is difficult to prove that student accountability policies are
not a valid means of improving student achievement. This may be proved,
however, by showing that (1) standardized tests are not appropriate for
determining grade retention or diploma denial; and (2) grade retention
and diploma denial policies do not lead to academic achievement. Elul,
Hagit, Making the Grade, Public Education Reform: The Use of Standardized
Testing to Retain Students and Deny Diplomas, 30 Colum. Human Rights L.
Rev. 495, 523-25 (Summer 1999). For example, a Texas student accountability
policy uses the TAAS exam to assess whether students acquired the knowledge,
skills, or abilities deemed to be essential for graduation. If these tests
do not accurately assess the skills, knowledge, and abilities essential
for grade promotion or high school graduation, then they may be invalid
under Title VI. If these high-stakes tests embody a standard that results
in cultural, ethnic, racial, or gender bias, this may violate Title VI
and Title IX. Id. See Joint Committee on Testing Practices, Code
of Fair Testing Practices in Education, http://www.apa.org/science/FinalCode.pdf
[26] Heubert, J.P., & Hauser, R.M. (1999). High stakes: Testing
for tracking, promotion and graduation. Washington, DC: National Academy
Press; Almond, P., Quenemoen, R.F., Olsen, K, & Thurlow, M. (2000).
Gray areas of assessment systems. Synthesis Report 32. Minneapolis,
MN: University of Minnesota, Center on Educational Outcomes.
[27] Amrein, Audrey L., & Berliner, David C. (March 28, 2002). High-stakes
testing, uncertainty, and student learning. Education Policy Analysis
Archives, 10(8). Research suggests that these arguments are likely
to be false most of the time; some research shows the opposite result.
See McNeil, L.M. (2000). Contradictions of school reform. New York,
NY: Routledge; Orfield, G., & Kornhaber, M.L. (Eds.) (2001). Raising
standards or raising barriers? Inequity and high stakes testing in public
education. New York: The Century Foundation Press; Paris, S.G. (2000).
Trojan horse in the schoolyard: The hidden threats in high-stakes testing.
Issues in Education, 6(1,2), 1-16; Sacks, P. (1999). Standardized
minds: The high price of America's testing culture and what we can do
to change it. Cambridge, MA: Perseus Books; Sheldon, K.M. & Biddle,
B.J., (1998). Standards accountability and school reform: Perils and pitfalls.
Teacher College Record, 100(1), 164-180.
[28] Jaeger, R.M. (1982). The final hurdle: Minimum competency achievement
testing. In G.R. Austin & H. Garber (Eds.) The rise and fall of
national test scores (pp. 223-246), New York: Academic Press.
[29] Marion, Scott F., & Sheinker, Alan (January 1999). Issues and Consequences
of State-Level Minimum Competency Testing Programs. Wyoming Report
1 at pp. 4-5. Minneapolis, MN: University of Minnesota, National Center
on Educational Outcomes.
[30] Id.
[31] Id. at 6.
[32] Popham, W.J. (1987). The merits of measurement-driven instruction.
Phi Delta Kappan, 68, 679-682. There is evidence that an increase
in test scores is related to what is taught. Resnick, L.B., & Resnick,
D.P. (1992). Assessing the thinking curriculum: New tools for educational
reform. In B.R. Gifford & M.C. O'Connor (Eds.), Changing assessments:
Alternative views of aptitude, achievement and instruction. Boston:
Kluwer Academic. But high-stakes testing is, in itself, insufficient to
drive instruction; it must be linked to standards and content. Airasian,
P.W. (1988). Measurement driven instruction: A closer look. Educational
Measurement: Issues and Practice, 7(4), 6-11.
[33] Shepard, L.A. (1991). Psychometricians' beliefs about learning. Educational
Researcher, 4, 2-16; Mislevy, R.J. (1996). Some recent developments
in assessing student learning. Princeton, NJ: Educational Testing Service,
Center for Performance Assessment; Glaser, R. & Silver, E. (1994).
Assessment, testing, and instruction: Retrospect and prospect. Review
of Research in Education, 20, 393-419.
[34] Marion, Scott F., & Sheinker, Alan (January 1999). Issues and
Consequences of State-Level Minimum Competency Testing Programs. Wyoming
Report 1, at pp. 9-10. Minneapolis, MN: University of Minnesota, National
Center on Educational Outcomes.
[35] Id. at 11. A validity study of minimum competency testing
should examine both the positive and negative effects on curriculum and
learning. An evaluator should determine whether the effects are intended
or unintended. There should be an expectation that minimum competency
testing will increase student learning. A good evaluation should also attempt,
either statistically or by establishing a comparison group, to control
for competing variables.
[36] One study found positive effects of minimum competency testing in
reading for students in grades eight and eleven, but no difference for
fourth-grade students. Winfield, L.F. (1990). School competency testing
reforms and student achievement: Exploring a national perspective. Educational
Evaluation and Policy Analysis, 12, 157-73.
[37] Frederiksen, N. (1994). The influence of minimum competency tests
on teaching and learning. Princeton, NJ: Educational Testing Service,
Policy Information Center. Students from 10 high-stakes states improved
their math scores on routine problems compared with students from 11 low-stakes
states.
[38] Quenemoen, Rachel F., Lehr, Camilla A., Thurlow, Martha L., &
Massanari, Carol B. (February 2001). Students with Disabilities in Standards-based
Assessment and Accountability Systems: Emerging Issues, Strategies, and
Recommendations. NCEO Synthesis Report 37 at pp. 10-11. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[39] Id. at 14. Except for fourth graders, students' performance
on routine problems did not generalize to nonroutine (high-order thinking)
problems.
[40] There is no consensus about the impact of minimum competency testing
on the dropout rate. See Reardon, S.F. (April 1996). Eighth grade
minimum competency testing and early high school drop out patterns. Paper
presented at the Annual Meeting of the American Educational Research Association,
New York; Griffin, B.W. & Heidorn, M.H. (1996). An examination of
the relationship between minimum competency performance and dropping out
of high school. Educational Evaluation and Policy Analysis, 18,
243-52; Catterall, J.S. (1989) Standards and school dropouts: A national
study of tests required for high school graduation. American Journal
of Education, 98, 1-34.
[41] Restricting curriculum and instruction appears to have a greater
impact on minority students. Lomax, R.G., West, M.M., Harmon, M.C., Viator,
K.A. & Madaus, G.F. (1995). The impact of mandated standardized testing
on minority students. Journal of Negro Education, 64, 171-85; Darling-Hammond,
L. (1994). Performance-based assessment and educational equity. Harvard
Educational Review, 64, 5-30.
[42] Students and teachers may engage in unethical practices, such as cheating
or narrowing the curriculum to the content of the test and practicing with
actual exam items. Haladyna, T.M., Nolen, S.B., & Haas, N.S. (1991). Raising
standardized achievement test scores and the origins of test score pollution.
Educational Researcher, 20 (5), 2-7.
[43] Time devoted to testing and test preparation may actually reduce
time and opportunities for learning. Haladyna, T.M., Nolen, S.B., &
Haas, N.S. (1991). Raising standardized achievement test scores and the
origins of test score pollution. Educational Researcher, 20 (5),
2-7; Lomax, R.G., West, M.M., Harmon, M.C., Viator, K.A. & Madaus,
G.F. (1995). The impact of mandated standardized testing on minority students.
Journal of Negro Education, 64, 171-85.
[44] When high-stakes testing is used inappropriately, it undermines the
quality of education, reduces opportunities for some students, and relegates
some students to low-quality educational experiences or worse. Heubert,
J.P. and Hauser, R.M., (1998). High Stakes: Testing for Tracking, Promotion,
and Graduation (NRC Report), Washington, DC: National Research Council;
A Position Paper Presented to the Program Committee of the Governors'
Education Reform Commission by the (Georgia) Governor's Council on Development
Disabilities Outcomes, Outcomes and Accountability for All: Unifying
Reform in General and Special Education (September 2000), at pp. 17-18.
[45] The cost of remediation can be substantial, depending upon the number
of students retained and the cost of hiring additional staff for remediation.
In Wyoming, the cost of minimum competency testing, without remediation
or grade retention, is approximately $265,625 per year. If Wyoming hired
paraprofessionals at an additional cost of $530,000 per year, the total
cost would be $795,625 per year. Marion, Scott F., & Sheinker, Alan (January
1999). Issues and Consequences of State-Level Minimum Competency Testing
Programs. Wyoming Report 1, at pp. 15-16. Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes.
[46] Quenemoen, Rachel F., Lehr, Camilla A., Thurlow, Martha L., &
Massanari, Carol B. (2001). Students with Disabilities in Standards-based
Assessment and Accountability Systems: Emerging Issues, Strategies, and
Recommendations. NCEO Synthesis Report 37 at pp. 11-12. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[47] Thompson, S., Thurlow, M., Spicuzza, R., & Parson, L. (1999).
Participation and performance of students receiving special education
services on Minnesota's Basic Standards Tests: Reading & Math, 1996
through 1998, Minnesota Report 18. Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes.
[48] Amrein, Audrey L., & Berliner, David C. (March 28, 2002). High-stakes
testing, uncertainty, and student learning. Education Policy Analysis
Archives, 10(8).
[49] Id. at 6.
[50] National Center for Education Statistics Finance Data. http://www.nces.ed.gov
[51] Elazar, D.J. (1984). American federalism: A view from the states
(3rd ed.). New York: Harper & Row, Publishers.
[52] 2000 Census Bureau. http://www.census.gov
[53] Amrein, Audrey L., & Berliner, David C. (March 28, 2002).
[54] 1999 Census Bureau Data. http://www.census.gov/
High school exams have a disproportionately negative effect on minority students
and students from low socioeconomic backgrounds. Firestone, W.A., Camilli,
G., Yurecko, M., Monfils, L., & Mayrowetz, D. (2000). State standards,
social-fiscal context and opportunity to learn in New Jersey. Education
Policy Analysis Archives, 8(35).
[55] 2001 Kids Count Data. http://www.aecf.org/kidscount/kc2001/
[56] Education Commission of the States. (1998). Designing and implementing
standards-based accountability systems.
[57] Salvia, J., & Ysseldyke, J.E. (2001). Assessment (8th ed.).
Boston: Houghton Mifflin.
[58] Id.
[59] Almond, P., Quenemoen, R., Olsen, K., & Thurlow, M. (2000). Gray
areas of assessment systems. Synthesis Report 32. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[60] Heubert, Jay P., J.D., Ed.D., supra, at 7.
[61] Thurlow, M., House, A., Boys, C., Scott, D., & Ysseldyke, J. (2000).
State participation and accommodations policies for students with disabilities:
1999 update. Synthesis Report 33. Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes; Tindal, G., &
Fuchs, L. (1999). A summary of research on test changes: An empirical
basis for defining accommodations. Lexington, KY: Mid-South Regional
Resource Center.
[62] Quenemoen, Rachel F., Lehr, Camilla A., Thurlow, Martha L., &
Massanari, Carol B. (February 2001). Students with Disabilities in Standards-based
Assessment and Accountability Systems: Emerging Issues, Strategies, and
Recommendations. NCEO Synthesis Report 37, pp. 9-10. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[63] Id.
[64] Thurlow, M.L., Ysseldyke, J.E., & Anderson, C.L. (1996). High
school graduation requirements: What's happening for students with disabilities?
Synthesis Report 20. Minneapolis, MN: University of Minnesota,
National Center on Educational Outcomes.
[65] All states have written policies that guide the provision of assessment
accommodations for students with disabilities. Thurlow, M.L., House, A.,
Boys, C., Scott, D. & Ysseldyke, J. (2000). State participation and
accommodations policies for students with disabilities: 1999 update. Synthesis
Report 33. Minneapolis, MN: University of Minnesota, National Center
on Educational Outcomes.
[66] Thurlow, Martha, Erickson, Spicuzza, Richard, Vieburg, Kayleen, &
Ruhland, Aaron (August 1996). Accommodations for Students with Disabilities:
Guidelines from States with Graduation Rates, Minnesota Report.
Minneapolis, MN: National Center on Educational Outcomes; Thurlow, M.
(August 2001). Use of Accommodations in State Assessments: What Databases
Tell Us About Differential Levels of Use and How to Document the Use of
Accommodations. Technical Report 30 at p. 8. Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes.
[67] For a comprehensive exploration of State Policies on Testing Accommodations
for Students with Disabilities, see Reed Martin Special Education Law
& Advocacy Strategies. http://www.reedmartin.com/stateaccommodationspolicies.htm
[68] O'Neill, Paul T., Willkie Farr & Gallagher, Pass The Test or
No Diploma: High Stakes Graduation Testing and Children with Learning
Disabilities. http://www.ldonline.org/ld_indepth/assessment/oneill.html
[69] NRC Report at p. 195. It is important to note that accommodations
should be used only to level the playing field for students with disabilities.
Accommodations are intended to correct for distortions of the child's
abilities that are caused by the disability, and are unrelated to the
area being measured. For example, if a child with fine motor impairments
were permitted to dictate his answers to a writing test designed to measure
handwriting, the objective of the test would be compromised and the test
results would be invalid.
[70] Booth, Rebecca Chapman (1998). Disability Rights Advocates: List
of Appropriate School-Based Accommodations and Interventions, at pp.
3-4.
[71] Thurlow, M. (August 2001) at p. 19.
[72] Elliott, J., Bielinski, J., Thurlow, M., DeVito, P., & Hedlund, E. (1999).
Accommodations and the performance of all students on Rhode Island's performance
assessment, Rhode Island Report 1. Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes.
[73] Bielinski, J., Ysseldyke, J., Bolt, S., Friedebach, M., & Friedebach,
J. Prevalence of accommodations for students with disabilities participating
in a statewide testing program. Diagnostique.
[74] Delaware Department of Education. (1997). State summary report:
1997 Delaware writing assessment program.
[75] Minnema, Jane, Thompson, Sandy, Thurlow, Martha, Barrow, Sarah (January
2000). Unintended Consequences of the Minnesota Basic Standards Tests:
Do the Data Answer the Questions Yet? Minnesota Report 23, Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[76] Id. at 14-15.
[77] Bond, L.A., Roeber, E., & Braskamp, D. (1996). Trends in statewide
student assessment. Washington, DC: Council of Chief State School Officers
and NCREL; Shepard, L.A. (1992). Will national tests improve student learning?
CSE Technical Report 342. Los Angeles: University of California,
Center for Research on Evaluation, Standards, and Student Learning; Koretz,
D., McCaffrey, D., Klein, S., Bell, R., & Stecher, B. (1993). The
reliability of scores from the 1992 Vermont portfolio assessment program.
CSE Technical Report 355. Los Angeles, CA: National Center for
Research on Evaluation, Standards, and Student Testing (CRESST), University
of California. In Vermont, for example, the portfolio assessment faced
substantial hurdles because of the unreliability of scoring. There was
low rater reliability. Nonetheless, teachers liked portfolio assessments
and believed that portfolios were a valuable tool in gauging student progress.
[78] Quenemoen, Rachel F., Lehr, Camilla A., Thurlow, Martha L., Thompson,
Sandra J., & Bolt, Sara (June 2000). Social Promotion and Students with
Disabilities: Issues and Challenges in Developing State Policies. NCEO
Synthesis Report 34, p.5. Minneapolis, MN: University of Minnesota,
National Center on Educational Outcomes. For a comprehensive
review of all social promotion policies, see Appendix A - State Social
Promotion Policies http://education.umn.edu/nceo/OnlinePubs/Synthesis34.html.
[79] Shepard, L.A., (1991). Negative policies for dealing with diversity:
when does assessment and diagnosis turn into sorting and segregation?
In E. Hiebert (ed.) Literacy for a diverse society: Perspective, practices
and policies. New York: Teachers College Press.
[80] Shepard, L.A., & Smith, M.L. (1989). Flunking grades: Research
and policies on retention. London: Falmer Press.
[81] Alexander, K.L., Entwisle, D.R., & Dauber, S.L. (1995). On
the success of failure. Cambridge: Cambridge University Press.
[82] Policy Information Center. (1995). Dreams deferred: high school
dropouts in the United States. Princeton, NJ: Educational Testing
Service.
[83] Moore, D.R. (1999). A comment on ending social promotion: Results
from the first two years. In Designs for Change. http://www.designsforchange.org/
[84] Roderick, M., Bryk, A.S., Jacob, B.A., Easton, J.Q., & Allensworth,
E. (1999). Ending social promotion: Results from the first two years
by Consortium on Chicago School Research. http://www.consortium-chicago.org
[85] Moore, D.R., (1999), at 3.
[86] Quenemoen, Rachel F., et al. (June 2000), at p. 7.
[87] Id. at 99.
[88] Id.
[89] Id.
[90] Id. at 100.
[91] Langenfeld,
Karen, Thurlow, Martha, and Scott, Dorene (January 1997). High Stakes
Testing for Students: Unanswered Questions and Implications for Students
with Disabilities. Synthesis Report No. 26, at p. 3. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[92] Id.
[93] Amrein, Audrey L., & Berliner, David C., at 13.
[94] Id. at 15. In this study, researchers examined test scores
on the American College Testing (ACT) program, Scholastic Assessment
Test (SAT), National Assessment of Educational Progress (NAEP), and Advanced
Placement (AP) exams in high-stakes testing states. The researchers assumed
that ACT, SAT, NAEP, and AP tests are reasonable measures of the domains
that a high-stakes testing program is intended to affect. With a few
exceptions, they found little evidence that high-stakes testing policies
promoted learning as measured by increased test scores on the ACT, SAT,
NAEP, and AP tests. Id. at 18-54. See also Amrein, Audrey L., Berliner,
David C. (December 2002). The Impact of High-Stakes Tests on Student Academic
Performance: An Analysis of NAEP Results in States with High-Stakes Tests
and ACT, SAT, and AP Test Results in States with High School Graduation
Exams. Education Policy Research. The purpose of the study was
to assess whether academic achievement increased after high-stakes testing
was introduced. First, the study assessed whether academic achievement
improved since the introduction of high-stakes testing in 27 states with
high-stakes policies for grades 1-8. Second, the study assessed whether
academic achievement increased after high-stakes were attached to tests
in grades 1-8. There was inadequate evidence to support the proposition
that high-stakes tests and high school graduation exams increase student
achievement. After high school exit exams were introduced, it appeared
that academic achievement decreased. In other words, the learning for
high-stakes tests did not generalize to NAEP, ACT, SAT, and AP tests.
[95] Id. at 4-5.
[96] Id. at 5.
[97] Coleman, Arthur L., Excellence and Equity in Education: High Standards
for High Stakes Tests, 6 Va.J.Soc.Policy 81 & n. 69-71 (Fall
1998).
[98] Id. Despite this caveat, an increasing number of states and
school districts automatically deny promotion or high-school diplomas
to students who fail high-stakes tests, regardless of how well these students
perform on other measures of achievement. Heubert, Jay P., J.D., Ed.D.
(2000), at p. 7. Recent studies conducted by the NRC (1999:279) and American
Educational Research Association (2000) emphasize that educators should
always use test scores in conjunction with other relevant information
about student knowledge and skills, including grades, teacher recommendations,
and other data, when making high-stakes decisions about individual students.
Heubert, J.P. and Hauser, R.M., (1998). High Stakes: Testing for Tracking,
Promotion, and Graduation, NRC Report, Washington, D.C.: National
Research Council; American Educational Research Association (2000). AERA
Position Statement Concerning High-Stakes Testing in PreK-12 Education.
http://www.aera.net/about/policy/stakes.htm
Although
there is little dispute that there has been significant grade inflation
during the last three decades, grades are a better measure of student
motivation than standardized tests. Thus, it is important to use grades
in addition to test scores to measure academic performance. Heubert, Jay
P., J.D., Ed.D., (2000) at p. 8.
[99] Id.
For an extensive discussion of the validity and reliability of tests and
test administration, see Tests and Measurements for the Parent, Teachers,
Attorney and Advocate at https://www.wrightslaw.com/advoc/articles/tests_mesurements.htm
and https://www.wrightslaw.com/info/test.index.htm
[100] Id. at 10. See Anderson v. Banks, 520 F. Supp. 472,
489 (S.D. Ga. 1981) ("Validity in the testing field indicates whether
a test measures what it is supposed to measure.").
[101] Id.
[102] Id.
[103] Id.
[104] Id.
[105] Id.
[106] Langenfeld, Karen, Thurlow, Martha, and Scott, Dorene (January 1997).
High Stakes Testing for Students: Unanswered Questions and Implications
for Students with Disabilities. Synthesis Report No. 26 at p. 6.
Minneapolis, MN: University of Minnesota, National Center on Educational
Outcomes.
[107] Langenfeld, Karen, Thurlow, Martha, Scott, Dorene (January 1997).
High Stakes Testing for Students: Unanswered Questions and Implications
for Students with Disabilities. Synthesis Report No. 26. Minneapolis,
MN: University of Minnesota, National Center on Educational Outcomes.
[108] Id. at 7.
[109] Id.
[110] Id. at 8. Does the teaching of subject matter skills and
knowledge for a test generalize across other tests of achievement? One
study found that mathematics scores did not generalize from one test to
another, and that reading scores improved marginally. Thus, overall student
achievement did not improve in this situation. There are also questions
about whether retaining students to improve their test scores, sometimes
referred to as "academic red shirting," works.
[111] Id. at 10.
[112] See Guy, Barbara, Shin, Hyeonsook, Lee, Sun-Young, Thurlow, Martha
L. (April 1999). State Graduation Requirements for Students With and Without
Disabilities. Technical Report 24. Minneapolis, MN: University
of Minnesota, Center on Educational Outcomes. To illustrate the differences
in state diploma requirements: in Colorado, Michigan, and Pennsylvania,
local educational agencies set graduation requirements. In Iowa, minimum
course requirements are set by the state but local boards may require
additional credits. Massachusetts has statewide credit requirements for
certain areas while local educational agencies decide credit for other
content areas. Nebraska requires a total of 200 credits, but how those
credits are distributed is decided at the local level, as long as 80%
of the credits are in core curriculum subjects. The actual number of
credits required for graduation varies from state to state. Vermont requires
14.5 credits while Alabama, Florida, Hawaii, Utah, and West Virginia require
24 credits. California, Idaho, Indiana, Massachusetts, Nebraska and New
Jersey leave decisions about the number of credits required for graduation
to local educational agencies.
There
are other significant differences among states in graduation requirements
for students with disabilities. Graduation requirements are established
by the State Education Agency in Wisconsin, yet diploma requirements are
established by the local educational agency. Colorado, Iowa, Montana,
New Hampshire, Rhode Island, and Wyoming do not have state requirements
for students with disabilities. Oklahoma, Idaho, and Washington do not
specify whether their graduation requirements are established at the state
or local level. Alaska, District of Columbia, Nebraska, and Oregon offer
a certificate option for students with disabilities; and Alaska and the
District of Columbia also have an IEP diploma option for students with
disabilities. About 10% of states with course requirements allow students
with disabilities to earn a high school diploma by completing their IEPs.
Yet, few states permit the IEP team to change graduation requirements.
In these states, students with disabilities may be held to different academic
standards than nondisabled students.
See
also Mehrens, W.A., Brown, C.L., Henke, R.R., Ross, L., & McArthur,
E. (1992). Overview and inventory of state requirements for school
coursework and attendance. Washington, DC: US Department of Education
Office of Educational Research and Improvement, National Center for Education
Statistics. (ERIC Document Reproduction Service No. ED 346 619). This
study analyzed each state's required courses for graduation, the name
and description of the exit exam, if any, and provisions for graduation
for students with disabilities. In addition, this study described how
states implemented minimum competency testing programs: (a) state-developed
test with state-defined minimum score to receive a high school diploma;
(b) state-developed test with the local education agency setting passing
standards; (c) state-developed test used only to award special advanced
or honor high school diplomas; and (d) state-defined competencies required
for graduation with local educational agency determining the method of
assessment.
[113]
Thurlow, M., & Thompson, S. (1999). Diploma Options and Graduation
Policies for Students with Disabilities. Minneapolis, MN: University
of Minnesota, Center on Educational Outcomes.
[114] For a fairly recent report of each state's exit options, see Guy,
Barbara, et al. National Center on Educational Outcomes Technical
Report No. 24: State Graduation Requirements for Students With
Disabilities and Without Disabilities (NCEO Report), p. 8. As of
spring 1999, 23 states were considering revising their policies regarding
exit options available for children with disabilities. NCEO Report at
p. 12.
[115] NCEO Report, Table 12.
[116] O'Neill, Paul T., Willkie Farr & Gallagher, Pass The Test or
No Diploma: High Stakes Graduation Testing and Children with Learning
Disabilities. http://www.ldonline.org/ld_indepth/assessment/oneill.html
Nine states (Alabama, Georgia, Hawaii, Mississippi, Nevada, New Mexico,
New York, and North Carolina) with exit exams offer special diplomas or
certificates only for children with disabilities. These special diplomas
may be called an IEP Diploma, Adjusted Diploma, Occupational Diploma,
or Graduation Certificate. Thurlow, M. & Thompson, S. (1999). Diploma
Options and Graduation Policies for Students with Disabilities. (Number
10). Minneapolis, MN: University of Minnesota, Center for Educational
Outcomes.
[117] Id.
[118] Id.
[119] Hoff, David, Testing Ups and Downs Predictable, Education Week
(January 26, 2000), pp. 1, 12.
[120] Hotakainen, Rob, High Stakes Tests Under Fire in Texas: Scores
Rising But Some Students are Left Behind, Minneapolis Star-Tribune,
at 1A.
[121] Nussbaum, Debra, Does School Testing Make the Grade? The New
York Times (December 12, 1999) Section 14, NJ, p.1.
See also Schrag, P. (2000). Too Good to Be True. The American Prospect
4(11); Clarke, M., W. Haney, and G. Madaus (2000). High Stakes Testing
and High-School Completion. Boston: National Board of Educational
Testing and Public Policy 1 (3), 1-11. The strongest predictor about whether
students drop out is whether they were retained. Students who are retained
once are significantly more likely to drop out than students who are not
retained. A single retention increases the dropout rate by 40 percent;
multiple retentions increase the dropout rate to 90 percent. Hauser, R.
(1999). Should We End Social Promotion? Truth or Consequences. In Orfield,
G., and M. Kornhaber, eds., Raising Standards or Raising Barriers?
Inequality and High Stakes Testing in Education. New York: The Century
Fund. National Research Council, Heubert J., and R. Hauser, Eds. (1999).
High Stakes: Testing for Tracking, Promotion, and Graduation. Committee
on Appropriate Test Use. Washington, D.C.: National Academy Press. One
study found that schools retain 30% of all students, 50% of all African
American males, 50% of all Latino students, and 20% of all white students.
In Texas, the K-12 retention rate is 44% (55% for African Americans);
the Louisiana rate is 66%; the Wisconsin rate is 30%. The dropout rate
for these states is proportionate to the retention rate. Heubert, J. (2000)
Critical Issues in Special Education: High Stakes Assessment &
Students with Disabilities. Harvard University, Cambridge, MA. Shepard,
L.A. and M.L. Smith, Eds. (1989). Flunking Grades: Research and Policies
on Retention. London: Falmer Press. See Linda Darling-Hammond and
Beverly Falk, Using Standards and Assessments to Support Student Learning,
Phi Delta Kappan, Nov. 1997, at 191, 193 (annual retention rates
in the U.S. are roughly 15-19%, which is comparable to Haiti and Sierra
Leone).
The evidence
that grade retention does not produce academic success is overwhelming.
Requiring a student to repeat a school year is inconsistent with learning
patterns and cognitive development. A retained student may not receive
additional help, but is usually placed in the same learning environment
in which he or she failed. Retained students are often placed in remedial
courses, a practice known as tracking. Tracking, however, tends to widen the achievement gap
by placing students with the greatest academic needs with the lowest qualified
teachers. Elul, Hagit, Making the Grade, Public Education Reform: The
Use of Standardized Testing to Retain Students and Deny Diplomas, 30 Colum.
Human Rights L. Rev. 495, 530-31. (Summer 1999).
Denial of
diplomas has high social costs, including lower earnings, decreased job
opportunities, reduced community involvement, poorer health, and family
instability. Heubert, Jay P., (1999). High Stakes: Testing for Tracking,
Promotion, and Graduation. It is arguable that less drastic, more
effective alternative non-discriminatory means are available to ensure
that students meet high standards. Schools should focus on early identification
of students who experience academic difficulties in early grades. For
example, at-risk students may participate in summer school programs, after
school tutoring, smaller classes, and quality instructional programs.
Elul, Hagit, (Summer 1999) at 536.
One study
that examined the causal connection between high stakes testing and drop
out rates found indirect evidence that high-stakes testing increases drop
out rates. When students were retained because of high-stakes testing,
more students dropped out of school. High stakes testing and higher standards
for graduation increased rates of retention. There is anecdotal evidence
that students who fail high-stakes tests have doubts that they will complete
high school. In one study, drop out rates increased for students who were
doing well academically, but not for minority students and students who
exhibited poor academic performance. There is evidence that minimum competency
testing in urban areas with high concentrations of low-income and minority
students increased drop out rates. Langenfeld, Karen, Thurlow, Martha,
Scott, Dorene (January 1997). High Stakes Testing for Students: Unanswered
Questions and Implications for Students with Disabilities. Synthesis
Report No. 26 at 9-10. Minneapolis, MN: University of Minnesota,
National Center on Education Outcomes.
[122] Id.
[123] Corbett, H.D. & Wilson, B.L. (1991). Testing, reform, and
rebellion. Norwood, NJ: Ablex; Rottenberg, C. & Smith, M.L. (April
1990). Unintended effects of external testing in elementary schools. Paper
presented at the annual meeting of the American Educational Research Association,
Boston.
[124] McKinney, J.D. (1983). Performance of handicapped children on the
North Carolina minimum competency test. Exceptional Children, 49,
547-550; Thurlow, M.L., Ysseldyke, J.E., & Anderson, C.L. (May 1995).
High school graduation requirements: What's happening for students with
disabilities? Synthesis Report 20. Minneapolis, MN: University
of Minnesota, National Center on Educational Outcomes.
[125] Id.
[126] Id. A criterion-referenced test measures student proficiency
in one or more subject areas.
[127] Id.
[128] Chin-Chance, S.A., Gronna, S.S., & Jenkins, A.A. (March 1996).
Assessing special education students in a norm referenced statewide testing
program: Hawaii State Department of Education. Paper presented at the
meeting of the State Collaborative on Assessment and Student Standards
(SCASS) Assessing Special Education Students. Washington, D.C. sponsored
by the Council of Chief State School Officers (CCSSO).
[129] McKinney, J.D. (1983). Performance of handicapped children on the
North Carolina minimum competency test. Exceptional Child, 49,
547-550; Vitello, Camilli & Molenaar. (1987). Performance of special
education students on a minimum competency test. Diagnostique,
13(1), 28-35.
[130] Griffin, B.W. & Heidorn (1996). An examination of the relationship
between minimum competency test performance and dropping out of school.
Educational Evaluation and Policy Analysis, 18(3), 233-266; Reardon,
S. (April 1996). Eighth grade minimum competency testing and early high
school dropout patterns. Paper presented at the annual meeting of the
American Educational Research Association, New York.
[131] Id.
[132] Langenfeld, Karen, Thurlow, Martha, Scott, Dorene (January 1997).
High Stakes Testing for Students: Unanswered Questions and Implications
for Students with Disabilities. Synthesis Report No. 26 at 18.
Minneapolis, MN: University of Minnesota, National Center on Educational
Outcomes; Anderson, B.D. (1977). The costs of legislated minimal competency
requirements. A background paper prepared for the Minimal Competency Workshops
sponsored by the Education Commission of the States and the National Institute
of Education. (ERIC Document Reproduction Services No. ED 157 947); Potter,
D.C., & Wall, M.E. (April 1992). Higher standards for grade promotion
and graduation: Unintended benefits of reform. Paper presented at the
annual meeting of the American Educational Research Association, San Francisco;
Foshee, D.P., Davis, M.A., & Stone, M.A. (1991). Evaluating the impact
of criterion-referenced measurement on remediation decisions. Remedial
and Special Education, 12(2), 48-52; Singer, H., & Balow, I.H.
(1987). Proficiency assessment and its consequences. Riverside,
CA: University of California. (ERIC Document Reproduction Service No.
ED 290 127).
[133] Heubert, Jay P. , J.D., Ed.D., (2000). High-Stakes Testing: Opportunities
and Risks for Students of Color, English-Language Learners, and Students
with Disabilities, published as a chapter in Pines, M., ed., The Continuing
Challenge: Moving the Youth Agenda Forward (Policy Issues Monograph
00-02, Sar Levitan Center for Social Policy Studies). Baltimore, MD: Johns
Hopkins University Press, at p.2.
[134] Executive Summary: Conference on Minority Issues in Special Education,
The Civil Rights Project, Harvard University, (2001) at p.2. For further
information on this study, see http://www.civilrightsproject.harvard.edu/
See also Coleman, Arthur L., Excellence and Equity in Education: High
Standards for High Stakes Tests, Va.J.Soc.Policy 81 & n.35
(Fall 1998).
[135] Id.
[136] Linn, R. (2000), Assessments and accountability, Educational
Researcher 29(2), 4-16.
[137] Schrag, P. (2000). Too Good to Be True, The American Prospect
4(11); Viadero, D. (2000). Testing System in Texas Yet to Get Final
Grade, Education Week. Data shows continuing disparities in the
cumulative failure rate: 17.6 percent for black students, 17.4 percent
for Hispanic students, and 6.7 percent for white students; Natriello,
G., and A. Pallas (1999). The Development of High Stakes Testing.
Paper presented at the Conference on Civil Rights Implications of High-Stakes
Testing, sponsored by the Harvard Civil Rights Project, Teachers College,
and Columbia Law School.
[138] Grissmer, D. et al. (2000). Improving Student Achievement:
What State NAEP Scores Tell Us. Santa Monica, CA: Rand.
[139] Schrag, P. (2000). Too Good to Be True, The American Prospect
4(11).
[140] Allington, R. (2000). Letters: On Special Education Accommodations.
Education Week 19(35); Sack, J. (2000). Researchers Warn of Possible
Pitfalls in Spec. Ed. Testing. Education Week 19(32); Ysseldyke,
J.E., M.L. Thurlow, K.L. Langenfeld, J.R. Nelson, E. Teelucksingh, and
A. Seyfarth. (1998). Educational Results for Students with Disabilities:
What Do the Data Tell Us? Minneapolis, MN: National Center on Educational
Outcomes.
[141] American Federation of Teachers (1999). Making Standards Matter.
Washington, D.C.: American Federation of Teachers.
[142] National Research Council, Heubert J., and R. Hauser, Eds. (1999).
High Stakes: Testing for Tracking, Promotion, and Graduation. Committee
on Appropriate Test Use. Washington, D.C.: National Academy Press,
citing Stake, R. (1998, July). Some Comments on Assessments in U.S. Education.
Educational Policy Analysis Archives, 6(14). See http://epaa.asu.edu/
and http://www.nap.edu/catalog/6336.html
[143] In August 1997, the American Bar Association (ABA) Individual Rights
and Responsibilities Section submitted comments to the U.S. Department
of Education, recommending changes to The Use of Tests When Making
High-Stakes Decisions for Students. The ABA's comments, developed
by the Section's Task Force on Diversity, noted several concerns. The
ABA suggested that the Guide: (1) should contain more neutral language
that reflects both the legal obligations to be met when using high-stakes
tests and a caution about psychometric effects of such testing; (2) should
include a strongly worded caution against reliance on a standardized
test as the sole criterion for high-stakes decisions; (3) should explain
how the "disparate impact" legal standard is applied in conjunction
with principles of fair test use; (4) should acknowledge the limitations of
most standardized tests arising from content bias and norming bias for students
with limited English proficiency; (5) should articulate more accurately
the applicable Title IX legal standard to address the requirement of intentional
discrimination; and (6) should clarify the discussion of use of tests
of students with disabilities, and promotion decisions. ABA Section on
Individual Rights & Responsibilities, Fall 2000.
In 1999, the Department of Education, Office for Civil Rights (OCR) launched
an investigation into the effects of high-stakes testing on Hispanic students.
OCR investigated complaints about high school exit exams in Ohio, Nevada,
North Carolina, Texas, and Illinois. In Ohio and Texas, federal and state
officials agreed that high-stakes tests could be used, but that all students
must have access to remedial assistance, multiple chances to pass the
exam, and appropriate instruction to prepare for the exam. President's Advisory
Commission on Educational Excellence for Hispanic Americans, A Report
to the Nation: Policies and Issues on Testing Hispanic Students in the
U.S. (1999); Zehr, Mary Ann, Hispanic Students Left Out By High-Stakes
Tests, Panel Concludes, Education Week on the Web (Sep. 22, 1999);
Blair, Julie, OCR Issues Revised Guidelines on High-Stakes Testing, Education
Week on the Web (Jan. 12, 2000); Wildavsky, Ben, Achievement Testing
Gets Its Day in Court, U.S. News and World Report (Sept. 27, 1999)
at 30, 32; Chaddock, Gail Russell, Adverse Impact, Christian Science
Monitor (Nov. 30, 1999); Rossi, Rosalind, Complaints Hit School Promotion
Rule, Chicago Sun-Times (Oct. 22, 1999).
[144] Heubert, Jay P., J.D., Ed.D., (1999) at p. 10.
[145] National Mental Health and Education Center, Large Scale Assessments
and High Stakes Decisions: Facts, Cautions and Guidelines (2002),
at p. 4.
[146] Id.
[147] Heubert, Jay P., J.D., Ed.D., (1999), at p. 10.
[148] Katzman, John & Hodas, Steve, (1995). Class Action: How to
Create Accountability, Innovation, and Excellence in American Schools.
Villard Books, at 69-70.
[149] Id. at 70.
[150] Id. at 73-77.
[151] Tucker, Marc S. & Codding, Judy B. (1998). Standards for
Our Schools: How to Set Them, Measure Them, and Reach Them. Jossey-Bass
Publishers: 190, 192-93
[152] Id. at 198-99.
[153] 20 U.S.C. § 6311. This section is not intended to provide a
comprehensive analysis of the No Child Left Behind Act, but is merely
a brief overview of the Act's primary directives that will impact State
and district-wide assessments in charter and public schools.
[154] 20 U.S.C. § 6311(g); O'Neill, Paul T., Willkie Farr & Gallagher,
Pass The Test or No Diploma: High Stakes Graduation Testing and
Children with Learning Disabilities. http://www.ldonline.org/ld_indepth/assessment/oneill.html
[155] 20 U.S.C. § 6311(b)(2)(A)-(C).
[156] 20 U.S.C. § 6311(b)(1)(C),(D).
[157] 20 U.S.C. § 6311(b)(3)(c)(ix); O'Neill, Paul T., Willkie Farr
& Gallagher, Pass The Test or No Diploma: High Stakes Graduation
Testing and Children with Learning Disabilities: http://www.ldonline.org/ld_indepth/assessment/oneill.html
[158] 20 U.S.C. § 6316(b)(5)-(8), (14); 6316(c), 6316(f).
[159] 20 U.S.C. § 6311.
[160] 20 U.S.C. § 6311(g); Chudowsky, N., Kober, Nancy, Gayler, K.S.,
& Hamilton, Madlene. (August 2002). State High School Exit Exams
- A Baseline Report. Washington, DC: Center on Education Policy.
[161] See Castaneda v. Pickard, 648 F.2d 989, 1009 (5th Cir. 1981).
[162] 20 U.S.C. § 1706, 1708.
[163] Although there is a requirement that the IEP team document recommendations
about whether the child will participate in testing and about accommodations
or modifications in testing, this requirement alone does not ensure that
IEP teams make such decisions. The National Center on Educational Outcomes
(NCEO) examined instructional and assessment accommodations in two states,
Maryland and Kentucky. In Kentucky, 89% of students with IEPs received
accommodations on state tests, with approximately 83% receiving classroom
accommodations. In Maryland, 82% of students with IEPs received classroom
accommodations. Twenty percent of the IEPs included no explanation as
to why accommodations were made; another 19% did not provide adequate
explanations. The NCEO study did not examine whether assessment accommodations
and participation recommendations were carried out in practice.
There are ways to increase the IEP team's compliance with requirements
about participation recommendations and assessment accommodations. First,
training can influence teachers' participation and accommodations decisions.
Second, school-level planning is needed to anticipate, carry out, and
monitor testing procedures. Third, the IEP should be developed during
the same academic year as state testing, consider the curriculum and instructional
accommodations that the student received during the year, and involve
teachers who will implement assessment recommendations. Shriner, James
G., Destefano, L (2002). Participation and Accommodations in State Assessment:
The Role of Individualized Education Programs. Council for Exceptional
Children, Vol. 68, No. 2, pp. 147-161.
In one 1999 survey, 80% of states with exit exams recorded participation
of students with disabilities in those exams. Less than 75% of states
with exit exams had records of performance of students with disabilities,
and approximately 70% of these states disaggregated performance by category
of disability. Preliminary data suggests that states had difficulty meeting
the requirements for assessment of students with disabilities under the
IDEA Amendments of 1997. See Guy, Barbara, Shin, Hyeonsook, Lee, Sun-Young,
Thurlow, Martha L. (April 1999). State Graduation Requirements for Students
With and Without Disabilities. Technical Report 24. Minneapolis,
MN: University of Minnesota, Center on Educational Outcomes. In the 1999
National Survey of State Directors of Special Education, only 23 states
provided data about participation of students with disabilities in statewide
testing. There was considerable variation between states in alternative
assessment participation rates. See Thompson, Sandra, Thurlow, Martha.
(December 1999). 1999 State Special Education Outcomes: A Report on
State Activities at the End of the Century. Minneapolis, MN: University
of Minnesota, Center on Educational Outcomes. http://education.umn.edu/nceo/OnlinePubs/99/StateReport.htm
In
a later survey, state directors of special education reported increased
participation rates for students with disabilities in state assessments
and improved performance. This may be a result of an increase in alternate
assessment participation by students who were excluded in the past. In
Alaska, Arkansas, Connecticut, Delaware, Florida, Illinois, Montana, Nebraska,
New Jersey, New Mexico, Rhode Island, and Vermont, all students are included
in state assessments. The remaining states permit students to be excluded
for many reasons, including parental refusal, medical fragility, emotional
distress, homebound or hospitalized status, limited English proficiency, and
absence on test day. With regard to accommodations for testing, nearly 60%
of the states maintain data on the use of accommodations; 50% of states
reported increased use of accommodations. Most states use a portfolio
as their alternative assessment, but nearly 50% of states do not report
the scores of students who use non-approved accommodations. Thompson,
Sandra, Thurlow, Martha L. (June 2001). 2001 State Special Education
Outcomes: A Report on State Activities at the Beginning of a New Decade.
Minneapolis, MN: University of Minnesota, Center on Educational Outcomes.
http://education.umn.edu/nceo/OnlinePubs/2001/StateReport.html
[164] Goals
2000 Educate America Act, 20 U.S.C. § 5801.
[165] 20 U.S.C. § 5885, 5886; Guy, Barbara, et al., supra.
[166] Id.; see 20 U.S.C. § 5886(c)(2).
[167] 20 U.S.C. § 5886(c)(1)(B)(III).
[168] See 27 IDELR 138 (1997).
[169] See http://www.law.ou.edu/hist/state97.html
[170] See https://www.wrightslaw.com/law/osep/memor.assess.2000.pdf
[171] The more widely accepted view is that denial of promotion opportunity
or the opportunity to graduate at a particular time is not a constitutionally
protected property interest. Bester v. Tuscaloosa City Bd. of Educ.,
722 F.2d 1514, 1516 (11th Cir. 1984); Williams v. Austin Indep. Sch.
Dist., 796 F. Supp. 251, 253-54 (W.D. Tex. 1992); Crump v. Gilmer
Indep. Sch. Dist., 797 F. Supp. 552, 554 (E.D. Tex. 1992). But see
Brookhart v. Illinois State Bd. of Educ., 697 F.2d 179, 185 (7th
Cir. 1983) in which the Seventh Circuit concluded that the right to receive
a diploma conferred by state law based on extant academic requirements
constitutes a cognizable liberty interest under the Fourteenth Amendment
to the Constitution.
[172] For an extensive analysis of GI Forum v. Texas Education Agency,
see Moran, Rachel F., Symposium: Education And The Constitution: Shaping
Each Other And The Next Century: Sorting and Reforming: High-Stakes Testing
in Public Schools, 34 Akron L. Rev. 107, 122-128 (2000). The author
opines that the GI Forum decision and other recent cases demonstrate
that, in a conservative era, federal courts are increasingly unwilling
to probe the workings of state educational systems. Id. at 130.
The question remains whether state courts may be more sympathetic to challenges
to high-stakes testing. State court challenges to the system of financing
public schools in Kentucky and Texas may encourage similar lawsuits against
high-stakes testing. Id. at 131. See e.g. Rose v. Council for
Better Education, Inc., 790 S.W.2d 186 (Ky. 1989) holding that education
is a fundamental right in Kentucky and government must afford equal educational
opportunities to every child within the state; Edgewood Indep. Sch.
Dist. v. Kirby, 777 S.W.2d 391 (Tex. 1989) which held that the system
of state funding of education violates Texas Constitution; Brigham
v. State, 166 Vt. 246 (1997) which held that children were denied
equal educational opportunity under Vermont Constitution.
[173] See an article about this lawsuit, "High Stakes Testing - Indiana
Judge Asked for Injunction so Seniors can Graduate," at https://www.wrightslaw.com/advoc/articles/highstakes_tests-2000.htm
and Education Week (May 31, 2000), Indiana Case Focuses on Special Ed
at
http://www.edweek.org/ew/ewstory.cfm?slug=38staskes.h19
[174] Courts can review academic decisions of public educational institutions
under the substantive due process standard. Regents of the University of
Michigan v. Ewing, 474 U.S. 214, 222 (1985).
[175] Ewing, 474 U.S. at 225. A plaintiff may challenge high-stakes
testing on substantive due process grounds for lack of curricular
validity.
[176] One author predicts that the Equal Protection Clause will offer
little realistic opportunity to challenge high-stakes testing unless it
involves disadvantaged minority students. In support of this prediction,
the author argues that discriminatory intent is difficult to prove. Because
the interest in obtaining a diploma is not recognized as a fundamental
right, a facially non-discriminatory competency test can survive a minimal
amount of judicial scrutiny. McCall, James M., Note And Comment: Now Pinch
Hitting For Educational Reform: Delaware's Minimum Competency Testing
And The Diploma Sanction, 18 J.L. & Com. 373, 385 (1999).
[177] Washington v. Davis, 426 U.S. 229 (1976). A plaintiff must
prove intentional discrimination, grounded upon a discriminatory purpose
in the establishment of a practice at issue, for which there is no legitimate
educational justification, to sustain a constitutional claim under the
Fourteenth Amendment.
[178] For an in-depth analysis of Rankins v. Louisiana State Bd. of
Elementary & Secondary Educ. and related cases, see Johnson, Lloyd,
E., The Louisiana Graduation Exit Exam: Permissible Discrimination, 22
S.U.L. Rev. 184 (1995).
[179] Georgia State Conf. of Branches of NAACP, 775 F.2d at 1417.
The test under Title VI is whether the challenged practice has sufficiently
adverse racial impact, and if so, whether the practice is adequately justified.
Wards Cove Packing Co. v. Atonio, 490 U.S. 642, 656-57 (1989).
[180] The court found that material on the test did not appear on the
students' IEPs and programs were not developed with the purpose of passing
the test. Therefore, the students were awarded an extended period to prepare
before taking the test. Brookhart, 697 F.2d at 187.
[181] Id. The student was offered other accommodations and alternative
testing assessment.
[182] The hearing officer found that the student was able to demonstrate his
language abilities without any accommodations or a reader. The student earned all
high school credits without a reader and passed the reading portion of
the Alabama High School Graduation Examination. Thus, the student was
not severely deficient in reading skills. Instead, the hearing officer
opined that the student had not mastered the skills necessary to pass the language
portion of the exam. If the student had exhibited a severe deficiency
in reading and language arts, and his IEP had required a reader for tests
and assessments, would the hearing officer have ruled differently? What effect,
if any, would the IDEA Amendments of 1997 have on the hearing officer's
reasoning and the outcome of this decision?
[183] The hearing officer found that the IEP team recommended that the student
be given a reading accommodation on the math portion as well as the language
portion of the high school exit exam. The hearing officer ruled invalid
the Alabama Department of Education's policy requiring that, in order
to receive a reading accommodation in a subject, a provision for oral
testing in that subject must have been made for the student during the
period of the student's last two IEPs. This policy conflicted with the
IDEA and was inconsistent with classroom exams as opposed to exit exams.
The student was disadvantaged on the exit exam because 30% of the exam
required reading skills and 70% involved computational skills.
[184] For a comprehensive examination of learning disabilities, see "How
School Systems Are Failing to Properly Identify, Evaluate, and Provide
A Free Appropriate Public Education to Children With Learning Disabilities
and What We Can Do About It" by Torin Togut, Esq. Presented at
4th Annual COPAA Conference (March 8-11, 2001), Pages BR3-3 - BR3-30).
[185] Sills, Caryl K., (1995). Success for Learning Disabled Writers
Across the Curriculum, College Teaching 43(2), 66.
[186] Disability Rights Advocates, Do No Harm - High Stakes Testing
and Students with Learning Disabilities (2001), at pp. 2-4.
[187] Id. at 6.
[188] Id. at 8-10. It is important to distinguish between testing
accommodations and testing modifications. Testing accommodations change
how test material is presented or how a student responds to a test. Testing
accommodations may change the setting, scheduling, or response time. These
changes do not substantially alter the test's level, content, or performance
criteria but give the student a level playing field and an opportunity
to demonstrate his or her knowledge. Testing modifications, on the other
hand, substantially change what a test measures or the difficulty of the test.
[189] Id. at 11-12. See also Thirteen Core Principles to Ensure
Fair Treatment of All Students, Including Those with Learning Disabilities,
with Regard to High Stakes Assessments, at 15.
[190] See Students with Learning Disabilities and State of Oregon
Settle Class Action Suit Over High Stakes Assessments in Public Schools,
Wrightslaw, https://www.wrightslaw.com/law/news/OR_settlement_dyslexia.htm
[191] For an article regarding the district court's opinion in Chapman
v. California Dept. Ed. see https://www.wrightslaw.com/news/2002/ca.injunction.accoms.htm
[192] See "Class Action Suit Filed Against Alaska's High-Stakes
Exit Exam" on Wrightslaw, https://www.wrightslaw.com/news/04/high.stakes.ak.htm;
also, "Alaska Students with Disabilities Can Graduate with Diploma
in 2004 Without Passing Exit Exam" on Wrightslaw at https://www.wrightslaw.com/news/04/high.stakes.ak.0410.htm
Copyright © 1999-2024, Peter W.
D. Wright and Pamela Darr Wright.
All rights reserved.