In the first post of our series dissecting the new school ranking system created by Governor John Kasich and the Republican legislature, we listed the six criteria that will be used to rank all school districts and charter schools in the state, explained why the decision to heap massive new responsibilities upon the Ohio Department of Education misses the mark (even though ODE is the correct agency for the job), and concluded by dismantling the first measure in the ranking system.  If you have not yet read that post, you can access it here.

In this second post in the series we will examine the requirements of item #2 in the list of ranking criteria.

Again, the six criteria that were adopted in the final bill are as follows [bold added for emphasis]:

  1. Performance index score for each school district, community school, and STEM school and for each separate building of a district, community school, or STEM school. For districts, schools, or buildings to which the performance index score does not apply, the superintendent of public instruction shall develop another measure of student academic performance and use that measure to include those buildings in the ranking so that all districts, schools, and buildings may be reliably compared to each other.
  2. Student performance growth from year to year, using the value-added progress dimension, if applicable, and other measures of student performance growth designated by the superintendent of public instruction for subjects and grades not covered by the value-added progress dimension;
  3. Performance measures required for career-technical education under 20 U.S.C. 2323, if applicable. If a school district is a “VEPD” or “lead district” as those terms are defined in section 3317.023 of the Revised Code, the district’s ranking shall be based on the performance of career-technical students from that district and all other districts served by that district, and such fact, including the identity of the other districts served by that district, shall be noted on the report required by division (B) of this section.
  4. Current operating expenditures per pupil;
  5. Of total current operating expenditures, percentage spent for classroom instruction as determined under standards adopted by the state board of education;
  6. Performance of, and opportunities provided to, students identified as gifted using value-added progress dimensions, if applicable, and other relevant measures as designated by the superintendent of public instruction.

The department shall rank each district, community school, and STEM school annually in accordance with the system developed under this section.

#2 – Value-Added Student Growth.  Value-added progress scores are produced through a complicated series of statistical calculations performed by an out-of-state company using the EVAAS model.  The exact calculations are generally considered proprietary and secretive, though the creator of the system, William Sanders, disputes that claim and directs people to a 1997 publication that he says explains his formula.  That controversy aside, value-added data is essentially a comparison of a student’s standardized test results from one year to the next, with the observed gain measured against the student’s expected results relative to the pool of all tested students.  When a group of student results is combined, the teacher, school, and district are assigned a value based on the performance of those students.

As this measure is still based on standardized test score results, the objections we raised about test bias in our first post still hold true.  However, to be fair to proponents of this measure, value-added results are touted as removing that bias by comparing a student’s own growth from year to year to determine whether the student has gained “a year’s worth of growth.”  As you can see in the chart below, the distribution is more even, showing that the poverty level of the district does not appear to play a significant role in these results.  Note: a gain index value greater than 2 is considered more than a single year of growth, a value between 2 and -2 is approximately one year’s growth, and a value less than -2 is less than a single year of growth.
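To make those thresholds concrete, here is a minimal sketch in Python. It is our own illustration, not the proprietary EVAAS calculation: we simply assume a gain index can be approximated as the average observed-minus-expected score difference divided by its standard error, and every name and score below is hypothetical.

```python
import statistics

def gain_index(observed, expected):
    """Illustrative (not EVAAS) gain index: mean of observed-minus-expected
    score differences, scaled by the standard error of that mean."""
    diffs = [o - e for o, e in zip(observed, expected)]
    mean_gain = statistics.mean(diffs)
    std_err = statistics.stdev(diffs) / (len(diffs) ** 0.5)
    return mean_gain / std_err if std_err else 0.0

def growth_category(index):
    """Apply the thresholds described above: >2, -2 to 2, <-2."""
    if index > 2:
        return "more than a year of growth"
    if index >= -2:
        return "approximately one year of growth"
    return "less than a year of growth"

# Hypothetical scores for one group of students (observed vs. expected).
observed = [412, 398, 430, 405, 419]
expected = [405, 400, 421, 399, 415]
idx = gain_index(observed, expected)
print(round(idx, 2), growth_category(idx))
```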

Presuming this measure is considered more accurate for districts because it accounts for external factors, we would weight this criterion more heavily than the Performance Index Score.  But giving value-added results that much emphasis exposes the major flaw in this criterion: the limited pool of contributing students.

In Ohio, value-added results are based on standardized tests that students take in back-to-back years (required to demonstrate yearly growth).  Because of this systemic limitation, only districts and schools that give the Ohio Achievement Assessments in reading and math in grades 3-8 receive these results.  So while the PI Score eliminated 65 schools from the rankings, a lack of value-added scores leaves 188 schools (19%) without data.  And as with the PI Score, charter schools make up the bulk of the schools without value-added scores (137 in total), largely because so many charters serve high school students.  In other cases, a charter school serves very few students across a large grade band, leaving too few students to produce statistically reliable value-added scores.  Traditional districts, because of their legal obligation to serve all students within their boundaries, do have value-added results, though high school performance has absolutely no bearing on this category. The inequity among districts therefore abounds with this measure, which prevents districts with high-performing secondary programs from adequately demonstrating their students’ progress.
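As a rough illustration of why so many schools drop out of this measure, the sketch below (again in Python, with hypothetical school data) checks whether a school serves any grade with both a current-year and a prior-year state test, the back-to-back testing the value-added calculation requires. The grade logic is a simplification, not ODE's actual eligibility rules.

```python
# Ohio Achievement Assessments in reading and math cover grades 3-8, so a
# growth score requires a tested grade whose prior grade is also tested
# (effectively grades 4-8). High-school-only schools drop out entirely.
TESTED_GRADES = set(range(3, 9))  # grades 3 through 8

def has_value_added_data(grades_served):
    """True if the school serves at least one grade with a prior-year test."""
    return any(g in TESTED_GRADES and (g - 1) in TESTED_GRADES
               for g in grades_served)

# Hypothetical examples
schools = {
    "K-8 elementary": range(0, 9),             # grades K-8 -> has growth data
    "dropout-recovery charter": range(9, 13),  # grades 9-12 -> no growth data
    "grade 3 only building": [3],              # no prior-year test -> no data
}
for name, grades in schools.items():
    print(name, "->", has_value_added_data(grades))
```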

And if that isn’t questionable enough, the most controversial aspect of this requirement is that the Ohio Department of Education must create value-added measures for the grades and subjects that aren’t covered by standardized testing.  How do we even begin to explain to the legislators and lobbyists who entertain such an absurd idea that writing something down on paper doesn’t make it so?

William Sanders, often referred to as the Godfather of value-added reporting, has been working on his model since the 1990s and is STILL fine-tuning the process and defending its validity.  In a 2008 article, Audrey Amrein-Beardsley, an assistant professor at Arizona State, writes that “although the EVAAS model is probably the best and most sophisticated one we have of this type,” it has “significant issues”: “insufficiency of validity studies, the difficulties with the user-friendliness of the model, the lack of external reviews, and the methodological issues with missing data, regression to the mean, and student background variables.”

The nonprofit organization Battelle for Kids can be credited with bringing the concept of value-added reporting to Ohio (to be clear, the concept itself is not inherently negative) in an effort to help school districts improve their educational practices.  A 2011 report published by Battelle for Kids and commissioned by the Bill and Melinda Gates Foundation, itself an aggressive education reformer and school accountability advocate, makes a strong case against Kasich’s brand of hurry-up-and-create-tests legislation:

While many inputs may go into growth models, the availability and use of high quality tests are essential. Growth models, in some way or another, rely on tests of students’ knowledge in a particular subject or content area.

Now connect that concept of needing “high quality tests” with the knowledge from our prior post that Kasich cut the Ohio Department of Education’s budget by 12.6%, and keep it in mind as you read the following passages from that same Gates Foundation report:

Implementing growth measures—especially for the purpose of examining educator effectiveness—is a challenge in any environment. It is important to consider the experience, expertise and capacity you will need from internal resources as well as an external model provider in order for your organization to be successful.

While you may not need economists or statisticians on staff, internal capacity must be considered. State or district staff must have the ability and will to drive implementation decisions and oversee any work with an external provider to ensure success. If undertaking the analysis internally, your state or district should still consider outside support in the way of an external review of your systems, methods and processes to ensure your models are behaving as expected and producing valid results.

As with any investment, the financial costs to implement and sustain a program are important to consider. Education spending is scrutinized, so you need to be prepared to explain the short- and long-term costs of implementing a growth model. You also need to be able to make a sound argument for spending public dollars on these measures.

To make the investment worthwhile, consider how you will ensure financial sustainability for the continued use of growth data. Remember to account for internal personnel and infrastructure, as well as available funding, to determine whether you could eventually conduct the analysis in-house, or if you need a long-term partnership with a provider.

You need to consider the time and personnel your organization has to invest to make the implementation successful.

In early June, we calculated that the cost of creating the additional tests would be over $300 million.  That obscene figure did not even include a test for every grade and every subject, as this value-added criterion that Kasich signed into law requires.  While we calculated that figure based on annual student participation in courses, the fact is that schools actually offer many more specific courses that will need, according to Battelle for Kids, the Gates Foundation, and the budget bill, high-quality tests “for subjects and grades not covered by the value-added progress dimension.”  Currently, Ohio administers 21 different standardized tests across grades 3-8 and grade 10.  By contrast, the Ohio Department of Education has approximately 500 unique courses on record that would, under this legislation, each require a high-quality test in order to measure student growth.  And there are very few, if any, existing models for these tests, and even fewer that would correlate validly with the existing Ohio tests so they could be used to compare school performance. Filling in all of those gaps not only WILL cost a lot of money, it SHOULD cost a lot of money if it is to be done properly.
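To show the scale of that arithmetic, here is a back-of-the-envelope sketch. The per-test development cost is a hypothetical placeholder, not the figure behind our June estimate; the point is only that multiplying hundreds of courses by any realistic development cost yields an enormous number.

```python
EXISTING_TESTS = 21        # current Ohio standardized tests (grades 3-8 and 10)
COURSES_ON_RECORD = 500    # approximate unique ODE course codes
COST_PER_TEST = 750_000    # hypothetical development cost per high-quality test

# Rough subtraction, ignoring any overlap between courses and existing tests.
new_tests_needed = COURSES_ON_RECORD - EXISTING_TESTS
estimated_cost = new_tests_needed * COST_PER_TEST
print(f"{new_tests_needed} new tests x ${COST_PER_TEST:,} each "
      f"= ${estimated_cost:,}")
# 479 new tests x $750,000 each = $359,250,000
```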

How many more reasons do the education experts at the Statehouse need in order to comprehend that they have adopted a law that not only can’t be implemented fairly, but can’t be implemented AT ALL?

Since we’re only one-third of the way through the six criteria, we have many more reasons to offer.

 
