Ever since the Ohio Teacher Evaluation System (OTES) was adopted into law in 2011, the Ohio General Assembly has been tinkering with the various components in an effort to “improve” its implementation.  Each successive year has seen many significant changes to the overall OTES framework, while additional legislation has had an impact on the specific components — most notably changes to the state standardized tests.

This year, however, the proposed changes to both the overall framework and the testing components that make up nearly 50% of most teachers’ final ratings go well beyond mere “corrections” to the evaluation system.  With numerous bills again surfacing that question the entire testing process, it is time for the General Assembly to step back and look at the big picture: the entire teacher evaluation process is being questioned from multiple angles and across party lines.

Simply put, it’s time the Ohio Teacher Evaluation System was put on hold.  OTES needs to be withdrawn indefinitely so that the General Assembly can make the sweeping overhaul that so many of its members are both directly and indirectly calling for.  Instead of continuing to build the plane as they’re flying it, with Ohio’s educators on board as unwilling passengers, Ohio’s legislators need to place a moratorium on the current OTES and then either relaunch it in a few years, after the problems have been worked out, or make an informed decision to scrap the process entirely.

I don’t think it’s a stretch to say that everyone — superintendents, principals, and teachers alike — was frustrated with the original evaluation system when it first became law and remains frustrated as legislators make continual changes that are increasingly difficult to keep track of, let alone enact with fidelity.  Compounding the constant changes is their timing.  When the legislature enacts changes that don’t take effect until September or October, those changes still require significant interpretation by the Ohio Department of Education.  This process of interpreting the intent of the law can take months to sort out, as the language frequently results in either conflicting components or gaps that leave ODE grasping at straws.  ODE then has the unenviable task of trying to communicate these changes to districts in the middle of the school year, well after the process — supposedly intended to improve teaching and learning — has already begun!

Case in point: Last year, House Bill 362 was passed out of the General Assembly on June 3, sent to the Governor on June 10, and signed by Kasich on June 12.  The law then did not become official until September 11, 2014.  HB362 contained a “minor” change to the Student Growth Measures portion of the evaluation system that actually had a major impact because it failed to take into account the evaluation system as it was currently implemented.  The new language from HB362 read as follows:

ORC 3319.111  (2)(a) The board may elect, by adoption of a resolution, to evaluate each teacher who received a rating of accomplished on the teacher’s most recent evaluation conducted under this section once every ~~two~~ three school years, so long as the teacher’s student academic growth measure, for the most recent school year for which data is available, is average or higher, as determined by the department of education. ~~In that case, the biennial evaluation~~

(b) The board may evaluate each teacher who received a rating of skilled on the teacher’s most recent evaluation conducted under this section once every two years, so long as the teacher’s student academic growth measure, for the most recent school year for which data is available, is average or higher, as determined by the department of education.

The problem the legislature created, while trying to allow districts greater local flexibility in evaluating “better” teachers on a less frequent cycle, was the use of the phrase “average or higher”.  The existing framework did not use the 5-point scale that the legislature apparently assumed was being used to calculate a student growth rating; instead, it followed the previous version, which converted student growth into only three categories – Above, Expected, and Below.  Under this model, Above equaled a rating of 5, Below was a 1, and Expected represented a rating of 2, 3, or 4.  ODE had already fully developed this framework and had spent countless hours and dollars building the calculations into the state reporting system — eTPES.

Furthermore, ODE had spent two years developing a plethora of training documents and user guides while training educators across the state on implementing this framework and scoring system so that teachers knew the rules and could begin to understand the evaluation process being forced upon them.

While this change may have been well-intentioned, it also put ODE under the gun to try to revamp the entire system and communicate it out to districts — based on a law that didn’t even become official until September 11 of the school year in which it was to take full effect.  ODE subsequently had to redo the Above, Expected, Below framework and ended up creating a new, 600-point process for calculating Student Growth Measures for teachers, effectively changing the rules in the middle of the game.  The law did not, however, remove the language of Above, Expected, or Below as it pertains to whether a teacher begins the year with an Improvement Plan or a Growth Plan, so ODE had to incorporate both.

The result is that in year 1, a teacher who received an SGM rating of 1 was rated “Below”, meaning they had to complete an Improvement Plan.  This year, because of the change in law, teachers who received a rating of either 1 or 2 are considered “Below”.
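To make the rule change concrete, here is a minimal sketch of the two conversions side by side. The year-1 thresholds (5 = Above, 1 = Below, 2–4 = Expected) and the new treatment of ratings 1 and 2 come from the description above; how the revised rules treat ratings of 3–5 is not spelled out here, so that part of the second function is an assumption.

```python
def sgm_category_year1(rating: int) -> str:
    """Original three-category conversion: 5 = Above, 1 = Below, 2-4 = Expected."""
    if rating == 5:
        return "Above"
    if rating == 1:
        return "Below"
    return "Expected"


def sgm_category_after_hb362(rating: int) -> str:
    """Post-HB362 interpretation: a rating of 1 or 2 is now 'Below'.
    (Assumption: 5 still maps to Above and 3-4 to Expected.)"""
    if rating <= 2:
        return "Below"
    if rating == 5:
        return "Above"
    return "Expected"


# Compare the two rule sets across the full 1-5 scale.
for r in range(1, 6):
    print(r, sgm_category_year1(r), sgm_category_after_hb362(r))
```

Note how a teacher with a rating of 2 was "Expected" (no Improvement Plan) in year 1 but falls to "Below" under the revised rules, which is exactly the mid-game rule change described above.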

Districts, especially the principals who are largely responsible for conducting evaluations, are supposed to be able to keep track of all of these changes on the fly — typically by simply checking the ODE website for updates, which occur at random times and without much fanfare — lest the district or school be audited by ODE later and found in violation of state law.

An additional practice directly affected by this supposedly minor change from HB362 involves teachers who were rated Accomplished or Skilled but work with extremely small student populations (e.g., students with multiple disabilities).  Under the Ohio Department of Education’s Business Rules for Student Growth Measures, these teachers do not qualify for a student growth rating because their class size is not considered statistically valid, so their entire OTES rating is based on the Performance Rating received from an evaluator.  In this manner, many of these hard-working teachers earned a rating of Accomplished in the 2013-14 school year and became eligible for an evaluation once every three years, until…

On January 27, 2015, halfway through the school year, ODE released its revised Business Rules for Student Growth Measures document, which addressed yet another problem created by the legislature’s simple change to “average or higher”.  While the teachers described in the previous paragraph are still not eligible for an individual Student Growth Measures rating, neither are they allowed an “off” year from full evaluations, regardless of the rating assigned by their evaluator.  From the Business Rules:

Teachers without student growth data

In some limited cases, teachers may not have student growth data and will utilize only the teacher performance measures.

In some very limited cases based on the rules above, a teacher may not have student growth data. For example, Mr. Diaz teaches a multiple handicapped (MH) unit with only four students. These four students are the only students that Mr. Diaz provides instruction for. Since there is not enough data to reach the minimum six student requirement, Mr. Diaz would not have student growth measures for these four students.

Districts may decide to include a shared attribution measure (such as building level Value-Added or a building SLO) as a district measure that could apply to these teachers as their student growth measure. This is a local decision.

In these rare situations where a teacher does not have student growth measures, the teacher performance aspects of the OTES framework would represent his/her summative evaluation. This must be approved by the building principal and superintendent and noted as such in the eTPES system.

Teachers without student growth measure data are NOT eligible to participate in less frequent evaluation cycles (Accomplished every 3 years, Skilled every 2 years) per Sub. House Bill 362.

There is actually one way that these teachers could receive a Student Growth Measure rating – through the use of Shared Attribution.  Shared Attribution is the process of assigning a school’s or district’s overall value-added rating to individual teachers instead of an individual rating.  Teachers in this situation, whom ODE specifies may be “exempt” from student growth, would have 50% of their evaluation rating based on the value-added results of hundreds or thousands of students with whom they never interact.  In most cases, the students of these teachers take Alternate Assessments, never even participating in the tests that combine to create the value-added scores in the first place.

Remember, this was just one small, well-intentioned, yet misinformed “tweak” that the General Assembly made to OTES.  There are other gaps in the framework involving the use of value-added ratings for individual teachers: those ratings are a year behind, and teachers may have already changed grades or subjects in an effort to make necessary improvements in their practice.

The final straw in calling for drawing back the Ohio Teacher Evaluation System — into a pilot phase at a minimum — is the overall sense that members of the General Assembly (prodded by their constituents) have lost faith in the standardized tests and the testing process.  With bills popping up in increasing numbers calling for sweeping changes to everything from the duration of tests to the complete removal of the Common Core and PARCC tests, now is the time to stop trying to implement an OTES that relies on these supposedly reliable statistical methods.  In fact, for the 2015-16 school year, changes to the standardized tests will significantly delay the delivery of value-added results, a major component of a teacher’s final overall rating, to districts, schools, and teachers.  This delay will essentially render the OTES ratings worthless for their supposed purpose of addressing a teacher’s individual areas needing improvement.

Even State Superintendent Dick Ross has called for major changes to the Student Growth Measures portion of the evaluation process, recommending a dramatic increase in the use of Shared Attribution, a recommendation that directly contradicts prior ODE guidance about how districts should, and should not, use shared attribution for teachers (cautiously, and never for more than 25% of an individual teacher’s student growth).

Simply put, when so many components of the process are being called into question, and when an increasing number of legislators and their constituents are questioning the integrity and legitimacy of the measures that form the foundation for a full 50% of a teacher’s final rating, it’s time to halt the process.  The constant modifications are causing educators across Ohio to lose any faith they might ever have had in the system.  All results from the past two years of full implementation should be set aside and treated as part of a large-scale pilot of the system, before teachers truly begin to suffer the negative effects of a severely flawed program.

It’s time for the Ohio General Assembly to hit the brakes on the Ohio Teacher Evaluation System so that it may itself be evaluated.