
Why isn’t it working?

“Every system is perfectly designed to achieve the results it gets” (Batalden, quoted in Park, Hironaka, Carver, & Nordstrum, 2013, p. 4).

Continuous improvement is getting some attention in education contexts at the moment. Originally applied outside of education, it implies both high-frequency improvement efforts and the integration of improvement into day-to-day operations. It also stems from the belief that current outputs are a result of the current system, and that achieving different outputs requires changing that system. This means measuring certain aspects of the system and then designing interventions targeted at improving the specific components that the evidence suggests are lacking.

One common complaint in schools is that improvement efforts are, if anything, too frequent. Rather than being continuous in the sense of focusing steadily on solving a specific problem, education sometimes seems to jump from problem to problem in hopes of stumbling upon a solution. Continuous improvement requires not that we do the same work differently, but that we do different work. This is a distinction that many educational improvement efforts miss. Rather than changing the essence of what we do, we try to do the same thing in a different way. We add a particular reading program to our curriculum, or a certain instructional strategy to our teaching repertoire. When these things do not produce the desired results, we discard them and move on to the next thing to see if it helps.

These efforts often fail to produce the desired results, or lasting change, for several reasons. While they are frequent, they are not integrated into our day-to-day work. Some people may get on board and do the new things in parallel with what they are already doing. Others may choose to wait the whole thing out, which is possible precisely because the efforts are not integrated.

In order for continuous improvement to work, the system needs to change, which requires looking at processes rather than outputs. In schools, we focus on student test scores and student achievement, waiting for these to go up or down based on our efforts. We design “strategic plans” focused on achieving certain outcomes, but fail to look at and measure the inputs and processes that need to be in place before those outcomes are possible. We then discard the intervention that “did not work,” without even knowing whether we actually implemented it in a way that made our work different.

As researchers and evaluators, we often have discussions with educators about the difference between implementation evaluation and outcomes evaluation. Educators create logic models, with boxes on the left describing what will be done and boxes on the right describing what will happen, but sometimes fail to specify how much of the stuff on the left has to be done before the stuff on the right can occur. Our discussions often revolve around setting a standard for implementation fidelity: how much of the inputs and outputs need to happen before we can reasonably expect the outcomes to be present? Setting this standard is essential to making the change part of day-to-day work, as is collecting data to determine what is actually happening and using those data to inform adjustments.
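To make that idea concrete, here is a minimal sketch of what an explicit fidelity standard attached to a logic model might look like. The component names, targets, and observed values are entirely made up for illustration; the point is simply that the standard is written down and checked before anyone interprets outcomes.

```python
# Hypothetical implementation-fidelity standard for a logic model.
# Components, targets, and observed values are illustrative only.

fidelity_standard = {
    "teachers_trained_pct": 90,       # % of teachers completing training
    "students_with_devices_pct": 95,  # % of students with working devices
    "weekly_usage_minutes": 150,      # minutes per student per week on the tools
}

observed = {
    "teachers_trained_pct": 82,
    "students_with_devices_pct": 97,
    "weekly_usage_minutes": 110,
}

def find_shortfalls(standard, observed):
    """Return the components that fall short of the standard."""
    return {k: (observed[k], target) for k, target in standard.items()
            if observed[k] < target}

shortfalls = find_shortfalls(fidelity_standard, observed)
if shortfalls:
    print("Implementation incomplete; address these before judging outcomes:")
    for component, (actual, target) in shortfalls.items():
        print(f"  {component}: {actual} observed vs. {target} required")
else:
    print("Fidelity standard met; outcome data can reasonably be interpreted.")
```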

Here is an example. Smith School adopts a personalized, digital learning model. They purchase tablets for all of their students and train the teachers to use these tablets to personalize learning. It is assumed that because the teachers have been trained and the students have tablets, the model is being implemented. Smith School reviews test scores at the end of the year, finds they have not risen, and decides the digital learning model is a failure. Jones School implements a similar model and provides similar technology and training to their teachers. They also set criteria for what high-fidelity implementation looks like: what classrooms will look like and how they will operate when the model is being implemented correctly, including how many days students must be present, how much time per day they must be actively engaged with the learning tools, and how many days, weeks, or months students must be exposed to the intervention for it to affect achievement. All individuals in the organization set process goals and timelines for iterative cycles. Collaborative teams collect and analyze data to determine whether the required “dosage” is present in each classroom, and the findings are used to refine the model and its implementation and to provide additional supports at all levels: giving administrators the guidance and resources needed to support teachers, helping teachers integrate the components of the model into their instruction, and helping district-level staff build a system to support the model. At the end of the pre-determined implementation period, data are examined, and test scores for students who were exposed to the desired dose of the digital learning model are used to determine whether the intervention achieved the desired outcomes. These findings then inform future decisions.
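A rough sketch of the Jones School dosage check might look like the following. The thresholds, column names, and student records are invented for illustration, not drawn from any real program; the idea is that teams flag classrooms falling below the required dose and restrict the outcome analysis to students who actually received it.

```python
# Hypothetical dosage analysis; thresholds and records are illustrative only.
from collections import defaultdict

MIN_DAYS_PRESENT = 150            # attendance days during the implementation period
MIN_ACTIVE_MINUTES_PER_DAY = 30   # daily active engagement with the learning tools
MIN_WEEKS_EXPOSED = 28            # weeks of exposure to the intervention

students = [
    {"id": "s01", "classroom": "A", "days_present": 162, "avg_active_minutes": 41, "weeks_exposed": 30, "score": 78},
    {"id": "s02", "classroom": "A", "days_present": 120, "avg_active_minutes": 18, "weeks_exposed": 22, "score": 65},
    {"id": "s03", "classroom": "B", "days_present": 155, "avg_active_minutes": 35, "weeks_exposed": 29, "score": 81},
]

def received_dose(s):
    """True if this student met every dosage criterion."""
    return (s["days_present"] >= MIN_DAYS_PRESENT
            and s["avg_active_minutes"] >= MIN_ACTIVE_MINUTES_PER_DAY
            and s["weeks_exposed"] >= MIN_WEEKS_EXPOSED)

# Per-classroom dosage, so teams know where to add support before drawing conclusions.
by_room = defaultdict(list)
for s in students:
    by_room[s["classroom"]].append(received_dose(s))
for room, flags in sorted(by_room.items()):
    pct = 100 * sum(flags) / len(flags)
    print(f"Classroom {room}: {pct:.0f}% of students met the dosage threshold")

# Only students who received the intended dose enter the outcome analysis.
dosed_scores = [s["score"] for s in students if received_dose(s)]
print("Mean score among dosed students:", sum(dosed_scores) / len(dosed_scores))
```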

In the first example, improvement ran parallel to existing practice. The second school, taking more of a continuous improvement approach, implemented the intervention and documented that implementation so that it became an essential part of the day-to-day operations of the school for all educators and staff. They gathered the data needed to track how their “system” was changing, then used those data to adjust their approach. The focus was on changes in process, ensuring that implementation occurred before expecting the outcomes. While this seems obvious (you must add yeast before you can expect dough to rise), it is an element that is often missing from our efforts to improve student achievement.

While these are oversimplified examples (it is a blog after all), additional examples are provided in this Carnegie Foundation report. Of course, Metiri Group can help districts in all stages of continuous improvement, with our suite of measurement tools (such as TRAx) and wrap-around services, including planning and evaluation.

 

Park, S., Hironaka, S., Carver, P., & Nordstrum, L. (2013). Continuous improvement in education. Carnegie Foundation for the Advancement of Teaching. Retrieved from http://archive.carnegiefoundation.org/pdfs/elibrary/carnegie-foundation_continuous-improvement_2013.05.pdf