Data in Education — Measuring for the Sake of Measuring

My district seems to have gone data-crazy as we reach the end of the year. I am completely supportive of collecting data if it is meaningful and helps to inform instruction. Lately, however, some of the data (but more specifically, the way that I am being required to report it) has me genuinely confused about what purpose all of this data serves beyond being able to say, “We have data.”

I’ll give you an example. We use the Fountas and Pinnell (F&P) system to benchmark where our students are in reading at the end of the year. (My thoughts about F&P can go in another post — but let me just say that I do think we need to consider what being at a “P” means in the real world, other than being a boon to the publishers of books that have the letter “P” on them. It seems like a real cash cow for Heinemann…) Anyway, my understanding of the purpose of these assessments was that they are meant to inform initial groupings for next year’s teachers. (Again, whether this test should be used for placement and whether we should be advocating homogeneous grouping is a question to be answered another day.)

Despite my questions and concerns about the use of this test, I have been trying to jump through the hoops because it is something we have to do, and there is clearly no fighting it. While doing the testing, it became increasingly frustrating to bump against the problems with the assessment — for example, the complete inattention to the impact of interest on reading ability (it is a real question of ethics when I know that a student could read a book at a much higher level because it is about her favorite animal) and the inability to declare that a child read a book proficiently if he or she did not make the “acceptable” number of self-corrections, even when the errors didn’t interfere with comprehension. Beyond the actual testing, even more parameters were in place to dictate how I have to report my data, including a mandate not to record how far past the “target” a student is. This requirement has really sent me into a tailspin and left me wondering what the purpose of collecting all this data is, since clearly it would be beneficial for the next teacher to know not just that a student is beyond the expected level, but how far past it.

Needless to say, I think that the lack of communication with teachers about the purposes for having us take all of this class time (I had to miss two half days in my room!) to collect data is a real problem. I don’t understand why more teachers aren’t involved in the discussions that must take place at the higher levels of administration about how to measure student learning. It seems to be a growing issue in education, all across the country, that mandates are delivered to teachers in a top-down manner and that teachers perhaps aren’t considered “expert” enough to make these important decisions.

I can easily think of other types of data that would be more useful to the next teacher and would take no more time to collect, such as detailed observations and samples of student work. Unfortunately, in this era of education reform, it seems that only quantitative data is valued, because its collection methods are easier to standardize. But I fervently believe that in education — perhaps more than in any other field — it is qualitative data that will have the most profound impact on effectively informing instruction and thus improving student achievement. Children and all of their complexities cannot be defined by numbers.