My district seems to have gone data crazy here as we reach the end of the year. I am completely supportive of collecting data if it is meaningful and helps to inform instruction. Lately, however, some of the data (but more specifically, the way that I am being required to report it) has me feeling really confused about what the purpose of having all of this data is beyond being able to say, “We have data.”
I’ll give you an example. We have the Fountas and Pinnell (F&P) system that we are using to benchmark where our students are at the end of the year in reading. (My thoughts about F&P can go in another post — but let me just say that I do think we need to consider what being at a “P” means in the real world, other than being a boon to the profits from books that have the letter “P” on them. It seems like a real cash cow for Heinemann…) Anyway, my understanding of the purpose of these assessments was that they are meant to inform initial groupings for next year’s teachers. (Again, whether this test should be used for placement, and whether we should be advocating homogeneous grouping at all, is a question to be answered another day.)
Despite my questions and concerns about the use of this test, I have been trying to jump through the hoops because it is something we have to do and there is clearly no fighting it. While doing the testing, it became increasingly frustrating to bump against the problems with the assessment — for example, the complete inattention to the impact of interest on reading ability (it raises a real question of ethics when I know that a student could read a book at a much higher level because it is about her favorite animal) and the inability to declare that a child read a book proficiently if he or she did not make the “acceptable” number of self-corrections, even when the errors didn’t interfere with comprehension. Beyond the actual testing, even more parameters were in place to dictate how I have to report my data, including a mandate not to record how far past the “target” a student is. This requirement has really sent me into a tailspin and left me wondering what the purpose of collecting all this data is, since clearly it would be beneficial for the next teacher to know not just that a student is beyond the expected level, but how far beyond.
Needless to say, I think that the lack of communication with teachers about the purposes for having us take all of this class time (I had to miss two half days in my room!) to collect data is a real problem. I don’t understand why more teachers aren’t involved in the discussions that must take place at the higher levels of administration about how to measure student learning. It seems to be a growing issue in education that, all across the country, mandates to teachers are delivered in a top-down manner, and that teachers perhaps aren’t considered “expert” enough to make these important decisions.
I can easily think of other types of data that would be more useful to the next teacher and would not take any more time to collect, such as detailed observations and samples of student work. Unfortunately, in this era of education reform, it seems that only quantitative data is valued, because its collection methods are more easily standardized. But I fervently believe that in education — perhaps more than in any other field — it is qualitative data that will have the most profound impact on effectively informing instruction and, thus, improving student achievement. Children and all of their complexities cannot be defined by numbers.
I am not familiar with the Fountas and Pinnell system, but certainly am with others like it. I agree with you wholeheartedly and blog frequently about this issue myself. The purpose of assessment is to inform teaching, so it needs to be concurrent with teaching. Children’s abilities can change so much over a holiday period that end-of-year results are often of little value in informing the teacher of the following year. Most of those teachers will want to do their own initial assessment anyway. The end-of-year assessment is useful in showing the growth that has taken place in children’s learning, and it can help a teacher reflect on their own practices and how these can be improved.

It is true, too, that effective readers do not necessarily correct all miscues out loud, especially if the miscues make sense. They may or may not note them, but the importance of doing so is minimal if what is read not only makes sense but is in keeping with the author’s intended meaning. Sometimes a miscue that doesn’t maintain meaning can be made and not corrected orally, yet still be noted by the reader. There is no way of knowing this, though, other than through discussion of the material read. Oral reading can place extra pressure upon readers, and sometimes discussion of a passage read silently can provide valuable information about a reader’s progress. As you say, interest and background knowledge play an important role in one’s ability to read any material.
Thanks for sharing your thoughts, Norah. I agree with you that my students’ abilities will likely change over the two-and-a-half-month summer vacation (hopefully in a positive direction!). I’ve been talking with my colleagues about this, and we hadn’t considered how oral reading itself can be very nerve-wracking for children, especially when their teacher is ticking off the words on a sheet as they read along. I know that I would make mistakes reading aloud in that situation! And as far as miscues go, I definitely make errors reading aloud to my students — they catch me all the time when I haven’t even noticed. Maybe all teachers should have to assess each other using these tests first, to see what the outcome is for us!
That’s an interesting suggestion, Nicole. I think it is important for teachers to place themselves in their students’ shoes sometimes (so to speak) so that they can understand the expectations and the responses to those expectations. It is easy to forget what it is like to sit on the other side of the desk. Enjoy the break. But you are working through the summer, aren’t you?