…and yesterday we had round two. Last time, of course, our focus was on charter schools and the educational models they employ; this time, we were able to focus on a set of issues that have proved particularly tricky for policymakers to deal with: evaluation, standardization, and unionization. I thought all three groups presented compelling evidence that could be used to make informed judgments about these issues.
The first group — Natasha, Jenna, and Olivia — took us on a walk through the briar patch of evaluation, with special attention paid to grading. This all seemed perfectly natural. There are, of course, lots of different ways to evaluate and assess what students have learned, as the presenters ably pointed out, and grades are simply one tool we can use to do that. I liked the sample narrative evaluation you provided, especially because it started with a brief review of the course description that included a bit of the fictional instructor’s philosophy for the course. I hear it all the time: education coursework is too “theoretical,” not “practical” enough. But I often think that dismissing philosophy as unimportant is what leads to a lot of the misunderstanding and miscommunication that exists between students and teachers. Wouldn’t it be helpful to know, as a student, if your teacher thinks students are not held to high enough standards? Or if your teacher thinks that students do not know how to be challenged? Or that students should be graded strictly on a bell curve to ensure that “mediocre” work is not rewarded? We make “practical” choices on the basis of philosophical commitments all the time. What we don’t always do is try to interrogate our own biases, assumptions, and philosophical commitments to understand how they affect the choices we make. Your evaluation example underscored that point.
I was also struck, as I thought about this presentation, by the idea that we often dress up ideas as new and innovative when, really, they are just recycled versions of ideas already in use. This isn’t a criticism of the presentation, but of the short attention span of some school reformers and policymakers and their stubborn unwillingness to consider that the system we have in place might actually have some merit. In a sense it’s really true that there is nothing new under the sun. This narrative evaluation, wonderful as it is, still ends with a grade: in this case it’s “proficient” instead of “B” or “C” or whatever, and that’s an important step forward, since it can be easier to find consensus on the meaning of a word like proficient than it might be to agree on the meaning of a letter. But, ultimately, the grade is still there. What the narrative does is add valuable evaluative context to the grade, context that is not always made available to students.

The bottom line, to me, is that an open, honest feedback loop that goes from teacher to student, but also from student back to teacher (and probably, too, from student to student) leads to the most positive results. Being open and honest also tends to reduce the likelihood that competition might get in the way of learning, as it can encourage conversation about the claims people make. Believe me: I’ve been on the receiving end of anonymous and biting student comments that I found discouraging and unfair, only to find myself unable to start a conversation with the student to address the concerns raised. That’s a frustrating situation to be in, and one I can imagine many students have been in as well. I know I was in that situation more than once as a student. Transparency, it seems to me, is at least part of the antidote, and the key to evaluating teaching and learning fairly.
These kinds of situations underscore the importance of accountability, if not our obsession with it. The presentation we heard from Chris, Shelby, and Katie focused on the encroaching use of standardized test data as a means of accountability. And let’s be clear about this: student scores on standardized tests are not being used to provide formative, growth-oriented information to students and teachers that might be used to propel student learning in new directions. They are being used to establish a floor for student “achievement,” and, increasingly, to evaluate the effectiveness of teachers and administrators. (You might also know that they have even been used to evaluate the lunch lady and the janitor in some schools!) We’re rapidly headed to a place where teachers are held accountable not only for the performance of their own students on standardized tests — tests that are not tailored to the individual needs of students or teachers — but also, as the story at the link above notes, for the scores of students they didn’t even teach.
We do this, I think, because that social efficiency goal we have talked about all semester has supplanted the other two identified by Labaree — democratic equality and social mobility — as the pre-eminent one in our schools. We pay lip service to the other two, but efficiency rules supreme. Shelby, Chris, and Katie really brought this home by presenting a wealth of evidence that suggests that standardized tests rarely deliver on the promises their proponents make. But that’s only true if we discount efficiency as a primary goal of testing — because one thing tests are really good at doing is making indiscriminate judgments for us about who the good and bad students and teachers are and providing us with the “evidence” we need to dismiss them or move them around. Using this measuring stick, tests are quite successful at giving us what we seem to want. Chris’ argument that pollsters even go so far as to ask leading questions that poll respondents are poorly positioned to answer only illustrates this point further. What’s the point of asking people what they think of Common Core if two-thirds of them have never heard of it? And what if another 20% have received misinformation? Those questions probably answer themselves.
The conclusion drawn by the group was that we should distance ourselves from standardized test data and employ other, more varied approaches to assessment. The hard question to answer, of course, is how we do this efficiently. Teachers are simply not compensated enough to spend endless hours writing narrative assessments of student work (and even if they were, could we expect such evaluations not to become pro forma over time?), and students are not well equipped to construct meaningful portfolios in a world where letter grades and test scores reign supreme. I’m not trying to pour cold water on these ideas, especially because I think the actual evidence supporting these policy recommendations is very solid; I’m simply encouraging the group members to address these concerns as definitively as possible in their final project.
I’ll continue this thread in a second post. This one’s running a little long…