For me, the main safeguard against implicit bias in our evaluations of students is having well-articulated standards and observable levels of performance. Of course, it’s much harder to set these out clearly for the “non-measurable aspects” of students’ performance you refer to than for specific mathematical concepts. Vague rubrics seem more susceptible to poor inter-rater reliability, and so I wonder whether implicit bias affects our qualitative judgments about student effort, engagement, curiosity, behavior, etc. differently than it does more airtight rubrics for mastery of a particular mathematical strand. I should also say that I don’t see a problem with rubrics per se, but rather with their thoughtless use: for example, using the arithmetic mean as the default algorithm for synthesizing scores on multiple scales into a final evaluative grade (as I have been guilty of doing in the past) rather than thinking deeply about how different dimensions of learning fit together to form a more complete picture of a student’s performance.

There are different types of judgments we make as teachers, and it seems to me that the greater the degree of subjectivity required to evaluate a student’s performance along some dimension, the more cautious we should be about how much significance we assign to that evaluation. That is to say, I think we should tend to use clearly articulated standards and scales (that the students have had a hand in generating) as a key tool for equity in grading and reporting rather than falling back on algorithms as our insurance of fairness.

This is a messy issue, but I agree that it’s vital to ask ourselves about the extent to which each of our choices about grading and reporting is subject to implicit bias.

I’ll be in touch soon with a response to your post.

Resilience literally means the capacity to recover quickly from difficulty, so a critical practice for developing resilience in students is providing quality feedback on what went wrong. How can we expect students to recover quickly from the difficulty of failing at a task if they are unsure where they went wrong? After all, if they knew where they went wrong, wouldn’t they have fixed it before submitting the work for review?

The other key practice I have found to be important is letting students know what they did right. Rarely have my students missed the mark so badly that they did nothing correctly on an assignment. But I found that my time-saving grading practice of pointing out only what needed to be fixed left students feeling like failures. They became less resilient, because what student wants to be resilient when they know they will only be told how wrong they were the next time? Once I started pointing out what students did well, they were noticeably more willing to fix the things that needed fixing.

Finally, a strategy that I will be implementing this school year is a vital component of standards-based grading: ditching the letter grades and, more importantly I think, the percentages. Letter grades and percentages give students an out: the choice of accepting inadequate performance. Because they have been treated as final evaluations of performance for so long, once students see a letter grade or percentage they believe that’s it, the learning on that topic is over. Some teachers use rubrics to show students exactly what they need to do to earn high marks, but the standard 3-5 point rubric still gives students the opportunity to choose a poor but “passing” level of performance. If we want students to reach a certain level of proficiency, why not make that the only choice? The one-point rubric helps develop resilience by not letting students choose otherwise – they either get 0 or they are proficient, with unlimited attempts to get to proficient.
