Gauging Teaching Ability
June 14, 2022 • 290 words
I think that to evaluate teachers, we need a mixed system of evaluation from above and below (i.e., from bosses and from students). Many of my friends (admittedly a small sample) hold the view that end-of-year student surveys would be effective in determining a teacher's ability. However, surveying students on how effectively they learned assumes both that all students want to learn and that they know themselves/the topic well enough to evaluate their own proficiency in it. For the former, it is obvious that not every student wants to learn whatever topic a given class covers. For the latter, the Dunning-Kruger effect may come into play [1]: students with little knowledge of a field won't know how little they know, and may rate an ineffectual teacher as very good when they, in actuality, aren't. Furthermore, student evaluations are biased by the teacher's personality - charismatic, humorous-but-meaningless lectures can fool even professionals, as seen in the Dr. Fox effect [2].

However, if we use only administrative evaluations, the system won't be flexible enough to accommodate variation in what counts as good or bad criteria. For instance, administrators would be unlikely to catch, or be motivated to fix, a bad grading policy. Furthermore, administrators will naturally be biased in one way or another by external factors (funding, time, personal belief, etc.). Given the small number of administrators, these biases can prevent the culling of bad teachers from happening at all. Still, administrative evaluation provides a less mob-driven and broader-scoped consideration of teaching ability.

Admittedly, finding a proper balance, policing the evaluation system itself, and hard external constraints like funding and teacher shortages will make balancing the two evaluators difficult.
[1] https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect
[2] https://en.wikipedia.org/wiki/Dr._Fox_effect