DYSPEPSIA GENERATION

We have seen the future, and it sucks.

The Extraordinary Silliness of American College Grading

4th November 2017

Read it.

Imagine if marathon runners were ranked simply by taking their average time over every course. Some courses are clearly harder than others, and runners can choose which races to enter, so a runner could always improve her ranking by refusing to run on difficult courses. Even sillier would be to rank runners by their average finish position across all races: a world-class professional could just run against high-schoolers and finish first without even trying.

Strangely, the current system for evaluating American college students manages to achieve this extraordinary level of silliness. Since students can select their own courses, and grades from all courses count equally, they are rewarded for taking easier courses and punished for taking harder ones. A first-year student taking introductory English literature gets exactly the same credit as her classmate who precociously jumps into graduate-level literary analysis.

College grade-point averages (GPAs) are not merely a matter of pride. Medical schools, law schools and consulting firms, among other popular post-graduation destinations, have strict GPA cut-offs; any student who fails to make the grade will struggle to have their application even seen by a human. From an individual student’s standpoint, it’s completely rational to optimise for GPA, even at the expense of other considerations. From the university’s point of view, though, that expense is vast.

The problem with any system of grading is: Who’s doing the grading, and on what basis?

American military schools solve this problem by separating the teaching from the testing. Tests at the end of every two- or three-week unit are standardized, and are created and graded by a special testing group. That group builds each test from individual questions that have been used on past tests and that carry historical performance values — for example, a question that 64%, 73%, or 45% of past test-takers have answered correctly. Tests are assembled from questions whose average predicted success rate equals the desired passing score. New questions are constantly being created, and their initial values are derived from their performance on tests to which they are added but on which they do not count ‘for real’.
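The assembly step described above can be sketched in a few lines of code. This is a minimal illustration, not the testing groups' actual procedure: the question bank, its success rates, and the sample-until-close heuristic are all assumptions made for the example.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical question bank: each question carries the fraction of
# past test-takers who answered it correctly (its historical value).
bank = [
    {"id": i, "p_correct": p}
    for i, p in enumerate([0.45, 0.52, 0.58, 0.64, 0.70, 0.73, 0.81, 0.88])
]

def assemble_test(bank, n_questions, target_pass_rate, tolerance=0.02,
                  max_tries=10_000):
    """Sample question sets until the mean predicted success rate
    falls within `tolerance` of the desired passing score."""
    for _ in range(max_tries):
        picks = random.sample(bank, n_questions)
        mean_p = sum(q["p_correct"] for q in picks) / n_questions
        if abs(mean_p - target_pass_rate) <= tolerance:
            return picks, mean_p
    raise ValueError("no qualifying question set found")

# Build a 4-question test calibrated to a 65% expected passing score.
test, mean_p = assemble_test(bank, n_questions=4, target_pass_rate=0.65)
```

Calibrating a new question would simply mean including it on live tests without scoring it, then recording the observed success rate as its initial `p_correct` value.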

The closest approaches to this in modern academia are the various ‘aptitude tests’, such as the SAT and the ACT, and the ‘gateway tests’ used by British-format school systems, such as the GCSE, O-level, and A-level exams. Attempts are being made to do something similar in the United States, but they are hitting rocks and shoals because (a) tests of absolute achievement allow teachers themselves to be evaluated for effectiveness, and teachers resist this to their last breath, and (b) such tests are subject to capture by political groups who want the educational system to indoctrinate students with their preferred political agenda, and who therefore write tests that reward that indoctrination.
