So, I agree with the general gist of this: (1) Charles Murray is right in principle that not all students can be above average, that differentially distributed cognitive abilities will “cap” the academic achievement of students in different places under the best of circumstances*, and that it does nobody any favors to pretend this isn’t the case, but (2) there is no very good reason for thinking we’ve come anywhere near maxing out the potential on the lower end of the distribution—either in terms of boosting intelligence in a given student or boosting achievement holding intelligence constant—even if we don’t know as much as we’d like to think about what this would entail.
But I am surprised that they both seem to agree on this point:
He gives the example that he himself cannot follow proofs in the American Journal of Mathematics — not because he knows too little, but because he is not smart enough. Charles, I’m with you. After perusing this paper on “The Equivalence Problem and Rigidity for Hypersurfaces Embedded into Hyperquadrics,” I am prepared to agree with the now-discontinued Teen-Talk Barbie: “[Abstract] math is hard.”
Well… how could they possibly know this, assuming they haven’t actually had intensive training in higher math? If I look at a page of text written in Japanese or Arabic, it will look like a lot of incomprehensible squiggles. This in itself tells me nothing about whether I’d be capable of learning the language (since people routinely do this, I’ll assume I could too with enough study) and, a fortiori, nothing about whether I’d be able to understand what’s written on a given page. This is obvious enough if we’re talking about whole foreign languages, but since technical academic papers are at least partially written in something that at least looks like English, we’re much quicker to attribute the failure of understanding to innate inability. And without becoming too sanguine about the limitless possibilities of education, we might at least hazard a macroapplication of the micropoint: If (as Murray suggests) we have not even really been studying what different schools achieve relative to the underlying ability or g of students, maybe pessimism is premature.
A promising first step might be to start tracking this. That is, measure (as best we can, and understanding that these numbers are artificially precise) and track student g over time alongside test scores. (There are obviously some serious privacy issues here, but presumably these numbers could be hashed before submission to any sort of central database in a way that would avoid creating any massive store of individually-identifiable student data.) My understanding (as a non-eduwonk) is that we typically keep stats only on aggregate scores for schools; the idea here would be to consider aptitude separately from achievement and keep more fine-grained data on both, as well as on how they covary. Instead of just asking whether schools work, we could ask (1) whether certain kinds of education actually do manage to affect g (the seven-point boost Murray mentions from some “severe interventions” would be damn significant if we could figure out how to achieve comparable effects in ordinary schooling), and (2) how well schools are doing at promoting achievement relative to the ability of their underlying population, both as a whole and at each point in the distribution. If certain schools or pedagogical methods are producing higher achievement among lower-g students but failing to spur higher-g students to do all they can, or vice versa, that seems like it would be interesting to know.
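For the privacy point above, one standard approach is a keyed hash: each district pseudonymizes student IDs with a secret key before submission, so the central database can link a student's records across years without ever holding identifiable IDs. A minimal sketch, with entirely hypothetical names, keys, and numbers:

```python
import hmac
import hashlib

# Hypothetical district-held secret key; never shared with the central database.
SECRET_KEY = b"district-42-secret"

def pseudonymize(student_id: str, key: bytes = SECRET_KEY) -> str:
    """Return a stable keyed hash (HMAC-SHA256) of a student ID.

    The same ID always maps to the same token, so longitudinal records
    can be linked, but the token cannot be reversed without the key.
    """
    return hmac.new(key, student_id.encode("utf-8"), hashlib.sha256).hexdigest()

# A record submitted centrally would carry only the token, not the ID.
record = {
    "student": pseudonymize("S-1001"),  # made-up illustrative ID
    "year": 2007,
    "g_estimate": 104,    # made-up illustrative numbers
    "achievement": 87,
}
```

A plain (unkeyed) hash would not suffice here, since student IDs are guessable and could be brute-forced; the secret key is what keeps the central store non-identifiable.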
* Let’s bracket for the moment any thoughts about the more dubious claims regarding group differences that Murray’s name might spark, and presume we’re just talking about a general distribution across individuals.