It is commonly said that junior year is the most stressful of all the years spent in high school, and one of its most stressful aspects is the tests. At the beginning of the school year, juniors usually take the PSAT, which, according to the College Board, is “highly relevant to your future success because [it focuses] on the skills and knowledge at the heart of education.”
Recently, the College Board has changed the PSAT in an attempt to make it reflect more valuable skills for today’s college-bound youth. This change has raised some degree of controversy on its own, but more importantly, it unearths the old debate over whether PSAT scores really serve as an indicator of who’s better than whom. I believe that each version of the PSAT tests for different skills that apply to different students, and therefore that both the old and the new PSAT are unreliable indicators of “future success.”
First, I’d like to offer a few words on the differences between the old and new PSATs. I believe the change is huge and that each test reflects skills applicable to two very different types of students. The old PSAT’s writing section focused on a student’s grasp of English grammar, testing proper sentence construction and syntax. Try to find the underlined mistake, if there is one, in this sample problem (taken from Barron’s old PSAT prep book):
Marilyn and I ran as fast as we could, but we missed our train, which made us late for work. No error.
The answer is that the underlined word “which” is incorrect in this sentence. According to the answer key, “Which is a pronoun, and needs a noun for its antecedent. The only available noun is train, but that doesn’t make sense (the train didn’t make us late—missing the train made us late).”
In the critical reading section, the student read several passages (usually adapted from old works of literature) and answered comprehension questions. There were also sentence-completion questions that asked students to pick, from five sets of mostly esoteric words, the choice that best filled in the blanks. Take a look at this example from the Barron’s book:
“Just as disloyalty is the mark of the traitor, ___ is the mark of the ___”:
(A) timorousness … hero
(B) temerity … renegade
(C) avarice … philanthropist
(D) cowardice … craven
(E) vanity … flatterer
The answer is (D): cowardice is the mark of the craven.
The mathematics section had problems such as finding all the numbers that are multiples of 4 and 6 from 1–100, and it was problems like these that alienated many test-takers because of their ostensibly impractical nature. Given all this, it’s clear that the old PSAT favored a student versed in traditional English grammar, literature with advanced vocabulary, and a wide range of strategies for solving math questions. Indeed, there are such students out there, with backgrounds that fostered these skills.
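For a concrete look at that math problem (reading “multiples of 4 and 6” as numbers divisible by both, that is, by lcm(4, 6) = 12, which is my interpretation of the prompt), the “strategy” the old test rewarded collapses to a two-line check in Python:

```python
# Old-PSAT-style problem: which numbers from 1-100 are multiples of
# both 4 and 6? A number divisible by both is divisible by their
# least common multiple, lcm(4, 6) = 12 -- the "shortcut" the test rewards.
from math import lcm

multiples = [n for n in range(1, 101) if n % 4 == 0 and n % 6 == 0]
print(multiples)   # [12, 24, 36, 48, 60, 72, 84, 96]
print(lcm(4, 6))   # 12
```

The shortcut (counting multiples of 12 instead of checking every number) is exactly the kind of learned strategy the old test selected for.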
The new test, however, appears to favor different types of students entirely. It’s made up of just two sections (a combined reading and writing section and a math section), and it’s very different from the old test. When I took the new PSAT this fall, I noticed that the writing questions focused less on traditional grammatical technique. In addition, instead of the old passages on American literature, I found ones on recent scientific discoveries and historical events in the United States, while the math section had questions with less emphasis on knowing strategies and shortcuts. Here is an example math question from the College Board website:
“At a primate reserve, the mean age of all the male primates is 15 years, and the mean age of all female primates is 19 years. Which of the following must be true about the mean age m of the combined group of male and female primates at the primate reserve?”
(A) m=17
(B) m>17
(C) m<17
(D) 15<m<19
The answer is (D).
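The reasoning behind (D) is that the combined mean is a size-weighted average of the two group means, so as long as both groups are nonempty it must fall strictly between 15 and 19; exactly where depends on the group sizes, which the problem does not give. A quick sketch in Python (the group sizes below are made up for illustration):

```python
# The combined mean is a size-weighted average of the two group means,
# so it always lands strictly between 15 and 19 when both groups are
# nonempty -- hence answer (D), not a fixed value such as m = 17.
def combined_mean(n_male, n_female, male_mean=15.0, female_mean=19.0):
    total_age = n_male * male_mean + n_female * female_mean
    return total_age / (n_male + n_female)

print(combined_mean(10, 10))  # 17.0  (equal groups)
print(combined_mean(30, 10))  # 16.0  (more males pulls m toward 15)
print(combined_mean(10, 30))  # 18.0  (more females pulls m toward 19)
```

Reasoning about the answer this way, rather than recalling a memorized trick, is arguably the skill the new test is after.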
This suggests that the new PSAT favors totally different skills and abilities in each section. All this isn’t to say that one of these tests is necessarily better than the other; I only think that each one selects for a completely different type of student. Two students with the same level of capability might earn drastically different scores on any one PSAT they both take, because one of them has the skills needed for that PSAT, while the other has skills that might have been assessed better by the other version of the test. This renders the score you get somewhat arbitrary, as it depends on the version you took, something admissions officers at colleges may not factor into their decisions.
This concept brings me to my next point: is either PSAT a good indicator of success and readiness for colleges to use? In my opinion, it’s most definitely unfair because of the manner in which it favors certain students from certain backgrounds. Some researchers, such as Professor Stephen Ceci, disagree with my claim. Professor Ceci argues the following about both PSATs and SATs, since the two are so similar:
“Most researchers claim the SATs are moderately predictive of college performance. For example, the SATs predict college GPA between r = 0.4 to 0.5. These correlations are not huge but they are moderate and statistically reliable. So what does a correlation of 0.4 between SATs and college outcomes such as GPA and graduation rate mean in practical terms for colleges and universities? For simplicity’s sake, imagine that the SATs are not moderately predictive (0.40) but only weakly predictive (say, 0.20) of college graduation rates. Let’s take all IHS students’ SAT scores and do a median split, showing the top half and the bottom half of scores. What does this tell college and university admissions officers about the likelihood of admitting students who will be successful, in the sense that they actually graduate with a degree? Below are some hypothetical data to help illustrate why admission officers often embrace the SATs/ACTs, etc:
                      | Graduate from college | Drop out of college
Above-average SAT/ACT | 60 percent            | 40 percent
Below-average SAT/ACT | 40 percent            | 60 percent
As can be seen in the table, even a weak correlation of 0.20 is enough to predict which group has a better graduation rate, so imagine what a 0.40 correlation would predict.”
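To see for myself why even a weak correlation produces a table like this, here is a toy simulation of my own (hypothetical data, not Ceci’s; the variable names and the 50 percent base graduation rate are my assumptions): I give a standardized “score” and a “college success” trait a correlation of r = 0.2, median-split the scores as he describes, and compare graduation rates in each half.

```python
# Toy simulation of Ceci's point (hypothetical data, not his):
# correlate "SAT score" with a "college success" trait at r = 0.2,
# median-split the scores, and compare graduation rates per half.
import random
import statistics

random.seed(0)
rho, n = 0.2, 100_000
scores, grads = [], []
for _ in range(n):
    x = random.gauss(0, 1)                                   # standardized score
    y = rho * x + (1 - rho**2) ** 0.5 * random.gauss(0, 1)   # success trait
    scores.append(x)
    grads.append(y > 0)                                      # "graduates" if above average

median = statistics.median(scores)
top = [g for s, g in zip(scores, grads) if s > median]
bottom = [g for s, g in zip(scores, grads) if s <= median]
print(f"top half graduation rate:    {sum(top) / len(top):.0%}")
print(f"bottom half graduation rate: {sum(bottom) / len(bottom):.0%}")
```

Even at r = 0.2 the two halves separate by roughly a dozen percentage points in a run like this, which is the pattern Ceci’s hypothetical 60/40 table illustrates; a 0.4 correlation would widen the gap further.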
Obviously, Professor Ceci views the PSATs as predictors good enough to assist admissions officers, who often know little about applicants beyond grades and test scores. But he overlooks the fact that each PSAT favors a different type of student and excludes some who could also be successful. Colleges would conclude that the students who did better on one version of the PSAT will be more successful, while in reality the students who did worse might have done just as well on the other version, and consequently been admitted and graduated successfully! As an article on The Conversation, “Fulfilling Martin Luther King Jr.’s Dream: the Role for Higher Education,” puts it: “These tests have been grossly inadequate, measuring only a narrow band of potential, while missing wide swaths of our talent pool whose excellence is not readily detected through the use of such ‘blunt’ instruments.”
The bottom line is that any PSAT students take, new or old, will favor some and disadvantage others. If students were admitted to colleges and universities without PSAT scores being taken into account, we might discover that the scores are not really predictors at all, because rejected applicants would have done just as well had they been given the opportunity to show it. One size doesn’t fit all, and it’s simply unfair to judge all students by a single standardized test.