The College Board reports this week that SAT scores fell dramatically, “to the lowest level in four decades on the college-entrance test,” with only 43% of test takers demonstrating SAT-style college readiness.
“College Board officials and other experts noted that the declining scores could have much to do with the testing pool, which is growing and becoming more diverse. Last year, 45% of students who took the exam were members of a minority group, up from 38% of the 1.56 million who took it in 2008. And 28% of test takers reported that English wasn’t exclusively their first language, up from 24% in 2008.”
But is the Scholastic Aptitude Test the best way to measure learning? In “How Do We Measure What Really Counts in the Classroom,” CoExist reporter Cathy Davidson shares a little of the history of the SAT, which began in 1914 as an effort by a Kansas doctoral student named Frederick Kelly, who was looking for a test to measure “lower order thinking” in order to evaluate the basic academic skills of the immigrants and women pouring into the workforce during WWI. In 1926, a version of the test adopted by the College Entrance Examination Board became the SAT.
While it’s not a new question, it’s a question gaining more interest in an age of informal learning: Do the SAT and similar “standardized” tests accurately measure knowledge and college readiness? With more schools opting out of standard pre-admissions requirements for incoming students, academicians and workforce administrators are looking for better, more realistic assessment tools.
As Davidson points out in her CoExist article, “Americans use standardized tests earlier and more often than any other nation on the planet. Research shows that high stakes, after-the-fact or end of grade, multiple choice testing has little impact on learning motivation and even little real quantitative relationship to content mastery.”
That’s worth repeating: Research has shown that multiple choice testing has “little real quantitative relationship to content mastery.”
So what are some options?
Kyle Peck, of Penn State, and Khusro Kidwai, of the University of Southern Maine, recently demoed their free, nonprofit eRubric assessment tool at Duke University.
Davidson observes, “… eRubric allows anyone evaluating others the ability to customize the categories to be evaluated, to weight the individual categories differently on different assignments, and could be used in informal or formal education, from kindergarten through college and beyond, and with applications for any Human Resources department at any corporation too.”
Now THAT’s learning assessment: customizing assessment to desired outcomes for a true measure of learning success!
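The idea Davidson describes, custom categories with weights that vary by assignment, can be sketched as a tiny scoring model. All names and weights below are hypothetical illustrations, not eRubric’s actual design or data model:

```python
# A minimal sketch of weighted-rubric scoring, assuming hypothetical
# category names and weights (not eRubric's actual implementation).

def rubric_score(scores, weights):
    """Weighted average of per-category scores (each 0-100)."""
    total_weight = sum(weights.values())
    return sum(scores[cat] * weights[cat] for cat in weights) / total_weight

# An essay assignment might weight "argument" heavily...
essay_weights = {"argument": 0.5, "evidence": 0.3, "mechanics": 0.2}
# ...while a lab report shifts the weight toward "evidence".
lab_weights = {"argument": 0.2, "evidence": 0.6, "mechanics": 0.2}

# The same student work scores differently under each assignment's weights.
student = {"argument": 90, "evidence": 70, "mechanics": 80}
print(rubric_score(student, essay_weights))  # argument-heavy rubric
print(rubric_score(student, lab_weights))    # evidence-heavy rubric
```

The point of the design is that the evaluator, not the test publisher, decides what counts and by how much for each assignment.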
Read more at:
- Badging Systems
- Open Badges and Assessment
- Mozilla Open Badges Project
- eRubric Assistant
- Alternative Assessment