Showing posts with label Michelle Honeyford.

Sunday, June 14, 2009

the harrison bergeron approach to education: how university rankings stunt the social revolution

I've been thinking some lately about the odd and confusing practice of comparing undergraduate and graduate programs at American colleges and universities and producing a set of rankings that show how the programs stack up against each other.

One of the most widely cited sets of rankings comes from U.S. News and World Report, which offers rankings in dozens of categories for both undergraduate and graduate-level programs. Here, the magazine offers its altruistic rationale for producing these rankings:
A college education is one of the most important—and one of the most costly—investments that prospective students will ever make. For this reason, the editors of U.S. News believe that students and their families should have as much information as possible about the comparative merits of the educational programs at America's colleges and universities. The data we gather on America's colleges—and the rankings of the schools that arise from these data—serve as an objective guide by which students and their parents can compare the academic quality of schools. When consumers purchase a car or a computer, this sort of information is readily available. We think it's even more important that comparative data help people make informed decisions about an education that at some private universities is now approaching a total cost of more than $200,000 including tuition, room, board, required fees, books, transportation, and other personal expenses.

(To access the entire rankings, developed and produced selflessly by U.S. News and World Report, you need to pay. Click here to purchase the Premium Online Edition, which is the only way to get complete rankings, for $14.95.)

The 2009 rankings, released in April, are in the news lately because of questions related to how the magazine gathers data from colleges. As Carl Bialik points out in a recent post at the Wall Street Journal, concerns over how Clemson University set about increasing its rank point to deeper questions about the influence of rankings numbers on university operations. Clemson President James F. Barker reportedly shot for cracking the top 20 (it was ranked 38th nationally in 2001) by targeting all of the ranking indicators used by U.S. News. Bialik writes:
While the truth about Clemson’s approach to the rankings remains elusive, the episode does call into question the utility of a ranking that schools can seek to manipulate. “Colleges have been ‘rank-steering’ — driving under the influence of the rankings,” Lloyd Thacker, executive director of the Education Conservancy and a critic of rankings, told the Associated Press. “We’ve seen over the years a shifting of resources to influence ranks.”

Setting aside questions of the rankings' influence on university operations and recruiting (of prospective students and prospective faculty alike), and setting aside, too, the question of how accurate any numbers collected from university officials themselves could possibly be when the stakes are so high, one wonders how these rankings limit schools' ability to embrace what appear to be key tenets emerging out of the social revolution. A key feature of some of the most vibrant, energetic, and active online communities is what Clay Shirky labels the "failure for free" model. As I explained in a previous post on the open source movement, the open source software (OSS) movement embraces this tenet:
It's not, after all, that most open source projects present a legitimate threat to the corporate status quo; that's not what scares companies like Microsoft. What scares Microsoft is the fact that OSS can afford a thousand GNOME Bulgarias on the way to its Linux. Microsoft certainly can't afford that rate of failure, but the OSS movement can, because, as Shirky explains,
open systems lower the cost of failure, they do not create biases in favor of predictable but substandard outcomes, and they make it simpler to integrate the contributions of people who contribute only a single idea.

Anyone who's worked for a company of reasonable size understands the push to keep the risk of failure low. "More people," Shirky writes, "will remember you saying yes to a failure than saying no to a radical but promising idea." The higher up the organizational chart you go, the harder the push will be for safe choices. Innovation, it seems, is both a product of and oppositional to the social contract.

The U.S. News rankings, and the methodology behind them, are anathema to the notion of innovation. Indeed, a full 25 percent of the ranking system is based on what U.S. News calls "peer assessment," which comes from "the top academics we consult--presidents, provosts, and deans of admissions" and, ostensibly at least, allows these consultants
to account for intangibles such as faculty dedication to teaching. Each individual is asked to rate peer schools' academic programs on a scale from 1 (marginal) to 5 (distinguished). Those who don't know enough about a school to evaluate it fairly are asked to mark "don't know." Synovate, an opinion-research firm based near Chicago, in spring 2008 collected the data; of the 4,272 people who were sent questionnaires, 46 percent responded.

Who becomes "distinguished" in the ivory-tower world of academia? Those who play by the long-established rules of tradition, polity, and networking, of course. The people who most want to effect change at the institutional level are often the most outraged, the most unwilling to play by the rules established by administrators and rankings systems, and therefore the least likely to make it into the top echelons of academia. Indeed, failure is rarely free in the high-stakes world of academics; it's safer to say no to "a radical but promising idea" and yes to any number of boring but safe ones.

So what do you do if you are, say, a prospective doctoral student who wants to tear wide the gates of academic institutions? What do you do if you want to go as far in your chosen field as your little legs will carry you, leaving a swath of destruction in your wake? What do you do if you want to bring the social revolution to the ivory tower, instead of waiting for the ivory tower to come to the social revolution?

You rely on the U.S. News rankings, of course. It's what I did when I made decisions about which schools to apply to (the University of Wisconsin-Madison [ranked 7th overall in graduate education programs, first in Curriculum & Instruction, first in Educational Psychology], the University of Texas-Austin [tied at 7th overall, 10th in Curriculum & Instruction], the University of Washington [12th overall, 9th in Curriculum & Instruction], the University of Michigan [14th overall, 7th in Curriculum & Instruction, and 3rd in Educational Psychology], Indiana University [19th overall, out of the top 10 in individual categories], and Arizona State University [24th overall, out of the top 10 in individual categories]). Interestingly, though, the decision to turn down offers from schools ranked higher than Indiana (go hoosiers) wasn't all that difficult. I knew that I belonged at IU (go hoosiers) almost before I visited, and a recruitment weekend sealed the deal.

But I had an inside track to information about IU (go hoosiers) via my work with Dan Hickey and Michelle Honeyford. I also happen to be a highly resourceful learner with a relatively clear sense of what I want to study, and with whom, and why. Other learners--especially undergraduates--aren't necessarily in such a cushy position. They are likely to rely heavily on rankings in making decisions about where to apply and which offer to accept. This not only serves to reify the arbitrary and esoteric rankings system (highest ranked schools get highest ranked students), but also serves to stunt the social revolution in an institution that needs revolution, and desperately.

In this matter, it's turtles all the way down. High-stakes standardized testing practices and teacher evaluations based on achievement on these tests limit innovation--from teachers as well as from students--at the secondary and, increasingly, the elementary level. But the world that surrounds schools is increasingly ruled by those who know how to innovate, how to say yes to a radical but promising idea, how to work within a "failure for free" model. If schools can't learn how to embrace the increasingly valued and valuable mindsets afforded by participatory practices, they're failing to prepare their students for the world at large. The rankings system is just another set of hobbles added to a system of clamps, tethers, and chains already set up to fail the very people it purports to serve.

Friday, May 29, 2009

figuring out "how to go on"

In his paper "Human Action and Social Groups as the Natural Home of Assessment: Thoughts on 21st Century Learning and Assessment," Jim Gee describes what at first glance appear to be two opposing uses of assessment in informal online spaces. As Gee explains,

Assessment for most social groups is both a form of mentoring and policing. These two are, however, not as opposed to each other as it might at first seem (and as they often are in school). Newcomers want to “live up to” their new identity and, since this is an identity they value, they want that identity “policed” so that it remains worth having by the time they gain it more fully. They buy into the “standards.” Surely this is how SWAT team members, scientists, and Yu-Gi-Oh fanatics feel.

At its best, assessment in formal education serves the same dual role; yet something is most assuredly different. What's different is not the degree to which students "buy into" the value of assessment; they see assessment in school as important, just as they would argue that the mentoring and policing of the online spaces they inhabit are essential to keeping those spaces alive.

What's different is not the degree of investment; what's different is the degree of relevance. The Yu-Gi-Oh fanatics Gee references want--sometimes desperately--to be accepted into the groups they join, and so they agree to the terms of this belonging, even if it requires being held to at times impossibly high standards of participation. The same is true of novice SWAT team members and scientists; it's less true of 11th graders reading Moby-Dick in an English classroom. To what purpose? they might ask, and they would be right to do so. Until we can align the goals, roles, and assessment practices of the formal classroom--until, that is, we can transform the domain to meet the needs of a participatory culture--investment exists without relevance. Students want A's to get into college; they want A's because that's what their parents tell them equals success; they want A's (or D's, or F's) because their friends tell them they should.

We know that practices are mediated by the tools we use to engage in those practices. We know that writing with a pencil is different from writing with a computer is different from writing with a BlackBerry. The notion of "re-mediation" is intended to point to another level of mediation: the tools that mediate traditional literacy practices get re-mediated by new media, which then re-mediate the practices that we bring to the tools. It's all very meta.

All of this is by way of introduction to re-mediating assessment, a new blog emerging out of a clever little partnership between a plucky crew of assessment-oriented researchers out of Indiana University and MIT. The plucky researchers include:

Daniel T. Hickey, Associate Professor of Learning Sciences at Indiana University, and our intrepid leader. Dan's research focuses on participatory approaches to assessment and motivation, design-based educational research, and program evaluation. He is particularly interested in how new participatory approaches can advance nagging educational debates over things like assessment formats and the use of extrinsic incentives. His work is funded by the National Science Foundation, NASA, and the MacArthur Foundation, and has mostly been conducted in the context of digital social networks and videogames. He teaches graduate courses on cognition & instruction, assessment, and motivation, and undergraduate classes on educational psychology.

Michelle Honeyford, a Ph.D. Candidate in the Literacy, Culture, and Language Education Department at Indiana University and the cool head behind this operation. Michelle is a Graduate Research Assistant on the MacArthur-funded 21st Century Assessment Project for Situated and Sociocultural Approaches to Learning, working on a participatory assessment model for new media literacies. Her broader research interests include identity, cultural citizenship, and new literacy studies. Michelle is a former middle and high school English Language Arts teacher, and has taught courses in the teaching of writing at IU.

Jenna McWilliams, the little engine that could. Jenna is a prolific blogger who is working on mastering the art of being both smart and lucky, sometimes simultaneously. She recently got picked up by the Guardian's online site, Comment is Free, and was interviewed about the future of newspapers on the BBC's News Hour program. She currently works as an educational researcher for Project New Media Literacies, a MacArthur-funded research project based at MIT; prior to that, she taught English composition, literature, and creative writing at Suffolk University, Bridgewater State College, Newbury College, and Colorado State University, where she earned her MFA in Creative Writing. In Fall 2009, she will begin doctoral study in the Learning Sciences Program at Indiana University, under the tutelage of Dan Hickey, who is her sensei.

Together, this merry band will start working out the simple matter of "how to go on" and how to align classroom practices with the proficiencies called for--indeed, demanded--by a participatory culture.

We're bringing the smart. Wish us luck.