Wednesday, July 22, 2009

What Participatory Assessment is NOT

Obviously a blog devoted to participatory assessment should explain what that means, and try to do so in simple, everyday terms. This is the first in a series of posts that attempts to do so. Quite specifically, participatory assessment is first about assessing and improving a community's knowledgeable social participation, with the added bonus of fostering the understanding and achievement of the individuals in that community. To explain what this means, this post introduces some basic ideas and terms from educational assessment to help explain what participatory assessment does not mean. It was authored by Dan Hickey and Jenna McWilliams.

What is Assessment?
It makes sense to start by saying what we mean by assessment. For us, assessment is about documenting outcomes. Most of the time we are interested in assessing what someone knows or can do. Traditionally, educational assessment has been linked to documenting what students know and what they have learned. Often it has been associated with what teachers do in classrooms at the end of a unit or a term. In the language of modern cognitive science, this view is well represented in the title of the 2001 National Research Council report Knowing What Students Know: The Science and Design of Educational Assessment. This report (whose editors include Dan’s doctoral advisor Jim Pellegrino) nicely describes the mainstream view of educational assessment in terms of conceptual structures called “schemas.” We here at re-mediating assessment often use the phrase “assessing individual understanding” to represent this commonly held view.

Assessment vs. Measurement
It is helpful to distinguish assessment from measurement. Most people use the term measurement to emphasize the use of sophisticated psychometric techniques. While these techniques are also useful for student assessment, they are typically associated with large-scale achievement testing. These techniques make it possible to figure out how hard one test item is compared to other test items. When you know this, you can estimate a person’s chance of getting a particular item correct after taking a few other items. This is how standardized tests come up with such precise scores that don’t change from test to test (this means the scores are reliable). As long as you think the items get at the thing you want to measure, you can efficiently and accurately compare how much of that thing different people have. This is most often used to measure achievement of educational standards, by creating large numbers of items that target those standards. Hence we here at re-mediating assessment often use the phrase “measurement of aggregated achievement” to describe how achievement tests are designed and used. By combining psychometrics, large pools of items of known difficulty, and computer-based testing, we get a big industry devoted to assessing aggregated achievement of lots of students. Because these tests are developed by institutions (usually companies) rather than teachers or schools, they are often called external tests.
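One common psychometric approach behind this kind of item calibration is the Rasch (one-parameter logistic) model, which puts person ability and item difficulty on the same scale. Here is a minimal sketch; the function name and all of the numbers are purely illustrative, not taken from any real test:

```python
import math

def p_correct(ability, difficulty):
    """Rasch (one-parameter logistic) model: the probability that a
    person with the given ability answers an item of the given
    difficulty correctly. Both values live on the same logit scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# Once item difficulties are calibrated on a large sample, a few
# responses are enough to locate a new test-taker on the same scale.
easy, hard = -1.0, 2.0   # hypothetical calibrated item difficulties
student = 0.5            # hypothetical estimated ability

print(round(p_correct(student, easy), 2))  # high chance on the easy item
print(round(p_correct(student, hard), 2))  # lower chance on the hard item
```

This is what makes the scores stable from test to test: the same ability estimate predicts performance on any pool of items whose difficulties are already known.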

Assessment, Measurement, and Educational Reform

Most (but certainly not all) efforts to improve education are concerned with raising scores on achievement tests. Because achievement tests target educational standards, they can be used to compare different textbooks or techniques that teach to those standards. Among those who care about achievement tests, there is a lot of debate about how directly education should target them. With the No Child Left Behind act, many educational reforms focused very directly on achievement tests. Sometimes people call this “drill and practice,” but many teachers give students drill and practice on things that are not directly included on achievement tests. The clearest example is what we call “test prep”: computer-based programs that essentially train students in the very specific associations between words that will help them do well on a particular test. Under No Child Left Behind, schools that were not making “adequate yearly progress” were required to take money out of their classrooms and use it to provide tutoring. This often involved (and sometimes required) schools paying private companies to deliver that tutoring, on the assumption that the teachers had failed. In practice, this meant that companies would get paid to use school computer labs to deliver computer-based test preparation. While the impact of these efforts on targeted achievement tests is a subject of fierce debate, one thing is for sure: it remains a very lucrative industry. (If you are interested, see the column on Supplemental Educational Services by Andrew Rotherham, aka Eduwonk.)

As you might have guessed, we here at re-mediating assessment feel pretty strongly about test-prep practices. We also feel pretty strongly about what tests mean, and have lots of debates about it. Dan believes that achievement tests of literacy, numeracy, and domains like science and history measure something useful, so as long as the tests are not directly taught to, higher scores are better. Jenna, like many, thinks that tests and the testing industry are so bad for education that they should be abandoned. But we both agree that (1) any viable current educational reform must impact achievement tests and (2) the transformation of knowledge into achievement tests makes that new representation of the knowledge practically useless for anything else. This is where the name of our blog comes from. When we transform knowledge of school subjects to make achievement tests, we re-mediate that knowledge. For example, consider the simple case of determining which brand of laundry detergent is the cheapest per ounce. Figuring this out while in the cleaning supplies aisle at the supermarket is very different from sitting at a table with a story problem, a pencil, and a blank sheet of paper. Voila: re-mediation. It is also worth noting that we have some disagreements about the use of this term.
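To make the contrast concrete, the supermarket version of that task is just a unit-price comparison. A toy sketch (the brands, prices, and sizes below are made up):

```python
# Hypothetical shelf prices: (brand, price in dollars, size in ounces).
detergents = [
    ("Brand A", 5.99, 50),
    ("Brand B", 8.49, 100),
    ("Brand C", 3.79, 32),
]

# The cleaning-aisle version of the problem: divide price by size
# for each brand and pick the smallest cost per ounce.
cheapest = min(detergents, key=lambda d: d[1] / d[2])
print(cheapest[0], round(cheapest[1] / cheapest[2], 3))
```

The story-problem version re-mediates exactly this activity into words, symbols, and pencil marks, which is the point: the knowledge gets a new medium, and the new medium changes what counts as knowing it.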

Re-mediation of knowledge for achievement tests usually means making multiple-choice items—lots of them. And this is where the trouble begins for many of us. A multiple-choice item usually consists of a question (called a stem) and four or five answers (called responses). Most assume that a test item has one “correct” answer while the rest are wrong. But it is usually more a matter of one response being “most” correct. When we take multiple-choice tests we are as much ruling out wrong responses as we are recognizing the most correct one. Because the stem and the responses make up four or five associations, we characterize achievement testing as the process of “guessing which of four or five loosely related associations is least wrong.” The more a student knows about the stuff being tested, the better they are at doing so.

What is the Problem with Test Prep?

Test-prep programs raise achievement scores by training students to recognize hundreds or even thousands of specific associations that might appear on tests. Because of the way our brains work, we don’t need to “understand” an association to recognize it. All test-prep programs have to do is help students recognize a few more associations as being “less wrong” or “more correct” to raise scores. Because of the way tests are designed, getting even a handful of the more difficult items correct can raise scores. A lot. And this is the root of the problem this blog is dedicated to solving. We believe that the way knowledge is re-mediated for tests makes that knowledge entirely worthless for teaching, and mostly worthless for classroom assessment. Specifically, we believe that training kids to recognize a bunch of isolated associations is mostly worthless for anything other than raising scores on the targeted tests. Test-preparation practices and the politically motivated lowering of passing scores (“criteria”) on state achievement tests are why scores on state tests have gone up dramatically under No Child Left Behind while scores on non-targeted tests (like the National Assessment of Educational Progress) and lots of other educational outcomes (like college readiness) have declined. Here is an article referencing some of the earlier studies. We are particularly distressed that so many schools find their computer laboratories locked up and their technology budgets locked down by computer-based test preparation and interim “formative” testing. Despite a decade of E-Rate funding, many students in many schools still don’t have access to networked computers to engage in networked technology practices that are actually useful.

There is a lot of debate about the consequences of test preparation for achievement and its impact on other outcomes. We think that any program that directly trains students in the specific associations on targeted tests is educational malpractice, because that knowledge is useless for any other purpose. This is because we think that knowledge is more about successful participation in social practices. And these practices have very little to do with test scores. So, in summary, test preparation is the epitome of what participatory assessment is not. Our next post will try to explain what it is.

I'm bringing sexyback: some thoughts on formative assessment

Immersed as I am lately in the world of participatory assessment, I go through cycles of forgetting and then remembering and then forgetting again that not everybody in educational research thinks assessment is sexy.

I was reminded of this again recently while reading Lorrie Shepard's excellent 2005 paper, "Formative Assessment: Caveat Emptor." The piece argues that the notion of "formative assessment" has been twisted in unfortunate ways as a result of the excessive hammering kids get from high-stakes standardized tests.

I helpfully plugged the entire paper into the wordle machine for you and got this:


In theory, then, assessment should be easy to understand: All of the most frequently used words in Shepard's paper are fairly common and comprehensible. In practice, though, assessment research is complicated by the impulse to put a fine point on things. Here's a sample paragraph from Shepard's piece, which starts out okay but descends into chaos before the end:
“Everyone knows that formative assessment improves learning,” said one anonymous test maker, hence the rush to provide and advertise “formative assessment” products. But are these claims genuine? Dylan Wiliam (personal communication, 2005) has suggested that prevalent interim and benchmark assessments are better thought of as “early-warning summative” assessments rather than as true formative assessments. Commercial item banks may come closer to meeting the timing requirements for effective formative assessment, but they typically lack sufficient ties to curriculum and instruction to make it possible to provide feedback that leads to improvement.


I'm not saying the language is unnecessary; I'm not saying that assessment types are putting too fine a point on things. What I will argue here is that assessment research has, for lots of good and not-so-good reasons, been divorced so thoroughly from other aspects of educational research that it's decontextualized itself right into asexuality. It's like that guy in the corner booth at the bar on Friday night who wants to talk about Marxism when everybody else just wants to make sure everybody gets the same amount of beer before closing time.

Think about that guy for a second. Let's call him Jeff. Jeff has been single for a long time now, and he's spent a lot of that time reading. Maybe he's grown nostalgic for the early days before his girlfriend cheated on him and then moved in with some guy she met in her Econ class. His friends miss those days, too, mainly because he was so much goddamn fun back then. They're nice enough; they want to take him out and help him snap out of it. But the minute the beers come he's back on the Marxism soapbox again and NOBODY. FREAKING. CARES. It's Friday night, late July, and everybody just wants to get stupid drunk. They drop him some hints. Sully slaps him on the back and asks him to tell that one joke he told last week.

"In a minute," Jeff says. "I'm explaining where Marxism went wrong."

Eventually his friends will tell him to either cut it out or go home. If he wants to keep hanging out with these guys, he'll shut up. Or maybe he'll tell that one joke he executes so well. If the girls around him laugh, he might tell another one. Girls like funny guys, he'll suddenly remember. They don't necessarily like Marxists.

All of this is what we might call "formative assessment." This guy wants to be accepted by his friends, which means he needs to pay attention to his behavior. He learns (or re-learns) how to act at the bar on Friday night by paying attention to the feedback he gets from his friends, from other people at the bar, from his memories of having a social life all those years ago.

If we wanted to, we could spend some time talking about better ways to help Jeff learn the social skills he needs. For example, his friends could have sat him down before they went out and explained that his primary goal was to be the funniest guy in the room. "Because girls like funny guys," his buddy Rufus might remind him. They might also set deadlines: By 11:30 you better have told at least three jokes. Then, over the course of the evening, they could check in with him and get a joke-count.

The point is that everybody's on board with the evening's goals. Everybody--Jeff, his friends--wants Jeff to have a good time, and they want to have a good time with him.

Haha! I tricked you into caring about formative assessment.

This is what assessment is, even if it doesn't always feel that way to students, teachers, or researchers. There is an end goal, an objective, and formative assessment is a way of getting everyone on board with this goal and keeping them on board. When it works right, everybody involved actually wants to achieve the objective and the assessment is valuable because it helps them get where they want to go.

But as Shepard's piece points out, too often the insanity of NCLB substitutes test scores for real, intrinsic motivation. Too often and too easily, students learn the skills it takes to attain high test scores without actually learning anything. Though "(the) idea of being able to do well on a test without really understanding the concepts is difficult to grasp," Shepard writes, she offers as evidence a 1984 study by M.L. Koczor, which focused on two groups of children learning about Roman numerals:
One group learned and practiced translating Roman to Arabic numerals. The other group learned and practiced Arabic to Roman translations. At the end of the study each group was randomly subdivided again (now there were four groups). Half of the subjects in each original group got assessments in the same format as they had practiced. The other half got the reverse. Within each instructional group, the drop off in performance, when participants got the assessment that was not what they had practiced, was dramatic. Moreover, the amount of drop-off depended on whether participants were low, middle, or high achieving. For low-achieving students, the loss was more than a standard deviation. Students who were drilled on one way of translation appeared to know the material, but only so long as they were not asked to translate in the other direction.

Because NCLB and other insane policies that mandate high-stakes testing for accountability have pushed assessment out of its natural home--as Jim Gee explains it, "in human action"--assessment researchers have themselves been backed into a separate corner of the room.

This is not okay. It doesn't help anybody to take the sexy out of assessment by tossing it into a corner. What we need, more than anything, is to push assessment back where it belongs: inside of the participation structures that support authentic learning.

Participatory assessment is, at its core, about social justice, about narrowing the participation gap that keeps our society stratified by race and class, about motivating learners to achieve real goals and overcome real obstacles to their own learning. Participatory assessment, if we do it right, can make almost anything possible for almost anyone.

Monday, July 20, 2009

making universities relevant: the naked teaching approach

I feel sorry for college deans, I really do*. They face the herculean task of proving that the brick-and-mortar college experience offers something worth going into tens of thousands of dollars of debt for, a task made even more difficult by the realities of a recession that's left nearly a quarter of Americans either unemployed or underemployed.

Then there's the added challenge of proving colleges have anything other than paper credentials to offer in a culture where information is free and expert status is easily attainable. Only in a participatory culture, for example, would it be possible for time-efficiency guru Timothy Ferriss to offer a set of instructions on "How to Become a Top Expert in 4 Weeks." "It's time to obliterate the cult of the expert," Ferriss writes in his mega-bestseller, The 4-Hour Workweek. He argues that the key is to accumulate what he calls "credibility indicators." It is possible, he writes,
to know all there is to know about a subject--medicine, for example--but if you don't have M.D. at the end of your name, few will listen.... Becoming a recognized expert isn't difficult, so I want to remove that barrier now. I am not recommending pretending to be something you're not... In modern PR terms, proof of expertise in most fields is shown with group affiliations, client lists, writing credentials, and media mentions, not IQ points or Ph.D.s.

Ferriss then offers five tips for becoming a "recognized expert" in your chosen field. None of them include earning the credential through formal education.

Just like that, we've gone from the position that expertise takes a decade, at minimum, to develop, to the argument that a person can become an expert in just four weeks.

In the face of this qualitative shift in how we orient to expertise, colleges--the educational institutions that have made their bones on offering a sure path to credentialing--are struggling to remain viable. One strategy--and the one chosen by José A. Bowen, dean of the Meadows School of the Arts--is to offer "naked teaching." Bowen's approach, as described in a recent piece in the Chronicle of Higher Education, is to actually remove networked technologies from the classroom. The article makes it clear that Bowen is not anti-technology; he just thinks technologies are being misused by faculty who overrely on PowerPoint and technology-supported lecturing techniques. He favors using technologies like podcasting for delivering lecture materials outside of the classroom, then using the class itself to foster group discussion and debates.

To support this approach, all faculty were recently given laptops and support for creating podcasts and videos.

According to the Chronicle piece, the group that's most upset about the shift away from the traditional lecture format is...students. According to Kevin Heffernan, an associate professor in the school's division of cinema and television, students

are used to being spoon-fed material that is going to be quote unquote on the test. Students have been socialized to view the educational process as essentially passive. The only way we're going to stop that is by radically refiguring the classroom in precisely the way José wants to do it.


For all the griping we do about No Child Left Behind, test-centered accountability practices, and high-stakes assessment practices, the roaring success of decontextualized accountability structures is their astounding ability to keep formal education relevant. "Success" at the primary and secondary level means high achievement on high-stakes tests, and achievement depends on the learner's ability to internalize the value systems and learning approaches implicit in that kind of testing structure. Do well on a series of state-mandated tests and you'll probably also do well on the SAT; do well on the SAT and you're well positioned for the lecture-style, knowledge-transfer and, in general, highly decontextualized experience of most undergraduate-level classes. We gravitate toward the kinds of experiences that make us feel successful, which means the testing factory churns out its own customer base.

While Bowen's experiment (one that he's been moving toward for years; see this 2006 piece in the National Teaching and Learning Forum) may garner attention for an apparent anti-technology stance, the impetus behind his "naked teaching" approach is an effort to reshape the role of institutions of higher education. In truth, learning can happen anywhere, and Bowen's embrace of this truth through his embrace of technologies for supporting out-of-class information transfer seems like a low-risk and high-yield slant on the role of the university.

If learning can happen anywhere, then the physical community of learners gathered together within four walls, engaged in the act of collaborative knowledge-building, is the rare commodity. In a world where everyone can be an expert, the promise of credentials becomes just another strategy for bringing that community together.



*jk I really don't.

Friday, July 17, 2009

getting students off of Maggie's farm

I stumbled across an interesting cross-blog conversation about Social Media Classroom and similar Learning Management Systems (LMS's). I have been, and continue to be, a strong and vocal supporter of Social Media Classroom (SMC), Howard Rheingold's Drupal-based, open-source educational technology intended to support participatory practices in formal learning settings.

Most significantly for me, it was participation in SMC that led to my passion for all things open-source. This is not a trivial thing: If participation in an LMS fosters a disposition toward increased openness, collaboration, and sharing, then it's clearly putting its money where its mouth is.

Blogger and computer scientist Andre Malan writes that he recently took SMC for a spin around the block and found it impressive in some ways and lacking in others. He writes:

  1. It seems to be closed off and private by default (although this may have just been the system I used). If outsiders can participate (as has been shown by Jon Beasley-Murray, Jim Groom and D’Arcy Norman) magic can happen. We need to let the world see what students are doing in university.

  2. The “Social Media Classroom” is missing one little word in the title. A game changer would rather be a “Social Network Media Classroom”. Although students can edit their own profiles in the Social Media Classroom, there is no way to form groups or to add people to their network. The network is often the most powerful part of any social media applications and it is a terrible oversight to not include it.


  3. The training wheels don’t come off. This application is great for students who do not know of, or use social media tools. However, it sucks for those that do. They are not able to use their current networks or applications. Most people who have blogs would want to use their own blogs for a class. Or use their own social bookmarking service. These people (the ones who would be very useful in this environment as they could guide their peers and instructors in the use of social media) will feel alienated and resent having to use the Social Media Classroom. If an education-based social media application is ever to be successful it has to provide an easy way for experienced students to show others the tricks of the trade and for novice students to take the wheels off of the bicycle and use real tools when they are ready for it.



D'Arcy Norman, writing from the University of Calgary, responded to the above points first in the comments section and then in a full post on his own blog. Norman doesn't have a problem with fostering student engagement within "walled gardens"--he writes:
The goal isn’t to publish content to the open internet. The goal is to engage students, in creation, discussion, and reflection. If they need a walled garden to do that effectively (and there are several excellent reasons for needing privacy for a community) then so be it. If they’d like to do it in the open, that’s just a checkbox on a settings page.


And, in the most spectacular finish to a post I've so far read anywhere, by anyone, Norman ends with this:
That option isn’t available for users of The Big Commercial LMS Platform. If it’s in an LMS, it’s closed. End of discussion. And people only gain experience in using the LMS, in farming for Maggie.


Norman is right and he's wrong. A closed LMS that lacks the capacity for open participation in a larger community turns learners into day laborers reduced to carting bushels of cognitive work from the fields to the barn and taking home only what they can hide away in their pockets. But in many ways, a "walled garden" isn't much better. Not to overstretch the metaphors here, but legend has it that Prince Siddhartha spent his youth inside of a walled garden. The kind of participation his surroundings supported was absolutely voluntary, and probably felt authentic, in the main. But when he left the garden, everything he knew to be true was true no longer.

One of the big failings of educational institutions is that they too often offer a beautiful walled garden. Inside the garden, food is abundant, and everybody eats equally well. (Well, that depends on the garden you've walked into, how you got there, how long you can stay, and whether you have comparable walled garden experience in your past.)

Sure, participation in a closed system engages students "in creation, discussion, and reflection." This is, I agree, a necessary component of higher education. But I disagree with Norman that this type of participation is sufficient. In fact, creation, discussion and reflection are only useful learning experiences insofar as they support learners' ability and willingness to engage with wider, more public, and less protected communities of practice. This means that publishing content on the open internet should--indeed, must--be a key curricular element. The internet isn't a garden; it's an ecosystem complete with backlots, busted glass, some ragged sunflowers and lots of rich material ripe for harvesting--but only if you've learned what it takes to grow and then harvest that material.

Monday, July 13, 2009

on the community-source model for open educational software design

For all my fascination with all things open-source, I'm finding that the notion of open source software (OSS) is one that's used far too broadly, to cover more categories than it can rightfully manage. Specifically, the use of this term to describe collaborative open education resource (OER) projects seems problematic. The notion of OSS points to a series of characteristics and truths that do not apply, for better or worse, to the features of collaborative learning environments developed for opening up education.

While in general, open educational resources are developed to adhere to the letter of the OSS movement, what they miss is what we might call the spirit of OSS, which for my money encompasses the following:

  • A reliance on people's willingness to donate labor--for love, and not for money.
  • An embrace of the "failure for free" model identified by Clay Shirky in Here Comes Everybody.
  • A loose collaboration across fields, disciplines, and interest levels.

Open educational resources are not, in general, developed by volunteers; they are more often the product of extensive funding mechanisms that include paying participants for their labor.

There are good reasons for this. As Christopher J. Mackie points out in Opening Up Education, while the OSS movement has produced some "runaway successes" (Perl, Linux, and Firefox), the movement has had less success tackling certain types of projects, including the development of products designed for widespread institutional use (instead of adoption by individuals). He argues that this, too, is for good reasons, and his explanation points to both the weaknesses and the strengths of the open education movement:

This limitation may trace to any of several factors: the number of programmers having the special expertise required to deliver an enterprise information system may be too small to sustain a community; the software may be inherently too unglamorous or uninteresting to attract volunteers; the benefits of the software may be too diffuse to encourage beneficiaries to collaborate to produce it; the software may be too complex for its development to be coordinated on a purely volunteer basis; the software may require the active, committed participation of specific firms or institutions having strong disincentives to participate in OSS; and so on.


Perhaps the two most significant weak spots Mackie points to are the unglamorous nature of developing OERs and the strong disincentives against institutional participation in developing and circulating these resources. OERs require sustained, consistent dedication at all levels, from programmers all the way up to administrators and funders; and this type of dedication is difficult to attain for the following reasons:

  • While OSS is primarily affiliated with the movement itself, OERs are by their nature affiliated first with an institution or funder; as project affiliates change institutions or roles, their commitment to developing the OER can shift or disappear.
  • OERs require institutional buy-in, and the notion of openness, on its surface at least, appears at odds with institutional goals. (Universities survive by offering something unique, something you can only get by paying your money and walking through the gates.)

Mackie suggests an alternate term for OERs designed in keeping with the open source ideals: community source software (CSS). He identifies the following characteristics as key to the CSS movement:

  • Multiple institutions band together to design software that meets their collective needs, with the ultimate goal of releasing the software as open source;
  • Development of the software is conducted virtually, with employees from each institution collaborating;
  • The collaboration aligns with a corporate, even sometimes hierarchical, structure, with project leaders, paid staff, and experts in a range of design and development categories;
  • Everybody is compensated for their expertise, and this supports a systematic, targeted approach to software development that is often lacking in OSS projects.



Embracing the notion of community source software instead of open source is more than a semantic choice, in my view. It opens up new avenues for participation and the possibility for new affiliation structures across institutions of higher education. Just as higher education institutions have historically affiliated around various community markers (cf. The Associated Writers and Writing Programs, HASTAC member institutions, the Doctoral Consortium in Rhetoric and Composition), colleges and universities--and their affiliates--might unite around the notion of opening up education by opening up technologies, access, and information.

After all, let's take our heads out of the clouds for a second and think about what sorts of factors might motivate a university to align with the open educational movement. Asking institutions to relinquish their monopoly on whatever they think makes them unique (cf. the college ranking system at U.S. News and World Report) requires that we offer them something in exchange. "For the good of humankind" is a sweet notion, but you can't take it to the bank.

Thursday, July 9, 2009

Participatory Assessment for Bridging the Void between Content and Participation

Here at Re-Mediating Assessment, we share our ideas about educational practices, mostly as they relate to innovative assessment practices and mostly then as they relate to new media and technology. In this post, I respond to an email from a colleague about developing on-line versions of required courses in graduate-level teacher education courses.

My colleague and I are discussing how we ensure coverage of “content” in proposed courses that focus more directly on “participation” in actual educational practices. This void between participation (in meaningful practices) and content (as represented in textbooks, standards, and exams) is a central motivation behind Re-Mediating Assessment. So it seems worthwhile to expand my explanation of how participatory assessment can bridge this void and post it here.

To give a bit of context, note that the course requirements of teacher education programs are constantly debated and adjusted. From my perspective it is reasonable to assume that someone with a Master’s degree in Ed should have taken a course on educational assessment. But it seems just as reasonable to expect a course on, say, Child Development. Yet it simply may not be possible to require students to take both classes. Because both undergraduate and graduate teacher education majors have numerous required content area courses (i.e., math, English, etc.), there are few slots left for other courses that most agree they need. So the departments that offer these other required courses have an obvious obligation to maintain accountability for the courses that they offer.

I have resisted teaching online because previous courseware tools were not designed to foster participation in the meaningful discourse that I think is so important to a good course. Without a classroom context for discourse (even conversations around a traditional lecture), students have few cues for what matters. Without those cues, assessment practices become paramount in communicating the instructor’s values. And this is a lot to ask of an assessment.

This is why, in my observation, online instruction heretofore has mostly consisted of two equally problematic alternatives. The first is the familiar set of on-line tools for pushing content out to students: “Here is the text, here are some resources, here is a forum where you can post questions, and here is the exam schedule.” The instructors log on to the forums regularly and answer any questions, students take exams, and that is it. Sometimes these courses are augmented with papers and projects, perhaps even collaborative ones; hopefully students get feedback, and they might even use that feedback to learn more. But many on-line courses are essentially fancy test prep. My perceptions are certainly biased by my experiences back in the 90s in the early days of on-line instruction. The Econ faculty where I was working could not figure out why the students who took the online version of Econ 101 always got higher exam scores than the face-to-face (FTF) students, but almost always did far worse in the FTF Econ 201. This illustrates the problem with instruction that directly prepares students to pass formal exams. Formal exams are just proxies for prior learning, and framing course content entirely around tests (especially multiple choice ones) is just a terrible idea. Guessing which of four associations is least wrong is still an efficient way of reliably comparing what people know about a curriculum or a topic. But re-mediating course content to fit into this format makes it nearly useless for teaching.

The other extreme of on-line instruction is “project based” classes that focus almost entirely on developing a portfolio of course-related projects. These approaches seem particularly popular in teacher education programs. The problem with on-line portfolios is that the lack of FTF contact requires the specifications for the portfolios to be excruciatingly detailed. Much of the learning that occurs tends to be figuring out what the instructor wants in order to get a good grade. The most salient discourse in these classes often surrounds the question “Is this what you want?” These classes are usually extremely time-consuming to teach because the accountability associated with the artifacts leads students to demand, and instructors to provide, tons of detailed feedback on each iteration of the artifacts. So much so that the most qualified faculty can’t really afford to teach many of these courses. As such, these courses are often taught by graduate students and part-time faculty who may not be ideal for communicating the “Relevant Big Ideas” (RBIs, or what a learning scientist might call “formalisms”) behind the assignments, and who instead just focus on helping students create the highest quality artifacts. This creates a very real risk that students in these classes may not actually learn the underlying concepts, or may learn them in a way so bound to the project that they can’t be used in other contexts. In my observation, such classes seldom feature formal examinations. Without careful attention, lots of really good feedback, and student use of that feedback, students may come away from the class with a lovely portfolio and little else. Given the massive investment in e-Portfolios in e-learning platforms like Sakai, this issue demands careful attention.
(I will ask my friend Larry Mikulecky in Indiana’s Department of Culture, Communication, and Language Education, who I understand has been teaching non-exam online courses for years and has reportedly developed considerable evidence of students’ enduring understanding.)

A Practical Alternative
I am teaching on-line for the first time this summer. The course is P540, Cognition and Learning, a required course for many M.Ed. programs. I am working like crazy to take full advantage of the new on-line resources for social networking that are now available in OnCourse, IU’s version of Sakai (an open-source collaborative learning environment designed for higher education). In doing so I am working hard to put into place an on-line alternative that balances participation and content. I also plan to use some of the lessons I am learning in my Educational Assessment course this Fall—which is partly what prompted the aforementioned conversation with my colleague. I want to put some of my ideas out there as they unfold in that class and seek input and feedback, including from my current students, who are (so far) patiently hanging with me while I refine these practices as I go.

In particular I am working hard to incorporate the ideas about participatory culture that I have gained from working with Henry Jenkins and his team at Project New Media Literacies over the last year. Participatory assessment assumes that you can teach more "content" and gather more evidence that students “understand” that content by focusing more directly on participation and less directly on content. Theoretically, these ideas are framed by situative theories of cognition, which hold that participation in social discourse is the most important thing to think about, and that individual cognition and individual behavior are “secondary” phenomena. These ideas come to me from three Jims: Greeno (whose theorizing has long shaped my work), Gee (who also deeply influences my thinking about cognition and assessment, and whose MacArthur grant funded the aforementioned collaboration and indirectly supports this blog), and Pellegrino (with whom I did my doctoral studies of assessment, transfer, and validity, but who maintains an individual differences approach to cognition).

Per the curriculum committee that mandated a cognition and learning course for most master’s degrees for teachers, my students are just completing ten tough chapters on memory, cognition, motivation, etc. I use Roger Bruning’s text because he makes it quite clear and puts 5-7 “implications for teaching” at the end of each chapter. But it is a LOT of content for these students to learn, especially if I just have them read the chapters.

I break students up into domain groups (math, science, etc.), and in those groups they go through the 5-7 implications for teaching. Each group must use the forum to generate a specific example of each implication, then rank order the implications in terms of relevance, warrant those rankings, and post them to the OnCourse wiki. The level of discourse in the student-generated forums around the content is tremendous. Then the lead group each week synthesizes the postings of all five groups to come up with a single list. I have now also asked them to do the same with “things worth being familiar with” in the chapter (essentially the bolded items and any highlighted research studies). What I particularly like about the discussions is the way that the discourse around agreeing that an implication or topic is less relevant actually leads to a pretty deep understanding of that implication or idea. This builds on ideas I have learned from my colleague Melissa Gresalfi about “consequential engagement.” Struggling to conclude that an implication is least likely to impact practice makes it more likely that students will remember that implication if they find themselves in a situation that makes it more relevant.
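Mechanically, merging the five groups' rankings into one class-wide list can be done many ways; a simple Borda count is one option. The sketch below is purely illustrative (the implication labels and group assignments are invented, and this is not how the lead group is formally instructed to work):

```python
# Hypothetical sketch: merging each domain group's ranking of a chapter's
# "implications for teaching" into one class-wide list via a Borda count.
# Group and implication names are invented for illustration.

def borda_merge(rankings):
    """Each ranking lists implications from most to least relevant.
    An item in position i of an n-item list earns n - i points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, implication in enumerate(ranking):
            scores[implication] = scores.get(implication, 0) + (n - position)
    # Highest total score first
    return sorted(scores, key=scores.get, reverse=True)

group_rankings = [
    ["spaced practice", "worked examples", "self-explanation"],  # math group
    ["worked examples", "spaced practice", "self-explanation"],  # science group
    ["spaced practice", "self-explanation", "worked examples"],  # literacy group
]

print(borda_merge(group_rankings))
# → ['spaced practice', 'worked examples', 'self-explanation']
```

Of course, the point of the exercise is that the lead group does this synthesis through discussion rather than arithmetic; the sketch only shows that the aggregation step itself is straightforward.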

This participatory approach to content is complemented by three other aspects of my class. Illustrating my commitment to content, I include three formal exams that are timed and use traditional MC and short answer items. But I prioritize the content that the class has deemed most important, and don't even include the content they deem least important.

The second complement is the e-Portfolio each student has to post each week in OnCourse. Students have to select the one implication they think is most relevant, warrant the selection, exemplify and critique it, and then seek feedback on that post from their classmates. Again following Melissa’s lead, the e-Portfolio asks students for increasingly sophisticated engagement with the implication relative to their own teaching practice: procedural engagement (explain the implication in your own words), conceptual engagement (give an example that illustrates what this implication means), consequential engagement (what are the consequences of this implication for your teaching practice? what should you do differently now that you understand this aspect of cognition?), and critical engagement (why might someone disagree with you, and what would happen if you took this implication too far?). I require them to request feedback from their classmates. While this aspect of the new OnCourse e-Portfolio tools is still quite buggy, I am persevering because the mere act of knowing that a peer audience is going to read a post pushes students to engage more deeply. Going back to my earlier point, it is hard for me to find time to review and provide detailed feedback on 220 individual submissions across the semester. When I do review them (students submit them for formal review after five submissions), I can just look at the feedback from other students and the students' own reflections on what they have learned for pretty clear evidence of consequential and critical engagement.
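For concreteness, the four engagement levels could be represented as reusable prompts in a courseware tool. This is an illustrative sketch only: the prompt wording paraphrases the post, and the structure and function names are invented, not the actual OnCourse configuration.

```python
# A minimal sketch of the four engagement levels as reusable e-portfolio
# prompts. The level names follow the post; everything else is invented
# for illustration.

ENGAGEMENT_PROMPTS = {
    "procedural":    "Explain the implication in your own words.",
    "conceptual":    "Give an example that illustrates what this implication means.",
    "consequential": "What are the consequences of this implication for your "
                     "teaching practice? What should you do differently?",
    "critical":      "Why might someone disagree with you, and what would "
                     "happen if you took this implication too far?",
}

def portfolio_entry(implication):
    """Build the weekly entry scaffold for one selected implication."""
    return {"implication": implication,
            "responses": {level: "" for level in ENGAGEMENT_PROMPTS}}

entry = portfolio_entry("Distribute practice over time")
print(sorted(entry["responses"]))
# → ['conceptual', 'consequential', 'critical', 'procedural']
```

The design point the sketch captures is that the same four prompts scaffold every weekly entry, so the progression from procedural to critical engagement stays visible to both student and reviewer.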

The third complement is the e-Portfolio that each student completes during the last five weeks of class. While each of the groups leads the class in the chapter associated with their domain (literacy, comprehension, writing, science, and math), students will be building an e-portfolio in which they critique and refine at least two web-based instructional resources (educational videogames, webquests, the kind of stuff teachers increasingly are searching out and using in their classes). They select two or more of the implications from that chapter to critique the activities and provide suggestions for how each should be used (or whether it should be avoided), along with one of the implications from the chapter on instructional technology and one of the implications from the other chapters on memory and learning. If I have done my job right, I won’t need to prompt them toward consequential and critical engagement at this stage, because they should have developed what Melissa calls a “disposition” toward these important forms of engagement. All I have to do is include the requirement that they justify why each implication was selected, the feedback from their classmates, and their reflection on what they learned from the feedback. It turns out that consequential and critical engagement is remarkably easy to recognize in discourse. That seems partly because it is so much more interesting and worthwhile to read than the more typical class discourse that is limited to procedural and conceptual engagement. Ultimately, that is the point.

Tuesday, July 7, 2009

Five tips for seeding and feeding your educational community

Dan Hickey's recent post on seeding, feeding, and weeding educators' networks got me thinking, for lots of reasons--not least of which being that I will most likely be one of the research assistants he explains will “work with lead educators to identify interesting and engaging online activities for their students.”

This got me a-planning. I started thinking about how I would seed, feed, and weed a social network if (when) given the chance to do so. As David Armano, the author of "Debunking Social Media Myths," the article that suggests the seeding, feeding, and weeding metaphor, points out, building a social media network is more difficult than people think—this is not an “if we build it, they will come” sort of thing. Designing, promoting, and growing a community takes a lot of work. People will, given the right motives, participate in the community for love and for free, but you have to start out on the right foot. This means offering them the right motivations for giving up time they would otherwise spend on something else.

A caveat
First, know that I am a True Believer. I have deep faith in the transformative potential of participatory media, not because I see it as a panacea for all of our problems but because participatory media supports disruption of the status quo. A public that primarily consumes media gets the world that media producers decide they want to offer. A public that produces and circulates media expressions gets to help decide what world it wants.

Social media, because of its disruptive and transformative potential, is both essential and nigh on impossible to get into the classroom. This is precisely why it needs to happen, and the sooner it happens, the better.

But integrating participatory media and the participatory practices they support into the field of education is not a simple matter. Too often people push for introduction of new technologies or practices (blogging, wikis, chatrooms and forums) without considering the dispositions required to use them in participatory ways. A blog can easily be used as an online paper submission tool; leveraging its neatest affordances--access to a broad, engaged public, joining a web of interconnected arguments and ideas, offering entrance into a community of bloggers--takes more effort and different, often more time-consuming, approaches.

Additionally, while social networks for educators hold a great deal of promise for supporting the spread of educational practices, designing, building, and supporting a vibrant community of educators requires thinking beyond the chosen technology itself.

Five Tips for Seeding and Feeding your Community

With these points in mind, I offer my first shot at strategies for seeding and beginning to feed a participatory educational community. (Weeding, the best part of the endeavor, comes later, once my tactics have proven to work.)

1. Think beyond the classroom setting.
In the recently published National Writing Project book, Teaching the New Writing, the editors point out that for teachers to integrate new media technologies into their classrooms, they "need to be given time to investigate and use technology themselves, personally and professionally, so that they can themselves assess the ways that these tools can enhance a given curricular unit."

The emerging new media landscape offers more than just teaching tools--it offers a new way of thinking about communication, expression, and circulation of ideas. We would do well to remember this as we devise strategies for getting teachers involved in educational communities online. After all, asking a teacher who's never engaged with social media to use it in the classroom is like asking a teacher who's never used the quadratic equation to teach Algebra.

Anyone who knows me knows what a fan of blogging I am. I proselytize, prod, and shame people into blogging--though, again, not because I think blogging is the best new practice or even necessarily the most enjoyable one. Blogging is just one practice among a constellation of tools and practices being adopted by cutting edge educators, scholars, and Big Thinkers across all disciplines. Blogging was, for me, a way into these practices and tools, and I do think blogging is one of the most accessible new practices for teacherly / writerly types. The immediacy and publicness of a blogpost is a nice preparation for increased engagement with what Clay Shirky calls the “publish, then filter” model of participatory media. This is a chaotic, disconcerting, and confusing model in comparison to the traditional “filter, then publish” model, but getting in sync with this key element of participatory culture is absolutely essential for engaging with features like hyperlinking, directing traffic, and identifying and writing for a public. In a larger sense, connecting with the publish, then filter approach prepares participants to join the larger social networking community.

2. Cover all your bases--and stop thinking locally
One of the neatest things about an increasingly networked global community is that we're no longer limited to the experts or expertises of the people who are within our physical reach. Increasingly, we can tap into the knowledge and interests of like-minded folks as we work to seed a new community.

Backing up a step: It helps, in the beginning for sure but even more so as a tiny community grows into a small, then medium-sized, group, to consider all of the knowledge, experience, and expertises you would like to see represented in your educational community. This may include expertise with a variety of social media platforms, experience in subject areas or in fields outside of teaching, and various amounts of experience within the field of education.

3. In covering your bases, make sure there's something for everyone to do.
Especially in the beginning, people participate when they a) have something they think is worth saying, b) feel that their contributions matter to others, and c) can easily see how and where to contribute. I have been a member of forums where everybody has basically the same background and areas of expertise; these forums usually start out vibrant, then descend into one or two heavily populated discussion groups (usually complaining or commiserating about one issue that sticks in everyone's craw) before petering out.

Now imagine you have two teachers who have decided to introduce a Wikipedia-editing exercise into their classrooms by focusing on the Wikipedia entry for Moby-Dick. Imagine you have a couple of Wikipedians in your network who have extensive experience working with the formatting code required for editing; and you have a scholar who has published a book on Moby-Dick. This community has the potential for a rich dialogue that supports increasing the expertise of everybody involved. Everybody feels valued, everybody feels enriched, and everybody feels interested in contributing and learning.

4. Use the tool yourself, and interact with absolutely everybody.
Caterina Fake, a co-founder of Flickr, says that she decided to greet the first ten thousand Flickr users personally. Assuming ten thousand users is several thousand more than you want in your community, you might have the time to imitate Fake's example. It also helps to join in on forums and other discussions, especially if one emerges from the users themselves. Students are not the only people who respond well to feeling like someone's listening.

Use the tool. Use the tool. Use the tool. I can't emphasize enough how important this is. You should use it for at least one purpose other than seeding and feeding your community. You should be familiar enough with it to be able to answer most questions and do some troubleshooting when necessary. You should be able to integrate new features when they become available and relevant, and you should offer a means for other users to do the same.


5. Pick a tool that supports the needs of your intended community, and then use the technology's features as they were designed to be used.

Though I put this point last, it's the most important of all. You can't--you cannot--build the right community with the wrong tools. Too often, community designers home in on a tool they have some familiarity with or, even worse, a tool that they've heard a lot about. This is the wrong tack.

What you need to do is figure out what you want your community to do first, then seek out a tool that supports those practices. If you want your community to refine an already-established set of definitions, approaches, or pedagogical tenets, then what you're looking for is a wiki. If you want the community to discuss key issues that come up in the classroom, you want a forum or chat function. If you want them to share and comment on lesson plans, you need a blog or similar text editing function.
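The goal-first selection process can be summarized as a simple lookup. This sketch just restates the suggestions above; the mapping is illustrative and deliberately not exhaustive:

```python
# Illustrative mapping from what you want the community to do
# to the kind of tool that supports it, per the suggestions above.
TOOL_FOR_GOAL = {
    "refine shared definitions or tenets": "wiki",
    "discuss key classroom issues":        "forum or chat",
    "share and comment on lesson plans":   "blog",
}

def pick_tool(goal):
    # Deciding the goal comes first; an unknown goal means the
    # community isn't ready to pick a platform yet.
    return TOOL_FOR_GOAL.get(goal, "undecided -- clarify the goal first")

print(pick_tool("discuss key classroom issues"))
# → forum or chat
```

The code is trivial on purpose: the hard part is the first step (agreeing on what the community should do), not the lookup.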

Once you've decided on the functions you want, you need to stick with using them as god intended. Do not use a wiki to post information that doesn't need community input. Don't use a forum as a calendar. And don't use a blog for forum discussions.

It's not easy to start and build a community, offline or online. It takes time and energy and a high resistance to disappointment and exhaustion. But as anybody who's ever tried and failed (or succeeded) to start up a community knows, we wouldn't bother if we didn't think it was worth the effort.

Weeding, Seeding, and Feeding Social Educational Designs

This post examines the implications of a post at the Harvard Business blog by David Armano titled Debunking Social Media Myths about social business design. He points to three labor-intensive activities that are necessary for a profitable social network: weeding, seeding, and feeding. We examine these three considerations for social education design, and how they are necessary for a worthwhile social network for educators.

First some background and context. One of our primary interests here at Re-Mediating Assessment is how innovative classroom assessment practices can be shared over digital social networks. By assessment practices, we mean both particular assessments for particular activities, as well as expertise associated with those practices. Of course, we know that most efforts to create collaborative networks for educators don’t take hold (Check out the 2004 book Designing Virtual Communities in the Service of Learning by Barab, Kling, and Gray, and a special issue of The Information Society they edited for a good discussion of some pioneering efforts).

We have previously written about the value of insights from media scholarship for thinking about the sharing of educational practices. In particular Henry Jenkins’ notions of "spreadable" practices have prompted us to launch serial posts at Project New Media Literacies introducing the idea of Spreadable Educational Practices (SEP) and to juxtapose them with the doomed distribution of centrally defined and "scientifically" validated scripts, what we label Disseminated Instructional Routines.

We are currently outlining several new proposals to expand the nascent networks that are forming around various efforts. We also want to design and test strategies for helping other nascent networks succeed by helping to foster spread of effective practices and the necessary social bonds.

Media scholars inevitably consider the for-profit nature of commercial media. Of course, not all media scholars care about markets and eyeballs, and the nature of media markets is undergoing tremendous change. In our prior posts at Project NML, Henry's descriptions of failed corporate efforts to create "viral" messages and "sticky" websites seemed to describe some of the failed efforts to create educator social networks. The point here is that educational social design can be informed by business social design.

Armano’s talk at the Conversational Marketing Summit pointed out something that is easy to underestimate: "Being social means having real people who actively participate in your initiatives." Because educators tend to be so overwhelmed by the daily press of teaching, worthwhile social education design must find ways to get teachers actively involved. And this takes resources. Building a network will require significant support at the outset, likely sponsoring leading participants to welcome newcomers and foster effective practices. As Clay Shirky wrote in Here Comes Everybody, the founder of Flickr said that she learned early on that "you have to greet the first ten thousand users personally.”

Here are three things that Armano says that successful networks must plan for, and what they might look like in an educational network:

Seeding: Someone has to seed a network with resources and practices that your particular users need. In the Participatory Activities and Assessment Network we are trying to build out, we will budget quite a bit of time for research assistants to work with lead educators to identify interesting and engaging on-line activities for their students. However, the participatory assessments that educators can use to implement those activities in their classrooms and refine them over time will likely have to be constructed by the research effort. So we are budgeting for that too. We will work with heavily subsidized and fully-supported lead teachers for the first year to seed the network with useful activities before bringing on less-subsidized and partly-supported teachers. Only then do we think that the network will be sufficiently seeded to expect unsupported users to start participating in large numbers.

Feeding. The network needs a steady stream of content. By content, what we mean is information--information that other participants will find useful. In our case, the most useful information will be the anecdotes and guidelines for implementing participatory activities with actual students, and sharing the "low stakes" evidence obtained from participatory assessments for improving success. For example, we view the posting of accounts, videos, and artifacts from the enactment of successful implementation as crucial content that needs to be fed to the network. It is our job to make sure that there is both a source and an audience. Our lead teachers will need help posting accounts of enactments to the network. For example, most teachers know that they themselves can't post video of their students to YouTube (as researchers we are forbidden from even thinking about doing so because of Human Subjects constraints). But there is nothing to stop us from giving the lead teachers several inexpensive flip cams and letting students post accounts of themselves to YouTube (if the school allows access; they may be better off with SchoolTube which is less likely to be blocked).

Once accounts of practice are up, it is also our job to ensure that there is an audience. In this case, we will pair teachers up to select activities to complete with their classrooms, and then ask one of them to implement first. The first lead teacher’s posted accounts and informal guidelines will be immediately useful for the second lead teacher. One or two simple successes like this will create a powerful social bond between two otherwise isolated participants. Because this interaction will take place via public and persistent discourse in the network, the accounts will be immediately useful for other participants wishing to use the activities; this discourse will be crucial for helping a newcomer locate and access the informal expertise that is now spread across the network in those two lead teachers.

Weeding. Armano points out that productive social business design must prune content that inhibits growth. This might be the most challenging aspect of a productive social education design. It partly refers to getting rid of problematic content. One of the lessons we learned in our collaboration with Project New Media Literacies, working with Becky Rupert at Aurora High School, is the need to involve students in helping keep offensive or objectionable material off of school-related networks. If students find the networking activities an enjoyable alternative to traditional activities, they quickly become a powerful ally in minimizing transgressions. This is crucial, as teachers simply won’t have time to do it themselves and will be overwhelmed by the nuanced decisions between creative expressions and those that are patently offensive.

Another important part of weeding is getting rid of stuff that does not work. As educators we have a tendency to hang on to everything and make it available to all. Thus we create a huge obstacle for other educators, who have to weed through endless lists of resources looking for the right one, and then implement it and hope it succeeds. Our network assumes that most web-based educational resources are not very good at fostering worthwhile classroom participation. This is not because the resources are inherently bad, but because participatory classroom culture is so challenging to attain. Our Participatory Activities and Assessment Network will start with a carefully catalogued and tagged set of activities that have been initially vetted and aligned to one or two Relevant Big Ideas (or RBIs, which in turn can be easily aligned to content standards). As lead teachers select activities for further consideration, the activities will be tested by research assistants before participatory assessments are created and released along with them. If an activity and its assessments don’t foster worthwhile participation for the first two lead teachers, it will be tagged as such.

Importantly, the network will contain information about the nature of that “productive failure” that will be useful for others. It may well be that the activity turned out to be too easy or too hard, or required too much background knowledge for the particular students. Rather than labeling the activity “useless,” it should be tagged in a way that another teacher who works with students for whom it might be “perfect” can find the activity, along with the information and distributed expertise for using it. It looks to us like building information systems for accomplishing this will be one of our major challenges.
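One way such an information system might record a “productive failure” is as searchable metadata on each activity rather than as a verdict. This is a hypothetical sketch; all field names, activity titles, RBIs, and tags are invented for illustration:

```python
# Hypothetical sketch of activity records in the network's catalogue.
# A "productive failure" is stored as searchable metadata (what went
# wrong, and for whom) rather than marking the activity useless.

activities = [
    {"title": "Fraction strips webquest",
     "rbis": ["part-whole reasoning"],
     "outcomes": [{"result": "productive_failure",
                   "note": "assumed too much background knowledge",
                   "audience": "6th grade, first exposure"}]},
    {"title": "Ecosystem simulation game",
     "rbis": ["energy flow"],
     "outcomes": [{"result": "success",
                   "note": "rich discourse in both enactments",
                   "audience": "8th grade"}]},
]

def find_by_rbi(catalogue, rbi):
    """Return activities aligned to a Relevant Big Idea, history included."""
    return [a for a in catalogue if rbi in a["rbis"]]

for activity in find_by_rbi(activities, "energy flow"):
    print(activity["title"])
# → Ecosystem simulation game
```

The design choice the sketch illustrates: a teacher searching by RBI sees the full enactment history, including for whom an activity failed, so a "failure" for one classroom can still surface as a candidate for another.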

Sunday, July 5, 2009

On collaborative platforms for sharing educational practices

I've been in conversation with lots of educators recently about strategies for developing and supporting collaborative communities of teachers within various social networks online. Most recently I am talking with IU Mathematics Education Professor Cathy Brown about the lovely site that she has created in Moodle to support the math teachers who are teaching at the New Tech High Schools in Indiana. We are going to meet to see if some of the ideas we have been developing about participatory activities and assessment might help NewTech teachers use the site to do what they are doing: helping integrate mathematics into interesting and engaging projects. Because Indiana is now rolling out End of Course assessments in Algebra (along with English and Biology), I assume that these teachers are under significant pressure to show not only that their students are passing (required to get credit for the course) but that they are excelling. This creates an important tension that gets at the heart of what we care about here at Re-Mediating Assessment.

Though I'd like to say otherwise, there is unfortunately no perfect tool--no single network that magically fosters community, cooperation, and collaboration. Part of this is due to the fact that all platforms are designed to support only certain kinds of engagement and therefore have benefits and drawbacks inherent to them; the other factor is that too often, people try to bend a community to the affordances of the technology instead of finding a tool or set of tools that align most closely to the needs of the community. As for platforms, I have bounced around among several, each with distinct advantages and disadvantages. I want to take a minute to share my experiences and then make the point I want to make.

I used SocialMediaClassroom for my graduate classes in Spring 2009, and that was very informative and helpful. One of the great things about using it was that it hooked us up with its sponsor, social networking pioneer Howard Rheingold and his deep and interesting community, who kibitz at his installation of SMC at http://socialmediaclassroom.com/. It also hooks you up with the open-source Drupal community, which also has a lot of potential. It was a bit buggy, which was not surprising as it was an early-stage open-source program. Sam Rose did a tremendous job setting it up and was really helpful both in getting it installed and then working out the many bugs that resulted from my ignorance. MacArthur's Digital Media and Learning initiative funded the initial development, and the platform is being used in the DML hub, which is also important.

This summer I have been using Indiana University's OnCourse CL, an online collaborative learning environment designed through the open-source Sakai Project. OnCourse brings with it the whole Sakai community and is very stable. Now that it has e-portfolios and wikis, it has a lot of potential for the kinds of participatory activities and assessments that are so important to me. Stacy Morrone has pushed hard on the e-Portfolio features, and they really have tremendous untapped potential. A big personal advantage for me in using OnCourse is the tremendous support that I get from the IU staff, who are quite committed to it. The Learning Sciences graduate program just got a grant to expand our online course offerings; we aim to use this to build a strong community of scholars around these courses, and will be using OnCourse.
The big drawback with OnCourse is that it is so closed--it only supports participation from IU affiliates and therefore restricts participation across multiple institutions. Case in point: I was planning on having the students in my Cognition and Learning course seek feedback from at least one outside expert or peer on the e-Portfolios that each of them is drafting. The author of our textbook, Roger Bruning, has even agreed to review some. But for non-IU folks to do so, they have to register for guest accounts. I have to do the same all the time so I can view my class as a student (another hassle of OnCourse), and I know what a headache it is--I have to get a new password every time. So I really can't include outside feedback in the course requirements, as it would cause a revolt and a lot of headaches. Of course, the beauty of the Sakai platform is that I should be able to build and mount my own version for this. I will keep you posted!

For the last year, we have been working with an ELA curriculum designed by Project New Media Literacies, a project headed by media scholar Henry Jenkins and funded by the MacArthur Foundation's Digital Media and Learning Initiative. Our collaboration with Project NML revolved around a site in Ning which, like Moodle, is very popular with teachers. (Ning has dominated the "best educational use of a social networking service" category of the Edublogs Awards for the last two years: in 2008, nine of the ten finalists were Ning-based, and in 2007 all ten finalists were based in Ning.) Our thoughts are influenced as usual by Clay Shirky. In Here Comes Everybody he pointed out that "there are no generically good tools, only tools that are good for certain purposes."

The point I want to make here is that focusing too much on the actual hub ends up as technological determinism--and leads to efforts to squeeze the community into the tool instead of using the tool to support the community. We must be much more focused on the participatory cultures and practices that the networks support. Often, this means supporting layered use of various technologies, according to the interests, needs, and dispositions of community members. In fact, the most important evidence that you have established a participatory culture around a network is that the practices you are fostering in your network spread to other networks. In other words, if you lurk on other networks, you should see references to your network and its practices.

opening up scholarship: generosity among grinches

why academic research and open exchange of ideas are like that bottle of raspberry vinaigrette salad dressing you've had in the back of your fridge since last summer


The folks over at Good Magazine are tossing up a series of blogposts under the heading "We Like to Share."

The articles are actually a series of interviews with creative types in a variety of fields who share one characteristic: they believe that sharing of ideas and content is valuable and important. The edited interviews are being posted by Eric Steuer, the Creative Director of Creative Commons--a project which, though I admittedly don't fully understand it, I find deeply ethical and innovative with respect to offering new approaches to sharing and community.

So far, two posts have gone up, the first with Chris Hughes, a co-founder of Facebook and the former online strategist for the Obama presidential campaign, and the second with Flickr founder Caterina Fake. Talking about how much we've changed in our attitudes toward sharing, Fake explains that
[i]f you go online today you will see stories about Obama sharing his private Flickr photos. So this is how far the world has come: our president is sharing photos of his life and experiences with the rest of the world, online. Our acceptance of public sharing has evolved a lot over the course of the past 15 years. And as people became increasingly comfortable sharing with each other—and the world—that lead to things that we didn’t even anticipate: the smart mob phenomenon, people cracking crimes, participatory media, subverting oppressive governments. We didn’t know these things were going to happen when we created the website, but that one decision—to make things public and sharable—had significant consequences.


Hughes' interview is less overtly about sharing as we typically think of the term, but he points out that the Obama campaign was successful because it focused on offering useful communications tools that lowered barriers to access and then
getting out of the way of the grassroots supporters and organizers who were already out there making technology the most efficient vehicle possible for them to be able to organize. That was a huge emphasis of our program: with people all over the place online—Facebook, MySpace, and a lot of other different networks—we worked hard to make sure anyone who was energized by the campaign and inspired by Barack Obama could share that enthusiasm with their friends, get involved, and do tangible things to help us get closer to victory. The Obama campaign was in many ways a good end to the grassroots energy that was out there.


Both interviews, as far as they go, offer interesting insights into how sharing is approached by innovators within their respective spheres. But though these posts present their subjects as bold in their embrace of sharing and community, their ideas about what sharing means and how it matters are woefully...limited. Fake uses the Obama example to point out how far we've come; but really, does Obama's decision to make public photos of his adorable family mean much more than that he knows how to maintain his image as the handsome, open President who loves his family almost to a fault? I don't imagine we'd be very surprised to learn that Obama's advisors counseled him to make these photos widely available.

Indeed, the Flickr approach, in general, is this: These photos are mine and I will let you see them, but you have to give them back when you're done. It's a version of sharing, yes, but only along the lines of the sharing we learned to do as children.

The same is true of the picture Hughes paints of a campaign that successfully leveraged social networking technologies. The Obama campaign's decision to use participatory technologies was a calculated move: Everybody knows that (a) more young, wired, and tech-savvy people supported Obama than McCain; and (b) those supporters required a little extra outreach in order to line up at the polls on election day. You can bet that if Republicans had outnumbered Democrats on Facebook, Obama's managers would have been a little less quick to embrace these barrier-dropping communication tools.

What we're not seeing so far among these innovators is an innovative approach to sharing--one that opens up copyright-able and patent-able and, therefore, economically valuable ideas and content to the larger community.

I've been thinking about this lately because of my obsession with open education and open access. In particular, educational researchers--even those who embrace open educational resources--struggle with the prospect of making their work available to other interested researchers.

This makes sense to anyone who's undertaken ed research--prestige, funding, and plum faculty positions (what little there is of any of these things) are secured through the generation of innovative, unique scholarship and ideas, and ideas made readily available are ideas made readily stealable. As a fairly new addition to the field, even I have been a victim of intellectual property theft. It's enough to give a person pause, even if, like me, you're on open education like Joss Whedon on strong, feminist-type leading ladies.

But, come on, we all know there's no point to hiding good research from the public. As Kevin Smith writes in a recent blogpost on a San Jose State University professor who accused a student of copyright violation for posting assigned work online,

[t]here are many reasons to share scholarship, and very few reasons to keep it secret. Scholarship that is not shared has very little value, and the default position for scholars at all levels ought to be as much openness as is possible. There are a few situations in which it is appropriate to withhold scholarship from public view, but they should be carefully defined and circumscribed. After all, the point of our institutions is to increase public knowledge and to put learning at the service of society. And there are several ways in which scholars benefit personally by sharing their work widely.


Smith is right, of course, and the only real issue is figuring out strategies for getting everybody on board with the pro-sharing approach to scholarship. The "I made this and you can see it but you have to give it back when you're done" model is nice in theory but, in practice, limits innovation and progress in educational research. A more useful approach might be along the lines of: "I made this and you can feel free to appropriate the parts that are valuable to you, but please make sure you credit my work as your source material." This is a key principle at the core of the open education approach and of what media scholar Henry Jenkins calls "spreadability."

The problem is that there are enough academics who subscribe to the "share your toys but take them back when you're done playing" approach to research that anybody who embraces the free-appropriation model of scholarship ends up getting every toy stolen and has to go home with an empty bag. This is why the open education movement holds so much promise for all of academia: Adherents to the core values of open education agree that while we may not have a common vocabulary for the practice of sharing scholarship, we absolutely need to work to develop one. For all my criticisms of the OpenCourseWare projects at MIT and elsewhere, one essential aspect of this work is that it opens up a space to talk about how to share materials, and why, and when, and in what context. The content of these projects may be conservative, but the approach is wildly radical.