Wednesday, December 4, 2013

The Varied Functions of Digital Badges in the Educational Assessment BOOC


by Dan Hickey and Tara Kelley


This extended post details how open digital badges were incorporated into the Educational Assessment Big Open Online Course (BOOC).  In summary, there were four types of badges:
  •  Assessment Expertise badges for completing peer-endorsed wikifolios and an exam in each of the three sections of the course (Practices, Principles, and Policies)
  •  An Assessment Expert badge for earning the three Expertise badges and succeeding on the final exam
  •  Leader versions of the Expertise and Expert badges for earning the most peer promotions in one's networking group
  •  A Customized Assessment Expert badge for completing a term paper that assembles all of the insights gained across the 11 wikifolio assignments into a coherent professional paper.  This badge allows earners to indicate the state, domain, or context in which they will have developed local expertise about assessment.
Along the way, this post explores (a) how open badges differ from grades and other static (i.e., non-networked, evidence-free) credentials, (b) how we incorporated evidence of learning directly into the badges, and (c) the role of badges in making claims about general, specific, and local expertise.

Previous posts describe the BOOC, the peer promotion and endorsement features, the role of the textbook, and how one student experienced the course and the badges.  Future posts will describe the code and interface used to issue the badges in Course Builder, the entire corpus of badges issued, how earners shared them, what we learned by analyzing the evidence they contained, and the design principles for recognizing, assessing, motivating, and studying learning that the BOOC badges illustrate.

Open digital badges are different from other credentials because they can contain specific claims about learning, detailed evidence of that learning, and can be readily accumulated and displayed.  If you want more background you might refer to this EDUCAUSE brief by Carla Casilli and Erin Knight from Mozilla.  Ultimately open badges are nothing but eight pieces of information and an image.  But they have tremendous potential to transform the way that learning is recognized and assessed.  And because recognition and assessment are so fundamental to education, open badges have the potential to transform learning as we know it.
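To make those eight pieces of information concrete, here is a minimal sketch of what a badge assertion looks like when written out as a Python dictionary.  The field names follow Mozilla's Open Badges 1.0 assertion specification; every value (the URLs, dates, and identifiers) is made up for illustration and is not drawn from the BOOC's servers.

```python
# Illustrative only: an Open Badges 1.0-style assertion as a Python dict.
# Field names follow Mozilla's assertion specification; the values are
# hypothetical and do not come from the BOOC.
badge_assertion = {
    "uid": "booc-practices-0042",                      # issuer-unique identifier
    "recipient": {                                     # who earned the badge
        "type": "email",
        "hashed": True,
        "identity": "sha256$<hash of the earner's email>",
    },
    "badge": "https://example.org/badges/assessment-practices.json",  # badge class URL
    "verify": {                                        # how a consumer checks authenticity
        "type": "hosted",
        "url": "https://example.org/assertions/0042",
    },
    "issuedOn": "2013-10-15",
    "image": "https://example.org/images/assessment-practices.png",
    "evidence": "https://example.org/evidence/0042",   # the evidence page discussed below
    "expires": None,                                   # optional; these badges do not expire
}
```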
Badges are one of the many reasons we were happy that Google gave a grant to Indiana University so that we could offer a Big Open Online Course on Educational Assessment.  Dan has been teaching the course online for many years and introduced open badges and peer-endorsement in the Summer 2013 course in which Tara served as a graduate teaching assistant.  The grant and an open course allowed us to find out what was really possible.  Following is a summary of the badges we ended up issuing.

Assessment Expertise Badges for Completing Course Sections
Participants in the BOOC could earn three Assessment Expertise badges that corresponded to the three sections of the course.  Assessment practices concerned curricular aims & standards and the primary classroom assessment formats: selected-response, constructed-response, performance assessment, and portfolio assessment.  Assessment principles concerned the principles of validity, reliability & bias, and formative feedback.  Assessment policies concerned complex issues associated with standardized testing, test preparation, the evaluation of instruction, and grading.
BOOC Project Coordinator Garrett Poortinga designed the rather striking images for these badges [Figure 1].  Some wondered about the choice of “little green people.”  That was precisely the point.  Garrett intended the images to cause people to wonder what they meant.  Our initial results suggest he was successful: several people have said the images made them want to follow the link and see what the badge was about.


 Figure 1. Assessment Expertise Badges

To earn each badge, students needed to have completed an open-book multiple-choice exam and each of the 3-4 interactive “wikifolios” assigned weekly in that section.  A wikifolio was deemed complete if one or more classmates had endorsed it as such by clicking on the corresponding link (we later verified that only a couple of endorsed wikifolios were missing major elements).  To claim their badge, participants simply click on the corresponding icon on their account page.  Doing so generates the unique URL (weblink) that points back to our server to gather the image and eight fields of information associated with the badge.  Our programmer Thomas Smith made it simple for earners to push their badges out to Facebook, Twitter, email, and so on [Figure 2].

Figure 2. Badge information on account page.
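A future post will walk through the actual Course Builder code, but the basic shape of the server side is easy to sketch: the unique URL generated when a badge is claimed has to resolve to an endpoint that returns the assertion.  The following is only a rough sketch of such a hosted-assertion handler on App Engine; the in-memory lookup and the route are hypothetical stand-ins for the BOOC's real storage and URLs.

```python
import json

import webapp2  # Course Builder runs on Google App Engine, which bundles webapp2

# Hypothetical in-memory stand-in for however issued badges are actually stored.
_ASSERTIONS = {
    '0042': {'uid': '0042',
             'badge': 'https://example.org/badges/assessment-practices.json',
             'verify': {'type': 'hosted',
                        'url': 'https://example.org/assertions/0042'}},
}


def load_assertion(badge_uid):
    return _ASSERTIONS.get(badge_uid)


class BadgeAssertionHandler(webapp2.RequestHandler):
    """Serves the hosted assertion JSON that a claimed badge's unique URL points to."""

    def get(self, badge_uid):
        assertion = load_assertion(badge_uid)
        if assertion is None:
            self.abort(404)  # unknown badge id
        self.response.headers['Content-Type'] = 'application/json'
        self.response.write(json.dumps(assertion))


# Map each badge's unique URL onto the handler.
app = webapp2.WSGIApplication([
    webapp2.Route('/assertions/<badge_uid>', BadgeAssertionHandler),
])
```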

Because our badge metadata is compliant with Mozilla’s Open Badges Infrastructure (OBI), earners can push their badges out to an external digital backpack at Mozilla.  In the backpack, earners can assemble collections of OBI-compliant badges, add additional information about badges and collections, and choose to make them public or private.  For example, if you go to Dan’s backpack, you can see that he explains that he was just testing the system, and that he put his badge in a public folder called “My Badges about Badges.”  Being able to accumulate badges, add additional information, and readily display them are key ways that open digital badges exploit digital social networks.  For other examples you can follow this link to go to #BOOC_IU at Twitter and see what our participants have said about their badges in that context.
Clicking on the evidence link on the badge takes the viewer to the evidence page on our server.  The BOOC team spent many productive hours deliberating over the evidence that would be shown on that page, and as a result Thomas was able to build in some excellent new features.  The evidence page shows the criteria for earning the badge and includes links to the actual wikifolios that students completed.  The page also indicates how many comments were included in the threaded discussions at the bottom of the wiki, the number of peer endorsements (for being complete), and the number of peer promotions (for being exemplary).  To protect classmates’ privacy, we elected to display only the number of comments on each wikifolio, rather than the content & author of the comments.  But we also decided to give the earner the option of including the wikifolio itself (which likely allowed some participants to discuss their work environment more freely with their classmates).  After some consideration, we also decided to allow additional writing or interaction that occurs after a badge is issued to still appear through those links.  While this might be a problem in some settings, it seemed to open up lots of possibilities for additional engagement and feedback.  For example, participants might gain new insights working on the optional term paper described below, and might want those insights to be included in their evidence page.
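To give a sense of the shape of the data behind the evidence page, here is a sketch of the kind of record it might be rendered from.  The field and function names are hypothetical rather than the BOOC's actual schema, but the elements mirror what is described above: the badge criteria, optional links to the wikifolios, and counts (only counts) of comments, endorsements, and promotions.

```python
from collections import namedtuple

# Hypothetical shape of the per-wikifolio evidence shown on the page: only the
# number of comments is exposed, never their content or authors.
WikifolioEvidence = namedtuple(
    'WikifolioEvidence',
    ['title', 'url', 'num_comments', 'num_endorsements', 'num_promotions'])


def build_evidence_record(criteria_text, wikifolios, share_wikifolio_links):
    """Assemble the data an evidence page could be rendered from (sketch only).

    `wikifolios` is assumed to be a list of dicts summarizing each assignment;
    `share_wikifolio_links` reflects the earner's choice about whether the
    wikifolios themselves are linked.  Because the links point at the live
    wikifolios, writing and interaction after the badge is issued show up too.
    """
    evidence = []
    for wf in wikifolios:
        evidence.append(WikifolioEvidence(
            title=wf['title'],
            url=wf['url'] if share_wikifolio_links else None,
            num_comments=wf['num_comments'],
            num_endorsements=wf['num_endorsements'],
            num_promotions=wf['num_promotions']))
    return {'criteria': criteria_text, 'wikifolios': evidence}


# Example with made-up numbers:
record = build_evidence_record(
    'Complete every Practices wikifolio with at least one peer endorsement '
    'and pass the open-book exam.',
    [{'title': 'Curricular Aims', 'url': 'https://example.org/wikifolios/123',
      'num_comments': 7, 'num_endorsements': 4, 'num_promotions': 2}],
    share_wikifolio_links=True)
```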

Assessment Expert Badge for Completing the Course
Participants who earn all three Expertise badges and score at least 80% on the final exam will earn the Assessment Expert badge that summarizes their accomplishments. Notice that the Expertise and Expert badges do not state what the earner can do.  Rather, they describe what the earner did: the topics in which the earner developed some expertise, along with a detailed account of what that learning consisted of.  It is up to the viewer to conclude what this means.
It is worth pondering how different credentialing becomes when credentials contain direct evidence of what the earner did to earn them.  Consider, for example, that many of the participants are taking the BOOC to earn professional development credit for their jobs.  Consistent with the existing documentation systems in many school districts, we are offering participants who earn the Assessment Expert badge an “instructor verified” certificate stating that they completed 30 hours of professional development.  But we did not have time to get our certificate approved by the state, and our BOOC is not an accredited graduate-level course.  This means that some participants may need additional evidence of their professional development.  We presume that they should be able to simply email their badges or include the links when they submit the certificate.  Furthermore, most of them clearly engaged for more than three hours a week in this class.  Our goal is that badges will serve as direct evidence of that engagement.
This ability to readily gather and display direct evidence of engagement is central to the transformative potential of digital badges.  When John Frederiksen and Allan Collins introduced the tantalizing notion of systemic validity in 1989, they pointed out that direct assessments of learning are more likely to support assessment-driven improvement of educational systems.  But they also pointed out that the vast majority of evidence in existing school systems comes from indirect assessments.  Arguably, this is why all of the ensuing assessment-driven reforms have been so ineffective and problematic.  The assessment reforms of the mid-1990s and the Smarter Balanced assessments in the current Race to the Top initiative both feature performance assessments that are more direct than conventional achievement tests.  But they can’t possibly include direct evidence of engagement as well.  Certainly the badges movement can learn a lot from these other initiatives.  But it seems that large-scale assessment reforms will have a lot to learn from badges.

What Do We Really Mean by Assessment Expert?
As a visitor to the BOOC Facebook page reminded us, one course certainly does not make someone an “expert” in the conventional sense of the word.  Indeed, studies of expertise have shown that in most domains, individuals who are deemed “experts” have at least 10,000 hours of deliberate practice and experience (roughly five years of full-time work).  While our Expert badge does indeed state “expert,” we do not make any claims about general expertise.  Quite to the contrary, the wikifolios show that the earner’s new expertise is actually quite specific to a particular curricular aim and context.
This gets at a crucial aspect of the BOOC’s structure.  The BOOC’s instructional design is consistent with contemporary situative theories of learning.  These theories suggest that the context in which knowledge is learned and used is an essential part of that knowledge.  (Hence the label “situative” and the notion of “situated cognition.”)  The BOOC assumes that course concepts are best learned by considering their appropriateness or relevance in personally meaningful contexts.  The Assessment BOOC is designed so that learners will “co-construct” their understanding of their own experiences and role in education alongside their new understanding of the disciplinary concepts of assessment.  Our primary goal is to create “local experts” who know significantly more about assessment in a specific context, concerning a specific curricular aim, than most of their peers.  Our secondary goal is enabling our graduates to interact with colleagues whose roles lead them to think about assessment differently.
This contextualized learning in the BOOC is accomplished by organizing all of the learning around an individually defined curricular aim that embodies each participant’s actual or aspirational role in the larger educational ecosystem.  Participants were asked to define a curricular aim in order to register for the course.  They then further refined that aim and expanded on their role in education in the first assignment.  Some participants struggled with the crucial distinction between describing a lesson (i.e., teaching) and the educational aim of that lesson (i.e., learning).  Each weekly assignment asked them to further refine their aim and role in light of the week’s assessment topic while drawing on their growing understanding of assessment.  They then learned about the specific topics each week by considering and discussing the relative relevance of those concepts to their individual aims and roles. 
Consider, for example, the unit on validity.  Participants were expected to learn the difference between criterion validity (concerning the level of performance required on a particular assessment), content validity (concerning the way the content of the assessment relates to the instruction and the claim being made), and construct validity (concerning psychological constructs).  While Jim Popham’s book does a good job explaining these and gives some examples, they are still likely to remain abstract nuances.  These concepts gain new meaning as the participants consider them within their own role and context.  On the topic of validity, (a) many administrators and consultants found criterion validity most relevant, (b) nearly all of the classroom teachers and faculty found content validity most relevant, and (c) just the few participants who were assessing things like cultural competency found construct validity most relevant.

Assessment Leader Badges for Exemplary Work
Each weekly wikifolio assignment includes expected interaction with classmates via threaded comments at the bottom of the wikifolios.  Each participant is expected to (a) post at least one question to their classmates, (b) interact with their classmates through discussion threads, (c) endorse at least three wikifolios as being complete, and (d) promote one (and only one) wikifolio as being “exemplary” and explain what was exemplary about it.  While participation in the questioning and commenting (which were not directly related to badges) was uneven, participation in endorsement and promotion was consistent and enthusiastic.
The weekly feedback that was sent out to the class indicated which member of each of the six professional networking groups earned the most promotions and summarized what was found to be exemplary about their wikifolios.  For each of the three sections of the course, the member of each group who earns the most promotions is awarded an Assessment Leader badge.  The badge states that classmates found the earner’s work to be exemplary; clicking on the number of promotions reveals the reasons the work was promoted as exemplary.
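The bookkeeping behind that weekly feedback is straightforward to imagine: tally each participant's promotions within their networking group and take the top earner.  The sketch below is not the BOOC's actual code, and the group and participant names are made up, but it shows the basic computation.

```python
from collections import Counter, defaultdict


def leaders_by_group(promotions):
    """Return {group: (participant, promotion_count)} for the top earner per group.

    `promotions` is assumed to be an iterable of (group, participant) pairs,
    one per promotion received.  Ties are not handled in this sketch.
    """
    tallies = defaultdict(Counter)
    for group, participant in promotions:
        tallies[group][participant] += 1
    return {group: counts.most_common(1)[0] for group, counts in tallies.items()}


# Example with made-up groups and names:
promotions = [('K-12 Teachers', 'Pat'), ('K-12 Teachers', 'Pat'),
              ('Administrators', 'Lee'), ('Higher Ed', 'Sam'),
              ('Higher Ed', 'Sam'), ('Higher Ed', 'Alex')]
print(leaders_by_group(promotions))
# -> {'K-12 Teachers': ('Pat', 2), 'Administrators': ('Lee', 1), 'Higher Ed': ('Sam', 2)}
```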
The Leader badges seem to be quite valued by participants and are definitely being shared more than the Expertise and Expert badges.  One earner posted a comment on a previous RMA post explaining that she shared her badge with family and friends who were educators, and that a relative in turn indicated that she had earned a badge from an activity during the USDOE’s Connected Educator Month.

Term Papers and Customizable Assessment Expert Badge
Our final badge will be a version of the Assessment Expert badge where the earner can specify the US state, educational domain, or context in which they want to claim some expertise.  They will be asked for a shorter description (12 characters or 15 spaces) to appear on the image and a longer description (up to six words) to appear on the evidence page.  To earn this badge, participants will need to assemble all of the insights they generated across the eleven wikifolio assignments into a coherent term paper that discusses the aspects of assessment practices, principles, and policies that are most relevant to a particular curricular aim and educational context.  This badge will include the criteria provided for the paper and some information about the course.  The earner will have the option of including links in the badge to a detailed review of the paper provided by Dan and to the paper itself.  Of the 60 students likely to complete the course, it looks like just a handful will take this option; it is worth 10 (out of 100) extra points in the for-credit section and is the only way to earn an A+.  This paper format has been used in previous online courses where the weekly wikifolios were a single block of text; in this case it is going to take more cutting and pasting.  But many of the more active students engaged really deeply in this course and generated lots of useful new insights.  We expect these papers to (a) help the writers develop a more robust and general understanding of the concepts, (b) serve as a useful resource for others in their professional context, and (c) provide convincing evidence of local expertise when assembled with other badges from the course.
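Because the customized text has to fit both the badge image and the evidence page, the form that issues this badge will need to enforce those limits.  Here is a minimal sketch of what that validation could look like; the function name and exact rules are hypothetical, with the limits taken from the description above.

```python
def validate_custom_text(short_label, long_description,
                         max_label_chars=12, max_description_words=6):
    """Return a list of problems; an empty list means the text fits.

    The limits come from the description above (a short label for the badge
    image, up to six words for the evidence page); everything else here is a
    hypothetical sketch, not the BOOC's actual form logic.
    """
    problems = []
    if len(short_label) > max_label_chars:
        problems.append('Image label must be %d characters or fewer.' % max_label_chars)
    if len(long_description.split()) > max_description_words:
        problems.append('Evidence-page description must be %d words or fewer.'
                        % max_description_words)
    return problems


# Example with made-up text:
print(validate_custom_text(
    'Indiana K-8',
    'Classroom assessment for Indiana middle school science teachers'))
# -> ['Evidence-page description must be 6 words or fewer.']
```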


Reference
Frederiksen, J. R., & Collins, A. (1989). A systems approach to educational testing. Educational Researcher, 18(9), 27-32.
