Ye canny defy the laws of (complexity science) or how to evaluate a MOOC

One way to approach evaluation is to try to define what would indicate that an intervention or change has been successful. This could mean successful from the perspective of a project team, the intended recipients of the intervention, or those paying for it, among others. In designing the evaluation of ocTEL (open course in Technology Enhanced Learning), I've spoken with the course team and reviewed proposals and planning documents, but it's also been interesting to see how other people have defined success for MOOCs (Massive Open Online Courses).

There is frequent mention of completion rates, and whether or not they are a useful indicator. Dominik Lukeš provides an insightful post on how complexity science demonstrates that, given the effects of magnitude, drop-out rates in (Massive) MOOCs and more traditional (small) classes aren't really comparable. Dominik goes on to reflect on his own MOOC experiences. He notes that the xMOOCs (Coursera, edX) focused on knowledge and skills, with peer interaction simply being a support for these, while the cMOOCs (Connectivist) he took part in were more interactive and exploratory, with peer relationships a strong motivator for many.
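A quick back-of-the-envelope sketch makes the magnitude point concrete. The enrolment and completion figures below are entirely made up for illustration (they are not from Dominik's post, ocTEL, or any real course), but they show why the same headline "completion rate" means very different things at different scales:

```python
# Purely illustrative numbers, not real course data.
courses = {
    "massive open course": {"enrolled": 40_000, "completed": 3_200},
    "small campus class": {"enrolled": 25, "completed": 20},
}

for name, c in courses.items():
    rate = c["completed"] / c["enrolled"]
    print(f"{name}: {rate:.0%} completion, "
          f"{c['completed']} finished out of {c['enrolled']}")

# Output:
# massive open course: 8% completion, 3200 finished out of 40000
# small campus class: 80% completion, 20 finished out of 25
#
# The 8% course still produced 160x more completers than the 80% one,
# and a single person joining or leaving the small class shifts its
# rate by 4 percentage points.
```

In other words, a percentage stripped of the absolute numbers behind it tells us little about whether either course "worked".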

Reading his post made me think that, to evaluate a MOOC, we perhaps need to identify some models of participation. Indeed, in his research into one MOOC, Colin Milligan identifies:
“lurkers who purposefully did not engage with other course participants … passive participants, who expected ‘to be taught’, and viewed the course as a source of information … more active participants, who set their own goals, established connections with other learners and linked these connections with their existing personal learning network.”

So there are some not entirely unexpected patterns of participation, but what factors would influence these in a MOOC? Perhaps the number of learners, the duration of the course (days or weeks), learners' motivations, processes for recognition or accreditation, the tools used for interaction, and whether the MOOC is about knowledge or 'the experience'.

At the end of his post, Dominik concludes that "we need to consider the impact dropping out of a course or education has on the individual. And the impact the dropping out of a large numbers of individuals will have on the whole group." (My emphasis.) This impact on the individual is picked up in a more recent post from Sheila MacNeill, "Badges? Certificates? What counts as succeeding in MOOCs?". Sheila highlights how, in talking to participants from #oldsmooc and #edcmooc, they all felt they had "gained something from even cursory participation". Indeed, being able to participate in a way that suited their own needs, sometimes strategically and sometimes without worrying about dropping out, seemed to be key. I think Andy Powell sums it up in the comments, where he says "your [personal] measures of success depend on your reasons for taking a MOOC in the first place". Learners do, though, have to be able to identify those reasons, a point that comes up again below.

For ocTEL, the evaluation will focus on a number of themes. These aim to address the individual level, but are also very much informed by the original intent of ocTEL, i.e. to help participants better understand how to use technology to enhance their teaching in Higher Education. The themes and associated evaluation questions are as follows:

Impact on staff competency (value in practice)
1. To what extent has ocTEL helped participants make better use of technology in their teaching?
2. Has ocTEL helped participants solve their own problems, thereby adding long-term value?
  - To what extent have participants’ expected outcomes been achieved?
  - How valuable were the outcomes to participants?

There is also an underlying question of how prepared or how effective participants are as self-directed learners, which is likely to be influenced by prior experience of learning in MOOCs and by the ability both to set goals around problems to solve and to identify expected outcomes. In a post on Designing MOOCs, Colin Milligan also talks about the effect of having pre-existing networks and a certain level of digital literacy.

ocTEL Content and Design
3. What was the quality of ocTEL’s content and how well was it designed?
  - How useful were the content and design of ocTEL?
  - Was/is participants' level of experience of Technology Enhanced Learning sufficiently accounted for in the content and design of ocTEL?
4. What were the barriers and enablers that made the learning experience more or less successful?

ocTEL Discourse and Knowledge
5. To what extent was being able to take risks and have fun significant in facilitating participants’ learning?
6. Can new knowledge be generated via critical discourse in a MOOC?

ocTEL Community and Sustainability
7. How effective has ocTEL been in engendering self-organisation from the community?
  - Was there sufficient orientation time to make connections and become familiar with the environment/tools?
8. How effective was the process of encouraging contributions from the community in supporting subsequent community sustainability?
9. To what extent are the content and/or design of ocTEL likely to be valuable in other settings? How reusable are they elsewhere?

There is also a sub-theme looking at the collaborative approach to open authoring and the impact of ocTEL on the course team.
