ocTEL - whys, wherefores and greasy data

Last week, I made a flying visit to the ALT-C conference. Even though I was only there for the day, it was great to catch up with a few old friends, and to meet people I've been working with for a while but had not actually met in person. In particular, it was good to meet David Jennings, who has been co-ordinating the ocTEL MOOC that I've been evaluating.

At the conference, David, Martin Hawksey, and I presented an overview of how ocTEL was developed, the technology that ran it, who took part, the kinds of benefits it offered to participants and to ALT, and some early evaluation findings. Here's the slide deck from the presentation.



Martin has written extensively on how he developed the WordPress code that ran the ocTEL course, but I particularly liked how he described pulling together the different social media sources that fed the ocTEL course reader.
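Martin's own build was in WordPress, so treat the following as a rough, hypothetical illustration of the underlying idea rather than his implementation: a course reader polls several feeds (blogs, forums, social media), parses out the items, and merges them into a single date-ordered stream. The feed contents and names below are invented for the sketch.

```python
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

# Two tiny sample feeds standing in for the blog and forum sources
# that an aggregator like a course reader might poll (invented data).
FEED_A = """<rss><channel>
  <item><title>Week 1 reflections</title>
    <link>http://example.org/a</link>
    <pubDate>Mon, 08 Apr 2013 09:00:00 +0000</pubDate></item>
</channel></rss>"""

FEED_B = """<rss><channel>
  <item><title>Activity 1.1 response</title>
    <link>http://example.org/b</link>
    <pubDate>Tue, 09 Apr 2013 14:30:00 +0000</pubDate></item>
</channel></rss>"""

def parse_feed(xml_text):
    """Yield (published, title, link) tuples from a minimal RSS feed."""
    root = ET.fromstring(xml_text)
    for item in root.iter("item"):
        yield (
            parsedate_to_datetime(item.findtext("pubDate")),
            item.findtext("title"),
            item.findtext("link"),
        )

def build_reader(feeds):
    """Merge items from several feeds into one list, newest first."""
    items = [entry for feed in feeds for entry in parse_feed(feed)]
    return sorted(items, key=lambda entry: entry[0], reverse=True)

reader = build_reader([FEED_A, FEED_B])
for published, title, link in reader:
    print(published.date(), title, link)
```

In practice a reader like this would fetch live feed URLs on a schedule and de-duplicate items, but the parse, merge and sort core stays the same.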

In relation to the evaluation findings, I think Rose Heaney nicely summed up how participants engaged.


Ye canny defy the laws of (complexity science) or how to evaluate a MOOC

One way to approach evaluation is to try to define what would indicate that an intervention or change has been successful. This could be success from the perspective of a project team, the intended recipients of the intervention, or those paying for it, among others. In designing the evaluation of ocTEL (the Open Course in Technology Enhanced Learning), I've spoken with the course team and reviewed proposals and planning documents, but it has also been interesting to see how other people have defined success for MOOCs (Massive Open Online Courses).

There is frequent mention of completion rates, and of whether or not they are a useful indicator. Dominik Lukeš provides an insightful post on how complexity science demonstrates that, given the effects of magnitude, drop-out rates in (massive) MOOCs and in more traditional (small) classes aren't really comparable. Dominik goes on to reflect on his own MOOC experiences. He notes that the xMOOCs (Coursera, edX) focused on knowledge and skills, with peer interaction simply supporting these, while the cMOOCs (connectivist MOOCs) he took part in were more interactive and exploratory, with peer relationships being a strong motivator for many.

Reading his post made me think that to evaluate a MOOC perhaps we need to identify some models of participation. Indeed, in his research into one MOOC, Colin Milligan identifies:
“lurkers who purposefully did not engage with other course participants … passive participants, who expected ‘to be taught’, and viewed the course as a source of information … more active participants, who set their own goals, established connections with other learners and linked these connections with their existing personal learning network.”

So there are some not entirely unexpected patterns of participation, but what factors would influence these in a MOOC? Perhaps the number of learners, the duration of the course (days or weeks), learners' motivations, processes for recognition or accreditation, the tools used for interaction, and whether the MOOC is about knowledge or 'the experience'.

At the end of his post, Dominik concludes that "we need to consider the impact dropping out of a course or education has on the individual. And the impact the dropping out of a large numbers of individuals will have on the whole group." (My emphasis.) This impact on the individual is picked up in a more recent post from Sheila MacNeill on Badges? Certificates? What counts as succeeding in MOOCs? Sheila highlights how, in talking to participants from #oldsmooc and #edcmooc, they all felt they had "gained something from even cursory participation". Indeed, being able to participate in a way that suits their own needs, sometimes strategically, sometimes without worrying about dropping out, seemed to be key. I think Andy Powell sums it up in the comments, where he says "your [personal] measures of success depend on your reasons for taking a MOOC in the first place". Learners do, though, have to be able to identify those reasons, and that comes up later.

For ocTEL, the evaluation will focus on a number of themes. These aim to address the individual level, but are also very much informed by the original intent of ocTEL, i.e. to help participants better understand how to use technology to enhance their teaching in Higher Education. The themes and associated evaluation questions are thus:

Impact on staff competency (value in practice)
1. To what extent has ocTEL helped participants make better use of technology in their teaching?
2. Has ocTEL helped participants solve their own problems, thereby adding long-term value?
  - To what extent have participants’ expected outcomes been achieved?
  - How valuable were the outcomes to participants?

There is also an underlying question of how prepared or how effective participants are as self-directed learners, which is likely to be influenced by prior experience of learning in MOOCs, and ability to set goals around problems to solve & ability to identify expected outcomes. In a post on Designing MOOCs, Colin Milligan also talks about the effect of having pre-existing networks and a certain level of digital literacy.

ocTEL Content and Design
4. What was the quality of ocTEL’s content and how well was it designed?
  - How useful is the content and design of ocTEL?
  - Were participants' levels of experience of Technology Enhanced Learning sufficiently accounted for in the content and design of ocTEL?
5. What were the barriers and enablers that made the learning experience more or less successful?

ocTEL Discourse and Knowledge
6. To what extent was being able to take risks and have fun significant in facilitating participants’ learning?
7. Can new knowledge be generated via critical discourse in a MOOC?

ocTEL Community and Sustainability
8. How effective has ocTEL been in engendering self-organisation from the community?
  - Was there sufficient orientation time to make connections and become familiar with the environment/tools?
9. How effective was the process of encouraging contributions from the community in supporting that community's subsequent sustainability?
10. To what extent are the content and/or design of ocTEL likely to be valuable in other settings? How reusable is it elsewhere?

There is also a sub-theme looking at the collaborative approach to open authoring and the impact of ocTEL on the course team.


What the Curriculum Design projects tell us about evaluation

The Jisc Institutional Approaches to Curriculum Design programme set out to develop innovative, technology-supported approaches to curriculum design, approval and review. The programme was perhaps unique in providing four years of funding, and this duration presented some challenges in planning for evaluation. Added to that, technology-supported curriculum design was not well documented in the literature, again a challenge when planning evaluation activity. Nonetheless, Jisc made clear that project teams should undertake in-depth baseline and ongoing evaluation activity, although the actual process or methodology was left to project teams to decide. Indeed, the projects went on to use a wide range of approaches.

I was the 'evaluation critical friend and supporter' to the projects, and have recently posted on the Jisc Design Studio an overview of the philosophy or approach to evaluation that different projects applied. (The Design Studio is a wiki containing a huge range of resources connected to the wider e-learning programme.) The outline of What the Curriculum Design projects tell us about evaluation also includes a selection of evaluation-related advice drawn from projects' experiences. This points to advice on where to look if you need help with:
  1. Getting started in planning an evaluation.
  2. Finding examples of strong evaluation questions.
  3. Providing a visual representation of how evaluation maps to aims.
  4. Considering how different aspects of a project can be evaluated with different methods, which build on each other.
  5. Visualising how activities link to outcomes.
  6. Applying the principles of Naturalistic inquiry to evaluation.
Finally, there are outlines of some of the Frameworks, methods and tools projects used or developed for their evaluations. More will be added over the next few weeks.


Christmas cheers!

This year, rather than sending physical Christmas cards, I’m making a donation to the Al Ahli Arab Hospital in Gaza. Just to maintain the Christmas cheer though, here's a very merry card, courtesy of Calum.



There's nothing new in MOOCs

I have recently started working with the ocTEL team, looking at how to evaluate the Massive Open Online Course, or MOOC, they'll be running next year. (ocTEL stands for Open Course in Technology Enhanced Learning; there's an outline of the course on the ocTEL blog.) There is a *huge* amount of discussion of MOOCs at the moment, from tips for those taking a MOOC to the future of MOOCs and HE. This article from the Chronicle provides a rather US-centric timeline of developments and responses to MOOCs. There is also the recent launch of the slightly mysterious Futurelearn in the UK, which has prompted some very interesting discussion on the ALT mailing list. The list is members-only, but quoting from a recent posting by Diana Laurillard: "Everyone in the field knows there's nothing new in MOOCs", but we do need to meet the "massive demand for education, across all sectors from primary to lifelong, all over the world... It can't be done without technology... It's time to start looking seriously at what those models could be."

For me, at the moment, the focus is on evaluating just one MOOC, which is still under construction. Nonetheless, when I started asking the ocTEL team what they hope the MOOC will provide, one of the themes that emerged quite strongly was the desire for a sustainable community to come out of the MOOC. This has led me back to work I undertook over ten years ago on the theory and practice of online learning communities. At that time, I was influenced by Etienne Wenger's insights into Communities of Practice and Jenny Preece's perspective on designing usability and supporting sociability in online communities. Figure 1 summarises the dimensions of practice Wenger attributes to 'community', and Figure 2 shows the closely related key features Preece identifies for an online community.

Figure 1 - Dimensions of practice as the property of a community
(Wenger, 1998, p73).

Figure 2 - Key features of an online community, with associated characteristics
(Adapted from Preece, 2000)

These models will be reviewed and elements drawn into the ocTEL evaluation framework. I'll also be looking at the concept of sustainability in relation to community. This has been addressed before by Bell et al. in their 2007 book chapter, Evaluation: a link in the chain of sustainability. They highlight how a lack of persistence can come down to coordination failure, or to the point where the "costs of participation exceed the perceived benefits".

So perhaps there is nothing new in MOOCs, or maybe it's just a reminder to put old lessons into practice.

References
Bell, F. et al. (2007) Evaluation: a link in the chain of sustainability. In: Lambropoulos, N. and Zaphiris, P. (eds.) User-Centered Design of Online Learning Communities. Idea Group Inc.
Preece, J. (2000) Online Communities: Designing Usability, Supporting Sociability. Chichester: John Wiley and Sons.
Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press.
