Different routes to evidencing value

As well as encouraging curriculum innovation, the Transforming Curriculum Delivery Through Technology Programme had a strong focus on gathering and making effective use of evidence. This was intended to inform the 15 funded projects, but also to provide the wider sector with evidence of the potential benefits of using technology to support learning. It was very much part of the current context of striving for more evidence, particularly “evidence for ‘what works?’” (Sharpe, 2011), and of funded projects adding value by generating evidence (Draper & Nicol, 2006).

But how did the projects do it? What kinds of approach to evaluation did they adopt, and what methods did they use?

As evaluation support consultant to the programme, I have prepared a report that summarises the evaluation methods and techniques used by projects. While this is not a comprehensive guide to evaluation, it does give an insight into the wide variety of approaches that were applied. Hopefully, this will spark ideas in those setting out on the process of project evaluation.

The Curriculum Delivery Programme did not advocate a particular approach to evaluation, although projects were expected to undertake their own evaluation or engage an evaluator. This self-directed evaluation was perhaps unusual when the programme started, and it’s fair to say it was uncertain what direction different projects would take. After all, in this diverse programme, project teams brought a wide mix of skills, and for some, evaluation was entirely new.

Many of the standard data collection methods were applied, including focus groups, interviews, observation and questionnaires. Technology was also employed, with many projects undertaking video interviews that were later compiled and made available on the web, such as this overview of student interviews from Making Assessment Count. The more unusual methods included time motion studies, which were used to forecast efficiency savings; cognitive mapping, such as this example causal map from DUCKLING; and social network analysis using Gephi (see the sketch below). Projects also found innovative ways of engaging with students, such as employing them as researchers.
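As a purely illustrative aside for readers unfamiliar with social network analysis: Gephi itself is a desktop visualisation tool rather than a programming library, so a minimal sketch of what the analysis step might look like in code, assuming hypothetical forum-reply data and using Python’s networkx library, could be something like the following. It computes a simple centrality measure and exports a GEXF file that Gephi can open. The data, student names and file name are invented for illustration; the report does not prescribe this workflow.

```python
import networkx as nx

# Hypothetical forum-reply data: (student who replied, student replied to)
replies = [
    ("amira", "ben"), ("ben", "amira"), ("carol", "amira"),
    ("dev", "carol"), ("carol", "ben"), ("ben", "dev"),
]

# Build a directed graph: an edge u -> v means u replied to v.
G = nx.DiGraph()
G.add_edges_from(replies)

# Degree centrality gives a quick, rough view of who is most connected.
for student, score in sorted(nx.degree_centrality(G).items(),
                             key=lambda item: item[1], reverse=True):
    print(f"{student}: {score:.2f}")

# Export in GEXF format so the graph can be opened and styled in Gephi.
nx.write_gexf(G, "interactions.gexf")
```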

On the basis of 15 very different projects, it is not possible to draw conclusions on the ‘best’ approach to evaluating curriculum innovations. There were, however, a number of projects that applied or adapted action research, and a typical evaluation cycle based on action research is described in the report. Other approaches that feature in the report include appreciative inquiry, balanced scorecard, and engaging an independent evaluator, as well as more traditional formative evaluation approaches.

References
Draper, S. & Nicol, D. (2006) Transformation in e-learning. Short paper delivered at ALT-C 2006.

Sharpe, R. (2011) Evidence and evaluation in learning technology research. Research in Learning Technology, 19(1), pp. 1–4.
