Happy EVALentine's day

Thanks to John Gargani for sending this EVALentine's Day card. John's artwork is inspired by a 1973 Valentine's Day love stamp; the gingerbread heart is simply inspired by my love of chocolate!

As John suggests, why not find out more about evaluation from the American Evaluation Association? The AEA tip-a-day blog is a good place to start. Or, closer to home for me, there is the UK Evaluation Society.

Tips for managing Consultation Analysis (part I)

I recently spent some time working up an approach to analysing a substantial number of responses to a public consultation. Due to a change in policy, the analysis did not go ahead. However, I learned a few things in the work-up stage, and it seemed worth sharing them.

Getting at the data
In this case, the first hurdle would be retrieving and collating over 300 individual responses that were due to be posted on the web. With this quantity of data, the way the responses are made available can make a huge difference: direct links to the individual files on a single page make it possible to automate the download process, whereas links that lead off to separate download pages do not. Something like the add-on DownThemAll, which can filter and capture specific types of file link (e.g. pdf, doc, gif) as well as speeding up the downloading itself, can really help here. Thanks to Martin Hawksey for pointing that one out!
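To illustrate what that first automated step involves, here is a minimal Python sketch that pulls file links of a given type out of a page's HTML using only the standard library. The page URL and link layout are invented for the example; a tool like DownThemAll does this, plus the downloading itself, without any code.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collect hrefs from <a> tags whose target ends in a given extension."""

    def __init__(self, base_url, ext=".pdf"):
        super().__init__()
        self.base_url = base_url
        self.ext = ext
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value and value.lower().endswith(self.ext):
                    # Resolve relative links against the consultation page URL
                    self.links.append(urljoin(self.base_url, value))


def extract_response_links(page_html, base_url, ext=".pdf"):
    """Return absolute URLs for every linked file of the given type."""
    parser = LinkCollector(base_url, ext)
    parser.feed(page_html)
    return parser.links


# The resulting URLs could then be fetched in a loop, e.g. with
# urllib.request.urlretrieve(url, local_filename) -- not run here.
```

The same list of URLs could equally be pasted into a download manager; the point is just that a single page of direct links is machine-readable in a way that a chain of intermediate pages is not.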

How individual responses are named is also worth considering. In some cases, numbers are used that are not obviously related to the individual respondents or their responses. Respondents' names might be given alongside the download links, or it may be necessary to open each response to find out. Generally, the names are needed, if only to sort responses by 'type' of respondent. It may be possible to rename files during the download process, and it's worth experimenting with DownThemAll for this, as renaming manually is quite time-consuming. An alternative is to generate an ID table that matches response numbers to respondents.
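An ID table of this kind is easy to build in code once the files are downloaded. The sketch below assumes a hypothetical filename pattern (a number embedded in each filename) and a lookup of respondent names keyed by that number; both are illustrative, not a description of any real consultation site.

```python
import re


def build_id_table(filenames, names_by_number):
    """Match numeric response filenames to respondent names.

    Returns (response_id, filename, respondent) rows, suitable for
    writing out as a CSV lookup table with the csv module.
    """
    rows = []
    for filename in sorted(filenames):
        # Assume the response number is the first run of digits in the name
        match = re.search(r"(\d+)", filename)
        response_id = match.group(1) if match else ""
        respondent = names_by_number.get(response_id, "unknown")
        rows.append((response_id, filename, respondent))
    return rows
```

Rows flagged "unknown" then give you a short list of responses that need opening by hand, rather than checking all 300.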

Checking you're going in the right direction
It's likely you'll have an understanding of the kinds of terms or key words you want to focus the analysis on. These will probably be derived from the original consultation document itself, and influenced by the interests of the organisation sponsoring the consultation analysis. With access to a large team or limitless time, it would be possible to review these key terms iteratively as you go through the detailed qualitative analysis. However, assuming you don't have this luxury, there are options for checking that your key terms are (mainly) the right ones.

For a start, automated word analysis can be undertaken using a CAQDAS package*, such as the MAXDictio component of MAXQDA. This could involve either a complete analysis of all words used across all responses, or just the responses of particular interest. Prominent terms can then be pulled out of the resulting frequency table, or the pre-identified key terms can be looked up directly. This automated step essentially checks whether the identified terms match the terms respondents are actually using.
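For anyone without a CAQDAS licence, the frequency-table check can be approximated in a few lines of Python. This is only a rough stand-in for MAXDictio, and the stopword list here is an illustrative fragment, not a serious one.

```python
import re
from collections import Counter

# Illustrative fragment of a stopword list -- a real analysis would use
# a fuller list (and possibly stemming) before trusting the frequencies.
STOPWORDS = {"the", "and", "of", "to", "a", "in", "is", "that", "for", "we"}


def word_frequencies(texts, stopwords=STOPWORDS):
    """Count words across a collection of response texts.

    Words are lower-cased, stopwords dropped, and very short tokens
    ignored, so the top of the table shows substantive terms.
    """
    counts = Counter()
    for text in texts:
        words = re.findall(r"[a-z']+", text.lower())
        counts.update(w for w in words if w not in stopwords and len(w) > 2)
    return counts
```

Calling `word_frequencies(texts).most_common(50)` on the extracted response texts gives the same kind of ranked list a CAQDAS frequency table provides, which can then be compared against the pre-identified key terms.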

Within MAXQDA, you can 'jump' to the original text surrounding these terms. Manual coding is then needed to mark the sentence or paragraph that provides the context for each key term, although a degree of autocoding of the surrounding text is possible so long as the response files aren't PDFs.
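If the responses are available as plain text, the jump-to-context step can also be roughed out in code. The sketch below uses crude sentence splitting on punctuation; MAXQDA's own retrieval is more sophisticated, and this is just an illustration of the idea.

```python
import re


def sentences_with_term(text, term):
    """Return the sentences in a response that mention a key term.

    Splitting on sentence-ending punctuation followed by whitespace is
    rough (it trips over abbreviations), but is enough to pull out the
    immediate context around each occurrence of the term.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [s.strip() for s in sentences if term.lower() in s.lower()]
```

Running this per response gives a quick keyword-in-context listing, which is a useful sanity check before committing to detailed manual coding.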

* Other software provides similar functionality, and the University of Surrey provides a useful overview of choosing a CAQDAS package. Dedoose is a web-based tool that is new to me, but apparently it can be used on Android and tablet devices.

Part II - Targeting the analysis (coming soon!)
