Sunday, June 8, 2008

Summary Weeks 12 and 13 - Analysis of data and results

The picture for the summary this week is dedicated to Gordon, who has managed to post some excellent insights and discussion about his preliminary analysis of results - even while on the road and 'riding the wild moose in Canada'.
Gordon's findings in his latest post illustrate how any first-time evaluation uncovers some of the "fish hooks" in the way questions are posed and interpreted. This is always good, particularly with a pilot evaluation, as it enables modification of the approach for next time.
It also looks like any further evaluation of the product might be best done as an effectiveness or impact evaluation once people have used the computer-based training (CBT) resource. That way he will be able to find out how personnel are transferring their learning from the CBT to real situations in the workplace - something that is probably very important in the area of hazardous substances.

There has also been an interesting debate around Likert-type scales on Gordon's blog - his evaluation has illustrated the importance of the midpoint of the scale and brings up the issue of how to organise the rating scale. It seems more logical to have the positive responses as 4 & 5, though many studies use 1 & 2 - probably because this aligns with the importance associated with being first and second.

I am interested to see what others think about this idea.
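One way to see that the direction of the scale is just a coding convention is to work through a small example. The sketch below (labels and responses are invented for illustration) codes the same five-point item both ways and shows that reverse-coding converts between the two conventions:

```python
# Hypothetical example: two ways of coding the same five-point Likert item.
labels = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

# Convention A: positive responses coded high (agree = 4, strongly agree = 5)
code_positive_high = {label: i + 1 for i, label in enumerate(labels)}

# Convention B: positive responses coded low (strongly agree = 1, agree = 2)
code_positive_low = {label: 5 - i for i, label in enumerate(labels)}

responses = ["agree", "neutral", "strongly agree", "agree"]

scores_a = [code_positive_high[r] for r in responses]  # [4, 3, 5, 4]
scores_b = [code_positive_low[r] for r in responses]   # [2, 3, 1, 2]

# Reverse-coding converts one convention to the other: new = 6 - old
assert all(b == 6 - a for a, b in zip(scores_a, scores_b))
```

Whichever direction is chosen, the important thing is to report it clearly and code all items consistently.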

Helga has been spending time extracting some meaning from her interview data, and asked for help on this. Unfortunately the tips from Bronwyn about how to go about analysing this kind of qualitative data arrived after Helga had sorted it using the website Yvonne suggested. She has also realised the importance of running lots of small usability tests as you develop.

It is all about looking for common themes and patterns, i.e., making connections, as Yvonne suggested. Themes are common words or phrases and usually match the research/evaluation questions and/or interview questions. In fact, in a well-designed study the two areas will align nicely. For example, a pattern in the data might be that all participants found the learning formats easy to navigate.

A table with columns is a good way to organise the themes: list the themes and place examples of the comments next to them. Helga used stickies very successfully. I also include a column which indicates how many times each theme (word or phrase) was mentioned. Taking some time to do this with your data means you can extract more meaning; if you summarise in a couple of sentences without working through the themes, you may miss some important points.
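The theme-and-count table above can be sketched in a few lines of code. This is a minimal illustration only - the themes and comments below are invented, and it assumes each comment has already been tagged with a theme (on a sticky or in a spreadsheet column):

```python
# Tally themes from tagged interview comments and print a simple table:
# theme | mention count | example comment. All data here is invented.
from collections import Counter

tagged_comments = [
    ("easy navigation", "I found the menus straightforward."),
    ("easy navigation", "Moving between sections was simple."),
    ("time pressure", "I only had ten minutes to try it."),
    ("easy navigation", "The layout made it easy to find things."),
]

counts = Counter(theme for theme, _ in tagged_comments)

for theme, n in counts.most_common():
    # Pick the first tagged comment as the example for this theme.
    example = next(c for t, c in tagged_comments if t == theme)
    print(f"{theme} | {n} | {example}")
```

The count column makes it obvious at a glance which themes dominate, which is exactly what a quick two-sentence summary tends to miss.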

You also need to look for what is not mentioned. For example, if no-one mentions they enjoyed the experience, you might begin to wonder whether it was too hard or whether they were pushed for time.
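Checking for silences can be made systematic by comparing the themes you expected against the themes that actually turned up. A tiny sketch, with an invented expected-theme list:

```python
# Flag expected themes that never appear in the tagged data.
# Both sets here are invented for illustration.
expected_themes = {"easy navigation", "enjoyment", "time pressure"}
observed_themes = {"easy navigation", "time pressure"}

missing = expected_themes - observed_themes
print(sorted(missing))  # themes no participant raised
```

An empty result means every expected theme was mentioned at least once; anything left over is worth a follow-up question.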

Helga, in her previous post, mentioned how she was having some fun with her formative, usability evaluation. She has demonstrated how important it is in usability testing to get feedback on early prototypes and to act on it reasonably quickly (Rapid Iterative Testing) - there may be several iterations of a prototype during usability testing before it goes live. Prototyping is outlined in Wikipedia.

When testing how usable a product is in real-use situations, Gordon asks how feasible it is to make changes on the spot. Rapid iterative usability testing does exactly this. There is more about this on a site called Learn about usability testing: "You should be testing prototypes from early paper-based stages through fully functional later stages."

Rika has made some headway with a modified plan and is about to publish her presentation.
