
Monday, May 12, 2008

Using Microblogging (Twitter, Jaiku, Tumblr, Pownce, et al.) to effect learning evaluation

A few hours ago, I was in the process of writing a white paper on Kirkpatrick's levels of learning for one of our clients when it struck me that it would be interesting to track and analyze a learner's thought stream before, during, and after the learning process to evaluate learning effectiveness. Short of some Frankensteinesque experiment of embedding electrodes in the learner's skull, this does not, at first glance, seem a plausible idea.

A decade ago, this degree of observation would have amounted to an intrusion into a person's inner bastions of privacy. The closest one could get was to have the learner keep a diary, or to observe the learner and measure learning effectiveness against set yardsticks.

Technology has crept up on us quite rapidly. Collaboration and a fostered sense of community have led most of us to Google whenever we need information, opinions, directions... well, almost anything. Not surprisingly, a large number of us already share our inner thoughts in quite public places: blogs, forums, email groups, social network groups, and other online locations.


To capture a thought stream for learning, evaluation, or both, the method should ideally:

1. Be available at all times, even when on the move
2. Be easily accessible
3. Be easy to use for recording thoughts
4. Tolerate medium or high latency between recording and review, with the recordings available for viewing both as a linear sequence and in a searchable format

With the internet becoming ubiquitous and available on mobile devices even while on the move, online form filling, email, blogging, and instant messaging all seem to fit the bill. All of these except instant messaging are asynchronous modes of information capture, which means there is a high risk of losing information in the interval before the learner gets to the medium of expression.

Microblogging, by virtue of being available both via the internet and via SMS, bridges the essential gap between a synchronous and an asynchronous medium.

Tom Barrett has interesting insights on this point on his blog at: http://tbarrett.edublogs.org/2008/03/29/twitter-a-teaching-and-learning-tool/
Also have a look at:
http://www.flickr.com/photos/kardon/2370367463/

Apart from the traditional means of evaluation at each level, here is how I think microblogging in any of its forms (if adopted by the learner) would impact learning evaluation at each of Kirkpatrick's levels:

Level 1 : Apart from the smile sheets, the microblog posts would provide tangential feedback on training effectiveness. This would truly be a survey of 'reactions'. Mining the post data and cross-referencing it with the student profiles would also yield more reliable reaction measures, more substantial because they are backed by data as well as live opinions.
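As a rough illustration, here is a minimal Python sketch of what such mining might look like, assuming the posts and learner profiles have already been exported into simple structures; the word lists and field names such as learner_id and role are hypothetical placeholders, not any real microblogging API:

```python
# Minimal sketch: score Level 1 'reaction' posts against naive word lists
# and cross-reference the result with learner profiles. All field names
# (learner_id, role, text) and word lists are hypothetical placeholders.
POSITIVE = {"useful", "clear", "enjoyed", "great", "helpful"}
NEGATIVE = {"boring", "confusing", "slow", "pointless"}

posts = [
    {"learner_id": "a01", "text": "Really useful session, examples were clear"},
    {"learner_id": "b02", "text": "Second half was slow and a bit confusing"},
]
profiles = {
    "a01": {"role": "sales", "prior_courses": 3},
    "b02": {"role": "engineering", "prior_courses": 0},
}

def reaction_score(text):
    """Count of positive words minus count of negative words in a post."""
    words = set(text.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

for post in posts:
    profile = profiles.get(post["learner_id"], {})
    print(post["learner_id"], profile.get("role"), reaction_score(post["text"]))
```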

Level 2 : To be effective at this level of evaluation, microblogging would have to be a continuous activity alongside both pre- and post-assessments. It would then help validate those assessments, since there would be a qualitative difference in the posts before and after learning.
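One crude way to surface that qualitative difference, sketched below under the assumption that a learner's posts can be separated into pre- and post-training sets, is to count how often course terminology appears in each set (the term list and sample posts are hypothetical):

```python
# Minimal sketch: compare how often course terminology surfaces in a
# learner's posts before vs. after the learning event, as one crude
# qualitative signal alongside pre/post assessments. The term list and
# post samples are invented for illustration.
from collections import Counter

COURSE_TERMS = {"variance", "sampling", "hypothesis", "confidence"}

pre_posts = ["off to stats training tomorrow", "hope it is not too dry"]
post_posts = ["finally get why sampling bias matters",
              "tried a quick hypothesis test on our churn data"]

def term_frequency(posts):
    """Count occurrences of course terms across a list of posts."""
    counts = Counter()
    for text in posts:
        for word in text.lower().split():
            if word in COURSE_TERMS:
                counts[word] += 1
    return counts

print("before:", term_frequency(pre_posts))
print("after: ", term_frequency(post_posts))
```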

Level 3 : Individual learner microblog posts would probably be best utilized from this level onwards. Level 3 is primarily focused on quantifiable behavioural changes and the effect of the learning on the job at hand. The conventional methods tie in to workplace observations before and after the learning event; these could be benchmarked against microblog posts. Thought chains are an effective indicator of behavioural change, and would also provide lateral input about the learner's conviction and the extent to which the learning has permeated. Higher levels of both would probably mean that the learner has followers, and is perhaps on the favorites lists of peers and others in contact with the microblog.
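To make that concrete, one could imagine a crude 'permeation' index that weights sustained on-topic posting by the peer attention it attracts. The sketch below is purely speculative; the week-by-week figures and field names are invented:

```python
# Minimal sketch: a crude 'permeation' proxy for Level 3, combining how
# consistently a learner keeps posting about the topic after training with
# how much peer attention (followers, favorites) those posts attract.
# All numbers and field names are hypothetical.
weeks_after_training = [
    {"week": 1, "on_topic_posts": 5, "followers": 12, "favorited": 3},
    {"week": 4, "on_topic_posts": 4, "followers": 19, "favorited": 6},
    {"week": 8, "on_topic_posts": 6, "followers": 25, "favorited": 9},
]

def permeation_index(week):
    """Weight sustained on-topic posting by the peer attention it draws."""
    return week["on_topic_posts"] * (1 + week["favorited"] / max(week["followers"], 1))

for week in weeks_after_training:
    print(f"week {week['week']}: {permeation_index(week):.2f}")
```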

Level 4 : This would continue where the previous level's analysis stops. Here one would need to measure organisation-level impact in terms of goal achievement. Analysis of the microblog posts would reveal shifts in attitude. If heuristics were employed to analyze the data across all posts from multiple employees, the lateral information present in the posts could reveal the required signals. This could lead to mass validation of the broader organisational quantitative analysis, at a level of granularity that would not be possible with methods that rely on sample polling.
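A very simple heuristic of this kind, sketched below, would tally 'aligned' versus 'misaligned' vocabulary across all employees' posts, month by month, to surface organisation-level shifts; the word lists, months, and posts are hypothetical stand-ins for real organisational data:

```python
# Minimal sketch: aggregate a naive attitude score across many employees'
# posts, month by month, to surface organisation-level shifts. The word
# lists, months, and posts are hypothetical placeholders for real data.
from collections import defaultdict

ALIGNED = {"ownership", "customer", "quality", "improve"}
MISALIGNED = {"pointless", "blame", "stuck"}

posts = [
    ("2008-04", "still stuck with the old process"),
    ("2008-05", "taking ownership of the customer handover this week"),
    ("2008-05", "small change, but it should improve our response time"),
]

monthly = defaultdict(lambda: [0, 0])  # month -> [aligned, misaligned]
for month, text in posts:
    words = set(text.lower().split())
    monthly[month][0] += len(words & ALIGNED)
    monthly[month][1] += len(words & MISALIGNED)

for month in sorted(monthly):
    aligned, misaligned = monthly[month]
    print(month, "net attitude:", aligned - misaligned)
```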

What are the caveats of this proposition?

1. The learners would all need to participate and be avid microbloggers. This can be a challenge across a diverse learning audience. Hence I have hypothesized microblogging as a supporting mechanism alongside the traditional modes of evaluation. A good example of one such exercise is Elliott Masie's experiment with real-time blogging from the Harvard Kennedy School event on Presidential Leadership Competencies. (Ref: http://twitter.com/masie/with_friends)

2. The medium must actually be available for this to work at all. In high-security environments where even cellular phones are not allowed, access could be a roadblock.

3. The medium is, as of now, an "open garden", and hence concerns about data security would trouble most corporate implementers.

4. I anticipate that this mode would most often not be used because of sheer cost constraints: sending SMSes to Twitter and the others can be expensive, at up to 25c per SMS.


In conclusion, it would be a worthwhile experiment to introduce and try this as part of a formal learning evaluation process.
