dkernohan

Neither of the two links you cite backs up the point you are making.

The Durham Schools study link points to a BBC article about a research project that ended in 2013. As far as I can tell, the data drawn on is described in this (OA) paper:
http://www.sciencedirect.com/science/article/pii/S0959475212000850

Methodologically, it is - shall we say - interesting. N is 44 for the study, and 42 for the "comparison study". The comparison group actually showed the greater learner benefit (if you take "more correct answers" as the best measure of benefit). Overall, even after the tortuous analysis, the paper concluded: "From these results, it appears that both conditions support the development of routine expertise, and the individual paper-based version of the ‘make up some questions’ task appears to be as useful as NumberNet [the tech solution being tested] in supporting the development of fluency and speed with simple calculation".

The second paper you cite - from the prestigious Appalachia Regional Comprehensive Center - is a 2013 summary of a number of selected papers about the effectiveness of technology. The word "selected" should ring alarm bells: no selection criteria are given, which makes cherry-picking very likely.

To take just the first paper in the ARCC summary: their meta-analysis sounds interesting and useful. You can read the paper itself here:
http://sttechnology.pbworks.com/w/file/fetch/67600623/Cheung_(2013)_The%20effectiveness%20of%20ed%20tech%20applications%20in%20K12%20classrooms.pdf (note that this is not an OA paper, but is available online for some reason)

The introduction of the paper notes: "Consistent with the more recent reviews, the findings suggest that educational technology applications generally produced a positive, though modest, effect (ES = +0.15) in comparison to traditional methods. However, the effects may vary by educational technology type. Among the three types of educational technology applications, supplemental CAI had the largest effect with an effect size of +0.18. The other two interventions, computer-management learning and comprehensive programs, had a much smaller effect size, +0.08 and +0.07, respectively."

This looks like a very well-conducted literature review. And it is broadly true to say (as ARCC do) that technology used to support the traditional classroom experience has the largest effect. However, +0.18 isn't a "large" effect by any standards - in most classroom situations it would make the change almost indistinguishable from the control. (If you need a briefing on how effect sizes work, may I recommend the University of Leeds' superb "It's the effect size, stupid"? http://www.leeds.ac.uk/educol/documents/00002182.htm)
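To make the scale of that concrete, here is a minimal sketch of how an effect size (Cohen's d) is computed and what +0.18 means in practice. The test scores and the SD of 15 are invented for illustration - they are not taken from the Cheung & Slavin paper - and it assumes roughly normal score distributions:

from math import erf, sqrt

def cohens_d(mean_treatment, mean_control, pooled_sd):
    # Standardised mean difference: how far apart the two group
    # means sit, measured in (pooled) standard deviations.
    return (mean_treatment - mean_control) / pooled_sd

def percentile_of_average_treated_student(d):
    # Percentile of the control distribution at which the average
    # treated student lands, via the standard normal CDF.
    return 100 * 0.5 * (1 + erf(d / sqrt(2)))

# Hypothetical test with a control mean of 50 and an SD of 15:
d = cohens_d(mean_treatment=52.7, mean_control=50.0, pooled_sd=15.0)
print(round(d, 2))  # 0.18 -- the "largest" effect in the ARCC summary

print(round(percentile_of_average_treated_student(0.18)))  # 57

In other words, the best case reported moves the average student from the 50th to roughly the 57th percentile of the control group - a shift you would struggle to notice in an ordinary classroom.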

I could go over the other links in the ARCC paper, but I think I have made my point.

Steve, you are both a tenured academic researcher and a prominent advocate of education technology. As I am neither, it isn't really my job to pick these issues up for you. If you could please check your claims a little more carefully in the future, and ideally cite directly from research literature rather than media reports, I would be grateful.