What works in education? Ah, but how well does it work? No, but how well does it work compared to other things? What are the most effective influences on achievement? What about all the things we wring our hands over that actually make only trivial, though positive, differences (class size) or no difference at all (learning styles)?
A work friend of mine, Teletha, put me on to "visible learning," John Hattie's meta-analysis of meta-analyses of effects on education for children and youth ages 4-20. I don't know whether Hattie has employed good criteria for handling the GIGO problem (garbage in, garbage out) in meta-analysis, let alone meta-meta-analysis: you don't want bad studies contaminating your meta-analysis, so you need sound inclusion and exclusion criteria for the studies.
His point about effect sizes is one that many researchers understand, but some policy makers might not (and some researchers don't, and some policy makers do): an effect size gives you an estimate of the magnitude of an effect on a continuous scale, rather than relying on a cut-off point the way the p value from a null hypothesis significance test does. So that I don't get too technical, perhaps you should look at this web page for an explanation of Hattie's logic: http://www.learningandteaching.info/teaching/what_works.htm.
He needs to be careful not to use .4 as a cut-off. Nevertheless, the point is well taken, especially when you understand the concept of standard deviations. Hattie has found that the mid-point of effects on educational achievement is .40 (40% of a pooled standard deviation), so effects below that are relatively weak compared to effects above it. And because the effects sit on a continuum with a common metric, you can tell how much more effective one variable is than another.
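If the "40% of a pooled standard deviation" phrasing is abstract, here is a minimal sketch of the standard effect-size calculation (Cohen's d, the standardized mean difference): the gap between two group means divided by their pooled standard deviation. The score data below are made up for illustration, and this is the textbook formula, not necessarily the exact aggregation Hattie uses across studies.

```python
import math

def cohens_d(group1, group2):
    """Standardized mean difference: (mean1 - mean2) / pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (n - 1 denominator)
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    # Pooled SD weights each group's variance by its degrees of freedom
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical test scores for an intervention group and a comparison group
treated = [78, 85, 82, 90, 88, 76, 84]
control = [72, 80, 75, 83, 79, 70, 77]
d = cohens_d(treated, control)  # about 1.39 here: the treated group scores
# roughly 1.4 pooled standard deviations above the comparison group
```

On this metric, an influence with d = .80 moved achievement twice as far as one with d = .40, which is exactly the kind of comparison a p value can't give you.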
Now it's time to show you a couple of 15-minute videos of "visible learning." Part 1: http://www.youtube.com/watch?v=sng4p3Vsu7Y&feature=related. Part 2: http://www.youtube.com/watch?v=lS_AackYwEo&feature=related.