Monday, March 26, 2018

Why "Research-Based" is a Great Place to Start...But You Can't Stop There


by Kate Wolfe Maxlow


I admit that I always raise an eyebrow when someone tells me an educational strategy is "research-based." Don't get me wrong: I LIKE John Hattie and Robert Marzano, and I use their work all the time...but I also know that there are some pretty big caveats to using educational research to make instructional decisions. Here are some of the biggest:


1. "High yield" usually means "correlated with increased student achievement on standardized assessments."
Remember, standardized assessments are most often multiple choice, and therefore most often written at the Remember through Apply levels of Bloom's taxonomy (with the occasional smattering of Analyze). That means the majority of the research out there is looking at student improvement on lower-level skills. But in a good classroom, that shouldn't be all we're looking to improve.

One of the best examples of this is Hattie's research on inquiry learning. In his book, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, Hattie does a meta-analysis (which means that he combines the results of multiple studies on inquiry learning) to determine that inquiry learning has a 0.31 effect size.

Okay, so what does that mean? It means that a group of students taught with an educational strategy (in this case, inquiry learning) would, on average, do better than an otherwise identical group that was not taught with that strategy. So, for instance, a group that initially scored at the 50th percentile (on average, doing better than about 50% of their peers) would improve an average of 12 percentile points after engaging in inquiry learning, landing at the 62nd percentile (on average, scoring better than about 62% of their peers).

(Want to better understand that? Read this article from Marzano.)
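If you like seeing the arithmetic, here's a minimal sketch of that effect-size-to-percentile conversion. It assumes normally distributed scores, which is the standard simplification behind these conversion tables; the d values are the ones quoted in this post:

```python
from statistics import NormalDist

# Convert an effect size (Cohen's d) into the percentile where the average
# treated student lands, assuming normally distributed scores and a
# control-group average at the 50th percentile.
def percentile_after(d: float) -> float:
    return NormalDist().cdf(d) * 100

# The d values below are the ones quoted in this post.
for d in (0.29, 0.31, 0.40, 0.59, 1.02):
    p = percentile_after(d)
    print(f"d = {d:.2f} -> {p:.0f}th percentile (gain of about {p - 50:.0f} points)")
```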

While 12 points is nothing to sneeze at, there are other strategies that are easier to implement and have higher effect sizes, such as direct instruction (0.59 effect size, or an average gain of 22 percentile points). Who doesn't want a 22-point gain over a 12-point one? (Note that direct instruction is NOT the same as lecture.)

Of course, those numbers mostly come from standardized measures of student achievement.

So that's the thing: the point of inquiry learning is to teach those higher-level critical thinking skills. Yeah, if all you want is for students to memorize information, it's direct instruction all the way. But Hattie himself notes that in one 1996 study, Smith found "larger effects from inquiry methods in critical thinking skills (d = 1.02; 35 percentile points average gain), than in achievement (d = 0.40; 16 percentile points average gain)." What does this mean? It means that if your goal is to teach actual critical thinking and how to engage in scientific processes, inquiry learning is the way to go. If you're just looking to teach science facts, it might not be worth it.


2. It's not the strategy that's effective; it's how and when it's implemented.
The best example of this is the great homework debate. A ton of schools right now are nixing homework, especially at the elementary level. According to Hattie, homework has an effect size of d = 0.29, or a percentile gain of 11 points for students who engage in the strategy.

So students should do homework, right? Eleven points is eleven points.

Well, it's not that cut and dried. One of the best examinations of the research can be found on the ASCD website in "The Case For and Against Homework," but it boils down to this: Cooper (1989a) found an effect size of 0.15 (6 percentile points) in elementary school, 0.31 (12 percentile points) in middle school, and 0.64 (24 percentile points) in high school.

Moreover, other studies have shown that homework is strikingly more effective when teachers grade it and provide comments, and that after a certain number of minutes of homework per night, there are diminishing returns on student achievement gains.

In other words, be careful whenever anyone says something like, "Research shows that homework has a 0.29 effect size." Unless you know how the strategy was implemented and with whom, it's really hard to make overall judgments about the effectiveness of any one strategy.


3. All educational research is based on averages, but students are individuals.
The two biggest hitters in the educational effectiveness game are definitely Hattie and Marzano. They both use meta-analyses to come up with their conclusions about what does and does not work in education. The benefit of a meta-analysis is that it can tell us, broadly, which strategies tend to work across the majority of students in the majority of content areas and grade levels. What it cannot tell us, of course, is whether a particular strategy is going to work in a particular classroom or with a specific student.

Why is that? Because in order to come up with an effect size, most education studies compare two groups of students: one group that gets the treatment (the strategy, such as note-taking) and one group that does not (i.e., same instructional methods, but students don't take notes). Researchers compare the note-taking group's average assessment score to the non-note-taking group's average in order to figure out the effect size. (That's overly simplified; read more here.)
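Here's a rough sketch of what that comparison looks like as math, using the standard Cohen's d formula (difference in group means, divided by the pooled standard deviation). The scores are invented, just to show the mechanics:

```python
from statistics import mean, stdev

# Invented scores for illustration only -- not data from any real study.
notes_group    = [78, 85, 90, 72, 88, 81, 79, 86]   # took notes
no_notes_group = [70, 82, 75, 68, 80, 73, 77, 71]   # did not take notes

def cohens_d(treatment: list, control: list) -> float:
    # Difference in group means, divided by the pooled standard deviation
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

print(f"Effect size: d = {cohens_d(notes_group, no_notes_group):.2f}")
```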

Then the meta-analyst pulls tons of studies on the same topic, or close to the same topic. For instance, does note-taking in a graphic organizer count...or does that go under the graphic organizers strategy? Can we count the experiment that used a multiple-choice test along with the one that used a writing prompt and scoring rubric? And are the studies even rigorous enough to include?

These are all decisions left up to meta-analysts like Hattie and Marzano. Once they decide which studies to include, they do some fancy math that basically gives them an average of all the study averages.
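To give you a feel for the "fancy math": at its simplest, it's a weighted average of each study's effect size, where bigger studies count for more. A minimal sketch, with invented numbers:

```python
# Each tuple is (effect size, sample size) for one study -- invented numbers,
# just to show the shape of the calculation.
studies = [(0.45, 120), (0.22, 60), (0.38, 200), (0.10, 45)]

# A simple sample-size-weighted mean. Real meta-analyses typically weight
# each study by the inverse of its variance, but the idea is the same:
# bigger, more precise studies count for more.
weighted_d = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(f"Combined effect size: d = {weighted_d:.2f}")   # about 0.35
```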

So, in the end, the appropriate conclusion is less like, "Note-taking has an effect size of 0.99 and therefore yields a percentile increase of 34 points," and more like, "On average, note-taking has been found by some researchers to yield a percentile increase of something like 34 percentile points for many students much of the time."

See the difference?

All that being said, educational research is still worthwhile. It can still tell us really important things (for instance, that retaining students tends, on average, to decrease their achievement, even though we often retain students in hopes of raising it). Just remember: read the actual studies as much as possible to find out the actual (not just averaged) results; keep in mind that it's more often about the implementation than the strategy itself; and know that, in any case, your results for a given strategy may vary.
