Editor’s note – AllOnGeorgia reached out to Dr. Brooke Macnamara to present her critique of the Growth Mindset theory found in many schools in the nation, and in Georgia. The approach seeks to define real student achievement in schools. Critical measures, such as Georgia’s teacher evaluation, are predicated on growth mindset measures. Additionally, the state’s school report card system, known as CCRPI, is connected to test scores that use growth mindset outcomes to evaluate schools. Macnamara raises the concern that growth mindset interventions deliver little in the way of results while imposing high costs on taxpayers, despite the claims of popular commercial vendors used by school districts. ~ Jeremy Spencer, Ed.S.
Guest education columnist Brooke N. Macnamara, PhD, is an assistant professor in the Department of Psychological Sciences at Case Western Reserve University. She is a proponent of replication studies, meta-analyses, and transparency in research. Her research focuses on predictors of expertise, learning, and achievement. This analysis is the author’s; it was not written or researched by AllOnGeorgia.
According to mindset theory, developed by Professor Carol Dweck, “what students believe about their brains – whether they see their intelligence as something that’s fixed or something that can grow and change – has profound effects on their motivation, learning, and school achievement.”
According to the theory, students with fixed mindsets—those who believe intelligence is relatively stable—interpret challenges as signs that they lack ability, which leads them to give up and become “devastated by setbacks.”
In contrast, students with growth mindsets—those who believe intelligence can grow with effort—are thought to interpret challenges as learning opportunities, which leads them to exert effort and achieve more than their fixed mindset counterparts.
If growth mindsets have “profound effects” on academic achievement, then developing interventions to instill growth mindsets in students should improve students’ achievement. At least, that is the idea behind growth mindset interventions.
As Jo Boaler, Dweck’s colleague at Stanford University stated, there is a “powerful impact of growth mindset messages upon students’ attainment.”
But do these claims hold up under close scrutiny?
Funding and Buying
Many funders, researchers, educators, and businesspeople are excited by these claims. Non-profit and for-profit entities have been established to develop growth mindset interventions. Mindset programs are in hundreds of schools across the country. Millions of taxpayer dollars have been spent on growth mindset intervention research. For example, the Institute of Education Sciences, an arm of the U.S. Department of Education, is currently spending approximately $3.5 million on a study to determine whether Brainology, a commercial growth mindset intervention product sold by Mindset Works, Inc., is effective. Despite this study not yet being complete, Mindset Works, Inc. has been selling Brainology to schools for thousands of dollars for years, claiming that students benefit from using it.
Mindset researchers have called for policymakers to “make effective academic mindset practices a funding priority” and schools have purchased growth mindset interventions with the hopes of increasing student achievement.
How Profound Are the Effects?
My colleagues and I set out to test how much of a difference these interventions actually make on academic achievement.
We did this for two reasons. First, this is a testable and interesting question whose answer contributes to scientific knowledge. Second, schools are dedicating parts of their budgets and curricula to these interventions; researchers have an obligation to test the effectiveness of these interventions to assess whether they are a wise use of schools’ funds and teachers’ and students’ time.
We conducted a meta-analysis to do this. When conducting a meta-analysis, researchers cull all the relevant available studies on a topic to analyze across them. This technique allows researchers to synthesize the findings, resulting in an overall effect, and to examine potential patterns. Meta-analyses typically provide better estimates of effects than any single study can provide.
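The core arithmetic of synthesizing studies this way can be sketched briefly. A common approach (one of several; the column does not specify which model the authors used) is inverse-variance weighting: each study's effect size is weighted by the inverse of its squared standard error, so more precise studies count more toward the pooled estimate. The effect sizes and standard errors below are purely illustrative, not values from the actual meta-analysis:

```python
# Minimal sketch of a fixed-effect meta-analysis with inverse-variance
# weighting. All numbers below are hypothetical, for illustration only.
import math

def pooled_effect(effects, standard_errors):
    """Combine per-study effect sizes into one overall estimate.

    Each study is weighted by 1/SE^2, so more precise (less noisy)
    studies contribute more to the pooled effect.
    """
    weights = [1.0 / se**2 for se in standard_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Four hypothetical studies: small effects, varying precision
effects = [0.12, 0.05, 0.02, 0.15]
ses = [0.10, 0.04, 0.06, 0.20]
d, se = pooled_effect(effects, ses)
print(f"pooled d = {d:.3f} (SE = {se:.3f})")
```

Note how the pooled estimate sits closest to the most precise studies, which is the point of the weighting: a single small, noisy study cannot dominate the overall effect.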
What Are We Looking For?
Meta-analyses work by analyzing effect sizes. In our case, these are the differences between the two groups—those who received the intervention and the comparison group—on academic achievement following the intervention.
If researchers find an effect size of 0.0, this means there is zero difference between the groups (i.e., the intervention has no impact). If we randomly select a student from the intervention group and a student from the comparison group, the odds are 50/50 that the student from the intervention group will have higher academic achievement. That is, the intervention has no bearing on achievement.
Effect sizes of 0.2, 0.5, and 0.8 are often considered small, medium, and large, respectively. If we randomly select a student from the intervention group and a student from the comparison group, the odds that the student from the intervention group will have higher academic achievement than the student from the comparison group are 56/44 (small), 64/36 (medium), and 71/29 (large).
The larger the effect size, the greater the impact of the intervention.
Overall, we found an effect size of 0.08. This means that if we randomly select a student from the intervention group and a student from the comparison group, the odds are 52/48—nearly 50/50—that the student from the intervention group will have higher academic achievement.
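The odds translations above follow from the "probability of superiority" (also called the common-language effect size): for a standardized mean difference d between two normal score distributions with equal variance, the chance that a randomly chosen intervention-group student outscores a randomly chosen comparison-group student is Φ(d/√2), where Φ is the standard normal CDF. A quick sketch, using only the standard library:

```python
# Convert Cohen's d into the "probability of superiority": the chance
# that a randomly selected intervention-group student outscores a
# randomly selected comparison-group student, assuming two normal
# score distributions with equal variance.
import math

def prob_superiority(d):
    # P(X > Y) = Phi(d / sqrt(2)); writing Phi via erf gives
    # 0.5 * (1 + erf(d / 2)) -- no external libraries needed.
    return 0.5 * (1 + math.erf(d / 2))

for d in (0.08, 0.2, 0.5, 0.8):
    p = prob_superiority(d)
    print(f"d = {d:.2f} -> {round(100 * p)}/{round(100 * (1 - p))} odds")
# d = 0.08 -> 52/48, d = 0.2 -> 56/44, d = 0.5 -> 64/36, d = 0.8 -> 71/29
```

These reproduce the odds quoted above: 52/48 for the overall effect of 0.08, and 56/44, 64/36, and 71/29 for the conventional small, medium, and large benchmarks.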
We also examined a number of other factors to see if they made a difference.
Mostly, they did not.
The effect size didn’t change significantly depending on the age of the students, how at-risk the students were of failing, or how many sessions of the intervention they received.
When looking only at a subset of students—those at high risk of failing—we found an effect size of 0.19. Likewise, when looking only at students living in poverty, we found an effect size of 0.34.
However, no one should hang their hat on either of these effect sizes. Why? In both cases, only a few studies had looked at these groups, so the available evidence is limited. Additionally, for the high-risk students, their effect size did not differ significantly from students with less risk (wherein the interventions had no significant effect).
We also found that many studies didn’t check to see if the growth mindset intervention actually affected students’ mindsets. This is a problem. If researchers don’t establish that the intervention worked as intended, we can’t tell if any group differences are due to the intervention.
Of those that did check, many failed to demonstrate that the growth mindset intervention had any influence on students’ mindsets.
When we looked at only the cases where the intervention demonstrably influenced students’ mindsets as intended, the effect on academic achievement was 0.02—not statistically significantly different from zero.
People’s takes on growth mindset interventions vary.
Dweck has argued that the effects of growth mindset interventions are meaningful and sizeable relative to other academic interventions.
However, professor of education Luke Wood has argued that growth mindset messages can be detrimental for underserved students, particularly Black boys and men who have negative school experiences.
This is an important point. Before assuming growth mindset interventions are effective, we need to know when growth mindset interventions are beneficial, harmful, or have no effect. Crucially, we also need to evaluate this evidence using studies that demonstrate the interventions are influencing mindsets (i.e., working) as intended.
Only then can policymakers and educators make informed decisions about whether growth mindset interventions are a wise use of limited resources.