"Bear in mind that the wonderful things you learn in your schools are the work of many generations, produced by enthusiastic effort and infinite labor in every country of the world. All this is put into your hands as your inheritance in order that you may receive it, honor it, add to it, and one day faithfully hand it to your children. Thus do we mortals achieve immortality in the permanent things which we create in common." - Albert Einstein

Friday, February 27, 2015

"Education Experts" Lack Expertise?

Three years ago, when this blog started, retired professor Flor Lacanilao lamented the media coverage of education policies. In A Critique of Some Commentaries on the Philippine K-12 Program, he highlighted a stark contrast between who gets wide media coverage and who does not. Lacanilao's exact words were: "Note further that the nonscientist authors and cited authorities include prominent people in education, and that these nonscientist authors and cited authorities enjoy wide media coverage. I think this situation explains the present state of Philippine education." Lacanilao actually cited one of my articles as among those not receiving attention, despite his opinion that what I wrote was supported by properly published studies. To establish expertise, Lacanilao used an individual's record of research contribution as measured by how frequently one's publications have been cited by others. Such a measure is reflected, for instance, in an individual's Google Scholar record.

It turns out that Lacanilao's observations do not apply only to the Philippines. A similar predicament exists in the United States. Curriculum specialist Joel R. Malin and education professor Christopher Lubienski, both from the University of Illinois, have authored a paper in the journal Education Policy Analysis Archives in which they report: "...analyzing various indicators of expertise and media penetration, we find a weak relationship between expertise and media impact, but find significantly elevated media penetration for individuals working at a sub-sample of organizations promoting what we term “incentivist” education reforms, in spite of their generally lower levels of expertise...."

The following graphs taken from the paper show an inverse relation between expertise (as measured by Google Scholar) and media exposure:


Above figures copied from
Malin, Joel R., and Christopher Lubienski. "Educational Expertise, Advocacy, and Media Influence." Education Policy Analysis Archives 23(6), January 2015. ISSN 1068-2341.

Malin also adds the following in a phys.org article that describes the above paper:

"Our findings suggest that individuals with less expertise can often have greater success in media penetration. Although some individuals might not have formal training in research methods for analyzing the issues about which they are speaking, they possess skills and orientations that make them accessible and appealing to the media. And when these people are affiliated with organizations that have strong media arms or outreach efforts, they have the support and the incentive to engage broader and policy audiences."
So when Lacanilao states that "this situation explains the present state of Philippine education", something similar may be said with regard to US education....





Thursday, February 26, 2015

Cooperation in Teaching

I served once on the College Executive Council at Georgetown, and during one meeting, the dean noticed chalk marks on my clothes. The dean then expressed satisfaction at knowing that I had actually been teaching that day. Teachers are supposed to teach, after all. Attending meetings consumes one's time and takes away opportunities to do actual tasks. Meetings are important, however, if individuals are expected to work as a team. Thus, there is an obvious need to balance the two, since efforts to work as a community may in fact impede an individual from doing the more important task of actually teaching.

It is interesting to survey how much the world's teachers in basic education actually work as teams. The OECD Teaching and Learning International Survey (TALIS) provides a good starting point for answering this question. The following graph provides, for instance, an average picture of how a teacher spends his or her working hours per week:

Above copied from TALIS 2013 Results: An International Perspective on Teaching and Learning.
DOI:10.1787/9789264196261-en
The above figure displays the time spent on each task averaged over all the countries participating in TALIS 2013. It is worth noting that there is a great deal of variation among countries, and the following chart illustrates how two countries that perform well on international exams, Finland and Japan, sit at opposite ends of the spectrum:

Above copied from TALIS 2013 Results: An International Perspective on Teaching and Learning. 
DOI:10.1787/9789264196261-en
Finland's teachers spend more time teaching than teachers in Japan. Finland's instructors spend about 65 percent of their working hours teaching; Japan's, on the other hand, spend only 33 percent. For comparison, teachers in the United States spend about fifty percent. (This number already takes into account the error in the teaching times reported for US teachers, pointed out by Samuel Abrams of Teachers College, Columbia University.)

On tasks other than teaching, Finland's teachers spend the least amount of time. In teamwork, for example, Japan and Finland are truly on opposite sides. Finland's teachers spend 1.9 hours per week on teamwork, while Japan's spend about twice as much, 3.9 hours. Teachers in the United States, on average, spend 3.0 hours. Looking at the survey in greater detail, while only one in five lower secondary education teachers in Japan does not participate in collaborative professional learning, nearly half of Finland's instructors do not. The United States reports less than 10 percent of teachers not participating in this type of activity.

There will be no attempt here to correlate the above with actual test scores. One must keep in mind that there is a great deal of shadow education happening in Asian countries that score well on standardized exams. Japan may be spending less time on teaching, but this figure only takes into account hours spent inside formal schools, not in the "cram schools". A child in Japan, for example, easily spends an additional six hours a week in these tutorial classes, which brings the total amount of instruction the child receives well above that of a child in Finland. Furthermore, seeing that two countries that do well on these exams sit at opposite ends of these measures only implies that there is most probably no clear correlation between how many additional tasks a teacher takes on and learning outcomes. The reason is simple: cooperation in teaching may or may not benefit students. Teamwork presumably helps only if the team is doing the right things for the students.



Wednesday, February 25, 2015

A Science Night at an Elementary School

My five-year-old daughter and I spent last evening at Mason Crest Elementary School. It was a family science night, and the "museum without walls" of the Children's Science Center was set up inside the school's cafeteria. The evening started with a competition between two balls, "Rey Ricochet" and "The King Bouncer". It was the night to see which ball could bounce higher.


A bouncing ball competition was an opportunity to introduce young minds to polymers. It was a glimpse of how structure at the molecular level can define the properties of materials. There was even a model composed of eight kids (my daughter representing one repeating unit of the polymer) to illustrate cross-linking.



Tuesday, February 24, 2015

Does Differentiated Instruction Work?

I just received an email reminding me of the controversial commentary from James R. Delisle posted last month on Education Week. The title of Delisle's article is "Differentiation Doesn't Work". In this blog, I am posing it as a question instead. Answering that question, however, is not an easy task. Differentiated instruction is very complex, as it involves assessment, planning and flexibility. All of these tasks hinge on the qualities of the teacher. A teacher who understands where his or her students stand is a good teacher. A teacher who tailors his or her lessons to maximize students' engagement is a good teacher. A teacher who can recognize that something is not working and needs to be adjusted is a good teacher.

Carol Ann Tomlinson is one of the pioneers of differentiated instruction. The Harvard Education Letter had a piece on differentiated instruction several years ago in which some of Tomlinson's views were highlighted:
While she would never say that differentiating instruction “is a piece of cake,” Tomlinson believes the approach is a path to more expert teaching. Like someone asked to make a meal, Tomlinson says, “You could have dinner with butter on toast with an egg. But if you want to grow as a cook, you need to expand your ingredients list.” Her four “non-negotiables”—a high-quality curriculum with clear goals, the use of data to monitor and provide feedback on student learning, the ability to recognize when something isn’t jelling and modify it to fit the student, and the creation of an environment in which students are supported and challenged—she says, “are not about differentiation. They are about a good classroom. That is good teaching.”
Differentiated instruction seems synonymous with good teaching. From the same article, a concrete experience from a teacher who is trying to differentiate is also cited:
Suddenly, “tiering”—or varying the difficulty of work for students based on readiness—had a twist: Kids didn’t like it when a classmate’s paper looked a lot different or had more problems on it. As she tried flexible grouping, students who seemed to need extra support actually “got it,” while those expected to glide would struggle. As Hauser put it in her write-up, “I quickly discovered that my assumptions were not always accurate.”
And the above is just the first step in differentiated instruction: understanding where the students stand. There is such a thing as incorrect assessment, and as the above instance shows, wrong impressions of students can be formed. Doing the other steps requires even more from the teacher. To plan and to be able to change at the last minute definitely requires a good mastery of the subject and its pedagogy.

To appreciate how much should go into implementing differentiated instruction, the following list from Langa and Yost is quite helpful:

Content (Materials & elements)
  1. Select a variety of books and resource materials for handling variety in reading levels
  2. Select specific areas of interest within the focus area
  3. Use learning contracts with students
  4. Group students according to readiness levels or interest levels
  5. Reteach to small groups who need support or explanations; exempt those who have mastered the material
  6. Establish learning centers or stations
  7. Allow students to work alone or with peers.
Process (how students gain understanding of main ideas and information)
  1. Use tiered activities (a series of related tasks of varying complexity)
  2. Use learning contracts based on readiness, interests, or learning profile of student
  3. Use independent learning
  4. Use choice boards, flexible grouping, reading buddies, learning centers and peer teaching
Products (ways students will demonstrate their knowledge or understanding of a topic)
  1. Write a story or a poem
  2. Write a book report, a play, or perform a play
  3. Debate or investigate an issue
  4. Design a model or a game
  5. Create a mural or a song
  6. Compare or contrast
Designing an experiment to evaluate its effectiveness is also very challenging because of the myriad factors involved and because how differentiation is implemented depends so heavily on the quality of the instructor. Thus, it is not easy to find in the research literature dependable experiments that measure the effectiveness of differentiated instruction.

More importantly, without doubt, differentiated instruction demands quite a lot from a teacher. The following excerpt copied from a guest post on a blog by a kindergarten teacher typifies one reaction a teacher may have with regard to differentiated instruction:



One teacher who started with something small, differentiated homework, finds that "The results do not support the use of a differentiated homework structure for the acquisition of biology content or mastery of concepts." In primary schools in the Netherlands, a similar observation has been made: "Results showed that differentiated instruction has no statistically significant effect on student mathematics achievement, which was against expectations." These studies perhaps do not invalidate the differentiated instruction approach, because one can always attribute the results to a failure of implementation.

However, one strong criticism comes from the very first step differentiated instruction requires. The article from the Harvard Education Letter also mentions:
But critics say differentiated instruction encourages teachers to categorize students based on popularized notions that may not actually be accurate or helpful in making content more accessible.
Daniel Willingham explains this more eloquently (also taken from the Harvard Education Letter):
“A lot of the time when we talk about differentiating instruction there is an implicit theory about the mind and the idea that different kids learn in different ways—and not only that, but that we have a deep enough understanding that we can then categorize kids on that basis. We assume that matching a teaching approach that plays to a kid’s strengths is the best way to teach. Or, should we work to attack areas of weakness? And how do we know if a teacher has correctly identified a child’s strengths? Differentiation sounds great, but on what basis are we differentiating? What do we know about this kid—and how do we know it?” 


Lastly, Greg Ashman made the following graph from PISA and TALIS data.

Above copied from Greg Ashman's Your own personal PISA - what does the TALIS show us?

This is simply a correlation, and a weak one, but it is negative; that is, countries that perform well in PISA do less differentiated instruction. Obviously, this is not solid proof that differentiated instruction does not work. But it also shows that there is no proof that it does.

Differentiated instruction demands a great deal from a teacher. With the planning and management of different tasks or activities occurring inside a classroom, as Greg Ashman points out, there are opportunity costs. A teacher who spends more time walking around the room just trying to keep the class in order during differentiated instruction pays a price: more compelling tasks (or even trivial ones, like collecting homework and helping a child get organized) are left undone.




Monday, February 23, 2015

Intelligent Tutoring Systems

What is inside a learner's mind can be very useful to an instructor. Knowing ahead of time the misconceptions a student may have allows a teacher to remediate effectively. Learning benefits from a responsive exchange between a teacher and a student. Large classes are at a distinct disadvantage here, since a teacher's knowledge of where each student stands is severely compromised. Computer systems are now available that aim to provide both student modeling and adaptive remediation. These are called intelligent tutoring systems.

An intelligent tutoring system (ITS) is not a simple computer-based instructional system. The defining characteristics of an ITS have recently been compiled by Ma and coworkers (a minimal code sketch of these three components follows the list below):
An ITS is a computer system that for each student: 
1. Performs tutoring functions by (a) presenting information to be learned, (b) asking questions or assigning learning tasks, (c) providing feedback or hints, (d) answering questions posed by students, or (e) offering prompts to provoke cognitive, motivational or metacognitive change 
2. By computing inferences from student responses constructs either a persistent multidimensional model of the student’s psychological states (such as subject matter knowledge, learning strategies, motivations, or emotions) or locates the student’s current psychological state in a multidimensional domain model 
3. Uses the student modeling functions identified in point 2 to adapt one or more of the tutoring functions identified in point 1
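To make the three characteristics above concrete, here is a minimal sketch of the loop they describe. This is purely illustrative: the mastery numbers, the tiny item bank, and the update rule are my own inventions and far simpler than anything Ma and coworkers analyze, but the division of labor (tutoring functions, a student model inferred from responses, and adaptation driven by that model) is the point.

```python
import random

# A toy student model: estimated mastery (0 to 1) for each skill.
student_model = {"fractions": 0.3, "decimals": 0.6}

# A tiny item bank; the content is purely illustrative.
items = [
    {"skill": "fractions", "question": "1/2 + 1/4 = ?", "answer": "3/4"},
    {"skill": "decimals", "question": "0.5 + 0.25 = ?", "answer": "0.75"},
]

def update_model(skill, correct, rate=0.2):
    """Point 2: infer the student's state from the response (a crude mastery update)."""
    target = 1.0 if correct else 0.0
    student_model[skill] += rate * (target - student_model[skill])

def choose_item():
    """Point 3: adapt the tutoring by targeting the weakest skill in the model."""
    weakest = min(student_model, key=student_model.get)
    return random.choice([item for item in items if item["skill"] == weakest])

def tutor_once(get_response):
    """Point 1: present a task, then give feedback based on the student's answer."""
    item = choose_item()
    answer = get_response(item["question"])
    correct = answer.strip() == item["answer"]
    update_model(item["skill"], correct)
    print("Correct!" if correct else f"Not quite; the expected answer is {item['answer']}.")

# Simulate one exchange with a canned (wrong) response.
tutor_once(lambda question: "1/2")
```

A real ITS replaces the toy mastery dictionary with a persistent, multidimensional model of the student (subject knowledge, strategies, even motivation and emotion), which is precisely what separates it from simple computer-based instruction.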
With this definition in hand, Ma and coworkers then proceeded to do a meta-analysis of ITS studies to see how effective these systems really are, according to vetted published literature. The first important finding concerns the instructional method against which ITS is compared. ITS is effective, but such a statement needs to be qualified. In terms of effects on learning outcomes, the following is observed:

Above graph drawn from data provided by Ma et al. 
What is displayed here is the effect size of ITS when compared to a specific instructional method. Effect sizes of about 0.3 to 0.5 are considered moderate. Clearly, ITS is much more effective than large classes, computer-based instruction, and individual reading. The positive effects, however, are not present when ITS is compared to either small classes (eight or fewer students per instructor) or tutoring. These comparisons truly highlight what an ITS is able to do: it can take the pulse of the student and adjust the lessons. Of course, this is likewise possible in small classes as well as in individual tutoring sessions.

There are other detailed analyses presented by Ma and coworkers. One piece that catches my interest is the domain dependence of ITS. The effectiveness of ITS appears to depend on the subject, as shown in the following figure:

Above graph drawn from data provided by Ma et al.
Chemistry stands out with the smallest effect size for ITS. A deeper analysis of the data actually shows that the humanities and social sciences have much higher effect sizes than mathematics and the natural sciences....





Sunday, February 22, 2015

Asking a President to Resign and a Salary Raise

Teachers belonging to the Alliance of Concerned Teachers (ACT) in the Philippines are scheduled for a sit-down strike this Tuesday, February 24, 2015. The strike reiterates the teachers' demand for a salary increase as mandated by law, Republic Act 4670 or the Magna Carta of Public School Teachers. The Salary Standardization Law likewise dictates that the salaries of government workers, which include teachers, be adjusted every three years. No raises have been made since, according to the president, the government simply has no funds for a pay hike.

Above copied from the Alliance of Concerned Teachers
Of course, ACT is quick to point out that the president does have funds for the Disbursement Acceleration Program (DAP) and the Priority Development Assistance Fund (PDAF). Both programs have been declared unconstitutional by the Supreme Court of the Philippines. On top of the chief executive's refusal to grant salary increases, the recent massacre of police officers in Mamasapano, Maguindanao has earned President Aquino III the title "Teachers' Enemy Number One".

The president does need to respond to quite a number of missteps his administration has made. This president, after all, is the main driving force behind the new K+12 curriculum of the Department of Education. Thus, at a time of great frustration, ACT is also asking the president to resign. It is not at all clear at this point whether such a resignation would lead to meeting the urgent needs of basic education in the Philippines. The constitution prescribes that in the case of a vacancy in the presidency, the duly elected vice president shall become president. Unfortunately, the current vice president is likewise not in a favorable light; there are presently serious questions regarding the vice president's unexplained wealth. ACT and other organizations are therefore suggesting a "transitional council" to act as caretaker before the country chooses a new president and vice president in an election. The current constitution of the Philippines has no such provision.

What the country really needs now is responsibility. It is really irrelevant to basic education who currently leads the government. In fact, it is only the president who can and must respond as a true leader to the needs of the country. The president is duly elected and has enough time during the remaining period of his term to begin solving the problems the country faces. A change in leadership does not guarantee that the right steps are going to be taken. Only responsibility can. The solutions can be reached within the confines of the law. The solutions to the problems Philippine basic education faces do not require extraconstitutional measures. They merely require a president who takes responsibility and acts based on evidence and not on whims.

A sit-down strike for just salaries is perhaps justified, but as a call for a president to resign, it stretches too far.






Saturday, February 21, 2015

A Disturbing Trend

A colleague dropped by my office yesterday. Seeing that I was writing an article on education, he asked whether I had visited schools lately to see the individuals teaching our children. He inquired if I had noticed how young the teachers were. He likewise asked if I thought the teachers had children of their own. Then he lamented how much knowledge has exploded in recent years and rhetorically asked whether I thought the teachers I saw were up to the task. It was good that the question was rhetorical, because I did not know how to respond.

There are encouraging trends in basic education in the United States. As reported in the December 2014 issue of Educational Researcher, individuals who have recently entered the teaching profession are increasingly coming from the top third of high school graduates, based on SAT scores. Unfortunately, there are other trends which only amplify the doubt raised by my colleague. One trend comes from a closer examination of the results of the Programme for the International Assessment of Adult Competencies (PIAAC). The report by Madeline J. Goodman, Anita M. Sands, and Richard J. Coley of the Educational Testing Service looks specifically at the performance of millennials (individuals born after 1980). Focusing on individuals aged 16-34, a comparison against other countries does not look promising for the United States. Three competencies are assessed in the PIAAC. One is Literacy: the ability to understand, evaluate, use, and engage with written text to participate in society, to achieve one's goals, and to develop one's knowledge and potential. In this area, half of young American adults do not reach level 3, the minimum standard:

Above graph drawn from data provided by America's Skills Challenge: Millennials and the Future
In Numeracy: the ability to access, use, interpret, and communicate mathematical information and ideas in order to engage in and manage the mathematical demands of a range of situations in adult life, the situation is worse. Nearly two-thirds of Americans aged 16-34 do not reach level 3, the minimum standard:

Above graph drawn from data provided by America's Skills Challenge: Millennials and the Future

In the last area, Problem Solving in Technology-Rich Environments (PS-TRE): using digital technology, communication tools, and networks to acquire and evaluate information, communicate with others, and perform practical tasks, millennials from the United States are among the worst performers:

Above graph drawn from data provided by America's Skills Challenge: Millennials and the Future
The alarming trend becomes even more obvious when the scores of millennials who hold college degrees are placed against those of other countries. On the numeracy assessment, college graduates from the United States barely perform better than high school graduates from France and Japan. More dismaying, high school graduates from Belgium, Austria, Sweden, Germany, Finland and Denmark do better.

Above figure adapted from America's Skills Challenge: Millennials and the Future
Combine the above with the findings of the report Seven Trends: The Transformation of the Teaching Force (teachers in 2012, compared with teachers in 1987, are larger in number, younger, less experienced, more female, more diverse by ethnicity, similar in academic ability, and less likely to stay in teaching), and a disturbing trend does become visible.

Given what we have now, we may still hope. The teachers are young. We can focus on their professional development. Scores are not static, but we must do the right thing. There is enough assessment and testing; what teachers need now is our support.




Friday, February 20, 2015

Learning to Walk Before One Could Stand

It was very exciting to see my son standing inside his crib for the first time. He was still clinging to the crib rail, but his smile was definitely gleaming with an aura of accomplishment. After he was able to stand on his own legs for some time, the next challenge was to move. I could not wait to see him walk on his own. It was tempting to buy one of those walkers that could help a baby move within a room.


It was important to consider, though, that years ago, Siegel and Burton had already cautioned parents not to use walkers. In an article published in the Journal of Developmental and Behavioral Pediatrics, they presented the following developmental delays, as measured by Bayley indexes.


Walkers in this study are divided into two groups to illustrate precisely why such supports may hamper an infant's development. An occluding walker is one equipped with plastic trays that prevent the baby from seeing his or her legs. This type of walker is apparently the worst in terms of delaying both motor and mental development. The chart above clearly shows that for a baby to develop, seeing what the body does is important. There are indeed risks in providing artificial support for development and not allowing nature to take its course.

Babies continue to grow, and by the time they reach school age, they grow even more with the help of their teachers. Learning to walk before learning to run must equally apply to basic education. There is a need to balance support and challenges. There is likewise the matter of proper timing. How much one should expect and how much support one should give are the top questions an instructor must address. Oftentimes, only the first question is given attention. This is not surprising, since drawing up goals and assessments is usually at the top of the agenda. One simply has to attend an Individualized Education Plan meeting to measure how much attention is spent on writing goals and how little time is given to planning interventions.

There is considerably more emphasis on what a child must accomplish in school. Being able to think is a primary objective of basic education, and the activity that provides the best opportunity to demonstrate thought is writing. Yet writing requires so much from the executive function of the brain. Writing involves initiating, sustaining, inhibiting, shifting, organizing, planning and self-monitoring. Any one of these tasks can be challenging, and a child who has difficulty with any one of them will find writing quite demanding. These tasks describe only the processes; there is likewise content, because writing, after all, requires knowledge. The task of writing is one area in education where goals or objectives are known very well. How these goals can be achieved, unfortunately, is seldom discussed.

"How many went through elementary school and were asked to write a paragraph that started with either 'How I spent my Christmas/ Summer Vacation' or 'My New Year's Resolution' while the teacher simply waited for the bell to ring?" is just one comment yet it captures how much there is to be desired regarding how writing is taught inside elementary classrooms.

Children can indeed develop writing skills early in their lives, but they do need plenty of support. Cindy D’On Jones of Utah State University has examined two interventions designed to help young children develop writing skills. In the article "Effects of Writing Instruction on Kindergarten Students’ Writing Achievement: An Experimental Study", D’On Jones specifically looks at two writing instructional procedures, "Writing Workshop" and "Interactive Writing":


These two are compared against "teaching-as-usual" (Control):


The results of these instructional methods were obtained by administering the Test of Early Written Language - Second Edition (TEWL-2), which measures both basic and contextual writing. In the basic test, there are 57 items that assess a student on directionality, letter formation, punctuation, and sentence construction. In the contextual part, a student writes a story after being shown a series of three sequential pictures. There are 14 compositional items, each one graded on a 4-point scale. The findings are summarized in the following graphs. First, with regard to punctuation, spelling, capitalizing the first letter of the first word in a sentence, and other foundational writing skills, the instructional method does not matter:


On the other hand, the control (teaching-as-usual) group somewhat underperforms in the compositional part:


One must keep the above scores in a proper perspective, however. In a thesis submitted by Emily Boss at the University of Pittsburgh, it is noted that in a sample of 139 kindergarten children from Pittsburgh, Pennsylvania and Tallahassee, Florida, the average score for the foundational skills test is 33.3 (6.3), while for the compositional skills test the mean is 6.9 (3.3). (The numbers inside the parentheses are standard deviations.)
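To make "proper perspective" concrete, a raw score can be expressed as a z-score relative to the Boss reference sample. The short sketch below is only that arithmetic; the two example scores are hypothetical and are not values from the D’On Jones study.

```python
# Reference means and standard deviations from the Boss sample.
REFERENCE = {
    "foundational": (33.3, 6.3),
    "compositional": (6.9, 3.3),
}

def z_score(raw, subtest):
    """Number of standard deviations a raw score sits above the reference mean."""
    mean, sd = REFERENCE[subtest]
    return (raw - mean) / sd

# Hypothetical scores: 37 on the foundational subtest, 9 on the compositional one.
print(round(z_score(37, "foundational"), 2))   # 0.59, about half an SD above the mean
print(round(z_score(9, "compositional"), 2))   # 0.64, a similar relative standing
```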

There seems to be no harm in using either "Writing Workshop" or "Interactive Writing", and there seem to be benefits. What needs to be examined more is how to sustain these efforts. These methods clearly require a teacher to be much more involved and consistent. They require a specific environment that supports writing. Writing is social; it does not begin in a vacuum. This is how we, as adults, write. Walkers are bad because we as adults do not walk with such support unless we are disabled. And unlike learning how to walk, we have to recognize how much support a child really needs to learn how to write, to learn how to think. The reason is simple: even grown-ups need the right setting and support.







Thursday, February 19, 2015

If You Think Reading Is Problematic, Try Writing

I asked my son why he finds writing very difficult. First, he said it was boring. So I qualified my question: "What if the subject was interesting?" My son replied that he still would not be eager to write because writing for him was something "private". We do bare ourselves when we write, so my son does have a point. Teaching a child to write can be extremely challenging. I was a child once. I was not good at writing and I did not like writing at all. Not being capable at something often was a good enough reason not to like it. In fact, it took me until college to write intelligibly. My writing did not go anywhere, especially when I did not even have any idea of where to begin. It was like trying to squeeze something out of a dry sponge.

It was hard then. Now, it is my turn to help a child write.

I tried to use a prompt to see if that could help my son write a story. There are picture prompts available on the internet that one may use. I picked the following:

Above copied from education.com
After ten minutes, the following was my son's paragraph:
So what's happenning is that a man and a dhinosaur were having a race thorgh the woods. And the dhinosaur cried because he lost. But then the man called him a baby. So the dhinosaur went home. But then his mom also called him a baby.
 The above came with his own illustration:



Writing goes far beyond reading because it is creative. In addition, there are rules that include spelling, grammar, sentence structure, and punctuation. It is therefore not surprising to see that in reading, 34 percent of 8th graders are proficient (NAEP reading 2011), while in writing, only 27 percent are:


If research on effective ways to teach reading is lacking, it is worse with writing. Cognitively, writing requires the following steps: (1) generation and organization of ideas, (2) textualization of these ideas, and (3) proofreading of the written piece. The first step can easily be the biggest stumbling block, but for young children, the last two steps are equally challenging. Interventions that focus on each of these steps are still being developed. To gauge how effective these interventions are, reliable measures of writing are also required. For young children, writing is gauged through the following measures: total words written (TWW), words spelled correctly (WSC), and correct writing sequences (CWS).
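As a rough illustration of how the first two of these measures are typically computed, here is a short sketch. Everything in it (the miniature word list, the sample sentence, and the code itself) is my own construction for illustration, not taken from any scoring manual; CWS, which credits pairs of adjacent, correctly written units, is usually scored by hand.

```python
import re

# Tiny stand-in dictionary; a real check would use a full word list or spell checker.
KNOWN_WORDS = {"a", "man", "and", "dinosaur", "were", "having", "race", "through", "the", "woods"}

def total_words_written(text):
    """TWW: count every word the child wrote, spelled correctly or not."""
    return len(re.findall(r"[A-Za-z']+", text))

def words_spelled_correctly(text):
    """WSC: count only the words found in the dictionary."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    return sum(1 for w in words if w in KNOWN_WORDS)

sample = "a man and a dhinosaur were having a race thorgh the woods"
print(total_words_written(sample))      # 12
print(words_spelled_correctly(sample))  # 10 (the two misspellings are not counted)
```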

One intervention in writing that has been given recent attention is "Individualized Performance Feedback". The intervention is as simple as providing a student with a score after each writing session; the score is the total words written (TWW). There is a writing session each week, and the student is informed of his or her current TWW after each session. In addition, the student is also told whether the TWW is higher or lower than that of the previous week. Whether this actually has an effect is the subject of a research article published by Truckenmiller and coworkers.

Apparently, it does, as summarized in the following figure:

Above copied from Truckenmiller et al. Journal of School Psychology 52 (2014) 531-548

The results of the intervention are labeled Feedback (square points) in the above figure. Practice is identical to Feedback, except that students do not receive feedback on their TWW score after the weekly writing sessions. Students in the Control group also receive some information on their performance, but those scores are based on parallel mathematics activities. Participants in this study are third-grade students from three elementary schools that serve mostly minority students. More than two-thirds of the pupils in these schools qualify for reduced-price or free lunch.

From the above figure, it seems that giving students a running tally of where they currently stand helps even when the tally comes from an unrelated subject, since students in the Control group also show measurable growth. A growth of about ten words in a matter of eight weeks for the Feedback group is quite impressive. The authors note that this increase is equivalent to the difference between the averages of third- and fourth-grade students. Of course, what is measured here is just the total number of words a student writes. The authors claim that there is also a noticeable improvement in correct writing sequences. Still, these results are limited to story-writing activities.

As in reading, there is fiction and non-fiction. There is reading for pleasure and there is reading for acquiring knowledge. For me, it took much more than just counting words before I really started to write. I took philosophy, theology, sociology, anthropology, psychology and a number of literature courses before I began writing intelligibly, that is, if one would consider what I write now as such....






Wednesday, February 18, 2015

When Something Does Not Work

Multiple factors determine student performance, and some of these factors are key. Good health, for instance, is expected to be an important factor. With this in mind, a good night's sleep is worth our attention. Sleep not only allows the body to rest but also prepares the brain for the next day. Irritability and difficulty paying attention are among the common results of sleep deficiency. Sleep deprivation is therefore a possible hindrance to good learning, and it is thus expected that a positive correlation exists between adequate, quality sleep and performance in school. For this reason, school start times have become a variable that one may tweak to help improve student performance. Starting school very early in the morning forces a child to wake up early, reducing the number of hours the child can possibly sleep. Numerous studies have shown that later school start times can indeed improve student performance.

Of course, it is still possible to see scenarios where what is expected from published studies does not materialize. Education is multivariate. There are multiple factors, and in some cases, one factor simply overwhelms the rest. A recent paper, "Earlier School Start Times as a Risk Factor for Poor School Performance: An Examination of Public Elementary Schools in the Commonwealth of Kentucky", published in the Journal of Educational Psychology, illustrates this concretely. This study looks specifically at the effects of later school start times on student performance. As in other studies, a correlation between later start times and student performance is seen: students perform better across all subjects when school starts about an hour later. Apparently, this also applies to elementary school children and not just adolescents. This, however, is not the surprising finding. What is remarkable is that delayed school start times improve student performance only in schools where the majority of students do not come from poor families. A later start for school does not have any effect on poor students. This is summarized in the following figure:

Graph above drawn based on data from "Earlier School Start Times as a Risk Factor for Poor School Performance: An Examination of Public Elementary Schools in the Commonwealth of Kentucky"
Schools in this study have been categorized as "not poor" or "poor" depending on the number of students who qualify for a free lunch. Considering data for free-lunch-eligible students in Kentucky, "not poor" schools have about a third of their students qualifying for free lunch, while "poor" schools have as many as ninety percent. Later school start times clearly correlate with performance across all subjects in schools not overwhelmed by poverty. On the other hand, in schools where poverty is evident, later start times do not have any effect.

The lesson here is that sometimes something expected to work does not, because a different, overarching factor is at play. In this case, it is poverty.




Tuesday, February 17, 2015

Reading: We Learn in Steps

For several sessions now, the karate class has been nothing but a repetition of evasive moves. The instructor says "right", and we're supposed to move to our right. When we hear "left", we need to slide to our left. "Jump" means we jump, and most importantly, when "duck" is shouted, we must duck. Over and over again, we practice. The instructor explains that our brain must learn to automatically associate moves with the commands we hear. Sometimes, I wonder if our karate instructor knows some neuroscience. Certainly, we are not our instructor's first class of students. So perhaps the instructor has learned this pedagogical technique either from experience or from his master.

Learning karate starts with defensive moves at Dietrich Karate Studios
Going through these karate lessons reminds me of my years in elementary school. The teacher points to a poster with the following, and the entire class simply recites what is written over and over again: "a e i o u, ba be bi bo bu, ka ke ki ko ku, da de di do du,..."

Above is a screen capture from Filipino ba be bi bo bu

Why were my teachers in elementary school spending so much time on these exercises? These routines perhaps operate on the same principle on which my karate instructor has based his teaching style. In this case, reading is recognized as requiring, first of all, phonological processing. Phonological processing is what our brain does when it associates basic component sounds with their visual representations. Our teachers learned these techniques from experience and from their own teachers.

There is now physical evidence that supports what our instructors have been doing.

Magnetic resonance imaging (MRI) of the brain shows that from kindergarten to third grade there are volume changes in temporo-parietal white matter. The increase in volume is found to explain more than half of the observed variance in reading outcomes. This finding has recently been published in the journal Psychological Science. The following excerpt from the paper describes what this study may imply:
...One possible interpretation of our data is that structural brain differences causally influence early variations in reading development. However, we measured changes in the volume of critical left-hemisphere fiber tracts during a time when children were making large strides in learning to read. Changes in volume in the structures we identified may in part reflect changes in the amount of myelin present (Fjell et al., 2008). The degree of myelination is related to the electrical activity of a particular axon (Ishibashi et al., 2006) and may reflect differences in experience that drive differences in brain development. Thus, changes in volume in the left-hemisphere fiber tracts that relate to reading acquisition in the same time window may reflect differences in brain growth that are at least partly the product of experiential influences....
When little children read and recite "a e i o u, ba be bi bo bu, ka ke ki ko ku, da de di do du,...", this may help trigger the electrical activity that the brain needs in order to develop. With so much focus on upper-level thinking, there is a possibility that such emphasis is not synchronized with how our brain actually develops. There are steps. Skipping these steps may preclude the growth that a child's brain needs.




Sunday, February 15, 2015

A Tale of Two Interventions

"In much of the rest of the world there is evolutionary change, grounded in the assumption that if professionals keep working at something, they can make continual improvements. In China and Japan, for example, curricula change much less frequently and much more slowly than in the United States. To begin with, these curricula are carefully conceived and known to be reasonably effective. These curricula are refined on the basis of classroom observations and student performance. Teachers make the curriculum a collaborative object of study, working to find better ways to teach lessons or to improve them. In that way, gradual and sustained improvements are made...."
-Alan H. Schoenfeld, Professor of Education, UC, Berkeley

How learning happens inside a classroom is affected by many factors, and the curriculum is only one small factor influencing learning outcomes in schools. A written curriculum is likewise not necessarily identical to what is being delivered to students: there are individuals called teachers who implement the curriculum. And there is no denying that some teachers, if not most, feel that pedagogy supported by experimental research may look inviting in the literature but often fails in their own classrooms.

Education is perhaps so complex that it is wise to take a conservative perspective and focus only on small changes. Gigantic pedagogical reforms that promise too much often fall flat. It is also quite common to see interventions working so well in a controlled setting, but not delivering when brought to a larger scale or a more realistic setting.

But there are interventions that work even on a large scale. Pasi Sahlberg wrote the following in the Washington Post:
"...many education visitors to Finland expect to find schools filled with Finnish pedagogical innovation and state-of-the-art technology. Instead, they see teachers teaching and pupils learning as they would in any typical good school in the United States. Some observers call this “pedagogical conservatism” or “informal and relaxed” because there does not appear to be much going on in classrooms. 
The irony of Finnish educational success is that it derives heavily from classroom innovation and school improvement research in the United States. Cooperative learning and portfolio assessment are examples of American classroom-based innovations that have been implemented in large scale in the Finnish school system."
There are interventions backed by evidence-based research that do work. In this post, we look at two interventions that showed encouraging results in initial pilot studies, to gain insight into what makes an innovation successful on a larger scale. One intervention (Number Rockets) seems to work across a larger sample, while a second one (Successful Intelligence) fails when scaled up.

Successful Intelligence can be described by the following excerpt from Sternberg and Grigorenko:
"Many students could learn more effectively than they do now if they were taught in a way that better matched their patterns of abilities. Teaching for successful intelligence provides a way to create such a match. It involves helping all students capitalize on their strengths and compensate for or correct their weaknesses. It does so by teaching in a way that balances learning for memory, analytical, creative, and practical thinking...."
The above does sound quite promising if not outright inspiring. "Teaching in a way that balances learning for memory, analytical, creative and practical thinking" is an ideal no teacher or parent would deny. These objectives require the teacher, for instance, to help students compare and contrast two different solutions to a math problem (analytical), discover on their own an explanation behind a natural phenomenon (creative), and apply what they have learned in math to budgeting their allowance (practical). These are just specific examples, but it should be quite clear that Teaching for Successful Intelligence goes into even greater depth than teaching critical thinking. And it works, at least in controlled settings.

Sternberg and coworkers have examined whether Teaching for Successful Intelligence works on a larger scale (more than 7,000 fourth-grade students from 223 schools across nine US states). The following figure summarizes their findings.

Above copied from Sternberg et al.
The study compares Successful Intelligence against other teaching methods, namely Memory (M), Critical Thinking (CT) and "Teaching as usual" (TAU). The vertical axis on the above graph is a measure of how effective a teaching method is compared to Successful Intelligence (SI); a positive number means that the method other than SI is better. These comparisons are made across different lessons or units. There are five lessons in Language Arts: Wonders of Nature, True Wonders, Lively Biographics, Journeys, and It's a Mystery. There are three lessons in Math: Equivalent Fractions, Measurement, and Geometry. There are two lessons in Science: The Nature of Light and Magnetism. Out of the 23 comparisons, only seven are below the 0.00 line, and out of the 12 lessons, Successful Intelligence works best in only 2. It sounds so good, but it does not work.

To end this post on a more optimistic note, another intervention is worth mentioning. Number Rockets is an intervention designed for first-grade pupils who are having difficulties in arithmetic.

Number Rockets from the Vanderbilt Kennedy Center
Number Rockets is a tutoring program consisting of 48 forty-minute sessions over 16 weeks that focus on (a) identifying and writing numbers, (b) identifying which of two sets has more or fewer objects, (c) sequencing numbers, (d) using <, >, and =, (e) skip counting by 10s, 5s, and 2s, and (f) place values. Similar to the previous intervention, Number Rockets shows promising results in small controlled studies. Gersten and coworkers have investigated whether Number Rockets also works on a much larger scale (almost 1,000 at-risk students from 77 schools). The results are summarized in the following figure:


From the above graph, it can be seen that the effect size of the intervention is about 0.33 (the mean score of the intervention group is about a third of a standard deviation higher than the mean score of the control group). This is a moderate and significant effect size. Gersten et al. therefore conclude in their abstract, "Intervention students showed significantly superior performance on a broad measure of mathematics proficiency."
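For readers who want the parenthetical above as a formula, the number being reported is a standardized mean difference (Cohen's d or a close variant such as Hedges' g): the difference between group means divided by a pooled standard deviation. Here is a minimal sketch with made-up scores, not data from Gersten et al., chosen only so the result lands near 0.3.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical math scores, not the Number Rockets data.
intervention = [52, 56, 60, 64, 68]
control = [50, 54, 58, 62, 66]
print(round(cohens_d(intervention, control), 2))  # 0.32, about a third of an SD
```

Because the effect size is expressed in standard deviation units, it is unit-free, which is what allows comparisons across different tests; the 0.88 reported for Solve It! further down this page can be read the same way, as almost one standard deviation of improvement.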

Some pedagogical interventions do work. The one illustrated here that works seems to carry less lofty goals. It simply aims to help a seven-year-old child who is struggling with math.



Saturday, February 14, 2015

The Problem With Reading

Like mathematics, learning to read is an essential part of basic education. Reading, after all, is an important gateway to the other disciplines. Unfortunately, compared to mathematics, students in the United States have not improved as much in reading comprehension. Based on the results of the National Assessment of Educational Progress (NAEP) exams, progress in reading comprehension has lagged behind the improvement in mathematics over the past two decades.


The uninspiring growth in reading scores becomes even more evident when one looks beyond the mean scores and examines how scores have been distributed over the years.


The bars for math in the figure above, when combined, start to look like a parallelogram, while the bars for reading look very much like a rectangle. Over the past twenty years, the percentage of students reaching the basic level in math has grown from 48 to 82 percent. In reading, the growth is much less spectacular, from 60 to 67 percent. In 1990, only 12 percent of students had reached proficiency in math; in 2013, 42 percent were proficient. In reading, on the other hand, the percentage of students reaching proficiency grew over twenty years from 27 to only 34 percent. Looking at these numbers, one may reasonably infer that we may be doing something right in teaching math. Conversely, we may not be doing something right in reading.

Since reading, like math, is one of the subjects in high-stakes standardized tests, there is no doubt that schools are paying close attention and devoting extra effort to teaching students how to read. There are several "reading comprehension strategies" that should now be quite familiar, especially to those who have children enrolled in elementary schools. The following graphic from TeachThought shows some of the popular ones:


These strategies are not new. An article published in 1991 in the journal Review of Educational Research mentions several of these strategies. The article, "Moving from the Old to the New: Research on Reading Comprehension Instruction", lists the following strategies:

  1. Determining Importance. This is no different from separating the important from the unimportant. This could be achieved by (as suggested in the graphic above) locating key words, activating prior knowledge, and using context clues.
  2. Summarizing Information. This one is actually in the graphic above.
  3. Drawing Inferences. This likewise is in the above poster.
  4. Generating Questions.
  5. Monitoring Comprehension. This is basically equivalent to evaluating understanding.
These strategies are "somehow" supported by research. The following excerpt from the review's conclusions provides a general overview of research on reading comprehension strategies in the 1990s:
We have learned much about the reading comprehension process and about comprehension instruction in recent years, but even more awaits our study. For example, regarding the issue of what to teach, which of the strategies we have identified are necessary and sufficient for the improvement of comprehension abilities? What has been left out? How do strategies develop over time? Even though strategies look similar at different levels of sophistication, should they be introduced differently? Most strategy training work has been completed in the middle and secondary schools. Should strategies be emphasized at the very beginning reading stages, and, if so, which ones can young children be expected to understand and make use of? We also do not know how much of the comprehension curriculum should be spent on the teaching of reading strategies versus other types of activities. How, for example, should strategy instruction time be balanced against such things as decoding skills, free reading, authentic reading and writing activities, and teacher-led discussions of stories?
Fast forward to the current year, and the NAEP scores in reading answer most of the questions posed above.

Part of the difficulty is that the strategies have been drawn up by assuming that we actually know what good readers do and that reading comprehension can be dissected into various parts, each one necessitating a particular strategy. Reading comprehension is really complex. Kendeou and coworkers have pointed out in "A Cognitive View of Reading Comprehension: Implications for Reading Difficulties" that with children in the early elementary grades, there is a need to take into account developmental differences in the following areas:
  • Inference making
  • Executive functions
  • Attention allocations
These areas clearly overlap with most of the strategies commonly suggested for teaching reading comprehension. This is a very important point, since it illustrates that the strategies a teacher uses to help reading comprehension already demand a set of skills that a child may not yet have. These strategies need to be taught and should not simply be expected from a child. With this realization, Kendeou and coworkers have suggested that instructional materials, the text a child reads, must be chosen judiciously. First, appropriate reading materials must be picked to help children acquire skills in these areas; during the early years, one must not mix text intended for reading instruction with text intended for gaining knowledge. Second, these skills are not limited to reading text: children must be given the opportunity to exercise them while listening or even watching a show or movie. Third, reading is a process, so it is important that instruction goes along with the reading, not just before or after, but more importantly, during reading. Lastly, background information is often so influential in reading comprehension that an instructor must always take it into account.

Finally, in the journal The Reading Teacher, Shanahan reminds us of much older techniques that may help students develop reading comprehension. The article, "Let's Get Higher Scores on These New Assessments", enumerates the following:
  • Interpretation of vocabulary in context
  • Making sense of sentences
  • Sustained silent reading
Shanahan concludes:
"The idea of having students practice answering test questions is ubiquitous and ineffective in raising test scores. Consider focusing instruction on those things that actually make a difference in test performance. Teach kids how to figure out the meanings of words on the basis of morphology and context. Teach them to figure out the meanings of complex sentences by breaking those sentences into parts. Teach them to sustain their concentration during the silent reading of challenging and extensive texts. Teach those things well, and you will see improved test performance; the side benefit to be derived from this kind of test prep would be that your students would become better readers to boot."
If a child does not have a strategy we expect, reading becomes merely frustrating, hurting the child's love for reading. We must develop these strategies in children, not simply expect them. Focusing on strategies that we think good readers should have may not be an effective way of teaching. It is like teaching arithmetic while expecting that a child already knows numbers and how to add and subtract.




Thursday, February 12, 2015

Cognitive Strategy Instruction on Math Problem Solving

If I see thirty-six legs in a polar bear exhibit, and each polar bear has four legs, how many polar bears are there inside the exhibit? This was one of the questions in my son's math homework last night. He was quite confident. He said he would show me how he actually thought about the problem. So he started drawing "circles", each one with four "smaller circles" inside:


As he drew each circle, I could hear him counting in fours: 4, 8, 12, 16, 20, 24, 28 .... and as he approached 36, he slowed down with his drawing. Of course, if my son were already fluent with division, he could have simply written the following:


This would also be quite acceptable. In either case, it should be clear that a specific approach to solving the problem was taken. Problem solving can be taught by worked examples. Worked examples, unfortunately, do not necessarily express on their own the line of thought used in solving the problem. There are, of course, examples in various textbooks that attempt to describe the strategy employed in greater detail. Still, in these examples, the strategy remains external to the learner. To develop problem-solving skills, a student needs not only to see examples; a student actually needs to work on the problems. It also helps if the student is aware of his or her own strategy.
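Just to make the two approaches explicit, here is a toy sketch in Python; the loop and the division line are nothing more than my son's two possible strategies spelled out, and the variable names are my own:

# Thirty-six legs, four legs per bear: how many bears?

# Strategy 1: skip counting by fours, the way my son drew his circles
legs, bears = 0, 0
while legs < 36:
    legs += 4      # one more "circle" with four smaller circles inside
    bears += 1
print(bears)       # 9

# Strategy 2: division, once a child is fluent with it
print(36 // 4)     # 9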

Teaching students how proficient word-problem solvers approach and solve math questions is a form of cognitive strategy instruction. An example is the Solve It! intervention.

Above copied from Solve It! (Montague 2003)
Solve It! is an intervention designed to help middle school children develop problem-solving skills in mathematics. Montague and coworkers have recently shown that this intervention also works with younger (Grade 7) learners. The following is an example of the first step (Read) of Solve It! being implemented in a classroom:
Mr. Wright: Watch me say everything I am thinking and doing as I solve this problem.  
Mr. Swanson needs 12 gallons of brown paint at $9.95 a gallon. He needs to buy three brushes at $2.45 each. How much does he spend in total?
First, I am going to read the problem for understanding.  
SAY: Read the problem. Okay, I will do that. (Mr. Wright reads the problem.) If I don’t understand it, I will read it again. Hm, I think I need to read it again. (He reads the problem again.)  
ASK: Have I read and understood the problem? I think so.  
CHECK: For understanding as I solve the problem. Okay, I understand it.  
Next, I am going to paraphrase by putting the problem into my own words. SAY: Put the problem into my own words.
This guy is buying 12 cans of paint and three brushes. Paint is $9.95 and brushes are $2.45 each. How much altogether?  
Underline the important information. I will underline 12 gallons and $9.95 a gallon and three brushes and $2.45 each.

ASK: Have I underlined the important information? Let’s see, yes I did.  
What is the question? The question is “how much did he spend in total?” What am I looking for? I am looking for the total amount of money for the paint and brushes.
The complete sample lesson can be obtained here.
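For reference, the arithmetic that the remaining Solve It! steps eventually lead to looks like this; the calculation below is my own, not part of the published sample lesson:

# Mr. Swanson's purchase, computed directly
paint = 12 * 9.95      # 12 gallons at $9.95 a gallon -> 119.40
brushes = 3 * 2.45     # 3 brushes at $2.45 each      ->   7.35
print(f"Total spent: ${paint + brushes:.2f}")   # $126.75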

The observed effect size of Solve It! as an intervention is quite large (0.88); this is almost one standard deviation of improvement. Shown as a graph, applying the Solve It! intervention in a Grade 7 class leads to a faster growth trajectory in math word-problem solving:

Above copied from Montague et al., Journal of Educational Psychology, 2014, Vol. 106, No. 2, 469-481
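For readers wondering what an effect size like 0.88 means, it is a difference between group means expressed in standard-deviation units. One common version, Cohen's d with a pooled standard deviation, can be sketched as follows; the scores here are made up for illustration, not the study's data:

from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)   # sample standard deviations
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Hypothetical word-problem scores, for illustration only
solve_it_group = [14, 16, 15, 18, 17, 19, 16, 15]
comparison_group = [13, 14, 12, 15, 14, 16, 13, 14]
print(round(cohens_d(solve_it_group, comparison_group), 2))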





Wednesday, February 11, 2015

How Should We Make Lessons More Challenging?

Learning materials are tools that assist a student. Lessons become more concrete, for instance, with manipulatives. Providing too much, however, is not ideal, since a lesson can become so specific that the critical characteristics of the material to be learned are easily misinterpreted. Oftentimes, superficial traits become incorrectly equated with distinguishing features. As a result, moving on to the next lesson or transferring what is learned to a different situation becomes much more difficult. Effective lessons therefore require a balance between accessibility and difficulty. There is indeed a continuum between direct instruction and discovery-based learning. Finding just the right amount of scaffolding allows students to find an activity doable and, at the same time, challenging. The key is introducing "desirable difficulties": difficulties that extend the lesson, while avoiding cosmetic scaffolds that may introduce irrelevant or incorrect generalizations.

One time, I saw my son and his friend doing a math activity on the computer. It was the Battleship Numberline game from BrainPop. The game basically requires a child to estimate where a number lies on a number line. The following is a screenshot:


The game first tells a child where the ship is. In the above example, the ship is supposedly spotted at "10". At the beginning of this game, there is no submarine drawn on the picture. It only shows up after a guess is made along the number line. In this particular case, the guess I made is indicated by a missile falling. It is a fairly good estimate of where "10" is along the number line. Below is an example of a bad guess I made:
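Good and bad guesses differ only in how far the drop lands from the target number, which is simple to express in code. Here is a rough sketch in Python; the five-unit "hit" tolerance is my own assumption for illustration, not BrainPop's actual scoring rule:

def score_guess(target, guess, tolerance=5):
    """Return the distance of a guess from the target and whether it counts as a hit."""
    error = abs(guess - target)
    return error, error <= tolerance

print(score_guess(10, 11))   # a good estimate of "10" -> (1, True)
print(score_guess(10, 47))   # a bad guess             -> (37, False)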




Monday, February 9, 2015

Should I Explain Or Should I Listen?

There are two methods through which one may teach. Showing examples is one (Worked Examples); another is allowing students to produce solutions on their own (Generation). Both can lead to better performance. How two entirely different approaches to teaching can both enhance learning can be explained by examining the material to be learned and the background of the student. How much guidance a student needs depends on two things: the complexity (or novelty) of the material and the expertise of the learner. This dependence comes mainly from the cognitive architecture of the human brain.

The material to be learned can demand resources from either long-term memory or working memory. Long-term memory stores what has already been learned, while working memory serves as a scratch pad for the task at hand. How much is present in one's long-term memory defines expertise, and one of education's objectives is to increase what is stored there. If a problem requires a solution already stored in long-term memory, the solution can be easily retrieved and used in working memory without significant cognitive load. With a problem never encountered before, on the other hand, one is required to generate a solution from scratch, which obviously places a heavier burden on working memory. It is actually good that working memory has a limited capacity; otherwise, the brain might simply churn through an enormous number of permutations and combinations of all the solutions it holds in long-term memory. It is these inherent limitations of working memory that need to be considered in choosing between Worked Examples and Generation.
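To get a feel for why an unconstrained search through everything in long-term memory would be hopeless, consider how fast the number of ways to order even a modest set of known solution steps grows. This is only a toy illustration of combinatorial growth, not a model of memory:

from math import factorial

# Ordered ways to chain together n known procedures
for n in (5, 10, 15, 20):
    print(f"{n} known steps -> {factorial(n):,} possible orderings")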

What is required from working memory depends on what the learner already knows and on the complexity and novelty of the material to be learned. This has recently been demonstrated by Chen and coworkers. Their work is published in the Journal of Educational Psychology:

The Worked Example Effect, the Generation Effect, and Element Interactivity.
Chen, Ouhao; Kalyuga, Slava; Sweller, John
Journal of Educational Psychology, Jan 19, 2015, No Pagination Specified.
http://dx.doi.org/10.1037/edu0000018


Abstract The worked example effect indicates that examples providing full guidance on how to solve a problem result in better test performance than a problem-solving condition with no guidance. The generation effect occurs when learners generating responses demonstrate better test performance than learners in a presentation condition that provides an answer. This contradiction may be resolved by the suggestion that the worked example effect occurs for complex, high-element interactivity materials that impose a heavy working memory load whereas the generation effect is applicable for low-element interactivity materials. Two experiments tested this hypothesis in the area of geometry instruction using students with different levels of prior knowledge in geometry. The results of Experiment 1 indicated a worked example effect obtained for materials high in element interactivity and a generation effect for materials low in element interactivity. As levels of expertise increased in Experiment 2, thus reducing effective complexity, this interaction was replaced by a generation effect for all materials. These results suggest that when students need to learn low-element interactivity material, learning will be enhanced if they generate rather than study responses but if students need to learn high-element interactivity material, study may be preferable to generating responses. (PsycINFO Database Record (c) 2015 APA, all rights reserved) 
As mentioned in the above abstract, the study looks at two groups of students from China. One group is composed of Year 4 (about 10 years old) students who are just being introduced to calculations of areas and perimeters of polygons, while a second group comes from Year 7 (about 13 years old) students. The older students have already taken a geometry class in which the topics involved in this experiment were covered. Year 4 students therefore represent students with much lower expertise than the Year 7 students. Students in both year levels are randomly divided into two classes: one receives high guidance (Worked Examples) while the other receives low guidance (Generation). Both groups tackle two types of geometry problems, one simple and one complex. At the end of the sessions, assessments are administered to gauge how much students have learned on both types of geometry problems (simple versus complex). The results are summarized in the following figures:

Year 4 students (low experience)
Graph based on data from the Journal of Educational Psychology
Year 7 students (high experience)


Graph based on data from the Journal of Educational Psychology
Almost every class taught in a school contains simple and complex topics. It appears that when dealing with beginners, it is important to provide direct and explicit instruction (Worked Examples) on challenging materials. On the other hand, for reviewing topics that have already been covered, it is beneficial to allow learners to develop the answers on their own (Generation).
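Distilled into a crude rule of thumb (my own paraphrase of the pattern in the two experiments, not a prescription from the authors), the finding looks something like this:

def suggest_instruction(expertise, element_interactivity):
    """Crude rule of thumb from the Chen et al. results; arguments are 'low' or 'high'."""
    if element_interactivity == "high" and expertise == "low":
        return "worked examples"   # full guidance eases the working-memory load
    return "generation"            # let learners produce the answers themselves

print(suggest_instruction("low", "high"))    # novices on complex material -> worked examples
print(suggest_instruction("high", "high"))   # experts: effective complexity drops -> generation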




Saturday, February 7, 2015

The Iceberg Effect

Pure ice is less dense than either fresh water or seawater, so it floats, but only about ten percent of its volume is visible above the surface. Thus, the "tip of an iceberg" is an expression commonly used when only a small part of a problem is apparent. With the challenges public education systems face, standardized test scores are often used as gauges. There is nothing wrong with paying attention to scores on standardized exams. The problem lies in how conclusions are drawn, especially when the picture provided by test scores is merely the tip of an iceberg. Test scores are measures of student outcomes; these numbers do not tell, for example, the reasons behind a student's performance. I can stand on a balance and measure my weight. At the end of such a procedure, I do get a number describing my weight, but it does not tell me the real reasons behind the number that I see.
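The ten-percent figure follows from Archimedes' principle: the fraction of floating ice below the surface equals the ratio of the ice's density to the water's. A quick back-of-the-envelope check, using typical textbook densities:

rho_ice = 917.0        # kg/m^3, pure ice
rho_seawater = 1025.0  # kg/m^3, typical seawater

submerged = rho_ice / rho_seawater
print(f"Visible above the surface: {100 * (1 - submerged):.1f}%")   # about ten percent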

In education, no one can deny that scores on international standardized exams are informative. These, like other tests, are measures of student outcomes. High scores do point to education systems around the globe that are producing students who perform quite well; such an extrapolation is reasonable. Errors begin, for example, when one simply jumps to the conclusion that low scores are dependable indicators of poor teaching techniques and a poor curriculum. There are so many other factors contributing to student outcomes. This is the central message of a research document, The Iceberg Effect, recently published by the Horace Mann League of the U.S.A. and the National Superintendents Roundtable:


The report encourages everyone to look at education from a much wider perspective. The following image, for example, captures what is often hidden underneath test scores:

Above copied from The Iceberg Effect
Clearly, without looking at the other factors that may influence education, wrong conclusions can be drawn from what is visible on the surface. The factors listed above (inequity and inequality, support for schools, social stress and violence, and support for young families) are not defined by a school's curriculum. These are factors outside the classroom; they are not pedagogical. Thus, simply copying what teachers do in high-performing countries to improve one's schools is really jumping to conclusions. The Iceberg Effect looks at these other factors in the United States and compares them against those of other countries. The results are summarized in the following figure:

Above copied from The Iceberg Effect
With these additional factors, important clues as to why students in the US do not perform as well as those in other countries become evident. The report concludes:
"American society reveals the greatest economic inequities among the advanced nations in this analysis, combined with the highest levels of social stress, and the lowest levels of support for young families."
Emma Brown at the Washington Post drives this point home in a series of figures taken from the report:





I would like to add one more figure from the report, if I may:


Like the authors of the report, I hope that we all realize that there is so much more than meets the eye....