
Opinion

What Assessment System Would Serve Students & Society?

By Joe Nathan, February 17, 2015

Joe Nathan opens this week's discussion. Deborah Meier responds, and Nathan offers a brief reaction.

Dear Deb,

You asked last week whether human judgment has a place in education. My response, going back over more than 40 years, is "Absolutely yes." To help show how that could be done, I'd like to discuss a report entitled "What Should We Do: A Practical Guide to Assessment and Accountability in Schools." Our center coordinated this project on assessment in 1999-2000. We worked with some of the most thoughtful evaluation authorities in the country, and some of the most creative, innovative public schools.

The report strongly encourages using multiple measures, including human judgment. I think the report describes the kind of assessment system that makes sense both for helping students grow and for helping the broader society understand what's happening in a school.

First, we convened a number of evaluation authorities to discuss what was vital, and what was valuable, in assessing a school. These experts included Professor Lauren Resnick, then president of the American 91制片厂视频al Research Association; a University of Minnesota professor who is one of the nation's leading authorities on assessing students with special needs; a nationally known authority on assessing students for whom English is a second language; a University of California, Los Angeles professor who has studied the role of the arts in education and the assessment of students with whom traditional schools have not succeeded; and a former teacher, principal, district superintendent, and Iowa Superintendent of Public Instruction who went on to become a professor at the University of Colorado, Colorado Springs.

Before producing a final report, we invited reactions and examples from district and charter schools all over the U.S. We did this via various networks and a November 1999 article in 91制片厂视频 Week. Eventually we shared information from 11 district and 10 charter public schools from all over the country. (One of them was Central Park East, which you founded.) We also talked with authorities from the American Federation of Teachers, American School Counselors Association, Council of Great City Schools, Charter Friends Network, Massachusetts Charter Schools Resource Center, National Association of State Boards of 91制片厂视频, North Central Regional 91制片厂视频al Laboratory, Rural Trust, and Small Schools Network.

I list these individuals and organizations, not to be boring, but to show that we tried to listen to and learn from a variety of thoughtful people with wide-ranging insights and experience.

Together we developed six "vital" and three "valuable" features that the report suggested be part of any and every school's evaluation process.

The six vital features were:

  • Clear, measurable outcomes for each school
  • Goals that are widely understood and supported by families, students and educators
  • Multiple measures, including use of standardized tests and applied performance measures
  • Measurement of all students' work, not necessarily by using the same assessment
  • Assessment of students' growth, including students who don't speak English at home; again, not necessarily using the same assessment
  • Explanation of how information gained from assessments is being used to inform school improvement efforts

We concluded that the following were valuable:

  • Using a person or persons outside the school to help assess student work
  • Measuring the experiences and attitudes of school graduates
  • Creating a parent/educator/community committee to supervise the assessment effort

Speaking specifically to your point about human judgment's place in assessment, the report cited graduation programs at four schools, including the St. Paul Open School (now known as Open World). Each of these schools uses a portfolio approach to high school graduation. Three are district schools; the fourth is a chartered public school. Though the details vary, each school relied in part on assessments of students by adults, in some cases including both educators and community experts. The report also cited a number of other performance measures in K-12 public schools.

The report also drew on experience from Alverno College in Milwaukee. For decades, this college has used an approach developed by its faculty to measure a student's ability to speak in public. Alverno keeps a record of how each student progresses toward various public speaking standards, measured by humans, not standardized tests. It's a great example of measurement that uses a mixture of standards and human judgment to determine whether students are making progress, and in what ways.

To sum up, yes, I think there is a very important role for human judgment in assessing students and in the overall assessment of a school. We've tried hard to provide examples of how this can be done.

Deborah Meier responds:

Dear Joe,

What an amazing 10 days I've had--to Lima for a week with my granddaughter, and then to Texas for the always amazing meeting of the North Dakota Study Group (NDSG)! Our focus this time was issues related to the good fortune of having a home language that isn't English.

The NDSG first met in North Dakota in 1972 at the request of one of my heroes, Vito Perrone, to lend support to Head Start parents in their effort to stop the use of standardized tests--IQ tests at first--to measure their children.

They found it insulting.

And we agreed, as did many testing experts who joined us--Ted Chittenden and Walt Haney, to name just two. So now, 40-plus years later, we're back fighting an even more pervasive testing system.

I'd pay more heed to those weighty organizations and experts you mention if they had challenged more loudly the testing mania that has undermined serious and useful education for the past 20-25 years. We've needed them and other experts to just plain say the truth: this is not science. I like some of the report's points more than others.

That they start by proposing that we use clear, measurable tools to judge institutions and children seems sad. Of course, it could be that they are using the term "measurement" in a way we're not accustomed to. Measurement has become, for me, synonymous with a system for differentiating: ranking those with the most to the least "academic" smarts, objectively. In fact, such tools do neither. If they are suggesting a new paradigm that does not require a rank order or a pretension of precision, and is designed with a particular purpose and audience in mind--then I'm arguing semantics, and I apologize.

Another query: Is using multiple bad tools any better than one? Or are they recommending using a range of very different "tools"--including observation, taped reading samples, student work, etc.? What defines, in their terms, either reliability or validity? Both presently rest on what I deem to be built-in race and class biases--and ever more shall do so. That a test predicts success on similar tools given in the future says as much about the real world as about the student's competence! We've abandoned the normal-curve system of scoring by percentages (which I'm not a fan of) for an even sillier one--which I call a politically set scoring system. It's set to ensure that just the right number succeed and fail, based on the latest politically "rigorous" agenda. Not new, Joe; the NYC DOE did something similar for decades--to make each new superintendent and/or mayor look good.

So many of the issues involved in the assessment discourse speak to how we define what it means to be a well-educated citizen of a democracy--useful to oneself and to the common good. If education is a preparation for the weighty task of deciding on matters with earth-shaking repercussions for our common future, we'd better spend more time thinking together, community by community, about what we want to judge or measure. The beauty of the kind of work I did with Ted Sizer is that we each worked to build schools that could learn as they were developing--from themselves and from others. I borrowed from the Parker School (a charter), they borrowed from us, and on and on. We also benefited from the idea of visiting teams of colleagues. Some interesting work is underway in Boston--based on the original Pilot schools (1/3 of Boston's schools are Pilots) and now being proposed for all Boston schools.

It might help us see when online learning can or cannot be useful, and when classrooms are too large (to do x). And, perhaps, why ranking everything is profoundly undemocratic.

I'd like to reverse the process and have that body of experts respond to proposals put forth by teachers and parents and even kids, as they did when the Coalition schools, some 20 years ago, developed their own performance assessments. The bottom line: the designers should be as close as possible to the real "data"--the kids!

Deb

Joe Nathan responds:

Deb,

In this brief response, I'll comment on a few of the concerns that you mentioned above. I hope we can continue this discussion and that many others will join in.

First, yes, Deb, you and I agree that it would have been valuable to have some families, educators and students helping develop features of each school's assessment system. The report I described above recommended that every school have a committee of educators, family and community members helping develop and supervise its assessment program.

Second, you've recommended, and I agree, that educators should examine the work of the Consortium schools in New York. In fact, the "What Should We Do" report included examples from one of the founders of that Consortium, Urban Academy in New York City. The report also included examples from Central Park East, which you founded.

In developing the "What Should We Do" report's recommendations, we asked for and received terrific responses and suggestions from educators all over the U.S. Many of their insights are included in the report.
At the same time, I'm glad we included people like Lauren Resnick, then president of the American 91制片厂视频al Research Association. She has had a distinguished career, and she and others involved have devoted decades to helping improve public schools.

Third, you wrote we should have started with people in schools, and asked the researchers to react to recommendations from schools. If I were doing the project again, I would include, from the beginning, both researchers and people working every day in some outstanding schools. (And by outstanding, I don't mean just those with high test scores.)

Fourth, yes, I believe that each school should have some clear, measurable goals that are well known to and supported by the faculty, families and students. Progress toward many of those goals would not be measured by traditional standardized tests.

You disagreed with this recommendation.

Why should schools have some clear, measurable goals? Families deserve to know what a school is trying to accomplish. So does the broader community. Moreover, a school is more likely to reach its goals if educators, families and students involved in the school know about and agree on what the institution's goals are, and how they will be measured.

Dr. Wayne Jennings, founding principal of the St. Paul K-12 (district) Open School, stressed the value of having multiple goals, multiple measurements, and annual reports. You may recall Jennings as a member of the North Dakota Study Group during its first 15 years. Jennings has been a wonderful mentor for many people, including me. He is a visionary educator who has helped create several exciting public schools, district and charter.

Jennings was and is no fan of heavy reliance on standardized tests. He was very clear when he joined with hundreds of parents and community members in 1970. Together, they convinced the St. Paul Board to establish the Open School.

He emphasized: "It's not enough to describe what you oppose. You have to explain what you are for."

So with educator, family and student participation, the Open School did a yearly report, something I'd suggest every public school produce. After several years, the U.S. Department of 91制片厂视频 recognized this school as a "carefully evaluated, proven innovation worthy of national replication."

The Open School annual report reflected its values and goals. For example, the school believed in learning from families and students, and in listening to graduates. It surveyed these groups and used some of their suggestions to improve the school. In one case, a survey of graduates recommended that the school increase the amount of writing students did. Educators agreed and added more writing to the curriculum.

The school also valued learning in the community, not just learning inside the building. It believed students should help improve the community. So in addition to standardized test scores, the annual report included, for example:

  • Results of parent and student surveys (identifying strengths and areas needing attention)
  • Results of surveys of graduates (once the school had some)
  • Examples of how educators used these surveys to improve the school
  • Examples of students' community service projects
  • Examples of local and national field trips
  • The number of students who took college classes
  • What graduates did after high school

The annual report also included measures that parents, students and community members themselves suggested.

Part of an assessment system for each publicly funded school should be use of a standardized test. But as you know, there are many such tests. I think the best ones measure progress over the course of a year, rather than just being given once a year. And yes, we agree that the NCLB expectation that all students be proficient by 2014 was absurd.

You asked, "Is using multiple bad tools any better than one?" Of course not. But like you, I'm all for giving schools the power to select several ways to measure what's happening in the school. Schools should not rely just on standardized test scores and, in the case of high schools, four-year graduation rates.

If I were working in the Open World School today, I'd suggest adding the Hope Survey. This survey measures whether students feel they are learning to set and work toward goals, and whether they are developing a sense that they can accomplish things they value. A University of Kansas study found this was a better predictor of college graduation than high grades or high test scores in high school. Students who work on "real world" projects develop the kind of skills and attitudes that the Hope Survey measures. The Hope Survey is available from an organization which, in the interest of full disclosure, serves as the fiscal agent for our center.

Assessing student achievements, and assessing an overall school, are big subjects. I'm glad we're discussing how this should be done. From my perspective, using a variety of measures, including some selected at the local school level by educators, families and students, is the best way to capture the broad array of things that each school is trying to do.

Joe Nathan has been an urban public school teacher, administrator, PTA president, researcher, and advocate. He directs the St. Paul, Minn.-based Center for School Change, which works at the school, community, and policy levels to help improve public schools.

The opinions expressed in Bridging Differences are strictly those of the author(s) and do not reflect the opinions or endorsement of 91制片厂视频 Week or any of its publications.