Introducing SanDERA

By Julian Betts, Ph.D.

The Department of Economics has exciting news in the economics of education: the San Diego Unified School District (SDUSD) and the department have agreed to create a new research entity, the San Diego Education Research Alliance at UC San Diego (SanDERA).

[Photo: Professor Julian Betts]

SanDERA (sahn-dare-ah) builds on a decade-long collaborative research relationship among Julian Betts, colleagues in the Department of Economics and SDUSD. The collaboration began in summer 2000 with funding from the Public Policy Institute of California (PPIC). Andrew Zau was hired by PPIC to undertake the difficult task of piecing together a longitudinal database linking student records with information on the qualifications of each student’s teacher. Andrew, now a senior statistician in the Department of Economics, has worked closely with school district staff to put together quite a remarkable database. He is responsible for day-to-day management of databases and also plays a leading role in the research itself.

In the last decade, this collaboration has produced fifteen books and papers on topics ranging from the determinants of student achievement and school choice to detailed evaluations of major reading reforms implemented in San Diego and the effects of the California High School Exit Exam. Numerous thesis chapters have also resulted from this research.

The creation of SanDERA not only formalizes the ongoing collaborative work with the district, but will also foster rigorous statistical analysis of some of the most pressing policy issues in San Diego. Discussions with district officials about unsettled policy questions will contribute directly to the research agenda. SanDERA also plans to hold public events in San Diego designed to foster a dialogue with parents on some of these issues.

The executive director of SanDERA is Julian Betts, Ph.D., professor and chair of the Department of Economics at UC San Diego. Karen Volz Bachofer, Ph.D., former executive director of SDUSD’s Research and Evaluation Division, who joined UC San Diego in 2009, serves as director of SanDERA. SDUSD’s SanDERA representative is Ron Rode, executive director of the district’s Accountability Office, and its SanDERA coordinator is Peter Bell, Ph.D., director of the district’s Research and Reporting Department. Regular meetings keep everyone updated on research findings and new policy issues.

Karen is a particularly important addition to this collaborative team. She brings deep policy knowledge combined with experience in both quantitative and qualitative research. Karen joined the Department of Economics late last year to lead a case study of career and technical education, part of a large grant from the U.S. Department of Education. In addition to her own research, Karen has already made a big contribution by helping a number of students who study the economics of education situate their hypotheses within the current policy landscape and by pointing them toward natural experiments of interest.

Current Research

SanDERA is currently involved in a number of research projects. It is completing a three-year project funded by the U.S. Department of Education to study more closely which students enroll in career and technical education, and the short- and long-term consequences for students embarking on these occupational courses. Coauthors include graduate students John McAdams and Dallas Dotter, as well as Andrew Zau, Karen Bachofer and Julian Betts. San Diego is one of three sites being studied nationally, as mandated by the National Assessment of Career and Technical Education.

Graduate student Youjin Hahn is working with Julian and Andrew to study the effects of diagnostic math testing on student outcomes. In addition, Youjin and doctoral candidate Sam Dastrup are immersed in thesis research focused on SDUSD, and several other students are developing proposals to work with the district.

Impact of the Research

A number of the studies emanating from the UCSD-SDUSD collaboration have had a practical impact.

Most recently, in summer 2010, PPIC published a longitudinal student-level analysis of the impact of the ambitious and controversial reading reforms implemented in San Diego between 2000 and 2005. The report, by Julian Betts, Andrew Zau and Cory Koedel (an assistant professor at the University of Missouri), received front-page coverage in “Education Week” and additional coverage from many other media outlets. Contrary to critics’ claims at the time, the reforms had positive effects on reading achievement in the lower grades. However, they backfired at the high school level.

Another recent study has also had significant public impact. “Predicting Success, Preventing Failure: An Investigation of the California High School Exit Exam,” a 2008 PPIC book coauthored by Andrew Zau and Julian Betts, examined student performance on the new California High School Exit Exam (CAHSEE). The report found that it is quite easy to forecast who will fail and who will pass the exit exam based on achievement, grades and demographic characteristics of students at the time they enter high school. More surprising, however, was the finding that student performance on the exit exam could be predicted almost as well using information available in elementary school. (See figure.)

Using two separate statistical models, students were assigned to one of ten groups according to their predicted probability of passing the exam. The height of each bar shows the percentage of students in each group who went on to pass the exit exam. The steady rise in pass rates across the groups predicted to have higher probabilities of passing shows that the models based on fourth-grade data perform almost as well as the models based on data available five years later.

[Figure: Strength of Our Predictive Models. Percentage of students who passed the CAHSEE by the end of twelfth grade, plotted against the predicted probability of passing based on information about the students available in fourth and ninth grades.]
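To make the decile grouping concrete, here is a minimal sketch in Python of the kind of calibration check described above. It is illustrative only: the data below are simulated, and the actual models in the report are fit to real student records with far richer covariates.

```python
import numpy as np
import pandas as pd

# Simulated stand-in for the real data: a predicted probability of
# passing the CAHSEE (e.g., from a model fit on fourth-grade records)
# and the observed outcome by the end of twelfth grade.
rng = np.random.default_rng(0)
p_hat = rng.uniform(0, 1, size=5000)       # predicted P(pass)
passed = rng.uniform(size=5000) < p_hat    # simulated pass/fail outcomes

df = pd.DataFrame({"p_hat": p_hat, "passed": passed})

# Assign students to ten groups by predicted probability, then compute
# the share in each group who actually passed. A model with genuine
# predictive power shows pass rates rising steadily across the groups.
df["group"] = pd.qcut(df["p_hat"], q=10, labels=range(1, 11))
print(df.groupby("group", observed=True)["passed"].mean())
```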

The same book also contains a study of the effect of state funds that allowed districts to tutor twelfth-grade students who had yet to pass the CAHSEE. (From tenth grade on, students have multiple chances to pass the test.) These expenditures did not appear to have a big effect. Given the ineffectiveness of this eleventh-hour intervention and the finding that CAHSEE failure can be predicted from data as early as fourth grade, the report suggests that the state of California provide these funds to districts in a way that would allow them to intervene with struggling students far earlier than twelfth grade. The state has recently acted to fold these twelfth-grade tutoring funds into a more flexible funding category. While it is doubtful that the report was solely responsible for this policy change, after several briefings in Sacramento the authors knew that the message had been delivered and accepted by key officials. They are now updating and extending this work with the help of another graduate student, Yendrick Zieleniak.

In other work, Cory Koedel, while still an economics graduate student, and Julian Betts have studied “value-added” measures of teaching effectiveness. The idea is to test whether teacher effectiveness can be meaningfully measured by examining changes in students’ test scores during the school year. (While less known to the public, value-added models have been a hotbed of policy-wonk debate for the last few years. But value-added measures of teacher effectiveness burst into the headlines in August 2010, when, rather sensationally, the “Los Angeles Times” announced it had performed such an analysis and would post a list of teacher rankings from the Los Angeles Unified School District.)

In their analysis of value-added approaches, Cory and Julian have found that teachers do vary dramatically in effectiveness. However, a number of statistical issues need to be addressed before administrators can feel confident evaluating (or paying) teachers based on students’ gains in test scores. First, and most damaging, these measures are quite unstable if one uses just two or three years of test-score data. This finding matters because numerous factions in the current education debates are recommending that beginning teachers be assessed, and perhaps fired, based on the test scores of students they teach in their first year or two of teaching. Such an approach, without considerable additional and direct evaluation of teachers in the classroom, is likely to misidentify the “best” novice teachers.
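For readers who want the mechanics, the core of a value-added model can be sketched in a few lines of Python. This is a deliberately stripped-down illustration on simulated data; real specifications, including those in Cory and Julian’s work, control for prior achievement, student characteristics and more.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated student-year data: each student's within-year test-score
# gain and an assigned teacher. The "true" teacher effects are known
# here only because we generated them ourselves.
rng = np.random.default_rng(1)
n_teachers, n_students = 50, 2000
true_effect = rng.normal(0.0, 0.15, size=n_teachers)   # true "value added"
teacher = rng.integers(0, n_teachers, size=n_students)
gain = true_effect[teacher] + rng.normal(0.0, 1.0, size=n_students)

df = pd.DataFrame({"gain": gain, "teacher": teacher.astype(str)})

# Regress gains on teacher fixed effects; the estimated coefficients
# are the value-added measures. With few students per teacher and a
# single year of data, the noise term dominates, which is why such
# estimates are unstable over short windows.
fit = smf.ols("gain ~ C(teacher)", data=df).fit()
print(fit.params.filter(like="C(teacher)").sort_values().head())
```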

Cory and Julian also offer solutions to two other key statistical issues. The first is that tests used in some states have “ceiling effects”: they are closer to tests of minimum competency, so high-achieving students score so highly that they have no room for improvement. The study offers a simple test that policymakers can use to check whether their test is subject to this problem.
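The paper’s actual diagnostic is more involved, but the intuition can be illustrated with a crude stand-in: measure how much of the score distribution is piled up against the test’s maximum, since students at the ceiling have no measurable room to gain.

```python
import numpy as np

def ceiling_share(scores: np.ndarray, max_score: float, band: float = 0.05) -> float:
    """Fraction of scores within `band` (as a share of the maximum) of the ceiling."""
    return float(np.mean(scores >= max_score * (1.0 - band)))

# Simulated scale scores on a 0-100 test; a large share near the top
# suggests gains for high achievers are mechanically capped.
scores = np.clip(np.random.default_rng(2).normal(80, 15, size=10_000), 0, 100)
print(f"Share at or near the ceiling: {ceiling_share(scores, max_score=100):.1%}")
```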

Also, in separate work, Cory and Julian reproduced a troubling finding by Jesse Rothstein of UC Berkeley. Using data from North Carolina, Jesse showed that one can predict the fourth-grade test-score gains of students by knowing who their fifth-grade teachers will be the following year, and that the future teacher predicts current performance almost as well as the identity of the current-year teacher does. However, the new study shows that this problem disappears if data on four years of classes taught by each teacher are taken into account. As in the earlier study, the reliability of these “value-added” measures of teaching effectiveness is not high if one focuses on just one or two years of classroom outcomes for each teacher.
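A minimal version of this falsification test is easy to write down (the variable names below are ours, and the data are simulated with no sorting): regress current gains on next year’s teacher assignments and test whether the future-teacher “effects” are jointly zero.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with no sorting: fourth-grade gains and the (randomly
# assigned) identity of each student's fifth-grade teacher next year.
rng = np.random.default_rng(3)
n = 3000
df = pd.DataFrame({
    "gain_gr4": rng.normal(size=n),
    "teacher_gr5": rng.integers(0, 40, size=n).astype(str),
})

# Regress current gains on next year's teacher. Under random assignment
# the overall F-test should not reject the joint null that all
# future-teacher "effects" are zero; a strong rejection in real data
# signals non-random sorting of students to teachers.
fit = smf.ols("gain_gr4 ~ C(teacher_gr5)", data=df).fit()
print(f"F = {fit.fvalue:.2f}, p = {fit.f_pvalue:.3f}")
```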

This work has been discussed in a front-page article in “Education Week” as well as in a blog for “The Wall Street Journal.” For his thesis contributions on value-added measures of teacher effectiveness, Cory Koedel was named the 2008 co-winner of the American Educational Research Association’s national award for best thesis in education policy. The department is certainly very proud of his achievement.

Looking Forward

In the future, the goal is for SanDERA to grow in scope while conducting useful research for policymakers and still maintaining rigorous econometric standards. The hope is to increase engagement in the San Diego community and to continue making research contributions that are relevant both to an academic audience and to education policymakers who regularly make difficult choices about how best to spend limited resources.

Portions of the work described above have been funded by the U.S. Department of Education, PPIC, The William and Flora Hewlett Foundation, the Bill and Melinda Gates Foundation, The Atlantic Philanthropies, the Girard Foundation, the California Academic Partnership Program and others.

For further information on SanDERA, please contact Julian Betts at jbetts@ucsd.edu or Karen Volz Bachofer at kbachofer@ucsd.edu.


Published work by the UC San Diego team relating to the SDUSD/UCSD Research Collaboration:

PPIC Books and Reports (available at www.ppic.org)

  • (2010), Betts, Julian R., Andrew C. Zau, and Cory Koedel, Lessons in Reading Reform: Finding What Works, San Francisco: Public Policy Institute of California.
  • (2008), Zau, Andrew C., and Julian R. Betts, Predicting Success, Preventing Failure: An Investigation of the California High School Exit Exam, San Francisco: Public Policy Institute of California.
  • (2006), Betts, Julian R., Lorien A. Rice, Andrew C. Zau, Y. Emily Tang, and Cory R. Koedel, Does School Choice Work? Effects on Student Integration and Achievement, San Francisco: Public Policy Institute of California.
  • (2005), Betts, Julian R., Andrew C. Zau, and Kevin King, From Blueprint to Reality: San Diego’s Education Reforms, San Francisco: Public Policy Institute of California.
  • (2003), Betts, Julian R., Andrew C. Zau, and Lorien A. Rice, Determinants of Student Achievement: New Evidence from San Diego, San Francisco: Public Policy Institute of California.

Refereed Journal Articles and Book Chapters

  • (forthcoming), Koedel, Cory, and Julian R. Betts, “Does Student Sorting Invalidate Value-Added Models of Teacher Effectiveness? An Extended Analysis of the Rothstein Critique,” Education Finance and Policy.
  • (2010), Koedel, Cory, and Julian R. Betts, “Value-Added to What? How a Ceiling in the Testing Instrument Influences Value-Added Estimation,” Education Finance and Policy, 5(1), pp. 54-81.
  • (2010), Betts, Julian R., Y. Emily Tang, and Andrew C. Zau, “Madness in the Method? A Critical Analysis of Popular Methods of Estimating the Effect of Charter Schools on Student Achievement,” Chapter 2 in Paul T. Hill and Julian R. Betts (Eds.), Taking Measure of Charter Schools: Better Assessments, Better Policymaking, Better Schools, Lanham, MD: Rowman & Littlefield Publishers, Inc.
  • (2009), Koedel, Cory, Julian R. Betts, Lorien A. Rice, and Andrew C. Zau, “The Integrating and Segregating Effects of School Choice,” Peabody Journal of Education, 84(2), pp. 110-129.
  • (2009), Babcock, Philip, and Julian R. Betts, “Reduced-Class Distinctions: Effort, Ability, and the Education Production Function,” Journal of Urban Economics, 65, pp. 314-322.
  • (2009), Betts, Julian R., “The San Diego Blueprint for Student Success: A Retrospective Overview and Commentary,” Journal of Education for Students Placed at Risk, 14(1), pp. 120-129.
  • (2007), Betts, Julian R., “California: Does the Golden State Deserve a Gold Star?,” Chapter 3 in Frederick M. Hess and Chester E. Finn Jr. (Eds.), No Remedy Left Behind: Lessons from a Half-Decade of NCLB, Washington, D.C.: AEI Press, pp. 121-152. (This article draws upon conversations with many school districts, including SDUSD.)
  • (2005), Betts, Julian R., “The Promise and Challenge of Accountability in Public Schooling,” in Frederick M. Hess (Ed.), Urban School Reform: Lessons from San Diego, Cambridge, MA: Harvard Education Press, pp. 157-176.
  • (2005), Zau, Andrew C., and Julian R. Betts, “The Evolution of School Choice,” in Frederick M. Hess (Ed.), Urban School Reform: Lessons from San Diego, Cambridge, MA: Harvard Education Press, pp. 223-241.
  • (2004), Betts, Julian R., and Anne Danenberg, “San Diego: Do Too Many Cooks Spoil the Broth?,” in Frederick M. Hess and Chester E. Finn Jr. (Eds.), Leaving No Child Behind? Options for Kids in Failing Schools, New York: Palgrave Macmillan, pp. 213-238.