Governance and Leadership – Education Next

Putting Teachers on the Ballot
https://www.educationnext.org/putting-teachers-on-the-ballot-raises-fewer-charters-when-educators-join-school-board/
May 30, 2023

Raises for teachers, fewer charters when educators join the school board

The post Putting Teachers on the Ballot appeared first on Education Next.


Illustration of campaign flyers on a sign

Public K–12 education in the United States is distinctively a local affair: school districts are governed by local boards of education, composed of lay members typically elected in non-partisan elections. These boards have decision-making power over hundreds of billions of public dollars and oversee complex agencies that, in addition to preparing a community’s children for the future, can be the biggest employer in town. Yet we know very little about what factors influence a board’s governance and impact, including the professional backgrounds of elected members.

One profession would seem to have particularly relevant effects: educators. Organizations like the National Education Association and Leadership for Educational Equity, the political arm of Teach for America, are training and supporting their educator members and alumni to run for elected offices. What might be the impacts of such efforts on school board elections, district governance, and student outcomes?

Research focused on boards of directors, which play a similar role in the corporate world, has found that adding members with more industry expertise increases a firm’s value. It stands to reason that electing educators to school boards could have similarly beneficial effects. For example, former classroom teachers or school leaders with firsthand knowledge of common challenges could theoretically make better decisions about teachers’ working conditions and positively influence student performance.

On the other hand, 70 percent of U.S. teachers are members of teachers unions. This raises the possibility that educators serving on school boards could be influenced not only by expertise but also allegiance to union priorities. That could theoretically influence collective bargaining, which is one of the major responsibilities of a school board. Union allegiance could shift bargaining agreements toward union goals, such as increasing teacher salaries or limiting charter-school growth, which may not necessarily benefit students.

We investigate these possibilities in California. State election rules randomize the order of candidates’ names on the ballot, which allows us to estimate the causal effects of an educator serving on a school board. By looking at randomized ballot order, candidate filings, election records, and school district data, we provide the first evidence on how the composition of local school boards affects district resource allocation and student performance.

Our analysis finds no impact on student achievement from an educator serving on a school board; neither average test scores nor high-school graduation rates improve. However, outcomes relevant to union priorities advance. Relative to a district without an educator on the school board, charter-school enrollment declines and the number of charter schools shrinks by about one school on average during an elected educator’s four-year board term.

In addition, each educator elected to a board leads to an increase of approximately 2 percent in teacher pay, while non-instructional salaries remain flat. Benefits spending is stable, while the share of district spending on ancillary services and capital outlays shrinks. We also find that educators are 40 percent more likely than non-educators to report being endorsed by teachers unions.

Despite raising teachers’ salaries, electing an educator to a school board does not translate into improved outcomes for students and has negative impacts on charter schools. We believe this shows that school boards are an important causal channel through which teachers unions can exert influence.

Electing Educators in California

Nationwide, nearly 90,000 members serve on about 14,000 local school boards. These boards have several general responsibilities, which include strategic planning for the district, curricular decisions, community engagement, budgeting, hiring senior administrators, and implementing federal and state programs and court orders. In addition, in nearly all states, school boards determine contracts for instructional staff through collective-bargaining agreements with teachers unions. These negotiations set salary schedules, benefits, work hours, and school calendars. Local school boards also set attendance zone boundaries and, in about three dozen states, authorize and monitor charter schools. In 2020–21, local education agencies accounted for 90 percent of all charter-school authorizers in the U.S. and enrolled 48 percent of the nation’s charter-school students.

While typical in most respects, school district governance in California has several unique characteristics. First, teachers unions are especially influential: 90 percent of California teachers are full voting union members. Second, school boards effectively do not have the power to tax. Under Proposition 13, property-tax collections are capped at 1 percent of assessed value, and assessments are adjusted only when a property is sold. Finally, charter authorization is overwhelmingly a local issue, with about 87 percent of California charters authorized by local school districts. Los Angeles Unified School District is the single biggest local authorizer in the U.S. and enrolls 4 percent of all charter-school students nationwide.

Our analysis is based on records from the California Elections Data Archive for all contested school board elections from 1996 to 2005. The data include each candidate’s vote share, ballot position, electoral outcome, and occupational background. We identify as educators candidates who describe their primary occupation or profession as a teacher, educator, principal, superintendent, or school administrator. Educators account for 16 percent of all 14,150 candidates in contested races and 19 percent of all 7,268 winners during this period.

Almost all school-board members serve four-year terms with staggered contests occurring every two years. The average tenure is seven years, and the average school board has five members. We use candidate-level records to construct yearly measures of school-board composition in each district, including the share of members who are educators. On the average school board, educators account for 18 percent of members. We link school-board rosters with district-level characteristics and charter-school campus and enrollment counts from the federal Common Core of Data, as well as negotiated salary schedules and district finance information from the state Department of Education. To look at impacts on student outcomes, we include average test scores in elementary and middle schools along with high-school graduation rates, also from the state education department.

Investigating Educator Impacts

To estimate the causal effects of an educator being elected to a school board, we need to compare two sets of circumstances: what happens after an elected educator joins the board and what would have happened if the educator had not won. While the effects could appear immediately and persist over time, it is also possible that they only become apparent in the longer run. Our approach therefore must examine the profile of effects over time.

The key challenge we face in making these comparisons is that the school districts that elect educators likely differ from those that do not—and these other differences could be responsible for any policy outcomes that change after an educator’s election. To overcome this challenge, we take advantage of the fact that, under California law, the order in which candidates for elected office appear on the ballot is randomly determined. Our data confirm that candidates who have the good fortune of being listed first on the ballot gain an advantage of 10.3 percentage points of the votes cast in their election. When an educator is listed first, this advantage translates into a 2.3 percentage point increase in the share of the board’s members who are educators. In short, the random assignment of an educator to the top of a ballot will shift a board’s composition.
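The logic of this ballot-order design is that of an instrumental-variables (two-stage least squares) estimator. A minimal sketch on simulated data may make it concrete; every number below is hypothetical (the first-stage effect is made larger than the study's 2.3-point shift so that a small simulation yields a stable estimate), and the point is only how random ballot position purges the bias from unobserved district traits:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000  # simulated district-years (hypothetical, not the study's data)

listed_first = rng.integers(0, 2, n).astype(float)  # randomized ballot position
confound = rng.normal(size=n)                       # unobserved district traits

# First stage: being listed first shifts the educator share of the board.
educator_share = 0.18 + 0.10 * listed_first + 0.05 * confound + rng.normal(0, 0.05, n)

# Outcome depends on board composition AND on the confounder, which biases naive OLS.
outcome = 1.0 + 2.0 * educator_share + 0.5 * confound + rng.normal(0, 0.1, n)

def ols_slope(y, x):
    """Slope from an OLS regression of y on x with an intercept."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def ols_fitted(y, x):
    """Fitted values from the same regression (used here as the first stage)."""
    X = np.column_stack([np.ones_like(x), x])
    return X @ np.linalg.lstsq(X, y, rcond=None)[0]

naive = ols_slope(outcome, educator_share)                         # contaminated by confound
iv = ols_slope(outcome, ols_fitted(educator_share, listed_first))  # two-stage least squares
print(f"naive OLS: {naive:.2f}   IV via ballot order: {iv:.2f}")
```

Because ballot position is assigned by lottery, the fitted first stage varies only with the random draw, so the second-stage slope recovers the true effect (2.0 in this toy setup) while the naive regression does not.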

Armed with this insight, we compare the policy choices of districts where educators are and are not listed first to isolate the causal effects of adding an educator to a school board on student outcomes, district spending, and charter schools. We first look at elementary- and middle-school scores on reading and math tests, as well as high-school graduation rates, and find no impacts.

We then consider teachers’ working conditions and find limited evidence of effects on service days, benefits, or class size. However, when an educator is elected to a school board, teachers’ salaries four years later are about 2 percent higher than they would have been otherwise. These increases apply across the board, for teachers at all levels of education and experience.

Because California school boards cannot raise the tax rate, boards decrease spending on building repairs and services like professional development in order to pay teachers more (see Figure 1). Four years after an educator is elected, a school board has increased the share of spending on certified salaries by 1.3 percentage points and decreased spending on capital outlays and services by 0.6 and 0.7 percentage points, respectively. We do not find evidence for impacts on superintendents’ salaries.

Figure 1: Districts Spend More on Teacher Salaries After an Educator Joins a School Board

Turning to charter schools, we find that the share of district students enrolled in charters declines by three percentage points (see Figure 2). By the end of an elected educator’s four-year term, there are 1.3 fewer charter schools in the district. In a state with an active charter sector serving at least one out of every 10 public-school students, these are sizeable impacts.

Figure 2: Fewer Charters When Educators Serve on Local School Boards

What if a school board includes multiple educators? That could shift the identity of the median board “voter” for a given issue and influence board decisions through deliberations and agenda-setting. To examine these possibilities, we estimate the effects of electing an educator to a school board if it already has a sitting member who is an educator. Our results suggest that this is of limited importance. There are slightly larger negative effects on charter school enrollment, but these are not statistically significant.

We also investigate whether electing an educator to a school board has consequences for subsequent elections and find evidence that it does. In this analysis, we look again at the effect of ballot order. An educator being listed first increases the number of elected educators in that election by 13 percent but decreases the number of elected educators by 9 percent in the next election. Interestingly, educators are no less likely to run in these subsequent elections; those who do run are just less likely to win. The long-term causal effects of electing an additional educator would be even larger in the absence of this electoral dynamic.

The Influence of Teachers Unions

Our findings suggest that educators’ professional expertise on boards does not translate into improvements in student learning. The results are consistent with a rent-seeking framework, in which representation of union interests predicts higher teachers’ salaries and potentially negative effects on student performance. Our own data reveal that educators are 40 percent more likely than non-educators to be endorsed by a teachers union. School board member survey data also indicate a strong positive association between professional experience in education and alignment with union priorities.

We conclude that school boards may be an important causal mechanism for the influence of teachers unions on local education, which points to several avenues for future research. Our ballot-order-based strategy provides a new approach to inferring how the characteristics of candidates causally affect outcomes. A valuable next step would be to analyze candidate-level records of union endorsement. This would facilitate separating out the influence of educators on education production from their possible alignment with teachers unions. Likewise, shifting from aggregate school-level to administrative student records would enable disentangling impacts on student sorting from their effects on education quality. Future work should also focus on broader dimensions of students’ skills and behavior, such as social-emotional attributes and civic engagement.

In summary, the election of an educator to a local school board shifts spending priorities on K–12 public schools, which collectively cost about $800 billion in federal, state, and local tax dollars a year. Yet voter turnout in school-board elections is typically between 5 and 10 percent. While more research is needed, voters don’t need to wait. Our results show just how much these races matter.

Ying Shi is assistant professor at Syracuse University and John G. Singleton is assistant professor at the University of Rochester.

Assessing Integration in Wake County
https://www.educationnext.org/assessing-integration-wake-county-north-carolina-loud-debate-muted-effect-students-schools/
October 18, 2022

Loud debate, but muted effects for students and schools

The post Assessing Integration in Wake County appeared first on Education Next.


For decades, large public-school systems in the United States have used student assignment policies to foster more diverse school enrollments. Such efforts, sometimes pursued under court order, seek to expand educational opportunity and counterbalance the patterns of residential segregation that contribute to racial and economic isolation, especially in urban centers. They may receive a new source of federal support: the Biden Administration has proposed a $100 million Fostering Diverse Schools grant program to help communities “voluntarily develop and implement strategies that will build more racially and socioeconomically diverse schools and classrooms.”

“Research suggests that diverse learning environments benefit all students and can improve student achievement, serve as engines of social and economic mobility, and promote school improvement,” Education Secretary Miguel Cardona said in his budget testimony.

One common strategy to create more diverse learning environments is to intentionally balance school enrollments according to students’ socioeconomic and demographic characteristics. Districts can achieve this by physically transporting students to schools outside their immediate neighborhoods or by prioritizing the enrollment of students in schools where they would diversify the student body. The intent is to expand opportunities for students from diverse backgrounds to learn alongside one another, in order to reduce the prevalence of segregated schools, allocate resources more equitably, and improve student outcomes. Indeed, research has found benefits for educational achievement, attainment, and other measures of well-being for Black students where schools were desegregated and resources more equitably distributed.

But school integration initiatives (sometimes referred to as “mandatory busing” or just “busing”) in New York City, San Francisco, Charlotte-Mecklenburg in North Carolina, and elsewhere have sparked substantial backlash, particularly among more affluent families. Their concerns often echo a longstanding claim that school reassignments destabilize communities and exact a social or educational toll on reassigned students and their peers. For example, in 2019, community members submitted hundreds of overwhelmingly negative comments to Maryland’s Howard County Public School System in response to a proposed plan to redraw attendance boundaries to integrate its schools. Among the comments, a prediction: “The only result you will find is more time commuting to school, humiliation, intimidation. Busing children WILL NOT increase individual grade-point averages. In fact, it may decrease all those objectives.”

Critical to informing this debate is a comprehensive answer to the question: How does reassigning students to create schools that are more socioeconomically and academically diverse affect the distribution of educational opportunity? What are the impacts on students who switch schools as a result of these policies? And how do changes in school assignments affect the students who don’t switch schools, but who experience changes in their classmates’ characteristics?

We report the results from two distinct studies of North Carolina’s Wake County Public School System, which has a long history of using student assignment policies to weaken the school-neighborhood links that exacerbate school segregation. Our research teams, one based at the University of North Carolina at Chapel Hill (UNC) and the other originating at the Center for Education Policy Research at Harvard University, have worked closely with the district to better understand how student assignment policies affect academic and behavioral outcomes and how changing the demographic characteristics of a student’s peers affects learning.

We find that, on the whole, school reassignment has somewhat muted effects. In contrast to the sharp criticism and heated controversy that integration programs often inspire, switching schools does not harm students who are reassigned. In fact, reassigned students perform modestly better on statewide tests and are less likely to be suspended. We do find some negative effects for students who switch to schools where achievement and income levels are lower, but these effects are offset by positive impacts for students when school reassignments mean they learn alongside higher-performing and wealthier peers. However, these impacts are small, because, in most cases, students’ new schools are largely similar to the schools they left behind. Put another way, the impacts of school integration depend more on the destination school than on the one left behind.

A Decade of Dizzying Growth

Our research emerges from the deep and longstanding commitment to evidence-informed policymaking by district leaders in Wake County. Multiple teams of university-based researchers have cooperated with district staff over the past two decades. Our studies are unique, in that two teams pursued partially overlapping research topics. Taken together, these studies complement one another and provide a more comprehensive assessment of a longstanding student reassignment policy in one of the largest districts to implement such a plan. Satisfyingly, where our questions align, so, too, do our findings, which we believe can serve as a model for how future evaluations of high-priority policy topics might proceed.

Both studies examine school reassignment policies and effects during an era of rapid growth and demographic change in Wake County. Between 2000 and 2010, the number of students jumped by nearly 50 percent, to 143,289 from 98,741. The share of Hispanic students more than tripled, to 13 percent from 4 percent, while the percentage of Asian students almost doubled to 6 percent from 3 percent. During that time, the percentage of white students fell to 51 percent from 64 percent, while the share of Black students shrank slightly to 25 percent from 26 percent. As a result of the population growth, the district opened 40 new school buildings during the decade, most of which were located in relatively more affluent neighborhoods in the county’s suburban fringe. Nearly one quarter of all students were reassigned during the study period.

The UNC study looks at how reassignments affect the 23.9 percent of students who were asked to change schools, including how reassignment changed the characteristics of the schools they attend. It does not find that reassigned students who change schools are adversely affected. On average, after changing schools, reassigned students travel shorter distances and attend higher-performing schools. Their academic performance does not suffer after changing schools, and, in some cases, it actually improves by a small but significant degree. Critically, these results suggest that concerns about the negative consequences of school reassignment for those who were reassigned may be overblown.

The Harvard study asks a different set of questions. What about the larger group of students who don’t switch schools, but whose peer groups are changed due to wide-scale reassignments? Their academic performance improves too—but only if their peer group changes to include more high-achieving classmates. Students from more affluent families and already high-achieving students benefit the most from peer groups that are also academically high-achieving. But students with lower family incomes and lower baseline academic achievement also benefit from being in class with academically stronger peers. The team does find that students earn lower grades in reading when they experience an influx of higher-performing peers, possibly due to teachers’ practices of relative-rank grading. But overall, the picture is positive.

In particular, our results indicate that students who learn alongside more high-achieving students as a consequence of school reassignment policies have better academic achievement. These results suggest that a policy that reassigns students to optimize the average peer achievement level of less-advantaged students can help accomplish equity goals but, under certain conditions, may also have unintended consequences, particularly for their more-advantaged peers.

A Voluntary Desegregation Effort

Unlike many other large school districts in the South, Wake County was never the subject of court-ordered desegregation. But in the 1970s, under federal pressure to integrate, the majority-Black Raleigh City Schools and majority-white surrounding county district merged to form the Wake County Public School System. To balance the enrollments of the district’s schools, leaders used students’ race as a primary factor in school assignments until a federal court decision ended mandatory desegregation efforts in the nearby Charlotte-Mecklenburg Schools.

In 2000, Wake County shifted to using students’ socioeconomic status and levels of academic achievement in school assignment decisions instead of race. As part of its strategy for meeting these targets, the district was divided into geographic nodes of about 150 students. The district assigned each node to a base elementary, middle, and high school, which served as the default school of attendance for students in the node.

To maintain socioeconomic and achievement balance, Wake County reassigned a small share of students to different base schools each year based on their grade level and node. That is, in a particular year, the policy would assign all students in the same grade and node to the same school. The goal was to balance enrollments such that no school would serve a student body with more than 40 percent of students receiving free or reduced-price lunch and more than 25 percent of students reading below grade level.

Reassignments occurred throughout the district, including nodes in the district’s urban core as well as rapidly expanding suburban nodes at the district’s northern and southern peripheries. Nodes with high concentrations of Black and Hispanic students were more likely to experience reassignment than nodes that included predominately white students (see Figure 1). The district also used reassignments to populate newly constructed schools, most of which were located in relatively affluent, high-growth neighborhoods. These shifts affected between 2 percent and 8 percent of students annually.

Shifting School Assignments in Wake County, 2000-2010 (Figure 1)

District decisionmakers used a range of criteria when reassigning groups of students, including travel distances, capacity constraints, and diversity considerations. These recommendations were presented for community feedback alongside options for students to attend magnet and year-round schools. The annual reassignment process kicked off more than a year in advance, and parents had at least six months to decide whether to accept the newly assigned school or appeal the decision.

This process created several groups of students: students who were reassigned and moved; students who were reassigned and did not move; and students at sending and receiving schools who experienced different peer groups because of node reassignments. In addition, students could choose to attend year-round or magnet schools, though students were guaranteed door-to-door transportation to their base schools but not to magnet programs. About two thirds of Wake County students attended their base schools from 2000 to 2010.

The district’s approach to reassignment enables both research teams to overcome a key challenge in measuring the impacts of school switching or the influence of peers on one another’s learning. Simply examining the outcomes of any reassigned students who change schools or not-reassigned students who share classrooms with peers of different demographic backgrounds would fail to account for unobserved differences that could influence outcomes. In our studies, however, we can compare the outcomes of students in nodes selected to switch schools to those in otherwise similar nodes who were not chosen to change schools. After accounting for the observable characteristics used to inform the assignment process, groups of students from adjacent nodes were selected in an arguably random process to attend different schools. Similarly, to understand the effects of learning with peers from different backgrounds, we examine the outcomes of students who remained in the same school but experienced an as-good-as-random reshuffling of peers assigned into and out of their classes. As a result, both teams have plausible claims to a causal interpretation of their findings.

Impacts on Reassigned Students

Those of us on the UNC team examine how reassignment affected the new schools that students attended as well as its impact on reassigned students’ academic achievement, attendance, and school discipline. We compare student outcomes in the pre-reassignment period to those outcomes in post-reassignment periods and then benchmark those differences against the trend for nodes that were never reassigned. Our analysis is based on district data from 1999–2000 to 2010–11, which includes students’ basic demographic and academic characteristics, home addresses, geographic nodes, and school assignments.
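The comparison just described is a difference-in-differences design: the pre-to-post change for reassigned nodes is benchmarked against the change for never-reassigned nodes, which absorbs district-wide trends and fixed node differences. A minimal sketch on simulated node-level scores, with every number hypothetical rather than drawn from the study, shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes = 200  # hypothetical geographic nodes; not the study's data

reassigned = rng.integers(0, 2, n_nodes).astype(bool)
node_level = rng.normal(0, 0.3, n_nodes)  # persistent node differences
true_effect = 0.04                        # assumed gain from reassignment

def mean_score(post):
    """Average node test score (in SD units) for one school year."""
    trend = 0.02 if post else 0.0         # district-wide trend, shared by all nodes
    gain = true_effect if post else 0.0   # accrues only to reassigned nodes, post-period
    return node_level + trend + gain * reassigned + rng.normal(0, 0.05, n_nodes)

pre = np.mean([mean_score(False) for _ in range(2)], axis=0)  # pre-reassignment years
post = np.mean([mean_score(True) for _ in range(3)], axis=0)  # post-reassignment years

# Difference-in-differences: change in reassigned nodes, net of the change in
# never-reassigned nodes (which nets out the common trend and node levels).
did = (post[reassigned] - pre[reassigned]).mean() \
    - (post[~reassigned] - pre[~reassigned]).mean()
print(f"DiD estimate: {did:.3f}")
```

Note how neither the shared 0.02 trend nor the large persistent node differences contaminate the estimate; only the post-period gain unique to reassigned nodes survives the double differencing.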

Despite concerns about the potential harms of reassigning students to achieve diversity goals, we find no evidence that reassignment negatively affected student outcomes. In some cases, reassignment modestly boosted achievement and protected students against exclusionary discipline.

First, we examine how reassignment affects the characteristics of students’ assigned schools. Looking at the effects on distance, we find that reassignment reduces the distance between a geographic node and assigned school by one fifth to one half of a mile, on average. However, this surprising overall result masks heterogeneity across racial and ethnic groups. While distances for reassigned white students decline by roughly one mile, distances for reassigned Hispanic students increase by about one mile. There is no change in average travel distance for Black students.

Overall, reassignment results in students attending schools with somewhat higher math achievement, though we find substantial variation across racial and ethnic groups. On average, test-score performance in math at schools attended by reassigned students is 0.02 to 0.05 standard deviations higher compared to their previous schools. For white students, however, math achievement is between 0.02 and 0.07 standard deviations higher at new schools. Black students attend schools where achievement is initially lower than at their previous schools, but this effect shrinks over time. The differences range from 0.05 standard deviations lower one year after reassignment to 0.004 standard deviations lower two years later. There is no change in average school performance for reassigned Hispanic students. In addition, the proportion of students of color is lower in schools attended by reassigned Black, Hispanic, and white students, suggesting that only white students are more likely to be reassigned to schools that include more students who share their racial identity.

We then look at how reassignment affects students who change schools, and we find encouraging results. After reassignment, students’ math achievement improves by a modest amount in all three post-reassignment years, ranging from 0.02 standard deviations in year one to 0.05 standard deviations in year three (see Figure 2). Reading achievement is initially flat but improves by 0.02 standard deviations in year two. Students who are reassigned are also less likely to experience exclusionary discipline in the first post-reassignment year and are no more likely to be chronically absent than before they switched schools. The impact on suspensions is particularly encouraging in light of emerging efforts by policymakers to combat disciplinary practices that disproportionately harm students of color (see “Proving the School-to-Prison Pipeline,” research, Fall 2021).

Effects of Reassignment for Students Who Change Schools (Figure 2)

Wake County students also switch schools to attend the district’s rich set of magnet and year-round calendar schools; during the study period, about one-third of district students attend a public school of choice. We explore whether the main results differ depending on whether reassigned students attend their reassigned base school or opt to attend a magnet or year-round public school. Encouragingly, we find that the effects for achievement, absenteeism, and suspension are broadly similar whether reassigned students attend new base schools or schools of choice.

Peer Effects on Students Who Don’t Switch Schools

Those of us on the Harvard team focus here on students who do not change schools. Because students in similar geographic nodes were not uniformly reassigned, school enrollment changes serve as a series of natural experiments that allow us to compare students’ performance across years in which they experienced more- or less-affluent or higher- or lower-achieving peers.

Unlike students who switch schools, non-reassigned students experienced a change in only one aspect of their schooling: their peers. Thus, we can home in on just the changes in the composition of these students’ classrooms that resulted from the peer inflows and outflows produced by the reassignment policy. Our analysis includes district data from 2005–06 to 2011–12 (a shorter window than the UNC study), when academic standards, tests, and district-assignment policies were relatively stable. We focus on students in grades 7 and 8, who have two years of annual test-score data and typically do not change schools unless they are reassigned.

We find that middle-school students’ academic skills, as measured by standardized test scores, improve when they attend school with higher-achieving peers. Overall, when peer achievement increases by 0.10 standard deviations, students’ test scores increase by 0.04 standard deviations in math and 0.03 standard deviations in reading (see Figure 3).

Impacts of Changing Peer Groups on Students Who Are Not Reassigned (Figure 3)

We also look at non-reassigned students by family income and prior test-score performance. Peer effects are largest for wealthier students, whose test scores increase by 0.05 standard deviations in math and 0.03 standard deviations in reading when their peers’ achievement increases by 0.10 standard deviations. Students with higher test scores get the biggest gains in math from attending school with higher-achieving peers. Students with lower family incomes and lower baseline levels of achievement also benefit from academically stronger peers. When peer achievement increases by 0.10 standard deviations, lower-income students’ test scores increase by 0.02 standard deviations in math and 0.01 standard deviations in reading. Similarly, students with low prior achievement improve their math and reading scores by 0.04 and 0.03 standard deviations, respectively, when they have higher-achieving peers in their class.

In looking at students’ grades, we find differences in peer effects between math and reading. When students attend school with more higher-performing and affluent peers, their math grades go up by 0.02 and 0.20 standard deviations, respectively. We find that benefit throughout the performance distribution. But non-reassigned students’ English grades decline by 0.03 standard deviations when their peers are higher performing. This may be due to the different ways that students are graded in reading and math. In English Language Arts classes, teachers may look at how students stand in relation to other students in the class, while math grades may depend more directly on objective mastery of the material.

One important caveat to our results is that only a small number of student reassignments demonstrably changed the achievement and family income levels of students’ peers, both for students who were reassigned and those who experienced new peers but did not themselves change schools. The majority of reassignments were intended to address the rapidly expanding student population. As such, we do not present our study as an evaluation of comprehensive policies that redistribute students to schools for the purposes of socioeconomic or academic integration. Rather, it looks at the narrower topic of changing a particular child’s assigned school or peer group.

Lessons in Complexity

The Wake County Public School System’s recent history of school integration policies represents one of many such efforts occurring nationwide. The Century Foundation recently reported that 185 charter schools and school districts are actively implementing voluntary or court-ordered integration policies based on select demographic or socioeconomic criteria. Our joint research projects represent a deep dive into one large district’s policy, with implications for stakeholders and policymakers pursuing equity through integration efforts in schools.

Of course, each district’s experience is unique. For more than three decades, Wake County implemented a student assignment policy that aimed to prevent school segregation and enjoyed widespread popularity. The recent iteration of the policy resulted in roughly 2 percent to 8 percent of students being reassigned in any given year—many for purposes other than integration. This represents an incremental approach that, at times, led to loud, negative headlines but relatively muted impacts. Still, the policy ran into political headwinds and was phased out following the 2009 board election, which featured a large influx of national attention, organizing, and funding not typically seen in local education races.

Districts implementing their own policies or considering new ones should set expectations for equity and achievement that are commensurate with the scope of any particular policy levers. As decisionmakers consider various factors in the policymaking process (including parental preferences and political feasibility), they should be aware of the implications of our research. Modest reassignment policies lead to modest changes in students’ peer groups, which together produce modest—although mostly positive—results. Bolder interventions may produce more meaningful effects. But they will require a broad spectrum of stakeholder support.

Given the targeted reach of Wake County’s integration policy, we are not surprised to see empirically small achievement impacts on students who were reassigned. The results from both studies suggest that shifting small numbers of students might marginally improve achievement for reassigned students in the aggregate, but also lead to unintended impacts for some groups of marginalized students or the widening of opportunity gaps. Examples of such unintended consequences include longer travel times for Hispanic students, declines in performance for low-performing students, and some peer-learning benefits that accrue disproportionately to students from more affluent families. The relatively small impacts we detail stand in contrast to the often hyperbolic discourse that accompanies school-integration debates, with critics arguing that reassignment has large and persistent deleterious effects for students who are asked to change schools.

Our work also highlights how peers influence their classmates’ learning. Increasing the overall proportion of high-achieving and wealthy students is likely to increase student achievement. However, these benefits tend to be concentrated among students who are already high-achieving and from higher-income families. And relying on changes in classroom composition alone as a mechanism to improve student outcomes will pose challenges.

First, consider Matthew Kraft’s estimate of the median effect of educational interventions on student test scores: 0.10 standard deviations. By our calculations, accomplishing that median impact would require vast changes in school composition. Students would have to switch to schools where their peers performed 0.20–0.25 standard deviations higher and where the share of more-affluent families was 10 percentage points greater compared to their previous schools. This is no small feat.
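The arithmetic behind that claim is easy to verify. Below is a minimal sketch (illustrative only, and assuming the estimated peer effect scales linearly) using the math coefficient reported above:

```python
# Back-of-the-envelope check of the peer-effect scaling described above.
# Assumes the 0.04-per-0.10-SD math effect is linear in peer achievement.
math_gain_per_tenth_sd = 0.04   # SD gain per 0.10 SD rise in peer achievement
target_gain = 0.10              # Kraft's median intervention effect, in SD

required_peer_shift = target_gain / math_gain_per_tenth_sd * 0.10
print(f"Peers would need to score about {required_peer_shift:.2f} SD higher in math")
```

At 0.25 standard deviations, the required shift sits at the top of the 0.20–0.25 range cited above; the smaller reading coefficient would imply an even larger shift.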

In addition, our findings imply that reassignment policies such as the one we study may have some unintended consequences. Specifically, our findings imply that—absent mitigation efforts—some students may learn less when they study in classrooms with more lower-achieving peers as a result of reassignment policies. To limit this potential outcome, teachers and school leaders in locales seeking to use reassignment for equity purposes will need to attend to the needs of higher- and lower-achieving students alike. For example, higher-achieving and higher-family-income students may benefit from community-service extension activities and differentiated instruction. At the same time, lower-achieving and lower-family-income students may benefit from school leaders scrutinizing grading practices and ensuring that their schools are designed to support traditionally underserved learners.

Historically, children from different racial, socioeconomic, and achievement backgrounds have left school with different life opportunities available to them, and resource inequalities in the country’s K–12 educational system contribute to these inequities. The results of our respective studies suggest that student assignment policies that relocate students to optimize the average peer achievement level of lower-achieving or less-affluent students can accomplish equity goals. But school systems will need to employ strategies beyond reassignment to accomplish such goals, not least of which will be to build the political will to implement and sustain such integration policies in their communities.

James S. Carter III is Senior Education Data and Research Associate at the Urban Institute. Rodney P. Hughes is assistant professor at West Virginia University. Matthew A. Lenard is a doctoral candidate at the Harvard Graduate School of Education. David D. Liebowitz is assistant professor at the University of Oregon. Rachel M. Perera is a fellow in the Brown Center on Education Policy at the Brookings Institution. This article is based on the study “The Kids on the Bus,” published in the August 2021 issue of the Journal of Policy Analysis and Management, and the National Bureau of Economic Research working paper “New Schools and New Classmates,” issued in May 2022 and forthcoming in the Economics of Education Review.

This article appeared in the Winter 2023 issue of Education Next. Suggested citation format:

Carter III, J.S., Hughes, R.P., Lenard, M.A., Liebowitz, D.D., and Perera, R.M. (2023). Assessing Integration in Wake County: Loud debate, but muted effects for students and schools. Education Next, 23(1), 60-67.

A Bad Bargain

How teacher collective bargaining affects students’ employment and earnings later in life


On the eve of the Seattle teachers strike in September 2015, the Seattle Times condemned the impending walkout, accusing the union of “stiff-arming more than 50,000 kids and their families.” Yet the teachers insisted that their strike was about children’s education, not just teacher pay, and commanded widespread support from parents and the community at large.

Seattle teachers and administrators reached an agreement in one week, but the question of how unions affect public education is far from settled. According to the recent Education Next poll (see “The 2015 EdNext Poll on School Reform,” features, Winter 2016), the public is divided as to whether teachers unions have a positive or negative impact on schools, and, until now, researchers have been unable to document the effects of collective bargaining on students’ long-term outcomes.

Today, more than 60 percent of teachers in the United States work under a union contract. The rights of teachers to unionize and bargain together have expanded dramatically since the late 1950s, when states began passing “duty-to-bargain” (DTB) laws that required school districts to negotiate with teachers unions in good faith. Recently, though, states such as Wisconsin, Indiana, Michigan, and Tennessee have sought to weaken the ability of teachers unions to negotiate contracts in K–12 education.

Advocates for these restrictions claim that unions have a negative effect on the quality of public education and, therefore, students’ life chances. Those in favor of teacher collective bargaining, on the other hand, argue that unions make the education system more effective by empowering teachers who are in the classroom and by giving them a role in shaping their working conditions. Due to data limitations, however, empirical research has not credibly addressed the critical question of how teacher collective bargaining influences student outcomes.

In this study, we present the first evidence on how laws that support teacher collective bargaining affect students’ employment and earnings in adulthood. We do so by first examining how the outcomes of students educated in a given state changed after the state enacted a duty-to-bargain law, and then comparing those changes to what happened over the same time period in states that did not change their collective-bargaining policies.

We find no clear effects of collective-bargaining laws on how much schooling students ultimately complete. But our results show that laws requiring school districts to engage in collective bargaining with teachers unions lead students to be less successful in the labor market in adulthood. Students who spent all 12 years of grade school in a state with a duty-to-bargain law earned an average of $795 less per year and worked half an hour less per week as adults than students who were not exposed to collective-bargaining laws. They are 0.9 percentage points less likely to be employed and 0.8 percentage points less likely to be in the labor force. And those with jobs tend to work in lower-skilled occupations.

Striking teachers from the Seattle School District walk a picket line on September 10, 2015

Teacher Collective Bargaining in the United States

In the first half of the 20th century, teachers unions in the United States were predominantly professional organizations that had little say in contract negotiations between teachers and school districts. Starting with Wisconsin in 1959, however, states began passing union-friendly legislation that either gave teachers the right to collectively bargain or explicitly mandated that districts negotiate with unions in good faith. Duty-to-bargain laws in particular give unions considerable power in the collective-bargaining process, because they make it illegal for a district to refuse to bargain with a union, and because most of them require state arbitration if the two sides reach an impasse. The enactment of such laws led to a sharp rise in the number of teachers who joined unions and in the prevalence of collectively bargained contracts.

Between 1959 and 1987, 33 states passed duty-to-bargain laws (see Figure 1); just 1 (New Mexico) has done so since. Of the 16 states without such a law, 9 have legislation that permits teachers unions and districts to bargain if both sides agree to do so. In the remaining 7 states (Arizona, Georgia, Mississippi, North Carolina, South Carolina, Texas, and Virginia), collective bargaining is prohibited either by statute or by court ruling (see Figure 2).

How Might Collective Bargaining Affect Schools and Students?

Collective-bargaining laws strengthen teachers unions and give them greater influence over how school districts allocate their resources. A typical collective-bargaining agreement addresses a remarkably broad range of items: unions negotiate over salary schedules and benefits; hiring, evaluation, and firing policies; and rules detailing work and teaching hours, class assignments, class sizes, and nonteaching duties. By increasing union membership, collective-bargaining laws also heighten the influence of teachers unions in education politics at the state level.

Starting in 1959, states began passing union-friendly legislation that led to a sharp rise in the prevalence of collectively bargained contracts.

Critics of teacher unionization argue that collective bargaining in public education has reduced school quality by shifting resources toward teachers and away from other educational inputs and by making it more difficult to fire low-performing teachers. Stronger unions may also have made it harder for states to adopt policies aimed at improving school quality through enhanced accountability or expanded school choice.

Other arguments, however, suggest that stronger unions may benefit students. First, to the extent that teachers have expertise in creating effective learning environments, giving them more say over how resources are allocated could lead to better educational outcomes. Second, giving teachers a greater voice in the structure of their working environments could lead them to become more productive and could attract more effective teachers into the profession. Finally, teachers unions could use their political muscle to support additional investment in public education and other policies that might enhance school quality.

In sum, there’s little dispute that collective bargaining alters how school districts operate and shifts the balance of power in state education politics, but there is wide disagreement over whether these changes affect student outcomes negatively or positively. This disparity of opinion highlights the importance of turning to empirical evidence.

Our Study

The central challenge in studying the effects of collective-bargaining policies is that states with strong protections for collective bargaining tend to be very different from states with weaker protections. For example, the states without duty-to-bargain laws are located mainly in the South, where student achievement has historically been low for reasons unrelated to collective bargaining. States such as Massachusetts and Minnesota demonstrate that it is possible to have a relatively high-performing school system in the presence of strong unions, but they tell us very little about the effects of collective bargaining itself.

Our study overcomes this hurdle by examining how the outcomes of students educated in specific states changed over the years when most states enacted collective-bargaining laws. We focus on duty-to-bargain (DTB) laws specifically because these laws led to greater growth in unionization and collective bargaining than did other forms of state union laws. And we focus on entire states rather than on specific school districts, because the passage of a duty-to-bargain law might have consequences even for students in districts that did not unionize. Unions’ political activities influence education policies statewide, and nonunionized districts operating in a DTB state may tend to adopt policies supported by teachers in order to avoid unionization efforts.

We do not directly compare students educated in duty-to-bargain states with students in non-DTB states, because such comparisons would clearly yield outcome differences unrelated to collective bargaining (for instance, differences caused by higher or lower poverty rates). Also, we eschew simple “before and after” comparisons within a state, because again, any observed outcome differences could be the result of factors other than collective bargaining (for instance, social and political changes since the 1960s that affected K–12 education). Our strategy, therefore, is to compare the differences in outcomes for students educated in the same state (before and after the DTB law passed) to the differences in outcomes for students in non-DTB states over the same time period.
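The comparison strategy just described is a standard difference-in-differences design. The sketch below illustrates the logic with invented numbers (none are figures from the study):

```python
# Hypothetical difference-in-differences illustration of the strategy above.
# All numbers are invented for exposition; none come from the study.
def diff_in_diff(dtb_after, dtb_before, control_after, control_before):
    """Change in DTB states minus the contemporaneous change in non-DTB states."""
    return (dtb_after - dtb_before) - (control_after - control_before)

# Mean adult earnings (in thousands of dollars) by group and period:
effect = diff_in_diff(dtb_after=41.0, dtb_before=40.0,
                      control_after=42.5, control_before=40.0)
print(effect)  # -1.5: the change attributable to the law, net of common trends
```

Subtracting the control-state change strips out nationwide trends, such as the broad social and political shifts mentioned above, that would contaminate a simple before-and-after comparison.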

When making these comparisons, we adjust for the share of the student’s state birth cohort that is black, Hispanic, and white, and the share that is male. We also take into account two policy changes enacted by many states during this same time period that may have affected student outcomes: school finance reforms and changes in the generosity of state earned-income tax credits. If the rollout of those policies coincided with the passage of duty-to-bargain laws, unadjusted before-and-after comparisons could yield misleading results. Adjusting for these two variables turns out to make little difference in our results but strengthens our confidence that collective bargaining is responsible for the effects we document.

Our measure of the extent to which each student is exposed to collective bargaining varies from 0 to 1 and is defined as the proportion of the student’s school years in which a duty-to-bargain law was in effect in his or her state. A value of 1 means that a DTB law had been enacted by the time students in the birth cohort were six years old (in time for first grade); thus, they were exposed to the law throughout their entire K–12 education. The variable is 0 for students whose birth cohorts had no exposure, either because they were over 18 when a DTB law was passed or because they were born in a state that did not impose a duty to bargain.
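That exposure measure can be expressed compactly. The function below is an illustrative reconstruction of the definition just described, not the authors’ actual code:

```python
# Fraction of a student's 12 school years (ages 6 through 18) spent under
# a duty-to-bargain law; an illustrative sketch, not the authors' code.
def dtb_exposure(birth_year, law_year=None):
    if law_year is None:             # state never passed a DTB law
        return 0.0
    school_start = birth_year + 6    # first grade at age six
    school_end = birth_year + 18     # end of grade 12
    covered_years = school_end - max(law_year, school_start)
    return min(max(covered_years / 12.0, 0.0), 1.0)

print(dtb_exposure(1960, law_year=1955))  # 1.0: law predates first grade
print(dtb_exposure(1950, law_year=1962))  # 0.5: law arrives midway through school
print(dtb_exposure(1940, law_year=1970))  # 0.0: cohort already past age 18
```

The clamping to the 0–1 range mirrors the definition above: full exposure when the law predates first grade, zero when the cohort finished school before enactment or lived in a state with no DTB law.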

Data

The data for our analysis come from two main sources. The first is the National Bureau of Economic Research collective-bargaining law dataset that contains, for each state and year since 1955, collective-bargaining laws for each type of public-sector worker. We combine the collective-bargaining information for teachers with 2005–2012 American Community Survey (ACS) data containing detailed information on the educational attainment and labor market success of representative samples of adults in each state.

We look specifically at ACS data for individuals between the ages of 35 and 49, because people in this age group typically have completed their education and are at a juncture when yearly earnings are indicative of lifetime earnings. We examine birth cohorts ranging from 1956 to 1977, which correspond to students who attended school from 1962 to 1995. As shown in Figure 1, these schooling years correspond with the dramatic rise in duty-to-bargain laws in the United States.

Results

These data enable us to examine the effects of teacher collective-bargaining policies on multiple indicators of students’ labor-market success. Taken as a whole, our results clearly indicate that laws supporting collective bargaining for teachers have adverse long-term consequences for students.

Earnings. We find strong evidence that teacher collective bargaining has a negative effect on students’ earnings as adults. Attending school in a state with a duty-to-bargain law for all 12 years of schooling reduces later earnings by $795 per year (see Figure 3). This represents a decline in earnings of 1.9 percent relative to the average. Although the individual effect is modest, it translates into a large overall loss of earnings for the nation as a whole. In particular, our results suggest a total loss of $196 billion per year accruing to those who were educated in the 34 states with duty-to-bargain policies on the books.

Hours worked. Consistent with this reduction in earnings, we also find that exposure to a duty-to-bargain law throughout one’s school years is associated with a decline of 0.49 hours worked per week. This is a 1.4 percent decline relative to the average, and it suggests that a reduction in hours worked is a main driver of the lower earnings.

Wages. The reduced earnings caused by unionization could also reflect lower wages, and the evidence suggests a negative relationship between collective-bargaining exposure and wages. While this relationship is not statistically significant, it is consistent with our other results and suggests that teacher collective bargaining may also have a modest adverse effect on average wages.

Employment. The fact that teacher collective bargaining reduces working hours suggests that duty-to-bargain laws may also affect employment levels. In fact, when we use the share of individuals who are employed as the outcome variable, we find that duty-to-bargain laws reduce employment. Specifically, exposure to a duty-to-bargain law for all 12 years of schooling lowers the likelihood that a worker is employed by 0.9 percentage points. Duty-to-bargain laws have no impact on unemployment rates, however, suggesting that they reduce employment by leading some individuals to drop out of the labor force altogether.

Occupational skill level. Finally, we analyze the effects of collective bargaining on the skill level of a student’s selected occupation, as measured by the share of workers in that occupation who have any education beyond a high school diploma. The results suggest yet another negative effect: being exposed to a duty-to-bargain law for all 12 years of schooling decreases the proportion of such workers in an occupation by almost half of a percentage point (or 0.6 percent relative to the average). This effect is modest in size, but it implies that teacher collective bargaining leads students to work in occupations requiring lower levels of skill.

Educational attainment. The reduced earnings and labor force participation associated with teacher collective bargaining raise the possibility that affected students may have completed less education. Our analysis, however, finds little evidence of bargaining power having a significant effect on how much schooling students completed. This finding is surprising in light of the substantial labor-market effects we document, but it comports with prior research that has found no effect of duty-to-bargain law passage on high-school dropout rates.

Additionally, educational attainment is but one measure of the amount of human capital students accumulate. Even if students do not complete fewer years of education, they may be acquiring fewer skills while they are in school. We believe that our results concerning earnings and employment are driven by other aspects of school quality that are not reflected in educational attainment, and they reinforce the importance of studying labor-market outcomes directly in order to understand how major reforms such as the enactment of teacher collective-bargaining laws affect students’ life outcomes.

Policy Implications

In 2012, under Governor Rick Snyder, Michigan passed a law that sought to limit union negotiating power.

This study provides the first comprehensive analysis of the effect of teacher collective bargaining on the long-term educational and labor market outcomes of affected children. We find that exposure to a duty-to-bargain law while in grade school lowers earnings and leads to fewer hours worked, reductions in employment, and decreases in labor force participation. Occupational skill level also declines. However, educational attainment is unchanged by exposure to these laws.

These results contribute new information to the contentious debate occurring in many states over limiting the collective-bargaining rights of teachers. For example, in 2011 Wisconsin passed legislation that greatly reduced the ability of teachers to bargain with school districts (see “Limits on Collective Bargaining,” features, Fall 2013), and in 2012 Michigan passed a public employee right-to-work law that sought to limit union negotiating power. Not surprisingly, teachers unions and their allies responded to these laws with fierce opposition.

At the core of this debate lies the question of how teacher collective bargaining affects student outcomes. Our results suggest that lawmakers in Wisconsin and Michigan have evidence on their side. However, we urge caution when generalizing these findings to current students, because the cohorts we analyze in this study, most of whom attended school in the 1970s and 1980s, were educated in an environment very different from today’s. Some of the adverse effects of teacher collective bargaining we document could be driven by how teachers unions interacted with aspects of the school system that are no longer relevant. On the other hand, the economy’s growing demand for skilled workers may mean that policies affecting human capital accumulation matter more now than ever. Future research should investigate whether and how the effects of teacher collective bargaining have changed over time.

Moreover, our results say little about why the enactment of collective-bargaining laws has harmed student outcomes. Perhaps collective bargaining has made it more difficult for school districts to dismiss ineffective teachers or to allocate teachers among schools. Or perhaps the political influence of teachers unions at the state level has interfered with efforts to improve school quality. Identifying the factors at play could shed light on the most promising strategies for reform. In the meantime, however, our evidence points to the conclusion that collective bargaining in public education has been a bad deal for American students.

Michael F. Lovenheim is associate professor of policy analysis and management at Cornell University and a faculty research fellow at the National Bureau of Economic Research. Alexander Willén is a doctoral student in policy analysis and management at Cornell University.      

This article appeared in the Winter 2016 issue of Education Next. Suggested citation format:

Lovenheim, M.F., and Willén, A. (2016). A Bad Bargain: How teacher collective bargaining affects students’ employment and earnings later in life. Education Next, 16(1), 62-68.

Results of President Obama’s Race to the Top

Win or lose, states enacted education reforms


Caught between extraordinary public expectations and relatively modest constitutional authority, U.S. presidents historically have fashioned all sorts of mechanisms—executive orders, proclamations, memoranda—by which to move their objectives forward. Under President Barack Obama’s administration, presidential entrepreneurialism has continued unabated. Like his predecessors, Obama has sought to harness and consolidate his influence outside of Congress. He also has made contributions of his own to the arsenal of administrative policy devices. The most creative, perhaps, is his Race to the Top initiative, which attempted to spur wide-ranging reforms in education, a policy domain in which past presidents exercised very little independent authority.

This study examines the effects of Obama’s Race to the Top on education policymaking around the country. In doing so, it does not assess the efficacy of the particular policies promoted by the initiative, nor does it investigate how Race to the Top altered practices within schools or districts. Rather, the focus is the education policymaking process itself; the adoption of education policies is the outcome of interest.

No single test provides incontrovertible evidence about its causal effects. The overall findings, however, indicate that Race to the Top had a meaningful impact on the production of education policy across the United States. In its aftermath, all states experienced a marked surge in the adoption of education policies. This surge does not appear to be a statistical aberration or an extension of past policy trends. Legislators from all states reported that Race to the Top affected policy deliberations within their states. The patterns of policy adoptions and legislator responses, moreover, correspond with states’ experiences in the Race to the Top competitions.

In the main, the evidence suggests that by strategically deploying funds to cash-strapped states and massively increasing the public profile of a controversial set of education policies, the president managed to stimulate reforms that had stalled in state legislatures, stood no chance of enactment in Congress, and could not be accomplished via unilateral action.

Asking States to Compete

On February 17, 2009, President Obama signed into law the American Recovery and Reinvestment Act of 2009 (ARRA), legislation that was designed to stimulate the economy; support job creation; and invest in critical sectors, including education, in the aftermath of the Great Recession. Roughly $100 billion of the ARRA was allocated for education, with $4.35 billion set aside for the establishment of Race to the Top, a competitive grant program designed to encourage states to support education innovation.

From the outset, the president saw Race to the Top as a way to induce state-level policymaking that aligned with his education objectives on college readiness, the creation of new data systems, teacher effectiveness, and persistently low-performing schools. As he noted in his July 2009 speech announcing the initiative, Obama intended to “incentivize excellence and spur reform and launch a race to the top in America’s public schools.”

The U.S. Department of Education (ED) exercised considerable discretion over the design and operation of the Race to the Top competition. Within a handful of broad priorities identified by Congress in ARRA, the Obama administration chose which specific policies would be rewarded, and by how much; how many states would receive financial rewards, and in what amount; and what kinds of oversight mechanisms would be used to ensure compliance. Subsequent to the ARRA’s enactment, Congress did not issue any binding requirements for the design or administration of the program. From an operational standpoint, Race to the Top was nearly entirely the handiwork of ED.

Race to the Top comprised three distinct phases of competition. Both Phase 1 and Phase 2 included specific education-policy priorities on which each applicant would be evaluated. States were asked to describe their current status and outline their future goals in meeting the criteria in each of these categories. The education policy priorities spanned six major scoring categories and one competitive preference category (see Table 1).

[Table 1]

To assist states in writing their applications, ED offered technical assistance workshops, webinars, and training materials. Additionally, nonprofit organizations such as the National Council on Teacher Quality published reports intended to help states maximize their likelihood of winning an award. Nonetheless, substantial uncertainty shrouded some components of the competition, including the exact grading procedures, number of possible winners, total allocated prize amount per winning state, and prize allocation mechanism and timeline.

[Figure 1]

When all was said and done, 40 states and the District of Columbia submitted applications to Phase 1 of the competition. Finalists and winners were announced in March 2010. Phase 1 winners Tennessee and Delaware were awarded roughly $500 million and $120 million, respectively, which amounted to 10 percent and 5.7 percent of the two respective states’ budgets for K‒12 education for a single year. Figure 1 identifies all winners and award amounts.

Thirty-five states and the District of Columbia submitted applications to Phase 2 of the competition in June 2010. Ten winners were each awarded prizes between $75 million and $700 million in Phase 2.

Having exhausted the ARRA funds, the president in 2011 sought additional support for the competition. That spring, Congress allotted funds to support a third phase, in which only losing finalists from Phase 2 could participate. A significantly higher percentage of participating states won in Phase 3, although the amounts of these grants were considerably smaller than those from Phases 1 and 2. On December 23, 2011, ED announced Phase 3 winners, which received prizes ranging from $17 million to $43 million.

States that won Race to the Top grants were subject to a nontrivial monitoring process, complete with annual performance reports, accountability protocols, and site visits. After receiving an award letter, a state could immediately withdraw up to 12.5 percent of its overall award. The remaining balance of funds, however, was available to winning states only after ED received and approved a final scope of work from the state’s participating local education agencies. Each winning state’s drawdown of funds, then, depended upon its ability to meet the specific goals and timelines outlined in its scope of work.

Impact on State Policy

In its public rhetoric, the Obama administration emphasized its intention to use Race to the Top to stimulate new education-policy activity. How would we know if it succeeded? To identify the effects of Race to the Top on state-level policymaking, ideally one would take advantage of plausibly random variation in either eligibility or participation. Unfortunately, neither of these strategies is possible, as all states were allowed to enter the competition and participation was entirely voluntary. To discern Race to the Top’s policy consequences, therefore, I exploit other kinds of comparisons: between policy changes in the 19 winners (18 states plus the District of Columbia), the 28 losers, and the 4 states that did not participate; between the commitments that different states made in their applications and their subsequent policymaking activities; and between changes in policymaking at different intervals of the competitions.

Policy Adoptions. Perhaps the most telling piece of evidence related to the effect of Race to the Top is the number of relevant education reforms adopted as state policy in the aftermath of the competition’s announcement. To determine that number, my research team and I documented trends in actual policy enactments across the 50 states and the District of Columbia. We tracked numerous policies that clearly fit the various criteria laid out under Race to the Top, covering such topics as charter schools, data management, intervention into low-performing schools, and the use of test scores for school personnel policy. We also tracked three control policies—increased high-school graduation requirements, the establishment of 3rd-grade test-based promotion policies, and tax credits to support private-school scholarships—that were similar to Race to the Top policies but were neither mentioned nor rewarded under the program (see sidebar, opposite page, for the specific policies tracked for Race to the Top applications and state adoptions).

Across all 50 states and the District of Columbia, we examined whether a state legislature, governor, school board, professional standards board, or any other governing body with statewide authority had enacted a qualifying policy each year between 2001 and 2014. Policies that were merely proposed or out for comment did not qualify. We also examined whether each state in its written application claimed to have already enacted each policy or expressed its clear intention to do so, as well as the number of points the application received in the scoring process.
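The state-by-year coding described above amounts to a panel of policy enactments. A minimal sketch of how such a panel supports cumulative adoption rates is below; the states, policy names, and enactment years are invented for illustration, not the study's actual coded data:

```python
# Hypothetical panel: for each state, the year (if any) each tracked
# policy was enacted by a body with statewide authority.
# None means the policy was never enacted during 2001-2014.
enactments = {
    "IL": {"teacher_eval": 2010, "charter_expansion": 2009, "data_system": None},
    "CA": {"teacher_eval": 2010, "charter_expansion": 2010, "data_system": 2009},
}

def adoption_rate(enactments, year):
    """Share of tracked state-policy pairs enacted on or before `year`."""
    years = [y for state in enactments.values() for y in state.values()]
    return sum(1 for y in years if y is not None and y <= year) / len(years)

print(adoption_rate(enactments, 2008))  # 0.0 -- nothing enacted yet
print(adoption_rate(enactments, 2010))  # 5 of 6 pairs enacted by 2010
```

The same computation, run over the real 50-state panel year by year, produces the adoption-rate trends discussed in the next paragraphs.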

Illinois state senator Kimberly Lightford noted, “I think Race to the Top was our driving force to get us all honest and fair, and willing to negotiate.”

These data reveal that the Race to the Top competitions did not reward states exclusively on the basis of what they had already done. Race to the Top, in this sense, did not function as an award ceremony for states’ past accomplishments. Rather, both states’ past accomplishments and their stated commitments to adopt new policies informed the scores they received—and hence their chances of winning federal funding.

We also found that states around the country enacted a subset of these reform policies at a much higher rate in the aftermath of Race to the Top than previously. Between 2001 and 2008, states on average had enacted about 10 percent of these reform policies. By 2014, however, the average had reached 68 percent, and adoption rates increased every single year of this later period. At the rate established by preexisting trends, it would have taken states multiple decades to accomplish what, in the aftermath of the competitions, was accomplished in less than five years.

Policy Adoptions in Winning, Losing, and Nonapplying States. The surge of legislative activity was not limited to states that were awarded Race to the Top funding. Figure 2 illustrates the policy adoption activity of three groups of states: those that won in one of the three phases of competition; those that applied in at least one phase but never won; and those that never applied. In nearly every year between 2001 and 2008, policy adoption rates in these groups were both low and essentially indistinguishable from one another. In the aftermath of Race to the Top’s announcement, however, adoption rates for all three groups increased dramatically. By 2014, winning states had adopted, on average, 88 percent of the policies, compared to 68 percent among losing states, and 56 percent among states that never applied.

[Figure 2]

Regression analyses that account for previous policy adoptions and other state characteristics show that winning states were 37 percentage points more likely to have enacted a Race to the Top policy after the competitions than nonapplicant states. While losing states were also more likely than nonapplicants to have adopted such policies, the estimated effects for winning states are roughly twice as large. Anecdotal media reports, as well as interviews conducted by my research team, suggest that the process of applying to the competitions by itself generated some momentum behind policy reform. Such momentum, along with the increased attention given to Race to the Top policies, may explain why those states that did not even apply to the competition nonetheless began to enact these policies at higher rates.

Winning states were also more likely to have adopted one of the control policies, which is not altogether surprising, given the complementarities between Race to the Top policies and the chosen control policies. Still, the estimated relationship between winning and the adoption of Race to the Top policies is more than twice as large as that between winning and the adoption of control policies.

My results also suggest that both winning and losing states were especially likely to adopt policies about which they made clear commitments in their Race to the Top applications. Though the effects are not always statistically significant, winning states appear 21 percentage points more likely to adopt a policy about which they made a promise than one about which they did not; put differently, they were 36 percentage points more likely to adopt a policy about which they made an explicit commitment than were nonapplying states, which, for obvious reasons, made no promises at all. Losing states, meanwhile, were 31 percentage points more likely to adopt a policy on which they had made a promise than on a policy on which they had not.

Closer examination of winning, losing, and nonapplying states illuminates how Race to the Top influenced policymaking in all states, regardless of their status. One winning state, Illinois, submitted applications in all three phases before finally winning. Its biggest policy accomplishments, however, happened well before it received any funds from ED. The rapid enactment of Race to the Top policies in Illinois reflected a concerted effort by the state government to strengthen its application in each competition. Before the state even submitted its Phase 1 application, Illinois enacted the Performance Evaluation Reform Act (PERA), a law that significantly changed teacher and principal evaluation practices.

After losing in Phase 1, Illinois went on to adopt several other Race to the Top policies prior to submitting Phase 2 and Phase 3 applications. The competition served as a clear catalyst for education reform in the state. As Illinois state senator Kimberly Lightford noted, “It’s not that we’ve never wanted to do it before. I think Race to the Top was our driving force to get us all honest and fair, and willing to negotiate at the table.”

Whereas persistence eventually paid off for Illinois, California’s applications never resulted in Race to the Top funding. As in Illinois, lawmakers in California adopted several significant education reforms in an effort to solidify their chances of winning an award. Prior to the first-round deadline, the director of federal policy for Democrats for Education Reform noted that in California, “there’s been more state legislation [around education reform] in the last eight months than there was in the entire seven or eight years of No Child Left Behind, in terms of laws passed.”

California was not selected as a Phase 1 or Phase 2 winner, and a change in the governor’s mansion prior to Phase 3 meant the state would not compete in the last competition. While the state never did receive any funding, California did not revoke any of the policies it had enacted during its failed bids.

Although Alaska did not participate in Race to the Top, the state adopted policies that either perfectly or nearly perfectly aligned with Race to the Top priorities. Governor Sean Parnell acknowledged the importance of keeping pace with other states.

What about the four states that never applied for Race to the Top funding? By jump-starting education policy reform in some states, the competition may have influenced policy deliberations in others. Alaska provides a case in point. When Race to the Top was first announced, Alaska’s education commissioner, Larry LeDoux, cited concerns about federal government power and the program’s urban focus as reasons not to apply.

Still, in the years that followed, Alaska adopted a batch of policies that either perfectly or nearly perfectly aligned with Race to the Top priorities. One of the most consequential concerned the state’s teacher-evaluation system. In 2012, the Alaska Department of Education approved changes that required that 20 percent of a teacher’s assessment be based on data from at least one standardized test, a percentage that would increase to 50 by the 2018‒19 school year. In defending the rule, Governor Sean Parnell recognized the importance of keeping pace with other states’ policy achievements: “Nearly 20 states in the nation now weight at least 33 percent, and many 50 percent, of the performance evaluation based on student academic progress. I would like Alaska to lead in this, not bring up the rear with 20 percent of an evaluation focused on student improvement.” Those 20 states that had made the changes, it bears emphasizing, had participated in Race to the Top.

[Sidebar: Policies tracked for Race to the Top applications and state adoptions]

Policymaker Perspectives. To further assess the influence of Race to the Top on state policymaking, I consulted state legislators. Embedded in a nationally representative survey of state legislators conducted in the spring of 2014 was a question about the importance of Race to the Top for the education policy deliberations within their states. Roughly one-third of legislators reported that Race to the Top had either a “massive” or “big” impact on education policymaking in their state. Another 49 percent reported that it had a “minor” impact, whereas just 19 percent claimed that it had no impact at all.

Lawmakers’ responses mirror my finding that Race to the Top influenced policymaking in all states, with the greatest impact on winning states. Winners were fully 36 percentage points more likely to say that Race to the Top had a massive or big impact than losers, who, in turn, were 12 percentage points more likely than legislators in states that never applied to say as much. If these reports are to be believed, Race to the Top did not merely reward winning states for their independent policy achievements. Rather, the competitions meaningfully influenced education policymaking within their states.

Even legislators from nonapplying states recognized the relevance of Race to the Top for their education policymaking deliberations. Indeed, a majority of legislators from states that never applied nonetheless reported that the competitions had some influence over policymaking within their states. Although dosages vary, all states appear to have been “treated” by the Race to the Top policy intervention.

From Policy to Practice. None of the preceding analyses speak to the translation of policy enactments into real-world outcomes. For all sorts of reasons, the possibility that Race to the Top influenced the production of education policy around the country does not mean that it changed goings-on within schools and districts.

Still, preliminary evidence suggests that Race to the Top can count more than just policy enactments on its list of accomplishments. As Education Next has reported elsewhere (see “States Raise Proficiency Standards in Math and Reading,” features, Summer 2015), states introduced more rigorous standards for student academic proficiency in the aftermath of Race to the Top. Moreover, they did so in ways that reflected their experiences in the competition itself.

[Figure 3]

Figure 3a tracks over a 10-year period the average rigor of standards in states that eventually won Race to the Top, states that applied but never won, and states that never applied. Throughout this period, eventual winners and losers looked better than nonapplicants. Before the competition, though, winners and losers looked indistinguishable from one another. Between 2003 and 2009, the rigor of their state standards declined at nearly identical rates and to identical levels. In the aftermath of Race to the Top, however, winning states rebounded dramatically, reaching unprecedented heights within just two years. While losing states showed some improvement, the reversal was not nearly as dramatic. Nonapplying states, meanwhile, maintained their relatively low standards.

The impact of Race to the Top on charter schools, which constituted a less significant portion of the competition, is not nearly so apparent. In winning states, higher percentages of public school students attend charter schools than in either losing or nonapplying states. But as Figure 3b shows, post-Race to the Top gains appear indistinguishable from the projections of previous trends: between 2003 and 2013, the three groups of states showed nearly constant gains in charter school enrollments. Race to the Top may have helped sustain those preexisting gains, but even that seems unlikely.

Conclusions and Implications

With Race to the Top, the Obama administration sought to remake education policy around the nation. The evidence presented in this paper suggests that it met with a fair bit of success. In the aftermath, states adopted at unprecedented rates policies that were explicitly rewarded under the competitions.

States that participated in the competitions were especially likely to adopt Race to the Top policies, particularly those on which they made explicit policy commitments in their applications. These patterns of policy adoptions and endorsements, moreover, were confirmed by a nationally representative sample of state legislators who were asked to assess the impact of Race to the Top on education policymaking in their respective states.

Differences in the policy actions of winning, losing, and nonapplying states, however, do not adequately characterize the depth or breadth of the president’s influence. In the aftermath of Race to the Top, all states experienced a marked surge in the adoption of education policies. And legislators from all states reported that Race to the Top affected policy deliberations within their states.

While it is possible that Race to the Top appeared on the scene at a time when states were already poised to enact widespread policy reforms, several facts suggest that the initiative is at least partially responsible for the rising rate of policy adoption from 2009 onward. First, winning states distinguished themselves from losing and nonapplying states more by the enactment of Race to the Top policies than by other related education reforms. Second, at least in 2009 and 2010, Race to the Top did not coincide with any other major policy initiative that could plausibly explain the patterns of policy activities documented in this paper. (Obama’s selective provision of waivers to No Child Left Behind, a possible confounder, did not begin until later.) Finally, state legislators’ own testimony confirms the central role that the competitions played in the adoption of state policies between 2009 and 2014, either by directly changing the incentives of policymakers within applying states or by generating cross-state pressures in nonapplying states.

The surge of post-2009 policy activity constitutes a major accomplishment for the Obama administration. With a relatively small amount of money, little formal constitutional authority in education, and without the power to unilaterally impose his will upon state governments, President Obama managed to jump-start policy processes that had languished for years in state governments around the country. When it comes to domestic policymaking, past presidents often accomplished a lot less with a lot more.

William G. Howell is professor of American politics at the University of Chicago.

This article appeared in the Fall 2015 issue of Education Next. Suggested citation format:

Howell, W.G. (2015). Results of President Obama’s Race to the Top: Win or lose, states enacted education reforms. Education Next, 15(4), 58-66.

The post Results of President Obama’s Race to the Top appeared first on Education Next.

Grading Schools

Can citizens tell a good school when they see one? (Education Next, August 10, 2010)


Video: Marty West talks with Education Next.

An unabridged version of this article is available here.


Never before have Americans had greater access to information about school quality. Under the federal No Child Left Behind Act (NCLB), all school districts are required to distribute annual report cards detailing student achievement levels at each of their schools. Local newspapers frequently cover the release of state test results, emphasizing the relative standing of their community’s schools. Meanwhile, new organizations like GreatSchools and SchoolMatters aggregate this information and make it readily available to parents online.

But do all these performance data inform perceptions of school quality? Or do citizens base their evaluations instead on such indicators as the racial or class makeup of schools, regardless of their relationship with actual school performance?

In discussions of parental choice in education, researchers have frequently speculated that parents would base their evaluations of schools primarily on the characteristics of their student bodies. Columbia University professor Amy Stuart Wells, for example, concluded that the decisions of St. Louis parents participating in a voluntary desegregation program were based “on a perception that county is better than city and white is better than black, not on factual information about the schools.” And even if some parents base their decisions on educational quality, many observers worry that low-income and minority parents will be less informed about or interested in school quality, placing their children at a disadvantage in the education marketplace.

The evidence on these questions available to date comes from small-scale studies of specific school districts, making it difficult to reach general conclusions about the degree to which parents and the public at large are well informed about the performance of local schools. We are now able to supplement that research with data from a nationally representative survey of parents and other adults conducted in 2009 under the auspices of Education Next and the Program on Education Policy and Governance (PEPG) at Harvard University. Because we knew the addresses of respondents in advance of the survey, we were able to link individual respondents to specific public schools in their community and to obtain their subjective ratings of those schools. We also gathered publicly available data on student achievement in the same schools, making it possible to compare respondents’ subjective ratings to objective measures of school quality.

Our results indicate that citizens’ perceptions of the quality of their local schools do in fact reflect the schools’ performance as measured by student proficiency rates in core academic subjects. Although citizens also appear to take into account the share of a school’s students who are poor when evaluating its quality, those considerations do not overwhelm judgments based on information about academic achievement.

Public Perception and Objective Quality Measures

The 2009 Education Next–PEPG Survey was administered to a nationally representative sample of 3,251 American adults, including an oversample of 948 residents of the state of Florida. The Florida oversample was conducted in order to link perceptions of school quality to the unusually rich information about school performance available in that state. The survey was administered over the Internet by the polling firm Knowledge Networks in February and March of 2009. (For methodological details and complete survey results, see “The Persuadable Public,” features, Fall 2009.)

Before conducting the survey, we geo-coded the address of each respondent to latitude-longitude coordinates and a census block. We also obtained latitude-longitude coordinates for every U.S. public school from the National Center for Education Statistics. Using census blocks to place respondents within school districts, we then linked each respondent to the closest elementary, middle, and high schools (up to five schools of each type) operated by the local school district.
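The linking step described above (matching each respondent to the nearest district schools by latitude-longitude) can be sketched as a great-circle nearest-neighbor search. This is a minimal illustration, not the authors' actual pipeline; the respondent coordinates and school names below are invented:

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/long points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 = Earth's mean radius in miles

def closest_schools(respondent, schools, k=5):
    """Return the k schools nearest to a respondent, sorted by distance."""
    return sorted(
        schools,
        key=lambda s: haversine_miles(respondent["lat"], respondent["lon"],
                                      s["lat"], s["lon"]),
    )[:k]

# Hypothetical data: one respondent and three nearby elementary schools.
respondent = {"lat": 41.88, "lon": -87.63}
schools = [
    {"name": "Adams Elementary", "lat": 41.90, "lon": -87.65},
    {"name": "Blaine Elementary", "lat": 41.95, "lon": -87.68},
    {"name": "Clark Elementary", "lat": 41.87, "lon": -87.62},
]
print([s["name"] for s in closest_schools(respondent, schools, k=2)])
# -> ['Clark Elementary', 'Adams Elementary']
```

In the actual study the candidate set was first restricted to schools in the respondent's own district (via census block), and up to five schools of each type were retained.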

The survey asked all respondents this question: “Each of the following schools in your area serves elementary-school students. Which one, if any, do you consider your local elementary school?” It then offered each respondent a personalized list of the five closest elementary schools from which to pick; respondents were also allowed to specify a school that did not appear on the list. After a specific elementary school had been identified, the survey asked the respondent to grade this school on a scale from A to F. This same process was then repeated for middle and high schools.

We converted the A to F grades that respondents assigned to the schools into a standard grade-point-average (GPA) scale (A=4 and F=0). Of the elementary and middle schools our survey respondents rated, 41 percent received a B grade, while 36 percent received a C. In contrast, only 14 percent of schools received an A grade, 7 percent a D, and 2 percent an F. This distribution corresponds to an overall GPA of 2.57, or just below a B-minus average. Interestingly, respondents assigned their local middle schools grades that were, on average, one-quarter of a letter grade lower than the grades they assigned their local elementary schools (see Figure 1).
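The grade-to-GPA conversion is a simple weighted average, and the reported distribution of grades roughly reproduces the overall figure (the published 2.57 reflects the unrounded survey percentages; the rounded shares below give approximately 2.58):

```python
# Map letter grades to points (A=4 ... F=0) and weight each by the
# share of schools receiving that grade, as reported in the survey.
points = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}
shares = {"A": 0.14, "B": 0.41, "C": 0.36, "D": 0.07, "F": 0.02}

gpa = sum(points[g] * shares[g] for g in points)
print(round(gpa, 2))  # 2.58 -- just below a B-minus average
```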

We measured actual school quality as the percentage of students in a school who achieved “proficiency” in math and reading on the state’s accountability exams (taking the average proficiency rate across the two subjects). School-level data on student proficiency were drawn from SchoolDataDirect.org for the 2007–08 school year, the most recent year for which test-score data would have been publicly available when the survey was conducted. Although the rigor of state content standards and definitions of math and reading proficiency vary widely (see “State Standards Rise in Reading, Fall in Math,” features), we are able to adjust for these differences by limiting our comparisons to respondents within the same state when examining the relationship between proficiency levels and school ratings.

To be sure, the percentage of students achieving proficiency in core academic subjects is an imperfect measure of quality, even when comparing schools in the same state. Given the strong influence of out-of-school factors on student achievement, any quality measure based on the level of student performance at a single point in time will be heavily influenced by characteristics of a school’s student body. At the same time, proficiency rates are the only quality measure available for a national sample of schools. They are determined in part by the amount students learn in school, and research suggests that moving to a school with higher proficiency rates does produce achievement gains.

Nor do we wish to claim that any judgment of school quality that does not correspond to test-score performance is uninformed or irrational. The ability to promote math and reading achievement is hardly the only dimension along which citizens are likely to evaluate their local schools. But we suspect that high test scores go along with other aspects of school quality that citizens value in their schools, so that evidence of a connection between student achievement and public opinion likely indicates that parents and other members of the public have the information they need to make reasonable judgments about their schools.

National Evidence

These data enable us to provide the first evidence on the extent to which citizens’ subjective ratings of specific schools correspond to publicly available information on their actual performance. Because other school characteristics may also influence perceptions of school quality, we incorporated into our analysis data from the National Center for Education Statistics on the racial/ethnic composition of each school, the percentage of students eligible for free or reduced-price lunch (an indicator of poverty), average cohort size (our preferred measure of school size), and pupil-teacher ratio (a proxy measure of class size) in the 2007–08 school year. We exclude high schools when analyzing the data for the nation as a whole because proficiency data are unavailable for many of them, and when available, typically reflect the performance of only a single cohort of students. We also adjust for whether the respondent was evaluating an elementary or a middle school to account for the fact that middle schools received systematically lower grades from survey respondents.

Figure 2 presents the strength of the relationship between citizen ratings of school quality and each of these school characteristics after taking into account the other key variables built into our analysis. The values of each variable except the one identifying elementary schools have been standardized to illustrate their relative importance. (In technical terms, the relationships presented for these variables reflect the effect of an increase of one standard deviation in the value of the characteristic in question.) The figure confirms that student proficiency rates are a significant predictor of citizen ratings of school quality. An increase of 18 percentage points in percent proficient (i.e., one standard deviation) is associated with a rating that is, on average, 0.16 grade points higher, or about one-sixth of a letter grade.
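The standardized coefficients reported here have a mechanical interpretation: rescale a predictor to mean zero and standard deviation one, and the fitted slope is the expected change in the A-to-F rating (on the 0-to-4 scale) per one-standard-deviation change in that predictor. A minimal sketch with simulated data (the numbers below are invented, chosen only to mimic the reported magnitudes, and are not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: proficiency rates (percent) and 0-4 school ratings,
# with a true effect of 0.009 grade points per proficiency point.
proficiency = rng.normal(70, 18, size=500)  # SD of ~18 points, as in the study
rating = 2.5 + 0.009 * (proficiency - 70) + rng.normal(0, 0.5, size=500)

# Standardize the predictor: mean 0, standard deviation 1.
z = (proficiency - proficiency.mean()) / proficiency.std()

# OLS slope on the standardized predictor = grade-point change per 1 SD.
slope, intercept = np.polyfit(z, rating, 1)
print(round(slope, 2))  # roughly 0.009 * 18, i.e. about 0.16 grade points per SD
```

The study's actual estimates come from a multivariate regression with several predictors standardized simultaneously, but the per-standard-deviation reading of each coefficient is the same.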


Examining the racial/ethnic and class makeup of a school’s student body in isolation would suggest that both are important predictors of citizen ratings, a fact that may explain the common perception that this is the case. In particular, schools with 25 percentage points more African American students received ratings that were 15 percent of a letter grade lower, while schools with 24 percentage points more Hispanic students received ratings that were 16 percent of a letter grade lower. Schools with 26 percentage points more poor students received ratings that were one-quarter of a letter grade lower.

However, when these variables are considered simultaneously and alongside school performance and resource measures, only the poverty indicator retains predictive power. Neither the percentage of students who are African American nor the percentage who are Hispanic is systematically related to perceptions of school quality. The percentage of students who are poor remains an important predictor of citizen ratings, with a relationship essentially as strong as that for proficiency rates.

Even after controlling for proficiency rates and other school characteristics, middle schools receive ratings that are, on average, 18 percent of a letter grade lower than comparable elementary schools. In other words, proficiency rates explain some, but by no means all, of the lower perceived quality of middle schools. This finding is of interest given recent research suggesting that middle schools have adverse consequences for student achievement (see “Stuck in the Middle,” research). In contrast, neither school size nor pupil-teacher ratio is an important determinant of perceptions of school quality. In fact, the weak relationship between pupil-teacher ratio and school ratings runs in the direction opposite to what one would expect: schools with larger classes receive somewhat higher grades, perhaps because effective schools attract more families to the neighborhood.

As noted above, it has often been speculated that disadvantaged groups are less informed about school quality than more-advantaged groups. But we find that the relationship between school performance and citizen ratings is as strong for African American and Hispanic respondents as it is for whites. The relationship between school quality and citizen ratings is also essentially the same for high-income and more-educated respondents as it is for low-income and less-educated respondents.

We also consider whether the relationship between school performance and citizen ratings is stronger for parents of school-age children, who are arguably the most connected to their local schools, or for homeowners, whose property values are influenced by school quality. Perhaps surprisingly, homeowners are no more sensitive to differences in school quality than are other citizens. However, the relationship between proficiency rates and school ratings is more than twice as strong for parents of school-age children as for other respondents (see Figure 2). An increase of one standard deviation in percent proficient is associated with a rating from parents that is one-third of a letter grade higher, as compared with 16 percent of a letter grade higher for the public as a whole. Parents also give low-scoring schools far lower ratings than do other local residents, but this difference narrows and eventually reverses direction as proficiency rates increase (see Figure 3). Like those of other citizens, parents’ ratings of local schools are not influenced by the schools’ racial/ethnic composition, school size, or pupil-teacher ratios. Parents do, however, appear to be somewhat more responsive than other citizens to school poverty rates and take an especially dim view of middle schools, assigning them grades that are 39 percent of a letter grade lower than otherwise similar elementary schools.

Finally, we consider the issue of differences in school quality across states. Because NCLB allows each state to set its own standards for proficiency, schools in different states with the same percentage of students achieving proficiency may be of markedly different quality if one state has high standards and the other low. The national sample allows us to examine the degree to which citizen ratings of school quality are responsive to performance levels relative to the nation or simply to differences in performance within specific states. The National Assessment of Educational Progress (NAEP) conducted every two years by the U.S. Department of Education provides evidence on the average performance of 4th- and 8th-grade students in each state in mathematics and reading. We use data from the 2007 NAEP to see whether respondents in states with higher-scoring students rate their schools higher, on average, than respondents in states with lower NAEP scores. That is, if we compare respondents whose local schools have the same proficiency rate as measured by their state test, do the respondents in states with better schools, as measured by student performance on the NAEP, assign their school higher grades? We find no evidence that respondents in general, or even parents, have information about school quality beyond the information provided on the state assessments. In other words, citizens appear to be taking cues about school quality from local comparisons or from information provided by their state testing system without taking into account the relative rigor of state standards.

Levels or Growth?

Our analysis yields strong evidence that citizens, and especially parents of school-age children, rate schools in a way that lines up with publicly available information about school quality. As discussed previously, however, the percentage of students scoring at the proficient level on state tests is an imperfect indicator of school quality, contaminated as it is by the fact that student achievement is influenced by a host of factors outside of a school’s control. A better, if still imperfect, measure of school quality is the amount of growth in student achievement from one year to the next. To examine the correspondence of citizen perceptions of school quality and measures of test-score growth, we turn to our representative sample of residents of Florida, where the state accountability system evaluates schools based on both test-score levels and test-score growth. Because high-school performance data are widely available in Florida, we are able to include high schools in this portion of the analysis.

Florida assigns schools letter grades based on a point system with eight main components, which we divide into two categories: level-related points (percentage proficient in math, English, writing, and science) and growth-related points (percentage making learning gains in math and reading and the percentage of the lowest 25 percent of students making gains in math and reading). The level variable is highly correlated with the school quality measure (percent proficient) used in the national analysis, but the correlation between the growth variable and percent proficient is considerably weaker.

Our basic strategy is to compare the ratings Florida residents assigned to their schools both to test-score levels and to test-score growth at those schools. Because measures of test-score growth are less stable over time than measures of test-score levels, we average the points awarded to each school based on levels and growth over the previous three years. Adjustments are also made for the same demographic and school characteristics as in the national analysis. To make the results as comparable as possible to those reported for the national sample, we also scale the point variables so that a one-unit increase in each variable corresponds to a shift of one standard deviation in the performance distribution of Florida public schools.

The results indicate that Florida residents’ perceptions of school quality are even more responsive to differences in student achievement levels than are those of the national public. An increase of one standard deviation in the level variable is associated with ratings that are almost one-third of a letter grade higher after taking into account other school characteristics. We also find that perceptions of school quality in Florida are unrelated to student demographic characteristics, including the percentage of students who are poor, once we take into account levels of student achievement. Although we cannot be sure, both Floridians’ greater responsiveness to test performance and their lack of responsiveness to student demographic characteristics could reflect the transparency and salience of the state’s high-profile school accountability system.

When both the test-score level and growth variables are examined simultaneously, however, the relationship between level-related points and citizen evaluations of schools is almost twice as strong as for growth-related points. This suggests that citizen ratings do reflect differences in the growth in student achievement across schools, but that this is primarily because of the correlation between achievement levels and achievement growth.

The Role of Accountability Systems

So far we have shown that citizens’ assessments of schools are strongly related to objective measures of performance made available by state accountability systems. Yet it is difficult to determine whether respondents’ apparent sensitivity to actual quality is the result of publicly available information or simply direct experience with schools. The fact that parental perceptions track actual school quality more closely than those of other citizens, but the perceptions of homeowners do not, suggests that direct interactions with a school may be a more important factor than simply having a vested interest in acquiring information about local school quality. But do accountability systems also play a role in shaping citizen perceptions?

Again, Florida provides an ideal case for more detailed analysis. As noted above, the Florida Department of Education uses the total number of points received (i.e., the sum of level- and growth-related points) to assign each school a letter grade between A and F. These grades receive considerable media attention in Florida, so we might expect citizen ratings to be correlated with them. This expectation is confirmed in the data: a school grade that is one point higher (again measured on a standard GPA scale) is associated with a respondent rating that is 0.2 grades higher.

To test the hypothesis that publicly available information has an impact over and above direct observation of school performance, we can compare the ratings given by respondents whose schools were very close to the cutoffs in the point system used by Florida to assign school grades. We know that schools with more points received higher ratings on average, but might also expect to see a “jump” in the average rating at these cutoffs. Because schools on either side of the cutoff should be of essentially the same quality, we can interpret any jump in the rating observed at the cutoff as the pure effect of information provided by the school grade on citizen perceptions of school quality.

We focus our attention on the B/C cutoff, because that is the only one for which we have enough respondents assigned to schools near the cutoff to yield results with a reasonable degree of precision. Comparing respondents’ ratings of schools on either side of this cutoff suggests a large positive effect of receiving the higher (B) grade, with an increase in the grades assigned to schools in the range of 36 to 57 percent of a letter grade. That the publicized school grades have a direct effect on respondent ratings over and above the relationship between ratings and the underlying point variables suggests that the signals provided by the state’s school accountability system do in fact affect citizen perceptions of their local schools.
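The logic of this cutoff comparison can be sketched in a few lines: restrict attention to schools within a narrow band of points around the grade boundary and compare average ratings on either side. The data, cutoff value, and bandwidth below are invented for illustration; the study's actual estimation is more elaborate.

```python
from statistics import mean

def rd_jump(schools, cutoff, bandwidth):
    """Difference in mean citizen rating between schools just above and
    just below a grade cutoff. `schools` is a list of (points, rating) pairs."""
    below = [r for p, r in schools if cutoff - bandwidth <= p < cutoff]
    above = [r for p, r in schools if cutoff <= p <= cutoff + bandwidth]
    return mean(above) - mean(below)

# Ratings of schools within a narrow band of a hypothetical B/C point cutoff.
sample = [(492, 2.1), (494, 2.3), (496, 2.8), (498, 2.9)]
print(rd_jump(sample, cutoff=495, bandwidth=5))  # jump of roughly 0.65
```

Because schools just on either side of the boundary should be of essentially the same quality, any such jump can be read as the effect of the published grade itself.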

Implications

The findings reported above represent the first systematic evidence that Americans’ perceptions of the quality of their local public schools reflect publicly available information about the academic achievement of the students who attend them. Importantly, disadvantaged segments of the population are no less informed about school quality than other citizens. Although the mechanisms explaining this responsiveness are not entirely clear, our evidence suggests that both direct experience with schools and the public dissemination of performance data may play a role.

It is worth emphasizing several limitations on this evidence of responsiveness. First, the relationship between actual and perceived quality is modest for citizens as a whole, although it is quite strong for parents, who have the most opportunities to observe schools and arguably have the strongest incentives to be informed. Second, both parents and the public appear to be more responsive to the level of student achievement at a school than to the amount students learn from one year to the next. Finally, citizens appear sensitive to relative differences in school quality within their state (as reflected in school performance on state tests) but insensitive to information on school quality in the state as a whole (as measured by statewide performance on a national assessment).

Even so, at least two policy implications emerge from our results. First, our finding that accountability ratings influence citizens’ assessments of their local schools coupled with the fact that citizen ratings are more strongly associated with achievement levels than with achievement growth suggest that featuring growth measures more prominently in school accountability ratings could cause citizens to pay more attention to this barometer of school quality. Second, our finding that citizen ratings are associated with student performance on state tests but not with performance on a national assessment suggests that a closer alignment of state standards (or a move toward common standards across states) might help citizens form more accurate perceptions of their schools. In particular, it could lower perceptions of school quality in states where many students perform poorly relative to national norms but are deemed proficient by the state.

Matthew M. Chingos is a postdoctoral fellow at Harvard University’s Program on Education Policy and Governance. Michael Henderson is a doctoral candidate in Harvard’s Department of Government. Martin R. West is assistant professor of education at the Harvard Graduate School of Education and executive editor of Education Next.

The post Grading Schools appeared first on Education Next.

School-Finance Reform in Red and Blue https://www.educationnext.org/school-finance-reform-in-red-and-blue/ Sun, 18 Apr 2010 00:00:00 +0000 http://www.educationnext.org/school-finance-reform-in-red-and-blue/ Where the money goes depends on who’s running the state

The post School-Finance Reform in Red and Blue appeared first on Education Next.


Video: Chris Berry talks with Education Next


The constitutionality of state school-finance systems has been under attack for nearly 40 years. Since the California Supreme Court’s 1971 ruling in Serrano v. Priest, finance-reform advocates have filed 139 separate lawsuits in 45 states. The specific language varies from state to state, but virtually all state constitutions contain education clauses that require the state legislature to provide an “adequate,” “basic,” or “thorough and efficient” education for all children. Plaintiffs have relied on these provisions to seek increases in the financial resources devoted to public schools, especially those serving disadvantaged students. Courts have in turn deemed school-finance systems unconstitutional in 28 states.

While school-finance lawsuits have attracted significant attention in the legal community and generated numerous state-specific case studies, nationwide analyses of the effects of school-finance judgments (SFJs) have been relatively few. This small pool of studies has produced some common conclusions, namely, that such judgments reduce funding inequality between districts by increasing spending in the poorest districts and that they do so by transferring responsibility for education funding from local to state governments. Some questions remain unanswered, however, such as why SFJs have substantially different effects in different states.

A court’s ruling that an existing school-finance system is unconstitutional is only the first step toward funding reform. Some court orders provide instruction for how the legislature should fix the system, but most simply instruct state politicians to redesign the finance system themselves. In either case, the new finance system must garner the approval of the state legislature and governor. In other words, after the court ruling, the reform must pass through the state’s usual lawmaking process. States with similar court rulings may end up with very different reforms, depending on how the legislature and governor respond.

With this political process in mind, we decided to investigate how politics might influence the way an SFJ alters a state’s school-finance system. Our starting point was estimating the change in per-pupil funding that could be confidently attributed to an SFJ. We did this by comparing changes in funding in school districts where the state’s school-finance system has been ruled unconstitutional in a court challenge to funding changes in comparable districts in states where no SFJ has been issued. We studied district-level changes in school funding following 23 school-finance judgments issued between 1988 and 2005. The lawsuits were all related to general education funding, and each was the first SFJ in a state during our period of study. In total, we studied funding outcomes in more than 13,000 districts over 18 years.

What we most wanted to know is whether the change in funding differs when, at the time of the court decision, a state has unified Democratic control of the state legislature and the governorship, unified Republican control, or control divided between the two parties, as when the governor is a Republican and the Democrats control one or both houses of the legislature. To find out, we compared the outcomes of SFJs issued in each of these circumstances.

We found that court-ordered finance reform alters district funding levels under each type of partisan regime. On balance, Democratic control results in across-the-board increases in state funding to local school districts, while Republican and divided-government regimes tend to produce funding increases targeted to poorer districts. SFJs in all three types of political environments lead to a shift in funding responsibility from local to state governments, although to differing degrees.

Which Party is Responsible?

As we began our study, we had to decide how to assign responsibility for school funding changes produced by an SFJ in the years following the judgment, especially when the party that controls the state government changed. We decided to focus on partisan control at the time of the court decision because the government at the time of the ruling is obligated to craft the policy response. Our approach, then, attributes the effect of the SFJ in subsequent years to the party in power when the judgment is made, even if there is a subsequent change in partisan control. We checked the validity of this decision by rerunning our analysis, attributing the funding associated with an SFJ in any given year to the party in control of the state government in that same year. With this method, our estimates of the relationship between partisan control and the effect of an SFJ, in dollars, were much less precise than when we used our preferred approach, although the substantive conclusions of our analysis remained the same. The greater precision of our preferred approach leads us to conclude that party control at the time of the court decision plays, on average, the most important role in determining the political response to an SFJ.

Table 1 lists the cases used in our analysis and the configuration of partisan control of the state government at the time of the court decision. Only three SFJs were issued during periods of unified Republican government: in New Hampshire, Ohio, and Wyoming. This suggests the need for caution in interpreting our results, especially about the patterns in school finance we see under Republican governments. There were seven judgments handed down during unified Democratic government (in Alabama, Kentucky, Maryland, Missouri, Tennessee, Vermont, and West Virginia) and 11 delivered when government was divided (in Connecticut, Idaho, Kansas, Massachusetts, Minnesota, Montana, North Carolina, New Jersey, New York, South Carolina, and Texas).

State, Local, and Federal Funding

While SFJs require a policy response from the state government, and therefore are expected to have a direct impact on state funding, they may also have an indirect effect on funding from local sources. Indeed, one concern over the efficacy of court-induced reforms is that local districts may reduce their own contribution to the schools in response to increases in state aid, thereby undermining efforts to increase total school spending. To provide a more comprehensive picture of the effect of SFJs, we look at the impact on both state and local funding.

Of course, because spending on schools also includes a small amount of federal aid, total funding is not simply the sum of state and local funding. Federal funds, which make up about 10 percent of total education funding, have until recently been limited to specific programs, such as the National School Lunch Program and special education. Thus, we would not expect a state court decision to influence federal funding, an assumption that is borne out in the data.

Gauging the Effects

Our basic strategy was to compare changes in funding levels in districts where the state’s school-finance system has been ruled unconstitutional to funding changes in comparable districts in states where an SFJ has not been issued. We make these comparisons with groups of districts that had Democratic, Republican, or split-party control of the state government at the time the SFJ was issued. We allow for a one-year delay for the judgment to take effect because we assume that any changes in policy made as a result of the decision will be reflected in the next year’s budget, at the earliest.

Because most school-finance lawsuits are aimed at increasing funding for poor districts specifically, we designed our analysis to measure how the effects of SFJs, and of the party in control of the state government at the time of the decision, might be different for school districts with high rates of students in poverty and for districts where the students are better-off financially. To look for these differences, we divided each state’s districts into four quartiles based on the proportion of students living in poverty and allowed for the possibility that the effect of an SFJ, and of one under Democratic, Republican, or divided government, could be different in each quartile.
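The quartile assignment described above can be sketched as a simple within-state ranking of districts by their student poverty rates. The rates below are invented for illustration; the study's actual assignment uses Census poverty estimates.

```python
def poverty_quartiles(rates):
    """Assign each district a quartile (1 = least poor, 4 = poorest)
    based on its rank within the state."""
    order = sorted(range(len(rates)), key=lambda i: rates[i])
    quartiles = [0] * len(rates)
    for rank, i in enumerate(order):
        quartiles[i] = rank * 4 // len(rates) + 1
    return quartiles

# Hypothetical poverty rates for eight districts in one state.
state_rates = [0.05, 0.32, 0.11, 0.24, 0.18, 0.09, 0.27, 0.14]
print(poverty_quartiles(state_rates))  # [1, 4, 2, 3, 3, 1, 4, 2]
```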

To isolate the effects of an SFJ on districts within each poverty quartile, we focus on changes in spending over time within specific school districts after taking into account changes from year to year in average education spending across all of the nation’s school districts. Thus we effectively control for unmeasured attributes of each school district that are constant over time and for national trends that affect all districts, such as economic conditions or changes in federal education policy that could have an impact on funding even in the absence of an SFJ. We adjust for inflation by converting all per-pupil funding figures to constant 2007 dollars.
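The "within" transformation implied by this design can be sketched as removing each district's own average and each year's national average from the inflation-adjusted funding figures; what remains is the deviation from district norms and national trends on which the SFJ estimates rest. The panel below is a toy example, not the study's data.

```python
def demean(panel):
    """panel: dict mapping (district, year) -> per-pupil funding in 2007
    dollars. Returns the same keys with district and year means removed
    (the standard two-way fixed-effects transformation)."""
    districts = {d for d, _ in panel}
    years = {y for _, y in panel}
    d_mean = {d: sum(v for (dd, _), v in panel.items() if dd == d) /
                 sum(1 for (dd, _) in panel if dd == d) for d in districts}
    y_mean = {y: sum(v for (_, yy), v in panel.items() if yy == y) /
                 sum(1 for (_, yy) in panel if yy == y) for y in years}
    grand = sum(panel.values()) / len(panel)
    return {k: v - d_mean[k[0]] - y_mean[k[1]] + grand
            for k, v in panel.items()}

# Toy two-district, two-year panel of per-pupil funding.
panel = {("A", 1): 100.0, ("A", 2): 110.0, ("B", 1): 200.0, ("B", 2): 230.0}
print(demean(panel)[("B", 2)])  # 5.0: district B grew faster than the trend
```

Regressing these residuals on a post-judgment indicator (and the district and state controls described above) yields the within-district change in funding attributable to an SFJ.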

Of course, there are other factors that likely influence changes over time in the level of per-pupil funding in a school district, including characteristics that change over time and influence either their receipt of state funding or the propensity of school districts to raise their own local revenue. We account for the variation in funding that should be directly attributed to the percentage of the student population living in poverty, independent of any change produced by an SFJ. We also include the total number of students in the district, to allow for the possibility that large districts operate differently from small districts. And we estimate the impact on per-pupil expenditure of the proportion of students in a district with Individualized Education Plans (IEPs), as students with IEPs generally have special needs that result in higher spending. Finally, we include the proportion of the student population that is African American and the proportion Hispanic. Although we have no reason to believe that these two variables directly cause changes in education funding, they may be correlated with other relevant factors, such as property values or population growth, for which we lack direct information.

In addition to district-specific characteristics, we take into account state-level characteristics that could influence state funding of education. In particular, we control for the fraction of the state’s population over age 65 to account for the possibility that the elderly oppose increases in school spending. We also control for the fraction of the population that is of school age, which captures aggregate demand for educational services. The final control variable in our analysis is per-capita income in the state, as the demand for government services may increase with income.

Annual district-level financial and demographic information comes from the Common Core of Data (CCD), available from the National Center for Education Statistics (NCES). For years in which CCD data are not available (1988–1992 and 2005), we use data from the U.S. Census Bureau’s Elementary-Secondary Education Finance Survey (F-33). Our analysis considers only local school districts and parts of local supervisory unions with at least 100 students, as identified by the CCD. We exclude Hawaii and Washington, D.C., because each has only one school district.

Additional district demographic information, including the proportion of the population aged 5 to 17 and the proportion of school-aged children living in poverty, comes from the U.S. Census Small Area Income and Poverty Estimates for most years. For 1989 and 2005, the district demographic information comes from the School District Demographics System. Because district poverty information is not available for every year, we use the poverty estimates from the closest available survey year. For example, the district poverty estimates for 1996, 1997, and 1998 all use the data from 1997.
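The nearest-survey-year rule can be sketched as a simple lookup. The list of survey years below is illustrative rather than the actual release schedule, and the tie-breaking toward the earlier year is our own assumption, chosen to match the article's 1996–1998 example.

```python
def nearest_survey_year(year, survey_years):
    """Pick the closest available survey year (earlier year wins ties)."""
    return min(survey_years, key=lambda s: (abs(s - year), s))

# Hypothetical survey-year schedule.
surveys = [1993, 1997, 1999, 2003]
print([nearest_survey_year(y, surveys) for y in (1996, 1997, 1998)])
# all three school years borrow the 1997 estimates
```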

Partisan Patterns

A new and very clear picture about the impact of politics on SFJs emerged from our findings. The school-finance reforms implemented by Democratic state governments have substantially different effects on district funding than reforms produced by Republican or divided governments. When a Democratic state government implements an SFJ-induced reform, all districts, poor and non-poor alike, see increases in total funding. Under Republican and divided governments, districts with different levels of poverty fare quite differently.

Figure 1 represents our findings graphically. Each bar in the graph represents the effect of an SFJ, that is, the within-district change in spending after the decision, for each category of partisan control and district poverty. We present separate estimates for the change in total funding, in funding from state sources, and in funding from local revenues.

In Democrat-led reforms, our estimates show, districts in every poverty quartile see a shift from local to state funding after an SFJ. Local funding decreases, while state funding increases. This pattern of centralization of school funding is consistent with evidence from earlier studies, which also shows that localities partially offset state efforts to increase overall education spending after SFJs.

The upshot is a net increase in total funding ranging from roughly $750 to $1,000 per pupil—a sizable impact, given that total per-pupil funding in our sample is a little over $9,750 on average. While a few of the differences between quartiles are statistically significant, they are substantively small relative to the overall level of the funding increases. Indeed, if anything, the results indicate that the most affluent districts fare better than the poorest districts, in terms of total funding, when Democrats are in power, although this difference is not statistically significant. We should note here that high poverty does not necessarily imply low spending (in many states, high-poverty districts have the highest spending levels), so our findings here do not bear directly on spending inequality.

School-finance rulings handed down to divided governments produce decidedly different results. State funding increases across the board, but the changes in state funding differ markedly across the levels of district poverty: The poorer the district, the larger the increase in state funding. But, as in states with Democrat-led reforms, SFJs are not unmitigated wins for school-district budgets. All four quartiles see sizable reductions in funding from local sources. These reductions are large enough that the poorest quartile is the only one to see positive net changes in total funding. Overall, divided government reforms appear to represent a more or less straightforward redistribution of funding toward the poorest districts. The net effects on state education funding appear to be budget-neutral, as we estimate that there is little change in total education funding after an SFJ under a divided government. That said, the net increase of roughly $175 per student in total funding for poor districts is fairly modest when compared to total per-pupil funding.

Republican-controlled reforms present yet a third pattern of funding changes. Under Republican governments, funding shifts from local to state only for the poorest districts. Districts in the most affluent quartile face cuts in state funding, but they are able to more than compensate for these reductions by increasing local funding. In other words, Republican-led reforms involve centralization of funding for the poorest districts and decentralization of funding for the richest districts. The middle two quartiles are essentially unaffected. On net, both the poorest and the richest districts see increases in total funding, the former courtesy of state aid and the latter financed from their own tax base. Indeed, the richest half of districts in Republican states are the only group under any partisan regime to experience an increase in local funding following an SFJ.

Alternative Explanations

A lingering concern with our results may be that party control of the state government is related to the decision to file a school-reform lawsuit. Finance reform advocates may time the filing of their lawsuits to take advantage of what they view as particularly favorable political conditions. Another possibility is that advocates might resort to litigation only when the legislative and political process fails to provide reform. Either of these possibilities means that SFJs might have effects that appear to be associated with party control but are not actually caused by the response of the party in power.

We answer by first noting that because nearly all states—45 of 50—were subject to at least one education-finance lawsuit, the central issue is not whether a state would face a suit but when. Beyond that point, we believe that this is not a major concern for three reasons: 1) the amount of time between lawsuit filing and the court decision is often long and always unpredictable; 2) the party in control often changes between the lawsuit filing and decision; and 3) lawsuits do not appear to be precipitated by changes in political regime. Among the 23 cases included in our study, the length of time from the initial filing through the final appellate court decision ranged from less than a year to nine years. On average, the process took four years. Due to the length of time the suits take and the variability of the speed of the adjudication process, advocates could not effectively time their lawsuits to specific political circumstances. In almost half of the cases (11 out of 23), the party in control changed between the time of filing and the time of decision. Further, school-finance lawsuits do not appear to be triggered by changes in party control. On average, the party in control in the state was stable for six years prior to the filing of a case. In only three cases did the party in control change in the year of the lawsuit filing, and for each of those three cases, the party in control changed again before the lawsuit was decided.

Conclusion

Which partisan arrangement leads to the best results for poor districts after a school-finance judgment? That question requires stepping into the debate about the relationship between student outcomes and school funding and goes beyond the evidence we present here. What our study does show is one of the many possible ways that politics can influence the implementation of court-ordered school-finance reform. Clearly, reforms implemented by Democrats produce the largest net increases in funding for all students. However, by delivering roughly equivalent funding increases to districts at all income levels, Democrat-led reforms do not target new resources to districts serving poor students. Reforms implemented by divided or Republican governments deliver concentrated benefits to districts serving poor students. In these instances, however, the actual flow of new dollars into poor districts is more meager than when Democrats are in control.

Christopher Berry is assistant professor at the Harris School of Public Policy at the University of Chicago. Charles Wysong is a student at Stanford Law School.

The post School-Finance Reform in Red and Blue appeared first on Education Next.

What Happened When Kindergarten Went Universal? https://www.educationnext.org/what-happened-when-kindergarten-went-universal/ Wed, 03 Mar 2010 00:00:00 +0000 http://www.educationnext.org/what-happened-when-kindergarten-went-universal/ Benefits were small and only reached white children


More than four decades after the first model preschool interventions, there is an emerging consensus that high-quality early-childhood education can improve a child’s economic and social outcomes over the long term. Publicly funded kindergarten is available to virtually all children in the U.S. at age five, but access to preschool opportunities for children four years old and younger remains uneven across regions and socioeconomic groups. Parents with financial means have the option of enrolling their child in a private program at their own expense. State and federal subsidies are available to some low-income parents; the federal Head Start program also serves children from low-income families. And states such as Oklahoma and Florida have recently enacted universal preschool programs. Yet gaps in access to high-quality programs remain.

It is unclear whether and how public funds should be mobilized to close those gaps. Some advocate expanding existing programs that target disadvantaged children on the grounds that limited public resources should be directed toward the families and children most in need. Others consider the perennial underfunding of targeted programs like Head Start as evidence of a lack of political support for this approach, and argue that providing universal access is needed to ensure adequate public funding over the long run. In other words, any new funding for preschool education must benefit middle-class children if it is to gain their parents’ political backing. Or so it is argued.

Existing research provides little insight into the relative merits of universal programs and those targeted to specific groups. While there have been several recent studies of the short-term effects of universal preschool programs in the U.S., there is no evidence to date on long-term consequences. Some studies suggest that Head Start has lasting effects in reducing criminal behavior and increasing educational attainment, but this program is much more intensive than any universal program is likely to be and serves a very disadvantaged population.

In the absence of direct evidence on the types of preschool programs now under consideration, this study attempts to shed light on the likely consequences of a new universal program by estimating the impact of earlier state interventions to introduce kindergarten into public schools. In the 1960s and 1970s, many states, particularly in the southern and western parts of the country, for the first time began offering grants to school districts operating kindergarten programs. Districts were quick to respond. The average state experienced a 30 percentage point increase in its kindergarten enrollment rate within two years after an initiative, contributing to dramatic increases in kindergarten enrollment (see Figure 1). These interventions present an unusual opportunity to study the long-term effects of large state investments in universal preschool education.

My results indicate that state funding of universal kindergarten had no discernible impact on many of the long-term outcomes desired by policymakers, including grade retention, public assistance receipt, employment, and earnings. White children were 2.5 percent less likely to be high school dropouts and 22 percent less likely to be incarcerated or otherwise institutionalized as adults following state funding initiatives, but no other effects could be discerned. Also, I find no positive effects for African Americans, despite comparable increases in their enrollment in public kindergartens after implementation of the initiatives. These findings suggest that even large investments in universal early-childhood education programs do not necessarily yield clear benefits, especially for more disadvantaged students.

Kindergarten in the U.S.

State grants to school districts that operate kindergarten programs are a relatively recent development. Kindergartens began outside of the public school system, funded largely through philanthropic organizations or private tuition. Over the first half of the 20th century, kindergartens slowly became incorporated into urban schools, at the same time gaining partial funding through local taxes. As late as the mid-1960s, however, such programs continued to rely heavily on local resources, as only 26 states and the District of Columbia helped fund kindergarten costs. The next decade brought remarkable change: Between 1966 and 1975, 19 states began funding kindergarten for the first time. The majority of these states were in the South, but the West was also well represented. By the late 1970s, only two states—Mississippi and North Dakota—did not fund kindergarten programs (see Figure 2).

Initially, states channeled their funding to districts in one of two ways. Some states revised existing funding formulas to include financial support for kindergartens on a basis equivalent to support for all other grades in a state’s public school system. Other states appropriated separate monies for kindergarten, an approach that made kindergarten funding more vulnerable to budget cuts. Eventually, however, all states made kindergarten a part of the basic state school program.

The initiatives were introduced during a period of rising labor-force participation among women with young children, so kindergarten’s popularity may have been due to the fact that it provided families with subsidized child care. The stated purpose, however, was to improve children’s educational outcomes. In particular, it was claimed that kindergarten would provide the preparation children need to succeed in the elementary school years. Greater success in school would, in turn, reduce state spending not only on special education and “re-education” of children who failed, but also on public assistance and incarceration over the long term.

Whether state funding of kindergarten was capable of achieving these goals is open to question. Kindergartens have historically maintained a curriculum focused more on children’s social development and less on academic training. While a focus on socialization does not preclude long-term effects, kindergarten programs lacked features of some targeted interventions—such as parental involvement and health services—that may be critical to their success. State-funded kindergartens for five-year-olds may also have reduced enrollment in private kindergartens and in education programs funded through Head Start and Title I. My study seeks to shed light on these policy questions, which bear directly on current debates over early-childhood education.

Method and Data

To identify the long-term impacts of the introduction of universal kindergarten, I take advantage of the staggered introduction of state funding for kindergarten from the 1960s forward, combined with the fact that children generally attend kindergarten at age five. More specifically, I calculate the average difference in outcomes between individuals who were age five before the introduction of kindergarten funding and children born in the same state who turned five after the initiative was introduced. I further adjust these comparisons to take into account the fact that kindergarten enrollment was increasing gradually in many states prior to the adoption of state funding. The remaining differences should reflect the long-run effects of the typical state-funded kindergarten program.

I restrict my analysis to the 24 states that introduced state funding for universal kindergarten after 1960 because the data needed for the analysis are not available for earlier years. I also limit attention to the 1954 to 1978 birth cohorts because they span the period over which most of these funding initiatives were passed, and doing so provides me with data both before and after the introduction of these initiatives necessary to estimate the effects of kindergarten funding on long-term outcomes.

I combine data from several sources. I measure the kindergarten enrollment rate with the state kindergarten-to-first-grade enrollment ratio, calculated from the federal Common Core of Data and earlier published data. Data for the analysis of the initiatives’ long-term effects were drawn from Public Use Microdata Samples (PUMS) of the Decennial Census. In particular, I examine 1) whether a child was below grade for age while still of school age (a proxy for grade retention); 2) three indicators of adult educational attainment (high school dropout, high school degree only, and some college); 3) adult wage and salary earnings and indicators of employment and receipt of public assistance income; and 4) an indicator for residence in institutionalized group quarters, a widely used proxy for incarceration.

Limited Impact

I begin the empirical analysis by examining how the funding initiatives affected kindergarten enrollment. The results confirm that funding had a large, immediate impact on kindergarten participation. In the first year in which funding was available, the kindergarten enrollment rate in the typical state was about 15 percentage points higher than would have been the case in the absence of state funding. Two years out, it was 33 percentage points higher, and the lion’s share of gains in kindergarten enrollment from the funding initiative had been achieved. Anecdotal evidence suggests that the take-up of kindergarten was not completely immediate because of shortages of classrooms and teachers rather than because of a gradual increase in local demand. On net, the public school kindergarten enrollment rate of children turning five after an initiative was about 30 percentage points higher than it would otherwise have been.

I next investigate whether these developments were matched by changes in child well-being. Because grade retention and educational attainment were arguably the prime targets of policymakers, I first consider the effects of kindergarten funding on those indicators. Whites had more education as adults as a result of the initiatives, but the effect was quite small: only a 2.5 percent reduction in the dropout rate (see Figure 3). Because the dropout rate among whites prior to kindergarten funding was roughly 15 percent, this reduction amounts to less than half of 1 percentage point, which is a small effect even if we take into account that kindergarten enrollment rose 30 percentage points (not a full 100 percentage points) as a result of the initiatives. College attendance also increased among whites, but by an even smaller amount. The analogous estimates for African Americans suggest that affected children attained lower levels of education. While not statistically significant, the estimates are sufficiently precise to rule out the possibility that African Americans experienced even the small positive gains in educational attainment evident among whites. The apparent gains in educational attainment for whites occurred without significant reductions in grade repetition, either in absolute terms or relative to African Americans.
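The translation from relative to absolute terms in this paragraph is simple arithmetic, using the approximate figures given in the text:

```python
baseline_dropout = 0.15   # white dropout rate before kindergarten funding (from the text)
relative_effect = 0.025   # 2.5 percent relative reduction

# Absolute effect in percentage points
absolute_pp = baseline_dropout * relative_effect * 100
print(round(absolute_pp, 3))  # 0.375 -- "less than half of 1 percentage point"
```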

I then turn to an investigation of the impacts on earnings, employment, public assistance receipt, and the proxy for incarceration described earlier. I again find little evidence that kindergarten funding affected these outcomes. The most notable exception is that whites of kindergarten age after passage of a funding initiative were less likely to reside in prisons or institutionalized group quarters as adults. The effect is relatively large, at 22 percent. Once again, however, no such effects were observed for African Americans. Moreover, I find no evidence of an impact of state kindergarten funding on earnings for individuals of either race. The estimated effects on earnings are imprecise, however, and leave open the possibility that kindergarten attendance had effects on earnings comparable to any other year of education for African Americans and whites alike. In general, the earnings estimates should be viewed with caution, as they could be distorted by the fact that the sample includes some individuals who are young and could still be enrolled in school.

These results remain essentially unchanged when estimated using a series of alternative approaches, including adding controls for state demographic and labor market conditions. I also perform the analysis separately by gender, which reveals that the effect of kindergarten funding on institutionalization for whites is driven primarily by men, for whom the institutionalization rate is much higher. The magnitude of the effect for white men is similar to that observed for whites overall (a reduction of 23 percent). Among African Americans, there are no effects on institutionalization rates for men or women. The gender-specific results also reveal that kindergarten funding was associated with significantly lower earnings for African American women. To the extent that kindergarten funding displaced African American enrollment in more intensive early education, a possibility that I explore below, these findings would be consistent with recent findings that girls are more responsive to intensive preschool interventions.

Why Did African Americans Not Benefit?

My main results imply that there were some positive impacts of state subsidization of kindergarten, particularly on incarceration rates. What is potentially unexpected, however, is that the funding initiatives appear to have had positive effects only for whites. What might explain these findings? I explore three broad hypotheses for why African Americans might not have benefited as much as whites from the funding initiatives: 1) kindergarten funding disproportionately drew African Americans out of higher-quality education settings; 2) instead of raising additional revenue to fund local kindergarten programs fully, school districts offered lower-quality kindergarten programs to African Americans or moved funds from existing school programs from which African Americans may have disproportionately benefited; and 3) African Americans were more adversely affected by any subsequent “upgrading” of school curricula as more students entered elementary grades having attended kindergarten. The first of these hypotheses receives the most support in the available data.

Data from the Panel Study of Income Dynamics suggest that the introduction of state funding for kindergarten prompted a reduction in Head Start participation among African Americans. The existence of kindergarten funding among all states in a region (relative to none) was associated with a statistically significant 25-percentage-point reduction in the likelihood that an African American child attended Head Start at age five. Given an enrollment rate of 26 percent across the observed cohorts, this estimate implies that state funding for kindergartens essentially eliminated enrollment of African American five-year-olds in Head Start (see Figure 4). By comparison, enrollment of whites in Head Start at age five was much lower (2 percent), and the change in their enrollment after the average funding initiative was close to zero.

Together with historical accounts of the importance of Head Start in providing education for five-year-olds in the absence of state-funded kindergartens, these estimates strengthen support for the hypothesis that state funding for kindergartens decreased enrollment of African American five-year-olds in federally funded early education for the poor. It is difficult to gauge the extent to which the movement of African American five-year-olds from Head Start to kindergarten might have offset positive impacts of kindergarten attendance elsewhere in the African American population. However, a back-of-the-envelope calculation suggests that the reduction in Head Start attendance among African Americans may account for at least 16 percent of the 1-percentage-point increase in the African American-white gap in high school dropout rates after the initiatives were passed. Head Start has also been found to reduce criminal behavior among African-American males.
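The back-of-the-envelope claim above can be inverted to see what it implicitly assumes about Head Start: if displacing 25 percentage points of enrollment explains 16 percent of a 1-percentage-point gap increase, Head Start itself must reduce dropout by a certain implied amount. This inversion is our illustrative arithmetic, not the author's actual computation:

```python
enrollment_displaced = 0.25  # Head Start enrollment reduction among African American five-year-olds
gap_increase_pp = 1.0        # increase in the black-white dropout gap, percentage points
share_explained = 0.16       # "at least 16 percent" attributed to the displacement

# Implied effect of Head Start attendance on dropout, in percentage points
implied_effect_pp = share_explained * gap_increase_pp / enrollment_displaced
print(round(implied_effect_pp, 2))  # 0.64
```

That is, the decomposition is consistent with Head Start attendance lowering an individual's dropout probability by roughly 0.64 percentage points or more.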

I uncover no support for the hypothesis that school districts failing to supplement the state grants placed African American students in lower-quality programs, either in kindergarten or in later grades. I also detect no evidence that the establishment of kindergarten programs as a result of the funding initiatives prompted an increase in academic expectations of students in the early grades, which would have adversely affected children with low levels of achievement. Because the data available to test these alternative hypotheses are not ideal, however, these conclusions must be viewed with caution.

Back to the Future

Although there is great interest among policymakers in extending free early education to disadvantaged children, evidence to date on long-run effects of preschool has been limited to experimental evaluations of model preschools and nonexperimental studies of Head Start. This study has attempted to expand this literature by measuring the long-term effects of a historical episode of public investment in universal early education—the introduction of state funding for public school kindergarten in the 1960s and 1970s. I find evidence that state funding of universal kindergarten lowered high-school dropout and institutionalization rates among whites, but not among African Americans, and detect no impact of state funding for children of either race on grade retention, public assistance receipt, employment or earnings. Why the positive effects for whites occur for dropout and incarceration only is not entirely clear and should be grounds for future research.

These findings complement those of existing research on the long-term effects of targeted programs. First, they suggest that, in the absence of higher-quality alternatives, participation in a low-intensity preschool program may have some limited positive long-term effects. In other words, even a weak program may be better than no program at all, as can be seen in the results for whites. Second, when alternatives already exist for many disadvantaged children, universal programs may not yield additional benefits for that group.

Though there are clear limits to the generalizability of these findings, they do provide some tentative lessons for policymakers. On one hand, the higher rates of preschool participation among children today suggest that any positive long-term effects of extending universal public schooling to four-year-olds may be even smaller than those estimated here for kindergarten. On the other hand, the universal preschool programs being proposed today have a more academic orientation than kindergarten has had, and may therefore have larger impacts on long-term well-being despite significantly “crowding-out” enrollment in other programs. The truth will only be discovered in the years to come.

Elizabeth U. Cascio is assistant professor of economics at Dartmouth College.

Time for School? https://www.educationnext.org/time-for-school/ Wed, 23 Dec 2009 00:00:00 +0000 http://www.educationnext.org/time-for-school/ When the snow falls, test scores also drop

Students in the United States spend much less time in school than do students in most other industrialized nations, and the school year has been essentially unchanged for more than a century. This is not to say that there is no interest in extending the school year. While there has been little solid evidence that doing so will improve learning outcomes, the idea is often endorsed. U.S. Secretary of Education Arne Duncan has made clear his view that “our school day is too short, our week is too short, our year is too short.”

Researchers have recently begun to learn more about the effects of instructional time on learning from natural experiments around the country. This new body of evidence, to which we have separately contributed, suggests that extending time in school would in fact likely raise student achievement. Below we review past research on this issue and then describe the new evidence and the additional insights it provides into the wisdom of increasing instructional time for American students.

We also discuss the importance of explicitly recognizing the role of instructional time in accountability systems. Whether or not policymakers change the length of the school year for the average American student, differences in instructional time can and do affect school performance as measured by No Child Left Behind. Ignoring this fact results in less-informative accountability systems and lost opportunities for improving learning outcomes.

Emerging Evidence

More than a century ago, William T. Harris in his 1894 Report of the Commissioner [of the U.S. Bureau of Education] lamented,

The boy of today must attend school 11.1 years in order to receive as much instruction, quantitatively, as the boy of fifty years ago received in 8 years…. It is scarcely necessary to look further than this for the explanation for the greater amount of work accomplished…in the German and French than in the American schools.

The National Education Commission on Time and Learning would echo his complaint one hundred years later. But the research summary issued by that same commission in 1994 included not one study on the impact of additional instruction on learning. Researchers at that time simply had little direct evidence to offer.

The general problem researchers confront here is that length of the school year is a choice variable. Because longer school years require greater resources, comparing a district with a long school year to one with a shorter year historically often amounted to comparing a rich school district to a poor one, thereby introducing many confounding factors. A further problem in the American context is that there is little recent variation in the length of school year. Nationwide, districts generally adhere to (and seldom exceed) a school calendar of 180 instructional days. And while there was some variation in the first half of the 20th century, other policies and practices changed simultaneously, making it difficult to uncover the separate effect of changes in instructional time.

Among the first researchers to try to identify the impact of variation in instructional time were economists studying the effect of schooling on labor market outcomes such as earnings. Robert Margo in 1994 found evidence suggesting that historical differences in school-year length accounted for a large fraction of differences in earnings between black workers and white workers.

Using differences in the length of the school year across countries, researchers Jong-Wha Lee and Robert Barro reported in 2001 that more time in school improves math and science test scores. Oddly, though, their results also suggested that it lowers reading scores. In 2007, Ozkan Eren and Daniel Millimet examined the limited variation that does exist across American states and found weak evidence that longer school years improve math and reading test scores.

Work we conducted separately in 2007 and 2008 provides much stronger evidence of effects on test scores from year-to-year changes in the length of the school year due to bad weather. In a nutshell, we compared how specific Maryland and Colorado schools fared on state assessments in years when there were frequent cancellations due to snowfall to the performance of the very same schools in relatively mild winters. Because the severity of winter weather is inarguably outside the control of schools, this research design addresses the concern that schools with longer school years differ from those with shorter years (see research design sidebar).

Research Design

Our studies use variation from one year to the next in snowfall or in the number of instructional days cancelled due to bad weather to explain changes in each school’s test scores over time. We also take into account changing characteristics of schools and students, as well as trends in performance over time. The advantage of this approach is that weather is obviously outside the control of school districts and thereby provides a source of variation in instructional time that should be otherwise unrelated to school performance. Furthermore, Maryland and Colorado are ideal states in which to study weather-related cancellations. In addition to having large year-to-year fluctuations in snowfall, annual snowfall in both states typically varies widely across districts. Some districts are exposed to much greater variation in the severity of their winters than others, which allows us to use the remaining districts to control for common trends shared by all districts in the state. Further, because we have data from many years, we can compare students in years with many weather-related cancellations to students in the same school in previous or subsequent years with fewer cancellations. Although cancellations are eventually made up, tests in both states are administered in the spring, months before the makeup days held prior to summer break.

In Marcotte (2007) and Hansen (2008), we estimate that each additional inch of snow in a winter reduced the percentage of 3rd-, 5th-, and 8th-grade students who passed math assessments by between one-half and seven-tenths of a percentage point, or just under 0.0025 standard deviations. To put that seemingly small impact in context, Marcotte reports that in winters with average levels of snowfall (about 17 inches) the share of students testing proficient is about 1 to 2 percentage points lower than in winters with little to no snow. Hansen reports comparable impacts from additional days with more than four inches of snow on 8th-grade students’ performance on math tests in Colorado.
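The within-school logic behind these estimates can be sketched as a two-way fixed-effects regression of pass rates on closure days. Everything below is synthetic and illustrative (made-up schools, effects, and noise), not the authors' data or code:

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_years = 200, 8
EFFECT = -0.4  # assumed true effect: pass-rate change (pp) per day closed

closures = rng.poisson(3, (n_schools, n_years)).astype(float)  # weather closures
school_fe = rng.normal(70, 8, (n_schools, 1))                  # persistent school quality
year_fe = rng.normal(0, 2, (1, n_years))                       # statewide shocks and trends
passing = school_fe + year_fe + EFFECT * closures + rng.normal(0, 1, (n_schools, n_years))

def demean(a):
    """Remove school (row) and year (column) means: two-way fixed effects."""
    return a - a.mean(1, keepdims=True) - a.mean(0, keepdims=True) + a.mean()

x, y = demean(closures), demean(passing)
beta = (x * y).sum() / (x * x).sum()   # OLS slope on demeaned data
print(round(beta, 2))  # should be close to EFFECT
```

Because persistent school quality and statewide year shocks are differenced away, the slope is identified only by within-school, year-to-year swings in closures, mirroring the design described in the sidebar.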

Marcotte and Steven Hemelt (2008) collected data on school closures from all but one school district in Maryland to estimate the impact on achievement. The percentage of students passing math assessments fell by about one-third to one-half a percentage point for each day school was closed, with the effect largest for students in lower grades. Hansen (2008) found effects in Maryland that are nearly identical to those reported by Marcotte and Hemelt, and larger, though statistically insignificant, results in Colorado. Hansen also took advantage of a different source of variation in instructional time in Minnesota. Utilizing the fact that the Minnesota Department of Education moved the date for its assessments each year for six years, Hansen estimated that the percentage of 3rd- and 5th-grade students with proficient scores on the math assessment increased by one-third to one-half of a percentage point for each additional day of schooling.

While our studies use data from different states and years, and employ somewhat different statistical methods, they yield very similar results on the value of additional instructional days for student performance. We estimate that an additional 10 days of instruction results in an increase in student performance on state math assessments of just under 0.2 standard deviations. To put that in perspective, the percentage of students passing math assessments falls by about one-third to one-half a percentage point for each day school is closed.

Other researchers have examined impacts of instructional time on learning outcomes in other states, with similar results. For example, University of Virginia researcher Sarah Hastedt has shown that closures that eliminated 10 school days reduced math and reading performance on the Virginia Standards of Learning exams by 0.2 standard deviations, the same magnitude we estimate for the neighboring state of Maryland. Economist David Sims of Brigham Young University in 2008 took advantage of a 2001 law change in Wisconsin that required all school districts in that state to start after September 1. Because some districts were affected while others were not, he was also able to provide unusually convincing evidence on the effect of changes in the number of instructional days. He found additional instruction days to be associated with increased scores in math for 4th-grade students, though not in reading.

Collectively, this emerging body of research suggests that expanding instructional time is as effective as other commonly discussed educational interventions intended to boost learning. Figure 1 compares the magnitude of the effect of instructional days on standardized math scores to estimates drawn from other high-quality studies of the impact of changing class size, teacher quality, and retaining students in grade. The effect of additional instructional days is quite similar to that of increasing teacher quality and reducing class size. The impact of grade retention is comparable, too, though that intervention is pertinent only for low-achieving students.

Although the evidence is mounting that expanding instructional time will result in real learning gains, evidence on the costs of extending the school year is much scarcer and involves a good deal of conjecture. Perhaps the best evidence comes from a recent study in Minnesota, which estimated that increasing the number of instructional days from 175 to 200 would cost close to $1,000 per student, in a state where the median per-pupil expenditure is about $9,000. The total annual cost was estimated at $750 million, an expense that proved politically and financially infeasible when the proposal was recently considered in that state. Comparing costs of expanding instructional days with the costs of other policy interventions will be an analytic and policy exercise of real importance if the call for expanded instructional time is to result in real change.
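The Minnesota figures can be checked with quick arithmetic. The proportional-scaling benchmark below is our own illustrative assumption (costs need not scale linearly with days); the other numbers are as reported in the study:

```python
current_days, proposed_days = 175, 200
per_pupil_spending = 9_000        # median per-pupil expenditure (from the study)
est_cost_per_student = 1_000      # study's estimated cost of the extra days
total_cost = 750_000_000          # study's estimated total annual cost

# Naive benchmark: scale current spending proportionally by the added days
proportional = per_pupil_spending * (proposed_days - current_days) / current_days
print(round(proportional))        # 1286 -- the study's $1,000 figure sits below this

# Enrollment implied by the study's totals
print(total_cost // est_cost_per_student)  # 750000 students
```

That the study's per-student estimate falls below the proportional benchmark is plausible: fixed costs such as buildings do not rise with each added day.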

Complicating this analytic task are differences in costs that exist across schools and states. Utilities, transportation, and teacher summer-labor markets vary widely across geographic areas, and all affect the cost of extending the school year. So, while the benefits of extending the school year may exceed the costs in some states or school districts, they may not in others. A further complication is the possibility of diminishing returns to additional instructional time. Our research has studied the effect of additional instructional days prior to testing, typically after approximately 120 school days. The effect of extending instructional time into the summer is unknown. Also, our research has focused on the variation in instructional days prior to exams, or accountable days. The effect of adding days after exams could be quite different.

Costs of extending school years are as much political as economic. Teachers have come to expect time off in the summer and have been among the most vocal opponents of extending school years in several locations. Additional compensation could likely overcome this obstacle, but how much is an unresolved and difficult question.

Teachers are not the only ones who have grown accustomed to a summer lasting from June through August. Students and families have camps, vacations, and work schedules set up around summer vacation. “Save Our Summers” movements have for years disputed the value of additional instructional days and extolled the benefits of summer vacation, and these movements have grown as states have considered extending the school year and individual school districts have moved up their start dates. Longer school years might also reduce tourism and its accompanying tax revenue. These additional costs likely vary by state and district, but are clearly part of the analytic and political calculus.

Time and Accountability

As education policymakers consider lengthening the school year and face trade-offs and uncertainties, it is important to recognize that expanding instructional time offers both opportunities and hazards for another reform that is well established, the accountability movement. Educators, policymakers, parents, and economists are sure to agree that if students in one school learn content in half the time it takes comparable students at another school to learn the same content, the first school is doing a better job. How students would rank these schools is equally obvious. Yet state and federal accountability systems do not account for the time students actually spent in school when measuring gains, and so far have no way of determining how efficiently schools educate their students.

One implication of this oversight is that accountability systems are ignoring information relevant to understanding schools’ performance. Year-to-year improvements in the share of students performing well on state assessments can be accomplished by changes in school practices, or by increases in students’ exposure to school. Depending on the financial or political costs of extending school years, those with a stake in education might think differently about gains attributable to the quality of instruction provided and gains attributable to the quantity.

To see how the contributions of these inputs might be separated, consider data from Minnesota. Between 2002 and 2005, 3rd graders in that state exhibited substantial improvements in performance on math assessments, a fact clearly reflected by Minnesota’s accountability system. But during that period, there was substantial year-to-year variation in the number of instructional days students had prior to the test date. In Figure 2, we plot both the reported test scores for Minnesota 3rd graders (the solid line) and the number of days of instruction those students received (the bars). Useful, and readily calculated, is the time series of test scores, adjusting for differences in the number of instructional days (the dotted line).

[Figure 2: Reported Minnesota 3rd-grade math scores (solid line), scores adjusted for instructional days (dotted line), and instructional days before the test date (bars), 2001–02 to 2004–05]

Comparing the reported and adjusted scores is useful for at least two reasons. First, it illustrates the role of time as a component of test gains. Overall, scale scores increased by 0.4 standard deviations from 2001–02 to 2004–05. Of this increase, a large portion was attributable to expansion in instructional time prior to the test date. Adjusting for the effect of instructional days, we estimate that scores increased by roughly 0.25 standard deviations, nearly 40 percent less than the reported gains.

Second, the comparatively steady gain in adjusted scores over the period provides evidence of improvements in instructional quality, independent of changes in the amount of time students were in class. The fast year-to-year increases in the first and last periods result in large part from increases in the amount of time in school, while the negligible change in overall scores between 2003 and 2004 does not pick up real gains made despite a shortened school year. Adjusted scores pick up increases in learning gains attributable to how schools used instructional time, such as through changing personnel, curricula, or leadership. The point here is that time-adjusted scores provide information that is just as important as the overall reported scores for understanding school improvements. A robust accountability system would recognize that more instructional time can be used to meet goals, but that more time is neither a perfect substitute for, nor the same thing as, better use of time.
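The day-adjustment behind the dotted line can be sketched in a few lines of code. All numbers below are hypothetical stand-ins, not the study's actual data or estimates; in the real exercise, the per-day effect comes from the authors' regression estimates.

```python
# A minimal sketch (hypothetical numbers only) of adjusting reported score
# gains for year-to-year differences in instructional days before the test.

# Assumed marginal effect of one instructional day, in standard deviations.
# This is an illustrative placeholder, not the study's estimate.
EFFECT_PER_DAY = 0.005

# Hypothetical data: year -> (reported mean score in SD units relative to
# 2001-02, instructional days before the test date)
years = {
    "2001-02": (0.00, 110),
    "2002-03": (0.15, 125),
    "2003-04": (0.22, 118),
    "2004-05": (0.40, 130),
}

BASELINE_DAYS = years["2001-02"][1]

def adjusted_score(score, days):
    """Subtract the portion of a gain attributable to extra instructional days."""
    return score - EFFECT_PER_DAY * (days - BASELINE_DAYS)

for year, (score, days) in years.items():
    print(f"{year}: reported {score:+.2f}, adjusted {adjusted_score(score, days):+.2f}")
```

The subtraction discounts gains that merely coincide with extra instructional days, leaving the portion of the gain attributable to how the available time was used.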

The Hazards of Ignoring Time

Failing to account for the role of time in student learning not only means missed opportunity, it also creates potential problems. First, it can allow districts to game accountability systems by rearranging school calendars so that students have more time in school prior to the exam, even as the overall length of the school year remains constant. Beginning in the 1990s, districts in a number of states began moving start dates earlier, with many starting just after the first of August. The question arose whether these changes might be linked to pressures on districts to improve performance on state assessments. David Sims showed that Wisconsin schools with low test scores in one year acted strategically by starting the next school year a bit earlier to raise scores. Evidence of gaming soon emerged in other states as well. Wisconsin passed its 2001 law requiring schools to begin after September 1 to prevent such gaming; similar laws were recently passed in Texas and Florida.

The motives driving earlier start dates could spill over into other instructional policies. Minnesota moved its testing regimen from February to April in the wake of accountability standards, while Colorado legislators have proposed moving their testing window from March into April, with advocates suggesting that the increased time for instruction would make meeting performance requirements under No Child Left Behind more feasible for struggling schools. While administering the test later in the year has potential benefits in measured performance, grading the tests over a shorter time frame costs more, estimated at some $3.9 million annually in Colorado. Schools thus sacrifice educational inputs (such as smaller classes or higher teacher salaries) to pay for the later test date.

A second hazard involves fairness to schools at risk of being sanctioned for poor performance: these schools can face longer odds if weather or other schedule disruptions limit school days. The impact of instructional time on learning means that one factor determining the ability of schools to meet performance goals is not under the control of administrators and teachers. We illustrate the effects of time on making adequate yearly progress (AYP) as defined by No Child Left Behind by comparing the performance of Maryland schools the law identified as underperforming to estimates of what the performance would have been had the schools been given a few more days for instruction.

We begin with data from all elementary schools in Maryland that did not make AYP in math and reading during the 2002–03 to 2004–05 school years. We adjust actual performance by the number of days lost in a given year multiplied by the marginal effect of an additional day on test performance as reported in Marcotte and Hemelt’s study of Maryland schools. This allows us to estimate what the proficiency rates in each subject would have been had those schools been open for all scheduled instructional days prior to the assessment. We then compare the predicted proficiency rate to the AYP threshold.

We summarize the results of this exercise in Figure 3. The light bars represent the number of schools failing to make AYP in math and reading in various years. The dark bars are the number of those schools that we predict would have failed to make AYP if the schools had been able to meet on all scheduled days. We make these estimates assuming that low-performing schools would have made average gains with each additional day of instruction.

The average number of days lost to unscheduled school closings varied substantially over the period, from more than 10 to fewer than four and a half. Many schools that did not make AYP likely would have had they not lost so many school days. For example, we estimate that 35 of the 56 elementary schools that did not make AYP in math in 2002–03 would have met the AYP criterion if they had been open during all scheduled school days. Even if these schools were only half as productive as the typical school, 24 of the 56 flagged schools would likely have made AYP if they had been open for all scheduled days.
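The counterfactual exercise behind Figure 3 can be sketched as follows. The marginal effect, proficiency rates, and AYP threshold below are hypothetical placeholders; the study itself uses Marcotte and Hemelt's estimated per-day effect for Maryland schools.

```python
# Sketch of the AYP counterfactual described above, with hypothetical numbers.

MARGINAL_EFFECT = 0.5  # assumed proficiency-rate gain (percentage points) per day

def would_have_made_ayp(actual_rate, days_lost, threshold, productivity=1.0):
    """Predict whether a school would have met the AYP threshold had it been
    open on all scheduled days. `productivity` scales the per-day effect
    (0.5 models a school half as productive as the typical one)."""
    predicted_rate = actual_rate + productivity * MARGINAL_EFFECT * days_lost
    return predicted_rate >= threshold

# A hypothetical school: 55% proficient, 8 closure days, 58% AYP threshold.
print(would_have_made_ayp(55.0, 8, 58.0))       # True: 59% predicted
print(would_have_made_ayp(55.0, 8, 58.0, 0.5))  # False: 57% predicted
```

The `productivity` parameter mirrors the robustness check in the text: even when low-performing schools are assumed to gain only half as much per day as the typical school, many flagged schools would still clear the bar.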

There is, however, a way to reduce risks like these for schools and to limit incentives for administrators to move start or test dates at the same time: that is to recognize and report time as an input in education. A simple and transparent way to do this is for state report cards, which inform parents about school outcomes and summarize the information on AYP status, to include information about the number of instructional days at test date as well as the total number of instructional days for the year. This information is readily available and already monitored by schools, districts, and states. Local and state education authorities could use it when assessing performance, for example, in hearing an appeal from a school that failed to meet its AYP goals. Further, this information could be used to estimate test scores adjusted for instructional days, to be used alongside unadjusted changes in performance. Distinguishing between gains due to expanded instruction time and better use of that time can enrich accountability systems and provide more and better information to analysts and the public alike.

Looking Ahead

There can be no doubt that expanding the amount of time American students spend in school is an idea popular with many education policymakers and has long been so. What makes the present different is that we now have solid evidence that anticipated improvements in learning will materialize.

Practical obstacles to the extension of the school year include substantial expense and stakeholder attachment to the current school year and summer schedule. The benefits of additional instructional days could diminish as school years are lengthened. Further, it is unknown how teachers would use additional instructional days if they are provided after annual testing is already finished. Simply extending the year well after assessments are given might mean that students and teachers spend more days filling (or killing) time before the end of the year. This would make improvements in learning unlikely, and presumably make students unhappy for no good reason.

Though the issue has seen little movement in the past and faces real opposition going forward, the policy climate appears likely to be favorable once the fiscal challenges now facing public school systems recede. It is our hope that policymakers and administrators who try to take advantage of this window of opportunity neither undermine reforms that have already succeeded in improving learning outcomes nor implement longer school years in ways that fail to improve them. Advocates for extended school years have so far said virtually nothing about whether or how accountability systems should accommodate longer school years.

Across the country, a small number of schools and districts are modifying or extending the academic year. The Massachusetts 2020 initiative has provided resources for several dozen schools to increase the number of instructional days they offer from 180 to about 200. Other examples include low-performing schools that have lengthened their school day in an effort to improve, and the longer school days, weeks, and years in some charter schools. However, such initiatives remain rare, with no systemic change in the instructional time provided to American students. Our work confirms that increasing instructional time could have large positive effects on learning gains. Schools and districts should be encouraged, in both word and policy, to view the school calendar as a tool in the effort to improve learning outcomes.

Dave E. Marcotte is professor of public policy at the University of Maryland, Baltimore County. Benjamin Hansen is a research associate at IMPAQ International, LLC.

For more on this topic, please read “Do Schools Begin Too Early? The effect of start times on student achievement.”

The post Time for School? appeared first on Education Next.

Lost Opportunities https://www.educationnext.org/lost-opportunities/ Thu, 10 Dec 2009 00:00:00 +0000 http://www.educationnext.org/lost-opportunities/ Lawmakers threaten D.C. scholarships despite evidence of benefits


An unabridged version of this article is available here.

An interview with Patrick Wolf about his evaluation of the D.C. Opportunity Scholarship Program and about its likely future is available here.


School choice supporters, including hundreds of private school students in crisp uniforms, filled Washington, D.C.’s Freedom Plaza last May to protest a congressional decision to eliminate the city’s federally funded school voucher program after the next school year. That afternoon, President Obama announced a compromise proposal to grandfather the more than 1,700 students currently in the District of Columbia Opportunity Scholarship Program, funding their vouchers through high school graduation, but denying entry to additional children. Both program supporters and opponents cite evidence from an ongoing congressionally mandated Institute of Education Sciences (IES) evaluation of the program, for which I am principal investigator, to buttress their positions, rendering the evaluation a Rorschach test for one’s ideological position on this fiercely debated issue.

School vouchers provide funds to parents to enable them to enroll their children in private schools and, as a result, are one of the most controversial education reforms in the United States. Among the many points of contention is whether voucher programs in fact improve student achievement. Most evaluations of such programs have found at least some positive achievement effects, but not always for all types of participants and not always in both reading and math. This pattern of results has so far failed to generate a scholarly consensus regarding the beneficial effects of school vouchers on student achievement. The policy and academic communities seek more definitive guidance.

The IES released the third-year impact evaluation of the Opportunity Scholarship Program (OSP) in April 2009. The results showed that students who participated in the program performed at significantly higher levels in reading than the students in an experimental control group. Here are the study findings and my own interpretation of what they mean.


Opportunity Scholarships
Currently, 13 directly funded voucher programs operate in four U.S. cities and six states, serving approximately 65,000 students. Another seven programs indirectly fund private K—12 scholarship organizations through government tax credits to individuals or corporations. About 100,000 students receive school vouchers funded through tax credits. All of the directly funded voucher programs are targeted to students with some educational disadvantage, such as low family income, disability, or status as a foster child.

Nineteen of the 20 school voucher programs in the U.S. are funded by state and local governments. The OSP is the only federal voucher initiative. Established in 2004 as part of compromise legislation that also included new spending on charter and traditional public schools in the District of Columbia, the OSP is a means-tested program. Initial eligibility is limited to K—12 students in D.C. with family incomes at or below 185 percent of the poverty line. Congress has appropriated $14 million annually to the program, enough to support about 1,700 students at the maximum voucher amount of $7,500. The voucher covers most or all of the costs of tuition, transportation, and educational fees at any of the 66 D.C. private schools that have participated in the program. By the spring of 2008, a total of 5,331 eligible students had applied for the limited number of Opportunity Scholarships. Recipients are selected by lottery, with priority given to students applying to the program from public schools deemed in need of improvement (SINI) under No Child Left Behind. Scholars and policymakers have since questioned the extent to which SINI designations accurately signal school quality because they are based on levels of achievement instead of the more informative measure of achievement gains over time.

The third-year impact evaluation tracked the experiences of two cohorts of students. All of the students were attending public schools or were rising kindergartners at the time of application to the program. Cohort 1 consisted of 492 students entering grades 6—12 in 2004. Cohort 2 consisted of 1,816 students entering grades K—12 in 2005. The 2,308 students in the study make it the largest school voucher evaluation in the U.S. to employ the “gold standard” method of random assignment.

Voucher Effects
Researchers over the past decade have focused on evaluating voucher programs using experimental research designs called randomized control trials (RCTs). Such experimental designs are widely used to evaluate the efficacy of medical drugs prior to making such treatments available to the public. With an RCT design, a group of students who all qualify for a voucher program and whose parents are equally motivated to exercise private school choice participate in a lottery. The students who win the lottery become the “treatment” group. The students who lose the lottery become the “control” group. Since only a voucher offer and mere chance distinguish the treatment students from their control group counterparts, any significant difference in student outcomes for the treatment students can be attributed to the program. Although not all students offered a voucher will use it to enroll in a private school, the data from an RCT can also be used to generate a separate estimate of the effect of voucher use (see sidebar).
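The two estimates an RCT like this yields can be sketched in a few lines: the intent-to-treat (ITT) effect of being offered a voucher, and the effect of actually using one, obtained by the Bloom adjustment (ITT divided by the usage rate). The scores and usage rate below are hypothetical, not the evaluation's figures.

```python
# A minimal sketch of the two effects an RCT voucher evaluation can report:
# the intent-to-treat (ITT) effect of the voucher *offer*, and the effect of
# voucher *use* via the Bloom adjustment. All numbers are hypothetical.

def itt(treatment_mean, control_mean):
    """Effect of being offered a voucher: difference in group mean scores."""
    return treatment_mean - control_mean

def effect_of_use(itt_estimate, usage_rate):
    """Effect on students who actually used a voucher, assuming the offer
    affects outcomes only through private-school enrollment."""
    return itt_estimate / usage_rate

offer_effect = itt(620.0, 616.0)              # hypothetical scale scores
use_effect = effect_of_use(offer_effect, 0.75)  # 75% of winners use the voucher
print(round(offer_effect, 2), round(use_effect, 2))
```

Because the usage rate is at most one, the estimated effect of use is always at least as large in magnitude as the effect of the offer, which is why evaluations report the two separately.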

Using an RCT research design, the ongoing IES evaluation found no impacts on student math performance but a statistically significant positive impact of the scholarship program on student reading performance, as measured by the Stanford Achievement Test (SAT 9). The estimated impact of using a scholarship to attend a private school for any length of time during the three-year evaluation period was a gain of 5.3 scale points in reading. That estimate provides the impact on all those who ever attended a private school, whether for one month, three years, or any length of time in between (see Figure 1). Consequently, the estimate should be interpreted as a lower-bound estimate of the three-year impact of attending a private school, because many students who used a scholarship during the three-year period did not remain in private school throughout the entire period. The data indicate that members of the treatment group who were attending private schools in the third year of the evaluation gained an average of 7.1 scale score points in reading from the program.


What do these gains mean for students? They mean that the students in the control group would need to remain in school an extra 3.7 months on average to catch up to the level of reading achievement attained by those who used the scholarship opportunity to attend a private school for any period of time. The catch-up time would have been around 5 months for those in the control group as compared to those who were attending a private school in the third year of the evaluation.

In my opinion, the program’s effects show reading gains accumulating for students over time. Especially when one considers that students who used their scholarship in year 1 needed to adjust to a new and different school environment, the reading impacts of using a scholarship (1.4 scale score points, not significant, in year 1; 4.0 scale score points, not significant, in year 2; and 5.3 scale score points, significant, in year 3) suggest that students steadily gain in reading performance relative to their peers in the control group the longer they make use of the scholarship. No trend in program impacts is evident in math.

What explains the fact that positive impacts have been observed as a result of the OSP in reading but not in math? Paul Peterson and Elena Llaudet of Harvard University, in a nonexperimental evaluation of the effects of school sector on student achievement, suggest that private schools may boost reading scores more than math scores for a number of reasons, including a greater content emphasis on reading, the use of phonics instead of whole-language instruction, and the greater availability of well-trained education content specialists in reading than in math. Any or all of these explanations for a voucher advantage in reading but not in math are plausible and could be behind the pattern of results observed for the D.C. Opportunity Scholarships. The experimental design of the D.C. evaluation, while a methodological strength in many ways, makes it difficult to connect the context of students’ educational experiences with specific outcomes in any reliable way. As a result, one can only speculate as to why voucher gains are clear in reading but not observed in math.


Student Characteristics
The OSP serves a highly disadvantaged group of D.C. students. Descriptive information from the first two annual reports indicates that more than 90 percent of students are African American and 9 percent are Hispanic. Their family incomes averaged less than $20,000 in the year in which they applied for the scholarship.

Overall, participating students were performing well below national norms in reading and math when they applied to the program. For example, the Cohort 1 students had initial reading scores on the SAT-9 that averaged below the 24th National Percentile Rank, meaning that more than three-quarters of students in their respective grades nationally were performing higher in reading. In my view, these descriptive data show how means tests and other provisions to target school voucher programs to disadvantaged students serve to minimize the threat of cream-skimming. The OSP reached a population of highly disadvantaged students because it was designed by policymakers to do so.

Did Only Some Students Benefit?
Several commentators have sought to minimize the positive findings of the OSP evaluation by suggesting that only certain subgroups of participants benefited from the program. Martin Carnoy states that “the treated students in Cohort 1 were concentrated in middle schools and the effect on their reading score was significantly higher than for treated students in Cohort 2.” Henry Levin likewise asserts that “the evaluators found that receiving a voucher resulted in no advantage in math or reading test scores for either [low achievers or students from SINI schools].”

The actual results of the evaluation provide no scientific basis for claims that some subgroups of students benefited more in reading from the voucher program than other subgroups. The impact of the program on the reading achievement of Cohort 1 students did not differ by a statistically significant amount from the impact of the program on the reading achievement of Cohort 2 students, Carnoy’s claim notwithstanding. Nor did students with low initial levels of achievement and applicants from SINI schools experience significantly different reading gains from the program than high achievers and non-SINI applicants. The mere fact that statistically significant impacts were observed for a particular subgroup does not mean that impacts for that group are significantly different from those not in the subgroup. For example, Group A and Group B may have experienced roughly similar impacts, but the impact for Group A might have been just large enough for it to be significantly different from zero (or no impact at all), while Group B’s quite similar scores fell just below that threshold.
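The distinction drawn here, between an impact that is significant in one subgroup but not another and impacts that are significantly different from each other, can be made concrete with a simple z-test on two hypothetical subgroup estimates.

```python
# With hypothetical subgroup estimates of similar size, one can clear the
# 5 percent significance bar (|z| > 1.96) while the other just misses it,
# even though the *difference* between the two is nowhere near significant.
import math

def z_for_difference(est_a, se_a, est_b, se_b):
    """z-statistic for whether two independent impact estimates differ."""
    return (est_a - est_b) / math.sqrt(se_a**2 + se_b**2)

# Group A: impact 5.0, SE 2.4 -> z = 5.0/2.4 = 2.08, individually significant
# Group B: impact 4.4, SE 2.4 -> z = 4.4/2.4 = 1.83, individually not
print(round(z_for_difference(5.0, 2.4, 4.4, 2.4), 2))  # 0.18, far below 1.96
```

In this illustration the two subgroups' impacts are statistically indistinguishable, exactly the situation the evaluation reports for Cohort 1 versus Cohort 2 and for SINI versus non-SINI applicants.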

From a scientific standpoint, three conclusions are valid about the achievement results in reading from the year 3 impact evaluation of the OSP:

  • The program improved the reading achievement of the treatment group students overall.
  • Overall reading gains from the program were not significantly different across the various subgroups examined.
  • Three distinct subgroups of students—those who were not from SINI schools, students scheduled to enter grades K-8 in the fall after application to the program, and students in the higher two-thirds of the performance distribution (whose average reading test scores at baseline were at the 37th percentile nationally)—experienced statistically significant reading impacts from the program when their performance was examined separately. Female students and students in Cohort 1 saw reading gains that were statistically significant with reservations due to the possibility of obtaining false positive results when making comparisons across numerous subgroups.
Why examine and report achievement impacts at the subgroup level, if the evidence indicates only an overall reading gain for the entire sample? The reasons are that Congress mandated an analysis of subgroup impacts, at least for SINI and non-SINI students, and that analyses at the subgroup level might have yielded more conclusive information about disproportionate impacts for certain types of students.

Expanding Choice
The OSP facilitates the enrollment of low-income D.C. students in private schools of their parents’ choosing. It does not guarantee enrollment in a private school, but the $7,500 voucher should make such enrollments relatively common among the students who won the scholarship lottery. The eligible students who lost the scholarship lottery and were assigned to the control group still might attend a private school, but they would have to do so by drawing on resources outside of the OSP. At the same time, students in both groups have access to a large number of public charter schools.

The implication is that, for this evaluation of the OSP, winning the lottery does not necessarily mean private schooling, and losing the lottery does not necessarily mean education in a traditional public school. Members of both groups attended all three types of schools—private, public charter, and traditional public—in year 3 of the voucher experiment, although the proportions that attended each type differed markedly based on whether or not they won the scholarship lottery (see Figure 2). In total, about 81 percent of parents placed their child in a private or public school of choice three years after winning the scholarship lottery, as did 46 percent of those who lost the lottery. The desire for an alternative to a neighborhood public school was strong for the families who applied to the OSP in 2004 and 2005.

These enrollment patterns highlight the fact that the effects of voucher use reported above do not amount to a comparison between “school choice” and “no school choice.” Rather, voucher users are exercising private school choice, while control group members are exercising a small amount of private school choice and a substantial amount of public school choice. The positive impacts on reading achievement observed for voucher users therefore reflect the incremental effect of adding private school choice through the OSP to the existing schooling options for low-income D.C. families.

Parent Satisfaction
Another key measure of school reform initiatives is the perception among parents, who see firsthand the effects of changes in their child’s educational environment. Whenever school choice researchers have asked parents about their satisfaction with schools, those who have been given the chance to select their child’s school have reported much higher levels of satisfaction. The OSP study findings fit this pattern. The proportion of parents who assigned a high grade of A or B to their child’s school was 11 percentage points higher if they were offered a voucher, 12 percentage points higher if their child actually used a scholarship, and 21 points higher if their child was attending a private school in year 3, regardless of whether they were in the treatment group. Parents whose children used an Opportunity Scholarship also expressed greater confidence in their children’s safety in school than parents in the control group.

Additional evidence of parental satisfaction with the OSP comes from the series of focus groups conducted independently of the congressionally mandated evaluation. One parent emphasized the expanded freedom inherent in school choice:

“[The OSP] gives me the choice to, freedom to attend other schools than D.C. public schools….I just didn’t feel that I wanted to put him in D.C. public school and I had the opportunity to take one of the scholarships, so, therefore, I can afford it and I’m glad that I did do that.” (Cohort 1 Elementary School Parent, Spring 2008)

Another parent with two children in the OSP may have hinted at a reason achievement impacts were observed specifically in reading:

“They really excel at this program, `cause I know for a fact they would never have received this kind of education at a public school….I listen to them when they talk, and what they are saying, and they articulate better than I do, and I know it’s because of the school, and I like that about them, and I’m proud of them.” (Cohort 1 Elementary School Parent, Spring 2008)

These parents of OSP students clearly see their families as having benefited from this program.

Previous Voucher Research
The IES evaluation of the D.C. OSP adds to a growing body of research on means-tested school voucher programs in urban districts across the nation. Experimental evaluations of the achievement impacts of publicly funded voucher and privately funded K—12 scholarship programs have been conducted in Milwaukee, New York City, the District of Columbia, Charlotte, North Carolina, and Dayton, Ohio. Different research teams analyzed the data from New York City (three different teams), Milwaukee (two teams), and Charlotte (two teams). The four studies of Milwaukee’s and Charlotte’s programs reported statistically significant achievement gains overall for the members of the treatment group. The individual studies of the privately funded K—12 scholarship programs in the District of Columbia and Dayton reported overall achievement gains only for the large subgroup of African American students in the program. The three different evaluators of the New York City privately funded scholarship program were split in their assessment of achievement impacts, as two research teams reported no overall test-score effects, but did report achievement gains for African Americans; the third team claimed there were no statistically significant test-score impacts overall or for any subgroup of participants.

The specific patterns of achievement impacts vary across these studies, with some gains emerging quickly, but others, like those in the OSP evaluation, taking at least three years to reach a standard level of statistical significance. Earlier experimental evaluations of voucher programs were somewhat more likely to report achievement gains from the programs in math than in reading—the opposite of what was observed for the OSP. Despite these differences, the bulk of the available, high-quality evidence on school voucher programs suggests that they do yield positive achievement effects for participating students.

Conclusions
School voucher initiatives such as the District of Columbia Opportunity Scholarship Program will remain politically controversial in spite of rigorous evaluations such as this one, showing that parents and students benefited in some ways from the program. Critics will continue to point to the fact that no impacts of the program have been observed in math, or that applicants from SINI schools, who were a service priority, have not demonstrated statistically significant achievement gains at the subgroup level, as reasons to characterize these findings as disappointing. Certainly the results would have been even more encouraging if the high-priority SINI students had shown significant reading gains as a distinct subgroup. Still, in my opinion, the bottom line is that the OSP lottery paid off for those students who won it. On average, participating low-income students are performing better in reading because the federal government decided to launch an experimental school choice program in our nation’s capital.

The achievement results from the D.C. voucher evaluation are also striking when compared to the results from other experimental evaluations of education policies. The National Center for Education Evaluation and Regional Assistance (NCEE) at the IES has sponsored and overseen 11 studies that are RCTs, including the OSP evaluation. Only 3 of the 11 education interventions tested, when subjected to such a rigorous evaluation, have demonstrated statistically significant achievement impacts overall in either reading or math. The reading impact of the D.C. voucher program is the largest achievement impact yet reported in an RCT evaluation overseen by the NCEE. A second program increased reading outcomes by an amount about 40 percent smaller than the reading gain from the DC OSP. The third intervention was reported to have boosted math achievement by less than half the amount of the reading gain from the D.C. voucher program. Of the remaining eight NCEE-sponsored RCTs, six found no statistically significant achievement impacts overall and the other two showed a mix of no impacts and actual achievement losses from their programs. Many of these studies are in their early stages and might report more impressive achievement results in the future. Still, the D.C. voucher program has proven to be the most effective education policy evaluated by the federal government’s official education research arm so far.

The experimental evaluation of the District of Columbia Opportunity Scholarship Program is continuing into its fourth and final year of studying the impacts on students and parents. The final evidence collected from the participants may confirm the accumulation of achievement gains in reading and higher levels of parental satisfaction from the program that were evident after three years, or show that those gains have faded. Uncertainty also surrounds the program itself, as the students who gathered on Freedom Plaza in May currently are only guaranteed one final year in their chosen private schools. What will policymakers see as they continue to consider the results of this evaluation? The educational futures of a group of low-income D.C. schoolchildren hinge on the answer.

Patrick J. Wolf is professor of education reform at the University of Arkansas and principal investigator of the D.C. Opportunity Scholarship Program Impact Evaluation. The opinions expressed in this article are his own.

An unabridged version of this article is available here.

Methodology Notes

If one’s purpose is to evaluate the effects of a specific public policy, such as the District of Columbia Opportunity Scholarship Program (OSP), then the comparison of the average outcomes of the treatment and control groups, regardless of what proportion attended which types of school, is most appropriate. A school voucher program cannot force scholarship recipients to use a voucher, nor can it prevent control-group students from attending private schools at their own expense. A voucher program can only offer students scholarships that they subsequently may or may not use. Nevertheless, the mere offer of a scholarship, in and of itself, clearly has no impact on the educational outcomes of students. A scholarship could only change the future of a student if it were actually used.

Fortunately, statistical techniques are available that produce reliable estimates of the average effect of using a voucher compared to not being offered one and the average effect of attending private school in year 3 of the study with or without a voucher compared to not attending private school. All three effect estimates—treatment vs. control, effect of voucher use, and impact of private schooling—are provided in the longer version of this article (see “Summary of the OSP Evaluation” at www.educationnext.org), so that individual readers can view those outcomes that are most relevant to their considerations.

I have presented mainly the impacts of scholarship use in this essay. Those impacts are computed by taking the average difference between the outcomes of the entire treatment and control groups—the pure experimental impact—and adjusting for the fact that some treatment students never used an Opportunity Scholarship. Since nonusers could not have been affected by the voucher, the impact of scholarship use can be computed easily by dividing the pure experimental impact by the proportion of treatment students who used their scholarships, effectively rescaling the impact across scholarship users instead of all treatment students including nonusers. I focus here on scholarship usage because that specific measure of program impact is easily understood, is relevant to policymakers, and preserves the control group as the natural representation of what would have happened to the treatment group absent the program, including the fact that some of them would have attended private school on their own.
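The rescaling described above can be sketched in a few lines of code. This is an illustrative sketch only; the input figures below are hypothetical, not results from the OSP evaluation.

```python
# Sketch of the adjustment described above: the impact of scholarship
# *use* is the pure experimental (intent-to-treat) impact divided by
# the proportion of treatment students who actually used their
# scholarships. All numbers here are hypothetical.

def impact_of_use(itt_impact: float, usage_rate: float) -> float:
    """Rescale an intent-to-treat impact across scholarship users only."""
    if not 0 < usage_rate <= 1:
        raise ValueError("usage rate must be in (0, 1]")
    return itt_impact / usage_rate

# e.g., a hypothetical 3.1-point reading gain for the whole treatment
# group, with 78 percent of that group using their scholarships,
# implies a larger impact per actual scholarship user:
print(impact_of_use(3.1, 0.78))  # ≈ 3.97 points
```

Dividing by the usage rate spreads the whole experimental effect over users alone, which is why the per-user impact is always at least as large as the intent-to-treat impact.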

The post Lost Opportunities appeared first on Education Next.

Teacher Retirement Benefits
Even in economically tough times, costs are higher than ever.
Education Next, December 3, 2009

An unabridged version of this article is available here.


The ongoing global financial crisis is forcing many employers, from General Motors to local general stores, to take a hard look at the costs of the compensation packages they offer employees. For public school systems, this will entail a consideration of fringe benefit costs, which in recent years have become an increasingly important component of teacher compensation. During the 2005–06 school year, the most recent year for which U.S. Department of Education data are available, the nation’s public schools spent $187 billion in salaries and $59 billion in benefits for instructional personnel. Total benefits added about 32 percent to salaries, up from 25 percent in 1999–2000. The increase reflects the well-known rise in health insurance costs, but it also appears to include growing costs of retirement benefits, which have received much less attention.

Conventional wisdom holds that teacher pensions (along with other public pensions) are more costly than private retirement benefits, for reasons dating to an earlier era of low teacher salaries over lifelong careers. In spite of dissent from this view by some researchers (see sidebar), in this case we find that conventional wisdom is right: the cost of retirement benefits for teachers is higher than for private-sector professionals.

Wrong Data, Wrong Conclusion Our findings are at odds with the claim made by Lawrence Mishel and Richard Rothstein of the Economic Policy Institute in the June 2007 Phi Delta Kappan that employer contributions for retiree benefits for teachers are no higher than for professionals in the private sector. Their claim was also based on National Compensation Survey (NCS) data. The unabridged version of this paper provides a detailed critique of their methodology. The three main problems with their calculations are summarized below.

Inappropriate Occupational Categories

The policy debate is about public school teachers, yet Mishel and Rothstein combine public and private school teachers in their analysis. In addition, the “professionals” to whom these teachers are compared also include all teachers; indeed, they are one of the largest components of this group. The authors mislabel the group in their article as “all other professionals,” but the Bureau of Labor Statistics (BLS) table from which their data are drawn clearly shows it to be an occupational grouping that includes teachers. Finally, while Mishel and Rothstein state that the appropriate comparison is with private-sector professionals, this group includes all state and local government professionals, too. The same BLS report provides separate tables with data for the two appropriate occupational groups: public school K–12 teachers and private-sector “management, professional, and related” workers. These are the tables we use in our analysis.

Confounding Social Security Contributions

Mishel and Rothstein are unable to isolate Social Security contributions with the table they use. In that table, Social Security contributions are subsumed into a larger category that also includes Medicare, workers’ compensation, and federal and state unemployment insurance. This problem does not exist when using the proper table for private-sector professionals, as Social Security contributions are separated out. The table with data for public school teachers does not separate out Social Security, but those contributions can be estimated using the NCS estimate for Social Security coverage, as explained in the text.

Share of Total Compensation vs. Percentage of Earnings

Mishel and Rothstein measure employer contributions as a share of total compensation instead of as a percentage of earnings. Shares of total compensation are not informative about how remunerative one occupation is compared to another. To take a simple example, suppose two occupations, one of them teachers, have identical earnings and retirement benefits, but differ in health insurance benefits. Since employer contributions to health insurance are markedly higher for teachers, the share of compensation for that component will be higher and the share for retirement will be lower, since all shares must sum to 100 percent. This fact alone mathematically reduces the share of total compensation that goes to retirement for public teachers, relative to private professionals.
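The arithmetic behind this objection can be made concrete with a small numeric illustration. The salary and benefit figures below are hypothetical, chosen only to show how the two measures diverge.

```python
# Hypothetical illustration of the point above: two occupations with
# identical salaries and identical retirement contributions, differing
# only in health-insurance costs, show different retirement *shares*
# of total compensation even though their retirement pay is the same.

def retirement_share(salary, retirement, health):
    """Retirement contribution as a share of total compensation."""
    return retirement / (salary + retirement + health)

def retirement_pct_of_earnings(salary, retirement):
    """Retirement contribution as a percentage of earnings."""
    return retirement / salary

# Same salary ($50,000) and retirement contribution ($5,000)...
teacher = dict(salary=50_000, retirement=5_000, health=10_000)
private = dict(salary=50_000, retirement=5_000, health=5_000)

# ...yet the compensation-share measure makes the teacher's retirement
# benefit look smaller, purely because health insurance is larger:
print(retirement_share(**teacher))  # ≈ 0.077
print(retirement_share(**private))  # ≈ 0.083

# The percentage-of-earnings measure correctly shows them as equal:
print(retirement_pct_of_earnings(50_000, 5_000))  # 0.10 for both
```

Because all shares must sum to 100 percent, any benefit that is larger for teachers mechanically shrinks the retirement share, which is why the share-of-compensation comparison is uninformative.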

Summing Up

Mishel and Rothstein find that employer costs for retirement constituted 11.5 percent of total compensation for “teachers” and for “other professionals” in June 2006. Correcting the three problems identified above, we find that employer contributions for retirement were 12.8 percent of earnings for public school teachers and 10.5 percent for private professionals in June 2006, a gap of about one-fifth. Since that time, as shown in Figure 1, contributions for private professionals have remained flat, while contributions for teachers have risen, doubling the gap between the two by September 2008.

To track changes in retirement costs and compare employer contributions to retirement for public school teachers with those for private-sector professionals, we draw on recent data from a major employer survey conducted by the U.S. Department of Labor. These data show that the rate of employer contributions to retirement benefits for public school teachers in 2008 is substantially higher than for private professionals: 14.6 percent of earnings for teachers vs. 10.4 percent for private professionals. Moreover, the gap has widened over the four years the data have been available. Between March 2004 and September 2008, the difference more than doubled, rising from 1.9 to 4.2 percentage points (see Figure 1).

Figure 1: Employer contributions to public school teacher pensions and Social Security are higher than contributions for private-sector professionals, with the gap more than doubling between 2004 and 2008.

There are several reasons one might expect employer contributions to retirement to be higher for teachers. First, nearly all teachers are covered by traditional defined benefit (DB) pension plans, in which employees receive a regular retirement check based on a legislatively determined formula. These plans have, over the years, come to offer retirement at relatively young ages, at a rate that replaces a substantial portion of final salary. U.S. Department of Education data show a median retirement age for public school teachers of 58 years, compared to about 62 for the labor force as a whole. A teacher in her mid-50s who has worked for 30 years under a typical teacher pension plan will be entitled to an annuity at retirement of between 60 and 75 percent of her final salary. In nearly all plans this annuity has some sort of cost-of-living adjustment. One does not generally observe comparable retirement plans for professionals and lower-tier managers in the private sector, since most employers have replaced traditional DB plans with defined contribution (DC) or similar 401(k)-type plans, in which the employer and employee contribute to a retirement account that belongs to the employee. Nor do those traditional DB plans that remain typically reward retirement at such early ages; they more nearly resemble Social Security, where eligibility is age 62 for early retirement, and 66 and rising for normal retirement.

The Survey Data

Our analysis draws on data from the National Compensation Survey (NCS), an employer survey developed by the Bureau of Labor Statistics (BLS). The NCS survey is designed to measure employer costs for wages and salaries and fringe benefits across a wide range of occupations and industries in the public and private sectors. Although the BLS has been reporting quarterly fringe-benefit cost data for various public and private employee groups for more than a decade, only since March 2004 has the bureau broken out these fringe-benefit cost data for public school K–12 teachers. In this article we use those data to compare retirement benefit costs for public K–12 teachers with costs for private-sector professionals. We use the most detailed available private-sector comparison group, “management, professional, and related,” a category that includes business and financial managers, operations specialists, accountants and auditors, computer programmers and analysts, engineers, lawyers, physicians, and nurses.

We measure the cost of retirement benefits as a percentage of earnings. Virtually all states specify in law that the employer will contribute a certain percentage of teacher salaries to a DB pension fund (employee contributions are similarly specified), and it is commonplace to compare such contribution rates among the states. Similarly, private-sector employers offering DC plans will typically specify their contribution as a percentage of salary (often as a match to employee contributions). Unlike some other benefits (e.g., health insurance), if salaries change, the dollar costs for retirement benefits move proportionally. On the benefit side, the DB formula ties one’s starting annuity to final average salary, while the adequacy of a DC plan is commonly thought of in terms of the salary replacement rate. Thus it is natural to specify retirement costs as a percentage of salary, both for teachers and for private-sector professionals.

In making this comparison, we must account for the fact that, while all of the private-sector professionals are covered by Social Security, a number of public school teachers are not. Some of the higher cost of employer retirement plans for teachers is offset by lower employer contributions for Social Security benefits. Thus, we should compare the contribution rates for employer-provided retirement benefits and Social Security for both groups of workers. While the BLS reports the Social Security contribution rate for private professionals, it does not report a similar rate for teachers. However, we are able to make such an adjustment by multiplying the share of teachers covered by Social Security, which the BLS estimates to be 73 percent, times the employer contribution rate (6.2 percent). This assumes that the vast majority of teachers are below the Social Security earnings cap (currently $102,000) and that the share of teachers in Social Security has been steady over the four years for which we make the comparison.
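The adjustment described above is a single multiplication; the figures used here are the ones cited in the text (73 percent coverage, 6.2 percent employer rate).

```python
# Sketch of the Social Security adjustment described above. The BLS
# does not report a Social Security contribution rate for teachers, so
# it is approximated as (share of teachers covered by Social Security)
# times (the employer contribution rate), assuming nearly all teachers
# earn less than the Social Security earnings cap.

SS_EMPLOYER_RATE = 0.062    # employer share of the Social Security payroll tax
TEACHER_SS_COVERAGE = 0.73  # BLS estimate of teachers covered

estimated_teacher_ss_rate = TEACHER_SS_COVERAGE * SS_EMPLOYER_RATE
print(f"{estimated_teacher_ss_rate:.2%}")  # ≈ 4.53% of earnings
```

Adding this estimate to the teachers' reported retirement contribution rate puts both groups on a common footing that includes Social Security.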

A time series with quarterly data for these benefit percentages is reported in Figure 1. Two patterns are visible. First, the contribution rate is considerably higher for public school teachers than for private professionals. In the most recent quarter for which data are reported, ending September 2008, the employer contribution rate for public K–12 teachers (14.6 percent) was 4.2 points higher than that for private-sector professionals (10.4 percent). Second, the gap is widening. While the private-sector contribution rate has been relatively flat over the four years, the rate for public school teachers has markedly increased, roughly doubling the relative gap from about one-fifth to about two-fifths of the private-sector rate.

In one important respect, it is likely that the BLS data underestimate the cost of retirement benefits for public school teachers. Many public school districts (and some states) provide health insurance benefits for retired public school teachers. In the course of this research we were surprised to learn that retiree health insurance benefits are not included in the BLS employment cost estimates. Since private employers have largely eliminated this benefit, this means that our estimate of the gap in retirement benefits favoring public school teachers is low, although we cannot be sure of the extent of the underestimate.

Social Security and Teachers

While the overall employer contribution rate for public school teachers is higher than for private-sector professionals, the group average may mask differences between teachers who are and are not covered by Social Security. In order to assess this point empirically, we examined directly the data on employer contributions for teacher pension funds. We find that total employer contributions for both groups of public school teachers are higher than for private-sector professionals.

Most teachers are in statewide pension funds, with a relatively small number in district funds (e.g., New York City, Denver, St. Louis). Data on employer contributions for these plans are available in annual financial reports for each fund, which are surveyed by the National Association of State Retirement Administrators (NASRA).

Using data on contributions from NASRA and pension fund annual reports where necessary, and using weights based on the number of teachers employed in each state or district as reported in the NCES Common Core of Data, it is possible to compute average employer contribution rates for teachers. First we consider teachers who are in states and districts covered by Social Security. For these teachers, we calculate the weighted average employer contribution to be 9 percent of earnings. This can be compared to the estimate of employer contributions to retirement for private-sector professionals and managers, calculated from the BLS data as 4.7 percent for the comparable period (FY07). This is a difference of 4.3 percentage points favoring public school teachers, nearly double the private-sector rate, in states and districts where teachers are enrolled in Social Security, so the comparison is on an equal footing.
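The weighted-average calculation described above can be sketched briefly. The fund entries below are hypothetical placeholders, not actual NASRA data.

```python
# Minimal sketch of the weighted average described above: each pension
# fund's employer contribution rate is weighted by the number of
# teachers it covers. Fund sizes and rates here are hypothetical.

def weighted_avg_contribution(funds):
    """funds: list of (teacher_count, contribution_rate) pairs."""
    total_teachers = sum(n for n, _ in funds)
    return sum(n * rate for n, rate in funds) / total_teachers

hypothetical_funds = [
    (120_000, 0.085),  # large statewide fund
    (40_000, 0.110),   # smaller statewide fund with a higher rate
    (10_000, 0.095),   # district-level fund
]
print(weighted_avg_contribution(hypothetical_funds))  # ≈ 0.0915
```

Weighting by teacher counts ensures that a small district fund with an unusual rate does not move the average as much as a large statewide fund.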

Figure 2: Total retirement contributions in 2007 were highest where teachers are covered by Social Security.

For states and districts where teachers are not in Social Security, we calculate the average employer contribution at 11.1 percent of earnings. Of course, this is considerably higher than the 4.7 percent retirement contributions for private-sector professionals, but, perhaps surprisingly, it even exceeds their employers’ combined contributions to retirement and Social Security, which averaged 10.3 percent for FY07. Thus, as Figure 2 shows, comparing teachers with professionals in private-sector employment, total employer contributions are higher for teachers whether or not they are also covered by Social Security.

Our analysis of evidence from the BLS National Compensation Survey and the NASRA Public Fund Survey shows that the employer contribution rates for public school teachers are a larger percentage of earnings than for private-sector professionals and managers, whether or not we take account of teacher coverage under Social Security. In addition, the BLS data show that the contribution rate for teachers is clearly trending upward.

Looking Ahead

What are the likely trends going forward for the cost of teacher retirement benefits? No one knows for sure, but we can identify the two key factors that will drive these costs: future developments in the benefits themselves and in their funding. The trend through much of the postwar period was to enhance the retirement formulas in various ways, including reducing the age or service requirement for full benefits. For example, just last year New York City agreed to enhance its pension formula for younger teachers. But there is evidence that benefit enhancement has generally abated in recent years. There are even a few states, including Texas, that have moved to reduce benefits for newly hired teachers. However, this is unlikely to reduce costs in the near future, since benefits for incumbent teachers are protected by law in most states.

The other factor to consider is the funding status of teacher pension plans. The vast majority of teacher pension plans are not fully funded. This means that contributions include both the “normal cost” of pension liabilities accruing to current employees and the legacy costs of amortizing unfunded liabilities accrued previously (for a variety of reasons, including the original pay-as-you-go nature of most plans, as well as unfunded benefit enhancements over the years). In theory, if the actuarial assumptions hold true going forward and no new benefits are enacted, the amortization costs will eventually disappear (after 30 years, under a typical funding schedule), in much the same way that a homeowner’s monthly expenses decline when the mortgage gets paid off.

However, the near-term prospects may be very different. For one thing, public pension funds face the possibility of important accounting changes. Unlike private pension funds, public fund actuaries have been allowed to discount future liabilities at a rate of about 8 percent, the assumed long-run market return on fund assets. Finance economists have argued that such a high discount rate is imprudent, however, and there have been signs that public accounting standards might move toward the private-sector rules, based on corporate bond and Treasury rates, which could reduce the discount rate to about 5 percent. This would dramatically raise the required amortization payments.
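The effect of the discount-rate change described above can be illustrated with a simple present-value calculation. The liability amount and horizon below are hypothetical; only the 8 percent and 5 percent rates come from the text.

```python
# Illustrative present-value calculation for the discount-rate point
# above: the same future pension liability looks far larger on the
# books when discounted at 5 percent than at 8 percent. The dollar
# amount and 20-year horizon are hypothetical.

def present_value(future_liability: float, rate: float, years: int) -> float:
    """Discount a single future payment back to today."""
    return future_liability / (1 + rate) ** years

liability = 1_000_000  # a benefit payment owed 20 years from now
pv_at_8 = present_value(liability, 0.08, 20)  # current public-fund practice
pv_at_5 = present_value(liability, 0.05, 20)  # private-sector-style rate

print(round(pv_at_8))  # ≈ 214,548
print(round(pv_at_5))  # ≈ 376,889
print(round(pv_at_5 / pv_at_8, 2))  # reported liability grows ≈ 1.76x
```

A lower discount rate inflates every future payment's present value, so even with no change in promised benefits, the reported unfunded liability (and hence the required amortization payments) jumps sharply.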

Finally, it bears noting that the market value of pension funds has fallen precipitously as of this writing (December 2008). Barring a major market recovery, pension funds across the country will have new, large unfunded liabilities. Under actuarial smoothing methods, these losses will be phased in, raising required amortization payments over the next few years. If the accounting rules for public funds also change, reducing the discount rate on liabilities, the employers of public school teachers, along with other public employers, will face a double hit, requiring sharp increases in contributions. By contrast, those private employers who have switched over to defined contribution plans in recent decades will be unaffected. In short, there are good reasons to believe that the contribution gap we have documented will continue to widen in coming years.

Robert M. Costrell is professor of education reform and economics at the University of Arkansas. Michael Podgursky is professor of economics at the University of Missouri–Columbia.

The post Teacher Retirement Benefits appeared first on Education Next.
