Managing performance within a ‘decentralised’ state – the logic of targetry and the feasibility of target-free delivery

Paper prepared for the 65th Political Studies Association Annual International Conference
30 March – 1 April 2015
Sheffield Town Hall and City Hall

Dr Felicity Matthews
[email protected]
Senior Lecturer
Department of Politics
University of Sheffield

***Draft in progress – please do not quote or cite without permission***

Since 2010, the Coalition Government has sought to pursue an agenda of localism, based on devolving power down to local authorities and the communities that they serve (HM Government, 2010, p. 11). A key aspect of the localism agenda has been the abolition of the layers of targetry, audit and performance management that had proliferated under the previous Labour Governments; and the 2010 Comprehensive Spending Review stated the Coalition’s belief that ‘attempts to micro-manage delivery from the centre’ were ‘both wrong and doomed to fail’ (Cm. 7942, 2010, p. 34). However, at the same time, the Coalition introduced a deep and wide-ranging programme of spending cuts across the public sector; and in this respect the decision to jettison the machinery of targetry can be seen not just as a principled commitment to the empowerment of front-line service providers, but also as a pragmatic attempt to obscure the deleterious impact of the cuts upon service provision.

Reflecting upon such tensions, this paper will seek to dig beneath the rhetoric that surrounds the use of targetry in order to explore the utility of target-based performance management in the delivery of effective and responsive public services within a delivery landscape that is at once organisationally fragmented and financially squeezed. In doing so, it considers the extent to which public service delivery through the use of centrally mandated targets embodies an overly top-down, interventionist approach to governing, and therefore asks whether such instruments are politically feasible or desirable in an age of austerity. To do so, the paper proceeds as follows. It commences with a synopsis of the academic literature on the management of public sector performance in order to highlight the rationale of, and risks associated with, target-based regimes. It then provides an overview of the previous Labour Governments’ approach to performance management, setting out the range of reporting mechanisms that developed, and in doing so, providing the context for the subsequent changes that followed under the Coalition. The paper then moves on to examine the logic of the Coalition’s localism agenda, and the extent to which the rhetoric of decentralisation has been accompanied by a relaxation of reporting criteria. Finally, the paper concludes by considering the implications of the Coalition’s experiment with localism for our understanding of the role and utility of performance management as a tool of public service delivery, and the directions in which future research could travel.

Dials, tin openers and governance-by-numbers

Performance management has been described as one of the ‘most widespread international trends in public management’ (Pollitt, 2006, p. 25); and whilst ‘the definition of performance in a public sector organization is often elusive’, this ‘has not prevented governments throughout the world from publishing an increasing volume of data pertaining to various aspects of public sector performance’ (Smith, 1995, p. 278).
In different jurisdictions, different elements of performance have been measured, including processes and procedures; outputs; efficiency; outcomes; and various permutations thereof (Talbot, 2008, p. 1570). In turn, performance management frameworks have been established to serve a range of purposes, including internal performance management and policy adjustment; external oversight and regulation; citizen engagement and informed choice; and (electoral) accountability through punishment and reward (James and Wilson, 2010, p. 10). More broadly, performance management has been seen as ‘among the most important tools by which governments structure relationships, state values, and allocate resources with employees, third-party providers, and the public’ (Moynihan et al, 2011, p. 141). The underlying rationale of performance management is therefore one of ‘control’, and a distinction can be drawn between external political control and internal managerial control (Smith, 1995, p. 278). It is clear, therefore, that the measurement of public sector performance serves the interests of multiple stakeholders, including politicians; central government departments and agencies; auditors and regulators; public service managers; and interest groups and user groups (Jackson, 2011, p. 14; see also Hood and Dixon, 2010, for a detailed discussion of politicians’ payoffs).

Nonetheless, despite the apparent enthusiasm with which performance management has been embraced, it has been subject to criticism from practitioners and observers alike; and it is possible to identify different degrees of buy-in between the ‘true believers’, ‘pragmatic sceptics’ and ‘active doubters’ (Norman, 2002), which both reflects and fuels the challenges associated with measuring the performance of the public sector. This synopsis will therefore delineate the core challenges identified within the academic literature, focusing on the way in which targets are set and measured and the perverse consequences in which this can result.

Many aspects of public sector performance management have travelled from the private sector, and the pursuit of performance measurement is bound up more broadly with the pursuit of rational efficiency associated with New Public Management. A long-standing assumption within the management literature is that targets are unlikely to be achieved if they are the subject of top-down imposition (e.g. Drucker, 1954; Latham and Yukl, 1975; Likierman, 1993; Locke and Latham, 2002). In turn, it is widely accepted that a smaller range of ‘SMART’ (specific, measurable, attributable, realistic, timed) targets is more likely to provide service providers with a clearer focus and sense of priority; and, conversely, that a groaning proliferation of targets is likely to overload service providers and undermine attempts at prioritisation (e.g. Chun and Rainey, 2005; Boyne and Chen, 2007). Yet whilst the language of performance management may assume an air of rational efficiency, the way in which priorities are articulated and measured has a direct bearing on their chances of success. Even the act of defining an aspect of service as a target priority is fraught with risks, including the neglect of non-priority areas and the demoralisation of staff who are not engaged in the delivery of target priorities.
As Moynihan et al explain, ‘in an environment where some goals and values are explicitly recognized as important and essential to the organizational mission while other values are largely ignored or denigrated as procedural barriers to performance, it is entirely predictable that organizational actors will exert effort toward what is deemed culturally appropriate’ (Moynihan et al, 2011, p. 150). Moreover, in the pursuit of quantification, the risk exists that the focus shifts towards the measurable at the expense of the important, resulting in a ‘naïve goal oriented model’ which ‘not only defines and therefore constrains what is measured’ but ‘also ignores the long-established understanding that public sector goals are ambiguous, multiple, complex and frequently in conflict with one another’ (Jackson, 2011, p. 15).

As such, several scholars have questioned whether it is even possible to measure public service performance with any meaningful degree of accuracy. Jackson, for example, suggests that a distinction should be drawn between performance measures and performance indicators: whilst the former are precise and akin to reading a number off a dial, it is the imprecise signals of performance indicators that constitute the majority of information available about public service performance (Jackson, 1988). In a similar vein, Carter et al suggest that in the public sector, performance indicators are ‘tin openers rather than dials: by opening up a can of worms they do not give answers but prompt investigation and inquiry, and by themselves provide an incomplete and inaccurate picture’ (Carter et al, 1995, p. 49). Moreover, it has been suggested that the vast swathe of information to which such indicators give rise ‘results in overload and paralysis’, and actually undermines effective decision-making (Jackson, 2011, p. 23).

Such risks are undoubtedly a product of, and exacerbated by, the uneasy relationship between the outputs and outcomes of public service delivery. Whilst the outputs of public services are tangible and more easily captured within the SMART framework (e.g. more teachers in schools; additional investment in medical research; the increased installation of speed cameras), it is the outcomes of public services – their contribution to human welfare – that are of primary importance (e.g. increased educational attainment; enhanced quality of life; improved road safety). As Boyne and Law remind us, ‘policies are a means to an end, rather than ends in themselves and it is the outcome which should ideally be measured’ (Boyne and Law, 2005, p. 257). The challenges of outcome-focused performance management are well documented, and Arvidsson’s argument that ‘finding indicators that are both relevant and operational’ is a ‘general problem of performance evaluation of public activities’ (Arvidsson, 1986) remains of relevance today. Outcomes can be extremely difficult to manage, as often government departments and service providers will not control all the levers necessary to bring about societal change; indeed the achievement of an outcome target could be coincidental to the effects of a policy or target, or even in spite of them (Flynn et al, 1988; Carter et al, 1992). This argument is developed by Jackson, who states that ‘public services are necessary, but not sufficient, for the production of outcomes’.
Citing the example of education, he explains that outcomes ‘depend upon the various activities that take place within the school and the classroom plus the efforts of the student, the quality of support in the home environment, and peer pressure. Very little is known about these co-production processes and, therefore, about the effectiveness of public services’ (Jackson, 2011, p. 16). Reflecting on such risks, Boyne and Law suggest that an outcome focus may risk the target being ‘viewed as a lottery ticket’, which could have the unintended consequence of promoting complacency or inactivity: ‘whether the target is hit is likely to be beyond the control of the organization, so service managers may as well sit back and see if “their number comes up”’ (Boyne and Law, 2005, p. 254); and resultantly, Kristensen et al have suggested that an outcome-focused approach will necessitate an even greater proliferation of information for formulating, implementing and evaluating policies (Kristensen et al, 2002, pp. 9-10).

Moreover, if outcomes focus on the important, then defining ‘the important’ is in itself an inherently political act: an ‘appropriate performance measure will, in any event, vary according to who is concerned with performance’, as ‘different interests lay different emphases and it cannot be assumed that all have the same requirements of performance’ (Stewart and Walsh, 1994, pp. 47-8). The political context therefore matters, and will affect the way in which data is accrued and deployed (Jennings and Haist, 2004; Bordeaux and Chitoko, 2008). As such, reflecting on the extent to which evaluations of public service performance are inherently qualitative and normative, Stewart and Walsh counsel against ‘placing total reliance on [targets] or on one set of measures, but rather seeing them as a means of supporting judgement’ (Stewart and Walsh, 1994, p. 48; see also Smith, 1995; Jackson, 2011).

It is clear therefore that performance management in the public sector entails many risks which, if not carefully managed, can result in counter-productive consequences. As Moynihan et al argue, ‘performance regimes that do not reflect the complexity of governance will fail in some fashion. They run the risk of being ignored, discarded, or manipulated. They may foster unanticipated consequences and undesirable behavior at odds with traditional values of governance’ (Moynihan et al, 2011, p. 142). Indeed, Smith suggests that it is the inherent uncertainty and complexity of the public sector – and the stark contrast between the nature of public and private goods – that renders it so uniquely prone to such perverse consequences, often at the expense of securing control (Smith, 1995, p. 280). Within the academic literature, it is possible to identify a burgeoning list of such perverse consequences. De Bruijn highlights: the discouragement of innovation; the blocking of ambition; the obscuring of problems; the driving-down of performance through complacency; and the encouragement of ‘strategic behaviour’ (De Bruijn, 2007).
Similarly, Bevan and Hood identify three broad forms of perverse effects: ratchet effects (‘bas[ing] next year’s targets on last year’s performance, meaning that managers who expect still to be in place in the next target period have a perverse incentive not to exceed targets even if they could easily do so’); threshold effects (‘providing a perverse incentive for those doing better than the target to allow their performance to deteriorate to the standard’); and output distortions (‘[a]ttempts to achieve targets at the cost of significant but unmeasured aspects of performance’) (Bevan and Hood, 2006, p. 521). A similar typology is developed by Jackson, who distinguishes between: ‘definitional gaming’ (where the definition of a target’s subject or remit distorts what is being reported); ‘numerical gaming’ (where an organisation presents its data in a way that exaggerates its performance); and ‘behavioural gaming’ (where the changes in behaviour that allow a target to be met have adverse effects on other areas of an organisation’s work) (Jackson, 2005). Building on this, a fourth category of ‘fraud’ is advanced by Coulson, to capture instances of the knowingly false reporting or alteration of performance data (Coulson, 2009, p. 277). An even more comprehensive list is developed by Smith, who identifies eight distinct types of phenomena: ‘tunnel vision’ (an emphasis on the measured at the expense of the unmeasured); ‘suboptimization’ (the pursuit of narrow local objectives at the expense of the organization as a whole); ‘myopia’ (the pursuit of short-term goals at the expense of legitimate long-term goals); ‘measure fixation’ (focusing on the measure of success rather than the underlying objective); ‘misrepresentation’ (the manipulation of performance data); ‘misinterpretation’ (the failure to adequately account for the effects of the external environment); ‘gaming’ (the manipulation of behaviour to secure strategic advantage); and ‘ossification’ (organisational paralysis brought on by excessive rigidity) (Smith, 1995, pp. 283-301).

The extent to which performance measurement can encourage such ‘strategic behaviour’, ‘output distortions’ or ‘gaming’ has been subject to particular academic attention (e.g. Hood, 2002; Propper and Wilson, 2003; Bird et al, 2005). Because the measurement of public sector performance remains uncertain, incomplete and unattributable, performance management may exacerbate – rather than reduce – the risk of moral hazard (Jackson, 2011, p. 21); and the academic literature is replete with examples of the way in which service providers have sought to cheat the system in order to maximise their own interests. For example, in their study of the star ratings system applied to all NHS hospitals in England and Wales until 2005, Dawson et al reveal that many hospitals diverted resources to activities that would be awarded good star ratings (e.g. waiting times and cleanliness), at the expense of other, less quantifiable aspects of performance (e.g. the quality of treatment provided) (Bevan and Hamblin, 2009). Indeed, practitioner research presented in the British Medical Journal suggested that for the most acutely ill patients, star ratings bore little relevance to their chances of survival because ‘crude mortality data… ignore the fact that higher rated trusts tend to be teaching institutions with patients who are less severely ill on admission to critical care units’ (Rowan et al, 2003).
Similar activities were identified in the field of crime prevention, and Patrick (2009, cited in Coulson, 2009) presented a detailed dossier of the ways in which crime statistics were manipulated in order to demonstrate improvements against targets, supported by practices such as ‘cuffing’ (the failure to record crime), ‘nodding’ (eliciting confessions in return for favours to secure conviction), ‘stitching’ (the creation of false evidence to secure conviction), and ‘slicing’ (the focusing of resources on crimes where convictions were easier to secure). Moreover, several studies have underlined the way in which the dogged measurement of performance has actually entailed potentially fatal consequences. In New York, for example, the regular publication of unadjusted patient mortality rates by the New York State Department of Health had prompted a reluctance amongst cardiac surgeons to take on high-risk cases, which subsequently led to an increase in deaths amongst the most vulnerable at-risk Medicare patients (Dranove et al, 2002); and in England, the stipulation that 75% of ‘immediately life-threatening’ emergency calls be responded to within eight minutes encouraged ambulance trusts to concentrate their fleet in densely populated areas at the expense of patients in rural areas (Bevan and Hamblin, 2009, p. 178).

Instances of such gaming behaviour owe as much to opportunity as to motivation, and whilst gaming is a significant risk, its pursuit is neither axiomatic nor universal. In order to explore the extent to which individuals seek to cheat the system, and to distinguish between the different reasons for such behaviour, Bevan and Hood develop a four-fold typology that draws on Le Grand’s (2003) ‘knights and knaves’ dichotomy. Firstly, they identify ‘saints’, who ‘may not share all of the goals of central controllers, but whose public service ethos is so high that they voluntarily disclose shortcomings to central authorities’. A closely related second group is ‘honest triers’, who ‘broadly share the goals of central controllers, do not voluntarily draw attention to their failures, but do not attempt to spin or fiddle data in their favour’. Further down the typology are ‘reactive gamers’, who ‘broadly share the goals of central controllers, but aim to game the target system if they have reasons and opportunities to do so’. Finally, they highlight the existence of ‘rational maniacs’, who ‘do not share the goals of central controllers and aim to manipulate data to conceal their operations’ (Bevan and Hood, 2006, pp. 522-3).

It is, therefore, readily apparent that a clear disjuncture exists between intention and reality, rhetoric and practice. The desire to develop systems of public sector performance management rests on a normative assumption that government is insufficiently responsive, efficient and dynamic; and the growth of performance management is therefore intimately bound up with the concomitant spread of rational managerialism and New Public Management throughout neo-liberal democracies. Yet, as this review has shown, the measurement of public sector performance is beset by operational challenges; and it is clear that the uniquely uncertain and complex environment in which public providers function constitutes a fundamental challenge to attempts to quantify performance in a meaningful and attributable manner. As Moynihan et al underline:

Few public officials have the luxury of directly providing relatively simple services, the context in which performance regimes work best.
Instead, they must work in the context of a disarticulated state, with policy problems that cross national boundaries and demand a multi-actor response. At the same time, traditional democratic values must be honored (Moynihan et al, 2011, p. 141).

More broadly, as Wolf has argued, the ‘predominant and ineluctable source for failures of government services lie precisely in those circumstances that provide the rationale for these services being outside normal markets in the first place’ (Wolf, 1993, p. 65). Against such a backdrop, important questions remain regarding the added value that performance management has brought in terms of public sector improvement. Boyne and Chen, for example, suggest that the relationship between performance management and service delivery is likely to be indirect, as whilst there is a positive relationship between the setting of clear goals and organisational performance, ‘direct support for the view that targets lead to better public services is sparse’ (Boyne and Chen, 2007, p. 458). Similarly, Jackson states that ‘little is known about the factors that influence and shape public sector performance’ (Jackson, 2011, p. 23); and Moynihan et al argue that ‘at the most basic level we lack definitive evidence about whether performance regimes ultimately improve public sector capacity and outcomes’ (Moynihan et al, 2011, p. 151). Reflecting on this ostensible lacuna, several scholars have highlighted potential lines of further inquiry. Johnsen, for example, suggests that scholars focus on ‘why and when performance measures are developed and promoted’ and ‘why and when performance information is resisted and/or distorted’ (Johnsen, 2005, p. 15); and Moynihan et al provide a comprehensive list of questions clustered around the themes of democratic value, collaborative governance, global governance and performance regime effects (Moynihan et al, 2011, p. 144). More specifically, Boyne and Chen suggest that ‘explicit measures of the presence or absence of targets are missing from the empirical studies’, which in turn has meant that ‘whether organizations with targets achieve more than those without targets is unclear’ (Boyne and Chen, 2007, p. 458). This final point is of particular relevance for this paper, as the Coalition has swept away the layers of performance management that developed under the Labour Government; and this new phase of public service performance ‘management’ therefore constitutes an important opportunity to examine the extent to which targets make a difference. However, in order to tease out this difference, it is first necessary to set out the dichotomous approach that developed under the Labour Governments of 1997-2010.

Whole-of-system control-freakery – The Labour Governments, 1997-2010

In 1997 the Labour Government came to power with an ambitious agenda of public service reform, accompanied by even more ambitious tools of tightly-controlled performance management. Certainly, attempts to foster effective performance across government were nothing new.
Measures such as the Financial Management Initiative launched in 1982 and the Next Steps programme of 1988 had sought to strengthen the link between the funding of Whitehall departments and agencies and the delivery of specific service outputs; the establishment of the Audit Commission in 1983 had brought the performance of local authorities under systematic scrutiny; and the creation of tools such as the Citizen’s Charter in 1991 had mandated certain standards that public service organisations must meet. Yet what was different about Labour’s approach was the scope and scale of the multiple reporting frameworks established in quick succession, which left not a single area of state activity untouched, and resulted in a brave new ‘targetworld’ redolent of the Soviet regime (Hood, 2006). Labour’s approach to performance management was expansive and multi-layered, including a national system of Public Service Agreements (see James, 2004; Matthews, 2008; Matthews, 2013); a plethora of local authority targets (see Boyne and Law, 2005; Coulson, 2009); and performance regimes for specific sectors, such as the star ratings given to all NHS hospitals in England and Wales (see Bevan and Hood, 2006), or the Policing Performance Assessment Framework to which police forces were subject (see Painter, 2012).

A fundamental principle of Labour’s overarching approach to performance management was that all layers of government and the wider state would be united around a set of top-level priorities that would be mandated by the centre and trickle down to the vast flotilla of service providers on the ground. To this end, the most significant innovation within Labour’s toolkit was the Public Service Agreement (PSA) framework, which was intended to act as the main driver, the starting point, of all government expenditure and policy-making. In operation between 1998 and 2010, the PSA regime represented an ambitious channel of governance, aiming simultaneously to increase the strategic and leadership capacity of the core executive, while delivering joined-up public services within a national framework of priority setting and accountability.

The PSA framework was introduced as part of the Government’s 1998 Comprehensive Spending Review and established a series of targets within each Whitehall department that mapped directly onto core public service priorities. Several principles informed the new regime (for a detailed overview, see Matthews, 2013). Fixing the public expenditure envelope for a three-year period was intended to enable departments to manage their resources more effectively, and to sharpen the focus upon tangible policy outcomes, as spending became linked to their achievement. Multi-year planning and a focus on outcomes were also intended to promote greater co-ordination across all layers of government, which would be united under a series of common objectives. Departments were also encouraged, where appropriate, to work together to achieve policy goals that transcended traditional Whitehall boundaries, with numerous shared frameworks being established to promote collaboration. The monitoring arrangements in place to oversee progress towards the PSA targets reinforced the role of the centre as the driver of the new regime, its system of sanctions and rewards intended to ensure that the focus across government remained on the centre’s core objectives.
As well as fostering SMART working across Whitehall, the PSA framework also sought to engage those responsible for delivering public services on the ground, uniting the core and periphery around a set of shared goals. A stated intention of the framework was to afford service providers flexibility in deciding appropriate delivery mechanisms, in accordance with the principle of ‘earned autonomy’, which would ‘ensure that public service providers have the discretion to innovate and improve the services they provide, constrained by the need to reach high minimum standards’ (Cm. 5570, 2002, p. 10). In order to unite local authorities and other service providers around its target priorities, the Government introduced a series of mechanisms that were intended to complement the PSA framework.

The relationship between top-level PSA objectives and their on-the-ground delivery was formalised in 2000 through the introduction of Local Public Service Agreement (LPSA) targets, which were initially piloted amongst twenty upper-tier local authorities. Targets were intended to form a ‘partnership agreement between individual local authorities and the Government, intended to accelerate or surpass key outcomes than would otherwise be the case, for people living in the authority's area’; and brought with them a number of incentives, including administrative flexibilities, pump-priming grants and performance reward grants (ODPM, 2000, s. 1). By 2002 LPSAs had been rolled out to over sixty upper-tier local authorities, and by 2003 all but three upper-tier local authorities had elected to negotiate an LPSA (Cm. 5571, 2002, p. 1); a second generation of LPSAs was rolled out in 2003. In 2007, LPSAs were superseded by Local Area Agreements (LAAs), three-year agreements negotiated between central government and local authorities which set out a series of targeted improvements that authorities were committed to achieving, along with a detailed delivery plan. Whilst not explicitly linked to the PSA framework, LAAs entailed up to 35 priorities for each local authority, underpinned by ‘a single set of about 200 outcome based indicators covering all important national priorities like climate change, social exclusion and anti-social behaviour’; and the Government insisted that ‘LAA targets will generally be negotiated to balance local priorities and levels of performance with national improvement priorities’ (Cm. 6939-I, 2006, para. 6.39).

At the same time, local authorities were confronted with an additional, and entirely separate, performance management regime, as running parallel to the LPSAs and LAAs negotiated with central government was the programme of audit and inspection administered by the Audit Commission. In 2001, for example, the Audit Commission launched the Comprehensive Performance Assessment, which rated councils against in excess of 1,000 performance indicators, using a five-point scale for each. This was replaced in 2009 by the Comprehensive Area Assessment, which was intended to streamline the process by assessing local public services against a total of 214 indicators (for an overview, see Coulson, 2009). As this overview illustrates, the limited number of top-level targets at the national level was not matched by a similar focus on the ground, as service providers became subject to an explosion of performance indicators.
Whilst the Labour Governments espoused the rhetoric of flexibility and ‘earned autonomy’, the scope and scale of the performance frameworks in place squeezed the capacity for local discretion. Moreover, the complexities and contradictions of the Government’s raft of central initiatives were seen to result in a lack of clarity regarding the roles of central and local government. The Public Administration Select Committee, for example, suggested that a ‘lack of proper integration’ between the simultaneous development of front-line organisational capacity and a centrally-driven measurement culture had created ‘tension between those charged with centralised responsibility and those who are responsible for dispersed delivery of public services’, undermining the centre’s ability to realise its strategic leadership priorities (HC 62-I, 2003, p. i). Research similarly revealed that local authorities and other service providers were overloaded with a raft of central initiatives, which lacked sufficient co-ordination, and in turn impeded local autonomy and service delivery. The setting of top-level targets often resulted in the cascading of a proliferation of sub-targets further down the delivery chain, which added to the burden faced by local authorities. Indeed, local authorities were faced with targets set at the national level through Local Public Service Agreements, and later Local Area Agreements; whilst at the same time being required to report against a vast number of indicators set by the Audit Commission through the Comprehensive Performance Assessment, and later the Comprehensive Area Assessment. Moreover, the link between the PSA framework and these parallel performance regimes at the national and local levels was never clearly explicated.

In creating such a complex web of central-local relationships, which rested on an inherent tension between top-down performance management and on-the-ground autonomy, the Government’s commitment to ‘earned autonomy’ again appeared questionable, with evidence suggesting that the Labour Government’s interactions with the localities were based on command-and-control and an innate confidence in the centre’s capacity to determine delivery on the ground (Matthews, 2013). Research conducted on the ground revealed ‘scant evidence that other major spending departments have much appetite for reducing the numbers of targets that authorities are required to meet or the range of plans that they must submit to Whitehall’ (Martin, 2005, p. 540); and that reasonable requests for further autonomy were frequently rejected, suggesting that the ‘fear of setting precedents, anxiety about local variation and risk aversion seemed to have discouraged officials, who found it easier to say “no”’ (Sullivan and Gillanders, 2005, pp. 565-6). LPSAs, for example, were criticised for failing to enhance autonomy, as the centrally-driven process of ‘upwards negotiation’ was one-way and only allowed local authorities to commit to surpassing national targets, even if those national targets failed to reflect local challenges. This was attributed to the pervading culture of Whitehall, as ‘LPSAs were introduced into a set of pre-existing relationships between local and central government… informed by history, local context and the relative power and influence of different government departments’ (Sullivan and Gillanders, 2005, pp. 559-70).
Whilst the introduction of LAAs was intended to foster a greater sense of ownership of target priorities amongst local authorities, the sense remained that targets were government impositions; indeed it was suggested that much of the Government’s rhetoric ‘treat[ed] local councils as if they were agencies of the national state, required to implement a national plan, but given freedom as to how they actually deliver it’ (Coulson, 2009, p. 279). It was therefore unsurprising that evidence of gaming emerged, as local authorities and other service providers buckled under the burden of the competing pressures emanating from Labour’s performance management framework. Evidence submitted to the Public Administration Select Committee illustrated the prevalence of output distortions or behavioural gaming in the field of healthcare:

In my own area where I have worked for many years the ophthalmic unit cancelled 19,500 follow-up appointments in a six-month period so that new patients could be seen to reach the target for new patients being seen (HC 1259-i, 2002, Q. 2).

[I]f you have a lot of targets, you have a hierarchy of targets… You know some are really important because you will get sacked for those and you know others are targets which it does not really matter if you miss… That is where all your resources go. It is not that you do not want to deliver that target, but delivering the mainstream target takes priority (HC 62-iv, 2002, Qs. 431-4).

Similar perverse consequences were also evident amongst local authorities seeking to achieve their household recycling targets. The creation of garden waste collection schemes was highlighted by one senior official as a case of definitional gaming, as local authorities sought to distort their performance, collecting something that would otherwise not be collected, simply to provide an immediate boost to recycling rates. Indeed, this was described by one official as a ‘fiddle’: ‘if it’s not in the waste stream now, why would it be in the waste stream in the future?’ Other officials spoke of the threshold effects experienced by high-achieving local authorities: ‘[t]here’s no incentive to overachieve, because there’s a limit on budgets so you need to justify why you want to spend more money’ (Matthews, 2013).

Letting go and clawing back – The Coalition Government 2010-2015

Upon entering office in 2010, the Coalition Government swept aside this machinery of performance measurement, consigning Labour’s era of top-down government-by-numbers to history with a cursory statement tucked away in the 2010 Comprehensive Spending Review (Cm. 7942, 2010, p. 34). The Coalition’s determination to dismantle the machinery of target-based performance management was part of its broader commitment to lean government and localism, as set out by Cabinet Minister Francis Maude in July 2010, who stated that ‘there were many attempts to micro-manage delivery from the centre, through targets, PSAs, monitoring, auditing, endless guidance and regulation. This is both wrong and doomed to fail.’1 Instead, the approach of the Coalition was to be one of ‘radical decentralisation’, which, as the Deputy Prime Minister, Nick Clegg, explained:

means stripping away much of the top-down bureaucracy that previous governments have put in the way of frontline public services and civil society. It means giving local people the powers and the funding to deliver what they want for their communities (CLG, 2011, p. 1).
Similarly, reflecting on the ‘historic opportunity to redress the balance’, the Minister for Decentralisation, Greg Clark, stated that:

bureaucratic micromanagement of our public services is not only inefficient but undemocratic. If central government is everywhere, then local decision-making is nowhere – everything is subject to national politics, with nothing left to community leadership (CLG, 2011, p. 1).

1 http://www.cabinetoffice.gov.uk/news/Francis-Maude-speech-to-the-civil-service, last accessed 15 September 2012.