Do Managers Face a Performance Trade-Off? Correlations Between Production and Process Performance
In: International public management journal, Volume 21, Issue 1, pp. 53-73
ISSN: 1559-3169
7 results
In: International public management journal, Volume 21, Issue 1, pp. 53-73
ISSN: 1559-3169
In: Public performance & management review, Band 45, Heft 6, S. 1258-1286
ISSN: 1557-9271
In: Journal of public administration research and theory, Volume 32, Issue 4, pp. 781-794
ISSN: 1477-9803
Abstract
This paper develops and tests a parsimonious micro-theory of street-level bureaucrats' individual implementation behavior. By systematizing and synthesizing theoretical insights from Ajzen's theory of planned behavior, from Bandura's theory of perceived self-efficacy, and from the implementation literature, we claim that street-level bureaucrats' policy implementation behavior is a function of their individual evaluation of the efficacy of the policy and of their own ability to implement the policy measures in terms of their perceived self-efficacy. We test our theory on a panel data set capturing teachers' implementation of the wide-scale Danish public school reform of 2014. We use administrative data and a five-wave panel survey (2014–18) of 2,055 teachers in approximately 200 schools as well as their school leaders. Our results confirm the importance of teachers' perceived policy efficacy and of their perceived self-efficacy for their implementation behavior. Because street-level bureaucrats' individual evaluations of a policy and their own perceived abilities shape their implementation behaviors, as shown in this study, street-level bureaucrats are even more important as individual policymakers than depicted in most street-level bureaucracy research, which focuses mainly on structural rather than individual determinants of their behaviors.
In: International public management journal, Volume 20, Issue 2, pp. 183-205
ISSN: 1559-3169
In: Journal of public administration research and theory, Volume 33, Issue 2, pp. 296-312
ISSN: 1477-9803
Abstract
Public service organizations periodically collect and disseminate performance information that enables frontline employees to act based on two aspects of performance: current performance (how is the client performing right now?) and performance progression (is the client performing better, similarly, or worse than previously?). Yet knowledge of how frontline employees use performance information about their clients' performance progression remains limited. Building on cognitive psychology and street-level bureaucracy research, this article theorizes and tests how information on changes in client performance over time affects frontline employees' performance information use. We develop a theoretical framework that comprises three competing hypotheses on how performance progression information (on performance improvement, performance stability, and performance deterioration) shapes purposeful performance information use at the frontlines of public services delivery. Each hypothesis relates to a distinct cognitive bias: needed-deservingness bias, negativity bias, and change-react bias. Using a pre-registered survey experiment among Danish public school teachers (n = 925), we find support for the change-react bias. Teachers are more inclined to take behavioral action when presented with performance progression information showing change (either improvement or deterioration) relative to no change (stability) in student performance. These results expand our understanding of performance information use at the frontlines by suggesting that frontline employees process performance information showing change differently from performance information showing stability.
In: Public administration: an international journal, Volume 99, Issue 1, pp. 189-204
ISSN: 1467-9299
Abstract
Public decision‐makers increasingly rely on satisfaction surveys to inform budget and policy decisions. Yet, our knowledge of whether, and under what conditions, this input from public service users provides valid performance information remains incomplete. Using a preregistered split‐ballot experiment among government grant recipients in Denmark, this article shows that the ordering of survey questions can bias satisfaction measures even for highly experienced and professional respondents. We find that asking about overall satisfaction before any specific service ratings lowers overall user satisfaction, compared to the reverse order, while the correlations between specific ratings and overall satisfaction are relatively stable. Also, the question order effect outweighs that of a large‐scale embezzlement scandal, which unexpectedly hit the investigated government agency during the data collection. Our results support rising concerns that subjective performance indicators are susceptible to bias. We discuss how practitioners should approach satisfaction surveys to account for the risk of question order bias.
In: Thau, M., Mikkelsen, M. F., Pedersen, M. J. & Larsen, M. H. (2020), 'Question Order Bias Revisited: A Split-Ballot Experiment on Satisfaction with Public Services among Experienced and Professional Users', Public Administration, vol. 99, no. 1, pp. 189-204. https://doi.org/10.1111/padm.12688