On the Positive Effects of Overconfident Self-Perception in Teams
Sandra Ludwig1, Philipp C. Wichardt2* and Hanke Wickhorst3
1Department of Economics, University of Ulm, Germany
2Kiel Institute for the World Economy; Dept. of Economics, University of Lund; Dept. of Economics, University of Rostock; CESifo Munich, Germany
3Power Forward Consulting GmbH
Submission: June 11, 2024; Published: June 28, 2024
*Corresponding author: Philipp C. Wichardt, Department of Economics, University of Rostock, Ulmenstr. 69, 18057 Rostock, Germany. e-mail: philipp.wichardt@uni-rostock.de
How to cite this article: Sandra Ludwig, Philipp C. Wichardt* and Hanke Wickhorst. On the Positive Effects of Overconfident Self-Perception in Teams. Ann Soc Sci Manage Stud. 2024; 10(4): 555792. DOI: 10.19080/ASM.2024.10.555792
Abstract
In this paper, we study the individual payoff effects of overconfident self-perception in teams. In particular, we demonstrate that the welfare of an overconfident agent who works in a team with a rational agent or in a team with an overconfident agent can be higher than the welfare of the members of a team of two rational agents. This result holds irrespective of the assumption about the agents’ awareness of their colleague’s bias. Moreover, we show that an overconfident agent is always better off when he is unaware of a potential bias of his colleague. Thus, our results provide a potential rationale for the widespread dissemination of overconfidence.
Keywords: Overconfidence; Team Production; Unawareness; Self-perception; Synergy effects
JEL classification: D21, D62, L23
Introduction
Considerable evidence from psychology suggests that individuals tend to overestimate their own skills (e.g. [1-4]; for recent reviews see [5-7]).1 Given the apparent relevance of the phenomenon for many economic contexts, the effects of overconfidence have also received considerable attention in the economic literature2. One prevalent effect of overconfidence seems to be that individuals who overestimate their own skill tend to work harder than individuals assessing their ability correctly (see, e.g., [20-22] and more recently [23]).
Interestingly, the effort-increasing effect of overconfidence implies that the bias of some agents can affect the actions or payoffs of other agents. For example, the bias may change the incentive structure of the others if the agents’ payoffs depend not only on their own effort but also on the effort of others, e.g. in a teamwork setting, or the bias may change the payoff and/or the optimal incentive scheme from the perspective of a principal. Yet, such changes may crucially depend on the information agents possess about the actions and/or biases of the (overconfident) agents since agents can only react to what they observe or believe. The importance of the information structure is, for example, demonstrated by Santos-Pinto [24] in a principal-agent setting. Focusing on the principal, Santos-Pinto considers a situation where the principal can condition wages on each agent’s output. He shows that overconfidence is beneficial for the principal if effort is observable, while it need not be beneficial in the presence of moral hazard.
In the present paper, we take up the discussion about the effects of overconfidence and analyze (unlike Santos-Pinto) a model of a teamwork situation with effort complementarities (see also Hakenes and Katolnik [25] who study optimal team size in teams of overconfident agents). We first consider the potential advantage overconfident agents may have in an environment of mainly rational agents. The argument is related to works by De la Rosa [26], Gervais and Goldstein [27] or Hvide [28]. De la Rosa [26], for example, analyzes welfare effects of overconfidence in a setting in which firms compete for an overconfident and risk-averse agent; he finds that the agent benefits when his bias is moderate. Along the same lines, Gervais and Goldstein [27] analyze a model of team production with effort complementarities. They show how overconfidence reduces free-riding, how it may increase the welfare of both a rational and an overconfident agent, and how it can give rise to a Pareto improvement. Hvide [28], in turn, considers a case where the agent can actually choose the beliefs about his ability and shows that biased beliefs can be beneficial to the agent - as they may improve his outside option - while they are detrimental for the firm.
1Note that the notion of overconfidence in general is not uncontested [8-11]. A recent meta-study by Koehler et al. [12], however, describes overconfidence as a prevalent phenomenon.
2For example, effects of overconfidence on decisions by managers and stock-traders have been analyzed by Grinblatt and Keloharju [13], Malmendier and Tate [14,15], Heaton [16], Hirshleifer and Luo [17], and Kyle and Wang [18]. The effects on employee turnover and firm profits are analyzed by Hoffman and Burks [19].
In a second step, we then ask how individual payoffs are affected if both team members are biased and how awareness of the biases of others impacts on the agents’ payoffs. Whether or not people are actually aware of the bias of others, of course, remains an empirical issue which so far has received little attention. However, the findings by Ludwig and Nafziger [29] indicate that overconfident people tend to be unaware of the biases of others. Also, Bruhin et al. [30] find that subjects in an experiment do not appear to strategically respond to overconfidence of another team member.
In the subsequent analysis, we show that overconfidence may not only enhance the team’s productivity (due to increased efforts), i.e. benefit the firm, but may also increase the welfare of the biased agent himself. And this holds in a team of one overconfident and one rational agent as well as in a team of two overconfident agents. Moreover, the result is particularly strong if the considered agent’s overconfidence is combined with unawareness of other people’s biases (despite the fact that being aware of the other’s bias is closer to the true state of the world). Thus, our results not only provide a potential rationale for the wide dissemination of overconfidence suggested by the studies cited above. They also provide a potential rationale for the empirical finding that overconfident people appear to be unaware of the biases of others [29,30].
The intuition behind these results is rather straightforward: Due to the effects of synergy, overconfidence of another team member increases the optimal effort level for any agent who is aware of this bias. However, if an agent is overconfident himself, his effort level is already above the individual optimum – because of his own bias which he is unaware of. Awareness of a colleague’s bias, then, leads to a further (suboptimal) increase in his effort. By contrast, lack of such awareness keeps the expectation about the colleague’s effort and, hence, the agent’s extra effort, which he exerts in order to exploit effort complementarities, low. In combination with the increase in the agent’s effort due to his own overconfidence, the agent’s effort choice gets closer to the overall individual optimum than if he were aware of the other’s bias. In a sense, all necessary upward-adjustments in the agent’s effort (in order to exploit the synergies from the colleague’s overconfidence) are already accounted for in the agent’s effort choice - although for a different reason, namely the agent’s own overconfidence (which he is unaware of). And this intuition essentially covers both cases, i.e. a team with one biased and one rational agent and a team with two biased agents.
The rest of the paper is structured as follows: Section 2 presents our baseline model of a teamwork situation with effort complementarities. Section 3 introduces overconfidence in a team of one overconfident and one rational agent. Moving to teams of two overconfident agents, Section 4 considers the effects of changes in the information structure in such instances. Section 5, then, compares teams of two overconfident agents with teams of two rational agents and summarizes the main points of the analysis. Section 6 concludes.
The Baseline Model
Consider a firm whose output is generated by a single one-period project which is carried out by two risk-neutral agents, i = 1, 2, where teamwork is implemented in order to create positive externalities.3 The value of the project is the value of its expected cash flow, which depends on the agents’ efforts, ei, and their abilities, ai; for the sake of argument, we assume a1 = a2 = a.4 Moreover, we assume that agent i’s expected return from the project, denoted by Ri(ei, e-i), is increasing in effort and ability and that the marginal return to effort is increasing in ability, i.e. ∂²Ri/∂ei∂ai > 0.5 The agents’ cost of effort is denoted by c(ei) with c(0) = 0, c′ > 0 and c″ > 0. Finally, in order to make the subsequent discussion meaningful, we follow, for example, Gervais and Goldstein [27] and assume that the agents’ efforts are strategic complements, i.e. ∂²Ri/∂ei∂e-i > 0.6
3On positive externalities through teamwork see e.g. Alchian and Demsetz [32], Grossman and Hart [33], Alchian and Woodward [34], Aghion and Tirole [35], Jensen and Meckling [36], or Holmström and Roberts [37].
4Note that assuming equal ability is not restrictive for the present argument. In particular, the focus of the analysis is on the individual effects of overconfidence and information about such biases of other team members. And, as such, the discussion is essentially confined to the consequences of changes in these parameters for one of the two agents. In fact, actual ability is not explicitly accounted for as we will treat it as fixed throughout the analysis.
5The complementarity assumption between ability and the value of effort is reasonable in many settings since it is often the case that a more able agent needs less time to carry out a certain task.
6Efforts being strategic complements corresponds to the slope of the best reply being positive, i.e. to agent i’s optimal effort being increasing in e-i; given the SOC below, this is equivalent to ∂²Ri/∂ei∂e-i > 0.
Under the above assumptions, agent i chooses his effort to maximize his expected return from the project net of his cost of effort. The corresponding first-order condition (FOC) equates the marginal return to effort with the marginal cost of effort, and the corresponding second-order condition (SOC), which we assume to hold in the following, requires the net payoff to be concave in the agent’s own effort; both conditions are sketched below.
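In sketch form, and writing Πi(ei, e-i) := Ri(ei, e-i) − c(ei) for agent i’s net payoff (a notation introduced here for convenience), agent i solves

max_{ei}  Πi(ei, e-i) = Ri(ei, e-i) − c(ei),

with FOC

∂Ri(ei, e-i)/∂ei − c′(ei) = 0,

and SOC

∂²Ri(ei, e-i)/∂ei² − c″(ei) < 0.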
Substituting the corresponding equilibrium efforts, denoted by ei* with i = 1, 2, into the agents’ payoff functions, we obtain the agents’ equilibrium payoffs in the case without overconfidence, which can be sketched as follows.
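In this notation, and marking the equilibrium values of the fully rational team with an asterisk (again a choice of notation made here for concreteness), the benchmark payoffs read

Πi* = Ri(ei*, e-i*) − c(ei*),  i = 1, 2.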
These payoffs will serve as our benchmark for later comparisons.
Overconfidence
In order to analyze the effects of overconfidence, we first consider a team in which one agent, say agent 2, is overconfident, while the other agent, agent 1, is rational and aware of agent 2’s bias. In particular, we assume that agent 2 overrates his own skill by b2 > 0, i.e. his perceived ability is a′ := a + b2.7
Moreover, we assume that agent 2 is not aware of his own bias, so that agent 2 maximizes the expected return to the project as (wrongly) perceived by the biased agent, net of his cost of effort; the resulting first-order and second-order conditions are sketched below.8 Note that, compared to a situation without overconfidence (b2 = 0), the effort of agent 2 increases for a given effort level of agent 1, as the marginal return to effort of agent 2 is increasing in b2: recall that, by construction, the numerator of (9) is positive due to the assumed positive effect of the agents’ ability on marginal productivity, and the denominator is negative, which follows from the SOC (see (4) and (8)).
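In sketch form, writing R̃2 for the return as perceived by agent 2 at his inflated ability a′ = a + b2 (a notation introduced here), agent 2 solves

max_{e2}  R̃2(e2, e1) − c(e2),

with FOC

∂R̃2(e2, e1)/∂e2 − c′(e2) = 0,

and SOC

∂²R̃2(e2, e1)/∂e2² − c″(e2) < 0.

Differentiating the FOC with respect to b2 and writing ẽ2 for agent 2’s optimal effort gives the comparative static referred to above,

∂ẽ2/∂b2 = − [∂²R̃2/∂e2∂b2] / [∂²R̃2/∂e2² − c″(e2)] > 0,

whose numerator is positive because the marginal return to effort increases in (perceived) ability and whose bracketed denominator is negative by the SOC.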
The maximization problem of agent 1, in turn, is the same as described in the baseline model of a fully rational team except that agent 1 now takes the bias b2 of agent 2 into account; i.e. agent 1 knows that agent 2’s effort changes due to his overconfidence and accounts for this. Thus, agent 1 knows that agent 2 is biased; agent 2 knows this but disagrees with agent 1, i.e. the agents agree to disagree as, for example, in Morris [39] and Squintani [40].9 Accordingly, optimal efforts are derived as follows: Agent 2 maximizes his incorrectly perceived payoff (correctly) anticipating that agent 1 is rational (and that agent 1 believes that agent 2 is biased); and agent 1 maximizes his actual payoff (correctly) anticipating that agent 2 is biased and thus maximizes his perceived payoff.
Denoting the resulting equilibrium efforts accordingly, the agents’ individual payoffs are based on actual and not on perceived abilities (and thus on actual rewards).
The qualitative effect of changes in agent 2’s perceived ability on agent 1’s expected payoff, then, can be summarized as follows: for any b2 below some upper bound on agent 2’s bias (possibly infinite), agent 1’s equilibrium payoff is increasing in b2. The underlying decomposition is sketched below.
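In sketch form (notation as introduced above), with efforts evaluated at the equilibrium just described, the effect decomposes as

dΠ1/db2 = [∂R1/∂e1 − c′(e1)] · (de1/db2) + (∂R1/∂e2) · (dẽ2/db2).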
7Note that we consider overconfidence in the form of overestimation of one’s absolute ability (see, e.g., [27], for a similar approach). In general, overconfidence can arise in other forms like overestimation of relative abilities (“better-than-average effect”, e.g. [3]) or personal control (“illusion of control”, e.g. [38]), as well as unrealistic optimism about the future (e.g. [2]).
8Note that overconfidence would have no behavioral effect if agents were aware of their bias (and otherwise rational, i.e. expected utility maximisers).
9Note that beliefs in this type of argument are used essentially to motivate behavior but are not themselves part of the equilibrium in the sense that they would have to be correct. This is somewhat similar to models of level-k thinking used to analyze initial responses in normal form games; see, for example, [41-43]. In view of applications, such an implicit exclusion of the consistency condition regarding beliefs appears to be a justifiable simplification, for example, in settings where there are few opportunities for learning (e.g. due to a low frequency of repetition) or where the common restrictions of the agents’ mental capacities are binding (e.g. due to time constraints or some other details of the job the agents have to carry out).
10Since both agents are unaware of each other’s bias, both believe that the colleague is unbiased. Moreover, each agent is unaware of the own bias. Thus, the agents’ beliefs are effectively inconsistent with actual strategies (as they are unaware of the biases); see also footnote 9. However, as soon as we deal with biased agents, consistency of beliefs is always an issue as biased agents, by definition, are at least unaware of their own bias.
As the first term is zero by the envelope theorem, the impact of agent 2’s overconfidence on agent 1’s payoff depends on the sign of the strategic effect, which is positive. Hence, agent 1’s expected payoff increases in agent 2’s overconfidence.
Furthermore, consider the impact of b2 on agent 2’s own expected payoff.
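In sketch form (notation as above), it decomposes as

dΠ2/db2 = (∂R2/∂e1) · (de1/db2) + [∂R2/∂e2 − c′(ẽ2)] · (dẽ2/db2).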
The first term again reflects the strategic effect, which is positive as (1) efforts are strategic complements by assumption and (2) agent 2’s effort is increasing in his bias b2, as the marginal return to effort is increasing in ability.
By contrast, the second term, which reflects the payoff effect of agent 2’s mistaken belief about his own ability, is, of course, negative: the mistaken belief induces agent 2 to exert more effort than is individually optimal, which in turn implies that his marginal return to effort falls short of his marginal cost.
Eventually, the overall effect on agent 2’s expected payoff is determined by the trade-off between the strategic effect and the effect of agent 2’s mistaken belief. In particular, if synergy effects are large, the strategic effect dominates and agent 2’s payoff increases in b2. This also holds if both synergy effects and agent 2’s bias are small, as a small bias results in a moderate increase in agent 2’s effort and thus the mistaken-belief effect is negligible. If synergies are small while the bias is comparably large, though, the overall effect on agent 2’s utility is negative.
The overall effect of agent 2’s overconfidence on agent 1’s expected payoff, by contrast, depends only on the sign of the strategic effect, which is positive. Accordingly, agent 1’s payoff always increases in agent 2’s overconfidence.
Summing up, both agents’ efforts increase in b2 if efforts are strategic complements and the marginal return to effort of agent 2 is increasing in b2 - as assumed for the present discussion. Moreover, such an increase in efforts does not only lead to a higher team productivity (i.e. a higher firm value) and a higher expected payoff of agent 1 (which is increasing in b2). It also increases the expected payoff of the overconfident agent 2, provided that either synergies are large or, if they are small, also the bias, b2, itself is sufficiently small. Intuitively, the latter effect is due to the fact that agent 2 benefits from the positive externalities of the increased effort of agent 1. Even if these externalities are rather small, this effect outweighs the decrease in expected payoff resulting from agent 2’s increased effort as long as the extent of overconfidence is moderate. Thus, we conclude:
Lemma 1 Within the considered model of team production, being overconfident (and paired with a rational agent) increases the payoff of the overconfident agent if either synergy effects are sufficiently large or if both synergy effects and the agent’s bias are small.
Bias-Awareness
In a next step, we turn to the discussion of teams which consist of two overconfident agents. We address the question whether it is optimal for either agent to be informed or ignorant of his colleague’s bias. In doing so, we distinguish three settings: (Case 1) both agents are unaware of each other’s biases; (Case 2) one agent is aware of the other’s bias while the other agent is unaware of the colleague’s bias; (Case 3) both agents are aware of each other’s bias. As we will see, it is always better for agent 2 to be unaware of his colleague’s overconfidence - irrespective of whether agent 1 is aware or unaware of agent 2’s bias. The section concludes with some brief statements about the effect of partial awareness.
Case 1: Both agents are unaware of each other’s bias.
If both agents are overconfident but unaware of their colleague’s bias, each agent’s decision situation is basically analogous to the situation of agent 2 considered in Section 3, i.e. the situation where an overconfident agent 2 is paired with a rational agent 1. Accordingly, each agent maximizes his (incorrectly) perceived payoff (incorrectly) anticipating that the other agent behaves rationally. Thus, the derivation of the maximization problems and the optimal efforts for both agents is analogous to that for agent 2 in the previous section.10 Accordingly, agent 2’s decision in the present setting is identical to the one discussed in Section 3.
Agent 1, in turn, now acts in the same way as agent 2; i.e. he also increases his effort compared to the individually rational level e1* because of his own overconfidence (but no longer, as he did before, because of - the knowledge of - his colleague’s bias). Thus, agent 1’s maximization problem is the analogue of agent 2’s problem above; a sketch, together with the resulting payoff effect, is given below.
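In sketch form, each agent i solves

max_{ei}  R̃i(ei, ê-i) − c(ei),

where R̃i denotes the return as perceived at agent i’s inflated ability and ê-i the effort agent i expects from a (supposedly unbiased) colleague; both symbols are introduced here for concreteness. The impact of agent i’s own bias on his actual expected payoff then decomposes as

dΠi/dbi = [∂Ri/∂ei − c′(ẽi)] · (dẽi/dbi) + (∂Ri/∂e-i) · (dẽ-i/dbi),

with ẽi denoting agent i’s resulting effort.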
Note that the first term of this expression derives from agent i’s mistaken belief and is negative, as agent i exerts more than his individually optimal effort (recall that the marginal return to effort increases in ability). Moreover, the strategic effect is zero, as both agents are unaware of the other’s bias, i.e. dẽ-i/dbi = 0. Thus, we conclude:
Lemma 2 Being overconfident reduces agent i’s payoff if agent -i is unaware of this bias.
Case 2: One agent is aware, one unaware of the other’s bias
Suppose agent 2 is aware of the bias of agent 1 but agent 1 is still unaware of his colleague’s bias.13 Then, agent 1 maximizes his (incorrectly) perceived payoff (incorrectly) anticipating that agent 2 behaves rationally; and agent 1 disagrees with agent 2’s belief that agent 1 is overconfident. Thus, the maximization problem and the corresponding optimal effort of agent 1 remain the same as in Case 1, i.e. e1^01 = e1^00.14
For agent 2, however, things are different. In particular, agent 2 again maximizes his (incorrectly) perceived payoff but now accounts for agent 1’s overconfidence. Thus, as efforts are strategic complements, agent 2’s effort increases in b1 (because agent 1’s marginal return to effort increases in b1).
Note that there are now two reasons for agent 2 to increase his effort: (1) the biased perception of his own ability (which he is not aware of), and (2) the awareness of the colleague’s overconfidence. Thus, agent 2’s effort is not only higher than in the fully rational team, but also higher than his effort in the case where he is unaware of agent 1’s bias, i.e. e2^01 > e2^00 > e2*.
This implies that the team’s productivity is increased compared to the fully rational team and the team with two overconfident agents who are both unaware of their colleague’s bias.15
Moreover, the corresponding payoffs of the agents in this case are again given by their actual expected returns net of effort costs, evaluated at the resulting efforts (e1^01, e2^01).
Payoff comparison when one agent is unaware of the colleague’s bias.
A simple payoff comparison yields that if one agent, say agent 1, is unaware of agent 2’s bias, agent 2 is better off being unaware of the bias of agent 1, as the following sketch indicates.
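In sketch form: as noted above, agent 1’s effort is the same in both scenarios, e1^00 = e1^01, since he is unaware of agent 2’s bias in either case. Agent 2’s actual payoff Π2(e2, e1^00) = R2(e2, e1^00) − c(e2) is concave in his own effort by the SOC and is maximized at his true best response to e1^00; his effort exceeds this best response in both scenarios because of his own bias, and it exceeds it by more when he is aware of agent 1’s bias (e2^01 > e2^00). Hence,

Π2(e2^00, e1^00) ≥ Π2(e2^01, e1^01),

which yields the conclusion stated in Lemma 3 below (the superscript notation follows footnote 11).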
11Here as below, the double digit in the exponent (“00” in this case) refers to the agents’ awareness of biases: the first digit refers to agent 1 and the second to agent 2 (“0” indicating unawareness of the respective other agent’s bias and “1” indicating awareness of it).
12Note that, as both agents are unaware of the other’s bias, optimal effort levels only depend on each agent’s own bias.
13Due to the symmetry of the problem, the case that agent 1 is aware of agent 2’s bias follows immediately from interchanging the agents.
14Note that “01” in the exponent now indicates that agent 1 is unaware of agent 2’s bias while agent 2 is aware of agent 1’s bias.
15It can also be shown that the team’s productivity increases compared to the team with only one overconfident agent.
16Recall that the “10” in the exponent refers to the case where both agents are overconfident but only agent 1 is aware of the bias of agent 2, which is analogous to Case 2 except that the information structure is reversed. Note further that agent 2’s effort does not depend on agent 1’s awareness of agent 2’s bias, i.e. e2^10 = e2^00 and e2^11 = e2^01 (see also Appendix A).
Intuitively, accounting for agent 1’s overconfidence induces agent 2 to further increase his effort in an attempt to exploit effort complementarities. Yet, his effort is already above the individual optimum because of his own overconfidence and the further increase in effort is not complemented by agent 1. Thus, we conclude:
Lemma 3 If both agents are overconfident and agent 1 is unaware of the bias of agent 2, then agent 2, ceteris paribus, is better off if he is also unaware of agent 1’s bias than if he were aware of it.
Case 3: Both agents are aware of each other’s bias.
When both agents are aware of each other’s bias (but unaware of their own bias), both maximize their (incorrectly) perceived payoff (correctly) anticipating that the other agent is overconfident. Yet, agents disagree with the other agent’s belief that they are biased themselves. Hence, the situation is analogous to the situation of agent 2 in Case 2 where agent 2 is aware of agent 1’s bias; i.e. agent 2’s optimal effort level is determined as in Case 2.
However, for agent 1, who now takes into account the bias of agent 2, the optimal effort level is increased compared to Case 2, i.e. e1^11 > e1^01, as efforts are strategic complements and agent 2’s marginal return to effort is increasing in his bias. Note that under these conditions both agents increase their effort for two reasons: (1) their own overconfidence and (2) their attempt to complement their colleague’s increased effort.
Accordingly, the agents’ resulting payoffs in this case are given by their actual expected returns net of effort costs, evaluated at the efforts (e1^11, e2^11).
Payoff comparison when one agent is aware of the colleague’s bias.
Next, we consider agent 2 and compare his payoff for the case where he is aware of agent 1’s bias with the case where he is not – assuming that agent 1 is aware of agent 2’s bias. A comparison of agent 2’s payoff in both instances shows that being unaware of agent 1’s bias is preferable for agent 2,16 which holds as agent 2’s actual payoff is concave in his own effort and his effort when aware exceeds his effort when unaware, both lying above his individually optimal level.17 The intuition for this result is the same as before: Complementing agent 1’s additional effort is detrimental for agent 2 because agent 2’s effort is already above the optimum – due to his own bias – and because the further increase is not complemented by agent 1. Similar to the previous situation, we thus conclude:
Lemma 4 If both agents are overconfident and agent 1 is aware of the bias of agent 2, then agent 2, ceteris paribus, is better off if he is unaware of agent 1’s bias.
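A sketch of the comparison behind Lemma 4: assuming, in line with the observation in Appendix A that an agent’s effort depends only on his own awareness, that e1^10 = e1^11, agent 2’s actual payoff Π2(e2, e1^11) is again concave in his own effort and maximized below e2^10; since e2^11 > e2^10, it follows that

Π2(e2^10, e1^10) ≥ Π2(e2^11, e1^11).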
Consequences of Partial Awareness.
Finally, we want to briefly comment on the effects of partial awareness of biases; see Appendix A for a formal discussion. For the sake of argument, we assume that an agent who is “partially aware” of his colleague’s overconfidence assigns probability p ∈ [0, 1] to the case that his colleague has bias bi > 0, where bi is the true bias of agent i.18 As it turns out, partial awareness essentially reduces the strength of the effects discussed above while keeping the direction of changes. In particular, it holds (see Appendix A for a formal derivation):
17Irrespective of whether agent 2 is aware or unaware of agent 1’s bias, it is obviously better for agent 2 if agent 1 is aware of agent 2’s bias than if he is not.
18It is straightforward to generalize our analysis to more general cases of “partial awareness”, where an agent attaches different probabilities to different sizes of the bias.
19Here we consider only the case in which information about biases is optimal, i.e. biased agents are unaware of the biases of others. Similar results hold if one or both agents are (partially) aware of the bias of their colleague, albeit with slightly stricter restrictions on synergy effects and the size of the biases.
Lemma 5 An agent is best off being unaware of the colleague’s bias; and being partially aware is better than being fully aware. Moreover, for an overconfident agent it is optimal if his colleague is fully aware of the bias; and partial awareness is better than unawareness.
Comparison with Rational Team
In the previous sections, we have shown that within the proposed model of team production (1) overconfidence can be beneficial for the biased agent and (2) if an agent is overconfident, it is always best for him to be unaware of a potential bias of his colleague. In view of a general comparison between rational and overconfident agents, however, it is interesting to ask how individual payoffs in a team of two overconfident agents compare to those in a fully rational team. In the remainder of this section, we show that (under fairly weak conditions) individual payoffs in a team of two overconfident agents are higher than in a team of two rational agents.
Consider a situation in which both agents are overconfident but unaware of their colleague’s bias, i.e. a situation where overconfidence is present in its “individually optimal” form (i.e. it is combined with unawareness of the colleague’s bias). Then, both agents’ overconfidence is not complemented by a higher effort of the respective colleague through awareness of biases.
In order to obtain a clear picture of the individual payoff comparison for this scenario, let us first consider the case in which one agent, agent i, is biased and the other agent exerts his benchmark effort (e.g. because he is rational but unaware of his colleague’s bias). The relevant comparison for this case is sketched below.
In fact, the comparison remains positive also for small synergy effects if biases are moderate. Intuitively, this holds as a small bias of agent i induces only a moderate increase in agent i’s own effort. Hence, a smaller “synergetic feedback” through agent −i’s effort is required to “reimburse” the biased agent i.
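One way to organize this comparison in sketch form (with tildes denoting the efforts of the two biased, mutually unaware agents and asterisks the rational benchmark, as above) is the identity

Πi(ẽi, ẽ-i) − Πi(ei*, e-i*) = [Ri(ẽi, ẽ-i) − Ri(ẽi, e-i*)] − [Πi(ei*, e-i*) − Πi(ẽi, e-i*)].

The first bracket is the synergetic gain from the colleague’s higher effort, which is positive provided that ẽ-i > e-i* and returns increase in the colleague’s effort; the second bracket is the loss from agent i’s own excess effort when the colleague stays at the benchmark, which is nonnegative since ei* maximizes Πi(·, e-i*). The comparison is therefore positive whenever the synergetic gain outweighs this loss, e.g. for large synergy effects or, for small synergy effects, for moderate biases.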
Summing up, the above result in favor of overconfidence is rather intuitive as we have already seen that individual payoffs for a biased agent in a team of one overconfident and one rational agent are higher (cf. Section 3). The maximization problem of the overconfident agent, say agent 2, is the same in both the team with one and the team with two overconfident agents: He is biased himself (and unaware of his bias) and thinks his colleague, agent 1, is unbiased and, hence, will exert the same effort in both cases. Moreover, if the additional effort exerted by a rational agent 1 in order to complement agent 2’s additional effort (due to agent 2’s overconfidence) is enough to overcompensate agent 2 for his increased effort cost, then it is natural to expect that an overconfidence bias of agent 1 has a similar effect. Eventually, both the awareness of agent 2’s bias (of the rational agent 1) and the own overconfidence of agent 1 have a similar effort enhancing effect; and the higher effort of agent 1 (due to his overconfidence) is what compensates agent 2 for his additional cost.
Finally, it is interesting to note that the favorable comparison of individual payoffs in an overconfident team with those in a fully rational team does not depend on the overconfident agents’ unawareness of their colleague’s bias. In fact, even if one or both agents are (partially) aware of their colleague’s bias, individual payoffs are higher than those in a fully rational team if either synergy effects are comparably large, or if synergy effects are small and biases are moderate; see Appendix B for a more detailed argument. Proposition 1 below qualitatively summarizes the main points of the preceding discussion.
Proposition 1 For the above model of team production with synergy effects, the following results hold:
i. Individual payoffs in a team of one overconfident and one rational agent are higher than those in a team with two rational agents - provided that the rational agent is aware of his colleague’s bias and either synergy effects are sufficiently large, or synergy effects are small and the bias is moderate.
ii. The individual payoff of an overconfident agent whose colleague is also overconfident is always higher if he is not aware of his colleague’s bias (irrespective of whether the colleague is aware of the other agent’s bias).
iii. Individual payoffs in a team of two overconfident agents which are both unaware of the other’s bias are higher than those in a team of two rational agents - provided that either synergy effects are sufficiently large, or synergy effects are small and biases are moderate.19
Conclusion
In this paper, we have considered an intuitive model of team production with effort complementarities in order to emphasize the potentially positive effects of being overconfident. As we have shown, a more rational perspective on others, i.e. awareness of the overconfidence of others, is suboptimal for an agent who is overconfident himself. More specifically, within the considered model of team production, the payoff of an overconfident agent, whose colleague is also overconfident, is always higher if he is unaware of his colleague’s bias. Thus, although the empirical evidence on the matter is scarce, our results provide a possible rationale for why many people appear to be unaware of the overconfidence biases of others [29,30].
Moreover, we have shown that individual payoffs in both a team of a rational and an overconfident agent as well as in a team of two overconfident agents are higher than in a team of two rational agents whenever either synergy effects are sufficiently large or biases are moderate. Thus, the present analysis gives further support to the notion that being overconfident is beneficial not only in view of aggregate outcomes (as overconfidence seems to enhance effort and therefore team productivity) but also for the overconfident individuals themselves (see also [29]). In fact, the analysis also suggests that overconfident agents have no incentive to gather information about a colleague’s potentially biased self-perception (even if such information were costless). Thus, our results provide a possible rationale for why overconfidence may indeed be (and remain) as widespread a phenomenon as empirical and experimental research indicates.
Appendix
A. Partial Awareness
In order to model a situation in which agent i is uncertain of his colleague’s bias, we assume that agent i assigns probability p ∈ [0, 1] to the case that his colleague −i has bias b−i > 0 and otherwise is unbiased. For the sake of argument, suppose i = 1. Thus, agent 1 believes that with probability p agent 2 follows strategy ẽ2 (where the tilde denotes that agent 2 is biased) and with probability 1 − p strategy e2. Accordingly, agent 1 maximizes his perceived expected payoff, taking expectations over agent 2’s strategy; the problem is sketched below.
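In sketch form, writing R̃1 for the return as perceived by agent 1 at his inflated ability a + b1 (a notation introduced here), agent 1 solves

max_{e1}  p · R̃1(e1, ẽ2) + (1 − p) · R̃1(e1, e2) − c(e1),

with FOC

p · ∂R̃1(e1, ẽ2)/∂e1 + (1 − p) · ∂R̃1(e1, e2)/∂e1 = c′(e1).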
Since (by assumption) an agent’s effort rises in his ability and, thus, in his bias, we have ẽ2 > e2. Moreover, as efforts are strategic complements, the perceived marginal return to agent 1’s effort is larger against ẽ2 than against e2.
Hence, the left-hand side of the FOC must be increasing in p. For p = 1, agent 1 attaches probability one to the case that agent 2 has bias b2 (which corresponds to the case that agent 1 is completely aware of agent 2’s bias). In this case, the FOC reduces to ∂R̃1(e1, ẽ2)/∂e1 = c′(e1).
Obviously, the left-hand side of this FOC is larger than if agent 1 is unaware or only partially aware of agent 2’s bias. Hence, also the right-hand side, and thus agent 1’s effort, must be larger.
Note that agent 2’s effort does not depend on agent 1’s awareness of agent 2’s bias but only on agent 2’s awareness of his colleague’s bias (see also the discussion of Cases 1-3 in Section 4).
Thus, we conclude
Lemma A.1 An agent is best off being unaware of the colleague’s bias; and being partially aware is better than being fully aware.
For agent 2 it also holds that his effort is higher if he is partially aware than if he is unaware, and highest if he is aware of agent 1’s bias - irrespective of agent 1’s awareness status x ∈ [0, 1].
Thus, we conclude
Lemma A.2 An overconfident agent is best off if the colleague is aware of the bias; and partial awareness is better than unawareness.
B. Comparison: 2 Overconfident vs. 2 Rational Agents
i. Both overconfident agents are aware of their colleague’s bias.
If both agents are overconfident and aware of their colleague’s bias, individual payoffs are higher than in the fully rational team provided that synergy effects are sufficiently large. And this also holds if biases are moderate (as a small bias results in a moderate increase in effort and therefore a smaller “synergetic feedback” through agent −i’s effort is required).
ii. Only one overconfident agent is aware of his colleague’s bias.
Similar to the above argument, individual payoffs are again higher than for a fully rational team if synergy effects are sufficiently large or biases are moderate.
Acknowledgments
We are grateful to Martin Kocher and Armin Schmutzler for helpful comments and suggestions. Financial support of the German Research Foundation (DFG), through SFB/TR 15 at the Universities of Bonn and Munich, is gratefully acknowledged. The usual disclaimer applies.
References
- Larwood L, Whittaker W (1977) Managerial Myopia: Self-Serving Biases in Organizational Planning. J Appl Psychol 62(2): 194-198.
- Weinstein ND (1980) Unrealistic Optimism about Future Life Events. J Personality and Social Psychol 39(5): 806-820.
- Svenson O (1981) Are We All Less Risky and More Skillful than Our Fellow Drivers?. Acta Psychologica 47(2): 143-148.
- Taylor SE, Brown JD (1988) Illusion and Well-Being: A Social Psychological Perspective on Mental Health. Psychological Bulletin 103(2): 193-210.
- Alicke MD, Govorun O (2005) The Better-Than-Average Effect. The Self in Social Judgment, Psychology Press, Philadelphia pp. 83-106.
- Moore DA, Healy PJ (2008) The Trouble with Overconfidence. Psychol Rev 115(2): 502-517.
- Skata D (2008) Overconfidence in Psychology and Finance - an Interdisciplinary Literature Review. Bank i Kredyt 4: 33-50.
- Gigerenzer G, Hoffrage U, Kleinbölting H (1991) Probabilistic Mental Models: A Brunswikian Theory of Confidence. Psychological Rev 98(4): 506-528.
- Juslin P (1994) The Overconfidence Phenomenon as a Consequence of Informal Experimenter-Guided Selection of Almanac Items. Org Behav and Human Decision Processes 57(2): 226-246.
- Griffin D, Tversky A (1992) The Weighing of Evidence and the Determinants of Confidence. Cognitive Psychol 24(3): 411-435.
- Kahneman D, Tversky A (1996) On the Reality of Cognitive Illusions: A Reply to Gigerenzer’s Critique. Psychological Rev 103(3): 582-591.
- Koehler DJ, Brenner L, Griffin D (2002) The Calibration of Expert Judgment: Heuristics and Biases Beyond the Laboratory, In: Gilovich T, Griffin D and Kahneman D (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment, Cambridge University Press, Cambridge.
- Grinblatt M, Keloharju M (2009) Sensation Seeking, Overconfidence, and Trading Activity. J Finance 64(2): 549-578.
- Malmendier U, Tate G (2005) CEO Overconfidence and Corporate Investment. J Finance 60(6): 2661-2700.
- Malmendier U, Tate G (2008) Who Makes Acquisitions? CEO Overconfidence and the Market’s Reaction. J Financial Econom 89(1): 20-43.
- Heaton JB (2002) Managerial Optimism and Corporate Finance. Financial Management 31: 33-45.
- Hirshleifer D, Luo GY (2001) On the Survival of Overconfident Traders in a Competitive Securities Market. J Financial Markets 4(1): 73-84.
- Kyle AS, Wang FA (1997) Speculation Duopoly with Agreement to Disagree: Can Overconfidence Survive the Market Test?. J Finance 52(5): 2073-2090.
- Hoffman M, Burks S (2020) Worker overconfidence: Field evidence and implications for employee turnover and firm profits. Quantitative Economics 11(1): 315-348.
- Felson RB (1984) The Effects of Self-Appraisals of Ability on Academic Performance. J Personality Social Psychol 47(5): 944-952.
- Locke EA, Latham GP (1990) A Theory of Goal Setting and Task Performance. Prentice Hall, Englewood Cliffs, NJ.
- Heath C, Larrick RP, Wu G (1999) Goals as Reference Points. Cognitive Psychol 38(1): 79-109.
- Zhang Y, Fishbach A (2010) Counteracting Obstacles with Optimistic Predictions. J Experimental Psychology: General 139(1): 16-31.
- Santos-Pinto L (2008) Positive Self-Image and Incentives in Organizations. Econom J 118: 1315-1332.
- Hakenes H, Katolnik S (2018) Optimal team size and overconfidence. Group Decision and Negotiation 27(4): 665-687.
- De la Rosa L (2007) Overconfidence and Moral Hazard, Danish Centre for Accounting and Finance, Working Paper No 24.
- Gervais S, Goldstein I (2007) The Positive Effects of Biased Self- Perceptions in Firms. Rev Finance 11(3): 453-496.
- Hvide HK (2002) Pragmatic Beliefs and Overconfidence. J Econom Behav Org 48(1): 15-28.
- Ludwig S, Nafziger J (2011) Beliefs About Overconfidence. Theory and Decision 70: 475-500.
- Bruhin A, Petros F, Santos-Pinto L (2024) The role of self-confidence in teamwork: Experimental evidence. Exp Econom p. 1-26.
- Ludwig S, Wichardt P, Wickhorst H (2011) Overconfidence can improve an agent’s relative and absolute performance in contests. Econom Letters 110(3): 193-196.
- Alchian AA, Demsetz H (1972) Production, Information Costs, and Economic Organization. Am Econom Rev 62: 777-795.
- Grossman SJ, Hart OD (1986) The Costs and Benefits of Ownership: A Theory of Vertical and Lateral Integration. J Political Econom 94(4): 691-719.
- Alchian AA, Woodward SE (1987) Reflections on the Theory of the Firm. J Institut Theoretical Econom 143(1): 110-136.
- Aghion P, Tirole J (1994) The Management of Innovation. Quarterly J Econom 109(4): 1185-1209.
- Jensen MC, Meckling WH (1995) Specific and General Knowledge and Organizational Structure. J Applied Corporate Finance 8(2): 4-18.
- Holmström B, Roberts J (1998) The Boundaries of the Firm Revisited. J Economic Perspectives 12(4): 73-94.
- Langer EJ (1975) The Illusion of Control. J Personal Soc Psychol 32(2): 311-328.
- Morris S (1996) Speculative Investor Behavior and Learning. Quarterly J Econom 111(4): 1111-1133.
- Squintani F (2006) Equilibrium and Mistaken Self-Perception. Econom Theory 27: 615-641.
- Nagel R (1995) Unravelling in Guessing Games: An Experimental Study. Am Economic Rev 85(5): 1313-1326.
- Costa-Gomes M, Crawford V, Broseta B (2001) Cognition and Behavior in Normal-Form Games: An Experimental Study. Econometrica 69(5): 1193-1235.
- Crawford V, Iriberri N (2007) Fatal Attraction: Salience, Naïveté, and Sophistication in Experimental “Hide-and-Seek” Games. Am Econom Rev 97(5): 1731-1770.