Social Desirability Biased Responding: Are Researchers Listening?
Bruce M King*
*Department of Psychology, Clemson University, USA
Submission: June 05, 2024; Published: June 12, 2024
*Corresponding author: Bruce M King, Department of Psychology, Clemson University, USA. Email: bking2@clemson.edu
How to cite this article: King BM. Social Desirability Biased Responding: Are Researchers Listening?. Psychol Behav Sci Int J. 2024; 21(5): 556075. DOI: 10.19080/PBSIJ.2024.21.556075.
Abstract
In surveys that rely on self-reported behaviors, under-reporting and over-reporting is common and often extreme. This includes surveys of energy intake, body weight, use of drugs and alcohol, sexual behaviors, and political surveys. In one national study that compared respondents’ answers with actual measurements, nearly 40% of high school students over-reported their heights by 3 inches or more. The major cause of misreporting is social desirability biased responding, which refers to respondents who consciously misreport to make themselves look better. Having respondents answer questions anonymously is only minimally effective in reducing misreporting. Reliable survey tools for assessing social desirability have been available for decades, yet researchers who rely on self-reported behaviors continue to ignore the issue. In recent years, fewer than 5% of survey studies published in sexuality, ethics, and accounting journals had controlled for social desirability biased responding. Public policy can only be as effective as the truthfulness of the data on which it is based. Researchers conducting survey research are urged to include a measure of social desirability bias in their experimental design.
Keywords: Survey Research; Self-reported Behaviors; Misreporting; Social Desirability; Test Validity; Political surveys
Misreporting in Health and Behavioral Science Research
The Centers for Disease Control and Prevention (CDC) biennially conducts a survey of U.S. high school students’ risky behaviors titled the Youth Risk Behavior Surveillance (YRBS) [1]. These include self-reported use of alcohol, drugs and tobacco, as well as dietary and sexual behaviors. However, in a study of the validity of their own results, CDC researchers took actual measurements after the survey and found that high school students, on average, over-reported their heights by 2.7 inches, with 39.5% over-reporting by at least 3 inches [2]. Many students under-reported their body weight. The net effect was that for 12.7% of students, body mass index was under-reported by at least 5 kg/m².
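The size of such distortions follows directly from the BMI formula (weight in kilograms divided by the square of height in meters). The figures below are a hypothetical illustration, not data from the CDC study; they show how over-reporting height by about 3 inches and under-reporting weight by a few kilograms together under-state BMI:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

# Hypothetical student: measured values vs. self-reported values
# (reported ~3 inches taller and ~13 lb lighter than measured).
measured_bmi = bmi(weight_kg=90.0, height_m=1.70)  # ~31.1 kg/m^2
reported_bmi = bmi(weight_kg=84.0, height_m=1.78)  # ~26.5 kg/m^2

print(f"measured BMI: {measured_bmi:.1f}")
print(f"reported BMI: {reported_bmi:.1f}")
print(f"under-reported by: {measured_bmi - reported_bmi:.1f} kg/m^2")
```

For this hypothetical respondent, two modest misreports combine to shift BMI by roughly 4.6 kg/m², enough to move the classification from the obese range to the overweight range.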
Numerous other studies using the gold standard (actual measurements vs. self-reports) have also found that many people, not just high school students, under-report their energy intake and body weight, often by 30% or more, and over-report their height [3-13]. In one study, up to 14% of people under-reported their energy intake to such an extent they were called “extreme under-reporters” [14]. The under-reporting of energy intake and body weight is so common and often extreme that one group of researchers concluded that self-reports of energy intake are “fundamentally and fatally flawed” [4, p. 911]. Another researcher called these self-reported data “implausible” [15]. The misreporting is only minimally due to bad memory [4]. For example, many adults with obesity also under-report on inventories of high-calorie foods in their homes [16]. Instead, there is “robust evidence of social desirability bias” [6, p. 198].
Social desirability biased responding refers to “the need of [individuals] to obtain approval by responding in a culturally appropriate manner” [17, p. 353]. The higher one’s level of social desirability, the more likely he or she is to over-report desirable behaviors and under-report undesirable behaviors on surveys of personal behaviors. Concerns about social desirability biased responding were first expressed over 90 years ago [18]. The component of social desirability biased responding that is of greatest concern to researchers is called impression management, which refers to respondents who consciously under- or over-report to make themselves look better [19]. The degree of misreporting depends on the sensitivity of the issue, the mode of data collection and interviewer’s characteristics (e.g., face-to-face interview versus anonymous testing), and the wording of questions [20-21].
Several studies have observed statistically significant correlations between degree of under-reporting of energy intake and level of social desirability [22-27]. Under-reporting of smoking, use of alcohol and illicit drugs, and adolescent reckless driving is common and is also significantly correlated with social desirability [28-33].
Studies have found that social desirability response bias also affects self-reports about HIV serostatus and risky sexual behaviors (e.g., receptive anal intercourse) [34-36]. Men over-report their use of condoms [37-39] and erect penis size [40] and under-report their engagement in extramarital affairs [41]; these, too, are associated with high social desirability scores [36, 40-41].
An early sexuality study found that respondents’ answers changed when they were told that questions were going to be repeated while taking a polygraph test [42]. In another study, over half of adolescents denied having ever had a sexually transmitted infection, yet hospital records indicated that they had been treated [43]. There is ample evidence of under- or over-reporting for many other sexual behaviors [44]. In health research, qualitative studies are “very susceptible” to social desirability biased responding [45]. Deliberate misreporting is also common in the behavioral sciences (unrelated to health). For example, in the field of political science, self-reported voter turnout in national elections has far exceeded actual turnout for decades [46-48]. Educated people who express an interest in politics are the most likely to over-report. The over-reporting of voter turnout is attributed to socially desirable responding [49]. In the 2016 presidential election, people who were more likely to comply with social science norms were less likely to show support for Trump in preelection polls, yet many obviously voted for him [50].
On ballot measures regarding same-sex marriage, opposition on election day is 5% to 7% greater than is found in preelection polls [51].
Social desirability bias also affects expressed racial attitudes in political surveys [52] and attitudes about restrictive immigration [53]. Social desirability biased responding is found in many cultures. For example, in low-income African countries, men are more likely to oppose women’s political rights when they are interviewed by a man [54]. In another African study, people gave different answers depending on whether the interviewer was from the same ethnic group [55]. Among non-pregnant Indian women, self-reported use of smokeless tobacco was found to be 20.6% lower when interviews were done while their husband was present [56].
In summary, conscious misreporting for self-reported behaviors is common and frequently extreme. In a review of anthropology studies, respondent misreporting was called “a well-kept open secret” [57, p. 504].
Assessing for Social Desirability Biased Responding
Researchers who conduct surveys have long assumed that if respondents are allowed to answer questions anonymously, they will answer honestly. However, studies have shown that answering questions anonymously only minimally reduces social desirability biased responding [58]. The CDC’s YRBS has respondents answer questions anonymously, yet recall that 39.5% of high-school students over-reported their height by at least 3 inches [2]. How likely is it that these same individuals were truthful when answering sensitive questions about their use of drugs and alcohol, and their experiences with risky sexual behaviors? Today, un-proctored computer-assisted self-administered techniques have replaced the standard paper-and-pencil survey, but a meta-analysis of these studies found that they are no better at reducing social desirability biased responding [59].
There are some excellent scales to measure social desirability bias. The most widely used is the Marlowe-Crowne Scale, a 33-item scale that can be used in all fields of research [17]. For brevity, a 13-item short form is available [60]. There is also a 20-item scale developed specifically to measure impression management, the Balanced Inventory of Desirable Responding [61], but recent research shows that the Marlowe-Crowne Scale may still be superior [62]. It is not the intent of this paper to assess which technique is better, but instead to acknowledge that several techniques are available to researchers to assess social desirability bias.
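Scales of this kind are scored by counting responses that match a socially desirable answer key. The sketch below illustrates only the scoring mechanics; the items and key are hypothetical placeholders, not the content of any published instrument (real use requires the instrument itself, e.g. [17]):

```python
# Keyed true/false scoring in the style of social desirability scales.
# SD_KEY maps a (hypothetical) item number to the response that signals
# socially desirable responding.
SD_KEY = {
    1: True,   # e.g., affirming an improbably virtuous behavior
    2: False,  # e.g., denying a near-universal human failing
    3: True,
    4: False,
}

def social_desirability_score(responses: dict[int, bool]) -> int:
    """Count the respondent's answers that match the socially desirable key."""
    return sum(responses[item] == keyed for item, keyed in SD_KEY.items())

respondent = {1: True, 2: False, 3: False, 4: False}
print(social_desirability_score(respondent))  # prints 3 (3 of 4 keyed responses)
```

Higher totals indicate a stronger tendency toward socially desirable responding; in practice the total is then used alongside the substantive measure, as described next.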
Regardless of which measurement tool is used, logistic regression can be used to adjust raw scores [63]. In brief, the researcher measures “socially desirable response tendency [e.g., using the Marlowe-Crowne Scale] alongside a measure of interest and then adjusts raw scores on that measure by an amount commensurate with the degree of socially desirable responding” [34, p. 97].
Are Researchers Listening?
The CDC says that “educators, parents, local decision makers … use YRBSS data to … develop local and state policy” [64, p. 1]. However, policy can only be as effective as the truthfulness of the data on which it is based. While the CDC makes a great effort to obtain a nationally representative sample, it gives only passing mention to its previous findings of extreme under- and over-reporting (“the extent … cannot be determined,” p. 11). The CDC’s research group knows their respondents’ answers are often untruthful but continues to present them as fact.
The YRBS is just one example of large nationally representative surveys that ask sensitive questions but include no measure of social desirability bias. Others include the National Health and Nutrition Examination Survey, National Survey of Family Growth, and National Survey of Sexual Health and Behavior.
Tests for social desirability bias are equally applicable to smaller studies using convenience samples, but the authors of these studies have also generally ignored the possibility of respondents under- and/or over-reporting. In a recent study, it was found that fewer than 5% of survey studies in accounting and ethics research had controlled for social desirability biased responding [65]. Surveys used by sexuality researchers almost always include personal and sensitive items for which answers cannot be authenticated by the gold standard. An examination by this author of survey studies (excluding interviews) published in The Journal of Sex Research for the three-year period 2022 through 2024 revealed that only 3.6% of studies employed a measure of social desirability responding or authentication of self-reported behaviors. Similarly, an examination of papers published in the same three-year period in American Journal of Sexuality Education and Sex Education, journals that publish studies of the effects of teaching sexuality education on behaviors, attitudes and opinions, revealed that none of 46 papers using surveys included a measure of social desirability responding.
Conclusion and Recommendation
After decades of warnings about social desirability biased responding on surveys [17-18, 20-21, 44, 57] and the development of several methods to assess such bias [17, 60-61], there appears to be little concern by researchers using surveys about the truthfulness of their respondents’ answers. Left to themselves, researchers continue to present self-reported behaviors as factual data. If change is to occur, editors of journals must begin to urge that researchers using surveys of self-reported behaviors include a measurement of social desirability biased responding in their experimental design.
References
- Centers for Disease Control and Prevention (2023) Youth Risk Behavior Surveillance - United States, 2021. MMWR 72(SS-01): 1-100.
- Brener ND, McManus T, Galuska DA, Lowry R, Wechsler H (2003) Reliability and validity of self-reported height and weight among high school students. J Adolesc Health 32(4): 281-284.
- Archer E, Hand GA, Blair SN (2013) Validity of U.S. nutritional surveillance: National Health and Nutrition Examination Survey caloric intake data, 1971-2010. PLoS One 8(10): e76632.
- Archer E, Pavela G, Lavie CJ (2015) The inadmissibility of What We Eat in America and NHANES dietary data in nutrition and obesity research and the scientific formulation of national dietary guidelines. Mayo Clin Proc 90(7): 911-926.
- Braam LAJLM, Ocké MC, Bueno-de-Mesquita HB, Seidell JC (1998) Determinants of obesity-related underreporting of energy intake. Am J Epidemiol 147(11): 1081-1086.
- Burke MA, Carman KG (2017) You can be too thin (but not too tall): Social desirability bias in self-reports of weight and height. Econ Hum Biol 27(Part A): 198-222.
- Connor Gorber S, Tremblay M, Moher D, Gorber B (2007) A comparison of direct vs. self-report measures for assessing height, weight, and body mass index: A systematic review. Obes Rev 8(4): 307-326.
- Lissner L, Troiano RP, Midthune D, Heitmann BL, Kipnis V, et al. (2007) OPEN about obesity: Recovery biomarkers, dietary errors and BMI. Int J Obes 31(6): 956-961.
- Mela DJ, Aaron JI (1997) Honest but invalid: What subjects say about recording their food intake. J Am Diet Assoc 97(7): 791-793.
- Merrill RM, Richardson JS (2009) Validity of self-reported height, weight, and body mass index: Findings from the National Health and Nutrition Examination Survey. Prev Chronic Dis 6(4): A121.
- Nyholm M, Gullberg B, Merlo J, Lundqvist-Persson C, Råstam L, et al. (2007) The validity of obesity based on self-reported weight and height: Implications for population studies. Obesity 15(1): 197-208.
- Palta M, Prineas RJ, Berman R, Hannan P (1982) Comparison of self-reported and measured height and weight. Am J Epidemiol 115(2): 223-230.
- Subar AF, Kipnis V, Troiano RP, Midthune D, Schoeller DA, et al. (2003) Using intake biomarkers to evaluate the extent of dietary misreporting in a large sample of adults: The OPEN Study. Am J Epidemiol 158(1): 1-13.
- Ferrari P, Slimani N, Ciampi A, Trichopoulou A, Naska A, et al. (2002) Evaluation of under- and over-reporting of energy intake in the 24-hour diet recalls in the European Prospective Investigation into Cancer and Nutrition (EPIC). Public Health Nutr 5(6B): 1329-1345.
- Ioannidis JP (2013) Implausible results in human nutrition research. BMJ 347: f6698.
- King BM, Ivester AN, Burgess PD, Shappell KM, Coleman KL, et al. (2016) Adults with obesity underreport high-calorie foods in the home. Health Behav Policy Rev 3(5): 439-443.
- Crowne D, Marlowe D (1960) A new scale of social desirability independent of psychopathology. J Consult Psychol 24(4): 349-354.
- Bernreuter RG (1933) Validity of the personality inventory. Personnel J 11: 383-386.
- Paulhus DL (1984) Two-component models of socially desirable responding. J Pers Soc Psychol 46(3): 598-609.
- Krumpal I (2013) Determinants of social desirability bias in sensitive surveys: A literature review. Qual Quant 47(4): 2025-2047.
- Tourangeau R, Yan T (2007) Sensitive questions in surveys. Psychol Bull 133(5): 859-883.
- Hebert JR, Peterson KE, Hurley TG, Stoddard AM, Cohen N, et al. (2001) The effect of social desirability trait on self-reported dietary measures among multi-ethnic female health center employees. Ann Epidemiol 11(6): 417-427.
- Hebert JR, Ebbeling CB, Matthews CE, Hurley TG, Ma Y, et al. (2002) Systematic errors in middle-aged women’s estimates of energy intake: Comparing three self-report measures to total energy expenditure from doubly labeled water. Ann Epidemiol 12(8): 577-588.
- Scagliusi FB, Polacow VO, Artioli GG, Benatti FB, Lancha AH (2003) Selective underreporting of energy intake in women: Magnitude, determinants, and effect of training. J Am Diet Assoc 103(10): 1306-1313.
- Scagliusi FB, Ferriolli E, Pfrimer K, Laureano C, Cunha CSF, et al. (2009) Characteristics of women who frequently underreport their energy intake: A doubly labelled water study. Eur J Clin Nutr 63(10): 1192-1199.
- Taren DL, Tobar M, Hill A, Howell W, Shisslak C, et al. (1999) The association of energy intake bias with psychological scores of women. Eur J Clin Nutr 53(7): 570-578.
- Tooze J, Subar AF, Thompson FE, Troiano R, Schatzkin A, et al. (2004) Psychosocial predictors of energy underreporting in a large doubly labeled water study. Am J Clin Nutr 79(5): 795-804.
- Bradley G, Wildman K (2002) Psychosocial predictors of emerging adults’ risk and reckless behaviors. J Youth Adolesc 31(4): 253-265.
- Davis CG, Thake J, Vilhena N (2010) Social desirability biases in self-reported alcohol consumption and harms. Addict Behav 35(4): 302-311.
- Delaney-Black V, Chiodo LM, Hannigan JH, Greenwald MK, Janisse J, et al. (2010) Just say “I don’t”: lack of concordance between teen report and biological measures of drug use. Pediatrics 126(5): 887-893.
- Latkin CA, Edwards C, Davey-Rothwell M, Tobin KE (2017) The relationship between social desirability bias and self-reports of health, substance abuse, and social network factors among urban substance abuse users in Baltimore, Maryland. Addict Behav 73: 133-136.
- Marissen MAE, Franken IHA, Blanken P, van den Brink W, Hendriks VM (2006) The relation between social desirability and different measures of heroin craving. J Addict Dis 24(4): 91-103.
- Scheuermann TS, Richter KP, Rigotti NA, Cummins SE, Harrington KF, et al. (2017) Accuracy of self-reported smoking abstinence in clinical trials of hospital-initiated smoking interventions. Addiction 112(12): 2227-2236.
- Gibson DR, Hudes ES, Donovan D (1999) Estimating and correcting for response bias in self-reported HIV risk behavior. J Sex Res 36(1): 96-101.
- Latkin CA, Vlahov D (1998) Socially desirable response tendency as a correlate of accuracy of self-reported HIV serostatus for HIV seropositive injection drug users. Addiction 93(8): 1191-1197.
- Rao A, Tobin K, Davey-Rothwell M, Latkin CA (2017) Social desirability bias and prevalence of sexual HIV risk behaviors among people who use drugs in Baltimore, Maryland: Implications for identifying individuals prone to underreporting sexual risk behaviors. AIDS Behav 21(7): 2207-2214.
- Davoli M, Perucci CA, Sangalli M, Brancato G, Dell’Uomo G (1992) Reliability of sexual behavior data among high school students in Rome. Epidemiology 3(6): 531-535.
- Ellish NJ, Weisman CS, Celentano D, Zenilman JM (1996) Reliability of partner reports of sexual history in a heterosexual population at a sexually transmitted diseases clinic. Sex Transm Dis 23(6): 446-452.
- Zenilman JM, Weisman CS, Rompalo AM, Ellish N, Upchurch DM, et al. (1995) Condom use to prevent incident STDs: The validity of self-reported condom use. Sex Transm Dis 22(1): 15-21.
- King BM, Duncan LM, Clinkenbeard KM, Rutland MB, Ryan KM (2019) Social desirability and young men’s self-reports of penis size. J Sex Marital Ther 45(5): 452-455.
- Zapien N (2017) Participation bias and social desirability effects in research on extramarital affairs: Considerations of meaning and implications for sexual behavior research. Arch Sex Behav 46(6): 1565-1571.
- Clark J, Tifft L (1966) Polygraph and interview validation of self-reported deviant behavior. Am Sociol Rev 31(4): 516-523.
- Clark LR, Brasseux C, Richmond D, Getson P, D’Angelo LJ (1997) Are adolescents accurate in self-report of frequencies of sexually transmitted diseases and pregnancies? J Adolesc Health 21(2): 91-96.
- King BM (2022) The influence of social desirability on sexual behavior surveys: A review. Arch Sex Behav 51(3): 1495-1501.
- Bispo JP (2022) Social desirability bias in qualitative health research. Rev Saúde Pública 56: 101.
- Enamorado T, Imai K (2019) Validating self-reported turnout by linking public opinion surveys with administrative records. Public Opin Q 83(4): 723-748.
- Jackman S, Spahn B (2019) Why does the American national election study overestimate voter turnout? Polit Anal 27(2): 193-207.
- Silver BD, Anderson BA, Abramson PR (1986) Who overreports voting? Am Polit Sci Rev 80(2): 613-624.
- Cuevas-Molina I (2023) Response latencies as evidence of social desirability bias in voter turnout overreports. Am Politics Res 51(5): 670-680.
- Klar S, Weber CR, Krupnikov Y (2016) Social desirability bias in the 2016 presidential election. The Forum 14(4): 433-443.
- Powell RJ (2013) Social desirability bias in polling on same-sex marriage ballot measures. Am Politics Res 41(6): 1052-1070.
- Morning A, Brückner H, Nelson A (2019) Socially desirable reporting and the expression of biological concepts of race. DuBois Rev 16(2): 439-455.
- Janus AL (2010) The influence of social desirability pressures on expressed immigration attitudes. Soc Sci Q 91(4): 928-946.
- Sundström A, Stockemer D (2020) Measuring support for women’s political leadership: Social desirability and gendered interviewer effects among African respondents. QoG Work Pap Ser 8: 1-51.
- Adida CL, Ferree KE, Posner DN, Robinson AL (2016) Who’s asking? Interviewer co-ethnicity effects in African survey data. Comp Polit Stud 49(12): 1630-1660.
- Singh PK, Jain P, Singh N, Singh L, Kumar C, et al. (2022) Social desirability and under-reporting of smokeless tobacco use among reproductive age women: Evidence from National Family Health Survey. SSM-Popul Health 19: 101257.
- Bernard HR, Killworth P, Kronenfeld D, Sailer L (1984) The problem of informant accuracy: The validity of retrospective data. In Annu Rev Anthropol vol. 13, eds. BJ Siegel, AR Beals, SA Tayler, Palo Alto, CA: Annual Reviews, Inc pp. 495-517.
- Dalal DK, Hakel MD (2016) Experimental comparisons of methods for reducing deliberate distortions to self-report measures of sensitive constructs. Organ Res Methods 19(3): 475-505.
- Gnambs T, Kaspar K (2017) Socially desirable responding in web-based questionnaires: A meta-analytic review of the candor hypothesis. Assessment 24(6): 746-762.
- Reynolds WM (1982) Development of reliable and valid short forms of the Marlowe-Crowne Social Desirability Scale. J Clin Psychol 38(1): 119-125.
- Paulhus DL (2002) Manual for Balanced Inventory of Desirable Responding (BIDR-7). Toronto: Multi-Health Systems.
- Lambert CE, Arbuckle SA, Holden RR (2016) The Marlowe-Crowne Social Desirability Scale outperforms the BIDR impression management scale for identifying fakers. J Res Pers 61: 80-86.
- Paulhus DL (1991) Measurement and control of response bias. In Measurement of Personality and Social Psychological Attitudes, eds. JP Robinson, PR Shaver, LS Wrightsman: Academic Press.
- Mpofu J, Underwood JM, Thornton JE, Brener ND, Rico A, et al. (2023) Overview and methods for the Youth Risk Behavior Surveillance System - United States, 2021. MMWR Supplements 72(1): 1-12.
- Bernardi RA, Nash J (2023) The importance and efficacy of controlling for social desirability response bias. Ethics Behav 33(5): 413-429.