Key publications
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand D (In Press). Shifting attention to accuracy can reduce misinformation online. In press, Nature. Preprint: https://psyarxiv.com/3n9u8
Mosleh M, Pennycook G, Arechar AA, Rand DG (2021). Cognitive reflection correlates with behavior on Twitter. Nat Commun, 12(1).
Abstract:
We investigate the relationship between individual differences in cognitive reflection and behavior on the social media platform Twitter, using a convenience sample of N = 1,901 individuals from Prolific. We find that people who scored higher on the Cognitive Reflection Test (a widely used measure of reflective thinking) were more discerning in their social media use, as evidenced by the types and number of accounts followed, and by the reliability of the news sources they shared. Furthermore, a network analysis indicates that the phenomenon of echo chambers, in which discourse is more likely with like-minded others, is not limited to politics: people who scored lower in cognitive reflection tended to follow a set of accounts that are avoided by people who scored higher in cognitive reflection. Our results help to illuminate the drivers of behavior on social media platforms and challenge intuitionist notions that reflective thinking is unimportant for everyday judgment and decision-making.
Mosleh M, Martel C, Eckles D, Rand DG (2021). Shared partisanship dramatically increases social tie formation in a Twitter field experiment. Proc Natl Acad Sci U S A, 118(7).
Abstract:
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow back bots whose partisanship matched their own, and this was true regardless of the bot's strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.
Mosleh M, Kyker K, Cohen JD, Rand DG (2020). Globalization and the rise and fall of cognitive control. Nature communications, 11, 1-10.
Mosleh M, Martel C, Eckles D, Rand D (2020). Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment.
Stewart AJ, Mosleh M, Diakonova M, Arechar AA, Rand DG, Plotkin JB (2019). Information gerrymandering and undemocratic decisions. Nature, 573, 117-121.
Publications by category
Journal articles
Mosleh M, Pennycook G, Rand DG (In Press). Field experiments on social media.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, limiting its ability to identify causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social media content can be randomized, and the impact on attitudes and beliefs measured using surveys; or exposure to treatments can be randomized within survey experiments, and their impact observed on subsequent online behavior. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment - e.g. via social tie invitations, private messages, or public posts - without revealing that they are part of an experiment, and the impacts on subsequent online behavior observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Mosleh M, Rand DG (In Press). Measuring exposure to misinformation from political elites on Twitter.
Abstract:
Misinformation can come directly from public figures and organizations (referred to here as “elites”). Here, we develop a tool for measuring Twitter users’ exposure to misinformation from elites based on the public figures and organizations they choose to follow. Using a database of professional fact-checks by PolitiFact, we calculate falsity scores for 816 elites based on the veracity of their statements. We then assign users an elite misinformation-exposure score based on the falsity scores of the elites they follow on Twitter. Users’ misinformation-exposure scores are negatively correlated with the quality of news they share themselves, and positively correlated with estimated conservative ideology. Additionally, we analyze the co-follower, co-share, and co-retweet networks of 5,000 Twitter users and find an ideological asymmetry: estimated ideological extremity is associated with more misinformation exposure for users estimated to be conservative but not for users estimated to be liberal. Finally, we create an open-source R library and an Application Programming Interface (API) making our elite misinformation-exposure estimation tool openly available to the community.
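The scoring procedure described in this abstract can be illustrated with a minimal sketch, assuming a user's exposure score is simply the mean falsity score of the elites they follow. This is an illustrative reconstruction in Python, not the authors' open-source R library or API; the handles and scores below are invented.

```python
# Sketch: elite misinformation-exposure score as the mean falsity
# score of the scored elites a user follows (hypothetical data).

def misinformation_exposure(followed_elites, falsity_scores):
    """Mean falsity score over the followed elites that have a score.

    followed_elites: iterable of elite account handles a user follows.
    falsity_scores: dict mapping elite handle -> falsity score in [0, 1].
    Returns None if the user follows no scored elites.
    """
    scores = [falsity_scores[e] for e in followed_elites if e in falsity_scores]
    return sum(scores) / len(scores) if scores else None

falsity = {"@elite_a": 0.8, "@elite_b": 0.2, "@elite_c": 0.5}
print(misinformation_exposure(["@elite_a", "@elite_b"], falsity))  # 0.5
```

Unscored accounts are simply ignored here; how to weight elites (e.g., by tweet volume or follower attention) is a design choice the sketch leaves out.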
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand D (In Press). Shifting attention to accuracy can reduce misinformation online. In press, Nature. Preprint: https://psyarxiv.com/3n9u8
Mosleh M, Martel C, Eckles D, Rand DG (In Press). Social correction of fake news across party lines.
Abstract:
Social corrections, wherein social media users correct one another, are an important mechanism for debunking online misinformation. But users who post misinformation only rarely engage with social corrections, instead typically choosing to ignore them. Here, we investigate how the social relationship between the corrector and corrected user affects the willingness to engage with corrective, debunking messages. We explore two key dimensions: (i) partisan agreement with, and (ii) social relationship between, the user and the corrector. We conducted a randomized field experiment with N=1,586 Twitter users and a conceptual replication survey experiment with N=812 Amazon Mechanical Turk workers in which posts containing false news were corrected. We varied whether the corrector identified as Democrat or Republican; and whether the corrector followed the user and liked three of their tweets the day before issuing the correction (creating a minimal social relationship). Surprisingly, we found that shared partisanship did not increase a user’s probability of engaging with the correction. Conversely, forming a minimal social connection significantly increased engagement rate. A second survey experiment (N = 1,621) found that minimal social relationships foster a general norm of responding, such that people feel more obligated to respond – and think others expect them to respond more – to people who follow them, even outside the context of misinformation correction. These results emphasize social media’s ability to foster engagement with corrections via minimal social relationships, and have implications for effective, engaging fact-check delivery online.
Mosleh M, Rand DG (2022). Author Correction: Measuring exposure to misinformation from political elites on Twitter. Nature Communications, 13(1).
Mosleh M, Rand DG (2022). Measuring exposure to misinformation from political elites on Twitter. Nat Commun, 13(1).
Abstract:
Misinformation can come directly from public figures and organizations (referred to here as "elites"). Here, we develop a tool for measuring Twitter users' exposure to misinformation from elites based on the public figures and organizations they choose to follow. Using a database of professional fact-checks by PolitiFact, we calculate falsity scores for 816 elites based on the veracity of their statements. We then assign users an elite misinformation-exposure score based on the falsity scores of the elites they follow on Twitter. Users' misinformation-exposure scores are negatively correlated with the quality of news they share themselves, and positively correlated with estimated conservative ideology. Additionally, we analyze the co-follower, co-share, and co-retweet networks of 5000 Twitter users and find an ideological asymmetry: estimated ideological extremity is associated with more misinformation exposure for users estimated to be conservative but not for users estimated to be liberal. Finally, we create an open-source R library and an Application Programming Interface (API) making our elite misinformation-exposure estimation tool openly available to the community.
Mosleh M, Pennycook G, Arechar AA, Rand DG (2021). Cognitive reflection correlates with behavior on Twitter. Nat Commun, 12(1).
Abstract:
We investigate the relationship between individual differences in cognitive reflection and behavior on the social media platform Twitter, using a convenience sample of N = 1,901 individuals from Prolific. We find that people who scored higher on the Cognitive Reflection Test (a widely used measure of reflective thinking) were more discerning in their social media use, as evidenced by the types and number of accounts followed, and by the reliability of the news sources they shared. Furthermore, a network analysis indicates that the phenomenon of echo chambers, in which discourse is more likely with like-minded others, is not limited to politics: people who scored lower in cognitive reflection tended to follow a set of accounts that are avoided by people who scored higher in cognitive reflection. Our results help to illuminate the drivers of behavior on social media platforms and challenge intuitionist notions that reflective thinking is unimportant for everyday judgment and decision-making.
Mosleh M, Pennycook G, Rand DG (2021). Field Experiments on Social Media. Current Directions in Psychological Science, 31(1), 69-75.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, which limits its usefulness for identifying causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social-media content can be randomized, and the impact on attitudes and beliefs can be measured using surveys, or exposure to treatments can be randomized within survey experiments, and their impact on subsequent online behavior can be observed. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment (e.g. via social-tie invitations, private messages, or public posts) without revealing that they are part of an experiment, and the effects on subsequent online behavior can then be observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Mosleh M, Martel C, Eckles D, Rand DG (2021). Shared partisanship dramatically increases social tie formation in a Twitter field experiment. Proc Natl Acad Sci U S A, 118(7).
Abstract:
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow back bots whose partisanship matched their own, and this was true regardless of the bot's strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand DG (2021). Shifting attention to accuracy can reduce misinformation online. Nature, 592(7855), 590-595.
Martel C, Mosleh M, Rand D (2021). You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online. Media and Communication, 9(1), 120-133.
Abstract:
How can online communication most effectively respond to misinformation posted on social media? Recent studies examining the content of corrective messages provide mixed results—several studies suggest that politer, hedged messages may increase engagement with corrections, while others favor direct messaging which does not cast doubt on the credibility of the corrective message. Furthermore, common debunking strategies often include keeping the message simple and clear, while others recommend including a detailed explanation of why the initial misinformation is incorrect. To shed more light on how correction style affects correction efficacy, we manipulated both correction strength (direct, hedged) and explanatory depth (simple explanation, detailed explanation) in response to participants from Lucid (N = 2,228) who indicated they would share a false story in a survey experiment. We found minimal evidence suggesting that correction strength or depth affects correction engagement, both in terms of likelihood of replying, and accepting or resisting corrective information. However, we do find that analytic thinking and actively open-minded thinking are associated with greater acceptance of information in response to corrective messages, regardless of correction style. Our results help elucidate the efficacy of user-generated corrections of misinformation on social media.
Mosleh M, Kyker K, Cohen JD, Rand DG (2020). Globalization and the rise and fall of cognitive control. Nature communications, 11, 1-10.
Heydari B, Heydari P, Mosleh M (2020). Not all bridges connect: integration in multi-community networks. The Journal of Mathematical Sociology, 44, 199-220.
Mosleh M, Stewart AJ, Plotkin JB, Rand DG (2020). Prosociality in the economic Dictator Game is associated with less parochialism and greater willingness to vote for intergroup compromise. Judgment & Decision Making, 15
Mosleh M, Pennycook G, Rand DG (2020). Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. Plos one, 15, e0228882-e0228882.
Mosleh M, Martel C, Eckles D, Rand D (2020). Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment.
Stewart AJ, Mosleh M, Diakonova M, Arechar AA, Rand DG, Plotkin JB (2019). Information gerrymandering and undemocratic decisions. Nature, 573, 117-121.
Mosleh M, Stewart AJ, Plotkin J, Rand D (2019). Prosociality in an economic game is associated with less parochialism and greater willingness to vote for intergroup compromise.
Gianetto DA, Mosleh M, Heydari B (2018). Dynamic Structure of Competition Networks in Affordable Care Act Insurance Market. IEEE Access, 6, 12700-12709.
Mosleh M, Rand DG (2018). Population structure promotes the evolution of intuitive cooperation and inhibits deliberation. Scientific reports, 8, 1-8.
Mosleh M, Heydari B (2017). Fair topologies: Community structures and network hubs drive emergence of fairness norms. Scientific reports, 7, 1-9.
Mosleh M, Dalili K, Heydari B (2016). Distributed or monolithic? a computational architecture decision framework. IEEE Systems journal, 12, 125-136.
Mosleh M, Ludlow P, Heydari B (2016). Distributed resource management in systems of systems: an architecture perspective. Systems Engineering, 19, 362-374.
Heydari B, Mosleh M, Dalili K (2016). From modular to distributed open architectures: a unified decision framework. Systems Engineering, 19, 252-266.
Heydari B, Mosleh M, Dalili K (2015). Efficient Network Structures with Separable Heterogeneous Connection Costs. Economics Letters, 134, 82-85.
Mosleh M, Dalili K, Heydari B (2014). Optimal Modularity for Fractionated Spacecraft: the Case of System F6. Procedia Computer Science, 28, 164-170.
Abiri-Jahromi A, Fotuhi-Firuzabad M, Parvania M, Mosleh M (2011). Optimized sectionalizing switch placement strategy in distribution systems. IEEE Transactions on Power Delivery, 27, 362-370.
Conferences
Mosleh M, Martel C (2021). Perverse downstream consequences of debunking: Being corrected by another user for posting false political news increases subsequent sharing of low quality, partisan, and toxic content in a Twitter field experiment.
Mosleh M (2019). Online Interactive Experiments on Networks.
Mosleh M, Heydari B (2017). Market Evolution of Sharing Economy vs. Traditional Platforms: a Natural Language Processing Approach.
Mosleh M, Heydari B (2017). Why Groups Show Different Fairness Norms? the Interaction Topology Might Explain.
Mosleh M, Ludlow P, Heydari B (2016). Resource allocation through network architecture in systems of systems: a complex networks framework.
Publications by year
In Press
Mosleh M, Pennycook G, Rand DG (In Press). Field experiments on social media.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, limiting its ability to identify causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social media content can be randomized, and the impact on attitudes and beliefs measured using surveys; or exposure to treatments can be randomized within survey experiments, and their impact observed on subsequent online behavior. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment - e.g. via social tie invitations, private messages, or public posts - without revealing that they are part of an experiment, and the impacts on subsequent online behavior observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Mosleh M, Rand DG (In Press). Measuring exposure to misinformation from political elites on Twitter.
Abstract:
Misinformation can come directly from public figures and organizations (referred to here as “elites”). Here, we develop a tool for measuring Twitter users’ exposure to misinformation from elites based on the public figures and organizations they choose to follow. Using a database of professional fact-checks by PolitiFact, we calculate falsity scores for 816 elites based on the veracity of their statements. We then assign users an elite misinformation-exposure score based on the falsity scores of the elites they follow on Twitter. Users’ misinformation-exposure scores are negatively correlated with the quality of news they share themselves, and positively correlated with estimated conservative ideology. Additionally, we analyze the co-follower, co-share, and co-retweet networks of 5,000 Twitter users and find an ideological asymmetry: estimated ideological extremity is associated with more misinformation exposure for users estimated to be conservative but not for users estimated to be liberal. Finally, we create an open-source R library and an Application Programming Interface (API) making our elite misinformation-exposure estimation tool openly available to the community.
Mosleh M, Martel C, Eckles D, Rand DG (In Press). Promoting engagement with social fact-checks online.
Abstract:
Social media users who post misinformation rarely engage with corrections from other users. We investigate the causal effect of the social relationship between the corrector and corrected user on engagement with such corrections. In a Twitter field experiment and an Amazon Mechanical Turk conceptual replication, we used human-looking bots to correct users who shared false news. We varied whether the corrector bot identified as a Democrat or Republican; and whether the corrector bot followed the user and liked three of their tweets the day before issuing the correction (creating a minimal social connection). Surprisingly, we did not find evidence that shared partisanship increased engagement with the correction (although overall, conservatives were more likely to engage negatively, versus positively, with corrections). Social connection, conversely, increased engagement with corrections from co-partisans, while effects for counter-partisans were ambiguous. A final survey experiment demonstrated a generalized norm of responding, where people feel more obligated to respond to people who follow them, even outside the context of misinformation correction.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand D (In Press). Shifting attention to accuracy can reduce misinformation online. In press, Nature. Preprint: https://psyarxiv.com/3n9u8
Mosleh M, Martel C, Eckles D, Rand DG (In Press). Social correction of fake news across party lines.
Abstract:
Social corrections, wherein social media users correct one another, are an important mechanism for debunking online misinformation. But users who post misinformation only rarely engage with social corrections, instead typically choosing to ignore them. Here, we investigate how the social relationship between the corrector and corrected user affects the willingness to engage with corrective, debunking messages. We explore two key dimensions: (i) partisan agreement with, and (ii) social relationship between, the user and the corrector. We conducted a randomized field experiment with N=1,586 Twitter users and a conceptual replication survey experiment with N=812 Amazon Mechanical Turk workers in which posts containing false news were corrected. We varied whether the corrector identified as Democrat or Republican; and whether the corrector followed the user and liked three of their tweets the day before issuing the correction (creating a minimal social relationship). Surprisingly, we found that shared partisanship did not increase a user’s probability of engaging with the correction. Conversely, forming a minimal social connection significantly increased engagement rate. A second survey experiment (N = 1,621) found that minimal social relationships foster a general norm of responding, such that people feel more obligated to respond – and think others expect them to respond more – to people who follow them, even outside the context of misinformation correction. These results emphasize social media’s ability to foster engagement with corrections via minimal social relationships, and have implications for effective, engaging fact-check delivery online.
Yang Q, Mosleh M, Rand DG, Zaman T (In Press). The Follow Back Problem in a Hyper-Partisan Environment.
Abstract:
Many social media users try to obtain as many followers as possible in a social network to gain influence, a challenge that is often referred to as the follow back problem. In this work we study different strategies for this problem in the context of politically polarized social networks and examine how political partisanship affects social media users' propensity to follow each other. We test how contact strategy (liking, following) interacts with partisan alignment when trying to induce users to follow back. To do so, we conduct a field experiment on Twitter where we target N=8,104 active users using bot accounts that present as human. We found that users were more than twice as likely to reciprocally follow back bots whose partisanship matched their own. Conversely, when the only form of contact between the bot and the user was the bot liking the user’s posts, the follow rate was extremely low regardless of partisan alignment – and liking a user’s content and following them led to no increase in follow-back relative to just following the user. Finally, we found no partisanship asymmetries, such that Democrats and Republicans preferentially followed co-partisans to the same extent. Our results demonstrate the important impact of following users and having shared partisanship – and the irrelevance of liking users’ content – on solving the follow back problem.
Mosleh M, Yang Q, Zaman T, Pennycook G, Rand DG (In Press). Trade-offs between reducing misinformation and politically-balanced enforcement on social media.
Abstract:
An analysis of Twitter data shows that the tendency for conservative users to be suspended at higher rates than liberal users can be largely explained by conservative users sharing more links to low quality news sites; this partisan asymmetry in sharing behavior creates a trade-off between reducing the spread of misinformation and maintaining political balance in enforcement.
2022
Mosleh M, Rand DG (2022). Author Correction: Measuring exposure to misinformation from political elites on Twitter. Nature Communications, 13(1).
Mosleh M, Rand DG (2022). Measuring exposure to misinformation from political elites on Twitter. Nat Commun, 13(1).
Abstract:
Misinformation can come directly from public figures and organizations (referred to here as "elites"). Here, we develop a tool for measuring Twitter users' exposure to misinformation from elites based on the public figures and organizations they choose to follow. Using a database of professional fact-checks by PolitiFact, we calculate falsity scores for 816 elites based on the veracity of their statements. We then assign users an elite misinformation-exposure score based on the falsity scores of the elites they follow on Twitter. Users' misinformation-exposure scores are negatively correlated with the quality of news they share themselves, and positively correlated with estimated conservative ideology. Additionally, we analyze the co-follower, co-share, and co-retweet networks of 5000 Twitter users and find an ideological asymmetry: estimated ideological extremity is associated with more misinformation exposure for users estimated to be conservative but not for users estimated to be liberal. Finally, we create an open-source R library and an Application Programming Interface (API) making our elite misinformation-exposure estimation tool openly available to the community.
2021
Mosleh M, Pennycook G, Arechar AA, Rand DG (2021). Cognitive reflection correlates with behavior on Twitter. Nat Commun, 12(1).
Abstract:
We investigate the relationship between individual differences in cognitive reflection and behavior on the social media platform Twitter, using a convenience sample of N = 1,901 individuals from Prolific. We find that people who scored higher on the Cognitive Reflection Test (a widely used measure of reflective thinking) were more discerning in their social media use, as evidenced by the types and number of accounts followed, and by the reliability of the news sources they shared. Furthermore, a network analysis indicates that the phenomenon of echo chambers, in which discourse is more likely with like-minded others, is not limited to politics: people who scored lower in cognitive reflection tended to follow a set of accounts that are avoided by people who scored higher in cognitive reflection. Our results help to illuminate the drivers of behavior on social media platforms and challenge intuitionist notions that reflective thinking is unimportant for everyday judgment and decision-making.
Mosleh M, Pennycook G, Rand DG (2021). Field Experiments on Social Media. Current Directions in Psychological Science, 31(1), 69-75.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, which limits its usefulness for identifying causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social-media content can be randomized, and the impact on attitudes and beliefs can be measured using surveys, or exposure to treatments can be randomized within survey experiments, and their impact on subsequent online behavior can be observed. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment (e.g. via social-tie invitations, private messages, or public posts) without revealing that they are part of an experiment, and the effects on subsequent online behavior can then be observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Mosleh M, Martel C (2021). Perverse downstream consequences of debunking: Being corrected by another user for posting false political news increases subsequent sharing of low quality, partisan, and toxic content in a Twitter field experiment.
Mosleh M, Martel C, Eckles D, Rand DG (2021). Shared partisanship dramatically increases social tie formation in a Twitter field experiment.
Proc Natl Acad Sci U S A,
118(7).
Abstract:
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow-back bots whose partisanship matched their own, and this was true regardless of the bot's strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand DG (2021). Shifting attention to accuracy can reduce misinformation online.
Nature,
592(7855), 590-595.
Martel C, Mosleh M, Rand D (2021). You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online.
Media and Communication,
9(1), 120-133.
Abstract:
How can online communication most effectively respond to misinformation posted on social media? Recent studies examining the content of corrective messages provide mixed results—several studies suggest that politer, hedged messages may increase engagement with corrections, while others favor direct messaging which does not cast doubt on the credibility of the corrective message. Furthermore, common debunking strategies often include keeping the message simple and clear, while others recommend including a detailed explanation of why the initial misinformation is incorrect. To shed more light on how correction style affects correction efficacy, we manipulated both correction strength (direct, hedged) and explanatory depth (simple explanation, detailed explanation) in responses to participants from Lucid (N = 2,228) who indicated they would share a false story in a survey experiment. We found minimal evidence suggesting that correction strength or depth affects correction engagement, both in terms of likelihood of replying, and accepting or resisting corrective information. However, we do find that analytic thinking and actively open-minded thinking are associated with greater acceptance of information in response to corrective messages, regardless of correction style. Our results help elucidate the efficacy of user-generated corrections of misinformation on social media.
2020
Mosleh M, Kyker K, Cohen JD, Rand DG (2020). Globalization and the rise and fall of cognitive control. Nature Communications, 11, 1-10.
Heydari B, Heydari P, Mosleh M (2020). Not all bridges connect: integration in multi-community networks. The Journal of Mathematical Sociology, 44, 199-220.
Mosleh M, Stewart AJ, Plotkin JB, Rand DG (2020). Prosociality in the economic Dictator Game is associated with less parochialism and greater willingness to vote for intergroup compromise. Judgment & Decision Making, 15
Mosleh M, Pennycook G, Rand DG (2020). Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. PLoS ONE, 15, e0228882.
Mosleh M, Martel C, Eckles D, Rand D (2020). Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment.
2019
Stewart AJ, Mosleh M, Diakonova M, Arechar AA, Rand DG, Plotkin JB (2019). Information gerrymandering and undemocratic decisions. Nature, 573, 117-121.
Mosleh M (2019). Online Interactive Experiments on Networks.
Mosleh M, Stewart AJ, Plotkin J, Rand D (2019). Prosociality in an economic game is associated with less parochialism and greater willingness to vote for intergroup compromise.
2018
Gianetto DA, Mosleh M, Heydari B (2018). Dynamic Structure of Competition Networks in Affordable Care Act Insurance Market. IEEE Access, 6, 12700-12709.
Mosleh M, Rand DG (2018). Population structure promotes the evolution of intuitive cooperation and inhibits deliberation. Scientific Reports, 8, 1-8.
2017
Mosleh M, Heydari B (2017). Fair topologies: Community structures and network hubs drive emergence of fairness norms. Scientific Reports, 7, 1-9.
Mosleh M, Heydari B (2017). Market Evolution of Sharing Economy vs. Traditional Platforms: a Natural Language Processing Approach.
Mosleh M, Heydari B (2017). Why Groups Show Different Fairness Norms? The Interaction Topology Might Explain.
2016
Mosleh M, Dalili K, Heydari B (2016). Distributed or monolithic? a computational architecture decision framework. IEEE Systems Journal, 12, 125-136.
Mosleh M, Ludlow P, Heydari B (2016). Distributed resource management in systems of systems: an architecture perspective. Systems Engineering, 19, 362-374.
Heydari B, Mosleh M, Dalili K (2016). From modular to distributed open architectures: a unified decision framework. Systems Engineering, 19, 252-266.
Mosleh M, Ludlow P, Heydari B (2016). Resource allocation through network architecture in systems of systems: a complex networks framework.
2015
Heydari B, Mosleh M, Dalili K (2015). Efficient Network Structures with Separable Heterogeneous Connection Costs. Economics Letters, 134, 82-85.
2014
Mosleh M, Dalili K, Heydari B (2014). Optimal Modularity for Fractionated Spacecraft: the Case of System F6. Procedia Computer Science, 28, 164-170.
2011
Abiri-Jahromi A, Fotuhi-Firuzabad M, Parvania M, Mosleh M (2011). Optimized sectionalizing switch placement strategy in distribution systems. IEEE Transactions on Power Delivery, 27, 362-370.