Publications by category
Journal articles
Mosleh M, Rand DG (In Press). Falsehood in, falsehood out: Measuring exposure to elite misinformation on Twitter.
Abstract:
Most research studying misinformation on social media examines links to news articles. Yet a great deal of misinformation comes directly from public figures. Here, we introduce a new tool for measuring Twitter users’ exposure to elite misinformation based on the public figures they choose to follow: github.com/mmosleh/minfo-exposure. From a database of professional fact-checks by PolitiFact, falsity scores can be calculated for 816 public figures. We then assign users a misinformation-exposure score by averaging the falsity scores of the public figures they follow. We show that users’ misinformation-exposure scores are negatively correlated with the quality of news they share (based on ratings from both professional fact-checkers and a politically-balanced crowd of laypeople), and positively correlated with conservative ideology. Additionally, we analyze the co-follower and the co-share network of 5,000 Twitter users and find an ideological asymmetry: ideological extremity is associated with more misinformation-exposure for conservatives but not liberals.
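The exposure measure described in this abstract (a user's score is the average falsity score of the rated public figures they follow) reduces to a simple computation. The sketch below is illustrative only; the function and data names are hypothetical and are not taken from the actual minfo-exposure package:

```python
def misinformation_exposure(followed, falsity_scores):
    """Hypothetical sketch: average the falsity scores of the rated
    public figures a user follows; unrated figures are ignored.
    Returns None when the user follows no rated figures."""
    rated = [falsity_scores[f] for f in followed if f in falsity_scores]
    return sum(rated) / len(rated) if rated else None

# Illustrative falsity scores for three hypothetical public figures.
falsity = {"elite_a": 0.8, "elite_b": 0.2, "elite_c": 0.5}
print(misinformation_exposure({"elite_a", "elite_b"}, falsity))  # 0.5
```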
Mosleh M, Pennycook G, Rand DG (In Press). Field experiments on social media.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, limiting its ability to identify causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social media content can be randomized, and the impact on attitudes and beliefs measured using surveys; or exposure to treatments can be randomized within survey experiments, and their impact observed on subsequent online behavior. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment - e.g. via social tie invitations, private messages, or public posts - without revealing that they are part of an experiment, and the impacts on subsequent online behavior observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand D (In Press). Shifting attention to accuracy can reduce misinformation online. In press, Nature: https://psyarxiv.com/3n9u8
Mosleh M, Martel C, Eckles D, Rand DG (In Press). Social correction of fake news across party lines.
Abstract:
Social corrections, wherein social media users correct one another, are an important mechanism for debunking online misinformation. But users who post misinformation only rarely engage with social corrections, instead typically choosing to ignore them. Here, we investigate how the social relationship between the corrector and corrected user affects the willingness to engage with corrective, debunking messages. We explore two key dimensions: (i) partisan agreement with, and (ii) social relationship between, the user and the corrector. We conducted a randomized field experiment with N=1,586 Twitter users and a conceptual replication survey experiment with N=812 Amazon Mechanical Turk workers in which posts containing false news were corrected. We varied whether the corrector identified as Democrat or Republican; and whether the corrector followed the user and liked three of their tweets the day before issuing the correction (creating a minimal social relationship). Surprisingly, we found that shared partisanship did not increase a user’s probability of engaging with the correction. Conversely, forming a minimal social connection significantly increased engagement rate. A second survey experiment (N = 1,621) found that minimal social relationships foster a general norm of responding, such that people feel more obligated to respond – and think others expect them to respond more – to people who follow them, even outside the context of misinformation correction. These results emphasize social media’s ability to foster engagement with corrections via minimal social relationships, and have implications for effective, engaging fact-check delivery online.
Mosleh M, Pennycook G, Arechar AA, Rand DG (2021). Cognitive reflection correlates with behavior on Twitter.
Nat Commun,
12(1).
Abstract:
We investigate the relationship between individual differences in cognitive reflection and behavior on the social media platform Twitter, using a convenience sample of N = 1,901 individuals from Prolific. We find that people who score higher on the Cognitive Reflection Test - a widely used measure of reflective thinking - were more discerning in their social media use, as evidenced by the types and number of accounts followed, and by the reliability of the news sources they shared. Furthermore, a network analysis indicates that the phenomenon of echo chambers, in which discourse is more likely with like-minded others, is not limited to politics: people who scored lower in cognitive reflection tended to follow a set of accounts which are avoided by people who scored higher in cognitive reflection. Our results help to illuminate the drivers of behavior on social media platforms and challenge intuitionist notions that reflective thinking is unimportant for everyday judgment and decision-making.
Mosleh M, Pennycook G, Rand DG (2021). Field Experiments on Social Media.
Current Directions in Psychological Science,
31(1), 69-75.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, which limits its usefulness for identifying causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social-media content can be randomized, and the impact on attitudes and beliefs can be measured using surveys, or exposure to treatments can be randomized within survey experiments, and their impact on subsequent online behavior can be observed. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment (e.g. via social-tie invitations, private messages, or public posts) without revealing that they are part of an experiment, and the effects on subsequent online behavior can then be observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Mosleh M, Martel C, Eckles D, Rand DG (2021). Shared partisanship dramatically increases social tie formation in a Twitter field experiment.
Proc Natl Acad Sci U S A,
118(7).
Abstract:
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow-back bots whose partisanship matched their own, and this was true regardless of the bot's strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand DG (2021). Shifting attention to accuracy can reduce misinformation online.
Nature,
592(7855), 590-595.
Martel C, Mosleh M, Rand D (2021). You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online.
Media and Communication,
9(1), 120-133.
Abstract:
How can online communication most effectively respond to misinformation posted on social media? Recent studies examining the content of corrective messages provide mixed results—several studies suggest that politer, hedged messages may increase engagement with corrections, while others favor direct messaging which does not shed doubt on the credibility of the corrective message. Furthermore, common debunking strategies often include keeping the message simple and clear, while others recommend including a detailed explanation of why the initial misinformation is incorrect. To shed more light on how correction style affects correction efficacy, we manipulated both correction strength (direct, hedged) and explanatory depth (simple explanation, detailed explanation) in response to participants from Lucid (N = 2,228) who indicated they would share a false story in a survey experiment. We found minimal evidence suggesting that correction strength or depth affects correction engagement, both in terms of likelihood of replying, and accepting or resisting corrective information. However, we do find that analytic thinking and actively open-minded thinking are associated with greater acceptance of information in response to corrective messages, regardless of correction style. Our results help elucidate the efficacy of user-generated corrections of misinformation on social media.
Mosleh M, Kyker K, Cohen JD, Rand DG (2020). Globalization and the rise and fall of cognitive control.
Nature communications,
11, 1-10.
Heydari B, Heydari P, Mosleh M (2020). Not all bridges connect: integration in multi-community networks. The Journal of Mathematical Sociology, 44, 199-220.
Mosleh M, Stewart AJ, Plotkin JB, Rand DG (2020). Prosociality in the economic Dictator Game is associated with less parochialism and greater willingness to vote for intergroup compromise. Judgment & Decision Making, 15
Mosleh M, Pennycook G, Rand DG (2020). Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. Plos one, 15, e0228882-e0228882.
Mosleh M, Martel C, Eckles D, Rand D (2020). Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment.
Stewart AJ, Mosleh M, Diakonova M, Arechar AA, Rand DG, Plotkin JB (2019). Information gerrymandering and undemocratic decisions. Nature, 573, 117-121.
Mosleh M, Stewart AJ, Plotkin J, Rand D (2019). Prosociality in an economic game is associated with less parochialism and greater willingness to vote for intergroup compromise.
Gianetto DA, Mosleh M, Heydari B (2018). Dynamic Structure of Competition Networks in Affordable Care Act Insurance Market. IEEE Access, 6, 12700-12709.
Mosleh M, Rand DG (2018). Population structure promotes the evolution of intuitive cooperation and inhibits deliberation. Scientific reports, 8, 1-8.
Mosleh M, Heydari B (2017). Fair topologies: Community structures and network hubs drive emergence of fairness norms. Scientific reports, 7, 1-9.
Mosleh M, Dalili K, Heydari B (2016). Distributed or monolithic? a computational architecture decision framework. IEEE Systems journal, 12, 125-136.
Mosleh M, Ludlow P, Heydari B (2016). Distributed resource management in systems of systems: an architecture perspective. Systems Engineering, 19, 362-374.
Heydari B, Mosleh M, Dalili K (2016). From modular to distributed open architectures: a unified decision framework. Systems Engineering, 19, 252-266.
Heydari B, Mosleh M, Dalili K (2015). Efficient Network Structures with Separable Heterogeneous Connection Costs. Economics Letters, 134, 82-85.
Mosleh M, Dalili K, Heydari B (2014). Optimal Modularity for Fractionated Spacecraft: the Case of System F6. Procedia Computer Science, 28, 164-170.
Abiri-Jahromi A, Fotuhi-Firuzabad M, Parvania M, Mosleh M (2011). Optimized sectionalizing switch placement strategy in distribution systems. IEEE Transactions on Power Delivery, 27, 362-370.
Conferences
Mosleh M, Martel C (2021). Perverse downstream consequences of debunking: Being corrected by another user for posting false political news increases subsequent sharing of low quality, partisan, and toxic content in a Twitter field experiment.
Mosleh M (2019). Online Interactive Experiments on Networks.
Mosleh M, Heydari B (2017). Market Evolution of Sharing Economy vs. Traditional Platforms: a Natural Language Processing Approach.
Mosleh M, Heydari B (2017). Why Groups Show Different Fairness Norms? The Interaction Topology Might Explain.
Mosleh M, Ludlow P, Heydari B (2016). Resource allocation through network architecture in systems of systems: a complex networks framework.
Publications by year
In Press
Mosleh M, Rand DG (In Press). Falsehood in, falsehood out: Measuring exposure to elite misinformation on Twitter.
Abstract:
Most research studying misinformation on social media examines links to news articles. Yet a great deal of misinformation comes directly from public figures. Here, we introduce a new tool for measuring Twitter users’ exposure to elite misinformation based on the public figures they choose to follow: github.com/mmosleh/minfo-exposure. From a database of professional fact-checks by PolitiFact, falsity scores can be calculated for 816 public figures. We then assign users a misinformation-exposure score by averaging the falsity scores of the public figures they follow. We show that users’ misinformation-exposure scores are negatively correlated with the quality of news they share (based on ratings from both professional fact-checkers and a politically-balanced crowd of laypeople), and positively correlated with conservative ideology. Additionally, we analyze the co-follower and the co-share network of 5,000 Twitter users and find an ideological asymmetry: ideological extremity is associated with more misinformation-exposure for conservatives but not liberals.
Mosleh M, Pennycook G, Rand DG (In Press). Field experiments on social media.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, limiting its ability to identify causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social media content can be randomized, and the impact on attitudes and beliefs measured using surveys; or exposure to treatments can be randomized within survey experiments, and their impact observed on subsequent online behavior. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment - e.g. via social tie invitations, private messages, or public posts - without revealing that they are part of an experiment, and the impacts on subsequent online behavior observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Yang Q, Mosleh M, Zaman T, Rand DG (In Press). Is Twitter biased against conservatives? The challenge of inferring political bias in a hyper-partisan media ecosystem.
Abstract:
Social media companies are often accused of anti-conservative bias, particularly in terms of which users they suspend. Here, we evaluate this possibility empirically. We begin with a survey of 4,900 Americans, which showed strong bi-partisan support for social media companies taking actions against online misinformation. We then investigated potential political bias in suspension patterns and identified a set of 9,000 politically engaged Twitter users, half Democratic and half Republican, in October 2020, and followed them through the six months after the U.S. 2020 election. During that period, while only 7.7% of the Democratic users were suspended, 35.6% of the Republican users were suspended. The Republican users, however, shared substantially more news from misinformation sites – as judged by either fact-checkers or politically balanced crowds – than the Democratic users. Critically, we found that users’ misinformation sharing was as predictive of suspension as was their political orientation. Thus, the observation that Republicans were more likely to be suspended than Democrats provides no support for the claim that Twitter showed political bias in its suspension practices. Instead, the observed asymmetry could be explained entirely by the tendency of Republicans to share more misinformation. While support for action against misinformation is bipartisan, the sharing of misinformation – at least at this historical moment – is heavily asymmetric across parties. As a result, our study shows that it is inappropriate to make inferences about political bias from asymmetries in suspension rates.
Mosleh M, Martel C, Eckles D, Rand DG (In Press). Promoting engagement with social fact-checks online.
Abstract:
Social corrections, wherein social media users correct one another, are an important mechanism for debunking online misinformation. But users who post misinformation only rarely engage with social corrections, instead typically choosing to ignore them. Here, we investigate how the social relationship between the corrector and corrected user affects the willingness to engage with corrective, debunking messages. We explore two key dimensions: (i) partisan agreement with, and (ii) social relationships between the user and the corrector. We conducted a randomized field experiment with Twitter users and a conceptual replication survey experiment with Amazon Mechanical Turk workers in which posts containing false news were corrected. We varied whether the corrector identified as a Democrat or Republican; and whether the corrector followed the user and liked three of their tweets the day before issuing the correction (creating a minimal social relationship). Surprisingly, we did not find evidence that shared partisanship increased a user’s probability of engaging with the correction. Conversely, forming a minimal social connection significantly increased engagement rate. A second survey experiment found that minimal social relationships foster a general norm of responding, such that people feel more obligated to respond – and think others expect them to respond more – to people who follow them, even outside the context of misinformation correction. These results emphasize social media’s ability to foster engagement with corrections via minimal social relationships, and have implications for effective, engaging fact-check delivery online.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand D (In Press). Shifting attention to accuracy can reduce misinformation online. In press, Nature: https://psyarxiv.com/3n9u8
Mosleh M, Martel C, Eckles D, Rand DG (In Press). Social correction of fake news across party lines.
Abstract:
Social corrections, wherein social media users correct one another, are an important mechanism for debunking online misinformation. But users who post misinformation only rarely engage with social corrections, instead typically choosing to ignore them. Here, we investigate how the social relationship between the corrector and corrected user affects the willingness to engage with corrective, debunking messages. We explore two key dimensions: (i) partisan agreement with, and (ii) social relationship between, the user and the corrector. We conducted a randomized field experiment with N=1,586 Twitter users and a conceptual replication survey experiment with N=812 Amazon Mechanical Turk workers in which posts containing false news were corrected. We varied whether the corrector identified as Democrat or Republican; and whether the corrector followed the user and liked three of their tweets the day before issuing the correction (creating a minimal social relationship). Surprisingly, we found that shared partisanship did not increase a user’s probability of engaging with the correction. Conversely, forming a minimal social connection significantly increased engagement rate. A second survey experiment (N = 1,621) found that minimal social relationships foster a general norm of responding, such that people feel more obligated to respond – and think others expect them to respond more – to people who follow them, even outside the context of misinformation correction. These results emphasize social media’s ability to foster engagement with corrections via minimal social relationships, and have implications for effective, engaging fact-check delivery online.
Yang Q, Mosleh M, Rand DG, Zaman T (In Press). The Follow Back Problem in a Hyper-Partisan Environment.
Abstract:
Many social media users try to obtain as many followers as possible in a social network to gain influence, a challenge that is often referred to as the follow back problem. In this work we study different strategies for this problem in the context of politically polarized social networks and examine how political partisanship affects social media users' propensity to follow each other. We test how contact strategy (liking, following) interacts with partisan alignment when trying to induce users to follow back. To do so, we conduct a field experiment on Twitter where we target N=8,104 active users using bot accounts that present as human. We found that users were more than twice as likely to reciprocally follow back bots whose partisanship matched their own. Conversely, when the only form of contact between the bot and the user was the bot liking the user’s posts, the follow rate was extremely low regardless of partisan alignment – and liking a user’s content and following them led to no increase in follow-back relative to just following the user. Finally, we found no partisan asymmetries, such that Democrats and Republicans preferentially followed co-partisans to the same extent. Our results demonstrate the important impact of following users and having shared partisanship – and the irrelevance of liking users’ content – on solving the follow back problem.
2021
Mosleh M, Pennycook G, Arechar AA, Rand DG (2021). Cognitive reflection correlates with behavior on Twitter.
Nat Commun,
12(1).
Abstract:
We investigate the relationship between individual differences in cognitive reflection and behavior on the social media platform Twitter, using a convenience sample of N = 1,901 individuals from Prolific. We find that people who score higher on the Cognitive Reflection Test - a widely used measure of reflective thinking - were more discerning in their social media use, as evidenced by the types and number of accounts followed, and by the reliability of the news sources they shared. Furthermore, a network analysis indicates that the phenomenon of echo chambers, in which discourse is more likely with like-minded others, is not limited to politics: people who scored lower in cognitive reflection tended to follow a set of accounts which are avoided by people who scored higher in cognitive reflection. Our results help to illuminate the drivers of behavior on social media platforms and challenge intuitionist notions that reflective thinking is unimportant for everyday judgment and decision-making.
Mosleh M, Pennycook G, Rand DG (2021). Field Experiments on Social Media.
Current Directions in Psychological Science,
31(1), 69-75.
Abstract:
Online behavioral data, such as digital traces from social media, have the potential to allow researchers an unprecedented new window into human behavior in ecologically valid everyday contexts. However, research using such data is often purely observational, which limits its usefulness for identifying causal relationships. Here we review recent innovations in experimental approaches to studying online behavior, with a particular focus on research related to misinformation and political psychology. In hybrid lab-field studies, exposure to social-media content can be randomized, and the impact on attitudes and beliefs can be measured using surveys, or exposure to treatments can be randomized within survey experiments, and their impact on subsequent online behavior can be observed. In field experiments conducted on social media, randomized treatments can be administered directly to users in the online environment (e.g. via social-tie invitations, private messages, or public posts) without revealing that they are part of an experiment, and the effects on subsequent online behavior can then be observed. The strengths and weaknesses of each approach are discussed, along with practical advice and central ethical constraints on such studies.
Mosleh M, Martel C (2021). Perverse downstream consequences of debunking: Being corrected by another user for posting false political news increases subsequent sharing of low quality, partisan, and toxic content in a Twitter field experiment.
Mosleh M, Martel C, Eckles D, Rand DG (2021). Shared partisanship dramatically increases social tie formation in a Twitter field experiment.
Proc Natl Acad Sci U S A,
118(7).
Abstract:
Americans are much more likely to be socially connected to copartisans, both in daily life and on social media. However, this observation does not necessarily mean that shared partisanship per se drives social tie formation, because partisanship is confounded with many other factors. Here, we test the causal effect of shared partisanship on the formation of social ties in a field experiment on Twitter. We created bot accounts that self-identified as people who favored the Democratic or Republican party and that varied in the strength of that identification. We then randomly assigned 842 Twitter users to be followed by one of our accounts. Users were roughly three times more likely to reciprocally follow-back bots whose partisanship matched their own, and this was true regardless of the bot's strength of identification. Interestingly, there was no partisan asymmetry in this preferential follow-back behavior: Democrats and Republicans alike were much more likely to reciprocate follows from copartisans. These results demonstrate a strong causal effect of shared partisanship on the formation of social ties in an ecologically valid field setting and have important implications for political psychology, social media, and the politically polarized state of the American public.
Pennycook G, Epstein Z, Mosleh M, Arechar AA, Eckles D, Rand DG (2021). Shifting attention to accuracy can reduce misinformation online.
Nature,
592(7855), 590-595.
Martel C, Mosleh M, Rand D (2021). You’re definitely wrong, maybe: Correction style has minimal effect on corrections of misinformation online.
Media and Communication,
9(1), 120-133.
Abstract:
How can online communication most effectively respond to misinformation posted on social media? Recent studies examining the content of corrective messages provide mixed results—several studies suggest that politer, hedged messages may increase engagement with corrections, while others favor direct messaging which does not shed doubt on the credibility of the corrective message. Furthermore, common debunking strategies often include keeping the message simple and clear, while others recommend including a detailed explanation of why the initial misinformation is incorrect. To shed more light on how correction style affects correction efficacy, we manipulated both correction strength (direct, hedged) and explanatory depth (simple explanation, detailed explanation) in response to participants from Lucid (N = 2,228) who indicated they would share a false story in a survey experiment. We found minimal evidence suggesting that correction strength or depth affects correction engagement, both in terms of likelihood of replying, and accepting or resisting corrective information. However, we do find that analytic thinking and actively open-minded thinking are associated with greater acceptance of information in response to corrective messages, regardless of correction style. Our results help elucidate the efficacy of user-generated corrections of misinformation on social media.
2020
Mosleh M, Kyker K, Cohen JD, Rand DG (2020). Globalization and the rise and fall of cognitive control.
Nature communications,
11, 1-10.
Heydari B, Heydari P, Mosleh M (2020). Not all bridges connect: integration in multi-community networks. The Journal of Mathematical Sociology, 44, 199-220.
Mosleh M, Stewart AJ, Plotkin JB, Rand DG (2020). Prosociality in the economic Dictator Game is associated with less parochialism and greater willingness to vote for intergroup compromise. Judgment & Decision Making, 15
Mosleh M, Pennycook G, Rand DG (2020). Self-reported willingness to share political news articles in online surveys correlates with actual sharing on Twitter. Plos one, 15, e0228882-e0228882.
Mosleh M, Martel C, Eckles D, Rand D (2020). Shared Partisanship Dramatically Increases Social Tie Formation in a Twitter Field Experiment.
2019
Stewart AJ, Mosleh M, Diakonova M, Arechar AA, Rand DG, Plotkin JB (2019). Information gerrymandering and undemocratic decisions. Nature, 573, 117-121.
Mosleh M (2019). Online Interactive Experiments on Networks.
Mosleh M, Stewart AJ, Plotkin J, Rand D (2019). Prosociality in an economic game is associated with less parochialism and greater willingness to vote for intergroup compromise.
2018
Gianetto DA, Mosleh M, Heydari B (2018). Dynamic Structure of Competition Networks in Affordable Care Act Insurance Market. IEEE Access, 6, 12700-12709.
Mosleh M, Rand DG (2018). Population structure promotes the evolution of intuitive cooperation and inhibits deliberation. Scientific reports, 8, 1-8.
2017
Mosleh M, Heydari B (2017). Fair topologies: Community structures and network hubs drive emergence of fairness norms. Scientific reports, 7, 1-9.
Mosleh M, Heydari B (2017). Market Evolution of Sharing Economy vs. Traditional Platforms: a Natural Language Processing Approach.
Mosleh M, Heydari B (2017). Why Groups Show Different Fairness Norms? The Interaction Topology Might Explain.
2016
Mosleh M, Dalili K, Heydari B (2016). Distributed or monolithic? a computational architecture decision framework. IEEE Systems journal, 12, 125-136.
Mosleh M, Ludlow P, Heydari B (2016). Distributed resource management in systems of systems: an architecture perspective. Systems Engineering, 19, 362-374.
Heydari B, Mosleh M, Dalili K (2016). From modular to distributed open architectures: a unified decision framework. Systems Engineering, 19, 252-266.
Mosleh M, Ludlow P, Heydari B (2016). Resource allocation through network architecture in systems of systems: a complex networks framework.
2015
Heydari B, Mosleh M, Dalili K (2015). Efficient Network Structures with Separable Heterogeneous Connection Costs. Economics Letters, 134, 82-85.
2014
Mosleh M, Dalili K, Heydari B (2014). Optimal Modularity for Fractionated Spacecraft: the Case of System F6. Procedia Computer Science, 28, 164-170.
2011
Abiri-Jahromi A, Fotuhi-Firuzabad M, Parvania M, Mosleh M (2011). Optimized sectionalizing switch placement strategy in distribution systems. IEEE Transactions on Power Delivery, 27, 362-370.