
Has sharing non-peer-reviewed sources decreased or increased belief in pseudoscience?





I haven't found a detailed study of this, but here is an example. If people share a site that talks about the dangers of global warming but isn't peer reviewed, does this increase awareness, or does it actually lend more credibility to global-warming-denial sites, since the two are on equal footing, both being non-peer-reviewed? A common denialist argument is that a debate still exists (it doesn't), and that argument has been enough to cause skepticism and lack of action.

I know sites generally have incentives to get their articles or videos shared, so they would rather write their own summaries of scientific results than just link to proper references. But pseudoscience sites can also cite peer-reviewed sources and simply give their own interpretations of them. So why should anyone believe one site over another, once the standards for credibility are gone (a situation all of these sites perpetuate, pseudoscience or not)?

Or is it the case that non-peer-reviewed sites draw attention to issues in a way peer-reviewed sources don't, and while there may be some collateral damage from lowered standards, people are being swayed by pseudoscience sites either way?

So the main question is: has sharing non-peer-reviewed sites caused more damage than good in combating pseudoscience (this site included)?


Materials and methods

Participants

A total of 174 subjects of legal age participated: 99 women and 75 men (mean age = 28.82 years, SD = 7.943). Of the participants, 41.4% resided in Madrid and 58.6% lived in Barcelona. All of them signed a consent form authorizing their voluntary participation, and all stated that they had no psychiatric history.

Instruments

Multivariable multiaxial suggestibility inventory − 2 reduced (MMSI-2-R)

The MMSI-2-R is a self-report questionnaire composed of 49 polytomous items distributed across 6 dimensions or scales: Visual and Auditory Perception (Pva), Cenesthetic Perception (Pc), Olfactory Perception (Po), Touch Perception (Pt), Taste Perception (Pg), and Paranoid Experience (Et). Responses are coded on a Likert scale ranging from 1 ("strongly disagree") to 5 ("strongly agree"). Both versions offer guarantees of validity and reliability, with internal consistency indices greater than 0.8 on all scales [51]. Table 1 reports the description of each dimension and the reliability coefficients.

Australian sheep-goat scale (ASGS)

The ASGS is a brief scale of 18 items that examines pseudoscientific beliefs and experiences. The scale was originally developed and validated in Australia [52]; A. Escolà-Gascón and L. Storm developed the Spanish adaptation (not yet published), which also shows adequate validity and reliability (Guttman's lambda = 0.93). Responses to the 18 items can be coded in two ways: either following the original protocol or using the coding 0 = "false", 1 = "I doubt my answer", and 2 = "true". The latter coding was used in the Spanish adaptation and has also been shown to be reliable (McDonald's omega = 0.92) [53]. Given that the Spanish adaptation of the ASGS is not published, the Spanish translation of the ASGS used in this study is attached to this report (see Supplementary Materials).

Community assessment of psychic experiences-42 (CAPE-42)

The CAPE-42 is a psychometric scale widely used to evaluate the psychotic phenotype in subjects from the general population [25]. It consists of 3 main dimensions: (1) the Positive Dimension (hereafter PD; 20 items), (2) the Negative Dimension (hereafter ND; 14 items), and (3) the Depressive Dimension (hereafter DD; 8 items). In total, there are 42 items whose responses are quantified on a 5-point Likert scale, where 1 means "almost never" and 5 "almost always". The CAPE-42 was translated and adapted for the Spanish population [54]. This adaptation presents satisfactory reliability indices and construct validity consistent with the original version of the test, and it was the version used in this study. Table 2 presents a description of each scale and the reliability coefficients.
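To make the scoring concrete, here is a minimal sketch in Python of how scale scores under the codings described above might be computed. The data frame layout and column names (e.g., "ASGS_01") are illustrative assumptions, not the authors' actual variable names or analysis pipeline.

```python
import pandas as pd

# Hypothetical responses: one row per participant, one column per item.
# Column names are assumptions for illustration only.
df = pd.DataFrame({
    "ASGS_01": [0, 2, 1], "ASGS_02": [1, 2, 0],          # ... up to ASGS_18
    "CAPE_PD_01": [1, 3, 2], "CAPE_PD_02": [2, 5, 1],    # ... 20 Positive Dimension items
})

# ASGS total under the 0/1/2 coding described above
# (0 = "false", 1 = "I doubt my answer", 2 = "true"): sum across the 18 items.
asgs_items = [c for c in df.columns if c.startswith("ASGS_")]
df["ASGS_total"] = df[asgs_items].sum(axis=1)

# CAPE-42 Positive Dimension score: mean of its items
# (Likert 1-5, 1 = "almost never", 5 = "almost always").
pd_items = [c for c in df.columns if c.startswith("CAPE_PD_")]
df["CAPE_PD"] = df[pd_items].mean(axis=1)

print(df[["ASGS_total", "CAPE_PD"]])
```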

The subscale that measured the psychopathological impact of psychotic symptoms was not applied because the scales of the CAPE-42 were analyzed as dependent variables (and not as independent variables). The aim was to analyze the impact of the social quarantine derived from COVID-19 on subclinical psychotic symptoms and not vice versa.

Procedures

In this study, hypothesis contrast tests were applied by comparing means between two repeated samples. The aim was to verify whether social quarantine could alter perceptual processes and magical belief systems.

Initially, the purpose of this research was to replicate the psychometric properties of the MMSI-2-R by examining its convergent validity with respect to the ASGS and CAPE-42 scales. Between December 2019 and March 2020, 346 subjects responded to the questionnaires. When the state of alarm was decreed in Spain on March 14 because of the health crisis caused by COVID-19 [55], the research had to be interrupted to meet other, more urgent needs related to this crisis. With the state of alarm, a total social quarantine of the population was also decreed for the following 2 weeks of March; the quarantine was subsequently extended until May 10. This forced the research team to decide how to make the best use of the research sample. Recognizing the importance of scientific and statistical analysis of the social, health and economic impact of the SARS-CoV-2 virus, the team reorganized the priorities of the original study and quickly decided to contact the participants again by email so that they could answer the MMSI-2-R, CAPE-42 and ASGS questionnaires a second time, online. Contact with the participants began on May 11 (also the day on which the first phase of easing the quarantine and returning to normal social relations began). The deadline for receiving responses was May 21; this window was intended to adapt the collection of the posttests to the circumstances of each participant, since not all participants could respond to the questionnaires on the last day of quarantine. Of the 346 subjects, only 174 answered the tests again. In the following week, the data were analyzed and the present report was written.

Data analysis

The data were processed in the JASP and JAMOVI programs, both of which are open access and were created by the same research group [56]. Student's t-tests for repeated measures were applied; their nonparametric version (the Wilcoxon test) and a Bayesian estimation based on the Bayes factor in favor of the alternative hypothesis (hereafter BF10) were also performed. The a priori probabilities were set to 50%, such that the null hypothesis (H0) and the alternative hypothesis (H1) were equiprobable. The Cauchy scale was set to the conventional value of 0.707. From the BFs, the probability (P) that the alternative hypothesis (H1) reproduces the observed data (D) could be obtained using the following transformation formula:

P(H1 | D) = BF10 / (1 + BF10)

This is possible because BF10 values are likelihood ratios; they differ from the classical likelihood-ratio statistic in that the parameters in the equation above are obtained by integration rather than by maximization. As a complement, effect sizes were also estimated using Cohen's d. The risk of error was set to 1% in all contrasts and to 5% for the credibility intervals of the Bayesian estimates.
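As a worked illustration of the transformation above (a sketch of the arithmetic only, not of the JASP/JAMOVI analyses themselves), the posterior probability of H1 can be computed from BF10 as follows; the function name is ours.

```python
def prob_h1_given_data(bf10: float, prior_h1: float = 0.5) -> float:
    """Posterior probability of H1 given the data, from the Bayes factor BF10.

    With prior odds prior_h1 / (1 - prior_h1), the posterior odds are
    BF10 * prior odds, and P(H1 | D) = posterior odds / (1 + posterior odds).
    With prior_h1 = 0.5 (equiprobable hypotheses, as in the study) this
    reduces to BF10 / (1 + BF10).
    """
    posterior_odds = bf10 * (prior_h1 / (1 - prior_h1))
    return posterior_odds / (1 + posterior_odds)

print(prob_h1_given_data(3.0))   # 0.75: the data are 3x more likely under H1
print(prob_h1_given_data(0.2))   # ~0.17: the evidence favours H0
```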


Conscious Brain-to-Brain Communication in Humans Using Non-Invasive Technologies



Materials and Methods

Ethics Statement

The ethical principles as laid out by the WMA Declaration of Helsinki (2013), binding for medical research, were observed. Legally, no formal ethics approval is required for social science research in many European countries, unless the research objectives involve issues regulated by law, which was not the case (e.g., use of medications, medical devices, psychological intervention, or deception). Prior to commencement, we obtained a waiver of permission from the Croatian research institute's ethics committee.

Our procedures were in accordance with the standards published by the German Society for Psychology, an adaptation of the APA's ethical principles and code of conduct. Participants were recruited at the Croatian colleague's research institution. A researcher (or an assistant) invited participants and informed them about the procedures involved in the entirely voluntary study. Participants were assured of their anonymity. Questionnaires were handed out together with an information sheet for written informed consent. Participants who were willing to contribute simply returned the completed package.

Croatian Supernatural Belief Scale

The ten-item SBS was translated from English into Croatian by three researchers (graduate and PhD level) at a public university. All keywords, such as god, demon, angel, devil, heaven, hell, miracles and souls, exist in Croatian and are used in the same contexts as in English. A few items were discussed to choose the Croatian wording that best represented the meaning of the original statements. We ensured the quality of the translation by acquiring a retranslation from a fourth colleague (a PhD-level psychology student) proficient in English and Croatian. As the sentences are straightforward, the Croatian items back-translated closely, with only minor semantic nuances. Agreement with the ten statements is expressed on 9-point Likert scales anchored at “strongly disagree” (−4) and “strongly agree” (4).

Participants and Procedure

This study was run as part of a larger study on individual difference variables; the other measures in that study were unrelated to the present investigation. Volunteers were recruited at a large public university: 642 Croatian students, from freshmen to seniors (69.0% female, 29.6% male, 1.4% unspecified; Mage = 20.38 years [range 18–50], SD = 2.66), participated with the permission of the faculty and the professors of the selected classes. Some of the professors offered extra course credit as compensation, but we did not follow up on which students were compensated. Study majors included Psychology, Sociology, Communication Studies, Journalism, Electrical Engineering, and Computer Science. Participants returned the distributed questionnaires within one week. Five participants who did not provide any SBS data are not included in the SBS self-report analyses.

Participants first answered a sociodemographic questionnaire, including the question “What is your religious denomination?” The majority of the university sample consisted of Christians (66.8%), followed by atheists and agnostics (29.1%) and participants from other religious traditions (2.0%; e.g., Buddhist); 2.0% of participants did not answer the question.

Next, participants completed the SBS, along with frequency measures of religious behavior, in particular how frequently they prayed, attended church/holy mass, and took communion. Table 2 displays the frequency of these religious activities as a function of religious denomination. More than 95% of the Christians reported that they prayed; more than 84% claimed to go to church at least once a year; and 63% reported taking communion at least once a year.

Each participant was also asked to recruit the person who knew them best to independently fill in a similar set of questionnaires about them. We stressed that the participant and the informant were to fill in the questionnaires separately and independently of each other. The completed questionnaires were to be returned one week later in sealed envelopes. Only nine participants were unable to elicit any data from their peers (these peers are not included in analyses involving peer-reports); altogether, responses from 633 peers were obtained. As part of their questionnaires, peers were asked about their relationship to the target participants: 39.3% were friends, 23.5% romantic partners, 19.8% parents, and 12.8% siblings.

Additionally, for a cross-cultural comparison, we used a previously collected sample of 360 English-speaking students to inspect measurement invariance (62.5% female, Mage = 20.92, SD = 3.65). These participants had been recruited at a New Zealand university and sampled from various study majors (original research presented by Jong et al. [11]). According to self-reported ethnic background (multiple nominations were possible), the majority of the sample had a European/Caucasian heritage (approx. 80%), followed by Pacific Islander, Asian, African, and Indian backgrounds. In terms of religion, 55% of the participants categorized themselves as None/Atheist/Agnostic/Undecided, 42% reported being Christian, and the rest identified as Spiritual, Free Thinker, Muslim, Hindu, Buddhist, or “other”.

Statistical Analysis and Evaluation of Model Fit

Mplus 7.11 [69] was used to implement the CFAs; other analyses were run with SPSS 21. Assuming normal theory (maximum likelihood estimation, ML) for ordinal data can yield biased parameter estimates when the number of response categories is very small. Given that five or more response categories yield ML estimators that are no worse than weighted least squares estimators (WLSMV [70–72]), we opted for ML to model the nine SBS response categories.

We obtained ML estimates of the CFA parameters with robust standard errors to account for violations of multivariate normality assumptions (self-report data: χ2(20) = 8588.83, p < .0001); without robust procedures, model fit indices would be biased. Mplus provides MLR for maximum likelihood with robust ‘Huber-White’ standard errors and a scaled test statistic asymptotically equivalent to the Yuan–Bentler T2* statistic [73–75] and similar to the robust Satorra–Bentler scaled χ2 statistic (MLM [76, 77]). When conducting χ2-difference tests (or likelihood ratio tests [78]), the procedure has to be corrected for the scaling factors of the robust MLR procedures [79–81]. The resulting Satorra–Bentler scaled chi-squared difference tests (and other goodness-of-fit statistics based on the scaled chi-square) are then robust to nonnormality.
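For illustration, a minimal sketch of the scaling-factor correction described above (the Satorra–Bentler scaled chi-square difference test, as typically computed from Mplus MLM/MLR output) might look like the following. The numeric values are hypothetical, and the sketch assumes the chi-squares supplied are the scaled values printed by the software together with their scaling correction factors.

```python
from scipy.stats import chi2

def sb_scaled_chisq_diff(t0, df0, c0, t1, df1, c1):
    """Satorra-Bentler scaled chi-square difference test.

    t0, df0, c0: scaled chi-square, degrees of freedom, and scaling correction
                 factor of the nested (more restricted) model.
    t1, df1, c1: the same quantities for the comparison (less restricted) model.
    Returns the scaled difference statistic and its degrees of freedom.
    """
    cd = (df0 * c0 - df1 * c1) / (df0 - df1)   # difference-test scaling correction
    trd = (t0 * c0 - t1 * c1) / cd             # scaled difference statistic
    return trd, df0 - df1

# Hypothetical values copied from two nested CFA outputs:
trd, ddf = sb_scaled_chisq_diff(t0=112.4, df0=35, c0=1.21, t1=84.9, df1=27, c1=1.18)
print(trd, ddf, chi2.sf(trd, ddf))   # statistic, df, p-value
```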

Given the limitations of any individual fit index, we tested the goodness-of-fit of plausible models using multiple criteria. First, to establish model fit, the χ2 test would ideally be non-significant [82], and the χ2/df ratio should be as low as possible, ideally no higher than 2 [83]. Second, the comparative fit index (CFI) with values > .95/.90 indicates good/appropriate model fit, respectively [84, 85]. Third, the root mean square error of approximation (RMSEA) with values of .00–.05/.06–.08/.09–.10 indicates good/reasonable/poor model fit, respectively [86]. Fourth, the standardized root mean square residual (SRMR) with values below .05/.08 reflects good/appropriate fit [85, 87]. Finally, we used the Akaike Information Criterion (AIC) [88] for single-group CFAs and the Bayesian Information Criterion (BIC) [89] for invariance tests with multiple-group CFAs [90]. Lower AIC values indicate a more accurate model, and similarly for BIC, though BIC better reflects the true data-generating process, as it penalizes overly complex (less parsimonious) models more strictly than AIC; hence, lower BIC values indicate a better trade-off between fit and complexity. For both AIC and BIC, differences in information criteria greater than 10 provide “strong evidence” against equal fit of the models in question [91].
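As a small illustration of how these cut-offs might be applied in practice (a sketch only; the function and its input format are ours, and, as the next paragraph stresses, the labels are heuristics rather than pass/fail rules):

```python
def evaluate_fit(chisq, df, cfi, rmsea, srmr):
    """Apply the approximate cut-offs listed above to a set of fit indices."""
    verdicts = {}
    verdicts["chisq/df"] = "good (<= 2)" if chisq / df <= 2 else "above 2"
    verdicts["CFI"] = "good" if cfi > .95 else "appropriate" if cfi > .90 else "below cut-offs"
    verdicts["RMSEA"] = "good" if rmsea <= .05 else "reasonable" if rmsea <= .08 else "poor"
    verdicts["SRMR"] = "good" if srmr < .05 else "appropriate" if srmr < .08 else "above cut-offs"
    return verdicts

# Hypothetical fit indices from a single-group CFA:
print(evaluate_fit(chisq=48.2, df=20, cfi=.96, rmsea=.047, srmr=.041))
```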

All cut-offs for each of these fit indices are approximations and subject to model-complexity and population characteristics. The acceptance of a measurement model is not a binary (pass-fail) decision, nor does acceptance depend wholly on strict adherence to the cut-offs [92, 93]. Rather, the best practice in accepting a model involves the comparative testing of alternative models on the same data [94].


Introduction

Science is a cumulative and self-corrective enterprise; over time, the veracity of the scientific literature should gradually increase as falsehoods are refuted and credible claims are preserved (Merton, 1973; Popper, 1963). These processes can optimally occur when the scientific community is able to access and examine the key products of research (materials, data, analyses, and protocols), enabling a tradition in which results can be truly cumulative (Ioannidis, 2012). Recently, there has been growing concern that self-correction in psychological science (and scientific disciplines more broadly) has not been operating as effectively as assumed, and that a substantial proportion of the literature may therefore consist of false or misleading evidence (Ioannidis, 2005; Johnson, Payne, Wang, Asher, & Mandal, 2016; Klein et al., 2014; Open Science Collaboration, 2015; Simmons, Nelson, & Simonsohn, 2011; Swiatkowski & Dompnier, 2017). Many solutions have been proposed; we focus here on the adoption of transparent research practices as an essential way to improve the credibility and cumulativity of psychological science.

There has never been an easier time to embrace transparent research practices. A growing number of journals, including Science, Nature, and Psychological Science, have indicated a preference for transparent research practices by adopting the Transparency and Openness Promotion guidelines (Nosek et al., 2015). Similarly, a number of major funders have begun to mandate open practices such as data sharing (Houtkoop et al., 2018). But how should individuals and labs make the move to transparency?

The level of effort and technical knowledge required for transparent practices is rapidly decreasing with the exponential growth of tools and services tailored towards supporting open science (Spellman, 2015). While a greater diversity of tools is advantageous, researchers are also faced with a paradox of choice. The goal of this paper is thus to provide a practical guide to help researchers navigate the process of preparing and sharing the products of research, including materials, data, analysis scripts, and study protocols. In the supplementary material, readers can find concrete procedures and resources for integrating the principles we outline into their own research. 1 Our view is that being an open scientist means adopting a few straightforward research-management practices, which lead to less error-prone, reproducible research workflows. Further, this adoption can be piecemeal: each incremental step towards complete transparency adds positive value. These steps not only improve the efficiency of individual researchers, they also enhance the credibility of the knowledge generated by the scientific community.


Subjective social class

Stephens et al.’s (2014) conceptualization of culture-specific selves that vary as a function of social class is compatible with the ‘subjective social rank’ argument advanced by Kraus, Piff, and Keltner (2011). The latter authors argue that the differences in material resources available to working- and middle-class people create cultural identities that are based on subjective perceptions of social rank in relation to others. These perceptions are based on distinctive patterns of observable behaviour arising from differences in wealth, education, and occupation. ‘To the extent that these patterns of behavior are both observable and reliably associated with individual wealth, occupational prestige, and education, they become potential signals to others of a person's social class’ (Kraus et al., 2011, p. 246). Among the signals of social class is non-verbal behaviour. Kraus and Keltner (2009) studied non-verbal behaviour in pairs of people from different social class backgrounds and found that whereas upper-class individuals were more disengaged non-verbally, lower-class individuals exhibited more socially engaged eye contact, head nods, and laughter. Furthermore, when naïve observers were shown 60-s excerpts of these interactions, they used these disengaged versus engaged non-verbal behavioural styles to make judgements of the educational and income backgrounds of the people they had seen with above-chance accuracy. In other words, social class differences are reflected in social signals, and these signals can be used by individuals to assess their subjective social rank. By comparing their wealth, education, occupation, aesthetic tastes, and behaviour with those of others, individuals can determine where they stand in the social hierarchy, and this subjective social rank then shapes other aspects of their social behaviour. More recent research has confirmed these findings. Becker, Kraus, and Rheinschmidt-Same (2017) found that people's social class could be judged with above-chance accuracy from uploaded Facebook photographs, while Kraus, Park, and Tan (2017) found that when Americans were asked to judge a speaker's social class from just seven spoken words, the accuracy of their judgments was again above chance.

The fact that there are behavioural signals of social class also opens up the potential for others to hold prejudiced attitudes and to engage in discriminatory behaviour towards those from a lower social class, although the focus of Kraus et al. (2011) is on how the social comparison process affects the self-perception of social rank, and how this in turn affects other aspects of social behaviour. These authors argue that subjective social rank ‘exerts broad influences on thought, emotion, and social behavior independently of the substance of objective social class’ (p. 248). The relation between objective and subjective social class is an interesting issue in its own right. Objective social class is generally operationalized in terms of wealth and income, educational attainment, and occupation. These are the three ‘gateway contexts’ identified by Stephens et al. (2014). As they argue, these contexts have a powerful influence on the cognition and behaviour of the individuals who operate within them, but they do not fully determine how individuals developing and living in these contexts think, feel, and act. Likewise, there will be circumstances in which individuals who are objectively, say, middle-class construe themselves as having low subjective social rank as a result of the context in which they live.

There is evidence from health psychology that measures of objective and subjective social class have independent effects on health outcomes, with subjective social class explaining variation in health outcomes over and above what can be accounted for in terms of objective social class (Adler, Epel, Castellazzo, & Ickovics, 2000; Cohen et al., 2008). For example, in the prospective study by Cohen et al. (2008), 193 volunteers were exposed to a cold or influenza virus and monitored in quarantine for objective and subjective signs of illness. Higher subjective class was associated with less risk of becoming ill as a result of virus exposure, and this relation was independent of objective social class. Additional analyses suggested that the impact of subjective social class on likelihood of becoming ill was due in part to differences in sleep quantity and quality. The most plausible explanation for such findings is that low subjective social class is associated with greater stress. It may be that seeing oneself as being low in subjective class is itself a source of stress, or that it increases vulnerability to the effects of stress.

Below I organize the social psychological literature on social class in terms of the impact of class on three types of outcome: thought, encompassing social cognition and attitudes; emotion, with a focus on moral emotions and prosocial behaviour; and behaviour in high-prestige educational and workplace settings. I will show that these impacts of social class are consistent with the view that the different construals of the self fostered by growing up in low versus high social class contexts have lasting psychological consequences.


Abstract

Due to the devastating impact on victims and society, scholars have started to pay more attention to the phenomenon of mass shootings (MS) in the United States. While the extant literature has given us important insights, disparities in conceptualizations, operationalizations, and methods of identifying and collecting data on these incidents have made it difficult for researchers and audiences to come to a deeper and more comprehensive understanding of offender characteristics, causes, and consequences. Using a mixed-method systematic review, this study seeks to assess the state of scholarly research in journal articles regarding MS in the United States. Using SCOPUS as the search database, a total of 73 peer-reviewed journal articles on MS within the United States published between 1999 and 2018 were included in this study. This study finds that the number of articles published on MS increased dramatically between 1999 and 2018. Also, most MS studies tend to rely heavily on open-source data and use differing definitions of MS. We further examined and discussed the theoretical frameworks, methodology, and policy suggestions used in each study. Based on the findings of this study, we suggest implications for future research.


Methods

To determine whether, and to what extent, source alerts limit the influence of foreign disinformation, we conducted an experimental test of the following hypotheses on a large national MTurk sample a week before the 2020 presidential election:

Hypothesis 1: Exposure to source alerts will reduce social media users’ tendency to believe pseudonymous disinformation.

Hypothesis 2: Exposure to source alerts will mitigate social media users’ tendency to spread pseudonymous disinformation online.

H2a: Exposure to source alerts will mitigate social media users’ tendency to “like” pseudonymous disinformation.

H2b: Exposure to source alerts will mitigate social media users’ tendency to “share/retweet” pseudonymous disinformation.

Hypothesis 3: Exposure to source alerts will mitigate social media users’ tendency to spread pseudonymous disinformation offline.

H3a: Exposure to source alerts will mitigate social media users’ tendency to initiate conversations about pseudonymous disinformation.

H3b: Exposure to source alerts will mitigate social media users’ tendency to engage in conversations about pseudonymous disinformation.

In order to test our hypotheses, we conducted an experiment utilizing disinformation directly related to the 2020 U.S. presidential election. The experiment took place on October 22-23, 2020 (N = 1,483) and used a two (social media platform) by two (party-consistent message) by two (source cue) design, producing eight experimental conditions and four control conditions. Subjects were recruited via MTurk. All subjects were required to be at least 18 years of age and U.S. citizens. The survey was built and distributed via SurveyMonkey, and randomization was used. 3 We conducted randomization tests involving partisanship and social media types. Results of the randomization checks are available from the authors. Thirteen subjects dropped out of the experiment. The experiment was estimated to take approximately seven minutes to complete.

Prior to exposure to the treatments, we asked subjects about their social media usage and party identification. These questions were used to branch subjects into the appropriate stimuli groups. That is, subjects who indicated a preference for Twitter were branched into the Twitter treatment group, while subjects who indicated a preference for Facebook received a treatment that appeared to be from Facebook. The Twitter and the Facebook posts use the same images, wording, and user profile details, but the meme was altered to appear as a Facebook or Twitter post. Additionally, Democrats and Republicans were exposed to treatment memes that were relevant to their stated partisanship. Democrats were shown a message about alleged voter suppression efforts by Republicans, while Republicans were shown a message about voter fraud efforts allegedly perpetrated by Democrats. 4 In response to the partisanship question, subjects who initially indicated that they were independent or something else were then prompted to indicate whether they thought of themselves as being closer to the Republican or Democratic Party and were branched into an experimental condition accordingly. Self-proclaimed Independents and non-partisans often act in the same fashion as their partisan counterparts, marking partisan independence as a matter of self-presentation rather than actual beliefs and behaviors (Petrocik, 2009). Additionally, negative partisanship has been identified as a primary motivator in the way Americans respond to political parties and candidates (Abramowitz & Webster, 2016); our treatments capture appeals to negative partisanship more than they capture loyalty to any one particular party. After the filter questions, all subjects were randomly assigned to the source cues (control, Russian Government Account, Foreign Government Account). Source alerts were presented adjacent to the posts with a highly visible cautionary symbol to draw subjects’ attention to the alert. Finally, subjects were asked a number of questions about the social media post they viewed and how they would engage with the post if it appeared on their Facebook or Twitter feed.
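A minimal sketch of the branching and random-assignment logic described above might look like the following; the function, condition labels, and field names are our own illustrative choices, not the authors' implementation.

```python
import random

SOURCE_CUES = ["control", "Russian Government Account", "Foreign Government Account"]

def assign_condition(preferred_platform, party_id, leaned_party=None):
    """Branch a subject by platform preference and partisanship, then randomly assign a source cue."""
    platform = "Twitter" if preferred_platform == "Twitter" else "Facebook"
    # Independents / "something else" are branched by the party they report being closer to.
    party = party_id if party_id in ("Democrat", "Republican") else leaned_party
    # Democrats see the voter-suppression meme; Republicans see the voter-fraud meme.
    meme = "voter suppression" if party == "Democrat" else "voter fraud"
    return {"platform": platform, "meme": meme, "source_cue": random.choice(SOURCE_CUES)}

print(assign_condition("Twitter", "Independent", leaned_party="Republican"))
```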

Overall, our MTurk sample is more male, better educated, and younger than the U.S. population: 42.2% are female, 69.7% have post-secondary degrees, and 24.8% are 18-29 years old, 59.87% are 30-49, and 15.3% are 50 or older. 5 There is an emerging consensus among political methodologists about the efficacy of MTurk sampling procedures in experimental research. Berinsky et al. (2012), for example, find MTurk samples to be more representative than “the modal sample in published experimental political science” (p. 351), even if they are less representative than national probability samples or internet-based panels. In addition, 77.6% are white, while 39% reside in the South. Moreover, 52.1% side with the Democratic Party, with 47.9% noting a closer link with the Republican Party.

Figure 3. Disinformation post shown to Democrats, modified by social media type and treatment group.

Figure 4. Disinformation post shown to Republicans, modified by social media type and treatment group.

While we acknowledge that fewer subjects chose Twitter than Facebook, there are enough subjects in each Twitter condition for statistical power.

Subjects answered a number of post-treatment response items to examine their perceptions and behavioral intentions. Specifically, we asked respondents to report how truthful the information in the post was (0 = false and 1 = true), how likely they would be to “like” and “share”/“retweet” the post, and how likely they would be to initiate conversation and engage in conversation about the post offline (each coded -3 to 3, ranging from least to most likely). All of those items, along with others measuring respondents’ partisanship and social media habits, are included in the appendix.

In addition to the condition dummy variables, we include a number of control variables in our models. Age is included because the mean age of subjects who chose the Twitter conditions was lower than the mean age of subjects who chose Facebook. We include a measure of gender because the exclusion of 13 incomplete responses resulted in a gender imbalance in two of our Twitter conditions. Education is included because belief in conspiracy thinking is more common among less educated individuals (Goertzel, 1994). We include a variable for the South because regional differences are common in political behavioral research. Subjects’ tendency to share political information and their rate of social media use are used to control for social media habits. Finally, we included an authoritarian personality measure because authoritarianism has been shown to be a key variable influencing political behaviors in multiple domains (Feldman, 2003). Specifically, more authoritarian individuals have been found to staunchly defend prior attitudes and beliefs. Low-authoritarian individuals, on the other hand, have demonstrated a greater need for cognition (Hetherington & Weiler, 2009; Lavine et al., 2005; Wintersieck, forthcoming). As a result, high authoritarians may be less moved by source alerts on information they are likely to believe, while low authoritarians may be more likely to take up these cues when making assessments about the memes. Information about coding these variables is presented in the appendix.
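For concreteness, a sketch of a model along the lines described (condition entered as a factor plus the listed controls) is shown below; the variable names, the synthetic data, and the use of a linear probability model are illustrative assumptions only, not the authors' codebook or estimator.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in data; variable names are illustrative only.
data = pd.DataFrame({
    "perceived_truth": rng.integers(0, 2, n),                     # 0 = false, 1 = true
    "source_cue": rng.choice(["control", "russian", "foreign"], n),
    "age": rng.integers(18, 75, n),
    "female": rng.integers(0, 2, n),
    "education": rng.integers(1, 6, n),
    "south": rng.integers(0, 2, n),
    "share_politics": rng.integers(1, 6, n),
    "social_media_use": rng.integers(1, 6, n),
    "authoritarianism": rng.integers(0, 5, n),
})

# Condition as a categorical factor (control as reference) plus the controls listed above.
model = smf.ols(
    "perceived_truth ~ C(source_cue, Treatment('control')) + age + female + education"
    " + south + share_politics + social_media_use + authoritarianism",
    data=data,
).fit()
print(model.params)
```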


Conclusion

In this review, we have highlighted the methodological diversity of measures used to assess eye contact between two human beings. Of particular importance for future work is how various operationalizations of eye contact—such as the personal experience of eye contact or more precise measures of gaze location—can be used to better understand the phenomenon of eye contact and its consequences for human interaction. To do this, research is needed that captures both the first-person experience of eye contact and the more objective outsiders’ perspective. Researchers need to ground their choices of specific definitions and operationalizations of eye contact in evidence or theory. Future studies would benefit from specific descriptions of which techniques were used, the direction of gaze (reciprocal or not), the area of interest of the gaze direction (eyes, face, body, or person), and the participant behavior. Moreover, a more meticulous investigation of the comparability of measures is needed before conclusions can be drawn and theories formed about the workings of eye contact.


The Importance and Limitations of Peer-Review

Peer-review is a critical part of the functioning of the scientific community, of quality control, and of the self-corrective nature of science. But it is no panacea. It is helpful to understand what it is and what it isn’t, its uses and its abuses.

When the statement is made that research is “peer-reviewed,” this usually refers to the fact that it has been published in a peer-reviewed journal. Different scientific disciplines have different mechanisms for determining which journals are legitimately peer-reviewed. In medicine, the National Library of Medicine (NLM) has rules for peer-review and decides on a case-by-case basis which journals get its stamp of approval. Such journals are then listed as peer-reviewed.

The basic criterion is that there is a formalized process of peer-review prior to publication – so this presents a barrier to publication that acts as a quality control filter. Typically, the journal editor will give a submitted paper to a small number of qualified peers – recognized experts in the relevant field. The reviewers will then submit detailed criticism of the paper along with a recommendation to reject, accept with major revisions, accept with minor revisions, or accept as is. It is rare to get an acceptance as is on the first round.

The editor also reviews the paper, and may break a tie among the reviewers or add their own comments. The process, although at times painful, is quite useful in not only checking the quality of submitted work but also improving it. A reviewer, for example, may point out prior research the authors did not comment on, or may point out errors in the paper that can be fixed.

It is typical for authors to submit a paper to a prestigious journal first, and then if they get rejected to work their way down the food chain until they find a journal that will accept it. This does not always mean that the paper was of poorer quality – the most prestigious journals have tons of submissions and can pick and choose the most relevant or important studies. But sometimes it does mean the paper is mediocre or even poor.

The Limitations of Peer-Review

It is important to realize that not all peer-reviewed journals are created equal. Small or obscure journals may follow the rules and gain recognized peer-reviewed status, but be desperate for submissions and have a low bar for acceptance. They also have a harder time getting world-class experts to review their submissions, and have to find reviewers that are also farther down the food chain. The bottom line is that when a study is touted as “peer-reviewed” you have to consider where it was reviewed and published.

Even at the best journals, the process is only as good as the editors and reviewers, who are people who make mistakes. A busy reviewer may give a cursory read through a paper that superficially looks good, but miss subtle mistakes. Or they may not take the time to chase down every reference, or check all the statistics. The process generally works, and is certainly better than having no quality control filter, but it is also no guarantee of correctness, or even the avoidance of mistakes.

Peer-reviewers also have biases. They may be prejudiced against studies that contradict their own research or their preferred beliefs. They may therefore bias the published studies in their favored direction, and may be loath to give a pass to a submission that would directly contradict something they have published. For this reason, editors often allow authors to request or recommend reviewers, or to request that certain people not be asked to review. Each journal has its own policy. Sometimes an editor will specifically use a reviewer the authors requested not be used, suspecting they may be trying to avoid legitimate criticism.

The process can be quite messy, and full of politics. But in the end it more or less works. If an author thinks they were treated unfairly by one journal, they can always go to another or they can talk directly to the editor to appeal a decision and try to make their case.

Post Publication Peer Review

The term peer-review is sometimes used to refer to the fact that papers are read and reviewed by the broader scientific community once they are published. However, this post-publication review should not be confused with being “peer-reviewed,” and the term should not be used to refer to post-publication review, to avoid confusion.

The process, however, is even more critical to quality control in science. Now, instead of one editor and 2-3 reviewers looking at a study, dozens or hundreds (maybe even thousands) of scientists can pick over a study, dissect the statistics and the claims, bring to bear knowledge from related areas or other research, and provide detailed criticism. This is the real “meat grinder” of science. Hundreds of reviewers are more likely to find problems than the few pre-publication reviewers. Arguments can be tested in the unforgiving arena of the scientific community, weeding out bad arguments, honing others, so that only the best survive.

Here is the bottom line – peer-review is a necessary component of quality control in science, but is no guarantee of quality, and you have to know the details of the journal that is providing the peer-review.

