Cogent Social Sciences Volume 7, 2021 – Issue 1
Open access
SOCIOLOGY
The current phase of social sciences research: A thematic overview of the literature
Sandro Serpa (Reviewing editor)
Article: 1892263 | Received 07 Jul 2020, Accepted 12 Feb 2021, Published online: 04 Mar 2021
Abstract
Social sciences research (SSR) is an enterprise that is continuously evolving, but not without debilitating issues that impede the realization of its full potential and usefulness. SSR, as a human initiative and activity, is discussed. The paper features the current methods and authorship issues that enhance the validity of research outcomes. The argument advanced is that, while scholars continue to explore processes for research advancement, vices such as falsification, plagiarism, inappropriate methodology, and wrong data analysis should be minimized and, where possible, ended, since significant challenges are related to misconduct and dishonesty. The present paper expands and updates the literature and proposes bringing the available resources to social science researchers to enhance research outcomes. Overall, the paper recommends continuous striving for tools and process enhancement, and a demonstration of integrity by the stakeholders in SSR conduct, to achieve field acceptance.
Keywords:
- Meta-analysis
- mixed-method
- open access
- authorship
- result reporting
- misconduct
- methodological challenge
PUBLIC INTEREST STATEMENT
Social science research is a mechanism for socio-economic development, as the process generates evidence-based information that guides policy formulation and implementation, among other uses. Its importance has necessitated intense calls for improvement and integrity in initiating and executing the process's activities. In fairness, although with some challenges, tremendous breakthroughs, represented in inventions and refinements, have characterized the trade. This paper provides a thematic overview of these advances, complemented with the challenges confronting the practice of SSR. The purpose is to acquaint stakeholders with the developments and constraints of their field, and the hope is to provide information that would lead to better research, as the breakthroughs represent the available best practices.
1. Introduction
Science, the systematic study of structure and behavior in the natural and social worlds, is among the most compelling and trustworthy ways of knowing, alongside authority, experience, and common sense (Gravetter & Forzano, Citation2018; Morling, Citation2018). Thus, it has made tremendous contributions to human existence, as manifested in all the discoveries and inventions. Through its methods, science provides a standardized set of techniques for building scientific knowledge, such as making valid observations, interpreting results, and generalizing them. Other identified components include objectivity, confirmation of findings, self-correction, and control (Smith & Davis, Citation2016). Science is broadly classified into natural and social. The former deals with naturally occurring objects or phenomena, such as light and the earth, while the latter deals with people and their individual or collective behavior (Colander & Hunt, Citation2019; Mukherjee et al., Citation2018). Social sciences (SS) is a category of academic fields concerned with society and social interactions. It cuts across several academic disciplines, including geography, economics, political science, sociology, psychology, and even social medicine. Though rooted in philosophy, social sciences research is concerned with establishing authentic and valid approaches to the study of human beings, with prominent contributors including Descartes, Comte, Durkheim, Kant, Schutz, and Weber (Matthews & Ross, Citation2010). For instance, psychology, the science of mental processes and behavior, aims to explain and control behavior. Sociology, the science of social groups, seeks to understand human interaction in society. And economics, the science of firms and markets, is concerned with the creation and accumulation of wealth in a state (Mukherjee et al., Citation2018). The focus of the SS disciplines determines the kinds of questions they ask, which influence the approaches to be used. Research is the wheel and soul of the SS, as almost all the feats associated with it are implicitly or explicitly outcomes of investigation.
Social science research is a process characterized by two sets of activities and influences (Bhattacherjee, Citation2012). First, it is a truth-seeking endeavor that seeks connections to human action and interaction. The second is typified by internal and external processes. Internal processes are fundamental to specific research, e.g., exploration, research design, and research execution. External processes, on the other hand, include factors such as research funding, dissemination, and application.
Consequently, social science research (SSR) implies any form of systematic study of people's actions and interactions, with the social fundamentals of thought and behavior as the center of attention. SSR has the purpose of providing an understanding, explanation, and ultimately control of social behavior, on the assumption that such behavior has a known and measurable cause (Mor, Citation2019). The needs and challenges of the social sciences are responded to by revisiting the various elements that make up SSR processes. Therefore, the present paper focuses on a conceptual review of the current phase: where and what SSR is now, and the enduring challenges confronting the enterprise.
Review and research papers on the development and challenges of SSR are a historical and continuous contribution to the existing literature. However, the present paper differs by offering a justification and establishing its importance for SSR enterprise stakeholders. The extant works are specific in their presentation of issues within the scope of SSR's trends and challenges. Scandura and Williams' (Citation2000) study was limited to political science disciplines; McKie and Ryan's (Citation2012) work was on trends and challenges in sociological research; Aro-Gordon's (Citation2015) paper was mostly a tutorial on research methodology and statistics; Haq's (Citation2020) bibliometric analysis of social sciences research trends focused on Pakistan; and Hodonu-Wusu and Lazarus' (Citation2018) bibliometric review was on library and information science research trends. The present study addresses development and challenges not limited to any social science discipline or geographical context, and incorporates issues inclusively within the scope of SSR's growth and challenges. The themes and their value, precepts, and way forward are discussed where appropriate.
The themes presented in this paper represent the positions of several scholars with voices in the scientific world (e.g., Auguste Comte, 1798–1857; Émile Durkheim, 1858–1917), the recommendations of several other scholars (such as Levitt et al., Citation2018; Memon et al., Citation2019; Schreiber, Citation2017; Watkins, Citation2018) in terms of best practices, and the concerns and recommendations of journals that are usually made public in reviewers' and editorial comments. Several regulatory bodies in the various social sciences (e.g., the American Psychological Association and the American Sociological Association) and the present authors' personal experiences across several academic publications also contribute to the themes.
2. Historical phases of social science research
The social world is inherently complex and consequently places on humans the necessity to be creatively curious for their survival. Among the outcomes of this curious tendency is the development of the scientific method. The application of scientific methods to the study of society took moral and social philosophy into the social sciences. Every effort to understand social behavior and society through the scientific approach is tagged SSR.
William Thompson first used the term social science in 1824 (Claeys, Citation1986), while Auguste Comte's (1798–1857) work on the scientific view, positivism, developed to combat what he perceived as the harmful and destructive philosophy of the Enlightenment, influenced Herbert Spencer and Émile Durkheim as a basis for practical social research (Ritzer, Citation2011). SSR history is characterized by stages of development and refinement reflected in the enterprise's current phase. The first phase was the adoption of the positivist methodology that had been established in the "hard" sciences. However, at that earlier time, positivism ran parallel with interactionism; this represents a period in which specific social science research was either quantitative or qualitative. This dichotomy in method is reconciled in the mixed method, representing the current stage of the SSR method.
Glass (Citation1976) introduced the term "meta-analysis," and a year later, Smith and Glass (Citation1977) published a meta-analysis of the effectiveness of psychotherapy from 375 studies. In the early 1900s, Charles Edward Spearman (1863–1945) and Karl Pearson (1857–1936) developed and introduced factor analysis and structural equation modeling (SEM) into SSR (Tarka, Citation2018). Factor analysis developed along with intelligence theories and intelligence tests, as founding intelligence scholars developed the tool to enhance the understanding and measurement of intelligence (Keith, Citation2019), and structural equation modeling (SEM) has its origins in factor analysis (Wang & Wang, Citation2019). Generally, the idea of variables having an indirect effect on another variable was first described by Wright in 1920. However, the original statistical description of the process was presented by Hyman in 1955 and Lazarsfeld in 1955. James and Brett (Citation1984) and Baron and Kenny (Citation1986) adopted and introduced the procedure into psychology as mediator and moderator analysis (MacKinnon et al., Citation2007).
Open peer review, open-access publishing, and multiple authorship are relatively recent entrants in SSR's development history. Open peer review is part of the scholarly communications discussion (Wolfram et al., Citation2020). The Open Access Movement (OAM), with open-access journals as an outcome, started in 1991 (Rath, Citation2015). The movement's philosophy was enhanced by three milestones of the early twenty-first century: the Budapest Open Access Initiative, the Bethesda Statement on Open Access Publishing, and the Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities. From the 1700s to the 1920s, the norm of academic publishing was sole authorship (Greene, Citation2007). Today, most academic fields are engaged in multiple authorship because academic publications have become critical to career success (Henriksen, Citation2016).
Although SSR was a methodological response to society’s study, its development has gone beyond that concern, as documented in this review. It extends to issues such as reviewing and publishing.
3. The current phase of social sciences research
SSR literature has a history of being dominated by specific concerns at any given period. These concerns usually take the form of a revisiting, with substantial vigor, of existing issues or the introduction of a novel process. The types of development currently running in SSR are briefly presented as follows:
3.1. Mixed methods
Research methods are broadly classified as quantitative or qualitative. The latter usually emphasizes words rather than quantification in data collection. Approaches to quantitative and qualitative research methods are informed by ontology and epistemology. Mixed-method research, a hybrid that emerged from combining quantitative and qualitative research methods in the way most suitable for a research project, is gaining popularity in SSR (Hafsa, Citation2019). It aims at maximizing the strengths and minimizing the weaknesses of the two research methods, thereby increasing the validity of research findings, among other advantages (Hafsa, Citation2019; McKim, Citation2017). SS disciplines traditionally biased toward quantitative methods, such as psychology, are now welcoming some features of the qualitative approach.
In contrast, traditionally qualitative disciplines, such as sociology, are now substantially quantifying. Books on qualitative methods for psychology (e.g., Howitt, Citation2016; Levitt, Citation2020) and on quantification in sociology (e.g., Weinstein, Citation2010) are now available. Mixed-method (MM) research demands greater expertise than either quantitative or qualitative methods alone. Such expertise is essential because training in the individual disciplines that constitute the SS is often tilted toward either quantitative or qualitative research methods. This mix has much potential for the advancement of SSR, as each of the methods minimizes the weaknesses and maximizes the strengths of the other. Existing statistical software, such as IBM SPSS and STATA for quantitative data analysis and NVivo, HubSpot, and Quirkos for qualitative analysis, has tremendously enhanced the exploration of each of the methods.
Although the strengths of engaging in mixed-methods research are widely discussed and convincing, the process has some shortcomings, including heavy demands on time, energy, and other resources (Hafsa, Citation2019). Similarly, while MM has gained substantial recognition and acceptance as a research approach in varied fields, including SSR, it has often been poorly adopted and applied when judged against existing evaluation standards, which could have negatively impacted the perception of studies using this method (Askun & Cizel, Citation2020; Bartholomew & Lockard, Citation2018; Fàbregues et al., Citation2020). This emphasizes the need for authors to acquaint themselves with existing reports (Levitt et al., Citation2018) on the approach's best practices.
3.2. Factor analysis
Another statistical technique that has remarkably gained space is factor analysis (FA). FA is a powerful technique among applied multivariate analytical procedures. It is a set of statistical procedures used to determine the degree to which interrelated variables can be assembled into one combined factor instead of standing as a series of separate variables (Howitt & Cramer, Citation2017a). FA is broadly classified into exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). In CFA, the researcher must have explicit expectations regarding the number of factors, which variables reflect given factors, and whether the factors are interrelated (Howitt & Cramer, Citation2017b). The purposes of FA include the development of theory, the construction of questionnaires to measure an underlying variable, the summarizing of relationships in a more parsimonious set of factor scores that can be used in further analyses, and the placing of objects into categories depending on their factor scores (Field, Citation2018). However, in the social sciences, FA's dominant use is to establish the construct validity of measures. CFA is the most prominent method used to test the validity of measures (e.g., discriminant and convergent validity), because it combines the various searches for validity evidence. The validity of the measures used in a study has implications for the validity of results from that study; thus, factor analytic tests of measurement efficacy should be expected by journals in the SS disciplines. Although widely recommended and adopted in SSR, the technique is surrounded by several unresolved issues, such as the appropriate sample size, possibilities of obtaining ambiguous results, the difficulty of determining the number of factors to accept, and mathematical problems (Trninić et al., Citation2013). The practice of factor analysis should be preceded by adequate training, as the process is technical and is expected to provide outcomes for decisions. Several questionable practices have been identified with factor analysis (Crede & Harms, Citation2019). The importance of meaningful research outputs is incontestable; SSR needs to adopt the principles outlined in several reports (e.g., Watkins, Citation2018) on best practices regarding factor analysis.
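To make the procedure concrete, the sketch below runs an exploratory factor analysis on simulated questionnaire items with scikit-learn's FactorAnalysis; the data, the six items, and the two-factor structure are illustrative assumptions rather than anything drawn from the cited sources.

```python
# A minimal EFA sketch on simulated item scores (hypothetical two-factor structure).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)
n = 500
latent = rng.normal(size=(n, 2))                                   # two latent traits
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.9]])
items = latent @ loadings.T + rng.normal(scale=0.5, size=(n, 6))   # six observed items

fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(items)          # factor scores usable in further analyses
print(np.round(fa.components_, 2))        # estimated loadings (factors x items)
```

In applied work, of course, the number of factors would be decided with the usual diagnostics (e.g., scree plots or parallel analysis) rather than fixed in advance as done here.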
3.3. Structural equation modeling
Statistics is a discipline with much application in SSR, and its adoption is a notable feature of quantitative research. The choice of statistical test adopted for a study is influenced by factors that include the data type and the researcher's competence and preference. However, structural equation modeling (SEM) is becoming a phenomenon in high-impact journals. SEM (also known as covariance structure analysis or covariance structure modeling) is a comprehensive statistical modeling tool used to analyze multivariate data involving complex relationships between and among variables. It is a powerful collection of multivariate analysis techniques that specifies the relationships between variables through measurement equations and structural equations (Wang & Wang, Citation2019). SEM is essentially an extension of and advancement on other statistical techniques such as path analysis, factor analysis, and multiple regression analysis. It extends these techniques in its ability to accommodate both observed and latent variables, its sensitivity to the potential measurement error of variables included in a model, its ability to model multiple dependent variables simultaneously (Thakkar, Citation2020; Wang & Wang, Citation2019), and its application of computationally detailed iterative estimation so that coefficients match the data (Westland, Citation2019).
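As a compact reminder of what "measurement equations and structural equations" refer to, the following is a minimal sketch of the general SEM form in conventional LISREL-style textbook notation; the symbols are standard and are not taken from the cited sources.

```latex
% Measurement model: observed indicators x and y load on latent variables \xi and \eta
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon
% Structural model: relationships among the latent variables themselves
\eta = B\eta + \Gamma\xi + \zeta
```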
SEM is one of the most favored analytical techniques among SS researchers (Rahman et al., Citation2015). It is seen as better than multiple regression analyses and other multivariate methods for investigating a series of dependence relationships simultaneously. Wang and Wang (Citation2019) noted SEM's significant use in testing the relationships posited by different causal theoretical frameworks and in checking for the direct, indirect, and moderating effects of variables in complex models. SEM's capacity to accommodate constructs that cannot be measured directly has broadened and extended the social sciences' scope within the realm of SSR (Westland, Citation2019).
Like other traditional techniques such as correlation, analysis of variance, and regression, SEM does not test causality (Thakkar, Citation2020). Applications of SEM have been defective due to authors' limited understanding, flawed studies, and reviewers who are not sufficiently grounded in SEM's science and art (Hult et al., Citation2006). Several reviews have reported that most published applications of the technique are associated with at least one severe weakness that limits the study's scientific importance (Karakaya-Ozyer & Aksu-Dunya, Citation2018; Zhang et al., Citation2020). As with factor analysis, the activities and outputs associated with SEM are complex. Therefore, admirers of the technique need sufficient training to be equipped for successful application. And since several published guidelines representing best practices for reporting SEM results are available (e.g., Morrison et al., Citation2017; Schreiber, Citation2017), authors are advised to familiarize themselves with these reports.
3.4. Moderator and mediator analyses
Explicitly or implicitly, SSR is fundamentally about the examination of relationships between two or more variables. Tests of direct relationships between variables have been observed to be of limited practical utility, as relationships between variables are not usually straightforward. Although testing for moderation and mediation in relations between two variables has been recognized for decades, the current trend is phenomenal. SSR literature in recent times shows that a significant proportion of papers published in high-impact SS journals test for moderator and mediator effects. A mediator (also called an "intervening" or "process" variable) is the variable responsible for the relationship between two other variables. A mediator variable conveys the influence of the antecedent on the outcome, either in part or in whole (Baron & Kenny, Citation1986). Therefore, mediation occurs when the impact of one variable (an independent variable) on another variable (a dependent variable) is explained by a third variable (a mediator).
A moderator variable is one that affects the strength or direction of the relationship between the independent and the dependent variables. Moderation (an interaction effect) occurs when the relationship between two variables differs in magnitude or statistical significance depending on another variable (Baron & Kenny, Citation1986; Howitt & Cramer, Citation2017b; Morling, Citation2018). In short, moderators and mediators are variables that can change the direction and magnitude of, or explain, the relationship between two variables.
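The sketch below illustrates both ideas on simulated data using ordinary least squares in statsmodels; the variable names, effect sizes, and the simple Baron-and-Kenny-style indirect effect are illustrative assumptions, not a prescription from the cited sources (bootstrap or conditional-process tools would normally be used for inference).

```python
# A minimal sketch: moderation via an interaction term, mediation via the a*b product.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({"x": rng.normal(size=n), "w": rng.normal(size=n)})
df["m"] = 0.5 * df["x"] + rng.normal(size=n)                    # mediator driven by x
df["y"] = 0.4 * df["m"] + 0.3 * df["x"] * df["w"] + rng.normal(size=n)

# Moderation: the x:w coefficient tests whether w changes the x -> y slope.
moderation = smf.ols("y ~ x * w", data=df).fit()
print(moderation.params[["x", "w", "x:w"]])

# Mediation (Baron & Kenny steps): x -> m (path a), then m -> y controlling for x (path b).
path_a = smf.ols("m ~ x", data=df).fit()
path_b = smf.ols("y ~ x + m", data=df).fit()
print("indirect effect (a*b):", round(path_a.params["x"] * path_b.params["m"], 3))
```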
A hybrid of moderation and mediation analyses, under the name moderated mediation, is emerging in SSR literature. Moderated mediation, also known as conditional indirect effects (Hayes, Citation2017), occurs when the effect of an independent variable on an outcome variable via a mediator variable differs depending on the levels of a moderator variable. Moderated mediation models are instituted when researchers believe mediated models will be strengthened by introducing moderating variables (Hayes, Citation2017). Testing for moderator and mediator effects enables two significant feats: it guides the development and refinement of theories, and, because it has the potential to produce results that are closer to reality, it has practical utility. The choice of moderator or mediator variables should be guided by theory or empirical literature and not made arbitrarily. More so, social science researchers should be mindful of the various extant models of mediation and moderation (e.g., Baron & Kenny, Citation1986; Zhao et al., Citation2010) and be logically guided in their choice. Social science researchers are of necessity required to familiarize themselves with the extant reports (e.g., Holland et al., Citation2017; Memon et al., Citation2018, Citation2019) on best practices in implementing moderation and mediation in studies.
A related concern and demand of premier journals in SSR regarding moderation and mediation is consciousness of common causes (confounding variables, or the third-variable problem) and of method variance in design and data analysis. A common cause refers to a variable that affects both a presumed influence and its presumed outcome (Keith, Citation2019). Common method variance is the systematic variance shared among variables that is introduced to the measures by the measurement method rather than by the theoretical constructs the measures represent. It is a measurement weakness widely associated with cross-sectional research designs in which data are collected at one point in time with self-report instruments (Tehseen et al., Citation2017). Both common causes and common method variance pose a threat to studies' internal validity, as they provide an alternative explanation for the observed relationship (Gravetter & Forzano, Citation2018; Morling, Citation2018). The research community has responded adequately by developing various procedures that address these concerns (Rodríguez-Ardura & Meseguer-Artola, Citation2020; Tehseen et al., Citation2017). Therefore, to enhance studies' internal validity, social science researchers are compelled to address these research elements when applicable and necessary.
3.5. Meta-analysis
Varying degrees of congruent and heterogeneous outcomes result from differences in adopted methodological constituents (e.g., population, conceptualization, and statistics) and in how studies were conducted, even in research on a single relationship. Divergent and convergent results on a specific link need to be put in perspective for theoretical and practical purposes. Several approaches for integrating studies exist, but one that is widely cherished and accepted, largely because its practice follows the method of science, is meta-analysis. Meta-analysis is a quantitative review that summarizes the results of many quantitative studies and performs various analytical tests to demonstrate whether a given variable has a significant effect across the selected studies. It is the statistical analysis of an extensive collection of results from related studies to integrate the findings, essentially an average of effect sizes (Field, Citation2018; Gogtay & Thatte, Citation2017). It is a complicated research process that requires reviewing the multitude of studies to be meta-analyzed, which come with varied research designs and statistical techniques. A meta-analysis, as a form of literature review, is laborious.
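As a concrete illustration of "essentially an average of effect sizes," the sketch below pools four hypothetical study effects with fixed-effect, inverse-variance weighting; the numbers are invented for illustration, and real syntheses would also examine heterogeneity and publication bias.

```python
# A minimal fixed-effect meta-analysis sketch: inverse-variance weighted pooling.
import numpy as np

d  = np.array([0.35, 0.20, 0.55, 0.10])   # per-study effect sizes (e.g., Cohen's d)
se = np.array([0.12, 0.09, 0.20, 0.15])   # their standard errors

w = 1.0 / se**2                            # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)         # pooled effect size
pooled_se = np.sqrt(1.0 / np.sum(w))
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"pooled d = {pooled:.3f}, 95% CI = ({low:.3f}, {high:.3f})")
```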
Although Gene Glass introduced the term meta-analysis in 1976, the rapid growth and widespread practice of meta-analysis are evidenced in the report of Aguinis et al. (Citation2011), which indicated that between 1994 and 2009 there were 3,481 meta-analyses indexed in PsycINFO, 6,918 in EBSCO, and 11,373 in MEDLINE. Several forms of misuse of meta-analysis, including the combination of studies conducted with substantially different populations and methods, are widely noted in the literature. Such amalgamation leads to meaningless conclusions (Barnard et al., Citation2017). Several forms of research misconduct, such as reporting bias (implicating publication bias), falsification, and fabrication, harm the conclusions of a meta-analysis. Meta-analysis has the potential to contribute immensely to theory development, theory clarification, and practice. Evidence from a meta-analysis can suggest refinements to theoretical models, point to directions for future research, and inform evidence-based policies. Still, it should be cautiously applied to avoid results-reporting bias and publication bias, which distort the literature that provides data for meta-analytic research (Carter et al., Citation2019). Therefore, SS researchers should follow established best practices (e.g., Levitt et al., Citation2018; Siddaway et al., Citation2019).
3.6. Results and indices reporting
SSR is conducted to gauge reality, usually with a subset of a population. In quantitative research, the feat of understanding society from a sample has been made possible by several inferential statistical techniques, such as analysis of variance (ANOVA), regression analysis, and the Mann–Whitney U test. When applied to sets of data, these statistical tests generate indices that come together to represent the study results. In SSR, the dominant index used to describe study results is derived from null hypothesis significance testing (NHST). NHST refers to a statistical method of inference whereby an experimental factor is tested against a hypothesis of no relationship or no effect based on a given observation. It is an amalgamation of Fisher and Neyman–Pearson statistics (Pernet, Citation2017; Quintana & Williams, Citation2018). NHST has attracted severe criticism for its misunderstanding and frequent misuse by researchers (García, Citation2017). It depends on the sample size and gives limited information when research results are applied (McShane et al., Citation2019). The controversy and dissatisfaction with NHST have reached the level where a ban on it has been proposed (Trafimow & Marks, Citation2015). However, Lane-Getaz (Citation2017) argued that several criticisms of the approach are merely attempts to move forward a Bayesian agenda. Since the complaints largely border on misuses, misinterpretations, and misconceptions, adequate education on the procedures is needed to address the problem.
Other statistical approaches are emerging in the literature in response to NHST's associated limitations and questions about the validity of its application. Two analytical procedures gaining noticeable recognition in the "Results" sections of high-impact SS journals are the effect size and the confidence interval. Indices from these two procedures complement those of NHST, which merely notes the existence or non-existence of an effect. Effect size statistics, which are not influenced by sample size, improve on what NHST offers by identifying and providing information on the magnitude of an effect. Effect size refers to the magnitude of the relationship between the independent and dependent variables. It is distinguishable from statistical significance, as a highly significant finding could reflect a small effect, and vice versa, depending on the size of the sample used in the study (Funder & Ozer, Citation2019). While NHST statistics are about statistical significance, effect size statistics speak to the practical importance of a study's results. Effect size indices are widely interpreted in terms of "small," "medium," or "large" for application purposes (García, Citation2017; Kinney et al., Citation2020).
The confidence interval for a given statistic, such as a mean, is a range of values around that statistic which is believed, in a certain proportion of samples (e.g., 95%), to contain the real value of that statistic in the population (Field, Citation2018). Confidence interval statistics, when thoroughly explored, can convey both statistical significance and practical significance. Some researchers have noted this trend in the literature. For instance, Sun et al. (Citation2010) reported an effect size reporting rate of 49% from 1,243 articles published in 14 journals in psychology and education between 2005 and 2007. Sun and Fan (Citation2010) noted an effect size reporting rate of 75% from a sample of 22 papers in communication. Giofrè et al. (Citation2017) observed a rise in confidence interval reporting in Psychological Science from 28% in 2013 to 70% in 2015. These statistics are encouraging but not adequate. Almost every quantitative study should report effect size and confidence interval statistics to produce a result that has both statistical significance and practical importance.
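The sketch below shows the three complementary indices side by side on simulated two-group data: an NHST p-value, an effect size (Cohen's d), and a 95% confidence interval for the mean difference. The data and group labels are illustrative assumptions only.

```python
# A minimal sketch: p-value, Cohen's d, and a 95% CI from the same comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(loc=0.4, scale=1.0, size=80)
group_b = rng.normal(loc=0.0, scale=1.0, size=80)

t, p = stats.ttest_ind(group_a, group_b)                   # NHST: statistical significance
pooled_sd = np.sqrt((group_a.var(ddof=1) + group_b.var(ddof=1)) / 2)
cohens_d = (group_a.mean() - group_b.mean()) / pooled_sd   # effect size: practical magnitude
diff = group_a.mean() - group_b.mean()
se = np.sqrt(group_a.var(ddof=1) / len(group_a) + group_b.var(ddof=1) / len(group_b))
low, high = diff - 1.96 * se, diff + 1.96 * se             # 95% CI for the mean difference

print(f"p = {p:.4f}, d = {cohens_d:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```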
These three analytical procedures complement one another in providing the indices that amount to results and should be treated accordingly. Concerted efforts should be vigorously made to pursue the adoption of multiple sources of effect reporting to tap the associated gains and broaden usage. Such inclusiveness of indices is becoming a demand on authors from premier journals in the social sciences. Therefore, SS researchers are encouraged to be acquainted with best practices (e.g., Appelbaum et al., Citation2018; Levitt et al., Citation2018) in results and indices reporting.
3.7. Multiple authorship
Author and authorship are two related terms associated with the creation of something. In a research context, the word "author" applies to any individual who makes a substantial intellectual contribution to the conception, design, acquisition of data, development of models, analysis, data interpretation, or editorial revision of an original piece of research; an author is a person who has made direct and significant intellectual contributions to a study (Tarkang et al., Citation2017). Authorship is the listing of contributors to the work product. It is a claim of recognition for a contribution to a unique part of the study and of any moral or legal rights associated with it. Scholarship credits are precious in academia, as they are essential for career progression ("publish or perish") and research funding. The traditional authorship model, which endorsed listing only individuals who contribute to writing or revising a manuscript as authors, has recently been complemented by the "contributorship model." This latter approach supports crediting all those who make substantial contributions (including financial) to a study. It is noted to have the potential to help universities and funders identify the right mix of researchers, facilitate meta-science, and create vital scientific tools and software (Holcombe, Citation2019; McNutt et al., Citation2018). Several publishers (e.g., Nature Publishing Group) are adopting the model. However, this model has the potential of watering down what it takes to earn academic recognition, and having too many individuals qualified for listing as authors could result in conflict over space on the byline.
Although long existing in science, technology, engineering, and medicine (Henriksen, Citation2018), multiple authorship (having numerous authors on a piece of research work) is becoming a tendency in SSR (Henriksen, Citation2016). Studies have shown that multiple authorship is increasing significantly in economics, psychology, sociology, political science, and public administration (Henriksen, Citation2018; Mali et al., Citation2010). Specifically, Kuld and O'Hagan (Citation2018), in a study of the field of economics, indicated that single-authored papers accounted for 50% of articles published in 1996, with the figure reduced to about 25% in 2014.
There is a current upsurge of multiple authorship in the work of social scientists. Social issues and interactions, being the subject matter, are growing more sophisticated by leaps and bounds. Thus, addressing the complexities in research demands varied skills and competencies that one author is most unlikely to possess. The rising cost of conducting SSR and having it published in most journals is usually beyond the reach of an individual, particularly in developing countries where research funding is an extreme privilege. Many faculty members pursue quality publications through collaboration with other researchers, thus avoiding a quantity of papers with insignificant contributions. Since much research is done in academic institutions, the SSR co-authorship phenomenon is likely to continue.
Nevertheless, while multiple authorship can be beneficial, it poses challenges of name listing and contribution. Authorship is a mechanism for establishing credit, integrity, accountability, and responsibility and goes beyond listing names. Thus, it should be an exercise free of fraud, errors, misinterpretation, and wrongful inclusions and exclusions. As a matter of integrity, SSR should willfully adhere to established standards for authorship listing, such as those of the International Committee of Medical Journal Editors (Citation2018).
3.8. Open-access publishing
The world of electronic publishing has added free or open-access publishing (OA) to traditional closed-access or subscription-access publishing. OA is direct communication between the author and the reader, with a minimum of mediation, thereby adding value to the chain of scientific communication (Schöpfel, Citation2014). It makes research output freely accessible and increases transparency, free of copyright and licensing restrictions, thus making the reuse of relevant data possible by other researchers (Tickell, Citation2018). OA is a publishing trend in the social sciences, as it helps to bridge gaps in resource sharing and limit expenses (Lamani et al., Citation2018). Although high-quality open access requires adequate funding and monitoring (Adler et al., Citation2019), electronic publishing has made journal publication tasks easier, resulting in several online journals. The ease of setting up OA journals comes at the cost of providing an enabling environment for predatory journals to thrive. Predatory journals are characterized by doubtful integrity, with little or no ethical and publication guidelines and an absence of scientific review processes. Open-access predatory publishers are publishers prepared and ready to accept and publish an article for payment without adequate consideration for quality (Krawczyk & Kulczycki, Citation2020). The aim is to extort publication fees from unsuspecting authors for papers that do not follow standard scientific rigor (Sarfraz et al., Citation2020). Sadly, both early-career and seasoned researchers feature as authors in predatory journals. Since academics are often behind the setting up and sustaining of predatory journals, institutional sanctions should exist for faculty found to be associated, whether as founder, editor, or reviewer, with any scientifically unethical journal. Mostly, open-access publishing puts the cost of making articles public on the authors. Whereas in developed societies institutions and funders offset these costs on behalf of authors, authors in developing countries, where such facilities are rare, face the payment burden. Such a situation can be frustrating, with associated consequences, and can breed unethical conduct such as the guest-writer practice.
Another novelty currently opening up to SSR is mega-journal publishing, characterized by large volume, a concern for scientific soundness, and openness to a broad subject area. Such journals include PLOS ONE, Scientific Reports, BMC Research Notes, BMJ Open, AIP Advances, Medicine, SpringerPlus, PeerJ, and SAGE Open (Spezi et al., Citation2017). This form of open-access journal publishing, which started with PLOS ONE, offers an opportunity to publish a paper whose contribution is assessed not by the journal's reviewers or editors but by readers, through the number of citations. As with regular journals, Beall (Citation2013) reported some predatory mega-journals, including the International Journal of Current Research, the International Journal of Sciences, and the British Journal of Science. So, social science researchers have reason to be cautious in selecting mega-journals to reference and in which to publish.
3.9. Open peer review
The peer-review process was first formalized in 1665 by the Royal Society's Philosophical Transactions, while the Royal Society of Edinburgh adhered to the peer-review process in its Medical Essays and Observations in 1731 (Kelly et al., Citation2014), leading to the quality-control system applied to research outputs before publication. It is intended to improve the conduct and reporting of research and to sieve out work that falls short of the research community's standards of research production (Wolfram et al., Citation2020). Conventionally, peer review has been a blind system, either double-blind (when reviewers and authors are equally unaware of each other's identity) or single-blind (when reviewers know the author's identity but not the other way around) (Fresco-Santalla & Hernández-Pérez, Citation2014). Blind peer review has weaknesses such as unaccountability, resource wastefulness, lack of incentives, and inconsistency (Ross-Hellauer & Görögh, Citation2019). A peer-review system with the name open peer review is emerging in the research world, including the social sciences. In its primary form, the open peer-review system is a review model where both authors' and reviewers' identities are disclosed to each other (Hodonu-Wusu, Citation2018; Wolfram et al., Citation2020). Its other features include reviewers' reports being published alongside articles, manuscripts being made available to the public for comments (Ross-Hellauer, Citation2017), and publication of data with the research (Castelvecchi, Citation2018). It is therefore an inclusive term for several overlapping ways that peer-review models can be adapted in line with open science goals. The various open peer-review elements help avoid review flaws, detect unintentional statistical errors, and expose academic fraud associated with "publish or perish" cultures (Adler et al., Citation2019; Artino et al., Citation2019). While the early adopters of open peer review implemented the system in different ways, resulting in various degrees of transparency, the approach is experiencing steady acceptance and growth (Wolfram et al., Citation2020). However, scholars have argued that open peer review is subject to abuse, as it allows virtually anyone to review and provide commentary on a new scientific piece (O'Grady, Citation2017). More so, open review can generate an unending evaluation process for a specific research work, as it is opened to a public with different levels of knowledge and opinions. There should be rules regulating how long a paper under open peer review can be made public and for what processing purpose.
4. Challenges of the social sciences research
This section complements the previous one by presenting SSR's challenges, as any enterprise is trailed by challenges that require appropriate handling to eliminate or mitigate their adverse impacts. The practice of SSR experiences specific problems inherent to the process or inflicted on the process by researchers' unethical behavior. These challenges require an appropriate handling mechanism if the intended goal of generating knowledge is to be achieved. Some profound problems are misconduct, reporting and publication bias, inadequate reproduction and replication studies, methodological challenges, insufficient finance, and the practice gap.
4.1. Misconduct
Every research output is mostly a product of human behavior, thinking, and decisions. SSR is a process that covers activities such as data collection, result reporting, and citation, all under the control and manipulation of the researchers. The possibility of researchers abusing these inherent rights is distinct, and rules and regulations have been instituted to guide conduct in the enterprise (Hickey, Citation2018). Such rules and regulations implicitly or explicitly center on issues of integrity. Scientific integrity implies observance of ethical and professional principles, values, and practices in the conduct and application of science and scholarship outcomes.
Similarly, academic integrity refers to a commitment to five fundamental values, covering honesty, trust, fairness, respect, and responsibility in the conduct of research (Dinis-Oliveira, Citation2020). Entrenching integrity to uphold research rightly is necessary to society, as virtually everything available to humans outside nature is a research product. The compromise of integrity has an overreaching catastrophic effect on society and human existence. Little wonder that honesty is a watchword in the research world and that acts antithetical to it are severely condemned and sanctioned. Unfortunately, researchers have been indicted for conduct that undermines integrity. The associated scientific misconduct (the act of violating the standard codes of scholarly conduct and ethical behavior in the conduct and publication of scientific work) goes by various names that include plagiarism, falsification, ghost authorship, and guest authorship (D'Angelo, Citation2018). In its various forms, misconduct undermines the integrity, confidence, and trustworthiness of the research enterprise, both within the scientific community and among the public. Therefore, the educational and societal damage associated with specific research misconduct such as falsification, fabrication, and plagiarism is seen as so grievous that it is hotly debated whether it should be criminalized (Bülow & Helgesson, Citation2019; Dal-Ré et al., Citation2020). Criminalization implies the decision to make some research misconduct a criminal offense that would merit criminal punishment, including fines, community service, or even imprisonment.
4.2. Plagiarism
What plagiarism means varies somewhat within social science disciplines, though its core is consistent across the fields (Stitzel et al., Citation2018). The act of plagiarism encompasses buying, borrowing, or stealing a research paper and presenting it as one's own idea; putting someone else's opinions or outline into one's own words without acknowledgment; using someone's word-for-word phrases, sentences, or paragraphs without giving credit; and presenting facts or statistics without citing the source (R. A. Harris, Citation2017; Dinis-Oliveira, Citation2020; Zhang, Citation2016). This conceptualization reveals that the core issue in plagiarism is citation and, by extension, referencing. It is a concern for the appropriate adaptation of other people's ideas with acknowledgment. It is also exhibited in the concept of self-plagiarism (the inappropriate presentation of one's own published data or text as new and original). The act of plagiarism is a threat to the educational process, as students and academics could receive credit or reward for work not originally theirs. Plagiarism has the potential to adversely impact meta-studies, as it could falsify the number of actual studies on which a meta-analysis is based, thereby leading to incorrect conclusions (Foltýnek et al., Citation2020).
Pupovac and Fanelli's (Citation2015) meta-analysis suggests that around 2% of respondents admitted having committed some form of plagiarism, while 30% admitted witnessing such conduct among colleagues. Their findings also indicated that there are now committees and websites dedicated to plagiarism and unethical behaviors, especially in economics. However, more than two-thirds of social scientists and three out of five economists have not heard of or used these services (Stitzel et al., Citation2018). Other means of curbing the practice can be appropriate sanctions and enlightenment campaigns on the moral need to be mindful of others' and one's own intellectual efforts.
4.3. Falsification
Falsification covers the act of concealed manipulation of research instrumentation, materials, or processes, and the changing or omitting of research data, and consequently results, to support claims, hypotheses, and related matters. It also covers the construction or addition of data, observations, or characterizations that did not occur in data gathering (Dal-Ré et al., Citation2020; Dinis-Oliveira, Citation2020). The act of falsification includes forged citations and references. Image manipulation is an example of such forgery: undocumented alteration to research images, a deceptive, "illicit" practice that violates responsible research conduct (Jordan, Citation2014). It is also called illicit splicing, which changes images for non-research-related purposes, thus breaking the readers' expectation that the presented images represent the real situation (Parrish & Noonan, Citation2009). In their study, Nurunnabi and Hossain (Citation2019) found that data falsification/fabrication was primarily responsible for 77% of article retractions in published journals. Titus et al. (Citation2008) found falsification and fabrication reported by 5.2% of respondents; Fanelli (Citation2009) indicated that about 2% of scientists admit to falsifying research; and Tijdink et al. (Citation2014) showed that 15% of scientists admitted to having fabricated, falsified, plagiarized, or manipulated data. Variation in such findings has been attributed to what is taken to constitute research misconduct (Fanelli, Citation2009), anonymity and confidentiality (Bates & Cox, Citation2008), and social expectation (Farrington, Citation1999).
4.4. Fabrication
Fabrication as a form of academic misconduct implies the act of researchers making up data, results, or literature references and having them recorded or reported as if they were factual (Dal-Ré et al., Citation2020; Dinis-Oliveira, Citation2020; Vaux, Citation2016). Fabrication is similar to falsification: fabrication deals with creating new data or references, while falsification involves changing existing data (Elsayed, Citation2020). Cases of fabrication, such as Diederik Stapel's, identified with the aid of statistical tools for the detection of data fabrication, are becoming common in the literature (Hartgerink et al., Citation2016).
These unwelcome research-related conducts result from drives linked to career pressure. However, the readiness to engage in misconduct, combined with the ease of perpetration, has many negative implications for the perpetrator, the research world, and society. Cases abound where the careers of perpetrators of academic misconduct are truncated, either through demotion in rank or dismissal from their host institutions. Misconduct in the SSR enterprise significantly erodes the trust on which practitioners, policymakers, and other stakeholders base their decisions. Detection of data falsification may still be problematic; thus, Nurunnabi and Hossain (Citation2019) warned researchers to be wary of the implications of ethical issues in the publication of their research work and stressed that academic integrity should be upheld at all times. Any society guided and driven by false literature (the aftermath of academic misconduct) is prone to catastrophe. As a check on plagiarism and fabrication, subjecting manuscripts to plagiarism tests should be routine for every journal before the reviewing process is initiated. Editors should request the (raw) data for a study and the results from the adopted statistical software as an appendix to manuscripts, and, when necessary, these pieces of information should be relied upon to verify the result indices in the script. As a matter of exigency, every stakeholder in the research enterprise, covering authors, reviewers, editors, publishing bodies, readers, and institutions, must identify and manage research misconduct and put the literature in the correct state. To improve the situation, the incentives to fabricate need to be reduced, and rewards for authors, readers, reviewers, editors, publishers, and institutions who do the right thing should be increased. Every country needs to establish research integrity bodies to provide advice and oversight, collect data, and improve practice codes. Existing proposed research misconduct policies for universities and postgraduate colleges (e.g., Adesanya, Citation2020) should be critically examined for possible adoption by stakeholders.
4.5. Gift, ghost, and guest authorship
In the academic world, authorship is a principal basis that employers use to evaluate faculty for employment, promotion, and tenure. Authorship is claimed by those who make intellectual contributions to the execution of the research. However, it is common in publications for somebody who contributed substantially to the study not to be listed in the author by-line or acknowledgments. Gift authorship, also called honorary authorship, comes in varied forms: for instance, when a junior associate (e.g., a postgraduate student, postdoctoral fellow, or junior researcher) carries out research and lists a senior person as an author, even when that person did not satisfy the essential authorship criteria. It is also the practice of offering authorship to a senior or junior colleague in the open or secret hope that the listed individual will return the favor. Ghost authorship exists when someone who contributes substantially to a research work is paid but receives no other recognition. Guest authorship exists when prominent or well-known persons let somebody use their name in a manuscript to enhance the paper's prestige, even when they did not contribute meaningfully or had nothing to do with creating the work (Lapeña, Citation2019; Harvey, Citation2018). It is said that faculty who receive ghost authors' services rise to levels at which they lack the required competence, leading to a high level of mediocrity and unhealthy politics that tells harshly on academic institutions and society. These practices are sustained by factors including "publish or perish" pressures, which push academics to contract out their research work, pay for it, and own the authorship. Other factors include appreciation, sponsorship, and authority. For instance, junior researchers often show appreciation to their benefactors by adding their names to their research work. Some names are included on a paper because the individuals hold senior positions, facilitated the research, or secured funds for the study. Some individuals are made guest authors because their names and senior posts lend the research authority and acceptance with journals.
Several ghost authors live on this trade. Since graduate qualification is the foundational requirement for engaging faculty, academic managers should endeavor to strengthen training in the art and science of research. The educational environment cannot be completely free of political tendencies; however, it will be for the greater good of society to base faculty recruitment mostly on academic competencies. An individual with the appropriate ability would be less likely to contract out the writing of their academic papers. Public recognition of guest authorship should be discouraged, as it involves giving credit to individuals who did not earn it, thereby creating unmerited opportunities for such individuals. The prevalence of ghost authorship can be significantly minimized when the burden of article processing fees is taken off the shoulders of authors, particularly in underdeveloped countries.
4.6. Reporting and publication bias for “positive results”
Research is made public through publication outlets that include journals, periodicals, and bulletins, and the literature therein is overwhelmed with studies that report positive results. This phenomenon is labeled "publication bias." Publication bias represents a situation where authors submit, reviewers recommend, and journal editors publish studies with "positive" (significant) results more readily than studies with "negative" (unsupportive) findings (DeVito & Goldacre, Citation2019; Murad et al., Citation2018). Negative findings cover cases that inconclusively suggest no effect, cases that clearly indicate no effect, and outcomes entirely opposite to what was expected (Mlinarić et al., Citation2017). Publication bias is plausibly sustained by a higher citation rate for positive results than for negative ones, and most journals aim at a high impact factor, which is an outcome of citations. Publication bias has tremendously detrimental implications for theory and practice, as the existing literature becomes an accumulation of poor representations of reality. However, there are suggestions that the high proportion of positive findings and the almost total absence of negative results reflect improvements in the formulation and testing of hypotheses.
The percentage of papers reporting statistically significant results increased by 22% between 1990 and 2007, and psychology is among the disciplines in which this increase is highest (Joober et al., Citation2012). Making negative findings available is essential to interpreting the overall significance of a field of research. Underreporting and non-publication of negative results bring bias into meta-analysis and mar the validity of its application (Page et al., Citation2020). An essential feature of science is self-correction: in the long run, the scientific method guarantees convergence to true theories, as it enables the identification and correction of errors in published research (Romero & Sprenger, Citation2020). Publication bias has the potential to hinder science from achieving several elements of this process, such as the replication of studies (Andrews & Kasy, Citation2019; Editorial, Citation2019). Efforts are being made to improve the reporting of negative results. For instance, many open-access journals affirm a commitment to publishing manuscripts regardless of whether they report negative or positive results, provided they are methodologically sound (Joober et al., Citation2012). Publishers should see it as an obligation to launch journals of negative and neutral results in various research fields, as this would provide ample opportunity for the publication of negative findings. Methodologically appropriate studies, whatever the direction of their results, should be published.
4.7. Reproduction and replication studies
A feature of science is the reproducibility and/or replication of studies. Reproducibility indicates the capability of achieving the same findings as another researcher using the existing data from a previous study. In contrast, replication studies require collecting and analyzing new data to establish whether the new studies, in whole or in part, provide the same results as an earlier study (KNAW, Citation2018; NASEM, 2019). The former has been depicted as a minimum condition for research findings to be credible and informative, while the latter is the definitive standard for judging scientific claims (Peng, Citation2011). Reproduction and replication of studies are rare. The low rate of replication in SSR is epitomized in the finding that only 1.07% of 500 randomly selected articles in high-impact psychology journals were replication studies (Makel et al., Citation2012). There are extensive reports that several studies did not pass the reproducibility test in psychology and some other disciplines of the social sciences (Diener & Biswas-Diener, Citation2018). Replication is of much value in theory development. It has the potency to correct publication bias affecting initial studies and makes research transparency possible, as it requires researchers to make study materials public. Many hypotheses have few replications, even though the replication exercise is vital to the validity of research findings. One explanation is anchored in the pressure on SSR to be original: because publishers and funders are biased in favor of initial studies, replication studies hardly get accepted by journals or research sponsorships. Social science journals should be more receptive to reproduction and replication studies, and specialized journals for such research should be established for SSR.
4.8. Methodological challenge
Research requires executing a series of interrelated methodological techniques that accumulate and give structure to a research project. These techniques, which constitute the methodology, are limited in their potential to lead to valid and useful results, and social scientists often lack the capacity and the enabling conditions to harness the techniques' full possibilities. Cross-sectional designs, a dominant data collection procedure in SSR, do not permit causal interpretation; this weakness hinders the explanation and control goals of research. The definitive inferential statistical tests carry several assumptions, concerning data collection requirements, data analysis procedures, and the analytical tools, that must be met for the tests to be valid and useful. Assumptions such as the interval-level measurement required by parametric statistics are often not met, as substantial research uses Likert-type scales that do not achieve equal-appearing intervals. However, modern robust statistical procedures can relieve the limitations imposed by the requirements of traditional parametric analytical tools; these methods are designed to perform well both when classical assumptions are met and when they are violated (Maronna et al., Citation2019).
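As a brief, hypothetical illustration of why robust procedures matter when classical assumptions fail (a sketch on assumed data, not an analysis from the reviewed literature), the snippet below shows how a single extreme value distorts the ordinary mean of Likert-type responses, while a trimmed mean, a standard robust estimator of location, is barely affected.

```python
# Hypothetical Likert-type data: one data-entry error swamps the ordinary mean,
# while the 20% trimmed mean (a robust estimator) stays close to the bulk of the data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
responses = rng.integers(1, 6, size=49)    # 49 responses on a 1-5 scale
contaminated = np.append(responses, 50)    # one erroneous extreme value

print("ordinary mean:    ", contaminated.mean())
print("20% trimmed mean: ", stats.trim_mean(contaminated, proportiontocut=0.2))
print("median:           ", np.median(contaminated))
```

Robust estimation and robust hypothesis tests of this kind are treated at length in Maronna et al. (Citation2019); the example is only meant to show the direction of the problem.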
Another methodological challenge is the difficulty of conceptualizing some variables, which often results in related but varied definitions of, and models for, a given variable. Although various perspectives offer a better understanding of variables, social science researchers need a sufficient level of consensus on the phenomena of their concern to build knowledge with practical utility. Comparative studies on the relatedness of the various representations of a given variable, and on the adoption of one of them, are suggested. Besides, SSR is dominated by studies that collect data through non-probability sampling techniques, as the requirement that every member of a population has an equal probability of being sampled is a near-impossible task in most SSR environments. Non-probability sampling techniques do not guarantee the representativeness of samples, which poses a significant challenge to generalizing and applying research findings with parametric statistical tools, as the sketch below illustrates. For informed adoption and valid application of results from non-probability samples, aggressive reproduction and replication of studies should be encouraged and enshrined in SSR.
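The representativeness concern can be shown with a minimal simulation (entirely hypothetical numbers and group labels): a convenience sample drawn only from an easy-to-reach subgroup systematically mis-estimates a population mean that a simple random sample recovers.

```python
# Hypothetical population with an easy-to-reach and a hard-to-reach subgroup:
# a convenience (non-probability) sample of only the reachable subgroup is biased,
# whereas a simple random sample of the whole population is not.
import numpy as np

rng = np.random.default_rng(1)
reachable = rng.normal(70, 10, size=8_000)      # e.g., students on campus
hard_to_reach = rng.normal(55, 10, size=2_000)  # e.g., off-campus workers
population = np.concatenate([reachable, hard_to_reach])

random_sample = rng.choice(population, size=300, replace=False)
convenience_sample = rng.choice(reachable, size=300, replace=False)

print("population mean:         ", round(population.mean(), 1))
print("random-sample mean:      ", round(random_sample.mean(), 1))
print("convenience-sample mean: ", round(convenience_sample.mean(), 1))
```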
4.9. Poor-quality journals
The previous challenge of getting research published, which has been overcome through technological innovation, has resulted in a proliferation of journals. While many journals are needed to accommodate the rising number of studies, several of them are of poor quality or predatory, which is essentially defined by a lack of substantial and rigorous adherence to the peer-review process. Peer review ensures that experts in the field evaluate and certify the quality of a paper before publication (Eder & Frings, Citation2018). For reasons that may conflict with the goal of journal publishing (e.g., profit-making), some journals do not adequately implement the peer-review process. Makel et al. (Citation2012) have noted the negative attention and criticism directed at the quality and integrity of psychological research, and Seethapathy et al. (Citation2016) provided a detailed report on poor-quality open-access journals in a developing economy. Low-quality journals contribute to the loss of public trust in science in its entirety. Social science researchers should publish only in indexed journals, as indexing bodies such as Scopus institute yearly quality checks to ensure the quality of the papers published in indexed journals. However, Eder and Frings (Citation2018), in an editorial, argued that a quality journal should have satisfactory scientometric numbers, such as the impact factor; should be focused on an area (specialized); should have a rigorous peer-review process; should be transparent, particularly regarding open data; and should be bureaucratic only when necessary in its dealings with authors.
4.10. Inadequate funding
Both conducting and disseminating research are time-consuming and capital-intensive. SS researchers need funds to cater for activities such as hiring research assistants and acquiring the necessary tools, yet this assistance is rare, as only very few social scientists get research grants. Scholars such as Gayithri and Bairagya (Citation2018) have noted inadequate funding, besides the unequal distribution of resources between the pure sciences and the social sciences. Governments, universities, and NGOs, which have been significant sources of research financing, are decreasing their disbursement of funds. On the other hand, funds from industry may have strings attached, which may be unhealthy for the research goals: studies funded by the private sector are inclined to provide conclusions that favor the sponsors, which may compromise objectivity in research. Good research projects require substantial financial commitment. Government should substantially take on the responsibility of funding research, and social science researchers should be visibly results-oriented to attract the release of the needed financial intervention from government.
4.11. Research-practice gap
Research offers systematic understanding, explanation, prediction, and, ultimately, control of social reality. The substantially low adoption and application in practice of the knowledge generated through research is referred to as the research-practice gap (Lawler & Benson, Citation2020). It is a reality that has been reported in several social science disciplines (Denvall & Skillmark, Citation2020; Lawler & Benson, Citation2020). This lacuna is a function of several factors, including public mistrust of the integrity with which the research process is executed and the lack of communication and interaction between academic researchers and practitioners. The doubt about integrity is substantially validated by several cases of falsification and fabrication in the research process (Dal-Ré et al., Citation2020), and research misconduct has been well noted to undermine integrity, credibility, and objectivity in research (Bülow & Helgesson, Citation2019). The research-practice gap can be bridged through several means, including collaboration between researchers and practitioners in the study of social problems of a complex nature. This form of arrangement would involve the two groups working together on the fundamental stages of research activity: problem formulation, design, and problem-solving; the provision of the best available evidence for practice through systematic review; the creation of relationships beyond collaborative research; and the creation of boundary-spanning organizations beyond individuals' capacity and scope. Evidence-based practice, which involves the conscientious, unambiguous, and well-thought-out use of the current best evidence from systematic research in reaching decisions on the issue at hand, should be consciously encouraged among practitioners. As for the research enterprise, the training of social scientists should not be limited to the art and science of conducting good research but should also include the acquisition of self- and product-marketing skills.
5. Conclusion
With the aid of technology, social scientists are making bold and tremendous strides to expand and improve the approaches to conducting and disseminating their research. These feats are represented in MM research, SEM, and open-access publishing, among others. Some of these innovations, e.g., MM research, have improved the validity of outcomes, while open-access publishing gives the output the widest possible coverage. SSR findings should be valid and should bridge the gap between research and practice, as the perception that research products are unreliable and inapplicable is the primary reason they go unused. The process of improvement should be ongoing, as the validity of research outcomes is a continuum. The extensive publicity that keeps knowledge circulation healthy and the scientific methods of instituting quality, such as rigorous peer review, should be maintained. Practitioners of SSR are urged to note the best practices of their trade and adapt accordingly.
SSR is saddled with problems that are implicitly and explicitly connected to the integrity of its practitioners. Some, such as plagiarism and the falsification of data and results, have received wide negative publicity. These may be the primary reasons for the skeptical disposition practitioners hold toward SSR products. Although various sanctions exist for research misconduct and dishonesty, they have yet to make a significant impact in deterring such behavior, as these acts are neither quickly noticed nor readily brought to the awareness of the scientific community. Since the various forms of misconduct center on integrity, emphasizing that virtue is strongly recommended. Effectively achieving the moral socialization needed to institute integrity requires continuous training that keeps pace with recent developments.
While of value to social science researchers, this review has the limitation of not treating any of the themes discussed in detail; the paper is a thematic synopsis. The implication is that the article may not have presented sufficient information to understand the issues raised fully. SSR implicates several social science disciplines, and this paper is presented without cognizance of the differences among those fields regarding the topics discussed. Future reviews should aim to provide information on where each social science discipline stands with respect to every theme that constitutes a phase in SSR. Accumulating such knowledge would guide each field in identifying its strengths and weaknesses on issues relating to best practices in SSR and in responding accordingly.
Additional information
Funding
The authors received no direct funding for this research.
Notes on contributors
Sunday Samson Babalola
Sunday Samson Babalola is a professor in the Department of Human Resource Management, School of Management Sciences, University of Venda. He is a rated scientist in South Africa. His research interests focus on work ethics, entrepreneurship, workplace attitude, human capital management, and organizational behavior.
Chiyem Lucky Nwanzu is a PhD holder in Industrial/Organisational Psychology and a lecturer in the Department of Psychology, Delta State University, Abraka, Nigeria. His research interests are workplace attitude and behavior, organization sustainability, and change management.
References
- Adesanya, A. A. (2020). A proposed research misconduct policy for universities and postgraduate colleges in developing countries. The Nigerian Postgraduate Medical Journal, 27(3), 250–20. https://doi.org/10.4103/npmj.npmj_51_20PubMedGoogle Scholar
- Adler, J. R., Chan, T. M., Blain, J. B., Thoma, B., & Atkinson, P. (2019). #OpenAccess: Free online, open-access crowdsource-reviewed publishing is the future; traditional peer-reviewed journals are on the way out. Canadian Journal of Emergency Medicine, 21(1), 11–14. https://doi.org/10.1017/cem.2018.481Web of Science ®Google Scholar
- Aguinis, H., Piece, A. C., Frank, A. B., Dalton, R. D., & Dalton, M. C. (2011). Debunking myths and urban legend about meta-analysis. Organizational Research Methods, 14(2), 306–331. https://doi.org/10.1177/1094428110375720Web of Science ®Google Scholar
- Andrews, I., & Kasy, M. (2019). Identification of and correction for publication bias. American Economic Review, 109(8), 2766–9274. https://doi.org/10.1257/aer.20180310Web of Science ®Google Scholar
- Appelbaum, M., Cooper, H., Kline, B. R., Mayo-Wilson, E., Nezu, M. A., & Rao, M. S. (2018). Journal article reporting standards for quantitative research in psychology: The APA publications and communications board task force report. American Psychologist, 73(1), 3–25. https://doi.org/10.1037/amp0000191PubMed Web of Science ®Google Scholar
- Aro-Gordon, S. (2015). Emerging trends in social science research. Paper presented at the Interactive Session with UG and PG students held at PES University, Bangalore South Campus, Electronic City, Bangalore, India.Google Scholar
- Artino, A. R., Driessen, E. W., & Maggio, L. A. (2019). Ethical shades of gray: International frequency of scientific misconduct and questionable research practices in health professions education. Academic Medicine, 94(1), 76–84. https://doi.org/10.1097/ACM.0000000000002412PubMed Web of Science ®Google Scholar
- Askun, V., & Cizel, R. (2020). Twenty years of research on mixed methods. Journal of Mixed Methods Studies, 1(1), 26–40. https://doi.org/10.14689/jomes.2020.1.2Google Scholar
- Barnard, D. L., Willett, C. W., & Ding, L. E. (2017). The Misuse of Meta-analysis in Nutrition Research. JAMA, 318(15), 1435–1436. https://doi.org/10.1001/jama.2017.12083Google Scholar
- Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51(6), 1173–1182. https://doi.org/10.1037/0022-3514.51.6.1173PubMed Web of Science ®Google Scholar
- Bartholomew, T. T., & Lockard, A. J. (2018). Mixed methods in psychotherapy research: A review of method(ology) integration in psychotherapy science. Journal of Clinical Psychology, 74(10), 1687–1709. https://doi.org/10.1002/jclp.22653PubMed Web of Science ®Google Scholar
- Bates, S. C., & Cox, J. M. (2008). The impact of computer versus paper-pencil survey, and individual versus group administration, on self-reports of sensitive behaviors. Computers in Human Behavior, 24(3), 903–916. https://doi.org/10.1016/j.chb.2007.02.021Web of Science ®Google Scholar
- Beall, J. (2013). Five predatory mega-journals: A review. The Charleston Advisor, 14(4), 20–25. https://doi.org/10.5260/chara.14.4.20Google Scholar
- Bhattacherjee, A. (2012). Social science research: Principle, methods, and practices. Florida University.Google Scholar
- Bülow, W., & Helgesson, G. (2019). Criminalization of scientific misconduct. Medicine, Health Care and Philosophy, 22(2), 245–252. https://doi.org/10.1007/s11019-018-9865-7PubMed Web of Science ®Google Scholar
- Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2019). Correcting for bias in psychology: A comparison of meta-analytic methods. Advances in Methods and Practices in Psychological Science, 2(2), 115–144. https://doi.org/10.1177/2515245919847196Google Scholar
- Castelvecchi, D. (2018). Google unveils search engine for open data. Nature, 561(7722), 161–162. https://doi.org/10.1038/d41586-018-06201-xPubMed Web of Science ®Google Scholar
- Claeys, G. (1986). “Individualism,” “socialism,” and “social science”: Further notes on a process of conceptual formation, 1800–1850. Journal of the History of Ideas, 47(1), 81–93. https://doi.org/10.2307/2709596Web of Science ®Google Scholar
- Colander, C. D., & Hunt, F. E. (2019). Social science: An introduction to the study of society (17th ed.). Routledge.Google Scholar
- Crede, M., & Harms, P. C. (2019). Questionable research practices when using confirmatory factor analysis. Journal of Managerial Psychology, 34(1), 18–30. https://doi.org/10.1108/JMP-06-2018-0272Web of Science ®Google Scholar
- D’Angelo, J. G. (2018). Ethics in science: Ethical misconduct in scientific research. CRC Press.Google Scholar
- Dal-Ré, R., Bouter, L. M., Cuijpers, P., Gluud, C., & Holm, S. (2020). Should research misconduct be criminalized? Research Ethics, 16(1–2), 1–12. https://doi.org/10.1177/1747016119898400Google Scholar
- Denvall, V., & Skillmark, M. (2020). Bridge over troubled water-closing the research-practice gap in social work. The British Journal of Social Work, 00, 1–18. https://doi.org/10.1093/bjsw/bcaa055Google Scholar
- DeVito, N. J., & Goldacre, B. (2019). Catalog of bias: Publication bias. BMJ Evidence-Based Medicine, 24(2), 53–54. http://dx.doi.org/10.1136/bmjebm-2018-111107PubMedGoogle Scholar
- Diener, E., & Biswas-Diener, R. (2018). The replication crisis in psychology. In G. Feldman (Ed.), HKU PSYC2020: Fundamentals of social psychology. Noba textbook series: Psychology (pp. 5–18). DEF Publishers.Google Scholar
- Dinis-Oliveira, J. R. (2020). COVID-19 research: Pandemic versus “paperdemic,” integrity, values, and risks of the “speed science.”. Forensic Sciences Research, 5(2), 174–187. https://doi.org/10.1080/20961790.2020.1767754PubMed Web of Science ®Google Scholar
- Eder, A. B., & Frings, C. (2018). What makes a quality journal? Experimental Psychology, 65(5), 257–262. https://doi.org/10.1027/1618-3169/a000426PubMed Web of Science ®Google Scholar
- Editorial. (2019). The importance of no evidence. Nature Human Behavior, 3(3), 197. https://doi.org/10.1038/s41562-019-0569-7PubMed Web of Science ®Google Scholar
- Elsayed, D. E. M. (2020). Fraud and misconduct in publishing medical research. Sudan Journal of Medical Sciences, 15(2), 131–141. https://doi.org/10.18502/sjms.v15i2.6693Web of Science ®Google Scholar
- Fàbregues, S., Hong, Q. N., Escalante-Barrios, E. L., Guetterman, T. C., Meneses, J., & Fetters, M. D. (2020). A methodological review of mixed methods research in palliative and end-of-life care (2014–2019). International Journal of Environmental Research and Public Health, 17(11), 3853. https://doi.org/10.3390/ijerph17113853PubMed Web of Science ®Google Scholar
- Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PloS One, 4(5), e5738. https://doi.org/10.1371/journal.pone.0005738PubMed Web of Science ®Google Scholar
- Farrington, D. P. (1999). What has been learned from self-reports about criminal careers and the causes of offending? Institute of Criminology, University of Cambridge.Google Scholar
- Field, A. (2018). Discovering statistics using IBM SPSS Statistics. Sage.Google Scholar
- Foltýnek, T., Dlabolová, D., Anohina-Naumeca, A., Razı, S., Kravjar, J., Kamzola, L., … Weber-Wulff, D. (2020). Testing of support tools for plagiarism detection. International Journal of Educational Technology in Higher Education, 17(1), 46. https://doi.org/10.1186/s41239-020-00192-4Web of Science ®Google Scholar
- Fresco-Santalla, A., & Hernández-Pérez, T. (2014). Current and evolving models of peer review. The Serials Librarian, 67(4), 373–398. https://doi.org/10.1080/0361526X.2014.985415Google Scholar
- Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. https://doi.org/10.1177/2515245919847202Google Scholar
- García, F. V. (2017). Effect-size reporting in Mexican psychology journals: What it says about the quality of research within the field. CIENCIA Ergo-Sum, 24(3), 225–233. https://doi.org/10.30878/ces.v24n3a4Web of Science ®Google Scholar
- Gayithri, K., & Bairagya, I. (2018). Sources of funding for social science research funding in India: Flows, adequacy, and priorities. Policy Brief-18, Institute for Social and Economic ChangeGoogle Scholar
- Giofrè, D., Cumming, G., Fresc, L., Boedker, I., & Tressoldi, P. (2017). The influence of journal submission guidelines on authors’ reporting of statistics and the use of open research practices. PLoS One, 12(4), e0175583. https://doi.org/10.1371/journal.pone.0175583PubMed Web of Science ®Google Scholar
- Glass, G. V. (1976). Primary, secondary, and meta-analysis of research. Educational Researcher, 5(10), 3–8. https://doi.org/10.3102/0013189X005010003Google Scholar
- Gogtay, N. J., & Thatte, U. M. (2017). An introduction to meta-analysis. Journal of Association of Physicians in India, 65(12), 78-85. PMID: 31556276.Google Scholar
- Gravetter, F. J., & Forzano, L. A. B. (2018). Research methods for the behavioral sciences. Cengage Learning.Google Scholar
- Greene, M. (2007). The demise of the lone author. Nature, 450(7173), 1165–1165. https://doi.org/10.1038/4501165aGoogle Scholar
- Hafsa, N.-E. (2019). Mixed methods research: An overview for beginner researchers. Journal of Literature, Languages and Linguistics, 58, 45–49. https://doi.org/10.7176/JLLLGoogle Scholar
- Haq, U. I. (2020). Social sciences research in Pakistan: Bibliometric analysis. Library Philosophy and Practice (E-journal), 4499, 1-12. https://digitalcommons.unl.edu/libphilprac/4499Google Scholar
- Harris, R. A. (2017). Using sources effectively: Strengthening your writing and avoiding plagiarism. Taylor & Francis.Google Scholar
- Hartgerink, C., Wicherts, J., & Van Assen, M. (2016). The value of statistical tools to detect data fabrication. Research Ideas and Outcomes, 2, e8860. https://doi.org/10.3897/rio.2.e8860Google Scholar
- Harvey, L. A. (2018). Gift, honorary or guest authorship. Spinal Cord, 56(2), 91. https://doi.org/10.1038/s41393-017-0057-8PubMed Web of Science ®Google Scholar
- Hayes, A. F. (2017). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. Guilford publications.Google Scholar
- Henriksen, D. (2016). The rise in co-authorship in the social sciences (1980–2013). Scientometrics, 107(2), 455–476. https://doi.org/10.1007/s11192-016-1849-xWeb of Science ®Google Scholar
- Henriksen, D. (2018). Research collaboration and co-authorship in the social sciences. Forlaget Politica. Politicas PhD-Serie.Google Scholar
- Hickey, C. (2018). Research ethics in social research. Centre for Effective Services.Google Scholar
- Hodonu-Wusu, J. O. (2018). Open science: A review on open peer review literature. Library Philosophy and Practice (E-journal), 1874. http://digitalcommons.unl.edu/libphilprac/1874Google Scholar
- Hodonu-Wusu, J. O., & Lazarus, G. N. (2018). Major trends in LIS-research: A bibliometric analysis. Library Philosophy and Practice (E-journal), 1873. http://digitalcommons.unl.edu/libphilprac/1873Google Scholar
- Holcombe, A. O. (2019). Contributorship, not authorship: Use CRediT to indicate who did what. Publications, 7(3), 48. https://doi.org/10.3390/publications7030048Web of Science ®Google Scholar
- Holland, S. J., Shore, D. B., & Cortina, J. M. (2017). Review and recommendations for integrating mediation and moderation. Organizational Research Methods, 20(4), 686–720. https://doi.org/10.1177/1094428116658958Web of Science ®Google Scholar
- Howitt, D. (2016). Introduction to qualitative research methods in psychology. Pearson Education.Google Scholar
- Howitt, D., & Cramer, D. (2017b). Understanding statistics in psychology with SPSS. Pearson Education.Google Scholar
- Howitt, D., & Cramer, D. (2017a). Research methods in psychology (5th ed.). Pearson Education.Google Scholar
- Hult, G. T. M., Ketchen, D., Cui, A. S., Prud’homme, A. M., Seggie, S. H., Stanko, M. A., Xu, S. A., & Causgil, S. (2006). An assessment of the use of structural equation modeling in international business research. In D. J. Ketchen & D. D. Bergh (Eds.), Research methodology in strategy and management (Vol. 3, pp. 385–415). Emerald Group Publishing. https://doi.org/10.1016/S1479-8387(06)03012-8Google Scholar
- International Committee of Medical Journal Editors. (2018). Defining the role of authors and contributors. Philadelphia: ICMJE. http://www.icmje.org/icmje-recommendations.pdf.Google Scholar
- James, L. R., & Brett, J. M. (1984). Mediators, moderators, and tests for mediation. Journal of Applied Psychology, 69(2), 307–321. https://doi.org/10.1037/0021-9010.69.2.307Web of Science ®Google Scholar
- Joober, R., Schmitz, N., Annable, L., & Boksa, P. (2012). Publication bias: What are the challenges and can they be overcome? Journal of Psychiatry & Neuroscience, 37(3), 149–152. https://doi.org/10.1503/jpn.120065PubMed Web of Science ®Google Scholar
- Jordan, S. R. (2014). Research integrity, image manipulation, and anonymizing photographs in visual social science research. International Journal of Social Research Methodology, 17(4), 441–454. https://doi.org/10.1080/13645579.2012.759333Web of Science ®Google Scholar
- Karakaya-Ozyer, K., & Aksu-Dunya, B. (2018). A review of structural equation modeling applications in Turkish educational science literature, 2010–2015. International Journal of Research in Education and Science, 4(1), 279–291. https://doi.org/10.21890/ijres.383177Google Scholar
- Keith, Z. M. (2019). Multiple regression and beyond: An introduction to multiple regression and structural equation modeling (3rd ed.). Routledge.Google Scholar
- Kelly, J., Sadeghieh, T., & Adeli, K. (2014). Peer-review in scientific publications: Benefits, critiques, and a survival guide. EJIFCC, 25(3), 227–243PubMedGoogle Scholar
- Kinney, A. R., Eakman, A. M., & Graham, J. E. (2020). Novel effect size interpretation guidelines and an evaluation of statistical power in rehabilitation research. Archives of Physical Medicine and Rehabilitation, 101(12), 2219–2226. https://doi.org/10.1016/j.apmr.2020.02.017PubMed Web of Science ®Google Scholar
- KNAW. (2018). Replication studies: Improving reproducibility in the empirical sciences.Google Scholar
- Krawczyk, F., & Kulczycki, E. (2020). How is open access accused of being predatory? The impact of Beall’s lists of predatory journals on academic publishing. The Journal of Academic Librarianship, 102271. https://doi.org/10.1016/j.acalib.2020.102271Google Scholar
- Kuld, L., & O’Hagan, J. (2018). Rise of multi-authored papers in economics: Demise of the ‘lone star’ and why? Scientometrics, 114(3), 1207–1225. https://doi.org/10.1007/s11192-017-2588-3Web of Science ®Google Scholar
- Lamani, M. B., Patil, R. R., & Kumbar, B. D. (2018). Open access e-books in social science: A case study of directory of open access books. DESIDOC Journal of Library and Information Technology, 38(2), 141–144. https://doi.org/10.14429/djlit.38.2.10890Web of Science ®Google Scholar
- Lane-Getaz, S. (2017). Is the p-value really dead? Assessing inference learning outcomes for social science students in an introductory statistics course. Statistics Education Research Journal, 16(1), 357–399. http://iase-web.org/Publications.php?p=SERJGoogle Scholar
- Lapeña, J. F. F. (2019). Authorship controversies: Gift, guest, and ghost authorship. Philippine Journal of Otolaryngology-Head and Neck Surgery, 34(1), 4–5. https://doi.org/10.32412/pjohns.v34i1.957Google Scholar
- Lawler, E. E., III, & Benson, S. G. (2020). The practitioner-academic gap: A view from the middle. CEO Working Paper Series, G20-01(697).Google Scholar
- Levitt, H., Bamberg, M., Creswell, J., Frost, D., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA publications and communications board task force report. American Psychologist, 73(1), 26–46. http://dx.doi.org/10.1037/amp0000151PubMed Web of Science ®Google Scholar
- Levitt, H. M. (2020). Reporting qualitative research in psychology: How to meet APA style journal article reporting standards (Revised ed.). APA Style Series.Google Scholar
- MacKinnon, D. P., Fairchild, A. J., & Fritz, M. S. (2007). Mediation analysis. Annual Review of Psychology, 58(1), 593–614. https://doi.org/10.1146/annurev.psych.58.110405.085542Google Scholar
- Makel, C. M., Plucker, A. J., & Hegarty, B. (2012). Replications in psychology research: How often do they occur. Perspectives on Psychological Science, 7(6), 537–542. https://doi.org/10.1177/1745691612460688Web of Science ®Google Scholar
- Mali, F., Kronegger, L., & Ferligo, A. (2010). Co-authorship trends and collaboration patterns in the Slovenian sociological community. CORVINUS: Journal of Sociology and Social Policy, 1(2), 29–50. http://dx.doi.org/10.14267/cjssp.2010.02.02Google Scholar
- Maronna, R. A., Martin, R. D., Yohai, V. J., & Salibián-Barrera, M. (2019). Robust statistics: Theory and methods (with R) (2nd ed.). John Wiley & Sons.Google Scholar
- Matthews, B., & Ross, L. (2010). Research methods: A practical guide for the social sciences. Pearson Education.Google Scholar
- McKie, L., & Ryan, L. (2012). Exploring trends and challenges in sociological research. Sociology, 46(6), 1–7. https://doi.org/10.1177/0038038512452356Google Scholar
- McKim, A. C. (2017). The value of mixed methods research: A mixed-methods study. Journal of Mixed Methods Research, 11(2), 202–222. https://doi.org/10.1177/1558689815607096Web of Science ®Google Scholar
- McNutt, M. K., Bradford, M., Drazen, J. M., Hanson, B., Howard, B., Jamieson, K. H., … Verma, I. M. (2018). Transparency in authors’ contributions and responsibilities to promote integrity in scientific publication. Proceedings of the National Academy of Sciences, 115(11), 2557–2560. https://doi.org/10.1101/140228PubMed Web of Science ®Google Scholar
- McShane, B. B., Gal, D., Gelman, A., Robert, C., & Tackett, J. L. (2019). Abandon statistical significance. The American Statistician, 73(sup1), 235–245. https://doi.org/10.1080/00031305.2018.1527253Web of Science ®Google Scholar
- Mehler, D., Edelsbrunner, P., & Matić, K. (2019). Appreciating the significance of non-significant findings in psychology. Journal of European Psychology Students, 10(4), 1–7. https://doi.org/10.5334/e2019aGoogle Scholar
- Memon, M. A., Cheah, J. H., Ramayah, T., Ting, H., Chuah, F., & Cham, T. H. (2019). Moderation analysis: Issues and guidelines. Journal of Applied Structural Equation Modeling, 3(1), i–xi. https://doi.org/10.47263/JASEM.3(1)01Google Scholar
- Memon, M. A., Cheah, J.-H., Ramayah, T., Ting, H., & Chuah, F. (2018). Mediation analysis issues and recommendations. Journal of Applied Structural Equation Modeling, 2(1), i–ix. https://doi.org/10.47263/JASEM.2(1)01Google Scholar
- Mlinarić, A., Horvat, M., & Šupak Smolčić, V. (2017). Dealing with the positive publication bias: Why you should publish your negative results. Biochemia Medica: Biochemia Medica, 27(3), 1–6. https://doi.org/10.11613/BM.2017.030201Web of Science ®Google Scholar
- Mor, S. (2019). Social science research: An introduction. In S. Mor (Ed.), Emerging research trends in social sciences (pp. 1–9). Bloomsbury India.Google Scholar
- Morling, B. (2018). Research methods in psychology: Evaluating a world of information (3rd ed.). W.W. Norton & Company.Google Scholar
- Morrison, T. G., Morrison, M. A., & McCutcheon, J. M. (2017). Best practice recommendations for using structural equation modeling in psychological research. Psychology, 8(9), 1326–1341. https://doi.org/10.4236/psych.2017.89086Google Scholar
- Mukherjee, S. P., Sinha, B. K., & Chattopadhyay, A. K. (2018). Statistical methods in social science research. Springer.Google Scholar
- Murad, M. H., Chu, H., Lin, L., & Wang, Z. (2018). The effect of publication bias magnitude and direction on the certainty in evidence. BMJ Evidence-Based Medicine, 23(3), 84–86. https://doi.org/10.1136/bmjebm-2018-110891PubMedGoogle Scholar
- Nurunnabi, M., & Hossain, M. A. (2019). Data falsification and question on academic integrity. Accountability in Research, 26(2), 108–122. https://doi.org/10.1080/08989621.2018.1564664PubMed Web of Science ®Google Scholar
- O’Grady, C. (2017). 107 cancer papers retracted due to peer review fraud. ARS Technica. https://arstechnica.com/science/2017/04/107-cancer-papers-retracted-due-to-peer-review-fraud/.Google Scholar
- Page, M. J., Sterne, J. A. C., Higgins, J. P. T., & Egger, M. (2020). Investigating and dealing with publication bias and other reporting biases in meta-analyses of health research: A review. Research Synthesis Methods. https://doi.org/10.1002/jrsm.1468Google Scholar
- Parrish, D., & Noonan, B. (2009). Image manipulation as research misconduct. Science and Engineering Ethics, 15(2), 161–167. https://doi.org/10.1007/s11948-008-9108-zPubMed Web of Science ®Google Scholar
- Peng, R. D. (2011). Reproducible research in computational science. Science, 334(6060), 1226–1227. https://doi.org/10.1126/science.1213847PubMed Web of Science ®Google Scholar
- Pernet, C. (2017). Null hypothesis significance testing: A guide to commonly misunderstood concepts and recommendations for good practice [version 5; referees: 2 approved, 2 not approved]. F1000Research, 4, 621. https://doi.org/10.12688/f1000research.6963.5Google Scholar
- Pupovac, V., & Fanelli, D. (2015). Scientists admitting to plagiarism: A meta-analysis of surveys. Science and Engineering Ethics, 21(5), 1331–1352. https://doi.org/10.1007/s11948-014-9600-6PubMed Web of Science ®Google Scholar
- Quintana, D. S., & Williams, D. R. (2018). Bayesian alternatives for common null-hypothesis significance tests in psychiatry: A non-technical guide using JASP. BMC Psychiatry, 18(1), 178. https://doi.org/10.1186/s12888-018-1761-4PubMed Web of Science ®Google Scholar
- Rahman, W., Shah, A. F., & Rasli, A. (2015). Use of structural equation modeling in social science research. Asian Social Science, 11(4), 371–377. http://dx.doi.org/10.5539/ass.v11n4p371Google Scholar
- Rath, P. N. (2015). Study of open access publishing in social sciences and its implications for libraries. DESIDOC Journal of Library and Information Technology, 35(3), 177–183. https://doi.org/10.14429/djlit.35.3.8720Web of Science ®Google Scholar
- Ritzer, G. (2011). Sociological theory (8th ed.). The McGraw-Hill Companies.Google Scholar
- Rodríguez-Ardura, I., & Meseguer-Artola, A. (2020). Editorial: How to prevent, detect, and control common method variance in electronic commerce research. Journal of Theoretical and Applied Electronic Commerce Research, 15(2), i–v. https://doi.org/10.4067/S0718-18762020000200101Web of Science ®Google Scholar
- Romero, F., & Sprenger, J. (2020). Scientific self-correction: The Bayesian way. Synthese. https://doi.org/10.1007/s11229-020-02697-xGoogle Scholar
- Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research, 6, 588–588. https://doi.org/10.12688/f1000research.11369.1Google Scholar
- Ross-Hellauer, T., & Görögh, E. (2019). Guidelines for open peer review implementation. Research Integrity and Peer Review, 4(1), 1–12. https://doi.org/10.1186/s41073-019-0063-9PubMedGoogle Scholar
- Sarfraz, Z., Sarfraz, A., Anwer, A., Nadeem, Z., Bano, S., & Tareen, S. (2020). Predatory journals: A literature review. Pakistan Journal of Surgery and Medicine, 1(1), 42–51. https://doi.org/10.37978/pjsm.v1i1.102Google Scholar
- Scandura, T. A., & Williams, E. A. (2000). Research methodology in management: Current practices, trends, and implications for future research. Academy of Management Journal, 43(6), 1248–1264.Web of Science ®Google Scholar
- Schöpfel, J. (2014). Open access and document supply. Interlending and Document Supply, 42(4), 187–195. https://doi.org/10.1108/ILDS-10-2014-0049Google Scholar
- Schreiber, J. B. (2017). Update to core reporting practices in structural equation modeling. Research in Social and Administrative Pharmacy, 13(3), 634–643. https://doi.org/10.1016/j.sapharm.2016.06.006PubMed Web of Science ®Google Scholar
- Seethapathy, J., Kumar, S. G., & Hareesha, A. S. (2016). India’s scientific publication in predatory journals: The need for regulating the quality of Indian science and education. Current Science, 111(11), 1759–1764. https://doi.org/10.18520/cs/v111/i11/1759-1764Web of Science ®Google Scholar
- Siddaway, A. P., Wood, A. M., & Hedges, L. V. (2019). How to do a systematic review: A best practice guide for conducting and reporting narrative reviews, meta-analyses, and meta-syntheses. Annual Review of Psychology, 70(1), 747–770. https://doi.org/10.1146/annurev-psych-010418-102803PubMed Web of Science ®Google Scholar
- Smith, M. L., & Glass, G. V. (1977). Meta-analysis of psychotherapy outcome studies. American Psychologist, 32(9), 752. https://doi.org/10.1037/0003-066X.32.9.752PubMed Web of Science ®Google Scholar
- Smith, R. A., & Davis, S. F. (2016). The psychologist as detective: An introduction to conducting research in psychology (6th ed.). Pearson Education.Google Scholar
- Spezi, V., Wakeling, S., Pinfield, S., Creaser, C., Fry, J., & Willett, P. (2017). Open-access mega-journals: The future of scholarly communication or academic dumping ground? A review. Journal of Documentation, 73(2), 263–283. https://doi.org/10.1108/JD-06-2016-0082Web of Science ®Google Scholar
- Stitzel, B., Hoover, G. A., & Clark, W. (2018). More on plagiarism in the social sciences. Social Science Quarterly, 99(3), 1075–1088. https://doi.org/10.1111/ssqu.12481Web of Science ®Google Scholar
- Sun, S., & Fan, X. (2010). Effect size reporting practices in communication research. Communication Methods and Measures, 4(4), 331–340. https://doi.org/10.1080/19312458.2010.527875Google Scholar
- Sun, S., Pan, W., & Wang, L. L. (2010). A comprehensive review of effect size reporting and interpreting practices in academic journals in education and psychology. Journal of Educational Psychology, 102(4), 989–1004. https://doi.org/10.1037/a0019507Web of Science ®Google Scholar
- Tarka, P. (2018). An overview of structural equation modeling: Its beginnings, historical development, usefulness, and controversies in the social sciences. Quality and Quantity: International Journal of Methodology, 52(1), 313–354. https://doi.org/10.1007/s11135-017-0469-8PubMedGoogle Scholar
- Tarkang, E. E., Kweku, M., & Zotor, F. B. (2017). Publication practices and responsible authorship: A review article. Journal of Public Health in Africa, 8(1), 723. https://doi.org/10.4081/jphia.2017.723PubMed Web of Science ®Google Scholar
- Tehseen, S., Ramayah, T., & Sajilan, S. (2017). Testing and controlling for common method variance: A review of available methods. Journal of Management Sciences, 4(2), 146–175. https://doi.org/10.20547/jms.2014.1704202Google Scholar
- Thakkar, J. J. (2020). Structural equation modeling application for research and practice (Amos and R). Springer.Google Scholar
- Tickell, A. (2018). Open access to research publications: Independent advice. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/774956/Open-access-to-research-publications-2018.pdfGoogle Scholar
- Tijdink, J. K., Verbeke, R., & Smulders, Y. M. (2014). Publication pressure and scientific misconduct in medical scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64–71. https://doi.org/10.1177/1556264614552421PubMed Web of Science ®Google Scholar
- Titus, S. L., Wells, J. A., & Rhoades, L. J. (2008). Repairing research integrity. Nature, 453(7198), 980–982. https://doi.org/10.1038/453980aPubMed Web of Science ®Google Scholar
- Trafimow, D., & Marks, M. (2015). Editorial. Basic and Applied Social Psychology, 37(1), 1–2. https://doi.org/10.1080/01973533.2015.1012991Web of Science ®Google Scholar
- Trninić, V., Jelaska, I., & Štalec, J. (2013). Appropriateness and limitations of factor analysis methods utilized in psychology and kinesiology: Part II. Physical Culture/Fizička Kultura, 67(1), 1–17. https://doi.org/10.5937/fizkul1301001Google Scholar
- Vaux, D. L. (2016). Scientific misconduct: Falsification, fabrication, and misappropriation of credit. In T. Bretag (Ed.), Handbook of academic integrity (pp. 895–911). Springer.Google Scholar
- Wang, J., & Wang, X. (2019). Structural equation modeling: Applications using Mplus. John Wiley & Sons.Google Scholar
- Watkins, M. W. (2018). Exploratory factor analysis: A guide to best practice. Journal of Black Psychology, 44(3), 219–246. https://doi.org/10.1177/0095798418771807Web of Science ®Google Scholar
- Weinstein, A. J. (2010). Applying social statistics: An introduction to quantitative reasoning in sociology. Rowman & Littlefield.Google Scholar
- Westland, C. J. (2019). Structural equation models: From paths to networks. Springer.Google Scholar
- Wolfram, D., Wang, P., Hembree, A., & Park, H. (2020). Open peer review: Promoting transparency in open science. Scientometrics, 125(2), 1033–1051. https://doi.org/10.1007/s11192-020-03488-4Web of Science ®Google Scholar
- Zhang, M. F., Dawson, J. F., & Kline, R. B. (2020). Evaluating the use of covariance‐based structural equation modelling with reflective measurement in organizational and management research: A review and recommendations for best practice. British Journal of Management, 1–16. https://doi.org/10.1111/1467-8551.12415Google Scholar
- Zhang, Y. H. (2016). Against plagiarism: A guide for editors and authors. Springer.Google Scholar
- Zhao, X., Lynch, J. G., Jr, & Chen, Q. (2010). Reconsidering Baron and Kenny: Myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197–206. https://doi.org/10.1086/651257Web of Science ®Google Scholar