Elsevier has signed its first "read-and-publish" contract with a national consortium of universities and research institutions in Norway. Instead of paying separately to access content behind paywalls and to make its own articles immediately available to the scientific community, the Norwegian consortium's agreement rolls the two costs into one. This is a big deal because many librarians and advocates believe this model will decrease subscription charges while increasing open-access publication.
Members of the Jamaica Health Ministry and the University of the West Indies met with University at Buffalo (UB) and SUNY faculty to begin exploring ways to share research and clinical findings to help both countries. The meeting focused on establishing a new center to study infectious diseases and engage in team science and collaborative research to achieve sustainable health in the Caribbean region. Though the center will be based in Jamaica, it will also have a presence in Buffalo, possibly at the UB Center of Excellence in Bioinformatics & Life Sciences.
Reference Link: http://www.buffalo.edu/news/releases/2019/04/012.html
The East African Science and Technology Commission (EASTECO) has launched a regional journal to promote research in science, technology and innovation, a step forward in improving knowledge sharing among the scientific community. Notably, this is the first journal of its kind in the region, and it is funded by the six EAC member states. EASTECO's executive secretary, Gertrude Ngabirano, said the journal aims to generate new information that can be used to address issues affecting the region and to help develop evidence-based policies.
Researchers at the Nigeria Centre for Disease Control (NCDC) in Abuja, led by its director-general, Chikwe Ihekweazu, are building strategies to fight the infectious diseases that frequently cause outbreaks in Africa. The NCDC's approach to disease research in Africa is confident and groundbreaking, and the agency is helping shape the priorities of international scientists who wish to conduct research in Nigeria. Nigeria is vital: the nation is massive, and it is prone to outbreaks, such as Ebola, that could cripple Africa's economy and spread worldwide. Supporting African-led research is good for science, good for Africa and good for the entire world.
Reference link: https://www.nature.com/articles/d41586-019-00612-0
A “controversial ideas” journal where researchers can publish articles under pseudonyms will be launched next year by an Oxford University academic. The new journal is a response to a rise in researchers being criticised and silenced by those who disagree with them, according to Jeff McMahan, a professor of moral philosophy at Oxford.
When scholars choose a research topic, they need sources and materials to review the literature and add value to their findings. According to an article from Canadian Science Publishing last year, about 2.5 million research papers are published annually, while another source suggests that roughly 1 million new papers appear worldwide each year, equal to one every 30 seconds. With this overload of new papers in every field, growing each year, it is practically impossible for scholars to keep up with the information put out in each paper. Christian Berger's team at the University of Gothenburg in Sweden, for example, found a staggering number of papers on their subject alone: more than 10,000. Fortunately, the team had the support of an AI-based literature-mapping tool called Iris.ai.
Iris.ai is an AI tool developed to make researching and writing papers easier for scholars. The Berlin-based company behind it claims the tool saves 90% of a researcher's time with 85% precision in data matching, drawing on more than 70 million open-access papers. Iris.ai learns about the topic provided and performs an elaborate frequency analysis over the text; it then searches on the key terms for which it needs to find results and additional material that could be helpful for the paper. Given a 500-word description of the researcher's problem, or a link to their paper, the AI returns a map of thousands of matching papers. As the website suggests, it is a scientific writing assistant.
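The frequency-analysis step described above can be illustrated with a minimal sketch. Iris.ai's actual algorithm is not public, so the stopword list, the sample description, and the `term_frequencies` helper below are all hypothetical; they only show the basic idea of ranking the content words in a problem description.

```python
from collections import Counter
import re

# Hypothetical minimal stopword list; a real system would use a much larger one.
STOPWORDS = {"the", "a", "of", "and", "to", "in", "is", "for", "on"}

def term_frequencies(text: str, top_n: int = 5):
    """Rank the most frequent content words in a research description."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

# Invented example description, standing in for a researcher's 500-word summary.
description = (
    "Self-driving vehicles rely on sensor fusion; sensor data from "
    "cameras and lidar is fused to detect vehicles and pedestrians."
)
print(term_frequencies(description, top_n=3))
```

The dominant terms ("vehicles", "sensor") would then seed the search for related papers; matching against 70 million documents obviously requires far more machinery than this sketch.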
According to Berger, it gave "a quick and nevertheless precise overview of what should be relevant to a certain research question". Iris.ai is one of many new AI-based tools offering targeted views of the knowledge landscape; others include Semantic Scholar, produced by the Allen Institute for Artificial Intelligence in Seattle, Washington, and Microsoft Academic.
Although each tool works differently and gives different output, they all provide researchers with a different view of the scientific literature than conventional tools such as PubMed and Google Scholar. Semantic Scholar is a free, browser-based search tool that mimics engines like Google, but it is more informative than Google Scholar for the specific results researchers need. Doug Raymond, Semantic Scholar's general manager, says one million people use the service every month. It uses natural language processing (NLP) to extract data and build connections that help determine whether information is relevant and reputable.
Artificial intelligence is saving a lot of time and making it easier and quicker to automate certain procedures. In the academic publishing industry, AI-based innovations are being developed and implemented to help both authors and publishers with peer review, searching published content, detecting plagiarism, and identifying data fabrication. AI can be costly, but it can accelerate a researcher's access to new fields. More such AI tools are being developed to cater to the various requirements of writing a paper, such as filtering topics for relevance, keyword search, and so on.
Experts who need more assistance with specific concerns might consider free AI tools such as Microsoft Academic or Semantic Scholar. While AI eases many burdens and saves researchers time, it is still machine intelligence and may require human intervention here and there to make a paper more presentable and precise.
With open access becoming the shared vision of governments worldwide and a particular focus of some European research funders, this expanded collaboration allows both Wiley and Hindawi to support the continued development of high-quality open-access titles and gives authors additional choices for where and how to publish. The partnership is a strong example of how open access is a powerful driving force of the open-science landscape, supporting an open and energetic global space for sharing research and extending its impact to future generations.
Scientists at a Japanese sleep research institute have found that active components abundant in sugarcane and other natural products may reduce stress, thereby promoting sound sleep.
The Likert scale is a psychometric scale (i.e., a scale that measures individual differences) commonly used in survey research involving questionnaires (the instrument). Each question or statement of the questionnaire forms a "Likert item". A Likert item measures a participant's level of agreement with a statement, using ordered, numbered responses such as "strongly agree", "neutral" or "disagree". Generally five response levels are used: 1. Strongly disagree, 2. Disagree, 3. Neither agree nor disagree, 4. Agree, 5. Strongly agree. However, more levels, typically 7 or 9, are also sometimes used.
Before analyzing Likert scale data, the reliability of the instrument (scale) is assessed. This can be done in three ways. First, the uniformity of responses within the instrument (internal consistency) is measured by estimating Cronbach's alpha; a value of ≥0.7 is generally considered acceptable. Second, test-retest reliability is calculated; in SPSS, this is done with a bivariate correlation, reported as Pearson's correlation coefficient (r). Third, inter-rater reliability is estimated in the same way as test-retest reliability in SPSS.
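Cronbach's alpha can also be computed outside SPSS. Below is a minimal sketch in Python; the `cronbach_alpha` function and the toy response matrix are illustrative, not from the original text. It uses the standard formula alpha = k/(k−1) × (1 − Σ item variances / variance of total scores), where k is the number of items.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                       # number of Likert items
    item_vars = items.var(axis=0, ddof=1)    # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 5 respondents x 4 Likert items, scored 1-5.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(responses)
print(f"Cronbach's alpha = {alpha:.2f}")  # values >= 0.7 are generally accepted
```

For this toy matrix the items are highly consistent, so alpha comes out well above the 0.7 threshold; real survey data would rarely be this clean.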
After determining the reliability of the instrument, analysis of the Likert data is carried out. Each Likert item can be analyzed separately (also called Likert-type data) or summed with others to create a score for a group of items (summative scales, Likert scales). A Likert scale consists of at least four Likert-type items, all measuring a single variable.
- Likert-type data are ordinal: we can only say that one score is higher than another. Because of this ordinal nature, parametric tests (e.g., t-test, ANOVA) are generally not applied; instead, non-parametric tests such as the Mann-Whitney U test, Wilcoxon signed-rank test, or Kruskal-Wallis test should be used. Descriptive statistics for Likert-type data include the mode or median for central tendency and frequencies for variability. Further analyses appropriate for ordinal items include the chi-square measure of association, Kendall's tau-b, and Kendall's tau-c.
- Likert scale data, on the other hand, are analyzed as interval data, so parametric tests are used. Appropriate analyses include the mean for central tendency, standard deviation for variability, Pearson's r for bivariate correlation, t-tests and ANOVA for comparing group means, and regression procedures for associations.
- When Likert-type or Likert scale data can be reduced to the nominal level (e.g., yes vs. no, agree vs. disagree), the chi-square test, Cochran's Q test, and McNemar's test can also be performed.
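As an illustration of the non-parametric route for Likert-type data, the sketch below computes the Mann-Whitney U statistic by hand for two small groups of responses, using average ranks for tied values. The `mann_whitney_u` function and the sample data are hypothetical; in practice one would use a statistics package (e.g., SPSS or SciPy), which also supplies the p-value.

```python
from itertools import chain

def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic, assigning tied values their average rank."""
    combined = sorted(chain(group_a, group_b))
    ranks = {}
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j] == combined[i]:
            j += 1                       # j is one past the last tied value
        ranks[combined[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1..j
        i = j
    r_a = sum(ranks[x] for x in group_a)      # rank sum for group A
    u_a = r_a - len(group_a) * (len(group_a) + 1) / 2
    u_b = len(group_a) * len(group_b) - u_a
    return min(u_a, u_b)                      # report the smaller U

# Invented Likert-type responses (1-5) from two groups of respondents.
treated = [4, 5, 4, 3, 5]
control = [2, 3, 2, 1, 3]
print(mann_whitney_u(treated, control))  # small U indicates strong separation
```

A U near zero, as here, means the two groups' ratings barely overlap; the significance of the observed U would then be read from a table or computed by the software.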
Further, Likert scales may be subject to several biases. Central tendency bias occurs when respondents avoid extreme response categories; acquiescence bias occurs when respondents tend to agree with statements as presented; and social desirability bias occurs when respondents try to represent themselves or their institution more positively. All of these biases can be checked by conducting a pilot survey before the actual study, and the questionnaire modified accordingly if any are observed. Crafting a scale with an equal number of positive and negative statements can counteract acquiescence bias. Central tendency bias can be reduced by keeping the questionnaire short, by forcing comparative ratings, and/or by randomizing the question order. Social desirability bias can be minimized by applying all of the above methods and by making the questions indirect.