Writing a research manuscript

While editing research manuscripts, I have often observed poor presentation of the content; as a result, in spite of a good amount of results, the manuscript becomes weak in terms of readability and clarity. Here are a few suggestions that might help a beginner understand how to write an effective research manuscript. A research manuscript can be of different types: original article, review, short communication, rapid communication, letter, etc. Here I will limit my discussion to how to plan the writing of a manuscript for an original article.

Before you start writing the manuscript, take a few steps back, gather all your results, and ask yourself a few questions: Is it new and original work? Does it have a clear objective or hypothesis? Did you make a significant amount of progress towards the goal? Are all your claims supported by appropriate data? Can you explain the gist of your work in one or two sentences? If all the answers are YES, go ahead and start writing the research manuscript.

There is a general structure for each type of research manuscript. For an original article, the following structure should be followed:

Title

Abstract

Keywords

IMRAD (the main body: Introduction (I), Methods (M), Results And Discussion (RAD))

Conclusions

Acknowledgements

References

Appendices/Supplementary material

This should be the format and the order of the final presentation; however, the order of writing will be a little different.

First, prepare all your figures and tables. This will help you assess the standard of your work; accordingly, select two or three journals, and once you finish writing, choose the target journal from among them. The following is the order in which you may start writing:

1. Start with the “Methods” or experimental section (if you are a theoretician, first work on your theory). This section should be written in enough detail that any reader, if needed, can reproduce the results by following the method you described. If you have used a previously established method, cite the appropriate reference without going into detail. For chemicals, cell lines, antibodies, etc., mention the company or lab from which you bought or procured them. For instruments, it is important to mention the model number along with the company name. The same applies to any software, for example, SigmaPlot, SPSS, etc. (mention the version).

2. Next, write the “Results” section of your manuscript. Briefly restating the protocol can be effective. Present all the main findings; you may present secondary data in the supplementary section. Refer to the figures and tables in order. Use sub-headings to present results of the same type together. Do not discuss and interpret the results here if you have a separate “Discussion” section. However, in the case of a combined “Results and Discussion” section, you do need to interpret. Check the “Author guidelines” of your target journal and plan your presentation accordingly.

3. Once you finish the Results section, you will see that a story has already built up in front of you. Now, start writing the “Introduction” of the manuscript. The “Introduction” should reflect the background of the study, i.e., what made you interested or inspired to undertake this project. Discuss studies already published in the field. Remember, while presenting the previous literature, take care of the logical flow of the content. The “Introduction” sets the beginning of your article; do not ruin it with irrelevant facts. The last paragraph should present the objective of your work clearly, and care should be taken to maintain the logical flow with the rest of the introduction.

4. Once you have the “Introduction”, “Results” and “Methods” sections ready, it is easy to write the “Discussion”. Start the “Discussion” with answers to the questions raised in the “Introduction”. The “Discussion” section not only involves interpreting your findings, but also comparing your results with previously reported studies. This is very important: I often see authors discuss only their own results without comparing them with existing reports. If you have obtained improved results, explain the reason. At the same time, if your findings are not in accordance with published reports, try to give an explanation; the cause could be a difference in methods or a limitation of your study. Besides explaining the significance of your work, you must explain its weaknesses or discrepancies (if any).

5. Once you are done with the “Discussion”, go back to the “Introduction” and refine it according to how far you were able to achieve the goal. Go through the entire manuscript a couple of times and find out whether something is missing or overstretched. Once you are satisfied, think about the “Conclusions”.

6. The “Conclusions” section helps a reader or a reviewer judge the work presented in the manuscript. Remember, the “Conclusions” should not be a rehash of the “Results”. In this section, briefly present only the key results, followed by how far you achieved the goal. Limitations (if any) should also be mentioned very briefly, and end with possible future studies or applications.

7. Again, go back and refine your “Introduction”.

8. Take utmost care while writing the “Abstract” of your manuscript. It should be clear and at the same time interesting. Do not drag it out (keep it within 250-300 words, as most journals recommend). If your target journal wants a structured abstract (Background-Objective-Results-Conclusions), it is easy to write; even if it does not, you may still write the abstract with this structure in mind. Present a clear objective, highlight the key findings, and end with a robust conclusion. A clear “Abstract” sets the mood of a reader and often determines whether your manuscript will be considered for further reading.

9. Keywords are used for indexing, and they increase the visibility of your manuscript once published. Therefore, choose keywords (generally five or six at most) that exactly relate to your study.

10. The “Title” is the most crucial part of a manuscript for attracting readers. The title should be crisp and chosen in such a way that it represents the content of the manuscript in a nutshell. Take extra time to come up with an appropriate title.

Finally, revise, revise, revise…..

Planning an oral presentation

Learning the art of presenting research findings is very important for graduate students. You may have obtained very interesting results, but communicating your findings effectively is just as important. This article discusses how to make an effective oral presentation, whether a conference presentation, an in-house symposium presentation, or a thesis presentation. You need to work on a few basic aspects to deliver a good lecture: timing, audience, content, organization, presentation tool, and the tone and body language of the speaker.

Timing: First, find out the duration of the presentation, whether it is 15 min (presentation: 10 min + questions: 5 min), 45 min, or 60 min. It is better to finish a little early rather than overshoot the recommended duration. Overshooting the presentation time is not only against professional courtesy, but also reflects a lack of preparation. Therefore, it is extremely important to plan your presentation according to the recommended duration. Planning for a 10 min talk is obviously different from planning a 60 min lecture. For the short talk, you only have time to show the key points, without discussing the individual research methods in depth. However, for a 60 min lecture, you may elaborate on the important research methods used in your study.

Audience: The success of a presentation lies in your ability to understand your audience and shape the presentation accordingly. Now, how do you get an idea of the audience you will have? Well, that is not very hard to find out. If it is a conference in a specialized field (e.g., the Asian Society of Spectroscopy, the Experimental NMR Conference, etc.) or a thesis presentation, you may expect peers or experts of the field as your audience. On the other hand, if you are presenting at conferences like those of the American Chemical Society or the Royal Society of Chemistry, you may expect a general audience from various fields of chemistry. When you are presenting in front of peers or experts of a particular field, you do not have to worry about jargon, acronyms, or technical terms regularly used in your field. However, for a general audience, you need to define them, or restrict your use of jargon. If you must use such terms, make an effort to explain them to your audience. You may expect an even more general audience when you deliver a talk in a college or university setting, where students and teachers from diverse fields of science may be present. Here, you need to be more cautious in planning your talk. Always remember that the objective of your presentation is to communicate your research findings effectively to your audience, and they should at least understand the overall implication of your work.

Content: Well, you need not present all the details. Plan the content of your presentation keeping the “timing” and “audience” in mind. Before deciding on the content, think about the “take-home message” you want to give the audience. To make your presentation interesting, take a step back and think about what made you interested in taking up this project, what new things you learnt while working on it, and what main points you want the audience to remember after you finish your presentation.

Organization of contents: Once you have decided on the content, it is time to organize it. Following is a rough outline:

  1. First, greet the audience and introduce yourself, and then start your presentation
  2. Title: make it interesting but simple
  3. Background of the project: keep it brief
  4. Objective: what made you undertake this project and what would you like to examine
  5. Methods: keep it brief and highlight the key points (use flow diagrams/schematics/pictures/a short video clip to show the actual reaction or experiment), but save some extra slides at the end of the presentation so that if somebody is interested in the actual method, those slides will be helpful.
  6. Results: the most important part; show only the key results. Group similar types of results together instead of showing a single graph for each parameter. Never forget to show controls while comparing.
  7. Discussion: compare your results with related work by others
  8. Conclusions and future direction
  9. Acknowledgement

Use flow diagrams and schematics, and minimize the use of text. Write bullet points, not whole paragraphs of text.

Presentation tool: These days people rarely use transparent sheets for presentations. Everybody uses PowerPoint, the most effective presentation tool. A few points to remember while using PowerPoint:

  1. Choose the background color and text color in such a way that they are visible in more or less any interior lighting. Do not go for fancy; keep it basic. Most importantly, be consistent throughout: do not use different background colors for different slides.
  2. Choose the font and font size so that the text is visible from the last row of a standard-sized lecture room. Choose one size for headings and another for body text, and be consistent throughout the document.
  3. Do not play with colors. Use multiple colors only when required to distinguish or highlight some points.
  4. You may use animation but do not overdo it. Use only if required.
  5. It is okay to waste slide space, but never over-crowd slides.

Tone and body language of the speaker: Talk in an audible voice so that everybody can hear you. Talk slowly and pronounce each word clearly. Always face the audience, and never read your slides line by line. Make eye contact with your audience. Do not be nervous: practice, and give mock presentations in front of your labmates or friends. If you are afraid of forgetting something, bring notes. Think about the questions the audience may ask, and while giving a mock presentation, ask your labmates or friends to pose questions. Keep some backup slides; you may need them while answering questions. However, it is okay to say “I don’t know” rather than trying to give a vague answer. Practice makes one refined and confident, but never be overconfident or aggressive in proving your point. Try to address questions with proper scientific reasoning. Finally, dress well; dress like a professional.

Practice…….. Practice…….Practice…….

Plan for Writing Ph.D. Research Thesis

After years of hard work, it is time to write your research thesis, an important step towards getting the coveted degree (MS or Ph.D.). Writing a thesis is probably one of the hardest challenges in your academic career. However, a little planning in advance may simplify this challenge. So, do not panic; start planning at the beginning of your final year of research. First, know your institution’s requirements for a research thesis, e.g., the minimum volume of work required. Of course, the volume of work is not the same for an MS thesis and a Ph.D. thesis; however, the planning and organization are the same in both cases. This article mainly discusses the plan for writing a Ph.D. research thesis.

Develop a Ph.D. Research Thesis Plan:

First and foremost, sit with your data and judge whether you have enough to write your Ph.D. research thesis. Second, discuss with your research supervisor for his/her expert comments, and then plan the following:

  • Always target a date for completion of the first draft of your research thesis, and then divide the available time among the chapters. Try to stick to your deadline.
  • Be clear about the objective of your research thesis.
  • Do an in-depth literature survey for writing the “Introduction”, the beginning of your research thesis. The “Introduction” should include a complete literature review of your field covering at least the past 20 years, including a couple of classic/breakthrough works older than that, if your field is not so new. Read some basic books related to your field to gain the broader context. Your “Introduction” should end with the “research question” or “objective”, i.e., the reason for undertaking the particular project.
  • Plan and organize the chapters in such a way that there is a logical flow of content throughout the thesis, chapter after chapter. After going through a research thesis, a person should get a clear idea of the objective with which you started the research work and how far you achieved it.
  • Now plan for each individual chapter of your research thesis. Make notes point by point regarding the content of each chapter and arrange the things accordingly.
  • While writing each chapter, divide it into sub-divisions with introduction, methods, results and discussion (IMRAD), and conclusions. The methods section should be described in detail: all the research techniques/experiments/theoretical modeling should be presented clearly, so sit with your lab notebook for an exact description of the methods you used. Get all the figures, graphs, and tables needed for the results and discussion ready. Most of us struggle with how to start; procrastination should be avoided. A small tip for writing an individual chapter: first arrange your figures, graphs, and tables, and you will find that the story has automatically developed in front of you. Therefore, start writing with your results section.
  • Check whether your university has a recommended word count for the research thesis. Then plan each chapter accordingly, so that together they do not cross the recommended word limit.

Research Thesis Structure:

Generally, a Ph.D. research thesis has a more or less common structure. However, check your university’s research thesis guidelines; a senior labmate’s thesis (one or two years senior to you) would also be helpful in this regard.

  • Title page
  • Synopsis (or summary of each chapter)
  • Acknowledgements
  • Table of contents
  • Symbols or values of important parameters
  • Main body of the thesis (Introduction and different chapters)
  • Appendices (if any)

Try to think of your research thesis as a story and develop it accordingly. Work on each chapter in such a way that the thesis as a whole is interconnected and tells the story of a scientific investigation: starting from the history of the particular field (how the field evolved: the “Introduction”), the interesting points that need to be addressed for further advancement of the field (the “Objective” of your work or “research question”), what can be done to address those issues or to develop the field further (the individual “Thesis Chapters”), and how far you achieved it (the “Conclusions”, including any limitations and future applications or studies for further development).

Analytical Study Design in Medical Research: Cohort study

Cohort studies are observational analytical studies. As mentioned in my previous blog (http://blog.manuscriptedit.com/2014/02/overview-different-analytical-study-designs-medical-research/), the word ‘cohort’ is derived from the Latin word ‘cohors’, which means unit. For a cohort study, the study population is chosen from the general population and comprises both people exposed to a certain agent suspected of causing a disease and people unexposed to that cause. The population is followed for a long period of time, and the incidence of disease in the exposed group is compared with that in the unexposed group. Therefore, the objective of a cohort study is to find the association between a suspected cause(s) and a disease. If performed correctly, cohort studies can produce results comparable to experimental analytical studies. The following measurements can be made in a cohort study design: absolute risk or incidence, relative risk (risk ratio or rate ratio), risk difference, and attributable proportion. Cohort studies are classified as prospective and retrospective based on the timing of enrollment of subjects and the disease outcome.



Prospective Cohort Study

As the name suggests, a prospective cohort study starts with a population of non-diseased subjects who all have a risk of developing a certain disease, and the investigator waits for the disease to develop. The population is divided into two groups: one with exposure to the agent or environment suspected to be associated with the disease, and the other unexposed but with equal susceptibility to developing the disease. The population is then followed up for a certain period of time, until the condition or disease develops, after which the incidences of disease in the exposed and unexposed populations are calculated (see the 2×2 table below). Therefore, the incidence rate is the measure of disease in cohort studies. The association with disease is measured by the relative risk (RR).

                Diseased    Not diseased
    Exposed         a             b
    Unexposed       c             d

From the above table, the incidences of disease in the exposed and unexposed populations, and hence the relative risk (RR), can be calculated:

RR = a/(a+b) : c/(c+d)

Alternatively, the odds ratio (OR) can also be used as a measure of association; it is the ratio of two odds, where the odds are obtained as the ratio of the chances of something happening to the chances of it not happening. In this case, the OR can be calculated as follows:

OR = a/b : c/d = ad/bc

Attributable risk, i.e., the amount of disease risk attributed to the exposure, can also be calculated; calculated with respect to the total population, it is called the population attributable risk (PAR).
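To make these formulas concrete, here is a minimal Python sketch (the function name and the illustrative counts are my own, not from the original post) that computes the group incidences, the relative risk, the odds ratio, and the attributable risk from the four cells a, b, c, d of the 2×2 table above:

    def cohort_measures(a, b, c, d):
        # a: exposed & diseased,    b: exposed & not diseased
        # c: unexposed & diseased,  d: unexposed & not diseased
        incidence_exposed = a / (a + b)                # risk in the exposed group
        incidence_unexposed = c / (c + d)              # risk in the unexposed group
        rr = incidence_exposed / incidence_unexposed   # relative risk
        odds_ratio = (a / b) / (c / d)                 # OR = a/b : c/d = ad/bc
        ar = incidence_exposed - incidence_unexposed   # attributable risk
        return incidence_exposed, incidence_unexposed, rr, odds_ratio, ar

    # Hypothetical cohort: 1000 exposed (100 diseased), 1000 unexposed (25 diseased)
    print(cohort_measures(100, 900, 25, 975))
    # approximately (0.1, 0.025, 4.0, 4.33, 0.075)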

The Framingham Heart Study is a good example of this type of prospective study. Started in 1948, the study is still going on. It was undertaken to determine the common contributing factors to cardiovascular disease. The Framingham risk score, based on this study, can predict the 10-year cardiovascular risk of an individual with no known cardiovascular disease.

Advantages of a prospective cohort study include the following:

(i) Better for rare exposures

(ii) One can determine the disease incidence rate and relative risk

(iii) More than one disease associated with a single exposure can be studied

(iv) This design is able to establish a cause-effect relationship

(v) Selection and information biases are minimized

However, the design has certain limitations as well. The study requires a large population and a long time to complete, and the loss of subjects during long-term follow-up adversely affects the outcome. A prospective study is also inefficient for rare diseases. Moreover, this type of study is expensive and may raise ethical issues.

 

Retrospective Cohort Study

A retrospective cohort study, also known as a historical cohort study, is a type of cohort study where the data were collected in the past but are analyzed at present. Here, the investigator retrospectively identifies the exposure and the outcome information. A retrospective design is chosen for a rare or unusual exposure, for which a prospective design would not be appropriate. In addition, a retrospective study can quickly estimate the effect of an exposure on a certain outcome as well as determine the disease association. This type of study is helpful in designing future studies and interventions. The data are collected from past medical records, administrative databases, patient interviews, etc. The odds ratio is used as the measure of association between the exposure or risk factor and the disease; the other measurements are the same as for a prospective cohort study.

A classic example of a retrospective study is that conducted by Case et al. (1954) to examine the excess risk of bladder cancer in men who worked in plants manufacturing certain dye intermediates. In this study, the authors first made a list of men who had worked for at least six months since 1920 in UK chemical plants manufacturing dyes. The investigators then searched retrospectively for cases of bladder cancer among those workers employed in such plants between 1921 and February 1952. The number of cases of bladder cancer among these workers was then compared with the number of bladder cancer incidences in the general population to determine the excess risk of bladder cancer in men exposed to certain dye intermediates.

Advantages of a retrospective cohort study include the following:

(i) Good for rare exposures

(ii) Unlike a prospective study, it takes relatively little time to complete

(iii) Relatively less expensive

(iv) Can be conducted for multiple cohorts

(v) Can estimate incidence data

(vi) No ethical issues involved

 

However, the retrospective cohort design has certain disadvantages; in particular, the chance of selection bias in sampling is relatively high. Sometimes it may be difficult to identify the appropriate exposed group and the corresponding control or comparison group. Confounding is another issue in the historical study design, and loss to follow-up may also bias the results. In addition, like the prospective cohort study, the retrospective cohort study is not appropriate for rare diseases. Finally, the available medical data were not designed for such a study, and their poor quality often adds error to the results.

 

References

1. Morabia, A (2004). A History of Epidemiologic Methods and Concepts. Birkhaeuser Verlag; Basel: p. 1-405.

2. Johns Hopkins open courseware.

http://ocw.jhsph.edu/courses/fundepiii/lectureNotes.cfm

3. Harris EL. Linking Exposures and Endpoints: Measures of Association and Risk.

http://www.genome.gov/pages/about/od/opg/epidemiologyforresearchers/3_harris.pdf

4. Framingham heart study, a project of the National Heart, Lung, and Blood Institute and Boston University. http://www.framinghamheartstudy.org/fhs-bibliography/index.php

5. http://www.iarc.fr/en/publications/pdfs-online/epi/cancerepi/CancerEpi-8.pdf

6. Case RAM, Hosker ME, McDonald DB, Pearson JT (1954). Tumours of the urinary bladder in workmen engaged in the manufacture and use of certain dyestuff intermediates in the British chemical industry. Part I. The role of aniline, benzidine, alpha-naphthylamine, and beta-naphthylamine. Br J Ind Med 11:75-104.

Analytical Study Design in Medical Research: Measures of risk and disease association

A researcher designing any analytical study in medical research should be aware of a few basic epidemiological terms required to measure disease risk and association. This blog article focuses on defining the terms used for calculating disease risk and association. There are two different types of measurements: measures of risk and measures of association.

Measures of Risk

Risk is defined as the probability of an individual developing a condition or disease over a period of time.

Risk = Chances of something happening / Chances of all things happening

Odds = Chances of something happening / Chances of it not happening

Therefore, “Risk” is a proportion, while “Odds” is a ratio.

Incidence: Incidence is a measure of risk that describes the number of new cases of a condition developed over a specified period of time. In this context, another important term, “incidence proportion”, is worth mentioning. It is defined as the proportion of the number of new cases to the total population (including those who did and did not develop the condition) in a specified period of time.

For example, among 100 non-diseased persons initially at risk, 20 develop a disease/condition over a period of five years.

Incidence = 20 cases

Incidence proportion = 20 cases per 100 persons, i.e., 20%

Incidence rate = 20 cases developed in 100 persons in 5 years, i.e., an incidence rate of 4 per 100 person-years
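As a quick check on this arithmetic, here is a short Python sketch (the variable names are mine; it uses the simplification above that all 100 persons contribute the full five years of follow-up):

    new_cases = 20
    population_at_risk = 100
    follow_up_years = 5

    incidence_proportion = new_cases / population_at_risk   # 0.20, i.e., 20%
    person_years = population_at_risk * follow_up_years     # 500 person-years
    incidence_rate = new_cases / person_years               # 0.04, i.e., 4 per 100 person-years

    print(f"{incidence_proportion:.0%}, {incidence_rate * 100:.0f} per 100 person-years")
    # prints: 20%, 4 per 100 person-years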

Prevalence: Prevalence is the proportion of the number of people having a condition at a specific point of time to the total population studied. This is specifically called point prevalence. For example, at a certain date, five persons are found to have a condition among 100 people studied, giving a point prevalence of 5%. Two more terms need to be defined in this regard: period prevalence and lifetime prevalence (LTP). The former is the proportion of the number of people having the disease during a certain period of time, say a month or a year, to the total population studied during that period. LTP, on the other hand, is the proportion of the number of people having the disease at some point in their life to the total population studied.

There is a very subtle difference between incidence and prevalence. Incidence is the frequency of a new event, while prevalence is the frequency of an existing event.

Cumulative Risk: Cumulative risk is defined as the probability of developing a condition over a period of time.

Measures of Association

Association is defined as a statistical relationship between two or more variables.

The terms defined below are used to measure the strength of the association between exposure and disease, for etiological studies and hypothesis testing.

Relative risk (RR): The relative risk is measured as a ratio of two risks.

For example, in a population of 100 people consisting of 50 men and 50 women, 20 men are infected with tuberculosis, while 10 women develop the condition.

Risk in men: 20/50

Risk in women: 10/50

Therefore, the relative risk (RR) of developing tuberculosis in men compared to women is

RR = 20/50 : 10/50 = 2.0

i.e., men are at double the risk of developing tuberculosis compared to women.

Odds ratio (OR): The odds ratio is measured as the ratio of two odds (odds are defined above).

Continuing the previous example of tuberculosis in men and women in a total population of 100:

Odds in men: 20/30

Odds in women: 10/40

Odds ratio (OR) = 20/30 : 10/40 = 2.67

Therefore, the odds of men getting infected with tuberculosis are 2.67 times as high as the odds of women developing tuberculosis.
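The same numbers can be verified in a few lines of Python (a sketch; the variable names are illustrative):

    infected_men, men = 20, 50
    infected_women, women = 10, 50

    risk_men = infected_men / men                            # 20/50 = 0.4
    risk_women = infected_women / women                      # 10/50 = 0.2
    relative_risk = risk_men / risk_women                    # 2.0

    odds_men = infected_men / (men - infected_men)           # 20/30
    odds_women = infected_women / (women - infected_women)   # 10/40
    odds_ratio = odds_men / odds_women                       # 2.67

    print(relative_risk, round(odds_ratio, 2))               # 2.0 2.67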

To measure the impact of the disease association on public health, the following measurements are important. All of these assume that the association between exposure and disease is causal.

Attributable risk (AR): The amount of disease attributed to the exposure, i.e., the difference between the incidence of disease in the exposed group (Ie) and the incidence of disease in the unexposed group (Iue).

AR = Ie – Iue

Attributable (risk) fraction (ARF): ARF is the proportion of disease in the exposed population that can be attributed to the exposure.

ARF = (Ie – Iue) / Ie

Population attributable risk (PAR): The incidence of disease in the total population (Ip) that can be attributed to the exposure.

PAR = Ip – Iue

Population attributable (risk) fraction (PARF): PARF is the proportion of disease in the total population that can be attributed to the exposure.

PARF = (Ip – Iue) / Ip
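Putting the four impact measures together, here is a minimal Python sketch (the function name and the incidence values are illustrative assumptions, not from the original post):

    def impact_measures(i_exposed, i_unexposed, i_population):
        ar = i_exposed - i_unexposed                           # AR = Ie - Iue
        arf = (i_exposed - i_unexposed) / i_exposed            # ARF = (Ie - Iue) / Ie
        par = i_population - i_unexposed                       # PAR = Ip - Iue
        parf = (i_population - i_unexposed) / i_population     # PARF = (Ip - Iue) / Ip
        return ar, arf, par, parf

    # Hypothetical incidences: 10% in the exposed, 2% in the unexposed, 4% overall
    print(impact_measures(0.10, 0.02, 0.04))
    # approximately (0.08, 0.8, 0.02, 0.5)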

 

Bias and Confounding Factors

In an epidemiological study, when an association is found between exposure and disease, it is very important to first check whether the association is real. One needs to be cautious about whether the association arose by chance due to an inadequate sample size, or because of some kind of bias in the design or measurement.

Bias is a systematic error in design, conduct, or analysis that results in a spurious association of exposure with disease. Three types of bias are possible: (i) selection bias, (ii) information bias, and (iii) confounding.

Selection bias occurs when the way participants are selected into one group differs systematically from the way they are selected into the other groups, so that the groups are not comparable. Information bias happens when information is collected differently from the two groups.

Confounding occurs when the observed relationship between exposure and disease differs from the truth due to the influence of a third variable that has not been considered in the analysis. For example, a person suffers from headaches when he is under stress; however, the person also eats a lot of junk food, especially when he is under stress. It is therefore hard to tell what actually causes the headache: the stress itself, lack of sleep, anxiety, or gas formation due to indigestion. All these variables should be adjusted for before associating mental stress with headache.

 

References

1. Health Statistics New South Wales – Definitions. (n.d.). http://www.healthstats.nsw.gov.au/ContentText/Display/Definitions

2. Sources of Epidemiologic Data – KSU. (n.d.).

http://faculty.ksu.edu.sa/71640/Publications/COURSES/epidemiology-334%20CHS%20%20(70).doc

3. Johns Hopkins open courseware. http://ocw.jhsph.edu/courses/fundepiii/lectureNotes.cfm

4. Bayona M, Olsen C. Measures in Epidemiology. In: The Young Epidemiology Scholars Program (YES).

www.collegeboard.com/prod_downloads/yes/4297_MODULE_09.pdf‎

5. Harris EL. Linking Exposures and Endpoints: Measures of Association and Risk.

http://www.genome.gov/pages/about/od/opg/epidemiologyforresearchers/3_harris.pdf

Analytical Study Designs in Medical Research

In medical research, it is important for a researcher to know about the different analytical studies. Different analytical studies have different objectives, each aiming to determine a different aspect of a disease, such as prevalence, incidence, cause, prognosis, or effect of treatment. Therefore, it is essential to identify the analytical study appropriate to a given objective. Analytical studies are classified as experimental and observational. While in an experimental study the investigator examines the effect of the presence or absence of certain intervention(s), in an observational study he does not need to intervene; rather, he observes and assesses the relation between exposure and disease variables. Interventional studies or clinical trials fall under the category of experimental studies, where the investigator assigns the exposure status. Observational studies are of four types: cohort studies, case-control studies, cross-sectional studies, and longitudinal studies.

Classification of Analytical Studies

While experimental studies are sometimes not indicated, not ethical to conduct, or very expensive, observational studies are probably the next best approach for answering certain investigative questions. Well-designed observational studies may produce results similar to controlled trials; therefore, they should probably not be considered merely second-best options. In order to design an appropriate observational study, one should be able to distinguish among the four different observational studies and their appropriate applications depending on the investigative question. Following is a brief discussion of the four observational studies (each will be discussed in detail individually in my upcoming blogs):

 

Observational Analytical Study Designs

Cohort studies

Cohort methodology is one of the main tools of analytical epidemiological research. The word “cohort” is derived from the Latin word “cohors”, meaning unit. The word was adopted in epidemiology to refer to a set of people monitored for a period of time. In modern epidemiology, it is defined as a “group of people with defined characteristics who are followed up to determine incidence of, or mortality from, some specific disease, all causes of death, or some other outcome” (Morabia, 2004). In cohort studies, individuals who initially do not have the outcome of interest are identified and followed for a period of time. The group can be classified into subsets on the basis of exposure. For example, a group of people consisting of both smokers and non-smokers can be identified and followed for the incidence of lung cancer. At the beginning of the study, none of the individuals has lung cancer; the individuals are grouped into two subsets, smokers and non-smokers, and then followed over a period of time for different characteristics of exposure, such as smoking, BMI, eating habits, exercise habits, family history of lung cancer or cardiovascular disease, etc. Over time, some individuals develop the outcome of interest. From the data collected over time, it is straightforward to evaluate the hypothesis that smoking is related to the incidence of lung cancer. The following schematic shows the basic design of a cohort study. There are two types of cohort studies: prospective and retrospective. A prospective study is conducted at present but followed up into the future, i.e., waiting for the disease to develop. On the other hand, a retrospective study is carried out at present on data collected in the past; this is also called a historic cohort study. In the next blog, I will discuss these in detail.

Design of a Cohort study

Case-control studies

In terms of objective, case-control and cohort studies are the same: both are observational analytical studies that aim to investigate the association between exposure and outcome. The difference lies in the sampling strategy. While cohort studies identify subjects based on exposure status, case-control studies identify subjects based on outcome status. Once the outcome status is identified, the subjects are divided into two sets: cases and controls (those who do not develop the outcome). Consider, for example, a study that examines the relation between endometrial cancer and the use of conjugated estrogens. For this study, subjects are chosen based on outcome status (endometrial cancer), i.e., with the disease present (cases) or absent (controls), and then these two subsets are compared with respect to the exposure (use of conjugated estrogens). A case-control study is therefore retrospective in nature and cannot be used for calculating relative risk. However, the odds ratio can be measured, which in turn approximates the relative risk. For rare outcomes, a case-control study is probably the only feasible analytical study approach.
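The reason the odds ratio works as a stand-in is easiest to see with numbers. Here is a small Python sketch (the counts are hypothetical) showing that, for a rare outcome, the odds ratio computed from a 2×2 table is numerically close to the relative risk:

    # Hypothetical full-population counts for a rare outcome
    a, b = 8, 9992     # exposed: diseased, not diseased
    c, d = 2, 9998     # unexposed: diseased, not diseased

    rr = (a / (a + b)) / (c / (c + d))   # relative risk = 4.0
    odds_ratio = (a * d) / (b * c)       # odds ratio ~ 4.0024

    print(round(rr, 4), round(odds_ratio, 4))   # 4.0 4.0024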

Design of a Case-Control Study

Cross-sectional studies

A cross-sectional study is a type of observational analytical study used primarily to determine prevalence without manipulating the study environment. For example, a study can be designed to determine the cholesterol levels of walkers and non-walkers without imposing any exercise regime on the non-walkers or modifying the activity of the walkers. Apart from cholesterol, other characteristics of interest, such as age, gender, food habits, educational level, occupation, income, etc., can also be measured. The data are collected at one point in time, with no further follow-up. In a cross-sectional design, one can study a single population (only walkers) or more than one population (both walkers and non-walkers) at one point of time to see the association between cholesterol level and walking. However, this design does not allow one to examine the cause of a certain condition, since the subjects are never followed up.

Design of a Cross-Sectional Study

Longitudinal studies

Longitudinal studies, like cross-sectional studies, are a type of observational analytical study. However, the difference from the cross-sectional design is that the subjects are followed up over a longer time; hence, such studies can contribute more to establishing the cause of a condition. For example, consider a design that aims to determine the cholesterol level of a single population, say walkers, over a period of time, along with some other characteristics of interest such as age, gender, food habits, educational level, occupation, income, etc. One may choose to examine the pattern of cholesterol levels in men aged 35 years walking daily for 10 years. The cholesterol level is measured at the onset of the activity (here, walking) and followed up throughout the defined time period, which enables one to detect any change or development in the characteristics of the population.

The following two tables summarize the different observational analytical studies with regard to their objectives and time-frames.

[Tables: observational analytical study designs by objective and time-frame]

I will define several terms related to study designs, such as risk factor, odds ratio, probability, confounding factors, etc., along with a detailed discussion of each analytical study design and tips for choosing the correct design depending on the research question, in my upcoming blogs. Visit the blog section of the website (www.manuscriptedit.com) for more such informative and educative topics.

References

[1] Morabia, A (2004). A History of Epidemiologic Methods and Concepts. Birkhaeuser Verlag; Basel: p. 1-405.

[2] Hulley, S.B., Cummings, S.R., Browner, W.S., et al (2001). Designing Clinical Research: An Epidemiologic Approach. 2nd Ed. Lippincott Williams & Wilkins; Philadelphia: p. 1-336.

[3] Merril, R.M., Timmreck, T.C (2006). Introduction to Epidemiology. 4th Ed. Jones and Bartlett Publishers; Mississauga, Ontario: p. 1-342.

[4] Lilienfeld, A.M., and Lilienfeld, D.E. (1980): Foundations of Epidemiology. Oxford University Press, London.

Size does matter: Nano vs. Macroscopic world

We live in an era of nanomaterials, nanotechnology, and nanoscience. What is so special about this nano world? How different is it from the macroscopic world of conventional bulk materials? How does size cause the difference in properties between these two distinct worlds, although the basic material is the same? For example, the properties of gold nanoparticles are distinctly different from those of bulk gold. One simple answer is that nanoparticles consist of a few to a few thousand atoms, while bulk materials are generally composed of billions of atoms. At the nanoscale, gold does not even look yellow! We all know that gold (in bulk) is an inert metal; however, the same metal at a nanosize of about 5 nm works as a catalyst in oxidizing carbon monoxide (CO). Therefore, size does influence properties. But how? What happens when a material breaks down to the nanoscale? Part of the answer lies in the number of surface atoms. Let us elaborate. In the bulk state, gold forms a face-centered cubic (fcc) lattice in which each gold atom is surrounded by 12 gold atoms, and even a gold atom at the surface is surrounded by six adjacent atoms. In a gold nanoparticle, a much higher fraction of the atoms sit at the surface, and surface atoms are always more reactive. This large number of exposed atoms, compared to the bulk material, enables gold nanoparticles to function as a catalyst.

Now, what happens to the color? At the nanoscale, gold loses its vibrant yellow color. While light is reflected from the surface of bulk gold, at the nanoscale the electron cloud resonates with certain wavelengths of light. Depending on its size, a nanoparticle absorbs light of certain wavelengths and emits light at different wavelengths. For example, nanoparticles about 90 nm in size absorb red to yellow light and emit blue-green, whereas particles around 30 nm in size absorb blue and green light and appear red in color.

Physical properties such as melting point, boiling point, conductivity, etc. also change at the nanoscale. For example, when gold melts in its bulk state, it all melts at the same temperature, regardless of whether it is a small ring or a big gold bar. But this is not true for nanoparticles: with decreasing size, the melting point falls, varying by hundreds of degrees. This is because when matter reaches the nano-regime, it no longer follows Newtonian or classical physics; rather, it obeys the rules of quantum mechanics. The nano-effects relevant for nanomaterials are as follows: (i) Gravitational force no longer controls the behavior, because of the very small mass of the nanoparticles; rather, electromagnetic forces determine the behavior of the atoms and molecules. (ii) Wave-particle duality applies to such small masses, and the wave nature shows a pronounced effect. (iii) As a result of wave-particle duality, a particle (e.g., an electron) can penetrate through an energy region or barrier (i.e., an energy potential) that is classically forbidden; this is known as quantum tunneling. In classical physics, a particle can cross a barrier only when it has more energy than the barrier; therefore, the probability of finding the particle on the other side of the barrier is nil if the particle possesses less energy than the barrier. In quantum physics, on the other hand, the probability of finding a particle with less energy than the barrier on the other side is finite. However, to have a tunneling effect, the thickness of the barrier should be comparable with the wavelength of the particle, and this is only possible at the nanoscale. The scanning tunneling microscope (STM), used to characterize nanosurfaces, is based on quantum tunneling.

(iv) Quantum confinement: electrons that move freely in a bulk material become confined in space at the nanoscale. The size-tunable electronic properties of nanoparticles arise from quantum confinement.

(v) Energy quantization: energy is quantized, and an electron can exist only at discrete energy levels. Quantum dots, a special class of nanoparticles of size 1-30 nm, show the effect of energy quantization.

(vi) Random molecular motion: above absolute zero, molecules are always moving owing to their kinetic energy. This motion is negligible compared with the size of a macroscale object; however, at the nanoscale, it becomes comparable to the size of the particle and hence influences the particle’s behavior.

(vii) Increased surface-to-volume ratio: the changes in the bulk properties (melting point, boiling point, hardness, etc.) can be attributed to the enhanced surface-to-volume ratio of nanoparticles.

Therefore, in a nutshell, because of the above-mentioned effects, the properties of a material in the nano-regime differ from those at the macroscale.

Interdisciplinary research – Direct Imaging of Single Molecule

Interdisciplinary research has immense potential. I talked about one of the major discoveries of modern science based on interdisciplinary research in my previous blog, posted on 29th July 2013 (http://blog.manuscriptedit.com/2013/07/interdisciplinary-research-nobel-prize-chemistry-won-biologists/). Today, let us take another example, where a chemist and a physicist came together and presented us with direct images of the internal covalent bond structure of a single molecule, using one of the most advanced imaging tools, the non-contact atomic force microscope (nc-AFM). Dr. Felix R. Fischer (http://www.cchem.berkeley.edu/frfgrp/), a young Assistant Professor of Chemistry at the University of California (UC), Berkeley, along with his collaborator Dr. Michael Crommie (http://www.physics.berkeley.edu/research/crommie/home), a UC Berkeley Professor of Physics, captured images of the internal bond structure of oligo(phenylene-1,2-ethynylenes) [Reactant 1] as it underwent cyclization to give different cyclic compounds (one of which is shown at http://newscenter.berkeley.edu/2013/05/30/scientists-capture-first-images-of-molecules-before-and-after-reaction/). Chemists generally determine the structures of molecules indirectly, using different spectroscopic techniques (NMR, IR, UV-vis, etc.). The molecular structures we see in textbooks result from this indirect way of structure determination, either theoretical or experimental or both; it is more like putting together various parts to solve a puzzle. But now, with this groundbreaking work of the two scientists from UC Berkeley, one can directly see, for the very first time in the history of science, how a single molecule undergoes transformation in a chemical reaction and how the atoms reorganize themselves under certain conditions to produce another molecule. No more puzzle-solving for the next generation of chemists to determine molecular structure.

How interdisciplinary research made it possible:

Well, it was not an easy task for the scientists to come up with these spectacular molecular images. Imaging techniques such as scanning tunneling microscopy (STM) and transmission electron microscopy (TEM) have their limitations and are often destructive to organic molecular structures. An advanced technique like nc-AFM, in which a single carbon monoxide molecule sits on the tip (probe), enhances the spatial resolution of the microscope, and the method is also non-destructive. The thermal cyclization of Reactant 1 was probed at the single-molecule level on an atomically clean silver surface, Ag(001), under ultra-high vacuum by STM and nc-AFM. Before probing, the reaction surface and the molecules were chilled to liquid-helium temperature, about 4 K (−269 °C). The researchers first located the surface molecules by STM and then performed further fine-tuning with nc-AFM, and the result is the set of images linked above. For cyclization, Reactant 1 was heated to 90 °C, and the products were chilled and probed; chilling after heating did not alter the structure of the products. The mechanism of thermal cyclization was also clearly elucidated, and the mechanistic pathway was in agreement with theoretical calculations. From the blurred images of STM, Dr. Fischer and Dr. Crommie, along with their coworkers, presented us with crystal-clear molecular images with visible internal bond structure. This groundbreaking work shows the potential of nc-AFM and unveils the secrets of surface-bound chemical reactions, which will definitely have a huge impact on the oil and chemical industries, where heterogeneous catalysis is widely used. The technique will also help in creating customized nanostructures for use in electronic devices.

Again, this path-breaking work was possible due to collaborative research between chemists and physicists. Hence, interdisciplinary research has endless potential.

References

1. de Oteyza DG, Gorman P, Chen Y-C, Wickenburg S, Riss A, Mowbray DJ, Etkin G, Pedramrazi Z, Tsai H-Z, Rubio A, Crommie MF, Fischer FR. Direct imaging of covalent bond structure in single-molecule chemical reactions. Science (2013); 340: 1434-1437.

 

Interdisciplinary research – Nobel Prize for Chemistry was awarded to two Biologists

Modern scientific research does not confine itself to any restricted boundary; nowadays, it is all about interdisciplinary research. In 2012, the Nobel Prize for Chemistry (http://www.nobelprize.org/nobel_prizes/chemistry/) was awarded to two eminent biologists, Prof. Robert J. Lefkowitz and Prof. Brian Kobilka, for their crucial contributions to unveiling the signalling mechanism of G protein-coupled receptors (GPCRs). It represents the lifetime work of both scientists. Dr. Lefkowitz, an investigator at the Howard Hughes Medical Institute (HHMI) at Duke University, is also James B. Duke Professor of Medicine and of Biochemistry at Duke University Medical Center, Durham, NC, USA. Dr. Kobilka, earlier a postdoctoral fellow in Dr. Lefkowitz’s laboratory, is currently Professor of Molecular and Cellular Physiology at Stanford University School of Medicine, Stanford, CA, USA.

Transmembrane signalling of one GPCR “caught in action” by X-ray crystallography

GTP (guanosine triphosphate) binding proteins (G-proteins) act as molecular switches, transmitting signals from different stimuli outside the cell to the inside of the cell. However, to do this, a G-protein needs to be activated, and that is where GPCRs play the most important role. They sit in cell membranes throughout the body. GPCRs, also known as seven-transmembrane domain proteins (they pass through the cell membrane seven times), detect external signals like odor, light, and flavor, as well as signals within the body such as hormones and neurotransmitters.1 Once a GPCR detects a signal, the signal is transduced along a certain pathway and finally activates the G-protein. In response, the activated G-protein triggers different cellular processes. Binding of a signalling molecule, or ligand, to the GPCR causes conformational changes in the GPCR structure. As a result of 20 long years of extensive research, Dr. Lefkowitz and Dr. Kobilka not only identified 800 members of the GPCR family in humans but also caught in action, with the help of high-resolution X-ray crystallography, how these receptor proteins actually carry out signal transduction. The crystal structure of the β2-adrenergic receptor (β2AR), a member of the human GPCR family, was reported by Dr. Kobilka and his colleagues in 2007.2 The hormones adrenaline and noradrenaline are known to activate β2AR, and the activated β2AR triggers different biochemical processes that help in speeding up the heart and opening the airways as part of the body’s fight-or-flight response. The β2AR is a key target of anti-asthma drugs. One of the major breakthroughs came in 2011, when Dr. Kobilka and his co-workers unveiled for the first time the exact moment of transmembrane signalling by a GPCR: they reported the crystal structure of “the active state ternary complex composed of agonist-occupied monomeric β2AR and nucleotide-free Gs heterotrimer”.3 A major conformational change in β2AR during signal transduction was discovered.

Now, what is so special about GPCRs? Well, these proteins belong to one of the largest families of human proteins. GPCRs are involved in most physiological activities and hence are the targets of a large number of drugs. Determining the molecular structures of this class of receptors not only helps researchers understand the actual mechanisms of different cellular processes but also helps them design life-saving and more effective drugs. So, in a nutshell, this scientific breakthrough was possible due to the involvement of experts from different areas of science, such as chemistry, biochemistry, molecular and cellular biology, structural biology, cardiology, and crystallography.

 

References

 

  1. Lefkowitz, R. J. Seven transmembrane receptors: something old, something new. Acta Physiol. (Oxf.) 190, 9–19 (2007).
  2. Rasmussen, S. G. et al. Crystal structure of the human β2 adrenergic G-protein-coupled receptor. Nature 450, 383–387 (2007).
  3. Rasmussen, S. G. et al. Crystal structure of the β2 adrenergic receptor–Gs protein complex. Nature 477, 549–557 (2011).