India’s first markets: Their role in 21st-century economic progress

India’s first markets have for millennia attracted people: to conquer, to plunder, to settle, to prosper, and to leave behind a living legacy long after they sailed into the sunset as their fortunes ebbed. What attracted them to this land of gold and honey was its abundant wealth; they came to buy in the booming bazaars and to export India’s diverse, high-quality produce and products. India’s first civilization, which spread from its type-site Harappa in Punjab (in present-day Pakistan) to Kutch, the Ganga valley, and even further south to coastal Maharashtra and Karnataka, bears ample evidence of the existence of such flourishing markets and port towns in proto-historic India.

India’s first markets as carriers of economic growth and prosperity

If truth be told, the affluence and opulence of ancient Indian civilizations were built by India’s first markets on the twin engines of economic growth, namely, domestic trade and exports. Many centuries before Japan and South Korea adopted export-oriented growth strategies to promote the swift development of their economies, the early Indian rulers, craftsmen, and tradesmen had successfully adopted the same growth ideology to usher in riches and luxury for their populace through India’s first markets. Not surprisingly, it has been estimated that India had the world’s largest economy during the years 1-1000 CE. According to Maddison’s calculations (see Table), China and India together contributed 50.5% of world GDP.

[Table: Shares of world GDP according to Maddison’s estimates]

Virtually every facet of human life and venture is a reflection of the development of India’s first markets. Not surprisingly, India is now emerging steadily as an Asian giant, aspiring to become an economic superpower of the world by the mid-21st century.

For good or bad, there is no denying the critical role played by the various mercantile and commercial epochs, and the trade and market expansion they brought, in the origin of India’s first markets. This has enabled India to occupy a pride of place on the global economic map in the 21st century. Each strand of India’s economic, social, and cultural growth is a continual process of evolution, experimentation, and improvisation in every sphere of economic activity, facilitated primarily by the business and commercial exchanges seen in India’s first markets.

The history of nations is essentially a history of their political economies, driven not so much by political as by commercial conquests. Most facets of human behavior surface out of business exchanges at marketplaces. So also do economic processes and institutions. These have not been overnight discoveries or inventions, but have evolved over centuries through an endless string of tried and tested market processes, experiments, and research. To put it more explicitly, the term “market” means much more than its simple generic implication of buying and selling; markets are of various kinds, marketing functions are many and of different genres, as it were, and each genre has developed through time to meet the requirements of specific sorts of transactions.

But for India’s first markets, man might well have remained a nomad in search of food. In view of the critical role that markets undoubtedly played in the rise of human civilization, a probe into both the proto-historic and historic growth of markets makes a fascinating research inquiry. Hence, as one views and visits the modern next-generation exchange markets, it is exciting to peep into the past and trace the growth of markets, market logistics, and market institutions over the past millennia. And no other country seems more apt for such a probe into times of yore than India, which can boast of a long history and has traversed successive historical eras, thanks to the entry of races and cultures of diverse hues in different periods of time.

Analytical Study Designs in Medical Research

In medical research, it is important for a researcher to know about the different analytical study designs. Each type of analytical study has its own objectives and aims to determine different aspects of a disease, such as prevalence, incidence, cause, prognosis, or effect of treatment. Therefore, it is essential to identify the analytical study appropriate to a given objective. Analytical studies are classified as experimental and observational studies. In an experimental study, the investigator examines the effect of the presence or absence of certain intervention(s), whereas in an observational study the investigator does not intervene but rather observes and assesses the relation between the exposure and the disease variable. Interventional studies or clinical trials fall under the category of experimental studies, where the investigator assigns the exposure status. Observational studies are of four types: cohort studies, case-control studies, cross-sectional studies, and longitudinal studies.

Classification of Analytical Studies

While experimental studies are sometimes not feasible, unethical to conduct, or very expensive, observational studies are probably the next best approach to answer certain investigative questions. Well-designed observational studies may also produce results similar to those of controlled trials; therefore, observational studies should not necessarily be considered second-best options. In order to design an appropriate observational study, one should be able to distinguish between the four different observational studies and their appropriate application depending on the investigative questions. Following is a brief discussion of the four different observational studies (each will be discussed in detail individually in my upcoming blogs):

 

Observational Analytical Study Designs

Cohort studies

Cohort methodology is one of the main tools of analytical epidemiological research. The word “cohort” is derived from the Latin word “cohors”, meaning unit. The word was adopted in epidemiology to refer to a set of people monitored for a period of time. In modern epidemiology, the word is defined as a “group of people with defined characteristics who are followed up to determine incidence of, or mortality from, some specific disease, all causes of death, or some other outcome” (Morabia, 2004). In a cohort study, individuals who initially do not have the outcome of interest are identified and followed for a period of time. The group can be classified into subsets on the basis of exposure. For example, a group of people consisting of both smokers and non-smokers can be identified and followed for the incidence of lung cancer. At the beginning of the study, none of the individuals has lung cancer; the individuals are grouped into two subsets, smokers and non-smokers, and then followed for a period of time, with different characteristics of exposure recorded, such as smoking, BMI, eating habits, exercise habits, and family history of lung cancer or cardiovascular diseases. Over time, some individuals develop the outcome of interest. From the data collected over time, one can evaluate the hypothesis that smoking is related to the incidence of lung cancer. The following schematic shows the basic design of a cohort study. There are two types of cohort studies: prospective and retrospective. A prospective study is set up in the present and follows subjects into the future, i.e., it waits for the disease to develop. A retrospective study, on the other hand, is carried out in the present on data collected in the past; this is also called a historical cohort study. In the next blog, I will discuss these in detail.

Design of a Cohort study
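To make the incidence comparison concrete, here is a minimal Python sketch of how one might compute incidence and relative risk from a cohort such as the smoking example above; the counts are invented purely for illustration and are not taken from any real study.

```python
# Hypothetical cohort data: smokers vs. non-smokers followed for lung cancer.
# All counts are made up for illustration.
exposed_cases, exposed_total = 30, 1000        # smokers who developed lung cancer
unexposed_cases, unexposed_total = 5, 1000     # non-smokers who developed lung cancer

incidence_exposed = exposed_cases / exposed_total
incidence_unexposed = unexposed_cases / unexposed_total
relative_risk = incidence_exposed / incidence_unexposed

print(f"Incidence (exposed):   {incidence_exposed:.3f}")
print(f"Incidence (unexposed): {incidence_unexposed:.3f}")
print(f"Relative risk:         {relative_risk:.1f}")  # RR > 1 suggests an association with exposure
```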

Case-control studies

In terms of objective, case-control studies and cohort studies are the same. Both are observational analytical studies that aim to investigate the association between exposure and outcome. The difference lies in the sampling strategy. While cohort studies identify subjects based on their exposure status, case-control studies identify subjects based on their outcome status. Once the outcome status is identified, the subjects are divided into two sets: cases (who have developed the outcome) and controls (who have not). Consider, for example, a study designed to determine the relation between endometrial cancer and the use of conjugated estrogen. For this study, subjects are chosen based on the outcome status (endometrial cancer), i.e., with the disease present (cases) or absent (controls), and then these two subsets are compared with respect to the exposure (use of conjugated estrogen). A case-control study is therefore retrospective in nature and cannot be used to calculate relative risk. However, the odds ratio can be measured, which approximates the relative risk when the outcome is rare. For rare outcomes, a case-control study is probably the only feasible analytical study approach.

Design of a Case-Control Study
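As a rough illustration of how the odds ratio is computed from case-control counts, here is a small Python sketch; the numbers are hypothetical and chosen only to show the arithmetic, not to reflect any published estrogen study.

```python
# Hypothetical case-control counts (invented for illustration):
# exposure = use of conjugated estrogen, outcome = endometrial cancer.
cases_exposed, cases_unexposed = 55, 45        # among women with the cancer
controls_exposed, controls_unexposed = 20, 80  # among women without the cancer

# Odds of exposure among cases divided by odds of exposure among controls.
odds_ratio = (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)
print(f"Odds ratio: {odds_ratio:.1f}")  # approximates the relative risk when the outcome is rare
```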

Cross-sectional studies

A cross-sectional study is a type of observational analytical study used primarily to determine prevalence without manipulating the study environment. For example, a study can be designed to determine the cholesterol levels of walkers and non-walkers without imposing any exercise regimen on the non-walkers or modifying the activity of the walkers. Apart from cholesterol, other characteristics of interest, such as age, gender, food habits, educational level, occupation, and income, can also be measured. The data are collected at a single point in time, with no further follow-up. In a cross-sectional design, one can study a single population (only walkers) or more than one population (both walkers and non-walkers) at one point in time to see the association between cholesterol level and walking. However, this design does not allow one to examine the cause of a condition, since the subjects are never followed over time.

Design of a Cross-Sectional Study
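Since a cross-sectional survey yields prevalence at a single point in time, the comparison above reduces to a simple calculation; the following Python sketch uses invented counts purely for illustration.

```python
# Hypothetical single-time-point survey counts (invented for illustration).
walkers_high_chol, walkers_total = 40, 400
nonwalkers_high_chol, nonwalkers_total = 90, 450

prev_walkers = walkers_high_chol / walkers_total          # prevalence among walkers
prev_nonwalkers = nonwalkers_high_chol / nonwalkers_total  # prevalence among non-walkers

print(f"Prevalence (walkers):     {prev_walkers:.1%}")
print(f"Prevalence (non-walkers): {prev_nonwalkers:.1%}")
print(f"Prevalence ratio:         {prev_walkers / prev_nonwalkers:.2f}")
```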

Longitudinal studies

Longitudinal studies, like cross-sectional studies, are a type of observational analytical study. The difference from the cross-sectional design is that the subjects are followed up over a longer time; hence, a longitudinal study can contribute more to identifying the cause of a condition. For example, a design might aim to determine the cholesterol levels of a single population, say the walkers, over a period of time, along with other characteristics of interest such as age, gender, food habits, educational level, occupation, and income. One may choose to examine the pattern of cholesterol levels in men aged 35 years who walk daily, over a period of 10 years. The cholesterol level is measured at the onset of the activity (here, walking) and followed up throughout the defined time period, which enables the researcher to detect any change or development in the characteristics of the population.

The following two tables summarize the different observational analytical studies with regard to their objectives and time-frames.

[Tables: Objectives and time-frames of the observational analytical study designs]

In my upcoming blogs, I will define several terms related to study designs, such as risk factor, odds ratio, probability, and confounding factors, along with a detailed discussion of each analytical study design and tips for choosing the correct design depending on the research question. Visit the blog section of the website (www.manuscriptedit.com) for more such informative and educative topics.

References

[1] Morabia, A (2004). A History of Epidemiologic Methods and Concepts. Birkhaeuser Verlag; Basel: p. 1-405.

[2] Hulley, S.B., Cummings, S.R., Browner, W.S., et al (2001). Designing Clinical Research: An Epidemiologic Approach. 2nd Ed. Lippincott Williams & Wilkins; Philadelphia: p. 1-336.

[3] Merrill, R.M., Timmreck, T.C. (2006). Introduction to Epidemiology. 4th Ed. Jones and Bartlett Publishers; Mississauga, Ontario: p. 1-342.

[4] Lilienfeld, A.M., Lilienfeld, D.E. (1980). Foundations of Epidemiology. Oxford University Press; London.

Size does matter: Nano vs. Macroscopic world

We live in an era of nanomaterials, nanotechnology, and nanoscience. What is so special about this nano world? How different is it from the macroscopic world of conventional bulk materials? How does size produce such different properties in these two distinct worlds, even though the basic material is the same? For example, the properties of gold nanoparticles are distinctly different from those of bulk gold. One simple answer is that nanoparticles consist of a few to a few thousand atoms, while bulk materials are generally composed of billions of atoms. Look at the image below: at the nanoscale, gold does not even look yellow! All of us know that gold (in bulk) is an inert metal. However, the same metal at a nanosize of about 5 nm works as a catalyst in oxidizing carbon monoxide (CO). Therefore, size does influence the properties. But how? What happens when a material is broken down to the nanoscale? Part of the answer lies in the number of surface atoms. Let’s elaborate. In the bulk state, gold forms a face-centered cubic (fcc) lattice in which each interior gold atom is surrounded by 12 neighboring gold atoms, whereas atoms at the surface have fewer neighbors and are therefore more reactive. In a gold nanoparticle, a much larger fraction of the atoms sits at the surface. This large number of exposed atoms, compared with the bulk material, is what enables gold nanoparticles to function as a catalyst.
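To get a feel for how quickly the surface fraction grows as particles shrink, here is a rough back-of-envelope Python sketch. It treats the particle as a sphere with a surface shell one atom thick and assumes an approximate atomic diameter for gold (~0.29 nm); the numbers are illustrative only, not measured values.

```python
# Back-of-envelope estimate of the fraction of atoms sitting on the surface
# of a spherical gold nanoparticle. Assumes a surface shell one atom thick;
# the atomic diameter (~0.29 nm for Au) is an approximate, illustrative value.

def surface_atom_fraction(particle_diameter_nm, atom_diameter_nm=0.29):
    """Fraction of atoms lying within one atomic layer of the surface."""
    if particle_diameter_nm <= 2 * atom_diameter_nm:
        return 1.0  # particle so small that every atom is a surface atom
    core = (particle_diameter_nm - 2 * atom_diameter_nm) / particle_diameter_nm
    return 1.0 - core ** 3

for d in (2, 5, 20, 100, 1000):  # nm; 1000 nm already behaves like "bulk"
    print(f"{d:>5} nm particle: ~{surface_atom_fraction(d):.1%} surface atoms")
```

With these assumptions, a 5 nm particle has roughly a third of its atoms at the surface, while a micron-sized grain has well under one percent, which is the essence of why nanogold is catalytically active.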

Now, what happens to the color? At the nanoscale, gold loses its vibrant yellow color. While light is simply reflected from the surface of bulk gold, at the nanoscale the electron cloud resonates with certain wavelengths of light. Depending on its size, a nanoparticle absorbs light of certain wavelengths and appears in a different color. For example, nanoparticles about 90 nm in size absorb red to yellow light and appear blue-green, whereas particles around 30 nm in size absorb blue and green light and appear red.

Physical properties such as melting point, boiling point, and conductivity also change at the nanoscale. For example, bulk gold, whether a small ring or a big gold bar, melts at the same temperature. This is not true for nanoparticles: as the size decreases, the melting point drops, and it can vary by hundreds of degrees (see the inset picture). This is because when matter reaches the nano-regime, it no longer follows Newtonian or classical physics; rather, it obeys the rules of quantum mechanics. The nanoeffects relevant to nanomaterials are as follows: (i) Gravitational force no longer controls behavior, owing to the very small mass of the nanoparticles; rather, electromagnetic forces determine the behavior of the atoms and molecules. (ii) Wave-particle duality applies to such small masses, and the wave nature shows a pronounced effect. (iii) As a result of wave-particle duality, a particle (such as an electron) can penetrate an energy barrier (i.e., a potential) that is classically forbidden; this is known as quantum tunneling. In classical physics, a particle can cross a barrier only when it has more energy than the barrier; therefore, the probability of finding the particle on the other side of the barrier is nil if the particle possesses less energy than the barrier. In quantum physics, on the other hand, the probability of finding a particle with less energy than the barrier on the other side is finite. However, for tunneling to occur, the thickness of the barrier should be comparable with the wavelength of the particle, which is only possible at the nanoscale (a rough numerical sketch of tunneling probabilities follows this list). Based on quantum tunneling, the scanning tunneling microscope (STM) was developed to characterize nanosurfaces.

(iv) Quantum confinement: unlike in a bulk material, where electrons can move freely, electrons in a nanoparticle are confined in space. The size-tunable electronic properties of nanoparticles arise from this quantum confinement.

(v) Energy quantization, i.e., energy is quantized: an electron can exist only at discrete energy levels. Quantum dots, a special class of nanoparticles of size 1-30 nm, show the effect of energy quantization.

(vi) Random molecular motion: molecules are always in motion owing to their kinetic energy (even at absolute zero, some zero-point motion remains), although at the macroscale this motion is negligible compared with the size of the object. At the nanoscale, however, this motion becomes comparable to the size of the particle and hence influences its behavior.

(vii) Increased surface-to-volume ratio: the changes in bulk properties (melting point, boiling point, hardness, etc.) can be attributed to the enhanced surface-to-volume ratio of nanoparticles.
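To attach rough numbers to point (iii) above, the following Python sketch estimates the electron tunneling probability through a rectangular barrier using the common approximation T ≈ exp(−2κL), with κ = √(2m(V₀ − E))/ħ. The barrier height (4 eV), electron energy (1 eV), and widths are arbitrary example values chosen only to show the trend, not figures from this post.

```python
import math

# Rough, illustrative estimate of electron tunneling probability through a
# rectangular potential barrier, using T ≈ exp(-2*kappa*L),
# kappa = sqrt(2*m*(V0 - E)) / hbar.  Barrier parameters are arbitrary examples.

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # joules per electronvolt

def tunneling_probability(barrier_width_nm, barrier_eV=4.0, energy_eV=1.0):
    """Approximate transmission probability for a particle with E < V0."""
    kappa = math.sqrt(2 * M_E * (barrier_eV - energy_eV) * EV) / HBAR  # in 1/m
    return math.exp(-2 * kappa * barrier_width_nm * 1e-9)

for width in (0.3, 0.5, 1.0, 5.0):   # nm; STM tip-sample gaps are typically under ~1 nm
    print(f"{width:.1f} nm barrier: T ≈ {tunneling_probability(width):.2e}")
```

The probability falls off exponentially with barrier width, so tunneling is appreciable only for barriers of nanometer thickness, which is exactly what the STM exploits.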

Therefore, in a nutshell, because of the above-mentioned effects, the properties of a material in the nano-regime differ from those at the macroscale.

Japan’s Contribution to World Research

Since the 1980s, Japan has emerged at the forefront of research in several fields and has made path-breaking contributions in the global arena. This is the outcome of significant investment in R&D activities and the centers of excellence in the form of more than 30 leading universities. In fact, together with the U.S. and Europe, Japan ranks among the topmost countries as a proven leader in the global effort toward research and development.

International recognition of Japan’s contribution to World Research

Expectedly, Thomson Scientific, a leading provider of information solutions, has repeatedly recognized Japan’s ongoing impact on global research through the years. In 2007, 17 leading Japanese scientists were honored with the Thomson Research Front Award. The selection was made on the basis of an analysis of communication among scientists and the fact that the Japanese scientists published research papers that were among the most highly cited papers around the world.

More recently, in December 2012, Japanese organizations dominated the Top 100 Global Innovators list announced by the IP & Science business of Thomson Reuters, the world’s leading provider of intelligent information for businesses and professionals. While the U.S. led globally with 47 organizations in the list, there were 25 Japanese organizations out of a total of 32 from Asia. Such recognition shows that Japanese researchers and innovators are at the forefront of global research.

While Japan’s excellence in electronics is a well-established fact, researchers in Japan have a proven track record in the fields of medicine and science, as evident in the long list of Nobel laureates from Japan.

Japan has particularly excelled in medical research in the streams of nuclear medicine, cardiovascular diseases, infectious diseases, and rheumatology. A database analysis of research papers in nuclear medicine published in reputed research journals during the 1990s shows that Japanese researchers contributed more than 11% of the total papers and ranked second behind the U.S.

It might be logical to assume that the excellence of medical research in Japan is a direct result of increasing investment by both the public and private sectors in biomedical R&D. Not surprisingly, a study published in the New England Journal of Medicine on January 2, 2014, reveals that such investment from the private sector surged from $20.9 billion in 2007 to $27.6 billion in 2012. At the macro level, Japan’s total spending on medical research increased by $9 billion and accounted for 13.8% of the world’s total research spending. To put things in perspective, the study emphasizes that the U.S. reduced its spending on medical research over the same period.

Japan’s capacity to innovate, coupled with researchers par excellence, can surely lead the country to scale newer heights in research and to continue its contribution to the global research pool.

Globalization of Academic Research in Japan

Globalization of academic research is a reality in the contemporary world. National boundaries are getting obliterated because of the Internet and instant electronic communication. In fact, in the last few years, there has been increasing collaboration among researchers in science, humanities, and the arts from around the world to produce results that have a global impact. In statistical terms, a report by the National Science Foundation confirms that 6,477 new international research alliances were formed in the 1990s, and this is only a fraction of the international research alliances formed in the 2000s.

This shows that the combined forces of globalization of academic research and internationalization are transforming research worldwide. Research in Japan, which has had an isolated past, has also been reinventing itself since the 1990s. Not surprisingly, Japan is the world’s second highest R&D spender behind the U.S. and contributes 13% of the total expenditure on R&D worldwide. This has largely been possible because of the rapid strides of industrialization in Japan.

However, the talent pool that contributes to industrialization comes from Japan’s academic institutions. Therefore, in the last few decades, there has been a conscious effort to facilitate globalization of academic research in Japan.

Globalization of Academic Research in Japan: Problems and Way Forward

Two factors that have hindered globalization of academic research in Japan are the language barrier and the lack of alignment of the academic year in Japan with the international calendar. However, both these impediments are being tackled to make Japan a truly global destination for research. Several universities have already introduced classes in English for undergraduate courses. For instance, The University of Tokyo (Todai) has launched its new all-English undergraduate programs. Further, there are more than 50 graduate schools where students can enroll for lessons conducted in English.

The mismatch in the academic year in Japan with the global academic calendar is also under scrutiny for change, although it might take some time coming because it will entail a complete overhaul of Japan’s education system. However, Todai has recently announced a four-semester plan, which is likely to start in March 2015. This will make it easier for foreign students to study at Todai from the beginning of the second term in September, and for Japanese students to utilize the summer break of June-August to study overseas. With a similar objective of attracting overseas researchers, Waseda University has also introduced four “quarter terms” as an alternative to the semester system.

Japan’s plan for international student exchange, known as the “300,000 International Students Plan,” was launched in 2008 and aims to achieve the targeted number of international students by 2020. Another significant initiative toward globalization of research in Japan was the “Global 30” Project, launched by the Ministry of Education, Culture, Sports, Science, and Technology in 2009. The objective of this program was to establish 30 core universities for internationalization. This initiative has successfully broken down the language barrier, and a range of courses in many research fields are being offered in English at these universities.

The university is undoubtedly the cradle for pioneering research in the future. Therefore, it is important to open up Japan’s research institutions to international research talent, and to simultaneously send Japanese researchers for exposure in other countries.  Already, researchers from universities and research institutes in Japan are travelling to China, Vietnam, Russia, Hungary, Germany, France and many other countries, with reciprocal visits from those countries.

The Future of Globalization of Academic Research in Japan

These are the first steps in Japan toward an international mix of researchers, which have gathered momentum after the turn of the millennium and will yield tangible results in the next few decades. Assimilation of international researchers in higher education, backed by favorable government policies and funding for research projects, will go a long way toward globalization of research in Japan, and will hopefully take research in Japan even higher on the global stage.

Ultrasound Patch: An innovative technology for rapid treatment and management of ulcers

Venous skin ulcers, also known as stasis ulcers or varicose ulcers, are chronic wounds caused by poor blood circulation resulting from faulty venous valves or veins, usually occurring in the lower part of the leg, between the ankle and the calf. This condition is known as venous insufficiency and accounts for roughly 70% to 90% of leg ulcer cases. These ulcers are often recurring, extremely painful, and can take months or even years to heal. The condition affects approximately 500,000 Americans annually, and the number is expected to increase as the rate of obesity climbs. It is estimated that treating venous skin ulcers costs US healthcare systems over one billion dollars per year, and treatment costs can run as high as $2,400 a month. Current treatments for venous skin ulcers are either conservative management, such as compression therapy, or invasive and expensive surgical procedures, such as skin grafts. Other available treatment options include mechanical treatment and medications. The most standard treatment, however, involves infection control, wound dressings, and compression therapy, in which patients wear elastic stockings to help improve leg circulation. Nevertheless, none of these approaches is successful in every case, and these wounds often take months or sometimes years to heal.

Designing the Ultrasound Patch:

Recently, a team of researchers led by Dr. Peter A. Lewin at Drexel University in Philadelphia designed a novel non-invasive technique called the “Ultrasound Patch” for treating chronic ulcers and wounds. The technique uses patches with a novel ultrasound applicator that can be worn effortlessly like a band-aid. In this alternative therapy, a battery-powered patch sends low-frequency, low-intensity ultrasound waves directly to the wound site. The therapeutic benefits of ultrasound for wound healing were established in previous studies, but most of those studies used much higher frequencies, around 1-3 megahertz (MHz). Dr. Lewin believed that decreasing the frequency to 20–100 kilohertz (kHz) might work better with reduced exposure. According to him, one of the biggest challenges in designing this technology was to build a battery-powered patch, since most ultrasound transducers require a bulky apparatus that has to be fixed to a wall. Dr. Lewin and colleagues wanted to create something portable that could be easily worn, which meant the device essentially had to be battery operated. To accomplish this, they designed a transducer that could produce medically relevant energy levels using minimal voltage. The ultrasound patch in its present form weighs approximately 100 grams and requires two rechargeable AA batteries. It is designed to be worn over the ulcer or wound, and the patient can deliver controlled pulses of ultrasound directly to the wound while at home. The funding for this study was received from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), a part of the National Institutes of Health.

Clinical studies for testing Ultrasound patch

To determine the optimal frequency and treatment duration of the ultrasound patch, the trial was initially carried out in a total of 20 patients, divided into four groups. Each group received either 20 kHz for 15 minutes, 20 kHz for 45 minutes, 100 kHz for 15 minutes, or a 15-minute placebo with no ultrasound exposure. According to the researchers, the first group eventually came out best: all five participants had completely healed by the time they reached their fourth session. In contrast, the ulcers of the patients in the placebo group worsened over a similar duration. The results suggested that patients who received this low-frequency, low-intensity ultrasound therapy during their weekly follow-ups (in addition to standard compression therapy) showed a net reduction in wound size after just four weeks of therapy, whereas patients who did not receive the ultrasound treatment had an average increase in wound size. The team’s clinical findings were further supported by their in vitro studies, in which, 24 hours after receiving 20 kHz ultrasound for 15 minutes, mouse fibroblast cells (which play an active role in wound healing) showed a 32% increase in cell metabolism and a 40% increase in cell proliferation compared with control cells. These findings are yet to be published in the Journal of the Acoustical Society of America.

Advantages and applications of Ultrasound patch

Researchers believe that using the ultrasound patch for chronic ulcers will reduce treatment costs and patient discomfort. It aids speedier wound recovery compared with conventional approaches and could eventually be used to manage wounds associated with diabetic and pressure ulcers. However, before it can be applied widely, larger-scale studies need to be conducted to establish its overall safety and efficacy. The ultrasound patch is lightweight and can be easily worn like a band-aid. Another characteristic feature of the patch is an attached monitoring component that uses near-infrared spectroscopy (NIRS) to assess the progress of wound healing. NIRS can non-invasively assess changes in the wound bed and monitor whether the treatment is working in its initial stages, when healing is difficult to spot with the naked eye. Using this patch will also prevent frequent visits to the doctor’s clinic or hospital, which can at times be very difficult for patients with chronic wounds. Currently, studies with larger numbers of patients are underway to confirm the safety and efficacy of this patch before it makes its way into clinics.

SEO Content Writing

Search engine optimisation (SEO) content writing is not just something to be used for a website; it can deliver various other benefits as well. In a broad sense, SEO tools can be used in many different ways to promote websites, businesses, and products and services. In fact, one will be completely astonished at all the ways in which these services can be useful.

The present article on “SEO content writing” gives the fundamental ways to use SEO content writing and SEO tools. It also provides detailed information on why to use the services of SEO content writing.

Ways to use SEO Content Writing

Firstly, SEO content writing can definitely be used for websites. Organic search engine optimisation (organic SEO) is essential for appearing at a high position in search engine listings and drawing visitors to the website. At the same time, the content must be appealing, fascinating, and instructive for readers. Consequently, visitors will be tempted to stay on your website long enough to possibly buy services and/or products, or at least ask for some extra information. Hence, SEO content writing can certainly be used to improve the content of a website.

Secondly, SEO content writing can also be used for blogs for similar reasons as discussed above. Blogs are a great way to endorse businesses, develop brand appreciation and representation, as well as increase visitors to the main website. For this purpose, we have to use the same SEO procedures for our blogs that we would have used for the main website. Ensure that the blogs are mostly instructive in nature. Besides, they must be connected to your main website as a source for extra information, useful products or services.

Thirdly, SEO content writing can be employed for internet marketing purposes. Ideally, anything you post on the internet should utilise organic SEO, so that you have as many positive links to your website as possible and, as a result, receive plenty of hits in the search engines from different websites and advertising methods. The most common way these services are used for internet marketing is through article marketing. Instructive articles are written about products, services, the company, and/or the industry. These articles are then posted to article directories, where they get indexed by the search engines. The articles carry a link back to the main website in order to increase visitors. Meanwhile, people come across the content and get to know the company/brand in a positive light, thus increasing business.

On the whole, there are several other ways in which SEO content writing can assist in your dealings. The only limitation is your own imagination. Gradually, you will realise that the more SEO content you put on the website, the more successful your company will become. Therefore, utilise these services as much as possible, and be surprised at the outcomes you will accomplish.

Antibiotic Resistance: Cause and Mechanism

Scope of the antibiotic resistance problem:

Antibacterial-resistant strains and species, occasionally referred to as “superbugs”, now contribute to the emergence of diseases that were well controlled a few decades ago. In a recent report, “Antibiotic Resistance Threats in the United States, 2013,” the CDC calls this a critical health threat for the country. According to the report, more than 2 million people in the United States get antibiotic-resistant infections each year, and at least 23,000 of them die annually. This is the situation in a country where drug regulations are tough and stringent and physicians are relatively careful in prescribing medications. Imagine the situation in developing countries like India, where antibiotics are available over the counter without a medical prescription and an estimated 80-90% of the population use antibiotics without a physician’s consultation. In fact, many are not even aware of how to complete an antibiotic course properly. This is a huge health challenge that will pose an even more serious threat in the coming years in treating antibiotic-resistant infections. Recently, at a clinic in Mumbai, some 160 of the 566 patients who tested positive for TB between March and September carried strains resistant to the most powerful TB medicines. In fact, more than one-quarter of people diagnosed with tuberculosis have a strain that does not respond to the main treatment against the disease. According to the WHO and data from the Indian government, India has about 100,000 of the 650,000 people in the world with multi-drug-resistant TB.

Factors contributing to antibiotic resistance:

Inappropriate treatment and misuse of antibiotics have contributed the most to the emergence of antibacterial-resistant bacteria. Many antibiotics are frequently prescribed to treat diseases that do not respond to antibacterial therapy or are likely to resolve without any treatment. Often, incorrect or suboptimal doses of antibiotics are prescribed for bacterial infections. Self-prescription of antibiotics is another example of misuse. The most common forms of antibiotic misuse, however, include excessive use of prophylactic antibiotics by travelers and the failure of medical professionals to prescribe the correct dosage of antibiotics based on the patient’s weight and history of prior use. Other forms of misuse include failure to complete the entire prescribed course of antibiotics, incorrect dosage, or failure to rest for sufficient recovery. Other major contributors to antibiotic resistance are the excessive use of antibiotics in animal husbandry and the food industry, and frequent hospitalization for minor medical issues, where resistant strains get a chance to circulate in the community.

To conclude, humans contribute the most to the development and spread of drug resistance by: 1) not using the right drug for a particular infection; 2) not completing the prescribed antibiotic course; or 3) using antibiotics when they are not needed.

In addition to the growing threat of antibiotic-resistant bugs, there may be another valid reason doctors should desist from freely prescribing antibiotics. According to a recent paper published online in Science Translational Medicine, certain antibiotics cause mammalian mitochondria to fail, which in turn leads to tissue damage.

Mechanism of antibiotic resistance:

Antibiotic resistance is a condition in which bacteria develop insensitivity to drugs (antibiotics) that would normally cause growth inhibition or cell death at a given concentration.

Resistance can be categorized as:

a) Intrinsic or natural resistance: Naturally occurring antibiotic resistance is very common; a bacterium may simply be inherently resistant to an antibiotic. For example, Streptomyces possesses genes conferring resistance to its own antibiotics, and some bacteria naturally lack the target sites for a drug, have low permeability, or lack the efflux pumps or transport systems for antibiotics. The genes that confer this resistance are known as the environmental resistome, and these genes can be transferred from non-disease-causing bacteria to disease-causing bacteria, leading to clinically significant antibiotic resistance.

b) Acquired resistance: Here, a naturally susceptible microorganism acquires ways to avoid being affected by the drug. Bacteria can develop resistance to antibiotics through mutations in chromosomal genes or through mobile genetic elements (e.g., plasmids and transposons) carrying antibiotic resistance genes.

The two major mechanisms of how antibiotic resistance is acquired are:

Genetic resistance: It occurs via chromosomal mutations or acquisition of antibiotic resistance genes on plasmids or transposons.

Phenotypic resistance: Phenotypic resistance can be acquired without any genetic alteration. Mostly it is achieved through changes in the bacterial physiological state. Bacteria can become non-susceptible to antibiotics when not growing, such as in the stationary phase, in biofilms, as persisters, or in the dormant state. Examples include salicylate-induced resistance in E. coli, staphylococci, and M. tuberculosis.

Within the genetic resistance category, the following are the five major mechanisms of antibiotic resistance that arise through chromosomal mutations:

1. Reduced permeability or uptake (e.g. outer membrane porin mutation in Neisseria gonorrhoeae)

2. Enhanced efflux (a membrane-bound protein helps extrude antibiotics out of the bacterial cell; e.g., drug efflux in Streptococcus pyogenes and Streptococcus pneumoniae)

3. Enzymatic inactivation (beta-lactamases cleave beta-lactam antibiotics and cause resistance)

4. Alteration or overexpression of the drug target (e.g., resistance to rifampin and vancomycin)

5. Loss of enzymes involved in drug activation (as in isoniazid resistance-KatG, pyrazinamide resistance-PncA)

Examples of resistance genes transferred through plasmids are the sulfa drug resistance and streptomycin resistance genes strA and strB, while transfer of resistance genes through transposons occurs via conjugative transposons in Salmonella and Vibrio cholerae.

In the next post, I will discuss a few important examples of antibiotic resistance in clinically relevant microbes.

Pharmacogenomics: A study of personalized drug therapy

With increasing advancements in technology and research, modern medicine has found cures for several diseases that were considered incurable a few decades ago, e.g., cardiovascular diseases, various cancers, tuberculosis, malaria, and infectious diseases. However, to date no single drug has been shown to be 100% efficacious in treating a given condition without exhibiting adverse effects. It is now a well-recognized fact that each patient responds differently to a given drug treatment for the same disease. With a particular drug, desirable therapeutic effects may be obtained in some patients, whereas others may have a modest or no therapeutic response. Besides, many patients might experience an adverse effect, which can vary from mild to severe and life-threatening. Studies have shown that, at a similar dose, the plasma concentration of a certain drug can vary by up to 600-fold between two individuals of the same weight. Such inter-individual variation in drug response is a consequence of complex interactions between various genetic and environmental factors. Genetic factors are known to account for approximately 15-30% of inter-individual variability in drug disposition and response, but for certain drugs they can account for up to 95% of the variation. For the majority of drugs, these differences are largely ascribed to polymorphic genes encoding drug-metabolizing enzymes, receptors, or transporters. These polymorphic genes mainly influence important pharmacokinetic characteristics of drugs, e.g., absorption, distribution, metabolism, and elimination.

Origin of pharmacogenomics:

The first report of an inherited difference in response to a foreign chemical, or xenobiotic, was the inability to taste phenylthiocarbamide. Another example, showing that drug response is determined by genetic factors that can alter the pharmacokinetics and pharmacodynamics of medications, emerged in the late 1950s, when an inherited deficiency of glucose-6-phosphate dehydrogenase was shown to cause severe hemolysis in some patients exposed to the antimalarial drug primaquine. This discovery explained why hemolysis was reported mainly in African-Americans, in whom this deficiency is common, and rarely observed in Caucasians. Other established evidence of inter-individual variation in response to suxamethonium (succinylcholine), isoniazid, and debrisoquine was also linked to genetics. The discoveries that prolonged paralysis following the administration of succinylcholine results from a variant of the butyrylcholinesterase enzyme, and that the peripheral neuropathy occurring in a large number of patients treated with the antituberculosis drug isoniazid is an outcome of genetic diversity in the enzyme N-acetyltransferase 2 (NAT2), are excellent examples of “classical” pharmacogenetic traits arising from variants that alter the amino acid sequence.

These observations of highly variable drug response, which began in the early 1950s, led to the beginning of a new scientific discipline known as pharmacogenetics. Vogel was the first to use the term pharmacogenetics, in 1959, but it was not until 1962 that, in a book by Kalow, pharmacogenetics was defined as the study of heredity and the response to drugs.

Pharmacogenomics in the new era:

The term pharmacogenomics was later introduced to reflect the transition from genetics to genomics and the use of genome-wide approaches to identify the genes that contribute to a specific disease or drug response. The terms pharmacogenomics and pharmacogenetics are often used interchangeably. Pharmacogenomics is an emerging discipline that aims to relate genetic differences in drug disposition or drug targets to drug response. With the availability of more sophisticated molecular tools for detecting genetic polymorphisms, and advances in bioinformatics and functional genomics, pharmacogenomics-based studies are generating data used to identify the genes responsible for a specific disease or drug response. Emerging data from various human genome projects on drug-metabolizing genes are rapidly being elucidated and translated into more rational drug therapy, moving toward a personalized medicine approach. Many physicians are now reconsidering whether a “one drug for all” approach is ideal when prescribing medicines to treat a certain condition in different individuals. Various studies have now reported genotype-phenotype associations for many diseases in which the relevant drug-metabolizing genes and receptors are highly polymorphic. In the last decade, the FDA has increasingly acknowledged the importance of biomarkers and formulated new recommendations on pharmacogenomic diagnostic tests and data submission.

Applications and challenges of Pharmacogenomics:

Personalized medicine is at times deemed a future phenomenon; however, it is already making a marked difference in patient treatment, especially in various cancers. Molecular or genetic testing is now available for patients with colon cancer, multiple myeloma, leukemia, prostate and breast cancers, hepatitis C, and cardiovascular diseases; a patient’s genetic profile can be used to predict whether they are likely to benefit from new drug treatments while minimizing adverse drug reactions. Recently, the “Institute for Personalized Cancer Therapy” was created at the MD Anderson Cancer Center specifically to implement personalized cancer therapy, improving patient outcomes and reducing treatment costs.

Personalized medicine may promise many medical innovations, but its implementation is associated with several public policy, social, and ethical challenges. Individuals may opt not to participate in genetic research because they feel it might breach their right to privacy and confidentiality. To tackle such challenges, the Genetic Information Nondiscrimination Act of 2008 was designed to shield individuals from genetic discrimination. Apart from this, other existing concerns are the ownership of genetic materials, medical record privacy, clinical trial ethics, and patients’ knowledge of the consequences of storing genetic materials and phenotypic data. These concerns must be addressed to the satisfaction of all stakeholders, especially patients, so that a common consensus is reached on how to bring pharmacogenomic applications into clinical practice.

Interdisciplinary research – Direct Imaging of a Single Molecule

Interdisciplinary research has immense potential. I have talked about one of the major discoveries of modern science based on interdisciplinary research in my previous blog, posted on 29th July 2013 (http://blog.manuscriptedit.com/2013/07/interdisciplinary-research-nobel-prize-chemistry-won-biologists/). Today, let us take another example, where a chemist and a physicist came together and presented us with direct images of the internal covalent bond structure of a single molecule using one of the most advanced imaging tools, the non-contact atomic force microscope (nc-AFM). Dr. Felix R. Fischer (http://www.cchem.berkeley.edu/frfgrp/), a young Assistant Professor of Chemistry at the University of California (UC), Berkeley, along with his collaborator Dr. Michael Crommie (http://www.physics.berkeley.edu/research/crommie/home), also a UC Berkeley Professor of Physics, captured images of the internal bond structure of an oligo-(phenylene-1,2-ethynylene) [Reactant 1] as it underwent cyclization to give different cyclic compounds (one of which is shown in the inset picture; http://newscenter.berkeley.edu/2013/05/30/scientists-capture-first-images-of-molecules-before-and-after-reaction/). Chemists generally determine the structures of molecules indirectly, using different spectroscopic techniques (NMR, IR, UV-vis, etc.). The molecular structures we see in textbooks result from this indirect way of structure determination, whether theoretical, experimental, or both. It is more like putting together various pieces to solve a puzzle. But now, with this groundbreaking work of two scientists from UC Berkeley, one can directly see, for the very first time in the history of science, how a single molecule undergoes transformation in a chemical reaction and how the atoms reorganize themselves under given conditions to produce another molecule. No more puzzle-solving for the next generation of chemists to determine molecular structure.

How interdisciplinary research made it possible:

Well, it was not an easy task for the scientists to come up with these spectacular molecular images. Imaging techniques such as scanning tunneling microscopy (STM) and transmission electron microscopy (TEM) have their limitations and are often destructive to organic molecular structures. An advanced technique like nc-AFM, in which a single carbon monoxide molecule sits on the tip (probe), enhances the spatial resolution of the microscope, and the method is also non-destructive. The thermal cyclization of Reactant 1 was probed at the single-molecule level by STM and nc-AFM on an atomically clean silver surface, Ag(001), under ultra-high vacuum. Before probing, the reaction surface and the molecules were chilled to liquid helium temperature, about 4 K (-269 °C). The researchers first located the surface molecules by STM and then performed further fine-tuning with nc-AFM, and the result is what we see in the inset picture. For cyclization, Reactant 1 was heated to 90 °C, and the products were then chilled and probed. Chilling after heating did not alter the structure of the products. The mechanism of thermal cyclization was also clearly understood, and the mechanistic pathway was in agreement with theoretical calculations. From the blurred images of STM, Dr. Fischer and Dr. Crommie, along with their coworkers, presented us with crystal-clear molecular images showing visible internal bond structure. This groundbreaking work demonstrates the potential of nc-AFM and unveils the secrets of surface-bound chemical reactions, which will definitely have a huge impact on the oil and chemical industries, where heterogeneous catalysis is widely used. This technique will also help in creating customized nanostructures for use in electronic devices.

Again, this path-breaking work was possible because of collaborative research between chemists and physicists. Hence, interdisciplinary research has endless potential.

References

1. de Oteyza DG, Gorman P, Chen Y-C, Wickenburg S, Riss A, Mowbray DJ, Etkin G, Pedramrazi Z, Tsai H-Z, Rubio A, Crommie MF, Fischer FR. Direct imaging of covalent bond structure in single-molecule chemical reactions. Science (2013); 340: 1434-1437.