How to Get A Career in Bioinformatics

Bioinformatics is an interdisciplinary field at the intersection of biology, computer science, and statistics: it applies computational and statistical techniques to biological problems. Working in bioinformatics involves using software to analyze DNA or protein data from different organisms to understand their functions. As such, bioinformatics professionals can find jobs in a variety of industries, all of which involve using computing technology and statistical analysis to study biological information, not just at the molecular level but also at the genome level and even at the social network level. This article provides information about how you can get a career in bioinformatics.

What Does a Bioinformatician Do?

A bioinformatician is a scientist who specializes in analyzing biological data. Bioinformaticians process large amounts of data to help scientists understand biological systems and how they may be affected by outside forces. A bioinformatician works with geneticists, statisticians, and other scientists to create and apply computational tools that can be used to analyze the vast amounts of DNA and protein data researchers generate. The goal of the work performed by bioinformaticians is to better understand how organisms function and to develop ways to treat and prevent diseases.
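As a small taste of this kind of work, here is a minimal Python sketch that computes the GC content of a DNA sequence, a routine first statistic in many genomic analyses. The sequence fragment is made up for illustration, not real data:

```python
def gc_content(seq: str) -> float:
    """Fraction of bases in a DNA sequence that are G or C."""
    seq = seq.upper()
    gc = sum(1 for base in seq if base in "GC")
    return gc / len(seq)

# Hypothetical example fragment (invented, not a real gene)
fragment = "ATGCGCGTATTACGCGG"
print(f"GC content: {gc_content(fragment):.2%}")
```

Real analyses run calculations like this across millions of sequences, which is why the computing and statistics background matters as much as the biology.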

What Are the Education Requirements for a Bioinformatician?

In terms of educational background, a bioinformatician typically has an undergraduate degree in an area like biology, computer science, statistics, or mathematics. A graduate degree in bioinformatics is also common for bioinformaticians who wish to become researchers and professors. Many bioinformaticians enter the workforce with a master's degree in bioinformatics, which usually takes one to two years to complete. Admission requirements for a master's program vary by school, but usually include a bachelor's degree in a related field, a minimum GPA, and letters of recommendation.

How to Get a Job in Bioinformatics

There are a few important steps you can take to find a job in bioinformatics. First, research different job opportunities to understand the types of positions available. Then, create a solid resume and cover letter that highlight your skills and experience. Next, apply for jobs online through job boards and company websites or attend in-person job fairs and networking events. Finally, be patient and keep searching until you find a job that is the right fit. Bioinformaticians work in a wide variety of industries, so there are many opportunities to find a job in bioinformatics. Some industries with growing bioinformatics needs include healthcare, pharmaceuticals, genetics research, and agriculture.

3 Steps to Become a Bioinformatician

If you're interested in a career in bioinformatics, there are a few steps you can take to get started. First, get an education. A bachelor's degree in a related field is usually required. Second, gain experience. You can start by volunteering or doing an internship. Third, build a strong network of professional contacts. This can help you find job opportunities.


Bioinformatics is a growing field, and there are many opportunities for those who wish to work in it. A bioinformatician uses both computer science and statistics to analyze biological data and DNA. If you're interested in this field, start by getting an education in biology, computer science, statistics, or another related field. You can then gain experience by volunteering or completing an internship. Finally, build a strong network of professional contacts to find job opportunities.

Uncategorized Editor Tue, 18 Oct 2022 06:24:07 -0400
The Rise of Chinese Biopharmaceuticals

It's not uncommon these days to see new drugs developed through collaboration between big international pharmaceutical companies and Chinese venture companies. "A decade ago, China's biopharmaceutical industry was modest, but it's hard to ignore now," said a VC fund partner.

China's role in the global pharmaceutical market has changed: the long-standing supplier of cheap pharmaceutical raw materials and generic drugs has gradually become a major source of important new drugs. According to data released by the National Institutes of Health, China has become the second-largest country for biotechnology clinical trials.

There's no doubt that large foreign pharmaceutical companies have noticed China's fast-growing biopharmaceutical development. Merck, for example, has been paying more attention to the Chinese market, setting up a dedicated R&D center in Shanghai in 2015. Seeking its own breakthrough in pharmaceuticals, Johnson & Johnson opened a similar R&D center in Shanghai in 2014. Over the past two years, U.S. pharmaceutical companies such as Lilly, Merck, and Incyte have also invested heavily in China's pharmaceutical companies.

Of course, China's transformation of its pharmaceutical industry will not be complete overnight. There are still obstacles. For example, the Internet in China is not completely open to the rest of the world: the so-called Great Firewall blocks some popular international websites, including some sources scientists need for their research. Many Chinese scientists have to use tools such as VPNs to bypass the blocking. However, many believe that once the direction of development has been determined, completing the transformation is only a matter of time.

China is a promising market. To promote the development of the local pharmaceutical industry, Beijing has not only invested in China's pharmaceutical companies but also introduced a variety of incentives: attracting overseas talent to return home, investing billions of dollars to build science and technology parks that bring together biotech start-ups, and speeding up the approval of new biological drugs.

Most Chinese start-ups simply replicate or imitate existing biopharmaceuticals at the starting stage, but some companies have taken on the challenge of developing drugs that have never been clinically tested.

After successfully extracting substances from Chinese hamster ovary cells and developing a new drug from them, Cinda Biological has begun clinical trials of the drug; Eli Lilly is also preparing to submit clinical trial applications. Once the drug is approved, Eli Lilly will sell it to the rest of the world, except in China, where the domestic market is dominated by Cinda.

The rise of China's biopharmaceutical industry has attracted many venture capital investors. According to ChinaBio, investment in the life sciences sector hit a new high of $5.3 billion over the past year, nearly 10 times the level of five years ago.

In 2008, Eli Lilly set up an Asian venture capital fund; notably, almost all of the fund ($500 million) was invested in China's biotechnology start-ups.

Uncategorized Editor Sat, 29 Apr 2017 02:00:33 -0400
Here Are A Few Things You Need To Know About Protein Arrays

Researchers could not have reached the milestones they have achieved over time in molecular biology without devising methods for the protein array. The reason is that, unlike DNA, proteins are unstable: they function in a quite narrow temperature range, and they cannot be amplified in vitro the way DNA is.

Protein arrays have given researchers a platform to achieve a deeper comprehension of the activities and interactions of proteins, and so to sort out valuable details regarding their functions. Protein arrays have an edge over their predecessor techniques because of the sophistication they bring by dealing with many proteins at a time.

Their sensitivity, speed, cost-effectiveness, and automated procedure make them the most suitable choice for dealing with proteins. These arrays are also significant because mRNA levels do not determine the extent of protein expression, so they give no idea of how a protein will influence the biochemical makeup of the cell - at least, not the way proteins themselves do.


The basic principle involves the interaction of biomolecules [in this case, proteins] with an immobilized probe. The probe is immobilized onto a surface [typically a glass or silicon slide] in such a way that it won't interfere with the protein's conformation. The protein is also handled using chemicals that provide it with a hydrophilic environment and keep its binding sites exposed. The interaction/hybridization between probe and protein is then further processed using high-throughput technologies and bioinformatics tools to understand the interaction.


Analytical microarrays, also referred to as antibody arrays, utilize antibodies, aptamers, or affibodies, which bind specifically to the target proteins. The interaction between antibody and antigen is then analyzed using a detection system, most frequently fluorescent labeling. Because these labels can interfere during hybridization, more sophisticated detection mechanisms have been developed, including carbon nanowire and carbon nanotube systems.
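A toy sketch of what happens downstream of detection: raw fluorescent intensities are background-subtracted and normalized against a control spot before any biological conclusion is drawn. The spot names and intensity values below are invented for illustration; real array software handles thousands of spots with replicate statistics:

```python
# Invented spot intensities for a hypothetical antibody array
spots = {
    "control": {"signal": 5000.0, "background": 200.0},
    "target_A": {"signal": 12000.0, "background": 300.0},
    "target_B": {"signal": 2600.0, "background": 250.0},
}

def net_intensity(spot: dict) -> float:
    """Background-subtracted spot intensity, floored at zero."""
    return max(spot["signal"] - spot["background"], 0.0)

# Express each target relative to the control spot
control = net_intensity(spots["control"])
for name, spot in spots.items():
    if name == "control":
        continue
    ratio = net_intensity(spot) / control
    print(f"{name}: relative intensity {ratio:.2f}")
```

Normalizing to a control spot this way lets intensities be compared across arrays despite differences in labeling efficiency or exposure.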

Functional protein microarrays provide an even more robust platform to study protein interactions with biomolecules such as other proteins, DNA, RNA, phospholipids, and drugs. Functional microarrays present full-length proteins and their domains rather than small peptides.


The diverse applications of protein arrays in fields such as biochemistry, molecular biology, pharmaceuticals, and biomedical procedures signify their importance. Protein arrays also have applications in diagnostics, proteomics, functional analysis of proteins, antibody characterization, and the development of treatments.

Given the role of protein arrays in handling large numbers of proteins at a time, it is also critical to devise even more sophisticated and robust statistical software and bioinformatics tools to handle the large amount of output data. Antibody arrays could also work more efficiently by adopting approaches that expedite antibody production with enhanced specificity. Although there has been incessant progress in the development of protein array procedures over the past decade, the field still needs to keep evolving in order to get rid of whatever drawbacks remain.

Visit our website for more detailed information on Antibody Array, its types and the latest advancements pertaining to its applications in various biological fields.

Uncategorized Editor Sat, 26 Dec 2015 12:05:00 -0500
Bioinformatics Market Leaders in the Hot Pursuit of NGS

Bioinformatics is a rapidly developing branch of biology and is highly interdisciplinary, using practices and ideas from informatics, statistics, mathematics, chemistry, biochemistry, physics, and linguistics. It has many practical uses in different areas of biology and medicine. Roughly, bioinformatics describes any use of computers to handle biological information. In practice, most people use a narrower definition: for them, bioinformatics is a synonym for "computational molecular biology" - the use of computers to characterize the molecular components of living things.

Rising government spending on biotechnology, increased R&D activity, growth in proteomics and genomics, a rise in drug discovery and development research, and the upsurge of the biotechnology sector in emerging economies are the major driving factors of the bioinformatics market.

Currently, a majority of companies are focusing their attention on proteomics and genomics, both developing these capabilities themselves and looking to acquire companies that are constantly innovating in proteomics.

Roche, for instance, acquired Bina Technologies in a bid to enter the hot, in-demand genomics market. WuXi PharmaTech, a Chinese organization, has acquired NextCODE Health, a US genomic analysis and bioinformatics company - another indication of the great potential of the bioinformatics market.

Recently, one of the big giants of the bioinformatics industry, Illumina, entered into an agreement with SCIEX to expand their genomics and proteomics work, developing novel ways to analyze proteomics and genomics data as part of the OneOmics project. Currently, next-generation sequencing is in high demand in the bioinformatics sector. Illumina recently completed the acquisition of the bioinformatics company NextBio, which it will integrate into its recently launched Enterprise Informatics unit. Illumina also entered an agreement with IVD giant bioMerieux to provide out-of-the-box genomic pathogen solutions. Illumina has been very active when it comes to bioinformatics and next-generation sequencing (NGS), with agreements with Merck Serono and Lockheed Martin over NGS. This is an indication of the great potential of NGS in the bioinformatics sector.

All this activity points to the fact that genomics and proteomics, along with next-generation sequencing, are filled with immense potential, will dominate the global bioinformatics market in the coming years, and are hot prospects for major companies in the biotechnology sector.

I am a research analyst working with Mordor Intelligence. We are a Market Research Consulting Firm that specializes in providing various kinds of Industry Research Reports, Global Strategic Business Reports, Competitive Landscape Analysis, etc.

To gain more insights into this article please do visit us.

Uncategorized Editor Sat, 06 Jun 2015 17:10:03 -0400
How to Prepare for Whole Genome Sequencing

Genomics is a field of technology that changes at an alarmingly rapid rate. It is difficult to find examples of other fields that have undergone change at similar rates, but comparisons could include the evolution of computer technology over the past several decades or the rapid growth of Internet startups in the 90s. Unlike these examples, the growth of genome sequencing technology is limited to a specific technological sector, or at least, it has been until now.

What has changed? 
Sequencing technology is now passing a critical point, at which the cost of whole genome sequencing is actually lower than the cost of running a conventional panel of genetic tests. Although it is difficult to calculate an exact figure, it probably lies in the range of $1K-$10K: Illumina's HiSeq platform, Ion Torrent's Proton, and Complete Genomics all promise genome sequencing in this range. As we pass this point, genome sequencing moves from the realm of research technology to viable clinical diagnostic test.
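The crossover can be made concrete with some back-of-envelope arithmetic. The dollar figures below are purely illustrative, not quotes from any vendor:

```python
# Illustrative break-even between whole-genome sequencing and a panel
# of individual genetic tests. Both prices are assumed, not real quotes.
wgs_cost = 5000           # hypothetical cost of one whole-genome sequence
single_test_cost = 400    # hypothetical cost of one conventional genetic test

tests_at_breakeven = wgs_cost / single_test_cost
print(f"WGS pays for itself once a patient needs more than "
      f"{tests_at_breakeven:.1f} individual tests")
```

Under these assumed prices, any patient likely to need a dozen or more targeted tests over a lifetime is already cheaper to sequence outright, which is the economic shift the article describes.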

The effects of this change can be observed in the growing widespread interest in sequencing technology. While technologies in this area have revolutionized many areas of research over the past decade, their use in clinical settings has profound implications for this industry. Research applications can only provide a limited source of profits. Moving into clinical applications provides access to a much larger amount of capital, both because of the larger market and higher profit margins.

Where will the new opportunities be? 
The clinical introduction of genome sequencing technology will change many aspects of healthcare in the next few years. Genome sequencing could eventually become a standard part of a patient's healthcare record, in much the same way as blood type and medication allergies are used today. However, we have a lot of ground to cover before we get to this point. For entrepreneurs in this field, that equates to a range of exciting opportunities.

1. Bioinformatics

While interpretation of genomic data has been an important area of research, the growing use of this data in clinical settings will generate great demand for unambiguous interpretation of results. Physicians want unambiguous answers to questions such as disease predisposition, and they would like a reputable party to assume responsibility for making this judgement. Recently, an increasing number of companies have entered the area of genomic data interpretation, many of them small startups such as Genformatic, a bioinformatics company based in Austin. Larger companies have also begun to focus on providing these services. As the industry evolves, more companies will likely enter this realm, and it will be interesting to observe how things develop.

2. Sample Processing 
While anyone with experience in molecular biology possesses the necessary skills to prepare and run sequencing libraries, a lot can go wrong in this process. Even in pure research applications, sample preparation is an increasingly common service provided by a variety of centers, as well as boutique services such as Genome Organizer's ChIP-Seq service. It's not that researchers can't perform these tasks, but when they are available for a fee, researchers see little benefit in running these services themselves. As genomic sequencing moves into the clinical realm, demand for such services can be expected to increase. Few hospitals currently have the capability to perform even basic sample preparation, and this represents a new area of opportunity in the field.

3. Clinical Services 
One of the greatest areas of uncertainty today is how samples will get from physicians' offices to sequencing centers, and who will inform patients of their results. Though medical school curricula may one day include the interpretation of genomic data, we are far from that point today. Currently, even the administration and interpretation of genetic tests is generally left to individuals in the field of genetic counseling, in collaboration with statisticians and specialists. Though a range of other professionals will likely take over this responsibility in the future, in the near term the growth will take place within this field.

For additional information, visit

Uncategorized Editor Mon, 14 Apr 2014 16:06:41 -0400
All About Next Generation Sequencing

The field of genetics has witnessed revolutionary new technologies and trends in the last decade. One groundbreaking breakthrough is next generation sequencing. This technology has equipped researchers to determine the DNA sequences of entire bacterial genomes. It has also led to the evolution of metagenomics, the study of the genetic makeup of an entire community of organisms: small samples of microbes are extracted and their combined DNA is studied using next generation sequencing.

Arguably the most significant technological advance in the biotechnology sphere, this technique allows experts to sequence several genomes simultaneously on a single instrument in less than two weeks. Targeted methods for DNA enrichment allow higher genome throughput at a much lower cost per sample. The technique is now being applied in the cancer field to study the genetic aspects of the disease. In the span of only a few years, next generation sequencing has matured to the point where it is run routinely by the world's leading laboratories for diagnostic purposes. The main attractions of the technique are its sensitivity, its speed, and the reduced expenditure per sample.

The next generation sequencing process

Experimental samples can be collected from any biotic or abiotic ecosystem, such as soil or water. They can also be acquired from extreme environments such as the Arctic or hot springs. These samples contain millions of cohabiting microbes and fungi. The sample is then analysed, and sequencing proceeds as per the desired needs.

Next generation sequencing fundamentally deals with the ability to sequence massive numbers of DNA templates in parallel. This carries the evolution from first-generation 'Sanger' sequencing to the second and third generations. To achieve this, second generation platforms amplify DNA by cloning it on a support matrix, which is followed by cyclic sequencing. There are numerous benefits to this procedure.

Some of these are:

  • Reduced costs
  • Removal of PCR bias
  • High speed
  • Sensitivity

However, it should be noted that the third generation of the technique is still maturing; it is likely to rival second generation techniques in a few years and enter the mainstream medical arena.

The motivators of next generation sequencing

  • Identification of species

One of the main purposes of next generation sequencing is to identify species. The technique has removed the limits of laboratory culture by sequencing bacterial communities as a whole. High-end software developments allow the sequenced data to be resolved into individual bacterial genomes and the species identified.

  • Provision of simplified understanding of the ecosystem of samples

Metagenomic study has enabled scientists to gain better insights into the environment sustaining the bacteria. This gives a better understanding of the physiology and structure of the species breeding there. It helps in deriving better results and in-depth analysis of the mutual relationships that aid the co-existence of the species.
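The species-identification step described above can be illustrated with a toy sketch: assign each sequencing read to whichever reference genome shares the most k-mers (short subsequences of length k) with it. The reference sequences and read below are invented miniatures, not real genomes, and real classifiers use far larger k-mer indexes and more sophisticated statistics:

```python
def kmers(seq: str, k: int = 4) -> set:
    """All length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

# Invented miniature 'reference genomes'
references = {
    "species_X": "ATGCGTACGTTAGC",
    "species_Y": "GGGCCCTTTAAACG",
}
ref_kmers = {name: kmers(seq) for name, seq in references.items()}

def classify(read: str) -> str:
    """Assign a read to the reference sharing the most k-mers with it."""
    scores = {name: len(kmers(read) & kset) for name, kset in ref_kmers.items()}
    return max(scores, key=scores.get)

print(classify("GCGTACGT"))  # this read was drawn from species_X's sequence
```

Scaled up to millions of reads and thousands of reference genomes, this shared-subsequence idea is what lets software resolve a mixed community sample into its constituent species.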

Thus, next generation sequencing helps in understanding the existence of biospheres. This, in turn, helps deduce important information in the field of medicinal biology. With the help of this technique, scientists are able to develop effective medicines and vaccines against pathogens. Over time, the technique may even prove valuable in other bioinformatics fields.

Uncategorized Editor Sun, 10 Nov 2013 02:03:00 -0500
Does Big Data Sanctify False Conclusions?

Midway across the Atlantic, KLM, JFK to AMS (Amsterdam Airport Schiphol), seated in coach on a brand new 747, summer 1972, on an IASA Student ticket; altitude, 30,000 feet.

In-flight movies had recently been introduced, and on one leg the film was spilling off the take-up reel, out the housing opening, and falling on the passengers seated below the projector. Mid-flight entertainment went from a forgettable movie to live entertainment as the Flight Attendant wrestled with the film as more and more came spilling out, covering her in 35mm seaweed.

Later on this flight, they showed Charlie Chaplin's controversial film, Monsieur Verdoux - a flop in the US that did well in Europe, and this was, after all, KLM and not an American airline, so the passengers liked it. An otherwise OK film, but I still remember Chaplin's final speech about how small numbers can be scrutinized and comprehended, while massive numbers take on their own aura of sanctity. Is this lovely notion time-stamped to the film's post-WWII original release?

Paul Krugman, in his recent NY Times OpEd columns, once again mentions the recent implosion of the 'Austerity leads to Prosperity' school of economic thought, based on a now infamous Reinhart-Rogoff (R-R, for short) 'Excel error'. Why was the 90% Debt-to-GDP threshold accepted as the point of no return when real-world observations proved austerity didn't work for Ireland or anywhere else that tried it? It was not just the Excel formula, in my opinion; it was the supposed sanctity of the 900-page book of mind-numbing data, charts, and statistics used to justify the austerity argument to begin with, which, until just recently, had never been questioned or validated. How many of us have been in strategic decision meetings where GB after GB of data is presented, and all we need to do is get the top-line summary, decide, and get on with execution? How many of us have seen project plans with over a thousand tasks, many of which are rolled-up plans in themselves, and have just accepted that the underlying assumptions were right and need not be tested?

Sales forecasting is certainly an area where big numbers can sanctify. I was in the room as the national sales force of a struggling software company forecast the upcoming quarter. Being a NASDAQ-listed company, financials and Street whispers mattered, which is why I attended. Like many sales organizations, they used the weighted method, where a sale of $1,000,000 in revenue with a 30% probability of closing in the upcoming quarter was listed as $300,000 'earned'. Trying to please the Finance-oriented senior leadership, they listed every encounter, be it in a meeting or on a subway, as a potential opportunity. I told them they were "kiting forecasts", which was unacceptable for obvious reasons, but they continued, producing a forecast with several hundred rows when 100 would have sufficed. The sanctity of numbers showed they were out there, beating the bushes. If senior leadership had had a deeper understanding of the end-to-end sales process, and understood each large opportunity as a communications and agreement process taking a semi-repeatable period of time (similar to Reference Class Forecasting), and not just as a set of numbers, a radically reduced and more accurate forecast would not have annoyed the Street, even if missed by a small amount. Then again, this was a highly unstable company, and many in senior leadership were doing a Cleopatra - Queen of denial - to keep their jobs for another 90 days. In the end, reality won, and I wish them all well wherever they wound up.
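The weighted method described above is simple enough to sketch in a few lines. The deal names and figures below are invented, and this is exactly the mechanical simplicity that makes an inflated pipeline look respectable:

```python
# Weighted pipeline forecast: revenue x probability of closing this quarter.
# Deals and figures are hypothetical.
pipeline = [
    ("Deal A", 1_000_000, 0.30),
    ("Deal B", 250_000, 0.80),
    ("Deal C", 500_000, 0.10),
]

forecast = sum(revenue * prob for _, revenue, prob in pipeline)
print(f"Weighted forecast: ${forecast:,.0f}")
```

Note that the arithmetic says nothing about whether any probability is honest; padding the list with long-shot "opportunities" inflates the total while looking rigorous, which is the kiting problem described above.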

Matt Taibbi, in the May Rolling Stone magazine, writes how the price of gold is set not by a massive data trove run through a model, but by a conference call between 5 banks. Silver is similar, with 3 banks setting the price. Jet fuel, diesel, electric power, coal, etc. are all set by small groups, not gargantuan datasets and models. Libor, the interest rate underlying the world's financial system, is set each morning by 18 banks, each bank submitting its interest rates across 10 currencies and 15 time periods. Submissions are taken for granted; no validation is performed. By averaging out these 2700 data points, Libor is set and the world reacts. An academic can spend a lifetime modeling empirical observations via data, and the bottom line is they would be better off understanding the qualitative reasons behind these 2700 elements.
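One methodological aside: the Libor fix was not a plain average. The highest and lowest bank submissions for each rate were discarded before averaging. A minimal sketch of that kind of trimmed mean, with invented submission values:

```python
# Trimmed mean of one currency/tenor's bank submissions: drop the top
# and bottom quartile, average the rest. Submission values are invented.
submissions = [0.31, 0.32, 0.30, 0.33, 0.29, 0.35, 0.31, 0.28,
               0.32, 0.30, 0.34, 0.31, 0.30, 0.33, 0.29, 0.32]

def trimmed_mean(values, trim_fraction=0.25):
    """Average after discarding the top and bottom trim_fraction of values."""
    n_trim = int(len(values) * trim_fraction)
    kept = sorted(values)[n_trim:len(values) - n_trim]
    return sum(kept) / len(kept)

print(f"Fix: {trimmed_mean(submissions):.4f}")
```

Trimming blunts a single outlier, but as the Libor scandal showed, it is no defense when several submitters shade their numbers in the same direction - which is the article's point about unvalidated inputs.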

Many companies now have terabytes of data in different databases, and Big Data is today's must-have hyped technology. Why the hype? Big Data is easy for most people to understand and feel current about - the same people who wear loud shirts at idea-creation (and not code-generation) offsite 'Hackathons', which used to be called Ideation sessions, or Brainstorming, depending on when you were born. Consulting companies, no longer able to ride the 200+ person-per-gig ERP wave, love this kind of engagement, and so they talk it up. But as we have seen in the R-R austerity situation, does more data always mean more accurate? Many of the junior staffers who focus on data presentation in large companies lack the experience-based deep insights required to verify that the information and the conclusions are solid. It's easier to show you worked hard, not necessarily smart, by maxing out Excel's 1M+ row by 16K column limit, than it is to get a deep understanding of what the numbers mean, whether they are correctly stated, and whether we actually need that level of data. And what about the outliers - do we dismiss them as just signal noise?

Big Data implies massive centralized data and BI functions, and as we all know, anything centralized takes on administrative overhead and a calcified change structure, which could actually make the data stale and, therefore, any resulting analysis subject to 'winning the last war' syndrome. The Open Knowledge Foundation, last week, posted to their blog:

Just as we now find it ludicrous to talk of "big software" - as if size in itself were a measure of value - we should, and will one day, find it equally odd to talk of "big data". Size in itself doesn't matter - what matters is having the data, of whatever size, that helps us solve a problem or address the question we have.

Their prognosis is:

... and when we want to scale up the way to do that is through componentized small data: by creating and integrating small data "packages" not building big data monoliths, by partitioning problems in a way that works across people and organizations, not through creating massive centralized silos.

This next decade belongs to distributed models not centralized ones, to collaboration not control, and to small data not big data.

Is this to say Big Data is never big? Bioinformatics puts it in perspective. The human genome sequence is 3 billion base pairs and can be stored in about ¾ GB. That's it. Here, Big Data undoubtedly means Big Meaning. What we need is to stop treating Big Data as gathering, and instead think of Big Data as a continuous conversation describing a changing world. A centralized Big Data function should be structured for agile governance, empowering operating and planning units to get accurate input for their market/function-specific models, as they are closest to these conversations.
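The storage figure is easy to verify: with four possible bases (A, C, G, T), each base needs only 2 bits. A quick back-of-envelope check:

```python
# Back-of-envelope: a 3-billion-base-pair genome at 2 bits per base.
base_pairs = 3_000_000_000
bits_per_base = 2  # four symbols (A/C/G/T) fit in 2 bits

size_gb = base_pairs * bits_per_base / 8 / 1e9  # bits -> bytes -> GB
print(f"{size_gb:.2f} GB")  # 0.75 GB
```

Real genome files (FASTA, BAM) are larger because they store quality scores, read redundancy, and metadata, but the underlying sequence really is that compact.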

Just like networking protocols, organizations should focus on context - common definitions and formats - so a 'Closed Sale' means the same thing across all business lines, and a customer relationship is defined with a common hierarchy and definitions. This does not imply over-simplification; it's usually quite complex, but the result is a lingua franca, where apples=apples. I worked on a Finance Transformation initiative where we discovered that this multi-divisional, close to 100-year-old company had no common financial language. The financials were consolidated through some powerful computing, but did the results mean anything? We took a step back and developed their first common language. Here, too, the key is not having a newly minted MBA collect data; it's the contextual understanding that makes the data purposeful.

If you spend the time deeply understanding core underlying issues and causes (qualitative), and not just accumulating and presenting data (quantitative), less will be more. Predictive models, harder to set up than combining multiple structured and unstructured data sets (since a model implies understanding, not mechanics), will most likely produce better results than unending graphs and charts. This requires the data to be scrutinized by experienced employees who can use that most powerful organic computer to go beyond the colorful graphics. By keeping data decentralized, with a common set of definitions, we can best place data in the hands of those who most need and understand it, while retaining agility. Sanctity comes not from size, but from meaning, context, currency, and availability.

By the way, last week was Big Data Week. I wonder how many people celebrated and how they were broken out by age, location, height, weight and specific gravity.

Richard Eichen is the Founder and Managing Principal of Return on Efficiency, LLC, and is one of their senior turnaround leaders/CROs, Program Rescue and Interim Executives with over 25 years' experience reshaping companies, Operations, IT/Systems Integration and key initiatives. Return on Efficiency, LLC specializes in those companies and initiatives where technology is the primary means of service delivery and revenue creation. He can be reached at [email protected], and followed on Twitter, @RDEgrowroe.

Uncategorized Editor Tue, 23 Apr 2013 02:01:05 -0400
Masters Degree in Bioinformatics and Why You Should Get One

Bioinformatics is the new sunshine discipline that applies computer science methods, including mathematics and statistics, to biological data such as DNA sequences and protein structures. This data is converted into information and entered into databases, which are then analyzed, merged, researched, better understood, and used for enhancing human life (by creating new and improved drugs, for example), agricultural practice, and the environment.

A bioinformatics graduate has knowledge of modern computational tools that can search and analyze voluminous biological data and help researchers zero in on solutions. Imagine that you have all the biological data available in the world, and you also have a very advanced search engine to research, merge, analyze, slice, dice, and compare this data - that's bioinformatics for you. It is just what the world was waiting for, and that is why bioinformatics graduates are in great demand.

A Master's degree in Bioinformatics gives students dual expertise in information technology and biology. Applicants for the course must therefore have a background in biology or in computer science, mathematics, and statistics. The course typically takes two years to complete.

Benefits of a Master's Degree in Bioinformatics:

A Master's in Bioinformatics will put you well ahead because, when it comes to researching and analyzing biological data, only the surface has been scratched - the work has just begun, and you would be starting at the very beginning. Bioinformatics ultimately improves the standard of life, and so it is a necessary and vital discipline whose demand will keep growing and never fade away - so long as there are humans on this planet.

It's not only health that bioinformatics impacts - it also has a profound impact on agriculture and the environment. For example, it is used in agriculture to grow high-yield crops that require low maintenance, and it is used to better the environment by helping to identify waste-eliminating bacteria. So, when you obtain a bioinformatics degree, you end up helping humans lead healthy lives, consume adequate and better nutrition, and live in clean and green surroundings. That's like being a semi-Almighty!

Career prospects:

The career prospects are not just good; they are improving at a fast pace. There are jobs for bioinformatics degree holders in facilities that work on biotechnology, biomedical sciences, and pharmaceuticals, as well as in research institutes, hospitals, and industry. You can choose from a variety of career options: bioinformatician, geneticist, computational biologist, biomedical computer scientist, and more.

Graduates of bioinformatics earn very good salaries. But it's not just the salary; the experience and knowledge you accrue also make a big difference. There are plenty of jobs waiting to be picked up, and the choice is yours.

So, go on and enroll for a Master's degree in Bioinformatics and do much more than decode the human genome - try to reinvent it. Before you do, I advise doing a little more research on the internet by visiting websites that cover the career in more detail.

Note: You are free to reprint or republish this article. The only condition is that the links should be clickable.

Copyright Kenneth Echie. Kenneth is a Writer, Expert Author, and Publisher. He currently writes for Masters Degree. Get a free scholarship report and find Masters Degree Bioinformatics Schools by visiting.

Uncategorized Editor Thu, 14 Jul 2011 02:11:05 -0400
New Technologies Reveal the Porcine MicroRNAome

The development of next generation technologies is enabling the complete mapping and further understanding of microRNAs (miRNAs). Deep sequencing on next-generation sequencing (NGS) platforms provides complete coverage of the small transcriptome, and new bioinformatics tools such as ACGT101-miR enable an exhaustive analysis of the sequencing data. The application of these new technologies, together with the availability of a nearly complete pig genome, has provided the basis for further defining the molecular and cellular function of these small regulatory molecules in the pig.

The domestic pig (Sus scrofa) is an important species from various standpoints. First, it is a major protein source in the human diet worldwide. Additionally, its anatomy, physiology, and genome size are very similar to those of humans, and there has been increasing molecular genetic evidence of the comparability of humans and pigs, making the pig a suitable model system for human biology. Pigs are now model animals for biomedical research on cardiovascular, immunological, cancer, diabetes, and a range of other diseases. Finally, the pig has become an important source of organs and tissue for transplantation into humans.

Recently, a worldwide collaboration of groups, from Houston, Texas to Sichuan, China, set out to establish a porcine miRNA atlas (microRNAome). The findings they report lay the groundwork for a greater understanding of the species through further mapping of tissue- and stage-specific miRNAs [1].

Prior to this study, miRBase [2], the primary public repository for miRNA sequence data, listed only 77 pig pre-miRs and 73 unique mature pig miRNAs - this out of a total of 10,883 database entries encompassing over 100 species. The number of pre-miRs for pig was significantly lower than for other species with similar-sized genomes (such as human, with 721 entries), suggesting the existence of far more pig miRNAs.

miRNA Analysis Bioinformatic Tool, ACGT101-miR

The researchers prepared 10 libraries of mixed tissue types for deep sequencing from 10 developmental stages representing the major morphological and physiological changes in pig growth from insemination through peak commercial value. The sequencing yielded approximately 10M reads per library. The ACGT101-miR bioinformatics package was employed to filter the reads for read quality, sequence pattern, copy number, read length, and other known RNA classes, and to map the quality reads to the pig genome and mammalian miRNA sequence databases. Approximately 2.5M reads per library were mapped.
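
The filtering steps described above can be sketched in miniature. This is a hypothetical illustration of that kind of small-RNA pre-filtering, not ACGT101-miR's actual logic; the thresholds and read data are invented:

```python
from collections import Counter

def filter_reads(reads, min_len=18, max_len=26, min_qual=20, min_copies=2):
    """Keep reads that pass length, base-call quality, and copy-number filters."""
    copies = Counter(seq for seq, _ in reads)
    kept = []
    for seq, mean_qual in reads:
        if not (min_len <= len(seq) <= max_len):
            continue                   # outside the typical mature-miRNA length range
        if mean_qual < min_qual:
            continue                   # poor base-call quality
        if copies[seq] < min_copies:
            continue                   # singleton, likely a sequencing error
        kept.append(seq)
    return kept

reads = [
    ("TGAGGTAGTAGGTTGTATAGTT", 35),      # 22 nt, seen twice: kept
    ("TGAGGTAGTAGGTTGTATAGTT", 33),
    ("ACGT", 40),                        # too short: dropped
    ("TGAGGTAGTAGGTTGTATAGTT" * 3, 38),  # too long: dropped
]
print(len(filter_reads(reads)))  # -> 2
```

Real pipelines also screen against rRNA, tRNA, and other known RNA classes before mapping, which this sketch omits.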

The ACGT101-miR bioinformatics tool provides:

  • Mapping of all quality reads to custom databases
  • Classification, alignment, length distribution & annotation of all mapped reads
  • Prediction of possible novel miRs
  • Detailed explanation of miRNA analysis results and their context

After mapping, the quality reads could be classified in one of four groups: reads that could be (1) mapped to known pig pre-miRNAs, (2) mapped to known other mammalian pre-miRNAs, (3) not mapped to any known pre-miRNAs, but mapped elsewhere in the pig genome or expressed sequence tags (ESTs) and predicted to form pre-miRNA hairpins, (4) not mapped to either pre-miRNAs or pig genome. Groups (1) and (2) were further divided into reads that also mapped to the pig genome or ESTs and reads that only mapped to the pre-miRNA.
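
As a toy illustration of this four-way classification (the reference sets below are invented stand-ins for real pre-miRNA databases and a pig genome index, and group 3's hairpin prediction is reduced to a pluggable check):

```python
def classify_read(read, pig_premirs, mammal_premirs, pig_genome, forms_hairpin):
    """Assign a quality read to one of the four groups described above."""
    if any(read in pre for pre in pig_premirs):
        return 1  # maps to a known pig pre-miRNA
    if any(read in pre for pre in mammal_premirs):
        return 2  # maps to a known pre-miRNA of another mammal
    if read in pig_genome and forms_hairpin(read):
        return 3  # novel: maps to pig genome/ESTs and predicted to form a hairpin
    return 4      # unmapped

pig_premirs = {"TGAGGTAGTAGGTTGTATAGTTAAA"}
mammal_premirs = {"TAGCTTATCAGACTGATGTTGAAA"}
pig_genome = "CCCCTTTGGGAAACCC"
always = lambda read: True  # placeholder hairpin predictor

print(classify_read("TGAGGTAGTAGG", pig_premirs, mammal_premirs, pig_genome, always))  # -> 1
print(classify_read("TTTGGGAAA", pig_premirs, mammal_premirs, pig_genome, always))     # -> 3
```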

Porcine miRNAome

Mining the significant amount of data generated in this study yielded:

  • 771 pig microRNAs
  • 391 conserved in mammals
  • 380 pig-specific

When combined with the existing miRBase entries, the results of this study have expanded the coverage of the pig microRNAome to:

  • 867 pig pre-miRNAs
  • 1004 mature miRNAs
  • 777 unique mature miRNAs
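
The reported tallies can be cross-checked directly: the 771 newly sequenced miRNAs split into the conserved and pig-specific subsets, and because multiple pre-miRNAs can yield the same mature sequence, unique mature miRNAs number fewer than total mature miRNAs:

```python
# Consistency check of the counts reported in this study.
new_mirnas = {"total": 771, "conserved_in_mammals": 391, "pig_specific": 380}
assert new_mirnas["conserved_in_mammals"] + new_mirnas["pig_specific"] == new_mirnas["total"]

expanded_coverage = {"pre_mirnas": 867, "mature_mirnas": 1004, "unique_mature": 777}
# Several pre-miRNAs can produce the same mature sequence, hence 777 < 1004.
assert expanded_coverage["unique_mature"] < expanded_coverage["mature_mirnas"]
print("tallies consistent")
```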

The sequencing results of the ten libraries that span a pre- and post-natal time period not only greatly enriched our view of the pig microRNAome, but also provided information on time-dependent variation of the miRNAome in sequence lengths, counts, composition, genomic location, and the relative expression of conserved versus pig-specific miRNAs. These results provide valuable sequence information about the porcine miRNAs that govern the functional activities of the miRNAome. Notably, a complete let-7 family of miRNAs was identified in this work, whereas the current miRBase v15 (April 2010) does not contain ssc-let-7b and ssc-let-7d. let-7 miRNAs are essential regulators of cell differentiation and have been linked to misregulated cell growth.

Expression Levels - A Global View

These new sequencing and bioinformatics technologies have enabled a much closer look at the pig microRNAome than was ever possible before. It is not surprising that the miRNAs found in high abundance had been mapped before, and that the less abundant miRNAs were novel. According to the authors, the previous inability to detect these low-abundance miRNAs was possibly due to their spatially, temporally, and physiologically transient expression or intermediate nature. Additionally, miRNA expression levels can provide clues to their function. The top 10 miRNAs account for ~50% of all reads, and the authors suggest that these highly expressed miRNAs may be involved in basic life functions and the life cycle. The difference in expression levels of certain miRNAs pre- and post-birth is consistent with their well-characterized functions. Two such differentially expressed miRNAs are the muscle-specific miR-1a-3p, which is important during embryonic development for muscle cell differentiation, and the liver-specific miR-122-5p, which is essential to support a developing blood supply.
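
The kind of summary behind the "top 10 miRNAs account for ~50% of all reads" observation can be sketched as follows; the read counts here are invented for illustration:

```python
def top_n_read_fraction(read_counts, n):
    """Fraction of total reads contributed by the n most abundant miRNAs."""
    counts = sorted(read_counts.values(), reverse=True)
    return sum(counts[:n]) / sum(counts)

# Toy expression profile (reads per miRNA), not real data from the study.
toy_counts = {"miR-1a-3p": 500, "miR-122-5p": 300, "miR-x": 100,
              "miR-y": 50, "miR-z": 30, "miR-w": 20}
print(top_n_read_fraction(toy_counts, n=2))  # -> 0.8
```

Here the two most abundant miRNAs contribute 800 of 1000 reads, illustrating how a handful of miRNAs can dominate a library.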


The presence of isomiR sequences increases the complexity of the functional roles of these regulatory molecules, and documenting isomiRs is an important step towards understanding the target-specific roles of these varying sequences. The analysis tool employed in this study, ACGT101-miR, enabled the authors to precisely identify the copy numbers and relative abundances of the isomiR sequences, information that is essential for a detailed delineation of their functions. They found diverse sequence variants and that isomiRs may be present at different relative levels at different stages of life.

Distribution in Chromosomes

The location of the pre-miRNAs within the genome can also provide clues to their function; for example, closely located pre-miRNAs are candidates for co-regulation of their expression and gene targeting. X-linked pre-miRNAs are highly expressed in testis, with targets related to cell-cycle processes during spermatogenesis. Most (623) of the pre-miRNAs detected in the study were mapped to coordinates in the known pig genome, and the clusters of pre-miRNAs were more dispersed than in human, mouse, or rat.

Sequence Conservation

Determination of sequence conservation revealed yet another clue to miRNA functions. The study found that conserved mammalian miRNAs were ubiquitously expressed at high levels in all stages of pig development. However, expression of pig specific miRNAs was much lower than the conserved sequences and appeared only at specific stages of development. The authors hypothesize that "the conserved miRNAs may be responsible for control of the basic cellular and developmental pathways common to most eukaryotes whereas the species-specific miRNAs may be involved in regulation of the lineage-specific pathways and functions".

The latest sequencing and bioinformatics technologies have opened a path for exploration and better understanding of pig molecular biology in growth and development, and for advanced pathway and function modeling in pig relevant to complex human diseases.

    1. Li M, Xia Y, Gu Y, Zhang K, Lang Q, Chen L, Guan J, Luo Z, Chen H, Li Y, Li Q, Li X, Jiang AA, Shuai S, Wang J, Zhu Q, Zhou X, Gao X, Li X. (2010) MicroRNAome of porcine pre- and postnatal development. PLoS One 5(7), e11541.
    2. Griffiths-Jones S, Grocock R, van Dongen S, Bateman A, Enright A. (2006) miRBase: microRNA sequences, targets and gene nomenclature. Nucleic Acids Res 34(Database issue), D140-44.

Chris Hebel
LC Sciences LLC
2575 West Bellfort Street
Suite 270
Houston, TX 77054

Uncategorized Editor Wed, 11 Aug 2010 13:17:47 -0400
DNA Barcoding in Plants & Its Potential Applications

Today, barcodes, conceived in 1948 by Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, PA, together with his friends Norman Woodland and Jordin Johanson, are used universally. They play a critical role in identification, relational information, and tracking. They are especially useful because scanners are relatively inexpensive, extremely accurate, and highly efficient at obtaining and transmitting information from barcodes and their databases.

Natural barcodes also exist and are well established in the animal kingdom. Such a barcode is a short strand of deoxyribonucleic acid (DNA), the genetic code unique to each living organism and some viruses, consisting of between 300 and 800 base pairs (bps) - Adenine (A) paired with Thymine (T), and Cytosine (C) paired with Guanine (G) - that can be represented by different colors. By sequencing the cytochrome oxidase 1 (CO1) gene, which is present in the mitochondrial DNA of every multi-cellular organism, scientists are able to readily determine phylogeny (identification) on a molecular level and store it in databases for easy retrieval. The approach was inspired by biologist Paul Hebert's futile efforts, dating back to the 1970s, to identify 2000 species of moth in Papua New Guinea (because of their taxonomic and morphological similarities), his "retreat to water fleas" (of which there are only 200 species), and his subsequent 2003 paper in which he described "the diversity of life as a 'harsh burden' to biologists" and suggested that "every species on Earth... be assigned a simple DNA bar code so it would be easy to tell them apart," as written in Scanning Life (National Geographic, May 2010). Per P.M. Hollingsworth, DNA bar-coding plants in biodiversity hot spots: Progress and outstanding questions (Heredity, 9 April 2008), "DNA bar-coding is now routinely used for organismal identification" in animals and "has contributed to the discovery of new species."
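
A minimal sketch of the A-T / C-G base pairing described above: each barcode strand determines a unique reverse complement, which is what makes a short DNA sequence a reliable, machine-comparable identifier.

```python
# Watson-Crick pairing: A binds T, C binds G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Return the reverse complement of a DNA sequence."""
    return "".join(PAIR[base] for base in reversed(seq))

print(reverse_complement("ATCG"))  # -> CGAT
```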

However, per Mark W. Chase, Nicolas Salamin, Mike Wilkinson, James M. Dunwell, Rao Prasad Kesanakurthi, Nadia Haidar, and Vincent Savolainen, Land plants and DNA barcodes: short-term and long-term goals (Philosophical Transactions Of The Royal Society, 2005), this has not been the case with plants until recently, since their CO1 gene does not have the ability to serve as a barcode gene and because plants "have had the reputation of being problematic for DNA bar-coding" due to "low levels of variability" and a lack of variation in "plastid phylogenetic markers." This view prevailed until 2008, when a team led by Dr. Vincent Savolainen of Imperial College London's Department of Life Sciences and The Royal Botanic Gardens, Kew, studied the functionality of the megakaryocyte-associated tyrosine-protein kinase (matK) gene, located in the intron of the trnK chloroplast gene found in plant leaves. As reported by W. John Kress and David L. Erickson, DNA barcodes: Genes, genomics, and bioinformatics (PNAS, Vol. 105, No. 8, 26 February 2008), their research found that the matK gene "contained significant species-level genetic variability and divergence, conserved flanking sites for developing PCR primers for wide taxonomic application, [and] a short sequence length... to facilitate... DNA extraction and amplification." (PCR, the polymerase chain reaction, is a process that enables scientists to produce millions of copies of a specific DNA sequence in about two hours while bypassing the need to use bacteria to amplify DNA, per Polymerase Chain Reaction (PCR), Gene Almanac, Dolan DNA Learning Center and Cold Spring Harbor Laboratory, Inc., 2009.) The matK gene could thus be used to differentiate between at least 90% of all plants, including those that appear identical to the human eye - known as cryptic species because of their identical appearance despite genetic differences.

The matK gene, though, was found ineffective in distinguishing between up to 10% of plant species because of two major factors:

1. When variation resulting from "rapid bursts of speciation" was small, and 
2. When plants were hybrids whose genomes were rearranged through natural and artificial cross-breeding, which "confuse[d] matK gene information," per Anna-Marie Lever's article, DNA 'barcode' revealed in plants (BBC News, 6 February 2008).

When it was discovered that the matK gene could serve as a natural barcode in plants, its location proved consistent with that of the barcode gene in animals: the barcode genes in both are located in cellular energy centers outside the nucleus (mitochondria serve as "tiny powerhouses" in animal cells, while chloroplasts are involved in plant photosynthesis). Per Anna-Marie Lever, DNA 'barcode' revealed in plants, this is because "nuclear genes usually evolve too rapidly to distinguish between [organisms] of the same species." Consistent with mitochondrial genes in animals, however, "chloroplast genes [in plants] evolve at a slower rate, allowing for [distinction between members of the same species, and] fast enough for differences to occur in the DNA code between species."

The only real difference between plants and animals here is the range of effectiveness of their respective barcode genes. The CO1 gene can be used effectively to determine and record phylogeny in nearly 100% of animal species, while the matK gene is ineffective in about 10% of plant species. The key reason for the matK gene's 90% effective range is natural crossbreeding, which is significantly more common in plants than in animals. Because of this, matK gene information needs to be supplemented by data from another gene. Studies utilizing the trnH-psbA gene, which shares similar characteristics with matK, showed promise: when sequencing of both matK and trnH-psbA was applied to plants of the nutmeg family (Myristicaceae), the effective range for correct identification rose to approximately 95%. Nonetheless, a panel of 52 leading barcoding scientists opted to use the ribulose-bisphosphate carboxylase (rbcL) gene (also located in plant chloroplasts), as outlined in a 2009 paper published in Proceedings of the National Academy of Sciences and reported by Daniel Cressey, DNA barcodes for plants a step closer (Nature, 27 July 2009), to effectively complete the barcode for the remaining 10% of species.

While discovery of the phylogenetic usefulness of the matK gene is relatively novel, studies indicating the phylogenetic usefulness of the rbcL gene date back as far as 1986, when Jane Aldrich, Barry Cherney, Ellis Merlin, and Jeff Palmer reported in Nucleic Acids Research that sequencing of rbcL genes showed petunia and tobacco to be 97.3%, and alfalfa and peas 94.1%, genetically identical when comparing their bps.
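
Figures like 97.3% and 94.1% are pairwise identities computed over aligned sequences. A minimal version for two already-aligned, equal-length toy sequences (not real rbcL data) looks like this:

```python
def percent_identity(a, b):
    """Percent of aligned positions at which two equal-length sequences match."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Toy aligned fragments differing at one of five positions.
print(percent_identity("ATGGC", "ATGAC"))  # -> 80.0
```

Real comparisons first align the two genes (handling insertions and deletions) before scoring matches, which this sketch assumes has already been done.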

Additional studies, to name two, added further evidence of the phylogenetic usefulness of the rbcL gene. One, reported by Mitsuyasu Hasebe, Tomokyuki Omori, Miyuki Nakazawa, Toshio Sano, Masahiro Kato, and Kunio Iwatsuki in rbcL Gene sequences provide evidence for the evolutionary lineages of leptosporangiate ferns (Proceedings of the National Academy of Sciences, June 1994), utilized PCR-amplified rbcL fragments to capture the evolutionary links among 58 species of leptosporangiate ferns. These ferns belong to the pteridophytes (vascular plants that reproduce by spores in lieu of flowers and seeds), the class with the longest evolutionary history of any vascular land plant, which has consequently endured the greatest loss of phylogenetically useful data. The other, reported by Hiroaki Setoguchi, Takeshi Asakawa Osawa, Jean-Christophe Pintaud, Tanguy Jaffré, and Jean-Marie Veillon in Phylogenetic relationships within Araucariaceae based on rbcL gene sequences (American Journal of Botany, 1998), utilized rbcL gene sequencing to successfully determine the phylogenetic relationships among 29 species of Araucariaceae, a sample representing nearly every existing species of this ancient family of conifers, which achieved maximum diversification during the Jurassic (c. 199.6 ± 0.6 to 145.5 ± 4 million years ago (Ma)) and Cretaceous (c. 145.5 ± 4 to 65.5 ± 0.3 Ma) periods.

During the study that led to the discovery that a plant's matK gene could serve as a primary barcode gene, Dr. Savolainen's team compared eight potential candidate genes and analyzed more than 1600 plant DNA samples obtained from the tropical forests of Costa Rica and the temperate region of Kruger National Park, South Africa, two of the world's leading biodiversity hotspots.

Through sequencing of the matK gene (which has a slightly different code for plants of different species and a near-identical code for plants of the same species), they were able to distinguish between a thousand orchid species - plants known for being difficult to differentiate because of their near-identical appearance, especially when sterile. Consequently, per Plant DNA 'Barcode' identified (Medical News Today, 6 February 2008), "...what was previously assumed to be one species of orchid was [found to be] two distinct species that live on different slopes of the mountains [with] differently shaped flowers adapted for different pollinating insects."

Scientific analysis, in which the matK gene was divided into five sectors, has determined that the sector 3 (known as 3') region is the most effective area for providing useful phylogenetic information. When broken down further, 140 of the 306 bps of the 3' region were found to be phylogenetically informative.
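
The "140 of 306 bps" figure refers to phylogenetically (parsimony-)informative alignment columns - those with at least two bases that each occur in at least two sequences. A toy sketch of that count, on an invented four-sequence alignment:

```python
from collections import Counter

def count_informative_sites(alignment):
    """Count parsimony-informative columns in a list of equal-length sequences."""
    informative = 0
    for column in zip(*alignment):
        freqs = Counter(column)
        # Informative: >= 2 character states, each present in >= 2 sequences.
        if sum(1 for count in freqs.values() if count >= 2) >= 2:
            informative += 1
    return informative

alignment = ["ATGC",
             "ATGA",
             "TAGC",
             "TAGA"]
print(count_informative_sites(alignment))  # -> 3
```

Columns 1, 2, and 4 each split the sequences into two groups of two, while column 3 is invariant and therefore uninformative.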

Establishment of the matK gene's barcode function, supplemented by use of the rbcL gene, represents a major breakthrough in plant science, since it offers a diverse range of potential applications to scientists and plant taxonomists/systematists, as well as an opportunity to close the large gap that presently exists between plant and animal barcoding.

Such potential applications include but are not limited to:

1. Accurate identification of plant species, especially cryptic species that are difficult to differentiate, which could potentially lead to the discovery of new species. Presently, as stated by Anna-Marie Lever in DNA 'barcode' revealed in plants, only a "few experts [can] accurately identify the plant composition of biodiverse hotspots." 
2. Accurate identification of botanic components in foods and medicines. 
3. Detection of undesirable plant material in processed foods by health inspectors. 
4. Tracking of plant species (e.g. migration). 
5. Locating of endangered species for habitat preservation. 
6. Detection of illegal transport/trade of endangered species to protect them from potential harm. 
7. Confirmation or identification of plant-insect associations. 
8. Expansion and facilitation of botanical medical research.

However, before this can be achieved, the following steps must be taken:

1. Establishment of a genetic database that can be uploaded into a portable scanner so that data can be readily available based on the analysis of a mere leaf/tissue sample. To enhance identification of known species and speed up discovery of new species, such a database must be massive and available online. 
2. Establishment of a search method or algorithm to search and access DNA barcode information from an online database. 
3. Establishment of a set of reference standards (which includes barcoding based solely on bp extractions from matK and rbcL genes) utilizing existing plant DNA specimens held at botanical gardens, herbariums, museums and other DNA repositories. For example, the Consortium for the Barcode of Life based at the Smithsonian Institution's National Museum of Natural History in Washington, D.C. has identified over two million of the estimated ten million species of plants, animals, and fungi (many still unnamed) while the Royal Botanic Gardens, Kew currently holds 23,000+ plant DNA samples. This is especially important since fresh collection efforts aimed at obtaining the DNA of every plant species, are impractical since they would require significant effort and time. 
4. Collection efforts conducted in accordance with international laws (e.g. the Convention on Biological Diversity) to protect habitats and ensure specimen integrity. Per W. John Kress and David L. Erickson, DNA barcodes: Genes, genomics, and bioinformatics, such collection efforts are necessary since existing specimens are limited in quantity and may consist of degraded DNA. Currently, collection efforts are being made in temperate (Plummers Island, MD and New York City, among others) and tropical (Forest Dynamics Plot, Panama, among others) regions.
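
As a naive sketch of the search method called for in step 2, a query barcode can be matched against a reference database by best percent identity. Real barcode databases use alignment-based search, and the species names and sequences here are invented:

```python
def identify_species(query, reference_db):
    """Return the reference species whose barcode best matches the query."""
    def identity(a, b):
        n = min(len(a), len(b))
        return sum(x == y for x, y in zip(a[:n], b[:n])) / n
    return max(reference_db, key=lambda species: identity(query, reference_db[species]))

# Toy reference entries standing in for matK/rbcL barcode records.
reference_db = {
    "Orchid species A": "ATGGCATTAGGC",
    "Orchid species B": "ATGACGTTAGCC",
}
print(identify_species("ATGGCATTAGGA", reference_db))  # -> Orchid species A
```

A production system would also report a confidence threshold, since a "best match" to a database that lacks the true species is still wrong.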

With a stamp of approval from the United Nations, which declared 2010 "the International Year of Biodiversity," vigorous global efforts aimed at barcoding 500,000 of the 1.7 million named species of plants, animals, and fungi by 2015 are being pursued by international teams of scientists as well as by groups/projects such as the Consortium for the Barcode of Life and, soon, the International Barcode of Life (iBOL) project, which is slated to launch in July 2010. Furthermore, Paul Hebert, the biologist who inspired the barcode movement and a major participant in the iBOL project, declared, per Scanning Life (National Geographic, May 2010), that "the approach is scalable to the planet [so that by 2025 every] species humans encounter frequently will [have been] barcoded."

With regard to plants, as technology is enhanced to exploit the genetic code of the matK and rbcL genes - through the establishment of a uniform database and the production of inexpensive portable scanners capable of analyzing leaf/tissue samples and matching DNA barcodes against database information - the field of botanical phylogenetics and research will benefit greatly, especially since plant identification and classification will be available to more than a few experts. In addition, such identification and classification will be more accurate than sole reliance on visual examination and physical morphology (especially with regard to cryptic species), endangered species will be easier to track and better protect, and people will have greater assurance about the food, drinks, and medicine they consume.

Additional References:

José A. Jurado-Rivera, Alfried P. Vogler, Chris A.M. Reid, Eduard Petitpierre, and Jesús Gómez-Zurita. DNA barcoding insect-host plant associations. The Royal Society. 17 October 2008.

Khidir W. Hilu and Hongping Liang. The MatK Gene: Sequence Variation And Application In Plant Systematics. American Journal of Botany 84(6). 1997.

Steve Newmaster, Aron Fazekas, Royce Steeves, and John Janovec. Testing plant barcoding regions in South American wild nutmeg trees. Botany 2008.

William Sutherland is a published poet and writer. He is the author of three books, "Poetry, Prayers & Haiku" (1999), "Russian Spring" (2003) and "Aaliyah Remembered: Her Life & The Person behind the Mystique" (2005) and has been published in poetry anthologies around the world. He has been featured in "Who's Who in New Poets" (1996), "The International Who's Who in Poetry" (2004), and is a member of the "International Poetry Hall of Fame." He is also a contributor to Wikipedia, the number one online encyclopedia and has had an article featured in "Genetic Disorders" Greenhaven Press (2009).

Uncategorized Editor Fri, 23 Apr 2010 02:05:47 -0400
Nutrigenomics For Personalized Nutrition - How Food Companies Can Create Products For a New Market

The field of nutrigenomics could play a significant role in expanded use of nutraceuticals, particularly relative to the growing interest in personalized nutrition. There is a wealth of scientific and scholarly information about the field, or more accurately, the fields of technology relevant to nutrigenomics. The fields of bioinformatics, computational biology, genomics, proteomics, and metabolomics are contributing to our understanding of nutrigenomics.

Using nutraceuticals for health benefits has been especially successful in dietary supplements and, to some extent, in functional foods. The role of the regulatory environment is important in the degree of success for any nutraceutical. Another key factor in nutraceutical success is scientific or technological advances.

For food companies considering incorporating nutraceuticals into their products, a critical question is whether there are ways to look at what technologies are emerging from the scientific world that can be exploited in commercial applications. The primary purpose of such an approach is to determine whether a scientific field has advanced to a stage at which commercialization is inevitable and, if so, when investment in commercialization activities is timely and wise. Nerac refers to this approach as Emerging Technology Evaluation. An emerging technology approach is most valuable when the technology comes from a field that is outside a company's core competencies, and oftentimes off its radar screen.

Definitions & Market Potential

Nutrigenomics and its related fields hold promise for better, more optimized use of nutraceuticals such as for more personalized nutrition. Nutrigenomics is a broad and complex group of scientific disciplines that have mutually supported and advanced each other. The potential to target concentrated nutraceuticals at an individual's specific needs sounds like something from science fiction. Yet it might be on the horizon for the general population in the near future.

Nutraceuticals include a wide range of ingredients, from the familiar vitamins, minerals, and herbs to bioactive enzymes and probiotics that need to remain alive to deliver some of their effects. These ingredients come in forms ranging from liquids and oils to powders and granules, and are used in virtually every type of food or dietary supplement application that exists.

Regardless of the form or application, interest in nutraceuticals is increasing because of the potential health benefits that they can deliver. The market and applications for nutraceuticals experienced unprecedented growth after the passage of the Dietary Supplement Health and Education Act in 1994. This unusual legislation shifted the burden of proof for safety so that a priori approval was not needed to market a dietary supplement.

Just about every imaginable application of nutraceuticals for health benefits has been tried in dietary supplements. And while the use and perhaps efficacy of nutraceuticals in functional foods might seem to hold more promise than in dietary supplements, the dietary supplement market is significantly larger. The regulatory environment is critical, but other factors play a heavy role, too. For example, consider consumer awareness of nutraceuticals and their related health benefits. The relationship between omega-3 oils and cognitive health, or probiotics and digestive health, is widely known, even if it is not well understood. It is curious, though, that consumers appear to be willing to pay considerably more for nutraceuticals in supplement form than when they are supplemented into food. Consumers are willing to pay $40 for a bottle of 100 high-quality fish oil pills, or 40 cents a serving. However, it is unlikely that they would pay an extra 25 cents a serving for bread fortified with omega-3s, which would add $2.50 to a 10-serving loaf.
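
A quick check of the pricing arithmetic above, using the figures from the text:

```python
# $40 bottle of 100 fish oil pills, and a 25-cent-per-serving premium
# on a 10-serving loaf of fortified bread.
per_pill_cost = 40.00 / 100   # dollars per pill (one serving)
loaf_premium = 0.25 * 10      # added dollars per loaf
print(per_pill_cost, loaf_premium)  # -> 0.4 2.5
```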

Emerging Trends

Technology and scientific validation are also critical for nutraceutical success. One example is the myriad of advances over the years in micro-encapsulation for protection of labile nutrients. Another is evolution of botanical extraction technology to maximize beneficial antioxidants and minimize unpalatable flavors. The emerging nanotechnology field weighs in also, providing alternative means to address absorption or dispersion challenges.

Using Nerac's Emerging Technology Evaluation, we can spot activity trends over time in various broad categories. The first step with this approach is to look at the overall level of activity for all relevant references in the field.

For this example, we evaluated various databases for scientific articles linking nutrigenomics with food or dietary supplements. The search was designed to link all aspects of nutrigenomics including bioinformatics, genomics, metabolomics, and proteomics to food and dietary supplements. By performing the search this way we can look at the area from a broad overview.

Before 2000, scientific activity in this area was minimal, but it increased exponentially thereafter, suggesting that the surge started soon after the draft human genome sequence was completed in 2000. A closer look suggests that activity may have started slowing after 2006. The 452 articles cited in 2007 represented only a slight increase over the 437 articles cited in 2006. The 2008 citations to date suggest that total activity for 2008 will be lower than in 2007.

The next logical step with these data is to aggregate the citations into different types of activities. It was surprising that the great majority of scientific citations on this topic were journal references, while patent literature activity was disproportionately low. One explanation is that the intellectual property for this technology resides outside food applications. If so, licensing or partnering relationships may be an effective means to leverage nutrigenomics technology from industries outside the food industry.
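The aggregation steps described above can be sketched in a few lines. The 2006 and 2007 yearly totals (437 and 452) come from the text; the journal/patent split shown here is purely hypothetical, for illustration only:

```python
from collections import defaultdict

# Citation records as (year, source_type) pairs. The yearly totals of
# 437 (2006) and 452 (2007) are from the text; the journal/patent
# breakdown is a hypothetical example, not real data.
records = (
    [("2006", "journal")] * 420 + [("2006", "patent")] * 17
    + [("2007", "journal")] * 430 + [("2007", "patent")] * 22
)

# Aggregate counts by year (trend over time) and by source type
# (journal vs. patent activity).
by_year = defaultdict(int)
by_type = defaultdict(int)
for year, source_type in records:
    by_year[year] += 1
    by_type[source_type] += 1

year_over_year_change = by_year["2007"] - by_year["2006"]  # +15
patent_share = by_type["patent"] / sum(by_type.values())

print(f"2006: {by_year['2006']}, 2007: {by_year['2007']}")
print(f"Year-over-year change: {year_over_year_change}")
print(f"Patent share of activity: {patent_share:.1%}")
```

A small year-over-year change like this is what supports the article's reading that the pace of activity may have plateaued, and a low patent share is what motivates looking outside the food industry for the underlying intellectual property.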

Nutrigenomics' Potential

There appears to be a strong technology backdrop to the nutrigenomics field, lending scientific substantiation to the use of nutraceuticals for personalized nutrition based on individual genetic fingerprints. Scientific activity has increased exponentially, and we recommend further segmenting that activity by specific technical type.

In addition, activity in other categories, especially regulatory activity and consumer acceptance, should be assessed. Overlaying the activity in these various categories allows for a graphical view of the relative stage of each.

The data presented suggest that scientific activity is still strong, but the pace of activity may have peaked. One conclusion is that funding and resources are being freed up for product development and commercialization. On the other hand, regulatory hurdles are serious detractors that must be resolved. A general recommendation is that companies interested in the use of nutrigenomics for personalized nutrition monitor relevant regulatory activity. Nerac analysts can evaluate this on a company's behalf.

Nerac Inc. is a global research and advisory firm for companies developing innovative products and technologies. Nerac Analysts deliver custom assessments of product and technology development opportunities, competitor intelligence, intellectual property strategies, and compliance requirements through a proven blended approach to custom analysis: review of technical knowledge, investigation of intellectual property, and appraisal of business impacts. Nerac deploys analysts in diverse disciplines to help clients discover new applications, serving as a catalyst for new thinking and creative approaches to business problems or identifying strategic growth opportunities. On the web at

Uncategorized Editor Sat, 20 Sep 2008 02:03:44 -0400