Financial versus scientific technology assessment

Sunday, January 31st, 2010 by hinrich

New profiling technologies

We are currently assessing the benefits of novel sequencing technologies and comparing them with established technologies such as microarrays. At the same time, we keep stumbling upon articles on the internet that focus on investment opportunities in the biotech market. While clearly aimed at the financial community, quotes like "[...] HiSeq 2000, took a big bite out of the array market, as it brought down the cost of sequencing a human genome" (FOREXYARD) are unfortunate in a scientific context. Since when did microarrays compete against sequencers for sequencing a human genome?

Such statements are understandable from an investment point of view but certainly bad for science, as a new technology rarely replaces an older approach completely (we are still running Northern blots, RT-qPCR, etc.). Often the comparison between an old and a new technology favors the new one only when a single characteristic delivers a major advantage that outweighs the new technology's drawbacks. Typically this is not the case for platform technologies such as microarrays or the current form of next-gen sequencing. It is up to the research community to identify the differences and judge which scientific application is best approached with which technology.

Nowadays, advertisements and articles cite features such as price or data volume as arguments in favor of next-gen sequencing over microarrays. However, these features are rarely weighed against the gain in knowledge, a property that should scientifically be a major driver. If a new technology does not teach us more, we have to question the benefit of its features.

For example, using whole-genome gene expression arrays removes the bias of using focus arrays (which carry only probes against transcripts in a certain context, e.g., metabolism, kinases, or apoptosis pathways) or RT-qPCR. Furthermore, the large increase in data volume has the benefit of allowing robust preprocessing (e.g., via quantile normalization). On the other hand, a lot of research had to go into identifying the informative genes among all measured genes in a given experiment, which is essential for generating knowledge from large data volumes. Which genes are truly altered in an experiment, and which only appear to be affected due to chance? Similar research is still necessary when looking at knowledge creation from next-gen sequencing data.
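To make the quantile normalization mentioned above concrete, here is a minimal sketch in Python/NumPy of how such a normalization can be computed across arrays. The function name and the toy data are ours for illustration only; real pipelines (e.g., the Bioconductor implementations) additionally handle ties and missing values.

    # Minimal sketch of quantile normalization across arrays (illustrative only).
    import numpy as np

    def quantile_normalize(expr: np.ndarray) -> np.ndarray:
        """Quantile-normalize a (genes x samples) matrix so that every
        sample (column) shares the same empirical distribution.
        Note: ties are handled naively in this sketch."""
        # Rank each column, then replace each rank with the mean of the
        # values holding that rank across all columns.
        ranks = np.argsort(np.argsort(expr, axis=0), axis=0)  # per-column ranks
        sorted_cols = np.sort(expr, axis=0)                   # each column sorted
        rank_means = sorted_cols.mean(axis=1)                 # mean value per rank
        return rank_means[ranks]                              # map ranks back to means

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Hypothetical toy data: 1000 "genes" on 4 "arrays" with different scaling
        raw = rng.lognormal(mean=5, sigma=1, size=(1000, 4)) * [1.0, 1.5, 0.8, 2.0]
        norm = quantile_normalize(raw)
        print(norm.mean(axis=0))  # column means are now identical

After normalization, every array contains the same set of values (only their assignment to genes differs), which is what makes downstream comparisons between arrays robust to array-wide intensity shifts.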

Posted in Science

