Financial versus scientific technology assessment

Sunday, January 31st, 2010 by hinrich

New profiling technologies

We are currently assessing the benefits of novel sequencing technologies and comparing them with existing technologies such as microarrays. At the same time, we stumble upon articles on the internet that focus on investment opportunities in the biotech market. While clearly aimed at the financial community, quotes like "[...] HiSeq 2000, took a big bite out of the array market, as it brought down the cost of sequencing a human genome" (FOREXYARD) are unfortunate in a scientific context. Since when did microarrays compete against sequencers for sequencing a human genome?

Such statements are understandable from an investment point of view but certainly bad for science, as a new technology rarely replaces an older approach completely (we are still running Northern blots, RTqPCR, etc.). Oftentimes the comparison comes out in favor of the new technology only when one of its characteristics delivers a major advantage that outweighs its drawbacks. This is typically not the case for platform technologies such as microarrays or the current form of next-gen sequencing. It is up to the research community to identify the differences and judge which scientific application is best approached with which technology.

Nowadays, advertisements and articles cite features such as price or data volume as arguments in favor of next-gen sequencing over microarrays. However, these features are rarely weighed against the increase in knowledge - a property that, scientifically, should be a major driver. If a new technology does not teach us more, we have to challenge the benefit of its features.

For example, using whole genome gene expression arrays removes the bias of using focus arrays (which carry only probes against transcripts in a certain context, e.g., metabolism, kinases, or apoptosis pathways) or RTqPCR. Furthermore, this large increase in data volume has the benefit of allowing robust preprocessing (e.g., via quantile normalization). On the other hand, a lot of research had to go into identifying the informative genes among all measured genes in a given experiment (essential for generating knowledge from large data volumes): which genes are truly altered in an experiment, and which only appear to be affected due to chance? Similar research is still necessary when looking at knowledge creation from next-gen sequencing data.
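
To make the preprocessing point concrete, here is a minimal sketch of quantile normalization in Python (NumPy only; the function name and toy matrix are illustrative assumptions, not taken from any particular pipeline). It forces every array in a genes-by-samples matrix onto the same empirical distribution, which is what makes comparisons across many arrays robust.

    import numpy as np

    def quantile_normalize(expr):
        # expr: genes x samples matrix; every column (array) is forced onto
        # the same empirical distribution (ties are not averaged here).
        ranks = np.argsort(np.argsort(expr, axis=0), axis=0)  # per-column ranks
        mean_quantiles = np.sort(expr, axis=0).mean(axis=1)   # reference distribution
        return mean_quantiles[ranks]                          # map ranks back to values

    # toy example: 5 genes measured on 3 hypothetical arrays
    expr = np.array([[5.0, 4.0, 3.0],
                     [2.0, 1.0, 4.0],
                     [3.0, 4.0, 6.0],
                     [4.0, 2.0, 8.0],
                     [1.0, 3.0, 5.0]])
    print(quantile_normalize(expr))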

Posted in Science


Are the days of "fixed content arrays" over?

Tuesday, January 19th, 2010 by hinrich

Next-gen sequencing

Last Friday I wrote about my concerns regarding the data quality of current and future genomic technologies. This morning I stumbled across the news bit that, according to GenomeWeb, "Affy Marks End of 'Days of Fixed Content Arrays' […]".

While the news item goes on to discuss whether or not Affymetrix will provide next-gen sequencing technologies, I truly wonder whether this is the right way. How will we make data comparable? A large benefit of the "fixed content arrays" is comparability between experiments, as the chip layout is fixed – especially when using whole genome arrays. This is different for the current state of the art in next-gen sequencing: the biases introduced vary between technologies / protocols and drift as technologies / protocols are continuously altered.

For example, how would we analyze samples from a prospective clinical trial that runs for three years and delivers a few samples per month? This is still very difficult with microarray technology, but it is likely to be even more difficult with the current state of next-gen sequencing technology. The key point is that one needs stability of the process to generate data that can be compared with a minimal level of data massaging.
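
As an illustration of what such a stability check could look like, here is a minimal sketch (the data layout, function name, and simulated numbers are assumptions for the example, not part of any existing pipeline): it tracks how well a reference sample, re-profiled in every monthly batch, correlates with its first measurement. A drop in correlation over the months would flag drift in the process.

    import numpy as np
    from scipy.stats import spearmanr

    def reference_drift(ref_profiles):
        # ref_profiles: genes x batches matrix holding the same reference
        # sample profiled once per monthly batch; returns the Spearman
        # correlation of each batch against the first as a crude drift monitor.
        baseline = ref_profiles[:, 0]
        return [spearmanr(baseline, ref_profiles[:, b])[0]
                for b in range(ref_profiles.shape[1])]

    # hypothetical example: 1000 genes, 36 monthly batches with slowly growing noise
    rng = np.random.default_rng(0)
    truth = rng.normal(size=1000)
    batches = np.column_stack([truth + rng.normal(scale=0.1 * (1 + b / 12), size=1000)
                               for b in range(36)])
    print(reference_drift(batches))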

Furthermore, even though next-gen sequencing offers much higher resolution, the current IT and bioinformatics infrastructure needs to evolve much further to provide users with the throughput, turnaround time, and cost structure that we currently have with microarray technology. Once this is solved, I expect third-generation sequencing to be available. This carries the promise of removing one large source of variability (library generation) and will hopefully also produce one or two major players, so that we can assess the utility of sequencing technologies for quantitative assessments (molecular profiling) versus the current, qualitative application of discovering novel sequences.

Posted in Molecular Profiling


Does data quality suffer from short-lived genomic technologies?

Friday, January 15th, 2010 by hinrich

Next-gen sequencing

Looking, for example, at the ever-increasing number of novel sequencing technologies, instrument and protocol versions, and companies, I wonder about the quality of the data we are currently generating and about the comparability between technologies. I ask myself questions like: do we have the time to sufficiently understand the limits of a given generation of technology?

As users of such genomic technologies, we witness a phenomenon known from the software industry: the next version (instrument / protocol / reagents) is claimed to be much better, and existing issues will have been solved or at least improved. Right.

Then I wonder: why shouldn't the support for the old version be dropped as soon as the "new" version is available? Where is the incentive for the technology company to continue assessing limitations of the old version? Where is the incentive for researchers to study potential problems? By the time the study is finished a new generation will be available. Why should "big" journals want to publish results of studies investigating the properties of a particular setup? Why should researchers applying the tool be interested in the results of such an investigative study if they will have moved on by the time the results are published?

Posted in Molecular Profiling

