These are chat archives for nickschurch/Kallisto_Salmon_Sailfish_comparison

5th Feb 2016
Nick Schurch
@nickschurch
Feb 05 2016 16:13

Hi, I've been doing a comparison of the output from these tools and thought it might be fruitful to discuss some of it here. I'll start with a brief (somewhat unspecific) description of the experiment and the data. The experiment is not very relevant to the technical results, but the layout is useful to know.

The data are 100bp paired-end Illumina HiSeq 2000 reads generated from Arabidopsis thaliana samples. There are three conditions (one WT and two others) and seven biological replicates per condition, with ~95M reads per replicate. One replicate (WT-4) has some problems with it and is excluded.

The data have been quantified using three tools - Sailfish (0.8.0), Salmon (0.5.1) & Kallisto (0.42.4, debug version) - against three different transcript annotations for Arabidopsis - TAIR10 (Ensembl), Araport (v11, 20151006) and AtRTD (v3, doi: 10.1111/nph.13545). I know these are not the latest versions of the tools (and in one case not the latest version of the annotation either), but for the moment they are what I'm running with until I understand which visualizations are most informative.
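For anyone wanting to poke at the numbers themselves, here's a minimal sketch of how the per-replicate TPMs could be pulled into one table per tool. The paths and layout are placeholders, not my actual pipeline; kallisto writes an abundance.tsv with 'target_id'/'tpm' columns, while recent salmon/sailfish releases write a quant.sf with 'Name'/'TPM' columns (the older releases above used commented headers, so the parsing may need tweaking):

```python
import pandas as pd

def load_tpms(path, id_col, tpm_col):
    """Read one quantification output; return a Series of TPMs indexed by transcript."""
    df = pd.read_csv(path, sep="\t", comment="#")
    return df.set_index(id_col)[tpm_col]

def tool_matrix(paths, id_col, tpm_col):
    """Combine replicate outputs into one transcripts x replicates DataFrame."""
    return pd.DataFrame({rep: load_tpms(p, id_col, tpm_col)
                         for rep, p in paths.items()})

# Hypothetical layout: one output directory per replicate.
kallisto_c1 = tool_matrix(
    {f"c1_rep{i}": f"kallisto/c1_rep{i}/abundance.tsv" for i in range(1, 8)},
    id_col="target_id", tpm_col="tpm")
```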

The first thing I want to show you guys is the agreement between replicates within each of the conditions for the different tools (attached). The plot below shows nine small heat maps of the all-against-all Pearson correlations for replicates within conditions. The columns are conditions - so all of the matrices in column 1 refer to the same condition. The condition order is 'c1', 'c2' (the WT) & 'c3'. The rows are tools - so all the matrices in row 1 refer to the same tool. The tool order is 'sailfish', 'kallisto' and 'salmon'. All the matrices use the same annotation - in this case TAIR10. The colour scale represents R values in the range 0.9 < R < 1.0.

[image: tair10_alltools_corrmatricies.png]
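For reference, a rough sketch of how a grid like this can be drawn with matplotlib. The `matrices` dict is a placeholder for a {tool: {condition: transcripts x replicates DataFrame}} structure, e.g. built with `tool_matrix()` above - not my actual plotting code:

```python
import matplotlib.pyplot as plt

def plot_corr_grid(matrices, fname="corr_grid.png"):
    """Draw a tools x conditions grid of within-condition correlation heat maps."""
    tools = ["sailfish", "kallisto", "salmon"]   # row order in the figure
    conds = ["c1", "c2", "c3"]                   # column order (c2 = WT)
    fig, axes = plt.subplots(len(tools), len(conds), figsize=(9, 9))
    for i, tool in enumerate(tools):
        for j, cond in enumerate(conds):
            # All-against-all Pearson correlations of replicate TPM profiles.
            corr = matrices[tool][cond].corr(method="pearson")
            im = axes[i, j].imshow(corr, vmin=0.9, vmax=1.0)
            axes[i, j].set_title(f"{tool} / {cond}", fontsize=8)
            axes[i, j].set_xticks([])
            axes[i, j].set_yticks([])
    fig.colorbar(im, ax=axes.ravel().tolist(), label="Pearson R")
    fig.savefig(fname, dpi=150)
```

(One design choice worth being explicit about with plots like this: whether the correlations are computed on raw or log-transformed TPMs, since the low-expression tail dominates one and not the other.)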

What struck me about this:

1) The WT (c2) replicates correlate much better than either of the other conditions.
2) The different tools show very different levels of agreement between replicates, even though the underlying data are identical! In particular, Salmon shows excellent agreement between replicates, while kallisto and then sailfish show progressively worse agreement on the same data. I wasn't expecting this at all. In my head I'm imagining this has something to do with how deterministic each tool is and how much random assignment of reads to transcripts it does: Salmon > kallisto > sailfish. (A rough way to put a number on this is sketched below.)
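
To put a single number on point 2, something like the median off-diagonal correlation per tool/condition would do - again `matrices` is the hypothetical structure from the sketch above, not my actual code:

```python
import numpy as np

def median_between_replicate_r(reps):
    """Median off-diagonal Pearson R across the replicate columns of `reps`."""
    corr = reps.corr(method="pearson").to_numpy()
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
    return float(np.median(off_diag))

def summarise(matrices):
    for tool in ["sailfish", "kallisto", "salmon"]:
        for cond in ["c1", "c2", "c3"]:
            r = median_between_replicate_r(matrices[tool][cond])
            print(f"{tool:9s} {cond}  median R = {r:.4f}")
```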

Is this something you'd expect and does the explanation sound plausible?