Data Normalization

Normalization aims to address the variability in sampling depth and the sparsity of microbiome data to enable more biologically meaningful comparisons. All of these methods require raw count data as input. You can rarefy your data and then apply either data scaling or data transformation. However, you cannot apply both data scaling and data transformation, because scaled or transformed data is no longer valid count data.
Data scaling aims to bring all samples to the same scale by dividing each sample's counts by a sample-specific scaling factor. Common choices include total sum scaling (TSS), cumulative sum scaling (CSS), and upper-quartile (UQ) scaling.
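As a minimal sketch of the simplest option (TSS), not the tool's internal implementation: each sample's counts are divided by that sample's library size, optionally rescaled to a common constant. The function and parameter names below are illustrative.

```python
import numpy as np

def total_sum_scaling(counts, scale_to=1_000_000):
    """Scale each sample (row) by its library size (TSS).

    counts: 2-D array of raw counts, shape (n_samples, n_features).
    scale_to: constant to rescale proportions to (e.g., counts per million).
    """
    library_sizes = counts.sum(axis=1, keepdims=True)  # per-sample totals
    return counts / library_sizes * scale_to

# Example: two samples with very different sequencing depths
raw = np.array([[100, 300, 600],
                [ 10,  30,  60]])
print(total_sum_scaling(raw))  # rows become directly comparable after scaling
```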
Data transformation applies a variance-stabilizing transformation such as the log-ratio transformation and its variations. Common choices include the centered log-ratio (CLR) transformation, relative log expression (RLE) normalization, and the weighted trimmed mean of M-values (TMM).
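For illustration, here is a CLR sketch. It assumes a small pseudocount is added to handle zeros, which is a common but not universal choice and may differ from the tool's handling; names are illustrative.

```python
import numpy as np

def clr_transform(counts, pseudocount=1):
    """Centered log-ratio transform each sample (row).

    CLR(x) = log(x / geometric_mean(x)); a pseudocount is added first
    because the logarithm of zero counts is undefined (an assumption,
    not necessarily what the tool does internally).
    """
    x = counts + pseudocount
    log_x = np.log(x)
    # Subtracting the per-sample mean log equals dividing by the geometric mean
    return log_x - log_x.mean(axis=1, keepdims=True)

raw = np.array([[100, 300, 600],
                [ 10,  30,  60]])
print(clr_transform(raw))  # each row now sums to ~0 on the log scale
```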
All samples will be rarefied to an even sequencing depth based on the sample with the lowest sequencing depth. If this sample contains extremely few reads, you may need to manually exclude it (using the Sample Editor) to avoid significant data loss. You can check whether this is the case via View Sample Size on the Data Summary page.

Alternatively, you can rarefy the samples to a specific depth. The default value is the minimum library size after filtering. The maximum allowable depth is capped at the third quartile of the sample sizes after filtering.
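As a hedged sketch of rarefying (subsampling each sample without replacement to a fixed depth), not the tool's exact implementation; the function name and seed handling are illustrative:

```python
import numpy as np

def rarefy(counts, depth, seed=0):
    """Subsample each sample (row) without replacement to `depth` reads.

    Samples with fewer than `depth` reads cannot be rarefied and are
    dropped here; in practice you would exclude or flag them beforehand.
    """
    rng = np.random.default_rng(seed)
    kept = []
    for row in counts:
        if row.sum() < depth:
            continue  # too shallow to rarefy at this depth
        # Expand counts into individual reads, then draw `depth` of them
        reads = np.repeat(np.arange(row.size), row)
        chosen = rng.choice(reads, size=depth, replace=False)
        kept.append(np.bincount(chosen, minlength=row.size))
    return np.array(kept)

raw = np.array([[100, 300, 600],
                [ 10,  30,  60]])
print(rarefy(raw, depth=100))  # every retained row now sums to exactly 100
```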