codcmp

Function

Description

codcmp reads two codon usage table files and writes the differences in codon usage fractions between the two tables to an output file.

The usage fraction of a codon is its proportion (0 to 1) of the total number of codons in the sequences used to construct the usage table. For each codon that is used in both tables, codcmp takes the difference between the usage fractions in the two tables. The sum of the differences and the sum of the squared differences are reported in the output file. It also counts how many of the 64 possible codons are unused (i.e. have a usage fraction of 0) in one or the other or both of the codon usage tables, and writes this to the output file.
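The calculation above can be sketched in Python. This is an illustrative re-implementation, not the EMBOSS code itself; the tables are assumed to be dicts mapping each codon to its usage fraction, and it is an assumption here that "sum of the differences" means the sum of absolute differences (the signed differences over two full fraction vectors would largely cancel).

```python
import itertools

# All 64 possible codons.
CODONS = ["".join(c) for c in itertools.product("ACGT", repeat=3)]

def compare_usage(table_a, table_b):
    """Compare two codon usage tables given as {codon: fraction} dicts.

    Returns the sum of (absolute) differences, the sum of squared
    differences, and the number of the 64 codons that are unused
    (fraction 0) in one or the other or both tables.
    """
    sum_diff = 0.0
    sum_diff_sq = 0.0
    unused = 0
    for codon in CODONS:
        fa = table_a.get(codon, 0.0)
        fb = table_b.get(codon, 0.0)
        d = fa - fb
        sum_diff += abs(d)   # absolute difference is an assumption here
        sum_diff_sq += d * d
        if fa == 0.0 or fb == 0.0:
            unused += 1
    return sum_diff, sum_diff_sq, unused
```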

Statistical significance

Question:

How do you interpret the statistical significance of any difference between the tables?

Answer:

This is a very interesting question. I don't think that there is any way to say if it is statistically significant just from looking at it, as it is essentially a descriptive statistic about the difference between two 64-mer vectors. If you have a whole lot of sequences and codcmp results for all the possible pairwise comparisons, then the resulting distance matrix can be used to build a phylogenetic tree based on codon usage.

However, if you generate a series of random sequences, measure their codon usage and then do codcmp between each of your test sequences and all the random sequences, you could then use a z-test to see if the result between the two test sequences was outside of the top or bottom 5%.

This would assume that the codcmp results were normally distributed, but you could test that too. The simplest way is just to plot them and look for a bell-curve. For more rigour, find the mean and standard deviation of your results from the random sequences, use the normal distribution equation to generate a theoretical distribution for that mean and standard deviation, and then perform a chi-square test between the random data and the theoretically generated normal distribution. If you generate two sets of random data, each based on your two test sequences, an F-test should be used to establish that they have equal variances. Then you can safely go ahead and perform the z-test.
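The z-test step described above can be sketched with the Python standard library. This is only one way to carry out the suggested procedure, assuming the background scores have already been computed; the function name and the significance threshold are illustrative.

```python
import statistics

def z_test(observed, background, alpha=0.05):
    """Two-sided z-test: is `observed` outside the central 1-alpha
    region of the (assumed normal) background distribution?

    `background` is a list of codcmp scores between the test sequences
    and many random sequences; `observed` is the score between the two
    test sequences themselves.
    """
    mu = statistics.mean(background)
    sigma = statistics.stdev(background)
    z = (observed - mu) / sigma
    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - statistics.NormalDist().cdf(abs(z)))
    return z, p, p < alpha
```

Remember that, as noted above, this is only valid if the background scores are approximately normally distributed.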

You could use shuffle to base your random sequences on the test sequences - so that would ensure the randomised background had the same nucleotide content.
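Shuffling a sequence to build such a composition-preserving background can be sketched as follows. This is not the EMBOSS shuffle program, just the same idea in Python: a random permutation of the residues keeps the nucleotide content identical.

```python
import random

def shuffled_sequence(seq, rng=random):
    """Return a random permutation of `seq`, preserving its exact
    nucleotide composition (the same idea as the shuffle program)."""
    bases = list(seq)
    rng.shuffle(bases)
    return "".join(bases)
```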

F-tests, z-tests and chi-square tests can all be done in Excel, as well as being standard in most statistical analysis packages.

Answered by Derek Gatherer <d.gatherer@vir.gla.ac.uk> 21 Nov 2003

Usage

Command line arguments


Input file format

codcmp reads in two codon usage tables; these are available as EMBOSS data files. See below for details.

Output file format

Data files

codcmp requires two codon usage tables which are read by default from the EMBOSS data file Ehum.cut in the data/CODONS directory of the EMBOSS distribution. If the name of a codon usage file is specified on the command line, then this file will first be searched for in the current directory and then in the data/CODONS directory of the EMBOSS distribution.

Notes

Derek Gatherer's comments in the Statistical significance section above are useful for interpreting the significance of any difference between the tables.

References

None.

Warnings

None.

Diagnostic Error Messages

None.

Exit status

This program always exits with a status of 0.

Known bugs

None.

Author(s)

Some more statistics were added by

History

Target users

Comments