There are many error detection and correction codes in use today. The most common are parity checks, cyclic redundancy checks (CRCs) and arithmetic checksums. Other codes are designed to correct errors as well as detect them. One of these is the two-dimensional parity check. With this code, a block of data is arranged in a table of rows and columns, and a parity bit is computed for each row and each column. A single-bit error then shows up as one failing row parity and one failing column parity, and the bit at their intersection can be flipped back to correct it. Compared with a simple one-dimensional parity check, this scheme has a better chance of detecting burst errors (McNamara, 1982).
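To illustrate the procedure, here is a minimal Python sketch of a two-dimensional parity check; the 4x4 bit table, the use of even parity and the function names are assumptions made for this example rather than part of any particular standard.

    def parity_bits(block):
        # Even parity for each row and each column of a table of 0/1 bits.
        row_parity = [sum(row) % 2 for row in block]
        col_parity = [sum(col) % 2 for col in zip(*block)]
        return row_parity, col_parity


    def locate_single_bit_error(received, row_parity, col_parity):
        # Recompute parities on the received block; a single-bit error shows
        # up as exactly one failing row and one failing column.
        new_rows, new_cols = parity_bits(received)
        bad_rows = [i for i, (a, b) in enumerate(zip(row_parity, new_rows)) if a != b]
        bad_cols = [j for j, (a, b) in enumerate(zip(col_parity, new_cols)) if a != b]
        if not bad_rows and not bad_cols:
            return None                                  # nothing detected
        if len(bad_rows) == 1 and len(bad_cols) == 1:
            return bad_rows[0], bad_cols[0]              # correctable
        raise ValueError("multiple-bit error detected; cannot correct")


    # Sender computes the parities, the channel flips one bit, the receiver
    # locates the error and corrects it.
    sent = [[1, 0, 1, 1],
            [0, 1, 1, 0],
            [1, 1, 0, 0],
            [0, 0, 1, 1]]
    row_p, col_p = parity_bits(sent)
    received = [row[:] for row in sent]
    received[2][1] ^= 1                                  # single-bit error in transit
    r, c = locate_single_bit_error(received, row_p, col_p)
    received[r][c] ^= 1                                  # flip it back
    assert received == sent

The key point is that a single flipped bit disturbs exactly one row parity and one column parity, which pinpoints it; multiple flipped bits can still be detected but not reliably corrected.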
Another error-detecting code is the Damm algorithm. This check-digit scheme detects all single-digit errors and all adjacent transposition errors. Its central feature is a totally anti-symmetric quasigroup of order 10, through which the digits of a number are processed to produce the check digit (White, 2010).
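A minimal Python sketch of Damm's scheme follows. The 10x10 operation table is the order-10 totally anti-symmetric quasigroup commonly reproduced in descriptions of the algorithm (its all-zero diagonal is what lets the check digit be read off directly); any other valid quasigroup of this kind would work the same way, and the sample number 572 is only an illustration.

    # A totally anti-symmetric quasigroup of order 10 commonly quoted for
    # Damm's algorithm; note the all-zero diagonal.
    DAMM_TABLE = [
        [0, 3, 1, 7, 5, 9, 8, 6, 4, 2],
        [7, 0, 9, 2, 1, 5, 4, 8, 6, 3],
        [4, 2, 0, 6, 8, 7, 1, 3, 5, 9],
        [1, 7, 5, 0, 9, 8, 3, 4, 2, 6],
        [6, 1, 2, 3, 0, 4, 5, 9, 7, 8],
        [3, 6, 7, 4, 2, 0, 9, 5, 8, 1],
        [5, 8, 6, 9, 7, 2, 0, 1, 3, 4],
        [8, 9, 4, 5, 3, 6, 2, 0, 1, 7],
        [9, 4, 3, 8, 6, 1, 7, 2, 0, 5],
        [2, 5, 8, 1, 4, 3, 6, 7, 9, 0],
    ]


    def damm_check_digit(number: str) -> str:
        # Walk the digits through the quasigroup; the final interim digit is
        # the check digit (the zero diagonal makes this work).
        interim = 0
        for ch in number:
            interim = DAMM_TABLE[interim][int(ch)]
        return str(interim)


    def damm_is_valid(number_with_check: str) -> bool:
        # A number followed by its check digit reduces to an interim digit of 0.
        return damm_check_digit(number_with_check) == "0"


    check = damm_check_digit("572")          # append this digit when issuing the number
    assert damm_is_valid("572" + check)
    assert not damm_is_valid("527" + check)  # adjacent transposition is caught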
Another error-detecting code is the Luhn algorithm, which is widely used today in credit card numbers and national identification numbers. Its strength is that it detects all single-digit errors and almost all transpositions of adjacent digits. Its weakness is that it cannot detect the transposition of the adjacent digits 09 and 90: doubling a 9 gives 18, whose digit sum is 9, so the pair contributes 9 to the checksum whichever of the two digits is doubled. The algorithm operates from right to left, doubling every second digit and summing the digits of the results.
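The check described above can be sketched in a few lines of Python; the function name and the test numbers are illustrative only.

    def luhn_is_valid(number: str) -> bool:
        # Return True if the digit string (check digit included) passes the Luhn test.
        total = 0
        # Work from right to left; the rightmost digit is not doubled,
        # every second digit after it is.
        for i, ch in enumerate(reversed(number)):
            d = int(ch)
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9        # same as summing the two digits of the product
            total += d
        return total % 10 == 0


    assert luhn_is_valid("79927398713")        # classic Luhn test number
    assert not luhn_is_valid("79927398714")    # single-digit error is caught
    # The documented blind spot: adjacent 09 <-> 90 contributes 9 either way,
    # so both of these pass the check.
    assert luhn_is_valid("091") and luhn_is_valid("901")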
Compared with CRCs, parity checks and arithmetic checksums, these other error-detecting codes are weaker. This is because they operate on single decimal digits, whereas CRCs, arithmetic checksums and parity checks use proven algorithms that check many bits of data at once.
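As a rough illustration of that contrast, the sketch below computes a CRC-32 (using Python's standard zlib module) and a naive 16-bit additive checksum over the same bytes; the message text and the 16-bit width are arbitrary choices for the example.

    import zlib


    def arithmetic_checksum16(data: bytes) -> int:
        # Naive arithmetic checksum: sum of all bytes, kept to 16 bits.
        return sum(data) & 0xFFFF


    message = b"error detection example"
    reordered = b"detection error example"    # same bytes, different order

    # The additive checksum cannot see the reordering at all...
    assert arithmetic_checksum16(message) == arithmetic_checksum16(reordered)

    # ...whereas the CRC mixes every bit position into the result, so a
    # reordering like this is caught with overwhelming probability.
    print(f"CRC-32 of original : {zlib.crc32(message):#010x}")
    print(f"CRC-32 of reordered: {zlib.crc32(reordered):#010x}")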
References
McNamara, J. E. (1982). Technical Aspects of Data Communication (2nd ed.). Bedford, MA: Digital Press.
White, C. M. (2010). Data Communications and Computer Networks: A Business User’s Approach. New York: Cengage Learning.