[PDF][PDF] A comparative study of lossless compression algorithm on text data

A Jain, KI Lakhtaria, P Srivastav - Proc. Int. Conf. Adv. Comput. Sci. (AETACS), 2013 - ijeie.jalaxy.com.tw
Abstract
With the rapidly increasing amount of text data being stored, efficient information retrieval and storage in the compressed domain have become a major concern. Compression is a coding process that reduces the total number of bits needed to represent given information, and data compression has been one of the critical enabling technologies for the ongoing digital multimedia revolution. Many data compression algorithms are available for files of different formats. This paper surveys basic lossless data compression algorithms on English text files: LZW, Huffman, Fixed-Length Code (FLC), and Huffman applied after Fixed-Length Code (HFLC). All of these algorithms are evaluated and tested on text files of different sizes. To identify the best algorithm among them, a comparison is made in terms of compressed size, compression ratio, time (speed), and entropy. The paper concludes by indicating which algorithm performs best on text data.
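The metrics the abstract compares (compressed size, compression ratio, and entropy) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: it uses a heap-based Huffman coder and measures the ratio against an assumed 8-bit fixed-length baseline; the function names are hypothetical.

```python
import heapq
import math
from collections import Counter

def entropy(text: str) -> float:
    """Shannon entropy of the text in bits per character."""
    freq = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in freq.values())

def huffman_code_lengths(text: str) -> dict:
    """Map each character to its Huffman code length (tree depth)."""
    freq = Counter(text)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): 1}
    # Heap items: (frequency, tie-break id, {char: depth so far}).
    heap = [(f, i, {ch: 0}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        # Merging two subtrees pushes every leaf one level deeper.
        merged = {ch: depth + 1 for ch, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def compression_ratio(text: str) -> float:
    """Huffman-coded size divided by an 8-bit fixed-length size."""
    freq = Counter(text)
    lengths = huffman_code_lengths(text)
    compressed_bits = sum(freq[ch] * lengths[ch] for ch in freq)
    return compressed_bits / (8 * len(text))
```

For any text, the average Huffman code length lies within one bit of the entropy, which is why the paper can use entropy as a lower-bound yardstick when ranking the four schemes.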