The above screen capture of the text file we just created contains some visibly repeated lines that we will be trying to get rid of/delete. In a production-ready environment, such a file could have thousands of lines, making it difficult to get rid of the hidden duplicates.

Delete Duplicate Lines Using Sort and Uniq Commands

As per the man pages of these two GNU Coreutils commands, the sort command's primary purpose is to sort lines within a text file, while the uniq command's primary purpose is to omit/report repeated lines within a targeted text file.

If we were to remove the duplicate lines from our text file using these two commands, we would run:

$ sort sample_file.txt | uniq

As expected, the duplicate entries have been deleted.

We can even redirect the output of the above command to a file like final.txt:

$ sort sample_file.txt | uniq > final.txt

Sort File without Duplicate Lines

We can also use Linux's sort command with the -u option to uniquely output the content of the file without duplicate lines. To save the output to another file:

$ sort -u sample_file.txt > final.txt

How to Find Most Repeated Lines in File

Sometimes you might be curious about identifying the most repeated lines in a text file. In such a case, we will use two sort commands and a single uniq command.
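The pipeline described above can be sketched as follows. This is a minimal example, assuming a small sample_file.txt with repeated entries; the first sort groups identical lines together, uniq -c prefixes each unique line with its occurrence count, and the second sort orders those counts numerically in reverse so the most repeated lines come first.

```shell
# Create a small sample file with duplicate lines (hypothetical contents,
# standing in for the article's sample_file.txt).
printf 'apple\nbanana\napple\napple\nbanana\ncherry\n' > sample_file.txt

# Two sort commands and a single uniq command:
#   1. sort       - group identical lines so uniq can count them
#   2. uniq -c    - prefix each unique line with its repeat count
#   3. sort -nr   - order by count, numerically, highest first
sort sample_file.txt | uniq -c | sort -nr
```

With the sample contents above, the line repeated most often (apple, three times) is printed first, followed by banana and cherry.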