How do I delete duplicate lines from a file?
Problem description:
I have a file with only one column. How do I remove the duplicate lines from the file?
Solution 1:
On Unix/Linux, use the uniq command, as in David Locke's answer, or sort, as in William Pursell's comment.
If you need a Python script:
lines_seen = set()  # holds lines already seen
outfile = open(outfilename, "w")
for line in open(infilename, "r"):
    if line not in lines_seen:  # not a duplicate
        outfile.write(line)
        lines_seen.add(line)
outfile.close()
Update: the sort/uniq combination will remove duplicates but return a file with the lines sorted, which may or may not be what you want. The Python script above won't reorder the lines; it only drops duplicates. Of course, to have the script sort as well, just leave out the outfile.write(line) and instead, immediately after the loop, do outfile.writelines(sorted(lines_seen)).
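For reference, a minimal sketch of that sorted variant, reusing the same infilename/outfilename placeholders as above:
# Collect the unique lines first, then write them out sorted after the loop.
lines_seen = set()
for line in open(infilename, "r"):
    lines_seen.add(line)

outfile = open(outfilename, "w")
outfile.writelines(sorted(lines_seen))
outfile.close()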
Solution 2:
If you're on *nix, try running the following command:
sort <file name> | uniq
Solution 3:
uniqlines = set(open('/tmp/foo').readlines())
That gives you the set of unique lines.
Writing it back to some file is as easy as:
bar = open('/tmp/bar', 'w')
bar.writelines(uniqlines)
bar.close()
Solution 4:
You can do something like:
import os
os.system("awk '!x[$0]++' /path/to/file > /path/to/rem-dups")
Here you are calling bash from within Python :)
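If you would rather avoid going through a shell, a minimal sketch of the same awk call via the subprocess module (both paths are placeholders, as above) might look like:
import subprocess

# Run the same awk one-liner without a shell, sending awk's stdout
# straight to the output file.
with open('/path/to/rem-dups', 'w') as out:
    subprocess.run(['awk', '!x[$0]++', '/path/to/file'], stdout=out, check=True)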
You also have another way:
with open('/tmp/result.txt') as result:
    uniqlines = set(result.readlines())
    with open('/tmp/rmdup.txt', 'w') as rmdup:
        rmdup.writelines(uniqlines)
Solution 5:
Get all the lines into a list, make a set of those lines, and you're done. For example:
>>> x = ["line1","line2","line3","line2","line1"]
>>> list(set(x))
['line3', 'line2', 'line1']
>>>
If you need to preserve the order of the lines (since a set is an unordered collection), try this:
y = []
for l in x:
    if l not in y:
        y.append(l)
Then write the content back to the file.
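As a small end-to-end sketch of that idea (the file name here is just an example), an order-preserving de-duplication and write-back could be:
# dict.fromkeys keeps the first occurrence of each line and preserves
# insertion order (Python 3.7+), so no explicit loop is needed.
with open('file.txt') as f:
    unique_lines = list(dict.fromkeys(f))

with open('file.txt', 'w') as f:
    f.writelines(unique_lines)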
Solution 6:
This is a rehash of what's already been said here, but it's what I use:
import optparse

def removeDups(inputfile, outputfile):
    lines = open(inputfile, 'r').readlines()
    lines_set = set(lines)
    out = open(outputfile, 'w')
    for line in lines_set:
        out.write(line)

def main():
    parser = optparse.OptionParser('usage %prog ' +
                                   '-i <inputfile> -o <outputfile>')
    parser.add_option('-i', dest='inputfile', type='string',
                      help='specify your input file')
    parser.add_option('-o', dest='outputfile', type='string',
                      help='specify your output file')
    (options, args) = parser.parse_args()
    inputfile = options.inputfile
    outputfile = options.outputfile
    if (inputfile is None) or (outputfile is None):
        print(parser.usage)
        exit(1)
    else:
        removeDups(inputfile, outputfile)

if __name__ == '__main__':
    main()
Solution 7:
A Python one-liner:
python -c "import sys; lines = sys.stdin.readlines(); print(''.join(sorted(set(lines))))" < InputFile > OutputFile
Solution 8:
To add to @David Locke's answer, on *nix systems you can run
sort -u messy_file.txt > clean_file.txt
This removes the duplicates and writes the result to clean_file.txt in alphabetical order.
Solution 9:
Have a look at the script I created to remove duplicate emails from a text file. Hope this helps!
# function to remove duplicate emails
def remove_duplicate():
    # opens emails.txt in r mode as one long string and assigns to var
    emails = open('emails.txt', 'r').read()
    # .split() removes excess whitespace from str, returns str as list
    emails = emails.split()
    # empty list to store non-duplicate e-mails
    clean_list = []
    # for loop to append non-duplicate emails to clean list
    for email in emails:
        if email not in clean_list:
            clean_list.append(email)
    return clean_list

# assigns no_duplicate_emails.txt to variable below
no_duplicate_emails = open('no_duplicate_emails.txt', 'w')

# loop to write clean_list 'list' elements to the file as strings
for email in remove_duplicate():
    # .strip() method to remove commas
    email = email.strip(',')
    no_duplicate_emails.write(f"E-mail: {email}\n")

# close no_duplicate_emails.txt file
no_duplicate_emails.close()
Solution 10:
If anybody is looking for a solution that uses hashing and is a little flashier, this is what I currently use:
import os

def remove_duplicate_lines(input_path, output_path):
    if os.path.isfile(output_path):
        raise OSError('File at {} (output file location) exists.'.format(output_path))
    with open(input_path, 'r') as input_file, open(output_path, 'w') as output_file:
        seen_lines = set()

        def add_line(line):
            seen_lines.add(line)
            return line

        output_file.writelines(add_line(line) for line in input_file
                               if line not in seen_lines)
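A short usage sketch (the file names are placeholders):
# Writes the de-duplicated lines of input.txt to deduped.txt;
# raises OSError if deduped.txt already exists.
remove_duplicate_lines('input.txt', 'deduped.txt')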
Solution 11:
To edit the file in place:
lines_seen = set()  # holds lines already seen
with open("file.txt", "r+") as f:
    d = f.readlines()
    f.seek(0)
    for i in d:
        if i not in lines_seen:
            f.write(i)
            lines_seen.add(i)
    f.truncate()
Solution 12:
Readable and concise:
with open('sample.txt') as fl:
    content = fl.read().split('\n')

content = set([line for line in content if line != ''])
content = '\n'.join(content)

with open('sample.txt', 'w') as fl:
    fl.writelines(content)
Solution 13:
Here is my solution:
if __name__ == '__main__':
    f = open('temp.txt', 'w+')
    flag = False
    with open('file.txt') as fp:
        for line in fp:
            for temp in f:
                if temp == line:
                    flag = True
                    print('Found Match')
                    break
            if flag == False:
                f.write(line)
            elif flag == True:
                flag = False
            f.seek(0)
    f.close()
Solution 14:
cat <filename> | grep -E '^[a-zA-Z]+$' | sort -u > outfile.txt
This filters the file to purely alphabetic lines and removes duplicate values.
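For comparison, a rough Python sketch of the same filter-and-de-duplicate step (file names are placeholders) could be:
import re

# Keep only purely alphabetic lines, drop duplicates, and sort,
# mirroring the grep | sort -u pipeline above.
with open('infile.txt') as f:
    lines = {line for line in f if re.fullmatch(r'[a-zA-Z]+', line.strip())}

with open('outfile.txt', 'w') as f:
    f.writelines(sorted(lines))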
Solution 15:
Here is my solution:
d = input("your file:")  # write your file name here
file1 = open(d, mode="r")
file2 = open('file2.txt', mode='w')
file2 = open('file2.txt', mode='a')
file1row = file1.readline()
while file1row != "":
    file2 = open('file2.txt', mode='a')
    file2read = open('file2.txt', mode='r')
    file2r = file2read.read().strip()
    if file1row not in file2r:
        file2.write(file1row)
    file1row = file1.readline()
    file2read.close()
    file2.close()