I'm new to Python, and to programming in general, so any help is greatly appreciated.
I have more than 3000 text files in one directory, in multiple encodings. I need to convert them all to a single encoding (e.g. utf-8) for further natural language processing work. When I checked the file types with the shell, I identified the following encodings:
Algol 68 source text, ISO-8859 text, with very long lines
Algol 68 source text, Little-endian UTF-16 Unicode text, with very long lines
Algol 68 source text, Non-ISO extended-ASCII text, with very long lines
Algol 68 source text, Non-ISO extended-ASCII text, with very long lines, with LF, NEL line terminators
ASCII text
ASCII text, with very long lines
data
diff output text, ASCII text
ISO-8859 text, with very long lines
ISO-8859 text, with very long lines, with LF, NEL line terminators
Little-endian UTF-16 Unicode text, with very long lines
Non-ISO extended-ASCII text
Non-ISO extended-ASCII text, with very long lines
Non-ISO extended-ASCII text, with very long lines, with LF, NEL line terminators
UTF-8 Unicode (with BOM) text, with CRLF line terminators
UTF-8 Unicode (with BOM) text, with very long lines, with CRLF line terminators
UTF-8 Unicode text, with very long lines, with CRLF line terminators

Do you know how to convert text files in the encodings listed above into utf-8-encoded text files?
Posted on 2021-05-02 22:16:03
I ran into the same problem you did and solved it in two steps. The code is as follows:
First, use the chardet package to identify each file's encoding:

import os
import codecs
import chardet

for text in os.listdir(path):
    txtPATH = os.path.join(path, text)
    # read the raw bytes and let chardet guess the encoding
    with open(txtPATH, 'rb') as f:
        data = f.read()
    f_charInfo = chardet.detect(data)
    coding = str(f_charInfo['encoding'])
    print(coding)

Second, if a file's encoding is not utf-8, rewrite it to the directory in utf-8 (this block continues inside the loop above; note the original check re.match(r'.*\.utf-8$', coding, ...) never matches the 'utf-8' string chardet actually returns, so a plain comparison is used instead):

    if coding.lower() != 'utf-8':
        print(txtPATH)
        print(coding)
        with codecs.open(txtPATH, "r", coding) as sourceFile:
            contents = sourceFile.read()
        with codecs.open(txtPATH, "w", "utf-8") as targetFile:
            targetFile.write(contents)

Hope this helps! Thanks.
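As a variation, here is a sketch of the same two steps wrapped in a function (the name `convert_to_utf8` and the `path` parameter are my own). It decodes the bytes already read instead of reopening each file, and skips files for which chardet.detect returns an encoding of None (e.g. the ones the shell reported as plain "data"):

```python
import os
import chardet


def convert_to_utf8(path):
    """Rewrite every detectable non-utf-8 text file under path as utf-8, in place."""
    for name in os.listdir(path):
        txt_path = os.path.join(path, name)
        with open(txt_path, 'rb') as f:
            data = f.read()
        coding = chardet.detect(data)['encoding']
        if coding is None:  # binary or unrecognizable content: leave it alone
            continue
        if coding.lower() != 'utf-8':
            contents = data.decode(coding)  # reuse the bytes already read
            with open(txt_path, 'w', encoding='utf-8') as target:
                target.write(contents)
```

Note that chardet is a statistical detector, so on very short files it can guess a wrong but decodable encoding; for 3000 mixed files it is worth logging which encoding was detected for each file so misdetections can be spotted.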
https://stackoverflow.com/questions/65074479