【Question Title】: Python: parse text from multiple txt files
【Posted】: 2017-04-24 20:00:52
【Question Description】:

Looking for advice on how to mine items from multiple text files to build a dictionary.

This text file: https://pastebin.com/Npcp3HCM

was manually converted into this desired data structure: https://drive.google.com/file/d/0B2AJ7rliSQubV0J2Z0d0eXF3bW8/view

There are thousands of such text files, and they may have different section headings, as in these examples:

  1. https://pastebin.com/wWSPGaLX
  2. https://pastebin.com/9Up4RWHu

I started by reading in the files:

from glob import glob

txtPth = '../tr-txt/*.txt'
txtFiles = glob(txtPth)

with open(txtFiles[0],'r') as tf:
    allLines = [line.rstrip() for line in tf]

sectionHeading = ['Corporate Participants',
                  'Conference Call Participiants',
                  'Presentation',
                  'Questions and Answers']

for lineNum, line in enumerate(allLines):
    if line in sectionHeading:
        print(lineNum,allLines[lineNum])

My idea was to find the line numbers where the section headings occur and extract the content between those line numbers, then strip out separators like dashes. That didn't work, and I got stuck trying to create a dictionary like the one below, so that I can later run various natural language processing algorithms on the extracted items.

{file-name1: {
    date-time: [string],
    corporate-name: [string],
    corporate-participants: [name1, name2, name3],
    call-participants: [name4, name5],
    section-headings: {
        heading1: [
            {name1: [speechOrderNum, text-content]},
            {name2: [speechOrderNum, text-content]},
            {name3: [speechOrderNum, text-content]}],
        heading2: [
            {name1: [speechOrderNum, text-content]},
            {name2: [speechOrderNum, text-content]},
            {name3: [speechOrderNum, text-content]},
            {name2: [speechOrderNum, text-content]},
            {name1: [speechOrderNum, text-content]},
            {name4: [speechOrderNum, text-content]}],
        heading3: [text-content],
        heading4: [text-content]
    }
}}
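The line-number idea can be sketched like this on a toy input (a simplified illustration, not code that handles the real transcript format; the helper name is mine):

```python
# Sketch of the line-number approach: find the lines that match a known
# heading, pair each heading index with the next one, and slice out the
# lines in between (toy input; separator lines of '=' / '-' are dropped).
def split_by_headings(all_lines, headings):
    marks = [i for i, line in enumerate(all_lines) if line in headings]
    marks.append(len(all_lines))  # sentinel so the last section is captured
    sections = {}
    for start, end in zip(marks, marks[1:]):
        body = [l for l in all_lines[start + 1:end] if set(l) - set('=- ')]
        sections[all_lines[start]] = body
    return sections

lines = ['Presentation', '====', 'Hello and welcome.',
         'Questions and Answers', '----', 'First question [1]']
print(split_by_headings(lines, ['Presentation', 'Questions and Answers']))
# {'Presentation': ['Hello and welcome.'],
#  'Questions and Answers': ['First question [1]']}
```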

The challenge is that different files may have different headings and different numbers of headings. But there will always be a section named "Presentation", and very likely a "Questions and Answers" section. The section headings are always separated by a run of equals signs, and the content of different speakers is always separated by a run of dashes. The speech order in the Q&A section is indicated by a number in square brackets. Participants are always marked at the beginning of the document: there is an asterisk before each name, and the person's title is always on the following line.
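Those conventions can be captured with a few regular expressions; a minimal sketch (the pattern names are mine and the patterns are untested against the real files, so treat them only as a starting point):

```python
import re

# Illustrative patterns for the conventions described above
# (names are hypothetical; tune the patterns against the real transcripts):
EQ_SEPARATOR   = re.compile(r"^=+\s*$", re.MULTILINE)         # section separators
DASH_SEPARATOR = re.compile(r"^-+\s*$", re.MULTILINE)         # speaker separators
SPEECH_ORDER   = re.compile(r"\[(\d+)\]")                     # e.g. "[4]" in Q&A
STAR_PERSON    = re.compile(r"^\s*\*\s*(.+)$", re.MULTILINE)  # '*' participants

sample = " * John Doe\n   Acme Corp - CEO\n"
print(STAR_PERSON.findall(sample))                            # ['John Doe']
print(SPEECH_ORDER.search("Jane Roe, Acme Corp [4]").group(1))  # '4'
```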

Any advice on how to parse these text files is appreciated. Ideally, I'd like guidance on how to generate such a dictionary (or another suitable data structure) for each file so that it can be written to a database.

Thanks

--EDIT--

One of the files looks like this: https://pastebin.com/MSvmHb2e

where the "Questions and Answers" section is mislabeled as "Presentation", and there is no other "Questions and Answers" section.

And a final sample text: https://pastebin.com/jr9WfpV8

【Question Discussion】:

  • I wouldn't recommend storing all the text data in a single dict object. As you mentioned, a lot of text files may need to be parsed, so as the dict object grows, the Python process will spend more and more time updating it, and you could hit OutOfMemory on some very large files. I'd bet on some DBMS to store this kind of data.
  • @ZdaR Thanks for the suggestion. After reading your comment I decided to use a database; I'm currently looking into sqlalchemy.
  • The mislabeling won't be that easy to fix. You'd have to build a classifier using ML techniques to classify a section as Presentation or Questions & Answers, because there is no guaranteed cue in the text (no amount of pattern matching with handcrafted rules will be 100% correct).
  • Thanks to the SO community for the answers and comments. I've awarded the bounty to the answer that handles the pattern recognition with Python and regular expressions. The cascade of if-statements forming a state machine accomplishes the given task, but code that uses regular expressions is easier to tune or refactor, and more general.

Tags: python parsing dictionary nlp


【Solution 1】:

The comments in the code should explain everything. Let me know if anything is unspecified and needs more comments.

In short, I use a regular expression to find the '=' delimiter lines and subdivide the whole text into subsections, then handle each type of section separately for clarity (so you can see how I handle each case).

Side note: I use the words "attendee" and "author" interchangeably.

EDIT: Updated the code to sort based on the "[x]" pattern next to the attendee/author in the Presentation/QA sections. Also changed the pretty-printing part, since pprint doesn't handle OrderedDict well.

To strip any extra whitespace, including \n, anywhere in a string, just call str.strip(). If you specifically need to strip only \n, call str.strip('\n').

I've modified the code to strip any whitespace from the talks.

import json
import re
from collections import OrderedDict
from pprint import pprint


# Subdivides a string of lines based on the delimiting regular expression.
# >>> example_string = ("=============================\n"
#                       "asdfasdfasdf\n"
#                       "sdfasdfdfsdfsdf\n"
#                       "=============================\n"
#                       "asdfsdfasdfasd\n"
#                       "=============================")
# >>> subdivide(example_string, r"^=+")
# ['', 'asdfasdfasdf\nsdfasdfdfsdfsdf', 'asdfsdfasdfasd', '']
def subdivide(lines, regex):
    equ_pattern = re.compile(regex, re.MULTILINE)
    sections = equ_pattern.split(lines)
    sections = [section.strip('\n') for section in sections]
    return sections


# for processing sections with dashes in them, returns the heading of the section along with
# a dictionary where each key is the subsection's header, and each value is the text in the subsection.
def process_dashed_sections(section):

    subsections = subdivide(section, r"^-+")
    heading = subsections[0]  # header of the section.
    d = {key: value for key, value in zip(subsections[1::2], subsections[2::2])}
    index_pattern = re.compile(r"\[(.+)\]", re.MULTILINE)

    # Sort the dictionary by capturing the pattern '[x]' and extracting the number 'x'.
    # This is passed as the key function to 'sorted', so items are ordered by 'x'.
    def order_key(item):
        mat = index_pattern.findall(item[0])
        if mat:
            return int(mat[0])
        # Subsections that contain '-'s but no '[x]' pattern would otherwise
        # break here, so fall back to 0 for them.
        return 0

    o_d = OrderedDict(sorted(d.items(), key=order_key))
    return heading, o_d


# this is to rename the keys of 'd' dictionary to the proper names present in the attendees.
# it searches for the best match for the key in the 'attendees' list, and replaces the corresponding key.
# >>> d = {'mr. man   ceo of company   [1]' : ' This is talk a' ,
#  ...     'ms. woman  ceo of company    [2]' : ' This is talk b'}
# >>> l = ['mr. man', 'ms. woman']
# >>> new_d = assign_attendee(d, l)
# new_d = {'mr. man': 'This is talk a', 'ms. woman': 'This is talk b'}
def assign_attendee(d, attendees):
    new_d = OrderedDict()
    for key, value in d.items():
        a = [a for a in attendees if a in key]
        if len(a) == 1:
            # to strip out any additional whitespace anywhere in the text including '\n'.
            new_d[a[0]] = value.strip()
        elif len(a) == 0:
            # to strip out any additional whitespace anywhere in the text including '\n'.
            new_d[key] = value.strip()
    return new_d


if __name__ == '__main__':
    with open('input.txt', 'r') as infile:  # avoid shadowing the builtin 'input'
        lines = infile.read()

        # regex pattern for matching the header of each section
        header_pattern = re.compile(r"^.*[^\n]", re.MULTILINE)

        # regex pattern for matching the sections that contain
        # the list of attendees (those that start with asterisks)
        ppl_pattern = re.compile(r"^(\s+\*)(.+)(\s.*)", re.MULTILINE)

        # regex pattern for matching sections with subsections in them.
        dash_pattern = re.compile(r"^-+", re.MULTILINE)

        ppl_d = dict()
        talks_d = dict()

        # Step 1. Divide the entire document into sections using the '=' divider.
        sections = subdivide(lines, r"^=+")
        header = []

        # Step 2. Handle each section like a switch case.
        for section in sections:

            # Handle headers
            if len(section.split('\n')) == 1:  # likely to match only a header (assuming headers are single lines)
                header = header_pattern.match(section).string

            # Handle attendees/authors
            elif ppl_pattern.match(section):
                ppls = ppl_pattern.findall(section)
                d = {key.strip(): value.strip() for (_, key, value) in ppls}
                ppl_d.update(d)

                # assuming that if the previous section was detected as a header, then this section will relate
                # to that header
                if header:
                    talks_d.update({header: ppl_d})

            # Handle subsections
            elif dash_pattern.findall(section):
                heading, d = process_dashed_sections(section)

                talks_d.update({heading: d})

            # Else it's just some random text.
            else:

                # assuming that if the previous section was detected as a header, then this section will relate
                # to that header
                if header:
                    talks_d.update({header: section})

        #pprint(talks_d)
        # To assign the talks material to the appropriate attendee/author. Still works if no match found.
        for key, value in talks_d.items():
            talks_d[key] = assign_attendee(value, ppl_d.keys())

        # ordered dict does not pretty print using 'pprint'. So a small hack to make use of json output to pretty print.
        print(json.dumps(talks_d, indent=4))

【Discussion】:

  • I can accept this answer if you include the speech order along with the speech in talks_d. The speech order is indicated in square brackets. It would also be useful if talks_d were an ordered dictionary.
  • How do I strip the '\n' from the text in talks_d?
【Solution 2】:

Could you confirm whether you only need the "Presentation" and "Questions and Answers" sections? Also, regarding the output, is it OK to dump a CSV format similar to your "manually converted" file?

The updated solution works for every sample file you provided.

Based on the shared "Parsed-transcript" file, the output corresponds to cells "D:H".

#state = ["other", "head", "present", "qa", "speaker", "data"]
# codes : 0, 1, 2, 3, 4, 5
def writecell(out, data):
    out.write(data)
    out.write(",")

def readfile(fname, outname):
    initstate = 0
    f = open(fname, "r")
    out = open(outname, "w")
    head = ""
    head_written = 0
    quotes = 0
    had_speaker = 0
    for line in f:
        line = line.strip()
        if not line: continue
        if initstate in [0,5] and not any([s for s in line if "=" != s]):
            if initstate == 5:
                out.write('"')
                quotes = 0
                out.write("\n")
            initstate = 1
        elif initstate in [0,5] and not any([s for s in line if "-" != s]):
            if initstate == 5:
                out.write('"')
                quotes = 0
                out.write("\n")
                initstate = 4
        elif initstate == 1 and line == "Presentation":
            initstate = 2
            head = "Presentation"
            head_written = 0
        elif initstate == 1 and line == "Questions and Answers":
            initstate = 3
            head = "Questions and Answers"
            head_written = 0
        elif initstate == 1 and not any([s for s in line if "=" != s]):
            initstate = 0
        elif initstate in [2, 3] and not any([s for s in line if ("=" != s and "-" != s)]):
            initstate = 4
        elif initstate == 4 and '[' in line and ']' in line:
            comma = line.find(',')
            speech_st = line.find('[')
            speech_end = line.find(']')
            if speech_st == -1:
                initstate = 0
                continue
            if comma == -1:
                firm = ""
                speaker = line[:speech_st].strip()
            else:
                speaker = line[:comma].strip()
                firm = line[comma+1:speech_st].strip()
            # the current section heading is written on every row
            writecell(out, head)
            head_written = 0
            order = line[speech_st+1:speech_end]
            writecell(out, speaker)
            writecell(out, firm)
            writecell(out, order)
            had_speaker = 1
        elif initstate == 4 and not any([s for s in line if ("=" != s and "-" != s)]):
            if had_speaker:
                initstate = 5
                out.write('"')
                quotes = 1
            had_speaker = 0
        elif initstate == 5:
            line = line.replace('"', '""')
            out.write(line)
        elif initstate == 0:
            continue
        else:
            continue
    f.close()
    if quotes:
        out.write('"')
    out.close()

readfile("Sample1.txt", "out1.csv")
readfile("Sample2.txt", "out2.csv")
readfile("Sample3.txt", "out3.csv")

Details

This solution contains a state machine that works as follows:

  1. Detect whether a heading is present; if so, write it.
  2. After the heading is written, detect the speaker.
  3. Write the notes for that speaker.
  4. Switch to the next speaker, and so on...

You can process the CSV file later as needed. Once the basic processing is done, you can also populate the data in any format you want.
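For example, the rows written above can be loaded back with the csv module for further processing. A sketch, assuming the column layout produced by the writer above (heading, speaker, firm, speech order, speech text); the helper name is mine:

```python
import csv

# Sketch: read the generated CSV back into one record per speech.
# Assumed column layout (from the writer above):
# heading, speaker, firm, speech order, speech text.
def load_transcript(path):
    records = []
    with open(path, newline='') as f:
        for row in csv.reader(f):
            if len(row) < 5:
                continue  # skip malformed or partial rows
            heading, speaker, firm, order, speech = row[:5]
            records.append({
                'heading': heading,
                'speaker': speaker,
                'firm': firm,
                'order': int(order) if order.isdigit() else None,
                'speech': speech.strip(),
            })
    return records
```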

EDIT:

Please replace the function "writecell" with:

def writecell(out, data):
    data = data.replace('"', '""')
    out.write('"')
    out.write(data)
    out.write('"')
    out.write(",")

【Discussion】:

  • Your approach is structurally closest to what I asked for, and it handles all the provided sample files closely. But sometimes there is a comma after the company name, which throws off the output structure. I can accept the answer that best handles the samples in the --EDIT-- section of the question.
  • It can write directly to a CSV file, a dict, or a database.
  • Hi, I've updated my answer based on your feedback. Thanks for the feedback.
  • Thank you very much for your effort. Your approach is interesting and helped me understand my text-recognition task. However, I awarded the bounty to the top answer because it generalizes to other pattern-recognition tasks.
  • No problem. Happy to help. :)