【Question Title】: Creating a new corpus with NLTK
【Posted】: 2011-05-15 23:56:06
【Question Description】:

I reckoned that the answer to my title is often to go read the documentation, but I ran through the NLTK book and it doesn't give the answer. I'm kind of new to Python.

I have a bunch of .txt files, and I want to be able to use the corpus functions that NLTK provides for the corpora in nltk_data.

I've tried PlaintextCorpusReader, but I couldn't get further than:

>>> import nltk
>>> from nltk.corpus import PlaintextCorpusReader
>>> corpus_root = './'
>>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
>>> newcorpus.words()

How do I segment the newcorpus sentences using punkt? I tried the punkt functions, but they wouldn't accept a PlaintextCorpusReader as input.

Can you also guide me on how I can write the segmented data into text files?

【Question Discussion】:

    Tags: python nlp nltk corpus


    【Solution 1】:

    After a few years of figuring out how it works, here's the updated tutorial of:

    How to create an NLTK corpus with a directory of textfiles?

    The main idea is to make use of the nltk.corpus.reader package. In the case that you have a directory of textfiles in English, it's best to use the PlaintextCorpusReader.

    If your directory looks like this:

    newcorpus/
             file1.txt
             file2.txt
             ...
    

    Simply use these lines of code and you can get a corpus:

    import os
    from nltk.corpus.reader.plaintext import PlaintextCorpusReader
    
    corpusdir = 'newcorpus/' # Directory of corpus.
    
    newcorpus = PlaintextCorpusReader(corpusdir, '.*')
    

    NOTE: The PlaintextCorpusReader will use the default nltk.tokenize.sent_tokenize() and nltk.tokenize.word_tokenize() to split your texts into sentences and words; these functions are built for English, so they may NOT work for all languages.

    Here's the full code with the creation of test textfiles, how to create a corpus with NLTK, and how to access the corpus at different levels:

    import os
    from nltk.corpus.reader.plaintext import PlaintextCorpusReader
    
    # Let's create a corpus with 2 texts in different textfiles.
    txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
    txt2 = """Are you a foo bar? Yes I am. Possibly, everyone is.\n"""
    corpus = [txt1, txt2]
    
    # Make a new dir for the corpus.
    corpusdir = 'newcorpus/'
    if not os.path.isdir(corpusdir):
        os.mkdir(corpusdir)
    
    # Output the files into the directory.
    filename = 0
    for text in corpus:
        filename += 1
        with open(corpusdir + str(filename) + '.txt', 'w') as fout:
            print(text, file=fout)
    
    # Check that our corpus does exist and the files are correct.
    assert os.path.isdir(corpusdir)
    for infile, text in zip(sorted(os.listdir(corpusdir)), corpus):
        with open(corpusdir + infile, 'r') as fin:
            assert fin.read().strip() == text.strip()
    
    
    # Create a new corpus by specifying the parameters
    # (1) directory of the new corpus
    # (2) the fileids of the corpus
    # NOTE: in this case the fileids are simply the filenames.
    newcorpus = PlaintextCorpusReader('newcorpus/', '.*')
    
    # Access each file in the corpus.
    for infile in sorted(newcorpus.fileids()):
        print(infile)  # The fileids of each file.
        with newcorpus.open(infile) as fin:  # Opens the file.
            print(fin.read().strip())  # Prints the content of the file.
    print()
    
    # Access the plaintext; outputs pure string/basestring.
    print(newcorpus.raw().strip())
    print()
    
    # Access paragraphs in the corpus. (list of list of list of strings)
    # NOTE: NLTK automatically calls nltk.tokenize.sent_tokenize and
    #       nltk.tokenize.word_tokenize.
    #
    # Each element in the outermost list is a paragraph, and
    # each paragraph contains sentence(s), and
    # each sentence contains token(s).
    print(newcorpus.paras())
    print()
    
    # To access paragraphs of a specific fileid.
    print(newcorpus.paras(newcorpus.fileids()[0]))
    
    # Access sentences in the corpus. (list of list of strings)
    # NOTE: the texts are flattened into sentences that contain tokens.
    print(newcorpus.sents())
    print()
    
    # To access sentences of a specific fileid.
    print(newcorpus.sents(newcorpus.fileids()[0]))
    
    # Access just tokens/words in the corpus. (list of strings)
    print(newcorpus.words())
    
    # To access tokens of a specific fileid.
    print(newcorpus.words(newcorpus.fileids()[0]))

    Finally, to read a directory of texts and create an NLTK corpus in another language, you must first ensure that you have Python-callable word tokenization and sentence tokenization modules that take string/basestring input and produce such output:

    >>> from nltk.tokenize import sent_tokenize, word_tokenize
    >>> txt1 = """This is a foo bar sentence.\nAnd this is the first txtfile in the corpus."""
    >>> sent_tokenize(txt1)
    ['This is a foo bar sentence.', 'And this is the first txtfile in the corpus.']
    >>> word_tokenize(sent_tokenize(txt1)[0])
    ['This', 'is', 'a', 'foo', 'bar', 'sentence', '.']
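
    Here is a minimal sketch of wiring such tokenizers into the reader, plus writing the segmented data back to a textfile (the second part of the original question). MyWordTokenizer and MySentTokenizer are hypothetical stand-ins for your language's tokenizers; the reader only requires an object with a .tokenize(string) method, like NLTK's own tokenizer classes:

    from nltk.corpus.reader.plaintext import PlaintextCorpusReader
    
    # Hypothetical stand-ins for a non-English language's tokenizers.
    class MyWordTokenizer:
        def tokenize(self, s):
            return s.split()  # naive: split on whitespace
    
    class MySentTokenizer:
        def tokenize(self, s):
            return s.split('\n')  # naive: one sentence per line
    
    newcorpus = PlaintextCorpusReader('newcorpus/', '.*',
                                      word_tokenizer=MyWordTokenizer(),
                                      sent_tokenizer=MySentTokenizer())
    
    # Write the segmented sentences back out, one sentence per line.
    with open('newcorpus_segmented.txt', 'w') as fout:
        for sent in newcorpus.sents():
            fout.write(' '.join(sent) + '\n')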
    

    【Discussion】:

    • Thanks for the clarification. Many languages are supported by default, though.
    • If anyone gets an AttributeError: __exit__ error, use open() instead of with().
    【Solution 2】:

    I think the PlaintextCorpusReader already segments the input with a punkt tokenizer, at least if your input language is English.

    PlaintextCorpusReader's constructor:

    def __init__(self, root, fileids,
                 word_tokenizer=WordPunctTokenizer(),
                 sent_tokenizer=nltk.data.LazyLoader(
                     'tokenizers/punkt/english.pickle'),
                 para_block_reader=read_blankline_block,
                 encoding='utf8'):
    

    You can pass the reader a word and sentence tokenizer, but for the latter the default already is nltk.data.LazyLoader('tokenizers/punkt/english.pickle').

    For a single string, a tokenizer would be used as follows (explained here; see section 5 for the punkt tokenizer):

    >>> import nltk.data
    >>> text = """
    ... Punkt knows that the periods in Mr. Smith and Johann S. Bach
    ... do not mark sentence boundaries.  And sometimes sentences
    ... can start with non-capitalized words.  i is a good variable
    ... name.
    ... """
    >>> tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
    >>> tokenizer.tokenize(text.strip())
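
    Because the constructor exposes sent_tokenizer, you can also swap in a different punkt model when building the reader. A small sketch, assuming the punkt models are installed (nltk.download('punkt')) and that newcorpus/ holds, say, German textfiles:

    >>> import nltk.data
    >>> from nltk.corpus import PlaintextCorpusReader
    >>> german_tokenizer = nltk.data.LazyLoader('tokenizers/punkt/german.pickle')
    >>> newcorpus = PlaintextCorpusReader('newcorpus/', '.*',
    ...                                   sent_tokenizer=german_tokenizer)
    >>> newcorpus.sents()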
    

    【Discussion】:

    【Solution 3】:
     >>> import nltk
     >>> from nltk.corpus import PlaintextCorpusReader
     >>> corpus_root = './'
     >>> newcorpus = PlaintextCorpusReader(corpus_root, '.*')
     >>> # If the ./ dir contains the file my_corpus.txt, you can
     >>> # view, say, all its words by doing this:
     >>> newcorpus.words('my_corpus.txt')
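
    Since the reader applies the default punkt sentence tokenizer under the hood, the segmented sentences are available the same way (sticking with the my_corpus.txt example from above):

     >>> newcorpus.sents('my_corpus.txt')  # sentence-segmented view
     >>> newcorpus.paras('my_corpus.txt')  # paragraph-level view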
    

    【Discussion】:

    • Solved some problems for the Devanagari language.
    【Solution 4】:
    import os
    from nltk.corpus.reader.plaintext import PlaintextCorpusReader
    
    filecontent1 = "This is a cow"
    filecontent2 = "This is a Dog"
    
    # Write the two texts into the directory that will hold the corpus.
    corpusdir = 'nltk_data/'
    if not os.path.isdir(corpusdir):
        os.mkdir(corpusdir)
    with open(corpusdir + 'content1.txt', 'w') as text_file:
        text_file.write(filecontent1)
    with open(corpusdir + 'content2.txt', 'w') as text_file:
        text_file.write(filecontent2)
    
    text_corpus = PlaintextCorpusReader(corpusdir, ["content1.txt", "content2.txt"])
    
    # Count total and unique words per file.
    no_of_words_corpus1 = len(text_corpus.words("content1.txt"))
    print(no_of_words_corpus1)
    no_of_unique_words_corpus1 = len(set(text_corpus.words("content1.txt")))
    
    no_of_words_corpus2 = len(text_corpus.words("content2.txt"))
    no_of_unique_words_corpus2 = len(set(text_corpus.words("content2.txt")))
    
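    Continuing from the block above, the remaining counts can be printed, and a frequency distribution is a natural next step if you want per-token counts (a sketch using nltk.FreqDist):

    from nltk import FreqDist
    
    print(no_of_words_corpus2, no_of_unique_words_corpus1, no_of_unique_words_corpus2)
    
    # Most common tokens in the first file.
    fdist = FreqDist(text_corpus.words("content1.txt"))
    print(fdist.most_common(3))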
    

    【Discussion】:
