Title: How do I convert this corpus of words from an online book into a term document matrix?
Posted: 2022-01-21 12:51:24
Question:

Here is a snippet of my code:

library(gutenbergr)
library(tm)
Alice <- gutenberg_download(c(11))
Alice <- Corpus(VectorSource(Alice))
cleanAlice <- tm_map(Alice, removeWords, stopwords('english'))
cleanAlice <- tm_map(cleanAlice, removeWords, c('Alice'))
cleanAlice <- tm_map(cleanAlice, tolower)
cleanAlice <- tm_map(cleanAlice, removePunctuation)
cleanAlice <- tm_map(cleanAlice, stripWhitespace)
dtm1 <- TermDocumentMatrix(cleanAlice)
dtm1

Then I get the following error:

<<TermDocumentMatrix (terms: 3271, documents: 2)>>
Non-/sparse entries: 3271/3271
Sparsity           : 50%
Error in nchar(Terms(x), type = "chars") : 
  invalid multibyte string, element 12

How should I handle this? Should I convert the corpus to a plain text document first? Is there a problem with the formatting of the book's text?

Comments:

    Tags: r matrix text-mining


    Solution 1:

    Gutenbergr returns a data.frame, not a vector of text. You only need to adjust your code slightly and it should work fine: use VectorSource(Alice$text) instead of VectorSource(Alice).

    library(gutenbergr)
    library(tm)
    
    # don't overwrite your download when you are testing
    Alice <- gutenberg_download(c(11))
    
    # specify the column in the data.frame
    Alice_corpus <- Corpus(VectorSource(Alice$text))
    cleanAlice <- tm_map(Alice_corpus, removeWords, stopwords('english'))
    cleanAlice <- tm_map(cleanAlice, removeWords, c('Alice'))
    cleanAlice <- tm_map(cleanAlice, tolower)
    cleanAlice <- tm_map(cleanAlice, removePunctuation)
    cleanAlice <- tm_map(cleanAlice, stripWhitespace)
    dtm1 <- TermDocumentMatrix(cleanAlice)
    dtm1
    
    <<TermDocumentMatrix (terms: 3293, documents: 3380)>>
    Non-/sparse entries: 13649/11116691
    Sparsity           : 100%
    Maximal term length: 46
    Weighting          : term frequency (tf)
    

    P.S. You can ignore the warning messages the code produces.
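    As a quick sanity check on a TermDocumentMatrix, tm's findFreqTerms() and rowSums() let you summarise term frequencies. The sketch below uses a tiny toy corpus (hypothetical sentences, not the Alice download) so it runs offline; the same calls work on dtm1 from the code above:

```r
library(tm)

# A toy corpus standing in for the downloaded book text
docs <- c("the white rabbit ran", "down the rabbit hole", "mad tea party")
corp <- Corpus(VectorSource(docs))
tdm  <- TermDocumentMatrix(corp)

# Terms that appear at least twice across all documents
findFreqTerms(tdm, lowfreq = 2)

# Total frequency of each term, sorted in decreasing order
freq <- sort(rowSums(as.matrix(tdm)), decreasing = TRUE)
head(freq)
```

    Note that "rabbit" shows up in the frequent terms because it occurs in two documents; on the full book you would raise lowfreq accordingly.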

    Comments:
