【Question Title】: Parallel for loop in R
【Posted】: 2015-03-04 14:39:18
【Question】:

I have a data.frame sent with sentences in sent$words, and a dictionary of positive/negative words in the data.frame wordsDF (wordsDF[x,1]). A positive word scores 1, a negative word -1 (wordsDF[x,2]). The entries in wordsDF are sorted in descending order by string length, and my function below relies on that ordering.

How the function works:

1) For each sentence, count the occurrences of every entry stored in wordsDF.
2) Compute the sentiment score: (occurrences of a given wordsDF entry in the sentence) * (that entry's sentiment value: positive = 1, negative = -1).
3) Remove the matched words from the sentence before the next iteration.
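A concrete one-sentence walk-through of steps 1-3 (a minimal sketch with stringr, assuming the longest-first ordering described above):

```r
library(stringr)

sentence <- "great improvement for that notebook"

# Steps 1-2: the longest entry "great improvement" is tried first
count <- str_count(sentence, "great improvement")   # 1 match, sentiment value +1
score <- count * 1

# Step 3: remove the match so the shorter entries "great" and
# "improvement" cannot score the same characters again
sentence <- str_replace_all(sentence, "great improvement", " ")
str_count(sentence, "great")                        # now 0 - no double counting
```

This is why the descending length sort matters: if "great" were tried before "great improvement", the phrase could never match intact.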

Original solution with the stringr package:

scoreSentence_01 <- function(sentence){
  score <- 0
  for(x in 1:nrow(wordsDF)){
    count <- str_count(sentence, wordsDF[x,1])
    score <- (score + (count * wordsDF[x,2])) # compute score (count * sentValue)
    sentence <- str_replace_all(sentence, wordsDF[x,1], " ")
  }
  score
}

A faster solution - lines 4 and 5 below replace line 4 of the original solution:

scoreSentence_02 <- function(sentence){
  score <- 0
  for(x in 1:nrow(wordsDF)){
    sd <- function(text) {stri_count(text, regex=wordsDF[x,1])}
    results <- sapply(sentence, sd, USE.NAMES=F)
    score <- (score + (results * wordsDF[x,2])) # compute score (count * sentValue)
    sentence <- str_replace_all(sentence, wordsDF[x,1], " ")
  }
  score
}

The function is called with:

scoreSentence_Score <- scoreSentence_01(sent$words)

In practice I am working with a dataset of 300,000 sentences and a dictionary of 7,000 positive and negative words in total. This approach is very slow, and with my beginner-level knowledge of R programming I am stuck.

Could anyone help me rewrite this function as a vectorized or parallel solution, please? Any help or advice is much appreciated. Thank you very much.
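For reference, a minimal sketch of one parallel approach with base R's parallel package (mclapply forks worker processes on Unix-alikes; on Windows it runs serially, so use makeCluster/parLapply there instead):

```r
library(parallel)

# One chunk of sentences per core; each worker runs scoreSentence_01
# (defined above) on its chunk, and the scores are stitched back together.
n_cores  <- max(1L, detectCores() - 1L)
chunk_id <- cut(seq_along(sent$words), n_cores, labels = FALSE)
chunks   <- split(sent$words, chunk_id)
scores   <- unlist(mclapply(chunks, scoreSentence_01, mc.cores = n_cores),
                   use.names = FALSE)
```

Note this only parallelizes across sentences; every worker still runs the full inner loop over all 7,000 dictionary entries.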

Dummy data:

sent <- data.frame(words = c("great just great right size and i love this notebook", "benefits great laptop at the top",
                         "wouldnt bad notebook and very good", "very good quality", "bad orgtop but great",
                         "great improvement for that great improvement bad product but overall is not good", "notebook is not good but i love batterytop"), user = c(1,2,3,4,5,6,7),
                          stringsAsFactors=F)

posWords <- c("great","improvement","love","great improvement","very good","good","right","very","benefits",
          "extra","benefit","top","extraordinarily","extraordinary","super","benefits super","good","benefits great",
          "wouldnt bad")

negWords <- c("hate","bad","not good","horrible")

# Replicate the original data.frame - big-data simulation (70,000 rows of sentences)
sent <- sent[rep(seq_len(nrow(sent)), 10000), ]
sent$words <- paste("", sent$words, "")  # pad with spaces so boundary matches work
rownames(sent) <- NULL

# Ordering words in pos/negWords
wordsDF <- data.frame(words = posWords, value = 1,stringsAsFactors=F)
wordsDF <- rbind(wordsDF,data.frame(words = negWords, value = -1))
wordsDF$lengths <- nchar(wordsDF$words)        # string lengths
wordsDF <- wordsDF[order(-wordsDF[,3]),]       # longest entries first
wordsDF$words <- paste("", wordsDF$words, "")  # pad with spaces
rownames(wordsDF) <- NULL

The desired output is:

                                                                        words user scoreSentence_Score
                         great just great right size and i love this notebook    1                   4
                                             benefits great laptop at the top    2                   2
                                           wouldnt bad notebook and very good    3                   2
                                                            very good quality    4                   1
                                                         bad orgtop but great    5                   0
 great improvement for that great improvement bad product but overall is not good    6                   0
                                   notebook is not good but i love batterytop    7                   0

【Question Discussion】:

    Tags: r parallel-processing vectorization


    【Solution 1】:

    OK, now that I know you have to handle phrases as well as single words... here is another attempt. Basically, you have to split out the phrases first, score them, remove them from the strings, and then score the single words...

    library(stringr)
    sent <- data.frame(words = c("great just great right size and i love this notebook", "benefits great laptop at the top",
                                 "wouldnt bad notebook and very good", "very good quality", "bad orgtop but great",
                                 "great improvement for that great improvement bad product but overall is not good", "notebook is not good but i love batterytop"), user = c(1,2,3,4,5,6,7),
                       stringsAsFactors=F)
    
    posWords <- c("great","improvement","love","great improvement","very good","good","right","very","benefits",
                  "extra","benefit","top","extraordinarily","extraordinary","super","benefits super","good","benefits great",
                  "wouldnt bad")
    
    negWords <- c("hate","bad","not good","horrible")
    sent$words2 <- sent$words
    # split bad into words and phrases...
    bad_phrases <- negWords[grepl(" ", negWords)]
    bad_words <- negWords[!negWords %in% bad_phrases]
    bad_words <- paste0("\\b", bad_words, "\\b")
    pos_phrases <- posWords[grepl(" ", posWords)]
    pos_words <- posWords[!posWords %in% pos_phrases]
    pos_words <- paste0("\\b", pos_words, "\\b")
    score <- -str_count(sent$words2, paste(bad_phrases, collapse="|"))
    sent$words2 <- gsub(paste(bad_phrases, collapse="|"), "", sent$words2)
    score <- score + str_count(sent$words2, paste(pos_phrases, collapse="|"))
    sent$words2 <- gsub(paste(pos_phrases, collapse="|"), "", sent$words2)
    score <- score + str_count(sent$words2, paste(pos_words, collapse="|")) - str_count(sent$words2, paste(bad_words, collapse="|"))
    score
    

    【Discussion】:

    • I added some dummy data to the original question. Unfortunately, your approach does not produce the desired output.
    • With a small change it matches your desired output. In any case, this is the kind of approach you should be using... vectorized code instead of loops. You should be able to take it from here.
    • Thanks a lot, Cory... a very fast approach that saves me a lot of time.
    • Thank you very much for the excellent solution, but when I run it with the huge dictionary it fails with the following error message: Error: assertion 'tree->num_tags == num_tags' failed in executing regexp: file 'tre-compile.c', line 634
    • The problem is in this line: score
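    For reference, the tre-compile.c assertion comes from the regex engine failing on a single alternation built from thousands of dictionary entries. One common workaround (a sketch, not from this thread; count_in_batches is a hypothetical helper) is to split the alternation into smaller batches and sum the counts:

```r
library(stringr)

# Hypothetical helper: count matches of `words` in `sentences`,
# building a "|"-alternation from at most `batch_size` entries at a time
# so no single regex grows large enough to crash the engine.
count_in_batches <- function(sentences, words, batch_size = 500) {
  batches <- split(words, ceiling(seq_along(words) / batch_size))
  counts  <- lapply(batches, function(b)
    str_count(sentences, paste(b, collapse = "|")))
  Reduce(`+`, counts)
}
```

    The phrase/word scoring above could then call count_in_batches in place of each large str_count alternation.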
    【Solution 2】:

    Can't you do something like this:

    library("stringr")
    scoreSentence_Score <- str_count(sent$words, paste(posWords, collapse="|")) -
                           str_count(sent$words, paste(negWords, collapse="|"))
    

    【Discussion】:

    • I have the same or a very similar approach in my loop above, and I am trying to find a faster solution that avoids the for loop...
    • The solution I posted is vectorized - there is no loop.
    • But then you need to remove the matched words from the sentence, because you can get duplicates: e.g. a sentence can contain "great improvement", and if wordsDF contains c("great improvement", "great", "improvement"), then skipping the removal step means the same characters get counted more than once...
    • So wordsDF contains not only words but also phrases? Remove the phrases first. Boil it down to single words...
    • Yes, exactly. I use this approach because I need exact matches for both pos/neg words and phrases.
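    The double counting described in the comments is easy to reproduce (a minimal illustration with stringr):

```r
library(stringr)

s <- "great improvement"
str_count(s, "great improvement")   # 1
str_count(s, "great")               # 1 - the same characters again
str_count(s, "improvement")         # 1 - and again
# Counting all three dictionary entries scores this phrase 3 times,
# which is why matched text must be removed between passes.
```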