【Question Title】: How to fix NameError: name 'phrasedocs' is not defined
【Posted】: 2019-03-31 08:47:19
【Question】:

I am working on a classification task using the movie review dataset from Kaggle. The part I am struggling with is a series of functions, where the output of one becomes the input of the next.

Specifically, in the code provided, the function "word_token" takes the input "phraselist", tokenizes it, and returns a tokenized document called "phrasedocs". The only problem is that it doesn't seem to work, because when I feed that supposed document "phrasedocs" into the next function, "process_token", I get:

NameError: name 'phrasedocs' is not defined

I am entirely open to the possibility that I am overlooking something simple, but I have been at this for hours and cannot figure it out. I would appreciate any help.

I have tried proofreading and debugging the code, but my Python expertise is not great.

# This function obtains data from train.tsv

def processkaggle(dirPath, limitStr):
    # Convert the limit argument from a string to an int
    limit = int(limitStr)
    os.chdir(dirPath)
    f = open('./train.tsv', 'r')
    # Loop over lines in the file and use their first limit
    phrasedata = []
    for line in f:
        # Ignore the first line starting with Phrase, then read all lines
        if (not line.startswith('Phrase')):
            # Remove final end of line character
            line = line.strip()
            # Each line has four items, separated by tabs
            # Ignore the phrase and sentence IDs, keep the phrase and sentiment
            phrasedata.append(line.split('\t')[2:4])
    return phrasedata


# Randomize and subset data

def random_phrase(phrasedata):
    random.shuffle(phrasedata) # phrasedata initiated in function processkaggle
    phraselist = phrasedata[:limit]
    for phrase in phraselist[:10]:
        print(phrase)
    return phraselist


# Tokenization

def word_token(phraselist):
    phrasedocs=[]
    for phrase in phraselist:
        tokens=nltk.word_tokenize(phrase[0])
        phrasedocs.append((tokens, int(phrase[1])))
    return phrasedocs


# Pre-processing

# Convert all tokens to lower case
def lower_case(doc):
    return [w.lower() for w in doc]

# Clean text, fixing confusion over apostrophes
def clean_text(doc):
    cleantext=[]
    for review_text in doc:
        review_text = re.sub(r"it 's", "it is", review_text)
        review_text = re.sub(r"that 's", "that is", review_text)
        review_text = re.sub(r"\'s", "\'s", review_text)
        review_text = re.sub(r"\'ve", "have", review_text)
        review_text = re.sub(r"wo n't", "will not", review_text)
        review_text = re.sub(r"do n't", "do not", review_text)
        review_text = re.sub(r"ca n't", "can not", review_text)
        review_text = re.sub(r"sha n't", "shall not", review_text)
        review_text = re.sub(r"n\'t", "not", review_text)
        review_text = re.sub(r"\'re", "are", review_text)
        review_text = re.sub(r"\'d", "would", review_text)
        review_text = re.sub(r"\'ll", "will", review_text)
        cleantext.append(review_text)
    return cleantext

# Remove punctuation and numbers
def rem_no_punct(doc):
    remtext = []
    for text in doc:
        punctuation = re.compile(r'[-_.?!/\%@,":;\'{}<>~`()|0-9]')
        word = punctuation.sub("", text)
        remtext.append(word)
    return remtext

# Remove stopwords
def rem_stopword(doc):
    stopwords = nltk.corpus.stopwords.words('english')
    updatestopwords = [word for word in stopwords if word not in ['not','no','can','has','have','had','must','shan','do','should','was','were','won','are','cannot','does','ain','could','did','is','might','need','would']]
    return [w for w in doc if not w in updatestopwords]

# Lemmatization
def lemmatizer(doc):
    wnl = nltk.WordNetLemmatizer()
    lemma = [wnl.lemmatize(t) for t in doc]
    return lemma

# Stemming
def stemmer(doc):
    porter = nltk.PorterStemmer()
    stem = [porter.stem(t) for t in doc]
    return stem

# This function combines all the previous pre-processing functions into one, which is helpful
#   if I want to alter these settings for experimentation later

def process_token(phrasedocs):
    phrasedocs2 = []
    for phrase in phrasedocs:
        tokens = nltk.word_tokenize(phrase[0])
        tokens = lower_case(tokens)
        tokens = clean_text(tokens)
        tokens = rem_no_punct(tokens)
        tokens = rem_stopword(tokens)
        tokens = lemmatizer(tokens)
        tokens = stemmer(tokens)
        phrasedocs2.append((tokens, int(phrase[1]))) # Any words that pass through the processing
                                                        # steps above are added to phrasedocs2
    return phrasedocs2


dirPath = 'C:/Users/J/kagglemoviereviews/corpus'
processkaggle(dirPath, 5000) # returns 'phrasedata'
random_phrase(phrasedata) # returns 'phraselist'
word_token(phraselist) # returns 'phrasedocs'
process_token(phrasedocs) # returns phrasedocs2


NameError                                 Traceback (most recent call last)
<ipython-input-120-595bc4dcf121> in <module>()
      5 random_phrase(phrasedata) # returns 'phraselist'
      6 word_token(phraselist) # returns 'phrasedocs'
----> 7 process_token(phrasedocs) # returns phrasedocs2
      8 
      9 

NameError: name 'phrasedocs' is not defined

【Comments】:

  • You never defined it.
  • You never assigned anything to the name phrasedocs.
  • I just assumed that when I run a function, it defines the variables it contains and assigns them the values given by the `return` statement. I'm not good at Python, which is why I keep trying to use it the way I use R.

Tags: python nlp


【Solution 1】:

You only defined "phrasedocs" inside a function, where it is not visible from the outside, and each function's return value needs to be captured in a variable. Edit your code as follows:

dirPath = 'C:/Users/J/kagglemoviereviews/corpus'
phrasedata = processkaggle(dirPath, 5000) # returns 'phrasedata'
phraselist = random_phrase(phrasedata) # returns 'phraselist'
phrasedocs = word_token(phraselist) # returns 'phrasedocs'
phrasedocs2 = process_token(phrasedocs) # returns phrasedocs2
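The pattern above (capture each return value, then pass it on) can also be written by nesting the calls directly, with no intermediate names at all. A minimal sketch with made-up toy functions (not the asker's actual pipeline) shows that both styles are equivalent:

```python
# Toy stand-ins for the asker's pipeline stages (hypothetical names):
def tokenize(phrases):
    # Split each phrase string into a list of tokens
    return [p.split() for p in phrases]

def lowercase_all(docs):
    # Lowercase every token in every document
    return [[w.lower() for w in doc] for doc in docs]

# Style 1: capture each return value in a variable, as in the fix above
docs = tokenize(["Good Movie", "Bad Plot"])
processed = lowercase_all(docs)

# Style 2: chain the calls directly, no intermediate variable
processed2 = lowercase_all(tokenize(["Good Movie", "Bad Plot"]))

print(processed)   # → [['good', 'movie'], ['bad', 'plot']]
print(processed == processed2)  # → True
```

Either way, the value flows between stages explicitly; calling a function and discarding its return value does nothing.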

【Comments】:

【Solution 2】:

You only created the variable phrasedocs inside a function. Therefore that variable is not defined for any code outside the function. When you then pass that name as input to the next function, Python cannot find any variable so named. You have to create a variable called phrasedocs in your main code.
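A minimal sketch of that scoping rule, reduced to a toy function (hypothetical names, not the asker's data):

```python
def word_token_demo():
    # 'phrasedocs' is a local name; it disappears when the function returns
    phrasedocs = ["a", "b"]
    return phrasedocs

# The returned VALUE survives, but only if you bind it to a name out here:
result = word_token_demo()
print(result)  # → ['a', 'b']

# Referring to the function's local name from outside fails:
# print(phrasedocs)  # NameError: name 'phrasedocs' is not defined
```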

【Comments】:
