【Question Title】:How can I preprocess NLP text (lowercase, remove special characters, remove numbers, remove emails, etc) in one pass?
【Posted】:2019-01-28 06:11:11
【Question】:

How can I preprocess NLP text (lowercase, remove special characters, remove numbers, remove emails, etc.) in one pass using Python?

Here are all the things I want to do to a Pandas dataframe in one pass in Python:
1. Lowercase text
2. Remove whitespace
3. Remove numbers
4. Remove special characters
5. Remove emails
6. Remove stop words
7. Remove NAN
8. Remove weblinks
9. Expand contractions (if possible not necessary)
10. Tokenize

Here is how I do it personally:

    def preprocess(self, dataframe):
        self.log.info("In preprocess function.")

        dataframe1 = self.remove_nan(dataframe)
        dataframe2 = self.lowercase(dataframe1)
        dataframe3 = self.remove_whitespace(dataframe2)

        # Remove emails and websites before removing special characters
        dataframe4 = self.remove_emails(dataframe3)
        dataframe5 = self.remove_website_links(dataframe4)

        dataframe6 = self.remove_special_characters(dataframe5)
        dataframe7 = self.remove_numbers(dataframe6)
        dataframe8 = self.remove_stop_words(dataframe7)
        dataframe9 = self.tokenize(dataframe8)

        self.log.info(f"Sample of preprocessed data: {dataframe9.head()}")

        return dataframe9

def remove_nan(self, dataframe):
    """Pass in a dataframe to remove NAN from those columns."""
    return dataframe.dropna()

def lowercase(self, dataframe):
    logging.info("Converting dataframe to lowercase")
    lowercase_dataframe = dataframe.str.lower()
    return lowercase_dataframe


def remove_special_characters(self, dataframe):
    self.log.info("Removing special characters from dataframe")
    no_special_characters = dataframe.replace(r'[^A-Za-z0-9 ]+', '', regex=True)
    return no_special_characters

def remove_numbers(self, dataframe):
    self.log.info("Removing numbers from dataframe")
    removed_numbers = dataframe.str.replace(r'\d+', '', regex=True)
    return removed_numbers

def remove_whitespace(self, dataframe):
    self.log.info("Removing whitespace from dataframe")
    # replace more than 1 space with 1 space
    merged_spaces = dataframe.str.replace(r"\s\s+", ' ', regex=True)
    # delete beginning and trailing spaces
    trimmed_spaces = merged_spaces.str.strip()
    return trimmed_spaces

def remove_stop_words(self, dataframe):
    # TODO: An option to pass in a custom list of stopwords would be cool.
    stop_words = set(stopwords.words('english'))
    no_stop_words = dataframe.apply(
        lambda text: ' '.join(word for word in text.split() if word not in stop_words))
    return no_stop_words

def remove_website_links(self, dataframe):
    self.log.info("Removing website links from dataframe")
    no_website_links = dataframe.str.replace(r"http\S+", "", regex=True)
    return no_website_links

def tokenize(self, dataframe):
    tokenized_dataframe = dataframe.apply(lambda row: word_tokenize(row))
    return tokenized_dataframe

def remove_emails(self, dataframe):
    no_emails = dataframe.str.replace(r"\S*@\S*\s?", "", regex=True)
    return no_emails

def expand_contractions(self, dataframe):
    # TODO: Not a priority right now. Come back to it later.
    return dataframe

【Comments】:

  • Use df.apply(preprocess)
  • Use an NLP package such as spaCy, or similar.
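The `df.apply` suggestion can be sketched as one chained pass over a string column. This is a sketch only: the column name `text`, the regex patterns, and the tiny stop-word set are assumptions, with a plain `str.split` standing in for an NLTK tokenizer.

```python
import pandas as pd

STOPWORDS = {"the", "a", "an", "and", "you", "can"}  # stand-in for nltk's list

def preprocess(s):
    # One chained pass over a string Series: drop NaN, lowercase,
    # strip links, emails, specials/numbers, then tokenize and
    # filter stop words.
    return (s.dropna()
             .str.lower()
             .str.replace(r"http\S+", "", regex=True)     # web links
             .str.replace(r"\S*@\S*\s?", "", regex=True)  # emails
             .str.replace(r"[^a-z\s]+", "", regex=True)   # specials and digits
             .str.split()                                 # tokenize
             .apply(lambda toks: [t for t in toks if t not in STOPWORDS]))

df = pd.DataFrame({"text": ["You CAN email me@x.com or visit http://x.com!", None]})
print(preprocess(df["text"]).tolist())  # [['email', 'or', 'visit']]
```

Each step returns a new Series, so the whole pipeline reads top to bottom as a single expression rather than a chain of intermediate dataframes.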

Tags: python pandas nlp


【Solution 1】:

The following function does all of the things you mentioned.

import nltk
from nltk.tokenize import RegexpTokenizer
from nltk.stem import WordNetLemmatizer, PorterStemmer
from nltk.corpus import stopwords
import re

# requires: nltk.download('stopwords') and nltk.download('wordnet')
lemmatizer = WordNetLemmatizer()
stemmer = PorterStemmer()

 def preprocess(sentence):
    sentence=str(sentence)
    sentence = sentence.lower()
    sentence=sentence.replace('{html}',"") 
    cleanr = re.compile('<.*?>')
    cleantext = re.sub(cleanr, '', sentence)
    rem_url=re.sub(r'http\S+', '',cleantext)
    rem_num = re.sub('[0-9]+', '', rem_url)
    tokenizer = RegexpTokenizer(r'\w+')
    tokens = tokenizer.tokenize(rem_num)  
    filtered_words = [w for w in tokens if len(w) > 2 if not w in stopwords.words('english')]
    stem_words=[stemmer.stem(w) for w in filtered_words]
    lemma_words=[lemmatizer.lemmatize(w) for w in stem_words]
    return " ".join(filtered_words)


df['cleanText'] = df['Text'].map(preprocess)

【Comments】:

  • For text like: "You can follow the 1, 2, 3 steps", I want to replace the numbers with a single NUM tag, but with the code above I get "You can follow the NUM NUM NUM steps".
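One way to get a single tag for a whole run of numbers (a sketch; the separator pattern is an assumption) is to match the run, separators included, rather than each number individually:

```python
import re

def collapse_numbers(text):
    # Replace any run of digits, optionally separated by whitespace,
    # commas, or the Chinese enumeration comma, with one NUM token.
    return re.sub(r'\d+(?:[\s,、]+\d+)*', 'NUM', text)

print(collapse_numbers("You can follow the 1, 2, 3 steps"))
# You can follow the NUM steps
```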
【Solution 2】:

I decided to use Dask, which lets you parallelize Python tasks on your local machine and plays well with Pandas, NumPy, and scikit-learn: http://docs.dask.org/en/latest/why.html

【Comments】:

【Solution 3】:

Without a sample dataframe I can't guarantee the code is exactly right, but as the comments mentioned, apply seems like the best option here. Something like

    def preprocess_text(s):
        s = s.str.lower()
        s = s.fillna(fill_value)
        return s


which you can call with

    # make sure only your string columns are of dtype object;
    # numbers can be numeric, datetimes can be datetime, etc.
    str_columns = df.select_dtypes(include='object').columns
    df[str_columns] = df[str_columns].apply(preprocess_text)
    

Again, it's hard to be more specific without a sample dataframe, but this approach should work.
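Fleshing out that calling pattern into a runnable toy example (the value of `fill_value`, the column names, and the sample data are all assumptions):

```python
import pandas as pd

fill_value = ""  # assumed placeholder for missing text

def preprocess_text(s):
    # s is one string column (a Series); fill missing values,
    # lowercase, and trim surrounding whitespace.
    return (s.fillna(fill_value)
             .str.lower()
             .str.strip())

df = pd.DataFrame({"text": ["  Hello WORLD  ", None], "n": [1, 2]})
# only object-dtype (string) columns are preprocessed; "n" stays numeric
str_columns = df.select_dtypes(include="object").columns
df[str_columns] = df[str_columns].apply(preprocess_text)
print(df["text"].tolist())  # ['hello world', '']
```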

【Comments】:
