【Title】: OSError: [Errno 22] Invalid argument (fails randomly while processing a file)
【Posted】: 2019-08-07 14:25:37
【Question】:

I run this Python code to read files and upload the data to the engine. It runs fine, but then suddenly fails partway through and throws an error. I did some research but could not find a working solution. Here is the error:

--- Logging error ---
Traceback (most recent call last):
  File "C:\Python36\lib\logging\__init__.py", line 998, in emit
    self.flush()
  File "C:\Python36\lib\logging\__init__.py", line 978, in flush
    self.stream.flush()
OSError: [Errno 22] Invalid argument

Below is the code:

import argparse
import httplib2
import numpy as np
import pprint
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from oauth2client import GOOGLE_TOKEN_URI
from oauth2client.client import OAuth2Credentials, HttpAccessTokenRefreshError
import pandas as pd
from datetime import date, timedelta
from dateutil.parser import parse
import time
import os
import json
import datetime
import logging
from datetime import datetime
import pysftp
import warnings
header = []
final_report = ""

logging.basicConfig(filename='Logs/DialogTech_To_DS3' + date.today().strftime("%Y.%m.%d"), level=logging.INFO)
def create_credentials(client_id, client_secret, refresh_token):
    """Create Google OAuth2 credentials.

    Returns:
        OAuth2Credentials
    """
    return OAuth2Credentials(access_token=None,
                             client_id=client_id,
                             client_secret=client_secret,
                             refresh_token=refresh_token,
                             token_expiry=None,
                             token_uri=GOOGLE_TOKEN_URI,
                             user_agent=None)

def get_service(http):
    # (Function header restored here; the original paste lost the enclosing def.)
    service = build('doubleclicksearch', 'v2', http=http)
    return service

for filename in os.listdir('J:/SharedFolder/Feeds/Data/'):
    file = 'J:/SharedFolder/Feeds/Data/' + filename
    if filename.startswith('Daily_'):
        print(filename)
        file_name = filename
        logging.info("Uploading Conversions from " + filename)
        columns = ['Timestamp', 'GCLID', 'camp', 'OrderID', 'Orders', 'Revenue',
                   'OrderLevelDiscount', 'Units', 'OutOfStockViews', 'ScorecardApplied', 'StoreLocator']

        data = pd.read_csv(file, delimiter='\t')
        data['Revenue'] = data['Revenue'].map(lambda x: '{:.2f}'.format(x))
        data['OrderID'] = data['OrderID'].map(lambda x: '{:.0f}'.format(x))
        #data['OrderID'] = data['OrderID'].apply(lambda x: int(x) if "." in str(x) else x)

        pd.set_option('display.max_columns', 500)
        pd.set_option('display.width', 1000)

        dir = 'J:/SharedFolder/Feeds/Data/'
        # data.to_csv(dir + 'FNS_' + filename.replace('Daily_', '').replace('.txt', '') + '.csv')

        print(data.head(data['Timestamp'].count()))
        print(data['Timestamp'].count())

        for index, row in data.iterrows():
            dt = parse(row['Timestamp'])
            millisecond = int(round(dt.timestamp() * 1000))
            #print(row)

            if row['Orders'] > 0:
                order_revenue_upload(service, row['GCLID'], str(row['OrderID']) + str(index), millisecond, row['Revenue'], row['Orders'])
            if row['OrderLevelDiscount'] > 0:
                order_level_discount_upload(service, row['GCLID'], str(row['OrderID']) + "_OLD_" + str(index), millisecond, row['OrderLevelDiscount'])
            if row['Units'] > 0:
                units_upload(service, row['GCLID'], str(row['OrderID']) + "_U_" + str(index), millisecond, row['Units'])
            if row['OutOfStockViews'] > 0:
                out_of_stock_views_upload(service, row['GCLID'], str(row['OrderID']) + "_OOSV_" + str(index), millisecond, row['OutOfStockViews'])
            if row['ScorecardApplied'] > 0:
                score_card_applied_upload(service, row['GCLID'], str(row['OrderID']) + "_SCA_" + str(index), millisecond, row['ScorecardApplied'])
            if row['StoreLocator'] > 0:
                store_locator_upload(service, row['GCLID'], str(row['OrderID']) + "_SL_" + str(index), millisecond, row['StoreLocator'])

        os.rename(file, 'J:/SharedFolder/Feeds/Data/' + file_name)
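The millisecond conversion done inside the upload loop can be checked in isolation. This sketch uses a fixed example timestamp and the stdlib `datetime` in place of `dateutil.parser.parse`, which the script imports:

```python
from datetime import datetime, timezone

# The loop turns each parsed Timestamp into epoch milliseconds for the
# conversion upload; the same arithmetic on a fixed UTC timestamp:
dt = datetime(2019, 8, 7, 14, 25, 37, tzinfo=timezone.utc)
millisecond = int(round(dt.timestamp() * 1000))
# 2019-08-07 14:25:37 UTC is 1565187937 seconds after the epoch
assert millisecond == 1565187937000
```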

【Comments】:

  • Is it possible that one of the file names you are logging cannot be encoded with the file system's default encoding (as defined here)?
  • I don't think so. The file is accessible.

Tags: python json scripting


【Solution 1】:

You are on Windows, where the path separator is the backslash `\`. In Python, however, the backslash is the escape character, so you need to use forward slashes, raw strings, or escaped backslashes. Change paths like this:

'J:/SharedFolder/Feeds/Data/'

to this (note that a raw string cannot end with a backslash, so the trailing separator is appended separately):

r'J:\SharedFolder\Feeds\Data' + '\\'

'J:\\SharedFolder\\Feeds\\Data\\'
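The escaping pitfall this answer describes can be demonstrated directly; the paths below are illustrative, not the asker's real directories:

```python
import os

# '\t' and '\n' inside a normal string are control characters, so a path
# written as 'J:\temp\new' silently becomes 'J:<TAB>emp<NEWLINE>ew'.
plain = 'J:\temp\new'

# The safe spellings all produce the same path text:
raw = r'J:\temp\new-dir'        # raw string (must not end in a backslash)
escaped = 'J:\\temp\\new-dir'   # escaped backslashes
forward = 'J:/temp/new-dir'     # forward slashes also work on Windows

# os.path.join sidesteps the separator question entirely:
joined = os.path.join('J:/SharedFolder', 'Feeds', 'Data')
```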

【Comments】:

  • But if that were the case, it should fail right at the start, right? Instead it fails randomly after processing a few rows.
  • @Moh'dAnsar Do what I suggested and test it. Does it still throw the same error?
  • I made the changes and did a trial run. It is running now; I will update if it errors out.
  • @Moh'dAnsar Could you share the full stack trace Python shows for the error? I think the error you posted in the question is incomplete...
  • Thanks for your answer, it solved my problem.
【Solution 2】:

After a lot of research I found a solution that works for me. Instead of using

  logging.basicConfig(filename='Logs/DialogTech_To_DS3' + date.today().strftime("%Y.%m.%d"), level=logging.INFO)

I used

  folder = '//MappedDrive/Share/Logs\\DialogTechToDS3 test '

  logging.basicConfig(level=logging.INFO, filename=folder + date.today().strftime("%m.%d.%Y")+'.log', filemode='w')

【Comments】:
