【Posted】:2019-12-05 11:12:25
【Problem description】:
I want to create an endpoint in AWS SageMaker for a scikit-learn logistic regression model. I have a train.py file containing the training code for the SageMaker scikit-learn container.
import subprocess as sb
import pandas as pd
import numpy as np
import pickle, json
import sys

def install(package):
    sb.call([sys.executable, "-m", "pip", "install", package])

install('s3fs')

import argparse
import os

if __name__ == '__main__':
    parser = argparse.ArgumentParser()

    # hyperparameters sent by the client are passed as command-line arguments to the script.
    parser.add_argument('--solver', type=str, default='liblinear')

    # Data, model, and output directories
    parser.add_argument('--output_data_dir', type=str, default=os.environ.get('SM_OUTPUT_DIR'))
    parser.add_argument('--model_dir', type=str, default=os.environ.get('SM_MODEL_DIR'))
    parser.add_argument('--train', type=str, default=os.environ.get('SM_CHANNEL_TRAIN'))

    args, _ = parser.parse_known_args()

    # ... load from args.train and args.test, train a model, write model to args.model_dir.
    input_files = [os.path.join(args.train, file) for file in os.listdir(args.train)]
    if len(input_files) == 0:
        raise ValueError(('There are no files in {}.\n' +
                          'This usually indicates that the channel ({}) was incorrectly specified,\n' +
                          'the data specification in S3 was incorrectly specified or the role specified\n' +
                          'does not have permission to access the data.').format(args.train, "train"))

    raw_data = [pd.read_csv(file, header=None, engine="python") for file in input_files]
    df = pd.concat(raw_data)

    y = df.iloc[:, 0]
    X = df.iloc[:, 1:]

    solver = args.solver

    from sklearn.linear_model import LogisticRegression
    lr = LogisticRegression(solver=solver).fit(X, y)

from sklearn.externals import joblib

def model_fn(model_dir):
    lr = joblib.dump(lr, "model.joblib")
    return lr
In my SageMaker notebook I ran the following code:
import os
import boto3
import re
import copy
import time
from time import gmtime, strftime
from sagemaker import get_execution_role
import sagemaker
role = get_execution_role()
region = boto3.Session().region_name
bucket=<bucket> # Replace with your s3 bucket name
prefix = <prefix>
output_path = 's3://{}/{}/{}'.format(bucket, prefix,'output_data_dir')
train_data = 's3://{}/{}/{}'.format(bucket, prefix, 'train')
train_channel = sagemaker.session.s3_input(train_data, content_type='text/csv')
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
    entry_point='train.py',
    train_instance_type="ml.m4.xlarge",
    role=role,
    output_path=output_path,
    sagemaker_session=sagemaker.Session(),
    hyperparameters={'solver': 'liblinear'})
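(Aside on where the source archive lands: the SageMaker Python SDK's framework estimators also accept a code_location argument, an S3 URI prefix under which sourcedir.tar.gz is uploaded; when it is omitted, the SDK picks its own default location in the bucket. A hedged configuration sketch reusing the bucket/prefix variables above; verify the parameter name against your SDK version:)

```python
sklearn = SKLearn(
    entry_point='train.py',
    train_instance_type="ml.m4.xlarge",
    role=role,
    output_path=output_path,
    # upload source/sourcedir.tar.gz under the same prefix as the other artifacts
    code_location='s3://{}/{}'.format(bucket, prefix),
    sagemaker_session=sagemaker.Session(),
    hyperparameters={'solver': 'liblinear'})
```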
Then I fit the model:
sklearn.fit({'train': train_channel})
Now, to create the endpoint:
from sagemaker.predictor import csv_serializer
predictor = sklearn.deploy(1, 'ml.m4.xlarge')
While trying to create the endpoint, it throws:
ClientError: An error occurred (ValidationException) when calling the CreateModel operation: Could not find model data at s3://<bucket>/<prefix>/output_data_dir/sagemaker-scikit-learn-x-y-z-000/output/model.tar.gz.
I checked my S3 bucket. Inside my output_data_dir there is a sagemaker-scikit-learn-x-y-z-000 directory containing a debug-output/training_job_end.ts file. An additional directory, also named sagemaker-scikit-learn-x-y-z-000 and containing a source/sourcedir.tar.gz file, was created outside my <prefix> folder. Normally, whenever I train a model with a SageMaker built-in algorithm, a file of the form output_data_dir/sagemaker-scikit-learn-x-y-z-000/output/model.tar.gz is created. Can anyone tell me where my scikit-learn model is stored, how to make source/sourcedir.tar.gz go under my prefix without moving it manually, and how to view the contents of sourcedir.tar.gz?
Edit: to elaborate on the prefix question. Whenever I run sklearn.fit(), two objects with the same name sagemaker-scikit-learn-x-y-z-000 are created in my S3 bucket. One is created at <bucket>/<prefix>/output_data_dir/sagemaker-scikit-learn-x-y-z-000/debug-output/training_job_end.ts, and the other at <bucket>/sagemaker-scikit-learn-x-y-z-000/source/sourcedir.tar.gz. Why is the second one not created under my <prefix> like the first? And what is inside the sourcedir.tar.gz file?
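(On the last point: sourcedir.tar.gz is an ordinary gzipped tar archive holding the entry_point script, plus any source_dir files, that the SDK uploads so the training container can fetch it. After downloading it, e.g. with aws s3 cp, the standard-library tarfile module can list its members. A self-contained sketch, using a locally built stand-in archive in place of the real download:)

```python
import io
import tarfile

# build a stand-in archive containing a train.py, much as the SDK would
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w:gz") as tar:
    script = b"print('training')\n"
    info = tarfile.TarInfo(name="train.py")
    info.size = len(script)
    tar.addfile(info, io.BytesIO(script))
buf.seek(0)

# list the members, exactly as one would for the downloaded sourcedir.tar.gz
# (pass the local file path instead of fileobj=... for a real file)
with tarfile.open(fileobj=buf, mode="r:gz") as tar:
    names = tar.getnames()

print(names)  # ['train.py']
```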
【Discussion】:
Tags: python amazon-web-services amazon-s3 scikit-learn amazon-sagemaker