[Posted]: 2021-04-19 11:21:53
[Problem description]:
Below is the code I'm working with, but my accuracy is always below 50%, so I'd like to know how to fix this. What I'm trying to do is use the first 1,885 days of daily unit sales data as input, and the remaining daily unit sales data as output. After training on this data, I need to use the model to forecast the next 20-odd days of daily unit sales. The data I'm using is provided at this link: https://drive.google.com/file/d/13qzIZMD6Wz7e1GpOsNw1_9Yq-4PI2HrC/view?usp=sharing
import pandas as pd
import numpy as np
import keras
import keras.backend as k
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.callbacks import EarlyStopping
from sklearn import preprocessing
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
data = pd.read_csv('sales_train.csv')
#Since there are 3 departments and 10 stores from 3 different areas, I categorized the data into 30 groups and encoded them numerically
Unique_dept = data["dept_id"].unique()
Unique_state = data['state_id'].unique()
Unique_store = data["store_id"].unique()
data0 = data.copy()
for i in range(3):
    data0["dept_id"] = data0["dept_id"].replace(to_replace=Unique_dept[i], value=i)
    data0["state_id"] = data0["state_id"].replace(to_replace=Unique_state[i], value=i)
for j in range(10):
    data0["store_id"] = data0["store_id"].replace(to_replace=Unique_store[j], value=int(Unique_store[j][3]) - 1)
# Select the three numerized categorical variables and daily unit sale data
pt = 6 + 1885
X = pd.concat([data0.iloc[:,2],data0.iloc[:, 4:pt]], axis = 1)
Y = data0.iloc[:, pt:]
# Remove the daily unit sale data that are highly correlated to each other (corr > 0.9)
correlation = X.corr(method = 'pearson')
corr_lst = []
for i in correlation:
    for j in correlation:
        if (i != j) & (correlation[i][j] >= 0.9) & (j not in corr_lst) & (i not in corr_lst):
            corr_lst.append(i)
x = X.drop(corr_lst, axis = 1)
x_value = x.values
y_value = Y.values
sc = StandardScaler()
X_scale = sc.fit_transform(x_value)
X_train, X_val_and_test, Y_train, Y_val_and_test = train_test_split(X_scale, y_value, test_size=0.2)
X_val, X_test, Y_val, Y_test = train_test_split(X_val_and_test, Y_val_and_test, test_size=0.5)
print(X_train.shape, X_val.shape, X_test.shape, Y_train.shape, Y_val.shape, Y_test.shape)
#create model
model = Sequential()
#get number of columns in training data
n_cols = X_train.shape[1]
#add model layers
model.add(Dense(32, activation='softmax', input_shape=(n_cols,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(32, activation='softmax'))
model.add(Dense(1))
#compile model using mean absolute error as a measure of model performance
model.compile(optimizer='Adagrad', loss= "mean_absolute_error", metrics = ['accuracy'])
#set early stopping monitor so the model stops training when it won't improve anymore
early_stopping_monitor = EarlyStopping(patience=20)
#train model
model.fit(X_train, Y_train, batch_size=32, epochs=10, validation_data=(X_val, Y_val), callbacks=[early_stopping_monitor])
The loss plot also looks strange:
[Discussion]:
-
Hi Paul, welcome. Could you explain a bit more about what you're trying to do? What is the input data? What is the training target? Have you visualized the input data and the labels to make sure they are correct?
-
Hi MPA! Thanks for the reply. The input to my model is the daily sales data from day 1 to day 1885, and the output is the predicted daily sales from day 1886 to day 1913. However, I don't know how to train my model so that it best fits the provided output data. There are also some categorical variables such as department, location, etc. I simply encoded them as numbers, but I'm not sure whether I'm on the right track.
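Side note on the categorical encoding: the manual replace loops in the question can be expressed with pandas' factorize, which assigns integer codes in order of first appearance (a sketch on toy labels, not the actual CSV):

```python
import pandas as pd

# Toy stand-in for the real columns; these category labels are assumptions.
df = pd.DataFrame({'dept_id': ['FOODS_1', 'HOBBIES_1', 'FOODS_1'],
                   'state_id': ['CA', 'TX', 'CA']})

# pd.factorize assigns integer codes 0..k-1 in order of first appearance,
# which matches what the replace loops in the question do by hand.
df['dept_id'] = pd.factorize(df['dept_id'])[0]
df['state_id'] = pd.factorize(df['state_id'])[0]
print(df['dept_id'].tolist())   # [0, 1, 0]
print(df['state_id'].tolist())  # [0, 1, 0]
```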
-
To get constructive replies (and upvotes), it helps to provide detailed background on the problem you want to solve. From your loss curves it looks like your model is actually learning something, so you could try training it for longer. Softmax seems like the wrong activation function for your problem (it outputs probabilities that sum to 1). For daily unit sales (>= 0), stick with ReLU.
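The softmax point can be checked numerically: softmax turns a layer's outputs into a probability vector summing to 1, so no unit can ever exceed 1 — a poor fit for non-negative, unbounded sales counts. A standalone numpy check, not part of the model above:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

# Pre-activations that would need to encode large daily sales counts
z = np.array([3.0, 50.0, 120.0])
out = softmax(z)
print(np.isclose(out.sum(), 1.0))   # True: outputs form a probability vector
print(bool(out.max() <= 1.0))       # True: no unit can represent a count > 1
```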
-
For forecasting problems, people have had some success with e.g. recurrent neural networks and LSTMs. If you think there are underlying dynamics that you want to discover and use in your forecasts, also have a look at Steve Brunton's lectures on Koopman analysis.
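If you do try an RNN/LSTM, the main mechanical change is that Keras recurrent layers expect 3-D input of shape (samples, timesteps, features) rather than the 2-D matrix used above. A sketch of that reshape with numpy; the shapes are illustrative, not taken from the actual file:

```python
import numpy as np

# Stand-in for the sales matrix: 100 series, 50 days of history each
X = np.random.rand(100, 50)

# Treat each day as one timestep with a single feature (the unit sales),
# giving the (samples, timesteps, features) layout recurrent layers expect.
X_seq = X.reshape(X.shape[0], X.shape[1], 1)
print(X_seq.shape)   # (100, 50, 1)
```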
Tags: machine-learning keras neural-network