Airbnb Price Prediction Task
Reading the Data
import pandas as pd
features = ['accommodates','bedrooms','bathrooms','beds','price','minimum_nights','maximum_nights','number_of_reviews']
dc_listings = pd.read_csv('listings.csv')
dc_listings = dc_listings[features]
print(dc_listings.shape)
dc_listings.head()
Features:
- accommodates: number of guests the listing can hold
- bedrooms: number of bedrooms
- bathrooms: number of bathrooms
- beds: number of beds
- price: nightly price
- minimum_nights: minimum number of nights per stay
- maximum_nights: maximum number of nights per stay
- number_of_reviews: number of reviews
If I have a one-bedroom house, how much could I rent it for?
K is the number of candidate neighbors we consider: we look up the prices of the other listings whose room count is closest to ours.
How K-Nearest Neighbors Works
Suppose our data source contains only 5 listings, and we want to set a price for our own house (which has just one bedroom).
Here we choose K = 3, i.e., we pick the 3 listings most similar to ours.
Averaging these three then gives a rough estimate of what our house is worth!
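The steps above can be sketched on a tiny toy dataset of 5 listings (the numbers here are hypothetical, not from listings.csv):

```python
import numpy as np

# Hypothetical toy data: (accommodates, nightly price) for 5 listings
toy = np.array([[1, 80.0], [2, 120.0], [1, 95.0], [4, 250.0], [1, 70.0]])

our_accommodates = 1
k = 3

# Distance: absolute difference in accommodates
distances = np.abs(toy[:, 0] - our_accommodates)

# Indices of the K nearest listings (stable sort keeps ties in input order)
nearest = np.argsort(distances, kind='stable')[:k]

# Predicted price: mean of the K nearest listings' prices
predicted = toy[nearest, 1].mean()
print(predicted)  # mean of 80, 95, 70 -> 81.67
```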
Defining Distance
How do we know which samples are closest to ours? A common choice is the Euclidean distance:
d(P, Q) = sqrt((Q1 - P1)^2 + (Q2 - P2)^2 + ... + (Qn - Pn)^2)
where Q1 through Qn are the feature values of one sample and P1 through Pn those of another.
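A minimal sketch of this formula in NumPy (the two feature vectors are hypothetical):

```python
import numpy as np

# Two hypothetical feature vectors Q and P
Q = np.array([3.0, 1.0, 2.0])
P = np.array([1.0, 1.0, 1.0])

# Euclidean distance: square root of the sum of squared per-feature differences
d = np.sqrt(np.sum((Q - P) ** 2))
print(d)  # sqrt(4 + 0 + 1) = sqrt(5)
```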
Suppose our listing accommodates 3 guests:
import numpy as np
our_acc_value = 3
dc_listings['distance'] = np.abs(dc_listings.accommodates - our_acc_value)
dc_listings.distance.value_counts().sort_index()
Here we use just the absolute difference as the distance; 461 listings are at distance 0 from ours.
The sample operation shuffles the data, so that listings tied at the same distance end up in random order:
dc_listings = dc_listings.sample(frac = 1, random_state = 0)
dc_listings = dc_listings.sort_values('distance')
dc_listings.price.head()
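Why shuffle before sorting? When many rows share the same distance, a stable sort keeps them in their current order, so without shuffling we would always pick the same first few rows of the file. A small sketch on hypothetical data:

```python
import pandas as pd
import numpy as np

# Hypothetical: five rows tied at distance 0, three at distance 1
df = pd.DataFrame({'distance': [0] * 5 + [1] * 3, 'price': range(8)})

# Without shuffling, the first 3 tied rows are always rows 0, 1, 2
first_pick = df.sort_values('distance', kind='stable').price.iloc[:3].tolist()

# After shuffling, a stable sort preserves the shuffled order among ties
shuffled = df.sample(frac=1, random_state=0)
second_pick = shuffled.sort_values('distance', kind='stable').price.iloc[:3].tolist()

print(first_pick, second_pick)
```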
Now we need to convert the price strings to floats:
dc_listings['price'] = dc_listings.price.str.replace(r'[\$,]', '', regex=True).astype(float)
mean_price = dc_listings.price.iloc[:5].mean()
mean_price
This average is roughly what the house should rent for.
Model Evaluation
First, split the data into a training set and a test set:
dc_listings = dc_listings.drop('distance', axis=1)
train_df = dc_listings.copy().iloc[:2792]
test_df = dc_listings.copy().iloc[2792:]
Predicting price from a single feature
def predict_price(new_listing_value, feature_column):
    # Work on a copy so we don't mutate the training set
    temp_df = train_df.copy()
    temp_df['distance'] = np.abs(temp_df[feature_column] - new_listing_value)
    temp_df = temp_df.sort_values('distance')
    knn_5 = temp_df.price.iloc[:5]
    return knn_5.mean()
test_df['predicted_price'] = test_df.accommodates.apply(predict_price, feature_column = 'accommodates')
Now we have a predicted price for every listing in the test set.
Root mean squared error (RMSE)
test_df['squared_error'] = (test_df['predicted_price'] - test_df['price']) ** (2)
mse = test_df['squared_error'].mean()
rmse = mse ** (1/2)
rmse
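RMSE penalizes large errors more heavily than the plain mean absolute error does; a toy comparison (the error values here are hypothetical):

```python
import numpy as np

errors_uniform = np.array([5.0, 5.0, 5.0, 5.0])   # all errors equal
errors_outlier = np.array([1.0, 1.0, 1.0, 17.0])  # same mean error, one big outlier

results = {}
for name, e in (('uniform', errors_uniform), ('outlier', errors_outlier)):
    mae = np.abs(e).mean()
    rmse = np.sqrt((e ** 2).mean())
    results[name] = (mae, rmse)
    print(name, mae, rmse)
# Both sets have MAE = 5, but the outlier set has a much larger RMSE
```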
Now we have an evaluation score for the single-feature model.
Would different features give different results?
for feature in ['accommodates', 'bedrooms', 'bathrooms', 'number_of_reviews']:
    test_df['predicted_price'] = test_df[feature].apply(predict_price, feature_column=feature)
    test_df['squared_error'] = (test_df['predicted_price'] - test_df['price']) ** 2
    mse = test_df['squared_error'].mean()
    rmse = mse ** (1/2)
    print("RMSE for the {} column: {}".format(feature, rmse))
The results differ quite a bit across features. Next, let's use all the features together; this adds one extra step: standardizing the data.
import pandas as pd
from sklearn.preprocessing import StandardScaler
features = ['accommodates','bedrooms','bathrooms','beds','price','minimum_nights','maximum_nights','number_of_reviews']
dc_listings = pd.read_csv('listings.csv')
dc_listings = dc_listings[features]
dc_listings['price'] = dc_listings.price.str.replace(r'[\$,]', '', regex=True).astype(float)
dc_listings = dc_listings.dropna()
dc_listings[features] = StandardScaler().fit_transform(dc_listings[features])
# fit_transform computes the mean and standard deviation of the training data, then uses
# them to transform it, mapping each feature to a standard normal distribution
normalized_listings = dc_listings
print(dc_listings.shape)
normalized_listings.head()
norm_train_df = normalized_listings.copy().iloc[0:2792]
norm_test_df = normalized_listings.copy().iloc[2792:]
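A quick sanity check of what fit_transform does, on hypothetical data: each transformed column ends up with mean ≈ 0 and standard deviation ≈ 1, so features on very different scales (like accommodates vs. price) contribute comparably to the distance.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical data: two features on very different scales
X = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0], [4.0, 400.0]])
Xs = StandardScaler().fit_transform(X)

print(Xs.mean(axis=0))  # ~[0, 0]
print(Xs.std(axis=0))   # ~[1, 1]
```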
Multivariate distance
SciPy already provides ready-made distance utilities:
from scipy.spatial import distance
first_listing = normalized_listings.iloc[0][['accommodates','bathrooms']]
fifth_listing = normalized_listings.iloc[4][['accommodates','bathrooms']]
first_fifth_distance = distance.euclidean(first_listing, fifth_listing)
first_fifth_distance
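The model below uses distance.cdist, which computes this same Euclidean distance between every row of one matrix and every row of another; a minimal sketch on hypothetical points:

```python
import numpy as np
from scipy.spatial import distance

# Three hypothetical "training" points and one query point
train_points = np.array([[1.0, 0.0], [0.0, 1.0], [3.0, 4.0]])
query = np.array([[0.0, 0.0]])

# Result has shape (3, 1): one distance per training row
d = distance.cdist(train_points, query)
print(d.ravel())  # [1. 1. 5.]
```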
Multivariate KNN model
def predict_price_multivariate(new_listing_value, feature_columns):
    # Work on a copy so we don't mutate the training set
    temp_df = norm_train_df.copy()
    # cdist returns an (n, 1) array; take the single column
    temp_df['distance'] = distance.cdist(temp_df[feature_columns],
                                         [new_listing_value[feature_columns]])[:, 0]
    temp_df = temp_df.sort_values('distance')
    knn_5 = temp_df.price.iloc[:5]
    return knn_5.mean()
cols = ['accommodates', 'bathrooms']
norm_test_df['predicted_price'] = norm_test_df[cols].apply(predict_price_multivariate, feature_columns=cols, axis=1)
norm_test_df['squared_error'] = (norm_test_df['predicted_price'] - norm_test_df['price']) ** 2
mse = norm_test_df['squared_error'].mean()
rmse = mse ** (1/2)
print(rmse)
The sections above implement and explain KNN from scratch. Next, let's do the same thing with scikit-learn.
Using scikit-learn for KNN
from sklearn.neighbors import KNeighborsRegressor
cols = ['accommodates', 'bedrooms']
knn = KNeighborsRegressor()
knn.fit(norm_train_df[cols], norm_train_df['price'])
two_features_predictions = knn.predict(norm_test_df[cols])
from sklearn.metrics import mean_squared_error
two_features_mse = mean_squared_error(norm_test_df['price'], two_features_predictions)
two_features_rmse = two_features_mse ** (1/2)
print(two_features_rmse)
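KNeighborsRegressor defaults to n_neighbors=5, but the choice of k affects the error. A sketch on synthetic, hypothetical data (not listings.csv) shows how RMSE changes as k varies:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

# Hypothetical synthetic data: a noisy linear relationship
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = X[:, 0] * 10 + rng.normal(0, 5, size=200)

X_train, X_test = X[:150], X[150:]
y_train, y_test = y[:150], y[150:]

rmses = []
for k in (1, 5, 20):
    knn = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    rmse = mean_squared_error(y_test, knn.predict(X_test)) ** 0.5
    rmses.append(rmse)
    print(k, round(rmse, 2))
```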
Adding more features
knn = KNeighborsRegressor()
cols = ['accommodates', 'bedrooms', 'bathrooms', 'beds', 'minimum_nights', 'maximum_nights', 'number_of_reviews']
knn.fit(norm_train_df[cols], norm_train_df['price'])
all_features_predictions = knn.predict(norm_test_df[cols])
all_features_mse = mean_squared_error(norm_test_df['price'], all_features_predictions)
all_features_rmse = all_features_mse ** (1/2)
all_features_rmse
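Rather than trusting a single train/test split, scikit-learn's cross_val_score can score each candidate k across several folds. A sketch on hypothetical synthetic data (in the real notebook you would pass norm_train_df[cols] and norm_train_df['price'] instead):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical synthetic data standing in for the listings features
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 2))
y = X[:, 0] * 10 + X[:, 1] * 5 + rng.normal(0, 5, size=200)

cv_rmses = []
for k in (3, 5, 10):
    knn = KNeighborsRegressor(n_neighbors=k)
    # neg_mean_squared_error: higher is better, so negate before taking the root
    scores = cross_val_score(knn, X, y, cv=5, scoring='neg_mean_squared_error')
    rmse = np.sqrt(-scores.mean())
    cv_rmses.append(rmse)
    print(k, round(rmse, 2))
```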
And that's it! The above is the simplest form of KNN; comments and corrections are welcome!