After endless experimenting, countless kernels, and many blog posts, I finally pushed my public score above 0.8. This post records the process, and hopefully it will be of some help to other beginners.

1. Imports

# Data processing and visualization
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# Algorithms
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVC
# Training utilities
from sklearn.model_selection import train_test_split
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import GridSearchCV

2. Loading the data

train = pd.read_csv("all/train.csv")
test = pd.read_csv("all/test.csv")
gender_submission = pd.read_csv("all/gender_submission.csv")

3. Data analysis and feature engineering

train.describe(include="all")

[figure: output of train.describe(include="all")]

A summary of each feature:

PassengerId: useless, of course;

Pclass: stands for social class; the higher the class, the better the odds of survival;

Name: on a first attempt most people drop Name outright, but digging deeper you find it hides a lot of information, including the title and the surname. I have also seen name length used as a feature; I tried it and it did not help much;

Sex: women first; the female survival rate is very high;

Age: children have a very high survival rate. Age has many missing values that need to be filled;

SibSp: number of siblings and spouses aboard;

Parch: number of parents and children aboard;

Ticket: also usually dropped on a first attempt, but together with Name it forms an important family feature, and it is the main focus of this post;

Fare: the fare corresponds to the ticket number; the same ticket number means the same fare, but the same fare does not imply the same ticket number;

Cabin: most people reduce it to a has-Cabin indicator, but it does not actually matter much;

Embarked: most passengers boarded at S, which also has the highest survival rate, but in my experiments it does not matter much either.

Features like Cabin and Embarked turned out to be dispensable in my model, so I will not discuss them further. The focus here is on Name, SibSp, Parch, and Ticket, used to build family and group features.

Survival rate by Sex:

sns.barplot(x="Sex", y="Survived", data=train)
print("Female survival rate:", train["Survived"][train["Sex"] == "female"].value_counts(normalize=True)[1])
print("Male survival rate:", train["Survived"][train["Sex"] == "male"].value_counts(normalize=True)[1])

[figure: barplot of Survived by Sex]

As you can see, women survived at a very high rate and men at a very low one: roughly one in five women died, and one in five men survived. This is worth pausing on. Suppose every woman on the Titanic had survived and every man had died; then we would want to know what terrible thing happened to a woman who did die, and what lucky thing happened to a man who did live. Scanning the features, the only plausible explanation is that relatives and companions dragged down the women who died, or saved the men who lived, so attention shifts to families and traveling companions. (Honestly, this insight came from reading other people's kernels.)

Add a Family_size feature

train["Family_size"] = train["SibSp"] + train["Parch"]
test["Family_size"] = test["SibSp"] + test["Parch"]

Add an Fname feature, i.e. the surname

train["Fname"] = train.Name.apply(lambda x: x.split(",")[0])
test["Fname"] = test.Name.apply(lambda x: x.split(",")[0])
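The split works because names in this dataset follow the "Surname, Title. Given names" pattern, so everything before the first comma is the surname. A quick sanity check on one sample name:

```python
# Names look like "Braund, Mr. Owen Harris"; the surname is the
# part before the first comma.
name = "Braund, Mr. Owen Harris"
fname = name.split(",")[0]
print(fname)  # Braund
```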

Then came a long round of eyeballing. No visualization here; I literally went through the rows one by one.

Conclusions:

1. The same Fname, the same Ticket, and Family_size > 0 identify a family. There is a single exception, a family whose ticket numbers are consecutive, but it has little effect.

2. In families where a woman died, every member died, except infants under 1 year old. (The first big jump in my score came from adding this feature.)

3. If a family has a surviving male over 18, or a surviving male with unknown age, the whole family survived. (Adding this feature brought the public score to 0.8.)

4. In the remaining families, with very few exceptions, the men died while the women and children survived.

5. Passengers sharing a Ticket, perhaps friends, mostly follow the three family patterns above; but once family members are excluded, there are few same-Ticket passengers left in the training set, and most follow the men-die, women-and-children-live pattern.

These conclusions all come from manual inspection; interested readers can verify them for themselves.
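Patterns like conclusion 2 can also be spot-checked in code rather than by eye. A minimal sketch on made-up rows (the column names mirror the real data, but the families here are invented):

```python
import pandas as pd

# Toy rows mimicking train[["Fname", "Ticket", "Sex", "Survived"]]
df = pd.DataFrame({
    "Fname":    ["Sage", "Sage", "Sage", "Carter", "Carter"],
    "Ticket":   ["CA2343", "CA2343", "CA2343", "113760", "113760"],
    "Sex":      ["female", "male", "male", "female", "male"],
    "Survived": [0, 0, 0, 1, 1],
})

# For each (Fname, Ticket) family, flag whether any female died;
# the survival of the rest of the family can then be compared.
female_dead = (
    df.assign(fd=(df.Sex == "female") & (df.Survived == 0))
      .groupby(["Fname", "Ticket"])["fd"].any()
)
print(female_dead)
```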

Add the dead-female-family feature

dead_train = train[train["Survived"] == 0]
fname_ticket = dead_train[(dead_train["Sex"] == "female") & (dead_train["Family_size"] >= 1)][["Fname", "Ticket"]]
train["dead_family"] = np.where(train["Fname"].isin(fname_ticket["Fname"]) & train["Ticket"].isin(fname_ticket["Ticket"]) & ((train["Age"] >=1) | train.Age.isnull()), 1, 0)
test["dead_family"] = np.where(test["Fname"].isin(fname_ticket["Fname"]) & test["Ticket"].isin(fname_ticket["Ticket"]) & ((test["Age"] >=1) | test.Age.isnull()), 1, 0)
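One subtlety worth noting: the two chained `isin` calls test `Fname` and `Ticket` membership independently, not as a pair, so a passenger who shares a surname with one flagged family and a ticket with a different flagged family would also be caught. Such collisions are rare in this dataset, which is presumably why it works here; matching the exact pair can be done with a merge. A sketch on invented rows:

```python
import pandas as pd

# Hypothetical (Fname, Ticket) pairs of two flagged families
pairs = pd.DataFrame({"Fname": ["Sage", "Smith"],
                      "Ticket": ["CA2343", "347082"]})
# Two passengers: the second shares a surname with one family and a
# ticket with the other, but belongs to neither.
df = pd.DataFrame({"Fname": ["Sage", "Sage"],
                   "Ticket": ["CA2343", "347082"]})

# Independent membership tests flag both rows (false positive on row 1)
loose = df["Fname"].isin(pairs["Fname"]) & df["Ticket"].isin(pairs["Ticket"])

# Matching the (Fname, Ticket) pair exactly via a merge
exact = df.merge(pairs.drop_duplicates(), on=["Fname", "Ticket"],
                 how="left", indicator=True)["_merge"].eq("both")
print(loose.tolist())  # [True, True]
print(exact.tolist())  # [True, False]
```

`indicator=True` adds a `_merge` column that records whether each row matched, which is a handy general trick for pair-membership tests.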

Add the surviving-adult-male-family feature

live_train = train[train["Survived"] == 1]
live_fname_ticket = live_train[(live_train["Sex"] == "male") & (live_train["Family_size"] >= 1) & ((live_train["Age"] >= 18) | (live_train["Age"].isnull()))][["Fname", "Ticket"]]
train["live_family"] = np.where(train["Fname"].isin(live_fname_ticket["Fname"]) & train["Ticket"].isin(live_fname_ticket["Ticket"]), 1, 0)
test["live_family"] = np.where(test["Fname"].isin(live_fname_ticket["Fname"]) & test["Ticket"].isin(live_fname_ticket["Ticket"]), 1, 0)

Add the men-die, women-and-children-live family feature

dead_man_fname_ticket = train[(train["Family_size"] >= 1) & (train["Sex"] == "male") & (train["Survived"] == 0) & (train["dead_family"] == 0)][["Fname", "Ticket"]]
train["deadfamily_man"] = np.where(train["Fname"].isin(dead_man_fname_ticket["Fname"]) & train["Ticket"].isin(dead_man_fname_ticket["Ticket"]) & (train.Sex == "male"), 1, 0)
train["deadfamily_woman"] = np.where(train["Fname"].isin(dead_man_fname_ticket["Fname"]) & train["Ticket"].isin(dead_man_fname_ticket["Ticket"]) & (train.Sex == "female"), 1, 0)
test["deadfamily_man"] = np.where(test["Fname"].isin(dead_man_fname_ticket["Fname"]) & test["Ticket"].isin(dead_man_fname_ticket["Ticket"]) & (test.Sex == "male"), 1, 0)
test["deadfamily_woman"] = np.where(test["Fname"].isin(dead_man_fname_ticket["Fname"]) & test["Ticket"].isin(dead_man_fname_ticket["Ticket"]) & (test.Sex == "female"), 1, 0)
train.loc[(train["dead_family"] == 0) & (train["live_family"] == 0) & (train["deadfamily_man"] == 0) & (train["deadfamily_woman"] == 0) & (train["Family_size"] >= 1) & (train["Sex"] == "male"), "deadfamily_man"] = 1
train.loc[(train["dead_family"] == 0) & (train["live_family"] == 0) & (train["deadfamily_man"] == 0) & (train["deadfamily_woman"] == 0) & (train["Family_size"] >= 1) & (train["Sex"] == "female"), "deadfamily_woman"] = 1
test.loc[(test["dead_family"] == 0) & (test["live_family"] == 0) & (test["deadfamily_man"] == 0) & (test["deadfamily_woman"] == 0) & (test["Family_size"] >= 1) & (test["Sex"] == "male"), "deadfamily_man"] = 1
test.loc[(test["dead_family"] == 0) & (test["live_family"] == 0) & (test["deadfamily_man"] == 0) & (test["deadfamily_woman"] == 0) & (test["Family_size"] >= 1) & (test["Sex"] == "female"), "deadfamily_woman"] = 1

This part of the code is a bit messy because it was written in two passes; the first pass did not cover everything.

Add the same-ticket men-die, women-and-children-live feature

# DataFrame.append was removed in pandas 2.x; use pd.concat instead
grp_tk = pd.concat([train.drop(["Survived"], axis=1), test]).groupby(["Ticket"])
tickets = []
for ticket, grp_df in grp_tk:
    ticket_flag = True
    if len(grp_df) != 1:
        # If any two passengers on this ticket have different surnames,
        # the ticket belongs to a non-family group (e.g. friends)
        for i in range(len(grp_df) - 1):
            if grp_df.iloc[i]["Fname"] != grp_df.iloc[i + 1]["Fname"]:
                ticket_flag = False
    if not ticket_flag:
        tickets.append(ticket)
train.loc[(train.Ticket.isin(tickets)) & (train.Family_size == 0) & (train.Sex == "male"), "deadfamily_man"] = 1
train.loc[(train.Ticket.isin(tickets)) & (train.Family_size == 0) & (train.Sex == "female"), "deadfamily_woman"] = 1
test.loc[(test.Ticket.isin(tickets)) & (test.Family_size == 0) & (test.Sex == "male"), "deadfamily_man"] = 1
test.loc[(test.Ticket.isin(tickets)) & (test.Family_size == 0) & (test.Sex == "female"), "deadfamily_woman"] = 1
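For reference, the surname-comparison loop above can be collapsed into a single `groupby`/`nunique` expression; a sketch on a toy stand-in for the combined frame:

```python
import pandas as pd

# Toy stand-in for the combined train+test frame used above
full = pd.DataFrame({
    "Ticket": ["A1", "A1", "B2", "B2", "C3"],
    "Fname":  ["Sage", "Sage", "Smith", "Jones", "Brown"],
})

# A ticket whose passengers carry more than one surname is a
# non-family group, which is exactly what the loop detects.
per_ticket = full.groupby("Ticket")["Fname"].nunique()
tickets = per_ticket[per_ticket > 1].index.tolist()
print(tickets)  # ['B2']
```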

That completes all the family and group features. What follows is routine processing.

Fill in the missing Fare value

test = test.fillna({"Fare": test[test["Pclass"] == 3]["Fare"].mean()})

Drop redundant features

train = train.drop(["PassengerId", "Ticket", "Cabin", "Embarked", "Fname"], axis=1)
test = test.drop(["PassengerId", "Ticket", "Cabin", "Embarked", "Fname"], axis=1)

Handle the Sex feature

train_dummies_sex = pd.get_dummies(train["Sex"])
test_dummies_sex = pd.get_dummies(test["Sex"])
train = pd.concat([train, train_dummies_sex], axis=1)
test = pd.concat([test, test_dummies_sex], axis=1)
train = train.drop(["Sex"], axis=1)
test = test.drop(["Sex"], axis=1)

Process Name: extract the title from Name. It helps with predicting Age; it does not seem to help much with survival prediction directly.

train_name = pd.DataFrame()
test_name = pd.DataFrame()
# expand=False returns a Series, so the extracted title can be assigned directly
train_name["Title"] = train.Name.str.extract(r"([a-zA-Z]+)\.", expand=False)
test_name["Title"] = test.Name.str.extract(r"([a-zA-Z]+)\.", expand=False)

train_name["Title"] = train_name["Title"].replace(["Mlle", "Ms"], "Miss")
train_name["Title"] = train_name["Title"].replace(["Mme"], "Mrs")
train_name["Title"] = train_name["Title"].replace(["Countess", "Sir", "Lady", "Don"], "Royal")
train_name["Title"] = train_name["Title"].replace(["Dr", "Rev", "Col", "Major", "Jonkheer", "Capt"], "Rare")

test_name["Title"] = test_name["Title"].replace(["Ms"], "Miss")
test_name["Title"] = test_name["Title"].replace(["Dona"], "Mrs")
test_name["Title"] = test_name["Title"].replace(["Dr", "Rev", "Col"], "Rare")

train_name["Title"] = train_name["Title"].replace(["Mr"], 1)
train_name["Title"] = train_name["Title"].replace(["Miss"], 2)
train_name["Title"] = train_name["Title"].replace(["Mrs"], 3)
train_name["Title"] = train_name["Title"].replace(["Master"], 4)
train_name["Title"] = train_name["Title"].replace(["Royal"], 5)
train_name["Title"] = train_name["Title"].replace(["Rare"], 6)

test_name["Title"] = test_name["Title"].replace(["Mr"], 1)
test_name["Title"] = test_name["Title"].replace(["Miss"], 2)
test_name["Title"] = test_name["Title"].replace(["Mrs"], 3)
test_name["Title"] = test_name["Title"].replace(["Master"], 4)
test_name["Title"] = test_name["Title"].replace(["Rare"], 6)

train["Title"] = train_name["Title"]
test["Title"] = test_name["Title"]

train = train.drop(["Name"], axis=1)
test = test.drop(["Name"], axis=1)

Predict the missing ages. Because an algorithm is used, the predictions differ slightly from run to run, which also makes the final survival predictions vary.

age_train = pd.concat([train.drop(["Survived"], axis=1), test], axis=0)
age_train = age_train[age_train["Age"].notnull()]

age_label = age_train["Age"]
age_train = age_train.drop(["Age"], axis=1)

RFR = RandomForestRegressor(max_depth=16, n_estimators=16)
RFR.fit(age_train, age_label)

train.loc[train.Age.isnull(), ["Age"]] = RFR.predict(train[train.Age.isnull()].drop(["Age", "Survived"], axis=1))
test.loc[test.Age.isnull(), ["Age"]] = RFR.predict(test[test.Age.isnull()].drop(["Age"], axis=1))
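If the run-to-run variance of the regressor is a concern, a common deterministic alternative is to fill each missing Age with the median Age of passengers sharing the same Title. A sketch on made-up rows (Title uses the numeric codes defined above):

```python
import numpy as np
import pandas as pd

# Toy frame with the numeric Title codes used above (1=Mr, 2=Miss, ...)
df = pd.DataFrame({
    "Title": [1, 1, 2, 2, 2],
    "Age":   [30.0, np.nan, 22.0, 24.0, np.nan],
})

# Fill each missing Age with the median Age of its Title group
df["Age"] = df.groupby("Title")["Age"].transform(lambda s: s.fillna(s.median()))
print(df["Age"].tolist())  # [30.0, 30.0, 22.0, 24.0, 23.0]
```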

Drop redundant features

train = train.drop(["SibSp", "Parch"], axis=1)
test = test.drop(["SibSp", "Parch"], axis=1)

Bin Age and Fare. I went a long time without binning and could not get past 0.8. I do not fully understand why, but binning did raise the score.

Binning Age: by eyeballing the data, I found that children under 15 who were not flagged as part of a dead family mostly survived, while men aged 50 to 80 had a very high death rate. So the bins are:

train.loc[train["Age"] <= 15, "AgeBin"] = 0
train.loc[(train["Age"] > 15) & (train["Age"] <= 30), "AgeBin"] = 1
train.loc[(train["Age"] > 30) & (train["Age"] <= 49), "AgeBin"] = 2
train.loc[(train["Age"] > 49) & (train["Age"] < 80), "AgeBin"] = 3
train.loc[train["Age"] >= 80, "AgeBin"] = 4
test.loc[test["Age"] <= 15, "AgeBin"] = 0
test.loc[(test["Age"] > 15) & (test["Age"] <= 30), "AgeBin"] = 1
test.loc[(test["Age"] > 30) & (test["Age"] <= 49), "AgeBin"] = 2
test.loc[(test["Age"] > 49) & (test["Age"] < 80), "AgeBin"] = 3
test.loc[test["Age"] >= 80, "AgeBin"] = 4

Binning Fare

train.loc[train["Fare"] <= 7.854, "FareBin"] = 0
train.loc[(train["Fare"] > 7.854) & (train["Fare"] <= 10.5), "FareBin"] = 1
train.loc[(train["Fare"] > 10.5) & (train["Fare"] <= 21.558), "FareBin"] = 2
train.loc[(train["Fare"] > 21.558) & (train["Fare"] <= 41.579), "FareBin"] = 3
train.loc[train["Fare"] > 41.579, "FareBin"] = 4
test.loc[test["Fare"] <= 7.854, "FareBin"] = 0
test.loc[(test["Fare"] > 7.854) & (test["Fare"] <= 10.5), "FareBin"] = 1
test.loc[(test["Fare"] > 10.5) & (test["Fare"] <= 21.558), "FareBin"] = 2
test.loc[(test["Fare"] > 21.558) & (test["Fare"] <= 41.579), "FareBin"] = 3
test.loc[test["Fare"] > 41.579, "FareBin"] = 4

Why these thresholds? They came from the output of the snippet below. It could also be done directly in code, but I forget how to write it.

pd.qcut(pd.concat([train.drop(["Survived"], axis=1), test])["Fare"], 5)
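For the record, `pd.qcut` can also assign the bin index directly with `labels=False`, which avoids writing out the thresholds by hand; a sketch on a small made-up fare sample:

```python
import pandas as pd

# labels=False makes qcut return quantile-bin indices (0..4) directly
# instead of interval objects.
fares = pd.Series([5.0, 8.0, 12.0, 30.0, 100.0, 7.9, 10.4, 21.0, 41.0, 60.0])
fare_bin = pd.Series(pd.qcut(fares, 5, labels=False))
print(fare_bin.tolist())  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4]
```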

Drop redundant features

train = train.drop(["Age", "Fare"], axis=1)
test = test.drop(["Age", "Fare"], axis=1)

OK, the feature engineering is done. Let's take a look at our data:

[figure: preview of the processed training data]

4. Training

Train with an SVM

y = train["Survived"]
train_x, val_x, train_y, val_y = train_test_split(train.drop(["Survived"], axis=1), y, test_size=0.2, random_state=0)
clf = SVC(C=1, probability=True)
clf.fit(train_x, train_y)
clf.score(val_x, val_y)

This outputs 0.88268156424581.

Tuning: in my experiments, only C needed tuning.

svc_grid = GridSearchCV(SVC(), {"C": [i for i in range(1, 101)]}, cv=3)
svc_grid.fit(train.drop(["Survived"], axis=1), y)
svc_grid.best_params_

Output: C=67
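Note that with the default `refit=True`, `GridSearchCV` retrains the best model on all the data it was fitted with and exposes it as `best_estimator_`, so predictions are best taken from that object rather than from the earlier `C=1` classifier. A self-contained sketch on toy data:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Toy data standing in for the processed features
rng = np.random.RandomState(0)
X = rng.rand(60, 3)
y = (X[:, 0] > 0.5).astype(int)

grid = GridSearchCV(SVC(), {"C": [1, 10, 67]}, cv=3)
grid.fit(X, y)

# With refit=True (the default), best_estimator_ has already been
# retrained on all of X using the best C found by the search.
best_clf = grid.best_estimator_
print(grid.best_params_, best_clf.predict(X).shape)
```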

Take a look at the learning curve:

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
 
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
 
    plt.legend(loc="best")
    return plt

This is the official scikit-learn example code and can be used as-is.

cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
plot_learning_curve(SVC(C=67), "C=67", train.drop(["Survived"], axis=1), y, cv=cv)

[figure: learning curve for SVC with C=67]

Is this learning curve good? I honestly do not know; tuning always feels like alchemy to me.

5. Model ensembling

Well, I tried both voting and stacking, and neither improved the score. I am not sure why; one explanation I found is that the dataset is too small. Something to keep studying.
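For completeness, a minimal soft-voting setup in scikit-learn looks like the sketch below (toy data; as noted above, ensembling did not help on this problem for me):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

# Toy data standing in for the processed features
rng = np.random.RandomState(0)
X = rng.rand(80, 3)
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

# Soft voting averages class probabilities, so SVC needs probability=True
vote = VotingClassifier(
    estimators=[("svc", SVC(C=67, probability=True, random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="soft",
)
vote.fit(X, y)
print(vote.predict(X).shape)  # (80,)
```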

6. Submission

gender_submission["Survived"] = clf.predict(test)
gender_submission.to_csv("all/1.csv", index=False)

Submitting this should comfortably reach 0.8, but because of the randomness in the age prediction and in the tuning, the score varies from run to run. Here is the best score I got with this model.

[figure: best public leaderboard score]

over
