【Question Title】: Nested cross-validation with Spark_sklearn GridSearchCV results in SPARK-5063 error
【Posted】: 2018-04-24 12:14:02
【Question Description】:

Performing nested cross-validation with Spark_sklearn GridSearchCV as the inner CV and sklearn's cross_validate/cross_val_score as the outer CV fails with the error "It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation".

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate
from spark_sklearn import GridSearchCV

inner_cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=42)
outer_cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=42)
scoring_metric = ['roc_auc', 'average_precision', 'precision']
gs = GridSearchCV(sparkcontext, estimator=RandomForestClassifier(
                                          class_weight='balanced_subsample', n_jobs=-1),
                  param_grid=[{"max_depth": [5], "max_features": [.5, .8], 
                               "min_samples_split": [2], "min_samples_leaf": [1, 2, 5, 10], 
                               "bootstrap": [True, False], "criterion": ["gini", "entropy"], 
                               "n_estimators": [300]}], 
                  scoring=scoring_metric, cv=inner_cv, verbose=1, n_jobs=-1, 
                  refit='roc_auc', return_train_score=False)
scores = cross_validate(gs, X, y, cv=outer_cv, scoring=scoring_metric, n_jobs=-1, 
                        return_train_score=False)

I have tried changing n_jobs=-1 to n_jobs=1 to remove the joblib-based parallelism and rerunning, but it still raises the same exception.

Exception: It appears that you are attempting to reference SparkContext from a broadcast variable, action, or transformation. SparkContext can only be used on the driver, not in code that it run on workers. For more information, see SPARK-5063.

Complete Traceback (most recent call last):
  File "model_evaluation.py", line 350, in <module>
    main()
  File "model_evaluation.py", line 269, in main
    scores = cross_validate(gs, X, y, cv=outer_cv, scoring=scoring_metric, n_jobs=-1, return_train_score=False)
  File "../python27/lib/python2.7/site-packages/sklearn/model_selection/_validation.py", line 195, in cross_validate
    for train, test in cv.split(X, y, groups))
  File "../python27/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 779, in __call__
    while self.dispatch_one_batch(iterator):
  File "../python27/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 620, in dispatch_one_batch
    tasks = BatchedCalls(itertools.islice(iterator, batch_size))
  File "../python27/lib/python2.7/site-packages/sklearn/externals/joblib/parallel.py", line 127, in __init__
    self.items = list(iterator_slice)
  File "../python27/lib/python2.7/site-packages/sklearn/model_selection/_validation.py", line 195, in <genexpr>
    for train, test in cv.split(X, y, groups))
  File "../python27/lib/python2.7/site-packages/sklearn/base.py", line 61, in clone
    new_object_params[name] = clone(param, safe=False)
  File "../python27/lib/python2.7/site-packages/sklearn/base.py", line 52, in clone
    return copy.deepcopy(estimator)
  File "/usr/local/lib/python2.7/copy.py", line 182, in deepcopy
    rv = reductor(2)
  File "/usr/local/lib/spark/python/pyspark/context.py", line 279, in __getnewargs__
    "It appears that you are attempting to reference SparkContext from a broadcast "
Exception: It appears that you are attempting to reference SparkContext from a broadcast 
variable, action, or transformation. SparkContext can only be used on the driver, not 
in code that it run on workers. For more information, see SPARK-5063.

Edit: The problem seems to be that sklearn's cross_validate() clones each estimator to be fitted in a way that effectively pickles the estimator object, which is not allowed for the PySpark GridSearchCV estimator because the SparkContext object cannot/should not be pickled. So how can we clone the estimator correctly?
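The failure described in the edit can be reproduced without Spark at all: sklearn's clone() deep-copies every constructor parameter, and pyspark's SparkContext raises inside __getnewargs__ whenever it is pickled or deep-copied. A minimal stdlib-only sketch, where FakeSparkContext and FakeGridSearchCV are hypothetical stand-ins for the real classes:

```python
import copy

class FakeSparkContext(object):
    # Stand-in for pyspark.SparkContext: refuses to be pickled or
    # deep-copied, mimicking the real class, which raises in __getnewargs__.
    def __getnewargs__(self):
        raise Exception(
            "It appears that you are attempting to reference SparkContext "
            "from a broadcast variable, action, or transformation.")

class FakeGridSearchCV(object):
    # Stand-in for spark_sklearn's GridSearchCV, which keeps a reference
    # to the SparkContext it was constructed with.
    def __init__(self, sc):
        self.sc = sc

gs = FakeGridSearchCV(FakeSparkContext())

try:
    copy.deepcopy(gs)   # this is what sklearn.base.clone() ends up doing
    failed = False
except Exception:
    failed = True

assert failed  # the deep copy reaches SparkContext.__getnewargs__ and raises
```

This matches the traceback above: copy.deepcopy falls back to the pickle protocol (`rv = reductor(2)`), which calls __getnewargs__ on the SparkContext held by the estimator.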

【Discussion】:

    Tags: apache-spark scikit-learn pyspark


    【Solution 1】:

    I finally found a workaround. The problem occurs when the scikit-learn clone() function tries to make a deep copy of the SparkContext object. The fix I used is a bit hacky, and I would happily switch to a better solution if one exists, but it works: import the copy module and override its deepcopy() function so that SparkContext objects are returned as-is instead of being copied.

    # Mock the deep-copy function to ignore SparkContext objects
    # and avoid pickling / broadcast-variable errors
    import copy
    from pyspark import SparkContext

    _deepcopy = copy.deepcopy

    def mock_deepcopy(*args, **kwargs):
        if isinstance(args[0], SparkContext):
            return args[0]
        return _deepcopy(*args, **kwargs)

    copy.deepcopy = mock_deepcopy
    

    Now it no longer tries to copy the SparkContext object, and everything appears to work.
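    Patching copy.deepcopy globally affects every deep copy in the process for the rest of its lifetime. A slightly safer variant of the same workaround (still a hack, not an official API) scopes the patch with a context manager so it is undone afterwards. A sketch using a hypothetical DummyContext in place of SparkContext:

```python
import copy
from contextlib import contextmanager

@contextmanager
def skip_deepcopy_of(cls):
    """Temporarily make copy.deepcopy return instances of `cls` as-is
    instead of copying them; restores the original function on exit."""
    original = copy.deepcopy

    def patched(obj, *args, **kwargs):
        if isinstance(obj, cls):
            return obj                     # share, don't copy
        return original(obj, *args, **kwargs)

    copy.deepcopy = patched
    try:
        yield
    finally:
        copy.deepcopy = original

# Usage with a hypothetical stand-in for SparkContext:
class DummyContext(object):
    pass

ctx = DummyContext()
with skip_deepcopy_of(DummyContext):
    same = copy.deepcopy(ctx)              # returned unchanged
assert same is ctx
assert copy.deepcopy(ctx) is not ctx       # normal behaviour restored
```

    With the real classes one would wrap the cross_validate() call in `with skip_deepcopy_of(SparkContext):`, though note that sklearn's joblib workers must run this in the same process for the patch to be visible.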

    【Comments】:

    • This is cool, but I can't believe I couldn't find a simpler answer. Surely there must be a way to do nested CV with Spark by now?