wolpert.wrappers.base module

class wolpert.wrappers.base.BaseStackableTransformer(estimator, method='auto', scoring=None, verbose=False)[source]

Base class for wrappers. It should not be used directly; instead, specialized wrappers inherit from it.

Parameters:
estimator : predictor

The estimator to be blended.

method : string, optional (default=’auto’)

This method will be called on the estimator to produce the output of transform. If set to 'auto', the wrapper will try, for each estimator, predict_proba, decision_function and predict, in that order, using the first one the estimator provides.

scoring : string, callable, dict or None (default=None)

If not None, scores generated by the scoring object will be saved in the scores_ attribute each time blend is called.

verbose : bool (default=False)

When True, prints scores to stdout. Requires scoring to be set (not None).
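The 'auto' resolution order described above can be sketched with a small helper. Note that resolve_method and the toy estimator classes below are illustrative stand-ins, not part of wolpert's API:

```python
def resolve_method(estimator, method="auto"):
    """Pick the transform method: an explicit name, or the first of
    predict_proba / decision_function / predict the estimator offers."""
    if method != "auto":
        return method
    for name in ("predict_proba", "decision_function", "predict"):
        if hasattr(estimator, name):
            return name
    raise AttributeError("estimator has no prediction method")


class ProbClassifier:
    """Toy classifier exposing probability estimates."""
    def predict_proba(self, X): ...
    def predict(self, X): ...


class Regressor:
    """Toy regressor: only predict is available."""
    def predict(self, X): ...
```

With method='auto', the classifier resolves to predict_proba while the regressor falls back to predict; passing an explicit method name bypasses the search.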

Methods

blend(X, y, **fit_params) Transform dataset using cross validation.
fit(X[, y]) Fit the estimator.
fit_blend(X, y, **fit_params) Transform dataset using cross validation and fit the estimator.
fit_transform(X[, y]) Fit to data, then transform it.
get_params([deep]) Get parameters for this estimator.
set_params(**params) Set the parameters of this estimator.
transform(*args, **kwargs) Transform the whole dataset.
blend(X, y, **fit_params)[source]

Transform dataset using cross validation.

Parameters:
X : array-like or sparse matrix, shape=(n_samples, n_features)

Input data used to fit the transformer. Use dtype=np.float32 for maximum efficiency.

y : array-like, shape = [n_samples]

Target values.

**fit_params : parameters to be passed to the base estimator.
Returns:
X_transformed, indexes : tuple of (sparse matrix, array-like)

X_transformed is the transformed dataset; indexes holds the indices of the transformed rows in the input data.
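The core idea of blend is producing out-of-fold predictions: for each cross-validation fold, a fresh estimator is fit on the remaining folds and used to predict the held-out rows, so no row is predicted by a model that saw it during training. A minimal pure-Python sketch of that loop (blend_oof and MeanEstimator are hypothetical illustrations, not wolpert's implementation):

```python
def blend_oof(estimator_factory, X, y, n_splits=3):
    """Sketch of blend: per fold, fit a fresh estimator on the other
    folds and predict the held-out rows. Returns the out-of-fold
    predictions and the input row indexes they correspond to."""
    n = len(X)
    # Split rows into n_splits contiguous folds of near-equal size.
    fold_sizes = [n // n_splits + (1 if i < n % n_splits else 0)
                  for i in range(n_splits)]
    preds, indexes = [], []
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = [i for i in range(n) if i not in test_idx]
        est = estimator_factory()
        est.fit([X[i] for i in train_idx], [y[i] for i in train_idx])
        preds.extend(est.predict([X[i] for i in test_idx]))
        indexes.extend(test_idx)
        start += size
    return preds, indexes


class MeanEstimator:
    """Toy estimator: always predicts the mean of the training targets."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]
```

Each row's prediction comes from a model trained without that row, which is what makes the blended output safe to use as input features for a second-level estimator.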

fit(X, y=None, **fit_params)[source]

Fit the estimator.

Parameters:
X : {array-like, sparse matrix}, shape = [n_samples, n_features]

Training vectors, where n_samples is the number of samples and n_features is the number of features.

y : array-like, shape = [n_samples]

Target values.

**fit_params : parameters to be passed to the base estimator.
Returns:
self : object
fit_blend(X, y, **fit_params)[source]

Transforms the dataset using cross validation and fits the estimator.

Parameters:
X : array-like or sparse matrix, shape=(n_samples, n_features)

Input data used to fit the transformer. Use dtype=np.float32 for maximum efficiency.

y : array-like, shape = [n_samples]

Target values.

**fit_params : parameters to be passed to the base estimator.
Returns:
X_transformed, indexes : tuple of (sparse matrix, array-like)

X_transformed is the transformed dataset; indexes holds the indices of the transformed rows in the input data.

fit_transform(X, y=None, **fit_params)[source]

Fit to data, then transform it.

Fits transformer to X and y with optional parameters fit_params and returns a transformed version of X.

Parameters:
X : numpy array of shape [n_samples, n_features]

Training set.

y : numpy array of shape [n_samples]

Target values.

Returns:
X_new : numpy array of shape [n_samples, n_features_new]

Transformed array.

get_params(deep=True)[source]

Get parameters for this estimator.

Parameters:
deep : boolean, optional

If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:
params : mapping of string to any

Parameter names mapped to their values.

set_params(**params)[source]

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as pipelines). The latter have parameters of the form <component>__<parameter> so that it’s possible to update each component of a nested object.

Returns:
self
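The <component>__<parameter> convention can be illustrated with a minimal stand-in (the Wrapper and Estimator classes below are hypothetical and much simpler than the scikit-learn machinery wolpert inherits):

```python
class Estimator:
    """Toy inner estimator with one tunable parameter."""
    def __init__(self, alpha=1.0):
        self.alpha = alpha


class Wrapper:
    """Minimal nested object mimicking the set_params convention:
    keys containing '__' are routed to the named sub-component."""
    def __init__(self, estimator):
        self.estimator = estimator

    def set_params(self, **params):
        for key, value in params.items():
            if "__" in key:
                component, _, name = key.partition("__")
                setattr(getattr(self, component), name, value)
            else:
                setattr(self, key, value)
        return self
```

So set_params(estimator__alpha=0.5) updates the wrapped estimator's alpha without replacing the estimator object itself.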
transform(*args, **kwargs)[source]

Transform the whole dataset.

Parameters:
X : array-like or sparse matrix, shape=(n_samples, n_features)

Input data to be transformed. Use dtype=np.float32 for maximum efficiency. Sparse matrices are also supported; use CSR format (csr_matrix) for maximum efficiency.

Returns:
X_transformed : sparse matrix, shape=(n_samples, n_out)

Transformed dataset.
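To contrast transform with blend: transform applies a single, already-fitted estimator to every row, rather than producing out-of-fold predictions. A minimal stand-in (transform_sketch and MeanEstimator are illustrative names, not wolpert's):

```python
class MeanEstimator:
    """Toy estimator: always predicts the mean of the training targets."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self

    def predict(self, X):
        return [self.mean_ for _ in X]


def transform_sketch(fitted_estimator, X):
    """Unlike blend, transform uses one estimator fitted on the whole
    training set to produce an output for every row of X."""
    return fitted_estimator.predict(X)
```

Because the estimator has seen the full training set, transform is appropriate for held-out or new data, while blend is what should feed second-level models during training.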