A random forest classifier.
A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the max_samples parameter if bootstrap=True (default); otherwise the whole dataset is used to build each tree.
Read more in the User Guide.
Parameters:

n_estimators : int, default=100
The number of trees in the forest.

Changed in version 0.22: The default value of n_estimators changed from 10 to 100 in 0.22.
criterion : {"gini", "entropy", "log_loss"}, default="gini"
The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "log_loss" and "entropy" both for the Shannon information gain, see Mathematical formulation. Note: This parameter is tree-specific.
max_depth : int, default=None
The maximum depth of the tree. If None, then nodes are expanded until all leaves are pure or until all leaves contain less than min_samples_split samples.
min_samples_split : int or float, default=2
The minimum number of samples required to split an internal node:

- If int, then consider min_samples_split as the minimum number.
- If float, then min_samples_split is a fraction and ceil(min_samples_split * n_samples) are the minimum number of samples for each split.

Changed in version 0.18: Added float values for fractions.
min_samples_leaf : int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model, especially in regression.

- If int, then consider min_samples_leaf as the minimum number.
- If float, then min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) are the minimum number of samples for each node.

Changed in version 0.18: Added float values for fractions.

A short sketch below illustrates the fraction rule for both of these parameters.
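As an illustration of the rule above, here is a minimal sketch of how the int and float forms resolve to sample counts; the helper name resolve_min_samples is hypothetical and not part of scikit-learn:

import math

def resolve_min_samples(value, n_samples):
    # Hypothetical helper mirroring the documented rule: an int is
    # used as-is, a float is a fraction of n_samples, rounded up.
    if isinstance(value, float):
        return math.ceil(value * n_samples)
    return value

# With 1000 training samples:
print(resolve_min_samples(2, 1000))     # int -> 2 samples
print(resolve_min_samples(0.01, 1000))  # float -> ceil(0.01 * 1000) = 10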
min_weight_fraction_leaf : float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.
max_features : {"sqrt", "log2", None}, int or float, default="sqrt"
The number of features to consider when looking for the best split:

- If int, then consider max_features features at each split.
- If float, then max_features is a fraction and max(1, int(max_features * n_features_in_)) features are considered at each split.
- If "auto", then max_features=sqrt(n_features).
- If "sqrt", then max_features=sqrt(n_features).
- If "log2", then max_features=log2(n_features).
- If None, then max_features=n_features.

Changed in version 1.1: The default of max_features changed from "auto" to "sqrt".

Deprecated since version 1.1: The "auto" option was deprecated in 1.1 and will be removed in 1.3.

Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.
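As a rough illustration, the options above resolve to per-split feature counts as follows; the helper resolve_max_features is hypothetical and only mirrors the documented rules:

import math

def resolve_max_features(value, n_features):
    # Hypothetical helper mirroring the documented options.
    if value is None:
        return n_features
    if value in ("sqrt", "auto"):          # "auto" is deprecated
        return max(1, int(math.sqrt(n_features)))
    if value == "log2":
        return max(1, int(math.log2(n_features)))
    if isinstance(value, float):           # fraction of the features
        return max(1, int(value * n_features))
    return value                           # plain int

for option in ("sqrt", "log2", None, 0.5, 3):
    print(option, "->", resolve_max_features(option, 16))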
max_leaf_nodes : int, default=None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None then unlimited number of leaf nodes.
min_impurity_decrease : float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value.

The weighted impurity decrease equation is the following:

N_t / N * (impurity - N_t_R / N_t * right_impurity
                    - N_t_L / N_t * left_impurity)

where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child.

N, N_t, N_t_R and N_t_L all refer to the weighted sum, if sample_weight is passed.

New in version 0.19.
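A minimal worked example of the formula above in plain Python; the function name weighted_impurity_decrease is hypothetical:

def weighted_impurity_decrease(N, N_t, N_t_L, N_t_R,
                               impurity, left_impurity, right_impurity):
    # Direct transcription of the documented formula.
    return (N_t / N) * (impurity
                        - (N_t_R / N_t) * right_impurity
                        - (N_t_L / N_t) * left_impurity)

# A node holding 200 of 1000 samples, split 120/80, with Gini impurities:
print(weighted_impurity_decrease(
    N=1000, N_t=200, N_t_L=120, N_t_R=80,
    impurity=0.5, left_impurity=0.3, right_impurity=0.2))
# 0.2 * (0.5 - 0.4*0.2 - 0.6*0.3) = 0.2 * 0.24 = 0.048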
bootstrap : bool, default=True
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.
oob_score : bool, default=False
Whether to use out-of-bag samples to estimate the generalization score. Only available if bootstrap=True.
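For instance, a forest fitted with oob_score=True exposes the out-of-bag estimate through the oob_score_ attribute; a minimal sketch:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)
clf = RandomForestClassifier(oob_score=True, random_state=0).fit(X, y)
# Accuracy estimated on the samples each tree did not see during training.
print(clf.oob_score_)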
n_jobs : int, default=None
The number of jobs to run in parallel. fit, predict, decision_path and apply are all parallelized over the trees. None means 1 unless in a joblib.parallel_backend context. -1 means using all processors. See Glossary for more details.
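For example, to spread tree fitting across all available cores (a sketch):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, random_state=0)
# n_jobs=-1 parallelizes fit (and later predict/apply) over the trees.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, y)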
random_state : int, RandomState instance or None, default=None
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details.

Examples

>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = RandomForestClassifier(max_depth=2, random_state=0)
>>> clf.fit(X, y)
RandomForestClassifier(...)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
Methods

| apply(X)                     | Apply trees in the forest to X, return leaf indices. |
| decision_path(X)             | Return the decision path in the forest. |
| fit(X, y[, sample_weight])   | Build a forest of trees from the training set (X, y). |
| get_params([deep])           | Get parameters for this estimator. |
| predict(X)                   | Predict class for X. |
| predict_log_proba(X)         | Predict class log-probabilities for X. |
| predict_proba(X)             | Predict class probabilities for X. |
| score(X, y[, sample_weight]) | Return the mean accuracy on the given test data and labels. |
| set_params(**params)         | Set the parameters of this estimator. |
apply(X)

Apply trees in the forest to X, return leaf indices.

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
Returns:

X_leaves : ndarray of shape (n_samples, n_estimators)
For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.
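A short sketch of apply in action, assuming the fitted clf and X from the Examples section above have been created:

# One leaf index per (sample, tree) pair.
leaves = clf.apply(X)
print(leaves.shape)   # (n_samples, n_estimators), e.g. (1000, 100)
print(leaves[0, :5])  # leaf reached by the first sample in the first 5 trees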
decision_path(X)

Return the decision path in the forest.

New in version 0.18.

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
Returns:

indicator : sparse matrix of shape (n_samples, n_nodes)
Return a node indicator matrix where non-zero elements indicate that the samples go through the nodes. The matrix is of CSR format.

n_nodes_ptr : ndarray of shape (n_estimators + 1,)
The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator value for the i-th estimator.
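A sketch of slicing the indicator matrix per tree, again assuming the fitted clf and X from the Examples section:

# indicator: CSR matrix over all nodes of all trees;
# n_nodes_ptr: offsets delimiting each tree's node columns.
indicator, n_nodes_ptr = clf.decision_path(X)
print(indicator.shape)                           # (n_samples, total node count)
first_tree = indicator[:, n_nodes_ptr[0]:n_nodes_ptr[1]]
print(first_tree.shape)                          # nodes of the first tree only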
property feature_importances_

The impurity-based feature importances.
The higher, the more important the feature. The importance of a feature is computed as the [normalized] total reduction of the criterion brought by that feature. It is also known as the Gini importance.
Warning: impurity-based feature importances can be misleading for high cardinality features (many unique values). See sklearn.inspection.permutation_importance as an alternative.
Returns:

feature_importances_ : ndarray of shape (n_features,)
The values of this array sum to 1, unless all trees are single node trees consisting of only the root node, in which case it will be an array of zeros.
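For example, comparing the impurity-based importances with permutation importances (a sketch, reusing clf, X, y from the Examples section):

from sklearn.inspection import permutation_importance

print(clf.feature_importances_)       # impurity-based, sums to 1
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)        # score drop when a feature is shuffled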
fit(X, y, sample_weight=None)

Build a forest of trees from the training set (X, y).

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The training input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csc_matrix.
y : array-like of shape (n_samples,) or (n_samples, n_outputs)
The target values (class labels in classification, real numbers in regression).
sample_weight : array-like of shape (n_samples,), default=None
Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.

Returns:

self : object
Fitted estimator.
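A sketch of fitting with per-sample weights, e.g. to upweight a rare class; the 5x factor is purely illustrative:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, weights=[0.9], random_state=0)
# Give minority-class samples 5x the weight of majority-class samples.
sw = np.where(y == 1, 5.0, 1.0)
clf = RandomForestClassifier(random_state=0).fit(X, y, sample_weight=sw)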
get_params(deep=True)

Get parameters for this estimator.

Parameters:

deep : bool, default=True
If True, will return the parameters for this estimator and contained subobjects that are estimators.

Returns:

params : dict
Parameter names mapped to their values.
property n_features_

DEPRECATED: Attribute n_features_ was deprecated in version 1.0 and will be removed in 1.2. Use n_features_in_ instead.

Number of features when fitting the estimator.
predict(X)

Predict class for X.
The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with highest mean probability estimate across the trees.
Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns:

y : ndarray of shape (n_samples,) or (n_samples, n_outputs)
The predicted classes.
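The voting rule described above can be checked directly; a sketch assuming the fitted clf and X from the Examples section:

import numpy as np

proba = clf.predict_proba(X)                   # mean over the trees
pred = clf.classes_[np.argmax(proba, axis=1)]  # class with highest mean probability
print(np.array_equal(pred, clf.predict(X)))    # expected: True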
predict_log_proba(X)

Predict class log-probabilities for X.

The predicted class log-probabilities of an input sample are computed as the log of the mean predicted class probabilities of the trees in the forest.

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
Returns:

p : ndarray of shape (n_samples, n_classes), or a list of such arrays
The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
predict_proba(X)

Predict class probabilities for X.

The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf.

Parameters:

X : {array-like, sparse matrix} of shape (n_samples, n_features)
The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.

Returns:

p : ndarray of shape (n_samples, n_classes), or a list of such arrays
The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
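The two methods are consistent with each other; a quick sketch (fitted clf and X as above):

import numpy as np

proba = clf.predict_proba(X)
# Zero probabilities become -inf under log; errstate suppresses the warning.
with np.errstate(divide="ignore"):
    log_proba = clf.predict_log_proba(X)
    print(np.allclose(log_proba, np.log(proba)))   # expected: True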
score(X, y, sample_weight=None)

Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters:

X : array-like of shape (n_samples, n_features)
Test samples.

y : array-like of shape (n_samples,) or (n_samples, n_outputs)
True labels for X.

sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

Returns:

score : float
Mean accuracy of self.predict(X) w.r.t. y.
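score is equivalent to computing the accuracy by hand; a sketch with the fitted clf, X, y from the Examples section:

from sklearn.metrics import accuracy_score

print(clf.score(X, y))
print(accuracy_score(y, clf.predict(X)))   # same value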
set_params(**params)

Set the parameters of this estimator.

The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.

Parameters:

**params : dict
Estimator parameters.

Returns:

self : estimator instance
Estimator instance.
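For instance, updating a forest nested inside a Pipeline via the double-underscore syntax (a sketch; the step name "forest" is arbitrary):

from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scale", StandardScaler()),
                 ("forest", RandomForestClassifier())])
# <component>__<parameter>: reach into the "forest" step.
pipe.set_params(forest__n_estimators=300, forest__max_depth=5)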