How to use a random forest classifier in Python
A random forest classifier. A random forest is a meta estimator that fits a number of decision tree classifiers on various sub-samples of the dataset and uses averaging to improve the predictive accuracy and control over-fitting. The sub-sample size is controlled with the max_samples parameter if bootstrap=True (the default); otherwise the whole dataset is used to build each tree. Read more in the User Guide.

Parameters:

n_estimators : int, default=100
The number of trees in the forest. Changed in version 0.22: the default value of n_estimators changed from 10 to 100.

criterion : {"gini", "entropy", "log_loss"}, default="gini"
The function to measure the quality of a split. Supported criteria are "gini" for the Gini impurity and "log_loss" and "entropy" both for the Shannon information gain; see Mathematical formulation. Note: this parameter is tree-specific.

max_depth : int, default=None
The maximum depth of the tree. If None, nodes are expanded until all leaves are pure or until all leaves contain fewer than min_samples_split samples.

min_samples_split : int or float, default=2
The minimum number of samples required to split an internal node. If int, consider min_samples_split as the minimum number; if float, min_samples_split is a fraction and ceil(min_samples_split * n_samples) is the minimum number of samples for each split. Changed in version 0.18: added float values for fractions.

min_samples_leaf : int or float, default=1
The minimum number of samples required to be at a leaf node. A split point at any depth will only be considered if it leaves at least min_samples_leaf training samples in each of the left and right branches. This may have the effect of smoothing the model. If int, consider min_samples_leaf as the minimum number; if float, min_samples_leaf is a fraction and ceil(min_samples_leaf * n_samples) is the minimum number of samples for each node. Changed in version 0.18: added float values for fractions.

min_weight_fraction_leaf : float, default=0.0
The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node. Samples have equal weight when sample_weight is not provided.

max_features : {"sqrt", "log2", None}, int or float, default="sqrt"
The number of features to consider when looking for the best split. If int, consider max_features features at each split; if float, max_features is a fraction and max(1, int(max_features * n_features_in_)) features are considered at each split; if "sqrt", then max_features=sqrt(n_features); if "log2", then max_features=log2(n_features); if None, then max_features=n_features.
Changed in version 1.1: the default of max_features changed from "auto" to "sqrt". Deprecated since version 1.1: the "auto" option was deprecated and will be removed in version 1.3. Note: the search for a split does not stop until at least one valid partition of the node samples is found, even if it requires effectively inspecting more than max_features features.

max_leaf_nodes : int, default=None
Grow trees with max_leaf_nodes in best-first fashion. Best nodes are defined as relative reduction in impurity. If None, the number of leaf nodes is unlimited.

min_impurity_decrease : float, default=0.0
A node will be split if this split induces a decrease of the impurity greater than or equal to this value. The weighted impurity decrease equation is the following:

N_t / N * (impurity - N_t_R / N_t * right_impurity - N_t_L / N_t * left_impurity)

where N is the total number of samples, N_t is the number of samples at the current node, N_t_L is the number of samples in the left child, and N_t_R is the number of samples in the right child. N, N_t, N_t_R and N_t_L all refer to the weighted sum if sample_weight is passed. New in version 0.19.

bootstrap : bool, default=True
Whether bootstrap samples are used when building trees. If False, the whole dataset is used to build each tree.

oob_score : bool, default=False
Whether to use out-of-bag samples to estimate the generalization score. Only available if bootstrap=True.

n_jobs : int, default=None
The number of jobs to run in parallel. None means 1 unless in a joblib.parallel_backend context; -1 means using all processors.

random_state : int, RandomState instance or None, default=None
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features).

verbose : int, default=0
Controls the verbosity when fitting and predicting.

warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest.

class_weight : {"balanced", "balanced_subsample"}, dict or list of dicts, default=None
Weights associated with classes in the form {class_label: weight}. Note that for multioutput (including multilabel) problems, weights should be defined for each class of every column in its own dict. For example, for four-class multilabel classification weights should be [{0: 1, 1: 1}, {0: 1, 1: 5}, {0: 1, 1: 1}, {0: 1, 1: 1}] instead of [{1: 1}, {2: 5}, {3: 1}, {4: 1}]. The "balanced" mode uses the values of y to automatically adjust weights inversely proportional to class frequencies in the input data as n_samples / (n_classes * np.bincount(y)). The "balanced_subsample" mode is the same as "balanced" except that weights are computed based on the bootstrap sample for every tree grown. For multi-output, the weights of each column of y will be multiplied. Note that these weights will be multiplied with sample_weight (passed through the fit method) if sample_weight is specified.

ccp_alpha : non-negative float, default=0.0
Complexity parameter used for Minimal Cost-Complexity Pruning. The subtree with the largest cost complexity that is smaller than ccp_alpha will be chosen. By default, no pruning is performed. New in version 0.22.

max_samples : int or float, default=None
If bootstrap is True, the number of samples to draw from X to train each base estimator. If None (the default), draw X.shape[0] samples; if int, draw max_samples samples; if float, draw max_samples * X.shape[0] samples, so max_samples should be in the interval (0.0, 1.0]. New in version 0.22.
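As a hedged illustration of how these constructor parameters fit together, the sketch below sets several of them explicitly; the specific values are arbitrary choices for demonstration, not tuned recommendations.

from sklearn.ensemble import RandomForestClassifier

# Illustrative values only; tune them for your own data.
clf = RandomForestClassifier(
    n_estimators=200,         # more trees than the default 100
    criterion="gini",         # split-quality measure (the default)
    max_depth=10,             # cap tree depth to limit over-fitting
    min_samples_leaf=5,       # every leaf must keep at least 5 samples
    max_features="sqrt",      # features considered per split (the default)
    class_weight="balanced",  # reweight classes inversely to frequency
    max_samples=0.8,          # each tree sees 80% of rows (needs bootstrap=True)
    random_state=0,           # fix the randomness for reproducibility
    n_jobs=-1,                # grow trees on all available cores
)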
Attributes:

base_estimator_ : DecisionTreeClassifier
The child estimator template used to create the collection of fitted sub-estimators.

estimators_ : list of DecisionTreeClassifier
The collection of fitted sub-estimators.

classes_ : ndarray of shape (n_classes,) or a list of such arrays
The class labels (single-output problem), or a list of arrays of class labels (multi-output problem).

n_classes_ : int or list
The number of classes (single-output problem), or a list containing the number of classes for each output (multi-output problem).

n_features_ : int
DEPRECATED: attribute n_features_ was deprecated in version 1.0 and will be removed in 1.2. Use n_features_in_ instead.

n_features_in_ : int
Number of features seen during fit. New in version 0.24.

feature_names_in_ : ndarray of shape (n_features_in_,)
Names of features seen during fit. Defined only when X has feature names that are all strings. New in version 1.0.

n_outputs_ : int
The number of outputs when fit is performed.

feature_importances_ : ndarray of shape (n_features,)
The impurity-based feature importances.

oob_score_ : float
Score of the training dataset obtained using an out-of-bag estimate. This attribute exists only when oob_score is True.

oob_decision_function_ : ndarray of shape (n_samples, n_classes)
Decision function computed with out-of-bag estimate on the training set. If n_estimators is small, it is possible that a data point was never left out during the bootstrap; in this case, oob_decision_function_ might contain NaN. This attribute exists only when oob_score is True.

Notes

The default values for the parameters controlling the size of the trees (e.g. max_depth, min_samples_leaf) lead to fully grown and unpruned trees which can potentially be very large on some data sets. To reduce memory consumption, the complexity and size of the trees should be controlled by setting those parameter values. The features are always randomly permuted at each split. Therefore, the best found split may vary, even with the same training data, if the improvement of the criterion is identical for several candidate splits; to obtain deterministic behaviour during fitting, random_state has to be fixed.
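To make these attributes concrete, here is a small sketch on a synthetic dataset; enabling oob_score=True populates oob_score_ and oob_decision_function_ after fitting.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
clf = RandomForestClassifier(oob_score=True, random_state=0).fit(X, y)

print(clf.oob_score_)            # accuracy estimated from out-of-bag samples
print(clf.feature_importances_)  # impurity-based importances; sums to 1
print(clf.n_features_in_)        # 4 features seen during fit
print(clf.classes_)              # class labels, here array([0, 1])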
Examples

>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
...                            n_informative=2, n_redundant=0,
...                            random_state=0, shuffle=False)
>>> clf = RandomForestClassifier(max_depth=2, random_state=0)
>>> clf.fit(X, y)
RandomForestClassifier(...)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
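Continuing that example, the fitted forest also exposes probability estimates; this short continuation is a sketch added here, not part of the upstream doctest:

>>> proba = clf.predict_proba([[0, 0, 0, 0]])
>>> proba.shape
(1, 2)
>>> clf.classes_
array([0, 1])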
Methods

apply(X)
Apply trees in the forest to X, return leaf indices.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
Returns: X_leaves : ndarray of shape (n_samples, n_estimators). For each datapoint x in X and for each tree in the forest, return the index of the leaf x ends up in.

decision_path(X)
Return the decision path in the forest. New in version 0.18.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). The input samples. Internally, its dtype will be converted to dtype=np.float32. If a sparse matrix is provided, it will be converted into a sparse csr_matrix.
Returns: indicator : sparse matrix of shape (n_samples, n_nodes). A node indicator matrix where non-zero elements indicate that the sample goes through the node. The matrix is in CSR format. n_nodes_ptr : ndarray of shape (n_estimators + 1,). The columns from indicator[n_nodes_ptr[i]:n_nodes_ptr[i+1]] give the indicator values for the i-th estimator.

property feature_importances_
The impurity-based feature importances. The higher, the more important the feature. The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Warning: impurity-based feature importances can be misleading for high-cardinality features (many unique values); see permutation importance as an alternative.
Returns: feature_importances_ : ndarray of shape (n_features,). The values of this array sum to 1, unless all trees are single-node trees consisting of only the root node, in which case it will be an array of zeros.

fit(X, y, sample_weight=None)
Build a forest of trees from the training set (X, y).
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). The training input samples. Internally, its dtype will be converted to dtype=np.float32. y : array-like of shape (n_samples,) or (n_samples, n_outputs). The target values (class labels in classification, real numbers in regression). sample_weight : array-like of shape (n_samples,), default=None. Sample weights. If None, then samples are equally weighted. Splits that would create child nodes with net zero or negative weight are ignored while searching for a split in each node. In the case of classification, splits are also ignored if they would result in any single class carrying a negative weight in either child node.
Returns: self : object. Fitted estimator.

get_params(deep=True)
Get parameters for this estimator.
Parameters: deep : bool, default=True. If True, will return the parameters for this estimator and contained subobjects that are estimators.
Returns: params : dict. Parameter names mapped to their values.

property n_features_
DEPRECATED: attribute n_features_ was deprecated in version 1.0 and will be removed in 1.2. Use n_features_in_ instead. Returns the number of features when fitting the estimator.

predict(X)
Predict class for X. The predicted class of an input sample is a vote by the trees in the forest, weighted by their probability estimates. That is, the predicted class is the one with the highest mean probability estimate across the trees.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). The input samples. Internally, its dtype will be converted to dtype=np.float32.
Returns: y : ndarray of shape (n_samples,) or (n_samples, n_outputs). The predicted classes.

predict_log_proba(X)
Predict class log-probabilities for X. The predicted class log-probabilities of an input sample are computed as the log of the mean predicted class probabilities of the trees in the forest.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). The input samples. Internally, its dtype will be converted to dtype=np.float32.
Returns: p : ndarray of shape (n_samples, n_classes), or a list of such arrays. The class log-probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.
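As a sketch of the apply and decision_path methods described above, assuming the clf and X fitted in the Examples section:

# Leaf index each sample reaches in each tree: shape (n_samples, n_estimators).
leaves = clf.apply(X[:5])
print(leaves.shape)  # (5, 100) with the default n_estimators=100

# Sparse node-indicator matrix plus per-tree column offsets.
indicator, n_nodes_ptr = clf.decision_path(X[:5])

# Columns n_nodes_ptr[i]:n_nodes_ptr[i+1] belong to the i-th tree, so this
# recovers the nodes that sample 0 visits inside the first tree.
nodes_tree0 = indicator[0, n_nodes_ptr[0]:n_nodes_ptr[1]].indices
print(nodes_tree0)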
predict_proba(X)
Predict class probabilities for X. The predicted class probabilities of an input sample are computed as the mean predicted class probabilities of the trees in the forest. The class probability of a single tree is the fraction of samples of the same class in a leaf.
Parameters: X : {array-like, sparse matrix} of shape (n_samples, n_features). The input samples. Internally, its dtype will be converted to dtype=np.float32.
Returns: p : ndarray of shape (n_samples, n_classes), or a list of such arrays. The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_.

score(X, y, sample_weight=None)
Return the mean accuracy on the given test data and labels. In multi-label classification, this is the subset accuracy, which is a harsh metric since you require for each sample that each label set be correctly predicted.
Parameters: X : array-like of shape (n_samples, n_features). Test samples. y : array-like of shape (n_samples,) or (n_samples, n_outputs). True labels for X. sample_weight : array-like of shape (n_samples,), default=None. Sample weights.
Returns: score : float. Mean accuracy of self.predict(X) with respect to y.

set_params(**params)
Set the parameters of this estimator. The method works on simple estimators as well as on nested objects (such as pipelines).
Parameters: **params : dict. Estimator parameters.
Returns: self : estimator instance. Estimator instance.

What is a random forest classifier, with an example?
Random Forest is a supervised machine learning algorithm made up of decision trees. Random Forest is used for both classification and regression; for example, classifying whether an email is "spam" or "not spam".
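To make the spam example concrete, here is a toy sketch; the features (word count, link count, exclamation marks) and the data are invented purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per email: [n_words, n_links, n_exclamations].
X = np.array([[120, 0, 1], [35, 6, 9], [240, 1, 0], [18, 4, 12]])
y = np.array([0, 1, 0, 1])  # 0 = "not spam", 1 = "spam"

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[40, 5, 8]]))  # link-heavy short message; likely flagged as spam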
How do you do a random forest on a dataset?
Machine Learning Basics: Random Forest Classification (a runnable sketch of these steps appears after the next answer).
Step 1: Import the libraries.
Step 2: Import the dataset.
Step 3: Split the dataset into the training set and test set.
Step 4: Feature scaling.
Step 5: Train the Random Forest classification model on the training set.
Step 6: Predict the test set results.

How does a random forest classifier work?
The random forest is a classification algorithm consisting of many decision trees. It uses bagging and feature randomness when building each individual tree to try to create an uncorrelated forest of trees whose prediction by committee is more accurate than that of any individual tree.
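Here is a minimal sketch of the six steps listed above, using a synthetic dataset as a stand-in for a real one; the names and values are illustrative.

# Step 1: import the libraries.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

# Step 2: import the dataset (synthetic stand-in here).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Step 3: split into training and test sets.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Step 4: feature scaling (trees do not require it, but it is harmless
# and often kept for consistency with other models in a pipeline).
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Step 5: train the Random Forest model on the training set.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Step 6: predict the test set results.
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))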
What is a random forest model in Python?
Random forest is an ensemble of decision tree algorithms. It is an extension of bootstrap aggregation (bagging) of decision trees and can be used for classification and regression problems.
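Since the same bagging machinery extends to regression, scikit-learn pairs RandomForestClassifier with RandomForestRegressor; a brief sketch on synthetic data:

from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=6, random_state=0)
reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(reg.predict(X[:3]))  # each prediction is the mean over all trees

The two estimators share the same ensemble mechanics; the classifier averages per-class probabilities across trees, while the regressor averages the trees' numeric predictions.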