Guide to using sklearn accuracy_score in Python

sklearn.metrics.accuracy_score(y_true, y_pred, *, normalize=True, sample_weight=None)

Accuracy classification score.

In multilabel classification, this function computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true.

Read more in the User Guide.

Parameters:

y_true : 1d array-like, or label indicator array / sparse matrix

Ground truth (correct) labels.

y_pred : 1d array-like, or label indicator array / sparse matrix

Predicted labels, as returned by a classifier.

normalize : bool, default=True

If False, return the number of correctly classified samples. Otherwise, return the fraction of correctly classified samples.

sample_weight : array-like of shape (n_samples,), default=None

Sample weights.

Returns:

score : float

If normalize == True, returns the fraction of correctly classified samples (float); otherwise, returns the number of correctly classified samples (int).

The best performance is 1 with normalize == True and the number of samples with normalize == False.

See also

balanced_accuracy_score

Compute the balanced accuracy to deal with imbalanced datasets.

jaccard_score

Compute the Jaccard similarity coefficient score.

hamming_loss

Compute the average Hamming loss or Hamming distance between two sets of samples.

zero_one_loss

Compute the zero-one classification loss. By default, the function returns the fraction of imperfectly predicted subsets.
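To see how these related metrics differ, the following sketch reuses the multilabel example shown further down this page and compares accuracy_score with hamming_loss and zero_one_loss:

>>> import numpy as np
>>> from sklearn.metrics import accuracy_score, hamming_loss, zero_one_loss
>>> y_true = np.array([[0, 1], [1, 1]])
>>> y_pred = np.ones((2, 2))
>>> accuracy_score(y_true, y_pred)   # only the second sample matches exactly
0.5
>>> hamming_loss(y_true, y_pred)     # 1 of the 4 individual labels is wrong
0.25
>>> zero_one_loss(y_true, y_pred)    # complement of subset accuracy
0.5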

Notes

In binary classification, this function is equal to the jaccard_score function.

Examples

>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2

In the multilabel case with binary label indicators:

>>> import numpy as np
>>> accuracy_score(np.array([[0, 1], [1, 1]]), np.ones((2, 2)))
0.5
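The examples above do not use sample_weight; here is a brief sketch with weights chosen arbitrarily for illustration (the correctly classified samples at positions 0 and 3 get weight 3, the others weight 1):

>>> y_true = [0, 1, 2, 3]
>>> y_pred = [0, 2, 1, 3]
>>> accuracy_score(y_true, y_pred, sample_weight=[3, 1, 1, 3])  # (3 + 3) / 8
0.75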


What is accuracy_score in sklearn?

The accuracy_score function in sklearn.metrics computes the accuracy classification score. In multilabel classification, it computes subset accuracy: the set of labels predicted for a sample must exactly match the corresponding set of labels in y_true. See the parameter descriptions above for the expected inputs.

How do I use the accuracy_score function in Python?

In Python, the accuracy_score function of the sklearn.metrics package calculates the accuracy score for a set of predicted labels against the true labels. To use it, import it into your program and call it as shown below. The function accepts the true labels (y_true), the predicted labels (y_pred), and the optional normalize and sample_weight arguments described above.
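A minimal sketch of the import and call just described (the labels are invented purely for illustration):

>>> from sklearn.metrics import accuracy_score
>>> y_true = [0, 1, 1, 0, 1]
>>> y_pred = [0, 1, 0, 0, 1]
>>> accuracy_score(y_true, y_pred)                    # fraction of correct predictions
0.8
>>> accuracy_score(y_true, y_pred, normalize=False)   # count of correct predictions
4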

How do I calculate accuracy in Python with scikit-learn?

The accuracy_score function is used to calculate either the fraction or the count of correct predictions in Python scikit-learn. Mathematically, for binary classification it represents the ratio of the sum of true positives and true negatives to the total number of predictions, as the sketch below shows.
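As a quick check on that ratio, here is a short sketch (binary labels invented for illustration) that compares the manual (TP + TN) / total computation with accuracy_score, using confusion_matrix to obtain the counts:

>>> from sklearn.metrics import accuracy_score, confusion_matrix
>>> y_true = [1, 0, 1, 1, 0, 0, 1, 0]
>>> y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
>>> tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
>>> (tp + tn) / (tp + tn + fp + fn)   # manual accuracy: (3 + 3) / 8
0.75
>>> accuracy_score(y_true, y_pred)
0.75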

What is scikit-learn?

The library provides a set of tools for machine learning and statistical modeling problems, including classification, regression, clustering, and dimensionality reduction. It is released under a standard BSD-style (FreeBSD) license and runs on many Linux platforms. Scikit-learn is also widely used as a resource for learning.
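To tie this back to the topic of this guide, here is a small end-to-end sketch (the dataset and classifier are chosen only as an example) that trains a classifier on a built-in dataset and evaluates it with accuracy_score:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small built-in dataset and hold out a test set
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Any classifier works here; logistic regression is used only as an example
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

# Evaluate the held-out predictions with accuracy_score
y_pred = clf.predict(X_test)
print(accuracy_score(y_test, y_pred))                   # fraction of correct predictions
print(accuracy_score(y_test, y_pred, normalize=False))  # number of correct predictions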