# Naive Bayes Classifier

## Introduction

Naive Bayes is a probabilistic machine learning algorithm based on Bayes' theorem, with an assumption of independence between predictors. It's particularly useful for classification tasks and is known for its simplicity and effectiveness, especially with high-dimensional datasets.

<img src="Images/thomas bayes.webp" alt="Portrait of Thomas Bayes" />

## Theory

### Bayes' Theorem

The foundation of Naive Bayes is Bayes' theorem, which describes the probability of an event based on prior knowledge of conditions that might be related to the event.

$$ P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)} $$

Where:
- $P(A|B)$ is the posterior probability of $A$ given $B$
- $P(B|A)$ is the likelihood of $B$ given $A$
- $P(A)$ is the prior probability of $A$
- $P(B)$ is the marginal likelihood (the evidence)

<img src="Images/equation.webp" alt="Bayes' theorem" />

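To make the formula concrete, here is a minimal sketch that plugs in made-up numbers for a hypothetical spam filter (all values below are invented for illustration):

```python
# A toy illustration of Bayes' theorem with made-up numbers:
# how likely is an email to be spam given that it contains
# the word "free"?
p_spam = 0.2              # P(A): prior probability of spam
p_free_given_spam = 0.6   # P(B|A): likelihood of "free" given spam
p_free = 0.25             # P(B): marginal probability of "free"

# P(A|B) = P(B|A) * P(A) / P(B)
p_spam_given_free = p_free_given_spam * p_spam / p_free
print(f"P(spam | 'free') = {p_spam_given_free:.2f}")  # 0.48
```
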
### Naive Bayes Classifier

The Naive Bayes classifier extends this to classify data points into categories. It assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature (the "naive" assumption).

For a data point $X = (x_1, x_2, ..., x_n)$ and a class variable $C$:

$$ P(C|X) = \frac{P(X|C) \cdot P(C)}{P(X)} $$

Since $P(X)$ is the same for every class, it can be dropped, and the classifier simply chooses the class with the highest posterior probability:

$$ C^* = \underset{c \in C}{\operatorname{argmax}} \; P(X|c) \cdot P(c) $$

### Mathematical Formulation

Under the independence assumption, for a given set of features $(x_1, x_2, ..., x_n)$:

$$ P(C|x_1, x_2, ..., x_n) \propto P(C) \cdot P(x_1|C) \cdot P(x_2|C) \cdot ... \cdot P(x_n|C) $$

Where:
- $P(C|x_1, x_2, ..., x_n)$ is the posterior probability of class $C$ given the features
- $P(C)$ is the prior probability of class $C$
- $P(x_i|C)$ is the likelihood of feature $x_i$ given class $C$

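The sketch below makes this factorization concrete, using an invented prior and per-feature likelihood table for two classes; it sums the terms in log space, which is the usual trick to avoid numerical underflow when there are many features:

```python
import math

# Hypothetical, made-up model: priors and per-feature likelihoods
# P(x_i = 1 | C) for two classes and three binary features.
priors = {"spam": 0.2, "ham": 0.8}
likelihoods = {
    "spam": [0.6, 0.4, 0.7],  # P(x_i=1 | spam) for features 1..3
    "ham":  [0.1, 0.3, 0.2],  # P(x_i=1 | ham)  for features 1..3
}

x = [1, 0, 1]  # observed feature vector

# log P(C) + sum_i log P(x_i|C), proportional to the log posterior
scores = {}
for c in priors:
    log_post = math.log(priors[c])
    for xi, p in zip(x, likelihoods[c]):
        log_post += math.log(p if xi == 1 else 1 - p)
    scores[c] = log_post

# argmax over classes gives the predicted label
print(max(scores, key=scores.get))  # 'spam' for this toy example
```
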
## Types of Naive Bayes Classifiers

1. **Gaussian Naive Bayes**: Assumes the continuous values associated with each feature are distributed according to a Gaussian (normal) distribution.

2. **Multinomial Naive Bayes**: Typically used for discrete counts, like word counts in text classification.

3. **Bernoulli Naive Bayes**: Used for binary feature models (0s and 1s). See the sketch after this list for how each variant is used.

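All three variants ship with scikit-learn under `sklearn.naive_bayes`, with an identical fit/predict API; the sketch below (on small synthetic arrays) shows each being trained on the kind of data it expects:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB

rng = np.random.default_rng(42)
y = np.array([0, 0, 1, 1])  # two classes, four samples

# Gaussian: continuous, real-valued features
X_continuous = rng.normal(size=(4, 3))
GaussianNB().fit(X_continuous, y)

# Multinomial: non-negative counts (e.g., word frequencies)
X_counts = rng.integers(0, 10, size=(4, 3))
MultinomialNB().fit(X_counts, y)

# Bernoulli: binary (0/1) features
X_binary = rng.integers(0, 2, size=(4, 3))
BernoulliNB().fit(X_binary, y)
```
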
## Example: Gaussian Naive Bayes in scikit-learn

Here's a simple example of using Gaussian Naive Bayes in scikit-learn:

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load the iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Initialize and train the classifier
gnb = GaussianNB()
gnb.fit(X_train, y_train)

# Predict labels for the test set
y_pred = gnb.predict(X_test)

# Evaluate accuracy
accuracy = accuracy_score(y_test, y_pred)
print(f"Accuracy: {accuracy:.2f}")
```

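Beyond hard labels, `GaussianNB` also exposes per-class posterior probabilities through `predict_proba`, which is handy for inspecting the model's confidence. A short continuation of the example above:

```python
# Posterior probabilities for the first three test samples,
# one value per class (in the order of gnb.classes_)
for probs in gnb.predict_proba(X_test[:3]):
    print({name: round(p, 3) for name, p in zip(iris.target_names, probs)})
```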

## Applications of the Naive Bayes Algorithm

- Real-time prediction
- Multi-class prediction
- Text classification, spam filtering, and sentiment analysis
- Recommendation systems