
Commit c577cc4

Merge branch 'main' into sivaprasath-closes-issue-3890
2 parents b2059ef + 075521d commit c577cc4

File tree

3 files changed: +598 -0 lines changed
Lines changed: 106 additions & 0 deletions
@@ -0,0 +1,106 @@
# Learning Rules in Artificial Neural Networks (ANN)

## Introduction

Learning rules are essential components of Artificial Neural Networks (ANNs) that govern how the network updates its weights and biases. This document focuses on two fundamental learning rules: Hebbian Learning and Adaline (Adaptive Linear Neuron) Learning.

## 1. Hebbian Learning

Hebbian Learning, proposed by Donald Hebb in 1949, is one of the earliest and simplest learning rules in neural networks. It is based on the principle that neurons that fire together, wire together.

### Basic Principle

The strength of a connection between two neurons increases if both neurons are activated simultaneously.

### Mathematical Formulation

For neurons $i$ and $j$ with activation values $x_i$ and $x_j$, the weight update $\Delta w_{ij}$ is given by:

$$ \Delta w_{ij} = \eta x_i x_j $$

Where:
- $\Delta w_{ij}$ is the change in weight between neurons $i$ and $j$
- $\eta$ is the learning rate
- $x_i$ is the output of the presynaptic neuron
- $x_j$ is the output of the postsynaptic neuron
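As a quick illustration, here is a minimal NumPy sketch of a single Hebbian update; the variable names and toy activation values are illustrative, not part of the original text:

```python
import numpy as np

eta = 0.1                       # learning rate
x_pre = np.array([1.0, -1.0])   # presynaptic activations x_i (toy values)
x_post = 1.0                    # postsynaptic activation x_j (toy value)
w = np.zeros(2)                 # weights w_ij

# Hebbian update: weights grow where pre- and postsynaptic activity coincide
w += eta * x_pre * x_post
print(w)  # [ 0.1 -0.1]
```

Note that purely Hebbian updates can grow the weights without bound, which is what the variations below address.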
### Variations

1. **Oja's Rule**: A modification of Hebbian learning that includes weight normalization (see the sketch after this list):

   $$\Delta w_{ij} = \eta(x_i x_j - \alpha y_j^2 w_{ij})$$

   Where $y_j$ is the output of neuron $j$ and $\alpha$ is a forgetting factor.

2. **Generalized Hebbian Algorithm (GHA)**: Extends Oja's rule to multiple outputs:

   $$\Delta W = \eta\left(\mathbf{y}\mathbf{x}^T - \text{lower}(\mathbf{y}\mathbf{y}^T)\,W\right)$$

   Where $\text{lower}()$ denotes the lower triangular part of a matrix, $\mathbf{x}$ is the input vector, and $\mathbf{y} = W\mathbf{x}$ is the output vector.
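Below is a minimal sketch of Oja's rule for a single linear output unit $y = \mathbf{w}^T\mathbf{x}$, assuming $\alpha = 1$; the toy data, seed, and learning rate are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data with most of its variance along the first axis
X = rng.normal(size=(1000, 2)) * np.array([2.0, 0.5])

w = rng.normal(size=2)
eta, alpha = 0.005, 1.0

for x in X:
    y = w @ x
    w += eta * (y * x - alpha * y**2 * w)   # Oja update: Hebbian term minus decay

print(w / np.linalg.norm(w))  # approximately [+/-1, 0]: the leading principal direction
```

The $-\alpha y_j^2 w_{ij}$ term keeps the weight norm bounded, so the unit converges toward the leading principal component of the data instead of diverging.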
## 2. Adaline Learning (Widrow-Hoff Learning Rule)

Adaline (Adaptive Linear Neuron) Learning, developed by Bernard Widrow and Marcian Hoff in 1960, is a single-layer neural network that uses a linear activation function.

### Basic Principle

Adaline learning aims to minimize the mean squared error between the desired output and the actual output of the neuron.

### Mathematical Formulation

For an input vector $\mathbf{x}$ and desired output $d$, the weight update is given by:

$$ \Delta \mathbf{w} = \eta(d - y)\mathbf{x} $$

Where:
- $\Delta \mathbf{w}$ is the change in weight vector
- $\eta$ is the learning rate
- $d$ is the desired output
- $y = \mathbf{w}^T\mathbf{x}$ is the actual output
- $\mathbf{x}$ is the input vector
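For concreteness, here is one application of this update with made-up numbers (all values below are illustrative):

```python
import numpy as np

eta = 0.1
w = np.array([0.2, -0.3])   # current weights
x = np.array([1.0, 1.0])    # input vector
d = 1.0                     # desired output

y = w @ x                   # actual output: 0.2 - 0.3 = -0.1
w = w + eta * (d - y) * x   # eta * (d - y) = 0.1 * 1.1 = 0.11 added along x
print(w)                    # [ 0.31 -0.19]
```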
### Learning Process

1. Initialize weights randomly.
2. For each training example:

   a. Calculate the output: $y = \mathbf{w}^T\mathbf{x}$

   b. Update the weights: $\mathbf{w}_{new} = \mathbf{w}_{old} + \eta(d - y)\mathbf{x}$

3. Repeat step 2 until convergence or a maximum number of epochs is reached (see the sketch below).
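A compact NumPy sketch of this loop on the bipolar OR problem used in the accompanying notebook; the learning rate, epoch count, and random seed are illustrative:

```python
import numpy as np

# Bipolar OR problem (same toy data as the notebook)
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
d = np.array([1, 1, 1, -1], dtype=float)

rng = np.random.default_rng(0)
w = rng.random(2)            # 1. initialize weights (and bias) randomly
b = rng.random()
eta = 0.1

for epoch in range(20):      # 3. repeat until convergence / max epochs
    for x_i, d_i in zip(X, d):
        y = w @ x_i + b                 # 2a. linear output
        w += eta * (d_i - y) * x_i      # 2b. Widrow-Hoff update
        b += eta * (d_i - y)

print("weights:", w, "bias:", b)
print("outputs:", X @ w + b)            # signs match the targets [1, 1, 1, -1]
```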
### Comparison with Perceptron Learning

While similar to the perceptron learning rule, Adaline uses the actual (continuous) output value for weight updates, not just the sign of the output. This allows for more precise weight adjustments.
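The contrast is easiest to see side by side in the update step; the numbers here are illustrative:

```python
import numpy as np

w = np.array([0.2, -0.3]); b = 0.0
x = np.array([1.0, -1.0]);  d = 1.0
eta = 0.1

# Perceptron-style update: the error uses the thresholded output,
# so nothing changes when the sign is already correct
y_perc = np.sign(w @ x + b)             # sign(0.5) = 1.0
w_perc = w + eta * (d - y_perc) * x     # error is 0 -> weights unchanged

# Adaline update: the error uses the raw linear output,
# so the weights are still nudged toward the target
y_ada = w @ x + b                       # 0.5
w_ada = w + eta * (d - y_ada) * x       # error 0.5 -> graded adjustment

print(w_perc)   # [ 0.2 -0.3]
print(w_ada)    # [ 0.25 -0.35]
```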
## Conclusion

Both Hebbian and Adaline learning rules play crucial roles in the development of neural network theory:

- Hebbian Learning provides a biological inspiration for neural learning and is fundamental in unsupervised learning scenarios.
- Adaline Learning introduces the concept of minimizing error, which is a cornerstone of many modern learning algorithms, including backpropagation in deep neural networks.

Understanding these basic learning rules provides insight into more complex learning algorithms used in deep learning and helps in appreciating the historical development of neural network theory.

## How to Use This Repository

- Clone the repository to your local machine and change into this directory:

  ```bash
  git clone https://github.com/CodeHarborHub/codeharborhub.github.io.git
  cd "codeharborhub.github.io/docs/Deep Learning/Learning Rule IN ANN"
  ```

- For the Python implementation and visualizations:

  1. Ensure you have Jupyter Notebook installed:

     ```bash
     pip install jupyter
     ```

  2. Navigate to the project directory in your terminal.
  3. Open `learning_rules.ipynb`.
Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h3>Implementation of different learning rules (Hebbian and Adaline)</h3>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Trained Weights for hebb network are : [1.5488135 1.71518937]\n",
      "Trained bias for hebb network are : [1.60276338]\n",
      "Trained Weights for Adaline network are : [0.53143536 0.57216559]\n",
      "Trained bias for Adaline network are : [0.51141955]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Problem --> OR gate with bipolar (+1/-1) inputs and targets\n",
    "input = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])\n",
    "output = np.array([1, 1, 1, -1])\n",
    "np.random.seed(0)\n",
    "weight = np.random.rand(2)\n",
    "bias = np.random.rand(1)\n",
    "learning_rate = 0.1\n",
    "\n",
    "# Hebbian learning rule: reinforce weights with target * input whenever\n",
    "# the linear output does not already match the target\n",
    "def hebbNetwork(input, output, b, w, epochs, learning_rate=0.1):\n",
    "    for epoch in range(epochs):\n",
    "        for i in range(len(input)):\n",
    "            cal_output = np.dot(input[i], w) + b\n",
    "            if cal_output != output[i]:\n",
    "                w = w + learning_rate * output[i] * input[i]\n",
    "                b = b + learning_rate * output[i]\n",
    "    return w, b\n",
    "\n",
    "# Adaline (Widrow-Hoff) learning rule: update weights in proportion to the\n",
    "# error between the target and the raw linear output\n",
    "def AdalineNetwork(input, output, b, w, epochs, learning_rate=0.1):\n",
    "    for epoch in range(epochs):\n",
    "        for i in range(len(input)):\n",
    "            cal_output = np.dot(input[i], w) + b\n",
    "            error = (output[i] - cal_output) ** 2  # squared error (for inspection)\n",
    "            if cal_output != output[i]:\n",
    "                w = w + learning_rate * (output[i] - cal_output) * input[i]\n",
    "                b = b + learning_rate * (output[i] - cal_output)\n",
    "    return w, b\n",
    "\n",
    "\n",
    "wh, bh = hebbNetwork(input, output, bias, weight, 5)\n",
    "\n",
    "print(\"Trained Weights for hebb network are : \", wh)\n",
    "print(\"Trained bias for hebb network are : \", bh)\n",
    "\n",
    "wa, ba = AdalineNetwork(input, output, bias, weight, 5)\n",
    "\n",
    "print(\"Trained Weights for Adaline network are : \", wa)\n",
    "print(\"Trained bias for Adaline network are : \", ba)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4.86676625]\n",
      "[1.43638751]\n",
      "[1.76913924]\n",
      "[-1.66123949]\n"
     ]
    }
   ],
   "source": [
    "# Check the trained Hebbian weights on each OR-gate input\n",
    "for i in range(len(input)):\n",
    "    cal_output = np.dot(input[i], wh) + bh\n",
    "    print(cal_output)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
