
Commit e3795d6

Merge branch 'main' into main
2 parents 1141fe8 + a7af00d commit e3795d6

26 files changed, +2928 -795 lines changed
Lines changed: 91 additions & 0 deletions
@@ -0,0 +1,91 @@
---
title: "Introduction to Quantum Computing and Its Applications"
sidebar_label: Quantum Computing
authors: [pujan-sarkar]
tags: [Quantum Computing, Applications]
date: 2024-07-22
---

## Quantum Computing: Basics and Applications

Quantum computing is a revolutionary field that leverages the principles of quantum mechanics to process information in fundamentally different ways compared to classical computing. This blog will introduce the basics of quantum computing, explore its potential applications, and provide resources for further learning.

## Introduction to Quantum Computing

Quantum computing harnesses the peculiar principles of quantum mechanics, such as superposition and entanglement, to perform computations that would be infeasible for classical computers. While classical computers use bits as the smallest unit of information (which can be 0 or 1), quantum computers use quantum bits, or qubits, which can represent both 0 and 1 simultaneously due to superposition.

## Basic Concepts

### Quantum Bits (Qubits)

A qubit is the fundamental unit of quantum information. Unlike a classical bit, which can be either 0 or 1, a qubit can exist in a state that is a linear combination of both. This property is called superposition. Mathematically, a qubit's state can be represented as:

$$ |\psi\rangle = \alpha|0\rangle + \beta|1\rangle $$

where $|\alpha|^2$ and $|\beta|^2$ are the probabilities of measuring the qubit in the states $|0\rangle$ and $|1\rangle$ respectively, and $|\alpha|^2 + |\beta|^2 = 1$.
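
As an illustrative aside (not part of the original post), this state can be represented as a two-component vector in NumPy; the amplitudes below are arbitrary example values chosen so that the state is normalized:

```python
import numpy as np

# Example amplitudes (arbitrary, chosen so that |alpha|^2 + |beta|^2 = 1)
alpha = 1 / np.sqrt(3)
beta = np.sqrt(2 / 3)

# |psi> = alpha|0> + beta|1> as a vector in the computational basis
psi = np.array([alpha, beta], dtype=complex)

# Measurement probabilities are the squared magnitudes of the amplitudes
p0, p1 = np.abs(psi) ** 2
print(f"P(0) = {p0:.3f}, P(1) = {p1:.3f}")      # P(0) = 0.333, P(1) = 0.667
print("Normalized:", np.isclose(p0 + p1, 1.0))  # True
```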

### Superposition

Superposition is the ability of a quantum system to be in multiple states simultaneously. For qubits, this means they can represent both 0 and 1 at the same time. This property enables quantum computers to explore a vast number of possibilities simultaneously, providing an exponential speed-up for certain computations.

### Entanglement

Entanglement is a quantum phenomenon in which two or more qubits become correlated in such a way that the state of one qubit cannot be described independently of the others, regardless of the distance between them. This correlation is a key resource for quantum computing, enabling complex operations and secure communication protocols.

### Quantum Gates

Quantum gates are the building blocks of quantum circuits, analogous to classical logic gates. They manipulate qubits through unitary transformations. Some fundamental quantum gates include:

- **Pauli-X Gate**: Flips the state of a qubit ($|0\rangle \leftrightarrow |1\rangle$), analogous to a NOT gate in classical computing.
- **Hadamard Gate**: Creates superposition, transforming $|0\rangle$ into $\frac{|0\rangle + |1\rangle}{\sqrt{2}}$ and $|1\rangle$ into $\frac{|0\rangle - |1\rangle}{\sqrt{2}}$.
- **CNOT Gate**: A two-qubit gate that flips the target (second) qubit if the control (first) qubit is in the state $|1\rangle$.
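
The following sketch (not part of the original post) writes these gates as unitary matrices in NumPy and shows that a Hadamard followed by a CNOT turns $|00\rangle$ into the entangled Bell state $\frac{|00\rangle + |11\rangle}{\sqrt{2}}$ mentioned above:

```python
import numpy as np

# Single-qubit gates as 2x2 unitary matrices
X = np.array([[0, 1],
              [1, 0]])                  # Pauli-X (quantum NOT)
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)    # Hadamard

# CNOT on (control, target) in the basis |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

print(X @ np.array([1, 0]))             # |0> becomes |1>

# Hadamard on the first qubit, then CNOT: |00> -> (|00> + |11>)/sqrt(2)
ket00 = np.array([1, 0, 0, 0])
bell = CNOT @ np.kron(H, np.eye(2)) @ ket00
print(bell)                             # [0.7071 0. 0. 0.7071]
```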

## Applications of Quantum Computing

### Cryptography

Quantum computing poses a significant threat to classical cryptographic systems. Algorithms like Shor's algorithm can factor large numbers exponentially faster than the best-known classical algorithms, potentially breaking widely used encryption methods such as RSA. On the flip side, quantum cryptography offers new ways to secure information, such as Quantum Key Distribution (QKD), which guarantees secure communication based on the principles of quantum mechanics.

### Optimization Problems

Many real-world problems, such as supply chain management, financial modelling, and route optimization, involve finding the best solution among a vast number of possibilities. Quantum computing can provide significant speed-ups for solving these optimization problems using algorithms like the Quantum Approximate Optimization Algorithm (QAOA) and Grover's algorithm.

### Drug Discovery

Quantum computers can simulate molecular interactions at a quantum level, providing insights into the behavior of complex molecules. This capability is crucial for drug discovery and materials science, as it allows researchers to model and predict chemical reactions more accurately, potentially leading to the development of new medications and materials.

### Machine Learning

Quantum machine learning combines quantum computing and classical machine learning techniques to enhance data processing capabilities. Quantum computers can process large datasets and complex models more efficiently, leading to faster training times and improved performance for certain machine learning tasks.

## Resources for Further Learning

### Books

1. **"Quantum Computing: A Gentle Introduction" by Eleanor Rieffel and Wolfgang Polak**
   - A comprehensive introduction to the principles and applications of quantum computing.

2. **"Quantum Computation and Quantum Information" by Michael A. Nielsen and Isaac L. Chuang**
   - Often considered the definitive textbook on quantum computing, covering a wide range of topics in depth.

### Papers

1. **"Simulating Physics with Computers" by Richard Feynman**
   - One of the foundational papers in quantum computing, introducing the concept of using quantum systems for simulation.

2. **"Algorithms for Quantum Computation: Discrete Logarithms and Factoring" by Peter Shor**
   - The seminal paper that introduced Shor's algorithm, demonstrating the potential of quantum computers to solve certain problems exponentially faster than classical computers.

### Online Courses

1. **"Quantum Computing for the Very Curious" by Andy Matuschak and Michael Nielsen**
   - A free, interactive online book that provides a hands-on introduction to quantum computing.

2. **Coursera: "Quantum Computing" by University of Toronto**
   - A comprehensive course covering the basics of quantum computing, quantum algorithms, and quantum hardware.

### Tutorials and Blogs

1. **Qiskit Tutorials**
   - IBM's open-source quantum computing framework provides extensive tutorials and resources for learning quantum programming.

2. **Quantum Computing Report**
   - A blog that keeps up with the latest news, developments, and insights in the field of quantum computing.

By understanding these basic concepts and exploring the resources provided, you can build a strong foundation in quantum computing and appreciate its potential to revolutionize various industries.
Lines changed: 106 additions & 0 deletions
@@ -0,0 +1,106 @@
# Learning Rules in Artificial Neural Networks (ANN)

## Introduction

Learning rules are essential components of Artificial Neural Networks (ANNs) that govern how the network updates its weights and biases. This document focuses on two fundamental learning rules: Hebbian Learning and Adaline (Adaptive Linear Neuron) Learning.

## 1. Hebbian Learning

Hebbian Learning, proposed by Donald Hebb in 1949, is one of the earliest and simplest learning rules in neural networks. It is based on the principle that neurons that fire together, wire together.

### Basic Principle

The strength of a connection between two neurons increases if both neurons are activated simultaneously.

### Mathematical Formulation

For neurons $i$ and $j$ with activation values $x_i$ and $x_j$, the weight update $\Delta w_{ij}$ is given by:

$$ \Delta w_{ij} = \eta x_i x_j $$

Where:
- $\Delta w_{ij}$ is the change in weight between neurons $i$ and $j$
- $\eta$ is the learning rate
- $x_i$ is the output of the presynaptic neuron
- $x_j$ is the output of the postsynaptic neuron
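
As an illustrative sketch (not from the original document), the plain Hebbian update is just a few lines of NumPy; the toy activations below are assumed purely for demonstration:

```python
import numpy as np

def hebbian_update(w, x_pre, x_post, eta=0.1):
    """Plain Hebbian rule: delta_w = eta * x_pre * x_post (applied element-wise)."""
    return w + eta * x_pre * x_post

# Toy example: one presynaptic activity vector and one postsynaptic output
w = np.zeros(3)                       # initial weights
x_pre = np.array([1.0, -1.0, 0.5])    # presynaptic activations
x_post = 1.0                          # postsynaptic activation

for _ in range(5):                    # repeated co-activation strengthens the weights
    w = hebbian_update(w, x_pre, x_post)

print(w)  # [ 0.5  -0.5   0.25]
```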

### Variations

1. **Oja's Rule**: A modification of Hebbian learning that includes weight normalization, keeping the weights bounded (see the code sketch after this list):

   $$\Delta w_{ij} = \eta(x_i y_j - \alpha y_j^2 w_{ij})$$

   Where $y_j$ is the output of neuron $j$ and $\alpha$ is a forgetting factor.

2. **Generalized Hebbian Algorithm (GHA)**, also known as Sanger's rule, extends Oja's rule to multiple outputs:

   $$\Delta W = \eta\left(\mathbf{y}\mathbf{x}^T - \mathrm{LT}\!\left[\mathbf{y}\mathbf{y}^T\right]W\right)$$

   Where $\mathbf{x}$ is the input vector, $\mathbf{y} = W\mathbf{x}$ is the output vector, $W$ is the weight matrix, and $\mathrm{LT}[\cdot]$ denotes the lower-triangular part of a matrix.
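
A minimal NumPy sketch of Oja's rule for a single output neuron (assuming $\alpha = 1$; the toy data below is made up for illustration). With repeated updates the weight vector drifts toward the dominant principal component of the inputs while keeping roughly unit norm:

```python
import numpy as np

def oja_update(w, x, eta=0.01):
    """Oja's rule: delta_w = eta * y * (x - y * w), with y = w . x."""
    y = np.dot(w, x)
    return w + eta * y * (x - y * w)

rng = np.random.default_rng(0)
# Toy 2-D data stretched along the first axis, so it has a clear principal component
data = rng.normal(size=(500, 2)) * np.array([3.0, 0.5])

w = rng.normal(size=2)
for x in data:
    w = oja_update(w, x)

print(w, np.linalg.norm(w))  # roughly a unit vector along the dominant axis
```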

## 2. Adaline Learning (Widrow-Hoff Learning Rule)

Adaline (Adaptive Linear Neuron), developed by Bernard Widrow and Marcian Hoff in 1960, is a single-layer neural network that uses a linear activation function.

### Basic Principle

Adaline learning aims to minimize the mean squared error between the desired output and the actual output of the neuron.

### Mathematical Formulation

For an input vector $\mathbf{x}$ and desired output $d$, the weight update is given by:

$$ \Delta \mathbf{w} = \eta(d - y)\mathbf{x} $$

Where:
- $\Delta \mathbf{w}$ is the change in weight vector
- $\eta$ is the learning rate
- $d$ is the desired output
- $y = \mathbf{w}^T\mathbf{x}$ is the actual output
- $\mathbf{x}$ is the input vector

### Learning Process

1. Initialize the weights randomly.
2. For each training example:

   a. Calculate the output: $y = \mathbf{w}^T\mathbf{x}$

   b. Update the weights:

   $$\mathbf{w}_{new} = \mathbf{w}_{old} + \eta(d - y)\mathbf{x}$$

3. Repeat step 2 until convergence or until a maximum number of epochs is reached.
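
As an illustrative sketch (not from the original document), the loop above can be implemented in NumPy; the OR-gate data with bipolar targets below mirrors the accompanying notebook and is used purely for demonstration:

```python
import numpy as np

def adaline_train(X, d, eta=0.1, epochs=20, seed=0):
    """Widrow-Hoff (Adaline) training: w <- w + eta * (d - y) * x, with y = w . x + b."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, d):
            y = np.dot(w, x) + b            # linear output (no threshold during training)
            w = w + eta * (target - y) * x
            b = b + eta * (target - y)
    return w, b

# Toy OR-gate data with bipolar inputs and targets
X = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], dtype=float)
d = np.array([1, 1, 1, -1], dtype=float)

w, b = adaline_train(X, d)
print(np.sign(X @ w + b))  # prints [ 1.  1.  1. -1.] once training has converged
```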

### Comparison with Perceptron Learning

While similar to the perceptron learning rule, Adaline uses the actual output value for weight updates, not just the sign of the output. This allows for more precise weight adjustments.

## Conclusion

Both Hebbian and Adaline learning rules play crucial roles in the development of neural network theory:

- Hebbian Learning provides a biological inspiration for neural learning and is fundamental in unsupervised learning scenarios.
- Adaline Learning introduces the concept of minimizing error, which is a cornerstone of many modern learning algorithms, including backpropagation in deep neural networks.

Understanding these basic learning rules provides insight into more complex learning algorithms used in deep learning and helps in appreciating the historical development of neural network theory.

## How to Use This Repository

- Clone this repository to your local machine (git clones the whole repository; the material for this guide lives under `docs/Deep Learning/Learning Rule IN ANN`):

  ```bash
  git clone https://github.com/CodeHarborHub/codeharborhub.github.io.git
  ```

- For Python implementations and visualizations:

  1. Ensure you have Jupyter Notebook installed:

     ```bash
     pip install jupyter
     ```

  2. Navigate to the project directory in your terminal.
  3. Open `learning_rules.ipynb`.
Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<h3>Implementation of different learning rules (Hebbian and Adaline) in an ANN</h3>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Trained Weights for hebb network are : [1.5488135 1.71518937]\n",
      "Trained bias for hebb network are : [1.60276338]\n",
      "Trained Weights for adaline network are : [0.53143536 0.57216559]\n",
      "Trained bias for adaline network are : [0.51141955]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Problem --> OR gate with bipolar inputs and targets\n",
    "input = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]])\n",
    "output = np.array([1, 1, 1, -1])\n",
    "np.random.seed(0)\n",
    "weight = np.random.rand(2)\n",
    "bias = np.random.rand(1)\n",
    "learning_rate = 0.1\n",
    "\n",
    "# Hebbian learning rule: w <- w + eta * t * x whenever the output differs from the target\n",
    "def hebbNetwork(input, output, b, w, epochs, learning_rate=0.1):\n",
    "    for epoch in range(epochs):\n",
    "        for i in range(len(input)):\n",
    "            cal_output = np.dot(input[i], w) + b\n",
    "            if cal_output != output[i]:\n",
    "                w = w + learning_rate * output[i] * input[i]\n",
    "                b = b + learning_rate * output[i]\n",
    "    return w, b\n",
    "\n",
    "# Adaline (Widrow-Hoff) learning rule: w <- w + eta * (t - y) * x\n",
    "def adalineNetwork(input, output, b, w, epochs, learning_rate=0.1):\n",
    "    for epoch in range(epochs):\n",
    "        for i in range(len(input)):\n",
    "            cal_output = np.dot(input[i], w) + b\n",
    "            if cal_output != output[i]:\n",
    "                w = w + learning_rate * (output[i] - cal_output) * input[i]\n",
    "                b = b + learning_rate * (output[i] - cal_output)\n",
    "    return w, b\n",
    "\n",
    "\n",
    "wh, bh = hebbNetwork(input, output, bias, weight, 5)\n",
    "\n",
    "print(\"Trained Weights for hebb network are : \", wh)\n",
    "print(\"Trained bias for hebb network are : \", bh)\n",
    "\n",
    "wa, ba = adalineNetwork(input, output, bias, weight, 5)\n",
    "\n",
    "print(\"Trained Weights for adaline network are : \", wa)\n",
    "print(\"Trained bias for adaline network are : \", ba)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[4.86676625]\n",
      "[1.43638751]\n",
      "[1.76913924]\n",
      "[-1.66123949]\n"
     ]
    }
   ],
   "source": [
    "# Outputs of the trained Hebbian network for the four OR-gate inputs\n",
    "for i in range(len(input)):\n",
    "    cal_output = np.dot(input[i], wh) + bh\n",
    "    print(cal_output)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}

docs/Flask/11-Flask app on Heroku.md

Lines changed: 50 additions & 0 deletions
@@ -0,0 +1,50 @@
---
id: Deploy Python Flask App on Heroku
title: How to Deploy a Flask App on Heroku
sidebar_label: Flask App on Heroku
sidebar_position: 11
tags: [flask, python, heroku]
description: In this tutorial, you will learn how to deploy a Flask app on Heroku.
---

Flask is based on the Werkzeug WSGI toolkit and the Jinja2 template engine, both of which are Pocoo projects. This article covers how to deploy a Flask app on Heroku. To demonstrate the process, we will first create a sample application.

The prerequisites are:

1. Python
2. pip
3. Heroku CLI
4. Git

### Deploying a Flask App on Heroku

Let’s create a simple Flask application first, and then deploy it to Heroku. Create a folder named “eflask”, open the command line, and `cd` into the “eflask” directory. Follow the steps below to create the sample application for this tutorial.

#### STEP 1:

Create a virtual environment with pipenv and install Flask and Gunicorn.
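
A minimal sketch of the command (assuming pipenv is already installed; if not, run `pip install pipenv` first):

```bash
pipenv install flask gunicorn
```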

#### STEP 2:

Create a “Procfile” and declare the web process in it.
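
A typical Procfile for a Flask app served by Gunicorn (assuming the `wsgi.py` entry point created in STEP 6 exposes an `app` object) looks like this:

```
web: gunicorn wsgi:app
```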

#### STEP 3:

Create “runtime.txt” and pin the Python version for Heroku.
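
For example (the exact version is an assumption — use one that Heroku currently supports):

```
python-3.8.10
```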

#### STEP 4:

Create a folder named “app” and enter the folder.
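
The equivalent shell commands:

```bash
mkdir app
cd app
```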

#### STEP 5:

Create a Python file, “main.py”, and enter the sample application code.
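
A minimal hypothetical `main.py` for this tutorial could look like this:

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def home():
    return "Hello from Flask on Heroku!"
```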

#### STEP 6:

Get back to the previous directory, “eflask”. Create a file “wsgi.py” and insert the following code.
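
As a sketch, assuming `main.py` from STEP 5 lives inside the `app` folder and defines the `app` object:

```python
from app.main import app

if __name__ == "__main__":
    app.run()
```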

#### STEP 7:

Run the virtual environment.
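
With pipenv this is:

```bash
pipenv shell
```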

#### STEP 8:

Initialize an empty Git repository, add the files, and commit all the changes.
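
For example:

```bash
git init
git add .
git commit -m "Initial commit"
```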

#### STEP 9:

Log in to the Heroku CLI.
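
This prompts you to authenticate in the browser:

```bash
heroku login
```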

#### STEP 10:

Push your code from local to the Heroku remote.
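
Create the Heroku app first if you haven’t already (the app name below is only a placeholder), then push; Heroku builds and deploys the app on push:

```bash
heroku create eflask-demo-app   # placeholder name; pick your own
git push heroku main            # or 'master', depending on your branch name
```
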
File renamed without changes.
