Implementing a Neural Network from Scratch in Python
This guide provides a step-by-step approach to implementing a simple neural network from scratch in Python. We'll focus on a feedforward neural network with one hidden layer, and we'll walk through key concepts like forward propagation, backward propagation, and training the model.
Table of Contents
- Necessary Imports
- Neural Network Class Structure
- Forward Propagation
- Backward Propagation
- Training the Neural Network
- Usage Example
1. Necessary Imports
We will need NumPy for numerical calculations:
import numpy as np
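Because the weights and biases below are initialized randomly, results will differ from run to run. If you want reproducible results, you can seed NumPy's random number generator first (the seed value here is arbitrary):
np.random.seed(42)  # any fixed value works; this makes runs reproducible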
2. Neural Network Class Structure
We'll create a NeuralNetwork class that encapsulates the network's parameters and behavior.
class NeuralNetwork:
    """
    Simple feedforward neural network.

    Attributes:
        input_size: Size of the input layer.
        hidden_size: Size of the hidden layer.
        output_size: Size of the output layer.
        learning_rate: Learning rate for weight updates.
        weights_input_hidden: Weights between input and hidden layers.
        weights_hidden_output: Weights between hidden and output layers.
        bias_hidden: Bias for the hidden layer.
        bias_output: Bias for the output layer.
    """

    def __init__(self, input_size, hidden_size, output_size, learning_rate=0.01):
        """
        Initializes the neural network with random weights and biases.

        Parameters:
            input_size (int): Number of input features.
            hidden_size (int): Number of neurons in the hidden layer.
            output_size (int): Number of output classes.
            learning_rate (float): Learning rate for weight updates.
        """
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.output_size = output_size
        self.learning_rate = learning_rate

        # Weight initialization
        self.weights_input_hidden = np.random.rand(self.input_size, self.hidden_size)
        self.weights_hidden_output = np.random.rand(self.hidden_size, self.output_size)

        # Bias initialization
        self.bias_hidden = np.random.rand(self.hidden_size)
        self.bias_output = np.random.rand(self.output_size)
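One thing to note: np.random.rand draws uniformly from [0, 1), so every initial weight is positive. That is enough for this toy example, but a common alternative is to draw small values centered on zero. As a sketch (not part of the original code), you could replace the two weight lines in __init__ with:
        # Alternative sketch (not the original code): small, zero-centered weights
        # np.random.randn samples from a standard normal distribution
        self.weights_input_hidden = np.random.randn(self.input_size, self.hidden_size) * 0.1
        self.weights_hidden_output = np.random.randn(self.hidden_size, self.output_size) * 0.1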
3. Forward Propagation
Forward propagation passes the inputs through the network: each layer applies a linear transformation followed by the sigmoid activation. Add the following methods to the NeuralNetwork class:
    def sigmoid(self, x):
        """Applies the sigmoid activation function."""
        return 1 / (1 + np.exp(-x))

    def forward(self, X):
        """
        Forward propagate the inputs through the network.

        Parameters:
            X (ndarray): Input data.

        Returns:
            ndarray: Output from the network.
        """
        self.hidden_input = np.dot(X, self.weights_input_hidden) + self.bias_hidden
        self.hidden_output = self.sigmoid(self.hidden_input)
        self.final_input = np.dot(self.hidden_output, self.weights_hidden_output) + self.bias_output
        self.final_output = self.sigmoid(self.final_input)
        return self.final_output
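As a quick sanity check of the data flow, you can run a forward pass on dummy input and confirm the output shape (this assumes the class above with the methods added):
# Quick shape check: 4 samples with 3 features each -> 4 predictions
nn = NeuralNetwork(input_size=3, hidden_size=4, output_size=1)
X_dummy = np.zeros((4, 3))
print(nn.forward(X_dummy).shape)  # (4, 1): one prediction per sample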
4. Backward Propagation
Backward propagation computes the gradient of the error with respect to each weight and bias, then updates them in the direction that reduces the error. Because the sigmoid's derivative satisfies s'(x) = s(x)(1 - s(x)), the gradients can be written directly in terms of the layer outputs, which is what the output * (1 - output) factors below do. This method also belongs inside the NeuralNetwork class:
    def backward(self, X, y, output):
        """
        Backward propagate the error and update weights.

        Parameters:
            X (ndarray): Input data.
            y (ndarray): True labels.
            output (ndarray): Predicted outputs.
        """
        output_error = y - output  # Calculate output layer error
        output_delta = output_error * output * (1 - output)  # Gradient of output

        hidden_error = np.dot(output_delta, self.weights_hidden_output.T)
        hidden_delta = hidden_error * self.hidden_output * (1 - self.hidden_output)  # Gradient of hidden

        # Update weights and biases
        self.weights_hidden_output += np.dot(self.hidden_output.T, output_delta) * self.learning_rate
        self.bias_output += np.sum(output_delta, axis=0) * self.learning_rate
        self.weights_input_hidden += np.dot(X.T, hidden_delta) * self.learning_rate
        self.bias_hidden += np.sum(hidden_delta, axis=0) * self.learning_rate
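A standard way to verify a hand-written backward pass is a numerical gradient check: perturb a single weight, measure the change in the loss, and compare against the analytic gradient. Here is a minimal sketch, assuming the X and y arrays from the usage example below; the mse_loss helper is ours, not part of the original code:
def mse_loss(net, X, y):
    # Hypothetical helper (not part of the original code): the squared-error
    # loss that the update rule above implicitly minimizes
    return 0.5 * np.sum((y - net.forward(X)) ** 2)

nn = NeuralNetwork(input_size=3, hidden_size=4, output_size=1)
out = nn.forward(X)                         # X, y as in the usage example below
output_delta = (y - out) * out * (1 - out)
analytic = -np.dot(nn.hidden_output.T, output_delta)[0, 0]  # analytic dLoss/dW for one weight

eps = 1e-5
nn.weights_hidden_output[0, 0] += eps
loss_plus = mse_loss(nn, X, y)
nn.weights_hidden_output[0, 0] -= 2 * eps
loss_minus = mse_loss(nn, X, y)
numeric = (loss_plus - loss_minus) / (2 * eps)  # numerical dLoss/dW
print(abs(analytic - numeric))                  # should be close to zero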
5. Training the Neural Network
Training alternates forward and backward passes over the full dataset for a fixed number of epochs. This method also goes inside the NeuralNetwork class:
    def train(self, X, y, epochs=1000):
        """
        Train the neural network.

        Parameters:
            X (ndarray): Input data.
            y (ndarray): True labels.
            epochs (int): Number of training epochs.
        """
        for epoch in range(epochs):
            output = self.forward(X)     # Forward pass
            self.backward(X, y, output)  # Backward pass
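The train method runs silently. If you want to watch convergence, one option is an explicit loop that reports the mean squared error periodically; a sketch, again assuming the X and y arrays from the usage example below:
nn = NeuralNetwork(input_size=3, hidden_size=4, output_size=1)
for epoch in range(10000):
    output = nn.forward(X)
    nn.backward(X, y, output)
    if epoch % 1000 == 0:
        print(f"epoch {epoch}: mse = {np.mean((y - output) ** 2):.4f}")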
6. Usage Example
Here is how to use the NeuralNetwork class:
# Example usage
if __name__ == "__main__":
    # Input features (4 samples with 3 features)
    X = np.array([[0, 0, 1],
                  [1, 0, 1],
                  [0, 1, 1],
                  [1, 1, 1]])
    # Target output (logical AND of the first two features; the third is always 1)
    y = np.array([[0],
                  [0],
                  [0],
                  [1]])
    # Initialize the neural network
    nn = NeuralNetwork(input_size=3, hidden_size=4, output_size=1)

    # Train the network
    nn.train(X, y, epochs=10000)  # more epochs than the default, so the small learning rate can converge
    # Make predictions
    predictions = nn.forward(X)
    print("Predictions after training:")
    print(predictions)
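The raw predictions are sigmoid outputs in (0, 1). To convert them into hard 0/1 class labels, a simple option is to threshold at 0.5:
labels = (predictions > 0.5).astype(int)  # hard 0/1 labels
print("Predicted labels:", labels.ravel())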
Conclusion
This code demonstrates a fundamental implementation of a feedforward neural network in Python. The architecture features one hidden layer and includes key functions for forward propagation, backward propagation, and model training.
For a deeper understanding and more advanced techniques, consider exploring the courses on the Enterprise DNA Platform, which offer valuable insights into data analysis and machine learning applications.