1 Replies - 665 Views - Last Post: 05 May 2017 - 04:10 AM

#1 moonman239

Gradient decreases, but mean squared error function still very high

Posted 03 May 2017 - 06:19 PM

Hi, I'm a newbie at neural networks. I'm trying to build one from scratch, using backpropagation and gradient descent. My dataset is a set of right triangles, and I'm using the network to approximate the hypotenuse from the two legs. Although my gradient-descent routine appears to be doing what it's supposed to, I consistently end up with a mean squared error of no less than 50, even though the largest hypotenuse in my dataset is only about 14 units.

With that said, here's my Python code:
#!/usr/local/bin/python3
#Module: neuralnetwork.py
# Approximates the Pythagorean theorem using a neural network.
import random
import math
# create training and testing datasets
class rightTriangle:
	height = 0.00
	width = 0.00
	def hypotenuse(self):
		return (self.height**2 + self.width**2)**(1/2)
triangles = []
for i in range(0,300):
	triangle = rightTriangle()
	triangle.height = random.random() * 9 + 1
	triangle.width = random.random() * 9 + 1
	triangles.append(triangle)
# First 200 triangles for training, the remaining 100 for testing.
trainingTriangles = triangles[:200]
testingTriangles = triangles[200:]
class Neuron:
	def __init__(self):
		# Per-instance state: class-level lists would be shared by
		# every Neuron, so each instance gets its own copies here.
		self.numInputs = 2
		self.inputWeights = [1, 1]
		self.inputBiases = [1, 1]
	# Calculate the output given a set of inputs.
	def activate(self, inputs):
		# Weighted sum over all inputs.
		total = 0
		for i in range(self.numInputs):
			total += self.inputWeights[i] * inputs[i] + self.inputBiases[i]
		# Sigmoid activation.
		return 1 / (1 + math.exp(-total))
	# Adjust weights and biases so as to minimize the cost (error) function for a given dataset.
	def adjustWeightsAndBiases(self, triangles):
		h = 10 # Step size for the finite-difference gradient estimate.
		learningFactor = 10 # Learning rate applied to each gradient step.
		for weightNum in range(self.numInputs):
			for i in range(200):
				# Forward-difference estimate of d(cost)/d(weight).
				currentCost = meanSquaredError(triangles)
				self.inputWeights[weightNum] += h
				gradient = (meanSquaredError(triangles) - currentCost) / h
				self.inputWeights[weightNum] -= h
				# Step the weight against the scaled gradient.
				self.inputWeights[weightNum] -= learningFactor * gradient
		for biasNum in range(self.numInputs):
			for i in range(200):
				# Forward-difference estimate of d(cost)/d(bias).
				currentCost = meanSquaredError(triangles)
				self.inputBiases[biasNum] += h
				gradient = (meanSquaredError(triangles) - currentCost) / h
				self.inputBiases[biasNum] -= h
				# Step the bias against the scaled gradient.
				self.inputBiases[biasNum] -= learningFactor * gradient
inputNeuron = Neuron()
hiddenNeuron = Neuron()
outputNeuron = Neuron()
# Function to predict the hypotenuse given a height (inputs[0]) and width (inputs[1])
def predict(inputs):
	inputNeuronOutput = inputNeuron.activate(inputs)
	hiddenNeuronOutput = hiddenNeuron.activate([inputNeuronOutput,0])
	outputNeuronOutput = outputNeuron.activate([hiddenNeuronOutput,0])
	return outputNeuronOutput
# Function to calculate the mean squared error for a given set of triangles.
def meanSquaredError(triangles):
	total = 0
	for triangle in triangles:
		errorSquared = (triangle.hypotenuse() - predict([triangle.width, triangle.height]))**2
		total += errorSquared
	return total / len(triangles)
# Train our neural network.
outputNeuron.adjustWeightsAndBiases(trainingTriangles)
hiddenNeuron.adjustWeightsAndBiases(trainingTriangles)
inputNeuron.adjustWeightsAndBiases(trainingTriangles)
# Test the neural network.
print("Mean squared error: " + str(meanSquaredError(testingTriangles)))


Is my problem due to an improper use of backpropagation, or does my hidden layer need more neurons?
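For reference on the gradient step, here is a standalone sketch (the function and variable names are mine, not part of the program above) of how the step size h affects a forward-difference estimate. On f(w) = w², the estimate works out to 2w + h, so its error is exactly h:

```python
# Forward-difference gradient estimate: (f(w + h) - f(w)) / h.
def forward_difference(f, w, h):
    return (f(w + h) - f(w)) / h

f = lambda w: w**2
w = 3.0
exact = 2 * w  # True derivative of w**2 at w = 3 is 6.0.

coarse = forward_difference(f, w, 10.0)  # (13**2 - 9) / 10 = 16.0, badly off
fine = forward_difference(f, w, 1e-5)    # ~6.00001, close to the true 6.0

print(coarse, fine, exact)
```

With a large h the "gradient" can point in a direction that has little to do with the true derivative, even when the printed value shrinks over time.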


Replies To: Gradient decreases, but mean squared error function still very high

#2 Skydiver

Re: Gradient decreases, but mean squared error function still very high

Posted 05 May 2017 - 04:10 AM

Instead of just running the code, have you stepped through it with pen and paper (or a text editor) nearby to record values? Do those values match up with the manual computations you did when you designed the network?

Or are you treating everything like a black box, and are you now wondering what else to throw into the black box?
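One way to record those values without pen and paper is to print each intermediate as the forward pass computes it. A minimal, standalone sketch (names are illustrative, not taken from the post above) using the same weighted-sum-plus-sigmoid shape as the code in question:

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Trace one forward pass, printing each intermediate value.
def traced_forward(weights, biases, inputs):
    total = 0.0
    for i, x in enumerate(inputs):
        total += weights[i] * x + biases[i]
        print(f"after input {i}: running sum = {total}")
    out = sigmoid(total)
    print(f"activation = {out}")
    return out

# Example: legs 3 and 4, unit weights and biases.
traced_forward([1, 1], [1, 1], [3.0, 4.0])
```

One thing a trace like this makes immediately visible: a sigmoid always returns a value strictly between 0 and 1, which is worth comparing against the hypotenuse values (up to roughly 14) that the network is being asked to produce.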
