I've been trying to build a neural network in Python from scratch as a learning experience. The network has two layers, and each layer has weights and biases. I compute the network's output, compare it with the expected output, and score it as a percentage. I've been studying evolution and had the thought, "if every time the score is low I randomize the network until it works, it will work." The problem was that every time it scored better, the score would then dip again. So I had the idea, "if the current score is worse than the last score, I should reset the network to the last version," and I tried to create a save system...
So I have a loop. First it calculates the output:
layer1.forward(X)
layer2.forward(layer1.output)
then scores it:
compareValues()
amountCorrect = correctValues[0] + correctValues[1] + correctValues[2] + correctValues[3] + correctValues[4] + correctValues[5]
percentCorrect = (amountCorrect / amountOfVariables) * 100
and then randomly edits the network:
changeNetwork()
This loop runs while the score is below 80 percent. I check whether the current score is less than the last score. If it is, I set the current version of the network back to the last version; otherwise I set the old version equal to the current one:
if lastPercent > percentCorrect:
    layer1.biases = layer1Old.biases
    layer1.weights = layer1Old.weights
    layer2.biases = layer2Old.biases
    layer2.weights = layer2Old.weights
else:
    layer1Old.biases = layer1.biases
    layer1Old.weights = layer1.weights
    layer2Old.biases = layer2.biases
    layer2Old.weights = layer2.weights
Then I randomly edit the current network. But this did nothing. So I added a little "TEST" debug print to check whether it knew when the last score was greater than the current one, and it does print "TEST", so I know the comparison isn't the problem. Without walking you through the whole debugging process, I'll just say it: for whatever reason, once the old network is set to the current network for the first time, it stays equal to the current network from then on, even at points in the loop where it hasn't been set. So it's just setting the network equal to itself.
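To show what I mean outside my script, here's a tiny standalone example. My guess (and it is only a guess, since I don't really know Python) is that this is the same mechanism: assigning a NumPy array doesn't copy it, it just makes a second name for the same data, and `+=` then changes that shared data in place:

```python
import numpy as np

# Plain assignment does NOT copy a NumPy array -- both names
# now point at the same underlying data.
current = np.zeros((1, 3))
saved = current          # "saved" is the SAME array, not a snapshot

current += 1.0           # in-place mutation changes the shared data

print(saved)             # [[1. 1. 1.]] -- the "backup" changed too
print(saved is current)  # True
```

This matches what I see in my loop: after the first "save", the old network always equals the current one.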
Here's the full loop:
while percentCorrect < 80:
    layer1.forward(X)
    layer2.forward(layer1.output)
    compareValues()
    amountCorrect = correctValues[0] + correctValues[1] + correctValues[2] + correctValues[3] + correctValues[4] + correctValues[5]
    percentCorrect = (amountCorrect / amountOfVariables) * 100
    # print(percentCorrect)
    print("------------------------------------------------------------------------")
    print(layer1.biases)
    print(layer1Old.biases)
    if lastPercent > percentCorrect:
        print("TEST")
        layer1.biases = layer1Old.biases
        layer1.weights = layer1Old.weights
        layer2.biases = layer2Old.biases
        layer2.weights = layer2Old.weights
    else:
        layer1Old.biases = layer1.biases
        layer1Old.weights = layer1.weights
        layer2Old.biases = layer2.biases
        layer2Old.weights = layer2.weights
    # print(layer1.biases)
    # print(layer2.biases)
    # print(layer1.weights)
    # print(layer2.weights)
    # print(layer2.output)
    # print(layer1Old.biases)
    # print(lastPercent)
    print(percentCorrect)
    print(lastPercent)
    lastPercent = percentCorrect
    changeNetwork()
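Side note: I realize the scoring part is clunky. If I understand NumPy right, the whole compare-and-count step could be condensed to something like this (a sketch with made-up output values, same 0.5 tolerance as my `compareValues`):

```python
import numpy as np

# Hypothetical network output and the targets from my script.
output = np.array([[18.2, -29.4], [0.1, -5.6], [4.3, -3.0]])
Y = np.array([[18.0, -29.0], [0.14100315, -5.0], [4.0, -3.0]])

# An element counts as "correct" if it is within 0.5 of the target.
correct = np.abs(output - Y) <= 0.5
percentCorrect = correct.mean() * 100
print(percentCorrect)  # 5 of 6 within tolerance -> 83.33...
```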
And here's the entire script:
import numpy as np
import random

np.random.seed(0)

X = [[1.0, 2.0, 3.0, 2.5],
     [2.0, 5.0, -1.0, 2.0],
     [-1.5, 2.7, 3.3, -0.8]]

Y = [[18.0, -29.0],
     [0.14100315, -5.0],
     [4.0, -3.0]]

# Y = [[0.148296, -0.08397602],
#      [0.14100315, -0.01340469],
#      [0.20124979, -0.07290616]]

correctValues = [0, 0, 0, 0, 0, 0]
amountCorrect = 0
amountOfVariables = 6
percentCorrect = 0

class LayerDense:
    def __init__(self, inputCount, neuronCount):
        self.weights = 0.10 * np.random.randn(inputCount, neuronCount)
        self.biases = np.zeros((1, neuronCount))

    def forward(self, inputs):
        self.output = np.dot(inputs, self.weights) + self.biases

layer1 = LayerDense(4, 5)
layer2 = LayerDense(5, 2)
layer1Old = LayerDense(4, 5)
layer2Old = LayerDense(5, 2)

layer1.forward(X)
layer2.forward(layer1.output)

def compareValues():
    currentIndex = 0
    for i in range(0, 3):
        for j in range(0, 2):
            if layer2.output[i][j] - Y[i][j] <= 0.5 and layer2.output[i][j] - Y[i][j] >= -0.5:
                correctValues[currentIndex] = 1
            else:
                correctValues[currentIndex] = 0
            currentIndex += 1

def changeNetwork():
    amount = random.uniform(-0.1, 0.1)

    maxBiases = len(layer1.biases[0])
    changeBias = random.randint(0, maxBiases - 1)
    layer1.biases[0][changeBias] += amount

    maxWeights = len(layer1.weights)
    changeWeights = random.randint(0, maxWeights - 1)
    maxWeight = len(layer1.weights[changeWeights])
    changeWeight = random.randint(0, maxWeight - 1)
    layer1.weights[changeWeights][changeWeight] += amount

    maxBiases = len(layer2.biases[0])
    changeBias = random.randint(0, maxBiases - 1)
    layer2.biases[0][changeBias] += amount

    maxWeights = len(layer2.weights)
    changeWeights = random.randint(0, maxWeights - 1)
    maxWeight = len(layer2.weights[changeWeights])
    changeWeight = random.randint(0, maxWeight - 1)
    layer2.weights[changeWeights][changeWeight] += amount

lastPercent = 0

while percentCorrect < 80:
    layer1.forward(X)
    layer2.forward(layer1.output)
    compareValues()
    amountCorrect = correctValues[0] + correctValues[1] + correctValues[2] + correctValues[3] + correctValues[4] + correctValues[5]
    percentCorrect = (amountCorrect / amountOfVariables) * 100
    # print(percentCorrect)
    print("------------------------------------------------------------------------")
    print(layer1.biases)
    print(layer1Old.biases)
    if lastPercent > percentCorrect:
        print("TEST")
        layer1.biases = layer1Old.biases
        layer1.weights = layer1Old.weights
        layer2.biases = layer2Old.biases
        layer2.weights = layer2Old.weights
    else:
        layer1Old.biases = layer1.biases
        layer1Old.weights = layer1.weights
        layer2Old.biases = layer2.biases
        layer2Old.weights = layer2.weights
    # print(layer1.biases)
    # print(layer2.biases)
    # print(layer1.weights)
    # print(layer2.weights)
    # print(layer2.output)
    # print(layer1Old.biases)
    # print(lastPercent)
    print(percentCorrect)
    print(lastPercent)
    lastPercent = percentCorrect
    changeNetwork()
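For comparison, here's a tiny standalone test of how I expected the save/restore to behave. It uses `.copy()` to make the snapshot independent of the live network; I'm not sure this is the right fix for my script, but it shows the difference from plain assignment:

```python
import numpy as np

class LayerDense:
    def __init__(self, inputCount, neuronCount):
        self.weights = 0.10 * np.random.randn(inputCount, neuronCount)
        self.biases = np.zeros((1, neuronCount))

layer = LayerDense(4, 5)

# Snapshot with .copy() so the backup owns its own data.
savedWeights = layer.weights.copy()
savedBiases = layer.biases.copy()

layer.weights += 0.1  # mutate the live network

# The backup is unaffected, so restoring actually rolls back.
layer.weights = savedWeights.copy()
layer.biases = savedBiases.copy()

print(np.array_equal(layer.weights, savedWeights))  # True
```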
Full disclosure: I don't use Python at all, so I have no idea what's happening.