'weights returning array list in numerical order

I'm very new to programming, so im not sure what's causing this output. I am coding a neural network with python and numpy, I have inputted and split my data 70/30, where ive then split the data into 2 arrays, 1 for my target value (column 1) and one for the rest of the data.

I've also created weights that go through my activation function and im a bit confused as it seems my output is just the data in my csv file just placed into numerical order for some reason.

edit: after printing my weights i can see this is not the issue as they print correctly. Its obviously something in my below functions causing this, i cant seem to see what the issue is and why it is just printing random values that arent in my dataset.

import pandas as pd
import numpy as np
from pandas import Series
from numpy.random import randn
import math
import csv

df = pd.read_csv(r'data.csv', header=None, quoting=csv.QUOTE_NONNUMERIC)
df.to_numpy()

#splitting data 70/30
trainingdata= df[141:]
testingdata= df[329:]

#converting data to seperate arrays for training and testing
training_features= trainingdata.loc[:, trainingdata.columns != 0]
training_labels =  trainingdata[0]
training_labels = training_labels.values.reshape(329,1)

testing_features = testingdata[0]
testing_labels = testingdata.loc[:, testingdata.columns != 0]


def init_params():

#random nums for weights 
np.random.seed(0)
#weights assigned for 30 inputs, which is the number of rows in my csv
weights = np.random.rand(30,1 )
bias = 1
lr = 0.05 #learning rate

def sigmoid(x):
      return 1/(1+np.exp(-x))

def sigmoid_derivative(x):
      return sigmoid(x)*(1-sigmoid(x))

for epoch in range(2500):
    inputs = training_features
    XW = np.dot(inputs, weights)+ bias
    z = sigmoid(XW)
    error = z - training_labels
    print(error.sum())
    dcost = error
    dpred = sigmoid_derivative(z)
    z_del = dcost * dpred
    inputs = training_features.T
    weights = weights - lr*np.dot(inputs, z_del)

    for num in z_del:
        bias = bias - lr*num


Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source