Question

Implement the PLA algorithm using Python (the only language I know).

Generate linearly separable data using at least 20 points with two features; the points do not need to be evenly divided between the positive and negative classes.

Run and test your code.

Please show working code with explanations to help me understand how to implement my own.

P.S. You guys are awesome

Regarding the comment "No information provided regarding dataset, features": it is a dataset of nothing in particular, just numbers on their own, and each point must have two features.

Answer #1

import numpy as np

epochs = 100  # number of passes over the training data
lr = 0.1      # learning rate

def sigmoid(z):  # sigmoid activation function
    return 1 / (1 + np.exp(-z))

signum = lambda x: 1 if x >= 0 else 0  # step function for thresholding the perceptron output

# 20 linearly separable 2D points
x = np.array([[[0,0]],[[0,1]],[[0,2]],[[0,3]],[[0,4]],[[1,0]],[[2,0]],[[3,0]],[[4,0]],[[5,0]],[[1,5]],[[2,5]],[[3,5]],[[4,5]],[[5,5]],[[6,0]],[[6,1]],[[6,2]],[[6,3]],[[6,4]]])

w=np.zeros((1,2)) # initial weights

# desired output
d=np.array([[[0]],[[0]],[[0]],[[0]],[[0]],[[0]],[[0]],[[0]],[[0]],[[0]],[[1]],[[1]],[[1]],[[1]],[[1]],[[1]],[[1]],[[1]],[[1]],[[1]]])
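
# If you would rather generate the data than hardcode it, one sketch is to draw
# random points and label them by a fixed line (here the arbitrary line x1 + x2 = 5,
# which guarantees linear separability):
#
#   rng = np.random.default_rng(0)
#   pts = rng.uniform(0, 6, size=(20, 2))        # 20 random 2D points
#   labels = (pts.sum(axis=1) > 5).astype(int)   # 1 above the line, 0 below
#   x = pts.reshape(20, 1, 2)                    # match the shapes used in this answer
#   d = labels.reshape(20, 1, 1)
#
# The classes need not come out evenly split, which the question explicitly allows.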

b=np.array([-1]) # bias

print("Output before training.")
for i in range(x.shape[0]):                    # printing the output before training the perceptron
       print(x[i],signum(np.matmul(x[i],w.reshape(2,1))+b))   # all outputs are 0 because weights are initialied with 0


# train the perceptron with stochastic gradient descent on a squared-error loss
def train(x, w, d, b):
    for epoch in range(epochs):
        for j in range(x.shape[0]):
            z = np.matmul(x[j], w.reshape(2, 1)) + b  # weighted sum for sample j
            y = sigmoid(z)                            # current prediction for sample j
            # squared-error gradient through the sigmoid: dE/dw = -(d - y) * y * (1 - y) * x
            w = w + lr * x[j] * y * (1 - y) * (d[j] - y)  # update weights
            b = b + lr * y * (1 - y) * (d[j] - y)         # update bias

    print("Output after training.")
    for i in range(x.shape[0]):  # print the final output after training
        print(x[i], signum(np.matmul(x[i], w.reshape(2, 1)) + b))

    return w, b

w, b = train(x, w, d, b)  # train the perceptron and keep the learned parameters
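
Note that the training above uses a sigmoid unit with gradient descent on a squared-error loss, which is a smooth variant of the perceptron. The textbook PLA instead uses the hard signum output and the update w <- w + lr*(d - y)*x. A minimal sketch of that version, reusing x, d, lr, epochs, and signum from above (train_pla is my own name for it):

def train_pla(x, w, d, b):
    for epoch in range(epochs):
        for j in range(x.shape[0]):
            y = signum(np.matmul(x[j], w.reshape(2, 1)) + b)  # hard 0/1 prediction
            w = w + lr * (d[j] - y) * x[j]  # PLA update: only misclassified points change w
            b = b + lr * (d[j] - y)         # corresponding bias update
    return w, b

Because the data is linearly separable, this rule is guaranteed to converge to a separating line after finitely many updates (the perceptron convergence theorem).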

##############################################################################################

If you want to plot the 2D points, add the lines below to your code:


from matplotlib import pyplot as plt

plt.scatter(x[:10].reshape(10, 2)[:, 0], x[:10].reshape(10, 2)[:, 1], label='class 0')
plt.scatter(x[10:].reshape(10, 2)[:, 0], x[10:].reshape(10, 2)[:, 1], label='class 1')
plt.legend()  # the labels above only show up if a legend is drawn
plt.show()
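
To also see the learned separating line, you can plot the points where w1*x1 + w2*x2 + b = 0 on top of the scatter plot. A sketch, assuming train returned w and b as above and that w2 is nonzero; add these lines before plt.show():

w1, w2 = w.ravel()         # unpack the learned weights
b0 = np.ravel(b)[0]        # learned bias as a scalar
x1 = np.linspace(0, 6, 50)
x2 = -(w1 * x1 + b0) / w2  # solve w1*x1 + w2*x2 + b = 0 for x2
plt.plot(x1, x2, 'k--', label='decision boundary')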
