def gradient_descent(feature_matrix, label, learning_rate=0.05, epoch=1000):
    """
    Implement the (batch) gradient descent algorithm for linear regression.

    Args:
        feature_matrix - A numpy matrix describing the given data, with ones
            added as the first column. Each row represents a single data point.
        label - The correct values of the response variable, corresponding to
            feature_matrix.
        learning_rate - the learning rate, with default value 0.05
        epoch - the number of iterations, with default value 1000

    Returns: A numpy array for the final value of theta
    """
    n = len(label)
    theta = np.zeros(feature_matrix.shape[1])  # initialize theta to the zero vector
    for i in range(epoch):
        # compute the (average) gradient of the squared-error cost
        error = np.dot(feature_matrix, theta) - label
        gradient = np.dot(feature_matrix.T, error) / n
        # update theta
        theta = theta - learning_rate * gradient
        # Computing the cost is not strictly necessary here, but it is
        # commonly used in the termination condition of the loop.
        cost = np.sum(error ** 2) / (2 * n)
        # uncomment to trace convergence:
        # print(i, theta, cost)
    return theta
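A quick way to sanity-check the completed function is to run it on data generated from known parameters and confirm that it recovers them. The toy data and tolerance below are illustrative, not part of the original exercise:

```python
import numpy as np

def gradient_descent(feature_matrix, label, learning_rate=0.05, epoch=1000):
    n = len(label)
    theta = np.zeros(feature_matrix.shape[1])
    for i in range(epoch):
        # average gradient of the squared-error cost, then the update
        error = np.dot(feature_matrix, theta) - label
        theta = theta - learning_rate * np.dot(feature_matrix.T, error) / n
    return theta

# Data generated from y = 2 + 3x, so the fit should recover theta ~ [2, 3].
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
X = np.c_[np.ones(len(x)), x]   # prepend the column of ones
y = 2 + 3 * x

theta = gradient_descent(X, y, learning_rate=0.05, epoch=5000)
print(theta)  # close to [2. 3.]
```

With 5000 iterations the noiseless fit converges to the true parameters to well within two decimal places.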
import numpy as np

x = np.array([2104, 1600, 2400, 1416, 3000, 1985, 1534])  # raw feature values
mean = x.mean()
std = x.std()
x = (x - mean) / std                  # standardize the feature
x = np.c_[np.ones(np.size(x)), x]     # add ones as the first column
y = np.array([400, 330, 369, 232, 540, 300, 315])
alpha = 0.01                          # learning rate
m = len(y)
theta = np.random.rand(2)             # random initial parameters

def Grad_dec(x, y, m, alpha, theta):
    for i in range(1000):
        p = np.dot(x, theta)          # current predictions
        error = p - y
        cost = np.sum(error ** 2) / (2 * m)
        theta = theta - (alpha / m) * np.dot(x.T, error)
    return cost, theta

cost, theta = Grad_dec(x, y, m, alpha, theta)
print(theta)

test_data = int(input('Enter test value:'))
test_data = (test_data - mean) / std  # apply the same standardization
test_data = np.array([1, test_data])
print(round(float(np.dot(test_data, theta)), 4))
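For linear regression, the parameters that batch gradient descent converges to can be checked against the closed-form normal-equation solution theta = (XᵀX)⁻¹Xᵀy. The sketch below reuses the same data and update rule as the script above, run long enough to converge; the iteration count and comparison are illustrative:

```python
import numpy as np

x = np.array([2104, 1600, 2400, 1416, 3000, 1985, 1534])
mean, std = x.mean(), x.std()
X = np.c_[np.ones(x.size), (x - mean) / std]
y = np.array([400, 330, 369, 232, 540, 300, 315])

# Closed-form least-squares solution: theta = (X^T X)^{-1} X^T y
theta_exact = np.linalg.solve(X.T @ X, X.T @ y)

# Same update rule as Grad_dec, iterated long enough to converge
theta = np.zeros(2)
m = len(y)
for _ in range(20000):
    theta = theta - (0.01 / m) * (X.T @ (X @ theta - y))

print(theta_exact)
print(theta)  # the two solutions agree to many decimal places
```

Because the standardized feature makes XᵀX well conditioned, the iterates contract toward the exact solution geometrically, which is why the agreement is so tight here.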