COMP9417 - Machine Learning Homework 2: Numerical Implementation of Logistic Regression
Introduction

In homework 1, we considered Gradient Descent (and coordinate descent) for minimizing a regularized loss function. In this homework, we consider an alternative method known as Newton’s algorithm. We will first run Newton’s algorithm on a simple toy problem, and then implement it from scratch on a real data classification problem. We also look at the dual version of logistic regression.
Points Allocation

There are a total of 30 marks.
• Question 1 a): 1 mark
• Question 1 b): 2 marks
• Question 2 a): 3 marks
• Question 2 b): 3 marks
• Question 2 c): 2 marks
• Question 2 d): 4 marks
• Question 2 e): 4 marks
• Question 2 f): 2 marks
• Question 2 g): 4 marks
• Question 2 h): 3 marks
• Question 2 i): 2 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.
• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file though, or using a tool such as nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
• Please complete your homework on your own; do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
When and Where to Submit
• Due date: Week 7, Monday March 25th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be done through Moodle, no exceptions.
Question 1. Introduction to Newton’s Method
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to in the question. Using existing implementations can result in a grade of zero for the entire question. In homework 1 we studied gradient descent (GD), which is usually referred to as a first order method. Here, we study an alternative algorithm known as Newton’s algorithm, which is generally referred to as a second order method. Roughly speaking, a second order method makes use of both first and second derivatives. Generally, second order methods are much more accurate than first order ones. Given a twice differentiable function g : R → R, Newton’s method generates a sequence {x(k)} iteratively according to the following update rule:
x(k+1) = x(k) − g′(x(k)) / g′′(x(k)),   k = 0, 1, 2, . . . ,   (1)
For example, consider the function g(x) = (1/2)x^2 − sin(x) with initial guess x(0) = 0. Then g′(x) = x − cos(x) and g′′(x) = 1 + sin(x),
and so we have the following iterations:
x(1) = x(0) − (x(0) − cos(x(0))) / (1 + sin(x(0))) = 0 − (0 − cos(0)) / (1 + sin(0)) = 1
x(2) = x(1) − (x(1) − cos(x(1))) / (1 + sin(x(1))) = 1 − (1 − cos(1)) / (1 + sin(1)) = 0.750363867840244
x(3) = 0.739112890911362,
and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up, plot the function and each of the iterates). We note here that in practice, we often use a different update called the dampened Newton method, defined by:
x(k+1) = x(k) − α g′(x(k)) / g′′(x(k)),   k = 0, 1, 2, . . . .   (2)
Here, as in the case of GD, the step size α has the effect of ‘dampening’ the update. Consider now the twice differentiable function f : Rn → R. The Newton steps in this case are now:
x(k+1) = x(k) − (H(x(k)))^(−1) ∇f(x(k)),   k = 0, 1, 2, . . . ,   (3)
where H(x) = ∇2f(x) is the Hessian of f. Heuristically, this formula generalizes equation (1) to functions with vector inputs, since the gradient is the analog of the first derivative and the Hessian is the analog of the second derivative.
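The 1-D toy example above is easy to code up (the self-exercise suggested earlier); the sketch below uses our own function names and simply iterates the update in equation (1).

```python
import math

# Undampened Newton iterations for g(x) = 0.5*x**2 - sin(x),
# using g'(x) = x - cos(x) and g''(x) = 1 + sin(x).
def newton_toy(x0, n_iters=10):
    xs = [x0]
    for _ in range(n_iters):
        x = xs[-1]
        xs.append(x - (x - math.cos(x)) / (1 + math.sin(x)))
    return xs

xs = newton_toy(0.0)
# xs[1], xs[2], xs[3] reproduce the worked iterates above.
```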
(a) Consider the function f : R^2 → R defined by f(x, y) = 100(y − x^2)^2 + (1 − x)^2.
Create a 3D plot of the function using mplot3d (see lab0 for an example). Use a range of [−5, 5] for both the x and y axes. Further, compute the gradient and Hessian of f. what to submit: a single plot, the code used to generate the plot, and the gradient and Hessian calculated along with all working. Add a copy of the code to solutions.py
(b) Using NumPy only, implement the (undampened) Newton algorithm to find the minimizer of the function in the previous part, using an initial guess of x(0) = (−1.2, 1)T. Terminate the algorithm when ∥∇f(x(k))∥2 ≤ 10^(−6). Report the values of x(k) for k = 0, 1, . . . , K, where K is your final iteration. what to submit: your iterations, and a screen shot of your code. Add a copy of the code to solutions.py
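In NumPy, the core of the multivariate Newton update (3) is a single linear solve per iteration. The sketch below applies one step to a toy quadratic of our own choosing (not the function from part (a)); for a quadratic, a single Newton step lands exactly on the minimizer.

```python
import numpy as np

# One undampened Newton step x <- x - H(x)^{-1} grad f(x), shown on
# f(x) = 0.5 x^T Q x - b^T x, where grad f(x) = Qx - b and H(x) = Q.
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])

def newton_step(x):
    grad = Q @ x - b
    # solve H d = grad rather than forming the inverse explicitly
    return x - np.linalg.solve(Q, grad)

x1 = newton_step(np.zeros(2))
# For a quadratic, x1 is exactly the minimizer Q^{-1} b.
```

Solving the linear system is both cheaper and numerically safer than computing the Hessian inverse, which is the usual way to implement the Newton step in practice.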
Question 2. Solving Logistic Regression Numerically
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to do so in the question. Using existing implementations can result in a grade of zero for the entire question. In this question we will compare gradient descent and Newton’s algorithm for solving the logistic regression problem. Recall that in logistic regression, our goal is to minimize the log-loss, also referred to as the cross-entropy loss. Consider an intercept β0 ∈ R, parameter vector β = (β1, . . . , βm)T ∈ Rm, target yi ∈ {0, 1} and input vector xi = (xi1, xi2, . . . , xip)T. Consider also the feature map φ : Rp → Rm and corresponding feature vector φi = (φi1, φi2, . . . , φim)T where φi = φ(xi). Define the (l2-regularized) log-loss function:

L(β0, β) = (1/2)∥β∥2^2 − (λ/n) Σ_{i=1}^{n} [ yi ln(σ(β0 + βT φi)) + (1 − yi) ln(1 − σ(β0 + βT φi)) ],

where σ(z) = (1 + e^(−z))^(−1) is the logistic sigmoid, and λ is a hyper-parameter that controls the amount of regularization. Note that λ here is applied to the data-fit term as opposed to the penalty term directly, but all that changes is that larger λ now means more emphasis on data-fitting and less on regularization. Note also that you are provided with an implementation of this loss in helper.py.

(a) Show that the gradient descent update (with step size α) for γ = [β0, βT]T takes the form

γ(k) = γ(k−1) − α × [ −(λ/n) 1nT (y − σ(β0(k−1) 1n + Φβ(k−1)))
                       β(k−1) − (λ/n) ΦT (y − σ(β0(k−1) 1n + Φβ(k−1))) ],

where the sigmoid σ(·) is applied elementwise, 1n is the n-dimensional vector of ones, and

Φ = [φ1T; φ2T; . . . ; φnT] ∈ Rn×m,   y = (y1, y2, . . . , yn)T ∈ Rn.

what to submit: your working out.

(b) In what follows, we refer to the version of the problem based on L(β0, β) as the Primal version. Consider the re-parameterization β = Σ_{j=1}^{n} θj φ(xj). Show that the loss can now be written as:

L(θ0, θ) = (1/2) θT A θ − (λ/n) Σ_{i=1}^{n} [ yi ln(σ(θ0 + θT bxi)) + (1 − yi) ln(1 − σ(θ0 + θT bxi)) ],

where θ0 ∈ R, θ = (θ1, . . . , θn)T ∈ Rn, A ∈ Rn×n and, for i = 1, . . . , n, bxi ∈ Rn. We refer to this version of the problem as the Dual version. Write down exact expressions for A and bxi in terms of k(xi, xj) := ⟨φ(xi), φ(xj)⟩ for i, j = 1, . . . , n. Further, for the dual parameter η = [θ0, θT]T, show that the gradient descent update is given by:

η(k) = η(k−1) − α × [ −(λ/n) 1nT (y − σ(θ0(k−1) 1n + Aθ(k−1)))
                       Aθ(k−1) − (λ/n) A (y − σ(θ0(k−1) 1n + Aθ(k−1))) ].

If m ≫ n, what is the advantage of the dual representation relative to the primal one, which just makes use of the feature maps φ directly? what to submit: your working along with some commentary.
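For reference, the primal log-loss defined at the start of this question takes only a few lines of NumPy. helper.py contains the official implementation, so treat this as a sketch only; all names here are our own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Primal l2-regularized log-loss: 0.5*||beta||^2 minus (lam/n) times
# the summed log-likelihood of the labels under the sigmoid model.
def log_loss(beta0, beta, Phi, y, lam):
    n = y.shape[0]
    p = sigmoid(beta0 + Phi @ beta)
    fit = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return 0.5 * beta @ beta - (lam / n) * fit
```

At β0 = 0, β = 0 the model predicts probability 0.5 for every row, so the loss reduces to λ ln 2, a handy sanity check for any implementation.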
(c) We will now compare the performance of (primal/dual) GD and the Newton algorithm on a real dataset using the derived updates in the previous parts. To do this, we will work with the songs.csv dataset. The data contains information about various songs, and also contains a class variable outlining the genre of the song. If you are interested, you can read more about the data here, though a deep understanding of each of the features will not be crucial for the purposes of this assessment. Load in the data and perform the following preprocessing:
(I) Remove the following features: "Artist Name", "Track Name", "key", "mode", "time signature", "instrumentalness"
(II) The current dataset has 10 classes, but logistic regression in the form we have described it here only works for binary classification. We will restrict the data to classes 5 (hiphop) and 9 (pop). After removing the other classes, re-code the variables so that the target variable is y = 1 for hiphop and y = 0 for pop.
(III) Remove any remaining rows that have missing values for any of the features. Your remaining dataset should have a total of 3886 rows.
(IV) Use the sklearn.model_selection.train_test_split function to split your data into X_train, X_test, y_train and y_test. Use a test size of 0.3 and a random state of 23 for reproducibility.
(V) Fit the sklearn.preprocessing.MinMaxScaler to the resulting training data, and then use this object to scale both your train and test datasets so that the range of the data is in (0, 0.1).
(VI) Print out the first and last row of X_train, X_test, y_train, y_test (but only the first 3 columns of X_train and X_test).
What to submit: the printout of the rows requested in (VI). A copy of your code in solutions.py
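Steps (IV) and (V) follow the standard split-then-scale pattern: fit the scaler on the training split only, then transform both splits with the fitted object. A generic sketch on made-up data (the arrays X and y below are dummies, not songs.csv):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = np.random.RandomState(0).rand(20, 3)  # dummy feature matrix
y = np.array([0, 1] * 10)                 # dummy binary labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=23)

# Fit on the training data only, then transform both splits.
scaler = MinMaxScaler(feature_range=(0, 0.1)).fit(X_train)
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)
```

Note that test-set values can fall slightly outside the target range, since the scaler's per-column min and max come from the training data alone.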
(d) For the primal problem, we will use the feature map that generates all polynomial features up to and including order 3, that is:

φ(x) = [1, x1, . . . , xp, x1^3, . . . , xp^3, x1x2x3, . . . , xp−2xp−1xp].

In Python, we can generate such features using sklearn.preprocessing.PolynomialFeatures. For example, consider the following code snippet:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

poly = PolynomialFeatures(3)
X = np.arange(6).reshape(3, 2)
poly.fit_transform(X)

Transform the data appropriately, then run gradient descent with α = 0.4 on the training dataset for 50 epochs and λ = 0.5. In your implementation, initialize β0(0) = 0, β(0) = 0p, where 0p is the p-dimensional vector of zeroes. Report your final train and test losses, as well as plots of training loss at each iteration.¹ what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

¹ If you need a sanity check here, the best thing to do is use sklearn to fit logistic regression models. This should give you an idea of what kind of loss your implementation should be achieving (if your implementation does as well or better, then you are on the right track).
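The gradient descent loop itself is independent of the loss being minimized. Below is a generic fixed-step sketch, shown on a toy quadratic f(w) = 0.5∥w − c∥^2 (so grad f(w) = w − c) rather than the log-loss; the names are our own.

```python
import numpy as np

def gradient_descent(grad, w0, alpha, epochs):
    """Fixed-step GD; returns the final iterate and the full path."""
    w = np.asarray(w0, dtype=float)
    path = [w.copy()]
    for _ in range(epochs):
        w = w - alpha * grad(w)  # one gradient step of size alpha
        path.append(w.copy())
    return w, path

c = np.array([1.0, -2.0])
w_final, path = gradient_descent(lambda w: w - c, np.zeros(2),
                                 alpha=0.4, epochs=50)
# After 50 steps w_final is numerically equal to the minimizer c.
```

Recording the full path is what lets you plot the loss at every iteration, as the question asks.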
(e) For the primal problem, run the dampened Newton algorithm on the training dataset for 50 epochs and λ = 0.5. Use the same initialization for β0, β as in the previous question. Report your final train and test losses, as well as plots of your train loss for both the GD and Newton algorithms for all iterations (use labels/legends to make your plot easy to read). In your implementation, you may use that the Hessian for the primal problem is given by:

H(β0, β) = [ (λ/n) 1nT D 1n      (λ/n) 1nT D Φ
             (λ/n) ΦT D 1n       Ip + (λ/n) ΦT D Φ ],

where D is the n × n diagonal matrix with i-th diagonal element σ(di)(1 − σ(di)) and di = β0 + φiT β. what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

(f) For the feature map used in the previous two questions, what is the corresponding kernel k(x, y) that can be used to give the corresponding dual problem? what to submit: the chosen kernel.

(g) Implement gradient descent for the dual problem using the kernel found in the previous part. Use the same parameter values as before (although now θ0(0) = 0 and θ(0) = 0n). Report your final training loss, as well as plots of your train loss for GD for all iterations. what to submit: a plot of the train losses and report your final train loss, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

(h) Explain how to compute the test loss for the GD solution to the dual problem in the previous part. Implement this approach and report the test loss. what to submit: some commentary and a screen shot of your code, and a copy of your code in solutions.py.

(i) In general, it turns out that Newton’s method is much better than GD; in fact, convergence of the Newton algorithm is quadratic, whereas convergence of GD is linear (much slower than quadratic). Given this, why do you think gradient descent and its variants (e.g. SGD) are much more popular for solving machine learning problems? what to submit: some commentary
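For parts (g) and (h), the dual updates only ever touch the data through Gram-style matrices with entries k(xi, xj). A generic construction is sketched below; the quadratic kernel used here is just a placeholder for illustration, not the answer to part (f).

```python
import numpy as np

def gram_matrix(X, kernel):
    """Build A with A[i, j] = kernel(X[i], X[j])."""
    n = X.shape[0]
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            A[i, j] = kernel(X[i], X[j])
    return A

k = lambda x, z: (1.0 + x @ z) ** 2  # placeholder kernel only
X = np.arange(6.0).reshape(3, 2)
A = gram_matrix(X, k)
# A is symmetric, and positive semi-definite for any valid kernel.
```

For the test loss in part (h), the analogous rectangular matrix between test and train points plays the role of A when evaluating the fitted dual model.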