COMP9417 - Machine Learning Homework 1: Regularized Optimization & Gradient Methods
Introduction

In this homework we will explore gradient-based optimization. Gradient-based algorithms have been crucial to the development of machine learning in the last few decades. The most famous example is the backpropagation algorithm used in deep learning, which is in fact just a particular application of a simple algorithm known as (stochastic) gradient descent. We will first implement gradient descent from scratch on a deterministic problem (no data), and then extend our implementation to solve a real-world regression problem.
Points Allocation

There are a total of 30 marks.
Question 1 a): 2 marks
Question 1 b): 4 marks
Question 1 c): 2 marks
Question 1 d): 2 marks
Question 1 e): 6 marks
Question 1 f): 6 marks
Question 1 g): 4 marks
Question 1 h): 2 marks
Question 1 i): 2 marks
What to Submit
A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
.py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
You may be deducted points for not following these instructions.
You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.
You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file though, or using a tool such as nbconvert or similar.
We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
Please complete your homework on your own; do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge in your submission if you discussed any of the problems with others (including their name(s) and zID).
As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
You may not use SymPy or any other symbolic programming toolkits to answer the derivation questions. This will result in an automatic grade of zero for the relevant question. You must do the derivations manually.
When and Where to Submit
Due date: Week 4, Monday March 4th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
Submission must be made on Moodle, no exceptions.
Question 1. Gradient Based Optimization
The general framework for a gradient method for finding a minimizer of a function $f : \mathbb{R}^n \to \mathbb{R}$ is defined by

$$x^{(k+1)} = x^{(k)} - \alpha_k \nabla f(x^{(k)}), \qquad k = 0, 1, 2, \ldots, \tag{1}$$
where $\alpha_k > 0$ is known as the step size, or learning rate. Consider the following simple example of minimizing the function $g(x) = 2\sqrt{x^3 + 1}$. We first note that $g'(x) = 3x^2(x^3 + 1)^{-1/2}$. We then need to choose a starting value of $x$, say $x^{(0)} = 1$. Let's also take the step size to be constant, $\alpha_k = \alpha = 0.1$. Then we have the following iterations:

$$x^{(1)} = x^{(0)} - 0.1 \times 3(x^{(0)})^2((x^{(0)})^3 + 1)^{-1/2} = 0.7878679656440357$$
$$x^{(2)} = x^{(1)} - 0.1 \times 3(x^{(1)})^2((x^{(1)})^3 + 1)^{-1/2} = 0.6352617090300827$$
$$x^{(3)} = 0.5272505146487477$$

and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and compare it to the true minimum of the function, which is $x^* = -1$¹). This idea works for functions that have vector-valued inputs, which is often the case in machine learning. For example, when we minimize a loss function we do so with respect to a weight vector, $\beta$. When we take the step size to be constant at each iteration, this algorithm is known as gradient descent. For the entirety of this question, do not use any existing implementations of gradient methods; doing so will result in an automatic mark of zero for the entire question.

¹ Does the algorithm converge to the true minimizer? Why/Why not?
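For illustration, a minimal sketch of the scalar iteration above (the function and variable names are ours, not part of any provided code); running it reproduces the three iterates shown:

def g_prime(x):
    # derivative of g(x) = 2 * sqrt(x**3 + 1)
    return 3 * x**2 * (x**3 + 1) ** (-0.5)

x = 1.0        # starting point x^(0)
alpha = 0.1    # constant step size
for k in range(3):
    x = x - alpha * g_prime(x)
    print(f"x^({k + 1}) = {x}")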
(a) Consider the following optimisation problem:

$$\min_{x \in \mathbb{R}^n} f(x), \qquad \text{where} \qquad f(x) = \frac{1}{2}\|Ax - b\|_2^2 + \frac{\gamma}{2}\|x\|_2^2,$$

and where $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$ are defined as

$$A = \begin{pmatrix} -1 & 3 & 0 & -4 \\ 0 & 2 & -1 & -2 \\ 3 & 0 & 2 & 7 \end{pmatrix}, \qquad b = \begin{pmatrix} -4 \\ 3 \\ 1 \end{pmatrix},$$
and $\gamma$ is a positive constant. Run gradient descent on $f$ using a step size of $\alpha = 0.01$, $\gamma = 2$ and a starting point of $x^{(0)} = (1, 1, 1, 1)$. You will need to terminate the algorithm when the following condition is met: $\|\nabla f(x^{(k)})\|_2 < 0.001$. In your answer, clearly write down the version of the gradient steps (1) for this problem. Also, print out the first 5 and last 5 values of $x^{(k)}$, clearly indicating the value of $k$.
What to submit: an equation outlining the explicit gradient update, a print out of the first 5 ($k = 5$ inclusive) and last 5 rows of your iterations. Use the round function to round your numbers to 4 decimal places. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
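A minimal sketch of one way to set up this loop, assuming the reconstruction of A and b shown above (variable names are ours):

import numpy as np

A = np.array([[-1., 3., 0., -4.],
              [ 0., 2., -1., -2.],
              [ 3., 0., 2., 7.]])
b = np.array([-4., 3., 1.])
gamma, alpha = 2.0, 0.01

def grad_f(x):
    # gradient of f(x) = 0.5 * ||Ax - b||^2 + (gamma / 2) * ||x||^2
    return A.T @ (A @ x - b) + gamma * x

x = np.ones(4)   # x^(0) = (1, 1, 1, 1)
iterates = [x]
while np.linalg.norm(grad_f(x)) >= 0.001:
    x = x - alpha * grad_f(x)
    iterates.append(x)
for k in list(range(5)) + list(range(len(iterates) - 5, len(iterates))):
    print(k, np.round(iterates[k], 4))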
Consider now a slightly different problem: let $y, \beta \in \mathbb{R}^p$ and $\lambda > 0$. Further, we define the $(p-2) \times p$ matrix

$$W = \begin{pmatrix} 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & -2 & 1 \end{pmatrix},$$

where blanks denote zero elements.² Define the loss function:

$$L(\beta) = \frac{1}{2p}\|y - \beta\|_2^2 + \lambda\|W\beta\|_2^2. \tag{2}$$

The following code allows you to load in the data needed for this problem:³
Note: the t variable is purely for plotting purposes; it should not appear in any of your calculations.

(b) Show that
$$\hat\beta = \arg\min_\beta L(\beta) = (I + 2\lambda p W^T W)^{-1} y.$$
Update the following code⁴ so that it returns a plot of $\hat\beta$ and calculates $L(\hat\beta)$. Only in your code implementation, set $\lambda = 0.9$.
import numpy as np
import matplotlib.pyplot as plt

def create_W(p):
    ## generate W, the (p-2) x p matrix defined in the question
    W = np.zeros((p - 2, p))
    b = np.array([1, -2, 1])
    for i in range(p - 2):
        W[i, i:i + 3] = b
    return W

def loss(beta, y, W, L):
    ## compute the loss for a given vector beta, data y, matrix W and
    ## regularization parameter L (lambda)
    # your code here
    return loss_val

## your code here, e.g. compute betahat and loss, and set other params..
plt.plot(t_var, y_var, zorder=1, color='red', label='truth')
plt.plot(t_var, beta_hat, zorder=3, color='blue',
         linewidth=2, linestyle='--', label='fit')
plt.legend(loc='best')
plt.title(f"L(beta_hat) = {loss(beta_hat, y, W, L)}")
plt.show()

² If it is not already clear: for the first row of $W$: $W_{11} = 1$, $W_{12} = -2$, $W_{13} = 1$ and $W_{1j} = 0$ for any $j \geq 4$. For the second row of $W$: $W_{21} = 0$, $W_{22} = 1$, $W_{23} = -2$, $W_{24} = 1$ and $W_{2j} = 0$ for any $j \geq 5$, and so on.
³ A copy of this code is provided in code_student.py.
⁴ A copy of this code is provided in code_student.py.
What to submit: a closed form expression along with your working, a single plot and a screen shot of your code along with a copy of your code in your .py file.
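For instance, once y has been loaded, the closed form above can be computed directly (a sketch continuing from the template above, so np, create_W and the completed loss are assumed in scope; the variable names are ours):

# compute betahat = (I + 2*lambda*p*W^T W)^{-1} y via a linear solve
p = len(y)
W = create_W(p)
lam = 0.9
beta_hat = np.linalg.solve(np.eye(p) + 2 * lam * p * (W.T @ W), y)
print(loss(beta_hat, y, W, lam))

Using np.linalg.solve rather than forming the inverse explicitly is both faster and numerically more stable.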
(c) Write out each of the two terms that make up the loss function ($\frac{1}{2p}\|y - \beta\|_2^2$ and $\lambda\|W\beta\|_2^2$) explicitly using summations. Use this representation to explain the role played by each of the two terms. Be as specific as possible.
What to submit: your answer, and any working either typed or handwritten.
(d) Show that we can write (2) in the following way:

$$L(\beta) = \frac{1}{p}\sum_{j=1}^{p} L_j(\beta),$$

where $L_j(\beta)$ depends on the data $y_1, \ldots, y_p$ only through $y_j$. Further, show that

$$\nabla L_j(\beta) = -\begin{pmatrix} 0 \\ \vdots \\ 0 \\ y_j - \beta_j \\ 0 \\ \vdots \\ 0 \end{pmatrix} + 2\lambda W^T W \beta, \qquad j = 1, \ldots, p.$$

Note that the first vector is the p-dimensional vector with zeros everywhere except for the j-th index. Take a look at the supplementary material if you are confused by the notation.
What to submit: your answer, and any working either typed or handwritten.
(e) In this question, you will implement (batch) GD from scratch to solve (2). Use an initial estimate $\beta^{(0)} = 1_p$ (the p-dimensional vector of ones) and $\lambda = 0.001$, and run the algorithm for 1000 epochs (an epoch is one pass over the entire data, so a single GD step). Repeat this for the following step sizes:

$$\alpha \in \{0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2\}$$
To monitor the performance of the algorithm, we will plot the value

$$\Delta^{(k)} = L(\beta^{(k)}) - L(\hat\beta),$$

where $\hat\beta$ is the true (closed form) solution derived earlier. Present your results in a single $3 \times 3$ grid plot, with each subplot showing the progression of $\Delta^{(k)}$ when running GD with a specific step-size. State which step-size you think is best in terms of speed of convergence.
What to submit: a single plot. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
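One possible shape for this experiment, sketched below; it assumes y, W and beta_hat from part (b) and the completed loss function are in scope:

import numpy as np
import matplotlib.pyplot as plt

alphas = [0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2]
lam = 0.001
p = len(y)
best = loss(beta_hat, y, W, lam)
fig, axes = plt.subplots(3, 3, figsize=(12, 10))
for ax, alpha in zip(axes.flat, alphas):
    beta = np.ones(p)   # beta^(0) = 1_p
    deltas = []
    for k in range(1000):
        # full gradient of (2): (beta - y)/p + 2*lambda*W^T W beta
        grad = (beta - y) / p + 2 * lam * (W.T @ (W @ beta))
        beta = beta - alpha * grad
        deltas.append(loss(beta, y, W, lam) - best)
    ax.plot(deltas)
    ax.set_title(f"alpha = {alpha}")
plt.tight_layout()
plt.show()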
(f) We will now implement SGD from scratch to solve (2). Use an initial estimate $\beta^{(0)} = 1_p$ (the vector of ones) and $\lambda = 0.001$, and run the algorithm for 4 epochs (this means a total of $4p$ updates of $\beta$). Repeat this for the following step sizes:

$$\alpha \in \{0.001, 0.005, 0.01, 0.05, 0.1, 0.3, 0.6, 1.2, 2\}$$
Present an analogous single $3 \times 3$ grid plot as in the previous question. Instead of choosing an index randomly at each step of SGD, we will cycle through the observations in the order they are stored in y to ensure consistent results. Report the best step-size choice. In some cases you might observe that the value of $\Delta^{(k)}$ jumps up and down, and this is not something you would have seen using batch GD. Why do you think this might be happening?
What to submit: a single plot and some commentary. Include a screen shot of any code used for this section and a copy of your python code in solutions.py.
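A sketch of the cyclic pass described above, using the per-observation gradient from part (d); it continues from the earlier snippets (np, y, W, lam, p in scope), and the step size shown is just one of the candidates:

alpha = 0.05   # one of the candidate step sizes, for illustration
beta = np.ones(p)
for epoch in range(4):
    for j in range(p):
        # gradient of L_j: -(y_j - beta_j) e_j + 2*lambda*W^T W beta
        grad = 2 * lam * (W.T @ (W @ beta))
        grad[j] -= y[j] - beta[j]
        beta = beta - alpha * grad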
An alternative Coordinate Based scheme: In GD, SGD and mini-batch GD, we always update the entire p-dimensional vector $\beta$ at each iteration. An alternative approach is to update each of the p parameters individually. To make this idea more clear, we write the loss function of interest $L(\beta)$ as $L(\beta_1, \beta_2, \ldots, \beta_p)$. We initialize $\beta^{(0)}$, and then solve, for $k = 1, 2, 3, \ldots$,
$$\beta_1^{(k)} = \arg\min_{\beta_1} L(\beta_1, \beta_2^{(k-1)}, \beta_3^{(k-1)}, \ldots, \beta_p^{(k-1)})$$
$$\beta_2^{(k)} = \arg\min_{\beta_2} L(\beta_1^{(k)}, \beta_2, \beta_3^{(k-1)}, \ldots, \beta_p^{(k-1)})$$
$$\vdots$$
$$\beta_p^{(k)} = \arg\min_{\beta_p} L(\beta_1^{(k)}, \beta_2^{(k)}, \beta_3^{(k)}, \ldots, \beta_p).$$
Note that each of the minimizations is over a single (1-dimensional) coordinate of $\beta$, and also that as soon as we update $\beta_j^{(k)}$, we use the new value when solving the update for $\beta_{j+1}^{(k)}$ and so on. The idea is then to cycle through these coordinate-level updates until convergence. In the next two parts we will implement this algorithm from scratch for the problem we have been working on (2).
(g) Derive closed-form expressions for $\hat\beta_1, \hat\beta_2, \ldots, \hat\beta_p$, where for $j = 1, 2, \ldots, p$:

$$\hat\beta_j = \arg\min_{\beta_j} L(\beta_1, \ldots, \beta_{j-1}, \beta_j, \beta_{j+1}, \ldots, \beta_p).$$
What to submit: a closed form expression along with your working.
Hint: Be careful, this is not as straightforward as it might seem at first. It is recommended to choose a value for p, e.g. p = 8, and first write out the expression in terms of summations. Then take derivatives to get the closed form expressions.
(h) Implement both gradient descent and the coordinate scheme in code (from scratch) and apply them to the provided data. In your implementation:
Use $\lambda = 0.001$ for the coordinate scheme, and step-size $\alpha = 1$ for your gradient descent scheme.
Initialize both algorithms with $\beta = 1_p$, the p-dimensional vector of ones.
For the coordinate scheme, be sure to update the $\beta_j$'s in order (i.e. $1, 2, 3, \ldots$).
For your coordinate scheme, terminate the algorithm after 1000 updates (each time you update a single coordinate, that counts as an update).
For your GD scheme, terminate the algorithm after 1000 epochs.
Create a single plot of $k$ vs $\Delta^{(k)} = L(\beta^{(k)}) - L(\hat\beta)$, where $\hat\beta$ is the closed form expression derived earlier. Your plot should have both the coordinate scheme (blue) and GD (green) displayed and should start from $k = 0$. Your plot should have a legend.
What to submit: a single plot and a screen shot of your code along with a copy of your code in your .py file.
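Without reproducing the part (g) derivation, one way to organize the coordinate loop is to note that L is quadratic in each $\beta_j$, so a single Newton step along coordinate j lands exactly at the coordinate-wise minimizer. A sketch under that framing (continuing from the earlier snippets, with np, y, W, lam, p in scope; names are ours):

M = W.T @ W
beta = np.ones(p)
for update in range(1000):
    j = update % p   # cycle through coordinates in order
    # first and second partial derivatives of L with respect to beta_j;
    # since L is quadratic in beta_j, one Newton step is the exact minimizer
    g_j = (beta[j] - y[j]) / p + 2 * lam * (M[j] @ beta)
    h_j = 1 / p + 2 * lam * M[j, j]
    beta[j] -= g_j / h_j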
(i) Based on your answer to the previous part, when would you prefer GD? When would you prefer the coordinate scheme? What to submit: Some commentary.
Supplementary: Background on Gradient Descent

As noted in the lectures, there are a few variants of gradient descent that we will briefly outline here. Recall that in gradient descent our update rule is
$$\beta^{(k+1)} = \beta^{(k)} - \alpha_k \nabla L(\beta^{(k)}), \qquad k = 0, 1, 2, \ldots,$$
where L(β) is the loss function that we are trying to minimize. In machine learning, it is often the case that the loss function takes the form
$$L(\beta) = \frac{1}{n}\sum_{i=1}^{n} L_i(\beta),$$

i.e. the loss is an average of n functions that we have labelled $L_i$, and each $L_i$ depends on the data only through $(x_i, y_i)$. It then follows that the gradient is also an average of the form

$$\nabla L(\beta) = \frac{1}{n}\sum_{i=1}^{n} \nabla L_i(\beta).$$
We can now define some popular variants of gradient descent.
(i) Gradient Descent (GD) (also referred to as batch gradient descent): here we use the full gradient, as in we take the average over all n terms, so our update rule is:

$$\beta^{(k+1)} = \beta^{(k)} - \frac{\alpha_k}{n}\sum_{i=1}^{n} \nabla L_i(\beta^{(k)}), \qquad k = 0, 1, 2, \ldots.$$
(ii) Stochastic Gradient Descent (SGD): instead of considering all n terms, at the k-th step we choose an index $i_k$ randomly from $\{1, \ldots, n\}$, and update

$$\beta^{(k+1)} = \beta^{(k)} - \alpha_k \nabla L_{i_k}(\beta^{(k)}), \qquad k = 0, 1, 2, \ldots.$$

Here, we are approximating the full gradient $\nabla L(\beta)$ using $\nabla L_{i_k}(\beta)$.
(iii) Mini-Batch Gradient Descent: GD (using all terms) and SGD (using a single term) represent the two possible extremes. In mini-batch GD we choose batches of size $1 < B < n$ randomly at each step, call their indices $\{i_{k_1}, i_{k_2}, \ldots, i_{k_B}\}$, and then we update

$$\beta^{(k+1)} = \beta^{(k)} - \frac{\alpha_k}{B}\sum_{j=1}^{B} \nabla L_{i_{k_j}}(\beta^{(k)}), \qquad k = 0, 1, 2, \ldots,$$

so we are still approximating the full gradient but using more than a single element as is done in SGD.
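As a schematic of how the three variants differ only in which indices contribute to the gradient estimate at step k, consider the sketch below; grad_Li stands for a hypothetical function returning $\nabla L_i(\beta)$, and is not part of any provided code:

import numpy as np

def minibatch_step(beta, alpha, n, B, grad_Li):
    # choose B distinct indices uniformly at random
    idx = np.random.choice(n, size=B, replace=False)
    # average the per-example gradients over the batch
    grad = np.mean([grad_Li(beta, i) for i in idx], axis=0)
    return beta - alpha * grad

With B = n this reduces to batch GD (up to the randomness in idx), and with B = 1 it reduces to SGD.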