COMP9417 - Machine Learning Homework 2: Numerical Implementation of Logistic Regression
Introduction In homework 1, we considered Gradient Descent (and coordinate descent) for minimizing a regularized loss function. In this homework, we consider an alternative method known as Newton’s algorithm. We will first run Newton’s algorithm on a simple toy problem, and then implement it from scratch on a real data classification problem. We also look at the dual version of logistic regression.
Points Allocation There are a total of 30 marks.
• Question 1 a): 1 mark
• Question 1 b): 2 marks
• Question 2 a): 3 marks
• Question 2 b): 3 marks
• Question 2 c): 2 marks
• Question 2 d): 4 marks
• Question 2 e): 4 marks
• Question 2 f): 2 marks
• Question 2 g): 4 marks
• Question 2 h): 3 marks
• Question 2 i): 2 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution in the form of text and requested plots. For some questions you will be requested to provide screen shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, which should be provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your solutions clear. Start each question on a new page if necessary.

• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file though, or using a tool such as nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions before posting new questions. Please do some basic research online before posting questions. Please only post clarification questions. Any questions deemed to be fishing for answers will be ignored and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for announcements about the spec.
• Please complete your homework on your own, do not discuss your solution with other people in the course. General discussion of the problems is fine, but you must write out your own solution and acknowledge if you discussed any of the problems in your submission (including their name(s) and zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
When and Where to Submit
• Due date: Week 7, Monday March 25th, 2024 by 5pm. Please note that the forum will not be actively monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example, if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be 80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be done through Moodle, no exceptions.

Question 1. Introduction to Newton’s Method
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to in the question. Using existing implementations can result in a grade of zero for the entire question. In homework 1 we studied gradient descent (GD), which is usually referred to as a first order method. Here, we study an alternative algorithm known as Newton’s algorithm, which is generally referred to as a second order method. Roughly speaking, a second order method makes use of both first and second derivatives. Generally, second order methods are much more accurate than first order ones. Given a twice differentiable function g : R → R, Newton’s method generates a sequence {x^(k)} iteratively according to the following update rule:
x^(k+1) = x^(k) − g′(x^(k)) / g′′(x^(k)),   k = 0, 1, 2, ...,   (1)
For example, consider the function g(x) = (1/2)x² − sin(x) with initial guess x^(0) = 0. Then g′(x) = x − cos(x) and g′′(x) = 1 + sin(x),
and so we have the following iterations:
x^(1) = x^(0) − (x^(0) − cos(x^(0))) / (1 + sin(x^(0))) = 0 − (0 − cos(0)) / (1 + sin(0)) = 1
x^(2) = x^(1) − (x^(1) − cos(x^(1))) / (1 + sin(x^(1))) = 1 − (1 − cos(1)) / (1 + sin(1)) = 0.750363867840244
x^(3) = 0.739112890911362
⋮
and this continues until we terminate the algorithm (as a quick exercise for your own benefit, code this up and plot the function and each of the iterates).
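For instance, a minimal NumPy sketch of this exercise (the function name and the choice of 10 iterations are our own; plotting is left out) might look like:

    import numpy as np

    def newton_1d(g_prime, g_double_prime, x0, n_iters=10):
        # generate the sequence x^(k+1) = x^(k) - g'(x^(k)) / g''(x^(k))
        xs = [x0]
        for _ in range(n_iters):
            xs.append(xs[-1] - g_prime(xs[-1]) / g_double_prime(xs[-1]))
        return xs

    # g(x) = (1/2) x^2 - sin(x), so g'(x) = x - cos(x) and g''(x) = 1 + sin(x)
    iterates = newton_1d(lambda x: x - np.cos(x), lambda x: 1 + np.sin(x), x0=0.0)
    print(iterates[:4])  # [0.0, 1.0, 0.75036..., 0.73911...]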
We note here that in practice, we often use a different update called the dampened Newton method, defined by:
x^(k+1) = x^(k) − α g′(x^(k)) / g′′(x^(k)),   k = 0, 1, 2, ....   (2)
Here, as in the case of GD, the step size α has the effect of ‘dampening’ the update. Consider now the twice differentiable function f : Rn → R. The Newton steps in this case are now:
x^(k+1) = x^(k) − (H(x^(k)))⁻¹ ∇f(x^(k)),   k = 0, 1, 2, ...,   (3)
where H(x) = ∇²f(x) is the Hessian of f. Heuristically, this formula generalizes equation (1) to functions with vector inputs, since the gradient is the analog of the first derivative and the Hessian is the analog of the second derivative.
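As a side note, implementations of (3) typically solve the linear system H(x^(k)) d = ∇f(x^(k)) rather than forming the inverse explicitly; a minimal sketch (the gradient and Hessian callables are placeholders for whatever problem is at hand):

    import numpy as np

    def newton_step(x, grad_f, hess_f):
        # one undampened Newton step: x - H(x)^{-1} grad f(x), via a linear solve
        return x - np.linalg.solve(hess_f(x), grad_f(x))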
(a) Consider the function f : R² → R defined by f(x, y) = 100(y − x²)² + (1 − x)².
Create a 3D plot of the function using mplot3d (see lab0 for an example). Use a range of [−5, 5] for both the x and y axes. Further, compute the gradient and Hessian of f. what to submit: a single plot, the code used to generate the plot, and the gradient and Hessian calculated along with all working. Add a copy of the code to solutions.py
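A minimal surface-plot sketch under the stated axis ranges (the grid resolution and colormap are our own choices):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-5, 5, 200)
    y = np.linspace(-5, 5, 200)
    X, Y = np.meshgrid(x, y)
    Z = 100 * (Y - X**2)**2 + (1 - X)**2

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")  # mplot3d toolkit, as in lab0
    ax.plot_surface(X, Y, Z, cmap="viridis")
    ax.set_xlabel("x"); ax.set_ylabel("y"); ax.set_zlabel("f(x, y)")
    plt.show()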
(b) Using NumPy only, implement the (undampened) Newton algorithm to find the minimizer of the function in the previous part, using an initial guess of x^(0) = (−1.2, 1)^T. Terminate the algorithm when ∥∇f(x^(k))∥_2 ≤ 10⁻⁶. Report the values of x^(k) for k = 0, 1, ..., K, where K is your final iteration. what to submit: your iterations, and a screen shot of your code. Add a copy of the code to solutions.py
Question 2. Solving Logistic Regression Numerically
Note: throughout this question do not use any existing implementations of any of the algorithms discussed unless explicitly asked to do so in the question. Using existing implementations can result in a grade of zero for the entire question. In this question we will compare gradient descent and Newton’s algorithm for solving the logistic regression problem. Recall that in logistic regression, our goal is to minimize the log-loss, also referred to as the cross entropy loss. Consider an intercept β0 ∈ R, parameter vector β = (β_1, ..., β_m)^T ∈ R^m, target y_i ∈ {0, 1} and input vector x_i = (x_i1, x_i2, ..., x_ip)^T. Consider also the feature map φ : R^p → R^m and corresponding feature vector φ_i = (φ_i1, φ_i2, ..., φ_im)^T where φ_i = φ(x_i). Define the (l2-regularized) log-loss function:

L(β0, β) = (1/2)∥β∥_2² − (λ/n) Σ_{i=1}^{n} [ y_i ln(σ(β0 + β^T φ_i)) + (1 − y_i) ln(1 − σ(β0 + β^T φ_i)) ],

where σ(z) = (1 + e^{−z})^{−1} is the logistic sigmoid, and λ is a hyper-parameter that controls the amount of regularization. Note that λ here is applied to the data-fit term as opposed to the penalty term directly, but all that changes is that larger λ now means more emphasis on data-fitting and less on regularization. Note also that you are provided with an implementation of this loss in helper.py.

(a) Show that the gradient descent update (with step size α) for γ = [β0, β^T]^T takes the form

γ^(k) = γ^(k−1) − α × [ −(λ/n) 1_n^T (y − σ(β0^(k−1) 1_n + Φ β^(k−1))) ;
                        β^(k−1) − (λ/n) Φ^T (y − σ(β0^(k−1) 1_n + Φ β^(k−1))) ],

where the sigmoid σ(·) is applied elementwise, 1_n is the n-dimensional vector of ones, and

Φ = [φ_1^T; φ_2^T; ...; φ_n^T] ∈ R^{n×m},   y = (y_1, y_2, ..., y_n)^T ∈ R^n.

what to submit: your working out.

(b) In what follows, we refer to the version of the problem based on L(β0, β) as the Primal version. Consider the re-parameterization β = Σ_{j=1}^{n} θ_j φ(x_j). Show that the loss can now be written as:

L(θ0, θ) = (1/2) θ^T A θ − (λ/n) Σ_{i=1}^{n} [ y_i ln(σ(θ0 + θ^T b_{x_i})) + (1 − y_i) ln(1 − σ(θ0 + θ^T b_{x_i})) ],

where θ0 ∈ R, θ = (θ_1, ..., θ_n)^T ∈ R^n, A ∈ R^{n×n} and, for i = 1, ..., n, b_{x_i} ∈ R^n. We refer to this version of the problem as the Dual version. Write down exact expressions for A and b_{x_i} in terms of k(x_i, x_j) := ⟨φ(x_i), φ(x_j)⟩ for i, j = 1, ..., n. Further, for the dual parameter η = [θ0, θ^T]^T, show that the gradient descent update is given by:

η^(k) = η^(k−1) − α × [ −(λ/n) 1_n^T (y − σ(θ0^(k−1) 1_n + A θ^(k−1))) ;
                        A θ^(k−1) − (λ/n) A (y − σ(θ0^(k−1) 1_n + A θ^(k−1))) ].

If m ≫ n, what is the advantage of the dual representation relative to the primal one which just makes use of the feature maps φ directly? what to submit: your working along with some commentary.
(c) We will now compare the performance of (primal/dual) GD and the Newton algorithm on a real dataset using the derived updates in the previous parts. To do this, we will work with the songs.csv dataset. The data contains information about various songs, and also contains a class variable outlining the genre of the song. If you are interested, you can read more about the data here, though a deep understanding of each of the features will not be crucial for the purposes of this assessment. Load in the data and perform the following preprocessing (a rough sketch of these steps appears after the list):
(I) Remove the following features: "Artist Name", "Track Name", "key", "mode", "time signature", "instrumentalness"
(II) The current dataset has 10 classes, but logistic regression in the form we have described it here only works for binary classification. We will restrict the data to classes 5 (hiphop) and 9 (pop). After removing the other classes, re-code the variables so that the target variable is y = 1 for hiphop and y = 0 for pop.
(III) Remove any remaining rows that have missing values for any of the features. Your remaining dataset should have a total of 3886 rows.
(IV) Use the sklearn.model selection.train test split function to split your data into X train, X test, Y train and Y test. Use a test size of 0.3 and a random state of 23 for reproducibility.
(V) Fit the sklearn.preprocessing.MinMaxScaler to the resulting training data, and then use this object to scale both your train and test datasets so that the range of the data is in (0, 0.1).
(VI) Print out the first and last row of X train, X test, y train, y test (but only the first 3 columns of X train, X test).
What to submit: the print out of the rows requested in (VI). A copy of your code in solutions.py
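A rough sketch of steps (I)–(VI), assuming the genre column is named "Class" (an assumption — verify the exact column spellings against the header of songs.csv):

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import MinMaxScaler

    df = pd.read_csv("songs.csv")
    # (I) drop the listed features (names as given in the spec; check exact spellings)
    df = df.drop(columns=["Artist Name", "Track Name", "key", "mode",
                          "time signature", "instrumentalness"])
    # (II) keep classes 5 (hiphop) and 9 (pop), then recode: hiphop -> 1, pop -> 0
    df = df[df["Class"].isin([5, 9])]            # "Class" is an assumed column name
    df["Class"] = (df["Class"] == 5).astype(int)
    # (III) drop rows with missing values
    df = df.dropna()
    # (IV) 70/30 split with the required random state
    X, y = df.drop(columns=["Class"]).values, df["Class"].values
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=23)
    # (V) fit the scaler on the training data only, then transform both sets
    scaler = MinMaxScaler().fit(X_train)         # pass feature_range=... to match the range in (V)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)
    # (VI) print the requested rows (first 3 columns only for the X matrices)
    print(X_train[0, :3], X_train[-1, :3], X_test[0, :3], X_test[-1, :3])
    print(y_train[0], y_train[-1], y_test[0], y_test[-1])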
(d) For the primal problem, we will use the feature map that generates all polynomial features up to and including order 3, that is:
φ(x) = [1, x_1, ..., x_p, x_1³, ..., x_p³, x_1 x_2 x_3, ..., x_{p−2} x_{p−1} x_p].
In Python, we can generate such features using sklearn.preprocessing.PolynomialFeatures.
For example, consider the following code snippet:

    from sklearn.preprocessing import PolynomialFeatures
    import numpy as np

    poly = PolynomialFeatures(3)
    X = np.arange(6).reshape(3, 2)
    poly.fit_transform(X)

Transform the data appropriately, then run gradient descent with α = 0.4 on the training dataset for 50 epochs and λ = 0.5. In your implementation, initialize β0^(0) = 0, β^(0) = 0_p, where 0_p is the p-dimensional vector of zeros. Report your final train and test losses, as well as a plot of the training loss at each iteration.¹ what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.

¹ If you need a sanity check here, the best thing to do is use sklearn to fit logistic regression models. This should give you an idea of what kind of loss your implementation should be achieving (if your implementation does as well or better, then you are on the right track).
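For orientation only, the update from part (a) translates directly into NumPy; a minimal sketch, where Phi is the PolynomialFeatures-transformed training matrix and all names are our own (loss tracking via helper.py is omitted):

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def primal_gd(Phi, y, alpha=0.4, lam=0.5, epochs=50):
        # gradient descent on L(beta0, beta) using the update derived in part (a)
        n, m = Phi.shape
        beta0, beta = 0.0, np.zeros(m)
        for _ in range(epochs):
            r = y - sigmoid(beta0 + Phi @ beta)       # residual y - sigma(...)
            grad0 = -(lam / n) * r.sum()              # gradient wrt beta0
            grad = beta - (lam / n) * (Phi.T @ r)     # gradient wrt beta
            beta0, beta = beta0 - alpha * grad0, beta - alpha * grad
        return beta0, beta

Record the training loss inside the loop (e.g. with the helper.py implementation) to produce the required plot.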
 
(e) For the primal problem, run the dampened Newton algorithm on the training dataset for 50 epochs and λ = 0.5. Use the same initialization for β0, β as in the previous question. Report your final train and test losses, as well as plots of your train loss for both the GD and Newton algorithms for all iterations (use labels/legends to make your plot easy to read). In your implementation, you may use that the Hessian for the primal problem is given by:

H(β0, β) = [ (λ/n) 1_n^T D 1_n     (λ/n) 1_n^T D Φ
             (λ/n) Φ^T D 1_n       I_m + (λ/n) Φ^T D Φ ],

where D is the n × n diagonal matrix with i-th diagonal element σ(d_i)(1 − σ(d_i)) and d_i = β0 + φ_i^T β. what to submit: one plot of the train losses. Report your train and test losses, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
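A sketch of a single dampened Newton step assembled from this Hessian (all names are our own; the identity block I_m follows from the (1/2)∥β∥² term in the loss):

    import numpy as np

    def newton_step_primal(beta0, beta, Phi, y, alpha=0.4, lam=0.5):
        # one dampened Newton step for the primal problem
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        n, m = Phi.shape
        s = sigmoid(beta0 + Phi @ beta)
        r = y - s
        D = np.diag(s * (1 - s))
        ones = np.ones(n)
        # gradient at (beta0, beta), as in part (a)
        g = np.concatenate(([-(lam / n) * ones @ r], beta - (lam / n) * Phi.T @ r))
        # block Hessian, as above
        H = np.block([
            [np.array([[(lam / n) * ones @ D @ ones]]), ((lam / n) * ones @ D @ Phi)[None, :]],
            [((lam / n) * Phi.T @ D @ ones)[:, None], np.eye(m) + (lam / n) * Phi.T @ D @ Phi],
        ])
        gamma = np.concatenate(([beta0], beta)) - alpha * np.linalg.solve(H, g)
        return gamma[0], gamma[1:]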
(f) For the feature map used in the previous two questions, what is the corresponding kernel k(x, y) that can be used to give the dual problem? what to submit: the chosen kernel.
(g) Implement Gradient Descent for the dual problem using the kernel found in the previous part. Use the same parameter values as before (although now θ0^(0) = 0 and θ^(0) = 0_n). Report your final training loss, as well as a plot of your train loss for GD for all iterations. what to submit: a plot of the train losses and report your final train loss, and a screen shot of any code used in this section, as well as a copy of your code in solutions.py.
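A sketch of the dual update derived in part (b), with the Gram matrix A[i, j] = k(x_i, x_j) built from your kernel from part (f) (left as a placeholder here; all names are our own):

    import numpy as np

    def dual_gd(A, y, alpha=0.4, lam=0.5, epochs=50):
        # gradient descent on L(theta0, theta); A is the n x n training Gram matrix
        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
        n = A.shape[0]
        theta0, theta = 0.0, np.zeros(n)
        for _ in range(epochs):
            r = y - sigmoid(theta0 + A @ theta)
            theta0 -= alpha * (-(lam / n) * r.sum())
            theta -= alpha * (A @ theta - (lam / n) * (A @ r))
        return theta0, theta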
(h) Explain how to compute the test loss for the GD solution to the dual problem in the previous part. Implement this approach and report the test loss. what to submit: some commentary and a screen shot of your code, and a copy of your code in solutions.py.
(i) In general, it turns out that Newton’s method is much better than GD: convergence of the Newton algorithm is quadratic, whereas convergence of GD is linear (much slower than quadratic). Given this, why do you think gradient descent and its variants (e.g. SGD) are much more popular for solving machine learning problems? what to submit: some commentary