COMP9417 - Machine Learning
Homework 3: MLEs and Kernels
Introduction
In this homework we first continue our exploration of the bias, variance and MSE of estimators. We will show that MLE estimators are not necessarily unbiased, which may affect their performance in small samples. We then delve into kernel methods: first by kernelizing a popular algorithm used in unsupervised learning, known as K-means. We then look at kernel SVMs and compare them to fitting linear SVMs with feature transforms.
Points Allocation
There are a total of 28 marks.
• Question 1 a): 2 marks
• Question 1 b): 2 marks
• Question 1 c): 4 marks
• Question 2 a): 1 mark
• Question 2 b): 1 mark
• Question 2 c): 2 marks
• Question 2 d): 2 marks
• Question 2 e): 2 marks
• Question 2 f): 3 marks
• Question 2 g): 2 marks
• Question 3 a): 1 mark
• Question 3 b): 1 mark
• Question 3 c): 1 mark
• Question 3 d): 1 mark
• Question 3 e): 3 marks
What to Submit
• A single PDF file which contains solutions to each question. For each question, provide your solution
in the form of text and requested plots. For some questions you will be requested to provide screen
shots of code used to generate your answer — only include these when they are explicitly asked for.
• .py file(s) containing all code you used for the project, provided in a separate .zip file. This code must match the code provided in the report.
• You may be deducted points for not following these instructions.
• You may be deducted points for poorly presented/formatted work. Please be neat and make your
solutions clear. Start each question on a new page if necessary.
• You cannot submit a Jupyter notebook; this will receive a mark of zero. This does not stop you from developing your code in a notebook and then copying it into a .py file, or from using a tool such as nbconvert or similar.
• We will set up a Moodle forum for questions about this homework. Please read the existing questions
before posting new questions. Please do some basic research online before posting questions. Please
only post clarification questions. Any questions deemed to be fishing for answers will be ignored
and/or deleted.
• Please check Moodle announcements for updates to this spec. It is your responsibility to check for
announcements about the spec.
• Please complete your homework on your own, do not discuss your solution with other people in the
course. General discussion of the problems is fine, but you must write out your own solution and
acknowledge if you discussed any of the problems in your submission (including their name(s) and
zID).
• As usual, we monitor all online forums such as Chegg, StackExchange, etc. Posting homework questions on these sites is equivalent to plagiarism and will result in a case of academic misconduct.
• You may not use SymPy or any other symbolic programming toolkit to answer the derivation questions. Doing so will result in an automatic grade of zero for the relevant question. You must do the derivations manually.
When and Where to Submit
• Due date: Week 8, Monday July 15th, 2024 by 5pm. Please note that the forum will not be actively
monitored on weekends.
• Late submissions will incur a penalty of 5% per day from the maximum achievable grade. For example,
 if you achieve a grade of 80/100 but you submitted 3 days late, then your final grade will be
80 − 3 × 5 = 65. Submissions that are more than 5 days late will receive a mark of zero.
• Submission must be made on Moodle, no exceptions.
Question 1. Maximum Likelihood Estimators and their Bias
Let $X_1, \ldots, X_n \overset{\text{i.i.d.}}{\sim} N(\mu, \sigma^2)$. Recall that in Tutorial 2 we showed that the MLE estimators of $\mu$ and $\sigma^2$ are $\hat{\mu}_{MLE}$ and $\hat{\sigma}^2_{MLE}$, where
$$\hat{\mu}_{MLE} = \bar{X}, \qquad \hat{\sigma}^2_{MLE} = \frac{1}{n} \sum_{i=1}^{n} (X_i - \bar{X})^2.$$
In this question, we will explore these estimators in more depth.
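For concreteness, a minimal sketch of computing both MLEs from a sample (the simulated data below is illustrative only, not part of the assignment):

import numpy as np

rng = np.random.default_rng(0)
n, mu, sigma = 20, 3.0, 1.0
X = rng.normal(mu, sigma, size=n)        # simulated i.i.d. N(mu, sigma^2) sample

mu_mle = X.mean()                        # the sample mean, MLE of mu
sigma2_mle = ((X - mu_mle) ** 2).mean()  # divides by n (not n - 1), MLE of sigma^2
print(mu_mle, sigma2_mle)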
(a) Find the bias and variance of both $\hat{\mu}_{MLE}$ and $\hat{\sigma}^2_{MLE}$. Hint: you may use without proof the fact that
$$\mathrm{var}\left( \frac{1}{\sigma^2} \sum_{i=1}^{n} (X_i - \bar{X})^2 \right) = 2(n-1).$$
What to submit: the bias and variance of the estimators, along with your working.
(b) Your friend tells you that they have a much better estimator for $\sigma^2$. Discuss whether this estimator is better or worse than the MLE estimator. Be sure to include a detailed analysis of the bias and variance of both estimators, and describe what happens to each of these quantities (for each of the estimators) as the sample size n increases (use plots). For your plots, you may assume that σ = 1.
What to submit: the bias and variance of the new estimator, a plot comparing the bias of both estimators as a function of the sample size n, and a plot comparing the variance of both estimators as a function of the sample size n; use labels/legends in your plots. A copy of the code used here in solutions.py.
(c) Compute and then plot the MSE of the two estimators considered in the previous part. For your plots, you may assume that σ = 1. Provide some discussion as to which estimator is better (according to their MSE), and what happens as the sample size n gets bigger.
What to submit: the MSEs of the two variance estimators, a plot comparing the MSEs of the estimators as a function of the sample size n, and some commentary. Use labels/legends in your plots. A copy of the code used here in solutions.py.
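A minimal plotting sketch for (b) and (c). It assumes, purely for illustration (the friend's estimator is given in the original handout), that the comparison estimator is the unbiased sample variance $s^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2$; the closed-form curves then follow from the hint in (a):

import numpy as np
import matplotlib.pyplot as plt

sigma2 = 1.0                              # sigma = 1, as the spec allows
n = np.arange(2, 101)

# MLE (divide by n): bias = -sigma^2 / n, var = 2(n-1) sigma^4 / n^2
bias_mle = -sigma2 / n
var_mle = 2 * (n - 1) * sigma2**2 / n**2

# hypothetical comparison: unbiased sample variance (divide by n - 1)
bias_unb = np.zeros_like(n, dtype=float)  # unbiased, so bias = 0
var_unb = 2 * sigma2**2 / (n - 1)

mse_mle = var_mle + bias_mle**2           # MSE = variance + bias^2
mse_unb = var_unb + bias_unb**2

plt.plot(n, mse_mle, label="MLE (1/n)")
plt.plot(n, mse_unb, label="unbiased (1/(n-1))")
plt.xlabel("sample size n")
plt.ylabel("MSE")
plt.legend()
plt.show()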
Question 2. A look at clustering algorithms
Note: Using an existing/online implementation of the algorithms described in this question will
result in a grade of zero. You may use code from the course with reference.
The K-means algorithm is the simplest and most intuitive clustering algorithm available. The algorithm
takes two inputs: the (unlabeled) data X1, . . . , Xn and a desired number of clusters K. The goal is to
identify K groupings (which we refer to as clusters), with each group containing a subset of the original
data points. Points that are deemed similar/close to each other will be assigned to the same grouping.
Algorithmically, given a set number of iterations T, we do the following:
1. Initialization: start with an initial set of K means (cluster centers): $\mu_1^{(0)}, \ldots, \mu_K^{(0)}$.
2. For t = 1, 2, ..., T:
• For i = 1, ..., n: assign $X_i$ to its nearest mean,
$$k_i = \arg\min_{k \in \{1,\ldots,K\}} \| X_i - \mu_k^{(t-1)} \|^2. \qquad (1)$$
• For k = 1, ..., K: set $C_k^{(t)} = \{ X_i : k_i = k \}$ and update each cluster center¹ as
$$\mu_k^{(t)} = \frac{1}{|C_k^{(t)}|} \sum_{X_i \in C_k^{(t)}} X_i.$$²

(a) Consider the following data-set of n = 5 points in $\mathbb{R}^2$ ... and run the algorithm by hand. Be sure to show your working.
What to submit: your cluster centers and any working, either typed or handwritten.
(b) Your friend tells you that they are working on a clustering problem at work. You ask for more
details and they tell you they have an unlabelled dataset with p = 10000 features and they ran
K-means clustering using Euclidean distance. They identified 52 clusters and managed to define
labellings for these clusters based on their expert domain knowledge. What do you think about the
usage of K-means here? Do you have any criticisms?
What to submit: some commentary.
(c) Consider the data and random clustering generated using the following code snippet:
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets

X, y = datasets.make_circles(n_samples=200, factor=0.4, noise=0.04, random_state=13)
colors = np.array(['orange', 'blue'])

np.random.seed(123)
random_labeling = np.random.choice([0, 1], size=X.shape[0])
plt.scatter(X[:, 0], X[:, 1], s=20, color=colors[random_labeling])
plt.title("Randomly Labelled Points")
plt.savefig("Randomly_Labeled.png")
plt.show()
The random clustering plot is displayed here:
¹ Recall that for a set S, |S| denotes its cardinality. For example, if S = {4, 9, 1} then |S| = 3.
² The notation in the summation here means we are summing over all points belonging to the k-th cluster at iteration t, i.e. $C_k^{(t)}$.
Implement K-means clustering from scratch on this dataset. Initialize with the following two cluster centers:
...
and run for 10 iterations. In your answer, provide a plot of your final clustering (after 10 iterations), similar to the randomly labelled plot but with your computed labels in place of the random labelling. Do you think K-means does a good job on this data? Provide some discussion of what you observe.
What to submit: some commentary, a single plot, a screen shot of your code and a copy of your code in your .py file.
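A minimal from-scratch sketch of the two K-means steps (the initial centers below are placeholders; substitute the values given in the spec):

import numpy as np

def kmeans(X, centers, n_iters=10):
    # Plain K-means: assign each point to its nearest center, then
    # recompute each center as the mean of its assigned points.
    centers = centers.astype(float).copy()
    for _ in range(n_iters):
        # squared Euclidean distance from every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for k in range(len(centers)):
            if np.any(labels == k):  # leave a center unchanged if its cluster empties
                centers[k] = X[labels == k].mean(axis=0)
    return labels, centers

init = np.array([[0.0, 1.0], [0.0, -1.0]])  # hypothetical initial centers
labels, centers = kmeans(X, init, n_iters=10)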
(d) You decide to extend your implementation by considering a feature transformation φ which maps 2-dimensional points (x1, x2) into 3-dimensional points of the form ... . Run your K-means algorithm (for 10 iterations) on the transformed data with cluster centers:
...
Note for reference that the nearest mean step of the algorithm is now
$$k_i = \arg\min_{k \in \{1,\ldots,K\}} \| \phi(X_i) - \mu_k^{(t-1)} \|^2.$$
In your answer, provide a plot of your final clustering using the code provided in (c) as a template. Provide some discussion of what you observe.
What to submit: a single plot, a screen shot of your code and a copy of your code in your .py file, some commentary.
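A sketch of running the same routine on transformed data, using a hypothetical lift $(x_1, x_2) \mapsto (x_1, x_2, x_1^2 + x_2^2)$ as a stand-in for the transform specified in the handout (the kmeans function from the previous sketch works in any dimension):

def phi(X):
    # hypothetical 2D -> 3D transform; replace with the one from the spec
    return np.column_stack([X[:, 0], X[:, 1], X[:, 0]**2 + X[:, 1]**2])

Z = phi(X)
init3 = phi(init)  # lift the (placeholder) centers into the same space
labels3, _ = kmeans(Z, init3, n_iters=10)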
(e) You recall (from lectures perhaps) that directly applying a feature transformation to the data can be computationally intractable, and can be avoided if we instead write the algorithm in terms of a function $h$ that satisfies $h(x, x') = \langle \phi(x), \phi(x') \rangle$. Show that the nearest mean step in (1) can be re-written as
$$k_i = \arg\min_{k \in \{1,\ldots,K\}} \left\{ h(X_i, X_i) + T_1 + T_2 \right\},$$
where $T_1$ and $T_2$ are two separate terms that may depend on $C_k^{(t-1)}$, $h(X_i, X_j)$ and $h(X_j, X_\ell)$ for $X_j, X_\ell \in C_k^{(t-1)}$. The expressions should not depend on $\phi$. What to submit: your full working.
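As a reminder of the identity that drives the working (the starting point, not the required derivation itself): in the feature space the cluster mean is $\mu_k = \frac{1}{|C_k^{(t-1)}|} \sum_{X_j \in C_k^{(t-1)}} \phi(X_j)$, and a squared norm expands through inner products as
$$\| \phi(X_i) - \mu_k \|^2 = \langle \phi(X_i), \phi(X_i) \rangle - 2 \langle \phi(X_i), \mu_k \rangle + \langle \mu_k, \mu_k \rangle.$$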
(f) With your answer to the previous part, you design a new algorithm: given data $X_1, \ldots, X_n$, the number of clusters K, and the number of iterations T:
1. Initialization: start with an initial set of K clusters: $C_1^{(0)}, C_2^{(0)}, \ldots, C_K^{(0)}$.
2. For t = 1, 2, 3, ..., T:
• For i = 1, 2, ..., n: solve
$$k_i = \arg\min_{k \in \{1,\ldots,K\}} \left\{ h(X_i, X_i) + T_1 + T_2 \right\}.$$
• For k = 1, ..., K, set
$$C_k^{(t)} = \{ X_i \text{ such that } k_i = k \}.$$
The goal of this question is to implement this new algorithm from scratch using the same data generated in part (c). In your implementation, you will run the algorithm two times: first with the function
$$h_1(x, x') = (1 + \langle x, x' \rangle),$$
and then with the function
$$h_2(x, x') = (1 + \langle x, x' \rangle)^2.$$
For your initialization (both times), use the provided initial clusters, which can be loaded in by running initial_clusters = np.load('init_clusters.npy'). Run your code for at most 10 iterations, and provide two plots, one for h1 and another for h2. Discuss your results for the two functions. What to submit: two plots, your discussion, a screen shot of your code and a copy of your code in your .py file.
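A minimal sketch of the cluster-indexed variant, with distances computed through h only. It assumes init_clusters.npy stores one integer cluster label per point, and the t1/t2 terms follow the standard kernel K-means expansion, which part (e) asks you to derive for yourself:

import numpy as np

def kernel_kmeans(X, init_labels, h, n_iters=10):
    n = X.shape[0]
    H = np.array([[h(X[i], X[j]) for j in range(n)] for i in range(n)])  # Gram matrix
    labels = init_labels.copy()
    K = labels.max() + 1
    for _ in range(n_iters):
        d = np.full((n, K), np.inf)
        for k in range(K):
            idx = np.where(labels == k)[0]
            if len(idx) == 0:
                continue  # empty cluster: leave its distances at infinity
            t1 = -2.0 * H[:, idx].mean(axis=1)  # cross term with the cluster
            t2 = H[np.ix_(idx, idx)].mean()     # within-cluster term
            d[:, k] = np.diag(H) + t1 + t2
        labels = d.argmin(axis=1)
    return labels

h1 = lambda x, z: 1 + x @ z
h2 = lambda x, z: (1 + x @ z) ** 2
labels = kernel_kmeans(X, np.load('init_clusters.npy'), h2, n_iters=10)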
(g) The initializations of the algorithms above were chosen very specifically, both in part (d) and part (f). Investigate different choices of initializations for your implemented algorithms. Do your results look similar, better or worse? Comment on the pros/cons of your algorithm relative to K-means, and more generally as a clustering algorithm. For full credit, you need to provide justification in the form of a rigorous mathematical argument and/or empirical demonstration. What to submit: your commentary.
Question 3. Kernel Power
Consider the following 2-dimensional data-set, where y denotes the class of each point.
index   x1   x2    y
  1      1    0   -1
  2      0    1   -1
  3      0   -1   -1
  4     -1    0   +1
  5      0    2   +1
  6      0   -2   +1
  7     -2    0   +1

Throughout this question, you may use any desired packages to answer the questions.
(a) Use the transformation $x = (x_1, x_2) \mapsto (\phi_1(x), \phi_2(x))$, where $\phi_1(x) = 2x_2^2 - 4x_1 + 1$ and $\phi_2(x) = x_1^2 - 2x_2 - 3$. What is the equation of the best separating hyper-plane in the new feature space? Provide a plot with the data set and hyperplane clearly shown.
What to submit: a single plot, the equation of the separating hyperplane, a screen shot of your code, a copy of your code in your .py file for this question.
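A minimal sketch of applying the transform and plotting the transformed points (the data and transform come straight from the question; the separating line is left for you to determine):

import numpy as np
import matplotlib.pyplot as plt

X = np.array([[1, 0], [0, 1], [0, -1], [-1, 0], [0, 2], [0, -2], [-2, 0]], dtype=float)
y = np.array([-1, -1, -1, 1, 1, 1, 1])

phi = np.column_stack([2 * X[:, 1]**2 - 4 * X[:, 0] + 1,   # phi_1
                       X[:, 0]**2 - 2 * X[:, 1] - 3])      # phi_2
plt.scatter(phi[:, 0], phi[:, 1], c=y, cmap='bwr')
plt.xlabel('phi1(x)')
plt.ylabel('phi2(x)')
plt.show()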
(b) You wish to fit a hard margin SVM using the SVC class in sklearn. However, the SVC class only
fits soft margin SVMs. Explain how one may still effectively fit a hard margin SVM using the SVC
class. What to submit: some commentary.
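A sketch of one common workaround (stated here as an assumption, not the official solution): since the penalty parameter C caps the dual variables, a very large C makes margin violations prohibitively expensive, so on separable data the soft-margin solution coincides with the hard-margin one:

from sklearn.svm import SVC

# a very large C approximates the hard margin on separable data
clf = SVC(kernel='linear', C=1e10)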
(c) Fit a hard margin linear SVM to the transformed data-set in part (a). What are the estimated values of $(\alpha_1, \ldots, \alpha_7)$? Based on this, which points are the support vectors? What error does your computed SVM achieve?
What to submit: the indices of your identified support vectors, the train error of your SVM, the computed α's (rounded to 3 d.p.), a screen shot of your code, a copy of your code in your .py file for this question.
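A sketch of reading the dual variables off a fitted SVC, continuing from the sketches above (sklearn's dual_coef_ stores $y_i \alpha_i$ for the support vectors only; all other α's are zero):

clf = SVC(kernel='linear', C=1e10).fit(phi, y)
alphas = np.zeros(len(y))
alphas[clf.support_] = np.abs(clf.dual_coef_.ravel())  # recover alpha_i >= 0
print(np.round(alphas, 3))          # the computed alphas
print(clf.support_)                 # support vector indices (0-based)
print(1 - clf.score(phi, y))        # train error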
(d) Consider now the kernel $k(x, z) = (2 + x^\top z)^2$. Run a hard-margin kernel SVM on the original (untransformed) data given in the table at the start of the question. What are the estimated values of $(\alpha_1, \ldots, \alpha_7)$? Based on this, which points are the support vectors? What error does your computed SVM achieve?
What to submit: the indices of your identified support vectors, the train error of your SVM, the computed α's (rounded to 3 d.p.), a screen shot of your code, a copy of your code in your .py file for this question.
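One way to realize this kernel in sklearn (a sketch: sklearn's polynomial kernel is $(\gamma \langle x, z \rangle + c_0)^d$, so k corresponds to gamma=1, coef0=2, degree=2; a precomputed Gram matrix is an equivalent route):

clf_k = SVC(kernel='poly', degree=2, gamma=1, coef0=2, C=1e10).fit(X, y)

# equivalent: hand SVC the Gram matrix directly
G = (2 + X @ X.T) ** 2
clf_g = SVC(kernel='precomputed', C=1e10).fit(G, y)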
(e) Provide a detailed argument explaining your results in parts (a), (c) and (d). Your argument should explain the similarities and differences in the answers found. In particular, is your answer in (d) worse than in (c)? Why? To get full marks, be as detailed as possible, and use mathematical arguments or extra plots if necessary.
What to submit: some commentary and/or plots. If you use any code here, provide a screen shot of your code, and a copy of your code in your .py file for this question.