



      46-886 Machine Learning Fundamentals HW 1
      Homework 1
      Due: Sunday, March 23, 11:59pm
      • Upload your assignment to Canvas (only one person per team needs to submit)
      • Include a writeup containing your answers to the questions below (and your team
      composition), and a Python notebook with your code. Your code should run without
      error when we test it.
      • Please note that this assignment has two parts: A & B.
      • Cite all sources used (beyond course materials)
      • Finally, let’s review the instructions for using Google Colab, and submitting the final
      writeup and Python notebook on Canvas.
      1. Visit colab.research.google.com, and log in using your CMU ID.
      2. Create a new notebook. Save it. Optionally, share it with your partner.
3. Upload[1] climate_change.csv to Colab after downloading it from Canvas.
      4. Complete the assignment. Remember to save the notebook when exiting Colab.
      5. File → Download → Download .ipynb downloads the notebook.
6. Submit this notebook and a writeup to Canvas.
      7. Remember to indicate if you had a partner at this stage.
[1] You may need to do this on every fresh run, i.e., whenever Colab reinitializes your interpreter. If read_csv complains that climate_change.csv does not exist, that is a sure sign.
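For step 3, a minimal upload sketch (this assumes the file keeps its Canvas name, climate_change.csv; google.colab.files opens an interactive file picker):

    from google.colab import files

    # Re-run this cell whenever the Colab runtime restarts.
    files.upload()  # choose climate_change.csv in the picker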
      Part A: Climate Change
A.1 In this problem, we will attempt to study the relationship between average global temperature and several other environmental factors that affect the climate. The file climate_change.csv (available on Canvas) contains monthly climate data from May 1983 to December 2008. You can (and should) familiarize yourself with the data in Excel. A brief description of all the variables can be found below.
Year: Observation year.
Month: Observation month, given as a numerical value (1 = January, 2 = February, 3 = March, etc.).
Temp: Difference in degrees Celsius between the average global temperature in that period and a reference value.
CO2, N2O, CH4, CFC-11, CFC-12: Atmospheric concentrations of carbon dioxide (CO2), nitrous oxide (N2O), methane (CH4), trichlorofluoromethane (CFC-11), and dichlorodifluoromethane (CFC-12), respectively. CO2, N2O, and CH4 are expressed in ppmv (parts per million by volume); CFC-11 and CFC-12 are expressed in ppbv (parts per billion by volume).
Aerosols: Mean stratospheric aerosol optical depth at 550 nm. This variable is linked to volcanoes, as volcanic eruptions add new particles to the atmosphere, which affect how much of the sun's energy is reflected back into space.
TSI: Total Solar Irradiance (TSI) in W/m^2, the rate at which the sun's energy is deposited per unit area. Due to sunspots and other solar phenomena, the amount of energy given off by the sun varies substantially over time.
MEI: Multivariate El Nino Southern Oscillation index (MEI), a measure of the strength of the El Nino/La Nina-Southern Oscillation (a weather effect in the Pacific Ocean that affects global temperatures).
We are interested in studying whether and how changes in environmental factors predict future temperatures. To do this, first read the dataset climate_change.csv into Python (do not forget to place this file in the same folder on Colab as your Python notebook, usually /content). Then split the data into a training set, consisting of all observations up to and including 2002, and a test set consisting of the remaining years.
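A minimal loading-and-splitting sketch, assuming the file is named climate_change.csv and the year column is named Year as in the table above:

    import pandas as pd

    climate = pd.read_csv("climate_change.csv")

    # Training set: observations up to and including 2002; test set: 2003-2008.
    train = climate[climate["Year"] <= 2002]
    test = climate[climate["Year"] > 2002]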
(a) Build a linear regression model to predict the dependent variable Temp, using CO2, CH4, N2O, CFC-11, CFC-12, Aerosols, TSI, and MEI as features (Year and Month should NOT be used as features in the model). As always, use only the training set to train your model. What are the in-sample and out-of-sample R^2, MSE, and MAE? (One possible setup is sketched after part (e) below.)
(b) Build another linear regression model, this time with only N2O, Aerosols, TSI, and MEI as features. What are the in-sample and out-of-sample R^2, MSE, and MAE?
      (c) Between the two models built in parts (a) and (b), which performs better in-sample?
      Which performs better out-of-sample?
      (d) For each of the two models built in parts (a) and (b), what was the regression coefficient
      for the N2O feature, and how should this coefficient be interpreted?
      (e) Given your responses to parts (c) and (d), which of the two models should you prefer
      to use moving forward?
Hint: The current scientific opinion is that N2O is a greenhouse gas – a higher concentration traps more heat from the sun, and thus contributes to the heating of the Earth.
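One possible setup for parts (a) and (b), using scikit-learn and the train/test split above; the exact CSV column names (e.g. "CFC-11") are assumptions based on the variable table:

    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

    def fit_and_report(feature_cols):
        """Fit OLS on the training set and report R^2, MSE, and MAE in- and out-of-sample."""
        model = LinearRegression()
        model.fit(train[feature_cols], train["Temp"])
        for label, df in [("in-sample", train), ("out-of-sample", test)]:
            pred = model.predict(df[feature_cols])
            print(label,
                  "R^2:", r2_score(df["Temp"], pred),
                  "MSE:", mean_squared_error(df["Temp"], pred),
                  "MAE:", mean_absolute_error(df["Temp"], pred))
        return model

    # Part (a): all eight environmental features; part (b): the reduced feature set.
    full_model = fit_and_report(["CO2", "CH4", "N2O", "CFC-11", "CFC-12", "Aerosols", "TSI", "MEI"])
    small_model = fit_and_report(["N2O", "Aerosols", "TSI", "MEI"])

    # N2O coefficient of the reduced model, for part (d).
    print(dict(zip(["N2O", "Aerosols", "TSI", "MEI"], small_model.coef_)))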
      Part B: Baseball Analytics (No knowledge of baseball is needed to complete this problem)
Sports analytics started with – and was popularized by – the data-driven approach to player
      assessment and team formation of the Oakland Athletics. In the 1990s, the “A’s” were
      one of the financially-poorest teams in Major League Baseball (MLB). Player selection was
      primarily done through scouting: baseball experts would watch high school and college games
      to identify future talent. Under the leadership of Billy Beane and Paul DePodesta, the A’s
      started to use data to identify undervalued players. Quickly, they met success on the field,
      reaching the playoffs in 2002 and 2003 despite a much lower payroll than their competitors.
This started a revolution in sports: analytics is now a central component of every team's strategy. (For more details, see the book Moneyball: The Art of Winning an Unfair Game by Michael Lewis, and the Moneyball film.)
      In this problem, you will predict the salary of baseball players. The dataset in the included
      baseball.csv file contains information on 263 players. Each row represents a single player.
      The first column reports the players’ annual salaries (in $1,000s), which we aim to predict.
      The other columns contain four sets of variables: offensive statistics during the last season,
      offensive statistics over each player’s career, defensive statistics during the last season, and
      team information. These are described in the table below.
Salary: The player's annual salary (in $1,000s)
AtBats: Number of at bats this season
Hits: Number of hits this season
HmRuns: Number of home runs this season
Runs: Number of runs this season
RBIs: Number of runs batted in this season
Walks: Number of walks this season
Years: Number of years in MLB
CareerAtBats: Number of at bats over career
CareerHits: Number of hits over career
CareerHmRuns: Number of home runs over career
CareerRuns: Number of runs over career
CareerRBIs: Number of runs batted in over career
CareerWalks: Number of walks over career
PutOuts: Number of putouts this season
Assists: Number of assists this season
Errors: Number of errors this season
League: League in which the player plays (N = National, A = American)
Division: Division in which the player plays (E = East, C = Central, W = West)
NewLeague: League in which the player will play next year (N = National, A = American)
      Read the baseball.csv file into Python. Note that three of the features are categorical
      (League, Division, and NewLeague) and thus need to be one-hot encoded. Do that before
      proceeding to the questions below.
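A minimal sketch of the read and one-hot encoding step, assuming the column names listed in the table above (pandas' get_dummies is one convenient option):

    import pandas as pd

    baseball = pd.read_csv("baseball.csv")

    # One-hot encode the categorical features; drop_first avoids perfect collinearity with the intercept.
    baseball = pd.get_dummies(baseball, columns=["League", "Division", "NewLeague"], drop_first=True)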
      B.1 Before building any machine learning models, explore the dataset: try plotting Salary
      against some features, one at a time. When you have identified a feature that you feel may
      be useful for predicting Salary, include that plot in your writeup, and comment on what
      you have observed in the plot (one sentence will suffice).
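One way to eyeball these relationships (CareerHits here is only an illustrative choice, not the feature you must pick):

    import matplotlib.pyplot as plt

    plt.scatter(baseball["CareerHits"], baseball["Salary"])
    plt.xlabel("CareerHits")
    plt.ylabel("Salary ($1,000s)")
    plt.show()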
B.2 Split the data into a training set (70%) and a test set (30%). Train an "ordinary" linear regression model (i.e., no regularization), and report the following (one possible setup is sketched after part (e)):
(a) The in-sample and out-of-sample R^2
(b) The value of the coefficient for the feature you identified in question B.1, and an interpretation of that value.
      (c) The effect on salary that your model predicts for a player that switches divisions from
      East to West.
      (d) The effect on salary that your model predicts for a player that switches divisions from
      West to Central.
      (e) The effect on salary that your model predicts for a player that switches divisions from
      Central to East.
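A sketch of the split and ordinary least-squares fit, continuing from the sketches above; the 70/30 split follows the question, while random_state=42 is just an illustrative choice for reproducibility:

    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    X = baseball.drop(columns=["Salary"])
    y = baseball["Salary"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

    ols = LinearRegression().fit(X_train, y_train)
    print("in-sample R^2:", r2_score(y_train, ols.predict(X_train)))
    print("out-of-sample R^2:", r2_score(y_test, ols.predict(X_test)))

    # Coefficients by feature name, e.g. the Division dummies needed for parts (c)-(e).
    print(dict(zip(X_train.columns, ols.coef_)))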
B.3 Train a model using ridge regression with 10-fold cross-validation to select the tuning parameter. The choice of which tuning parameters to try is up to you (this does not mean there is no wrong answer). Report the following (one possible setup is sketched after part (d)):
(a) The in-sample and out-of-sample R^2
(b) The final value of the tuning parameter (i.e., after cross-validation)
(c) The value of the coefficient for the feature you identified in question B.1, and an interpretation of that value. Compared to your model from question B.2, has this feature become more or less "important"?
      (d) Of the two models so far, which one should be used moving forward?
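One possible setup with scikit-learn's RidgeCV; the alpha grid below is only an example of "which tuning parameters to try", and scaling the features first (e.g., with StandardScaler) is often advisable before regularized regression:

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.metrics import r2_score

    alphas = np.logspace(-3, 3, 50)  # example grid of tuning parameters
    ridge = RidgeCV(alphas=alphas, cv=10).fit(X_train, y_train)

    print("chosen alpha:", ridge.alpha_)
    print("in-sample R^2:", r2_score(y_train, ridge.predict(X_train)))
    print("out-of-sample R^2:", r2_score(y_test, ridge.predict(X_test)))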
B.4 Train a model using LASSO with 10-fold cross-validation to select the tuning parameter. The choice of which tuning parameters to try is up to you (this does not mean there is no wrong answer). Report the following (one possible setup is sketched after part (d)):
(a) The in-sample and out-of-sample R^2
      (b) The final value of the tuning parameter (i.e. after cross-validation)
      (c) The number of features with non-zero coefficients (Hint: there should be at least one
      feature with coefficient equal to 0)
      (d) Of the three models so far, which one should be used moving forward?
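A matching sketch with LassoCV (again, the implicit alpha grid and settings are illustrative, not the required answer):

    from sklearn.linear_model import LassoCV
    from sklearn.metrics import r2_score

    lasso = LassoCV(cv=10).fit(X_train, y_train)

    print("chosen alpha:", lasso.alpha_)
    print("in-sample R^2:", r2_score(y_train, lasso.predict(X_train)))
    print("out-of-sample R^2:", r2_score(y_test, lasso.predict(X_test)))
    print("non-zero coefficients:", int((lasso.coef_ != 0).sum()))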
