      CSC4140 Final Projects
      April 27, 2024
The final is 40% of the total mark.
We encourage you to help each other, but do not show the same thing in your report and do not cheat!
      Strict Due Date: 11:59PM, May 20th, 2024
      Student ID:
      Student Name:
      This assignment represents my own work in accordance with University regulations.
      Signature:
      1 Transient Rendering through Scattering Medium
      Problem Description
      Participating media are used to simulate materials ranging from fog, smoke, and clouds, over
      translucent materials such as skin or milk, to fuzzy structured substances such as woven or knitted
      cloth. Participating media are usually attached to shapes in the scene. When a shape marks the
      transition to a participating medium, it is necessary to provide information about the two media
      that lie at the interior and exterior of the shape. This informs the renderer about what happens
in the region of space surrounding the surface. In many practical use cases, it is sufficient
to specify only an interior medium and to assume that the exterior medium (e.g., air) does not
influence the light transport.
Transient rendering is proposed to simulate how light propagates through space. Unlike a
traditional renderer, it assumes that the speed of light is finite. Transient rendering of participating
media provides a new simulation tool for developing sensing technologies in extreme weather
conditions.
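The two ingredients of this project are a participating medium and the finite speed of light. As a starting point for the medium, the sketch below (plain C++; the names FogMedium and sampleDistance are illustrative assumptions, not part of the given engine) shows the two standard operations a homogeneous fog medium needs: Beer-Lambert transmittance along a segment, and free-flight distance sampling with a pdf proportional to that transmittance.

```cpp
// Minimal sketch of a homogeneous fog medium (names are illustrative).
#include <cmath>
#include <random>

struct FogMedium {
    double sigma_t;   // extinction coefficient = absorption + scattering, per unit distance

    // Beer-Lambert transmittance over a segment of length t.
    double transmittance(double t) const {
        return std::exp(-sigma_t * t);
    }

    // Sample a free-flight distance with pdf sigma_t * exp(-sigma_t * t).
    // If the sampled distance exceeds the distance to the next surface,
    // the interaction happens at the surface instead of inside the medium.
    double sampleDistance(std::mt19937 &rng) const {
        std::uniform_real_distribution<double> u01(0.0, 1.0);
        return -std::log(1.0 - u01(rng)) / sigma_t;
    }
};
```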
Goals and Deliverables
Based on the code of Differentiable Transient Rendering linked below, implement a renderer for a
fog medium. You can refer to any renderer or code for participating media. For this project,
deliver a series of transient images generated with the Differentiable Transient Renderer.
When working on this project, you will have to figure out how to embed the participating media
into the given engine.
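One way to produce the series of transient images is to bin each path contribution by its arrival time, i.e. the total path length divided by the speed of light. The sketch below shows this binning for a single pixel; the names TransientFilm and splat are assumptions rather than the renderer's actual API, and refractive weighting of the path length is ignored.

```cpp
// Minimal sketch of binning path contributions into transient frames.
#include <vector>

struct TransientFilm {
    double t_min, t_max;          // temporal window of the transient video [s]
    std::vector<double> frames;   // one accumulator per time bin (single pixel shown)

    TransientFilm(double t0, double t1, int nBins)
        : t_min(t0), t_max(t1), frames(nBins, 0.0) {}

    // pathLength: total distance travelled by the light along the sampled path [m]
    void splat(double radiance, double pathLength) {
        const double c = 299792458.0;                 // speed of light [m/s]
        double t = pathLength / c;                    // arrival time of this path
        if (t < t_min || t >= t_max) return;          // outside the recorded window
        int bin = static_cast<int>((t - t_min) / (t_max - t_min) * frames.size());
        frames[bin] += radiance;                      // accumulate into that frame
    }
};
```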
      Resources
1. Differentiable Transient Rendering
      2. Code
      3. Mitsuba
2 Realize BDPT (use CUDA)
      Problem Description
Based on the code of assignments 5 and 6, implement your own bidirectional path tracer (BDPT).
Goals and Deliverables
Render the given scenes from assignments 5 and 6 using your own BDPT and compare the results
with those of the existing path tracer in your report. We encourage you to implement the BDPT in
CUDA to avoid hours-long rendering times; CUDA is easy to pick up and is essentially a library for
parallel computing and rendering.
Find a caustic scene, such as a lens or a glass ball, render it with both the BDPT and the path
tracer, compare the results, and explain the differences.
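The core new operation in a BDPT is connecting a vertex of the eye subpath to a vertex of the light subpath with a visibility test and a geometry term. A minimal sketch of that connection follows; the types and callbacks are placeholders rather than the assignment 5/6 API, radiance is kept scalar for brevity, and the multiple-importance-sampling weights over all connection strategies (Veach, 1997) are omitted.

```cpp
// Minimal sketch of one BDPT connection between an eye vertex and a light vertex.
#include <cmath>
#include <functional>

struct Vec3 { double x, y, z; };

static Vec3   sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double length(Vec3 v)      { return std::sqrt(dot(v, v)); }

struct PathVertex {
    Vec3   p;           // position
    Vec3   n;           // unit shading normal
    double throughput;  // accumulated BSDF * cos / pdf along the subpath (scalar for brevity)
};

// Contribution of connecting eye vertex e to light vertex l.
// evalBsdf(vertex, dirOut) returns the BSDF value at the vertex towards dirOut;
// visible(a, b) returns false if the segment a-b is occluded.
double connectVertices(const PathVertex &e, const PathVertex &l,
                       std::function<double(const PathVertex &, Vec3)> evalBsdf,
                       std::function<bool(Vec3, Vec3)> visible) {
    Vec3 d      = sub(l.p, e.p);
    double dist = length(d);
    Vec3 w      = {d.x / dist, d.y / dist, d.z / dist};   // unit direction e -> l

    if (!visible(e.p, l.p)) return 0.0;                   // occluded connection

    // Geometry term: both cosines over the squared distance.
    double G = std::abs(dot(e.n, w)) * std::abs(dot(l.n, w)) / (dist * dist);

    return e.throughput * evalBsdf(e, w)
         * G
         * evalBsdf(l, {-w.x, -w.y, -w.z}) * l.throughput;
}
```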
      Resources
M. Clark, "CUDA Pro Tip: Kepler Texture Objects Improve Performance and Flexibility",
NVIDIA Accelerated Computing, 2013. [Online]
T. Karras, "Thinking Parallel, Part III: Tree Construction on the GPU", NVIDIA Accelerated
Computing, 2012. [Online]
T. Karras, "Thinking Parallel, Part II: Tree Traversal on the GPU", NVIDIA Accelerated Computing, 2012. [Online]
E. Veach, "Robust Monte Carlo Methods for Light Transport Simulation", Ph.D. dissertation, Stanford
University, 1997.
      3 Realize Spectral Ray Tracing and Learn to Use "Nvidia
      OptiX"
      Problem Description
      The current implementation of the raytracer cannot model dispersion and chromatic aberrations
      because its light model is not wavelength-dependent. Currently, indices of refraction are constant
rather than different for each wavelength. You can implement your code based on assignment 6.
Nvidia OptiX is a high-level GPU-accelerated ray-casting API. If your computer supports
Nvidia RTX, we strongly recommend that you try it and build your code on this API instead of
on the assignment 6 code.
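A common way to make the index of refraction wavelength-dependent is Cauchy's two-term equation n(λ) = A + B/λ². A minimal sketch follows; the coefficients are approximate values for BK7 glass and are used only as an example.

```cpp
// Minimal sketch: wavelength-dependent index of refraction via Cauchy's equation.
#include <cmath>

// lambdaNm: wavelength in nanometres. Coefficients approximate BK7 glass.
double cauchyIor(double lambdaNm) {
    const double A = 1.5046;
    const double B = 4200.0;               // nm^2
    return A + B / (lambdaNm * lambdaNm);
}
// e.g. cauchyIor(400.0) ~ 1.531 (blue) vs cauchyIor(700.0) ~ 1.513 (red),
// which is the difference that produces dispersion through a prism.
```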
Goals and Deliverables
Implement spectral ray tracing by tracing rays of different wavelengths sampled using the human
eye's wavelength profile for each color (RGB). By modeling different indices of refraction for
those wavelengths in glass-like materials, we hope you will reproduce effects such as the dispersion
of light through a prism, the changing colors based on the viewing angle for a lens on a reflective
surface with a thin film (such as a DVD), as well as model the chromatic aberrations present in real
camera systems with lenses. Additionally, it would be best if you created wavelength-dependent
BSDFs and lighting. We hope to model lights of different temperatures.
      1. Prism scene rendering
2. Disk/bubbles scene rendering (add different environment maps; a potential source is the Light
Probe Library. Images from the light probe library are in HDR format, which suits spectral
ray tracing since they provide a more realistic spectral distribution for each scene pixel).
3. Correctly simulate the chromatic aberration of different lenses.
4. Compare rendered images with real photos we take of the objects (e.g., a disk).
5. Compare renderings under lights of different temperatures (a black-body sketch follows this
list).
6. Finally, we hope you deliver a synthesized image that harmoniously combines objects that
best illustrate the effectiveness of your spectral ray tracer (e.g., gemstones; suggestions
on this would be helpful!).
      7. Optional: Add fog/volumetric scattering so that rainbows can be seen.
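For the temperature-dependent lights (deliverable 5 above and task 6 below), one option is to model each light as an ideal black body and evaluate Planck's law at the sampled wavelength; only the relative spectral shape is needed to tint the light. A minimal sketch, with physical constants in SI units:

```cpp
// Minimal sketch: spectral radiance of an ideal black body (Planck's law).
#include <cmath>

// lambda in metres, T in kelvin; returns W / (sr * m^3).
// Only the relative shape across wavelengths matters for tinting a light.
double planck(double lambda, double T) {
    const double h = 6.62607015e-34;   // Planck constant [J s]
    const double c = 2.99792458e8;     // speed of light [m/s]
    const double k = 1.380649e-23;     // Boltzmann constant [J/K]
    return (2.0 * h * c * c) / std::pow(lambda, 5)
         / (std::exp(h * c / (lambda * k * T)) - 1.0);
}
// e.g. at 3000 K, planck(650e-9, 3000) > planck(450e-9, 3000),
// so a 3000 K light renders warmer than a 6500 K light.
```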
      Tasks:
1. Change lenstester to also include a wavelength argument that the user can set (mainly for
debugging purposes).
2. Refactor the code so that rays have a wavelength argument that can be passed in and checked,
and so that functions that previously returned Spectrums now return a single intensity value.
      3. Change raytrace_pixel to ask for multiple ray samples for each color channel, then combine
      those color channels
4. Change camera.generate_ray to take in a color-channel argument and sample that color
channel's wavelength distribution (Gaussian) to set the ray's wavelength.
      5. Change lens_camera's tracing through the lens to use the wavelength argument to change
      indices of refraction when tracing through the lens
6. Change sample_L of lights to have a wavelength-dependent intensity to simulate different
colors of lights (maybe initialize lights with a temperature argument and model them as ideal
black bodies to get the intensities for each color).
      7. Rewrite BSDFs of colored objects to return a wavelength-dependent magnitude instead of a
      constant spectrum argument.
8. Rewrite/write the glass BSDF to have wavelength-dependent indices of refraction (similar code
to lens_camera's tracing).
9. Write a bubble/thin-film interference BSDF that uses wavelength, thickness, and lighting to
determine whether constructive interference occurs (integer multiples of the wavelength); see
the sketch after this list.
10. Write new scene/.dae files (using Blender) or modify the parser to create a triangular prism
(you would want a small area light) and to create a disk + reflective surface + transparent coating.
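For the thin-film BSDF in task 9, a simple test compares the optical path difference 2·n·d·cos(θ_t) between the two reflected waves against the wavelength. For a soap bubble in air, the half-wave phase shift at the outer interface is usually included, which shifts constructive interference to half-integer orders; the sketch below assumes that setup, and the function name and tolerance are illustrative assumptions.

```cpp
// Minimal sketch: constructive thin-film interference test for a soap bubble in air.
#include <cmath>

// lambda and d in the same unit (e.g. nm); n is the film's index of refraction;
// thetaT is the refracted angle inside the film.
bool constructiveThinFilm(double lambda, double d, double n, double thetaT,
                          double tol = 0.05) {
    double opd = 2.0 * n * d * std::cos(thetaT);      // optical path difference
    double m   = opd / lambda - 0.5;                  // half-wave shift at the outer surface
    return std::abs(m - std::round(m)) < tol;         // close to an integer order?
}
```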
      Resources
      1. AN INTRODUCTION TO NVIDIA OPTIX
      2. Prisms and Rainbows: a Dispersion Model for Computer Graphics
      3. Iridescent Surface Rendering with Per-channel Rotation of Anisotropic Microfacet Distribution
      4. Rendering Iridescent Colors of Optical Disks
      5. Derive spectrum from RGB triple
      6. soap bubbles 1
      7. soap bubbles 2
      Other useful links: [1] refractive index [2] refractive indices [3] glassner [4] hyperphysics [5]
Morris, Nigel. "Capturing the Reflectance Model of Soap Bubbles." University of Toronto
      (2003).
4 Smooth Mesh Estimation from Depth Data using Non-Smooth Convex Optimization
      Problem Description
      Meshes are commonly used as 3D maps since they encode the topology of the scene while being
lightweight. Unfortunately, 3D meshes are mathematically difficult to handle directly because of
      their combinatorial and discrete nature. Therefore, most approaches generate 3D meshes of a scene
      after fusing depth data using volumetric or other representations. Nevertheless, volumetric fusion
remains computationally expensive in terms of both speed and memory. The main reference for
this project is this paper on Smooth Mesh Estimation from Depth, and you can use the
attached test data.
Goals and Deliverables
Your task is to implement the given paper. To simplify the task, you are allowed to use any
available package to help you with the optimization part, or you can adapt an existing piece
of code to help you complete this task.
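Whatever package you choose, the non-smooth part of the objective is typically handled through proximal operators. As a point of reference (not the paper's exact formulation), the sketch below implements the proximal operator of the ℓ1 norm, i.e. soft-thresholding, which is the basic building block of first-order solvers such as ISTA or ADMM.

```cpp
// Minimal sketch: proximal operator of t * ||.||_1 (soft-thresholding).
#include <algorithm>
#include <cmath>
#include <vector>

// Shrinks each component of v towards zero by t, setting small entries to zero.
std::vector<double> softThreshold(const std::vector<double> &v, double t) {
    std::vector<double> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
        out[i] = std::copysign(std::max(std::abs(v[i]) - t, 0.0), v[i]);
    return out;
}
```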
      Final Note
You have reached a milestone in Computer Graphics. The task left for you is to produce some
fancy results and reports! Computer Graphics is not only a science of producing graphical images
with the aid of a computer but also an art! Again, always be creative!