Computer Architecture
2024 Spring
Final Project Part 2

Overview
Tutorial
● Gem5 Introduction
● Environment Setup
Projects
● Part 1 (5%)
  ○ Write a C++ program to analyze the specification of the L1 data cache.
● Part 2 (5%)
  ○ Given the hardware specifications, try to get the best performance for a more complicated program.

Project 2
Description

In this project, we will use a computer system with a two-level cache. Your task is to write a ViT (Vision Transformer) in C++ and optimize it. You can see more details of the system specification on the next page.

System Specifications
● ISA: X86
● CPU: TimingSimpleCPU (no pipeline, CPU stalls on every memory request)
● Caches
  * L1 I cache and L1 D cache connect to the same L2 cache
● Memory size: 8192 MB
          I cache size  I cache associativity  D cache size  D cache associativity  Policy  Block size
L1 cache  16KB          8                      16KB          4                      LRU     **B
L2 cache  –             –                      1MB           16                     LRU     **B

ViT(Vision Transformer) – Transformer Overview
● A basic transformer block consists of
  ○ Layer Normalization
  ○ MultiHead Self-Attention (MHSA)
  ○ Feed Forward Network (FFN)
  ○ Residual connection (Add)
● You only need to focus on how to implement the functions in the red box
● If you only want to complete the project instead of understanding the full ViT algorithm, you can skip the sections marked in red

ViT(Vision Transformer) – Image Pre-processing
● Normalize, resize to (300,300,3), and center crop to (224,224,3)

ViT(Vision Transformer) – Patch Encoder
● In this project, we use Conv2D as the Patch Encoder, with kernel_size = (16,16), stride = (16,16), and output_channel = 768
● (224,224,3) -> (14,14,16*16*3) -> (196,768)

ViT(Vision Transformer) – Class Token
● Now we have 196 tokens, and each token has 768 features
● In order to record global information, we need to concatenate one learnable class token with the 196 tokens
● (196,768) -> (197,768)

ViT(Vision Transformer) – Position Embedding
● Add the learnable position information to the patch embedding
● (197,768) + position_embedding(197,768) -> (197,768)

ViT(Vision Transformer) – Layer Normalization
(T = # of tokens, C = embedded dimension)

● Normalize each token
● You need to normalize with the formula
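The formula itself was an image on the slide and did not survive extraction. Assuming the standard LayerNorm (per-token mean and variance, learnable scale gamma and shift beta, small epsilon — as in PyTorch's defaults), a minimal C++ sketch of the per-token normalization might look like this; the function name and signature are illustrative, not the course's required interface:

```cpp
#include <cassert>
#include <cmath>

// Per-token LayerNorm over a flat [T*C] row-major array (one row per token):
// y = (x - mean) / sqrt(var + eps) * gamma + beta, computed per row.
// gamma, beta, and eps are the standard LayerNorm parameters; their names
// here are assumptions, not the course's required interface.
void layernorm(const float* x, const float* gamma, const float* beta,
               float* out, int T, int C, float eps = 1e-5f) {
    for (int t = 0; t < T; ++t) {
        const float* row = x + t * C;
        float mean = 0.0f;
        for (int c = 0; c < C; ++c) mean += row[c];
        mean /= C;
        float var = 0.0f;
        for (int c = 0; c < C; ++c) {
            float d = row[c] - mean;
            var += d * d;
        }
        var /= C;
        float inv_std = 1.0f / std::sqrt(var + eps);
        for (int c = 0; c < C; ++c)
            out[t * C + c] = (row[c] - mean) * inv_std * gamma[c] + beta[c];
    }
}
```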
ViT(Vision Transformer) – MultiHead Self Attention (1)
● Wq, Wk, Wv ∈ R^(C×C)
● bq, bk, bv ∈ R^C
● Wo ∈ R^(C×C)
● bo ∈ R^C

[Figure: X → Input Linear Projection (Wq, Wk, Wv; bq, bk, bv) → split into heads → Attention → merge heads → Output Linear Projection (Wo; bo) → Y]
ViT(Vision Transformer) – MultiHead Self Attention (2)

(T = # of tokens, C = embedded dimension, H = hidden dimension)

Linear Projection and split into heads
● Linear Projection:
  Q = X Wq^T + bq
  K = X Wk^T + bk
  V = X Wv^T + bv
● Get Q, K, V ∈ R^(T×(NH*H)) after the input linear projection
● Split Q, K, V into Q1, Q2, ..., QNH, K1, K2, ..., KNH, V1, V2, ..., VNH ∈ R^(T×H)
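Each projection above is a plain matrix multiply plus a bias. A sketch of such a linear projection Y = X·W^T + b over flat row-major arrays (the function name and signature are assumptions, not the required matmul.cpp interface):

```cpp
#include <cassert>
#include <cmath>

// Linear projection Y = X * W^T + b, with X:[T,IN], W:[OUT,IN], b:[OUT],
// Y:[T,OUT], all flat row-major arrays. Multiplying by W transposed means
// each dot product walks a row of W contiguously.
void linear(const float* X, const float* W, const float* b,
            float* Y, int T, int IN, int OUT) {
    for (int t = 0; t < T; ++t) {
        for (int o = 0; o < OUT; ++o) {
            float acc = b ? b[o] : 0.0f;
            for (int i = 0; i < IN; ++i)
                acc += X[t * IN + i] * W[o * IN + i];
            Y[t * OUT + o] = acc;
        }
    }
}
```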
(NH = # of heads, C = H * NH)

ViT(Vision Transformer) – MultiHead Self Attention (2)

● For each head i, compute Si = Qi Ki^T / sqrt(H) ∈ R^(T×T)
● Pi = Softmax(Si) ∈ R^(T×T), where Softmax is a row-wise function
● Oi = Pi Vi ∈ R^(T×H)

[Figure: Qi, Ki → matrix multiplication and scale → Si → softmax → Pi → matrix multiplication with Vi → Oi]
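The three steps above for one head (scaled scores, row-wise softmax, weighted sum of Vi) can be sketched as follows. The algorithm matches the slide; the function name and signature are assumptions, not the required attention.cpp interface:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// One attention head: S = Q*K^T / sqrt(H), P = row-wise softmax(S), O = P*V.
// Q, K, V, O are flat [T*H] row-major arrays. Subtracting the row max before
// exponentiating is a standard numerical-stability trick.
void attention_head(const float* Q, const float* K, const float* V,
                    float* O, int T, int H) {
    const float scale = 1.0f / std::sqrt((float)H);
    std::vector<float> S_row(T);
    for (int i = 0; i < T; ++i) {
        // One row of S = Q K^T / sqrt(H)
        float row_max = -1e30f;
        for (int j = 0; j < T; ++j) {
            float s = 0.0f;
            for (int h = 0; h < H; ++h) s += Q[i * H + h] * K[j * H + h];
            S_row[j] = s * scale;
            row_max = std::max(row_max, S_row[j]);
        }
        // Row-wise softmax
        float denom = 0.0f;
        for (int j = 0; j < T; ++j) {
            S_row[j] = std::exp(S_row[j] - row_max);
            denom += S_row[j];
        }
        // O_i = sum_j P_ij * V_j
        for (int h = 0; h < H; ++h) {
            float acc = 0.0f;
            for (int j = 0; j < T; ++j)
                acc += (S_row[j] / denom) * V[j * H + h];
            O[i * H + h] = acc;
        }
    }
}
```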
ViT(Vision Transformer) – MultiHead Self Attention (3)

(T = # of tokens, C = embedded dimension, H = hidden dimension, NH = # of heads)

merge heads and Linear Projection
● Oi ∈ R^(T×H), O = [O1, O2, ..., ONH]
● Linear Projection: output = O Wo^T + bo
ViT(Vision Transformer) – Feed Forward Network

(T = # of tokens, C = embedded dimension, OC = hidden dimension)

● Input Linear Projection: (T,C) -> (T,OC)
● GeLU
● Output Linear Projection: (T,OC) -> (T,C)

ViT(Vision Transformer) – GeLU
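The GeLU slide's formula was an image and is missing here. Assuming the exact GeLU, gelu(x) = x/2 · (1 + erf(x/√2)) — the course may instead expect the common tanh approximation — an elementwise sketch:

```cpp
#include <cassert>
#include <cmath>

// Elementwise GeLU over a flat array: gelu(x) = 0.5 * x * (1 + erf(x/sqrt(2))).
// This is the exact form; the tanh approximation is another common choice,
// so check which one the testbench expects.
void gelu(const float* x, float* out, int n) {
    const float inv_sqrt2 = 0.70710678f;  // 1 / sqrt(2)
    for (int i = 0; i < n; ++i)
        out[i] = 0.5f * x[i] * (1.0f + std::erf(x[i] * inv_sqrt2));
}
```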
ViT(Vision Transformer) – Classifier
● Contains a Linear layer to transform 768 features into 200 classes
  ○ (197, 768) -> (197, 200)
● Only the first token (class token) is used for the prediction
  ○ (197, 200) -> (1, 200)

ViT(Vision Transformer) – Work Flow
[Figure: work flow — Load_weight; Pre-processing → Embedder → Transformer x12 → Classifier → Argmax → "Black footed Albatross"; m5_dump_init / m5_dump_stat mark the measured region; each transformer block: layernorm → MHSA → residual → layernorm → FFN → residual, where MHSA = matmul → attention → matmul and FFN = matmul → gelu → matmul]

Testbench commands:
$ make gelu_tb
$ make matmul_tb
$ make layernorm_tb
$ make MHSA_tb
$ make feedforward_tb
$ make transformer_tb
$ run_all.sh
ViT(Vision Transformer) – Shape of array

● layernorm input/output [T*C]: token 1, token 2, ……, token T (C values per token)
● MHSA input/output/o [T*C]
● MHSA qkv [T*3*C]: q token 1, k token 1, v token 1, ……, q token T, k token T, v token T (C values each)
● feedforward input/output [T*C]
● feedforward gelu [T*OC]: token 1, token 2, ……, token T (OC values per token)

Common problem
● Segmentation fault
  ○ Ensure that you are not accessing a nonexistent memory address
  ○ Enter the command $ ulimit -s unlimited

All you have to do is
● Download TA's Gem5 image
  ○ docker pull yenzu/ca_final_part2:2024
● Write the C++ in the ./layer folder with an understanding of the algorithm
  ○ make clean
  ○ make <layer>_tb
  ○ ./<layer>_tb

All you have to do is
● Ensure the ViT successfully classifies the bird
  ○ python3 embedder.py --image_path images/Black_Footed_Albatross_0001_796111.jpg --embedder_path weights/embedder.pth --output_path embedded_image.bin
  ○ g++ -static main.cpp layer/*.cpp -o process
  ○ ./process
  ○ python3 run_model.py --input_path result.bin --output_path torch_pred.bin --model_path weights/model.pth
  ○ python3 classifier.py --prediction_path torch_pred.bin --classifier_path weights/classifier.pth
  ○ After running the above commands, you will get the top-5 prediction.
● Evaluate the performance of part of the ViT, namely layernorm + MHSA + residual
  ○ The simulation needs about 3.5 hours to finish
  ○ Check stat.txt

Grading Policy
● (50%) Verification
  ○ (10%) matmul_tb
  ○ (10%) layernorm_tb
  ○ (10%) gelu_tb
  ○ (10%) MHSA_tb
  ○ (10%) transformer_tb
● (50%) Performance
  ○ max(sigmoid((27.74 - student_latency) / student_latency) * 70, 50)
● You will get 0 performance points if your design is not verified.

Submission
● Please submit your code on E3 before 23:59 on June 20, 2024.
● Late submission is not allowed.
● Plagiarism is forbidden; otherwise you will get 0 points!!!
● Format
  ○ Code: please put your code in a folder named FP2_team<ID>_code and compress it into a zip file.

FP2_team<ID>_code folder
● You should attach the following documents
  ○ matmul.cpp
  ○ layernorm.cpp
  ○ gelu.cpp
  ○ attention.cpp
  ○ residual.cpp
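Since the performance half of the grade depends on how the code interacts with the two-level cache on a CPU that stalls on every memory request, loop tiling (blocking) of the matrix multiplies is one standard direction to explore. A sketch of a blocked Y = X·W^T kernel follows; the tile size BLK is a tuning knob, and whether this beats the naive loop on the given cache configuration should be measured in Gem5, not assumed:

```cpp
#include <cassert>
#include <cmath>
#include <cstring>

// Blocked (tiled) Y = X * W^T with X:[T,C], W:[OUT,C], Y:[T,OUT], row-major.
// Tiling the reduction dimension reuses a BLK-wide strip of each X row and
// W row while it is still resident in cache. BLK is illustrative.
void matmul_blocked(const float* X, const float* W, float* Y,
                    int T, int C, int OUT, int BLK = 64) {
    std::memset(Y, 0, sizeof(float) * T * OUT);
    for (int c0 = 0; c0 < C; c0 += BLK) {
        int c1 = (c0 + BLK < C) ? c0 + BLK : C;
        for (int t = 0; t < T; ++t)
            for (int o = 0; o < OUT; ++o) {
                float acc = 0.0f;
                for (int c = c0; c < c1; ++c)
                    acc += X[t * C + c] * W[o * C + c];
                Y[t * OUT + o] += acc;
            }
    }
}
```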
