Device Model and Update Behavior #553
Unanswered

ZhenyuWu323 asked this question in Q&A
Replies: 1 comment
`RPUConfig` defines which device model and which update behavior are used, and we can apply different update algorithms such as "mixed precision" and "Tiki-taka" through the Compounds that are used to define the update behavior of the `RPUConfig` (Table IX of the tutorial paper). However, it also seems that the in-memory update behavior is defined in the `analog_tile.update` method called by the `AnalogOptimizer`. Where are those in-memory training algorithms like "mixed precision" actually implemented (it seems `compounds.py` does not implement the algorithm itself), and how are they called during in-memory training? What is the difference between the two? If I want to customize the update behavior, should I derive a new analog tile or work on the compounds?