Replies: 1 comment
Hi @afmsaif. However, if you are considering "truly" analog in-memory training, that is, having some of the weight updates done with specialized tile operations in memory (such as using stochastic pulse trains to implement in-memory gradient updates), it is not controlled by the …
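For context, here is a minimal sketch (not from the original reply) of how that kind of in-memory update is typically configured in aihwkit: the pulsed update behaviour lives in the tile's `rpu_config` (here assumed to be a `SingleRPUConfig` with a `ConstantStepDevice`), while the Python-side optimizer only triggers the tile update.

```python
# Sketch only: in aihwkit the in-memory update behaviour is configured on the
# tile via rpu_config; the Python optimizer just triggers the tile update.
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

# Device model whose gradient update is done in memory with pulse trains.
rpu_config = SingleRPUConfig(device=ConstantStepDevice())

model = AnalogLinear(4, 2, rpu_config=rpu_config)

opt = AnalogSGD(model.parameters(), lr=0.1)
opt.regroup_param_groups(model)  # prepare parameter groups, as in the aihwkit examples

x, target = torch.rand(8, 4), torch.rand(8, 2)
loss = torch.nn.functional.mse_loss(model(x), target)
loss.backward()
opt.step()  # the weight update itself happens inside the analog tile
```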
Let's say I have a custom optimizer named Adabelief. Now I want to use this optimizer to update the weights and biases of analog layers. How can I do that? One option I found is to use AnalogOptimizer, e.g. optimizer = AnalogOptimizer(Adabelief, model.parameters(), lr=0.5). The documentation says: "This class wraps an existing Optimizer, customizing the optimization step for triggering the analog update needed for analog tiles." Is this the correct approach for using a custom optimizer in analog training? And how does the AnalogOptimizer class actually work?
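In case it helps, here is a hedged sketch of that approach, assuming AdaBelief is an ordinary torch.optim.Optimizer subclass (for example the AdaBelief class from the third-party adabelief_pytorch package). AnalogOptimizer takes the optimizer class as its first argument, forwards the remaining arguments to it, and returns an optimizer that behaves like the wrapped one for digital parameters while also triggering the analog tile update in step().

```python
# Sketch only: wrapping a custom digital optimizer with aihwkit's AnalogOptimizer.
# AdaBelief here is assumed to come from the third-party adabelief_pytorch
# package; any torch.optim.Optimizer subclass can be passed the same way.
import torch
from adabelief_pytorch import AdaBelief  # assumed third-party optimizer

from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogOptimizer

model = AnalogLinear(4, 2)  # model containing an analog layer

# First argument is the optimizer *class*; remaining args/kwargs are forwarded
# to it. The result acts like an AdaBelief optimizer with the analog tile
# update hooked into its step().
optimizer = AnalogOptimizer(AdaBelief, model.parameters(), lr=0.5)

x, target = torch.rand(8, 4), torch.rand(8, 2)
for _ in range(5):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), target)
    loss.backward()
    optimizer.step()  # digital AdaBelief update + analog tile update
```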