This repository has been archived by the owner on Jul 1, 2024. It is now read-only.
🚀 Feature
Right now we only support fine-tuning by freezing the trunk weights or by training all weights together. Discriminative learning rates let us apply different learning rates to different parts of the model, which usually leads to better performance.
Motivation
https://arxiv.org/pdf/1801.06146.pdf introduced discriminative fine-tuning in NLP. Since then it has been found to be useful in computer vision as well.
Pitch
This could be implemented in either FineTuningTask or ClassyModel. I'd rather keep ClassyModel as simple as possible and move this type of logic to the task level.
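As a rough illustration of what the task-level logic might compute, here is a minimal sketch of discriminative learning rates. The helper name `discriminative_lrs` and its signature are hypothetical (not part of Classy Vision); the decay factor of 2.6 between adjacent layer groups follows the heuristic from the ULMFiT paper linked above.

```python
def discriminative_lrs(base_lr, num_groups, decay=2.6):
    """Compute one learning rate per layer group (hypothetical helper).

    The last group (closest to the output head) gets the full base_lr;
    each earlier group's rate is divided by `decay` once more, following
    the lr_{l-1} = lr_l / 2.6 heuristic from ULMFiT.
    """
    return [base_lr / (decay ** (num_groups - 1 - i)) for i in range(num_groups)]


# Example: three layer groups with a base learning rate of 0.01.
# The trunk's earliest group trains much more slowly than the head.
lrs = discriminative_lrs(0.01, 3)
```

In PyTorch these per-group rates would typically be wired up via the optimizer's parameter groups (a distinct `lr` per group), which is one reason keeping this logic in the task, where the optimizer is configured, seems natural.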
Alternatives
N/A
Additional context
N/A