-
Hi @IannoIITR, in your approach the grouped convolution would essentially have to be handled as separate convolutions, e.g. one per group. So while problematic for a large number of groups, for a small number of groups it could be reasonable. Handling conv groups could be a good feature to add to the toolkit in the future.
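(Not from the thread, just to make the idea concrete: a grouped `nn.Conv2d` can be unrolled into `groups` separate `groups=1` convolutions whose outputs are concatenated, and each of those could then be converted to analog individually. The helper names below are hypothetical and the snippet is plain PyTorch, purely for illustration.)

```python
import torch
from torch import nn


def split_grouped_conv(conv: nn.Conv2d) -> nn.ModuleList:
    """Hypothetical helper: unroll a grouped Conv2d into per-group convolutions."""
    per_in = conv.in_channels // conv.groups
    per_out = conv.out_channels // conv.groups
    branches = nn.ModuleList()
    for g in range(conv.groups):
        branch = nn.Conv2d(
            per_in, per_out, conv.kernel_size,
            stride=conv.stride, padding=conv.padding,
            dilation=conv.dilation, bias=conv.bias is not None,
        )
        # Copy the slice of weights (and bias) belonging to this group.
        branch.weight.data.copy_(conv.weight[g * per_out:(g + 1) * per_out])
        if conv.bias is not None:
            branch.bias.data.copy_(conv.bias[g * per_out:(g + 1) * per_out])
        branches.append(branch)
    return branches


def grouped_forward(branches: nn.ModuleList, x: torch.Tensor) -> torch.Tensor:
    """Run each branch on its slice of the input channels and concatenate."""
    chunks = torch.chunk(x, len(branches), dim=1)
    return torch.cat([b(c) for b, c in zip(branches, chunks)], dim=1)
```

The cost of this grows with the number of groups (a depthwise conv with `groups=in_channels` would become one tiny convolution per channel), which is presumably why it is only reasonable for a small number of groups.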
-
Hi @maljoras
Here the depthwise conv has `in_channels` as the number of groups, but the pointwise conv is a normal convolution with kernel size 1 and groups=1. It was working for other, albeit simpler, models; I wanted to verify whether this is a valid approach to testing 'hybrid' models, where part of the layers are analog and part are in full precision.
-
I see, yes, it should be possible to convert single layers in the way you suggest. It might be worthwhile to add an option for that in the conversion utilities. Accuracy might drop if you just convert the layer directly: hardware-aware training might be needed to make the model more robust against analog noise and non-idealities. See e.g. this paper, where the direct mapping approach (Table 2) impacts accuracy quite severely for some DNNs (especially CNNs); some of the accuracy drop can be recovered by hardware-aware training (Table 3).
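(For context, a minimal sketch of what hardware-aware training with aihwkit might look like, assuming `convert_to_analog` and `AnalogSGD` from the toolkit; `pretrained_model` and `train_loader` are placeholders, and in practice the RPU configuration would carry the noise and non-ideality settings of the target hardware.)

```python
from torch import nn

from aihwkit.nn.conversion import convert_to_analog
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import InferenceRPUConfig

# Convert the (supported) layers of a pre-trained model to analog with a
# realistic RPU configuration, then fine-tune so the weights adapt to the
# analog noise and non-idealities ("hardware-aware training").
rpu_config = InferenceRPUConfig()  # noise/non-ideality settings go here
analog_model = convert_to_analog(pretrained_model, rpu_config)  # placeholder model

optimizer = AnalogSGD(analog_model.parameters(), lr=0.01)
optimizer.regroup_param_groups(analog_model)
criterion = nn.CrossEntropyLoss()

for inputs, targets in train_loader:  # placeholder data loader
    optimizer.zero_grad()
    loss = criterion(analog_model(inputs), targets)
    loss.backward()
    optimizer.step()
```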
-
I had a model with grouped convolutions (which are not supported in aihwkit), so I tried converting individual layers from the model's module dict to analog.
For example:
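(The original snippet is not preserved in this thread; the following is only a sketch of the kind of single-layer replacement described, assuming aihwkit's `AnalogConv2d` and `set_weights` API; the depthwise-separable block and its layer indices are hypothetical.)

```python
from torch import nn

from aihwkit.nn import AnalogConv2d
from aihwkit.simulator.configs import InferenceRPUConfig

# Hypothetical depthwise-separable block, for illustration only.
model = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1, groups=32),  # depthwise: stays digital
    nn.Conv2d(32, 64, kernel_size=1),                         # pointwise: convert to analog
)

rpu_config = InferenceRPUConfig()

# Build an analog layer with the same hyperparameters as the pointwise conv ...
digital = model[1]
analog = AnalogConv2d(
    digital.in_channels,
    digital.out_channels,
    kernel_size=digital.kernel_size,
    stride=digital.stride,
    padding=digital.padding,
    bias=digital.bias is not None,
    rpu_config=rpu_config,
)
# ... copy the pre-trained weights onto the analog tiles ...
analog.set_weights(digital.weight, digital.bias)
# ... and swap it into the model in place of the floating-point layer.
model[1] = analog
```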
So basically, the existing floating-point layer in the model is replaced with an equivalent one converted to analog.
I wanted to know whether this is a valid approach or whether there is some problem with it, since it works for some models but in others there is a large drop in accuracy (converting a single layer drops the accuracy of the pre-trained model to 0, even when the RPU config is near ideal: 32-bit resolution and no noise).
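(For reference, one way such a near-ideal configuration could be set up with aihwkit is sketched below; the `forward.is_perfect` shortcut is an assumption, and the exact fields depend on the toolkit version.)

```python
from aihwkit.simulator.configs import InferenceRPUConfig

# "Near-ideal" debugging configuration: disable the analog non-idealities
# in the forward pass so the converted layer should closely match the
# original floating-point layer.
rpu_config = InferenceRPUConfig()
rpu_config.forward.is_perfect = True  # noise-free, full-resolution forward pass
```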