Use of FW CRF for post-processing benchmarking #9
Hi,
First of all, thanks for the impressive work.
We are experimenting with a novel post-processing method (aiming at CVPR) and would like to benchmark it against Frank-Wolfe dense CRFs (FWCRF). To that end, we are not sure whether we can use FWCRF directly as is (following Section 5.2) for inference on PASCAL VOC (as defined below, with alpha, beta, and gamma adjusted according to Section E.1), or whether we first need to train it on the training data of the respective dataset.
Many thanks in advance for getting back to us.
Best wishes,
Lukas

Hi Lukas. Thanks for your interest in our work! If your method requires training, then it would be fair to train FWCRF as well; but if your method is a heuristic, then you can use the default values specified in the code, which give decent results on PASCAL VOC. Please do not hesitate to let me know if you need further information.

Thanks for getting back!

Hi Lukas. That is correct: using the Potts model already improves over DeepLabv3 and DeepLabv3+. Please note, though, that the parameters of the Potts model were set according to Krähenbühl and Koltun (α = 80, β = 13, γ = 3; see Appendix E.1), but β should be scaled depending on how the inputs are normalized. For example, in DeepLabv3+ the images are not in the range [0, 255] but [-1, 1] (to ensure compatibility with the TensorFlow pre-trained weights released by the original authors), so β should be scaled accordingly: β' = β · 2/255. Also note that the backbone of DeepLabv3 is ResNet101, whereas that of DeepLabv3+ is Xception65. If you have any issues in training FWCRF, I would be happy to help.
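To make the β rescaling concrete, here is a minimal Python sketch (the helper names `scale_beta` and `bilateral_weight` are illustrative, not part of the FWCRF codebase). In Krähenbühl and Koltun's appearance kernel, exp(-|p_i - p_j|²/(2α²) - |I_i - I_j|²/(2β²)), intensity differences shrink by 2/255 when images are linearly remapped from [0, 255] to [-1, 1], so β must shrink by the same factor for the kernel values to stay unchanged:

```python
import math

def scale_beta(beta, old_range=(0.0, 255.0), new_range=(-1.0, 1.0)):
    """Rescale the bilateral bandwidth beta when image intensities are
    linearly remapped from old_range to new_range."""
    old_width = old_range[1] - old_range[0]
    new_width = new_range[1] - new_range[0]
    return beta * new_width / old_width

def bilateral_weight(pos_i, pos_j, rgb_i, rgb_j, alpha, beta):
    """Appearance kernel of the dense CRF:
    exp(-|p_i - p_j|^2 / (2*alpha^2) - |I_i - I_j|^2 / (2*beta^2))."""
    d_pos = sum((a - b) ** 2 for a, b in zip(pos_i, pos_j))
    d_rgb = sum((a - b) ** 2 for a, b in zip(rgb_i, rgb_j))
    return math.exp(-d_pos / (2 * alpha ** 2) - d_rgb / (2 * beta ** 2))

# beta = 13 was tuned for [0, 255] inputs; for [-1, 1] inputs use:
beta_scaled = scale_beta(13.0)  # 13 * 2 / 255 ≈ 0.102
```

With this scaling, the kernel computed on [-1, 1] images with `beta_scaled` matches the kernel computed on the original [0, 255] images with β = 13, since both the intensity differences and β shrink by the same factor.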