Do you have a plan to add SAM as a visual encoder? #10
Comments
We absolutely can; just to confirm, is this the model you'd want us to try adding: https://huggingface.co/timm/samvit_base_patch16.sa1b? CC @ashwin-balakrishna96 to add to our internal run list!
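For reference, a minimal sketch of loading that timm checkpoint and inspecting its output (assuming a recent timm release; this is plain timm usage, not Prismatic code):

```python
import timm
import torch

# Load the SAM ViT-B image encoder from timm; num_classes=0 drops any classifier head
model = timm.create_model("samvit_base_patch16.sa1b", pretrained=True, num_classes=0).eval()

# This checkpoint is trained at high resolution (1024x1024 inputs)
cfg = timm.data.resolve_model_data_config(model)

with torch.no_grad():
    x = torch.randn(1, *cfg["input_size"])
    feats = model.forward_features(x)  # spatial patch features from the encoder
print(feats.shape)
```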
Hello @siddk @ashwin-balakrishna96,
Yes! Please try SAM-base!
Best,
I guess the training pipeline is the same as for DINOv2.
Please let me know if you find anything interesting! Best,
You can just start with this:
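As a rough illustration of such a starting point, the SAM encoder could be wrapped behind the same kind of interface as the codebase's DINOv2 backbone. All class and method names below are hypothetical, not the actual Prismatic API:

```python
import timm
import torch
from torch import nn


class SAMViTBackbone(nn.Module):
    """Illustrative SAM vision backbone, mirroring a timm-based DINOv2 wrapper."""

    def __init__(self, model_id: str = "samvit_base_patch16.sa1b") -> None:
        super().__init__()
        self.featurizer = timm.create_model(model_id, pretrained=True, num_classes=0)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # SAM's encoder (after its neck) typically returns a (B, C, H, W) feature map;
        # flatten it to a (B, num_patches, C) token sequence for the LLM projector.
        feats = self.featurizer.forward_features(pixel_values)
        if feats.ndim == 4:
            feats = feats.flatten(2).transpose(1, 2)
        return feats
```

Beyond the wrapper, the new backbone would also need to be registered wherever the codebase maps config strings to vision encoders, and to expose its image transform; since the training pipeline matches the DINOv2 one, little else should need to change.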
@StarCycle thanks a bunch for the suggestion. We can try integrating the SAM baseline in a week or so, but if you have cycles and would be interested in opening up a PR to integrate it in the meanwhile (especially because it seems like you've already been thinking about how the code should look), we would also be very happy to review it and integrate it into Prismatic :)
SAM can also be used together with SigLIP or CLIP.
For example, Vary uses SAM+CLIP, and DeepSeek-VL uses SigLIP+SAM.
Would you like to try them with this codebase?
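A hedged sketch of that kind of fusion, in the spirit of Vary and DeepSeek-VL: run both encoders on appropriately preprocessed views of the same image and concatenate their patch features channel-wise. The model IDs and the pooling step are illustrative assumptions, not how either paper implements it:

```python
import timm
import torch
from torch import nn
from torch.nn import functional as F


class FusedSigLIPSAMBackbone(nn.Module):
    """Illustrative dual-encoder backbone: SigLIP tokens fused with pooled SAM features."""

    def __init__(self) -> None:
        super().__init__()
        self.siglip = timm.create_model("vit_so400m_patch14_siglip_224", pretrained=True, num_classes=0)
        self.sam = timm.create_model("samvit_base_patch16.sa1b", pretrained=True, num_classes=0)

    def forward(self, siglip_pixels: torch.Tensor, sam_pixels: torch.Tensor) -> torch.Tensor:
        siglip_feats = self.siglip.forward_features(siglip_pixels)  # (B, 256, D) at 224px, patch 14
        sam_feats = self.sam.forward_features(sam_pixels)           # typically (B, C, 64, 64) at 1024px
        # The two patch grids differ, so pool SAM's map down to SigLIP's 16x16 grid
        # before flattening, then fuse along the channel dimension.
        sam_feats = F.adaptive_avg_pool2d(sam_feats, (16, 16)).flatten(2).transpose(1, 2)
        return torch.cat([siglip_feats, sam_feats], dim=-1)        # (B, 256, D + C)
```

Whether to pool SAM down (as here) or instead run the other encoder at higher resolution is a design choice; Vary and DeepSeek-VL each handle the resolution mismatch in their own way.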