This repository was archived by the owner on Mar 11, 2025. It is now read-only.
Dear Authors,
Thank you for your exceptional contributions to the field, particularly your work on syn-rep-learn.
In your "StableRep" paper, specifically the scaling effects of linear probing shown in Figure 6, the reported trends do not seem to hold for ImageNet zero-shot classification using the provided CLIP-based checkpoints:
CLIP shows better accuracy than StableRep++ at larger pretraining sample counts.
Below, I've included the code I used to generate these results, adapted from your "StableRep" and "Scaling" repositories.
The command and its corresponding output are as follows:
stablerep.zip
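For context, here is a minimal, self-contained sketch of the zero-shot classification logic I followed (the attached zip contains the actual evaluation script; the function and variable names below are illustrative, and the random tensors stand in for real encoder outputs from the CLIP / StableRep++ checkpoints):

```python
import torch
import torch.nn.functional as F

def zero_shot_classify(image_feats: torch.Tensor,
                       class_text_feats: torch.Tensor) -> torch.Tensor:
    """Assign each image to the class whose text embedding is most similar.

    image_feats:      (N, D) image embeddings from the vision encoder
    class_text_feats: (C, D) text embeddings, one per class prompt
    returns:          (N,)   predicted class indices
    """
    # L2-normalize so the dot product is cosine similarity
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(class_text_feats, dim=-1)
    logits = img @ txt.t()          # (N, C) similarity matrix
    return logits.argmax(dim=-1)    # highest-similarity class per image

# Toy demo: random features standing in for real checkpoint embeddings
torch.manual_seed(0)
img = torch.randn(4, 512)     # 4 "images", 512-d embeddings
txt = torch.randn(1000, 512)  # 1000 ImageNet class prompts
preds = zero_shot_classify(img, txt)
```

Accuracy is then the fraction of `preds` matching the ground-truth labels; my evaluation applies this over the full ImageNet validation set with the usual prompt templates.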
Given these observations, I am curious whether there might be an oversight on my part, or whether this reflects the models' actual behavior.
Could you offer any insight into this discrepancy?
Best regards,