Fix unreachable code and resource leak in hf.py #899
Open

Chessing234 wants to merge 1 commit into state-spaces:main from

Conversation
In `load_state_dict_hf`, a premature `return` on the `torch.load` line made the subsequent dtype conversion and device mapping code unreachable. This meant that when loading a model with a non-fp32 dtype, the state dict was never converted to the requested dtype or moved to the target device, silently returning fp32 weights on CPU instead. Fix by assigning the `torch.load` result to `state_dict` and returning after the dtype/device conversion.

Also fix `load_config_hf` to use a `with` statement for the file handle instead of passing an unclosed `open()` directly to `json.load`.

Signed-off-by: Taksh <taxkothari@gmail.com>
Signed-off-by: Taksh <takshkothari09@gmail.com>
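The shape of the fix can be sketched as below. This is an illustration, not the actual hf.py code: the real functions resolve the checkpoint through the Hugging Face hub, so the plain file-path parameters here (`weights_path`, `config_path`) and the `weights_only=True` flag are simplifications and additions for the sketch.

```python
import json

import torch


def load_config_hf(config_path):
    # The with-statement closes the handle even if json.load raises,
    # instead of leaving cleanup to the garbage collector
    with open(config_path) as f:
        return json.load(f)


def load_state_dict_hf(weights_path, device=None, dtype=None):
    # Assign rather than return here -- an early `return torch.load(...)`
    # would make everything below unreachable (the original bug)
    state_dict = torch.load(weights_path, map_location="cpu", weights_only=True)
    if dtype is not None:
        state_dict = {k: v.to(dtype=dtype) for k, v in state_dict.items()}
    if device is not None:
        state_dict = {k: v.to(device=device) for k, v in state_dict.items()}
    return state_dict
```

With this structure, `load_state_dict_hf(path, dtype=torch.float16)` actually returns fp16 tensors, whereas the buggy version returned whatever dtype was stored in the checkpoint.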
Summary
`load_state_dict_hf`: The function had a premature `return` on the `torch.load(...)` line (line 18), which made the subsequent dtype conversion and device mapping code (lines 20-23) completely unreachable. This meant that when loading a model with a non-fp32 dtype, the state dict was never converted to the requested dtype or moved to the target device -- it silently returned fp32 weights on CPU instead. Fixed by assigning the `torch.load` result to `state_dict` and returning after conversion.

`load_config_hf`: The file handle from `open()` was passed directly to `json.load()` without being closed. Changed to use a `with` statement to ensure proper cleanup.

Test plan

- `load_state_dict_hf` correctly converts dtype when `dtype` is not `torch.float32`
- `load_state_dict_hf` correctly maps the state dict to the specified device
- `load_config_hf` still correctly loads config files
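The `load_config_hf` part of the test plan can be made mechanical: on CPython, a file handle that is never closed emits a `ResourceWarning` when it is finalized, so the leak is directly observable. A minimal sketch, where the two loaders are stand-ins mirroring the before/after behavior rather than the actual hf.py code:

```python
import gc
import json
import tempfile
import warnings


def load_config_leaky(path):
    # Before the fix: the handle is closed only when finalized
    return json.load(open(path))


def load_config_fixed(path):
    # After the fix: the handle is closed deterministically on block exit
    with open(path) as f:
        return json.load(f)


with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump({"d_model": 16}, tmp)

# The leaky loader should trigger a ResourceWarning when the abandoned
# handle is finalized
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ResourceWarning)
    load_config_leaky(tmp.name)
    gc.collect()  # for non-refcounting interpreters; CPython finalizes earlier
leaked = any(issubclass(w.category, ResourceWarning) for w in caught)

# The fixed loader should produce no such warning
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always", ResourceWarning)
    load_config_fixed(tmp.name)
    gc.collect()
clean = not any(issubclass(w.category, ResourceWarning) for w in caught)
```

Running the checks in a suite with `-W error::ResourceWarning` would turn any reintroduced leak into a hard test failure.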