
cuda out of memory even batch_size = 2 #2


Description

@Light--

Hi, I modified the network to accept 112x112 inputs and adjusted the FC layer accordingly, but I always run out of GPU memory shortly after training starts.

The changes I made are in models.py:

class _Decoder(nn.Module):
    def __init__(self, output_size):
        super(_Decoder, self).__init__()
        self.layers = nn.Sequential(
            # original layer for 32x32 CIFAR-10 inputs:
            # nn.Linear(128 * 8 * 8, 512),
            # adjusted for 112x112 inputs:
            nn.Linear(8 * 112 * 112, 512),
            nn.BatchNorm1d(512),
            nn.ReLU(),
            nn.Linear(512, output_size)
        )

Because the input is 112x112 rather than the 32x32 CIFAR-10 images, I changed the FC layer shape to avoid a size-mismatch error. But even with batch_size set to 2, it always reports a CUDA out-of-memory error:

RuntimeError: CUDA out of memory. Tried to allocate 196.00 MiB (GPU 0; 11.91 GiB total capacity; 11.26 GiB already allocated; 47.06 MiB free; 50.22 MiB cached)

My GPU is a Titan X with about 12 GB of memory.
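For reference, one way to avoid hand-computing that flattened size is to infer it from the encoder with a dummy forward pass. This is only a minimal sketch: the stand-in encoder below (and its channel counts) is an assumption, not the actual encoder from models.py.

import torch
import torch.nn as nn

# Stand-in encoder purely for illustration; substitute the real
# encoder from models.py here. Channel counts are assumptions.
encoder = nn.Sequential(
    nn.Conv2d(3, 64, 3, stride=2, padding=1),
    nn.ReLU(),
    nn.Conv2d(64, 128, 3, stride=2, padding=1),
    nn.ReLU(),
)

with torch.no_grad():
    # Dummy batch at the new 112x112 resolution.
    dummy = torch.zeros(1, 3, 112, 112)
    flat_size = encoder(dummy).flatten(1).shape[1]

print(flat_size)  # use this as the input size of the first nn.Linear in _Decoder

That way the first Linear layer always matches whatever spatial size the encoder actually produces, rather than relying on a hand-derived constant.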

Moreover,
I set os.environ["CUDA_VISIBLE_DEVICES"] = '0,1' to use two GPUs at the same time, but it always uses only one GPU.
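Setting CUDA_VISIBLE_DEVICES only controls which GPUs the process can see; it does not split the work across them by itself. A minimal sketch of the usual approach with nn.DataParallel, where `model` is just a placeholder for whatever network the training script builds:

import os
import torch
import torch.nn as nn

# Must be set before any CUDA call so both GPUs are visible to the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"

model = nn.Linear(10, 10)  # placeholder; use the actual network here
if torch.cuda.device_count() > 1:
    # Replicates the model on both GPUs and splits each batch between them.
    model = nn.DataParallel(model, device_ids=[0, 1])
model = model.cuda()

Note that DataParallel splits the batch across GPUs, so with batch_size = 2 each GPU only gets one sample, which nn.BatchNorm1d does not accept in training mode.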
