I ran the following training command on 8 V100 GPUs:
torchrun --standalone --nproc_per_node=8 train.py --outdir=training-runs
--data=datasets/cifar10-32x32.zip --cond=0 --arch=ddpmpp
This gives me an FID of 2.08169, which is still some way from the FID reported in the paper (1.97).
I think this may be caused by the random seeds (the random initialization in the code).
Would it be possible to share the seed used to reproduce the result in Table 2?
Any suggestions?
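For reference, here is a minimal sketch of how one might pin all RNG sources in a PyTorch training script before looking into seeds as the cause. The function name `set_seed` is my own; the repository's actual seeding logic may differ, and full determinism across multi-GPU runs is not guaranteed by this alone:

```python
import random

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed the Python, NumPy, and PyTorch RNGs for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # no-op if CUDA is unavailable
    # Force deterministic cuDNN kernels (may slow training down).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


set_seed(0)
```

Note that even with identical seeds, nondeterministic CUDA kernels and differences in GPU count or driver versions can still shift the final FID by a small amount, so a gap of ~0.1 may be within run-to-run variance.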