Version
2.1.0
On which installation method(s) does this occur?
Pip
Describe the issue
I'm trying to do physics-informed machine learning for flow inside a duct with a cylinder. I train with the Adam optimizer for 50,000 steps to bring the loss down to roughly O(1), and then, as outlined in the guide, switch to the L-BFGS optimizer (i.e., I rename the optim_Checkpoint.pth file and change the config to use L-BFGS). When I do exactly this, L-BFGS does not improve the loss at all: the loss at the end of Adam training is the same value that L-BFGS reports. I also set max_iter to about 10,000 iterations, line_search_fn to strong_wolfe, and history_size to 100, but the optimization exits early without properly converging.
Any help from the community would be much appreciated.
[15:56:48] - lbfgs optimizer selected. Setting max_steps to 0
[15:56:49] - [step: 0] lbfgs optimization in running
[16:03:42] - lbfgs optimization completed after 2072 steps
[16:03:42] - [step: 0] record constraint batch time: 1.526e-01s
[16:03:46] - [step: 0] record validators time: 4.290e+00s
[16:03:46] - [step: 0] record monitor time: 3.022e-02s
[16:03:48] - [step: 0] saved checkpoint to PhysicsNeMoResults/ModelRunGPU0_PINN_RANS_DA_Baseline_HD_NL6_NN512_AF_SILU_AAF_true_WS_relobralo_BSSI7500_BSSW5000_SGR0p1_SDNL1p0_Case1
[16:03:48] - [step: 0] loss: 1.309e+00
[16:03:48] - [step: 0] reached maximum training steps, finished training!
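For reference, here is a minimal, self-contained sketch of how the settings described above map onto PyTorch's `torch.optim.LBFGS` (the PhysicsNeMo config keys may differ; the parameter names below are PyTorch's own). Note that L-BFGS will stop before `max_iter` once its `tolerance_grad` / `tolerance_change` thresholds are met, which may be why the run above finishes after 2072 steps:

```python
import torch

torch.manual_seed(0)

# Toy problem standing in for the PINN: fit a linear model to a linear target.
model = torch.nn.Linear(2, 1)
x = torch.randn(64, 2)
y = x.sum(dim=1, keepdim=True)

# Settings matching the issue description; tolerance_* are PyTorch defaults,
# shown explicitly because they control the early-exit behavior.
optimizer = torch.optim.LBFGS(
    model.parameters(),
    max_iter=10000,              # upper bound on inner iterations per step()
    history_size=100,            # number of stored curvature pairs
    line_search_fn="strong_wolfe",
    tolerance_grad=1e-7,         # stop when the gradient norm falls below this
    tolerance_change=1e-9,       # stop when the loss/parameter change falls below this
)

def closure():
    # L-BFGS re-evaluates the loss multiple times per step, so it needs a closure.
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    return loss

# step() returns the loss from the first closure evaluation (the initial loss).
initial_loss = optimizer.step(closure)
final_loss = torch.nn.functional.mse_loss(model(x), y)
```

On this toy quadratic problem a single `step()` call converges essentially to the exact solution; if your PINN loss plateaus instead, the tolerances above are the first knobs to inspect.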
Minimum reproducible example
Relevant log output
Environment details
Other/Misc.
No response