I have been experimenting with a combined workflow that uses the DiagnosticWrapper to bundle FourCastNet v3 and PrecipitationAFNOv2 and output quantitative precipitation forecasts (QPF). According to the PrecipitationAFNOv2 documentation, the QPF diagnosed by the model at time t covers the 6-hour interval [t-6h, t]. After running several forecasts over the past 6 weeks, I strongly suspect that PrecipitationAFNOv2 is actually diagnosing QPF for the interval [t, t+6h]: its QPF is routinely 6 hours ahead of other dynamical models (and of later verified observations).
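To make the suspected interval mismatch concrete, here is a minimal sketch in plain Python (hypothetical timestamps; this does not use the actual earth2studio API, and `qpf_window` is an illustrative helper, not a library function) of how the same QPF value maps to different accumulation windows under the two conventions:

```python
from datetime import datetime, timedelta

STEP = timedelta(hours=6)  # PrecipitationAFNOv2 diagnoses 6-hour accumulations

def qpf_window(t: datetime, convention: str):
    """Accumulation window for the QPF labeled with valid time t.

    'documented' follows the stated [t-6h, t] convention;
    'suspected' is the [t, t+6h] behavior described above.
    """
    if convention == "documented":
        return (t - STEP, t)
    if convention == "suspected":
        return (t, t + STEP)
    raise ValueError(f"unknown convention: {convention}")

t = datetime(2024, 1, 1, 12)
print(qpf_window(t, "documented"))  # window ending at 12 UTC: 06-12 UTC
print(qpf_window(t, "suspected"))   # window starting at 12 UTC: 12-18 UTC
```

If the suspected convention holds, relabeling each QPF field from t to t+6h (so its window becomes [t_new-6h, t_new]) would restore agreement with the documented convention.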
Now, this could conceivably be a bias in FCN3 that propagates downstream into the PrecipitationAFNOv2 output, but I've tested another workflow that casts doubt on that hypothesis. I trained a small station-specific convolutional neural network to diagnose the probability of precipitation occurrence (PPO) at a single point from the FCN3 outputs, and I see a consistent pattern: the peak QPF at the PrecipitationAFNOv2 grid point nearest the station occurs either before or during the period of highest PPO from my station-specific CNN. If the QPF interval were truly [t-6h, t], I would expect the peak QPF to occur after the PPO peak, not before it. Has anyone noticed a similar timing mismatch with PrecipitationAFNOv2?
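The lag reasoning above can be checked mechanically. The sketch below (NumPy, with synthetic Gaussian series standing in for the real QPF and PPO traces; `best_lag` is a hypothetical helper, not part of earth2studio) finds the shift that best aligns the two series — a positive lag means QPF leads PPO, which is what a [t, t+6h] labeling would produce:

```python
import numpy as np

# Synthetic 6-hourly series for illustration: PPO peaks at step 8,
# QPF peaks one step (6 h) earlier, mimicking the reported behavior.
steps = np.arange(16)
ppo = np.exp(-0.5 * ((steps - 8) / 1.5) ** 2)
qpf = np.exp(-0.5 * ((steps - 7) / 1.5) ** 2)

def best_lag(qpf, ppo, max_lag=3):
    """Lag (in 6 h steps) maximizing the correlation of QPF against PPO.

    Positive lag: QPF leads PPO (consistent with a [t, t+6h] labeling).
    Negative lag: QPF trails PPO (consistent with the documented [t-6h, t]).
    """
    def corr_at(k):
        if k > 0:
            a, b = qpf[:-k], ppo[k:]   # compare qpf[t] with ppo[t+k]
        elif k < 0:
            a, b = qpf[-k:], ppo[:k]   # compare qpf[t+|k|] with ppo[t]
        else:
            a, b = qpf, ppo
        return np.corrcoef(a, b)[0, 1]
    return max(range(-max_lag, max_lag + 1), key=corr_at)

print(best_lag(qpf, ppo))  # 1 -> QPF leads PPO by one 6 h step
```

Run against the real nearest-grid-point QPF and station PPO series, a consistently positive `best_lag` across cases would support the [t, t+6h] hypothesis, while a consistently negative one would support the documented convention.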