Issue overview
Typically, runtime issues are related to EnergyPlus, but here, for workflows that take more than 24 hours to run (on my M1 Max Mac; roughly twice as slow on Windows, and I can share those times later), less than 2 hours of that time is spent in the EnergyPlus simulation.
This is an URBANopt project using large buildings (200-400 units each) with the HPXML-to-OSM approach merged into a single model. The OSM file is about 500 MB, and the out.osw file in some cases is larger than 40 MB with more than 150k lines.
Current Behavior
Here is a plot of the full runtime on top and just the EnergyPlus runtime below. Of the four largest and slowest models, two of them (around 50k sqft in size) take much longer than linear scaling from the smaller models would predict. I can try to produce this plot for Windows as well. I don't know what is unique about those two models compared to the larger models that run in less than half the time (it could be that they have more units even though less sqft, but maybe something else as well).
The Build Residential measure does take a few hours, but a number of downstream measures are also slow. Also, in one case I saw an odd multi-hour gap between when one measure finished and the next started; that may have been an anomaly. I already have a rake task that gets the total runtime from the OSW; maybe that can be updated to isolate each measure and look for any gaps in time between measures.
Expected Behavior
Unless sizing runs are part of a measure, measures generally should run in a few seconds or a few minutes. Maybe that isn't reasonable for large models, but at worst the impact should be linear, where a model twice as big is twice as slow, not 4x slower. If there are ways to speed up processing for larger models, that would be helpful.
Steps to Reproduce
- Check out the temp_slow_only branch: https://github.com/urbanopt/urbanopt-prototype-district-templates/tree/temp_slow_only
- Run `bundle install`
- Optionally, remove all but one of the weather files from https://github.com/urbanopt/urbanopt-prototype-district-templates/tree/temp_slow_only/example_files/weather so the sweep only goes through once
- Run the urbanopt_climate_sweep rake task
- Wait 24-48 hours for the 4 datapoints to run in parallel. It is currently set up to run only the slower of the two scenarios.
Possible Solution
Some of this may fall on measure writers minimizing looping through objects. Logging is a more complex question: I generally encourage good logging of warning and info messages for transparency and diagnostics, but I'm wondering if we should have an enhancement request so that, when running an OSW in the CLI, info messages are not written, and only warnings and runner.register values are.
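As an interim, measure-side workaround, info logging could be routed through a single helper that an environment variable can silence. This is just a sketch: the `UO_SUPPRESS_INFO` variable and `log_info` helper are hypothetical names of my own, not part of OpenStudio or URBANopt; warnings and registerValue calls would be untouched, so reporting still works.

```ruby
# Hypothetical helper: gate every registerInfo call behind an env var so a
# large run can skip serializing thousands of info messages into out.osw.
# `enabled` defaults to true unless UO_SUPPRESS_INFO is set in the environment.
def log_info(runner, msg, enabled: ENV['UO_SUPPRESS_INFO'].nil?)
  runner.registerInfo(msg) if enabled
end
```

Inside a measure, `runner.registerInfo(...)` calls would become `log_info(runner, ...)`; setting `UO_SUPPRESS_INFO=1` before a large run then keeps those messages out of out.osw without editing each call site.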
Possible steps outside of core OpenStudio
- Update BuildResidential so that, when it merges OSMs together, it makes a cleaner model with fewer duplicate objects. A more advanced approach for large buildings is to start using multipliers so that not every single unit has to be modeled.
- Update reduce_epd_by_percentage_for_peak_hours to log fewer info statements; they shouldn't have to be removed altogether, and the measure should still function.
- Improve the script/rake task that steps through out.osw so it lists the duration of each measure and checks for time between measures (on one occasion I saw a multi-hour gap between the end of one measure and the start of another, but I have not confirmed whether that was an isolated case).
Possible steps within core OpenStudio
- I think a good quick fix could be a special flag for the CLI that bypasses info statements. It can't avoid calling that code in the measure, but it can exclude the output from the runner and from out.osw, similar to the verbose flag. This would allow a quick fix without having to alter measures.
- Improve the performance of large models with a lot of logging, and identify why the two specific data points are outliers, 2x or more slower than would be expected.
Details
The way URBANopt works, all features in a scenario are run in parallel. At one point I was running 2 scenarios across 16 file locations. When I ran the default way, I had all cores but one idle, waiting for the last slow feature to finish. To speed up the process, I ended up running without the four slowest features, then did a separate run on a separate computer with just those slow features and the right number of cores dedicated so that I could run multiple scenarios in parallel instead of just one.
Related to the long out.osw: one measure (reduce_epd_by_percentage_for_peak_hours) is largely responsible for its size, because it logs all schedules and other objects, combined with a model that appears to have an excessive number of objects, including multiple always-on schedules.
Environment
Some additional details about your environment for this issue (if relevant):
- macOS 12.6, M1 Max, 32 GB memory
- OpenStudio 3.5.1+22e1db7be5
- URBANopt 0.9.1
Context
This has made running the analysis for a paper very slow and complex, and it is not something we would expect external URBANopt users to have to do.

