
Hi everyone, 

I’m running Monte Carlo simulations in openLCA 2.4.0 from Python (olca_ipc 2.4.0, olca_schema 2.4.0), and I’m seeing extremely high memory usage.

Even with only 200 iterations, the openLCA process can grow to 25 GB. I also noticed that commenting out the following lines prevents the memory growth:

for i in range(num_iterations):
    # Uncommenting these two lines causes memory to grow quickly
    # result.simulate_next()
    # result.wait_until_ready()
    xs[i] = result.get_total_impact_value_of(impact_category_ref).amount
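
For reference, this is roughly how I’m tracking the memory usage (a psutil-based sketch; the process-name match is my assumption, so adjust it for your setup):

import psutil

def openlca_rss_gb() -> float:
    # Sum the resident memory of processes whose name contains "openlca".
    # The name match is an assumption; adjust it for your OS/launcher.
    total = 0
    for p in psutil.process_iter(attrs=["name"]):
        try:
            if "openlca" in (p.info["name"] or "").lower():
                total += p.memory_info().rss
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            pass
    return total / 1e9  # bytes -> GB

I call openlca_rss_gb() every few iterations to watch the growth.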

Questions:

  1.  Does anyone know why simulate_next() causes such rapid memory growth?
  2.  Is there a recommended way to run many Monte Carlo iterations from Python in openLCA while keeping memory usage manageable?
Many thanks in advance!
Siyue
Code:
import olca_ipc as ipc
import olca_schema as o
import numpy as np
import pandas as pd

client = ipc.Client(8080)  # IPC server started from the openLCA desktop app

FU = 1.0  # functional unit
simulation_plan = {
    "biodiesel production, from soybean": 500,
    "biodiesel production, from rapeseed": 259,
}

impact_method = client.find(o.ImpactMethod, "IPCC 2013 GWP 100a")
impact_category_ref = o.Ref(
    ref_type=o.RefType.ImpactCategory,
    id="0529022f-23cd-36e5-bbba-3aff792d439c",
)

all_results = []

for system_name, num_iterations in simulation_plan.items():
    print(f"\nMonte Carlo for: {system_name} ({num_iterations} iterations)")
    product_system = client.find(o.ProductSystem, system_name)

    setup = o.CalculationSetup(
        target=product_system.to_ref(),
        impact_method=impact_method.to_ref(),
        amount=FU,
    )
    xs = np.empty(num_iterations, dtype=float)

    result = client.simulate(setup)
    result.wait_until_ready()

    for i in range(num_iterations):
        # Uncommenting these two lines causes memory to grow quickly
        # result.simulate_next()
        # result.wait_until_ready()
        xs[i] = result.get_total_impact_value_of(impact_category_ref).amount

    result.dispose()

    df_partial = pd.DataFrame({
        "product_system": [system_name] * num_iterations,
        "iteration": np.arange(1, num_iterations + 1),
        "impact": xs,
    })
    all_results.append(df_partial)

df = pd.concat(all_results, ignore_index=True)
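
For what it’s worth, here is a batched variant I’m considering as a workaround (just a sketch: it assumes that result.dispose() actually releases the simulator on the server side, and that a re-created simulator keeps sampling from the same parameter distributions):

batch_size = 50  # hypothetical batch size; tune against available memory
xs = np.empty(num_iterations, dtype=float)
done = 0
while done < num_iterations:
    # Start a fresh simulator for each batch so server-side state
    # cannot accumulate across all iterations.
    result = client.simulate(setup)
    result.wait_until_ready()
    n = min(batch_size, num_iterations - done)
    for i in range(n):
        result.simulate_next()
        result.wait_until_ready()
        xs[done + i] = result.get_total_impact_value_of(impact_category_ref).amount
    done += n
    result.dispose()  # release the simulator before the next batch

I don’t know yet whether this actually keeps memory flat, so any guidance on the intended usage would be appreciated.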