Dear Ladies and Gentlemen,

we have an issue concerning the size of the database which we use for a project via the collaboration server. The database consists of a considerable number of new processes (around 200) and a few additional flows (around 20). For calculations, product systems are created and deleted directly afterwards to minimize fetching and committing time. We have now observed that the database size fluctuates by more than 100 % (from 2 GB to 5 GB) without an identifiable reason, e.g., the addition of a significant number of new processes. This leads to situations in which our cloud space is exceeded and new modifications are no longer possible.

My question is whether there is a cache within the database where the created product systems are temporarily stored even after their deletion, which could explain this increase in database size. If yes, how can we clear this cache manually? If not, do you have other ideas about possible causes of these fluctuations? Furthermore, are there options to decrease or optimize the size of a database?
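One possible explanation, assuming the repository is opened locally in openLCA: local openLCA databases are stored in an embedded Apache Derby database, and Derby does not return disk space to the operating system when rows are deleted. Repeatedly creating and deleting product systems could therefore leave large amounts of unused space in the database files. A sketch of how that space could be reclaimed with Derby's documented `SYSCS_COMPRESS_TABLE` system procedure, run from Derby's `ij` SQL tool (the schema and table names below are illustrative assumptions and would need to match the actual openLCA schema):

```sql
-- Connect to the (closed, not currently opened in openLCA) database, e.g.:
--   ij> CONNECT 'jdbc:derby:/path/to/database';
-- Then reclaim unused space table by table. SYSCS_COMPRESS_TABLE is a
-- documented Derby system procedure; the third argument (1) requests
-- sequential compression, which needs less temporary disk space.
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TBL_PRODUCT_SYSTEMS', 1);
CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE('APP', 'TBL_PROCESS_LINKS', 1);
```

Note that this only applies to local Derby databases; a server-side MySQL repository would need a different mechanism (e.g. `OPTIMIZE TABLE`).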

Many thanks for your effort in advance!

Best regards

Simon
in LCA Collaboration Server by (120 points)
by (7.5k points)
Do you mean the size of the CS database (e.g. /opt/collab/database) or the size of a specific repository (e.g. /opt/collab/repositories/{group}/{name})?
by (120 points)
Hi Sebastian,

I mean the size of a specific repository, i.e., the linked database we work with in openLCA.

Best regards,

Simon
by (7.5k points)
I didn't know that you were using the hosted service; I was under the impression you had your own server setup. I will reply via email.
