Dear Sir or Madam,
we have an issue concerning the size of the database we use for a project via the collaboration server. The database consists of a considerable number of new processes (around 200) and a few additional flows (around 20). For calculations, product systems are created and deleted directly afterwards to minimize fetching and committing times. We have now observed that the database size fluctuates by more than 100 % (from 2 GB to 5 GB) without any identifiable reason, e.g., the addition of a significant number of new processes. This leads to situations in which our cloud space is exceeded and new modifications are no longer possible.
My question is whether there is a cache within the database in which the created product systems are temporarily stored even after their deletion, and which could lead to this increase in database size. If yes, how can we clear this cache manually? If no, do you have other ideas about possible causes of these fluctuations? Furthermore, are there options to decrease or optimize the size of a database?
Many thanks in advance for your effort!