Is there any way to work with a relatively large database using the openLCA Collaboration Server?
The database is 2.3 GB in zolca format, or roughly 10 GB unzipped locally (in Derby DB format).
I tried to commit it and the openLCA client simply crashed (increasing the heap size did not help).
What's more, when I created a new database with only the seed reference data and tried to push all the new changes to the openLCA Collaboration Server, it also crashed.
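(For reference, the heap-size increase mentioned above was done by editing the -Xmx value in openLCA.ini; the exact surrounding lines vary by installation, something like:)

```ini
-vmargs
-Xmx8192M
```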
So the scenario is the following:
1. Create a new repo on the Collaboration Server.
2. Create a new DB in openLCA client with Complete Reference Data.
3. Add repo from #1 to this database.
4. Select "Repository -> Commit..." on the whole DB.
Result: a crash (I have a crash log; the openLCA log shows nothing special -- it just does some GETs from the repo)
Expected: no crash :)
If I select only some processes (say 5), I can commit those without any issues.
OpenLCA is 1.11.0.
Hi Sebastian,

Sorry for the delayed answer.

Server info:

Release version: 1.3.0
Commit id: 833c92ea
Build date: 4/13/2022, 16:46:44

Client info:

openLCA 1.11.0

Nothing I can see in the Tomcat logs, only the fact that my user (admin) logged in. Nothing in catalina.out. In the access log I only see the same requests that are visible in the client logs:

- - [15/Jun/2022:09:10:33 +0000] "POST /ws/public/login HTTP/1.1" 200 -
- - [15/Jun/2022:09:10:34 +0000] "GET /ws/repository/meta/admin/testing HTTP/1.1" 200 63
- - [15/Jun/2022:09:10:34 +0000] "GET /ws/public/announcements HTTP/1.1" 204 -
- - [15/Jun/2022:09:10:50 +0000] "GET /ws/commit/request/admin/testing HTTP/1.1" 200 -

As for the point where the client crashes...
Repository -> Commit: the "Comparing with repository" window is the last one I can see, and at that point the client is already unresponsive.

I have the Java VM crash log, the client log, and the openLCA client configuration as files, but I can't attach those here in a comment.

This is the client log (lines appear after I click commit):

POST https://lca-collab-server.data-euw1.sustained.app/ws/public/login
63260 [ModalContext] INFO org.openlca.cloud.util.WebRequests  - POST https://lca-collab-server.data-euw1.sustained.app/ws/public/login
GET https://lca-collab-server.data-euw1.sustained.app/ws/repository?page=0
63943 [ModalContext] INFO org.openlca.cloud.util.WebRequests  - GET https://lca-collab-server.data-euw1.sustained.app/ws/repository?page=0
POST https://lca-collab-server.data-euw1.sustained.app/ws/public/login
72494 [main] INFO org.openlca.cloud.util.WebRequests  - POST https://lca-collab-server.data-euw1.sustained.app/ws/public/login
GET https://lca-collab-server.data-euw1.sustained.app/ws/repository/meta/admin/testing
72727 [main] INFO org.openlca.cloud.util.WebRequests  - GET https://lca-collab-server.data-euw1.sustained.app/ws/repository/meta/admin/testing
GET https://lca-collab-server.data-euw1.sustained.app/ws/public/announcements
73133 [main] INFO org.openlca.cloud.util.WebRequests  - GET https://lca-collab-server.data-euw1.sustained.app/ws/public/announcements
GET https://lca-collab-server.data-euw1.sustained.app/ws/commit/request/admin/testing
89018 [ModalContext] INFO org.openlca.cloud.util.WebRequests  - GET https://lca-collab-server.data-euw1.sustained.app/ws/commit/request/admin/testing
# A fatal error has been detected by the Java Runtime Environment:
#  SIGSEGV (0xb) at pc=0x00007fff6b43edc9, pid=11705, tid=0x0000000000000307

1 Answer

Best answer

To the first issue: Handling of big databases 

I recommend distinguishing between background and foreground data sets. In a common scenario you are not interested in changing (and thus tracking) background data sets. You could follow this approach:

  1. Initially import the background database (usually in zolca format) with the "Restore database" function
  2. Connect to an empty Collaboration Server repository
  3. Right click on the database and select "Repository/Untrack"
  4. Import foreground data sets (e.g. via JSON-LD exchange, see below)
  5. Right click on the database and select "Repository/Commit" - only the foreground data sets should be visible and selected

Other users should also start with steps 1 to 3 and then run "Repository/Fetch". This way all users have the same data sets and can continue tracking and exchanging foreground data sets.

To export your foreground data sets from an existing database, use "Export", select "JSON-LD", and select only the process data sets that are part of your foreground system. Linked data sets will be exported automatically. When you import them into the database that already contains the background data sets, the data sets should be linked correctly.
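As a quick sanity check before importing, you can inspect such a JSON-LD package directly: it is a zip archive whose top-level folders (e.g. processes, flows, unit_groups) each hold one <uuid>.json file per data set. A minimal sketch (the function name summarize_jsonld_package is just an illustration, not an openLCA API):

```python
import zipfile
from collections import Counter

def summarize_jsonld_package(path):
    """Count data sets per type folder in an openLCA JSON-LD zip.

    Assumes the usual package layout: top-level type folders
    (e.g. 'processes', 'flows') containing one <uuid>.json per data set.
    """
    counts = Counter()
    with zipfile.ZipFile(path) as zf:
        for name in zf.namelist():
            parts = name.split("/")
            # only count entries directly inside a type folder
            if len(parts) == 2 and parts[1].endswith(".json"):
                counts[parts[0]] += 1
    return dict(counts)
```

This makes it easy to confirm that only the intended foreground processes (plus their linked data sets) ended up in the export before committing it to the repository.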