
Collaboration Server version 1.1, Windows Server 2012, Tomcat 8.5.34, Elasticsearch 6.2.4

I've created a simple database with reference unit/flow property data and a single user-made flow. If I make a new repository, commit this database, disconnect from the repository, and then reconnect, all parts of the database are marked as "new", and I need to fetch and then commit. Is this the intended behavior? Is there anything I can do, short of fetching, to get openLCA to recognize that there are actually no changes? Does this happen because, when we disconnect the database, openLCA loses all information about the commits and so has to fetch the entire database just to ensure it's at the right place? Does the fact that all of our version numbers are 0 make this more difficult?
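To make that hypothesis concrete, here is a minimal sketch (an assumed model, not openLCA's actual code; the class, method, and data names are made up) of how a local commit index could work and why discarding it on disconnect would make every data set appear as "new" on reconnect:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical model of a local sync index: it remembers the version/hash
    // each data set had when it was last committed or fetched.
    class LocalIndexSketch {

      private final Map<String, String> entries = new HashMap<>();

      void put(String refId, String hash) {
        entries.put(refId, hash);
      }

      // Classify a local data set against the index.
      String diffState(String refId, String currentHash) {
        String indexed = entries.get(refId);
        if (indexed == null)
          return "NEW"; // unknown to the index -> shown as "new"
        return indexed.equals(currentHash) ? "UNCHANGED" : "MODIFIED";
      }

      // Disconnecting from the repository throws the index away ...
      void clearOnDisconnect() {
        entries.clear();
      }

      public static void main(String[] args) {
        LocalIndexSketch index = new LocalIndexSketch();
        index.put("flow-123", "a1b2c3");                           // committed earlier
        System.out.println(index.diffState("flow-123", "a1b2c3")); // UNCHANGED

        index.clearOnDisconnect();                                 // disconnect
        // ... so after reconnecting, every data set looks NEW until a fetch
        // rebuilds the index, even though nothing was actually changed.
        System.out.println(index.diffState("flow-123", "a1b2c3")); // NEW
      }
    }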

For a single flow this isn't a big deal, but suppose we have a much larger database that we share via flash drive, or even download as JSON-LD from the collaboration server. The first time we connect this database, we have to fetch the whole thing again. (I'm about to make another post on our fetching issues.)

Link to log file from test

in LCA Collaboration Server by (5.3k points)

1 Answer

by (8.9k points)
Best answer
Yes, this is the intended behaviour. Currently, when disconnecting, the internal index is discarded, and you need to get up to date again by fetching. If the data sets are unchanged, openLCA should recognize this and only update the internal index during the fetch, without actually downloading or importing any data. Please let me know if this is not the case for you, and how you created the database in that case (when simply disconnecting and reconnecting, openLCA normally recognizes this correctly).
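As an illustration of that "index-only" fetch, here is a rough sketch (assumptions only; these are not the real openLCA/Collaboration Server classes or API): when the remote hash matches the local data set, only the index entry is written and nothing is downloaded or imported.

    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch of an index-only fetch. Names are illustrative.
    class FetchSketch {

      record RemoteRef(String refId, String hash) {}

      static void fetch(List<RemoteRef> remoteRefs,
                        Map<String, String> localHashes,
                        Map<String, String> index) {
        for (RemoteRef ref : remoteRefs) {
          String localHash = localHashes.get(ref.refId());
          if (ref.hash().equals(localHash)) {
            // data set unchanged -> only record it in the index, no download
            index.put(ref.refId(), ref.hash());
          } else {
            // data set actually differs (or is missing locally) -> this is the
            // only case where content would be downloaded and imported
            downloadAndImport(ref);
            index.put(ref.refId(), ref.hash());
          }
        }
      }

      static void downloadAndImport(RemoteRef ref) {
        System.out.println("downloading and importing " + ref.refId());
      }
    }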
by (8.9k points)
OK, both as it should be, but now I don't understand why it imports a lot of data sets. I rechecked the code for v1.1.0: if no diff is shown, it should not fetch anything and thus not import anything (which is also why the download is "so fast"). I recently fixed an issue where partial fetches (when data sets are merged) were executed incorrectly, but that only applies to situations where the diff is shown, conflicts occur, and those conflicts are resolved by merging the state into something different from the remote state.

As part of that fix I implemented an additional check in openLCA, so if a merged data set is still fetched it won't be imported again, but this should be unrelated. We will release a new openLCA 1.10.1 version in the next days, followed by a Collaboration Server 1.2.x release soon after. For now I'd suggest we wait and see whether the issue remains in the new version, and I will do another test on this before the release.
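Conceptually, that additional check might look something like this (a hypothetical sketch, not the actual fix in the openLCA code base): data sets that were already resolved by a local merge are skipped during import, even if the server still sends them.

    import java.util.List;
    import java.util.Set;

    // Hypothetical sketch of the described guard: a data set that was already
    // handled by a local merge is not imported a second time, even if it is
    // still contained in the fetched data. Names are illustrative only.
    class ImportGuardSketch {

      static void importFetched(List<String> fetchedRefIds,
                                Set<String> locallyMergedRefIds) {
        for (String refId : fetchedRefIds) {
          if (locallyMergedRefIds.contains(refId)) {
            // already resolved by the merge -> skip the re-import
            continue;
          }
          importDataSet(refId);
        }
      }

      static void importDataSet(String refId) {
        System.out.println("importing " + refId);
      }
    }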
...