Dear community,
I am currently looking into different data quality assessment schemes and I have two questions.
1) I am using ecoinvent 3.11 cutoff unit processes, and in the "Data Quality Systems" section I identified a few options.
I understand that the "ecoinvent data quality system" corresponds to the publication "Overview and methodology: Data quality guideline for the ecoinvent database version 3" (Weidema et al. 2013, attached at the end of this post).
On the other hand, I also found the ILCD Data Quality System, and I would like to know which source it is based on. I assumed it would be based on the ILCD Handbook, but some of the criteria do not match that publication. For example, "Completeness" is interpreted completely differently: while the ILCD Handbook understands it as "% of flow coverage", the ILCD data quality system implemented in openLCA understands it as "share of market, for which data was collected".
Could you share some insights on that and on where that matrix comes from?
2) If I am correct in my interpretation, the uncertainty values for individual flows (defined by gmean and gsigma) use only the "Default basic uncertainty" to determine gsigma, ignoring the additional penalties arising from the pedigree matrix. I am referring to Weidema et al. 2013 once again, specifically to Table 10.3 (basic uncertainty) and Table 10.5 (pedigree uncertainty factors).
What is the motivation to use only the default basic uncertainty?
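For clarity, this is how I understood gsigma would be derived under Weidema et al. 2013: the pedigree variances from Table 10.5 are added to the basic variance from Table 10.3, and the geometric standard deviation is the exponential of the square root of the total. Below is a minimal sketch of that reading; the function, the table values as I transcribed them, and the convention gsigma = GSD = exp(sigma) are my own assumptions, not openLCA's actual implementation:

```python
import math

# Pedigree variances (sigma^2 of the underlying normal distribution) per
# score 1..5, as I read Table 10.5 of Weidema et al. 2013; please correct
# me if I transcribed anything wrongly.
PEDIGREE_VARIANCES = {
    "reliability":               [0.0, 0.0006, 0.002,  0.008,  0.04],
    "completeness":              [0.0, 0.0001, 0.0006, 0.002,  0.008],
    "temporal correlation":      [0.0, 0.0002, 0.002,  0.008,  0.04],
    "geographical correlation":  [0.0, 2.5e-5, 0.0001, 0.0006, 0.002],
    "technological correlation": [0.0, 0.0006, 0.008,  0.04,   0.12],
}

def gsigma(basic_variance, scores=None):
    """Geometric standard deviation of a lognormally distributed flow.

    basic_variance: sigma^2 from Table 10.3 (default basic uncertainty).
    scores: pedigree scores (1..5) per indicator; None reproduces what I
    currently observe in openLCA, i.e. basic uncertainty only.
    """
    total = basic_variance
    if scores:
        total += sum(PEDIGREE_VARIANCES[name][s - 1] for name, s in scores.items())
    return math.exp(math.sqrt(total))

# Illustrative numbers only: basic variance 0.0006 and an all-3 pedigree.
print(gsigma(0.0006))                                      # ~1.025, basic only (what I see)
print(gsigma(0.0006, {k: 3 for k in PEDIGREE_VARIANCES}))  # ~1.12, basic + pedigree (what I expected)
```

If my reading is wrong and openLCA is intentionally dropping the pedigree contributions for a reason I am missing, I would be glad to learn why.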
I also attached the publication:
https://ask.openlca.org/?qa=blob&qa_blobid=9553149428209810238
Thank you in advance for your valuable insights!
Cheers,
Paul