scottstanie opened this issue on Jan 31, 2023 · 0 comments
Labels: `quality metrics` (Statistics or metrics relating to the question "how uncertain are we?"), `research` (A deeper question that may require extended research to answer)
Q. How could we mark "noisy" acquisitions that aren't worth pulling?
We will be pulling $k$ old SLCs along with each new SLC to estimate/produce a new product. We almost certainly do not want to just pull the $k$ most recent SLCs (cf. the Norwegian InSAR service, which basically skips the entire winter).
This is a two-part issue:

1. What quality metrics can we produce that best tell us which old SLCs are worth re-pulling?
2. Operationally, how can we mark our output products (or label Sentinel products) so we know that certain historical ones aren't worth using to estimate the latest phase?
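One possible shape for the first part: compute a scalar quality score per acquisition (here, mean coherence; amplitude dispersion or a snow/weather flag would slot in the same way) and flag dates that fall below a threshold. This is only a sketch of the idea, not an implementation from this repo; `flag_noisy_slcs`, the threshold value, and the dict-of-rasters input are all hypothetical.

```python
import numpy as np

def flag_noisy_slcs(quality_by_date, threshold=0.3):
    """Return (date, score) pairs whose mean quality falls below `threshold`.

    quality_by_date: dict mapping acquisition date -> 2D array of a
    per-pixel quality measure (e.g., coherence against a reference).
    """
    flagged = []
    for date, arr in quality_by_date.items():
        score = float(np.nanmean(arr))  # one scalar summary per acquisition
        if score < threshold:
            flagged.append((date, score))
    return flagged

# Toy example: a low-coherence "winter" acquisition gets flagged,
# a summer one does not.
rng = np.random.default_rng(0)
quality = {
    "2022-07-01": rng.uniform(0.5, 0.9, (100, 100)),
    "2022-12-15": rng.uniform(0.0, 0.2, (100, 100)),
}
print(flag_noisy_slcs(quality))
```

The flagged list could then drive the second part: the skipped dates become a sidecar label (or metadata field on the output product) so later runs know to exclude them when choosing the $k$ SLCs to re-pull.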