Research preview — APIs may change.

Location Proof Evaluation

When a location proof is submitted for verification, Astral evaluates it through a three-phase process. The output is a credibility vector — a structured assessment describing how strongly the evidence supports the claim across multiple dimensions.
Why not a binary yes/no? Different applications value different properties — a logistics platform cares most about spatial precision, while a compliance system may prioritize source independence. The credibility vector provides a dense quantification of the evidence so applications can apply their own weighting.

The Three Evaluation Phases

Phase 1: Stamp Checks

Each location stamp is evaluated independently for internal validity:
  • Signatures — Are the cryptographic signatures valid? Do they chain to a trusted source for this proof-of-location system?
  • Structure — Does the location stamp conform to the expected format for its plugin?
  • Signal consistency — Are the raw signals internally consistent? Do multiple sensor readings within a single location stamp agree with each other?
A location stamp that fails these checks provides no useful evidence — its results are reported but won’t contribute positively to the overall credibility.
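The per-stamp checks can be sketched as a simple all-or-nothing gate. This is an illustrative structure only — the field names and the `StampCheckResult` type are assumptions for the sketch, not Astral's actual schema:

```python
from dataclasses import dataclass

# Hypothetical per-stamp check result -- field names are illustrative,
# not Astral's actual schema.
@dataclass
class StampCheckResult:
    signature_valid: bool    # signatures chain to a trusted source
    structure_valid: bool    # stamp conforms to its plugin's format
    signals_consistent: bool # raw sensor readings agree internally

    @property
    def passes(self) -> bool:
        # A stamp must pass all three checks to contribute evidence;
        # a failing stamp is reported but carries no positive weight.
        return (self.signature_valid
                and self.structure_valid
                and self.signals_consistent)

result = StampCheckResult(signature_valid=True,
                          structure_valid=True,
                          signals_consistent=False)
print(result.passes)  # False: inconsistent signals void the stamp's evidence
```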

Phase 2: Correlation Checks (Multi-Stamp Location Proofs)

For location proofs with multiple location stamps from independent proof-of-location systems, the evaluator cross-correlates the evidence:
  • Independence — Are the location stamps from genuinely independent sources? Two location stamps from the same proof-of-location system or the same device aren’t independent. Independence is assessed based on plugin type, device identity, and trust model.
  • Agreement — Do the independent location stamps agree on location and timing? Evidence from unrelated sources that converges on the same location is substantially more convincing than any single source.
This phase is what makes multi-factor location proofs powerful. Agreement between independent sources is hard to forge because an attacker would need to compromise multiple unrelated systems simultaneously.
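One way to picture the independence check: collapse stamps that share a plugin type and device identity into a single source, then count what remains. The tuple layout below is a hypothetical simplification of a location stamp, used only to show the grouping idea:

```python
# Hypothetical stamp records: (plugin_type, device_id, lat, lon).
# Real location stamps carry much more; this is just the grouping key.
stamps = [
    ("gnss", "device-a", 51.5007, -0.1246),
    ("wifi", "device-b", 51.5009, -0.1244),
    ("gnss", "device-a", 51.5008, -0.1245),  # same plugin + device: not independent
]

# Independence: stamps sharing a (plugin, device) pair count as one source.
independent_sources = {(plugin, device) for plugin, device, _, _ in stamps}
print(len(independent_sources))  # 2 independent sources from 3 stamps
```

Agreement is then assessed only across these collapsed sources — three stamps from one device agreeing with each other is unremarkable, while two unrelated systems converging on the same location is strong evidence.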

Phase 3: Claim Assessment

The evaluator compares the observed evidence (from the location stamps) against the asserted claim:
  • Spatial consistency — Does the observed location from the location stamps fall within the claimed location and radius?
  • Temporal consistency — Does the temporal footprint of the evidence overlap with the claimed time range?
  • Overall support — Given the strength of the evidence (phases 1 and 2), how well does it support this specific claim?
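The spatial and temporal consistency checks reduce to geometry: is the observed point within the claimed radius, and does the observation time fall inside the claimed window? A minimal sketch, with illustrative claim and observation values (not Astral's data model):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in metres between two lat/lon points.
    R = 6371000  # mean Earth radius in metres
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# Illustrative claim: within 100 m of a point, between t=1000 and t=2000.
claim = {"lat": 51.5007, "lon": -0.1246, "radius_m": 100,
         "t_start": 1000, "t_end": 2000}
obs = {"lat": 51.5010, "lon": -0.1240, "t": 1500}  # observed evidence

spatially_consistent = (
    haversine_m(claim["lat"], claim["lon"], obs["lat"], obs["lon"])
    <= claim["radius_m"]
)
temporally_consistent = claim["t_start"] <= obs["t"] <= claim["t_end"]
print(spatially_consistent, temporally_consistent)  # True True
```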

The Credibility Vector

The output of evaluation is a credibility vector with four dimensions. Each dimension is a structured object containing multiple metrics — not a single score.
| Dimension | What it measures | Example metrics |
|---|---|---|
| Spatial | How well observed locations support the claimed location | Mean distance from claim center; fraction of location stamps within claimed radius |
| Temporal | How well observation times align with the claimed time range | Mean overlap between location stamp and claim time windows; fraction with full overlap |
| Validity | Internal validity of the location stamps | Fraction with valid signatures, valid structure, consistent signals |
| Independence | How independent and corroborating the evidence sources are | Ratio of unique plugins to total location stamps; spatial agreement across sources |
There is no single overall score. This is deliberate — collapsing a multidimensional assessment into one number requires value judgments about which dimensions matter most, and that’s the application’s call, not ours.
These dimensions and their constituent metrics are an active area of research. We expect them to evolve as we learn more about what’s useful in practice.
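Putting the table together, a credibility vector might be represented as four structured dimensions with no aggregate field. The shape and metric names below follow the table above but are illustrative only — Astral's actual schema may differ and is expected to evolve:

```python
from dataclasses import dataclass

# Illustrative shape only -- dimension names follow the table above,
# metric keys are examples, and there is deliberately no overall score.
@dataclass
class CredibilityVector:
    spatial: dict       # e.g. mean distance from claim center
    temporal: dict      # e.g. overlap with claimed time window
    validity: dict      # e.g. fraction of stamps passing Phase 1
    independence: dict  # e.g. diversity of evidence sources

vec = CredibilityVector(
    spatial={"mean_distance_m": 42.0, "fraction_within_radius": 1.0},
    temporal={"mean_overlap": 0.9, "fraction_full_overlap": 0.67},
    validity={"fraction_valid_signatures": 1.0},
    independence={"unique_plugin_ratio": 0.67},
)
# Weighting these dimensions into one number is the application's call,
# so the vector itself never collapses them.
```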

Interpreting the Vector

The credibility vector quantifies the strength of the evidence, not the probability that the claim is true. Strong metrics across all four dimensions mean the evidence is internally valid, spatially and temporally consistent with the claim, and drawn from independent sources. Weak metrics tell you where the evidence falls short.

What the vector cannot tell you: whether the proof-of-location systems themselves are trustworthy for your use case. A credibility vector with strong metrics from a single device attestation reflects different underlying assurance than one with equally strong metrics from three independent proof-of-location systems. The per-dimension breakdown — especially independence — makes this visible.

Application-Level Decisions

Astral evaluates and reports. Applications decide. The credibility vector gives applications enough information to make risk-appropriate decisions. A social check-in app might accept weak independence metrics. A compliance system might require strong validity and spatial metrics from at least two independent sources. A land title registry might require the strongest available assurance across all dimensions. The threshold is always the application’s choice — Astral does not impose minimum requirements.
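The division of labor is that Astral produces the vector and each application applies its own policy over it. A sketch of two hypothetical application-side policies — the threshold values and metric keys are illustrative, not recommendations:

```python
# Hypothetical application-side policies over a credibility vector.
# Metric keys and thresholds are illustrative, not recommended values.

def compliance_policy(vec: dict) -> bool:
    # A compliance system: strong validity and spatial metrics,
    # from at least two independent sources.
    return (
        vec["validity"]["fraction_valid_signatures"] == 1.0
        and vec["spatial"]["fraction_within_radius"] >= 0.9
        and vec["independence"]["unique_sources"] >= 2
    )

def checkin_policy(vec: dict) -> bool:
    # A social check-in app: tolerates weak independence entirely.
    return vec["spatial"]["fraction_within_radius"] >= 0.5

vec = {
    "validity": {"fraction_valid_signatures": 1.0},
    "spatial": {"fraction_within_radius": 1.0},
    "independence": {"unique_sources": 1},  # single source
}
print(compliance_policy(vec), checkin_policy(vec))  # False True
```

The same vector clears one application's bar and not the other's — which is the point: the threshold is the application's choice, not the evaluator's.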

Next: Geocomputation

Spatial operations with proof of correct execution
