Research preview — APIs may change.
Privacy
Privacy is designed into Astral from the start — not bolted on afterward. The TEE provides meaningful privacy guarantees today, and zero-knowledge circuits for location proof verification and geospatial computations are an active research direction for stronger properties in the future.

TEE Privacy Guarantees
The Trusted Execution Environment processes location data inside hardware-isolated memory that the operator cannot inspect, even with physical access to the machine.

What stays private:
- Raw input coordinates — the exact lat/lng values submitted
- Exact geometries — polygon boundaries, line paths, and other spatial data used in computation
- Location stamp signals — the raw evidence data from proof-of-location systems
What is visible:
- The signed result — a boolean, numeric value, or credibility vector
- The operation type — which computation was performed
- Input references — hashes of the inputs, not the raw data itself
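The input-reference mechanism can be sketched as follows. This is an illustrative sketch only: it assumes inputs are canonicalized GeoJSON hashed with SHA-256, and Astral's actual canonicalization and hash scheme may differ.

```python
import hashlib
import json

def input_reference(geometry: dict) -> str:
    """Digest of a canonicalized GeoJSON geometry.

    Only this hash is visible outside the enclave; the coordinates
    themselves never leave hardware-isolated memory.
    (Hypothetical scheme for illustration.)
    """
    canonical = json.dumps(geometry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

point = {"type": "Point", "coordinates": [-122.4194, 37.7749]}
ref = input_reference(point)  # 64 hex characters, reveals no coordinates
```

The hash is deterministic, so the same input always yields the same reference — a property the hash-matching caveat in the limitations section below depends on.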
One important caveat: requests can carry raw proofs as proofInputs, and verified location proofs include the original proof with all stamps and claim data. This means anyone who receives a signed result with its proof attached can see the raw location inputs.
Information Leakage From Results
The result itself may reveal information about the inputs. This is inherent to the computation, not a limitation of the privacy model:

| Operation | What the result reveals |
|---|---|
| contains (true) | The point is somewhere inside the polygon |
| within (true, 500m) | The point is within 500m of the target |
| distance (exact value) | The precise distance between two geometries |
The granularity of the operation determines how much leaks: a contains check against a country-sized polygon reveals less than a within check with a 10-meter radius.
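This difference can be made concrete: a true within result confines the point to a disk of area πr², so the size of the disclosed region is a rough measure of what the result reveals. A minimal sketch, using a round illustrative figure for the country-sized polygon:

```python
import math

def disclosure_region_m2(radius_m: float) -> float:
    """Area of the region a true `within` result confines the point to."""
    return math.pi * radius_m ** 2

fine = disclosure_region_m2(10)   # 10 m radius: ~314 m^2
coarse = 9.8e12                   # country-sized polygon (~9.8M km^2), in m^2

# The coarse result leaves billions of times more possible
# locations than the fine-grained one.
ratio = coarse / fine
```

The same reasoning applies to temporal bounds: a day-wide time window discloses far less than a timestamp precise to the second.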
Spatial and Temporal Uncertainty as Privacy Tools
The uncertainty tradeoff in location claims has a privacy dimension. Broader spatial bounds (larger radius) and wider temporal bounds reveal less about exact location and timing. Applications that want to preserve user privacy can intentionally use coarser claims — “was this user in San Francisco sometime today?” rather than “was this user within 5m of 37.7749°N, 122.4194°W at 14:32:07?” This isn’t a hack — it’s a principled privacy-preserving approach. If the application only needs to know “roughly where, roughly when,” there’s no reason to collect or process exact coordinates.

ZK Location Proofs (Research)
Zero-knowledge proofs would allow verification of location claims without revealing the underlying location data to anyone — including the verifier. A ZK location proof could prove “I was inside this boundary” without revealing where inside the boundary, or even what the boundary was.

| Property | TEE (today) | ZK (future) |
|---|---|---|
| Raw inputs hidden from operator | Yes | Yes |
| Raw inputs hidden from verifier | No | Yes |
| No trusted hardware required | No | Yes |
| Verification without re-execution | No | Yes |
| Production-ready | Yes | Not yet |
TEE Limitations
The TEE provides strong but not absolute privacy:

- Hardware trust — You are trusting that the TEE hardware (Intel SGX / AMD SEV) correctly isolates the enclave. Side-channel attacks on TEEs are an active area of security research.
- Result leakage — As described above, the result itself carries information about the inputs.
- Input reference hashes — Hashed input references are visible. If an observer knows the possible input space, they could attempt to match hashes (though this is computationally expensive for arbitrary geometries).
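The hash-matching risk can be demonstrated directly. The sketch below assumes the same illustrative canonicalized-GeoJSON/SHA-256 scheme as above (Astral's actual scheme may differ): an observer who knows the input was one point on a small known grid can enumerate candidates and recover the coordinates from the hash alone.

```python
import hashlib
import json

def input_reference(geometry: dict) -> str:
    # Illustrative hashing scheme; actual canonicalization may differ.
    canonical = json.dumps(geometry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# The observer sees only this digest...
observed = input_reference(
    {"type": "Point", "coordinates": [-122.4194, 37.7749]}
)

# ...but if the input space is small (here, an 11x11 grid of
# 0.0001-degree steps), brute-forcing it is cheap.
def match(observed_ref, candidates):
    for lng, lat in candidates:
        geom = {"type": "Point", "coordinates": [lng, lat]}
        if input_reference(geom) == observed_ref:
            return (lng, lat)
    return None

grid = [(-122.4194 + i * 0.0001, 37.7749 + j * 0.0001)
        for i in range(-5, 6) for j in range(-5, 6)]
recovered = match(observed, grid)  # -> (-122.4194, 37.7749)
```

For arbitrary high-precision geometries the candidate space is astronomically large, which is why this attack is only practical when the inputs are drawn from a small, guessable set.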
Next: Guides
Walk through common workflows step by step
See also:
- Trust model — what’s verified vs. what you’re trusting
- Astral Location Services — TEE architecture details