Frequently Asked Questions

GeraLens — point, see, book

What is GeraLens?

GeraLens is a camera-activated service discovery API. Point your phone, AR glasses, car camera, or smart home device at something — a plumbing problem, a meal, a property, a medical symptom — and GeraLens identifies it and offers the right Gera service to handle it (plumber from GeraHome, restaurant from GeraEats, property valuation from GeraRent, doctor from GeraClinic).

Which devices does it work on?

iOS and Android apps (native SDKs), web (via WebRTC camera access), visionOS (Apple Vision Pro), Meta Quest browser, Ray-Ban Meta glasses (experimental), and modern vehicles via CarPlay or Android Auto visual assist. The underlying Lens API is also available as a REST endpoint for third parties.
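For third parties using the REST endpoint, a request might look like the sketch below. The endpoint URL and field names here are invented for illustration; the actual Lens API contract is defined in its own documentation.

```python
import base64
import json
import urllib.request

# Hypothetical endpoint -- the real Lens API URL and schema may differ.
LENS_ENDPOINT = "https://api.gera.example/lens/v1/identify"

def build_identify_request(image_bytes: bytes, device: str) -> dict:
    """Assemble a JSON body for a hypothetical /identify call.

    The image is base64-encoded; `device` hints at the capturing client
    (e.g. "ios", "android", "web", "visionos").
    """
    return {
        "image": base64.b64encode(image_bytes).decode("ascii"),
        "device": device,
    }

def send_identify_request(body: dict) -> urllib.request.Request:
    """Wrap the body in an HTTP request (not sent here)."""
    return urllib.request.Request(
        LENS_ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
```

A caller would pass the returned request to `urllib.request.urlopen` (or any HTTP client) and read the identification result from the JSON response.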

How does GeraLens handle privacy?

All image analysis can run locally on-device where hardware allows. When server-side processing is needed (complex scenes, OCR, fine-grained identification) the image is sent over TLS, processed within 30 seconds, and deleted immediately — never stored, never used for training. The user is always shown which server processed the image and can disable cloud-assisted mode entirely.
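The decision between local and cloud-assisted processing can be sketched as a small policy function. This is a minimal illustration, assuming three inputs: whether the device has local ML capability, whether the user has left cloud-assisted mode enabled, and whether the scene is complex enough to need the server. The names are invented, not the SDK's actual API.

```python
from enum import Enum

class Mode(Enum):
    ON_DEVICE = "on_device"
    CLOUD = "cloud"
    UNAVAILABLE = "unavailable"

def choose_mode(has_local_ml: bool, cloud_enabled: bool, needs_cloud: bool) -> Mode:
    """Pick a processing mode for one image.

    `needs_cloud` is true for complex scenes, OCR, or fine-grained
    identification beyond what the on-device model can handle.
    """
    if has_local_ml and not needs_cloud:
        return Mode.ON_DEVICE
    if cloud_enabled:
        return Mode.CLOUD
    # User has disabled cloud-assisted mode and the device cannot
    # handle this request locally: nothing leaves the device.
    return Mode.UNAVAILABLE
```

The key property is that `UNAVAILABLE` is a valid outcome: when the user opts out of cloud assistance, the image is simply never uploaded.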

How accurate is the identification?

Accuracy varies by category. Common household objects (fixtures, plants, pets), food, and signage score above 95% in current benchmarks. Medical symptoms are intentionally bounded — GeraLens never gives a diagnosis, only suggests connecting to a doctor via GeraClinic. Service routing is deterministic: the same identified category always maps to the same Gera vertical.
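Deterministic routing amounts to a fixed lookup from category to vertical. The table below is a hypothetical sketch using the examples from this FAQ; the real taxonomy is internal to GeraLens.

```python
# Illustrative category-to-vertical routing table (names assumed from
# the examples above, not the real internal taxonomy).
ROUTES = {
    "plumbing_problem": "GeraHome",
    "meal": "GeraEats",
    "property": "GeraRent",
    "medical_symptom": "GeraClinic",
}

def route(category: str) -> str:
    """Return the Gera vertical handling an identified category.

    Pure lookup: identical input always yields identical output,
    which is what makes the routing deterministic.
    """
    return ROUTES[category]
```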

Does GeraLens work offline?

Yes, for common categories on devices with on-device ML acceleration (iPhone 12+, Pixel 6+, Vision Pro). Rare or specialised identifications require an internet connection. The SDK automatically falls back to the best available mode and tells the calling app which mode it used.
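The fallback contract described above can be sketched as follows. The category set, result shape, and function names are invented for illustration; the real SDK surface will differ. The essential point is that the result reports which mode was used, and a rare category with no connection fails explicitly rather than silently.

```python
from dataclasses import dataclass

# Hypothetical set of categories the on-device model can resolve.
LOCAL_CATEGORIES = {"fixture", "plant", "pet", "meal", "signage"}

@dataclass
class LensResult:
    category: str
    mode_used: str  # "on_device" or "cloud", reported to the calling app

def identify(category_guess: str, online: bool) -> LensResult:
    """Resolve an identification in the best available mode.

    Common categories run locally; rare ones fall back to the cloud
    when a connection exists, and fail otherwise.
    """
    if category_guess in LOCAL_CATEGORIES:
        return LensResult(category_guess, "on_device")
    if online:
        return LensResult(category_guess, "cloud")
    raise ConnectionError("rare category requires an internet connection")
```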