What GeraLens Will Be: Point, See, Do
Published 21 April 2026 · 6 min read
The product in a paragraph
GeraLens is what you get when you pair the Gera services portfolio with a camera and a visual-recognition model. You point; the lens understands what you are looking at; the most useful Gera action for that object is one tap away. Over time the lens will run ambiently on AR glasses; for now it is a mobile camera flow.
Concrete scenarios
- Point at a restaurant: reservation, menu, delivery from GeraEats.
- Point at a dripping tap: plumber quote from GeraHome.
- Point at a medication box: refill reminder, contraindication check, GeraClinic pharmacist question.
- Point at a For-Sale sign: property lookup and tour booking on GeraRent.
- Point at a product in a shop: compare price and availability on GeraMarket.
- Point at a car on the street: a ride via GeraRide to the same destination.
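The pattern behind every scenario above is the same: a recognised category maps to one suggested service and action. A minimal sketch of that mapping, with the caveat that every category name and action string below is invented for illustration (GeraLens has not published an API or category taxonomy):

```python
# Hypothetical category-to-action mapping, mirroring the scenarios above.
# All keys and strings are illustrative assumptions, not a real GeraLens API.
SUGGESTED_ACTIONS = {
    "restaurant":     ("GeraEats",   "Reserve a table or order delivery"),
    "dripping_tap":   ("GeraHome",   "Request a plumber quote"),
    "medication_box": ("GeraClinic", "Set a refill reminder"),
    "for_sale_sign":  ("GeraRent",   "Look up the property and book a tour"),
    "retail_product": ("GeraMarket", "Compare price and availability"),
    "car":            ("GeraRide",   "Book a ride"),
}

def suggest_action(category: str):
    """Return the (service, action) pair for a recognised category,
    or None when the lens has nothing useful to offer."""
    return SUGGESTED_ACTIONS.get(category)
```

Returning `None` for unknown categories matters: the lens should stay silent rather than guess when recognition confidence is low.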
What the lens is not
- Not facial recognition. We will never identify strangers in your camera feed.
- Not an ad surface. The lens surfaces utility, not promoted content.
- Not always-on by default. The camera activates when you open the lens.
Timeline
- Q4 2026 — mobile camera reference build, 10 recognisable categories.
- 2027 — mobile pilot across iOS + Android, 50+ categories.
- 2028 — AR glasses compatibility (assuming viable consumer AR hardware).
- 2030 — ambient default, integrated with GeraNexus for one-tap commit.
How it fits the Gera stack
GeraLens is the visual-input front door to the rest of the portfolio. Once the lens understands what the object is, it uses GeraNexus for the transactional handshake, GeraMind for personal-context pre-fill, and the appropriate Gera vertical for the commit.
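The hand-off described above can be sketched as a simple pipeline. Everything here is an assumption: none of the Gera services has a published interface, so the function signatures and the `Recognition` type below are invented for illustration only:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical types and stages; no Gera APIs exist yet.
@dataclass
class Recognition:
    category: str   # e.g. "restaurant"
    entity_id: str  # e.g. a place identifier

def lens_flow(frame,
              recognise: Callable,   # visual-recognition model
              gera_mind: Callable,   # personal-context pre-fill
              gera_nexus: Callable,  # transactional handshake
              vertical: Callable):   # the matching Gera vertical
    """One pass through the lens: recognise the object in the frame,
    pre-fill from personal context, hand off the handshake, commit."""
    rec: Optional[Recognition] = recognise(frame)
    if rec is None:
        return None                       # nothing useful in view
    prefill = gera_mind(rec)              # GeraMind: context pre-fill
    handshake = gera_nexus(rec, prefill)  # GeraNexus: handshake
    return vertical(handshake)            # vertical: one-tap commit
```

The point of the sketch is the ordering: recognition happens first, context and handshake are injected before any vertical is touched, and an unrecognised frame short-circuits to nothing.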
How to follow along
Design drafts will be published on this blog and at /research. Join the waitlist; we will prioritise researchers working on computer-vision safety and on-device recognition.
Help us design ambient discovery.
Join the waitlist