Autonomous mobile robots
Visual odometry, occupancy mapping, and obstacle avoidance delivering 99.8% uptime and <1 cm path accuracy. QoS-tuned ROS 2 nodes keep fleets in sync.
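"QoS-tuned" in practice means matching delivery guarantees to topic roles: sensor streams favor freshness, fleet-coordination topics favor reliable delivery. The sketch below models that policy choice with plain Python (no ROS 2 dependency); the topic-naming convention and depth values are illustrative assumptions, not our shipped configuration.

```python
from dataclasses import dataclass

# Illustrative QoS tuning: sensor streams favor freshness (best-effort,
# shallow history), coordination topics favor guaranteed delivery.
@dataclass(frozen=True)
class QoS:
    reliability: str  # "reliable" or "best_effort"
    depth: int        # history depth (queue size)

def qos_for_topic(topic: str) -> QoS:
    # Hypothetical naming convention: sensor topics live under "/sensors/".
    if topic.startswith("/sensors/"):
        return QoS(reliability="best_effort", depth=1)
    return QoS(reliability="reliable", depth=10)

print(qos_for_topic("/sensors/front_camera"))
print(qos_for_topic("/fleet/task_assignments"))
```

In rclpy the same idea maps onto `QoSProfile` with `ReliabilityPolicy` and a history depth.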
Autonomous vision in motion — AMR, SLAM, safety, inspection.
AMR / SLAM
eye³ generative AI vision stacks integrate deeply with AMRs, autonomous forklifts, and robotic cells. We pair optics, Edge-AI silicon, ROS 2 nodes, and HMI tooling to shorten deployment cycles and boost reliability.

Multi-camera fusion (RGB + IR), loop closure, and bundle adjustment maintain localization in low-texture environments and crowded warehouses.
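Loop closure exists because composed odometry increments drift; recognizing a previously seen place yields a residual that a pose-graph optimizer redistributes over the trajectory. A toy SE(2) sketch (assumed values, not our pipeline):

```python
import math

# Toy SE(2) odometry: compose incremental motions in the robot frame,
# then measure the loop-closure residual on returning to the start.
def compose(pose, delta):
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Drive a 1 m square: four forward legs, each followed by a 90-degree turn.
pose = (0.0, 0.0, 0.0)
for _ in range(4):
    pose = compose(pose, (1.0, 0.0, math.pi / 2))

# Residual = distance from the start pose; with real, noisy odometry this
# is nonzero, and bundle adjustment spreads the correction over all poses.
residual = math.hypot(pose[0], pose[1])
print(f"loop-closure residual: {residual:.6f} m")
```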
Safety & Inspection
Latency-tuned human detection, adaptive geo-fencing, and safety PLC hooks help meet SIL/PL requirements without sacrificing productivity.
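At its core, a geo-fence check is a per-detection, per-frame point-in-polygon test, which is why it can stay within a tight latency budget. A minimal ray-casting sketch (the zone coordinates are hypothetical):

```python
# Toy geo-fence: ray-casting point-in-polygon test, the kind of check a
# safety layer runs for each detected person in each frame.
def inside(point, polygon):
    x, y = point
    hit = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count edge crossings of a horizontal ray from the point.
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                hit = not hit
    return hit

fence = [(0, 0), (4, 0), (4, 3), (0, 3)]  # hypothetical keep-out zone, metres
print(inside((2, 1), fence))   # detection inside the zone -> True
print(inside((5, 1), fence))   # detection outside the zone -> False
```

"Adaptive" fencing then amounts to updating the polygon at runtime, e.g. around a moving robot arm.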
Edge defect detection, OCR, and spectral analysis using quantized CNNs. Optics/ISP tuning plus dashboards reduce false rejects and downtime.
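Quantized CNNs trade float weights for int8 values related to reals by an affine map, real ≈ scale × (q − zero_point), which is what makes them cheap on edge silicon. A sketch of that arithmetic with illustrative scale and zero-point values (not taken from any shipped model):

```python
# Affine int8 quantization as used by TFLite-style quantized CNNs:
# real ~= SCALE * (q - ZERO_POINT). Values below are illustrative.
SCALE, ZERO_POINT = 0.05, -10

def quantize(x: float) -> int:
    q = round(x / SCALE) + ZERO_POINT
    return max(-128, min(127, q))  # clamp to the int8 range

def dequantize(q: int) -> float:
    return SCALE * (q - ZERO_POINT)

x = 1.37
q = quantize(x)
print(q, dequantize(q))  # round-trip error is bounded by SCALE / 2
```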
Platform
Sub-10 ms inference for multi-object grasping with calibration services, hand-eye alignment, and fleet analytics.
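Hand-eye alignment reduces to chaining coordinate frames: an object seen in the camera frame is mapped into the robot base frame via base→gripper→camera transforms. A simplified 2D homogeneous-coordinate sketch with assumed example transforms (real calibration solves for the gripper-to-camera transform in 3D):

```python
# Hand-eye frame chaining in 2D homogeneous coordinates (simplified sketch;
# production calibration estimates T_gripper_cam in 3D).
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

# Assumed example poses: gripper in the base frame, camera offset on the
# gripper, and an object detected in the camera frame.
T_base_gripper = translation(0.50, 0.20)
T_gripper_cam = translation(0.00, 0.05)
T_cam_obj = translation(0.10, 0.00)

# Chain the frames: object position expressed in the robot base frame.
T_base_obj = matmul(matmul(T_base_gripper, T_gripper_cam), T_cam_obj)
print(T_base_obj[0][2], T_base_obj[1][2])
```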
| SoC | Strengths | Workloads | Notes |
|---|---|---|---|
| Rockchip RK3588 | High throughput & video IO | VIO, detection, segmentation | Great for AMRs with rich IO |
| Raspberry Pi | Turnkey IO + longevity | Safety I/O, open-source ecosystem | Rapid prototyping |
| Renesas RZ/V2N | Efficient multi-model inference | Pose estimation, OCR | Low-power analytics |
Most AI cameras only detect — they do not understand intent. They’re cloud-tethered, bandwidth-heavy, and fragile when the workcell changes.
The Edge AI vision market exceeds $25B and grows 20% yearly. Robotics, safety, and accessibility teams need cloud-free, self-learning systems. eye³ enables that leap.
A vision-native AI agent: an embedded generative vision camera built around a 20+ TOPS Edge-AI SoC and an on-device 7B LLM. Reasoning, generation, and adaptation all run locally.
Multi-camera inputs → 20 TOPS AI SoC → Embedded 7B LLM → Outputs (sorting / reasoning / PLC / cloud sync)
The shared DDR bus links vision + language for continuous learning loops at the edge.
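The pipeline above can be sketched as three stages: the SoC perceives, the LLM reasons over what was perceived, and an output stage dispatches the decision. All stage functions below are stand-in stubs, not our firmware; on the device the perception and language models share DDR rather than passing Python objects.

```python
# Minimal sketch of the camera -> SoC -> LLM -> output pipeline.
def detect(frame):
    # Stub perception stage: pretend the SoC found one labeled object.
    return [{"label": "box", "damaged": frame % 2 == 0}]

def reason(detections):
    # Stub LLM stage: turn detections into an actionable decision.
    return "reject" if any(d["damaged"] for d in detections) else "pass"

def dispatch(decision):
    # Output stage: sorting command, PLC write, or cloud sync payload.
    return {"plc_cmd": decision}

for frame in range(3):  # stand-in for incoming MIPI camera frames
    print(dispatch(reason(detect(frame))))
```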
| Feature | Specification |
|---|---|
| AI performance | 20+ TOPS Edge AI SoC |
| Co-processor | Embedded 7B LLM |
| Memory | 5+ GB shared DDR |
| Vision inputs | Up to 4× MIPI cameras |
| Interfaces | PCIe · USB 3.0 · RGMII |
| Frameworks | TensorFlow Lite · ONNX · PyTorch |
| Mode | 100% offline edge compute |
| Platform | AI capability | Cloud dependence | Self-learning | Generative vision |
|---|---|---|---|---|
| NVIDIA Jetson | Detection only | High | No | No |
| Luxonis OAK-D | Detection only | Medium | No | No |
| Hikvision AI | Surveillance | High | No | No |
| eye3.ai | Generative + reasoning | None | Yes | Yes |
| Phase | Milestone | Timeline |
|---|---|---|
| Prototype | Hardware + firmware demo | Q4 2025 |
| Pilot testing | Field validation | Q1 2026 |
| Certification | CE / FCC / EMC | Q3 2026 |
| Generative Vision 2.0 | Enhanced LLM integration | 2026–27 |
We invite strategic partners to co-create the next generation of self-learning AI eyes. Join us for R&D pilots, manufacturing scale-up, and SDK early access.
See partner programs.
Beyond vision: empathic generative AI at the edge.
eye3.ai VISIONOS: a vision-native agent that runs directly at the edge.