eye³ aiVISIONOS Robotics

Autonomous vision in motion — AMR, SLAM, safety, inspection.

AMR / SLAM

AMR & SLAM systems

eye³ generative AI vision stacks integrate deeply with AMRs, autonomous forklifts, and robotic cells. We pair optics, Edge-AI silicon, ROS 2 nodes, and HMI tooling to shorten deployment cycles and boost reliability.

AMR navigation

Autonomous mobile robots

Visual odometry, occupancy mapping, and obstacle avoidance deliver 99.8% uptime and <1 cm path accuracy. QoS-tuned ROS 2 nodes keep fleets in sync.
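
As an illustration, here is a minimal rclpy sketch of the kind of QoS tuning involved; the topic name, message type, and queue depth are assumptions, not the shipped configuration.

# Minimal sketch of a QoS-tuned ROS 2 subscriber (rclpy). Topic name and
# depth are illustrative placeholders, not the production settings.
import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, ReliabilityPolicy, HistoryPolicy, DurabilityPolicy
from nav_msgs.msg import Odometry


class FleetSyncNode(Node):
    def __init__(self):
        super().__init__("fleet_sync")
        # Best-effort, shallow history: favour freshness over completeness for
        # high-rate odometry so a slow subscriber never backs up the publisher.
        sensor_qos = QoSProfile(
            reliability=ReliabilityPolicy.BEST_EFFORT,
            history=HistoryPolicy.KEEP_LAST,
            depth=5,
            durability=DurabilityPolicy.VOLATILE,
        )
        self.create_subscription(Odometry, "/amr/odom", self.on_odom, sensor_qos)

    def on_odom(self, msg: Odometry):
        # Forward the latest pose to the fleet coordinator (placeholder).
        self.get_logger().debug(f"pose x={msg.pose.pose.position.x:.3f}")


def main():
    rclpy.init()
    rclpy.spin(FleetSyncNode())


if __name__ == "__main__":
    main()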

SLAM mapping

Visual SLAM & mapping

Multi-camera fusion (RGB + IR), loop closure, and bundle adjustment maintain localization in low-texture environments and crowded warehouses.
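
For intuition, a toy 2D pose-graph sketch of how a single loop-closure constraint pulls drifted odometry back into a consistent map; the measurements, noise levels, and SciPy solver are illustrative stand-ins, not the production SLAM backend.

# Toy 2D pose-graph optimization: odometry edges plus one loop closure.
# Values and solver choice are illustrative only.
import numpy as np
from scipy.optimize import least_squares

# Relative motions (dx, dy, dtheta) between consecutive poses, tracing a
# 1 m square; the loop closure says pose 4 should land back on pose 0.
odometry = [(1.0, 0.0, np.pi / 2)] * 4
loop_closure = (0, 4, (0.0, 0.0, 0.0))  # (from index, to index, relative motion)


def wrap(a):
    """Wrap an angle to (-pi, pi] so orientation residuals stay small."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi


def predict(pose, delta):
    """Compose pose (x, y, theta) with a relative motion in its own frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return np.array([x + dx * np.cos(th) - dy * np.sin(th),
                     y + dx * np.sin(th) + dy * np.cos(th),
                     th + dth])


def pose_error(predicted, actual):
    e = predicted - actual
    e[2] = wrap(e[2])
    return e


def residuals(flat_poses):
    poses = flat_poses.reshape(-1, 3)
    res = [poses[0]]  # anchor pose 0 at the origin
    for i, delta in enumerate(odometry):
        res.append(pose_error(predict(poses[i], delta), poses[i + 1]))
    i, j, delta = loop_closure
    res.append(pose_error(predict(poses[i], delta), poses[j]))
    return np.concatenate(res)


# Initial guess: dead-reckoned odometry with injected drift, which the
# loop closure then pulls back toward a consistent square.
rng = np.random.default_rng(0)
guess = [np.zeros(3)]
for delta in odometry:
    guess.append(predict(guess[-1], np.array(delta) + rng.normal(0, 0.05, 3)))

solution = least_squares(residuals, np.concatenate(guess))
print(solution.x.reshape(-1, 3).round(3))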

Safety & Inspection

Safety & inspection

Safety zones

Dynamic safety zones

Latency-tuned human detection, adaptive geo-fencing, and safety PLC hooks help meet SIL/PL requirements without sacrificing productivity.
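
A minimal sketch of a speed-scaled protective zone, assuming a single closest-person distance from the detector; the radii, growth factor, and I/O hooks are placeholders, not a certified SIL/PL implementation.

# Illustrative speed-scaled safety-zone check. All values are placeholders;
# a certified implementation must follow the applicable SIL/PL analysis.
from dataclasses import dataclass


@dataclass
class ZoneConfig:
    warning_m: float = 2.5     # slow the robot inside this radius
    protective_m: float = 1.2  # stop the robot inside this radius
    growth_s: float = 0.5      # extra radius per m/s of robot speed


def zone_action(person_distance_m: float, robot_speed_mps: float,
                cfg: ZoneConfig = ZoneConfig()) -> str:
    """Return 'run', 'slow', or 'stop' for the closest detected person."""
    # Zones expand with speed so the stopping distance is always covered.
    protective = cfg.protective_m + cfg.growth_s * robot_speed_mps
    warning = cfg.warning_m + cfg.growth_s * robot_speed_mps
    if person_distance_m <= protective:
        return "stop"   # assert the safety output / PLC stop bit here
    if person_distance_m <= warning:
        return "slow"   # request reduced speed from the motion controller
    return "run"


if __name__ == "__main__":
    print(zone_action(person_distance_m=2.0, robot_speed_mps=1.0))  # -> "slow"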

Inspection

Quality inspection

Edge defect detection, OCR, and spectral analysis using quantized CNNs. Optics/ISP tuning plus dashboards reduce false rejects and downtime.
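
A minimal sketch of running an int8-quantized TFLite classifier on a preprocessed frame; the model path, input layout, and class indexing are assumptions rather than shipped assets.

# Sketch of quantized TFLite inference at the edge. The model file and the
# int8 input assumption are illustrative, not shipped assets.
import numpy as np
import tflite_runtime.interpreter as tflite  # or: from tensorflow import lite as tflite

interpreter = tflite.Interpreter(model_path="defect_classifier_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]


def classify(frame_rgb: np.ndarray) -> int:
    """Return the most likely class index for one frame already resized to the model input."""
    scale, zero_point = inp["quantization"]
    # Quantize the float image into the int8 range the model expects.
    q = np.clip(frame_rgb / scale + zero_point, -128, 127).astype(inp["dtype"])
    interpreter.set_tensor(inp["index"], q[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    return int(np.argmax(scores))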

Platform

Platform

Pick & place

Pose & grasp planning

Sub-10 ms inference for multi-object grasping with calibration services, hand-eye alignment, and fleet analytics.
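
A short sketch of the hand-eye alignment step using OpenCV's calibrateHandEye, assuming pose lists already gathered from the robot controller and a calibration-target detector; the wrapper function and naming are illustrative.

# Hand-eye calibration sketch. The pose lists are assumed to come from the
# robot controller (gripper->base) and a calibration-target detector
# (target->camera); only the OpenCV call itself is standard API.
import cv2
import numpy as np


def calibrate(R_gripper2base, t_gripper2base, R_target2cam, t_target2cam):
    """Solve for the camera-to-gripper transform from paired pose lists."""
    R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
        R_gripper2base, t_gripper2base,
        R_target2cam, t_target2cam,
        method=cv2.CALIB_HAND_EYE_TSAI,
    )
    T = np.eye(4)
    T[:3, :3] = R_cam2gripper
    T[:3, 3] = t_cam2gripper.ravel()
    return T  # 4x4 homogeneous camera-to-gripper transform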

SoC comparison

SoC | Strengths | Workloads | Notes
Rockchip RK3588 | High throughput & video I/O | VIO, detection, segmentation | Great for AMRs with rich I/O
Raspberry Pi | Turnkey I/O + longevity | Safety I/O, open source | Rapid development for prototypes
Renesas RZ/V2N | Efficient multi-model inference | Pose estimation, OCR | Low-power analytics

1. The problem

Most AI cameras only detect — they do not understand intent. They’re cloud-tethered, bandwidth-heavy, and fragile when the workcell changes.

2. The opportunity

The Edge AI vision market exceeds $25B and grows 20% yearly. Robotics, safety, and accessibility teams need cloud-free, self-learning systems. eye³ enables that leap.

3. The eye3.aiVISIONOS

A vision-native AI agent embedded in a generative vision camera with a 20+ TOPS Edge-AI SoC and an on-board 7B LLM. Reasoning, generation, and adaptation all run locally.

4. Core concept

  • 20 TOPS Edge AI SoC with NPU / GPU / ISP
  • Embedded 7B-parameter LLM for local reasoning
  • Shared 5 GB DDR high-bandwidth memory
  • Up to 4 MIPI camera inputs
  • Generative vision & autonomous sorting pipeline
  • Fully offline self-learning operation

5. Design directions

  1. Edge AI module — developer-ready board for robotics & R&D
  2. Integrated lens unit — camera with on-board AI brain
  3. Human-assist "third eye" — wearable empathic AI

6. Technology architecture

Multi-camera inputs → 20 TOPS AI SoC → Embedded 7B LLM → Outputs (sorting / reasoning / PLC / cloud sync)

The shared DDR bus links vision + language for continuous learning loops at the edge.
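
A conceptual sketch of that loop; detect_objects and narrate are hypothetical stand-ins for the on-device detector and the 7B LLM, returning canned output so the flow runs end to end.

# Conceptual vision -> LLM handoff. detect_objects() and narrate() are
# hypothetical stand-ins for the on-board models, not real APIs.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    label: str
    confidence: float
    bbox: tuple  # (x, y, w, h) in pixels


def detect_objects(frame) -> List[Detection]:
    """Stand-in for the NPU-accelerated detector; returns canned output here."""
    return [Detection("pallet", 0.94, (120, 80, 200, 150))]


def narrate(detections: List[Detection]) -> str:
    """Stand-in for the on-board 7B LLM that explains the scene."""
    names = ", ".join(f"{d.label} ({d.confidence:.0%})" for d in detections)
    return f"Detected {names}; path ahead is partially blocked."


def step(frame):
    detections = detect_objects(frame)   # vision stage (ISP + NPU)
    explanation = narrate(detections)    # reasoning stage (local LLM)
    # Downstream outputs would drive sorting, a PLC bit, or a cloud-sync record.
    return detections, explanation


if __name__ == "__main__":
    print(step(frame=None)[1])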

7. Technical specifications

Feature | Specification
AI performance | 20+ TOPS Edge AI SoC
Co-processor | Embedded 7B LLM
Memory | 5+ GB shared DDR
Vision inputs | Up to 4× MIPI cameras
Interfaces | PCIe · USB 3.0 · RGMII
Frameworks | TensorFlow Lite · ONNX · PyTorch
Mode | 100% offline edge compute

8. Core intelligence

  • Adaptive perception — learns from real-world feedback
  • Contextual reasoning — interprets intent + environment
  • Self-learning engine — updates without cloud
  • Generative enhancement — reconstructs occluded data
  • Local explanation — LLM narrates vision output

9. Competitive landscape

Platform | AI capability | Cloud dependence | Self-learning | Generative vision
NVIDIA Jetson | Detection only | High | No | No
Luxonis OAK-D | Detection only | Medium | No | No
Hikvision AI | Surveillance | High | No | No
eye3.ai | Generative + reasoning | None | Yes | Yes

10. Target markets

  • Robotics & automation
  • Assistive AI / accessibility
  • Smart factories & inspection
  • Environmental / exploration
  • R&D labs & education

11. Business model

  1. Hardware sales (modules + devices)
  2. Edge SDK & LLM vision licensing
  3. OEM co-development & integration
  4. B2B generative vision subscriptions

12. Manufacturing & supply chain

  • Design: United Kingdom · LBE R&D centre
  • Materials: OEM-ready, scalable components
  • Support: UK–Far East dual engineering team

13. Go-to-market strategy

  • Prototype + internal demo (Q3–Q4 2025)
  • Distributor & OEM partnerships (2026)
  • Edge AI SDK licensing expansion (2026–27)

14. Roadmap 2025–2027

Phase | Milestone | Timeline
Prototype | Hardware + firmware demo | Q4 2025
Pilot testing | Field validation | Q1 2026
Certification | CE / FCC / EMC | Q3 2026
Generative Vision 2.0 | Enhanced LLM integration | 2026–27

15. Collaboration

We invite strategic partners to co-create the next generation of self-learning AI eyes. Join us for R&D pilots, manufacturing scale-up, and SDK early access.

See partner programs

Beyond vision — empathic generative AI at the edge.
eye3.aiVISIONOS — a vision-native agent that runs directly at the edge.

Contact engineering