Eureka setup: From unboxing to production in an afternoon

How QuickCalibrate™ technology and pre-trained AI models eliminate weeks of vision system setup.

With Eureka's AI Vision System, you can go from unpacking the hardware to running your first successful picks in a single afternoon. Here's how it actually works.

For detailed setup guidance, please refer to Eureka’s User Manual.

The setup timeline for Eureka’s AI Vision System

We've watched hundreds of engineers deploy Eureka systems. The pattern is consistent: physical installation takes about an hour, calibration takes 30 minutes, and then you spend the rest of your afternoon getting your first picks dialed in. Let's break down what happens in each phase.

1. Physical installation (about 1 hour)

Mount the Eureka 3D Camera either fixed to a frame (most common for tabletop picking) or attached to your robot's end-effector (for deep bins or multi-station work). The camera comes with mounting brackets and adapter plates for major robot brands.

Connect the camera to the PoE switch and the switch to the Controller; the camera's single PoE cable carries both power and data. Plug in the Controller's power supply, and you're done with hardware. For detailed mounting instructions and network configuration, see the complete setup guide.
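
If you want a quick sanity check before opening the Controller GUI, a one-line ping from any machine on the same switch confirms the camera is on the network. A minimal sketch, not part of the Eureka toolchain: the IP address is a placeholder for whatever your network assigns, and the ping flags assume a Linux host.

```python
# Quick reachability check for the camera; CAMERA_IP is a
# placeholder -- use the address from your own network setup.
import subprocess

CAMERA_IP = "192.168.1.100"

def camera_reachable(ip: str) -> bool:
    """Send one ping with a 2-second timeout; True means a reply came back."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", ip],
                            capture_output=True)
    return result.returncode == 0

print("camera reachable:", camera_reachable(CAMERA_IP))
```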

2. QuickCalibrate™ process (30 minutes)

This is where Eureka saves you the most time compared to traditional vision systems.

Every camera ships with a calibration board. Mount it to your robot's end-effector, then follow the semi-automated calibration routine in the Controller GUI. The robot moves the board to several positions while the camera captures images and the system calculates the precise spatial relationship between camera and robot coordinates.

Result: Sub-millimeter picking accuracy (<0.2 mm) in 30 minutes.
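
Eureka doesn't publish QuickCalibrate's internals, but the underlying math is the classic hand-eye calibration problem: solving AX = XB for the camera-to-robot transform. Here's a minimal sketch using OpenCV's cv2.calibrateHandEye, with synthetic poses standing in for the images the real routine captures. It shows the eye-in-hand (camera-on-robot) variant; the fixed-mount case differs only in which pose chain you feed the solver.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)

def rand_pose():
    """Random rigid transform (R, t) for synthetic test data."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, (3, 1)))
    t = rng.uniform(-0.1, 0.1, (3, 1))
    return R, t

def to_h(R, t):
    """Stack (R, t) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t.ravel()
    return T

# Ground-truth camera->gripper transform the solver should recover.
T_cam2grip = to_h(*rand_pose())
# Calibration board fixed somewhere in the robot base frame.
T_target2base = to_h(*rand_pose())

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):                               # ten robot positions
    R, t = rand_pose()
    T_grip2base = to_h(R, t)
    # Board pose as the camera would see it from this robot position.
    T_target2cam = np.linalg.inv(T_grip2base @ T_cam2grip) @ T_target2base
    R_g2b.append(R); t_g2b.append(t)
    R_t2c.append(T_target2cam[:3, :3]); t_t2c.append(T_target2cam[:3, 3:])

R_est, t_est = cv2.calibrateHandEye(
    R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
print("true camera->gripper translation:", T_cam2grip[:3, 3])
print("recovered translation:           ", t_est.ravel())
```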

You only recalibrate if you physically move the camera or change tools. For in-hand cameras with frequent gripper swaps, it's the same quick 30-minute process each time.

While calibration runs, you'll download pre-trained AI models for depth reconstruction—choose between speed, balanced, or accuracy models depending on your application. First download takes 5-10 minutes, then models cache locally.
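
The model tiers boil down to a speed/accuracy trade-off you pick once per application. As a purely illustrative sketch (these names and the selection rule are invented, not Eureka's actual configuration schema):

```python
# Illustrative only: names and rule are invented, not Eureka's schema.
DEPTH_MODELS = {
    "speed":    "shortest reconstruction time, coarser depth",
    "balanced": "default trade-off for most cells",
    "accuracy": "densest depth, longest cycle time",
}

def choose_model(cycle_time_s: float, tolerance_mm: float) -> str:
    """Toy selection rule: tight tolerances win, then fast cycles."""
    if tolerance_mm < 0.5:
        return "accuracy"
    if cycle_time_s < 2.0:
        return "speed"
    return "balanced"

print(choose_model(cycle_time_s=1.5, tolerance_mm=1.0))  # -> speed
```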

3. First picks and optimization (rest of afternoon)

Place your parts in the camera's view and capture an image. The AI generates a 3D point cloud, and for simple parts like metal stampings or plastic molded components, Eureka's pre-trained detection model often identifies pick points automatically.
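
To make the data flow concrete, here's a toy heuristic that scans a point cloud for the highest roughly-flat spot, the kind of candidate a suction gripper could reach. Eureka's detection model is learned from data, so this hand-rolled rule is an illustration of the problem shape, not the production algorithm.

```python
# Toy pick-point heuristic over an (N, 3) point cloud in meters, z up.
import numpy as np

def naive_pick_point(cloud: np.ndarray, radius: float = 0.01) -> np.ndarray:
    """Return the highest point whose local neighborhood is roughly flat."""
    for i in np.argsort(cloud[:, 2])[::-1]:          # highest z first
        p = cloud[i]
        near = np.linalg.norm(cloud[:, :2] - p[:2], axis=1) < radius
        patch = cloud[near]
        if len(patch) >= 5 and patch[:, 2].std() < 0.002:
            return p                                 # flat enough to pick
    return cloud[np.argmax(cloud[:, 2])]             # fall back to topmost

# Synthetic "layer of parts": a mostly flat surface at z = 0.10 m.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 0.3, (5000, 2))
z = 0.10 + 0.001 * rng.standard_normal(5000)
cloud = np.column_stack([xy, z])
print("candidate pick point:", np.round(naive_pick_point(cloud), 4))
```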

Configure your approach heights, pick offsets, and retreat distances in the GUI, then run your first pick cycle. Don't expect perfection immediately—you'll tweak camera exposure, gripper angles, and collision zones based on what you see. This is normal optimization, not debugging a broken system.
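
The offsets themselves are just vectors added to the pick pose. A minimal sketch; the offset values below are examples, and in practice they come straight from the GUI fields described above.

```python
# Deriving approach and retreat waypoints from a pick point.
import numpy as np

def pick_waypoints(pick_xyz, approach_height=0.050, retreat_height=0.080):
    """Hover above the part, descend to pick, then lift clear."""
    pick = np.asarray(pick_xyz, dtype=float)
    approach = pick + [0.0, 0.0, approach_height]   # hover above part
    retreat = pick + [0.0, 0.0, retreat_height]     # clear the bin
    return approach, pick, retreat

for name, wp in zip(("approach", "pick", "retreat"),
                    pick_waypoints([0.42, -0.10, 0.06])):
    print(f"{name:8s} -> {np.round(wp, 3)}")
```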

Most engineers get reliable picking working within their first afternoon.

Debugging and production variables

Vision systems have a reputation for being finicky, and that's the most common concern we hear. It's a valid one. Here's why Eureka is different.

You see exactly what the camera sees

The Controller GUI displays raw stereo pairs, color-coded depth maps, rotatable 3D point clouds, and confidence scores for detected objects. If something looks wrong, you see it immediately. When a pick fails, you watch it happen and see exactly what the system saw—no black box guessing.
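
The GUI renders all of this for you, but the same views are easy to reproduce outside it, for example in a commissioning report. A sketch with random arrays standing in for a real capture:

```python
# Color-coded depth map plus a low-confidence overlay, the two views
# you'll consult most when a pick fails. Data here is synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
depth = 0.5 + 0.1 * rng.random((240, 320))   # stand-in depth map, meters
confidence = rng.random((240, 320))          # stand-in per-pixel confidence

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
im = ax1.imshow(depth, cmap="viridis")
ax1.set_title("depth (m)")
fig.colorbar(im, ax=ax1)
low = np.ma.masked_where(confidence > 0.3, confidence)
im = ax2.imshow(low, cmap="autumn")          # show only weak pixels
ax2.set_title("low-confidence pixels")
fig.colorbar(im, ax=ax2)
plt.tight_layout()
plt.show()
```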

Real factory conditions don't break it

Eureka systems are running in production right now handling:

  • Varying ambient lighting (windows, cycling fluorescents)
  • Part variation within spec (molding flash, tolerance stackup)
  • Cluttered bins with overlapping parts
  • Dirty surfaces with oil, dust, fingerprints

Over 25 million picks across automotive, aerospace, and electronics applications prove the system handles real factory variables. Toyota, Pratt & Whitney, and Denso aren't running prototype systems—they're running proven production equipment.

You can fix most issues yourself, or get Eureka on-site support within 24 hours

Traditional systems require vendor engineers for troubleshooting. With Eureka's transparent diagnostics and built-in adjustment tools, you handle most issues by tweaking exposure, working distance, or confidence thresholds.
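
A typical self-fix is walking exposure up or down until the image stops being too dark or blown out. A sketch of that loop; the camera object and its set_exposure/grab methods are hypothetical stand-ins for your actual SDK calls.

```python
# Self-service exposure tuning; the loop logic is the point.
import numpy as np

def tune_exposure(camera, lo=80, hi=170, start_us=10_000, max_iters=10):
    """Nudge exposure until mean image brightness lands in [lo, hi]."""
    exposure_us = start_us
    for _ in range(max_iters):
        camera.set_exposure(exposure_us)   # hypothetical SDK call
        frame = camera.grab()              # hypothetical SDK call -> ndarray
        mean = float(np.mean(frame))
        if lo <= mean <= hi:
            break                          # brightness is in the target band
        # Brighten dark images, darken blown-out ones.
        exposure_us = int(exposure_us * (1.5 if mean < lo else 0.66))
    return exposure_us
```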

For the rare complex problem, 24/7 global support is available, with local application engineering from our Atlanta office for US customers.

Training new parts yourself

Here's Eureka's biggest advantage over competitors like Apera.

Traditional approach: Call vendor → wait 1-2 weeks → pay $5,000+ → wait more days for validation → hope nothing changes

Eureka approach: Capture 20-50 images → annotate in ML Studio (5-10 minutes) → train on cloud (1-2 hours, runs in background) → deploy and test

Total time: 2-3 hours end to end, most of it unattended cloud training. The hands-on work fits in a lunch break.
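
In code-shaped outline, the loop looks like this. The MLStudioClient object and every method on it are invented for illustration; the real workflow is driven from the ML Studio GUI.

```python
# Hypothetical outline of the self-service retraining loop.
from dataclasses import dataclass, field

@dataclass
class NewPartJob:
    part_name: str
    image_paths: list = field(default_factory=list)   # 20-50 captures

def retrain_new_part(client, job: NewPartJob):
    """Mirror the four steps: capture, annotate, train, deploy."""
    dataset = client.upload_images(job.image_paths)   # your captured images
    labels = client.auto_segment(dataset)             # ~5-10 min of review
    model = client.train(dataset, labels)             # 1-2 h, runs on cloud
    client.deploy(model)                              # push to the Controller
    return model
```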

No CAD required

Eureka learns from camera images, not CAD models. This is critical for companies dealing with:

  • Legacy parts where CAD doesn't exist
  • Supplier components with slight variations
  • Machined parts where as-cut dimensions differ from nominal
  • Damaged or worn parts that don't match CAD anymore

ML Studio's smart segmentation identifies part boundaries automatically—click once and it segments across your entire image set. For high-mix manufacturing where you're constantly introducing new parts, this self-service capability changes everything.
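
As a generic illustration of click-to-segment (this is not ML Studio's algorithm; OpenCV's floodFill is a simple stand-in that grows a single seed click into a part mask):

```python
# One click in, one part mask out.
import cv2
import numpy as np

img = np.zeros((200, 200, 3), np.uint8)
cv2.circle(img, (100, 100), 40, (180, 180, 180), -1)  # synthetic "part"

click = (100, 100)                            # the user's one click
mask = np.zeros((202, 202), np.uint8)         # floodFill wants a +2 border
cv2.floodFill(img.copy(), mask, click, (255, 0, 0),
              loDiff=(10, 10, 10), upDiff=(10, 10, 10))
part_mask = mask[1:-1, 1:-1]                  # 1 where the segment grew
print("segmented pixels:", int(part_mask.sum()))
```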

What you’ll need

Hardware:

  • Eureka 3D Camera
  • Eureka Controller
  • Compatible robot arm (see supported brands)
  • PoE network switch (any standard gigabit PoE switch)
  • Calibration board (included)

Optional:

  • PLC for production integration (EtherNet/IP or Modbus TCP; see the sketch after this list)
  • Pattern projector for highly textureless parts (EFFI-Lase V2, Opto Engineering)
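
For the Modbus TCP option, the handshake from the vision side can be as small as a trigger write and a result read. A sketch assuming the pymodbus library (3.x); the IP address and register map are placeholders your integration would define.

```python
# Minimal PLC handshake over Modbus TCP with pymodbus 3.x.
from pymodbus.client import ModbusTcpClient

PLC_IP = "192.168.1.50"              # placeholder address
TRIGGER_REG, RESULT_REG = 100, 101   # placeholder register map

client = ModbusTcpClient(PLC_IP)
if client.connect():
    client.write_register(TRIGGER_REG, 1)                # request a pick cycle
    rr = client.read_holding_registers(RESULT_REG, count=1)
    if not rr.isError():
        print("pick result code:", rr.registers[0])
    client.close()
```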

Skills:

  • Basic robot programming (teach points, I/O)
  • Factory network knowledge (IP addresses)
  • Mechanical aptitude (mounting, cabling)

No vision expertise, machine learning knowledge, or advanced programming required.