
Robotic Object Recognition System


Computer-Aided Architectural Design Research in Asia, CAADRIA, Hong Kong, 2021


A vision-based object recognition method and user interface that enables on-site robot arms to autonomously handle building components and construct specific designs, regardless of material, shape, and environment.


COMPUTER VISION

The eye-in-hand configuration mounts the sensor (a RealSense depth camera) on the robot end-effector, giving a partial but precise view of the scene.

This configuration is used here because it allows the camera position to be changed flexibly and avoids the camera being occluded by the robotic arm.
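Because the camera moves with the wrist in an eye-in-hand setup, a point detected in the camera frame must be chained through the camera-to-flange offset (found by hand-eye calibration) and the current end-effector pose to reach robot base coordinates. The sketch below illustrates this chaining with homogeneous transforms; the specific poses and names (`T_base_ee`, `T_ee_cam`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def pose_to_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# T_base_ee: end-effector pose reported by the robot controller (assumed values).
# T_ee_cam:  fixed camera-to-flange offset obtained from hand-eye calibration.
T_base_ee = pose_to_matrix(np.eye(3), np.array([0.4, 0.0, 0.3]))
T_ee_cam = pose_to_matrix(np.eye(3), np.array([0.0, 0.05, 0.02]))

def camera_to_base(p_cam: np.ndarray) -> np.ndarray:
    """Map a 3D point seen by the wrist camera into robot base coordinates."""
    p_h = np.append(p_cam, 1.0)                 # homogeneous coordinates
    return (T_base_ee @ T_ee_cam @ p_h)[:3]

# A point 0.25 m in front of the camera, expressed in the base frame.
target = camera_to_base(np.array([0.0, 0.0, 0.25]))
```

With identity rotations the result is simply the sum of the three translations, which makes the chaining easy to verify before adding real calibration data.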
 

Autonomous robotic grasping system

The autonomous robotic recognition (ARR) system detects design components and their properties without placing the robot in a controlled environment. It automatically searches for every available object or raw material in view, and it autonomously estimates the robot posture and the target angle for the end effector to grasp.
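One common way to recover an object's image centre and in-plane orientation from a segmented view is principal-component analysis of the foreground pixels. The sketch below assumes a binary mask has already been produced by an upstream segmentation step (not shown); the function name and setup are illustrative, not the paper's implementation.

```python
import numpy as np

def object_pose_from_mask(mask: np.ndarray):
    """Estimate an object's image centre and in-plane angle (degrees, [0, 180))
    from a binary mask, using PCA on the foreground pixel coordinates."""
    ys, xs = np.nonzero(mask)
    centre = np.array([xs.mean(), ys.mean()])
    pts = np.stack([xs - centre[0], ys - centre[1]])
    cov = pts @ pts.T / pts.shape[1]             # 2x2 pixel covariance
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]       # direction of the long axis
    angle = np.degrees(np.arctan2(major[1], major[0])) % 180.0
    return centre, angle

# A horizontal 4x20 bar: its long axis lies along x, so the angle should be ~0.
mask = np.zeros((50, 50), dtype=np.uint8)
mask[23:27, 10:30] = 1
centre, angle = object_pose_from_mask(mask)
```

In practice a library routine such as OpenCV's `cv2.minAreaRect` on the object contour gives the same centre-plus-angle information; PCA is used here only to keep the sketch dependency-free.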


POSE AND ANGLE ESTIMATION

A motion planning algorithm was developed to find the optimal grasping position and plan a collision-free motion to deliver the target. It is based on four elements: (A) the object's central point; (B) the angle of the detected object; (C) the end-effector rotation; and (D) grasping selection.
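Steps (C) and (D) can be sketched as two small rules: fold the detected object angle into the symmetric range of a two-finger gripper (which grasps identically at θ and θ + 180°), and pick the candidate nearest the image centre. This is a hedged illustration under those assumptions; the limits, the selection criterion, and all names are hypothetical.

```python
import math

def wrist_rotation(object_angle_deg: float) -> float:
    """Fold an object's in-plane angle into [-90, 90) for a two-finger gripper,
    exploiting the 180-degree grasp symmetry (step C)."""
    a = object_angle_deg % 180.0
    return a - 180.0 if a >= 90.0 else a

def select_grasp(candidates, image_centre=(320, 240)):
    """Choose the detected object nearest the image centre (step D).
    Each candidate is a ((x, y), angle) pair in pixel coordinates."""
    def dist(c):
        (x, y), _ = c
        return math.hypot(x - image_centre[0], y - image_centre[1])
    return min(candidates, key=dist)

detections = [((100, 80), 35.0), ((300, 250), 120.0)]  # (centre, angle) pairs
target = select_grasp(detections)
rotation = wrist_rotation(target[1])
```

Nearest-to-centre is only one plausible selection criterion; reachability or collision checks from the motion planner could equally drive the choice.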

Human-Robot Collaboration interface

To apply the system to fabrication, an autonomous construction workflow interface was designed, through which all real-time information on shapes and spatial properties is imported into a Human-Robot Collaboration (HRC) system. The system automatically finds every target, whether material or shape, in the working environment, and it also supports Computer in the Human Interaction Loop (CHIL) operation so the user can choose the desired object.
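The CHIL behaviour described above amounts to a simple selection rule: if a human requests a particular object, the interface filters the detections for it; otherwise the system falls back to fully autonomous selection. The sketch below is a minimal illustration of that rule; the labels, data shape, and function name are assumptions, not the paper's interface.

```python
def choose_object(detections, requested_label=None):
    """Return the object a human asked for; fall back to the first detection
    when no request is made or the request has no match (autonomous mode)."""
    if requested_label is not None:
        matches = [d for d in detections if d["label"] == requested_label]
        if matches:
            return matches[0]
    return detections[0] if detections else None

# Illustrative scene: two detected components with image-space centres.
scene = [
    {"label": "brick", "centre": (120, 200)},
    {"label": "timber", "centre": (340, 180)},
]
picked = choose_object(scene, requested_label="timber")   # human-in-the-loop
auto = choose_object(scene)                               # autonomous fallback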
 


Visual-based autonomous pick and place workflow

1. Home set and hand-eye calibration (performed only at the beginning)
2. Object recognition and pose estimation (system routine)
3. Grasping position
4. End-effector angle and picking the target
5. Place and assemble
6. The robot returns to the home state and detects the environment again
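The workflow above can be sketched as a loop in which calibration happens once and steps 2 to 6 repeat until no objects remain. Every step here is a stub that only logs its name; in the real system each would call the vision and motion-planning modules, so the names and structure are illustrative assumptions.

```python
def run_cycle(objects):
    """Execute the pick-and-place workflow over a list of detected objects,
    returning a trace of the steps taken."""
    log = []
    log.append("home_and_calibrate")              # step 1 (first run only)
    while objects:
        target = objects.pop(0)                   # step 2: recognise + pose
        log.append(f"detected:{target}")
        log.append("move_to_grasp_position")      # step 3
        log.append("rotate_and_pick")             # step 4
        log.append("place_and_assemble")          # step 5
        log.append("return_home")                 # step 6, then scan again
    return log

trace = run_cycle(["component_a", "component_b"])
```

Modelling the cycle as an explicit loop makes the system routine easy to extend, e.g. inserting the CHIL selection step between detection and grasping.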
 
