# KLT & Load Carrier Detection Skill

This skill detects predefined containers in the camera's field of view. The focus is on the detection of a single KLT, but other types of small load carriers are supported as well. The prerequisite for this skill is that the camera can see the edge of the container. The skill also determines various pieces of information and displays them to the user on the left side of the screen, which helps when creating a robot program. The skill offers two ways of providing position information. First, the container's position and orientation (the center of the container) can be reported, which can be used for a dynamic workspace definition. This enables intelligent, dynamic processes: in the automotive industry, for example, parts often arrive at the gripping station in a KLT on a conveyor belt, so the container is not guaranteed to be in exactly the same position every time. To compensate for this uncertainty, the skill can be used to measure the container and adapt the ROI (region of interest) accordingly.

Second, the skill can report a transformation with respect to a reference position or container. By multiplying the reference position with a taught position of the bin's handle (a matrix multiplication of homogeneous transformations), a handle point can be calculated relative to the bin. With this approach, boxes can be gripped from a conveyor and stacked on a pallet, a classic example being the return of empty bins for cleaning.
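The handle-point calculation can be sketched as a multiplication of homogeneous transformations. The following Python sketch is illustrative only: the bin pose and handle offset values are assumed, and the rotations are interpreted as rotation vectors (axis-angle), matching the `rx`/`ry`/`rz` fields in the skill's pose results.

```python
import math

def rotvec_to_matrix(rx, ry, rz):
    """Rodrigues' formula: rotation vector (axis * angle) -> 3x3 rotation matrix."""
    theta = math.sqrt(rx * rx + ry * ry + rz * rz)
    if theta < 1e-12:
        return [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
    kx, ky, kz = rx / theta, ry / theta, rz / theta
    c, s, v = math.cos(theta), math.sin(theta), 1 - math.cos(theta)
    return [
        [c + kx * kx * v,      kx * ky * v - kz * s,  kx * kz * v + ky * s],
        [ky * kx * v + kz * s, c + ky * ky * v,       ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s,  c + kz * kz * v],
    ]

def pose_to_matrix(x, y, z, rx, ry, rz):
    """Build a 4x4 homogeneous transform from position and rotation vector."""
    R = rotvec_to_matrix(rx, ry, rz)
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0, 0, 0, 1]]

def matmul4(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Assumed detected bin pose and a handle offset taught relative to the bin
bin_pose = pose_to_matrix(-0.056, -0.443, 0.573, 0.0, 0.0, math.pi / 2)
handle_offset = pose_to_matrix(0.199, 0.0, 0.171, 0.0, 0.0, 0.0)

# The handle point in the same frame as the bin detection
handle = matmul4(bin_pose, handle_offset)
handle_xyz = (handle[0][3], handle[1][3], handle[2][3])
```

Because the offset is taught relative to the bin, the same multiplication yields a valid handle point for every new detection, regardless of where the bin arrives on the conveyor.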

<table data-card-size="large" data-view="cards"><thead><tr><th></th><th data-hidden data-card-cover data-type="files"></th></tr></thead><tbody><tr><td>Euro container</td><td><a href="https://1459495663-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzRF3SV87vu3nkNgfjyt7%2Fuploads%2FVAF9yXQ4rXfgpoKHi3lN%2FBin%20Detection_KLT.png?alt=media&#x26;token=8d37e96a-c7d1-4ea6-8a9a-38d56a165f8e">Bin Detection_KLT.png</a></td></tr><tr><td>Small load carrier</td><td><a href="https://1459495663-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzRF3SV87vu3nkNgfjyt7%2Fuploads%2FliElSPUQCyEOnzMSSJ76%2FBin%20Detection_Small.png?alt=media&#x26;token=5e0fb7a7-4f39-4dad-9e38-1fe3ad88af5f">Bin Detection_Small.png</a></td></tr><tr><td></td><td></td></tr></tbody></table>

## Skill Result Information

<table data-header-hidden><thead><tr><th width="286"></th><th></th></tr></thead><tbody><tr><td><mark style="color:blue;"><strong>Position</strong></mark></td><td>Position of the detected bin in the robot's coordinate frame</td></tr><tr><td><mark style="color:blue;"><strong>Orientation</strong></mark></td><td>Orientation of the detected bin in the robot's coordinate frame</td></tr><tr><td><em><mark style="color:blue;"><strong>Item fraction</strong></mark></em></td><td>Estimated filling level</td></tr><tr><td><em><mark style="color:blue;"><strong>Bin empty</strong></mark></em></td><td>Decision whether the bin is empty or not; based on the previous parameter and an adjustable threshold</td></tr><tr><td><em><mark style="color:blue;"><strong>Changed reference pose</strong></mark></em></td><td>After saving a reference position of the bin, the skill also calculates the distance and rotation between the saved position and the new detection.</td></tr><tr><td><em><mark style="color:blue;"><strong>Distance to reference</strong></mark></em></td><td>Translational deviation from the reference pose</td></tr><tr><td><em><mark style="color:blue;"><strong>Rotation relative to reference</strong></mark></em></td><td>Rotational deviation from the reference pose</td></tr></tbody></table>

## Specifications

<table data-view="cards"><thead><tr><th align="center"></th><th></th><th></th><th></th><th></th><th data-type="files"></th></tr></thead><tbody><tr><td align="center"><mark style="color:blue;"><strong>Conditions</strong></mark></td><td><p><strong>Camera Mount:</strong> </p><ul><li>Dynamic</li><li>Static</li></ul></td><td><p><strong>Supported type:</strong> </p><ul><li>KLT</li><li>Euro container</li></ul></td><td></td><td></td><td></td></tr><tr><td align="center"><mark style="color:blue;"><strong>Specs</strong></mark></td><td><p><strong>Avg. recognition time:</strong></p><p>&#x3C; 1 second</p></td><td><p></p><p><strong>Supported grippers:</strong></p><ul><li>Parallel</li><li>Vacuum</li></ul><p></p></td><td></td><td></td><td></td></tr><tr><td align="center"><mark style="color:blue;"><strong>Features</strong></mark></td><td><ul><li>KLT / load carrier position detection</li><li>Rotated position recognition</li><li>Single instance recognition</li></ul></td><td></td><td></td><td></td><td></td></tr></tbody></table>

## Parameter Example&#x20;

To ensure accurate identification of various types of load carriers, the skill parameters can be easily adjusted to fit your specific needs. The following recommendations help you find suitable parameters for your application. For brevity, only parameters that differ from their default settings are described.

{% tabs %}
{% tab title="Euro container" %}

<figure><img src="https://1459495663-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzRF3SV87vu3nkNgfjyt7%2Fuploads%2FVAF9yXQ4rXfgpoKHi3lN%2FBin%20Detection_KLT.png?alt=media&#x26;token=8d37e96a-c7d1-4ea6-8a9a-38d56a165f8e" alt=""><figcaption><p>KLT &#x26; Load Carrier Detection Skill for standard KLT</p></figcaption></figure>

* Used Object: Industry-standard load carrier (398 x 297 x 171 mm)
* Camera Distance: 690 mm
* Camera Mount: 30° angle
* Skill Parameters:
  * Bin dimensions: according to the size of the load carrier (0.398 x 0.297 x 0.171 m). It's recommended to start with the longest edge as your x-axis
  * Edge width: according to the size of the load carrier (0.015 m)
  * Reference Pose: no change
  * Other parameters: default values
{% endtab %}

{% tab title="Small Box" %}

<figure><img src="https://1459495663-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzRF3SV87vu3nkNgfjyt7%2Fuploads%2FliElSPUQCyEOnzMSSJ76%2FBin%20Detection_Small.png?alt=media&#x26;token=5e0fb7a7-4f39-4dad-9e38-1fe3ad88af5f" alt=""><figcaption><p>KLT &#x26; Load Carrier Detection Skill for small boxes</p></figcaption></figure>

* Used Object: Small load carrier (200 x 150 x 120 mm)
* Camera Distance: 450 mm
* Camera Mount: 30° angle
* Skill Parameters:
  * Bin dimensions: according to the size of the load carrier (0.2 x 0.15 x 0.12 m). It's recommended to start with the longest edge as your x-axis
  * Edge width: according to the size of the load carrier (0.015 m)
  * Reference Pose: no change
  * Other parameters: default values
{% endtab %}
{% endtabs %}


## Technical Parameter Description <a href="#user-content-parameters" id="user-content-parameters"></a>

### Parameter

{% tabs %}
{% tab title="Bin settings" %}

<table data-view="cards"><thead><tr><th>Name</th><th>Parameter</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><mark style="color:blue;"><strong>Bin dimensions</strong></mark></td><td><code>bin_x</code>, <code>bin_y</code>, <code>bin_z</code></td><td><code>float</code></td><td>The dimensions of the bin in meters.</td></tr><tr><td><mark style="color:blue;"><strong>Edge width</strong></mark></td><td><code>edge_width</code></td><td><code>float</code></td><td>The width of the edge of the bin. If the edge does not have the same width on all four sides, the smallest width should be used. However, some bins have small gaps on the edge. In that case, better results are obtained if these gaps are ignored when determining the edge width.</td></tr></tbody></table>
{% endtab %}

{% tab title="Fit settings" %}

<table data-view="cards"><thead><tr><th>Name</th><th>Parameter</th><th>Type</th><th>Description</th></tr></thead><tbody><tr><td><mark style="color:blue;"><strong>Minimum quality for bin detections</strong></mark></td><td><code>min_score</code></td><td><code>float</code></td><td>Minimum prediction quality for pixels on a bin edge. This score can be used to tune the detection of bin edges. False detections can be suppressed by increasing the value, while decreasing the minimum quality can increase the detected fraction of the edge for some bins.</td></tr><tr><td><mark style="color:blue;"><strong>Downsampling size</strong></mark></td><td><code>downsampling</code></td><td><code>float</code></td><td>Defines the minimum distance between 3D points for the skill. All points within a cube of the specified size are replaced by one point. Smaller values for the downsampling size lead to better resolutions, while larger values lead to faster runtimes. Best results are achieved if the specified edge width is a multiple of the downsampling size. Check the documentation for some example measurements.</td></tr><tr><td><mark style="color:blue;"><strong>Maximum item coverage</strong></mark></td><td><code>item_threshold</code></td><td><code>float</code></td><td>Maximum fraction of pixels containing items for empty bins. This limit is used to tune the decision whether a detected bin is empty or not. The skill determines the fraction of pixels within a detected bin that show items (purple shaded region in the image). If this fraction is below the specified limit (maximum item coverage), the detected bin is marked as empty. Note that, while this feature works for a large variety of items, it might not work for all possible items.</td></tr><tr><td><mark style="color:blue;"><strong>Maximum distance to reference pose</strong></mark></td><td><code>distance_threshold</code></td><td><code>float</code></td><td>Maximum allowed distance to the specified reference position. The bin-detection skill was designed to be as robust as possible, allowing, e.g., the detection of a bin even if more than a single bin is present in the camera's view or if the bin is not fully visible. However, this robustness can also lead to a wrong detection (e.g., not the desired bin, or a wrong orientation). To reject undesired results, the maximum distance to a reference position can be used.</td></tr><tr><td><mark style="color:blue;"><strong>Maximum rotation relative to reference pose</strong></mark></td><td><code>rotation_threshold</code></td><td><code>float</code></td><td>Maximum allowed rotation relative to the reference pose.</td></tr></tbody></table>
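As a sketch of how these two thresholds act on a result, the following Python snippet rejects a detection whose deviation from the reference pose is too large. The threshold values, the `within_reference` helper, and the interpretation of `rx`/`ry`/`rz` as a rotation vector (whose Euclidean norm is the rotation angle) are assumptions for illustration; the field names follow the `reference_trafo` object in the detection examples.

```python
import math

# Hypothetical threshold values mirroring distance_threshold / rotation_threshold
DISTANCE_THRESHOLD = 0.05  # meters
ROTATION_THRESHOLD = 0.17  # radians (~10 degrees)

def within_reference(trafo, dist_limit=DISTANCE_THRESHOLD, rot_limit=ROTATION_THRESHOLD):
    """Accept a detection only if its deviation from the reference pose is small."""
    # Translational deviation: Euclidean norm of the offset
    distance = math.sqrt(trafo["x"] ** 2 + trafo["y"] ** 2 + trafo["z"] ** 2)
    # Rotational deviation: norm of the rotation vector equals the rotation angle
    rotation = math.sqrt(trafo["rx"] ** 2 + trafo["ry"] ** 2 + trafo["rz"] ** 2)
    return distance <= dist_limit and rotation <= rot_limit

# Deviation values as reported in the no_change detection example
trafo = {"x": 0.0053, "y": 0.0066, "z": 0.0207,
         "rx": 0.069, "ry": -0.107, "rz": -0.025}
ok = within_reference(trafo)  # within both limits for these values
```

Tightening the limits rejects more borderline detections; loosening them tolerates larger conveyor placement variation.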
{% endtab %}

{% tab title="Reference pose" %}

<table data-view="cards"><thead><tr><th>Name</th><th>Parameter</th><th>Type</th><th>Description </th></tr></thead><tbody><tr><td><mark style="color:blue;"><strong>Change reference pose</strong></mark></td><td><code>change_reference</code></td><td><a href="#reference-pose"><code>string</code></a></td><td>Selects the method to change the reference pose.</td></tr><tr><td><mark style="color:blue;"><strong>Position of reference bin</strong></mark></td><td><code>reference_pose_x</code>, <code>reference_pose_y</code>, <code>reference_pose_z</code></td><td><code>float</code></td><td>Position of the reference bin.</td></tr><tr><td><mark style="color:blue;"><strong>Orientation of reference bin</strong></mark></td><td><code>reference_pose_rx</code>, <code>reference_pose_ry</code>, <code>reference_pose_rz</code></td><td><code>float</code></td><td>Orientation of the reference bin.</td></tr></tbody></table>

#### Reference Pose Types

| Type         | Description                                                                                                                                                                                                                                                                                             |
| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `no_change`  | The reference pose remains unchanged.                                                                                                                                                                                                                                                                   |
| `fit_result` | The result of the next execution of the skill is used as reference pose for future executions. If this option is selected, the distance and rotation relative to a previous reference pose are not checked. Use this option to turn off the comparison of the detection result with the reference pose. |
| `manual`     | The reference pose is provided as skill parameter. Caution: The provided reference pose has to be in the camera's coordinate frame.                                                                                                                                                                     |
{% endtab %}

{% tab title="Expert" %}

<table data-view="cards"><thead><tr><th>Name</th><th>Parameter</th><th>Type</th><th>Description </th></tr></thead><tbody><tr><td><mark style="color:blue;"><strong>Enforce GPU usage</strong></mark></td><td><code>force_gpu</code></td><td><code>boolean</code></td><td>Enforces GPU usage for this skill. Enforcing GPU usage for one skill can disable GPU usage for other skills.</td></tr></tbody></table>

{% endtab %}
{% endtabs %}

### Detections

{% tabs %}
{% tab title="Detection Data" %}

<table data-view="cards"><thead><tr><th>Type</th><th>Description </th></tr></thead><tbody><tr><td><code>pose (Transformation)</code></td><td>The pose of the bin.</td></tr><tr><td><code>bin_empty (bool)</code></td><td>True if the bin is empty.</td></tr><tr><td><code>item_fraction (float)</code></td><td>Fraction of pixels containing items.</td></tr><tr><td><code>reference_trafo (Transformation)</code></td><td>Transformation relative to the reference pose.</td></tr></tbody></table>
{% endtab %}

{% tab title="Detection Example" %}

#### Reference Pose Types: `fit_result`

```json
{
   "detections":[
      {
         "pose":{
            "x":-0.056031276694279916,
            "y":-0.44327393228068174,
            "z":0.5726675775572347,
            "rx":-0.12817415969395238,
            "ry":0.9263117430484452,
            "rz":-2.9538682613260026
         },
         "bin_pose":{
            "x":-0.06337235849379225,
            "y":-0.3978340441934568,
            "z":0.5072357421203973,
            "rx":0.06617629091108139,
            "ry":-2.918019947534546,
            "rz":-0.9150699708718468
         },
         "bin_empty":true,
         "item_fraction":0.0,
         "reference_trafo":{
            "x":0.0,
            "y":0.0,
            "z":0.0,
            "rx":0.0,
            "ry":0.0,
            "rz":0.0
         }
      }
   ]
}
```

#### Reference Pose Types: `no_change`

```json
{
   "detections":[
      {
         "pose":{
            "x":-0.07200788946578035,
            "y":-0.4221961998324571,
            "z":0.5629487695216557,
            "rx":0.04372156978230481,
            "ry":0.8291113470163821,
            "rz":-2.990767713921765
         },
         "bin_pose":{
            "x":-0.07064287213105912,
            "y":-0.3809863358438283,
            "z":0.4943930707234182,
            "rx":-0.058720708358277335,
            "ry":2.9997349483453495,
            "rz":0.831597275889207
         },
         "bin_empty":true,
         "item_fraction":0.0,
         "reference_trafo":{
            "x":0.005281489005689269,
            "y":0.006600885911974622,
            "z":0.02074063723356856,
            "rx":0.06900890793539619,
            "ry":-0.10688376692974187,
            "rz":-0.024829240766247828
         }
      }
   ]
}
```
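Outside the robot program, such a result can be consumed as plain JSON. The following minimal Python sketch uses the field names from the examples above; the branching logic and the shortened pose values are illustrative assumptions.

```python
import json

# Result payload in the same shape as the detection examples above (shortened values)
result_json = """
{
  "detections": [
    {
      "pose": {"x": -0.056, "y": -0.443, "z": 0.573,
               "rx": -0.128, "ry": 0.926, "rz": -2.954},
      "bin_empty": true,
      "item_fraction": 0.0,
      "reference_trafo": {"x": 0.0, "y": 0.0, "z": 0.0,
                          "rx": 0.0, "ry": 0.0, "rz": 0.0}
    }
  ]
}
"""

result = json.loads(result_json)
detection = result["detections"][0]

# A program would typically branch on the emptiness decision
if detection["bin_empty"]:
    status = "send bin to cleaning"
else:
    status = "continue picking, {:.0%} covered".format(detection["item_fraction"])
```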

{% endtab %}

{% tab title="UR Programming Example" %}
The following code snippet shows how you can use the load carrier skill to detect boxes in the camera's FOV and adjust the workspace accordingly.

```
 'Select the Bin Detection Skill and 'World' workspace for detection'
     robobrain Pick Pose
       If rb_has_result()
         'Overwrite the workspace configuration'
         bin_pose≔rb_get_attr_pose("detections[0].bin_pose")
         rb_set_ws_config("pos_x", bin_pose[0])
         rb_set_ws_config("pos_y", bin_pose[1])
         rb_set_ws_config("pos_z", bin_pose[2])
         rb_set_ws_config("rot_x", bin_pose[3])
         rb_set_ws_config("rot_y", bin_pose[4])
         rb_set_ws_config("rot_z", bin_pose[5])
         'Select your item picking skill and a random workspace of the right size here'
         robobrain Pick Pose
```

<figure><img src="https://1459495663-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FzRF3SV87vu3nkNgfjyt7%2Fuploads%2F1R6b486kxXNPhuh4EBnL%2FScreenshot%202023-08-02%20085458.png?alt=media&#x26;token=a2bc52ac-8fd6-47a1-b864-2fc92e38fa94" alt=""><figcaption></figcaption></figure>
{% endtab %}
{% endtabs %}
