Sample Tube Handling
Identification and automated sorting of sample tubes
Laboratories are facing a severe labor shortage: according to a study by labo magazine, 87% of laboratories in Germany do not have enough employees to handle the daily testing workload. Demographic change adds to this problem, as an aging population requires ever more testing capacity. At the same time, laboratories already process around 4,500 samples per shift with an average of 8 employees. How can these already noticeable challenges be met efficiently and in the long term?
The vast variety of sample tubes and containers makes sorting with standard automation impossible. In a fast-moving industry it is crucial to react to changes seamlessly. AI skills make it possible to implement flexible robot solutions in labs that can cope with unforeseen changes and events.
robominds has developed several AI skills for automated sample handling. Using the intelligent robobrain® technology, these skills detect, identify, and sort incoming sample tubes in laboratories - even from bulk material - and place them into racks for the downstream workflows.
Avg. recognition time: 0.5 - 1.5 sec
First pick success rate: 99.79 %
The objective of the application is to take a flexible rack as input and sort a large variety of sample tubes into specific workflow racks that can then be taken to the subsequent testing machine.
For many labs the challenge is that samples arrive two or three times per day in large batches. Consequently, the batches must move through the full process chain as fast as possible without idle times. Typically, errors in packaging and labeling cause problems and machine downtimes along the workflow.
Therefore, the solution has two key functionalities: first, sorting the delivered samples from collection racks into workflow racks to reduce tedious manual tasks; second, setting up a quality gateway that identifies deficient tubes to reduce costly downtimes.
From a functional perspective, the presented solution focuses on handling the vast variety of samples flexibly and robustly. This results in a system that is easy to use for the operator and future-proof, since it self-adapts to changing sample types.
Furthermore, it takes into account the constraints set by typical lab infrastructure: a small footprint and good portability are necessary to reduce costs and layout changes, while the highest possible throughput improves the economic return of the solution.
The presented solution handles up to 600 samples per hour, including sorting, quality check, and sort-out. The reduced manual labor and error rates save 1,039 € per 8-hour shift, making the return on investment highly attractive.
The process can be divided into three essential sections: the input section, where the collected samples are delivered and the robot searches for and picks up tubes; the identification section, where each sample is clearly identified; and the output section, where the specific workflows are defined and the corresponding tubes are placed.
The system is designed for maximum flexibility, with the possibility to switch from upright standing samples to bulk. To save space, the maximum footprint is limited to double euro pallet dimensions.
Safety is designed according to regulations. To reduce possible human errors, the setup uses locking doors that are only released when a rack exchange is performed. For usability, the cell is equipped with signaling lights at the corners to ensure optimal status visibility.
The robobrain is the central control unit. It runs the AI operating system NEUROS, which is responsible for the entire communication (including to the backend), controls the process, and runs all AI Skills. NEUROS manages all services and their communication with each other and with third-party systems.
The process is programmed in the NEUROS control service. All components are connected via Ethernet and reside in the same network. NEUROS sends the components the corresponding commands to drive the automation process: it sends motion commands to the robots and grippers and exchanges commands and messages with the other process peripherals (e.g., code scanner, LED controller, safety PLC, tablet interface). Multi-robot control is possible and is used in this setup.
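To make this orchestration pattern concrete, the following is a minimal, hypothetical Python sketch of a control process coordinating the networked components. All class and method names are placeholders chosen for illustration; they are not the NEUROS API.

# Hypothetical sketch only - not the NEUROS API. It illustrates the pattern
# described above: a central control process sends motion commands to a robot,
# drives the gripper via the IO-Link gateway, and talks to the scanner and
# LED control box over the cell network.

class ControlProcess:
    def __init__(self, robot, gripper, scanner, leds):
        self.robot = robot      # receives motion commands over Ethernet
        self.gripper = gripper  # driven through the IO-Link gateway
        self.scanner = scanner  # code scanner for tube identification
        self.leds = leds        # LED control box for the status lights

    def handle_tube(self, detection, scan_pose, workflow_rack, reject_tray):
        """Pick one detected tube, identify it, and place it accordingly."""
        self.leds.set_status("operational")
        self.gripper.open(detection["width"])   # opening width from the skill
        self.robot.move_to(detection["pose"])   # approach the returned pick point
        self.gripper.close()
        self.robot.move_to(scan_pose)           # present the tube to the scanner
        code = self.scanner.read()
        target = workflow_rack if code else reject_tray  # sort out bad codes
        self.robot.move_to(target.next_free_slot())
        self.gripper.open(detection["width"])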
The tablet interface is the main UI for working with the cell. It includes operating functionality as well as status feedback, including a real-time 3D visualization of the robot cell.
Furthermore, the robobrain hosts and runs AI Skills. As usual, the brain stores a variety of AI Skills that can be used to run smart processes. Connected to the eye (camera), a skill uses robot vision to detect and identify objects and returns the corresponding handling instructions. These instructions are sent to the control process running on NEUROS, and robot moves and tasks are performed accordingly. Details of the AI Skills used are described in the Skill section.
In the setup two robots are used to enable tray-to-tray sorting as well as bulk-to-tray sorting. The robots are controlled by NEUROS in a joint workspace environment. Sharing the workspace requires the robots to have a common understanding of it. This means having a joint world coordinate system and real-time knowledge of robot positions and kinematics.
The joint workspace and the actual robot positions are managed and calculated in the NEUROS control environment. Through visual calibration, the coordinate system of Robot 2 is mapped into the coordinate system of Robot 1. This step merges and transforms the two separate coordinate systems into one, which results in a common workspace.
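The frame merge boils down to applying the calibrated transform between the two robot bases. The sketch below uses made-up calibration numbers purely for illustration; the actual values come from the visual calibration step.

import numpy as np

# T_1_2 is the pose of Robot 2's base expressed in Robot 1's base frame,
# as obtained from the visual calibration (values here are illustrative).

def transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Example calibration result: Robot 2 sits 1.2 m along x, rotated 180 deg about z.
Rz_180 = np.array([[-1, 0, 0], [0, -1, 0], [0, 0, 1]], dtype=float)
T_1_2 = transform(Rz_180, [1.2, 0.0, 0.0])

# A pick point known in Robot 2's frame (homogeneous coordinates).
p_2 = np.array([0.3, 0.1, 0.05, 1.0])

# The same point expressed in the shared workspace (Robot 1's frame).
p_1 = T_1_2 @ p_2
print(p_1[:3])  # -> [ 0.9 -0.1  0.05]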
The gripper is the component that directly handles the workpieces, which makes it one of the most vital parts of the entire process. The mechanism is required to grip tubes safely without the risk of decapping, since the tubes are quite small and packed with minimal gaps.
We use the Zimmer GEH-6040IL with custom fingers and fingertips. The fingers have a slim aluminum design with exchangeable fingertips. The fingertips are manufactured from plastic with a prism geometry for a secure grip. A safe and precise grip with the correct amount of force ensures that the tubes are not decapped during the process.
The gripper is controlled by NEUROS, which communicates with it via an IO-Link gateway connected to the grippers.
In the tray-to-tray configuration of the application we use the Sample Tube Detection Skill. This standard skill detects sample tubes of almost any type and in various colors and identifies optimal picking points with special regard to collision avoidance. Once the camera has taken a picture, the skill analyzes the scene and returns a list of tubes with the corresponding pick point and color. The list of high-quality pick points is sorted by height, the highest being first. For the process we chose to work through the list in this sequential order. If necessary, the order can be changed in the skill configuration or manually in the robot control process. In the following, the return value and its structure are explained in detail.
All recognized tubes are returned in a list of detections. This list is built up as:
[ {object_0} , {object_1} , .. , {object_n} ]
Each object in the list contains three pieces of information, as illustrated in the sketch after this list:
Pose representing the pick point of the corresponding tube for the robot. The pose is returned in the robot coordinate frame and is represented in Cartesian coordinates with rotation (x, y, z, rx, ry, rz).
Color Code of the detected test tube cap as a string.
Width of the required gripper opening for approaching the pick point in meters.
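As a short parsing sketch of this return value: the field names below are assumptions for illustration and may differ from the skill's actual keys. It shows how a control process could walk through the detections in the height-sorted order, reading the pick pose, cap color, and gripper opening width of each tube.

# Illustrative parsing of the detection list; the field names are assumptions.
detections = [
    {"pose": (0.42, 0.13, 0.21, 0.0, 3.14, 0.0), "color": "red",   "width": 0.013},
    {"pose": (0.40, 0.18, 0.19, 0.0, 3.14, 0.0), "color": "green", "width": 0.013},
]

# The skill already sorts by height (highest pick point first); re-sorting by
# the z component of the pose reproduces that order explicitly.
detections.sort(key=lambda d: d["pose"][2], reverse=True)

for det in detections:
    x, y, z, rx, ry, rz = det["pose"]
    print(f"pick {det['color']}-capped tube at ({x:.3f}, {y:.3f}, {z:.3f}) m, "
          f"gripper opening {det['width'] * 1000:.0f} mm")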
For optimal performance it is useful to adjust the main skill parameters. These can be checked and set in the skill settings under the Basic tab. In the following section, the main parameters are explained in detail, with the specific values used in the application as a reference.
Each pick point is evaluated by the AI for its quality in terms of pickability, collision probability, and confidence that the object is singulated. This parameter sets the minimum quality threshold: pick points with a quality score below the threshold are ignored by the system, and only pick points above it are used for further processing.
Tip: If the system ignores many pickable parts, it can be useful to reduce the set threshold.
As reference: Set to 0.8
This parameter sets the thickness of the fingertip. It is used for calculating collision prevention. Enter the actual thickness of the fingertip used.
Tip: If the system ignores many pickable parts, it can be useful to reduce the set finger thickness parameter.
As reference: Set to 0.003
This parameter sets the width of the fingertip. It is used for calculating collision prevention. Enter the actual width of the fingertip used.
Tip: If the system ignores many pickable parts, it can be useful to reduce the set finger width parameter.
As reference: Set to 0.005
This parameter defines the depth of the grasp on the tube. It offsets the picking point downwards from the cap surface. In this case it is set to 10 mm below the cap surface.
As reference: Set to 0.01
For collision detection, this parameter sets the amount of space to leave between the outside of the cap and the inside of the gripper fingers. It should be close to the jaw opening diameter minus the cap diameter, divided by two (see the short worked example below).
As reference: Set to 0.005
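A short worked example for this parameter, using assumed dimensions (the jaw opening and cap diameter below are illustrative values, not measurements from the cell):

# Assumed example dimensions - replace with the real gripper and cap values.
jaw_opening_m = 0.025   # inner diameter of the opened gripper jaws (assumed)
cap_diameter_m = 0.015  # outer diameter of the tube cap (assumed)

clearance_m = (jaw_opening_m - cap_diameter_m) / 2
print(clearance_m)  # 0.005 - matches the reference value used here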
As in most automation systems, specific peripherals are needed to meet regulatory or application requirements. The periphery is set up to meet these requirements and to ensure safe and easy operation. In this case it consists of a code scanner, a backend connector, the safety system, LED indicator lights, and a touch HMI.
The implemented code scanner is used for unambiguous identification of tubes. If no code or a bad code is detected, the system can sort the tube out. The scanner used is the SICK Lector, which is connected via Ethernet to the cell network.
If the code can be correctly identified, the process system can perform a backend call to check plausibility. The connector to the backend runs on the robobrain.
For safe operation the cell has an additional safety control system. Safety-related systems must meet certain regulatory standards, and the safety system always has the highest priority in the process. It is set up fully redundantly to rule out faulty states. Its core is a SICK safety controller, which is connected to the robot safety systems and to additional safe door locks. It reports the current state to the control system running on the robobrain.
To indicate the status, the cell is equipped with status signaling lights on all four corners. They indicate the statuses operational (green), idle (yellow), and error (red). The status is handed over by the control system on the robobrain to the LED control box, which is connected to all lights.
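As a minimal sketch of this status handover (the transport to the LED control box itself is not shown, and the function name is hypothetical):

# Mapping of cell statuses to light colors, as described above.
STATUS_COLORS = {
    "operational": "green",
    "idle": "yellow",
    "error": "red",
}

def light_color(status: str) -> str:
    """Return the signal color for a given cell status, defaulting to error."""
    return STATUS_COLORS.get(status, "red")

print(light_color("idle"))  # -> yellow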
Lastly, for easy operation and configuration, the cell has a touch HMI tablet mounted at the front. It shows the system status with a live 3D view of the robots and is the main point of operation for the user.
What to keep in mind:
This application and its description show one approach to solving a dedicated case with skills. It can and should be adapted, on both the hardware and software side, to meet the requirements of the specific application.
The system can be loaded with a broad variety of containers. However, it is important to check the container height to keep the robots within the working area. Locking the containers into position is important.
Recognition time depends on the number of tubes in the field of view, resulting in recognition times from 0.5 s to 1.5 s.
The safety system is designed to release the loading door locks when requested by the operator. The loading sequence can be requested via the HMI.