CRAM @ ICRA 2023

This year CRAM was represented at ICRA not with a paper contribution but with an invited talk at the Transferability in Robotics workshop: https://transferabilityinrobotics.github.io/icra2023/

The focus of the talk was the symbolic entity designators that support transferability across manipulation objects, environments, robot bodies and applications in a plan executive system.
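For readers new to CRAM, here is a minimal sketch, in CRAM's Common Lisp plan language, of such an entity designator; the desig: and exe: package nicknames follow the CRAM tutorials, and the concrete object types are illustrative:

  ;; Minimal sketch of a symbolic entity designator. The object is
  ;; described by its role, not by map- or robot-specific data, so the
  ;; same plan transfers across objects, environments and robot bodies.
  (let ((?object (desig:an object
                           (type milk)
                           (location (desig:a location
                                              (on (desig:an object
                                                            (type counter-top))))))))
    ;; The plan executive resolves the action designator at run time
    ;; into robot-specific motion parameters.
    (exe:perform (desig:an action
                           (type fetching)
                           (object ?object))))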

2023/06/08 11:15 · gkazhoya

Controlling Tiago++ with CRAM

Recently we used CRAM to command Tiago++ to perform a mobile pick and place task in a kitchen environment. We had never applied CRAM to a robot with a differential drive before, and the drive made mobile manipulation in the narrow environment considerably more challenging. In addition, the gripper was too weak to open the heavy cupboard door, so instead of grasping the handle from the front, as we usually do, we grasped it from the bottom.
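Grasp choices like this enter CRAM plans as action-designator parameters. A hypothetical sketch of requesting a bottom grasp when opening a container (the :bottom grasp keyword and the exact properties of the opening action are illustrative and vary between CRAM versions):

  ;; Hypothetical sketch: requesting a bottom grasp on the handle
  ;; when opening a container.
  (exe:perform
   (desig:an action
             (type opening)
             (arm :left)
             (object (desig:an object (type cupboard)))
             (grasp :bottom)))  ; instead of the usual :front grasp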

2022/10/11 09:19 · gkazhoya

CRAM @ ICRA 2022

Our paper “Improving object pose estimation by fusion with a multimodal prior – utilizing uncertainty-based CNN pipelines for robotics”, published in RA-L, was presented at ICRA 2022 last week. The paper presents an approach for estimating 3D object poses by combining deep learning with prior knowledge about the object. CRAM was used to deploy the perception system on a physical robot and to evaluate its accuracy in mobile pick and place tasks.

2022/06/01 10:15 · gkazhoya

CRAM @ ICRA 2021

Our paper on CRAM has been accepted to ICRA 2021. The paper describes an integrated robotic system that performs the Robot Household Marathon experiment of our EASE project, with one section for each of the four main components of the system: the action planner, the motion planner, the perception system and the action parameter learning module.

In this system, CRAM takes care of action planning, parametrization and execution, including failure handling, and it ties all the components together.
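As an illustration of the failure-handling part, here is a minimal sketch using the cpl: macros of the CRAM plan language; the failure class and the retry budget are assumptions for the example:

  ;; Minimal sketch of CRAM failure handling around a fetch action.
  ;; ?object is assumed to be bound by the surrounding plan.
  (cpl:with-retry-counters ((fetch-retries 3))
    (cpl:with-failure-handling
        ((common-fail:object-nowhere-to-be-found (e)
           (declare (ignore e))
           ;; Retry up to 3 times, e.g. after moving the base
           ;; to get a better view of the scene.
           (cpl:do-retry fetch-retries
             (cpl:retry))))
      (exe:perform (desig:an action
                             (type fetching)
                             (object ?object)))))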

You can access the paper here.

2021/03/19 13:42 · gkazhoya

Learning Object Placements from VR with GMMs

In his bachelor thesis “Robots learning geometric groundings of object arrangements for household tasks from virtual reality demonstrations”, Thomas Lipps implemented a method to learn object placements in a kitchen environment. First, he acquired a data set by performing table-setting scenarios in a virtual kitchen in VR, using a pipeline created in previous work by Andrei Haidu and Alina Hawkin.

These data samples were then fed into the ROS Python package costmap_learning. The package takes the positions and orientations of the objects used in VR and encodes them in separate Gaussian Mixture Models (GMMs). The software architecture allows learning separate object placements for different kitchen environments, tables, contexts, humans and, of course, object types.

Lastly, the package returns only relevant and non-redundant knowledge. Relevance is achieved by identifying possible relations between the object to be placed and the objects that are already placed; redundancy is avoided by filtering the result against those already placed objects. This behavior is visualized in a table-setting scenario in the video. On the CRAM side, the ROS interface is implemented in the package cram_learning_vr, as sketched below.
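To give an idea of how a plan could consume this knowledge, here is a hypothetical sketch of resolving a learned placement through a CRAM location designator; the (for ...) and (context ...) properties are illustrative and not the literal cram_learning_vr interface:

  ;; Hypothetical sketch: resolving a learned placement pose through
  ;; a location designator backed by the learned GMM costmaps.
  (let ((?cup (desig:an object (type cup))))
    (desig:reference
     (desig:a location
              (on (desig:an object (type dining-table)))
              (for ?cup)                   ; object to be placed
              (context :table-setting))))  ; learned scenario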

2020/10/19 11:34 · tlipps
