
CRAM @ ICRA 2022

Our paper titled “Improving object pose estimation by fusion with a multimodal prior – utilizing uncertainty-based CNN pipelines for robotics”, published in RA-L, was presented at ICRA 2022 last week. The paper presents an approach for estimating 3D object poses by combining deep learning with prior knowledge about the object. CRAM was used in the paper to deploy the perception system on a physical robot and to evaluate the accuracy of perception by using it for mobile pick-and-place tasks.
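To give an intuition for uncertainty-based fusion, here is a generic precision-weighted fusion sketch in plain Python. This is not the paper's actual method, just the textbook idea of combining two estimates by weighting each with its inverse variance; all numbers are made up:

```python
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two 1D estimates by precision (inverse-variance) weighting."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_mean = (w_a * mean_a + w_b * mean_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either source
    return fused_mean, fused_var

# CNN estimate: x = 0.50 m but uncertain; prior: x = 0.40 m and confident.
mean, var = fuse(0.50, 0.04, 0.40, 0.01)
print(round(mean, 3), round(var, 3))  # 0.42 0.008
```

The fused pose leans toward whichever source reports lower uncertainty, which is why calibrated uncertainty estimates from the CNN pipeline matter.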

2022/06/01 10:15 · gkazhoya

CRAM @ ICRA 2021

Our paper on CRAM has been accepted to ICRA 2021. The paper describes an integrated robotic system, demonstrating the Robot Household Marathon experiment of our EASE project. There is one section for each of the four main components of our system: the action planner, the motion planner, the perception system and the action parameter learning module.

CRAM takes care of action planning, parametrization and execution in the system, including failure handling. It also integrates all the system components together.
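The failure-handling idea can be illustrated with a small sketch in plain Python (not actual CRAM code; the names `with_retries` and `GraspingFailed` are hypothetical): a plan retries a failing action with new parameters before giving up.

```python
class GraspingFailed(Exception):
    """Hypothetical failure signal raised by a low-level action."""

def with_retries(action, reparametrize, max_retries=3):
    """Retry `action`, asking `reparametrize` for new parameters after each failure."""
    params = reparametrize(None)
    for _ in range(max_retries):
        try:
            return action(params)
        except GraspingFailed as failure:
            params = reparametrize(failure)
    raise GraspingFailed("all retries exhausted")

# Toy action that fails until the grasp offset is small enough.
def grasp(params):
    if params["grasp_offset"] > 0.02:
        raise GraspingFailed("object slipped")
    return "object-in-gripper"

offsets = iter([0.05, 0.03, 0.01])
result = with_retries(grasp, lambda failure: {"grasp_offset": next(offsets)})
print(result)  # object-in-gripper
```

In CRAM itself this pattern is expressed in the plan language in Common Lisp; the sketch only mirrors the retry-with-reparametrization structure.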

You can access the paper here.

2021/03/19 13:42 · gkazhoya

Learning Object Placements from VR with GMMs

In the bachelor thesis Robots learning geometric groundings of object arrangements for household tasks from virtual reality demonstrations, Thomas Lipps implemented a method to learn object placements in a kitchen environment. First, he acquired a data set by performing table-setting scenarios in the virtual kitchen, using VR and a pipeline created in previous work by Andrei Haidu and Alina Hawkin.

These data samples were then fed into the ROS Python package costmap_learning. This package takes the positions and orientations of the objects used in VR and encodes them separately in Gaussian Mixture Models (GMMs). The software architecture allows learning separate object placements for different kitchen environments, tables, contexts, humans and, of course, object types.

Lastly, the package returns only relevant and non-redundant knowledge. The former is achieved by identifying possible relations between the object to be placed and the already placed objects. The latter is achieved by filtering the results against the already placed objects. This behavior is visualized in a table-setting scenario in the given video. The ROS interface in CRAM is implemented in the CRAM package cram_learning_vr.
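The sampling-and-filtering step can be sketched as follows. This is a minimal stand-in with made-up component parameters, not the actual costmap_learning code: a learned GMM over 2D table positions is sampled, and candidates too close to already placed objects are discarded as redundant.

```python
import math
import random

# Hypothetical learned GMM for one object type on one table:
# (weight, mean, stddev) per component, assuming isotropic components.
gmm = [
    (0.6, (0.40, 0.10), 0.03),   # main placement cluster
    (0.4, (0.40, -0.10), 0.03),  # alternative cluster
]

already_placed = [(0.40, 0.10)]  # e.g. a plate already occupies this spot

def sample_gmm(gmm, rng):
    """Draw one position: pick a component by weight, then sample it."""
    r = rng.random()
    acc = 0.0
    for weight, (mx, my), sd in gmm:
        acc += weight
        if r <= acc:
            return (rng.gauss(mx, sd), rng.gauss(my, sd))
    return gmm[-1][1]

def sample_placement(gmm, placed, rng, min_dist=0.05, tries=100):
    """Reject samples that collide with already placed objects."""
    for _ in range(tries):
        x, y = sample_gmm(gmm, rng)
        if all(math.hypot(x - px, y - py) >= min_dist for px, py in placed):
            return (x, y)
    raise RuntimeError("no free placement found")

rng = random.Random(42)
pos = sample_placement(gmm, already_placed, rng)
```

Because the main cluster is blocked by the plate, most accepted samples fall into the alternative cluster, which is exactly the filtering behavior described above.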

2020/10/19 11:34 · tlipps

CRAM @ ICRA 2020

This summer, our paper on CRAM plan transformations titled “Towards Plan Transformations for Real-World Mobile Fetch and Place” was presented at the ICRA 2020 conference in virtual form. You can see the video of the presentation here.

You can read more about plan transformations at the EASE blog or look at the implementation in the cram_plan_transformation package.

2020/08/24 16:05 · gkazhoya

PyCRAM with PyBullet

For the Bachelor theses of Andy and Dustin Augsten, and later Jonas Dech, CRAM was reimplemented in Python. The purpose behind this decision was to make the concepts of CRAM more easily accessible to a wider audience.

Currently, PyCRAM doesn't include all features of CRAM, but the core features are implemented: for example, the CRAM Plan Language, Process Modules, Motion Designators, the BulletWorld and its reasoning. While many features that already exist in CRAM aren't yet implemented in PyCRAM, it is already possible to write a functioning plan for a robot (see the second demo here). With the BulletWorld it is also possible to simulate these plans for testing or to plan future actions. The reasoning mechanisms of the BulletWorld make it possible to query information about the relationship between two objects in the BulletWorld.
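The kind of relational query such reasoning answers can be illustrated with a small geometric sketch in plain Python (axis-aligned bounding boxes, not the actual PyCRAM API): checking whether one object rests on another.

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned bounding box: center (x, y, z) and half-extents."""
    name: str
    center: tuple
    half_extents: tuple

def overlaps_xy(a, b):
    """True if the boxes overlap when projected onto the table plane."""
    return all(
        abs(a.center[i] - b.center[i]) <= a.half_extents[i] + b.half_extents[i]
        for i in (0, 1)
    )

def is_on(obj, support, tolerance=0.01):
    """True if `obj` rests on `support`: XY overlap and touching in Z."""
    obj_bottom = obj.center[2] - obj.half_extents[2]
    support_top = support.center[2] + support.half_extents[2]
    return overlaps_xy(obj, support) and abs(obj_bottom - support_top) <= tolerance

table = Box("table", (0.0, 0.0, 0.35), (0.5, 0.5, 0.35))
mug = Box("mug", (0.1, 0.2, 0.75), (0.04, 0.04, 0.05))
print(is_on(mug, table))  # True
```

In the BulletWorld, such queries are answered against the physics simulation rather than plain bounding boxes, but the flavor of the question is the same.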

Currently, the CRAM team is also working on new features for PyCRAM, so stay tuned for more updates.

Below you can see a video which highlights the current capabilities of the PyCRAM framework.

2020/07/30 10:43 · jdech

Older entries >>