Learning Object Placements from VR with GMMs

In the bachelor thesis "Robots learning geometric groundings of object arrangements for household tasks from virtual reality demonstrations", Thomas Lipps implemented a method to learn object placements in a kitchen environment. First, he acquired a data set by performing table-setting scenarios in a virtual kitchen in VR, using a pipeline created in previous works by Andrei Haidu and Alina Hawkin.

These data samples were then fed into the ROS Python package costmap_learning. This package takes the positions and orientations of the objects used in VR and encodes them separately in Gaussian Mixture Models (GMMs). The software architecture allows learning separate object placements for different kitchen environments, tables, contexts, humans and, of course, object types.
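The core idea of encoding demonstrated placements in a GMM can be sketched as follows. This is a minimal illustration using scikit-learn, not the actual costmap_learning implementation; the object name, seat positions, and component count are invented for the example.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical data: 2D table-top positions (x, y in metres) at which a
# "cup" was placed across repeated VR table-setting demonstrations.
rng = np.random.default_rng(0)
cup_positions = np.vstack([
    rng.normal(loc=[0.3, 0.2], scale=0.02, size=(30, 2)),   # placements near one seat
    rng.normal(loc=[0.3, -0.2], scale=0.02, size=(30, 2)),  # placements near another seat
])

# One GMM per object type; two components capture the two typical spots.
gmm = GaussianMixture(n_components=2, random_state=0).fit(cup_positions)

# The fitted model can propose a new placement by sampling, and can score
# how typical a candidate position is via its log-density.
sample, _ = gmm.sample(1)
typical_score = gmm.score_samples([[0.3, 0.2]])[0]
atypical_score = gmm.score_samples([[1.0, 1.0]])[0]
```

Orientations can be handled the same way with a separate mixture per object type, which matches the idea of encoding positions and orientations separately.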

Lastly, the package returns only relevant and non-redundant knowledge. The former is achieved by identifying possible relations between the object to be placed and the objects already placed; the latter by filtering the result against the already placed objects. This behavior is visualized in a table-setting scenario in the given video. The ROS interface in CRAM is implemented in the CRAM package cram_learning_vr.
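The two filtering steps can be sketched as follows. This is a simplified illustration of the idea, not the package's actual API: the relation table, rule strings, and function name are all hypothetical.

```python
# Hypothetical learned knowledge: (object, reference) pairs mapped to a
# relative placement rule extracted from the VR demonstrations.
learned_relations = {
    ("spoon", "bowl"): "right-of bowl",
    ("spoon", "cup"): "behind cup",
    ("fork", "plate"): "left-of plate",
}

def relevant_placements(target, already_placed, relations):
    """Keep only relations whose reference object is already on the table
    (relevance), and return nothing for a target that is already placed
    (redundancy)."""
    if target in already_placed:
        return []  # placing it again would be redundant
    return [rule for (obj, ref), rule in relations.items()
            if obj == target and ref in already_placed]

# With only the bowl on the table, only the bowl-relative rule survives.
spoon_rules = relevant_placements("spoon", {"bowl"}, learned_relations)
```

The design choice here mirrors the text: relevance is decided by the reference objects available in the scene, redundancy by the target object itself.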