This tutorial will introduce you to the cram_knowrob_vr (short: kvr) package, which uses data recorded in the Virtual Reality environment using RobCog, extracts information from it using KnowRob, and executes CRAM high-level plans based on this data, either on the real robot or in the CRAM bullet world.

Prerequisites

This tutorial assumes that you've completed the Using JSON Prolog to communicate with KnowRob tutorial and therefore have ROS, CRAM, KnowRob and MongoDB installed. In order to use the kvr package, a few specific changes have to be made: within the knowrob_addons, the knowrob_robcog package has to be replaced by this one: knowrob_robcog github. If it doesn't build because of missing dependencies, please install them. If it still doesn't build, you can try pulling this fork of the knowrob_addons instead: knowrob_addons github. The knowrob_robcog package contains the VR-specific queries that we need in order to extract the information required by the plans.

Roslaunch

Launch a roscore first. Then launch the bullet world, json_prolog and roslisp_repl, each in its own terminal:

    $ roslaunch cram_bullet_world_tutorial world.launch
    $ roslaunch json_prolog json_prolog.launch 
    $ roslisp_repl

The bullet world is needed for visualization. The json_prolog node allows us to access information in KnowRob from CRAM.
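
Once json_prolog is running, you can sanity-check the connection from the REPL with a trivial query. This is just a quick check, not part of the demo; it assumes a ROS node is already running in the REPL (e.g. started via roslisp-utilities:startup-ros) and uses the json-prolog client introduced in the prerequisite tutorial:

CL-USER> (json-prolog:prolog-simple "member(X, [1,2,3])")

If this returns bindings for X instead of signaling an error, CRAM can reach KnowRob.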

Usage and Code

The following describes what the different files and their functions do, when and how to use them, and why they are needed. The explanation follows the order of the files in the .asd file. It is separated into a Usage section and a Code section: the Usage section focuses on how to get everything running and how to execute a demo, while the Code section takes a closer look at the code and explains what is going on there.

Usage

Before you load the package, navigate to the init.lisp file and set the *episode-path* parameter to the path of your episode data. This is important: otherwise the episode data can't be loaded properly.
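
For example (a sketch, not the literal contents of init.lisp; the path is a placeholder for wherever your episode data lives):

;; in init.lisp: point this at the directory containing your episode data
(defparameter *episode-path* "/home/myuser/episodes/")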

Now you can load the cram_knowrob_vr package with:

CL-USER>  (ros-load:load-system "cram_knowrob_vr" :cram-knowrob-vr)

To launch all the necessary components, simply execute:

CL-USER> (kvr::init-full-simulation '("ep1"))

This will create a Lisp ROS node, clean up the belief state, load the episodes that are passed to the init function as a list of strings, spawn the semantic map of the episode and the items, and initialize the location costmap. The code section below explains in more detail what is loaded when.
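
Since the episodes are passed as a list of strings, you can also load several at once by listing more names; the episode names here are placeholders for your own recordings:

CL-USER> (kvr::init-full-simulation '("ep1" "ep2"))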

Code

mesh-list.lisp

Contains a list of all the meshes we want to spawn based on their locations in the semantic map. Some of them are commented out, e.g. walls, lamps and the objects we interact with, in order to keep the bullet world neat and clean. In Unreal, however, the walls and lamps are spawned; we simply don't need them in bullet at the moment.

mapping-urdf-semantic.lisp

Maps the URDF kitchen to the semantic map, since the two differ in how some pieces of furniture are organized and named.
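
To give a rough idea of what such a mapping looks like, here is a hypothetical sketch; the names are invented for illustration, see the actual file for the real pairs:

;; hypothetical sketch: semantic-map names paired with their urdf counterparts
(defparameter *semantic-to-urdf-mapping*
  '((:|IslandArea| . :kitchen-island)
    (:|SinkArea|   . :sink-area-sink)))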

init.lisp

TODO

queries.lisp

TODO

query-based-calculations.lisp

TODO

designator-integration.lisp

TODO

fetch-and-deliver-based-demo.lisp

TODO

Importing new episode data into MongoDB and KnowRob (additional information)

In order to query the data for information, we first need to import it into KnowRob and MongoDB. MongoDB contains all the poses of the objects, camera, hand and furniture of the episode, so poses are read from there. KnowRob takes care of everything else: it knows all the object classes and instances used in the episodes.

MongoDB

If you record data in the Virtual Reality using RobCog, you essentially get the following files:

 RawData_ID.json   <- contains all the recorded events: what happened, where everything was, who moved what where and when, etc.
 SemanticMap.owl   <- contains all the information about the environment, e.g. where the tables are spawned before they are manipulated.

 EventData
   |-> EventData_ID.owl <- contains the times of the events. Which event occurred when, what events are possible...
   |-> Timeline_ID.html <- same as the above but as a visualized overview

Each episode gets a randomly generated ID, so replace ID here with whatever your data's ID is. In order to access this data in KnowRob, we need to load it into our local MongoDB, since that is where all data is kept for KnowRob to access. Unfortunately, at the time of writing, MongoDB does not support the import of large files, and a recorded episode can become fairly large if one performs multiple pick-and-place tasks. We can work around this by splitting the file into multiple parts, which can be loaded into MongoDB individually. For that, we need a little program called jq. If you don't have it, you can install it from here. It allows us to manipulate json files from the command line. Then you can do the following in the directory where your RawData_ID.json is:

$ mkdir split
$ cd split
$ cat ../RawData_ID.json | jq -c -M '.' | split -l 2000

We create a new directory called split, cd into it and read the file with cat. Then we use jq to make the file more compact (-c), disable colored output to the shell (-M) and apply the identity filter (.), which simply passes each JSON document through. The result is fed to split, which creates many files of about 2000 lines each, since that is a size MongoDB is comfortable with. After this, you should see many files in your split directory, named xaa, xab, xac and so on; how many files you get depends on the size of your original .json file. Now we can import the files into the database.

$ mongoimport --db DB-NAME --collection COLLECTION-NAME  --file FILE-NAME

example:

$ mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM  --file xaa

I keep everything in one database and name the collection after the RawData_ID, in order not to forget what is what, but you can name it however you like. If importing all the files individually seems fairly tedious, you can write a small shell loop for it, as sketched below.
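
A minimal sketch of such a loop, assuming the split files from above (xaa, xab, ...) are in the current directory; adjust the database and collection names to yours:

$ for f in x??; do mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM --file "$f"; done

Once everything is imported, we can create a mongodump: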

$ mongodump --db DB-NAME --collection COLLECTION-NAME
$ mongodump --db Own-Episodes_set-clean-table --collection RawData_cUCM

Then you get a dump directory, which contains a RawData_ID.bson file and a RawData_ID.metadata.json file.

After this, you can just look at how the other episodes and their directories are structured, and create the directories for your data the same way.

Should you ever need to import a *.bson file directly, you can do so using mongorestore:

$ mongorestore -d DB-NAME -c COLLECTION-NAME FILE-NAME.bson
$ mongorestore -d Own-Episodes_set-clean-table -c RawData_qtzg RawData_qtzg.bson

KnowRob

KnowRob needs to be able to find the .owl and .bson files, so it's important to have a directory containing all the episodes. You can either have a look at or download these episodes for reference.

Performance

Depending on how many collections your database has, querying for information can get slow. One way to speed it up is to add an index over the timestamp field for all collections. One way to do this is to install Compass for MongoDB. Launch it and connect it to your database; the default settings should be fine, so just click OK when it launches. Then go to your collection → Indexes → Create Index. Call the new index timestamp, select the field timestamp, set the type to 1 (asc) and click Create. Repeat for all collections. It will improve query speed greatly.
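
If you prefer the command line over Compass, the same indexes can be created with the mongo shell. A sketch, assuming the database name from the examples above; since createIndex applies to a single collection, it is looped over all of them here:

$ mongo Own-Episodes_set-clean-table --eval 'db.getCollectionNames().forEach(function(c) { db.getCollection(c).createIndex({timestamp: 1}); })'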