Using the Cram-KnowRob-VR package
Tested under CRAM version 0.7.0
This tutorial will introduce you to the cram_knowrob_vr (short: kvr) package, which uses data recorded in the Virtual Reality environment with RobCog, extracts information from it using KnowRob, and executes CRAM high-level plans based on this data, either on the real robot or in the CRAM bullet world.
Idea
Essentially, we want to teach a robot how to perform everyday activities without having to dive deep into code, but rather by simply showing it what we want it to do in Virtual Reality. This allows robots to learn from humans easily, since the robot can acquire information about where the human looked for things and where things were placed. Of course, this could also be hard-coded, but that would take a lot more time and be prone to failure: as humans, we often forget to describe very minor things which we take for granted, but which can play a huge role in the success of a task for the robot. For example, if the cooking pot is not in its designated area, we would automatically check the dishwasher or the sink area; this is something the robot would have to learn first.
Another advantage of using Virtual Reality is that we can train the robot on all kinds of different kitchen setups, which can be built within a few minutes instead of having to move around physical furniture. This also allows for generalization of the acquired data and adds to the robustness of the pick-and-place tasks.
Prerequisites
This tutorial assumes that you've completed the Using JSON Prolog to communicate with KnowRob tutorial and therefore have ROS, CRAM, KnowRob and MongoDB installed. In order to be able to use the kvr package, a few specific changes have to be made: within the knowrob_addons, the knowrob_robcog package has to be replaced by this one: knowrob_robcog github. If it doesn't build because dependencies are missing, please install them. If it still doesn't build, you can try to pull this fork of the knowrob_addons instead: knowrob_addons github.
The knowrob_robcog package contains the VR-specific queries, which we need in order to extract the information the plans require.
Please also download the episode data provided at this source. The download can take about 10 minutes, depending on your internet connection. Unzip the archive and put the episodes directory next to your workspace. It is its own workspace and is meant to be layered under your working workspace. To learn more about workspace layering and how to set it up, please refer to this tutorial.
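A hedged sketch of the layering, assuming the unpacked episodes directory is itself a catkin workspace at ~/episodes and your working workspace lives at ~/workspace (adjust the paths to your setup):
$ cd ~/episodes && catkin_make          # build the underlay once, if not already built
$ source ~/episodes/devel/setup.bash    # make the episodes workspace the underlay
$ cd ~/workspace && catkin_make         # rebuild your working workspace on top of it
$ source ~/workspace/devel/setup.bash
Building your workspace after sourcing the underlay chains the two, so the resources from the episodes workspace become visible in yours.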
Roslaunch
Launch a roscore first. Then, in a new terminal for each, launch the simulation and roslisp_repl:
$ roslaunch cram_knowrob_vr simulation.launch
$ roslisp_repl
The simulation.launch file includes the json_prolog node, which is needed for the communication between KnowRob and CRAM. It also launches the bullet world simulation and uploads the robot description. The launch file has the following parameters, listed here with their default values:
- upload:=true uploads the robot description. Set to false if the robot description is being uploaded by another node or e.g. by the real robot.
- knowrob:=true determines if the json_prolog node should be launched to allow communication with KnowRob. Set to false if another instance of KnowRob or json_prolog is already running.
- boxy:=false determines which robot description should be uploaded and used. The default false means that the PR2 description will be used; in case of true, Boxy will be used. An example invocation with non-default values is shown below.
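For example, to run against an already running KnowRob instance, with the robot description uploaded elsewhere and Boxy instead of the PR2, the parameters can be combined like this (the combination is just for illustration):
$ roslaunch cram_knowrob_vr simulation.launch upload:=false knowrob:=false boxy:=true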
Usage and Code
The following will describe what the different files and their functions do, when and how to use them, and why they are needed. The explanation follows the order of the files in the .asd file and is separated into a usage section and a code section: the usage section focuses on how to get everything to run and how to execute a demo, while the code section looks a bit more in depth into the code and explains what is going on there.
Usage
Here we will first explain what needs to be done to get the robot to execute a pick and place plan in the simulated bullet world. In the Code section below, we will take a closer look at the individual source files and explain their function.
Before you load the package, navigate to the init.lisp file and set the *episode-path* parameter to the path of your episode data. This is important; otherwise it won't be possible to load the episode data properly.
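For illustration, the setting in init.lisp might look roughly like this (the exact definition in the file may differ; the path is just an example):
(defvar *episode-path* "/home/user/episodes/")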
Now you can load the cram_knowrob_vr package with:
CL-USER> (ros-load:load-system "cram_knowrob_vr" :cram-knowrob-vr)
If this is used in simulation, the respective robot description needs to be loaded first, depending on whether the PR2 or the Boxy robot is supposed to be used.
For PR2:
CL-USER> (swank:operate-on-system-for-emacs "cram-pr2-description" (quote load-op))
For Boxy:
CL-USER> (swank:operate-on-system-for-emacs "cram-boxy-description" (quote load-op))
To launch all the necessary initializations, simply execute:
CL-USER> (kvr::init-full-simulation :namedir '("ep1") :urdf-new-kitchen? nil)
This will create a Lisp ROS node, clean up the belief state, load the episodes that get passed to the init function as a list of strings in the namedir key parameter (in our case "ep1"), spawn the semantic map of the episode and the items, and initialize the location costmap. This process may take a while, so please have some patience (or go grab a coffee meanwhile; it can take several minutes on Ubuntu 16.04). When the function has finished, your bullet world should show the kitchen environment together with the robot and the spawned items.
Now, let's execute the pick and place plan:
CL-USER> (cram-urdf-projection:with-simulated-robot (kvr::demo))
With this call we first say that we want to use the simulated bullet-world PR2 robot instead of the real one, and then we simply call the demo. The demo will read out the VR episode data and extract the positions of the objects that were manipulated, which hand was used, and the positions of the human head and hand.
Code
mesh-list.lisp
Contains a list of all the meshes which we want to spawn based on their locations in the semantic map. Some of them are commented out, e.g. walls, lamps and the objects we interact with, in order to keep the bullet world neat and clean. In Unreal, however, the walls and lamps are being spawned; we simply don't need them in bullet at the moment.
mapping-urdf-semantic.lisp
Maps the urdf kitchen (bullet world simulation environment) to the semantic map (virtual reality environment), since they differ in how some furniture is organized and named. Also maps the names of the objects the robot interacts with between the two environments.
init.lisp
Contains all the needed initialization functions for the simulation environment, episode loading and the simulated or real robot. Also contains the *episode-path* variable, which sets the location of the episode data.
queries.lisp
Contains query wrappers so that the queries can be called as Lisp functions, and also includes the queries which read out the data from the database, e.g. the poses of the object, hand and head of the actor in the virtual reality.
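For illustration, such a wrapper might look roughly like the following sketch; the function name and the query string are hypothetical, not the actual contents of queries.lisp:
(defun query-all-objects ()
  "Hypothetical wrapper: ask KnowRob through json_prolog for all object instances."
  (json-prolog:prolog-simple
   "rdfs_individual_of(Obj, knowrob:'SpatialThing')."))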
query-based-calculations.lisp
Includes all transformation calculations to make the poses of the robot relative to the respective object, and the poses of the objects relative to the surfaces. Mostly works on lazy-lists of poses.
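At its core, making a pose relative to an object is plain transform arithmetic with cl-transforms. A minimal sketch, with illustrative names not taken from the file:
(defun pose-relative-to-object (map-t-object map-t-robot)
  "Express the robot pose in the object's frame: object-T-robot = (map-T-object)^-1 * map-T-robot."
  (cl-transforms:transform*
   (cl-transforms:transform-inv map-t-object)
   map-t-robot))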
designator-integration.lisp
Integrates the pose calculations from the query-based-calculations into location designators.
fetch-and-deliver-based-demo.lisp
Sets up the plan for the demo with the respective action designator. Also includes logging functions.
debugging-utils.lisp
Contains a lot of debugging and helper functions which can also visualize all the calculated poses.
Importing new episode data into MongoDB and KnowRob (Additional information)
In order for us to be able to query episode data for information, we first need to import that data into KnowRob and MongoDB.
MongoDB contains all the poses of the objects, camera, hand and furniture of the episode, so the poses are read out from there. KnowRob takes care of everything else: it knows all the object classes and instances used in the episodes. So in order to be able to use the data, we need to import it into MongoDB and KnowRob.
The following description only applies if the data has been recorded using the RobCog tool for Unreal Engine, which results in a json file containing the events and a matching SemanticMap.owl describing the environment.
MongoDB (quick with scripts)
There is a script which imports the episode data into the currently running MongoDB instance. Please see scripts and the provided Readme for reference and usage.
MongoDB (manual, in-depth)
This will explain how to import the episode data manually into MongoDB. This is essentially what the script in the description above does automatically. So if the script didn't work (please leave an issue on github in that case), you can follow this guide.
If you record data in the Virtual Reality using RobCog, you essentially get the following files:
RawData_ID.json ← contains all the recorded events that happened: where everything was, who moved what where and when, etc.
SemanticMap.owl ← contains all information about the environment: where the tables are spawned before they are manipulated, etc.
EventData
|-> EventData_ID.owl ← contains the times of the events: which event occurred when, which events are possible, ...
|-> Timeline_ID.html ← the same as the above, but as a visualized overview
Each episode gets a randomly generated ID, so replace ID here with whatever your data's ID is. In order to be able to access this data in KnowRob, we need to load it into our local MongoDB, since this is where all data is kept for KnowRob to access. Unfortunately, at the time this tutorial is being written, MongoDB does not support the import of large files, and the recorded episode can become fairly large if one performs multiple pick and place tasks. We can work around this by splitting the file into multiple parts, which can be loaded into MongoDB individually. For that, we need a little program called jq. If you don't have it, you can install it from here. It allows us to manipulate json files from the command line. Then you can do the following in the directory where your RawData_ID.json is:
$ mkdir split
$ cd split
$ cat ../RawData_ID.json | jq -c -M '.' | split -l 2000
We create a new directory called split and cd into it. Then we read the file with cat and pipe it through jq, which compacts the output (-c), disables colored output to the shell (-M) and applies the identity filter (.), i.e. passes the json through unchanged. The result is fed to split, which creates many files of about 2000 lines each, since that is a size MongoDB is comfortable with.
After this, you should see many files in your split directory, named xaa, xab, xac… you get the idea. The number of files you get depends on the size of your original .json file.
Now we can import the files into the database:
$ mongoimport --db DB-NAME --collection COLLECTION-NAME --file FILE-NAME
example:
$ mongoimport --db Own-Episodes_set-clean-table --collection RawData_cUCM --file xaa
I keep everything in one database and name the collection according to the RawData_ID name, in order to not forget what is what, but you can name it however you like. If you consider importing all the files individually fairly tedious, you can write a script for it, along the lines of the sketch below. If you improve on it, let us know.
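A minimal sketch of such a script, assuming it is run inside the split directory and the chunks still carry the default split names (DB-NAME and COLLECTION-NAME are placeholders, as above):
$ for f in x*; do mongoimport --db DB-NAME --collection COLLECTION-NAME --file "$f"; done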
Now we can create a mongodump:
$ mongodump --db DB-NAME --collection COLLECTION-NAME
$ mongodump --db Own-Episodes_set-clean-table --collection RawData_cUCM
Then you get a dump directory, which contains a RawData_ID.bson file and a RawData_ID.metadata.json file.
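With the example names from above, the layout should look roughly like this (mongodump nests the files under the database name):
dump/
|-> Own-Episodes_set-clean-table/
    |-> RawData_cUCM.bson
    |-> RawData_cUCM.metadata.json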
After this, you can just look at how the other episodes and their directories are structured, and create the directories for your data the same way.
Should you ever for some reason need to directly import a *.bson file, you can do so as well, using mongorestore:
$ mongorestore -d DB-NAME -c COLLECTION-NAME FILE-NAME.bson
$ mongorestore -d Own-Episodes_set-clean-table -c RawData_qtzg RawData_qtzg.bson
KnowRob
KnowRob needs to be able to find the .owl and .bson files, so it's important to have one directory for all the episodes. You can either have a look at or download these episodes for reference. Keep in mind that, depending on which episode data should be loaded, the path within CRAM (the *episode-path* variable within init.lisp) might have to be adapted, as well as the parameter which is passed to the init-full-simulation function.
Performance
This step is also covered by the scripts mentioned above, but can also be executed manually.
Depending on how many collections your database has, querying for information can get slow. One way to make it faster is to add an index over the timestamp to all collections. One way to add this is to install Compass for MongoDB. Launch it and connect it to your database; the default settings should be fine, so just click ok when it launches. Then go to your collection → indexes → create index. Call the new index timestamp, select the field timestamp, set the type to 1 (asc) and click create. Repeat this for all the collections; it will improve the query speed greatly.
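If you prefer the command line over Compass, the same index can be created with the mongo shell. A sketch using the placeholder names from above:
$ mongo DB-NAME --eval 'db.getCollection("COLLECTION-NAME").createIndex({timestamp: 1})'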