====== Using the CRAM-KnowRob-VR package ======

Tested under CRAM version v0.7.0.

This tutorial will introduce you to the ''cram_knowrob_vr'' (short: kvr) package, which uses the data recorded in the Virtual Reality environment using [[http://robcog.org/|RobCog]], extracts information from it using [[http://www.knowrob.org/|KnowRob]], and executes the CRAM high-level plans based on this data either on the real robot or in the CRAM [[tutorials:advanced:bullet_world|bullet world]].
  
==== Idea ====
Essentially, we want to teach a robot how to perform everyday activities without having to dive deep into code, by simply showing it what we want it to do in Virtual Reality. This allows robots to learn from humans easily, since the robot can acquire information about where the human looked for things and where things were placed. Of course, this could also be hard-coded, but that would take a lot more time and be prone to failure, since we as humans often forget to describe very minor things which we automatically take for granted, but which can play a huge role in the success of a task for the robot. For example, if the cooking pot is not in its designated area, we would automatically check the dishwasher or the sink area; this is something the robot would have to learn first.

Another advantage of using Virtual Reality is that we can train the robot on all kinds of different kitchen setups, which can be built within a few minutes instead of having to move around physical furniture. This also allows for generalization of the acquired data and adds to the robustness of the pick and place tasks.
==== Prerequisites ====
This tutorial assumes that you've completed the [[tutorials:intermediate:json_prolog|Using JSON Prolog to communicate with KnowRob]] tutorial and therefore have ROS, CRAM, KnowRob and MongoDB installed. In order to be able to use the kvr package, a few specific changes have to be made: within the ''knowrob_addons'', the ''knowrob_robcog'' package has to be replaced by this one: [[https://github.com/robcog-iai/knowrob_robcog.git|knowrob_robcog github]]. If it doesn't build because dependencies are missing, please install them. If it still doesn't build, you can try to pull this fork of the knowrob_addons instead: [[https://github.com/hawkina/knowrob_addons|knowrob_addons github]].
The ''knowrob_robcog'' package contains the VR-specific queries which we need in order to extract the information required by the plans.

Please also download the **episode data** provided at this [[https://seafile.zfn.uni-bremen.de/d/ba524dbadfa748488d4d/|source]]. The download can take about 10 minutes, depending on your internet connection. Unzip the archive and put the **episodes** directory next to your workspace. It is its own workspace and is meant to be layered under your working workspace. To learn more about workspace layering and how to set it up, please refer to this [[http://wiki.ros.org/catkin/Tutorials/workspace_overlaying|tutorial]].
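A minimal sketch of the layering, assuming the archive unpacked to ''~/episodes'' and your own workspace lives in ''~/my_workspace'' (both paths are placeholders):
<code bash>
# source the episode workspace first (the underlay) ...
source ~/episodes/devel/setup.bash
# ... then build and source your own workspace on top of it (the overlay)
cd ~/my_workspace && catkin_make
source ~/my_workspace/devel/setup.bash
</code>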
  
=== Roslaunch ===
Launch a ''roscore'' first. Then, in a new terminal for each launch file, launch the simulation and ''roslisp_repl'':
<code bash>
    $ roslaunch cram_knowrob_vr simulation.launch
    $ roslisp_repl
</code>
The ''simulation.launch'' file includes the json_prolog node, which is needed for the communication between KnowRob and CRAM. It also launches the ''bullet world simulation'' and uploads the ''robot description''. The launch file has the following parameters, shown here with their default values (an example call with non-default values follows this list):
  * **upload:=true** uploads the robot description if set to ''true''. Set it to ''false'' if the robot description is being uploaded by another node or, e.g., by the real robot.
  * **knowrob:=true** determines if the ''json_prolog'' node should be launched to allow communication with KnowRob. Set it to ''false'' if another instance of KnowRob or json_prolog is already running.
  * **boxy:=false** determines which robot description should be uploaded and used. The default ''false'' means that the PR2 description will be used; if set to ''true'', Boxy will be used.
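For example, to start the simulation when the robot description is already uploaded and another json_prolog instance is running:
<code bash>
    $ roslaunch cram_knowrob_vr simulation.launch upload:=false knowrob:=false
</code>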
  
==== Usage and Code ====
The following describes what the different files and their functions do, when and how to use them, and why they are needed. The explanation follows the order of the files in the ''.asd'' file and is separated into a usage section and a code section. The usage section focuses on how to get everything to run and how to execute a demo, while the code section looks a bit more in depth into the code and explains what is going on there.
  
=== Usage ===
Here we will first explain what needs to be done to get the robot to execute and perform a pick and place plan in the simulated bullet world. In the code section further below, we will take a closer look at the individual source files and explain their function.
  
Before you load the package, navigate to the ''init.lisp'' file and set the ''*episode-path*'' parameter to the path of your episode data. This is important; otherwise it won't be possible to load the episode data properly.
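A minimal sketch of what this looks like, assuming the variable is set via ''defparameter'' (the path itself is a placeholder for your local setup):
<code lisp>
;; in init.lisp -- point the placeholder path at your episode data
(defparameter *episode-path* "/home/user/episodes/")
</code>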
  
Now you can load the ''cram_knowrob_vr'' package with:
<code lisp>
CL-USER> (ros-load:load-system "cram_knowrob_vr" :cram-knowrob-vr)
</code>

Depending on whether the PR2 or the Boxy robot is supposed to be used in the simulation, the respective robot description needs to be loaded first.

For PR2:
<code lisp>
CL-USER> (swank:operate-on-system-for-emacs "cram-pr2-description" (quote load-op))
</code>

For Boxy:
<code lisp>
CL-USER> (swank:operate-on-system-for-emacs "cram-boxy-description" (quote load-op))
</code>
To launch all the necessary initializations, simply execute:
<code lisp>
CL-USER> (kvr::init-full-simulation :namedir '("ep1") :urdf-new-kitchen? nil)
</code>

This will create a Lisp ROS node, clean up the belief state, load the episodes that get passed to the ''init'' function as a list of strings in the ''namedir'' key parameter (in our case "ep1"), spawn the semantic map of the episode and the items, and initialize the location costmap. This process may take a while, so please have some patience; it can take several minutes on Ubuntu 16.04. When the function has finished, the semantic map and the spawned items should be visible in your bullet world.

Now, let's execute the pick and place plan:
<code lisp>
CL-USER> (cram-urdf-projection:with-simulated-robot (kvr::demo))
</code>
With this call we first say that we want to use the simulated bullet world PR2 robot instead of the real one, and then we simply call the demo. The demo will read out the VR episode data and extract the positions of the objects that have been manipulated, the hand that was used, and the positions of the human head and hand.

=== Code ===
== mesh-list.lisp ==
Contains a list of all the meshes which we want to spawn, based on their locations in the semantic map. Some of them are commented out, e.g. walls, lamps and the objects we interact with, in order to keep the bullet world neat and clean. In Unreal, however, the walls and lamps are spawned; we simply don't need them in bullet at the moment.

== mapping-urdf-semantic.lisp ==
Maps the URDF kitchen (the bullet world simulation environment) to the semantic map (the virtual reality environment), since the two differ in how some furniture is organized and named. Also maps the names of the objects the robot interacts with between the two environments.
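For illustration only (the names below are made up; the real mapping lives in ''mapping-urdf-semantic.lisp''), such a name mapping can be as simple as an association list:
<code lisp>
;; hypothetical name pairs: URDF name on the left, semantic-map name on the right
(defparameter *urdf->semantic-names*
  '(("sink_area" . "IslandArea")
    ("iai_fridge" . "Refrigerator")))

;; look up the semantic-map name for a given URDF name
(cdr (assoc "sink_area" *urdf->semantic-names* :test #'string=))
</code>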

== init.lisp ==
Contains all the needed initialization functions for the simulation environment, the episode loading and the simulated or real robot. Also contains the ''*episode-path*'' variable which sets the location of the episode data.

== queries.lisp ==
Contains query wrappers so that the queries can be called as Lisp functions, and also includes the queries which read out the data from the database, e.g. the poses of the object, the hand and the head of the actor in the virtual reality.
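As a minimal sketch, such a wrapper is just a Lisp function around a ''json_prolog'' call; the predicate string below is illustrative, not the package's actual query:
<code lisp>
;; illustrative wrapper: ask KnowRob via json_prolog for all events of an episode
(defun query-all-events ()
  (json-prolog:prolog-simple
   "ep_inst(EpInst), u_occurs(EpInst, EventInst, Start, End)."
   :package :kvr))
</code>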

== query-based-calculations.lisp ==
Includes all transformation calculations for making the poses of the robot relative to the respective object, and the poses of the objects relative to the surfaces. Mostly works on lazy lists of poses.
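The core of such a calculation is a frame change. A minimal sketch using ''cl-transforms'' (this helper is illustrative, not the package's actual code):
<code lisp>
;; express object-pose in the frame of surface-pose:
;; T_surface->object = inverse(T_map->surface) * T_map->object
(defun pose-relative-to-surface (surface-pose object-pose)
  (cl-transforms:transform->pose
   (cl-transforms:transform*
    (cl-transforms:transform-inv (cl-transforms:pose->transform surface-pose))
    (cl-transforms:pose->transform object-pose))))
</code>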

== designator-integration.lisp ==
Integrates the pose calculations from ''query-based-calculations.lisp'' into location designators.

== fetch-and-deliver-based-demo.lisp ==
Sets up the plan for the demo with the respective action designator. Also includes logging functions.
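For illustration, a transporting action designator in the general CRAM fetch-and-deliver style looks roughly like this (the exact keys used by the demo live in ''fetch-and-deliver-based-demo.lisp''):
<code lisp>
;; sketch only: transport an object from a fetching location to a delivery target
(desig:an action
          (type transporting)
          (object ?object-designator)
          (location ?fetching-location)
          (target ?delivering-location))
</code>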
  
== debugging-utils.lisp ==
Contains a lot of debugging and helper functions which can also visualize all the calculated poses.

==== Importing new episode data into MongoDB and KnowRob (Additional information) ====
In order for us to be able to query episode data for information, we first need to import that data into KnowRob and MongoDB.
MongoDB contains all the poses of the objects, the camera, the hand and the furniture of the episode, so the poses are read out from there. KnowRob takes care of everything else: it knows of all the object classes and instances used in the episodes. In order to be able to use the data, we therefore need to import it into both MongoDB and KnowRob.
The following description only applies if the data has been recorded using the [[http://robcog.org/|RobCog]] tool for Unreal Engine, which results in a ''json'' file containing the events and a matching ''SemanticMap.owl'' describing the environment.
=== MongoDB (quick, with scripts) ===
There is a script which imports the episode data into the currently running MongoDB instance. Please see these [[https://github.com/hawkina/useful_scripts|scripts]] and the provided Readme for reference and usage.
=== MongoDB (manual, in-depth) ===
This will explain how to import the episode data into MongoDB manually; it is essentially what the script described above does automatically. So if the script didn't work (please leave an issue on [[https://github.com/hawkina/useful_scripts|github]] in that case), you can follow this guide.
  
If you record data in the Virtual Reality environment using [[http://robcog.org/|RobCog]], you essentially get the following files:
''RawData_ID.json'' <- contains all the recorded events that happened: where everything was, who moved what, where and when, etc.
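For such a ''json'' file, a typical import into a running MongoDB instance looks roughly like this (the database and collection names are placeholders):
<code bash>
# import the recorded events into a database/collection of your choice
$ mongoimport --db RobCog --collection ep1_RawData --file RawData_ID.json
</code>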
  
=== KnowRob ===
KnowRob needs to be able to find the .owl and .bson files. It's important to have a directory for all the episodes. You can either have a look at or download these [[ftp://open-ease-stor.informatik.uni-bremen.de/|episodes]] for reference. Keep in mind that, depending on which episode data should be loaded, the path within CRAM (the ''*episode-path*'' variable within ''init.lisp'') might have to be adapted, as well as the parameter which is passed to the ''init-full-simulation'' function.
  
=== Performance ===
This step is also covered by the [[https://github.com/hawkina/useful_scripts|scripts]] mentioned above, but it can also be executed manually.

Depending on how many collections your database has, it can get slow when querying for information. One way to make it faster is to add an index over the timestamp for all collections. One way to add this is to install [[https://www.mongodb.com/products/compass|Compass]] for MongoDB. Launch it and connect it to your database; the default settings should be fine, so just click ''ok'' when it launches. Then go to your collection -> indexes -> create index. Call the new index ''timestamp'', select the field ''timestamp'', set the type to ''1 (asc)'' and click create. Repeat for all the collections; it will improve the query speed greatly. An equivalent way of doing this from the shell is sketched below.
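If you prefer the shell over Compass, the same indexes can be created from the ''mongo'' shell (the database name is a placeholder):
<code bash>
$ mongo
> use RobCog
> db.getCollectionNames().forEach(function (name) {
    db.getCollection(name).createIndex({ timestamp: 1 });  // one index per collection
  })
</code>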