tutorials:advanced:unreal [2020/01/13 14:37] (current), last edited by hawkin
This tutorial will introduce you to the ''cram_knowrob_vr (short: kvr)'' package, which uses the data recorded in the Virtual Reality environment using [[http://robcog.org/|RobCog]], extracts information from it using [[http://www.knowrob.org/|KnowRob]] and executes the CRAM high-level plans based on this data, either on the real robot or in the CRAM [[tutorials:advanced:bullet_world|bullet world]].
  
==== Idea ====
We essentially want to teach a robot how to perform everyday activities without having to dive deep into code, by simply showing it what we want it to do in Virtual Reality. This allows robots to learn from humans easily, since the robot can acquire information about where the human looked for things and where things were placed. Of course, this could also be hard-coded, but that would take a lot more time and be prone to failure: we as humans often forget to describe minor things which we take for granted, but which can play a huge role in the success of the robot's task. E.g. if the cooking pot is not in its designated area, we would automatically check the dishwasher or the sink area. This is something the robot would have to learn first.

Another advantage of using Virtual Reality is that we can train the robot on all kinds of different kitchen setups, which can be built within a few minutes instead of having to move around physical furniture. This also allows for generalization of the acquired data and adds to the robustness of the pick and place tasks.
==== Prerequisites ====
This tutorial assumes that you've completed the [[tutorials:intermediate:json_prolog|Using JSON Prolog to communicate with KnowRob]] tutorial and therefore have ROS, CRAM, KnowRob and MongoDB installed. In order to be able to use the kvr package, a few specific changes have to be made. Within the ''knowrob_addons'', the ''knowrob_robcog'' package has to be replaced by this one: [[https://github.com/robcog-iai/knowrob_robcog.git]]
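The replacement could be done, for example, like this (a sketch; the catkin workspace path ''~/catkin_ws/src'' is an assumption, adjust it to your setup):
<code bash>
    # Remove the stock knowrob_robcog from knowrob_addons and clone the RobCog fork instead
    $ cd ~/catkin_ws/src/knowrob_addons
    $ rm -rf knowrob_robcog
    $ git clone https://github.com/robcog-iai/knowrob_robcog.git
</code>
Afterwards, rebuild your workspace so the replaced package is picked up.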
  
=== Roslaunch ===
Launch a ''roscore'' first. Then, in a new terminal for each launch file, launch the simulation and ''roslisp_repl'':
<code bash>
    $ roslaunch cram_knowrob_vr simulation.launch
    $ roslisp_repl
</code>
-The bullet world is needed for visualization. The json_prolog node allows us to access information in KnowRob ​from CRAM+The ''​simulation.launch''​ includes the json_prolog node which is needed for the communication between KnowRob and CRAM. It also launches the ''​bullet world simulation''​ and uploads the ''​robot description''​. This launch file has the following parameters that can be set with its launch. The following are also the default values: 
 +  * **upload:​=true** uploads the robot description if set to ''​true''​. Set to ''​false''​ if the robot description is being uploaded by another node or e.g. the real robot. 
 +  * **knowrob:​=true** determines if the ''​json_prolog'' ​node should be launched ​to allow communication with KnowRob. Set to ''​false''​ if another instance of KnowRob or json_prolog is running already. 
 +  * **boxy:​=false** determines which robot description should be uploaded and used. The default case ''​false''​ means that the PR2 description will be used. In case of ''​true'',​ Boxy will be used
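For example, the parameters can be combined on the command line like this (a sketch; here another node is assumed to upload the Boxy description, so the upload is disabled):
<code bash>
    $ roslaunch cram_knowrob_vr simulation.launch upload:=false boxy:=true
</code>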
  
==== Usage and Code ====
</code>
  
If this is used in simulation, then depending on whether the PR2 or Boxy robot is supposed to be used, the respective robot description needs to be loaded first.

For PR2:
<code lisp>
CL-USER> (swank:operate-on-system-for-emacs "cram-pr2-description" (quote load-op))
</code>
  
For Boxy:
<code lisp>
CL-USER> (swank:operate-on-system-for-emacs "cram-boxy-description" (quote load-op))
</code>

To launch all the necessary initializations, simply execute:
<code lisp>
CL-USER> (kvr::init-full-simulation :namedir '("ep1") :urdf-new-kitchen? nil)
</code>

This will create a Lisp ROS node, clean up the belief state, load the episodes that get passed to the ''init'' function as a list of strings in the ''namedir'' key parameter (e.g. "ep1" in our case), spawn the semantic map of the episode and the items, and initialize the location costmap. This process may take a while (it can take several minutes on Ubuntu 16.04), so please have some patience, or go grab a coffee meanwhile. When the function has finished running, your bullet world should look like this:
  
Now, let's execute the pick and place plan:
  
=== Code ===
== mesh-list.lisp ==
Contains a list of all the meshes which we want to spawn based on their locations in the semantic map. Some of them are commented out, e.g. walls, lamps and the objects we interact with, in order to keep the bullet world neat and clean. In Unreal, however, the walls and lamps are being spawned; we simply don't need them in bullet at the moment.
  
== mapping-urdf-semantic.lisp ==
Maps the urdf kitchen (the bullet world simulation environment) to the semantic map (the Virtual Reality environment), since they differ in how some furniture is organized and named. Also maps the names of the objects the robot interacts with between the two environments.
  
== init.lisp ==
Contains all the needed initialization functions for the simulation environment, for episode loading and for the simulated or real robot. Also contains the ''*episode-path*'' variable, which sets the location of the episode data.
  
== queries.lisp ==
Contains wrappers so that the queries can be called as Lisp functions, and also includes the queries which read out the data from the database, e.g. the poses of the object, hand and head of the actor in the Virtual Reality.
  
== query-based-calculations.lisp ==
Includes all transformation calculations that make the poses of the robot relative to the respective object, and the poses of the objects relative to the surfaces. Mostly works on lazy lists of poses.
  
== designator-integration.lisp ==
Integrates the pose calculations from ''query-based-calculations.lisp'' into location designators.
  
== fetch-and-deliver-based-demo.lisp ==
Sets up the plan for the demo with the respective action designator. Also includes logging functions.

== debugging-utils.lisp ==
Contains many debugging and helper functions, which can also visualize all the calculated poses.

==== Importing new episode data into MongoDB and KnowRob (Additional information) ====
In order to be able to query episode data for information, we first need to import that data into KnowRob and MongoDB.
MongoDB contains all the poses of the objects, camera, hand and furniture of the episode, so the poses are read out from there. KnowRob takes care of everything else: it knows all the object classes and instances used in the episodes. In order to be able to use the data, we therefore need to import it into both MongoDB and KnowRob.
The following description only applies if the data has been recorded using the [[http://robcog.org/|RobCog]] tool for Unreal Engine, which results in a ''json'' file containing the events and a matching ''SemanticMap.owl'' describing the environment.

=== MongoDB (quick, with scripts) ===
There is a script which imports the episode data into the currently running MongoDB instance. Please see [[https://github.com/hawkina/useful_scripts|scripts]] and the provided Readme for reference and usage.

=== MongoDB (manual, in-depth) ===
This explains how to import the episode data manually into MongoDB. It is essentially what the script described above does automatically, so if the script didn't work (please leave an issue on [[https://github.com/hawkina/useful_scripts|GitHub]] in that case), you can follow this guide.
  
If you record data in the Virtual Reality using [[http://robcog.org/|RobCog]], you essentially get the following files:
''RawData_ID.json'' <- this contains all the recorded events that happened: where everything was, who moved what where and when, etc.
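As a sketch of the manual import, the ''json'' file can be loaded with ''mongoimport'' (the database name ''Episodes'' and collection name ''ep1_RawData'' are placeholder examples, not fixed by the tooling):
<code bash>
    # Import the recorded events into a running MongoDB instance
    $ mongoimport --db Episodes --collection ep1_RawData --file RawData_ID.json
</code>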
  
=== KnowRob ===
KnowRob needs to be able to find the .owl and .bson files. It's important to have a directory for all the episodes. You can either have a look at or download these [[ftp://open-ease-stor.informatik.uni-bremen.de/|episodes]] for reference. Keep in mind that, depending on which episode data should be loaded, the path within CRAM (the ''*episode-path*'' variable within ''init.lisp'') might have to be adapted, as well as the parameter which is passed to the ''init-full-simulation'' function.
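For example, the path could be adapted from the REPL before initializing (the directory here is a hypothetical example; ''*episode-path*'' itself is defined in ''init.lisp''):
<code lisp>
;; Point kvr at the directory holding the episode data, then initialize
CL-USER> (setf kvr::*episode-path* "/home/user/episodes/")
CL-USER> (kvr::init-full-simulation :namedir '("ep1") :urdf-new-kitchen? nil)
</code>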
  
=== Performance ===
This step is also covered by the [[https://github.com/hawkina/useful_scripts|scripts]] mentioned above, but can also be executed manually.

Depending on how many collections your database has, it can get slow when querying for information. One way to make it faster is to add an index over the timestamp for all collections. One way to add this is to install [[https://www.mongodb.com/products/compass|Compass]] for MongoDB. Launch it and connect it to your database; the default settings should be fine, so just click ''ok'' when it launches. Then go to your collection -> indexes -> create index. Call the new index ''timestamp'', select the field ''timestamp'', set the type to ''1 (asc)'' and click create. Repeat for all the collections. It will improve the query speed greatly.
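The same indexes can also be created from the mongo shell instead of Compass (a sketch; the database name ''Episodes'' is a placeholder for whichever database holds your episode collections):
<code bash>
    # Create an ascending "timestamp" index on every collection in the database
    $ mongo Episodes --eval 'db.getCollectionNames().forEach(function(n){ db[n].createIndex({timestamp: 1}) })'
</code>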