  
=== Roslaunch ===
Launch a ''roscore'' first. Then, in a new terminal for each launch file, launch the simulation and the ''roslisp_repl'':
<code bash>
    $ roslaunch cram_knowrob_vr simulation.launch
    $ roslisp_repl
</code>
The ''simulation.launch'' file includes the json_prolog node, which is needed for the communication between KnowRob and CRAM. It also launches the ''bullet world simulation'' and uploads the ''robot description''. The launch file accepts the following parameters, listed here with their default values; an example call using them follows the list:
  * **upload:=true** uploads the robot description if set to ''true''. Set it to ''false'' if the robot description is already being uploaded by another node, e.g. the real robot.
  * **knowrob:=true** determines if the ''json_prolog'' node should be launched to allow communication with KnowRob. Set it to ''false'' if another instance of KnowRob or json_prolog is already running.
  * **boxy:=false** determines which robot description should be uploaded and used. The default ''false'' means that the PR2 description will be used; if set to ''true'', Boxy will be used.
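Since these are ordinary ''roslaunch'' arguments, they can be overridden directly on the command line. For example, to reuse an already running KnowRob instance while still uploading the PR2 description (the values here are only an illustration):
<code bash>
    $ roslaunch cram_knowrob_vr simulation.launch upload:=true knowrob:=false boxy:=false
</code>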
  
==== Usage and Code ====
To launch all the necessary components, simply execute:
<code lisp>
CL-USER> (kvr::init-full-simulation :namedir '("ep1") :urdf-new-kitchen? nil)
</code>
  
This will create a lisp ros node, clean up the belief-state, load the episodes that get passed to the init function as a list of strings in the ''namedir'' key parameter (in our case "ep1"), spawn the semantic map of the episode and the items, and initialize the location costmap. This process may take a while, so please have some patience. When the function has finished running, your bullet world should look like this:
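Since the episodes are passed as a list of strings, several recorded episodes can be loaded in one call. A minimal sketch, assuming a second episode directory called ''ep2'' exists in your episode data:
<code lisp>
CL-USER> (kvr::init-full-simulation :namedir '("ep1" "ep2") :urdf-new-kitchen? nil)
</code>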
  
Now, let's execute the pick and place plan:
  
== mapping-urdf-semantic.lisp ==
Maps the urdf kitchen (bullet world simulation environment) to the semantic map (virtual reality environment), since they differ in how some furniture is organized and named. Also maps the names of the objects the robot interacts with between the two environments.
  
== init.lisp ==
Contains all the needed initialization functions for the simulation environment, for episode loading, and for the simulated or real robot. Also contains the ''*episode-path*'' variable, which sets the location of the episode data.
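If your episode data is stored somewhere other than the default location, the variable can be adjusted from the REPL before calling the init function. A sketch, with a placeholder path:
<code lisp>
;; placeholder path, replace with the directory that contains your episode data
CL-USER> (setf kvr::*episode-path* "/home/myuser/episodes/")
</code>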
  
== queries.lisp ==
Contains query wrappers so that the Prolog queries can be called as lisp functions, and also includes the queries which read out the data from the database, e.g. the poses of the object, hand and head of the actor in the virtual reality.
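Conceptually, such a wrapper just sends a Prolog query string through the ''json_prolog'' interface and returns the resulting bindings. A minimal sketch of the idea; the function name and query string are illustrative and not taken from the actual file:
<code lisp>
(defun query-example-bindings ()
  ;; illustrative wrapper, not the actual code from queries.lisp:
  ;; send a Prolog query string via json_prolog and return the bindings
  (json-prolog:prolog-simple "member(Ep, [ep1, ep2])"))
</code>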
  
== query-based-calculations.lisp ==
Includes all the transformation calculations needed to make the poses of the robot relative to the respective object, and the poses of the objects relative to the surfaces. Mostly works on lazy-lists of poses.
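The core of such a calculation is composing one transform with the inverse of another, e.g. expressing a recorded hand pose in the frame of the manipulated object. A minimal sketch using ''cl-transforms'', with made-up numbers rather than data from an actual episode:
<code lisp>
(let* ((map-T-object (cl-transforms:make-transform
                      (cl-transforms:make-3d-vector 1.0 0.5 0.9)
                      (cl-transforms:make-identity-rotation)))
       (map-T-hand (cl-transforms:make-transform
                    (cl-transforms:make-3d-vector 1.2 0.4 1.0)
                    (cl-transforms:make-identity-rotation))))
  ;; object-T-hand = inverse(map-T-object) * map-T-hand,
  ;; i.e. the hand pose expressed relative to the object
  (cl-transforms:transform* (cl-transforms:transform-inv map-T-object)
                            map-T-hand))
</code>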
  
== designator-integration.lisp ==
Integrates the pose calculations from ''query-based-calculations.lisp'' into location designators.
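As a point of reference, a location designator in CRAM wraps such a calculated pose so that plans can resolve it to a concrete goal. A schematic sketch; the actual designator properties used in the package may differ:
<code lisp>
(let ((?vr-pose (cl-transforms-stamped:make-pose-stamped
                 "map" 0.0
                 (cl-transforms:make-3d-vector 1.2 0.4 1.0)
                 (cl-transforms:make-identity-rotation))))
  ;; schematic: wrap a pose derived from the VR data in a location designator
  (desig:a location (pose ?vr-pose)))
</code>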
  
== fetch-and-deliver-based-demo.lisp ==
Sets up the plan for the demo with the respective action designator. Also includes logging functions.
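For orientation, the generic fetch-and-deliver interface in CRAM is driven by a transporting action designator roughly of the following shape. This is only schematic; the exact properties set up in this file may differ, and the ''?''-variables stand for designators that have to be bound elsewhere:
<code lisp>
(desig:an action
          (type transporting)
          (object ?object-to-fetch)
          (location ?fetching-location)
          (target ?delivering-location))
</code>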

== debugging-utils.lisp ==
Contains a lot of debugging and helper functions, which can also visualize all the calculated poses.
==== Importing new episode data into MongoDB and KnowRob (Additional information) ====
In order to be able to query the recorded data for information, we first need to import that data into KnowRob and MongoDB.