</code>
  
This will create a Lisp ROS node, clean up the belief state, load the episodes that get passed to the init function as a list of strings (e.g. in our case "ep1"), spawn the semantic map of the episode and the items, and initialize the location costmap. The code section below explains in more detail what is loaded and when. This process may take a while, so please have some patience. When the function has finished running, your bullet world should look like this:
  
Now, let's execute the pick and place plan:
<code lisp>
CL-USER> (cram-urdf-projection:with-simulated-robot (kvr::demo))
</code>
With this call we first say that we want to use the simulated bullet-world PR2 robot instead of the real one, and then we simply call the demo. The demo will read out the VR episode data and extract the positions of the objects that have been manipulated, which hand was used, and the positions of the human head and hand.
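
As an aside, with-simulated-robot is not tied to this particular demo: it simply executes whatever plan body it is given on the projected bullet-world robot. A minimal sketch, assuming the standard CRAM executive and designator packages (exe, desig) are loaded as in the basic projection tutorials, could look like this:
<code lisp>
;; Hedged sketch: run a single motion designator on the projected robot
;; instead of the full demo. Assumes the usual CRAM packages
;; (exe = cram-executive, desig = cram-designators) are available.
CL-USER> (cram-urdf-projection:with-simulated-robot
           (exe:perform
            (desig:a motion (type moving-torso) (joint-angle 0.3))))
</code>
If the simulated PR2's torso moves in the bullet world, the projection environment is set up correctly and the full demo above should run as well.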