==== Using TF in the Bullet world ====

By default, the TF listener is not set up in the REPL when you are working with the Bullet world.
To have it running, we need a TF context. It is possible to create one manually, but to keep this tutorial simple
we will use the environment provided by the ''pr2-proj:with-simulated-robot'' macro. It enables us to look up transforms between robot frames, kitchen frames etc.

For example, the following does not work (unless you have a real robot running in your ROS ecosystem):
<code lisp>
BTW-TUT> (cl-tf:lookup-transform cram-tf:*transformer* "map" "l_gripper_tool_frame")
</code>
and the following does:
<code lisp>
BTW-TUT> (pr2-proj:with-simulated-robot
           (cl-tf:lookup-transform cram-tf:*transformer* "map" "l_gripper_tool_frame"))
</code>
Here, ''"l_gripper_tool_frame"'' is the tool frame of PR2's left hand.
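
The result of ''cl-tf:lookup-transform'' is a transform-stamped, so its components can be inspected with the ''cl-transforms'' accessors. A minimal sketch, reusing the frames from above:
<code lisp>
BTW-TUT> (pr2-proj:with-simulated-robot
           ;; where is the left gripper in the map frame?
           (let ((map-to-gripper (cl-tf:lookup-transform cram-tf:*transformer*
                                                         "map" "l_gripper_tool_frame")))
             ;; extract the translational part as a 3D vector
             (cl-transforms:translation map-to-gripper)))
</code>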
  
==== Moving the robot in the Bullet world ====
  
<html><!--
To execute any plan in CRAM, we need a top-level context. Besides that we also use a macro to specify that the demo should be executed in simulation, not on the real robot. Putting your plan under ''pr2-proj:with-simulated-robot'' will indicate that your robot is in the projection environment and it also has a top level call within it. The ''with-simulated-robot'' is a way to abstract out the robot details from your plans and its counterpart to execute on a real robot would be ''pr2-pms:with-real-robot''. Also note that without mentioning the robot that you want to execute on, the TF for it is not published and you'll run into errors.
We can execute some movements in parallel, if they use different joints of the robot. That's what ''cpl:par'' is for.
We have used a simple call to low level methods to achieve motions like move to the ''?grasp-base-pose'' and look at ''?grasp-look-pose''. These can be achieved by corresponding motion designators, which we will look at in later tutorials.
--></html>
  
Putting all these together we end up with the following:
<code lisp>
BTW-TUT>
(pr2-proj:with-simulated-robot
  (pr2-proj::move-joints '(1.9652919379395388d0
                           -0.26499816732737785d0
                           1.3837617139225473d0
                           -2.1224566064321584d0
                           16.99646118944817d0
                           -0.07350789589924167d0
                           0.0)
                         '(-1.712587449591307d0
                           -0.2567290370386635d0
                           -1.4633501125737374d0
                           -2.1221670650093913d0
                           1.7663253481913623d0
                           -0.07942669250968948d0
                           0.05106258161229582d0))
  (pr2-proj::drive ?grasp-base-pose)
  (pr2-proj::look-at :pose ?grasp-look-pose))
</code>
As some of the functions in the ''cram-pr2-projection'' package need a running TF listener object, we wrapped our calls in ''pr2-proj:with-simulated-robot''.

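''pr2-proj:with-simulated-robot'' also establishes a top-level task context, so motions that use different joints of the robot could even be run in parallel with ''cpl:par''. A sketch, reusing the poses bound earlier in the tutorial:
<code lisp>
BTW-TUT>
(pr2-proj:with-simulated-robot
  (cpl:par
    ;; driving only changes the base pose ...
    (pr2-proj::drive ?grasp-base-pose)
    ;; ... while looking only moves the neck joints
    (pr2-proj::look-at :pose ?grasp-look-pose)))
</code>
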
The function ''pr2-proj::move-joints'' moves the joints of both arms into the specific configuration given in its arguments, so that the arms do not hang around in the field of view. ''pr2-proj::drive'' moves the robot base by internally calling
<code lisp>
(prolog:prolog '(btr:assert ?world (btr:object-pose ?robot ?target-pose)))
</code>
''pr2-proj::look-at'' calculates the pan and tilt angles of the robot's neck such that it ends up looking at the specified point, and asserts these angles to the neck joints.
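
For instance, to make the robot look at an arbitrary point given in the ''"map"'' frame, one could construct a pose-stamped and pass it to ''pr2-proj::look-at'' (a sketch with made-up coordinates):
<code lisp>
BTW-TUT>
(pr2-proj:with-simulated-robot
  (pr2-proj::look-at
   :pose (cl-transforms-stamped:make-pose-stamped
          "map" 0.0
          ;; made-up point somewhere in the kitchen
          (cl-transforms:make-3d-vector -2.0 -0.5 0.85)
          (cl-transforms:make-identity-rotation))))
</code>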
  
Now, let us finally perceive the object and store the result in the ''*perceived-object*'' variable:
<code lisp>
BTW-TUT>
</code>
  
With that resulting perceived object we could perform the picking-up action. However, with the torso so far down we might not be able to reach the bottle, so we need to push the torso up.
  
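In the Bullet world the torso can be raised by asserting a new state for the torso joint, analogously to how ''pr2-proj::drive'' asserts the base pose above. A sketch, assuming that ''btr:joint-state'' can be asserted in the same way as ''btr:object-pose'' and using PR2's ''torso_lift_joint'':
<code lisp>
BTW-TUT>
(prolog:prolog '(and (btr:bullet-world ?world)
                     ;; bind ?robot to the name of the robot in the world
                     (cram-robot-interfaces:robot ?robot)
                     ;; raise the torso joint to 0.3 m
                     (btr:assert ?world (btr:joint-state ?robot (("torso_lift_joint" 0.3))))))
</code>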
  
As there is no atomic motion for picking up an object (picking up is in fact composed of multiple move-arm motions), pick-up is implemented within a plan and called by performing an action designator. Performing motion and action designators in the Bullet world (or on the real robot) is explained in the [[http://cram-system.org/tutorials/intermediate/simple_mobile_manipulation_plan|next tutorial on writing simple mobile manipulation plans]].