Tested with CRAM v0.7.0, ROS version: Kinetic, Ubuntu 16.04

Bullet world demonstration

This tutorial demonstrates how the internal world state of the robot is represented in CRAM, how to update the world, how to perform specialized reasoning in it and how to use lightweight simulation for quick testing of plans.

This tutorial assumes that you have a basic understanding of how the world state is implemented internally. You can read about this in the documentation entry for geometric reasoning. You can find the code of this tutorial on Github.

Installing the projection project

As the system we will be working with gets quite complex, things might not always go as expected. If you get into trouble while following this tutorial, feel free to consult the Support page or contact the person who last edited this tutorial; you can find the username in the footer.

If you followed the installation instructions for the recommended CRAM version on the Installation page, you should be ready to go. Just in case, check the version with which this tutorial was last tested; it is listed at the very top of this tutorial.

Environment setup

We will set up the environment of the robot similar to how it is done for the real system: we will need an occupancy map for navigation, an IK solver for manipulation, a representation of the environment with the furniture etc.

First, let's set up the environment in our terminal by calling the launch file. It also brings up a roscore, so you do not need to start one separately:

$ roslaunch cram_bullet_world_tutorial world.launch

Let's look into that launch file.

$ roscd cram_bullet_world_tutorial
$ cat launch/world.launch

The robot needs to know who it is and what body parts it has, so we upload such a robot description to the ROS parameter server. To check if PR2's URDF was uploaded to the server, we can execute the following:

$ rosparam get /robot_description | awk 'NR == 5' # read the fifth line of the URDF

The name of the parameter is “robot_description”. To simulate simple grasping tasks with the PR2 robot in the bullet-world we need an inverse kinematics solver for the arms. In the launch file those are the pr2_arm_kinematics package nodes.
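
To verify that the IK solver nodes came up, you can list the running ROS nodes and filter for them (the exact node names depend on the launch file, so treat this as a rough check):

$ rosnode list | grep kinematics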

In addition to the description of itself, the robot can also be told what its environment looks like. This is very practical if the environment does not change often and a model of it is available: the robot then does not have to perceive and classify the furniture in its surroundings (which is very difficult and time-consuming). We have a model of the laboratory kitchen of our Institute for AI, where our PR2 lives. The kitchen is represented in the URDF format just like the PR2, as we can consider the kitchen to be a robot with translational, rotational and fixed joints: translational for drawers, rotational for doors and fixed for, e.g., a fixed handle on the fridge door. We can thus open the fridge door by changing the joint state of the door joint. To check if the kitchen URDF was uploaded to the ROS parameter server, we can execute the following:

$ rosparam get /kitchen_description | awk 'NR == 5'

As you can see, the name of the kitchen parameter is “kitchen_description”.
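
Since drawers and doors are encoded as joints in the kitchen URDF, you can get a rough overview of the joint types straight from the parameter server. A sketch, assuming type attributes in this URDF only occur on joint tags (prismatic corresponds to translational, revolute/continuous to rotational):

$ rosparam get /kitchen_description | grep -o 'type="[a-z]*"' | sort | uniq -c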

(The semantic map is used to reason about semantically-described places in the kitchen, such as that an iai_fridge_main is an object of class Fridge, which is a container for perishable items. For that we query KnowRob. Thus, the semantic map can only be activated if you have KnowRob installed. Per default CRAM does not come with KnowRob, so we will not discuss semantic maps in this tutorial.)

If both rosparam get commands return data, we can get going.

REPL setup

Now, let's load the package in the REPL (start the REPL with $ roslisp_repl):

CL-USER> (ros-load:load-system "cram_bullet_world_tutorial" :cram-bullet-world-tutorial)
CL-USER> (in-package :cram-bullet-world-tutorial)

As we will be talking to the ROS master, let's first start a node:

BTW-TUT> (roslisp:start-ros-node "bullet_world")

First, let's do all the initializations:

BTW-TUT> (cram-occupancy-grid-costmap::init-occupancy-grid-costmap) ; Inits the occupancy map for navigation.
BTW-TUT> (cram-bullet-reasoning-belief-state::ros-time-init) ; Resets the simulated time.
BTW-TUT> (cram-location-costmap::location-costmap-vis-init) ; Inits publishing of markers in RViz.
BTW-TUT> (cram-tf::init-tf) ; Inits the TF listener object.

The functions above initialize and clean up the environment of the REPL. Currently, we call those functions by hand for educational purposes. However, there exists a function that does the initialization automatically and spawns the completely set up visualization window. It will be described later.

The way CRAM is designed is that the world state (the Bullet world) is only updated through Prolog queries, as we consider the world state to be “robot knowledge” that we can query and that can get things asserted to it, and we try to manipulate the knowledge only through Prolog and not through Lisp directly. To learn more about using Prolog in CRAM you can quickly go through the Prolog tutorial.

A new Bullet world instance is created when we load the cram-bullet-reasoning ASDF system (which is the dependency of our tutorial package). It is stored in the *current-bullet-world* variable, which we can access through the following Prolog query:

BTW-TUT> (prolog:prolog '(btr:bullet-world ?world))
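
The query returns a lazy list of binding sets. To get hold of the actual world instance in Lisp, you can extract the binding of the ?world variable; a sketch, assuming the cut:lazy-car and cut:var-value helpers from cram_utilities:

BTW-TUT> (cut:var-value '?world
                        (cut:lazy-car (prolog:prolog '(btr:bullet-world ?world))))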

Bullet world initialization

To be able to see what is going on in the world state and to ease the debugging process we visualize the current world instance with OpenGL:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (btr:debug-window ?world)))

Please note that the visualization window does not necessarily need to be open when the robot is operating and using its world state; in fact, it should be closed to save processing power.

The “bullet visualization” window that appears should be empty at the moment. Let's spawn some objects.

The first thing we need is the floor plane:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object ?world :static-plane :floor ((0 0 0) (0 0 0 1))
                                                  :normal (0 0 1) :constant 0))))

This assert query must not be confused with the Common Lisp assert macro. To find the origin of this assert query, you can look up its definition. Since it is not a function but a Prolog fact, you have to search with rgrep in the cram_3d_world/cram_bullet_reasoning directory:

BTW-TUT> Alt-x
rgrep RET
(<- (assert RET
*.lisp RET
/path/to/your/workspace/src/cram_3d_world/cram_bullet_reasoning/src/ RET

There are quite a few definitions here, serving various queries. All of them have in common that they change something in the world. The one we are reaching with the previous call is this one, located in world-facts.lisp:

(<- (assert (object ?world ?object-type ?name ?pose . ?args))
  (assert ?world (object ?object-type ?name ?pose . ?args)))

In the assert query, :static-plane is the type of the object we're spawning (it can be :static-plane, :sphere, :box, :mesh, :urdf etc.), :floor is the name of the object, and ((0 0 0) (0 0 0 1)) is its pose: the position (0 0 0) and the orientation quaternion (0 0 0 1).
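
The remaining keyword arguments depend on the object type. As another example, here is a sketch for spawning a box, assuming the :box type accepts :mass and :size (edge lengths in meters) keys analogous to the other object constructors:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object ?world :box box-1 ((1.5 0 1) (0 0 0 1))
                                                  :mass 0.2 :size (0.2 0.2 0.2)))))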

Via the launch file at the beginning of this tutorial the robot URDF representation has been uploaded to the ROS parameter server under the name robot_description. Let's load it into the world:

BTW-TUT> (let ((robot-urdf
                   (cl-urdf:parse-urdf
                    (roslisp:get-param "robot_description"))))
             (prolog:prolog
              `(and (btr:bullet-world ?world)
                    (cram-robot-interfaces:robot ?robot)
                    (assert (btr:object ?world :urdf ?robot ((0 0 0) (0 0 0 1)) :urdf ,robot-urdf))
                    (cram-robot-interfaces:robot-arms-parking-joint-states ?robot ?joint-states)
                    (assert (btr:joint-state ?world ?robot ?joint-states))
                    (assert (btr:joint-state ?world ?robot (("torso_lift_joint" 0.15d0)))))))

The line (roslisp:get-param “robot_description”) gets the URDF description of the robot as a string. (cl-urdf:parse-urdf …) then parses this string into a cl-urdf:robot object. The (cram-robot-interfaces:robot ?robot) predicate reads the name of the robot that was specified in a robot description Lisp package. For the PR2 it is the cram_pr2_description package, which defines the robot name as pr2, so ?robot gets bound to pr2. This line:

(assert (btr:object ?world :urdf ?robot ((0 0 0) (0 0 0 1)) :urdf ,robot-urdf))

spawns the robot. Keep in mind that spawning a robot in the world creates a new object. The first :urdf defines the type of the created object, while the second is used as a keyword to provide the object's URDF. Then we read out the hard-coded values for the robot's arm parking states, where the arms are not in the way, and assert those joint states to our PR2 object of the Bullet world.
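
To verify that the joint assertion worked, you can read a joint position back from the robot object; a sketch, assuming btr:joint-state is also available as a reader function on the robot model:

BTW-TUT> (btr:joint-state (btr:object btr:*current-bullet-world*
                                      'cram-pr2-description:pr2)
                          "torso_lift_joint")
0.15d0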

Finally, let's spawn the kitchen environment:

BTW-TUT> (let ((kitchen-urdf 
                 (cl-urdf:parse-urdf 
                  (roslisp:get-param "kitchen_description"))))
             (prolog:prolog
              `(and (btr:bullet-world ?world)
                    (assert (btr:object ?world :urdf :kitchen ((0 0 0) (0 0 0 1))
                                        :urdf ,kitchen-urdf
                                        :collision-group :static-filter
                                        :collision-mask (:default-filter :character-filter)
                                        :compound T)))))

In the visualization window you can now see the kitchen loaded from the URDF with the semantic map information attached to it.

You can see the environment in RViz as well. Open up a terminal and type “rosrun rviz rviz” or just “rviz”. Tweak your settings as shown in the image below.

If there are parameters missing (left box) you can add them using the 'Add' button below. Take the robot model of the kitchen for example:

To see the URDF of the kitchen in RViz you need its description name 'kitchen_description' and the TF prefix 'iai_kitchen'. In the launch file 'world.launch' you can see that the name of the kitchen URDF is remapped from 'robot_description' (the default) to 'kitchen_description', which is the name you need in your RViz configuration. When you open the robot model parameter, you have two checkboxes, 'Visual Enabled' and 'Collision Enabled'. The former shows a detailed mesh of each component in the kitchen, the latter shows the collision objects, which are usually less detailed (in computer games and also in robotics). We use collision objects when navigating the PR2, since calculating potential collisions would be very expensive with visual meshes.

When you are ready with the robot model of the kitchen, look at the 'TF' visualization in RViz. It tells you if any poses can't be resolved, e.g. when their parent link is not published in TF. Open up the 'Tree' parameter of 'TF' (left panel of RViz). This is a tree representing all the TF poses with their respective parents and children. If you change the pose of a frame, all of its children will change poses as well.

As you can see, this environment visualizes all the poses of TF frames of our environment that we can use in our tutorial. Get back to the debug window (Bullet visualization window) to see the actual meshes of the objects. The debug window only shows the collision meshes, not the detailed visuals, since we want to keep the bullet world fast and lightweight and concentrate on collisions not visuals.

Please note that you will not be able to see the robot TF frames with our 'world.launch' launch file, only the TF frames of the kitchen.

The environment is now initialized. For convenience, there exists a function that will execute all of the above in one call. If you call it now it will reinitialize everything, which is also fine:

BTW-TUT> (roslisp-utilities:startup-ros)

This starts a ROS node and executes the ROS initialization functions, which are registered in CRAM through roslisp-utilities:register-ros-init-function. You can see which functions are registered by executing the following:

BTW-TUT> (alexandria:hash-table-keys roslisp-utilities::*ros-init-functions*)

You should find all of the functions that we manually called above and one additional, the (init-projection) function, which initializes the Bullet world with the robot and the kitchen environment.

Manipulating the kitchen

Let us inspect the kitchen now:

BTW-TUT> (btr:object btr:*current-bullet-world* :kitchen)

If needed, change the name ':kitchen' to the name you used when spawning the kitchen. This call should return a robot-object in the REPL. Inspect it by right-clicking on the #<CRAM-BULLET-REASONING:ROBOT-OBJECT {…}> and choosing Inspect. You can look into all its joints and states, how they are linked and what state they are in (press L to go up in the inspector). Now we are about to change one of those joints, causing the kitchen to open the door of the fridge.

BTW-TUT>
(btr:set-robot-state-from-joints
 '(("iai_fridge_door_joint"  0.3d0))
 (btr:object btr:*current-bullet-world* :kitchen))

If you did everything right, the fridge door should now be slightly open. Check the other joints and change them as you wish. Watch their limits though.
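
The joint limits can be looked up in the parsed URDF model. A sketch, assuming the btr:urdf reader on the robot object and the cl-urdf:joints and cl-urdf:limits accessors:

BTW-TUT> (let* ((kitchen (btr:object btr:*current-bullet-world* :kitchen))
                (joint (gethash "iai_fridge_door_joint"
                                (cl-urdf:joints (btr:urdf kitchen)))))
           (cl-urdf:limits joint))

Inspect the returned limits object to see the lower and upper bound of the joint.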

Spawn and change objects in the world

From this point on let us use the fast initialization function to start the bullet-world:

BTW-TUT> (roslisp-utilities:startup-ros)

Now let us spawn some household objects:

BTW-TUT> 
(prolog:prolog '(and (btr:bullet-world ?world)
                     (assert (btr:object ?world :mesh mug-1 ((0 0 2) (0 0 0 1))
                                         :mass 0.2 :color (1 0 0) :mesh :mug))))

A mug appears above the robot's head. We can also move objects:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object-pose ?world mug-1 ((-1 1 1) (0 0 0 1))))))

Now the mug is above the kitchen island table.

To get the instance of the spawned Bullet object there is a function btr:object:

BTW-TUT> (btr:object btr:*current-bullet-world* 'mug-1)

We can get the pose of that object through the function btr:pose:

BTW-TUT> (btr:pose (btr:object btr:*current-bullet-world* 'mug-1))
#<CL-TRANSFORMS:POSE 
   #<3D-VECTOR (-1.0d0 1.0d0 1.0d0)>
   #<QUATERNION (0.0d0 0.0d0 0.0d0 1.0d0)>>

Geometric reasoning

Our mug is floating in the air. The world state representation in CRAM is static; to apply physics to it, we use the Bullet physics engine to simulate the world for a certain amount of time.

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (btr:simulate ?world 10)))

This simulates the world for 1 second and the mug falls onto the table.

We can also ask if the scene is stable:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (btr:stable ?world)))
(((?WORLD . ...)))

This copies our current world, simulates it for a certain amount of time, and compares the poses of objects before and after the simulation. If the result is (NIL) or anything else except NIL, the mug is standing stably on the table.
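
Since a failed Prolog query returns NIL and a successful one returns a non-NIL list of binding sets, the result can be used directly as a boolean in Lisp, for example:

BTW-TUT> (if (prolog:prolog '(and (btr:bullet-world ?world)
                                  (btr:stable ?world)))
             :stable
             :not-stable)
:STABLE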

Now, let's ask if the mug is currently visible for the robot:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (cram-robot-interfaces:robot ?robot)
                              (btr:visible ?world ?robot mug-1)))
NIL

Let's move PR2 such that the cup is in its visibility range:

BTW-TUT> (prolog:prolog '(and (btr:bullet-world ?world)
                              (assert (btr:object-pose ?world cram-pr2-description:pr2
                                                       ((0.5 1 0) (0 0 1 0))))))

and ask the visibility query again. Instead of retyping it, put the cursor where you would type and press Ctrl-Up (repeatedly); this cycles through previously executed commands.

Abstract entity descriptions

Now let's generate a distribution of locations from which the robot would be able to see the mug. Before we do anything, let's define the costmap metadata, which specifies the grid size and other parameters needed to compute the distribution:

BTW-TUT>
(def-fact-group costmap-metadata ()
    (<- (location-costmap:costmap-size 12 12))
    (<- (location-costmap:costmap-origin -6 -6))
    (<- (location-costmap:costmap-resolution 0.04))
 
    (<- (location-costmap:costmap-padding 0.3))
    (<- (location-costmap:costmap-manipulation-padding 0.4))
    (<- (location-costmap:costmap-in-reach-distance 0.7))
    (<- (location-costmap:costmap-reach-minimal-distance 0.2))
    (<- (location-costmap:visibility-costmap-size 2))
    (<- (location-costmap:orientation-samples 2))
    (<- (location-costmap:orientation-sample-step 0.1)))

Now, we create an abstract location description that we call a designator. The abstract description gets grounded into specific geometric coordinates with the reference function.

BTW-TUT> (let* ((mug-pose (btr:pose (btr:object btr:*current-bullet-world* 'mug-1)))
                (?mug-pose-stamped (cl-transforms-stamped:pose->pose-stamped "map" 0 mug-pose))
                (?mug-location-designator (desig:a location (pose ?mug-pose-stamped)))
                (to-see-designator (desig:a location (visible-for pr2)
                                                     (location ?mug-location-designator))))
             (desig:reference to-see-designator))
#<CL-TRANSFORMS-STAMPED:POSE-STAMPED 
   FRAME-ID: "map", STAMP: 0.0
   #<3D-VECTOR (-0.2800002098083496d0 1.6999998092651367d0 0.0d0)>
   #<QUATERNION (0.0d0 0.0d0 -0.9338500185757329d0 0.3576648470371437d0)>>

We create two symbolic descriptions: one for the pose that we want to perceive, which we convert into a pose-stamped, and one for a pose for the robot from where it would be able to see the mug.

We can also completely abstract away from geometric poses (we used the mug-pose variable which was a quantitative value) by describing the mug as an object standing on the table called the “kitchen-island-surface” in the URDF. This will be equivalent to telling the robot: “go stand in a position such that you would be able to perceive a mug standing on the kitchen island table top”. Please note that we do not explicitly specify which mug we are searching for, we just say that somewhere on the table there should be a mug.

BTW-TUT> 
(let* ((?on-counter (desig:a location 
                             (on (desig:an object
                                           (type counter-top)
                                           (urdf-name kitchen-island-surface)
                                           (part-of kitchen)))))
       (location-to-see (desig:a location 
                                 (visible-for pr2)
                                 (location ?on-counter))))
    (desig:reference location-to-see))
#<CL-TRANSFORMS-STAMPED:POSE-STAMPED 
   FRAME-ID: "map", STAMP: 0.0
   #<3D-VECTOR (-0.07000017166137695d0 1.8199996948242188d0 0.0d0)>
   #<QUATERNION (0.0d0 0.0d0 -0.9072884056004601d0 0.42050891674609586d0)>>

Here, as we didn't specifically say which pose we are trying to perceive, the robot randomly samples one pose from all the poses on the kitchen island table and generates a distribution for perceiving that pose, i.e., it looks in the general direction of the table. If you look closely at the Bullet window, you might be able to see the different steps that happen during the referencing:

  • first an area, which corresponds to the location on the kitchen-island-surface, is highlighted red,
  • then a sample is being drawn from that distribution, which is visualized as a red dot,
  • then a Gaussian is generated around that dot to represent a location for the robot to see the object,
  • and, finally, a sample is taken from the Gaussian visualized again as a red dot.

You can execute the above command multiple times, and each time you will get a different distribution of locations for the robot to stand. Sometimes you might get an error saying that the location cannot be referenced. This happens if the sample that the robot picks on the table is too close to the wall, so there is no place for the robot to stand from which it could perceive that point without colliding with the environment.
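
If you want your code to survive such failures, you can trap the error and retry. A sketch, assuming the desig:designator-error condition from cram_designators; on success it returns the referenced pose-stamped just like the plain call, so the move-robot step below still works:

BTW-TUT>
(handler-case
    (let* ((?on-counter (desig:a location
                                 (on (desig:an object
                                               (type counter-top)
                                               (urdf-name kitchen-island-surface)
                                               (part-of kitchen)))))
           (location-to-see (desig:a location
                                     (visible-for pr2)
                                     (location ?on-counter))))
      (desig:reference location-to-see))
  (desig:designator-error ()
    (warn "Location could not be resolved, referencing again might help.")))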

Let us move the robot to a location to perceive a mug on the table:

BTW-TUT> (btr-utils:move-robot *)

Attaching objects to the robot

When manipulating objects in the environment, e.g., carrying a cup in the gripper, the simulation needs to be told that such objects should move when the respective robot part moves. If the PR2 robot grasps an object, it is automatically attached to the gripper, more specifically, to the link of the rigid body of the gripper. Via Prolog it is possible to attach objects to robot parts manually. This comes in handy when placing objects into environment containers: e.g., we spawn a spoon in a drawer, and when the drawer closes, the spoon should follow.

In this part of the tutorial, we will place a fork inside a drawer and attach it to the link of the drawer. Assuming that the cram_bullet_world_tutorial is already loaded, we need to reset the world:

(roslisp-utilities:startup-ros) ;; restart world
(add-objects-to-mesh-list) ;; tell Bullet where the fork mesh is stored on the hard drive

The function set-robot-state-from-joints can change the state of joints. A joint can be the hinge of a door, the rails of a drawer, the elbow of an arm of the PR2: basically everything that connects two parts and can be moved in some way. The state of such a joint indicates its current position, e.g., how far the elbow is bent or how far the drawer is pulled open. The function takes two arguments: a list of joints with their desired positions, and the corresponding robot, which in our case is the kitchen.

Let us first open the drawer:

(btr:set-robot-state-from-joints
 '(("sink_area_left_upper_drawer_main_joint"  0.4))
 (btr:object btr:*current-bullet-world* :kitchen))

The drawer's joint is called “sink_area_left_upper_drawer_main_joint” and we would like to open the drawer to 0.4 meters.

Now let us spawn the fork inside the drawer:

(prolog:prolog '(and (btr:bullet-world ?world)
                     (assert (btr:object ?world :mesh fork-1 ((1.0 0.9 0.75) (0 0 0 1))
                                         :mass 0.2 :color (0.5 0.5 0.5) :mesh :fork-plastic))))

If this throws an error, you probably forgot to use (add-objects-to-mesh-list) previously, since we need the fork mesh. The whole scene should now look like this:

Try to move the drawer a bit with set-robot-state-from-joints. You'll see that the fork stays in place, even if the scene is simulated with (btr:simulate btr:*current-bullet-world* 10). To solve this, the fork needs to be attached to this specific drawer. All parts and links of the kitchen can be inspected with (btr:object btr:*current-bullet-world* :kitchen), but to save some time the following example already contains the correct link to attach to:

(prolog '(and (btr:bullet-world ?world)
              (btr:%object ?world fork-1 ?fork)
              (assert (btr:attached ?world :kitchen "sink_area_left_upper_drawer_main" ?fork))))

Notice that the joint name differs from the link name. Now the fork moves when the drawer is moved.

(btr:set-robot-state-from-joints
 '(("sink_area_left_upper_drawer_main_joint"  0.3))
 (btr:object btr:*current-bullet-world* :kitchen))
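
You can convince yourself that the attachment works by reading the fork's pose before and after moving the drawer; its position now follows the drawer link:

(btr:pose (btr:object btr:*current-bullet-world* 'fork-1))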

Every attachment can be checked with the following predicate:

(prolog '(and (btr:bullet-world ?world)
              (btr:attached ?world :kitchen ?_ fork-1)))

This checks if there are any attachments between the kitchen and the fork. If needed, it is possible to check a specific link by naming it. Or replace the anonymous ?_ with a variable such as ?link to get the links the object is attached to, as in the sketch below.
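
A sketch of that variant; the resulting bindings will contain the link name:

(prolog '(and (btr:bullet-world ?world)
              (btr:attached ?world :kitchen ?link fork-1)))

To detach an object, the retract predicate does the job: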

(prolog '(and (btr:bullet-world ?world)
              (btr:%object ?world fork-1 ?fork-instance)
              (btr:retract (btr:attached ?world :kitchen ?fork-instance))))

This detaches the fork from all the links of the kitchen it has been attached to. If you only want to detach it from a specific link and not others, this is also possible, but that is out of the scope of this tutorial.

Executing motions

From now on we will use the utility functions from the cram_bullet_reasoning_utilities package to save time on writing lengthy Prolog queries. It provides functions such as spawn-object, move-object, kill-all-objects and move-robot, which execute the Prolog queries with sensible defaults for the unimportant parameters, e.g., default colors for objects. By pressing . while holding Alt (Alt-.) while the Emacs cursor is on a function name, you will be redirected to the definition of the function to see what exactly it does; Alt-, brings you back to the previous window. Try this for init-projection, for example.

We need a clean environment for this tutorial, so let's clean the world:

BTW-TUT> (roslisp-utilities:startup-ros)

Now, let's try to grasp an object. For that we will use a mesh of a bottle loaded from the resources subdirectory of the tutorial.

BTW-TUT> (add-objects-to-mesh-list)

Then we spawn an object of type bottle and move it onto the table.

BTW-TUT> (btr-utils:spawn-object 'bottle-1 :bottle)
BTW-TUT> (btr-utils:move-object 'bottle-1
                                (cl-transforms:make-pose
                                 (cl-transforms:make-3d-vector -2 -0.9 0.83)
                                 (cl-transforms:make-identity-rotation)))

Lastly, we simulate the world for 10 seconds to make sure nothing moves unexpectedly at runtime.

BTW-TUT> (btr:simulate btr:*current-bullet-world* 100)

Before we grasp the bottle, let's first prepare the PR2, just three steps: move the arms out of sight, navigate the base in front of the bottle and look at the bottle. The point we want the base to navigate to can be hard coded and saved temporarily.

BTW-TUT> (setf ?grasp-base-pose 
               (cl-transforms-stamped:make-pose-stamped
                "map"
                0.0
                (cl-transforms:make-3d-vector -1.8547d0 -0.251d0 0.0d0)
                (cl-transforms:axis-angle->quaternion (cl-transforms:make-3d-vector 0 0 1) 
                                                      (/ pi -2))))

The same thing can be done with the point we want to look at.

BTW-TUT> (setf ?grasp-look-pose 
               (cl-transforms-stamped:make-pose-stamped
                "base_footprint"
                0.0
                (cl-transforms:make-3d-vector 0.65335d0 0.076d0 0.758d0)
                (cl-transforms:make-identity-rotation)))

To execute any plan in CRAM, we need a top-level context. Besides that, we also use a macro to specify that the demo should be executed in simulation, not on the real robot. Putting your plan inside pr2-proj:with-simulated-robot indicates that your robot is in the projection environment; the macro also contains a top-level call within it. with-simulated-robot is a way to abstract the robot details away from your plans; its counterpart for execution on a real robot is pr2-pms:with-real-robot. Also note that without specifying the robot you want to execute on, its TF is not published and you will run into errors. Putting all these together, we end up with this plan:

BTW-TUT>
(pr2-proj:with-simulated-robot
  (cpl:par
    (pp-plans:park-arms)
    (pr2-proj::drive ?grasp-base-pose))
  (pr2-proj::look-at-pose-stamped ?grasp-look-pose))

We can execute some movements in parallel if they use different joints of the robot; that is what cpl:par is for. The function park-arms moves the joints of both arms into a specific position, so they don't hang into the field of view. We have used simple calls to low-level functions to achieve motions such as driving to ?grasp-base-pose and looking at ?grasp-look-pose. The same can be achieved with the corresponding motion designators, which we will look at in later tutorials.

To grasp the bottle we need to have its pose in the room. Therefore, we first perceive it and store the result in *perceived-object*:

BTW-TUT>
(defvar *perceived-object* nil "Object designator returned from perception")
(pr2-proj:with-simulated-robot
  (setf *perceived-object*
        (pr2-proj::detect (desig:an object (type bottle)))))
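
The result is an object designator enriched by perception with the object's name and pose. You can query its properties; a sketch, assuming the desig:desig-prop-value reader and keywordized designator properties, which should return the bottle type:

BTW-TUT> (desig:desig-prop-value *perceived-object* :type)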

With that resulting perceived object we perform the picking-up action. With the torso so far down we might not be able to reach the bottle, so we also need to move the torso up:

(pr2-proj:with-simulated-robot
  (let ((?perceived-bottle-desig *perceived-object*))
    (pr2-proj::move-torso 0.3)
    ;; Picking up is not an atomic motion but a plan, so we invoke it
    ;; by performing an action designator. The (arm :right) choice is
    ;; an assumption here; use the arm that suits your scene.
    (exe:perform (desig:an action
                           (type picking-up)
                           (arm :right)
                           (object ?perceived-bottle-desig)))))

There is no atomic motion for picking up an object; in fact, picking up is comprised of multiple move-arm motions. That is why pick-up is implemented as a plan and invoked by performing an action designator, as sketched above. This is explained in the next tutorial on writing simple mobile manipulation plans.