Bullet World Reasoning
This section takes a look at the reasoning that is provided by PyCRAM and explains the different types. The reasoning consists of semantic queries one can utilize to get a better understanding of the situation inside the simulation. For example, there are methods to determine if two objects are colliding or if one object is visible from a specific point (like the visual sensor of a robot). In the following sections these queries will be referred to as reasoning queries. Below you can see a table of all available reasoning queries along with the parameter types and a short description. Every query will be explained in detail further below.
Query | Parameter | Description |
---|---|---|
Stable | Object, BulletWorld (optional) | Checks if the given object is stable in the simulation. An object is considered stable if it does not change its position while the simulation is running. |
Contact | Object, Object, BulletWorld (optional) | Checks if two given objects are in contact with each other. |
Visible | Object, list[list[float], list[float]], list[float], float (optional), BulletWorld (optional) | Checks if the given object is visible from a given position. |
Occluding | Object, list[list[float], list[float]], list[float], BulletWorld (optional) | Returns a list of objects that occlude the given object. |
Reachable_object | Object, Object, String, BulletWorld (optional) | Checks if a given object is reachable for a given robot. |
Reachable_position | list[float], Object, String, BulletWorld (optional) | Checks if a given position is reachable for a given robot. |
Blocking | Object, Object, String, BulletWorld (optional) | Returns a list of objects that would be in contact with the robot if it were to reach for the given object. |
Supporting | Object, Object, BulletWorld (optional) | Checks if the first object supports the second. |
Stable
This reasoning query determines if an object is stable inside the simulation, i.e., the object's position does not change over a longer period of time. This is the case if the object is, for example, placed on top of a table or a counter.
To accomplish this, the coordinates of the object will be saved at the beginning for later comparison. In addition, the current state of the world will be saved so it can be restored later. Next, the world will be simulated for an equivalent of 2 seconds and the new coordinates will be saved. Because of the precision with which PyBullet works, the coordinates cannot be compared directly. Instead, they will be rounded to three decimal places, which corresponds to a precision of one millimeter. Finally, the previously saved state will be restored and the result of the comparison will be returned.
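The coordinate comparison described above can be sketched as follows. Note that `is_unchanged` is a hypothetical helper for illustration, not the actual PyCRAM implementation; saving and restoring the world state and the simulation itself are omitted.

```python
def is_unchanged(pos_before, pos_after, decimals=3):
    """Compare two (x, y, z) positions rounded to `decimals` places.

    Rounding to three decimal places corresponds to a precision of one
    millimeter when coordinates are given in meters, which absorbs the
    small numerical jitter PyBullet introduces between steps.
    """
    return [round(c, decimals) for c in pos_before] == \
           [round(c, decimals) for c in pos_after]

# A bottle resting on a table barely moves during the simulated 2 seconds:
print(is_unchanged((0.5, 0.2, 0.85), (0.5002, 0.2001, 0.8503)))  # True
# A falling bottle changes its z coordinate noticeably:
print(is_unchanged((0.5, 0.2, 0.85), (0.5, 0.2, 0.4)))           # False
```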
This reasoning query returns “True” if the object did not move while it was simulated (i.e., it is stable) and “False” in any other case.
stable(bottle)
To use this reasoning query, only the object is required, and optionally the BulletWorld when multiple worlds are used.
Contact
This reasoning query is similar to a collision detection. It returns “True” if the two given objects are in contact and “False” in any other case. To achieve this, the get_contact_points collision detection method of PyBullet can be used. For the contact points to be up to date, the simulation needs to be stepped once.
The usage of this method is shown in the following:
contact(kitchen, bottle)
All that is needed are the two objects for which the contact should be determined, and optionally, the world when multiple worlds are used.
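The control flow above can be sketched as follows, assuming a PyBullet-style client that exposes `stepSimulation()` and `getContactPoints()` (the names the PyBullet module itself uses). The tiny stub world below only stands in for a real simulation so the logic is self-contained; it is not part of PyCRAM.

```python
def contact(world, obj_a, obj_b):
    # Contact points are only up to date after the simulation has stepped once.
    world.stepSimulation()
    return len(world.getContactPoints(obj_a, obj_b)) > 0

class StubWorld:
    """Stand-in for a PyBullet client, pre-loaded with touching pairs."""
    def __init__(self, touching_pairs):
        self._pairs = {frozenset(p) for p in touching_pairs}

    def stepSimulation(self):
        pass  # a real client would advance the physics here

    def getContactPoints(self, a, b):
        return [(a, b)] if frozenset((a, b)) in self._pairs else []

world = StubWorld([("kitchen", "bottle")])
print(contact(world, "kitchen", "bottle"))  # True
print(contact(world, "robot", "bottle"))    # False
```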
Visible
This reasoning query determines if an object is visible from a specific position, e.g., the camera mount of a robot. The reason why the position and not the robot needs to be given is that the framework should be able to work with all kinds of robots. Thus, no assumptions about the robot can be made. For easier use, the object class has a method called get_link_position which, when given a link name, returns the exact position of this link in the world coordinate frame.
To determine if the object is visible, it needs to be rendered twice. The first time, only the object alone will be rendered and all visible pixels are counted. This yields the maximum number of pixels that can be visible from this position. The second time, all objects will be rendered and the visible pixels are counted again. This value is the actual number of pixels of the object that are visible from the given position.
To ensure the TF positions stay valid while the objects are moved for the rendering, this reasoning query will be executed in its own BulletWorld. This is done by copying the BulletWorld and creating a new identical one in DIRECT mode. This BulletWorld then executes the reasoning and is destroyed afterwards.
To determine if the object is visible, the method checks if the actual number of pixels divided by the maximum number of pixels is greater than 0.8, which corresponds to at least 80 % of the object being visible. Before dividing, it is checked whether the maximum number of pixels is greater than zero, to prevent a division by zero. If the maximum number of pixels is zero, “False” will be returned, because the object is determined to be not visible.
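The pixel-ratio check can be sketched as below. For illustration, the two renders are given as flat lists of per-pixel object ids (PyBullet's segmentation mask provides such ids); the function name and data layout are assumptions, not the actual PyCRAM code.

```python
def visible(seg_alone, seg_all, obj_id, threshold=0.8):
    """seg_alone/seg_all: per-pixel object ids from the two renders."""
    max_pixels = sum(1 for pid in seg_alone if pid == obj_id)
    if max_pixels == 0:
        return False  # guard against division by zero: nothing was rendered
    actual_pixels = sum(1 for pid in seg_all if pid == obj_id)
    return actual_pixels / max_pixels > threshold

BG, BOTTLE, MUG = -1, 1, 2
alone = [BG, BOTTLE, BOTTLE, BOTTLE, BOTTLE, BG]
# One of the four bottle pixels is covered by the mug: 3/4 = 0.75 <= 0.8
full  = [BG, BOTTLE, BOTTLE, BOTTLE, MUG, BG]
print(visible(alone, full, BOTTLE))   # False
print(visible(alone, alone, BOTTLE))  # True (4/4 = 1.0)
```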
To use the reasoning query, the user needs to specify the object and the position of the camera, as well as the front-facing axis of the camera frame. This is normally the z-axis, which is why it is the default for this function. The call looks like this:
visible(bottle, robot.get_link_position("camera_mount_link"), [0, 0, 1])
Occluding
This reasoning query returns a list of objects that are between one given object and the camera position. This is useful if, for example, a robot wants to get something out of a shelf and check if anything is in front of that object.
This reasoning query is similar to visible in that the scene needs to be rendered multiple times. First, the object is rendered alone. This time, the pixels are not counted; instead, the positions of the pixels that belong to the given object are saved as tuples in a list. The second time, the whole scene is rendered and the previously saved pixel positions are checked. If the object is still visible at a pixel, nothing happens; if another object is visible there instead, that object is occluding the given object and will be added to a list, which is returned at the end.
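The per-pixel comparison can be sketched as follows. For illustration, the segmentation results are given as dictionaries mapping pixel coordinates to object ids; this data layout and the function name are assumptions, not the actual PyCRAM structures.

```python
def occluding(seg_alone, seg_all, obj_id):
    """Return ids of objects covering pixels where obj_id is rendered alone."""
    # Pixel positions belonging to the object in the solo render.
    obj_pixels = [pix for pix, pid in seg_alone.items() if pid == obj_id]
    occluders = []
    for pix in obj_pixels:
        pid = seg_all.get(pix)
        # Another object id at this pixel means that object is in front.
        if pid is not None and pid != obj_id and pid not in occluders:
            occluders.append(pid)
    return occluders

BOTTLE, CEREAL = 1, 2
alone = {(0, 0): BOTTLE, (0, 1): BOTTLE, (1, 0): BOTTLE}
full  = {(0, 0): CEREAL, (0, 1): BOTTLE, (1, 0): CEREAL}
print(occluding(alone, full, BOTTLE))  # [2] — the cereal box occludes the bottle
```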
As with the visible query, the TF positions need to stay valid while the objects are moved for the rendering. Therefore, this reasoning query will also be executed in its own BulletWorld: the BulletWorld is copied into a new identical one in DIRECT mode, which then executes the reasoning and is destroyed afterwards.
To use the reasoning query, the user needs to specify the object and the position of the camera, as well as the front-facing axis of the camera frame. This is normally the z-axis, which is why it is the default for this function. The call is very similar to the one above:
occluding(bottle, robot.get_link_position("camera_mount_link"), [0, 0, 1])
Reachable
The reachable reasoning query determines, like the name suggests, if a given object is reachable by a robot. This reasoning query needs an object that is a robot and the link name of one gripper of this robot. The name of the gripper needs to be given because PyBullet should work with arbitrary robots. Optionally, one can set a threshold for how close the gripper needs to be to the object for the reasoning query to return “True”. The standard threshold is one millimeter.
To check if an object is reachable, the same steps as in a real scenario will be performed. First, the inverse kinematics will be calculated and applied. Afterwards, the distance between the gripper and the object will be calculated; if it is less than the threshold, the reasoning query returns “True”.
For calculating the inverse kinematics, the KDL IK solver is used.
The usage would look like this:
reachable(bottle, robot, "r_gripper_tool_frame")
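The final step, after the inverse kinematics have been applied, reduces to comparing the Euclidean distance between the gripper and the target against the threshold, as in this sketch (the function name is illustrative, not PyCRAM's):

```python
import math

def within_reach(gripper_pos, target_pos, threshold=0.001):
    """Both positions as (x, y, z) in meters; threshold defaults to 1 mm."""
    return math.dist(gripper_pos, target_pos) < threshold

# Gripper ended up 0.5 mm from the bottle: reachable.
print(within_reach((0.5, 0.2, 0.8505), (0.5, 0.2, 0.85)))  # True
# Gripper stopped 5 cm short: not reachable.
print(within_reach((0.45, 0.2, 0.85), (0.5, 0.2, 0.85)))   # False
```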
In some cases, it can be useful to determine if a position, not an object, is reachable. One such case is, for example, if one wants to check if the robot can reach the target position when placing an object. For this purpose, there are two reachable reasoning queries: one that takes an object and another that takes a position in the world coordinate frame.
In theory, not only the tool frame but every other link described in the URDF could be used, for example, a link of the arm. The reasoning query would still work, as long as that link can reach the object.
Blocking
This reasoning query determines the objects that would block the robot if it were to reach for the given object. It is very similar to the reachable predicate in that it needs an object, which is a robot, and a link name of one gripper.
To begin, the reasoning query again calculates the inverse kinematics for the robot to reach the object and applies them to the robot. Afterwards, every object in the world is checked for contact with the robot, and the list of objects that are in contact is returned.
This approach is not ideal because, for example, the floor is also considered an ordinary object and the robot is always in contact with the floor. Thus, the list of blocking objects would always contain the floor.
However, this problem is difficult to solve, as assumptions about the objects would have to be made. For example, one could check the name for something like “floor” or “plane”, but there is no guarantee that the user would name the floor like this. Instead, the type attribute of objects is used: for example, one could set the type of objects such as the floor or other environment to “environment” and filter these out after the reasoning query is finished.
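Filtering by the type attribute can look like this; the minimal class below only stands in for PyCRAM's Object class for the sake of a self-contained example.

```python
from dataclasses import dataclass

@dataclass
class Obj:
    """Minimal stand-in for an object with a type attribute."""
    name: str
    type: str

blocking = [Obj("floor", "environment"), Obj("cereal", "object")]
# Keep only real obstacles; everything typed "environment" is ignored.
obstacles = [o for o in blocking if o.type != "environment"]
print([o.name for o in obstacles])  # ['cereal']
```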
The call for the blocking query is similar to the reachable one:
blocking(bottle, robot, "r_gripper_tool_frame")
Supporting
The supporting reasoning query determines if one object supports another. For example, if a bottle is sitting on a table, then the table is supporting the bottle. For this to be true, two conditions have to be met.
Firstly, the second object needs to be above the first object and, secondly, the two objects need to be in contact.
To check these conditions, first the z-coordinates of the objects are compared. If that of the second object is greater than that of the first, the first condition is met. For the second condition, the contact predicate is used. If both conditions are met, the predicate returns “True”, and “False” in any other case.
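Both conditions together can be sketched as follows. The contact result is passed in rather than computed, and the helper is hypothetical, not PyCRAM's implementation.

```python
def supporting(pos_below, pos_above, in_contact):
    """pos_* are (x, y, z) positions; in_contact is the contact query's result."""
    is_above = pos_above[2] > pos_below[2]  # first condition: higher z coordinate
    return is_above and in_contact          # second condition: objects touch

table, bottle = (1.0, 0.5, 0.75), (1.0, 0.5, 0.85)
print(supporting(table, bottle, True))   # True: bottle rests on the table
print(supporting(bottle, table, True))   # False: the table is below the bottle
```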
To use this predicate, the two objects which may or may not support each other are needed:
supporting(kitchen, bottle)
It can, for example, be used to find all objects that are placed on a table. For this, the user needs to traverse through all objects and check if they are supported by the table.