
Mobile Robot with Hooked Arm for Doors, Elevators and Scouting (MrHades)


People participating in this project

The aim of the project is the realization of a mobile robot with a simple arm (three degrees of freedom) able to navigate a known multi-storey building, which means opening doors and taking elevators. Here you can download the final report.

  • Navigation: selection of middleware and practical experience with it
  • Manipulation: development of the arm
  • Vision system: sensing of the "elevator keyboard", preliminary work


Other projects related to this one

Re-edit suggestions

  • selection of middleware and practical experience with it (put essentially all of the content of this page under this part, add the count of hours)
  • development of the arm (put the page about the arm and the hours here, not as an attachment; the mounting of the arm on the otto base is missing)
  • vision system: sensing of "elevator keyboard", preliminary work (completely missing)
  • link to the pdf of the internship report (accessible only to selected people: where? on svn?)
  • links to the software artifacts produced (cad, software, etc.) (under lock and key, accessible only to selected people: where? on svn?)
  • mounting of enough batteries for the use of a power-hungry computer
  • mounting of the hokuyo

What we start with

This project had already been tackled by Galbiati Andrea, who realized a first prototype on a p3dx chassis. He also realized a two-link planar arm with a hook to open doors. You can see here the details of the arm and here the details of his robot.

Our intention is to use a VolksBot RT3 and the two-link arm, adding a third degree of freedom for more flexibility in the movements.

Choice of the middleware

Aria was used for the p3dx but, because of its stiffness and the unreliability of its behaviours, it turned out not to be the best choice.

Our requirements

First of all I analyzed the requirements of the project in order to understand which features our middleware should have. The following map summarizes this analysis. [Image:Requirements.png]

Existing solutions

With Ballardini Augusto Luis I analyzed some existing solutions, as you can see in the following table.

| Middleware | Open Source | Localization & Mapping | Navigation | Process Communication | SICK Drivers | 3D Visualization | Generic Bayesian Filtering | OpenCV Integration | Integration with Other Software | Linux | Windows |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ROS | yes | yes, gmapping module (laser SLAM) | yes, Navigation Stack | yes, Services like RPC | yes, laser_drivers | yes, rviz, PR2 | yes, Orocos BFL | yes, cv_wrappers & opencv2 | Orocos RTT, others | yes | no |
| Orocos (& ORCA fork) | yes | Not Found | Motion Control [ready]: OCL [1] (needs KDL [2]) | yes, RTT Module [3] (Realtime Toolkit) | Not Found; generic sensor interface | link to openSimulator (project suspended) + links to other projects | yes, Bayesian Filtering Library [4] | OCL::CaptureCamera ? | | yes | no |
| Dctd [5] | yes | no | no | publish/subscribe over RS-232, USB, Ethernet, IEEE 802.11b | no | no | no | no | Not Found | yes | no |
| Aria [6] | yes | yes | yes | ? | yes | only 2D, MobileSim | yes, not open-source [7] | Not Found | | yes | yes |
| Carmen [8] | yes | yes, Localization module | yes, Path planning module | Process Monitoring [9] | hardware support for SICK laser RF [10], [11] (SICK S30 support) | no, only 2D (robot/sensor simulator in 2D) | yes | no | Not Found | yes | no |
| Brics [12] | work in progress | | | | | | | | | | |

Our choice was based on the features of the middleware and the available documentation. The best solution was Ros: it has a lot of packages and can also import some modules from Orocos (the only valid alternative), such as the Real Time Toolkit. The documentation is complete, with tutorials and an active support community.

It also has useful tools, such as a 3d simulator and a mapping module.

Understanding Ros

Following the tutorials at http://www.ros.org/ it is easy to understand the main structure of Ros.

The main entity is the Node, usually a c++ executable (or a python script), whose meaning is comparable to that of a process. Nodes are controlled and synchronized by the Master: every node, when created, connects to the master through a server (typically on localhost).

There are two ways nodes can communicate: topics and services.

   A topic is a place, managed by the master, where publisher nodes can leave messages while subscriber nodes are informed of each new message, to which they can react with a callback.
   A node can also offer a service, which other nodes can call to get an answer.
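
A minimal sketch of the topic mechanism in roscpp (the node, topic and message names here are ours for illustration, not from the project code):

#include "ros/ros.h"
#include "std_msgs/String.h"

// Subscriber side: react to each new message with a callback.
void chatterCallback(const std_msgs::String::ConstPtr& msg)
{
  ROS_INFO("heard: %s", msg->data.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "example_node");  // connect to the master
  ros::NodeHandle n;

  // Publisher side: leave messages on the "chatter" topic.
  ros::Publisher pub = n.advertise<std_msgs::String>("chatter", 10);
  ros::Subscriber sub = n.subscribe("chatter", 10, chatterCallback);

  ros::Rate rate(1);
  while (ros::ok())
  {
    std_msgs::String msg;
    msg.data = "hello";
    pub.publish(msg);
    ros::spinOnce();  // process pending callbacks
    rate.sleep();
  }
  return 0;
}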

The Ros filesystem is organized into packages and stacks. A package is a group of nodes logically linked by a common purpose. Building a package is also very useful because you can specify node dependencies, directories and libraries to include in the compilation and linking process, and you can create a launchfile to start all the nodes at once with params, arguments and other properties.

The development environment provides some extended commands for bash to access packages, compile sources and list environment variables.
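
For example (these are the standard ros commands; we assume here that the executable built from p3dx.cpp is simply called p3dx):

roscd p3dx_handler                            # jump to the package directory
rosls p3dx_handler                            # list its contents
rosmake p3dx_handler                          # build the package and its dependencies
rosrun p3dx_handler p3dx                      # run a single node
roslaunch p3dx_handler p3dx_handler.launch    # start all the nodes at once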

The first impression after a few hours of study is that Ros is simple and linear: easy to use but powerful, with a lot of components. You can see an example of our work in the tutorial Teleoperate p3dx with Ros and Aria.

The Navigation Stack

For robot navigation Ros provides the Navigation Stack, a collection of nodes that can drive the robot to a specified goal while avoiding obstacles. The navigation stack requires the robot to publish sensor data through a LaserScan or PointCloud message, odometry information with an Odometry message, and the transform configuration, i.e. the relationships between the robot's coordinate frames. The robot must also be able to receive velocity commands, through which the navigation stack drives it to the goal. Optionally you can provide a map of the environment: the robot can then localize itself through the amcl node, which implements the adaptive Monte Carlo localization algorithm.

The following paragraphs describe the setup of the navigation stack. This is not a substitute for the ros tutorial http://www.ros.org/wiki/navigation/Tutorials/RobotSetup, but it provides a concrete example with all the problems that the setup can produce.

p3dx handler

In order to understand the setup and the behavior of the navigation stack, I decided to use the already known p3dx instead of the new volksbot. For this purpose I created a package to manage the robot in all its parts, from the connection through Aria to the sonar data. In particular, the aim of this package is to connect the robot to the navigation stack. On the robot side, the handler uses Aria to get velocity and sonar information; it then publishes these data in ros in the proper format.
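
A minimal sketch of one half of this bridge, forwarding ros velocity commands to the robot (ArRobot and ArSimpleConnector are ARIA classes of that era; the cmd_vel topic name and the overall layout are illustrative, not the project's actual code):

#include <cmath>
#include "ros/ros.h"
#include "geometry_msgs/Twist.h"
#include "Aria.h"

ArRobot* robot = NULL;

// Forward each Twist message to the robot through ARIA.
void cmdVelCallback(const geometry_msgs::Twist::ConstPtr& cmd)
{
  robot->lock();
  robot->setVel(cmd->linear.x * 1000.0);            // ARIA wants mm/s
  robot->setRotVel(cmd->angular.z * 180.0 / M_PI);  // and deg/s
  robot->unlock();
}

int main(int argc, char** argv)
{
  Aria::init();
  ArSimpleConnector connector(&argc, argv);
  ArRobot r;
  robot = &r;
  if (!connector.parseArgs() || !connector.connectRobot(robot))
    return 1;
  robot->runAsync(true);         // ARIA's own loop in a background thread
  robot->lock();
  robot->enableMotors();
  robot->unlock();

  ros::init(argc, argv, "p3dx_node");
  ros::NodeHandle n;
  ros::Subscriber sub = n.subscribe("cmd_vel", 1, cmdVelCallback);
  ros::spin();                   // react to velocity commands until shutdown

  Aria::shutdown();
  return 0;
}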

When we are ready to use the volksbot, it will be sufficient to create a volksbot_handler to use the navigation stack with it.

Structure

The p3dx_handler package contains basically four elements:

  • p3dx.cpp: this is the node that does the dirty work. It brings up Aria, connects to the robot and instantiates a p3dx class. This class provides methods to get data from the robot and to publish data to the topics. It also listens to specific topics to get p3dx_commands (publish or do not publish data, execute velocity commands, ...) and to receive velocity commands.
  • manifest.xml: this file defines the dependencies of the package and other information about the package (a minimal sketch is shown after this list)
  • CMakeLists.txt: this file defines the sources to compile and the folders and libraries to include
  • p3dx_handler.launch: this launch file launches all the nodes with the right arguments. It also guarantees that the ros core is running
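
For instance, a minimal manifest.xml could look like the following (the dependency list is a plausible guess, not the project's actual file):

<package>
  <description brief="p3dx_handler">Connects the p3dx to the navigation stack through Aria.</description>
  <license>BSD</license>
  <depend package="roscpp"/>
  <depend package="geometry_msgs"/>
  <depend package="nav_msgs"/>
  <depend package="tf"/>
</package>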

In the following picture you can see the structure of the package, the topics used and the ros packages that interact with it. [Image:P3dx_handler.png]

Coordinate Frames

There are many frames involved. Here you can see a tree of the frames created by the handler.

The world frame coincides with the map frame. The p3dx_frame is located at the kinematic centre of the robot. The sonar frame is fixed with respect to the p3dx_frame, as is the laser frame. Although the p3dx does not have any laser, it is necessary for the amcl node to be provided with a laser scan message for the auto localization. The reason for this choice is probably the inaccuracy and the low information content of a sonar. I created a simple wrapper to make the sonar appear as a laser, only for educational purposes, but our intention is to mount a real laser for far superior performance.

The odometry frame represents the position in the world where the system was started. The odometry provides the transform from the p3dx frame to the odometry frame.

The relation between the map and the odometry frame is calculated by the amcl node.
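
The fixed transforms in this tree can be published with a simple tf broadcaster. A minimal sketch for the laser frame, using the offset given in the Laser data section below (the node name is ours):

#include "ros/ros.h"
#include "tf/transform_broadcaster.h"

int main(int argc, char** argv)
{
  ros::init(argc, argv, "static_frames");
  ros::NodeHandle n;
  tf::TransformBroadcaster br;

  ros::Rate rate(20);
  while (ros::ok())
  {
    // laser_frame is fixed at (0.190, 0, 0.441) in the p3dx_frame
    tf::Transform t(tf::Quaternion(0, 0, 0, 1),   // no rotation
                    tf::Vector3(0.190, 0.0, 0.441));
    br.sendTransform(tf::StampedTransform(t, ros::Time::now(),
                                          "p3dx_frame", "laser_frame"));
    rate.sleep();
  }
  return 0;
}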

Sonar data

The p3dx has 8 sonars numbered from 0 to 7. With the Aria method getSonarRange(int id) I get the distance in mm of the nearest obstacle heard by sonar number id. Then I calculate the point in the sonar frame where the object lies, using the datasheet of the robot. Here you can see the orientation of the sonars with their ids and the position of the sonar frame with respect to the robot.

The positions, expressed in mm, of the sonars in the sonar frame are:

  • 0 = (0, 136, 0)
  • 1 = (45, 119, 0)
  • 2 = (79, 78, 0)
  • 3 = (97, 27, 0)
  • 4 = (97, -27, 0)
  • 5 = (79, -78, 0)
  • 6 = (45, -119, 0)
  • 7 = (0, -136, 0)

The sonar frame in the robot frame is at (0.69, 0, 0.185).
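
As a worked example of the computation (the sonar headings come from the robot datasheet, not from this page; here we assume sonar 3 points 10 degrees to the left):

#include <cmath>

// A point in the sonar frame (coordinates in mm, as returned by Aria).
struct Point { double x, y; };

// Convert a sonar reading into a point in the sonar frame:
// (sx, sy) is the sonar position, theta_deg its heading from the
// datasheet, range_mm the value returned by getSonarRange(id).
Point sonarPoint(double sx, double sy, double theta_deg, double range_mm)
{
  double th = theta_deg * M_PI / 180.0;
  Point p;
  p.x = sx + range_mm * std::cos(th);  // project the range along
  p.y = sy + range_mm * std::sin(th);  // the sonar's heading
  return p;
}

// Example: sonar 3 at (97, 27), heading 10 degrees, obstacle at 500 mm:
// sonarPoint(97, 27, 10, 500) = (589.4, 113.8).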

Below there are two screenshots of rviz, the ros viewer, showing sonar data, one point for each sonar.

Laser data

With the collaboration of Surricchio Paolo we mounted an LMS111 laser on the p3dx (Laser scanner LMS111: montaggio e test di localizzazione). To get an idea of how much more effective the laser is, see the following image. Screenshot from Rviz: in red the points identified through the sonar, in blue the points from the laser.


The laser provides the distances detected with 541 samples over a circular sector of 270 degrees, i.e. one sample every 0.5 degrees. These distances are converted by Ros into points in the laser frame, which is at (0.190, 0, 0.441) in the p3dx frame.

Note that the laser publisher is independent from the p3dx node, but it is launched by the p3dx handler launchfile.

Odometry data

Publishing odometry data over ros is very important to give feedback on the velocity commands. To implement this functionality I followed the respective tutorial. The handler publishes a transform between the odometry frame and the robot frame, calculating the estimated position of the robot from odometry information. The position is calculated by Aria. A message is also published on the odom topic with the estimated pose and instantaneous velocity of the robot.
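
A sketch of that publishing step, along the lines of the ros odometry tutorial (x, y, th and the velocities would come from Aria; the function boundaries are ours, the frame names are the ones used on this page):

#include "ros/ros.h"
#include "nav_msgs/Odometry.h"
#include "tf/transform_broadcaster.h"
#include "tf/transform_datatypes.h"

// Publish the estimated pose both as a tf transform and as an
// Odometry message, as the navigation stack expects.
void publishOdometry(ros::Publisher& odom_pub,
                     tf::TransformBroadcaster& odom_broadcaster,
                     double x, double y, double th,   // pose from Aria
                     double vx, double vth)           // velocities from Aria
{
  ros::Time now = ros::Time::now();
  geometry_msgs::Quaternion q = tf::createQuaternionMsgFromYaw(th);

  // 1) the transform from the odometry frame to the robot frame
  geometry_msgs::TransformStamped odom_tf;
  odom_tf.header.stamp = now;
  odom_tf.header.frame_id = "odometry_start_frame";
  odom_tf.child_frame_id = "p3dx_frame";
  odom_tf.transform.translation.x = x;
  odom_tf.transform.translation.y = y;
  odom_tf.transform.rotation = q;
  odom_broadcaster.sendTransform(odom_tf);

  // 2) the message on the odom topic: estimated pose and velocity
  nav_msgs::Odometry odom;
  odom.header.stamp = now;
  odom.header.frame_id = "odometry_start_frame";
  odom.child_frame_id = "p3dx_frame";
  odom.pose.pose.position.x = x;
  odom.pose.pose.position.y = y;
  odom.pose.pose.orientation = q;
  odom.twist.twist.linear.x = vx;
  odom.twist.twist.angular.z = vth;
  odom_pub.publish(odom);
}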

Amcl node

A fundamental part of the navigation stack is the auto localization system. As we said before, it provides the transform between the map frame and the odometry frame.

The setup of the amcl node was quite difficult. The documentation on the ros wiki is poor, especially for all the params that can be given to the node.

The only thing that seems necessary to run amcl for the navigation stack is to add the following line to the launchfile:

<include file=" $(find amcl)/examples/amcl_diff.launch" />

Of course this is not sufficient. It is very important to customize the amcl launch file for the navigation stack to run correctly. This is the default amcl launch file:

<launch>
<node pkg="amcl" type="amcl" name="amcl" output="screen">
 <param name="odom_model_type" value="diff"/>
 <param name="odom_alpha5" value="0.1"/>
 <param name="transform_tolerance" value="0.2" />
 <param name="gui_publish_rate" value="10.0"/>
 <param name="laser_max_beams" value="9"/>
 <param name="min_particles" value="500"/>
 <param name="max_particles" value="5000"/>
 <param name="kld_err" value="0.05"/>
 <param name="kld_z" value="0.99"/>
 <param name="odom_alpha1" value="0.2"/>
 <param name="odom_alpha2" value="0.2"/>
 <param name="odom_alpha3" value="0.8"/>
 <param name="odom_alpha4" value="0.2"/>
 <param name="laser_z_hit" value="0.5"/>
 <param name="laser_z_short" value="0.05"/>
 <param name="laser_z_max" value="0.05"/>
 <param name="laser_z_rand" value="0.5"/>
 <param name="laser_sigma_hit" value="0.2"/>
 <param name="laser_lambda_short" value="0.1"/>
 <param name="laser_lambda_short" value="0.1"/>
 <param name="laser_model_type" value="beam"/> 
 <param name="laser_likelihood_max_dist" value="2.0"/>
 <param name="update_min_d" value="0.2"/>
 <param name="update_min_a" value="0.5"/>
 <param name="odom_frame_id" value="odometry_start_frame"/>
 <param name="base_frame_id" value="p3dx_frame"/>
 <param name="resample_interval" value="1"/>
 <param name="transform_tolerance" value="1"/>
 <param name="recovery_alpha_slow" value="0.0"/>
 <param name="recovery_alpha_fast" value="0.0"/>
</node>
</launch>

We have made some changes to the original file to get things working.

<param name="laser_max_beams" value="9"/>

Amcl uses this value in a for loop inside its function call tree. This param must be smaller than or equal to the number of laser beams you have (we actually use 10 laser beams), or amcl will fall into an infinite loop without any warning.

We set a value of 9 when using the sonars and a value of 400 with the laser.

<param name="laser_model_type" value="beam"/> 

Set the model type of your laser.

<param name="odom_frame_id" value="odometry_start_frame"/>
<param name="base_frame_id" value="p3dx_frame"/>


Amcl has to know the names of the frames to connect to the map frame.

This should be sufficient to run amcl. You can easily check that amcl is running correctly by opening rviz and adding a tf element: if the world frame and the robot frame are in the same tree, then amcl is running.
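
You can also inspect the tree from the command line with the standard tools of the tf package:

rosrun tf view_frames              # listens for a few seconds, then writes frames.pdf
rosrun tf tf_echo map p3dx_frame   # prints the map -> robot transform, if it exists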

To get things working better we had to adjust other parameters as follows:

    <param name="kld_err" value="0.05"/>
    <param name="kld_z" value="0.95"/>

(see Probabilistic Robotics for more details)

    <param name="update_min_d" value="0.0"/>
    <param name="update_min_a" value="0.07"/>

recovery_alpha_slow and recovery_alpha_fast are the exponential decay rates for the slow and fast average weight filters, used in deciding when to recover by adding random poses. For a correct localization we must allow the recovery action, or the robot could stop, aborting its execution. With values of 0 this feature was disabled.
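For example, with the values suggested on the ros wiki (our final values may differ):

    <param name="recovery_alpha_slow" value="0.001"/>
    <param name="recovery_alpha_fast" value="0.1"/>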

You can also give an initial pose estimate to generate a particle cloud (geometry_msgs/PoseArray). You can find more information on the particle cloud in adaptive Monte Carlo localization here: http://books.nips.cc/papers/files/nips14/AA62.pdf

Map server

The navigation stack does not require a map to run, but it is very common to use one. This can be done thanks to the map server, which offers map data as a service. A map can be a common image file that describes the occupancy state of each cell of the world through the color of the corresponding pixel: whiter pixels are free, blacker pixels are occupied, and pixels in between are unknown. Each map is described by a YAML file which summarizes some important information:

  • the map file
  • the resolution (meter per pixel)
  • the origin of the map
  • thresholds for the regions (occupied, free, unknown)

This is our yaml file:

image: corridoio.png
resolution: 0.03415
origin: [0.0, 0.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0

This is the code to run the map server from a launch file:

<node name="map_server" pkg="map_server" type="map_server" args="$(find p3dx_handler)/maps/corridoio.yaml"/>

move_base package

The move_base package provides an implementation of an action that, given a goal in the world, will attempt to reach it with a mobile base. The move_base node links together a global and a local planner to accomplish its global navigation task. The move_base node also maintains two costmaps, one for the global planner and one for the local planner, that are used to accomplish navigation tasks.
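
A minimal way to send such a goal from code, through the standard move_base actionlib interface (the goal coordinates here are made up):

#include "ros/ros.h"
#include "actionlib/client/simple_action_client.h"
#include "move_base_msgs/MoveBaseAction.h"

int main(int argc, char** argv)
{
  ros::init(argc, argv, "send_goal");

  // Action client connected to the move_base node.
  actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction>
      ac("move_base", true);
  ac.waitForServer();

  // Goal pose expressed in the map frame.
  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";
  goal.target_pose.header.stamp = ros::Time::now();
  goal.target_pose.pose.position.x = 2.0;
  goal.target_pose.pose.orientation.w = 1.0;

  ac.sendGoal(goal);
  ac.waitForResult();
  if (ac.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("goal reached");
  else
    ROS_INFO("the base failed to reach the goal");
  return 0;
}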

The launchfile of move_base is used to load the yaml files containing the parameters of the local and global planners.

<node pkg="move_base" type="move_base" respawn="false" name="move_base" output="screen">
   <rosparam file="$(find p3dx_handler)/costmap_common_params.yaml" command="load" ns="global_costmap" />
   <rosparam file="$(find p3dx_handler)/costmap_common_params.yaml" command="load" ns="local_costmap" />
   <rosparam file="$(find p3dx_handler)/local_costmap_params.yaml" command="load" />
   <rosparam file="$(find p3dx_handler)/global_costmap_params.yaml" command="load" />
   <rosparam file="$(find p3dx_handler)/base_local_planner_params.yaml" command="load" />
 </node>

Costmap common params

In this file we can set some parameters for the obstacles seen by the sensors:

obstacle_range: 8.0
raytrace_range: 8.5
#footprint: [[x0, y0], [x1, y1], ... [xn, yn]]
robot_radius: 0.3
inflation_radius: 0.6

observation_sources: scan

scan: {sensor_frame: laser_frame, data_type: LaserScan, topic: /scan, marking: true, clearing: true}
  • Obstacle range: the maximum sensor reading that will be put in the costmap as an obstacle
  • Raytrace range: the range out to which free space is cleared by raytracing
  • Inflation radius: the maximum distance from obstacles at which a cost is incurred

Note that the observation_sources entries are labels. The inflation_radius has to be larger than the robot radius to avoid collisions.

Base local planner params

In this file we can specify some characteristics of the robot for the trajectory planner:

TrajectoryPlannerROS:
 max_vel_x: 0.3
 min_vel_x: 0.05
 max_rotational_vel: 0.9
 min_in_place_rotational_vel: 0.1
 acc_limit_th: 1.0
 acc_limit_x: 0.2
 acc_limit_y: 0
 holonomic_robot: true

For the p3dx some parameters are described in the p3dx.p file (see the Aria and p3dx reference section). Too high a minimum velocity can make the robot oscillate.


Global and local costmap params

These two files contain the parameters for the costmaps: the frames, some frequencies and the static_map option. Declaring a costmap static means that it is based on a map served by the map server, and the navigation stack will treat the obstacles described in the map as real constraints.

global_costmap:
 global_frame: /map
 robot_base_frame: /p3dx_frame
 update_frequency: 5.0
 publish_frequency: 0.0
 static_map: true
local_costmap:
 global_frame: /odometry_start_frame
 robot_base_frame: /p3dx_frame
 update_frequency: 5.0
 publish_frequency: 2.0
 static_map: false
 rolling_window: true
 width: 6.0
 height: 6.0
 resolution: 0.05

Our test

We ran some tests in the 1st floor corridor near our laboratory (Bicocca U-14), whose map was created using the measurements taken by Marzorati Daniele. Map of the corridor, resolution 0.03415 meters/pixel.

This environment is better than the lab because it has more free space and more defined boundaries. This helps the robot to localize itself better.

The first attempts were made using only the sonars; then we started testing the laser.

Testing with the sonars

The performance with the sonars was very poor, partly because the parameters were not tuned correctly but mainly because the sonars provided too little data. The localization fails to determine an estimated position and leaves a very high variance, which means high uncertainty about the position in the world. Screenshot from rviz showing the particle cloud before localization. Screenshot from rviz showing the particle cloud after localization through sonar.


Of course the navigation failed to reach the goal.

Testing with the laser

With the laser we had a very different performance. The localization, after quite a long tuning of the parameters, worked effectively. Screenshots from rviz showing the particle cloud before, during and after localization through the laser.

Regarding the navigation, we observed that the global plan seems reasonable, although the local planner struggles to follow it. We need more accurate tuning, in particular of velocities and constraints, and a more precise map.

Building the robot

After testing ROS capabilities with our Pioneer, we want to use the volksbot to reach our goal. To do this we have to build a control board for the robot, a new three-degrees-of-freedom arm and a control board for the arm.

I spent some time on the design of the arm; our choices are explained in MrHades arm: project and realization.

External Links

Ros

Aria and p3dx reference

Here you can find a lot of useful information.
