Localization Techniques

Determine robot position using maps and sensors


Tutorial Overview

🎯 Learning Objectives

By the end of this tutorial, you will:

  • ✅ Understand localization vs SLAM

  • ✅ Use AMCL (Adaptive Monte Carlo Localization)

  • ✅ Initialize robot pose on known map

  • ✅ Monitor localization quality

  • ✅ Recover from kidnapped robot problem

  • ✅ Tune AMCL parameters

  • ✅ Compare localization methods

  • ✅ Troubleshoot localization failures

⏱️ Time Required

  • Reading & Theory: 25 minutes

  • Setup & First Localization: 30 minutes

  • Quality Assessment: 25 minutes

  • Parameter Tuning: 30 minutes

  • Advanced Scenarios: 35 minutes

  • Total: ~145 minutes

📚 Prerequisites

  • ✅ Completed SLAM Mapping

  • ✅ Have at least one saved map

  • ✅ Understanding of particle filters (helpful)

  • ✅ Completed Sensor Fusion with EKF

  • ✅ Can drive robot smoothly

  • ✅ Can visualize in RViz

πŸ› οΈ What You'll Need

  • ✅ Beetlebot (fully charged)

  • ✅ Laptop with ROS2 Jazzy

  • ✅ Wireless controller

  • ✅ Previously created map files (.pgm + .yaml)

  • ✅ Mapped environment (unchanged since mapping)

  • ✅ Clear space to operate


Part 1: Localization Fundamentals

What is Localization?

Definition: Determining the robot's position and orientation (its pose) on a known map

Key difference from SLAM:

| Aspect     | SLAM                    | Localization               |
|------------|-------------------------|----------------------------|
| Map        | Unknown (building it)   | Known (already have it)    |
| Position   | Unknown (estimating it) | Unknown (estimating it)    |
| Complexity | High (2 unknowns)       | Medium (1 unknown)         |
| Accuracy   | Good (but drifts)       | Excellent (bounded by map) |
| Use Case   | Exploring new areas     | Operating in known areas   |

Why localization matters:

  • Navigation requires knowing the robot's position on the map

  • Path planning needs the current location as a starting point

  • Obstacle avoidance works relative to the map

  • Returning to specific locations (charging station, home) needs a map position


The Three Localization Problems

1. Position Tracking

  • Know approximate starting position

  • Track motion from there

  • Easiest problem

  • Example: Start at (0, 0), track from there

2. Global Localization

  • No idea where robot is on map

  • Must determine position from scratch

  • Harder problem

  • Example: Robot placed randomly in environment

3. Kidnapped Robot

  • Robot localized, then suddenly moved

  • Must detect and recover

  • Hardest problem

  • Example: Robot picked up and moved while running

AMCL handles all three! (with varying difficulty)


Particle Filter Concept

How AMCL works:

Particles = Hypotheses about robot position

Visual analogy:

  • Start: Particle cloud covers entire map (no idea where robot is)

  • Drive: Cloud moves and spreads (motion uncertainty)

  • See wall: Particles near walls get high weight

  • Resample: Cloud shrinks toward high-weight area

  • Converged: Tight cluster = confident position estimate

[PLACEHOLDER: Diagram showing particle filter convergence]
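The convergence loop above can be sketched in a few lines of Python. This is an illustrative 1-D particle filter, not the AMCL implementation: particles guess a position along a corridor, a noisy range measurement to a wall at a known map position weights them, and resampling concentrates the cloud.

```python
import random

random.seed(0)       # reproducible demo

WALL = 10.0          # known wall position on the map (meters)
TRUE_POS = 3.0       # robot's actual position (unknown to the filter)

def measure(pos, noise=0.1):
    """Noisy range measurement from the robot to the wall."""
    return WALL - pos + random.gauss(0.0, noise)

# 1. Start: particles spread over the whole corridor (global uncertainty).
particles = [random.uniform(0.0, WALL) for _ in range(1000)]

for step in range(20):
    # 2. Predict: apply the commanded motion (0.1 m forward) plus motion noise.
    TRUE_POS += 0.1
    particles = [p + 0.1 + random.gauss(0.0, 0.02) for p in particles]

    # 3. Weight: particles whose predicted range matches the measurement win.
    z = measure(TRUE_POS)
    weights = [1.0 / (1e-6 + abs((WALL - p) - z)) for p in particles]

    # 4. Resample: draw particles in proportion to weight (cloud shrinks).
    particles = random.choices(particles, weights=weights, k=len(particles))

estimate = sum(particles) / len(particles)
spread = max(particles) - min(particles)
print(f"estimate={estimate:.2f} true={TRUE_POS:.2f} spread={spread:.3f}")
```

After 20 predict-weight-resample cycles the spread collapses from the full 10 m corridor to a few centimeters, which is exactly the "tight cluster = confident estimate" behavior described above.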


Part 2: Setting Up AMCL

Install AMCL (if not already)


Load Your Map

First, you need a map running:
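One way to bring a saved map up by hand, assuming your map files live in ~/maps (adjust the path):

```shell
# Serve the saved map; map_server is a lifecycle node.
ros2 run nav2_map_server map_server --ros-args -p yaml_filename:=$HOME/maps/my_map.yaml

# In a second terminal, configure and activate it.
ros2 lifecycle set /map_server configure
ros2 lifecycle set /map_server activate
```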

In another terminal, verify in RViz:
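Before opening RViz, a quick sanity check that the map is actually being published:

```shell
ros2 topic info /map     # should show one publisher (map_server)
rviz2                    # then add a Map display subscribed to /map
```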


Launch AMCL

Your robot likely has AMCL configured to launch automatically with navigation. Check:
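A few commands to confirm whether AMCL is already up (node name assumed to be /amcl):

```shell
ros2 node list | grep amcl   # is an AMCL node running?
ros2 node info /amcl         # its subscriptions and publications
ros2 param list /amcl        # its current parameter set
```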

What AMCL does:

  • Subscribes to: /scan (LiDAR), /odom (wheel odometry)

  • Publishes: /amcl_pose (estimated position)

  • Provides: map → odom transform

  • Updates: Particle cloud on /particle_cloud


Configure RViz for Localization

Full RViz setup:

[PLACEHOLDER: Screenshot of RViz configured for localization]


Part 3: Initial Pose Estimation

Setting Initial Pose

Robot needs to know approximate starting position:

Method 1: 2D Pose Estimate (RViz)

In RViz, click the 2D Pose Estimate toolbar button, then click the robot's approximate position on the map and drag in the direction it is facing. This publishes an initial pose that AMCL uses to re-seed its particles.

Method 2: Command Line
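One way to set the pose from a terminal is to publish a single message on /initialpose, the topic AMCL listens to for pose resets. Replace the x/y values and the yaw quaternion with your robot's approximate pose in the map frame:

```shell
# Quaternion from yaw: z = sin(yaw/2), w = cos(yaw/2); below is yaw = 0.
ros2 topic pub --once /initialpose geometry_msgs/msg/PoseWithCovarianceStamped "{header: {frame_id: 'map'}, pose: {pose: {position: {x: 0.0, y: 0.0, z: 0.0}, orientation: {z: 0.0, w: 1.0}}}}"
```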

Method 3: Known Position (Launch File)
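If the robot always starts from the same dock, AMCL's own parameters can seed the pose at startup. These are nav2 parameter names; set them in your AMCL YAML (or, as shown here, at runtime):

```shell
ros2 param set /amcl set_initial_pose true
ros2 param set /amcl initial_pose.x 0.0    # dock position on the map (m)
ros2 param set /amcl initial_pose.y 0.0
ros2 param set /amcl initial_pose.yaw 0.0  # radians
```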


Convergence Process

After setting initial pose:

Signs of convergence:

  • ✅ Particle cloud shrinks (concentrated)

  • ✅ LiDAR scans align with map walls

  • ✅ Robot model stays aligned on map

  • ✅ Covariance ellipse small

Signs of divergence (failure):

  • ❌ Particles spread apart

  • ❌ Scans don't match map

  • ❌ Robot drifting on map

  • ❌ Multiple particle clusters


Exercise 10.1: First Localization

Task: Successfully localize robot

Procedure:

Estimated time: 2-3 minutes to converge


Part 4: Monitoring Localization Quality

Particle Cloud Size

Check particle spread:
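A quick look at the cloud from the command line (topic name as published by nav2's AMCL; older ROS1 stacks used /particlecloud instead):

```shell
ros2 topic echo /particle_cloud --once | head -n 40
```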

Interpretation:

  • Small, tight cloud = good localization

  • Large, spread cloud = uncertain localization

  • Multiple clusters = ambiguous (similar features)


Covariance (Uncertainty)

Check pose covariance:
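The covariance rides along in /amcl_pose as a row-major 6x6 matrix: index 0 is the x variance, 7 the y variance, and 35 the yaw variance.

```shell
ros2 topic echo /amcl_pose --once
```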

Good values:

  • x/y and yaw covariance below about 0.01 (tight, confident estimate)

Poor values:

  • covariance above about 0.05, or growing over time (uncertain; consider re-setting the initial pose)


Visual Alignment Check

In RViz, verify:

  1. Scan alignment:

    • LiDAR red dots should overlay map walls

    • Corners should line up precisely

    • Doorways should match

  2. Robot model position:

    • Should be inside free space (white)

    • Not inside walls (black)

    • Realistic position in environment

  3. Motion consistency:

    • Drive forward → robot moves forward on map

    • Turn left → robot rotates left on map

    • No jumping or jittering

[PLACEHOLDER: Screenshot showing good vs poor alignment]


Exercise 10.2: Quality Assessment

Task: Quantify localization quality

Test scenarios:

Scenario 1: Good localization

Scenario 2: Ambiguous localization

Scenario 3: Featureless area


Part 5: Global Localization

No Initial Pose

Challenge: Robot doesn't know where it is at all

AMCL solution: Spread particles across entire map

Launch with global localization:
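nav2's AMCL exposes a service that scatters the particles uniformly over the free space of the map:

```shell
ros2 service call /reinitialize_global_localization std_srvs/srv/Empty
```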


Convergence Strategy

How to help global localization converge:

  • Rotate slowly in place so the LiDAR sweeps the entire surroundings

  • Drive toward distinctive features (corners, doorways), not along symmetric corridors

  • Move slowly: fast motion adds odometry uncertainty faster than measurements remove it
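Assuming the usual /cmd_vel interface, a slow in-place rotation is a good first move because it sweeps the scan across the whole surroundings:

```shell
# Rotate in place at ~0.3 rad/s; stop with Ctrl+C once the cloud tightens.
ros2 topic pub --rate 10 /cmd_vel geometry_msgs/msg/Twist "{angular: {z: 0.3}}"
```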


Exercise 10.3: Global Localization Challenge

Task: Localize without initial pose

Procedure:

Typical time: 1-3 minutes


Part 6: Kidnapped Robot Problem

What is Kidnapped Robot?

Scenario:

  • Robot is localized and operating normally

  • Someone picks it up and sets it down somewhere else

  • Odometry reports no motion, but the surroundings have completely changed

Challenge: Detect this happened and re-localize


AMCL's Recovery

How AMCL detects kidnapping:

  1. Scan mismatch increases

    • LiDAR sees walls that shouldn't be there

    • Particle weights drop dramatically

  2. Insert random particles

    • AMCL periodically adds random particles

    • If one matches new location, it survives

    • Others die off quickly

  3. Recovery mode triggered

    • Increases particle spread

    • Accelerates resampling

    • Similar to mini global localization

Parameters controlling recovery:
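The key knobs are the recovery alphas (nav2 parameter names; both default to 0.0, which disables random-particle injection entirely):

```shell
ros2 param set /amcl recovery_alpha_slow 0.001  # long-term likelihood average
ros2 param set /amcl recovery_alpha_fast 0.1    # short-term likelihood average
```

Recovery triggers when the short-term average likelihood drops well below the long-term one, i.e. when the scans suddenly stop matching the map.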


Exercise 10.4: Kidnapped Robot Test

Task: Force and recover from kidnapping

Procedure:


Part 7: Parameter Tuning

Key AMCL Parameters

Particle filter parameters:
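The most commonly tuned filter parameters, with typical starting values (nav2 parameter names; verify against your config):

```shell
ros2 param set /amcl min_particles 500      # floor on cloud size
ros2 param set /amcl max_particles 2000     # ceiling (KLD sampling adapts within)
ros2 param set /amcl update_min_d 0.25      # meters moved before a filter update
ros2 param set /amcl update_min_a 0.2       # radians turned before a filter update
ros2 param set /amcl resample_interval 1    # filter updates per resample
```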


Laser model parameters:
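The laser measurement model controls how strictly scans are matched against the map (nav2 parameter names; z_hit and z_rand should sum to roughly 1 for the likelihood-field model):

```shell
ros2 param set /amcl laser_model_type likelihood_field
ros2 param set /amcl max_beams 60        # beams used per scan
ros2 param set /amcl z_hit 0.5           # weight of "beam matched the map"
ros2 param set /amcl z_rand 0.5          # weight of random/unexplained readings
ros2 param set /amcl sigma_hit 0.2       # std dev (m) of the hit model
```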


Exercise 10.5: Tune for Your Environment

Scenario 1: Symmetric hallway (ambiguous features)

Problem: Multiple particle clusters persist

Solution: Increase particles, stricter matching
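A hedged starting point for this case (values illustrative; change one at a time):

```shell
ros2 param set /amcl max_particles 5000  # more hypotheses survive the ambiguity
ros2 param set /amcl z_hit 0.8           # reward precise scan-map agreement
ros2 param set /amcl z_rand 0.2
ros2 param set /amcl sigma_hit 0.1       # stricter matching kills mirror clusters
```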


Scenario 2: Open warehouse (few features)

Problem: Localization drifts in open areas

Solution: More particle spread, trust odometry less
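Raising the odometry noise alphas tells AMCL to spread particles more per unit of motion, modeling less-trustworthy odometry (values illustrative):

```shell
ros2 param set /amcl alpha1 0.4   # rotation noise caused by rotation
ros2 param set /amcl alpha2 0.4   # rotation noise caused by translation
ros2 param set /amcl alpha3 0.4   # translation noise caused by translation
ros2 param set /amcl alpha4 0.4   # translation noise caused by rotation
```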


Scenario 3: Dynamic environment (moving obstacles)

Problem: False obstacles confuse AMCL

Solution: Increase noise tolerance
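One way to make the beam model more forgiving of people and other unmapped obstacles (values illustrative):

```shell
ros2 param set /amcl z_rand 0.5          # tolerate unexplained readings
ros2 param set /amcl z_hit 0.5
ros2 param set /amcl do_beamskip true    # skip beams that disagree with the map
```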


Testing Parameter Changes

Systematic approach:
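A minimal change-one-thing workflow, sketched as shell steps (file names illustrative):

```shell
# 1. Record the baseline estimate after driving a fixed test route.
ros2 topic echo /amcl_pose --once > baseline_pose.txt

# 2. Change exactly one parameter.
ros2 param set /amcl max_particles 3000

# 3. Re-drive the same route, record again, and compare covariances.
ros2 topic echo /amcl_pose --once > tuned_pose.txt
diff baseline_pose.txt tuned_pose.txt
```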


Part 8: Advanced Localization Topics

Multi-Hypothesis Tracking

When environment is symmetric:

AMCL may maintain multiple particle clusters (each a hypothesis)

Example:

Viewing multiple hypotheses:


Localization with Odometry Bias

Problem: Odometry has systematic error (wheel radius wrong)

Effect:

  • Robot drifts consistently in one direction

  • AMCL can compensate to some degree

  • But if error too large, fails

Solution:

  • Calibrate odometry first (wheel radius, wheelbase)

  • Or increase the odometry noise (alpha) parameters so AMCL trusts odometry less


Scan Matching vs Particle Filter

Two localization approaches:

Particle Filter (AMCL):

  • ✅ Handles global localization

  • ✅ Handles kidnapped robot

  • ✅ Multiple hypotheses

  • ❌ Slower (1000s of particles)

  • ❌ Needs motion to converge

Scan Matching (ICP - Iterative Closest Point):

  • ✅ Very fast

  • ✅ No motion needed

  • ✅ Precise alignment

  • ❌ Only local (needs good initial guess)

  • ❌ No kidnapping recovery

Beetlebot uses AMCL (particle filter)


Part 9: Troubleshooting Localization

Problem: Localization Won't Converge

Symptoms: Particles stay spread out, never form tight cluster

Possible causes:

  1. Map doesn't match environment

  2. Initial pose very wrong

  3. Not enough motion

  4. Too few distinctive features


Problem: Localization Jumps Around

Symptoms: Robot position jitters on map

Possible causes:

  1. LiDAR noise

  2. Particle depletion

  3. Odometry jumps


Problem: Localization Drifts Over Time

Symptoms: Position slowly wanders away from true location

Possible causes:

  1. Odometry bias

  2. Environment changed

  3. Insufficient features


General Debugging Steps

Systematic approach:
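Checking each link in the chain from upstream to downstream usually isolates the failure quickly:

```shell
ros2 topic hz /scan                    # 1. LiDAR publishing?
ros2 topic hz /odom                    # 2. Odometry publishing?
ros2 run tf2_ros tf2_echo map odom     # 3. AMCL publishing its transform?
ros2 topic echo /amcl_pose --once      # 4. Current estimate and covariance
```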


Part 10: Knowledge Check

Concept Quiz

  1. What's the main difference between SLAM and localization?

  2. What do particles represent in AMCL?

  3. Why rotate in place for global localization?

  4. What is the kidnapped robot problem?

  5. Can localization work in a completely featureless environment (empty room)?


Hands-On Challenge

Task: Robust localization system

Requirements:

  1. Create launch file that:

    • Loads specified map

    • Launches AMCL with tuned parameters

    • Launches RViz with localization config

  2. Test in 3 scenarios:

    • Known starting position (tracking)

    • Unknown starting position (global)

    • Kidnapped robot (recovery)

  3. Document convergence times and final errors

  4. Create tuned parameter file for your environment

Deliverable:

  • Launch file

  • Parameter config file

  • Test results table (convergence time, final covariance, position error)

  • Screenshots of RViz during each scenario

  • Recommendations for future users

Bonus:

  • Compare AMCL performance with different particle counts

  • Test with artificially degraded odometry (simulated wheel slip)

  • Create map quality metric (how "localizable" is your map?)


Part 11: What You've Learned

✅ Congratulations!

You now understand:

Localization Fundamentals:

  • ✅ Localization vs SLAM

  • ✅ Three localization problems (tracking, global, kidnapped)

  • ✅ Particle filter concepts

  • ✅ When localization is appropriate

AMCL Operation:

  • ✅ Setting initial pose

  • ✅ Monitoring convergence

  • ✅ Assessing localization quality

  • ✅ Understanding particle cloud behavior

Practical Skills:

  • ✅ Loading and using saved maps

  • ✅ Localizing robot in known environment

  • ✅ Global localization (no initial pose)

  • ✅ Recovering from kidnapping

  • ✅ Tuning AMCL parameters

Advanced Topics:

  • ✅ Multi-hypothesis tracking

  • ✅ Covariance interpretation

  • ✅ Scan matching principles

  • ✅ Troubleshooting localization failures


Next Steps

🎯 You're Now Ready For:

FINAL TUTORIAL: → Autonomous Navigation - Put it all together!

Beyond This Course:

  • Multi-robot localization

  • Visual localization (camera-based)

  • GPS integration (outdoor)

  • Robust localization in dynamic environments


Quick Reference

Essential Localization Commands
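A condensed cheat sheet of the commands used in this tutorial (map path illustrative):

```shell
# Serve a saved map
ros2 run nav2_map_server map_server --ros-args -p yaml_filename:=$HOME/maps/my_map.yaml

# Check the current estimate and covariance
ros2 topic echo /amcl_pose --once

# Trigger global localization (spread particles over the map)
ros2 service call /reinitialize_global_localization std_srvs/srv/Empty

# Inspect or change AMCL parameters
ros2 param list /amcl
```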


Localization Quality Metrics

| Metric                | Good    | Fair      | Poor  |
|-----------------------|---------|-----------|-------|
| X/Y Covariance (m²)   | <0.01   | 0.01-0.05 | >0.05 |
| Yaw Covariance (rad²) | <0.01   | 0.01-0.05 | >0.05 |
| Particle Spread       | <0.5 m  | 0.5-2 m   | >2 m  |
| Scan Alignment        | Perfect | Close     | Off   |
| Convergence Time      | <30 s   | 30-90 s   | >90 s |


Common AMCL Parameters
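The parameters most worth remembering, queryable on a live system (nav2 names; the commented defaults are approximate, so confirm with `ros2 param describe /amcl <name>`):

```shell
ros2 param get /amcl min_particles         # default ~500
ros2 param get /amcl max_particles         # default ~2000
ros2 param get /amcl update_min_d          # default ~0.25 m between updates
ros2 param get /amcl alpha1                # odometry rotation noise
ros2 param get /amcl recovery_alpha_fast   # 0.0 disables kidnap recovery
```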


Completed Localization Techniques! 🎉

→ Continue to FINAL TUTORIAL: Autonomous Navigation → Or return to Tutorial Index


Last Updated: January 2026
Tutorial 10 of 11 - Advanced Level
Estimated completion time: 145 minutes
