Scanpath Storytelling From Fixations in Python using Pupil Labs Eye Tracking

This guide explains how to turn fixation data exported from a Pupil Labs eye tracker into a readable scanpath: you will plot fixations on the panorama and connect them in temporal order to show the “story” of visual attention.

By the end, you will generate:
- A static scanpath overlay on the panorama (fixation dots sized by duration + connecting saccade path)
- An animated scanpath replay that reveals the sequence fixation-by-fixation
- A small summary plot (fixation durations over time / order)
- (Optional, but recommended for this tutorial) An AOI-aware scanpath using the avatar AOI, plus an AOI sequence plot

Note. This code works with Pupil Labs CSV exports (especially fixations.csv). The code uses a panorama image captured in VR

Download Fixations CSV File

Download Panorama Image

Requirements

- Anaconda Python Development Environment
- CSV files exported from Pupil Labs
- Optional panorama image (recommended for the most informative visualization)
- Python packages:
  - pandas
  - numpy
  - matplotlib
  - Pillow (usually included with Anaconda; used for reading images and writing GIFs)

Setup

1. Install Anaconda if needed.
2. Open Spyder, Jupyter Notebook, or VS Code.
3. Put the files in your working folder:
  - fixations.csv (required)
  - optional: gaze_positions.csv (not required for the scanpath overlay in this tutorial)
  - optional: a panorama image file (for example, a panorama or screenshot)
4. Save the script as scanpath_storytelling.py in the same folder (or a folder of your choice and pass paths on the command line).

To use Spyder, install Anaconda, run it, and launch Spyder. If you see Install instead of Launch for Spyder, install Spyder first. Create a new file in Spyder and save it in the same directory as your data when you run the examples below.

Step 1 Data

We start with the fixation events exported by Pupil Labs.

Important variables from fixations.csv:
- start_timestamp: when the fixation started (used to sort fixations in time order)
- duration: fixation duration in milliseconds (used to size fixation markers)
- norm_pos_x, norm_pos_y: normalized gaze coordinates (used to place fixations on the panorama)
- confidence: fixation confidence (used to optionally filter low-quality fixations)

Step 2 Plot a Static Scanpath Overlay

In this step, we:
1. Load fixations.csv
2. Sort fixations by start_timestamp
3. Convert norm_pos_x and norm_pos_y into pixel coordinates on the panorama image
4. Draw fixation dots (size proportional to duration) and a connecting line between consecutive fixations in temporal order.

This static plot is the “headline figure”: it turns raw points into an interpretable behavioral trace.

Panorama image and AOI definition

Download the panorama image for this tutorial provided above.

How this AOI was defined for the tutorial: The panorama used in the reference analysis is 4096 × 2048 pixels (width × height). The avatar region is a rectangle in pixel coordinates of the original image, with the origin at the top-left (x increases to the right, y increases downward): left = 885, right = 1162, top = 873, bottom = 1158. The script scales these numbers to match the actual width and height of your --image file when you run it.

Getting these coordinates in Adobe Illustrator: Open the image at full resolution and confirm the document size in pixels. Set Units to Pixels (for example Edit → Preferences → Units on Windows, or Illustrator → Settings → Units on Mac). Use the Rectangle Tool to draw a box over the region of interest. With the rectangle selected, read X, Y, W, and H in the Transform panel (or use Window → Info while adjusting the shape). Convert to left / right / top / bottom in pixels if needed (for example: right = left + width, bottom = top + height).

Step 2.1 Create an AOI-aware Scanpath (avatar AOI)

In this step, we use the avatar AOI definition from the heatmap analysis and turn fixations into an AOI sequence:
1. Filter fixations to the first 2 minutes (matches the heatmap/AOI analysis window)
2. Determine for each fixation whether it falls inside the avatar AOI rectangle in the same calibrated pixel coordinates as the scanpath overlay (the same shift used for the heatmap-style display—not raw norm→image mapping alone)
3. Create an AOI-aware scanpath overlay: AOI fixations are shown in a different color than “elsewhere” fixations; the avatar AOI rectangle is drawn on top of the panorama image
4. Save an “AOI sequence” plot (aoi_sequence.png): a step plot of AOI vs elsewhere over fixation order, with a small caption counting how many fixations landed in the AOI (this should match the orange vs blue split on scanpath_static.png)

Learning outcome: you can connect where (AOI vs elsewhere) with when (the fixation sequence).

Step 3 Create a Scanpath Replay Animation (Fixation-by-Fixation)

1. Decide a maximum number of fixations to animate (keeps the GIF readable)
2. For each animation frame, plot the scanpath up to that fixation index
3. Save a GIF so you can “watch” attention shift over the panorama

If GIF creation is not supported on your system, the script will still produce the static scanpath.

Step 4 Run the Script

Option A (portable, minimal) — put fixations.csv in the same folder as the script and run:
python scanpath_storytelling.py --fixations "fixations.csv"

Option B (best visualization with an image overlay)
python scanpath_storytelling.py --fixations "fixations.csv" --image "panorama.jpg"

Option C (AOI-aware scanpath for the avatar AOI)
python scanpath_storytelling.py --fixations "fixations.csv" --image "panorama.jpg" --aoi-avatar --gif

Expected output files (saved automatically to your outputs folder):
- scanpath_static.png
- scanpath_replay.gif (if GIF writing works)
- fixation_durations.png
- aoi_sequence.png (only when --aoi-avatar is enabled)

Step 5 Code

Use the script file scanpath_storytelling.py. The script uses fixation timestamps to order fixations, and the normalized fixation coordinates to place them on the panorama.
                                        
                                            """
                                            @author: Fjorda
                                            """

                                            from __future__ import annotations

                                            import argparse
                                            from pathlib import Path
                                            from typing import Tuple

                                            import matplotlib.pyplot as plt
                                            import matplotlib as mpl
                                            import numpy as np
                                            import pandas as pd

                                            from matplotlib.patches import Rectangle

                                            # ---- Avatar AOI configuration (pixel rectangle on the panorama as stored) ----
                                            IMG_W = 4096
                                            IMG_H = 2048
                                            FIRST_2_MIN_SEC_DEFAULT = 120.0

                                            # AOI (avatar) rectangle in original image coords: x [885, 1162], y [873, 1158]
                                            AOI_X_LEFT = 885
                                            AOI_X_RIGHT = 1162
                                            AOI_Y_TOP = 873
                                            AOI_Y_BOTTOM = 1158

                                            AOI_CENTER_NORM_X = (AOI_X_LEFT + AOI_X_RIGHT) / 2.0 / IMG_W
                                            AOI_CENTER_NORM_Y = 1.0 - (AOI_Y_TOP + AOI_Y_BOTTOM) / 2.0 / IMG_H


                                            def aoi_x_display_edges(W: int) -> Tuple[float, float]:
                                                """
                                                AOI left/right in the same horizontal data coordinates as map_to_panorama_pixels()
                                                """
                                                scale_x = W / IMG_W
                                                raw_left = AOI_X_LEFT * scale_x
                                                raw_right = AOI_X_RIGHT * scale_x
                                                return (W - 1) - raw_right, (W - 1) - raw_left


                                            def parse_args():
                                                script_dir = Path(__file__).resolve().parent
                                                parser = argparse.ArgumentParser(
                                                    description="Create static and animated scanpath plots from Pupil Labs fixations.csv."
                                                )
                                                parser.add_argument(
                                                    "--fixations",
                                                    type=Path,
                                                    default=script_dir / "fixations.csv",
                                                    help="Path to fixations.csv (default: script folder/fixations.csv).",
                                                )
                                                parser.add_argument(
                                                    "--image",
                                                    type=Path,
                                                    default=None,
                                                    help="Optional panorama image (panorama/screenshot) to overlay scanpath on.",
                                                )
                                                parser.add_argument(
                                                    "--out-dir",
                                                    type=Path,
                                                    default=script_dir / "outputs",
                                                    help="Output folder (default: script folder/outputs).",
                                                )
                                                parser.add_argument(
                                                    "--confidence-threshold",
                                                    type=float,
                                                    default=0.5,
                                                    help="Filter fixations by confidence if the column exists.",
                                                )
                                                parser.add_argument(
                                                    "--max-fixations",
                                                    type=int,
                                                    default=80,
                                                    help="Maximum number of fixations to use for plotting/animation.",
                                                )
                                                parser.add_argument(
                                                    "--aoi-avatar",
                                                    action="store_true",
                                                    dest="aoi_avatar",
                                                    help="Enable avatar AOI-aware scanpath (colors points by AOI membership and draws AOI rectangle on the image).",
                                                )
                                                parser.add_argument(
                                                    "--aoi-first2min-seconds",
                                                    type=float,
                                                    default=FIRST_2_MIN_SEC_DEFAULT,
                                                    dest="aoi_first2min_seconds",
                                                    help="Seconds from the first fixation start to consider for the AOI-aware scanpath.",
                                                )
                                                parser.add_argument(
                                                    "--gif",
                                                    action="store_true",
                                                    help="If set, attempt to write a GIF replay (requires Pillow/matplotlib support).",
                                                )
                                                parser.add_argument(
                                                    "--gif-frames",
                                                    type=int,
                                                    default=60,
                                                    help="Number of animation frames (reduced for speed/readability).",
                                                )
                                                return parser.parse_args()


                                            def load_fixations(fix_path: Path, conf_threshold: float, max_fixations: int) -> pd.DataFrame:
                                                fix = pd.read_csv(fix_path)

                                                required = {"start_timestamp", "duration", "norm_pos_x", "norm_pos_y"}
                                                missing = required - set(fix.columns)
                                                if missing:
                                                    raise ValueError(f"Missing required columns in fixations.csv: {sorted(missing)}")

                                                # Optional confidence filtering.
                                                if "confidence" in fix.columns:
                                                    fix = fix[fix["confidence"] >= conf_threshold].copy()

                                                fix = fix.dropna(subset=["start_timestamp", "duration", "norm_pos_x", "norm_pos_y"]).copy()
                                                fix = fix.sort_values("start_timestamp").reset_index(drop=True)

                                                if max_fixations is not None and len(fix) > max_fixations:
                                                    fix = fix.iloc[:max_fixations].copy()

                                                return fix


                                            def norm_to_pixels(norm_x: np.ndarray, norm_y: np.ndarray, size: Tuple[int, int]) -> Tuple[np.ndarray, np.ndarray]:
                                                """
                                                Map normalized coordinates (0..1) to pixel coordinates.
                                                Assumption for Pupil Labs exports: norm_pos_y increases downward (screen coordinate).
                                                """
                                                width, height = size
                                                x_pix = norm_x * width
                                                y_pix = norm_y * height
                                                return x_pix, y_pix


                                            def filter_first_seconds_by_timestamp(fix: pd.DataFrame, seconds: float) -> pd.DataFrame:
                                                if fix.empty or "start_timestamp" not in fix.columns:
                                                    return fix
                                                fix = fix.copy()
                                                fix["start_timestamp"] = pd.to_numeric(fix["start_timestamp"], errors="coerce")
                                                fix = fix.dropna(subset=["start_timestamp"]).copy()
                                                if fix.empty:
                                                    return fix
                                                t0 = float(fix["start_timestamp"].min())
                                                mask = fix["start_timestamp"].to_numpy() <= (t0 + float(seconds))
                                                return fix.loc[mask].copy()


                                            def avatar_aoi_mask(
                                                fix: pd.DataFrame,
                                                W: int | None = None,
                                                H: int | None = None,
                                                shift_x: float | None = None,
                                                shift_y: float | None = None,
                                            ) -> np.ndarray:
                                                """
                                                AOI membership in the same pixel space as the AOI scanpath overlay:
                                                `map_to_panorama_pixels` + scaled rectangle from `draw_avatar_aoi_rect`.

                                                The old raw norm→4096×2048 test ignored calibration shifts, so every fixation could
                                                be marked "elsewhere" while still plotting inside the AOI box.

                                                If W/H are omitted, uses IMG_W/IMG_H (full panorama resolution). If shifts are
                                                omitted, uses `aoi_calibration_shift(fix)`.
                                                """
                                                if W is None:
                                                    W = IMG_W
                                                if H is None:
                                                    H = IMG_H
                                                if shift_x is None or shift_y is None:
                                                    shift_x, shift_y = aoi_calibration_shift(fix)

                                                px_x, px_y = map_to_panorama_pixels(fix, W=W, H=H, shift_x=shift_x, shift_y=shift_y)

                                                left, right = aoi_x_display_edges(W)
                                                scale_y = H / IMG_H
                                                top = AOI_Y_TOP * scale_y
                                                bottom = AOI_Y_BOTTOM * scale_y
                                                y_lo = min(top, bottom)
                                                y_hi = max(top, bottom)

                                                return (px_x >= left) & (px_x <= right) & (px_y >= y_lo) & (px_y <= y_hi)


                                            def aoi_calibration_shift(fix: pd.DataFrame) -> Tuple[float, float]:
                                                """
                                                Calibrate display coordinates so the median fixation lands at the AOI center,
                                                matching your heatmap approach (shift_x/shift_y).
                                                """
                                                norm_x = np.clip(fix["norm_pos_x"].to_numpy(dtype=float), 0.0, 1.0)
                                                norm_y_display = 1.0 - np.clip(fix["norm_pos_y"].to_numpy(dtype=float), 0.0, 1.0)

                                                if norm_x.size == 0 or norm_y_display.size == 0:
                                                    return 0.0, 0.0
                                                return AOI_CENTER_NORM_X - float(np.median(norm_x)), AOI_CENTER_NORM_Y - float(np.median(norm_y_display))


                                            def map_to_panorama_pixels(
                                                fix: pd.DataFrame,
                                                W: int,
                                                H: int,
                                                shift_x: float,
                                                shift_y: float,
                                            ) -> Tuple[np.ndarray, np.ndarray]:
                                                """
                                                Map fixations to pixel coords on the panorama image as loaded.

                                                Uses display-style vertical axis (origin at top): norm_y_display = 1 - norm_pos_y,
                                                plus shift_x/shift_y calibration. Horizontal position is mirrored relative to the
                                                old heatmap pipeline so that coordinates match the file orientation
                                                (panorama panorama as stored on disk) instead of a left-right mirrored image.
                                                """
                                                norm_x = np.clip(fix["norm_pos_x"].to_numpy(dtype=float), 0.0, 1.0)
                                                norm_y_display = 1.0 - np.clip(fix["norm_pos_y"].to_numpy(dtype=float), 0.0, 1.0)

                                                img_x = np.clip(norm_x + shift_x, 0.0, 1.0)
                                                img_y = np.clip(norm_y_display + shift_y, 0.0, 1.0)

                                                px_x = img_x * (W - 1)
                                                px_y = img_y * (H - 1)
                                                px_x = (W - 1) - px_x
                                                return px_x, px_y


                                            def draw_avatar_aoi_rect(ax, H: int, W: int) -> None:
                                                """
                                                Draw avatar AOI on ax.imshow(..., extent=[0, W, H, 0]), ylim(H, 0).
                                                Horizontal edges match map_to_panorama_pixels (same x mirror as gaze).
                                                """
                                                left, right = aoi_x_display_edges(W)
                                                scale_y = H / IMG_H

                                                top = AOI_Y_TOP * scale_y
                                                bottom = AOI_Y_BOTTOM * scale_y

                                                y_min = min(top, bottom)
                                                y_max = max(top, bottom)
                                                rect_height = y_max - y_min
                                                rect_bottom_axis = H - y_max

                                                ax.add_patch(
                                                    Rectangle(
                                                        (left, rect_bottom_axis),
                                                        right - left,
                                                        rect_height,
                                                        fill=False,
                                                        edgecolor="white",
                                                        linewidth=3,
                                                        linestyle="--",
                                                    )
                                                )


                                            def plot_aoi_sequence(in_aoi: np.ndarray, out_path: Path):
                                                fig, ax = plt.subplots(figsize=(12, 3.8))
                                                seq = in_aoi.astype(int)
                                                x = np.arange(len(seq))
                                                ax.step(x, seq, where="post", linewidth=2.0, alpha=0.95)
                                                ax.set_yticks([0, 1])
                                                ax.set_yticklabels(["Elsewhere", "AOI (avatar)"])
                                                ax.set_xlabel("Fixation order (time ordered)")
                                                ax.set_ylabel("Where the fixation landed")
                                                n_in = int(np.sum(in_aoi))
                                                n_tot = len(in_aoi)
                                                ax.set_title("AOI membership over time (same coords as scanpath overlay)")
                                                ax.text(
                                                    0.99,
                                                    0.02,
                                                    f"Fixations in AOI: {n_in} / {n_tot}",
                                                    transform=ax.transAxes,
                                                    ha="right",
                                                    va="bottom",
                                                    fontsize=10,
                                                    color="0.2",
                                                )
                                                ax.grid(alpha=0.25)
                                                ax.set_ylim(-0.05, 1.05)
                                                fig.tight_layout()
                                                fig.savefig(out_path, dpi=180)
                                                plt.close(fig)


                                            def read_image(image_path: Path):
                                                from PIL import Image

                                                img = Image.open(image_path)
                                                arr = np.asarray(img)
                                                # size returns (width, height)
                                                size = (img.size[0], img.size[1])
                                                return arr, size


                                            def plot_static_scanpath(
                                                fix: pd.DataFrame,
                                                out_path: Path,
                                                image_path: Path | None = None,
                                                aoi_avatar: bool = False,
                                                aoi_first2min_seconds: float = FIRST_2_MIN_SEC_DEFAULT,
                                            ):
                                                mpl.rcParams.update({"figure.autolayout": True})

                                                if aoi_avatar:
                                                    fix = filter_first_seconds_by_timestamp(fix, aoi_first2min_seconds)

                                                def savefig_transparent(fig, path: Path):
                                                    fig.patch.set_alpha(0.0)
                                                    fig.savefig(path, dpi=180, transparent=True, bbox_inches="tight", pad_inches=0.05)
                                                    plt.close(fig)

                                                def savefig_with_scene_bg(fig, ax, path: Path, bg_rgb: tuple[int, int, int]):
                                                    # Save without transparency and force the background to match the panorama corner.
                                                    fig.patch.set_alpha(1.0)
                                                    fig.patch.set_facecolor(bg_rgb)
                                                    ax.set_facecolor(bg_rgb)
                                                    fig.savefig(path, dpi=180, transparent=False, bbox_inches="tight", pad_inches=0.0)
                                                    plt.close(fig)

                                                if image_path is not None and not aoi_avatar:
                                                    img_arr, size = read_image(image_path)
                                                    x_pix, y_pix = norm_to_pixels(fix["norm_pos_x"].to_numpy(), fix["norm_pos_y"].to_numpy(), size)

                                                    fig, ax = plt.subplots(figsize=(12, 7))
                                                    ax.set_facecolor("none")
                                                    ax.imshow(img_arr)
                                                    ax.set_axis_off()
                                                    ax.set_title("Scanpath on panorama", pad=10)

                                                    # Scale marker size by duration.
                                                    durations = fix["duration"].to_numpy()
                                                    dur_scaled = (durations - durations.min()) / max(durations.max() - durations.min(), 1e-9)
                                                    sizes = 20 + dur_scaled * 160

                                                    order = np.arange(len(fix))
                                                    cmap = plt.get_cmap("viridis")

                                                    ax.scatter(x_pix, y_pix, s=sizes, c=order, cmap=cmap, alpha=0.85, edgecolors="white", linewidths=0.3)

                                                    # Connecting line in temporal order.
                                                    ax.plot(x_pix, y_pix, color="white", linewidth=1.2, alpha=0.55, zorder=3)
                                                    for i in range(1, len(fix)):
                                                        ax.plot(
                                                            x_pix[i - 1 : i + 1],
                                                            y_pix[i - 1 : i + 1],
                                                            color=cmap(order[i] / max(order.max(), 1)),
                                                            linewidth=2.2,
                                                            alpha=0.65,
                                                            zorder=4,
                                                        )

                                                    savefig_transparent(fig, out_path)
                                                elif image_path is not None and aoi_avatar:
                                                    img_arr, _ = read_image(image_path)
                                                    H, W = img_arr.shape[0], img_arr.shape[1]

                                                    shift_x, shift_y = aoi_calibration_shift(fix)
                                                    in_aoi = avatar_aoi_mask(fix, W, H, shift_x, shift_y)
                                                    px_x, px_y = map_to_panorama_pixels(fix, W=W, H=H, shift_x=shift_x, shift_y=shift_y)

                                                    durations = fix["duration"].to_numpy(dtype=float)
                                                    dur_scaled = (durations - durations.min()) / max(durations.max() - durations.min(), 1e-9)
                                                    sizes = 20 + dur_scaled * 160

                                                    order = np.arange(len(fix))

                                                    fig, ax = plt.subplots(figsize=(12, 7))
                                                    bg = img_arr[0, 0]
                                                    # matplotlib expects colors in 0..1 range
                                                    bg_rgb = tuple(float(x) / 255.0 for x in bg[:3])
                                                    ax.set_facecolor(bg_rgb)
                                                    # Make sure axes fill the entire figure (prevents bottom/side margins).
                                                    fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
                                                    ax.set_position([0, 0, 1, 1])
                                                    ax.imshow(img_arr, extent=[0, W, H, 0], aspect="auto", interpolation="bilinear")
                                                    ax.set_xlim(0, W)
                                                    ax.set_ylim(H, 0)
                                                    ax.set_aspect("auto")
                                                    ax.set_axis_off()
                                                    # Put a short label inside the axes to avoid reserving outer margins.
                                                    ax.text(
                                                        0.01,
                                                        0.99,
                                                        "AOI-aware scanpath on panorama",
                                                        transform=ax.transAxes,
                                                        color="white",
                                                        fontsize=12,
                                                        ha="left",
                                                        va="top",
                                                        bbox=dict(facecolor="black", alpha=0.35, edgecolor="none", boxstyle="round,pad=0.25"),
                                                        zorder=10,
                                                    )

                                                    draw_avatar_aoi_rect(ax=ax, H=H, W=W)

                                                    # Connecting line in temporal order (neutral color).
                                                    ax.plot(px_x, px_y, color="white", linewidth=1.1, alpha=0.45, zorder=2)

                                                    # Draw AOI fixations and elsewhere fixations differently.
                                                    cmap = plt.get_cmap("viridis")
                                                    ax.scatter(
                                                        px_x[~in_aoi],
                                                        px_y[~in_aoi],
                                                        s=sizes[~in_aoi],
                                                        c=order[~in_aoi],
                                                        cmap=cmap,
                                                        alpha=0.75,
                                                        edgecolors="white",
                                                        linewidths=0.25,
                                                        zorder=5,
                                                        label="Elsewhere fixations",
                                                    )
                                                    ax.scatter(
                                                        px_x[in_aoi],
                                                        px_y[in_aoi],
                                                        s=sizes[in_aoi],
                                                        c=order[in_aoi],
                                                        cmap=plt.get_cmap("autumn"),
                                                        alpha=0.95,
                                                        edgecolors="white",
                                                        linewidths=0.25,
                                                        zorder=6,
                                                        label="AOI (avatar) fixations",
                                                    )

                                                    savefig_with_scene_bg(fig=fig, ax=ax, path=out_path, bg_rgb=bg_rgb)
                                                else:
                                                    # Fallback: scanpath in normalized coordinate space.
                                                    x_pix = fix["norm_pos_x"].to_numpy()
                                                    y_pix = fix["norm_pos_y"].to_numpy()

                                                    fig, ax = plt.subplots(figsize=(8, 6))
                                                    ax.set_facecolor("none")
                                                    if aoi_avatar:
                                                        in_aoi = avatar_aoi_mask(fix)
                                                        ax.set_title("AOI-aware scanpath in normalized coordinates")
                                                        ax.scatter(x_pix[~in_aoi], y_pix[~in_aoi], s=20, alpha=0.35, c="tab:blue", label="Elsewhere")
                                                        ax.scatter(x_pix[in_aoi], y_pix[in_aoi], s=20, alpha=0.8, c="tab:orange", label="AOI (avatar)")
                                                        ax.legend(loc="best", frameon=False)
                                                    else:
                                                        ax.set_title("Scanpath in Normalized Coordinates")
                                                        ax.set_xlabel("norm_pos_x")
                                                        ax.set_ylabel("norm_pos_y")
                                                        ax.grid(alpha=0.25)

                                                    durations = fix["duration"].to_numpy(dtype=float)
                                                    dur_scaled = (durations - durations.min()) / max(durations.max() - durations.min(), 1e-9)
                                                    sizes = 20 + dur_scaled * 160

                                                    if not aoi_avatar:
                                                        order = np.arange(len(fix))
                                                        cmap = plt.get_cmap("viridis")
                                                        ax.scatter(x_pix, y_pix, s=sizes, c=order, cmap=cmap, alpha=0.85, edgecolors="white", linewidths=0.3)
                                                        ax.plot(x_pix, y_pix, color="gray", linewidth=1.2, alpha=0.6)
                                                    else:
                                                        ax.scatter(x_pix[in_aoi], y_pix[in_aoi], s=sizes[in_aoi], alpha=0.85, c="tab:orange", edgecolors="white", linewidths=0.25)
                                                        ax.scatter(x_pix[~in_aoi], y_pix[~in_aoi], s=sizes[~in_aoi], alpha=0.55, c="tab:blue", edgecolors="white", linewidths=0.25)

                                                    savefig_transparent(fig, out_path)


                                            def plot_fixation_durations(fix: pd.DataFrame, out_path: Path):
                                                fig, ax = plt.subplots(figsize=(12, 4.8))
                                                durations = fix["duration"].to_numpy()
                                                ax.plot(np.arange(len(durations)), durations, linewidth=1.5, alpha=0.9)
                                                ax.set_title("Fixation Durations by Temporal Order")
                                                ax.set_xlabel("Fixation index (time ordered)")
                                                ax.set_ylabel("Duration (ms)")
                                                ax.grid(alpha=0.25)
                                                fig.savefig(out_path, dpi=180)
                                                plt.close(fig)


                                            def make_replay_gif(
                                                fix: pd.DataFrame,
                                                out_path: Path,
                                                image_path: Path,
                                                max_frames: int,
                                                aoi_avatar: bool = False,
                                                aoi_first2min_seconds: float = FIRST_2_MIN_SEC_DEFAULT,
                                            ):
                                                """
                                                Create a scanpath replay GIF by progressively plotting fixations.
                                                """
                                                from PIL import Image

                                                if aoi_avatar:
                                                    fix = filter_first_seconds_by_timestamp(fix, aoi_first2min_seconds)

                                                    img_arr, _ = read_image(image_path)
                                                    H, W = img_arr.shape[0], img_arr.shape[1]
                                                    shift_x, shift_y = aoi_calibration_shift(fix)
                                                    in_aoi = avatar_aoi_mask(fix, W, H, shift_x, shift_y)
                                                    x_pix, y_pix = map_to_panorama_pixels(fix, W=W, H=H, shift_x=shift_x, shift_y=shift_y)
                                                else:
                                                    img_arr, size = read_image(image_path)
                                                    x_pix, y_pix = norm_to_pixels(fix["norm_pos_x"].to_numpy(), fix["norm_pos_y"].to_numpy(), size)

                                                durations = fix["duration"].to_numpy()
                                                dur_scaled = (durations - durations.min()) / max(durations.max() - durations.min(), 1e-9)
                                                sizes = 20 + dur_scaled * 160

                                                order = np.arange(len(fix))
                                                cmap = plt.get_cmap("viridis")

                                                # Pick frame indices spread across the fixation sequence.
                                                if len(fix) <= 1:
                                                    frame_indices = [0]
                                                else:
                                                    frame_indices = np.linspace(0, len(fix) - 1, num=min(max_frames, len(fix))).astype(int).tolist()

                                                frames = []
                                                for k, idx in enumerate(frame_indices):
                                                    fig, ax = plt.subplots(figsize=(12, 7))
                                                    # Match any leftover margins to the panorama corner color (hides borders).
                                                    corner = np.asarray(img_arr)[0, 0]
                                                    corner_rgb = corner[:3] if corner.shape[0] >= 3 else np.array([0, 0, 0], dtype=float)
                                                    if np.max(corner_rgb) > 1.0:
                                                        corner_rgb = corner_rgb / 255.0
                                                    corner_rgb_tuple = tuple(float(x) for x in corner_rgb)

                                                    fig.patch.set_alpha(1.0)
                                                    fig.patch.set_facecolor(corner_rgb_tuple)
                                                    ax.set_facecolor(corner_rgb_tuple)
                                                    fig.subplots_adjust(left=0, right=1, bottom=0, top=1)  # remove white margins
                                                    ax.set_position([0, 0, 1, 1])
                                                    if aoi_avatar:
                                                        ax.imshow(img_arr, extent=[0, W, H, 0], aspect="auto", interpolation="bilinear")
                                                        ax.set_xlim(0, W)
                                                        ax.set_ylim(H, 0)
                                                        ax.set_aspect("auto")
                                                        ax.set_axis_off()
                                                        draw_avatar_aoi_rect(ax=ax, H=H, W=W)
                                                    else:
                                                        ax.imshow(img_arr)
                                                    ax.set_axis_off()

                                                    # Plot the path up to idx.
                                                    ax.plot(x_pix[: idx + 1], y_pix[: idx + 1], color="white", linewidth=1.2, alpha=0.45, zorder=3)
                                                    for i in range(1, idx + 1):
                                                        ax.plot(
                                                            x_pix[i - 1 : i + 1],
                                                            y_pix[i - 1 : i + 1],
                                                            color=cmap(order[i] / max(order.max(), 1)),
                                                            linewidth=2.2,
                                                            alpha=0.65,
                                                            zorder=4,
                                                        )

                                                    # Plot fixations up to idx.
                                                    if aoi_avatar:
                                                        mask_up_to = in_aoi[: idx + 1]
                                                        ax.scatter(
                                                            x_pix[: idx + 1][~mask_up_to],
                                                            y_pix[: idx + 1][~mask_up_to],
                                                            s=sizes[: idx + 1][~mask_up_to],
                                                            c=order[: idx + 1][~mask_up_to],
                                                            cmap=cmap,
                                                            alpha=0.75,
                                                            edgecolors="white",
                                                            linewidths=0.25,
                                                            zorder=5,
                                                        )
                                                        ax.scatter(
                                                            x_pix[: idx + 1][mask_up_to],
                                                            y_pix[: idx + 1][mask_up_to],
                                                            s=sizes[: idx + 1][mask_up_to],
                                                            c=order[: idx + 1][mask_up_to],
                                                            cmap=plt.get_cmap("autumn"),
                                                            alpha=0.95,
                                                            edgecolors="white",
                                                            linewidths=0.25,
                                                            zorder=6,
                                                        )
                                                    else:
                                                        ax.scatter(
                                                            x_pix[: idx + 1],
                                                            y_pix[: idx + 1],
                                                            s=sizes[: idx + 1],
                                                            c=order[: idx + 1],
                                                            cmap=cmap,
                                                            alpha=0.9,
                                                            edgecolors="white",
                                                            linewidths=0.3,
                                                            zorder=5,
                                                        )

                                                    # Draw a small caption (frame progression).
                                                    ax.text(0.01, 0.99, f"{k+1}/{len(frame_indices)}", transform=ax.transAxes, color="white",
                                                            fontsize=12, ha="left", va="top",
                                                            bbox=dict(facecolor="black", alpha=0.35, edgecolor="none", boxstyle="round,pad=0.25"))

                                                    # Convert figure to image frame.
                                                    fig.canvas.draw()
                                                    # Use RGB buffer to keep GIF look/resolution consistent.
                                                    frame = np.frombuffer(fig.canvas.tostring_rgb(), dtype=np.uint8)
                                                    frame = frame.reshape(fig.canvas.get_width_height()[::-1] + (3,))
                                                    frames.append(frame)
                                                    plt.close(fig)

                                                # Crop away top/bottom/side margins so the panorama fills the whole canvas.
                                                # This keeps the output GIF resolution unchanged (1200x700) while removing
                                                # the visible "background band" around the panorama.
                                                if frames:
                                                    target_h, target_w = frames[0].shape[0], frames[0].shape[1]
                                                    bg = frames[0][0, 0]
                                                    diff = np.max(np.abs(frames[0].astype(int) - np.array(bg).astype(int)), axis=2)
                                                    mask = diff > 3
                                                    ys, xs = np.where(mask)
                                                    if xs.size > 0 and ys.size > 0:
                                                        x0, x1 = int(xs.min()), int(xs.max()) + 1
                                                        y0, y1 = int(ys.min()), int(ys.max()) + 1
                                                        if (x0 > 0 or y0 > 0) and (x1 < target_w or y1 < target_h):
                                                            from PIL import Image

                                                            resized_frames = []
                                                            for fr in frames:
                                                                cropped = fr[y0:y1, x0:x1, :]
                                                                im = Image.fromarray(cropped)
                                                                im = im.resize((target_w, target_h), resample=Image.NEAREST)
                                                                resized_frames.append(np.asarray(im))
                                                            frames = resized_frames

                                                # Save with PIL.
                                                pil_frames = [Image.fromarray(f) for f in frames]
                                                pil_frames[0].save(
                                                    out_path,
                                                    save_all=True,
                                                    append_images=pil_frames[1:],
                                                    duration=90,
                                                    loop=0,
                                                )


                                            def main():
                                                args = parse_args()

                                                fix_path = args.fixations.expanduser().resolve()
                                                if not fix_path.exists():
                                                    raise FileNotFoundError(f"Missing fixations.csv: {fix_path}")

                                                out_dir = args.out_dir.expanduser().resolve()
                                                out_dir.mkdir(parents=True, exist_ok=True)

                                                image_path = None
                                                if args.image is not None:
                                                    image_path = args.image.expanduser().resolve()
                                                    if not image_path.exists():
                                                        raise FileNotFoundError(f"Missing image: {image_path}")

                                                fix = load_fixations(
                                                    fix_path=fix_path,
                                                    conf_threshold=args.confidence_threshold,
                                                    max_fixations=args.max_fixations,
                                                )

                                                # Static overlay (optionally AOI-aware for avatar AOI)
                                                plot_static_scanpath(
                                                    fix=fix,
                                                    out_path=out_dir / "scanpath_static.png",
                                                    image_path=image_path,
                                                    aoi_avatar=args.aoi_avatar,
                                                    aoi_first2min_seconds=args.aoi_first2min_seconds,
                                                )

                                                # Duration summary
                                                plot_fixation_durations(fix=fix, out_path=out_dir / "fixation_durations.png")

                                                # AOI sequence visualization (avatar AOI) if requested.
                                                if args.aoi_avatar:
                                                    fix_aoi = filter_first_seconds_by_timestamp(fix, args.aoi_first2min_seconds)
                                                    if image_path is not None:
                                                        img_arr, _ = read_image(image_path)
                                                        H_img, W_img = img_arr.shape[0], img_arr.shape[1]
                                                        sx, sy = aoi_calibration_shift(fix_aoi)
                                                        in_aoi = avatar_aoi_mask(fix_aoi, W_img, H_img, sx, sy)
                                                    else:
                                                        in_aoi = avatar_aoi_mask(fix_aoi)
                                                    plot_aoi_sequence(in_aoi=in_aoi, out_path=out_dir / "aoi_sequence.png")

                                                # Optional GIF replay
                                                if args.gif and image_path is not None:
                                                    try:
                                                        make_replay_gif(
                                                            fix=fix,
                                                            out_path=out_dir / "scanpath_replay.gif",
                                                            image_path=image_path,
                                                            max_frames=args.gif_frames,
                                                            aoi_avatar=args.aoi_avatar,
                                                            aoi_first2min_seconds=args.aoi_first2min_seconds,
                                                        )
                                                        print("Saved scanpath_replay.gif")
                                                    except Exception as e:
                                                        print(f"GIF creation failed (continuing without GIF). Reason: {e}")
                                                elif args.gif and image_path is None:
                                                    print("GIF replay requested but no --image was provided; skipping GIF.")

                                                print(f"Fixations used: {len(fix):,}")
                                                print(f"Outputs saved to: {out_dir}")


                                            if __name__ == "__main__":
                                                main()
                                        
                                    
After you run the script with --image (and optionally --aoi-avatar and --gif), you will obtain the scanpath outputs listed in Step 4. Example figures below use the same image paths as on this site (you can replace them with your generated plots when you upload assets).

Conclusions

  • A scanpath turns fixation events into an intuitive sequence of attention.
  • Fixation duration is a useful visual cue: longer fixations often indicate higher processing effort.
  • The animated replay help you connect timing (sequence) with spatial attention (where they looked).