vff — the signal in the noise
Research

Dijkstra Beats RAPTOR for Transit Routing with Buffer Times

Denys Katkalo, Andrii Rohovyi, Toby Walsh

Researchers revisit classical Dijkstra-based algorithms for public transit routing and demonstrate that Time-Dependent Dijkstra (TD-Dijkstra) outperforms the state-of-the-art RAPTOR-based approach (MR) for unlimited transfers without preprocessing. They identify a critical flaw in existing TD-Dijkstra implementations: preprocessing that filters dominated connections is unsound when stops have buffer times, since it cannot distinguish between seated passengers continuing without delay and transferring passengers who must wait. The authors introduce Transfer Aware Dijkstra (TAD), which scans entire trip sequences rather than individual edges to correctly handle buffer times while maintaining over 2x speed improvements on London and Switzerland networks.
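To make the buffer-time subtlety concrete, here is a minimal sketch of a time-dependent earliest-arrival search whose transfer rule distinguishes seated passengers from transferring ones. This is illustrative only, not the paper's TAD (which scans whole trip sequences): connections are assumed to be `(dep_stop, dep_time, arr_stop, arr_time, trip_id)` tuples, and a single global `buffer_time` stands in for per-stop buffers.

```python
import heapq

def earliest_arrival(connections, buffer_time, source, target, start_time):
    """Earliest arrival at `target`, where changing trips at a stop costs
    `buffer_time`, but staying seated on the same trip does not.
    A minimal sketch, not the paper's TAD implementation."""
    # Group connections by departure stop for scanning.
    by_stop = {}
    for c in connections:
        by_stop.setdefault(c[0], []).append(c)

    # Search states are (stop, trip arrived on); None = not yet on a trip.
    best = {(source, None): start_time}
    pq = [(start_time, source, None)]
    while pq:
        t, stop, trip = heapq.heappop(pq)
        if stop == target:
            return t  # first settled state at the target is optimal
        if t > best.get((stop, trip), float("inf")):
            continue  # stale queue entry
        for _, dep, arr_stop, arr, c_trip in by_stop.get(stop, ()):
            # Staying on the same trip (or boarding for the first time)
            # needs no buffer; changing trips requires waiting buffer_time.
            ready = t if (c_trip == trip or trip is None) else t + buffer_time
            if dep >= ready and arr < best.get((arr_stop, c_trip), float("inf")):
                best[(arr_stop, c_trip)] = arr
                heapq.heappush(pq, (arr, arr_stop, c_trip))
    return None  # target unreachable
```

The sketch also shows why dominance filtering is unsound here: if trip T1 serves B→C departing at 10 and arriving at 20, while trip T2 serves B→C departing at 12 and arriving at 15, the T2 leg dominates on paper. But with a 3-minute buffer, a passenger who reached B seated on T1 can only continue on T1 — so a preprocessing step that discards the "dominated" T1 leg returns wrong answers.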

TL;DR

  • TD-Dijkstra outperforms RAPTOR-based MR algorithm for public transit routing with unlimited transfers, contrary to recent algorithmic evolution in the field
  • Existing TD-Dijkstra preprocessing filters are mathematically unsound for networks with buffer times at stops, creating correctness issues
  • Transfer Aware Dijkstra (TAD) fixes the buffer time problem by processing full trip sequences instead of individual edges while preserving performance gains
  • Experiments show a greater-than-2x speedup over MR while still returning optimal journeys on real networks, both with and without buffer constraints

Why it matters

This work challenges the assumption that newer RAPTOR-based algorithms are categorically superior to classical approaches for transit routing, revealing that systematic re-examination of foundational algorithms can yield both correctness improvements and performance gains. For the broader AI and optimization community, it demonstrates the importance of rigorous algorithmic analysis when assumptions about real-world constraints like buffer times are embedded in preprocessing steps.

Business relevance

Transit routing powers navigation apps, trip planning services, and logistics optimization for millions of users daily. A 2x speedup with correct handling of real-world buffer constraints directly improves user experience and reduces infrastructure costs for mapping platforms, transit agencies, and mobility services that rely on fast, accurate routing.

Key implications

  • Classical algorithms deserve systematic re-evaluation against newer approaches rather than assumption-based dismissal, potentially unlocking performance and correctness gains in other domains
  • Real-world constraints like buffer times must be explicitly modeled in algorithm design and preprocessing, not assumed away, to avoid subtle correctness bugs in production systems
  • RAPTOR-based methods may not be optimal for all transit routing scenarios, particularly those with complex transfer rules or buffer requirements common in European networks

What to watch

Monitor adoption of TAD in commercial routing engines and whether other transit networks with buffer constraints report similar speedups and correctness improvements. Watch for whether this work prompts broader re-examination of preprocessing assumptions in other graph algorithms used in logistics, navigation, and network optimization.


Related stories

AWS Launches G7e GPU Instances for Cheaper Large Model Inference
Trending · Model Release

AWS has launched G7e instances on Amazon SageMaker AI, powered by NVIDIA RTX PRO 6000 Blackwell GPUs with 96 GB of GDDR7 memory per GPU. The instances deliver up to 2.3x inference performance compared to previous-generation G6e instances and support configurations from 1 to 8 GPUs, enabling deployment of large language models up to 300B parameters on the largest 8-GPU node. This represents a significant upgrade in memory bandwidth, networking throughput, and model capacity for generative AI inference workloads.

1 day ago · AWS Machine Learning Blog
Anthropic Launches Claude Design for Non-Designers
Model Release

Anthropic has launched Claude Design, a new product aimed at helping non-designers like founders and product managers create visuals quickly to communicate their ideas. The tool addresses a gap for early-stage teams and individuals who need to share concepts visually but lack design expertise or resources. Claude Design integrates with Anthropic's Claude AI platform, leveraging its capabilities to streamline the visual creation process. The launch reflects growing demand for AI-powered design tools that lower barriers to entry for non-technical users.

2 days ago · TechCrunch AI
Google Splits TPUs Into Training and Inference Chips

Google is splitting its eighth-generation tensor processing units into separate chips optimized for AI training and inference, a shift the company says reflects the rise of AI agents and their distinct computational needs. The training chip delivers 2.8 times the performance of its predecessor at the same price, while the inference processor (TPU 8i) achieves 80% better performance and includes triple the SRAM of the prior generation. Both chips will launch later this year as Google continues its effort to compete with Nvidia in custom AI silicon, though the company is not directly benchmarking against Nvidia's offerings.

about 5 hours ago · Direct
Phononic Eyes $1.5B+ Valuation in AI Data Center Cooling Play

Phononic, a 17-year-old Durham, North Carolina semiconductor company that makes cooling components for AI data center servers, is in talks with potential buyers at a valuation of at least $1.5 billion, with some buyers expressing interest above $2 billion. The company has engaged investment bank Lazard to evaluate its options since early 2026. This valuation would more than double its last private funding round, reflecting broader investor appetite for industrial suppliers tied to AI infrastructure demand. Phononic may also choose to raise additional capital instead of pursuing a sale.

1 day ago · The Information