To provide the most transparent, fact-checked, and accessible measurement of AI capabilities—so the public, investors, policymakers, and developers can make informed decisions based on data, not hype.
AI is evolving faster than most people can track. Every week brings new model releases, benchmark claims, and capability announcements. Separating signal from noise has become a full-time job.
Training Run exists to solve this problem. We aggregate performance data from the most respected evaluation platforms, apply a transparent scoring methodology, and deliver a clear, weekly snapshot of where AI models actually stand.
We're not affiliated with any AI company. We don't have a horse in this race. We just want to give you the clearest picture possible.
Every score we publish comes with full methodology documentation and source citations. No black boxes.
We only use data from verifiable, peer-reviewed, or publicly auditable sources. We cite everything.
We're not funded by AI companies. Our only incentive is accuracy and usefulness to our audience.
We focus on capabilities that matter in the real world—not just benchmarks that make for good press releases.
We're clear about what we don't measure: TRS is not a prediction of AGI timelines or future capabilities.
Training Run is also a weekly video series that breaks down the latest in AI performance. Think of it as your weekly conditioning—keeping you up to speed on what these models can actually do, delivered in a format that's genuinely watchable.
New episodes drop weekly, covering the latest scores, notable movements, and what they mean for the AI landscape.
Questions about our methodology? Suggestions for improvement? Corrections or feedback? We want to hear from you.
hello@trainingrun.ai