Official benchmark from Vercel measuring AI model performance on Next.js code generation and migration tasks. Evaluates success rate, execution time, token usage, and quality improvements.

Results

Last updated: December 2025

Methodology

Task categories:
  • Code Generation - Creating Next.js components, pages, and API routes
  • Migration - Upgrading from the Pages Router to the App Router
  • Best Practices - Following Next.js patterns and conventions
  • TypeScript - Proper type safety and inference
Scoring metrics:
  • Success Rate - Percentage of tasks completed correctly
  • Execution Time - Wall-clock time to complete tasks
  • Token Usage - Efficiency of model responses
  • Quality Score - Code quality and adherence to best practices
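The page lists the four metrics but does not publish how they combine into an overall ranking. As a hypothetical sketch only, the following TypeScript shows one plausible way such metrics could be aggregated: correctness-oriented metrics scored directly, and "lower is better" metrics (time, tokens) normalized against an assumed budget. The interface fields, budgets, and weights are all illustrative assumptions, not the benchmark's actual formula.

```typescript
// Hypothetical aggregation of the four published metrics.
// Field names, budgets, and weights are assumptions for illustration.
interface EvalResult {
  successRate: number;    // fraction of tasks completed correctly, 0..1
  executionTimeS: number; // wall-clock seconds per task
  tokensUsed: number;     // total tokens in the model's responses
  qualityScore: number;   // code-quality rating, 0..1
}

// Normalize a "lower is better" metric against an assumed budget,
// clamping the result into [0, 1].
function efficiency(value: number, budget: number): number {
  return Math.max(0, Math.min(1, 1 - value / budget));
}

function compositeScore(r: EvalResult): number {
  const timeEff = efficiency(r.executionTimeS, 300); // assumed 5-minute budget
  const tokenEff = efficiency(r.tokensUsed, 50_000); // assumed token budget
  // Assumed weights: correctness dominates, efficiency contributes less.
  return (
    0.5 * r.successRate +
    0.2 * r.qualityScore +
    0.15 * timeEff +
    0.15 * tokenEff
  );
}

const example: EvalResult = {
  successRate: 0.8,
  executionTimeS: 150,
  tokensUsed: 25_000,
  qualityScore: 0.9,
};
console.log(compositeScore(example).toFixed(2)); // 0.73 under these assumed weights
```

A weighted sum like this is only one design choice; the real benchmark may report the metrics separately rather than collapsing them into a single number.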

Next.js Evals: view live results and methodology on the project site.