Benchmark Dashboard

SGLang & vLLM · Real-time Community Activity Comparison

Updated: 2026-03-13
Historical Data (API limited) · 2026-01-11 to 2026-03-13
SGLang Stars: 23,539
SGLang Forks: 4,478
vLLM Stars: 70,613
vLLM Forks: 13,526
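The star and fork counts above come from GitHub's public REST API; its unauthenticated rate limit of 60 requests per hour is the likely reason for the "API limited" label. A minimal sketch of the lookup, assuming the repositories are sgl-project/sglang and vllm-project/vllm and using the requests library:

```python
# Sketch: fetch the headline star/fork counts from the GitHub REST API.
import requests

REPOS = {"SGLang": "sgl-project/sglang", "vLLM": "vllm-project/vllm"}

for name, repo in REPOS.items():
    # GET /repos/{owner}/{repo} returns stargazers_count and forks_count.
    data = requests.get(f"https://api.github.com/repos/{repo}", timeout=10).json()
    print(f"{name} Stars: {data['stargazers_count']:,}  {name} Forks: {data['forks_count']:,}")
```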
SGLang · by RadixArk
Latest Version: v0.5.8

vLLM · by Inferact
Latest Version: v0.14.1

GitHub Stars Trend Comparison
[Chart: weekly GitHub star counts, Jan 18 to Feb 15; y-axis 0 to 80,000; series: SGLang, vLLM]
Weekly Growth Rate
Star growth percentage week over week
[Chart: week-over-week star growth, Jan 18 to Feb 15; y-axis 0% to 2.4%; series: SGLang, vLLM]
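The growth rate plotted here is the percentage change in the cumulative star count between consecutive weekly snapshots; a minimal sketch of the computation (the sample counts are illustrative, not dashboard data):

```python
# Sketch: week-over-week growth from weekly cumulative star snapshots.
def weekly_growth(stars: list[int]) -> list[float]:
    """Percent change between each pair of consecutive weekly counts."""
    return [(curr - prev) / prev * 100 for prev, curr in zip(stars, stars[1:])]

# Illustrative snapshots, one per week:
print(weekly_growth([68000, 68900, 69500, 70100, 70613]))
# -> roughly [1.32, 0.87, 0.86, 0.73]
```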
Contributor Activity
Active contributors per week
[Chart: active contributors per week, Jan 18 to Feb 15; y-axis 0 to 160; series: SGLang, vLLM]
Content Frequency
Blogs, videos, media per week
[Chart: content items per week, Jan 18 to Feb 15; y-axis 0 to 8; series: SGLang, vLLM]

Key Metrics (Jan 11 to Mar 13, 2026)


GitHub Stars · SGLang: ... · vLLM: ...
Forks · SGLang: ... · vLLM: ...
Active PRs · SGLang: ... · vLLM: ...
Merged PRs · SGLang: ... · vLLM: ...
Open PRs · SGLang: ... · vLLM: ...
Contributors · SGLang: ... · vLLM: ...
Commits · SGLang: ... · vLLM: ...
Active Issues · SGLang: ... · vLLM: ...
Closed Issues · SGLang: ... · vLLM: ...
New Issues · SGLang: ... · vLLM: ...
PyPI Daily · SGLang: ... · vLLM: ...
PyPI Weekly · SGLang: ... · vLLM: ...
PyPI Monthly · SGLang: ... · vLLM: ...
Docker Pulls · SGLang: ... · vLLM: ...
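The PyPI and Docker rows are the only metrics above that do not come from GitHub. A sketch of one way to fetch them, assuming the packages are published as sglang and vllm on PyPI (queried via the pypistats.org API) and the images as lmsysorg/sglang and vllm/vllm-openai on Docker Hub; the exact package and image names are assumptions:

```python
# Sketch: PyPI download counts and Docker Hub pull counts.
import requests

# pypistats.org exposes last_day / last_week / last_month download counts.
for pkg in ("sglang", "vllm"):
    data = requests.get(
        f"https://pypistats.org/api/packages/{pkg}/recent", timeout=10
    ).json()["data"]
    print(pkg, data["last_day"], data["last_week"], data["last_month"])

# Docker Hub's repository endpoint includes a cumulative pull_count.
for image in ("lmsysorg/sglang", "vllm/vllm-openai"):
    repo = requests.get(
        f"https://hub.docker.com/v2/repositories/{image}", timeout=10
    ).json()
    print(image, repo["pull_count"])
```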

Market Events

media · Comparison Article
2026-02-14

Guide to Local LLMs in 2026

Comparison article positioning vLLM as an enterprise-grade option

Source: SitePoint · Audience: Developers, Tech Enthusiasts
blog · Technical Deep-Dive
2026-02-13

DeepSeek-V3.2 on GB300: Performance Breakthrough

8-20x performance improvement on NVIDIA GB300 GPUs, in a post co-authored with DaoCloud

Source: vLLM Official Blog · Audience: Enterprise Architects, ML Engineers
video · Educational Video
2026-02-12

vLLM Office Hours #43 - Triton Backend Deep Dive

Technical session covering Triton backend implementation

Source: Red Hat YouTube · Audience: Developers, ML Engineers
blog · Thought Leadership
2026-02-12

Why I'm Joining the PyTorch Foundation

Institutional endorsement: "vLLM has become the inference engine of choice for the industry"

Source: LF AI & Data Foundation · Audience: Enterprise Decision-Makers, Developers
media · Industry News
2026-02-12

AI inference costs dropped up to 10x on Nvidia's Blackwell

Mainstream tech media coverage mentioning vLLM's role in Blackwell optimization

Source: VentureBeat · Audience: Business Leaders, Investors
video · Educational Video
2026-02-11

GLM-5 Office Hours

Tutorial on deploying GLM-5 with SGLang on Modal

Source: LMSYS YouTube · Audience: Developers, ML Engineers
blog · Technical Deep-Dive
2026-02-10

Mini-SGLang Released

5000-line simplified tutorial codebase for learning SGLang internals

Source: LMSYS Official Blog · Audience: Developers, ML Engineers
blog · Deployment Guide
2026-02-09

How to Deploy vLLM on Kubernetes

Step-by-step Kubernetes deployment guide

Source: OneUptime Blog · Audience: DevOps Engineers, Platform Teams
blog · Tutorial
2026-02-08

SGLang Convert Command Deep Dive

Unlocking large language models with the SGLang convert command

Source: Oreate AI Blog · Audience: Developers
blog · Tutorial
2026-02-08

How to Run LLM Inference with vLLM in Docker

Comprehensive Docker tutorial covering setup and production configuration

Source: OneUptime Blog · Audience: DevOps Engineers, Developers
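A vLLM container like the one that tutorial covers serves an OpenAI-compatible API, by default on port 8000. A minimal client sketch; the host, port, and model name here are assumptions rather than details from the tutorial:

```python
# Sketch: query a Dockerized vLLM server over its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # whatever model the server loaded
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```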
blog · Overview/Introduction
2026-02-08

What is vLLM? Everything You Should Know

Comprehensive overview optimized for newcomers

Source: F22 Labs Blog · Audience: Newcomers, Developers
announcement
2026-02-03

Driving vLLM WideEP on Blackwell (Part I)

26.2K prefill TPGS and 10.1K decode TPGS on GB200 for DeepSeek-style MoE models

partnership
2026-02-02

AMD Developer Cloud Tutorial

Step-by-step tutorial for running vLLM on AMD Instinct MI300X GPUs

announcement
2026-02-02

CVE-2026-22778 Security Vulnerability

Remote code execution vulnerability affecting vLLM versions from 0.8.3 up to, but not including, 0.14.1
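Going by the version range stated in this entry (the advisory text itself is not reproduced here), a quick local check of whether an install falls in the affected window:

```python
# Sketch: test the installed vLLM against the range stated above,
# i.e. 0.8.3 <= version < 0.14.1.
from importlib.metadata import version
from packaging.version import Version

v = Version(version("vllm"))
if Version("0.8.3") <= v < Version("0.14.1"):
    print(f"vLLM {v} is in the stated range; upgrade to >= 0.14.1")
else:
    print(f"vLLM {v} is outside the stated range")
```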

announcement
2026-02-01

GPT-OSS Performance Optimizations on NVIDIA Blackwell

38% higher maximum throughput and a 13% improvement in minimum latency for gpt-oss-120b

announcement
2026-01-28

Inferact Funding Coverage Continues

Continued media coverage from TechCrunch, Bloomberg, and VentureBeat

release
2026-01-23

SGLang v0.5.8 Released

New release with diffusion model improvements

funding
2026-01-22

vLLM launches Inferact

$150M seed round at an $800M valuation, co-led by a16z and Lightspeed

funding
2026-01-21

SGLang spins out as RadixArk

Accel-led funding round at a $400M valuation

partnership
2026-01-21

ROCm First-Class Platform

AMD ROCm becomes a first-class platform in the vLLM ecosystem

release
2026-01-14

vLLM v0.14.0 Released

Major release with T4/2080Ti support for 32B-AWQ models
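For context on that release note: AWQ checkpoints load through vLLM's standard offline entry point. A minimal sketch, with the model name and settings as illustrative assumptions rather than details from the release:

```python
# Sketch: load an AWQ-quantized model with vLLM's offline API.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-32B-Instruct-AWQ",  # any AWQ checkpoint; illustrative
    quantization="awq",
    dtype="half",  # pre-Ampere GPUs such as the T4 lack bfloat16 support
)
out = llm.generate(["Hello!"], SamplingParams(max_tokens=32))
print(out[0].outputs[0].text)
```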

announcement
2026-01-14

SGLang NVIDIA Collaboration Roadmap

Q1 2026 roadmap announced with kernel optimizations