Project comparison
Compare adoption, momentum, maintenance health, and project basics before choosing which tool to evaluate deeper.
Prompt Master: a Claude skill that writes accurate prompts for any AI tool, with zero tokens or credits wasted and full context and memory retention.
vLLM: a high-throughput, memory-efficient inference and serving engine for LLMs.
vLLM has the larger GitHub footprint with 79.7K stars.
vLLM is also growing faster, adding +606 stars this week.
vLLM has the stronger health score at 93/100.
| Signal | Prompt Master | vLLM |
|---|---|---|
| GitHub stars | 7.4K | 79.7K |
| Weekly growth | 0 | +606 |
| Health score | 42 | 93 |
| Contributors | 5 | 2.6K |
| Commits per week | 3.8 | 208.8 |
| Open issues | 13 | 4.9K |
| Language | N/A | Python |
| License | MIT | Apache-2.0 |
| Last commit | 8d ago | 15h ago |
| Last release | N/A | v0.20.2 |