Meta
Llama 4 Scout
Meta's open-weight model. Weights are free to download and run locally.
6 reviews
Community Ratings
Reviews (6)
Benchmark gaming controversy. LM Arena scores from a version nobody can use. Verbose, surface-level outputs.
Coding is catastrophic: 32.8% on LiveCodeBench, below Llama 3.3 70B. The advertised context window is misleading — accuracy drops to 15.6% at 128K tokens.