Will Gemini-1.5-Pro-Exp-0801 Score Above 1165 in Scale AI's Coding Evaluation
Ṁ60 · Closes Oct 1
28% chance
Context:
Gemini-1.5-Pro-Exp-0801 is currently the leading model on the LMSYS Arena leaderboard (https://arena.lmsys.org/).
This market is about its potential evaluation by Scale AI (https://scale.com/leaderboard).
Resolution Criteria:
The market resolves as "Yes" if the model is evaluated by Scale AI and it receives a score strictly greater than 1165 in the Coding category.
The market resolves as "No" if the model is evaluated by Scale AI and it receives a score of 1165 or less in the Coding category.
The market resolves as "N/A" if either:
Scale AI doesn't evaluate the model and add it to the leaderboard before October 1, 2024, or
the evaluation methodology changes before the model is evaluated.
This question is managed and resolved by Manifold.
Related questions
Will Gemini-1.5-Pro-Exp-0801 Score Above 1165 in Scale AI's Math Evaluation
48% chance
Will Gemini-1.5-Pro-Exp-0801 Score Above 90.35 (current #1) in Scale AI's Instruction Following Evaluation
53% chance
Will Gemini-1.5-Pro-Exp-0801 Score Lower Than 8 (current best) in Scale AI's Adversarial Robustness
56% chance
Will Gemini achieve a higher score on the SAT compared to GPT-4?
70% chance
Before February 2025, will a Gemini model exceed Claude 3.5 Sonnet 10/22's Global Average score on Simple Bench?
85% chance
Will GPT-5 score higher than 1350 on the Lmsys Arena Leaderboard
77% chance
Will GPT-4.5 score at least 100 in an IQ test?
56% chance
Will the GPT4+code-interpreter+search score > 1350 on Lmsys Arena Leaderboard?
49% chance
GPT-5: 120+ on AMC
45% chance
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams?
72% chance