Did Gemini 1.5 Pro achieve long-context reasoning through retrieval?
50% chance
There is no way an attention network is that good.
1-hour video understanding
Needle in a Haystack at 99% accuracy (see the sketch after this list)
Learning a language that no one speaks by reading a grammar book "in context"
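For reference, a needle-in-a-haystack test plants a short "needle" fact at some depth inside long filler text and asks the model to recall it, averaging accuracy over many context lengths and depths. A minimal sketch of one trial, assuming a hypothetical query_model client (this is not Google's actual harness):

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under test."""
    raise NotImplementedError("plug in a real model client here")


def needle_in_haystack_trial(filler: str, depth: float) -> bool:
    """Plant a known 'needle' fact at a relative depth (0.0-1.0) inside
    long filler text, then check whether the model retrieves it."""
    needle = " The magic number is 48613. "
    pos = int(len(filler) * depth)
    prompt = filler[:pos] + needle + filler[pos:] + "\n\nWhat is the magic number?"
    return "48613" in query_model(prompt)


# The reported score averages trials over many (context length, depth)
# pairs; 99% means the needle was recovered in roughly 99% of trials.
```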
Resolves YES if we later find out that the long-context ability was enhanced by agents/retrieval/search/etc., i.e. that it was not achieved merely by extending the attention mechanism (the contrast is sketched below).
Resolves NA if I can't find out by EOY 2024.
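To make the YES/NO boundary concrete, here is a minimal sketch of the two designs the question distinguishes. Every function name here is a hypothetical illustration, not a Gemini internal; the embedding and generation steps are deliberately toy stand-ins.

```python
def embed(text: str) -> set[str]:
    """Toy 'embedding': a bag of lowercase words (real systems use vectors)."""
    return set(text.lower().split())


def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard overlap between two word bags."""
    return len(a & b) / max(1, len(a | b))


def model_generate(query: str, context: str) -> str:
    """Hypothetical stand-in for the actual model call."""
    raise NotImplementedError("plug in a real model client here")


def answer_with_retrieval(query: str, document: str, k: int = 5) -> str:
    """Would resolve YES: an external search step narrows the context
    before the model ever attends to it."""
    chunks = [document[i:i + 1000] for i in range(0, len(document), 1000)]
    q = embed(query)
    top = sorted(chunks, key=lambda c: similarity(q, embed(c)), reverse=True)[:k]
    return model_generate(query, context="\n".join(top))


def answer_with_long_attention(query: str, document: str) -> str:
    """Would resolve NO: the full document goes straight through the
    attention mechanism, with no retrieval, agents, or search in between."""
    return model_generate(query, context=document)
```

If the production system behind Gemini 1.5 Pro works like answer_with_retrieval, the market resolves YES; if it is effectively answer_with_long_attention, it resolves NO.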
This question is managed and resolved by Manifold.
Related questions
Is the model Gemini Experimental 1206 an early version of what will be Gemini 2 Pro?
82% chance
Will Gemini 1.5 Pro seem to be as good as Gemini 1.0 Ultra for common use cases? [Poll]
70% chance
Will GPT-5 have function-calling ability to some o1-like reasoning model, upon release?
35% chance
Will Gemini outperform GPT-4 at mathematical theorem-proving?
62% chance
Will Gemini achieve a higher score on the SAT compared to GPT-4?
70% chance
What will be true of Gemini 2?
Will Google Gemini do as well as GPT-4 on Sparks of AGI tasks?
76% chance
Will Gemini-1.5-Pro-Exp-0801 Score Above 90.35 (current #1) in Scale AI's Instruction Following Evaluation?
53% chance
Will Gemini exceed the performance of GPT-4 on the 2022 AMC 10 and AMC 12 exams?
72% chance
Will Gemini-1.5-Pro-Exp-0801 Score Above 1165 in Scale AI's Math Evaluation?
48% chance