Will GPT-4 be trained (roughly) compute-optimally using the best-known scaling laws at the time?
30% chance

This question resolves YES if GPT-4 is trained on enough data to roughly match the prescriptions of the best scaling laws known at the time GPT-4 is trained. Currently, this would mean following the Chinchilla scaling laws. By roughly, I mean that it can be off by 20%. That is, if GPT-4 is 100B parameters, for which the (currently known) optimal scaling laws prescribe roughly 2T tokens, GPT-4 would need to be trained on ~1.6T to ~2.4T tokens for this question to resolve positively.
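For concreteness, here is a minimal sketch of the resolution arithmetic in Python, assuming the commonly cited Chinchilla rule of thumb of roughly 20 training tokens per parameter; the helper names and the exact rule-of-thumb constant are my own illustration, not part of the market:

```python
# Rough Chinchilla prescription: ~20 training tokens per parameter (rule of thumb).
def chinchilla_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
    return n_params * tokens_per_param

# Token range that would count as "roughly" optimal under the market's 20% tolerance.
def resolution_band(n_params: float, tolerance: float = 0.20) -> tuple[float, float]:
    d_opt = chinchilla_tokens(n_params)
    return d_opt * (1 - tolerance), d_opt * (1 + tolerance)

low, high = resolution_band(100e9)  # the 100B-parameter example above
print(f"prescribed ~{chinchilla_tokens(100e9) / 1e12:.1f}T tokens; "
      f"acceptable band ~{low / 1e12:.1f}T to ~{high / 1e12:.1f}T")
```

For a 100B-parameter model this prints a prescription of ~2.0T tokens and an acceptable band of ~1.6T to ~2.4T, matching the example in the description.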


lol, GPT-4 just used better scaling laws than Chinchilla. And because it's still the most powerful model a year after it was trained, I'm guessing that it was trained with the best-known scaling laws, which were only known inside OpenAI!

If GPT-4 turns out to be a MoE, would this question resolve according to the parameter count of each expert rather than all experts combined?

@Khoja Presumably it would resolve according to MoE scaling laws.

predicted YES

@osmarks Yeah, exactly.

Will this market be extended until the answer is known? I have a suspicion they'll be publishing more details next year, including the parameter count that is relevant for this market.

predicted YES

@Mira Yes

In code we trust, algorithms reign
Machine learns with minimal pain
With GPT-4 we'll surely see
The pinnacle of AI mastery

I feel like the scaling charts in the paper are basically confirmation here. Still, waiting for official confirmation.

If GPT-4 is intended to be used a lot, then the majority of its cost will be in run-time, not train-time. Dunno.

predicted NO

To expand on my comment: we can see from https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ that smaller models keep improving when trained longer. Given that inference costs are high for OpenAI, it makes sense for them to train to minimize inference + training costs rather than training costs only, which means a smaller-than-Chinchilla-optimal model is best.
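A rough sketch of that tradeoff (my own illustration, not anything from OpenAI): using the parametric Chinchilla loss fit L(N, D) = E + A/N^alpha + B/D^beta with constants approximately as fitted in the Chinchilla paper, approximating training cost as 6*N*D FLOPs and inference cost as 2*N FLOPs per token, and picking an arbitrary target loss and arbitrary inference volumes, the cheapest model size shrinks as the expected serving load grows:

```python
# Constants approximately as fitted in the Chinchilla paper; the target loss and
# inference volumes below are illustrative assumptions, not OpenAI numbers.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def tokens_for_loss(n_params: float, target_loss: float) -> float:
    """Training tokens D needed for an N-parameter model to reach target_loss."""
    gap = target_loss - E - A / n_params**alpha
    if gap <= 0:
        return float("inf")  # model too small to ever reach the target loss
    return (B / gap) ** (1 / beta)

def total_flops(n_params: float, target_loss: float, inference_tokens: float) -> float:
    """~6*N*D FLOPs for training plus ~2*N FLOPs per token served at inference."""
    d_train = tokens_for_loss(n_params, target_loss)
    return 6 * n_params * d_train + 2 * n_params * inference_tokens

target = 2.0                              # arbitrary target loss
sizes = [1e9 * k for k in range(1, 201)]  # candidate sizes: 1B .. 200B parameters
for inf_tokens in (0.0, 1e12, 1e14):      # no, moderate, heavy serving load
    best = min(sizes, key=lambda n: total_flops(n, target, inf_tokens))
    print(f"inference tokens {inf_tokens:.0e}: cheapest model ~{best / 1e9:.0f}B params")
```

With no inference load the minimum sits near the training-compute-optimal size; as the assumed serving load grows, it moves to a smaller model trained on more tokens, which is the intuition behind over-training relative to Chinchilla.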

predicted YES

How does this resolve if GPT-4 training does better per FLOP than Chinchilla scaling?

predicted YES

ping :)

predicted NO

@Lauro It doesn't matter whether it does better or worse per FLOP than Chinchilla scaling, as long as it is trained roughly compute-optimally according to the scaling laws known at the time. If a much-better-than-Chinchilla scaling law is discovered, then it could very well be that GPT-4 is trained more compute-optimally than Chinchilla prescribes yet doesn't abide by the known optimal scaling laws.

predicted YES

@BionicD0LPH1N got it!

Does "known" mean "publicly known" here?

If a better scaling law is discovered by OpenAI and used to train GPT-4, does that count as YES (because that's the new best-known law) or NO (because the scaling is better than the best publicly known at the time of training)?

predicted YES

Basically I'm interested in the interaction with this market https://manifold.markets/Lauro/will-gpt4-improve-on-the-chinchilla

predicted YES

@BionicD0LPH1N any ruling on this?

Has there recently been new public info? If so I’d tip for it!

@BionicD0LPH1N there is this article https://www.datacamp.com/blog/what-we-know-gpt4

not necessarily 100% reliable
