This is based on the inaugural longbets.org bet between Ray Kurzweil (YES) and Mitch Kapor (NO). It's a much more stringent Turing test than just "person on the street chats informally with a bot and can't tell it from a human". In fact, it's carefully constructed to be a proxy for AGI. Experts who know all the bot's weaknesses get to grill it for hours. Kurzweil and Kapor agree that LLMs as of 2023 don't and can't pass this Turing test.
Personally I think Kapor will win and Kurzweil will lose -- that a computer will not pass this version of the Turing test this decade.
((Bayesian) Update: But I admit the probability has jumped up recently! I created this Manifold market almost a year before ChatGPT launched.)
Related Markets
Metaculus's version and Manifold mirror of Metaculus's version
Manifold numerical market for a full probability distribution on the year AGI appears
(Also I had a real-money version on biatob.com for anyone confident that Kurzweil's side has a good chance, but the link keeps breaking)
My subjective opinion is that this may never happen, because experts know very well the weaknesses of the probabilistic autoregressive models we are using now.
@mathvc Yep. Kind of like how superhuman Go was achieved almost a decade ago. But if you train an adversarial model specifically against a Go-playing AI, then humans can still beat it.
This may be a general feature of high-dimensional spaces: if the AI has to commit to a strategy in advance and humans are allowed to train specifically against that strategy, humans can always win.
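A toy illustration of that commit-in-advance point, using rock-paper-scissors rather than Go (the actual Go results involved reinforcement-learned adversarial policies; everything here is a simplified stand-in with made-up numbers):

```python
# Any fixed strategy that deviates from the equilibrium (uniform 1/3
# in rock-paper-scissors) has a pure best response that wins on
# average, purely because the frozen player can't adapt once known.
import numpy as np

# Payoff to the row player; rows/cols = (rock, paper, scissors).
PAYOFF = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

frozen = np.array([0.4, 0.3, 0.3])   # the committed, slightly-off policy
values = PAYOFF @ frozen             # expected payoff of each pure reply
best = int(np.argmax(values))

print(f"best response: {['rock', 'paper', 'scissors'][best]}, "
      f"expected score {values[best]:+.2f} per game")
# -> paper, +0.10 per game against the frozen policy.
```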
Is this Turing Test framing substantially different from another Metaculus variant? That variant is at 62% by end of 2029, unlike the linked market's 85%.
If anyone thinks this is going to happen, I have 1000 shares of NO ordered at $0.31 on the Kalshi market that you can go and match.
And if you do, I'll put a lot more there too.
@DavidBolin so the price is way higher on Kalshi than here, right? So wouldn't someone want to buy YES here before Kalshi? I guess unless the benefit is real money, but the returns are almost certainly worse than e.g. the S&P 500 over those few years.
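For what it's worth, a rough back-of-envelope on that returns point (the horizon is my assumption, purely illustrative):

```python
# Matching the $0.31 NO order means buying YES at $0.69, which pays
# $1.00 at resolution if YES wins. Assume ~6 years of locked capital.
yes_price, horizon_years = 0.69, 6

annualized = (1.0 / yes_price) ** (1 / horizon_years) - 1
print(f"annualized return if YES wins: {annualized:.1%}")  # ~6.4%
# Compare a commonly cited ~10% nominal long-run S&P 500 average --
# and the ~6.4% is conditional on the bet actually winning.
```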
Disclaimer: correlation ain't causation... but this market has jumped 20% since Claude 3 dropped. If your expected AGI timeline (this market's definition) has updated towards sooner rather than later, I would love to hear why.
@VerySeriousPoster I think the jump was more due to people being reminded how much higher Metaculus (~88%) and Kalshi (~72%) were than Manifold (~38%). There have also been various discussions of issues and subjectivity in the Longbets resolution itself, including arguments that the substantial randomness of the result will push it towards 50% or towards ~100% (a toy version of the 50% case below). I think Claude 3 comes after those factors.
However, for many people, Claude 3 was a negative update for AI capabilities because people expected the GPT-4 bar to be significantly cleared already and Claude 3 is merely a little below, at, or a little above—depending on who you ask. I think the combined speed and performance of Claude 3 Haiku was more clearly a positive update, though perhaps on a somewhat different axis and maybe only relative to February 2024 (rather than, say, March 2023).
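To spell out the "randomness pushes towards 50%" reading (my framing of the argument, not necessarily the commenters'): if the verdict can flip independently of the underlying capability question, the resolution probability gets mixed towards a coin flip.

```python
# If the test's verdict flips with probability eps regardless of the
# "true" answer, P(resolves YES) is pulled towards 0.5.
def resolved_yes(p_true: float, eps: float) -> float:
    """P(YES resolution) when the verdict flips with probability eps."""
    return p_true * (1 - eps) + (1 - p_true) * eps

for p in (0.1, 0.3, 0.9):
    print(p, "->", round(resolved_yes(p, eps=0.2), 2))
# 0.1 -> 0.26, 0.3 -> 0.38, 0.9 -> 0.74: everything drifts to 0.5.
```

The push towards ~100% would be the one-sided case: if only spurious passes count and the bot gets many attempts, the noise accumulates in the YES direction instead.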
@Bayesian I am also surprised to see real money so high. I even bet against it a bit myself on Kalshi! But people keep buying it back up! I'm not an expert, and AI keeps outperforming my expectations.
So I'm going to take the outside view here. If anyone disagrees, they should go bet against it with real money!
@Bayesian Well, I bought a bit of NO when the Kalshi market went live and it was even higher than it is now. But the price stayed high, and after reading the Metaculus comments I now think this market is the one that's more mispriced.
If Metaculus dropped, then I'd probably sell here. But I think they'd be good at this kind of thing since they don't have to care about the time value of money.
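One way to make the time-value point concrete (a risk-neutral sketch with assumed numbers, not anyone's actual position):

```python
# A trader who could earn r elsewhere shouldn't pay more than the
# discounted probability for a $1-if-YES contract that resolves in
# `years`. All numbers illustrative.
p, r, years = 0.62, 0.05, 6   # credence, opportunity cost, horizon

max_price = p / (1 + r) ** years
print(f"indifference price: {max_price:.2f}")  # ~0.46 vs 62% credence
# Metaculus reports probabilities directly, so no such discount
# applies -- one reason a forecast and a real-money price can diverge.
```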
@DavidBolin I'm an "ASI ~never" kinda dude, but do you really think transparent text chat isn't happening within the decade?
@Adam Passing an adversarial Turing test, even with pure chat, IS superintelligence.
Right now, you can pretty much recognize an LLM by a single paragraph (one crude statistical version of that intuition is sketched below). Even if you disagree with that statement about superintelligence, there is no way anything passes an adversarial test like that without intensive training and fine-tuning on the specific task of trying to pass the test.
No one is going to do that, because it's too expensive and goes against other things they care about, like not having LLMs be outrageous liars. This is not happening.
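On the "recognize an LLM by a single paragraph" point, one crude statistical version of that intuition is perplexity under a reference model: machine text tends to be more predictable than human prose. A minimal sketch, assuming GPT-2 via Hugging Face transformers and a made-up threshold (serious detectors are far more involved, and this one is easy to fool):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy/token
    return float(torch.exp(loss))

def looks_machine_written(text: str, threshold: float = 25.0) -> bool:
    # Threshold is hypothetical, not calibrated on anything.
    return perplexity(text) < threshold
```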
@DavidBolin On the one hand you say that passing the test is superintelligence; on the other, you argue no one will bother to do the work. Which one is it? Even if we're being super charitable, it seems pretty obvious that passing the test or winning the LLM arena unlocks a ton of funding prospects at attractive valuations.
@beaver1 If you want a precise answer, it is superintelligence.
I made the other point in the sense of "even if you think it does not take superintelligence," it is still true that no one would do the work.
In my opinion: (1) no one will do the work; (2) if they did, they would fail; (3) if they did not fail, they would get superintelligence.