Will I believe, 3 years from today, that publicly available AI tools have an absolute advantage over human mathematicians in all areas of mathematical research?
I will not participate in this market.
Some background: I'm a math professor at the University of Toronto, mostly focusing on algebraic geometry and number theory.
I will resolve this market YES if I think publicly available AI tools outperform most professional mathematicians (e.g. me) as of July 17, 2028, and NO otherwise. In particular, for a YES resolution, they should be able to create high-quality original research with very little human input.
See this related market, which operationalizes a similar question more precisely, albeit with some minor problems as a proxy: https://manifold.markets/TamayBesiroglu/will-ai-be-capable-of-producing-ann
Update 2025-07-17 (PST) (AI summary of creator comment): The creator has specified that for a YES resolution, the AI must outperform humans in all areas of math research, which explicitly includes:
Creating definitions
Making conjectures
@JoshSnider Be assured that if I see a swarm of nanobots approaching, turning everything in their path into paperclips, I will resolve the market YES.
@DanielLitt A swarm of nanobots approaching, turning everything in their path into proofs of the Riemann hypothesis?
@JoshuaTindall We will know whether P = NP by measuring how much of the world AI has to turn into data centers before it knows how to put oranges in a box.
Are you considering only the task of proving theorems, or also that of building theories and conjectures? In my opinion, while it's possible for a system like AlphaProof to become superhuman at proving theorems by 2028 (though it would be very difficult), I believe such systems are still very far from being able to create definitions, make conjectures, and build theories.
@Grothenfla The question is about "all areas of math research." This includes creating definitions, making conjectures, etc.
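For readers unfamiliar with the distinction drawn in the exchange above, here is a minimal Lean 4 sketch (assuming Mathlib is available) contrasting the three activities: proving a fully specified theorem, choosing a definition, and stating a conjecture with no proof attached. The names `Balanced` and `InfinitelyManyBalanced` are hypothetical, introduced only for illustration and not taken from the market discussion; `Balanced` is just the classical notion of a perfect number under another name.

```lean
import Mathlib

-- "Proving a theorem": a fully specified statement together with a proof,
-- the kind of task a system like AlphaProof searches for automatically.
theorem two_is_prime : Nat.Prime 2 := by norm_num

-- "Creating a definition": deciding which concept is worth isolating.
-- `Balanced` is a made-up name for illustration; it is simply the classical
-- notion of a perfect number (the sum of the divisors of n equals 2 * n).
def Balanced (n : ℕ) : Prop := n.divisors.sum id = 2 * n

-- "Making a conjecture": a precise statement with no proof attached.
-- (Whether there are infinitely many perfect numbers is a genuinely open
-- question; it is written out here only to show the form a conjecture takes.)
def InfinitelyManyBalanced : Prop := ∀ N : ℕ, ∃ n > N, Balanced n
```

A theorem prover can, in principle, attack the first item on its own; the second and third require deciding which concepts and statements are worth writing down in the first place, which is the part of research the commenter argues current systems are far from.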