Will the US implement information security requirements for frontier AI models by 2028?
88% chance
This market will resolve YES if the US creates a policy by 2028 mandating information security protections (including cyber, physical, and personnel security) for frontier AI models. These security measures should be in place during model training to limit unintended proliferation of dangerous models. Frontier AI models are those with highly general capabilities (above a certain threshold) or those trained with a large compute budget (e.g., as much compute as $1 billion can buy today).
Luke Muehlhauser of Open Philanthropy suggested this idea in his April 2023 post, "12 tentative ideas for US AI policy." This market idea was proposed by Michael Chen.
This question is managed and resolved by Manifold.
Related questions
Will the US require a license to develop frontier AI models by 2028?
50% chance
Will the US implement software export controls for frontier AI models by 2028?
75% chance
Will the US implement testing and evaluation requirements for frontier AI models by 2028?
82% chance
Will the US fund defensive information security R&D for limiting unintended proliferation of dangerous AI models by 2028?
53% chance
Will the US government adopt a mandatory labeling system for AI-generated content by 2025?
25% chance
Will the US government commit to legal restrictions on large AI training runs by January 1st, 2025?
5% chance
Will a new lab create a top-performing AI frontier model before 2028?
57% chance
Will the US regulate AI development by end of 2025?
43% chance
Will the US implement AI incident reporting requirements by 2028?
83% chance
Will the US or UK nationalize any frontier AI labs by 2035?
44% chance