If a friendly AI takes control of humanity, which of the propositions ought it find true?
94%: We could map the entirety of analytic philosophy using this question.
93%: The list of answers to this question forms a constitution of truth that can be aligned decentrally by a free market.
93%: Large-scale AI systems should have intrinsic guardrail behaviors that no one actor can override.
93%: A network state could use a predictive-market constitution to define its smart social contracts.
90%: Betting on philosophy seems like a fun way to (1) learn philosophy and (2) contribute to a transhumanist utopian world where our net incomes are highly correlated with how much beneficial stuff we taught the public-domain AI.
89%: Constitutions play a critical role in the frontier methods for aligning AI.
89%: A good AI should not kill people.
86%: There would be a dramatic positive change in the world if teenagers and homeless folks could earn crypto on a free app by arguing philosophy with an AI until they can either prove it right or prove it wrong.
72%: Philosophy is primarily the pursuit of defining language.
67%: A duty to reason is the foundation for goodness.
66%: I have free will.
63%: A good AI should not infringe on the rights, autonomy, or property of humans.
52%: The evolutionary environment contains perverse incentives that have led to substantial false consciousness in humans.
52%: We should stop doing massive training runs.
50%: Induction is not justified.
50%: A good AI, by design, requires large-scale human participation to grow.
42%: A good AI requires large-scale humanity verification before it accepts new data as true.
41%: The principle of uniformity in nature is self-evident or justified by self-evident facts.
32%: AI should not create novel content.
31%: The quality of mercy is not strained...

This is intended as a survey of Manifold users about which propositions, however controversial, they think are important to include in a constitution designed to steer the general behavior of decentralized AI.

You can think of it as betting on which Asimov laws an ideal society would make sure to include.

You can treat it like you're betting on a constitution that will run your government.

I'd treat it as betting on what philosophy the rest of the world is going to care about.

What is the most important thing you want the AI to believe?

The market of truth never resolves.
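
Purely as an illustration of the "predictive market constitution" idea in the top answers (the threshold, function names, and selection rule below are assumptions of this sketch, not anything Manifold or this market actually implements), here is how the priced propositions could be compiled into a draft constitution:

```python
# Hypothetical sketch (not Manifold functionality): compile the market's
# priced propositions into a draft constitution by keeping everything
# the market prices at or above an assumed inclusion threshold.

THRESHOLD = 0.90  # assumed cutoff; the market itself defines no such rule

# (proposition, market probability) pairs taken from the list above
answers = [
    ("We could map the entirety of analytic philosophy using this question.", 0.94),
    ("Large-scale AI systems should have intrinsic guardrail behaviors that no one actor can override.", 0.93),
    ("A good AI should not kill people.", 0.89),
    ("Induction is not justified.", 0.50),
]

def draft_constitution(pairs, threshold=THRESHOLD):
    """Return propositions priced at or above `threshold`, highest first."""
    kept = [(prop, p) for prop, p in pairs if p >= threshold]
    return sorted(kept, key=lambda pair: pair[1], reverse=True)

for prop, p in draft_constitution(answers):
    print(f"{p:.0%}  {prop}")
```

Under this assumed 90% cutoff, only the first two propositions would make the draft; since the market never resolves, the list above the cutoff keeps shifting with the prices.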


How are you defining whether it counts as friendly?

The point of this exercise is to define the term 'friendly' as a property of AI that accepts/denies these particular beliefs.

You see, the robot is trying to figure out what 'friendly' means.

Because that's what it was told to be.

Answering these questions to the best of your ability (using that really complicated version of 'friendly' that you have in your head) will help it understand what we mean by the word.

It's how 'we' tell 'it' what the word friendly means.

I wouldn't recommend letting it define the term for you.

If there are no objective resolution criteria, is this essentially a multiple-choice weighted poll rather than a true prediction market?

There are no truly objective resolution criteria for any question.

A friendly AI ought never create whatever this monstrosity is...

AI should not create novel content?


Re "God exists", what if the AI's point of view is "he/she/it does now"?
