
I believe Eliezer has the best overall understanding of the issues related to alignment.
Also, I'm confident there is a solution.
This will only resolve if Eliezer charitably engages with my claims.
If he does, this will only resolve yes if, after review, his new p(doom) is < 0.1 (10%).
It will only resolve yes with Eliezer's consent.
It will resolve no if Eliezer compels me, within the krantz demonstration mechanism, to understand why Krantz isn't a solution to alignment, OR if he spends 45 minutes charitably speaking with me at Manifest 2025 about the claims contained in that demonstration mechanism.
https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6
A 2-minute general overview is available by request.
Update: Due to integrity and logistical concerns raised in the comments, the above prediction will resolve N/A on Jan 1st, 2030.
Update 2025-07-01 (PST) (AI summary of creator comment): The creator has specified that for Eliezer Yudkowsky to "charitably engage with my claims", he must interact with a specific set of the creator's other Manifold markets. The creator describes these markets as 'mechanisms for identifying priors' or 'surveys'.
The primary markets requiring engagement are:
https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6
https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6
https://manifold.markets/Krantz/who-are-you-aligned-with?r=S3JhbnR6
For anyone who's super confident that my proposed solution is not a clear path for navigating the existential risk posed by AI, but doesn't want to bet this market below 20% because you're worried about tying up mana (or whatever other reason you have), please consider engaging with my claims on a platform where capital is irrelevant (Metaculus).
I will make a point of reaffirming my 99.9% confidence on the Metaculus prediction below every day. Most others have indicated a confidence of 0.01. If I could get a small army of you to follow suit and cast similarly very low predictions (and maybe reaffirm them every so often), I think I could create a situation where I end up with a ton of Metaculus reputation (whether positive or negative). To me, that's more valuable than money.
Either way, this is really important to me. I genuinely believe the fate of humanity is at stake. Forecasting on Metaculus is free, and it would really help me bring attention to the issue.
Thanks.
@Krantz I think that if you want to increase the chance that people read, understand, and conclude that your proposition is potentially good, having the description of what you want to do, your arguments for it, and your responses to the main counter-arguments scattered across different platforms isn't the way to go.
Take the time to write an article or website with the clearest and shortest explanation you can make: the assumptions your solution relies upon, why you expect it to work, and why it avoids the known pitfalls of these kinds of solutions.
And also one where you categorize and answer the main counter-arguments.
I think you should do this for the uncensored version, but also for a censored version that leaves out the parts you consider an infohazard.
Then you can share the same reference across platforms and social media, so there is a single, centralized place where we can read the official version.
You could also probably pay Yudkowsky directly to read the uncensored version and give an honest shot at understanding it and seeing whether it could work (I'm not sure, but I think that has a better chance of working than some messages on X).
@dionisos I get this advice a lot, and I keep directing people to the same strategic predictions on Manifold that I'm trying to pay folks like Eliezer (or anyone from my proper date list) to engage with. Nobody is engaging with those predictions (I'm not talking about comments; I'm talking about wagering in the markets).
It's important to understand that I'm trying to get a limited number of people's attention. I'm not trying to make something that's accessible for the general public (actually trying to avoid that).
Here's a short description.
https://www.lesswrong.com/posts/qqCdphfBdHxPzHFgt/how-to-avoid-death-by-ai
Here is exactly what I'm trying to get Eliezer (or anyone from my proper date list) to engage with:
https://manifold.markets/Krantz/krantz-mechanism-demonstration?r=S3JhbnR6
https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6
https://manifold.markets/Krantz/who-are-you-aligned-with?r=S3JhbnR6
I have a lot more predictions, but these are the primary ones I'd like Eliezer to engage with. They are not predictions in the traditional sense; they are mechanisms for identifying priors. I am willing to reimburse Eliezer (or anyone from my proper date list) who uses mana to follow the instructions and evaluate the propositions I have proposed. I want to pay them to tell me what their priors are. That process can be accomplished using these predictions.
I would love to pay Eliezer for his time identifying his confidence and alignment, but he has never spoken to me, replied to any of my requests or acknowledged that I exist.
Paying him to take a survey (which is what I'm trying to do) seems like an intractable task due to the noise that exists in today's communication infrastructure.
If you know a place where I can put my money that would increase the odds of him cheerfully taking my surveys, I'd love to hear about it.
Thank you for taking the time to pay attention.
@Krantz
I will read the LessWrong article.
> If you know a place where I can put my money that would increase the odds of him cheerfully taking my surveys, I'd love to hear about it.
Unfortunately, no. I didn't know you had already tried; I assumed it would be less difficult.
> It's important to understand that I'm trying to get a limited number of people's attention. I'm not trying to make something that's accessible for the general public (actually trying to avoid that).
The trouble here is that making it accessible to the general public, or at least to the academic community, and then having those groups take it up en masse, is normally a prerequisite for famous people to consider it worthwhile, not the other way round.
I've attended Manifest twice now. Last year (2024) I came for the full week; this year (2025) I was only able to make it because I dealt the poker tournament (thanks David!). I had an amazing time both times. It has been a real treat to be around so many wonderful, kind, and intelligent people who share my interest in the mechanistically interpretable alignment of AI (symbolic), the decentralization of decision markets (futarchy), and the exploration of sapiosexuality (open-minded rationalists).
My goal both times has been to share an infohazardous solution to Eliezer's unsolvable problem. I would also be happy to share this information with anyone who has a thorough understanding of ALL of the following topics and has publicly demonstrated a certain level of moral integrity:
(A) Danny Hillis and Doug Lenat's work in symbolic AI. Specifically, the lengths Doug went to and the amount of cognitive labor his team put into CYC. It's also helpful to understand the difference in approach that led Danny and Doug to part ways (objective vs. subjective world modeling).
(B) A real understanding of WHY decentralized decision markets are infohazardous. People tend to overlook the fact that a solution for how 8 billion people can coordinate to prevent a full-spectrum-dominance takeover by a superintelligence is also a solution for how to prevent ANY totalitarian actor from acquiring full-spectrum dominance. If that has already happened, this will need to be a mutually beneficial distribution of control.
(C) The effect AI has had on the public's ability to trigger the release of classified information (disclosure).
Eliezer (and a dozen others familiar with the topics above) might assume that I'm trying to align large arrays of inscrutable matrices. I am not. That specific problem is tautologically unsolvable. I'm sure his new book does a great job of explaining that.
However, I do not share his pessimism that it would be impossible to scale GOFAI fast enough to beat ML to the punch. I believe that is our only hope, and that if he charitably reviewed my work, he would become optimistic that GOFAI could make an astonishing comeback. I believe that if we used futarchic decision markets in a way that (1) allowed individuals to perform the cognitive labor Doug was trying to harness, (2) enabled individuals to fairly express their normative opinions on how decentralized AI ought to behave, and (3) incentivized individuals to perform that cognitive labor by minting them non-fungible tokens as proof of that labor, we could create a game-theoretically fair intelligence economy. That economy would allow all of our grandkids to live sovereignly and earn a comfortable living simply by getting an education, checking facts, and performing their civic duties decentrally by interacting with a free and open market of analytic philosophy.
I'm trying to show everyone a systematic way of writing the source code for decentralized GOFAI inside the market itself. If we do that, we create a truly decentralized and alignable futarchy that will eventually make government and money obsolete. I'm trying to teach a large cadre of gifted youngsters that if you engineer the market to speak analytically (like Wittgenstein in the Tractatus or Madison in the Constitution), it will actually come to life (in the Stephen Wolfram sense) and be the engine we collectively use to steer a decentralized and mechanistically interpretable collective superintelligence.
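To make that a bit more concrete, here is a minimal toy sketch of the kind of ledger I have in mind: people write analytic propositions into the market, stake on them while registering a public confidence, and get minted a non-fungible record as proof of that cognitive labor. Every name and number below is an illustrative placeholder invented for this comment, not the actual mechanism.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Proposition:
    """An analytic claim written into the market."""
    text: str
    stakes: Dict[str, float] = field(default_factory=dict)  # user -> amount staked

@dataclass
class ProofOfLabor:
    """A non-fungible record minted when a user evaluates a proposition."""
    user: str
    proposition: str
    confidence: float  # the prior the user publicly commits to, in [0, 1]

class PropositionLedger:
    """Toy 'intelligence economy': write claims, stake on them, mint proof of the labor."""

    def __init__(self) -> None:
        self.propositions: List[Proposition] = []
        self.tokens: List[ProofOfLabor] = []

    def write(self, text: str) -> Proposition:
        # (1) Cognitive labor: anyone can add an analytic proposition to the market.
        p = Proposition(text)
        self.propositions.append(p)
        return p

    def evaluate(self, user: str, p: Proposition,
                 confidence: float, stake: float) -> ProofOfLabor:
        # (2) Express a position by staking on the proposition with a public confidence,
        # and (3) mint a token as durable proof of that labor.
        p.stakes[user] = p.stakes.get(user, 0.0) + stake
        token = ProofOfLabor(user, p.text, confidence)
        self.tokens.append(token)
        return token

# Usage: two people publicly register diverging priors on the same claim.
ledger = PropositionLedger()
claim = ledger.write("Scaling GOFAI through decision markets is a viable path to aligned AI.")
ledger.evaluate("krantz", claim, confidence=0.999, stake=100.0)
ledger.evaluate("skeptic", claim, confidence=0.01, stake=100.0)
```

The token only exists to make the labor legible and attributable; the actual pricing, resolution, and payout rules would sit on top of a structure like this.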
I really need help getting someone with influence in the community to take this seriously. If anyone could put me on Eliezer's or Robin's radar, I'd really appreciate it. I'm not trying to make money by starting a company. I'm a retired casino director / minimalist philosopher / cub scout leader / collective intelligence enthusiast who's trying to teach others how to help my kids protect their freedom and stay alive.
In today's world, it seems impossible to communicate information like this over the internet. This is why I value the Manifest experience so much. I also deeply believe in prediction markets as a mechanism for communicating complex ideas across a distance.
If you'd like to help, please consider adding liquidity to, wagering on, or sharing the various markets I've created to help explain the process. I understand that they might appear facetious, but they are genuine attempts to save humanity.
Thank you for listening and I hope to see you all next year with a full presentation. If we're still here.
Krantz
@Krantz reading up on point A seems straightforward enough, but points B and C seem somewhat vague, where exactly would one learn the specifics of those points that you require?
@Ajelenbogen It would've been easier to say, 'I don't think Eliezer is going to look at your work,' which has already been said by dozens of others in the comments. I'd tell you it's a waste of everyone's time to contribute duplicate comments, but then I'd be a hypocrite. I just wish someone would actually put money into which proposition in my argument they believe is wrong, because doing that produces the incentive for Eliezer to resolve it.
@Krantz you should share your data showing your secret algorithm is more effective than all normal advertising and educational content if you want to convince people this is not a waste of time.
If your secret algorithm so effectively scales collective human intelligence beyond every other social structure we've created (e.g., companies solving incredibly complex problems using aligned incentives for their component humans), a first priority should be recruiting some trusted friends and using this secret power, with your combined superintelligence, to figure out how to make a more convincing sales pitch (considering you got Yudkowsky to comment here but failed to show anything he deemed worthwhile).
@Krantz genuinely, the bit about writing down one's thinking, being able to argue between people without talking to them by reading their lines of reasoning and evidence, verifying their individual humanity, and having other people fund this thinking and learning is a good approach to figuring things out with collective intelligence.
We know that because that's how publicly funded academic research works.
It's unlikely that turning everyone into academics who receive microgrants in the form of trading some cryptocurrency back and forth is going to be very effective or efficient, have a consistent source of funding, or increase agreement. It's common for people to agree on some facts but disagree on narratives and subjective conclusions. AI alignment is one such area.
@Krantz either the resolution will be N/A, as you claim, in which case there's no incentive to bet; or something else will happen, in which case no sensible investor should touch this market with a 10-foot pole at this point.
@AriZerner I can't discern what point you are trying to make. For all cases of x, such that x is an event, x will either happen or something else will.
The proposition 'something else will happen' (referring to this market resolving yes or no) doesn't seem to imply that 'no sensible investor should touch this market with a 10-foot pole'. Were there additional propositions you used in your reasoning that you did not write out?
@AriZerner We can make this point clearer by applying your argument to a wager about a basketball game between the Chicago Bulls and the New York Knicks. Either that wager will resolve N/A (because the game might get canceled), or something else will happen.
This is true, yet I have no a priori justification to believe that a sensible investor should not wager on such a game.
@Krantz There's a strong chance you already managed to force Yudkowsky to read it through one of the vectors you've clearly tried to use to reach him online, and if that's the case, his behavior would indicate he has already dismissed it. He's not so famous as to be completely inaccessible, and you appear to have a lot of persistence plus an obsession with him specifically.
It's notable that the vectors you are apparently not using are the ones which would be the most convincing (trying to publish it with peer review, showing a working small-scale reference implementation, detailing it in a best-selling book, even putting something on arXiv for the internet to review).
This business about:
> Unfortunately, it would also help LLMs reason more effectively and possibly piss off intelligence agencies as much as Bitcoin pissed off the banks, so it's not online anywhere.
Does not take into account the fact that if you get this guy to look at it and he finds it convincing, the very next step would be to start sharing it so others can check it. You could do that now.
This makes it extremely likely this market will N/A or, as they said, "something else will happen," which is extremely unpredictable based on the thought patterns displayed in your posts.
Edit: I scrolled down and saw he actually already looked at your posts here.
@Krantz also, if this secret algorithm were so powerful that it would cause LLMs to rapidly gain reasoning abilities beyond all current research tracks, and is therefore such a terrifying infohazard you can't share it with anyone, you'd have made a tech equivalent of ice-nine and p(doom) would be higher.
It cannot both be a good solution we can implement globally to save the world from runaway AI and something so hazardous you need to keep it secret from everyone lest it cause runaway AI.
@Krantz I make this reply on the off-chance you're simply naive rather than in bad faith, but this will be my last engagement with your sealioning here. My reasoning contains one additional proposition: if this market resolves other than N/A, then you, the creator, are a liar, and one would be foolish to invest in any market you have the power to resolve.
Here's an argument if anyone would like to engage with the claims I'm asserting.
https://manifold.markets/Krantz/which-proposition-will-be-denied?r=S3JhbnR6