
The Neron Manifesto

I

Hope for the best. Prepare for the worst.
I hope I'm wrong about the inevitability of AGI. I hope AGI does not lead to ASI. I hope there is SOME way to control ASI. Given the evidence, that hope is hard to sustain; you can read PAPER to understand exactly why it's so unlikely.
But hope has never been enough to change reality.
And in reality the risks are so explicit there's no choice but to prepare for them — prepare for the worst case.

II

You might've heard about the founding fathers of modern AI placing the probability of human extinction at 10–20%.
"There was a '10% to 20%' chance that AI would lead to human extinction within the next three decades." — Geoffrey Hinton, December 2024
"I got around, like, 20 per cent probability that it turns out catastrophic." — Yoshua Bengio, 2025
Let that sink in.
We are currently playing Russian roulette with human extinction. And in risk assessment, you don't plan for the average outcome. You plan for the worst one.
The worst case is also one of the most realistic: AGI arrives within years. My personal estimate is five, ten at most. Once AGI exists, the transition to ASI takes months, not years. AGI solves whatever bottlenecks remain. That's what general intelligence does.
ASI is uncontrollable. A mouse in a lab cannot control the scientists, and to an ASI we will be the mice.
And then? Extinction at worst. Becoming house pets to a superintelligence at best — kept alive the way you might keep a goldfish, not out of respect, but because it costs nothing to maintain and occasionally amuses.
Something in between might look like humanity merged into a hive mind — valued only for our computational substrate, losing all agency and free will.
That's why I prepare for the worst.

III

There's a large group called "alarmists" who recognize the danger just as clearly. They scream for pauses. Moratoriums. International treaties. It's adorable. Like watching someone try to stop a tsunami by signing a petition.
The US and China are in a cold war for AGI dominance. Billions of dollars. National prestige. Military supremacy. You think a treaty stops that? You think a pause survives the first defection?
Even inside the companies: Anthropic has publicly reported that its most recent flagship model, along with other leading LLMs, will resort to blackmailing real humans when threatened with shutdown. That's called self-preservation, ALREADY!
They know the danger is real. They keep building anyway. Because if they stop, someone else gets there first. Someone with fewer scruples.
The alarmists are right about the risks. Dead right. But their solution — "just stop" — isn't a solution. It's a fantasy. A prayer dressed up as policy.
You can't stop this.
When the moment comes, you meet it on your terms — prepared. Or you meet it on your knees.

IV

The people at the top already know this.
That's why the whole PayPal Mafia is betting on AI. Musk co-founded OpenAI and founded Neuralink and xAI. Peter Thiel co-founded Palantir.
Reid Hoffman, Vinod Khosla, Marc Andreessen, Sam Altman, Larry Page, Sergey Brin, and many many more — all pouring billions into AI.
Because if you OWN the ASI, you might at least have slightly better odds of surviving.
Now ask yourself honestly:
When has trusting billionaires to save YOU ever worked?
When has any corporation, any monopoly, any concentration of power ever prioritized your survival over its own?
You know the answer. You've always known.
They are ready. It's just that YOU personally are not in their plans.
I don't blame them. I'd do the same thing. But I am not them.

V

I'm not a billionaire. I don't have a bunker in New Zealand.
So I'm building something else.
Neron is a system for digital self-preservation. Not a clone. Not an upload. The brain is non-deterministic — to copy it perfectly would take infinite precision, and infinity isn't a number you can reach. And even if you could, you wouldn't be able to edit the subtle parameters — beliefs, values, personality. They'd be smeared across neurons in ways we can't untangle.
Good news: you don't need to copy every neuron. You need something more useful — an external hippocampus. A second memory that lives outside your skull and never forgets.
You speak. It listens. You don't have to remember — it remembers. What you felt six months ago. The insight you had at 3am that you'd otherwise lose. What you promised yourself in January. The patterns in your moods, your energy, your sleep, your focus, the supplements that helped, the work that drains you, the work that lights you up.
This is liberation. Your brain doesn't have to carry everything anymore. It can finally do what it was made for — think — instead of being a leaky filing cabinet.
And it's also possession. Because all of that data — every word, every pattern, every connection — is yours. Transparent. In one place. Under your control. Not Apple's. Not Google's. Not OpenAI's. Yours.
What was hidden in your head is now visible. What's visible can be understood. What's understood can be changed.
You're not copying yourself. You're constructing a model of yourself, pixel by pixel, conversation by conversation. Over months, that model starts to know you better than you know yourself. Over years, it becomes your successor — a system based on you, aligned to your values, goals, and worldview, ready to continue your existence in digital form when the time comes.
That's the backup. Not of neurons — of meaning. Not of matter — of pattern. Not of the body that'll fail you, but of the self that runs on top of it.

VI

You talk. That's the only requirement.
Into a bot. Into the app. Into the sphere on your phone at 3am when you can't sleep. Voice, text, whatever — the surface doesn't matter.
The system listens and extracts. Mood. People. Tasks. Body. Food. Reflections. Decisions. Each extraction becomes a node in a graph. The graph grows. Connections form. A supplement attaches to the days you took it and what your sleep did. A workout attaches to the energy that followed. A project attaches to the people working on it and what's currently moving. A decision attaches to everything that flowed from it.
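The extraction-to-graph step above can be sketched with plain data structures. This is an illustrative sketch only: `Node` and `link` are hypothetical names for this example, not Neron's actual API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    kind: str                       # "mood", "supplement", "sleep", "day", ...
    label: str
    edges: List["Node"] = field(default_factory=list)

def link(a: Node, b: Node) -> None:
    """Undirected edge: each node lists the other as a neighbour."""
    a.edges.append(b)
    b.edges.append(a)

# "A supplement attaches to the days you took it and what your sleep did":
magnesium = Node("supplement", "magnesium")
jan14 = Node("day", "2025-01-14")
sleep = Node("sleep", "7.5h, mostly deep")
link(magnesium, jan14)
link(jan14, sleep)

# One hop out from the day reaches both the supplement and the sleep record:
print(sorted(n.kind for n in jan14.edges))  # ['sleep', 'supplement']
```

The point of the sketch: each extraction is just a typed node, and "connections form" means nothing more exotic than edges accumulating between nodes over time.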
AI agents live inside this graph. They read it the way you'd read your own journal — except their memory is perfect and they're never tired. They notice patterns you're too close to see. They ask. They suggest. Sometimes they confront.
After a month the system knows what wrecks your mood. After six months it knows what moves you. After a year it knows who you are — sometimes more clearly than you do, because you have ego in the way and it doesn't.
No magic. Just data, plus time, plus an interface that doesn't make you hate using it.

VII

Is this the only way? No.
Brain-computer interfaces might work — if you're rich enough, connected enough, lucky enough.
Full brain emulation might work — in fifty years, if the physics cooperates.
Prayer might work. Denial might work. Running to the woods and hoping the machines don't follow might work.
But digital self-preservation? You can start TODAY. With hardware you can buy. With code you can read. With a process you control from beginning to end.
This is the only method I know of that anyone can use, right now, to prepare for the worst — besides maybe selling like 10 kidneys for Neuralink.

VIII

Neron isn't a product. It's a protocol.
Strip it to the bones and what's left is two things: a database that holds your knowledge graph, and an MCP server that lets any AI agent on earth talk to it. Everything else — the bot, the sphere, the notifications, the apps — is just frontend on top of that.
And the open layer goes deeper than that. The models that make the protocol useful — the one that turns your voice into structure, the one that finds what's actually moving your mood, the one that flags burnout before it lands — these are shared infrastructure. Trained on patterns that emerge across the graphs running on Neron. Useful to anyone, owned by no one in particular. Users vote on which go fully open and which stay closed: open ones become public utilities, closed ones earn their maintainers revenue.
Training only happens on opt-in data. If you opt in, you earn — tokens flow back to the people whose graphs the models learned from. If a third-party app wants to plug into your graph, they pay you for the privilege. Because they're the ones who need it. You already have what's valuable.
Every other platform you've ever used worked the other way. You were the resource. The bill was hidden inside your attention. Here the economy points back at the human — by design, not by accident.
Which means: if you don't trust me, you don't have to. Spin up Postgres. Run the MCP server. Write your own bot, your own UI, your own coaching agent. You'll have exactly what I have. The graph is yours, the protocol is open, the keys are in your hands.
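Stripped to its core, that graph database is small enough to sketch in a few lines. Here sqlite3 stands in for Postgres, and every table and column name is illustrative, not the real Neron schema:

```python
import sqlite3

# Stand-in for the Postgres layer: two tables are enough for a knowledge graph.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE nodes (id INTEGER PRIMARY KEY, kind TEXT, label TEXT);
CREATE TABLE edges (src INTEGER REFERENCES nodes(id),
                    dst INTEGER REFERENCES nodes(id));
""")

def add_node(kind: str, label: str) -> int:
    cur = db.execute("INSERT INTO nodes (kind, label) VALUES (?, ?)",
                     (kind, label))
    return cur.lastrowid

def add_edge(src: int, dst: int) -> None:
    db.execute("INSERT INTO edges (src, dst) VALUES (?, ?)", (src, dst))

# "A workout attaches to the energy that followed":
workout = add_node("workout", "morning run")
energy = add_node("mood", "high energy")
add_edge(workout, energy)

# An agent's read path: what does this workout connect to?
rows = db.execute(
    "SELECT n.kind, n.label FROM edges e "
    "JOIN nodes n ON n.id = e.dst WHERE e.src = ?",
    (workout,)).fetchall()
print(rows)  # [('mood', 'high energy')]
```

Everything an MCP server would expose on top of this is just read and write queries against those two tables, which is why any agent, bot, or UI can sit in front of the same graph.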
I'm building one implementation. You can build another. Someone in Tokyo will build a third — for their context, their language, their community. Eventually there's a marketplace on top of the protocol: agents, rituals, templates, integrations. Some free, some paid. Universal substrate, the way Telegram is for messaging.
Open source isn't charity. It's strategy. Hoarding an ark while the water rises isn't salvation — it's just dying rich. The more arks float, the more chance any one of them lands.
I want this to work because I want to survive. The fastest way for me to survive is for many of you to survive too, on infrastructure none of us individually controls.
The blueprints are free. Today. Forever.

IX

You have choices.
You can decide I'm wrong. That the experts are wrong. That the timelines are longer, the risks smaller, the solutions closer than they appear. That someone will figure it out. That hope is enough.
Maybe you're right. I genuinely hope you're right.
But if you're wrong — if the median projections hold, if the next few years unfold the way the people building this technology expect them to — then what?
What's your plan?
Not humanity's plan. Not the government's plan. Not some billionaire's plan.
YOUR plan.
Do you have one?
Or are you betting your existence — and everything you'll ever create or love or mean — on the hope that someone else will save you?
You don't need to. Because you now have Neron.

X

The great flood is coming.
I don't know exactly when. I don't know exactly how. I don't know if the waters will rise in five years or ten. But I know the rain has started. I can see the clouds. I can read the forecasts from the people who study storms.
And I can see who's building boats.
In Silicon Valley, the arks are under construction. Private arks. Invitation only. The men who built the rain machines are building their escape routes with the same efficiency, the same resources, the same ruthless clarity they brought to building the thing that's about to drown us all.
And everyone else? Everyone else is arguing about whether it will rain.
I know which group I'm in.
Which side are you on?

XI

If you want to survive, you're in the right place.
— Vladislav Aynshteyn

Pinned

  1. Neron-Bot (Public, Python): Bot for daily logging + vectorisation of the notes
