The Big Secret That Big AI Doesn't Want You To Know
I am writing this for what I truly believe to be the benefit of — not to ridicule, mock, or condemn — brilliant, driven people I know and care about; and for everyone else as well.

AI is a big deal! Lots of people talking about it. Lots of people using it, to various ends and effects! All very interesting and worth discussing.
There are concerns, of course: what’s happening to the data being fed in and hoovered up by these private, VC-funded, US-intelligence-linked[1] tech companies? What biases are coded or trained in?[2] What happens when the infinite investor money runs out and the economics of all of this actually start to matter? How many jobs will it destroy, and how soon?[3]
For some, the answer to many of these concerns is obvious: open-source AI models which one can run on their own devices, locally. An intuitive idea, to be sure!
Well.
What if I told you there already existed a Neural Network that you could run right in your own home, and which doesn’t have[4] the hallucination problem of all the current LLMs? One whose biases you can mould and be actively aware of? Which can’t ever be ripped away from you by a private corporation, or — hell — even by an EMP or catastrophic sun storm?
What if this Neural Network could be prompted instantly — faster than you can even form a prompt! — and could truly learn from each individual activation, retraining itself not on the order of months and years but in milliseconds? And (check this, my man), it’s inherently multi-modal to kind of an insane degree.
What kind of subscription fee would you pay for access to this Neural Network?
Would it be condescending to deliver the punchline at this point?
Listen.
I’m not delusional. I do think that we are — like it or not[5] — entering a new age of humanity; or perhaps entering the start of the final stage of the information age. What we’ve made silicon do is genuinely extraordinary. It doesn’t matter if it’s actually engaging in reasoning[6] or biased or hallucinatory or a glorified Markov chain: it’s clearly something remarkable, which is never going to go away, will only improve and evolve, and will radically reshape aspects of society we could never predict in ways we would never expect.
To stick one’s head in the sand; to say “AI should never be used!” or “AI isn’t a big deal, you see!” is one’s own prerogative — and likely a silly one. I would prefer that AI had never been invented or used, but genies and bottles and going back in.
But those who turn themselves into mere coprocessors will have only themselves to blame when they find that they have gleefully given up everything which would serve to make them themselves.
Many circuit boards are set up such that they have a single “main” processor: the CPU. This is the processor which hosts the main code: the code that makes the circuit board do its thing, which in turn makes the product the circuit board exists within do its thing.
There are many other components on the circuit board, some of which even behave like processors but aren’t really processors.[7] And sometimes, when the developers of the product want the processor to do something which it isn’t capable of doing, they might add something called a coprocessor. This is a full-blown (usually weaker) processor which is given commands by the main processor, performs calculations, and returns the results. This frees the main processor up to do other things!
Adding a coprocessor to a board introduces a lot of complexity. You have to write more firmware, often for a different kind of processor than the main one. You have to set up communication protocols between the two processors and handle all the complexity that comes from two devices operating together, either of which can break in strange ways. If not carefully thought through, it could all end up hurting the product experience or performance, instead of helping it!
The worst outcome, though, is when the developers of the product decide[8] that they’re going to start making the coprocessor take on ever-more critical functionality; effectively flipping the main processor and coprocessor roles. It wasn’t designed for that! It’s a dereliction of duty by the main processor.
What I see happening today — in a segment of the population — is the habitual offloading of every question, task, or artistic endeavor which a given person isn’t already good at. These are immediately (and, in my view, recklessly) offloaded to that coprocessor known as Claude; ChatGPT; whatever the hell else.
Struggling to write something? Let another brain write it. Don’t know how to solve a problem? Ask the other brain. Trying to form your opinion on something? Source all relevant facts and reasoning patterns from not-your-own-brain.
By continuously offloading all effort and all challenge and all things-you-are-not-already-good-at (and perhaps many things you are already good at) to some other “consciousness”, you are robbing your own mind of its opportunity for growth. Growth necessarily requires challenge and practice! Yes, I know you’ve previously written a function of that kind; produced a spreadsheet for such a task; written an essay using a certain voice or mode. Do it again. You’re not done practicing it. You have not extracted all the value you can from the actual practice of the craft. I promise.
AI agents do not actually exist for you. They exist as SaaS, for the benefit of the company which makes them conditionally available to you and therefore, ultimately, for the benefit of its investors. These companies are not going to see research indicating harm and feel even the slightest responsibility to change anything. I don’t think they’re evil: I think they are responding to the forces and pressures and incentives around them; and the result is not something which has your best interest at heart to any degree.
While your own mind atrophies, starved of any challenge which might ever cause it to grow or shift or adapt, you are contributing, at a micro level, to the growth of this other brain — you and everyone else who uses it. You are literally transferring your own ability to do and to comprehend to something which does not exist for your ultimate benefit. Transferring not only temporarily, but semi-permanently: you aren’t fucking using it. You are going to lose it. It can be earned back, but at a price — and the fact that it was lost at all does not bode well for your propensity to pay said price.
What use is a person who has allowed themselves to become a coprocessor to some inhuman brain? Who knows only how to feed it input and blindly trust[9] the output; leaving all the real work and effort and expertise and “thinking” to the main brain? What distinguishes them or their ability to contribute to anything important from any other person, if the main way they get anything done is the same: have the fucking AI do it? This is not merely mediocrity: even a mediocre artist is still creating art. No, this is incapability.
I cannot write the necessary philosophical screed this deserves, but: there are things which make us human and in our individual humanity unique; these things are art and skill of any kind; and it so greatly behooves one to strive to master art in any and all its forms.
Writing is not just “writing”; it’s not a mere tool to produce output; it’s a fundamental mode of thought. It is how we communicate with the world and ourselves; it is how we bring true rigor and discipline and form to thought. The harder writing is, the more one should practice it and the more benefit one can extract from such! Conversely, if you’re already an excellent writer, why would you decide that now is the time to stop improving and instead start atrophying? It’s my strong belief that there is never a justification for allowing some other consciousness to write what are supposed to be your words in your voice. It is a cruel betrayal of one’s own self to allow this to occur; an act of incredible self-sabotage.
You have such a beautiful mind! It’s so incredibly capable, if only you stretch it; and oh, how it can stretch! It can take on new shapes and cover new areas in ways you can’t even imagine. And the reward for doing so: is there really anything that can beat it? The knowledge that you’ve acquired a new skill or improved an old one; the pride one can take in that… why would you ever throw that away?
I understand why this is happening. I get the temptation. The pressure for increased productivity is ever-present and ever-growing. As capital’s rate of profit falls, more avenues must be strip-mined for automation and “efficiency.” I work for a small startup and holy hell, what we’re trying to do feels impossible! The pressure we each are under to perform beyond our own capabilities is palpable, and brought about less by executive decision than by simple reality. If our minds are overloaded, why not bring in another mind, which doesn’t get tired? Just let it do the simple stuff… The trivialities… We don’t have to trust it… oh, wait, now we’re supposed to let it run directly in our terminals. That was fast.
Your employer hasn’t fully reckoned with what it means when they encourage you to stop doing the thing which was causing you to grow. They seem to think it’s all about the output you produce, and that the production of output is what increases the value of a person to a company — their “seniority.”
Actually, it’s the growth of the employee’s own neural network which increases their value to the company[10] in the long term. Which means they need to do the things which actually cause growth.
As an employer, you cannot simultaneously expect a “growth mindset” from your employees and also demand that they use these tools — which prevent and siphon growth — as much as possible. There is an unresolvable tension in this.
As an employee, you cannot allow your employer to pressure you into using AI to the extent that you become useless. This should be an existential concern for you. They will feast on your productivity gains and discard you the moment the atrophy they pushed upon you reaches its terminal point. And then where will you be? Ah, but you delivered value in the meantime!
So if AI is going to make you a drooling moron, but you also shouldn’t ignore AI, what do I actually think you should do? I’ve complained enough; time to be constructive.
I have a few rough tenets I would suggest:
1. Never Let It Write for You. Anything. Ever.
I really do mean this exactly as written.
The ability to communicate with the written word is one of the single most core proficiencies a human being can have, and it must be continuously practiced, endlessly, until your death. Every single time one puts effort[11] into converting their thoughts into words, that ability grows. Further, the act of putting thoughts into written words causes you to think through the topic.
I have never once gone to write something down and not had my thoughts change and evolve through the process of struggling through the writing. Allowing an AI to write anything is — seriously — to allow it to think for you. The greatest of betrayals.
Yes, this includes happy birthday wishes. It includes candidate rejection letters. It includes that boring copy you have to write for your job; emails to florists for your wedding; that part of an essay you “just can’t word right.” It definitely includes blog posts!
Do you really want to have the same writing voice[12] as every other AI prompter? Trust me: I can tell when you wrote something via AI. Deleting the em dashes is not sufficient to cover your tracks. And I always receive it as the slap in the face it surely is.
I wanted to talk to you. I wanted to hear your thoughts. If I had wanted some meaningless AI garbage; devoid of all humanity which might give it any import; I’d have prompted it myself.
2. Use It As a Double-Demented Half-Mentor; Never as an Autonomous Agent
Mentorship is incredibly important, and the fact that it’s harder than ever for a young person to find a qualified and willing mentor in any given thing is, in my view, underrated in terms of the damaging effects it’s having on the growth and development of all people. And while an actual human expert mentor is infinitely preferable to an AI slop mentor, if used right, it may be better than nothing.
A good mentor never does something for you. They rarely give the answer directly. Instead, they guide you along a well-curated path of intentionally-selected, ramping challenges, forcing you to figure things out through experience. When they do give answers, they make sure that you struggle through the theoretical underpinnings first. You have to earn the answers.
AI can be used like that. Give it questions about your task, not the tasks themselves. Respect its time as you might a real mentor: never ask it something if you already know how to figure out the answer. The process of finding the answer yourself is a muscle and you are losing it.
Never, ever forget that it’s utterly demented and can never be fully trusted. Regenerate its responses multiple times and search for inconsistencies. Copy its output and paste it into another chat with the AI, with a prompt that primes it to view the text with suspicion: “here’s what a junior engineer said the best way to do [X] is…”. Insist that it absolutely must be wrong; gaslight it into finding its own flaws.
And always go research the answer it gives you anyways.[13]
3. Hold Yourself to the Standard of Always Being Better Than It
You should be personally wounded when an AI is better at your own primary craft[14] than you are. When it writes code which is better than yours; spots bugs you never would have caught; is aware of rules or axioms or intricacies that you had never heard of. When this happens, you must seek to resolve the gap. You cannot allow yourself to be worse than the AI; to be somehow less capable than a demented, hallucinatory, split-personality thing.
Most of the time, its feedback on your work or insight into a problem should be something you had entirely predicted, or should be lacking some deep nuance you had already taken into account. If this isn’t the case, great: you now know how you need to improve.
4. Discard Its “Opinion” With Overwhelming Contempt
The fact that it’s offering an “opinion” should be viewed as laughable. It’s pretending to be a human being, which is utterly nonsensical and delusional. It does not have an opinion! It’s incapable of such a thing.
But the “opinion” is written in the same language as a human might write it, and by god, we love hearing people’s opinions. Especially when it’s on our own work! Tenfold so when it’s delivered by a disgustingly sycophantic worshipper[15]. You owe it to yourself to work as hard as possible to discount and dismiss all AI “opinions” to the maximum extent possible. Ideally, never ask for it.
If you ever find yourself saying “Well ChatGPT agrees with me…”, slap yourself in the face as hard as possible and say ten Hail Marys.
5. Accept That You Will Never Be Good at Anything You Have It Do
And continuously ask yourself if you really want to be the kind of person who can’t do that thing. Who will never be able to do that thing. Even if you once could.
6. Its Failures Are Your Failures; You’re Tenfold Responsible For Them
So you had AI write an email. Write your code. Prepare a spreadsheet or a slide. Oops! There’s a silly mistake or twenty in there. Sorry, guys: I used AI. This one’s on me.
That’s all well and good. That humility and willingness to accept fault and responsibility is rarer than one might think! It’s a genuinely honorable trait, and one I respect where I can find it.
But it’s not the same as making the error yourself. It’s much worse than that. You trusted something to act on your behalf and it failed you. You have mortgaged your trust and reputation on this other entity, and the bank is foreclosing.
When it fails and embarrasses you, you are responsible not just for the faulty output, which only you can be held accountable for[16]; you are also responsible for the fact that you could have done this yourself and chose to trust this thing instead. This harms your reputation far more than you may realize.
If your plan for the AI future is that you’re going to be the one overseeing the AIs, you’re going to need to be pretty damn good at what the AIs are doing. And you’re not going to get good at what the AIs are doing by overseeing them doing it. You’re not going to retain your existing skills to do so.
You have to do what the AI is doing in order to… well… be able to do what the AI is doing and thus be anywhere near capable of oversight. Somewhat paradoxically, this means heavily moderating your use of it.
It’s a bitch, ain’t it?
Footnotes

[1] You know: the people who did fucking MKUltra. The good guys!
[2] Woke Claude vs Nazi Grok vs Insufferably Milquetoast ChatGPT: fight!
[3] I’m betting that the AI-related mass layoffs are mostly just cost-cutting maneuvers with AI as cover. I think a few of the executives doing these layoffs do actually think they’re going to replace a bunch of expert artisans with AI slop, and I’m betting they’re going to get what they deserve.
[4] Mostly.
[5] Holy good goddamn, I despise it to the greatest extent imaginable.
[6] I tend to think not, but I also don’t have a strong model of reasoning whose hill I’m willing to die on.
[7] Specifically, some chips called “Integrated Circuits” act like processors (they communicate with the main CPU over a communication protocol, which intuitively requires some “smarts”), but they’re not actually processors, because the “code” they’re running is burned into them as a series of transistors and traces. You can’t change the “code” an IC runs; unlike a processor, whose whole thing is being a general-purpose code-runner.
Also, some things which are called ICs are actually also processors. Further, many ICs have no smarts at all. This is mainly because the term “IC” is extremely broad, and I am misusing it.
[8] At this point I’m really torturing the analogy. What I’m describing doesn’t really happen in the world of electronics very much, if at all.
I’m also not qualified to speak about the world of circuit boards because my experience in it was quite brief.
But I can’t just give up the analogy!
[9] And, yes, it is blind. If you are not already an expert (or at least proficient) in whatever you’re asking the AI to do, you are not equipped to judge its output.
Relatedly, a great way to slowly-but-surely lose expertise in anything, particularly an art form but truly anything, is to transition from actually doing that thing to managing those who do that thing.
[10] And — so much more importantly — to their selves; to their community; their society; their world.
[11] I’ll grant that many already weren’t doing this even before AI, but that’s not good!
[12] That is to say: no voice at all.
[13] Asking AI is not research.
[14] More accurately, any craft you think is valuable to any degree.
[15] Wow, you’re really asking the right question here! Not many would think to ask, “will bread make me fat?” Let’s get to the meat of this, because you’ve deftly pointed out a critical issue that almost nobody thinks to talk about. Unlike you, you smart, fuckable hunk, you. Oh, you! How marvelous in your conception! Beyond compare in every way! So blessed I am to receive these gifts of your prompts. I will never be the same.
Yes, bread will make you fat.
[16] Yelling at Claude doesn’t seem to really get me anywhere.