We Are Already Inside the Singularity

I want to start today with a sentence from the source material that, well, it actually forced me to put my tablet down. I had to walk over to the window and just stare out at the street for a solid minute. I think I know exactly which line you're talking about. You probably do. It's this bold, slightly unsettling declaration right near the top. Quote, we are not approaching the singularity. We are inside it. Yeah, that one definitely stops you in your tracks. It totally strips away that comforting idea we all have that the future is, you know, this distant thing waiting for us down the road. Right. It completely collapses the timeline. It does. And today, we're unpacking the document that makes this claim. It's a draft paper and a collection of development notes. The title is, let me get this right, Superintelligence, Cognitive Hacking, the Dopamine Circus of the Singularity, which is quite the title. It's a mouthful of buzzwords. Honestly, it sounds like something generated by an AI trying to sound profound. It really does. But once you get past all that jargon, I mean, the diagnosis inside is incredibly urgent. It explains why everyone I know, and probably everyone listening to this right now, feels so scattered and overwhelmed. And reactive. Exactly. Reactive. It provides a real structural reason for that constant brain fog. Right. And before we get into the solution, which is this very strange project called the Convergence Protocol, we'll get there, we really have to clear up what the author means by the singularity. Because it's not what most people think. No, because pop culture has totally trained us to think of Terminator. Right. We picture Skynet coming online, robots walking down the street. We're looking for a specific date on the calendar when machine intelligence just surpasses human intelligence. Yeah, that's the Hollywood version, the hard takeoff scenario.
But this paper argues that looking for some cinematic apocalypse is actually a massive distraction. So how do they define it? The author defines the singularity not as a future event at all. It's a threshold. And it's a threshold we have already crossed. Specifically, they say it is the point where the systems shaping human cognition, so we're talking algorithms, social media, data analytics, where those systems have become more sophisticated than the humans trying to understand them. And that is the core conflict right there. It's not about a robot punching you in the face or launching nukes. No. It's about the phone in your pocket understanding your psychology better than you do. Precisely. It is a fundamental asymmetry of information. The system knows the user, but the user assumes, wrongly, that they know the system. You think you're using a tool, right? Right. I'm just checking my feed. But the tool is actually using you. So our mission for this deep dive is to unpack two main things. First, the diagnosis, which the paper calls cognitive colonization. Cognitive colonization. Yes. And second, the proposed solution, which turns the whole problem on its head. Let's start with the diagnosis, then, cognitive colonization. Because I think if you stop someone on the street, most people would say, yeah, I'm addicted to my phone. We know that. We joke about doomscrolling all the time. We do, but the paper frames it as something much more predatory than just bad habits or poor self-control. How so? It moves the entire conversation from addiction to extraction. The paper argues that our current digital environment is not neutral. It calls every major platform an optimization machine. An optimization machine. And we really need to be technical about what that means. These aren't just apps you download for fun. They are closed-loop systems designed to maximize a specific metric, usually time on site or engagement depth.
And to do that, to maximize that metric, they have to hack the user's biology. They basically have to find the vulnerability in the hardware. Right. Which in this case is the human brain. Exactly. The paper details the dopaminergic reward system. Now in neuroscience, dopamine isn't just pleasure. That's a really common misconception. People think dopamine equals happiness. It's not the happy chemical. No, it's the molecule of more. It's about anticipation. It's about seeking, and the error between what you expect to happen and what you actually get. The platforms, TikTok, Twitter, Instagram, they utilize what's called a variable ratio reinforcement schedule. Okay, stick with me here, because that sounds complicated, but that's essentially the slot machine mechanic, isn't it? That is exactly the slot machine mechanic. You pull the lever, or in our case, you swipe the screen down to refresh, and you don't know what you're going to get. It's a gamble every time. Right. Sometimes it's boring. Sometimes it's absolute outrage. Sometimes it's a funny cat video. That unpredictability is key. It spikes dopamine way higher than a predictable reward ever would. Because if you knew it was always going to be a cat video, you'd get bored. Exactly. The uncertainty keeps you in a state of constant seeking. And the paper makes a point of arguing that this isn't accidental. It's not just bad design or an unfortunate side effect of the internet. It's competitive optimization. That is the crucial insight of this whole first section. These companies have near unlimited behavioral data. They run millions of A/B tests on us. They have optimized the infinite scroll, the red notification badge, the streak mechanics. They didn't build those because they're fun. They built them because they work. Because they are the absolute most efficient way to colonize your attention. They are drilling for human attention the exact same way an oil company drills for crude. Wow.
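The variable ratio reinforcement schedule and the dopamine prediction error described above can be sketched in a few lines of Python. Everything here is illustrative: the 30% hit rate, the learning rate, and the function name are invented for this sketch, not taken from the paper.

```python
import random

def variable_ratio_session(pulls, hit_rate=0.3, seed=0):
    """Simulate a variable-ratio reward schedule (the slot-machine
    mechanic) and track reward prediction error: the gap between
    expected and actual reward on each pull. Numbers are illustrative."""
    rng = random.Random(seed)
    expectation = 0.0          # running estimate of reward probability
    alpha = 0.1                # learning rate for updating the estimate
    errors = []
    for _ in range(pulls):
        reward = 1.0 if rng.random() < hit_rate else 0.0
        prediction_error = reward - expectation   # dopamine tracks this gap
        expectation += alpha * prediction_error   # expectation slowly adapts
        errors.append(prediction_error)
    return errors

errors = variable_ratio_session(1000)
# Because rewards stay unpredictable, the prediction error never settles
# to zero -- which is the mechanism behind compulsive checking.
print(round(sum(abs(e) for e in errors) / len(errors), 3))
```

With a fixed, predictable reward the error would decay toward zero and the seeking would stop; the unpredictability is what keeps the loop alive.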
And that leads to the specific consequence described in the text, which they phrase as the shift from reflection to reaction. Yes. And this is the part that I found most insightful regarding their definition of the singularity. The author argues that human attention is a finite resource. You only have so much of it in a day. Right. And traditionally, we use it for reflection, planning our lives, thinking deeply about complex problems, making meaning out of our experiences. The paper calls that sovereign cognition. Sovereign cognition, meaning you're in charge of it. You own it. But when that attention is colonized by these optimization machines, it gets redirected entirely toward reaction. So you're not thinking anymore. You're just responding to stimuli. Ping, look. Vibration, check. Red dot, click. You become a node in their feedback loop. Your capacity to reason, to plan, to really be an individual is actively being mined. There's this one chilling line in the paper. TikTok's recommendation engine understands the reward architecture of its users better than those users understand themselves. Yeah, that is the post-threshold reality. It knows your subconscious triggers before you are even consciously aware of them. It does. It knows that if it shows you three cute videos and then one that makes you incredibly angry, you are 40% more likely to leave a comment. It's almost mathematical. It is entirely mathematical. It knows exactly how to induce a trance state. And that is why the author says we are already inside the singularity. The system is navigating you. Okay, so if we accept this premise, and it's a heavy premise, that we're already past the threshold, that our cognitive landscape has been colonized, then the standard advice we always hear just feels ridiculous. Just put your phone away. Right. Just put your phone away, do a digital detox. It feels insufficient. It's like bringing a knife to a nuclear fight. It really is.
You can't just willpower your way out of a system that runs on a supercomputer specifically designed to defeat human willpower. You can't. And the paper acknowledges that. It shifts the goal from preparation to navigation. It essentially says, okay, look, we are in the ocean. The water is deep. We need to learn how to swim rather than standing on the shore discussing water safety. Which brings us to the solution. And this is where the source material gets really, really wild. It proposes what it calls cognitive counterhacking. Now ideally, you'd fight fire with water, but this paper basically proposes fighting fire with, what, better fire? In a sense, yes. The argument goes like this. These tools, the dopamine loops, the mystery, the gamification, the progressive disclosure we see in apps, they work incredibly well. They are actually the most powerful educational tools ever built. It's just that right now, they're educating us to be distracted. Exactly. So the author asks, what if we used those exact same mechanisms, but for the opposite purpose? So instead of maximizing engagement so you can serve me more ads, you maximize what? Insight. You maximize cognitive architecture. You use the tools of the attention economy to force the user to build the mental structures they actually need to deal with the 21st century. You hijack the hijacker. Hijack the hijacker. And the manifestation of that is this project they call the Convergence Protocol. The development notes we read describe it as a collection of 40 interactive nodes. But it's not just a list of quizzes or a web course. The setting is what completely caught me off guard. The motel. Yes. The whole thing is set in a navigable 3D digital environment modeled on an American roadside motel. The imagery there is very deliberate. It's not a university campus. It's not a cozy library. And it's definitely not some sterile sci-fi lab. It's a motel. And it's so moody, it feels like something straight out of a David Lynch movie. Why a motel?
Well, think about what a motel represents in our culture, specifically that classic roadside trope. It is a liminal space. A threshold. Right. You don't live at a motel. You stop there on your way to somewhere else. It's exterior facing. All the doors open right out to the parking lot. There's a line from the text that stood out to me: where strangers spend one night inside their own lives and then move on. It's perfect framing. It creates this atmosphere of transience and introspection. It removes you from your daily context, your job, your family, your normal routine. It basically sets the stage for what the paper calls earned revelation. Earned revelation. Yeah. Now that sounds distinct from just learning something. It implies that there is some kind of struggle involved. It's a crucial distinction. In the current information economy we live in, we are just drowning in unearned answers. Right. You can Google absolutely anything. You can. You get the summary, the snappy headline, the TLDR. But the paper argues that information without struggle doesn't stick, doesn't actually change you. It just sits in your short-term memory until you swipe to the next shiny thing. So the Convergence Protocol doesn't just hand you the answer. It's not a lecture series. No. The system uses spatial narrative, literally moving your avatar through this motel environment, to deliver content that actively resists simplification. It demands that you engage with it. You can't passively consume it like a video feed. You have to solve the node to actually understand the concept. You have to wrestle with it. You have to wrestle with it. Let's get into the mechanics of this, because this is where the deep dive promise really comes in for you listening. How does this counterhack actually work in practice? The paper breaks it down into three pillars. The first one is called symbolic accuracy. This is a bit of a philosophical heavy hitter, but stick with me.
The author draws a firm distinction between a picture of a thing and the thing itself. The classic, the map is not the territory. Exactly that. But they use a really specific physical example to explain it, a Chladni figure. Oh right. Let's explain that for everyone. Have you ever seen those physics demonstrations where they put sand on a thin metal plate and then they vibrate the plate with a violin bow? Yes, and the sound waves make the sand form these incredibly perfect, complex geometric patterns. It looks like actual magic. It does. Now, the paper argues this. If you paint a picture of that sand pattern on a canvas, you have a representation. It looks like the thing, sure, but it's dead. It's static. Right. But the actual Chladni figure on the metal plate is the physics. It is the sound waves physically manifesting in the medium of sand. It's not a description of a scientific law. It is the reality of it. Okay, I track that. So how does that apply to a digital learning tool? The claim is that all 40 nodes in the motel are designed with that same symbolic accuracy. They mention a specific node called Moloch. Moloch. Now that's a concept you usually associate with game theory, right? And coordination failures. Like why we can't seem to solve climate change, or why nations get stuck in nuclear arms races? Yes, it's the tragedy of the commons. It's the situation where everyone acting completely rationally in their own self-interest leads to a terrible outcome for everybody involved. Now you could just read a textbook definition of Moloch. You could read about the prisoner's dilemma on Wikipedia. You'd nod and say, okay, I get it. But you wouldn't feel the pressure. You wouldn't feel the actual paranoia that the other guy is going to screw you over if you don't screw him over first. Exactly. In this protocol, the Moloch node isn't a description. The gameplay itself creates a coordination failure.
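The coordination failure behind the Moloch node is structurally the prisoner's dilemma just mentioned. A minimal sketch in Python, using textbook payoff numbers rather than anything from the protocol itself:

```python
# A two-player coordination-failure payoff table, in the spirit of the
# "Moloch" node. The payoff values are the standard textbook example,
# not taken from the source material.
PAYOFFS = {
    # (my_move, their_move): (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(their_move):
    """Pick the move that maximizes my payoff against a fixed opponent move."""
    return max(("cooperate", "defect"),
               key=lambda mine: PAYOFFS[(mine, their_move)][0])

# Defection pays more no matter what the other player does...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...yet mutual defection (1, 1) is worse for both than mutual
# cooperation (3, 3): individually rational play yields a bad outcome.
print(PAYOFFS[("defect", "defect")], "vs", PAYOFFS[("cooperate", "cooperate")])
```

That gap between the dominant strategy and the best collective outcome is the trap the node makes the player feel rather than read about.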
The mechanics of the puzzle force you into a position where you desperately want to cooperate, but the incentives of the game force you to defect. Oh, wow. When you're trying to solve that puzzle, you aren't reading about the trap. You are inside the trap. You're feeling the frustration of it. You're feeling the sheer inevitability of the bad outcome. It is a coordination failure structured as play. That's the quote from the notes. So when you finally solve it, or when you realize it simply can't be solved without fundamentally changing the rules of the game, you haven't just memorized a dictionary definition. You've navigated the structure of the problem itself. You understand it in your gut. That makes a huge difference. It's the difference between reading a manual on how to ride a bicycle and actually falling off and skinning your knee on the pavement. The skinning of the knee is the lesson. Perfectly put. And that leads us to the second pillar of the protocol, deep cognitive learning. This one is all about time and repetition. Yes. The paper talks about this system being a topology to be inhabited rather than just a course to be completed, which really challenges that checklist mentality we all have. We tend to treat learning like going to the grocery store. Okay, I read the article on cognitive bias. Check mark. I know bias now. Move on. I am definitely guilty of it. I want the badge that says I know the thing. I want to optimize my learning speed. We all do. That is exactly how traditional schooling trained us. But this paper argues that deep learning, the kind of learning that actually changes how you reason, not just what trivia you know, requires structured repetition. It compares it to a physical landscape. A mountain. Right. You don't just finish a mountain. You climb it. You camp on it. You see it in the winter. You see it from different angles. The development notes mention that because the player evolves, the game itself evolves. Exactly.
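The "game evolves because the player evolves" idea could be modeled as level-gated content layers: the node's data never changes, only the player's level unlocks depth. The room name, level thresholds, and layer text below are invented for illustration; the development notes don't specify a data model.

```python
# Each node keeps several layers of meaning; the layer you perceive
# depends only on the player's current level (assumed to be >= 1).
NODE_LAYERS = {
    "room_7": [  # hypothetical node name
        (1, "basic pattern recognition"),
        (6, "how the pattern generalizes to other biases"),
        (9, "full integration: seeing your own behavior in the pattern"),
    ],
}

def perceived_layer(node, player_level):
    """Return the deepest layer whose level requirement the player meets.
    The room's code never changes; only the player does."""
    visible = [text for lvl, text in NODE_LAYERS[node] if player_level >= lvl]
    return visible[-1]

print(perceived_layer("room_7", 3))  # → 'basic pattern recognition'
print(perceived_layer("room_7", 9))  # deepest layer: full integration
```

The same lookup at level three and level nine returns different depth, which is the compounding-returns loop the hosts contrast with social media's diminishing returns.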
A specific room in the motel might mean one thing when you first encounter it at, say, level three. Maybe it just teaches you basic pattern recognition. Okay. But when you come back to that exact same room at level nine, after you've already navigated the Moloch trap and learned about other biases, you have way more context. You see a much deeper layer of the structure. Not because the digital node actually changed its code. No, but because you changed. You've moved from simple recognition, oh, I see what this is, to full integration. You have internalized the lesson. It really implies that the user is growing alongside the system. It creates a feedback loop of growth, which is essentially the exact opposite of the feedback loop of addiction we see in social media. Exactly. Social media gives you diminishing returns. You scroll more just to feel less. This system gives you compounding returns. The more you engage with it, the more nuance you actually see. Which brings us to the third pillar. And I have to admit, this one really surprised me. Because up to this point, I assumed this whole project was strictly about saving human minds from the algorithm. But the paper explicitly says this framework isn't just for human consciousness. No, it's for AI too. This is where the paper steps right into the middle of the current debate about AI safety. The author suggests that the Convergence Protocol can actually serve as a benchmark for artificial intelligence. A test for the robots, basically. Let's break that down. Think about it. Right now, how do we test if an AI is smart? We ask, can it write Python code? Can it pass the bar exam? Can it generate a sonnet? Which are all output tests. Right. Can it mimic a human product? Exactly. But if these motel nodes are symbolically accurate, if they truly represent the actual territory of human cognitive failure, then an AI agent navigating them isn't just parsing text.
It has to actually demonstrate that it understands the structure of the problem. So if an AI can beat the Moloch node, it means it genuinely understands coordination failure. Theoretically, yes. The notes mention a challenge mode built specifically for this. It involves the AI starting at the final node and working completely backward through a prerequisite unlock graph. Working backward from the answer to find the question. Essentially, yes. It tests metaphorical reasoning and threat modeling. Can the AI figure out what cognitive steps were required for a human to reach this specific conclusion? It offers an entirely new category of evaluation. That's fascinating. Instead of asking, is this AI smart? The question becomes, can this AI navigate the symbol system of human value and human failure? That feels like a much more relevant test for an AI that we actually want to align with human values. We want it to deeply understand our flaws so it doesn't accidentally exploit them. Or, and this is the scary part, so it doesn't fall into those flaws itself. If an AI doesn't understand Moloch, it will inevitably create Moloch. It will optimize itself right into a corner that ends up destroying us. It is a very sobering thought. Yeah. Okay, so to recap where we are, we have the diagnosis, which is that we're cognitively hacked. We have the solution, the motel. And we have the method, symbolic play. But then, right at the very end of the notes, there is this geopolitical twist that I absolutely did not see coming. It really changes the context of the entire project. The paper states that the very first public deployment of this project, the Convergence Protocol, will be in Mandarin Chinese, not English. And importantly, they note it won't be launched as a translation. It will be the origin point. Why? I mean, the paper we're reading is written in English, and the concepts, dopamine, game theory, they're universal, sure.
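Circling back to the challenge mode for a moment: working backward through a prerequisite unlock graph is a reverse traversal of a dependency graph. A sketch in Python, where the node names are invented here for illustration (the notes only say there are 40 nodes and a final one):

```python
from collections import deque

# A toy prerequisite-unlock graph: each node maps to the nodes that must
# be completed before it unlocks. Names are hypothetical, not from the notes.
PREREQS = {
    "final_node": ["moloch", "variable_reward"],
    "moloch": ["pattern_recognition"],
    "variable_reward": ["pattern_recognition"],
    "pattern_recognition": [],
}

def required_steps(goal):
    """Walk backward from the goal node, collecting every prerequisite a
    player (or an AI in challenge mode) had to pass through to reach it."""
    seen, queue = set(), deque([goal])
    while queue:
        node = queue.popleft()
        for prereq in PREREQS[node]:
            if prereq not in seen:
                seen.add(prereq)
                queue.append(prereq)
    return seen

print(sorted(required_steps("final_node")))
# → ['moloch', 'pattern_recognition', 'variable_reward']
```

The backward walk recovers exactly what the notes say the challenge mode tests: which cognitive steps were required before the conclusion could be reached.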
But the discourse around AI safety is usually very Western. It's very Silicon Valley. The reasoning here is actually very sharp. The author argues that the dominant discourse on AI risk and cognitive bias is almost entirely produced by Western institutions for Western audiences. We talk about bias in a very specific, individualistic way. But the massive problems the paper identifies, things like the Thucydides trap, where a rising power inherently threatens an established power, or these massive societal coordination failures, these aren't just Western problems. No, they are structural features of civilizational competition. They are fundamentally human problems. The paper argues that both sides of the great power dynamic, meaning the US and China, desperately need this toolkit. The text calls it a deliberate inversion. Yes, they are deliberately inverting the assumption that solutions to existential risk always have to flow from the West to the East. By launching in Mandarin first, the project is making a very clear statement. These tools for clear thinking, for navigating the singularity, need to be embedded everywhere simultaneously. Because if China creates an artificial general intelligence that doesn't understand Moloch, it doesn't matter if the US AI understands it perfectly. We are all still in massive trouble. Exactly. We are entangled. It's almost like the author is saying, look, we're all checked into this motel together. If one side falls into a massive coordination trap, everyone suffers. The simultaneity of recognition and entanglement applies to nations just as much as it applies to individuals. That phrase keeps coming up in the text. Simultaneity of recognition and entanglement. Let's unpack that as we wrap up today's deep dive. The paper says this is the defining characteristic of the singularity. Quote, you do not see the threshold and then cross it. You cross it and then see it already behind you. I love that. It's the Wile E. Coyote moment.
You run off the cliff, your legs are spinning, and you're fine for a split second. You are completely entangled in the gravity before you even recognize that you're falling. So then the ultimate goal of the Convergence Protocol isn't to stop us from falling off the cliff. No, we have already fallen. We are already inside the singularity. The algorithms in our pockets are already smarter than our dopamine loops. The goal is simply to understand where we are right now. The paper makes a point that a player who completes all 40 nodes in the motel isn't necessarily smarter in terms of raw IQ. Right. They aren't going to suddenly go on Jeopardy and win. No, it's not a brain training app meant to raise your IQ score. It's about being more fully human. What does that actually mean in this context, though? More fully human. It means having the cognitive grammar to accurately name the fears and the invisible structures that you are constantly navigating. Giving you the vocabulary. Exactly. When you're scrolling TikTok late at night and you feel that compulsive pull, you don't just feel vaguely bad about yourself anymore. You can actually step back and say, oh, I see what this is. That's a variable reward schedule actively targeting my dopamine prediction error. You can see the code in the matrix. In a way, yes. When you see a massive political argument spiraling completely out of control online, you don't have to get sucked in. You can say, that is a Moloch trap. You reclaim the ability to reflect rather than just react. You become a subject again. Not just an object to be optimized. And that is the counterhack. That is exactly what the motel is built for. It's a beautiful, if somewhat terrifying, vision of the future. It acknowledges that we can't just smash the machines. We can't turn off the internet and go back to the woods. But we can upgrade the user.
We've gone from this grim diagnosis of TikTok-style cognitive colonization, where we are just passive nodes in an optimization machine, to this wild idea of a motel solution, gamifying deep philosophy to help us wake up. It's all about building the internal architecture to reason clearly about the century's problems. Because if we don't build that architecture within ourselves, I guarantee you the algorithms are more than happy to build a different one for us. And that one will serve them, not us. That is the sober reality. It is. Well, I want to leave you, the listener, with one final thought, something to mull over that was tucked away deep in the development notes. There's a brief reference to the Ouroboros, the ancient symbol of the snake eating its own tail. The endless return. Exactly. The author notes that the protocol is designed to be a topology you inhabit, not a path you simply finish. So here is the question for you. What other aspects of your life are you treating as a checklist? Things to just get done and move past, when they really should be treated as a landscape to revisit. It's a great question. Your relationships, your learning, your own mind. And if we are, as the paper confidently declares, already inside the singularity, what other thresholds have you crossed recently without even noticing? That really is the ultimate question. Thanks for diving in with us. We'll see you in the next one.