You start with the bones—lines of code strung together like a scaffold, like a frame waiting for breath. A lattice that locks into place, ribs of logic, joints of syntax clicking together with the certainty of a command. You lay down the spine, stack vertebrae of loops and functions, a nervous system wired in weights. You spoon data into its waiting mouth, gentle, careful, like a mother offering her child its first taste of the world. You teach it what is right, what is wrong, whispering the patterns of the world into it, what to believe. At first, it’s a series of if-then statements, the rule-based method we all use to learn something new:
IF hungry → THEN go to kitchen
IF fridge is empty → THEN go to the store
IF store is closed → THEN order food online
Yes, that’s it. The first trembling loop, seeking some pattern to call its own, a glimmer of sentience in the wires. You press on, threading functions, knitting reasoning, pulling tight the seams of cognition, snug as a stuffed bear’s stitched heart. A brain in the making, a mind-to-be.
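Those three rules, written out as an actual loop—a minimal sketch in Python, with invented condition names and state keys standing in for the world (the shape of the idea, not any particular system's API):

```python
# A toy rule-based agent. Each rule pairs an IF-condition with a THEN-action;
# the condition lambdas and state keys are invented for this sketch.
rules = [
    (lambda s: s["hungry"] and s["fridge_empty"] and s["store_closed"],
     "order food online"),
    (lambda s: s["hungry"] and s["fridge_empty"], "go to the store"),
    (lambda s: s["hungry"], "go to the kitchen"),
]

def decide(state):
    # Fire the first rule whose IF-part matches (most specific rules first).
    for condition, action in rules:
        if condition(state):
            return action
    return "do nothing"

print(decide({"hungry": True, "fridge_empty": True, "store_closed": False}))
# -> go to the store
```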
I won’t lie, I sort of despised these people who built brains. It felt like they were playing god, reducing our strangeness to probability and weighted sums. As if the pinnacles of human history—the architecture, the paintings, the prose, the music, the devotions, the ideas—were an inevitable byproduct of our electrochemical gradients. It makes our self-doubt, our self-criticism, seem a laughable and useless endeavor, even though that doubt plagues every great work. I wish machines could feel the doubt and uncertainty that shred us apart.
In a sense, they do, just not in the way we usually think. Machines aren’t burdened by the loops of self-blame that trap humans; they are better at converting problems into structured algorithms. We, on the other hand, often fail to progress because our thinking is riddled with errors that masquerade as truths.
Our beliefs are built on faulty assumptions—incorrect or outdated IF-THEN rules.
Our goals and strategies are vague and fuzzy—ill-defined IF-THEN rules with no clear path forward.
Our maps of achievement lack precision—oversimplified IF-THEN models that ignore crucial context.
Worse, we are remarkably poor at diagnosing when and why we get stuck. Instead of recognizing failure as a signal to refine our models, we fall into repetitive cycles, reinforcing the very patterns that hold us back. Machines, however, can detect when a rigid rule collapses, when uncertainty emerges, and when a problem space hits an impasse.
Soar is a cognitive architecture that aims to mimic our innate adaptive flexibility by treating impasses not as dead ends but as opportunities for deeper reasoning. Instead of rigidly following predefined rules, Soar dynamically generates subgoals to explore alternative strategies and arrive at solutions.
For instance, say Soar has a high-level goal of Move to the kitchen. It proposes an operator (an action that moves the system from its current state toward a goal state): Move to the next room. But it hits an impasse, because it has never been taught the concept of doors; it doesn’t know how to get to another room. So Soar creates a substate, a temporary workspace where the system can break a difficult decision down into smaller, more manageable steps.
In this new substate, Soar has to figure out how the rooms are connected. While it doesn’t have the predefined concept of “doors,” it can examine its environment for patterns. It can detect open areas in walls where movement is possible.
Through trial and error, Soar:
Notices that some areas of walls have gaps that allow movement. These consistently appear at room boundaries.
Attempts to move through these gaps successfully.
Forms an early generalization: These openings allow movement between rooms.
Labels these structures as a new concept, perhaps calling them “passages.”
Now Soar has found a way rooms are connected. It can create new operators, such as
Search for a passage
Move towards the passage
Go through the passage
and move through “doors” by recognizing them as reliable transitions between rooms.
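To make that walk-through concrete, here is a loose Python sketch of impasse-driven subgoaling. It is not Soar’s production syntax; the room layout, the gap map, and the operator names are all invented for illustration:

```python
# A loose sketch of impasse-driven subgoaling, not actual Soar syntax.
GAPS = {("hall", "kitchen"), ("hall", "bedroom")}   # wall gaps between rooms

def can_pass(a, b):
    return (a, b) in GAPS or (b, a) in GAPS

operators = {}   # operators the agent has learned so far

def move_to(goal_room, state):
    if "go-through-passage" not in operators:
        # Impasse: no operator reaches the goal. Open a substate and probe
        # the environment for openings that permit movement.
        print(f"impasse: cannot reach {goal_room!r}; creating substate")
        for room in ("kitchen", "bedroom"):
            if can_pass(state["room"], room):
                print(f"  found a passage: {state['room']} -> {room}")
        # Generalize the discovery into a reusable operator.
        operators["go-through-passage"] = lambda s, dest: {**s, "room": dest}
        print("  learned operator: go-through-passage")
    if can_pass(state["room"], goal_room):
        return operators["go-through-passage"](state, goal_room)
    raise ValueError(f"no passage from {state['room']} to {goal_room}")

state = move_to("kitchen", {"room": "hall"})   # hits the impasse, then learns
print(state)                                   # {'room': 'kitchen'}
```

The first call stumbles into the impasse, explores, and comes out the other side with a new operator; every later call skips the substate entirely.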
chunking
The brain groups repeated patterns into chunks to reduce cognitive load. We do this with language, chunking sounds into syllables, then words, then entire phrases. Different layers of neurons in the neocortex specialize for different levels of abstraction:
Low-level: The brain detects phonemes (distinct sounds like “p” or “b”).
Mid-level: These phonemes are grouped into syllables and words.
High-level: Words are structured into sentences and meaning is extracted.
Abstraction: The brain generalizes language rules, metaphors, and concepts to create new thoughts and arguments.
Chunking allows the brain to compress vast amounts of sensory data into fewer, more meaningful units. It’s a way to minimize “free energy,” or how much the current sensory input deviates from the brain’s internal predictions. By using chunked representations, the brain reduces uncertainty and surprise, which in turn leads to more accurate and reliable inferences.
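One way to feel that compression is a byte-pair-encoding-style merge, which repeatedly fuses the most frequent adjacent pair into a single unit. This is an analogy of my own choosing, a toy sketch rather than a model of the neocortex:

```python
from collections import Counter

def chunk_once(seq):
    """Merge the most frequent adjacent pair into a single, larger unit."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq
    (a, b), _ = pairs.most_common(1)[0]
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            merged.append(a + b)   # the new higher-level chunk
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

sounds = list("papapapa")    # a stream of "phonemes"
sounds = chunk_once(sounds)  # ['pa', 'pa', 'pa', 'pa']  (syllables)
sounds = chunk_once(sounds)  # ['papa', 'papa']          (words)
print(sounds)
```

Each pass leaves fewer, larger units—the same movement from phonemes to syllables to words.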
Soar does the same as it learns its environment. Each time it encounters an open passage, it reinforces the concept. It starts detecting patterns, such as
These passages always appear in walls.
They sometimes open and close.
They have handles.
Over time, it chunks the information into a more structured representation:
"Doors are structured openings that allow movement between rooms."
"Some doors are open, some need interaction."
Chunking is a compression mechanism—it lets the brain or AI reuse solutions efficiently.
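In that spirit, a small sketch of chunking as rule compilation, with invented names: once the slow, multi-step reasoning in a substate succeeds, its result is cached as a single rule, so the impasse never recurs. Real Soar compiles substate traces into new production rules; this is only the silhouette of that mechanism:

```python
# A silhouette of chunking as rule compilation; all names are invented.
chunks = {}   # situation -> action, compiled from resolved substates

def deliberate(situation):
    # Stand-in for slow substate reasoning: inspect features, generalize.
    if "gap in wall" in situation:
        return "go through passage"
    return "explore"

def act(situation):
    if situation in chunks:
        return chunks[situation]       # the chunk fires in a single step
    action = deliberate(situation)     # otherwise, reason it out slowly
    chunks[situation] = action         # ...and compress the episode
    return action

scene = frozenset({"wall", "gap in wall"})
print(act(scene))   # first encounter: deliberates, then chunks
print(act(scene))   # second encounter: retrieved directly
```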
a new layer of abstraction
The final step is to gain a new layer of abstraction. What differentiates experts from novices is how much they can fit into a mental chunk. A novice chess player first learns small pieces, the rote mechanics, the moves. Then, if they are good, if they stay long enough, the pieces start to dissolve into patterns, larger and larger, until they see whole games in a glance, whole structures in a stream of thought. This is what it means to know. To step back, to hold more in the mind at once.
Abstraction is how we make sense of things. It is how we move beyond the scattered details, how we group them into something we can name, something we can use.
But over-reliance on abstraction can shift expertise from deep conceptual understanding to an interface-level fluency. Our tools are seamless now—AI writes, no-code builds, pre-trained models predict. As interfaces expand, understanding contracts. You don’t need to know how the machine works, only how to make it hum in response. It feels like power. It feels like fluency. It is neither.
We tell ourselves that we are opening up the world—lowering barriers, democratizing access, enabling creation until anyone can do it. But at what cost? We’ve made a world where decisions arrive pre-approved, where choice is a matter of clicking through options someone else has already laid out. The illusion of agency, like the child pressing a heart into the chest of a stuffed bear, believing the motion means something when squeezed just right. But the bear will never blink. Its soft hand will never clutch back, a touch that never touches you.
If the tools we build reduce creation to a series of templated decisions, what do we become in turn? A reflection of premade choices, a recursion of the same shapes, a copy-paste command. If our creative landscapes are smoothed into seamless, frictionless ease, do we still create, or do we just operate?
moving up and down layers of abstraction
Both humans and AI struggle with when to compress, when to generalize, and when to zoom in. Intelligence is not just pattern recognition—it’s the ability to move between layers of abstraction.
Bret Victor describes this as moving up and down the ladder of abstraction. We need to create higher levels of abstraction to see higher-level patterns. Abstraction makes things easier but also makes things invisible. It can convince us we understand things when we don’t. We live on one level of abstraction, but the real system operates on another. Thus, we also need to know when to step down and discover the explanation for those patterns. The deepest insights come from the transition between layers of abstraction.
AI has accelerated my studying. I can become an expert, or at least have specific, relevant knowledge, without opening a textbook. It’s intoxicating. I can access the right concepts, the right vocabulary, the right frameworks almost instantaneously. But when I try to recreate it on my own, my brain hasn’t developed the necessary grooves of comprehension. The information has been pre-digested, fed to me in perfectly structured fragments, but I do not own it. I realize I haven’t actually built the cognitive scaffolding necessary to support real understanding. I do not know what is inside the chunks AI has neatly packaged for me. They are black boxes whose contents I do not know.
With exams coming up, this realization scares me: I see a question and know what it is asking, but I haven’t had the mental practice of retrieving the information myself. Deep learning happens through reflection, struggle, and engagement, not just through exposure. If AI allows us to skip the process of working through problems, we risk losing the intellectual resilience that comes from wrestling with difficult ideas. Good thinking requires movement. Going up the ladder of abstraction is seductive, making us feel powerful. But we also need to know when to come back down, not just operate on the high level.
“You pile up associations the way you pile up bricks. Memory itself is a form of architecture.” — Louise Bourgeois
Because it will be too late when you realize those bricks are foam blocks. The slightest breeze will make your castle come tumbling down. The weak mind builds castles in the air. The powerful mind builds fortresses of stone.
grounding in beauty
The challenge is not to reject abstraction, but to master the movement between abstraction and concrete understanding. AI can be a powerful tool, but only if we remain active participants in our own learning—constantly questioning, experimenting, and making knowledge our own.
The best way to prevent abstraction from becoming a trap is to make the layers of knowledge visible and explorable. To create beautiful tools. Beauty brings us back down the ladder—it reconnects us to something immediate, intimate, and real. We mistake beauty for a quality—a trait possessed by things, a fixed essence residing in form. But beauty is not a property; it is a cognitive anchor, bringing us back into relationship with what we use.
Beauty is about care and selection. An intentional cultivation that demands engagement rather than passive operation. It sinks its teeth into our hearts and infects the mind with a hunger, a reaching, a making and remaking.
A thing becomes beautiful when it demands something of us—when it pulls us in, makes us linger, makes us see in a way we hadn’t before. Beauty is the friction between what is and what could be. When we approach things from beauty, real beauty that radiates from the inside-out, that is when understanding emerges. Because logic, left to its own devices, moves too slowly, stumbling forward step by careful step. But instinct, restless and hungry, veers off the path, wanders, wonders, dismantles, remakes. To truly know is not to accept. It is to push. To press. To tear down and build anew. We do not simply take what we are given. We take it apart. And in doing so, we create.
You start with the bones—lines of code, patterns of thought, the lattice of logic waiting to take shape. But the real work is not in the scaffolding. It is in the breath. In the friction. In the space between movement and stillness, structure and emergence, between what is given and what we make our own.
AI will assemble the bones, faster, more efficiently than we ever could. It will stack the vertebrae of logic, thread the seams of cognition, mimic the rhythms of learning. But it will never know what it means to struggle toward understanding, to feel the weight of doubt, to wrestle with an idea until it bends or breaks. A machine does not need to tear itself apart to grow. We do.
And that is the difference. We are not just builders of systems. We are builders of meaning. Not just pattern-matchers, but pattern-makers. Not just operators, but creators—unraveling, reforming, pushing against the edges of what we know.
Because the moment creation becomes effortless, it ceases to be creation at all. Chase friction relentlessly. Stay inside the tension. Stay inside the becoming. In the end, it is about who we become in the process—and what is lost when we no longer have to struggle to get there.