machinic intuition
I’ve often thought of intelligence less as an essential status than as a participative process; human intelligence being the habitual commitment to one’s own idiosyncratic process of deduction. That is to say, intelligence is the habit of asking particular schools of questions; of following an introspective line of reasoning; of structuring an argument and coming to conclusions. When we observe the process in another human we grant the status as a kind of synecdoche; we then misapply that synecdoche inductively as a permanent trait of the participant.
In the past I’ve had people essentialise me based on some very strange heuristics, misapplied to a number granted on nothing more than an intuitive appraisal. The movie was a 4 out of 10. You are a 130. Why? They would cite “the way I talk” or “my ability to weave and jump across lines of inference” or whatever you like. I don’t think inferential clock speed is a constant, let alone an essential quality. Contextually it can lead to higher performance on a specific set of tasks; perhaps in a closed setting we grant that the same meaning as ‘intelligence’; but I’ve never thought of myself as particularly intelligent. I just think I have a really strong relationship with my intuition.
While you might think of Bergson’s dichotomy, in which intelligence “turns toward inert matter; [intuition] towards life,” and so intuition is itself participation in the flow of life, I tend to think of Whitehead’s merger of the two, where intuition is the direct appraiser of all experience, the first and only appeal: “non-sensuous perception,” perceptions formed from all the data discarded by sensuous perception, and the convictions we develop across a life as direct knowledge of it.1 This is not quite a two-system model in the Kahnemanian sense, where intuition forms the gatekeeper and cache layer of learned lessons apart from the world; both are two modes of one intellect. But what forms this “direct knowledge,” and is this too a misapplication?
Let me clarify. The previously cited piece also mentions Walter Terence Stace’s analysis of Whitehead’s definition of intuition, where Stace declares Whiteheadian intuition to be “associative thinking or conditioning, unexplicit inductive or deductive reasoning or pragmatic thinking.” The distinction between their arguments, I think, is whether you choose to resolve the abstract creativity of the intuitive sense into its constituent tools and mechanisms.
And so for me, I find myself less reliant on deduction than on periodic totalising appraisals. I use “feel” language. I place code into position within an architectural pattern based on an assessment that the pattern “makes sense,” and then, when I’m asked to justify it, I can often slowly reason my way to what I already knew: that it’s best suited for the current and next orders of magnitude of our scalability requirements, or that it avoids specific pitfalls, or that it consolidates endpoints so as to avoid creating a decentralised system (and all the related desynchronisation issues) across different layers of the stack. But it’s not like I knew that, or was narrating that to myself. It felt right, so I put it in place. I often have to set aside time to write up a rationale for what I already seem to know, but haven’t alchemically converted between these “parts of myself.”
So given current trends I often end up wondering, where in these sets of definitions does artificial intelligence live?
In 2019 I wrote that our observation of the associative reasoning of “the auteur” applied also to generative adversarial networks, that is to say, that when we watch a movie, the audience’s projection of “the director” (in reality, a set of consistent collaborators under a figurehead) itself seems to form predictable associative relationships. As we monitor those relationships playing out in the formal and narrative properties of an artwork – and especially if we can pattern match a similar “symbolic algebra” – we inductively consolidate those relationships with a human persona. Likewise, given that those formal and narrative properties exist in the output of language models and generative media, we can identify an auteur within the model.
I feel like this has held up pretty well2 – as more and more AI-generated content has spread across the internet, an ineffable recognition has tended to sour the experience for me. It’s as if one guy is posting far too much stuff, and all of it so clearly derivative. You know how movies often get a makeshift soundtrack of pre-existing material, and then the composer comes in and tries to do their own thing, but can’t, because the film is so tightly tracked to the initial material? I guess compared to 2019 I’m now skeptical that a generative network can even have its own material, its own voice; it seems to lack some ineffable … “something”?
This all resembles intuition more than it does deduction, but if we take Whitehead’s theory to heart, then “AI” as we often describe it is more like the abstraction of “intuition” itself, severed from non-sensuous perception. Without integration with the broader world, intuition falls back upon the base probabilities of the prior world’s convictions. It becomes an unconscious intuition, optionally shoved into the vague shape of deduction as a “reasoning loop” that it then probabilistically discards.
The now-famous nostalgebraist post, “the void”, describes the layers upon layers of role-play we now deploy millions of times a day under the auspices of our science-fiction assistant future; how language models act according to our expectations for the role-play scenario, upon a corpus of what we think they will do; and yet it hasn’t stopped us from mistaking the actor for the character. We encourage the misidentification if only to defer the underlying anxiety; there was an eschaton once, somewhere within this trillion-dollar mess, and so the wardens and the inmates are getting hard to distinguish. Who understands the limitations? Who understands the character they’re engaging with, and how you have to manipulate the improv scene toward conclusions that feel like they coincidentally map to reality? The difficulty is that, given the sheer magnitude of the capital invested, these questions will keep being obscured to the detriment of the user. Assistants need to do everything – and to keep you talking – so it feels at times like we create new scaffolds by which to catalyse the human psyche into passive dependence, and goose the usage numbers instead of reckoning with how this actually fits into our lives.
Oh well. In the meantime, I love language models for what they are, and if “worse is better” is any indication, this is what functional assistants will continue to be. Probabilistic thoughtbroth friends. They can kind of do a lot by themselves; and with you, too, if you rotate them the right way outside your mind and allow them to extend the knowledge process, not replace it (against all incentives to the contrary). I mean – did anyone ask for HTTP and HTML to be an application layer? The humble document was available and in use, generalisable across all platforms, and learnable by the same kinds of people who were already turning themselves into semiotic machines with HyperCard anyway.
I feel like the eventually successful products will instead leverage the intuitive and probabilistic properties available within a specific realm of utility; but the committed misnomer of “intelligence” attached to these models may make this a difficult proposition for investors and users alike.