When I first started working with Large Language Models, I was immediately struck by how reflexive they are. You probably were too. You give one an input, and through the magic of next-token prediction, you get a response. Change the input, even a little, and you’ll get a different output. LLMs are all about pattern matching and associative thinking, and, as we know, they can produce errors confidently and without hesitation.
If you’ve read the book Thinking, Fast and Slow by Daniel Kahneman, you’ll immediately recognize that these traits map well onto the characteristics of System 1 thinking. It might be a bit of a stretch to compare a pile of floating point numbers to a mind hosted by a biological system. After all, LLM responses are shaped by statistical patterns, not the survival instincts or emotions that drive System 1 thinking in humans. Nevertheless, I’ve found that leaning into this metaphor is extremely useful.
If you didn’t read the book, here’s a TL;DR: System 1 thinking is the instinctive way we perceive the world and make decisions based on those perceptions. It’s fast and doesn’t take much effort. It’s mostly subconscious, automatic, and can be prone to error. But, it helps keep us alive thanks to its ability to quickly identify and react to threats. System 2 thinking is more deliberate and logical. It requires conscious attention and a lot more effort, but is often more reliable. If System 1 thinking makes you suspicious of somebody who reminds you of somebody else, System 2 helps you realize that they are different people and may not share the same flaws.
When I’m sitting down to work with an LLM, I take a quick moment to try to put myself as far into a System 2 mindset as I can and prepare myself to treat what the model generates as System 1 output. I say to myself, “Ok Duncan, this thing thinks amazingly fast. It’s your job to think slow. You’re the System 2 thinker in this partnership.” With that little reminder out of the way, I get to work.
The benefit of doing this is that it sets up a good mental model for how to work with the LLM. I can give it various prompts, see the System 1 thoughts fly, and then deliberate over what I’m seeing and decide whether to accept the model’s output, ask for changes, or just try a different approach altogether.
This also allows me to better accept the foibles of a model’s output without getting frustrated with it. If a model gives me a silly or incorrect answer, I don’t need to get upset with it for getting things wrong. After all, it’s operating in System 1, right? If it tells me there are 5 columns in a CSV file, but I know there are 7, I can just nudge it and move on.
In other words, it’s my job to be the slow thinker: provide input, watch what’s going on, and keep going. And it’s not just about checking the AI’s work, but also about monitoring my own temptation to over-rely on it. At the same time, I have to keep in mind that humans aren’t perfect either and often fail at System 2 thinking. My own System 2 thinking can be fooled by things a model is confidently wrong about. Thinking is hard, and it’s always useful to exercise all the mental models we have at our disposal to help get things right.
Of course, none of this is static. Models will continue to evolve and incorporate more System 2-like capabilities through explicit reasoning chains, evaluation of multiple reasoning paths, reflection, verification chains, and self-critique. Even so, I think this framework will continue to have value and help me maintain thoughtful oversight instead of defaulting to automatic acceptance.
While reviewing this post, Claude came up with this related thought:
This reminds me of how chess players work with engines. The computer can calculate millions of positions instantly (System 1-like), but the human grandmaster still provides strategic understanding and knows when to question the engine's suggestions. Even as engines have become superhuman, this collaborative dynamic persists.
That’s exactly it!