Serifos Chora

June 29, 2025

Nestled in the western Cyclades, Serifos offers a quieter, more authentic Greek island experience than famous counterparts like Mykonos and Santorini. You can only get there by boat, either from neighboring islands or directly from Athens, which helps keep it a more mellow destination than islands with an airport.

This quieter pace is exactly what my family and I wanted for the week we just spent there. We stayed at a low-key hotel right on the beach, where we could fall asleep to the sound of waves lapping on the shore. We would wake up, walk 20 meters to the cafe for a coffee, and then head straight onto the sand to hang out under a line of trees planted next to crystal-clear water. Later, we’d go up to the Chora, high above the sea, for stunning evening views and lovely food.

I’d say “wish you were here,” but please, only a few of you at a time. We gotta keep the vibe right. 🤣

A free agent again, for now

June 16, 2025

Almost four and a half years ago, in early 2021, I went to work at Shopify as Technical Advisor to the CEO. And I have to say, it was one of the best jobs I’ve had. It’s like it was custom-made just for me. Working alongside Tobi Lütke as a right-hand advisor meant that I was part of almost every major technical discussion and had a close-up view of how a hundred-billion-dollar company works while retaining its founder-mode ambitions and flexibility.

From helping to define and prototype Hydrogen and Oxygen, Shopify’s headless e-commerce stack, to changing Shopify’s compensation strategy with Flex Comp, there weren’t many things I didn’t have visibility and input into. And, as Vice President of Developer Productivity, I helped drive Shopify’s push to use AI and LLMs, including rolling out Claude Code internally when it was first launched.

Of course, not every day was great. There were two major layoffs and the divestiture of Shopify Logistics, none of which were easy. But easy isn’t part of the job description. Every challenge required applying my skill set in a new and interesting way, and growing new skills quickly as needed.

I said more than once that working at Shopify was the best MBA I could have gotten, and it came with direct experience (and a paycheck!) attached. But, like studying for a degree or serving a tour of duty, there’s a time when you know it’s done and you need to take the next step. As companies evolve, so do the roles within them.

Last week, I served my last day at Shopify. Today, I’m a free agent.

Why? There are several reasons, big and small. Not many really matter in the long view. The one I’ll share here — probably the most important one when all is said and done — is that the world is changing thanks to AI. It’s like 1997 and the early web all over again. But different, and more extreme. Of course, like then, many will be looking for a free lunch or to create party tricks. And, like the web, it’ll take us a while to really figure out how it’s going to change the world. For those who invest the time and effort, however, entirely new horizons are opening up, even as entire industries are about to be rewritten.

So, I’m taking my virtual degree from the School of Shopify and heading into the world. Instead of lining up my next role before leaving, I’m going to take a few months this summer to really reflect on what I’ve learned and what I want to take forward with me. And, I want to step back and do that reflection outside the scope of any single company’s perspective.

It’s time to Observe and Orient. Decision and Action will come soon enough.

AI thinks fast, so think slow

June 14, 2025

When I first started working with Large Language Models, I was immediately struck by how reflexive they are. You probably were too. You give one an input and, through the magic of next-token prediction, you get a response. Change the input, even a bit, and you’ll get a different output. LLMs are all about pattern matching and associative thinking, and, as we know, they can produce errors confidently and without hesitation.

If you’ve read the book Thinking, Fast and Slow by Daniel Kahneman, you’ll immediately recognize that these traits map well to the characteristics of System 1 thinking. It might be a bit of a stretch to compare a pile of floating-point numbers to a mind hosted by a biological system. After all, LLM responses are shaped by statistical patterns, not the survival instincts or emotions that drive System 1 thinking in humans. Nevertheless, I’ve found that leaning into this metaphor is extremely useful.

If you didn’t read the book, here’s a TL;DR: System 1 thinking is the instinctive way we perceive the world and make decisions based on those perceptions. It’s fast and doesn’t take much effort. It’s mostly subconscious, automatic, and can be prone to error. But, it helps keep us alive thanks to its ability to quickly identify and react to threats. System 2 thinking is more deliberate and logical. It requires conscious attention and a lot more effort, but is often more reliable. If System 1 thinking makes you suspicious of somebody who reminds you of somebody else, System 2 helps you realize that they are different people and may not share the same flaws.

When I’m sitting down to work with an LLM, I take a quick moment to try to put myself as far into a System 2 mindset as I can and prepare myself to treat what the model generates as System 1 output. I say to myself, “Ok Duncan, this thing thinks amazingly fast. It’s your job to think slow. You’re the System 2 thinker in this partnership.” With that little reminder out of the way, I get to work.

The benefit of doing this is that it sets up a good mental model for how to work with the LLM. I can give it various prompts, see the System 1 thoughts fly, and then deliberate over what I’m seeing and decide whether to accept the model’s output, ask for changes, or just try a different approach altogether.

This also allows me to better accept the foibles of a model’s output without getting frustrated. If a model gives me a silly or incorrect answer, I don’t need to get upset with it for getting it wrong. After all, it’s operating in System 1, right? If it tells me there are 5 columns in a CSV file, but I know there are 7, I can just nudge it with the correct answer and move on.
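
To make that CSV example concrete, here’s a minimal sketch of what the System 2 check can look like: rather than trusting the model’s claim about the data, verify it directly and feed the ground truth back as the nudge. (The file name and the claimed column count are hypothetical.)

```python
import csv

# System 2 check: verify the model's System 1 claim about the file
# instead of trusting it. "data.csv" and the claimed count are hypothetical.
claimed_columns = 5  # what the model told me

with open("data.csv", newline="") as f:
    header = next(csv.reader(f))  # read just the header row

actual_columns = len(header)
if actual_columns != claimed_columns:
    # Nudge the model with ground truth instead of getting upset.
    print(f"Model claimed {claimed_columns} columns; the file has {actual_columns}.")
    print(f"Nudge: 'This CSV actually has {actual_columns} columns: {header}.'")
```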

In other words, it’s my job to be the slow thinker, provide input, watch what’s going on, and keep going. And, it’s not just about checking the AI’s work, but monitoring my own temptation to over-rely on it. At the same time, I have to keep in mind that humans aren’t perfect either and often fail at System 2 thinking. And, my own System 2 thinking can be fooled by things that a model is confidently wrong about. Thinking is hard, and it’s always useful to exercise all the mental models we have at our disposal to help get things right.

Of course, none of this is static. Models will continue to evolve and incorporate more System 2-like capabilities through explicit reasoning chains, evaluating multiple reasoning paths, reflection, verification chains, and self-critique. Even so, I think that this framework will continue to have value and help me stay grounded in maintaining thoughtful oversight instead of defaulting to automatic acceptance.

While reviewing this post, Claude came up with this related thought:

This reminds me of how chess players work with engines. The computer can calculate millions of positions instantly (System 1-like), but the human grandmaster still provides strategic understanding and knows when to question the engine's suggestions. Even as engines have become superhuman, this collaborative dynamic persists.

That’s exactly it!

The art of saying no

June 12, 2025

It’s never been so easy to create prototypes and churn out interesting approaches. When Claude or ChatGPT can create 50 different API designs in minutes — all you have to do is ask! — the essential skill is no longer generating work. It’s managing the vast number of options available, recognizing what aligns with your goals, and saying no to the ones that don’t meet the bar.

Creatives have long known that trying to solve a problem without constraints can be paralyzing. Too many options can lead to analysis paralysis and decision fatigue. Sometimes the most important thing to do is define some guardrails just so that you can limit the possibility space and move forward. Otherwise, you can find yourself drowning in possibilities, shipping kitchen sink features, or constantly pivoting between half-explored ideas.
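
One way I picture those guardrails: treat each hard requirement as a predicate, and give an automatic no to any generated option that fails one, saving deliberate attention for whatever survives. Here’s a toy sketch; the candidates and criteria are invented for illustration.

```python
# Guardrails as predicates: any generated option that fails a hard
# constraint gets an automatic "no" before deliberate review begins.
# The candidates and criteria below are invented for illustration.
candidates = [
    {"name": "REST, versioned paths", "breaking_changes": 0, "endpoints": 12},
    {"name": "GraphQL, single endpoint", "breaking_changes": 2, "endpoints": 1},
    {"name": "REST, kitchen sink", "breaking_changes": 0, "endpoints": 87},
]

guardrails = [
    lambda c: c["breaking_changes"] == 0,  # no, if it breaks existing clients
    lambda c: c["endpoints"] <= 20,        # no, if the surface area sprawls
]

shortlist = [c for c in candidates if all(rule(c) for rule in guardrails)]
print([c["name"] for c in shortlist])  # only these earn deliberate attention
```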

As I learn (and re-learn) how to use LLMs in my work, I keep flashing back to 2013 and the opening video about product design from Apple’s WWDC. I return to it time and time again, and its message seems more relevant than ever:

The first thing we ask is what do we want people to feel? Delight. Surprise. Love. Connection. Then we begin to craft around our intention. It takes time. There are a thousand no’s for every yes. We simplify. We perfect.

Now that AI can turbocharge your creativity and generate nearly infinite variations of a solution in seconds, thousands of no’s probably aren’t enough. Every decision, even the smallest, can shape increasingly large outputs. As the cost of generation drops exponentially, the ability to curate becomes even more essential to the act of creating.

When I had Claude review this post, it came up with an interesting related thought:

Your piece reminds me of the paradox of choice research - how people are actually less satisfied when choosing jam from 30 options versus 6. But now we're not choosing between 30 jams, we're choosing between infinite jams that can be instantly reformulated. The cognitive load is unprecedented, which makes your point about guardrails and saying no even more critical.

Thanks Claude! You got that exactly right.