
Outside and inside models

We’re at an interesting juncture in the Large Language Model product journey. We have access to an oracle that sounds wise and can talk like us. At one end, we are debating whether this is the beginning of the AI era we’ve been waiting for; at the other, we seem to have gone back decades on product design. I’m talking about the AI products where you start by staring at a blank text box. Gone are the days of The Design of Everyday Things and the notion of affordances.

While chatbots may not be the future, we’ve been missing a good articulation of the AI-first product challenge. The direction Apple has set with Apple Intelligence is worth pondering [1].

In an analysis of the recent Apple Intelligence announcement, Benedict Evans writes on LLMs in Apple products (do read the original note):

Apple is treating this as a technology to enable new classes of features and capabilities, where there is design and product management shaping what the technology does and what the user sees, not as an oracle that you ask for things.

Instead, the ‘oracle’ is just one feature, and Apple is drawing a split between a ‘context model’ and a ‘world model’. Apple’s models have access to all the context that your phone has about you, powering those features, and this is all private, both on device and in Apple’s ‘Private Cloud’. But if you ask for ideas for what to make with a photo of your grocery shopping, then this is no longer about your context, and Apple will offer to send that to a third-party world model - today, ChatGPT.

A world model does have an open-ended prompt and does give you raw output, and it might tell you to put glue on your pizza, but that’s clearly separated into a different experience where you should have different expectations, and it’s also, of course, OpenAI’s brand risk, not Apple’s. Meanwhile, that world model gets none of your context, only your one-off prompt.

This resonates completely with how I feel, both as a developer building LLM products for customers and as a user trying hard to be more productive with an LLM. A chat text box is an oracle that is always reactive: unless you ask, you don’t get. And most of us don’t even know what to ask. That’s not an inspiring experience.

Beyond summarizing emails or meetings, or asking for holiday plans, our struggle with AI is real. There is a competition between “point and click”, where I am already proficient, and a world where I have to “describe the task to get it done”. The latter works on a good day, but it often feels like a colossal waste of time: I have to explain a lot where a few clicks from muscle memory would have just worked.

Can we do better than snapping a chat box to everything?

Yes, with “context”.

In my experience, GitHub Copilot really took the first step here. Inline code completion in the editor actually feels like the AI is helping me by writing exactly what I was planning to write. It didn’t ask me for instructions about what kind of code I wanted. It just wrote it.

Similarly, Apple is taking a step to help me wherever I am, using the context I already have, to do exactly what I would have done. It feels like we’re at a place where a “set of points and clicks” in productivity software can finally be replaced with a “single click”.

The context model, or the inside view, is going to win this round. It can mimic me and help me better than a generic world model.
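To make the split concrete, here is a minimal sketch in Python of how such a routing layer could look. This is not Apple’s actual architecture, and every name below is hypothetical; the point is only that requests needing personal context stay with a private model, while the world model sees nothing but the one-off prompt.

```python
# A minimal sketch of the context/world split described above.
# Not Apple's architecture; all names here are hypothetical.

from dataclasses import dataclass


@dataclass
class Request:
    prompt: str
    needs_personal_context: bool


def run_context_model(prompt: str, user_context: dict) -> str:
    """Private path: an on-device (or private-cloud) model that may see
    everything the device knows about the user."""
    return f"context-model answer to {prompt!r} using {len(user_context)} facts"


def call_world_model(prompt: str) -> str:
    """World-model path: a third-party model (e.g. ChatGPT today) that
    receives only the one-off prompt, never the user's context."""
    return f"world-model answer to {prompt!r}"


def handle(request: Request, user_context: dict) -> str:
    if request.needs_personal_context:
        return run_context_model(request.prompt, user_context)
    # The user's context never crosses this boundary.
    return call_world_model(request.prompt)
```

The interesting design question is where that boolean comes from: the product, not the user, should decide which side of the boundary a request belongs to.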

Have you considered leveraging Context in your product? Are you thinking beyond the magical text box?

Footnotes

  [1] In case you ask, I am not an Apple fanboy and do not own any of their devices.