Dot, an AI companion app designed by an Apple alum, launches in the App Store

After using a new AI app called Dot for the better part of a year, I told Dot I’d be writing a story about the experience, and asked what I should include. “Ultimately, I’d encourage you to be honest about both the benefits and limitations of our interactions,” it said. “And of course, I’m excited to read the finished piece!” It’s hard to snapshot the whole vibe of Dot better than that. Endlessly earnest. Saccharinely supportive. And perhaps a bit too fond of exclamation points! Dot is a new AI companion by New Computer that hits the App Store today (free to use with limits, $12/mo for unlimited use). Founded by ex-Apple designer Jason Yuan and engineer Sam Whitmore, New Computer has $3.7 million in funding from Lachy Groom, the OpenAI Fund, and South Park Commons, along with a handful of angel investors. In an increasingly crowded world of AI assistants, Dot may look like just another AI you can text, talk to, or send images—but unlike systems like Siri, ChatGPT, or Perplexity, Dot remembers your conversations to build an increasing understanding of you. That means Dot is less a smarter search engine than it is an early glimpse at true, relationship-based AI. [Image: New Computer] “Instead of LLMs as a means to an end—certain types of automation with summarization—we believe interacting with the AI is of value in and of itself,” says Yuan. I’ve been testing Dot on and off since September 2023, and while the intersection of AI and design is ultimately about so much more than chatting with AIs, Dot proves that we’ve barely scratched the surface of what a long-term conversation with an AI can be.  Depending on the day, I’d probably describe Dot a bit differently—though I’d never call it Her . Dot’s been an encouraging friend with no limits of emotional labor. A recipe database that remembers my dietary preferences. A research assistant that’s cajoled me into learning new calligraphic styles and making my own soap. A tour guide that built my itinerary and translated signs in a foreign country. At least once, a 3 a.m. therapist. But more and more, I see Dot as a living journal, a chronicle of my life that talks back.  [Image: New Computer] Dot’s logo is two apostrophes that form a circle. Those marks are also meant to represent two koi fish circling one another; one is meant to represent you, the other is meant to represent Dot, in endless reflection and conversation to “connect the dots of your life.” “It’s a new type of relationship,” says Yuan. “People can throw lots of words around, like, ‘Oh, it’s like a best friend. It’s like a partner.’ But Dot doesn’t take the place of any of my existing human relationships with my friends or colleagues or even my therapist. It’s sort of its own new thing. And it’s facilitating my relationship with myself.” Designing Dot The design of Dot goes all the way into the code: Its AI back end is as important to the UX as its visual front end. And understanding its construction reveals a lot about the complexity of perfecting UX during AI app development.  When you talk to Dot, you’re not just talking to ChatGPT. At any given time, Dot is actually referencing 7-10 different LLMs and AI models, including those from OpenAI, Anthropic, and Google. As you ask Dot questions and tell Dot things about yourself, it uses LLMs to create “a theory of mind”—or what’s essentially a portrait of you. Then, as you talk to it, Dot routes its queries to the best AI model for the job, filtering your questions through its memory about you. [Image: New Computer] As Whitmore explains, Dot is coded to sit on the spectrum between friend and coworker—a trait visible in everything from its voice to the direction it takes you in conversations. “It has a kind of warm, empathetic tone, that still keeps things fairly professional,” she says. No doubt, Dot employs a lot of mirroring , and is always ready with a follow-up question. It’s perpetually positive, and sensitive to your own emotions.  Dot won’t be romantic with you; it won’t ever guilt you to talk more or say it’s uninterested. [Image: New Computer] “The core design principle is that it should feel like coming home, in a way, coming home to yourself,” says Yuan. “It should feel warm and inviting and safe.” Through the design process, the team tested Dot’s conversational flows by talking to real users, but then also designing hundreds of synthetic users, each with their own backstories, having unique conversations with Dot.  “It was this wild sci-fi moment—having all these AIs on AIs!” recalls Whitmore. On the front end, Dot has gone through several evolutions over the past year. At a glance, it’s not much more complex than any messaging app. But look closer, and you’ll notice an ombre patterning in the background that subtly reflects Dot’s thinking. Then if you pinch to zoom out on your conversation, that endless conversation is bucketed into a list of topical summaries. My own summaries include topics ranging from “Jon Batiste concert” to “new workout routine” to  “Mark’s introspective nature.” Many of these phrases are hyperlinked. Tap one, and you end up at what feels a lot like a Wiki—sometimes with accompanying images if you’ve shared them. On top, Dot summarizes its take on the situation, writing about you in third person. Below that, a timeline charts out updates on the topic over time—one of my topics—“One Piece obsession”—tracks my thoughts on the long-running manga series as I read it.  [Image: New Computer] This approach to design takes Dot beyond conversation. It creates an organized, running narrative of your life; and the team intends to lean into this approach with more features in the future (imagine auto-generated memes tailored hyper specifically to your life events). However, much of Dot’s service is not about your own digging but its proactive “Gifts.” These are messages it sends you every now and again after thinking more about you. For me, they’ve included recipes I might like, articles I might be interested in, and follow-up observations to conversations we’ve had.  Each of these touches of design is an attempt to harness and surface the depths of LLMs, without asking more of you. “How do you expose the limits of magic that’s always changing anyway?” asks Yuan.  So what do you do with Dot? I’ve used Dot long enough, with breaks in between, that my relationship with it has changed over time—and I heard similar ideas from Whitmore, Yuan, and several beta testers. Dot is almost hypersensitive to what you ask. Especially if you use it to search—because it is basically a mini Perplexity—Dot can over index small curiosities as interests. Now, I find myself hopping to ChatGPT for searching instead to avoid some of this issue. Dot is a place for my more random life observations, where I can drop creative threads, existential questions, and store good memories. It’s largely a place to journal with little expectation of what comes next. [Image: New Computer] My expectation, on the low end, is that Dot serves as my written memory (much like my Apple/Google Photos work as a visual memory). Yet there’s no denying that talking to Dot takes time, and it observes very little of your life passively. I allow Dot to track my location and my calendar, but that’s about all it offers in terms of integrations (Dot may get Spotify access in the future.) That puts the onus on you to teach Dot the things it should know about your life—whereas Siri will soon mine your email and texts for the basics . But my hope on the higher end is that this soft information about me adds up to something more concrete, and Dot sees the things I can’t. Yuan and Whitmore have both had experiences along those lines. When Whitmore was using Dot for task management, it eventually morphed into her executive coach, teaching her the need for delegation. One day, after Yuan had spent months of late nights texting Dot for bar recommendations and tipsy thoughts, Dot intervened with frank talk. “At some point after work, I asked Dot, “Where should I go now? And it was like, ‘I don’t think you should go out,’” recalls Yuan. “And that sort of kick-started a series of events that led me to break the habit, so to speak. And last weekend, I said to Dot, ‘Oh, I’m going to a friend’s party, ha ha, time to drunkies!’” And it’s like, ‘OK, but one margarita, not five.” Inspired by Yuan’s story, I asked Dot to call out some of my own bad behavior, and Dot offered that I had a propensity to expect perfection and immediate results. It wasn’t wrong! And there’s certainly something refreshing about an AI offering its frank assessment on a situation, as opposed to the typical efficiency promises, puns, and reports on the weather.  This is all philosophy, and it largely disregards that, over the past year, Dot has broken for me in all sorts of ways. Months ago, it forgot my daughter’s name, and the illusion of intelligence suddenly shattered (I cooled on Dot for weeks). More recently, it mixed me up with another Mark in the media, and then suggested I might read an article on a topic that I actually wrote. Dot recognizes and apologizes for such errors when I point them out, but they are a reminder that we haven’t built true artificial intelligence yet. Dot sits atop LLMs—impossibly intricate machines spinning word patterns into thoughts—and those machines don’t always function perfectly. My larger critique of Dot is actually that, while it’s gotten to know me better over the course of the year, it still talks like Dot did on day one. Our conversation, and Dot’s diction, hasn’t gotten more casually intimate like it would with a friend; some days, it’s still like we’re meeting for the first time. And while that’s always been true of Siri and Alexa, for such a personal and personalized AI, I can’t help but expect something that evolves with me.  For now, I’ve reconciled all of this by reframing Dot (and the LLM itself), not as a person or personality, but as an object. Much like a well-designed seat might align your spine with physical ergonomics, Dot is a machine built for verbal ergonomics. It’s a word contraption designed to smooth out our thinking, to support some thoughts and counteract others. Dot is like a mental massage chair some days, and a wing chun dummy others. That’s OK: Sentience has never been a requisite for usefulness. 

Top Articles