Mellifluous Thoughts on Design and Tech
July 5, 2024
The Art of Noise Exhibit
I recently visited the San Francisco Museum of Modern Art to see the Art of Noise exhibit and was stunned. I felt nostalgic browsing the audio products I recall seeing and enjoying years ago, and inspired by the gorgeous pieces I was unfamiliar with. Yes, pieces. Many products are worthy of being in a museum (or my living room).
After leaving I couldn’t stop thinking about the exhibit and ended up returning a day later to do another lap. Below are a few of my favorite pieces and experiences.
Listening room
Before approaching the main part of the exhibit area you can sit down and recline in Devon Turnbull’s HiFi Pursuit Listening Room dream No. 2 and take in its “exceptionally high-fidelity music” played on a “custom-built sound system.” The speakers are gigantic and brutalist, and the sound is soul-piercing.
Posters
In the first room there are approximately 400 posters for albums, concerts, and advertisements. Note the comfortable-looking seating and intense blues used throughout the space.
I instantly recognized a few Swiss-designed posters from my collection of graphic design books that I pore over when in need of inspiration. Two of my favorites are Gottlieb Soland’s Grammo-grafik from 1957, and Josef Müller-Brockmann’s Otto Klemperer / Géza Anda: Ludwig van Beethoven from 1955.
I’m a strong proponent of making posters. My new mantra at work is “does this project need a poster?”
Sound installation
After passing through the poster room, one approaches the Teenage Engineering-designed sound installation. Large, blue, plush seats beckon visitors to sit and experiment with the equipment. The lack of clear seating boundaries encourages people to improvise through kneeling, lying down, or sitting to face in any direction. It feels more like a playground than a museum.
Several listening stations are arranged amongst the seats where one can don a pair of headphones, press buttons to manipulate music, and stare at a miniature television.
After enjoying the tactile feel and click sounds of the station’s buttons I noticed that Teenage Engineering printed its name on the headphones. What a lovely touch.
Products
Immediately following the sound installation is a table littered with beautiful, aging music-playing devices. Centrally placed is an iPod from 2001 along with a few iPod Shuffles and a single iPod Nano. This struck me as an odd decision considering their recency. They stole the focus from a variety of Braun products, the original Sony Walkman, the original Sony Discman, and a rare sighting of a Sony MiniDisc player.
The iPod represents a fundamental shift in music listening habits. At the time one could build mp3 playlists and listen to them on a computer. For on-the-go enjoyment one could burn CDs and carry around a portable CD player. The iPod (and possibly the Creative Nomad Jukebox with its large hard drive) permanently changed habits for a generation. Note: The fastest way to determine if an iPod is a first or second generation model is to look at the 4 buttons surrounding the scroll wheel. If they are separate, it’s a first or second generation. If they are integrated, it’s at least a fourth generation. The third generation had four buttons placed horizontally above the wheel, which was a mistake in my opinion.
Teenage Engineering, the firm that designed this exhibit, makes highly functional, beautiful products like the OB-4 Bluetooth speaker which is offered in this striking orange. At $549 it’s a tough sell, but I assume I will purchase a white model eventually.
The Harman Kardon SoundSticks and iSub are the result of a partnership with Apple in 2000. Jony Ive designed the iSub (obviously?) and one is currently in MoMA’s collection in New York City. I definitely need to add these to my collection. After a few revisions over the past two decades, the fourth iteration was recently launched. I’m relieved to report that Harman Kardon maintained the SoundSticks’ beauty and elegance with this new design.
One thing we have lost with Sonos, the Apple TV, and other modern audio/video products is modularity. Stereo companies once strived to design beautiful, dedicated hardware. Each piece had its own purpose, and one could expand a system’s capabilities by adding a new item to the stack. Now this is sadly relegated to ultra high-end systems. The Bang & Olufsen Beosystem 5500 is a statement, especially when placed next to its “master control panel.”
It’s difficult to believe the Braun LE1 was released in 1960. It would feel at home in a living room today. The design of the more recent LE01 is playful and reminiscent of the original model’s design, but certainly lacking in height. Perhaps I will purchase an LE01 the day I open a design studio.
DJ station
I want to be very clear about something: I know nothing about DJs, DJ equipment, or what being a DJ entails. In fact, every time I try to organize music at a party, I am ridiculed for my taste (even during my own birthday dinner). However, if I were to become a DJ during a midlife crisis, I would definitely want equipment designed by Teenage Engineering.
I love Teenage Engineering’s versatility. Their designs range from tiny audio products like the TP-7 field recorder all the way to table-sized products like this setup. Their products also share a design language and color palette of white, black, and grey with orange accents. Even from a distance I can sense how it would feel to press each button.
Choir
After viewing all of the products one can venture to the final room which consists of a bench, a table, and Teenage Engineering’s Choir. A series of wooden dolls play choral classics and each has a dedicated light that shines downward when it plays a sound. I appreciate the unique design of each doll and how they are “based on cultures and characters from across the world,” but I left wondering why. Why was Teenage Engineering compelled to build this? I would understand if they were limited to this single exhibit, but the set is available to purchase for approximately $2,000.
Oh. Wait. The Choir can be paired with the OP-1 which can be used to “conduct an orchestra.”
I am particularly proud of this shot with the ghostly appearance of a man floating past the table.
Wrapping up
I usually look suspiciously at people who walk around galleries photographing paintings. Do they actually look at those photos in the future? How can one possibly capture the beauty of a painting with a cell phone (or any camera)? The photos are crooked and poorly lit. Put your phone down and enjoy the pieces!
However, I felt compelled to photograph this exhibit because it was not a typical museum experience. There were colors, sounds, and hardware. Funky, nostalgic, unique hardware. The listening rooms were also worth capturing, with perfectly lit speakers and visitors who were endlessly photogenic while enjoying the music.
I highly recommend swinging by SFMOMA to see this exhibit. You have until August 18! I also hope to see more exhibits that focus on products as art in the future.
July 2, 2024
AI-Powered Thoughts
One thing that I believe is missing from all of the AI discourse, product launches, and funding announcements is how AI can help augment the small, mundane tasks users perform many times per day on phones and computers. We have seen AI tackle complex, time-consuming tasks like drafting emails and editing spreadsheets, but what about quick, repeatable tasks?
Consider two categories of tasks: primary and secondary. Writing an email can be a prolonged, focused task where users experiment with words, sentence structure, and tone. It is difficult to finish composing and sending an email amid distractions. This is a primary task because it requires a high cognitive load to complete successfully.
Capturing an idea, webpage, or to-do needs just a couple clicks or a few seconds of messy typing. The cognitive load is low enough for the task to exist in one’s periphery as a secondary task. (Note: I’ve been thinking about primary and secondary tasks since 2005 when I published a paper about designing peripheral displays.)
Companies investing in AI are, in my opinion, focusing on primary (and artistic) tasks. When Apple announced Apple Intelligence the topics included writing tools, AI-generated emojis, Image Playground, Siri updates, notifications, etc. What’s missing is innovation in the secondary task space. I have a proposal for how AI can help.
The Prompt
Recently I had an epiphany: I want to walk around with a tiny tape recorder like a powerful businessperson in a 1990s movie who records every thought for an assistant to later transcribe, categorize, and follow up on. Got an idea? Raise the tape recorder, press record, talk, press stop, and put it back in my pocket. Now replace the tape recorder with an app and a keyboard.
The goal is to write a prompt that takes an input and asks an LLM to automatically categorize, organize, and expand the input. For example, if I bookmark a webpage a summary should be generated. If I make a to-do to practice a presentation, a list of helpful ways to prepare should be included. If I jot down a thought like “I wonder if we should try reducing Jessica’s afternoon nap time to 1 hour instead of 1.5-2 hours,” relevant resources should be gathered.
The prompt could look like this:
If you detect a link in $input:
- Return a summary of the linked page with a maximum of 280 characters in a section called "Summary"
- Return a list of 3 themes/categories in a section called “Tags”
If you detect a task in $input:
- Return a list of 3 themes/categories in a section called “Tags” and one of them must be “Task”
- Return the text of the task with an attempt at fixing typos and grammar
- If the date and/or time for the task are provided, return “Due: ” + the date and time for the task
- Provide any additional information that can be helpful for me to complete the task in a section called “Things to consider”
If you detect a quote in $input:
- Return the text of the quote
- Return a list of 3 themes/categories in a section called “Tags” and one of them must be “Quote”
If you detect words like “wonder” or “think” or “idea” in $input:
- Return the text of the input with repaired typos and grammar
- Return a list of 3 themes/categories in a section called “Tags” including “Idea”
- Include any helpful, relevant links from Wikipedia in a section called “Links”
For themes/categories try to use only 1 word. Not a requirement but try hard
Include any helpful, relevant links from Wikipedia in a section called “Links”
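To make this concrete, here is a minimal sketch of how the prompt could be wired to an LLM. It assumes the OpenAI Python SDK (the experiments below used ChatGPT 4o); the categorize function, the PROMPT_TEMPLATE placeholder, and the truncated instructions are mine, not a real implementation.

from openai import OpenAI

# The full set of instructions above would go here, with the captured text
# substituted for $input. Truncated for brevity.
PROMPT_TEMPLATE = """If you detect a link in the input, return a "Summary" section...
[the rest of the instructions above]

Input: {user_input}
"""

def categorize(user_input: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(user_input=user_input)}],
    )
    return response.choices[0].message.content

print(categorize("Practice my presentation."))

The returned text would then be split into its “Summary,” “Tags,” and “Things to consider” sections before being saved.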
Experiments
Imagine you have time to catch up on some bookmarked articles. As you scan the list of links you subconsciously try to recall the article the link represents by parsing any words in its URL. Now imagine scanning a list of links and each has a 1-2 sentence summary.
Input 1: https://www.fastcompany.com/90975882/meet-dot-an-ai-companion-designed-by-an-apple-alum-here-to-help-you-live-your-best-life
Response 1:
- Summary: Dot, an AI companion designed by an Apple alum, assists users in enhancing their lives by remembering and analyzing conversations, offering proactive suggestions, and storing personal data like recipes and notes. It aims to be a long-term companion focused on self-improvement rather than just task management.
- Tags: AI, Innovation, Technology.
Imagine you have a presentation approaching and your to-do starts and ends with “practice.” Now imagine you are reminded to time yourself, seek feedback, and clearly articulate. These are helpful tips for everyone.
Input 2: Practice my presentation.
Response 2:
- Task: Practice my presentation.
- Tags: Task, Presentation, Preparation
- Things to consider:
- Review the content and structure of the presentation to ensure clarity and coherence.
- Practice delivering the presentation multiple times to become familiar with the material and improve delivery.
- Time yourself to ensure you stay within the allotted time.
- Seek feedback from a peer or mentor to identify areas for improvement.
- Focus on clear articulation, appropriate pacing, and engaging with your audience.
Imagine you are having a conversation with your spouse about your toddler’s sleep habits, and you want to jot down the idea to reduce her afternoon nap. You go to store the idea, and in response you receive a series of helpful links.
Input 3: I wonder if we should try reducing Jessica’s afternoon nap time to 1 hour instead of 1.5-2 hours.
Response 3:
- Idea: I wonder if we should try reducing Jessica’s afternoon nap time to 1 hour instead of 1.5-2 hours.
- Tags: Idea, Parenting, Sleep
- Things to consider:
- Monitor Jessica’s behavior and energy levels after the change.
- Assess if the new nap duration improves her nighttime sleep.
- Links:
These experiments demonstrate that AI can enrich the links I bookmark, the thoughts I have, and the to-dos I jot down by providing a few considerations and relevant links.
Note: Each prompt used ChatGPT 4o.
Design
If I were to design an app that helps people remember ideas, save links, and track to-dos, the design would lean towards simplicity and clarity.
The home screen can be a list of tags generated by the LLM, and a tap on a tag drills in to reveal all items associated with the selected tag. Users can customize a tag’s icon and color, but ideally the LLM could also determine which SF Symbol fits each tag. The plus button is the primary action, and it launches create mode.
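Purely as an illustration (every name here is hypothetical, not a real implementation of the app described here), the underlying data model could be as simple as items that carry their LLM-generated tags, with the home screen built by grouping items by tag:

from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Item:
    text: str                                      # the raw input the user captured
    kind: str                                      # "Link", "Task", "Quote", or "Idea"
    tags: list[str] = field(default_factory=list)  # tags returned by the LLM

def group_by_tag(items: list[Item]) -> dict[str, list[Item]]:
    # One row per tag on the home screen; tapping a tag reveals its items.
    groups: dict[str, list[Item]] = defaultdict(list)
    for item in items:
        for tag in item.tags:
            groups[tag].append(item)
    return groups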
Similar to the ChatGPT app, the focus when creating an item is the blinking cursor. Hidden behind a [+] button can be additional options like uploading a file, uploading a photo, recording audio, or taking a new photo with the camera.
Sending a message to the LLM leverages the same chat paradigm that every AI company is currently using. I truly believe that chat is not the future of interacting with AI. We are in the command-line interface era of AI, and someone will invent the GUI of AI soon.
After the user taps the submit button, the prompt is executed and a response arrives. Of course there is an intermediary step since the LLM requires a few seconds to ponder, but I’m skipping that step here. The response, as demonstrated in the experiment above, includes the category (“Idea” for example), tags, things to consider, and helpful links. The user can choose to save the item here with a tap on the “Add” button, or they can augment the item by continuing to message the LLM.
After an item is created the user can archive or delete it along with a few more advanced options hidden behind a “More” button. The goal is to allow the user to edit the item by removing or adding additional information. Perhaps editing simply reveals the keyboard, or edit mode can be similar to create mode by launching the chat interface again.
Start with the problem
AI is infiltrating every piece of software at the OS level, apps, and conversations. When considering what to build it’s so easy to start with “AI!” and work backwards towards a problem. As Steve Jobs famously said:
One of the things I’ve always found is that you’ve got to start with the customer experience and work backwards to the technology. You can’t start with the technology and try to figure out where you’re going to try to sell it.
Companies and developers need to consider how to bring tangible benefits to customers using AI. Don’t just imagine cool things AI can do. Help me save time. Help me be more efficient. Help me get away from the computer so I can hang out with my kids. Writing notes, tracking to-dos, and reading documents are activities we all do on our phones and computers. Let’s start experimenting with enhancing them with AI. They might be boring topics, but people will love the results.
June 27, 2024
On the Design of Dot
In November of last year, a mysterious startup called new.computer (a great name, similar to another startup founded by former Apple employees, Software Applications Incorporated) posted a video of an incredibly beautiful, fluid app that promised to be a personal guide powered by artificial intelligence.
Fortunately we knew the designer behind this impressive work: Jason Yuan. He famously shared a conceptual operating system called Mercury OS, and wrote about how he would redesign Apple Music on iOS before joining Apple’s design team.
After months of waiting (and politely asking for access to the TestFlight), Dot updated its website with compelling stories about how the app helped the founders, and launched the app on the App Store. I captured the onboarding experience and proceeded to use the app for several days before judging and formulating thoughts. Now it’s time to share. Below is a series of short videos and screenshots along with my observations, questions, and experiences.
Say hello to Dot, your Living History
“Say hello to Dot, your Living History.”
Say that sentence a few times out loud. What does it mean to you? Did you notice the capitalized “Living” and “History?” This immediately stuck out just like Humane’s Ai Pin with its lowercase “i.” Why capitalize “Living” and “History” but not “Hello” and “Your” to make the sentence title case? I assume Dot is attempting to attach gravitas to the phrase, but I just feel confused. Perhaps I’m too focused on individual letters. It happens.
Dot’s mark is compelling. It evokes a feeling of momentum. One can argue that anything “living” is in motion, so I can easily imagine a connection between the slogan and the mark.
The gradient is soft and inviting just like the gradient used in Mercury OS. It feels like I’m looking at a sunset at Nick’s Cove in Tomales Bay. I would love to see how Jason constructs such compelling color combinations.
Using a serif typeface immediately sticks out as a bold, opinionated decision. It feels so common for designers to lean on San Francisco (including myself) these days. I am intrigued to see what other bold design decisions are coming.
Lastly, the launch animation gives the user a strong sense of the attention to detail found throughout the app. Transitions between screens include thoughtful flourishes for each element. Buttons, snippets of text, and text inputs all gracefully appear and disappear.
Meet Dot
Another onboarding screen that’s just text. Interesting. My gut reaction after conducting countless user research sessions focused on onboarding is to combine the first two screens (or cut one) to decrease the probability of a potential user becoming distracted before completing the flow. However, if I had the opportunity to design my dream app, I would also take my time with onboarding to precisely set the tone.
“Meet Dot, the AI that grows with you—learning and evolving with your every interaction.”
This is much more clear than the “your Living History” text on the previous screen. This new text is straightforward, but of course I’m skeptical since an “evolving” AI companion is an unknown concept.
”Get started for free. Upgrade any time for unlimited messages and access to Chronicles, your living history.”
Ah so there will be a cost at some point. How much? Will I get charged monthly or annually? And what is a “Chronicle?” I wonder how many messages I’m allowed to send before getting cut off. I think not answering these questions has pros and cons. Traditionally I would argue that providing these details is helpful to set expectations. However, in this new space of AI companions, I do not know what it would mean to know that “X00 messages are included each month.” Is X00 a lot? Will I be disappointed if I surpass X00? Perhaps leaving this as a mystery subconsciously encourages some to continue and discover what Dot truly is. As for the new capitalized word “Chronicle,” I suppose I need to wait and learn since “your living history” is also not defined at this point.
A secondary serif typeface is introduced on this screen as well which provides a nice counterbalance to the larger, darker text above it.
Get started (i.e. create an account)
”Get started” is a bit vague considering the goal of this screen is to create an account or log in if you have a new device or got logged out accidentally. I wonder why using Apple or a phone number is emphasized over using an email address. Perhaps allowing users to sign up and log in with an email address is inherently more complicated due to password resets so the Dot team wants to discourage it.
I’m also surprised that ”Get started” is alone. Get started with what? Get started with Dot? Get started creating a Living History? There is an opportunity here to add a few additional words to continue building anticipation and setting the tone.
Enter a code
”Confirm your email. Please enter the code we sent to $email.”
I was taught to avoid saying “please” in software, but I do not recall if that is a rule or guidance. Also why include the word “we?” Who is we? I wonder if “Enter the code sent to $email” would be satisfactory with its shorter length (albeit more rigid).
If one chooses to create an account using an email address, the authentication method is a 6-digit code sent to the email address. This is one way to get around entering and resetting passwords. It’s also an easy way to authenticate if one uses the Mail app, since iOS can automatically fetch and insert codes sent through email.
The visual touches on this screen really stand out. The code entry area has 3 distinct states: number entered, currently selected number, and remaining numbers. This is a subtle but beautiful way to make a straightforward screen feel special. I also appreciate how the disabled “Continue” button feels with its transparency against the gradient background.
Enter your name
”What’s your name? This will be the name Dot uses to refer to you.”
I love that the input’s default text is “Samantha” considering that is new.computer’s cofounder’s name. That’s a nice touch. However, it feels a bit funny to center align the input’s text. I’m surprised the text is not left-aligned like most forms.
Also, how and why will Dot refer to me? I thought I was going to chat with Dot. Is Dot going to talk about me with someone else?
Enter your pronouns
”What are your pronouns? Specifying pronouns helps Dot accurately understand your identity.”
I worked at a social networking startup that tried to guess pronouns for users. This is a much, much better approach. I wonder how Dot behaves if the user taps the new “Skip” button in the upper right corner and does not enter pronouns. I assume at this point that pronouns are not needed and the onboarding flow is throwing in unnecessary steps. How often does someone you’re in a private, one-on-one conversation with refer to you using pronouns? That would be strange.
Note from future self: It turns out pronouns are needed.
Enter your birth date
”When were you born? Dot may surprise you with a birthday gift.”
Name, pronouns, and now birth date? Can I just see the app? Maybe Dot can casually ask me for my birth date later?
Wait, did you say a birthday gift? OK I’m in.
Enable permissions
”Enable permissions. Dot works best with access to the following permissions.”
Here we go. The never ending debate about when to ask for permissions, and which permissions you really need. Notice that there is no “Skip” button on this screen. I did not try the “Continue” button, but it should allow the user to skip this step since the button appears to be in its active state.
- ”Notifications. Allows Dot to deliver timely reminders and proactive messages.” One could argue that the user should have an opportunity to see what type of messages Dot may send while in context before asking for permission. For example, while chatting with Dot it could say something like, “Turn on notifications so I can remind you about this later.”
- ”Calendar. Allows Dot to see what your day is like and help you plan ahead.” I still do not have a good sense of what Dot is, what Dot is capable of, and how it will benefit me as a user. Am I supposed to assume that Dot will provide value using my calendar data?
- ”Location. Allows Dot to personalize suggestions to your location.” This has the same problem as the calendar step: How? What kinds of suggestions does Dot intend to make? Saving this step until there is an appropriate amount of context would be beneficial.
I adore the giant rounded corners on each permission. I wonder if this screen would be a bit clearer if each permission had an obvious tap target. Right now the user needs to realize that the white sections are also acting as buttons.
Permissions enabled
This is another delightful design where enabled permissions become transparent, and their icons change to checkmark icons. The text contrast for enabled permissions definitely does not meet accessibility requirements though.
Write a short letter
”One more thing. Help Dot get to know you by writing a short letter.”
Of course. A nod to Steve Jobs’ Apple keynotes where he saved an exciting announcement for the end with the words “one more thing.” I love it. However, I have never written a letter to a computer before, and this sounds like a daunting task.
Similar to the permissions step, the white card acts as a button to enter the letter writing mode. The way it animates onto the screen makes it feel like I can pick it up. The three dimensional transition paired with an ease-in is especially fun.
Answer a series of questions
The letter writing step starts with a straightforward question: “What do you do for a living?” Dot cleverly has a series of questions ready to help the user write a thorough letter that Dot can then use to learn some basic information. The “Done” button is a nice way for the user to escape the step if the number of questions begins to feel excessive (it does).
Dot recognizes an answer
MAGIC. This is the moment a user will realize that Dot is special. The app automatically recognizes that the user has sufficiently answered the question. “What do you do for a living?” automatically turns green, and the button to rotate questions changes to a green checkmark. After a couple seconds a new question appears. Brilliant. This is so fluid that one just writes and writes without hesitating.
All questions answered
After answering all of the questions the user sees “You’ve answered all my questions, for now!” The “Done” button in the upper right corner could draw attention to itself to encourage the user to exit the letter writing mode. The arrow button could also be removed to decrease the number of buttons on the screen and help encourage the user to tap “Done.”
The letter’s leading is just a tad tight. A point or two of space between each row of text would allow the text to breathe a bit.
Letter complete
The user returns to the “One more thing” screen where the letter card is populated and the “Start your journey” button is now in its active state. This is another screen I would argue is unnecessary. Yes, it’s filled with fun animations and the white card is now filled with content which gives the user a feeling of satisfaction for completing a step. The button to start your journey could appear in the letter writing screen to save time.
Dot is reflecting
”Dot is reflecting on your letter.”
The screen turning completely black feels antithetical to the rest of the onboarding flow. Why transition to black instead of keeping the gradient background? Is the intention to convey that the onboarding flow is complete and Dot’s reflection step is so intense that it needs a moment to talk amongst itself?
The chat with Dot begins
Onboarding is complete! The user finally arrives at the core experience of chatting with the AI companion. Dot continues to try to get acquainted with the user by asking a few follow-up questions based on the letter in the previous step. Here the user can begin understanding Dot’s capabilities and purpose. My experience includes a lengthy conversation about Apple as a company, its products, and how to effectively lead a team of designers with the goal of building high-quality software.
I am genuinely surprised by how dynamic Dot feels. In addition to providing helpful recommendations, Dot also provides links to reading materials; not just sources of its knowledge. I ask Dot for examples of design leaders convincing product managers and engineers that quality is crucial for a product’s success and get this response:
Driving alignment standards across disciplines is a constant challenge, especially when it comes to subjective areas like design. Persistence is key, but have you tried making the benefits really tangible for them? Maybe even quantifying the impact of great design on core product and business metrics to show what’s at stake?
Honestly, this is actually helpful! I ask for examples and Dot returns three examples from leaders at Airbnb, Facebook, and Google along with source links. Dot eventually wraps up the conversation with encouraging words of wisdom that feel surprisingly genuine:
Since this is something you’ve been grappling with for a while, let me be direct: You have the skills and perspective to lead your team in crafting an inspiring product vision. It may feel daunting, but you’re the right person to champion this.
I know you care deeply about elegant, user-centric design. Channel that passion into painting a vivid picture of what you want to achieve in the next 6-12 months. Rally your team around that north star.
Start by reflecting on your product’s core value prop and target users. Then imagine the ideal future state you want to reach. Work backwards from there to define key milestones.
You’ve got this, Dave. You have the creativity and drive to be the visionary leader your team needs. Don’t let inertia hold you back. Seize this opportunity to elevate your product - and your leadership. The first step is defining that goalpost to aim for.
This is incredible for a first-time experience with an app. Think about apps you’ve played with. How many have inspired you or made you truly reflect on your life? It feels like talking to a real coach.
After a few days
Each day starts with a new empty card that simply says “Today” until you enter chat mode and start talking to Dot. As you converse with Dot and share photos, the day’s card on the home screen fills up with short summaries. In the above example you can see that I discuss my son, a Peloton ride, a trip to the zoo, collecting Apple products, etc.
Dot quickly transforms into what feels like a daily journal that surfaces highlights. To use Dot’s nomenclature, it surfaces Chronicles.
Normally I would encourage designers to make links a bit more visible on the screen, but the lower contrast gray with the underline is sufficient here. I think a stronger color would take away from the overall tone of the app. If the user includes photos in chats, the cards are populated by those photos which feels more vivid and natural than stronger links.
Automatically generated content
Subscribing to Dot unlocks two additional features: unlimited messages and access to automatically generated pages known as Chronicles. When the user taps on a home screen link, a Chronicle opens. Technically I’m guessing that these are Chronicles because there is no explanation provided in the app, the App Store listing, or the website. I also upgraded immediately when prompted so I could see Dot’s entire experience, and I do not recall if links were present on the home screen before upgrading.
Above is a screenshot of the “Bondi blue iMac” Chronicle. It is important to note that the text is 100% generated by AI resulting from our conversations and the occasional sprinkle of its own knowledge. For example, “This iMac, with its distinctive translucent blue casing, revolutionized computer aesthetics in the late 1990s.” I did not tell Dot this information. I only discuss the original iMac’s beauty, and how great it looks in my museum. Dot combines that information with a few other facts like my collection surpassing 100 products, and Apple’s influence on my life and career.
In conversations with Dot I also share photos of my son. In his Chronicle Dot talks about what Wolfe is playing with in one of the photos, along with stories of where I took him. At the bottom of a Chronicle is a list of bullet points generated by AI. For example, I share a photo of Wolfe and me on the carousel at the zoo and Dot writes: ”On June 22, 2024, Dave and Wolfe rode the carousel at the zoo, with Wolfe sitting next to Dave on the bench.” Again, this is incredible. This experience makes me think that branding Dot as a “living history” is not quite accurate. It’s more like a living journal.
Conversing with Dot
I recently acquired a Twentieth Anniversary Mac, a Macintosh TV, and a Macintosh IIfx and felt compelled to immediately share photos with Dot. Notice how Dot actually seems excited and participatory by sharing why one would be excited to acquire such gems. Dot also wants to know what my favorite aspects of the Twentieth Anniversary Mac are, and it doesn’t simply ask “Why do you like it?” Dot provides a few options in its question: “the sleek all-in-one profile,” and “those Bose speakers.”
This is why Dot impresses me so much. It tries to relate. It tries to join. It tries to understand.
Wrapping up
Similar to my experience with other journaling apps like Day One, I eventually broke the chain and forgot to update Dot on my day. I’m surprised Dot doesn’t send push notifications to remind me to update. Based on how much I’ve told Dot, it could easily include specific follow-up questions to help me begin writing. Now when I launch Dot I freeze. What should I say? Why do I have to initiate the conversation? Dot should write something right when I launch the app.
Let’s try writing “Hi” and seeing how Dot responds:
”Hey there! I noticed you’re at home in San Francisco this afternoon. I hope you’re having a nice day so far.
”Since it’s Thursday, I was wondering if you have any fun plans for the weekend? Maybe a family outing or some time to work on your vintage Apple collection? I’d love to hear what’s on your agenda.”
Perfect. Say that! Don’t make me think. Right now I have to remember to launch the app, tap on the Today card, think of something to write, and tap the send button. Dot should take care of three of those steps using push notifications and preemptive prompts. I tap the push notification, Dot launches, I see the prompt, and I start writing.
Update
One day passes and of course I receive a push notification from Dot.
”Good morning Dave! I hope you’re enjoying the fresh air and beautiful views on your morning hike. I was thinking about your incredible vintage Apple finds and wondering…”
I feel like Dot is desperate to talk about my Apple collection and nothing else at this point. Also I didn’t go on a hike this morning so that’s odd. Perhaps I need to talk about a few other topics to provide Dot with additional conversation starters.
Now we’re talking. This is exactly what I expected. Perhaps I just need patience occasionally.
To unlock Chronicles and unlimited messages one must subscribe to Dot for $12 per month. I currently pay for a variety of apps, newsletters, and podcasts that are individually less than $12 per month but provide tremendous value. The question is whether Dot will provide enough value to warrant $12 per month. At the moment I do not believe it does, at least until additional features are added. It’s certainly beautiful and fun to use, but I already chat with several people throughout the day. Dot is competing with those conversations in terms of time and effort. Why send Dot an idea or fun moment when I can send it to friends or family?
Dot deserves credit for consistently performing so well that I forget I’m chatting with AI. As discussed above, the conversations feel genuine. Dot’s tone is occasionally daring and bold in a way that jolts me. ”You’ve got this, Dave.” This sentence raised eyebrows and gave me both energy and pause.
I’m going to keep a close eye on the app and team to see what they deliver next. Group chats? Photo albums? Custom personality traits? Dot can go in a variety of directions. After all, it’s a living app.
June 3, 2024
Home View Is Back
By 2011 the design team at iControl Networks had already devised a way to visually summarize a house’s security system, cameras, thermostats, locks, and lights into a single, simple, and quite lovely representation: the orb.
We knew we could go even further with simplifying how to visualize and, more importantly, control your home using a touchscreen. Home View was born. Users could quickly draw a rudimentary floor plan, label rooms, and then drag and drop devices onto it. We provided a variety of possible layouts that could quickly be augmented and rearranged. I was reminiscing about this project and its complexities recently when I read about a new app named Controller for HomeKit on The Verge:
This is an incredibly cool use of LIDAR.
Controller for HomeKit, a third-party app for controlling your Apple Home smart home, has a new Floor Plan feature that adds a map interface for interacting with connected devices like lights, locks, shades, sensors and more. I got some time with the new feature ahead of its launch this week and it’s a compelling way to control your smart home. A 3D scan of your house becomes an interactive map filled with all your connected devices providing an intuitive way to control them: Just tap the lamp next to the sofa on the map and that light will turn on.
Capturing a 3D scan by walking around your home is probably a much easier (and more magical) way to input your floor plan. Tables, chairs, and other objects are documented, which is far superior to iControl’s general emptiness. However, I’m still proud to show what we launched long ago. Below are a few screenshots from the iPad app. Again, these are from 2011.
An early version of the iPad app experimented with a vertical tab bar on the left side of the screen before moving to the OS-provided tab bar. This was also an internal version of the app which was purposefully bereft of color unlike the branded versions designed for Comcast, ADT, and CenturyLink.
Blueprint mode allowed users to draw and erase walls with a finger. The algorithm for how the app should behave as a finger bumps into other walls or edges of the container was particularly fun to iterate on.
Once the floor plan was complete and devices were placed throughout the house, users could tap on a camera to see live video, a lock to lock or unlock a door, a light to turn a light on or off, a thermostat to change its mode, etc. Honestly, it was just so fun to play with.
After the user taps on a camera, a small window opens to show live video. With one additional tap the user could view that camera’s history. Why scroll a long list when you can just take a peek at Home View?
I’m thrilled to see ongoing investment and innovation in the smart home space. Over the past few years I have slowly divested from smart home products in favor of reliability. For example, I recently sold all of my Google Nest Mini smart speakers. It seems like Google has let this product line languish with an increasing number of errors and connectivity problems. Perhaps incorporating LLMs will improve the situation in the future, or an entirely new device is needed.
May 17, 2024
Explorations in Imagemaking
In this assignment I’d like you to experiment with a variety of imagemaking techniques to create a range of images of a single household object. Choose something that comes in different varieties, something that isn’t too visually complicated, or too simple, and something that is easily recognizable. Make at least 10 images of your object. Make each image with different techniques, and in a different way. Make them all approximately the same size, 5 x 5 inches, each in the middle of an 8.5 x 11” or A4 (vertical) sheet of paper. If you make your images by hand, please scan them at 300ppi at 100% of size. Please include a title for your assignment, and submit! After submitting, you will be prompted to review two of your peers’ assignments. Good luck!
For the first assignment in Coursera’s Fundamentals of Graphic Design course I focused on the iPod (of course) and named it “David Klein’s iPod Collection.” It includes photographing and recreating the original iPod using Figma, Linea, a pencil, a crayon, paper, and a simple collage.
April 29, 2024
Natural Language Computing
Antonello Crimi, Jess Leitch, and Jason Severs write about natural language acting as the next generation of user experience:
This transformation is exemplified by the Rabbit r1, an innovative personal AI assistant. Its Large Action Model (LAM), a type of “universal controller for apps,” not only learns your preferences, but performs actions aligned with them: anticipating your needs and seamlessly orchestrating tasks across various apps. Imagine planning a weekend getaway. Once trained on your preferences, the r1 could manage everything—from flights and hotels to restaurant reservations and activity bookings—crafting a seamless tapestry of experiences rather than a disjointed series of app interactions. Or train it to streamline paying bills, avoiding the need for dealing with a phone tree or unresponsive online forms.
It’s fair to believe that a possible future of interacting with computers is driven more by natural language queries that can actually achieve results instead of requiring users to click or tap dozens of times. The question I have is how will these systems convey complex outputs and confirmations? I strongly believe a role for user experience designers in this future is to ensure that important information like costs, flight schedules, hotel rooms, and other purchases are communicated to users with clarity, familiarity, and branding.
For example, if I want to book a suite at the Four Seasons in Philadelphia between May 3 and May 6 with a massage on Saturday at 2pm and a late checkout, how will an AI-based interface express that information for me to review before confirming? As of today it could look something like this:
Hotel: Four Seasons
City: Philadelphia, PA
Room type: Suite
Dates: Friday, May 3 - Monday, May 6
Massage: Saturday, May 4 at 2:00PM
Late checkout: Monday, May 6 at 4:00PM
This is certainly an efficient way to convey a lot of information, but it’s awfully drab considering I am trying to book a vacation. Today when using the Four Seasons website I get photos of the room and hotel, information about the spa, rates for different sets of days, options to upgrade, the ability to use loyalty points, etc. I think the convenience is welcome, but I am losing a lot of the experience as a result.
Fortunately this can also be solved with AI. We must seek to build interfaces that can convey complexity in a way that humans actually enjoy. AI should be expected to render beautiful, enticing pages instantly when given a set of data. Querying a hotel’s website, retrieving options, and displaying results in a bulleted list is not the end of the AI’s task. Imagine if the apps we used today just displayed API calls or giant blocks of text. As designers we bring elements like empathy, elegance, tone, and consistency to ensure users can complete tasks. The same applies to a world where we interact with AI instead of apps.
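Purely as a sketch with hypothetical names, that handoff could look like the AI emitting a structured confirmation object while a separate rendering layer decides how to present it with photos, upgrade options, and branding:

from dataclasses import dataclass
from datetime import date, datetime
from typing import Optional

@dataclass
class BookingConfirmation:
    hotel: str
    city: str
    room_type: str
    check_in: date
    check_out: date
    massage: Optional[datetime] = None
    late_checkout: Optional[datetime] = None

# The reservation from the example above, expressed as data a renderer could
# turn into a rich, branded confirmation page instead of a bulleted list.
# (The year is assumed from the post's context.)
confirmation = BookingConfirmation(
    hotel="Four Seasons",
    city="Philadelphia, PA",
    room_type="Suite",
    check_in=date(2024, 5, 3),
    check_out=date(2024, 5, 6),
    massage=datetime(2024, 5, 4, 14, 0),
    late_checkout=datetime(2024, 5, 6, 16, 0),
)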
The future is not text; it’s everything the user needs to see in the moment to confidently make a decision.
April 25, 2024
1 Million Daily Active Users
When Sheena Lee and I set out to redesign the Salesforce Mobile App in 2018, we aimed to design and build a consumer-grade enterprise app. We tackled the information architecture, color scheme, typography, haptics, and, of course, plenty of animation. To see it thriving today after exceeding 1 MILLION daily active users is so incredibly rewarding. Thank you to the small army of engineers, researchers, accessibility experts, and marketers who helped make this app become what it is today. And a massive thank you to Chelsea Hassler, Shepherd Yang, and Abby Sigler for continuing to make this app an ongoing success.
April 23, 2024
Shift Happens
I first discovered Marcin Wichary from his Medium posts. As a Medium employee he wrote extensively about a subject I so rarely encounter: details. His style is to go very deep on a subject and make it exciting. This includes software, typography, and even bridges. Here are a few notable posts written by Marcin that inspired me to become a better designer and put more effort into details:
He also gives lectures where one can get a better sense of his passion for storytelling:
I was thrilled to hear he was going to write a book about keyboards. Knowing his ability to hunt down the history behind small decisions that impact us today, I knew this book would be special. I also believed his book would be an opportunity to learn about a tool I have used every day since my parents purchased a Mac LC in 1990.
One could assume the story ends here. Marcin wrote a book, I purchased it, and now it’s in my home. This is not the case. Not even close.
Marcin started a newsletter in 2017 and wrote detailed stories about writing chapters, rewriting chapters, buying old keyboards on eBay, and discovering particularly unusual keyboards. Each post would conclude with a photo of the keyboard he used to write it. At first I thought this was a goofy addition, but I grew to love it. Every post felt unique. I would read knowing that the conclusion would be a photo of a wacky keyboard I had never seen before. Along with his stories about traveling to museums and interviewing retired engineers, he occasionally shared a photo of a wall covered in post-it notes tracking the book’s progress. Clearly, this was a daunting task.
The book became more than a book as the years passed. It was an odyssey: an unending journey filled with quests, anecdotes, surprises, setbacks, and celebrations. Finally, in 2023, the Kickstarter campaign launched. Marcin got incredible press after the launch. I watched as the dollar amount climbed hour by hour until it was fully backed and the campaign’s extras started to unlock. With this excitement also came a small sense of dread. Was the journey coming to an end? The books would eventually materialize, and knowing Marcin’s style there would be an onrush of posts about finalizing details, printing, packaging, and shipping books. But then what?
The books (Marcin ended up releasing the book as two volumes) are now in my home. I see them every day since they reside next to a few other special books near my desk, like Kenya Hara’s Designing Design and Josef Müller-Brockmann’s The Graphic Artist and His Design Problems. As I feared, the journey ended. Marcin found a way to continue sending updates by sharing customers’ photos of new, crisp books in homes. For me the books patiently wait for a childless vacation where I can dig in. My two-year-old daughter occasionally asks, “Can I look at the keyboards?” We flip through one of the books until she gets bored and asks to look at “Massimo,” meaning Gary Hustwit’s Vignelli: Photographs.
I’m sure Marcin is relieved to end this eight-year adventure, but I have a feeling he’s not done.
April 19, 2024
Poster Design
In 1982 when Apple was developing the original Macintosh, Steve Jobs had each team member’s signature engraved on the tool that molded the computer’s plastic case. This ensured that every Mac included the team’s signatures on the inside. It’s a bit tricky to accomplish this with software, but I became inspired after seeing Figma employees sign a poster for their plugins feature launch.
I collaborated with Esther Linky and Dana Jones to design a poster, and invited designers on AI Cloud and Experience Services to sign their names to commemorate the launch of Salesforce Einstein Copilot. It was a tremendous achievement to unite artificial intelligence and CRM data with trust and privacy as core tenets, and I’m so proud of the team.