Designing The Plane While Flying It

11 min read · 5.31.2024
Credits: Annie Dailey (She/Her)

The phrase “building the plane while flying it” refers to creating something new while simultaneously implementing it. In a fast-paced world, products are constantly evolving and adapting to new challenges. For design teams, this can mean iterating quickly and finding solutions for problems that still have unknowns. In this article we talk with three members of the Lattice Design team: product designer Kristin Lasita, brand designer Jacob Ewing, and UX researcher Meghan Earley. They share their perspectives on projects that required designing the plane while flying it, along with their strategies for gathering feedback, maintaining design quality, and defining success.

Q: Can you describe a project where you felt like you were “designing the plane while flying it”? What were the circumstances, and how did you navigate them?

Kristin: There’s this project for a writing assistant, where Lattice will take your feedback or performance review response and offer suggestions to improve it. With generative AI, the output is the core experience; it’s so rooted in the prompt getting a good response. Getting started, the team all had different ideas of what the output was going to be. So we aligned on starting with just one suggestion to the user, and we’ll explore showing multiple outputs later as we feel confident in other prompts. All this to say, the team needs to create solutions that are flexible. AI doesn’t have many conventions, so we’re learning as we go.

Jacob: I think the pop-up truck campaign is a good example. There was a lot of discussion about how much of an activation it should be - and a lot of unknowns about what that means, right? But we had to move fast, and we needed to lock in on the practical needs like getting permits and building out the truck. Where we were able to sculpt meaning into something so broad was in finding a focus. We did a spectrum of approaches - one that’s very Lattice forward, one that’s very messaging forward, and one that’s more visually exciting. All the while we’re trying to get a sense of what the key goal is. Presenting the logo? Presenting the messaging? Pushing a product announcement? We had to explore the right brand temperature to index on without a whole lot of clarity. In order to move, it was critical to create something malleable that can adapt as things become clearer. I was a big proponent of building a visual system that can be more plug-and-play across a variety of deliverables. It was just working through those levers of what’s worth investing in now versus what can stay broad and be clarified later.

Meghan: We’re doing a lot of different AI things right now. But Engagement Summaries was one project where we had pretty good confidence in the value it would provide. So that was a great project to kick off the work, because we had seen in previous research that people wanted summaries. Beyond that, there is a fuzzier idea of summaries for performance reviews, and an even fuzzier idea of a manager co-pilot. So we were juggling the execution of engagement summaries, defining the solution for performance summaries, and going back to the foundations for manager co-pilot (understanding what the manager jobs are, the biggest pain points, and what exactly we should build). We just want to make sure we’re solving real customer problems. My role in all this is to understand the team’s goals and our biggest knowledge gaps pertaining to those goals. We’re always getting feedback from customers; whether we’re in the discovery phase, doing concept testing, or evaluating something we’ve built, there are always touch points with customers. So in addition to filling knowledge gaps, research helps support those customer touch points and get the most out of customer feedback.

Q: What strategies do you use to gather feedback and iterate on your designs while the project is in progress?

Kristin: Oh, all the feedback! I’ve been oversharing internally (on Slack and Campsite) because I’m getting good input from all different stakeholders. I’ve also been leveraging external user testing for quick usability feedback. For other in-the-moment feedback, I’ve done internal interviews, because I wanted a gut check outside of my immediate team on how we’re using AI. We’ve also been releasing internally at Lattice because, again, we need to get reps with the outputs. We can only fiddle so much - we really need to get it out there to actually understand how people use it.

Jacob: I think it’s connected to that first question, where it’s about getting at the themes versus the nitty-gritty details. So the first round is more like a wireframe, focusing on how we can plot out the content being shown. Whether that’s making recommendations for content length, figuring out structurally how something may exist, or presenting the bare bones of an idea and how it can translate practically to what is needed. The second phase is an interesting dance between what’s useful feedback and what’s creative action that’s not really needed. It’s easy to get distracted by the look of a thing versus whether it’s achieving the intended goal. So the strategy is to break things down by content and simplify the design, so the review is less about aesthetics and more about getting to the heart of the strategy.

Meghan: We try to strike a balance between getting enough input from customers and moving quickly. It’s seeing where we are in the process and making sure we have confidence in the answers to our questions: Do we know who we’re building for? Do we know what they need? Do we know how we should solve that problem? And does the solution work? So we check in at each phase of the process and, based on customer conversations or previous data, we decide whether we have confidence that we’re meeting the mark for customers. If we’re seeing clear themes in the responses, that’s probably a good signal. If those themes aren’t emerging, maybe we talk to a few more customers. There are some projects where we don’t have high confidence in the answers to all of these questions, but it’s ultimately what we’re working towards. And with AI there are so many ideas and so many paths we can go down as far as what we can build into the product. We’re constantly trying to stay grounded in what people actually need. What’s exciting to me is that there are significant problems that people experience that Lattice AI can really help with.

Q: How do you maintain design quality when working quickly?

Kristin: I’ve been trying to systematize and keep the AI experience as consistent as possible as a starting point, but not holding to that too firmly. I’ve realized that sometimes the UI needs to adapt to the space it’s in. I’ve been focused on making the core experience clear and simple. There are going to be edge cases, because I’m working in a new space, so I create designs that are flexible enough to solve those edge cases as they come up. I’m working within existing product experiences, and I’m doing my best to make the experience better rather than layering on a worse one.

Jacob: I think it’s worth considering what the deliverables are and where things are showing up. For the pop-up truck, one example is that there was always a concern about how color was going to come into play. There is a range in how things will look printed on a truck versus wheat-pasted paper, plastic signs, or the fabric of merchandise. There are a lot of unknowns about the color output, but that’s a risk we’re okay taking. We didn’t want to be too black and white about things… literally! So if we’re okay risking color, maybe we focus on small tactical details, like things looking aligned and balanced. It’s about understanding the various tradeoffs and touch points, which informs where you maintain quality versus where you’re okay being more relaxed.

Meghan: We make sure we’re getting input from customers at every phase of the product development process (discover, ideate, build), and the questions we ask customers depend on the phase we’re in. For example, we just finished research for manager co-pilot, and the intention was foundational: to understand the problems of the manager job. Even though this is an AI project, there was hardly any mention of AI in the research because a lot of it is hypothetical. What we really want to understand is what manager workflows are, where their biggest challenges are, and where Lattice is uniquely positioned to help. So we were far away from talking about AI solutions at that point. Once we identify the problems we think we can solve, that’s when we start putting AI in front of customers. Kristin has been doing a lot of this: designing and testing concepts and mocks. It’s really important for the research to explain exactly what the AI is doing. From there we look at whether it solves a problem for people - does it make their lives easier? We want customers to share examples of how they’d actually use the thing, and to ground the research in those examples and experiences rather than just how customers feel about it.

Q: How do you define success when working on a project that still has many unknowns?

Kristin: I think success, especially with this AI stuff, is being able to experiment quickly. If things don’t work, we want to fail early. It isn’t necessarily that we want to measure success by whether people are using it, but by the quality they get out of it. That’s been really difficult to measure. This is a bit broader than me, but the thing with AI is we don’t know what’s good yet. Does it actually make an impact, or is it marketing hype? We’re gonna push to create experiences that we hope will make manager tasks easier to accomplish.

Jacob: I tend to come up with my own North Star for what I’m aiming for - a strategy that I can circle all the design decisions back to. And if there isn’t a specific strategy formed yet, we can help push for one by developing our own look and feel and going from there. You want to feel like the visual solution is rooted in a brand rationale. Like with redesigning our webinar templates - at the start of the project it was vague as to what was needed. So we proposed things that, in turn, opened stakeholders up to a greater evolution and design change. It became more of an opportunity to play around and experiment. It was more successful once we provided strategic solutions to what was essentially a copy/paste prompt.

Meghan: We’ve been thinking a lot about this, actually, because we’re creating a “first customers” program. It’s an early access program for bigger launches where we give a group of customers access to the product to make sure it meets our quality bar. So we’re thinking about what the criteria for a release are such that it can actually go out to GA (General Availability). But success can look different depending on whether you’re showing somebody designs to get their reactions versus assessing whether they’re using it, and the extent to which they’re using it. We also use the Sean Ellis score, a survey question that’s meant to measure product-market fit. The question is: if you could no longer use X product, how would you feel? Very disappointed, somewhat disappointed, or not disappointed. If around 40% of your population would be very disappointed, you’ve reached product-market fit. We don’t follow it to a T, but we can use it to directionally understand whether we have a critical mass of people finding value in the product. And then testing for bugs and usability is, at a minimum, super important to ensure quality. So ultimately, we’re successful if customers find value in our product and use it.
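
For the curious, the Sean Ellis check reduces to simple arithmetic: the share of respondents answering “very disappointed.” Here’s a minimal sketch of that tally - our illustration, not Lattice’s internal tooling, and the response data is hypothetical:

```python
from collections import Counter

# Hypothetical responses to the Sean Ellis question:
# "How would you feel if you could no longer use the product?"
responses = [
    "very disappointed", "somewhat disappointed", "very disappointed",
    "not disappointed", "very disappointed", "very disappointed",
    "somewhat disappointed", "very disappointed", "not disappointed",
    "very disappointed",
]

def sean_ellis_score(answers: list[str]) -> float:
    """Share of respondents who would be 'very disappointed' without the product."""
    return Counter(answers)["very disappointed"] / len(answers)

score = sean_ellis_score(responses)  # 6 of 10 here, i.e. 0.6
# Commonly cited benchmark: 40%+ is read as a signal of product-market fit.
print(f"Sean Ellis score: {score:.0%}", "- PMF signal" if score >= 0.40 else "- keep iterating")
```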

Q: What lessons have you learned from working on projects where you had to “design the plane while flying it”?

Kristin: Give yourself a lot of grace, and understand the constraints of what you’re working on while doing the best with what you have. It’s not going to be perfect, and yeah, it could be better with more time. I just have to keep reminding myself that this is the best I can do, but also not rest on that too much. So it’s a balancing act of making a high-quality thing while realizing that we may not hit it out of the park at first. You need to keep looking ahead and thinking strategically long-term. I can have real tunnel vision on this stuff. Like, I’m really focused on this writing assistant, and if I can reuse patterns from it for future things, great. But it’s okay if I can’t. We can always come back. Everyone is figuring it out! I created a Discord group for other designers to talk about their experiences trying to integrate AI into an existing product. This isn’t the flashy side of AI, but it seems like every piece of software under the sun is trying to incorporate some element of AI. So how are people doing this? It’d be so cool to have access to a larger community!

Jacob: Practically speaking, it’s about reaching consensus early on strategy, content, and approach, so that each person can be held accountable for their deliverables or facets of the project. And there is something to respecting everyone’s expertise in the process. I might make recommendations about copy, but it’s ultimately someone else’s decision. I can articulate why something may not be helpful to include visually, which can relieve some pressure about how much content is needed. With projects that are more in flux, keeping open lines of communication to surface any confusion only helps move things forward.

Meghan: As a researcher, there can be a draw to rigor - setting up a big study, interviewing tons of people. But getting more comfortable with quicker, ongoing feedback and working in a more iterative style helps you learn quickly. It depends on the project, but there can be a lot of value in letting go of the notion of needing a certain number of participants. We can talk to a few customers quickly, learn, and keep going. Maybe we haven’t answered all of our questions, but we answered a few of them and can respond. UserTesting as a tool has been really huge here, because one of the hardest parts is just getting access to customers when you want to do something quickly and regularly. I see a lot of designers doing quick, iterative work: they just hop on UserTesting and get what they need in a few days, which has really enabled this type of work.

Thank you to our design team members for sharing their thoughts on designing the plane while flying it! As goals, teams, and processes evolve, it’s always helpful to hear how people navigate new challenges.

Interested in building a community of other designers working on integrating AI into existing products? Our very own Kristin Lasita made a quick survey to gauge interest in forming a Discord group. Fill out this survey to join!