The most excited I’ve been since my first job

A conversation with James Byun, Art Director and Senior UI Designer at OXD, on judgment, responsibility, and what gets lost when the grind disappears.
“Where a senior designer comes in is understanding what the right solution is for who they’re designing for.”

Every morning, James Byun wakes up to a new hot take about AI. A new tool, a new demo, a new declaration that designers are either about to be liberated or made obsolete. As OXD’s Art Director and Senior UI Designer—someone with a front-end development background who’s spent years refining the craft of interface design—he has a more nuanced read than most.

Our Creative Director, Wil Arndt, sat down with James to talk about what AI is actually doing to the practice of UI design: the real workflows, the failed experiments, the light-bulb moments, and the thing he says he’ll never let AI touch.


Wil Arndt: What’s the most uncomfortable thing AI has forced you to confront about how you work as a UI designer?

James Byun: There are so many things. I mean, obviously we’re going through this massive shift. And every time I wake up, it seems like there’s a hot new thing that everybody’s jumping on. But I think for me it’s, like, when is that going to end? There’s this massive focus on shipping fast and launching fast. So is this the future? Is this just the way things are now? Or are we going to get back to a point where things calm down and we can start to focus on quality?

So you’re saying quality is being lost.

I think so. The minimum bar for quality has gone up because of what AI is able to do. But for the sake of speed, people are prioritizing shipping over polish, and details are getting lost. And because AI is so accessible now, the barrier to entry for what we do has never been lower. That’s bringing in a whole bunch of people who weren’t in this industry before, which is cool, but there’s a gap in knowing what to look for. In knowing what quality looks like vs. AI slop.

And when a client hires a senior UI designer, they’re paying for taste and judgment, not execution. What do you think about what AI can and can’t touch in that equation?

Obviously there are many ways to solve problems. And AI probably has access to a lot of those solutions, especially now with agent skills. It just spits out whatever it does. But where a senior designer comes in is understanding what the right solution is for who they’re designing for. AI can generate answers all day and someone without the expertise might think they’re making the right decisions when they could be heading down the wrong path.

That’s bad news for junior designers, then, who don’t have that experience yet. They’re probably more comfortable with AI than senior designers, but they don’t have the judgment to know when it’s wrong. Do you see a path forward for them?

I think part of it depends on what the role of a designer ends up being once the dust settles. AI is obviously compressing how long it takes to do production. And when I was a junior, production work was all I did. So what does that new junior role look like?

For me, the way I learned the best was making mistakes, getting feedback, thinking about edge cases, and applying it to future work. You kind of lose that with AI because it just throws something together. If you don’t know why it did what it did, how are you going to learn from that? 

That’s why understanding the foundations of what makes a design great is still going to be so important. If you skip all of that, you won’t know if the design is working or what needs to be done to make it great.


I want to get into real workflow, because I think that’s where the interesting stuff is. You recently used AI on a client project, not a side experiment, actual production work. Walk me through it.

So I was working on a feature, basically a card component that comes up and shows the user pretty critical information when they interact with a different part of the app. And it turned out to be really challenging from an implementation standpoint, maybe because of where we were at that point in the project. Making sure that if we implemented this, it wasn’t going to break anything else, wasn’t going to have a ripple effect on the rest of the app. The dev effort estimate was really high. There was a real risk it just wouldn’t get built.

So that’s where I felt like, okay, maybe this is actually a really good opportunity to see what AI can do. I had already designed the component in Figma, designed the screens and the different cases. I found a React library that did what we needed. I looked through the demos and it did exactly what we were looking for. And then I pointed Cursor1 at the parts of the local codebase I wanted to work on, pretty simply just said I wanted to implement this plugin for this component on this page. This was very early in my AI journey where I basically had no experience with it. 

What happened when you first ran it?

It got about 70% there, but of course there were errors. So through a handful of prompts I tried to tweak things, but then I could see it was starting to edit different parts of the codebase. Anybody could see that’s not right, that’s not what I asked it to do. The code was starting to bloat, but it was working well enough for a prototype. That was the first version I shared with the developers and they looked at it and basically said, “I don’t know if we can use this.”

So the iterative prompting broke down.

Yeah. By that point I’d started learning more about AI and read something about how prompting too many times doesn’t usually produce good results. So I thought, okay, let me start fresh, but this time take everything I tried and combine it into one large prompt. Doing that a couple of times, with a lot of undo, retry, undo, retry, is when I ended up with a much cleaner result. Maybe two or three rounds of iterating on that mega prompt, and that’s when the developers looked at it and went, “Okay, we can work with this.”

Did they literally use that code?

As far as I know, though it did require some cleanup. But that feature might have taken much longer to implement without it—the hours required may have pushed it further out of scope. So yeah, I would say that was a success.

Here’s what struck me about that. You were able to see the code was drifting because you have a front-end background. You’ve built HTML, CSS, JavaScript on the front end before. You knew what you were looking at. A designer without that experience might have thought the first version was fine.

That’s the age-old debate, right? Should designers code? That’s been going on for, I don’t know, the last sixteen years. I think it’s great if you can, if you have the time and bandwidth to learn at least the basics and know how to be dangerous with it. But if you can’t, I think that’s okay too, because there are still AI services where you don’t need to get as technical. That said, I think the more you know and understand how to code, the more you’ll be able to use AI to its potential.

And if you had to do it again?

I’d try connecting Figma to Cursor via MCP2 and reference the artboard directly. Give it the visual context instead of trying to describe everything through prompts. Because the prompt-to-code-to-design-back-to-code loop still feels a little hacky to me.

What do you mean by hacky?

I think the workflow could be more optimized. And maybe that’s me speaking as a designer who’s been in the tools for so long. There are little nuances that are hard to communicate with words. Prompting is a very aspirational thing, like, say whatever you want and we’ll make it for you. But the reality is it needs context, it needs guardrails, it needs very specific instructions. Just going back and forth with a prompt, it’s too simple. Something’s missing there and it eventually gets to a point where you don’t have the level of control you need to do what you want.

Figma Make3 has an interesting approach where you can directly select the elements, but you’re still at the mercy of the prompt. There are some new design tools that have come out recently that are exploring some interesting ways to work with AI in design. For example, I’m excited to see more features from Paper’s4 roadmap get implemented, and Pencil5 is another tool that looks interesting. I suspect that as models continue to develop, that will allow the design tools to have supercharged features that get us designing in a way that’s more familiar. 

What I’m picturing is something closer to a design-first version of Framer6. In Framer, you’re already working closer to the actual medium but it still feels more like a UI for code, which is less flexible compared to a design-first tool like Figma. If those two approaches could merge, that would be super interesting. I think we’ll get there soon.


That project is interesting because AI lets you blur the lines between design and development, and explore something that would have been blocked by budget or resources. But it raises a question: when AI is doing the work, who’s responsible for the outcome?

Just because AI is doing the work, I don’t think that absolves you of any responsibility. AI is just another tool. If you were the one in charge of producing the result, you should be taking ownership. It’s like if you’re in the driver’s seat of a car with self-driving capabilities. If it causes a crash, you’re still responsible for what happens.

Is there a place where you would absolutely never want AI making the call?

I wouldn’t let it dictate the final outcome, especially not for a client project. It comes back to taste and judgment, making sure we’re making the right decisions for the user we’re solving this problem for. 

The problem is AI thinks it knows. And it can be very confident. Very persuasive. How do you guard against that?

You’ve got to challenge everything it spits out. It kind of reminds me of when we present different fidelities of design to a client at different stages of a project. High fidelity has a finality to it and people latch on to it very quickly, and what we get from AI can have the same effect on us. So making sure that we’re being strict evaluators of what it generates is one of the keys to guarding against it. We need to be the ones challenging it, asking ourselves, is this the right solution? Is this the right way to solve that problem? Maybe sometimes it is, but we will most likely need to take the output further to definitively answer “yes” to those questions.

Have you ever looked at something AI generated and thought, “That’s better than what I would have made”?

I’ve felt something similar to that when I see how fast it can work, not so much the output. A lot of the stuff I’ve seen is very trend-forward, but after a while, you can tell that it’s been designed by AI. That’s that AI slop. I’ve never really been one to chase trends, but it’s cool that it can do that. 


Let’s talk about convergence, then. We’ve been watching UI aesthetics converge for a decade, to the point where many brand sites look almost identical. If everyone’s now using the same AI models, pulling from the same training data, does it just accelerate?

I’m not overly worried about it, to be honest, because I feel like we’ve seen this before. When Twitter came out with Bootstrap, and then Material Design came out, and now we have libraries like Tailwind. Anyone can have access to that and build stuff. At one point lots of sites used Bootstrap, and you could tell. If someone was using Material Design, you could tell because it has unique interactive states and an overall style where even if you change the colours to fit your brand, it will still feel like Google. And for companies who are focused on shipping fast, and are satisfied with “good enough”, that might be all they want. But I do think there are still going to be folks that care about wanting to differentiate and have something tailored. That’s where they’re going to be looking for designers to actually help them stand out. 

So maybe the real value proposition for design in this era is the ability to break the mold. Not just execute, but differentiate.

For sure. I think that’s kind of always been the case though. The designer’s value comes from problem solving, from taste, from style. AI has access to all of that, but it still has trouble turning that into something that feels expertly crafted for a brand. 

There’s a lot of talk right now about developers not needing designers anymore because of AI. Does that track with what you’re seeing?

I mean, the reverse is true too, right? A designer can code with AI now. But I don’t think that’s the right way to look at it. In the same way that AI code isn’t perfect, the design isn’t either. We should be using it to up-skill and have more overlap. It should be a friction remover. Not a threat.

I’ve been thinking about this idea of “apps on demand.” No app exists for what you need, so you just vibe it into existence. You use it, and if it turns out to be a great idea, you keep it, monetize it, whatever. Or it just goes away. You don’t need it ever again.

And not just apps, even features on demand. You’re building something and you need it to do a specific thing? Just make that feature. I feel like that’s potentially going to wipe out a lot of micro-SaaS. Like right away, the first thing I think about is a brand asset library. Internally we use Google Drive but it’s clunky and not the best way to do this. We could build something that tracks all our assets and presents it the way we actually need. For companies that have been tracking stuff in spreadsheets or Notion, this changes everything. I’m sure there are some really interesting cases out there where people have created a workflow to track something in an app that wasn’t built to do that workflow. Something like that could be vibed into existence and like you said, if it works out, it could be monetized, otherwise it fills a need for the person who made it.


Which is exciting if you already know what to build and why. But if AI is compressing all of that entry-level production work, and now anyone can spin something up, it gets harder for junior designers to even find a foothold. You do informal mentorship with junior designers who are struggling to break in. And you ask them a question that apparently stops most of them cold.

Yeah. I ask them: why are you doing this? Because it’s not going to get any easier. Having a strong why, knowing why you want to be a designer and why you can’t picture yourself doing anything else, is really important because that’s what keeps you moving forward when things get difficult, like they are now.

Do you ever get a good answer?

A lot of junior designers I’ve talked to can’t seem to answer it right away. The first response I usually get after a pause is that they’ve never thought deeply about it, or at all. I had one person message me afterward saying she was thankful for that question and realized she needed to take time to reflect on it.

If there’s one principle you’d want them to carry into their career about working with AI, what is it?

Challenge everything. Don’t take it at face value. It’s capable, but it doesn’t know what you know about your work to make it successful. It doesn’t fully understand the problem you’re trying to solve.


We’ve talked a lot about what’s uncertain. What’s the thing you keep coming back to?

I mean, for me, I would be lying if I said I wasn’t scared, but I’m also very excited. I’m curious what the role of a designer looks like when the dust settles. What does a developer look like? Where do those roles start and end? For some companies, the ones satisfied with “good enough”, maybe there’s no designer at all. But I believe there will always be companies who value design, and want to differentiate.

And I’ve been working on side projects, just to see how far I can go with it. And I realized I can spin up an iPhone app in thirty minutes. A React Native app that works on desktop and mobile, no problem. The code is not perfect, I’m not going to go out and say “developers are cooked”, but getting to this point simply wasn’t possible for me three years ago. And you know what? Being able to just go and prototype an idea as far as you can now with AI to me is incredibly exciting. 

This is probably the most excited I’ve been since I got my first job as a designer. Just realizing that it’s not just hype and promises that end up disappointing. There’s actually truth to how easy it is to do stuff now. 

Footnotes

  1. Cursor is an AI-powered code editor built for software development, allowing developers to interact with their codebase using natural language prompts.
  2. Model Context Protocol (MCP) is an open standard that allows AI tools to connect directly to applications like Figma, enabling them to work with live design data rather than relying on text descriptions.
  3. Figma Make is a feature within Figma that allows users to generate and iterate on UI components using AI prompts directly within the design tool.
  4. Paper is an AI-native design tool exploring new approaches to integrating AI into the design workflow.
  5. Pencil is an AI-powered design tool focused on generating and iterating on UI designs.
  6. Framer is a web design tool that bridges design and code, allowing designers to work closer to the final medium by building interactive, production-ready components.