Podcast: How AI is Changing the Game in UX — Chris Reardon, ex Head of Product Design, Responsible AI @ Meta
Discover the future of UX with Chris Reardon, Former Head of Product Design, Responsible AI @ Meta. Explore the evolving role of AI in UX, essential skills for future UX professionals, and ethical AI implementation. Join us for this insightful episode!
Guest: Chris Reardon, Former Head of Product Design for Responsible AI @ Meta
Host: Adam Perlis, CEO at Academy UX
In this episode, we talk to Chris Reardon, Former Head of Product Design, Responsible AI @ Meta, about the future of UX. We discuss how AI is changing the role of UX, the skills that will be most important for UX professionals to have in the future, and how they can ensure that they are using and producing AI in an ethical way.
You can follow along with Chris' work at the following links:
🔗 https://www.linkedin.com/in/christopherreardon/
✍️ https://medium.com/@chris.reardon.ux
🤝 https://www.c-squared.ai
Books about AI:
The Most Good You Can Do and Practical Ethics - Peter Singer
The Ethical Algorithm: The Science of Socially Aware Algorithm Design - Michael Kearns, Aaron Roth
Ruined by Design: How Designers Destroyed the World, and What We Can Do to Fix It - Mike Monteiro
Humans Are Underrated: What High Achievers Know That Brilliant Machines Never Will - Geoff Colvin
👉 Want more content like this? Sign Up for the Academy Resources newsletter where we provide tools, resources, and industry expertise on all things UX to help industry leaders, teams, and talent thrive: https://blog.academyux.com/
👉 Struggling to find the right UX talent? Academy is a product design agency built around flexible resources. Whether you need a studio team, a few contract resources, or a full-time hire, we're a one-stop shop for whatever stage you're in. https://academyux.com/
Chris Reardon is the former Head of Product Design for Responsible AI at Meta and has been working on AI products since 2013, when he helped launch IBM Watson as an executive creative director at Ogilvy. Born in England to an American Air Force family, Chris studied graphic design at Kingston University, moved into web/application design in 2000, and later worked at TBWA on Apple, IBM, and Target accounts. He joined Amelia in 2016 to work on natural language models and has been deep in AI product design ever since.
Key Takeaways
- Chris distinguishes two things commonly mashed together: natural language processing (understanding intent) and generative AI (creating new content). They blur in practice but serve different product purposes, and designers need to keep the distinction clean when designing for either.
- Natural language will gradually strip UI buttons off the screen. Crude 'did you mean this or this?' multiple-choice UIs go away once the system understands the user well enough to infer. Expect interfaces to get quieter and more conversational.
- Context awareness is the next frontier — apps will adapt their output to whether you're driving, walking, or at your desk, without you telling them. That changes what needs to be on-screen, what needs to be voice-only, and what needs to be done on your behalf.
- Chris's core framing of the current market: this is a business-model war, not just a product war. If an AI assistant disambiguates everything behind it, why would you ever visit Google Search or Amazon? That's existentially threatening to ad-revenue businesses and accelerates the subscription-vs-ad conflict.
- The traditional linear interface — type a name, pick filters, get a list — is being 'completely blown up' by natural language, because a user can now describe an outcome in one sentence ('top 1,000 second-degree connections by impressions, with emails, as a table') and get it.
- Designers moving into AI need to understand the full supply chain, not just the on-screen surface. The behind-the-screen infrastructure — the agents, models, prompts, business integrations — is where most AI product design actually lives now.
- Responsible AI is not a compliance layer bolted on at the end; Chris's Meta experience was as head of design for the discipline precisely because the hard questions (bias, consent, safety) show up in interaction patterns and UX flows.
- Chris's career path — from print/packaging at Kingston to AOL redesign to IBM Watson launch to Amelia to Meta — is a case study in how designers can bridge into AI by following curiosity through successive industries rather than waiting for an 'AI designer' job title.
Frequently Asked Questions
› Who is Chris Reardon?
Chris Reardon is the former Head of Product Design for Responsible AI at Meta. He studied graphic design at Kingston University in England, moved into web and application design around 2000, redesigned AOL's subscription-to-ad-revenue transition supporting 250 million users, led work on Apple, IBM, and Target at TBWA and Ogilvy, helped launch IBM Watson as an executive creative director in 2013, and has been working on AI for roughly a decade. He moved to Amelia in 2016 to work directly with natural language models before joining Meta's Responsible AI team.
› What's the difference between natural language processing and generative AI?
Chris's distinction: natural language processing is about understanding what you're saying — parsing intent and context so a system can route your request correctly. Generative AI produces net-new content in response. The two blur in practice (ChatGPT does both), but the product use cases are different — one disambiguates what you meant, the other creates something you didn't have before — and designers should be clear which they're building for.
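The distinction Chris draws, understanding intent versus generating net-new content, can be sketched in code. This is a purely illustrative toy, not how ChatGPT or any real assistant works; every function name, intent label, and keyword rule below is invented for the example.

```python
# Toy illustration of the NLP-vs-generative split discussed above.
# An "understanding" step maps an utterance to a structured intent that can be
# routed to a system; a "generative" step produces net-new content.
# All names and rules here are made up; real systems use trained models.

def parse_intent(utterance: str) -> dict:
    """NLU step: classify what the user wants (toy keyword rules, not a model)."""
    text = utterance.lower()
    if "email" in text or "contact" in text:
        return {"intent": "find_contacts", "query": utterance}
    if "write" in text or "draft" in text:
        return {"intent": "generate_text", "query": utterance}
    return {"intent": "unknown", "query": utterance}

def generate_reply(prompt: str) -> str:
    """Generative step: create net-new content (stubbed with a template here)."""
    return f"Draft based on your request: '{prompt}' ..."

def assistant(utterance: str) -> str:
    """Route: disambiguate first, then either act on a system or generate."""
    intent = parse_intent(utterance)
    if intent["intent"] == "find_contacts":
        return "Routing to contact search with filters inferred from your request."
    if intent["intent"] == "generate_text":
        return generate_reply(utterance)
    return "Could you clarify what you'd like me to do?"
```

The point of the sketch is the shape, not the rules: one path disambiguates what you meant and hands off to another system, the other creates something you didn't have before, and a product can (and often does) chain both.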
› How is AI changing the traditional interface?
Natural language is 'completely blowing up' the linear, form-based interface. Instead of typing a name, picking filters, and getting a list, a user can describe an outcome in one sentence ('give me a list of 1,000 second-degree LinkedIn connections ranked by impressions, with email addresses, as a table'). As systems get better at understanding, UIs will shed crude multiple-choice buttons in favor of conversational interaction and context-aware output.
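The "one sentence replaces a form full of filters" idea amounts to translating free text into the same structured query the old UI collected field by field. The sketch below fakes that translation with regex rules; a real system would use a language model, and every field name here is a made-up assumption.

```python
# Toy sketch: turn a free-form request into the structured query spec that a
# filter-based UI would otherwise collect. Regex rules stand in for a real
# language model; the spec fields are invented for illustration.
import re

def request_to_query(sentence: str) -> dict:
    spec = {"limit": 100, "degree": None, "rank_by": None,
            "fields": ["name"], "format": "list"}
    m = re.search(r"(\d[\d,]*)", sentence)       # e.g. "1,000"
    if m:
        spec["limit"] = int(m.group(1).replace(",", ""))
    if "second-degree" in sentence:
        spec["degree"] = 2
    if "impressions" in sentence:
        spec["rank_by"] = "impressions"
    if "email" in sentence:
        spec["fields"].append("email")
    if "table" in sentence:
        spec["format"] = "table"
    return spec

spec = request_to_query(
    "top 1,000 second-degree connections by impressions, with emails, as a table")
```

Whatever fills that spec (keyword rules here, a model in practice), the downstream system still receives the same structure the buttons and dropdowns used to produce, which is why the buttons become optional.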
› Why does Chris Reardon think AI assistants threaten Google and Amazon's business models?
Because an assistant that can disambiguate everything behind it eliminates the need to visit the websites those companies monetize. Google Search's ad revenue depends on the user clicking through to a result; if your on-device assistant just answers the question, the click never happens. Chris frames the coming market as a battle between the ad-revenue model and the subscription-assistant model, and says that's going to reshape the whole design supply chain, not just the surface.
› What skills will UX designers of the future need?
Chris's answer is that designers need to understand the full AI supply chain — models, prompts, contextual awareness, data flows — not just the on-screen surface. The most interesting work is now behind the interface: deciding what the agent knows, how it reasons, when it should answer versus ask. Designers who stop at the screen layer will find themselves designing a smaller and smaller share of the product.
› How should designers think about responsible AI?
Not as a compliance layer bolted on at the end. Chris's Meta role existed precisely because the hard responsible-AI questions — bias, consent, safety, fairness — show up in the interaction patterns and UX flows themselves, not just in the training data. The designer is often the first person in the pipeline who can see where a model's behavior is going to collide with a user's trust, and designing that collision well is what responsible AI actually looks like in product form.
› What was Chris Reardon's role in launching IBM Watson?
At Ogilvy in 2013, Chris was an executive creative director helping launch IBM Watson — both building the brand and external image and translating what the technology could do into design decisions the market could understand. That project is when he 'got the bug for AI' and is the starting point of his full-time AI career.
› Why did Chris Reardon move to Amelia?
To get inside the technology. After years of branding and launching AI products from outside, Chris wanted hands-on experience with natural language and natural language models, and in 2016 he joined Amelia to work directly on them. That move bridged him from agency creative director into in-house AI product design.
Full Transcript
› Read the full conversation transcript
welcome to how we scaled it for design teams a show that explores the journey through the arduous Road of growing a successful design practice I'm your host Adam Perlis founder and CEO of Academy the ux Staffing recruiting agency and today we have the pleasure of speaking with Chris Reardon the former head of product design for responsible AI at meta in today's interview Christopher will discuss how AI is changing the role of ux designers skills that will be most important for ux designers to have in the future and how ux designers can ensure that they're using AI in an ethical way welcome Chris hi Adam it's great to be here I've really enjoyed conversations and I'm glad to be on the show and talking to you today same here Chris thanks so much for being here so just to kind of frame things up for people I wanted to you know tell people a little bit about you know some of the conversations that we've been having you know I think what's really interesting is a lot of teams and individuals are starting to think about both how to utilize AI in their day-to-day work but also in some cases having to build tools for AI and one of the biggest challenges in doing so is doing it in a responsible and ethical Manner and you are actually an expert in this subject and that's why I really wanted to talk to you today and if you could just give the audience a little bit of context about your background and also share with us a bit about some of the things you're working on today yeah for sure well I grew up in England I am the son of an American Air Forceman who lived in England at the time and we got moved all over the country all over the world and I got exposed to a lot of different cultures at a very early age and I think you know understanding how people live in different parts of the world is really kind of helped me have empathy for people and users I studied graphic design in England at Kingston University did print in packaging design for many years and then around 2000 I 
got into web design and building applications I moved to the to New York City area in 2006 I was in Los Angeles prior to that and worked in different agencies at different companies one of the sort of bigger things that I worked on at the time was redesigning AOL I helped redesign it from a subscription business to an ad Revenue business and supported 250 million users so that was a really interesting design System project later I got into working on Brands like apple and IBM and Target at tbwa and Ogilvy I was an executive creative director and helped launch IBM Watson in 2013 and that's when I really got the bug for AI so I've been working on AI since then so roughly 10 years and it's just been a fascinating journey and you know branding it building the external image of you know things like Watson but then also getting on the inside so in 2016 I really wanted to understand the technology and I moved to a company called Amelia working on natural language and natural language models much like chat GPT today and I built a virtual natural language agent at the time and learned a lot really about how voice interfaces and text interfaces were going to change the future I realized that they could be very problematic at that time and that's when I moved to meta to work on responsible AI so that kind of brings us up today I'm currently I'm Consulting I'm doing a lot of speaking and podcasts and interviews and things like that I've started to do some writing and just thinking about it's been great to see AI explode in the public eye and now lots of Interest around how do you build these systems and the tools to build these systems as well so it's been a fascinating ride that's amazing wow what a fantastic background and just so cool to hear about your journey from all these different companies all the way leading up to meta you know where you know really it sounds like you started to do you know some of the most important work of your career tell me you know you have this 
very interesting title there you know it was they had a product design for responsible AI and you know it sounds like it was a really important Mission but you know what should this audience know about what responsible AI means it's a great question and it's kind of multi-layered so I'll first answer what is responsible AI in terms of product design and my role and then I'll go into some of the facets of that for larger companies and companies that want to you know start to incorporate AI into their products and services so Mata responsible AI or Rai as we used to call it is basically built up of different dimensions there's AI transparency there's AI fairness privacy robustness and governance and accountability and so each of those are different facets of response being responsible and they play different roles at different times so AI is not a one-size-fits-all tool it can be very customized can be used in different ways and so it's not very easy to design a design system or a set of guidelines or what have you that is just a rubber stamp approach to how do we use and apply AI it's very contextual very scenario based lots of different variables that you have to kind of address and so this team worked across all the products that matter and we think about the use of AI in those products and whether or not it was you know being built in a way that was protecting people and Society where that was ethical you know private and so on did we give users agency over their data things like that so very much a horizontal team across all of the products but also had influence over the engineering and data science groups as well so how they actually build and maintain and manage the whole life cycle of AI what kinds of tools they use what kinds of processes they you know leverage in terms of gathering data you know cleaning and building training data and so on and so lots of process and methodology type thinking as well so it's very little honestly to do with ux UI in the 
sense of we're designing wireframes and mobile screens it's more thinking about the intelligence system that's inside that app that's driving those decisions those outcomes from the system the reason why a lot of companies IBM Google Apple centralize these teams is because of the complexity of the work it's still evolving people are still learning about how to manage and maintain these systems and so you want to kind of bring that team together so that they can work as a tight group on figuring out the best ways to handle this it's also an evolving regulatory space and so having a centralized group being able to understand what regulation means and compliance means and then being able to disseminate that out from a single source so there's ownership and accountability to that team and that kind of helps the goal is honestly to kind of make everybody aware of responsible AI so it's just like designing for mobile like people understand how to design mobile they didn't you know 15 years ago it's a journey and hopefully the goal is for everybody to get to a place where they can kind of understand this themselves the team itself is cross-functional we also we identify and determine the problems to solve you know what is the most critical thing that we can fix what a foundational issues that apply to all products and users and so we're trying to prioritize constantly like what is going to give us the most impact and how to measure that there's a lot of applied research that's still going on in terms of like metrics and figuring out like how do you even quantify some of these things the team itself has policy experts ethical experts Integrity experts lots of different kinds of roles involved that aren't typical in normal products and so the space is still evolving there you might even have people from philosophy involved because as the systems learn and evolve they change and so you're trying to project where could this thing go in the future and what could potential 
harms be in the future how do we mitigate that now what kind of methods can we start to invent and prototype now Rai is also a gatekeeper sometimes when should we use AI is AI the right tool for the job it can some other technical solution suffice why do we need a learning algorithm here or not we also look at different stages of the production life cycle and figure out you know watch what could we improve in data labeling and Gathering processes what could we learn in terms of you know content design in terms of making transparent to users how things work and making it easy for them to understand the goal too of REI is to get out of the way of the rest of the company we're not there to be a blocker we're there to help make sure that there's nothing untoward happening but we also are trying to empower teams to make decisions in real time and so we often build Frameworks and decision making methods that help teams move at the pace of their own businesses and so it's it's more of how to how to teach a person to fish rather than you know getting in the way and micromanaging them part of that too is to kind of assess is the framework holding up is it working across different scenarios is it giving you know guidance that makes sense and is actionable yeah and like when you started to talk about you know Frameworks and processes and tooling you know obviously a lot of the designers out there that are probably listening to this have never had to build a tool for you know AI before and never even had to think about ethics I mean we're so used to thinking about empathy in the user which is already kind of a new you know framing for a lot of companies especially and sometimes individuals but what are some of the tools that you guys use and that you try to put in the hands of the teams that are building these you know various AI tools so that they can more easily think about responsible Ai and ethics would they build their you know their next product that's a it's a great 
question so again it's like our goal is to kind of Empower and advise and educate teams on the latest thinking around these issues and so there's like three or four things that we use in terms of high level approaches the first is you know defining your principles and values it sounds like a standard kind of design approach but in this it's a little different because it's a machine that emulates thinking and thinking like processes and so you know how should that thing think how should it make trade-off and prioritize things when you think about things like transparency and privacy for example you want to make sure that an user is adequately informed right we give them information as a material nature to it that they can make informed decisions about how much data they share if we give too much transparency that could inundate the user but it could also leave the system open to adversarial attacks right if we give too much of the you know how the how the cookie is you know how the sausage is made away that leaves us open to security and privacy concerns and so you're always kind of balancing some kind of trade-off between you know sharing too much information as it comes to transparency and security so how do you how do you balance those things and so that's not a one-size-fits-all it depends on the scenario depends on the type of data that you're using you know how critical that data is or not so that's that's a different approach than say versus building a style guide where it's like Thou shalt always do X Y and Z that doesn't really work and so if you don't have an ethical team a policy team and so on small companies don't often have that they're looking for outside resources and vendors and you know reading books and so on the best thing to do is actually sit down as a team and write what you think your principles and values are bring in your audiences and have them help co-create those principles and values try and frame up those principles and values through 
both good scenarios and potentially negative scenarios and see how they hold up that you know sort of bulletproof those things that's a good place to get to where there's at least some kind of coherence and consistency and Engineering understands that design and ux understand that Business Leaders understand that so at least there's some coherence across the group that's a good first step and another methodology is red and blue teaming so we're stealing from engineering playbooks here where you know they hire an outside vendor ethical hackers if you will to test their security systems to see if there are flaws so that they can then patch those systems that's the old school way of doing this well it whether you're brainstorming a zero to one product or a feature it's great to everybody focuses typically on the benefits and the upside and the blue sky and everybody holds hands and is excited about the potential of this new Fantastic product that you're going to bring I recommend taking it to the room next door the day later and having a different team think about how that product might go wrong or might get subverted in some way or how the model change its Behavior over time because it drifts from its intended use it starts to learn new things that it wasn't necessarily built for or somebody in the company reuses that same model for a different task and that might have an unintended consequences and so playing the negative role helps people understand how this could be you know impactful in a not so great way and then build mitigations around that or at least maybe think about re-prioritizing the roadmap to work on the things that feel more you know safe yeah in quotes and leaving those more risky things off until you've done some more due diligence around you know what those features and capabilities might deliver with Sprint planning it's good to have experts in the room so if you don't have them on hand they're not on staff bring in outside experts like policy 
experts civil and human rights experts if you're in the sort of social space and have been the part of the team as you kick off a Sprint I think it's actually really powerful to Anchor everyone in that room on some scenarios and some best practices and some examples of negative outcomes some watch out things like that before the Sprint kicks off so they can be short simple 10 minute 15 minute presentations where it's like here's three things that you should learn about the EU AI act here's some things about the Digital Services act and so on here you know thinking about the broader community that you might be designing for it might not just be us Centric it might be EU it might be Asia it might be different parts of the world thinking about those regulations and just hitting the high level stuff so that everybody's grounded in a sense of the gravitas of how much these things can make a difference in society that helps as well and then lastly leveraging things like service design for continuous Improvement so you can you can do that on the end product itself but I highly recommend using service design on the actual production life cycle itself so how engineers build what tools they use what code they use what data areas gathered in those methods around Gathering it and so on each touch point in that you know Factory if you will of building the model requires different skill sets different functions to kind of work together but oftentimes those functions actually don't understand the ethical implications of what they're doing or the compliance you know legal implications of what they're doing and so on so going down that stack of humans who are touching the code and the data and so on as it moves along the chain and giving them a cheat sheet of here's the things to be wary of don't don't do these things but again you're not trying to slow down Innovation you're not trying to slow down their delivery times you want to try and integrate that into their workflow as 
seamlessly as you can and so that over time the products that are coming out the other end more thoughtfully engineered wow that's it's amazing that you've kind of thought through so many parts of this process I mean from the earliest stages of like just tooling and processes to kind of like help guide teams to like the types of roles that might be necessary one of the things you know that we really focus on in this podcast is helping design leaders and their team growth right and as they go through that journey of scaling their operation there are lots of things that undoubtedly will will change and they'll have to adapt and well let's be honest AI is pretty much the biggest game changer since you know maybe the discovery of fire that we've seen in a very long time what do you think design team leads are going to need to plan for in the future like what type of skills will they be looking for what type of you know tooling should they be thinking about we've talked a little bit already about process so we won't get into that but you know what other what other things you know I don't know that we're not thinking of already do we need to be thinking about as you know does that the landscape of the designer really changes yeah it's it's a fascinating space it's it's kind of like when mobile started to take off and we all started to learn about responsive grids and reflowing content and content design and all those kinds of new roles that started to develop and I think this is way bigger than that and nobody is doing this well just so you know there's no company out there that has nailed this and has like the perfect process in place and so what is interesting with that is there's both a an output in terms of like what the new products look like but then there's also you know you know who is making it how are they making it what does that mean and I think both of those are very fluid and ambiguous right now which is great for designers because we can come in and kind 
of start to identify those things that patterns and methods and you know things that we can operationalize in a way that design only designers can do because we have empathy for people and we can pull teams together in new ways so I think it's a really you know environmentally Rich space for designers and design thinking who are used to you know dealing with challenging problems I think NLP is you know natural language processing is obviously going to be a huge thing that's going to revolutionize you know design and product design I think natural language will be in every product within a year it'll it'll just happen meta for example just released a new model that can speak a thousand languages so just think about that from an adoption perspective it's unbelievable we can basically build an app now that works for anybody anywhere in the world which is incredible and to just jump in there for those like maybe unfamiliar and maybe and I want to make sure I get it right too when you're talking about NLP or natural language processing you're kind of talking about some of the familiar apps that I think people have started to see or like chat gbt utilizes that from an interface perspective the way that it interacts with the user and then versus like like the idea of generative AI can you actually help Define generative Ai and natural language process things so that people are understanding of the two terms and then like examples of which then yeah natural language processing is simply put me understanding what your intention is when you say something to me and then you can connect natural language processing to what we call robotic process automation which is basically a kind of if you think of natural language is the brain robotic processing is this the spine and the arms and the appendages right so that it can actually do something with that information so it is a new kind of interface you know it replaces buttons and drop downs and scroll bars and all those things 
because it understands your intention and then can connect the dots to a task you know an action and so on generative AI is a little different where it is basically rematching something that it's heard from you and generating something net new from that so those two things can blur it's hard to you know it's a scenario based thing but it basically is used in a more creative process so you ask GT GPT to do your kids homework you know it can write a paper for you that's generating that content but GPT can also be used as a method to understand what I'm saying and connect back to another system so in a in an application for example the natural language processing agent will understand better the context of my scenario right now am I in the car yes okay don't show me pictures because I can't take my eye off the road right so it will use language to describe something rather than show me an image whereas so that contextual awareness will be something that we'll start to see in applications and mobile apps and so I think over time natural language could disambiguate old school more you know UI what I mean by that is take it will gradually take away more of the buttons that you see on the screen because it's understanding will grow and so it won't need to have these kind of crude do you mean this or do you mean this and you push the button on the screen so I'm in this it'll actually just understand it so it won't need those other things that you normally see on say you know the typical assistance yeah I think that's one of the biggest like changes for me is like having a lot more freedom to ask the system that I'm working with to do something unique and so like before you know if I wanted to like work I don't know find somebody on LinkedIn for example you know I'd have to like go into the system I would type their name I'd then have to select a number of different filters and it would be a very linear path and it would give you a very specific you know list but if I 
wanted to get more creative with that and like ask it to do something special like you know give me a list of like a thousand people that are first and secondary connections that are like you know the top influencers based on Impressions you know and organize them into a table and like make the make sure that I have all their email addresses you know that would be like a typically a harder like task for it to do but I could just like write all that out in like in seconds you know it would it would do it and so the like the idea of like the traditional interface I think is almost like completely being blown up because of natural language processing and it's a bill it's ability to understand and make sense of what you're saying and then with the generative portion the GPT side able to be creative with that and maybe even expand upon it if necessary the interesting thing that is underlying what you're saying is the virtual agents will eventually get smart enough where you don't need to go anywhere else you won't need to search the web you won't need to search LinkedIn you'll just be able to say hey I'm looking for 10 experts that need to help me in Vision Ai and you know it'd be great if they were you know warm connections that I have some you know friends that could introduce me or something like that and it's your prompt from the one that you just described gets to this big because it's it just understands where you're going it's inferring and so if it's doing that in this scenario why would I ever need to go to a website so the issue there for Google is that's going to kill the advertising business right Google search is designed to be doorways into the best content on those websites and what's how they make money now this is this is kind of what Bill Gates was saying the other day where it's like we're going after Google and we're going after Amazon why would I need either of those businesses when my assistant on my device disambiguates all of that other 
infrastructure behind them? So really this is a battle between two business models, an ad revenue model and a subscription model, and that's what's going to get really interesting here.

So not only is it going to change what designers are designing on screen, it's going to change all of the stuff behind it that we were thinking about in terms of the supply chain. I'm trying to think about this myself, but continuing down that path of natural language processing and the changing of how interfaces are built: what are the skills that the UX designers of the future are going to need? You and I have talked offline a little bit about how a lot of the visual design work could eventually get offloaded in some way. But as I start to think about it, this is a complete paradigm shift, right? So maybe some visual design work could get offloaded to an AI, but given that this is net-new territory, the AI may not even understand how to build an interface like this, because it's never learned it from us humans, who created all this great information. So when we start to think a little bit ahead to those UX designers of the future, what are the skills they'll need? Is it interface design? Is it something else?

I think it'll be something else. I think design systems, patterns, components, all those things will just be owned by the AI, and the reason is those things are all patterns. AI is amazing at pattern recognition; it's amazing at optimizing things. It will be able to do that at a scale that no human team could ever manage. I've sat in numerous critiques over the last 20 years where the conversation is "how do we make sure that this thing looks like this thing looks like this thing." A machine will just do that. And not only that, your assumption of what the best of those things is, is most likely off anyway, and with metrics and analytics the AI will just kill it on that. So visual design in the typical sense of what a lot of large companies have, a massive group of designers who manage a massive design system and all understand pieces of it, the AI will just orchestrate that whole thing. The AI at the moment isn't good at being net-new inventive, so I think there will still be room for us until we've discovered some new paradigm. But I think where UX and UI go is contextual awareness of a system. Like I mentioned, if I'm in the car, it goes into this modality, it talks about things in this way, it prompts me in these kinds of conversations, so it's more of a conversational design approach. Whereas if I'm sitting on the couch with my iPad in front of me, it might show me images. I was designing systems six or seven years ago where, instead of describing things, we would show you images and say, "Which of these six things is the thing that you really mean?" And again, that's where generative AI might come in. It might show me three versions of a couch that I could possibly buy for my living room, in situ, and now I'm like, oh, I want that one. Describing that with voice would just be onerous; it would take too long, whereas an image suddenly cuts through and gives me that clarity. So I think designers will be designing context, thinking through what the appropriate thing is at the right time, and the AI will then execute against that. That's where it will move. And then I think the AI bias and AI responsibility piece will be huge as well: designing for much more niche communities and having the empathy to really drill down into what is appropriate for different communities, what kinds of language, what kinds of imagery. It's very dicey, because you could say that's inequality, because you're teaching and designing for different communities in different ways, and that can go in
a bad way, which we could do a whole other podcast on. But thinking through inequality and equity, and balancing those, I think will be the prime space for AI designers, and design itself will change. We can talk about the different skills: designers will become more hybrid in nature. They might have one foot in policy and one in design, and what that designer does is workshop with regulators and civil and human rights experts to figure out the guidelines these AI thinking machines should adhere to in different scenarios, thinking through how to plan, red-team, and come up with the different ways we might miss something. That person, knowing policy and having human empathy skills, becomes the glue between two different factions who work on this. I think the same will be true of voice designers and service designers: they'll have a foot in a different group and will become the glue that brings this larger group of stakeholders together to design these systems.

So Chris, you talked a little bit about voice design. What exactly is a voice designer?

Yeah, really it's a conversational designer. When you think about how we talk to each other, how we text each other, how we email each other, those are all different ways of communicating. What I might text you might be very brief. I have a 14-year-old son; he might just respond with the letter K. That's what I get all the time from him. Whereas my 70-year-old mum, in a text message, might write something as if it's a letter, several paragraphs, "Dear Chris" at the top, "Love, Mum" at the bottom, in a text message. Understanding the norms and etiquette of different communication channels is what a conversational designer does, and you want to train an AI to manage different channels like that, so that it's not delivering War and Peace through a text message, and then in an email something as brief as a tweet. It has that contextual awareness to communicate in a way that feels authentic.

Wow. I think a lot of writers were a little bit worried about being completely out of a job, but it does sound like, for people who are strong writers, this could actually be a very good job for them.

Yeah, I think it is still something that's interesting. I think ChatGPT has really shown how these large language models can bite into that role. It's pretty interesting: if you ask ChatGPT to distill a large text down into a summary, it does a really good job, and if you get good at, in quotes, "prompt engineering," you could have it generate stuff for social media, stuff for short form and long form, and so on. So it can do that work, which is powerful stuff. But in terms of real-time conversation, I think most companies will want to hire people to design and really articulate those conversations well, especially in regulated industries like banking and healthcare, where legalese really matters around the actual language being used, and making sure the AI is on track that way.

Yeah, I think a lot of work still needs to be done there. Even in my own usage of ChatGPT, there can oftentimes be mistakes, or it misinterprets what I said and I kind of have to dumb it down for the system a little bit. Yeah, it makes a whole lot of sense. One of the things I wanted to talk about, and I know this has become a really important topic, especially in the ethics part of the conversation: we've seen Adobe Firefly come out, and they've taken a very specific stance in regard to training their models on owned work, work that they have ownership over or that they have the rights to use for the purpose of training a model. That's been a stake in the ground in some ways, whereas with other groups, I won't
name any specifics, and I actually don't know whether the other vendors do or do not do this, so I don't want to say they do or don't, but I know that Adobe is putting a stake in the ground about this. People have been very vocal, especially in the design community, saying you're still stealing other people's work. I think there is somewhat of an ethical discussion to be had there too: hey, if I go to a museum and I'm inspired by da Vinci and Picasso and all the great artists, and then I go and create my own version, kind of an amalgamation of their work, am I violating some sort of ethical code? I feel like artists have stolen from each other for years; it's always been that way. So where does the line start to be drawn? I'm curious to hear your point of view on all that, and where things are going for the industry.

Yeah, it's a fascinating conversation, and again, this could easily be another podcast. I think the difference, though, with artists copying artists is that artists had to learn how to be artists in the first place. There's still some manual work there. As a kid I learned how to appreciate Michelangelo and Leonardo, do figure studies, compare my charcoal sketches to theirs, and refine my own style over years. That's not the same with AI today, where you can literally just type in a few sentences and boom, you've got something that looks like it comes from the 14th century, immediately. That kind of cuts out the middleman, so I definitely think the copyright lawyers have their work cut out for them. I think it extends beyond just what we're talking about here, though. If I can model thinking processes, can I model people out of the workforce? That's a bigger question. If I can do it with art, can I do it with accounting, or pick any kind of vocation? These things are basically massive automation systems, and if you think that way, that's another part of responsible AI: just because we can, should we? Is society ready for mass layoffs because these systems have become so good at what they're doing that it impacts the ability of the economy to function, because not enough people are able to sustain a job? I think the flip side, too, is that people might actually start to treat people like AI. You'll expect people to just deliver amazing work as quickly as an AI does, and that will have impacts as well on our social and professional relationships, and we haven't studied those tensions yet, or how they might impact our collective psychology.

Yeah, it certainly could be more than a podcast; I think we could write a book about this particular subject, because it is so complex. I think we've all started to have those conversations with friends or colleagues about the positive and negative effects of AI, not just on artistry but on every industry and, frankly, humanity. These are definitely important conversations to be had, and I don't want to dive too deep into it because I think we would easily get off topic, but I do think it is a really important thing for us to think about, especially in terms of the approach Adobe is taking, which I personally have enjoyed. I think that's the right approach: trying to get consent, trying to think about this in an ethical and responsible manner. I think they saw where it was going wrong. We talked about the red team and blue team before, and they saw there was a lot of backlash in the community, and they responded by coming up with a very responsible solution, I think. Those are the types of things that design leaders and teams need to be thinking about when they start to develop this tooling for the future, whether
they are creating it or utilizing it. It's going to be very interesting to see where things go from here, but I think that's a good place to wrap our conversation. Before we go, I just wanted to give you a chance to share any parting words with our audience about the state of AI and where things are going for designers.

Yeah, I think there are some things that designers can feel pretty safe about. I know there's a lot of anxiety in the industry right now. I think design thinking and empathizing with users is still going to be squarely in the designer's role, in the UX role. I think being imaginative and creative and thinking through new ways of dealing with user challenges is going to be something we still own for quite some time. It's going to be a very long time before an AI can really understand the world and really understand people's emotions and intentionality and things like that, so I think we're safe in that space. I would recommend that designers learn simple 101 AI terminology: learn what it means to build a model and train a model, all the processes that go into that, the different kinds of models. Learn a smattering of regulation; try to keep up on what's happening in the EU. Those things really help. Learn about ethical processes and methodologies. Peter Singer is a great ethicist to learn from; Practical Ethics and things like that are really helpful, so you can start to understand how a thinking machine might have an impact on the world. In terms of evaluating talent, I think the world is going to become a lot more ambiguous and fluid, so it's about people who are adaptable and resilient and open-minded, thinking through how to learn new stuff and just diving in and rolling up their sleeves. I know I'll get some backlash for this, but there are really no experts, because AI isn't a static space; it's constantly changing. Every week, almost every day, there's a new white paper coming out, and frankly, not all of the, in quotes, "experts" understand not just the technology but the fact that humans are amazing at taking tools and using them in ways the people who invented the tools didn't think about. That's the piece these research scientists don't account for: we're very crafty in how we use tools, and that's the bit they hadn't planned for. They were thinking in a lab, thinking it could be used for X number of use cases, and now it's exploded into numerous different spaces. So saying you're an expert is a little bit of a fallacy; it's such an amorphous, constantly changing space. Just diving in, trying to learn this stuff, figuring out MVPs, prototyping experiences, failing fast, is a good way to get into this space, and there are no stupid questions.

Yeah, it's very humbling to hear that there are no experts in AI at the moment, and that also presents a great opportunity, I think, for a lot of the folks out there. The recommended reading you mentioned would be great for us to include in the show notes, so if you wouldn't mind sending me a few links, I will make sure those get added. I personally would love to read them as well, as I start to digest and learn the space even more. Chris, this has been fantastic. It was so great to speak with you about this topic, and I know our audience is just going to absolutely love it. Just one last parting thing: is there anywhere people can follow along with your thoughts, maybe content that you're producing?

Sure, yeah, I can send you the links to that as well. I have a Medium account, and I also have a website with some of my work. I'm going to be doing a lot more writing on this subject; I've been getting a lot of interest from various places, and I want to put more time into putting pen to paper to expand on this.

That's fantastic. Well, thank you again so much for taking the time with us today, and we'll see you soon. Thanks again.

All right, thanks Adam. This has been a pleasure.
