
AI in UX: The Ultimate Guide To Improve UX With AI

 Jun 20, 2024

AI is changing UX Research and Design. Watch the webinar now.

Discover how AI is revolutionizing UX in our recently held 60-minute webinar. Learn from industry experts about the latest AI technologies and their applications in UX research and design.

This guide covers user behavior analysis, personalization, and more. AI can enhance your UX strategy and provide deep insights into your users’ behavior. This webinar will show you how.

AI Technologies and Use Cases in UX

Our guest speakers presented AI technologies applied to UX, showcasing how AI can analyze user sessions, personalize experiences, and automate usability testing. Technologies such as conversational AI, chatbots, and voice assistants were covered, along with capabilities like heatmaps and eye-tracking. These were illustrated through use cases and real-world examples.

Live Demo and Q&A Session

Watch the recorded demo of Userlytics’ AI UX Analysis. See how integrations with leading AI platforms such as ChatGPT can speed up your qualitative session analysis.

The recording includes a Q&A session where you’ll benefit from the answers our guest speakers provided to the participants’ questions, offering valuable insights and personalized advice on integrating AI into your UX strategy.

Fill out the form on this page to access the webinar recording and learn how AI can transform your UX approach.

Transcript

[Nate Brown]

All right, let’s go ahead and get started.

I am going to share my screen, and we’re going to get kicked off here.

Let me go ahead and do so now.

Excellent.

Well, welcome, everyone.

My name is Nate from Userlytics.

And today we have a really, really exciting, hot topic for you.

AI in UX research and design. We have a couple of my colleagues on the call, a couple of industry experts who are going to give you the lowdown on everything you need to know about AI and UX at the current moment.

As we know, AI has been around for a while now and is not a new thing to a lot of us, but AI really exploded onto the scene last year with the popularization of large language models like ChatGPT from OpenAI.

And then a lot of the other big players out there started releasing their own forms of AI, along with lots of imitators and different ways of putting AI to use.

So today we really want to focus on what AI is, where it has come from, how it is being used, and what technologies and strategies are being used to get the most out of it in the UX research and design space.

So with that, we’ll go ahead and jump in here.

I want to first introduce the speakers that we have today.

Just a quick intro for myself.

My name is Nate. Been with the Userlytics team for almost four years now.

More so on the account management side.

I’m sure a lot of you on the call recognize me or work with me on a daily basis as you use the Userlytics platform.

So I’m going to be hosting it.

But the real knowledge is going to come from my colleagues.

Zach, I’ll pass it over to you for a quick intro.

[Zach Naylor]

Sure. Thanks, Nate.

I’m Zach. I’m actually the co-founder and CEO at Aurelius.

And so we are a tool very complementary in the space, similar to what Userlytics does.

But we’re more of a research repository and analysis. My background is in UX research and product strategy. I’ve been doing that for a little bit more than 15 years now, which is crazy to say out loud every time I do.

And I’m just happy to kind of join the chat with you all.

[John Anthony]

I’m John Anthony. Thanks so much for joining us today, everyone.

I work with Userlytics as a senior UX analyst.

Been doing a lot of user studies throughout the years, more than 10 years now in UX design and UX research.

[Nate Brown]

Excellent.

Really appreciate you both being on.

I’m very excited to hear your thoughts, comments, and insights specifically on AI.

We’ll go ahead and jump here now. Just want to quickly talk about what our agenda is for the day. What are we going to be diving into just so you have a good idea of what we’re going to cover.

We’re planning to have this webinar last around an hour, but we have an hour and 30 minutes.

So if we have more time and lots of questions, which we hopefully do, we’ll cover those in the Q&A session at the end.

But John’s gonna quickly give us a brief intro, some of the definitions and history of AI specifically in UX.

And then we’re gonna jump into more of the presentation, some of the use cases. And then Zach’s gonna help us with that as well, how AI is being applied in UX.

And then you’ll get me again; I’ll do a brief demo of some of the ways that Userlytics is using AI to enhance research insights and speed along the process of UX research.

And I believe Zach will share a little bit from the Aurelius side as well.

We’ll have the good old Q&A session. So if you do have questions, throw them in the Q&A whenever you’d like, we’ll see them. We’ll do our best to answer them by the end of the day.

Wrap it up with a quick conclusion and then we’ll give everyone their time back for the day.

So let’s go ahead and jump in.

I believe, John, you are first, so I’ll pass it over to you. 

[John Anthony]

Great. So hi everyone, thanks so much for joining.

I don’t think it’s a surprise to anyone that recent advancements in what we’re calling AI today have been disruptive and pretty groundbreaking.

And in what seems like a break with long-held tradition, we now have a technology that seems more well-suited to understanding, interpreting, and summarizing human text and speech rather than just doing math more quickly.

It’s exciting on many levels. I’ve had a chance to work with some recent updates to large language models, the software people currently most associate with AI, and the potential is exciting and a little intimidating.

There are huge upsides for user research activities, namely being able to automate and do more efficiently tasks that have always been very time-consuming and manual.

Things like analyzing qualitative and quantitative feedback at a tremendous scale, or utilizing newer computer vision and analytical technologies so that a smaller team can accomplish work that used to take a much larger team much more time.

It’s important to note that all of these new AI technologies are not just replacing human beings, but enhancing their abilities.

Of course, the career landscape will change. It already has to a degree.

Economists from Goldman Sachs and the International Monetary Fund have predicted a seismic shift in the future of work, much as we’ve seen jobs change in the past due to industrialization and, later, globalization.

So we want to spend this webinar time showing you how AI in all its forms can benefit you all, the user experience and user research professionals in the audience.

And let you know where we’re seeing the trend shift and where the winds are blowing to help you prepare.

So before we begin, I want to throw up a quick poll just to gauge how familiar everyone is with the various AI technologies and techniques, specifically within UXR.

So take a few moments to just check over this poll, respond to the one you see on your screen, and then we’ll share the results.

[Nate Brown]

Awesome, I see the results coming in now, already over 70 responses.

I’ll give it a little more time, maybe a minute or two, and then I’ll go ahead and end the poll.

John, I don’t wanna bias anyone, but I’m interested in which option you would choose?

[John Anthony]

I’m pretty familiar with the AI techniques in UX that are out there today.

I think a lot of us will be more familiar with some of these things than we realize based on how long the technology has been around.

But obviously with recent advancements we’re becoming much more familiar at least on the surface level.

[Nate Brown]

Awesome, awesome. I’ll give it a few more seconds and then I’ll go ahead and end the poll and share the results. Alright.

[John Anthony]

That’s about what I expected.

Some people have basic knowledge of AI techniques in UX.

That’s good. I’m glad to see there’s a range of familiarity.

That’s great.

[Nate Brown]

Excellent.

[John Anthony]

One of the companies leading the AI revolution here that we’re going to be breaking down is Aurelius, which you heard about at the beginning of our webinar here.

So I’m going to toss it over to Zach, co-founder there to give us some insight into the current landscape. Zach?

[Zach Naylor]

Sure. Thanks, John.

So before we actually continue going, we should actually get a couple definitions out of the way.

And when we talk about this kind of thing, knowing exactly what we’re referring to, and also getting some misconceptions cleared up, I think will help people get even more familiar.

So when we’re talking about AI, it is really referring to the development of computer systems, technology and software, things like that that can perform tasks that either mimic or would require human intelligence.

So these are things like understanding natural language.

These are things like recognizing patterns and even making decisions and things like that, learning from data, solving problems, right?

And so the reason this is important is because when we say AI, it is actually a much larger bucket of many things that encompass that, including a few other definitions that we want to lay out, things like machine learning, which is actually just like it sounds.

So with human learning, it’s people learning and understanding how to do things, whereas with machine learning, the system isn’t explicitly programmed to do that, but it has the ability to learn on its own and grow its intelligence, so to speak.

And then we reference natural language processing, which of course is also a different way AI gets thrown around and applied, but it’s the ability to understand human language, which has a lot of nuance and context to it.

And so it’s important to not only define these things but understand that they are different, and that natural language processing or machine learning isn’t simply AI on its own.

It’s an umbrella, just like there is with UX, which is the last definition we’re going to give.

So user experience, we’re really talking about the overall experience that somebody has with a product or service that we might design for, that we might offer.

This is everything that includes every interaction that you have, accessibility, how easy something is to use, the performance, even delight and emotion that we try to evoke in those interactions. 

John, it looks like you have a hand up.

Did you have a question about the definitions? 

[John Anthony]

No, sorry. Just sharing my excitement, that’s all.

[Zach Naylor]

No, no problem. That’s great. (laughing) So with that, we should talk about how this has actually changed over time too, because a lot of these concepts and even some of the early versions of this technology have been around a lot longer than we realize.

There’s a lot of hand-wringing and consternation about AI now. And I think that’s because it’s gotten really hot.

But the reality is that back in the ’90s there were already things happening here, even simple things we remember, like assistants helping us write and achieve certain tasks in Microsoft Office and the like.

And for anybody here who has a development background, this will sound familiar as the classic development pattern: if this, then that.

This was a lot of the way AI was happening back in the early ’90s, but it did start much longer ago than we actually realize.

Fast forward, you know, about 10 years, 10, 15 years, we started to see things that were a lot more advanced and certainly at the time were very groundbreaking.

And here, we know we’re referencing Amazon’s recommendation engine, or when you would go to any e-commerce platform and you saw things being suggested to you based on actions or views that you had, that was actually very groundbreaking at the time.

It’s sort of table stakes now. We expect that when we look at something, these companies are gonna serve us other things we might be interested in, but back then this was actually very, very new.

And then of course we get all the way up to where we are today. We have like full blown virtual assistants in our homes, in our pockets, in our hands, and things like Siri and Alexa.

But also just as importantly, I think the big focus right now and the massive surge in just how quickly things are changing and improving is specifically in generative AI.

And those are the sort of things we’re talking about with ChatGPT, for example: actual net-new content generation and understanding that was largely not produced directly by a human being, but rather by these systems that we’ve built and the capabilities of AI now.

[John Anthony]

Great. Thanks, Zach.

Now I’d like to turn over to where we are today and where we might be going in the very near future.

So before we begin with these use cases, I’d like to gauge the interest of the folks attending the webinar. What AI technique do you feel like you’re most interested in learning about today?

There’s a lot to cover here, which we will break down pretty easily for you, but obviously AI today kind of encompasses a few different areas.

We’d just like to understand where you feel your interest most lies.

[Nate Brown]

So I see more answers rolling in. I’m doing my best to answer as many of the Q&A questions that are easy to answer as we go along, so I appreciate you sticking with me on that.

The webinar chat is open in case you want to maybe share an example of a time that you’ve used AI, or a scenario where it’s been really helpful for you. Give it just a few more seconds on the poll.

John, Zach, I’m interested in which AI technique you most anticipate sharing, or one that’s really been interesting for you lately?

[Zach Naylor]

Well, for me, that’s pretty easy.

I mean, there’s two spaces that we have both interest and are working in. And so for us, obviously the analysis piece of that, which is definitely a part of what Aurelius does for sure, but the automated usability testing, which I am quite sure you and the Userlytics teams are gonna talk about because a lot of this stuff can actually allow us to grow research and impact at scale.

And I don’t think it’s anything to be afraid of. In fact, this is actually a very good thing for us as researchers, that we can apply this at a much greater scale and greater accuracy, not just scale, but greater accuracy and quality at scale.

[John Anthony]

I agree with what Zach said, and I’m also really interested to see how sentiment analysis will evolve over the next six months to a year.

It was fairly good prior to the new large language models becoming available to us, but now that they’re so very good at reading human text and understanding the context behind it, I think we’ll see massive shifts in sentiment analysis.

So that, hopefully with real-time sentiment analysis type work, even if you don’t necessarily have the time to do a full user study on something, you can get some very quick data on how people are feeling about it.

And we’re seeing this across the industry, being used on a variety of different platforms.

[Nate Brown]

Good.

[Zach Naylor]

Sorry, Nate. I was going to say, John, that sentiment analysis is a really interesting one as we talk about the evolution of it too, because it’s been around for a while, but it hasn’t been very accurate.

And it’s because machines detecting the nuance in human language is so tricky. Things like sarcasm and meaning based on context is actually really hard. And it’s getting so much better, so much faster.

[John Anthony]

Yeah. And I’m interested to see in the poll results that conversational AI was actually the least anticipated, probably because it’s the one we’re most being hit in the face with today, which I kind of agree with.

Maybe conversational AI is seeing its trend wind down a bit there, but yeah, I agree on behavioral analysis and automated usability testing. That would be very exciting to see.

Great. So let’s talk about some of these top use cases in user research and design today.

Most of us are well aware of some of the earlier types of artificial intelligence already at work, bringing us things like personalized recommendations on Amazon and Netflix.

We’ve also seen some early use of this technology to help with heat maps and eye tracking, sort of crunching large amounts of data to paint a picture of user behavior across time and at scale.

We’ve also made some strides in sentiment analysis, as we were just talking about, starting in the early days prior to what we’re seeing now. Older sentiment analysis tools, like Zach was referring to, would sort of highlight keywords and try to determine the most likely attitudes of users, oftentimes with limited success; some were good, some were hit or miss.

You know, these technologies have also begun to assist us in wider behavioral analysis, being able to identify and interpret trends across large groups of people.

That’s all changing thanks in large part to the large language model-powered conversational AI, which draws upon these massive models of human speech and writing to interpret and analyze what we’re saying like never before.

So what we’re seeing from that is new advancements in the automated usability testing, which we’ll get into in a bit.

So now we’re going to take a deeper dive into some of these use cases.

First, I want to focus on personalization. It seems to be the one, you know, most of us have extensive experience with, you know, Netflix has been spending years and a lot of subscriber dollars fine tuning their learning algorithms to promote content based on your viewing habits and your feedback. You’re seeing this kind of thing across the board in the major platforms.

You know, Facebook has been doing this to tweak their feed. Twitter/X has been doing this for quite a long time. You’ve seen this kind of thing happening on Amazon, on Temu, a lot of big platforms.

Instagram and TikTok have obviously been using these learning algorithms, these large algorithms that they’ve been creating and honing over the years to do this.

You know, Amazon tailoring the shopping experience, having the product you’re looking for right at your fingertips, delivered by 10 o’clock tonight because they know it’s one of the most popular items in your area.

Obviously, the end result is pretty straightforward. More people watching Netflix, hence more subscribers, more people buying products from Amazon, more people scrolling their TikTok feeds and so forth.

So that’s some of the examples of sort of early AI in regards to AI driven personalization.

[Nate Brown]

Yeah, I love that example, John. I could say that the Amazon one has definitely worked on me and my wife as well.

[John Anthony]

It works on everybody. That’s why they make so much money.

So next we’re moving on to heat mapping and eye tracking solutions. These started using machine learning behavior algorithms to better interpret and predict user behavior, thanks to enhancements in cameras over the last decade or two, mobile technology, computer vision, things like that.

Some software no longer really needs to rely on expensive and cumbersome eye tracking hardware. That was sort of the prior state of where we are today, where we used to strap these big rigs to people’s heads and, with all these cameras, watch where their eyes were moving. Now we can just use your phone’s or computer’s front-facing camera to track your eye and facial movements.

You see this now if you’ve ever used Face ID or any sort of face unlock feature on iPhones or Android devices; even computers are starting to have some of this technology as well. So you’ve seen it out there in the wild.

Now next we’re gonna talk about sentiment analysis, which has been around in many forms for quite some time. Originally, these simple systems would look for keywords, try to match the wording they understood people to be using (granted, some of that wording might be wrong), and approximate, from the combination of words and their relationship to other words, whether the user was describing something positive or negative. That’s basically how sentiment analysis worked.

Now systems are obviously getting a lot smarter thanks to natural language processing. They’re able to determine with greater accuracy the intent behind users’ words. They can understand the context of phrases, not just the keywords being used, to gather meaning, and they can relate larger amounts of spoken or written text surrounding the item they’re looking at. So they have a much larger frame of reference.

They can also quickly break that information down by whatever demographics you choose, right?

So if you’re working on a platform where testing is going on across a variety of groups of people you’re interested in looking at, you’re gonna be able to break that sentiment down by those groups pretty easily.
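To make that contrast concrete, here is a minimal sketch of the older keyword-counting approach described above, with a per-group breakdown. It is purely illustrative and not any vendor’s actual implementation; the word lists, quotes, and participant segments are made-up placeholders.

```python
# Naive keyword-based sentiment scoring of the kind described above,
# plus a breakdown by participant group. Purely illustrative.
from collections import defaultdict

POSITIVE = {"love", "great", "easy", "clear", "fast"}
NEGATIVE = {"hate", "confusing", "slow", "broken", "frustrating"}

def keyword_sentiment(text: str) -> int:
    """Return +1 (positive), -1 (negative) or 0 (neutral) by counting keywords.
    This is exactly why early tools were hit or miss: sarcasm, negation
    ("not great") and context are invisible to it."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return (score > 0) - (score < 0)

# Hypothetical session quotes tagged with a participant segment.
sessions = [
    ("new_user", "The checkout was confusing and slow"),
    ("new_user", "I love how clear the navigation is"),
    ("returning", "Not great, the search felt broken"),  # negation is misread
]

by_group = defaultdict(list)
for group, quote in sessions:
    by_group[group].append(keyword_sentiment(quote))

for group, scores in by_group.items():
    print(group, sum(scores) / len(scores))
```

A context-aware NLP model would instead score whole phrases, which is why the newer systems handle negation and sarcasm far better than this word-counting baseline.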

So obviously the next big leap forward, which has really helped propel AI into the mainstream, is those large language models and conversational AI.

We’ve seen them in chatbots like ChatGPT; people kind of went nuts when that first came onto the scene. They respond to questions and even generate entirely new content, completely from prompts.

Conversational AI in the UXR space can gather feedback in real time. And of course, it’s highly scalable. It could be deployed virtually anywhere, gathering and disseminating massive amounts of data.

This is really where we’re at in the cycle: we’re seeing huge gains in productivity, efficiency, and time spent, with these large language models able to hone in on all this data very quickly, comb through it, and provide some feedback, obviously with the caveat that it might not always be accurate.

So that technology has led the way for more advanced capabilities in user behavior and session analysis. AI tools can now assist in analyzing that user behavior, Zach was talking about that a bit before, like navigation paths and various levels of engagement.

These tools can analyze recorded videos for patterns and for pain points. And using machine learning, they can help to predict future behaviors in UX outcomes.

So that’s probably where more of the excitement is coming from down the line. The technology that’s coming down the pike is not only able to see what’s happening in sessions, from behavior on the site or application you’re testing, but also able to derive patterns and trends from that information and let your teams know that people are gravitating toward a certain behavior, or that we’re seeing a problem coming where you may experience an issue if the current structure remains as it is. All of that obviously can help inform faster, better design decisions.

So finally, where we’re headed: putting it all together with automated usability testing. These AI tools can be used to gather and analyze user behavior and feedback. Machine learning models can interpret the data to find pain points and predict future problems. Automated tools then generate heat maps, session replays, and insights that can be checked against real data to give researchers quicker access to insights than ever before.
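As a rough illustration of what automated pain-point detection over behavioral data can look like in principle (not any product’s actual pipeline; the event names and thresholds below are assumptions), a toy detector might flag sessions with rapid repeated clicks or heavy back-navigation:

```python
# A toy pain-point detector over session event logs: flag sessions with
# rapid repeated clicks ("rage clicks") or heavy back-navigation.
# Event names and thresholds are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds from session start
    kind: str          # e.g. "click", "back"
    target: str        # element or page identifier

def flag_pain_points(events: list[Event]) -> list[str]:
    flags = []
    clicks = [e for e in events if e.kind == "click"]
    for a, b, c in zip(clicks, clicks[1:], clicks[2:]):
        # three clicks on the same target within two seconds
        if a.target == b.target == c.target and c.timestamp - a.timestamp < 2.0:
            flags.append(f"possible rage click on {a.target}")
            break
    backs = sum(1 for e in events if e.kind == "back")
    if backs >= 3:
        flags.append("frequent back-navigation, possible lost user")
    return flags

session = [
    Event(1.0, "click", "buy-button"),
    Event(1.5, "click", "buy-button"),
    Event(2.2, "click", "buy-button"),
    Event(5.0, "back", "cart"),
]
print(flag_pain_points(session))  # -> ['possible rage click on buy-button']
```

A production system would of course combine many more signals and feed them to a model, but the basic idea of turning raw session events into candidate pain points is the same.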

Now, I wanna briefly go over some best practices and some ethical considerations when using AI tools in user research and UX design.

First, keep in mind these tools excel at handling repetitive tasks, but they are really a poor substitute for human judgment.

Oops, I just realized this. (laughs) Sorry about that, there is a poll. We were trying to understand how you end up balancing automation with the human touch. So really quick, just go through the poll: how do you balance the use of automated tools and human expertise in your UX projects today? How do you feel like that’s going to take shape?

[Nate Brown]

Awesome, John, as we wait for those results to roll in, I got a good question in the chat.

And might as well attack it now. The question is, how has sentiment analysis powered by AI impacted UX research? Bit of a broad question, but interested in your thoughts.

[John Anthony]

Zach, do you wanna take that to start?

[Zach Naylor]

Sorry, one more time. I had my mic muted.

It’s impacted UX research in a few ways. The first is that it’s just gotten better. So again, as John mentioned as well, previously it was trying to pick out words that would typically be seen as positive or negative, which was somewhat reliable. And I think you even used the term limited success, which is probably fair.

Um, sentiment analysis. Now, you know, it can combine a number of behavioral factors and understand context of not only those words, but certain phrases. So really the accuracy of that has gotten better.

And then more importantly, now specifically, this is again a combination of natural language processing and machine learning, where it’s learning that the combination of certain things, even said in slightly different ways across many people, suggests something is very good or very bad. And so, for me, that’s been the biggest impact in UX research: understanding that faster, because that’s the sort of thing we would do as researchers, combing through this as humans who understand the language. And frankly speaking, even when you do this as a human being, sometimes you have to go back and re-watch and re-read, and it just takes a lot of time.

So it’s gotten better and it’s allowed us to understand this faster, would be the shortest answer I would give to that.

[John Anthony]

Yeah, I was just going to say also, I see some people mentioning that sentiment analysis is pretty good in their experience. Other people are saying that it’s not been very useful because it’s not a very reliable measure. I would agree it has not been very reliable in the past.

So with any of these tools that we’re talking about, any of the tools that are going to be in your toolbox going forward, they’re only going to be good in their own space, I want to say, because you have to take them as part of a larger context.

Look at the sentiment analysis that AI is generating for you and the answers it’s providing for you, and use that as a starting point, a guiding point, to dive deeper into that information.

If the sentiment analysis tools that AI is powering are suggesting there are some serious pain points or some bad sentiment going on with a particular area of your user experience, your app, your website, whatever, use that as an opportunity, sort of like a little red flag, to say let’s go take a deeper look into this and see if it pans out with the data we have.

I think with all of these AI tools, it’s a really helpful guide to show you. All right, the sensors here are showing us an enemy ship off the bow. We gotta make sure we can lay human eyes on it and make sure that that’s actually true.

Okay, so we can skip ahead. So keep in mind these tools… actually, we wanted to show, did we already show the poll results?

I’m sorry, I didn’t see if they popped up on the screen, Nate.

[Nate Brown]

Yeah, let me go ahead and share them now.

[John Anthony]

Perfect, perfect.

So I see, yeah, a big answer for using automation for repetitive tasks and relying heavily on human expertise for insights.

I would say that’s absolutely true. I mean, the AI tools that we’re talking about can start to provide some of those insights for you, but it’ll definitely be helpful to have human eyes on those and make sure that you’re actually seeing what’s there.

Great. So first, keep in mind, again, as I was mentioning, these tools excel at handling repetitive tasks, but you need that sort of human intuition and analytical muscle behind them.

You still need people involved in interpreting the data, making informed and creative design decisions, and advocating for the user. Those are probably the three most important points I can make in terms of using AI for these purposes.

And next, make sure your AI tools comply with all the relevant privacy laws and regulations, especially, you know, thinking about some of the laws that are ever changing in the EU and other places, be open and transparent with your users about how you’re collecting this data and how it will be used.

Of course, always give them a way out, allow them to opt out if they choose. Now, it should always be a part of every user experience, particularly when you’re talking about personally identifiable information.

And finally, always focus on iterative improvement. Honestly, not a week goes by that AI models are not being updated with better understanding, more functionality, and hopefully the tendency to make fewer mistakes. But mistakes are definitely part of the picture right now, so you need to expect them, plan for them, and correct for them whenever they appear.

So let’s talk ethics. Obviously the trust of our users is paramount.

So following the laws where you work, where your participants live, that’s number one, that’s why data privacy is so important.

Again, like I mentioned, be open and transparent with your users about the data you collect, how it’s used, and how long you’re keeping it for.

Make sure it’s secure. Your organization should follow the principle of least privilege, meaning only people who absolutely need access to personally identifiable information should have it, and no one else.

Of course, we’re aware that all AI can have inherent biases. It was designed by humans, of course, but there are always ways to mitigate these issues. Always review the output for potential bias. Work with models that use a diverse set of training data and that are optimized to remove as much of that bias as possible.

There are also fairness metric libraries out there. They’re probably eventually going to be included in a lot of these models as part of their core software. But right now we have a lot of open source ones by Microsoft and Google that developers can use to help limit the biases available in these models.
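As one concrete example of such a library, Microsoft’s open-source Fairlearn can break a model’s performance out by participant group to spot skew. The sketch below is illustrative only; the sentiment labels and participant segments are made-up placeholders, not data from any real study.

```python
# A minimal sketch using the open-source Fairlearn library to compare a
# sentiment model's accuracy across participant groups. Labels, predictions,
# and group names are illustrative placeholders.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]          # human-labeled sentiment (1 = positive)
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]          # model-predicted sentiment
groups = ["A", "A", "A", "B", "B", "B", "B", "A"]  # hypothetical segments

frame = MetricFrame(metrics=accuracy_score,
                    y_true=y_true,
                    y_pred=y_pred,
                    sensitive_features=groups)
print(frame.overall)   # overall accuracy
print(frame.by_group)  # accuracy broken out per group, to spot skew
```

If one group’s accuracy lags noticeably behind the others, that is a signal to review the training data and the outputs for that group before trusting the analysis.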

Lastly, we’re all aware of so-called dark patterns in UX design. Companies have been using them for a very long time, and AI tools only make it easier and darker than ever, really.

So avoid the temptation. Many organizations today are setting up AI councils, AI centers of excellence, or AI ethics committees, ensuring these tools are used responsibly.

And now that we’ve gotten that out of the way, I wanna send it over to my colleague, Nate, to showcase some of the cool things we’re doing responsibly with AI.

[Nate Brown]

Awesome, appreciate you both, John and Zach.

Again, really insightful. Just what you had to share.

I’m going to kind of transition into a period where I want to show some of the ways that Userlytics is using AI, and I’ll actually show you how those work. Many of you on the call who use the platform already are familiar with some of these.

But there may be some of you who are not. Before I jump in, though: in the chat, put a thumbs up if you guys enjoyed what John and Zach had to say so far. I know I learned a lot of things. But let’s go ahead and jump in.

So I’m actually going to move my screen really quickly and jump into the actual Userlytics platform, but before I do just a couple of things that we’re gonna show.

I’m not going to go super in depth into everything today, but note that if you want a call afterwards with the Userlytics team, just to dive a little deeper on these features, feel free to reach out. I’ll have some more information towards the end. But I’m going to be showcasing a couple of different things.

We talked a little bit about heat mapping, first click testing, I’m going to show you some of the ways that Userlytics does that.

Sentiment analysis was a big one on the call today. Going to be showing you the sentiment analysis within Userlytics.

And then one AI feature that we’re really excited about, we call it AI UX Analysis. And this is where we have AI that actually analyzes your sessions and gives you insights kind of like you see here.

So let’s go ahead and do that again, switching over my screen. Now we are actually in the Userlytics platform and going to take a look at some of the things that I mentioned.

So I’m going to jump in really quickly to the qualitative side first. As we can see here, we have a session, a video of a participant. Specifically in this test, they were testing an ADDITIV website. So what are some of the ways that Userlytics has implemented AI into the tool to help you get some insights here?

So first, obviously you have your transcription. This is an AI-based transcription, it happens automatically, and it’s also broken down by your activity. So you can quickly find where the participant was. You can also click any specific word or phrase in the transcript to jump to that point in the video, which is always nice for finding specific words and going to those points in the video.

But one of the features that we’ve talked a lot about is sentiment analysis, right?

What is the Userlytics sentiment analysis, and how have I seen people who use our platform actually utilize sentiment analysis in a real-world application?

So obviously you have the positive and negative phrases, which are the two sentiments currently supplied by the Userlytics sentiment analysis.

And then if we scroll to the bottom we kind of see an overall positive to negative and also a neutral in there as well for the video here.

Now again a couple of ways that you can use this. I’ll be showing kind of an overall sentiment analysis for this study in a little bit.

But it’s always kind of nice to know going into a video that we’re going to review, kind of what the overall sentiment of the participant was. So in this situation, it was more positive than it was negative.

It’s always generally going to have the most percentage be neutral in most situations.

But one way: a client of mine actually showed me how they used it, and so I always share it now because I love the way that they worded it. You can see here on the play bar that the sentiment analysis shows you where those positive and negative phrases are.

So versus actually going through and reading the different phrases and deciding whether those are positive or negative, being able to see which sections of the video had the positive or the negative sentiment will point you in the right direction of where those insights are. So maybe we’re really interested in the constructive criticism that they have, or I want to see what positive things they have to say.

Again, I can quickly find those moments by using AI. And so I think, you know, a general theme of the webinar today, right, is that AI is not going to replace you as a researcher or a designer, but it’s going to allow you to do your work more quickly and more efficiently.

[John Anthony]

Yeah, I really want to stress that point, Nate, because I think that’s a lot of fear I’ve been hearing just across the industry that a lot of these things are going to be eventually replacing human beings.

I just don’t see it, to be honest with you. I think these tools are going to greatly enhance your muscles like using other kinds of tools to lift heavy objects or anything out there in the construction world, right? It’s sort of magnifying human effort, right?

Whether that be lifting or moving things or where they can see. This is the same kind of tool along those lines. It can’t really run by itself. It needs human interaction.

[Zach Naylor]

If I could add to that, I agree, John, for whatever it’s worth.

And I gave a talk specifically about AI and UX recently where I talked about a lot of the misconceptions, and that is one of the biggest ones. And I asked everybody in the audience to raise their hand if they use a mobile phone. And everybody did, of course.

And I said, did your mobile phone replace you in your job? Of course, the answer was no, but it does actually allow you to do your job better, faster, easier.

And that’s exactly the way I think we should look at AI. It’s not meant to actually replace anybody, it’s really meant to augment what we can already do.

[John Anthony]

Right.

[Nate Brown]

Excellent. I had a good question in the chat from Michael actually. Thank you, Michael.

Something I actually wanted to touch on, so I appreciate you mentioning that.

He’s saying, if something in the sentiment analysis is inaccurate, can I change that? The answer is yes.

All you have to do is right click the phrase, and then you can edit it, and you can change the sentiment. And the cool thing about this is that our tool actually learns, right? So let’s say you go in there and you do make some changes. I probably wouldn’t recommend going through every single transcription with, you know, a fine-tooth comb on everything.

But let’s say there’s a general one: as you change it, the system will actually learn from that, and it improves.

So this sentiment analysis, at least within Userlytics, is learning over time and getting smarter. And that is to me a cool aspect of it is that it gets better over time.

And I think, John, unless you disagree that that is kind of the route AI is headed in and maybe what potentially scares people is that it gets too smart for its own good.

[John Anthony]

I think we’re really far away from, what was that thing called in Terminator? Was that Skynet?

Yeah, really, really far away from Skynet. Don’t worry too much about that.

[Nate Brown]

All right, excellent.

I want to move on really quickly and just touch on some of the analytical side of things. We talked a little bit about heat mapping first click test things like that.

I just want to show an example of a way that Userlytics does it, specifically with first click testing. The way that it works within Userlytics is that you are able to upload an image, a screenshot. It could be of a website like this one, and participants click once and then confirm their click where they would click first, based on whatever you had listed as the instruction: where would you go to find this, or what would you click on first when you reach our website? I think in this one it was more about, you want to search for basketball shorts, where would you go?

All right, so we prompt them based on what we’re interested in learning, and then this output provides us with where participants clicked. So this not only shows a basic heat map, but it also gives us the participants and the percentage of clicks that each area received.

And we can see there are a few different ones, different people going different places. Maybe this participant thought that basketball shorts would be here since it was an image of shorts.

And as we scroll down, we can see there are even a few more down here. The cool thing about this is that not only do you get that heat map, but maybe I’m just interested in the click map where specific participants click, and I can see that here as well, and then you also get an image.

All right, so this is again, a little bit of a newer feature in the tool. For those who haven’t tried it out, who are using the platform, I’d say try it out, give us feedback. And if you’re interested in using this type of activity for your own research and you aren’t using a platform like Userlytics, definitely reach out to us.

And then the last one I want to show, again, is the one that we’re really excited about. I think this probably has the best application of AI in the Userlytics tool: what we call AI UX Analysis. This is an analysis based on transcriptions, so it’s not necessarily watching the video per se, but it is taking the transcription. Let me actually come back here really quick.

That not only includes what you implemented, but what the participants spoke, so it’s kind of seeing both sides of things. And it’s giving you general insights, right? So we can see here that some of the participants were kind of overwhelmed by the amount of content and links on the homepage. Now, like we mentioned, this is not really designed to replace you viewing videos, but there are a couple of ways that you could use it, and ways I’ve seen people use it. One is time crunch.

As we know lots of times we have solid deadlines. We maybe don’t have a lot of time to watch entire videos of participants. By utilizing this AI we get some initial things to look for.

Maybe I can pull these insights out for a report I’m doing for either my stakeholders or a client. And then again, this kind of just speeds up my analysis process.

Another one that a client of mine mentioned, that I again thought was really cool: let’s think about those who maybe don’t have a big background in research, right? It could be a designer on your team who just wants some quick feedback on a design change they’ve made in an iteration, or maybe a newer member of the team, maybe a research intern, right?

This is going to be a really powerful and beneficial tool for them. Because again, it kind of gives them a baseline of what should I be looking for?

Maybe they do their analysis and they can kind of check their work against the AI analysis that was provided. And this analysis takes no more than a minute or two to process and you have it available. So a really, really powerful feature here.

I’m interested if anyone, again, on the call has used this. I see a lot of stuff going on in the chat, but that concludes the majority of the demo portion of what I want to show in the tool. Again, if you’re really interested in getting a more in-depth breakdown or walkthrough of the platform, feel free to reach out and I’ll have some contact information available.

[John Anthony]

Nate, this question came up in the chat as well. I just wanted to bring it to your attention as we’re talking about it. Is AI for sentiment analysis being trained more heavily on quant or qualitative data or is it about equal?

[Nate Brown]

It’s a good question. I would say that would depend a little bit on what sentiment analysis tool that you are using because they can differ depending on, you know, what tool or program you’re using them.

But our sentiment analysis tool is going to be, I would say, based more on quant specifically because it’s using the transcription.

So it’s not necessarily equal, but in terms of what the input is to get your output is going to be a little more quant focused versus watching the video.

I do believe that there are plans to include an update to sentiment analysis to give it a little bit more oomph and a little bit more analysis based on qualitative aspects, kind of like you guys have already been mentioning.

But John, I’m wondering if you’ve maybe seen different types of sentiment analysis out there so that my answer is not too specific to Userlytics.

[John Anthony]

Not a tremendous difference. I think it can be equally good in both cases. The other part of the question was, are user researchers using sentiment analysis AI tools more for qualitative or quantitative work?

Really, whatever fits the moment, right? I’ve seen it used a lot for qualitative and I’ve seen it used for quantitative as well. I don’t think there’s any way to say specifically whether it’s one or the other that is happening more or less, really.

[Nate Brown]

Yeah, and I guess that’s a good point.

One way that I’ve seen our sentiment analysis used from maybe more of a quant perspective is to track it over time. So maybe I wanna see how participants, from a sentiment analysis standpoint, are feeling over time as I make iterations. So I have an early design, I do a sentiment analysis, and maybe it’s more negative.

But as I make iterations, I can actually show that the sentiment is getting more positive over time as I’m making these changes, or maybe not, and that could be a good insight as well.
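A minimal sketch of that "sentiment over iterations" idea, assuming you can export per-study positive/negative/neutral phrase counts from whatever tool you use; the counts below are made up for illustration.

```python
# Track the share of positive vs. negative phrases across design iterations.
# The numbers are invented; in practice you would export these counts from
# your sentiment analysis tool for each round of testing.
iterations = {
    "v1": {"positive": 14, "negative": 31, "neutral": 120},
    "v2": {"positive": 25, "negative": 22, "neutral": 110},
    "v3": {"positive": 38, "negative": 12, "neutral": 105},
}

for version, counts in iterations.items():
    judged = counts["positive"] + counts["negative"]
    pos_share = counts["positive"] / judged if judged else 0.0
    print(f"{version}: {pos_share:.0%} of non-neutral phrases were positive")

# If the share climbs release over release, the iterations are trending in
# the right direction; if it doesn't, that is an insight in itself.
```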

[John Anthony]

Yeah.

[Nate Brown]

Awesome.

Zach, I did wanna give you a chance, ’cause I did promise that if you wanted to share a little bit about how Aurelius is using AI, we talked about it offline, and I was really interested in how you guys are using it, but interested to get your thoughts.

[Zach Naylor]

Yeah, for sure. It’s not entirely dissimilar to what Userlytics is doing, but we’re really focused on qualitative data, specifically bringing in the kind of research that you might do in something like Userlytics, like an interview or a moderated usability session.

And something that we’ve built in Aurelius is called AI Assist. And so what you can do is you can take one of those sessions. So let’s say you did a one-on-one interview, you can drop that into Aurelius and transcribe it with near human accuracy.

Those are all things that we’ve had for a while. But what you can do now is click AI Assistant.

What it does is gives you a summary paragraph and extracts key themes from that data, whether it’s an interview, a transcript, a usability test, open-ended survey feedback.

It doesn’t matter. So just as we do as researchers, it will actually give you key themes of what was discussed there, which you can then turn into what we call key insights.

Now, because we’re often referred to as a research repository, why that’s important is you can go from data to insights, which are searchable across every single study you have in one spot.

Now, we actually just recently launched an update to AI Assist where you can now do this across multiple sessions. So let’s say you did 15 interviews, you can get the same summary and key themes of everything that you learned across 15 interviews in just a few moments.

And you can turn those all into insights, again, searchable by keywords, tags, or both across everything, all of your projects in one spot, right in Aurelius.
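As a rough idea of how a multi-session summarize-and-theme step like this can be wired up in general, here is a sketch using the OpenAI Python client as an example backend. This is not Aurelius’ or Userlytics’ actual implementation; the model choice and prompt wording are assumptions made purely for illustration.

```python
# Summarize several session transcripts and extract recurring themes with a
# large language model. Illustrative only; not any vendor's real pipeline.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarize_sessions(transcripts: list[str]) -> str:
    joined = "\n\n---\n\n".join(transcripts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[
            {"role": "system",
             "content": "You are a UX research assistant. Summarize the "
                        "sessions and list 3-5 recurring themes with short "
                        "supporting quotes."},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content

# Example usage with placeholder transcripts:
# transcripts = ["Participant 1: ...", "Participant 2: ..."]
# print(summarize_sessions(transcripts))
```

The researcher still reviews the themes against the raw sessions; the model output is a starting point for analysis, not a finished report.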

[Nate Brown]

Zach, very cool.

And I’m interested, you know, as your team has implemented this new AI, what has the reaction been from your clients that have actually begun using it?

[Zach Naylor]

Yeah, surprisingly positive. I say surprisingly because I come from the UX world and I know that there’s a lot of concern about the application of AI, and a lot of what we do is inherently so focused on being human and understanding what that experience is, right? But it’s been so surprisingly positive because we’ve had people, I’m sure much like folks on the call today, who are very experienced researchers actually use AI Assist and reach out and say, I’m a little embarrassed, it actually caught some themes that I missed in my own analysis.

And so the whole idea of it, again, and I answered some things in the chat and we’ve talked about this on the webinar: I don’t think anybody here is really building things with AI with the intention of replacing anybody doing this, right? We just want to help everybody do it faster and easier.

And so if what we can do in Aurelius specifically with AI Assist is help you go from data to insights faster and easier, then that just means you have a lot more time that you can spend doing what you’re most valuable doing, which is really making sure people understand those insights and their impact, communicating why it matters to our teams. And that’s the thing that we have heard a lot. In fact, with the new update where you can do this across multiple sessions in Aurelius, we had somebody reach out and say, yeah, one of my researchers, pretty much every report they give me is just the output from AI Assist. Which, even as somebody who quite literally built that, and I’m biased, I’m gonna tell you it’s great, even I don’t necessarily recommend that.

But the point there is that they can do higher quality research and get to the actual meaning faster. And that’s definitely what we wanna do, because we know that analysis and sensemaking can be one of the most time-arduous processes there is in the work that we do.

And we can cut that down and that means we can have an impact faster as well.

[John Anthony]

I don’t think anyone should be embarrassed about that to be honest with you, Zach. I don’t think anyone should feel bad if they miss something that AI picks up. I mean, your sidekick in this mission being AI is this sort of embodiment of the entirety of human knowledge, plus almost perfect recall in its memory about everything it has just seen.

So you’re never really going to be able to compete with that in terms of recalling things that are being said. So it’s just a huge benefit to you to be able to have a tool like this that can take in all of this feedback and say, “Hey, and by the way, three people said this button is the wrong color or whatever happens to be.”

Right? And just use that to your advantage, I would say.

[Zach Naylor]

You know, one of the most interesting things about that too is I hear a lot of discussion in the industry lately about AI potentially introducing biases in the analysis process, which is of course possible. But the interesting thing is we as humans, regardless of where we’re from, regardless of how hard we work against it, we actually bring our own biases.

So the interesting thing there is that actually using AI tools like this can, in some cases, remove our own biases, sort of help us get out of our own way.

In one of the examples that I mentioned, that’s actually what had happened. They were so focused on learning one thing, which is natural in a research study, that they actually missed some of the peripheral insights that were present in the data, and AI Assist in Aurelius actually helped them get that. So that was fun.

[John Anthony]

Yeah, I mean, it’s true at this stage of the game, our AI tools are not aware of a person’s gender for the most part, the color of their skin, even their names. They just see the words on the screen, and hopefully they’ll eventually be able to understand some tone and things of that nature.

Right, it’s probably, on balance, a bit more middle of the road than even we are.

[Zach Naylor]

Yeah, we bring a lot to the table as researchers and I know that people who do this work, we certainly work hard to come at that data without that, but it’s difficult just because of the human experience. And this is actually one way it does help balance that out.

[John Anthony]

Right, yep.

Great, are there any more questions? I think we have a few in the chat area.

[Nate Brown]

Well, John, lots of good questions coming in, and we’re close to our Q&A session.

We’ll check the webinar chat as well; I see some questions went in there instead of the Q&A, and that’s totally fine. I’ve seen a couple of good questions about what’s next for AI UX Analysis, specifically within Userlytics. I just wanna touch on that real quick before we continue on here, because the AI UX Analysis as it is right now is an early iteration of integrating AI to do this analysis for you, and we do have big, big plans, as Userlytics is invested in including AI in our platform.

One of the ones I’m really excited for is the ability for our platform to actually link the portions of the video that back up these insights. Right now you kind of just get a general overview and general insights, but the update will not only enhance the prompt that we’re giving the AI to review the session, it will also give you direct links to sections of the video that back up or show proof of the insights that are being given.

Also another good question around the ethical side and security side of things is that this AI UX analysis feature within Userlytics is a double opt-in feature. So there actually may be those of you on the call who use the Userlytics platform and don’t see this option.

That’s because your team actually has to opt in for this feature, so it’s a double opt-in. You have to opt in to even have the feature available, at no additional cost to your plan, but you also choose the studies that you want the AI to review.

So maybe you’re testing something sensitive and you don’t want AI to review it, then you simply just don’t request it for that study. So I hope that is valuable.

Gonna go ahead and proceed on here. Again, don’t worry, we’ll definitely get to your questions in the chat.

Do want to quickly just mention an offer since those of you spent the time with us today to sit in this webinar, listen to the things we had to say. If you don’t have a platform or specifically a platform that is utilizing AI and you’re interested in utilizing the Userlytics platform, please reach out to us and we would be happy to offer you additional credits as part of your plan.

Give you a little bit more room to even try out some of these AI features. The offer is running for about a month, so it’ll be available until the 15th. And all you have to do is reach out to AI@Userlytics.com and our team will take care of you.

If you’re already a member of the platform, just reach out to your account manager and we’ll get you hooked up.

All right, well, let’s tackle some Q&A, lots of good ones. I’m gonna come to the chat just to see which ones I can grab. But first, John and Zach, I’ll throw it out to either one of you if you wanna take it: are there best practices in test design that affect the quality of the analysis?

[Zach Naylor]

That’s an interesting question. There’s a couple of things that need to qualify there. I think it depends on the kind of testing and the kind of like tools and approach you’re using.

Generally speaking, the same with research as it is today, you know, bad data in, bad data out.

So, you know, in terms of like designing the goals and questions and cadence, the wording of those questions and stuff, that’s going to matter because then the data you get out of that, that’s what’s going to be analyzed just as it would today. So, you know, if you’re asking somebody, how much do you hate this website?

Right? The responses you get are obviously going to be biased right out of the gate. So I would pay attention to a lot of the practices that you apply to good research today. That’s a general answer to that; I think we can get more nuanced depending on what tools and ways you’re deploying those tests too.

[John Anthony]

Right, I think Zach’s correct.

And a good study design is good study design, kind of no matter whether you’re using AI tools to incorporate that or not. I just think AI tools are going to expose the flaws in your testing design system a bit more quickly, ’cause you’d have to go through all that stuff manually to kind of find out, oh, I shouldn’t have asked the question that way, or I should have asked this question before that question, right? So you’re gonna learn that stuff pretty quickly, I think using AI tools as opposed to having to wait to do it manually.

[Zach Naylor]

One more thing to add to that, too, that I just thought of: also consider whether you’re gathering massive quant data at scale as well, because that can probably influence the analysis of this too.

The reason I mention that this is gonna depend on which tool you’re using, too, is because you might be doing this on your own. For example, you can drop some things into ChatGPT and try to get this stuff out of it, but it’s gonna be a little bit different, completely driven by you.

Now, the interesting answer to that is the way in which you ask for the analysis is gonna matter just as much as the ways in which you ask the questions.

Whereas if you’re using a tool like Userlytics or like Aurelius, those things have actually already been built in to make sure that they’re focused on giving you the kind of response that you would expect as a researcher too.

So, you know, your mileage may vary as the term goes.

[John Anthony]

Yep.

[Nate Brown]

Excellent.

Another really good question here.

I’m gonna take this one ’cause it sits a little bit more on the Userlytics side. From Nicholas: does the user, so a participant within Userlytics, know AI is a part of the application in monitoring the session, and if so, can that influence the user?

So for testers, there is kind of a blanket terms of service that they sign up to that does mention AI in it. So there’s nothing popping up on their screen as they’re going through that’s telling them, like, you’re being watched by AI or anything like that.

So there is no influence or potential bias they could pick up from that. And again, you choose which studies you want the AI to review, in case you’re running a test where participants don’t want AI involved at all. And a last note here, Nicholas, I hope this is helpful.

You can always include your own specific NDA or participant terms of service that mentions AI as part of the screener participants have to accept before they begin.

So lots of protections in place to make sure that no rules or anything like that are being broken from an AI or sensitive information standpoint.

All right, another good question in the Q&A, and I’ll throw it out to you both. From Emilio: when conducting tests on physical products as opposed to interfaces, can similar tools be used? And what other tools exist?

So physical products versus interfaces, a great question. I’ll throw it to you both.

[John Anthony]

Yeah, I would say with physical products, the same kinds of challenges are inherent whether you’re using AI tools or not, but you also get the benefits of the AI tools as well.

We’ve done our best to approximate physical products and experiences on a computer surface, like a screen, and to let people interact with those items. And that’s always provided a lot of helpful feedback.

Those sorts of experiences can be set up in a variety of different ways, and I think AI tools are going to benefit you regardless of how you perform those particular tasks. Zach, I’m curious whether you’ve worked with any physical products or models.

[Zach Naylor]

I haven’t yet, but my answer, which I was thinking about as you were speaking, is that at the end of the day, depending on how you’re collecting it, the data can be analyzed by AI regardless. There are definitely ways to apply AI-based tools to more physical products; that’s outside my direct experience personally. But if you consider what kind of research you’re doing and you collect the data in a way that ends up digital, I think a lot of what we’re talking about absolutely applies here. If you’re talking about something a bit more real time, there are definitely ways that can happen, because new tools applying AI to this are popping up quite literally every week. But I don’t have specific direct experience with that.

[John Anthony]

Yeah. We’ve done some studies on physical products, including things like home appliances, but a lot of those were done before some of the better tools available today. Still, the data we collected would definitely be applicable to using AI tools to interpret it and provide that sort of analysis for us.

So utilize whatever you can: verbal or written feedback, asking people to explain and speak out loud as they go, what they’re doing, what they’re thinking and feeling about using that physical interface. Even though they’re doing it on a computer, pretending to hit a button or turn a dial on a device, the feedback is still the same.

And you could even eventually apply this to live video, getting that feedback as it comes in, whether as spoken word or after the fact from a transcript.

We’ve seen a tremendous amount of enhancement to AI capabilities just in the last few months, and we’re going to see even more in the months coming up.

We’re going to see the ability to, say, point a camera at someone using a physical interface and have AI interpret the data from there, which is absolutely going to happen.

[Nate Brown]

Very exciting times.

We’ve got a few more questions, so we’ll go ahead and pass them over. One I think I can probably take, and it relates to the heatmapping and first-click testing available for web tests.

In the current iteration, it’s screenshots or images only.

So if you want to do it on a web test, simply screenshot that web page and upload it to the platform. Again, that’s because it’s a first-click test, not click tracking over time, and it allows us to test your assets without having to implement any back-end code on your website. It keeps everything easy and simple.

Another good question. It’s a little lengthy, so I’ll read the whole thing out, and let me know, guys, if you want me to circle back on it. The question is: so far we have been talking about researcher-centric applications.

The question is, are you leveraging the AI tools to improve Userlytics UX for research participants, not just for the researchers? Participant UX is the issue I hear about the most.

Actually, I think this one relates to Userlytics, and I totally agree here. So far, the AI has been focused specifically on research analysis. I would say the next step, beyond the capabilities already shared, is implementing AI into building your study.

So, still on the research side, but that’s the next step Userlytics has planned: utilizing language models to help you build your script, perhaps the specific activities or tasks you have in your studies. That’s the next planned step beyond the analysis front for AI with Userlytics. But it’s certainly a great thing to bring up.

We do often update our participant experience, specifically on the bring-your-own-participant side, as those sessions can be more challenging since they usually involve people who haven’t tested on tools like this before.

So, a great question. I don’t believe there’s currently a plan for AI to enhance that, but it’s something we look at often.
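
As a rough illustration of the script-building idea mentioned above, and not Userlytics’ implementation, the sketch below shows how one might ask a general-purpose LLM to draft unmoderated test tasks. It assumes the OpenAI Python SDK, a placeholder model name, and a hypothetical research goal.

```python
# Illustrative sketch only: drafting unmoderated test tasks with a general-purpose
# LLM. This is not how Userlytics builds studies; the SDK usage, model name, and
# research goal below are assumptions for the example.
from openai import OpenAI

client = OpenAI()

research_goal = (
    "Evaluate whether first-time visitors can find and apply a discount code at checkout."
)  # hypothetical goal

prompt = (
    "You are helping a UX researcher draft an unmoderated usability test. "
    f"Research goal: {research_goal}\n"
    "Write 5 task instructions for participants. Keep each task neutral and "
    "non-leading, avoid naming the exact UI element being tested, and add one "
    "open-ended follow-up question per task."
)

draft = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(draft.choices[0].message.content)
```

Any drafted tasks would still need a researcher’s review for bias and sequencing, echoing the earlier point that good study design matters with or without AI.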

Alrighty, let me grab the next one. Another question is whether this is going to be recorded. So again, yes, this is a recorded webinar.

If you registered, and if you’re here, you registered, you will get a recording sent to you, if not tomorrow then early next week. As soon as we can, we’ll send it out to everyone who attended. So whether you had to drop off early or you want to share it with a teammate, you will be able to view it. All right, Zach, I’m actually going to pass this one to you.

It came into the chat and I’m interested ’cause I know your tool does a really good job on the reporting side and I wonder if you could speak to it.

The question from Eva was, “Is AI somehow used when creating final client reports of the studies?”

[Zach Naylor]

Oh, interesting, yeah. Well, the way it works in Aurelius is that all of this is actually driven by you.

So AI is only trying to help you do that faster. We do see people taking those outputs and putting them into reports, obviously. Now, with respect to how that happens specifically in Aurelius: for each project, we actually make a report for you automatically.

Every insight that you create and every recommendation that you make get added to that report, which is shareable. So the way AI comes into it is whether or not you’re using the outputs of our AI tools in those reports. That would come in two ways.

One, via the summaries, which you can obviously add to reports, no problem. And then, of course, any insights you create directly from the themes that get created using AI.

So the short answer is yes, I don’t know if I answered that question. We actually do have more things coming with regard to reporting. But as it stands right now, that’s the way that that would work in Aurelius.

And, not specific to Aurelius, I’ve actually heard of people using even something like ChatGPT, throwing some data in, and saying, “Write me an outline for a report that covers these kinds of things.” You could try that if you wanted to; I have not done it personally.

So there are a whole bunch of creative ways you could do that, depending on how you’re going about your work.
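
For anyone curious about the ChatGPT report-outline idea just mentioned (which, as noted, the speakers have not tried themselves), a minimal sketch might look like the following. It assumes the OpenAI Python SDK, a placeholder model name, and a hypothetical list of insights; the findings stay human-authored and the model only proposes structure.

```python
# Minimal sketch of the "ask ChatGPT for a report outline" idea. The SDK usage,
# model name, and insight data below are assumptions for the example.
from openai import OpenAI

client = OpenAI()

insights = [
    "Participants missed the left-hand filter panel on the results page.",
    "The coupon field in checkout was hard to locate for 4 of 6 participants.",
    "Participants trusted product reviews but wanted photos from real buyers.",
]  # hypothetical; in practice, pulled from your repository or session analysis

prompt = (
    "Draft an outline for a stakeholder-facing UX research report based only on "
    "the insights below. Include sections for background, method, key findings, "
    "recommendations, and next steps. Use the insights verbatim; do not add new "
    "findings.\n\n- " + "\n- ".join(insights)
)

outline = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(outline.choices[0].message.content)
```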

[John Anthony]

I know that on the Userlytics side, all of our reports are still created by a human being, with human hands, human eyeballs, and human brain patterns.

We do use some of the AI tools we have at our disposal, obviously, to understand patterns, behaviors, that sort of analysis, and so forth, but it’s still informed by the human beings working here.

[Nate Brown]

Yeah, I think it’s a good time to shout out John and the rest of the UXC, the UX consulting team, at Userlytics.

If you’re sitting there thinking, “I would love to have someone just do the research for me so I didn’t have to, or so I could focus on other priorities,” Userlytics does have a dedicated team of researchers, which includes John, and some of you may even have worked directly with him. They do a great job, they run all the research through your Userlytics account, and you can be as involved in the research study as you personally like.

Alrighty, well, we are over our hour, so I do want to push us to final thoughts.

I’ll wrap us up, but Zach, a final conclusion from you first, and then we’ll pass it over to John. Just your final thoughts from the webinar today.

[Zach Naylor]

Yeah, you know, a couple things. Number one, there’s a lot of misconception about what AI is.

And hopefully we cleared some of that up so people understand it’s not just this nebulous thing. AI systems are computer systems and tools that human beings built; they’re just a lot more advanced, and advancing faster, than anything we’ve seen before. The other thing I would say in conclusion is: don’t be afraid of this. I would really encourage everybody to lean into it, because I firmly believe that not only UX and UX research but a lot of industries, a lot of people doing work today, are going to benefit from knowing how to use AI to do their jobs better, as opposed to resisting that change.

I don’t think it’s going anywhere. It’s definitely not a fad. It’s definitely not just ChatGPT, not just creating fun images based on prompts. It is going to infiltrate many, many parts of our lives in ways we haven’t even realized yet.

And I think the more that you understand about that, the less concern and the less misconception you have, and actually the better off you’ll be, both as a professional and even in your personal life.

[John Anthony]

I agree with that sentiment 100%.

I would encourage anyone who has access to any of these tools, including what’s available via Userlytics, to just investigate it and give it a try. Even internally, we look at the insights generated by the AI tools to see how they compare with what we’ve come up with on our own, just to better understand the capabilities and to help fine-tune things down the line as well.

Like Zach said, don’t be afraid of it. It is fun and exciting, and feel free to talk to ChatGPT in your spare time if you want to, to understand it a little bit better.

The future is full of possibilities at this point, and we’re really only at the very beginning of the journey we’re going to be on. So a lot of you are getting in on the ground floor of what this means for UX research and UX design.

[Nate Brown]

I love it. And again, I just really appreciate you both.

I hope everyone who attended was able to draw some cool insights, some new thoughts, or new ways of utilizing AI in your research.

From Userlytics, again, our goal has always been to be industry-first. So we’ve implemented a lot of these AI features into our tool to get feedback from those who use our platform.

What works?

What doesn’t work?

What would you like to see out of it?

So if you’re on the call and you have any ideas, we would love to hear your feedback on the AI we have. We would love for you to try it out and tell us what you would like to see next as well. A lot of our product roadmap is driven by those who actually use the tool, which is what you would hope for.

We do have an upcoming whitepaper that is going to revolve around AI.

It’s going to hit a lot of today’s topics, but it will also include some new things: strategies and techniques you can utilize in your own research. So keep an eye out for that.

If I missed any of your questions in the chat, I’ll be combing that after the webinar ends and likely reaching out or someone from Userlytics will reach out to you.

If you’re interested in contacting Userlytics, again, whether for the offer or just for more information on how to use the AI, reach out to AI@userlytics.com or feel free to give us a call via the contact information here.

But with that, that really wraps it up for today. Again, a huge shout-out to both John and Zach for giving us their time today and putting a lot of thought and insight into the webinar.

So with that, I’ll go ahead and end it.

And I hope everyone has a great rest of their day.

[John Anthony]

Take care everyone.

Nate Brown

Nate is an accomplished account manager for many large enterprise-level companies in the North American region. With multiple years of experience collaborating with research teams to maximize their research in the Userlytics platform, Nate possesses key insights into why some research projects lack substance and others produce valuable insights. His favorite part about working at Userlytics is building lasting relationships with his clients, even in a remote setting.


Zack Naylor

Zack Naylor has been doing UX, design, research and product strategy for more than 15 years ranging from Fortune 100 companies to startups. He's the co-founder and CEO at Aurelius, a platform for UX and Product people to gather customer research data, make sense of it fast, turn it into insights and action in one central, searchable place.


John Anthony

John Anthony is a seasoned UX designer with decades of experience in user research, design, and information architecture. With a deep understanding of how users think, John excels at helping clients gain valuable insights into user behavior and preferences. He’s committed to user-centered design principles and creates intuitive, engaging experiences that meet both user and business needs.
