Top 15 Agile UX Designer Interview Questions You Must Prepare

I’m assuming “this approach” means automated remote testing. One of the benefits of automating moderation is that it allows a single user researcher to do more. If you are using remote automated testing combined with A/B testing, you could probably get away with having a single researcher cover 3-5 teams, depending on the nature of the work. That estimate would of course be different if moderated testing were used to do RITE tests. With moderated forms of RITE testing, you’d need a dedicated researcher per team in most cases.

Take your list of stories and define which ones you want to test as you scope each sprint. For each story to be tested, create a user task in your remote usability testing tool. Once you do that, you can run that test as many times as you want. The only additional cost is your time to review the results and any incentives for panel recruits.

Yes, the first three methods I spoke about during the webinar work well in agile. 3×3 tests are regularly done in 2 to 3 weeks from start to finish and are ideal for sprint 0 or for spikes/working a sprint ahead. RITE testing as an ongoing process is practiced by many firms today. A/B testing methods are also very commonly used. We focused on automated testing since it’s less well known.

Define the recruiting strategy early in the project, not just in time. Depending on your research methods, you can use a combination of private lists, panels, and intercepts. The key is to predict when you will need participants, how many, and of what type. Allow at least a week for panel and private-list participants to respond to remote study requests. Also keep in mind that with iterative testing comes the need for participants on an ongoing basis, and often more participants are needed as well. Waterfall recruiting strategies won’t work; you’ll get bottlenecked.

There are a variety of ways and tools to go about remote mobile usability testing. With our own Mobile Voice of Customer tool, UX professionals can gain insight into who visits their mobile websites or apps, how and why they visit, as well as their satisfaction and likelihood to return and to recommend the mobile site.

Basically, the way it works is the following:

  • Tag your site or app with JS code.
  • When visitors come to your app, they’ll get an invitation to participate in a test.
  • The test can include a task and/or a questionnaire.
  • When users are finished with their session and leave the app, they get a notification inviting them to fill out a survey (their browser will open if they agree).

First, test planning and design are much more important. Once you pull the trigger, it’s harder to adjust for any problems. That’s why pilots are so key. If you’re used to doing only qualitative testing and only reporting issues with no metrics, you’ll find you need to pay more attention to study design.

I’d say avoid sessions longer than 30 minutes, though you could go longer in some rare situations.

Tests find that stories aren’t really “done.” That means you’ll have to accept that some tasks may be added to stories the PO thought were complete. This is no different than finding other types of software bugs on an agile project. Make sure you’ve agreed with the team early on how to prioritize UX defects of any type, and have defined metrics and goals. Otherwise you’ll end up with unmeasured “UX debt,” similar to Ward Cunningham’s concept of technical debt.

Moderation takes time, so clearly both automated and A/B testing save time. Unlike moderated testing, much of the data collected in an automated test is precompiled, making analysis and reporting faster. In automated user testing, standard task and satisfaction metrics are calculated for you, as are clickstream graphs and heatmaps; some of that would take forever to do by hand. In contrast to A/B testing, both moderated and unmoderated usability testing can save development time. People often overlook the time it takes to develop and deploy variations for A/B tests. Also, with A/B testing you often need to run follow-up tests to understand complex interactions. That can frequently be avoided with automated tests, which give you more qualitative data.
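As a rough illustration of the kind of roll-up that gets precompiled, here’s a minimal Python sketch of a task completion rate and mean satisfaction calculation over some made-up session records; the field names and data are hypothetical, not the export format of any particular tool.

```python
# Hypothetical session records, invented for this example. In a real automated
# study the tool would export something along these lines.
sessions = [
    {"participant": "P1", "task": "Find store hours", "completed": True,  "sat": 6},
    {"participant": "P2", "task": "Find store hours", "completed": False, "sat": 3},
    {"participant": "P3", "task": "Find store hours", "completed": True,  "sat": 5},
    {"participant": "P4", "task": "Find store hours", "completed": True,  "sat": 7},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
mean_sat = sum(s["sat"] for s in sessions) / len(sessions)

print(f"Task completion rate: {completion_rate:.0%}")    # 75%
print(f"Mean satisfaction (1-7 scale): {mean_sat:.2f}")  # 5.25
```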

Throughout the process, on an ongoing basis. 3×3 prototype studies are great for sprint 0, and RITE is very well suited to in-sprint testing with working code. You can also test a sprint behind, but that tends to lead to UX debt, the accumulation of usability issues, so in that case make sure you have buy-in on using metrics and goals to track what’s going on.

You can actually achieve statistical significance with only a few users. In the examples I used, we had between 10 and 15 users and the results were statistically significant. It helps when you have the same users attempt the same tasks on comparable interfaces (called a within-subjects study). Statistical significance refers to results that are unlikely to be due to chance alone.

With smaller sample sizes we are limited to detecting relatively large differences between designs (large differences in preference and performance). In early-stage designs, however, those larger, noticeable differences are exactly what we are most interested in finding when deciding which design is better.
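To make the within-subjects point concrete, here’s a minimal Python sketch that runs a paired t-test on made-up task times for 12 participants; the data is hypothetical rather than from the webinar examples.

```python
from scipy import stats

# Made-up task times (seconds) for 12 participants, each attempting the same
# task on design A and design B (a within-subjects comparison).
design_a = [48, 55, 62, 41, 58, 70, 52, 66, 45, 60, 57, 63]
design_b = [35, 42, 50, 30, 44, 52, 39, 48, 33, 46, 41, 47]

# Paired t-test: each participant contributes one measurement per design,
# which removes between-participant variability from the comparison.
result = stats.ttest_rel(design_a, design_b)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# A p-value below 0.05 suggests the observed difference is unlikely to be
# due to chance alone, even with only a dozen participants.
```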

Whether a website has millions of visitors (the examples came from websites with millions of monthly visitors) or just a thousand, the math works the same. The sample size to use in an evaluation depends on what you are doing (comparing designs, estimating a metric, or finding usability problems).
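As a sketch of why population size doesn’t enter into it, here’s a small Python example computing an adjusted-Wald (Agresti-Coull) confidence interval around a completion rate from a made-up 10-of-12 result; the width of the interval depends only on how many participants you tested.

```python
import math

def adjusted_wald_interval(successes: int, n: int, z: float = 1.96):
    """95% adjusted-Wald (Agresti-Coull) interval for a completion rate."""
    p_adj = (successes + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Made-up result: 10 of 12 participants completed the task. The interval
# depends only on these 12 observations, not on how many people visit the site.
low, high = adjusted_wald_interval(10, 12)
print(f"Observed completion rate 83%; 95% CI roughly {low:.0%} to {high:.0%}")
```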

This is where remote testing really helps. Run studies based on the prioritized market segments. Keep in mind that many usability problems are not location-specific; they are just as likely to impact any user. If faced with limited resources, prioritize the testing by market segment, focusing on high-risk personas and stories. One technique you can use to prioritize is what I call the UXI matrix; you can find a description of it in the blog post Integrating UX into the Product Backlog.

If the team has a physical scrum board, I often make a column called “user tested” at the end and move story cards over when we test them. I either note on the card the metric we are tracking (say, task completion rate), or I move the card only when it meets our predefined metric-based goal. I also post screenshots with markups for issues, showing recommendations. Follow up by filing bugs so they are tracked as part of overall quality.

You need at least some UX representation on the team to be effective. The challenge is that it’s hard to have all the skills required on such a small team. I’d recommend having a dedicated UX lead on each team who has the skills most likely to be needed, as well as some supporting UX staff who can help out. One researcher might be able to cover multiple teams in many cases. As to what types of UX skills are necessary, it really depends on the project; that’s like asking what kind of development skills are needed. I wrote an article on this previously titled Defining the user experience function: innovation through organizational design.