Ease of use is often the biggest factor in the success of any software, web service, or mobile app, and navigation tests are a great way to measure it in your interface design. They're simple and to the point: we show users a screen and ask them how they would complete an action, then we measure how many manage it, and how quickly.
If at least 80% of participants can complete the desired action, it's a safe assumption that your interface is doing its job well. If success rates are somewhat lower, in the 60-80% range, you likely have a decent foundation to work with and may only need small tweaks to size, color, or layout to make your primary actions easier to find.
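To make those thresholds concrete, here is a minimal sketch in Python of how you might score a navigation test against them. The `Attempt` type, the sample timings, and the wording for the below-60% case are invented for illustration, not part of any testing tool's API:

```python
from dataclasses import dataclass
from statistics import median

@dataclass
class Attempt:
    completed: bool          # did the participant reach the target element?
    seconds_to_click: float  # time from screen load to their first click

def summarize(attempts: list[Attempt]) -> str:
    """Score a navigation test against the 80% / 60-80% rules of thumb."""
    success_rate = sum(a.completed for a in attempts) / len(attempts)
    if success_rate >= 0.80:
        verdict = "doing the job well"
    elif success_rate >= 0.60:
        verdict = "decent foundation; try small tweaks to size, color, or layout"
    else:
        verdict = "primary actions likely need a rethink"  # assumption: text only covers 60%+
    typical = median(a.seconds_to_click for a in attempts if a.completed)
    return f"{success_rate:.0%} success, {typical:.1f}s median time: {verdict}"

# Made-up results: 7 of 10 participants found the target.
results = [Attempt(True, s) for s in (3.1, 4.2, 2.8, 5.0, 3.6, 2.9, 4.4)]
results += [Attempt(False, s) for s in (11.0, 9.5, 14.2)]
print(summarize(results))  # 70% success -> "decent foundation; ..."
```

Tracking the time alongside the pass/fail result costs nothing now and pays off later, when you start comparing variants on speed as well as success.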
In the example test there is clear ambiguity in the interface: two ‘more’ or ‘menu’ style elements leave users guessing about what hides beneath each one. Ambiguity isn’t optimal in any interface, and the results reflect this.
Asking follow-up questions in these tests may not be as effective as it is in preference tests, simply because a participant who can’t figure out how to complete an action likely doesn’t know why they can’t. A simple Customer Effort Score (CES) question will help you judge whether your interface is working as well as it could.
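As a rough illustration, scoring a CES question can be as simple as averaging the responses. The scale and the numbers below are hypothetical; CES surveys commonly use a 1-7 agreement scale against a statement like "It was easy to complete my task":

```python
# Hypothetical CES responses on a 1-7 scale, where 7 means "strongly agree"
# with "It was easy to complete my task".
ces_responses = [6, 7, 3, 5, 6, 2, 7, 4]

average_ces = sum(ces_responses) / len(ces_responses)
print(f"Average CES: {average_ces:.1f} / 7")  # -> Average CES: 5.0 / 7
```

A low average alongside a decent completion rate is a useful signal on its own: users are getting there, but it feels harder than it should.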
Once your tasks reach a good completion rate, you can further optimize your interface by running variation sets focused on single elements, comparing completion times across tests, and picking the winner with the lowest time-to-click. This is a more advanced technique, so lock down your basic functionality tests before you start going deep.
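Once each variant has its own set of completion times, picking the winner can be a one-line comparison. A minimal sketch follows; the variant names and timings are invented, and using the median rather than the mean is a design choice to blunt the effect of a single lost participant, not something the method prescribes:

```python
from statistics import median

# Hypothetical time-to-click samples (seconds) for two variants of one element.
times_by_variant = {
    "hamburger-icon": [3.1, 2.8, 5.0, 2.9, 3.4],
    "labeled-menu":   [2.2, 2.5, 2.1, 6.8, 2.4],
}

# Pick the variant with the lowest median time-to-click.
winner = min(times_by_variant, key=lambda v: median(times_by_variant[v]))
print(winner, {v: round(median(t), 1) for v, t in times_by_variant.items()})
# -> labeled-menu {'hamburger-icon': 3.1, 'labeled-menu': 2.4}
```

Keeping each variation set focused on a single element is what makes this comparison meaningful: if two things change at once, the timing difference won't tell you which one did the work.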