I have noticed a common, myopic view of the handling of edge cases around what I’ll call 5% use-case features: features needed by only about 5% of users. These can be outlier users, expert users, or special-situation users (by job, environment, age, or other demographic). I should note that the 5% is a quasi-arbitrary small number. It is meant to represent a portion of users that isn’t so small as to fall outside of the MVP population, nor large enough to automatically qualify as a primary use case. It will, and should, vary depending on the size of the user base and the complexity of the application.
The problem is that this group of users is often either not segmented properly or not treated as a cumulative grouping. These exceptions tend to be handled exceptionally well in some highly complex professional applications (those heavily loaded with specialty features), Photoshop and some enterprise medical imaging software being good examples. Outside of those cases, they are handled poorly by many UX designers, or by company-defined design processes that are often, though not always, outdated.
Concept, execution, or explanation:
I’ve seen many concepts and projects fail, not because they weren’t good, useful, and saleable products, but because the product was marked as a failure due to a lack of understanding of either the problem it solved or the benefit it provided.
The solutions can be:
- A simple visual that presents a complex interaction plainly, by literally showing the difference in a real-time, real-life manner.
- A simple overall description that encompasses an otherwise incomprehensible number of features, or that shifts the focus away from the features themselves and onto the ‘simple’ integration.
- Replacing an ineffective, or even (business-wise) inappropriate, choice of comparative sample data that fails to present the benefit of a concept.
The first example often happens when dealing with an audience that may not be able to visualize the solution being described. This inability to visualize opens the door to all kinds of cognitive biases. For example, in a necessarily complex UI I was working on, I had suggested a simple, fine (2-pixel) line around an active study (in a radiology environment). The verbal description was dismissed, and then a myriad of grotesque alternatives were proposed. These were too severe and problematic to consider for implementation, since most focused on one aspect without considering the complexity of the UI as a whole. So, I showed in a simple two-page PowerPoint how it would appear when a radiologist selected a study. The concept, previously rejected, was unanimously approved (by both sales leaders and clinical specialists), simply because the actual images were “real” in terms of how it would look exactly on the screen, with nothing left to the imagination.
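To make the example concrete, here is a minimal sketch of the kind of highlight I proposed, assuming a DOM-based viewer where each study renders as a panel element; the class name, color, and function are hypothetical stand-ins, not the actual product’s code.

```typescript
// Hypothetical sketch: mark the active study with a thin 2px outline.
// "active-study" and the panel elements are illustrative names only.
const HIGHLIGHT_CLASS = "active-study";

// An outline (unlike a border) does not shift the surrounding layout,
// which matters in a dense, carefully arranged radiology UI.
const style = document.createElement("style");
style.textContent = `.${HIGHLIGHT_CLASS} { outline: 2px solid #4da3ff; }`;
document.head.appendChild(style);

function selectStudy(panel: HTMLElement): void {
  // Clear any previous selection so exactly one study reads as active.
  document
    .querySelectorAll(`.${HIGHLIGHT_CLASS}`)
    .forEach((el) => el.classList.remove(HIGHLIGHT_CLASS));
  panel.classList.add(HIGHLIGHT_CLASS);
}
```

The entire visible change is one thin line, which is exactly why a rendered mock settled the argument where a verbal description could not.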
The second example comes from an application that can do many things through a central integration point. Each of these features is highly desirable to overlapping markets. The problem became apparent when questions from the audience would sidetrack the central focus (because it was never clearly defined), and the presentation devolved into a litany of features, few of which were particularly remarkable on their own, while others were remarkable but indistinguishable from the less remarkable ones. Here the solution was to present the integration and central focus point as the true benefit underlying all of these features.
The third example is surprisingly common. Here, the functionality is properly and thoroughly presented, but the sample data being used is too small or too random to demonstrate effective results, or not ‘real enough’ to correlate with results that demonstrate the power of the functionality. For example, suppose the functionality is a home-based IoT climate control system that uses machine learning to learn usage patterns for specific households. If the database being used is not a real aggregated database of individual home data points, but is instead artificially generated from real data and then randomized for privacy or security reasons, then the resulting analytics will be equally randomized and fairly useless, since it would be impossible to show actual use cases under various demographic (or other) filters. The displays produced by the algorithms may be dynamic, but they will show no consequential, actionable results. The audience can see that the product does something, but not how it could show them anything useful, whether basic information or unexpected patterns in specific groups. This ends up being a lot of effort whose result isn’t much better than simply saying, “Believe me, it really works, even though you can’t see it here.”
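To illustrate why the randomized sample backfires, here is a small, self-contained sketch; the numbers and the “setpoint tracks hour of day” pattern are fabricated purely for illustration. It builds a dataset with a real usage pattern, shuffles one column the way a privacy scrub might, and measures how much signal survives.

```typescript
// Illustrative only: fabricate a "real-looking" pattern (thermostat
// setpoints that track the hour of day), then randomize the pairing as a
// privacy scrub might, and compare how much signal each version carries.

function pearson(x: number[], y: number[]): number {
  const n = x.length;
  const mx = x.reduce((a, b) => a + b, 0) / n;
  const my = y.reduce((a, b) => a + b, 0) / n;
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - mx) * (y[i] - my);
    dx += (x[i] - mx) ** 2;
    dy += (y[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Fisher-Yates shuffle: breaks the hour/setpoint pairing.
function shuffle<T>(arr: T[]): T[] {
  const out = [...arr];
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// "Real" data: setpoint rises with the hour, plus household noise.
const hours = Array.from({ length: 1000 }, () => Math.random() * 24);
const setpoints = hours.map((h) => 17 + 0.25 * h + Math.random() * 2);

console.log("real pairing:       r =", pearson(hours, setpoints).toFixed(2)); // ~0.95
console.log("randomized pairing: r =", pearson(hours, shuffle(setpoints)).toFixed(2)); // ~0.00
```

Every demographic filter or learned pattern built on the second dataset can only surface noise, which is exactly what the audience ends up seeing on screen.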
Also consider:
Another aspect of this 5% user base is that the use case could be a critical but one-time use for 5% of the population, or it could be a regular, required use for 5% of the population. And this 5%, regardless of which of the two groups you’re addressing, could be a different 5% for each of 19 more features or capabilities. In the first case, the feature can be buried in the preferences, while the latter could sit at the second level (two clicks away), with the option of a custom shortcut.
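As a hedged sketch of that split (the action names, menu paths, and shortcut below are hypothetical), the placement decision can be modeled as data rather than left implicit, so each feature’s own 5% gets its own answer:

```typescript
// Hypothetical sketch of the two placements for 5% features.
type Placement =
  | { kind: "preferences" } // critical but one-time: bury it in settings
  | {
      kind: "submenu"; // regular but niche: two clicks away...
      path: [top: string, item: string];
      defaultShortcut?: string; // ...with an optional user-bindable shortcut
    };

// Each feature may serve a *different* 5%, so each gets its own decision.
const placements: Record<string, Placement> = {
  "network.proxySetup": { kind: "preferences" },
  "study.compareToPrior": {
    kind: "submenu",
    path: ["Tools", "Compare"],
    defaultShortcut: "Ctrl+Shift+C",
  },
};
```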
These may seem obvious, but they require diligence, because they are often considered only during a specific phase of design and development, when they should be considered all through the design process, from ideation through post-production maintenance and version increments, as well as postmortems.
Summary:
This is a cursory observation of the problem (meant to initiate the conversation). There is no one solution to this issue; rather, the problem should be considered in advance of the presentation, so that a proleptic approach becomes the more effective presentation structure. I personally like to think of it as applying the scientific method to the presentation of the concept: theorize, test, and focus on flaws rather than positives (assume that the positive is the initial concept, which you’re trying to find flaws in before someone else does, or simply to validate the quality of the concept itself), and fix it if possible.