Several factors guide whether a UX is accepted.
- Its effectiveness (simplicity, ease of use, and functionality).
- Its lack of obtrusiveness (it gets your attention based on criticality or on-demand need).
- Its use of accepted versus new technology within a domain.
- Its forgiveness of error.
Effectiveness
This is often a sticking point. The level of simplicity needs to be commensurate with the task at hand: contacting someone, for example, versus performing a diagnostic procedure. The common error here is negative simplification: simplifying a complex process to improve adoption numbers without considering that the process requires many possible branching decisions, each of which may reveal a new set of choices. If a product is a single-function tool, then the MVP (Minimum Viable Product) is easy to define. If, however, the product is a set of tools used to complete a generalized task, then we can often (not always) infer that completing the task may require a constantly changing set of tools due to unknown variables. In the latter case, some tools will be used all the time and others less frequently, but the less frequently used tools must ALWAYS be available, because their need cannot be determined at the beginning of the process.
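As a minimal sketch of that last point (everything below is hypothetical and invented for illustration, not drawn from any real product), a toolset UI might surface the most frequently used tools for quick access while guaranteeing that every registered tool remains reachable on demand:

```typescript
// Hypothetical sketch: frequently used tools are surfaced first,
// but every registered tool stays available via on-demand search.

interface Tool {
  id: string;
  label: string;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();
  private useCounts = new Map<string, number>();

  register(tool: Tool): void {
    this.tools.set(tool.id, tool);
    this.useCounts.set(tool.id, 0);
  }

  recordUse(id: string): void {
    this.useCounts.set(id, (this.useCounts.get(id) ?? 0) + 1);
  }

  // Primary toolbar: the top N tools by observed frequency.
  primaryToolbar(n: number): Tool[] {
    return [...this.tools.values()]
      .sort((a, b) => this.useCounts.get(b.id)! - this.useCounts.get(a.id)!)
      .slice(0, n);
  }

  // The full set is never hidden: any tool can be found on demand,
  // because its need cannot be predicted at the start of the process.
  search(query: string): Tool[] {
    const q = query.toLowerCase();
    return [...this.tools.values()].filter(t =>
      t.label.toLowerCase().includes(q));
  }
}
```

The design choice being illustrated is that frequency of use affects only prominence, never availability.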
Part of this issue is determining the importance of the task and its related processes. In surgery, for example, most processes are critical even when no unexpected errors or situations arise. A phone call, on the other hand, could be casual and of minor personal value, or one of critical need, depending on the situation. A game poses no threats at all, but may anger a user if there are bugs in the process of play. Lastly there is the capturing of information. This can be simple, like writing or recording, done only for reference or posterity but not required for the presentation of the information, which may be meant for listening only. The capture in this case is an indirect reinforcement of hearing/seeing the presentation but has no actual effect on the outcome of that information. (This, like many concepts, could easily become a rabbit hole, but I use these ideas for high-level differentiation.)
In terms of ease of use, the first question is whether the product should be easy to use at all. Child-proof safety tops and catches are an example: ease of use should not be applied blindly to everything, since such designs deliberately limit use to those who already understand the reason for it. The same applies to professional applications, where complex work requires a complex tool set.
Lastly there is functionality. Many complex processes can be simplified, but for others simplification reduces effectiveness: removing the decision points that allow on-the-fly adjustments to environmental and other unpredictable variables can produce flawed, if not catastrophic, results.
Obtrusiveness
This varies based on use case. Without a fully effective AI, there is often no way to determine what should draw the user's attention to an attribute of a complex system. Regulatory, safety, or security requirements may define the minimum parameters for getting the user's attention, but they still don't address what happens when multiple points of attention of similar weight/value are required concurrently. In those cases, it is up to the user to determine which to act on and in what order. Again, unknown variables may necessarily affect the user's process, and they may be presented in ways the tool was not designed for. This doesn't mean the tool should be altered (it may already be a highly effective single-function tool), but rather that the ordering can be left to the user based on their assessment of newly or suddenly presented variables. That is why I said only a fully effective (and largely non-existent) AI could do this.
If we define the rules by which something should be presented to the user based on empirical use cases, and also mitigate the potential issues that arise if the information is ignored or missed, then implementation becomes far easier. It's just not that common for those use cases to safely cover the errors that could be problematic.
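As a hedged sketch of what such rules might look like (the thresholds, names, and levels below are invented purely for illustration), criticality derived from empirical use cases could decide how an item is presented, and anything ignored could escalate rather than silently disappear:

```typescript
// Hypothetical sketch of rule-driven obtrusiveness: criticality decides
// how an item is presented; ignored items escalate instead of vanishing.

type Presentation = "interrupt" | "badge" | "log";

interface Notice {
  message: string;
  criticality: number;   // 0..1, derived from empirical use cases
  acknowledged: boolean;
}

function present(n: Notice): Presentation {
  if (n.criticality >= 0.8) return "interrupt"; // regulatory/safety floor
  if (n.criticality >= 0.4) return "badge";     // visible, but not modal
  return "log";                                 // on-demand review only
}

// Mitigation: anything unacknowledged is escalated one step rather than
// assumed to have been seen. Items of similar weight are still presented
// together, leaving the ordering decision to the user.
function escalateIfIgnored(n: Notice): Presentation {
  if (n.acknowledged) return present(n);
  return present({ ...n, criticality: Math.min(1, n.criticality + 0.2) });
}
```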
Then there is the issue of what method is used. Here we should keep in mind that new technology is accepted far more quickly by the product development community than by the world at large. This has to do with confidence (will it work right?), trust (do I want to share this information?), and technological maturity (can I afford it, or is it too cumbersome?).
Consider the 1950s concept of the future, with its TV picture phone. It was perceived as a marvel of new technology, but no one considered that people didn't want to be seen at home in their underwear when answering an early-morning call. It was decades before Skype and FaceTime were used with any regularity, and even then only when people were prepared to use them. Video calling is still mostly used by people calling 'back home,' perhaps to another country, or in long-distance business interviews and conferences. Even now, thinking of the last three companies I have worked at, I have often seen content being shared in conferences but only extremely rarely live video of the participants. There is a level of privacy that people across the globe still hold onto when it comes to what, and how much, they wish to share in a communiqué.
There are other, similar issues with new technology that are foreign to many users and for which there is no standard. Even gestural touch interfaces lack a consistent standard, despite becoming widely available almost a decade ago. Where cultural pseudo-standards do exist, they are often context specific: "swipe right" has different connotations depending on the context in which it is used. Even the order of digits on a phone keypad and a calculator keypad is not harmonized (a dial pad has the "1" in the upper-left corner while a calculator has the "7" there), which is incongruent with a single common mental model of data chunking.
Accepted vs. new technology
The touch screen has been around for half a century but was not widely accepted until the last decade, and even that wasn't instantaneous, particularly given, as mentioned above, the lack of any standardization of gestural use beyond implication.
While technologies like VR have great possibilities, there are still issues of acceptance, standardization of use, and problems like motion sickness that have not yet been dealt with effectively.
Additionally, perceptions of any growth area often mistakenly discount leveling off, or even drop-off, whether from market saturation, replacement by a different (even if less effective) technology trend, or simply the limitations of a technology once it reaches the point of diminishing returns.
Since I live in Silicon Valley, there is a bubble effect: people see technology all around them and assume it is ubiquitous, when in fact it may only be 'ubiquitous' in high-technology areas and/or areas of high median income. As soon as these inhabitants step outside into a more ordinary area, they realize that the very technology they depend on is not only unavailable but may also be viewed with suspicion. Consider the rise and fall of Google Glass. While the technology was amazing to early adopters, they hadn't considered that many others saw it as an invasion of their privacy. It wasn't uncommon to hear someone ask a Glass wearer "are you recording me?" and then not really believe the answer either way. This is not to say it was useless, but rather that it would be effective in specific situations while remaining unacceptable in many others.
Other types of feedback systems, from haptics to neurological implants, have promise but are still far too nascent to expect wide acceptance.
Error forgiveness
This goes far beyond the system errors of the past, and it is an area of constant annoyance. Consider that there are entire websites devoted to posting the sometimes hilarious or embarrassing mistakes of autocorrect. "I like it when it works" is a common cry among texting pairs who haven't turned it off. As it stands, autocorrect can speed up communication, but it can also lead to rather severe errors.
While basic machine-learning algorithms can address this, it would take a deep-learning approach to learn the cadence and style of an individual's communication, including context, intent (sincerity vs. sarcasm), interests, vocabulary level, and so on, along with the context of the person you're conversing with, since the language between a parent and child and between two intimate partners may be extremely different even though two of those people could be the same person. This makes for complex interactions that can't be ignored.
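As a purely illustrative sketch (no real autocorrect system is being described, and the names here are invented), the prediction context could be keyed to the conversation pair, so the same sender gets different suggestions depending on whom they are texting:

```typescript
// Hypothetical sketch: personalization keyed to the conversation pair,
// since style differs by partner even for the same sender.

interface StyleProfile {
  vocabulary: Set<string>; // words this pair actually uses together
  formality: number;       // 0 (intimate) .. 1 (formal), learned per pair
}

const profiles = new Map<string, StyleProfile>();

function pairKey(sender: string, recipient: string): string {
  return `${sender}->${recipient}`;
}

// Rank candidate corrections by whether this pair actually uses the word,
// falling back to the raw input rather than forcing a "correction".
function suggest(sender: string, recipient: string,
                 typed: string, candidates: string[]): string {
  const profile = profiles.get(pairKey(sender, recipient));
  if (!profile) return typed; // no history: forgive, don't guess
  const known = candidates.find(c => profile.vocabulary.has(c));
  return known ?? typed;
}
```

Falling back to the raw input when there is no history is itself a form of error forgiveness: the tool declines to "correct" what it cannot confidently model.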
One final note:
Most of my posts are directed at more advanced areas of UX design. This is why there are not a lot of pictures as samples; I point out examples within my posts, as anyone beyond the beginner (and any critical-thinking beginner) will understand them. Additionally, I find superfluous imagery tends to belong with "top ten" lists and other basic design concepts. I will always use imagery when it simplifies or clarifies a particularly different, new, or complex concept. Imagery can also limit the conversation, since any advanced designer will already be good at visualizing how a concept fits their milieu of design work.