The sitcom logic approach to failure.

By Robert R Glaser
UX designer and architect.

This is an issue that seems omnipresent in many businesses, save for a minuscule percentage. Careful analysis is often replaced by a bulldozer approach or, worse, by a random-guess approach to problem solving.

What do I mean by this? Well, it's surprisingly easy to describe and will be easy to recognize. Whether you watch old sitcoms on TV Land or new ones on a streaming service, a plot device commonly used in sitcoms is that one or more characters are presented with a problem (e.g., suddenly needing cash, meeting someone, or fixing something). They quickly realize what needs to be done and then devise a method to achieve the result. This method is typically silly and irrelevant. Something happens which quashes their process, so they fail. Here's where the sitcom logic comes in: after the failure, the concept is discarded in whole, without a thorough postmortem to determine where the problem truly lies, which is usually in the method chosen for the solution (which, for the sake of the sitcom, is usually silly). Often a review is performed and some superficial details are addressed, but not thoroughly enough, so the entire project (including the initial theory, which is usually valid) is discarded.

Jeff Catlin, writing in Forbes, discusses the failure of IBM's Watson for Oncology at MD Anderson, where a lack of consideration of varying cultural models contributed to a 62-million-dollar investment being halted. Interestingly, I had previously worked at Philips Healthcare, and one of the aspects I had to consider in any [UX] design work was that the resultant design needed to work in multiple markets outside the US. So, for example, something that has little or no value in the US may have significant or even critical value in Germany or the U.K.

Another example is expressed through the issue of innovation and invention without proper commercialization. The story of the Apple visits to Xerox PARC has become mythologized, but in the long run, both Xerox and Apple had technologies which revolutionized the personal computer industry. The difference is that Apple didn't discard or ignore these technologies, but rather commercialized them (an effort later surpassed in scale by Microsoft) to the point where they are ubiquitous across hardware, operating systems, and applications.

Don Norman has talked about how it often takes decades for a new technology to be adopted on a massive scale. This adoption isn't merely a matter of waiting for the technology to become cost-effective, but of waiting for it to be viewed as non-threatening (even if it never was threatening) and not foreign in its mental model, even when it's actually easier to use or manipulate. He has used the touch screen as an example: it took over 30 years to become ubiquitous, from its functional invention in 1975 and first production in 1982 to real large-scale adoption with the iPhone's release in 2007.

My own experience has shown this is often true, particularly when some new technology is trendy. It is common for people to present it as a forward-looking solution or methodology without even a superficial check of whether it's actually useful or appropriate for the case at hand. Currently, I work on the in-vehicle experience and on what technology can be used to support it. The most frequent error I hear is "Why don't we do x? I can do that on my smartphone," without the simple consideration of the fact that cell phone use in cars is legally limited (to varying degrees depending on individual state laws, some of which allow you to do almost nothing with a cell phone). While it seems obvious to some, most people don't realize that the mental model is significantly different: using a smartphone requires or draws the average user's full attention, sometimes to the point of being unexpectedly and dangerously undivided. If you haven't seen the YouTube videos of people walking into walls, poles, other objects, and even other people while interacting with social media or games, feel free to look them up.

Decisions need to cover a range of issues, and those issues should be graded based on what biases drove them and how. Common food is a good way of demonstrating this. If you poll people across the US to find out their favorite foods and also which foods are the most consumed, you will probably find some overlap, but the two lists are not the same. You would also find that these change over time. Significant variables like cost, availability, and trends have a strong effect on them. There are other variables as well, and the list changes significantly between regions, and even more so if you start considering countries outside of the US. Here, an important issue is brought forth: the more people are added to the averaging process, the less likely you are to have a genuinely 'average person'.

What this means from a UX standpoint is that designing to the average creates a mediocre outcome for most. So you may have an excellent theory for the UX, but the data it is based on drives a method for a solution that takes no individual into account and therefore, often, pleases few users. The difficulty here is in figuring out how to effectively subdivide the user population so that each subdivision has a way of letting the UX be seemingly customized to it (as opposed to discrete customization made by a user). These subdivisions can be by age, culture (geographical and/or racial and/or religious), economic status, sex/gender, and education. There may be other subdivisions depending on the target audience. Some of these may not be relevant to a specific case, but they should be examined before they are dismissed. This issue is a common driver of mediocrity or, worse, failure.
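To make this concrete, here is a minimal sketch in Python of what such 'seeming' customization might look like: defaults are selected per segment rather than averaged across everyone. The segment names and settings are hypothetical, purely for illustration.

    # Hypothetical per-segment defaults instead of one averaged default.
    SEGMENT_DEFAULTS = {
        "older_adult":   {"font_scale": 1.4, "contrast": "high",   "onboarding": "guided"},
        "power_user":    {"font_scale": 1.0, "contrast": "normal", "onboarding": "minimal"},
        "casual_mobile": {"font_scale": 1.2, "contrast": "normal", "onboarding": "guided"},
    }

    # The 'design to the average' fallback that pleases few users.
    AVERAGED_DEFAULT = {"font_scale": 1.2, "contrast": "normal", "onboarding": "guided"}

    def initial_settings(segment: str) -> dict:
        """Return defaults tuned to the user's segment, falling back to the average."""
        return SEGMENT_DEFAULTS.get(segment, AVERAGED_DEFAULT)

    print(initial_settings("power_user"))  # tuned to the segment, not the average
    print(initial_settings("unknown"))     # the mediocre-for-most fallback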

What often happens here is an error common in application development, where placing all features at the same level overwhelms the user. The mistake is that certain features are then eliminated (along with the users who find those features important) rather than finding out how to address smaller percentages of users. This can cause a failure of the application to gain a growing, or even sustainable, user base.

These kinds of issues are common in applications with large feature sets. You would be unlikely to find a user (other than a person certified to teach the use of the application) who uses all of the features of a complex system. Complex applications like this often have overlapping user bases which utilize the application for different purposes. Adobe's Photoshop is a good example, as can be seen by opening and examining all the menus and submenus available. It has users who are professional illustrators, photographic retouchers, or visual designers for application development (both software and hardware), in addition to hobbyists and even people who specialize in creating texture maps for CGI (3D) work. There are sets of tools for each of these groups which are often never used by the other groups but are critical to the work of that specific group. The Photoshop interface is customizable to optimize for whatever the user's primary task is. There are also features which overlap several groups and a few features which are used by all groups. Problems arise when decision makers are either out of touch with the actual users or, worse, believe that their own use paradigm is (or should be) applicable to all.

So when, in circumstances such as this, there is a failure and the solution is discarded, there is often a reconfiguration of the problem under the unreviewed assumption that the initial problem was wrong, when in fact it was the solution that was wrong. Often there isn't even a simple review to determine whether the problem being solved actually needs solving. I may have a revolutionary solution to a problem, but if no one has any interest in solving that problem, then the implementation may be successful but the product fails.

Really innovative companies usually release products with a primary intent for the product and some ancillary solutions as well. Once in the market, the users sometimes focus primarily on one or more of the ancillary capabilities and focus minimally, if at all, on the primary function. The company then, instead of seeing the product as a failure, simply starts treating the secondary functionality as the new primary feature(s). If they are really driven by the users' needs, then they will genuinely assess whether the primary function is simply not needed or was not well implemented. It takes some fairly rigorous evaluation by the decision makers to see past their individual confirmation biases.

Personally, I learned a long time ago the deep importance of "I am not the user." This has been really useful when going through user result analysis. Outside of basic heuristic evaluations, I always assume that my preferences are atypical and therefore irrelevant. This way I'm more open to alternative viewpoints and particularly interested in the times when many of those alternative viewpoints are similar. That becomes a simple, if unexpected, target. I can then see whether the original problem definition was wrong, or the solution was wrong, or maybe both. We do learn from our failures.


Why we should be removing ‘democracy’ from Design Thinking (and maybe Agile/Scrum processes too.)


By Bob Glaser, UX designer

Design Thinking has been around for almost half a century. It has been used successfully for many of those years, and yet, as it has gained significant momentum in the last decade, it has also been reformulated, varied, simplified, altered, and 'fixed' by various purveyors, many of them for the purpose of repackaging and, more importantly, reselling the concept as a training program or consultancy. Because of the breadth of design thinking, I'm assuming that the reader is already aware of, and likely using, design thinking. Therefore, I will not go into a detailed description of it.

One (of many) concepts that I have seen as a corrupting influence on outcomes is the injection of democratic decision making into the process. Why is this corrupting (bad) to the success of the process? Because it can have the effect of dismissing the very real positive outcomes of the process.

How?

First, let us consider the process. For the sake of clarity, I'll use the Nielsen Norman Group's description of the process, since it addresses it in a straightforward, applicable way rather than in a broadly conceptual way. (There are many other versions out there that are also suitable, including some of the original concepts as refined by the Stanford d.school, which simplified the original 7 steps to 5, but some are overly detailed for the purpose of this post, even though they are just as exposed to the democratic corruption.)

The process is simple in its semi-linear, circular, iterative flow:

  1. Empathize
  2. Define
  3. Ideate
  4. Prototype
  5. Test
  6. Implement

Steps 1-2 form the 'Understand' phase, 3-4 the 'Explore' phase, and 5-6 the 'Materialize' phase.

Since the process combines the seemingly paradoxical pairings of logic with imagination, and systemic reasoning with intuition, it is susceptible to being adapted in ways that defeat the purpose of the process's results through corruption.

When a group begins this process, they consider the user’s needs, the business’ resources/viability, and the technical feasibility/capabilities. They then follow the process and come up with potential solution(s).

The problem arises at this point.

The common error is taking the potential solutions and voting on them. The problem with this approach is that it tends to throw the base concepts out the window in order to settle on a solution. Sometimes the vote is shaped by constraints such as choosing low-hanging fruit, even though those options are low on the priority list, simply because they are the easiest to deal with. This is often followed by resource limitations that may be artificially imposed, stated like this: "We are only considering the solutions which can be accomplished in [time frame]" (or some similar artificial or arbitrary constraint). Then the group votes on solutions based on these constraints.

Since the purpose of this process is to determine the solutions that need to be addressed*, the results are corrupted by a democratic vote, which dismisses the effective and, hopefully, innovative result. The intuition and imagination of the solution-creation process are meant to be carried along concurrently with logic and empirical decision making. Design thinking uses these empathetic concepts to help frame or reframe the problems and potential solutions with an approach that brings creativity to the process, rather than relying on a methodical scientific-method process alone, and thereby produces more innovative solutions. It should be noted that design thinking is simply one of many ways to help produce effective, implementable solutions.

The vote may easily (and regularly does) truncate the solution set and thereby eliminate the best, most ideal, or most effective solutions from the standpoint of the user.

*Design Thinking is a solution perspective as opposed to the problem perspective of the scientific method.

How to deal with this democratic corruption?

This is fairly easy, though often not popular because it requires a little extra effort. When the group is in the early stages of gathering information (the Understand phase), they should also be defining the requirements of acceptance. These requirements become the filter that solutions are passed through to determine which results will be implemented. If one is determining the requirements of an MVP (minimum viable product), then it should be easy to say that one solution is effective but not necessary for the MVP while another solution is absolutely required for it. Then, when it comes to the solutions that may or may not make it, the same criteria are applied, and instead of addressing the egos of the design thinking participants (in the business/company), the results will address the needs of the users.
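As a rough sketch of the idea (the criteria and solution names here are hypothetical, not a prescribed tool), the acceptance requirements defined during the Understand phase act as a mechanical filter, so no board-room vote is needed:

    # Hypothetical sketch: filter ideated solutions against acceptance
    # requirements defined up front, instead of voting on them afterward.
    ACCEPTANCE_REQUIREMENTS = {
        "solves_primary_user_need": True,  # defined during the Understand phase
        "required_for_mvp": True,
    }

    solutions = [
        {"name": "inline guidance", "solves_primary_user_need": True,  "required_for_mvp": True},
        {"name": "animated splash", "solves_primary_user_need": False, "required_for_mvp": False},
        {"name": "smart defaults",  "solves_primary_user_need": True,  "required_for_mvp": False},
    ]

    def passes(solution: dict) -> bool:
        """A solution is selected only if it meets every acceptance requirement."""
        return all(solution.get(k) == v for k, v in ACCEPTANCE_REQUIREMENTS.items())

    mvp = [s["name"] for s in solutions if passes(s)]
    backlog = [s["name"] for s in solutions if s["solves_primary_user_need"] and not passes(s)]
    print("MVP:", mvp)          # ['inline guidance']
    print("Backlog:", backlog)  # ['smart defaults'] -- effective, but not necessary for MVP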

This is not a flawless approach, but it helps define requirements for solutions more effectively. If it doesn’t, then that lack of effectiveness becomes a solution issue for the next iterative round of the process.

I should note that this particular issue came to me in sprint planning meetings where what will be accomplished is based not on needs, but rather on schedule first, then resources, then needs. In this scenario, "needs" are the first thing that gets dropped because their priority is wrongly demoted to last. Design thinking places needs first, and if the democratic corruption doesn't demote them, they remain in the forefront where they should be.

I should also note that processes that are not (directly) user oriented can still be effectively addressed by design thinking, by considering the indirect effects on people of the process(es) being addressed.


Correctly Dealing with 5% Use-case

I have noticed a common myopic view of the handling of edge cases around ±5% use-case features (those features that are used by only about 5% of users). These can be outliers, expert users, or special-situation users (by job, environment, age, or other demographic). I should note that the 5% is a quasi-arbitrary small number. It is meant to represent a portion of users that isn't so small as to be outside of the MVP population, nor large enough to automatically be considered. It will and should vary depending on the size of the user base and the complexity of the application.

The problem is that this group of users is often either not parsed properly or not recognized as a cumulative grouping. These exceptions tend to be handled exceptionally well in some highly complex professional applications (in the sense of being heavily loaded with specialty features) such as Photoshop or some enterprise medical imaging software, to name a few good examples. Outside of such cases, they are handled improperly by many UX designers, or by company-defined design processes which are often, though not always, outdated.

Concept, execution or explanation.

I've seen many concepts and projects fail not because they were not good, useful, and saleable products, but because the product was marked as a failure due to a lack of understanding of either the problem it solved or the benefit it provided.

The solutions can be:

  • A simple visual that displays a complex interaction by literally showing the difference in a real-time, real-life manner.
  • A simple overall description that encompasses an otherwise incomprehensible number of features, or that shifts the focus from the features themselves to their 'simple' integration.
  • A better choice of sample data; an ineffective or business-inappropriate choice of data can fail to present the great benefit of the concept.

The first example often happens when dealing with an audience that may not be able to visualize the solution being described. This inability to visualize opens the door to all kinds of cognitive biases. For example, in a necessarily complex UI I was working on, I had suggested a fine (2-pixel) line around an active study (in a radiology environment). This description was dismissed, and then a myriad of grotesque solutions were proposed. These were too severe and problematic to consider for implementation, since most focused on one aspect without considering the complexity of the UI. So, I showed in a simple two-page PowerPoint how it would appear if a radiologist selected a study. The concept, previously rejected, was unanimously approved (by both sales leaders and clinical specialists), simply because the actual images were "real" in terms of how it would look exactly on the screen (with nothing left to the imagination).

The second example comes from having an application that can do many things through a central integration point. Each of these features has a high level of desirability to overlapping markets. The problem became apparent when questions from the audience would sidetrack the central focus (because it was not clearly defined), and the presentation devolved into a litany of features (few of which were particularly remarkable on their own, and others of which were remarkable but undifferentiated from the less remarkable features). Here the solution was to present the integration and the central focus point as the true benefit of all of these features.

The third example is surprisingly common. Here, the functionality is properly and thoroughly presented, but the sample data being used is too small or too random to demonstrate effective results, or not 'real enough' to be correlated with results that demonstrate the power of the functionality. For example, perhaps the functionality is a home-based IoT climate control system that uses machine learning to learn usage patterns for specific households. If the database being used is not a real aggregated database of individual home data points, but is in fact an artificially generated one, based on real data but randomized because of privacy or security concerns, then the resultant analytics will be equally randomized and fairly useless, since it will be impossible to show actual use cases for various demographic (or other) filters. The resultant displays from the algorithms may be dynamic, but they will show no real, consequential, actionable results. This leads the audience to conclude, "This does something, but I cannot see how it could show me anything useful," whether the hoped-for insight was basic information or unexpected patterns of specific groups. It ends up being a lot of effort whose result isn't much better than simply saying, "Believe me, it really works, even though you can't see it here."
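To see why the randomized data fails, here is a small hypothetical sketch (it needs Python 3.10+ for statistics.correlation): a real usage pattern, households lowering the thermostat overnight, produces a strong correlation, while the same setpoints shuffled for 'privacy' show essentially nothing, which is exactly what the demo audience would see.

    import random
    from statistics import correlation  # Python 3.10+

    # Hypothetical IoT thermostat data: setpoints drop by 3 degrees at night.
    hours = [h for h in range(24) for _ in range(30)]  # 30 days of hourly samples
    real = [21 - (3 if (h < 6 or h >= 22) else 0) + random.gauss(0, 0.5) for h in hours]

    # "Anonymized" variant: the same setpoints randomly reassigned across hours.
    shuffled = real[:]
    random.shuffle(shuffled)

    night = [1 if (h < 6 or h >= 22) else 0 for h in hours]
    print(correlation(night, real))      # strongly negative: the pattern is visible
    print(correlation(night, shuffled))  # near zero: the "demo" shows nothing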

Also consider:

Another aspect of this 5% user base is that the use case could be a critical but one-time use for 5% of the population, or it could be a regular, required use for 5% of the population. Meanwhile this 5%, regardless of which of these two groups you're addressing, could be a different 5% for each of 19 more features/capabilities. In the first case, the feature can be buried in the preferences, while in the latter it could be buried in the second level (two clicks away) with the option of a custom shortcut implementation.
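A sketch of that placement logic, with made-up thresholds, assuming the frequency and criticality data come from actual user research rather than intuition:

    # Hypothetical placement rule for small-percentage features, following the
    # distinction above: one-time critical use vs. regular required use.
    def place_feature(used_regularly: bool, user_share: float) -> str:
        """Suggest a UI depth for a feature used by a given share of users."""
        if user_share >= 0.50:
            return "top level"
        if used_regularly:
            # Regular required use by ~5%: two clicks away, plus a shortcut option.
            return "second level (2 clicks) + custom shortcut"
        # Critical but one-time use by ~5%: acceptable to bury in preferences.
        return "preferences"

    print(place_feature(used_regularly=True, user_share=0.05))
    print(place_feature(used_regularly=False, user_share=0.05))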

These may seem obvious, but they require diligence, because they are often considered only during a specific phase of design and development when they should be considered all through the design process, from ideation through post-production maintenance and version increments, as well as postmortems.

Summary:

This is a cursory observation of the problem (meant to initiate the conversation). There is no one solution to this issue; rather, the problem should be considered in advance of the presentation, so that a proleptic approach becomes the presentation structure. I personally like to think of it as using the scientific method to create the presentation of the concept: theorize, test, focus on flaws rather than positives (assume that the positive is the initial concept, which you are trying to find flaws in before someone else does, or simply to validate the quality of the concept itself), and fix it if possible.


Correcting perspectives in UX design


There are several generally accepted factors that guide the UX:

  • Its effectiveness (simplicity, ease, and functionality.)
  • Its lack of obtrusiveness (it gets your attention based on criticality or “on demand” need.)
  • Its implementation of accepted technology vs. new technology within a domain.
  • Its forgiveness of error.

Effectiveness

This is often a catchpoint. The level of simplicity needs to be commensurate with the task at hand, for example, contacting someone vs. performing a diagnostic procedure. The common error here is negative simplification: simplifying a complex process to improve audience numbers without considering that the process requires many possible branching decisions, each of which may reveal a new set of choices. If a product is a single-function tool, then the MVP (Minimum Viable Product) is easy to define. If, however, the product is a set of tools used to complete a generalized task, then we can often (not always) infer that the completion of the task may require a constantly changing set of tools due to unknown variables. In the latter case there are some tools which will be used all the time and others that will be used less frequently, but it is important that the less frequently used tools are ALWAYS available, because their need cannot be determined at the beginning of the process.

Part of this issue is determining the importance of the task and its related processes. For example, in surgery, most processes are critical even if no unexpected errors or situations present themselves. A phone call, on the other hand, could be casual and of minor personal value, or of critical need, depending on the situation. Further, a game poses no threats at all, but may anger a user if there are bugs in the process of play. Lastly, there is the capturing of information. This can be simple, like writing or recording done only for reference or posterity, but not required for the presentation of the information, which may be meant for listening only. The capture in this case is an indirect reinforcement of hearing/seeing the presentation of information but does not have any actual effect on the outcome of that information. (This, like many concepts, could easily be rabbit-holed, but I use these ideas for high-level differentiation.)

In terms of ease of use, it first has to be decided whether the thing should be easy to use. Child-proof safety tops and catches are an example of the fact that ease of use should not be applied blindly to everything: they are, by design, intended to limit use to those who already understand the reason for it. The same can be applied to professional applications where complex work requires a complex tool set.

Lastly is functionality. There are many complex processes that can be simplified, while there are other complex processes for which simplification reduces effectiveness, because removing the decision points that allow "on-the-fly" adjustments to environmental and other unpredictable variables can produce flawed, if not catastrophic, results.

Obtrusiveness

This factor varies based on use case. Without a fully effective AI, there is often no way to determine what should draw the user's attention to an attribute of a complex system. There may be regulatory, safety, or security requirements that define the minimum parameters for this manner of getting the user's attention, but that still doesn't address cases where multiple points of attention of similar weight/value are required concurrently. In those cases, it is up to the user to determine which to act on and in what order. Again, unknown variables may necessarily affect the user's process. These variables may be presented in ways that the tool is not designed for. This doesn't mean that the tool should be altered, as it may already be a highly effective single-function tool; rather, it can be left to the user to determine the order based on their assessment of newly or suddenly presented variables. That is why I mentioned that only a fully effective (and mostly non-existent) AI could handle this.

If we define the rules by which something should be presented to the user based on empirical use cases, and also mitigate the potential issues that may arise if the information is ignored or missed, then it becomes far easier to implement. It's just not that common that those use cases will safely cover errors that could be problematic.

Then there is the issue of what method is used. Here, we should keep in mind that new technology is accepted far more quickly by the product development community than by the world at large. This has to do with issues of confidence (will it work right?), trust (do I want to share this information?), and technological maturity (can I afford it? Is it too cumbersome?)

Consider the 1950s concept of the future, with the idea of the TV picture phone. It was perceived as a marvel of new technology, but what no one thought about was that people didn't want to be seen at home in their underwear when they answered a phone in the early morning. It was decades before Skype and FaceTime were used with any regularity, and even then only when people were prepared to use them. Video calling is still mostly used by people making long-distance calls 'back home', perhaps to another country, or in long-distance business interviews and conferences. Even now, thinking of the last three companies I have worked at, I have often seen content being shared but only extremely rarely seen live video streams of the people in those conferences. There is a level of privacy that people still hold onto across the globe when it comes to what and how much they wish to share in a communique.

There are other, similar issues with new technology that is foreign to many users and for which there is no standard. Even gestural touch interfaces don't have a consistent standard yet, though they became widely available almost a decade ago. Even where cultural pseudo-standards are in place, they are often context specific. "Swipe right" has different connotations depending on the context in which it is used. Even the order of digits on a phone keypad and a calculator keypad is not harmonized (a dialpad has the "1" in the upper left corner while the calculator has the "7" there); each is congruent with common mental models of data chunking in its own context.

Accepted vs. New technology.

The touch screen has been around for half a century but was not widely accepted until the last decade, and even that acceptance wasn't instantaneous, particularly given, as mentioned above, the lack of any standardization (other than by implication) of gestural use.

While technologies like VR have great possibilities, there are still issues of acceptance, standardization of use, and problems like motion sickness that have not yet been dealt with effectively.

Additionally, there is often a mistaken perception of any area of growth that discounts leveling off, or even drop-off, from saturation of the market, from replacement by a different technology trend (even a less effective one), or simply from the limitations of a technology when it reaches the point of diminishing returns.

Since I live in Silicon Valley, there is often a bubble effect of people seeing technology all around them and assuming that it is ubiquitous, when in fact it may only be 'ubiquitous' in high-technology areas and/or areas of high median income. As soon as these inhabitants step outside into a more ordinary area, they realize that the very technology they depend on is not only unavailable but may also be viewed with suspicion. Consider the rise and fall of Google Glass. While the technology was amazing to early adopters, they hadn't considered that many others saw it as an invasion of their privacy. It wasn't uncommon to hear a conversation between someone wearing Google Glass and another person in which the other person would ask, "Are you recording me?" and then not really believe the answer either way. This is not to say that it was useless, but rather that it was effective only in specific situations and not acceptable in many others.

Other types of feedback systems, from haptics to neurological implants, have promise but are still far too nascent to expect wide acceptance.

Error forgiveness.

This goes far beyond the system errors of the past. Here is an area of constant annoyance. Consider the fact that there are whole internet sites devoted to posting the sometimes hilarious or embarrassing mistakes of autocorrect. "I like it when it works" is a common cry amongst texting pairs who haven't turned it off. As it stands currently, autocorrect can speed up communication, but it can also lead to rather severe errors.

While basic machine learning algorithms can address some of this, it would take a deep learning algorithm to learn the cadence and style of an individual's communication, including things like context, intent (sincerity vs. sarcasm), interests, and vocabulary level, along with the context of the person you're conversing with, since the language between a parent and child and between two intimate partners may be extremely different even though two of those people could be the same person. This makes for complex interactions that can't be ignored.

 One final note:

Most of my posts are directed at more advanced areas of UX design. It is for this reason that there are not a lot of pictures as samples. I describe examples within my posts in a way that anyone beyond the beginner (and any critical-thinking beginner) will understand. Additionally, I find superfluous imagery tends to belong with "top ten" lists and other basic design concepts. I will always use imagery when it simplifies or clarifies a particularly different, new, or complex concept. Imagery can also limit the conversation, as any advanced designer will already be good at visualizing how a concept fits their own milieu of design work.

 


Honesty in UX

By Bob Glaser, UX Architect

One of the great inefficiencies in UX design comes from various forms of lack of honesty. This happens in both individual design and collaborative design. I chose that wording because "dishonesty" implies intent, while "lack of honesty" includes neglect, cognitive biases, and the like along with intent. While empathy with the user is an essential component, rigorous and sometimes brutal honesty is essential to good UX design.

If you can accept kudos for successes, then you must accept blame for failures. Failures generally teach us more than successes. "Best" is a comparative term that gives no indication of where you sit on the scale between total failure (0, if you will) and perfection (approaching infinity, if you will); it is based only on what currently exists. Even then, if all previous incarnations of a concept rate at, for example, 5 and you've hit the 10 mark, you have improved the concept by 100%, but you have no way of knowing whether perfection is 15 or 150,000. This allows us to easily stagnate on the laurels of success.

Failures, however, are concrete, or finite. Through rigorous honesty, we can, and always should, find the root causes. There is almost always more than one cause, so you shouldn't stop a failure investigation once one answer is determined. Here is a good place to implement the "5 whys" approach as a start. The five whys are well described in many Lean processes, so I'll not repeat them here.

I find it perfectly acceptable to make myself unpopular in meetings when, after a solution to a user's problem has been presented as successful in user testing, an internal (within the company*) comment arises such as "I don't like it," "It's ugly," "It's too plain," or "It's not exciting." The response ought to be: "Thank you for your opinion, but it is not relevant here. You are not the user, and neither am I." The aggregated feedback from user testing is factual. I am quite aware that both formative and summative user testing may, by the necessity of the product's design and use, require user opinion, but this opinion is part of the aggregate scoring and should be consistent in its testing application, non-leading in its style, and evenly distributed for accurate representation in the aggregate totals. The comments made internally are still taken into account, though, because they may point out an area of potential improvement. Here is where we appropriately balance objective results with subjective impressions.

Another place where honesty is needed is in the "big picture" integration of many features in a complex system. An example may be an enterprise system with a primary user group, a secondary user group, tertiary user groups, and so on, each with varying needs and perhaps different UIs from the same system. Often, particularly in Agile development environments, individual features are addressed in an unintended silo approach that places "common expectation" or "intuitiveness of a single feature/function" over a common UX design in both integration and unification. This approach averages rather than optimizes the UX and UI. This is the enterprise product mentioned above, with multiple functions and multiple types of users, where the hierarchy of users may not correlate to the expected hierarchy of user numbers (e.g., the assumed primary user group may be only 10% of the total user base). This is not the fault of the Agile method, but the Agile process allows it to be easily ignored or glossed over. (We must remember that the Agile process was developed as a software development process, without UX as part of its initial design; there are many good articles out there on methods for incorporating UX into the Agile process.) This may seem counter-intuitive to design, but what it does is help to reinforce a common mental model of a complex system.

Next is honesty in priority of function. I have often seen great (and financially disproportionate) effort spent on infrequently used or needed features. I think of this as "pet project syndrome." Another cause is the insufficiency, or outright failure, to clearly define the priorities of features, with weights based on user needs, in a form rigid enough to create a reasonable goal. What this deprivation of honesty costs is focus on the primary functionality. This is also one of my favorite areas where the "bucket of rationalizations" is brought out to justify poor decisions in the design process. Here is fertile ground for false dichotomies and false equivalencies. Fast decision making often masks these mistakes and makes them difficult to see until it's too late. This is often a result of numerous directional changes within the development cycles and heuristic iterative processes prior to user testing.

Another area is democracy in design. This is a practice that I feel should be abolished after the first heuristic phase of formative evaluation. After that, the only time this kind of voting should be applied is with a group of well-targeted users who have just tested the product or prototype. Votes taken in a board room are not only of little value, they can be counterproductive and costly. Even in heuristic evaluations these can be problematic, since equal weight is given to the UX designer, the feature creator, the feature implementer/developer(s), the system architect, the technical product owner, and the marketing product owner. Each of these people has an agenda that may be rationalized as user-centric when underneath there may be other reasons (conscious or unconscious). I include the UX designer as also being potentially influenced here. Basically it comes down to the simple fact that the further you get from the user, the more likely you are to get decisions based on concepts that are not relevant to the user. It is easy to fall into the trap where "these decisions affect the user in the long run" becomes a rationalization for business cutbacks based on time or resources, while the true effect on the user may be irrelevant or even counterproductive. This is not to say that these decisions should be dismissed, as they may have significant business relevance, but UX should not be bundled into them unless there is a measurable and direct 1:1 relationship.

Any good designer knows that it is not their "great taste and discernment" that makes them a great designer, but rather the ability to create something that they may personally find "ugly" in concept, aesthetic, or even at the cognitive level, while realizing that it is ideal for the end user. If you want to create art, then become an artist, where your ego is an asset rather than a liability.

Another is the top 3, 5, or 10 list. This not only smacks of amateurism but also ignores the fact that the number is irrelevant when it comes to any MVP (minimum viable product). The feature list for an MVP should be changed only when a serious deficit or redundancy is discovered, not based on anyone's personal whims (though these whims are typically presented as essential, often with circular logic, specious arguments, or examples that are not properly weighted). I have personally turned down offers to write articles based on these "top ten things…" since any good professional will know them already. They are useful for the beginner but have the dangerous flaw of being viewed as intractable rules.

To me, my best work is invisible. My favorite analogy for this is the phone. When users want to call their mother, their goal is to be speaking with their mother. Not a fun dialing experience, not a beautiful 'dial/send/connect' button. Just talking to their mother. So the practical and physical tasks needed to accomplish this should be seamless, and so intuitive and obvious that the user may not even be aware of performing them. The challenge here is in getting the user used to doing something that is new to them, different, or that requires trust because it removes one or more steps from a common use case. A common example of this is the elimination of the "save" function in Apple's iOS. There were plenty of people who didn't trust it, or who would constantly check, until they trusted that their input was saved automatically. The caveat being the "Save As" function.

I should point out here that while I believe facts rule over opinion most of the time, I will always concede that our end users are human. There is much more than logic and statistics involved here. Culture, education level/intellect, the common mental models of the user base, and other psychological factors have an important place in UX design, as do limitations that may be set by safety, regulation, or even budget. The important thing is to make sure that honesty is not pushed to the sidelines because of these additional variables, but rather is viewed as an important way of dealing with them as well.

* These examples are based on my experiences at over 13 companies (every company I've ever worked for, so this isn't an indictment of any one company but rather a common systemic problem), as well as on examples given directly to me by many other great designers, like Don Norman, and others.


The disparity of eye vector orientation and proprioception demonstrated with the Oculus VR.


Recently, after playing with my Oculus VR through some games and environments on a Galaxy S7 Edge, I spoke with some other users, several of whom complained of becoming nauseous after some use. Unlike roughly 25 to 33% of the population (depending on which statistical data you use), I am not prone to motion sickness, so I hadn't experienced this. I questioned those users and found they had all been prone to motion sickness to some extent. I theorized that it probably had to do with the fact that the motion sensing follows head position and not eye direction. This disparity is a major factor in common instances of motion sickness.

For example, someone prone to motion sickness may be able to easily drive a car on a winding road with no ill effect, but as a passenger rather than the driver, they are more likely to be looking in a direction other than directly forward (e.g., slightly to the left when turning left or slightly to the right when turning right), in other words, no longer maintaining a view along the vector of movement, slightly ahead of the current position. As soon as the individual separates the direction of view from this vector, any mild disorientation is likely to initiate the motion sickness effect.

The same is true when wearing an Oculus VR headset. The fact that there is no eye-tracking leaves the user with a disparity between the directional viewing vector and head orientation, which will cause motion sickness in those prone to it.

This is noticeable in a game that uses the orientation of the head to create a point in the virtual space "in front" of the user. The point moves only when you move your head; when you move your eyes, it doesn't move. This creates an interesting paradigm of disparity between a seemingly immersive virtual environment and the way the brain processes visual information, using both proprioception (primarily of the head) and visual vectors of orientation. When these are disconnected, as in a virtual environment, there is a blank area of perception that is most easily recognized by those who are prone to motion sickness.
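The disparity itself is easy to express. Here is a minimal sketch (hypothetical vectors, not a real headset API): the angle between the head-forward vector the headset tracks and the actual gaze vector is exactly the quantity a headset without eye-tracking cannot see.

    import math

    def angle_between(v1, v2):
        """Angle in degrees between two 3D direction vectors."""
        dot = sum(a * b for a, b in zip(v1, v2))
        norm = math.dist((0, 0, 0), v1) * math.dist((0, 0, 0), v2)
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    head_forward = (0.0, 0.0, 1.0)  # where the headset assumes you are looking
    gaze = (0.35, 0.05, 0.94)       # where the eyes actually point (hypothetical)

    print(f"gaze/head disparity: {angle_between(head_forward, gaze):.1f} degrees")
    # The renderer sees 0 degrees of disparity, so the point "in front" of the
    # user stays put while the eyes wander -- the mismatch described above.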

Two issues then present themselves which can be taken as potential solution opportunities:

  1. Could this type of virtual reality be used therapeutically, to see if there is a way of reducing motion sickness through training in an environment that already separates these two elements?

  2. There is also an opportunity to add eye-tracking hardware and associated software to account for this disparity and create a more effective virtual environment.

I will post more regarding this after more experimentation. There are different issues with the tactile UI, which I will address separately. If you have any questions, just ask.


Culturally Agnostic UX.

Designing a culturally agnostic UX.

By Bob Glaser, UX Designer ©2014
When designing the UX for an MVP (minimum viable product), one of the questions you need on your "What I need to know" list is: what is the initial audience demographic, and what is the longer-term demographic? For obvious reasons of practical business planning, these need to be two separate questions and should have two different answers. If the answers aren't different, then you might as well be throwing darts at a board to determine a marketing strategy. I am, of course, oversimplifying somewhat, but not by much.

The reason for these questions is that the fundamental UX structure should be culturally agnostic. There may seem to be an exception when both the user and the task feedback of the UI are highly restricted to a specific and typically advanced content/skill set (e.g., neurosurgeons). The issue with that is that it still leaves out language as an attribute.

I don't want to promote the idea that the UX itself should be culturally agnostic, because that would produce an experience that is useful to few, if any. Also, if a product is culturally driven and not applicable to anyone outside the target demographic, then cultural agnosticism is far less important, but it shouldn't be dismissed completely, for the sake of innovations that may be repurposed later. (I'll address this later.) Often these types of applications or products are meant to address an issue that is specific to a demographic that is also geographically specific.

Part of this issue also revolves around the common problem of assumption (which I've addressed previously). Often when we are designing UX and IxD (not to mention content and platforms), we have biases that assign the attribute of "common knowledge" or, worse, "common sense" to things. (I could quote any one of a myriad of quotes about common sense, but you can just click on the link to see for yourself.) Now, in the fairness of full disclosure, this article, which I write in my native US English, with the link just provided also being in English, is not culturally agnostic. Such is the limitation of my writing, but anyone who is fluent in both English and any other language and sees it as useful is welcome to translate it. If I make an assumption based on my perspective of American culture here in Silicon Valley, you are free to ask me to clarify it or to suggest wording that is more encompassing.

Transitioning to a culturally agnostic process.

Do it incrementally.

It is important to realize that it is unrealistic to expect to switch to an agnostic approach all at once, because biases can't be 'turned off' cleanly or suddenly outside of a theoretical environment. Humans are affected by emotion no matter how scientific and pragmatic they may be.

The first thing that you want to do is add an agnostic filtering step to your UX/IxD development cycle. Initially, this should focus solely on cultural biases that are presumed in the design and architecture. For example, if you have access to employees who spent a significant portion of their lives in a cultural situation different from yours, let them review the work with the idea that they should focus on anything that you presumed in the design.

For example, you could be gathering inaccurate data by asking a question that looks like this:
I am a:
□ type A
□ type B
□ Decline to answer.

This creates a surprisingly inaccurate response. The reason is the presumption that A and B encompass everyone and that the third choice will be taken literally. When users define themselves as neither A nor B, and the generic but all-encompassing 'other' is not an option, then none of the answers is relevant. They are forced to choose an answer that is inaccurate, in a way that you can't assess when collecting the data. I know from my own experience with these types of questions that I am usually somewhat angered by the fact that I don't even have the choice of 'other'. Having to choose 'Decline to answer' sounds like I don't want to tell you, when in fact the opposite is true, but there's no option to express that.
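One possible fix, sketched in the same style (the exact wording and option set would of course depend on what is actually being measured), is simply to restore the literal meaning of each choice:

I am a:
□ type A
□ type B
□ Other: ___________
□ Decline to answer.

Now 'Decline to answer' means exactly what it says, and the users who fit neither A nor B have an accurate answer available.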

These types of questions can anger users because they can touch on partnership status, sex, race, religion, nationality, even accessibility descriptions. In the US, the diversity questions that an employer is required to ask are a good example of this, though the employer has no control over them, as they are a federal requirement. The arbitrary clumping of groups doesn't bother anyone who perfectly fits the available choices, but the remainder of the population has to choose between the 'closest' but inaccurate designation, picked in some arbitrary way, and 'Decline', with its potentially variable and inaccurate inferences.

Outside of government-mandated questions, however, the UX designer can focus on the areas they do control. There may even be the option of defusing the government requirements by distancing the design-relevant questions from the government-mandated ones, to improve both compliance and accuracy. The options here are numerous and beyond the scope of this article.

You will know the process step is well integrated when you've gone through at least two release cycles and all of the stakeholders can see empirical results that have been influenced by this approach. After you have integrated this step into the process (and expect this to take time) and all the teams are acclimated to it, you can evolve to the next step:

Incorporating cultural agnosticism into the complete process.

Once team members have absorbed the step of checking for cultural agnosticism, it is time to treat it as a checkpoint line item that appears in every iteration of the development process. The big return here is that other groups can see how this approach can be applied to areas outside of UX and IxD, such as product management, marketing, QA, even engineering, R&D, and sales.

It is really important that the prior incremental approach is fully accepted, or else you're likely to hit a wall with this. The full adoption of the incremental model will create evangelists for the broader implementation, making adoption easier and more obvious, without UX/IxD being a lone grandstander looking for validation and attention.

Try to include as culturally diverse a group as possible, and if you can't gather that many in person, you can always go online and ask. In cases like this, always go to the source rather than to what you may consider close enough.

For example, a Chinese perspective should never be generalized as the Asian perspective; it is the Chinese perspective. Even within that, there may be differences, such as a Mandarin vs. a Cantonese perspective.

To make the point using the different taxonomic perspectives of gender: there is more to the subject than sex. There are gender, gender identity, and sexual preference. If any of these are relevant to your UX design, then you need granular differentiation, since they are not interchangeable in any way. Clumping them together is likely to give you inaccurate feedback at best and, at worst, will anger the subject.

80/20 rule in cultural agnosticism.

As Don Norman says, "At some point you have to stop and release the product."[1] I realize that the previous section on complete integration can set up a level of granularity that could create an endless cycle of iteration and scope creep, with a negative effect on schedule and budget. Here is where you manage implementation by narrowing focus to the "Minimum" aspect of the MVP. The important thing to remember is not to paint yourself into a cultural corner that forces you to reinvent the UX design with each new version, particularly when that new version is meant to have a primary focus on expanding the market of the product.

In the end, when you design the UX with a culturally agnostic approach, you will have a foundational design that becomes portable across cultures through easier and more effective localization.
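As a closing sketch of what that portability can look like in code (a minimal illustration with hypothetical locale data, not a recommendation of any particular i18n library): the core flow stays culturally agnostic, and everything culture-specific lives in swappable locale packs.

    # Minimal sketch: core logic is locale-agnostic; culture-specific strings
    # and formats are swapped in from locale packs.
    LOCALE_PACKS = {
        "en_US": {"greeting": "Welcome",    "date_fmt": "{m:02d}/{d:02d}/{y}"},
        "de_DE": {"greeting": "Willkommen", "date_fmt": "{d:02d}.{m:02d}.{y}"},
    }

    def render_header(locale: str, d: int, m: int, y: int) -> str:
        pack = LOCALE_PACKS[locale]
        return pack["greeting"] + " - " + pack["date_fmt"].format(d=d, m=m, y=y)

    print(render_header("en_US", 5, 11, 2014))  # Welcome - 11/05/2014
    print(render_header("de_DE", 5, 11, 2014))  # Willkommen - 05.11.2014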

[1] UX Hacking: An Evening with Don Norman, 17 Dec 2013, Stanford GSB.
