UX failures: A BOTTOM TEN LIST

By Robert R Glaser, UX Architect

I make mistakes. Most of my greatest successes, in fact, came from lessons learned; it would almost seem that any success that didn’t come from failure came from luck. Failure taught me through experience, in very concrete terms, what happens when I fail.

We are in a deluge of “Top 10 lists”. This list happened to come to ten by coincidence, not by intent. I tend to loathe the common facile top ten list as clickbait, or worse, a set of ostensible and rudimentary rules that should have been learned in the “101” level of any area of study, much less expertise. Sometimes ten is too many, and far too often it’s not enough. What irks me about this trend is the implication that expertise can come from reading these lists (this list below notwithstanding). It is a recipe approach to problem-solving, in this case in UX. A recipe approach provides answers to specific problems, but never with enough background to really understand whether the solution you’re about to apply will actually solve the problem, or do so without causing other cascading problems.

So I thought I’d make a list of common problems caused by the “instant expertise” of these oversimplifications, and maybe provide a way of avoiding them by offering direction rather than solutions.

I intentionally didn’t put illustrations here, since this article is meant to be thoughtful and to let you think of your own examples. Often when pictures are provided, people tend to draw conclusions too quickly and without a thorough understanding of the ideas.

1. Anything that can be distilled to a top ten list is cursory at best.

The internet is bursting with life hacks, memes, and other suggestions so amazing that you wonder how anyone ever thought of them. At the same time, so many of these suggestions for easier, faster, cheaper, better solutions turn out to be none of those things (or maybe just one).

It gets worse when you focus on a particular field of work like UX (though the problem is by no means limited to it). Here we are barraged with rules of thumb that are overgeneralized, or too specific, or highly limited, or so obvious that if you have to be told, you are in the wrong area of work or study.

Would you go to a doctor who refreshed their knowledge of current medical practice by reading a top ten list of missed stroke symptoms? Yet these lists are commonly recommended as a formula for successful articles and blogs. I rarely come across a list where, after only one or two items, I didn’t find myself thinking “except when…” followed by countless examples. The problem is not necessarily the overly simple statements themselves, but the air of completeness they project, often accomplished with prefixes like “Always…” or “Never…” or similar authoritarian language. By the way, this list is no exception, which is why I worded the first item’s title as I did. The items here are meant to create awareness and begin dialogues, not provide exact rules for advanced UX design.

2. Check your facts.

I see so many lists with factual-sounding statements (often dressed in technical jargon) that don’t even make sense. I would expect anyone reading this blog to be just as skeptical, and you should be.

I recently read an article about color use that popped up in my email from one of those sites that regularly publish articles about different aspects of design. As I began reading, I noticed this string of statements.

Firstly, we need a shared language of color terms and definitions. This handy list is from our quick guide to choosing a color palette.

The vocabulary of color

Hue: what color something is, like blue or red

Chroma: how pure a color is; the lack of white, black or gray added to it

Saturation: the strength or weakness of a color

Value: how light or dark a color is

Tone: created by adding gray to a pure hue

Shade: created by adding black to a pure hue

Tint: created by adding white to a hue

What bothered me was the arbitrary mix of terminology from subtractive color models (typically the RYB or CMYK models associated with physical pigments and dyes) and additive color models (typically the RGB model associated with light and digital displays). These shouldn’t be mixed, particularly when the list is preceded by the line:

Firstly, we need a shared language of color terms and definitions.

The article itself referenced another article that used a standard I had never heard of as a painter nor read anywhere else, “Painters Color Mixing Terminology”, implying it was a standard taxonomy. I have heard many of the terms, but never in such a structured and almost arbitrary arrangement. One can simply use Photoshop to see that these seemingly defined terms fall apart the moment one assumes they mean something concrete with clear and consistent results. Additionally, the distinction between subtractive and additive color, which matters to the everyday designer, was never really addressed; it is critical to design work that may appear both in print and on the web, where each applies and where understanding the difference is essential to the practical and technical aspects of implementing any quality aesthetic design.
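To make that concrete, here is a minimal Python sketch (standard library only, with an example color chosen purely for illustration) of how the pigment-mixing terms in the quoted list collapse into ordinary channel math in an additive RGB/HSV model: a tint lowers saturation, a shade lowers value, and a tone lowers both, so “chroma”, “saturation”, and “value” are not the crisp, independent definitions the list implies.

```python
# A minimal sketch (Python standard library) of the point above.
# "Tint", "shade", and "tone" are pigment-mixing ideas; expressed in an
# additive RGB model they are just channel arithmetic, and their effects
# overlap with "saturation" and "value" rather than standing apart.
import colorsys

def tint(rgb, amount):
    """'Add white': move each channel toward 1.0."""
    return tuple(c + (1.0 - c) * amount for c in rgb)

def shade(rgb, amount):
    """'Add black': move each channel toward 0.0."""
    return tuple(c * (1.0 - amount) for c in rgb)

def tone(rgb, amount):
    """'Add gray': move each channel toward 0.5."""
    return tuple(c + (0.5 - c) * amount for c in rgb)

pure_red = (1.0, 0.0, 0.0)
for name, fn in (("tint", tint), ("shade", shade), ("tone", tone)):
    r, g, b = fn(pure_red, 0.5)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name:5s} of red -> RGB ({r:.2f}, {g:.2f}, {b:.2f})"
          f"  saturation {s:.2f}, value {v:.2f}")
```

Running it shows the tint dropping saturation, the shade dropping value, and the tone dropping both, which is exactly the kind of overlap that makes the quoted “shared language” far less shared than it sounds.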

The referring article also included this quote as important information:

What colors mean

Red: energy, power, passion

Orange: joy, enthusiasm, creativity

Yellow: happiness, intellect, energy

Green: ambition, growth, freshness, safety

Blue: tranquility, confidence, intelligence

Purple: luxury, ambition, creativity

Black: power, elegance, mystery

White: cleanliness, purity, perfection

Any good UX designer would throw these out the window. What any color means changes wildly depending on context, the contemporary zeitgeist, and the culture of the demographic it is aimed at. For example, in the US red often indicates warning or danger, whereas in Japan it indicates happiness. Even these associations depend, to greater or lesser degrees, on context. Not every international-looking design is as internationally accepted as the frequency of its use might suggest.

These examples are just based on one article about color alone. Other examples are myriad.

While I see and peruse a lot of UX sites, there are very few I read with regularity; those are the ones I trust. I’m not naming them, because you as a reader (and I assume some of you are UX designers) should be selecting sites with some real critical thinking of your own. I have a library of real books I can peruse as well as a wealth of online peer-reviewed literature, but I always let a healthy amount of skepticism be my guide. Even Wikipedia can be good for some foundational material, as long as you check any sources that seem too good to be true, to make sure you’re not unconsciously p-hacking yourself. Always try to prove yourself wrong, not right. Proving yourself right is far easier, and ego makes the resulting misleading conclusions far harder to refute.

Steve Allen, the late comic, composer, and writer, warned in his book “Dumbth!” about the lack of critical thinking, particularly the habit of taking experts’ assessments, reviews, and endorsements as fact. This lack of discernment is a common weak link in the chain of thought, where an opinion is rationalized or biased into fact.

I welcome different perspectives and I’m sure that the statements I’ve made here are equally subject to them.

3. UX Design is an empirical science, psychology, and art.

I am almost pummeled with job offers for UX designers in which the science of UX is rarely mentioned beyond a vague or oversimplified reference to user testing. Sometimes they add more of the traditional jargon like wireframes (so often misused or misrepresented) and prototyping. The red flag is that these scientific aspects of UX are almost never accompanied by any implication of support, financial or personnel. It’s as if all those tasks can easily and just as effectively be replaced by heuristics (often mine alone) and “common sense”, which in UX research is often just the foundation for the false-consensus effect. This substitution is usually framed as a cost-saving and time-saving measure. I can use my experience and good heuristic evaluation practices as a triage for user testing, but they are a poor substitute.

If there is a lot of language in there regarding the importance/significance of creating visual design assets, design guides, or other more visual aspects of design, then I’ll usually pass or ask if they are really looking for a visual designer with some UX background – which is basically a visual designer.

Conversely, if they require a lot of coding knowledge, front-end development, CSS, javascript, etc., then I’ll usually pass or ask if they are really looking for a developer.

The reason isn’t that I can’t do visual design (it was my original field of study in college) or that I can’t code (I actually went back to college to learn programming); it’s that I want to design UX architecture. I don’t want to be a visual designer or a software engineer. I like that I have enough knowledge and experience in these areas to address UX in a way that I know can be implemented, and if someone asks “how can that be implemented?”, I can explain in detail, with specific technical references. These are important skills, but when I’m writing code or designing icon families, I’m not doing UX design.

4. UX Design should use the scientific method.

If you want to do an effective job of testing an interaction, test how it fails. Testing for success is easier, since it requires a far less rigorous approach, but it is also easier to cheat and to bias the results, even without malicious intent. I’m not referring to the grand p-hacking news stories (although they are relevant) but to the more subtle slips in formative testing of the simplest of tasks.

I should also point out that the importance of appearance, while highly influenced by emotion and experience and quite variable, is nonetheless measurable and should always be taken into account. This is where comparing quantitative and qualitative results can reveal some surprising information.

5. Visual design is far less important unless your product is a commodity.

Generally, products either have a unique differentiator (literally something no one else has), or a differentiator that is not unique but significantly better (faster, more accurate, more complete, simpler, cheaper) than a competitor’s. In these cases, visual design (if it’s not an inherent function of the product) is one of the last things to be considered, because a differentiator needs to be both introduced and made to appear intuitive (even if it really isn’t). This is done through UX architecture: user research, interaction design, formative iterative testing, and whatever other areas of ideation the available resources (primarily people) allow.

All of this happens before the visual design is even addressed (again, excluding elements with fixed visual requirements, such as standardized colors for stop/go, start/stop, or warning/failure; even these should remain as words for as long as possible).

The reason for this is that visual design biases early iterative testing into creating artificial mental models that become improperly reinforced.

But if the product is at the commodity level, such as a website for a store of any type, then the approach changes. Here visual design can begin far earlier, concurrently with the interaction design. In the early stages, branding concepts such as palettes, layout schemas, font and asset standards, and designs can be explored in a componentized and scalable manner, so that when the visual elements are later integrated with the interactive elements, the two work together more harmoniously and collaboratively.

6. Formal training in UX or visual design isn’t just a “nice-to-have”.

As someone who has had formal training (i.e., university, or a program accredited by a professional or governing body), been self-taught, and been taught on the job, I can say each has its advantages. One of the greatest advantages of formal training is the reduction of ego in the design process. This isn’t just so the designer can handle criticism, but so they can welcome it, learn from it, and apply it just as constructively with colleagues and reports. Early in my career, I was occasionally suspicious if I didn’t get some criticism on a submission; it usually meant it had been reviewed in a cursory manner. While I was sometimes told that my work was of a quality that didn’t require constant supervision (a wonderful thing to hear), it also quickly taught me that I had to improve my own self-reviewing process, which I still do to this day.

I have learned many things through self-directed study, informally through training books, online courses, and personal experiments, but that path lacks the second- and third-party interactive review throughout the process. Learning on the job does a far better job of that, although the learning may be less consistent because of job requirements and deadlines.

I particularly enjoy mentoring whenever I can. I used to say, “There are no stupid questions.” Now there is a caveat: “It isn’t stupid until you ask the same question again.” I realize that someone may not fully understand something without realizing it, but there is also a point where it is up to the learner to ask follow-up questions until they understand why something is being done, not simply that it has to be done.

7. Context is everything.

The context of a user experience sometimes seems so obvious that it needn’t be recorded or enumerated. That is often an error. In formal or complex systems, recording context is essential; in informal or common situations it should still be addressed, even if informally, so long as it is recorded somewhere in the process. The reason is that we often overlook obvious things precisely because they are automatic: autonomically prefiltered sensory input such as background noise (audio, visual, or cognitive), and so on.

Two instances where I found this to have been a critical issue that was addressed far too late were:

  1. Designing (actually, just updating) the UX for a car’s infotainment system. Looking at the data on mental models for media players on phones and in cars gave a rough idea. Additionally, there were the company’s differentiating features, not the least of which was A.I. and machine learning in the media selection process. Tablets, and later in-dash prototypes, were used as proxies. All of these pieces of information were very helpful, but a flawed contextual assumption was testing without driving. While driver distraction was addressed in the assumed paradigms, the designs were not tested in real-life driving situations. This required significant changes to the design recommendations, counter to the product requirements, because of the simplest of use cases.
  2. When designing for an enterprise radiology workflow, I was aware that the majority of enterprise radiologists work in dark rooms, and this was taken into account in the design paradigm. However, simply sitting and watching a variety of radiologists with different specializations work in those dark rooms made it apparent that the visual differentiators in the on-screen data not only could be reduced but had to be: the current versions these radiologists were using were clearly distracting in a way that affected their attitudes while doing diagnostic work. While this change was not asked for by users or listed by product management, once implemented, the response was overwhelmingly positive, with no negative responses.

Each of these issues was eventually addressed, but late in the development cycle, where the required resources and time were far higher than they would have been had the issues been noted earlier. Incidentally, these two were not the only examples, just two that could be described fairly briefly.

8. Flexible and scalable design or quick design.

Most people have heard the old saying “Better, faster, and cheaper: pick two.” UX design is similar, though simplified to two choices rather than three. You can’t do both, and you need to understand that each has its advantages and disadvantages. A design that is flexible and scalable requires much more time at the beginning, since it demands a fairly detailed understanding of the user and system ecosystem. That deeper understanding allows a more componentized approach to UX design, which lets you reduce the frequency and increase the speed of validation in the formative stages. It also facilitates more scalable and agile engineering development once the designs reach that phase, and much later, in more summative testing, adaptation is easier and decision making is often faster when there are important benchmarks and deadlines. This slower beginning, though, often requires a somewhat defensive stance, since there’s not much to show early on. I should note that flexibility and scalability, while different, can fairly easily be designed for concurrently, at little additional cost in time or resources over addressing only one, since the same resources and many of the same design considerations serve both.

The quick design approach gets an MVP to market, or at least in front of shareholders and sales, far more quickly. Being fast and dazzling can be a boon to a startup with no product yet in the market; there is something very compelling about having a working product, as opposed to a prototype, in hand. It doesn’t show what it’s supposed to do but rather what it does do. The big drawbacks come when changes or a new version need to be made. Hardcoded applications require major rework, if not rebuilding the base code from scratch. From the purely UX point of view, fast design, while providing a UX solution for that specific product (even if it’s the first design), is likely to create expectations and mental models that are incompatible with new or different features. This can cause user fatigue, user conversion issues, and inconsistent expectations rooted in the earlier mental model, leading to frustration even though the newer or updated product is better in terms of quality, features, and reliability.

Jared Spool wrote an excellent and more expansive article a few years ago on this called Experience Rot.

9. Don’t espouse good practices and then not follow them, or, worse, punish those who do.

There are things I’ve heard over and over in various companies. They are well known, both from popular repetition and from the truth behind them. The biggest cause of this problem is fear, ego, or both. Some examples:

  • Don’t be afraid to fail. I’ve witnessed numerous punishments, and more than a few firings, over genuinely innovative initiatives that fell short, even when the failure came down to insufficient development time or unknown variables, or when the work was actually successful but judged a failure against the exact letter of the original hypothesis. In many of these cases there was significant, usable invention and innovation that was often utilized later. These failures were punished because someone in a decision-making position either felt threatened or cancelled a program out of fear of failure rather than for its potential for success, since the risk of job loss and sometimes fragile egos loom larger than accolades and ROI.
  • Experience is essential, and then hierarchy is placed over experience. In an ideal business, and in fact in many ordinary ones, good leadership hires experts to advise and help the company succeed. Often, though, a long relationship with an industry does not mean a thorough understanding of it. Knowing the politics of an industry never equates to knowing its technology. They are separate domains of knowledge, and like most areas of knowledge, each requires (1) study, (2) practice, (3) many failures, and (4) enough successes to rule out luck, before genuine expertise exists. I know enough about corporate politics to realize that I want to be involved only when necessary and no more than that. But I also know that someone’s experience with the political aspects of a product or project doesn’t outweigh the technical and production side of it.
  • Ending meetings without a confirmed and mutually understood consensus. I have been in so many meetings where not only was nothing decided, but virtually everyone left believing there was a decision, and whatever they thought it was is what they would act on. Even a meeting where nothing is decided or resolved is fine, as long as everyone leaves with that understanding. There is plenty of good, basic meeting guidance out there; my point is simply to follow it.
  • “We have a long-term plan,” and then you realize it changes week to week, or even day to day, on knee-jerk reactions at the decision-making level, so often that more resources are spent spinning up and spinning down than actually producing anything. I often describe this with a paradigm I call the “sitcom logic approach.” An idea is presented and, on its first (and only) run, fails hysterically (it is a sitcom; fixing it wouldn’t be funny). What then happens is that the idea is abandoned for something else. No one tries to figure out what went wrong and whether it is fixable. Often these failure points are minor missed considerations rather than catastrophic conceptual errors.
  • “No.” and “I don’t know.” are negative only if viewed from an ego standpoint. Dismissing or shutting someone down for the first (“No”) dismisses their experience and knowledge beyond what you may know; dismissing the second (“I don’t know”) dismisses curiosity and an opportunity to learn and innovate.
    Adam Grant has spoken and written about the success of those whose careful use of “No” has improved their productivity, businesses, products, and more. I highly recommend Adam’s books and videos; if you follow up on those, I needn’t repeat them here.
    As for “I don’t know” in the design world, ego relegates it to a lack of intelligence and experience, when the opposite is generally true. It is the primary door to both knowledge and experience. First, when you say it, you are responding to a question or situation for which you have no answer, which tells you exactly which subjects or concepts you need to learn about. While learning about them, you have the opportunity to relate them to your own life experiences, often as they are happening around you; that is the first form of experience, basic observation. The second form of experience comes when you put the newly learned concepts and ideas into practice to see where and how they succeed and fail.
  • “Standards” that are really just “trends”. There are so many examples of this that it would not be difficult to compile a top 100 list of trends that became “standards” and still went away, as trends do, just more slowly, because far more money and time were invested in them. I’ll use one example: the open office environment. I have worked in everything from office buildings with actual offices to “office” buildings that look like well-decorated warehouses with modern desks. I began seeing the first long-term studies almost 20 years ago on the actual inefficacy of the environment, and, not surprisingly, studies continue to reveal the same things:
    • They are intended to foster collaboration – but they reduce it
    • They are meant to create a more social environment – but they increase the need for privacy beyond what would be normal privacy expectations.
    • They increase distraction
    • They reduce productivity
    • They increase offsite work even when that’s not the desired effect.

The part I find amusing is that most of the adaptations to the problems of the open office environment are symptomatic cures that don’t address the actual problem: things like privacy rooms, quiet or comfort areas, and gaming areas.

10. The UX unicorn problem.

I am a UX architect who started out as a graphic designer (because that’s what we were called back then). I was an editorial art director in medical publishing, and I designed advertising for everything from food courts to couture retail. Then I got a job at Xerox. That began my journey into what would become UX, through early (and unbelievably expensive) computer graphics and animation and years of instructional design and early computer-based training, and so I went back to college to learn programming. That was useful because it taught me two things: first, how to design and write code; second, that I didn’t want to be a programmer, though it was incredibly useful to understand what was going on in code and how to engage with software engineers. I then spent time developing programs and learning about the complexities of how people interact with machines, computers, and sometimes simply objects. I got to work with a lot of testing environments, from summative testing in highly controlled labs with eye-tracking equipment to simple formative testing with both quantitative and qualitative results as needed. I did a stint at Gartner designing data visualizations for their top consultants at their world symposia, and designed UX for VOIP systems (from regular users to administrators) for what used to be ShoreTel (now part of Mitel). I’ve designed radiology enterprise systems (PACS) and voice-controlled and voice-enabled vehicle infotainment systems.

With all that, I find the “unicorn” designation problematic. While I can do a lot of things because I’ve had broad experience, I would rather apply all that experience to creating really elegant and effective UX. That doesn’t mean something spectacular; spectacle is really about visual design, and the process for the user is about the result they want to accomplish. Nor does it mean a really interesting interaction experience, since that applies to game design more than to common UX. I have often said, and will continue to say, that if my work is noticed by the user, I have failed. This goes well beyond the rudimentary expression “If the UI needs to be explained, the UX has failed.”

Here is where the Socratic idea that “the more you know, the more you realize you don’t know” becomes so apparent. The unicorn UXer seems to be able to do so many things, but all the time they are doing user research, they are not doing strategic design. All the time they are organizing, running, and analyzing user tests, they are not designing wireframes. All the time they are creating visual assets, they are not establishing a design language and its documentation. All the time they are managing tactical aspects of implementation is time better spent establishing the standards of the Human Interaction Guidelines. Software development gets distributed and delegated, yet there is so often an expectation, for no good reason, that a UX designer can do everything concurrently.

For all the boasting that UX is of paramount importance, many companies invest precious few resources in it and don’t understand its complexity and process. So when I see a company looking for a UX designer who is a unicorn, it’s typically either an underpaid position that will set the hire up for scapegoating or burn them out in no time, or, perhaps more likely, a position at a company that doesn’t really understand the importance of the user’s needs, all the while being certain that they “just know, because it’s common sense.” This dangerously and incorrectly commodifies UX design work and, worse, almost forces mediocre work. It removes the UX designer’s ability to design an elegant interaction and forces them into a high-speed triage situation. Such situations happen even in the best of circumstances, and solid formal training and many years of experience increase the likelihood that the quick solution will be a good one. It is, however, a bad approach to design and development.

In summary

While I’m often surprised by amazing output that rode on the luck of an early idea being successful, I’m far more impressed when a successful outcome comes from diligent, well-thought-out work, since that kind of work is far more likely to lead to further successful improvements in the future.

 


The sitcom logic approach to failure.

By Robert R Glaser
UX designer and architect.

This is an issue that seems omnipresent in all but a minuscule percentage of businesses, where thoughtful problem solving is replaced by a bulldozer approach or, worse, by decisions based on random guesses.

What do I mean by this? It’s surprisingly easy to describe and easy to recognize. Whether you watch old sitcoms on TVLand or new ones on a streaming service, a common plot device is that one or more characters are presented with a problem (suddenly needing cash, meeting someone, fixing something). They quickly realize what needs to be done and devise a method to achieve it. The method is typically silly and irrelevant. Something happens that quashes the process, so they fail. Here’s where the sitcom logic comes in: after failure, the concept is discarded in whole, without a thorough postmortem to determine where the problem truly lies, which is usually in the method (which, for the sake of the sitcom, is usually silly). Sometimes a review is performed and some superficial details are addressed, but not thoroughly enough, so the entire project, including the initial theory, which is usually valid, is discarded.

Jeff Catlin, in Forbes, discusses the failure of IBM’s Watson for Oncology at MD Anderson, which was halted after a 62-million-dollar investment, in part because varying cultural models were not considered. Interestingly, I had previously worked at Philips Healthcare, and one of the aspects I had to consider in any UX design work was that the resulting design needed to work in multiple markets outside the US. Something that has little or no value in the US may have significant or even critical value in Germany or the U.K.

Another example is innovation and invention without proper commercialization. The story of Apple’s visits to Xerox PARC has become mythologized, but in the long run both Xerox and Apple had technologies that revolutionized the personal computer industry. The difference is that Apple didn’t discard or ignore those technologies; it commercialized them (an effort later significantly surpassed by Microsoft) to the point where they are ubiquitous across hardware, operating systems, and applications.

Don Norman has talked about how it often takes decades for a new technology to be adopted on a massive scale. This adoption isn’t merely waiting for it to become cost-effective, but for it to be viewed as non-threatening (even if it never was a threat) and not foreign in its mental model, even when it’s actually easier to use or manipulate. He has used the touch screen as an example: it took over 30 years to become ubiquitous, from its functional invention in 1975 and first production in 1982 to real large-scale adoption with the iPhone’s release in 2007.

My own experience has shown this is often true, particularly when some new technology is trendy. It is common for people to present it as a forward-looking solution or methodology without even a superficial check of whether it’s actually useful or appropriate for the case at hand. Currently I work on the in-vehicle experience and on what technology can be used to support it. The most frequent error I hear is “Why don’t we do X? I can do that on my smartphone,” without the simple consideration that cell phone use in cars is limited (to varying degrees depending on individual state laws, some of which allow you to do almost nothing with a phone). While it seems obvious to some, most people don’t realize that the mental model is significantly different. Using a smartphone draws the average user’s full attention, sometimes to the point of being unexpectedly and dangerously undivided. If you haven’t seen the YouTube videos of people walking into walls, poles, other objects, and even other people while absorbed in social media or a game, feel free to look them up.

Decisions need to cover a range of issues, and those issues should be graded by which biases drove them and how. Common food is a good way of demonstrating this. If you poll people across the US to find their favorite foods and also the foods most consumed, you will probably find some overlap, but the lists are not the same. You would also find that they change over time. Significant variables like cost, availability, and trends have a strong effect on them, and there are other variables as well; the lists change significantly between regions, and even more so once you consider countries outside the US. This raises an important point: the more people you add to the averaging process, the less likely you are to find a genuinely ‘average person’.

What this means from a UX standpoint is that designing to the average creates a mediocre outcome for most. You may have an excellent theory for the UX, but the data it is based on drives a solution that takes no individual into account and therefore, often, pleases few users. The difficulty is figuring out how to subdivide the user population effectively so that each subdivision has a way of letting the UX feel customized to them (as opposed to discrete customizations made by a user). These subdivisions can be by age, culture (geographic, ethnic, or religious), economics, sex/gender, and education. There may be others depending on the target audience. Some of these may not be relevant to a specific case, but they should be addressed before they are dismissed. This issue is a common driver of mediocrity or, worse, failure.
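As a quick, hypothetical illustration of that last point (the population size, tolerance, and attribute counts below are invented, not drawn from any study), a short simulation shows how few people remain “average” on every attribute once several independent attributes are considered together:

```python
# A minimal sketch: the share of a population that is near-average on
# every attribute collapses as more independent attributes are added.
# The numbers here are hypothetical and for illustration only.
import random

random.seed(42)
PEOPLE = 10_000
TOLERANCE = 0.3   # within 0.3 standard deviations of the mean counts as "average"

for dims in (1, 3, 5, 10):   # e.g. age, tech comfort, vision, context of use...
    population = [[random.gauss(0.0, 1.0) for _ in range(dims)]
                  for _ in range(PEOPLE)]
    near_average = sum(all(abs(x) < TOLERANCE for x in person)
                       for person in population)
    print(f"{dims:2d} attributes: {near_average / PEOPLE:6.2%} "
          "of people are 'average' on all of them")
```

With that tolerance, roughly a quarter of people are “average” on any one attribute, but almost none are average on ten at once; designing to that composite average designs for nearly no one, which is why the subdivisions above matter.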

What often happens next is an error common in application development: placing all features at the same level overwhelms the user, so certain features are eliminated (along with the users who find them important) rather than finding ways to address smaller percentages of users. This can cause the application to fail to gain a growing, or even sustainable, user base.

These kinds of issues are common in applications with large feature sets. You would be unlikely to find a user (other than someone certified to teach the application) who uses all of the features of a complex system. Complex applications like this often have overlapping user bases that use the application for different purposes. Adobe’s Photoshop is a good example, as can be seen by opening and examining all of its menus and submenus. Its users include professional illustrators, photographic retouchers, visual designers for application development (both software and hardware), hobbyists, and even people who specialize in creating maps for CGI (3D) work. There are sets of tools for each of these groups that are rarely used by the other groups but are critical to the work of that specific group; the Photoshop interface is customizable to optimize for whatever the user’s primary task is. There are also features that overlap several groups, and a few that are used by all groups. Trouble starts when decision makers are either out of touch with the actual users or, worse, believe that their own use paradigm is (or should be) applicable to all.
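As a toy illustration of that overlap (the group names and features below are invented, not Adobe’s actual tool taxonomy), a few sets make the point that almost nothing is used by everyone, yet nearly everything is critical to someone:

```python
# A hypothetical sketch of overlapping user bases in a large feature set.
# Group names and features are invented; the point is that the features
# shared by every group are a tiny fraction of the features any one group
# depends on, so cutting "rarely used" features cuts real users.
groups = {
    "illustrator":  {"brushes", "layers", "vector paths", "color management"},
    "retoucher":    {"healing", "layers", "masks", "color management"},
    "ui designer":  {"artboards", "layers", "export slices", "vector paths"},
    "3d texturing": {"uv maps", "layers", "channels", "export slices"},
}

used_by_all = set.intersection(*groups.values())
used_by_any = set.union(*groups.values())

print("Features used by every group:", used_by_all)
print(f"That is {len(used_by_all)} of {len(used_by_any)} features; "
      "the rest are critical to only one or two groups.")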

So when, in circumstances such as this, there is a failure and the solution is discarded, the problem is often reframed, without review, under the assumption that the initial problem was wrong, when in fact it was the solution that was wrong. There often isn’t even a simple review to determine whether the problem being solved is one anyone actually needs solved. I may have a revolutionary solution to a problem, but if no one has any interest in solving that problem, the implementation may be successful while the product fails.

Really innovative companies usually release products with a primary intended function and some ancillary solutions as well. Once in the market, users sometimes focus primarily on one or more of the ancillary capabilities and minimally, if at all, on the primary function. Instead of seeing the product as a failure, the company simply starts treating the secondary functionality as the new primary feature(s). If they are really driven by users’ needs, they will genuinely assess whether the primary function is simply not needed or was just not well implemented. It takes fairly rigorous evaluation by the decision makers to see past their individual confirmation biases.

Personally, I learned a long time ago the deep importance of “I am not the user.” This has been really useful when going through user-result analysis. Outside of basic heuristic evaluations, I always assume that my preferences are atypical and therefore irrelevant. This way I’m more open to alternative viewpoints, and particularly interested when many of those alternative viewpoints are similar; that becomes a simple, if unexpected, target. I can then see whether the original problem definition was wrong, or the solution was wrong, or maybe both. We do learn from our failures.


Why we should be removing ‘democracy’ from Design Thinking (and maybe Agile/Scrum processes too.)


By Bob Glaser, UX designer

Design Thinking has been around for almost half a century. It has been used successfully for many of those years, and yet, as it has gained significant momentum in the last decade, it has also been reformulated, varied, simplified, altered, and ‘fixed’ by various purveyors, many of them for the purpose of repackaging and, more importantly, reselling the concept as a training program or consultancy. Because of the breadth of design thinking, I’m assuming the reader is already aware of it and likely already using it, so I will not go into a detailed description here.

One of the many concepts I have seen corrupt outcomes is the injection of democratic decision making into the process. Why is this corrupting (bad) for the success of the process? Because it can have the effect of dismissing the very real positive outcomes the process produces.

How?

First, let us consider the process. For the sake of clarity, I’ll use the Nielsen Norman Group’s description of the process, since it addresses it in a straightforward, applicable way rather than a broadly conceptual one. (There are many other versions out there that are also suitable, including some of the original concepts, later refined by the Stanford School of Design, which simplified the original seven steps to five; some are overly detailed for the purpose of this post, even though they are just as exposed to the democratic corruption.)

The process itself is a simple, semi-linear, circular, iterative sequence:

  1. Empathize
  2. Define
  3. Ideate
  4. Prototype
  5. Test
  6. Implement

Steps 1 and 2 form the ‘Understand’ phase, steps 3 and 4 the ‘Explore’ phase, and steps 5 and 6 the ‘Materialize’ phase.

Since the process combines the seemingly paradoxical pairings of logic with imagination, and systemic reasoning with intuition, it is susceptible to being adapted in ways that corrupt its results and defeat its purpose.

When a group begins this process, they consider the user’s needs, the business’ resources/viability, and the technical feasibility/capabilities. They then follow the process and come up with potential solution(s).

The problem arises at this point.

The common error is taking the potential solutions and voting on them. The problem with this approach is that it tends to cast the foundational concepts out the window in order to settle on a solution. Sometimes the vote is shaped by constraints, such as choosing the low-hanging fruit even though it is low priority, simply because it is easiest to deal with. This is often compounded by resource limitations that may be artificially imposed, stated as something like “We are only considering the solutions which can be accomplished in [time frame]” (or similarly artificial or arbitrary constraints). Then the group votes on solutions within those constraints.

Since the purpose of the process is to determine the solutions that need to be addressed*, a democratic vote corrupts the results by dismissing the effective and, hopefully, innovative ones. The intuition and imagination of the solution-creation process are meant to operate alongside logic and empirical decision making. Design thinking uses these empathetic concepts to help frame or reframe the problems and potential solutions, bringing creativity to the process rather than relying on a methodical scientific-method process alone, and thereby produces more innovative solutions. It should be noted that design thinking is simply one of many ways to help produce effective, implementable solutions.

The vote may easily (and regularly does) combine or cut down the solutions, and thereby eliminate the best, ideal, or most effective ones from the standpoint of the user.

*Design Thinking is a solution perspective as opposed to the problem perspective of the scientific method.

How to deal with this democratic corruption?

This is fairly easy, though often not popular, because it requires a little extra effort. When the group is in the early stages of gathering information (the Understand phase), they should also be defining the requirements of acceptance. The solutions are then run through these requirements to filter which results will be implemented. If one is determining the requirements of an MVP (minimum viable product), it should be easy to say that one solution is effective but not necessary for MVP while another is absolutely required for it. Then, for the solutions that may or may not make the cut, the same criteria are applied, and instead of serving the egos of the design thinking participants (in the business or company), the results serve the needs of the users.
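A minimal sketch of that filtering idea follows; the criteria and candidate solutions are invented for illustration (nothing here comes from NN/g or any specific team’s process). The acceptance requirements captured during the Understand phase act as the gate, rather than a show of hands.

```python
# A hypothetical sketch: screen candidate solutions against acceptance
# requirements defined up front, instead of ranking them by popularity.
from dataclasses import dataclass, field

@dataclass
class Solution:
    name: str
    votes: int                                  # what a democratic vote would use
    meets: set = field(default_factory=set)     # acceptance criteria satisfied

# Acceptance requirements defined during the Understand phase.
MVP_CRITERIA = {"addresses core user need", "technically feasible"}

candidates = [
    Solution("Quick cosmetic fix", votes=9, meets={"technically feasible"}),
    Solution("Reworked task flow", votes=4,
             meets={"addresses core user need", "technically feasible"}),
]

# Democratic vote: the easy, popular option wins.
by_vote = max(candidates, key=lambda s: s.votes)

# Criteria filter: only solutions meeting every MVP requirement survive.
by_criteria = [s for s in candidates if MVP_CRITERIA <= s.meets]

print("Vote picks:   ", by_vote.name)
print("Filter keeps: ", [s.name for s in by_criteria])
```

The point of the sketch is only the shape of the decision: the criteria are agreed on before anyone is attached to a particular solution, so the filter serves the user’s needs rather than the room’s preferences.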

This is not a flawless approach, but it helps define requirements for solutions more effectively. If it doesn’t, then that lack of effectiveness becomes a solution issue for the next iterative round of the process.

I should note that this particular issue occurred to me in sprint planning meetings, where what will be accomplished is based not on needs but on schedule first, then resources, then needs. In that scenario, needs are the first thing dropped, because their priority is wrongly demoted to last. Design thinking places needs first, and if the democratic corruption doesn’t demote them, they remain in the forefront where they should be.

I should also note that processes that are not directly user oriented can still be effectively addressed by design thinking, by considering the indirect effects on people of the process(es) being addressed.


Correctly Dealing with 5% Use-case

I have noticed a commonly myopic handling of edge cases around ±5% use-case features (those features used by only about 5% of users). These users can be outliers, expert users, or special-situation users (by job, environment, age, or other demographic). I should note that the 5% is a quasi-arbitrary small number. It is meant to represent a portion of users that isn’t so small as to be outside the MVP population, nor large enough to be automatically included. It will, and should, vary depending on the size of the user base and the complexity of the application.

The problem is that this group of users is often either not parsed properly or not defined as a cumulative grouping. These exceptions tend to be handled exceptionally well in some highly complex professional applications (in the sense of being heavily loaded with specialty features), such as Photoshop or some enterprise medical imaging software. Outside of such cases, they are handled improperly by many UX designers, or by company-defined design processes which are often, though not always, outdated.

Concept, execution or explanation.

I’ve seen many concepts and projects fail not because they weren’t good, useful, and saleable products, but because they were marked as failures through a lack of understanding of either the problem they solved or the benefit they provided.

The solutions can be:

  • A simple visual that displays a complex interaction plainly, by literally showing the difference in a real-time, real-life manner.
  • A simple overall description that encompasses an otherwise incomprehensible number of features, or that makes the ‘simple’ integration the focus rather than the features themselves.
  • Sometimes it is the choice of sample data itself: an ineffective or even (business-wise) inappropriate data set fails to present the great benefit of the concept, and a better sample is the fix.

The first example often happens when dealing with an audience that may not be able to visualize the solution being described. This inability to visualize opens the door to all kinds of cognitive biases. For example, in a necessarily complex UI I was working on, I had suggested a fine (2-pixel) line around the active study (in a radiology environment). The description was dismissed, and a myriad of grotesque alternatives were proposed, too severe and problematic to implement since most focused on one aspect without considering the complexity of the whole UI. So I showed, in a simple two-page PowerPoint, how it would appear when a radiologist selected a study. The concept, previously rejected, was unanimously approved (by sales leaders as well as clinical specialists), simply because the images were “real” in terms of exactly how it would look on the screen, with nothing left to the imagination.

The second example comes from an application that can do many things through a central integration point, where each feature has a high level of desirability to overlapping markets. The problem became apparent when questions from the audience would sidetrack the central focus (because it was not clearly defined), and the presentation devolved into a litany of features, few of which were particularly remarkable on their own, and the remarkable ones undifferentiated from the rest. The solution was to present the integration and the central point of focus as the true benefit of all of these features.

The third example is surprisingly common. Here, the functionality is properly and thoroughly presented, but the sample data being used is too small or too random to demonstrate effective results, or not ‘real’ enough to correlate with results that demonstrate the power of the functionality. Suppose the functionality is a home-based IoT climate control system that uses machine learning to learn usage patterns for specific households. If the database being used is not a real, aggregated database of individual home data points, but an artificially generated one based on real data and then randomized for privacy or security reasons, then the resulting analytics will be equally randomized and fairly useless, since it will be impossible to show actual use cases for various demographic (or other) filters. The displays from the algorithms may be dynamic, but they will show no consequential, actionable results. This leads the audience to conclude that the product does something, but that they cannot see how it would show them anything useful, whether basic information or unexpected patterns in specific groups. It ends up being a lot of effort whose result isn’t much better than simply saying, “Believe me, it really works, even though you can’t see it here.”

Also consider:

Another aspect of this 5% user base is that the use-case could be a critical but one-time use for 5% of the population, or a regularly required use for 5% of the population. And this 5%, regardless of which of these two groups you’re addressing, could be a different 5% for each of 19 more features or capabilities. In the first case, the feature can be buried in the preferences, while the latter could live at the second level (two clicks away), with the option of a custom shortcut.

These may seem obvious, but they require diligence, because they are often considered only during a specific phase of design and development when they should be considered all through the process, from ideation through post-production maintenance, version increments, and postmortems.

Summary:

This is a cursory observation of the problem (meant to initiate the conversation). There is no single solution to this issue; rather, the problem should be considered in advance of the presentation, and a proleptic approach then becomes a more effective presentation structure. I personally like to think of it as using the scientific method to create the presentation of the concept: theorize, test, and focus on flaws rather than positives (assume the positive is the initial concept you’re trying to find flaws in before someone else does, or simply to validate the quality of the concept itself), and fix it if possible.


Correcting perspectives in UX design


There are several generally accepted factors that guide UX:

  • Its effectiveness (simplicity, ease, and functionality.)
  • Its lack of obtrusiveness (it gets your attention based on criticality or “on demand” need.)
  • Its implementation of accepted technology vs. new technology within a domain.
  • Its forgiveness of error.

Effectiveness

This is often a catchpoint. The level of simplicity needs to be commensurate with the task at hand; compare contacting someone with performing a diagnostic procedure. The common error here is negative simplification: simplifying a complex process to improve viewer numbers without considering that the process requires many possible branching decisions, each of which may reveal a new set of choices. If a product is a single-function tool, then the MVP (Minimum Viable Product) is easy to define. If, however, the product is a set of tools used to complete a generalized task, then we can often (not always) infer that completing the task may require a constantly changing set of tools due to unknown variables. In the latter case, some tools will be used all the time and others less frequently, but it is important that the less frequently used tools are ALWAYS available, because their need cannot be determined at the beginning of the process.

Part of this issue is determining the importance of the task and its related processes. For example, in surgery most processes are critical even when no unexpected errors or situations arise. A phone call, on the other hand, could be casual and of minor personal value, or of critical need, depending on the situation. A game poses no threats at all, but may anger a user if there are bugs in the process of play. Lastly there is the capture of information. This can be as simple as writing or recording, done only for reference or posterity and not required for the presentation of the information, which may be meant for listening only. The capture in this case is an indirect reinforcement of hearing or seeing the presentation but has no actual effect on the outcome of that information. (This, like many concepts, could easily be rabbit-holed, but I use these ideas for high-level differentiation.)

In terms of ease of use, it first has to be decided whether something should be easy to use. Child-proof safety tops and catches are an example: ease of use should not be applied blindly to everything, since they are intentionally designed to limit use to those who already understand the reason for it. The same applies to professional applications, where complex work requires a complex tool set.

Lastly there is functionality. Many complex processes can be simplified, while for others simplification reduces effectiveness, because removing the decision points that allow on-the-fly adjustments to environmental and other unpredictable variables can produce flawed, if not catastrophic, results.

Obtrusiveness

This factor varies by use case. Without a fully effective AI, there is often no way to determine what should draw the user’s attention to an attribute of a complex system. There may be regulatory, safety, or security requirements that define the minimum parameters for getting the user’s attention, but that still doesn’t address situations where multiple points of attention of similar weight are required concurrently. In those cases it is up to the user to determine which to act on and in what order. Again, unknown variables may necessarily affect the user’s process, and these variables may be presented in ways the tool was not designed for. This doesn’t mean the tool should be altered, as it may already be a highly effective single-function tool; rather, it can be left to the user to determine the order based on their assessment of newly or suddenly presented variables. That is why I said that only a fully effective (and mostly non-existent) AI could handle it.

If we define the rules for when something should be presented to the user based on empirical use cases, and also mitigate the potential issues that may arise if the information is ignored or missed, then implementation becomes far easier. It’s just not that common that those use cases will safely cover the errors that could be problematic.

Then there is the issue of what method is used. Here, we should keep in mind that new technology is far more quickly accepted by the product development community than the world at large. This has to do with issues of confidence (will it work right?), trust (do I want to share this information?) and technological maturity (can I afford it? Or is it too cumbersome?)

Consider the 1950s vision of the future that included the TV picture phone. It was perceived as a marvel of new technology, but no one thought about the fact that people didn’t want to be seen at home in their underwear when they answered a phone in the early morning. It was decades before Skype and FaceTime were used with any regularity, and even then only when people were prepared to use them. They are still mostly used for long-distance calls ‘back home’, perhaps to another country, or for long-distance business interviews and conferences. Even now, thinking of the last three companies I have worked at, I have often seen content being shared but only extremely rarely live video of the people in those conferences. There is a level of privacy that people still hold onto across the globe when it comes to what, and how much, they wish to share in a communiqué.

There are other similar issues with new technology that are foreign to many users and for which there is no standard. Even gestural touch interfaces don’t have a consistent standard yet, even though they became widely available almost a decade ago. Even where cultural pseudo-standards are in place, they are often context-specific: “swipe right” has different connotations depending on the context in which it is used. Even the order of digits on a phone keypad and a calculator keypad is not harmonized (a dial pad has the “1” in the upper left corner while a calculator has the “7” there, even though each layout is congruent with common mental models of data chunking).

Accepted vs. new technology

The touch screen has been around for half a century but was not widely accepted until the last decade, and even that acceptance wasn't instantaneous, particularly given, as mentioned above, the lack of any standardization (beyond implication) of gestural use.

While technologies like VR have great possibilities, there are still issues of acceptance, standardization of use, and problems like motion sickness that have not yet been dealt with effectively.

Additionally, perceptions of any growth area often fail to account for leveling off or even drop-off, whether from market saturation, replacement by a different (even less effective) technology trend, or simply the limitations of a technology once it reaches the point of diminishing returns.

Since I live in Silicon Valley, there is a bubble effect: people see technology all around them and assume it is ubiquitous, when in fact it may only be 'ubiquitous' in high-technology areas and/or areas of high median income. As soon as these inhabitants step outside into a more typical area, they realize that the very technology they depend on is not only unavailable but may also be viewed with suspicion. Consider the rise and fall of Google Glass. While the technology was amazing to early adopters, they hadn't considered that many others saw it as an invasion of their privacy. It wasn't uncommon to hear someone ask a Google Glass wearer "Are you recording me?" and then not believe the answer either way. This is not to say the device was useless, but rather that it was effective only in specific situations and unacceptable in many others.

Other types of feedback systems, from haptics to neurological implants, have promise but are still far too nascent to expect wide acceptance.

Error forgiveness.

This goes far beyond the system errors of the past, and it is an area of constant annoyance. Consider that there are whole internet sites devoted to posting the sometimes hilarious, sometimes embarrassing mistakes of autocorrect. "I like it when it works" is a common cry amongst texting pairs who haven't turned it off. As it stands, autocorrect can speed up communication, but it can also lead to rather severe errors.

While basic machine learning algorithms can address some of this, it would take a deep learning algorithm to learn the cadence and style of an individual's communication, including context, intent (sincerity vs. sarcasm), interests, vocabulary level, and so on, along with the context of the person you're conversing with, since the language between a parent and child and between two intimate partners may be extremely different even though two of those people could be the same person. This makes for complex interactions that can't be ignored.
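As a toy illustration only (nowhere near a deep learning model, and with made-up class and method names), the sketch below shows why the conversation partner matters: keeping a separate frequency model per contact means the same prefix can yield a different suggestion when texting a child than when texting a partner.

```python
# A toy sketch (not a deep learning model) with hypothetical names, showing
# why the conversation partner matters to suggestion/correction.
from collections import defaultdict, Counter

class ContextualSuggester:
    """Keeps a separate word-frequency model per contact, so the same sender
    gets different completions depending on who they are texting."""

    def __init__(self):
        self.models = defaultdict(Counter)  # contact -> observed word frequencies

    def observe(self, contact: str, message: str) -> None:
        self.models[contact].update(message.lower().split())

    def suggest(self, contact: str, prefix: str) -> str | None:
        candidates = [(word, count) for word, count in self.models[contact].items()
                      if word.startswith(prefix.lower())]
        if not candidates:
            return None  # offering no suggestion beats offering a wrong one
        return max(candidates, key=lambda wc: wc[1])[0]

s = ContextualSuggester()
s.observe("kid", "be home by dinner please")
s.observe("partner", "booked dinner downtown for friday")
print(s.suggest("kid", "di"))      # "dinner"
print(s.suggest("partner", "do"))  # "downtown"
```

A real solution would of course need to model intent, sarcasm, and vocabulary as described above; the point here is only that context has to be carried per relationship, not per device.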

 One final note:

Most of my posts are directed at more advanced areas of UX design. It is for this reason that there are not a lot of pictures as samples. I point out examples within my posts that anyone beyond the beginner (and any critical-thinking beginner) will understand. Additionally, I find superfluous imagery tends to belong more with "top ten" lists and other basic concepts in design. I will always use imagery when it simplifies or clarifies a particularly difficult, new, or complex concept. Imagery can also limit the conversation, as any advanced designer will already be good at visualizing how a concept fits their own milieu of design work.

 


Honesty in UX

By Bob Glaser, UX Architect

One of the great inefficiencies in UX design comes from various forms of lack of honesty. This happens in both individual design and collaborative design. I chose that wording because dishonesty implies intent, while "lack of honesty" includes neglect, cognitive biases, and the like, along with intent. While empathy with the user is an essential component, rigorous and sometimes brutal honesty is equally essential to good UX design.

If you can accept kudos for successes, then you must accept blame for failures. Failures generally teach us more than successes. "Best" is a comparative term with no indication of where it falls on the scale between total failure (0, if you will) and perfection (approaching infinity, if you will); it is based only on what currently exists. Even then, if all previous incarnations of a concept rate at, say, 5 and you've hit the 10 mark, you have improved the concept by 100%, but you have no way of knowing whether perfection is 15 or 150,000. This makes it easy to stagnate on the laurels of success.

Failures, however, are concrete and finite. Through rigorous honesty, we can and always should find the root causes. There is almost always more than one cause, so you shouldn't stop a failure investigation once one answer is found. This is a good place to start with the "5 whys" approach. The five whys are well described in many Lean resources, so I won't repeat them here.

I am perfectly willing to make myself unpopular in meetings. When, after I present a solution to a user's problem as successful in user testing, someone internal (within the company*) comments "I don't like it," "It's ugly," "It's too plain," or "It's not exciting," the response ought to be: "Thank you for your opinion, but it is not relevant here. You are not the user, and neither am I." The aggregated feedback from user testing is factual. I am quite aware that both formative and summative user testing may, by the necessity of the product's design and use, require user opinion, but that opinion is part of the aggregate scoring and should be consistent in its testing application, non-leading in its style, and evenly distributed for accurate representation in the aggregate totals. The internal comments are still taken into account, because they may point out an area of potential improvement. This is where we appropriately balance objective results with subjective impressions.

Another place where honesty is needed is in the "big picture" integration of many features in a complex system. An example might be an enterprise system with primary, secondary, and tertiary user groups (and so on), each with varying needs and perhaps different UIs from the same system. Often, particularly in Agile development environments, individual features are addressed in an unintended silo approach that places "common expectation" or "intuitiveness of a single feature/function" over a common UX design in both integration and unification. This approach averages rather than optimizes the UX and UI. This is the enterprise product mentioned above, with multiple functions and multiple types of users, where the hierarchy of users may not correlate to the expected hierarchy in user numbers (e.g., the mistakenly assumed primary user group may be only 10% of the total user base). This is not the fault of the Agile method, but the Agile process allows it to be easily ignored or glossed over. (We must remember that Agile was developed as a software development process without UX as part of its initial design; there are many good articles on methods for incorporating UX into the Agile process.) Designing for the whole rather than feature by feature may seem counter-intuitive, but what it does is help reinforce a common mental model of a complex system.

Next is honesty in priority of function. I have often seen great (and financially disproportionate) effort spent on infrequently used or needed features. I think of this as the "pet project syndrome." Another cause is the insufficiency, or outright failure, to clearly define priorities with weights (based on user needs) for features, in a form rigid enough to create a reasonable goal. The cost of this lack of honesty is the loss of focus on the primary functionality. This is also one of my favorite areas where the "bucket of rationalizations" is brought out to justify poor decisions in the design process: fertile ground for false dichotomies and false equivalencies. Fast decision-making often masks these mistakes and makes them difficult to see until it's too late, frequently as a result of numerous directional changes within the development cycles and heuristic iterative processes prior to user testing.
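One lightweight way to make those priorities rigid enough to resist pet projects is a weighted scoring model. The sketch below is illustrative only; the criteria, weights, and feature scores are assumptions standing in for real user research, not recommended values.

```python
# An illustrative weighted scoring model; criteria, weights, and scores are
# placeholders for data that would come from actual user research.
CRITERIA_WEIGHTS = {
    "frequency_of_use": 0.4,
    "severity_if_missing": 0.4,
    "breadth_of_user_base": 0.2,
}

FEATURES = {
    "export_report":   {"frequency_of_use": 8, "severity_if_missing": 6, "breadth_of_user_base": 7},
    "animated_splash": {"frequency_of_use": 2, "severity_if_missing": 1, "breadth_of_user_base": 9},
}

def priority(scores: dict) -> float:
    # Weighted sum: each criterion's score scaled by its user-need weight.
    return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

for name, scores in sorted(FEATURES.items(), key=lambda kv: -priority(kv[1])):
    print(f"{name}: {priority(scores):.1f}")
# The "pet project" splash screen scores low regardless of internal enthusiasm,
# because the weights encode user need rather than taste.
```

The value of writing the weights down is less the arithmetic than the honesty it forces: a feature can only jump the queue by changing a weight in the open, not by rationalization in a meeting.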

Another area is democracy in design. This is a practice that I feel should be abolished after the first heuristic phase of formative evaluation. After that, the only time this kind of voting should be applied is with a group of well-targeted users who have just tested the product or prototype. Votes taken in a board room are not only of little value, they can be counterproductive and costly. Even in heuristic evaluations, voting can be problematic, since equal weight is given to the UX designer, feature creator, feature implementer/developer(s), system architect, technical product owner, and marketing product owner. Each of these people has an agenda that may be rationalized as user-centric when underneath there may be other reasons (conscious or unconscious); I include the UX designer as potentially influenced here as well. Basically, it comes down to the simple fact that the further you get from the user, the more likely you are to get decisions based on concepts that are not relevant to the user. It is easy to fall into the trap where "these decisions affect the user in the long run" becomes a rationalization for business cutbacks based on time or resources, while the true effect on the user is irrelevant or even counterproductive. This is not to say such decisions should be dismissed, as they may have significant business relevance, but UX should not be folded into them unless there is a measurable and direct 1:1 relationship.

Any good designer knows that it is not their "great taste and discernment" that makes them a great designer, but rather the ability to create something that they may personally find ugly, in concept, aesthetic, or even at the cognitive level, yet realize is ideal for the end user. If you want to create art, become an artist, where your ego is an asset rather than a liability.

Another is the "top 3, 5, or 10" list. This not only smacks of amateurism but also ignores the fact that the number is irrelevant when it comes to an MVP (minimum viable product). The feature list for an MVP should be changed only when a serious deficit or redundancy is discovered, not based on anyone's personal whims (though these whims are typically presented as essential, often with circular logic, specious arguments, or examples that are not properly weighted). I have personally turned down offers to write "top ten things…" articles, since any good professional will know them already. They are useful for the beginner but carry the dangerous flaw of being treated as inviolable rules.

To me, my best work is invisible. My favorite analogy is the phone. When users want to call their mother, their goal is to be speaking with their mother, not to have a fun dialing experience or a beautiful dial/send/connect button. Just to talk to their mother. So the practical and physical tasks needed to accomplish this should be seamless, so intuitive and obvious that the user may not even be aware of performing them. The challenge here is getting the user accustomed to something that is new to them, different, or that requires trust where the familiar approach involved one or more extra steps. A common example is the elimination of the explicit "save" function in Apple's iOS: plenty of people didn't trust it, or would constantly check for it, until they trusted that their input was saved automatically. The caveat being the "Save As" function.

I should point out here that while I believe facts rule over opinion most of the time, I will always concede that our end users are human. There is much more than logic and statistics involved. Culture, education level and intellect, the common mental models of the user base, and other psychological factors have an important place in UX design, as do limitations set by safety, regulatory, or budget constraints. The important thing is to make sure that honesty is not pushed to the sidelines because of these additional variables, but rather is viewed as an important way of dealing with them as well.

* These examples are based on my experiences at over 13 companies (every company I've ever worked for, so this isn't an indictment of any one company but rather a common systemic problem), as well as on examples given to me directly by many other great designers, such as Don Norman.


The disparity of eye vector orientation and proprioception demonstrated with the Oculus VR.


Recently, after playing with my Oculus VR headset across some games and environments with a Galaxy S7 Edge, I spoke with some other users, several of whom complained of becoming nauseous after some use. Unlike roughly 25 to 33% of the population (depending on which statistical data you use), I am not prone to motion sickness, so I hadn't experienced this. I questioned those users and found they had all been prone to motion sickness to some extent. I theorized that the cause was probably that the headset's motion tracking follows head position and not eye direction, which is a major factor in common instances of motion sickness.

For example, someone prone to motion sickness may be able to drive a car on a winding road with no ill effect, because the driver naturally keeps their view on the vector of movement and slightly ahead of the current position (e.g., slightly to the left when turning left, slightly to the right when turning right). As a passenger rather than the driver, however, they are more likely to be looking in some direction other than directly forward. As soon as the individual's direction of view separates from the vector of movement, any mild disorientation is likely to trigger the motion-sickness effect.

The same is true when wearing an Oculus VR headset. The fact that there is no eye-tracking leads to a disparity between the directional viewing vector and head orientation, which will cause motion sickness in those prone to it.

This is noticeable in a game that uses the orientation of the head to create a point in the virtual space "in front" of the user. The point moves only when you move your head; when you move your eyes, it doesn't move. This creates an interesting disparity between a seemingly immersive virtual environment and the way the brain processes visual information using both proprioception (primarily of the head) and visual vectors of orientation. When these are disconnected, as in a virtual environment, there is a blank area of perception that is most easily recognized by those prone to motion sickness.
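To make the mechanics explicit, here is a simplified sketch (in Python, with made-up vector math, not anything from the Oculus SDK) of why that point ignores the eyes, and of what an eye-tracking correction, the second opportunity listed below, could change.

```python
# A simplified sketch of the disparity described above, using illustrative
# vector math rather than the Oculus SDK. The reticle is placed along the
# head's forward vector, so moving only the eyes never moves it.
import numpy as np

def head_gaze_reticle(head_forward: np.ndarray, distance: float = 2.0) -> np.ndarray:
    """Reticle position with head-orientation-only tracking (current behavior)."""
    return head_forward / np.linalg.norm(head_forward) * distance

def eye_corrected_reticle(eye_direction: np.ndarray, distance: float = 2.0) -> np.ndarray:
    """What added eye-tracking could enable: place the point along the actual
    gaze vector (assumed here to already be expressed in world space)."""
    return eye_direction / np.linalg.norm(eye_direction) * distance

head = np.array([0.0, 0.0, 1.0])  # head pointed straight ahead
eyes = np.array([0.3, 0.0, 1.0])  # eyes glance to the side, head unchanged

print(head_gaze_reticle(head))      # stays centered: [0. 0. 2.]
print(eye_corrected_reticle(eyes))  # would follow the glance instead
```

In other words, the visual target tracks proprioception alone, while the eyes report a different vector, which is precisely the mismatch that those prone to motion sickness notice first.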

Two issues then present themselves which can be taken as potential solution opportunities:

1. Could this type of virtual reality be used therapeutically, to see whether motion sickness can be reduced through training in an environment that already separates these two elements?

2. There is also an opportunity to add eye-tracking hardware and associated software to account for this disparity and create a more effective virtual environment.

I will post more on this after further experimentation. There are separate issues with the tactile UI, which I will address in another post. If you have any questions, just ask.

Posted in UI Function, UX Strategy, VR