“The current trend in MMOG’s appears to be make the game so easy and interest-grabbing right out of the gate that even a person with the attention span of a monkey chewing on a flyswatter will be able to keep up and get into the swing of things. Depth of game mechanics is still possible with a system like this, but it needs to be introduced not only clearly, but later in the game, after a player has played enough to be hooked and is willing to put in some extra time to learn about the more intricate game mechanics available to them.” [Ten Ton Hammer]
An interface used to provide information to a user should be playful and fun to use. I have said this over and over again since the early ’90s. We should play our way to the information we want, and the way in which we play should leave breadcrumbs for the system, which it can use to increase our enjoyment of the time we spend with it.
Consider the three types of interface this can apply to: virtual, physical, and a cross between the two.
In the first instance, let’s restrict our brief here to a simple web page or application where point-and-click or tab/enter navigation is used to move through pages of information. Given a number of pages of information and a simple navigation system, it is possible to track a user’s movements through the navigation and, from those learned movements, dynamically adjust the navigation system to best suit the individual user. A web page, given a Western visual understanding of space, can be served with a simple, easy-to-understand navigation style and then adjusted over time to suit the individual once a pattern of use emerges. This information can be stored and retrieved on future visits, letting us apply the navigation preferences of individual users and make the experience more intuitive. Most often, though, this information is only used to track metrics across multiple users or to drive user-controlled preferences.
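As a rough sketch of the tracking side, assuming a plain browser page and using localStorage as the per-visitor store (the storage key and the three style names are made-up examples, not anything from a real site):

```typescript
// Sketch: count clicks per navigation style and persist them per visitor.
type NavStyle = "sideNav" | "dropdown" | "images";

interface NavProfile {
  clicks: Record<NavStyle, number>;
  lastVisit: number; // epoch ms; useful later for aging out stale preferences
}

const KEY = "navProfile"; // hypothetical storage key

function loadProfile(): NavProfile {
  const raw = localStorage.getItem(KEY);
  return raw
    ? (JSON.parse(raw) as NavProfile)
    : { clicks: { sideNav: 0, dropdown: 0, images: 0 }, lastVisit: Date.now() };
}

function recordClick(style: NavStyle): void {
  const profile = loadProfile();
  profile.clicks[style] += 1;
  profile.lastVisit = Date.now();
  localStorage.setItem(KEY, JSON.stringify(profile));
}

function preferredStyle(): NavStyle {
  const { clicks } = loadProfile();
  // Pick whichever style this visitor has used most often so far.
  return (Object.keys(clicks) as NavStyle[]).reduce((a, b) =>
    clicks[a] >= clicks[b] ? a : b
  );
}
```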
Let’s say a user enters a used-vehicle sales site. The site provides a selection of vehicle types (car, truck, motorcycle, etc.) and three methods of finding the vehicle you want: a navigation panel on the left, a set of drop-down lists, and a set of images. This simple offering of three distinct ways of accessing the information accommodates a wide range of users. Visual users will usually opt for the images, ‘I know what I want’ users will hit the drop-down lists, and ‘lookie-loos’ who like to see lots of data and then make a choice will tend to use the left-side navigation.
From a single click we have information about the user and can act on it when serving up the next page, providing a custom interface that feels intuitive and fun. We also have the ability, on subsequent pages, to continually refine the interface for the individual.
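Building on the sketch above, wiring that single click could look something like this; the element ids are invented for illustration:

```typescript
// Sketch: one click on any of the three navigation regions updates the profile.
// The next page can then call preferredStyle() and lead with the matching layout.
function wireNavTracking(): void {
  const regions: Array<[string, NavStyle]> = [
    ["left-nav", "sideNav"],
    ["vehicle-dropdowns", "dropdown"],
    ["vehicle-images", "images"],
  ];
  for (const [id, style] of regions) {
    document.getElementById(id)?.addEventListener("click", () => recordClick(style));
  }
}
```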
If a user uses the standard expand/collapse left-side navigation, they are more likely the type of person interested in an expanded list of choices and a back/forward, straight-line experience, but they might also be interested in an explanation of some of the lesser-known options to help them decide. If a user selects cars, the system provides navigation options for types of cars (sedan, coupe, convertible…) and can hold information on the differences between them. If the user mouses over the sedan option and spends some amount of time reading the mouse-over, then does the same for coupe but not for convertible, one can assume they were interested in the definitions of ‘sedan’ and ‘coupe’ and use that information on the next page: should they select sedan or coupe, provide further or more detailed information about the one they selected. But if they select convertible without holding the mouse over it, we can assume they know what one is and are more likely just looking for the next level of resolution in their selection of a convertible (make/model, etc.).

Of course we track each of our assumptions and adjust for wrong ones. If, in our example, we provide a paragraph of definition on the next page and the user scrolls past it quickly, we can revise our assumption that they wanted a definition and stop devoting prime screen real estate to it, perhaps trying slightly deeper mouse-overs with a link to more information instead.
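A minimal sketch of the dwell-time side of this, again with made-up element ids and a made-up threshold:

```typescript
// Sketch: measure how long the pointer dwells over each option's mouse-over.
// A long dwell suggests the visitor wanted the definition.
const dwell: Record<string, number> = {}; // option id -> total ms hovered

function trackDwell(optionId: string): void {
  const el = document.getElementById(optionId);
  if (!el) return;
  let enteredAt = 0;
  el.addEventListener("mouseenter", () => { enteredAt = performance.now(); });
  el.addEventListener("mouseleave", () => {
    dwell[optionId] = (dwell[optionId] ?? 0) + (performance.now() - enteredAt);
  });
}

const DWELL_THRESHOLD_MS = 1500; // assumption: ~1.5 s counts as "reading"

function wantsDefinition(optionId: string): boolean {
  return (dwell[optionId] ?? 0) > DWELL_THRESHOLD_MS;
}
```

On the following page, a similar timer on the definition block (a scroll listener or IntersectionObserver, say) would tell us whether the visitor actually read it, letting us walk the assumption back.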
While a visual user understands pictures more easily, they are often not given pictures to select from at the next level of navigation. Here is where we have the option to learn from the user and serve up the interface on a custom basis. If, as in the example above, the user selects cars through the image selector, then on the next page images of each body style (sedan, coupe, convertible) can be shown. Again, these can carry the same script for tracking time on a mouse-over, used as above. Following the same pattern, the next page for make/model can show company logos or images of types of convertibles.
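Tying the two earlier sketches together, the next level might be rendered to match the learned preference; the markup here is only illustrative:

```typescript
// Sketch: render the next level of choices as images or as a text list,
// depending on the visitor's learned preference, and keep instrumenting
// each option with the same dwell-time tracking.
interface BodyStyle { name: string; imageUrl: string; }

function renderNextLevel(options: BodyStyle[], container: HTMLElement): void {
  if (preferredStyle() === "images") {
    container.innerHTML = options
      .map(o => `<img src="${o.imageUrl}" alt="${o.name}" id="${o.name}">`)
      .join("");
  } else {
    container.innerHTML =
      `<ul>${options.map(o => `<li id="${o.name}">${o.name}</li>`).join("")}</ul>`;
  }
  options.forEach(o => trackDwell(o.name));
}
```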
In the case of the drop-down list user, the nature of these lists means that selections from multiple lists give the system enough information to take the user straight to their final page, so the next page is not the same as in the other two navigation styles, which still require user input.
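Which is to say, the drop-down path can skip the intermediate pages entirely; something like this, with an invented URL scheme:

```typescript
// Sketch: three drop-down selections are enough to build the final results page.
function submitDropdowns(type: string, bodyStyle: string, make: string): void {
  recordClick("dropdown");
  const query = new URLSearchParams({ type, bodyStyle, make });
  window.location.assign(`/results?${query.toString()}`);
}
```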
We can also track other things the user does or doesn’t use and make them more or less prevalent on the page. For example, if a user never uses the loan calculator, never searches for trucks, or always leaves the page for a bank’s web site, we can simplify and clean up the interface, providing easier access to the items the user does use over and over again, or to related items. Think about that last example: if a user always goes to a banking web site, why not provide a link next time that takes them there, or a loan calculator that pulls rates directly from their bank of choice?
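A small sketch of that pruning logic, with feature names and thresholds that are purely illustrative:

```typescript
// Sketch: count feature use per visitor and de-emphasise what is never touched.
interface FeatureStats { [feature: string]: number; }

function bumpFeature(stats: FeatureStats, feature: string): FeatureStats {
  return { ...stats, [feature]: (stats[feature] ?? 0) + 1 };
}

function layoutHints(stats: FeatureStats, visits: number) {
  return {
    // Never opened the loan calculator in five visits? Tuck it away.
    hideLoanCalculator: (stats["loanCalculator"] ?? 0) === 0 && visits > 5,
    // Leaves for a bank site more often than not? Promote a direct link.
    promoteBankLink: (stats["leftForBankSite"] ?? 0) / Math.max(visits, 1) > 0.5,
  };
}
```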
In our second instance, this type of mouse-over for information is not something we use in a physical interface. Labels or icons near a button are used to convey the result of a press action. Along with texture, colour, material, lights, and a number of other visual cues, the resulting mechanical action of the object is the cue that you have achieved your goal through your interface with it; i.e., your drink comes crashing into the receptacle bin.
We can, though, make use of customization in physical interfaces. Let’s take as our example a set of actions performed by someone interfacing with a device in an auto assembly plant. Say the user is installing an object into the vehicle, and affixing this object requires inserting 6 bolts and 6 screws. Using RFID or a swipe card, we can identify individual users and store preference information about them and their methods of work. If user one likes to install the six bolts and then the six screws, we can track this and have the machine switch bits for them in that order. If user two finds it easier to install 2 bolts, then the 6 screws, then the remaining bolts, the machine can adjust its bit changing to accommodate that user’s interface with the item. Given an articulating head on the driver, another parameter can be stored and used, such as left- or right-hand use: bolts 1 and 2 right hand, screws 1~4 left hand, screws 5 and 6 right hand, bolts 3~5 right hand, and bolt 6 left hand. The articulation angle of the head, left or right, and the bit selection can be changed automatically by the system to aid the actions once the user’s pattern is learned. Learning this pattern is something the device does over many repetitions of the action, in the same way the user finds the best way to accomplish the job at hand; or the pattern has to be input for each user by watching their current method of work. In either case, adjustments must be easy for the user to make on the fly if the interface is to increase productivity and not be a hindrance. Perhaps the user finds the angle of attack of the articulating head too different and needs to adjust how much it articulates over time for the best fit with their learned motor skills, or perhaps with an articulating head it is easier to do the task in a different pattern.
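One way to picture the data behind this: a per-worker profile keyed by badge id, with a simple “adopt after N identical repetitions” rule. Everything here (the step model, the adoption threshold) is an assumption for illustration, not a real controller API:

```typescript
// Sketch: per-worker fastening preferences for the driver head.
type Hand = "left" | "right";

interface FastenStep {
  fastener: "bolt" | "screw";
  index: number;   // bolt 1..6 or screw 1..6
  hand: Hand;      // which side the articulating head should favour
  bit: string;     // bit to have loaded before this step
}

interface WorkerProfile {
  badgeId: string;          // from the RFID or swipe card
  sequence: FastenStep[];   // last observed order of work
  confirmations: number;    // how many cycles in a row matched it
}

const ADOPT_AFTER = 5; // assumption: adopt a pattern after 5 identical cycles

function observeCycle(profile: WorkerProfile, observed: FastenStep[]): WorkerProfile {
  const same = JSON.stringify(profile.sequence) === JSON.stringify(observed);
  return {
    ...profile,
    sequence: observed,
    // Reset the count whenever the worker changes their pattern, so the
    // machine keeps following the person rather than the other way around.
    confirmations: same ? profile.confirmations + 1 : 1,
  };
}

function nextAction(profile: WorkerProfile, stepNumber: number): FastenStep | null {
  // Only pre-position the bit and head angle once the pattern looks stable.
  if (profile.confirmations < ADOPT_AFTER) return null;
  return profile.sequence[stepNumber] ?? null;
}
```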
Our third instance is a cross interface, where we use a hard button to create action in a virtual space. Here it could be interesting to have a finger-over action for the button, via some form of sensor, that when activated provides more information to the user than a label alone does. How about this: you lightly touch a button and the label changes to a scrolling text describing what will happen if you press it. This would be a great use of OLED or e-ink labels. What if we provided this option in an elevator? As I finger over the floor buttons during my journey to my known floor, I am presented on a screen with the attractions of that floor. Think of the elevator operator of days gone by: third floor, ladies’ undergarments, fragrances, chocolates and children’s clothing, everyone out… Now, I may not make use of the button at that time, but knowing what I have passed gives me the option to revisit it at my leisure, in the same way that passing an interesting store sign in my car may make me turn in on my return journey.
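The logic behind such a finger-over is tiny; assuming the touch sensor and display are abstracted away as an event and a callback (both invented here):

```typescript
// Sketch: a light touch (not a press) on a floor button pushes that floor's
// attractions to a small OLED/e-ink label or screen.
const floorDirectory: Record<number, string> = {
  3: "Ladies' undergarments, fragrances, chocolates, children's clothing",
};

function onFingerOver(floor: number, show: (text: string) => void): void {
  const info = floorDirectory[floor];
  if (info) show(`Floor ${floor}: ${info}`); // scrolled across the label
}
```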
In the first two examples, time spent away from the interface causes the user to have to relearn their methods. Given that one cannot know how a person’s way of understanding will change over a given length of time away, we cannot hold interfaces to strict formats for individuals with long lapses between uses. We also cannot revert a custom interface to a standard one without pissing off the user to some extent. Think about what happens each time you have to reinstall your OS. Granted, all the bugs are gone, which is of course expected, but you have to spend weeks redoing all the standard settings you customized to your preference ages ago and got used to using. As designers we have to track the amount of usage and the time between usages and make best guesses as to what will change.
In the first instance above, let’s quickly consider a user who looks at cars day in and day out until they buy one, then doesn’t look again for a few years. Day in and day out we can track all the nuances of a user’s usage pattern and accommodate them, but after a few years only the highest level of abstraction can be counted on. And further, perhaps they changed banking institutions.
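One way to express that “best guess” is to age the stored profile by time away, keeping only coarser preferences the longer the gap. This builds on the earlier sketches; the thresholds are illustrative guesses, not tested values:

```typescript
// Sketch: age out stored preferences based on how long the visitor was away.
interface StoredProfile extends NavProfile {
  dwell?: Record<string, number>;     // fine-grained: mouse-over assumptions
  features?: Record<string, number>;  // fine-grained: loan calculator, bank link...
}

const FINE_GRAIN_TTL_DAYS = 30;    // detailed assumptions go stale quickly
const COARSE_GRAIN_TTL_DAYS = 730; // after a couple of years, start over

function ageProfile(profile: StoredProfile): StoredProfile {
  const daysAway = (Date.now() - profile.lastVisit) / 86_400_000;
  if (daysAway > COARSE_GRAIN_TTL_DAYS) {
    // Treat them as a brand-new visitor.
    return { clicks: { sideNav: 0, dropdown: 0, images: 0 }, lastVisit: Date.now() };
  }
  if (daysAway > FINE_GRAIN_TTL_DAYS) {
    // Keep only the highest level of abstraction: their preferred navigation style.
    return { clicks: profile.clicks, lastVisit: Date.now() };
  }
  return { ...profile, lastVisit: Date.now() };
}
```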
In our second case, the physical motions have to be relearned. We have to take into consideration that physical motion, once learned, is very hard to unlearn and comes back very quickly, so providing the mechanical movements which corresponded with those physical motions will aid in this. But we must consider why the user left the work at hand. What if they were injured outside of work, or worse, at work, and now have to perform their job in a different way due to a lasting physical condition? Is there a way we can adjust the mechanical movements of the device to aid in their physical therapy? Again, the ability of the user to make adjustments on the fly will aid their ability to perform in a manner best suited to their work pattern.
With our final case, barring some method of tracking individuals, customization of this type of interface would be very difficult. I am sure you can think up some examples or methods of your own to accomplish ease of use: things like private-floor elevator cards, or cell phones used to listen to audio associated with public-transit LCD screens.
Although this only touches the surface of how intuitive interface design can make interaction with a device more like playing a game and flatten the learning curve for that device, I hope it inspires you to think a bit about how far you can experiment the next time you are given the task of creating an interface.