



















Offer selection from a list where possible. Avoid switching between the keyboard and the mouse. Use default values. Compatibility of data entry with data display: the format of data-entry information should be closely linked to the format of the displayed information.

Clear and effective labeling of buttons and data-entry fields: Use consistent labeling. Distinguish between required and optional data entry. Place labels close to the data-entry field. Arrange the sequence of data-entry and selection fields in a natural scanning and hand-movement direction; a placement that works against this direction is likely to produce frequent erroneous input.

Design of form and dialog boxes: Most visual-display layout guidelines also apply to the design of form and dialog boxes. Situations become more complicated when other forms of input are also used, such as touch, gesture, three-dimensional (3-D) selection, and voice; there are separate guidelines for incorporating such input modalities. The Web Content Accessibility Guidelines (WCAG) explain how to make web content more accessible to people with disabilities.

Web content generally refers to the information in a web page or web application, including text, images, forms, and sounds (Figure 2). The following is a summary of the guidelines:

1. Perceivable: Provide text alternatives for nontext content. Provide captions and other alternatives for multimedia. Create content that can be presented in different ways, including by assistive technologies, without losing meaning. Make it easier for users to see and hear content (for example, the colors of the background and foreground text can be changed).

2. Operable: Make all functionality available from a keyboard. Give users enough time to read and use content. Do not use content that causes seizures. Help users navigate and find content.

3. Understandable: Make text readable and understandable. Make content appear and operate in predictable ways. Help users avoid and correct mistakes.

4. Robust: Maximize compatibility with current and future user tools.

Many conventional principles apply equally to mobile networked devices (Figure 2): 1. Fast status information, especially with regard to network connection and services. 2. Minimize typing and leverage varied input hardware.

3. Keep the user informed of his or her actions. 4. Large hit targets for easy and correct selection and manipulation. 5. Enable shortcuts. Another consideration concerns the limited and differing sizes within a family of handheld devices (i.e., varying screen sizes and orientations). Make sure that your app consistently provides a balanced and aesthetically pleasing layout by adjusting its content to varying screen sizes and orientations. Panels are a great way for your app to achieve this.

They allow you to combine multiple views into one compound view when a lot of horizontal screen real estate is available and to split them up when less space is available. For instance, Apple has published a design guideline document [8] that details how application icons should be designed and stylized: 1. Investigate how your choice of image and color might be interpreted by people from different cultures.

2. Create different sizes of your app icon for different devices. When iOS displays the app icon on the home screen of a device, it automatically adds the following visual effects: (a) rounded corners, (b) a drop shadow, and (c) a reflective shine.

These guidelines promote organizational styling and identity and, ultimately, consistency in user interfaces. Franklin Gothic is used only for text over a certain point size; it is meant for headers and should never be used for body text.

Tahoma should be used at small point sizes (e.g., 8 and 9 point). Trebuchet MS (bold, 10 point) is used only for the title bars of Windows (Figure 2). Similar to visual icons, which must capture the underlying meaning of whatever they represent and draw attention for easy recognition, earcons should be designed to be intuitive.

They suggest three types of earcons, namely, those that are (a) symbolic, (b) nomic, and (c) metaphorical. Symbolic earcons rely on social convention, such as applause for approval; nomic ones are physical, such as a door slam; and metaphorical ones are based on capturing similarities, such as a falling pitch for a falling object [10]. We take a more in-depth look at the aural modality in Chapter 3.

The categories include design guidelines for manual control, spoken input and output, visual and auditory display, navigation guidance, and cell phone considerations, to name just a few (Figure 2). The use of send to make a connection and power to turn a phone on and off are notable inconsistencies.

Voice dialog: Verbal commands and button labels should use the same terms. Commands of interest include dial, store, recall, and clear. This is an instance of the consistency principle.

Manual dialing: The store and recall buttons, used for similar functions, should be adjacent to each other. This is an instance of the grouping principle. (Source: Green, P.) The following is a guideline under the checkout-process section concerning the steps of a subtask (the checkout process): checkout should start at the shopping cart, followed by the gift options or shipping method, the shipping address, the billing address, payment information, order review, and finally an order summary.

The checkout process is thus linear. Many guidelines are still at quite a high level, similar to the HCI principles, and leave the developer wondering how to actually apply them in practice. Another reason is that there are simply too many different aspects to consider, especially for a large-scale system. Sometimes the guidelines can even conflict with each other, which requires prioritizing on the part of the designer.

For instance, it can be difficult to give contrast to an item to highlight its importance when one is restricted to using certain colors. Another example might be an attempt to introduce a new interface technology: while the new interface may have been proven effective in the laboratory, it may still require significant familiarization and training on the part of the user.

It is often the case that external constraints such as monetary and human resources restrict sound HCI practice. One must realize that all designs involve compromises and tradeoffs.

Experienced designers understand the ultimate benefits and costs of practicing sound HCI design. In Chapter 3, we will study cognitive and ergonomic knowledge (more theoretical), which, along with the principles and guidelines we have learned so far (more experiential), will be applied to HCI design.

ISO.
Tidwell, Jennifer. Designing Interfaces.
Leavitt, Michael O. Research-Based Web Design and Usability Guidelines.
Smith, Sidney L., and Jane N. Mosier. Guidelines for Designing User Interface Software. Bedford, MA: MITRE Corporation.
Caldwell, Ben, Michael Cooper, Loretta Guarino Reid, and Gregg Vanderheiden, eds. Web Content Accessibility Guidelines (WCAG) 2.0. W3C.
Guidelines for Mobile Interface Design.
Multi-Pane Layouts.
Windows XP Visual Guidelines. Microsoft Corporation.
Blattner, Meera M., Denise A. Sumikawa, and Robert M. Greenberg. Earcons and icons: Their structure and common design principles. Human-Computer Interaction 4(1): 11-44.
Green, P. Suggested Human Factors Design Guidelines for Driver Information Systems.
Kalsbeek, Maarten. Interface and Interaction Design Patterns for E-commerce Checkouts.

We will look at the computer aspects of HCI design in the second part of this book.

In this chapter, we take a brief look at some of the basic human factors that constrain the extent of this interaction. In Chapters 1 and 2, we studied two bodies of knowledge for HCI design, namely (a) high-level and abstract principles and (b) specific HCI guidelines.

To practice user-centered design by following these principles and guidelines, the interface requirements must often be investigated, solicited, derived, and understood directly from the target users through focus interviews and surveys.

However, it is also possible to obtain a fairly good understanding of the target user from knowledge of human factors. Human-factors knowledge will particularly help us design HCI in the following ways: model users and tasks at a high level, and evaluate interaction models and interface implementations to explain or predict their performance and usability. For instance, a goal of a word-processing system might be to produce a nice-looking document as easily as possible.

This problem-solving process epitomizes the overall information-processing model. As a lower-level part of the information-processing chain (more ergonomic), we take a closer look at these and how they relate to HCI in Section 3. Figure 3 illustrates the process: a hierarchical plan (Figure 3) is formed, and a number of actions or subtasks are identified in the hope of solving the individual subgoals, considering the external situation.

By enacting the series of these subtasks to solve the subgoals, the top goal is eventually accomplished. Note that enacting the subtasks does not guarantee their successful completion (i.e., the subgoals may not be met); thus, the whole process is repeated by observing the resulting situation and revising the plan. Note that a specific interface may be chosen to accomplish the subtasks at the bottom, and that in a general hierarchical task model, certain subtasks need to be applied in series while others may need to be applied concurrently.

One can readily appreciate this from the simple example in Figure 3. The interaction model must represent, as much as possible, what the user has in mind, especially what the user expects must be done (the mental model) in order to accomplish the overall task.

The interface selection should be done based on ergonomics, user preference, and other requirements or constraints. Finally, the subtask structure can lend itself to the menu structure, and the actions and objects to which the actions apply can serve as the basis for an object-class diagram for an object-oriented interactive software implementation.

In the remainder of this section and in Section 3, we discuss cognitive aspects; ergonomic aspects are discussed in Section 3. Such a phenomenon would be the result of an interface based on an ill-modeled interaction.

Memory capacity also greatly influences interactive performance. As shown in Figure 3, the short-term memory is also sometimes known as the working memory, in the sense that it contains changing memory elements meaningful for the task at hand, or chunks.

Humans are known to remember about seven (plus or minus two) chunks, lasting only a very short amount of time [2]. Imagine an interface with a large number of options or menu items: the user would have to rescan the available options a number of times to make the final selection. In an online purchasing system, the user might not be able to remember all of the relevant information, such as items purchased, delivery options, credit card chosen, billing address, usage of discount cards, and so on.
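The chunking limit suggests a concrete design tactic: break long identifiers into small groups so that each group fits in one working-memory chunk. A minimal sketch (the group size of 4 is an illustrative choice, not from the source):

```python
def chunk(digits: str, size: int = 4) -> str:
    """Split a long digit string into small groups ("chunks") so that
    each group can be held as one working-memory item."""
    groups = [digits[i:i + size] for i in range(0, len(digits), size)]
    return " ".join(groups)

# A 16-digit card number shown as four 4-digit chunks is far easier
# to read back and verify than an unbroken run of digits.
print(chunk("4111222233334444"))  # -> "4111 2222 3333 4444"
```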

Retrieving information from the long-term memory is a difficult and relatively time-consuming task. Therefore, an interactive system should minimize reliance on long-term memory retrieval (e.g., by letting the user recognize options rather than recall commands or codes). Memory-related performance issues are also important in multitasking. Many modern computing settings offer multitasking environments, and switching among concurrent tasks can bring about overall degradation in task performance in many respects [3]. Based on typical operator-time figures and a task-sequence model, one might be able to quantitatively estimate the time taken to complete a given task and, therefore, make an evaluation with regard to the original performance requirements.

Tables 3 list typical operator times (Source: Boff, K., L. Kaufman, and J. Thomas). Table 3 compares two ways of carrying out the same task as sequences of keystroke-level operators:

Method 1 (mouse and menu): 1. Point to file icon (P). 2. Click mouse button (BB). 3. Point to file menu (P). 4. Press and hold mouse button (B). 5. ... 6. Release mouse button (B). 7. ...

Method 2 (keyboard shortcut): 1. Point to file icon (P). 2. Click mouse button (BB). 3. Move hand to keyboard (M). 4. Hit command key: command-T (KK). 5. Move hand back to mouse (H).

The GOMS evaluation methodology starts with the same hierarchical task modeling we have described in Section 3. Once a sequence of subtasks is derived, one might map each subtask to a specific operator in Table 3. With the pre-established performance measures (Table 3), the total task time can be estimated, and different operator mappings can be tried and compared in terms of their performance.

Even though this model was created nearly 30 years ago, the figures are still remarkably valid. GOMS models for other computing environments have been proposed as well [8]. GOMS is limited in that it can only evaluate task performance, while there are many other criteria by which an HCI design should be evaluated. Obviously, some inaccuracies can be introduced in the use of the mental operators during the interaction-modeling process.
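The keystroke-level calculation described above can be sketched in a few lines. The operator times below are the commonly cited Card-Moran-Newell estimates and are assumptions here; the book's own tables should take precedence where they differ. (Note that in the standard KLM, H denotes moving the hand between devices and M denotes mental preparation.)

```python
# Commonly cited KLM operator times in seconds (assumed values; the
# original Card, Moran, and Newell tables are the authoritative source).
KLM_TIMES = {
    "K": 0.2,    # keystroke (average skilled typist)
    "P": 1.1,    # point with mouse to a target
    "H": 0.4,    # home hands between mouse and keyboard
    "M": 1.35,   # mental preparation
    "B": 0.1,    # press or release mouse button
}

def estimate_time(operators: str) -> float:
    """Sum operator times for a space-separated KLM sequence.
    A compound symbol such as 'BB' (click) expands to two B operators."""
    total = 0.0
    for op in operators.split():
        for symbol in op:            # expand e.g. "BB" -> "B", "B"
            total += KLM_TIMES[symbol]
    return round(total, 2)

# Comparing a mouse/menu method with a keyboard-shortcut method:
print(estimate_time("P BB P B B"))    # point, click, point, press, release
print(estimate_time("P BB H K K H"))  # point, click, to keyboard, 2 keys, back
```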

We now shift our focus to raw information processing. First we look at the input side, i.e., sensation. Humans are known to have at least five senses; among them, those most relevant to HCI (at least for now) are the modalities of visual, aural, haptic (force feedback), and tactile sensation. Taking external stimulation or raw sensory information (sometimes computer generated) and then processing it for perception is the first part of any human-computer interaction.

Another aspect of sensation and perception is attention, that is, how to make the user selectively (consciously or otherwise) tune in to a particular part of the information or stimulation. Note that attention must occur and be modulated within awareness of the larger task(s). While we might tune in to certain important information, we often still need an understanding, albeit approximate, of the other activities or concurrent tasks, as in multitasking or parallel processing of information.

In the following discussion, we examine the processes of sensation and perception in the four major modalities and the associated human capabilities in this regard. Just as cognitive science was useful in interaction and task modeling, this knowledge is essential in sound interface selection and design. As already mentioned, the parameters of the visual interface design and display system will have to conform to the capacity and characteristics of the human visual system.

In this section, we review some of the important properties of the human visual system and their implications for interface design. First we take a look at a typical visual interaction situation, as shown in Figure 3. The shaded area in Figure 3 represents the user's field of view, and the dotted line represents the viewing distance. Viewing distance varies with the user and the situation; however, one might be able to define a nominal and typical viewing distance for a given task or operating environment.

The shaded area illustrates the horizontal field of view (shown smaller than actual for illustration purposes), while the dashed line marks the field of view offered by the display. The display offers different fields of view depending on the viewing distance (dotted line in the middle). The oval shape in the display represents the approximate area for which high details are perceived through the corresponding foveal area of the user's eyes.


This is also synonymous with the power of sight, which differs across people and age groups. Note that the display FOV is more important than the absolute size of the display: a distant large display can have the same display FOV as a close small display, even though the two may produce different viewing experiences.
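The display field of view follows from simple geometry: a flat display of width w viewed from distance d subtends an angle of 2*atan(w / 2d). A quick sketch (the example screen sizes and distances are illustrative):

```python
import math

def display_fov_deg(width_cm: float, distance_cm: float) -> float:
    """Horizontal field of view (in degrees) subtended by a flat display
    of the given width seen from the given viewing distance."""
    return math.degrees(2 * math.atan(width_cm / (2 * distance_cm)))

# A large distant display and a small close one can subtend the same FOV:
print(round(display_fov_deg(100, 200), 1))  # ~100 cm screen at 2 m
print(round(display_fov_deg(30, 60), 1))    # ~30 cm screen at 60 cm
```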

If possible, it is desirable to choose the most economical display (not necessarily the biggest or the one with the highest resolution) with respect to the requirements of the task and the typical user characteristics. The oval region in Figure 3 corresponds to the fovea, where densely packed cones provide detailed central vision. On the other hand, the rods are distributed mainly in the periphery of the retina and are responsible for motion detection and less detailed peripheral vision.

While details may not be sensed, the rods contribute to our awareness of the surrounding environment. Unlike human perception, most displays have uniform resolution. However, if the object details can be adjusted depending on where the user is looking, or based on what the user may be interested in (Figure 3), display resources can be spent where they matter most. We may assess by this criterion the utility of a large, very-high-resolution display system such as the one shown in Figure 3: is it really worth the cost? (From Ni, T.)

Consequently, it can be argued that it is more economical to use a smaller high-resolution display placed at a close distance. Interestingly, Microsoft Research recently introduced a display system called the IllumiRoom [9], in which a high-resolution display is used in the middle, while a wide, low-resolution peripheral projection display provides high immersion (Figure 3).

A color can be specified by the composition of the amounts contributed by the three fundamental colors, and also by hue (the dominant wavelength), saturation (the relative difference between the energy of the dominant wavelength and the rest of the light), and brightness value (the total amount of light energy) (Figure 3).
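This decomposition of a color into hue, saturation, and brightness value can be computed from RGB components; Python's standard colorsys module implements it (as HSV, where V corresponds to the brightness value):

```python
import colorsys

# Pure red: hue 0, fully saturated, full brightness value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)  # 0.0 1.0 1.0

# Mixing in white keeps the hue and value but lowers the saturation.
h2, s2, v2 = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)
print(h2, s2, v2)  # 0.0 0.5 1.0
```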

Contrast in brightness is measured in terms of the difference or ratio of the light energies of two or more objects, and a minimum foreground-to-background brightness contrast ratio is recommended. Color contrast is defined in terms of differences or ratios in the dimensions of hue and saturation. It is said that brightness contrast is more effective for detail perception than color contrast. All of these low-level features are processed before they finally reach our consciousness. (Figure 3: hue is the dominant wavelength; saturation relates the energy of the dominant wavelength to the energy of white light; brightness is the total light energy, shown on an energy-versus-wavelength plot.)
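Brightness (luminance) contrast between foreground and background can be checked programmatically. A sketch of the relative-luminance and contrast-ratio computation as defined in WCAG 2.0 (the 4.5:1 threshold is WCAG's criterion for normal text, not a figure from this chapter):

```python
def relative_luminance(rgb):
    """WCAG 2.0 relative luminance from 8-bit sRGB components."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L1 + 0.05) / (L2 + 0.05), with the lighter color as L1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
print(contrast_ratio((118, 118, 118), (255, 255, 255)) >= 4.5)  # True
```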

(From Hemer, M.) Pre-attentive features are composite, primitive, and intermediate visual elements that are automatically recognized before entering our consciousness, typically within 10 ms of entering the sensory system [12]. These features may rely on relative differences in color, size, shape, orientation, depth, texture, motion, and so on.

At a more conscious level, humans may universally recognize certain high-level complex geometric shapes and properties as a whole and understand the underlying concepts. (From Ware, C.) The actual form of sound feedback can be roughly divided into three types: (a) simple beep-like sounds, (b) short symbolic sound bites known as earcons, and (c) voice.

As we did for the visual modality, we will first go over some important parameters of human aural capacity and the corresponding aural display parameters. It is instructive to know the decibel levels of different sounds as a guideline in setting the nominal volume for sound feedback (Table 3). The dominant frequency components determine various characteristics of sounds, such as the pitch. Humans can hear sound waves with frequencies between about 20 and 20,000 Hz [13]. Phase differences occur, for example, because our left and right ears may be at slightly different distances from the sound source; as such, phase differences are also known to contribute to the perception of spatialized sound, such as stereo.

When using aural feedback, it is important for the designer to set these fundamental parameters properly. A general recommendation is that the sound signal should lie in the lower part of the audible frequency range and be composed of at least four prominent harmonic frequency components (frequencies that are integer multiples of one another) [14]. Aural feedback is most commonly used for intermittent alarms.
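The harmonic-components recommendation can be illustrated by synthesizing a signal from a fundamental plus integer-multiple overtones, using only the standard library. The 440 Hz fundamental and equal component weights are illustrative assumptions, not values from the source:

```python
import math

def harmonic_tone(f0=440.0, n_harmonics=4, sample_rate=44100, duration=0.1):
    """Samples of a tone containing n_harmonics components at integer
    multiples of the fundamental f0, normalized to the range [-1, 1]."""
    n = int(sample_rate * duration)
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(math.sin(2 * math.pi * f0 * k * t)
                for k in range(1, n_harmonics + 1))
        samples.append(s / n_harmonics)  # keep amplitude within [-1, 1]
    return samples

tone = harmonic_tone()
print(len(tone))  # 4410 samples for 0.1 s at 44.1 kHz
```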

However, overly loud (i.e., startling or annoying) alarms should be avoided. Instead, other techniques can be used to attract attention and convey urgency, such as repetition, variations in frequency and volume, gradual onset, and aural contrast to the background ambient sound.

First, sound is effectively omnidirectional. However, as already mentioned, it can also be a nuisance as a task interrupter. Making use of contrast is possible with sound as well: for instance, auditory feedback would require roughly a 15-dB difference from the ambient noise to be heard effectively. Differentiated frequency components can be used to convey certain information. Continuous sound is somewhat more subject to habituation (i.e., it fades from attention over time).
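The decibel figure above maps to a sound-pressure ratio through 20*log10: a signal 15 dB above ambient noise has roughly 5.6 times its pressure amplitude. A quick check:

```python
def db_to_amplitude_ratio(db: float) -> float:
    """Sound-pressure amplitude ratio corresponding to a level
    difference in decibels (dB = 20 * log10(p1 / p2))."""
    return 10 ** (db / 20)

# A signal 15 dB above ambient noise has ~5.6x the pressure amplitude.
print(round(db_to_amplitude_ratio(15), 2))  # 5.62
```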

In general, only one aural aspect can be interpreted at a time, although humans do possess an ability to tune in to a particular part of the soundscape (e.g., the cocktail-party effect). As for using sound actively as a means of input to interactive systems, the two major methods are (a) keyword recognition and (b) natural-language understanding.

Isolated-word-recognition technology for enacting simple commands has become very robust lately, although in most cases it still requires speaker-specific training or a relatively quiet background.

As such, many voice input systems operate in an explicit mode or state. The need to switch to the voice-command mode is still quite a nuisance to the ordinary user. Thus, voice input is much more effective in situations where, for example, hands are totally occupied or where modes are not necessary because there is very little background noise or because there is no mixture of conversation with the voice commands.

Machine understanding of long sentences and natural-language-based commands is still computationally difficult and demanding. With the spread of smart-media client devices that may be computationally light yet equipped with a slew of sensors, such cloud-based natural-language interaction, combined with intelligence, will revolutionize the way we interact with computers in the near future (Figure 3: the smart-media client device sends the captured sentence, in voice or text, to the cloud, and a correct and intelligent response is given back in real time).

Thus haptic refers to both the sensation of force feedback and that of touch (tactile). For convenience, we will use the term haptic to refer to the modality for sensing force and kinesthetic feedback through our joints and muscles (even though any force feedback practically requires contact through the skin), and the term tactile for sensing different types of touch (e.g., vibration, pressure, and texture). The fingertip is one of the most sensitive areas and is frequently used for HCI purposes. A vibration frequency on the order of a few hundred hertz, where skin sensitivity peaks, is said to be optimal for comfortable perception [16].

For a fingertip, this amounts to a displacement of a fraction of a millimeter. As mentioned previously, there are many types of tactile stimulation, such as texture, pressure, vibration, and even temperature. For the purposes of HCI, the following parameters are deemed important, and the same goes for the display system providing the tactile-based feedback.

Physical tactile sensation is felt by a combination of skin cells and nerve endings tuned for particular types of stimulation (e.g., pressure, vibration, or temperature).

(From Proprioception, Intl.) While there are many research prototypes and commercial tactile display devices, the most practical is the vibration motor, mostly applied in a single-actuator configuration. Most vibration motors do not offer separate controllability of amplitude and frequency. In addition, most vibrators are not in direct contact with the stimulation target (e.g., the skin).

Thus additional care is needed to set the right parameter values for the best effects under the circumstances. Another way to realize vibratory tactile display is to use thin and light piezoelectric materials that exhibit vibration responses according to the amounts of electric potential supplied.

Due to their flat form factor, such materials can be embedded, for instance, into flat touch screens. Sometimes sound speakers can be used to generate indirect vibratory feedback with high controllability, responding to wide ranges of amplitude and frequency signals (Figure 3).

(Figure 3. Right: tactile array with multiple actuators.) Note that haptic devices are both input and output devices at the same time; we briefly discuss this issue of haptic input in the next section in the context of human-body ergonomics. The simplest form of a haptic device is a simple electromagnetic latch, often used in game controllers. It generates a sudden inertial movement and slowly repositions itself for repeated use.

Normally, the user holds the device, and inertial forces are delivered in a direction relative to the game controller. Such a device is not appropriate for fast-occurring interaction. More complicated haptic devices take the form of a robotic kinematic chain, either fixed to the ground or worn on the body. As a kinematic chain, such devices offer higher degrees of freedom and finer force control (Figure 3). For a grounded device, the user interacts with the tip of the robotic chain, through which force feedback is delivered.

The sensors in the joints of the device make it possible to track the tip (the interaction point) within the three-dimensional (3-D) operating space. Using a similar control structure, body-worn devices transfer force through a mechanism directly attached to the body. Important haptic display parameters are (a) the degrees of freedom (the number of directions in which force or torque can be displayed), (b) the force range, and (c) stability. Stability is in fact a by-product of a proper sampling period, which refers to the time taken to sense the current amount of force at the interaction point, determine whether the target value has been reached, and reinforce it (a process that repeats until a target equilibrium force is reached at the interaction point).

The ideal sampling rate is about 1,000 Hz, and when the rate falls under a certain value, the robotic mechanism exhibits instability (e.g., jerky or oscillating forces). The dilemma is that providing a high sampling rate requires a heavy computation load, not only for updating the output force, but also for the physical simulation (e.g., collision detection and response).

They tend to be heavy, clunky, dangerous, and take up a large volume. The cost is very high, often with only a small operating range, force range, or limited degrees of freedom. In many cases, simpler devices, such as one-directional latches or vibra- tors, are used in combination with visual and aural feedback to enrich the user experience.
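The force-update loop described above can be sketched as a discrete simulation: at each tick, the device position is sampled and a penalty force pushes back against any penetration into a virtual wall. The stiffness value and the 1 kHz rate mentioned in the comments are illustrative assumptions:

```python
def wall_force(position_m: float, stiffness_n_per_m: float = 500.0) -> float:
    """Penalty force for a virtual wall at x = 0: if the haptic probe
    penetrates the wall (x < 0), push back in proportion to the depth.
    In a real device this computation runs at roughly 1,000 Hz:
    sample position, compute force, command the motors, repeat."""
    penetration = -position_m
    return stiffness_n_per_m * penetration if penetration > 0 else 0.0

# Probe trajectory approaching and then penetrating the wall:
for x in [0.010, 0.002, -0.001, -0.004]:
    print(f"x = {x:+.3f} m  force = {wall_force(x):.1f} N")
```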

However, for various reasons, multimodal interfaces are gaining popularity with the ubiquity of multimedia devices. By employing more than one modality, interfaces can become more effective in a number of ways, depending on how they are configured [22]. Here is a representative example: the ring of a phone call can be simultaneously aural and tactile to increase the pick-up probability. For multimodal interfaces to be effective, each feedback channel must be properly synchronized and consistent in its representation.

The representation must be coordinated between the two: In the previous example, if there is one highlighting, then there should also be one corresponding beep.

When inconsistent, the interpretation of the feedback can be confusing, or only the dominant modality will be recognized. In this section, we briefly look at ergonomic aspects. To be precise, ergonomics is a discipline focused on making products and interfaces comfortable and efficient. Thus, broadly speaking, it encompasses mental and perceptual issues, although in this book we restrict the term to mean ways to design interfaces or interaction devices for comfort and high performance according to the physical mechanics of the human body.

For HCI, we focus on the human motor capabilities that are used to make input interaction. The main equation in Figure 3 is Fitts's law, MT = a + b * ID, where the index of difficulty is ID = log2(2D/W) (or log2(D/W + 1) in the Shannon formulation) for a target of width W at distance D, and a and b are empirically fitted constants. Thus, to reiterate, ID represents an abstract notion of the difficulty of the task, while MT is an actual predicted movement time for a particular task. For instance, consider the example shown in Figure 3.
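Fitts's law can be sketched directly. The constants a and b are device-specific and must be fitted from measurements; the values below are placeholders for illustration, using the Shannon formulation of ID:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time MT = a + b * ID, in seconds.
    a and b here are illustrative constants, not fitted values."""
    return a + b * index_of_difficulty(distance, width)

# Doubling the target width at the same distance lowers ID and thus MT:
print(round(index_of_difficulty(256, 16), 2))  # log2(17) ~ 4.09 bits
print(round(index_of_difficulty(256, 32), 2))  # log2(9)  ~ 3.17 bits
```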

(From MacKenzie, I.; Berard et al.) In addition to discrete-event input methods, continuous input must also be considered. Obviously, humans exhibit different motor-control performance with different devices, as already demonstrated with the two types of devices mentioned previously. The mouse and 3-D stylus, for instance, belong to what are called isotonic devices, where the movement of the device directly translates into movement in the display or virtual space.

Isometric devices, in contrast, control the movement in the display with something else, such as force, and thus possibly with no movement input at all. Control accuracy for touch interfaces presents a different problem.

Despite our fine motor-control capability of submillimeter precision, and with recent touch screens offering very high (hundreds of dpi) resolution, it is the size of the fingertip contact area (unless one uses a stylus pen) that limits touch accuracy. Even larger objects, once selected, are not easy to control if the touch screen is held by the other hand or arm (i.e., when the device itself is unstable). We can also readily see that many of the HCI principles discussed previously in this book naturally derive from these underlying theories.
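The fingertip-contact constraint translates into a minimum on-screen target size: a physical target dimension converted to pixels for a given screen density. The 7 mm figure below is a commonly recommended touch-target size and is an assumption, not a value from the source:

```python
def min_target_px(target_mm: float = 7.0, dpi: float = 300.0) -> int:
    """Pixels needed for an on-screen target to span target_mm
    millimeters on a screen of the given dots-per-inch density
    (25.4 mm per inch)."""
    return round(target_mm / 25.4 * dpi)

# The same physical target needs more pixels on a denser screen:
print(min_target_px(7.0, 160))  # 44 px
print(min_target_px(7.0, 300))  # 83 px
```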

Norman, D. A., and S. W. Draper, eds. User Centered System Design: New Perspectives on Human-Computer Interaction.
Miller, G. A. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review 63(2): 81-97.
Marois, Rene, and Jason Ivanoff. Capacity limits of information processing in the brain. Trends in Cognitive Sciences 9(6).
Anderson, J., D. Bothell, M. Byrne, S. Douglass, C. Lebiere, and Y. Qin. An integrated theory of the mind. Psychological Review 111(4).
Polk, T. Cognitive Modeling.
Salvucci, D., and N. Taatgen. Threaded cognition: An integrated theory of concurrent multitasking. Psychological Review 115(1).
Card, Stuart K., Thomas P. Moran, and Allen Newell. The model human processor: An engineering model of human performance. In Handbook of Human Perception, ed. K. Boff, L. Kaufman, and J. Thomas. New York: John Wiley and Sons.
Schulz, Trenton. Using the keystroke-level model to evaluate mobile phones.
Microsoft Research. IllumiRoom: Illusions create an immersive experience. CHI.
Ni, Tao, Greg S. Schmidt, Oliver G. Staadt, Mark A. Livingston, Robert Ball, and Richard May. A survey of large high-resolution display technologies, techniques, and applications.
Hemer, Mark A. Projected changes in wave climate from a multi-model ensemble. Nature Climate Change.
Ware, C. Information Visualization: Perception for Design. Waltham, MA: Morgan Kaufmann.

Olson, Harry Ferdinand. Music, Physics and Engineering. Mineola, NY: Dover Publications.



