Computer user satisfaction


In concept, computer user satisfaction sets out to measure and record, at a granular level, the satisfaction of individual operators. Related concepts include system satisfaction (which spans multiple users or even customers) and, more simply, user satisfaction; the main differences lie in the target audience, the depth of the survey, its anonymity, and how the results are used or converted into value. Such measures may be designed to gauge product segment stability, industry trends, or how content users are, and they feed into business strategy, market research, sales forecasting and more. Measuring satisfaction proactively helps prevent dissatisfaction, which can otherwise manifest as users migrating to competing products and the loss of a substantial user base and profit. Fields that deal more directly with the layer of a computer system that users interact with, such as user interface (UI) design and user experience (UX), are measured with different tools but are important facets of modern system design, development and engineering.

User Compliance

Using these findings, product designers, business analysts and software engineers anticipate change and prevent user loss by identifying missing features, shifts in requirements, general improvements, or corrections. End-user computing satisfaction is also psychological, in that the findings often represent subjective views rather than objective truths; for example, previous success or failure colours opinion of next-generation products. Organizations therefore pay attention both to preserving what users value and to how their products are perceived.

This often creates a positive feedback loop and gives users a sense of agency. Such surveys help steer a system towards a stable position in its product sector, which matters because the effects of satisfied or dissatisfied users become harder to change over time. Real-world examples include end-user loyalty in the premium mobile device segment, the perception of certain automotive brands as dependable, and the stereotype that products from certain countries are of lower quality. In such cases the corrective action is not taken at the product level; instead it is handled through another business process, change management, which aims to educate, inform and promote the system among its users, swaying opinions that could not otherwise be altered by amending the product.

Satisfaction measurements are also used in industry, manufacturing and other large organizations to gauge internal user satisfaction. The results can motivate internal change to improve or correct existing business processes, for example by discontinuing the use of a system or prompting the adoption of a more suitable solution. They may also inform employee satisfaction, which is important for a productive work environment.

Doll and Torkzadeh's (1988) definition of user satisfaction is the opinion of the user about a specific computer application which they use. In a broader sense, the definition can be extended to user satisfaction with any computer-based electronic appliance. The term "user" can also be taken collectively, referring to individuals, groups and whole organizations, and it is sometimes used to refer to the account or profile of an operator; this sense is not excluded from the context, as when the owner, distributor or developer of a system refers to the "users" of a network or of the system.

The CUS and the UIS

Bailey and Pearson's (1983) 39-factor Computer User Satisfaction (CUS) questionnaire and its derivative, the User Information Satisfaction (UIS) short-form of Baroudi, Olson and Ives, are typical of instruments which one might term 'factor-based'. They consist of lists of factors, each of which the respondent is asked to rate on one or more multiple-point scales. Bailey and Pearson's CUS asked for five ratings for each of 39 factors. The first four scales were for quality ratings and the fifth was an importance rating. From the fifth rating of each factor, they found that their sample of users rated as most important: accuracy, reliability, timeliness, relevancy and confidence in the system. The factors of least importance were feelings of control, volume of output, vendor support, degree of training, and organisational position of EDP (the electronic data processing, or computing, department). However, the CUS requires 39 x 5 = 195 individual seven-point scale responses.[1]

Ives, Olson and Baroudi (1983), amongst others, thought that so many responses could result in errors of attrition,[2] that is, the respondent's failure to return the questionnaire, or increasing carelessness as they fill in a long form. In psychometrics, such errors not only reduce sample sizes but can also distort the results, as those who return long questionnaires, properly completed, may have different psychological traits from those who do not. Ives et al. thus developed the UIS, which only requires the respondent to rate 13 factors and so remains in significant use. Two seven-point scales are provided per factor (each measuring a quality), requiring 26 individual responses. However, Islam, Koivulahti-Ojala and Käkölä (2010) argued that measuring user satisfaction in industry settings is difficult because the response rate often remains low, so a simpler version of the user satisfaction instrument is necessary.
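The arithmetic above (39 x 5 = 195 responses for the CUS against 13 x 2 = 26 for the UIS) hints at how factor-based instruments are typically aggregated into a single score. The sketch below is a minimal illustration, not Bailey and Pearson's published scoring procedure: it assumes an importance-weighted mean over four seven-point quality ratings per factor, and the factor names shown are only a small illustrative subset.

```python
# Illustrative aggregation of a factor-based satisfaction questionnaire.
# The weighting scheme and factor subset are assumptions for demonstration
# only; they are not Bailey and Pearson's published scoring procedure.

def factor_score(quality_ratings, importance):
    """Average the quality ratings for one factor and weight by its importance."""
    mean_quality = sum(quality_ratings) / len(quality_ratings)
    return mean_quality * importance

def overall_satisfaction(responses):
    """Importance-weighted mean over all factors, back on the 1-7 scale."""
    weighted_sum = sum(factor_score(q, imp) for q, imp in responses.values())
    total_importance = sum(imp for _, imp in responses.values())
    return weighted_sum / total_importance

# One respondent: four 7-point quality ratings plus one 7-point importance
# rating per factor, mirroring the CUS layout described above.
respondent = {
    "accuracy":    ([6, 6, 5, 6], 7),
    "reliability": ([5, 6, 6, 5], 7),
    "timeliness":  ([4, 5, 4, 4], 6),
    "relevancy":   ([5, 5, 6, 5], 6),
}

print(round(overall_satisfaction(respondent), 2))  # 5.22
```

A short form such as the UIS simply shrinks the dictionary of factors and scales that a respondent must fill in, which is the whole point of the attrition argument above.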

The problem with the dating of factors

An early criticism of these measures was that the factors date as computer technology evolves and changes. This suggested the need for updates and led to a sequence of other factor-based instruments. Doll and Torkzadeh (1988), for example, produced a factor-based instrument for a new type of user emerging at the time, called an end user. They identified end-users as users who tend to interact with a computer interface only, while previously users interacted with developers and operational staff as well. McKinney, Yoon and Zahedi (2002) developed a model and instruments for measuring web-customer satisfaction during the information phase.[3] Cheung and Lee (2005) in their development of an instrument to measure user satisfaction with e-portals, based their instrument on that of McKinney, Yoon and Zahedi (2002), which in turn was based primarily on instruments from prior studies.

The problem of defining user satisfaction

As none of the instruments in common use rigorously defines its construct of user satisfaction, some scholars, such as Cheyney, Mann and Amoroso (1986), have called for more research on the factors which influence the success of end-user computing. Little subsequent work has shed new light on the matter, however. All factor-based instruments run the risk of including factors irrelevant to the respondent while omitting some that may be highly significant to them, a problem further exacerbated by the ongoing changes in information technology.

In the literature, two terms are used interchangeably for this construct: 'user satisfaction' and 'user information satisfaction'. According to Doll and Torkzadeh (1988), 'user satisfaction' is the opinion of the user about a specific computer application that they use. Ives et al. (1983) defined 'user information satisfaction' as "the extent to which users believe the information system available to them meets their information requirements." Other terms for user information satisfaction are "system acceptance" (Igersheim, 1976), "perceived usefulness" (Larcker and Lessig, 1980), "MIS appreciation" (Swanson, 1974) and "feelings about information system" (Maish, 1979). Ang and Koh (1997) have described user information satisfaction (UIS) as "a perceptual or subjective measure of system success." This means that user information satisfaction will differ in meaning and significance from person to person: users who are equally satisfied with the same system according to one definition and measure may not be similarly satisfied according to another.

Several studies have investigated whether or not certain factors influence the UIS, such as those by Yaverbaum (1988) and Ang and Soh (1997). Yaverbaum's (1988) study found that people who use their computers irregularly tend to be more satisfied than regular users. Ang and Soh's (1997) research, on the other hand, could find no evidence that computer background affects UIS.

Mullany, Tan, and Gallupe (2006) propose a definition of user satisfaction, claiming that it is based on memories of the past use of a system; motivation, they suggest, is conversely based on beliefs about the future use of the system.

The large number of studies over the past few decades, as cited in this article, shows that user information satisfaction remains an important topic in research studies despite somewhat contradictory results.

A lack of theoretical underpinning

Another difficulty with most of these instruments is their lack of underpinning by psychological or managerial theory. Exceptions were the model of web site design success developed by Zhang and von Dran (2000) and the measure of user satisfaction with e-portals developed by Cheung and Lee (2005), both of which drew on Herzberg's two-factor theory of motivation.[4] Consequently, their factors were designed to measure both 'satisfiers' and 'hygiene factors'. However, Herzberg's theory itself is criticized for failing to distinguish adequately between the terms motivation, job motivation and job satisfaction. Islam (2011) found that the sources of dissatisfaction differ from the sources of satisfaction: environmental factors (e.g., system quality) were more critical in causing dissatisfaction, while outcome-specific factors (e.g., perceived usefulness) were more critical in causing satisfaction.
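Under a two-factor reading such as Islam's (2011), dissatisfaction and satisfaction are tracked as two separate indices rather than as opposite ends of one bipolar scale. The following is a hypothetical sketch of that idea only: the item names, their assignment to 'hygiene' and 'satisfier' categories, and the example ratings are illustrative assumptions, not items from any of the instruments cited here.

```python
# Hypothetical grouping of questionnaire items into Herzberg-style categories,
# summarised as two separate indices instead of one bipolar score.
# Item names and category assignments are illustrative assumptions.

HYGIENE_ITEMS = ["system_quality", "response_time", "support"]       # environmental factors
SATISFIER_ITEMS = ["perceived_usefulness", "task_fit", "outcomes"]   # outcome-specific factors

def mean_of(items, ratings):
    """Mean rating over the listed items that appear in the responses."""
    present = [ratings[i] for i in items if i in ratings]
    return sum(present) / len(present) if present else None

def two_factor_summary(ratings):
    """Return separate hygiene and satisfier indices on the original scale."""
    return {
        "hygiene_index": mean_of(HYGIENE_ITEMS, ratings),      # low values flag dissatisfaction
        "satisfier_index": mean_of(SATISFIER_ITEMS, ratings),  # high values flag satisfaction
    }

ratings = {
    "system_quality": 3, "response_time": 2, "support": 4,
    "perceived_usefulness": 6, "task_fit": 5, "outcomes": 6,
}
print(two_factor_summary(ratings))
# {'hygiene_index': 3.0, 'satisfier_index': 5.666...}: a dissatisfying environment
# alongside satisfying outcomes, which a single bipolar score would blur together.
```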

Cognitive style

A study by Mullany (2006) showed that, during the life of a system, user satisfaction will on average increase over time as the users' experience with the system grows. Whilst the overall findings showed only a weak link between the gap in the users' and analysts' cognitive styles (measured using the KAI scales) and user satisfaction, a more significant link was found in the regions of 85 and 652 days into a system's usage. This link shows that a large absolute gap between user and analyst cognitive styles often yields a higher rate of user dissatisfaction than a smaller gap. Furthermore, an analyst with a more adaptive cognitive style than the user at the early and late stages of usage (approximately days 85 and 652) tends to reduce user dissatisfaction.
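Mullany's result concerns the absolute difference between the user's and the analyst's KAI scores, together with its direction (whether the analyst is the more adaptive or the more innovative of the pair). The sketch below simply computes that differential; the example scores are hypothetical, and the code illustrates the shape of the reported relationship rather than the study's statistical model.

```python
# Illustrative computation of an analyst-user cognitive style differential.
# KAI scores run from adaptive (low) to innovative (high); the example
# values below are hypothetical.

def style_differential(user_kai, analyst_kai):
    """Return the absolute KAI gap and which party is the more adaptive."""
    gap = abs(user_kai - analyst_kai)
    if analyst_kai < user_kai:
        relation = "analyst more adaptive than user"
    elif analyst_kai > user_kai:
        relation = "analyst more innovative than user"
    else:
        relation = "same cognitive style"
    return gap, relation

gap, relation = style_differential(user_kai=110, analyst_kai=85)
print(gap, "-", relation)  # 25 - analyst more adaptive than user
# Per the finding above, larger gaps tended to coincide with more user
# dissatisfaction around days 85 and 652 of use, while a more adaptive
# analyst at those stages tended to reduce it.
```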

Mullany, Tan, and Gallupe (2006) devised an instrument, the System Satisfaction Schedule (SSS), which utilizes user-generated factors almost exclusively and so avoids the problem of the dating of factors. Also aligning themselves with Herzberg, these authors argue that the perceived usefulness (or otherwise) of tools of the trade is contextually dependent, so such tools are special cases of hygiene factors. They consequently define user satisfaction as the absence of user dissatisfaction and complaint, as assessed by users who have had at least some experience of using the system. In other words, satisfaction is based on memories of the past use of a system, whereas motivation is based on beliefs about its future use (Mullany et al., 2007, p. 464).

Future developments

Currently, some scholars and practitioners are experimenting with other measurement methods and further refinements of the definitions of satisfaction and user satisfaction. Others are replacing structured questionnaires with unstructured ones, in which the respondent is asked simply to write down or dictate all the factors about a system which either satisfy or dissatisfy them. One problem with this approach, however, is that such instruments tend not to yield quantitative results, making comparisons and statistical analysis difficult. Also, if scholars cannot agree on the precise meaning of the term satisfaction, respondents are unlikely to respond consistently to such instruments. Some newer instruments contain a mix of structured and unstructured items.

References

  • Ang, J. and Koh, S. "Exploring the relationships between user information satisfaction and job satisfaction", International Journal of Information Management (17:3), 1997, pp 169-177.
  • Ang, J. and Soh, P. H. "User information satisfaction, job satisfaction and computer background: An exploratory study", Information & Management (32:5), 1997, pp 255-266.
  • Bailey, J.E., and Pearson, S.W. "Development of a tool for measuring and analysing computer user satisfaction", Management Science (29:5), May 1983, pp 530-545.
  • Bargas-Avila, J., Loetscher, J., Orsini, S. and Opwis, K. "Intranet Satisfaction Questionnaire: Development and Validation of a Questionnaire to Measure User Satisfaction with the Intranet" Paper submitted to Information & Management. 2008.
  • Baroudi, J.J., and Orlikowski, W.J. "A Short-Form Measure of User Information Satisfaction: A Psychometric Evaluation and Notes on Use", Journal of Management Information Systems (4:2), Spring 1988, pp 44-58.
  • Cheung, C.M.K., and Lee, M.K.O. "The Asymmetric Effect of Website Attribute Performance on Satisfaction: An Empirical Study", 38th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Hawaii, 2005, pp. 175-184.
  • Cheyney, P. H., Mann, R.L., and Amoroso, D.L. "Organisational factors affecting the success of end-user computing", Journal of Management Information Systems 3(1) 1986, pp 65-80.
  • DeLone, W.H., and McLean, E.R. "Information Systems Success: The Quest for the Dependent Variable", Information Systems Research (3:1), March 1992, pp 60-95.
  • DeLone, W.H., and McLean, E.R. "Information Systems Success Revisited", 35th Hawaii International Conference on System Sciences, IEEE Computer Society Press, Los Alamitos, CA, 2002, pp. 238-248.
  • DeLone, W.H., and McLean, E.R. "The DeLone and McLean Model of Information Systems Success: A Ten-Year Update", Journal of Management Information Systems (19:4), Spring 2003, pp 9-30.
  • Doll, W.J., and Torkzadeh, G. "The Measurement of End User Computing Satisfaction", MIS Quarterly (12:2), June 1988, pp 258-274.
  • Doll, W.J., and Torkzadeh, G. "The measurement of end-user computing satisfaction: theoretical considerations", MIS Quarterly (15:1), March 1991, pp 5-10.
  • Herzberg, F., Mausner, B., and Snyderman, B. The motivation to work. Wiley, New York, 1959, p. 257.
  • Herzberg, F. Work and the nature of man World Publishing, Cleveland, 1966, p. 203.
  • Herzberg, F. "One more time: How do you motivate employees?", Harvard Business Review (46:1), January-February 1968, pp 53-62.
  • Igersheim, R.H. "Management response to an information system", Proceedings AFIPS National Computer Conference, 1976, pp 877-882.
  • Islam, A.K.M. Najmul, Koivulahti-Ojala, M., and Käkölä, T. "A lightweight, industrially-validated instrument to measure user satisfaction and service quality experienced by the users of a UML modeling tool", Proceedings AMCIS 2010.
  • Islam, A.K.M. Najmul, "Information Systems Post-adoption Satisfaction and Dissatisfaction: A Study in the E-Learning Context", Proceedings PACIS 2011.
  • Ives, B., Olson, M.H., and Baroudi, J.J. "The measurement of user information satisfaction", Communications of the ACM (26:10), October 1983, pp 785-793.
  • Larcker, D.F. and Lessig, V.P. "Perceived usefulness of information: a psychometric examination", Decision Science (11:1), 1980, pp 121-134.
  • Maish, A.M. "A user's behavior towards his MIS", MIS Quarterly (3:1), 1979, pp 37-52.
  • McKinney, V., Yoon, K., and Zahedi, F.M. "The measurement of web-customer satisfaction: An expectation and disconfirmation approach", Information Systems Research (13:3), September 2002, pp 296-315.
  • Mullany, M.J. "The use of Analyst-User Cognitive Style Differentials to Predict Aspects of User Satisfaction with Information Systems", thesis, Auckland University of Technology, 2006.
  • Mullany, M. J., Tan, F. B. and Gallupe, R. B., 2006, "The S-Statistic: a measure of user satisfaction based on Herzberg's theory of motivation", Proceedings of the 17th Australasian Conference on Information Systems (ACIS), Adelaide.
  • Mullany, M. J., Tan, F. B. and Gallupe, R. B., 2007, "The Impact Of Analyst-User Cognitive Style Differences On User Satisfaction", Proceedings of the 11th Pacific-Asia Conference on Information Systems (PACIS), Auckland.
  • Swanson, E.B. "Management and information systems: an appreciation and involvement", Management Science (21:2), 1974, pp 178-188.
  • Zhang, P., and Von Dran, G.M. "Satisfiers and dissatisfiers: a two-factor model for Website design and evaluation.", Journal of the American Society for Information Science (51:14), December 2000, pp 1253-1268.
  • Yaverbaum, G. J. "Critical factors in the user environment - an experimental study of users, organizations and tasks", MIS Quarterly (12:1), 1988, pp 75-88.
  1. ^ Bailey, James E.; Pearson, Sammy W. (May 1983). "Development of a Tool for Measuring and Analyzing Computer User Satisfaction". Management Science. 29 (5): 530–545. doi:10.1287/mnsc.29.5.530.
  2. ^ Ives, Blake; Olson, Margrethe H.; Baroudi, Jack J. (1 October 1983). "The measurement of user information satisfaction". Communications of the ACM. 26 (10): 785–793. doi:10.1145/358413.358430.
  3. ^ McKinney, Vicki; Yoon, Kanghyun; Zahedi, Fatemeh "Mariam" (September 2002). "The Measurement of Web-Customer Satisfaction: An Expectation and Disconfirmation Approach". Information Systems Research. 13 (3): 296–315. doi:10.1287/isre.13.3.296.76.
  4. ^ Herzberg, Frederick (1972). Work and the nature of man (reprint ed.). London: Staples Press. ISBN 978-0286620734.