Context awareness

Context awareness refers, in information and communication technologies, to the capability to take into account the situation of entities,[1] which may be users or devices but are not limited to those. Location is only the most obvious element of this situation. Narrowly defined for mobile devices, context awareness thus generalizes location awareness. Whereas location may determine how certain processes around a contributing device operate, context may be applied more flexibly with mobile users, especially users of smart phones. Context awareness originated as a term in ubiquitous computing, also called pervasive computing, which sought to link changes in the environment with computer systems that are otherwise static. The term has also been applied to business theory in relation to contextual application design and business process management issues.[2]

Qualities of context

Various categorizations of context have been proposed in the past. Dey and Abowd (1999)[3] distinguish between the context types location, identity, activity and time. Kaltz et al. (2005)[4] identified the categories user & role, process & task, location, time and device to cover a broad variety of mobile and web scenarios. They emphasize, however, that for these classical modalities the optimal categorization depends very much on the application domain and use case. Beyond these, more advanced modalities may apply when not only single entities are addressed but also clusters of entities that share a coherent context, such as teams at work or single users carrying a multiplicity of appliances. Some classical understanding of context in business processes is derived from the definition of AAA applications[5] with the following three categories: authentication, authorization, and accounting.
The categorizations above cover these three terms, additionally including location and time as stated.

Computer science

In computer science, context awareness refers to the idea that computers can both sense and react to their environment. Devices may have information about the circumstances under which they are able to operate and, based on rules or an intelligent stimulus, react accordingly. The term context awareness in ubiquitous computing was introduced by Schilit (1994).[6][7] Context-aware devices may also try to make assumptions about the user's current situation. Dey (2001) defines context as "any information that can be used to characterize the situation of an entity."[1] While the computer science community initially perceived context as a matter of user location, as Dey discusses,[1] in the last few years this notion has come to be considered not simply as a state but as part of a process in which users are involved; thus, sophisticated and general context models have been proposed (see survey[8]) to support context-aware applications which use them to (a) adapt interfaces, (b) tailor the set of application-relevant data, (c) increase the precision of information retrieval, (d) discover services, (e) make the user interaction implicit, or (f) build smart environments. For example, a context-aware mobile phone may know that it is currently in the meeting room and that the user has sat down. The phone may conclude that the user is currently in a meeting and reject any unimportant calls.[9] Context-aware systems are concerned with the acquisition of context (e.g. using sensors to perceive a situation), the abstraction and understanding of context (e.g. matching a perceived sensory stimulus to a context), and application behaviour based on the recognized context (e.g.
triggering actions based on context).[10] As the user's activity and location are crucial for many applications, context-awareness research has focused particularly on the fields of location awareness and activity recognition. Context awareness is regarded as an enabling technology for ubiquitous computing systems. It is used to design innovative user interfaces and is often employed as part of ubiquitous and wearable computing. Its influence is also beginning to be felt on the Internet with the advent of hybrid search engines. Schmidt, Beigl and Gellersen[11] define human factors and physical environment as two important aspects relating to computer science. More recently, much work has also been done to ease the distribution of context information; Bellavista, Corradi, Fanelli and Foschini survey[12] the several middleware solutions that have been designed to transparently implement context management and provisioning in mobile systems. Grifoni, D'Ulizia and Ferri[13] provided a review of several context-aware location-based service systems using big data, analysing the methodological and practical choices that their developers made during the main phases of the context-awareness process (i.e. context acquisition, context representation, and context reasoning and adaptation). Perera, Zaslavsky, Christen and Georgakopoulos[14] have performed a comprehensive survey of context-aware computing from an Internet of Things perspective, reviewing over 50 leading projects in the field. Further, Perera has also surveyed a large number of industrial products in the existing IoT marketplace from a context-aware computing perspective.[15] Their survey is intended to serve as a guideline and a conceptual framework for context-aware product development and research in the IoT paradigm. The evaluation was done using the theoretical framework developed by Dey and Abowd (1999)[3] more than a decade earlier.
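The three stages of a context-aware system described above (acquisition, abstraction, application) can be sketched as a minimal rule-based pipeline. This is an illustrative sketch, not any cited system's implementation: the sensor fields, context labels and rules are all assumptions made for the meeting-room phone example.

```python
from dataclasses import dataclass

# Acquisition: an illustrative snapshot of raw sensor data.
@dataclass
class SensorReadings:
    room: str      # e.g. from an indoor positioning beacon (assumed source)
    motion: str    # "still" or "moving", e.g. from an accelerometer

def infer_context(r: SensorReadings) -> str:
    """Abstraction: match raw sensor readings to a higher-level context."""
    if r.room == "meeting_room" and r.motion == "still":
        return "in_meeting"
    return "available"

def handle_call(context: str, caller_priority: str) -> str:
    """Application: adapt behaviour (here, call handling) to the context."""
    if context == "in_meeting" and caller_priority != "important":
        return "reject"
    return "ring"

ctx = infer_context(SensorReadings(room="meeting_room", motion="still"))
print(handle_call(ctx, "routine"))  # prints "reject"
```

Real systems replace the hard-coded rules with learned or probabilistic models, but the three-stage structure remains the same.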
The combination of the Internet and emerging technologies transforms everyday objects into smart objects that can understand and react to their contexts.[16] Human-factors-related context is structured into three categories: information on the user (knowledge of habits, emotional state, biophysiological conditions), the user's social environment (co-location of others, social interaction, group dynamics), and the user's tasks (spontaneous activity, engaged tasks, general goals). Likewise, context related to the physical environment is structured into three categories: location (absolute position, relative position, co-location), infrastructure (surrounding resources for computation, communication, task performance), and physical conditions (noise, light, pressure, air quality).[17][18]

Relational context: dynamic and non-user-centric definitions

Whereas early definitions of context tended to center on users, or on devices interfaced directly with users, the oft-cited definition from Dey[1] ("any information that can be used to characterize the situation of an entity") can be taken without this restriction. User-centric context, as may be used in the design of human-computer interfaces, may also imply an overly clear-cut, and partially arbitrary, separation between "content" (anything explicitly typed in by users, or output to them) and context, which is implicit and used for adaptation purposes. A more dynamic and de-centered view, advocated by Dourish,[19] regards context as primarily relational. This was originally congruent with the move from desktop computing to ubiquitous computing, but it also fits a broader understanding of ambient intelligence in which the distinctions between context and content become relative and dynamic.[20] In this view, whatever sources of information (such as IoT sensors) serve as context for some uses and applications might also be sources of primary content for others, and vice versa.
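This relational view can be made concrete with a graph-based representation in the style of NGSI-LD, in which an entity carries both plain attributes ("Property") and explicit links to other entities ("Relationship"). The entity identifiers, attribute names and helper function below are illustrative assumptions for this sketch, not fragments of any real deployment:

```python
# A minimal NGSI-LD-style entity sketch; all identifiers are illustrative.
# Context appears both as attributes of the entity ("Property") and as
# links to other entities ("Relationship"), which together form a graph.
room = {
    "id": "urn:ngsi-ld:Room:101",
    "type": "Room",
    "temperature": {"type": "Property", "value": 21.5},
    "isPartOf": {"type": "Relationship", "object": "urn:ngsi-ld:Building:A"},
    "hasOccupant": {"type": "Relationship", "object": "urn:ngsi-ld:Person:alice"},
}

def related_entities(entity: dict) -> list[str]:
    """Collect the graph edges: identifiers of all entities linked from this one."""
    return [v["object"] for v in entity.values()
            if isinstance(v, dict) and v.get("type") == "Relationship"]

print(related_entities(room))  # prints the two linked entity URNs
```

A consumer of such context can then follow the graph edges across entities rather than read a fixed attribute set of a single entity.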
What matters is the set of relationships that link them together and with their environment. Whereas early descriptions of single-user-centric context could fit classical entity-attribute-value models, more versatile graph-based information models, such as the one proposed with NGSI-LD, are better adapted to capture the more relational view of context relevant for the Internet of Things, Cyber-Physical Systems and Digital Twins. In this broader sense, context is not only represented as a set of attributes attached to an entity; it is also captured by a graph that enmeshes this entity with others. Context awareness is then the capability to account for this cross-cutting information from different sources.

Applications in situational or social awareness

Context awareness has been applied to the area of computer-supported cooperative work (CSCW) to help individuals work and collaborate more efficiently with each other. Since the early 1990s, researchers have developed a large number of software and hardware systems that can collect contextual information (e.g., location, video feeds, away-status messages) from users. This information is then openly shared with other users, thereby improving their situational awareness and allowing them to identify natural opportunities to interact with each other. In the early days of context-aware computing, many of the systems developed for this purpose were specifically designed to help businesses or geographically separated work teams collaborate on shared documents or work artifacts. More recently, however, a growing body of work has demonstrated how this technique can also be applied to groups of friends or family members to help keep them apprised of each other's activities. To date, systems that use context awareness to improve situational awareness can be characterised by the type of contextual information they collect and share.
The most common context to obtain and share for the purposes of improving situational awareness is the user's location. In an early prototype, the Active Badge system,[21] for example, each user had a uniquely identifying badge that could be tracked via a series of overhead infrared sensors. As users walked throughout a building, their location was constantly monitored by a centralized server. Other users could then view this information (either in text form or on a map, as was done in later work[22]) to determine whether a user was in her office, thereby allowing them to determine the best time to stop by for an unplanned conversation. Location was also shared in PeopleTones,[23] Serendipity,[24] and the Group Interaction Support Systems[25] to help users determine when they are near friends, users with shared personal interests, and teammates, respectively. In comparison with Active Badge, which only displays location information, these systems are more proactive and will alert users when they are in proximity of each other. This lets the user know when a potential interaction opportunity is available, thereby increasing his/her chances of taking advantage of it. Another popular context to share is a user's work activity, often by sharing video. In the Community Bar system,[26] researchers developed a desktop application that periodically took screenshots of the user's display. This information was then shared with the user's co-workers so that they could know what documents/artifacts their teammates were working on, and it provided a common frame of reference so that users could talk about these artifacts as if they were collocated. In Montage,[27] users are given the ability to remotely activate the webcam on another user's computer for a brief amount of time. This capability to "glance" at another user lets users see if they are busy or preoccupied, which in turn helps them better determine the most opportune time to initiate a conversation.
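At their core, the proactive proximity alerts described above reduce to a periodic distance check against a threshold. The sketch below is an assumption-laden simplification: it uses planar (x, y) positions in metres and an arbitrary 50 m threshold, neither of which is taken from the cited systems, which used their own sensing and matching techniques.

```python
import math

def distance_m(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Planar distance in metres between two (x, y) positions (simplifying assumption)."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def proximity_alerts(me: tuple[float, float],
                     others: dict[str, tuple[float, float]],
                     threshold_m: float = 50.0) -> list[str]:
    """Return the names of users close enough to trigger a proactive alert."""
    return [name for name, pos in others.items()
            if distance_m(me, pos) <= threshold_m]

# "amy" is 50 m away (within the threshold); "bob" is 500 m away.
print(proximity_alerts((0.0, 0.0), {"amy": (30.0, 40.0), "bob": (300.0, 400.0)}))
# prints ['amy']
```

A deployed system would run this check on location updates and debounce repeated alerts for the same pair of users.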
A third type of context shared to improve situational awareness is the user's audio. In the Thunderwire system,[28] researchers developed an audio-only media space that allowed friends to share raw audio from their mobile devices' microphones. This system, in essence a perpetual conference call, allowed users to listen to other users' audio in order to determine if and when they were participating in a conversation. The WatchMe[29] and ListenIn[30] systems also rely heavily on audio to determine if and when a user is potentially interruptible. Unlike Thunderwire, however, these systems use machine learning algorithms to analyze the user's audio and determine whether the user is talking. This allows the system to provide other users with the same context (i.e., whether or not the user is in a conversation) without having to share the actual audio, thereby making it more privacy-preserving. A fourth type of context that is commonly shared is the user's overall activity. In the Hubbub[31] and Conchat[32] systems, researchers developed instant-messenger applications that monitored when the user was active or idle using software or environmental sensors, respectively. This information could then be provided to other users to let them know if and when their friends were available to respond to their messages. In the Digital Family Portrait system,[33] researchers developed a digital picture frame that provides qualitative visualizations of a user's (i.e., an elderly parent's or grandparent's) daily activities. This visualization is then provided to the user's adult children so that they can "check up" on their parents without having to explicitly bother them, thereby allowing the parents to "age in place." Although these systems demonstrate how context awareness can be used to support situational and social awareness, the long-term effectiveness of these systems is not yet fully understood.
Many of the systems described above have only been evaluated at a conceptual level, or to demonstrate that the system is technically feasible. Consequently, while initial results from these studies suggest that context awareness can support situational-awareness tasks, more longitudinal data is required.

Application in health care

Context-aware mobile agents[34] are well-suited hosts for implementing context-aware applications. Modern integrated voice and data communication equips hospital staff with smart phones to communicate vocally with each other, but preferably also to look up the next task to be executed and to capture the next report to be noted. However, all attempts to support staff with such approaches are hampered to the point of failing acceptance as long as every new event requires manually looking up patient identities, order lists and work schedules. Hence a well-suited solution has to do away with such manual interaction with a tiny screen and instead serve the user with context-triggered information delivery.
Applications in industrial production

Context-aware mobile agents are well-suited hosts for implementing newer context-aware applications under the Industry 4.0 paradigm. Modern integrated (voice and) data communication equips workshop or production-line staff with smart phones to exchange data with production control for feedback, where the data originates from detecting and identifying components and parts to be integrated, via flexible production management, into on-demand products. However, all attempts to support staff with such approaches are hampered by fixed production schedules unless the information on customer demand and product configuration can be matched with parts supply. Hence a well-suited solution has to bridge the gap between the production plan and the actual occurrence of relevant information and material on the production line.
The key requirement is to implement a solution free from manual interaction between the worker and information handling; otherwise the error rate rises along with the information requirements. Additionally, none of the conventional RFID, WLAN or RTLS locating solutions advertised as most precise delivers the required quality, as the conventional approach of determining a location by absolute coordinates fails either technically or economically. Other approaches based on fuzzy locating promise a better return on investment.

Applications in pervasive games

A pervasive game leverages sensed human contexts to adapt game system behaviour. By blending real and virtual elements and enabling users to physically interact with their surroundings during play, people can become fully involved and attain a better gaming experience. For example, a pervasive game driven by an autonomous agent, using the contexts of human activity and location in smart homes, has been reported.[35]

Applications in mobile multimedia devices

Museums and archaeological sites sometimes provide multimedia mobile devices as an alternative to the conventional audio guide (see, e.g., the Tate Modern in London).[36] A context-aware device will use the location, current user interactions and the graph of connected objects to dynamically tailor the information presented to the user.[37] In some cases this is combined with real-time navigation around the site to guide the user to artefacts or exhibits that are likely to be of interest, based on the user's previous interactions.[38]