
Flexible context aware interface for ambient assisted living

Abstract

This article presents a Multi-Agent System that provides a cared-for person, the subject, with assistance and support through an Ambient Assisted Living Flexible Interface (AALFI) during the day, while complementing the night time assistance offered by NOCTURNAL with feedback assistance. It has been tailored to the subject’s requirements profile and takes into account factors associated with the time of day; hence it attempts to overcome shortcomings of current Ambient Assisted Living systems. The subject is provided with feedback that highlights important criteria such as quality of sleep during the night and possible breaches of safety during the day. This may help the subject carry out corrective measures and/or seek further assistance. AALFI provides tailored interaction that is either visual or auditory so that the subject is able to understand the interactions, and this process is driven by a Multi-Agent System. User feedback gathered from a relevant user group through a workshop validated the ideas underpinning the research, the Multi-Agent System and the adaptable interface.

Introduction

The increasing older population [1] and the current economic climate are stretching health and social care provision, and this has prompted recent research into the development of assisted living systems that aim to provide efficient and effective assistance and support to older people in their own homes. The Ambient Assisted Living (AAL) solution presented here provides a subject with assistance during the day and with feedback assistance, based on day and night time activities and events, through an Ambient Assisted Living Flexible Interface (AALFI). It provides interventions that are adapted to the current time of day, activity, detected events and changes of context in the environment. Feedback derived from past interventions may be beneficial in solving current issues. A Multi-Agent System (MAS) controls AALFI, and the interaction method for interventions and feedback is adapted to the subject’s requirements profile.

Current solutions, known as Ambient Assisted Living (AAL) systems, have three identified shortcomings. (i) They normally concentrate on providing day based assistance and support and are not aware of activities and events that occur during the night. Examples include a multimodal pervasive framework for ambient assisted living [2], where older people are supported through a multimodal interface, and an intelligent home middleware system [3] that assists older people by acquiring, detecting and reasoning about changes of context. These and other related projects outlined in Section "Related Research Areas and Projects" provide assistance during the day or, in the case of NOCTURNAL [4], during the night. The research presented here aims to provide a (cared for) person, the subject, with assistance during the day and to adapt that assistance to the time of day, contextual changes and the event that has occurred; in the future, assistance may also be adapted according to the older person’s behaviour or mood. AALFI is aware of activities and events that occur during the night and is able to provide feedback type assistance the following day. It complements the night time intervention assistance offered by NOCTURNAL with day based interventions, several new night time interventions relating to the older person’s behaviour, and feedback assistance based on day and night time events and activities. AALFI and NOCTURNAL were developed in parallel, by related development teams, and are mutually complementary.

(ii) The interaction method may be inappropriate for the capability of the user, leading to further confusion and frustration; for example, a system may carry out actions that a person does not understand because of illegible text size and inappropriate colours, or the wrong assistance may be offered, causing confusion. Related research has investigated GUI layout [5], element placement [6] and the font size and style used to convey information, all of which can affect a subject’s ability to interact with an interface; a study with 50 partially sighted and 100 sighted children found that larger fonts and clearer text are of benefit to partially sighted people [7]. To help alleviate any possible confusion, AALFI can be adapted so that text, font or colour can be changed according to a subject’s requirements, and if a person’s sight degrades over time, the interface can be further adapted through a care provider/person interface.
Research into GUI content, placement, interface navigation and methods of conveying information was used during the design of AALFI and in planning future work, where further interface adaption may be implemented to change the layout of the interface and adapt other attributes. AALFI is currently installed on a 10 inch Windows Tablet PC and can function in a particular location or be moved to a different location by the older person. The approach followed by the NOCTURNAL project is to provide interactions through a static bedside interface. In the future, several Tablet PCs may be installed in key locations and the AALFI interface may be displayed on the device nearest to the older person’s current location. Auditory interactions may also be implemented to allow an older person to interact with an interface through speech and sound [8]. In current systems, the type of assistance offered to the older person is limited by the subject’s daily routine; activities and actions are often ignored and suitable feedback strategies have not been properly evaluated. (iii) Current state of the art AAL systems often provide only intervention type assistance that corresponds to day based activities and are not aware of activities and events that occur during the night, when a subject may exhibit bad behaviours or carry out activities that they do not remember the following day. AALFI provides intervention type assistance during the day, in addition to feedback assistance for recognised events that occur during the day and night, so that potential issues with the older person’s behaviour may be drawn to their attention.

This article outlines the research ideas (Section "Research Aims"): supporting a subject by means of an adaptive multimodal interface, providing a subject with day and night time assistance, and facilitating interaction through visual and auditory modalities. Section "Related Research Areas and Projects" details the related research topics, highlights related research projects and discusses the perceived limitations of current AAL systems. Section "The Multi-Agent System, Interventions and Feedback" outlines the MAS architecture and the intervention and feedback processes, and discusses intervention and feedback strategies. Section "Adaptable Multimodal Interaction" details the multimodal interaction methods, while Section "Evaluation and Results" presents the findings from a validation exercise that was completed at Age NI headquarters [1], where participants helped to validate the research ideas, the MAS and the associated adaptable multimodal interface. Conclusions regarding the research direction, further development and feedback from the workshop are provided in Section "Conclusions".

Research aims

The aim of this work is the development and assessment of AALFI, and this section details the three main research ideas: ‘supporting the subject through an adaptive multimodal interface that is driven and updated by a MAS’, ‘complementing the current support offered by the NOCTURNAL project’ and ‘providing interaction through visual and auditory modalities’.

Supporting the subject through an adaptive multimodal interface that is driven and updated by a MAS

AALFI provides several forms of assistance and support interactions based on the subject’s requirements and the detected context, event or action that has occurred. The interface is controlled and updated by a MAS that determines the correct intervention to make or feedback message to issue, and the correct method to deliver it, either as text messages or as auditory interactions. A number of agent platforms were considered, including JASON [9], JADE [10] and JADEX [11]. JADE offered the best means to develop the MAS; it is a mature technology that has been successfully tested in other AAL systems, including [12, 13] and [14]. The JADE agents control the interface, choosing the appropriate content and interaction method for the interventions and feedback. A profile agent was implemented to adapt the interaction method so that either visual or auditory interactions are available to the subject, depending on their current requirements profile.
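
To make the agent design concrete, the following is a minimal sketch of how a JADE profile agent of the kind described above might answer a profile-check request from the GUI agent. The class name, message content and default modality are illustrative assumptions rather than the actual AALFI implementation.

```java
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

// Hypothetical sketch of a profile agent: it answers profile-check requests
// from the GUI agent with the subject's currently selected interaction modality.
public class ProfileAgent extends Agent {

    // Illustrative profile value; the real system reads a requirements profile.
    private String interactionModality = "VISUAL"; // or "AUDITORY"

    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage request = myAgent.receive(); // wait for a profile check
                if (request != null && request.getPerformative() == ACLMessage.REQUEST) {
                    ACLMessage reply = request.createReply();
                    reply.setPerformative(ACLMessage.INFORM);
                    reply.setContent(interactionModality); // e.g. "VISUAL" or "AUDITORY"
                    myAgent.send(reply);
                } else {
                    block(); // sleep until the next message arrives
                }
            }
        });
    }
}
```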

Complement the intervention type night time assistance offered by NOCTURNAL with day time assistance and feedback type assistance

Providing assistance during the night has been successfully demonstrated by the NOCTURNAL project [3]. AALFI complements this night time assistance with intervention assistance during the day, offers several additional interventions during the night, and provides feedback assistance to the older person that is based on activities the older person carried out during the night. The assistance approach offered by AALFI differs from that of NOCTURNAL: AALFI provides assistance through a Tablet device that can be moved from room to room (NOCTURNAL offers assistance through a bedside device), provides feedback assistance in addition to intervention assistance (NOCTURNAL offers intervention type assistance) and supports the older person during the day (NOCTURNAL supports the person during the night). The research presented in this paper mainly concentrates on the day intervention assistance that AALFI offers in relation to common activities and events that may occur in an AAL scenario, and on the feedback assistance that is offered in relation to historical interventions, activities and events that have occurred during the day and night.

Interaction through visual and auditory modalities

In order for successful and effective human computer interaction to occur, it is important to consider the user’s requirements. Deficits may include sight issues and these can have an effect on how the person views and interacts with the interface. A person may be partially sighted or blind and not able to interact with a visual interface. In this case some form of speech and auditory interaction should be provided to the person so that they are able to carry out simple interactions.

The visual modality provides interactions through a button, text and picture based interface. The text on navigation buttons may be altered so that the user can easily navigate and interact with the interface and understand the messages being displayed. The auditory modality includes text to speech interaction: the person is able to interact with a simple VoiceXML speech menu, their speech is recognized by the Sphinx speech recognizer and messages are spoken through a Java based free text to speech synthesizer. With the auditory profile selected, the system listens for a key word before starting the interaction process; upon this trigger the main interaction menu is articulated and the subject is then able to carry out interactions with the interface.

Flexible assistance through a context aware interface in an AAL environment

AALFI offers flexible assistance strategies through a context aware interface in an AAL environment. The flexibility is made possible through the use of different interaction technologies, including touch screens, speech recognition and synthesised speech. The older person is able to choose the interaction technology by setting their individual interaction requirements, thereby personalising their interaction experience and how they receive assistance. These requirements may be updated at any time by the older person or their care provider so that future changes in interaction requirements may be accounted for. To achieve interaction flexibility, AALFI is controlled and updated by a MAS that displays context aware attributes; sensor event data from sensors placed in the AAL environment is consumed to determine what has occurred and to choose the appropriate assistance. It is this consumption of sensor data that is key to the correct assistance being offered to the older person. Context awareness is an important and essential characteristic of the MAS. Context awareness in relation to a MAS is illustrated by [15], and MAS systems are shown to exhibit context aware attributes by [16], as they are able to decipher contextual changes that occur in an AAL environment. Without these context aware attributes and characteristics, the correct assistance would not be offered and AALFI would not be able to provide flexible assistance in relation to the different situations that occur in the environment, ranging from safety issues, such as leaving cupboard, fridge or back doors open, to health issues, such as not getting enough sleep, reminding a person to eat regular meals and offering advice based assistance at key times during the day.

Related research areas and projects

This section highlights several related research areas: Human Computer Interaction (a Multimodal interface has been implemented to allow a person to interact with AALFI), Context Aware Computing (the agents make use of contextual changes to identify what is occurring in the environment and what activity the person has carried out), Ambient Assisted Living (the research falls under the area of systems to assist people in their daily lives) and Multi-Agent Systems (AALFI is controlled by a Multi-Agent System).

Overview of human computer interaction, context aware computing, ambient assisted living and multi-agent systems

Examples of human computer interaction (HCI) include displays that are either mobile or stationary, interactive displays, tangible physical interfaces and surfaces, touch screens and auditory interfaces [17]. As well as ‘simple human computer interfaces’, there may be multimodal interfaces that have several forms of input and interaction [18]. Many, sometimes competing, technical challenges may be faced by the developer and by the person that makes use of the interface, including ensuring that the interface is always available, extensible, efficient, secure and respectful of the user’s privacy [19]. HCI may be supported by visual or auditory interaction modalities. Auditory interactions may be of benefit to blind or partially sighted people who are not able to interact with a visual interface. A survey [8] completed by 50 blind and 100 sighted people investigated which interactions would be of benefit to a blind person; it found that font size, style and text size can have an effect on how a subject interacts with an interface.

Context can be defined as “any information that can be used to characterise the situation of an entity” [20] and can be used to identify activities and events that have occurred in a smart home environment [21]. Context aware systems “provide relevant information and/or services to the user, where relevancy depends on the user’s task” [22], and context can be recorded by sensors, mobile devices and personal digital assistants [23]. Methods of acquiring context include sensing context, where context is gathered from sensors; deriving context, where context is recorded in real time; and explicitly gathering context that is provided by the user of the system [24]. Context may be gathered from different architectural layers, including the network layer, middleware layer, application and service layer and user infrastructure layer [25].

AAL systems are said to be able “to prolong the time people can live in a decent way in their own home by increasing their autonomy and self-confidence” [26]. AAL may be able to provide assistance and support with activities of daily living [27] and provide assistance during the night to prevent trips and falls, help with disorientation and may calm a person who wakes up [4]. The types of assistance and support that may be provided include communication support that enables contact with friends, family and care providers [28] and reminiscence activities, “a range of activities and traditional tools aimed at stimulating thoughts, feelings and memories of times gone by” [29].

A Multi-Agent System is built from several software agents; a software agent is defined as a “computer system that is situated in some environment and that is capable of autonomous action in this environment in order to meet its design objectives” [30]. A Multi-Agent System implements many software agents that interact together and can cooperate or compete to carry out complex tasks by exchanging specially formed messages. In the case of an AAL solution, the MAS may provide interventions through meaningful interactions with a person to aid them in carrying out activities of daily living. Examples of Multi-Agent smart environments are discussed by [31], and current directions for research in this area are multi-intelligent software agents, tracking multiple residents, profiling multiple residents and multi-agent negotiation. The different types of Multi-Agent programming languages that may be used were outlined by [32]. Examples include declarative languages, which “are partially characterised by their strong formal nature, normally grounded on logic”; imperative languages, which are “less common, mainly due to the fact that most abstractions related to agent-oriented design are, typically, declarative in nature”; and hybrid approaches, where declarative and imperative language features are combined. Examples of MAS research include an agent based model for supporting group emotions [33], an access control agent based security system [34] and an agent-based system for providing automated prompting [35]. The next section outlines related research projects and details how AALFI and the associated MAS overcome shortcomings of the identified AAL systems.

Related research projects

Insufficient work has been devoted to a user’s ability to understand the assistance that is being offered. If the subject’s requirements change over time, the method for deploying the assistance is often not adapted to these new requirements. In comparison, AALFI can take into account changes in requirements so that the interaction methods may be further adapted. From the related research it is apparent that the primary method for carrying out interactions is visual; in contrast, both visual and auditory modalities are provided by AALFI. The subject may either interact with a visual interface through touch and reading messages, or carry out interaction through a speech based interface, where simple commands are issued and simple messages are spoken to the subject. Furthermore, the subject may not be supported by current AAL systems during the night, when they are more vulnerable. AALFI has been compared to several research projects (Table 1) and a comparison of the similarities and differences follows.

Table 1 Related research projects

The subject is able to interact with the MAS through AALFI either by means of touch or through spoken auditory interaction. Intervention and feedback messages are displayed on the touch screen device or spoken; the method used to put forward the messages is adapted based on the subject’s requirements profile. In comparison, the multimodal pervasive framework [2] provides speech based interaction: it interprets commands that the subject speaks and carries out a particular action, and the subject is able to write sentences that are recognized, touch an area on a map to get directions or speak words for actions to be carried out by the application. The application does not provide the person with meaningful feedback on the actions that are being carried out and it only provides assistance during the day.

The MAS developed in this work consists of six agents (GUI, data, sensor, intervention, profile and feedback). The GUI, intervention and feedback agents provide the MAS with the ability to interact with a subject, either by presenting text and images on a touch screen device or by speaking messages. In comparison, the context framework [3] consists of three main agents for handling contextual information: the context collecting agent (CCA), the context reasoning agent (CRA) and the context management agent (CMA). The outlined context services do not provide a means of carrying out user interaction, as the framework is designed to be connected to intelligent devices and appliances and to provide contextual data that details how the devices are being used.

A near field communications (NFC) interface [36] that allows a subject to select what they wish to eat during the day makes use of a NFC enabled mobile device and tags to recognize choices. This relies on the person correctly placing the mobile device over the desired tag and of course requires that the device will not be misplaced by the person. By comparison AALFI has been implemented on a touch screen device and the subject can carry out simple interactions through the touch screen interface or by speaking simple commands; the subject does not require the use of any other mobile devices.

The architecture of the multi-agent service framework for context-aware elder care (CASIS) [16] consists of device agents that are connected to smart furniture, including smart tables, chairs, floors and home control networks. Linking to devices directly may cause issues in the future: if a new piece of furniture is added, a new agent will need to be developed, and having an agent for each piece of furniture may limit extensibility. In comparison, AALFI is not linked directly to the sensors, furniture or other devices; instead it consumes the data generated by these devices. CASIS uses context-aware information services to remind the person to take medicines and healthcare services that enable “healthcare professionals to get updated and aggregated bio data on the elder’s health conditions”. AALFI, in contrast, offers support with many activities of daily living and with night time activities, and is able to differentiate between day and night activities.

The next research project that has been considered is a flexible architecture for ambient intelligence systems [37] that interacts with a subject through a virtual character, which mimics a relative or friend so that the subject can interact with a friendly face. A virtual character may have several complicating issues: the virtual character needs to be programmed, which may add to development time, and it may require more processing power during interactions due to the rendering process. As highlighted by the authors, the virtual character is non-persistent; AALFI has persistence, as key interventions and actions are remembered so that feedback may be presented to the subject. AALFI is touch screen based and provides a simple GUI with large buttons and large, clear text. By implementing a simple interface, processing overheads may be reduced and the device that the interface runs on need not be particularly powerful.

AALFI makes use of a simple auditory interface to provide a person with access to intervention and feedback messages. The intervention messages detail something that the subject needs to correct, “the back door has been left open for 10 minutes, please close the back door”, or an action that they should carry out, “it is morning and it is recommended that you have breakfast”. In comparison, the Sweet-Home project [39] provides an auditory interface that allows the person to issue commands to control a smart home, communicate with the outside and make use of a shared electronic calendar. It was decided to concentrate on providing only simple, key auditory interactions to a person, to help prevent information overload and to keep auditory interactions simple so that confusion may be avoided. AALFI is similar to Sweet-Home in that both offer intervention type prompts to a person; however, meaningful feedback is not provided by the Sweet-Home auditory interface.

The Wireless sensor networks and human comfort index system [40] utilises user provided feedback and preferences to control environmental factors such as temperature and lighting. In comparison AALFI uses preferences to control how interactions are carried out through visual and auditory modalities and meaningful feedback is provided to the user that identifies issues with their activities that they themselves may need to correct.

AALFI offers a person simple prompts as intervention messages, suggesting that the person carries out a corrective action in response to detected events. In comparison, PUCK [41] makes use of simple prompts to guide a person through tasks and does not identify issues that may need to be corrected. AALFI is situationally and contextually aware: contextual change events are processed, and from these, intervention messages are issued and feedback is generated.

The interventions and feedback messages are provided to a person through an adaptable interface with either a visual modality (a touch screen) or an auditory modality (simple speech based interaction). In comparison, the ‘RFID-driven situation awareness on TangiSense, a table interacting with tangible objects’ project [42] makes use of RFID tags and adaptable tables (different functionality may be added to and removed from a table) as the primary means of interaction and does not provide interaction through an auditory modality.

The last research project that has been considered is NOCTURNAL [38], a multi-agent system that provides assistance and support to older people during the night through a bedside touchscreen interface [43] and does not provide assistance in any other location in the home. AALFI takes a different approach and allows the person either to leave the interface in one particular location or to carry it to a different location, so that assistance may be offered in key locations including the kitchen, living room, bedroom and WC. In NOCTURNAL, meaningful pictures (visual interaction) and calming music (auditory interaction) are provided in response to detected events to help relieve agitation during the night, calm the person and help them stay asleep or return to sleep; in comparison, AALFI’s auditory interactions include speech recognition to recognize simple commands and speech synthesis to deliver messages, while its visual interactions use textual messages, buttons and pictures. AALFI complements the intervention assistance offered by the NOCTURNAL project by providing intervention assistance during the day, being aware of night time events and activities, providing feedback assistance based on these time periods, and offering several additional interventions during the night designed to highlight any negative behaviours, such as sitting up in the kitchen during the night or using the toilet at night. AALFI also takes a different approach to assistance in that it provides text based messages that are designed to encourage a person to carry out a task or corrective action in response to events that occur during the day, and it generates feedback during the day and night. Intervention type assistance is offered by NOCTURNAL during the night; AALFI provides intervention assistance during the day and is aware of activities and events that occur during the night so that feedback assistance may be offered. Feedback provided to a subject details an identified trend from previous events and corresponding interventions; this may help the subject think about the activities and actions they are carrying out and may encourage them to correct any recurring issues. The next section details the Multi-Agent System and the associated interventions and feedback.

The multi-agent system, interventions and feedback

A Multi-Agent System was chosen for the implementation of AALFI over a centralized system as it is a more flexible and extendable methodology. The client device does not need to be powerful and may be, for example, a bedside touchscreen Tablet PC. Seven agents have been implemented in the MAS and their roles are outlined in Table 2. The current implementation of AALFI makes use of a 10 inch Windows 7 Tablet PC (full specification below). The Tablet device was chosen as it fully supports Java and JADE, is portable and can be moved from room to room by the older person. In the future AALFI may offer further interface adaption based on the current location of the Tablet device.

Table 2 The multi-agent system agents

MAS architecture

Extra agents may be added when new functionality is added to the AAL system. Computational resources may be shared amongst several computers over a network, and therefore only the agents that control the adaptable interface need to be installed on the client device. The current revision of the MAS architecture is shown by Figure 1. This revision provides assistance through two interaction modalities: (i) a visual interaction modality, where the person interacts through a touch screen device with text and pictures, and (ii) an auditory interaction modality, where the subject is able to interact with the interface through speech recognition and the interface interacts with the subject through speech synthesis.

Figure 1

MAS architecture showing the layers and the interaction of the main agents. Details the key layers of the MAS architecture and shows the interactions that occur between the MAS agents.

The architecture consists of five layers: (i) the interaction, communication and sensing (ICS) layer, where sensing, control of actuators and devices, and interactions occur; (ii) the data layer, where sensor data is captured from the sensors, agent action data is recorded and contextual data is stored; (iii) the decision and logging layer, where the person’s profile is processed and the actions carried out by the agents are logged; (iv) the information layer, where relationships between the agents, environment and person are managed and appropriate interventions are decided; and (v) the context layer, where contextual changes are detected and managed. Information is exchanged by the agents between layers (6 – 9).

The sensor and sensor data agents work together to process sensor event data (A – B), and this results in a sensor event message being formed that contains the sensor type (PIR, bed-chair or door contact), the time it was triggered (in the format yyyy-mm-dd hh:mm:ss), the event type (opening, closing or room visited) and the location of the sensor event (including the bedroom, kitchen, main hall, Livingroom, Foodcupboard door, Fridgedoor…). The sensor event message (C) is sent to the context agent, which determines what has changed in the environment, for example whether the back door has been opened and for how long it has been open. The context agent sends a contextual event message to the intervention agent (D). Once the intervention agent receives this message, it determines the correct intervention to make to the person; examples include issuing a reminder to have meals at certain times of the day, alerting the person that the back door has been left open, and issuing interventions during the night. The intervention message (F) is sent to the GUI agent, and the appropriate method of putting forward the intervention is selected by the profile agent (F). Once the appropriate intervention has been selected, a record is stored (G) in the agent action data store. When feedback is requested by the person, this agent action data is analysed, patterns are detected and the appropriate feedback is selected (H). Interventions and feedback are presented to the person in the environment with either visual or auditory interactions (I) and (J).

The MAS is able to adapt the interventions provided to the person by tracking the current contextual change, the activity and the intervention that has previously been issued. This is achieved by comparing the current event to the previous event throughout the current sensor event processing cycle. If the same event has previously occurred, the number of times it has occurred and the time difference between the events are calculated. Events that logically follow each other (for example, door opening and door closing events) are recognised so that the context behind the event may be determined. For example, if the person is alerted that the back door has been open for 10 minutes and does not close the door, the intervention issued when the door has been open for 20 minutes will be different. This information is fed between layers (K) so that the most appropriate intervention is selected. The interface agent has been replaced with a GUI agent that offers more functionality and drives the interface during the multimodal interactions. The MAS consumes data from sensors located in the environment, including bed-chair, door contact and PIR sensors, as well as microphones for auditory interaction.
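
As an illustration of the sensor event message described above, the sketch below shows how a sensor agent might package the sensor type, event type, location and timestamp into a JADE ACL message for the context agent. The pipe-delimited content format and the receiving agent name are assumptions made for this example, not the actual AALFI message format.

```java
import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;
import java.text.SimpleDateFormat;
import java.util.Date;

// Illustrative helper for the sensor agent: packs one sensor event into an
// ACL message for the context agent. The content layout is an assumption.
class SensorEventSender {

    static void sendSensorEvent(Agent sensorAgent, String sensorType,
                                String eventType, String location, Date when) {
        // Timestamp in the format used by the system: yyyy-mm-dd hh:mm:ss
        String timestamp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(when);

        ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
        msg.addReceiver(new AID("ContextAgent", AID.ISLOCALNAME)); // assumed agent name
        // e.g. "DoorContact|opening|Fridgedoor|2014-03-10 13:42:05"
        msg.setContent(sensorType + "|" + eventType + "|" + location + "|" + timestamp);
        sensorAgent.send(msg);
    }
}
```
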
The following subsections outline what interventions and feedback are, and the underlying agent processes involved in forming the intervention and feedback messages.

Flexibility at the architectural level

The agents of the MAS detailed by Figure 1 display flexible characteristics that relate to context awareness and personalisation. Key agents of the MAS include the GUI Agent, Context Agent, Feedback agent and Intervention Agent.

The context agent’s flexibility relates to its ability to determine what has occurred in the environment and to adapt the context message to the detected contextual changes. Without the flexibility to choose the correct contextual message, the MAS would not be able to determine the correct assistance to offer. The interaction methods chosen by the GUI agent are flexible in that they can be further refined based on the currently selected profile. The profile represents the older person’s current interaction requirements, and these can be changed at any time to take into account a change of interaction preference or underlying interaction requirements relating to old age. The assistance offered by the feedback and intervention agents relies on the underlying flexibility to adapt the assistance to what has occurred in the environment, based on: the detected changes of context relating to a device, sensor or physical object’s change of state, for example a door being opened or closed; or an activity the older person carries out, such as being restless in bed (a pressure pad registers movement in the bed), using the WC (the older person enters the WC) or entering a room (the room state is detected to have changed from empty to occupied). During the night only a subset of the available interventions is offered, for example a reminder to return to bed when the older person enters the kitchen; as previously discussed, it is the flexible characteristics that enable a different intervention to be offered at different times of the day. The majority of actions the older person carries out during the night result in feedback assistance being generated, and this assistance is only offered during the day. Without this flexibility, both feedback and intervention assistance would be provided during the night, which may result in information overload and be detrimental to a good night’s sleep.
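
A minimal sketch of how the day/night flexibility described above could be expressed is given below. The night-time cut-off hours, the intervention names and the choice of which interventions remain available at night are illustrative assumptions rather than the actual AALFI rules.

```java
import java.time.LocalTime;
import java.util.EnumSet;
import java.util.Set;

// Illustrative day/night filtering: only a subset of interventions is offered
// at night, and feedback is held back until the day. Hours are assumptions.
class AssistanceScheduler {

    enum Intervention { MEAL_REMINDER, BACK_DOOR_OPEN, RETURN_TO_BED }

    private static final LocalTime NIGHT_START = LocalTime.of(22, 0); // assumed
    private static final LocalTime NIGHT_END = LocalTime.of(7, 0);    // assumed

    // Interventions that are still offered during the night (assumed subset).
    private static final Set<Intervention> NIGHT_SUBSET =
            EnumSet.of(Intervention.RETURN_TO_BED);

    static boolean isNight(LocalTime now) {
        return now.isAfter(NIGHT_START) || now.isBefore(NIGHT_END);
    }

    static boolean mayOffer(Intervention intervention, LocalTime now) {
        return !isNight(now) || NIGHT_SUBSET.contains(intervention);
    }

    static boolean mayOfferFeedback(LocalTime now) {
        return !isNight(now); // feedback is only offered during the day
    }
}
```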

Sequence of events for an intervention and receiving feedback

This subsection details the sequence of the agent processes for putting forward an intervention to the subject and giving the appropriate feedback, on demand. A sequence diagram (Figure 2) shows these agent processes.

Figure 2

Sequence of events for interventions and feedback. Details the sequence of events for determining interventions and feedback assistance.

Sensor data is stored in a sensor data repository; the sensor data agent retrieves this sensor data (1) and then sends a sensor data message to the sensor agent (2). Once this message has been received, the sensor agent sends a sensor event message to the context agent. The context agent determines how the context has changed (3 – 5), and from this a context message is formed and sent to the intervention agent (6). The intervention agent receives this message and determines the appropriate intervention to make. Once an intervention has been determined, an intervention message is sent to the GUI agent (7) and a record of the intervention is kept by the feedback agent (8 – 9). Once the GUI agent has received this message, it sends a profile check message to the profile agent (10 – 11), and the profile details are then retrieved from the profile data store (not explicitly indicated in the figure). Once the correct profile has been selected, the GUI agent chooses the appropriate interaction method and interface components to use, and the interface is then adapted accordingly (12). The person is then able to interact with the interface (13). For example, the subject is told the back door is open; after a period of time the subject closes the back door and this generates an interface event (14) that is processed by the GUI agent (15). The GUI agent sends a message to the intervention agent detailing that the back door has been closed, resulting in a contextual change occurring (17), and this is recorded (18).

If the subject chooses to receive feedback, they interact (19) with the touch screen device (visual interactions) or issue the keyword command ‘feedback’ (auditory interactions). The GUI agent provides feedback menu options to the person (21 – 22), and the person is then able to navigate through the available feedback using the touch screen device or listen to the spoken feedback options. Once a choice has been made (25), the GUI agent sends a message to the feedback agent to retrieve the feedback (26). The chosen feedback is gathered from the feedback data store (27 – 28) and the feedback message is sent to the GUI agent (29). Depending on the current profile, the feedback will either be spoken to the person or displayed on the touch screen device (30). The person then receives the chosen feedback (31) and, by carrying out the feedback activity, may be able to identify issues and correct them themselves. The following sections detail what interventions and feedback are.

What is an intervention?

There are two types of interventions, designed to provide assistance and support with a wide range of events that may arise from activities or actions that the person carries out; they are designed to be simple and easy to follow so that the subject does not become confused. The two types of interventions are: (i) message interventions, where information is conveyed to the user either through text to speech (auditory modality) or a textual message displayed on a screen (visual modality), and (ii) action interventions, using sound (an alarm, prompt or music) or a visual stimulus (a light being turned on, or a picture and/or textual message being automatically displayed). The modality chosen to offer the current intervention is determined by the older person’s interaction requirements; these requirements may be informed by research into the types of interaction that may be offered to an older person in an AAL scenario, and are applied through an internal ‘profile check’ process that is carried out before the chosen assistance is offered.
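
The two intervention types might be modelled as follows; this is an illustrative sketch and the class and field names are assumptions, not the AALFI source code.

```java
// Illustrative model of the two intervention types described above.
abstract class Intervention {
    enum Modality { VISUAL, AUDITORY }
    final Modality modality; // chosen by the profile check before delivery
    Intervention(Modality modality) { this.modality = modality; }
}

// (i) Message intervention: text spoken aloud or displayed on screen.
class MessageIntervention extends Intervention {
    final String text;
    MessageIntervention(Modality modality, String text) {
        super(modality);
        this.text = text;
    }
}

// (ii) Action intervention: an auditory or visual stimulus such as an alarm,
// music, a light being turned on, or a picture being displayed automatically.
class ActionIntervention extends Intervention {
    final String stimulus; // e.g. "ALARM", "CALMING_MUSIC", "LIGHT_ON"
    ActionIntervention(Modality modality, String stimulus) {
        super(modality);
        this.stimulus = stimulus;
    }
}
```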

The intervention process

The intervention process, detailed by Figure 3, shows the main agents that are responsible for determining the intervention (the Intervention agent), selecting the correct profile (Profile agent) and putting forward the intervention to the person (the Interface agent).

Figure 3

The intervention process and agent interactions. This figure details the intervention process and the interactions that the agents carry out.

On receiving a contextual event message from the context agent (A), the intervention agent determines the appropriate intervention to make (B). Once this has been carried out, an intervention details message is formed (C) and sent to the GUI agent (D). The GUI agent sends a message to the profile agent (E) so that the appropriate interaction method may be chosen to put forward the intervention to the person. The profile agent receives this message (F), retrieves the profile data (G), checks that the profile is correct (H) and chooses the correct profile (I). A profile message is sent back to the GUI agent (J). The GUI agent receives the profile message (K) and decides the appropriate interface content (L). The interface is adapted (M) and interaction can occur between the interface (N) and the person (O). The intervention is put forward to the person in a manner that they can understand, as the interface is adapted according to the person’s requirements profile. The following section provides details of the feedback functionality.

What is feedback?

Feedback is designed to provide a user with a message that outlines a key trend or issue that has been detected from historical interventions that the MAS has carried out. Feedback may have a positive effect on a user’s behaviour by outlining when good trends have been detected; for example, if a subject has had a restful night’s sleep, they will be issued with ‘positive feedback’. In contrast, the feedback can draw the subject’s attention to a recurring event or action that may need to be corrected, for example if the person continually leaves the back door open.

  • Feedback to the subject: Feedback is provided to the user when they push the feedback button (visual interaction method) or issue the keyword ‘feedback’ during auditory interaction. This may reduce information overload by allowing the person to choose when to receive feedback rather than being provided with it automatically by the MAS. The feedback is offered between the morning and evening; it is not offered at night as it is thought that it may disrupt a restful night’s sleep.

  • Feedback to the care provider/health professional: Feedback can also be made available to care providers and health professionals. This feedback would be more detailed and provide an insight into the activities that the person is carrying out and how the MAS is responding with interventions.

  • The feedback complements the intervention functionality and may help the user to solve recurring issues themselves.

The feedback process

In order for the correct feedback to be identified and issued, every time an intervention occurs, a record is kept of when the intervention occurred, what the intervention was and how many times the intervention has been issued. Figure 4 shows the agents that are involved in the feedback process and shows how feedback is formed for ‘restless sleep’.

Figure 4

The feedback process for restless sleep. Details the feedback process in relation to the detection of restless sleep.

When the user is detected to be restless, the sensor agent processes the bed sensor data associated with detecting restlessness (A); a sensor event is then sent to the context agent (B) – (C). When the context agent has processed the sensor event message and determined the changes of context, it sends a contextual event message (D) to the intervention agent. The intervention agent then carries out the appropriate intervention (E). Details of the intervention type, the time and how many times it occurred are stored in the agent action data store (F). The feedback agent retrieves the details of historical interventions, logs the interventions (H) – (I) and, from this log, determines the appropriate feedback (J). A feedback message is generated (K); in this case the feedback is ‘restless feedback’ and it is sent to the GUI agent (L). The chosen feedback (M) is sent to the interface agent (N). Based on the currently chosen profile, the appropriate interface content is chosen (P) and the interface adapted (O). The user is able to view or listen to the feedback through the interface (P) at any time during the day, but not during the night.
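
The following sketch illustrates the kind of intervention log and trend check that could underlie the restless-sleep feedback described above. The intervention label, the threshold of three occurrences and the feedback wording are assumptions for illustration, not the actual AALFI rules.

```java
import java.time.LocalDateTime;
import java.util.ArrayList;
import java.util.List;

// Illustrative intervention log used by the feedback agent: each intervention
// is recorded, and repeated occurrences produce a feedback message.
class InterventionLog {

    static class Entry {
        final String interventionType;
        final LocalDateTime when;
        Entry(String interventionType, LocalDateTime when) {
            this.interventionType = interventionType;
            this.when = when;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    void record(String interventionType, LocalDateTime when) {
        entries.add(new Entry(interventionType, when));
    }

    long countSince(String interventionType, LocalDateTime from) {
        long count = 0;
        for (Entry e : entries) {
            if (e.interventionType.equals(interventionType) && e.when.isAfter(from)) {
                count++;
            }
        }
        return count;
    }

    // Example trend rule: three or more restlessness interventions in one night
    // produce 'restless feedback' the next day (the threshold is an assumption).
    String feedbackFor(LocalDateTime nightStart) {
        if (countSince("RESTLESS_IN_BED", nightStart) >= 3) {
            return "You had a restless night's sleep last night.";
        }
        return "You had a restful night's sleep last night.";
    }
}
```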

Multi-agent process for adapting the interface

This sub section details the agent actions (Figure 5) that occur when the interface is adapted to put forward an intervention or feedback message to the older person.

Figure 5

Interface adaption for visual and auditory interactions. Shows the agent interactions that occur for choosing appropriate visual and auditory interactions.

A profile request is made (F) to the profile agent (G). The profile agent can choose either a visual profile (H) or an auditory profile (I), depending on the person’s requirements. The chosen profile is sent to the GUI agent as either (J) (visual) or (K) (auditory). These messages then result either in the display of interface content, including textual messages, buttons for interaction and pictures (L), or, when auditory interaction has been selected (M), in speech output (text to speech) and speech recognition (the person issues simple commands). The visual interface features are displayed (N – P) or auditory prompts are made (O – Q). The person is then able to interact visually (R) or through speech and sounds (S).

Adaption process explanation

The intervention agent receives a contextual event message (A) and from this it chooses the correct intervention (B) and determines the intervention to put forward to the person (C). The intervention details (D) are sent to the GUI Agent (E).

The next section details the adaptable multimodal interaction that may occur between AALFI and the older person.

Adaptable multimodal interaction

During the research phase of the project, various interaction requirements were considered, including those relating to visual interaction (sight, readability, navigation, control) and to auditory interaction (speech, issues relating to speech and language, and the effects of conditions such as stroke), that may affect the older person’s ability to interact with the interface. It was decided to concentrate on specific requirements for a selection of possible users so that the prototype system could be implemented, demonstrated and evaluated. In the future, further work may be carried out so that interaction issues relating to a person’s speech, other visual conditions and mobility may be accounted for and appropriate interactions offered, and the layout of the GUI and the GUI content may be further adapted by either the older person or the care provider.

The visual interactions focus on those relating to putting forward the assistance to the person (including, but not limited to, the visual attributes of the interface such as text size, pictures, font and interface size). Issues relating to navigation, such as the placement of buttons on the screen, the size of buttons, the position of interface elements and the difficulties that an older person may have when interacting with a computer interface, have been considered and have had an effect on the choice of device for AALFI, the design of the interface and the interaction functionality that is currently offered and may be offered in the future.

The adaption attribute is considered to be important for understanding the interventions and feedback. A user needs to be able to read and navigate the interface (visual interaction) or carry out speech based interaction and understand the messages that are being spoken (auditory interaction). The adaptable multimodal interface that has been implemented is detailed by Figure 6, which shows three of the current adaptions that occur (A. small text, B. a transcript of auditory interactions and C. large text).

Figure 6

Interface adaption examples in relation to events that the older person has carried out in the Smart Home. Interface adaption examples: standard interface view: default text size, using text as primary interaction. Interface adaption examples: Auditory interaction transcript (Auditory interactions) and adapted visual interface with larger text and changed colour (Visual Interactions).

In the future, how the person is feeling may be used to further adapt the interface and how interventions and feedback are offered to the older person. Biometric sensors may be used to measure the person’s heart rate, moisture on the skin (sweat) and vocal stress (auditory interactions) to facilitate adaption according to how the person is feeling. The following sub section outlines the visual interaction that occurs.

Visual interaction

There are three forms of visual interaction: viewing intervention messages, viewing feedback messages and associated pictograms, and viewing pictures that can be adapted based on the time of day. The types of interaction the user can make during visual interaction include navigating between the intervention, feedback and picture functionality and alternating between the intervention/feedback messages and pictures. The user may further tailor the main interface (e.g., text size, button size) to their own requirements, and messages can be adapted so that the person is able to navigate the interface and understand any feedback and intervention messages that are displayed.
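
As an illustration of the text-size tailoring mentioned above, the sketch below adapts message and button fonts according to the requirements profile, assuming a Swing-based GUI for the purpose of the example; the component types and point sizes are assumptions.

```java
import java.awt.Font;
import javax.swing.JButton;
import javax.swing.JLabel;

// Illustrative visual adaptation: enlarge message text and navigation buttons
// according to the subject's requirements profile (point sizes are assumed).
class VisualAdapter {

    static void applyProfile(JLabel messageLabel, JButton navigationButton,
                             boolean largeTextRequired) {
        float textSize = largeTextRequired ? 32f : 18f; // assumed point sizes
        messageLabel.setFont(messageLabel.getFont().deriveFont(textSize));
        navigationButton.setFont(navigationButton.getFont().deriveFont(textSize));
    }
}
```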

Auditory interaction

When a profile has been set to auditory interaction, speech recognition is used to listen for a key word so that interaction can occur and simple commands can be issued by the person; text to speech is used to output the relevant messages and menu options. The technologies used for the VoiceXML auditory interaction are detailed by Figure 7.

Figure 7

VoiceXML architecture showing the main technologies. The main VoiceXML technologies are detailed.

The VoiceXML menu that has been implemented provides simple voice based interactions. It comprises: (i) a waiting loop, where the interface listens for a key word to be issued so that interaction may occur; once the keyword has been issued, the person is welcomed to the voice menu and told what interactions they can carry out. (ii) Main menu choices: listen to the feedback messages, listen to the intervention messages or exit the menu. (iii) A feedback menu: if the user has chosen to listen to feedback, they are asked whether they wish to listen to the current feedback message, the last feedback message or all the feedback messages; the user is also able to exit the feedback menu and return to the main menu. (iv) An intervention menu: when the user has chosen to listen to interventions, they are able to listen to all the interventions, the current intervention or the last intervention, or exit to the main menu. (v) Exiting: if the user has chosen to exit the main menu, they are first asked if they wish to leave; on answering ‘yes’, the interaction interface is returned to the ‘waiting loop’, and if the person says ‘no’, the menu choices for the current menu are spoken to the person. In the past, VoiceXML has primarily been used for banking and call centre interfaces, and VoiceXML has undergone several revisions that have added to and improved its functionality. The current implementation is designed to help validate the idea of having an auditory interaction modality; it does not yet leverage all the features of VoiceXML, but it provides a stepping stone for a future, more advanced implementation that may offer different voices and be able to understand more words and phrases. The following sub sections outline sample dialogue between the interface (System) and the person (Subject) that occurs during auditory interaction.
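
The menu flow described above can be summarised as a simple state machine. The Java sketch below mirrors the VoiceXML dialogue states for illustration only; it is not the actual VoiceXML implementation, and the trigger word and prompts are assumptions.

```java
// Illustrative state machine mirroring the VoiceXML dialogue described above.
class AuditoryMenuStateMachine {

    enum State { WAITING_LOOP, MAIN_MENU, FEEDBACK_MENU, INTERVENTION_MENU, EXIT_CONFIRM }

    private State state = State.WAITING_LOOP;
    private static final String KEY_WORD = "hello"; // assumed trigger word

    // Returns the prompt to speak after processing one recognized utterance.
    String onUtterance(String utterance) {
        switch (state) {
            case WAITING_LOOP:
                if (KEY_WORD.equalsIgnoreCase(utterance)) {
                    state = State.MAIN_MENU;
                    return "Welcome. Say feedback, interventions, or exit.";
                }
                return null; // keep listening silently
            case MAIN_MENU:
                if ("feedback".equalsIgnoreCase(utterance)) {
                    state = State.FEEDBACK_MENU;
                    return "Say current, last, all, or exit.";
                } else if ("interventions".equalsIgnoreCase(utterance)) {
                    state = State.INTERVENTION_MENU;
                    return "Say current, last, all, or exit.";
                } else if ("exit".equalsIgnoreCase(utterance)) {
                    state = State.EXIT_CONFIRM;
                    return "Do you wish to leave? Say yes or no.";
                }
                return "Say feedback, interventions, or exit.";
            case FEEDBACK_MENU:
            case INTERVENTION_MENU:
                if ("exit".equalsIgnoreCase(utterance)) {
                    state = State.MAIN_MENU;
                    return "Back at the main menu.";
                }
                return "Reading the requested messages."; // placeholder for playback
            case EXIT_CONFIRM:
                if ("yes".equalsIgnoreCase(utterance)) {
                    state = State.WAITING_LOOP;
                    return "Goodbye.";
                }
                state = State.MAIN_MENU;
                return "Say feedback, interventions, or exit.";
            default:
                return null;
        }
    }
}
```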

Interacting with care givers and health professionals

As previously outlined, the interface is designed for the primary user, the older person. It was decided that a separate interface should be implemented to allow care providers, health professionals and older people to carry out simple changes to the interface adaption profile; this interface is shown by Figure 8.

Figure 8

Further customization Interface for profile requirements. This figure shows the further customization interface.

The functionality that is offered includes changing the primary interaction profile, customizing the visual profile settings and adjusting the speed of the voice; further details are provided below.

  • Change the primary interaction profile: There is a choice between visual, where the GUI is displayed on a screen, buttons are displayed to allow navigation, and textual messages and pictures are displayed, and auditory, where interactions occur through speech recognition (user to MAS interface) and text to speech (MAS to user).

  • Alter the visual profile settings: The text size of buttons, messages and other visual prompts may be changed so that the older person can read the messages and carry out effective navigation.

  • Adapt the auditory settings (Figure 9): The speed of the computer generated voice may be altered to make it easier for the older person to understand what is being said. In the future, the voice may be changed from male to female depending on the person’s preference, and the sensitivity of speech recognition may be adjusted.

Figure 9

Spoken dialogue customization. Details the spoken dialogue customization.
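
The customisation options above might be captured in a small profile object that the profile agent consults; the field names and default values below are illustrative assumptions.

```java
// Illustrative interaction profile capturing the customization options above.
class InteractionProfile {

    enum Modality { VISUAL, AUDITORY }

    private Modality primaryModality = Modality.VISUAL; // assumed default
    private int textSizePoints = 18;                     // button/message text size
    private double speechRate = 1.0;                     // 1.0 = normal voice speed

    void setPrimaryModality(Modality modality) { this.primaryModality = modality; }
    void setTextSizePoints(int points)         { this.textSizePoints = points; }
    void setSpeechRate(double rate)            { this.speechRate = rate; }

    Modality getPrimaryModality() { return primaryModality; }
    int getTextSizePoints()       { return textSizePoints; }
    double getSpeechRate()        { return speechRate; }
}

// e.g. if a person's sight degrades over time, the care provider interface
// might call setTextSizePoints(32) so that messages remain readable.
```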

If an older person’s requirements change over time, the interface adaption profile can be changed so that the person can continue to carry out and understand interactions. The next section details the three evaluations that were carried out to validate the underlying ideas, MAS and the Ambient Assisted Living Flexible Interface (AALFI).

Evaluation and results

Three evaluation exercises have been carried out: two with colleagues (outside the research team) to test the initial functionality and features, and a third that took place at Age NI headquarters, where the interface and ideas were evaluated during a workshop by potential stakeholders. A scenario, ‘Meet Bob’, was used to shape the research carried out, and the scenario is based on the real world sensor data that was processed during the evaluations. Details of the interventions and feedback messages are provided and there is a brief discussion of the technology utilised. The two evaluations conducted with colleagues are detailed first; these initial evaluations proved to be positive and laid the groundwork for the final evaluation conducted at the workshop. The final evaluation (Evaluation 4) was conducted across two workshop sessions and was attended by older people. This evaluation contributed to the validation of the interaction methods, the perceived flexibility of AALFI and the underlying MAS, and the current strategies for assisting an older person in their own home.

The evaluation scenario

The scenario detailed below, ‘Meet Bob’, was formed by analysing sensor data (an extract is provided in Figure 10) gathered from a smart home during the course of several days. The Smart Home was single occupancy and the older person did not have any pets or visitors. The sensor data is consumed by the agents of the AALFI prototype so that it is possible to simulate a Smart Home scenario, observe whether the agents function as expected and check that the correct assistance and interface adaptions are provided.

Figure 10. Data extract for providing feedback for night time events (complementing the intervention type assistance offered by NOCTURNAL).

The resulting feedback for the sensor data extract (Figure 11) highlights the detected issue (using the WC several times during the night) and offers a solution (not drinking before bed).

Figure 11. Feedback for WC use during the night.

The sensor data extracts were used to build the scenario for the fictional older person ‘Bob’. For each of the issues that Bob faces, there is corresponding sensor data from the Smart Home.
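As an illustration of how a feedback message such as the one in Figure 11 might be derived from night-time sensor events, the sketch below counts WC visits during an assumed night window and produces a feedback string once a threshold is exceeded. It reuses the hypothetical SensorEvent record from the earlier sketch; the night window, threshold and message wording are assumptions, not the rule actually encoded in AALFI.

```java
import java.time.LocalTime;
import java.util.List;

// Minimal sketch of a night-time WC feedback rule. The night window (22:00-07:00),
// the visit threshold and the message text are illustrative assumptions.
public class NightWcFeedbackRule {

    private static final LocalTime NIGHT_START = LocalTime.of(22, 0);
    private static final LocalTime NIGHT_END   = LocalTime.of(7, 0);
    private static final int VISIT_THRESHOLD   = 3;

    /** Returns a feedback message if the WC was visited repeatedly during the night, otherwise null. */
    public String evaluate(List<SensorEvent> nightEvents) {
        long wcVisits = nightEvents.stream()
                .filter(e -> "wc".equals(e.location) && "room_visited".equals(e.event))
                .filter(e -> isDuringNight(e.timestamp.toLocalTime()))
                .count();

        if (wcVisits >= VISIT_THRESHOLD) {
            return "You visited the WC " + wcVisits + " times last night. "
                 + "Avoiding drinks just before bed may help you sleep through the night.";
        }
        return null; // no issue detected for this night
    }

    private boolean isDuringNight(LocalTime time) {
        // The window spans midnight, so it is the union of two intervals.
        return !time.isBefore(NIGHT_START) || time.isBefore(NIGHT_END);
    }
}
```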

Scenario: “meet Bob”

A scenario is considered for the evaluations of AALFI where the daily and nightly activities of a fictional older person named Bob are detailed and issues that may be encountered are outlined. The scenario has four key parts: (1) the person, their circumstances and issues; (2) the environment; (3) issues during the day, providing an insight into the typical day of an older person; (4) issues during the night, offering an insight into the issues an older person may face during the night.

(1) The Person: A fictional older person named Bob lives alone. He has several close friends and a son who visits several times a month. Bob is a keen baker and has an interest in history and genealogy. Bob has mild memory issues and has difficulty reading small text and therefore wears glasses.

(2) The Environment: Bob’s home, a single-storey dwelling, has been fitted with several types of sensors, including door contact sensors that are attached to the back, cupboard and fridge doors and generate door opening and closing events. PIR sensors are located in the hall, kitchen, living room, WC, master and guest bedrooms and generate ‘room visited’ events. Bed-chair pressure sensors have been placed in Bob’s bed, generating bed-chair in and out events. These sensors are used together to detect changes of context in the environment. Touch screen interfaces are located in the kitchen and beside the bed in the living room, and each touch screen has a microphone and speakers.

(3) Issues (Day): Due to several health issues Bob sleeps in a bed in the living room rather than in the master bedroom, and on occasion Bob will have trouble waking and may spend most of the day in bed. During the course of a typical day, Bob may forget to close the back door and this can sometimes result in security issues; in the past a stray dog has wandered in and made a mess of the kitchen. On occasion he will open fridge and cupboard doors and forget to close them, resulting in several food items spoiling and an increase in energy use. When Bob visits the downstairs WC, he sometimes forgets to flush the WC and wash his hands. Bob likes to keep in contact with his primary care contact on a Monday, Wednesday and Saturday so that they can arrange any activities and outline any issues that he is having. Bob may forget to eat regular meals during the day and this has led to increased weight loss.

(4) Issues (Night): Bob often goes to bed at 10:00 pm and during the night he usually has several restless periods where he moves about a lot in the bed. In the early hours of the morning he may sit up in bed and feel disorientated and distressed. If Bob gets up and leaves the bed, he will go to the kitchen. While Bob is in the kitchen he will sit for a long period of time and drink cups of tea. When Bob eventually returns to the bedroom and goes back to bed he will often awake again after a short period of time and have to get out of bed and go to the downstairs WC.

Scenario: discussion

The target user of AALFI is an older person with mild cognitive issues, such as forgetfulness, who is either short- or long-sighted. The ‘Issues (Night)’ part of the scenario deals with the night time period and the activities that the person may require assistance with. Sensor data for the night time period has been analysed and common issues that older people may face during the night have been researched; this helps to determine the feedback that is offered the following day to highlight issues that occurred during the night. The ‘Issues (Day)’ part of the scenario is designed to emulate a typical day that a person may have and show the types of activities with which they can be supported: reminders to consume regular meals at set times during the day, identifying potential security and safety issues and reminding the person to carry out particular tasks.

Scenario: hardware and technology

AALFI and the associated MAS were installed on a 10 inch touch screen computer with speakers attached so that voice prompts could be heard over the background noise of the testing environment. During the introductory and background phases of the evaluation at the workshop, slides were presented using a projector. While workshop questionnaire 2 was completed, a live demo was carried out where the Tablet was connected to the projector and the interface projected onto the screen. A microphone was used during the auditory modality demo and speakers were used to help the participants hear the auditory output. The speakers and microphone were required due to technical limitations of the Tablet hardware.

Scenario: intervention and feedback details

This section details the intervention (Table 3) and feedback (Table 4) messages that were utilised during the three evaluations.

Table 3 Intervention details
Table 4 Feedback message details

The outlined intervention and feedback message triggers correspond to the real world sensor event data that was processed during the evaluation. The feedback detailed (Table 4) complements the intervention assistance that NOCTURNAL provides with feedback assistance to help reinforce the issues that have been detected and encourage the older person to think about solutions that they may implement to overcome the issues.

AALFI contributing to the concept of flexibility

AALFI contributes to the concept of flexibility by (i) offering interaction strategies that can be tailored to an individual’s preference, (ii) offering two types of assistance, (iii) offering the option to further adapt the interaction techniques to changing requirements and preferences, (iv) in the future, allowing others to access the assistance, (v) offering the possibility of assisting other groups of people, and (vi) adapting the assistance offered based on the time of day.

(i) A key concept of flexibility is tailoring the interaction method to an individual’s specific requirements. AALFI allows these preferences to be set up so that a person can choose between visual interactions (using the touch screen and reading assistance messages from the screen) and auditory interactions (speaking to AALFI and listening to assistance messages). VoiceXML technologies are utilised, which have previously only been used in call-centre type applications. The auditory interactions mirror the visual interactions that an older person may carry out and allow them to receive the available assistance.

(ii) The assistance strategy is flexible: two types of assistance are offered, intervention assistance for issues that require immediate attention and feedback assistance that details historical issues. The method of portraying the assistance is tailored to the message being put forward, in that intervention assistance makes use of clear, readable text-based messages. For feedback assistance, a combination of text and pictures is used, as the picture is thought to encourage thought and reinforce the text portion of the message.

(iii) AALFI contributes to flexibility by allowing the interaction techniques to be further adapted to the older person’s changing preferences: they may at any time choose to change from receiving visual interaction to auditory interaction, to receive both visual and auditory interactions at the same time, or to use only one interaction method. As an older person ages, their interaction requirements may change over time; AALFI allows the requirements profile to be further adapted to take these changes into account so that they may continue to receive assistance. Who carries out these changes is also flexible, as either the older person or the primary care provider may make them at any time.

(iv) A feature that is being investigated as future work is to add flexibility to who can access AALFI. Currently only the older person has access to the assistance that is offered by AALFI. In the future a care provider, family member, friend or health professional may be given access to a tailored version of the assistance messages so that they are able to see how the older person is doing with regard to their health and wellbeing. AALFI’s interfaces are designed in such a way that this will be relatively straightforward, and the choice of JADE as the underlying MAS architecture allows access to the assistance messages over a network and the wider internet. Interaction restrictions can be added so that only authorised users have access to the assistance messages, and the level of detail contained in a message can be tailored to the person who is accessing it.

(v) Currently AALFI offers assistance to older people in their own home; however, the underlying architecture is flexible, as the assistance may be adapted so that non-older people such as children or persons with disabilities may receive assistance. This may be achieved by ensuring that the sensor data that is consumed is in a pre-set format. The interaction methods may then be adapted for these other groups of people.

(vi) The last way in which AALFI contributes to flexibility is that the assistance offered throughout the day is adapted to the current time of day, the situation and the message that needs to be conveyed to the person. For example, throughout the day the older person is given meal reminders on entering the kitchen: breakfast in the morning, lunch in the afternoon and dinner in the evening. However, at night when the older person enters the kitchen, they are reminded of the importance of sleep and a suggestion is made that they return to bed. This flexibility ensures the assistance being offered is relevant to the current situation and that it has the desired effect on the older person’s activities and behaviour. A minimal sketch of this time-of-day selection follows this list.

Validation of perceived flexibility

AALFI has been demonstrated to 18 people; the first workshop was attended by health professionals, older people and care providers, and the second and third workshops were attended exclusively by older people. The participants were able to understand the assistance that was offered during the demonstrations and felt that the personalisation options were adequate for different older people. It is this heterogeneity of potential users, able to provide detailed feedback on the flexible features, that has helped to validate the perceived flexibility of AALFI. The older people were also able to see how AALFI could be applied to different situations, such as helping non-older people. Full details of the workshops are presented in Section "Evaluation 1 and 2 details" below.

Evaluation 1 and 2 details

The first two evaluations were designed to validate the features and functionality of AALFI and the underlying MAS before carrying out validation with potential stakeholders. This initial validation was considered important as it allowed any underlying issues to be detected and solved. The participants consisted of 7 colleagues (outside the research team) from different research backgrounds, each with different experience and knowledge of computer interfaces, multi-agent systems and older person issues. By validating AALFI with participants from a broad range of backgrounds, issues with features, functionality and ideas could be detected by participants who may not be experts in a particular related research area. Each participant was allocated a 15–20 minute time slot and asked to complete a questionnaire to validate the usability of several key areas. These first two evaluation iterations conducted with colleagues were designed to help find any issues with the assistance being chosen in relation to scenario activities and events recognised from the corresponding sensor data, and to discover any possible usability issues before carrying out evaluations with potential stakeholders during the main evaluation conducted at the Age NI workshop.

During the validation exercise participants were asked to view two demo videos that were recorded of AALFI consuming sensor data and offering intervention and feedback assistance in relation to the scenarios; the first related to the day part of the scenario and the second to the night part of the scenario, looking at the feedback that would be provided to an older person the following day. The videos showed the interface interactions for four tasks: (1) viewing and navigating intervention messages, (2) viewing and navigating feedback messages, (3) navigating pictures and photos and (4) listening to calming music. After the videos had finished the participants were asked to complete the questionnaire to assess the usability of the interface for the following five areas. (1) Features and Functionality: five questions designed to measure the usability of the features and functionality of the interface in relation to reaching user goals, supporting the interface workflow, carrying out frequently used tasks, the level of expertise required to carry out tasks and how easy the buttons are to use. (2) Main Person Interface: three questions to determine the usability of the main interface, assessing the clearness of the interface layout and the effectiveness of directing the user to particular tasks. (3) Navigation: nine questions to assess the usability of the interface in relation to navigation, including how easy it is to access and navigate the interface, the structure of the interface, the clarity of buttons and any displayed text, whether the interface structure was clear and how easy it is to navigate the various parts of the interface. (4) Content and Text: four questions dealing with the content of the interface and the text that is displayed, assessing how appropriate the text is, the terminology and language used and the content of the text. (5) Performance: the last three questions assessed the performance of the interface, concentrating on how the interface performed in relation to pauses, errors and readability issues and the configuration of the interface.

During the questionnaire phase of the validation exercise the participants were able to ask questions relating to the interface and functionality and to see a live demonstration of particular interface functions. Once the questionnaire had been completed the interface was awarded an overall usability score of either very poor (less than 29), poor (between 29 and 49), moderate (between 49 and 69), good (between 69 and 89) or excellent (more than 89). These usability bands are taken from the chosen UX design template [44], which was adapted for use in Evaluations one and two. The template was chosen as it was found to be effective for evaluating AALFI and it provided clear guidelines and a method to automatically calculate metrics relating to usability.
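A minimal sketch of this banding is shown below. The band boundaries follow the description above; how scores that fall exactly on a boundary are classified is an assumption made for this example, as the template [44] is not reproduced here.

```java
// Sketch of the overall usability banding used in Evaluations 1 and 2.
// Boundary handling (scores of exactly 29, 49, 69 or 89) is an assumption.
public class UsabilityBand {

    public static String bandFor(int totalScore) {
        if (totalScore > 89) return "Excellent";
        if (totalScore > 69) return "Good";
        if (totalScore > 49) return "Moderate";
        if (totalScore >= 29) return "Poor";
        return "Very poor";
    }

    public static void main(String[] args) {
        // Example: a participant total of 75 falls in the 'Good' band.
        System.out.println(bandFor(75));
    }
}
```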

Results: evaluation 1 and evaluation 2

This section details the results for Evaluations one and two (Table 5), and Figure 12 shows the total usability score given by each participant after the results for each question were checked and collated.

Table 5 Results from evaluation one and two
Figure 12. Usability results for evaluations 1 and 2.

Results discussion

The results for the first evaluation were positive, with a ‘good usability’ level being reached. Issues that were identified during the evaluation include the loudness of the auditory music interaction, which highlights an underlying issue with the Tablet technology and its built-in speakers. Text for the intervention messages was not centred and scroll bars had to be used, and it was suggested that an older person may have difficulty scrolling the text. As a result of the evaluation the text size for all visual profiles was increased and the layout was improved so that textual messages could be displayed without the need for scroll bars. The Tablet was augmented with external speakers and a microphone so that AALFI and the MAS could be evaluated without being impacted by the Tablet computer’s technical limitations. The results are thought to be positive as the issues that were identified with text, message scrolling and layout were fixed and these improvements did not introduce any new usability issues. It was planned for this evaluation to fully test the expansion of the auditory modality that would allow simple commands to be spoken to the interface and intervention and feedback messages to be read out to the person. However, there were issues with the speech recognition and the microphone at the time of the evaluation and therefore this was not tested. These issues provided an opportunity to go back to the underlying code and identify ways to improve the speech recognition.

Evaluation and results: evaluation 3

The aim of the third evaluation was to evaluate all features and functionality of AALFI, including the visual and auditory interaction modalities, providing a person with intervention and feedback messages, carrying out navigation of the interface and assessing potential stakeholders’ views on key ideas and issues. The evaluation was carried out during the day time period so that the day intervention and feedback assistance could be evaluated by the potential stakeholders. During the live demo of AALFI, data sets from the scenarios were used to simulate the intervention and feedback assistance that would be offered and to show the interface adaptations that would occur. Feedback based on night time events was also presented, as the scenario data utilised during the live demo of AALFI included night time events and activities carried out by an older person.

Participant details and method for evaluation 3

During the course of the evaluation the participants were asked to complete two questionnaires; the first dealt with the underlying ideas behind the research and their views on related subjects. The second questionnaire was completed during a live demo of AALFI where the usability of the visual and auditory modalities was assessed. There were 11 participants at the workshop from a wide range of backgrounds including older people, health professionals, carers and subjects with dementia (the results from the subjects with dementia are not used as their participation was not expected and the relevant approval was not in place; results from 9 participants are included in the evaluation).

Questionnaire 1: demographic details, thoughts on the research ideas and assistive technologies

The participants were asked to complete a questionnaire to gather demographic information and assess their views on the research area and ideas. The demographic information included three pieces of information: (i) the participant’s age, which may be useful for determining whether the participant falls within the target user group; (ii) gender, as different genders may respond differently to visual and auditory interactions and may have different views on assistive technology; (iii) with whom the participant lives, which may be helpful for identifying future development opportunities such as multiple user occupancy. To help keep the evaluation process anonymous, the participants were not asked for their name, occupation or any other personally identifiable information. The participants were asked a number of questions to assess their views on ‘assisted living’ and their general attitude towards assistive technology. An area of assistance that has been considered is reminders to carry out activities and actions in response to detected events and changes of context. The participants were asked a number of questions to determine how forgetful they are during the day (to help assess the value of reminder-based assistance) and how complex the feedback messages should be for visual and auditory interactions, as the complexity may be important to ensure the subject is able to understand the messages being conveyed. The last set of questions dealt with night time, to help determine how the participants sleep and to gauge the complexity of messages offered during this period of time. The live demo of AALFI was conducted in two parts to showcase the different interaction modalities; details follow.

Questionnaire 2: visual modality

During the demo of the visual modality the participants were shown the interactions that occur for the intervention functionality (Figure 13); intervention messages were displayed and navigated by pressing the next and previous buttons. In total five messages were shown to the participants and each message corresponded to a detected event from the data.

Figure 13. Screenshot of the intervention demo showing the closing backdoor intervention (demo was in colour).

Next the participants were shown the feedback functionality, which includes a picture and text to represent the feedback being offered (Figure 14). Four feedback messages were shown to the participants and they were asked how useful they found the feedback and to identify any usability issues. Once the feedback demonstration had been completed the participants were asked to listen to the auditory modality demonstration, where simple commands were issued to the interface and the corresponding feedback and intervention messages were spoken by the interface.

Figure 14. Screenshot of the feedback part of the demo showing a meal reminder (demo was in colour).

Questionnaire 2: auditory modality

The auditory evaluation was divided into two parts: the first dealt with the auditory interaction for interventions and the second with the auditory interaction for feedback. Simple commands were issued to AALFI to show participants how to initiate the interaction process and hear the intervention and feedback responses.

The demonstration was designed to emulate the functionality offered by the visual modality. The keywords spoken to initiate an interaction include ‘Hello’ (to wake the interface from the ‘waiting loop’ state), ‘Intervention’ to load the intervention menu, ‘Feedback’ to listen to the feedback menu, ‘Current’ to listen to the current intervention or feedback message, ‘All’ to hear all feedback and intervention messages, ‘Last’ to listen to the last feedback or intervention message and ‘Exit’, which, depending on the current menu, either exits to the first menu or returns AALFI to the waiting state.
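A minimal sketch of this spoken-command menu is given below. The state names, prompts and transition handling are assumptions made for the example; the actual dialogue is implemented with VoiceXML and speech recognition rather than this plain Java loop (see the dialogue transcripts in the Endnotes).

```java
// Minimal sketch of the spoken-command menu described above. State names and
// prompts are assumptions; AALFI's actual dialogue uses VoiceXML and speech
// recognition rather than this plain keyword dispatcher.
public class AuditoryMenu {

    private enum State { WAITING, MAIN_MENU, INTERVENTIONS, FEEDBACK }

    private State state = State.WAITING;

    /** Handles one recognised keyword and returns the prompt to speak back. */
    public String onKeyword(String keyword) {
        String word = keyword.trim().toLowerCase();
        switch (state) {
            case WAITING:
                if (word.equals("hello")) {
                    state = State.MAIN_MENU;
                    return "Welcome to the intervention and feedback messages. "
                         + "Say interventions, feedback, or exit.";
                }
                return ""; // stay silent until woken by the 'hello' keyword
            case MAIN_MENU:
                if (word.equals("intervention") || word.equals("interventions")) {
                    state = State.INTERVENTIONS;
                    return "Intervention messages. Say current, all, last, or exit.";
                }
                if (word.equals("feedback")) {
                    state = State.FEEDBACK;
                    return "Feedback messages. Say current, all, last, or exit.";
                }
                if (word.equals("exit")) {
                    state = State.WAITING;
                    return "Goodbye.";
                }
                return "Please say interventions, feedback, or exit.";
            default: // INTERVENTIONS or FEEDBACK sub-menu
                if (word.equals("exit")) {
                    state = State.MAIN_MENU;
                    return "Returning to the main menu.";
                }
                // 'current', 'all' and 'last' would look up the stored messages here.
                return "Reading the requested message...";
        }
    }
}
```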

This section provided an insight into the demonstration that was carried out during the workshop and details of the results for Questionnaire 1 and 2 follow.

Evaluation 3 results: Questionnaire 1

The results (Table 6) are interesting as it is apparent that older people may not be resistant to assistive technologies if they are useful and there is a clear benefit to the person. Support during the night may be beneficial as half the participants sleep quite badly, a quarter sleep quite well and all the participants would make use of either visual or auditory interactions during the night. The offered feedback assistance can outline these issues with sleep and this may help an older person to think about why issues with sleep are occurring. This supports the current research idea to provide assistance during the day and provide feedback assistance based on night time events and activities. AALFI complements the research carried out by the successfully completed NOCTURNAL project [38], where night time assistance was provided to subjects with dementia. The difficulties that an older person may face during the night and the importance of night time events and activities are further discussed in [45], where research relating to night time assistance for an older person with dementia is reviewed along with the types of assistance that they may require, including guidance to different locations using lights, playing calming music to assist with restlessness and determining why an older person may be awake. The majority of participants favour basic interactions during the day and night (Figure 15) and this result supports the idea to keep interactions simple.

Table 6 Questionnaire 1 results overview
Figure 15. Complexity of interactions.

An idea that underpins the visual interactions that occur is to keep them as simple as possible so that an older person is able to understand the intervention and feedback messages that are being put forward to them.

In contrast to the complexity of visual interactions, the participants would favour basic auditory interactions during the night and complex auditory interactions during the day. This result is interesting as it shows that the auditory interactions during the day could be made more complex to allow more features and functionality to be added. It was originally thought that carrying out auditory interactions during the night might cause an older person distress; however, from this sample of results it is clear that this may not be the case and that auditory intervention during the night may be of benefit to an older person who is not able to carry out visual interactions. The results from this questionnaire were useful for finding out about potential stakeholders, the issues that they may face and how they view the complexity of interactions; the next section details the results for questionnaire 2.

Evaluation 3 results: Questionnaire 2

The results for questionnaire 2 are detailed in Table 7; a majority of the participants thought the idea of making use of adaptable interfaces was very good and none thought it was quite or very poor.

Table 7 Questionnaire 2 results

The results for the adaptive interface question are shown in Figure 16; each of the answers from the questionnaire was rated with a score of 1 to 4 (1 being very poor, 4 being very good).

Figure 16. Participant views on the adaptive interface.

This was a very positive result as it helped to validate the underlying idea of implementing an AAL system where older people carry out interactions with an adaptive interface.

AALFI provides intervention, feedback, reminder and picture display functionality and the results (Figure 17) show that intervention, feedback and reminders are considered to be useful and the picture functionality (classed as other) may be less useful. It is important to understand what potential stakeholders do value and this result will help to drive future work into advancing the functionality of AALFI.

Figure 17. Valued features.

Evaluation 4

Evaluation 4 (EV-4) was carried out across two sessions which occurred on the same day. The first session was attended by 4 participants and the second session by 6 participants; the same questions and workshop format were followed at each session. All of the participants were of the target age for the research, over 60 years of age.

The participants were asked for non-personally identifiable information, including their age, their gender and whether they live alone. In order to keep the results anonymous, the participants were not asked their name, address or anything else that could identify them.

The workshop questions and results are detailed in Table 8. These results represent the quantitative results as they allow for key research ideas, features and functionality to be measured.

Table 8 Quantitative results: evaluation 4 (EV-4) – (presented in a Thesis)

The first four main questions assessed how the workshop participants view assistive devices and adaptive interfaces for themselves and for others such as friends and family. A score of 90% for question one was achieved and this is thought to be positive as it shows that older persons will make use of assistive technology. For the second question, 90% of the participants answered that they would make use of an assistive device during the day and night; this validates the research idea to provide assistance that deals with both the day and night time periods. 100% of participants thought that the concept of an adaptive interface was very good. For the next question, 70% of the participants thought that the adaptive interface was very appropriate while 30% felt it was appropriate for other older people. This result shows that when the older people themselves are going to use an assistive device they think it is very appropriate; however, when they are thinking about friends and family making use of the interface, they have mixed feelings. Questions 5–9 assessed specific features of AALFI that relate to flexibility, including the interaction method and the type of assistance that is offered in relation to the activities and actions an older person carries out and the resulting detected changes of context. For question 5, 100% said that they would make use of intervention and feedback assistance. This result validates offering two types of assistance to an older person and improves on previous assistance strategies where an older person is only offered intervention type assistance. Question 6 asked them to rate the usefulness of the voice operated interface, with 80% thinking it was very useful and 20% that it was useful.

The result is positive as the participants were able to overlook the current limitations of the voice interface, including the robotic voice and harsh tone. In order to aid further improvement of the voice interface, the participants were asked to choose which feature should be improved: 40% thought that the volume of the voice could be improved, 40% that the tone could be improved and 20% of the participants did not have an opinion. Question 8 assessed the participants’ preferred method of receiving assistance. A majority, 50%, chose picture and text interactions (feedback), 10% chose text interactions (interventions) while 30% opted for auditory interactions. This shows that there may be scope to add pictures to the text interaction technique to further emphasise the assistance that is being offered. The last question was designed to gauge how easy to use the participants would find the adaptive interface: 10% felt that it would be very easy, 50% thought it would be quite easy, 30% thought it would be quite difficult and 10% thought it would be very difficult. This result shows that overall a majority of older persons would find the adaptive interface easy to use; however, there would need to be clear guidance and training provided so that an older person could get the most out of the interface.

Qualitative results

This section details the qualitative results, where the participants were asked to provide an opinion. “Seems to open a whole range of useful interventions”. This supports the use of intervention assistance and provides an insight into the types of interventions that an older person may like to see, including medication and reminder type assistance:

“I regularly take medication and sometimes I may forget to take the medication or take the wrong dose”. “I currently live in a Fold and there have been occasions where a person has passed away or been unable to leave their bed and as the care taker does not check on residents during the weekend, this has gone undiscovered… would there be a way to alert a family member, friend or carer that a person has not left their bed in several hours”. Currently AALFI is able to detect movement so that feedback assistance may be offered in relation to how well an older person sleeps; however, this may be extended so that if an older person does not leave their bed in the morning, an alert could be sent to a family member, friend or care provider and outside assistance could be provided. The next statement is interesting as it represents a common view that an older person may not feel old and therefore may not think they need assistance at the current time: “Think it would be useful if they needed it later”. From the discussions that were carried out it became apparent that older people may like to have the assistance installed in their home as early as possible; even after expressing the view above, several older people felt that having AALFI installed as early as possible would be of great benefit, “Having it installed early, getting used to it would mean “this was normal” not forced on me…”, as it would allow them to get used to making use of it and to have time to learn about all the features and functionality, helping to overcome any current anxiety about assistive devices. The flexibility of the assistance strategies and interaction methods means that the older people may choose not to receive specific assistance and can further refine the interaction methods.

As previously discussed an underlying flexible characteristic of AALFI that may be explored further in the future is the ability to adapt the assistance based on the current sensor data. As long as the sensor data that has been gathered is of the correct format, AALFI may be able to consume it and offer the correct assistance. The participants at the workshop appreciated this future flexibility and would welcome assistance for other groups of people, “Could assistance be offered to people who are not old, but may have other problems or other disabilities…?”

“Not having to rely on family members…” “Give peace of mind to relatives as they may not be nearby. Be safer for the older person…” AALFI is flexible in that family members, care providers or friends may be given access to a subset of the assistance so that they are able to track the health and well-being of the older person and be given peace of mind.

“Lengthening the time of self-reliance…” “I can see where this would be of benefit for older independent people.” “I would feel able to reflect on ‘oh I did not have a good night’s sleep’”. The flexible nature of the assistance means it can both highlight recurring issues and provide support for events and activities as they occur in near real time. “Thought I was dreaming only…” “To recognise any issues needing to be addressed”. This highlights the usefulness of feedback as a tool for drawing recurring issues to the older person’s attention, and this is reliant on the flexibility of the MAS to choose the correct assistance strategy.

“Not being dependant on glasses when one does not wear them 24 hours a day…” “Another voice when living alone and attracts attention in the first instance…”

These statements are thought to support the underlying flexibility to personalise the interaction method based on the older person’s current interaction requirements and personal interaction preferences.

The results give a snapshot of the functionality and features that older persons, health professionals and care providers value. In this case, with the group of people that completed the questionnaire, a particular functionality that was valued is reminders; however, they also value intervention and feedback functionality. The current intervention messages contain several reminders, including a reminder to close the back door after a set period of time, having breakfast during the morning period and washing hands; however, there is scope to expand the reminder capability to include other reminders such as getting up at a specific time and carrying out further activities of daily living. The result for the auditory voice interface was encouraging: the current implementation has several limitations, including an unnatural monotone voice that on occasion is difficult to follow, yet the participants were able to look past these issues and determine that auditory/voice based interaction would be useful.

The implementation will be further refined in the future to overcome these issues and provide a more natural interaction method. The visual modality of the adaptive interface achieved a positive result as the majority of people found that they would not find the interface difficult to use and of the 3 who said it would be difficult, one said that they may find it less difficult over time. This result provides a basis for refinement to improve the usability of the visual modality. The next section provides a conclusion to this article.

Conclusions

The state of the art shows that, despite the intense and productive work in AAL, there are still several underlying issues that can result in the assistance and support provided being inappropriate and not understood by the subject. For example, the systems do not provide the means to tailor the assistance and support to an individual’s requirements and therefore the user may not understand the feedback.

An AAL system may provide assistance and support but not keep a record of what is occurring; the subject therefore does not get any meaningful feedback and may not be aware of any issues with the actions that they are carrying out. The system that has been developed provides a user with assistance and support that is tailored to their specific requirements. This helps to ensure that they are either able to read messages and interact with the interface during visual interaction or speak simple commands and hear simple prompts when auditory interaction is being used. Feedback can help the user identify and solve any recurring issues that have been identified with their actions or activities.

The flexibility displayed by AALFI encompasses the personalisation of the interactions in relation to the older person’s requirements and changing requirements. The context aware characteristics that the MAS displays, including the ability to choose the correct interactions and to adapt the assistance offered in relation to the current time, the activity carried out by the older person or the detected event, and the number of times an event has occurred, help AALFI to provide flexible assistance and interactions. With this flexibility AALFI is able to provide the older person with the correct assistance, at the correct time, and to adapt the interaction method for offering the assistance to the older person’s requirements profile.

The research ideas, MAS and associated adaptable interface (AALFI) have undergone several steps of validation, including workshops with older people, care providers and health professionals. The workshops produced interesting results: older people are not afraid of technology and can appreciate it if it serves a meaningful purpose. The current interface and auditory interactions have been designed to be simple so that they are easy to understand, and the participants felt that this was the correct route to take. Feedback and interventions are offered to the person during the day, while at night the MAS is aware of activities and events so that feedback may be offered the following day. AALFI complements the night-time assistance that is provided by NOCTURNAL with day time interventions, several new night time interventions and feedback assistance that is based on day and night time activities and events. Once the results were evaluated, it became apparent that even though the participants were not forgetful, they placed a high value on reminders; therefore, when the current interventions are being revised and improved, more reminder interventions may be added. The comments, results and observations from the workshops will influence further development of the visual interface and refinement of the speech based interaction method.

Endnotes

a. VoiceXML: http://www.w3.org/TR/voicexml21/.

b. Carnegie Mellon University. CMU Sphinx. Open Source Toolkit For Speech Recognition. http://cmusphinx.sourceforge.net/wiki/research/.

c. FreeTTS, http://freetts.sourceforge.net/docs/index.php.

d. Windows 7, portable 10 inch Tablet PC, 32GB solid state hard drive, 2GB system memory, capacitive touch screen.

e. Intervention Dialogue Test.

Subject: Hello (this is the keyword to start the interaction process).

System: Welcome to the intervention and feedback messages.

System: (pauses for 10 seconds).

System: Ok, I have 3 choices for you, if you wish to listen to interventions, say the word interventions, to listen to feedback, say the word feedback, to exit, say the word exit.

Subject: Interventions.

System: Intervention messages.

System: To listen to the current intervention message, say the word current.

System: To listen to all the intervention messages, say the word all.

System: To hear the last intervention message, say the word last.

System: To exit the intervention menu, say the word exit.

Subject: Current.

System: Ok, the current intervention is as follows (pause).

System: The back door has been left open for over 10 minutes; it is recommended that you close the backdoor.

f. Feedback Dialogue Test.

Subject: Hello (this is the keyword to start the interaction process).

System: Welcome to the intervention and feedback messages.

System: (pauses for 10 seconds).

System: Ok, I have 3 choices for you, if you wish to listen to interventions, say the word interventions, to listen to feedback, say the word feedback, to exit, say the word exit.

Subject: Feedback.

System: Feedback messages.

System: To listen to the current feedback, say the word current.

System: To listen to all the feedback messages, say the word all.

System: To hear the last feedback message, say the word last.

System: To exit the feedback, say the word exit.

Subject: Last.

System: Ok, the last feedback message is as follows (pause).

System: Please remember that during the morning, breakfast is an important meal.

References

  1. Busemeyer MR, Goerres a, Weschle S: Attitudes towards redistributive spending in an era of demographic ageing: the rival pressures from age and income in 14 OECD countries. J Eur Soc Policy 2009, 19: 195–212. doi:10.1177/0958928709104736


  2. D’Andrea A, D’Ulizia A, Ferri F, Grifoni P: A multimodal pervasive framework for ambient assisted living. Proc 2nd Int Conf PErvsive Technol Relat to Assist Environ - PETRA ’09 1–8 2009. doi: 10.1145/1579114.1579153


  3. Chun-dong W, Xiu-liang M, Huai-bin W: An Intelligent Home Middleware System Based on Context-Awareness. 2009 Fifth Int. Conf. Nat. Comput. Tianjin: IEEE; 2009:165–169.


  4. Augusto JC, Carswell W, Zheng H, et al.: NOCTURNAL Ambient Assisted Living. In Proc. Second Int. Conf. Ambient Intell. 7040 edition. Edited by: Keyson D, Lou MM, Streitz N. Berlin, Heidelberg: Springer-Verlag; 2011:350–354.


  5. Kartakis S, Stephanidis C: A design-and-play approach to accessible user interface development in ambient intelligence environments. Comput Ind 2010, 61: 318–328. doi:10.1016/j.compind.2009.12.002


  6. Tamir DE, Mueller CJ: Pinpointing usability issues using an effort based framework. 2010 IEEE Int. Conf. Syst. Man Cybern. Istanbul: IEEE; 2010:931–938.


  7. Madiah M, Hisham S: User-interface design: A case study of partially sighted children in Malaysia. 2010 Int. Conf. User Sci. Eng. Shah Alam: IEEE; 2010:168–173.


  8. Wersenyi G: Auditory Representations of a Graphical User Interface for a Better Human-Computer Interaction. In Audit. Disp. Edited by: Ystad S, Aramaki M, KronlandMartinet R, Jensen K. HEIDELBERGER PLATZ 3, D-14197 BERLIN, GERMANY: SPRINGER-VERLAG BERLIN; 2010:80–102.


  9. Bordini RH, Hubner JF, Woolbridge M: The Jason Agent Programming Language. Program. Multi-Agent Syst. AgentSpeak using Jason. John Wiley and Sons, LTD; 2007:31–68.


  10. Bellifemine F, Caire G, Poggi A, Rimassa G: Jade-a white paper. EXP Search Innov 2003, 3: 6–19.


  11. Pokahr A, Braubach L, Lamersdorf W: Chapter 6 JADEX: A BDI REASONING ENGINE. In Multiagent Syst. Artif. Soc. Simulated Organ. Edited by: Bordini R, Mehdi D, Jürgen Dix AEFS. US: Springer; 2005:149–174.


  12. Augusto JC, Zheng H, Mulvenna MD, et al.: Design and Modelling of the Nocturnal AAL Care System. In ISAm. Edited by: Novais P, Preuveneers D, Corchado JM. Berlin Heidelberg: Springer; 2011:109–116.


  13. Sebbak F, Mokhtari A, Chibani A, Amirat Y: Context-aware ubiquitous framework services using JADE-OSGI integration framework. Mach. Web Intell. (ICMWI), 2010 Int. Conf. Algiers: IEEE; 2010:48–53.


  14. Griss ML, Fonseca S, Cowan D, Kessler R: SmartAgent: extending the JADE agent behavior model SmartAgent: extending the JADE agent behavior model. 2002.


  15. Hristova A, Bernardos AM, Casar JR: Context-aware services for ambient assisted living: a case-study. Appl Sci Biomed Commun Technol 2008 ISABEL’08 First Int Symp 2008. doi: 10.1109/ISABEL.2008.4712593


  16. Jih W, Hsu JY, Wu CL, et al.: A multi-agent service framework for context-aware elder care. Hokkaido, Japan: AAMAS-06 Work. Serv. Comput. Agent-Based Eng; 2006:61–75.


  17. Butz A: User Interfaces and HCI for Ambient Intelligence and Smart Environments. In Handb. Ambient Intell. Smart Environ. 1st edition. Edited by: Nakashima H, Aghajan H, Augusto JC. USA: Springer; 2010:535–558.


  18. Dumas B, Lalanne D, Oviatt S: Multimodal interfaces: a survey of principles, models and frameworks. In Hum. Mach. Interact. Edited by: Denis L, Jürg K. Berlin, Heidelberg: Springer; 2009:3–26.


  19. Blumendorf M, Albayrak S: Towards a Framework for the Development of Adaptive Multimodal User Interfaces for Ambient Assisted Living Environments. UAHCI ’09 Proc. 5th Int. Conf. Access Human-Computer Interact. Part II. Berlin, Heidelberg: Springer-Verlag; 2009:150–159.


  20. Abowd GD, Dey AK, Brown P, et al.: Towards a better understanding of context and context-awareness. In Handheld Ubiquitous Comput. 1707 edition. Edited by: Gellersen H-W. Heidelberg: Springer Berlin; 1999:304–307.


  21. Cassens J, Kofod-Petersen A: Using activity theory to model context awareness: a qualitative case study. Proc. 19th Int. Florida Artif. Intell. Res. Soc. Conf. Florida, USA: AAAI Press. Citeseer; 2006:619–624.


  22. Dey AK: Understanding and using context. Pers Ubiquitous Comput 2001, 5: 4–7. doi:10.1007/s007790170019


  23. Hong D, Chiu DKW, Shen VY: Requirements elicitation for the design of context-aware applications in a ubiquitous environment. ACM, New York, NY, USA: CEC ’05 Proc. 7th Int. Conf. Electron. Commer; 2005:590–596.


  24. Mostefaoui GK, Pasquier-Rocha J, Brezillon P: Context-aware computing: a guide for the pervasive computing community. Computer.org 2004, 39–48.


  25. Hong J, Suh E-H, Kim J, Kim S: Context-aware system for proactive personalized service based on context history. Expert Syst Appl 2009, 36: 7448–7457. doi:10.1016/j.eswa.2008.09.002


  26. Steg H, Strese H, Loroff C, et al.: Europe is facing a demographic challenge ambient assisted living offers solutions. Ambient Assist Living—European Overv Rep 2006, 1–85.


  27. Sun H, Florio VD, Gui N, Blondia C: Promises and Challenges of Ambient Assisted Living Systems. Proc. 2009 Sixth Int. Conf. Inf. Technol. New Gener. 00. Las Vegas, NV: IEEE Computer Society; 2009:1201–1207.


  28. Kleinberger T, Becker M, Ras E, et al.: Ambient intelligence in assisted living: enable elderly people to handle future interfaces. Univers Access Human-Computer Interact Interact 2007, 4555(2007):103–112. doi: 10.1007/978–3-540–73281–5_11


  29. Mulvenna M, Zhen H, Wright T: Reminiscence Systems. In Proc. 1st Int. Work. Edited by: Mulvenna M, Astell A, Zhen H, Wright T. Cambridge: Reminisc. Syst. CEUR-WS; 2009:9–11.


  30. Wooldridge M, Fisher M, Huget M-P, Parsons S: Model checking multi-agent systems with MABLE. Proc first Int Jt Conf Auton agents multiagent Syst part 2 - AAMAS ’02 952 2002. doi: 10.1145/544862.544965


  31. Cook DJ: Multi-agent smart environments. J Ambient Intell Smart Environ 2009, 1: 51–55. doi:10.3233/AIS-2009–0007


  32. Pokahr A, Braubach L: A Survey of Agent-oriented Development Tools. In Multi-Agent Program. Edited by: El Fallah Seghrouchni A, Dix J, Dastani M, Bordini RH. US: Springer; 2009:289–329.


  33. Duell R, Memon ZA, Treur J: Ambient Support for Group Emotion: an Agent-Based Model. In Agents Ambient Intell. - Achiev. Challenges Intersect. Agent Technol. Ambient Intell. 12th edition. Edited by: Bosse T. IOS Press; 2012:239–260.


  34. Dovgan E, Gams M: An Access-Control Agent-Based Security System. In Agents Ambient Intell. - Achiev. Challenges Intersect. Agent Technol. Ambient Intell. 12th edition. Edited by: Bosse T. IOS Press; 2012:239–260.


  35. Das B, Narayanan C, Krishnan DJC: Automated Activity Interventions to Assist with Activities of Daily Living. In Agents Ambient Intell. - Achiev. Challenges Intersect. Agent Technol. Ambient Intell. 12th edition. Edited by: Bosse T. IOS Press; 2012:137–158.


  36. Häikiö J, Wallin A, Isomursu M, et al.: Touch-based user interface for elderly users. ACM, New York, NY, USA: Proc. 9th Int. Conf. Hum. Comput. Interact. with Mob. devices Serv; 2007:289–296.


  37. Stefano P, Bonamico C, Regazzoni C, Lavegetto F: 6 A Flexible Architecture for Ambient Intelligence Systems Supporting Adaptive Multimodal Interaction with Users. In Ambient Intell. Edited by: Riva G, Vatalaro F, Davide F, Alcaniz M. The Netherlands: IOS Press; 2005:97–120.


  38. McCullagh PJ, Carswell W, Mulvenna MD, et al.: Nocturnal Sensing and Intervention for Assisted Living of People with Dementia. In Healthc. Edited by: Lai D, Begg R, Palaniswami M. Florida, USA: Sens. Networks - Challenges Towar. Pract. Appl. Taylor and Francis/CRC Press; 2012:283–303.


  39. Portet F, Vacher M, Golanski C: Design and evaluation of a smart home voice interface for the elderly: acceptability and objection aspects. Pers Ubiquitous 2011, 1–18.


  40. Rawi M, Al-Anbuky A: Wireless sensor networks and human comfort index. Pers Ubiquitous Comput 2012, 1–3. doi:10.1007/s00779–012–0547–9


  41. Das B, Cook DD, Schmitter-Edgecombe M, Seelye A: PUCK: an automated prompting system for smart environments: toward achieving automated prompting—challenges involved. Pers Ubiquitous Comput 2011, 1–15.


  42. Kubicki S: RFID-driven situation awareness on TangiSense, a table interacting with tangible objects. Pers Ubiquitous Comput 2011, 1–6.


  43. Mulvenna M, Carswell W, McCullagh PP, et al.: Visualization of data for ambient assisted living services. IEEE Commun Mag 2011, 49: 110–117. doi:10.1109/MCOM.2011.5681023


  44. Turner N: A guide to carrying out usability reviews – UX for the masses. UX for the masses 2011. http://www.uxforthemasses.com/usability-reviews/. Accessed 9 Feb 2012


  45. Carswell W, McCullagh PJ, Augusto JC, et al.: A review of the role of assistive technology for people with dementia in the hours of darkness. Technol Health Care 2009, 17: 281–304. doi:10.3233/THC-2009–0553



Acknowledgments

We wish to thank Age NI, who facilitated the workshop by inviting the participants, providing the workspace and looking after us and the participants with ample sustenance. We acknowledge the participants at the workshop who gave up their valuable time to help evaluate the ideas behind AALFI and the visual and auditory interaction modalities.

Author information


Corresponding author

Correspondence to James McNaull.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

JMcN has published papers in the area of Ambient Assisted Living in relation to helping older people carry out activities of daily living in their own home during the day and night. He has completed a PhD research project where a Multi-agent system, the Ambient Assisted Living Flexible Interface (AALFI) and subsequent evaluations of AALFI contributed to the research into supporting older people at home. JCA has carried out research in the areas of Ambient Assisted Living, Ambient Intelligence and Smart Environments and has been involved in many successful research projects including NOCTURNAL and AALFI. He has published many significant papers which have contributed to the knowledge and understanding of Ambient Intelligence and Ambient Assisted Living and has presided as editor over several successful journals. MM has conducted research in diverse areas including Computer Science, Artificial Intelligence and Ambient Assisted Living. He is co-founder of the TRAIL Living Lab and is a member of ENoLL, the European Network of Living Labs. Research projects that he has contributed to include COGKNOW, NOCTURNAL and AALFI, and he has published many noteworthy papers in the areas of Ambient Assisted Living and Artificial Intelligence. PMcC is a reader in computer science at the University of Ulster and has made substantial contributions to several research projects including COGKNOW, BRAIN, NOCTURNAL and AALFI, which have contributed to the knowledge and understanding relating to helping older people deal with the effects of ageing. He has published many notable papers in the areas of Ambient Assisted Living, Ambient Intelligence and Pervasive Computing which have furthered people’s understanding of how older people may be assisted and what assistance they may require. All authors have read and approved the final manuscript.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

McNaull, J., Augusto, J.C., Mulvenna, M. et al. Flexible context aware interface for ambient assisted living. Hum. Cent. Comput. Inf. Sci. 4, 1 (2014). https://doi.org/10.1186/2192-1962-4-1



  • DOI: https://doi.org/10.1186/2192-1962-4-1

Keywords