A great deal has been written in recent years about the role of clinical decision support systems in medicine, an important category of which is the expert system. Expert systems normally contain an explanation module, which was the subject of considerable research interest in the 1980s, when the main problem-solving task for medical expert systems was diagnosis. Nowadays, expert systems are more likely to perform tasks other than diagnosis, yet the role of explanation in expert systems has been largely ignored in the health care literature since that time. Furthermore, users in the health care domain vary considerably and may include physicians, medical researchers, administrators, and patients. These user groups have differing levels of knowledge and differing goals, which affect the type of explanatory support a system should provide. This article examines the potential benefits of explanation facilities for a range of clinical tasks and considers the ways in which explanation facilities may be delivered so as to benefit these categories of health care user for these tasks.
- expert systems
- decision support
Explanation and Expert Systems
Much was written in the 1980s about the role of explanation facilities in medical diagnostic expert systems: MYCIN (Clancey, 1983; Shortliffe, 1981) and PUFF (Aikins, Kunz, Shortliffe, & Fallat, 1983) were both prototype medical expert systems of interest because of their explanation facilities. However, despite being widely recognized as a useful adjunct to expert systems, explanation facilities have been largely ignored in the health care literature in recent years. This is partly because explanation facilities were first used in diagnostic applications, such as the MYCIN expert system and its derivatives in the 1980s, but the clinical tasks served by expert systems have changed considerably since then (Coeira, 2003; Friedman & Wyatt, 1997; Hanson, 2006). Currently, expert systems are more likely to be used for determining drug dosing and drug prescribing or for reminding clinicians to engage in preventive interventions such as inoculations. Indeed, Hunt, Haynes, Hanna, and Smith (1998) published a systematic review of clinical trials of clinical expert systems, which found clear evidence for the effectiveness of such systems, with 43 out of 65 trials showing an improvement in clinician performance. These trials involved a variety of expert system tasks, including diagnostic aid, preventive care, and reminder systems. However, only 20% of the diagnostic aid trials in this study were effective, compared with 74% of trials based on preventive care and reminder systems. A more recent study by Garg et al. (2005) showed that clinical expert systems improved practitioner performance in 62 (64%) of the 97 studies assessing this outcome. Of these trials, 21 were based on reminder systems, with 16 (76%) deemed successful, and 10 were diagnosis based, with 4 (40%) successful. This improvement in the performance of diagnostic expert systems reflects an increasing trend in recent years, as will be shown later in this article.
Benefits of Explanation Facilities in Clinical Expert Systems
Very little attention seems to have been given to the ways in which explanation facilities could support modern expert system medical tasks. This is surprising, for Walton (1996) presents evidence suggesting that advice from a computer is more convincing when supported with explanation facilities. In the evaluation of CAPSULE, an expert system giving advice to general practitioners about prescribing drugs, the authors say that "finding the most effective way of presenting the explanation is an important goal for future studies of computer support for prescribing drugs." Many other studies have demonstrated the importance of a system being able to explain its own reasoning. For example, in a study of physicians' expectations of and demands for computer-based consultation systems, explanation was found to be the single most important requirement for advice-giving systems in medicine (Buchanan & Shortliffe, 1984). Moreover, according to Berry, Gillie, and Banbury (1995), explanation is seen as a vital feature of expert systems, particularly in high-risk domains such as medicine, where users need to be convinced that a system's recommendations are based on sound and appropriate reasoning. The inclusion of explanation facilities can enhance the quality of decision making: Gregor and Benbasat (1999) have shown that well-designed explanation facilities can improve user acceptance and performance in terms of the speed and accuracy of judgments. Furthermore, explanation facilities can lead to greater adherence to the recommendations of the expert system (Darlington, 2008; Gregor & Benbasat, 1999). In addition, according to Friedman and Wyatt (1997), expert systems raise complex legal and professional issues, in that, to avoid exposure to liability, every expert system must treat its users as a "learned intermediary." Consequently, opaque programs such as neural networks are clinically questionable compared with the transparency offered by systems with explanation facilities.
This article briefly examines the technical means by which explanation facilities can be implemented in expert systems and the components of explanation. Explanation facilities are then considered with regard to a range of problem-solving tasks in the health care domain, along with user requirements for explanation across the range of users in this domain, who may include physicians, medical researchers, administrators, and patients. The next section looks at the reasons for the low usage of explanation facilities and suggests ways to increase usage.
Usage of Explanation Facilities in Expert Systems
Explanation facilities can be viewed as an optional feature of expert systems, added without interfering with the operation of the underlying system, and although they are potentially very useful, research indicates that usage of explanation facilities is frequently very low. For example, Dhaliwal (1996) found that only 25% to 30% of available explanations were requested during consultations with an expert system, and Everett (1994) found that only about half of participants chose to view explanations at all. Some ways in which the developer can address this relative inattention to explanation facilities are described in more detail later in this article and include the access mechanism (see section titled "The Communication"), the content of the explanation delivery (see section titled "Explanatory Content"), and adaptation to the user by delivering the appropriate level of abstraction (see section titled "User Adaptation"; Berry et al., 1995). Moreover, according to Clancey (1983) and Gregor and Benbasat (1999), the default explanation types in rule-based expert systems are inadequate and should be enhanced with other explanatory content (see section titled "Explanatory Content"). These shortcomings can be addressed with some effort from the system developer without necessarily restructuring the expert system knowledge base, which could be very time-consuming. However, Berry et al. (1995) offer another reason for the low frequency of explanation use: system developers often fail to involve users in the specification and evaluation of explanation facilities in the early stages of project development. Furthermore, explanation facilities are strongly correlated with the structure and knowledge content of an expert system (Wick, 1992b), which means that the knowledge sources adopted for developing an expert system will affect the quality of the explanation facilities provided by that system. As Wyatt (1997) says, "All too often in the past the knowledge used for decision support expert systems has been acquired by a knowledge engineer from a single expert—or even by browsing out of date narrative textbooks, with all their defects" (p. 167). Understanding the difficulties of implementing explanation facilities in expert systems requires knowledge of the way the two components work together, as described in the next sections.
Explanation in Clinical Expert Systems
Characteristics of Clinical Expert Systems
The sections that follow briefly examine some of the ways in which knowledge is represented in expert systems, followed by a description of the way in which explanation facilities would normally integrate with the components of an expert system, before considering the types of explanation suitable for the health care domain.
Several knowledge representation models have been used to construct clinical expert systems, including symbolic methods, such as simple decision trees, statistical/probabilistic methods, and rule-based and frame (description logic) based expert systems (Peleg & Tu, 2006; Van Bemmel & Musen, 1997). Probabilistic methods are also quite common and include certainty factors (Shortliffe, 1981) and belief networks based on probability theory, such as Bayesian networks (Korver & Lucas, 1993; Lacave & Diez, 2004). Machine learning methods using rule induction, case-based reasoning, genetic algorithms, or a combination of these techniques can also be used to develop the knowledge bases used by medical expert systems (Coeira, 2003; Cunningham, Doyle, & Loughrey, 2003). In the medical domain, for example, a set of clinical cases can act as examples from which a machine learning system produces a systematic description of the clinical features that uniquely characterize a clinical condition. This knowledge can then be expressed in the form of rules in an expert system (Coeira, 2003). Neural networks provide another machine learning technique suited to expert system development but are not well suited to providing explanations because of the opaque, "black-box" nature of their operation (Cunningham et al., 2003).
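To make the rule-production idea concrete, here is a minimal, deliberately naive sketch of rule induction from example cases. It is illustrative only: the cases, features, and condition are invented, and a real learner would of course be far more sophisticated.

```python
# A naive rule inducer: find the features whose presence perfectly
# separates positive from negative example cases, and express the
# result as an if-then rule. All clinical content here is invented.

cases = [
    {"fever": True,  "stiff_neck": True,  "meningitis": True},
    {"fever": True,  "stiff_neck": True,  "meningitis": True},
    {"fever": True,  "stiff_neck": False, "meningitis": False},
    {"fever": False, "stiff_neck": False, "meningitis": False},
]

def induce_rule(cases, target):
    """Keep a feature if every positive case has it and no negative case does."""
    features = [f for f in cases[0] if f != target]
    antecedent = []
    for f in features:
        present_in_all_positives = all(c[f] for c in cases if c[target])
        present_in_a_negative = any(c[f] and not c[target] for c in cases)
        if present_in_all_positives and not present_in_a_negative:
            antecedent.append(f)
    return antecedent

antecedent = induce_rule(cases, "meningitis")
print(f"IF {' AND '.join(antecedent)} THEN meningitis")
# -> IF stiff_neck THEN meningitis
```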
Rule-based and frame (description logic) based expert systems are better-suited representations for explanation, as the explicit way in which their knowledge is represented makes their inferences transparent.
A Typical Rule-Based Expert System Architecture
Figure 1 describes an archetypal expert system component structure. An end user communicates with the system via a user interface and an explanation facility, both of which interact with the expert system inference engine. The inference engine uses the medical domain knowledge stored in the knowledge base and controls the consultation by determining the questions to be asked to achieve its goals and by deriving conclusions or specifying actions to be taken, such as proposed therapies. The explanation component combines with the inference engine to provide explanations following a consultation, called feedback explanations, as well as explanations before advice is given, that is, during the question input phase, called feedforward explanations. The latter provide the user with a means of finding out why a question is being asked during a consultation (i.e., during the data input stage). Feedforward explanations frequently take the form of descriptions of technical terms (terminological) to enable the user to answer the question(s) in a meaningful way; they are general in that they are not dependent on any particular output case. By contrast, feedback explanations are case specific in that they will normally present a trace of the rules that were invoked during a consultation and display the intermediate inferences leading to a particular conclusion. Feedback explanations provide the user with a record of problem-solving action during a consultation so that the user can see how a conclusion was reached once the data have been completely input. Figure 1 also shows how the explanation facility is dependent on the expert system knowledge base.
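To make this architecture concrete, the sketch below implements a tiny rule-based consultation with a feedforward "Why" during questioning and a feedback "How" rule trace afterward. It is a minimal sketch only: the rules, findings, and wording are invented, not drawn from any real system.

```python
# Knowledge base: rules with an attached "why" text used for feedforward
# explanation. The inference engine below does simple forward chaining
# and records the rules it fires, which supplies the feedback trace.

RULES = [
    {"id": "R1", "if": ["fever", "stiff_neck"], "then": "suspect_meningitis",
     "why": "Fever with a stiff neck raises the possibility of meningitis."},
    {"id": "R2", "if": ["suspect_meningitis"], "then": "recommend_lumbar_puncture",
     "why": "A lumbar puncture is the standard confirmatory investigation."},
]

def consult(findings):
    known = dict(findings)
    trace = []                      # record of rules fired (feedback)
    changed = True
    while changed:                  # forward chain until nothing new fires
        changed = False
        for rule in RULES:
            if rule["then"] not in known and all(known.get(c) for c in rule["if"]):
                known[rule["then"]] = True
                trace.append(rule)
                changed = True
    return known, trace

def why(question):
    """Feedforward explanation: why is this question being asked?"""
    for rule in RULES:
        if question in rule["if"]:
            return f"Asked because of {rule['id']}: {rule['why']}"
    return "No rule uses this finding."

def how(trace):
    """Feedback explanation: replay the rules invoked in this consultation."""
    return "\n".join(f"{r['id']}: IF {' AND '.join(r['if'])} THEN {r['then']}"
                     for r in trace)

known, trace = consult({"fever": True, "stiff_neck": True})
print(why("stiff_neck"))   # feedforward, case independent
print(how(trace))          # feedback, case specific
```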
There are three main components of an explanation to be considered by the designer: the communication, the adaptation to the user, and the content of the explanation (Lacave & Diez, 2004).
The Communication
As Figure 1 shows, the explanation facility is a component of a rule-based expert system interface. The designer will therefore need to consider issues such as the dialogue language and style of presentation (graphical, textual, audio, or some combination), as well as the access provision mechanism of the explanatory component. Three types of provision mechanism are possible: embedded, automatic, and user invoked (a minimal sketch contrasting them follows the list of triggers below). Embedded explanations are permanently visible on the display but can cause confusion and consume valuable screen space. Automatic explanations are invoked when the system determines that they are needed (Gregor & Benbasat, 1999); they could be triggered by some action taken by the user during a consultation. User-invoked explanations are the most commonly used in practice: they appear when explicitly requested by the user and could be implemented using hypertext links or other interface commands (Mao & Benbasat, 2000). Payne, Bettman, and Johnson (1993) have conducted research on the usage of user-invoked explanations and from their studies have proposed the cognitive effort perspective theory. This suggests that expert system users will invoke explanations voluntarily only if the perceived benefits of accessing them outweigh the mental effort of doing so. This theory suggests good reasons for the relative inattention to explanation facilities in the absence of any specific trigger for their use. Consequently, careful thought has to be given to the design of the access mechanism (Gregor, 1996a). The main triggers for user-invoked explanations (Dhaliwal, 1996; Gregor & Benbasat, 1999) have been identified as follows:
- A desire for long-term learning—normally triggered by novice users. In the health care domain, this category might include patients.
- A strong disagreement with the advice given by the system—this trigger normally comes from experts whose views conflict with the system's advice.
- The complexity of the task itself—normally triggered by novice users, who will often need explanatory support with tasks involving more complicated procedures or more complex technical jargon.
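The sketch below contrasts the three provision mechanisms. It is illustrative only: the advice text, the explanation, and the conflict condition used to trigger the automatic case are all invented.

```python
# Embedded, automatic, and user-invoked delivery of the same explanation.
# All content is invented for illustration, not clinical guidance.

advice = {"text": "Reduce warfarin dose",
          "explanation": "The INR is above the therapeutic range, "
                         "so the dose should be lowered."}

def embedded(advice):
    # Embedded: the explanation is always shown alongside the advice.
    return f"{advice['text']}\n  [{advice['explanation']}]"

def automatic(advice, user_proposal=None):
    # Automatic: the system volunteers the explanation when it detects a
    # likely trigger, here a conflict with the user's own proposal.
    out = advice["text"]
    if user_proposal and user_proposal != advice["text"]:
        out += f"\n  (This differs from your proposal: {advice['explanation']})"
    return out

def user_invoked(advice, request):
    # User-invoked: the explanation appears only on explicit request,
    # e.g., via a "why?" command or a hyperlink.
    return advice["explanation"] if request == "why?" else advice["text"]

print(embedded(advice))
print(automatic(advice, user_proposal="Increase warfarin dose"))
print(user_invoked(advice, "why?"))
```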
User Adaptation
An explanation designer has to consider the recipient of an explanation in terms of his or her knowledge and expectations. Unfortunately, default expert system explanations do not take into account the variability of knowledge between different users. User modeling refers to the process of managing these differences. One possible solution to this problem has been advocated by Cawsey (1993). The technique involves a dialog planning method for the generation of interactive explanations: the method consists of planning not only the text to be presented to the user but also the dialog with the user, based on retaining a model of the user's knowledge that is updated during the explanatory interaction between the system and the user. One prominent project incorporating a user model was OPADE (Carolis et al., 1996), a European Community–funded expert system project for generating beneficiary-centered explanations about drug prescriptions that take user requirements into account. The main objective of OPADE was to improve the quality of drug treatment by supporting physicians in the prescribing process and by increasing compliance with clinical guidelines (Berry et al., 1995). OPADE supports two types of users: those who directly interact with the system, such as general practitioners and nurses, and those who receive a report of results, that is, the patients. The explanations generated are dynamic (unlike static canned-text explanations) in that a "user model" is maintained containing the characteristics of the user. A "text planner" component plans the discourse during a consultation: it builds a tree containing the discourse plan, which depends on the objectives to be met for the user model. The explanation is then delivered in natural language by taking the tree generated by the text planner as input and transforming it, using text phrases, into the appropriate format.
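As a rough illustration of the user-modeling idea, rather than of OPADE's actual design, the sketch below adapts explanation detail to a simple user model and updates the model as terms are explained. The glossary, roles, and terms are all invented.

```python
# A user model records which terms the user already knows; the planner
# expands only unknown terms and then updates the model, so repeated
# consultations become progressively terser. All content is invented.

GLOSSARY = {"hypertension": "high blood pressure",
            "diuretic": "a drug that increases urine output"}

def plan_explanation(user_model, message_terms):
    """Expand any term the user model does not mark as known,
    then record it as known (the model update)."""
    sentences = []
    for term in message_terms:
        if term in GLOSSARY and term not in user_model["known_terms"]:
            sentences.append(f"'{term}' means {GLOSSARY[term]}.")
            user_model["known_terms"].add(term)
    return sentences

patient_model = {"role": "patient", "known_terms": set()}
gp_model = {"role": "general_practitioner",
            "known_terms": {"hypertension", "diuretic"}}

terms = ["hypertension", "diuretic"]
print(plan_explanation(patient_model, terms))  # both terms expanded
print(plan_explanation(gp_model, terms))       # nothing to expand
print(plan_explanation(patient_model, terms))  # already explained: empty
```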
Explanatory Content
Early (first generation) expert system explanation facilities were characterized by designs incorporating one or more of the following types of explanation (Chandrasekaran, Tanner, & Josephson, 1989).
Rule-Trace Explanations
As its name suggests, the rule-trace explanation links problem-solving knowledge with an explanation consisting of a trace of the rules that were invoked during a consultation. Most of the early expert systems were built for diagnostic support and were predominantly rule based (Darlington, 2000). The explanation facilities provided in these systems, such as MYCIN (Shortliffe, 1981), would have delivered predominantly problem-solving knowledge via a rule trace. This is, essentially, a record of the system's run-time rule invocation history during a consultation, frequently syntactically doctored to present the explanation in natural language form (Binsted, Cawsey, & Jones, 1995). The rule-trace facility enables users to find out Why a system is asking a question or How, following a consultation, the system arrived at its conclusions by displaying the trace of rules invoked. This is what sets explanation facilities in expert systems apart from "help" facilities found in conventional software systems: they can provide a rule trace of the problem-solving mechanism of the inference engine during a consultation so that case-specific explanations can be delivered, whereas help facilities are normally provided by the developer preparing text in advance to explain the different outcomes. Such prepared text is known as canned text and is often used alongside rule-trace or other explanation facilities to enhance or supplement explanations. However, there are problems associated with canned text, for the system builder has to anticipate all the possible user questions in advance in order to invoke the appropriate response, which can result in a lengthy development overhead.
Justification Explanations
A rule-trace explanation can reconstruct only the problem-solving knowledge contained in the expert system knowledge base; a rule trace can be produced without access to any knowledge that justifies the rules. If the builder has not included knowledge that justifies the rules in the rule base, then the system will not be able to justify them. The importance of this justification knowledge was recognized by Clancey (1983) when attempting to extend the MYCIN system to support the training of junior physicians. He found that MYCIN failed in this role because it did not contain justification knowledge: the rules that model the domain did not capture all the forms of knowledge used by experts in their reasoning. Expert physicians would, of course, use rules of thumb themselves in solving problems, but they would also, as a result of their training and experience, possess a deep theoretical understanding of their subject domain, called "deep knowledge." They may, for example, use rules of thumb, or heuristic knowledge, when performing a diagnosis but would need to draw on their deep knowledge if an explanation justifying such knowledge were required. In the same way, justification knowledge has to be explicitly captured by the system designer if it is required for explanation, and it may be captured by using deep knowledge models. Empirical research has consistently shown that user acceptance of expert systems increases for nonexpert users when this justification knowledge is present and that justification is the most effective type of explanation for bringing about positive changes in user attitudes toward the advice-giving system (Ye & Johnson, 1995).
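The sketch below shows one simple way justification knowledge might be attached to a heuristic rule, so that the system can answer "why is this rule sound?" as well as replaying the rule itself. The rule and its justification text are invented for illustration.

```python
# A heuristic rule carrying a separate "justification" field that records
# the deeper knowledge behind it. A plain rule trace can only replay the
# rule; the justification is available only because the builder captured
# it explicitly. All content here is invented.

rule = {
    "id": "R7",
    "if": ["alcoholic_patient"],
    "then": "consider_unusual_organisms",
    "justification": ("Chronic alcohol use impairs immune function, so "
                      "infection by less common organisms becomes more "
                      "likely; the heuristic follows from this mechanism."),
}

def trace_explanation(rule):
    # What a plain rule trace can offer: the rule itself.
    return f"{rule['id']}: IF {' AND '.join(rule['if'])} THEN {rule['then']}"

def justify(rule):
    # Only possible if justification knowledge was captured by the builder.
    return rule.get("justification", "No justification knowledge recorded.")

print(trace_explanation(rule))
print(justify(rule))
```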
Strategic Explanations
Strategic explanations provide knowledge about how to approach a problem by choosing an ordering of subgoals that minimizes effort in the search for a solution. For example, the rule of thumb that alcoholics are likely to have an unusual etiology can lead the expert to focus on less common causes of infection first, thereby pruning the search space. In rule-based expert systems, strategic knowledge is frequently incorporated implicitly in the problem-solving rules. This is acceptable if the designer wants the system to provide a problem-solving role only. However, Clancey (1983) realized that this knowledge needed to be explicitly represented in the MYCIN system so that it could become transparent to students training with the system, rather like the justification knowledge discussed earlier. A follow-up system called NEOMYCIN was developed by Clancey (Clancey, 1983; Clancey & Letsinger, 1981): this was a consultation system whose medical knowledge base contained strategic knowledge explicitly available for training purposes. Systems containing explicit strategic knowledge are better able to answer questions such as "Why not pursue this line of reasoning instead of that?" because the solution planning is explicitly available.
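A minimal sketch of strategic knowledge made explicit as data follows, so that the subgoal ordering can be both applied and explained. The strategy, goals, and rationale are invented, loosely echoing the alcoholic-patient heuristic above.

```python
# Strategic knowledge represented explicitly: an ordering rule for
# subgoals plus its rationale, so the system can answer "why this
# order?" rather than leaving the strategy buried in its rules.
# All content is invented for illustration.

STRATEGY = {
    "name": "unusual-etiology-first",
    "applies_if": "alcoholic_patient",
    "order": ["uncommon_organisms", "common_organisms"],
    "rationale": ("Alcoholic patients are likely to have an unusual "
                  "etiology, so pursuing uncommon causes first prunes "
                  "the search space."),
}

def plan_subgoals(findings):
    if findings.get(STRATEGY["applies_if"]):
        return STRATEGY["order"], STRATEGY["rationale"]
    return ["common_organisms", "uncommon_organisms"], "Default ordering."

order, rationale = plan_subgoals({"alcoholic_patient": True})
print("Pursue subgoals in order:", order)
print("Why this order?", rationale)
```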
Terminological Explanations
Terminological explanations provide knowledge of the concepts and relationships of a domain that domain experts use to communicate with each other. Their inclusion is sometimes necessary because, to understand a domain, one must understand the terms used to describe it. Terminological explanations are frequently used as feedforward explanations and would often be implemented using canned text. They are more likely to be used by novice or nonclinical users, such as patients, than by more knowledgeable users, and they provide generic rather than case-specific knowledge (Mao & Benbasat, 2000; Swartout & Smoliar, 1987).
Some Approaches to Second Generation Explanation
One of the problems with explanation in first generation expert systems incorporating the above explanation types was the unnatural and inflexible dialogues that often resulted during consultations, often as a consequence of the restrictions of the interrogatives Why and How described earlier. One approach to resolving this problem was advocated by Wick and Slagle (1989), who developed a system called the Journalistic Explanation Facility (JOE). JOE delivers explanations based on a journalistic analogy with the way news writing reports events: it extends the Why and How interrogatives to include the interrogatives Who, What, Where, and When, providing scope for explanation queries about past, present, and future events. Despite being limited by the absence of explicit domain knowledge, JOE is an example of a prototype that makes the most of inbuilt explanation facilities and incurs low construction overheads. However, Wick (1992a) recognized shortcomings in JOE, notably its inability to present the "big picture" in its explanations. Thus, different research directions were advocated that focused more on the understandability of explanations, taking into account such issues as abstraction into different levels, linguistic competence, and summarization (Swartout & Moore, 1993). One prototype system (Wick, 1992b) used a technique called Reconstructive Explanation, which uses one knowledge base for the expert system and another for the explanation component. The rationale is that an expert's line of reasoning is not necessarily the same as the explanation that should be provided, yet the basis for default explanations has been a rule trace of the line of reasoning. Reconstructive Explanation can improve understandability, but at the cost of higher construction overheads.
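As an illustration of the Reconstructive Explanation idea only, and not of Wick's actual system, the sketch below keeps the solver's knowledge and the explanatory knowledge in separate stores, so the explanation is rebuilt rather than replayed from the trace. All contents are invented.

```python
# Two knowledge bases: one compiled for problem solving, one written
# purely for explaining conclusions. The explanation is reconstructed
# from the second store, independent of the solver's line of reasoning.
# All content is invented for illustration.

# Knowledge used for problem solving (compiled, heuristic).
SOLVER_RULES = {"fever+rash": "suspect_measles"}

# Separate knowledge used only for explaining conclusions.
EXPLANATION_KB = {
    "suspect_measles": ("Measles typically presents with fever followed "
                        "by a spreading rash; the combination of the two "
                        "findings therefore points to measles."),
}

def solve(findings_key):
    return SOLVER_RULES.get(findings_key)

def reconstruct_explanation(conclusion):
    # Rebuilt from explanatory knowledge, not replayed from the trace.
    return EXPLANATION_KB.get(conclusion, "No explanatory account stored.")

conclusion = solve("fever+rash")
print(conclusion)
print(reconstruct_explanation(conclusion))
```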
The techniques described above can be used to tailor explanation facilities to suit different categories of users, or stakeholders, without necessarily altering the reasoning of the underlying expert system.
Classification of Health Care Domain User Groups
According to Leroy and Chen (2007), modern medical Decision Support Systems (DSS) are developed for four different groups of decision makers, two of which have a medical background (clinicians and medical researchers), whereas the other two (administrators and patients) may not. Clinicians, who include physicians (among them junior physicians) and nurses, use their knowledge together with expert systems to make decisions on behalf of others by, for example, diagnosing diseases, evaluating drug interactions in a treatment, or adopting a nursing strategy to alleviate pain. General research has shown that expert physicians do make use of explanation facilities, but their requirements are very different from those of other users. Experts tend to use feedback rule-trace explanations and are more likely than nonexperts to use explanations for resolving anomalies, such as disagreements with system advice, exploring alternative diagnoses, and verifying assumptions (Arnold, Clark, Collier, Leech, & Sutton, 2006; Mao & Benbasat, 2000). Nonexperts, such as trainee physicians, are more likely to use a range of explanation types for short- and long-term learning. For example, Arnold et al. (2006) have shown that nonexperts tend to use both feedback and feedforward justification explanations as well as terminological feedforward explanations.
Administrators do not have direct clinical interaction with patients but are responsible for the management of health care options and facilities. They may therefore use expert systems to manage health care options and could benefit from explanation facilities when supporting clinicians, for example, in managing patient referrals by finding out whether a patient is suitable for referral. All of the explanation types discussed previously may benefit administrators, depending on the nature of the application task. Furthermore, administrators may use expert systems for patient care management, that is, automatically scheduling follow-up appointments or treatments, automatically generating reminders relating to preventive care (e.g., inoculations), or tracking adherence to research protocols. Explanation facilities that enable time-dependent queries, for example, through the interrogative When described for the JOE prototype in the previous section, may be useful in this context.
Patients are the largest group of decision makers and have the least medical training. However, chronically ill patients often become "expert patients": because of the severity of their discomfort, they are likely to acquire skills that help them manage their illness better (Bury, 2003), and they may have a much greater desire to understand their condition, which access to web-based expert systems could help satisfy. However, Berland, Elliot, Morales, and Algazy (2001) have shown that the general public has difficulty with wordy medical jargon. Patients would therefore clearly benefit from terminological explanations in some health care systems.
Expert System Medical Tasks Amenable to Explanation
Medical Expert System Task Types
Generating Alerts and Reminders
Alerts inform clinicians of changes in a person's condition, perhaps via an expert system attached to a monitor. For example, a patient could be connected to an electrocardiogram (ECG) or pulse oximeter, and the alert expert system could warn of changes in the patient's condition. In less acute circumstances, the system might scan laboratory test results or drug or test orders and then send alerts or warnings, either via immediate on-screen feedback or through a messaging system such as email. Reminder systems serve a slightly different purpose in that they notify clinicians of important tasks that need to be done before an event occurs. An example of a reminder system is the generation of a list of the immunizations required by each patient on the daily schedule of an outpatient clinic (Randolph, Guyatt, Calvin, Doig, & Richardson, 1998). According to Bindels, De Clercq, Winkens, and Hasman (2000), reminder systems are far more successful than diagnostic expert systems: the so-called Greek Oracle approach is not popular among physicians because it reduces their role, whereas reminder systems succeed because they leave the physician in control and only provide feedback when the physician does not follow the rules. Explanation facilities could be important for these tasks, for as Hanson (2006) says, the clinician wants access to the knowledge rules associated with the condition and the interventions proposed by the expert system. The clinician may want to evaluate the evidence that underpins the expert system's rules, suggesting that justification and problem-solving explanation types may be appropriate for clinicians using alert systems. Moreover, as the purpose of alert and reminder systems is to provide interventions to clinicians (junior and senior), explanations may only be appropriate for these user categories.
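A minimal sketch of an alert rule carrying its own explanatory evidence follows, so that a clinician can inspect the rule behind a warning. The test name, threshold, and wording are invented for illustration and are not clinical guidance.

```python
# An alert rule that pairs its firing condition with the evidence a
# clinician might want to evaluate. All values here are invented.

ALERT_RULES = [
    {"id": "K1", "test": "serum_potassium", "max": 5.5,
     "message": "Hyperkalemia alert",
     "evidence": "Values above 5.5 mmol/L risk cardiac arrhythmia."},
]

def scan_results(results):
    alerts = []
    for rule in ALERT_RULES:
        value = results.get(rule["test"])
        if value is not None and value > rule["max"]:
            alerts.append((rule["message"],
                           f"{rule['test']} = {value} (limit {rule['max']}). "
                           f"Rule {rule['id']}: {rule['evidence']}"))
    return alerts

for message, explanation in scan_results({"serum_potassium": 6.1}):
    print(message)
    print("  why:", explanation)
```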
Therapy Critiquing and Planning
These are expert system tasks that look for inconsistencies and omissions in an existing treatment plan but would not necessarily assist in generating the plan. They can be applied, for example, to physician order entry systems: on entering an order for a blood transfusion, a clinician may receive a message stating that the patient's hemoglobin level is above the transfusion threshold, and the clinician must justify the order by stating an indication, such as active bleeding (Randolph et al., 1998). Planning systems, however, have more knowledge about the structure of treatment protocols and can be used to formulate a treatment based on data on a patient's specific condition from the Electronic Medical Record (EMR) and accepted treatment guidelines. Therapy critiquing explanations could benefit clinicians or administrators by allowing the user to enter a proposed solution to a problem and then explaining flaws in that solution. Strategic explanations might be appropriate for administrators, enabling them to make resource or allocation decisions. Clinicians could also benefit from strategic explanations so that "why not" scenarios may be explored (Martincic, 2003). Clinicians (junior and senior) may benefit from access to rule-trace and justification knowledge to understand the rationale underlying possible flaws in a solution.
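Building on the transfusion example above, here is a minimal sketch of a critiquing interaction. The threshold value and messages are invented for illustration, not clinical guidance.

```python
# A critiquing rule: the clinician's proposed order is checked against a
# threshold, and a detected flaw is explained with its rationale. The
# threshold and wording are invented.

CRITIQUE_RULE = {
    "id": "T1", "order": "blood_transfusion", "test": "hemoglobin",
    "threshold": 70,  # g/L; invented restrictive-transfusion cutoff
    "rationale": ("Restrictive transfusion policies avoid unnecessary "
                  "transfusion risk when hemoglobin is above threshold."),
}

def critique(order, patient, stated_indication=None):
    r = CRITIQUE_RULE
    if order == r["order"] and patient[r["test"]] >= r["threshold"]:
        if stated_indication:
            return f"Order accepted: indication '{stated_indication}' recorded."
        return (f"Flaw: {r['test']} = {patient[r['test']]} g/L is above the "
                f"transfusion threshold of {r['threshold']} g/L. "
                f"Rule {r['id']}: {r['rationale']} Please state an indication.")
    return "Order accepted."

print(critique("blood_transfusion", {"hemoglobin": 85}))
print(critique("blood_transfusion", {"hemoglobin": 85}, "active bleeding"))
```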
Drug Advisory Systems
Drug advisory systems such as CAPSULE (Walton, 1996) assist clinicians with the prescription of medications and the selection of the most cost-effective treatments. There is much evidence that computer-based clinical decision support systems improve physician performance in relation to drug treatment and adherence to protocols (Hunt et al., 1998). Furthermore, some expert systems can improve physician performance for drug dosing and preventive care (Hunt et al., 1998) without reducing drug expenditure (Vested, Nielsen, & Olesen, 1997). Explanation facilities can serve an important role in drug prescribing and dosing. An example of an expert system incorporating explanation for drug dispensing is OPADE (Carolis et al., 1996), described earlier, which generates beneficiary-centered explanations about drug prescriptions that take user requirements into account (Berry et al., 1995). Patients could benefit from this type of task through the provision of terminological explanations, enabling them to understand medical terms and jargon. More generally, justification explanations could be appropriate for other categories of user for this type of medical task so that, for example, users can see the theoretical reasons why a treatment has side effects (Berry et al., 1995) and/or be aware of interactions with other drugs. Strategic explanations could also be useful to these users so that they can understand why alternative drugs have been ruled out for a particular patient.
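The sketch below illustrates, with invented data, how a drug advisory check might pair a justification explanation for clinicians with a terminological gloss for patients. The interaction record and wording are not prescribing advice.

```python
# One interaction record carrying two explanation renderings: a
# mechanism-level justification for clinicians and a plain-language
# gloss for patients. All content is invented for illustration.

INTERACTIONS = {
    ("aspirin", "warfarin"): {
        "justification": ("Both agents impair hemostasis by different "
                          "mechanisms, so co-prescription raises the "
                          "risk of bleeding."),
        "patient_gloss": ("These two medicines both make bleeding more "
                          "likely, so taking them together can be risky."),
    },
}

def check_prescription(current, proposed, audience="clinician"):
    key = tuple(sorted((current, proposed)))   # order-independent lookup
    hit = INTERACTIONS.get(key)
    if hit is None:
        return "No interaction on record."
    text = hit["justification"] if audience == "clinician" else hit["patient_gloss"]
    return f"Interaction warning ({current} + {proposed}): {text}"

print(check_prescription("warfarin", "aspirin"))
print(check_prescription("aspirin", "warfarin", audience="patient"))
```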
Diagnostic Systems
Early medical expert systems were mainly used for diagnosis, and many of these have fallen by the wayside (Coeira, 2003). According to Taylor (2006), expert systems have proved to be least required where they were thought to be most required: in diagnostic applications. Delaney, Fitzmaurice, Riaz, and Hobbs (1999) believe that computerized diagnostic systems have not yet been developed to the stage where they can significantly aid diagnostic accuracy; they also claim that there is no groundswell of interest in diagnostic systems because physicians do not perceive that they have a deficiency of expertise within their own area of specialty. However, a number of focused diagnostic systems are still in use, and new uses are emerging, for example, in ECG interpretation and laboratory test interpretation (Warner, Sorenson, & Bouhaddou, 1997). In particular, when a patient's case is complex or rare, or the person making the diagnosis is simply inexperienced, an expert system can help in the formulation of likely diagnoses based on the patient data presented to it and the system's understanding of illness stored in its knowledge base. For example, diagnostic assistance may be useful with complex data, such as those provided by an ECG analysis, where most clinicians can make straightforward diagnoses but may miss rare presentations of common illnesses, such as myocardial infarction, or may struggle to formulate diagnoses that typically require specialized expertise. Explanation facilities could play an important part here, for, as noted earlier, experts tend to use explanation facilities when there is disagreement with the advice given by the system (Gregor & Benbasat, 1999). Computer-aided decision making could also speed diagnosis, especially for difficult cases, thus freeing the physician's time for other matters, such as interacting with patients. A number of web-based diagnostic systems, such as ISABEL (www.isabelhealthcare.com), a dynamic diagnostic checklist system, are also gaining in popularity. The ISABEL diagnostic aid has been shown to be of potential use in reminding junior doctors of key diagnoses in the emergency department (Ramnarayan et al., 2007), although the effects of its widespread use on decision making and diagnostic error remain to be clarified by evaluating its impact on routine clinical decision making. Another study, by Graber and Ashlie (2008), shows that the ISABEL clinical decision support system quickly suggested the correct diagnosis in almost all complex cases. ISABEL provides user-invoked canned-text explanations at various levels: for example, when it ranks likely diagnoses against symptoms, it can provide textual explanations with hyperlinks to lower level explanations if required. In the case of diagnostic systems, rule-trace, justification, and strategic explanations could be appropriate for clinicians (junior and senior), whereas patients who use web-based diagnostic systems could again benefit from terminological explanations, possibly delivered through canned text or other means activated by hyperlinks. Patients may find this type of explanatory support particularly important during the question-answering phase, where they may not fully understand the meaning of a question or its bearing on the overall consultation.
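As a loose illustration of layered, user-invoked canned-text explanation of the general kind just described, and not of ISABEL's actual implementation, the sketch below offers drill-down levels on request. The explanation text is invented.

```python
# Layered canned text: a ranked diagnosis carries a short top-level
# explanation, with further "hyperlinked" levels available on request,
# e.g., a terminological layer for patients. All content is invented.

EXPLANATIONS = {
    "myocardial_infarction": [
        "Level 1: Chest pain with suggestive ECG changes ranks "
        "myocardial infarction highly.",
        "Level 2: Myocardial infarction is death of heart muscle, "
        "usually caused by a blocked coronary artery.",
        "Level 3: 'ECG' stands for electrocardiogram, a recording of "
        "the heart's electrical activity.",
    ],
}

def explain(diagnosis, level=0):
    layers = EXPLANATIONS.get(diagnosis, [])
    return layers[level] if level < len(layers) else "No further detail."

print(explain("myocardial_infarction"))           # top-level explanation
print(explain("myocardial_infarction", level=2))  # drill-down for a patient
```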
Table 1 provides a summary of suggestions for explanation facilities for the tasks described in the previous sections. These are only suggestions because, in practice, much would depend on the characteristics of a particular domain.
Conclusion
This article recognizes the changing role of health care expert system tasks over the past 30 years. One of their main features, explanation facilities, has been largely ignored in health care systems despite the potential benefits that can be derived from their inclusion. This article has shown that there is much potential for the use of explanation facilities in problem-solving tasks other than diagnosis, the only problem-solving task served by the early medical expert systems.
Another emerging trend, especially with the growth of web-based medical expert systems, is the widening range of stakeholders, which can vary from clinicians to patients. This article has described a range of techniques that can be used to tailor and enhance explanation facilities to suit different stakeholders without necessarily altering the underlying system. However, the developer determines the way in which the knowledge is represented, and this affects the way in which these techniques can be implemented in terms of construction overheads, that is, how difficult and time-consuming it is to build the explanations.
The implication of this research is that builders of expert systems should no longer consider explanation facilities an unnecessary adjunct but should give careful consideration to the ways in which they might support users of the system. In doing so, builders of health care expert systems should canvass the views of stakeholders to gauge what explanation content, type of interaction, and access mechanism may be suitable. The low usage of explanation facilities discussed earlier in this article could be improved substantially, for this article has shown that when quality explanation facilities are available, users are more likely to adhere to expert system recommendations, to perform better, and to form more positive perceptions of the system.
However, further empirical research is needed to investigate the potential benefits and changes in performance arising from the use of explanation facilities in different medical task settings, and to expand on these general results by exploring the likely benefits and performance gains for different stakeholders in the health care domain.
Keith W. Darlington is a senior lecturer in Computing and Artificial Intelligence at London South Bank University. He specialises in Expert Systems and has published a book on the subject called The Essence of Expert Systems.
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
The author(s) received no financial support for the research, authorship, and/or publication of this article.
- © The Author(s) 2011