This literature review examines knowledge dissemination interventions (KDIs) implemented in health research and gauges their effectiveness on three kinds of outcomes: (a) knowledge acquisition, (b) changes in attitudes, and (c) changes in practice. The MEDLINE and Cumulative Index to Nursing and Allied Health Literature databases were searched for the period 2006 to 2011. Nineteen articles were retrieved. Most of the KDIs evaluated had a positive impact on knowledge acquisition and changes in attitudes, but a limited one on practice. KDIs are diverse in terms of knowledge, actors, contexts, and dissemination methods, and cannot be readily applied to other projects.
- knowledge dissemination interventions
- knowledge translation
Knowledge translation (KT) is an emerging field of practice and study. The Canadian Institutes of Health Research (CIHR), the major funding agency for health research in Canada, has proposed the most widely used definition for KT: “Knowledge translation is a dynamic and iterative process that includes the synthesis, dissemination, exchange and ethically-sound application of knowledge to improve health, provide more effective health services and products, and strengthen the health care system.”1 Its advocates argue that it has the potential to increase the beneficial use of knowledge at all levels and in all sectors of decision making (e.g., clinicians, policy makers, patients, the general public), to improve health outcomes (Strauss, Tetroe, & Graham, 2009), and thus to contribute to maximizing returns on research investments (Graham & Tetroe, 2007).
However, Graham, Tetroe, and Gagnon (2009) acknowledge that knowledge translation is a “messy business” (p. 314). This is certainly true of the terminology used in the field. Several authors have commented on the plethora of terms used, often interchangeably, and on the challenges that this plurality of designations raises for working with knowledge translation literature (Aita, Richer, & Heon, 2007; Mitchell, Fisher, Hastings, Silverman, & Wallen, 2010; Ward, House, & Hamer, 2009; Wilson, Petticrew, Calnan, & Nazareth, 2010). Since this literature review focuses on knowledge dissemination interventions (KDIs), it is worth noting a few points about what is meant by knowledge dissemination (KD) in the literature, and how various authors see some of the components of the dissemination process (e.g., object disseminated, channels of communication, audience).
First, some authors use knowledge dissemination as a synonym for other terms, such as knowledge transfer (Crane, 1985; Knott & Wildavsky, 1980), knowledge transfer and exchange (Institute of Health Economics [IHE], 2008), knowledge translation, end-of-grant knowledge translation (Brachaniec, De Paul, Elliott, Moore, & Sherwin, 2009; Tetroe et al., 2008), and diffusion (Crosswaite & Curtice, 1994; Montgomery et al., 2001). Others consider that knowledge transfer and knowledge translation are different types of knowledge dissemination, and thus subsets of knowledge dissemination (Scott et al., 2007). Wilson et al. (2010) hold the opposite view. To them, dissemination is a subset of knowledge translation.
Second, the term used to identify the “object” that is disseminated varies from author to author. Some refer to evidence-based practices, programs, and policies (Dearing & Kreuter, 2010), or to evidence-based interventions (Rabin, Brownson, Haire-Joshu, Kreuter, & Weaver, 2008). Others omit the qualifier “evidence-based” but likewise refer to “interventions” (Kahn, 2009) and “research” (Knowledge Utilization Studies Program [KUSP], 2011; Scott et al., 2007) as the object of dissemination. Still others use the terms “research findings” or “results” (Cronenwett, 1995; Fattal & Lehoux, 2008; IHE, 2008; Montgomery et al., 2001), “knowledge” (Crane, 1985; Kahn, 2009; Knott & Wildavsky, 1980), “ideas,” “innovations” (Crosswaite & Curtice, 1994; Kahn, 2009), or “information” (Lomas, 1993). However, as most of them do not define the terms they use, it is difficult to establish whether they all mean the same thing but use different words, or whether they intend to express particular nuances through their choice of terms.
Third, some authors qualify the process of knowledge dissemination. Some emphasize the fact that the process is active (Dearing & Kreuter, 2010; Lomas, 1993; Rabin et al., 2008), interactive (IHE, 2008; Scott et al., 2007; Tetroe et al., 2008; Wilson et al., 2010), or, conversely, passive (Crosswaite & Curtice, 1994). To some, knowledge dissemination requires a conscious effort (Kahn, 2009); it is planned (Rabin et al., 2008) and systematic (Crosswaite & Curtice, 1994), or unplanned (Crosswaite & Curtice, 1994).
Fourth, a few authors identify the communication channels by which dissemination takes place. They refer either to tools such as scientific conferences and scientific journals (Brachaniec et al., 2009; KUSP, 2011; Tetroe et al., 2008) or to channels that go beyond those traditional means, and that are more appropriate to the target audience, although they do not identify them specifically (Brachaniec et al., 2009; IHE, 2008; Kobus, Mermelstein, & Ponkshe, 2007; Rabin et al., 2008; Tetroe et al., 2008).
Finally, some definitions limit dissemination to specific audiences, such as individuals in clinical settings (Cronenwett, 1995; Montgomery et al., 2001), end-users who have the capacity to utilize the knowledge in their practice (Knott & Wildavsky, 1980; Scott et al., 2007; Wilson et al., 2010), or an audience that the researchers want to influence (IHE, 2008). Other authors remain more general and instead suggest that the message be tailored to the specific audience targeted, regardless of whether this audience belongs to a clinical, research, community, or government setting (Brachaniec et al., 2009; Fattal & Lehoux, 2008; Kahn, 2009; Kobus et al., 2007; Lomas, 1993; Tetroe et al., 2008; Wilson et al., 2010).
The definitions of knowledge dissemination published in the literature vary significantly and seem to be influenced by the “object” to be disseminated. Given this variability, and given the purpose of the project at stake, which is further described in the next section, knowledge dissemination is defined in this article as an active intervention that aims to communicate research data to a target audience via determined channels, using planned strategies, for the purpose of creating a positive impact on knowledge acquisition, attitudes, and practice.
The present literature review was conducted in the context of a study, “A Foundation for Evidence-Based Management of Nutrigenomics Expectations and ELSIs,”2 which aims at laying an empirical foundation from which to discern and anticipate the socioethical issues associated with nutrigenomics/nutrigenetics research and its potential applications. Nutrigenomics and nutrigenetics (hereafter referred to as NGx) are described as the study of genome/genes–diet interactions and their influence on individuals’ and populations’ response to food, disease susceptibility, and population health.3 As part of this project, an extensive analysis of 173 clinical studies published in the field from 1998 through 2007 was performed. This analysis highlighted both scientific challenges and significant ethical concerns raised by the selection of participants and the methodological limitations, as well as by the publication of study results and their interpretation (Hurlimann, Stenne, Menuz, & Godard, 2011; Stenne, Hurlimann, & Godard, 2012). More specifically, results from the study reported (a) a concentration of clinical studies in Europe and North America, (b) a focus on middle-aged populations in a majority of studies, (c) a lack of representation of ethnic minorities in research, and (d) methodological limitations encountered in NGx research that affect the interpretation of study results. The study also addressed the risks associated with (a) the underreporting of methodological limitations and unclear descriptions of studied populations in scientific publications, and (b) claims in scientific publications about potential clinical benefits originating from the study results.
To disseminate their findings, the research team wished to go beyond publishing their study results in scientific journals and presenting them at conferences. For that purpose, they envisioned a knowledge dissemination strategy using the communication channels most useful and effective for reaching a very specific audience, that is, clinical researchers in the field of NGx. Prior to developing this strategy, however, a review of the literature was carried out to inform the research team on the types of KDIs used in applied research at the end of studies (“end-of-grant knowledge translation”), and on their effectiveness with respect to three outcome measures: (a) knowledge acquisition, (b) changes in attitudes, and (c) changes in practice, as these criteria are the most broadly used by researchers for assessing their KDIs (Lavis, Robertson, Woodside, McLeod, & Abelson, 2003). Knowledge acquisition refers to the development and expansion of a knowledge base; attitudes refers to agreement with, or acceptance of, either the disseminated data or the perceived clinical applicability of these data, along with the motivation and sense of self-efficacy to adopt them (Menon, Korner-Bitensky, Kastner, McKibbon, & Straus, 2009). Practice refers to the processes or actions performed (e.g., providing care to patients, deciding on a policy, adhering to a treatment regimen).
While substantial advancements have been made to bridge the gap between research and practice through devising innovative methods of disseminating study results (Farmer et al., 2009), limited information is readily available to advise on the best use of various KDIs (Flodgren et al., 2011). Thus, it appears important to review the literature to determine what medium would be the most successful to convey research results, such as those from the NGx study, given the specific attributes of the project (e.g., target audience, budget, and time available).
The objective of the present article is thus to report on this literature review on KDIs and its relevance for the development of a knowledge dissemination strategy in the framework of the aforementioned study on the socioethical aspects of NGx research.
A review of the literature on KDIs was completed by searching two databases: MEDLINE (through PubMed) and Cumulative Index to Nursing and Allied Health Literature (CINAHL; see Figure 1). Articles were eligible for inclusion if (a) they described KDIs related to applied research, (b) they detailed the various aspects of these interventions, and (c) they were written in English or French. The search was limited to peer-reviewed articles published between January 1, 2006, and December 31, 2011, inclusively. This time span was chosen because 2006 was the year when knowledge translation activities began to develop in earnest, at least in Canada. For instance, it is the year that the CIHR hired a new vice-president for its knowledge translation and public outreach portfolio.4 More grants became available to researchers to advance knowledge in this area. Since there are numerous words to refer to “knowledge dissemination,” and since this term or similar ones have not yet been indexed as a Medical Subject Heading (MeSH) category, the combinations of keywords searched for in our review were “knowledge translation,” “knowledge transfer,” “knowledge dissemination,” “research dissemination,” “knowledge diffusion,” “research diffusion,” “knowledge implementation,” “research implementation,” “knowledge utilization,” and “research utilization,” with “OR” as the Boolean search operator between the terms.
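For illustration, the combined search string described above can be assembled programmatically. This is a minimal sketch only: the article does not specify the exact field tags or interface syntax used in PubMed and CINAHL, so the quoting and joining shown here are assumptions, and the function name is hypothetical.

```python
# Illustrative sketch: join the keywords listed in the article with the
# Boolean OR operator, quoting each phrase so it is searched as a unit.
# The exact syntax the authors submitted to PubMed/CINAHL is not reported.
KEYWORDS = [
    "knowledge translation",
    "knowledge transfer",
    "knowledge dissemination",
    "research dissemination",
    "knowledge diffusion",
    "research diffusion",
    "knowledge implementation",
    "research implementation",
    "knowledge utilization",
    "research utilization",
]

def build_query(keywords):
    """Quote each phrase and combine all phrases with OR."""
    return " OR ".join(f'"{kw}"' for kw in keywords)

if __name__ == "__main__":
    print(build_query(KEYWORDS))
```

Running the sketch yields a single OR-combined string covering all ten keyword phrases, which could then be pasted into a database search box and further restricted by publication date.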
From an initial list of 964 citations from the MEDLINE and CINAHL databases, both systematic reviews and duplicate articles were removed. Systematic reviews were discarded because the information that needed to be extracted from the individual articles was not reported, and thus not available, in the reviews. We also did not consider articles featuring the dissemination of clinical guidelines, health promotion,5 or medical education interventions, nor articles presenting and discussing theories and models of KT. Clinical guidelines, like systematic reviews, reflect the aggregated results of many different studies; similarly, health promotion and medical education interventions are rarely based on the research results of a single ad hoc research project, whereas the aim of the KD strategy for the NGx study was to communicate findings exclusively from this specific project. Theories and models about KT are not dissemination interventions. Therefore, articles covering those types of interventions (i.e., health promotion and medical education) or subjects (i.e., systematic reviews, clinical guidelines, theories or methods) were not included in the review.
Titles and abstracts of citations were reviewed and the articles potentially describing KDIs were identified. Two members of our research team went through the process individually to enhance reliability and internal validity. Identified discrepancies were discussed until a consensus was reached. Full texts from this first screening exercise were obtained to further determine eligibility for inclusion. Nineteen research articles were found to be relevant to our research question, as they described with sufficient detail concrete KDIs related to applied research.
It is noteworthy that although the literature review focused specifically on articles describing KDIs, other articles that were looked through during the search can offer valuable information for anyone preparing the development and implementation of a KDI. These articles identified barriers to and facilitators of KD, described specific target audiences, and presented models that could be used to guide the design and implementation of a KDI.
The 19 articles were read on an individual basis. Data from each eligible full-text article were then extracted. Information relevant to the objectives of the present manuscript that was excerpted included (a) authors and date of publication, (b) study design, (c) target audience, (d) type of intervention, (e) conceptual framework underlying an intervention, (f) implementation method, (g) evaluation method, and (h) outcome measures. A comparative table summarized the contents of each article with the extracted data. A content analysis was then performed to determine commonalities and differences between the 19 articles.
Description of Studies
As mentioned earlier, 19 articles met the inclusion criteria and constituted the final sample of this literature review. Ten of the 19 articles both described an intervention and evaluated the effectiveness of the knowledge dissemination strategy after its implementation (see Amsallem et al., 2007; Colantonio et al., 2008; Dobbins et al., 2009; Ginsburg, Lewis, Zackheim, & Casebeer, 2007; Grad et al., 2008; Gross & Lowe, 2009; Kirshbaum, 2008; Russell et al., 2010; Stiell et al., 2010; Tanna, Sood, Schiff, Schwartz, & Naimark, 2011). One article assessed the relevance of the content of the means of dissemination for the target audience and the engagement of the audience with the intervention (see Hundt et al., 2011). The remaining eight articles focused only on describing the development and delivery process of the interventions; no evaluation was performed (see Allender et al., 2011; Hartling et al., 2010; Kobus et al., 2007; Mason, 2008; Redwood et al., 2010; Smith et al., 2011; Stuttaford et al., 2006; Wilkinson, Papaioannou, Keen, & Booth, 2009; see also Table 1).
The sample included five experimental studies (Amsallem et al., 2007; Dobbins et al., 2009; Kirshbaum, 2008; Stiell et al., 2010; Tanna et al., 2011) and a quasi-experimental one (Gross & Lowe, 2009), five case studies using arts-based methods (Colantonio et al., 2008; Hartling et al., 2010; Hundt et al., 2011; Mason, 2008; Stuttaford et al., 2006), six case studies using traditional methods (Allender et al., 2011; Ginsburg et al., 2007; Kobus et al., 2007; Russell et al., 2010; Smith et al., 2011; Wilkinson et al., 2009), a prospective cohort study (Redwood et al., 2010), and a prospective observational study (Grad et al., 2008). See Table 1 for a summary of studies.
Types of Knowledge Dissemination Tools
The KDIs featured in the review included three forms of communication: written material, electronic material, and interpersonal communication activities or events (see Table 1). Each form encompassed a wide variety of tools.
Written material included information instruments such as articles, booklets, fact sheets, resource guides, newsletters, editorials, press releases and news coverage, pocket cards, posters, research bulletins, research and policy briefs, science summary reports, storybooks, and synopses. Eight interventions used one written tool (see Amsallem et al., 2007; Dobbins et al., 2009; Grad et al., 2008; Hartling et al., 2010; Kirshbaum, 2008; Stuttaford et al., 2006; Tanna et al., 2011; Wilkinson et al., 2009), one intervention used two written tools (see Allender et al., 2011), and four used more than two (see Gross & Lowe, 2009; Kobus et al., 2007; Redwood et al., 2010; Stiell et al., 2010).
Electronic material referred to dissemination devices such as DVDs and CD-ROMs, email alerts, knowledge material available on the Internet, online registries of research evidence, real-time reminders, tailored messages sent by email, web conferences, and websites. Nine articles refer to the use of one electronic means of communication (see Allender et al., 2011; Amsallem et al., 2007; Ginsburg et al., 2007; Grad et al., 2008; Kobus et al., 2007; Redwood et al., 2010; Smith et al., 2011; Stiell et al., 2010; Tanna et al., 2011). Only one intervention used more than one electronic tool, with Dobbins et al. (2009) using two.
Interpersonal means of communication were composed of activities or events such as arts-based performances (i.e., theater), a community of practice networks, forums, knowledge brokers, partnerships with stakeholders, and seminars/workshops. Ten interventions used one interpersonal communication tool (see Amsallem et al., 2007; Colantonio et al., 2008; Dobbins et al., 2009; Hundt et al., 2011; Kobus et al., 2007; Mason, 2008; Redwood et al., 2010; Russell et al., 2010; Stiell et al., 2010; Stuttaford et al., 2006), three incorporated two (see Allender et al., 2011; Ginsburg et al., 2007; Gross & Lowe, 2009), and none combined more than two.
Nine dissemination interventions used only one dissemination tool (written, electronic, or interpersonal; see Colantonio et al., 2008; Hartling et al., 2010; Hundt et al., 2011; Kirshbaum, 2008; Mason, 2008; Russell et al., 2010; Smith et al., 2011; Tanna et al., 2011; Wilkinson et al., 2009), five interventions used two (see Ginsburg et al., 2007; Grad et al., 2008; Gross & Lowe, 2009; Redwood et al., 2010; Stuttaford et al., 2006), and five used three (see Allender et al., 2011; Amsallem et al., 2007; Dobbins et al., 2009; Kobus et al., 2007; Stiell et al., 2010).
Target Audience, Conceptual Underpinnings, and Characteristics of Settings
Most of the KDIs were intended for health professionals (e.g., physicians, nurses, and physiotherapists; see Table 1). Indeed, five out of the 19 KDIs did not have health professionals among their target audience (see Dobbins et al., 2009; Hartling et al., 2010; Kobus et al., 2007; Mason, 2008; Stuttaford et al., 2006). Nonprofessional audiences (i.e., patients, patient advocacy groups, patients’ relatives, general public) were targeted in eight interventions (see Colantonio et al., 2008; Hartling et al., 2010; Hundt et al., 2011; Kobus et al., 2007; Mason, 2008; Redwood et al., 2010; Smith et al., 2011; Stuttaford et al., 2006).
Several proponents of knowledge translation promote the explicit use of theories or models in designing KDIs (Improved Clinical Effectiveness Through Behavioural Research Group, 2006). These researchers believe that such theories and models provide a generalizable framework within which to represent the various dimensions of intervention implementation. They argue that these conceptual frameworks offer a process to inform the design and delivery of interventions, that they act as a guide when evaluating knowledge translation strategies, and that they suggest potential causal mechanisms, for instance, Bartholomew’s Intervention Mapping model (see Bartholomew, Parcel, Kok, & Gottlieb, 2002), Kitson’s Research into Practice framework (see Kitson, Harvey, & McCormack, 1998), and the Ottawa Model of Research Use (see Logan & Graham, 1998).
Although multiple theories and frameworks are available—Wilson et al. (2010) retrieved 33 conceptual frameworks for knowledge dissemination—only 10 interventions out of the 19 in the review had an underlying conceptual framework (see Table 1). Four of them referred to existing models. Allender et al.’s (2011) strategy was inspired by a model of intervention originating in France, known as EPODE.6 This acronym stands for Ensemble prévenons l’obésité des enfants (“Together Let’s Prevent Childhood Obesity”). It is a model that supports knowledge exchange for community-based obesity prevention. Ginsburg et al. (2007) based their intervention on the interaction models of research utilization (Landry, Amara, & Lamari, 2001), which promote interactions between researchers and users. Kobus et al. (2007) referred to Davis et al.’s model of knowledge translation (Davis et al., 2003), within which the audience progresses from awareness, through agreement and adoption, to adherence, with evidence-based practices. The authors locate the intervention described in their article in the early phase of the continuum where the goal is to generate awareness about their research findings. Smith et al. (2011) used a knowledge translation approach similar to the one promoted by Draper, Low, Withall, Vickland, and Ward (2009). They reminded us that the latter approach includes key elements, such as the careful selection of research findings, appropriate adaptation of messages to the target audience, wide and continuing dissemination of knowledge through accessible media, and interaction among the various stakeholders to facilitate sustainable changes in practice.
The authors who used theater as a means for knowledge dissemination created their performance according to either an “applied theater” approach (Hundt et al., 2011; Stuttaford et al., 2006), or a “theater of the oppressed” approach (Mason, 2008). Applied theater is the broad term used to describe theater that has been created for an explicit purpose (e.g., to disseminate research findings) and for engaging the audience with the material presented (Ackroyd, 2000). In addition to these characteristics, the theater of the oppressed (Boal, 1979) enacts the struggles of disempowered people to generate meaningful solutions to oppression.
Three interventions in the review used a framework that their authors previously developed themselves. Dobbins et al. (2009) created a “framework for the dissemination and evaluation of research evidence for health care policy and practice.” Grad et al. (2008) developed and tested a method to measure the impact of information retrieved by doctors from medical databases, using a pop-up questionnaire containing 10 items of cognitive impact. Russell et al. (2010) designed a “broker to the knowledge brokers” model that emphasizes the uptake of research findings rather than providing a synthesis of evidence or training clinicians to become experts in critical appraisal.
Characteristics of settings
In all, 10 out of 19 KDIs took place or were initiated in Canada, 3 in the United Kingdom, 2 in Australia, 2 in the United States, 1 KDI took place in France, and 1 in South Africa (see Canada: Colantonio et al., 2008; Dobbins et al., 2009; Ginsburg et al., 2007; Grad et al., 2008; Gross & Lowe, 2009; Hartling et al., 2010; Mason, 2008; Russell et al., 2010; Stiell et al., 2010; Tanna et al., 2011; UK: Hundt et al., 2011; Kirshbaum, 2008; Wilkinson et al., 2009; Australia: Allender et al., 2011; Smith et al., 2011; USA: Kobus et al., 2007; Redwood et al., 2010; France: Amsallem et al., 2007; South Africa: Stuttaford et al., 2006). A total of 8 KDIs were performed in health settings (Amsallem et al., 2007; Colantonio et al., 2008; Dobbins et al., 2009; Gross & Lowe, 2009; Hartling et al., 2010; Kirshbaum, 2008; Russell et al., 2010; Stiell et al., 2010), 5 in community settings (including theater spaces; Allender et al., 2011; Hundt et al., 2011; Redwood et al., 2010; Smith et al., 2011; Stuttaford et al., 2006), 3 in virtual locations, via the Internet (Ginsburg et al., 2007; Grad et al., 2008; Tanna et al., 2011), and 2 in universities or research centers (Kobus et al., 2007; Wilkinson et al., 2009). The location of one intervention was not specified in the article reporting on it (Mason, 2008).
Evaluation of KDI Effectiveness
In the 11 articles in which the effectiveness of the interventions was assessed, three outcome measures were examined: knowledge acquisition, changes in attitudes, and changes in practice. Authors measured the outcomes of their KDI using quantitative tools (e.g., knowledge survey), qualitative instruments (e.g., self-reported, open-ended question survey), or mixed methods. Of those 11 interventions, 7 were evaluated with quantitative methods (Amsallem et al., 2007; Dobbins et al., 2009; Grad et al., 2008; Gross & Lowe, 2009; Kirshbaum, 2008; Russell et al., 2010; Tanna et al., 2011), 1 with qualitative instruments (Ginsburg et al., 2007), and 3 with mixed methods (Colantonio et al., 2008; Hundt et al., 2011; Stiell et al., 2010).
The effectiveness of the interventions was classified into four main categories, according to the interventions’ focus on specific means of communication (arts, email alerts, knowledge brokers, or other dissemination tools). Effectiveness was determined by the authors of the articles themselves. This grouping is one possible way of analyzing the data, but not the only one: some interventions used more than one dissemination tool and could thus have been listed under more than one category. However, each was grouped under the category representing its main dissemination tool (secondary means of dissemination are reported in Table 2).
Colantonio et al.’s (2008) evaluation tool was a postperformance feedback survey made up of five questions scored on a 5-point Likert-type scale with space for open-ended comments. They found that research-based theater, used as a means for disseminating knowledge about traumatic brain injuries to health care professionals, managers, and decision makers, was “extremely effective” (Colantonio et al., 2008, p. 183) for imparting new or reinforcing existing knowledge. They also suggested that drama as a knowledge dissemination strategy has the potential to impact practice.
Hundt et al. (2011) used the same approach as Colantonio et al. for their evaluation. They handed out postperformance evaluation sheets with questions that had a 5-point Likert-type scale with space for further comments. However, unlike Colantonio et al., they measured the relevance of the content of their play for their target audience and the engagement of the audience with the play, rather than the effectiveness of the theatrical performance for acquisition of knowledge, and change in attitudes and practice.
Dobbins et al. (2009) contrasted three knowledge dissemination strategies for their effectiveness in promoting the incorporation of research evidence, by health decision makers, into public health policies and programs. A survey administered by telephone, conducted prior to and immediately after the intervention, assessed effectiveness. Strategy 1 consisted of emailing a first group about the availability of a website7 hosting a repository of systematic reviews evaluating public health interventions. Strategy 2 involved sending participants of a second group a series of emails over seven consecutive weeks, each time with information about a specific review, and providing the links to the repository of systematic reviews, and the link to the PDF version of the specific systematic review. Finally, Strategy 3 included Strategy 2 with an additional component designed to provide one-on-one knowledge brokering services to decision makers. The tasks conducted by the knowledge brokers included ensuring that research evidence was transferred to decision makers in ways that were most useful to them, assisting decision makers in developing the skills and capacity for evidence-informed decision making, and assisting decision makers in translating evidence into local practice. The authors noted that only participants from Strategy 2 improved from baseline.
Grad et al. (2008) delivered a research-based synopsis to physicians by email and used a self-reported cognitive impact assessment method to evaluate the intervention. They concluded that synopses sent daily were effective for knowledge acquisition and changes in attitudes: “I learned something new” (Grad et al., 2008, p. 241) was the most frequently reported impact by research participants. Yet, it must be noted that their intervention actually had a relatively small effect on the attitudes of the participants, as only 10% of them claimed that their practice would be improved by receiving email alerts.
Tanna et al. (2011) evaluated their intervention (i.e., email alerts) after 3 months, through an online survey. They reported that email alerts of new literature increased familiarity, but that no detectable knowledge acquisition occurred in a large, randomized, international population of subscribers to an email alert service.
There is no consistency across the studies that evaluated the effectiveness of email alerts as a dissemination tool. Grad et al. (2008) measured three outcomes: acquisition of knowledge, changes in attitudes, and changes in practice. Dobbins et al. (2009) chose to evaluate the effect of their intervention exclusively on changes in practice, and Tanna et al. (2011) only on acquisition of knowledge. As for the effectiveness of the interventions, Dobbins et al. (2009) reported that one strategy in their study had a positive effect on practice; the study conducted by Grad et al. (2008), however, concluded that their intervention had no impact on this outcome. Similarly, Tanna et al. (2011) did not notice any significant improvement in the acquisition of knowledge after their intervention, but Grad et al. (2008) did.
Amsallem et al. (2007) compared two KDIs among cardiologists responsible for patient care in public community or academic hospitals. Simulated cases, multiple-choice questionnaires, and documentation about prescriptions were used to assess outcomes. The authors observed, when compared with control groups, a modest acquisition of knowledge in the active mode of dissemination of their study (i.e., a 2-hr visit with participants, from a knowledge broker, every 2 months for a year, to deliver information and facilitate discussion about published trial reports and/or results of meta-analyses), and no significant improvement in the passive mode (i.e., participants received evidence-based material available on the study website every week for a year). Similarly, they determined that participants’ attitudes toward their intention to prescribe (i.e., prescription of the relevant, evidence-based treatment associated with the medical condition) were impacted positively when measured with simulated cases for the active mode of dissemination, while no impact was reported for the passive mode. However, they found that there was no benefit recorded on practice—the conformity of real prescriptions to evidence-based references (e.g., systematic reviews)—in either mode of dissemination.
Russell et al. (2010) used a self-reported knowledge questionnaire before their intervention, immediately after, and at 12 and 18 months. They observed that the introduction of knowledge brokers had a positive impact on physiotherapists’ clinical practice, as the use of four new measurement tools designed to assess motor function in children with cerebral palsy increased. The authors did not define a priori the specific brokering activities to be performed by the knowledge brokers participating in their study, as they believed that this role should be flexible and responsive to the needs of the stakeholders.
Results varied across these two studies involving knowledge brokers. Amsallem et al. (2007) measured all three outcomes (i.e., acquisition of knowledge, changes in attitudes, and changes in practice), while Russell et al. (2010) measured the effectiveness of their intervention on acquisition of knowledge and changes in practice only. Both found a positive effect on knowledge acquisition, although the impact in Amsallem et al. (2007) was limited. The two studies did not reach the same conclusion regarding the impact of their intervention on practice: Amsallem et al. (2007) reported no impact on this outcome, while Russell et al. (2010) observed a positive effect.
Other means of dissemination (e.g., printed material, forums)
Ginsburg et al. (2007) focused on the use of interactive dissemination activities. The two forums and web conferences that they organized following the implementation of the Canadian Adverse Events Study brought positive results. They measured the effectiveness of their two strategies through observation at the events, semistructured interviews, site visits, and reviews of documentation (i.e., relevant documents outlining patient safety initiatives). They reported broader learning about patient safety (knowledge) and a sense of urgency around addressing patient safety issues (attitudes). However, they acknowledged that their knowledge dissemination activities did not have any discernible impact on practice.
Gross and Lowe (2009) studied the impact of a multimodal dissemination strategy on the practice of physiotherapists. Evaluation was done through a survey performed immediately prior to the knowledge dissemination initiative and after 1 year. Their strategy involved the dissemination of a best practices guide for work disability prevention, the creation of a network of peer-selected, educationally influential clinicians, province-wide seminars, and the use of a best practices guide as well as other resources in the physical therapy curriculum. The authors established that their interventions had “little impact” (Gross & Lowe, 2009, p. 877) on altering practice.
Kirshbaum (2008) studied the effect on nurses of a targeted booklet about breast-cancer care. She found that the booklet improved knowledge of the members of the intervention group, as demonstrated by a greater number of correct answers on each of the 17 knowledge items of the Exercise and Breast Cancer Questionnaire. She also noticed slight changes in the attitudes of the breast-cancer care nurses following the information booklet intervention. She concluded from the interpretation of the study results that there was a desire for knowledge among the respondents, combined with a desire to introduce an evidence-based change into practice. Yet, in the end, Kirshbaum found that the intervention had a limited effect on practice, considering that only 3 of the 12 practice items of the questionnaire were significantly affected by the intervention.
Stiell et al. (2010) revealed that their KDI did not bring any significant change in the rates of computed tomography (CT) imaging ordered in the emergency departments that were part of their study. The aim of their intervention was to sensitize physicians to be more selective when ordering CTs for patients with minor head injury. Documents such as radiology reports and census lists were reviewed within the intervention evaluation process, and poststudy interviews were conducted. The authors designed their intervention around simple and inexpensive strategies such as the distribution of articles, pocket cards, and posters, offering a 1-hr learning session to review supporting evidence for the clinical application of the Canadian Computed Tomography Head Rule, and finally, the implementation of a mandatory, real-time reminder of the rule on requisition for a CT scan.
Results about the effectiveness of these means of dissemination (e.g., forums, printed material) are relatively uniform across interventions. The two studies that measured acquisition of knowledge and changes in attitudes (Ginsburg et al., 2007; Kirshbaum, 2008) found that such interventions had a positive impact on both outcome measures. Conclusions related to changes in practice were likewise broadly consistent across the interventions that measured this outcome: Gross and Lowe (2009) as well as Kirshbaum (2008) reported limited impact of their intervention on practice, while Stiell et al. (2010) found no effect at all.
Effectiveness of KDIs Based on the Literature Review
The present review shows that most of the KDIs assessed in the 11 studies were successful for outcomes associated with knowledge acquisition and changes in attitudes, even if in some cases the improvement over baseline was modest. Impact on practice did not reach the same degree of effectiveness: half of the studies did not report any positive effect on this outcome, and two others observed only a limited change.
It seems that the impact of a dissemination intervention on acquisition of knowledge and changes in attitudes can be assessed immediately after a dissemination activity. Conversely, changes occurring in practice as a result of an intervention may take more time to materialize. Moreover, it is difficult to perform a thorough evaluation of a dissemination strategy when staff must move on to other projects (Wilson et al., 2010). Thus, greater and longer-term funding for the evaluation of this outcome measure would certainly be helpful. Funding agencies have a crucial role to play here. Conditions of funding usually stipulate that a study must be wrapped up within a certain period of time, which can prevent researchers from planning a longitudinal evaluation. They are often left with the sole option of measuring intent to change practice (changes in attitudes), rather than actual changes in practice.
Funding agencies could contribute in other ways as well. About 42% of the studies in the review did not include any evaluation process, while some others included a very basic form of evaluation. Major funding agencies should thus partner with researchers and consider designing tools that would assist researchers in conducting the evaluation of their intervention, and make them available to those interested in using such tools.
Utility of current knowledge on KDIs for guiding practice
Some authors believe that researchers have the responsibility not only to design knowledge translation interventions, but also to evaluate their effectiveness (Bhattacharyya, Estey, & Zwarenstein, 2009). They argue that evaluation creates knowledge that may benefit others. Thus, it is relevant to ask ourselves: “Was it worth conducting a comprehensive literature review on KDIs prior to designing a knowledge dissemination strategy for a study on ethical issues raised by nutrigenomics (NGx) research and its potential applications?” Are literature reviews, such as the one reported on in this article, useful for guiding practice? The answer probably depends on one’s initial objective. Undoubtedly, the present review sheds light on the variety of interventions, contexts, and actors involved in the interventions carried out, and highlights the complexity of implementing KDIs. It also reveals that information from such a literature review cannot be transformed instantly into a readily implementable knowledge dissemination strategy.
The context in which the interventions took place, including the audiences targeted, varied significantly from one study to another. Almost three quarters of the KDIs were intended for health professionals, albeit of different types (e.g., physicians, nurses, physiotherapists). This tends to reinforce the idea that, first and foremost, knowledge translation targets professionals in clinical settings as its primary audience. Yet, previous studies have shown that the impact of an intervention might vary even within an audience of health professionals. Freemantle et al. (2000) concluded, in a Cochrane review of 11 studies, that printed materials did not bring about change in physicians’ clinical practice. However, studies conducted among nurses reported a positive effect for the same type of intervention (Kirshbaum, 2008).
As a result, it cannot be presumed, with the knowledge currently available, that a given intervention will have the same impact regardless of audience or context. The present literature review strongly suggests that even the effectiveness of the same type of intervention is context dependent. Grad et al. (2008) reported that email alerts had a positive impact on physicians’ knowledge acquisition and attitudes. In contrast, Tanna et al. (2011) concluded, in their study of email alerts, that the latter did not increase the knowledge of health professionals, including physicians. Thus, these results do not provide a clear idea of the effectiveness of this type of intervention, and one still does not know whether it would be wise to integrate email alerts into a dissemination strategy. Therefore, the production of more data on KDIs implemented in various contexts, targeting different audiences, including researchers, is highly desirable.
Some interventions featured a multicomponent knowledge dissemination strategy. Stiell et al. (2010) used pocket cards, posters, learning sessions, and electronic reminders, but they did not design their assessment so that each component could be evaluated individually. It is therefore impossible to determine the effectiveness of each mode of dissemination apart from the collective impact. It would be helpful if each dissemination tool used in a KDI were evaluated individually, so that the effectiveness of each means could be assessed both on its own merits and in its contribution to the collective impact.
A need to shift from the current paradigm
The present literature review suggests that there is a dearth of publications on the evaluation of the effectiveness of KDIs. More knowledge needs to be generated in this area. Moreover, most of the measurement tools that were used to evaluate the interventions were quantitative in nature. There is no doubt that quantitative data collection methods are useful in determining whether an intervention is effective. However, qualitative instruments should complement them, as the latter allow a more comprehensive understanding of the continuum of effectiveness. In fact, though the literature review showed which interventions were more successful and which were less so, the articles did not provide a thorough description of the reasons that made some interventions more effective than others. This is why one must agree with those who believe that a greater use of rigorous interpretative (qualitative) approaches in the field of knowledge translation would contribute to a better understanding of the KDI implementation process (Goering, Boydell, & Pignatiello, 2008; Rycroft-Malone, 2007). Qualitative methods of research should be promoted as an essential complement to quantitative ones.
The utilization of various kinds of knowledge—whether “scientific,” experiential, ethical, or aesthetic (Kitson, 2009)—is desirable if researchers are to succeed in bringing research findings to end users so that the latter can use them to make sounder decisions. Further studies are also needed to investigate which channel of communication is most appropriate for disseminating each of these kinds of knowledge. The present literature review suggests that the arts are an effective means for translating people’s experience (Colantonio et al., 2008). However, artistic forms may well be less relevant for disseminating other kinds of knowledge. Robust data on the relationship between the kind of knowledge, the type of dissemination intervention, and their effect on given outcomes are definitely needed.
Considering the significant attention given to knowledge translation by the research community, and particularly by funding agencies (Tetroe et al., 2008), one might have expected more KDIs matching the inclusion criteria of this literature review to have been documented. Clearly, the bulk of the literature in this field tends to focus on theories, models, or frameworks instead of on the evaluation of specific, concrete interventions (Ward et al., 2009). Yet, only half of the KDIs in the sample referred to a conceptual framework for guiding their design and implementation.
It is also striking that half of the KDIs included in the review were conducted by Canadian researchers. This is perhaps an indication that the high degree of engagement of the CIHR to promote the creation of new knowledge and its translation into practice and policy, in the last decade or so, is now paying off (Anderson et al., 2010).
In light of this literature review, a number of conclusions emerged:
- “Knowledge dissemination” means different things to different people.
- There is a dearth of published studies on the effectiveness of KDIs.
- A significant number of authors do not evaluate the effectiveness of the KDIs that they implement.
- KDIs have a stronger impact on acquisition of knowledge and changes in attitudes than on changes in practice.
- KDIs described in the literature differ so much in terms of context of dissemination, kind of knowledge disseminated, type of intervention used, or target audience that they cannot be readily implemented in other intervention situations.
- Current KDIs are mostly aimed at health professionals.
- There is a scarcity of longitudinal evaluations of KDIs.
- The kind of knowledge disseminated is almost exclusively limited to “scientific” data.
It is thus suggested that
- more researchers should use qualitative methods to evaluate KDIs;
- the kinds of knowledge disseminated should not be limited to “scientific” knowledge; experiential, ethical, and aesthetic forms of knowledge are also valid types of knowledge to be disseminated;
- each dissemination tool that is part of the same KDI should also be evaluated individually, as much as possible, so that the effectiveness of each means can be assessed both collectively and independently;
- increased funding should be made available in the context of longer research timelines so that a rigorous evaluation of KDIs, especially on changes in practice, can be carried out; and
- major funding agencies should design and make available to researchers a guide to assist those interested in evaluating their KDI.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research and/or authorship of this article: This work was funded by a grant from the Canadian Institutes of Health Research (CIHR).
Authors’ Note
DL carried out the literature review and analysis and drafted the manuscript. VM, TH, and BG contributed to the initial inclusion/exclusion of articles, participated actively in the study design and analysis, and revised the manuscript for important intellectual content. All authors read and approved the final manuscript.
3. http://www.omics-ethics.org/en/what-is-nutrigenomics, accessed on September 27, 2012.
4. http://cahr.uvic.ca/cahr-news/cihr-presidents-announcement, accessed on October 4, 2012.
5. Health promotion: The process of enabling people to increase control over their health and its determinants, and thereby improve their health.
© The Author(s) 2013
This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (http://www.uk.sagepub.com/aboutus/openaccess.htm).
Darquise Lafrenière is a clinical assistant professor in the Department of Preventive and Social Medicine at the University of Montreal and a researcher in the OMICS-ETHICS research group. Her research interests include knowledge translation in health research, especially the use of arts-based methods, research ethics, and ethics and communication.
Vincent Menuz is a postdoctoral fellow in the Research Centre on Ethics at the University of Montreal and a member of the OMICS-ETHICS research group. He completed a PhD in biology at the University of Geneva, Switzerland, where he developed a significant expertise in molecular biology, genetics, and genomics. His current work focuses on the ethical and social issues of human enhancement, as well as antiaging interventions.
Thierry Hurlimann is the research coordinator of the OMICS-ETHICS research group at the University of Montreal. He was admitted to the Geneva bar as a lawyer in 1997 and completed an LLM specialization in bioethics at McGill University in 2004. He plays an active role as a researcher and critical thinker in the evaluation and development of projects concerning nutrigenomics and personalized medicine as they relate to global justice and equity issues in both developed and developing countries.
Béatrice Godard is a professor in the Department of Preventive and Social Medicine at the University of Montreal. Her current research work explores emerging ethical responsibilities where research and the clinic intersect. She is also engaged in seeking to better understand prevailing attitudes and behaviours regarding genetic research.