Assessing the impact: The research to policy interface
Much of SRH and HIV and AIDS research, particularly in the development arena, aims to influence policy; it is “research committed to improvement”. Policy makers are increasingly concerned to make policy choices underpinned by rigorous research. The research-to-policy interface is a fast-growing area of study, particularly in the SRH and HIV and AIDS research communities. The purposes are twofold: accountability of a research organisation, demonstrating achievement and value for money to funders; and learning, developing a better understanding of the research impact process in order to enhance future impact. HEARD is an applied research organisation that aims to mobilise evidence for interventions in health and HIV in the region. Only by identifying where change has transpired as a result of its research, where it has not, and the reasons for this, can it deliver effective research.
Impact assessment is an underdeveloped field of study, in part due to the complex and dynamic nature of research impacts and the consequent difficulty in measuring them. Sumner, Perkins and Lindstrom (2008, unpublished) identify a number of significant problems when attempting to track the impact of research: difficulty in determining conceptual influence (on opinion, attitudes and thinking); identifying research users; timing of assessment; attributing impact in the context of other drivers; and using qualitative and subjective data. Notwithstanding such issues, if a real understanding of the research-to-policy interface is to be achieved, an assessment must also explain why impacts took place, going beyond merely identifying them. Difficult as it may be, there are good reasons for attempting to evaluate the impact of policy research. The assessment considered here offers an example of this work, in the context of a complex, multi-player environment.
Methodology and issues
It appeared that the Reviewing Emergencies report had been effective in influencing policymakers. We believed the report had had an impact in Swaziland both conceptually, on the way people think about HIV, AIDS and emergency responses, and instrumentally, influencing behaviour and policy. In mid-2008, the decision was taken to carry out an assessment of the impact of the report to determine the validity of this claim and to understand what ‘worked’ and what ‘didn’t’. Fiona Henry, who was awarded a fellowship by the University of Edinburgh to work with HEARD, was tasked with leading the assessment.
The specific objectives were to:
• Document the creation and dissemination of the report;
• Identify and explain its impact;
• Identify any barriers and/or limitations to its impact;
• Draw lessons for maximising the impact of future research.
This article was developed from the assessment, informed by a presentation given at the meeting of DFID-funded Research Programme Consortia on ‘Strengthening the research to policy and practice interface: Exploring strategies used by research organisations working on Sexual and Reproductive Health and HIV and AIDS’, held in Liverpool in May 2009, and by peer review. Ideally, an impact assessment should be designed from the outset of a project; this makes the process of collecting information to track impact much easier. This was not done, owing to a lack of staff and time, and is acknowledged as a limitation. The lesson learnt is to plan dissemination and the evaluation of activities at the beginning of a project, and to budget for this.
Forward-tracking and attribution
Two broad categories exist for impact assessments: forward-tracking, from research to outcome, and backward-tracking, from decisions taken to potential research influence. Our impact assessment set out to track from publication to outcome. However, forward-tracking approaches can have serious limitations. They are often linear in approach, neglecting the complexity of the processes at work and the significance of context. The policy environment is influenced by socio-cultural, political and economic factors, and these must be acknowledged in order to understand why an impact took place. Taking this into account, the assessment attempts to place identified ‘impacts’ in their relevant context.
The assessment cannot claim to fully understand the influence of other ‘drivers’ on outcomes. Policy research is only one of many sources of information used in decision making or to form opinion. To conceptualise the counterfactual, and isolate the impacts of Reviewing Emergencies alone, would be both resource-intensive and difficult. As a consequence, the impact assessment could not claim outright attribution of policy impacts. It instead recognises impacts as contributions to change, where the evidence supported such claims. This difficult methodological issue of attributing outcomes can result in a ‘shying away’ from impact assessments. However, with a pragmatic approach to understanding impact, based on evidence and informed opinion, and an understanding that impacts will rarely be attributable solely to an individual publication or programme, an impact assessment can still be of value.
Conceptualising ‘impact’: A temporal approach
‘Impact’ is used interchangeably with terms such as ‘influence’, ‘outcomes’, ‘use’ and ‘uptake’, and a number of definitions exist in the literature. In the assessment of Reviewing Emergencies, impact is defined temporally, referring to ‘initial impact’, ‘long-term impact’ and ‘potential impact’. Firstly, initial impact refers to the ‘sticky messages’ of the report: what strikes the reader instantly about the report and its findings, and the key messages they come away with. Identifying the findings, statements or graphs that resonated with readers would provide powerful tools for communicating the messages of future research. Secondly, impact was assumed to have a longer-term element, influencing thinking and decision making; this constituted the main body of the assessment. ‘Long-term impacts’ are those conceptual and instrumental impacts that change understanding and attitudes or contribute to a change in policy or behaviour. Thirdly, as the assessment took place about a year after the launch of Reviewing Emergencies, ‘potential impact’ considered the possibility of impact in the future. With continued advocacy, and changes to the policy environment, potential impact outlines the ‘capability’ of the report’s findings; it highlights areas on which to focus advocacy efforts in the future.
Policy research can be used for multiple, often unforeseen, purposes. Tracking a research contribution, especially one that seeks conceptual change, is difficult. Taking a pragmatic approach, a good place to begin is identifying the likely users of the research. The impact assessment chose five sectors for analysis to encompass the key actors: donors; government; civil society and non-governmental organisations; academia; and the media. Identifying them helped to structure the analysis and to understand the different ‘uptake’ of the research. The categories were purposefully broad, in recognition of the broad array of policy players and to enable flexibility in the analysis across national boundaries and disciplines. Lessons from Swaziland, we believed, would be applicable elsewhere in the region, especially in Lesotho, Namibia and Botswana, as these are all defined as lower-middle-income countries, have similar prevalence levels, and are members (with South Africa) of the Southern African Customs Union. This breadth was also important because Ministries of Health are often weak and, in many African countries, donor policies have a disproportionate influence on health.
Data and measurement
‘Measuring’ impact posed some difficulty. Changes to thinking and decisions are particularly hard to quantify. For this reason a qualitative approach was used, asking how and why people believed the report had altered their approach to the Swazi epidemic, and what impact they believed the report had. Anecdotal evidence and substantive examples were key to supporting such beliefs in the absence of quantitative evidence. Impact was ultimately considered against the aims of the report: determining what was achieved as intended, what was not achieved, and any unintended impacts.
The methods consisted, first, of a literature review to develop an understanding of the background and the terms of reference for the study. Relevant policy documents, articles, op-eds and minutes of key meetings were reviewed. A questionnaire, with questions on influence to date, potential influence and barriers to influence, was then distributed to 50 individuals across the five sectors. Respondents were asked to rank how influential they thought the report had been in different areas, from ‘no influence’ to ‘a very large influence’ (with a ‘don’t know’ option). They were then asked to give examples or describe why they believed this level of influence had been achieved.
Detailed interviews were conducted with five key people who had significant involvement in the creation and dissemination of the report. Twenty questionnaires were returned; unfortunately, given time constraints, a follow-up of the original questionnaire to increase the response rate was not possible. In analysing feedback from the questionnaires, the percentage of answers for each ranking was calculated. Similar details or examples from respondents and interviewees were grouped together to identify trends in opinion.
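The per-ranking tallying described above can be sketched as follows. This is a minimal illustration only: the article does not list the intermediate points of the influence scale, so the labels between ‘no influence’ and ‘very large influence’ are assumptions, and the response data are invented for demonstration.

```python
from collections import Counter

# Assumed scale points; only the endpoints and the "don't know"
# option are stated in the article.
RANKS = ["no influence", "small influence", "moderate influence",
         "large influence", "very large influence", "don't know"]

# Hypothetical answers to one questionnaire item (illustrative only).
responses = [
    "moderate influence", "large influence", "no influence",
    "large influence", "don't know", "moderate influence",
    "very large influence", "moderate influence",
]

def ranking_percentages(answers, ranks=RANKS):
    """Return the percentage of answers falling in each ranking category."""
    counts = Counter(answers)
    total = len(answers)
    return {r: round(100 * counts.get(r, 0) / total, 1) for r in ranks}

print(ranking_percentages(responses))
```

Keeping every scale point in the output, including those no respondent chose, makes it easy to see where opinion clusters across the full range.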
We recognised that a positive bias could exist. Firstly, the writing of the assessment assumed that an impact had occurred; to mitigate this, a ‘no influence’ option was included in the questionnaire. Secondly, respondents who had worked on creating the report, or those in close partnership with its writers, might give optimistic estimates of the report’s impact to validate their own work. For this reason, weight was given to opinions reinforced with explicit examples, and to those highlighting barriers, limitations or negative impacts of the report.
The Scripps Research Institute (TSRI) undertakes basic biomedical research, primarily in laboratory settings, to learn how the human body operates on all levels. Our discoveries are often licensed to biotechnology or pharmaceutical firms for further development toward a drug or treatment. As a biomedical research institute, we do not see patients and rarely conduct clinical trials; for the latest information on clinical trials throughout the United States, visit www.clinicaltrials.gov. For information on specific diseases, search for associations or organizations dedicated to the disease, for example, the National Institute of Allergy and Infectious Diseases or amfAR.
What is HIV/AIDS?
Human immunodeficiency virus (HIV) is the virus that causes acquired immunodeficiency syndrome, also known as AIDS. HIV kills or damages the cells of the body’s immune system, destroying CD4 positive (CD4+) T cells, a type of white blood cell vital to fighting off infection. Because HIV compromises the immune system, HIV-positive people are vulnerable to other infections, diseases, and complications. A blood test is used to confirm the presence of HIV in the body.
AIDS is the final stage of HIV infection. A person infected with HIV is diagnosed with AIDS when he or she has one or more opportunistic infections, such as pneumonia or tuberculosis, and has a dangerously low number of CD4+ T cells (less than 200 cells per cubic millimeter of blood).
Who is at Risk?
HIV is most often transmitted through unprotected sex with an infected person. The virus may also be spread by sharing drug needles or through contact with the blood of an infected person. Women with HIV can transmit it to their babies before or during birth or through breastfeeding. HIV-infected people taking antiretroviral therapy can still infect others through unprotected sex and needle-sharing.
Incapable of surviving long outside the body, HIV cannot be transmitted through routine daily activities, such as using a toilet seat, sharing food utensils or drinking glasses, shaking hands, or kissing. The virus can only be transmitted from person to person, not through animals or insect bites.
The Centers for Disease Control and Prevention (CDC) estimates that more than 1 million people are living with HIV in the United States. Twenty-one percent of people living with HIV—one in five—are unaware of their infection. According to CDC statistics, an estimated 56,300 Americans become infected with HIV each year. The World Health Organization’s latest figures put the total number of people living with HIV/AIDS worldwide at 33.3 million.
There is no cure for HIV/AIDS. Before the development of certain medications, people with HIV could progress to AIDS in just a few years. Today, many effective medicines allow infected people to live much longer—even decades—with HIV before developing AIDS. Most of these medicines “inhibit” the progress of the disease by interfering with the virus’s reproduction, protein production, and ability to enter the body’s cells.