
Journal of International Business Research and Marketing
Volume 5, Issue 3, March 2020, Pages 21-25


How to Evaluate Collaboration within Research and Innovation

DOI: 10.18775/jibrm.1849-8558.2015.53.3003
URL: http://dx.doi.org/10.18775/jibrm.1849-8558.2015.53.3003

Ann Svensson

School of Business, Economics and IT, University West, Trollhättan, Sweden

Abstract: Improving and strengthening regional development is often the subject of political decisions; within the European Union, this includes efforts to increase innovation in SMEs. SMEs need access to knowledge, competence and collaboration in order to increase their opportunities to develop competitive products higher up in the value chain. The increasing rate of change in society also creates a need to decrease the time from knowledge development to market-ready products. Ongoing evaluation is prevalent in many research and development projects today in order to follow up on intended results. Ongoing evaluation can contribute to increasing the efficiency and effectiveness of a specific innovation project by highlighting areas of improvement and development needs. The purpose of this paper is to analyze how collaboration within research and innovation can be evaluated, using a specific innovation project as an example. Qualitative methods have been used for data collection and analysis, based on documents, interviews and participation in various activities. An intervention theory is created, interpreted and structured based on information from documents and interviews. The intervention theory should be further refined, and can be used to select important aspects of the innovation project for further evaluation. It can be used to evaluate causes and effects, and whether the innovation project is reaching its intended final achievements and results.

Keywords: Collaboration, Research, Innovation, Regional development, Qualitative methods


1. Introduction

Political decisions are made in order to improve and strengthen regional development within the European Union. In this respect, SMEs need access to knowledge, competence and collaboration in order to increase their opportunities to develop competitive products higher up in the value chain. The increasing rate of change in society also creates a need to decrease the time from knowledge development to market-ready products.

Ongoing evaluation is prevalent in many research and development projects today in order to follow up on intended results. It is especially prevalent in large projects financed by the European Union structural funds, as these are required to include an ongoing evaluator. Ongoing evaluation is the official form of evaluation in Sweden (Brulin & Svensson 2012). It aims to create preconditions for continual learning within a project and to contribute to more effective project performance. Ongoing evaluation has a process-oriented, and thus forward-looking, approach with the aim of supporting development. It is also called learning evaluation, as it is conducted continuously as a process.

The purpose of ongoing evaluation is that the ongoing evaluator can contribute to increasing the efficiency and effectiveness of a specific project by highlighting areas of improvement and development needs. This implies that the ongoing evaluator should continually participate in activities to support the project management in changing and improving the project so that it reaches its goals. The ongoing evaluator should therefore have close contact with the activities that are continuously evaluated (Sandberg & Faugert 2016). Theories of evaluation reflect assumptions on how to design the evaluation, and program theories reflect assumptions on how to conceptualize an intervention program for evaluation purposes (Chen 2016).

This ongoing evaluation has its background in the aim, goals and activity plan laid out in the decision from the Swedish Agency for Economic and Regional Growth. The ongoing evaluation should focus on the results of a project approved by the European Regional Development Fund, its long-term effects, as well as the project's capability to change, improve and strengthen regional growth and employment (Tillväxtverket 2016). This project is prioritized as it conforms to the strategy for growth and development within a southern region in Sweden for 2014–2020, within the area of collaboration within research and innovation.

Implementing a program, by conducting a project over a few years, is a challenging task. In this paper, a simulation and innovation project in one region in Sweden is continually evaluated. Collaborative projects within research and innovation, involving academia as well as private and public organizations, often face challenges when implementing new practices, as the new habits and routines should remain part of the organizations and communities after the projects are finished (Stirman et al. 2012).

The role of an ongoing evaluator is complex, as multiple issues need to be dealt with. It also requires seeking close participation among the involved parties within the ongoing evaluation process, from the project management to the different stakeholders. This demands high integrity in order to handle the complexity of giving engaged support to the development and helping the project, in combination with the roles of ongoing evaluator and researcher (Bienkowska, Norrman & Nählinder 2010). The commitment of an ongoing evaluator can thus be considered ambiguous. In research related to ongoing evaluation of implementation, it is important to describe the object of implementation as well as the implementation and its organization, in order to enable analysis and promote creative thinking (Vedung 2016). The purpose of this paper is to analyze how collaboration within research and innovation can be evaluated, by creating an intervention theory, using a specific project as an example.

2. Method

2.1. Methodological Approach

This paper is based on an ongoing evaluation of a research and innovation project within a region in Sweden. An action research method has been used, inspired by the participative action appreciative research (PAAR) method (Ghaye et al. 2008). PAAR synthesizes action research and participatory action research; it brings together action and reflection with the participation of various stakeholders, and adds a new dimension called appreciative intelligence. The method is concerned with developing practical knowledge in the pursuit of human purposes, and with identifying and amplifying current achievements to produce practical solutions, which is the main purpose of ongoing evaluation.

The method of ongoing evaluation is meant to create conditions for a better and more efficient project implementation (Vedung 2016). Thus, ongoing evaluation has process-based, forward-looking and development-supporting purposes, and should highlight areas of improvement and development needs. The evaluator therefore continually participates in activities in order to support the project management, so that the project reaches its goals and contributes to a learning context. The project performance is critically and constructively scrutinized according to its evaluation and indicator systems. The aim of the ongoing evaluator is to contribute to a learning environment within the project and among its internal and external stakeholders.

The project that is continuously evaluated is a simulation and innovation project running over three years, 2016 to 2018, called INNO18 (fictitious name), conducted within a region in the south of Sweden. A university in this southern region is successfully engaged in research within simulation, game technology and information analysis, and the project aims to develop and test demonstrators for SMEs to use within virtual engineering and virtual health. The project consists of different part-projects: an innovation collaboration part and two thematic part-projects, virtual engineering and virtual health. Each thematic part-project in turn has three parts. Virtual engineering includes sustainable transports, advanced support for industry, and future industrial operators. Virtual health includes the ambulance demonstrator, the elderly demonstrator and the bio-marker demonstrator.

In order to conduct the ongoing evaluation, there is a continual need to collect views and information from project managers, thematic leaders and the subproject managers responsible for the various demonstrators. It is also important to collect and evaluate information from other stakeholders, such as representatives affected by the project. Information from different parties also helps to create legitimacy and credibility for the evaluation.

2.2. Data Collection

Both primary and secondary data have been collected in this study (Hox & Boeije 2005). A first meeting was held at the beginning of the third quarter of the project's first year, where the project leader gave a presentation of the project. Documents describing the decision and organization of the project, together with the first two activity reports, were also sent to the evaluator during the last quarter of the project's first year. This was followed by a few Skype meetings in order to decide on an ongoing evaluation plan for the project. The ongoing evaluation plan will be reviewed during the course of the project, in order to meet the needs of the ongoing evaluation.

The ongoing evaluation emphasizes qualitative methods for data collection, in order to support the common learning in the project (Vedung 2016). However, quantitative methods can also be used. The first evaluation report, at the end of the project's first year, was based on qualitative methods. The data used for this report were the decision document, the activity plan, the activity reports, interviews with all project, thematic and part-project leaders as well as a few stakeholders, together with participation in the convention "IT and health" arranged by the project. In total there were nine interviews of an hour each, together with a reflective discussion with the project leader.

The second evaluation report was finished at the middle of the project's second year. The data for this report were collected from various presentation materials, participation in a steering group meeting and a project meeting, as well as interviews with project, thematic and part-project leaders. Interviews were also conducted with representatives from the municipalities' organization in the region: the manager who is included in the steering group and its business strategist.

At the end of the second year, the third evaluation report was produced. During the second half of the second year, data were collected from the activity reports, participation in the conference "IT and health" and in a steering group meeting, as well as through interviews and dialogues with the project leader and the thematic leaders. An interview was also conducted with a representative from the Science Park.

2.3. Method for Analysis

Content analysis is used for analyzing the collected data (Elo & Kyngäs 2008; Graneheim & Lundman 2004). The analysis is based on all collected data: the interviews, the participation in meetings and conventions, as well as the various documents. Notes were taken during the interviews, which were also recorded. For the analysis, the recordings were afterwards listened through and the notes were read. In particular, inductive content analysis was conducted on all the collected data. The data were thematized based on the demonstrators created in the project, together with their activities. Moreover, a more deductive content analysis was used in order to develop the intervention theory for the project. For the development of the intervention theory, a program theory was used, together with the project application and the decision document for the project as well as the interviews (Vedung 2016). The developed intervention theory is a first version created by the evaluator's interpretation of the content of the documents, and will be used as a structured analytical tool for the further ongoing evaluation. However, this intervention theory will first be discussed and agreed with the project leader.

3. Ongoing Evaluation

This chapter is based on literature on ongoing evaluation, related to the evaluation of development interventions and their efforts in real-life situations in the public sector.

3.1. Evaluating Program Implementation

Evaluation is a systematic and thorough assessment of efforts that have been performed. The assessment includes a determination of value (Vedung 2009). In the related context, this implies a detailed assessment of results, performance, administration and management, as well as the organization of public-sector operations. The evaluation is thus retrospective, of an ex post character, and is coupled to interventions that have been decided; such interventions can be ongoing or finished. Within an evaluation, an activity is studied. The activity is composed of a specific action or arrangement such as a project, a provision, an act, an intervention, a program or a policy. This kind of activity will be called an intervention in the following text.

Ongoing evaluation is a concept describing a continuous activity of assessment, conducted in order to suggest changes within the intervention. This kind of evaluation is also called learning evaluation. The evaluator needs to have close contact with the intervention and is expected to give continuous suggestions and advice to the intervention managers. Moreover, the evaluator is involved in all processes during the intervention and will also perform a final evaluation. Thus, ongoing evaluation is formative in character (Sandberg & Faugert 2016). Ongoing evaluation is used for the evaluation of projects financed by the European Union structural funds.
The evaluation has to be conducted based on rules that guarantee its quality. This requires a systematic application of valuation criteria, as well as planned data collection and reporting. The valuation criteria are often based on the goals of the intervention, which for public-sector interventions are often formulated from political goals. The stakeholders' expectations are also important and useful as valuation criteria (Vedung 2009). Whichever valuation criteria are used, they should guide the data collection and be used as measures in the assessment of the intervention. Thus, a systematic evaluation implies an organized and methodologically deliberate way of conduct. Evaluation should use scientific methods according to accepted principles (Sandberg & Faugert 2016). Evaluators' ways of thinking should differ from ordinary daily decision making, as they engage in a process of figuring out what is needed to address challenges in the specific intervention studied (Mertens & Wilson 2012).

The public sector is seen as a system, where a whole is considered as consisting of integrated parts (Vedung 2009). The system consists of an inflow, a transformation (a process and activities), an outflow, and feedback. However, in the public sector there is also an interest in what happens beyond the outflow, in the form of one or more outcomes. The inflow is in the public sector called the intervention, a kind of activity or action based on a decision, often of a political character. The transformation is called administration or implementation, and the outflow is denoted achievement or performance.
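As a minimal sketch of this system view, the stage names below follow the text, while the pipeline representation itself is only an illustrative assumption (Python is used purely for illustration):

    # Illustrative only: the public-sector system view described above,
    # written as an ordered pipeline of named stages with a feedback loop.
    # Stage names follow the text; everything else is an assumption.
    stages = [
        ("inflow", "the intervention: activity or action based on a decision, often political"),
        ("transformation", "administration or implementation"),
        ("outflow", "achievement or performance delivered by the intervention"),
        ("outcomes", "one or more effects of interest beyond the outflow"),
    ]

    for name, description in stages:
        print(f"{name}: {description}")
    print("feedback: observations of outcomes inform adjustments to the intervention")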

3.2. Developing an Intervention Theory

An intervention theory, based on a program theory, is a useful tool for evaluation (Vedung 2009; 2017). A program theory has two components: a theory of change and a theory of action (Funnell & Rogers 2011). An intervention theory aims to describe how the intervention is supposed to have an impact on society. It is used to develop and improve programs and organizations focusing on solving a wide range of problems, to aid decision making, to facilitate organizational learning and the development of new knowledge, and to meet transparency and accountability needs (Donaldson 2012). The intervention theory does not describe how the intervention has been realized in practice. Rather, the beliefs that form the basis for the intervention show how the intervention process is intended to unfold, and can contribute suggestions regarding the results.

The intervention theory includes "if-then" statements, and expresses cause and effect, that is, causality. If an intervention is adopted, it will probably imply certain activities that lead to certain final achievements. If certain achievements are reached, hopefully specific results will be obtained. However, it is not possible to predict exactly what the intervention will bring about.
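To make the "if-then" structure concrete, the following minimal sketch (a hypothetical illustration, not the project's actual intervention theory) models an intervention theory as an ordered chain of hedged cause-and-effect statements; the example links are invented:

    from dataclasses import dataclass

    @dataclass
    class CausalLink:
        """One hypothesized "if-then" statement in an intervention theory."""
        cause: str   # what the intervention does or has achieved
        effect: str  # what that is expected to lead to
        hedge: str   # "probably", "hopefully": causality is never certain

    # Invented example links for an innovation project of this kind.
    intervention_theory = [
        CausalLink("demonstrators are developed and tested with SMEs",
                   "SMEs gain access to new knowledge and competence",
                   "probably"),
        CausalLink("SMEs gain access to new knowledge and competence",
                   "SMEs develop competitive products higher up in the value chain",
                   "hopefully"),
    ]

    for link in intervention_theory:
        print(f"If {link.cause}, then {link.hedge} {link.effect}.")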

As the intervention is interpreted and arranged in an intervention theory by the evaluator, there is uncertainty in the interpretation. An intervention theory can never be an exact image of the intervention. Once described, the intervention theory can be used as a conceptual framework, a frame of reference, for the ongoing evaluation. The intervention theory consists of a logic and a structure that show the mindsets about actors, actions and effects inherent in the intervention. The interest in the evaluation is thus to evaluate causes and effects of the actions performed. Moreover, the intervention theory can be used to find causes and effects that are implied but were not fully articulated in the description or the decision of the intervention (Vedung 2009).

4. Results and Analysis

In the ongoing evaluation, the evaluator has interpreted and arranged an inherent intervention theory for the simulation and innovation project, INNO18. This has been done in order to demonstrate how an interpretation and arrangement of the intervention of INNO18 can be seen. The intervention theory can also be used, further developed and refined in the subsequent ongoing evaluation process.

The intervention theory presented in Figure 1 is a first version, which can be further refined and developed during the ongoing evaluation. INNO18 consists of a number of constituents that should be tied into one another as a system, as a coherent unit, in order to reach the results intended in the project.

The structured intervention theory has been arranged based on the content of the project application and the decision document, together with the data collected in interviews. The system model has been used as a tool to categorize, structure and arrange the project's intervention theory.

The carrier of the intervention theory is a university in the southern part of Sweden, and the intervention program is the project INNO18. The project consists of a number of activities that are developed, processed and refined in order to reach the prescribed results. These results are based on certain final achievements. The final achievements, the outflow, denote what the project aims to provide to its target group, for example the services conducted in the project and directed to the target group. The target group comprises the final receivers of the intervention, those the intervention is aimed to affect. The outcome, or the result, describes what happens when the final achievements reach the target group, and what the actions of the target group can lead to. An intervention can also have intermediate target groups that the intervention tries to affect in order to have an impact on the final target group. The mechanisms can be seen as milestones for how the target group can be affected in order to act in compliance with the intended results.

The ongoing evaluation should concentrate on the content of the intervention and its performance and results. This implies that the ongoing evaluation should consider the causes and effects between the activities and the final achievements and results.

A few activity indicators are constructed for the project as instruments, and should contribute to the attainment of the project's final achievements and results. The activity indicators for the project are the following (a sketch of how progress against these targets could be tracked appears after the list):

  • Number of enterprises receiving support – 25;
  • Number of enterprises receiving support for introducing new products to the market – 20;
  • Number of enterprises collaborating with research institutes – 5.
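As a hypothetical illustration of how progress against these three targets could be followed up during the ongoing evaluation, the sketch below compares reported counts with the stated targets; the targets are taken from the list above, while the reported counts are invented:

    # Activity indicator targets from the project, paired with invented
    # counts as they might be reported at some point during the project.
    indicators = {
        "enterprises receiving support": (25, 17),                    # (target, reported)
        "enterprises supported to introduce new products": (20, 9),
        "enterprises collaborating with research institutes": (5, 4),
    }

    for name, (target, reported) in indicators.items():
        progress = 100 * reported / target
        print(f"{name}: {reported} of {target} ({progress:.0f}% of target)")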

Figure 1: Intervention program theory of INNO18

5. Discussion

The presented intervention theory for the innovation project is a first version, and it is planned to be refined and developed. The further development of the intervention theory can be conducted in collaboration with the project leader, the thematic leaders and the part-project leaders, as well as with the steering committee. Hopefully, the intervention theory can support the innovation project, as the project's underlying ideas have been structured according to its achievements and results.

The intervention theory can also be used to choose which aspects should be further evaluated, and to refine evaluation questions for the ongoing evaluation mission. Certain aspects may be more important to evaluate than others, as the goal is not to conduct any unnecessary evaluation. The intervention theory can thus be used by the evaluator in dialogues with the project members and related actors, in order to obtain answers to important questions and to perform the ongoing evaluation with high quality and high value to the innovation project. The aim of the intervention theory is to support the evaluator and the innovation project in carving out the most important questions and aspects in relation to performing the actions needed to achieve the intended change (Funnell & Rogers 2011).

The further use of the intervention theory can support the evaluation of how the indicators contribute to the innovation project's final achievements and results, and whether these are reached. It can also be used to broaden and deepen the ongoing evaluation of the related actors' perceptions of the innovation project and its intended results (Vedung 2009; 2017).

6. Conclusion

An intervention theory has been created, interpreted and structured for the innovation project based on documents and interviews. The intervention theory should be further refined, and discussed with the project management and the steering committee. Certain aspects of the project may be more important to evaluate than others, and the intervention theory can support the selection of aspects for consideration. The intervention theory can be used to evaluate causes and effects, and whether the innovation project is reaching its intended final achievements and results.

I would like to thank the individuals who participated in interviews and meetings for their valuable contributions. I also want to thank INNO18 and the European Union structural funds, Tillväxtverket, for the opportunity to be the ongoing evaluator of the project INNO18. However, this research was conducted outside the funding from the European Union structural funds.

References

  • Bienkowska, D., Norrman, C. and Nählinder, J. (2010), “Research, facilitate, evaluate – the role of ongoing evaluation in triple helix projects”, In Book of Abstracts, October, 33.
  • Brulin, G. and Svensson, L. (2012), Managing Sustainable Development Programmes: A Learning Approach to Change, London, UK: Gower Publishing.
  • Chen, H. T. (2016), “Interfacing Theories of Program with Theories of Evaluation for Advancing Evaluation Practice: Reductionism, Systems Thinking, and Pragmatic Synthesis”, Evaluation and Program Planning, 59, December, 109-118.
  • Donaldson, S. I. (2012), Program Theory-Driven Evaluation Science: Strategies and Applications, Taylor & Francis Group.
  • Elo, S. and Kyngäs, H. (2008), “The qualitative content analysis process”, Journal of Advanced Nursing, 62(1), 107-115.
  • Funnell, S. C. and Rogers, P. J. (2011), Purposeful Program Theory: Effective Use of Theories of Change and Logic Models, John Wiley & Sons.
  • Ghaye, T., Melander‐Wikman, A., Kisare, M., Chambers, P., Bergmark, U., Kostenius, C. and Lillyman, S. (2008), “Participatory and appreciative action and reflection (PAAR) – democratizing reflective practices”, Reflective Practice, 9(4), 361-397.
  • Graneheim, U. H. and Lundman, B. (2004), “Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness”, Nurse Education Today, 24(2), 105-112.
  • Hox, J. J. and Boeije, H. R. (2005), “Data collection, primary versus secondary”, Encyclopedia of Social Measurement, 1.
  • Mertens, D. M. and Wilson, A. T. (2012), Program Evaluation Theory and Practice: A Comprehensive Guide, Guilford Press.
  • Sandberg, B. and Faugert, S. (2016), Perspektiv på utvärdering, Studentlitteratur.
  • Stirman, S., Kimberly, J., Cook, N., Calloway, A., Castro, F. and Charns, M. (2012), “The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research”, Implementation Science, 7(1), 17.
  • Tillväxtverket (2016), Operativt program inom målet investering för sysselsättning och tillväxt, Commission’s decision number: C(2016)169.
  • Vedung, E. (2009), Utvärdering i politik och förvaltning, Studentlitteratur.
  • Vedung, E. (2016), Implementering i politik och förvaltning, Studentlitteratur.
  • Vedung, E. (2017), Public Policy and Program Evaluation, Routledge.