Evaluation of technological production in agricultural sciences: a new certification scheme in Uruguay

The evaluation of technological production in agricultural sciences presents specific challenges. Unlike scientific publications, for which standardized evaluation criteria exist, technological developments require a more multidimensional and situated approach. This article analyzes a technology certification scheme developed by the National Institute of Agricultural Research (INIA, by its Spanish acronym) in Uruguay, a process that aims to validate developments from the perspective of their potential users. Based on a literature review and interviews with participants, we examined the design of the process and its first implementation round in 2019. The results show the innovative nature of the process at both national and regional levels. At the same time, we highlight the importance of incorporating a variety of stakeholders and of prioritizing feedback and learning over bureaucratic control. Finally, we recommend linking this process with analogous instances that may exist in other institutions within the local science system.


Introduction
The field of evaluation of technological products and processes is still under development. Unlike scientific production, for which there are indicators with some degree of global consensus, the evaluation parameters for technological production remain less systematized (1)(2). Although the matter is still strongly debated (3)(4), the evaluation of scientific research has achieved a degree of standardization based on bibliometric indexes compiled from different databases (Scopus, Web of Science, Google Scholar, etc.), which are incorporated into peer evaluation processes. The evaluation of technological production, on the other hand, is more heterogeneous, and there is less international standardization of procedures. At the global level, patents are the most common indicator; for agronomic research, the registration of plant varieties based on UPOV standards (5)(6)(7) should also be mentioned. However, not all technological products generated by agronomic research institutes can be patented or registered. In the case of processes, formal protection mechanisms are even more diffuse. The local character of technological processes requires a more situated point of view (8).
On the other hand, technology assessment involves more stakeholders, since there is greater diversity in the aspects to analyze: usefulness, novelty, and applicability to end-users must be considered in addition to technical soundness. Nor is it easy to determine acceptable levels along each of these dimensions. How novel, useful, or applicable does a product have to be to be considered acceptable? Do these thresholds depend on geographic, economic, or regulatory issues? How is the benchmark chosen against which the degree of novelty or improvement is measured? How is the scientific knowledge underlying the technological proposal verified?
This article presents and discusses a new scheme for the evaluation of technological production promoted at the National Institute of Agricultural Research (INIA, by its Spanish acronym) in Uruguay. This institution implemented a novel mechanism based on the concept of "certification of technologies" that includes the perspective of potential users of the developments. One of the main aspects to address was establishing the dimensions and standards required for certification, which in turn required a definition of certifiable technology tied to the dynamics of institutional work.
This study analyzes the general characteristics of the process in the light of the specialized literature, gathers the vision of the external members who participated during 2019, and illustrates the modifications made to the process based on the learnings of the first implementation.

Material and methods
The study was based on documentary analysis and semi-structured interviews. First, a survey of international experiences in technology certification in agricultural sciences was carried out. To this end, agricultural research agencies from different parts of the world, linked to INIA through cooperation networks, were contacted and information was requested from the contact person at each institution. Second, the regulations and operational processes developed by INIA for the technology certification scheme were analyzed. Third, interviews were conducted with external participants of the certification committees, in order to collect the views of potential technology users on the process. Eight interviews were conducted with those who were invited in 2019, covering all subject areas.
The study was part of a broader project on evaluation in the agricultural sector that included 19 other interviews with different actors involved in the evaluation of agricultural research (researchers, members of evaluation committees, and staff). The interviews were systematized according to the different aspects of the process. The categories used for this systematization were constructed inductively, grounded in the most salient aspects mentioned by the interviewees.

The evaluation of technological production in agricultural sciences
The systematization carried out by Douglas Horton stands out in the scholarly literature on the evaluation of agricultural research (9). This author was surprised by the low penetration of program evaluation methods within agricultural research in developing countries. He pointed out that the prominence of economic evaluation (both ex ante and ex post), based on the premises of the neoclassical model, had displaced other forms of evaluation that could illuminate equally important aspects of the generation of new knowledge, techniques, and devices. In addition to economic evaluation, there are other methodologies, such as peer review, bibliometrics, expert committees, environmental impact assessment, and participatory assessment. Horton identifies peer evaluation, expert evaluation, and economic evaluation as the main mechanisms, and the other techniques as less frequent.
The importance of bibliometrics and participatory methods has greatly increased since the publication of Horton's work (9). Progress made in participatory impact assessment methodologies is particularly noteworthy. Among them, we can highlight participatory impact pathways analysis (PIPA), developed among others by the CGIAR consortium (10)(11), and other related approaches, such as ImpresS or ASIRPA (12)(13). Impact assessment is not the only type of evaluation exercise carried out on agricultural research. Horton and Mackay (14) propose a classification according to the different stages of the research process (Figure 1). These include needs diagnosis (stage 1), priority setting through strategic planning (2), evaluation of research proposals (3), research monitoring (4), evaluation of final project reports (5), evaluation of research outputs (6), and impact or adoption (7). The authors also propose two related evaluation activities: an eighth stage of institutional evaluation, in which the results are seen in a broader context, and a ninth stage of evaluation of the individual performance of researchers (9)(14).

Figure 1. Evaluation instances in the agricultural research process
Source: Adapted from Horton and Mackay (14)

On the other hand, Molas-Gallart (15) points out that evaluation usually serves three purposes that may conflict: the distribution of funding, bureaucratic control, and learning to improve future activities. Horton and Mackay (14) consider it a mistake to assume that the same evaluation studies can serve very diverse interests. However, they recommend including a broad set of methodologies and frameworks and not letting accountability or bureaucratic control displace the importance of assessment as a learning process.
The "co-innovation" framework developed by AgResearch in New Zealand deserves special mention. It is a scheme that has been developed both conceptually and empirically and that assigns great importance to external actors and potential users. Moreover, this framework has already been applied in the context of family livestock production in Uruguay (16).
Co-innovation comprises "the process of jointly developing new or different solutions to complex problems through multi-stakeholder research processes, and keeping these participatory processes active throughout the research" (17) . It is proposed as a comprehensive framework to engage external actors in research, in order to maximize impact and adoption.
Boyce and others (17) argue that this approach is particularly suitable for complex problems whose challenges go beyond a particular technological solution and include technical, social, cultural, and economic aspects. Technology transfer would only be suitable for simple problems, whose solutions can be developed by researchers and technicians and then transferred. More complex is the situation where end-users must adapt the available solution and work together with developers. Co-innovation involves the joint and collaborative development of solutions from the early design stages.
In co-innovation, participation is implemented at the different stages of the process, and the aim is that this participation is not merely formal or bureaucratic but remains active, or alive, throughout the cycle. The stakeholders to include may comprise policy-makers, representatives of industry and the community, non-governmental organizations, aboriginal groups, and other groups that may be involved in the research as partners or relevant actors.
The innovation process can be broken down into three blocks. First comes co-design: the formulation of the research questions and desired results, as well as the joint definition of the work plan and its stages. Second, co-development is the design of an agreed evaluation and follow-up framework, which allows monitoring and readjusting objectives and actions based on the preliminary results obtained. Finally, implementation and co-innovation itself take place.
The co-innovation framework developers highlight five core aspects of the proposal (18). First, the importance of involving all relevant partners and actors, in order to have a plural and shared understanding of the problem from the beginning of the research. Second, they emphasize the importance of choosing a suitable focus within the problem. The co-innovation framework puts the problem center stage (before the technology or the end-user). The work focus should be defined at the start, but meetings should also take place throughout the process to validate that the focus has been maintained. Third, the work team must be adequate: not only technical skills matter, but also the communicative and collaborative skills that strengthen teams' ability to co-innovate. People are needed who have a broad vision of the system and can act as brokers or translators among researchers, partners, and relevant stakeholders. The fourth point refers to the importance of early and continuous communication of results. This helps to compare them with users' knowledge and to identify new questions; it also serves to check that the results are useful to users and to keep relevant partners and actors involved in the process. Finally, they recommend putting a learning cycle into practice during the project to maintain the focus on action, adapt to changing circumstances, and take advantage of new opportunities that arise along the way. For this, they recommend including monitoring and evaluation activities from the beginning, with participatory processes focused on reflection and learning.
Turner and others (19) state that there are systemic factors, blocking mechanisms, and institutional logics that make it difficult to implement the co-innovation model. In the case of New Zealand, they highlight that the financing model of public research institutions (Crown Research Institutes) such as AgResearch is based on the overheads obtained when they are awarded public R&D funding. Technological activities, having a more uncertain return on investment, are often not financially attractive to institutions like AgResearch, which need the steadier flow of funds that overheads guarantee.
On the other hand, since the 1990s there has been a laissez-faire policy regarding innovation, treated as a phenomenon that must arise spontaneously from the market, and active State involvement in these processes is not regarded positively. In addition, private sector actors, especially SMEs, are often unable to make the risky economic bets that engaging in innovative processes may require.
Finally, it is worth mentioning the Good Agricultural Practices (GAP) approach, a set of tools to promote environmental sustainability. According to Hobbs (20), GAP can be seen as attempts to improve agricultural sustainability on various fronts, including the protection of environmental and natural resources, the improvement of quality, and food security. In different parts of Latin America, certification models associated with GAP are being implemented successfully (21)(22). This approach can provide useful perspectives for establishing criteria to evaluate technological production that take the sustainability dimension into account.

INIA's technology certification process
The National Institute of Agricultural Research (INIA) of Uruguay was created in 1989 and is made up of a network of five experimental stations located in different regions of the country and a national management office in Montevideo. According to its founding law, its financing comes from a tax on sales of agricultural goods and an annual government contribution that must be at least equal to the amount collected by the tax.
In 2016, the development of a new system of key performance indicators (KPIs) for institutional management began. In this context, INIA started exploring ways to generate adequate indicators to assess its technological production, and promoted consultations with national and international academic experts in the field of science and technology policy. Based on these exchanges, a technology certification system was designed and implemented in 2018. This process builds, in turn, on the prior development of an INIA product catalog. Within the catalog, "type 3" products are technological products made available to end-users (agricultural producers, technicians, decision-makers, and policy-makers).
The Technology Certification Process (Procetec) at INIA was launched in 2018 and focused on the point of view of potential users. The process seeks to identify the potential of technologies at an early development stage, as well as to systematize all related relevant information: state of the art, added value of the proposed technology, environmental and social risks, potential stakeholders, etc. This systematized information is of great value both for establishing communication and technology transfer strategies, as well as for establishing impact evaluation processes (after a certain time) in the production system and society. For this, a clear and documented definition of the product and technological process was required.
The objective of Procetec, therefore, was the creation of a reliable, systematic, independent, and technically sound system that gives priority and visibility to the technologies produced by the institution's researchers, thus recognizing technological production within the careers of INIA researchers in Uruguay's scientific and technological system.
In more general terms, INIA intends that this process, beyond its incorporation into the recognition and evaluation processes of its researchers and of the institution, contributes to innovation in the agri-food sector by applying the generated knowledge and technologies to solve problems, take advantage of opportunities, and add value (economic, environmental, and social). Although at this stage the process is limited to technologies developed with INIA's participation, joint initiatives to certify the production of other actors in the scientific system may be developed in the future.

The methodology developed for technology certification
The first step was to articulate an INIA definition of technology. This definition would shape the evaluation instances within Procetec, and it also fulfilled the broader function of guiding researchers regarding the type of technological production encouraged by institutional management. It was agreed that four dimensions must be considered for a technology to be eligible for certification.

The certification process is applied in the early stages of technology development, and INIA aims for these technologies to become technological innovations (23). The notion of innovation is broad and can be defined as "the implementation of a new or significantly improved product (good or service) or process, a new marketing method, or a new organizational method in business practices, workplace organization or external relations" (24).
In this regard, Leeuwis and Aarts (25) define innovation as the successful combination of new technological devices (hardware), new knowledge and ways of thinking (software), and new forms of institution and organization (orgware). By integrating recognized actors from the technological, productive, commercial, and public policy fields, the certification process contributes to creating more favorable conditions for innovation: it focuses not only on technological devices but also on scientific knowledge and on the knowledge held by those actors, and it lays the foundations for possible institutional networks, instruments, and incentives that can be created and promoted to improve the application and, above all, the dissemination of technologies to potential beneficiaries.
This certification process targets technologies at the stage of "experimental development", arising from both oriented fundamental research and applied research, carried out by INIA alone or in networks with other scientific and productive actors (26)(27). The Technology Readiness Level (TRL) scale developed by NASA and the US Department of Energy was used as a guide (28)(29); it has been applied to the agricultural field, for example, in the Scaling Readiness framework of the CGIAR-RTB (30). The technologies to be included in the process should be located at level 6 of the TRL scale: that is, they have passed the laboratory proof of concept, INIA's field experiments, or co-innovation on a few farms, but have not yet been scaled massively or to commercial farms.
In cases where technologies have been developed within the framework of collective processes with other actors, this must be specified in the certification and communication of the technologies.
The evaluation process by external actors is organized into four stages (31) (Figure 2). First, the team leader must prepare a descriptive booklet based on a pre-established format, describing the technology's characteristics. Second, the application must be endorsed by the coordinating bodies of the respective INIA system. Third, an eligibility committee verifies that the applications are complete, that the technologies are at the appropriate stage of development, and that the necessary information is available for the process. Finally, a certification committee of five members (three of them external to INIA) is formed, which evaluates whether each of the proposed technologies presents the attributes established in the definition of technology. The external actors should be renowned in the field of the evaluated technologies and represent the perspectives of: (i) a leading private producer or technician; (ii) an entrepreneur or leader with an agribusiness profile; and (iii) an expert in the area of public policy.

The external members' perspective
In the systematization of the interviews, the main aspects mentioned were identified within each dimension, and the number of mentions of each of them was counted.
Formation of the committee. In general terms, interviewees agree that the work environment within the committee was good and that the people selected had relevant expertise to contribute. However, it became clear that, given the variety of topics in some committees (especially in livestock production), not everyone could have complete knowledge of each topic, forcing them to do some prior research or to give opinions based on common sense. It was noted that the diversity of profiles present could have partially mitigated this difficulty. A deficit pointed out by three participants was the lack of business representatives with an innovative profile; in their opinion, the summoned representatives of the productive sector also had a technical profile.
As for the call, more than one interviewee felt that the criteria by which participants had been chosen were not clear or transparent, giving rise to some suspicion ("why were you invited?"). Given that some members belong to the private sector and have natural economic interests in the products, conflicts of interest arose in some cases. While these conflicts were disclosed, those involved refrained from passing judgment, and no formal record of the situation was kept.
Quality of the information. The vast majority of interviewees referred to cases where information was deficient or incomplete. This affected decision-making, and consulting program managers in situ did not seem sufficient to solve the problem.
It should be noted that the interviewees highlighted the format of the file and the evaluation grid as positive. In this regard, they mentioned that the problems lay not in the format but in how it had been completed: when the files were properly filled in, the evaluators were able to do their job perfectly well with the information provided.
Work dynamics. Regarding this aspect, the interviewees mostly agreed that this was an experimental application of a new process. They perceived that the methodology was not yet consolidated and would be corrected over subsequent editions. While some interviewees pointed out that they would have no problem talking to developers "face to face", in general it was appreciated that there was an instance in which external members could deliberate without the presence of INIA staff.
On the other hand, three interviewees indicated that the time given to analyze the material before the meeting had been insufficient. As some files were incomplete, some participants informally consulted other experts under very tight deadlines.
The difficulty of attributing the credit for a technology to INIA also came up frequently in the interviews. Even when interviewees agreed that a technology should be certified, it did not seem appropriate for it to be certified as "INIA technology", since the product or process had received contributions from other institutions or represented an accumulation of knowledge from various sources over time. For this reason, collaborations were explicitly indicated in some of the evaluation reports (in the fruit and vegetable committee, for example).
Finally, a few participants expressed discomfort as to whether the process merely seeks to "sanctify" INIA's production, or to have them "put the signature on it." All of them, however, stressed that they were free to discuss and able to make decisions autonomously.
Value for the institution. The vast majority of interviewees stated that the process, though still immature and experimental, is very good and useful for INIA. They found it very positive and healthy that the institution opens itself to producers and to all recipients of its work. This gives INIA value to society and disseminates the work of its technicians and researchers; it is a kind of "quality control." Furthermore, they pointed out that the participants themselves become INIA representatives by disseminating the technologies they learned about in the certification committee meetings.

Process review and lessons learned
The results of the first implementation round, collected through the interviews together with the perspective of the external members, were presented by the work team at an ad hoc workshop held in December 2019 in Montevideo. This instance was useful to assess the work and to discuss aspects to improve for future implementations. The changes were embodied in a new version of the work protocol (see supplementary material). Among the main modifications are greater precision in the definitions in the application form and in the required stage of development of the technologies at the time of application. INIA's regional committees were given greater participation in the certification committee, in order to integrate the process with other evaluation and planning instances. The different profiles of the evaluating members were also discussed, and it was agreed to incorporate a scientific-technological expert into the committee.
On the other hand, the diversity of actors within agricultural production (primary producers with a business vision, family farmers, etc.) poses a challenge for representation. In this regard, work will be done to ensure the presence of primary producers whose profile is most relevant for the type of technology under study.
Furthermore, the timing of the evaluation of proposals was adjusted and the control over the information was reinforced, so that evaluators have all the elements of judgment needed to make decisions. A more precise follow-up of the schedule is planned, allowing corrections and modifications to be requested without extensions that slow down the process excessively. Finally, the creation of a new web version of the certified product catalog was proposed.
A second edition of the process will be carried out in 2020, with this new version of the protocol.

Discussion
If we analyze INIA's certification process in light of the stages outlined by Horton and Mackay (see Figure 1), we must place it in stage 6 of the cycle: the evaluation of technological products derived from a line of research. Regarding methodology, it is a hybrid between what is known as "external review by experts" and "participatory methods."
INIA's certification methodology cannot be assimilated to "peer review", since that would involve other members of the academic community judging the work of their colleagues. In certification, the differential value of the process is precisely that those summoned are actors well placed to discuss the potential for adoption and commercialization of the product, not its academic value.
Expert panels do not usually involve users' points of view, but rather the view of international experts in a subject. In this regard, users' inclusion brings Procetec closer to the work with local producers, which characterizes participatory methods. However, INIA's work methodology through meetings, minutes, and recommendations is more related to the expert review method than to events more directly oriented to public participation.
An important point to highlight concerns the function of the process. We had pointed out, following Molas-Gallart (15), that evaluation can be thought of as a process in which three functions converge: bureaucratic control, learning, and the distribution of funds. In INIA's case, there is no distribution of funds directly associated with certification. The bureaucratic control function still prevails over learning, and it would be important to modify some aspects of the process to encourage learning over the merely evaluative act of approving or rejecting a product. Unlike other evaluation processes, here bureaucratic control is not about whether the researcher has carried out the required work (other evaluation mechanisms exist for this), but is rather linked to the potential of the process to showcase institutional management achievements. There is a significant risk that Procetec remains a merely formal verification of proposals aimed at showing the fulfillment of institutional objectives. The underdevelopment of the learning aspects is evidenced in the limited feedback towards developers and the weak integration with other instances of the cycle outlined in Figure 1.
Therefore, it is necessary to think thoroughly about the integration of the certification process with other instances of evaluation of agricultural research. Product evaluation is just one instance in a larger evaluation cycle: the process begins with the needs assessment, continues with the prioritization of topics and the allocation of funding, moves on to the evaluation of academic products and technological results, and concludes with the evaluation of the adoption of the generated products and processes. This broader vision is also found in the AgResearch co-innovation framework and in other current theoretical frameworks, such as responsible innovation in the European Union and the open science paradigm. In this regard, although INIA has mechanisms of interaction with external actors to identify needs and priorities (Institutional Strategic Plan (32)) and instances of program monitoring (through the Regional Advisory Councils and INIA's participation in innovation consortia), the link between these instances and the certification process is not direct.
Finally, it is key to highlight that the co-innovation literature emphasizes keeping external participation "alive". Evaluation processes tend towards bureaucratization, which must be avoided; this happened, for example, in the Argentinian experience of technological and social development projects (33). It is not simply a matter of external actors giving a thumbs-up or thumbs-down to certification, but of their committing to think seriously about the usefulness of the products and doing their best to provide information that improves INIA's work. To this end, it is important to design work dynamics that guarantee an open work environment, and flexible methodologies focused on learning rather than on the bureaucratic-institutional sense of the initiative.

Final considerations
This study aimed to analyze the conception and implementation of INIA's new technology certification process, in light of the participants' points of view and the specialized literature on the subject. The study process was interactive and in permanent consultation with the institute's authorities, and ended with the presentation of the results and the integration of recommendations for new editions.
The research evidenced that there is no "handbook" on how to carry out this type of evaluation. It is a field under construction at a global level, where experimenting with new methodologies and approaches is desirable and necessary. Proposals must adapt to the disciplinary fields and the institutional and academic cultures of each place.
INIA's experience stands out for the value given to external participants, who are responsible for deciding which technologies are awarded certification. This is especially important in a context where evaluation systems can easily turn inward and dilute potential users' points of view in bureaucratic and institutional frameworks.
On the other hand, it is also worth noting the authorities' openness to reviewing this process, considering it an open and perfectible construction.
Regarding possible future implementations, in terms of the formation of the committees it is important to guarantee a plurality of user profiles (with technical, business, commercial, public policy, and scientific-technological references), as well as transparency in the selection of members and in the management of conflicts of interest. It was also noted that the cases reaching the evaluation committee should correctly pass all the previous filters (both in terms of the quality of the information provided and the stage of technology development). These suggestions were included in the new version of the protocol, although how they will be integrated into the new evaluation instances is yet to be defined.
Another aspect to consider is the possibility of linking the INIA certification process to GAP. A perspective linked to environmental sustainability could be incorporated more systematically into Procetec. So far, the evaluation guides used in the process consider the environmental perspective only superficially: they merely ask whether there are environmental risks associated with the use of the technology. The subject deserves a more detailed analysis. Certified technologies could be required to promote more sustainable practices and to be compatible with agricultural production under the GAP framework. It will also be necessary to find effective pathways for interdisciplinary work, allowing the inclusion of all perspectives and knowledge associated with the problem.
In more general terms, the need to give the learning function clearer priority over those of control and certification stands out. To this end, it is important to strengthen the feedback instances between evaluators and evaluated, providing space to incorporate the comments received into new stages of technology development. This challenge is not exclusive to the INIA process but is characteristic of science and technology evaluation systems throughout Latin America. It would also be important to integrate the certification scheme with other instances of the agricultural research cycle, linked, among others, to agenda setting, funding allocation, publication in specialized journals, the hiring and promotion of researchers, and impact evaluations carried out after the adoption of a technology.
Finally, it is important to continue working on the interface with other institutions. Inter-institutional certifying bodies, or a plurality of mutually recognized processes, could be developed. In this way, the scheme could be integrated into a broader discussion on the evaluation of technological production in other institutions of the Uruguayan STI landscape (the university, the national system of researchers, public research institutes). INIA's experience may also be of interest to other agricultural research organizations in the region that wish to give priority to their technological production and promote a greater plurality of profiles and career paths for researchers.

Author contribution statement
MSi, JMP and FV designed the study. FV conducted the interviews. CN, FN, JL and MSa prepared the tables and figures and contributed substantially to the interpretation of results. FV wrote the manuscript with input from all authors.