EVALUATING OPERA ARTISTIC CO-CREATION IN TRACTION: MAIN INSTRUMENTS AND MID-PROCESS EVALUATION RESULTS

 

LA EVALUACIÓN DE LA COCREACIÓN ARTÍSTICA DE ÓPERA EN TRACTION: PRINCIPALES INSTRUMENTOS Y RESULTADOS DE LA EVALUACIÓN INTERMEDIA

 

 

Anna Matamala

Universidad Autónoma de Barcelona

 

DOI: 10.5281/zenodo.7558571

………………………….

Recibido: (12 10 2022)

Aceptado: (27 12 2022)

Publicado: (31 12 2022)

…………………………

 


Cómo citar este artículo

Matamala, Anna. (2022). Evaluating artistic co-creation in Traction: Main instruments and mid-process evaluation results. ASRI. Arte y Sociedad. Revista de investigación en Arte y Humanidades Digitales, (22), 1-17. Recuperado de http://www.revistaasri.com/article/view/5130

 



Resumen

El artículo presenta el modelo de evaluación desarrollado en el proyecto europeo Traction para evaluar la cocreación de ópera como proceso y como resultado. Ofrece una visión general del proyecto y describe las distintas óperas que se han creado, así como los resultados principales de una evaluación intermedia.

 

Palabras clave

Evaluación, cocreación artística, arte participativo, arte comunitario, ópera.

 

Abstract

This article presents the evaluation model developed in the European project Traction to assess opera co-creation both as a process and as an output. It provides an overview of the project and describes the different trials developed as part of it, together with the main results of the mid-process evaluation.

 

Keywords

Evaluation, artistic co-creation, participatory art, community art, opera.

 


 

 

 

1.     Introduction

Artistic co-creation can be understood as the interaction of professional and non-professional artists in participatory art (Matarasso, 2021, p. 32). Participatory art is rooted in community art, which Matarasso (2019, p. 49) defines as “the creation of art as a human right, by professional and non-professional artists, co-operating as equals, for purposes and to standards they set together, and whose processes, products and outcomes cannot be known in advance”. Traction is a European project which has embarked on opera co-creation at three diverse locations: the Raval neighbourhood in Barcelona, a prison in Portugal, and different communities in Ireland. Evaluation is at the core of the project, hence different instruments to assess opera co-creation had to be developed. The main aim of this article is to present such instruments and, to a lesser extent, show their implementation in a mid-process evaluation. Although general guidance on evaluation for community art exists (Angus, 2002; Keating, 2002; Arts Council of Northern Ireland, 2004; Davies, 2016), our goal is to present the specific Traction approach so that cultural stakeholders can find a detailed account of the instruments used and can be inspired to develop their own evaluation processes.

 

The article begins with an overview of the Traction project and of the different trials that are being developed. It then provides an overview of how evaluation has been approached in other community art projects. Next, it presents the rationale behind the co-creation evaluation in Traction and how an initial map of indicators was used as its foundation. The following section provides all the details about the tools developed, and the article then summarises the main findings when applying these tools in the so-called mid-process evaluation, performed in October 2021.

 

 

2.     Traction: An overview

Traction is a project funded by the European Commission, running from 2019 to 2022, in which three trials are being developed. First, the Liceu opera house in Barcelona is involving people from the Raval neighbourhood to co-create a community opera. These participants include persons with disabilities, migrants, former sex workers, and residents of Raval. A showcase of the opera took place in March 2022, and the community opera premiered in October 2022, showing the result of different co-creation workshops dealing with the branding of the opera, the costumes, and the choir performance. Secondly, the Sociedade Artística Musical dos Pousos (SAMP), an independent music school in Leiria, is co-creating an opera with young inmates from the Leiria youth prison. A new opera has been written and composed by professional artists with inmates, with the collaboration of relatives and staff members, and was premiered in June 2022. Some preliminary performances took place in June 2021. Finally, the Irish National Opera (INO) is leading the third trial, which focuses on developing the virtual reality community opera “Out of the Ordinary”. Irish speakers from the island of Inis Meáin and teenagers and adults from areas closer to Dublin are involved in the process alongside professional artists.

 

All three trials share the fact that they involve both professional and non-professional artists, hence the term “opera co-creation”. However, as acknowledged by Matarasso (2021), co-creation in Traction takes different forms and presents a whole spectrum, with less professional control at one end (for instance, at SAMP) and more professional control at the other (for instance, at Liceu).

 

In the processes of co-creation, two main Traction technologies are being developed and used, namely: a) Co-creation Space, a tool for asynchronous communication during co-creation activities. It is web-based and facilitates collaboration among professionals and non-professionals, who can share content and select topics of interest to follow. It has been developed using a user-centric methodology involving user tests and an open pilot (Röggla et al., 2022); b) Co-creation Stage, a web-based tool to connect multiple stages in different locations in real time, which has been used in the trials in Portugal and Barcelona to connect the main stage to singers in another location. Additionally, there are a series of tools related to virtual reality which have a direct impact on the opera co-creation process in Ireland.

 

Traction as a project rests on three main axes: technological development, opera co-creation—including both the process and the final result—and evaluation. In terms of evaluation, the project aims at assessing three main aspects: the technologies developed in the project, by means of user tests; the process of co-creation and its output; and the social impact of the whole experience. These three evaluation axes are closely interwoven and are developed following two main principles: the evaluation is iterative, because different evaluation rounds take place and results inform subsequent rounds; and the evaluation is open and adaptable because, even if a general framework is presented, the specificities of each trial need to be considered. In this article, the focus is put on co-creation both as a process and as an output, with an emphasis on the instruments developed. Additionally, some preliminary results are presented succinctly.

 

 

3.     Evaluating artistic co-creation

To define how co-creation in Traction should be evaluated, we drew on the partners’ experience, especially that of community art expert François Matarasso, and looked at the existing literature on the topic (Matamala and Soler-Vilageliu, 2022). Davies (2016) gives several examples from the Creative People and Places programme, using both traditional and creative methods. The examples include standardised scales to measure wellbeing and social return on investment evaluations, but also metrics such as Culture Counts. Culture Counts—also termed Quality Metrics—is an evaluation tool developed through empirical research which has been used in different countries: it triangulates assessments of self, peer, and public in a multidimensional system. The system includes 9 metrics for self, peer, and public (concept, presentation, distinctiveness, challenge, captivation, enthusiasm, local impact, relevance, and rigour) and 3 metrics for self and peer (originality, risk, and excellence), plus 31 participatory metrics (Shared Intelligence et al., 2017). In a previous publication, Knell and Whitaker (2016, p. 26) highlight authenticity, enjoyment, experimenting, friendship, intensity, and new people as the metrics that measure the quality of the participants’ experience. This evaluation framework has been criticised because “it represents a time-consuming and reductive proxy for artistic value that is open to political abuse” (Walmsley, 2019, p. 103).

 

Jarke et al. (2019) present an overview of co-creation projects that includes evaluation proposals, both formative and summative. Formative evaluation, the most relevant one for our purposes, considers the following indicators: mutual learning, empowerment, openness and diversity, involvement and ownership, and transparency and effectiveness. Moving beyond the indicators, Bossen, Dindler, and Iversen (2016, p. 159) highlight the need to promote participatory processes in evaluation “by engaging participants and stakeholders in conducting evaluations”.

 

Matarasso (2019, p. 51) suggests four elements to assess the artistic quality of the process from the participant’s point of view: experience: “[t]he extent to which people enjoy taking part. Is the process rewarding?”; authorship: the participants recognise themselves as authors of the artistic product; empowerment: “the extent that people gain control, within and beyond the project. Are they strengthened by the experience?”; and humanity: “[t]he extent that it produces kindness, solidarity and trust. Does everyone feel valued?”. Matarasso (2019) also proposes five elements to assess the final artistic product: craft, i.e. “the technical and artistic skill demonstrated by the work”; originality, i.e. “its relationship to the unique conditions of its creation”; ambition, i.e. “its aspiration, scale and openness”; resonance, i.e. “its relevance to what people are concerned about”; and feeling, i.e. “its non-rational effect and ability to linger in the mind”.

 

 

4.     Evaluating co-creation: The map of indicators as a starting point

As mentioned by Antonnen et al. (2016, p. 16), “no common list of performance indicators exists that is suitable for every project. Each project needs to design its own system to measure outcomes, processes and structures”. However, Traction aimed to find a list of indicators that could be used as a basis to start building the evaluation process.

 

Through a series of interviews with relevant stakeholders and an internal focus group, a map of indicators was proposed. This map (see Table 1) identifies a series of indicators linked to the process, to the artistic output, and to both, as relevant guiding elements to consider when evaluating co-creation. The focus was put very much on non-professional artists, i.e. the students, inmates, and Raval neighbours, among others, who co-create the artistic output alongside the professional artists. Some of these indicators are considered outcomes because they help evaluate the changes that are a consequence of the artistic co-creation. Other indicators, marked with an asterisk in Table 1, are considered outputs, meaning they help assess the work and activities generated by the project and gather more factual aspects (Matamala and Soler-Vilageliu, 2022).

 

 

 

 

Table 1. Map of co-creation indicators

As far as the quality of the artistic product is concerned, the indicator was broken down into different items, following Matarasso’s model described above: craft, originality, ambition, resonance, and feeling. All these indicators guided the mid-process evaluation, which took place in October 2021, and are also guiding the final evaluation in Traction, currently being finalised. They have proven useful as a framework to develop the evaluation instruments presented in the following section, but also as a framework to analyse the data obtained. For instance, when a semi-structured interview was designed, the indicators helped us draft the relevant questions. They also proved useful to establish the tags in the qualitative analysis software and to process and analyse the resulting interview transcripts.

 

5.     Evaluation instruments

Traction aimed to design a series of instruments that would facilitate gathering both quantitative and qualitative data on three different trials during a 3-year project. To this end, different instruments were developed, namely: a co-creation evaluation log, a participants’ attendance log, questionnaires and evaluation workshops with non-professional artists, interviews with professional and non-professional artists, an audience questionnaire, and experts’ assessment forms. These instruments, which were the result of collaborative work among partners, are described in detail in the next paragraphs.

 

5.1. Co-creation evaluation log: it is an online form to keep track of co-creation activities. It contains different fields which gather relevant data concerning the participant profile, their engagement, and the project evolution.

Table 2. Co-creation evaluation log

 

Question

Indicator it relates to

Activity code

Identification/monitoring data

Date of the co-creation activity

Trial: INO/LICEU/SAMP

Location of the activity: arts venue, community venue, neutral venue, online.

Number of participants according to the profile:

·       Number of professionals: artistic and creative team.

·       Number of professionals: technical and production team.

·       Number of professionals: Traction-related professionals.

·       Number of non-professionals: artistic and creative team.

·       Number of non-professionals: technical and production team.

·       Number of other participants (referring to any participant who enables the participation of non-professionals, such as psychologists, prison staff, foundations, and community associations).

Participants’ profile

Has the aim of the activity been fulfilled? Yes/No/Partially

Project evolution

Has the activity been engaging for participants? Yes, for all/For some/No, it was not.

 

By engaging we mean that they have participated actively in the co-creation process. They have offered suggestions, expressed interests and preferences.

Engagement

Add your personal observations on this activity.

 

Although this form only shows one line, you are expected to write as much as you like. This is a very relevant field for the evaluation, open to any type of comments. For instance, you may want to comment on the aim of the activity, on the co-creation and participation process, engagement, artistic value, impact, skills and capabilities, change, etc.

Different indicators

 

Whereas the first rows gather quantitative data and provide closed answers, the last field is an open one aiming to collect qualitative feedback on how the session was developed. It is kept open on purpose so that facilitators filling in this form at the end of the session can highlight the most relevant aspects in relation to the indicators identified earlier.

 

5.2. Participants’ attendance log: it is an online spreadsheet used to keep track of the participants (listed on the vertical axis with a code) who attended each of the sessions (listed on the horizontal axis, also with a code) and to map participation across co-creation activities.

 

Table 3. Participants’ attendance sample log

 

 

Each participant is assigned a code and their profile is identified. Their attendance is tracked, which allows us to see how many sessions participants attend and how attendance fluctuates. This information is then contrasted with qualitative data to find an explanation for the trends identified in the form.

 

5.3 Questionnaire to assess co-creation including the statements and questions presented in Table 4. For the statements, respondents must state their level of agreement on a 6-point Likert scale. In other cases, the field is open and participants can write as much as they deem necessary.

 

Table 4. Co-creation evaluation questionnaire

 

Statements

Indicator it relates to

1. I was actively involved in [the co-creation process/workshops/what is applicable in each trial].

Engagement

2. I was motivated by [the co-creation process/workshops/what is applicable in each trial].

3. I have gained a better understanding of other people’s ideas.

Mutual understanding

4. I have learnt from other people.

Learning

5. I have made new friends.

Relationships

6. I have enjoyed it.

Satisfaction

7. I would like to do it again.

8. I feel more confident about what I can achieve now.

Personal change: empowerment

9. I feel more interested in art now.

Personal change

10. Everyone involved contributed in a balanced way.

Balanced contributions

11. Everyone involved was respectful of each other’s ideas.

Mutual understanding

12. Taking part has changed some of my previous ideas.

Personal change

13. If so, in what way? (open field)

14. Taking part was good for my wellbeing.

15. If so, in what way? (open field)

16. Have you gained any skills? Select all that apply.

·       Creative art skills (composing music, creative writing, etc.)

·       Technical art skills (screen printing, photo editing, etc.)

·       Performing skills

·       Managing work skills

·       ICT skills (technology, computer, etc.)

·       Teamwork skills

·       Communication skills (speaking, writing, etc.)

·       Other skills (please specify)

Informal learning

17. What was the best thing about taking part?

General questions with an open field not directly related to a specific indicator

18. What was the worst thing about taking part?

19. What could we do better next time?

20. Is there anything else you want to tell us?

 

This questionnaire was built collaboratively with project partners and was designed to gather data linked to the different indicators through a combination of closed and open questions. A written questionnaire was considered inappropriate for the inmates in SAMP because forms are usually linked to administrative processes, which are often not viewed positively. Therefore, the questionnaire was adapted into an evaluation workshop. To achieve greater focus, the number of questions was reduced, and a group dynamic of trust was developed to facilitate participation. This shows the need to adapt the evaluation instruments to each specific context.

 

5.4. Semi-structured interviews with professional and non-professional artists to assess co-creation activities and, where relevant (questions 9-13), initial performances. The interview schedule followed is indicated in Tables 5 and 6, differentiating between non-professional and professional artists.

 

 

 

Table 5. Co-creation evaluation interview: mid-process (non-professionals)

 

Question

Indicator it relates to

1. Tell us how you heard about the project and why you wanted to take part.

Ice-breaking questions

2. Can you explain what you did in the workshops?

3. What did you enjoy most—and why?

Satisfaction, engagement, project evolution

4. What wasn’t so good?

5. How could it have been better?

6. How do you feel the group got on with each other?

Mutual understanding

7. Was everybody respectful?

8. Did you all have the chance to contribute?

Balanced contributions

9. Can you explain what you did in the performance?

Ice-breaking question

10. What did you enjoy most, and why?

 

Satisfaction and open questions related to many indicators

11. What wasn’t so good?

12. How could it have been better?

13. Were you happy to perform or see people like you performing?

14. What will you remember from this experience?

 

Learning

15. Have you gained any new skills (practical, relationship with people, etc.)?

16.  What do you think you’ve learnt from the experience?

17. Have these workshops changed some of your ideas, your interests, anything at all?

Personal change

18. Would you do it again?

Satisfaction

19. Is there anything important that we haven’t talked about? Is there anything else you want to add?

Final open question (any indicator)

 

Table 6. Co-creation evaluation interview: mid-process (professionals)

 

Question

Indicator it relates to

1. Please introduce yourself and tell us about your past experience—if any—of co-creation.

Ice-breaking question

2. Can you explain your role in the workshops?

3. What was most successful in the workshops?

Project evolution, any indicator

4. Was anything disappointing? If yes, what and why?

Project evolution, any indicator

5. What did you enjoy most and why?

Satisfaction

6. How do you feel the group got on with each other?

Mutual understanding

7. Was everybody respectful?

Mutual understanding

8. About the performance: What did you enjoy most and why?

Satisfaction and open questions related to many indicators

9. About the performance: What wasn’t so good?

10. About the performance: How could it have been better?

11. Would you attend this type of performance again?

12. Did you see any development in the participants’ skills or confidence? If yes, please explain.

Learning, personal change

13. And you? What do you think you’ve learned from the experience?

Learning

14. Have these workshops changed some of your ideas, your attitudes, anything at all?

Personal change

15. What will you remember from this experience?

Satisfaction, personal change

16. Would you do it again?

Satisfaction

17. What would you change in the future?

Satisfaction, project evolution

18. Is there anything important that we haven’t talked about? Is there anything else you want to add?

Final open question (any indicator)

 

The questions above are intended as guides that can be adapted depending on how each interview develops. Interviewers need to keep the map of indicators in mind as the evaluation framework but should be flexible enough to adapt during the interviews.

 

5.5. Audience questionnaire, addressed to audiences attending a performance, including the following questions (Table 7).

 

 

Table 7. Audience questionnaire

 

Question/statement

Indicator it relates to

1. How did you get here today? Taxi/Train/Car/Bus/Bike/Walk.

Audience profile

2. Roughly how long did it take you to get here? ___ minutes.

3. Do you have any connection with the performance?

·       No, I don’t.

·       I took part in the project.

·       I know someone who took part in the project.

·       I know someone who works at INO/LICEU/SAMP.

·       I have a professional connection with the project.

·       (only for Liceu) I am related to Raval neighbourhood.

4. How much do you agree or disagree with these statements? (6-point Likert scale):

 

4a. It was well made and performed.

Quality: craft

4b. It was different from anything I’ve seen before.

Quality: originality

4c. It was about things that really matter to me.

Quality: resonance/feeling represented

4d. I felt involved in the performance.

Quality: feeling

5. Was there anything you particularly liked or disliked? Please say what, and why.

Satisfaction

6. Would you recommend this performance to a friend? No/Yes/Not sure.

7. Has the performance made you feel differently about anything? No/Yes/Not sure.

Personal change

8. If yes, please say how.

9. Did you see any live theatre or music performance in 2019 (before lockdown)? No/Yes/Not sure.

Audience profile

10. If yes, please say where.

11. Do you think technology played an important role in the performance? No/Yes/Not sure.

Technology

12. Please say why.

13. Finally, please add any other thoughts on your experience today.

Different indicators

14. Demographic information added at the end.

Audience profile

 

This questionnaire, distributed on paper or online, gathers key elements to identify the type of audience present and to assess the quality of the performance, as well as personal change and satisfaction. A field for open comments is also included.

 

5.6. Experts’ assessment form: it is a template to guide selected experts when writing a report assessing the quality of the performance. The suggested items are based on Matarasso’s model, plus technology, a central element in Traction.

 

Table 8. Experts’ assessment form: artistic quality

 

Indicator

Definition

Craft

It relates to the technical and artistic skill evident in the production and performance. How well was it made and executed?

Originality

It relates to the distinctiveness of the work, and the extent to which it reflects the particular context of its creation. How true does it seem to those who have created it?

Ambition

It relates to the aspiration, scale and openness of the work. Is it worth doing?

Resonance

It relates to the piece’s connection or relevance to the audience and its concerns. Does it speak to me?

Feeling

It relates to the non-rational effect of a piece and its ability to linger in the mind. Does it move me?

Technology

It refers to the use of Traction technology in the performance. What was the overall audio and video quality of the experience? Was it good enough for this performance? Did the technology help you feel engaged? (Only when Co-creation Stage is used) Do you think technology helped to connect people on stage with remote audiences?

Other comments

Please add any further thoughts about the performance or the project which have not been covered under the previous headings.

 

Around 4 experts are expected to be selected for each of the performances and they are asked to provide a report following the template indicated above.

 

5.7 Mid-process evaluation: A glimpse of the main findings

 

To illustrate how the previous evaluation instruments have been applied in Traction, some of the main facts and results of the mid-process evaluation will be presented next. Table 9 summarises the data gathered through the previous tools as of October 2021.

 

Table 9. Mid-process evaluation data (N/A: not applicable)

 

 

 | INO | LICEU | SAMP
Evaluation log | 54 sessions (12 workshops) | 15 sessions (1 workshop) | 66 sessions (1 workshop)
Participants | 86 participants | 29 participants | 82 participants
Interviews with | 5 professionals (individual) and 9 non-professionals (group) | 1 professional (individual) and 8 non-professionals (group) | 2 professionals (individual) and 4 non-professionals (group)
Questionnaire | 57 questionnaires | 10 questionnaires | Evaluation workshops
Audience questionnaire | N/A | N/A | 31 questionnaires
Experts’ assessment | N/A | N/A | 5 reports

 

To proceed with the analysis, and taking into account that different people were involved in the evaluation of the different trials, a shared protocol was developed. Language-specific information was translated into English. Quantitative data was analysed centrally by the evaluation coordinator using descriptive statistics (median and frequency tables), and qualitative data was analysed independently by each trial using the qualitative analysis tool Atlas.ti, following a shared protocol. Each trial produced a report for each of the interviews. With all this information, the evaluation coordinator analysed the data globally and produced a global mid-process report, which was discussed in a dedicated focus group with all partners. This discussion allowed the consortium to a) better contextualise and understand the evaluation, and b) identify possible improvements both in terms of co-creation processes and outputs and in terms of evaluation.

 

We present next an overview of the main preliminary results. It is not the aim of this paper to provide a thorough account of all the results because, as already mentioned, our focus in this article is on the evaluation instruments and not so much on the specific results. However, a summary of the main findings is presented to show how the previous instruments have been successfully implemented.

 

 

6.     INO preliminary results

Between the beginning of the project and October 2021, INO developed 12 workshops with a total of 54 sessions. The workshops took place in three communities: residents of the island of Inis Meáin, teenagers across rural Ireland, and adults living in Tallaght, a suburb to the south of Dublin. The workshops were led by different professional artists and focused on writing, visual design, and composition. The sessions were held online for the most part, due to the pandemic, and lasted less than 2 hours, although the last sessions, which took place face-to-face, lasted longer. A total of 86 participants were involved in these initial co-creation workshops. Non-professional artists generally attended 4 sessions, which correspond to a full workshop. No initial performances took place in the period under analysis.

The evaluation showed the diversity of participant profiles, with ages ranging from 14 to 71 and a wide array of occupations. Some specificities linked to each of the communities emerged in the evaluation: for instance, in the Inis Meáin community, the Irish language was considered a central element when developing co-creation processes. The attendance log showed some uneven participation, which was explained through the qualitative data in terms of community dynamics and the timing of the sessions. Table 10 summarises the main results of the questionnaire distributed to non-professional artists at the end of each workshop (median and frequency values on a 6-point Likert scale).

 

Table 10. Evaluation of the co-creation workshops: questionnaires (INO)

 

 

 

 

Statement | Median | 1-2 | 3-4 | 5-6 (frequency values on the 6-point scale)
1. I was actively involved in the workshops | 6 | 0% | 10% | 90%
2. I was motivated by the workshops | 6 | 0% | 10% | 90%
3. I have gained a better understanding of other people’s ideas | 6 | 0% | 7% | 93%
4. I have learnt from other people | 6 | 0% | 12% | 88%
5. I have made new friends | 4 | 16% | 37% | 47%
6. I have enjoyed it | 6 | 0% | 7% | 93%
7. I would like to do it again | 6 | 2% | 5% | 93%
8. I feel more confident about what I can achieve now | 6 | 2% | 17% | 91%
9. I feel more interested in art now | 6 | 2% | 17% | 91%
10. Everyone involved contributed in a balanced way | 6 | 5% | 16% | 79%
11. Everyone involved was respectful of each other’s ideas | 6 | 0% | 2% | 98%
12. Taking part has changed some of my previous ideas | 5 | 2% | 26% | 72%
13. Taking part was good for my wellbeing | 6 | 2% | 23% | 75%

 

The data gathered show extremely high engagement (90% feel actively involved and motivated) but also the need to clarify what participants can expect from the project and how they are expected to contribute. In fact, the qualitative data show some initial discomfort, probably due to the uncertainty of the project in times of COVID-19, but the positive aspect is that these initial issues are adequately addressed as the project evolves. Considering that the co-creation workshops are developed during a pandemic, technology plays a key role, with both challenges and opportunities. The co-creation process and the technological challenges allow participants to acquire new skills and enhance existing ones: 96.5% of participants report having acquired specific skills such as creative skills (77.2%), communication skills (59.6%), teamwork skills (31.6%), ICT skills (31.6%), technical skills (26.3%), managing work skills (29.8%), and performing skills (29.9%), among others. The interviews and questionnaires show a high degree of understanding and acceptance of others (93% select high values on the Likert scale), alongside a respectful attitude (98%, see Table 10). They also point to an increased awareness, enjoyment, interest and knowledge about the arts and opera (91% in the higher range), as well as a new way of looking at oneself and the world around (72%). The co-creation workshops are seen as an opportunity for interacting and sharing in times of COVID, which is positive for mental health. Co-creation Space is seen as a useful tool to create bonds and build relationships, as it allows continuous communication and sharing. Most participants show high satisfaction values both in the questionnaire (93% agree that they would like to do it again) and in the interviews, although one professional also expresses some frustration in the interviews, generally due to technical issues and the participants’ IT skills. Professionals consider that they have learnt in the process, reporting increased confidence and interpersonal skills alongside a greater awareness of online etiquette. Some interesting reflections arise in the discussions: some professionals ponder whether the workshops are actual co-creation activities, and others highlight the need to adequately address intellectual property rights management in co-creation. Overall, the best aspects of the INO co-creation process are, according to the participants, the active engagement and collaboration of all participants in a respectful environment, the relationships that were built, the central role of the facilitator, and the learning process that it entailed. The worst aspects relate to technical problems in remote connections and to the short length of the workshops. In this regard, participants ask for a higher number of sessions per workshop and suggest minor improvements in terms of communication and social networking activities.

 

 

7.     LICEU preliminary results

The LICEU mid-process evaluation focused on one workshop, with a total of 15 sessions, in which students from an arts school (Massana) and creatives from an occupational centre for persons with disabilities (Sínia) co-created the community opera poster. A total of 29 persons participated in the sessions, with most of them attending 14 or 15 sessions, which shows a high degree of commitment. All sessions analysed in the mid-process evaluation took place online due to the pandemic. Data gathered through the questionnaire are summarised in Table 11, which includes both the median and frequency data.

 

Table 11. Evaluation of the co-creation workshops: questionnaires (LICEU)

 

 

 

 

Statement | Median | 1-2 | 3-4 | 5-6 (frequency values on the 6-point scale)
1. I was actively involved in the workshops | 6 | 0% | 0% | 100%
2. I was motivated by the workshops | 6 | 0% | 20% | 80%
3. I have gained a better understanding of other people’s ideas | 5.5 | 0% | 30% | 70%
4. I have learnt from other people | 6 | 0% | 10% | 90%
5. I have made new friends | 5 | 10% | 20% | 70%
6. I have enjoyed it | 5.5 | 0% | 10% | 90%
7. I would like to do it again | 6 | 0% | 20% | 80%
8. I feel more confident about what I can achieve now | 6 | 10% | 0% | 90%
9. I feel more interested in art now | 5 | 11% | 33% | 56%
10. Everyone involved contributed in a balanced way | 4 | 30% | 30% | 40%
11. Everyone involved was respectful of each other’s ideas | 5.5 | 0% | 10% | 90%
12. Taking part has changed some of my previous ideas | 5.5 | 20% | 0% | 80%
13. Taking part was good for my wellbeing | 6 | 10% | 20% | 70%

 

Sínia creatives show high motivation from the very beginning and there is increasing engagement from the students (80-100% in the higher values of involvement and motivation), with a clear evolution in terms of balanced contributions, the item which receives the lowest values in the questionnaire. Data from the questionnaire and the interviews highlight the new learning achieved by everyone, derived from the fact that participants are working outside their usual circle and becoming acquainted with a different reality: young art students on the one hand and persons with disabilities on the other. When explicitly asked what skills they improved, non-professional artists select: managing work (90%), teamwork (90%), ICT (80%), communication (60%), and technical art skills (40%). Respect is always present (90% select high values on the scale), and bonds are established when they start working in smaller groups. Overall, satisfaction is high (80% agree that they would do it again), although during the interviews some participants are somewhat critical of the amount of work assigned to them and highlight the need for better coordination, better timing, and literally “no pandemic”. As in the previous case, managing expectations from the beginning proved challenging in a pandemic context and the project suffered some delays. However, it is seen by participants as a project in crescendo, and working together with others is identified as the best part of it. The least valued aspects mentioned by participants relate to the initial organisation, in which, as in the previous case, there was some uncertainty generally inherent to new projects, and to the impact of the pandemic, which compelled participants to work remotely. Participants suggest more agile communication as an improvement and also mention their willingness to use the Traction tools, which was not possible in the first phase of the project.

 

Errata: Table 11 published for the first time on Friday, Dec 30th, 2022 contained 2 errors that have been corrected at the author's request: Row 9 has been changed from 67 to 56; Row 12, a 0 has been changed to a 10 and an 80 to a 70.

Correction date: Tuesday, Dec 10th, 2023.

 

 

8.     SAMP preliminary results

Between the beginning of the project and July 2021, SAMP developed one co-creation workshop with a total of 66 sessions with inmates, which produced 4 initial performances at the Gulbenkian Foundation and at Leiria Prison. All these activities were the object of the mid-process evaluation. A total of 82 participants were involved, including 69 inmates who acted as non-professional artists. Inmates attended an average of 7.5 sessions, ranging from 1 to 25. Through the evaluation tools one can observe how participation was affected by external issues such as prison dynamics, the pandemic, inmate transfers, and visits, among other elements. All participants showed high motivation and satisfaction, with engagement increasing as families and soloists became involved in the co-creation process. Inmates report feeling valued, and a sense of trust, bonding and mutual respect grows as the project evolves. Participants acquire new skills and enhance existing ones, and there is a projection of these learnings into a future outside the prison. Non-professional artists are seen as the main agents of change, a change which takes place for both profiles: professionals change their views about the prison and non-professionals change their views about opera. In this regard, co-creation was initially seen as a way to get out of the cell, but as the project evolved a true appreciation of opera developed.

 

SAMP is the only trial that did some preliminary performances before the mid-process evaluation, both in Leiria and Lisbon. Whereas some inmates went to Lisbon, others connected remotely using Traction tools. Table 12 presents the results of the audience questionnaire.

 

 

 

Table 12. Evaluation of SAMP preliminary performances

 

 

 

 

Statement | Median | 1-2 | 3-4 | 5-6 (frequency values on the 6-point scale)
1. It was well made and performed | 5.5 | 0% | 6% | 94%
2. It was different from anything I've seen before | 5 | 6% | 26% | 68%
3. It was about things that really matter to me | 5.7 | 0% | 6% | 94%
4. I felt involved in the performance | 5.4 | 3% | 16% | 81%

Question | Yes | No | I don’t know
5. Would you recommend this performance to a friend? | 93% | 0% | 7%
6. Has the performance made you feel differently about anything? | 80% | 10% | 10%
7. Do you think technology played an important role in the performance? | 97% | 3% | 0%

 

 

The audience questionnaire shows that most respondents feel represented and involved, with qualitative replies showing an increased awareness of inequalities. Most respondents highlight an impactful moment in which an inmate reads to his mother. The nervousness of performing in front of the inmates’ families is also perceived. Audience members show high levels of satisfaction, with 93% stating they would recommend the opera to a friend. In terms of technology, respondents stress its potential once some initial problems were solved, especially in relation to remote interaction. In this regard, 97% of audience members consider that technology played a key role in the performance.

 

When looking at both the process and the output, working together is seen as the best part of the co-creation process, and inmates see it as having a positive impact on their future, alongside the sense of achievement of presenting a performance in front of their families. Some tensions and a lack of communication are seen as the least positive aspects. In this regard, the suggestions made relate to the willingness to participate more and to travel to Lisbon for the performances.

 

 

9.     Discussion

Opera is a total work of art in which many elements converge: music, orchestra and singing, text and libretto, staging, props and attrezzo, movement and dance. All these elements contribute to a global experience. To evaluate opera performance, Boerner (2004) proposed a multidimensional framework which includes two broad categories: a musical dimension and a stage dimension. The musical dimension refers to the quality of the orchestra, the quality of the chorus and the quality of solo voices as potential factors, and sound, tempo and rhythm as outcome factors. In the stage dimension, acting quality and staging quality (scenery, costumes, and so on) are considered potential factors, whereas action, place, time, figures, atmosphere, mood, and genre are seen as outcome factors. Boerner considers “fit” to be a central element, which is defined as the “congruity within a given performance”. This fit is developed in a three-level model: fit between the musical and stage dimensions, fit within the factors in a dimension, and fit within the potential factors in each dimension. This model is used by Boerner (2008) to suggest an evaluation questionnaire for opera performance by experts and non-experts. This fine-grained approach is not the one that has been adopted in Traction. The artistic quality of the output is not the central aspect, although it is never neglected. The ultimate function of opera co-creation is to have a social impact in a broad sense. Gillmore (2010) argues that “[w]orks of art have constitutive functions. To evaluate a work of art with reference to its constitutive function is one way to evaluate it as a work of art”. In this regard, the map of indicators has provided a guiding framework to establish how to assess this core function of community art.

 

The Traction mid-process analysis has focused mainly on the process and, to a lesser extent, on the output, as initial performances had only taken place in one of the trials in the period under analysis. Still, this focus on the process seems to be a distinctive aspect of community art evaluation: it is not only about the aesthetic value of the opera performance but also, and most importantly, about the social effect of both the process and the output, on the participants—both professionals and non-professionals—and on the audiences.

 

Another central aspect in Traction is that evaluation is not associated exclusively with the work of critics or connoisseurs (Lewandowska, 2021, p. 97), who act as “gatekeepers who establish the standards of quality in artistic fields”. It is not based only on audience evaluation either. Traction evaluation takes into account the views of participants in the process (professional and non-professional artists) and in the output (professionals, non-professional artists, and audiences), also including representatives from the institutions and funding agencies to obtain a wider perspective.

 

This focus on the communities is in line with one of the “changed priorities” identified at the Bergamo Opera Europa Conference 2021: “The importance of reprioritising the heart and soul of Opera: community focused theatre. One suggestion was to engage artistic companies to create community projects, telling stories about the people and contributing to the recognition of issues of diversity on stage”, as summarised by Fost, Roling, and Rooney on the Opera Europa website (https://opera-europa.org/news/lessons-bergamo), who add another priority: “The importance of giving artists a voice in our processes and programme across all spaces. Bringing artists on board with the community focus from the start will not only help the first point of making community a priority, but it will also help the artists feel a sense of ownership of the theatre”.

 

 

10.  Conclusions

The mid-process evaluation in Traction has shown that using a map of indicators as a starting point is a useful strategy to structure the evaluation. It provides a framework upon which to build the different evaluation instruments. However, our evaluation has also shown that flexibility and adaptability to the context are critical. For instance, written questionnaires may not be suitable for certain environments, hence alternative tools need to be designed, often applying creative research methods (Kara, 2015).

 

A second critical aspect is triangulating the data and contextualising it. Many factors can affect the co-creation process and output, which at the same time can be very diverse across trials. Hence, it is fundamental to understand the different trials in all their complexity and develop the evaluation taking this complexity into account. The same data can be assigned different values depending on the context. In this regard, gathering both qualitative and quantitative data provides a more thorough picture and allows for more detailed interpretations.

 

Our evaluation has shown that, despite the huge differences across the trials, there is common ground. Some basic principles guide all co-creation processes and can also guide an evaluation. For the evaluation to be successful, though, the entire team needs to be engaged in it and a clear sense of why an evaluation is needed must be shared. In this sense, a mid-process evaluation can be seen as an opportunity to assess how the project is progressing and how it can be improved, rather than merely as criticism or praise of what has been done.

 

Evaluation cannot be seen as an exclusively post hoc process to show funders the achievements of the project, but needs to be integrated in the whole process of co-creation in a seamless and useful way. Evaluation is relevant if it allows one to reflect on the past and has an impact on the future. To this end, continuous communication among the different agents involved in the co-creation process and in the evaluation needs to take place.

 

This article has presented a series of instruments used in the mid-process evaluation of Traction. The strength of these instruments is twofold: they are based on collaborative processes that have allowed us to identify the key indicators to assess co-creation, and they are simple and easy to implement. When developing them, we tried to strike a balance between gathering relevant data and avoiding overwhelming participants, so an effort was made to simplify the instruments as much as possible. These instruments are being used in the final evaluation in 2022 but have already proven useful in the mid-process evaluation reported in this article. Making the evaluation instruments public is central so that other co-creation processes can replicate them or develop their own, inspired by the Traction evaluation framework.

 

 

Acknowledgements

 

This research has received funding from the European Union Horizon 2020 research and innovation programme under grant agreement number 870610. The author is a member of TransMedia Catalonia, a research group funded by the Catalan government (2017SGR113).

 

11.  References

 

Angus, J. (2002). A review of evaluation in community-based art for health activity in the UK. HDA.

Antonnen, R., Ateca-Amestoy, V., Holopainen, K., Johansson, T., Jyrämä, K., Kiitsak-Prikk, K., Kuznetsova-Bogdanovitš, K., Luonlia, M., Kõlar, J.-M., Plaza, B., Pulk, K., Ranczakowska-Ljutjuk, A.M., Sassi, M. and Äyväri, A. (2016). Managing art projects with societal impact. Sibelius Academy Research Report Publications.

Arts Council of Northern Ireland (2004). Evaluation toolkit for the voluntary and community arts in Northern Ireland. Annabel Jackson Associates.

Boerner, S. (2004). Artistic quality in an opera company: toward the development of a concept. Nonprofit Management and Leadership, 14(4), 425-436.

Bossen, C., Dindler, C. and Sejer Iversen, O. (2016). Evaluation in participatory design: a literature survey. PDC ’16: Proceedings of the 14th Participatory Design Conference: Full papers, 1, 151-160.

Davies, S. (Ed.) (2016). Evaluation in participatory arts programmes. Creative People and Places.

Gillmore, J. (2010). A functional view of artistic evaluation. Philosophical Studies, 155, 289-305.

Jarke, J., Kubicek, H., Gerhard, U., Introna, L., Hayes, N., et al. (2019). Interactive co-creation good practice guide. Retrieved from: https://co-creation.mobile-age.eu (last accessed: 04/2021).

Kara, H. (2015). Creative research methods in the social sciences. Policy Press.

Keating, C. (2002). Evaluating community arts and community well-being. An evaluation guide for community arts practitioners. State of Victoria.

Knell, J. and Whitaker, A. (2016). Participatory Metrics Report. Quality Metrics National Test. Arts Council England.

Lewandowska, K. (2021). Evaluation in interaction: the pragmatic approach to artistic judgement. PSJ, 17(3), 96-110.

Matamala, A. and Soler-Vilageliu, O. (2022). Defining and assessing artistic co-creation: the TRACTION proposal. Arte, Individuo y Sociedad, 34(3), 851-867.

Matarasso, F. (2019). A restless art. How participation won and why it matters. Calouste Gulbenkian Foundation.

Matarasso, F. (2021). Traction deliverable 3.2. Opera co-creation (preliminary report). Project report.

Röggla, T., Striner, A., Rivas, H. and César, P. (2022). The Co-creation Space: an online safe space for community opera creation. IMX ’22, June 22-24.  https://doi.org/10.1145/3505284.3532814

Shared Intelligence, The Mighty Creatives and Pickthall, S. (2017). Testing the accessibility of Arts Council England’s Quality and Participatory Metrics. Arts Council England.

Walmsley, B. (2019). Co-creating art, meaning, and value. Audience engagement in the performing arts. New directions in cultural policy research. Palgrave Macmillan.

 

   

BIO

 

Anna Matamala, BA in Translation (UAB) and PhD in Applied Linguistics (UPF, Barcelona), is an Associate Professor at the Universitat Autònoma de Barcelona. Leader of the TransMedia Catalonia research group, Anna Matamala has participated in funded projects on audiovisual translation and media accessibility (DTV4ALL, ADLAB, HBB4ALL, ACT, ADLAB PRO, IMAC) and has led others (AVT-LP, ALST, VIW, NEA, EASIT, RAD). She is currently involved in the European projects Mediaverse and Traction. She has published extensively in international journals such as Meta, The Translator, Perspectives, Babel, and Translation Studies, among others. She is the author of a book on interjections and lexicography (IEC, 2005), co-author (with Eliana Franco and Pilar Orero) of a book on voice-over (Peter Lang, 2010), author of a book on audiovisual accessibility and translation (Eumo, 2019), and co-editor of various volumes on audiovisual translation and media accessibility. She received the Joan Coromines Prize in 2005, the APOSTA Award to Young Researchers in 2011, and the Dr. Margaret R. Pfanstiehl Memorial Achievement Award in Audio Description Research and Development in 2021. Her research interests are audiovisual translation and accessibility. She is also actively involved in standardisation work at ISO and UNE. More information: webs.uab.cat/amatamala