Data Analysis and Management

CREC Learning Circle

Updated: Feb 17, 2020


We were a dozen ladies round the table at this Learning Circle, with Chris [Pascal] first sharing the positive experiences she and Tony [Bertram] had on their recent visit to Budapest and the Department of Engineering at the University, the venue for the next EECERA conference (2018). The venue is in the heart of Budapest, and Chris also informed us that the Gala dinner will be held on a boat on the Danube, where we will be able to enjoy an evening of local food, dance and music. It sounds like a very exciting few days in an incredible location. We also found out that EECERA 2019 will be a week earlier than usual and somewhere beautiful and warm.

Chris kicked off the session by stating that although data analysis is not a linear process, it can be seen as comprising three stages. She described them as follows:

1. Collection of Data

Data analysis really starts from the moment you enter the field and start engaging with participants and collecting data.

From the very start it is important to have a systematic approach, with, for instance, a number of clearly labelled folders (physical or on the computer) where the researcher can begin the sorting process as soon as data collection starts, putting data away ‘tidily’ to be retrieved and worked with later.

In other words, data is collected, read through and sorted into its specific files from the very start. Each file needs its own index for maintaining clarity and easy retrieval of data. This is in effect a form of initial content analysis that starts from day one in the field.
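The sorting-and-indexing approach described above could be sketched as a small script. This is only an illustration: the folder names and index columns below are assumptions, not categories Chris specified.

```python
from pathlib import Path

# Illustrative folder labels -- the real categories would come from
# the researcher's own study design, not from this sketch.
FOLDERS = ["interviews", "field_notes", "documents", "other"]

def set_up_data_store(root: str) -> None:
    """Create one clearly labelled folder per data type, each with
    its own index file for clarity and easy retrieval."""
    for name in FOLDERS:
        folder = Path(root) / name
        folder.mkdir(parents=True, exist_ok=True)
        index = folder / "index.txt"
        if not index.exists():
            # A simple tab-separated index; columns are hypothetical.
            index.write_text("date\tsource\tfilename\tnotes\n")

set_up_data_store("study_data")
```

The point is only that the structure exists before the data arrives, so sorting can begin from day one rather than after the pile has grown.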

2. Coding and Organising of Data

You always want to keep the central research question in mind when working with the data. As you read the data, coding begins by ‘tagging’ it according to issues, themes or concepts that may have been identified in advance or may emerge from the data, or both. In other words, you may have a coding frame combining both a priori and emergent issues, themes and concepts. In any coding frame there also needs to be one code for ‘other’: somewhere to park the data that does not fit in. This data can at times be the most interesting to explore and expand on.

Chris further mentioned that, in addition to the above coding frame, she codes data according to significance, numbering it 1, 2 or 3 depending on how relevant it may be. In effect, Helen suggested, a bit like a ‘traffic light’ system of red, yellow and green.

This coding can be done manually or by using software such as NUD*IST or NVivo. Chris mentioned that in a viva you will need to explain why you did or did not use a particular manual or software approach.

3. Analysis of Data

As you are reading, coding and analysing the data you are beginning to build up a narrative as you are interpreting it and looking for patterns.

In this stage we are not just looking to retrieve relevant data; as we interpret the patterns that emerge, we want to tell a story. In other words, to generate theory and explanations of how something works. We may be generating our own theory, or developing and evolving someone else’s theory into second- or third-generation theory. Generating theory is an important part of the research process, as we want to make our research transferable.

Within a piece of research there are always multiple narratives, so we have to be able to explain and justify why we have chosen one narrative over another, and why we may have put some of the data to the side and not pursued it. Having said that, we should always interrogate the negative cases, the cases that don’t fit.

Ultimately we do have to reduce the data and present it in a manageable way. There are many creative ways of presenting data through various forms of visualisation; Faye showed an example of a Wordle-generated ‘word cloud’. See the links below that Stacey has compiled:

http://stephanieevergreen.com/qualitative-chart-chooser/
https://visage.co/turn-qualitative-data-visual-storytelling-content/
http://annkemery.com/qual-dataviz/
https://www.pinterest.co.uk/severgreen/qualitative-data-visualization/
http://theresearchcompanion.com/tips-for-presenting-qualitative-data-in-a-conference-presentation/
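Behind any word cloud sits a simple word-frequency count: tools like Wordle size each word by how often it appears in the data. A minimal stdlib sketch of that count, with a made-up sample sentence and an illustrative stopword list:

```python
import re
from collections import Counter

# A tiny, illustrative stopword list; real tools use much longer ones.
STOPWORDS = {"the", "and", "to", "of", "a", "in", "is", "it"}

def word_frequencies(text: str, top_n: int = 5) -> list:
    """Count how often each non-stopword appears, most frequent first."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(top_n)

sample = "Coding the data, coding the themes, reading the data again"
print(word_frequencies(sample))
# -> [('coding', 2), ('data', 2), ('themes', 1), ('reading', 1), ('again', 1)]
```

Feeding such frequencies into any of the visualisation tools linked above produces the cloud itself.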

After Chris’s introduction of the three main stages of how to work with data, the floor was open to anyone who wanted to discuss their frameworks or had questions.

Aline questioned a statement from the book Content Analysis by Drisko and Maschi (2015), where they state that: “no approach to content analysis goes on from initial “open” coding to include Glaser and Strauss’ (1967) axial and discriminate coding techniques”. We discussed the concepts of ‘axial coding’ and ‘stand-alone codes’, and we questioned whether we need to take such a ‘purist’ approach to coding as Drisko and Maschi suggest.

Helen gave us a detailed account of her research and indicated that her research is an Appreciative Inquiry in line with Cooperrider et al. (2008), in that she is looking for positive interactions to impact practice and change in an organisation.

Helen was initially inundated with a ‘messy’ pile of data that she eventually sorted using PowerPoint as an analytical tool. Donna asked whether this did not mean that the text on the slides ends up being very small, which is correct; however, Helen clarified that the slides are not used to present data in the traditional sense, only to organise and store it systematically. Helen coded the data against the four stages of pedagogical mediation, her a priori framework taken from the literature (Formosinho and Figueiredo, 2014). She added an additional category and ended up with a mix of axially coded data and stand-alone codes.

Helen also numbered her data as Chris had mentioned at the start, with numbers 1, 2 and 3 to rank the data according to importance. This is logged on a spreadsheet.
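Logging ranked codes to a spreadsheet, as Helen describes, could be sketched with a plain CSV file. The column names and rows below are hypothetical; Helen’s actual spreadsheet layout was not specified.

```python
import csv

# Illustrative rows only -- excerpt IDs, code labels and ranks are
# invented for this sketch, not taken from Helen's study.
rows = [
    {"excerpt_id": "A01", "code": "pedagogical mediation: stage 1", "rank": 1},
    {"excerpt_id": "A02", "code": "stand-alone: transitions", "rank": 3},
]

with open("coding_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["excerpt_id", "code", "rank"])
    writer.writeheader()  # one header row, then one row per coded excerpt
    writer.writerows(rows)
```

A spreadsheet kept this way can then be sorted or filtered by rank to pull out the most significant excerpts.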

Although Helen refers to Geertz’s (1973) seminal work and the notion of thick description, she also mentioned Martyn Hammersley’s (1990) critique of the concept in his paper ‘What’s Wrong with Ethnography? The Myth of Theoretical Description’.

The illustrative handout demonstrates Helen’s process linearly; in reality, however, it was an iterative process that culminated in participants checking interpretations at cluster meetings and peers scrutinising her analysis for credibility (Shenton, 2004).

It was very interesting to get a detailed account of the coding and analysis process linked to a concrete piece of research, rather than a generic account from a textbook.

Aline wanted to find out more about the peer scrutiny process, and it was mentioned that Azora Hurd had involved peers at different points of the process, one-to-one, rather than in a group with everyone together at the same time. Aline mentioned that she had now transcribed and analysed enough data against her a priori P’s framework, and identified emerging codes, to want peers to scrutinise the coding. It was mentioned that you don’t necessarily need to involve experts in the field, just peers who have an understanding of it. Wendy Messenger’s PhD was also mentioned as an example of how you can creatively work with your data.

Paola interjected that she had found involving an ‘expert group’ most useful in her process, and she is at a point where she would like to invite them back for further feedback.

Fay talked about her research, which, interestingly, contrasted with Helen’s. Fay realised fairly early on that with select video footage she had plenty of data to explore the underlying values held by teachers in England and Sweden. She strategically limited the amount of data, which she has now reduced to the few pages she held up to the group. Chris suggested there is a fine line between minimalism and too much data reduction that we need to consider.

Fay also mentioned how she was quickly able to build up trust with the participating teachers, as she herself was an early years teacher, understood the daily challenges they face, and was able to respond positively to them during the research process. Aline mentioned the concept of being an insider/outsider researcher; the work by Corbin Dwyer and Buckle (2009), “The Space Between”, was referred to, and their suggestion that:

Rather than consider this issue from a dichotomous perspective, the authors explore the notion of the space between that allows researchers to occupy the position of both insider and outsider rather than insider or outsider (Corbin Dwyer and Buckle, 2009, p. 54)

http://journals.sagepub.com/doi/pdf/10.1177/160940690900800105

Alison briefly mentioned that she is using Laura Lundy’s model of participation as her a priori framework, and how well this model has worked in her research to collect the data she is looking for when working with parents and staff: http://ec.europa.eu/justice/fundamental-rights/files/lundy_model_child_participation.pdf

It was a very stimulating and enjoyable session, with something for everyone to take away. The next Learning Circle is on 14 December, when everyone is warmly welcome to show up in Christmas jumpers.

Reflections by Aline Cole-Albaeck

http://www.crec.co.uk/announcements/reflections-about-data-analysis-and-management

