Project Progression

This section of the portfolio builds upon the frontpage timeline, offering more information on each event.

Project Conception

21/09/20

This marks the beginning of the project. Initially, the goal was to use electroencephalography and computer programming techniques to create generative art that reflected the emotions felt by a listener while listening to music. However, as the project developed, this goal expanded. Through the research and supervision stages, the investigator became interested in machine learning and the use of AI as a method of data classification, which resulted in the project investing heavily in machine learning for emotion retrieval from EEG data. Furthermore, supervision greatly benefited both the research practices and methodology and the conceptual goals of the project.


First EEG Connection

28/09/20

The first successful retrieval of data from the EEG headset. Similar integrations had been attempted by the investigator before, but all were in visual coding environments (such as Max MSP). This was the first time the data had been recorded using code alone: namely, Java in the Processing environment, using the oscP5 library. To connect the headset, the "Mind Monitor" mobile app was used. This app was designed to interface with the EEG headset over Bluetooth to retrieve the EEG signal data. It then performs some pre-processing and broadcasts the acquired data locally over OSC with very minimal delay.

// Import the OSC and networking libraries.
import oscP5.*;
import netP5.*;

OscP5 oscP5;

void setup() {
  size(400, 400);
  background(0);
  // Listen for incoming OSC messages on UDP port 5000,
  // the port Mind Monitor is set to broadcast to.
  oscP5 = new OscP5(this, 5000);
}

// Called automatically whenever an OSC message arrives.
void oscEvent(OscMessage theOscMessage) {
  // Route only the absolute alpha band messages from the headset.
  if (theOscMessage.checkAddrPattern("/muse/elements/alpha_absolute")) {
    if (theOscMessage.checkTypetag("f")) {
      // Read the first float argument and print it to the console.
      float firstValue = theOscMessage.get(0).floatValue();
      println(firstValue);
    }
  }
}

In short, this script imports the oscP5 library and binds the program to a network port, allowing it to receive the OSC messages sent over UDP by the mobile device. From there, if statements route the OSC address patterns to find the specific EEG bandwidth data streams, and the matched values are printed to the console with the println() function.


Programming Research Begins

05/10/20

Once the EEG data was accessible, the use of coding for generative art production was explored. Many languages and environments were investigated; the most notable were Python, Java, and JavaScript. Ultimately, JavaScript was judged to be the most viable, for its powerful drawing capabilities through the p5.js library. However, the investigator continued to explore Processing (Java), because it was able to receive OSC data with the use of a library. This was not possible in JavaScript because it is typically a web-only language.


Nature Of Code

02/11/20

At this point, the book The Nature of Code was worked through. This book was key to the development of the project; it taught the investigator how to replicate natural systems using code, and provoked the idea of using neural networks for emotion classification with its final chapter. This led to the creation of the first particle system prototype artworks. Furthermore, working through each of the coding challenges in the book provided an excellent introduction to coding, and taught the investigator many of the fundamental principles of computer programming that were essential to the success of the project.

The author, Daniel Shiffman, also posts coding videos, tutorials, and challenges on his YouTube page.

Figure 1: Anger

Figure 2: Sadness

Figure 3: Fear

Figure 4: Nature of Code book


Mind Charity Fundraising

07/11/20

The investigator liaised with organisers of the Mind mental health charity to negotiate a fundraiser event to take place during the project exhibition. This would be used as an opportunity to receive charitable donations in exchange for material goods produced during the project (i.e., artwork prints, artwork booklets, etc.). Unfortunately, this event was cancelled due to the coronavirus pandemic. However, as the university has announced its plans to go ahead with a replacement in-person exhibition later in the year, there might still be hope for fundraising.

Figure 5: Mind fundraiser pack


Project Concept Presentation

09/11/20

On this day, presentations of work-in-progress and project concepts were given. A PowerPoint presentation was prepared for this event, detailing all the technical points that had been investigated so far, along with future plans, financial planning, and charity goals.

Figure 6: Project concept presentation slides


Key Literature Reviewed

25/11/20

This month, many key documents and papers were read and reviewed. These articles included both practical and theory-based research papers, explanatory books, and other literature reviews. These documents provided a solid foundation from which the project could be planned, and confirmed the investigator's interest in using machine learning for emotion classification. See the bibliography attached.


Aggarwal, C., 2018. Neural Networks and Deep Learning. Springer International Publishing.

Altenmüller, E., 2002. Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns. Neuropsychologia, 40(13), pp.2242-2256.

Bird, J., Manso, L., Ribeiro, E., Ekart, A. and Faria, D., 2018. A Study on Mental State Classification using EEG-based Brain-Machine Interface. 2018 International Conference on Intelligent Systems (IS), pp.795-800.

Kaplan, S., Dalal, R. and Luchman, J., 2013. Measurement of emotions. In: L. Tetrick, M. Wang and R. Sinclair, eds., Research Methods in Occupational Health Psychology, 2nd ed. Routledge, pp.61-75.

Kappeler, K., 2010. Extraction of valence and arousal information from EEG signals for emotion classification. Master Dissertation. Swiss Federal Institute of Technology in Lausanne.

Mauss, I. and Robinson, M., 2009. Measures of emotion: A review. Cognition and Emotion, 23(2), pp.209-237.

Posner, J., Russell, J. and Peterson, B., 2005. The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and Psychopathology, 17(3).

Savran, A., Ciftci, K., Chanel, G., Mota, J., Viet, L., Sankur, B., Akarun, L., Caplier, A. and Rombaut, M., 2006. Emotion Detection in the Loop from Brain Signals and Facial Images. [online] Dubrovnik


Non-Neural Network Alternative

02/12/20

As a fallback option, if the investigator was not able to actualise the machine learning based system that was envisioned, an alternative setup was created. This alternative system was based on the emotion detection methods used in the “Art of Feeling” project by the studio Random Quark (2017).

In principle, a reasonably simple algorithm is applied to the incoming amplitude values of various brainwave signals; it examines hemispherical differences and produces values for emotional valence and activation. The difference in lateral activation of the brain hemispheres is thought to be an indicator of emotional state (Altenmüller, 2002). However, this implementation would likely not have reflected the actual emotional state of the participant, due to its simplicity. While it may not have been as accurate or truthful as a deep learning approach, it at least provided a reasonable foundation to build from.

Arousal = betaAVERAGE / alphaAVERAGE
Valence = alphaRIGHT / betaRIGHT − alphaLEFT / betaLEFT
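
As a rough illustration, the fallback calculation could be expressed in JavaScript as follows. This is a minimal sketch, assuming the per-hemisphere band amplitudes have already been averaged from the OSC stream; the variable and function names are illustrative only.

// Fallback valence/arousal estimate from averaged band amplitudes.
// alphaLeft/alphaRight and betaLeft/betaRight are assumed to be mean
// absolute band powers for the left and right electrode pairs over
// the current window (names are illustrative).
function estimateAffect(alphaLeft, alphaRight, betaLeft, betaRight) {
  const alphaAvg = (alphaLeft + alphaRight) / 2;
  const betaAvg = (betaLeft + betaRight) / 2;

  // Higher beta relative to alpha is read as higher activation.
  const arousal = betaAvg / alphaAvg;

  // Hemispheric asymmetry in the alpha/beta ratio is read as valence.
  const valence = alphaRight / betaRight - alphaLeft / betaLeft;

  return { valence, arousal };
}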

Altenmüller, E., 2002. Hits to the left, flops to the right: different emotions during listening to music are reflected in cortical lateralisation patterns. Neuropsychologia, 40(13), pp.2242-2256.

Randomquark.com. 2017. random quark. [online] Available at: <https://randomquark.com/casestudies/mindswarms.html> [Accessed 2 March 2021].


First Neural Network Created

08/12/20

This marks the first successful creation and training of a simple neural network for data-graph classification. The user places dots with a letter value on a canvas, and the network learns the rules of the placement pattern. The network can then generate a letter prediction for any position the user selects next. While this example may not hold any relevance to the question of the project, it demonstrates a key stepping stone in the realisation of the project's goals. Furthermore, this was a significant personal achievement for the investigator, who prior to this project had no experience in computer programming or machine learning engineering.
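
A minimal sketch of this kind of point-and-label classifier, assuming the ml5.js library (which wraps TensorFlow.js); the original implementation may differ in its details.

// Sketch of a point-label classifier, assuming ml5.js.
const nn = ml5.neuralNetwork({
  inputs: ['x', 'y'],
  outputs: ['label'],
  task: 'classification',
});

// Each training example is a canvas position plus the letter the user chose.
function addExample(x, y, letter) {
  nn.addData({ x: x, y: y }, { label: letter });
}

// Train, then predict a letter for any new position on the canvas.
function trainAndClassify(x, y) {
  nn.normalizeData();
  nn.train({ epochs: 50 }, () => {
    nn.classify({ x: x, y: y }, (error, results) => {
      if (error) return console.error(error);
      console.log(results[0].label, results[0].confidence);
    });
  });
}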

Figure 7: First model training


EEG ‘Spectrogram’ Created

09/12/20

Using the Processing environment for Java, a program was created that converts the live EEG data into spectrogram-like images. These images represent a three-second sliding window of each bandwidth amplitude at each electrode. This method both visually represents the data and converts it into a format that a convolutional neural network can understand.

The raw EEG data was received over OSC by the Neural Scores application, a program made during the development of this project. The Neural Scores application stored a three-second short-term memory of the EEG data, which it then rendered as a spectrogram-like image. In the spectrogram image, the channels were ordered by frequency bandwidth and assigned a colour (i.e., delta was red, gamma was blue), which encouraged the network to differentiate between the bandwidths. Inside each colour band are the electrode positions (TP9, FP1, FP2, TP10). The colour displayed in each individual pixel represented the amplitude of that frequency bandwidth, at that electrode, at that point in time.
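
The sketch below illustrates this sliding-window encoding in p5.js-style JavaScript. It is a simplified assumption of the approach, not the exact Neural Scores code: the band order, window length, and update rate are illustrative.

// Minimal p5.js-style sketch of the sliding-window "spectrogram" encoding.
const BANDS = ['delta', 'theta', 'alpha', 'beta', 'gamma'];
const ELECTRODES = 4;      // TP9, FP1, FP2, TP10, as listed above
const WINDOW = 90;         // roughly 3 seconds of band-power updates
const history = [];        // sliding window of recent readings

// Called whenever a new set of band powers arrives over OSC.
// reading[band] is assumed to be an array of four 0-1 amplitudes.
function pushReading(reading) {
  history.push(reading);
  if (history.length > WINDOW) history.shift();
}

function setup() {
  createCanvas(WINDOW, BANDS.length * ELECTRODES);
  colorMode(HSB, 360, 100, 100);
}

function draw() {
  background(0);
  // One pixel column per time step; rows grouped band-by-band,
  // then electrode-by-electrode within each band.
  for (let t = 0; t < history.length; t++) {
    for (let b = 0; b < BANDS.length; b++) {
      for (let e = 0; e < ELECTRODES; e++) {
        const amp = constrain(history[t][BANDS[b]][e], 0, 1);
        stroke(b * 60, 100, amp * 100); // hue = band, brightness = amplitude
        point(t, b * ELECTRODES + e);
      }
    }
  }
}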

Not only did this method provide a visual representation of the EEG data, but it also allowed the system to perform image classification: a technique whereby a convolutional neural network analyses an image by inspecting the red, green, blue, and alpha values of each pixel. This allowed the network to interpret spatial aspects, such as hemispherical activation. Moreover, the sliding-window nature of the spectrogram inherently contextualised the data in the time domain by presenting a running history of the values, which allowed the network to interpret temporal aspects, such as spikes and fluctuations. The valence and activation classifications were run in real time, with live EEG data, and the generated values were plotted against the circumplex model of affect. This produced a prediction for the strongest emotion felt by a participant at that point in time.
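
The mapping from a valence-activation pair to a "strongest emotion" prediction could look something like the following hypothetical helper; the emotion labels and their angles are illustrative, not the exact set used in the application.

// Hypothetical helper that plots a (valence, activation) pair on the
// circumplex model of affect and returns the nearest emotion label.
const EMOTIONS = [
  { label: 'happy',   angle: 45 },   // positive valence, high activation
  { label: 'angry',   angle: 135 },  // negative valence, high activation
  { label: 'sad',     angle: 225 },  // negative valence, low activation
  { label: 'relaxed', angle: 315 },  // positive valence, low activation
];

function angularDistance(a, b) {
  const d = Math.abs(a - b) % 360;
  return d > 180 ? 360 - d : d;
}

function strongestEmotion(valence, activation) {
  // Angle around the circumplex, measured anticlockwise from +valence.
  let angle = (Math.atan2(activation, valence) * 180) / Math.PI;
  if (angle < 0) angle += 360;

  // Distance from the origin suggests how strongly the emotion is felt.
  const intensity = Math.min(1, Math.hypot(valence, activation));

  const best = EMOTIONS.reduce((a, b) =>
    angularDistance(a.angle, angle) < angularDistance(b.angle, angle) ? a : b);
  return { label: best.label, intensity };
}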

(A convolutional neural network is similar to a standard neural network in that it can make classifications on data. However, a standard neural network can only recognise patterns in single strings of data; that is, it works in a one-dimensional manner. A convolutional neural network, by contrast, can recognise patterns in multidimensional data, thanks to its convolution and pooling layers. This means convolutional networks are excellent at recognising patterns in images, hence image classification.)
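
For illustration, a small convolutional classifier of this kind could be defined in TensorFlow.js roughly as follows; the layer sizes and input shape are assumptions, not those of the trained models.

// Illustrative convolutional network for small spectrogram images,
// assuming TensorFlow.js; layer sizes are examples only.
const model = tf.sequential();
model.add(tf.layers.conv2d({
  inputShape: [20, 90, 4],          // rows x time steps x RGBA channels
  filters: 16,
  kernelSize: 3,
  activation: 'relu',
}));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.conv2d({ filters: 32, kernelSize: 3, activation: 'relu' }));
model.add(tf.layers.maxPooling2d({ poolSize: 2 }));
model.add(tf.layers.flatten());
model.add(tf.layers.dense({ units: 64, activation: 'relu' }));
model.add(tf.layers.dense({ units: 2, activation: 'tanh' })); // valence, activation
model.compile({ optimizer: 'adam', loss: 'meanSquaredError' });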

Figure 8: EEG “spectrogram” image


Blink Detection Network Created

16/12/20

A model with the ability to predict when a participant's eyes were closed was created. This was done using a convolutional neural network and the EEG 'spectrogram' system created earlier. The EEG spectrogram program was also rewritten to run in JavaScript, rather than Java. The accuracy rating of this model confirmed that the spectrogram approach was viable. Below is a figure of the loss values during the model's training. The loss starts high because the model's weights were randomised, and declines very quickly as the model learns to recognise the data. This is an indicator that the network was able to differentiate between the classification criteria in the spectrogram images very well. Furthermore, when the model was tested in a real-world scenario, it was very clear that it was able to differentiate between the eyes-open and eyes-closed states with less than one second of delay.
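
The training behaviour described above corresponds to a call along these lines, assuming TensorFlow.js; the tensor names, epoch count, and validation split are illustrative.

// Illustrative training call, assuming TensorFlow.js; xs holds the stacked
// spectrogram image tensors and ys the eyes-open / eyes-closed labels.
async function trainBlinkModel(model, xs, ys) {
  await model.fit(xs, ys, {
    epochs: 30,
    shuffle: true,
    validationSplit: 0.2,
    callbacks: {
      // The loss starts high (random weights) and should fall quickly if
      // the network can separate the two states in the spectrogram images.
      onEpochEnd: (epoch, logs) =>
        console.log(`epoch ${epoch}: loss=${logs.loss.toFixed(4)}`),
    },
  });
}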

Figure 9: Blink detection model training


New Supervisor

18/01/21

At this point, the initial project supervisor announced their plans to go on leave for post-doc research. A new supervisor was assigned to the project, and presentations were given to bring them up to date.

Figure 10: New supervisor slides


Emotion Neural Network Created

26/01/21

A monumental milestone in the project. The first emotion detection network was created, using the same technique as the blink detection network. To train the model, EEG data was recorded while the investigator was listening to music that they thought would provoke states of high activation, low activation, high valence, and low valence. Similar to the blink detection model, this model reported very promising loss and accuracy values in training (as shown in the figure below). This milestone served as proof that the project concept and approach were valid, and that the inclusion of deep learning for valence and activation classification was a viable option, with some polishing. When classifying, the model reported a binary classification for both valence and activation, ranging from -1 to 1.

Figure 11: Emotion detection model training


Emotion Neural Network Progressed

01/02/21

The network UI was redesigned to include a very simple graph that exhibited the current general emotion state of the participant. This was based on the circumplex model of affect, as suggested by Posner et al. (2005). The model was also designed to broadcast the values it generated over OSC, so they could be used in other applications.
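
The OSC broadcast could be sketched as follows, assuming the osc.js library for Node.js; the ports and address pattern are illustrative, not necessarily those used by the application.

// Broadcasting the generated values over OSC, assuming the osc.js library.
const osc = require('osc');

const udpPort = new osc.UDPPort({
  localAddress: '0.0.0.0',
  localPort: 57121,            // illustrative local port
  remoteAddress: '127.0.0.1',
  remotePort: 9000,            // illustrative port for the receiving app
});
udpPort.open();

// Wait until the socket is ready before sending anything.
udpPort.on('ready', () => console.log('OSC output ready'));

// Send the latest classification as an OSC message.
function broadcastEmotion(valence, activation) {
  udpPort.send({
    address: '/neuralscores/emotion',   // illustrative address pattern
    args: [
      { type: 'f', value: valence },
      { type: 'f', value: activation },
    ],
  });
}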

Figure 12: Progressed emotion model UI and connectivity


Electron.js Researched

03/02/21

While conducting coding research, the investigator discovered the Electron.js framework for Node.js. This framework allows JavaScript code, which would typically only run in internet browsers, to run as a standalone desktop application through Node.js. This also meant that the bridge program, which is detailed on the Project in Practice page, could be incorporated into the main program. Upon learning this, the investigator began building the emotion detection system into this framework.
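
A minimal Electron main process, as a sketch of how browser-style JavaScript ends up running as a desktop application; the file name and window size are illustrative.

// Minimal Electron main process sketch.
const { app, BrowserWindow } = require('electron');

function createWindow() {
  const win = new BrowserWindow({ width: 1280, height: 720 });
  // index.html is illustrative; it would hold the emotion-detection UI
  // that previously ran in the browser.
  win.loadFile('index.html');
}

app.whenReady().then(createWindow);
app.on('window-all-closed', () => app.quit());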


Neural Scores Application Created

07/02/21

Using the Electron.js framework, the investigator rebuilt the emotion classification program as a standalone desktop application that could be installed easily. At this point, the emotion graph featured previously was also redesigned as a circumplex graph superimposed over an emotion colour wheel, a further exploration of the circumplex model of affect. This was a monumental milestone in the project's development, and symbolised the end of the backend/AI development stage. More detail is given on the Project in Practice page.

Figure 13: Spectrogram view

Figure 14: Classification view


Submitted Work To A Call For Case Studies

15/02/21

As part of the ongoing research conducted by the Creative AI Lab, founded by Bunz and Jager, a call for case studies was put out by Serpentine Galleries. In this posting, the Lab requested that artists who work with artificial intelligence provide images and explanations of the internal tooling of their systems. This call for case studies was discovered shortly after the project supervisor recommended a panel discussion on new AI interfaces by the Creative AI Lab.

Figure 15: Call for case studies submission


Frontend Research Begins

16/02/21

The investigator began researching ways of artistically representing the data generated by the Neural Scores application. During this time, many programming environments were explored; however, the investigator decided to build the frontend system in TouchDesigner with Python. This framework was chosen for its powerful shader-based visual rendering capabilities. This process was heavily influenced by topics and ideas discussed in supervisory sessions. For example, the mapping of EEG and emotion data to a visual space was a topic discussed at length over many weeks, as was the meaning behind this visualisation and what it was intended to accomplish. The goal of the investigator was to create a visual medium that both displayed the total affective experience of the participant during the piece of music and offered a more "in-the-moment" visualisation of the current emotion felt.


Ethics Application And Risk Assessment

17/02/21

In accordance with the university’s policies surrounding participant-based research, an ethics application form was completed. This document also includes the participant information and consent forms, and the risk assessment. The form was submitted to the project supervisor and approved by them after review.


Yop3rro Inspiration

22/02/21

The investigator discovered an independent digital artist named yop3rro, who creates animated posters using TouchDesigner. These posters are displayed on an Instagram page, where they remain still until interacted with, at which point they animate. This method of digital presentation heavily inspired the final output of the project.


Frontend Prototype V1

26/02/21

Using the information and experience gained through research, the investigator created the first prototype of the frontend emotion rendering system. This system, made in TouchDesigner, displayed the current emotion of the user as a series of undulating coloured lines. The colour of the lines reflected the colour currently selected on the emotion graph of the Neural Scores application, and the intensity of the movement reflected the emotional activation of the participant. This system was effective at displaying emotions in an artistic manner; however, it failed to display a history of the emotions, as it only displayed one emotion at a time. See attached.


Figure 16: Prototype v1 examples


Interim Report Submitted

08/03/21

During this period, an interim project report was created and submitted for marking. The report detailed many aspects of the project, such as research, methodology, and current and planned work. The report received a mark of 78, losing marks only on the evaluation. It went on to provide others who held an interest in the project with a comprehensive description of it.


EVA 2021 Conference Paper

20/02/21

Under the recommendation of his tutor, the investigator wrote a short paper to submit to the 2021 Electronic Visualisation and the Arts (EVA) conference. On this day, the paper was accepted. In addition to publishing a paper, the investigator will be giving a 15-minute presentation at the conference in July. This is the investigator's first publication.


Frontend Prototype V2

14/04/21

The frontend prototype was further developed to produce this version. In short, the lines seen previously were blurred and put into a feedback loop. Then, the resultant image was heavily distorted using parallel displacement. This resulted in an attractive marbling visual effect, which displayed a short history of the emotions (approximately 3-7 seconds). This was an important step for the frontend output, as it brought it closer to the goals of the project. However, the output still lacked a complete documentation of the emotions experienced throughout the duration of the music.

Figure 17: Prototype v2 marbling

Figure 18: Prototype v2 component layering


Frontend System Complete

16/04/21

This was perhaps the most anticipated milestone of the project: the artistic realisation. This system was the product of further development upon the v2 prototype. In this version, an emotion-EEG graph was added to the centre of the image. This graph, which forms as a circle, displays both the emotion experienced (as colour) and the raw EEG data (as detail). There was also an audio-reactive element in the form of a dark circle that grew and shrank with the amplitude and beat of the music. The resultant final image was visually similar to a musical score, except composed of brainwave readings and affective experience: a Neural Score. The completion of this system allowed the project to progress into the next phase, data collection.

Neural Scores - Final

All Participants Recorded

29/04/21

Using the EEG device, muse0player, and the Neural Scores application, brainwave recordings were taken of two participants. These recordings were used to train the emotion recognition systems and to generate the final artistic output of the project. The participants selected four songs to train the network with, and five songs to run the experiment with. Some songs were chosen for a specific reason (such as "First piece I learned to play" and "Song played at my best friend's funeral"), and other songs were chosen because they were not significantly impactful. This provided an interesting mix of emotional predisposition and non-predisposition.


All Renders Completed

29/04/21

Using the trained emotion recognition system, artistic renders were made using live EEG data from the participants. The renders were generated at 4K resolution, with a variable frame rate of 30-60 frames per second. A still image was also generated for each song recording, as well as a standalone neural score graph. These renders were then placed in the virtual gallery, in the website galleries, and on YouTube. Additionally, with the announcement of the in-person exhibition, A3 physical prints of each full image were ordered.

Full Images:

Airplane_Image
Clair_Image
Erlkonig_Image
Hellelujah_Image
Lighter_Image
Melancholy_Hill_Image
Moonlight_Image
Ride_Image
Seaweed_Image
Tranz_Image

Graph Only:

Airplane_Graph_Black
Clair_Graph_Black
Erlkonig_Graph_Black
Hellelujah_Graph_Black
Lighter_Graph_Black
Melancholy_Graph_Black
Moonlight_Graph_Black
Ride_Graph_Black
Seaweed_Graph_Black
Tranz_Graph_Black

Prints:


Virtual Gallery Completed

30/04/21

Due to the global pandemic, the in-person exhibition was cancelled; therefore, a new approach to presentation was necessary. After all the recordings had been rendered, an online virtual gallery was built by the investigator. This was made using Unity's WebGL functionality. In the gallery, the neural score images are displayed on large screens, which show a still image until interacted with. Once the viewer interacts with an image, a video of the neural score being rendered begins to play. The viewer can pause the video and view it full screen. This idea was inspired by the video posters created by yop3rro, as mentioned earlier. Click here to view the gallery.


Website Completed

16/05/21

On this day, the investigator completed the development of both the website and the portfolio.