The idea of extracting information at the neurological level that is relevant to marketing, and using those insights for better decision-making, is an emerging approach to studying underlying consumer behaviour. This commercial practice is called neuro-marketing, and it has gained immense attention over the last five years because of the advances it has enabled.
There are numerous ways of detecting the emotions, thoughts and feelings of a person who is introduced to a product – fMRI scans, EEG, eye-tracking, MIIR – but most of these datasets share one trait: temporality, the time-varying nature of the sequence. The dependence of the information at time t on the past gives us the leverage to apply recurrent neural networks (RNNs).
An RNN is a connectionist model known for sequence modelling and handling temporal data. It can be seen as a derivative of the feed-forward neural network (FFNN): copies of an FFNN are stacked to represent different time-steps and linked together, forming a pipeline that propagates information across the time-steps.
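To make that recurrence concrete, here is a minimal sketch of a single vanilla RNN step in NumPy; the weight names (W_xh, W_hh, b_h) and the sizes are illustrative assumptions for this example, not taken from any particular implementation:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla RNN step: the new hidden state mixes the current
    input x_t with the previous hidden state h_prev."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

# Illustrative sizes: 4 input features, 8 hidden units
rng = np.random.default_rng(0)
W_xh = 0.1 * rng.normal(size=(4, 8))
W_hh = 0.1 * rng.normal(size=(8, 8))
b_h = np.zeros(8)

h = np.zeros(8)                      # initial hidden state
for x_t in rng.normal(size=(5, 4)):  # a sequence of 5 time-steps
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```

Because the same weights are reused at every time-step, the final hidden state h summarises the whole sequence.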
The RNN is one of the neural networks, alongside convolutional neural networks (CNNs), that we applied to our datasets in the deep-learning category. There are many variants of the RNN; the common ones are as follows:
• Vanilla RNN
• Long-Short Term Memory model (LSTM)
• Gated Recurrent Unit (GRU)
The LSTM is specifically designed to overcome the hurdles caused by vanishing gradients, under which the earliest time-steps receive an almost-zero learning signal. This is taken care of by introducing an input gate, a forget gate, an output gate and a cell state. The LSTM gives enhanced performance on longer sequences, which makes it well suited to the datasets used in neuro-marketing.
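For reference, the standard LSTM update equations, with input gate $i_t$, forget gate $f_t$, output gate $o_t$, candidate cell $\tilde{c}_t$ and cell state $c_t$ (here $\sigma$ is the sigmoid and $\odot$ the element-wise product), are:

```latex
\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) \\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) \\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```

The forget gate $f_t$ decides how much of the old cell state survives, which is what lets information and gradients flow across long sequences.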
From the structure of the RNN, it follows that it takes three-dimensional data as input, where the dimensions correspond to (samples, time-steps, features) respectively.
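As a quick illustration (the sizes here are made up for the example), a batch of 32 samples, each a sequence of 5 time-steps with 10 features per step, is stored as a single 3-D array:

```python
import numpy as np

# 32 samples, 5 time-steps each, 10 features per step (illustrative sizes)
X = np.zeros((32, 5, 10))
print(X.shape)  # (32, 5, 10) -> (samples, time-steps, features)
```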
We will now look at the input, output and model architecture of an RNN on an fMRI dataset. The objective is to read the fMRI scans of 4 subjects who were shown numerous labelled images, and to classify the scans among 3 classes: animal, artifact and scene.
Input shape: (13136, 5, 6960)
Output shape: (13136, 3)
Here, x_t is the feature vector of length 6960 and y is the output vector of length 3. The hidden states are denoted h_t^(l), where t is the time-step and l is the layer.
So, with this basic double-layered RNN we obtained an accuracy of 67% on the fMRI dataset. The performance can be improved by fine-tuning the hyper-parameters and trying out other RNN variants or hybrids.
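A minimal sketch of what such a double-layered recurrent model can look like in Keras follows; the layer type, hidden sizes and training settings are illustrative assumptions, not the exact configuration we used:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import SimpleRNN, Dense

model = Sequential([
    # First recurrent layer returns the full sequence so that a
    # second recurrent layer can be stacked on top of it.
    SimpleRNN(128, return_sequences=True, input_shape=(5, 6960)),
    # Second recurrent layer keeps only the final hidden state.
    SimpleRNN(64),
    # Softmax over the 3 classes: animal, artifact, scene.
    Dense(3, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

With one-hot labels of shape (13136, 3), training would then be a call like model.fit(X, y, epochs=..., batch_size=...).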
It can be seen that even the simplest of RNNs captures the complexity involved in temporal data; such is the power of the RNN, and such is its impact on the emerging domain of neuro-marketing.