This project aims to understand the laws of information dynamics in evolutionary systems. For more details, see this site.
The theory is concerned with the study of general learning systems using rigorous mathematical representations based on methods of functional and convex analysis, optimisation and information geometry. It gives a geometric interpretation of the closely related information value theory due to Stratonovich (1965). The key idea is the duality of utility and information that occurs in optimally learning systems. An important tool is the theory of monotone operators between dual pre-ordered spaces, which play the role of adjoint preference morphisms (monotone relations and Galois connections between pre-orders). Results of the theory include the definition and description of optimally learning systems and their performance bounds (see Belavkin, 2008, 2009 or a video of the talk at the PASCAL2 workshop). Optimal learning systems and their performance bounds are expressed using generalised characteristic potentials, special cases of which are the cumulant generating function and the free energy. Closed-form solutions exist for the case where information is represented by negative entropy. Optimisation of learning systems is useful in a broad range of practical applications, and it also resolves some theoretical problems, such as the exploration/exploitation dilemma in machine learning and paradoxes in decision theory.
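To make the negative-entropy special case concrete (this is a standard variational identity, stated in my own notation rather than that of the cited papers): when information is measured by the Kullback-Leibler divergence from a prior $q$, the characteristic potential reduces to the cumulant generating function $\Lambda(\beta)$ (the free energy up to the factor $\beta^{-1}$), and the optimal solution has the Gibbs form:

$$
\Lambda(\beta) \;=\; \ln \mathbb{E}_q\!\left[e^{\beta\,u(x)}\right]
\;=\; \sup_{p}\left\{\beta\,\mathbb{E}_p[u(x)] - D_{\mathrm{KL}}(p\,\|\,q)\right\},
\qquad
p_\beta(x) \;\propto\; q(x)\,e^{\beta\,u(x)}.
$$

Here $u$ is the utility, $\beta$ is an inverse-temperature parameter, and the supremum is attained at the Gibbs distribution $p_\beta$.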
This is a project funded by EPSRC that aims to create an agent based entirely on neural Cell Assemblies (CAs - clusters of cells that form spontaneously in large neural networks). One application of the project is the symbol grounding problem in AI. See this page for the project.
This theme of my research is concerned with algorithmic implementations of the optimal learning theory. For example, the Optimist algorithm (Belavkin & Ritter, 2004) was created for the ACT-R cognitive architecture to model human and animal learning behaviour. The algorithms are stochastic in nature and have similarities with Monte Carlo techniques. See also the following paper, which describes an earlier, simplified version of the algorithm.
One of the main motivations behind the use of these algorithms in cognitive modelling is the study of behavioural paradoxes of classical rational choice theory (e.g. the paradoxes of maximum expected utility theory); see Belavkin & Ritter (2003) and Belavkin (2006). In addition to cognitive modelling, these algorithms can be applied to more general optimisation problems, where an optimal exploration-exploitation balance has to be achieved (see, for example, Belavkin, 2005c). A minimal sketch of such a stochastic choice rule is given below.
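To give the flavour of such stochastic algorithms (this is a generic Boltzmann/softmax choice rule, not the Optimist algorithm itself, whose details are in the cited papers), here is a minimal Python sketch in which a temperature parameter trades exploration against exploitation:

```python
import numpy as np

def softmax_choice(utilities, temperature, rng=None):
    """Choose an action index with probability proportional to
    exp(utility / temperature): high temperature explores,
    low temperature exploits the current utility estimates."""
    rng = rng or np.random.default_rng()
    u = np.asarray(utilities, dtype=float)
    # Subtract the maximum for numerical stability before exponentiating.
    z = np.exp((u - u.max()) / temperature)
    p = z / z.sum()
    return rng.choice(len(u), p=p)

# Example: three actions with estimated utilities; temperature controls noise.
print(softmax_choice([1.0, 2.0, 1.5], temperature=0.5))
```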
It is now popular to believe that emotion is an important component of intelligence, but this is hard to prove (even the term 'emotion' is only weakly defined). Cognitive modelling is one good way to understand this complex phenomenon and the effects of emotions on learning and problem solving. Cognitive architectures are computational implementations of a theory of mind, and models built in them become tools to test how changes of problem-solving strategy, accompanied by feelings of frustration or joy (due to failures or successes), can improve problem-solving behaviour.
Cognitive architectures enable us to see how the (empirical) entropy of a model's knowledge decreases or increases during problem solving (see the figure above). It turns out that changes in problem-solving strategy correlate with changes in the entropy of success, and the behaviour of subjects complies with the optimal learning theory. In fact, the latter was developed around my early work on the use of entropy to control the noise temperature in cognitive architectures. Interestingly, such feedback not only helps a model to follow an optimal learning strategy, but also improves the match between the model and experimental data (yes, data from real subjects). For example, I modelled the classical experiment by Yerkes & Dodson (1908); this work included a simulation of the experiment and a cognitive model of the 'mouse' used in their study. A sketch of the entropy feedback is given below.
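To illustrate the idea of entropy feedback (a sketch under my own simplifying assumptions, not the exact mechanism of the cited models), the empirical entropy of the success/failure record can be computed and fed back as the noise temperature of a stochastic choice rule such as the one sketched earlier:

```python
import math

def success_entropy(successes, trials):
    """Empirical (binary) entropy of the success rate, in bits.
    Maximal (1 bit) when success is 50/50, zero when outcomes are certain."""
    if trials == 0:
        return 1.0  # no evidence yet: assume maximal uncertainty
    p = successes / trials
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def noise_temperature(successes, trials, t_max=1.0):
    """Map entropy to temperature: uncertain knowledge -> hot (explore),
    certain knowledge -> cold (exploit). The linear map is an assumption."""
    return t_max * success_entropy(successes, trials)

# Example: 7 successes out of 10 trials gives a moderate temperature.
print(noise_temperature(7, 10))
```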
In their work, Yerkes & Dodson (1908) discovered what psychologists later referred to as the Inverted-U effect, relating learning performance (i.e. the speed of learning) to the strength of the reinforcement stimulus. This nonlinear phenomenon is important in psychological theories of arousal and basic emotions. In my view, the Yerkes and Dodson work was the first experimental evidence of the duality between utility and information in natural systems.
Multidimensional data can be very complex, but it can be simplified using various vector quantisation and visualisation techniques, such as self-organising maps (SOM) and independent component analysis (ICA). For example, I applied SOM to analyse and classify the learning behaviour of MSc students at Middlesex University. The resulting topological maps enable us to see which types of students are more likely to excel or fail in their examinations. The figure below shows a topological map (SOM) of students' learning strategies: red circles mark the locations of students who received marks above 80%, and yellow circles mark those with marks below 40%.
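For readers unfamiliar with the method, here is a minimal SOM training loop in Python (a generic textbook version with assumed grid size, learning rate and decay schedules, not the configuration used for the student data): each input is assigned to its best-matching unit, and that unit and its grid neighbours are pulled towards the input, producing a topology-preserving map.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=50, lr0=0.5, rng=None):
    """Train a minimal self-organising map on the rows of `data`.
    Returns the weight grid of shape (rows, cols, n_features)."""
    rng = rng or np.random.default_rng(0)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    # Grid coordinates, used to compute neighbourhood distances.
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                  indexing="ij"), axis=-1)
    sigma0 = max(rows, cols) / 2.0
    for epoch in range(epochs):
        # Learning rate and neighbourhood radius decay over time.
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3
        for x in rng.permutation(data):
            # Best-matching unit: the node whose weights are closest to x.
            dists = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(dists.argmin(), dists.shape)
            # Gaussian neighbourhood around the BMU on the grid.
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

# Example: map 3-dimensional points onto a 10x10 grid.
data = np.random.default_rng(1).random((200, 3))
som = train_som(data)
```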
Independent Component Analysis (ICA) is a method of representing data in a basis of statistically independent components. With the advance of fast algorithms, it has become possible to apply ICA to many problems, including the analysis of parallel time-series (e.g. share prices). EPSRC has funded the ICA Research Network, connecting colleagues from many UK universities interested in this fast-developing field. Several MSc student projects that I supervised at Middlesex University used ICA for data analysis (e.g. foreign exchange rates, hotel occupancy rates, etc.).
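As a toy illustration of this kind of analysis (synthetic signals here, not the students' actual data), FastICA from scikit-learn can unmix linearly combined time-series:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic independent sources: a sine wave and a square wave.
rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
sources += 0.05 * rng.standard_normal(sources.shape)

# The sources are observed only through an unknown linear mixture,
# as with parallel time-series such as share prices.
mixing = np.array([[1.0, 0.5], [0.4, 1.2]])
observed = sources @ mixing.T

# FastICA recovers the independent components up to order and scale.
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(observed)
```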