The enormous interest in artificial intelligence (AI) in recent years has led to the development of extremely powerful machine-learning techniques. Time series — any sequence of data with a time component, such as stock prices, weather patterns or electroencephalograms — are by their nature extremely common and of great interest due to their wide range of applications. Time-series analysis is a task for which machine-learning techniques are of particular interest, since they enable the prediction of future events from past ones. Given the diversity of potential applications, it is unsurprising that processing such data with AI algorithms has become very popular in recent years.
A particular type of artificial neural network, called a recurrent neural network (RNN), has been specially developed in recent years to have a memory that enables the network to retain information over time in order to correctly process a time series. Each time new data is received, the network updates its memory to retain the new information. Despite these developments, such networks remain difficult to train and their memory capability is limited in time. “We can imagine the example of a network that receives new information every day,” explains Nicolas Vecoven, a doctoral student in the Systems and Modeling lab at the University of Liège and first author of the study, “but by the fiftieth day, we notice that the information from the first day has already been forgotten.”
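This forgetting can be illustrated with a toy example (not taken from the study): a single vanilla recurrent unit whose recurrent weight is below 1 loses an input pulse exponentially fast. The weight value and update rule below are illustrative assumptions, not the networks used by the researchers.

```python
import numpy as np

# Toy illustration of RNN forgetting: one recurrent unit with a
# sub-unit feedback weight cannot hold on to an old input.
w = 0.9   # recurrent weight (illustrative choice)
h = 1.0   # hidden state right after receiving the "day 1" information

for day in range(50):
    h = np.tanh(w * h)   # daily update, with no new relevant input

# h has decayed towards 0: the day-1 information is essentially gone
print(h)
```

Because the only stable equilibrium of this update is 0, every memory trace shrinks at each step; this is the limitation that a bistable neuron, with two stable states, is designed to overcome.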
“However, human neurons are capable of retaining information over an almost infinite period of time thanks to a bi-stability mechanism. This mechanism allows a neuron to settle into one of two different stable states, depending on the history of the electrical currents it has been subjected to, and to remain there indefinitely. In other words, thanks to this mechanism, human neurons can retain a bit (a binary value) of information for an infinite time,” Nicolas further explains. Based on this bi-stability mechanism, Nicolas Vecoven and his colleagues Damien Ernst (an AI specialist) and Guillaume Drion (a neuroscience specialist) from ULiège have constructed a new artificial neuron with this same mechanism and have integrated it into recurrent artificial networks. Called the Bistable Recurrent Cell (BRC), this new artificial neuron has enabled recurrent networks to learn temporal relationships spanning more than 1,000 time steps, where classical methods fail after only about 100. These important and promising results have been published in the journal PLOS One. The three researchers are continuing their work in this field, developing techniques to improve the memory of RNNs by promoting the emergence of stable equilibrium points within them.
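To make the idea concrete, here is a minimal NumPy sketch of a BRC-style recurrent update. The exact equations, parameter names and initialisation below are illustrative assumptions rather than the authors' published formulation; the key ingredient shown is a per-neuron feedback gain kept between 0 and 2, since a gain above 1 makes the neuron's internal dynamics bistable.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BistableCellSketch:
    """Illustrative sketch of a bistable recurrent cell.

    Each neuron has a feedback gain `a` constrained to (0, 2) via
    1 + tanh(.). When `a` exceeds 1, the neuron's dynamics admit two
    stable states, letting it latch one bit of information over time.
    Recurrent connections are kept element-wise (diagonal) so each
    neuron's bistability depends only on its own state.
    """

    def __init__(self, input_size, hidden_size, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(input_size)
        self.U  = rng.uniform(-s, s, (hidden_size, input_size))
        self.Ua = rng.uniform(-s, s, (hidden_size, input_size))
        self.Uc = rng.uniform(-s, s, (hidden_size, input_size))
        self.wa = rng.uniform(-s, s, hidden_size)
        self.wc = rng.uniform(-s, s, hidden_size)

    def step(self, x, h):
        a = 1.0 + np.tanh(self.Ua @ x + self.wa * h)  # feedback gain in (0, 2)
        c = sigmoid(self.Uc @ x + self.wc * h)        # update gate in (0, 1)
        return c * h + (1.0 - c) * np.tanh(self.U @ x + a * h)

# Run the cell over a long, uninformative input sequence; unlike a
# vanilla RNN, neurons whose gain exceeds 1 can hold their state.
cell = BistableCellSketch(input_size=3, hidden_size=8)
h = np.zeros(8)
for t in range(1000):
    h = cell.step(np.zeros(3), h)
```

The gating structure resembles a GRU, but replacing the full recurrent weight matrix with a bounded per-neuron gain is what allows stable equilibrium points, and hence long-lasting memory, to emerge.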