


Evolutionary music composition is a prominent technique for automatic music generation. The immense adaptation potential of evolutionary algorithms has allowed the realisation of systems that automatically produce music through feature-based and interactive composition approaches.

Long Short-Term Memory (LSTM) neural networks have been effectively applied to learning and generating musical sequences, powered by sophisticated musical representations and integration into other deep learning models. Deep neural networks, including LSTM-based systems, learn implicitly: given a sufficiently large amount of data, they transform information into high-level features that, however, do not correspond to the high-level features perceived by humans. For instance, such models are able to compose music in the style of the Bach chorales, but they are not able to compose a less rhythmically dense version of them, or a Bach chorale that begins with low and ends with high pitches, let alone do so interactively and in real time. This paper presents an approach to creating such systems. A very basic LSTM-based architecture is developed that can compose music corresponding to user-provided values of rhythm density and pitch height/register. A small initial dataset is augmented to incorporate more intense variations of these two features, and the system learns and generates music that not only reflects the style but also, most importantly, reflects the features that are explicitly given as input at each specific time. This system, and future versions that will incorporate more advanced architectures and representations, is suitable for generating music whose features are defined in real time and/or interactively.
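
To make the two conditioning features concrete, here is a minimal Python sketch, under assumed representations, of how rhythm density and pitch height can be measured and how a small dataset can be augmented with more intense variations of both. The Piece class and all function and parameter names are illustrative, not taken from the paper.

# Minimal sketch (assumed representation, not the paper's encoding):
# a piece is a list of (onset_in_beats, midi_pitch) tuples.
from dataclasses import dataclass
import random

@dataclass
class Piece:
    notes: list          # list of (onset_in_beats, midi_pitch) tuples
    length_beats: float  # total duration in beats

def rhythm_density(piece):
    # notes per beat: the first of the two conditioning features
    return len(piece.notes) / piece.length_beats

def pitch_height(piece):
    # mean MIDI pitch: the second conditioning feature
    return sum(p for _, p in piece.notes) / len(piece.notes)

def transpose(piece, semitones):
    # shift the register to create more extreme pitch-height values
    return Piece([(t, p + semitones) for t, p in piece.notes], piece.length_beats)

def thin_notes(piece, keep_ratio, rng):
    # randomly drop notes to create a less rhythmically dense variation
    kept = [n for n in piece.notes if rng.random() < keep_ratio]
    return Piece(kept or piece.notes[:1], piece.length_beats)

def augment(dataset, seed=0):
    # expand a small dataset with more intense variations of both features
    rng = random.Random(seed)
    out = list(dataset)
    for piece in dataset:
        out.append(transpose(piece, +12))        # higher register
        out.append(transpose(piece, -12))        # lower register
        out.append(thin_notes(piece, 0.5, rng))  # sparser rhythm
    return out

Transposing by an octave pushes pitch height towards the extremes of the register, while randomly thinning notes lowers rhythm density; together they yield training examples that span a wider range of feature values than the original dataset covers.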
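
The architecture itself is described only as a very basic LSTM. The following PyTorch sketch shows one plausible way to condition next-token prediction on the two features, namely appending their values to the input at every timestep; the layer sizes, token vocabulary, and injection scheme are assumptions rather than details from the paper.

# Minimal PyTorch sketch of a feature-conditioned LSTM generator (assumed
# design): two scalar conditioning inputs, rhythm density and pitch height,
# are appended to the token embedding at every timestep.
import torch
import torch.nn as nn

class ConditionalMusicLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # input is the token embedding plus the two feature values
        self.lstm = nn.LSTM(embed_dim + 2, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, density, height, state=None):
        # tokens: (batch, time); density, height: (batch,) scalars in [0, 1]
        x = self.embed(tokens)                                # (B, T, E)
        feats = torch.stack([density, height], dim=-1)        # (B, 2)
        feats = feats.unsqueeze(1).expand(-1, x.size(1), -1)  # (B, T, 2)
        h, state = self.lstm(torch.cat([x, feats], dim=-1), state)
        return self.out(h), state                             # next-token logits

Because the feature values are re-supplied at every step, they can be changed while sampling; ramping the height input from low to high, for example, requests a piece that begins with low and ends with high pitches.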

Automatic music composition and sound synthesis is a field of study that attracts continuously increasing attention. The introduction of Evolutionary Computation has further boosted research towards exploring ways to incorporate human supervision and guidance in the automatic evolution of melodies and sounds. This kind of human–machine interaction belongs to a larger methodological context called Interactive Evolution (IE). For the automatic creation of art, and especially for music synthesis, user fatigue requires that the evolutionary process produce interesting content that evolves fast. This paper addresses this issue by presenting an IE system that evolves melodies using Genetic Programming (GP). A modification of the GP operators is proposed that allows the user to control the randomness of the evolutionary process. The results obtained from subjective tests indicate that the proposed genetic operators drive the evolution towards more user-preferable sounds.
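
As one possible illustration of a user-controlled randomness knob on the variation operators, the sketch below evolves flat pitch lists rather than GP expression trees, with a single randomness parameter scaling both the mutation rate and the mutation step size, and a rate_fn callback standing in for the human listener's ratings. This is a plausible reading of the proposed modification, not the paper's actual operator definition.

# Minimal sketch of interactive evolution with a user-tunable randomness
# knob on the variation operators (an assumed reading of the paper, with
# flat pitch lists in place of GP expression trees).
import random

def mutate(melody, randomness, rng):
    # randomness in [0, 1] is set by the user and scales both how many
    # notes change and how far each changed note moves
    child = list(melody)
    for i in range(len(child)):
        if rng.random() < 0.1 + 0.4 * randomness:           # mutation rate
            step = rng.randint(1, 1 + int(6 * randomness))  # step size
            child[i] += rng.choice([-step, step])
    return child

def evolve_interactively(population, rate_fn, randomness, generations, seed=0):
    # rate_fn(melody) -> float stands in for the human listener's rating
    rng = random.Random(seed)
    for _ in range(generations):
        ranked = sorted(population, key=rate_fn, reverse=True)
        parents = ranked[: max(1, len(ranked) // 2)]        # keep user favourites
        population = parents + [
            mutate(rng.choice(parents), randomness, rng) for _ in parents
        ]
    return max(population, key=rate_fn)

Lower randomness values keep offspring close to melodies the user has already rated highly, which mitigates user fatigue by making each generation's changes easier to audition.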
