Transformer-Based Seq2Seq Model for Chord Progression Generation

Authors

Abstract

Machine learning is widely used in various practical applications, with deep models demonstrating advantages in handling huge amounts of data. Treating music as a special language and using deep learning models to accomplish melody recognition, generation, and analysis has proven feasible. In certain music-related research, recurrent neural networks have been replaced by transformers, which has achieved significant results. In traditional approaches based on recurrent networks, input sequences are limited in length. This paper proposes a method to generate chord progressions for melodies using a transformer-based sequence-to-sequence model, which is divided into a pre-trained encoder and a decoder. The encoder extracts contextual information from melodies, whereas the decoder uses this information to produce chords autoregressively and finally outputs chord progressions. The proposed method addresses the length limitation issues while considering the harmony between the chords and the melody. The generated chord progressions can be used in composition applications. Evaluation experiments were conducted against three baseline models: bidirectional long short-term memory (BLSTM), bidirectional encoder representations from transformers (BERT), and the generative pre-trained transformer (GPT2). The proposed model outperformed these baselines in Hits@k (k = 1) by 25.89%, 1.54%, and 2.13%, respectively.
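
As a rough illustration of the encoder-decoder setup described above, the sketch below wires a generic transformer encoder-decoder that maps an integer-encoded melody sequence to a chord sequence and decodes chords one at a time. It is not the authors' implementation; the vocabulary sizes, model dimensions, special tokens, and greedy decoding loop are all assumptions made for the example.

```python
# Illustrative sketch only: a generic transformer encoder-decoder mapping an
# integer-encoded melody sequence to a chord sequence. Vocabulary sizes, model
# dimensions, and greedy decoding are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

MELODY_VOCAB, CHORD_VOCAB, D_MODEL = 128, 96, 256  # assumed vocabulary/model sizes
BOS, EOS = 0, 1                                     # assumed special chord tokens

class MelodyToChords(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(MELODY_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(CHORD_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(d_model=D_MODEL, nhead=8,
                                          num_encoder_layers=4,
                                          num_decoder_layers=4,
                                          batch_first=True)
        self.out = nn.Linear(D_MODEL, CHORD_VOCAB)

    def forward(self, melody, chords_in):
        # Causal mask so each chord position only attends to earlier chords.
        tgt_mask = self.transformer.generate_square_subsequent_mask(chords_in.size(1))
        hidden = self.transformer(self.src_emb(melody), self.tgt_emb(chords_in),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)  # logits over the chord vocabulary

    @torch.no_grad()
    def generate(self, melody, max_len=32):
        # Greedy autoregressive decoding: emit one chord at a time.
        chords = torch.tensor([[BOS]])
        for _ in range(max_len):
            logits = self.forward(melody, chords)
            next_chord = logits[:, -1].argmax(dim=-1, keepdim=True)
            chords = torch.cat([chords, next_chord], dim=1)
            if next_chord.item() == EOS:
                break
        return chords[:, 1:]  # drop the BOS token

model = MelodyToChords().eval()                      # untrained; shown for data flow only
melody = torch.randint(2, MELODY_VOCAB, (1, 64))     # dummy melody token sequence
print(model.generate(melody).shape)
```

Hits@k, the metric reported in the abstract, would then measure how often the ground-truth chord appears among the model's top-k predictions at each step (k = 1 in the reported results).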

Related Articles

Model Transformer plugin generation

The current paper presents a new approach using generic and meta-transformations for generating platform-specific transformer plugins from model transformation specifications defined by a combination of graph transformation and abstract state machine rules (as used within the Viatra2 framework). The essence of the approach is to store transformation rules as ordinary models in the model space w...

Analysis of Chord Progression Data

Harmony is an important component in music. Chord progressions, which represent harmonic changes of music with understandable notations, have been used in popular music and Jazz. This article explores the question of whether a chord progression can be summarized for music retrieval. Various possibilities for chord progression simplification schemes, N-gram construction schemes, and distance fun...
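
As a toy illustration of the N-gram construction and distance functions the summary mentions, the sketch below builds chord bigrams from simplified progressions and compares them; the chord labels, the choice of N, and the distance definition are illustrative assumptions, not the article's actual schemes.

```python
# Toy illustration: build N-grams from simplified chord progressions so that
# progressions can be compared for retrieval. Labels, N, and the distance are assumed.
from collections import Counter

def chord_ngrams(progression, n=2):
    """Return a bag of n-grams (as a Counter) over a chord sequence."""
    return Counter(tuple(progression[i:i + n]) for i in range(len(progression) - n + 1))

def ngram_distance(p1, p2, n=2):
    """A simple dissimilarity: 1 - (shared n-grams / total n-grams)."""
    a, b = chord_ngrams(p1, n), chord_ngrams(p2, n)
    shared = sum((a & b).values())
    total = sum((a | b).values())
    return 1.0 - shared / total if total else 0.0

blues = ["C", "C", "F", "C", "G", "F", "C", "G"]
pop   = ["C", "G", "Am", "F", "C", "G", "Am", "F"]
print(ngram_distance(blues, pop, n=2))
```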

Seq2seq-Attention Question Answering Model

A sequence-to-sequence attention reading comprehension model was implemented to fulfill the question answering task defined in the Stanford Question Answering Dataset (SQuAD). The basic structure was bidirectional LSTM (BiLSTM) encodings with an attention mechanism as well as BiLSTM decoding. Several adjustments such as dropout, learning rate decay, and gradient clipping were used. Finally, the model ach...
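
A minimal sketch of the structure described above, assuming shared BiLSTM encoders for question and context, a simple dot-product attention, a BiLSTM decoding layer, and an answer-span head; the dimensions and the exact attention form are assumptions, not the model's reported configuration.

```python
# Minimal sketch: BiLSTM encodings of context and question, dot-product attention
# over the question, then a BiLSTM decoding layer and span-prediction heads.
import torch
import torch.nn as nn

class BiLSTMAttentionReader(nn.Module):
    def __init__(self, vocab=10000, emb=100, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.enc = nn.LSTM(emb, hidden, bidirectional=True, batch_first=True)
        self.dec = nn.LSTM(4 * hidden, hidden, bidirectional=True, batch_first=True)
        self.start = nn.Linear(2 * hidden, 1)   # start-of-answer logits
        self.end = nn.Linear(2 * hidden, 1)     # end-of-answer logits
        self.drop = nn.Dropout(0.2)             # dropout, as mentioned in the snippet

    def forward(self, context, question):
        c, _ = self.enc(self.drop(self.emb(context)))    # (B, Lc, 2H)
        q, _ = self.enc(self.drop(self.emb(question)))   # (B, Lq, 2H)
        # Dot-product attention: each context position attends over the question.
        scores = torch.bmm(c, q.transpose(1, 2))         # (B, Lc, Lq)
        attended = torch.bmm(scores.softmax(dim=-1), q)  # (B, Lc, 2H)
        h, _ = self.dec(torch.cat([c, attended], dim=-1))
        return self.start(h).squeeze(-1), self.end(h).squeeze(-1)

model = BiLSTMAttentionReader()
ctx = torch.randint(0, 10000, (2, 50))
qst = torch.randint(0, 10000, (2, 12))
start_logits, end_logits = model(ctx, qst)
print(start_logits.shape, end_logits.shape)  # torch.Size([2, 50]) twice
```

Learning-rate decay and gradient clipping, also mentioned in the snippet, would be applied in the training loop rather than in the model itself.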

Computational Model for Automatic Chord Voicing Based on Bayesian Network

We developed a computational model for automatically voicing chords based on a Bayesian network. Automatic chord voicing is difficult because it is necessary to choose extended notes and inversions by taking into account musical simultaneity and sequentiality. We overcome this difficulty by inferring the most likely chord voicing using a Bayesian network model where musical simultaneity and seq...
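
The following is only a toy stand-in for the idea, not the paper's Bayesian network: it scores each candidate voicing by combining a simultaneity term (fit with the current melody note) and a sequentiality term (smoothness of voice movement from the previous voicing) and keeps the most likely candidate. All candidate voicings, scoring functions, and weights are invented for the example.

```python
# Toy stand-in: pick the most likely voicing per chord symbol by combining a
# simultaneity score (against the melody) with a sequentiality score (against the
# previous voicing). Candidate tables and scoring functions are invented.
import math

CANDIDATE_VOICINGS = {  # invented candidate pitch sets (MIDI numbers) per symbol
    "C":  [(48, 55, 64, 67), (48, 52, 55, 60), (48, 55, 60, 64)],
    "G7": [(43, 53, 59, 65), (43, 47, 53, 59), (43, 50, 59, 65)],
}

def simultaneity_score(voicing, melody_note):
    # Prefer voicings whose top note sits close under the melody, sharing its pitch class.
    top = max(voicing)
    same_class = 1.0 if (melody_note - top) % 12 == 0 else 0.3
    return same_class * math.exp(-abs(melody_note - top) / 12.0)

def sequentiality_score(voicing, prev_voicing):
    # Prefer small total voice movement from the previous voicing.
    movement = sum(abs(a - b) for a, b in zip(voicing, prev_voicing))
    return math.exp(-movement / 10.0)

def voice_progression(chords, melody):
    prev, result = None, []
    for symbol, note in zip(chords, melody):
        best = max(CANDIDATE_VOICINGS[symbol],
                   key=lambda v: simultaneity_score(v, note) *
                                 (sequentiality_score(v, prev) if prev else 1.0))
        result.append(best)
        prev = best
    return result

print(voice_progression(["C", "G7", "C"], melody=[72, 71, 72]))
```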

Table-to-text Generation by Structure-aware Seq2seq Learning

Table-to-text generation aims to generate a description for a factual table which can be viewed as a set of field-value records. To encode both the content and the structure of a table, we propose a novel structure-aware seq2seq architecture which consists of field-gating encoder and description generator with dual attention. In the encoding phase, we update the cell memory of the LSTM unit by ...
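
One plausible reading of the field-gating idea, sketched under the assumption that a field (column-name) embedding contributes an extra gated term to the LSTM cell-state update alongside the usual input and forget gates; the shapes and the exact gating form are assumptions, not the paper's formulation.

```python
# Hedged sketch: a "field-gating" LSTM cell in which the field embedding adds an
# extra gated term to the cell-state update. Naming and gating form are assumptions.
import torch
import torch.nn as nn

class FieldGatingLSTMCell(nn.Module):
    def __init__(self, input_size, field_size, hidden_size):
        super().__init__()
        self.gates = nn.Linear(input_size + hidden_size, 4 * hidden_size)       # i, f, o, g
        self.field_gate = nn.Linear(field_size + hidden_size, 2 * hidden_size)  # l, z

    def forward(self, x, field, state):
        h, c = state
        i, f, o, g = self.gates(torch.cat([x, h], dim=-1)).chunk(4, dim=-1)
        l, z = self.field_gate(torch.cat([field, h], dim=-1)).chunk(2, dim=-1)
        # Cell memory update: standard LSTM terms plus a field-gated term.
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g) \
            + torch.sigmoid(l) * torch.tanh(z)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

cell = FieldGatingLSTMCell(input_size=64, field_size=16, hidden_size=128)
h = c = torch.zeros(1, 128)
h, c = cell(torch.randn(1, 64), torch.randn(1, 16), (h, c))
print(h.shape)
```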

Journal

Journal: Mathematics

Year: 2023

ISSN: 2227-7390

DOI: https://doi.org/10.3390/math11051111