
Style transfer and generative adversarial networks have worked well with images. How can these be applied to music?


It does seem like an interesting edge to explore, doesn't it? ML techniques combined with spectral analysis could yield some interesting outcomes. The datasets wouldn't be too hard to find either - most people have pretty extensive digital music libraries.
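
One plausible way to combine the two (my sketch, not something from a paper): treat audio as an image problem by converting the waveform into a magnitude spectrogram, run image-domain style transfer on that 2-D array, then invert it back to audio (e.g. with phase reconstruction like Griffin-Lim). A minimal NumPy sketch of the first step; the window and hop sizes here are illustrative choices:

```python
import numpy as np

def stft_mag(signal, win=1024, hop=256):
    """Magnitude spectrogram via a Hann-windowed STFT.

    Returns an array of shape (n_frames, win // 2 + 1) -- the
    image-like representation an image-domain style transfer
    model would operate on.
    """
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([
        signal[i * hop : i * hop + win] * window
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# Sanity check: a 440 Hz tone at 22050 Hz sampling rate should
# put its energy near bin 440 * 1024 / 22050 ~= 20.
sr = 22050
t = np.arange(sr) / sr
spec = stft_mag(np.sin(2 * np.pi * 440 * t))
peak_bin = int(spec.mean(axis=0).argmax())
print(spec.shape, peak_bin)
```

The lossy part is going back: a style-transferred magnitude spectrogram has no phase, so you'd need iterative phase reconstruction, which is where a lot of the audio quality gets lost in practice.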


Sony has been working on style (genre) transfer in music lately: https://youtu.be/I9M8l2guPSo?t=21m30s


Is that how they did it? Or is it just training some RNNs to repeat the same style back?



