Doing RLBF alignment steps based on which data birds do or do not recreate
VAEs, but instead of Gaussian latents it uses a "pronounceable by starling" latent space, so your hidden layer is both compressed and naturally reproducible
"bird-norming" instead of batch norm, so each layer's output can be played as a birdsong and we can hear each layer's learning in the kinds of sounds it produces
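For the bit, a minimal sketch of what "bird-norming" might look like: standard batch-norm statistics, with the normalized activations then mapped into a starling-ish frequency band so the layer could be sonified. Everything here (the `bird_norm` name, the 2-8 kHz band, the sigmoid mapping) is made up for illustration.

```python
import math

def bird_norm(activations, f_lo=2000.0, f_hi=8000.0, eps=1e-5):
    """Hypothetical "bird-norm": standardize a layer's activations like
    batch norm, then map each z-score onto an audible frequency in a
    rough starling band so the layer's state can be played as song."""
    n = len(activations)
    mean = sum(activations) / n
    var = sum((a - mean) ** 2 for a in activations) / n
    normed = [(a - mean) / math.sqrt(var + eps) for a in activations]
    # Squash each z-score into (0, 1) with a sigmoid, then scale to the band,
    # so louder activations chirp higher.
    freqs = [f_lo + (f_hi - f_lo) / (1 + math.exp(-z)) for z in normed]
    return normed, freqs
```

So training a layer would shift the distribution of chirps over time, which is presumably where the "hearing the learning" part comes in.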
(ok i'll stop but thanks for sharing this is sick!)