Lyberta.net

Freedom is what you do with what's been done to you

The Story

Last update: 28 Apr 2018.

For me, music started with NES chiptunes. They were simple and very memorable, and I really enjoyed listening to them. But I felt they could be improved by lifting the limitations of the NES sound chip. That took some time…

As soon as I got a PC with a sound card in 2002, I knew I wanted to make music, but I didn’t have any knowledge of it. Thankfully, my first DAW - ACID Pro - didn’t require any. It works with “ACIDized” loops, which have key information embedded in them, and transposes them automatically, so you get a perfect sound without any musical knowledge. Back then, ACID Pro couldn’t edit MIDI data, and I didn’t know about soft synths, so I just messed with loops and some effects. Over the years I produced my first “album”, which I later deleted due to its lack of quality.

Things changed in 2008, when I rediscovered my old Casio synthesizer. I realized I could record each note separately and use them to make my own melodies in ACID. That is when I decided to go for my dream of arranging NES music using good instruments. I started digging into NES software and found several NSF players that provided good information about the music. In particular, GNSF could play music via MIDI, NSFPlay had a visual keyboard, and NSF2MIDI did what its name said. So I thought I didn’t really need much music theory knowledge: these programs produce the notes, and I only needed to make quality instruments play them - not a big deal. For my first cover I decided to arrange the Alien 3 title theme, which is very long and difficult. I recorded each NES channel into a separate file, imported them into ACID, and planned to mute them one by one as I added my own version of each channel. I started with drums, and it was pretty easy. Even though it took about a minute to sample each drum on my synthesizer (mostly cutting the start and end), there were not many of them, and I had a drum track in about a day. Then it was time for the bass. And that is where things got messy. First, sampling the bass was long and tiresome. But the worst part was that the bass used some very weird commands, so GNSF didn’t produce anything, and NSF2MIDI produced a file with a constant pitch bend with custom limits, which ACID didn’t interpret correctly. At about the same time, I decided to look for a MIDI sequencer, because it had become obvious that I was going to work with MIDI a lot. What I found was FL Studio - it was big, strange, and alien to me, but it seemed to work well with MIDI. I also decided to upgrade ACID and found that the new version had gained MIDI editing support, so I set sampling aside for a bit and tried to write MIDI notes by hand, using FL Studio and NSFPlay as guides.
Experimentally, I found that a pitch bend of 16 (or 18, I don’t remember now) in a file produced by NSF2MIDI actually corresponds to a semitone, so I finally had a way to write the bass part. While doing that, I noticed that the MIDI instruments included in ACID actually sounded better than my synthesizer, so I scrapped the sampling plans and decided to use MIDI instruments instead. As I was mapping the mess from NSF2MIDI to good hand-written notes, my experience with the whole range of software I used grew, and I noticed that FL Studio seemed more advanced than ACID; but I had years of experience with ACID, so I decided to stick with it for the most part. I also decided to start learning music theory, because I didn’t like the fact that I had no idea why the notes I was writing sounded good.
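The conversion I worked out can be sketched like this - a minimal Python illustration, assuming the displayed pitch-bend value really is 16 per semitone (the constant and function names are mine, not anything from NSF2MIDI):

```python
# Hypothetical sketch of mapping NSF2MIDI's odd pitch-bend values to notes.
# ASSUMPTION: 16 pitch-bend units per semitone, as found experimentally
# (it might have been 18 - the exact figure was never confirmed).

UNITS_PER_SEMITONE = 16

def bend_to_semitones(bend_units: int) -> float:
    """Convert a raw pitch-bend offset into a semitone offset."""
    return bend_units / UNITS_PER_SEMITONE

def bend_to_note(base_note: int, bend_units: int) -> int:
    """Snap a bent pitch to the nearest MIDI note number."""
    return round(base_note + bend_to_semitones(bend_units))

# e.g. a bend of 32 units above A2 (MIDI note 45) lands on B2 (MIDI note 47)
```

With a mapping like this, each constant-bend segment in the exported file can be rewritten as an ordinary note at the snapped pitch.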

So I went in a different direction for some time, reading Wikipedia and googling things about music. I also noticed that FL Studio has very nice-sounding instruments, much better than ACID’s, and learned what a VST is. As I worked through equal temperament, intervals, chords, scales, keys, and modes, I was also looking for good-sounding instruments. I learned about software samplers, tried DirectWave but didn’t like it, and then stumbled on Kontakt. Oh yeah, Kontakt was awesome, and its factory library was exactly what I was looking for. Once I had all of this and some music theory knowledge, I tried to apply it, which resulted in Etude no. 1. It started as just several chords, which were routed into an arpeggiator for the melody and Sytrus with the “Pizzi” preset, then mapped by hand to Kontakt instruments.

As I was ramping up, I knew that the final mix of my Alien 3 cover had to be done in FL Studio, not ACID Pro. At about the same time, I found that I could install a virtual MIDI cable, set it as my default MIDI playback device, and then route the MIDI output of GNSF into any DAW. Although NSF2MIDI and GNSF were written by the same author, GNSF didn’t use pitch bend and instead played separate notes when the pitch of an NES channel changed. This finally made the arpeggios genuine and produced better output overall. Using that setup, I enjoyed listening to NES music played live by Kontakt instruments. I could also finally record it in ACID and have very good MIDI data at hand. GNSF output all notes at velocity 127 and used expression to change the loudness, so I still had to rewrite everything by hand, but now it was much easier. So I finally wrote all the notes by hand (those quick arpeggios took a lot of time). Then it was time to find instruments. I found guitars, bass, and drums in the Kontakt factory library, and Guitar Rig for effects. I started exporting MIDI data from ACID to FL and listening to how it sounded - and it sounded awesome. I imported everything. Now that I knew some music theory, I added power chords, a few sections, and some automation here and there. The ending took some time, because FL displays notes only with sharps, so I was like “holy crap, what scale is it when it plays A and then A#? Duh, it’s Bb! D minor”. It was February 2012 when I finished my first NES cover. All in all, it took me about 4 years to learn the software, learn music theory, find instruments, and apply everything.
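The cleanup I did by hand - folding GNSF’s expression-based loudness back into per-note velocities - could be sketched like this. The tuple-based event format is purely illustrative (GNSF actually emits a MIDI stream); CC 11 being the expression controller is standard MIDI:

```python
# Hypothetical sketch: rescale note velocities by the most recent
# expression (CC 11) value, as one might do when cleaning up MIDI
# captured from GNSF, which plays every note at velocity 127.
# Event format here is invented for illustration:
#   ('cc', controller, value) or ('note_on', note, velocity)

def fold_expression(events):
    """Return note_on events with velocity scaled by current expression."""
    expression = 127  # CC 11 defaults to full loudness
    out = []
    for ev in events:
        if ev[0] == 'cc' and ev[1] == 11:
            expression = ev[2]          # remember the running expression level
        elif ev[0] == 'note_on':
            _, note, velocity = ev
            out.append(('note_on', note, velocity * expression // 127))
    return out
```

After a pass like this, the dynamics live in the velocities themselves, which is far easier to edit in a piano roll.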

I was really hyped. I thought it was time to start making these covers one by one, since my workflow was polished: render reference material using NES software, write notes in ACID, export to FL, arrange, apply effects, render the final mix. I really wanted the Batman stage 1 theme, arranged for strings, to be my next cover. And then I found a critical bug in FL Studio: while it supposedly supported all 16 MIDI channels per track, MIDI CCs only went to channel 1. I used different channels for different articulations and needed CCs for dynamics. This quickly made FL Studio completely unsuitable for serious orchestral work, and I started searching for a different DAW. Surprisingly, most DAWs out there treat each track as a single MIDI channel, and I had to try a lot of them until I stumbled on REAPER. REAPER was way above everything else; it is also themeable, so I made my own theme based on another one, because the default is really dark and ugly. It took 3 weeks, if I remember correctly, and I had the Batman cover done in June 2012.

Now it was time to go up a level. I knew my covers fell a bit short because I used only the mouse, and all the notes were perfectly quantized, which made them sound inhuman (especially in Alien 3). So I got myself a MIDI keyboard and started learning to play it. I quickly found that I had restricted myself a lot by using only the mouse, because I couldn’t play intervals easily; having a MIDI keyboard opened up a whole new world of possibilities.

While I was learning to play, I also used my polished workflow to make prefabs for several other covers: Felix the Cat, Shatterhand, more Batman… I even completely arranged Felix, but I didn’t like it, because it was very different from what I had in mind, so I put it on hold. It also became clear that I needed more RAM. Batman consumed about 1 GiB of my 2 GiB, and that was only strings; I wanted to eventually get to a full orchestra, especially for stage 4. Also, 2 GiB is basically the limit for 32-bit Windows XP, which I had at the time. So I upgraded to 64-bit Windows 7 in September 2012… and found that Microsoft had completely changed the audio stack. My virtual MIDI cable didn’t work, and there wasn’t even a way to set the default MIDI playback device without third-party tools. Windows 7 really felt like a downgrade from Windows XP. But I was not going back, because I needed it for Visual Studio 2012 with all its C++11 features. I tried a lot of approaches but couldn’t get GNSF to route into ACID or any other DAW. So I lacked the quality MIDI data I had come to depend on. I decided to fix this once and for all - to write my own program to produce MIDI data. I obtained the source code of NSFPlay and NotSoFatso and decided to build on top of the latter. Now it was time to dig into the internals of the NES sound chip and NotSoFatso and make them serve my purpose. I like to stay away from hardware, so this was very difficult for me, and NotSoFatso is horribly Windows-dependent, but I eventually stripped all the Windows API code from it and started working on my own audio library (ftz Audio) to replace some horrible WAV-writing code in NotSo and make it support different bit depths and sample rates.

I had gotten as far as placing cue markers between NES frames by April 2013, when my life changed and I no longer had time to work on it. While I was working on this project, I also made a few etudes played live and got some of them (4, 5, and 6) to release quality (for that time). During that period, my crusade against proprietary software, started in 2010, reached full speed: I got rid of most of my proprietary video games and found free software alternatives for most of the things I used. The only exceptions were Visual Studio and my music software and samples. And I got to the point where I couldn’t make covers, because the essential software (GNSF) didn’t work and was proprietary, I didn’t have time to develop a free alternative, and this was basically the only area where I still depended on proprietary software. Besides, the NES itself is proprietary hardware, NES games are proprietary, and the music in them is proprietary. Why should I waste so much of my time reverse engineering proprietary hardware to make derivative works of proprietary music - which is really copyright infringement? No, not any more. So I went and deleted all the software, all the samples, the project files - everything except the WAVs of my finished covers, because I wanted to keep at least a little bit for memory. Before that, I had wanted to release the MIDI data I’d written to help others make their own covers, but why would I want to help other people commit copyright infringement? Nope. Goodbye, project files.

By June 2013 I didn’t have any DAW or samples; all I had were the MIDI files of my own etudes and a few exported WAVs. I knew that what I had to do was find a free DAW and free samples and continue making my own music. I quickly found LinuxSampler and LMMS, but discovered they were very unstable on Windows, so I really had to migrate to GNU/Linux. I also found the Sonatina Symphonic Orchestra and the Salamander Grand Piano. I was almost set up; I just needed a different OS. So in November 2013 I finally installed Debian on hardware, and in February 2014 I got rid of Windows completely.

Now it was time to settle in and start learning. First, JACK - the GNU/Linux way is really different. There you have separate processes that communicate with each other via JACK, not a single DAW hosting everything. So I learned Rosegarden well enough to record and play things, LinuxSampler for samples, and Calf for reverb. Using that, I was able to produce Etudes no. 7 and 8 and resurrect no. 3. Now that I believed I was using only free software and samples, I finally put everything into a GitHub repository. I feel that music should be developed the same way as free software: with the source public and a collaboration of knowledge and expertise toward a common goal - wonderful music.

Now that I had mastered the basics of melody, I quickly tried to improve on them. This led to Etude no. 11, which was much better than anything I had ever created. Around that time, I got a chance to perform some of my etudes live, to critical acclaim. With more confidence than ever, I started working on Etudes no. 12 and 13, finished 14 and 15, and performed all of the finished ones live. I got more attention from people, which led me to polish all the etudes to a higher quality.

In September 2014 I decided to try something new: songs. I knew I already had some skill; why not put it to a good cause? I could write a song about what I’m campaigning for. Thus the Free Knowledge song was born - or at least the lyrics. Lots of them. There are so many that I still can’t finish even a melody and chords, so that was put on hold. However, another project - an LGBT Anthem - was easily finished. Or at least the instrumental part. I then bought a professional microphone and recorded the vocals. I’m a horrible singer, but I don’t have anyone good to sing instead.

At about the same time, I did Etude no. 17 and improvised no. 18. I performed no. 18, the LGBT Anthem instrumental, and a few others live. In December 2014 I finished Etude no. 19 and moved my music repository from GitHub to Gitorious, because GitHub has a proprietary backend and Gitorious is GNU AGPL.

Then came a big pause. During the first half of 2015 I finished only another song - Mirror - and Etudes no. 20 and 21. I moved the repository again, this time to GitLab. The second half of the year contained mostly slight fixups, for example to Etudes no. 6 and 13. In December 2015 I released Etude no. 23, discovered the presets in ZynAddSubFX, and added it to Etude no. 5.

In March 2016 something interesting happened. When I sleep, I sometimes hear wonderful music, but it’s usually quickly lost. This time it wasn’t. But I knew that if I pressed a single key on my MIDI keyboard, I would lose it. So, to avoid waking my dad, I grabbed my mobile phone, went into the kitchen, and hummed the melody into a voice recorder. Then I opened the recording in Sonic Visualiser and selected the melodic spectrogram. Since I could sing only one note at a time, I was able to decipher the melody and write down the notes. In about 3 hours I had finished Etude no. 26 - one of my best, and one that really stands out. For example, when I sit at the keyboard, 99% of the time I start with the tonic and barely ever with the dominant. But this etude was in F# minor and started with an A-C#-B melody over a B minor chord - the subdominant. I would never consciously produce such a melody. This was a rare success.
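The last step of that transcription - reading a frequency off the spectrogram and turning it into a note - boils down to the standard equal-temperament formula. A small sketch (the helper name is mine):

```python
import math

# Sketch: convert a frequency read off a spectrogram into a note name,
# using 12-tone equal temperament with A4 = 440 Hz.
A4 = 440.0
NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def freq_to_note(freq: float) -> str:
    """Round a frequency to the nearest MIDI note and name it."""
    midi = round(69 + 12 * math.log2(freq / A4))  # 69 is MIDI A4
    return NAMES[midi % 12] + str(midi // 12 - 1)
```

Applied to each peak of a hummed monophonic melody, this gives the note sequence directly.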

The next key event was also in March 2016, when I watched a Let’s Play of Undertale. I loved the game, but its music was something from another world - one of the greatest things I’ve ever heard in my life. What’s most interesting is that it uses a lot of chiptune sounds. So after listening to it for a couple of months, I decided to dust off my NES knowledge and make some chiptunes. I looked at the available synths but couldn’t find a free software one. So I decided to write one. And so, in June 2016, ftz NES Synth was born. The basic waveforms took a few weeks, and I finished the 1.0 version in late June. Then I decided not to confine myself to the NES and to make more waveforms and effects, so I renamed the project to ftz Chiptune. I was also looking at the source code of NSFPlay (again).

In May 2016 I decided to slowly move toward controversial topics and made the song Religion - one of my best.

In June 2016 I volunteered to host a website for the No Budget Orchestra at nobudgetorchestra.net. It was developed by Jeff, who hangs around at LinuxMusicians. He was uploading individual instruments in SFZ format, so I thought it would be a good idea to host a website dedicated to them.

In July 2016, during a random conversation at LinuxMusicians, I learned that the LinuxSampler backend is actually proprietary: the license says it is GNU GPLv2, but commercial use is prohibited. I was shocked, as I realized that my music had been made with proprietary software - something I had tried to avoid at all costs. I started carefully checking the licenses of everything I used and discovered that the Sonatina Symphonic Orchestra is also proprietary - CC Sampling Plus - and so are some instruments from the No Budget Orchestra. At this point I became desperate, because about half of my setup was proprietary. I was enraged; I wanted to fight this. The first thing I did was change the license of all my works. Before that I was friendly and thought the public domain was the way to go, so I used the Unlicense for code and CC0 for music. Now I understood that strong copyleft is the only way to combat this injustice, so I chose GNU GPLv3+ for code and CC BY-SA 4.0+ for music.

I kept messing with the NSFPlay source code, stripped all the Windows API code from it, and added enough code to ftz Audio to play audio via ALSA. This was enough to create a console application that I called ftz NSF Player; I released the first version in July 2016 and quickly followed with a Qt version. Also during that month, I released Etude no. 27.

Sometime during the summer of 2016, my plans for my own video game started to take shape. It would be called Justice and would be an ultraviolent top-down shooter. I quickly started working on a main theme based on the concept of a school shooting, because I had wanted to create something like this for a long time. Sure enough, I finished the Main Theme in September. After that I decided to write a theme for the mosque level, because I wanted to try something in an Arabic scale, and finished the calm part in December.

But still, I was pissed that I remained dependent on proprietary software and samples, so making music was no longer as enjoyable. After finishing the mosque level I felt pretty wasted and wanted to fix this. I searched for free samplers and found only SFZero, which mostly targeted macOS and had no ready GNU/Linux build. I searched for free orchestral libraries and found only VSCO: CE, but it looked like it had been tested only on popular proprietary samplers, and several bugs in LinuxSampler made it unusable there. At the time, however, I didn’t realize this was LinuxSampler’s fault. So in the end I decided to write my own sampler - ftz Sampler - and assemble my own orchestra - Libre Orchestra. Since I didn’t want to spend much money, I decided I would no longer pay for nobudgetorchestra.net and would transfer it to Jeff, paying for libreorchestra.net instead. Both projects were started in January 2017.

The rest of 2017 was rather uneventful. I released a draft of Etude no. 30 in March and quickly understood that I needed to move to a full-featured DAW, because SFZero and ftz Sampler would be plugins; handling one instrument at a time is much easier than running a whole server the way LinuxSampler does. After some pondering I decided to go with Ardour, but it didn’t fit on my monitor, so I put those plans on hold. I released Etude no. 28 in April.

nobudgetorchestra.net was expiring in June, and Jeff didn’t contact me about the transfer, so I moved the NBO website to nbo.libreorchestra.net. Eventually the original domain expired and was cybersquatted.

In May 2017 I decided to do something a bit unconventional - a cover, or rather my take on The House of the Rising Sun by The Animals. I kind of cheated and just downloaded a MIDI file from the Internet, because there was no other way for me to obtain a good score. Instead, I focused on instruments and effects. Man, using JACK to manage all this is a pain; I definitely needed a DAW. I posted my version online, but people noted that there wasn’t much originality to it, so I took it down and kept it for my own amusement only.

In September 2017 I finally took the time to prepare the first release of Libre Orchestra. I went through all the instruments in the No Budget Orchestra and cherry-picked the free ones; they formed the first release. I also started adding OpenCL band-limited synthesis to ftz Audio, with the intention of including it in ftz Chiptune, but I stumbled on a segfault in Clang or LLVM which blocked progress.
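Band-limited synthesis here means generating a classic waveform from only the harmonics below the Nyquist frequency, so it doesn’t alias. A plain-Python sketch of the additive approach (the actual ftz Audio code used OpenCL, and this is just one way to do it; the function name is mine):

```python
import math

# Sketch: additive band-limited sawtooth. Only harmonics below the
# Nyquist frequency (sample_rate / 2) are summed, so the result is
# alias-free, unlike a naive ramp waveform.

def bandlimited_saw(freq, sample_rate, num_samples):
    """Generate num_samples of a band-limited sawtooth at freq Hz."""
    nyquist = sample_rate / 2
    max_harmonic = int(nyquist // freq)
    out = []
    for n in range(num_samples):
        t = n / sample_rate
        # Fourier series of a sawtooth: sum over k of sin(2*pi*k*f*t) / k
        s = sum(math.sin(2 * math.pi * k * freq * t) / k
                for k in range(1, max_harmonic + 1))
        out.append((2 / math.pi) * s)
    return out
```

Summing sines per sample is embarrassingly parallel, which is presumably why offloading it to OpenCL is attractive in the first place.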

In April 2018 I found that LinuxSampler had fixed most (if not all) of the bugs triggered by VSCO, so it became very much usable.

The story continues…