My name is Zachary William Zyla. I am an aspiring audio engineer with dreams of becoming a mix engineer and/or recording engineer in music, film, and video game production. My time here at Full Sail University has broadened my horizons immensely, not just in music but in Foley and the many other post production and sound design careers that interest me. I came in solely to pursue a career in music, but quickly realized that's just the tip of the audio iceberg. Classes like Post Production, which covers the audio side of movies and television, and Interactive Audio, along with my current class, Advanced Interactive Audio, have shown me both how to create new and innovative sounds and how to implement them into video games. I've quickly realized the limitless potential in front of me. Whatever I finally decide to become, the goal is not only to love what I do, but also to be one of the best in my field.
In this blog post I will be talking about what went well and what may not have gone as well in my projects. I hope to be able to look back and read this and not only learn from my mistakes and grow, but also maybe help others avoid similar ones.
Section 2: Unreal Engine 4's Platformer
This was my first time importing sound effects I created into a video game, let alone using Unreal Engine 4. This process is known as implementation: creating triggers for sounds, some of which fire on animations, some of which are ambiences playing in set locations, and some of which score an entire cut scene. Overall I thought it went extremely well thanks to UE4's easy-to-use interface. While the project seemed extremely daunting at first glance, it was all rather simple work when taken bit by bit rather than as a whole. I created sounds using royalty free sound banks and a few Foley recordings of my own. The game starts with a few cut scenes of what looks like an escaping van being shot at until it crashes, releasing the robotic creature the player controls as it tries to flee to safety. Using what we've learned in class, I paid attention to every detail, not just the forefront action but also the environment, the characters themselves, and the context and meaning behind each animation used.
Aesthetically I was aiming for exactly what the game seemed to need: futuristic robotic noises in a modern day city that also suited the light, fun style of the game's cartoon artwork. This approach is called hyperrealism, meaning doing what's expected while adding more than what's expected. From the size of the robot, and the fact that its steps blow up cars and break ramps, you can tell it is very heavy. So boosted low end, metallic footsteps layered with the sounds of moving machinery and gears were ideal. To keep things lighthearted, within the bulky heaviness of the robot I tucked in smaller, less noticeable noises. For the robot's jumps, for instance, I combined a loose spring with a Pogo stick bouncing and mixed that with the same heavy, metallic footsteps to simulate it launching into the air and then catching itself on the ground, in a way that sounded serious and real enough without taking itself too seriously.
Though the list of required assets seemed never ending, once I had finished a set and started implementing it into the game, I really got to see everything come together. I was given video of the character's animations and cut scenes to design my sounds around. This was extremely helpful in judging the length of each sound needed, and as a result I felt my cut scenes (or Matinee sequences, as they're referred to within the software) and one shot sounds were perfect. Where this failed was with my slide asset. I had to re-import a new bounce more times than I can count on one hand. In hindsight I should have recorded my own Foley or found a new recording to use, because the loop I created had to be cut so much that it became glitchier and more of a pain to work with than I'd hoped. I finally settled for what I had made only due to a shortage of time.
At first I struggled to understand UE4's implementation process, specifically for my ambiences. I brought them in by dragging them into the window, but what I didn't understand was how to properly spatialize the ambiences to fit within the created world. This is handled by attenuation, the audible area of the file. At first I was only adjusting the height and would lose the ambience halfway through the level; then Brian, my lab instructor, explained the x, y and z axes used to create depth, height and width within the project, as well as falloff distance. After that it was smooth sailing: outside of my slide asset issues, most if not all of my sounds were implemented without issue the very first time. The hardest part was timing them exactly to the animations. Using modulation and randomization, I created new variations from my already made sounds to make them feel more natural to the audience and less repetitive.
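The attenuation idea above can be sketched as a small Python model. This is not UE4 code; it is an illustrative linear falloff, one of several curve shapes an engine might offer, and the radii and positions are made-up numbers.

```python
import math

def attenuation_gain(listener, source, inner_radius, falloff_distance):
    """Linear distance attenuation: full volume inside inner_radius,
    fading to silence at inner_radius + falloff_distance.
    Illustrative model only, not UE4's actual implementation."""
    # 3D distance across x, y and z, so depth, width and height all count
    dist = math.sqrt(sum((l - s) ** 2 for l, s in zip(listener, source)))
    if dist <= inner_radius:
        return 1.0
    if dist >= inner_radius + falloff_distance:
        return 0.0
    return 1.0 - (dist - inner_radius) / falloff_distance

# Listener 150 units from an ambience with a 100-unit radius and
# a 200-unit falloff distance: gain is 0.75
print(attenuation_gain((150, 0, 0), (0, 0, 0), 100, 200))
```

Adjusting only one axis, as I first did, is equivalent to shrinking the audible shape in one dimension, which is why the ambience vanished partway through the level.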
As for mixing, I mixed the project as I added more sounds in. Going into the Platformer knowing that the only thing I could do volume-wise was turn sounds down helped me a lot. I created all of my sounds loud so that I could turn them down as needed and not have to worry about anything being too quiet and getting lost in the mix. Using real time automation I was able to duck the other sounds in the game to bring out the sound I designed for the slow motion, or matrix, effect triggered by the robot's jump and slide at the beginning.
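The ducking move can be modeled as a simple gain reduction applied to every track except the featured one. This is a toy sketch, not engine code; the track names and the -12 dB amount are invented for illustration.

```python
def duck_gains(track_levels, featured, duck_db=-12.0):
    """While a featured sound plays, reduce every other track by duck_db
    decibels. Track names and values here are made up for illustration."""
    factor = 10 ** (duck_db / 20)  # dB to linear gain; -12 dB ~ 0.25
    return {name: (lvl if name == featured else lvl * factor)
            for name, lvl in track_levels.items()}

mix = {"footsteps": 0.8, "city_ambience": 0.6, "slowmo_whoosh": 0.9}
ducked = duck_gains(mix, featured="slowmo_whoosh")
```

In a real game the ducking would ramp in and out over time rather than snap, but the principle is the same: everything else steps back so the featured sound reads clearly.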
Lastly, I was put into a group project to create and implement sounds for the video game Limbo. First of all, working in a group is totally different than working alone, and it has its advantages and disadvantages. On the plus side, you're not alone with the weight of the work; you split it up among yourselves and then bring it all together. The downside is that some people may not be as up for the task as you are and either slack off or rush the work. In our case some of the work was more rushed than it should have been, but with communication and hard work we were able to bring everything together into one final project.
In terms of aesthetics, Limbo is a darker game, which we wanted to reflect in our ambiences. As a guideline we challenged ourselves and picked "Hotline Miami", which is a total 180 from Limbo's style: an in-your-face, gory, colorful, 8-bit reminiscent video game. The challenge ended up being more daunting than we had anticipated, but with some clever ingenuity we were able to bring out similarities between the two games, using what can only be described as a synthetic "Super Mario jump" and highly graphic sounding deaths. In the end we came up with something that was dark and sinister like the imagery of Limbo but that carried a somewhat comical, 8-bit aesthetic.
The one thing that really challenged us was that we built this project using Wwise instead of UE4. While UE4 is a game engine, a code based framework for the creation and development of video games, Wwise is audio middleware: software that sits between the sound designer's assets and the game engine, handling the integration and data transfer between the two systems. That meant Wwise was a whole new learning curve all its own. Once my teammates and I understood the folder hierarchy Wwise uses, our sounds were more easily implemented and mixed than in the UE4 platformer. This was because of the amount of control Wwise gives its users over audio. It was much easier to mix in, and it even has the ability to edit clips and loops within itself. We stumbled on that feature because of a few editing mistakes in our assets, and it turned out to be a massive time saver.
Because the game is a two dimensional side scroller, the audio had to reflect that as well. Since the game stays in one constant perspective, the audio had to be non-positional, meaning there is no attenuation or spatialization used: you can only hear what you can see. The best example of this is the wooden platform swinging at the end of the rope. You hear it clear as day, and you keep hearing it until either it moves off screen or the platform itself stops rotating.
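The "hear only what you can see" rule can be modeled as a screen-space check rather than a 3D distance check. This is an illustrative sketch, not Limbo's or Wwise's actual logic; the screen width, positions, and fade margin are invented numbers.

```python
def onscreen_gain(source_x, camera_x, half_screen_width, fade_margin=50.0):
    """Non-positional 2D audio: full volume while the source is on screen,
    a short fade as it leaves, silence once it is fully off screen.
    All values are illustrative, not the game's real numbers."""
    offset = abs(source_x - camera_x)
    if offset <= half_screen_width:
        return 1.0
    if offset >= half_screen_width + fade_margin:
        return 0.0
    return 1.0 - (offset - half_screen_width) / fade_margin

# A swinging platform 100 units right of the camera on a 320-unit-wide
# view is on screen, so it plays at full volume
print(onscreen_gain(100.0, 0.0, 160.0))
```

Note the contrast with the platformer's attenuation: gain here depends on horizontal screen position only, never on depth, which is what keeps the mix locked to the fixed side-on perspective.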
One aspect of Wwise that was both our greatest strength and our greatest weakness was the actor-mixers. Actor-mixers are basically mix buses within Wwise. While really useful, they can be just as confusing if you're not careful and diligent with Wwise's folder hierarchy. We were instructed to create and use one, but my group quickly became more ambitious than we should have. We tried creating additional actor-mixers too late in the setup, which caused a lot of confusion and missing files. Luckily we had a backup and were able to recover from our mistakes and start fresh from our previously saved files. Once we re-established ourselves we were much more careful, and we were able to mix our game exactly as we intended.
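The way an actor-mixer acts like a mix bus can be pictured as gains multiplying down the folder hierarchy: every sound inherits the level of each actor-mixer above it. This is a simplified Python stand-in, not Wwise's actual data format, and the names and gain values are made up.

```python
# Toy stand-in for a Wwise-style actor-mixer hierarchy (names invented)
project = {
    "gain": 1.0,
    "children": {
        "Ambiences": {
            "gain": 0.5,
            "children": {
                "cave_loop": {"gain": 0.8, "children": {}},
            },
        },
    },
}

def effective_gain(node, path):
    """Multiply gains from the root actor-mixer down to the sound."""
    gain = node["gain"]
    for name in path:
        node = node["children"][name]
        gain *= node["gain"]
    return gain

print(effective_gain(project, ["Ambiences", "cave_loop"]))  # 1.0 * 0.5 * 0.8
```

This inheritance is exactly why adding actor-mixers late is dangerous: moving a sound under a new node silently changes every level it inherits, which matches the confusion we ran into.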
Overall, I wish we had more time in Wwise. We had more than ample time to work and get comfortable with Unreal's game engine, but I felt much more rushed to finish in Wwise. Despite the time constraints, my group and I felt our version of Limbo was still a success.
Below is a combination of clips of some of the audio work I did for Limbo. The sounds included are as follows: all footsteps on the grassy ground and wood; all water sounds, including the splashes, the drowning, and the boat and water ambiences; the noises of the boat gliding through the water and being dragged through the sand; and the ambience for the cave.