Very wavy experimentation using FilterForge.
I’m really not a “pixel art” type of guy. I wish I were, but unfortunately I’m just not that great at it. So what about textures? Those are more accessible to me honestly, a step down from full drawings and such. So I began investing a bit of time trying to figure out how to make them happen. After a couple rounds of doing manual pixel art, I realized I could potentially make procedural textures in that style. Of course, they won’t have the same feeling as manually created pixel art textures, but this project was all about seeing how close I could get. First of all, some things I’ve tried in the past.
This was for my thesis film: The Door to Tomorrow
In the two cases above, the original textures come from procedural means. I had to do this partly because I had little time to generate nice-looking backgrounds for the “game scenes” in the animated film. The textures were first generated in FilterForge, then processed again in FilterForge using a filter called I Dream in EGA by Mike Blackney. The filter has custom color inputs, so I can choose a few colors matching the scheme of the provided texture. That way we don’t just end up with the default 16 colors.
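The core of that palette step is just nearest-color quantization: every pixel snaps to the closest color in a small custom palette. Here’s a minimal Python sketch of the idea, not the actual filter’s internals; the palette colors below are made-up examples, not the ones I used.

```python
import numpy as np

def quantize_to_palette(image, palette):
    """Map each pixel of an RGB image (H, W, 3) to the nearest
    palette color by Euclidean distance in RGB space."""
    img = image.astype(np.float64)
    pal = np.asarray(palette, dtype=np.float64)
    # Distance from every pixel to every palette entry: (H, W, N)
    dists = np.linalg.norm(img[:, :, None, :] - pal[None, None, :, :], axis=-1)
    nearest = np.argmin(dists, axis=-1)          # (H, W) palette indices
    return pal[nearest].astype(np.uint8)

# A few hand-picked colors matching the texture's scheme,
# instead of the full default 16-color palette.
palette = [(20, 12, 28), (68, 36, 52), (133, 76, 48), (222, 238, 214)]
tex = (np.random.rand(8, 8, 3) * 255).astype(np.uint8)
out = quantize_to_palette(tex, palette)
```

Swapping in different palette entries is what keeps the result matched to the source texture’s color scheme.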
I can then pick and choose which parts of the texture I want to use, and erase bits and pieces of pixels to get “stuff sticking out.” The method is simple, rapid, and quite good looking as long as you don’t change pixel scales.
But I kinda wanted a bit more. So I began investing some time doing more manual pixel art to learn the little considerations involved. There are tons, obviously, and after studying more pixel-art-based textures, I came up with a small method that achieves the style while staying easy to deploy. Of course, this is FAR from a complete solution for that specific clarity of style, but it’s a decent small step forward.
I have a copy of FilterForge 4 beta 1, which features groups, and I’m using that. Meet the Pixel Shader: a grouped component that takes a color map and a height map and spits out pixely textures with shading.
The method employed here is actually relatively simple; the output look is what’s difficult to achieve. In creating the little shader, I basically had to modify the entire texture to get a nice-looking image. So it’s a bit more work than just “plug and play,” unfortunately. But deployment is fairly easy, and the adjustments you have to make to the filter are pretty basic.
The internals of this component aren’t impressive at all. It’s in fact stupid simple.
The package contains two derivative nodes so you can have two directional lights with controllable colors. There’s ambient lighting, a shading bias, and several more options to fill in the goodness. The final stage is the pixelation, which happens via a checker node. I use it because it has two color inputs: if you feed another image into the checker, you get a “checkered” result between the two images, which can be useful for a “dithered” look. In this case the filter doesn’t need that effect, so the output comes straight out.
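To make the idea concrete, here’s a rough Python sketch of what such a component does conceptually: derivatives of the height map drive two colored directional lights plus an ambient term, and a checker mask between two pixelated inputs gives the optional dither. This is my reading of the concept, not FilterForge’s actual node math; all names and parameters here are invented.

```python
import numpy as np

def shade(height, color, light_dirs, light_colors, ambient=0.2, bias=0.0):
    """Light a color map using derivatives of a height map, with any
    number of colored directional lights plus a flat ambient term.
    `bias` shifts the shading threshold (a made-up control)."""
    dy, dx = np.gradient(height.astype(np.float64))
    # Surface normal from the height derivatives (unnormalized z = 1)
    normal = np.dstack([-dx, -dy, np.ones_like(dx)])
    normal /= np.linalg.norm(normal, axis=-1, keepdims=True)
    lit = np.full(height.shape + (3,), float(ambient))
    for ldir, lcol in zip(light_dirs, light_colors):
        d = np.asarray(ldir, dtype=np.float64)
        d /= np.linalg.norm(d)
        lambert = np.clip(normal @ d - bias, 0.0, 1.0)   # diffuse term
        lit += lambert[..., None] * np.asarray(lcol, dtype=np.float64)
    return np.clip(color * lit, 0.0, 1.0)

def pixelate_checker(img_a, img_b, block=4):
    """Pixelate both inputs by block-averaging, then mix them with a
    checker pattern (the two-input checker trick) for a dithered look.
    Pass the same image twice for a straight pixelated output."""
    h, w, _ = img_a.shape
    def blocky(img):
        s = img[:h - h % block, :w - w % block].reshape(
            h // block, block, w // block, block, 3).mean(axis=(1, 3))
        return np.repeat(np.repeat(s, block, axis=0), block, axis=1)
    a, b = blocky(img_a), blocky(img_b)
    yy, xx = np.indices(a.shape[:2])
    mask = ((yy // block + xx // block) % 2).astype(bool)
    out = a.copy()
    out[mask] = b[mask]
    return out
```

The two-lights-plus-ambient part maps onto the controllable light colors mentioned above, and the checker mask is the “two images into one checker node” trick.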
But in the end, what you ULTIMATELY need is a good texture design to begin with. In this case, I designed the filter and then modified it to work with the system I implemented. The idea here is that the little component isn’t doing the entire job; it’s the whole filter that gives it the good look and effect.
I went around modifying other filters too, to achieve the same effect. A couple months ago I produced a filter that creates sci-fi tech walls. It’s a pretty detailed filter, and I thought it could use a bit of pixelation. Here’s a screen cap of the presets after adding the Pixel Shader group component to the filter.
Of course, it’s flawed in that it lacks the true goodness of hand-crafted pixel art. I’m not going to claim I’ve made the pixel art machine, but it gets part of the job done. In my mind, the next step after producing the texture is to actually go in and add more context by hand. I made this to help myself a bit along the way, not necessarily to finalize a product.
While I don’t want to be too restrictive, I’ll need to keep the filter to myself for now. There are still a lot of things being developed for it, and I’m using it personally in my job as well. This post exposes some of its secrets but doesn’t explain it all.
In part two of this discussion, I’ll talk a bit more about designing the filter so it works around the Pixel Shader.
This post discusses a filter I produced in FilterForge a while ago: http://www.filterforge.com/filters/9726.html (you can download the filter there too).
I’m not really THAT technical with this stuff. I honestly don’t know all the math behind each node I use in FilterForge, but I have some clues and hints about how some of it works.
Either way, some time ago I wanted to produce a multi-level sharpen filter. Unlike a single sharpen pass, this “dream” filter allows for a broad range of sharpening. To do this I used FilterForge.
Let me show you some pictures of what happens when I use it. First we have our original photograph.
The image has not been processed in any way. Now for the processed image.
This is very subtle (I’m using default values) but you can see some changes in the contrasts of various areas. Some areas “pop” now more than others.
Here’s a comparison so you can see what’s different. It’s still kinda subtle, but you can mildly see what’s happening.
What makes it pretty neat is how this sharpen filter works: it operates on multiple levels of detail, allowing you to sharpen small, medium, and large details separately. So if you ever want to sharpen a LARGE area, you can turn down the small and medium sharpen levels and leave the large sharpen at higher values. If you want small details to pop, you can do that too.
To show you what can visually happen here’s another shot at the image above.
With a couple additional nodes I was able to produce a filter that allows for various degrees of control while sharpening a photograph.
Now for the exciting technical aspect of how this was accomplished! You’ll probably need to click through to see it at full resolution.
The basics go like this: I have multiple highpass filters with different radius settings. The radius values weren’t chosen mathematically; they were chosen based on visual output, so nothing fancy there. Then you see this huge stack of Min and Max nodes, which allow for combinations of the highpass outputs. This is really the fun part, because it’s where I didn’t know what to do. I began combining the highpass nodes using various tools, then realized I should just stick with Min and Max because of the way they operate, and the results looked valid enough to use. I then used a Blend node (set to Overlay, as you’d expect).
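The whole node graph can be sketched in a few lines of Python: one high-pass per radius, a Min chain and a Max chain across the levels, and an Overlay blend back onto the original. This is an approximation of the idea under my own assumptions (a box blur stands in for the blur inside each high-pass, and the per-level strengths and radii here are invented), not the filter’s exact node math.

```python
import numpy as np

def box_blur(img, radius):
    """Simple box blur, standing in for the blur inside a high-pass."""
    k = 2 * radius + 1
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def highpass(img, radius):
    """High-pass = image minus its blur, re-centered around 0.5."""
    return np.clip(img - box_blur(img, radius) + 0.5, 0.0, 1.0)

def overlay(base, layer):
    """Standard Overlay blend of a detail layer onto the base image."""
    return np.where(base < 0.5,
                    2.0 * base * layer,
                    1.0 - 2.0 * (1.0 - base) * (1.0 - layer))

def multi_level_sharpen(img, radii=(1, 3, 8),
                        strengths=(1.0, 1.0, 1.0), mix=0.5):
    """One high-pass per radius (small/medium/large detail), each with
    its own strength, combined through a Min chain and a Max chain,
    blended by the user-facing `mix` slider, then overlaid on the
    original."""
    layers = [0.5 + s * (highpass(img, r) - 0.5)
              for r, s in zip(radii, strengths)]
    lo = np.minimum.reduce(layers)   # Min chain (darkening detail)
    hi = np.maximum.reduce(layers)   # Max chain (brightening detail)
    detail = mix * hi + (1.0 - mix) * lo
    return np.clip(overlay(img, detail), 0.0, 1.0)
```

Turning a level’s strength down toward zero flattens its high-pass toward neutral gray, so that level stops contributing, which is the “sharpen small, medium, and large separately” behavior described above.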
Now this is the really weird part. There are two things being mixed here: two separate chains of Min and Max nodes. To combine them, I used a Blend node, and lastly I gave the user control over which side to favor. If they want a brighter image, they can slide the control to achieve what they want visually.
And that’s basically all there is to this filter!
Here are some more examples of the filter in action.
Of course, you do have to be careful about over-sharpening images. I just wanted to show you what it can basically do for you.
In the last “Havoc post” I showed you the assets of a particular scene in the film and promised I would post how it would look. Well, it took a while, but I did it and finished this scene off; now it just needs to be added into the film. I used Cinema4D for its super-easy-to-use Cloner tool and ultra-fast renderer. Of course I could do this in Maya, I just didn’t, for all sorts of reasons. Either way, the houses were modeled and UV mapped in Maya, then painted using Photoshop and FilterForge. The fun stuff was in Cinema4D: by throwing the models into a Cloner Object, I can easily scatter the assets in various arrays. I then applied a MoGraph Random effector, which lets me scatter the objects all over the place, with control over size and rotation as well for further randomness.
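The cloner-plus-random setup boils down to generating a random transform per clone from a seed, so the scatter is repeatable. Here’s a tiny Python sketch of that idea; the names, ranges, and defaults are all invented for illustration and are not Cinema4D’s API.

```python
import random

def scatter_clones(count, area=(40.0, 40.0), scale_range=(0.8, 1.3), seed=7):
    """Generate one random transform per clone: a random position on
    the ground plane, a random Y rotation, and a uniform random scale.
    A fixed seed makes the scatter reproducible between renders."""
    rng = random.Random(seed)
    clones = []
    for _ in range(count):
        clones.append({
            "pos": (rng.uniform(-area[0] / 2, area[0] / 2), 0.0,
                    rng.uniform(-area[1] / 2, area[1] / 2)),
            "rot_y": rng.uniform(0.0, 360.0),
            "scale": rng.uniform(*scale_range),
        })
    return clones
```

An effector in the real tool layers these random offsets on top of the cloner’s base array, which is what gives each house its own size and rotation.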
Rendering this is actually the cool part. Cinema4D’s render engine is super robust, and when I say robust, I really mean speed. I baked all the GI first, per frame. This is mostly done automatically, though I enabled a couple settings to help speed up the render. Furthermore, I optimized the render by baking some of the lights. It’s as easy as Maya’s light baking, though there are several more interesting options that give Cinema4D’s light baking a further edge.
Either way here’s a sample of the render. You can click on it to see the large image.
The animation took around 3 hours to render, which is kinda long, but it’s with GI, so I don’t think it’s that bad. Anyway, I wonder if I should spoil the ending in this post, as it’s something I want to keep quiet till the end of the animation. OR you can totally spoil it for yourself by viewing it here:
The other thing I wanted to show was the compositions (layering) in After Effects. You can click on the image below to see the structure for yourself.
I also added a very quick and simple (and realistic) eye blink within After Effects. It’s missing the eyelashes right now, and I have no idea when I’ll actually have time for those. The hard part is the animation; we need to get that done first.
That’s it for now. It’s almost there, it just needs to be done now.