More VFX History

This is another rant I went on many years ago. I’m posting it because, hey, I need content, and I think it’s kind of interesting. I’m not forcing you to read this shit.

Float has to do with bit depth. In Shake, we can store image information at three different bit depths: 8, 16, and "float" (32 bit).

8 & 16 bit are pretty much the way most of us are used to compositing. We divide the brightness of the image from black to white into so many steps. In 8 bit, it’s 256 steps, or levels of grey. In 16 bit, it’s 65536 steps. Obviously with 16 bit, we have a lot more gradations between black and white.

Floating point is new, and until recently somewhat unique to Shake. Essentially, the idea is that instead of breaking up the shades from black to white into discrete, separate levels of grey, we can code those numbers as "infinitely" small decimal values. So between black and white, instead of roughly 65k steps, we can have an effectively infinite number of steps.
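If it helps to see the difference outside of Shake, here's a little NumPy sketch (not Shake code, just an illustration of the idea) showing how many distinct shades survive when you quantize a smooth gradient at each bit depth:

```python
import numpy as np

# A smooth gradient of "true" brightness values between black (0.0) and white (1.0).
values = np.linspace(0.0, 1.0, 1000)

def quantize(v, bits):
    """Snap values to the nearest representable level at a given bit depth."""
    levels = 2**bits - 1          # 255 levels above zero for 8 bit, 65535 for 16 bit
    return np.round(v * levels) / levels

v8 = quantize(values, 8)     # only 256 distinct shades can survive
v16 = quantize(values, 16)   # up to 65536 distinct shades

# Float storage just keeps the original decimal values as-is.
print(len(np.unique(v8)), len(np.unique(v16)), len(np.unique(values)))
```

Run it and you'll see the 8-bit version collapses the thousand input shades down to 256, while 16 bit and float keep them all apart.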

There’s another aspect of floating point that can give us problems, especially if you come from other packages. Instead of just having black and white, and all the shades in between, I can have superwhite & superblack. White that is whiter than white.

Shake likes to deal with what we’d call “normalized” numbers. What that means is we’ve all agreed that the number “0” means black, and the number “1” means white. It’s arbitrary, though there are perfectly good reasons for this convention.

Floating point lets me have numbers higher than 1, or less than 0. Since this is kind of new to most people, that can be weird. And a lot of the gags we are used to doing in other packages (like adding a couple of mattes to make a combined matte) don’t work the way we expect in floating point.
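Here's a quick NumPy illustration of that matte gag going sideways (again, just a sketch of the idea, not Shake code): in a clamped 8/16-bit world, adding two overlapping mattes pins at white, but in float the sum sails right past 1 into superwhite territory.

```python
import numpy as np

matte_a = np.array([0.0, 0.6, 1.0])
matte_b = np.array([0.0, 0.7, 1.0])

# 8/16-bit style: storage clamps the sum back into the 0..1 range,
# so the combined matte tops out at white.
clamped = np.clip(matte_a + matte_b, 0.0, 1.0)

# Float style: nothing clamps, so overlapping areas go superwhite (> 1),
# and anything downstream that assumes 1 means "solid" can misbehave.
floating = matte_a + matte_b

print(clamped)   # stays within 0..1
print(floating)  # middle value exceeds 1
```

That out-of-range matte is exactly the kind of thing that works fine in an 8-bit package and quietly breaks a comp in float.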

Your problem is that your data in the Z channel is also stored in floating point. This makes good sense for z information. Z info is not made for human eyes. It’s data, indicating how far in depth something is from the camera.

So what you want to do is "normalize" your z-depth information so that you can view it. This is conceptually simple ("get a luma of the z-channel" is what most people say), but it's technically more complicated.

Our biggest problem is that we don’t know what the numbers are in the z channel without examining it, and we can’t “see” the information because it wasn’t made for human eyes. So, here’s what I do.

Take your image with z information. Put a Bytes operator on it, and set it to 4 bytes (floating point). This makes sure that the rgb channels can store the sort of information that normally lives in the z-channel.

Okay, now put a reorder on the Bytes, and set the reorder to “zzzz”. This copies the z information into the red, green, blue & alpha channels.

Fine, but you can’t see anything. The image might be all black, or all white, it depends on how the z information was created and what it represents. Now the rest of the explanation, I have to wing it, because:

  1. I don’t have access to Shake anymore.
  2. The values you get will be different based on the image.

Okay, look at the image coming out of the reorder. Like I said, probably all black. Go to pixel analyzer (it’s a tool in the tab on the upper right hand quadrant). There should be a button there that will let you analyze the whole image. Click that on.

Now, you see the patches, I forget, they’re marked something like low, avg & high. Click on the swatch that is low, and you should see numbers showing up in the red, green, blue values. For conversation, let’s just say the numbers you’re getting here are -527.

I want to set this number to black (or 0). So go get an Add operator from Color, and type 527 into the red, green, blue & alpha wells. That sets the lowest value to 0. Render this image in the viewer. It’s probably still messed up; maybe it’s all white now.

Fine, go back to the pixel analyzer, do the same thing, but this time you want to find out what the highest values are, so click on the high swatch. So, let’s say the number you’re getting here says something like 20.

Good, now put a fade on after the add. In value, type in 1/20, or 1/whateverNumber was in the high values.

Finally, you should see your z information. Okay, just to make it easier on you from now on, why not just throw a Bytes operator on after the fade, and set it to either 2 or 1 bytes (I would probably go with 1, but there are plenty of good reasons to go with 2).
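The whole Bytes/Reorder/Add/Fade chain boils down to a plain min/max normalization. Here's a NumPy sketch of the same math (the z values are made up, and I'm assuming the "high" reading comes from the image after the Add, matching the order of the steps above):

```python
import numpy as np

# Made-up z values; in practice these come from your render, and the
# low/high readings come from Shake's Pixel Analyzer.
z = np.array([[-527.0, -200.0],
              [-100.0,   20.0]])

low = z.min()                   # the "low" swatch, e.g. -527
shifted = z - low               # the Add node: lowest value becomes 0
high = shifted.max()            # the "high" swatch, read after the Add
normalized = shifted * (1.0 / high)   # the Fade node: value = 1/high

# normalized now runs 0..1, so it's safe to view as ordinary grayscale
print(normalized)
```

Same idea, one formula: (z - low) / (high - low).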

VFX History

This is an old post I made on a long dead site I used to host. So in the interest of preserving history:

The Eye

One of the requests to me the other day was to discuss how to “step up”, move my effects look from “Joe-in-the-backroom” to “What-did-you-do-on-that”.

Tall order.

There’s a lot of discussion in my circle about The Eye. “Oh, yeah, he’s got a good eye.” “Who, Joe? Of course it looks like that, he has a tin eye.” We use the eye as a separator between someone who makes stuff that looks good and someone who makes stuff that just looks run of the mill.

So, I’m going to take the question as that – What is the eye, and how do I get one?

Honestly, I think the eye is a half-way mystical thing that is mostly focused on awareness. People who have the eye are very aware of images, of what the world looks like. They don’t just look, they SEE; they stare and think about how things really look, about what makes things look good.

And here’s the awful truth about the eye. Some people will never really have it.

It’s not about some set of magic formulae you can chant and get the image. It’s not a cookbook. It really is an awareness, a sensitivity to the world around us.

Here’s the second awful truth about the eye, and one I find even more frightening:

Many people have the eye, but they never train it. A gift that is undeveloped, left alone and forgotten. They don’t train it because it’s hard, it’s something you have to constantly work on. That just saddens me.

Okay, let’s say you have the eye, or at least, you think you do. How do you train it?

Let me say again, what I tell you three times is true – the eye is about awareness. Look at stuff, and really SEE it. Look all the time.

Our business is focused on images, so it’s images you must obsess on. How does the light wrap around that tree? The light hitting that cat, what do the individual hairs look like? And how does that contribute to the total image of the cat? Analyze the things you are seeing. Think about light, moving and hitting stuff, energy flying around and finally coming to rest on some thing, draping it with light. How does that work?

One thing you can do to train the eye is to take up drawing, or photography. I’ve tried both. Personally, I’m not much of a draughtsman, meaning my drawings bear a close resemblance to a 4-year-old’s sketches, but that’s also about the time I stopped drawing. With more practice, I’d get good. I’m lazy and photography is easier for me, so that’s been my choice (and since I’m into geek photography like stereo & vintage cameras, that keeps my interest).

Alright, having said all that, here is some cookbook stuff. Most of what I’m going to tell you now is just opinion. As soon as I say it, someone out there is going to say, “I’ve tried that, it looks like ka-ka (a technical term you often hear professional image-makers use), what is he thinking?” Another group will write it down like scribes in secret manuscripts and trot it out twice a year to adoring acolytes. The truth is somewhere in between. Take it with a grain of NaCl.

Last thing I want to say about the eye is that you can’t just apply it to things you see, you have to apply it to your own work – and that is the hardest thing to do. When you look at your own image, you see all the stuff you did. No one else sees that, only you. You have to take out the eye and really apply it to that thing you just made. See it: what does it really look like? Between you and me (and everyone else reading this), I know professionals of many years who still can’t do this. If they’re really professionals, they’ll take steps not to be the ones judging their own work.


We love to soften our images. I think we do it because some very important VFX houses tend to rely heavily on darkness and softness. We as artists run the risk of staring too long at the work of our peers and not looking at the real world.

Now, I think the blur node is my enemy. I think long and hard about every blur I include in my comp. I don’t like them. Here’s why:

I believe that we humans normally see things in pretty sharp focus (at least, as long as I’m wearing my glasses). I’m used to that. If I see something and I can’t focus on it, I get a little panicky – something must be wrong with me. So, softness, in some deep animal part of my head, is associated with being broken, and it introduces a little bit of tension.

But Rory, I see soft focus images all the time! Artists are always using it!

Yeah. Why? Because they want to introduce some tension, or they want to force the viewer towards something of interest in the image. How many times have you sat in a monster movie, the victim cowering in the foreground in sharp focus, while behind her, just beyond focus, something horrible is moving? Doesn’t it freak you out that you can’t focus on that thing behind her?

So why do you want to make parts of your image soft? I think the eye is going to go right to those soft areas, pick out the pattern and try to figure out why it doesn’t look right. Personally, I claim to be able to spot a certain famous software’s gaussian blur implementation. There is a certain characteristic to it that draws my eye and doesn’t look real.

I hear you blubbering now. “I love my blur! What do you mean I can’t use it!?!” I didn’t say that. I just think that blur is the easy answer. If you want better images, focus more on mixing than blurring.


We often use a blur when we really should be using a glow.

Light is not a straight-line, ruler driven thing. It’s waves. It bends, it folds around stuff.

This is my gag for introducing a “glow”. It’s a gag, not a real physical model, though the concepts are based on physics.

Take an image and apply a blur. Now, subtract (isub) the original image from the blurred image. Don’t swap this, or you’ll end up with a poor man’s unsharp mask.

The resulting image will be very dark. I think of it as the light that is leaking around and contaminating the darkest areas. Usually, I don’t want to apply this back to the image at 100%, so I throw a brightness on to control how much gets added.

I take this final image and add it back to the original image (iadd).
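The blur/subtract/fade/add chain sketches out like this in NumPy (a toy box blur standing in for a real blur node, and I'm assuming the subtract clamps negatives at zero the way a non-float isub would):

```python
import numpy as np

def box_blur(img, radius=1):
    """Crude box blur: average each pixel with its neighbors (edges clamped)."""
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    size = 2 * radius + 1
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size**2

img = np.zeros((5, 5))
img[2, 2] = 1.0                   # a single bright pixel on black

blurred = box_blur(img, radius=1)
leak = blurred - img              # isub: light "leaking" into the dark areas
leak = np.clip(leak, 0.0, None)   # keep only the spill, drop the negatives
glow = img + 0.5 * leak           # brightness at 0.5, then iadd back

# The bright pixel keeps its full value; its neighbors pick up a soft halo.
```

Notice the original pixel stays at 1.0 – nothing got softer, the surrounding darks just picked up some contamination, which is the whole point of the gag.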

I’ve used variants of this technique to replicate the action of ProMist filters, as edge glows and wraps. I find it pretty useful, and more importantly, I’m not making the image softer, although I am affecting local contrast. All said and done, it’s more a color operation than a blur.

17 Feb 2003

Book Fail

I like to read. I read all the time, really. Of course a lot of people don’t consider surfing the net reading, but of course it is. By that measure, I’m probably reading more now than at any time in my life. That’s quite an accomplishment considering I used to read while I walked home from school, book held in front of me while I shuffled carefully down the road trying not to trip on rocks or cactus.

But books. They’re failing me lately. The physical medium by which they are distributed to meat, it just doesn’t work the way it used to.

Today, I wanted to read while eating my dinner. I pulled my little box of flash-frozen hot food next to me as I sat cross-legged on the floor, and tried to read and eat.

I used to do this all the time, my hand spread out like a giant book support, carefully sliding the pages across with little finger and thumb like a magic trick, shuffling my fingers while I read so they didn’t block words or even just guessing at the words. All. The. Time.

And tonight it was annoying. Very annoying. I finally put the book down, and opened a magazine. At least it would lie flat; I wouldn’t have to struggle to keep the pages open, and I could browse the pictures.

And while I meandered through pages, I felt a pang of annoyance. That’s a nice car. I’d like to know more about that. I wonder if there are any like it for sale in my area.

I couldn’t open another page and find out what I wanted. I’d have to get up and get my phone and look it up. Not worth the effort.

And I’ve suddenly realized, books are annoying me. It’s a mechanism and a culture I’m not part of. Reading on my iPhone or iPad is so much easier, and the ability to look up more data while I’m reading is seductive.

I wonder if any ancient librarian had this epiphany as he got used to working with books after dealing with those really annoying scrolls. After all, books were relatively portable, they could rest on your lap while you read. And you didn’t have to hold them up to read them, they just sat there and displayed information.

Books have been replaced in my life.