Recreating 2001’s Slit Scan Effect in Nuke’s 3D System

In 1968 a film was released that would change the sci-fi movie landscape forever.  ‘2001: A Space Odyssey’ is still considered the pinnacle of science fiction storytelling and visual design.

When the film was being produced the human race had yet to land on the Moon, let alone live and work in space for long periods.  The science fiction films released around the same time were all of B-movie quality.  ‘2001’ treated its viewers with the utmost respect and its visual style is anything but B movie!

The film’s climax sees astronaut David Bowman travel through the aptly named ‘Star Gate’ to another time and space.  When tasked with creating the visuals for this scene, visual effects genius Douglas Trumbull turned to a technique that had been used in films before: Slit Scan.  Although John Whitney is often credited with pioneering the Slit Scan effect (in films like Alfred Hitchcock’s ‘Vertigo’), it is Trumbull who refined the technique.  Trumbull’s real brilliance was in how he built and ran his Slit Scan machine.

Trumbull’s Slit Scan rig consisted of a camera on a track that could move forwards and backwards, a slit about 4’ high, and illuminated artwork on a panel behind the slit.  The synchronous motors he used meant the camera, slit and artwork could all be programmed to run simultaneously, so he could test endlessly and repeat shots with great accuracy.  He could also layer up the exposures for maximum effect, rewinding the film in the camera and exposing another piece of artwork.

The shutter on the camera was open as the camera moved towards or away from the slit on the track.  The shutter would then close at the end of the move, the camera would move back to its start position and the artwork’s start position would be moved slightly.  The whole process would then be repeated for the next frame.

example frames of the ‘2001’ Slit Scan effect at Stanley Kubrick: The Exhibition
a diagram of a Slit Scan machine at Stanley Kubrick: The Exhibition in London

Brief:

After watching ‘2001’ at The Prince Charles Cinema in London I felt inspired to try to understand more about the Slit Scan effect and recreate it in a digital age.  Using the 3D environment inside Foundry’s NukeX I felt I would be able to build a virtual version of Trumbull’s Slit Scan rig.  I used a 3D camera and 3D cards to create the other parts of the setup.
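For anyone wanting to rebuild something similar, here is a rough sketch of how such a rig can be wired together with Python.  This is not the published starGate script; the node classes, names and values are illustrative only and may differ between Nuke versions.

# rough sketch of a virtual Slit Scan rig in Nuke's 3D system (illustrative only)
import nuke

artwork = nuke.nodes.Card2(name='artworkCard')    # card carrying the illuminated artwork
slit = nuke.nodes.Card2(name='slitCard')          # card in front with a thin transparent slit
cam = nuke.nodes.Camera2(name='slitScanCam')      # animate its Z translate to mimic the camera track

artwork['translate'].setValue([0, 0, -2])         # artwork sits behind the slit
slit['translate'].setValue([0, 0, -1])

scene = nuke.nodes.Scene()
scene.setInput(0, slit)
scene.setInput(1, artwork)

render = nuke.nodes.ScanlineRender()
render.setInput(1, scene)                         # obj/scn input
render.setInput(2, cam)                           # cam input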

my 3D setup in Nuke; the slit is in the front card and the artwork moves on a card behind it
Insights:
Some factors that change the effect created include:
  • The size of the slit.
  • The amount of movement in the start position of the artwork between each exposure.
  • The speed of the camera move.

Process:

The first version I created was good, but it was more like a 1980s video effect, a little like a ‘howlaround’ (the effect used in the opening credits of the BBC’s Doctor Who).  The problem was that the artwork’s start position was not being adjusted between each exposure.  Moving the artwork’s start position ever so slightly between exposures is what gives the illusion of movement.  Thanks to Lev for this insight.

Each exposure in my setup was made up of 100 frames.  One disadvantage of Nuke’s 3D camera is that you cannot leave the shutter open as Trumbull would have done; everything must be done in frames.  I used the TimeEcho node to mimic the long exposure time, and after that a FrameHold node made sure only the fully ‘exposed’ frame was rendered.
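As a rough sketch of that part of the tree, assuming the TimeEcho knob that sets the number of blended frames is called ‘frames’ (worth checking in your version of Nuke):

# minimal sketch of the 'long exposure' part of the tree
import nuke

echo = nuke.createNode('TimeEcho')     # blends a run of frames into a single image
echo['frames'].setValue(100)           # assumed knob name for the number of frames blended
hold = nuke.createNode('FrameHold')    # keep only the fully 'exposed' frame
hold['first_frame'].setValue(100)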

an example of how each frame is made up: first the slit moving over 100 frames, then the TimeEcho output of those 100 frames, which makes up one frame

Next was how to set this up so it would render automatically.  Lev and I tested setting the curves of the camera and artwork movement to loop automatically but this was very fiddly to control (and adjust later) and never worked perfectly.  Setting the loop even one frame out meant that the slit would slowly move out of sync with the camera and chunks of the image would be lost.

In the end I used Python to adjust the start position of the artwork after each frame was rendered.  Python commands can be set on the Write node to run, for example, after each frame is rendered.  In my case, after each frame a TransformGeo node moved the artwork’s 3D card very slightly in X.  The next frame would then be rendered, again made up of 100 frames but with a slightly new start position on the artwork.
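For reference, here is a minimal sketch of the idea rather than the actual afterEachFame_advanceSlitScan.py linked below; the node name ‘TransformGeo_artwork’ and the step size are placeholders.

# minimal sketch: nudge the artwork card after every rendered frame
import nuke

STEP_X = 0.01  # placeholder: how far the artwork's start position moves per rendered frame

def advanceSlitScan():
    tg = nuke.toNode('TransformGeo_artwork')   # placeholder name for the artwork's TransformGeo
    tx, ty, tz = tg['translate'].value()
    tg['translate'].setValue([tx + STEP_X, ty, tz])

# either paste advanceSlitScan() into the Write node's 'afterFrameRender' field,
# or register it as a callback for all Write nodes:
nuke.addAfterFrameRender(advanceSlitScan)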

After the frames were rendered I brought them back into Nuke, added a flopped version on the other side of the screen along with some blurs, retimes and glows to help the effect, and rendered it all out as a QuickTime.

the finished Nuke script
an example of the completed effect rendered from Nuke
another version using Kronos to slow down and blur the final output, trippy man!

Conclusion:

This project has improved my understanding of how to loop curves and of how useful the Generate Keys function can be.  It’s also shown me how handy setting Python commands in the Write node is.

This was a great project for understanding a process rarely used these days.  As with any project of this nature, delving into retro techniques has given me even more admiration for the technicians who worked on this effect.  I’m sure it took a lot of trial and error, but once they cracked it they really cracked it.  The Star Gate sequence still stands up today and, along with the music, really does transport Bowman (and the audience!) to another time and space.

GitHub Scripts:

Final Nuke Script – starGate_v1_to_publish.nk
Python to advance slit after each frame is rendered – afterEachFame_advanceSlitScan.py

References:

Doctor Who Slit Scan and Howlaround

Douglas Trumbull – Slit Scan Motion Picture Photography – 2009 SOC Technical Achievement Award

The History and Science of the Slit Scan Effect used in Stanley Kubrick’s 2001: A Space Odyssey

Making of Slit Scan Effect

Cinefex #85 (April 2001 of Course!) – 2001: A Space Odyssey | A Time Capsule

 

Many thanks to Lev Kolobov, Zissis Papatzikis, Angus Bickerton, Steve Begg, Matt Tinsley and Ed Plant for their help with this project.


Nuke – Rendering with the GPU in Nuke

Since I’ve been working in-house as a compositor I’ve been rendering all my Nuke scripts in the Terminal on my Mac. All good… or so I thought.

I have recently discovered that Nuke does not use the GPU (Graphics Processing Unit) by default when rendering this way.  I guess it’s because command-line rendering is typically used on render farms, which often won’t have a GPU.

Luckily the fix is simple. Add ‘--gpu’ to the terminal command and hey presto… the GPU is used when rendering.  So now the NVIDIA GeForce GT 750M in my MacBook Pro is used by the GPU-accelerated nodes in my scripts when rendering.  Winner.
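For example, with Nuke aliased in your .bash_profile the render command might look something like ‘nuke -x --gpu -F 1-100 /path/to/myScript.nk’, though the exact flags and paths vary between versions, so treat this as a sketch.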

I have rewritten my Nuke AppleScript droplet to use the GPU when rendering, which you can download.

MB_NukeRender_1v1.zip

Happy rendering!

Nuke – AppleScript Droplet to Render Nuke Scripts in Terminal


**UPDATE 21/04/17**
I have updated this script to use the GPU when rendering.  Please see this post for more details – Rendering with the GPU in Nuke

Since working on Spectre I’ve started rendering all my Nuke scripts in Terminal shells rather than in the GUI.  This feels a lot tidier and also seems to run much faster.

I’ve created an AppleScript droplet to help with this.  Simply drop your script onto the droplet and it will open a terminal, enter the necessary information and then start rendering.

MB_NukeRender

Thanks to jweaks on Stack Overflow for the help on this!

Please note – the version of Nuke you are running must be set up in your .bash_profile file for this droplet to work.  The Foundry have written an excellent piece on this if you need help.
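For example, the setup usually boils down to adding an alias along these lines to your .bash_profile (the version and path here are purely illustrative): alias nuke='/Applications/Nuke10.5v2/Nuke10.5v2.app/Contents/MacOS/Nuke10.5v2'.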

Command-Line Operations

Nuke – ZDefocus Edge Errors

Has anyone had this problem with the ZDefocus Node?

I get very weird artefacts when I defocus a monitor graphic that I’m tracking into a shot.  It looks fine straight out of the ZDefocus node but when I gamma up to 10 I can see artefacts around my element.

I’ve uploaded the script to Dropbox.

Any thoughts would be greatly appreciated!

My conversation on The Foundry Nuke forum.


Nuke – The Amazing AutoWrite Node by Tim Bowman

I thought I’d share this beauty with you all.

I’m currently working as part of a two-man comp team and, as such, we are missing so many of the lovely scripts and buttons you get when you work in a large facility. Finding this little beauty is going to save us a lot of time. It wasn’t completely plain sailing to install, so I thought I would share my findings here.

The node automatically creates and names the folders to render to based on your script name.  Genius!  And a huge time saver.

The AutoWrite node is written by Tim Bowman and can be downloaded from Nukepedia.

AutoWrite v1.0


It takes the standard Write node and adds Python that works out your directory structure from your current Nuke script’s location. The numbers in the node’s properties are changed to suit your pipeline.  The node also has a read-out in the node graph that shows where the Write node is going to render to, which is a very useful feature.

The tricky part of using the node comes when you want Nuke to create the directories if they don’t already exist. In our two-man team we also wanted the Write node to overwrite anything that’s already there (this last part proved difficult to find the code for).

Add the following to your init.py file and all will be well with the world… and your AutoWrite node.

def createWriteDir():
    # create-directory callback, run before each render
    import nuke, os, errno
    file = nuke.filename(nuke.thisNode())
    dir = os.path.dirname(file)
    osdir = nuke.callbacks.filenameFilter(dir)
    # cope with the directory existing already by ignoring that exception
    try:
        os.makedirs(osdir)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise

nuke.addBeforeRender(createWriteDir)

It took me a while to work out this part of the puzzle.  It uses Nuke’s ‘beforeRender’ callback, which is run before a render is executed.  The function checks whether the directory structure for the Write node exists and, if it doesn’t, creates it.

Nuke Callbacks

Happy rendering.

Nuke – FrameHold Using Current Frame

When I create a FrameHold node I’m normally on the frame I want to… well… frame hold.  So wouldn’t it be great if Nuke held the frame you created the FrameHold on?  Well, now it can with this simple piece of Python.  Just pop it into your menu.py file in your home/.nuke folder and off you go.

# FrameHold creation frame
nuke.menu('Nodes').addCommand( "Time/FrameHold", "nuke.createNode('FrameHold')['first_frame'].setValue( nuke.frame() )", icon='FrameHold.png')


It uses the nuke.frame() function, which returns the current frame.

Happy frame holding!

Thanks to David Emney for the heads up on this.
FrameHold default to current frame

Nuke – Backdrop Utility by Geoffroy Givry

Nuke’s backdrop feature promises to keep your scripts neat, tidy and in order… but it seems to always want to pick bright pink or fluoro orange! 18% grey please, Nuke!

Luckily Geoffroy Givry has come up with a nifty little Python script that replaces the backdrop feature with a far superior utility.  A name can be added to the group, the font size can be adjusted and the colour always defaults to… grey!  Other muted tones can also be selected.

The Python script can be downloaded from Nukepedia.

labelAutobackdrop v1.0

Place the labelAutobackdrop.py file in your home/.nuke folder.  Then add the following to your ‘menu.py’ file, which should be in the same location (or create a ‘menu.py’ file if you don’t see one)…

# setup autoBackdrop in toolbar
import labelAutobackdrop
menuToolbar = nuke.toolbar('Nodes') 
menuToolbar.addCommand('Other/Backdrop', 'labelAutobackdrop.autoBackdrop()', 'Alt+m')

There’s a script on the Nukepedia page that Givry suggests putting into the ‘menu.py’ file.  When I tried it I couldn’t get it to work.  Anyone out there have an idea as to why?

Geoffroy Givry’s website.  Click on Python Corner to see his other scripts.
http://www.geoffroygivry.com

On a side note – here’s a great tutorial from David Windhorst about how to install custom menus and adjust your ‘menu.py’ file.
DW// Tutorial 01: Nuke – Install Scripts/Custom Menus

Thank you to Jason Evans for the pointers on this!

Nuke – The Hidden AutoCrop Script

This is a great little script that can help optimise your comp tree by setting the Bounding Box in an image stream.

Select the node you want to set the Bounding Box on.  Hit ‘X’ (to open the Script Command window), change the script language to Python and enter the following…

nukescripts.autocrop(first=None, last=None, inc=None, layer="rgba")

Nuke will run the CurveTool’s AutoCrop function, copy the autocrop data into a Crop node and then delete the CurveTool.  Genius!

‘first’ and ‘last’ refer to the frame range you want to analyse; if None is set it will use the Project Settings.  ‘inc’ is the frame increment.  ‘layer’ is the layer or layers you want to analyse.  It’s good to set it to ‘a’ (alpha) for any premultiplied CG you receive so you can be sure you’re getting all the data downstream.
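As a quick usage sketch (the frame range here is just an example):

# analyse frames 1001-1100 of the selected node on the default rgba layer
import nukescripts
nukescripts.autocrop(first=1001, last=1100, inc=1, layer="rgba")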

I’ve created a Python script to help with running this command.  It can be downloaded from Nukepedia.

autoCrop_MB

Here’s a great video from The Foundry on understanding the Bounding Box…

Bounding Box – NUKE Basic Workflows