Wednesday, June 5, 2013

How to recolor the deep buffer data in Nuke 7

The DeepRecolor node is used to merge a deep buffer file (which contains per-sample opacity values) with a standard 2D color image. The node spreads the color across all samples using the per-sample opacity values.

Read in the deep image that contains the per-sample opacity values and the color image. Add a DeepRecolor node from the Deep menu. Connect the depth input of the DeepRecolor node to the deep image. Next, connect the color input of the DeepRecolor node to the 2D color image.
Note: If the color image is premultiplied, add an Unpremult node between the Read and DeepRecolor nodes.
On selecting the target input alpha check box, the alpha of the color image is distributed among the deep samples. As a result, when you flatten the image later, the resulting alpha will match the alpha of the color image. If this check box is clear, the DeepRecolor node distributes the color to each sample by unpremultiplying by the alpha of the color image and then remultiplying by the alpha of each sample. As a result, the alpha generated by the DeepRecolor node will not match the alpha of the color image.
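The same setup can be built with Nuke's Python API. The sketch below is a minimal version of the steps above; the file names, input order, and the target_input_alpha knob name are assumptions based on the UI labels, so verify them in your build.

import nuke

deep  = nuke.nodes.DeepRead(file='deep_render.exr')   # deep file with per-sample opacity
color = nuke.nodes.Read(file='beauty.exr')            # standard 2D color image
unpremult = nuke.nodes.Unpremult(inputs=[color])      # only needed if the color image is premultiplied

recolor = nuke.nodes.DeepRecolor()
recolor.setInput(0, deep)        # depth input
recolor.setInput(1, unpremult)   # color input (verify the input order in your build)

# Distribute the color image's alpha across the deep samples so that a later
# flatten matches the original alpha (UI label: "target input alpha").
recolor['target_input_alpha'].setValue(True)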

How to convert a standard 2D image to a deep image using the depth channel

The DeepFromImage node is used to convert a standard 2D image to a deep image with a single sample for each pixel by using the depth.z channel.

Read in the image that you want to convert to a deep image.
Note: If the depth information is not available in the depth.z channel, make sure that you copy the information to the depth.z channel using the Channel nodes.
Select the premultiplied check box if you want to premultiply the input channels. If this check box is clear, the DeepFromImage node assumes that the input stream is already premultiplied. Select the keep zero alpha check box if you want the input samples with zero alpha to be kept in the deep output. If you want to manually specify the z depth, select the specify z check box and then specify a value for the z parameter.

You can use the DeepSample node to check the deep data created by the DeepFromImage node.
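For reference, here is a minimal Python sketch of this conversion, assuming an EXR that carries a depth.z channel; the premultiplied and keep zero alpha knob names follow the UI labels and may differ internally.

import nuke

img  = nuke.nodes.Read(file='render_with_depth_z.exr')   # must carry a depth.z channel
deep = nuke.nodes.DeepFromImage(inputs=[img])

deep['premultiplied'].setValue(False)    # input stream is already premultiplied
deep['keep_zero_alpha'].setValue(True)   # keep input samples whose alpha is zero

sample = nuke.nodes.DeepSample(inputs=[deep])   # move its pos widget in the Viewer to inspect samples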

How to convert a standard image to a deep image using frames

In Nuke 7, you can use the DeepFromFrames node to create depth samples from a standard 2D image. To understand the concept, follow these steps:

Step - 1
Create a new script in Nuke and then set the format in the Project Settings panel.

Step - 2
Download an image of the sky, refer to Figure 1, and then load the sky image into the Nuke script.
Figure 1
Step - 3
Connect a Reformat node to the Read# node to reformat the sky image.

Step - 4
Connect a Noise node (from the Filter menu) with the Reformat# node. Animate the z parameter and modify the other settings as required in the Noise# node properties panel to apply fog over the sky image, refer to Figure 2.
Figure 2
Step - 5
Connect a DeepFromFrames node with the Noise# node; the DeepFromFrames# node properties panel will be displayed in the Properties Bin. To generate the output, change the values of the parameters in the DeepFromFrames# node properties panel.

The samples parameter is used to specify the number of samples to be created per pixel. The fields corresponding to the frame range parameter are used to specify the frame range that will be used for sampling. By default, the premult check box is selected. As a result, the samples from the input image are premultiplied. If you clear this check box, the DeepFromFrames node assumes that the input stream is already premultiplied. The options in the split alpha mode drop-down are used to set how the alpha channel will be split. This drop-down has two options: additive and multiplicative. By default, the multiplicative option is selected. As a result, when you flatten the deep image, the alpha values in the flattened output will match the original alpha. If you select additive from this drop-down, the original alpha values are not retained when the image is flattened. The zmin parameter is used to assign depth to the first sample of each deep pixel output, corresponding to the first frame in the range. The zmax parameter is used to assign depth to the last sample of each deep pixel output, corresponding to the last frame in the range.
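The network described above can also be built through Python. The sketch below is minimal and assumes UI-style knob names (samples, zmin, zmax); check them against your build.

import nuke

sky      = nuke.nodes.Read(file='sky.jpg')            # Step 2: the downloaded sky image
reformat = nuke.nodes.Reformat(inputs=[sky])          # Step 3
noise    = nuke.nodes.Noise(inputs=[reformat])        # Step 4: animate its z knob for the fog

dff = nuke.nodes.DeepFromFrames(inputs=[noise])       # Step 5
dff['samples'].setValue(10)    # number of samples created per pixel
dff['zmin'].setValue(1.0)      # depth assigned to the first sample (first frame in the range)
dff['zmax'].setValue(10.0)     # depth assigned to the last sample (last frame in the range)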

Tuesday, June 4, 2013

Working with deep images in Nuke 7

Nuke's powerful deep compositing toolset gives you the ability to create high-quality digital images faster. Deep compositing is a way to composite images with additional depth data. It helps eliminate artifacts around the edges of objects. It also reduces the need to re-render images: you render the background once and can then move the foreground objects to different positions and depths in the scene. Deep images contain multiple samples per pixel at various depths. Each sample contains per-pixel information about color, opacity, and depth.

DeepRead Node
The DeepRead node is used to read deep images into the script. In Nuke, you can read deep images in two formats: DTEX (generated by Pixar's PhotoRealistic RenderMan Pro Server) and scanline OpenEXR 2.0.
Note: Tiled OpenEXR 2.0 files are not supported by Nuke.
The parameters in the DeepRead node properties panel are similar to those of the Read node.

DeepMerge Node
The DeepMerge node is used to merge multiple deep images. It has two inputs: A and B. You can use these inputs to connect the deep images you want to merge. The options in the operation drop-down in the DeepMerge tab of the DeepMerge node properties panel are used to specify the method for combining the deep images. By default, combine is selected in this drop-down. As a result, Nuke combines samples from the A and B inputs. The drop hidden samples check box is only available if you select combine from the operation drop-down. When this check box is selected, all samples that have an alpha value of 1 and are behind other samples will be discarded. If you select holdout from the operation drop-down, the samples from the B input will be held out by the samples in the A input. As a result, samples in the B input that are occluded by samples in the A input will be removed or faded out.
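A minimal Python sketch of this setup, assuming two deep EXR files and UI-style knob names (the internal names may differ):

import nuke

bg   = nuke.nodes.DeepRead(file='deep_bg.exr')
tree = nuke.nodes.DeepRead(file='deep_tree.exr')

dmerge = nuke.nodes.DeepMerge()
dmerge.setInput(0, bg)     # B input
dmerge.setInput(1, tree)   # A input (verify the input order in your build)

# 'combine' merges the samples; 'holdout' holds out B by the samples in A.
# The "drop hidden samples" check box is only exposed for the combine operation.
dmerge['operation'].setValue('combine')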

DeepTransform Node
The DeepTransform node is used to reposition the deep data along the x, y, and z axes. The x, y, and z fields corresponding to the translate parameter are used to move the deep data. The zscale parameter is used to scale the z depth values of the samples. If you want to limit the z translate and z scale effects to the non-black areas of a mask, connect an image to the mask input of the DeepTransform node.
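A short sketch of the same operation through Python, with knob names taken from the UI labels (treat them as assumptions):

import nuke

deep  = nuke.nodes.DeepRead(file='deep_tree.exr')
xform = nuke.nodes.DeepTransform(inputs=[deep])

xform['translate'].setValue([0, 50, 0])   # move the deep data in x, y, z
xform['zscale'].setValue(1.5)             # scale the z depth values of the samples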

DeepReformat Node
The DeepReformat node is the Reformat node for deep images.

DeepSample Node
The DeepSample node is used to sample a pixel in the deep image. When you add a DeepSample node in the Node Graph panel, a pos widget will be displayed in the Viewer panel. Move the widget in the Viewer panel to display the sample information in the DeepSample node properties panel.

DeepToImage Node
The DeepToImage node is used to flatten a deep image. It converts a deep image to a regular 2D image.

DeepWrite Node
The DeepWrite node is the Write node for deep images. It is used to render all upstream deep nodes to the OpenEXR 2.0 format. Tiled OpenEXR files are not supported by this node.

DeepColorCorrect Node
The DeepColorCorrect node is the ColorCorrect node for deep images, with an additional Masking tab. The options in this tab are used to control the depth range where the effect of the color correction will be visible. Select the limit_z check box and then adjust the trapezoid curve; the values in the A, B, C, and D fields will change. The value in the A field indicates the depth where the color correction starts, the values in the B and C fields indicate the range where the color correction is at full effect, and the value in the D field indicates the depth where the color-correction effect stops. You can use the mix slider to blend between the color-corrected output and the original image.
Note: You can use the DeepSample node to find the precise depth values and then enter them in the A, B, C, and D fields.
DeepToPoints Node
The DeepToPoints node is used to create a point cloud using the deep data. You can use the points for position reference. To create the point cloud, connect the deep input of the DeepToPoints node to the deep image. If you want to view the cloud through a camera, connect the camera input to a Camera node and then switch to the 3D view. In the properties panel of the DeepToPoints node, you can use the Point detail and Point size parameters to change the density and size of the points, respectively.

TUTORIAL
Before you start the tutorial, navigate to the following link and then download the file to your hard drive: http://www.mediafire.com/download/34h9mew93ff6izh/nt008.zip. Next, extract the contents of the zip file.

Step - 1
Create a new script in Nuke.

Step - 2
Open the Project Settings panel and then select NTSC 720x486 0.91 from the full size format drop-down.

Step - 3
Choose the DeepRead option from the Deep menu; the Read File(s) dialog box will be displayed. In this dialog box, select the deep_bg.exr file. Next, choose the Open button; the DeepRead1 node will be inserted in the Node Graph panel.

Step - 4
Next, press 1; the output of the DeepRead1 node will be displayed in the Viewer1 panel, as shown in Figure 1.
Figure 1
Step - 5
Similarly, read in the deep_tree.exr file. Next, press 1; the output of the DeepRead2 node will be displayed in the Viewer1 panel, as shown in Figure 2.
Figure 2
Step - 6
Select the deep option from the Channel Sets drop-down; the deep data will be displayed in the Viewer1 panel, as shown in Figure 3. Next, select rgba from the Channel Sets drop-down.
Figure 3
Next, you will sample a pixel in the deep image.

Step - 7
Select the DeepRead1 node and then add a DeepSample node from the Deep menu; the DeepSample1 node will be connected to the DeepRead1 node. Make sure the DeepRead1 node is selected and then press 1 to connect it to the Viewer.

Step - 8
Move the pos widget in the Viewer. You will notice that the information about the pixel underneath the pos widget is displayed in the DeepSample1 node properties panel, refer to Figure 4.
Figure 4
Step - 9
Delete the DeepSample1 node from the Node Graph panel.

Step - 10
Select the DeepRead2 node and then choose DeepMerge from the Deep menu; the input A of the DeepMerge1 node will be connected with the DeepRead2 node.

Step - 11
Make sure the DeepMerge1 node is selected and then press 1 to connect it to the Viewer1 node.

Step - 12
Connect the B input of the DeepMerge1 node with the DeepRead1 node; the output of the DeepMerge1 node will be displayed in the Viewer, refer to Figure 5.
Figure 5
Next, we will move the result of the DeepRead2 node using the DeepTransform node.

Step - 13
Select the DeepRead2 node and then choose DeepTransform from the Deep menu; the DeepTransform1 node will be inserted between the DeepRead2 and DeepMerge1 nodes.

Step - 14
In the DeepTransform tab of the DeepTransform1 node properties panel, enter 10 in the y field corresponding to the translate parameter; the tree will move to a new position. Figures 6 and 7 show the position of the tree with the y value set to 10 and 50, respectively. Experiment with different values.
Figure 6
Figure 7
Notice in Figure 7 that the bounding box is outside the frame size. Next, you will use the DeepCrop node to crop the result of the DeepTransform1 node.

Step - 15
Select the DeepTransform1 node and then choose DeepCrop from the Deep menu; the DeepCrop1 node will be inserted between the DeepTransform1 and DeepMerge1 nodes.

You will notice that the tree has disappeared. Next, you will fix it.

Step - 16
In the DeepCrop tab of the DeepCrop1 node, select the keep outside zrange check box.
Note: You can also adjust the size of the bounding box. To do so, adjust the crop box in the Viewer. Alternatively, enter values in the x, y, r, and t fields corresponding to the bbox parameter. Select the keep outside bbox check box to keep the samples outside the bounding box. You can use the znear and zfar parameters to crop samples in depth. Select the keep outside zrange check box if you want to keep the samples outside the range defined by the znear and zfar parameters.
Step - 17
In the DeepMerge tab of the DeepMerge1 node properties panel, select holdout from the operation drop-down; a holdout will be created. Figure 8 shows the holdout in the alpha channel.
Figure 8
Step - 18
Now, select combine from the operation drop-down.

Next, you will merge a standard image with the deep data. To do so, you first need to flatten the deep data using the DeepToImage node.

Step - 19
Select the DeepMerge1 node and then choose DeepToImage from the Deep menu; the DeepToImage1 node will be connected to the DeepMerge1 node.
Note: In the DeepToImage tab of the DeepToImage1 node properties panel, the volumetric composition check box is selected by default. On clearing this check box, Nuke assumes that the samples do not overlap and takes only the front depth of each pixel into consideration, which also reduces the processing time. However, if you have overlapping samples, the output may be different than expected.
Step - 20
Read in the sky.jpg file using the Read node; the Read1 node will be added to the Node Graph panel. Next, press 1 to view its output in the Viewer1 panel.

Step - 21
Make sure the Read1 node is selected and then add a Reformat node from the Transform menu; the Reformat1 node will be connected to the Read1 node.

Step - 22
In the Reformat tab of the Reformat1 node properties panel, select height from the resize type drop-down.

Step - 23
Make sure the DeepToImage1 node is selected and then press M; the A input of the Merge1 node will be connected to the DeepToImage1 node.

Step - 24
Connect the B input of the Merge1 node with the Reformat1 node.

Step - 25
Select the Merge1 node and press 1 to view the output in the Viewer.

Step - 26
Select the Reformat1 node and then press T; the Transform1 node will be inserted between the Reformat1 and Merge1 nodes. Next, adjust the position of the clouds using the Transform widget in the Viewer.

Step - 27
In the Merge tab of the Merge1 node properties panel, select A from the set bbox to drop-down. Figure 9 shows the output of the merge operation.
Figure 9
Figure 10 shows the network of the nodes in the script.
Figure 10
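If you prefer scripting, the node network in Figure 10 can be rebuilt with Nuke's Python API. The sketch below assumes the files from nt008.zip are reachable from the script; the input indices and knob names follow the UI labels and may need adjusting in your build.

import nuke

bg   = nuke.nodes.DeepRead(file='deep_bg.exr')      # Step 3
tree = nuke.nodes.DeepRead(file='deep_tree.exr')    # Step 5

xform = nuke.nodes.DeepTransform(inputs=[tree])     # Step 13
xform['translate'].setValue([0, 10, 0])

crop = nuke.nodes.DeepCrop(inputs=[xform])          # Steps 15-16
crop['keep_outside_zrange'].setValue(True)

dmerge = nuke.nodes.DeepMerge()                     # Steps 10-12
dmerge.setInput(0, bg)      # B input
dmerge.setInput(1, crop)    # A input

flat = nuke.nodes.DeepToImage(inputs=[dmerge])      # Step 19

sky      = nuke.nodes.Read(file='sky.jpg')          # Step 20
reformat = nuke.nodes.Reformat(inputs=[sky])        # Steps 21-22
reformat['resize'].setValue('height')
move     = nuke.nodes.Transform(inputs=[reformat])  # Step 26

merge = nuke.nodes.Merge2(inputs=[move, flat])      # Steps 23-24: B = clouds, A = flattened deep comp
merge['bbox'].setValue('A')                         # Step 27: set bbox to A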

Monday, June 3, 2013

How to generate motion vector fields by using the VectorGenerator node

The VectorGenerator node in NukeX is used to create images with motion vector fields. This node generates two sets of motion vectors for each frame, which are stored in the vector channels. The output of the VectorGenerator node can be used with nodes that take a vector input, such as the Kronos and MotionBlur nodes. The image with the fields contains an offset (x, y) per pixel. These offset values are used to warp a neighboring frame onto the current frame. Most of the frames in a sequence have two neighbors; therefore, two vector fields are generated for each frame: the backward and forward vector fields.

To add a VectorGenerator node to the Node Graph panel, select the node in the Node Graph panel from which you need to generate the fields and then choose VectorGenerator from the Time menu; the VectorGenerator# node will be added to the Node Graph panel. Make sure the VectorGenerator# node is selected and then press 1 to view its output in the Viewer# panel. To view the forward motion vectors, select forward from the Channel Sets drop-down. Select backward from the Channel Sets drop-down to view the backward motion vectors. To view both the backward and forward motion vectors, choose motion from the Channel Sets drop-down. Figures 1 through 4 show the input image, the forward and backward motion vectors together, the forward motion vectors, and the backward motion vectors, respectively.
Figure 1
Figure 2
Figure 3
Figure 4
While viewing the motion vectors in the Viewer# panel, the x values are represented by the red channel and the y values are represented by the green channel. Figures 5 and 6 show the backward x and backward y values in the red and green channels, respectively.
Figure 5
Figure 6
If you have an image sequence in which a foreground object is moving over the background, you might not get the desired result because the motion estimation might not be calculated properly around the edges between the foreground and background. To rectify this, create a matte for the foreground object and then connect it to the matte input of the VectorGenerator node. Then, select the required matte channel from the matte channel drop-down in the VectorGenerator tab of the VectorGenerator# node properties panel. On doing so, the motion approximation will be correct around the edges. If you use the matte channel, you can output vectors for the foreground or background by selecting Foreground or Background from the Output drop-down.
Note: In the VectorGenerator tab of the VectorGenerator# node properties panel, the Use GPU if available check box is selected by default. As a result, Nuke uses the GPU instead of the CPU for processing the vector fields, which helps enhance the processing performance.
To increase the resolution of the vector fields, adjust the Vector Detail parameter. This parameter is helpful in adjusting the density of the calculated motion vector fields. Higher values give more accurate results but increase the processing time. If you specify a value of 1 for the Vector Detail parameter, a vector will be generated for each pixel. If you set a higher value for the Smoothness parameter, the vector fields will be less detailed. The default value of 0.5 is fine for most image sequences. If there is a change in luminance in your image sequence, select the Flicker Compensation check box in the Advanced area of the VectorGenerator# node properties panel. When you select this check box, Nuke will take the change in luminance and flickering into account.
Note: On selecting the Flicker Compensation check box, the processing time will increase.
The VectorGenerator node uses luminance values (monochrome) to generate the motion estimation. The options in the Advanced > Tolerances area are used to fine-tune the weight of the color channels while calculating the luminance values.

Once you have generated the fields, you can use the output of the VectorGenerator node with the Kronos or MotionBlur node.
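Here is a minimal Python sketch of that workflow, assuming a NukeX license and a numbered EXR sequence; the knob names mirror the UI labels (Vector Detail, Smoothness, Flicker Compensation) and are assumptions, so check them with vecgen.knobs().

import nuke

src = nuke.nodes.Read(file='plate.####.exr', first=1, last=100)

vecgen = nuke.nodes.VectorGenerator(inputs=[src])
vecgen['vectorDetail'].setValue(0.3)          # density of the calculated vector fields
vecgen['smoothness'].setValue(0.5)            # default; higher = less detailed fields
vecgen['flickerCompensation'].setValue(True)  # compensate for luminance changes

# The forward/backward vector channels travel downstream with the image,
# so a Kronos (or MotionBlur) node connected below can reuse them.
kronos = nuke.nodes.Kronos(inputs=[vecgen])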

You might also like: Create motion blur effect using the motion vector pass

Sunday, June 2, 2013

How to create a position pass in Nuke 7 using the DepthToPosition node

The DepthToPosition node is used to generate a 2D position pass using the depth data available in the input image. The position pass is created by projecting the depth through a camera; the position of each projected point is then saved. This node, along with the PositionToPoints node, can be used to create a point cloud similar to the one the DepthToPoints node generates. In fact, the DepthToPoints node is a gizmo that contains the DepthToPosition and PositionToPoints nodes. In this tutorial, we will generate a position pass and then place a 3D sphere in the scene. To do this, follow these steps.

Step - 1
Navigate to the following link and then download the zip file to your hard-drive: https://www.dropbox.com/s/xo7eemr6qz16icl/nt007.zip. Next, extract the content of the zip file.

Step - 2
Using a Read node, bring in the nt007.exr file; the Read1 node will be inserted in the Node Graph panel.

Step - 3
Connect the Read1 node to the Viewer1 node by selecting the Read1 node and then pressing 1, refer to Figure 1.
Figure 1
Step - 4
Add a DepthToPosition node from the 3D menu; the DepthToPosition1 node will be inserted in the Node Graph panel.

You will notice that there is no output in the Viewer1 panel because we need to provide the correct depth channel by selecting the appropriate channel from the depth drop-down in the DepthToPosition tab of the DepthToPosition1 node properties panel. The depth information in the nt007.exr file is contained in the Z_Depth channel.

Step - 5
Select the Z_Depth.red option from the depth drop-down; the position pass will be displayed in the Viewer1 panel, refer to Figure 2.
Figure 2
Step - 6
Add a Camera node from the 3D menu. Next, import the Camera001.chan file that you have downloaded earlier.

Step - 7
Connect the camera input of the DepthToPosition1 node to the Camera1 node.

If you need to change the output channel, you can select the channel from the output drop-down available in the DepthToPosition tab of the DepthToPosition1 node properties panel. You can also create a new Channel Set for the position pass.
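For reference, the position-pass setup built so far can be sketched in Python as shown below. The Camera2 class name, input indices, and the depth knob name are assumptions based on the UI, so verify them in your script.

import nuke

plate = nuke.nodes.Read(file='nt007.exr')
cam   = nuke.nodes.Camera2()          # import Camera001.chan through its File tab

d2p = nuke.nodes.DepthToPosition()
d2p.setInput(0, plate)
d2p.setInput(1, cam)                  # camera input (verify the index in your build)
d2p['depth'].setValue('Z_Depth.red')  # channel that holds the depth information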

Next, you will place a 3D sphere on top of the first chopper.

Step - 8
Sample pixels from the Viewer1 panel and then note down the coordinates, refer to Figure 3.
Figure 3
Step - 9
Add a Sphere node from the 3D > Geometry menu. Next, add a ColorBars node from the Image menu.

Step - 10
Connect the ColorBars1 node to the Sphere1 node.

Step - 11
In the Sphere tab of the Sphere1 node properties panel, enter the XYZ coordinate values that you noted down in Step 8 in the x, y, and z fields corresponding to the translate parameter.

Step - 12
Connect a ScanlineRender node to the Sphere1 node. Make sure the ScanlineRender1 node is selected in the Node Graph panel and then press 1 to view its output in the Viewer1 panel.

Step - 13
Select the ScanlineRender1 node and then press M; the A input of the Merge1 node is connected with the ScanlineRender1 node.

Step - 14
Connect the B input of the Merge1 node with the Read1 node.

You will notice that there is no sphere on top of the chopper. To rectify this, we need to connect the cam input of the ScanlineRender1 node to the same camera that we used to generate the position pass.

Step - 15
Connect the cam input of the ScanlineRender1 node with the Camera1 node; the sphere will be visible in the Viewer1 panel, refer to Figure 4.
Figure 4
Figure 5 shows the node network.
Figure 5

Saturday, June 1, 2013

How to render position pass in Maya and then use it with the PositionToPoints node

The PositionToPoints node is used to generate a 3D point cloud using the position data contained in an image. In this tutorial, we will first create a position render pass in Maya 2014 and then create a 3D point cloud using the position data in Nuke. Then, we will composite a 3D object in our scene with the help of the 3D point cloud. Let's get started:

Step - 1
Create a project folder in Maya and open the scene that you need to render. Next, create a camera and set the camera angle. Figure 1 displays the scene that we will render.
Figure 1
We will be rendering a 32-bit image, so first we need to set the frame buffer to 32 bit.

Step - 2
Invoke the Render Settings window and then select mental ray from the Render Using drop-down list.

Step - 3
Now, choose the Quality tab and then enter 1.5 in the Quality edit box.

Step - 4
Scroll down to the Framebuffer area in the Quality tab and then select RGBA (Float) 4x32 Bit from the Data Type drop-down list.

Next, you will create layers in the Layer Editor and create layer overrides.

Step - 5
Select everything in the viewport and then choose the Render tab in the Layer Editor. Next, choose the Create new layer and assign selected objects button from the Layer Editor, refer to Figure 2; the layer1 layer will be created in the Layer Editor.
Figure 2
Step - 6
Rename the layer1 layer as beauty_layer.

Step - 7
Similarly, create a new layer and rename it as position_layer.

Step - 8
With the position_layer selected, invoke the Render Settings window and then right-click on the Enable Default Light check box; a shortcut menu will be displayed. Choose Create Layer Override from the menu and then clear the check box.

Step - 9
Select the beauty_layer in the Layer Editor and then, in the Render Settings window, make sure the Enable Default Light check box is selected and create a layer override for it, as discussed above.

Step - 10
In the File Output area of the Render Settings window, enter <Scene> in the File name prefix edit box.

Step - 11
Select OpenEXR (exr) from the Image format drop-down list.

Step - 12
Select the name.ext (Single Frame) option from the Frame/Animation ext drop-down list because we will be rendering only one frame.

Step - 13
In the Indirect Lighting tab, select the Final Gathering check box from the Final Gathering area and then create a layer override for it. Next, enter 200 in the Accuracy edit box.

Step - 14
Now, close the Render Settings window.

Step - 15
Select the position_layer layer in the Layer Editor and then invoke the Hypershade window.

Step - 16
Now, add a Sampler Info utility node and a Surface Shader node to the Work Area tab of the Hypershade window.

Step - 17
Drag the samplerInfo1 node onto the surfaceShader1 node with the SHIFT key held down; the Connection Editor will be displayed.

Step - 18
Expand the pointWorld attribute in the Outputs area. Next, expand the outColor attribute in the Inputs area.

Step - 19
Connect pointWorldX, pointWorldY, and pointWorldZ to outColorR, outColorG, and outColorB, respectively, refer to Figure 3.
Figure 3
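If you prefer to script this connection instead of using the Connection Editor, a maya.cmds equivalent is sketched below; it assumes the nodes are named samplerInfo1 and surfaceShader1, as created in Step 16.

import maya.cmds as cmds

# Wire world-space position into the shader color: X->R, Y->G, Z->B.
for axis, channel in zip('XYZ', 'RGB'):
    cmds.connectAttr('samplerInfo1.pointWorld%s' % axis,
                     'surfaceShader1.outColor%s' % channel,
                     force=True)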
Step - 20
Next, right-click on the surfaceShader1 node; a marking menu will be displayed. Choose Assign Material Override for position_layer, refer to Figure 4; the surface shader will be applied to the objects in the position_layer as an override.
Figure 4
Step - 21
Now, choose Render > Batch Render from the menu bar; the rendering will start.

Step - 22
On completion of the rendering, select camera1 in the viewport and export it as an FBX file.

If you want to use the images that I have rendered, you can download the files from the following link: https://www.dropbox.com/s/o9h9fcbftxyjw5h/nt006.zip. This file contains the camera FBX file as well. Next, we will bring the position pass into Nuke.

Step - 23
Start Nuke.

Step - 24
Read in the position pass; the Read1 node will be inserted in the Node Graph panel. Similarly, import the color pass in the Node Graph panel; the Read2 node will be inserted in the Node Graph panel.

Step - 25
Add a PositionToPoints node to the Node Graph panel.

Step - 26
Connect the Read2 node with the PositionToPoints1 node. Next, connect the pos input of the PositionToPoints1 node with the Read1 node.

Step - 27
Select the PositionToPoints1 node in the Node Graph panel and then press 1 to view the point cloud in the Viewer1 panel. Rotate around the scene to view the cloud properly, refer to Figure 5.
Figure 5
The points in the scene are the 3D representation of the position data that we rendered from Maya.
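For reference, the point-cloud setup built so far can be sketched in Python as follows; the file names and input order are assumptions based on the steps above, so verify the pos input index in your build.

import nuke

pos_pass   = nuke.nodes.Read(file='position_pass.exr')   # Read1 in this tutorial
color_pass = nuke.nodes.Read(file='color_pass.exr')      # Read2 in this tutorial

p2p = nuke.nodes.PositionToPoints()
p2p.setInput(0, color_pass)   # main/color input
p2p.setInput(1, pos_pass)     # pos input (verify the input order in your build)
p2p['display'].setValue('wireframe')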

Step - 28
In the PositionToPoints tab of the PositionToPoints1 node, select wireframe from the display drop-down.

Step - 29
Choose Camera from the 3D menu; the Camera1 node will be inserted in the Node Graph panel.

Step - 30
In the File tab of the Camera1 node properties panel, select the read from file check box and then load the cam.fbx file that you have downloaded.

Next, you will add a 3D geometry to the scene and align it with the help of the points in the cloud.

Step - 31
Add a Cylinder node to the Node Graph panel and then connect the butterfly.jpg file that you downloaded earlier to it.

Step - 32
In the Viewer1 panel, place the cylinder as required.

Step - 33
Connect a ScanlineRender node to the Cylinder1 node; the ScanlineRender1 node will be inserted in the Node Graph panel.

Step - 34
Connect the cam input of the ScanlineRender1 node with the Camera1 node.

Step - 35
Select the ScanlineRender1 node and press M; the Merge1 node will be inserted in the Node Graph panel.

Step - 36
Connect the B input of the Merge1 node with the Read2 node.

Step - 37
Select the Merge1 node in the Node Graph panel and then press 1 to view the output in the Viewer1 panel, refer to Figure 6.
Figure 6
This concludes the tutorial. You can use Light and other nodes to blend your 3D object with your render. Figure 7 shows the node network used in the script.
Figure 7

Friday, May 31, 2013

Create a Point Cloud by using the DepthToPoints Node

The DepthToPoints gizmo is used to generate a 3D point cloud from a depth pass and a 3D camera. This gizmo takes the color and depth information in the image and then recreates the image as a 3D point cloud. Then, you can use the points in the point cloud to line up any geometry in 3D space. Make sure that the alpha channel of the image is not set to black. Figures 1 and 2 show the color and depth information of an input image.
Figure 1
Figure 2
To generate a point cloud using the DepthToPoints gizmo, follow the steps given below:

Step - 1
Export a camera chan file from your 3D application.

Step - 2
Read in the input image that has the depth channel embedded in it; the Read1 node will be inserted into the Node Graph panel.

Step - 3
Add a DepthToPoints node from the 3D > Geometry menu; the DepthToPoints1 node will be inserted in the Node Graph panel.

Step - 4
Make a connection between the image input of the DepthToPoints1 node and the Read1 node. If the image contains normal data, connect it with the norm input of the DepthToPoints1 node.

Step - 5
Click on the empty area of the Node Graph panel and then add a Camera node from the 3D menu; the Camera1 node will be inserted in the Node Graph panel.

Step - 6
In the Camera tab of the Camera1 node properties panel, click on the file_menu icon; a flyout will be displayed.

Step - 7
Choose the Import chan file option from the flyout; the Chan File dialog box will be displayed. In this dialog box, navigate to the location where you saved the chan file and then select it. Next, choose the Open button from the dialog box.

Step - 8
Select the DepthToPoints1 node in the Node Graph panel and then press 1 to view the output in the Viewer1 panel. You will notice that the point cloud is displayed in the Viewer1 panel, but its position is not correct. We need to connect the chan camera data to the DepthToPoints1 node to get the correct camera angle.

Step - 9
In the User tab of the DepthToPoints1 node properties panel, select the depth channel from the depth drop-down.

Step - 10
Connect the camera input of the DepthToPoints1 node with the Camera1 node; the point cloud will be displayed in the Viewer1 panel, refer to Figure 3.
Figure 3
By default, the DepthToPoints node displays the point cloud in a solid color. If you want to display the outline of the geometry in the Viewer panel, select the wireframe option from the display drop-down, refer to Figure 4.
Figure 4
Step - 11
Enter 0.1 in the point detail field; the density of the point cloud will change in the Viewer1 panel, refer to Figure 5.
Figure 5
If you set the point detail field to 1, all available points will be displayed in the Viewer panel. If you want to change the size of the points, enter a value in the point size field. Now, you can place a 3D geometry in 3D space with the help of the point cloud.
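The same setup can be sketched in Python, assuming the DepthToPoints gizmo is available in your install; the knob names below follow the UI labels (point detail, point size, display) and are assumptions, so check them against the gizmo's knobs.

import nuke

img = nuke.nodes.Read(file='render_with_depth.exr')
cam = nuke.nodes.Camera2()            # import the chan file through its properties panel

d2pts = nuke.nodes.DepthToPoints()
d2pts.setInput(0, img)                # image input
d2pts.setInput(1, cam)                # camera input (verify the index in your build)

d2pts['depth'].setValue('depth.Z')    # channel that holds the depth pass
d2pts['detail'].setValue(0.1)         # "point detail": density of the cloud
d2pts['pointSize'].setValue(2)        # "point size"
d2pts['display'].setValue('wireframe')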

Thursday, May 30, 2013

Working with the ZDefocus node in Nuke 7 Part - 2

In continuation of Part - 1 of this article, we will work with the remaining options available in the ZDefocus node properties panel. Read Part - 1 here.

Step - 1
Navigate to the following link and then download the image named city_illumination.jpg to your hard drive.

Step – 2
Load the city_illumination.jpg file into the script; the Read1 node will be inserted in the Node Graph panel.

Step – 3
Make sure the Read1 node is selected and then press 1 to view its output in the Viewer1 panel, refer to Figure 1.
Figure 1
Step - 4
Connect a ZDefocus node to the Read1 node. You will notice an error message in the Viewer1 panel about the missing depth channel. This error is generated because, by default, the ZDefocus node looks for depth information in the depth.z channel (selected by default in the depth channel drop-down) and there is no depth channel available in the city_illumination.jpg file.

Step - 5
In the ZDefocus tab of the ZDefocus1 node properties panel, select rgba.alpha from the depth channel drop-down; an error message about the missing alpha channel will be displayed in the Viewer1 panel.

Step - 6
In the Read1 node properties panel, select the auto alpha check box. You will notice in the Viewer1 panel that the highlights are now out of focus, as shown in Figure 2.
Figure 2
By default, the disc option is selected in the filter type drop-down. As a result, a round disc filter is applied to the image. The filter shape parameter is used to dissolve the shape between 0 (gaussian, blobby shape) and 1 (disc).

Step - 7
Enter 2 in the aspect ratio field. You will notice a cat's-eye type effect in the Viewer1 panel, as shown in Figure 3.
Figure 3
The aspect ratio parameter controls the aspect ratio of the filter. The default ratio is 1:1.

Step - 8
Enter 1 in the aspect ratio field. Next, select bladed from the filter type drop-down; the highlights in the Viewer1 panel will be displayed in the shape of iris blades, as shown in Figure 4.
Figure 4
Step - 9
Enter 3 in the blades field; the highlights in the Viewer1 panel will be displayed in a shape made of 3 iris blades, as shown in Figure 5.
Figure 5
The roundness parameter is used to control the rounding of the polygon edges of the filter. If you set this parameter to zero, no rounding will occur. The rotation parameter is used to define the rotation of the filter in degrees. The inner size parameter is used to control the size of the inner polygon. The inner feather parameter is used to add feathering around the outward and inward edges of the inner polygon. The inner brightness parameter controls the brightness of the inner polygon. Adjust these parameters as per your requirement.

Step - 10
Select the catadioptric check box. This check box is used to produce annular defocused areas, thus producing donut-shaped highlights. The catadioptric size parameter is used to control the catadioptric hole in the bokeh. This parameter is only available if you select the catadioptric check box. Figure 6 shows the bokeh created using the following values (a scripted equivalent is shown after the figure):

filter type: bladed
aspect ratio: 1.04
blades: 7
roundness: 0
rotation: 66
inner size: 0.105
inner feather: 0.285
inner brightness: 0.07
catadioptric: Selected
catadioptric size: 0.41
Figure 6
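The same values can be applied through Python, as sketched below; the knob names mirror the UI labels listed above and are assumptions, so check them with nuke.toNode('ZDefocus1').knobs().

import nuke

z = nuke.toNode('ZDefocus1')
z['filter_type'].setValue('bladed')
z['aspect_ratio'].setValue(1.04)
z['blades'].setValue(7)
z['roundness'].setValue(0)
z['rotation'].setValue(66)
z['inner_size'].setValue(0.105)
z['inner_feather'].setValue(0.285)
z['inner_brightness'].setValue(0.07)
z['catadioptric'].setValue(True)
z['catadioptric_size'].setValue(0.41)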
Step - 11
Select the gamma correction check box; a gamma curve of 2.2 will be applied to the image before blurring and then reversed for the final result. This makes the bokeh more pronounced, as shown in Figure 7.
Figure 7
Step - 12
Select the bloom check box to make the highlights more visible. When you select this check box, the bloom threshold and bloom gain parameters become active. The highlights above the value specified by the bloom threshold parameter will be multiplied by the value specified for the bloom gain parameter.

Step - 13
Enter 0.88 and 2.44 in the bloom threshold and bloom gain fields, respectively. Figure 8 shows the highlights after entering the values.
Figure 8
If you select filter shape setup from the output drop-down, the filter shape that is responsible for the shape of the highlights will be displayed in the Viewer1 panel, as shown in Figure 9.
Figure 9
Step - 14
Select bladed from the filter type drop-down and then adjust the parameters corresponding to the bladed filter type. You will notice the change in the shape in the Viewer1 panel.

Next, you will apply a custom filter to the ZDefocus node. You can create a filter image using a Flare or Roto node.

Step - 15
Reset the ZDefocus1 node properties. Next, select rgba.alpha from the depth channel drop-down.

Step - 16
Click on the empty area of the Node Graph panel and then add a Constant node. Next, set its size to 255x255. The added Constant1 node will act as a placeholder for the Flare node.

Step - 17
Make sure the Constant1 node is selected in the Node Graph panel and then connect a Flare node with it. Next, press 1 to view the output of the Flare1 node in the Viewer1 panel.

Step - 18
In the Viewer1 panel, use the position widget to position the flare at the center of the Constant node's result.

Step - 19
In the Flare tab of the Flare1 node properties panel, enter 16 and 1 in the edge flattening and corner sharpness parameters, respectively.

Step - 20
Connect the filter input of the ZDefocus1 node with the Flare1 node.

Step - 21
Select the ZDefocus1 node and then press 1 to view its output.

Step - 22
In the ZDefocus1 node properties panel, select image from the filter type drop-down; an error message will be displayed in the Viewer1 panel because the filter image has no alpha channel embedded in it. To rectify this, select rgba.red from the filter channel drop-down.

You will notice in the Viewer1 panel that the shape of the highlights changes according to the output of the Flare1 node.

This concludes part - 2 of "Working with ZDefocus node".

Wednesday, May 29, 2013

Working with the ZDefocus node in Nuke 7 Part - 1

The ZDefocus node is a major upgrade to the ZBlur node. The ZDefocus node is used to blur an image according to a depth map channel and gives you the ability to simulate blur using depth of field. This node splits the input image into layers: within a layer, all pixels have the same depth value and the whole layer receives the same blur size. After processing all the layers present in the input image, ZDefocus blends the layers together from the back to the front of the image, thus preserving the order of the elements in the scene.

To add a ZDefocus node to the Node Graph panel, select the input image that you need to blur and then choose the Filter button to display the Filter menu. Next, choose ZDefocus from the menu; the ZDefocus# node will be inserted in the Node Graph panel. Also, the ZDefocus# node properties panel will be displayed with the ZDefocus tab chosen in the Properties Bin, refer to Figure 1.
Figure 1
You will notice in the Node Graph panel that, apart from the regular mask and output connectors, the ZDefocus# node has two more input connectors: filter and image. These are discussed next:

filter: The image connected to this input defines the shape of the out-of-focus highlights. These highlights are also referred to as “Bokeh”. You can use a Roto or Flare node to create the filter image. If you want to add color fringing to the bokeh, you can also connect a color image to the filter input.

image: This input is used to connect the input image that you want to blur. Make sure that this image contains a depth channel.

You will also notice a focal point widget in the Viewer# panel. This widget is used to adjust the position of the focal plane. On moving this widget, the focus plane and focal point parameters update automatically. If you select the Use GPU if available check box in the node properties panel, the processing of the node is run on the GPU instead of the CPU. If a GPU is present in the system, its name will be displayed above the check box, refer to Figure 1. You can also select which GPU you need to use. To do so, open the Preferences dialog box by pressing SHIFT+S and then choose the desired option from the GPU Device drop-down of the GPU Device area, refer to Figure 2.
Figure 2
Before moving further, navigate to http://www.mediafire.com/download/lpbg7sv7lf7hlz2/art021.zip and download the zip file. Next, extract the contents of the zip file to your hard drive. The zip file contains the chopper.exr file, which we will use to explain the concepts here.

Step – 1
Launch Nuke and start a new script in it.

Step – 2
Load the chopper.exr file into the script; the Read1 node will be inserted in the Node Graph panel.

Step – 3
Make sure the Read1 node is selected and then press 1 to view its output in the Viewer1 panel, refer to Figure 3.

Step – 4
Select Z_Depth from the Channel Sets drop-down; the depth channel will be displayed in the Viewer1 panel, refer to Figure 4. Now, select rgba from the Channel Sets drop-down.
Figure 3
Figure 4
Step – 5
Connect a ZDefocus node to the Read1 node. You will notice an error message in the Viewer1 panel about the missing depth channel. This error is generated because, by default, the ZDefocus node looks for depth information in the depth.z channel, which is selected by default in the depth channel drop-down.

Step – 6
Select Z_Depth.red from the depth channel drop-down; you will notice blur in the Viewer1 panel, refer to Figure 5.

The options in the channels drop-down located above the depth channel drop-down are used to select the channels to which the blur will be applied.

Step – 7
In the Viewer1 panel, move the focal point widget to the front part of the chopper; the area around the point will be in focus immediately, refer to Figure 6.
Figure 5
Figure 6
Step – 8
Select depth from the math drop-down.

The options in the math drop-down are used to specify the method that will be used to calculate the distance between the camera and the object using the information available in the depth channel. If you hover the mouse pointer over the math drop-down, a tooltip will appear with information about the formula used to calculate the blur. By default, the far=0 option is selected in this drop-down. This option is compatible with the depth maps generated using Nuke and RenderMan.

Step – 9
Enter 0.1, 8, and 10 in the depth of field, size, and maximum fields, respectively.

The depth of field parameter is used to specify the depth of field around the focus plane. The size parameter is used to set the size of the blur. The size of the blur is clipped at the value specified using the maximum parameter. The blur inside check box located next to the depth of field parameter is used to apply a small amount of blur to the in-focus area so that the transition between the in-focus and out-of-focus areas looks smooth.
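For reference, the focus settings used in Steps 6 through 9 can be applied through Python as sketched below; the knob names are assumptions based on the UI labels (check them with nuke.toNode('ZDefocus1').knobs()).

import nuke

z = nuke.toNode('ZDefocus1')
z['depth_channel'].setValue('Z_Depth.red')  # where the node reads depth from (Step 6)
z['math'].setValue('depth')                 # how the depth channel is interpreted (Step 8)
z['dof'].setValue(0.1)                      # depth of field around the focus plane
z['size'].setValue(8)                       # blur size
z['max_size'].setValue(10)                  # the blur size is clipped at this value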

Step – 10
Select focal plane setup from the output drop-down; the depth of field information will be displayed in the rgb channels in the Viewer1 panel. Move the focal point to see the output properly, refer to Figure 7.

The red color represents the area in front of the depth of field, the green color represents the area that is inside the depth of field, and the blue color represents the area beyond the depth of field. If the depth of field parameter is set to 0, you won't be able to see the green area in the Viewer.

Step – 11
Select the layer setup option from the output drop-down.

This option is similar to the focal plane setup option, but it displays the DOF information after the depth is divided into layers, refer to Figure 8. When the automatic layer spacing check box is selected, the ZDefocus node automatically decides how many depth layers to use based on the value specified by the maximum parameter. When you clear this check box, you can use the depth layers and layer curve parameters to control the number of layers and the spacing between the layers, respectively.
Figure 7
Figure 8
Now, experiment with the controls in the ZDefocus1 node properties panel until you get the desired result. Also, use the focal point widget in the Viewer1 panel to interactively change the focus point.

This concludes Part – 1.

Read Part - 2 here.