Lighting in Visual Effects Week 9

IBL stands for “image-based lighting”, a method of approximating global illumination. Using this method gives better visuals while remaining cheap enough for real-time rendering. One way to achieve it is to first take a picture of the environment. This image can be captured by a real-world camera (HDR is recommended for better results), or it can be rendered in real time by the in-game camera.
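The core lookup behind IBL can be sketched in a few lines: given a direction away from the shaded point, fetch the radiance stored in an equirectangular (lat-long) HDR image. This is a minimal NumPy sketch; the axis convention for the longitude mapping is an assumption.

```python
import numpy as np

def sample_environment(hdr, direction):
    """Look up the radiance an IBL setup would use for a given direction.

    hdr: (H, W, 3) float array holding an equirectangular (lat-long) HDR image.
    direction: unit 3-vector pointing away from the shaded surface.
    """
    x, y, z = direction
    # Direction -> spherical coordinates -> [0, 1] UVs.
    # The choice of arctan2(x, -z) for longitude is one common convention.
    u = (np.arctan2(x, -z) / (2 * np.pi)) + 0.5
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi
    h, w, _ = hdr.shape
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return hdr[row, col]
```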

AOVs

AOVs (arbitrary output variables) split the lighting so we can break the beauty render down into multiple per-light renders. Beyond that, each lighting render can be broken down further based on the shading component (the diffuse, specular and subsurface scattering components).

With the help of the ID pass, we can easily separate the characters in comp and apply different settings to each of them. There is also the position pass: we can use the position data to draw a mask based on where the ground is, so that pixels in other areas can be discarded.
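Here is a hypothetical sketch of recombining these passes in Nuke's Python API: shuffle each shading-component AOV out of the render and plus them back together. The layer names are assumptions about how the AOVs were written into the EXR.

```python
# Run inside Nuke's Script Editor. Layer names are assumptions;
# substitute whatever AOV names your renderer wrote into the EXR.
import nuke

read = nuke.nodes.Read(file='/path/to/render.exr')  # hypothetical path

layers = ['diffuse', 'specular', 'sss']
shuffles = []
for layer in layers:
    # 'in' is a Python keyword, so pass the Shuffle knob via a dict
    s = nuke.nodes.Shuffle(**{'in': layer})
    s.setInput(0, read)
    shuffles.append(s)

# Plus the components back together to reconstruct the beauty
merged = shuffles[0]
for s in shuffles[1:]:
    m = nuke.nodes.Merge2(operation='plus')
    m.setInput(0, merged)  # B input
    m.setInput(1, s)       # A input
    merged = m
```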

Select Transform > SphericalTransform to insert a SphericalTransform node after the HDR image. This node converts the HDR image to a spherically mapped image. In the node's controls, set the Input Type and Output Type (in this case, Sphere).

Select 3D > Lighting > Environment and insert an Environment node into your script. Connect the SphericalTransform node to the mapping input of the Environment node, and connect the Environment node to the Scene node.
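The same wiring can be scripted. This is only a sketch against Nuke's Python API; the file path and the Environment node's input index are assumptions, so check the input labels in the node graph if a connection lands wrong.

```python
import nuke

hdr = nuke.nodes.Read(file='/path/to/environment.exr')  # hypothetical path
sph = nuke.nodes.SphericalTransform()
sph.setInput(0, hdr)

env = nuke.nodes.Environment()
env.setInput(0, sph)   # mapping input (assumed to be index 0)

scene = nuke.nodes.Scene()
scene.setInput(0, env)
```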

Houdini Session Week 8

This week is all about volumes: smoke, fire and explosions. We will learn to create and control combustion-style simulations in the sparse pyro solver, and then take it a step further by running a batch of wedged sims with PDG so we can work more efficiently. Custom simulation and post-simulation techniques, along with custom shader and lighting strategies, will allow us to create production-quality work in a way that out-of-the-box tools cannot. After some compositing, we’ll end up with a big bang – but more importantly, the knowledge and confidence to hack solvers and shaders to create all sorts of effects.

Smoke is the display mode for fog volumes; for example, the Fog Volume output of an IsoOffset SOP displays directly as a visible voxel field;
Iso display draws the volume as a number of small squares that always face the camera, e.g. the SDF Volume output of IsoOffset;
Poly is the normal mesh display, e.g. the Iso Surface and quad mesh outputs of IsoOffset;
The Volume SOP creates a voxel field. By default it is displayed in smoke mode, but the node's values default to zero, so nothing can be seen;
SDF volumes are likewise stored as voxels, from which distances, directions and other data can be retrieved;
VDB refers to OpenVDB, a newer general-purpose volume data type. The VDB format can be exported as a generic container holding density and other volume data;
Convert VDB converts between VDBs and native Houdini volumes;
Volume Visualization can be used to control the display colours of the fog;
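As a quick way to experiment with these volume types, here is a sketch using Houdini's hou module that builds the IsoOffset and Convert VDB chain described above. The SOP type names match current builds but may differ by Houdini version.

```python
# Build the IsoOffset -> Convert VDB chain from Houdini's Python shell.
import hou

geo = hou.node('/obj').createNode('geo', 'volume_demo')
box = geo.createNode('box')

iso = geo.createNode('isooffset')   # Output Type switches Fog / SDF / Iso Surface
iso.setInput(0, box)

vdb = geo.createNode('convertvdb')  # converts Houdini volumes <-> OpenVDB
vdb.setInput(0, iso)
vdb.setDisplayFlag(True)
geo.layoutChildren()
```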

Node: connect the atmosphere_volume shader output to the atmosphere input of OUT_environment.

Add a Volume Rasterize Attributes SOP and select the density attribute.

Group – A group of points in the input to rasterize.
Attributes – Specifies which point attributes are rasterized into corresponding VDBs.
Note – only float and vector attributes can be rasterized.

The basic building blocks of a smoke simulation are the object, the solver and the sourcing. The Smoke Object (Sparse) node creates a dynamic object containing the required fields, and the solver then evolves the object as the simulation progresses. The simplest smoke simulation requires the following fields.

density – the scalar field marking the area containing the smoke;
temperature – the scalar field used for buoyancy calculations;
vel – the vector field that captures the instantaneous motion of the smoke.
The solver is responsible for ensuring that these fields change in a way consistent with smoke, while sourcing is responsible for injecting these quantities during the simulation. For example, you may need a source that continuously adds density and heat so that hot areas keep rising in the smoke.
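To make that division of labour concrete, here is a toy 2D sketch in NumPy (not Houdini's solver): sourcing injects density and temperature each step, buoyancy pushes vel upward where the field is hot, and a crude advection carries the smoke along.

```python
import numpy as np

# Toy 2D smoke step: row 0 is the bottom of the grid, higher rows are "up".
N, dt, buoyancy = 64, 0.1, 2.0
density = np.zeros((N, N))
temperature = np.zeros((N, N))
vel_y = np.zeros((N, N))          # vertical component of vel

for step in range(100):
    # Sourcing: keep injecting smoke and heat near the bottom
    density[2:4, 28:36] += 1.0 * dt
    temperature[2:4, 28:36] += 1.0 * dt
    # Buoyancy: hot regions accelerate upward
    vel_y += buoyancy * temperature * dt
    # Crude advection: each cell pulls its value from below by vel_y cells
    rows = np.clip(np.arange(N)[:, None] - (vel_y * dt).astype(int), 0, N - 1)
    cols = np.arange(N)[None, :].repeat(N, axis=0)
    density = density[rows, cols]
    temperature = temperature[rows, cols]
```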

Fire

Pyro Solver (Sparse) – this node is an extension of the Smoke Solver (Sparse). It adds an additional simulation field (capturing the presence of flames) and some extra shaping parameters to allow more control over the fire's appearance.

Add disturbance, then add vortex confinement with a confinement scale of 0.5. The Gas Vortex Confinement DOP applies vortex confinement to the velocity field. This is a force that amplifies existing vortices, with the aim of counteracting the smoothing-out that occurs during the diffusion stage of a fluid solver. Confinement Scale sets the strength of the vortex confinement.
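For reference, this is what vortex confinement computes, sketched in 2D with NumPy after the published vorticity-confinement technique. It is an illustration rather than SideFX's implementation, and eps here plays the role of the Confinement Scale parameter.

```python
import numpy as np

def vorticity_confinement(u, v, h=1.0, eps=0.5):
    """2D vorticity-confinement force for a velocity field (u, v) on a grid."""
    # Vorticity (curl of velocity in 2D): w = dv/dx - du/dy
    w = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)
    # Normalised gradient of |w| points toward vortex centres
    gx = np.gradient(np.abs(w), h, axis=1)
    gy = np.gradient(np.abs(w), h, axis=0)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    nx, ny = gx / mag, gy / mag
    # Confinement force eps * h * (N x w): amplifies existing swirls
    fx = eps * h * (ny * w)
    fy = eps * h * (-nx * w)
    return fx, fy
```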

Standard_volume

standard_volume is a physically based volume shader. It provides independent control over volume density, scatter colour and transparent colour. Blackbody emission is used to render fire and explosions directly from physics simulations. Each component can be driven by a volume channel from the volume object, and the other parameters act as multipliers on it. Optionally, the channels can be left blank and custom shaders (such as volume samplers or procedural textures) can be attached to manipulate each component with more control.
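A sketch of driving standard_volume from simulation channels through the Arnold Python API follows. The parameter names here follow my reading of the Arnold docs and should be checked against the standard_volume reference before relying on them.

```python
# Sketch: configure standard_volume to render fire from sim channels.
# Parameter and channel names are assumptions; verify against the docs.
from arnold import *

AiBegin()
vol = AiNode("standard_volume")
AiNodeSetStr(vol, "name", "fire_shader")
AiNodeSetFlt(vol, "density", 1.0)
AiNodeSetStr(vol, "density_channel", "density")        # channel from the sim
AiNodeSetStr(vol, "emission_mode", "blackbody")        # fire from physics
AiNodeSetStr(vol, "temperature_channel", "temperature")
AiNodeSetFlt(vol, "blackbody_kelvin", 5000.0)
AiEnd()
```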

Houdini Session Week 7

  • Linear workflows.

Linear workflows have become the industry standard adopted by most studios because of the flexibility they offer. It is much easier to adjust the reflectivity of an object or change the base colours of individual model sections in Photoshop than to fine-tune and re-render in 3D. This is the flexibility that the PixelSquid product offers to the end user: each rendered layer or element contributes to the final image. Therefore, PixelSquid content must be generated in a linear workflow. The diagram below shows the basic flow of the linear pipeline.

  • High Dynamic Range Imaging

High Dynamic Range Imaging (HDRI) is a technique in photography, film and ray-traced computer-generated imaging that reproduces a wider range of luminance than standard digital imaging or photographic techniques. Standard techniques only allow differentiation within a certain luminance range; outside this range no features are visible, because everything in brighter areas looks pure white and everything in darker areas pure black.

The ratio between the maximum and minimum luminance values of an image is called its dynamic range. HDRI can be used to record many real-world scenes, from very bright direct sunlight to extreme shadow or very faint nebulae. High Dynamic Range (HDR) images are usually produced by capturing and then combining several different, narrower exposure ranges of the same subject.

The two main types of HDR images are computer renderings and images produced by combining multiple low dynamic range (LDR) or standard dynamic range (SDR) photographs. HDR images can also be acquired directly using a special image sensor, such as an oversampled binary image sensor. Due to the contrast limitations of print and displays, the extended brightness range of an HDR input image must be compressed to make it visible.
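Here is a minimal sketch of the exposure-merging idea: each bracketed shot, divided by its exposure time, is an estimate of scene radiance, and a hat-shaped weight discounts clipped shadows and highlights. This assumes the inputs are already linearised.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge bracketed shots into an HDR radiance estimate.

    images: list of (H, W, 3) float arrays in [0, 1], already linearised.
    times:  matching list of exposure times in seconds.
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, times):
        # Hat weighting: trust mid-tones, distrust clipped pixels
        w = 1.0 - np.abs(2.0 * img - 1.0)
        num += w * (img / t)   # each shot's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)
```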

Gamma Correction

Gamma correction is the editing of an image's gamma curve to perform non-linear tonal editing: it adjusts the dark and light parts of the image signal and increases the ratio between the two to improve contrast. In computer graphics, the curve describing the conversion between the screen's output voltage and the corresponding brightness is called the gamma curve.

For a conventional CRT (cathode-ray tube) screen, the curve is a power function, Y = (X + e)^γ, where Y is the luminance, X is the output voltage, e is a compensation factor and the exponent γ is the gamma value. Changing the exponent γ changes the gamma curve of the CRT. A typical encoding gamma value is 0.45, which makes the displayed brightness of a CRT image linear overall. For display screens such as CRT televisions, this correction is necessary because the luminous greyscale response to the input signal is not a linear function but a power function.
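The power function is easy to sanity-check in code: encoding with γ ≈ 0.45 and then displaying through a CRT-style γ ≈ 2.2 response roughly cancel out to a linear result.

```python
import numpy as np

def apply_gamma(signal, gamma, e=0.0):
    """Y = (X + e) ** gamma, applied per value to a [0, 1] signal."""
    return np.clip(signal + e, 0.0, 1.0) ** gamma

x = np.linspace(0.0, 1.0, 5)
encoded = apply_gamma(x, 0.45)          # camera-side encoding
displayed = apply_gamma(encoded, 2.2)   # CRT-style display response
print(np.round(displayed, 3))           # ~equal to x: the two curves cancel
```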

ACES

ACES, the Academy Color Encoding System, is an open colour management and exchange system developed by the Academy of Motion Picture Arts and Sciences (AMPAS) and industry partners.

Colour depth

In computer graphics, colour depth is the number of bits used to store the colour of one pixel in a bitmap or video frame buffer. It is also referred to as bits per pixel (bpp). The higher the colour depth, the more colours are available.

Colour depth is described by the term “n-bit colour”. If the colour depth is n bits, there are 2^n colour options and each pixel takes n bits to store. For example, 8-bit colour allows 2^8 = 256 colours, while 24-bit colour allows about 16.7 million.
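The arithmetic in a couple of lines of Python:

```python
# Colours representable at a given bit depth: 2 ** n
for n in (1, 8, 16, 24):
    print(f"{n}-bit colour -> {2 ** n:,} colours")
```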

Mantra

An existing render node can be edited via Render > Edit Render Node > (name of the node). To see the actual driver node network, click the path at the top of the Network Editor pane and navigate to the out network. If you add object properties to a render driver, they act as defaults for all objects in the scene. To edit a node's properties, select the render node, click the gear menu in the Parameter Editor and choose Edit Rendering Parameters. For complex scenes involving multiple render passes, separate lighting and shadow passes and so on, you can create dependencies between render drivers by connecting the driver nodes. See the documentation on render dependencies.
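A small sketch of scripting that dependency chain with the hou module; 'ifd' is the internal type name of the Mantra render driver in current Houdini builds.

```python
# Chain two Mantra render drivers so the beauty pass depends on the
# shadow pass; rendering the downstream node renders its inputs first.
import hou

out = hou.node('/out')
shadow_pass = out.createNode('ifd', 'shadow_pass')   # Mantra ROP
beauty_pass = out.createNode('ifd', 'beauty_pass')
beauty_pass.setInput(0, shadow_pass)                 # dependency link

# beauty_pass.render() would now render shadow_pass first
```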

Lighting in Visual Effects Week 6

1.1 Inspecting the source image files

Use the wooden box as a reference: sample the colour values of the box, then apply that grade to the others. Separate out the HDRI using an Exposure (Color) node and a Multiply (Math) node. Remap the image with a SphericalTransform so that straight lines are easier to draw. Then open the Nuke script and create the project.

Intro to the Nuke interface

Ctrl+A selects all; Ctrl+Shift opens the Search/Replace window. You can then see the image in the panel: click and drag to make a selection, dock the panel at the bottom of the screen, then Ctrl+Shift + left mouse button + drag.

Then add a Multiply (Math) node and set its channel to RGB. Click the colour picker in the display panel and you can see all the other controls. Use the same maths to shift the colours up and down until they match better. Put the colour values of the environment through a low-pass filter; this makes the whole environment darker than before. Then draw a mask to isolate it as a separate map, and use the same method to bring it down. This way we can simplify the setup and avoid doing too much at this point.
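One way this matching step could be scripted, as a sketch: sample the reference and the CG element at the same flat patch and set the Multiply to their per-channel ratio. The node names and sample position are hypothetical.

```python
import nuke

ref = nuke.toNode('Read_reference')   # hypothetical node names
cg = nuke.toNode('Read_cg')
x, y = 512, 512                       # pick a flat, neutral patch to sample

gain = [
    nuke.sample(ref, ch, x, y) / max(nuke.sample(cg, ch, x, y), 1e-6)
    for ch in ('red', 'green', 'blue')
]

mult = nuke.nodes.Multiply()
mult.setInput(0, cg)
for i, g in enumerate(gain):          # per-channel gain on the value knob
    mult['value'].setValue(g, i)
```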

Introduction to the workflow

Houdini Session Week 5

Week 5 was spent on destruction effects. I tried importing the models I made in Maya into Houdini for the effects work. Each session was an opportunity to learn new content, which made it challenging and fulfilling for me. Rigid-body destruction is a very powerful feature of Houdini. In fact, most of the core work is done in SOPs, such as fracturing different materials, controlling constraint generation, attribute control and packed-object transforms, as well as adding detail and material rendering in post.

A Voronoi diagram is a subdivision of the plane, characterised by the property that any position inside a polygon is closer to that polygon's seed point (e.g. a residential area) than to the seed point of any adjacent polygon; each polygon contains exactly one seed point. Because of this equal division of space, Voronoi diagrams can be used to solve nearest-point and smallest-enclosing-circle problems, and many other spatial analysis questions such as adjacency, proximity and accessibility.
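The same idea outside Houdini, as a minimal SciPy sketch: build a Voronoi diagram from scattered seed points, which is the structure the fracture tools use to cut geometry.

```python
import numpy as np
from scipy.spatial import Voronoi

# Scatter seed points (in Houdini these would drive the fracture pattern)
rng = np.random.default_rng(0)
points = rng.random((20, 2))

vor = Voronoi(points)
print(len(vor.regions), "regions")    # one cell per seed (plus an empty region)

# Every location in a cell is closer to its own seed than to any other:
probe = np.array([0.5, 0.5])
nearest = np.argmin(np.linalg.norm(points - probe, axis=1))
print("probe belongs to the cell of seed", nearest)
```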

This node (RBD Material Fracture) lets you generate three types of fractures: concrete, glass and wood.

It has four inputs: Geometry, Constraint Geometry, Proxy Geometry, and an optional input that can be connected to additional points to control the shape of the fragments. You can directly choose between the three damage presets for the material type. If constraint geometry is connected, the node automatically generates constraints between fragments.

Use the different display modes: the Guide Geometry option controls them. Fractured Geometry is the default mode for all three presets; as the name implies, it shows the broken shape.

[Bullet Solver]. The Bullet Solver DOP sets up objects to use the Bullet dynamics solver. This solver can use a simplified representation of an object, such as a box or sphere, or a compound of these simple shapes to build more complex shapes. It can also use convex shapes derived from the object's geometry points, or collide objects against the deforming geometry of a cloth, solid or wire object. We only need to use the Bullet Solver here.

Next, we need to use constraints, so that the model only breaks when it falls and touches the ground.

This is then applied to the house and the final result is the destruction of the house.