Monday, October 25, 2010

back to Brisbane

...from Harry Potter to Polly Pocket :) what's next... I wonder.

This job is giving me some nice time to think about lighting - using spotlights to fake GI.
And I'm getting some nice renders...

I've also taken the time to dabble in a little MaxScript to try to optimise our workflow.

My scripts make selection sets of the characters, sets and props so we can easily organise the lighting.
Another script sets up the 'light linking' - or Include/Exclude lists - for characters, to remove hoo-man error and ensure the lights are only affecting what they should be.

Other scripts swap out shaders to account for updates in character models, and keep things streamlined.
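The light-linking script above boils down to expanding category rules into explicit include lists. A minimal sketch in Python (the real script is MaxScript, and every object and light name here is hypothetical):

```python
# Minimal sketch of scripted light linking: each light declares which
# categories it should affect, and we expand that into an explicit
# include list of object names. All names below are made up.

scene = {
    "characters": ["harry", "polly"],
    "sets":       ["kitchen_set"],
    "props":      ["wand", "teacup"],
}

light_rules = {
    "key_char": ["characters"],       # key light only hits characters
    "set_fill": ["sets", "props"],    # fill light skips the characters
}

def include_list(light):
    """Expand a light's category rules into explicit object names."""
    return [obj for cat in light_rules[light] for obj in scene[cat]]

print(include_list("key_char"))   # ['harry', 'polly']
```

The point of generating the lists rather than hand-picking objects is that adding a character to the scene dictionary updates every light's include list in one hit.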
Some test renders coming soon....(20/1/11)

Monday, September 6, 2010

bit of humor

I'm at home sick today when I should be at work, working on the latest Harry Potter movie!
so some humour is in order - from swann - thanks - on the Side Effects forums.

Wednesday, July 14, 2010

procedural feather

the beginnings of a procedural feather generator

stuff I've thought about so far:
proxy / hi-res toggle - currently the 'fronds' can be turned off, but ideally it'd be nice to have a few polygons with the render UV-mapped on automagically



the user can adjust:
  • the position along the stem where the fronds start
  • the sweep thickness of the stem via ramp
  • the poly detail of the stem
  • the overall shape of the fronds via a ramp
  • the angle fronds grow out and down toward the base of the feather
    vs how perpendicular from the stem they are
  • the number of fronds and the detail of the curves 
  • the render width of the fronds / curves.
  • the thickness of the individual fronds via a ramp
  • the noise on the fronds
  • the overall bend and twist of the feather
Further stuff I want to have a crack at:
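The frond controls above can be sketched as a function of a start position and a shaping ramp. A toy Python model - the sine 'ramp' and the parameter names are my own assumptions, not the actual asset:

```python
import math

# Hedged sketch of two of the frond controls: fronds are distributed along
# a unit-length stem from a user-set start position, with length shaped by
# a "ramp" (here just a function of the normalised position 0..1).

def frond_lengths(n_fronds, start=0.2, ramp=lambda u: math.sin(math.pi * u)):
    """Return (position_on_stem, length) pairs for each frond."""
    out = []
    for i in range(n_fronds):
        u = i / max(n_fronds - 1, 1)          # 0..1 along the frond span
        pos = start + u * (1.0 - start)       # remap into [start, 1]
        out.append((pos, ramp(u)))            # ramp shapes the outline
    return out

for pos, length in frond_lengths(5):
    print(f"pos={pos:.2f} length={length:.2f}")
```

Swapping the ramp function is the equivalent of the artist redrawing the spline ramp on the asset's top level.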

a procedural colouring system based on real feathers
subtle randomisation tests on most parameters
iridescent shading
a texture generation system for low res geometry.

Sunday, July 11, 2010

UN-subdivide

Rob Kelly nutted this out for me...

put in a subdivided mesh with no history...and get the UN-subdivided equivalent out of it.

he uses a VOPSOP that counts how many neighbours a point has, and if it has 4, then
it can be removed. The trick is to delete every second face first, then check every point's neighbour count,
then on another version, delete every second face/prim - but starting from prim #1 as opposed to prim #0 -
then repeat the neighbour count and point removal.

Combining the two results gets you the un-subdivided mesh.
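The neighbour-count test at the heart of this can be sketched outside Houdini. A small Python model of "count edge neighbours, flag points with exactly 4" on a once-subdivided quad - the point numbering and mesh representation here are my own, not the VOPSOP's:

```python
from collections import defaultdict

# A sketch of the neighbour-count trick: count each point's edge
# neighbours; interior points with exactly 4 neighbours are candidates
# for removal when reversing a subdivision.

def neighbour_counts(quads):
    """Map point -> number of distinct points it shares an edge with."""
    nbrs = defaultdict(set)
    for q in quads:
        for a, b in zip(q, q[1:] + q[:1]):   # walk the quad's edges
            nbrs[a].add(b)
            nbrs[b].add(a)
    return {p: len(s) for p, s in nbrs.items()}

# 3x3 point grid (one subdivided quad): points 0..8, four quads
quads = [(0, 1, 4, 3), (1, 2, 5, 4), (3, 4, 7, 6), (4, 5, 8, 7)]
counts = neighbour_counts(quads)
removable = [p for p, n in counts.items() if n == 4]
print(removable)   # [4] - only the centre point has four neighbours
```

In the real network the alternating face deletion is what exposes the subdivision-added points as the ones with exactly four neighbours; this sketch only shows the counting half.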

The example file is on odforce here, and shows two methods of achieving the same result.

nice work Rob Kelly ! :)

Wednesday, June 23, 2010

SOP renderer of sorts...

inspired by ben p's blog

a SOP land 'renderer'
below - the 'scene' as rendered normally, and the 'render' :) from the SOP renderer...
and finally - the '3D pixels' - boxes copied onto points.

the scene







colour info through the ray SOP






the 3D pixels rendered from a different angle
lighting info from the Phong shading equation: basically the dot product of the surface normal
and the direction to the light - multiplied by the light's multiplier, and added to the diffuse colour already present.
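That lighting term can be sketched in a few lines of Python (assuming the light vector is already normalised - the function name and tuple representation are my own):

```python
# Hedged sketch of the lighting term described above: dot(N, L) scaled by
# the light's multiplier, added to the diffuse colour already on the point.

def shade(Cd, N, light_dir, multiplier):
    """Diffuse term added to the existing point colour.
    N and light_dir are assumed to be unit vectors."""
    ndotl = max(0.0, sum(n * l for n, l in zip(N, light_dir)))
    return tuple(c + ndotl * multiplier for c in Cd)

# a surface facing the light head-on
print(shade((0.25, 0.25, 0.25), (0, 1, 0), (0, 1, 0), 0.5))  # (0.75, 0.75, 0.75)
```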

Monday, June 21, 2010

'champix' plasma

Whilst at Cutting Edge, I worked on developing some plasma effects.
The plasma was to 'arc' between the packet of cigarettes and the inner bell jar
walls. The arcs were also to travel around a little and change positions after a certain amount of time.

This could be achieved several ways - but time was short so I belted out this solution.

The plasma needed to have two contact points, so I used a Scatter SOP to generate points over each object - the bell jar and the box inside it - to use as a base for the contact points. I then used a Sort SOP to randomise the point numbers so that when the seed is changed, the point numbers change. When the arc of plasma uses just one of these points as a template or contact point, it can be animated randomly simply by changing the seed.

The important part was to change the seed every few seconds only - not every frame. I used the expression
int($F/25) in the seed parameter of the Sort SOPs. As the animation plays, every 25 frames the expression grows by a value of one.
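The int($F/25) trick is easy to model in Python - the shuffled order is stable within each 25-frame window and jumps when the integer seed changes:

```python
import random

# Sketch of the Sort SOP seed expression int($F/25): the seed only changes
# every 25 frames, so the shuffled point order is stable in between and
# jumps to a new arrangement once a second (at 25 fps).

def shuffled_points(frame, n_points=8):
    seed = frame // 25                 # the Python equivalent of int($F/25)
    order = list(range(n_points))
    random.Random(seed).shuffle(order)
    return order

assert shuffled_points(1) == shuffled_points(24)    # same 25-frame window
assert shuffled_points(25) == shuffled_points(49)   # next window, new but stable order
```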

I used a line copied onto just one point of the Scatter SOP as the foundation of the arc of plasma. At this stage in the network the line only has 2 points - a start point and an end point. The other point is then 'rayed' onto the box that sits inside of the bell jar.

So now I have straight lines that seem to randomly change both their start and end point positions - yet are always touching the surface of both the box and the inside of the bell jar. Now I need some noise, but in order for the noise to get results I also need more points. After resampling, the noise is achieved in a VOPSOP with Turbulence added to the original point position.

The noise creates pseudo-random movement - and moves some points away from each other more than others - creating different lengths of line segments within the line. I decided to resample again after this just to get a nice consistency of length in the line segments.

One interesting aspect I have in this SOP network that I didn't use in the original TVC is the third Sort SOP, which causes the point numbers to shift and wrap around down the line. This allowed me to create an effect of energy actually travelling along the line from the bell jar toward the box inside it. To achieve this I had to assign random colours based on the points and then create groups from this. This forced the newly assigned point colours to follow the motion of the changing point numbers and thus travel along the length of the line.
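The shift-and-wrap of point numbers can be modelled as rotating the colour array - a toy Python sketch (the real version uses a Sort SOP and groups, and the colour names here are stand-ins):

```python
# Sketch of the point-number shift that makes energy travel down the line:
# each frame the colour array is rotated, so a coloured run of points
# appears to move along the curve and wrap around.

def travel(colours, frame, step=1):
    """Rotate point colours so features move along the curve over time."""
    n = len(colours)
    shift = (frame * step) % n
    return colours[-shift:] + colours[:-shift]

line = ["white"] * 6 + ["red"] * 2     # a bright pulse at the end of the line
print(travel(line, 1))   # the pulse wraps to the start and travels along
```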

To accentuate this I swept a grid along the lines to create tubes and then 'peaked' out only certain points from my group that cause a little bulge to travel down the line. In this network I have disabled the sweep, as I realised this could probably be done with mantra and the width attribute.

So now I had one plasma arc buzzing and popping around - a Copy SOP allowed me to duplicate it, and stamping allowed me to randomise the noise seed so each arc was unique.
Stamping was also used to offset the animation timing and point colour assignment.

Finally, a Trail SOP gave me my velocity for motion blur, and a Point SOP with a min expression allowed me to clamp my velocity - and thus motion blur - to a threshold. The Attribute Create SOP gave me width so my lines could be rendered as curves.

The final result which also used some real footage of electrical arcs.


Wednesday, June 9, 2010

Prido Effects project

Character modelling, texture / displacement maps and animation:
Mike Paech
motionjunkie@gmail.com

displacement 1 and below no displacement


ok!
Nov 12 update !
I'm back on this project !

The overall goal is to review render passes in Houdini and some FX by lighting Prido - the character in two environments - a stormy night and a dry dusty day. FX included will be rain, rain bouncing off of the character, splashing water, drips off of the character, some breath and some lightning. For the Dry env, there will be dust and heat distortion.

Currently I've rebuilt the scene for H11 Apprentice and have had some weird rendering results. I've noticed the shader workflow has changed a little, so this will be the first thing I get sorted.

Friday, June 4, 2010

Tuesday, June 1, 2010

the end of a small era...



...on this Friday, I will say goodbye to Cutting Edge Post

I really enjoyed working with these people and was lucky enough to learn a whole bunch of Houdini from the 3D dept.!

I worked on some great projects including the Aussie hit feature Beneath Hill 60, and the soon-to-come Aussie horror "Needle", working alongside other talented artists such as Dave Brown, Tom King, Kate Kerrigan, Andrew Kimberly, Ant, GoatBoy and Swinney, as well as VFX supes Ron Roberts and Steve Anderson.

I also had the pleasure of working with one of the loveliest producers I had ever met - Jo Gregory.

Thanx for the good times guys !

Wednesday, May 12, 2010

Voronoi shatter ReadMe

Voronoi Fracture OTL
This OTL contains a Voronoi Fracture SOP,
as well as Fracture Solver and Fracture Pieces DOPs that do dynamic, impact location-based fracturing,
and a few other helper assets that provide base functionality for these assets.

Assets overview

Voronoi Fracture - SOP - provides Voronoi fracturing capability
Reattach Pieces - SOP - asset that iteratively, randomly fuses groups of geometry as output from the Fracture SOP or the Shatter SOP
Fracture Solver - DOP - creates fractured geometry for RBD objects within a DOPnet based on impacts
Fracture Pieces - DOP - responsible for creating new objects in the DOPnet from the fractured geometry
Nearest Points - SOP - support VEX SOP that uses point clouds or neighbour information to put the N nearest points into an attribute on each point
If you're unfamiliar with what a 2D Voronoi diagram looks like, google "Voronoi Diagram" and you should get plenty of examples. Or look at the "basic_2d" examples in the example fracture file.

Voronoi Fracture SOP Overview
The Voronoi fracture SOP takes two inputs: the mesh to fracture, and the points around which to build each Voronoi cell.  Typically these points will be generated by one of two SOPs, the Scatter SOP, or the PointsFromVolume SOP. 

If doing a solid fracture, where the interior surface of the object is built for each piece, it is very useful to have all the points reside within the volume. That will ensure you'll get one piece/fragment for each cell point.  Both PointsFromVolume and an IsoOffset to a FogVolume followed by a Scatter SOP will do that.  PointsFromVolume is useful if you know the approximate size of the pieces you want, since it operates on point separation.  Scatter is useful if you know the total number of pieces you want. 

It's also useful for location-based fracturing, as you can modulate the density from the IsoOffset SOP to generate more points within particular regions of the object. 

Where there is higher point density, the Fracture SOP will generate more, smaller pieces. 

The provided .hip files have examples of both types of point generation, as well as location-based fracture. At the moment the Fracture SOP takes a very brute-force approach to building the Voronoi cells. 

For each input cell point, it incrementally clips the input mesh by the half-plane that divides the current cell point and another cell point.  If building the input surface, it caps the resulting hole in the mesh, then continues iteratively clipping and capping. 

This sounds pretty slow, and it is, but two things save it: the Clip SOP is quite fast, especially when it can trivially reject the input geometry (i.e. it won't clip anything), and the Fracture SOP uses point clouds to start with the closest point and work outwards, short-circuiting any additional work when the Clip SOP won't have any effect, and stopping at the user-definable "MaxCuts" parameter.  Currently the Fracture SOP also detects when the input cell points are co-planar, and uses the Triangulate2D SOP to get the adjacent Voronoi cells.  So for most 2D inputs you should not see any artifacts.
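The half-plane clipping has a simple equivalent statement: a location belongs to cell i when it is closer to cell point i than to any other cell point. A tiny 2D Python sketch of that membership test (the SOP builds actual geometry with Clip operations instead of classifying sample points):

```python
# A location is inside Voronoi cell i when cell point i is its nearest
# cell point - the same partition the iterative half-plane clipping
# carves out of the mesh.

def nearest_cell(p, cell_points):
    """Index of the cell point nearest to 2D location p."""
    def d2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(range(len(cell_points)), key=lambda i: d2(p, cell_points[i]))

cells = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(nearest_cell((0.9, 0.1), cells))   # 1 - closest to cell point (1, 0)
```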

Most of this is demonstrated in the example files, but a couple of tips when fracturing:

--Mesh complexity slows down fracturing more than anything.  PolyReduce before fracturing if you can.
--The Fracture SOP should support most types of geometry, convex or concave.  If you end up with artifacts and are sure the MaxCuts value is high enough, please let me know on the odForce forum.

--When testing fractures, debugging the MaxCuts value, etc, turning off "Create Inside Surface" will speed things up and should look the same on the surface of the object.

--The sign of a too-low MaxCuts parameter is overlapping polygons on the surface. Sometimes you can template the initial geometry, put the Display flag on the points, turn on point display and point numbers, and determine which cell point is causing the problem.  Enter that number into the "Cell Point Group" of the Fracture SOP and you can quickly test increasing the MaxCuts param until the artifact goes away.
There's an example of too-low MaxCuts in the example file.

--Sometimes you'll have problems with insufficient point density in long appendages or the like in the input object.  Try using IsoOffset and scaling the density up in those areas.  See the "horse" example in the example files.

--Don't use "Convex Interior Polygons" to triangulate the interior surface.  If you want that, apply a Divide SOP after the fact with "inside" as the group.

Dynamic Fracturing Overview
The dynamic fracturing functionality in this asset is provided by the combination of the Fracture Solver DOP and the Fracture Pieces DOP.

The Fracture Solver DOP is responsible for creating fractured geometry for any object that needs to fracture on a given timestep (usually due to impact or Magnet Force explosion).  The Fracture Pieces DOP checks for new fractured geo at every timestep, copies the appropriate whole object enough times to create each new piece, assigns the new geometry to these new pieces, and deletes the original whole object.  On most timesteps where there are no impacts, neither DOP does anything.

Fracturing usually occurs as the result of an impact.  The basic algorithm for location-based fracturing as performed by the Fracture Solver is demonstrated in the basic_fracture_examples.hip file under the "location-based_fracture" subnet.

Dynamic Fracturing tips(a few of which apply to the Shatter SOP as well):

--If pre-fracturing and using RBD Fractured Object or Glue Object, make sure to set group mask to piece_* or equivalent, else you'll also get an RBD object for inside and outside group, which will mess things up.

--It seems that changing settings inside a SOP Solver won't always cause the DOPnet to recook.  Since most of the parameters on the Fracture Solver go straight through to the SOP Solver, often you have to force a recook of the DOPnet when changing params (I often bypass and un-bypass the Multisolver DOP).

--If you're not sure whether an object will fracture or not, try it on its own in the SOPs context first with the Voronoi Fracture SOP.

--Use "Visualize Impact Region" to get an idea of where the Fracture Solver thinks the impacts are.

--If you set up a sim from the shelf and have "Ghost Objects" on in the Viewport, Houdini doesn't seem to display most of the new pieces.  Turn off "Ghost Objects" when in DOPs (not a bad idea anyway).

--Even when the sim is done caching, there can be a slight hiccup in the playback when lots of new pieces become visible.  Flipbook to get a better feel for the smoothness of the whole/fractured transition.

--The fracturing operation necessarily occurs after the RBD Solver has already handled the collision, so sometimes the whole/fractured transition looks clunky.  Usually substepping the entire DOPnet makes things look/behave a lot better.  Two is often sufficient; I often use four.

--The pieces created by the fracture operation tend to have many-sided polygons.

To avoid interpenetration you might have to toggle the collision Surface rep to Edges and toggle on Triangulate (slow).  This is mostly true for stacked-up pieces.  For explosions, shattering and such, where the pieces go everywhere, it's not usually necessary.

--Use as coarse a collision Volume resolution as you can get away with. 
Calculating the SDF for hundreds of pieces can take a long time and is often overkill.  I may add automatic scaling of the resolution, but for now there's an example that shows how to override the resolution on just the pieces.

--RBDAutoFreeze with a relatively high "Enable threshold" is your friend.

--To temporarily turn off fracturing, just bypass the Fracture Solver.

--To avoid fracturing a particular object, set the "Group" field of the Fracture Solver to exclude that object (method may change)

--If you only want one level of fracturing, set the Min Volume to just under the volume (mass/density) of your object, or put "rbdobject1", not "rbdobject1_*", in the group field of the Fracture Solver, which will exclude any pieces from further fracturing.

--If you know your geometry will fracture correctly with a high enough MaxCuts value, and you're using fewer than 100 or so pieces per fracture operation, it might be worth it to set the MaxCuts param of the Fracture Settings to the max number of pieces for the fracture.

Sometimes it's better than having to resim the entire thing because of bad geometry from the fracture operation.

Parameter Reference
Voronoi Fracture SOP
Group to Fracture - Primitive group to fracture
Create Inside Surface - Whether to build interior surface while fracturing
Connect Inside Edges - like Shatter SOP, connects surface polys to interior polys, usually off.
Cusp Interior Edges - just applies a Facet SOP to the "inside" group with Cusp Polygons at 40 degrees
Visualize Pieces - colors primitives by piece number (should this be points?)
Piece Group Prefix - the name to prefix all piece group names with

Cut Settings
Cell Point Group - a pattern/group expression that specifies a subset of points around which to cut cells
Max Cuts Per Cell - number of other cell points that will be considered when cutting a given cell
Cut Plane Offset - offsets the cutting plane by this amount to put some space between the pieces
Convex Interior Polygons - convexes the polygons generated in the interior at each step
Max Interior Edges - the number of edges a convex Polygon can have when previous option is selected

Reattach Settings
        Reattach Piece - whether to reattach pieces or not
        Iterations - number of reattach iterations
        Seed - a random seed used in picking which pieces to fuse
        Point Tolerance - the tolerance used in the Fuse operation when reattaching pieces
       
Delete Faces at Reattachment -
attempt to remove coincident polygons when reattaching Voronoi cells

Limit by Face Area Ratio - not yet implemented
Limit by Total Face Area - won't reattach if total surface area of combined piece would exceed this value 

Fracture Solver DOP
Impact Settings
Minimum Impact - an impact has to exceed this value to cause a fracture
Minimum Volume - an object has to be larger than this (mass/density) to be eligible for fracture
Re-fracture Delay - a piece must have existed for this many seconds before it can fracture again
Compute Number of Points - have Scatter automatically calculate number of points in impact volume (bit flaky)

Number of Points - min and max number of pieces that will be generated by fracture
Points per Area - used by Compute Number of Points
Location-Based Fracture - fracture based on impact location
Fracture From Magnet Force Metaballs - add the metaball geometry for any Magnet Forces to an object's impact zone
Impact Radius - the radius of each impact for location-based fracturing
Outside Radius Percentage - the point density outside the impact radius
Visualize Impact Region - visualize impact region with red/white ramp
Fracture Settings same as Voronoi Fracture SOP above

Velocity Transfer Settings

Velocity Scale - overall scale for velocity transferred to new pieces
Randomness Scale - overall scale for random velocity added to new pieces
Pre/Post Velocity Scale - blend of pre/post impact velocity to transfer to the new pieces
Group - group of RBD Objects the Fracture Solver should solve for

Fracture Pieces DOP
New Pieces Group - name of group that new pieces created by this DOP should be put in (for creation timestep only)
Group - group of RBD Objects this DOP should create new pieces for

Friday, April 2, 2010

animated noisy gradient in Houdini

what is a trivial task in Max is a bit weird in Houdini... but not so bad when you see how.
Once this is set up, you can always save it as an asset as well.

I started playing with the old Ramps VOP, and then realised the new ramp generator is
the quickest and easiest way - that I know of.

1. make sure you have a shader view window open (like the Material Editor... sort of...),
otherwise you'll have to make a grid with UVs... I'm busy these days :)
2. make a VOPSOP surface shader (and apply it to your geo if you have some)
3. dive into it and add a... well, add these!

click the image to see it bigger

what the hell! :)
I have to make all these nodes just for this effect!
umm yep! Houdini excels at complicated tasks... so simple tasks are a little more complicated.
some shortcuts...
the Globals node is given to you for free in a VOPSOP.
you could add your own parameters called s and t - if the names are correct, Houdini recognises them and they work as if they were piped directly from the Globals.

the three parms piping into the Turb Noise - I don't have to create and name them manually, of course.
I just MMB on the parm inputs that I want the user to be able to access at the top level and go with the default names - so these are quick and easy no-brainers.
The rest i have to click a few buttons for as you would in Max :)

the key is the ramp parm - it could be a float (spline) ramp or a vector ramp - I went for simplicity.
The other tricky thing I found was plugging s and t (like U and V) into my noise - I couldn't see the noise without giving it s and t.

So the s plugging into the ramp gives me a horizontal ramp - t would give me vertical. This is multiplied by the noise, and piped into a simple Mix - note the dotted line here indicates values are being converted - the Multiply is outputting a vector, but the Mix wants a float as the bias, so Houdini automatically converts it to read a float.

The animation of the wipe from left to right is done by animating the position from 0 to 1 at the top level of the VOP shader - in the ramp's parameters. I've animated the position of two points on the ramp.
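The ramp-times-noise-into-mix network can be sketched in Python - the ramp, noise and mix functions below are stand-ins for the VOPs, not Houdini's actual functions:

```python
import math

# Hedged sketch of the network: a horizontal ramp driven by s, multiplied
# by noise, used as the bias of a mix between two colours. Animating the
# "position" parameter slides the ramp edge from 0 to 1 for the wipe.

def ramp(s, position, width=0.1):
    """0 below the edge, 1 above it, linear in between - a 2-point ramp."""
    return min(1.0, max(0.0, (s - position) / width))

def noise(s, t):
    # stand-in for Turb Noise: any repeatable pattern in s/t works here
    return 0.5 + 0.5 * math.sin(12.0 * s + 7.0 * t)

def shade(s, t, position, col_a=0.0, col_b=1.0):
    bias = ramp(s, position) * noise(s, t)
    return col_a + bias * (col_b - col_a)    # the Mix VOP

assert shade(0.0, 0.0, position=0.5) == 0.0      # left of the wipe edge
assert 0.0 <= shade(0.9, 0.3, position=0.5) <= 1.0
```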

Another, more WYSIWYG way would be to do this in COPs - Houdini's compositor - which would be similar to the gradients in Photoshop or After Effects - but this would likely require 'rendering' out an image sequence to disk - just like the old days!

some other things I thought you could play with:
* using an expression in the point positions to keep them evenly spaced, so you only have to animate one - the other is driven by the first, eg point 2 pos = point 1 pos + 10%
* mixing several types of noise together
* animating the noise / size of the noise

umm, I'm sure there are more ways - and probably more efficient ways - to do this in VOPs; this is what first came to mind.


have fun !






Tuesday, March 23, 2010

PBR test 2

and the second image is the same but with a colour limit applied - and then equalised by the preview in the 'IPR' / render view - which takes the gamma right down to .172 something... something...

  
note the self-illuminated tube isn't darkened - the image is / was linear from Houdini.

Wednesday, March 17, 2010

matching the plate

instead of doing what I used to do and lining up geo
in the viewport - albeit a big one - with a bg image plate...

now i make my objects render with a wireframe
shader, and zoom in on the region i need to match
and render only that little area.

With a wireframe shader on the geometry it becomes really
quick - almost interactive - so in one view I can edit or transform my geometry, and in the other I can see the wireframe render update over the plate and get a much better match.

* rendering at 2048x1172 gives you a much clearer idea of whether your geo is accurate enough - but there's no point if you're rendering at less than your screen res - you may as well just line it up in the viewport! :)

...kinda reminds me of using max's virtual viewport. :)

you can do a virtual viewport in Houdini as well...a post for another day.

Tuesday, March 16, 2010

copy stamp play


experimenting with new techniques and re-appropriating methods is too much fun!
this is simply some static scattered
cubes with a motion blur attribute randomly stamped on them.

fewer cubes, less motion blur

copy stamp play

this random play reminds me of the old days at Famous Faces,
messing around with Vicon cameras :)

it's all in scale here :)

http://www.newgrounds.com/portal/view/525347
regarding the maya to houdini pipeline:

animation is brought in via the fbx format
(NB in the fbx import options check `unlock vertex cache`,
otherwise all the animation will be saved in the Houdini file - resulting in very large hip files - rather than referenced on disk like it should be)
also check 'import directly to /obj' to bring the nodes directly into the root level of Houdini, thus bypassing a useless encapsulating layer of hierarchy
so far we have been starting a new blank session of Houdini to import the fbx files
and then copying and pasting the relevant nodes made by this fbx import into ANOTHER Houdini session which is set up for Houdini-specific geometry caching and rendering.
the models come into Houdini automatically with a vertex cache SOP that links the model with its associated animation CHOP (channel operator) to give it the life that puts bread on our tables.
By default the path to the animation involves the object name in it
(in the choppath parameter)
such as :
../../fbx_chops/MDL_tongue02_chopfile

the fbx import process automatically adds "_chopfile" to the end of the name
after sourcing the object name (tongue02 in this case), which makes our lives kinda easier once we get our heads around what's happening
we can then replace this path with this :
`"../../fbx_chops/" + opname("..") + "_chopfile"`
we've split up the string, adding quotes and + signs to concatenate it.
the cmd 'opname' looks at the name of the object in question and procedurally, automagically
derives it - no matter which object we are dealing with.
".." is like 'the node one level up from where we are now' - just like unix.

the backticks around the expression force it to evaluate.
this new expression can then be pasted onto ALL the vertex cache SOPs in every animated object - all in one hit using the attribute spreadsheet.
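What the expression evaluates to can be modelled in Python - `opname_of_parent` below is a stand-in for Houdini's `opname("..")`, assuming the vertex cache SOP lives one level inside the object node:

```python
# Sketch of `"../../fbx_chops/" + opname("..") + "_chopfile"`:
# opname("..") returns the name of the node one level up, so the same
# string works unchanged inside every character's vertex cache SOP.

def opname_of_parent(node_path):
    """Stand-in for opname(".."): name of the node one level up."""
    return node_path.rstrip("/").rsplit("/", 2)[-2]

def chop_path(node_path):
    return "../../fbx_chops/" + opname_of_parent(node_path) + "_chopfile"

print(chop_path("/obj/MDL_tongue02/vertexcache1"))
# ../../fbx_chops/MDL_tongue02_chopfile
```

Because the object name is derived rather than hard-coded, renaming or adding characters in Maya never requires touching the paths in Houdini.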
when the animation in Maya is updated or changed, we simply copy in the CHOP network *ONLY*, and as long as no object names have changed in Maya, we can link the new animation from Maya to the old models in Houdini (essentially the same models - because they can't change point count). The CHOP network name shouldn't change either - but if it does, we just change our expression to suit.
why can't we just paste in the new animation network and have all the paths automatically connect as they are? well, we could... but if there are any old fbx animation networks hanging around in Houdini, it will add a '_1' increment to the name and break the path.
why can't we just remove the "_1" from the name of the animation network?
we could, but any paths that are specific - or explicit, or not using an expression -
will automatically update to follow the current fbx animation they are connected to, in an attempt to keep the paths connected.
So if we ever have 2 or more fbx animation networks in the one Houdini file, it can easily be broken. This is avoided with the expression, meaning we only have one name to keep an eye on - the fbx CHOP subnet name.
this gives us a workflow to simply update animation from Maya by copying and pasting one node from one Houdini file to another, and hopefully everyone's at the pub by 6pm with a beer in their hand and smiles on their faces :)
----------------------------------------------
we could also just go into the fbx animation node and simply copy all the CHOP nodes, then paste them into the old fbx_chops subnet node (*after* deleting all the old ones, to avoid Houdini's automatic renaming) - which is even easier!

Monday, March 1, 2010

some opscript

from odforce...

mental note to self:

"You can always run

opscript -r -b /obj/*

that will show you all the commands needed to reproduce the object networks in a script format.."

Sunday, February 14, 2010

RENDERING DEPTH MAP SHADOWS

rendering shadow maps out to disk to speed up your final renders

even after the rat shadow maps are baked to disk,
you can still change a number of parameters without having to write the shadow maps back to disk again

ie

colour
intensity
shadow intensity

shadow quality - I'm not sure about this one

shadow softness and shadow blur


NOTE INCREASING PIXEL SAMPLES AND RES IS VERY ADVERSE TO RENDER TIMES !

it may be better to increase pixel samples to get more detail than to increase the resolution of the maps.
it seems doubling the map size and doubling the samples will more than quadruple the render times.
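A back-of-envelope model makes the "more than quadruple" plausible: if shadow-map cost scales with the pixel count (resolution squared) and linearly with pixel samples, doubling both multiplies the work by eight. The exponents here are assumptions, not measurements:

```python
# Toy cost model: work ~ (resolution scale)^2 * (sample scale).
# The quadratic/linear exponents are assumptions for illustration only.

def relative_cost(res_scale, sample_scale):
    return (res_scale ** 2) * sample_scale

print(relative_cost(2, 2))   # 8 - doubling both is 8x the work, not 4x
print(relative_cost(1, 2))   # 2 - doubling samples alone is much cheaper
```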


I should do some proper tests...

Thursday, February 4, 2010

Tuesday, February 2, 2010

vol disp shader

wonderful world of stere-ereo



well, it's not pretty - but here it is - my first stereo image...
you'll need red and blue glasses!

some stereo volumes from the viewport:


and some notes from the web:
Notes from wikipedia
basics:
anaglyph images are used to provide a stereoscopic effect.

2 colour glasses - each colour chromatically opposite
each colour blocks out the opposing light waves leaving near black

each image is superimposed (additively) but offset with respect to each other to produce the 3D effect
usually the subject is in the middle, with the foreground and background shifted laterally in opposite directions

~the visual cortex of the brain fuses this into perception of a 3D scene or composition

most glasses are red on the left eye and cyan on the right
red and blue is cheaper - but there is an improvement in image quality,
especially when viewing skin tones, with a cyan filter

anaglyphs are easier to view than parallel (diverging) images or cross-viewed pair stereograms (magic eye?)

Anaglyphs can be made in two ways
1 using two cameras offset from each other and converging - aiming toward the same subject -
with different filters on each camera

2 on a computer, by using pixel operations instead of the filters to knock out each colour - and the offset can also be done digitally

** NB you still need two base images when creating anaglyphs digitally

in photoshop - you can adjust the colours for the images and screen them back onto each other
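The pixel-operation method in point 2 can be sketched per pixel in Python - red from the left image, green and blue from the right, the digital equivalent of the colour filters (the pixel values are made up):

```python
# Per-pixel red/cyan anaglyph: keep the red channel from the left-eye
# image and the green/blue channels from the right-eye image.

def anaglyph(left_rgb, right_rgb):
    """Combine two RGB pixels into one red/cyan anaglyph pixel."""
    return (left_rgb[0], right_rgb[1], right_rgb[2])

left  = (200, 120, 90)    # left-eye pixel
right = (50, 180, 220)    # right-eye pixel, already offset for parallax
print(anaglyph(left, right))   # (200, 180, 220)
```

Run over whole images (with the lateral offset applied first), this is all the "screen them back onto each other" step is doing.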

TERMS
IAD - Inter-Axial Distance / IOD - Inter-Ocular Distance
can we assume inter-ocular refers to eyes and inter-axial refers to camera lenses?
http://forums.creativecow.net/thread/268/43#51
IPD - inter-pupillary distance
is considered the correct term for the measurement of the distance between the left and right eyes of the viewer.

IOA - Inter-Axial Angle - this refers to convergence - the angle the cameras are pointed in
toward the subject - also called 'toeing in'

Parallax (wikipedia) - from Greek parallaxis, "alteration"

In the art of stereoscopy, screen parallax is defined as a measure of the distance between
left and right corresponding or homologous image points when such points are projected
on a screen.
homologous - from Greek 'to agree'
biology: similarities between characteristics of organisms, eg forelimbs in mammals
wikipedia:
Parallax is an apparent displacement or difference in the apparent position
of an object viewed along two different lines of sight.

It is measured by the angle of inclination between the two lines of sight.

Nearby objects have more parallax than distant objects - therefore you can measure distances.
Stellar parallax provides the basis for all distance measurements in astronomy.
Stereopsis:
Many animals, including humans, have two eyes with overlapping visual fields to
use parallax to gain depth perception; this process is known as stereopsis.
Lenticular is the adjective relating to lens.

AutoStereo - general term to describe the stereo effect from a device that does not require glasses
(either anaglyphic or lenticular)

-------------------------------------------------------------------------------------
Houdini Notes
-------------------------------------------------------------------------------------

Houdini has an asset which is a built-in stereo cam rig with controls for IAD
and ZPS - Zero Parallax Setting - which controls the convergence point.

in mplay the left image can be viewed as C and the right can be viewed as C2

IAD - the distance between the two cameras - the centres of the lenses ?
ZPS - Zero Parallax Setting - Houdini makes a plane where points of the
two images intersect

*    The ZPS coincides with the viewing screen
*    Objects between the camera and the ZPS,
     ie in front of the ZPS, appear to be in front of the screen
*    Objects behind the ZPS in 3D appear to be behind the screen

* when there is no IOA (angle) - ie the cameras are parallel - the
convergence plane appears to be just 'beyond infinity' - or as far as the eyes can see.

* there doesn't seem to be a control for IOA on the Houdini StereoCamera
- the cameras are parallel
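One common formulation of the ZPS behaviour above - an assumption on my part, not pulled from the Houdini rig itself - for a parallel rig converged by horizontal image shift:

```python
def screen_parallax(iad, zps_distance, object_distance):
    """Parallax for a parallel rig converged by horizontal image shift.

    Zero at the ZPS, negative (in front of the screen) for nearer objects,
    positive (behind the screen) for farther ones, and approaching the
    full IAD as objects recede toward infinity.
    """
    return iad * (1.0 - zps_distance / object_distance)

iad, zps = 0.065, 4.0                    # 65 mm interaxial, ZPS at 4 m
print(screen_parallax(iad, zps, 4.0))    # 0.0  -> on the screen plane
print(screen_parallax(iad, zps, 2.0))    # negative -> in front of screen
print(screen_parallax(iad, zps, 1e9))    # approaches the IAD (0.065)
```

Nice property of the parallel rig: parallax never exceeds the IAD, which is part of why toe-in (with its keystoning) can be avoided.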
-------------------------------------------------------------------------------------

* "...the big caveat with toe in, is it creates geometric distortions
due to symmetric keystoning. "

http://forums.creativecow.net/thread/268/134
-------------------------------------------------------------------------------------------------
Algorithmic Interaxial Separation Reduction
(to produce lower parallax values)

http://www.freshpatents.com/Algorithmic-interaxial-reduction-dt20080228ptan20080049100.php

-------------------------------------------------------------------------------------------------
LINKS

Professional Anaglyph camera rigs
http://www.3dfilmfactory.com/3d_camera_rigs.html

French dude who worked on Meet the Robinsons
makes his own anaglyphs and camera rigs
http://bernard.mendiburu.free.fr/pro/book/movies/

Avatar

IMAX Cinemas
http://www.imax.com/

Wikipedia :  Lenticular Lenses
http://en.wikipedia.org/wiki/Lenticular_lens/

Lenticular Prints
http://www.gorillaprint.com.au/index.php?/print/lenticulars.html

Monday, February 1, 2010

D A Y B R E A K E R S


woohoo !
my first feature film experience as an FX animator.
Big Kudos to the Team at Kanuka Studios !

trailer

Rangi Sutton - FX Supervisor
Kirsty Martin - FX coordinator
"Robert James Kelly" - Houdini  TD
Allan Mckay - TD
DJ Nicke - Char Animator
Alicia Aguilera - Compositor
Loren Robinson  - Compositor

for a full listing of credits:
http://www.imdb.com/title/tt0433362/fullcredits


some reviews:
http://www.imdb.com/title/tt0433362/

as I get time, I'll try to write about the experience and the effects I worked on.

go see it at a cinema near you !




Thursday, January 21, 2010

missing motion blur ? borrow some V !

from gmail chat:
I did a nifty trick today ! (thanx Dave !)

...well nifty for me...

one of my frames for some reason had no motion blur
so on the previous frame and the frame after
I locked copies of the nodes.

Then I took the velocity (v) off those and averaged them in a VOPSOP, which also applied the new v to the frame with the missing v.
Then a Switch SOP switches between the old node and the fixed node for just that frame !
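The averaging step itself, stripped of the Houdini nodes - a sketch assuming both frames have the same point count and order (the same assumption the VOPSOP version relies on):

```python
def average_velocity(v_prev, v_next):
    """Rebuild a missing per-point v attribute by averaging the
    velocities from the frame before and the frame after.

    v_prev / v_next are lists of (x, y, z) tuples in matching point order.
    """
    return [tuple((a + b) * 0.5 for a, b in zip(p, n))
            for p, n in zip(v_prev, v_next)]

v_prev = [(0.0, 1.0, 0.0), (2.0, 0.0, 0.0)]
v_next = [(0.0, 3.0, 0.0), (4.0, 0.0, 2.0)]
print(average_velocity(v_prev, v_next))
# [(0.0, 2.0, 0.0), (3.0, 0.0, 1.0)]
```

In the VOPSOP this is just two Import Attribute reads, an add, and a multiply by 0.5 written back to v.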

I feel like the attribute nodes are finally less abstract for me
I think I just needed a practical example of usage to feel comfortable with them


in the image - you can see 2 different ways of achieving this - the first method I used, on the left, is more clunky - more nodes - but it works. The second - the almighty VOPSOP - is so elegant !

inside the VOPSOPs are 3 new nodes !




COPS bugs


I know there's a few bugs in COPS - but here's one I hadn't seen before

converting from 16 bit TIFF to 8 bit in a ROP - I got some really funky output.
soon solved however with a Convert COP before the ROP
