From reader emails, I realized that a basic introduction to some of the concepts referenced
later in the guide might be helpful. Many of you are already familiar with these terms,
so feel free to skip this chapter! However, if your background is in still photography or if
you’re new to digital imaging in general, this bonus chapter should help clarify some basic
cinematography concepts that we’ll be working with going forward. By no means is this an
exhaustive glossary, but it is a good starting point. I’m going to explain things from a practical,
crash-course standpoint rather than a scientific, 100% semantically-correct perspective,
because I think it’s handier to know how something works in practice than it is to know all of
the details of why it works — if you’re looking for the latter, there are of course thousands of good resources on the internet to bolster your knowledge. In alphabetical
order, then, here are ten basic concepts you should be familiar with:
1. Aspect Ratios & Anamorphic Lenses
Aspect Ratio used to be a more prominent issue for digital cinematographers than it
is today: before the advent of high-definition cameras, the standard 4:3 aspect ratio
of standard-definition TV was generally seen as undesirable for anyone looking for a
“cinematic” look, because 4:3 (or 1.33:1) content was associated with broadcast TV,
while widescreen compositions were what people expected to see in the theater. When
we say “4:3,” we mean the image is four units wide and three units high. When we say “1.33:1,” we mean… well, you get it — the same thing. Many times the “:1” is dropped because it is implied — shooters will simply say “1.85” instead of “1.85:1.”
HDTV today is widescreen by default, with a 16:9 aspect ratio that works out to be 1.78:1 — very similar to the traditional 1.85:1 aspect ratio of many feature films. Other than these two virtually indistinguishable aspect ratios, the most common widescreen aspect is the CinemaScope ratio of 2.35:1, which appears most often in the multiplex in big-budget films.
2.35:1 films are typically shot with anamorphic cine lenses. Anamorphic lenses are not spherical — they squeeze an image horizontally to fill the negative or sensor, with an additional step necessary during projection to re-stretch the image to its intended width. The odd-looking image here of a lens with an oval aperture demonstrates the non-spherical nature of an anamorphic lens (the aperture is perfectly round, but the lens is distorting our view of it). While it is possible to attach an anamorphic lens to a DSLR, most of us will simply shoot at the
native widescreen aspect ratio of 16:9.
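If the ratio arithmetic feels abstract, here is a minimal sketch of how it works in practice; the 1.33x squeeze factor is just one common anamorphic ratio, chosen for illustration:

```python
# Aspect ratio is just width divided by height; anamorphic "desqueezing"
# multiplies the captured ratio by the lens's squeeze factor.

def aspect(width, height):
    """Aspect ratio as a single number, e.g. 1920x1080 -> 1.78."""
    return round(width / height, 2)

print(aspect(4, 3))        # 1.33 -- standard-definition TV
print(aspect(1920, 1080))  # 1.78 -- HDTV's 16:9, expressed in pixels

# A 1.33x anamorphic adapter on a 16:9 sensor, re-stretched in post:
squeeze = 1.33  # assumed squeeze factor, purely for illustration
print(round(aspect(1920, 1080) * squeeze, 2))  # 2.37 -- close to CinemaScope's 2.35
```

2. Bokeh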
Bokeh (pronounced like “bo” from “boat” and “ke” from “Kentucky”) is one of the chief reasons many shooters have switched to DSLRs. Bokeh is a term derived from the Japanese word “boke” which, roughly translated, means “blur quality.” Bokeh refers to the portions of an image that are defocused or blurry. In the filmmaker’s toolkit, bokeh is not only an aesthetically pleasing quality, but it also allows the filmmaker to focus the viewer’s eye on an object or area of interest in the frame. Bokeh is a function of shallow depth of field.
3. Compression & Bit Rate
Compression refers to a method for reducing the amount of data a DSLR produces;
in the case of video-shooting DSLRs, all cameras currently employ some method of
compression. If you’re used to shooting photos in JPEG format, you’re used to capturing
compressed images; while RAW can also employ compression, it is generally thought
of as “uncompressed.” This is because, as far as shooters are concerned, when we’re
talking about compression we’re talking about lossy compression — meaning, a codec
(compression algorithm) that throws out data in order to reduce file size. As you can
imagine, tossing portions of an image has negative side effects, and while many codecs deal with images perceptually in order to minimize their perceived impact, the difference is there. For example, if you upload a video to YouTube, the service recompresses your video in order to optimize it for internet delivery; you might not notice this compression, but check out this video that’s been recompressed a thousand times and you can see that every compression step throws out data along the way. On the positive side, however, lossy codecs are also the reason we can record hours of footage to inexpensive flash memory devices like CF and SD cards.
The most common compression formats in DSLRs are h.264 and MJPEG, and while both are lossy, h.264 is generally much more efficient (it introduces fewer artifacts at the same bit rate as MJPEG). Bit rate is the amount of data per unit of time that a given codec adheres to; higher bit rates are almost always better because they require less compression. At press time there are no DSLRs that shoot uncompressed video.
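To put bit rate in concrete terms, here is a rough sketch of the card-space math; the 45 Mbps figure is an assumed value for illustration, not the spec of any particular camera:

```python
# Rough card-space math: bit rate (megabits per second) times duration,
# divided by 8 to convert bits to bytes.

def recording_size_gb(mbps, minutes):
    """Approximate file size in gigabytes for a given bit rate and duration."""
    bits = mbps * 1_000_000 * minutes * 60
    return round(bits / 8 / 1_000_000_000, 2)

# Assumed 45 Mbps h.264 stream, purely for illustration:
print(recording_size_gb(45, 12))  # 4.05 -- about 4 GB for 12 minutes
```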
4. Depth of Field
The degree to which objects in the foreground, mid-ground and background are all in focus at once is a function of depth of field. A shallow depth of field means that only one plane is in focus; a wide (or deep) depth of field means that all planes are in focus at once. Depth of field is determined by the focal length, focus distance, and aperture size (see below for more on Aperture). DSLRs exploded in popularity almost singlehandedly because of their ability to render images with a shallow depth of field. This is chiefly due to their massive sensor sizes (see the next chapter, “Choosing a DSLR,” for an examination of sensor sizes), which are many times larger than those of previous video cameras. On a basic level, shallow depth of field (DOF) allows filmmakers to blur out areas of the image they deem unimportant or undesired.
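For the curious, the standard thin-lens approximation makes these relationships concrete. Below is a minimal sketch; the 0.03mm circle of confusion is an assumed full-frame value, and the numbers are illustrative rather than tied to any particular camera:

```python
def dof_limits(focal_mm, f_stop, focus_mm, coc_mm=0.03):
    """Near/far limits of acceptable focus, thin-lens approximation.

    coc_mm is the circle of confusion; 0.03mm is a common full-frame value.
    """
    # Hyperfocal distance: focus here and everything from half this
    # distance to infinity is acceptably sharp.
    h = focal_mm ** 2 / (f_stop * coc_mm) + focal_mm
    near = focus_mm * (h - focal_mm) / (h + focus_mm - 2 * focal_mm)
    far = focus_mm * (h - focal_mm) / (h - focus_mm) if focus_mm < h else float("inf")
    return near, far

# A 50mm lens focused at 3 meters, wide open vs. stopped down:
for f_stop in (1.8, 8.0):
    near, far = dof_limits(50, f_stop, 3000)
    print(f"f/{f_stop}: sharp from {near / 1000:.2f}m to {far / 1000:.2f}m")
```

In this example, stopping down from f/1.8 to f/8 widens the zone of acceptable focus from about 0.4m to about 1.9m.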
5. Exposure & Aperture
Exposure refers to the amount of light allowed to reach the DSLR’s sensor (or any imaging
surface). When shooting stills, DSLRs use a mechanical shutter to regulate exposure by
opening for the desired amount of time (1/60th or 1/1000th of a second, for example)
and then closing. DSLRs are generally rated to last for hundreds of thousands of shutter
cycles, but at 24 frames per second, couldn’t your DSLR reach that limit very quickly? No,
because in video mode, DSLRs use an electronic shutter — the sensor basically turns on
and off to regulate exposure, instead of relying on a physical barrier (i.e., the mechanical
shutter) to regulate light. Aperture refers to the adjustable opening near the rear of the lens that lets light through — the amount of light it transmits is generally referred to as the F-stop (T-stop is very similar, except it’s measured instead of calculated). We’ll go more into depth on aperture in the “Lenses” section of the guide, but keep in mind that the size of the aperture affects not only the amount of light, but also the angle of light rays hitting the sensor — a narrow aperture creates an image with a wide depth of field, whereas a large aperture creates an image with a shallower depth of field.
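To see why each full stop halves the light, here is a quick sketch; the 50mm focal length is an assumption for illustration:

```python
# f-number = focal length / aperture diameter; each full stop
# (a factor of the square root of 2) halves the light hitting the sensor.

FULL_STOPS = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16]

focal_mm = 50  # assumed 50mm lens, for illustration
for i, f in enumerate(FULL_STOPS):
    diameter = focal_mm / f        # physical size of the opening, in mm
    light = 1 / 2 ** i             # light relative to wide open at f/1.4
    print(f"f/{f}: {diameter:.1f}mm opening, {light:.4f}x the light")
```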
6. Focal Length
Technically, focal length refers to the distance over which collimated rays are brought
into focus. An easier way to think of it: focal length refers to image magnification. A
longer focal length, e.g. 100mm, makes distant objects appear larger, whereas those
same objects will appear smaller with a shorter focal length, e.g. 35mm. Focal length also determines angle of view; longer focal lengths have a narrower angle of view, whereas shorter focal lengths have a broader angle of view. When it comes to focal length, a
picture is worth a thousand words, so here are images taken with the camera in the
same place, but with lenses of different focal lengths attached:
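The angle-of-view side of this can also be computed directly with the standard formula; the sketch below assumes a full-frame sensor width of 36mm (crop sensors give a narrower angle for the same lens):

```python
import math

def angle_of_view_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view: 2 * arctan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for focal in (24, 35, 50, 100, 200):
    print(f"{focal}mm: {angle_of_view_deg(focal):.1f} degrees")
# Longer focal length -> narrower angle of view -> greater magnification.
```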
7. Frame Rate
Frame rate is the frequency with which your DSLR captures consecutive images. This
typically corresponds to the number right before a “P” in the case of progressive images, so that 24p is 24 frames per second, 30p is 30 frames per second, and 60p is 6,000,000 frames per second. Just kidding. Different frame rates have very different motion
rendering characteristics, which, combined with different shutter speeds, produce
images that behave very differently. Motion pictures have had a standard frame rate of
24 frames per second since the 1920s, and audiences have come to associate this frame
rate with cinematic content, so being able to shoot in 24p is essential if you’re planning
on shooting narrative material. However, you don’t always have to shoot at the same
frame rate at which you’re planning on distributing your material. For example, if your
DSLR can shoot 60p, this is a very effective way of acquiring slow-motion footage —
anything shot at 60p can be played back at 40% speed in a 24p timeline for a flawless
slow-motion effect, and can generally be slowed down further in your editing system.
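The conform math is simple: playback speed is the timeline frame rate divided by the shooting frame rate. A quick sketch:

```python
# Playback speed when conforming high-frame-rate footage to a slower timeline.

def playback_speed(shot_fps, timeline_fps):
    """Fraction of real-time speed when every captured frame is played back."""
    return timeline_fps / shot_fps

print(playback_speed(60, 24))   # 0.4 -- the 40% speed described above
print(playback_speed(120, 24))  # 0.2 -- 20% speed from 120p footage
```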
8. ISO & Noise
ISO is actually the International Organization for Standardization, which is why you see it used in lots of places beyond photography — many businesses are ISO 9001 certified, for example. As cinematographers we’re concerned with just one “standardization,” however — the one that pertains to the measurement of light sensitivity in photography. ISO as it relates to digital photography is based on analog standards of film speed — while we won’t be shooting a frame of actual film with our DSLRs, our cameras are calibrated so that an ISO of 400 on our camera is roughly equivalent to a film SLR’s ISO 400. ISO follows a doubling scale: ISO 400 is twice as sensitive to light as ISO 200, ISO 200 is twice as sensitive as ISO 100, and so on and so forth.
The relationship between sensitivity and noise is basically linear, however, so the higher the ISO, the brighter the image — and the more noise contained in the image.
However, thanks to sophisticated noise reduction and other processing tricks, DSLRs
have managed to dramatically reduce noise at higher ISOs, and can often blow film stock
out of the water (this depends on which camera you’re shooting with, which we’ll cover
in the next chapter).
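Since each doubling of ISO equals one stop of sensitivity, the gap between two settings is easy to compute. A quick sketch:

```python
import math

def stops_between(iso_a, iso_b):
    """Stops of sensitivity gained going from iso_a to iso_b (one per doubling)."""
    return math.log2(iso_b / iso_a)

print(stops_between(100, 400))   # 2.0 -- two stops more sensitive
print(stops_between(200, 3200))  # 4.0 -- four stops, with more noise to match
```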
9. Progressive vs. Interlaced
Interlacing was a workaround invented for older-tech CRT monitors in the 1930s that has lived far too long. In the early days, video bandwidth was more limited than today, and so engineers found a way to divide a frame into two images and display it using alternating fields. As you can see in this image of a tire wheel, interlacing can cause motion artifacts (as well as a host of other problems). We’re lucky to live in a predominantly progressive society today — in the imaging sense if not the political. Progressive scanning is a method that captures
and displays the lines of an image in sequence, which is akin to motion picture film with
regards to motion rendering. Compared to interlaced images, progressive images have
a higher vertical resolution, lower incidence of artifacts, and scale better (both spatially
and temporally). Friends don’t let friends shoot interlaced! Luckily, while there are
plenty of video cameras that shoot interlaced footage, every DSLR I can think of shoots
progressive footage.
10. Shutter Speed
Shutter speed refers to the length of time an image is exposed. For film SLRs, this would
be measured by the amount of time the camera’s mechanical shutter is open, but for
shooting video on DSLRs, this is simulated electronically. Shutter speed affects the amount
of light that reaches the camera and also affects the motion rendering of the moving image. Lower shutter speeds yield a brighter and smoother image (up to and including
water and light blurring tricks), whereas higher shutter speeds result in a darker and more
stroboscopic image.
Motion picture film cameras typically shoot with a 180-degree shutter, which means that
the shutter is open 50% of the time (180 out of 360 degrees). This means each frame is exposed for half of the frame’s duration; thus, at 24 frames per second, a 180-degree shutter is best emulated on a DSLR by choosing a shutter speed of 1/48 of a second.
This may not be possible depending on your DSLR, so the closest reading will do — 1/50
or 1/60, for example. This gives the most “filmic” rendering of motion, but can be varied
greatly depending on your intention. Higher shutter speeds create “jerkier” images, as
most famously seen in action films like Saving Private Ryan and Gladiator. Conversely,
lower shutter speeds create “smoother” images due to increased motion blur. There is no
hard and fast rule when it comes to shutter speed, but if you’re not sure what shutter speed to select, go with the setting whose denominator is closest to double your frame rate.
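That rule of thumb is just the shutter-angle formula in disguise; here is a minimal sketch of the conversion:

```python
# Shutter angle to shutter speed: exposure time = (angle / 360) / frame rate.

def exposure_time(frame_rate, shutter_angle=180):
    """Exposure time in seconds for a given frame rate and shutter angle."""
    return (shutter_angle / 360) / frame_rate

for fps in (24, 30, 60):
    t = exposure_time(fps)
    print(f"{fps}fps at 180 degrees: 1/{round(1 / t)} second")
# 24fps -> 1/48; if your camera can't do 1/48, 1/50 is the closest setting.
```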