Introduction

Unlike most sections of this text, there is no source code or project associated with this section. Here, we will be discussing vector math, graphical rendering theory, and OpenGL. This serves as a primer to the rest of the book.

Vector Math

This book assumes that you are familiar with algebra and geometry, but not necessarily
with vector math. Later material will bring you up to speed on more complex subjects,
but this will introduce the basics of vector math.

A vector can have many meanings, depending on whether we are
talking geometrically or numerically. In either case, vectors have dimensionality; this
represents the number of dimensions that the vector has. A two-dimensional vector is
restricted to a single plane, while a three-dimensional vector can point in any physical
space. Vectors can have higher dimensions, but generally we only deal with dimensions
between 2 and 4.

Technically, a vector can have only one dimension. Such a vector is called a scalar.

In terms of geometry, a vector can represent one of two concepts: a position or a
direction within a particular space. A vector position represents
a specific location in space. For example, on this graph, we have a vector position
A:

A vector can also represent a direction. Direction vectors do
not have an origin; they simply specify a direction in space. These are all direction
vectors, but the vectors B and D are the same, even though they are drawn in different
locations:

That's all well and good for geometry, but vectors can also be described numerically.
A vector in this case is a sequence of numbers, one for each dimension. So a
two-dimensional vector has two numbers; a three-dimensional vector has three. And so
forth. Scalars, numerically speaking, are just a single number.

Each of the numbers within a vector is called a component. Each
component usually has a name. For our purposes, the first component of a vector is the X
component. The second component is the Y component, the third is the Z, and if there is
a fourth, it is called W.

When writing vectors in text, they are written with parentheses. So a 3D vector could
be (0, 2, 4); the X component is 0, the Y component is 2, and the Z component is 4. When
writing them as part of an equation, they are written as follows:

$$\vec{a} = \begin{pmatrix} a_x \\ a_y \\ a_z \end{pmatrix}$$

In math equations, vector variables are either in boldface or written with an arrow over them.

When drawing vectors graphically, one makes a distinction between position vectors and
direction vectors. However, numerically there is no difference
between the two. The only difference is in how you use them, not how you represent them
with numbers. So you could consider a position a direction, apply some vector operation to it, and then consider the result a position again.

Though vectors have individual numerical components, a vector as a whole can have a number of mathematical operations applied to it. We will show a few of them, with both their geometric and numerical representations.

Vector Addition

You can take two vectors and add them together. Graphically, this works as
follows:

Remember that vector directions can be shifted around without changing their
values. So if you put two vectors head to tail, the vector sum is simply the
direction from the tail of the first vector to the head of the last.

Numerically, the sum of two vectors is just the sum of the corresponding components:

Vector Addition with Numbers

$$\vec{a} + \vec{b} = \begin{pmatrix} a_x + b_x \\ a_y + b_y \\ a_z + b_z \end{pmatrix}$$

Any operation where you perform an operation on each component of a vector is
called a component-wise operation. Vector addition is
component-wise. Any component-wise operation on two vectors requires that the two
vectors have the same dimensionality.

Vector Negation and Subtraction

You can negate a vector. This reverses its direction:

Numerically, this means negating each component of the vector.

Vector Negation

$$-\vec{a} = \begin{pmatrix} -a_x \\ -a_y \\ -a_z \end{pmatrix}$$

Just as with scalar math, vector subtraction is the same as addition with the negation of the second vector.

Vector Multiplication

Vector multiplication is one of the few vector operations that has no real
geometric equivalent. Multiplying a direction by another direction, or a position by another position, does not really make sense. That does not mean that the numerical equivalent is not useful, though.

Multiplying two vectors numerically is simply component-wise multiplication, much like vector addition.

Vector Multiplication

$$\vec{a} * \vec{b} = \begin{pmatrix} a_x * b_x \\ a_y * b_y \\ a_z * b_z \end{pmatrix}$$

Vector/Scalar Operations

Vectors can be operated on by scalar values. Recall that scalars are just single
numbers. Vectors can be multiplied by scalars. This magnifies or shrinks the length
of the vector, depending on the scalar value.

Numerically, this is a component-wise multiplication, where each component of the vector is multiplied by the scalar.

Vector-Scalar Multiplication

$$s * \vec{a} = \begin{pmatrix} s * a_x \\ s * a_y \\ s * a_z \end{pmatrix}$$

Scalars can also be added to vectors. This, like vector-to-vector multiplication, has no geometric representation. It is a component-wise addition of the scalar with each component of the vector.

Vector-Scalar Addition

$$s + \vec{a} = \begin{pmatrix} s + a_x \\ s + a_y \\ s + a_z \end{pmatrix}$$

Vector Algebra

It is useful to know a bit about the relationships between these kinds of vector
operations.

Vector addition and multiplication follow many of the same rules as scalar addition and multiplication. They are commutative, associative, and distributive.

Vector Algebra

$$\vec{a} + \vec{b} = \vec{b} + \vec{a} \qquad \vec{a} * \vec{b} = \vec{b} * \vec{a}$$
$$(\vec{a} + \vec{b}) + \vec{c} = \vec{a} + (\vec{b} + \vec{c}) \qquad (\vec{a} * \vec{b}) * \vec{c} = \vec{a} * (\vec{b} * \vec{c})$$
$$\vec{a} * (\vec{b} + \vec{c}) = \vec{a} * \vec{b} + \vec{a} * \vec{c}$$

Vector/scalar operations have similar properties.

Length

Vectors have a length. The length of a vector direction is the distance from the starting point to the ending point.

Numerically, computing the distance requires this equation:

Vector Length

$$\|\vec{a}\| = \sqrt{a_x^2 + a_y^2 + a_z^2}$$

This uses the Pythagorean theorem to compute the length of the vector. This works for vectors of arbitrary dimensions, not just two or three.

Unit Vectors and Normalization

A vector that has a length of exactly one is called a unit
vector. This represents a pure direction with a standard, unit
length. A unit vector variable in math equations is written with a ^ over the
variable name.

A vector can be converted into a unit vector by normalizing it. This is done by dividing the vector by its length; or rather, multiplying it by the reciprocal of the length.

Vector Normalization

$$\hat{a} = \frac{1}{\|\vec{a}\|} * \vec{a}$$
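
To make these operations concrete, here is a minimal C++ sketch of a 3D vector type implementing the operations above. The vec3 name and layout here are hypothetical, written purely for illustration; later tutorials use a proper math library.

#include <cmath>
#include <cstdio>

//Hypothetical 3D vector type, purely for illustration.
struct vec3
{
    float x, y, z;
};

//Component-wise operations, as described above.
vec3 operator+(vec3 a, vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
vec3 operator-(vec3 a)          { return {-a.x, -a.y, -a.z}; }
vec3 operator-(vec3 a, vec3 b)  { return a + (-b); } //subtraction: add the negation
vec3 operator*(vec3 a, vec3 b)  { return {a.x * b.x, a.y * b.y, a.z * b.z}; }
vec3 operator*(float s, vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

//Pythagorean length; the same idea works for any dimensionality.
float length(vec3 a)   { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

//Normalization: multiply by the reciprocal of the length.
vec3 normalize(vec3 a) { return (1.0f / length(a)) * a; }

int main()
{
    vec3 a{0.0f, 2.0f, 4.0f};
    vec3 n = normalize(a);
    std::printf("length = %f, unit = (%f, %f, %f)\n", length(a), n.x, n.y, n.z);
}
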
This is not all of the vector math that we will use in these tutorials. New vector math operations will be introduced and explained as needed when they are first used. And unlike the math operations introduced here, most of them are not component-wise operations.

Range Notation

This book will frequently use standard notation to specify that a value must be
within a certain range.

If a value is constrained between 0 and 1, and it may actually have the values 0 and
1, then it is said to be on the range [0, 1]. The square brackets mean
that the range includes the value next to it.

If a value is constrained between 0 and 1, but it may not actually have a value of 0,
then it is said to be on the range (0, 1]. The parenthesis means that the range does not
include that value.

If a value is constrained to 0 or any number greater than zero, then the infinity
notation will be used. This range is represented by [0, ∞). Note that infinity can never
be reached, so it is always exclusive. A constraint to any number less than zero, but
not including zero would be on the range (-∞, 0).
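
Restated in set notation (a restatement for clarity, not part of the original notation list):

$$[0, 1] = \{x : 0 \le x \le 1\} \qquad (0, 1] = \{x : 0 < x \le 1\} \qquad [0, \infty) = \{x : x \ge 0\}$$
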
Graphics and Rendering

This is an overview of the process of rendering. Do not worry if you do not understand everything right away; every step will be covered in lavish detail in later tutorials.

Everything you see on your computer's screen, even the text you are reading right now
(assuming you are reading this on an electronic display device, rather than a printout)
is simply a two-dimensional array of pixels. If you take a screenshot of something on
your screen, and blow it up, it will look very blocky.

Each of these blocks is a pixel. The word pixel
is derived from the term Picture
Element. Every pixel on your screen has a particular
color. A two-dimensional array of pixels is called an
image.

The purpose of graphics of any kind is therefore to determine what color to put in
what pixels. This determination is what makes text look like text, windows look like
windows, and so forth.

Since all graphics are just a two-dimensional array of pixels, how does 3D work? 3D graphics is a system of producing colors for pixels that convince you that the
scene you are looking at is a 3D world rather than a 2D image. The process of converting
a 3D world into a 2D image of that world is called
rendering.

There are several methods for rendering a 3D world. The process used by real-time
graphics hardware, such as that found in your computer, involves a very great deal of
fakery. This process is called rasterization, and a rendering
system that uses rasterization is called a rasterizer.

In rasterizers, all objects that you see are empty shells. There are techniques that
are used to allow you to cut open these empty shells, but this simply replaces part of
the shell with another shell that shows what the inside looks like. Everything is a
shell.

All of these shells are made of triangles. Even surfaces that appear to be round are
merely triangles if you look closely enough. There are techniques that generate more
triangles for objects that appear closer or larger, so that the viewer can almost never
see the faceted silhouette of the object. But they are always made of triangles.

Some rasterizers use planar quadrilaterals: four-sided objects, where all of the
points lie in the same plane. One of the reasons that hardware-based rasterizers
always use triangles is that all of the lines of a triangle are guaranteed to be in
the same plane. Knowing this makes the rasterization process less
complicated.

An object is made out of a series of adjacent triangles that define the outer surface
of the object. Such series of triangles are often called
geometry, a model or a
mesh. These terms are used interchangeably.

The process of rasterization has several phases. These phases are ordered into a
pipeline, where triangles enter from the top and a 2D image is filled in at the bottom.
This is one of the reasons why rasterization is so amenable to hardware acceleration: it
operates on each triangle one at a time, in a specific order. Triangles can be fed into
the top of the pipeline while triangles that were sent earlier can still be in some
phase of rasterization.

The order in which triangles and the various meshes are submitted to the rasterizer
can affect its output. Always remember that, no matter how you submit the triangular
mesh data, the rasterizer will process each triangle in a specific order, drawing the
next one only when the previous triangle has finished being drawn.

OpenGL is an API for accessing a hardware-based rasterizer. As such, it conforms to
the model for rasterization-based 3D renderers. A rasterizer receives a sequence of
triangles from the user, performs operations on them, and writes pixels based on this
triangle data. This is a simplification of how rasterization works in OpenGL, but it is
useful for our purposes.

Triangles and Vertices

Triangles consist of 3 vertices. A vertex is a
collection of arbitrary data. For the sake of simplicity (we will expand upon this
later), let us say that this data must contain a point in three dimensional space.
It may contain other data, but it must have at least this. Any 3 points that are not
on the same line create a triangle, so the smallest set of information for a triangle consists of 3 three-dimensional points.

A point in 3D space is defined by 3 numbers or coordinates: an X coordinate, a Y coordinate, and a Z coordinate. These are commonly written with parentheses, as in (X, Y, Z).
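
As a rough C++ sketch (this struct layout is hypothetical; actual vertex formats are covered later), the smallest triangle data could look like this:

//Hypothetical minimal vertex: just a position in 3D space.
struct Vertex
{
    float x, y, z;
};

//Any 3 points not on the same line form a triangle.
Vertex triangle[3] =
{
    {0.0f, 0.0f, 0.0f},
    {1.0f, 0.0f, 0.0f},
    {0.0f, 1.0f, 0.0f},
};
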
Rasterization Overview

The rasterization pipeline, particularly for modern hardware, is very complex. This is a very simplified overview of this pipeline. It is necessary to have a simple understanding of the pipeline before we look at the details of rendering things with OpenGL. Those details can be overwhelming without a high-level overview.

Clip Space Transformation

The first phase of rasterization is to transform the vertices of each triangle
into a certain region of space. Everything within this volume will be rendered
to the output image, and everything that falls outside of this region will not
be. This region corresponds to the view of the world that the user wants to
render.

The volume that the triangle is transformed into is called, in OpenGL parlance,
clip space. The positions of the triangle's vertices in
clip space are called clip coordinates.

Clip coordinates are a little different from regular positions. A position in 3D
space has 3 coordinates. A position in clip space has four
coordinates. The first three are the usual X, Y, Z positions; the fourth is called
W. This last coordinate actually defines what the extents of clip space are for this
vertex.

Clip space can actually be different for different vertices within a triangle. It
is a region of 3D space on the range [-W, W] in each of the X, Y, and Z directions.
So vertices with a different W coordinate are in a different clip space cube from
other vertices. Since each vertex can have an independent W component, each vertex
of a triangle exists in its own clip space.
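
Stated as an inequality (a restatement of the rule above), a clip-space position (x, y, z, w) lies in the visible region of its own clip space when:

$$-w \le x \le w \qquad -w \le y \le w \qquad -w \le z \le w$$
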
In clip space, the positive X direction is to the right, the positive Y direction is up, and the positive Z direction is away from the viewer.

The process of transforming vertex positions into clip space is quite arbitrary.
OpenGL provides a lot of flexibility in this step. We will cover this step in detail
throughout the tutorials.

Because clip space is the visible transformed version of the world, any triangles
that fall outside of this region are discarded. Any triangles that are partially
outside of this region undergo a process called clipping.
This breaks the triangle apart into a number of smaller triangles, such that the
smaller triangles are all entirely within clip space. Hence the name clip
space.

Normalized Coordinates

Clip space is interesting, but inconvenient. The extent of this space is
different for each vertex, which makes visualizing a triangle rather difficult.
Therefore, clip space is transformed into a more reasonable coordinate space:
normalized device coordinates.

This process is very simple. The X, Y, and Z of each vertex's position are divided by W to get normalized device coordinates. That is all.
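
That is, for a clip-space position (x, y, z, w):

$$(x_{ndc},\; y_{ndc},\; z_{ndc}) = \left(\frac{x}{w},\; \frac{y}{w},\; \frac{z}{w}\right)$$
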
The space of normalized device coordinates is essentially just clip space, except that the range of X, Y, and Z is [-1, 1]. The directions are all the same. The division by W is an important part of projecting 3D triangles onto 2D images; we will cover that in a future tutorial.

[Figure: a cube indicating the boundaries of normalized device coordinate space.]

Window Transformation

The next phase of rasterization is to transform the vertices of each triangle
again. This time, they are converted from normalized device coordinates to
window coordinates. As the name suggests, window
coordinates are relative to the window that OpenGL is running within.

Even though they refer to the window, they are still three-dimensional
coordinates. The X goes to the right, Y goes up, and Z goes away, just as for clip
space. The only difference is that the bounds for these coordinates depend on the
viewable window. It should also be noted that while these are in window coordinates,
none of the precision is lost. These are not integer coordinates; they are still
floating-point values, and thus they have precision beyond that of a single
pixel.

The bounds for Z are [0, 1], with 0 being the closest and 1 being the farthest.
Vertex positions outside of this range are not visible.
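
For reference, the standard OpenGL mapping from normalized device coordinates to window coordinates (controlled by state discussed in later tutorials, and assuming the default depth range of [0, 1]) is as follows, where (v_x, v_y) is the viewport's bottom-left corner and w × h is its size:

$$x_w = \frac{w}{2}\left(x_{ndc} + 1\right) + v_x \qquad y_w = \frac{h}{2}\left(y_{ndc} + 1\right) + v_y \qquad z_w = \frac{z_{ndc} + 1}{2}$$
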
Note that window coordinates have the bottom-left position as the (0, 0) origin point. This is counter to what users are used to in window coordinates, which is
having the top-left position be the origin. There are transform tricks you can play
to allow you to work in a top-left coordinate space if you need to.

The full details of this process will be discussed at length as the tutorials progress.

Scan Conversion

After converting the coordinates of a triangle to window coordinates, the
triangle undergoes a process called scan conversion. This
process takes the triangle and breaks it up based on the arrangement of window
pixels over the output image that the triangle covers.

The center image shows the digital grid of output pixels; the circles represent
the center of each pixel. The center of each pixel represents a
sample: a discrete location within the area of a pixel.
During scan conversion, a triangle will produce a fragment
for every pixel sample that is within the 2D area of the triangle.

The image on the right shows the fragments generated by the scan conversion of the triangle. This creates a rough approximation of the triangle's general shape.

It is very often the case that triangles are rendered that share edges. OpenGL
offers a guarantee that, so long as the shared edge vertex positions are
identical, there will be no sample gaps during scan
conversion.

To make it easier to use this, OpenGL also offers the guarantee that if you pass
the same input vertex data through the same vertex processor, you will get identical
output; this is called the invariance guarantee. So the onus
is on the user to use the same input vertices in order to ensure gap-less scan
conversion.

Scan conversion is an inherently 2D operation. This process only uses the X and Y
position of the triangle in window coordinates to determine which fragments to
generate. The Z value is not forgotten, but it is not directly part of the actual
process of scan converting the triangle.

The result of scan converting a triangle is a sequence of fragments that cover the
shape of the triangle. Each fragment has certain data associated with it. This data
contains the 2D location of the fragment in window coordinates, as well as the Z
position of the fragment. This Z value is known as the depth of the fragment. There
may be other information that is part of a fragment, and we will expand on that in
later tutorials.

Fragment Processing

This phase takes a fragment from a scan converted triangle and transforms it
into one or more color values and a single depth value. The order that fragments
from a single triangle are processed in is irrelevant; since a single triangle
lies in a single plane, fragments generated from it cannot possibly overlap.
However, the fragments from another triangle can possibly overlap. Since order
is important in a rasterizer, the fragments from one triangle must all be
processed before the fragments from another triangle.

This phase is quite arbitrary. The user of OpenGL has a lot of options for how to
decide what color to assign a fragment. We will cover this step in detail throughout
the tutorials.

Direct3D Note

Direct3D prefers to call this stage pixel processing or
pixel shading. This is a misnomer for several reasons. First,
a pixel's final color can be composed of the results of multiple fragments
generated by multiple samples within a single pixel. This
is a common technique to remove jagged edges of triangles. Also, the fragment
data has not been written to the image, so it is not a pixel yet. Indeed, the
fragment processing step can conditionally prevent rendering of a fragment based
on arbitrary computations. Thus a pixel in D3D parlance may never
actually become a pixel at all.

Fragment Writing

After generating one or more colors and a depth value, the fragment is written
to the destination image. This step involves more than simply writing to the
destination image. Combining the color and depth with the colors that are
currently in the image can involve a number of computations. These will be
covered in detail in various tutorials.

Colors

Previously, a pixel was stated to be an element in a 2D image that has a
particular color. A color can be described in many ways.

In computer graphics, the usual description of a color is as a series of numbers
on the range [0, 1]. Each of the numbers corresponds to the intensity of a
particular reference color; thus the final color represented by the series of
numbers is a mix of these reference colors.

The set of reference colors is called a colorspace. The
most common color space for screens is RGB, where the reference colors are Red,
Green and Blue. Printed works tend to use CMYK (Cyan, Magenta, Yellow, Black). Since
we're dealing with rendering to a screen, and because OpenGL requires it, we will
use the RGB colorspace.

You can play some fancy games with programmatic shaders (see below) that allow
you to work in different colorspaces. So technically, we only have to output to
a linear RGB colorspace.

So a pixel in OpenGL is defined as 3 values on the range [0, 1] that represent a color in a linear RGB colorspace. By combining different intensities of these 3 colors, we can generate millions of different color shades. This will get extended slightly, as we deal with transparency later.
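
In C++ terms, such a color could be sketched as nothing more than three floats (a hypothetical type, for illustration only):

//Hypothetical linear RGB color: three intensities on the range [0, 1].
struct Color
{
    float r, g, b;
};

Color orange = {1.0f, 0.5f, 0.0f}; //full red, half green, no blue
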
Shader

A shader is a program designed to be run on a renderer as part of the rendering operation. Regardless of the kind of rendering system in use, shaders can only be executed at certain points in that rendering process. These shader stages represent hooks where a user can add arbitrary algorithms to create a specific visual effect.

In terms of rasterization as outlined above, there are several shader stages where
arbitrary processing is both economical for performance and of high utility to
the user. For example, the transformation of an incoming vertex to clip space is a
useful hook for user-defined code, as is the processing of a fragment into final
colors and depth.

Shaders for OpenGL are run on the actual rendering hardware. This can often free
up valuable CPU time for other tasks, or simply perform operations that would be
difficult if not impossible without the flexibility of executing arbitrary code. A
downside of this is that they must live within certain limits that CPU code would
not have to.

There are a number of shading languages available to various APIs. The one used in
this tutorial is the primary shading language of OpenGL. It is called,
unimaginatively, the OpenGL Shading Language, or GLSL for short. It looks deceptively like C, but it is very much not C.

What is OpenGL

Before we can begin looking into writing an OpenGL application, we must first know
what it is that we are writing. What exactly is OpenGL?

OpenGL as an API

OpenGL is usually thought of as an Application Programming
Interface (API). The OpenGL API has been exposed to a number of
languages. But the one that they all ultimately use at their lowest level is the C
API.

The API, in C, is defined by a number of typedefs, #defined enumerator values, and
functions. The typedefs define basic GL types like GLint,
GLfloat and so forth. These are defined to have a specific bit
depth.
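
For example, a GL header might contain definitions along these lines (the exact definitions vary by platform; this is only a sketch):

typedef int GLint;            //32-bit signed integer
typedef unsigned int GLuint;  //32-bit unsigned integer
typedef float GLfloat;        //32-bit IEEE-754 floating-point
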
Complex aggregates like structs are never directly exposed in OpenGL. Any such constructs are hidden behind the API. This makes it easier to
expose the OpenGL API to non-C languages without having a complex conversion
layer.

In C++, if you wanted an object that contained an integer, a float, and a string, you would create it and access it like this:

struct Object
{
    int count;
    float opacity;
    const char *name; //string literals are read-only, so use const char*
};

//Create the storage for the object.
Object newObject;

//Put data into the object.
newObject.count = 5;
newObject.opacity = 0.4f;
newObject.name = "Some String";
In OpenGL, you would use an API that looks more like this:

//Create the storage for the object
GLuint objectName;
glGenObject(1, &objectName);
//Put data into the object.
glBindObject(GL_MODIFY, objectName);
glObjectParameteri(GL_MODIFY, GL_OBJECT_COUNT, 5);
glObjectParameterf(GL_MODIFY, GL_OBJECT_OPACITY, 0.4f);
glObjectParameters(GL_MODIFY, GL_OBJECT_NAME, "Some String");

None of these are actual OpenGL commands, of course. This is simply an example of
what the interface to such an object would look like.

OpenGL owns the storage for all OpenGL objects. Because of this, the user can only
access an object by reference. Almost all OpenGL objects are referred to by an
unsigned integer (the GLuint). Objects are created by a function of the
form glGen*, where * is the type of the object. The first
parameter is the number of objects to create, and the second is a
GLuint* array that receives the newly created object names.
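
Buffer objects, a real OpenGL object type covered in later tutorials, follow exactly this pattern:

GLuint bufferName;
glGenBuffers(1, &bufferName);              //Create 1 object; its name is written to bufferName.
glBindBuffer(GL_ARRAY_BUFFER, bufferName); //Bind it to the GL_ARRAY_BUFFER target.
glBindBuffer(GL_ARRAY_BUFFER, 0);          //Binding object 0 unbinds the object.
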
To modify most objects, they must first be bound to the context. Many objects can be bound to different locations in the context; this allows the same object to be used in different ways. These different locations are called targets; all objects have a list of valid targets, and some have only one. In the above example, the fictitious target GL_MODIFY is the location where objectName is bound.

The enumerators GL_OBJECT_* all name fields in the object that
can be set. The glObjectParameter family of functions set
parameters within the object bound to the given target. Note that since OpenGL is a
C API, it has to name each of the differently typed variations differently. So there
is glObjectParameteri for integer parameters,
glObjectParameterf for floating-point parameters, and so
forth.

Note that not all OpenGL objects are as simple as this example, and the functions
that change object state do not all follow these naming conventions. Also, exactly
what it means to bind an object to the context is explained below.

The Structure of OpenGL

The OpenGL API is defined as a state machine. Almost all of the OpenGL functions
set or retrieve some state in OpenGL. The only functions that do not change state
are functions that use the currently set state to cause rendering to happen.

You can think of the state machine as a very large struct with a great many
different fields. This struct is called the OpenGL context,
and each field in the context represents some information necessary for
rendering.

Objects in OpenGL are thus defined as a list of fields in this struct that can be
saved and restored. Binding an object to a target within the
context causes the data in this object to replace some of the context's state. Thus
after the binding, future function calls that read from or modify this context state
will read or modify the state within the object.

Objects are usually represented as GLuint integers; these are handles
to the actual OpenGL objects. The integer value 0 is special; it acts as the object
equivalent of a NULL pointer. Binding object 0 means to unbind the currently bound
object. This means that the original context state, the state that was in place
before the binding took place, now becomes the context state.

Let us say that this represents some part of an OpenGL context's state:

OpenGL Object State

struct Values
{
    int iValue1;
    int iValue2;
};

struct OpenGL_Context
{
    ...
    Values *pMainValues;
    Values *pOtherValues;
    ...
};

OpenGL_Context context;

To create a Values object, you would call something like
glGenValues. You could bind the
Values object to one of two targets:
GL_MAIN_VALUES which represents the pointer
context.pMainValues, and GL_OTHER_VALUES
which represents the pointer context.pOtherValues. You would bind
the object with a call to glBindValues, passing one of the two
targets and the object. This would set that target's pointer to the object that you
created.

There would be a function to set values in a bound object. Say,
glValueParam. It would take the target of the object, which
represents the pointer in the context. It would also take an enum representing which
value in the object to change. The value GL_VALUE_ONE would
represent iValue1, and GL_VALUE_TWO would
represent iValue2.
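
Putting this fictitious API together, a complete usage sketch would look like this (again, none of these functions are real OpenGL):

//Create a Values object and get its handle.
GLuint valuesName;
glGenValues(1, &valuesName);

//Bind it, so that context.pMainValues points to our object.
glBindValues(GL_MAIN_VALUES, valuesName);

//Set fields in the bound object.
glValueParam(GL_MAIN_VALUES, GL_VALUE_ONE, 5);  //sets iValue1
glValueParam(GL_MAIN_VALUES, GL_VALUE_TWO, 10); //sets iValue2

//Bind object 0 to restore the previous context state.
glBindValues(GL_MAIN_VALUES, 0);
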
The OpenGL Specification

To be technical about it, OpenGL is not an API; it is a specification. A document. The C API is merely one way to implement the spec. The specification defines the
initial OpenGL state, what each function does to change or retrieve that state, and
what is supposed to happen when you call a rendering function.

The specification is written by the OpenGL Architecture Review Board (ARB), a group of representatives from
companies like Apple, NVIDIA, and AMD (the ATI part), among others. The ARB is part
of the Khronos Group.

The specification is a very complicated and technical document. However, parts of
it are quite readable, though you will usually need at least some understanding of
what should be going on to understand it. If you try to read it, the most important
thing to understand about it is this: it describes results, not
implementation. Just because the spec says that X will happen does not mean that it
actually does. What it means is that the user should not be able to tell the
difference. If a piece of hardware can provide the same behavior in a different way,
then the specification allows this, so long as the user can never tell the
difference.

OpenGL Implementations

While the OpenGL ARB does control the specification, it does not control
OpenGL's code. OpenGL is not something you download from a centralized location.
For any particular piece of hardware, it is up to the developers of that
hardware to write an OpenGL Implementation for that
hardware. Implementations, as the name suggests, implement the OpenGL
specification, exposing the OpenGL API as defined in the spec.

Who controls the OpenGL implementation is different for different operating
systems. On Windows, OpenGL implementations are controlled virtually entirely by the
hardware makers themselves. On Mac OSX, OpenGL implementations are controlled by
Apple; they decide what version of OpenGL is exposed and what additional
functionality can be provided to the user. Apple writes much of the OpenGL
implementation on Mac OSX, with the hardware developers writing to an Apple-created internal driver API. On Linux, things are... complicated.

The long and short of this is that if you are writing a program and it seems to be
exhibiting off-spec behavior, that is the fault of the maker of your OpenGL
implementation (assuming it is not a bug in your code). On Windows, the various
graphics hardware makers put their OpenGL implementations in their regular drivers.
So if you suspect a bug in their implementation, the first thing you should do is
make sure your graphics drivers are up-to-date; the bug may have been corrected
since the last time you updated your drivers.

OpenGL Versions

There are many versions of the OpenGL Specification. OpenGL versions are not
like most Direct3D versions, which typically change most of the API. Code that
works on one version of OpenGL will almost always work on later versions of
OpenGL.

The only exception to this deals with OpenGL 3.0 and above, relative to previous versions. v3.0 deprecated a number of older functions, and v3.1 removed most of those functions from the API. (Deprecation only means marking those functions as slated for removal in later versions; they are still available for use in 3.0.) This also divided the specification into 2 variations (called
profiles): core and compatibility. The compatibility profile retains all of the
functions removed in 3.1, while the core profile does not. Theoretically, OpenGL
implementations could implement just the core profile; this would leave software
that relies on the compatibility profile non-functional on that
implementation.

As a practical matter, none of this matters at all. No OpenGL driver developer is
going to ship drivers that only implement the core profile. So in effect, this means
nothing at all; all OpenGL versions are effectively backwards compatible.

Glossary

vector
A value composed of an ordered sequence of other values. The number of values stored in a vector is its dimensionality. Vectors can have math operations performed on them as a whole.

scalar
A single, non-vector value. A one-dimensional vector can be considered a scalar.

vector position
A vector that represents a position.

vector direction
A vector that represents a direction.

vector component
One of the values within a vector.

component-wise operation
An operation on a vector that applies something to each component of the
vector. The result of a component-wise operation is a vector of the same dimension as the input(s) to the operation. Many vector operations are component-wise.

unit vector
A vector whose length is exactly one. These represent purely directional vectors.

vector normalization
The process of converting a vector into a unit vector that points in the same direction as the original vector.

pixel
The smallest division of a digital image. A pixel has a particular color in a particular colorspace.

image
A two-dimensional array of pixels.

rendering
The process of taking the source 3D world and converting it into a 2D image that represents a view of that world from a particular angle.

rasterization
A particular rendering method, used to convert a series of 3D triangles into a 2D image.

geometry, model, mesh
A single object in 3D space made of triangles.

vertex
One of the 3 elements that make up a triangle. Vertices can contain arbitrary data, but among that data is a 3-dimensional position representing the
location of the vertex in 3D space.

clip space, clip coordinates
A region of three-dimensional space into which vertex positions are transformed. These vertex positions are 4-dimensional quantities. The fourth component (W) of clip coordinates represents the visible range of clip space for that vertex. So the X, Y, and Z components of clip coordinates must be between [-W, W] to be a visible part of the world.

In clip space, positive X goes right, positive Y up, and positive Z away.

Clip-space vertices are output by the vertex processing stage of the rendering pipeline.

clipping
The process of taking a triangle in clip coordinates and splitting it if one or more of its vertices is outside of clip space.

normalized device coordinates
These are clip coordinates that have been divided by their fourth component.
This makes this range of space the same for all components. Vertices with
positions on the range [-1, 1] are visible, and other vertices are not.

window space, window coordinates
A region of three-dimensional space that normalized device coordinates are
mapped to. The X and Y positions of vertices in this space are relative to the
destination image. The origin is in the bottom-left, with positive X going right
and positive Y going up. The Z value is a number on the range [0, 1], where 0 is
the closest value and 1 is the farthest. Vertex positions outside of this range
are not visible.

scan conversion
The process of taking a triangle in window space and converting it into a
number of fragments based on projecting it onto the pixels of the output
image.

sample
A discrete location within the bounds of a pixel that determines whether to
generate a fragment from scan converting the triangle. The area of a single
pixel can have multiple samples, which can generate multiple fragments.

fragment
A single element of a scan converted triangle. A fragment can contain
arbitrary data, but among that data is a 3-dimensional position, identifying the
location on the triangle in window space where this fragment originates
from.

invariance guarantee
A guarantee provided by OpenGL, such that if you provide binary-identical
inputs to the vertex processing, while all other state remains exactly
identical, then the exact same vertex in clip-space will be output.

colorspace
The set of reference colors that define a way of representing a color in
computer graphics, and the function mapping between those reference colors
and the actual colors. All colors are defined relative to a particular
colorspace.

shader
A program designed to be executed by a renderer, in order to perform some
user-defined operations.

shader stage
A particular place in a rendering pipeline where a shader can be executed
to perform a computation. The results of this computation will be fed to the
next stage in the rendering pipeline.

OpenGL
A specification that defines the effective behavior of a
rasterization-based rendering system.

OpenGL context
A specific set of state used for rendering. The OpenGL context is like a
large C-style struct that contains a large number of fields that can be
accessed. If you were to create multiple windows for rendering, each one
would have its own OpenGL context.

object binding
Objects can be bound to a particular location in the OpenGL context. When
this happens, the state within the object takes the place of a certain set
of state in the context. There are multiple binding points for objects, and
each kind of object can be bound to certain binding points. Which bind point
an object is bound to determines what state the object overrides.

Architecture Review Board
The body of the Khronos Group that governs the OpenGL
specification.

OpenGL Implementation
The software that implements the OpenGL specification for a particular
system.