concentrate on c/c++ related technology

plan,refactor,daily-build, self-discipline,

only one vertex shader can be active at a time.
every vertex-shader-driven program must run through the following steps:
1) check for vertex shader support by checking the VertexShaderVersion field of the D3DCAPS8 structure.
D3DVS_VERSION(X,Y) encodes shader version X.Y.
if(pCaps->VertexShaderVersion < D3DVS_VERSION(1,1))
{
return E_FAIL;
}
here we judge whether the hardware supports vertex shader version 1.1;
the supported vertex shader version is reported in the D3DCAPS8 structure.
2) declare the vertex shader with the D3DVSD_* macros, to map vertex buffer streams to input registers.
you must declare a vertex shader before using it.
SetStreamSource: binds a vertex buffer to a device data stream (the stream selected with D3DVSD_STREAM).
D3DVSD_REG: binds a single input register to a vertex element/property from the vertex stream.
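as a sketch (assuming a hypothetical vertex with a float3 position and a diffuse color in stream 0 — the register choices are mine, not mandated), a declaration built from these macros might look like:

```cpp
// hypothetical layout: position in v0, diffuse color in v5.
// D3DVSD_STREAM selects the stream set with SetStreamSource;
// D3DVSD_REG maps one vertex element to one input register.
DWORD decl[] =
{
    D3DVSD_STREAM(0),
    D3DVSD_REG(0, D3DVSDT_FLOAT3),    // v0: position
    D3DVSD_REG(5, D3DVSDT_D3DCOLOR),  // v5: diffuse color
    D3DVSD_END()
};
```

this array is later passed to CreateVertexShader together with the assembled shader code.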
3) set the vertex shader constant registers with SetVertexShaderConstant.
you fill the vertex shader constant registers with SetVertexShaderConstant, and read them back with GetVertexShaderConstant.
D3DVSD_CONSTANT: used in the vertex shader declaration; it can only be used once.
SetVertexShaderConstant: can be called before every DrawPrimitive* call.
4) compile the previously written vertex shader with D3DXAssembleShader*.
the different instructions include:
add dest, src1, src2  add src1 and src2 together.
dp3 dest, src1, src2  dest.x = dest.y = dest.z = dest.w = (src1.x * src2.x) + (src1.y * src2.y) + (src1.z * src2.z)
dp4 dest, src1, src2  dest.w = (src1.x * src2.x) + (src1.y * src2.y) + (src1.z * src2.z) + (src1.w * src2.w), and dest.x, dest.y, dest.z are not used.
dst dest, src1, src2  dest.x = 1; dest.y = src1.y * src2.y; dest.z = src1.z; dest.w = src2.w; it is useful to calculate the standard attenuation.
expp dest, src.w  float tmp = (float)pow(2, src.w); DWORD tmpd = *(DWORD*)&tmp & 0xffffff00; dest.z = *(float*)&tmpd;
lit dest, src

Calculates lighting coefficients from two dot products and a power.
---------------------------------------------
To calculate the lighting coefficients, set up the registers as shown:

src.x = N*L ; The dot product between normal and direction to light
src.y = N*H ; The dot product between normal and half vector
src.z = ignored ; This value is ignored
src.w = specular power ; The value must be between -128.0 and 128.0
logp dest, src.w
 float v = ABSF(src.w);
 float tmp = (float)(log(v)/log(2));
 DWORD tmpd = *(DWORD*)&tmp & 0xffffff00;
 dest.z = *(float*)&tmpd;
mad dest src1 src2 src3 dest = (src1 * src2) + src3
max dest src1 src2 dest = (src1 >= src2)?src1:src2
min dest src1 src2 dest = (src1 < src2)?src1:src2
mov dest, src move
mul dest, src1, src2  set dest to the component by component product of src1 and src2
nop nothing
rcp dest, src.w
if(src.w == 1.0f)
{
  dest.x = dest.y = dest.z = dest.w = 1.0f;
}
else if(src.w == 0)
{
  dest.x = dest.y = dest.z = dest.w = PLUS_INFINITY();
}
else
{
  dest.x = dest.y = dest.z = dest.w = 1.0f/src.w;
}
rsq dest, src

reciprocal square root of src
(much more useful than straight 'square root'):

float v = ABSF(src.w);
if(v == 1.0f)
{
  dest.x = dest.y = dest.z = dest.w = 1.0f;
}
else if(v == 0)
{
  dest.x = dest.y = dest.z = dest.w = PLUS_INFINITY();
}
else
{
  v = (float)(1.0f / sqrt(v));
  dest.x = dest.y = dest.z = dest.w = v;
}
sge dest, src1, src2  dest = (src1 >= src2) ? 1 : 0
slt dest, src1, src2  dest = (src1 < src2) ? 1 : 0
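putting several of these instructions together, a minimal vs.1.1 shader might transform the position by a matrix stored transposed in c0-c3 (so each dp4 computes one row) and pass a diffuse color through — a sketch assuming position in v0 and color in v5:

```
vs.1.1                 ; version instruction
dp4 oPos.x, v0, c0     ; transform position by the matrix in c0-c3
dp4 oPos.y, v0, c1
dp4 oPos.z, v0, c2
dp4 oPos.w, v0, c3
mov oD0, v5            ; pass the vertex color through to the rasterizer
```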

The Vertex Shader ALU is a multi-threaded vector processor that operates on quad-float data. It consists of two functional units. The SIMD Vector Unit is responsible for the mov, mul, add, mad, dp3, dp4, dst, min, max, slt and sge instructions. The Special Function Unit is responsible for the rcp, rsq, log, exp and lit instructions.

rsq is used in normalizing vectors to be used in lighting equations.
The exponential instruction expp can be used for fog effects, procedural noise generation.
a log function is the inverse of an exponential function: it undoes the operation of the exponential function.

The lit instruction deals by default with directional lights. It calculates the diffuse & specular factors with clamping based on N * L and N * H and the specular power. There is no attenuation involved, but you can use an attenuation level separately with the result of lit by using the dst instruction. This is useful for constructing attenuation factors for point and spot lights.

The min and max instructions allow for clamping and absolute value computation.
Using the Input Registers

The 16 input registers can be accessed by using their names v0 to v15. Typical values provided to the input vertex registers are:

  • Position(x,y,z,w)
  • Diffuse color (r,g,b,a) -> 0.0 to +1.0
  • Specular color (r,g,b,a) -> 0.0 to +1.0
  • Up to 8 Texture coordinates (each as s, t, r, q or u, v , w, q) but normally 4 or 6, dependent on hardware support
  • Fog (f,*,*,*) -> value used in fog equation
  • Point size (p,*,*,*)

The input registers are read-only. Each instruction may access only one vertex input register. unspecified components of the input registers default to 0.0 for .x, .y and .z, and to 1.0 for the .w component.

all data in an input register remains persistent throughout the vertex shader execution and even longer: it is retained beyond the life-time of a vertex shader, so it is possible to re-use the data of the input registers in the next vertex shader.

Using the Constant Registers

Typical uses for the constant registers include:

  • Matrix data: quad-floats are typically one row of a 4x4 matrix
  • Light characteristics, (position, attenuation etc)
  • Current time
  • Vertex interpolation data
  • Procedural data

the constant registers are read-only from the perspective of the vertex shader, whereas the application can read and write them. they persist just as the input registers do,
which allows an application to avoid making redundant SetVertexShaderConstant() calls.
Using the Address Register
you access the address registers with a0 to an (more than one address register is only available in vertex shader versions higher than 1.1).
Using the Temporary Registers
you can access 12 temporary registers using r0 to r11.
each temporary register has single write and triple read access, so an instruction may use the same temporary register as a source up to three times. vertex shaders can not read a value from a temporary register before writing to it; if you try to read a temporary register that was not filled with a value, the API will give you an error message while creating the vertex shader (CreateVertexShader).
Using the Output Registers
there are up to 13 write-only output registers that can be accessed using the following register names. they are defined as the inputs to the rasterizer, and the name of each register is preceded by a lower case 'o'. the output registers are named to suggest their use by pixel shaders.
every vertex shader must write to at least one component of oPos, or you will get an error message from the assembler.
swizzling and masking
if you use the input, constant and temporary registers as source registers, you can swizzle the .x, .y, .z and .w values independently of each other.
if you use the output and temporary registers as destination registers you can use the .x, .y, .z and .w values as write-masks.
component modifier    description
R.[x][y][z][w]        destination mask
R.xwzy                source swizzle
-R                    source negation
Guidelines for writing the vertex shaders
the most important restrictions you should remember when writing vertex shaders are the following:
they must write to at least one component of the output register oPos.
there is a 128 instruction limit
every instruction may source no more than one constant register, e.g. add r0, c4, c3 will fail.
every instruction may source no more than one input register, e.g. add r0, v1, v2 will fail.
there are no c-like conditional statements, but you can mimic an instruction of the form r0 = (r1 >= r2) ? r3 : r4 with the sge instruction.
all iterated values transferred out of the vertex shader are clamped to [0..1]
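for example, r0 = (r1 >= r2) ? r3 : r4 can be mimicked with sge plus mad — a sketch using only instructions listed above, with source negation standing in for a subtract:

```
sge r5, r1, r2        ; r5 = (r1 >= r2) ? 1.0 : 0.0 (per component)
add r6, r3, -r4       ; r6 = r3 - r4, via source negation
mad r0, r5, r6, r4    ; r0 = r5 * (r3 - r4) + r4  ->  r3 if r5 = 1, else r4
```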
several ways to optimize vertex shaders:
when setting vertex shader constant data, try to set all data in one SetVertexShaderConstant call.
pause and think about using a mov instruction, you may be able to avoid it.
choose instructions that perform multiple operations over instructions that perform single operations.
collapse vertex shaders (remove complex instructions like m4x4 or m3x3) before thinking about optimizations.
a rule of thumb for load-balancing between the cpu and gpu: many calculations in shaders can be pulled outside, reformulated per-object instead of per-vertex, and put into constant registers. if you are doing some calculation which is per object rather than per vertex, do it on the cpu and upload it to the vertex shader as a constant, rather than doing it on the gpu.
one of the most interesting methods to optimize your application's bandwidth usage is the use of compressed vertex data.
Compiling a Vertex Shader
Direct3D uses byte-codes, whereas OpenGL implementations parse a string; therefore the Direct3D developer needs to assemble the vertex shader source with an assembler. this might help you find bugs earlier in your development cycle, and it also reduces load-time.
three different ways to compile a vertex shader:
write the vertex shader source into a separate ASCII file, for example test.vsh, and compile it with a vertex shader assembler into a binary file, for example test.vso. this file will be opened and read at game start-up. this way, not everyone will be able to read and modify your vertex shader source.
write the vertex shader source into a separate ASCII file or as a char string into your *.cpp file and compile it "on the fly" while the application starts up, with the D3DXAssembleShader*() functions.
write the vertex shader source in an effects file and open this effect file when the application starts up. the vertex shader can be compiled by reading the effect file with D3DXCreateEffectFromFile. it is also possible to pre-compile an effects file. this way, most of the handling of vertex shaders is simplified and handled by the effect file functions.
 
5) create a vertex shader handle with CreateVertexShader.
the CreateVertexShader function is used to create and validate a vertex shader.
6) set the vertex shader with SetVertexShader for a specific object.
you set a vertex shader for a specific object by using SetVertexShader before the DrawPrimitive*() call for that object.
the vertex shader set with SetVertexShader is executed as many times as there are vertices.
7) delete the vertex shader with DeleteVertexShader().
when the game shuts down or when the device is changed, the resources taken by the vertex shader must be released. this is done by calling DeleteVertexShader with the vertex shader handle.

Point light source.
a point light source has color and position within a scene, but no single direction. all light rays originate from one point and illuminate equally in all directions. the intensity of the rays will remain constant regardless of their distance from the point source unless a falloff value is explicitly stated. a point light is useful to simulate a light bulb.

to get a wider range of effects a decent attenuation equation is used:
funcAttenuation = 1 / (A0 + A1 * dL + A2 * dL * dL)

posted @ 2008-12-09 11:18 jolley

abstract: samplers: a window into video memory with associated state defining things like filtering and texture coordinate addressing mode. in DXSDK version 8.0 or earlier, the application can pass...
posted @ 2008-11-27 19:53 jolley

DirectSound provides simulated sound source objects and a listener. the relationship between a source and the listener can be described by three variables: position in 3D space, velocity of motion, and direction of motion.
the situations that produce 3D sound effects are: 1) the source is stationary and the listener moves, 2) the source moves and the listener is stationary, 3) both the source and the listener move.
in a 3D environment, a sound source is represented by the IDirectSound3DBuffer8 interface. only a DirectSound buffer created with the DSBCAPS_CTRL3D flag supports this interface; it provides functions to set and get the properties of the source. the IDirectSound3DListener8 interface can be obtained from the primary buffer, and through it we can control most parameters of the acoustic environment, such as the amount of Doppler shift and the rate of volume rolloff.

when the listener approaches the source, the sound gets louder; as the listener moves away it gets quieter, until it fades out.
the minimum distance of a source is the point at which the volume starts to attenuate noticeably with distance.
DirectSound's default minimum distance, DS3D_DEFAULTMINDISTANCE, is defined as 1 unit, or 1 meter: the sound is at full volume at 1 meter, attenuates to half at 2 meters, to a quarter at 4 meters, and so on.
the maximum distance is the distance beyond which the volume of the source no longer attenuates.
sound buffer processing modes: normal, head-relative, disabled.

in normal mode, the position and orientation of the source are absolute values in the real world; this mode suits the case where the source does not move relative to the listener.

in head-relative mode, all 3D properties of the source are relative to the listener's current position, velocity and orientation. when the listener moves or turns, the 3D buffer's world-space values are automatically readjusted. this mode can implement a sound that keeps buzzing around the listener's head; however, sounds that simply follow the listener everywhere do not need 3D sound at all.

in disabled mode, 3D sound processing is turned off and all sounds appear to come from the listener's head.
two positions need attention: the source position and the listener position. the problem I ran into earlier was exactly this: listenerPosition was recorded when interface sounds were played on the login screen, but after entering the game the player's coordinates were never re-assigned to the listener position, nor updated continuously with the player's state.
1) the source position was correct but the listener position was wrong, so the effective range of the sound could not be established; this made the 3D effect behave like an ambient sound, audible wherever you walked.
2) the position obtained from the model was only its position at initialization time; the position later bound to the model was never fixed up, so both the source position and the listener position were wrong.

later I wrote some test code in DirectSound: I set both the listener and the sound buffer to the corresponding positions and played the sound there, and found no sense of distance or attenuation at all. the Play3DSound sample in DirectSound is not really positional sound either, because it merely moves the source around with sine/cosine functions; it gives the impression of sound flying back and forth, without any attenuation component. I also ran a test there fixing the source position and varying the listener's distance, expecting that the sound would be loud near the source and quiet far from it; in fact it sounded just like playing ordinary music. the one thing worth praising is that DirectSound has the concept of secondary buffers to support sound mixing, and it can play music. for real 3D sound effects it is better not to use DirectSound; I recommend something like FMOD or OpenAL instead, which are more practical.

DSBCAPS_CTRLPAN | DSBCAPS_CTRLVOLUME | DSBCAPS_CTRLFREQUENCY = DSBCAPS_CTRLDEFAULT.

DirectSound does not support mixing stereo sounds (stereo with stereo, or mono with stereo); it only supports mixing mono sounds, and requires the sound format information (e.g. frequency and sample size) to be identical. an 8-bit sample size and a 22 kHz sample rate are recommended; GoldWave can be used as a conversion tool.
posted @ 2008-10-31 10:10 jolley

Direct3D ---- HAL ---- graphics device.
REF device: the reference rasterizer, which emulates the whole of Direct3D in software.
this allows you to write and test code that uses Direct3D features that are not available on your hardware.
the device types are D3DDEVTYPE_REF and D3DDEVTYPE_HAL.
surface: a matrix of pixels that Direct3D uses primarily to store 2D image data.
when we visualize the surface data as a matrix, the pixel data is actually stored in a linear array.
the width and height of a surface are measured in pixels.
IDirect3DSurface9 includes several methods:
1) LockRect: allows us to obtain a pointer to the surface memory.
2) UnlockRect: after calling LockRect, we must call UnlockRect.
3) GetDesc: retrieves a description of the surface by filling out a D3DSURFACE_DESC structure.

Multisampling: smooths out the blocky-looking images that can result from representing images as a matrix of pixels.
Multisample values: D3DMULTISAMPLE_NONE, and D3DMULTISAMPLE_2_SAMPLES through D3DMULTISAMPLE_16_SAMPLES.

we often need to specify the pixel format of Direct3D resources when we create a surface or texture.
the format of a pixel is defined by specifying a member of the D3DFORMAT enumerated type.
D3DFMT_R8G8B8, D3DFMT_X8R8G8B8,D3DFMT_A8R8G8B8 are widely supported.
D3DPOOL_DEFAULT: instructs Direct3D to place the resource in the memory that is best suited for the resource type and its usage;
it may be video memory, AGP memory, or system memory.

D3DPOOL_MANAGED: resources placed in the managed pool are managed by Direct3D (that is, they are moved to video or AGP memory as needed).
also a back-up copy of the resource is maintained in the system memory.
when resources are accessed and changed by the application,
they work with the system copy.
then Direct3D automatically updates them to video memory as needed.

D3DPOOL_SYSTEMMEM:specifies that the resource be placed in system memory.

D3DPOOL_SCRATCH: specifies that the resource be placed in system memory. the difference between this pool
and D3DPOOL_SYSTEMMEM is that these resources are not bound by the graphics device's restrictions.

Direct3D maintains a collection of surfaces, usually two or three,
called a swap chain that is represented by the IDirect3DSwapChain9 interface.

swap chains, and more specifically
the technique of page flipping, are used to provide smooth animation between frames.
Front Buffer: the contents of this buffer are currently being displayed by the monitor.
Back Buffer: the frame currently being processed is rendered to this buffer.

the application's frame rate is often out of sync with the monitor's refresh rate:
we do not want to update the contents of the front buffer with the next frame of animation
until the monitor has finished drawing the current frame,
but we do not want to halt our rendering while waiting for the monitor to
finish displaying the contents of the front buffer either.

we render to an off-screen surface (the back buffer); then, when the monitor is done displaying the surface in the front buffer, we move it to the end of the swap chain and the next back buffer in the swap chain is promoted to be the front buffer.
this process is called presenting .

the depth buffer is a surface that does not contain image data but rather depth information about a particular pixel.
there is an entry in the depth buffer that corresponds to each pixel in the final rendered image.

In order for Direct3D to determine which pixels of an object are in front of another,
it uses a technique called depth buffering or z-buffering.

depth buffering works by computing a depth value for each pixel,and performing a depth test.
the pixel with the depth value closest to the camera wins, and that pixel gets written.
24-bit depth buffer is more accurate.

software vertex processing is always supported and can always be used.
hardware vertex processing can only be used if the graphics card supports vertex processing in hardware.

in our application, we can check if a device supports a feature
by checking the corresponding data member or bit in the D3DCAPS9 instance.

initializing Direct3D:
1) Acquire a pointer to an IDirect3D9 interface.
2) check the device capabilities(D3DCAPS9) to see
if the primary display adapter (primary graphics card) supports hardware vertex processing, or transformation & lighting.
3) Initialize an instance of D3DPRESENT_PARAMETERS.
4) Create the IDirect3DDevice9 object based on an initialized D3DPRESENT_PARAMETERS structure.

1) Direct3DCreate9(D3D_SDK_VERSION); the D3D_SDK_VERSION argument guarantees that the application is built against the correct header files.
IDirect3D9 object is used for two things: device enumeration and creating the IDirect3DDevice9 object.
device enumeration refers to finding out the capabilities, display modes,
formats, and other information about each graphics device available on the system.

2) check the D3DCAPS9 structure.
use caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT to check which type of vertex processing that the display card supports.

3) Fill out the D3DPRESENT_PARAMETERS structure.

4)Create the IDirect3DDevice9 interface.
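a minimal sketch of the four steps (error checking elided; hwnd is an assumed, already-created window handle):

```cpp
IDirect3D9* d3d9 = Direct3DCreate9(D3D_SDK_VERSION);        // step 1

D3DCAPS9 caps;                                              // step 2
d3d9->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);
DWORD vp = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
         ? D3DCREATE_HARDWARE_VERTEXPROCESSING
         : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

D3DPRESENT_PARAMETERS d3dpp;                                // step 3
ZeroMemory(&d3dpp, sizeof(d3dpp));
d3dpp.Windowed         = TRUE;
d3dpp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
d3dpp.BackBufferFormat = D3DFMT_UNKNOWN;

IDirect3DDevice9* device = 0;                               // step 4
d3d9->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                   hwnd, vp, &d3dpp, &device);
```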

it works like this:
we create a vertex list and an index list, the vertex list consists of all the unique vertices,
and the index list contains values that index into the vertex list to
define how they are to be put together to form triangles.

the camera specifies what part of the world the viewer can see
and thus what part of the world for which we need to generate 2d image.

the volume of space is a frustum and defined by the field of view angles and the near and far planes.

the projection window is the 2d area
that the 3d geometry inside the frustum gets projected onto to create the 2D image representation of the 3D scene.

local space, or modeling space, is the coordinate system in which we define an object's triangle list.

objects in local space are transformed to world space through a process called the world transform,
which usually consists of translations, rotations,
and scaling operations that set the position, orientation, and size of the model in the world.
D3DXMatrixTranslation.

projection and other operations are difficult or less efficient
when the camera is at an arbitrary position and orientation in the world.
to make things easier, we transform the camera to the origin of the world system and rotate it
so that the camera is looking down the positive z-axis.

all geometry in the world is transformed along with the camera
so that the view of the world remains the same. this transformation is called the view space transformation.
D3DXMatrixLookAtLH.

Direct3D takes advantage of this by culling(discard from further processing) the back facing polygons,
this is called backface culling.

by default: Direct3D treats triangles with vertices specified in a clockwise winding order(in view space)as front facing.
triangles with vertices specified in counterclockwise winding orders(in view space) are considered back facing.
Lighting sources are defined in world space but transformed into view space by the view space transformation.
in view space these light sources are applied to light the objects in the scene to give a more realistic appearance.

we need to cull the geometry that is outside the viewing volume, this process is called clipping.

in view space we have the task of obtaining a 2d representation of the 3D scene.

the process of going from n dimension to an n-1 dimension is called projection.

there are many ways of performing a projection, but we are interested in a particular way called perspective projection.
a perspective projection projects geometry in such a way that foreshortening occurs.
this type of projection allows us to represent a 3D scene on a 2D image.

the projection transformation defines our viewing volume(frustum) and
is responsible for projecting the geometry in the frustum onto the projection window.
D3DXMatrixPerspectiveFovLH.

viewport transform is responsible for transforming coordinates on the project window to a rectangle on the screen,
which we call the viewport.

a vertex buffer is simply a chunk of contiguous memory that contains vertex data (IDirect3DVertexBuffer9).
an index buffer is a chunk of contiguous memory that contains index data (IDirect3DIndexBuffer9).

Set the stream source.
setting the stream source hooks up a vertex buffer to a stream that essentially feeds geometry into the rendering pipeline.

once we have created a vertex buffer and, optionally, an index buffer,
we are almost ready to render its contents, but there are three steps that must be taken first.
1) Set the stream source.SetStreamSource.
2) Set the vertex format. SetFVF.
3) Set index buffer.SetIndices.

D3DXCreateTeapot/D3DXCreateBox/D3DXCreateCylinder/D3DXCreateTorus/D3DXCreateSphere.

D3DCOLOR_ARGB/D3DCOLOR_XRGB/D3DCOLORVALUE/
#define D3DCOLOR_XRGB(r,g,b) D3DCOLOR_ARGB(0xff,r,g,b)
typedef struct _D3DCOLORVALUE
{
    float r;
    float g;
    float b;
    float a;
} D3DCOLORVALUE;
each component ranges from 0.0f to 1.0f.

shading occurs during rasterization and specifies
how the vertex colors are used to compute the pixel colors that make up the primitive.

with flat shading, the pixels of a primitive are uniformly colored by the color specified
in the first vertex of the primitive.

with gouraud shading, the colors at each vertex are interpolated linearly across the face of the primitive.

the Direct3D lighting model, the light emitted by a light source consists of three components,or three kinds of light.
ambient light:
this kind of light models light that has reflected off other surfaces and is used to brighten up the overall scene.
diffuse light:
this type of light travels in a particular direction. when it strikes a surface, it reflects equally in all directions.

since diffuse light reflects equally in all directions, the reflected light will reach the eye no matter the viewpoint,
and therefore we do not need to take the viewer into consideration. thus,
the diffuse lighting equation needs only to consider the light direction and the attitude of the surface.

specular light: when it strikes a surface, it reflects harshly in one direction,
causing a bright shine that can only be seen from some angles.

since the light reflects in one direction,
clearly the viewpoint,
in addition to the light direction and surface attitude,
must be taken into consideration in the specular lighting equation.
specular light is used to model the light that produces highlights on such objects:
the bright shines created when light strikes a polished surface.

the material allows us to define the percentage at which light is reflected from the surface.

a face normal is a vector that describes the direction a polygon is facing.

Direct3D needs to know the vertex normals so that it can determine the angle at which light strikes a surface,
and since lighting calculations are done per vertex,
Direct3D needs to know the surface orientation per vertex.

Direct3D supports three types of light sources:

point lights: the light source has a position in world space and emits light in all directions.

directional lights: the light source has no position but shoots parallel rays of light in the specified direction.

spot lights: it has position and shines light through a conical shape in a particular direction.

the cone is characterized by two angles, theta and phi; theta describes an inner cone, and phi describes an outer cone.

texture mapping is a technique that allows us to map image data onto triangles.

D3DFVF_TEX1: our vertex structure contains one pair of texture coordinates.

D3DXCreateTextureFromFile: loads a texture from disk. it can load bmp, dds, dib, jpg, png and tga files.

SetTexture: set the current texture.
Filtering is a technique that Direct3D uses to help smooth out these distortions.
the relevant sampler states are D3DSAMP_MAGFILTER (magnification) and D3DSAMP_MINFILTER (minification).

nearest point sampling:
default filtering method, produces the worst-looking result, but the fastest to compute.
D3DSAMP_MAGFILTER, D3DTEXF_POINT.
D3DSAMP_MINFILTER, D3DTEXF_POINT
linear filtering:
produces fairly good results, and can be fast on today's hardware.
D3DSAMP_MAGFILTER, D3DTEXF_LINEAR.
D3DSAMP_MINFILTER, D3DTEXF_LINEAR.
anisotropic filtering:
provide the best result, but take the longest time to compute.
D3DSAMP_MAGFILTER, D3DTEXF_ANISOTROPIC.
D3DSAMP_MINFILTER, D3DTEXF_ANISOTROPIC.
the anisotropy level (D3DSAMP_MAXANISOTROPY) should also be set; here a maximum level of 4 is used.
the idea behind mipmaps is to take a texture and
create a series of smaller, lower resolution textures,
but customize the filtering for each of these levels so it preserves the detail that is important for us.

the mipmap filter is used to control how Direct3D uses the mipmaps.

D3DTEXF_NONE: Disable mipmapping
D3DTEXF_POINT: Direct3D will choose the level that is closest in size to that triangle.
D3DTEXF_LINEAR: Direct3D will choose two closest levels, filter each level with the min and mag filters,
and linearly combine these two levels to form the final color values.

mipmap chain is created automatically with the D3DXCreateTextureFromFile function if the device supports mipmapping.

blending allows us to blend pixels that
we are currently rasterizing with pixels
that have been previously rasterized to the same location.

in other words, we blend primitives over previously drawn primitives.

the idea of combining the pixel values that are currently being computed(source pixel)
with pixel values previously written(destination pixel) is called blending.

you can enable blending by setting the render state D3DRS_ALPHABLENDENABLE to true.

you can set the source blend factor and destination blend factor by setting D3DRS_SRCBLEND and D3DRS_DESTBLEND.

the default values for the source blend factor and destination blend factor are D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA.

the alpha component is mainly used to specify the level of transparency of a pixel.

In order to make the alpha component describe the level of transparency of each pixel,
we must set the source and destination blend factors to D3DBLEND_SRCALPHA and D3DBLEND_INVSRCALPHA.

we can obtain alpha info from a texture's alpha channel.

the alpha channel is an extra set of bits reserved for each texel that stores an alpha component.

when the texture is mapped to a primitive, the alpha components in the alpha channel are also mapped,
and they become the alpha components for the pixels of the textured primitive.

dds file is an image format specifically designed for DirectX applications and textures.

the stencil buffer is an off-screen buffer that we can use to achieve special effects.
the stencil buffer has the same resolution as the back buffer and depth buffer,
so that the ij-th pixel in the stencil buffer corresponds with the ij-th pixel in the back buffer and depth buffer.

to use the stencil buffer, we enable it with Device->SetRenderState(D3DRS_STENCILENABLE, true/false).
we can clear the stencil buffer with Device->Clear(0, 0, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER | D3DCLEAR_STENCIL, 0xff000000, 1.0f, 0);
this means that we want to clear the stencil buffer as well as the target (back buffer) and the depth buffer.

a stencil buffer can be created at the time that we create the depth buffer.
when specifying the format of the depth buffer,we can specify the format of stencil buffer at the same time.
in actuality, the stencil buffer and depth buffer share the same off-screen surface buffer.
but a segment of memory in each pixel is designated to each particular buffer.

we can use the stencil buffer to block rendering to certain areas of the back buffer.
the decision to block a particular pixel from being written is decided by stencil test.
the test is performed for every pixel.
(ref & mask) ComparisonOperator (value & mask)
ref: application-defined reference value.
mask: application-defined mask value.
value: the pixel in the stencil buffer that we want to test.
if the test evaluates to be false, we block the pixel from being written to the back buffer.
if a pixel isn't written to the back buffer, it isn't written to the depth buffer either.

we can set the stencil reference value by Device->SetRenderState(D3DRS_STENCILREF,0x1122);
we can set the stencil mask value by Device->SetRenderState(D3DRS_STENCILMASK,0x1215);
the default is 0xffffffff, which doesn't mask any bits.

we can not explicitly set the individual stencil values, but recall that we can clear the stencil buffer.
in addition, we can use the stencil render state to control what's written to the stencil buffer.

the comparison operation can be any member of the D3DCMPFUNC enumerated type.

in addition to deciding whether to write or block a particular pixel from being written to the back buffer,
we can specify how the stencil buffer should be updated:
Device->SetRenderState(D3DRS_STENCILFAIL,StencilOperation).
 
we can set a write mask that will mask off bits of any value that we want to write in the stencil buffer;
we set the state D3DRS_STENCILWRITEMASK.

the stencil buffer allows us to block rendering to certain areas on the back buffer.

we can use the stencil buffer to block the rendering of the reflected teapot if it is not being rendered into the mirror.

parallel light shadow.
r(t) = p + tL (1)
n.p + d = 0  (2)
the set of intersection points found by
shooting r(t) through each of the object's vertices with the plane
defines the geometry of the shadow.
solving (1) and (2) for the intersection gives:
s = p + [(-d - n.p)/(n.L)]L
L: the direction of the parallel light rays.

point light shadow.
r(t) = p + t(p - L) (1)
n.p + d = 0         (2)
the set of intersection points found by
shooting r(t) through each of the object's vertices with the plane
defines the geometry of the shadow.
L: the position of the point light.

the shadow matrix can be obtained from D3DXMatrixShadow.
using stencil buffer, we can prevent writing overlapping pixels and therefore avoid double blending artifacts.

ID3DXFont is used to draw text in a Direct3D application.

we can create an ID3DXFont interface using the D3DXCreateFontIndirect function.
also we can use D3DXCreateFont function to obtain a pointer to an ID3DXFont interface.

the ID3DXFont and CFont samples for this chapter compute and display the frames rendered per second(fps).

CD3DFont can be a simple alternative for fonts, though it doesn't support complex formats and font types.
to use the CD3DFont class, we should include the d3dfont, d3dutil and dxutil header/source files.

a CD3DFont object is created with its constructor,
and we can then use its member functions, such as DrawText.

D3DXCreateText can also create text; it builds a 3D mesh of a text string.

the ID3DXBaseMesh interface contains a vertex buffer that stores the vertices of the mesh and an index buffer
that defines how these vertices are put together to form the triangles of the mesh.
GetVertexBuffer,GetIndexBuffer.
also there are these related functions:LockVertexBuffer/LockIndexBuffer, UnlockVertexBuffer/UnlockIndexBuffer.

GetFVF/GetNumVertices/GetNumBytesPerVertex/GetNumFaces.

a mesh consists of one or more subsets.
a subset is a group of triangles in the mesh that can all be rendered using the same attribute.
by attribute we mean material, texture, and render states.
each triangle in the mesh is given an attribute ID that specifies the subset in which the triangle lives.

the attribute IDs for the triangles are stored in the mesh's attribute buffer, which is a DWORD array.
since each face has an entry in the attribute buffer,
the number of elements in the attribute buffer is equal to the number of faces in the mesh.
the entries in the attribute buffer and the triangles defined in the index buffer have a one-to-one correspondence.
that is, entry i in the attribute buffer corresponds with triangle i in the index buffer.
we can access the attribute buffer with the LockAttributeBuffer and UnlockAttributeBuffer methods.

the ID3DXMesh interface provides the DrawSubset(DWORD AttribId) method to
draw the triangles of the particular subset specified by the AttribId argument.
when we want to optimize the mesh, we can use the OptimizeInplace method.
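since each face's attribute ID selects its subset, a subset draw amounts to rendering only the faces whose attribute buffer entry matches the given ID. a portable sketch of that selection (illustrative, not D3DX code):

```cpp
#include <cstddef>
#include <vector>

// Count the faces belonging to one subset, the way DrawSubset selects them:
// entry i of the attribute buffer corresponds to triangle i of the index buffer.
std::size_t countFacesInSubset(const std::vector<unsigned long>& attributeBuffer,
                               unsigned long attribId) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < attributeBuffer.size(); ++i)
        if (attributeBuffer[i] == attribId) ++n;
    return n;
}
```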

// get the adjacency of the non-optimized mesh.
DWORD* adjacencyInfo = new DWORD[Mesh->GetNumFaces() * 3];
Mesh->GenerateAdjacency(0.0f, adjacencyInfo);

// array to hold the optimized adjacency info.
DWORD* optimizedAdjacencyInfo = new DWORD[Mesh->GetNumFaces() * 3];
Mesh->OptimizeInplace(
 D3DXMESHOPT_ATTRSORT |
 D3DXMESHOPT_COMPACT |
 D3DXMESHOPT_VERTEXCACHE,
 adjacencyInfo,
 optimizedAdjacencyInfo,
 0,
 0);

a similar method is the Optimize method,
which outputs an optimized version of the calling mesh object rather than actually optimizing the calling mesh object.

when a mesh is optimized with the D3DXMESHOPT_ATTRSORT flag,
the geometry of the mesh is sorted by its attribute
so that the geometry of a particular subset exists as a contiguous block in the vertex/index buffers.

In addition to sorting the geometry,
the D3DXMESHOPT_ATTRSORT optimization builds an attribute table.
the attribute table is an array of D3DXATTRIBUTERANGE structures.

Each entry in the attribute table corresponds to a subset of the mesh and
specifies the block of memory in the vertex/index buffers,
where the geometry for the subset resides.

to access the attribute table of a mesh, we can use GetAttributeTable method.
the method can return the number of attributes in the attribute table or
it can fill an array of D3DXATTRIBUTERANGE structures with the attribute data.
to get the number of elements in the attribute table, we pass in 0 for the first argument:
DWORD numSubsets  = 0;
Mesh->GetAttributeTable(0,&numSubsets);
once we know the number of elements, we can fill a D3DXATTRIBUTERANGE array with the actual attribute table by writing:
D3DXATTRIBUTERANGE* table = new D3DXATTRIBUTERANGE[numSubsets];
Mesh->GetAttributeTable(table,&numSubsets);
we can also set the attribute table directly with the SetAttributeTable method.

the adjacency array is a DWORD array, where each entry contains an index identifying a triangle in the mesh.

GenerateAdjacency can also output the adjacency info.
DWORD adjacencyInfo[Mesh->GetNumFaces()*3];
Mesh->GenerateAdjacency(0.001f,adjacencyInfo);

sometimes we need to copy the data from one mesh to another.
this is accomplished with the ID3DXBaseMesh::CloneMeshFVF method.
this method allows the creation options and flexible vertex format of the destination mesh to be different from those of the source mesh.
for example:
ID3DXMesh* clone = 0;
Mesh->CloneMeshFVF(
Mesh->GetOptions(),
D3DFVF_XYZ|D3DFVF_NORMAL,
Device,
&clone);

we can also create an empty mesh using the D3DXCreateMeshFVF function.
by empty mesh, we mean that we specify the number of faces and vertices that we want the mesh to be able to hold.
then D3DXCreateMeshFVF allocates the appropriately sized vertex, index, and attribute buffers. once we have the mesh's buffers allocated, we manually fill in the mesh's data contents;
that is, we must write the vertices, indices, and attributes to the vertex buffer, index buffer, and attribute buffer, respectively.

alternatively, you can create an empty mesh with the D3DXCreateMesh function.

ID3DXBuffer interface is a generic data structure that D3DX uses to store data in a contiguous block of memory.
GetBufferPointer: return a pointer to the start of the data.
GetBufferSize: return the size of the buffer in bytes.

load a x file: D3DXLoadMeshFromX.

D3DXComputeNormals generates the vertex normals for any mesh by using normal averaging.
if adjacency information is provided, then duplicated vertices are disregarded.
if adjacency info is not provided,
then duplicated vertices have normals averaged from the faces that reference them.
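normal averaging itself is simple: each vertex normal is the normalized sum of the (unnormalized) face normals of the triangles that reference the vertex. a sketch of the idea in plain C++ (illustrative names, not the D3DX implementation):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct V3 { float x, y, z; };

static V3 cross(const V3& a, const V3& b) {
    return V3{a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

// Accumulate each triangle's face normal into its three vertices, then normalize.
std::vector<V3> averageNormals(const std::vector<V3>& verts,
                               const std::vector<unsigned>& indices) {
    std::vector<V3> normals(verts.size(), V3{0, 0, 0});
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const V3 &p0 = verts[indices[i]], &p1 = verts[indices[i+1]], &p2 = verts[indices[i+2]];
        V3 e1{p1.x-p0.x, p1.y-p0.y, p1.z-p0.z};
        V3 e2{p2.x-p0.x, p2.y-p0.y, p2.z-p0.z};
        V3 fn = cross(e1, e2);                      // unnormalized face normal
        for (int k = 0; k < 3; ++k) {
            V3& n = normals[indices[i+k]];
            n.x += fn.x; n.y += fn.y; n.z += fn.z;  // accumulate per vertex
        }
    }
    for (std::size_t v = 0; v < normals.size(); ++v) {  // normalize the sums
        V3& n = normals[v];
        float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```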

ID3DXPMesh allows us to simplify a mesh by applying a sequence of edge collapse transformations (ECTs).
each ECT removes one vertex and one or two faces.
because each ECT is invertible(its inverse is called a vertex split),
we can reverse the simplification process and restore the mesh to its exact original state.

without simplification, we would end up spending time rendering a high-triangle-count model when a simpler low-triangle-count model would suffice.
we can create an ID3DXPMesh object using the D3DXGeneratePMesh function.

the attribute weights are used to determine the chance that a vertex is removed during simplification.
the higher a vertex weight, the less chance it has of being removed during simplification.

one way that we can use progressive meshes is to adjust the LOD (level of detail) of a mesh based on its distance from the camera.

the vertex weight structure allows us to specify a weight for each possible component of a vertex.

bounding boxes/spheres are often used to speed up visibility tests and collision tests, among other things.

a more efficient approach would be to compute the bounding box/sphere of each mesh and then do one ray/box or ray/sphere intersection test per object.
we can then say that the object is hit if the ray intersected its bounding volume.
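a ray/sphere test like the one described can be done by checking the discriminant of the quadratic that results from substituting the ray into the sphere equation. a portable sketch (plain C++, illustrative names, not the D3DX intersection helpers):

```cpp
#include <cmath>

struct Vec { float x, y, z; };
static float dotv(const Vec& a, const Vec& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Ray p(t) = p0 + t*u (u unit length) against sphere |p - c| = r.
// Substituting gives t^2 + 2(u.m)t + (m.m - r^2) = 0 with m = p0 - c;
// the ray hits if the discriminant is non-negative and the sphere is not behind it.
bool rayIntersectsSphere(const Vec& p0, const Vec& u, const Vec& c, float r) {
    Vec m{p0.x - c.x, p0.y - c.y, p0.z - c.z};
    float b  = dotv(m, u);
    float cc = dotv(m, m) - r * r;
    if (cc > 0.0f && b > 0.0f) return false;   // origin outside and pointing away
    float disc = b * b - cc;
    return disc >= 0.0f;
}
```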

since the right,up,and look vectors define the camera's orientation in the world, we sometimes refer to all three as the orientation vectors. the orientation vectors must be orthonormal.
a set of vectors is orthonormal if they are mutually perpendicular to each other and of unit length.

an orthogonal matrix has the property that its inverse equals its transpose.

each time this function is called, we recompute the up and right vectors with respect to the look vector to ensure that they are mutually orthogonal to each other.

pitch, or rotate the up and look vectors around the camera's right vector.
Yaw, or rotate the look and  right vectors round  the camera's up vector.
Roll, or rotate the up and right vectors around the camera's look vector.

walking means moving in the direction that we are looking(along the look vector).
strafing is moving side to side from the direction we are looking, which is of course moving along the right vector.
flying is moving along the up vector.
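walking, strafing, and flying are all the same operation on the camera position: scale one orientation vector by the distance and add it. a sketch under the conventions above (plain C++; the LANDOBJECT constraint keeps a first-person camera from flying by looking up and walking):

```cpp
#include <cmath>

struct V { float x, y, z; };

// Move the position along 'look'; in land-object mode the movement is
// constrained to the xz-plane so walking never changes altitude.
void walk(V& pos, const V& look, float units, bool landObject) {
    V dir = look;
    if (landObject) {                 // project the look vector onto the xz-plane
        dir.y = 0.0f;
        float len = std::sqrt(dir.x*dir.x + dir.z*dir.z);
        if (len > 0.0f) { dir.x /= len; dir.z /= len; }
    }
    pos.x += dir.x * units;
    pos.y += dir.y * units;
    pos.z += dir.z * units;
}
```

strafing and flying are the same update with the right or up vector in place of look.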

the AIRCRAFT model allows us to move freely through space and gives us six degrees of freedom.
however, in some games, such as first-person shooters, people can't fly.

a heightmap is an array where each element specifies the height of  a particular vertex in the terrain grid.
one of the possible graphical representations of a heightmap is a grayscale map, where darker values reflect portions of the terrain with lower altitude and lighter values reflect portions of the terrain with higher altitude.

a particle is a very small object that is usually modeled as a point mathematically.
programmers traditionally used a billboard to display a particle; a billboard is a quad whose world matrix orients it so that it always faces the camera.
 
Direct3D 8.0 introduced a special point primitive called a point sprite that is most applicable to particle systems.
point sprites can have textures mapped to them and can change size, and we can describe a point sprite by a single point. this saves memory and processing time because we only have to store and process one vertex instead of the four needed for a billboard (quad).
we can add a field to the particle vertex structure to specify the size of the particle with the flag D3DFVF_PSIZE.

the behavior of the point sprites is largely controlled through render states.

the formula below is used to calculate the final size of a point sprite based on its distance and these constants:
FinalSize = ViewportHeight * Size * sqrt(1 / (A + B*D + C*D^2))
FinalSize: the final size of the point sprite after the distance calculations.
ViewportHeight: the height of the viewport.
Size: corresponds to the value specified by the D3DRS_POINTSIZE render state.
A, B, C: correspond to the values specified by D3DRS_POINTSCALE_A, D3DRS_POINTSCALE_B,
and D3DRS_POINTSCALE_C.
D: the distance of the point sprite in view space from the camera's position. since the camera is positioned at the origin in view space, D = sqrt(x^2 + y^2 + z^2), where (x, y, z) is the position of the point sprite in view space.
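the size computation can be checked numerically; a direct transcription of the formula in plain C++ (illustrative, not driver code):

```cpp
#include <cmath>

// FinalSize = ViewportHeight * Size * sqrt(1 / (A + B*D + C*D^2))
float pointSpriteFinalSize(float viewportHeight, float size,
                           float A, float B, float C, float D) {
    return viewportHeight * size * std::sqrt(1.0f / (A + B*D + C*D*D));
}
```

note that with A = 1 and B = C = 0 the distance has no effect and FinalSize is simply ViewportHeight * Size.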

the attributes of a particle are specific to the particular kind of particle system that we are modeling.
the particle system is responsible for updating, displaying, killing, and creating particles.
we use the D3DUSAGE_POINTS usage flag when creating the vertex buffer to specify that it will hold point sprites.
we use D3DUSAGE_DYNAMIC when creating the vertex buffer because we need to update our particles every frame.

therefore, once we compute the picking ray, we can iterate through each object in the scene and test if the ray intersects it. the object that the ray intersects is the object that was picked by the user.

when using a picking algorithm, we need to know which object was picked and its location in 3D space.

screen to projection window transform:
the first task is to transform the screen point to the projection window.
the viewport transformation matrix is:
[ Width/2        0             0            0 ]
[ 0             -Height/2      0            0 ]
[ 0              0             MaxZ - MinZ  0 ]
[ X + Width/2    Y + Height/2  MinZ         1 ]
transforming a point p = (px, py, pz) on the projection window by the viewport transformation yields the screen point s = (sx, sy):
sx = px(Width/2) + X + Width/2
sy = -py(Height/2) + Y + Height/2
recall that the z-coordinate after the viewport transformation is not stored as part of the 2D image but is stored in the depth buffer.

assuming the X and Y members of the viewport are 0, letting P be the projection matrix, and noting that entries P00 and P11 of the projection matrix scale the x and y coordinates of a point, we get:
px = (2*sx/ViewportWidth - 1)(1/P00)
py = (-2*sy/ViewportHeight + 1)(1/P11)
pz = 1
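these three equations transcribe directly into code; a portable sketch (plain C++; P00 and P11 are the first two diagonal entries of the projection matrix):

```cpp
struct PWPoint { float x, y, z; };

// Transform a screen point (sx, sy) back onto the projection window,
// assuming the viewport X and Y members are 0.
PWPoint screenToProjectionWindow(float sx, float sy,
                                 float viewportW, float viewportH,
                                 float P00, float P11) {
    PWPoint p;
    p.x = ( 2.0f * sx / viewportW - 1.0f) / P00;
    p.y = (-2.0f * sy / viewportH + 1.0f) / P11;
    p.z = 1.0f;
    return p;
}
```

the center of the screen always maps to (0, 0, 1), regardless of the projection matrix.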
computing the picking ray
recall that a ray can be represented by the parametric equation p(t) = p0 + tu, where p0 is the origin of the ray describing its position and u is a vector describing its direction.
transforming rays:
in order to perform a ray-object intersection test, the ray and the objects must be in the same coordinate system. rather than transform all the objects into view space, it's often easier to transform the picking ray into world space or even an object's local space.
D3DXVec3TransformCoord : transform points.
D3DXVec3TransformNormal: transform vectors.
for each object in the scene, iterate through its triangle list and test if the ray intersects one of the triangles; if it does, it must have hit the object that the triangle belongs to.
the picking ray may intersect multiple objects; however, the object closest to the camera is the object that was picked, since the closer object would have obscured the object behind it.
HLSL.
we write our shaders in Notepad and save them as regular ASCII text files. then we use the D3DXCompileShaderFromFile function to compile our shaders.
the special colon syntax denotes a semantic, which is used to specify the usage of the variable. this is similar to the flexible vertex format (FVF) of a vertex structure.

as with a C++ program, every HLSL program has an entry point.
posted @ 2008-10-30 22:56 jolley 阅读(1749) | 评论 (2)编辑 收藏

bug01: the width/height used when creating the window did not match the back buffer width/height used when initializing D3D, causing CreateDevice to return D3DERR_INVALIDCALL.
bug02:
static LRESULT CALLBACK WindowProc(HWND window, UINT msg, WPARAM wParam, LPARAM lParam);  // the callback function
wnd.lpfnWndProc = WindowProc;
the reason WindowProc must be static: a non-static member function carries an implicit this pointer, so it cannot be converted to a WNDPROC. without static, the compiler reports:
error C3867: 'WinWrapper::WindowProc': function call missing argument list; use '&WinWrapper::WindowProc' to create a pointer to member
e:\dx beginner\d3dinit\d3dinit\winwrapper.cpp(46) : error C2440: '=' : cannot convert from 'LRESULT (__stdcall WinWrapper::* )(HWND,UINT,WPARAM,LPARAM)' to 'WNDPROC'
bug03: D3DCOLOR_XRGB(255.0f, 0.0f, 0.0f) produces the error
error C2296: '&' : illegal, left operand has type 'float'
because the macro applies bitwise operators, which are only legal on integer types; passing integers instead of floats fixes it.
bug04:

Direct3D9: (ERROR) :Current vertex shader declaration doesn't match VB's FVF
this comes from using mismatched FVFs. in my case the project used two vertex buffers with different FVFs: all the setup before rendering was based on buffer A, but rendering used buffer B, which produced this error.

bug05:
many times the primitives were drawn successfully but simply didn't show up. after long debugging, it turned out to be a camera position problem.
 // set the camera position and related information
 D3DXMATRIX matCamera;
 D3DXVECTOR3 eye(-10.0f, 3.0f, -15.0f);   // camera position (eye)
 D3DXVECTOR3 lookAt(0.0f, 0.0f, 0.0f);    // the position the camera looks at
 D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);        // the camera's up vector
 D3DXMatrixLookAtLH(&matCamera,
  &eye,
  &lookAt,
  &up);
 pD3DDevice->SetTransform(D3DTS_VIEW, &matCamera);
this function matters a lot; often adjusting eye is all it takes.
the same issue exists for light sources: if the light's direction and position are not set properly, you may only see the back of the object; after readjusting them, the object appears as intended.

DirectInput.
DirectInput uses hooks internally, and hooks act directly on Windows messages, which brings unnecessary trouble; using the Win32 API or Windows messages directly avoids this and may even be simpler. DirectInput has several other problems:
1) it creates an extra thread just to read raw keyboard data (which you could read yourself with Win32);
2) it doesn't honor the keyboard repeat rate the user set in the control panel;
3) it doesn't handle uppercase or shifted characters — you must check whether Caps Lock is on and handle the base keys yourself;
4) it doesn't support non-English keyboard mappings;
5) it doesn't support input method editors (e.g., for Chinese);
6) it doesn't support accessibility keyboards and other devices that need special drivers, such as voice control.

many developers abroad avoid DirectInput and use Windows messages or Win32 APIs such as GetKeyboardState and the like instead.

this is the approach one should take when choosing between DirectInput and Windows messages.

hit an error today: freeing memory failed with
DAMAGE: after normal block(#78493) at 0x015EBADB.
it turned out the allocated block was too small (the code wrote past its end); enlarging the allocation fixed it.

tweening:Short for in-betweening, the process of generating intermediate frames between two images to give the appearance that the first image evolves smoothly into the second image. Tweening is a key process in all types of animation, including computer animation. Sophisticated animation software enables you to identify specific objects in an image and define how they should move and change during the tweening process.

posted @ 2008-09-19 01:15 jolley 阅读(477) | 评论 (0)编辑 收藏

Index buffers: build an index ordering and encode all the drawn primitives by it. Two benefits:
1) indexing reduces the number of vertices to render, which improves performance;
2) the indices enable caching: vertices recently transformed and lit can be cached and reused the next time they are needed without being processed again. The GPU has no way to know that the b and c in abc and bcd are the same vertices except through indices. The general approach is to minimize the amount of data, by choosing the primitive type and the layout of the vertex structure (SetFVF, SetVertexDeclaration).
On primitive types (two triangles sharing an edge):
1) triangle list:
abc bcd
2) triangle strip:
abcd
3) index buffer + triangle list:
vertices abcd
indices 012, 123
4) index buffer + triangle strip:
vertices abcd
indices 0123

The vertex structure and index layout depend on the chosen primitive type — the vertex setup for a triangle strip differs from that for a triangle list — and the indices are numbered in clockwise winding order (in DX).
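The vertex-count savings in cases 1–4 above are easy to verify; a tiny sketch (plain C++, counting vertices sent for n triangles arranged as a strip):

```cpp
#include <cstddef>

// Vertices sent to the pipeline for n edge-sharing triangles:
std::size_t triangleListVertexCount(std::size_t n)  { return 3 * n; }  // abc bcd ...
std::size_t triangleStripVertexCount(std::size_t n) { return n + 2; }  // abcd ...
```

For two triangles this is 6 vertices as a list but only 4 as a strip (or 4 unique vertices with an index buffer), which is where the savings described above come from.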

When rendering a model's vertices, if the vertex layout or the way the vertices should be rendered is unclear, that information can be obtained from the artists. Earlier, when working on an aircraft, I spent a lot of effort without ever getting the model built, before realizing the effort didn't belong there: it belongs in importing the model. The artists can provide the detailed vertex information — the exact vertex structure, the layout of vertices in the exported model, and so on — and the program only needs to know the vertex structure. Another point: when studying vertices, it helps to analyze them inside an actual model rather than designing vertices in isolation, which avoids a lot of confusion.

The camera coordinate system is defined over the camera's visible screen area. In camera space, the x-axis points right, the z-axis points forward (into the screen, along the camera direction), and the y-axis points up (not the world's up, but the camera's up).

To simplify the transform between world space and object space, a new coordinate system called the inertial coordinate system is introduced — a halfway point between the two. The inertial frame's origin coincides with the object space origin, but its axes are parallel to the world space axes. The inertial frame serves as an intermediary between object space and world space: a rotation transforms object space into the inertial frame, and a translation transforms the inertial frame into world space.
Steps to transform object space into world space (for an object rotated 45 degrees clockwise and displaced down and to the right):
1) rotate the object's axes 45 degrees clockwise, into the inertial frame;
2) translate the inertial frame down and to the right, into world space.

Nested coordinate systems define a hierarchical, tree-like set of coordinate frames; the world coordinate system is the root of the tree.

For many vectors we only care about the direction, not the magnitude; in such cases using unit vectors is important (D3DXVec3Normalize).
Generally speaking, the dot product describes how similar two vectors are. It equals the product of the vectors' magnitudes and the cosine of the angle between them.
Its geometric meaning: the length of a times the length of b's projection onto a (or the length of b times the length of a's projection onto b). It is a scalar, can be positive or negative, and is 0 for mutually perpendicular vectors.

The cross product produces a vector perpendicular to the plane containing the two input vectors (D3DXVec3Cross).
Its most important application is creating vectors perpendicular to planes, triangles, and polygons.

Define p, q, r as the unit vectors along the x, y, and z axes. Then any vector can be expressed as v = xp + yq + zr; p, q, r are called basis vectors — here they form the Cartesian basis.
A coordinate system can be represented by any three basis vectors, provided they are linearly independent.

Each row of a matrix can be interpreted as a transformed basis vector.
So we can visualize a matrix by imagining the basis vectors of the transformed coordinate system: these basis vectors form an 'L' shape in 2D and a tripod shape in 3D.

// regenerate the basis vectors
D3DXVec3Normalize(&vLook, &vLook);      // normalize to get the look direction
D3DXVec3Cross(&vRight, &vUp, &vLook);   // vector perpendicular to the up/look plane
D3DXVec3Normalize(&vRight, &vRight);    // normalize to get the right direction
D3DXVec3Cross(&vUp, &vLook, &vRight);   // vector perpendicular to the look/right plane
D3DXVec3Normalize(&vUp, &vUp);          // normalize to get the up direction

// Matrices for pitch, yaw and roll
// each rotation matrix is built from a normalized axis and an angle.
D3DXMATRIX matPitch, matYaw, matRoll;
D3DXMatrixRotationAxis(&matPitch, &vRight, fPitch);
D3DXMatrixRotationAxis(&matYaw, &vUp, fYaw);
D3DXMatrixRotationAxis(&matRoll, &vLook, fRoll);

// rotate the LOOK & RIGHT vectors about the UP vector
// transform a 3D vector by a matrix.
D3DXVec3TransformCoord(&vLook, &vLook, &matYaw);
D3DXVec3TransformCoord(&vRight, &vRight, &matYaw);

// rotate the LOOK & UP vectors about the RIGHT vector
D3DXVec3TransformCoord(&vLook, &vLook, &matPitch);
D3DXVec3TransformCoord(&vUp, &vUp, &matPitch);

// rotate the RIGHT & UP vectors about the LOOK vector
D3DXVec3TransformCoord(&vRight, &vRight, &matRoll);
D3DXVec3TransformCoord(&vUp, &vUp, &matRoll);

D3DXVECTOR3 *WINAPI D3DXVec3TransformCoord(      

    D3DXVECTOR3 *pOut,
    CONST D3DXVECTOR3 *pV,
    CONST D3DXMATRIX *pM
);
The principle: pOut' = pV' * pM, where pM is a 4x4 matrix, so pV' = [pV 1]; the resulting vector pOut' is then divided through by its w component to give pOut = [pOut'.x/w, pOut'.y/w, pOut'.z/w].
D3DXVec3TransformNormal works much the same way, except that w is set to 0, so the translation part of the matrix has no effect.
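The w-divide described above can be sketched in portable C++ (illustrative; a row vector times a row-major 4x4 matrix, matching the D3DX convention):

```cpp
struct TVec3 { float x, y, z; };

// out = [v 1] * M, then divide by the resulting w (the math of D3DXVec3TransformCoord).
TVec3 transformCoord(const TVec3& v, const float M[4][4]) {
    float x = v.x*M[0][0] + v.y*M[1][0] + v.z*M[2][0] + M[3][0];
    float y = v.x*M[0][1] + v.y*M[1][1] + v.z*M[2][1] + M[3][1];
    float z = v.x*M[0][2] + v.y*M[1][2] + v.z*M[2][2] + M[3][2];
    float w = v.x*M[0][3] + v.y*M[1][3] + v.z*M[2][3] + M[3][3];
    return TVec3{x / w, y / w, z / w};
}
```

With an affine matrix (last column 0,0,0,1), w stays 1 and the divide is a no-op; it only matters for projective matrices.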

// set the camera matrix: position and orientation.
static D3DXVECTOR3 vCameraLook = D3DXVECTOR3(0.0f, 0.0f, 1.0f);
static D3DXVECTOR3 vCameraUp = D3DXVECTOR3(0.0f, 1.0f, 0.0f);
static D3DXVECTOR3 vCameraPos = D3DXVECTOR3(0.0f, 0.0f, -5.0f);
D3DXMATRIX view;

D3DXMatrixLookAtLH(&view, &vCameraPos, // pEye = Position
                   &vCameraLook,       // pAt
                   &vCameraUp);        // pUp
m_pd3dDevice->SetTransform(D3DTS_VIEW, &view);
POSITION: defines the object's position.
LOOK: defines the direction the object faces.
RIGHT: defines the object's rightward direction.
UP: only needed when the object can rotate around its LOOK vector; it indicates which way is "up" or "down" for the object.
pitch - around RIGHT
roll - around LOOK
yaw - around UP
moving along LOOK - changes POSITION.

  m_pd3dDevice->SetTransform(D3DTS_WORLD, &m_pObjects[0].matLocal);

  // set the texture to render with.
  m_pd3dDevice->SetTexture(0, m_pTexture);
  m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
  m_pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);

  // Passing an FVF to IDirect3DDevice9::SetFVF specifies a legacy FVF with stream 0.
  // set the vertex format
  m_pd3dDevice->SetFVF(FVF);
  // bind the vertex buffer to a device data stream.
  m_pd3dDevice->SetStreamSource(0, m_pVB, 0, sizeof(VERTEX));
  // set the index data
  m_pd3dDevice->SetIndices(m_pIB);
  // draw
  m_pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
           0,
           0,
           16,  // number of vertices
           0,
           10); // number of primitives

Before rotating the vectors, they must be re-normalized, to keep them mutually perpendicular.
D3DXQuaternionRotationYawPitchRoll: builds a quaternion from the given yaw, pitch, and roll.
D3DXMatrixRotationQuaternion: builds a rotation matrix from a quaternion.

Qx = [cos(pitch/2), (sin(pitch/2), 0, 0)]
Qy = [cos(yaw/2), (0, sin(yaw/2), 0)]
Qz = [cos(roll/2), (0, 0, sin(roll/2))]

D3DXMatrixLookAtLH is very helpful for building a follow camera.

Rotation can also be performed using quaternions:
 fRoll = fPitch = fYaw = 0.0f;
 D3DXVECTOR3 vPos(0.0f, 0.0f, 0.0f);
 static D3DXMATRIX matView = D3DXMATRIX(1.0f, 0.0f, 0.0f, 0.0f,
                                        0.0f, 1.0f, 0.0f, 0.0f,
                                        0.0f, 0.0f, 1.0f, 0.0f,
                                        0.0f, 0.0f,-5.0f, 1.0f);
 // update the position and view matrix
 D3DXMATRIX matR, matTemp;
 // build a quaternion from yaw/pitch/roll.
 D3DXQuaternionRotationYawPitchRoll(&qR, fYaw, fPitch, fRoll);
 // build a rotation matrix from the quaternion
 D3DXMatrixRotationQuaternion(&matR, &qR);
 // apply the rotation matrix
 D3DXMatrixMultiply(&matView, &matR, &matView);
 // build the translation matrix
 D3DXMatrixTranslation(&matTemp, vPos.x, vPos.y, vPos.z);
 // apply the translation matrix
 D3DXMatrixMultiply(&matView, &matTemp, &matView);
 // invert to turn the camera's world transform into a view matrix.
 D3DXMatrixInverse(&matTemp, NULL, &matView);

 m_pd3dDevice->SetTransform(D3DTS_VIEW, &matTemp);

In a windowed application the viewport size is defined as the size of the window's client area, while in a full-screen application it is defined as the screen resolution.
Viewport usage: viewport data can be retrieved with GetViewport; fill a viewport structure with the viewport size and the MinZ/MaxZ values supplied to the depth buffer. Set the viewport with SetViewport before the DrawPrimitive* calls. After drawing, restore the original viewport, so that the whole render target can be cleared in one pass and text can be drawn through the font class provided by the Direct3D framework.

The most efficient way to render a scene is to render only the pixels the viewer can see; rendering pixels that cannot be seen is redundant work called overdraw.
The depth buffer stores depth information for every pixel on the screen. Before the scene is rendered, every pixel in the depth buffer should be cleared to the farthest possible depth value. During rasterization, the depth-buffer algorithm computes the depth of each pixel covered by the current polygon; if a pixel is closer to the camera than the one previously stored in the depth buffer, the nearer pixel is displayed and its depth value overwrites the old one. This is done for every pixel of every polygon drawn.

The color buffer stores the contents that will later be drawn to the screen. Each depth buffer entry is usually 16 or 24 bits; depth precision depends on the bit count of the depth buffer.
W-buffer: reduces the problems the Z-buffer has with distant objects. It is enabled with:
m_pd3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_USEW);
and support is checked with
if (d3dCaps.RasterCaps & D3DPRASTERCAPS_WBUFFER) ...

How do we rotate the camera with quaternions?
Build a quaternion from yaw, pitch, and roll, then convert it into a matrix, and finally take the inverse of that matrix.

Only square matrices can be inverted, so when we speak of inverting a matrix it is a square matrix;
and not every square matrix has an inverse.

A plane can be represented by a normal vector n and a constant d. To classify a point p against the plane:

    If n·p + d = 0, the point p lies on the plane.

    If n·p + d > 0, the point p is in front of the plane, in its positive half-space.

    If n·p + d < 0, the point p is behind the plane, in its negative half-space.

Ways to create a plane:
1) from a point and a normal: D3DXPlaneFromPointNormal;
2) from three points p0, p1, p2 on the plane: D3DXPlaneFromPoints.
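The classification rules above can be written down directly; a portable sketch (plain C++, illustrative names, not the D3DX plane helpers):

```cpp
#include <cmath>

struct PVec { float x, y, z; };

// n.p + d: 0 => on the plane, >0 => front half-space, <0 => back half-space.
int classifyPoint(const PVec& n, float d, const PVec& p, float eps = 1e-6f) {
    float s = n.x*p.x + n.y*p.y + n.z*p.z + d;
    if (std::fabs(s) < eps) return 0;
    return s > 0.0f ? 1 : -1;
}
```

For the plane y = 0 (n = (0,1,0), d = 0), points above classify as front and points below as back.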


http://www.cppblog.com/shadow/articles/2807.html
http://www.cppblog.com/lovedday/archive/2008/04/04/46264.html

posted @ 2008-09-15 08:32 jolley 阅读(582) | 评论 (0)编辑 收藏

A const member function may only perform const-safe operations on the object's members.
struct StringLess:
 public std::binary_function<const std::string&,
        const std::string&,
        bool>
{
 bool operator()(const std::string& a, const std::string& b)const
 {
  return strcmp(a.c_str(), b.c_str()) < 0; // must be a strict less-than; strcmp alone is true for any non-equal pair
 }
};

std::map<std::string,Core::Rtti*,StringLess> nameTable;
 Core::Rtti* Factory::GetRttiName(std::string className)const
 {
  return this->nameTable[className];
 }
But this produced an error:
g:\framework\foundation\foundation\core\factory.cpp(60) : error C2678: binary '[' : no operator found which takes a left-hand operand of type 'const std::map<_Kty,_Ty,_Pr>' (or there is no acceptable conversion)
        with
        [
            _Kty=std::string,
            _Ty=Core::Rtti *,
            _Pr=StringLess
        ]
        e:\microsoft visual studio 8\vc\include\map(166): could be 'Core::Rtti *&std::map<_Kty,_Ty,_Pr>::operator [](const std::basic_string<_Elem,_Traits,_Ax> &)'
        with
        [
            _Kty=std::string,
            _Ty=Core::Rtti *,
            _Pr=StringLess,
            _Elem=char,
            _Traits=std::char_traits<char>,
            _Ax=std::allocator<char>
        ]
        while trying to match the argument list '(const std::map<_Kty,_Ty,_Pr>, std::string)'
        with
        [
            _Kty=std::string,
            _Ty=Core::Rtti *,
            _Pr=StringLess
        ]
The root cause is misuse of const member functions — using const without being clear about what a const function is allowed to do.
operator[] cannot be called on a const map:
map::operator[] is not a const member (it inserts a default-constructed value when the key is missing).
So the error goes away once the const is removed from the function (or once the lookup uses find instead).
To summarize const, which I have misused before:
A const return value means the returned content is read-only and must not be modified.
A const parameter means the argument is read-only: it participates in the computation but is never modified.
A const member function may only perform operations that do not modify the object; the code above is a good example of getting this wrong.
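The const-safe way to do the lookup inside a const member function is map::find, which never inserts. A minimal sketch of the fix (illustrative types standing in for the Rtti map):

```cpp
#include <map>
#include <string>

struct Rtti { int id; };

class Factory {
public:
    // const-correct lookup: map::find works on a const map, operator[] does not.
    const Rtti* GetRttiByName(const std::string& className) const {
        std::map<std::string, const Rtti*>::const_iterator it = nameTable.find(className);
        return it == nameTable.end() ? 0 : it->second;
    }
    void Register(const std::string& name, const Rtti* r) { nameTable[name] = r; }
private:
    std::map<std::string, const Rtti*> nameTable;
};
```

This keeps GetRttiByName const and also avoids silently inserting a null entry for an unknown class name, which operator[] would do.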
posted @ 2008-09-03 10:20 jolley 阅读(1605) | 评论 (0)编辑 收藏

doxygen is a tool that helps improve project structure and related refactoring; it can produce class-relationship diagrams and function call graphs for a project.
You mainly need the following:
1) doxygen; 2) Graphviz, the graph-visualization software; 3) iconv, for Chinese encoding conversion.
After installing these, open the doxygen UI and choose Expert to configure it. The settings include: 1) Project — the project name, version, and output directory (this determines the title shown on the first page of the CHM); 2) Build — which elements to show, e.g., EXTRACT_ALL shows all program elements (classes, functions, variables) and EXTRACT_PRIVATE shows private members; 3) Messages — WARN_LOGFILE names a log file for errors; subsequent build messages can be found there; 4) Input — the source directories of the project to document; 5) Source Browser — whether the source code can be browsed; 6) HTML — to produce a CHM you must enable GENERATE_HTMLHELP; 7) Dot — enable CLASS_DIAGRAMS, UML_LOOK, CALL_GRAPH, and CALLER_GRAPH to get class diagrams, UML-style diagrams, call graphs, and caller graphs.

After making these choices, doxygen writes a Doxyfile, from which the documentation is generated; the Doxyfile is doxygen's configuration and can be edited directly.

Once configured, doxygen generates the HTML files, PNG files, and so on.

Then package them into a CHM file for easy browsing.
Other packaging tools can help here, since HTML help has capacity limits and the HTML doxygen generates is sometimes broken: with too many cross-references, HTML files may fail to generate properly (I once found the generated HTML files were all 0 KB), and CHM packaging also struggles when there is too much content. So the practical combination is doxygen plus a packaging tool (not necessarily CHM).
Some other useful information is available here:
http://www.fmddlmyy.cn/text21.html
posted @ 2008-08-17 16:24 jolley 阅读(235) | 评论 (0)编辑 收藏

In multi-project builds, static libs often cause trouble because a lib is not up to date. One problem I hit: project A modified a file in project B, but B was not rebuilt, so the lib A linked from B was stale. The symptom: while debugging project A you can clearly single-step to a statement, but execution actually lands before or after it — because that statement was never updated in the lib.
Benefits of DLLs:
1) easier division of work; 2) easier later maintenance and extension; 3) fewer build configurations — usually just debug and release DLLs; 4) good code encapsulation. By contrast, a static lib must be recompiled for every change, while a DLL can be updated as a binary: once the interface is fixed, updating the implementation updates the DLL, with no client rebuild needed.
Engines generally favor DLLs, building each engine module as a DLL for easier development and later maintenance. One inconvenience of DLLs: a client can only call what the DLL exports, so when building the DLL you must export the relevant function interfaces for clients to use.
posted @ 2008-07-30 20:45 jolley 阅读(154) | 评论 (0)编辑 收藏

Divide tasks finely enough that no two team members' tasks overlap.
Win people over sincerely; communicate with them often and put yourself in their shoes.
Track progress promptly, and when a task falls behind schedule, make a timely call.
Set priorities: what to do first and what later; what can be finished quickly and what takes time; what can be done now, what can wait, and what should be deferred — all of this must be clear.
Before assigning a task, state the problem clearly; goals and requirements must be explicit, and follow up according to each person's ability to guarantee schedule and quality.
Require team members not to repeat the same mistakes.

Don't think about a problem along a single track; approach problem D from angles A, B, and C — it is more effective. My rule now: consider at least three angles before settling a question, otherwise it is easy to miss something. I used to think this way and have slipped back into the old habit, which shows the skill isn't mastered yet and needs practice.

When handing out a task, make the steps clear: learn to list step 1, step 2, step 3, rather than writing something vague.

Keep the whole picture in view and consider things globally.

Learn to be flexible; believe there is always a solution, and aim for the same destination by different routes instead of locking your thinking into one corner.
Everything has a deadline; once it passes, learn to re-examine the problem from a different angle.

To be continued.
posted @ 2008-07-14 22:14 jolley 阅读(154) | 评论 (0)编辑 收藏
