stereo geometry in OpenGL

what is stereoscopy?
 

Binocular vision

Most people see naturally through two eyes. The eyes are placed an average of 63.5 mm apart, the exact figure varying between individuals.  Two similar but slightly different viewpoints generate a small amount of parallax wherever the depth of a point in the scene differs from the depth at which the eyes are converged.  The computational power of the brain then takes over and turns this disparity into depth perception.  A subgroup of the population can see out of only one eye, due to anatomical or physiological problems, and cannot see stereoscopically.

Depth cues

Binocular parallax is a powerful depth cue in binocular individuals, but it is by no means the only one used by the mind.  Many depth cues are available to a single eye: anything that conveys depth in a standard photograph falls into this category.

Motion parallax is a depth cue available to the monocular individual or the cinematographer: distant objects appear to move less than near ones, as in the view from a moving train window.  Perspective and texture gradient convey depth as objects shrink with increasing distance from the viewer.  Fog and haze increasing with distance also contribute to depth perception.  Occlusion of objects by nearer objects is a powerful depth cue in normal scenes, although it does not apply to radiographs.  Known objects also help with depth perception - where an object of known size (say, an elephant) appears in a scene, one can often estimate how far away it is and compare it to other objects in the same scene.  Shading and lighting also play an important role, especially if one knows where the light source is or if it is included in the scene.

Benefits of stereoscopy

A well presented stereoscopic image is pleasing to look at.  The depth of structures is readily apparent, and details not observed before become evident.  The combination of two views can provide more useful information about a scene than a single view, or than two views taken from widely disparate viewpoints.  Relative depth can be gauged easily, and with proper measurement apparatus (as in aerial photography or radiostereogrammetric analysis) absolute depth can be measured from a pair of images.

Problems in stereoscopy

A poorly presented stereoscopic image leaves no impression - or leaves the unfortunate viewer with headaches, eyestrain and nausea.

Most bad stereoscopic experiences (of which there are many) are due to poor presentation.  It is not uncommon to see images presented with rotation or vertical parallax errors, nor to see images with the views swapped (pseudostereo).  Geometry is also important: if it is changed (e.g. shifting from a small screen to a large projection screen), eyestrain may be expected.

Accommodation-convergence mismatch is the problem that arises when the projected image plane is in focus but the eyes are converging at a different depth plane.  Humans have a tolerance for some mismatch, but when it becomes too large, either the image defocuses or stereo fusion is lost.  The link between accommodation and convergence is hard-wired into our nervous systems, but can be overcome to an extent with practice.  It is also easier to compensate when viewing close images (converging), as most people are unable to make their eyes diverge.

Ghosting is the visibility of a small proportion of the image intended for the other eye.  It is a problem for most computer-based viewing systems, where the image pathways are not kept physically separated.  Where high contrast images are used (such as when looking at metallic implants on a radiograph), ghosting becomes a significant problem.  No full solution has yet been devised for computer viewers, but a purely optical viewing system should give no ghosting.

viewing systems

Free viewing

This is the cheapest method of viewing, but also requires the most practice and dedication.  Images can be viewed cross-eyed or parallel.  Most people are unable to diverge their eyes and therefore cannot stereoscopically fuse images that are separated by more than their interocular distance.  Cross-eyed viewing is easier for most people, but can still be difficult due to the neurological linking of accommodation and convergence.  The main benefits of free viewing are that it can be done anywhere, anytime, with no viewing aids and no image ghosting.

Optical aids


Included here are optical devices that permit the delivery of images to each eye with tolerable accommodation-convergence disparity.  The device can be as simple as a handheld mirror, or quite complex and bulky (some of the large mirror-based stereoscopes used for photogrammetry, or old two-wall radiograph viewers).  I carry a small mini-Wheatstone viewer - essentially two small periscopes laid on their sides to spread one's effective interocular distance - in my bag; it can be used to quickly view images on any computer screen, or radiographs printed in parallel view format.

Wheatstone principle

One of the easiest stereoscopes to make uses a single mirror.  Two computer screens, radiograph viewing boxes, or pictures are placed at 45° to each other, with one image laterally reversed.  A carefully sized mirror is then placed on the plane between the two images - this superimposes the images optically in the correct viewing position.  If at all possible, a front-silvered mirror should be used to eliminate ghosting.

Mirror based stereoscope

Despite the ease of use of digital displays, I believe that a dedicated user will gain the most information from an optical film viewing system at the moment.  This is for two reasons: the lack of ghosting in a well designed optical system, and the wider dynamic range of transparency film compared to prints or screen displays.

Liquid crystal shutterglasses

Liquid crystal shutterglasses are a neat device.  The glasses are formed of liquid crystal panels that alternately blank out in sync with the left and right images being displayed on the screen.  Driving the glasses while alternating left/right images on the screen (page-flipping) gives a smooth, flicker-free stereo appearance if the refresh rate is high enough.  I find 120 Hz comfortable and flicker-free, but some people can comfortably view 60 Hz stereo while others need rates of 140 Hz or more.  Note that if shutterglasses are used in a room with fluorescent lights, interference between the shutterglasses and the fluorescent tubes will cause major flicker.

As long as the images are not given an excessive amount of parallax, the accommodation-convergence discrepancy is largely dealt with.  Ghosting is also an issue with shutterglasses, especially in high contrast regions of the image.

Shutterglasses are a reasonably cheap (US$40-100+) solution.

Polarized light systems

IMAX has used these in projection systems for years with success.  Standard linearly polarized glasses have polarizers oriented at 135° for the left eye and 45° for the right eye.  The same polarizer orientations are used for the two projectors, and a silver non-depolarizing projection screen needs to be used.  There are a number of other polarizing displays available, such as the VREX micropol filter (horizontally interlaced alternate polarization) or the Z-Screen from StereoGraphics (active screen shutter with passive circular polarizing glasses).  The advantage of polarized systems is that they allow multiple viewers easy access to the picture.  The drawbacks are ghosting, which is still present, and the need for users to wear eyeglasses.

Autostereoscopic displays

Autostereo is the ability to deliver separate images to each eye without the use of viewing glasses.  At present there are two main methods of achieving this: use of a barrier to block light destined for the contralateral eye, or use of a lenticular lens to direct light into the chosen eye.  Autostereoscopic displays are available with two to nine viewing zones, but each additional viewing zone degrades the effective resolution of the image.  The other drawback of autostereoscopic displays is the requirement for the user to be in a fairly well defined "sweet spot", or else the image will be displayed in mono or pseudostereo - head tracking devices can overcome this, but are cumbersome and expensive at present.  Both raster barrier and lenticular displays suffer from ghosting, which can be significant when high contrast images are used.

Displays are currently quite expensive and do have technical deficits, but these are being addressed by developers.  Solutions such as the DTI and Sharp screens use a switchable raster barrier and can be used for conventional mono work as well.

Where picture quality is important in an autostereoscopic screen, the SeeReal lenticular autostereoscopic screens offer the clearest picture with the least ghosting at the present time.

Emerging technology

Technology will continue to develop and there are some interesting ideas being worked on.  Holographic displays, elemental lens arrays and true volumetric displays are all being developed presently.  One of the most interesting developing technologies uses your retina as the primary projection screen - I'd still feel uncomfortable at having two projectors aiming for my eyes but can see the potential.

Anaglyph

The anaglyph is the use of color (usually red for the left eye, cyan for the right) to code the left and right images.  The images can be printed, displayed, or projected on just about any medium, and a simple set of glasses with the appropriate lenses directs the appropriate image to the appropriate eye.  "Retinal rivalry" is the conflict that arises in the brain when two different colors are used for the same object.  Apparently this does not bother many people - I find anaglyphs unusable for any length of time.

stereoscopic radiology

Stereoscopic radiology is the use of stereoscopic imaging principles on radiographs and volumetric data.

Roentgen described the first radiograph in 1895, and within 2-3 years stereoradiographs were being taken.  A peak of popularity followed, with most radiologists using the technique by the 1930s.  The discovery that x-rays could be harmful did a lot to kill off the technique, as the extra radiation could not be justified.  Today there are few radiologists, and even fewer clinicians, who have been exposed to stereoscopy, much less used it.  I believe a large part of this lies in the main mode of dissemination of knowledge in the medical world: the journal.  Stereoscopy has to be experienced first hand, using a well set up viewing device, to be appreciated; publication in journal format without adequate viewing aids does not help the potential viewer.

There are a number of situations where plain stereoscopic radiographs may still be of significant benefit:
    - in practices (developing world, rural locations, military field hospitals) where CT scanning is not available, but more information is wanted
    - in situations where metallic implant components need to be imaged, but too much implant scatter occurs in the CT machine (older scanners)
    - dislocated hip or shoulder, where the lateral view is uninterpretable and often painful to obtain
    - erect spine - reconstructed CT and MRI data in the scoliotic, deformed, or unstable spine is not available in the erect position

Plain radiographs need to be treated differently from photographic images when viewing, for a number of reasons.  Depth cues from perspective are preserved, though the obscuring of objects further from the viewer obviously is not.  In photographic images, the focus plane is presented sharpest and objects progressively defocus away from it.  Radiographs are similar, though one needs to remember that the plane of sharpest focus is at the film plane, and everything closer to the tube will be progressively defocused.  There are also no lighting or depth haze cues in radiographs to rely on.  With objects that occlude most of the x-rays from reaching the film, such as metalware or dense soft tissue, only a silhouette is recorded, and details that are available elsewhere in the radiograph are not visualized.

Technique

The most important part of taking a pair of stereoradiographs is to have the patient and film stay in the same location whilst the tube is shifted.  Tube-to-film distance should remain constant for both films.  Most authors on the subject recommend a tube shift of about 1/10th of the tube-to-film distance.  This can be a bit less for smaller subjects if the image will be magnified (hypostereo).  As the tube can be regarded as a point source of radiation, toeing in the tube should have no effect on the picture unless the fulcrum on which the tube swings is placed eccentric to the source.  Use of a grid does cause a significant, visually obvious gradient - we have not decided what to do with the grid yet.

Limitations and drawbacks

Before using radiographs stereoscopically, it is important to understand the limitations of stereoscopy.  Due to our inability to effectively gauge distance from the amount our eyes are accommodated and converged, it is not useful for assessing absolute depth in an image without stereophotogrammetric devices.  Changes in depth and depth relationships, however, are accurately judged.  

The main objection to stereoradiography is that each stereo view requires a double dose of radiation - where a stereo view does not add information, it cannot be justified.  If the technique does provide additional information that contributes positively towards clinical management, then it is as reasonable to use as modalities like CT, which also require additional radiation.  When considering radiation, it is useful to remember that the dose from two AP or PA films of the trunk is lower than that of an AP + lateral series.  The two views also require less irradiation than accepted investigative modalities such as plain film tomography, where multiple slices are made of a region.

Another problem in stereoradiography is the need for the patient to stay still whilst the two views are taken.  Whilst this is easy where filming tables are used, it is harder to get good films in erect patients, patients with neuromuscular disorders, studies which are dependent on respiration phase, and patients in considerable pain.

Volumetric data in stereo

Volumetric data can be rendered from two different viewpoints (using either "toed-in" or asymmetric frustum projection) to give stereoscopic views of a subject.  Rendering can be done either using surface generation algorithms or by mapping intensity/density to opacity.  Surface rendering algorithms were developed to speed up the rendering process by reducing the number of geometric primitives, and to smooth surfaces.  With the increasing speed of recent computers, it is becoming feasible to render the full volumetric data set on widely available computing platforms.

I have written tutorials on the use of two programs - VolView and AMIDE - for use in volumetric rendering available: see the "opacity based rendering" page.

If you have developed or are developing other uses for stereoscopy in radiology or orthopaedics, I'd be interested to know.

orthopaedic applications

 

We use binocular vision in everyday life.  There are few surgeons who would prefer living - much less operating - with only one eye.  Binocular microscopes are in common use in microvascular surgery, as are surgical loupes that give good stereoscopic vision.  Is there any reason we should persist in viewing our imaging with one eye?  Do we continue listening to our music on monophonic gramophones?

The ability to perceive images tridimensionally can add to an overall understanding of a bony problem (the "personality" of the fracture or deformity).  Radiostereogrammetric analysis (RSA) to measure prosthesis migration is currently the main use of stereoscopy in orthopaedics.  RSA devices are very accurate, often measuring depth in a stereo pair to 1 mm or less.  Harnessing this ability to perceive depth in other clinical situations has a lot of potential.  When using stereoscopy, the limitations of the technique must be kept in mind - it is an adjunct to currently available imaging modalities, not a replacement.

Stereoscopic endoscopy is an area which is being opened up by general, urological, and cardiothoracic surgeons.  There is potential use in orthopaedics, but due to current technical and cost limitations, we are not actively looking at development in this area at present.

Software for rendering volumetric CT and MRI data in stereo is available:  see the "opacity based rendering" page for details and a couple of quick tutorials on the subject.

Stereoscopic visualization is a tool mainly useful for obtaining images with depth information, as well as increasing the perceived resolution of an image by using both eyes.  Applications need only be limited by one's imagination.

opacity based rendering


Most people think of colour as a composite of red, green, and blue, or of cyan, magenta, and yellow.  Computer graphics cards that deal with tridimensional rendering also use a fourth component: alpha, or opacity.  By mapping the luminance of an image to an alpha value, this opacity component can be used to reconstruct a virtual radiograph from CT data, viewable from any angle with or without stereo aids; the viewer can alter the viewpoint or opacity of the image in real time.

Display of reconstructed CT or MRI data has usually been done as a surface structure with lighting algorithms applied.  This has mainly been because a surface model decreases the amount of data that needs to be processed.  It does have drawbacks: one cannot "see into" the volume, the surface generation algorithms smooth over regions that one may want to see, and the lighting algorithms come at increased computational cost.  With the advances in computer technology, viewing the full volumetric data set can be done on a modest computer platform with a reasonable graphics card, at home or on the move - this reduces the surgeon's dependency on the radiology department and their expensive Silicon Graphics workstations.

I have not uploaded any full volumes here, as it is all real patient data and will not be made freely available.  If you are interested and want some examples, email me with your professional details.

Opacity rendering tutorials

The two programs that I use are VolView from Kitware and AMIDE, an open-source development by Andy Loening.

Kitware has very kindly built in DICOM file support to VolView which makes it easy to open a 3D DICOM file or convert a stack of DICOM images into a single 3D file.

AMIDE is a useful program that allows volume rendering in parallel view stereo with alteration of the stereo parameters, as well as tools to remove extras such as plaster casts or CT tables.

To use either of the programs to look at CT data like a "virtual radiograph" follow the links below:

stereo geometry in OpenGL



OpenGL is a powerful cross-platform graphics API with many benefits - specifically, a wide range of hardware and software support, and support for quad-buffered stereo rendering.  Quad-buffering is the ability to render into left and right front and back buffers independently.  The front left and front right buffers displaying the stereo images can be swapped in sync with shutterglasses while the back left and back right buffers are being updated - giving a smooth stereoscopic display.  It is also relatively easy to learn with a bit of application - the code below was written within 3 months of when I started programming, due in no small part to the many resources available on the web.

When rendering in OpenGL, understanding the geometry behind what you want to achieve is essential.  Toed-in stereo is quick and easy, but has the side effect that a bit of keystone distortion creeps into both left and right views, due to the difference between the rendering plane and the viewing plane.  This is not too much of a problem in central portions of a scene, but becomes significant at the screen edges.  Asymmetric frustum parallel axis projection (equivalent to lens-shift in photography) corrects this keystone distortion and puts the rendering plane and viewing plane in the same orientation.  It is essential, when using the asymmetric frustum technique, that the rendering geometry closely matches the geometry of the viewing system.  Failure to match rendering and viewing geometry results in a distorted image delivered to both eyes, and can be more disturbing than the distortion from toed-in stereo.

Paul Bourke has an excellent site with examples of stereoscopic rendering in OpenGL.  If you are interested in creating stereo views in OpenGL, it is worth spending time working out the geometry of toed-in (which is quite easy but introduces distortion into the viewing system) and asymmetric frustum parallel axis projection for yourself.  Below is my method - familiarity with OpenGL, GLUT and C is assumed and you need to have a graphics card which is capable of quad-buffering:

Toed-in stereo

Toed-in geometry

The idea is to use gluLookAt to set the camera position and point it at the middle of the screen from the two eye positions:

//toed-in stereo

#include <GL/glut.h>                                       //GLUT pulls in the OpenGL headers

int screenwidth = 800;                                     //screen resolution (set to suit your display)
int screenheight = 600;

float depthZ = -10.0;                                      //depth of the object drawing

double fovy = 45;                                          //field of view in y-axis
double aspect = double(screenwidth)/double(screenheight);  //screen aspect ratio
double nearZ = 3.0;                                        //near clipping plane
double farZ = 30.0;                                        //far clipping plane
double screenZ = 10.0;                                     //screen projection plane
double IOD = 0.5;                                          //interocular distance

void init(void)
{
  glViewport (0, 0, screenwidth, screenheight);            //sets drawing viewport
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();
  gluPerspective(fovy, aspect, nearZ, farZ);               //sets frustum using gluPerspective
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}
GLvoid display(GLvoid)
{
  glDrawBuffer(GL_BACK);                                   //draw into both back buffers
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);      //clear color and depth buffers

  glDrawBuffer(GL_BACK_LEFT);                              //draw into back left buffer
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();                                        //reset modelview matrix
  gluLookAt(-IOD/2,                                        //set camera position  x=-IOD/2
            0.0,                                           //                     y=0.0
            0.0,                                           //                     z=0.0
            0.0,                                           //set camera "look at" x=0.0
            0.0,                                           //                     y=0.0
            -screenZ,                                      //                     z=-screenZ (screen plane lies down the -z axis)
            0.0,                                           //set camera up vector x=0.0
            1.0,                                           //                     y=1.0
            0.0);                                          //                     z=0.0
 

  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();


  glDrawBuffer(GL_BACK_RIGHT);                             //draw into back right buffer
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();                                        //reset modelview matrix
  gluLookAt(IOD/2, 0.0, 0.0, 0.0, 0.0, -screenZ,           //as for left buffer with camera position at:
            0.0, 1.0, 0.0);                                //                     (IOD/2, 0.0, 0.0)

  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();

 
  glutSwapBuffers();
}


Asymmetric frustum parallel axis projection stereo

This is a bit more complex, as one needs to set up an asymmetric frustum first before moving the camera viewpoint.  In OpenGL, the asymmetric frustum is set up with the camera at the (0.0, 0.0, 0.0) position and then needs to be translated by IOD/2 to make sure that there is no parallax difference at the screen plane depth.  Geometry for the right viewing frustum is depicted below:

Asymmetric frustum geometry

To set up an asymmetric frustum, the main thing is deciding how far to shift the frustum.  This is quite easy as long as we only want to move the camera by +/- IOD/2 along the x-axis.  From the geometry, the ratio of the frustum shift to the near clipping plane distance equals the ratio of IOD/2 to the screen plane distance.

I decided to use a function call to set up the frustum on initialization and any time the viewport is changed:

#include <GL/glut.h>                                       //GLUT pulls in the OpenGL headers
#include <math.h>                                          //for tan()

#define DTR 0.0174532925                                   //degrees to radians

struct camera
{
    GLdouble leftfrustum;
    GLdouble rightfrustum;
    GLdouble bottomfrustum;
    GLdouble topfrustum;
    GLfloat modeltranslation;
} leftCam, rightCam;

int screenwidth = 800;                                     //initial screen resolution (set to suit your display)
int screenheight = 600;

float depthZ = -10.0;                                      //depth of the object drawing

double fovy = 45;                                          //field of view in y-axis
double aspect = double(screenwidth)/double(screenheight);  //screen aspect ratio
double nearZ = 3.0;                                        //near clipping plane
double farZ = 30.0;                                        //far clipping plane
double screenZ = 10.0;                                     //screen projection plane
double IOD = 0.5;                                          //interocular distance

void setFrustum(void)
{
    double top = nearZ*tan(DTR*fovy/2);                    //sets top of frustum based on fovy and near clipping plane
    double right = aspect*top;                             //sets right of frustum based on aspect ratio
    double frustumshift = (IOD/2)*nearZ/screenZ;

    leftCam.topfrustum = top;
    leftCam.bottomfrustum = -top;
    leftCam.leftfrustum = -right + frustumshift;
    leftCam.rightfrustum = right + frustumshift;
    leftCam.modeltranslation = IOD/2;

    rightCam.topfrustum = top;
    rightCam.bottomfrustum = -top;
    rightCam.leftfrustum = -right - frustumshift;
    rightCam.rightfrustum = right - frustumshift;
    rightCam.modeltranslation = -IOD/2;
}
void init(void)
{
  glViewport (0, 0, screenwidth, screenheight);            //sets drawing viewport
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();

  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
}
GLvoid reshape(int w, int h)
{
    if (h==0)
    {
        h=1;                                               //prevent divide by 0
    }
    aspect=double(w)/double(h);
    glViewport(0, 0, w, h);
    setFrustum();
}
GLvoid display(GLvoid)
{
  glDrawBuffer(GL_BACK);                                   //draw into both back buffers
  glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);      //clear color and depth buffers

  glDrawBuffer(GL_BACK_LEFT);                              //draw into back left buffer
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();                                        //reset projection matrix
  glFrustum(leftCam.leftfrustum, leftCam.rightfrustum,     //set left view frustum
            leftCam.bottomfrustum, leftCam.topfrustum,
            nearZ, farZ);
  glTranslatef(leftCam.modeltranslation, 0.0, 0.0);        //translate to cancel parallax
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();
  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();


  glDrawBuffer(GL_BACK_RIGHT);                             //draw into back right buffer
  glMatrixMode(GL_PROJECTION);
  glLoadIdentity();                                        //reset projection matrix
  glFrustum(rightCam.leftfrustum, rightCam.rightfrustum,   //set right view frustum
            rightCam.bottomfrustum, rightCam.topfrustum,
            nearZ, farZ);
  glTranslatef(rightCam.modeltranslation, 0.0, 0.0);       //translate to cancel parallax
  glMatrixMode(GL_MODELVIEW);
  glLoadIdentity();


  glPushMatrix();
  {
    glTranslatef(0.0, 0.0, depthZ);                        //translate to screenplane
    drawscene();
  }
  glPopMatrix();

 
  glutSwapBuffers();
}

ghosting


 
Ghosting, or crosstalk, is less evident in stereo movies and gaming due to motion in the scene, lower contrast levels, and the presence of color in the images.  Medical images viewed in stereo are not forgiving if the viewing system has even a small level of ghosting.  This is because the images are typically grayscale, static, and have both areas of very high contrast and areas in which fine gradations of gray differentiate structures.  The obvious and best solution to the problem is to use an optical viewing system, which has no chance of ghosting.

In any viewing system where light from the images is physically superimposed, a small percentage of the image destined for the contralateral eye leaks through the coding device, whether shutterglasses, polarized glasses, anaglyph glasses or an autostereoscopic screen.  With shutterglasses the ghosting arises from two components.  Firstly, the image on the monitor phosphor persists for a short time after it has been switched off, and this lag allows some of the image to remain at perceptible levels when the contralateral shutter opens.  Secondly, even with the shutter closed, a small proportion of light from the wrong image still leaks through.

To eliminate perceptible ghosting, the amount of light leakage to the contralateral eye should be below 2% of what is being displayed to that eye (the Weber fraction).

To make shutterglasses or autostereoscopic screens truly useful for radiology, the ghosting will have to be eliminated or minimized in future generations of stereoscopic equipment.

posted on 2007-05-03 20:11 by zmj
