September 18, 2012
Original article: "Installing a Python Scientific Computing Environment on Ubuntu", by HyryStudio
On Ubuntu, Python modules can usually be installed with the apt-get and pip commands. apt-get is Ubuntu's built-in package manager, while pip is Python's tool for installing extension modules; pip typically downloads a module's source code, then compiles and installs it.
Ubuntu 12.04 ships with Python 2.7.3 by default. First install pip, Python's tool for installing and managing extension libraries:
sudo apt-get install python-pip
Also install the Python development files, so that other extension libraries can be compiled later (about 92.8 MB of disk space):
sudo apt-get install python-dev
IPython
To install the latest version, IPython 0.13 beta, which provides an improved IPython notebook, download the IPython source code and install it. The commands below first install the version-control tool git, then clone the latest IPython source from the development repository and install it:
cd
sudo apt-get install git
git clone https://github.com/ipython/ipython.git
cd ipython
sudo python setup.py install
To install the current stable release instead, run:
sudo apt-get install ipython
After installation, run the ipython command to check that it starts correctly.
To make the IPython notebook work, tornado and pyzmq are also required:
sudo pip install tornado
sudo apt-get install libzmq-dev
sudo pip install pyzmq
sudo pip install pygments
Now test the IPython notebook:
cd
mkdir notebook
cd notebook
ipython notebook
To use LaTeX math formulas offline in IPython, MathJax must be installed. Start the IPython notebook as shown above, then enter the following in the notebook interface:
from IPython.external.mathjax import install_mathjax
install_mathjax()
NumPy, SciPy, and matplotlib
These three libraries can be installed quickly with apt-get:
sudo apt-get install python-numpy
sudo apt-get install python-scipy
sudo apt-get install python-matplotlib
To compile and install them with pip instead, first use apt-get to install all the build dependencies:
sudo apt-get build-dep python-numpy
sudo apt-get build-dep python-scipy
Then install them with pip:
sudo pip install numpy
sudo pip install scipy
Note that build-dep installs a large number of packages, including Python 3.2.
PyQt4 and Spyder
The following commands install PyQt4, the Qt Designer, the PyQt4 development tools, and the documentation:
sudo apt-get install python-qt4
sudo apt-get install qt4-designer
sudo apt-get install pyqt4-dev-tools
sudo apt-get install python-qt4-doc
After installation, the documentation is located at:
/usr/share/doc/python-qt4-doc
Once PyQt4 is installed, install Spyder with:
sudo apt-get install spyder
Since Spyder is updated frequently, the latest version can be installed with:
sudo pip install spyder --upgrade
Cython and SWIG
Cython and SWIG are tools for writing Python extension modules:
sudo pip install cython
sudo apt-get install swig
Run cython --version and swig -version to check the installed versions.
ETS
ETS (Enthought Tool Suite) is a set of scientific computing packages developed by Enthought; its Mayavi component provides 3D data visualization via VTK.
First install the libraries required to build ETS:
sudo apt-get install python-dev libxtst-dev scons python-vtk pyqt4-dev-tools python2.7-wxgtk2.8 python-configobj
sudo apt-get install libgl1-mesa-dev libglu1-mesa-dev
Create an ets directory, download ets.py into it, and run ets.py to fetch the latest ETS sources and install them:
mkdir ets
cd ets
wget https://github.com/enthought/ets/raw/master/ets.py
python ets.py clone
sudo python ets.py develop
#sudo python ets.py install   # or run install instead of develop
If everything went well, running the mayavi2 command will start Mayavi.
OpenCV
Building OpenCV requires the cmake build tool and a number of dependency libraries:
sudo apt-get install build-essential
sudo apt-get install cmake
sudo apt-get install cmake-gui
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev libjasper-dev
Then download the latest OpenCV source code from http://sourceforge.net/projects/opencvlibrary/ and unpack it. Create a build directory named release and launch cmake-gui.
In cmake-gui, select the OpenCV source directory and the release output directory, press the Configure button, set the build options as needed, then press Generate and quit cmake-gui. Enter the build directory and run:
cd release
make
sudo make install
After installation, start IPython and run import cv2 to check that OpenCV loads correctly.
September 17, 2012
from sgmllib import SGMLParser
import sys, urllib2, urllib, cookielib

class spider(SGMLParser):
    def __init__(self, email, password):
        SGMLParser.__init__(self)
        self.h3 = False
        self.h3_is_ready = False
        self.div = False
        self.h3_and_div = False
        self.a = False
        self.depth = 0
        self.names = ""
        self.dic = {}

        self.email = email
        self.password = password
        self.domain = 'renren.com'
        try:
            cookie = cookielib.CookieJar()
            cookieProc = urllib2.HTTPCookieProcessor(cookie)
        except:
            raise
        else:
            opener = urllib2.build_opener(cookieProc)
            urllib2.install_opener(opener)

    def login(self):
        url = 'http://www.renren.com/PLogin.do'
        postdata = {
            'email': self.email,
            'password': self.password,
            'domain': self.domain
        }
        req = urllib2.Request(url, urllib.urlencode(postdata))
        self.file = urllib2.urlopen(req).read()

    def start_h3(self, attrs):
        self.h3 = True

    def end_h3(self):
        self.h3 = False
        self.h3_is_ready = True

    def start_a(self, attrs):
        if self.h3 or self.div:
            self.a = True

    def end_a(self):
        self.a = False

    def start_div(self, attrs):
        if self.h3_is_ready == False:
            return
        if self.div == True:
            self.depth += 1
        for k, v in attrs:
            if k == 'class' and v == 'content':
                self.div = True
                self.h3_and_div = True  # the h3 and this div belong together

    def end_div(self):
        if self.depth == 0:
            self.div = False
            self.h3_and_div = False
            self.h3_is_ready = False
            self.names = ""
        if self.div == True:
            self.depth -= 1

    def handle_data(self, text):
        # record the name
        if self.h3 and self.a:
            self.names += text
        # record the status text
        if self.h3 and (self.a == False):
            if text:
                self.dic.setdefault(self.names, []).append(text)
            return
        if self.h3_and_div:
            self.dic.setdefault(self.names, []).append(text)

    def show(self):
        # re-encode from utf-8 to the filesystem encoding before printing
        enc = sys.getfilesystemencoding()
        for key in self.dic:
            name = (''.join(key)).replace(' ', '')
            says = (''.join(self.dic[key])).replace(' ', '')
            print name.decode('utf-8').encode(enc), says.decode('utf-8').encode(enc)

renrenspider = spider('your email', 'your password')
renrenspider.login()
renrenspider.feed(renrenspider.file)
renrenspider.show()
August 19, 2012
Google Earth coordinates: US aircraft carriers
This list covers all US active and retired aircraft carriers located so far, including:
USS Kitty Hawk (CV-63) 35°17'29.66"N, 139°39'43.67"E
USS John F. Kennedy (CV-67) 30°23'50.91"N, 81°24'14.86"W
USS Nimitz (CVN-68) 32°42'47.88"N, 117°11'22.49"W
USS Dwight D. Eisenhower (CVN-69) 36°57'27.13"N, 76°19'46.35"W
USS Abraham Lincoln (CVN-72) 47°58'53.54"N, 122°13'42.94"W
USS George Washington (CVN-73) 36°57'32.90"N, 76°19'45.10"W
USS Harry S. Truman (CVN-75) 36°48'53.25"N, 76°17'49.29"W
USS Intrepid (CV-11) 40°45'53.88"N, 74°0'4.22"W
USS Lexington (CV-16) 27°48'54.13"N, 97°23'19.65"W
USS Constellation 47°33'11.30"N, 122°39'17.24"W
USS Independence 47°33'7.53"N, 122°39'30.13"W
USS Ranger 47°33'10.63"N, 122°39'9.53"W
USS Forrestal and USS Saratoga 41°31'39.59"N, 71°18'58.70"W
USS America 39°53'6.36"N, 75°10'45.55"W
This list includes all retired and active US Navy aircraft carriers of the CV, CVA, CVB, CVL, and CVN classes. Carriers numbered from CVA-58 onward are supercarriers (displacement over 75,000 tons), and CVN-65 together with CVN-68 and later are nuclear-powered.
The smaller escort carriers (Escort Aircraft Carriers, CVE) are covered separately in the list of US Navy escort aircraft carriers.
August 10, 2012
Hyperspectral imaging is a new generation of optoelectronic inspection technology. It emerged in the 1980s and is still developing rapidly. The name is relative to multispectral imaging: a hyperspectral image carries much richer image and spectral information than a multispectral image does. Classified by the spectral resolution of the sensor, spectral imaging techniques generally fall into three categories.
(1) Multispectral imaging: spectral resolution on the order of Δλ/λ = 0.1; such sensors usually have only a few bands in the visible and near-infrared region.
(2) Hyperspectral imaging: spectral resolution on the order of Δλ/λ = 0.01; such sensors have tens to hundreds of bands in the visible and near-infrared region, with spectral resolution down to the nanometer level.
(3) Ultraspectral imaging: spectral resolution on the order of Δλ/λ = 0.001; such sensors can have thousands of bands in the visible and near-infrared region.
Spectral analysis is a well-established research tool in the natural sciences: spectra can reveal the physical structure and chemical composition of the object being measured. Spectral evaluation is based on point measurements, whereas image measurement is based on spatial variation; each has its strengths and weaknesses. Spectral imaging is therefore the natural outcome of the development of spectral analysis and image analysis, combining the two. It has both spectral and spatial resolving power, so it can be used not only for qualitative and quantitative analysis of a target but also for locating it.
The core component of a hyperspectral imaging system is the imaging spectrometer, a new type of sensor whose development formally began in the early 1980s. The goal of such instruments is to acquire image data over a large number of narrow, contiguous spectral bands, so that each pixel carries a nearly continuous spectrum. A hyperspectral image is a series of optical images at successive wavelengths, typically containing tens to hundreds of bands with a spectral resolution of roughly 1-10 nm. Because it provides a nearly continuous spectral curve for every pixel, hyperspectral imaging captures spatial information together with far richer spectral information than multispectral imaging, and these data can be used to build models that discriminate, classify, and identify the materials in an image.
A hyperspectral image of a target therefore contains rich spatial, spectral, and radiometric information. It records not only the spatial distribution of the scene but also the radiant intensity and spectral signature of any chosen pixel or group of pixels. Image, radiance, and spectrum are the three key characteristics of hyperspectral data, and their organic combination is what constitutes a hyperspectral image.
Hyperspectral image data form a data cube: the horizontal and vertical pixel coordinates are denoted x and y, and the wavelength information lies along the third axis (the z axis). The cube consists of contiguous two-dimensional images spaced along the spectral axis at intervals of a given spectral resolution.
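Such a cube can be treated as a three-dimensional array. The sketch below (assuming NumPy, with synthetic data and an assumed (y, x, z) axis order; real file formats such as BSQ/BIL/BIP order the bands differently) shows how a single-band image and a single-pixel spectrum are sliced out:

```python
import numpy as np

# Synthetic hyperspectral cube: 100 x 120 pixels, 50 spectral bands
cube = np.random.rand(100, 120, 50)

band_image = cube[:, :, 10]  # 2-D image at the 11th wavelength
spectrum = cube[40, 60, :]   # nearly continuous spectrum of one pixel

print(band_image.shape)  # (100, 120)
print(spectrum.shape)    # (50,)
```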
July 30, 2012
Q: Link error LNK2019: unresolved external symbol _cvCreateImage.
A: Change the solution platform to x64: in the toolbar, open the solution-platform drop-down, choose Configuration Manager, then under Active solution platform select New and create an x64 platform. That solves the problem.
Q: Error C1189: Building MFC application with /MD[d] (CRT dll version) requires MFC shared dll version. Please #define _AFXDLL or do not use /MD[d].
A: Go to the project properties (Project menu, Properties) and set 'Use of MFC' to "Use MFC in a Shared DLL". Make this change for both the debug and release configurations.
July 25, 2012
July 24, 2012
- Introduction
- The Idea
- The Gaussian Case
- Experiments with Black-and-White Images
- Experiments with Color Images
- References
Introduction
Filtering is perhaps the most fundamental operation of image processing and computer vision. In the broadest sense of the term "filtering", the value of the filtered image at a given location is a function of the values of the input image in a small neighborhood of the same location. For example, Gaussian low-pass filtering computes a weighted average of pixel values in the neighborhood, in which the weights decrease with distance from the neighborhood center. Although formal and quantitative explanations of this weight fall-off can be given, the intuition is that images typically vary slowly over space, so near pixels are likely to have similar values, and it is therefore appropriate to average them together. The noise values that corrupt these nearby pixels are mutually less correlated than the signal values, so noise is averaged away while signal is preserved. The assumption of slow spatial variations fails at edges, which are consequently blurred by linear low-pass filtering. How can we prevent averaging across edges, while still averaging within smooth regions? Many efforts have been devoted to reducing this undesired effect. Bilateral filtering is a simple, non-iterative scheme for edge-preserving smoothing.
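As a concrete illustration of the paragraph above (a sketch assuming NumPy, not code from the original article): a plain Gaussian weighted average smooths noise in flat regions but spreads a step edge over the width of the kernel.

```python
import numpy as np

def gaussian_1d(signal, sigma=3.0, radius=9):
    # Weights decrease with distance from the neighborhood center
    offsets = np.arange(-radius, radius + 1)
    w = np.exp(-0.5 * (offsets / sigma) ** 2)
    w /= w.sum()
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)  # replicate borders
        out[i] = np.dot(w, signal[idx].astype(float))
    return out

# An ideal step edge: dark region at 0, bright region at 200
step = np.concatenate([np.zeros(50), np.full(50, 200.0)])
blurred = gaussian_1d(step)
# Flat regions are untouched, but the one-sample jump is smeared
# into a ramp several samples wide: the edge is blurred.
```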
The Idea
The basic idea underlying bilateral filtering is to do in the range of an image what traditional filters do in its domain. Two pixels can be close to one another, that is, occupy nearby spatial location, or they can be similar to one another, that is, have nearby values, possibly in a perceptually meaningful fashion. Consider a shift-invariant low-pass domain filter applied to an image:
The bold font for f and h emphasizes the fact that both input and output images may be multi-band. In order to preserve the DC component, it must be
Range filtering is similarly defined:
In this case, the kernel measures the photometric similarity between pixels. The normalization constant in this case is
The spatial distribution of image intensities plays no role in range filtering taken by itself. Combining intensities from the entire image, however, makes little sense, since the distribution of image values far away from x ought not to affect the final value at x. In addition, one can show that range filtering without domain filtering merely changes the color map of an image, and is therefore of little use. The appropriate solution is to combine domain and range filtering, thereby enforcing both geometric and photometric locality. Combined filtering can be described as follows:
with the normalization
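The three equations referenced in this section are missing from this copy of the page; restated from Tomasi and Manduchi's paper, domain filtering, range filtering, and the combined (bilateral) filter with its normalization are:

```latex
\mathbf{h}(\mathbf{x}) = k_d^{-1}(\mathbf{x}) \iint \mathbf{f}(\xi)\, c(\xi, \mathbf{x})\, d\xi ,
\qquad
k_d(\mathbf{x}) = \iint c(\xi, \mathbf{x})\, d\xi

\mathbf{h}(\mathbf{x}) = k_r^{-1}(\mathbf{x}) \iint \mathbf{f}(\xi)\, s(\mathbf{f}(\xi), \mathbf{f}(\mathbf{x}))\, d\xi ,
\qquad
k_r(\mathbf{x}) = \iint s(\mathbf{f}(\xi), \mathbf{f}(\mathbf{x}))\, d\xi

\mathbf{h}(\mathbf{x}) = k^{-1}(\mathbf{x}) \iint \mathbf{f}(\xi)\, c(\xi, \mathbf{x})\, s(\mathbf{f}(\xi), \mathbf{f}(\mathbf{x}))\, d\xi ,
\qquad
k(\mathbf{x}) = \iint c(\xi, \mathbf{x})\, s(\mathbf{f}(\xi), \mathbf{f}(\mathbf{x}))\, d\xi
```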
Combined domain and range filtering will be denoted as bilateral filtering. It replaces the pixel value at x with an average of similar and nearby pixel values. In smooth regions, pixel values in a small neighborhood are similar to each other, and the bilateral filter acts essentially as a standard domain filter, averaging away the small, weakly correlated differences between pixel values caused by noise. Consider now a sharp boundary between a dark and a bright region, as in figure 1(a).
When the bilateral filter is centered, say, on a pixel on the bright side of the boundary, the similarity function s assumes values close to one for pixels on the same side, and values close to zero for pixels on the dark side. The similarity function is shown in figure 1(b) for a 23x23 filter support centered two pixels to the right of the step in figure 1(a). The normalization term k(x) ensures that the weights for all the pixels add up to one. As a result, the filter replaces the bright pixel at the center by an average of the bright pixels in its vicinity, and essentially ignores the dark pixels. Conversely, when the filter is centered on a dark pixel, the bright pixels are ignored instead. Thus, as shown in figure 1(c), good filtering behavior is achieved at the boundaries, thanks to the domain component of the filter, and crisp edges are preserved at the same time, thanks to the range component.
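The behavior described above can be sketched directly. Below is a minimal, unoptimized 1-D illustration assuming NumPy; the function name bilateral_1d and the parameter values are mine, not from the paper.

```python
import numpy as np

def bilateral_1d(signal, sigma_d=3.0, sigma_r=30.0, radius=9):
    """Direct bilateral filter on a 1-D signal: each output sample is a
    weighted average of its neighbors, with weight = domain Gaussian
    (spatial closeness) * range Gaussian (photometric similarity)."""
    out = np.empty_like(signal, dtype=float)
    offsets = np.arange(-radius, radius + 1)
    domain_w = np.exp(-0.5 * (offsets / sigma_d) ** 2)
    for i in range(len(signal)):
        idx = np.clip(i + offsets, 0, len(signal) - 1)  # replicate borders
        neighbors = signal[idx].astype(float)
        range_w = np.exp(-0.5 * ((neighbors - signal[i]) / sigma_r) ** 2)
        w = domain_w * range_w
        out[i] = np.sum(w * neighbors) / np.sum(w)  # k(x) normalization
    return out

np.random.seed(0)
# Noisy step edge: dark region around 10, bright region around 200
step = np.concatenate([np.full(50, 10.0), np.full(50, 200.0)])
noisy = step + np.random.normal(0, 5, 100)
filtered = bilateral_1d(noisy)
# Noise within each region is averaged away, but samples across the
# step get near-zero range weight, so the edge stays sharp.
```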
The Gaussian Case
A simple and important case of bilateral filtering is shift-invariant Gaussian filtering, in which both the closeness function c and the similarity function s are Gaussian functions of the Euclidean distance between their arguments. More specifically, c is radially symmetric:
where
is the Euclidean distance. The similarity function s is perfectly analogous to c :
where
is a suitable measure of distance in intensity space. In the scalar case, this may be simply the absolute difference of the pixel difference or, since noise increases with image intensity, an intensity-dependent version of it. Just as this form of domain filtering is shift-invariant, the Gaussian range filter introduced above is insensitive to overall additive changes of image intensity. Of course, the range filter is shift-invariant as well.
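The equations referenced by the surrounding "where" clauses are missing from this copy; restated from Tomasi and Manduchi's paper, the Gaussian closeness and similarity functions are:

```latex
c(\xi, \mathbf{x}) = e^{-\frac{1}{2}\left(\frac{d(\xi,\mathbf{x})}{\sigma_d}\right)^2},
\qquad
d(\xi, \mathbf{x}) = \|\xi - \mathbf{x}\|

s(\xi, \mathbf{x}) = e^{-\frac{1}{2}\left(\frac{\delta(\mathbf{f}(\xi),\,\mathbf{f}(\mathbf{x}))}{\sigma_r}\right)^2},
\qquad
\delta(\boldsymbol{\phi}, \mathbf{f}) = \|\boldsymbol{\phi} - \mathbf{f}\|
```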
Experiments with Black-and-White Images
Figure 2 (a) and (b) show the potential of bilateral filtering for the removal of texture. The picture "simplification" illustrated by figure 2 (b) can be useful for data reduction without loss of overall shape features in applications such as image transmission, picture editing and manipulation, image description for retrieval.
[Figure 2: (a) original image; (b) bilaterally filtered image]
Bilateral filtering with parameters sd =3 pixels and sr =50 intensity values is applied to the image in figure 3 (a) to yield the image in figure 3 (b). Notice that most of the fine texture has been filtered away, and yet all contours are as crisp as in the original image. Figure 3 (c) shows a detail of figure 3 (a), and figure 3 (d) shows the corresponding filtered version. The two onions have assumed a graphics-like appearance, and the fine texture has gone. However, the overall shading is preserved, because it is well within the band of the domain filter and is almost unaffected by the range filter. Also, the boundaries of the onions are preserved.
Experiments with Color Images
For black-and-white images, intensities between any two gray levels are still gray levels. As a consequence, when smoothing black-and-white images with a standard low-pass filter, intermediate levels of gray are produced across edges, thereby producing blurred images. With color images, an additional complication arises from the fact that between any two colors there are other, often rather different colors. For instance, between blue and red there are various shades of pink and purple. Thus, disturbing color bands may be produced when smoothing across color edges. The smoothed image does not just look blurred, it also exhibits odd-looking, colored auras around objects.
Figure 4 (a) shows a detail from a picture with a red jacket against a blue sky. Even in this unblurred picture, a thin pink-purple line is visible, and is caused by a combination of lens blurring and pixel averaging. In fact, pixels along the boundary, when projected back into the scene, intersect both red jacket and blue sky, and the resulting color is the pink average of red and blue. When smoothing, this effect is emphasized, as the broad, blurred pink-purple area in figure 4 (b) shows. To address this difficulty, edge-preserving smoothing could be applied to the red, green, and blue components of the image separately. However, the intensity profiles across the edge in the three color bands are in general different. Smoothing the three color bands separately results in an even more pronounced pink and purple band than in the original, as shown in figure 4 (c). The pink-purple band, however, is not widened as in the standard-blurred version of figure 4 (b). A much better result can be obtained with bilateral filtering. In fact, a bilateral filter allows combining the three color bands appropriately, and measuring photometric distances between pixels in the combined space. Moreover, this combined distance can be made to correspond closely to perceived dissimilarity by using Euclidean distance in the CIE-Lab color space. This color space is based on a large body of psychophysical data concerning color-matching experiments performed by human observers. In this space, small Euclidean distances are designed to correlate strongly with the perception of color discrepancy as experienced by an "average" color-normal human observer. Thus, in a sense, bilateral filtering performed in the CIE-Lab color space is the most natural type of filtering for color images: only perceptually similar colors are averaged together, and only perceptually important edges are preserved. Figure 4 (d) shows the image resulting from bilateral smoothing of the image in figure 4 (a). 
The pink band has shrunk considerably, and no extraneous colors appear.
Figure 5 (c) shows the result of five iterations of bilateral filtering of the image in figure 5 (a). While a single iteration produces a much cleaner image (figure 5 (b)) than the original, and is probably sufficient for most image processing needs, multiple iterations have the effect of flattening the colors in an image considerably, but without blurring edges. The resulting image has a much smaller color map, and the effects of bilateral filtering are easier to see when displayed on a printed page. Notice the cartoon-like appearance of figure 5 (c). All shadows and edges are preserved, but most of the shading is gone, and no "new" colors are introduced by filtering.
References
[1] C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images", Proceedings of the 1998 IEEE International Conference on Computer Vision, Bombay, India.
[2] T. Boult, R.A. Melter, F. Skorina, and I. Stojmenovic, "G-neighbors", Proceedings of the SPIE Conference on Vision Geometry II, pages 96-109, 1993.
[3] R.T. Chin and C.L. Yeh, "Quantitative evaluation of some edge-preserving noise-smoothing techniques", Computer Vision, Graphics, and Image Processing, 23:67-91, 1983.
[4] L.S. Davis and A. Rosenfeld, "Noise cleaning by iterated local averaging", IEEE Transactions on Systems, Man, and Cybernetics, 8:705-710, 1978.
[5] R.E. Graham, "Snow-removal - a noise-stripping process for picture signals", IRE Transactions on Information Theory, 8:129-144, 1961.
[6] N. Himayat and S.A. Kassam, "Approximate performance analysis of edge preserving filters", IEEE Transactions on Signal Processing, 41(9):2764-77, 1993.
[7] T.S. Huang, G.J. Yang, and G.Y. Tang, "A fast two-dimensional median filtering algorithm", IEEE Transactions on Acoustics, Speech, and Signal Processing, 27(1):13-18, 1979.
[8] J.S. Lee, "Digital image enhancement and noise filtering by use of local statistics", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2(2):165-168, 1980.
[9] M. Nagao and T. Matsuyama, "Edge preserving smoothing", Computer Graphics and Image Processing, 9:394-407, 1979.
[10] P.M. Narendra, "A separable median filter for image noise smoothing", IEEE Transactions on Pattern Analysis and Machine Intelligence, 3(1):20-29, 1981.
[11] K.J. Overton and T.E. Weymouth, "A noise reducing preprocessing algorithm", Proceedings of the IEEE Computer Science Conference on Pattern Recognition and Image Processing, pages 498-507, Chicago, IL, 1979.
[12] P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, 12(7):629-639, 1990.
[13] G. Ramponi, "A rational edge-preserving smoother", Proceedings of the International Conference on Image Processing, volume 1, pages 151-154, Washington, DC, 1995.
[14] G. Sapiro and D.L. Ringach, "Anisotropic diffusion of color images", Proceedings of the SPIE, volume 2657, pages 471-482, 1996.
[15] D.C.C. Wang, A.H. Vagnucci, and C.C. Li, "A gradient inverse weighted smoothing scheme and the evaluation of its performance", Computer Vision, Graphics, and Image Processing, 15:167-181, 1981.
[16] G. Wyszecki and W.S. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, John Wiley and Sons, New York, NY, 1982.
[17] L. Yin, R. Yang, M. Gabbouj, and Y. Neuvo, "Weighted median filters: a tutorial", IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, 43(3):155-192, 1996.
When writing programs we often run into type conversions. Here is a summary of some common ones.
1. Between const char* (a C-style string) and string:
(1) A const char* can be assigned directly to a string, for example:
const char* pchar = "qwerasdf";
string str = pchar;
(2) A string is converted to a C-style string with the c_str() member function, for example:
string str = "qwerasdf";
const char* pchar = str.c_str();
2. A const char* can be assigned directly to a CString, for example:
const char* pchar = "qwerasdf";
CString str = pchar;
3. Converting between string and CString
A string cannot be assigned directly to a CString. Using the two conversions above, first obtain a const char* with c_str() and then assign, for example:
CString cstr;
string str = "asdasd";
cstr = str.c_str();
Going the other way, construct a string from the CString first, then call c_str() to reach const char*, for example:
CString cStr = "adsad";
string str(cStr);   // in an ANSI (MBCS) build, CString converts to const char*
const char* pchar = str.c_str();
4. double or int to string:
double temp;
stringstream strStream;
strStream << temp;
string ss = strStream.str();
5. string to double or int: use atoi(str.c_str()) and atof(str.c_str()).
As the examples above show, chaining these conversions makes transformations that would otherwise need complicated functions simple and easy to understand.