
const DS_EXTENSION_UUID = "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}";
function DS_InitObserver()
{
 DS_UninstallObserver.register();
}

var DS_UninstallObserver = {
 _uninstall : false,
 observe : function(subject, topic, data)
 {
  if(topic == "em-action-requested")
  { 
   subject.QueryInterface(Components.interfaces.nsIUpdateItem);
   if(subject.id == DS_EXTENSION_UUID)
   {
    if(data == "item-uninstalled")
    {
     this._uninstall = true;
    }
    else if(data == "item-cancel-action")
    {
     this._uninstall = false;
    }
   }
  }
  else if(topic == "quit-application-granted")
  {
   if(this._uninstall)
   {
    /* uninstall stuff. */
   }
   this.unregister();
  }
 },
 register : function()
 {
  var observerService = Components.classes["@mozilla.org/observer-service;1"].getService(Components.interfaces.nsIObserverService);
  observerService.addObserver(this, "em-action-requested", false);
  observerService.addObserver(this, "quit-application-granted", false);
 },
 unregister : function()
 {
  var observerService = Components.classes["@mozilla.org/observer-service;1"].getService(Components.interfaces.nsIObserverService);
  observerService.removeObserver(this,"em-action-requested");
  observerService.removeObserver(this,"quit-application-granted");
 }
}

window.addEventListener("load", DS_InitObserver, false);

That's all. It works well.

posted @ 2008-07-15 09:58 王者归来
 
Security changes

Chrome access

In prior versions of Firefox, any web page could load scripts or images from chrome using the chrome:// protocol. Among other things, this made it possible for sites to detect the presence of add-ons -- which could be used to breach a user's security by bypassing add-ons that add security features to the browser.

Firefox 3 only allows web content to access items in the chrome://browser/ and chrome://toolkit/ spaces. These files are intended to be accessible by web content. All other chrome content is now blocked from access by the web.

There is, however, a way for extensions to make their content web-accessible. They can specify a special flag in their chrome.manifest file, like this:

content mypackage location/ contentaccessible=yes

This shouldn't be something you need to do very often, but it's available for those rare cases in which it's needed. Note that it's possible that Firefox may alert the user that your extension uses the contentaccessible flag in this way, as it does constitute a potential security risk.

Note: Because Firefox 2 doesn't understand the contentaccessible flag (it will ignore the entire line containing the flag), if you want your add-on to be compatible with both Firefox 2 and Firefox 3, you should do something like this:
content mypackage location/
content mypackage location/ contentaccessible=yes
So, I use the following two lines in the chrome.manifest file.

content firefoxtoolbar jar:chrome/firefoxtoolbar.jar!/content/ 
content firefoxtoolbar jar:chrome/firefoxtoolbar.jar!/content/ contentaccessible=yes

Then, it works fine.
posted @ 2008-07-10 15:39 王者归来
 

FILE *fp;

NPError NPP_NewStream(NPP instance,
                      NPMIMEType type,
                      NPStream* stream,
                      NPBool seekable,
                      uint16* stype)
{
    MessageBox(NULL, "NPP_NewStream", "INFO", MB_OK);
    // Must be opened in binary ("b") mode, otherwise fwrite writes incorrect data.
    fp = fopen("C:\\FILE.BIN", "wb");
    if (instance == NULL)
    {
        return NPERR_INVALID_INSTANCE_ERROR;
    }
    return NPERR_NO_ERROR;
}

int32 NPP_WriteReady(NPP instance, NPStream *stream)
{
    if (instance == NULL)
    {
        return NPERR_INVALID_INSTANCE_ERROR;
    }
    return 1024;  // accept up to 1024 bytes per NPP_Write call
}

int32 NPP_Write(NPP instance, NPStream *stream, int32 offset, int32 len, void *buffer)
{
    if (instance == NULL)
    {
        return NPERR_INVALID_INSTANCE_ERROR;
    }
    int32 iWrSize = 0;
    if (fp)
    {
        // Write the bytes the browser handed us in this call.
        iWrSize = fwrite(buffer, 1, len, fp);
    }
    return iWrSize;
}

NPError NPP_DestroyStream(NPP instance, NPStream *stream, NPError reason)
{
    if (instance == NULL)
    {
        return NPERR_INVALID_INSTANCE_ERROR;
    }
    if (fp)
    {
        fclose(fp);
        fp = NULL;
    }
    return NPERR_NO_ERROR;
}

posted @ 2008-07-10 12:30 王者归来
 

1) Problem
for (CMyClasses::iterator it = Classes.begin(); it != Classes.end(); it++)
{
    HandleClass(xxx, &yyy, it);
}
Note: typedef vector<CMyClass *> CMyClasses; CMyClasses Classes;
HandleClass presumably expects a pointer to the vector element (a CMyClass **). On STL implementations where vector<T>::iterator is a class rather than a raw pointer (e.g. Visual C++ 2005 with checked iterators), passing the iterator itself does not compile.

2) Solution
Dereference the iterator, then take the address of the element:
for (CMyClasses::iterator it = Classes.begin(); it != Classes.end(); it++)
{
    HandleClass(xxx, &yyy, &*it);
}

http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=2313761&SiteID=1

posted @ 2008-07-04 15:15 王者归来
 

Building a Load-Balanced MySQL Cluster, Part 2

1. Set up two servers for load balancing (one primary, one backup):
      Load Balancer 1 (Primary): 192.168.0.13
      Load Balancer 2 (Backup): 192.168.0.14
      1) Configure IPVS
      On Load Balancer 1 and Load Balancer 2,
      run the following:
      yum install ipvsadm
      ifconfig eth0:0 192.168.0.15 netmask 255.255.255.255 broadcast 192.168.0.15 up (or set this
      directly in the ifcfg-eth0:0 config file)
      route add -host 192.168.0.15 dev eth0:0
      ipvsadm -A -t 192.168.0.15:3306 -s wlc
      ipvsadm -a -t 192.168.0.15:3306 -r 192.168.0.10
      ipvsadm -a -t 192.168.0.15:3306 -r 192.168.0.11
      service ipvsadm save (saves the IPVS table to /etc/sysconfig/ipvsadm)
      
      192.168.0.15 is the virtual IP of the whole MySQL Cluster; 3306 is the default port of the
      MySQL service.

      2) Configure the real servers
      SQL Node 1 / SQL Node 2:
      Add the following settings to /etc/sysctl.conf:
      net.ipv4.ip_forward = 1
      net.ipv4.conf.lo.arp_ignore = 1
      net.ipv4.conf.lo.arp_announce = 2      
      net.ipv4.conf.all.arp_ignore = 1
      net.ipv4.conf.all.arp_announce = 2      
      Then run these commands:
      ifconfig lo:0 192.168.0.15 netmask 255.255.255.255 broadcast 192.168.0.15 up
      route add -host 192.168.0.15 dev lo:0 
   
      3) Configure ldirectord
      
      4) Configure heartbeat

      5) Test
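Steps 3 to 5 are left as headings only. As a reference point, here is a minimal sketch of what the two config files might look like for this MySQL VIP, modeled on the Ultra Monkey setup reposted later on this blog. The hostname, login, and password are placeholders I made up, and ldirectord's MySQL check parameters should be verified against your ldirectord version:

```
# /etc/ha.d/ldirectord.cf (sketch)
checktimeout=10
checkinterval=5
quiescent=no
virtual=192.168.0.15:3306
        real=192.168.0.10:3306 gate
        real=192.168.0.11:3306 gate
        service=mysql
        login="ldirector"
        passwd="secret"
        database="test"
        request="SELECT 1"
        scheduler=wlc
        checktype=negotiate
        protocol=tcp

# /etc/ha.d/haresources (sketch; the hostname must match `uname -n` on the primary)
lb1.mydomain.com \
        ldirectord::ldirectord.cf \
        LVSSyncDaemonSwap::master \
        IPaddr2::192.168.0.15/24/eth0/192.168.0.255
```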

 

posted @ 2008-06-26 14:51 王者归来
 

Building a Load-Balanced MySQL Cluster, Part 1

I've spent the past five days studying MySQL clustering and load balancing, Googling and reading piles of material. Much of the useful documentation turned out to be in English, so English skills really do deserve attention from bosses and employees alike. After wading through the useful and the useless, I finally got this MySQL cluster working today.

1. Platform: all nodes run CentOS 5

2. Network layout (three machines):
      Data Node 1: 192.168.0.10
      Data Node 2: 192.168.0.11
                     
      NDB Management Node: 192.168.0.12

      SQL Node 1: 192.168.0.10
      SQL Node 2: 192.168.0.11

3. Download and install the software
      1) Download the packages (http://mirrors.24-7-solutions.net/pub/mysql/Downloads/MySQL-5.0/)
      NDB Management Node: MySQL-ndb-management-5.0.51a-0.glibc23.i386.rpm MySQL-ndb-tools-5.0.51a-0.glibc23.i386.rpm
      SQL Nodes: MySQL-server-5.0.51a-0.glibc23.i386.rpm MySQL-client-5.0.51a-0.glibc23.i386.rpm MySQL-shared-5.0.51a-0.glibc23.i386.rpm
      Data Nodes: MySQL-ndb-storage-5.0.51a-0.glibc23.i386.rpm
      2) Install the corresponding packages on each node; if package dependency problems are reported, resolve them.

4. Configuration files
      1) config.ini (NDB Management Node):
      [NDBD DEFAULT]
      NoOfReplicas=2
      # NDB Management Node
      [NDB_MGMD]
      id=1
      hostname=192.168.0.12
      datadir=/var/lib/mysql-cluster
      # Data Nodes
      [NDBD]
      id=2
      hostname=192.168.0.10
      [NDBD]
      id=3
      hostname=192.168.0.11
      # SQL Nodes
      [MYSQLD]
      id=4
      hostname=192.168.0.10
      [MYSQLD]
      id=5
      hostname=192.168.0.11

      The hostname entries for the SQL nodes are optional.
      
      2) my.cnf (Data Nodes and SQL Nodes)
      # Options for mysqld process:
      [mysqld]
      ndbcluster                      # run NDB storage engine   
      ndb-connectstring=192.168.0.12  # location of management server

      # Options for ndbd process:
      [mysql_cluster]                 
      ndb-connectstring=192.168.0.12  # location of management server

5. Initial startup
      NDB Management Node: ndb_mgmd -f /etc/config.ini
      Data Nodes: ndbd --initial
      SQL Nodes: service mysql restart  (note: restart, not start)
      Once everything is started, go back to the NDB management node to see the status of the whole MySQL Cluster:
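The original post showed a screenshot at this point. Running the `show` command in `ndb_mgm` on the management node prints status along these lines (illustrative output I reconstructed for this layout; version strings and the Master designation will vary):

```
ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.10  (Version: 5.0.51, Nodegroup: 0, Master)
id=3    @192.168.0.11  (Version: 5.0.51, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.12  (Version: 5.0.51)

[mysqld(API)]   2 node(s)
id=4    @192.168.0.10  (Version: 5.0.51)
id=5    @192.168.0.11  (Version: 5.0.51)
```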


6. Testing
      Run the following operations on SQL Node 1, in order:

      (to be continued)      
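The post breaks off before the test steps. A typical smoke test (my sketch, not the author's) creates an NDB-backed table through one SQL node and reads it through the other:

```
-- On SQL Node 1 (192.168.0.10):
CREATE DATABASE clustertest;
USE clustertest;
CREATE TABLE t (i INT) ENGINE=NDBCLUSTER;
INSERT INTO t VALUES (1);

-- On SQL Node 2 (192.168.0.11), the same table and row should be visible:
SELECT * FROM clustertest.t;
```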
     

posted @ 2008-06-20 18:06 王者归来
 

<转载>How to install Ultra Monkey LVS
in a 2-Node HA/LB Setup on CentOS/RHEL4

I'm the resident Linux guru at my job -- a mid-sized local company with a decent sized IT department. We like to install servers in clusters to improve our fault tolerance. Being the Linux guy in a shop where Windows servers outnumber Unix servers about 8:1, I wanted to one-up Windows' active-passive (high availability, or HA) cluster setup by doing a 2-node active-active (load balanced, or LB) cluster using the Linux Virtual Server (LVS) system. Our Linux distribution of choice is RedHat Enterprise Linux 4 (RHEL 4), and CentOS is the most compatible free clone thereof. Version 4 of these distros uses the Linux 2.6 kernel.

I was able to find a number of good tutorials on the web for configuring similar platforms, but nothing that quite matched what we wanted to do. Hence, I'm writing one now.

For these examples, let's assume that you have two physical web servers named lvs1 (192.168.0.1) and lvs2 (192.168.0.2) that you want to cluster together. They sit on a class C network, with a gateway router of 192.168.0.254. Those machines are known as the "real servers," since they are the ones that do the real work of serving up web pages. The outside world will reference those servers using a single hostname of vip1 (192.168.0.100). Either or both real servers will answer requests made to vip1. The determination of which real server will answer each request is made by the "ldirectord" package. In a larger setup, ldirectord would run on its own HA pair of servers, but in our 2-node setup, it jumps back and forth between the two real servers. The jumping back and forth (in case one director server completely dies) is handled by the "heartbeat" package.

The first step is to download all the necessary packages. All of them could be built from source, but I prefer to use RPM packages when available because they allow you to manage versions and dependencies much more easily. Since LVS doesn't officially ship with RHEL, the best place to get recent packages seems to be the CentOS repository at ftp://ftp.osuosl.org/pub/centos/4.4/extras/i386/RPMS/ or directly from the Linux-HA web site at http://linux-ha.org/download/index.html. There is a bug in the IPaddr2 script in all 2.x versions prior to 2.0.8, so until 2.0.8 makes it into the repositories, you'll have to apply this patch (relative to v2.0.7) to /usr/lib/ocf/resource.d/heartbeat/IPaddr2.

The exact package list required will vary depending on what's already installed on your system. At a minimum, you will need the following packages. The indenting indicates the package dependencies; i.e., most packages exist to support heartbeat and heartbeat-ldirectord.

  • heartbeat
    • heartbeat-pils
    • heartbeat-stonith
  • heartbeat-ldirectord
    • ipvsadm
    • perl-MailTools
      • perl-TimeDate
    • perl-Net-IMAP-Simple
    • perl-Net-IMAP-Simple-SSL
      • perl-IO-Socket-SSL
        • perl-Net-SSLeay
    • perl-Mail-POP3Client
    • perl-Mail-IMAPClient
    • perl-Authen-Radius
      • perl-Data-HexDump

Once the necessary packages are installed, you can start the configuration process. There's a pretty good writeup for installing Ultra Monkey in a 2-node HA/LB setup on RHEL3 or Debian here. I had a couple problems with that on RHEL4, though, which is why I'm writing my own tutorial.

First, you need to change a few kernel parameters by editing /etc/sysctl.conf. Ensure that the following variables are all set to the following values. Beware that some of them may be set to other values somewhere in the file, while others won't exist yet at all. These settings prevent the servers from advertising via ARP the VIP address that will later be assigned to each localhost interface. They also allow the machine acting as the director to forward packets to the other real server when necessary.

#========================================================================
# UltraMonkey requirements below
#
# Enable configuration of arp_ignore option
net.ipv4.conf.all.arp_ignore = 1
# When an arp request is received on eth0, only respond if that address is
# configured on eth0. In particular, do not respond if the address is
# configured on lo
net.ipv4.conf.eth0.arp_ignore = 1
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_ignore = 1
# Enable configuration of arp_announce option
net.ipv4.conf.all.arp_announce = 2
# When making an ARP request sent through eth0 Always use an address that
# is configured on eth0 as the source address of the ARP request.  If this
# is not set, and packets are being sent out eth0 for an address that is on
# lo, and an arp request is required, then the address on lo will be used.
# As the source IP address of arp requests is entered into the ARP cache on
# the destination, it has the effect of announcing this address.  This is
# not desirable in this case as addresses on lo on the real-servers should
# be announced only by the linux-director.
net.ipv4.conf.eth0.arp_announce = 2
# Ditto for eth1, add for all ARPing interfaces
#net.ipv4.conf.eth1.arp_announce = 2
# Enables packet forwarding
net.ipv4.ip_forward = 1
#
# UltraMonkey requirements above
#========================================================================

To make these changes take effect, either reboot the system or run:

# /sbin/sysctl -p

Next, you need to configure the loopback interface to have an alias for the VIP address so that the real servers will know to answer connections on that IP even when they're not acting as the director. Create a file named "/etc/sysconfig/network-scripts/ifcfg-lo:0" that contains IP information for the VIP and its network:

DEVICE=lo:0
IPADDR=192.168.0.100
NETMASK=255.255.255.255
NETWORK=192.168.0.0
BROADCAST=192.168.0.255
ONBOOT=yes
NAME=loopback

To turn on this new alias, run:

# /sbin/ifup lo

or

# service network start

This alias won't show up when running "ifconfig", a fact that caused me to waste several hours tracking down a problem that didn't even exist. Instead, you can verify its existence by running:

# ip addr sh lo
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet 192.168.0.100/32 brd 192.168.0.255 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever

Because we're using what's known as LVS-DR (direct routing), you need to make sure that the default gateway for the servers' primary network interface points to the proper gateway router rather than to the director. To do this, check for the GATEWAY entry in either "/etc/sysconfig/network" or "/etc/sysconfig/network-scripts/ifcfg-eth0" and ensure that it lists the proper IP:

# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=lvs1
GATEWAY=192.168.0.254

or

# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
TYPE=Ethernet
IPADDR=192.168.0.1
NETMASK=255.255.255.0
GATEWAY=192.168.0.254

You can verify this by running:

# ip route show 0/0
default via 192.168.0.254 dev eth0

Now it's time to configure the heartbeat package to handle failover of the VIP and the ldirectord service. There are three files in "/etc/ha.d" that must be configured to make things work. Each of these files should be identical on the two real servers. The packages will install default config files full of comments, but here is a reasonable set of configuration parameters. Everywhere you see a hostname listed, it must match the output of "uname -n" on the appropriate server. The "authkeys" file must be readable only by root for security purposes.

# cat /etc/ha.d/ha.cf
logfacility   local0
keepalive     1
deadtime      10
warntime      5
initdead      120
udpport       694
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node          lvs1.mydomain.com
node          lvs2.mydomain.com
ping          192.168.0.254
respawn hacluster /usr/lib/heartbeat/ipfail
crm off
# cat /etc/ha.d/authkeys
auth 2
2 sha1 ThisIsMyPassword
# cat /etc/ha.d/haresources
lvs1.mydomain.com       \
ldirectord::ldirectord.cf \
LVSSyncDaemonSwap::master \
IPaddr2::192.168.0.100/24/eth0/192.168.0.255
# cat /etc/ha.d/ldirectord.cf
checktimeout=15
checkinterval=5
autoreload=no
logfile="/var/log/ldirectord.log"
quiescent=no
virtual=192.168.0.100:80
fallback=127.0.0.1:80
real=192.168.0.1:80 gate
real=192.168.0.2:80 gate
service=http
request="ldirectord.html"
receive="It worked"
scheduler=rr
persistent=600
protocol=tcp
checktype=negotiate

The above files should be the same on both hosts. ldirectord.cf above is configured to check for a web server on port 80 which contains a file in the root directory named ldirectord.html containing only the string "It worked". Ldirectord checks the health of each real server by querying each web server for that file. If it gets back a file containing the receive string, it considers the server willing and able to receive public requests. There are built-in check mechanisms for several other popular services, too.

Now you need to make sure that heartbeat is started at boot time and that ldirectord is NOT started at boot by running this on both servers:

/sbin/chkconfig heartbeat on
/sbin/chkconfig ldirectord off
/sbin/service ldirectord stop
/sbin/service heartbeat start

You also need to ensure that your user services (httpd, mysql, etc) are running before you turn on heartbeat. Give it a minute to start up and stabilize, then check that things are running by typing:

lvs1# ip addr sh
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:50:56:8a:01:10 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.1/18 brd 192.168.0.255 scope global eth0
inet 192.168.0.100/18 brd 192.168.0.255 scope global secondary eth0
inet6 fe80::250:56ff:fe8a:110/64 scope link
valid_lft forever preferred_lft forever
lvs2# ip addr sh
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet 192.168.0.100/32 brd 192.168.0.255 scope global lo:0
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:50:56:8a:1f:39 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.2/18 brd 192.168.0.255 scope global eth0
inet6 fe80::250:56ff:fe8a:1f39/64 scope link
valid_lft forever preferred_lft forever

The first node you started up (the active director, lvs1 in this example) should have the VIP on eth0, while the second node you started should have it on lo. You can now run ipvsadm to check the status of the nodes and any incoming connections. Only the machine currently acting as director will list any useful info:

lvs2# ipvsadm -L -n
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
lvs1# ipvsadm -L -n
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.100:80 rr persistent 600
-> 192.168.0.2:80              Route   1      0          0
-> 192.168.0.1:80              Local   1      0          0

You can see above in the "weight" column that incoming requests will be split equally between the two real servers. If you stop the HTTP daemon on one of the servers, within a few seconds the weight for that server will drop to zero, and no more new requests will be directed to that server. To allow existing connections to finish politely while sending all new connections to the other box (if you're about to do some planned maintenance, for example), set the weight of the dying server to zero with the first command below. In order to make new connections from persistent hosts make the transition, you must set "quiescent=no" in ldirectord.cf. With "quiescent=yes", persistent hosts will continue trying to hit the dying server even after it dies, on the assumption that it will eventually come back.

# /sbin/ipvsadm -e -t 192.168.0.100:80 -r 192.168.0.2:80 -w 0
# /sbin/ipvsadm -L -n
IP Virtual Server version 1.2.0 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.100:80 rr persistent 600
-> 192.168.0.2:80              Route   0      0          0
-> 192.168.0.1:80              Local   1      0          0

If you don't want to remember that first ipvsadm command, you can (de)activate individual real services using this init script. Run "service cluster stop lvs2" to set the weight for lvs2 to zero. Determining the other functionality is left as an exercise for the reader.

last updated 12 March 2007
Obi-Wan (obiwan@jedi.com)
posted @ 2008-06-20 16:54 王者归来
 