
Hadoop 2.x HA: detailed configuration

The difference between hadoop-daemon.sh and hadoop-daemons.sh


hadoop-daemon.sh starts or stops a daemon on the local node only.

hadoop-daemons.sh runs the same operation on remote nodes (every host listed in the slaves file).
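As an illustration, hadoop-daemons.sh can be thought of as a loop that ssh-es to every host in the slaves file and runs hadoop-daemon.sh there. This is a simplified sketch, not the real script; the `SLAVES_FILE` variable and the `echo` of the ssh command are assumptions made so the sketch is harmless to run:

```shell
# Simplified model of hadoop-daemons.sh (assumption, not the real code):
# read the slaves file and invoke hadoop-daemon.sh on every listed host.
slaves_file=${SLAVES_FILE:-./slaves}

run_on_all_slaves() {
    while read -r host; do
        # echo the command instead of executing it, so the sketch is safe to try
        [ -n "$host" ] && echo "ssh $host hadoop-daemon.sh $*"
    done < "$slaves_file"
}
```

Running `hadoop-daemons.sh start journalnode` therefore starts a JournalNode on every host in slaves, which is how step 1 below uses it.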

1. Start the JournalNodes (JN)

hadoop-daemons.sh start journalnode

hdfs namenode -initializeSharedEdits   # copies the edits log to the JournalNodes; on first setup, run this after formatting the NameNode

Visit http://hadoop-yarn1:8480 to verify that the JournalNode is running.
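That check can be scripted as a quick liveness probe across all JournalNode hosts. This is a hedged sketch: port 8480 is the default `dfs.journalnode.http-address` port, the UP/DOWN wording is my own, and `CURL` is made overridable so the function can be exercised without a live cluster:

```shell
# Probe a JournalNode's web UI (default dfs.journalnode.http-address port 8480).
CURL=${CURL:-curl}

jn_status() {
    # -s silent, -f fail on HTTP errors; discard the body, keep only the verdict
    if "$CURL" -sf -o /dev/null --max-time 5 "http://$1:8480/"; then
        echo "$1 UP"
    else
        echo "$1 DOWN"
    fi
}

# Example (assumed hostnames from the configs below):
# for host in hadoop-yarn1 hadoop-yarn2 hadoop-yarn3; do jn_status "$host"; done
```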

2. Format the NameNode and start the Active NameNode

(1) Format the NameNode on the Active NameNode host

hdfs namenode -format
hdfs namenode -initializeSharedEdits

This completes JournalNode initialization.

(2) Start the Active NameNode

hadoop-daemon.sh start namenode

3. Start the Standby NameNode

(1) Bootstrap the Standby node on the Standby NameNode host

This copies the Active NameNode's metadata to the Standby NameNode:

hdfs namenode -bootstrapStandby

(2) Start the Standby node

hadoop-daemon.sh start namenode

4. Start automatic failover

Create a monitoring node (znode) /hadoop-ha/ns1 in ZooKeeper, then start HDFS:

hdfs zkfc -formatZK
start-dfs.sh

5. Check the NameNode state

hdfs haadmin -getServiceState nn1
active
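The state check can be scripted to discover which NameNode is currently active. In this sketch, the IDs nn1/nn2 come from `dfs.ha.namenodes.ns1` in the configs below, and `HDFS_CMD` is an assumed override hook (not part of Hadoop) so the function can be tried without a live cluster:

```shell
# Print the ID of the currently active NameNode (nn1 or nn2), if any.
HDFS_CMD=${HDFS_CMD:-hdfs}

active_namenode() {
    for nn in nn1 nn2; do
        # haadmin -getServiceState prints "active" or "standby"
        state=$("$HDFS_CMD" haadmin -getServiceState "$nn" 2>/dev/null)
        if [ "$state" = "active" ]; then
            echo "$nn"
            return 0
        fi
    done
    return 1   # no active NameNode found
}
```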

6. Trigger a failover

hdfs haadmin -failover nn1 nn2

Note: with dfs.ha.automatic-failover.enabled set to true, the HA admin commands may refuse manual state changes to avoid split-brain; a common way to exercise automatic failover instead is to kill the active NameNode process and verify that the standby becomes active.

Detailed configuration files

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns1</value>
    </property>
    
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
    </property>
    
    <property>
        <name>fs.trash.interval</name>
        <!-- trash retention in minutes; arithmetic such as 60*24 is not parsed, so write 1440 -->
        <value>1440</value>
    </property>
    
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
    </property>
    
    <property>  
        <name>hadoop.http.staticuser.user</name>
        <value>yuanhai</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    
    <property>
        <name>dfs.nameservices</name>
        <value>ns1</value>
    </property>
    
    <property>
        <name>dfs.ha.namenodes.ns1</name>
        <value>nn1,nn2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1.nn1</name>
        <value>hadoop-yarn1:8020</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1.nn2</name>
        <value>hadoop-yarn2:8020</value>
    </property>
    
    <property>
        <name>dfs.namenode.http-address.ns1.nn1</name>
        <value>hadoop-yarn1:50070</value>
    </property>
    
    <property>
        <name>dfs.namenode.http-address.ns1.nn2</name>
        <value>hadoop-yarn2:50070</value>
    </property>
    
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
    </property>
    
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
    </property>
    
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    
    <property>
        <name>dfs.client.failover.proxy.provider.ns1</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/hadoop/.ssh/id_rsa</value>
    </property>
    
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
    

<!--     <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop-yarn.dragon.org:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-yarn.dragon.org:50090</value>
    </property>
    
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/name</value>
    </property>
    
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
    </property>
    
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/data</value>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
    </property>
    
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>${dfs.namenode.checkpoint.dir}</value>
    </property>
-->    
</configuration>
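One caveat about the fencing settings above: sshfence requires working passwordless SSH from each NameNode to the other using the listed private key, and a failover stalls if fencing cannot succeed. A commonly used fallback (shown as a hedged sketch, not a production recommendation) is to append a shell fencing method that always succeeds; fencing methods are newline-separated in the value, and with QJM this is generally considered acceptable because the JournalNodes already prevent two simultaneous writers:

```xml
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence
shell(/bin/true)</value>
</property>
```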

slaves

hadoop-yarn1
hadoop-yarn2
hadoop-yarn3

yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop-yarn1</value>
    </property> 
    
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>

    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property> 

</configuration>

mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-yarn1:10020</value>
        <description>MapReduce JobHistory Server IPC host:port</description>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-yarn1:19888</value>
        <description>MapReduce JobHistory Server Web UI host:port</description>
    </property>
    
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    
</configuration>

hadoop-env.sh

export JAVA_HOME=/opt/modules/jdk1.6.0_24

Related articles:

http://blog.csdn.net/zhangzhaokun/article/details/17892857
