Slaytanic | In Code We Trust | https://xianglei.tech

DS means Data Scientist? NO!
https://xianglei.tech/archives/xianglei/2020/01/5588.html (Fri, 10 Jan 2020 06:51:01 +0000)

These "Data Scientists" at our dear first party could fill my whole year's joke book. (Updated at any time)

  1. DS: Why does my spark job take so much time?
    Me: ?
    DS:
    Me: No, that's a resident YARN container, not your spark job!
  2. DS: Why does my spark job report an error?
    Me: Give me your code and a screenshot.
    DS: 
    Me: No, you should not stop the spark context before you run spark. I mean, you should not ask someone to answer your question after you murdered him!
  3. DS: Why can't I log in to the SSO system?
    (The SSO was built on their system, using OAuth2; username and password authentication happened entirely on their servers. I only took the authorization code and userinfo after they logged in.)
    Me: It's not my business; please contact your server admin if you are sure the username and password are correct.
    DS: I don't care, you should solve this problem.
    Me: Sorry ma'am, I can't fix your company's servers.
    DS: I don't care, you must fix it.
    Me: Alright, please give me the root password of your SSO server.
    DS: I don't know what you are talking about, you should fix it.
    Another DS: Hey, it's not their problem, you should contact our infosec team.
    DS: Ouch.
  4. Me: You've written such beautiful PigLatinic Python.
    DS: Thank you, I thought so.

DS means "Data Scientist"? No, I call them "Definitively Stupid".
jupyterlab and pyspark2 integration in 1 minute
https://xianglei.tech/archives/xianglei/2019/04/5567.html (Tue, 09 Apr 2019 10:08:00 +0000)

As we use CDH 5.14.0 on our hadoop cluster, the highest Spark version supported is 2.1.3, so this post records how I installed pyspark 2.1.3 and integrated it with JupyterLab.

Environment:
spark 2.1.3
CDH 5.14.0 – hive 1.1.0
Anaconda3 – python 3.6.8

  1. Add these exports to spark-env.sh
    export PYSPARK_PYTHON=/opt/anaconda3/bin/python
    export PYSPARK_DRIVER_PYTHON=/opt/anaconda3/bin/jupyter-lab
    export PYSPARK_DRIVER_PYTHON_OPTS='  --ip=172.16.191.30 --port=8890'
  2. install sparkmagic
    pip install sparkmagic
  3. Use conda or pip to downgrade ipykernel to 4.9.0, because ipykernel 5.x does not work with sparkmagic and throws a Future-related exception.
    https://github.com/jupyter-incubator/sparkmagic/issues/492
  4. /opt/spark-2.1.3/bin/pyspark --master yarn

If you need to run it in the background, use nohup.

If necessary, add a kernel JSON at /usr/share/jupyter/kernels/pyspark2 or /usr/local/share/jupyter/kernels/pyspark2, with content like the following:
{
  "argv": [
    "python3.6",
    "-m",
    "ipykernel_launcher",
    "-f",
    "{connection_file}"
  ],
  "display_name": "Python3.6+PySpark2.1",
  "language": "python",
  "env": {
    "PYSPARK_PYTHON": "/opt/anaconda3/bin/python",
    "SPARK_HOME": "/opt/spark-2.1.3-bin-hadoop2.6",
    "HADOOP_CONF_DIR": "/etc/hadoop/conf",
    "HADOOP_CLIENT_OPTS": "-Xmx2147483648 -XX:MaxPermSize=512M -Djava.net.preferIPv4Stack=true",
    "PYTHONPATH": "/opt/spark-2.1.3-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip:/opt/spark-2.1.3-bin-hadoop2.6/python/",
    "PYTHONSTARTUP": "/opt/spark-2.1.3-bin-hadoop2.6/python/pyspark/shell.py",
    "PYSPARK_SUBMIT_ARGS": " --jars /opt/spark-2.1.3-bin-hadoop2.6/jars/greenplum-spark_2.11-1.6.2.jar --master yarn --deploy-mode client --name JuPysparkHub pyspark-shell",
    "JAVA_HOME": "/opt/jdk1.8.0_141"
  }
}

Another problem: in pyspark, sqlContext could not access the remote Hive metastore, and it raised no exception at all; running show databases in pyspark always returned only default. It turned out that spark2's jars directory contained a hive-exec-1.1.0-cdh5.14.0.jar; after deleting that jar file, everything worked.
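After removing the jar, a quick check from the pyspark shell started above (a minimal sketch; it assumes the spark session object is available, as it is in the pyspark shell):

# Should now list the databases from the remote metastore instead of only "default".
spark.sql("show databases").show()
# Tables inside one of those databases should be visible as well.
spark.sql("show tables in default").show()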

Build your own IPv6 ladder (自己动手打造ipv6梯子)
https://xianglei.tech/archives/xianglei/2018/12/2866.html (Wed, 12 Dec 2018 05:40:25 +0000)

The following is intended for readers with some experience in networking and in building their own ladders (proxies).

Preparation:
One Vultr US instance, hereafter HAA. When creating it, choose support for both IPv6 and v4; the lowest price as of 2018-12-12 is $3.50. Pick a location near the west coast, Seattle or LA both work and give better speed; in my own home-network tests, Silicon Valley felt mediocre.

Vultr EU instances, Paris, Amsterdam or Frankfurt all work, as many as you like; choose IPv6 only, priced at $2.50. Or Scaleway in France, lowest price EUR 1.99. Hereafter SSs.

Optionally, one Aliyun (or other cloud) instance, mainly to take advantage of its backbone connectivity; create it as pay-as-you-go with the lowest spec.

Getting started:
Deploy HAProxy on the US HAA host, configured as follows:

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    tcp # TCP layer-4 proxying (the default http mode is layer 7)
        option  redispatch
        option  abortonclose
        timeout connect 6000
        timeout client  60000
        timeout server  60000
        timeout check   2000

listen  admin_stats
        bind    *:1090 # stats page; binds IPv6 [::] and IPv4 0.0.0.0 on port 1090
        mode    http
        maxconn 10
        stats   refresh 30s
        stats   uri     /
        stats   realm   HAProxy
        stats   auth    haproxy:haproxy

listen ss
        bind    *:8388 # binds both IPv6 [::] and IPv4 0.0.0.0 on port 8388
        mode    tcp
        balance leastconn
        maxconn 8192
        server  Amsterdam-vultr                                         2001:19f0:5001:27ce:xxxx:xxx:xxxx:xxxx:8388      check   inter   180000  rise    1       fall    2 # reverse-proxies port 8388 on the Amsterdam host's IPv6 address
        server  Amsterdam-scaleway                                      2001:bc8:xxxx:xxxx::1:8388                       check   inter   180000  rise    1       fall    2
        server  Seattle-vultr                                           2001:19f0:8001:1396:xxxx:xxx:xxxx:xxxx:8388      check   inter   180000  rise    1       fall    2

Then deploy and set up an ss server on each of the SSs hosts; there are plenty of tutorials for that. When configuring the SSs, bind the IP to ::, for example:

{
  "server":"::",
  "port_password":{
    "995":"xxxxxxxxxx"
  },
  "timeout":300,
  "method":"aes-256-cfb",
  "workers": 10
}

One thing to note: as far as I have seen, the Python implementation of SS supports binding to IPv6, while the C implementation apparently does not (I have not tested this in depth), so I use the Python version of SS here.
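A quick way to confirm from the HAA host that an SS backend is actually reachable over IPv6 (a minimal sketch; the address and port below are placeholders for one of your SSs):

import socket

# Placeholder IPv6 address and port of one SS backend; replace with yours.
host, port = "2001:db8::1", 8388

# AF_INET6 forces the check over IPv6, which is exactly how HAProxy will reach it.
sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
sock.settimeout(5)
try:
    sock.connect((host, port))
    print("IPv6 TCP connection to [%s]:%d succeeded" % (host, port))
finally:
    sock.close()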

Start SS and start HAProxy, and it is ready to use. The advantages: IPv6-only machines are cheaper, are less likely to be blocked by the GFW, are fast, and are spread around the globe; the connection speed mainly depends on how well your home line reaches the US server. If money is not tight, you can add a domestic Aliyun instance at the lowest spec, run HAProxy on it, and reverse-proxy the US HAA again to take advantage of its backbone connectivity. On my 100 Mbps home line, going through Aliyun to the US server, video sites like YouTube peak at 2-3 MB/s and 4K HD playback is no problem.

All servers use the lowest spec, running Ubuntu 18.04 or Debian stretch.

Using py-SparkSQL2 in Zeppelin to query hdfs encryption data
https://xianglei.tech/archives/xianglei/2018/11/2849.html (Fri, 23 Nov 2018 09:50:59 +0000)

%spark2_1.pyspark
from pyspark.sql import SQLContext
from pyspark.sql import HiveContext, Row
from pyspark.sql.types import *
import pandas as pd
import pyspark.sql.functions as F

trial_pps_order = spark.read.parquet('/tmp/exia/trial_pps_select')
pps_order = spark.read.parquet('/tmp/exia/orders_pps_wc_member')
member_info = spark.read.parquet('/tmp/exia/member_info')
# newHiveContext=HiveContext(sc)

query_T="""
select
    *
from crm.masterdata_hummingbird_product_mst_banner_v1
where brand_name = 'pampers'
"""
product_mst=spark.sql(query_T)
product_mst.show()

%spark2_1.pyspark: a custom interpreter in Zeppelin 0.7.2.
crm.masterdata_hummingbird_product_mst_banner_v1: a Hive table whose data is stored in an HDFS encryption zone.

The code throws the exception below:

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-7483288776781667654.py", line 367, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-7483288776781667654.py", line 360, in <module>
    exec(code, _zcUserQueryNameSpace)
  File "<stdin>", line 14, in <module>
  File "/usr/lib/spark-2.1.3-bin-hadoop2.6/python/pyspark/sql/dataframe.py", line 318, in show
    print(self._jdf.showString(n, 20))
  File "/usr/lib/spark-2.1.3-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/lib/spark-2.1.3-bin-hadoop2.6/python/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/usr/lib/spark-2.1.3-bin-hadoop2.6/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o76.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 3.0 failed 4 times, most recent failure: Lost task 0.3 in stage 3.0 (TID 6, pg-dmp-slave28.hadoop, executor 1): java.io.IOException: No KeyProvider is configured, cannot access an encrypted file
	at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1338)
	at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1414)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
	at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
	at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:102)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1455)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1443)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1442)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1442)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
	at scala.Option.foreach(Option.scala:257)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1670)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1625)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1614)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1928)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1941)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1954)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:333)
	at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
	at org.apache.spark.sql.Dataset$$anonfun$org$apache$spark$sql$Dataset$$execute$1$1.apply(Dataset.scala:2390)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:57)
	at org.apache.spark.sql.Dataset.withNewExecutionId(Dataset.scala:2792)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$execute$1(Dataset.scala:2389)
	at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collect(Dataset.scala:2396)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2132)
	at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2131)
	at org.apache.spark.sql.Dataset.withTypedCallback(Dataset.scala:2822)
	at org.apache.spark.sql.Dataset.head(Dataset.scala:2131)
	at org.apache.spark.sql.Dataset.take(Dataset.scala:2346)
	at org.apache.spark.sql.Dataset.showString(Dataset.scala:248)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:238)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.IOException: No KeyProvider is configured, cannot access an encrypted file
	at org.apache.hadoop.hdfs.DFSClient.decryptEncryptedDataEncryptionKey(DFSClient.java:1338)
	at org.apache.hadoop.hdfs.DFSClient.createWrappedInputStream(DFSClient.java:1414)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:304)
	at org.apache.hadoop.hdfs.DistributedFileSystem$3.doCall(DistributedFileSystem.java:298)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:298)
	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:766)
	at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
	at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:257)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:256)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:216)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:102)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
	at org.apache.spark.scheduler.Task.run(Task.scala:100)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	... 1 more

So, Spark uses the hive-site.xml in its conf directory (for example /usr/lib/spark-2.1.0-bin-hadoop2.6/conf) to talk to Hive, and the settings in that hive-site.xml are passed on to Hive.

Solution:

Add the encryption-related properties to hive-site.xml:

  <property>
    <name>hadoop.security.key.provider.path</name>
    <value>kms://http@dmp-master2.hadoop:16000/kms</value>
  </property>
  <property>
    <name>dfs.encrypt.data.transfer.algorithm</name>
    <value>3des</value>
  </property>
  <property>
    <name>dfs.encrypt.data.transfer.cipher.suites</name>
    <value>AES/CTR/NoPadding</value>
  </property>
  <property>
    <name>dfs.encrypt.data.transfer.cipher.key.bitlength</name>
    <value>256</value>
  </property>
  <property>
    <name>dfs.encryption.key.provider.uri</name>
    <value>kms://http@dmp-master2.hadoop:16000/kms</value>
  </property>
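After restarting the interpreter, a quick sanity check from the same %spark2_1.pyspark paragraph (a minimal sketch; it reads the Hadoop configuration the running session actually sees, so you can confirm the key-provider settings were picked up):

# The Hadoop Configuration object is reachable through the py4j-exposed JavaSparkContext.
hconf = spark.sparkContext._jsc.hadoopConfiguration()
print(hconf.get("hadoop.security.key.provider.path"))
print(hconf.get("dfs.encryption.key.provider.uri"))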

 

Kerberos Master/Slave HA configuration
https://xianglei.tech/archives/xianglei/2018/06/2831.html (Wed, 20 Jun 2018 11:13:49 +0000)

Since we only have one KDC on our cluster, it is an SPOF (Single Point of Failure), so I have to set up a Master/Slave KDC pair to avoid this problem.

Here are the steps to convert the single KDC into an HA setup.

Description
master2.hadoop is the pre-existing KDC; master1.hadoop will get a new KDC installation.

  1. Install KDC on new node(master1.hadoop).
    yum -y install krb5-server
  2. Change the config file on the original KDC (master2.hadoop)

    [libdefaults]
    default_realm = PG.COM
    dns_lookup_kdc = false
    dns_lookup_realm = false
    ticket_lifetime = 7d
    renew_lifetime = 30d
    forwardable = true
    #default_tgs_enctypes = rc4-hmac
    #default_tkt_enctypes = rc4-hmac
    #permitted_enctypes = rc4-hmac
    udp_preference_limit = 1
    kdc_timeout = 3000
    [realms]
    PG.COM =
    {
    kdc = master2.hadoop
    kdc = master1.hadoop
    admin_server = master2.hadoop
    }
    [logging]
    default = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log
    kdc = FILE:/var/log/krb5kdc.log

    The commented-out enctype lines are very important on CentOS 6 (see the note in step 7); kdc = master1.hadoop is the newly added line.

  3. On new node(master1.hadoop)
    scp master2.hadoop:/var/kerberos/krb5kdc/kdc.conf /var/kerberos/krb5kdc/
    scp master2.hadoop:/var/kerberos/krb5kdc/kadm5.acl /var/kerberos/krb5kdc/
    scp master2.hadoop:/var/kerberos/krb5kdc/.k5.PG.COM /var/kerberos/krb5kdc/
    scp master2.hadoop:/etc/krb5.conf /etc/
    kadmin
    : ank host/master1.hadoop
    : xst host/master1.hadoop
  4. On old node(master2.hadoop)
    kadmin
    : ank host/master2.hadoop
    : xst host/master2.hadoop
  5. And then back to new node(master1.hadoop)

    vi /var/kerberos/krb5kdc/kpropd.acl
    and insert two lines

    host/master1.hadoop@PG.COM
    host/master2.hadoop@PG.COM

    and then

    kdb5_util stash
    kpropd -S
  6. Jump to old node(master2.hadoop)
    kdb5_util dump /var/kerberos/krb5kdc/kdc.dump
    kprop -f /var/kerberos/krb5kdc/kdc.dump master1.hadoop

    When you see "Database propagation to master1.hadoop: SUCCEEDED", it means everything has gone well and the slave can be started now. (A small sketch for automating this dump-and-propagate step follows after these steps.)

  7. Last step on new node(master1.hadoop)
    service krb5kdc start

    The meaning of the commented-out enctype lines in step two is:
    CentOS 6.x with Kerberos 1.10.x has a bug that causes the KDB sync to fail; the problem appears when you use rc4 as the default enctype, so you must comment those lines out to avoid it. kprop does not work with the rc4 encryption type.

    https://github.com/krb5/krb5/commit/8d01455ec9ed88bd3ccae939961a6e123bb3d45f

    It was fixed in Kerberos 1.11.1.

    Finally: of course, you should restart the krb5kdc and kadmin services.
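To keep the slave up to date, the dump-and-propagate in step 6 has to be repeated periodically (for example from cron on master2.hadoop). A minimal sketch of such a script; the dump path and slave hostname simply mirror the commands above:

#!/usr/bin/env python
# Periodically sync the Kerberos database from the master KDC to the slave.
import subprocess

DUMP_FILE = "/var/kerberos/krb5kdc/kdc.dump"
SLAVE = "master1.hadoop"

# Dump the KDB, then push the dump to the slave's kpropd.
subprocess.check_call(["kdb5_util", "dump", DUMP_FILE])
subprocess.check_call(["kprop", "-f", DUMP_FILE, SLAVE])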

Use encrypted password in zeppelin and some other security shit
https://xianglei.tech/archives/xianglei/2018/01/2823.html (Mon, 29 Jan 2018 09:25:27 +0000)

For security reasons, we cannot expose user passwords in Zeppelin, so we have to write hashed passwords into shiro.ini. So how do we enable hashed passwords in Zeppelin?

Open shiro.ini and, in the [users] section, replace each user's password with its SHA-256 hash, such as
xianglei = ba4ae0f17be1449007b955f97f7d1ca967ec72da3f39047adcc3c62eb02524b5, admin,admaster
Many tools can generate this hash, so I won't discuss password generation here (a small sketch follows below).
Then, in the [main] section, add these lines:
sha256Matcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
iniRealm.credentialsMatcher = $sha256Matcher
And then restart Zeppelin. Done!
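For example, one way to produce the SHA-256 hash (a minimal sketch; any SHA-256 tool gives the same result):

import hashlib

# Hex-encoded SHA-256 digest of the plaintext password; paste this into shiro.ini.
password = "your-password-here"
print(hashlib.sha256(password.encode("utf-8")).hexdigest())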


A file system security error in hive jdbc

java.lang.NoClassDefFoundError: com/google/protobuf/ProtocolMessageEnum
	at java.lang.ClassLoader.defineClass1(Native Method)
	at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
	at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
	at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
	at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at java.lang.Class.getDeclaredConstructors0(Native Method)
	at java.lang.Class.privateGetDeclaredConstructors(Class.java:2595)
	at java.lang.Class.getConstructor0(Class.java:2895)
	at java.lang.Class.getConstructor(Class.java:1731)
	at org.apache.hive.service.cli.HiveSQLException.newInstance(HiveSQLException.java:243)
	at org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:209)
	at org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:235)
	at org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:235)
	at org.apache.hive.service.cli.HiveSQLException.toStackTrace(HiveSQLException.java:235)
	at org.apache.hive.service.cli.HiveSQLException.toCause(HiveSQLException.java:196)
	at org.apache.hive.service.cli.HiveSQLException.<init>(HiveSQLException.java:108)
	at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:241)
	at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:227)
	at org.apache.hive.jdbc.HiveQueryResultSet.next(HiveQueryResultSet.java:364)
	at org.apache.commons.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:191)
	at org.apache.commons.dbcp2.DelegatingResultSet.next(DelegatingResultSet.java:191)
	at org.apache.zeppelin.jdbc.JDBCInterpreter.getResults(JDBCInterpreter.java:478)
	at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:592)
	at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:692)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:101)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:616)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
	at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: com.google.protobuf.ProtocolMessageEnum
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 42 more

It looks like a jar dependency error, but it is actually an HDFS permission error.
The real reason: the file that Hive reads is owned by one user and its permissions are set to 640, and the error appears when another user tries to read it. In Hive, though, it does not look like a permission problem; it surfaces as if a jar were missing on the machine.


How to disable some keywords in zeppelin?
Well, sometimes we must disable certain keywords in Zeppelin for special scenarios, and the only way to do it is to change the code.
In zeppelin-interpreter/src/main/java/org/apache/zeppelin/interpreter/remote/RemoteInterpreterServer.java
we add this code:

public static HashSet<String[]> blockedCodeString = new HashSet<>();
  static {
    blockedCodeString.add(new String[]{"import", "os"});
    blockedCodeString.add(new String[]{"import", "sys"});
    blockedCodeString.add(new String[]{"import", "subprocess"});
    blockedCodeString.add(new String[]{"import", "pty"});
    blockedCodeString.add(new String[]{"import", "socket"});
    blockedCodeString.add(new String[]{"import", "commands"});
    blockedCodeString.add(new String[]{"import", "paramiko"});
    blockedCodeString.add(new String[]{"import", "pexpect"});
    blockedCodeString.add(new String[]{"import", "BaseHTTPServer"});
    blockedCodeString.add(new String[]{"import", "ConfigParser"});
    blockedCodeString.add(new String[]{"import", "platform"});
    blockedCodeString.add(new String[]{"import", "popen2"});
    blockedCodeString.add(new String[]{"import", "copy"});
    blockedCodeString.add(new String[]{"import", "SocketServer"});
    blockedCodeString.add(new String[]{"import", "sysconfig"});
    blockedCodeString.add(new String[]{"import", "tty"});
    blockedCodeString.add(new String[]{"import", "xmlrpclib"});
    blockedCodeString.add(new String[]{"etc"});
    blockedCodeString.add(new String[]{"boot"});
    blockedCodeString.add(new String[]{"dev"});
    blockedCodeString.add(new String[]{"lib"});
    blockedCodeString.add(new String[]{"lib64"});
    blockedCodeString.add(new String[]{"lost+found"});
    blockedCodeString.add(new String[]{"mnt"});
    blockedCodeString.add(new String[]{"proc"});
    blockedCodeString.add(new String[]{"root"});
    blockedCodeString.add(new String[]{"sbin"});
    blockedCodeString.add(new String[]{"selinux"});
    blockedCodeString.add(new String[]{"usr"});
    blockedCodeString.add(new String[]{"passwd"});
    blockedCodeString.add(new String[]{"useradd"});
    blockedCodeString.add(new String[]{"userdel"});
    blockedCodeString.add(new String[]{"rm"});
    blockedCodeString.add(new String[]{"akka "});
    blockedCodeString.add(new String[]{"groupadd"});
    blockedCodeString.add(new String[]{"groupdel"});
    blockedCodeString.add(new String[]{"mkdir"});
    blockedCodeString.add(new String[]{"rmdir"});
    blockedCodeString.add(new String[]{"ping"});
    blockedCodeString.add(new String[]{"nc"});
    blockedCodeString.add(new String[]{"telnet"});
    blockedCodeString.add(new String[]{"ftp"});
    blockedCodeString.add(new String[]{"scp"});
    blockedCodeString.add(new String[]{"ssh"});
    blockedCodeString.add(new String[]{"ps"});
    blockedCodeString.add(new String[]{"hostname"});
    blockedCodeString.add(new String[]{"uname"});
    blockedCodeString.add(new String[]{"vim"});
    blockedCodeString.add(new String[]{"nano"});
    blockedCodeString.add(new String[]{"top"});
    blockedCodeString.add(new String[]{"cat"});
    blockedCodeString.add(new String[]{"more"});
    blockedCodeString.add(new String[]{"less"});
    blockedCodeString.add(new String[]{"chkconfig"});
    blockedCodeString.add(new String[]{"service"});
    blockedCodeString.add(new String[]{"netstat"});
    blockedCodeString.add(new String[]{"iptables"});
    blockedCodeString.add(new String[]{"ip"});
    blockedCodeString.add(new String[]{"route "});
    blockedCodeString.add(new String[]{"curl"});
    blockedCodeString.add(new String[]{"wget"});
    blockedCodeString.add(new String[]{"sysctl"});
    blockedCodeString.add(new String[]{"touch"});
    blockedCodeString.add(new String[]{"scala.sys.process"});
    blockedCodeString.add(new String[]{"0.0.0.0"});
    blockedCodeString.add(new String[]{"58.215.191"});
    blockedCodeString.add(new String[]{"git"});
    blockedCodeString.add(new String[]{"svn"});
    blockedCodeString.add(new String[]{"hg"});
    blockedCodeString.add(new String[]{"cvs"});
    blockedCodeString.add(new String[]{"exec"});
    blockedCodeString.add(new String[]{"ln"});
    blockedCodeString.add(new String[]{"kill"});
    blockedCodeString.add(new String[]{"rsync"});
    blockedCodeString.add(new String[]{"lsof"});
    blockedCodeString.add(new String[]{"crontab"});
    blockedCodeString.add(new String[]{"libtool"});
    blockedCodeString.add(new String[]{"automake"});
    blockedCodeString.add(new String[]{"autoconf"});
    blockedCodeString.add(new String[]{"make"});
    blockedCodeString.add(new String[]{"gcc"});
    blockedCodeString.add(new String[]{"cc"});
  }
......

@Override
  public RemoteInterpreterResult interpret(String noteId, String className, String st,
      RemoteInterpreterContext interpreterContext) throws TException {
    if (logger.isDebugEnabled()) {
      logger.debug("st:\n{}", st);
    }
    Interpreter intp = getInterpreter(noteId, className);
    InterpreterContext context = convert(interpreterContext);
    context.setClassName(intp.getClassName());

    Scheduler scheduler = intp.getScheduler();
    InterpretJobListener jobListener = new InterpretJobListener();
    InterpretJob job = new InterpretJob(
        interpreterContext.getParagraphId(),
        "remoteInterpretJob_" + System.currentTimeMillis(),
        jobListener,
        JobProgressPoller.DEFAULT_INTERVAL_MSEC,
        intp,
        st,
        context);

    InterpreterResult result;

    try{
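      // anyMatch is a helper defined elsewhere in this class (assumed): it returns the
      // blocked keyword found in st, and throws when nothing matches, so the normal
      // execution path lives in the catch block below.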
      String matchesStrings = anyMatch(st, blockedCodeString);
      result = new InterpreterResult(Code.ERROR, "Contains dangerous code : " + matchesStrings);
    }catch (Exception me){ // no match any
      scheduler.submit(job);
      while (!job.isTerminated()) {
        synchronized (jobListener) {
          try {
            jobListener.wait(1000);
          } catch (InterruptedException e) {
            logger.info("Exception in RemoteInterpreterServer while interpret, jobListener.wait", e);
          }
        }
      }

      if (job.getStatus() == Status.ERROR) {
        result = new InterpreterResult(Code.ERROR, Job.getStack(job.getException()));
      } else {
        result = (InterpreterResult) job.getReturn();

        // in case of job abort in PENDING status, result can be null
        if (result == null) {
          result = new InterpreterResult(Code.KEEP_PREVIOUS_RESULT);
        }
      }
    }
    return convert(result,
        context.getConfig(),
        context.getGui());
  }
......

 

Enable HTTPS access in Zeppelin
https://xianglei.tech/archives/xianglei/2017/10/2811.html (Tue, 31 Oct 2017 04:59:43 +0000)

I was using a CA-certified key file to enable HTTPS; if you use a self-signed key, see the second part.

First part:
I had two files: the private key, named server.key, and the certificate file, named server.crt.
Use the following commands to create a JKS keystore file:

openssl pkcs12 -export -in xxx.com.crt -inkey xxx.com.key -out xxx.com.pkcs12
keytool -importkeystore -srckeystore xxx.com.pkcs12 -destkeystore xxx.com.jks -srcstoretype pkcs12

Second part:
Use a self-signed key

# Generate root key file and cert file, key file could be named key or pem, it's same.
openssl genrsa -out root.key(pem) 2048 # Generate root key file
openssl req -x509 -new -key root.key(pem) -out root.crt # Generate root cert file

# Generate client key and cert and csr file
openssl genrsa -out client.key(pem) 2048 # Generate client key file
openssl req -new -key client.key(pem) -out client.csr # Generate client cert request file
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key(pem) -CAcreateserial -days 3650 -out client.crt # Use root cert to generate client cert file

# Generate server key and cert and csr file
openssl genrsa -out server.key(pem) 2048 # Generate server key file, use in Zeppelin
openssl req -new -key server.key(pem) -out server.csr # Generate server cert request file
openssl x509 -req -in server.csr -CA root.crt -CAkey root.key(pem) -CAcreateserial -days 3650 -out server.crt # Use root cert to generate server cert file

# Generate client jks file
openssl pkcs12 -export -in client.crt -inkey client.key(pem) -out client.pkcs12 # Package to pkcs12 format, must input a password, you should remember the password
keytool -importkeystore -srckeystore client.pkcs12 -destkeystore client.jks -srcstoretype pkcs12 # The client password you just input at last step

# Generate server jks file
openssl pkcs12 -export -in server.crt -inkey server.key(pem) -out server.pkcs12 # Package to pkcs12 format, must input a password, you should remember the password
keytool -importkeystore -srckeystore server.pkcs12 -destkeystore server.jks -srcstoretype pkcs12 # The server password you just input at last step

The server key, cert and jks are used to configure Zeppelin; the client key, cert and jks are installed into your browser or used by your client code.
Then, make a directory to put the server files in, such as

mkdir -p /etc/zeppelin/conf/ssl
cp server.crt server.jks /etc/zeppelin/conf/ssl

And then modify zeppelin-site.xml to enable HTTPS access:

<property>
  <name>zeppelin.server.ssl.port</name>
  <value>8443</value>
  <description>Server ssl port. (used when ssl property is set to true)</description>
</property>
<property>
  <name>zeppelin.ssl</name>
  <value>true</value>
  <description>Should SSL be used by the servers?</description>
</property>
<property>
  <name>zeppelin.ssl.client.auth</name>
  <value>false</value>
  <description>Should client authentication be used for SSL connections?</description>
</property>
<property>
  <name>zeppelin.ssl.keystore.path</name>
  <value>/etc/zeppelin/conf/ssl/xxx.com.jks</value>
  <description>Path to keystore relative to Zeppelin configuration directory</description>
</property>
<property>
  <name>zeppelin.ssl.keystore.type</name>
  <value>JKS</value>
  <description>The format of the given keystore (e.g. JKS or PKCS12)</description>
</property>
<property>
  <name>zeppelin.ssl.keystore.password</name>
  <value>the password you entered when generating the server jks</value>
  <description>Keystore password. Can be obfuscated by the Jetty Password tool</description>
</property>

Then everything is complete, and you can redirect 443 to 8443 using iptables or another reverse proxy tool.
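A quick way to confirm the HTTPS listener is up and serving the expected certificate (a minimal sketch; the hostname is a placeholder, and for the self-signed case you would point cafile at your root.crt):

import socket
import ssl

host, port = "zeppelin.example.com", 8443  # placeholder host and the ssl port configured above

ctx = ssl.create_default_context()
# For the self-signed setup, trust your own root instead:
# ctx = ssl.create_default_context(cafile="root.crt")
sock = socket.create_connection((host, port), timeout=5)
tls = ctx.wrap_socket(sock, server_hostname=host)
print("TLS established, cipher: %s" % (tls.cipher(),))
print("Certificate subject: %s" % (tls.getpeercert().get("subject"),))
tls.close()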

How to use cloudera parcels manually
https://xianglei.tech/archives/xianglei/2017/10/2809.html (Fri, 20 Oct 2017 08:50:06 +0000)

A Cloudera parcel is actually just a compressed file format: a tgz file with some meta info, so we can simply untar it with tar zxf xxx.parcel. That gives us the ability to keep multiple versions of Hadoop extracted on a single machine, which makes Hadoop upgrades and downgrades easy; just point the CDH symlink (ln -s) at a specific version directory.

With that understood, I can package a self-distributed parcel with my own patches and still use Cloudera Manager to manage the cluster... That sounds good.
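A minimal sketch of the extract-and-switch idea (the parcel filename and version directory below are assumptions; adjust them to the parcel you actually downloaded):

#!/usr/bin/env python
# Unpack a parcel next to the existing versions and repoint the CDH symlink.
import os
import tarfile

PARCEL_DIR = "/opt/cloudera/parcels"
PARCEL_FILE = os.path.join(PARCEL_DIR, "CDH-5.14.0-1.cdh5.14.0.p0.24-el6.parcel")  # assumed filename
TARGET_VERSION = "CDH-5.14.0-1.cdh5.14.0.p0.24"                                    # assumed directory inside the parcel

# A parcel is just a tgz, so tarfile can unpack it directly.
with tarfile.open(PARCEL_FILE, "r:gz") as tf:
    tf.extractall(PARCEL_DIR)

# Upgrading or downgrading is then just repointing the CDH symlink.
link = os.path.join(PARCEL_DIR, "CDH")
if os.path.islink(link):
    os.remove(link)
os.symlink(os.path.join(PARCEL_DIR, TARGET_VERSION), link)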

Integrate pyspark and sklearn with distributed parallel running on YARN
https://xianglei.tech/archives/xianglei/2017/07/2795.html (Thu, 20 Jul 2017 09:01:49 +0000)

Python is useful for data scientists, especially with pyspark, but it is a big problem for sysadmins: they have to install Python 2.7+, Spark, numpy, scipy, sklearn and pandas on every node, well, because Cloudera said so. Imagine a cluster with 1000+ or even 5000+ nodes; even if you are good at DevOps tools such as puppet or fabric, this work still costs a lot of time.

Why do we have to install Python on each node? Because in cluster mode, your py script is wrapped up through py4j and SparkContext and distributed to every node where it will run. So you must make sure those nodes have a Python and Spark runtime, which means you must at least install numpy, sklearn and the other packages on every node. But these Python packages no longer support Python 2.6, so on a legacy system such as CentOS 6.x your sysadmin must compile Python 2.7 and then pip install the packages on each node.

I did not want to do this terrible work, so I found a faster way to run pyspark + sklearn + numpy + other machine-learning libraries on YARN without installing those libraries on every node. All I did was set up one hadoop + pyspark + sklearn client. It can submit a pyspark + sklearn job to a cluster that has no Python environment installed at all. Let's look at my code.

#!/usr/local/python/bin/python
import math
import sys
import urllib
import re
from collections import namedtuple
import random
from math import sin, cos, sqrt, atan2, radians
import json

from pyspark import SparkContext
try:
    sc.stop()
except:
    pass
import pandas as pd, numpy as np

from sklearn.cluster import DBSCAN

sc = SparkContext(appName="machine_learning")

def get_device_id(idfa,idfa_md5,imei):
    device_id=''
    if idfa !='' and re.search(r'[a-zA-Z0-9]{8}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{12}',idfa) != None:
        device_id=idfa
    elif  idfa_md5 !='' and re.search(r'[a-zA-Z0-9]{32}',idfa_md5) != None:
        device_id=idfa_md5
    elif imei !='' and re.search(r'[0-9]{15}|[a-zA-Z0-9]{32}',imei) != None:
        device_id=imei
    return device_id

def get_xyz(loc):
    ll = re.split("x|\*",loc)
    try:
        xy = tuple(ll)[0:2]
        xy = (float(xy[0]),float(xy[1]))
        return xy
    except:
        return(-1,-1)

def dbscan_latlng(lat_lngs,mim_distance_km,min_points=10):
    
    coords = np.asmatrix(lat_lngs)  
    kms_per_radian = 6371.0088
    epsilon = mim_distance_km/ kms_per_radian
    db = DBSCAN(eps=epsilon, min_samples=min_points, algorithm='ball_tree', metric='haversine').fit(np.radians(coords))
    cluster_labels = db.labels_

    num_clusters = len(set(cluster_labels))
#     clusters = pd.Series([coords[cluster_labels == n] for n in range(num_clusters)])
    # print('Number of clusters: {}'.format(num_clusters))
    
    return cluster_labels
        

table_file='/user/hive/warehouse/temp.db/id_lat_lng_201704/00018*'
lbs = (sc
           .textFile(table_file,5000)
           .map(lambda a: a.split("\x01"))
           .map(lambda a: (get_device_id(a[3],a[4],a[5]),get_xyz(a[6]))) #[ip,day,hour, id ,(lat,lng)]
           .filter(lambda (id,(lat,lng)): lat != -1 and lat !=0.0 and id !='' and id !='00000000-0000-0000-0000-000000000000')
           .map(lambda (id,(lat,lng)) : (id,[(lat,lng)]))
          .reduceByKey(lambda a,b : a +b )
          .filter(lambda (id, lbss): len(lbss) > 10 )

           )
           
lbs_c = lbs.map(lambda (id, lat_lngs):(id, dbscan_latlng(lat_lngs,0.02,1) ))

print lbs_c.count()

This code uses sklearn, pyspark and pandas to cluster LBS (geo) data per device and count the results. I ran it successfully on a cluster without any Python environment installed.

Here are the steps:

  1. You must have a client node that can access HDFS and the YARN cluster, with spark-core and spark-python installed on it. Set $SPARK_HOME and add it to $PATH.
  2. Compile Python 2.7.13 with shared libs into /usr/local/python and install easy_install or pip.
  3. pip install numpy scipy pandas sklearn jieba sparkly pyinstaller
  4. Go to /usr/lib/spark/python/lib, find py4j-0.x-src.zip and pyspark.zip, unzip them and install them into /usr/local/python/lib/python2.7/site-packages.
  5. This is the final step: use pyinstaller to compile your code, and just run it (see the smoke-test sketch after this list).
    pyinstaller -F machine_learning.py
    If you use sklearn, you may also need to add --hidden-import sklearn.neighbors.typedefs or other hidden-import classes.
  6. cd dist/ && ./machine_learning
  7. Enjoy your result.
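Before packaging the real job, a minimal smoke test is handy (a sketch, written in the same Python 2 style as the example above): bundle it with pyinstaller -F exactly the same way and run it, just to confirm the frozen binary can import the ML libraries and start a SparkContext on YARN.

#!/usr/local/python/bin/python
# Smoke test for the pyinstaller-packaged environment: imports only, plus a trivial Spark job.
import numpy
import pandas
import sklearn

from pyspark import SparkContext

sc = SparkContext(appName="pyinstaller_smoke_test")
print sc.parallelize(range(1000)).count()
print numpy.__version__, pandas.__version__, sklearn.__version__
sc.stop()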

In this example, pyinstaller -F compiles the whole Python environment, all the needed libraries and your code into one executable file. When this file runs, it calls pyspark, pyspark calls Spark and submits the job to YARN, and YARN distributes this single file to Hadoop's distributed cache. Because this single file contains the Python interpreter and numpy, sklearn and the other libraries it needs, it runs successfully just like a submitted jar file.

Note: you do not need to install Databricks' spark-sklearn package either.

Everything is done on a single client node, is that clear? No distributed install, no spark-sklearn or other additional libraries. Just do it.

This was the first test example I wrote: it uses Spark, but only imports sklearn without actually using it, and it also ran successfully. "/tmp/xxxx.csv" was stored in HDFS, not a local file.

Spark read LZO file error in Zeppelin
https://xianglei.tech/archives/xianglei/2017/07/2786.html (Wed, 12 Jul 2017 06:18:50 +0000)

Because our dear stingy Party A said they will not add any nodes to the cluster, we must compress the data to reduce disk consumption. I actually like LZ4: it is natively supported by Hadoop, its compress/decompress speed is good enough, and its compression ratio is better than LZO's. But in the end I had to choose LZO, for no reason.

Since we used Cloudera Manager to install Hadoop and Spark, there is no error when reading an LZO file on the command line; simply read it as a text file, e.g.:

val data = sc.textFile("/user/dmp/miaozhen/ott/MZN_OTT_20170101131042_0000_ott.lzo")
data.take(3)

But in Zeppelin, it told me: native-lzo library not available. WTF?

Well, Zeppelin is a self-contained environment: it reads only its own configuration and nothing else. For example, it will not try to read /etc/spark/conf/spark-defaults.conf. So I had to write all the Spark settings into Zeppelin's configuration, just as you would write them in spark-defaults.conf.
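You can confirm which settings the Zeppelin Spark interpreter actually received from inside a paragraph (a minimal sketch using %pyspark; the property names are the standard Spark and Hadoop keys involved in loading native LZO):

%pyspark
# Print the classpath/library-path settings the running SparkContext was given,
# plus the codec list from the Hadoop configuration it sees.
for key in ("spark.driver.extraLibraryPath",
            "spark.executor.extraLibraryPath",
            "spark.driver.extraClassPath",
            "spark.executor.extraClassPath"):
    print "%s = %s" % (key, sc.getConf().get(key, "<unset>"))
print sc._jsc.hadoopConfiguration().get("io.compression.codecs")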

In our cluster, the Zeppelin conf looks like this:

Troubleshooting on Zeppelin with kerberized cluster
https://xianglei.tech/archives/xianglei/2017/05/2748.html (Wed, 24 May 2017 08:01:01 +0000)

We've updated Zeppelin from 0.7.0 to 0.7.1, still working with our kerberized hadoop cluster; we use only some of Zeppelin's interpreters, not all of them. I want to write down some troubleshooting records for this awesome webtool. BTW: I could write a webtool 1000 times better than this, such as phpHiveAdmin; at least there I can see the map/reduce progress bar.

We used:
1. pyspark with Python machine learning libraries
Since we use CentOS 6, which ships Python 2.6, the Python machine-learning libraries such as numpy, scipy, sklearn and pandas are not supported, so I had to compile a new Python 2.7 on each node and pip install these libs. When I wrote a test demo in Zeppelin, it showed me a Python error.

#python
import pandas
import scipy
import numpy
import sklearn

It gives me this.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute 'show'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'list' object has no attribute '_displayhook'

Well, this error means you forgot to install matplotlib; pip install it and it will be fine.
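After installing it, a quick re-check in the same kind of paragraph (a sketch): if these all import cleanly, the error above should be gone.

#python
import matplotlib
import numpy, scipy, pandas, sklearn
print matplotlib.__version__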

 

2. Hive with new user

java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
	at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:279)
	at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
	at org.apache.commons.dbcp2.DelegatingStatement.execute(DelegatingStatement.java:291)
	at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:580)
	at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:692)
	at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:95)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:490)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
	at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
	at java.util.concurrent.FutureTask.run(FutureTask.java:262)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)

Well, there are three things that can cause this issue.
First: you only added a zeppelin user and did not add that Linux user to all of your hadoop nodes. Zeppelin uses the real username to submit jobs to the kerberized hadoop YARN instead of the zeppelin user, unless you set hadoop.proxyuser.zeppelin.hosts and hadoop.proxyuser.zeppelin.groups in core-site.xml.
Second: you did not add this Kerberos principal with kadmin.
Third: you did not create this user's home directory on HDFS.
Check these three things and it should be OK.
And another thing: you should put hive-site.xml in Zeppelin's conf dir.

 

3. Install R to work with Zeppelin
First: install R on each node
yum -y install R R-devel libcurl-devel openssl-devel (install EPEL first, of course)
Second: install the R packages

install.packages(c('devtools', 'knitr', 'ggplot2', 'mplot', 'googleVis', 'glmnet', 'pROC', 'data.table', 'rJava', 'stringi', 'stringr', 'evaluate', 'reshape2', 'caret', 'sqldf', 'wordcloud'), repos='https://mirrors.tuna.tsinghua.edu.cn/CRAN')

Third: compile Zeppelin with the options below. Because R uses the GPL license, mvn will not compile Zeppelin with SparkR support by default; you must add the -Pr and -Psparkr arguments and rerun maven package.

mvn package ......
            -Pspark-1.6 -Dspark.version=$SPARK_VERSION \
            -Phadoop-2.6 -Dhadoop.version=$HADOOP_VERSION \
            -Pyarn \
            -Pr \
            -Psparkr \
            -Pscala-2.10 \
            -Pbuild-distr"

And then, the next error:

library(‘SparkR’) not found

Fix it with:

devtools::install_github("amplab-extras/SparkR-pkg", subdir="pkg")

 

4. Write HiveQL in %spark.pyspark
like this:

from pyspark.sql import HiveContext, Row            
newHiveContext=HiveContext(sc)                      
query1="""select * from track.click limit 100"""
rows1=newHiveContext.sql(query1)
rows1.show()

In the CLI it's OK, but in Zeppelin it causes the exception below:

Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-5311749997581805313.py", line 349, in <module>
    raise Exception(traceback.format_exc())
Exception: Traceback (most recent call last):
  File "/tmp/zeppelin_pyspark-5311749997581805313.py", line 342, in <module>
    exec(code)
  File "<stdin>", line 5, in <module>
  File "/usr/lib/spark/python/pyspark/sql/dataframe.py", line 257, in show
    print(self._jdf.showString(n, truncate))
  File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/usr/lib/spark/python/pyspark/sql/utils.py", line 45, in deco
    return f(*a, **kw)
  File "/usr/lib/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
    format(target_id, ".", name), value)
Py4JJavaError: An error occurred while calling o47.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, pg-dmp-slave7.hadoop, executor 1): java.lang.NoClassDefFoundError: Lorg/apache/hadoop/hive/ql/plan/TableDesc;
	at java.lang.Class.getDeclaredFields0(Native Method)
	at java.lang.Class.privateGetDeclaredFields(Class.java:2436)
	at java.lang.Class.getDeclaredField(Class.java:1946)
	at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1659)
	at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:480)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
	at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1706)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1344)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:229)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.plan.TableDesc
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 104 more
Driver stacktrace:
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
	at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1642)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1601)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1590)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
	at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1844)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1857)
	at org.apache.spark.SparkContext.runJob(SparkContext.scala:1870)
	at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:212)
	at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165)
	at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
	at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:53)
	at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
	at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1505)
	at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1375)
	at org.apache.spark.sql.DataFrame$$anonfun$head$1.apply(DataFrame.scala:1374)
	at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
	at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374)
	at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456)
	at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
	at py4j.Gateway.invoke(Gateway.java:259)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:209)
	at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NoClassDefFoundError: Lorg/apache/hadoop/hive/ql/plan/TableDesc;
	at java.lang.Class.getDeclaredFields0(Native Method)
	at java.lang.Class.privateGetDeclaredFields(Class.java:2436)
	at java.lang.Class.getDeclaredField(Class.java:1946)
	at java.io.ObjectStreamClass.getDeclaredSUID(ObjectStreamClass.java:1659)
	at java.io.ObjectStreamClass.access$700(ObjectStreamClass.java:72)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:480)
	at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
	at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
	at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:602)
	at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
	at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readArray(ObjectInputStream.java:1706)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1344)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at scala.collection.immutable.$colon$colon.readObject(List.scala:362)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1017)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
	at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
	at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
	at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
	at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
	at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
	at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:61)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:229)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	... 1 more
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.plan.TableDesc
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	... 104 more

/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.9.0.jar
/usr/lib/hadoop/hadoop-common-2.6.0-cdh5.9.0.jar
/usr/lib/hive/lib/hive-jdbc-1.1.0-cdh5.9.0-standalone.jar
/usr/lib/hive/lib/hive-shims-0.23-1.1.0-cdh5.9.0.jar
/usr/lib/hadoop/hadoop-auth-2.6.0-cdh5.9.0.jar

Add these jars to the Spark interpreter's dependencies area on Zeppelin's interpreter config page.
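
As a quick sanity check (my own addition, not part of the original fix), you can ask the driver JVM from a %pyspark paragraph whether the missing Hive class is now visible. Note that this only exercises the driver side; the ClassNotFoundException above was thrown on an executor during task deserialization, so the dependencies still have to reach the executors as well.

# Hedged sketch: run inside a %pyspark paragraph, where Zeppelin provides `sc`.
# If the hive jars are on the classpath this returns a Class object; otherwise
# py4j raises an error wrapping the same ClassNotFoundException.
sc._jvm.java.lang.Class.forName("org.apache.hadoop.hive.ql.plan.TableDesc")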

 

5. I originally compiled Zeppelin without R, then re-compiled it with R support and set it up on the interpreters page, but it still did not work.

Well, actually, this confused me for a day, until I realized that Zeppelin's notes and paragraphs are all bound to interpreters, so even if you replace Zeppelin itself, existing notes keep running against the old interpreters. It seems you must delete the whole note and rewrite a new one. I think this is a ridiculous architecture design.

 

6. Cannot use the SQL/DataFrame context in pyspark in Zeppelin / Cannot verify credential exception / Spark application hangs and cannot be finished in the YARN application manager.
Well, these three issues turn out to be one problem.

sqlContext.registerDataFrameAsTable(content_df, 'content_df')
select * from content_df where packageName='com.qiyi.video' limit 100
 INFO [2017-05-26 13:43:30,079] ({pool-2-thread-18} SchedulerFactory.java[jobStarted]:131) - Job remoteInterpretJob_1495777410079 started by scheduler org.apache.zeppelin.spark.SparkSqlInterpreter514869690
ERROR [2017-05-26 13:43:30,081] ({pool-2-thread-18} Job.java[run]:188) - Job failed
org.apache.zeppelin.interpreter.InterpreterException: java.lang.reflect.InvocationTargetException
        at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:119)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:95)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:490)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
        at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.zeppelin.spark.SparkSqlInterpreter.interpret(SparkSqlInterpreter.java:116)
        ... 11 more
Caused by: java.lang.NullPointerException
        at org.apache.spark.sql.hive.client.ClientWrapper.conf(ClientWrapper.scala:205)
        at org.apache.spark.sql.hive.HiveContext.hiveconf$lzycompute(HiveContext.scala:554)
        at org.apache.spark.sql.hive.HiveContext.hiveconf(HiveContext.scala:553)
        at org.apache.spark.sql.hive.HiveContext.parseSql(HiveContext.scala:333)
        at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:817)
        ... 16 more
ERROR [2017-05-26 13:47:16,017] ({pool-2-thread-2} Logging.scala[logError]:95) - Uncaught exception in thread pool-2-thread-2
java.lang.StackOverflowError
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater.updateCredentialsIfRequired(ExecutorDelegationTokenUpdater.scala:89)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply$mcV$sp(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1766)
        ......
 WARN [2017-05-26 13:47:16,035] ({pool-2-thread-2} Logging.scala[logWarning]:91) - Error while trying to update credentials, will try again in 1 hour
java.lang.StackOverflowError
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142)
        at java.net.URLClassLoader.defineClass(URLClassLoader.java:449)
        at java.net.URLClassLoader.access$100(URLClassLoader.java:71)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater.updateCredentialsIfRequired(ExecutorDelegationTokenUpdater.scala:89)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply$mcV$sp(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.util.Utils$.logUncaughtExceptions(Utils.scala:1766)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1.run(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater.updateCredentialsIfRequired(ExecutorDelegationTokenUpdater.scala:79)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply$mcV$sp(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply(ExecutorDelegationTokenUpdater.scala:49)
        at org.apache.spark.deploy.yarn.ExecutorDelegationTokenUpdater$$anon$1$$anonfun$run$1.apply(ExecutorDelegationTokenUpdater.scala:49)

I found this error in zeppelin/spark/src/main/org/apache/zeppelin/spark/SparkSqlInterpreter.java

public InterpreterResult interpret(String st, InterpreterContext context) {
    SQLContext sqlc = null;
    SparkInterpreter sparkInterpreter = getSparkInterpreter();

    if (sparkInterpreter.getSparkVersion().isUnsupportedVersion()) {
      return new InterpreterResult(Code.ERROR, "Spark "
          + sparkInterpreter.getSparkVersion().toString() + " is not supported");
    }

    sparkInterpreter.populateSparkWebUrl(context);
    sqlc = getSparkInterpreter().getSQLContext();
    SparkContext sc = sqlc.sparkContext();
    if (concurrentSQL()) {
      sc.setLocalProperty("spark.scheduler.pool", "fair");
    } else {
      sc.setLocalProperty("spark.scheduler.pool", null);
    }

    sc.setJobGroup(getJobGroup(context), "Zeppelin", false);
    Object rdd = null;
    try {
      // method signature of sqlc.sql() is changed
      // from  def sql(sqlText: String): SchemaRDD (1.2 and prior)
      // to    def sql(sqlText: String): DataFrame (1.3 and later).
      // Therefore need to use reflection to keep binary compatibility for all spark versions.
      Method sqlMethod = sqlc.getClass().getMethod("sql", String.class);
      rdd = sqlMethod.invoke(sqlc, st);
    } catch (InvocationTargetException ite) {
      if (Boolean.parseBoolean(getProperty("zeppelin.spark.sql.stacktrace"))) {
        throw new InterpreterException(ite);
      }
      logger.error("Invocation target exception", ite);
      String msg = ite.getTargetException().getMessage()
              + "\nset zeppelin.spark.sql.stacktrace = true to see full stacktrace";
      return new InterpreterResult(Code.ERROR, msg);
    } catch (NoSuchMethodException | SecurityException | IllegalAccessException
        | IllegalArgumentException e) {
      throw new InterpreterException(e);
    }

    String msg = ZeppelinContext.showDF(sc, context, rdd, maxResult);
    sc.clearJobGroup();
    return new InterpreterResult(Code.SUCCESS, msg);
  }

If you enable HiveContext in the Zeppelin Spark configuration, it cannot read the schema you registered with registerDataFrameAsTable, and it will try to use Hive's keytab instead of Zeppelin's keytab, and then the application hangs in the YARN application manager.
To resolve this, just set
zeppelin.spark.useHiveContext = false
in Zeppelin's interpreter config page.
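
Here is a minimal pyspark sketch of that pattern (my own illustration, not taken from the original setup; the DataFrame contents and app name are made up, and inside Zeppelin sc and sqlContext are already provided, so the context setup lines are only needed outside Zeppelin). With useHiveContext set to false, the %sql interpreter runs against the same plain SQLContext, so a table registered this way is visible to %sql paragraphs.

# A minimal sketch, assuming the Spark 1.6-era APIs used elsewhere in this post.
from pyspark import SparkContext
from pyspark.sql import SQLContext, Row

sc = SparkContext(appName="register-df-demo")   # Zeppelin normally provides sc
sqlContext = SQLContext(sc)                     # and sqlContext

# Build a toy DataFrame and register it under a name that %sql can see.
content_df = sqlContext.createDataFrame([
    Row(packageName="com.qiyi.video", title="demo"),
    Row(packageName="com.other.app", title="other"),
])
sqlContext.registerDataFrameAsTable(content_df, "content_df")

# Equivalent of the %sql paragraph that failed above.
sqlContext.sql(
    "select * from content_df where packageName = 'com.qiyi.video' limit 100"
).show()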

For reference, here are the interpreter configs I used:

 

My Zeppelin configuration files, with the debugging arguments added:

zeppelin-env.sh

export ZEPPELIN_INTERPRETERS="org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.hive.HiveInterpreter"
export ZEPPELIN_PORT=8080
export ZEPPELIN_CONF_DIR=/etc/zin/conf
export ZEPPELIN_LOG_DIR=/var/log/zin
export ZEPPELIN_PID_DIR=/var/run/zin
export ZEPPELIN_WAR_TEMPDIR=/var/run/zin/webapps
export ZEPPELIN_NOTEBOOK_DIR=/var/lib/zin/notebook
export MASTER=yarn-client
export SPARK_HOME=/usr/lib/spark
export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hive/conf
export ZEPPELIN_JAVA_OPTS="-Dspark.yarn.jar=/usr/lib/zin/interpreter/spark/zeppelin-spark_2.10-0.7.1.jar"
export HADOOP_HOME=/usr/lib/hadoop
export ZEPPELIN_INTP_CLASSPATH_OVERRIDES=/etc/hive/conf
export ZEPPELIN_HOME=/usr/lib/zin
#add this line for debugging
export SPARK_PRINT_LAUNCH_COMMAND=true

log4j.properties

log4j.rootLogger = INFO, dailyfile

log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n

log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.Threshold = DEBUG
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n

#add these lines for debugging
log4j.logger.org.apache.zeppelin.interpreter.InterpreterFactory=DEBUG
log4j.logger.org.apache.zeppelin.notebook.Paragraph=DEBUG
log4j.logger.org.apache.zeppelin.scheduler=DEBUG
log4j.logger.org.apache.zeppelin.livy=DEBUG
log4j.logger.org.apache.zeppelin.flink=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUG
log4j.logger.org.apache.zeppelin.interpreter.util=DEBUG
log4j.logger.org.apache.zeppelin.interpreter.remote=DEBUG

zeppelin-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements.  See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License.  You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->

<configuration>

<property>
  <name>zeppelin.server.addr</name>
  <value>0.0.0.0</value>
  <description>Server address</description>
</property>

<property>
  <name>zeppelin.server.port</name>
  <value>8080</value>
  <description>Server port.</description>
</property>

<property>
  <name>zeppelin.server.ssl.port</name>
  <value>8443</value>
  <description>Server ssl port. (used when ssl property is set to true)</description>
</property>

<property>
  <name>zeppelin.server.context.path</name>
  <value>/</value>
  <description>Context Path of the Web Application</description>
</property>

<property>
  <name>zeppelin.war.tempdir</name>
  <value>webapps</value>
  <description>Location of jetty temporary directory</description>
</property>

<property>
  <name>zeppelin.notebook.dir</name>
  <value>notebook</value>
  <description>path or URI for notebook persist</description>
</property>

<property>
  <name>zeppelin.notebook.homescreen</name>
  <value></value>
  <description>id of notebook to be displayed in homescreen. ex) 2A94M5J1Z Empty value displays default home screen</description>
</property>

<property>
  <name>zeppelin.notebook.homescreen.hide</name>
  <value>false</value>
  <description>hide homescreen notebook from list when this value set to true</description>
</property>


<!-- Amazon S3 notebook storage -->
<!-- Creates the following directory structure: s3://{bucket}/{username}/{notebook-id}/note.json -->
<!--
<property>
  <name>zeppelin.notebook.s3.user</name>
  <value>user</value>
  <description>user name for s3 folder structure</description>
</property>

<property>
  <name>zeppelin.notebook.s3.bucket</name>
  <value>zeppelin</value>
  <description>bucket name for notebook storage</description>
</property>

<property>
  <name>zeppelin.notebook.s3.endpoint</name>
  <value>s3.amazonaws.com</value>
  <description>endpoint for s3 bucket</description>
</property>

<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.S3NotebookRepo</value>
  <description>notebook persistence layer implementation</description>
</property>
-->

<!-- Additionally, encryption is supported for notebook data stored in S3 -->
<!-- Use the AWS KMS to encrypt data -->
<!-- If used, the EC2 role assigned to the EMR cluster must have rights to use the given key -->
<!-- See https://aws.amazon.com/kms/ and http://docs.aws.amazon.com/kms/latest/developerguide/concepts.html -->
<!--
<property>
  <name>zeppelin.notebook.s3.kmsKeyID</name>
  <value>AWS-KMS-Key-UUID</value>
  <description>AWS KMS key ID used to encrypt notebook data in S3</description>
</property>
-->

<!-- provide region of your KMS key -->
<!-- See http://docs.aws.amazon.com/general/latest/gr/rande.html#kms_region for region codes names -->
<!--
<property>
  <name>zeppelin.notebook.s3.kmsKeyRegion</name>
  <value>us-east-1</value>
  <description>AWS KMS key region in your AWS account</description>
</property>
-->

<!-- Use a custom encryption materials provider to encrypt data -->
<!-- No configuration is given to the provider, so you must use system properties or another means to configure -->
<!-- See https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/EncryptionMaterialsProvider.html -->
<!--
<property>
  <name>zeppelin.notebook.s3.encryptionMaterialsProvider</name>
  <value>provider implementation class name</value>
  <description>Custom encryption materials provider used to encrypt notebook data in S3</description>
</property>
-->


<!-- If using Azure for storage use the following settings -->
<!--
<property>
  <name>zeppelin.notebook.azure.connectionString</name>
  <value>DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey></value>
  <description>Azure account credentials</description>
</property>

<property>
  <name>zeppelin.notebook.azure.share</name>
  <value>zeppelin</value>
  <description>share name for notebook storage</description>
</property>

<property>
  <name>zeppelin.notebook.azure.user</name>
  <value>user</value>
  <description>optional user name for Azure folder structure</description>
</property>

<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.AzureNotebookRepo</value>
  <description>notebook persistence layer implementation</description>
</property>
-->

<!-- Notebook storage layer using local file system
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.VFSNotebookRepo</value>
  <description>local notebook persistence layer implementation</description>
</property>
-->

<!-- For connecting your Zeppelin with ZeppelinHub -->
<!--
<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.GitNotebookRepo, org.apache.zeppelin.notebook.repo.zeppelinhub.ZeppelinHubRepo</value>
  <description>two notebook persistence layers (versioned local + ZeppelinHub)</description>
</property>
-->

<property>
  <name>zeppelin.notebook.storage</name>
  <value>org.apache.zeppelin.notebook.repo.GitNotebookRepo</value>
  <description>versioned notebook persistence layer implementation</description>
</property>

<property>
  <name>zeppelin.notebook.one.way.sync</name>
  <value>false</value>
  <description>If there are multiple notebook storages, should we treat the first one as the only source of truth?</description>
</property>

<property>
  <name>zeppelin.interpreter.dir</name>
  <value>interpreter</value>
  <description>Interpreter implementation base directory</description>
</property>

<property>
  <name>zeppelin.interpreter.localRepo</name>
  <value>local-repo</value>
  <description>Local repository for interpreter's additional dependency loading</description>
</property>

<property>
  <name>zeppelin.interpreters</name>
  <value>org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter,org.apache.zeppelin.rinterpreter.RRepl,org.apache.zeppelin.rinterpreter.KnitR,org.apache.zeppelin.spark.SparkRInterpreter,org.apache.zeppelin.spark.SparkSqlInterpreter,org.apache.zeppelin.spark.DepInterpreter,org.apache.zeppelin.markdown.Markdown,org.apache.zeppelin.angular.AngularInterpreter,org.apache.zeppelin.shell.ShellInterpreter,org.apache.zeppelin.file.HDFSFileInterpreter,org.apache.zeppelin.flink.FlinkInterpreter,,org.apache.zeppelin.python.PythonInterpreter,org.apache.zeppelin.python.PythonInterpreterPandasSql,org.apache.zeppelin.python.PythonCondaInterpreter,org.apache.zeppelin.python.PythonDockerInterpreter,org.apache.zeppelin.lens.LensInterpreter,org.apache.zeppelin.ignite.IgniteInterpreter,org.apache.zeppelin.ignite.IgniteSqlInterpreter,org.apache.zeppelin.cassandra.CassandraInterpreter,org.apache.zeppelin.geode.GeodeOqlInterpreter,org.apache.zeppelin.postgresql.PostgreSqlInterpreter,org.apache.zeppelin.jdbc.JDBCInterpreter,org.apache.zeppelin.kylin.KylinInterpreter,org.apache.zeppelin.elasticsearch.ElasticsearchInterpreter,org.apache.zeppelin.scalding.ScaldingInterpreter,org.apache.zeppelin.alluxio.AlluxioInterpreter,org.apache.zeppelin.hbase.HbaseInterpreter,org.apache.zeppelin.livy.LivySparkInterpreter,org.apache.zeppelin.livy.LivyPySparkInterpreter,org.apache.zeppelin.livy.LivyPySpark3Interpreter,org.apache.zeppelin.livy.LivySparkRInterpreter,org.apache.zeppelin.livy.LivySparkSQLInterpreter,org.apache.zeppelin.bigquery.BigQueryInterpreter,org.apache.zeppelin.beam.BeamInterpreter,org.apache.zeppelin.pig.PigInterpreter,org.apache.zeppelin.pig.PigQueryInterpreter,org.apache.zeppelin.scio.ScioInterpreter</value>
  <description>Comma separated interpreter configurations. First interpreter become a default</description>
</property>

<property>
  <name>zeppelin.interpreter.group.order</name>
  <value>spark,md,angular,sh,livy,alluxio,file,psql,flink,python,ignite,lens,cassandra,geode,kylin,elasticsearch,scalding,jdbc,hbase,bigquery,beam</value>
  <description></description>
</property>

<property>
  <name>zeppelin.interpreter.connect.timeout</name>
  <value>30000</value>
  <description>Interpreter process connect timeout in msec.</description>
</property>


<property>
  <name>zeppelin.ssl</name>
  <value>false</value>
  <description>Should SSL be used by the servers?</description>
</property>

<property>
  <name>zeppelin.ssl.client.auth</name>
  <value>false</value>
  <description>Should client authentication be used for SSL connections?</description>
</property>

<property>
  <name>zeppelin.ssl.keystore.path</name>
  <value>keystore</value>
  <description>Path to keystore relative to Zeppelin configuration directory</description>
</property>

<property>
  <name>zeppelin.ssl.keystore.type</name>
  <value>JKS</value>
  <description>The format of the given keystore (e.g. JKS or PKCS12)</description>
</property>

<property>
  <name>zeppelin.ssl.keystore.password</name>
  <value>change me</value>
  <description>Keystore password. Can be obfuscated by the Jetty Password tool</description>
</property>

<!--
<property>
  <name>zeppelin.ssl.key.manager.password</name>
  <value>change me</value>
  <description>Key Manager password. Defaults to keystore password. Can be obfuscated.</description>
</property>
-->

<property>
  <name>zeppelin.ssl.truststore.path</name>
  <value>truststore</value>
  <description>Path to truststore relative to Zeppelin configuration directory. Defaults to the keystore path</description>
</property>

<property>
  <name>zeppelin.ssl.truststore.type</name>
  <value>JKS</value>
  <description>The format of the given truststore (e.g. JKS or PKCS12). Defaults to the same type as the keystore type</description>
</property>

<!--
<property>
  <name>zeppelin.ssl.truststore.password</name>
  <value>change me</value>
  <description>Truststore password. Can be obfuscated by the Jetty Password tool. Defaults to the keystore password</description>
</property>
-->

<property>
  <name>zeppelin.server.allowed.origins</name>
  <value>*</value>
  <description>Allowed sources for REST and WebSocket requests (i.e. http://onehost:8080,http://otherhost.com). If you leave * you are vulnerable to https://issues.apache.org/jira/browse/ZEPPELIN-173</description>
</property>

<property>
  <name>zeppelin.anonymous.allowed</name>
  <value>false</value>
  <description>Anonymous user allowed by default</description>
</property>

<property>
  <name>zeppelin.notebook.public</name>
  <value>false</value>
  <description>Make notebook public by default when created, private otherwise</description>
</property>

<property>
  <name>zeppelin.websocket.max.text.message.size</name>
  <value>1024000</value>
  <description>Size in characters of the maximum text message to be received by websocket. Defaults to 1024000</description>
</property>

</configuration>

No, I won’t paste shiro.ini here.

 

]]>
https://xianglei.tech/archives/xianglei/2017/05/2748.html/feed 0
Use kerberized Hive in Zeppelin https://xianglei.tech/archives/xianglei/2017/05/2730.html https://xianglei.tech/archives/xianglei/2017/05/2730.html#respond Wed, 03 May 2017 18:52:12 +0000 https://xianglei.tech/?p=2730 We deployed Apache Zeppelin 0.7.0 on the Kerberos-secured Hadoop cluster, and my dear colleague could not use it correctly, so I had to find out why he couldn't use anything in Zeppelin except the shell command.

I started with kerberized Hive.

ERROR [2017-05-03 23:49:10,603] ({qtp1128883028-1682} NotebookServer.java[afterStatusChange]:2018) - Error
org.apache.zeppelin.interpreter.InterpreterException: paragraph_1493825452456_506278578's Interpreter hive not found
        at org.apache.zeppelin.notebook.Note.run(Note.java:572)
        at org.apache.zeppelin.socket.NotebookServer.persistAndExecuteSingleParagraph(NotebookServer.java:1626)
        at org.apache.zeppelin.socket.NotebookServer.runParagraph(NotebookServer.java:1600)
        at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:263)
        at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:59)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
        at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
        at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
        at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
        at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
        at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)

Two things caused this problem:
1. Permissions. Zeppelin must download some jars from repo.maven.org for the interpreters it uses; these are called dependencies. I packaged Zeppelin as an RPM distribution, so the installation path is /usr/lib/zeppelin and the download path is /usr/lib/zeppelin/local-repo by default. But this path is owned by root, so running chown -R zeppelin:zeppelin on local-repo resolves this issue.
2. There is no hive interpreter any more. According to the official documentation, the Hive interpreter has been deprecated for a long time; JDBC is used instead. My dear colleague did not read any documentation and created an interpreter named hive, so it went wrong.

The right way is to do it like this: put the configuration in the jdbc interpreter area.

And you should use it like this:
%jdbc(hive)
show databases

Then I had to get kerberized Hive working in Zeppelin. I googled, but there was no useful information; it seems nobody had done this before.
I added hive-jdbc and hadoop-common as the official documentation says, but it didn't work.
The Hive interpreter logs gave me a warning:

 WARN [2017-05-03 23:57:48,982] ({pool-2-thread-9} NotebookServer.java[afterStatusChange]:2026) - Job 20170503-234910_1513582802 is finished, status: ERROR, exception: null, result: %text org.apache.zeppelin.interpreter.InterpreterException: Could not load shims in class org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge23
java.lang.ClassNotFoundException: org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge23
        at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:413)
        at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:561)
        at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:660)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:489)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
        at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

Shims? Well, just add the hive-shims jar in the jdbc interpreter's dependencies area. But it still did not work, and gave me another error:

ERROR [2017-05-04 00:14:23,725] ({qtp1128883028-1804} NotebookServer.java[onMessage]:355) - Can't handle message
org.apache.zeppelin.interpreter.InterpreterException: org.apache.thrift.transport.TTransportException
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.cancel(RemoteInterpreter.java:367)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.cancel(LazyOpenInterpreter.java:100)
        at org.apache.zeppelin.notebook.Paragraph.jobAbort(Paragraph.java:457)
        at org.apache.zeppelin.scheduler.Job.abort(Job.java:236)
        at org.apache.zeppelin.socket.NotebookServer.cancelParagraph(NotebookServer.java:1536)
        at org.apache.zeppelin.socket.NotebookServer.onMessage(NotebookServer.java:269)
        at org.apache.zeppelin.socket.NotebookSocket.onWebSocketText(NotebookSocket.java:59)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextMessage(JettyListenerEventDriver.java:128)
        at org.eclipse.jetty.websocket.common.message.SimpleTextMessage.messageComplete(SimpleTextMessage.java:69)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.appendMessage(AbstractEventDriver.java:65)
        at org.eclipse.jetty.websocket.common.events.JettyListenerEventDriver.onTextFrame(JettyListenerEventDriver.java:122)
        at org.eclipse.jetty.websocket.common.events.AbstractEventDriver.incomingFrame(AbstractEventDriver.java:161)
        at org.eclipse.jetty.websocket.common.WebSocketSession.incomingFrame(WebSocketSession.java:309)
        at org.eclipse.jetty.websocket.common.extensions.ExtensionStack.incomingFrame(ExtensionStack.java:214)
        at org.eclipse.jetty.websocket.common.Parser.notifyFrame(Parser.java:220)
        at org.eclipse.jetty.websocket.common.Parser.parse(Parser.java:258)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.readParse(AbstractWebSocketConnection.java:632)
        at org.eclipse.jetty.websocket.common.io.AbstractWebSocketConnection.onFillable(AbstractWebSocketConnection.java:480)
        at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:544)
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
        at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429)
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318)
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219)
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
        at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_cancel(RemoteInterpreterService.java:291)
        at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.cancel(RemoteInterpreterService.java:276)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.cancel(RemoteInterpreter.java:364)
        ... 21 more

Then I added hive-exec.jar to the dependencies, and Zeppelin died… I removed hive-exec and restarted Zeppelin…

The final error is a Hadoop proxy-user problem. I use zin as the Hive authentication account, added a zeppelin user on all nodes and created a zeppelin keytab file, and then it gave me this error:

org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of zeppelin for admin
        at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:241)
        at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:232)
        at org.apache.hive.jdbc.HiveConnection.openSession(HiveConnection.java:491)
        at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:181)
        at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:187)
        at org.apache.commons.dbcp2.DriverManagerConnectionFactory.createConnection(DriverManagerConnectionFactory.java:79)
        at org.apache.commons.dbcp2.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:205)
        at org.apache.commons.pool2.impl.GenericObjectPool.create(GenericObjectPool.java:861)
        at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:435)
        at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
        at org.apache.commons.dbcp2.PoolingDriver.connect(PoolingDriver.java:129)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:233)
        at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnectionFromPool(JDBCInterpreter.java:351)
        at org.apache.zeppelin.jdbc.JDBCInterpreter.getConnection(JDBCInterpreter.java:385)
        at org.apache.zeppelin.jdbc.JDBCInterpreter.executeSql(JDBCInterpreter.java:561)
        at org.apache.zeppelin.jdbc.JDBCInterpreter.interpret(JDBCInterpreter.java:660)
        at org.apache.zeppelin.interpreter.LazyOpenInterpreter.interpret(LazyOpenInterpreter.java:94)
        at org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer$InterpretJob.jobRun(RemoteInterpreterServer.java:489)
        at org.apache.zeppelin.scheduler.Job.run(Job.java:175)
        at org.apache.zeppelin.scheduler.ParallelScheduler$JobRunner.run(ParallelScheduler.java:162)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178)
        at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilege of zin for admin
        at org.apache.hive.service.auth.HiveAuthFactory.verifyProxyAccess(HiveAuthFactory.java:402)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.getProxyUser(ThriftCLIService.java:750)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.getUserName(ThriftCLIService.java:384)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:411)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:316)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1253)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1238)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:746)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        ... 3 more
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: User: zin is not allowed to impersonate admin

This is so easy: just like Hue or other applications, the user is simply not set as a Hadoop proxy user/group in core-site.xml (the hadoop.proxyuser.zin.hosts and hadoop.proxyuser.zin.groups properties). I think it would work once I added the proxy-user entries, but I don't know how to add a new config in FUCKING Cloudera Manager, so I gave up and used an existing user named hive to access Zeppelin.
So the Zeppelin config finally looks like this.

And what about kerberized Spark on YARN?
Just add the following config to spark-defaults.conf:
spark.yarn.principal user/host@PG.COM
spark.yarn.keytab /etc/hadoop/conf.empty/user.keytab
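
For completeness, the same two settings can also be put on the SparkConf from pyspark code before the context is created, at least for yarn-client mode as far as I know (the principal and keytab path below are just the example values from above):

# Hedged sketch of the spark-defaults.conf lines above, expressed in code.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("yarn-client")
        .set("spark.yarn.principal", "user/host@PG.COM")
        .set("spark.yarn.keytab", "/etc/hadoop/conf.empty/user.keytab"))
sc = SparkContext(conf=conf)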

Finally, I can sleep.



]]>
https://xianglei.tech/archives/xianglei/2017/05/2730.html/feed 0
Troubleshooting kerberized hive issues https://xianglei.tech/archives/xianglei/2017/05/2724.html https://xianglei.tech/archives/xianglei/2017/05/2724.html#respond Wed, 03 May 2017 15:01:43 +0000 https://xianglei.tech/?p=2724 Today my colleagues wanted to use Hive in Zeppelin. It was the first time Hive had been used on this new kerberized cluster, and unfortunately there was an authentication issue with it, so I had to debug it.

The Hive client host had hadoop-client and hive installed, all the needed keytabs were placed in the config dirs with the right permissions, but it still could not connect to the cluster. The log always showed that authentication failed.

First, I must describe our new cluster. It is a cluster for the P&G Group, managed by Cloudera Manager with an enterprise license. The nodes were all installed with Cloudera Hadoop 5.10.1 and the other components from the same-versioned parcels. The client was deployed manually from the CDH 5.9.0 RPM packages, plus an Apache community build of Zeppelin. I think cluster management would be much easier if I didn't use Cloudera Manager; CM is the biggest nightmare in the history of big data. The first big problem is that you never know where the manager puts the config files.

Now, back to our discussion: Kerberos. For debugging, I logged into a cluster node and tried the hive and beeline commands; they both seemed to work well.

On pg-dmp-slavexx.hadoop

hive
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.10.1-1.cdh5.10.1.p0.10/jars/hive-common-1.1.0-cdh5.10.1.jar!/hive-log4j.properties
WARNING: Hive CLI is deprecated and migration to Beeline is recommended.
hive> show databases;
OK
default
Time taken: 1.661 seconds, Fetched: 1 row(s)
hive>

Well, that looks good. Now beeline:

beeline -u 'jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM'
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
scan complete in 1ms
Connecting to jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM
Connected to: Apache Hive (version 1.1.0-cdh5.10.1)
Driver: Hive JDBC (version 1.1.0-cdh5.10.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.10.1 by Apache Hive
0: jdbc:hive2://pg-dmp-master2.hadoop:10000/d> show databases;
INFO  : Compiling command(queryId=hive_20170503222424_9512c898-9822-4659-b07b-f8abb2fd50b7): show databases
INFO  : Semantic Analysis Completed
INFO  : Returning Hive schema: Schema(fieldSchemas:[FieldSchema(name:database_name, type:string, comment:from deserializer)], properties:null)
INFO  : Completed compiling command(queryId=hive_20170503222424_9512c898-9822-4659-b07b-f8abb2fd50b7); Time taken: 0.004 seconds
INFO  : Executing command(queryId=hive_20170503222424_9512c898-9822-4659-b07b-f8abb2fd50b7): show databases
INFO  : Starting task [Stage-0:DDL] in serial mode
INFO  : Completed executing command(queryId=hive_20170503222424_9512c898-9822-4659-b07b-f8abb2fd50b7); Time taken: 0.013 seconds
INFO  : OK
+----------------+--+
| database_name  |
+----------------+--+
| default        |
+----------------+--+
1 row selected (0.106 seconds)
0: jdbc:hive2://pg-dmp-master2.hadoop:10000/d>

Well, that looks good too.

Second step: I logged into the 5.9.0 client and tried the same commands, and hive failed.

hive
2017-05-03 22:09:28,228 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.

Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:541)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:206)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        ... 8 more
Caused by: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1530)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        ... 12 more
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        ... 19 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:430)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:477)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        ... 24 more

And beeline failed too.

beeline -u 'jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM'
2017-05-03 22:27:44,881 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
scan complete in 1ms
Connecting to jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM

17/05/03 22:27:46 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:202)
        at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167)
        at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:187)
        at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
        at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
        at org.apache.hive.beeline.Commands.connect(Commands.java:1457)
        at org.apache.hive.beeline.Commands.connect(Commands.java:1352)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
        at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1130)
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1169)
        at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:810)
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:890)
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:510)
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:493)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 35 more
HS2 may be unavailable, check server status
Error: Could not open client transport with JDBC Uri: jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM: GSS initiate failed (state=08S01,code=0)
Beeline version 1.1.0-cdh5.9.0 by Apache Hive
beeline>

When I saw "HS2 may be unavailable", I thought the hiveserver2 process must be down, so I checked it in CM and with ps -aux, but it was still alive. Then I tried telnet hiveserver2 10000 and telnet metastore_server 9083, and both connected correctly. So I didn't know what was happening. Reading the log, there is a SASL error, WTF?
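
The checks above as commands (hiveserver2_host and metastore_host are just placeholders for the real hostnames):

ps aux | grep -i hiveserver2 | grep -v grep    # is the HiveServer2 process still alive?
telnet hiveserver2_host 10000                  # HiveServer2 Thrift port
telnet metastore_host 9083                     # metastore Thrift port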

It looks like a Kerberos authentication issue. I googled the whole day, but found no useful info.

So I opened hive-env.sh and added this line; I added the same line to another client which works well, for comparison.

export HADOOP_OPTS="-Dsun.security.krb5.debug=true ${HADOOP_OPTS}"

Well, on the good client, it shows the Kerberos debug info like this.

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=512M; support was removed in 8.0
scan complete in 2ms
Connecting to jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
>>>KinitOptions cache name is /tmp/krb5cc_0
>>>DEBUG <CCacheInputStream>  client principal is xianglei@PG.COM
>>>DEBUG <CCacheInputStream> server principal is krbtgt/PG.COM@PG.COM
>>>DEBUG <CCacheInputStream> key type: 23
>>>DEBUG <CCacheInputStream> auth time: Wed May 03 18:29:34 CST 2017
>>>DEBUG <CCacheInputStream> start time: Wed May 03 18:29:34 CST 2017
>>>DEBUG <CCacheInputStream> end time: Thu May 04 18:29:34 CST 2017
>>>DEBUG <CCacheInputStream> renew_till time: Wed May 10 18:29:33 CST 2017
>>> CCacheInputStream: readFlags()  FORWARDABLE; RENEWABLE; INITIAL;
>>>DEBUG <CCacheInputStream>  client principal is xianglei@PG.COM
>>>DEBUG <CCacheInputStream> server principal is X-CACHECONF:/krb5_ccache_conf_data/fast_avail/krbtgt/PG.COM@PG.COM@PG.COM
>>>DEBUG <CCacheInputStream> key type: 0
>>>DEBUG <CCacheInputStream> auth time: Thu Jan 01 08:00:00 CST 1970
>>>DEBUG <CCacheInputStream> start time: null
>>>DEBUG <CCacheInputStream> end time: Thu Jan 01 08:00:00 CST 1970
>>>DEBUG <CCacheInputStream> renew_till time: null
>>> CCacheInputStream: readFlags() 
Found ticket for xianglei@PG.COM to go to krbtgt/PG.COM@PG.COM expiring on Thu May 04 18:29:34 CST 2017
Entered Krb5Context.initSecContext with state=STATE_NEW
Found ticket for xianglei@PG.COM to go to krbtgt/PG.COM@PG.COM expiring on Thu May 04 18:29:34 CST 2017
Service ticket not found in the subject
>>> Credentials acquireServiceCreds: same realm
default etypes for default_tgs_enctypes: 23.
>>> CksumType: sun.security.krb5.internal.crypto.RsaMd5CksumType
>>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
>>> KdcAccessibility: reset
>>> KrbKdcReq send: kdc=pg-dmp-master2.hadoop TCP:88, timeout=3000, number of retries =3, #bytes=621
>>> KDCCommunication: kdc=pg-dmp-master2.hadoop TCP:88, timeout=3000,Attempt =1, #bytes=621
>>>DEBUG: TCPClient reading 612 bytes
>>> KrbKdcReq send: #bytes read=612
>>> KdcAccessibility: remove pg-dmp-master2.hadoop
>>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
>>> KrbApReq: APOptions are 00100000 00000000 00000000 00000000
>>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
Krb5Context setting mySeqNumber to: 575633251
Created InitSecContextToken:
0000: 01 00 6E 82 02 1B 30 82   02 17 A0 03 02 01 05 A1  ..n...0.........
0010: 03 02 01 0E A2 07 03 05   00 20 00 00 00 A3 82 01  ......... ......
0020: 45 61 82 01 41 30 82 01   3D A0 03 02 01 05 A1 08  Ea..A0..=.......
0030: 1B 06 50 47 2E 43 4F 4D   A2 28 30 26 A0 03 02 01  ..PG.COM.(0&....
0040: 00 A1 1F 30 1D 1B 04 68   69 76 65 1B 15 70 67 2D  ...0...hive..pg-
0050: 64 6D 70 2D 6D 61 73 74   65 72 32 2E 68 61 64 6F  dmp-master2.hado
0060: 6F 70 A3 82 01 00 30 81   FD A0 03 02 01 17 A1 03  op....0.........
0070: 02 01 05 A2 81 F0 04 81   ED 7C 10 DA F1 10 84 5A  ...............Z
0080: EF 26 A4 1F 75 47 E7 AD   18 DE 05 1F B8 F8 9D 2F  .&..uG........./
0090: A1 CB 55 11 1E 19 56 0D   1C 9D B1 6D E3 84 FD A5  ..U...V....m....
00A0: 06 70 06 64 5C 6A F7 05   CE AA 38 6D 53 62 08 23  .p.d\j....8mSb.#
00B0: 2B 4A 8F 77 BB 1F A1 8D   CC A9 5B 31 A5 7A 85 21  +J.w......[1.z.!
00C0: 34 98 9F FD D4 B9 25 74   6A E5 5D FE 77 B1 73 27  4.....%tj.].w.s'
00D0: B1 54 E5 46 05 61 BF 0E   39 9E 1C 2E 3B 03 4A 39  .T.F.a..9...;.J9
00E0: 11 8D D3 F9 8F 23 FA 42   89 A0 1D E4 0C 10 05 C4  .....#.B........
00F0: 12 99 4F 69 6A 0D C6 E1   D0 F0 B3 8B DA 05 AF 35  ..Oij..........5
0100: 9D F1 33 3D A2 8C B1 1A   C9 77 1E 54 99 03 E0 8A  ..3=.....w.T....
0110: D4 20 F9 BC 34 23 7F 4C   A5 DC E4 90 0D 73 74 07  . ..4#.L.....st.
0120: 59 10 13 7C B0 44 5F 20   CE D2 C1 F2 BF 75 77 96  Y....D_ .....uw.
0130: DF 08 7A FF BB 7C 1F 7C   7C 0F 98 90 C2 0F 4D E9  ..z...........M.
0140: 81 A3 1F 64 D7 12 31 1E   A9 0C 78 33 46 66 5A DE  ...d..1...x3FfZ.
0150: F6 8E F6 02 F2 11 1C 8C   F6 BB 0C 4F FB C2 39 DB  ...........O..9.
0160: 7A F3 94 0D 95 28 A4 81   B8 30 81 B5 A0 03 02 01  z....(...0......
0170: 17 A2 81 AD 04 81 AA B7   6B 3E 91 7B 6A 78 A3 35  ........k>..jx.5
0180: E5 40 C3 24 C6 8A 90 29   D6 CC 9A 6C D1 97 DE 58  .@.$...)...l...X
0190: 18 1E B4 E5 B6 8D D3 53   F7 D4 E9 D5 ED E6 F1 E7  .......S........
01A0: AB 7F 16 B3 A6 EB F1 4B   FA FF 23 2E C7 01 60 1E  .......K..#...`.
01B0: 19 45 C0 1C 0C AA 0A 4E   3F A2 50 AD 01 7B FF 97  .E.....N?.P.....
01C0: 31 85 FD 18 34 73 4B 7A   1C 6A 98 2D BD 9E 76 86  1...4sKz.j.-..v.
01D0: 53 A0 78 AF E1 D4 0E 47   7B 78 6E CE 26 64 BB E0  S.x....G.xn.&d..
01E0: A4 72 EE D5 72 23 45 E8   F3 26 F3 CD A8 55 ED 83  .r..r#E..&...U..
01F0: 57 0D C0 F5 F3 38 2B 10   66 10 8D E7 2F F7 01 FE  W....8+.f.../...
0200: 0A 19 57 7E 62 95 CB A1   33 A2 C4 43 CA E6 49 71  ..W.b...3..C..Iq
0210: 63 E6 01 EF 6A A1 4E E2   FC 36 66 65 D6 41 B4 F9  c...j.N..6fe.A..
0220: 64                                                 d

Entered Krb5Context.initSecContext with state=STATE_IN_PROCESS
>>> EType: sun.security.krb5.internal.crypto.ArcFourHmacEType
Krb5Context setting peerSeqNumber to: 15371956
Krb5Context.unwrap: token=[60 30 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00 ff ff ff ff 81 6d f2 03 73 5b 76 3c 92 69 4f 82 dc b2 40 63 f9 2d de 4f f8 7c af 41 01 01 00 00 01 ]
Krb5Context.unwrap: data=[01 01 00 00 ]
Krb5Context.wrap: data=[01 01 00 00 ]
Krb5Context.wrap: token=[60 30 06 09 2a 86 48 86 f7 12 01 02 02 02 01 11 00 ff ff ff ff 4d 06 d1 37 3b 4c 57 96 72 04 26 e2 af 91 90 81 b2 f3 e8 d6 07 8e d3 7a 01 01 00 00 01 ]
Connected to: Apache Hive (version 1.1.0-cdh5.10.1)
Driver: Hive JDBC (version 1.1.0-cdh5.10.1)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Beeline version 1.1.0-cdh5.10.1 by Apache Hive
0: jdbc:hive2://pg-dmp-master2.hadoop:10000/d>

On the failed node, there is no such debug output; it just fails.

beeline -u 'jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM'
2017-05-03 22:27:44,881 WARN  [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present.  Continuing without it.
scan complete in 1ms
Connecting to jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM
Java config name: null
Native config name: /etc/krb5.conf
Loaded from native config
17/05/03 22:27:46 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:202)
        at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:167)
        at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105)
        at java.sql.DriverManager.getConnection(DriverManager.java:571)
        at java.sql.DriverManager.getConnection(DriverManager.java:187)
        at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:142)
        at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:207)
        at org.apache.hive.beeline.Commands.connect(Commands.java:1457)
        at org.apache.hive.beeline.Commands.connect(Commands.java:1352)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:52)
        at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1130)
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1169)
        at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:810)
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:890)
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:510)
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:493)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 35 more
HS2 may be unavailable, check server status
Error: Could not open client transport with JDBC Uri: jdbc:hive2://pg-dmp-master2.hadoop:10000/default;principal=hive/pg-dmp-master2.hadoop@PG.COM: GSS initiate failed (state=08S01,code=0)
Beeline version 1.1.0-cdh5.9.0 by Apache Hive
beeline>

I created a principal and a new keytab for the failed node, but it still failed. And notice: there is no Kerberos authentication debug info at all on the failed node.
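
For the record, creating a principal and exporting a keytab on the KDC looks roughly like this (the principal name is only illustrative; the real one isn't recorded here):

# kadmin.local -q "addprinc someuser@PG.COM"
# kadmin.local -q "xst -k someuser.keytab someuser@PG.COM"
# kinit -kt someuser.keytab someuser@PG.COM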

Then I started thinking: Kerberos creates a local credential cache for applications, so it doesn't have to contact the KDC server every time hive is called; authentication can be done locally. Maybe the failed node was not reading the local Kerberos cache? So I copied Hadoop's core-site.xml into the Hive conf path. The core-site.xml contains a property named hadoop.security.auth_to_local, and its value is DEFAULT. And the issue was solved.
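
Concretely, the fix was just copying the file over (on the failed client the Hive conf path shows up as /etc/hive/conf.dist in the log banner above; /etc/hadoop/conf is assumed to be the Hadoop client conf dir):

cp /etc/hadoop/conf/core-site.xml /etc/hive/conf.dist/
grep -A 1 hadoop.security.auth_to_local /etc/hive/conf.dist/core-site.xml   # the <value> should be DEFAULT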

And here are the other logs.

/tmp/root/hive.log on the failed node

2017-05-03 22:09:30,656 ERROR [main]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:430)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 36 more
2017-05-03 22:09:30,661 WARN  [main]: hive.metastore (HiveMetaStoreClient.java:open(439)) - Failed to connect to the MetaStore Server...
2017-05-03 22:09:31,663 ERROR [main]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:430)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 36 more
2017-05-03 22:09:31,665 WARN  [main]: hive.metastore (HiveMetaStoreClient.java:open(439)) - Failed to connect to the MetaStore Server...
2017-05-03 22:09:32,666 ERROR [main]: transport.TSaslTransport (TSaslTransport.java:open(315)) - SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:430)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:193)
        ... 36 more
2017-05-03 22:09:32,667 WARN  [main]: hive.metastore (HiveMetaStoreClient.java:open(439)) - Failed to connect to the MetaStore Server...
2017-05-03 22:09:33,674 WARN  [main]: metadata.Hive (Hive.java:registerAllFunctionsOnce(204)) - Failed to register all functions.
java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1530)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        ... 19 more
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:430)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1528)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:3037)
        at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:3056)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:3281)
        at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:217)
        at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:201)
        at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:324)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:285)
        at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:260)
        at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:514)
        at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:689)
        at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:628)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:477)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:240)
        at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:74)
        ... 24 more

hiveserver2.log

2017-05-03 22:27:46,471 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-63]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more

metastore server log

2017-05-03 22:58:16,642 ERROR org.apache.thrift.server.TThreadPoolServer: [pool-4-thread-90]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more
2017-05-03 22:58:17,646 ERROR org.apache.thrift.server.TThreadPoolServer: [pool-4-thread-91]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more
2017-05-03 22:58:18,648 ERROR org.apache.thrift.server.TThreadPoolServer: [pool-4-thread-92]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:356)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more

]]>
https://xianglei.tech/archives/xianglei/2017/05/2724.html/feed 0
Deploy shadowsocks https://xianglei.tech/archives/xianglei/2017/04/2695.html https://xianglei.tech/archives/xianglei/2017/04/2695.html#respond Mon, 24 Apr 2017 08:53:08 +0000 https://xianglei.tech/?p=2695 Since I live in China, and the Great Firewall has blocked almost everything on this planet, I have to find lots of ladders to get over the wall and reach useful things: Freegate, Lvdou, and Shadowsocks. Chinese people live in this tragedy every day.

Now I'm trying Shadowsocks, a simple tool for reaching the outside world. Several months ago I bought a cloud server in Aliyun's US West region to build Apache Bigtop. Now I will set this server up as a Shadowsocks server.

# pip install shadowsocks

Then edit a file named shadows.json in any directory, with content like this:

{
  "server":"server.ip",
  "port_password":{
    "8381":"yourpass",
    "8382":"yourpass2"
  },
  "timeout":300,
  "method":"aes-256-cfb",
  "fast_open":false,
  "workers":1
}

Save and exit, then run

# ssserver -c shadows.json -d start

On Android I can install Shadowsocks through the Google Play store; on Ubuntu, do this:

# sudo add-apt-repository ppa:hzwhuang/ss-qt5
# sudo apt-get update
# sudo apt-get install shadowsocks-qt5
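
The same pip package also ships a command-line client, sslocal, in case the Qt GUI is not wanted; a minimal sketch matching the server config above (1080 is an arbitrary local SOCKS port):

# sslocal -s server.ip -p 8381 -k yourpass -m aes-256-cfb -l 1080 -d start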

and then see the better world.

]]>
https://xianglei.tech/archives/xianglei/2017/04/2695.html/feed 0
Enable Kerberos secured Hadoop cluster with Cloudera Manager https://xianglei.tech/archives/xianglei/2017/04/2678.html https://xianglei.tech/archives/xianglei/2017/04/2678.html#respond Mon, 24 Apr 2017 05:30:42 +0000 https://xianglei.tech/?p=2678 I created a secured Hadoop cluster for P&G with Cloudera Manager, and this document records how to enable a Kerberos-secured cluster with Cloudera Manager. First, the cluster needs a Kerberos KDC and Kerberos clients:

  1. Install the KDC server
    Note: only one server runs this; the KDC is installed on a single server only.

    sudo yum -y install krb5-server krb5-libs krb5-workstation krb5-auth-dialog openldap-clients

    This command will install Kerberos Server and some useful commands from krb5-workstation

  2. Modify /var/kerberos/krb5kdc/kdc.conf
    [kdcdefaults]
     kdc_ports = 88
     kdc_tcp_ports = 88
    
    [realms]
     PG.COM = {
      #master_key_type = aes256-cts
      acl_file = /var/kerberos/krb5kdc/kadm5.acl
      dict_file = /usr/share/dict/words
      admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
      supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
      max_renewable_life = 7d
     }

    PG.COM replaces the default EXAMPLE.COM,
    and max_renewable_life = 7d is added; you can set it even longer, e.g. 4w means 4 weeks.

  3. Modify /etc/krb5.conf
    [libdefaults]
    default_realm = PG.COM
    dns_lookup_kdc = false
    dns_lookup_realm = false
    ticket_lifetime = 259200
    renew_lifetime = 604800
    forwardable = true
    default_tgs_enctypes = rc4-hmac
    default_tkt_enctypes = rc4-hmac
    permitted_enctypes = rc4-hmac
    udp_preference_limit = 1
    kdc_timeout = 3000
    [realms]
     PG.COM = {
      kdc = pg-dmp-master2.hadoop
      admin_server = pg-dmp-master2.hadoop
     }

    EXAMPLE.COM is changed to PG.COM. In the [realms] section, kdc points to the node where I installed the KDC, and admin_server points to the server where kadmin is installed.

  4. Create the realm on KDC server
    # kdb5_util create -s -r PG.COM

    This will create the working realm named PG.COM

  5. Create the admin user principal in PG.COM on the KDC server
    # kadmin.local -q "addprinc root/admin"

    I use root/admin as the admin user; you can type in a different password for this admin. Note that root/admin is not the same as root: both can exist in Kerberos's database, and they are two different accounts.
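
    To double-check what is in the Kerberos database now, listing the principals is handy (not one of the required steps):

    # kadmin.local -q "listprincs"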

  6. Edit /var/kerberos/krb5kdc/kadm5.acl
    */admin@PG.COM

    This defines who the admins are in the PG.COM realm: everyone matching */admin@PG.COM can be a Kerberos admin.

  7. Check all the Kerberos configuration files and make sure there are no errors.

    /etc/krb5.conf
    /var/kerberos/krb5kdc/kdc.conf
    /var/kerberos/krb5kdc/kadm5.acl

  8. Now start the KDC and kadmin services on the KDC server
    # service krb5kdc start
    # service kadmin start

    This will start KDC and Kadmin service on KDC server
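
    If you also want both daemons to come back after a reboot (optional; assumes a chkconfig-based init system, which matches the service commands used here):

    # chkconfig krb5kdc on
    # chkconfig kadmin on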

  9. Log in to all other nodes of this cluster and run this
    # yum install krb5-workstation krb5-libs krb5-auth-dialog openldap-clients cyrus-sasl-plain cyrus-sasl-gssapi

    This installs the Kerberos clients on all nodes

And then, back in Cloudera Manager, click Cluster->Operation and find Enable Kerberos in the dropdown menu. Answer all the questions, then Next->Next->Next… until it finishes.

When the whole procedure is done, you can use the Kerberos-secured Hadoop cluster.

 

So here are some more questions that came up after installing the Kerberos-secured cluster.

How can I add a new user principal?

# kadmin.local -q "addprinc username@PG.COM"

Note that when you add a common user to the cluster, you should run the command above on the kadmin (KDC) server. username@PG.COM is a normal user in Kerberos; if you want to add an admin user, use username/admin@PG.COM, where /admin is what you defined in kadm5.acl.

 

How should I administer HDFS or YARN? I mean, how do I use the hdfs or yarn user?

Well, Cloudera Manager automatically creates several users in the cluster, such as hdfs, yarn, hbase…, and these users are all defined as nologin without any password. They are admin users of Hadoop, but they are not in the Kerberos database. So you can't ask for a ticket like this:

# kinit hdfs
kinit: Client not found in Kerberos database while getting initial credentials
# kinit hdfs@PG.COM
kinit: Client not found in Kerberos database while getting initial credentials

But you can get a ticket by using the keytabs of these cluster users, like this:

 kinit -kt hdfs.keytab hdfs/current_server@PG.COM

current_server is the hostname of the server you are currently logged in to and from which you want to access HDFS or YARN, such as a Hadoop client. In the cluster, hdfs.keytab is different on each node, so you must write the command like the one above.
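
On a Cloudera Manager managed node, the role keytabs usually live under the agent's process directory; a sketch for locating one and checking which principals it holds (the exact path is chosen by CM and varies per role and per restart):

KT=$(find /var/run/cloudera-scm-agent/process -name hdfs.keytab 2>/dev/null | tail -1)
klist -kt "$KT"                              # list the principals inside the keytab
kinit -kt "$KT" hdfs/$(hostname -f)@PG.COM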

 

When I run a smoke test like Pi, it goes wrong, even though I've already added a new user principal to the KDC database.

17/04/24 11:18:35 INFO mapreduce.Job: Job job_1493003216756_0004 running in uber mode : false
17/04/24 11:18:35 INFO mapreduce.Job:  map 0% reduce 0%
17/04/24 11:18:35 INFO mapreduce.Job: Job job_1493003216756_0004 failed with state FAILED due to: Application application_1493003216756_0004 failed 2 times due to AM Container for appattempt_1493003216756_0004_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://pg-dmp-master1.hadoop:8088/proxy/application_1493003216756_0004/Then, click on links to logs of each attempt.
Diagnostics: Application application_1493003216756_0004 initialization failed (exitCode=255) with output: main : command provided 0
main : run as user is dmp
main : requested yarn user is dmp
User dmp not found

Failing this attempt. Failing the application.
17/04/24 11:18:35 INFO mapreduce.Job: Counters: 0
Job Finished in 1.094 seconds
java.io.FileNotFoundException: File does not exist: hdfs://PG-dmp-HA/user/dmp/QuasiMonteCarlo_1493003913376_864529226/out/reduce-out
        at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1257)
        at org.apache.hadoop.hdfs.DistributedFileSystem$20.doCall(DistributedFileSystem.java:1249)
        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
        at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1249)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1817)
        at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1841)
        at org.apache.hadoop.examples.QuasiMonteCarlo.estimatePi(QuasiMonteCarlo.java:314)
        at org.apache.hadoop.examples.QuasiMonteCarlo.run(QuasiMonteCarlo.java:354)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
        at org.apache.hadoop.examples.QuasiMonteCarlo.main(QuasiMonteCarlo.java:363)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
        at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
        at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

How can I create a user keytab just for submitting jobs, instead of using the hdfs/yarn/mapred users?

# kadmin
kadmin: xst -k dmp.keytab dmp@PG.COM
# ktutil
ktutil: rkt dmp.keytab
ktutil: wkt dmp-2.keytab
ktutil: clear
# kinit -k -t dmp-2.keytab dmp@PG.COM
# hadoop fs -ls /
...

And you can use this keytab file in a JAAS config for MapReduce job submission. Could ktutil be omitted? I'll try that later.
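
For reference, a minimal JAAS sketch that logs in from such a keytab might look like the following (the entry name "Client" and the keytab path are placeholders, not something this cluster defines; adjust them to whatever your client application expects):

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/home/dmp/dmp-2.keytab"
  principal="dmp@PG.COM";
};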

 

In an unsecured cluster, Hadoop distributes all the containers to the nodes and starts them as an existing local user such as yarn. But in a secured cluster, a job's containers are distributed and run as the username of whoever submitted the job. So this error means you only added a principal, but no such local user exists on the nodes, and therefore the container executor cannot start the ApplicationMaster, mappers or reducers. You must add this user to every node to solve the problem. Simply use the Linux command below.

# useradd dmp

Run it on every node in the cluster and on every client. A quicker way to add the user to all nodes is to use OpenLDAP instead of adding users manually; a small SSH loop, as sketched below, also works for small clusters.
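
A minimal sketch of that loop, assuming passwordless root SSH and a nodes.txt file that lists every hostname (both are assumptions about your environment):

for host in $(cat nodes.txt); do
    ssh root@"$host" 'id dmp >/dev/null 2>&1 || useradd dmp'
done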

 

Finally, we should talk about some basic Kerberos theory.

There are three phases required for a client to access a service
– Authentication
– Authorization
– Service request


Client sends a Ticket-Granting Ticket (TGT) request to AS


AS checks its database to authenticate the client
  – Authentication is typically done by checking LDAP/Active Directory
  – If valid, the AS sends a Ticket-Granting Ticket (TGT) to the client


Client uses this TGT to request a service ticket from the TGS
  – A service ticket is validation that a client can access a service


TGS verifies whether the client is permitted to use the requested service
  – If access is granted, the TGS sends a service ticket to the client

Client can then use the service
  – The service can validate the client with info from the service ticket

The kinit program is used to obtain a ticket from Kerberos
klist shows the tickets currently held in your credential cache (and, with -kt, the keys stored in a keytab)
kdestroy explicitly deletes your tickets
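
As a quick illustration of that lifecycle (a hedged sketch using the dmp principal from earlier; output omitted):

# kinit dmp@PG.COM     # prompts for the password and caches a TGT
# klist                # lists the cached tickets
# kdestroy             # wipes the credential cache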

( Pics and their comments are taken from the Cloudera Administrator Training course, copyright cloudera.com )

]]>
https://xianglei.tech/archives/xianglei/2017/04/2678.html/feed 0
Grammy Best Metal — Megadeth Dystopia https://xianglei.tech/archives/xianglei/2017/03/294.html https://xianglei.tech/archives/xianglei/2017/03/294.html#respond Thu, 09 Mar 2017 16:40:09 +0000 https://xianglei.tech/?p=294 My favorite band, sweet metal riff

]]>
https://xianglei.tech/archives/xianglei/2017/03/294.html/feed 0
Dr.Elephant MySQL connection error https://xianglei.tech/archives/xianglei/2017/03/274.html https://xianglei.tech/archives/xianglei/2017/03/274.html#respond Mon, 06 Mar 2017 10:49:11 +0000 http://xianglei.tech/?p=274 This is the first time I have tried to write my blog in English, so don’t jeer at my grammar and spelling mistakes.

Because multi-threaded Dr. Elephant puts a very high load on the JobHistoryServer, I had stopped it for a stretch of time. Then, last week, a patch that pulls from the JHS periodically was merged on GitHub, so I re-compiled Dr. Elephant and deployed the new build on the cluster. It seemed stable, but on Monday morning my leader told me that there were no more counters or any other information about cluster jobs in Dr. Elephant. So I logged in to the server, checked the log, and found the message below.

[error] c.j.b.ConnectionHandle - Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = HY000

I then found two things that can cause this issue. One of them is SELinux being set to enforcing; I changed that to disabled and rebooted the server. That seemed to help, but the same error still appeared, only less often.
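
For reference, this is roughly how that SELinux change is made on CentOS/RHEL (a hedged sketch; the runtime toggle does not survive a reboot, which is why /etc/selinux/config is edited as well):

# getenforce                 # shows Enforcing / Permissive / Disabled
# setenforce 0               # switch to permissive immediately
# vi /etc/selinux/config     # set SELINUX=disabled, then reboot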

Then I reviewed the Dr. Elephant code and found that the CREATE TABLE part of the Play framework's MySQL initialization script has an issue: the indexes are too long. I had previously shortened the index prefixes to 250, but if a URL is longer than 250 characters, it goes wrong.

create index yarn_app_result_i4 on yarn_app_result (flow_exec_id(250));
create index yarn_app_result_i5 on yarn_app_result (job_def_id(250));
create index yarn_app_result_i6 on yarn_app_result (flow_def_id(250));

So I removed the index prefix limitation and rewrote the SQL as follows, enabling innodb_large_prefix and adding ROW_FORMAT=DYNAMIC to the table creation script. After that, no more errors appeared in the log…

SET GLOBAL innodb_file_format=Barracuda;
SET GLOBAL innodb_large_prefix = ON;
CREATE TABLE yarn_app_result (
  id               VARCHAR(50)   NOT NULL              COMMENT 'The application id, e.g., application_1236543456321_1234567',
  name             VARCHAR(100)  NOT NULL              COMMENT 'The application name',
  username         VARCHAR(50)   NOT NULL              COMMENT 'The user who started the application',
  queue_name       VARCHAR(50)   DEFAULT NULL          COMMENT 'The queue the application was submitted to',
  start_time       BIGINT        UNSIGNED NOT NULL     COMMENT 'The time in which application started',
  finish_time      BIGINT        UNSIGNED NOT NULL     COMMENT 'The time in which application finished',
  tracking_url     VARCHAR(255)  NOT NULL              COMMENT 'The web URL that can be used to track the application',
  job_type         VARCHAR(20)   NOT NULL              COMMENT 'The Job Type e.g, Pig, Hive, Spark, HadoopJava',
  severity         TINYINT(2)    UNSIGNED NOT NULL     COMMENT 'Aggregate severity of all the heuristics. Ranges from 0(LOW) to 4(CRITICAL)',
  score            MEDIUMINT(9)  UNSIGNED DEFAULT 0    COMMENT 'The application score which is the sum of heuristic scores',
  workflow_depth   TINYINT(2)    UNSIGNED DEFAULT 0    COMMENT 'The application depth in the scheduled flow. Depth starts from 0',
  scheduler        VARCHAR(20)   DEFAULT NULL          COMMENT 'The scheduler which triggered the application',
  job_name         VARCHAR(255)  NOT NULL DEFAULT ''   COMMENT 'The name of the job in the flow to which this app belongs',
  job_exec_id      VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A unique reference to a specific execution of the job/action(job in the workflow). This should filter all applications (mapreduce/spark) triggered by the job for a
 particular execution.',
  flow_exec_id     VARCHAR(255)  NOT NULL DEFAULT ''   COMMENT 'A unique reference to a specific flow execution. This should filter all applications fired by a particular flow execution. Note that if the scheduler supports sub-
workflows, then this ID should be the super parent flow execution id that triggered the the applications and sub-workflows.',
  job_def_id       VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A unique reference to the job in the entire flow independent of the execution. This should filter all the applications(mapreduce/spark) triggered by the job for al
l the historic executions of that job.',
  flow_def_id      VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A unique reference to the entire flow independent of any execution. This should filter all the historic mr jobs belonging to the flow. Note that if your scheduler 
supports sub-workflows, then this ID should reference the super parent flow that triggered the all the jobs and sub-workflows.',
  job_exec_url     VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A url to the job execution on the scheduler',
  flow_exec_url    VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A url to the flow execution on the scheduler',
  job_def_url      VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A url to the job definition on the scheduler',
  flow_def_url     VARCHAR(800)  NOT NULL DEFAULT ''   COMMENT 'A url to the flow definition on the scheduler',

  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ROW_FORMAT=DYNAMIC;

create index yarn_app_result_i1 on yarn_app_result (finish_time);
create index yarn_app_result_i2 on yarn_app_result (username,finish_time);
create index yarn_app_result_i3 on yarn_app_result (job_type,username,finish_time);
create index yarn_app_result_i4 on yarn_app_result (flow_exec_id);
create index yarn_app_result_i5 on yarn_app_result (job_def_id);
create index yarn_app_result_i6 on yarn_app_result (flow_def_id);
create index yarn_app_result_i7 on yarn_app_result (start_time);

CREATE TABLE yarn_app_heuristic_result (
  id                  INT(11)       NOT NULL AUTO_INCREMENT COMMENT 'The application heuristic result id',
  yarn_app_result_id  VARCHAR(50)   NOT NULL                COMMENT 'The application id',
  heuristic_class     VARCHAR(255)  NOT NULL                COMMENT 'Name of the JVM class that implements this heuristic',
  heuristic_name      VARCHAR(128)  NOT NULL                COMMENT 'The heuristic name',
  severity            TINYINT(2)    UNSIGNED NOT NULL       COMMENT 'The heuristic severity ranging from 0(LOW) to 4(CRITICAL)',
  score               MEDIUMINT(9)  UNSIGNED DEFAULT 0      COMMENT 'The heuristic score for the application. score = severity * number_of_tasks(map/reduce) where severity not in [0,1], otherwise score = 0',

  PRIMARY KEY (id),
  CONSTRAINT yarn_app_heuristic_result_f1 FOREIGN KEY (yarn_app_result_id) REFERENCES yarn_app_result (id)
);


create index yarn_app_heuristic_result_i1 on yarn_app_heuristic_result (yarn_app_result_id);
create index yarn_app_heuristic_result_i2 on yarn_app_heuristic_result (heuristic_name,severity);

CREATE TABLE yarn_app_heuristic_result_details (
  yarn_app_heuristic_result_id  INT(11) NOT NULL                  COMMENT 'The application heuristic result id',
  name                          VARCHAR(128) NOT NULL DEFAULT ''  COMMENT 'The analysis detail entry name/key',
  value                         VARCHAR(255) NOT NULL DEFAULT ''  COMMENT 'The analysis detail value corresponding to the name',
  details                       TEXT                              COMMENT 'More information on analysis details. e.g, stacktrace',

  PRIMARY KEY (yarn_app_heuristic_result_id,name),
  CONSTRAINT yarn_app_heuristic_result_details_f1 FOREIGN KEY (yarn_app_heuristic_result_id) REFERENCES yarn_app_heuristic_result (id)
);

create index yarn_app_heuristic_result_details_i1 on yarn_app_heuristic_result_details (name);
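
One caveat worth noting: SET GLOBAL only lasts until the MySQL server restarts. To make the large-prefix setup permanent, something like the following is needed in my.cnf (a sketch for MySQL 5.6; option names may differ on other versions, and innodb_file_per_table is included because Barracuda row formats require per-table tablespaces):

[mysqld]
innodb_file_format    = Barracuda
innodb_large_prefix   = 1
innodb_file_per_table = 1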

At last I sent a pull request to LinkedIn on github.com…

]]>
https://xianglei.tech/archives/xianglei/2017/03/274.html/feed 0
试用时间序列数据库InfluxDB https://xianglei.tech/archives/xianglei/2017/01/158.html https://xianglei.tech/archives/xianglei/2017/01/158.html#respond Tue, 24 Jan 2017 15:50:07 +0000 http://xianglei.tech/?p=158 Hadoop集群监控需要使用时间序列数据库,今天花了半天时间调研使用了一下最近比较火的InfluxDB,发现还真是不错,记录一下学习心得。

Influx是用Go语言写的,专为时间序列数据持久化所开发的,由于使用Go语言,所以各平台基本都支持。类似的时间序列数据库还有OpenTSDB,Prometheus等。

OpenTSDB很有名,性能也不错,但是基于HBase,要用那个还得先搭一套HBase,有点为了吃红烧肉自己得先去杀猪,烫皮,拔毛的感觉。Prometheus相关文档和讨论太少,而InfluxDB项目活跃,使用者多,文档也比较丰富,所以先看看这个。Influx可以说是LevelDB的Go语言修改版实现,LevelDB采用LSM引擎,效率很高,Influx是基于LSM引擎再修改的TSM引擎,专为时间序列设计。

InfluxDB 架构原理介绍

LevelDB 架构原理介绍

下午跟七牛的CrazyJVM聊了一下,因为七牛都是用Go,所以也大量部署了Influx给大型企业级用户使用,据说是全球最大的InfluxDB集群,七牛也给Influx提交了大量的Patch,结果Influx通过早期开源弄得差不多稳定了,突然就闭源了,这也太不局气了,然后搞Cluster功能收费,单机功能免费使用。

昨天看了一会文档,今天试用了一下,感觉很不错,值得推荐。把学习了解的内容记录下来,供爱好者参考,也省的自己时间长忘了。

InfluxDB其实不能说像哪个数据库,初上手感觉更像Mongo类型的NoSQL,但是有意思的是,它提供了类SQL接口,对开发人员十分友好。命令行查询结果的界面又有点像MySQL,挺有意思的。

不写安装部署和CLI接口,因为实在没得可写,直接yum或者apt就装了。service一启动,再influx命令就进命令行了,网上一大堆安装教程

InfluxDB有几个关键概念是需要了解的。

database:相当于RDBMS里面的库名。创建数据库的语句也十分相似。一进去就可以先创建一个数据库玩,加不加分号都行。

CREATE DATABASE 'hadoop'

 

然后需要创建一个用户,我省事点,直接创建一个最高权限,就看了一天,然后直接写REST接口去了,权限管理慢慢再细看。

CREATE USER "xianglei" WITH PASSWORD 'password' WITH ALL PRIVILEGES

 

使用查询语句插入一条数据

INSERT hdfs,hdfs=adh,path=/ free=2341234,used=51234123,nonhdfs=1234

 

Influx没有先建立schema的概念,因为Influx允许存储的数据是schemeless的,表在这里叫做measurement,数据在插入时如果没有表,则自动创建该表。

measurement: 相当于RDBMS里面的数据表名。

在上面的INSERT语句中,跟在insert后面的第一个hdfs就是measurement,如果不存在一个叫做hdfs的,就自动创建一个叫做hdfs的表,否则直接插入数据。

然后是tags,tags的概念类似于RDBMS里面的查询索引名,这里的tags是hdfs=adh和path=/,等于我建立了两个tags。

free往后统称叫fields,tags和fields之间用空格分开,tags和fields内部自己用逗号分开。tags和fields名称可以随意填写,主要是一开始设计好就行。

所以,对以上插入语句做一个注释的话,就是这样。

INSERT [hdfs(measurement)],[hdfs=adh,path=/(tags)] [free=2341234,used=51234123,nonhdfs=1234(fields)]

 

然后即可查询该数据

SELECT free FROM hdfs WHERE hdfs='adh' and path='/'

 

name: hdfs
time                    free
----                    ----
1485251656036494252     425234
1485251673348104714     425234

 

SELECT * FROM hdfs LIMIT 2
name: hdfs
time                  free    hdfs    nonhdfs  path   used
----                  ----    ----    -------  ----   ----
1485251656036494252   425234  adh     1341     /      23412
1485251673348104714   425234  adh     1341     /      23412

 

这里的where条件,即是上面tags里面的hdfs=adh和path=/,所以tags可以随意添加,但是在插入第一条数据的时候,最好先设计好你的查询条件。当然,你插入的任何数据,都会自动添加time列,数了数,应该是纳秒级的时间戳。


上面是Influx的基本概念和基本使用的记录,下面是接口开发的使用。以Tornado示例Restful查询接口。

Influx本身支持restful的HTTP API,python有直接封装的接口可以调用,直接 pip install influxdb即可

influxdb-python文档

Talk is cheap, show me your code.

Models  Influx模块,用于连接influxdb

class InfluxClient:
    def __init__(self):
        self._conf = ParseConfig()
        self._config = self._conf.load()
        self._server = self._config['influxdb']['server']
        self._port = self._config['influxdb']['port']
        self._user = self._config['influxdb']['username']
        self._pass = self._config['influxdb']['password']
        self._db = self._config['influxdb']['db']
        self._retention_days = self._config['influxdb']['retention']['days']
        self._retention_replica = self._config['influxdb']['retention']['replica']
        self._retention_name = self._config['influxdb']['retention']['name']
        self._client = InfluxDBClient(self._server, self._port, self._user, self._pass, self._db)
 
    def _create_database(self):
        try:
            self._client.create_database(self._db)
        except Exception, e:
            print e.message
 
    def _create_retention_policy(self):
        try:
            self._client.create_retention_policy(self._retention_name,
                                                 self._retention_days,
                                                 self._retention_replica,
                                                 default=True)
        except Exception, e:
            print e.message
 
    def _switch_user(self):
        try:
            self._client.switch_user(self._user, self._pass)
        except Exception, e:
            print e.message
 
    def write_points(self, data):
        self._create_database()
        self._create_retention_policy()
        if self._client.write_points(data):
            return True
        else:
            return False
 
    def query(self, qry):
        try:
            result = self._client.query(qry)
            return result
        except Exception, e:
            return e.message

 

连接influxdb的配置从项目的配置文件里读取,自己写也行。

Controller模块InfluxController

class InfluxRestController(tornado.web.RequestHandler):
    '''
    "GET"
        op=query&qry=select+used+from+hdfs+where+hdfs=adh
    '''
    查询方法,使用HTTP GET
    def get(self, *args, **kwargs):
        op = self.get_argument('op')
        #自己实现的python switch case,网上一大堆
        for case in switch(op):
            if case('query'):
                #查询语句从url参数获取
                qry = self.get_argument('qry')
                #实例化Models里面的class
                influx = InfluxClient()
                result = influx.query(qry)
                #返回结果为对象,通过raw属性获取对象中的字典。
                self.write(json.dumps(result.raw, ensure_ascii=False))
                break
 
            if case():
                self.write('No argument found')
 
    #写入数据,使用HTTP PUT
    def put(self):
        op = self.get_argument('op')
        for case in switch(op):
            if case('write'):
                #data should urldecode first and then turn into json
                data = json.loads(urllib.unquote(self.get_argument('data')))
                influx = InfluxClient()
                #写入成功或失败判断
                if influx.write_points(data):
                    self.write('{"result":true}')
                else:
                    self.write('{"result":false}')
                break
            if case():
                self.write('No argument found')

 

Tornado配置路由

applications = tornado.web.Application(
    [
        (r'/', IndexController),
        (r'/ws/api/influx', InfluxRestController)
    ],
    **settings
)

 

JSON项目配置文件

{
  "http_port": 19998,
  "influxdb":{
    "server": "47.88.6.247",
    "port": "8086",
    "username": "root",
    "password": "root",
    "db": "hadoop",
    "retention": {
      "days": "365d",
      "replica": 3,
      "name": "hound_policy"
    },
    "replica": 3
  },
  "copyright": "CopyLeft 2017 Xianglei"
}

 

插入测试

def test_write():
    base_url = 'http://localhost:19998/ws/api/influx'
    #data = '[{"measurement": "hdfs"},"tags":{"hdfs": "adh","path":"/user"},"fields":{"used": 234123412343423423,"free": 425234523462546546,"nonhdfs": 1341453452345}]'
    #构造插入数据
    body = dict()
    body['measurement'] = 'hdfs'
    body['tags'] = dict()
    body['tags']['hdfs'] = 'adh'
    body['tags']['path'] = '/'
    body['fields'] = dict()
    body['fields']['used'] = 234123
    body['fields']['free'] = 425234
    body['fields']['nonhdfs'] = 13414
    tmp = list()
    tmp.append(body)
 
    op = 'write'
    # dict data to json and urlencode
    data = urllib.urlencode({'op': op, 'data': json.dumps(tmp)})
    headers = {'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
    try:
        http = tornado.httpclient.HTTPClient()
        response = http.fetch(
            tornado.httpclient.HTTPRequest(
                url=base_url,
                method='PUT',
                headers=headers,
                body=data
            )
        )
        print response.body
    except tornado.httpclient.HTTPError, e:
        print e
 
test_write()

 

插入数据后通过访问http连接获取插入结果

curl -i "http://localhost:19998/ws/api/influx?op=query&qry=select%20*%20from%20hdfs"
HTTP/1.1 200 OK
Date: Tue, 24 Jan 2017 15:47:42 GMT
Content-Length: 1055
Etag: "7a2b1af6edd4f6d11f8b000de64050a729e8621e"
Content-Type: text/html; charset=UTF-8
Server: TornadoServer/4.4.2
 
{"values": [["2017-01-24T09:54:16.036494252Z", 425234, "adh", 13414, "/", 234123]], "name": "hdfs", "columns": ["time", "free", "hdfs", "nonhdfs", "path", "used"]}

 

收工,明天用React写监控前端

]]>
https://xianglei.tech/archives/xianglei/2017/01/158.html/feed 0
Hadoop监控分析工具Dr.Elephant https://xianglei.tech/archives/xianglei/2017/01/50.html https://xianglei.tech/archives/xianglei/2017/01/50.html#respond Wed, 04 Jan 2017 08:07:14 +0000 http://xianglei.tech/?p=50 公司基础架构这边想提取慢作业和获悉资源浪费的情况,所以装个dr elephant看看。LinkIn开源的系统,可以对基于yarn的mr和spark作业进行性能分析和调优建议。

DRE大部分基于java开发,spark监控部分使用scala开发,使用play堆栈式框架。这是一个类似Python里面Django的框架,基于java?scala?没太细了解,直接下来就能用,需要java1.8以上。

prerequest list:

Java 1.8

PlayFramework+activator

Nodejs+npm

scala+sbt

编译服务器是设立在美国硅谷的某云主机,之前为了bigtop已经装好了java,maven,ant,scala,sbt等编译工具,所以下载activator解压放到/usr/local并加入PATH即可。

然后从 github clone一份dr-elephant下来,打开compile.conf,修改hadoop和spark版本为当前使用版本,:wq保存退出,运行compile.sh进行编译,经过短暂的等待之后,因为美国服务器,下依赖快。会有个dist文件夹,里面会打包一个dr-elephant-2.0.x.zip,拷出来解压缩就可以用了。

DRE本身需要mysql 5.5以上支持,或者mariadb最新的10.1稳定版本亦可。这里会有一个问题,就是在DRE/conf/evolutions/default/1.sql里面的这三行:

create index yarn_app_result_i4 on yarn_app_result (flow_exec_id);
create index yarn_app_result_i5 on yarn_app_result (job_def_id);
create index yarn_app_result_i6 on yarn_app_result (flow_def_id);

 

由于在某些数据库情况下,索引长度会超过数据库本身的限制,所以,需要修改索引长度来避免无法启动的情况发生。

create index yarn_app_result_i4 on yarn_app_result (flow_exec_id(150));
create index yarn_app_result_i5 on yarn_app_result (job_def_id(150));
create index yarn_app_result_i6 on yarn_app_result (flow_def_id(150));

 

然后就应该没啥问题了。

到数据库里创建一个叫drelephant的数据库,并给出相关访问权限用户

接下来是需要配置DRE:

打开app-conf/elephant.conf

# Play application server port
# 启动dre后play框架监听的web端口
port=8080
# Database configuration
# 数据库主机,用户名密码库名
db_url=localhost
db_name=drelephant
db_user="root"
db_password=

 

其他默认即可,不需更改

然后是GeneralConf.xml

<configuration>
  <property>
    <name>drelephant.analysis.thread.count</name>
    <value>3</value>
    <description>Number of threads to analyze the completed jobs</description>
  </property>
  <property>
    <name>drelephant.analysis.fetch.interval</name>
    <value>60000</value>
    <description>Interval between fetches in milliseconds</description>
  </property>
  <property>
    <name>drelephant.analysis.retry.interval</name>
    <value>60000</value>
    <description>Interval between retries in milliseconds</description>
  </property>
  <property>
    <name>drelephant.application.search.match.partial</name>
    <value>true</value>
    <description>If this property is "false", search will only make exact matches</description>
  </property>
</configuration>

 

修改drelephant.analysis.thread.count,默认是3,建议修改到10,3的话从jobhistoryserver读取的速度太慢,高于10的话又读取的太快,会对jobhistoryserver造成很大压力。下面两个一个是读取的时间周期,一个是重试读取的间隔时间周期。

然后到bin下执行start.sh启动。And then, show smile to the yellow elephant。

装完看了一下这个东西,其实本身原理并不复杂,就是读取各种jmx,metrics,日志信息,自己写一个也不是没有可能。功能主要是把作业信息里的内容汇总放到一屏里面显示,省的在JHS的页面里一个一个点了。

That’s it, so easy

]]>
https://xianglei.tech/archives/xianglei/2017/01/50.html/feed 0
Apache Bigtop与卖书求生 https://xianglei.tech/archives/xianglei/2016/12/20.html https://xianglei.tech/archives/xianglei/2016/12/20.html#respond Fri, 30 Dec 2016 05:48:42 +0000 http://xianglei.tech/?p=20 快一年没写博客了,终于回来了,最近因公司业务需要,要基于cdh发行版打包自定义patch的rpm,于是又搞起了bigtop,就是那个hadoop编译打包rpm和deb的工具,由于国内基本没有相关的资料和文档,所以觉得有必要把阅读bigtop源码和修改的思路分享一下。

我记得很早以前,bigtop在1.0.0以前版本吧,是用make进行打包的,其实这个0.9.0以前的版本,搁我觉得就不应该出现在apache正式仓库里,就应该放在incubator里面,但是估计由于是cdh主导开发的,而Doug Cutting又是前基金会主席,所以,一个基本没有产品化的东西从孵化器提升到顶级项目相对容易一些吧。cloudera官方在github上开源的的cdh-package应该是基于bigtop 0.6.0的,不过由于他们的每个git分支只更新rpm的spec文件,所以,貌似默认情况下根本使不了,不厚道啊。而apache的bigtop又没有cdh相关的avro,sentry,llama等依赖,所以只能自己读源码修改。

解决方案一:基于cdh-package进行修改,优势是贴近cloudera,可能需要修改的代码量比较少,劣势是基于make,后期维护性和可扩展性较差,我可不想去改Makefile那种东西。

解决方案二:基于apache bigtop进行修改,优势是使用gradle编译,可维护性可扩展性好,劣势是代码修改量大。

考虑再三,我决定还是贴近社区,远离资本家,跟广大无产阶级走,所以我选择了apache bigtop,另外,cdh-package除了需要java1.7以外,还需要java1.5,所以。Let it be.

当然,这里有很多坑都需要踩,其中最大的一个坑就是GFW。感谢政府对我一奔四十的老爷们的思想保护,远离黄赌毒,用伟大的长城防火墙屏蔽了全世界。伟大的长城防火墙不但有花季护航,还有而立护航,不惑护航,知天命护航,耳顺护航及古稀护航,耄耋护航,期颐护航等众多配置选项,保护国人从生到死不受国外先进技术的侵蚀。

所以,如果你想正常编译hadoop及其周边生态,听我的,买个国外的云主机,绝对事半功倍。同时,为了保证对bigtop修改本身的版本控制及错误回滚,git或者svn是需要的。

以下内容基于bigtop 1.1.0 production以及美国云主机

打包编译相关技能天赋加点:

gradle, maven, ant, forrest, groovy, shell, rpm spec.特别是shell和spec的天赋要尽可能点满,不行就去看rpm.org里面的文档。而maven和ant基本都是自动施法,不太需要点天赋。另外,maven, ant, java本身的版本就不再赘述了。

按照我对bigtop源码的理解,分为执行层,编译层和脚本层。执行层即gradle和gradle的相关定义文件。编译层包括maven, ant,嵌套在maven里的ant,forrest,scala等。脚本层为rpm的spec文件,deb的定义文件以及他们所包含的编译相关脚本,如do-build-components这类脚本。

定义编译什么东西及它的版本,下载地址的定义,文件名的定义是在bigtop.bom中定义的,然后会调用package.gradle来进行自动下载及配置编译目录,打包目录等。之后会通过package.gradle调用rpmbuild来读取spec文件,spec文件会通过内部的Source0这类的定义来读取编译脚本,最终通过rpmbuild来建立所有需要的rpm包。

初始下载解压缩bigtop-1.1.0之后,需要先对bigtop依赖的包进行初始化,会下载protobuf,snappy什么的。完成之后用户可以编译的是apache的hadoop及周边相关,编译之后是可以用,但是不符合我的需求。为啥,因为cloudera 2B似的为显示自己牛逼,兼容,搞了一个画蛇添足的0.20-mapreduce。由于之前集群安装的是cdh的hadoop,已安装的rpm依赖里面有0.20的安装包,所以,如果我用原生apache bigtop打包出来的 cdh hadoop,是没有0.20这个package的。那么在自己做了repository之后用yum update,会提示缺少0.20的依赖,需要使用–skip-broken来安装,作为一个处女座是不允许这种情况发生的。另外,据同事反馈,cdh的hadoop如果使用apache的zookeeper做ha时会出现找不到znode的问题,无法ha。

所以,唯一的解决办法是找到cdh的spec文件,打的跟cdh一模一样才可以。这东西其实并不难找,留个问题自己发现吧。不过,直接取出来的cdh spec文件与打包脚本,在apache bigtop上是不能直接使用的。需要修改不少地方,比如像prelink,还有需要建立一套busybox出来,当然其他的打包依赖还有诸如boost,llvm,thrift等等。还有,cdh会把自己的编译依赖建立在/opt/toolchain下面,但是apache bigtop不会有这东西,自己建软链就可以解决了。

写着写着趴下眯了一会午觉,起来突然不知道该写什么了,如果熟悉之前说的天赋加点,这玩意确实没什么难度。如果不熟悉,那这玩意是相当的难以理解和使用,会遇到各种各样的报错,特别是如果在rpmbuild过程中报错,是很难找到出错原因的。

至于建立yum仓库这种事情就更不用描述了。

整个项目的关键点就是脚本和spec语言,gradle语言都是次要的。

最后,为显示自己牛逼,放两张截图出来。

 

 

我的下一个milestone是把hortonworks的storm package打到cdh hadoop上面跑。不过在实现这个目标之前,似乎公司要把我派去写hive和pig脚本,真是没兴趣啊。

最后,打个广告,Nathan Marz (Storm作者) 的书,《大数据系统构建–Lambda架构实践》上市。译者:马延辉,魏东琦,还有我,欢迎大家踊跃购买,看完之后批评指正。

购买链接

京东

当当

亚马逊

]]>
https://xianglei.tech/archives/xianglei/2016/12/20.html/feed 0
Tornado学习笔记(四) https://xianglei.tech/archives/xianglei/2016/01/57.html https://xianglei.tech/archives/xianglei/2016/01/57.html#respond Wed, 20 Jan 2016 09:40:32 +0000 http://xianglei.tech/?p=57 一、Tornado的语言国际化方法

Tornado做国际化折腾了一下下,Tornado这部分的官方文档太poor了。所以自己记录一下如何用tornado结合gettext做国际化。

第一步,在项目路径下建立./locales/zh_CN/LC_MESSAGES文件夹。

第二步,使用xgettext或poedit在第一步的文件夹下创建一个po文件,比如messages.po,我用poedit创建,比xgettext方便一些。

第三步,编辑该messages.po文件,当然,po文件有自己特定的格式,需要按照它的格式编写。

msgid ""
msgstr ""
"Project-Id-Version: \n"
"POT-Creation-Date: \n"
"PO-Revision-Date: \n"
"Last-Translator: \n"
"Language-Team: \n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Language: zh_CN\n"
"X-Generator: Poedit 1.8.4\n"
 
msgid "Sign in"
msgstr "登入"
 
msgid "Sign out"
msgstr "登出"
 
msgid "Username"
msgstr "用户名"
 
msgid "Password"
msgstr "密码"

msgid是网页里原先的文本内容,msgstr是准备替换的内容。新内容直接用编辑器往后追加msgid和msgstr就可以了。

第四步,修改HTML网页模板

{% include '../header.html' %}
 
<form method="post" action="/User/Signin">
    {{ _("Sign in") }}<br/>
    {{ _("Username") }}<br/>
    <input type="text" name="username" /><br/>
    {{ _("Password") }}<br/>
    <input type="password" name="password" /><br />
    {% module xsrf_form_html() %}
    <input type="submit" name="submit" value="{{ _("Sign in") }}" />
</form>
 
{% include '../footer.html' %}

 

html里面的{{ _(“Sign in”) }}等内容就是需要gettext查找和替换的内容。

第五步,在tornado主文件内添加gettext支持的方法。

import os
import tornado.autoreload
import tornado.httpserver
import tornado.ioloop
import tornado.web
import tornado.locale
 
'''
...
'''
 
if __name__ == '__main__':
    tornado.locale.set_default_locale('zh_CN')
    tornado.locale.load_gettext_translations('./locales', 'messages')
    server = tornado.httpserver.HTTPServer(application)
    server.listen(20000)
    loop = tornado.ioloop.IOLoop.instance()
    tornado.autoreload.start(loop)
    loop.start()

 

由于我用的ubuntu系统,所以服务器端会被强制认为使用en_US编码,所以我作为调试,强制指定了set_default_locale(‘zh_CN’),然后使用tornado.locale.load_gettext_translations(‘./locales’, ‘messages’)来读取locales文件夹下的messages项目的mo文件。

第六步,在自己写的Handler里面,加入locale.translate

class BaseHandler(tornado.web.RequestHandler):
    def get_current_user(self):
        _ = self.locale.translate
        user = self.get_secure_cookie('username')
        return user
 
 
class SigninHandler(BaseHandler):
    def get(self):
        self.render('User/sign_in.html')
 
    def post(self):
        username = self.get_argument('username')
        password = self.get_argument('password')
        if username == 'xianglei':
            self.set_secure_cookie('username', 'xianglei')
            self.redirect('/')
 
 
class SignoutHandler(BaseHandler):
    def get(self, *args, **kwargs):
        self.clear_all_cookies()
        self.redirect('/')

 

_=self.locale.translate,self.locale.translate实际是一个方法,那么把这个方法放到_这个对象里面,然后_方法会被自动代入到模板中去执行替换_(“Sign in”),所以实际在模板里面写的 {{ _(“Sign in”) }}实际上是让Tornado执行tornado.locale.translate()方法。这样的话,如果我去掉之前的set_default_locale(),页面显示的就是英文的Sign in,加上,显示的就是中文的登入。

同样,Tornado也可以使用一个csv文件作为翻译的基础字典,默认是采用csv方式的。

二、Tornado作为HTTP client执行RESTful命令。

之前已经记录了Tornado异步的客户端,昨天调试了一下用Tornado做HDFS和YARN的RESTful客户端。HDFS的RESTful方式,不能使用异步,需要使用Tornado同步客户端才可以。HDFS和YARN的RESTful管理方式需要用到HTTP的四种查询方式,GET,POST,PUT,DELETE。其中PUT和DELETE的方式跟POST和GET很类似。

比如

class MakeDirectoryHandler(BaseHandler):
    @tornado.web.authenticated
    def post(self):
        host = self.get_argument('host')
        port = self.get_argument('port')
        directory = self.get_argument('directory')
        username = self.get_secure_cookie('username')
        base_url = 'http://'+host+':'+port+'/webhdfs/v1'+directory+'?op=MKDIRS&user.name='+username
        put_body = dict()
        put_body['op'] = 'MKDIRS'
        put_body['user.name'] = username
        put_body = urllib.urlencode(put_body)
        try:
            http = tornado.httpclient.HTTPClient()
            response = http.fetch(
                    tornado.httpclient.HTTPRequest(
                            url=base_url,
                            method='PUT',
                            body=put_body,
                    )
            )
            self.write(response.body)
        except tornado.httpclient.HTTPError, e:
            self.write('{"errcode":"'+str(e).replace('\n', '<br />')+'"}')

 

HDFS的MKDIRS方法放在PUT组里面,所以提交的参数需要用urlencode进行编码转换后PUT给RESTful接口。

而DELETE则是。

class RemoveHandler(BaseHandler):
    @tornado.web.authenticated
    def post(self):
        host = self.get_argument('host')
        port = self.get_argument('port')
        filename = self.get_argument('filename')
        '''
        If recursive = true, it use to remove whole directory
        If recursive = false, it use to remove a file or an empty directory
        The argument must be string.
        '''
        recursive = self.get_argument('recursive')
        username = self.get_secure_cookie('username')
        base_url = 'http://'+host+':'+port+'/webhdfs/v1'+filename+'?op=DELETE&recursive='+recursive+'&user.name='+username
        try:
            http = tornado.httpclient.HTTPClient()
            response = http.fetch(
                    tornado.httpclient.HTTPRequest(
                            url=base_url,
                            method='DELETE',
                    )
            )
            self.write(response.body)
        except tornado.httpclient.HTTPError, e:
            self.write('{"errcode":"'+str(e).replace('\n', '<br />')+'"}')

 

跟GET方式一样,DELETE不需要封装传递参数。

]]>
https://xianglei.tech/archives/xianglei/2016/01/57.html/feed 0
Hadoop Operations Notes (17) https://xianglei.tech/archives/xianglei/2015/11/59.html https://xianglei.tech/archives/xianglei/2015/11/59.html#respond Fri, 13 Nov 2015 09:45:20 +0000 http://xianglei.tech/?p=59 Last month, over email, I helped a friend of a friend solve a problem where Cloudera's Spark-SQL could not access HBase for data analysis. Here is a record of it.

First, they had already set up Hive access to HBase, so in principle spark-sql could reach HBase through Hive's metadata. But execution was extremely slow, with no errors in the logs. All communication was by email, so I started with a few questions: is Kerberos enabled, does Hive access HBase normally, can the HBase shell read data normally, and so on. The answers were: no Kerberos, Hive-on-HBase works, spark-sql reads the Hive metadata fine, the HBase shell works, it is just spark-sql that will not run.

Second, they had two environments: the lab environment could run the job, but production could not, even though the versions and all the XML configuration in the lab and production environments were exactly the same, so they simply could not find the reason it would not run.

Looking at the logs early on, there were no WARN or ERROR entries at all, which made things hard to troubleshoot; my initial guess was a configuration error, but since everything was over email I could not pin down where. Later they sent over another log, saying the job failed after about three hours, showing the following.

Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, BData-h2): org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find region for in xxxxxxxx after 35 tries.

 

This error in the Spark UI confirmed the configuration-error hypothesis: it is usually caused by Spark being unable to reach the HBase regions through the API. Since this is the Cloudera distribution, compatibility is already taken care of in advance, so incompatibility can be ruled out as the cause, which leaves only one explanation: a configuration error.

Next I compared the Spark startup logs of the lab and production environments and found that the jars being loaded were different, so I suspected a discrepancy in spark-env or hbase-env. I told them this and suggested adding hbase/lib/* and hbase-site.xml (etc.) to Spark's SPARK_DIST_CLASSPATH. They added the hbase/lib and /etc/hbase/conf paths to SPARK_DIST_CLASSPATH, and the problem was solved.
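
In spark-env.sh that suggestion looks roughly like the line below (a sketch; the CDH parcel path is an assumption and should be adjusted to wherever HBase is actually installed on your nodes):

export SPARK_DIST_CLASSPATH="$SPARK_DIST_CLASSPATH:/opt/cloudera/parcels/CDH/lib/hbase/lib/*:/etc/hbase/conf"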

This was a tiny troubleshooting case, but on the principle that it might help someone else, and so that I do not forget it myself, I am writing it down anyway.

]]>
https://xianglei.tech/archives/xianglei/2015/11/59.html/feed 0
Hadoop运维记录系列(十六) https://xianglei.tech/archives/xianglei/2015/08/61.html https://xianglei.tech/archives/xianglei/2015/08/61.html#respond Fri, 28 Aug 2015 09:52:12 +0000 http://xianglei.tech/?p=61 应了一个国内某电信运营商集群恢复的事,集群故障很严重,做了HA的集群Namenode挂掉了。具体过程不详,但是从受害者的只言片语中大概回顾一下历史的片段。

  1. Active的namenode元数据硬盘满了,满了,满了…上来第一句话就如雷贯耳。
  2. 运维人员发现硬盘满了以后执行了对active namenode的元数据日志执行了 echo “” > edit_xxxx-xxxx…第二句话如五雷轰顶。
  3. 然后发现standby没法切换,切换也没用,因为standby的元数据和日志是5月份的…这个结果让人无法直视。

因为周末要去外地讲课,所以无法去在外地的现场,七夕加七八两天用qq远程协助的方式上去看了一下。几个大问题累积下来,导致了最终悲剧的发生。

  1. Namenode的元数据只保存在一个硬盘mount里,且该盘空间很小。又有N多人往里面塞了各种乱七八糟的文件,什么jar包,tar包,shell脚本。
  2. 按照描述,standby元数据只有到5月份,说明standby要么挂了,要么压根就没启动。
  3. 没有做ZKFC,就是失效自动恢复,应该是采用的手动恢复方式(而且实际是没有JournalNode的,后面再说)。
  4. 至于raid0, lvm这种问题就完全忽略了,虽然这也是很大的问题。

然后他们自己折腾了一天,没任何结果,实在起不来了,最好的结果是启动两台standby namenode,无法切换active。通过关系找到了我,希望采用有偿服务的方式让我帮忙进行恢复。我一开始以为比较简单,就答应了。结果上去一看,故障的复杂程度远超想象,堪称目前遇到的最难的集群数据恢复挑战。由于无法确切获知他们自己操作后都发生了什么,他们自己也说不清楚,也或者迫于压力不敢说,我只能按现有的数据资料尝试进行恢复。

第一次尝试恢复,我太小看他们的破坏力了,以至于第一次恢复是不成功的,七夕下班抛家舍业的开搞。在这次尝试中,发现HA是没有用ZKFC做自动恢复的,完全是手动恢复,于是顺带帮他们安装配置了ZKFC。然后首先初始化shareEdits,发现他们压根没有做shareEdits,那就意味着,其实原来的JournalNode可能根本就没起作用。然后启动zookeeper,接着启动journalnode,然后启动两个NN,两个NN的状态就是standby,然后启动ZKFC,自动切换失败。根据日志判断,是两个NN中的元数据不一致导致了脑裂。于是使用haadmin里面的隐藏命令强行指定了一个NN。启动了一台NN,而另一台则在做脑裂后的元数据自动恢复。自动恢复log日志如下

INFO org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader: replaying edit log: xxxx/xxxx transactions completed. (77%)

 

经过漫长的等待,SNN元数据恢复了。但是一直没有脱离safemode状态,因为太晚了,就没有继续进行,只是告诉他们,等到safemode脱离了,就可以了,如果一直没有脱离,就强行使用safemode leave脱离。但是,我把一切看的太简单了。

第二天,打电话说集群仍然不能用。我上去一看,还是处于safemode。于是强行脱离safemode,但是只有active脱离了,standby仍未脱离,HDFS也无法写入数据,这时看hdfs的web ui,active和standby的数据块上报远未达到需要的数量,意识到元数据有丢失。但对方坚称元数据是故障后立刻备份的,而且当时的误操作只是针对edits日志,fsimage没有动。说实话,我倒宁可他们把fsimage清空了,也不要吧edits清空了。fsimage可以从edits恢复,而edits清空了,就真没辙了。

于是,第二天再次停止NN和Standby NN,停止ZKFC,停止JournalNode,结果NN又起不来了,报

The namenode has no resources availible

 

一看,恢复时产生的log太多,元数据的硬盘又满了。只好跟对方合计,把元数据换到另外一个比较空的硬盘里处理。我也不知道为什么他们要找那么小的一块盘存元数据,跟操作系统存一起了。挪动元数据文件夹,然后改配置,然后启动ZK,Jounal, NN, StandbyNN,使用bootStrapStandby手工切换主从。再启动ZKFC,HDFS恢复,然后强制脱离safemode。touchz和rm测试HDFS可以增删文件,没有问题了。

第二次尝试恢复,这时实际上HDFS已经可以正常访问,算是恢复了,但是元数据有丢失,这个确实没办法了。于是,跟他们商量,采取第二种办法,尝试通过日志恢复元数据。他们同意尝试恢复。于是将他们自己备份的editslog和fsimage从他们自己备份的文件夹拷到元数据文件夹,使用recover命令进行editslog到元数据的恢复。经过一段时间等待,恢复,再重启NN和Standby NN,结果发现日志里恢复出来的数据比之前恢复的还要旧,于是再按第一种方案的下半段方法恢复成以前的元数据。下面说为什么。

最终恢复出来的元数据所记录的数据有580TB多一些,丢失部分数据。

  1. 原activeNN的日志已经被清空, 这上面的fsimage是否被动过不知道,之前他们自己操作了什么我不得而知。由于这上面磁盘已满,所以这上的fsimage实际是不可信的。
  2. JournalNode没有做initializeShareEdits,也没有做ZKformat,所以Journalnode实际上没有起作用。jn文件夹下无可用做恢复的日志。方案二中的恢复是用StandbyNN的日志进行恢复的,由于standby根本没有起作用,所以通过日志只能恢复到做所谓的HA之前的元数据。
  3. 原standby NN虽然启动了,也是手工置为standby,但是由于没有Journalnode起作用,所以虽然DN会上报操作给standby NN,但是无日志记录,元数据也是旧的。最后的日志也就是记录到5月份,而且已然脑裂。下图对理解NNHA作用机理非常重要,特别是所有箭头的指向方向。

那么,最后总结整个问题发生和分析解决的流程。

先做名词定义

ANN = Active NameNode

SNN = Standby NameNode

JN = JournalNode

ZKFC = ZooKeeper Failover Controller

问题的发生:

  1. ANN元数据放在了一块很小的硬盘上,而且只保存了一份,该硬盘满,操作人员在ANN上执行了 echo “” > edits….文件的操作。
  2. 当初自己做HA,没有做initializeShareEdits和formatZK,所以JN虽启动,但实际未起作用,而SNN也实际未起作用。只是假装当了一个standby?所以JN上无实际可用edits日志。
  3. 操作人员在问题发生后最后备份的实际是SNN的日志和元数据,因为ANN editlog已清空,而且ANN硬盘满,即便有备份,实际也是不可信的。

问题的恢复:

  1. 恢复备份的fsimage,或通过editslog恢复fsimage。
  2. 将fsimage进行恢复并重启NN,JN等相关进程,在safemode下,Hadoop会自行尝试进行脑裂的修复,以当前Acitve的元数据为准。
  3. 如遇到元数据和edits双丢失,请找上帝解决。这个故障案例麻烦就麻烦在,如果你是rm editslog,在ext4或ext3文件系统下,立刻停止文件系统读写,还有找回的可能,但是是echo “” > edits,这就完全没辙了。而且所有最糟糕的极端的情况全凑在一起了,ANN硬盘满,日志删,元数据丢,SNN压根没起作用,JN没起作用。

问题的总结:

  1. 作为金主的甲方完全不懂什么是Hadoop,或者说听过这词,至于具体的运行细节完全不了解。
  2. 承接项目的乙方比甲方懂得多一点点,但是很有限,对于运行细节了解一些,但仅限于能跑起来的程度,对于运维和优化几乎无概念。
  3. 乙方上层领导认为,Hadoop是可以在使用过程中加强学习和理解的。殊不知,Hadoop如果前期搭建没有做好系统有序的规划,后期带来的麻烦会极其严重。况且,实际上,乙方每个人都在加班加点给甲方开发数据分析的任务,对于系统如何正常运行和维护基本没时间去了解和学习。否则,绝对不会有人会执行清空edit的操作,而且据乙方沟通人员描述,以前也这么干过,只是命好,没这次这么严重(所以我怀疑在清空了日志之后肯定还做了其他的致命性操作,但是他们不告诉我)。
  4. Hadoop生产集群在初期软硬件搭建上的规划细节非常之多,横跨网络,服务器,操作系统多个领域的综合知识,哪一块的细节有缺漏,未来都可能出现大问题。比如raid0或lvm,其实是个大问题,但N多人都不会去关注这个事。Yahoo benchmark表明,JBOD比RAID性能高出30%~50%,且不会无限放大单一磁盘故障的问题,但我发现很少有人关注类似的细节,很多生产集群都做了RAID0或RAID50。
  5. 很多培训也是鱼龙混杂,居然有培训告诉说map和reduce槽位数配置没用,这要不是赤裸裸的骗人,就是故意要坑人。

这是一次非常困难的集群数据恢复的挑战,最终的实际结果是恢复了大约580TB数据的元数据,并且修复了脑裂的问题。整个过程没有使用特殊的命令,全部是hadoop命令行能看到的haadmin, dfsadmin里面的一些命令。整个恢复过程中需要每个服务器多开一个CRT,观察所有进程log的动态。以便随时调整恢复策略和方法。最后特别提醒一下,seen_txid文件非常重要。

整个过程通过qq远程方式完成,中间断网无数次,在北京操作两天,在上海操作一天。为保护当事人隐私,以金主和事主代替。

]]>
https://xianglei.tech/archives/xianglei/2015/08/61.html/feed 0
Tornado学习笔记(三) https://xianglei.tech/archives/xianglei/2015/08/64.html https://xianglei.tech/archives/xianglei/2015/08/64.html#respond Tue, 11 Aug 2015 09:55:32 +0000 http://xianglei.tech/?p=64 记录一些Tornado中的常用知识。

  1. 获取远端IP,直连tornado,用self.request.remote_ip,如果是走nginx反向代理方式,需要在nginx中的Server/Location配置如下
    proxy_pass http://data.xxx.com;
    #proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header REMOTE-HOST $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

     

然后在tornado中使用self.request.headers[‘X-Real-IP’]或self.request.headers[‘X-Forward-For’]获取用户真实IP

2. 获取checkbox中的多项,单个表单变量的获取在tornado中用self.get_argument(“input_name”)获取,多个相同名字的变量使用self.get_arguments(“input_name”)获取,比如,则使用self.get_arguments(‘ck’)获取到一个list里面。这里跟php的区别是,chekbox的变量名后面不能带方括号。比如,在php里checkbox需要写成ck[],但是在tornado里不需要这样写。

3. RequestHandler中的set_cookie和set_secure_cookie可以设定cookie过期时间,使用set_secure_cookie(‘name’,’value’, expires_days=None)设置关闭即销毁,或者设置一个整形的数字为过期的天数。

4. 对于_xsrf的cookie处理,在笔记一中已经记录,{% module xsrf_form_html() %}会在模板页面中直接生成一个

<input type="hidden" name="_xsrf" value="2|16ecb15b|07a3a51e047a34f944eef2b6b4e5e017|1439262852"/>

 

这样一个hidden元素,其中的value每次刷新页面,hash值都不一样,这个hash值是根据时间戳,version,token和binascii.b2a_hex和a2b_hex来进行加解密计算的。这个扯远了,主要是想记录在前端如果用jquery的post或get,如何获取这个token。按照tornado官方文档的做法试了一下,貌似不行,可能是文档的更新没有跟上代码的更新,也可能是我自己前端水平不行,反正是没成功。

tornado官方获取页面中的_xsrf做法如下。

function getCookie(name) {
    var r = document.cookie.match("\\b" + name + "=([^;]*)\\b");
    return r ? r[1] : undefined;
}
 
jQuery.postJSON = function(url, args, callback) {
    args._xsrf = getCookie("_xsrf");
    $.ajax({url: url, data: $.param(args), dataType: "text", type: "POST",
        success: function(response) {
        callback(eval("(" + response + ")"));
    }});
};

 

后来换了个方法,用jquery直接获取页面中的元素名称的value,而不去读取cookie的值,确实是前端水平不行,或许这样会不安全吧。

xsrf = $('input[name=_xsrf]').val();

 

这样就可以取到_xsrf的值,并进行$.post操作。

5. 清除cookie,之前网上有文章说清除tornado的cookies是将set_cookie(‘name’,’value’)方法里面的对应name的value设置为空。实际上,tornado源码中提供了清除cookie的方法。

def clear_cookie(self, name, path="/", domain=None):
        """Deletes the cookie with the given name.
 
        Due to limitations of the cookie protocol, you must pass the same
        path and domain to clear a cookie as were used when that cookie
        was set (but there is no way to find out on the server side
        which values were used for a given cookie).
        """
        expires = datetime.datetime.utcnow() - datetime.timedelta(days=365)
        self.set_cookie(name, value="", path=path, expires=expires,
                        domain=domain)
 
def clear_all_cookies(self, path="/", domain=None):
        """Deletes all the cookies the user sent with this request.
 
        See `clear_cookie` for more information on the path and domain
        parameters.
 
        .. versionchanged:: 3.2
 
           Added the ``path`` and ``domain`` parameters.
        """
        for name in self.request.cookies:
            self.clear_cookie(name, path=path, domain=domain)

 

清除给定名称的cookie和清除全部cookie。

这两天在看websocket和long polling相关,回头看完也记录一下。

]]>
https://xianglei.tech/archives/xianglei/2015/08/64.html/feed 0
Initializing FreeBSD on Aliyun https://xianglei.tech/archives/xianglei/2015/07/67.html https://xianglei.tech/archives/xianglei/2015/07/67.html#respond Tue, 28 Jul 2015 10:00:11 +0000 http://xianglei.tech/?p=67 Aliyun seems to have recently released a FreeBSD image. FreeBSD is my favorite operating system; in my personal opinion it is far better than Linux. But Aliyun's documentation has not kept up and there are no instructions at all for attaching disks, so here is the procedure for mounting a cloud disk on the Aliyun FreeBSD image.

  1. Use dmesg to find the device node the cloud disk gets under /dev. On Xen-based Linux it is xvdb1; on FreeBSD it is usually xbd1. Since xbd1 has not yet been partitioned and formatted the FreeBSD way, running mount /dev/xbd1 /opt directly fails with something like "Invalid argument".
  2. Partition and format it, initializing the disk first:
    dd if=/dev/zero of=/dev/xbd1 bs=1k count=1
    fdisk -BI /dev/xbd1 (this produces xbd1s1)
    disklabel -B -w -r /dev/xbd1s1 auto
    newfs /dev/xbd1s1
    mount /dev/xbd1s1 /opt
  3. echo "/dev/xbd1s1     /opt            ufs     rw      1       1" >> /etc/fstab, with the fields separated by tabs.

Done.

-------- Revised 2015-08-06 --------

FreeBSD 10 dropped the pkg_add family of commands in favor of pkg. Installing with pkg is more reliable behind the GFW; at one point I found rrdtool blocked by the GFW and impossible to build through ports.

]]>
https://xianglei.tech/archives/xianglei/2015/07/67.html/feed 0
Tornado学习笔记(二) https://xianglei.tech/archives/xianglei/2015/07/75.html https://xianglei.tech/archives/xianglei/2015/07/75.html#respond Mon, 13 Jul 2015 02:02:08 +0000 http://xianglei.tech/?p=75 我一直用python2.x,python2.x内置的字符编码方式是unicode,这就对中文的处理造成了一些困扰,尤其是在用tornado写json接口的时候,如果不做处理,出来的没有中文,都是\x4d5f之类的东西。所以通常需要这样去处理下。

除了正常的

#!/usr/bin/env python
# coding: utf-8

之外

import sys
reload(sys)
sys.setdefaultencoding('utf-8')

是不可少的

另外,在做json.dumps的时候

self.write(response.body)

ensure_ascii=False需要有,才能正常的在json中显示中文。

这个是json包处理的问题而不是tornado处理的问题,self.write()中直接写中文则不会发生该问题。

关于tornado的httpclient异步回调功能,用一个简单的例子表达,访问baidu的IP地址查询库

class QueryIPHandler(tornado.web.RequestHandler):
    @tornado.web.asynchronous
    @tornado.gen.coroutine
    def get(self, *args, **kwargs):
        ip = self.get_argument('ip')
        local_ip = self.request.headers['X-Real-IP']
        conf = Config.get_config()
        base_url = 'http://apis.baidu.com/apistore/iplookupservice/iplookup'
        params = {}
        params['ip'] = ip
        try:
            http = tornado.httpclient.AsyncHTTPClient()
            headers = dict(self.request.headers)
            headers['apikey'] = conf['BAIDU_APIKEY']
            response = yield http.fetch(
                tornado.httpclient.HTTPRequest(
                    url = base_url,
                    headers=headers,
                    method='POST',
                    body=json.dumps(params, ensure_ascii=False)
                )
            )
            self.write(response.body)
        except tornado.httpclient.HTTPError, e:
            self.write(e)

 

使用web.asychronous修饰,使用gen.coroutine修饰,个人认为使用协程的好处是不用再写回调方法了。

base_url是需要访问百度api网址

params是请求参数,也就是query_string,

然后尝试使用tornado.httpclient.AsyncHTTPClient()

headers需要封装成字典类型并加入baidu的apikey。

response = yield http.fetch就是异步回调的”主体思想”,用协程和yield方式就不需要像以前一样写回调函数了。

body参数按照tornado官方文档是字符串而非字典,所以要把params变量dumps成json串传过去。

http.fetch里的可配置参数项参考

http://www.tornadoweb.org/en/stable/httpclient.html

剩下的就是异常处理了。

为何要异步非阻塞?因为tornado是单进程的,如果不异步非阻塞的话,假如访问baidu很慢,则tornado进程会被卡住,访问tornado的其他页面也会卡在那里等待,直到百度访问完成,其他页面才会响应,所同理,node也是单进程的,天生就是异步非阻塞的。相比于node,我个人觉得tornado 的好处在于,想阻塞就阻塞,想非阻塞就非阻塞,很灵活,而node想写一个同步阻塞的应用就很麻烦了。

—-20150815修订—-

通过看源码得知,字符串处理可以使用tornado.escape,如tornado.escape.json_encode(dict),但是中文也会出现问题,tornado.escape内部对json_encode的处理也是对json包的二次封装,对于包含中文的内容,需要使用tornado.escape.json_encode(dict).decode(‘unicode_escape’)处理就可以得到正确的结果。

不得不说,tornado的文档结构很好,就是说明台简单了,也缺少例子的支持,想深入了解,还是得自己看源码。

]]>
https://xianglei.tech/archives/xianglei/2015/07/75.html/feed 0
Replacing the old Scribe service with Flume https://xianglei.tech/archives/xianglei/2015/07/109.html https://xianglei.tech/archives/xianglei/2015/07/109.html#respond Sun, 12 Jul 2015 08:13:25 +0000 http://xianglei.tech/?p=109 Many of our services used to rely on Scribe for log collection, but Facebook later stopped developing and supporting it. On top of that, compiling Scribe on a machine is painfully expensive, with pitfalls everywhere. Fortunately, Flume added Scribe support starting from 1.3.0, so the data that used to flow into Scribe can now be collected with Flume instead. I really like Scribe, but losing official support is still unsettling.

agent.channels=c1
agent.channels.c1.capacity=20000
agent.channels.c1.transactionCapacity=10000
agent.channels.c1.type=memory
agent.sinks=k1
agent.sinks.k1.channel=c1
agent.sinks.k1.hdfs.batchSize=8000
agent.sinks.k1.hdfs.filePrefix=log
agent.sinks.k1.hdfs.fileType=DataStream
agent.sinks.k1.hdfs.path=hdfs://NNHA/data/flume/%{category}/%Y%m%d
agent.sinks.k1.hdfs.rollCount=0
agent.sinks.k1.hdfs.rollInterval=86400
agent.sinks.k1.hdfs.round=true
agent.sinks.k1.hdfs.roundUnit=minute
agent.sinks.k1.hdfs.roundValue=1
agent.sinks.k1.hdfs.serializer.appendNewline=false
agent.sinks.k1.hdfs.useLocalTimeStamp=true
agent.sinks.k1.hdfs.writeFormat=TEXT
agent.sinks.k1.type=hdfs
agent.sources=r1
agent.sources.r1.channels=c1
agent.sources.r1.host=0.0.0.0
agent.sources.r1.port=1463
agent.sources.r1.type=org.apache.flume.source.scribe.ScribeSource
agent.sources.r1.workerThreads=5

 

The main point is setting serializer.appendNewline to false; otherwise a newline is automatically appended to every event. There is not much else to explain: anyone who has used Flume will get it at a glance. In hdfs.path, %{category} simply maps to the original Scribe category.
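
For completeness, the agent defined above can be started with something along these lines (the conf directory and file name are assumptions; the agent name passed to -n must match the agent. prefix used in the properties):

flume-ng agent -n agent -c /opt/flume/conf -f /opt/flume/conf/scribe-to-hdfs.properties -Dflume.root.logger=INFO,console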

Among the new features in Flume 1.6 are a Kafka source and sink, plus regex-based filtering of event contents in transit, which is very useful. It seems a new book about Flume will be published next month or the month after.

]]>
https://xianglei.tech/archives/xianglei/2015/07/109.html/feed 0
Hadoop运维记录系列(十五) https://xianglei.tech/archives/xianglei/2015/04/113.html https://xianglei.tech/archives/xianglei/2015/04/113.html#respond Thu, 30 Apr 2015 08:25:42 +0000 http://xianglei.tech/?p=113 早期搭建Hadoop集群的时候,在做主机和IP解析的时候,通常的做法是写hosts文件,但是Hadoop集群大了以后做hosts文件很麻烦,每次加新的服务器都需要整个集群重新同步一次hosts文件,另外,如果在同一个域下面做两个集群,做distcp,也需要把两个集群的hosts文件全写完整并完全同步,很麻烦。那么,一劳永逸的办法就是做DNS。DNS我这边已经用了很长时间了,几年前为了学这个还专门买了一本巨厚的BIND手册。

做DNS服务器最常用的就是BIND,ISC开发并维护的开源系统。

以centos6为例,使用BIND 9.8.2,在域名解析服务器上安装bind和域名正反向查询工具

yum install bind bind-utils

安装完成后,配置文件在 /etc/named.conf,域名数据文件通常我们会放在 /var/named,配置文件不是很复杂。留一个小问题,172.16.0.0/18写成子网掩码应该写多少?在该子网内可用的IP地址范围是多少?

/etc/named.conf

//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//
 
options {
        listen-on port 53 { 172.16.0.2; }; //监听内网地址53端口, ns1要改成172.16.0.1
//      listen-on-v6 port 53 { ::1; }; //不监听IPv6
        directory       "/var/named"; //DNS数据文件存储目录
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { 172.16.0.0/18; }; //允许172.16.0.0/18的子网IP主机进行查询,任意主机写any;
        recursion yes; //允许递归查询
 
        dnssec-enable yes;
        dnssec-validation yes;
        dnssec-lookaside auto;
 
        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";
 
        managed-keys-directory "/var/named/dynamic";
};
 
logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};
 
zone "." IN {
        type hint;
        file "named.ca";
};
 
zone "hadoop" IN { //我们的hadoop域
        type master;
        file "hadoop.zone"; 
};
 
zone "16.172.in-addr.arpa" IN { 
        type master;
        file "172.16.zone"; 
};
 
include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

 

然后是正向解析文件 /var/named/hadoop.zone

$TTL 600
$ORIGIN hadoop.
@       IN      SOA     ns1     root ( ;SOA部分必写
        0; Serial
        1D; Refresh
        1H; Retry
        1W; Expire
        3H); Negative Cache TTL
 
@       IN      NS      ns1.hadoop.
@       IN      NS      ns2.hadoop.
;用两台namenode同时担负nameserver,反正namenode平时也没什么具体事干,DNS查询走udp端口,不会对 namenode造成压力
;另外一个原因是namenode基本不会挂,而DN等服务器比较容易挂,所以NN同时做NS也更稳定,当然,有钱可以单独购置NS服务器,土豪请随意。
;两台namenode一起
ns1             IN      A       172.16.0.1
ns2             IN      A       172.16.0.2
 
;两台正向解析服务器的A记录,至于A, CNAME, MX等含义不解释了。
 
namenode-01     IN      A       172.16.0.1
namenode-02     IN      A       172.16.0.2
;服务器的A记录

反向解析文件 /var/named/172.16.zone

反向解析文件里需要把IP地址的顺序倒过来写,例如,172.16.0.1在反向文件里要写成1.0.16.172,所以,文件名命名为16.172.zone更符合规则。

$TTL 600
@ IN SOA namenode-01.hadoop. root.namenode-01.hadoop. ( //SOA部分必写
        0; Serial
        1D; Refresh
        1H; Retry
        1W; Expire
        3H); Negative Cache TTL
; 反向解析文件里不能有$ORIGIN,所以在下面先写上全部主机名
@       IN      NS      ns1.hadoop.
@       IN      NS      ns2.hadoop.
 
1.0     IN      PTR     ns1.hadoop. 
2.0     IN      PTR     ns2.hadoop.
1.0     IN      PTR     namenode-01.hadoop.
2.0     IN      PTR     namenode-02.hadoop.

 

全部完成后执行

chkconfig --add named
service named restart

接下来在所有主机的/etc/resolv.conf文件中添加

nameserver 172.16.0.1
nameserver 172.16.0.2

然后删除所有主机中的hosts文件内容,只保留127.0.0.1

用nslookup测试一下

[root@namenode-01 named]# nslookup 
> set q=A
> namenode-02.hadoop
Server:         172.16.0.1
Address:        172.16.0.1#53
#正向查询
Name:   namenode-02.hadoop
Address: 172.16.0.2
> set q=PTR
> 172.16.0.2
Server:         172.16.0.1
Address:        172.16.0.1#53
#反向查询
2.0.16.172.in-addr.arpa name = namenode-02.hadoop.
 
####然后关闭ns1的DNS服务进行测试。
 
[root@namenode-01 named]# service named stop
停止 named:.                                              [确定]
[root@namenode-01 named]# nslookup          
> set q=A
> namenode-01.hadoop
Server:         172.16.0.2
Address:        172.16.0.2#53
 
Name:   namenode-01.hadoop
Address: 172.16.0.1
> set q=PTR
> 172.16.0.1
Server:         172.16.0.2
Address:        172.16.0.2#53
 
1.0.16.172.in-addr.arpa name = namenode-01.hadoop.

 

 

这样,做好了Namenode高可用,也勉强算是做好了DNS的高可用,集群中任意一台Namenode挂机,也不会影响整个集群的正常服务,新买的服务器只需要装好操作系统,在/etc/resolv.conf里面设置两个nameserver的IP地址即可,这就比hosts文件方便多了。

]]>
https://xianglei.tech/archives/xianglei/2015/04/113.html/feed 0
关于Diablo3的历史和现状思考 https://xianglei.tech/archives/xianglei/2015/04/117.html https://xianglei.tech/archives/xianglei/2015/04/117.html#respond Mon, 27 Apr 2015 08:27:20 +0000 http://xianglei.tech/?p=117 大菠萝3已经通了,用巫医基本没难度,玩游戏的过程中引发了一些思考,结合整个游戏的历史背景设定,总觉得不写出来就缺点什么。我是暴雪的忠实粉丝,暴雪每款游戏我都玩了,包括早期Dos下的失落的维京人。

简单介绍一下,Diablo是暴雪的一个ARPG的系列游戏,游戏名称来源于美国的一个Diablo山,而Diablo本身是西班牙文,就是魔鬼的意思,整个游戏的内容就是以打怪升级混装备为主。游戏的世界观很宏大,这是最吸引人的地方之一,天堂,地狱,人类,三界混战。天堂是天使军团,地狱有魔神Diablo,Mephisto,Baal三个大统领,下面还有四个魔王Beliar啥的,统帅地狱军团,人类是天使和魔鬼的结合体,也是天堂和地狱争夺的重要力量,谁获得了人类的支持,谁就有战胜对方的机会。大概的游戏背景就是这样,玩家作为人类的英雄,帮助天使打地狱,但是天使其实也不怎么样,曾经搞过投票要消灭人类。

但是我要说的跟这些都没关系,我要说的是Diablo失败的教训总结,游戏出了3部,Diablo也失败了三次。每次都没有人帮他总结经验教训,所以屡败屡战,屡战屡败。

那么Diablo失败的原因,我分析总结归纳如下:

首先,Diablo没有做好自己地狱军团的管理,CEO, CTO, CFO三魔神还算团结,但是下面几个魔王高级总监互相争权夺利,相互掐架,甚至打算赶跑创业团队的CEO-Diablo自己当领导,导致的后果就是面对天使和人类的进攻各自为战,毫无战略眼光和战术配合,这个大概是大公司的通病吧。而且三魔神似乎并不从中进行调解,虽然说此乃帝王之术,治内纵其互斗,Diablo收渔翁之利。但是这在治理公司内部还算好使,一旦面对外敌则各自为战,后继乏力,让人类和天使得以集中力量各个击破。所以,Diablo缺乏长远目标和管理能力,应该去商学院进修,念个EMBA,顺便扩大自己的鬼脉关系,加强各地区魔鬼的紧密合作。

其次,魔神没有建立好自己的粉丝团,天使为了拉拢人类,建立了赫拉迪姆,恶魔建立了女巫会,但是明显赫拉迪姆的支持者更多。恶魔能给的,天使都能给,恶魔不能给的天使还能给,而且是免费给,所以,人类的本性是贪婪的,总想获取更多,于是赫拉迪姆教会的支持者更多。尽管Diablo使用精神力量腐化了赫拉迪姆的大主教和国王李奥瑞克,但是广大人民群众并不支持他。而且Diablo没有想到让人类社会也参与到地狱的政治生活中,他应该在人类世界建立人类代表大会制度,如果建立了代表大会,人类代表由地狱指派,不管地狱提出什么政策,人类代表都举手通过。从第一届举手到死,那才舒坦,同时人类还觉得自己能参与到地狱的管理中是莫大的荣幸,地狱既平等又民主,不像天堂,都是高阶天使投票表决,根本没人类什么事,尽管绝大多数人类自己也没有地狱的投票权,连选民证都没有。

第三,Diablo没有强调以经济建设为中心,Diablo发动力量修建了那么多的地下城,陷阱。却没有想到提高人类的生活水平和生活质量。地下城极其庞大,却根本不通地铁,也没有CBD商圈和互联网,这点比起魔兽世界就差很多了。你看地精的商业帝国多发达,掌控了各大主城的拍卖行,各种族在商业上根本没有说话的份。而Diablo只是个好战份子,完全没有经济头脑和政治头脑。试想,如果Diablo致力于发展地狱和人类的经济,首先提高恶魔和人类的生活水平,获得更高的政治支持率,而不是仅在教会内部宣传自己的伟光正,与此同时进行分布广泛的战略物资储备,那么过几百年,Diablo绝对可以拉拢整个人类团队,把马车换成汽车谁不干啊。如果Diablo能很好的开发地下城的房地产和周边如钢铁行业,建材行业,既可以提高人类的就业率,也可以提高人类的生活水平,平房扒了盖楼房,再卖给人类,很快GDP就可以超过天堂。再弄些个地狱直属企业,强行垄断战略物资的经营权,但雇佣人类做劳动力,做出来的东西再卖给人类,地狱早就发了,纳税额绝对超过三桶油和移动联通。而且人类富了就牛逼了,该买奢侈品炫耀了,买奢侈品就会去天堂,直接把天堂的马桶搋子都买断货了,能把天使们都看傻眼了。然后买东西休息的时候在天堂的金刚大门上刻字:莉娅凯恩到此一游,或者对着衣卒尔的塑像吐痰,(其实根本不知道衣卒尔是谁,就是有口痰想吐了)在金刚大门前也不排队,吵吵闹闹的,又或者等入关的时候聚一堆打扑克。进了混沌要塞一群人类大妈开始跳舞,放最炫民族风,泰瑞尔脑袋都能炸了。

第四,Diablo没有建立良好的宣传手段,只建立了一些毫不起眼的教会,教徒都破衣烂衫的。同时也没有明确的宗教目标,眼光太狭隘,只以攻打天堂为目标,却没有考虑地狱和人类攻打完天堂赢了以后怎么办。他应该提出以解放全人类为目标,消灭天堂的阶级统治为纲领,着力于进行宣传天堂对人类的剥削和压迫为手段的大规模路演。同时建立地狱卫星频道,在人类社会家家送电视和锅,每天循环播放地狱新闻联播。前十分钟Diablo,Mephisto,Baal等领导人亲切会见来访的人类的代表团,并提出对人类的无偿经济援助,免息贷款。中间十分钟播出地狱百姓们安居乐业,今年金币又是大丰收。四大魔王送温暖到地狱的边远地区,免费发放白金币。CPI下调,GDP提高,奶牛关开放,喝奶不花钱之类的。最后十分钟播出天堂各种内讧,高阶天使在议会打架,扔鞋,马萨伊尔发疯要当死神消灭全人类等等,而低阶天使都生活在水深火热中,不是遭灾就是内战。这样过十年,不用Diablo发话,一帮人类愤青就会在地狱局域网上发帖高喊:“打倒天堂帝国主义,实现全人类的解放。地狱必将取代天堂,地狱是人类的终极目标,Diablo是人类的大救星,Baal是名号响彻宇宙的无敌大元帅”。毕竟天堂曾经就消灭人类搞过投票,虽然没实施,但谁要是说一句天堂好,立马会遭到无数的地狱党徒围攻:“人奸,天堂狗,滚去天堂找你的天使爹去。”还需要在人类社会树立地狱英雄的榜样,比如在地狱天堂的战争中舍身救战友,以自己肉体挡火球的魔鬼英雄,而且得说,这些英雄都是为人类的解放而牺牲的。就算是杀人无数的魔鬼,也要把他立为楷模,说他当时提出的“凡天下之田,天下人同耕,无处不均匀”等,虽然在实现理想的过程中犯了一些微不足道的错误,最后被天使和人类共同剿灭,但是都不足以抹煞他伟大的功绩。

第五,没有建立信息防火墙,任由天使在人类社会建立赫拉迪姆,宣传天使的政策,Diablo如果能通过技术手段屏蔽一切来自天堂的消息,那么对自己宣传无异于锦上添花,只建立地狱和人类的互联网,互联互通。天堂的网站用DNS污染,IP禁止等手段。甚至给浏览器颁发有漏洞的证书也行啊,同时建立行之有效的监控手段,凡是翻墙上天堂网站发帖反对地狱的,过两天就会有魔鬼上门查水表。也没有雇佣人类水军在人类网站上回帖,回一帖0.5金币,内容无非就是复制粘贴:“支持大菠萝;大菠萝好可爱;大菠萝的政策亚克西;坚决拥护大菠萝的正确领导”等等诸如此类的话。天使在youheaventube.com上发视频抨击地狱,人类根本看不到,人类访问youheaventube.com是404。

第六,地狱的污染问题太严重,人类无法生存,Diablo由于只关注地下城和陷阱的修建,虽然不搞房地产开发,但是也需要消耗大量的水泥和钢材,于是造成地下城里污水横流,到处充满了有毒的空气和瘟疫。如果Diablo能想办法把这些毒气吹到天堂而不是人类世界,让人类世界处处充满蓝天和阳光鲜花。或者高薪雇佣一些人类学者,比如迪卡德凯恩,宣传地狱飘上来的毒气和雾霾可以阻挡天堂对人类的窥探,还能防止天堂的激光制导武器的打击,海带可以缠上天堂战船的螺旋桨之类的。再加上地下城的经济开发,Diablo能输给天堂?

写不动了,Diablo的问题还有很多,不能一一详尽,综上所述,虽可能不尽不实,但是绝对是大菠萝屡战屡败的核心问题。如果在第四部游戏中,大菠萝解决了以上问题,虽然不敢保证一定可以战胜天堂,但绝对不会输了。

顺便吐槽网易的服务器,战网真的应该琢磨琢磨高并发的问题,根本就没法玩,又卡又掉线。4.23公测开服的当天别说上游戏了,战网网站都上不去。

以上故事纯属虚构,如有雷同,纯粹巧合。就写一乐,老少愤青,别太往心里去。

]]>
https://xianglei.tech/archives/xianglei/2015/04/117.html/feed 0
Tornado学习笔记(一) https://xianglei.tech/archives/xianglei/2015/04/120.html https://xianglei.tech/archives/xianglei/2015/04/120.html#respond Sat, 25 Apr 2015 08:29:54 +0000 http://xianglei.tech/?p=120 最近开始用Tornado做开发了,究其原因,主要是Tornado基于Python,一来代码量少开发速度快,二来采用epoll方式,能够承载的并发量很高。在我的i5台式机上用ab测试,不连接数据库的情况下,单用get生成页面,大概平均的并发量在7900左右。这比php或者java能够承载并发量都高很多很多。三来Python代码可维护性相对来说比php好很多,语法结构清晰。四来,tornado的框架设计的很黄很暴力,以HTTP请求方式作为方法名称,通常情况下,用户写一个页面只需要有get和post两种方式的方法定义就够了。

在学习的过程中遇到一些比较重要的问题,记录下来以后备查,在学习的过程中遇到不少问题,基本都是靠翻墙解决,百度实在是令人痛苦不堪。记录比较零散一些,可能不仅限于tornado,也会包括python的一些知识。由于我也还在学习过程中,所以有些东西不一定详尽或者理解到位,tornado高人勿拍。

tornado入门不是很难,只要理解了他处理的方式就很好做了。tornado在处理网页的时候,针对于URL的连接,实际就是对class类的一个路由映射。而类中的方法通常无非就两种,处理连接请求的get或者post。所以tornado的页面编写很简单。比如,这是一个用作验证登录用户的类,逐行解释一下:

class SigninHandler(BaseHandler): #引入BaseHandler
    def post(self): #HTTP的POST方法,是GET渲染的form中的post method所对应
        username = self.get_argument('username') #获取form中username的值
        password = self.get_argument('password') #获取form中password的值
        conn = MySQLdb.connect('localhost', user = 'root', passwd = '', db = 'datacenter', charset = 'utf8', cursorclass = MySQLdb.cursors.DictCursor) #连接数据库,指定cursorclass的目的是要让返回结果以字典的形式呈现,如果不写,是以元组形式返回
        cursor= conn.cursor() #定义数据库指针
 
        sql = 'SELECT * FROM dc_users WHERE username=%s AND password=password(%s)' #写sql,为何这样写后面再说
        cursor.execute(sql, (username, password,)) #执行SQL
        row = cursor.fetchone() #获取一条,返回值为dict,因为前面连接数据库时定义了cursorclass = MySQLdb.cursors.DictCursor,当然,你需要import MySQLdb.cursors的包
        if row: #如果存在记录
            self.set_secure_cookie('id', str(row['id']).encode('unicode_escape'),  expires_days=None) #设置安全cookie,防止xsrf跨域
            self.set_secure_cookie('username', row['username'].encode('unicode_escape'),  expires_days=None) #same
            self.set_secure_cookie('role', row['role'].encode('unicode_escape'),  expires_days=None) #same
            ip = self.request.remote_ip #获取来访者IP
            sql = 'UPDATE dc_users SET last_access = NOW(), last_ip=%s WHERE id = %s' #认证审计变更的SQL
            cursor.execute(sql, (ip, row['id'],)) #执行SQL
            conn.commit() #提交执行
            cursor.close() #关闭指针
            conn.close() #关闭数据库连接
            self.redirect('/') #转入首页
            return #返回,按照官方文档的要求,在redirect之后需要写空的return,否则可能会有问题,实测确实会有问题
        else: #如果不存在记录
            self.redirect('/Signin') #跳转回登录页面
            return
    def get(self): #HTTP GET方式
        self.render('users/login_form.html') #渲染登录框HTML

login_form.html内容如下

{% include 'header.html' %} <!--引入header文件,需要跟login_form在同一路径下,否则写相对路径,如 {% include '../header.html' %} -->
<div class="container">
    <h2><script>document.write(language.Title + ' ' + language.Version + ' - ' + language.Codename)</script></h2>
    <form class="form-horizontal" method="post" action="/Signin"> <!--这里的action对应的上面Python代码中SigninHandler的post方法-->
        {% module xsrf_form_html() %} <!--防跨域cookie模块-->
        <div class="form-group">
            <label class="col-sm-2 control-label"><script>document.write(language.Username + language.Colon)</script></label>
            <div class="col-sm-4"><input class="form-control" type="text" name="username" placeholder="Username"></div>
        </div>
        <div class="form-group">
            <label class="col-sm-2 control-label"><script>document.write(language.Password + language.Colon)</script></label>
            <div class="col-sm-4"><input class="form-control" type="password" name="password" placeholder="Password"></div>
        </div>
        <div class="form-group">
            <div class="col-sm-2"></div>
            <div class="col-sm-4">
                <button type="submit" class="col-sm-4 btn btn-info"><script>document.write(language.Signin)</script></button>
            </div>
        </div>
    </form>
</div>
{% include 'footer.html' %}

The main program should look like this:

#-*- coding: utf-8 -*-
 
import sys
reload(sys)
sys.setdefaultencoding('utf-8')
import tornado.ioloop
import tornado.web
import tornado.httpserver
import tornado.autoreload
import os
 
class BaseHandler(tornado.web.RequestHandler): #BaseHandler
    def get_current_user(self):
        user = self.get_secure_cookie('username')
        return user
 
class IndexHandler(BaseHandler):
    @tornado.web.authenticated
    def get(self):
        if not self.current_user:
            self.redirect('/Signin') #if not logged in, redirect to /Signin, whose GET method renders login_form.html
            return
        self.render('welcome.html') #otherwise render welcome.html
 
settings = \
    {
        "cookie_secret": "HeavyMetalWillNeverDie", #cookie signing secret
        "xsrf_cookies": True, #enable cross-site request forgery protection
        "gzip": False, #disable gzip output
        "debug": False, #disable debug mode; debug is a tangled thing, personally I like to turn it on
        "template_path": os.path.join(os.path.dirname(__file__), "./templates"), #where the templates (login_form.html, header.html, etc.) live relative to this program
        "static_path": os.path.join(os.path.dirname(__file__), "./static"), #where JS, CSS and other static files live relative to this program
        "login_url": "/Signin", #the login URL is /Signin
    }
 
application = tornado.web.Application([
    (r"/", IndexHandler), #route / to IndexHandler
    (r"/Signin", SigninHandler) #route /Signin to SigninHandler (note the capital S, matching login_url and the redirects above)
], **settings)
 
if __name__ == "__main__": #start tornado; with debug enabled in settings you get autoreload (development mode), with debug off there is no autoreload (production mode). Autoreload means that whenever tornado detects a file change, the page reflects it without restarting the server; without it, changes are invisible until a restart.
    server = tornado.httpserver.HTTPServer(application)
    server.bind(10002) #bind to port 10002
    server.start(0) #start Tornado with multiple processes automatically; otherwise you would have to start several processes by hand
    tornado.ioloop.IOLoop.instance().start()

For the SQL part, it is best to write cursor.execute(sql, (id,)), passing the %s values to execute as a tuple. The point is to minimize the risk of SQL injection. If you write 'select * from xxx where id = ' + id or 'select * from xxx where id = %s' % id directly, you are open to injection. Note that with sqlite3 the placeholder is written as 'select * from xxx where id=?', but execute is called the same way.

Also, if xsrf protection is enabled, every HTML form must include {% module xsrf_form_html() %}, otherwise you will get a forbidden error.

The next post will cover character encoding handling, the most annoying part of Python 2.

]]>
https://xianglei.tech/archives/xianglei/2015/04/120.html/feed 0
Hadoop Operations Series (Part 14) https://xianglei.tech/archives/xianglei/2015/04/126.html https://xianglei.tech/archives/xianglei/2015/04/126.html#respond Mon, 20 Apr 2015 08:42:54 +0000 http://xianglei.tech/?p=126 Over the weekend I traveled out of town to do a Hadoop cluster failure analysis and performance tuning job for a certain provincial mobile carrier (corrected after confirmation: it was the China Mobile location-services base, not the provincial company). Here are the problem points worth recording.

The system handles the carrier's signaling data, a bit over 1 TB per day, on 20 Hadoop servers. One has to marvel at how rich carriers are: 256 GB of RAM and 32 CPU cores per machine, yet only six 2 TB disks attached. Another ten or so servers have 64 GB of RAM, 32 cores and 4 to 6 disks. According to the users, jobs run slowly and sometimes fail, and a rerun fixes them.

Software environment: RedHat 6.2, CDH Hadoop 4.2.1.

Total capacity is a bit over 260 TB, with over 200 TB already used.

First, this hardware configuration is upside down: the RAM and CPU do not match the disks. Hadoop cares about aggregate disk throughput, and the number of disks that can be read and written in parallel determines that throughput. For Hadoop, many disks beat a few large ones. For example, if a single disk reads or writes at about 210 MB/s, scanning a full 3 TB disk takes roughly four hours, while a 1 TB disk takes only a bit more than one. The best way to attach disks for Hadoop is JBOD, plugged straight into the board with no RAID. With 10 disks reading and writing concurrently you get around 2.1 GB/s of throughput, with 20 disks around 4 GB/s. So for Hadoop, thirty 1 TB disks absolutely outperform ten 3 TB disks; that said, 2 TB disks currently give the best price/performance.
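
To put rough numbers behind that throughput argument, a quick sequential dd pass over each data mount is usually enough. A minimal sketch, assuming data disks mounted at /data1, /data2 and so on (the paths are illustrative, not from this cluster):

#!/bin/sh
# Rough sequential write/read test for each Hadoop data mount.
# Mount points are an assumption; adjust the glob to your own layout.
for d in /data[0-9]*; do
    echo "== $d =="
    # 1 GB sequential write; conv=fdatasync makes dd include the final flush in its timing
    dd if=/dev/zero of=$d/ddtest.tmp bs=1M count=1024 conv=fdatasync 2>&1 | tail -1
    # read it back; note the page cache can inflate this number on a machine with lots of RAM
    dd if=$d/ddtest.tmp of=/dev/null bs=1M 2>&1 | tail -1
    rm -f $d/ddtest.tmp
done

Summing the per-disk figures gives a ceiling on what HDFS can move through that node; if the sum is well below the NIC bandwidth, more spindles rather than more RAM is the fix.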

Luckily there were only 20 servers, so I could go through them one by one by hand; with 200 of them, checking each one would have been the death of me at China Mobile. Apart from the unreasonable hardware, a few other fairly important problems turned up.

One: no rack awareness is configured on the 20 machines, and the replication factor is 2. Total cluster capacity only allows two replicas for now, so nothing can be done until more servers are added.

Two: the configuration is unreasonable, with no separate Hadoop settings for the heterogeneous hardware
1. Because they run CDH 4, what is actually running is MRv1 rather than YARN. The servers with 256 GB of RAM are configured with 150 map slots and 130 reduce slots, which far exceeds the compute capacity of 32 cores; most of the CPU time is burned switching between MR Java processes. Tracing a large number of those Java processes with strace showed them waiting on mutexes (FUTEX_WAIT). So while jobs run, system CPU on these servers is very high and the user CPU actually doing MR work is small.

2. The 256 GB and 64 GB servers share the same Hadoop configuration file: the 64 GB machines also get 150 map slots and 130 reduce slots, and mapred.map(reduce).child.java.opts was never changed either; it is a miracle they never hit OOM... More MR slots is not better: the counts have to be worked out against the number of cores and the amount of memory (a rule-of-thumb sketch follows after this list). In 2.0 it is simpler, you only configure the number of vcores, but a larger vcore count is not automatically better either. Worse, the company that reportedly trained them on Hadoop before the cluster was built told them the map and reduce slot settings are useless and can be ignored; that training really set them up to fail.

3. No reserved disk space was configured. Not long after I arrived on site a NodeManager died. I assumed my formidable presence had frightened the server, but logging in showed the disk at 100%...
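
As a rough illustration of matching MRv1 slot counts to the hardware (my own rule of thumb, not an official CDH formula; the 25% reserve and the 2 GB child heap are assumptions), a back-of-the-envelope calculation like this keeps the slots in line with cores and memory:

#!/bin/sh
# Illustrative MRv1 slot sizing heuristic; the ratios here are assumptions, not official guidance.
CORES=$(grep -c ^processor /proc/cpuinfo)
MEM_MB=$(awk '/MemTotal/{print int($2/1024)}' /proc/meminfo)
CHILD_HEAP_MB=2048                      # assumed -Xmx in mapred.*.child.java.opts
# keep ~25% of cores and memory for the OS, DataNode and TaskTracker daemons
MAX_BY_CPU=$((CORES * 3 / 4))
MAX_BY_MEM=$((MEM_MB * 3 / 4 / CHILD_HEAP_MB))
TOTAL=$((MAX_BY_CPU < MAX_BY_MEM ? MAX_BY_CPU : MAX_BY_MEM))
echo "map slots:    $((TOTAL * 2 / 3))"
echo "reduce slots: $((TOTAL / 3))"

On the 64 GB machines this lands in the low twenties of total slots, nowhere near the 280 they had configured.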

Three: no tuning at the Linux OS level.
1. TCP TIME_WAIT recycling and reuse (tcp_tw_recycle / tcp_tw_reuse) not enabled

2. Open file descriptor limits never raised

3. Three servers still had SELinux enabled

4. No automated operations: all 20 Hadoop servers were installed by hand from unpacked tarballs.

5. No monitoring: Ganglia was apparently installed only a few days before I arrived... and it collects MR and HBASE metrics but not HDFS.

6. No data compression.

7. Far too many small files: 3.8 million blocks for 3.6 million files. The block size is set to 128 MB, but most files I looked at fall well short of that, mostly around 40-80 MB (a quick way to check is sketched below).
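
A quick way to gauge the small-file problem is to let fsck print per-file sizes and count how many fall under one block. A minimal sketch; the path and the 128 MB threshold are illustrative:

#!/bin/sh
# Count HDFS files smaller than the 128 MB block size under a given path.
# fsck -files prints one "path size bytes, N block(s): ..." line per file.
hadoop fsck /user/warehouse -files 2>/dev/null | \
    awk '$2 ~ /^[0-9]+$/ && $3 == "bytes," { total++; if ($2 < 134217728) small++ }
         END { if (total > 0) printf "files: %d, below one block: %d (%.1f%%)\n", total, small, small*100/total }'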

Four: the business layer is overly complicated.
1. Data cleansing relies on Hive instead of hand-written MR. Hive is fast to develop in, but its execution efficiency is genuinely poor.

2. Individual queries join too many tables, and with Hive parallel query execution enabled this spawns a large number of stage jobs that hog MR slots.

3. Too many jobs run concurrently, with no job decomposition or scheduled planning.

4. The cleansed data is compressed with snappy, which in this form is not splittable when read for computation, so each file can only be read by a single map.

Overall, the poor performance basically comes down to unreasonable hardware configuration and unfamiliarity with Hadoop internals, and that is not something I can solve at my level.

Of course this is not the most unreasonable hardware I have seen. Early last year a systems integrator sold some ministry a batch of Hadoop servers and asked me to configure them: 128 GB of RAM, 64 cores, and four 2 TB disks. Staring at those boxes I honestly had no idea how to configure them sensibly. They said they would pay; they still haven't, the deadbeats...

The point is, running Hadoop is not just a matter of getting it to start. Traditional carriers still look at these new open-source systems with old-school assumptions: one machine with enough CPU and enough RAM means you don't need many machines. If it were that simple, nobody would have needed to develop Hadoop; Google would have bought one monster mainframe and kept using Oracle. Hadoop, and Spark too, are more about ants eating an elephant: each machine may be weak, but enough of them together solve big problems.

Whether it is cloud services or big data, what really matters is operations capability. Strong ops means strong data.

]]>
https://xianglei.tech/archives/xianglei/2015/04/126.html/feed 0
Fixing the Black Screen After Updating the nVidia Driver on Ubuntu https://xianglei.tech/archives/xianglei/2015/04/144.html https://xianglei.tech/archives/xianglei/2015/04/144.html#respond Thu, 09 Apr 2015 08:26:51 +0000 http://xianglei.tech/?p=144 Nothing to do with big data; just a note for my own use.

I have completely abandoned Windows and now do all my work on Ubuntu. It is not that Windows itself is bad; it is that the Windows ecosystem in China is terrible. Install any little domestic utility and it drags in a pile of junk: assorted "security guards", phone assistants, input methods. Click the wrong button and you get a heap of garbage; traps everywhere. Even the FileZilla installer from the open-source SourceForge silently downloads Norton 360 in the background and force-installs it; even the foreigners have been dragged down by these hooligan software companies. Users get no real choice, so I made up my mind, killed Windows and went straight to Ubuntu. A few days of adjusting and the habits stuck. I also swapped Eclipse for IntelliJ IDEA and feel noticeably more productive than I was on Windows; watching other people use Windows now, the thing just looks like junk. Apart from online banking, which still needs a VM, nothing really requires Windows anymore. And honestly, solving problems on the Linux command line is much faster than clicking through a GUI.

But, and there is always a but, while Linux itself is great, the graphics vendors support it poorly. In half a year on ubuntu, basically every problem I have hit was the nVidia card. Linus Torvalds once gave nVidia the finger on stage and said "Fuck you nVidia." My experience has been exactly the same.

The official Linux driver lags badly, and the bumblebee ppa carries the newest drivers, but they are unstable; last night one of them black-screened my machine. The laptop has Intel/nVidia hybrid graphics and normally runs on the Intel iGPU without much trouble. Last night I installed the 349 driver from xorg-edgers, switched over to the nVidia GPU, and got a black screen straight from boot that nothing would fix. With Google blocked, several pages of Baidu results basically all suggested reinstalling the system, so once again: Baidu searches for real work return garbage. A special reminder to programmers: if you want to improve, pay for a VPN and use Google instead of Baidu; programmers who rely on Baidu sit at the very bottom of the contempt chain, whatever language they write. :)

So it was over the wall to Google, and the very first hit had the answer I wanted. Fixed quickly. This applies to a black or corrupted screen after installing a non-official nVidia driver; the steps are written up below so I don't forget them.

  1. After the black screen at boot, press Ctrl-Alt-F1 to get to a console. Basic knowledge; a network connection is required.
  2. Once in CLI mode, sudo apt-get install ppa-purge
  3. sudo ppa-purge xorg-edgers
  4. sudo apt-get purge nvidia-*
  5. Create a temporary directory, e.g. mkdir ~/tmp
  6. cp /etc/X11/xorg.conf* ~/tmp
  7. sudo apt-get autoremove
  8. sudo apt-get update
  9. sudo apt-get upgrade
  10. sudo reboot
  11. After the reboot, Ctrl-Alt-F1 into the console again
  12. sudo apt-get install nvidia-331 nvidia-prime
  13. sudo cp ~/tmp/xorg.conf.nvidia-xconfig-original /etc/X11/xorg.conf
  14. sudo vi /etc/X11/xorg.conf
  15. Make sure the contents are as follows
    Section "ServerLayout"
        Identifier "layout"
        Screen 0 "nvidia"
        Inactive "intel"
    EndSection
     
    Section "Device"
        Identifier "intel"
        Driver "intel"
        BusID "PCI:0@0:2:0"
        Option "AccelMethod" "SNA"
    EndSection
     
    Section "Screen"
        Identifier "intel"
        Device "intel"
    EndSection
     
    Section "Device"
        Identifier "nvidia"
        Driver "nvidia"
        BusID "PCI:1@0:0:0"
        Option "ConstrainCursor" "off"
    EndSection
     
    Section "Screen"
        Identifier "nvidia"
        Device "nvidia"
        Option "AllowEmptyInitialConfiguration" "on"
        Option "IgnoreDisplayDevices" "CRT"
    EndSection

     

Reboot one last time and it is fixed. If errors still show up after boot completes, run dpkg-reconfigure nvidia-331 nvidia-331-uvm nvidia-settings, reboot again, and that should be the end of it.

Finally, shout it together with Linus: "Fuck you nVidia"

]]>
https://xianglei.tech/archives/xianglei/2015/04/144.html/feed 0
An OpenWRT Embedded Linux Troubleshooting Case https://xianglei.tech/archives/xianglei/2015/01/148.html https://xianglei.tech/archives/xianglei/2015/01/148.html#respond Fri, 09 Jan 2015 16:20:03 +0000 http://xianglei.tech/?p=148 Nothing to do with big data; just a record of troubleshooting something for a friend.

A former colleague I get along well with is now doing a startup in enterprise wifi. He bought our big data service and I am setting up and tuning the platform for him. Meanwhile this CEO hit some problems while debugging routers, so I fixed them on the side while doing the big data work.

OpenWRT is an embedded Linux used mainly on MIPS or ARM devices. Many routers and wifi products run it; its selling point is being lightweight.

Coova-Chilli is an access controller on openwrt that provides an authentication gateway; it can use RADIUS or HTTP for access accounting and similar functions.

Normally, after chilli starts, four tun virtual tunnel interfaces come up. The fault was intermittent: every so often two tun devices would appear with the same IP address, something like this

tun0 10.1.0.1
tun1 10.1.0.1
tun2 10.2.0.1
tun3 10.3.0.1
tun4 10.4.0.1

 

Normally only the devices tun0-3 should exist, but every startup produced one or two extras, and not always the same ones: sometimes tun0 and tun1 shared an IP, sometimes tun2 and tun3. On top of that OpenWRT does not record syslog by default, which makes this hard to investigate. You can in fact read syslog through logread, but syslog contained nothing useful anyway.

My friend used to write code too. He ground through three all-nighters without finding the cause, sprinkling the chilli startup script with all kinds of logging, wait and sleep, to no effect. In the afternoon, after we finished discussing the current requirements for the big data platform, I had time on my hands and looked at the script with him. The chilli script itself should not have many problems, and he had deployed per the official documentation, so at first I saw nothing wrong either. The chilli script sits in /etc/init.d by default, which should be fine. Then it clicked: he told me he had added a command to rc.local for startup. I looked at rc.local; it ran a startup script he had placed under /root, and vi on that script showed a single /etc/init.d/chilli restart. I asked what it was for; he said the official wrt docs recommended writing it that way "to be safe". I commented out the restart line, rebooted ten times, and the tun tunnels came up clean every time. Twenty minutes, done.

Problem analysis

The original chilli script is as follows

#! /bin/sh    
### BEGIN INIT INFO    
# Provides:          chilli    
# Required-Start:    $remote_fs $syslog $network    
# Required-Stop:     $remote_fs $syslog $network    
# Default-Start:     2 3 4 5    
# Default-Stop:      0 1 6    
# Short-Description: Start CoovaChilli daemon at boot time    
# Description:       Enable CoovaChilli service provided by daemon.    
### END INIT INFO    
PATH=/sbin:/bin:/usr/sbin:/usr/bin    
DAEMON=/usr/sbin/chilli    
NAME=chilli    
DESC=chilli    
START_CHILLI=0    
if [ -f /etc/default/chilli ] ; then    
. /etc/default/chilli    
fi    
if [ "$START_CHILLI" != "1" ] ; then    
echo "Chilli default off. Look at /etc/default/chilli"    
exit 0    
fi    
test -f $DAEMON || exit 0    
. /etc/chilli/functions    
MULTI=$(ls /etc/chilli/*/chilli.conf 2>/dev/null)    
[ -z "$DHCPIF" ] && [ -n "$MULTI" ] && {    
for c in $MULTI;     
do    
    echo "Found configuration $c"    
    DHCPIF=$(basename $(echo $c|sed 's#/chilli.conf##'))    
    export DHCPIF    
    echo "Running DHCPIF=$DHCPIF $0 $*"    
    sh $0 $*    
done    
exit    
}    
if [ -n "$DHCPIF" ]; then    
CONFIG=/etc/chilli/$DHCPIF/chilli.conf    
else    
CONFIG=/etc/chilli.conf    
fi    
[ -f $CONFIG ] || {    
echo "$CONFIG Not found"    
exit 0    
}    
check_required    
RETVAL=0    
prog="chilli"    
case "$1" in    
start)    
    echo -n "Starting $DESC: "    
    /sbin/modprobe tun >/dev/null 2>&1    
    echo 1 > /proc/sys/net/ipv4/ip_forward    
    writeconfig    
    radiusconfig    
    test ${HS_ADMINTERVAL:-0} -gt 0 && {       
(crontab -l 2>&- | grep -v $0    
        echo "*/$HS_ADMINTERVAL * * * * $0 radconfig"    
        ) | crontab - 2>&-    
    }    
    ifconfig $HS_LANIF 0.0.0.0    
    start-stop-daemon --start --quiet --pidfile /var/run/$NAME.$HS_LANIF.pid \    
        --exec $DAEMON -- -c $CONFIG    
    RETVAL=$?    
    echo "$NAME."    
    ;;    
checkrunning)    
    check=`start-stop-daemon --start --exec $DAEMON --test`    
    if [ x"$check" != x"$DAEMON already running." ] ; then    
$0 start    
    fi    
    ;;    
radconfig)    
    [ -e $MAIN_CONF ] || writeconfig    
    radiusconfig    
    ;;    
restart)    
    $0 stop    
    sleep 1    
    $0 start    
    RETVAL=$?    
    ;;    
stop)    
    echo -n "Stopping $DESC: "    
    crontab -l 2>&- | grep -v $0 | crontab -    
    start-stop-daemon --oknodo --stop --quiet --pidfile /var/run/$NAME.$HS_LANIF.pid \    
    --exec $DAEMON    
    echo "$NAME."    
    ;;    
reload)    
    echo "Reloading $DESC."    
    start-stop-daemon --stop --signal 1 --quiet --pidfile \    
    /var/run/$NAME.$HS_LANIF.pid --exec $DAEMON    
    ;;    
condrestart)    
    check=`start-stop-daemon --start --exec $DAEMON --test`    
    if [ x"$check" != x"$DAEMON already running." ] ; then    
$0 restart    
RETVAL=$?    
    fi    
    ;;    
status)    
    status chilli    
    RETVAL=$?    
    ;;    
*)    
    N=/etc/init.d/$NAME    
    echo "Usage: $N {start|stop|restart|condrestart|status|reload|radconfig}" >&2    
    exit 1    
    ;;    
esac    
exit 0

 

Here is the problem: while debugging, he had added a wait inside the for c in $MULTI loop to make sure every child process started, and later added several sleeps around tun creation for debugging. Following the official docs he had also added a restart to rc.local. Now the trouble starts: /etc/init.d runs the chilli start command automatically at boot, and with the added wait and sleep that init script lingers; meanwhile Linux launches the chilli restart from rc.local on a different tty, and so two or three tun devices end up sharing the same IP address.
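
If a hook in rc.local is genuinely wanted, one way to avoid racing the init script is to make it conditional instead of an unconditional restart. A minimal sketch, assuming a busybox environment where pidof is available:

#!/bin/sh
# rc.local fragment: only start chilli if it is not already running,
# instead of restarting it and racing /etc/init.d/chilli start at boot.
if ! pidof chilli >/dev/null 2>&1; then
    /etc/init.d/chilli start
fi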

Anyway, the problem is solved. Since he had burned three all-nighters on this nonsense, I got to lecture a CEO who still debugs code himself, in the voice of a prophet: "Better to have no book than to believe everything in it." Official documentation for open-source systems often lags; a newer version may long since have removed the need for that restart, but the docs were never updated, and this is the result.

In summary: understanding how the systems you run actually work matters enormously.

Finally, a recruiting ad for this friend. On the surface they are a hardware startup building wifi gear, with a friendly atmosphere and generous pay; in reality they are a big data company with a friendly atmosphere and generous pay. The company's core focus is hadoop-based data mining and machine learning. Friends are welcome to apply or refer candidates. It is not about how brilliant your skills already are; even fresh graduates are fine. My friend the boss cares more about whether you have the spirit to dig in and keep learning, and joining the company gets you hands-on coaching in Hadoop development and operations from yours truly. A rare opportunity, sign up enthusiastically.

]]>
https://xianglei.tech/archives/xianglei/2015/01/148.html/feed 0
A Word Co-occurrence Implementation on Hadoop https://xianglei.tech/archives/xianglei/2014/08/153.html https://xianglei.tech/archives/xianglei/2014/08/153.html#respond Sun, 24 Aug 2014 08:44:15 +0000 http://xianglei.tech/?p=153 I have never known how to translate "word co-occurrence" properly. Word similarity? Co-occurring words? A word co-occurrence matrix?

It is a very common text-processing algorithm in statistics, used to measure which word pairs occur together most frequently across a document collection; strictly speaking it works on word pairs in context, not single words. It is a fairly common building block from which other statistical algorithms can be derived. It can be used for recommendation, because the result it produces is "people who viewed this also viewed that": for example product recommendations beyond collaborative filtering, credit-card risk analysis, or working out what everyone likes.

For example, in "I love you", the appearance of "I love" usually goes hand in hand with "love you". Chinese is handled differently from English, though: it needs preprocessing with a word segmentation library first.

The code is split in the usual Mapper, Reducer and Driver fashion.

The Mapper:

package wco;
 
import java.io.IOException;
 
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
 
public class WCoMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
 
  @Override
  public void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
     
    /*
     * Convert the whole line to lower case.
     */
    String line_lc = value.toString().toLowerCase();
    String before = null;

    /*
     *  Split the line into words;
     *  the key is the previous word joined with the current word,
     *  and the value is 1.
     */
    for (String word : line_lc.split("\\W+")) { //iterate over the line, splitting it into words on non-word characters
      if (word.length() > 0) {
        if (before != null) { //if the previous word is not empty, write the pair to the context (for the first word it is always empty, so we fall straight through to before = word below)
          context.write(new Text(before + "," + word), new IntWritable(1));
        }
        before = word; //the current word becomes the previous word
      }
    }
  }
}

 

The Reducer:

package wco;
 
import java.io.IOException;
 
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
 
public class WCoReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
 
  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
 
    int wordCount = 0;
    for (IntWritable value : values) {
      wordCount += value.get(); //just a plain word count over the pairs
    }
    context.write(key, new IntWritable(wordCount));
  }
}

 

The Driver needs no explanation; Drivers are the same the world over:

package wco;
 
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.Job;
 
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
 
public class WCo extends Configured implements Tool {
 
  @Override
  public int run(String[] args) throws Exception {
 
    if (args.length != 2) {
      System.out.printf("Usage: hadoop jar wco.WCo <input> <output>\n");
      return -1;
    }
 
    Job job = new Job(getConf());
    job.setJarByClass(WCo.class);
    job.setJobName("Word Co Occurrence");
 
    FileInputFormat.setInputPaths(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
 
    job.setMapperClass(WCoMapper.class);
    job.setReducerClass(WCoReducer.class);
 
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
 
    boolean success = job.waitForCompletion(true);
    return success ? 0 : 1;
  }
 
  public static void main(String[] args) throws Exception {
    int exitCode = ToolRunner.run(new Configuration(), new WCo(), args);
    System.exit(exitCode);
  }
}

 

The core of the algorithm is simply taking the previous word and the current word together as the key and doing a word count on those pairs, i.e. counting co-occurrence frequency as a basis for clustering the text. Online everyone talks about k-means, but in practice the algorithm follows the requirement: k-means or fuzzy k-means is not automatically glamorous, and wordcount is not automatically lowly.
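
To try it out, pack the three classes into a jar and submit it with the input and output paths the Driver expects; the jar name and HDFS paths below are just placeholders:

#!/bin/sh
# Run the word co-occurrence job and look at the most frequent pairs.
hadoop jar wco.jar wco.WCo /user/hadoop/books /user/hadoop/wco-out
hadoop fs -cat /user/hadoop/wco-out/part-r-* | sort -k2,2nr | head -20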

]]>
https://xianglei.tech/archives/xianglei/2014/08/153.html/feed 0
ResourceManager High Availability in Hadoop 2 https://xianglei.tech/archives/xianglei/2014/06/156.html https://xianglei.tech/archives/xianglei/2014/06/156.html#respond Fri, 06 Jun 2014 15:04:04 +0000 http://xianglei.tech/?p=156 I never paid much attention to Hadoop 2.2: too new, too many bugs. After 2.4 came out I started following a few things, for example that 2.4 ships ResourceManager high availability out of the box, which is quite attractive. I didn't notice it in 2.2, it apparently was not there, and CDH shipped its own solution; this time 2.4 brings it natively, which rounds things out: the Namenode has HA and Federation, the RM now has HA as well, and failover can be automated through ZKFC. From roughly 2.4 onward, Hadoop 2 can start gradually moving toward production.

Straight to the minimum requirements and settings for RM HA. Like NN HA, RM HA needs two machines with identical hardware; no explanation needed, since back in the 1.x days the NN and SNN already had to be identically specced. And just as when configuring NN HA, RM HA needs a service name. The settings below configure automatic failover for the RM; I doubt many people want manual recovery, but if you do, just remove the zookeeper part.

    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>192.168.1.2</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>192.168.1.3</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
 
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>192.168.1.2:2181,192.168.1.3:2181</value>
        <description>For multiple zk services, separate them with comma</description>
    </property>
 
    <property>
          <name>yarn.resourcemanager.cluster-id</name>
          <value>yarn-ha</value>
    </property>

 

Write these settings into yarn-site.xml, then start the RM normally on both servers, the same way you start NN HA: sudo -u yarn yarn-daemon.sh start resourcemanager
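
Once both are up, you can check which RM actually won the election, using the rm1/rm2 service IDs defined above:

#!/bin/sh
# Check which ResourceManager is currently active.
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2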

By the way, among Hadoop's various HA implementations there is a hidden property few people know about: forced transitions. Normally, switching HA roles from the command line means running

sudo -u hdfs hdfs haadmin -transitionToActive/transitionToStandby

 

or

sudo -u yarn yarn rmadmin -transitionToActive/transitionToStandby

 

But with ZKFC automatic failover enabled, this kind of change is refused. The message only says the operation can be forced, without giving the command; forcing a manual transition is actually simple, just add forcemanual.

sudo -u hdfs hdfs haadmin -transitionToActive --forcemanual nn1

 

The consequence is that ZKFC stops doing its job and you no longer have the safety net of automatic failover, but sometimes it is necessary. In particular, even with ZKFC working normally, the NN can occasionally end up with two standbys, and with two standbys things like Hive and Pig throw an error along the lines of "Operation category READ is not supported in state standby"; sometimes you even see one node apparently active and one standby and still get that error, and then a forced manual switch is the only way out. After forcing the switch, don't forget to restart ZKFC. The precondition for forcing a transition is that no user is performing any metadata operations, which is what effectively prevents split-brain; entering safe mode before switching is the more prudent way to do it.
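
A cautious version of that sequence might look like the following; nn1 and the zkfc restart step assume the standard hadoop-daemon.sh layout:

#!/bin/sh
# Force an active transition with clients quiesced, then bring automatic failover back.
sudo -u hdfs hdfs dfsadmin -safemode enter
sudo -u hdfs hdfs haadmin -transitionToActive --forcemanual nn1
sudo -u hdfs hdfs dfsadmin -safemode leave
# restart ZKFC on both namenodes afterwards so automatic failover resumes
sudo -u hdfs hadoop-daemon.sh start zkfc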

One more note: unlike the namenode, whose HA implementation allows only two nodes, ResourceManager HA is not limited to two; you can have more.

]]>
https://xianglei.tech/archives/xianglei/2014/06/156.html/feed 0
Hadoop Operations Series (Part 13) https://xianglei.tech/archives/xianglei/2014/06/160.html https://xianglei.tech/archives/xianglei/2014/06/160.html#respond Thu, 05 Jun 2014 09:57:21 +0000 http://xianglei.tech/?p=160 A note on an error that should not be very common on 2.x. It only happened on a test cluster; presumably hardly anyone restarts the Namenode on a production cluster, especially one with HA.

The scenario: Namenode HA was set up on 2.x, accessing HDFS via the Namespace URI failed with errors, both Namenodes appeared to be standby, the job history server would not start, and neither would the HBase Master. The fault is actually very simple.

2014-06-05 17:20:09,548 FATAL hs.JobHistoryServer (JobHistoryServer.java:launchJobHistoryServer(158)) - Error starting JobHistoryServer
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://HAtest:8020/history/done]
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:503)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:88)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:108)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:93)
    at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:155)
    at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:165)
Caused by: java.net.ConnectException: Call From slave.hadoop/192.168.118.134 to slave.hadoop:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:783)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:730)
    at org.apache.hadoop.ipc.Client.call(Client.java:1351)
    at org.apache.hadoop.ipc.Client.call(Client.java:1300)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:651)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:186)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1679)
    at org.apache.hadoop.fs.Hdfs.getFileStatus(Hdfs.java:124)
    at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1116)
    at org.apache.hadoop.fs.FileContext$14.next(FileContext.java:1112)
    at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
    at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1112)
    at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1528)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:556)
    at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:500)
    ... 8 more
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:735)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:529)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:493)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:547)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
    at org.apache.hadoop.ipc.Client$Connection.access$2600(Client.java:314)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1399)
    at org.apache.hadoop.ipc.Client.call(Client.java:1318)
    ... 28 more

 

HDFS could not be reached with an ls either. On inspection, ZooKeeper had not been started. HDFS HA depends on Zookeeper: besides the JournalNodes you also need to run ZKFC, the ZooKeeper Failover Controller, on both Namenodes. Failover for the hot-standby Namenode is handled by ZKFC; without it, the Namenodes and JournalNodes start but cannot decide which one should actually serve the namespace. The result was presumably a split-brain, and the whole of HDFS became inaccessible.

That also makes it obvious why the HBase Master would not start.

One more thing: on a large cluster the ZKFC tickTime should be set higher, to avoid the network congestion caused by too short a sync interval.

]]>
https://xianglei.tech/archives/xianglei/2014/06/160.html/feed 0
The "ARM Hadoop" Cluster Won an Award at the IngDan Hardware Contest https://xianglei.tech/archives/xianglei/2014/04/163.html https://xianglei.tech/archives/xianglei/2014/04/163.html#respond Sat, 19 Apr 2014 16:34:31 +0000 http://xianglei.tech/?p=163 The Hadoop HBase cluster built on ARM hardware entered Cogobuy's first IngDan "i-Future" hardware innovation contest, made the finals, and ended up as one of the ten best projects. The trophy proves it.

5383e4d4jw1eflaub5ge4j20p118gq6b.jpg

The background is a FreeBSD virtual machine

Readers of this blog may remember that early last year I had already built a single-node Hadoop on an ARM single-board computer. Later I picked up a few more boards and gradually built them into a cluster; embedded linux is quite fun. The cluster later gained HBase and Hive, then OpenTSDB, and I compiled an ARM build of Hadoop 2.3, so it can serve as a monitoring data store and display server. Spark would not go through: with 2 GB of memory the build simply reports insufficient memory and refuses to compile.

Its current form is a Hadoop deployment box: one box, three manual steps, and the whole Hadoop cluster is deployed and set up automatically.

]]>
https://xianglei.tech/archives/xianglei/2014/04/163.html/feed 0
Some Advice for Friends Just Starting with Hadoop https://xianglei.tech/archives/xianglei/2014/04/167.html https://xianglei.tech/archives/xianglei/2014/04/167.html#respond Thu, 17 Apr 2014 08:52:41 +0000 http://xianglei.tech/?p=167 With CCTV news going on about big data every day during the Two Sessions, lots of people have started paying attention to big data, Hadoop, data mining and data visualization. Running a startup, I meet many companies and individuals from the traditional data world who are moving toward Hadoop, and they ask a lot of questions, most of them quite similar. So I want to write up the ones many people seem to share.

On choosing a Hadoop version

So far, speaking as someone with one and a half feet inside Hadoop's door, my advice is still to use Hadoop 1.x. Many people will say: Hadoop is already at 2.4, why stick with 1.x? That question alone tells me the asker has never actually run Hadoop.

Reason one: Hadoop 1.x and 2.x are completely different things. This is not like upgrading a standalone web server from 1.0 to 2.0, and not like recompiling MySQL 5.0 and seamlessly migrating to 5.5. Moving from Hadoop 1.0 to 2.0 meant tearing down and rewriting the entire architecture; from the implementation to the user-facing interfaces they are two different systems. Don't casually assume it is just nginx going from 0.8 to 1.4. So my advice is: run 1.x in production, and stand up 2.x in a lab environment to get familiar with it.

Reason two: again, Hadoop is not a web server. A distributed system, even one Hadoop has already implemented, remains a very complex stack. Take HDFS alone: upgrading from Hadoop 0.20.2 to 0.20.203 meant first deploying the new version on every node, then stopping every service in the cluster, backing up the metadata, and only then upgrading HDFS, with no guarantee that the HDFS upgrade would succeed. The cost of one upgrade is large: downtime aside, if the upgrade fails you cannot even be sure the metadata survived intact. It is far, far messier than you imagine. And don't think Cloudera Manager or other management software buys you real automated operations; deploying Hadoop is only the first step of a long march.

Reason three: Hadoop 2.x is still quite unstable, has plenty of bugs, and iterates too quickly. If you want to pick 2.x, think carefully before deciding; choosing the newest version is no guarantee of safety. OpenSSL has been around for many years and still produced the Heartbleed hole, so what about something barely a year old? Remember, Hadoop took roughly seven or eight years to reach a stable 1.0, continuously patched and hardened by big companies including Yahoo, Facebook and BAT. Hadoop 2 is less than a year old and has not been through that kind of long, stable burn-in; the recent jump from 2.3 to 2.4 took only a month and a half and fixed more than 400 bugs.

So I do not recommend putting 2.x straight onto a production cluster right now. Wait and see; adopt it once it has stabilized. If you follow Apache JIRA, you can see that internal bug tracking for Hadoop 3.0 has already begun.

On Hadoop talent

I think companies need to consider Hadoop staffing from two sides: development talent and operations talent.

Development talent is currently scarce and concentrated in internet companies, but that can be fixed in relatively short order as Hadoop training spreads and Hadoop's own interfaces mature; such people will keep becoming more common.

Operations talent, for industries outside the internet, is basically not worth counting on for some time, not because there are too few, but because there are essentially none. Hadoop and cloud computing ultimately come down to operations, and people who can operate large-scale distributed systems are extremely hard to grow. DevOps in particular: DevOps people are scarce to begin with, and most of that scarce pool does web operations with puppet and fabric; moving over to distributed-system operations is still a real jump. So these people are hard to hire and hard to train. See the InfoQ interview with Chen Hao (左耳朵耗子): http://www.infoq.com/cn/articles/chenhao-on-cloud

Then you need to be clear about what kind of development talent you actually want. Hadoop is a bit like Windows or Linux: on the same operating system you can paint in Photoshop, animate in 3ds Max, or handle spreadsheets in Office, and those applications serve different goals. This requires the CTO or CIO to have at least a basic understanding of big data, Hadoop and the surrounding applications. Don't compare Hadoop to mysql-plus-php or traditional J2EE, decide there is nothing hard about it, and plan to outsource it at worst. It does not work that way at all.

On Hadoop training content

Having run internal Hadoop training at several companies, I find that companies in transition share one problem: biting off too much. They want a single round of training to cover Hadoop and everything around it thoroughly. A typical example is a company in Shanghai I trained recently: they wanted everything from Hadoop to HBase to Mahout to word segmentation to Spark and Storm. The training vendor can only respond by lining up several instructors for the different topics, and I don't think that kind of training does the company much good; at best it gives the staff a chance to nap together at lunch.

First, Hadoop is not something you sort out in one or two lectures; beyond the theory it needs the backing of a lot of hands-on experience.

Second, every component of the Hadoop ecosystem is a complicated thing in its own right. Using them is easy enough, but truly understanding each one is not, especially Mahout, Spark and R, which involve a lot of statistical and mathematical theory. Put a group of product people with no programming or statistics background in that class and all they can do is doze; I honestly think making them sit through Hadoop is cruel. They clearly cannot follow, but with the boss beside them they have to fight to stay awake.

Third, everyone is good at different things. No single instructor can teach Windows server administration, advanced Excel tricks, 3ds Max animation and Photoshop all at once. To win the deal, training vendors often promise the company several instructors teaching together, and companies often figure: same price, I get to hear everything, great. Not so. Each instructor's teaching style, depth of knowledge and course design are different; chicken, flour and vegetables thrown together are not necessarily dapanji with hand-pulled noodles, and you may well end up with instant ramen, tasteless yet a shame to throw away. So when a company arranges training it must aim at specific targets rather than going broad and shallow, which wastes resources and achieves nothing. Split the topics into different training tracks and find specialized, focused vendors for each. Again, that takes a CTO or CIO with ideas and vision; as the leader you should simply understand a bit more than everyone else, not in technical detail but with a more accurate grip on technical direction.

On integrating with existing business systems

Many people care about this, especially traditional enterprises that run Oracle with huge amounts of data inside; replacing it with Hadoop overnight is impossible. I think this worry is overthinking it. Hadoop is, bluntly, an offline analysis and processing tool. Its purpose is not to replace your database, and in fact it fundamentally cannot replace a relational database. What it does is the dirty, heavy work that relational databases cannot do; it is a complement to the existing architecture, not a replacement for it.

And that complementing and gradual substitution happens step by step, not in one stroke. Within my experience, no company has ever said "we are dropping mysql outright and going straight to Hadoop." If I met one, I would first admire their determination and then decline to write the proposal, telling them plainly that it cannot be done.

Hadoop offers several tools for hooking into traditional database workloads. Besides sqoop you can write your own; Hadoop's interfaces are simple, and so is JDBC.
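
For the common case of pulling a table from MySQL into HDFS, one sqoop invocation covers most of it; the host, credentials and table below are placeholders rather than anything from a real system:

#!/bin/sh
# Import one MySQL table into HDFS with Sqoop (connection details are placeholders).
sqoop import \
  --connect jdbc:mysql://dbhost:3306/datacenter \
  --username dbuser -P \
  --table dc_users \
  --target-dir /user/hadoop/dc_users \
  --num-mappers 4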

I haven't updated the blog in a while; running a startup is genuinely busy and genuinely hard. Fortunately a whole crowd of friends from the Hadoop circle support us and give us a great deal of selfless help. Thank you all.

]]>
https://xianglei.tech/archives/xianglei/2014/04/167.html/feed 0
A Few Small Scripts Used in Hadoop Deployment https://xianglei.tech/archives/xianglei/2014/03/169.html https://xianglei.tech/archives/xianglei/2014/03/169.html#respond Fri, 07 Mar 2014 10:44:26 +0000 http://xianglei.tech/?p=169 I recently gave up on the ssh-less way of deploying hadoop clusters and went back to ssh key authentication. The annoying part is that every machine needs the public key uploaded, and I happen to be lazy, so I wrote a few small scripts so that key distribution can be done entirely from one machine.

First, the script that generates the ssh key

#!/bin/sh
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys

 

ssh-keygen normally asks for a passphrase, which most people answer with three presses of Enter anyway; I can't even be bothered with that, so -P '' skips the prompt entirely.

Next, the script that pushes the public key to a slave node

#!/bin/sh
read -p "Remote server IP: " ip
ssh-copy-id -o StrictHostKeyChecking=no -i ~/.ssh/id_rsa.pub root@$ip
ssh root@$ip 'sed -i "s/^#RSAAuthentication\ yes/RSAAuthentication\ yes/g" /etc/ssh/sshd_config'
ssh root@$ip 'sed -i "s/^#PubkeyAuthentication\ yes/PubkeyAuthentication yes/g" /etc/ssh/sshd_config'
ssh root@$ip 'sed -i "s/^#PermitRootLogin\ yes/PermitRootLogin\ yes/g" /etc/ssh/sshd_config'
ssh root@$ip 'service sshd restart'
hostname=`ssh root@${ip} 'hostname'`
echo "Adding the remote hostname and IP to the local /etc/hosts"
echo "$ip    $hostname" >> /etc/hosts
echo "The remote hostname is $hostname; check /etc/hosts to confirm the hostname and IP were added to the host list"
echo "Host public key copy finished"

 

And the third script, which reads the host list and copies /etc/hosts to every host

#!/bin/sh
cat /etc/hosts | while read LINE
do
    ip=`echo $LINE | awk '{print $1}' | grep -v "::" | grep -v "127.0.0.1"`
    echo "Copying /etc/hosts to ${ip}"
    scp -o StrictHostKeyChecking=no /etc/hosts root@${ip}:/etc/
done

 

No explanation needed

]]>
https://xianglei.tech/archives/xianglei/2014/03/169.html/feed 0
Building an Infrared Remote-Controlled arm-hadoop Cluster https://xianglei.tech/archives/xianglei/2014/01/171.html https://xianglei.tech/archives/xianglei/2014/01/171.html#respond Sat, 04 Jan 2014 09:03:33 +0000 http://xianglei.tech/?p=171 Most people playing with dev boards use a Raspberry Pi. The Pi is fine, but for Hadoop its memory is a bit small at only 512 MB. So I shopped around and ended up with a product from a domestic open-hardware team called the CubieTruck: 2 GB of RAM, 8 GB of onboard storage, gigabit Ethernet, and it takes a 2.5-inch mechanical or SSD drive.

A quick outline of the overall steps first.

1. Flash and prepare the OS

The CubieTruck (CT from here on) ships with Android 4.2 by default, so that has to go; it is replaced with Ubuntu's linaro server 13.08.

2. Install and deploy Hadoop

Not much to introduce here. Hadoop is written in Java and the JVM hides the CPU instruction set, so in theory Hadoop can run on any platform, ARM included. The official Apache tar or deb can be used directly; I installed from the 32-bit deb, which needs a few adjustments and a repack, after which installation went fine and I found no computation errors.

3. Load the infrared port driver and write the IR control script.

The whole procedure in detail below

I. Flashing the system

Terminology:

linaro: the ubuntu-on-arm operating system developed jointly by ARM-ecosystem vendors such as AMD and Qualcomm; it is essentially Ubuntu.

nand: the onboard flash storage chip, as the device is named under Linux.

Use Allwinner's PhoenixSuit tool, the latest version seems to be 1.0.8, and download the CT's official linaro server image; linaro, again, being the ubuntu on arm that amd, qualcomm and others developed together.

Then just follow the tool's step-by-step prompts and the onboard system is easily replaced with ubuntu server.

Connect a display, keyboard, network cable and any other peripherals you need. No mouse: linux server is operated entirely from the command line, and installing X and GNOME on a server is plain foolishness.

Of course you could also use the Desktop image and treat it as a mini PC; the vendor even offers fedora and arch linux.

At this point the work has only just begun: the Linux image does not use the full 8 GB, so the nand has to be repartitioned and formatted. The flashed image splits the nand into three partitions, nanda through nandc; by default only nanda and nandb are used and nandc sits idle, so we format and mount it. Check with ls /dev/nand*. By default it boots straight into the system; if prompted for credentials use linaro/linaro, then sudo su - to become root.

#cd /dev
#mkfs.ext4 /dev/nandc
#echo "mount /dev/nandc /opt" >> /etc/rc.local
#mount /dev/nandc /opt
#df -h

 

After formatting and mounting, df -h shows roughly 5.1 GB of additional space.

Now suppose a hard disk is attached. The CT takes a 2.5-inch SATA drive without any trouble; under Linux the disk shows up as the sda device. The CT has two drive power connectors: the yellow/black lead is 12 V and the red/black lead is 5 V, and a 2.5-inch drive needs both plugged in. The 12 V connector is on the side nearest the NIC, the 5 V one on the far side. See the red board in the photo for the hookup; a SATA cable comes with the CT, no need to buy one.

wKioL1LHfvuRFljSAAEJnBvtYIY238.jpg

Next partition, format and mount the disk. Note: connect or disconnect the drive only with the power off; do not hot-plug it.

#fdisk /dev/sda
#mkfs.ext4 /dev/sda1
#mkdir -p /data
#echo "mount /dev/sda1 /data" >> /etc/rc.local
#mount /dev/sda1 /data

 

For the fdisk details just follow the prompts; m is help. On a new disk, create a new primary partition with n; on a used disk, delete the old partition with d first, then n a new primary partition and press Enter through the rest. Finally write the partition table with w, exit, and run the format.

The disk should now be mounted under /data, and step one, flashing, is complete. If you feel like it, run apt-get update and upgrade.

II. Installing and deploying Hadoop

You can install straight from the official tarball or from a deb. I used the official deb, which needs some rework: Apache only provides i386 and amd64 debs, which refuse to install on arm because of the CPU architecture check even though they actually work fine, so you build your own deb. The process: download the i386 deb from the official site,

then rework and repackage it

#dpkg -x hadoop_1.2.1-1_i386.deb hadoop_1.2.1-1_all
#dpkg -e hadoop_1.2.1-1_i386.deb hadoop_1.2.1-1_all
#cd hadoop_1.2.1-1_all/DEBIAN
#vi control #change Architecture: i386 inside to Architecture: all (or Architecture: armhf), leave everything else alone, save and quit
#cd ../../
#dpkg -b hadoop_1.2.1-1_all
#dpkg -i hadoop_1.2.1-1_all.deb

 

The hadoop package creates two users, hdfs and mapred.

Set up passwordless ssh for both the hdfs and mapred users (a sketch follows below); after that it is just a matter of following the official manual for configuring distributed hadoop. I won't repeat the deployment steps here; they are exactly the same as on x86.
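
A minimal sketch of that key setup, assuming both users already exist with usable home directories on every node and using the hostnames from this post; adjust to your own cluster:

#!/bin/sh
# Give the hdfs and mapred users passwordless ssh between the nodes.
for u in hdfs mapred; do
    sudo -u $u -H sh -c 'mkdir -p ~/.ssh && ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa'
    for h in namenode datanode-01; do
        sudo -u $u -H sh -c "ssh-copy-id -i ~/.ssh/id_rsa.pub $u@$h"
    done
done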

In the end, once configured, it only has to start cleanly with the following commands.

#sudo -u hdfs start-dfs.sh
#sudo -u mapred start-mapred.sh

 

III. Programming the infrared port.

IR programming needs the IR kernel module. Check with lsmod; if sun4i_ir is missing, run modprobe sun4i_ir.

Then write an ir.py that receives the IR input and triggers the corresponding actions. On my TV remote, pressing 2 is captured as keyboard key 1, pressing 3 as key 2, and so on; before wiring things up you can first test whether the script captures IR input on tty1 once started. Note it must be tty1, i.e. the console on the display plugged into the CT. In this script, 2 starts hdfs, 3 stops it, 4 starts mapred, 5 stops mapred, 6 starts the balancer, 7 stops the balancer, and 8 launches a mapred smoke test.

import select
import os, sys, time
import termios
def ir_catch():
        fd = sys.stdin.fileno()
        r = select.select([sys.stdin],[],[],0.01)
        rcode = ''
        if len(r[0]) >0:
                rcode  = sys.stdin.read(1)
        return rcode
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
new_settings = old_settings
new_settings[3] = new_settings[3] & ~termios.ICANON
new_settings[3] = new_settings[3] & ~termios.ECHONL
print 'old setting %s'%(repr(old_settings))
termios.tcsetattr(fd,termios.TCSAFLUSH,new_settings)
while True:
        c = ir_catch()
        if len(c) !=0 :
                #print 'input: %s'%(ord(c))
                if(ord(c) == 10):
                        print 'Power'
                        os.popen('reboot')
                elif(ord(c) == 49):
                        print '2'
                        f = os.popen('sudo -u hdfs /usr/sbin/start-dfs.sh').readlines()
                        for a in f:
                                print a
                elif(ord(c) == 50):
                        print '3'
                        f = os.popen('sudo -u hdfs /usr/sbin/stop-dfs.sh').readlines()
                        for a in f:
                                print a
                elif(ord(c) == 51):
                        print '4'
                        f = os.popen('sudo -u mapred /usr/sbin/start-mapred.sh').readlines()
                        for a in f:
                                print a
                elif(ord(c) == 52):
                        print '5'
                        f = os.popen('sudo -u mapred /usr/sbin/stop-mapred.sh').readlines()
                        for a in f:
                                print a
                elif(ord(c) == 53):
                        print '6'
                        f = os.popen('sudo -u hdfs /usr/sbin/start-balancer.sh').readlines()
                        for a in f:
                                print a
                elif(ord(c) == 54):
                        print '7'
                        f = os.popen('sudo -u hdfs /usr/sbin/stop-balancer.sh').readlines()
                        for a in f:
                                print a
                elif(ord(c) == 55):
                        print '8'
                        f = os.popen('sudo -u mapred hadoop jar /usr/share/hadoop/hadoop-examples-1.2.1.jar pi 10 100').readlines()
                        for a in f:
                                print a
                else:
                        print 'Unknown'
        else:
                #print 'Sleep 1'
                time.sleep(1)

 

IV. Closing remarks

With the IR port and GPIO opened up there is a lot you can do, not just remote-controlling a hadoop cluster, even the appliances at home. This was only a simple test; a dev board can do plenty more.

]]>
https://xianglei.tech/archives/xianglei/2014/01/171.html/feed 0
Hadoop Ops Automation: Building a deb Package https://xianglei.tech/archives/xianglei/2014/01/213.html https://xianglei.tech/archives/xianglei/2014/01/213.html#respond Tue, 31 Dec 2013 16:24:11 +0000 http://xianglei.tech/?p=213 First post of 2014, and the start of what will gradually grow into a series. New year, new look.

Packaging Hadoop and its surrounding ecosystem as deb/rpm matters a great deal for automated operations: build rpms and debs for the whole ecosystem, stand up a local yum or apt repository, and Hadoop deployment and operations become far simpler. This is in fact exactly what cloudera and hortonworks do.

I meant to cover rpm and deb together, but that would probably run too long, so I'll split them and start with deb. Building a deb is somewhat easier: there is no spec script to write.

Take hadoop 2.2.0 as the example. Apache does not provide official rpms or debs for the 2.0 line, so we have to produce our own, modified, rpm and deb.

I. Download the pre-built hadoop tarball, a bit over 100 MB, and unpack it

#wget http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
#tar zxf hadoop-2.2.0.tar.gz

II. Create the directories needed for packaging

#mkdir -p /opt/hadoop_2.2.0-1_amd64/DEBIAN
#mkdir -p /opt/hadoop_2.2.0-1_amd64/usr
#mkdir -p /opt/hadoop_2.2.0-1_amd64/etc

DEBIAN holds the packaging scripts; usr and etc are the paths the package will eventually install into. Once the package is built, the usr directory here corresponds to /usr on the target Linux system, and the etc directory to /etc.

III. Copy the hadoop contents into the target directories

The hadoop-2.2.0 directory unpacked in step one should contain the following directories.
-bin
-etc
–|-hadoop
-sbin
-share
-lib
-libexec
-include
That is roughly the directory layout of the original hadoop tarball. Now do the copy.

#tar zxf hadoop-2.2.0.tar.gz
#cd hadoop-2.2.0
#cp -rf bin sbin lib libexec share include /opt/hadoop_2.2.0-1_amd64/usr/
#cp -rf etc/hadoop /opt/hadoop_2.2.0-1_amd64/etc/

After the copy, the packaging directory /opt/hadoop_2.2.0-1_amd64/ should look roughly like this
-DEBIAN
-etc
–|-hadoop
-usr
–|-bin
–|-sbin
–|-include
–|-lib
–|-libexec
–|-share
Now write the control files under the DEBIAN directory. Packaging for ubuntu and debian is a bit simpler than rpm: only a few standalone script files are needed.
Enter the DEBIAN directory and start with the metadata file, control

#cd /opt/hadoop_2.2.0-1_amd64/DEBIAN
#vi control

Enter the following

Package: hadoop
Version: 2.2.0-GA
Section: misc
Priority: optional
Architecture: amd64
Provides: hadoop
Maintainer: Xianglei
Description: The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.

Save and quit, then edit conffiles in the same directory (note: dpkg expects the name conffiles, plural); it records which configuration files to watch after installation so that locally modified configs are preserved when the package is removed.

#vi /opt/hadoop_2.2.0-1_amd64/DEBIAN/conffiles

Enter the following

/etc/hadoop/core-site.xml
/etc/hadoop/hdfs-site.xml
/etc/hadoop/mapred-site.xml
/etc/hadoop/yarn-site.xml
/etc/hadoop/hadoop-env.sh
/etc/hadoop/yarn-env.sh

Moving on, four more control scripts need to be written: postinst (run after installation), postrm (after removal), preinst (before installation) and prerm (before removal), all plain shell scripts. They are shown together.

#vi postinst
#------
mkdir -p /usr/etc
ln -s /etc/hadoop /usr/etc/hadoop
rm -f /etc/hadoop/hadoop
#------
#vi postrm
#------
/usr/sbin/userdel hdfs 2> /dev/null >/dev/null
/usr/sbin/userdel mapred 2> /dev/null >/dev/null
/usr/sbin/groupdel hadoop 2> /dev/null >dev/null
exit 0
#------
#vi preinst
#------
getent group hadoop 2>/dev/null >/dev/null || /usr/sbin/groupadd -g 123 -r hadoop
/usr/sbin/useradd --comment "Hadoop MapReduce" -u 202 --shell /bin/bash -M -r --groups hadoop --home /var/lib/hadoop/mapred mapred 2> /dev/null || :
/usr/sbin/useradd --comment "Hadoop HDFS" -u 201 --shell /bin/bash -M -r --groups hadoop --home /var/lib/hadoop/hdfs hdfs 2> /dev/null || :
#------
#vi prerm
#------
#no content needed, an empty file is fine
#------

That is basically it. Of course you still need to adjust the path settings inside the hadoop scripts to match the installed layout; that part is trivial, nothing more to say.
Then run in the shell

#cd /opt
#dpkg -b hadoop_2.2.0-1_amd64

You will then get hadoop_2.2.0-1_amd64.deb. Try installing it with dpkg -i (a quick check follows below). Off to cook dinner; next time, how to build an apt repository and rpm packages.
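
A quick sanity check of the result before pushing it into a repository, using standard dpkg options and the package name built above:

#!/bin/sh
# Inspect and install the freshly built package.
dpkg --info hadoop_2.2.0-1_amd64.deb          # check the control metadata
dpkg --contents hadoop_2.2.0-1_amd64.deb | head   # peek at the file list
dpkg -i hadoop_2.2.0-1_amd64.deb
dpkg -L hadoop | head                         # confirm where the files landed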

]]>
https://xianglei.tech/archives/xianglei/2014/01/213.html/feed 0
Testing a Hadoop Cluster on Heterogeneous ARM Hardware https://xianglei.tech/archives/xianglei/2013/12/215.html https://xianglei.tech/archives/xianglei/2013/12/215.html#respond Sat, 21 Dec 2013 16:34:10 +0000 http://xianglei.tech/?p=215 Back in March I did a single-node hadoop-on-arm test; recently I bought a new arm board and decided to string them together into a hadoop cluster. And because of product generations, the hardware is heterogeneous.

An earlier attempt: a hadoop server based on an arm single-board computer

The namenode is a first-generation cubieboard: single-core arm v7, 1 GB of RAM, 4 GB of onboard flash ROM.

The datanode is a cubietruck: dual-core armv7, 2 GB of RAM, 8 GB of onboard flash ROM, with an 80 GB 2.5-inch hard disk attached.

Both boards run ubuntu server.

The nn's OS lives on an SD card, and the nand partitions are formatted and used as storage; "nand" here means the onboard flash ROM.

The dn's OS is flashed directly onto the nand, with no SD card, and a hard disk mounted for storage.

How to install linux onto the nand is a topic for another post.

linaro@namenode:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/mmcblk0p2  1.8G  1.1G  600M  66% /
devtmpfs        408M  4.0K  408M   1% /dev
none            408M  128K  408M   1% /tmp
none             82M  164K   82M   1% /run
none            408M     0  408M   0% /var/tmp
none            5.0M     0  5.0M   0% /run/lock
none            408M     0  408M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/nand       3.8G   75M  3.5G   3% /opt
linaro@datanode-01:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       2.0G  1.3G  648M  67% /
devtmpfs        913M  4.0K  913M   1% /dev
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            183M  224K  183M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            913M     0  913M   0% /run/shm
none            100M     0  100M   0% /run/user
/dev/nandc      5.1G  139M  4.7G   3% /opt
/dev/sda1        74G  180M   70G   1% /data
linaro@datanode-01:~$

A quick dd test of storage performance: the nand numbers are barely worth showing; read/write on the flash ROM can only be described as dismal, a pitiful 5 MB/s writing and 7 MB/s reading, absurdly slow.

linaro@datanode-01:~$ sudo time dd if=/dev/zero of=/data/1GB bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 23.7274 s, 43.2 MB/s
0.28user 11.04system 0:23.73elapsed 47%CPU (0avgtext+0avgdata 776maxresident)k
8inputs+2000000outputs (0major+252minor)pagefaults 0swaps

If performance with a real disk attached is roughly normal, these boards can be cobbled together into a bargain-basement Hadoop cluster.

Write performance is not as fast as imagined, but for an ancient 80 GB, 5400 rpm 2.5-inch SATA drive, 43.2 MB/s is a decent result.

Read performance is unexpectedly good, a startling 338 MB/s (though most likely the 1 GB test file was still sitting in the page cache after the write, since the board has 2 GB of RAM).

linaro@datanode-01:~$ sudo time dd if=/data/1GB of=/dev/null bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 3.02673 s, 338 MB/s
0.19user 2.80system 0:03.03elapsed 98%CPU (0avgtext+0avgdata 776maxresident)k
0inputs+0outputs (0major+252minor)pagefaults 0swaps

 

If that holds up, there is no real problem using them as Hadoop servers. Hadoop is write-once, read-many anyway, so slower writes are tolerable as long as reads are fast enough; even serving HBase online queries would be passable.

Pictures or it didn't happen
The black board is the first-generation cubieboard, the red one is the cubietruck, with the 80 GB disk underneath the CT

Power and network

Disk hookup

CPU info: the datanode+tasktracker has a dual-core processor

The Namenode has a single-core processor

Namenode operating system and CPU architecture

Datanode operating system and CPU architecture

Pi smoke test; at least faster than my earlier single-board Hadoop test, and with the nand dropped from the storage path entirely it might be faster still.

On the namenode the nand is used as storage.

On the datanode a nand partition and the hard disk store data together.

Two tasktrackers

Two datanodes

Total capacity

Because arm currently only offers 32-bit CPUs, compute power is quite limited, but the good news is that this does not hurt disk performance. At the very least we can build an arm-based hadoop storage cluster for cold data and backups, or an HBASE cluster serving online queries. The main advantages are very low cost and easy maintenance.

Run the numbers: an arm board costs a few hundred RMB, and stripping away the dev-board extras (wifi, FireWire, HDMI, Bluetooth, the GPIO header, the SD slot) would make it cheaper still. A 1 TB 2.5-inch 7200 rpm SATA drive sells for under 500 RMB on taobao, so 1 TB of storage comes in under 1,000 RMB all told; a single 1 TB SAS drive already costs more than that.

An x86 server runs over 10,000 RMB with no disks; add six 2 TB SATA drives and even the cheapest build is around 20,000. The arm approach, 12 arm boards each with a 1 TB disk plus power supplies and a switch, is only around 12,000, cutting hardware acquisition cost by roughly 40%.

The even bigger saving is electricity. One arm board plus disk draws about 750 mA from a 12 V supply, roughly 9 W. Compare that with an x86 server: PC power supplies start at 450 W these days, and a Dell R720's PSU is rated at 750 W. Twelve arm boards with disks total only 108 W, roughly a seventh of the power, an electricity saving of about 85%!

              2U x86 server        ARM boards
Quantity      1                    12
Disks         2TB x 6 = 12TB       1TB x 12 = 12TB
Power         750W x 1 = 750W      9W x 12 = 108W

As for maintenance, since each arm board and its disk form a single unit, a failed disk just means swapping the whole unit; no need to shut down the cluster or hot-swap anything.

Still, as said before, arm's compute power is currently too weak for large-scale distributed computation, but for cold-data storage and backup or a small online hbase service it is more than enough. This should improve greatly once 64-bit arm chips ship in volume in 2014: today a 32-bit processor can only address 4 GB of memory, which rules out large clusters, but once 64-bit arrives everything gets better.

]]>
https://xianglei.tech/archives/xianglei/2013/12/215.html/feed 0