Python Spark:
I. Install Scala:
1. Download Scala: wget http://www.scala-lang.org/files/archive/scala-2.11.6.tgz
2. Extract Scala
Extract scala-2.11.6.tgz into the scala-2.11.6 directory: tar xvf scala-2.11.6.tgz
Move scala-2.11.6 to /usr/local/scala: mv scala-2.11.6 /usr/local/scala
3. Set the Scala environment variables:
vi /etc/profile:
#SCALA_PATH-----------------------
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
source /etc/profile
II. Install Spark:
1. Install Spark:
Download Spark: wget https://www.apache.org/dyn/closer.lua/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz
Extract Spark into the spark-2.3.1-bin-hadoop2.7 directory: tar zxf spark-2.3.1-bin-hadoop2.7.tgz
Move spark-2.3.1-bin-hadoop2.7 to /usr/local/spark: mv spark-2.3.1-bin-hadoop2.7 /usr/local/spark
2. Set the environment variables:
vi /etc/profile:
#SPARK_PATH-----------------------
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
source /etc/profile
III. Start the interactive pyspark shell:
1. Start pyspark
Enter the interactive Spark shell: pyspark
2. Exit pyspark
Leave the interactive pyspark shell: exit()
(To fix "NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable":)
vi /usr/local/spark/conf/spark-env.sh:
Add: export LD_LIBRARY_PATH="/usr/local/hadoop/lib/native/"
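Once the shell is up, a quick sanity check confirms that the automatically created SparkContext (sc) works. This is a minimal sketch added for illustration, not part of the original steps:
rdd = sc.parallelize(range(100))   # distribute the numbers 0..99 across the default partitions
rdd.sum()                          # expected result: 4950
sc.version                         # shows the running Spark version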
IV. Reduce the pyspark log output
1. Copy the log4j template file
Change to the Spark configuration directory: cd /usr/local/spark/conf
Copy the log4j template to log4j.properties: cp log4j.properties.template log4j.properties
2. Configure log4j
vi log4j.properties:
Change INFO to WARN, i.e. log4j.rootCategory=INFO, console becomes log4j.rootCategory=WARN, console
3. Start pyspark again
pyspark
(Much less output is shown now; only the important messages remain.)
V. Create a test text file:
1. Copy LICENSE.txt
Create the local directory first: mkdir -p ~/wordcount/input
Copy LICENSE.txt: cp /usr/local/hadoop/LICENSE.txt ~/wordcount/input/
ll ~/wordcount/input
2. Start all the virtual servers
3. Log in to the master VM and start the Hadoop Multi-Node Cluster
Start the Hadoop Multi-Node Cluster: start-all.sh
4. Upload the test file to an HDFS directory
Create the directory in HDFS: hadoop fs -mkdir -p /user/root/wordcount/input
Change to the ~/wordcount/input data directory: cd ~/wordcount/input
Upload the text file to HDFS: hadoop fs -copyFromLocal LICENSE.txt /user/root/wordcount/input
List the HDFS files: hadoop fs -ls /user/root/wordcount/input
VI. Run pyspark locally
1. Start pyspark in local mode
Run pyspark locally: pyspark --master local[4]
2. Check the current run mode: sc.master
3. Read a local file
Read a local file: textFile=sc.textFile("file:/usr/local/spark/README.md")
Show the number of items: textFile.count()
4. Read an HDFS file
Read an HDFS file: textFile=sc.textFile("hdfs://master:9000/user/root/wordcount/input/LICENSE.txt")
Show the number of items: textFile.count()
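Since the test directory is named wordcount, a classic word-count sketch fits naturally here and can be run in the same shell before exiting. The RDD calls are standard pyspark API; the sketch itself is an illustration added to the original notes:
counts = (textFile.flatMap(lambda line: line.split())  # split each line into words
                  .map(lambda word: (word, 1))         # pair every word with a count of 1
                  .reduceByKey(lambda a, b: a + b))    # sum the counts per word
counts.take(5)                                         # inspect a few (word, count) pairs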
5. Exit pyspark: exit()
VII. Run pyspark on Hadoop YARN
(Preliminary: fix the "Neither spark.yarn.jars nor spark.yarn.archive is set" message that appears when Spark runs:)
Create a directory in HDFS: hadoop fs -mkdir -p /user/root/hadoop/spark_jars
Upload Spark's jars: hadoop fs -put /usr/local/spark/jars/* /user/root/hadoop/spark_jars/
Edit spark-defaults.conf and add: spark.yarn.jars=hdfs://master:9000/user/root/hadoop/spark_jars/*
1. Run pyspark on Hadoop YARN
Run pyspark on Hadoop YARN: HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop pyspark --master yarn --deploy-mode client
Where:
HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop sets the Hadoop configuration directory
pyspark is the program to run
--master yarn --deploy-mode client selects the YARN-client run mode
2. Check the current run mode
sc.master
3. Read an HDFS file
textFile=sc.textFile("hdfs://master:9000/user/root/wordcount/input/LICENSE.txt")
textFile.count()
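The same steps also work outside the interactive shell. Below is a minimal standalone-script sketch (the file name wordcount.py and the script itself are illustrative additions, not part of the original notes), submitted to YARN with spark-submit:
# wordcount.py - hypothetical file name, minimal sketch
from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("wordcount")
sc = SparkContext(conf=conf)   # the master is taken from the spark-submit command line

textFile = sc.textFile("hdfs://master:9000/user/root/wordcount/input/LICENSE.txt")
print(textFile.count())        # same count as in the interactive session

sc.stop()

# Submit from the shell (not inside the script):
# HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop spark-submit --master yarn --deploy-mode client wordcount.py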
VIII. Set up a Spark Standalone Cluster
1. Edit spark-env.sh: vi /usr/local/spark/conf/spark-env.sh
Add:
export SPARK_MASTER_IP=master
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=512m
export SPARK_WORKER_INSTANCES=4
Where:
export SPARK_MASTER_IP=master sets the master's IP address or hostname
export SPARK_WORKER_CORES=1 sets the number of CPU cores each worker uses
export SPARK_WORKER_MEMORY=512m sets the memory each worker uses
export SPARK_WORKER_INSTANCES=4 sets the number of worker instances per node
(With the three slave nodes configured below, this yields 3 x 4 = 12 workers in total, using 12 cores and 6 GB of memory.)
2. Copy the Spark installation from master to data1
ssh data1
mkdir /usr/local/spark
chown root:root /usr/local/spark
exit
scp -r /usr/local/spark root@data1:/usr/local
3. Copy the Spark installation from master to data2
ssh data2
mkdir /usr/local/spark
chown root:root /usr/local/spark
exit
scp -r /usr/local/spark root@data2:/usr/local
4. Copy the Spark installation from master to data3
ssh data3
mkdir /usr/local/spark
chown root:root /usr/local/spark
exit
scp -r /usr/local/spark root@data3:/usr/local
5. Edit the slaves file
Edit the slaves file: vi /usr/local/spark/conf/slaves
Enter:
data1
data2
data3
IX. Run pyspark on Spark Standalone
1. Start the Spark Standalone Cluster
Start the Spark Standalone Cluster: /usr/local/spark/sbin/start-all.sh
Or start the master and the slaves separately:
/usr/local/spark/sbin/start-master.sh
/usr/local/spark/sbin/start-slaves.sh
2. Run pyspark on Spark Standalone
Run pyspark on Spark Standalone: pyspark --master spark://master:7077 --num-executors 1 --total-executor-cores 3 --executor-memory 512m
3. Check the current run mode
sc.master
4. Read a local file
textFile=sc.textFile("file:/usr/local/spark/README.md")
textFile.count()
5. Read an HDFS file
textFile=sc.textFile("hdfs://master:9000/user/root/wordcount/input/LICENSE.txt")
textFile.count()
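Before stopping the cluster, the same shell can show how the work is split up. This is an illustrative sketch using standard RDD API, not part of the original notes:
sc.master                      # expected: spark://master:7077
textFile.getNumPartitions()    # number of partitions the HDFS file was split into
sc.defaultParallelism          # typically matches the cores granted by --total-executor-cores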
6. Stop the Spark Standalone Cluster
/usr/local/spark/sbin/stop-all.sh