I'm new to Apache Spark and Scala (and a beginner with Hadoop). I worked through the Spark SQL tutorial at https://spark.apache.org/docs/latest/sql-programming-guide.html, and I'm now trying to run a simple query against a plain CSV file to benchmark its performance on my current cluster.
I use the data from https://s3.amazonaws.com/hw-sandbox/tutorial1/NYSE-2000-2001.tsv.gz, converted it to CSV, and copy/pasted the rows to make the file 10 times larger.
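For context, the conversion was roughly along the lines of the sketch below (a sketch only: the file names are placeholders, the archive is assumed to be already decompressed, and the ';' delimiter is chosen to match the split I use further down):

    // Preprocessing sketch (assumed file names; ';' matches the split used below).
    import scala.io.Source
    import java.io.PrintWriter

    val tsvLines = Source.fromFile("NYSE-2000-2001.tsv").getLines().toList
    // Replace tabs with semicolons to get a CSV-style file.
    val csvLines = tsvLines.map(_.split("\t").mkString(";"))
    val out = new PrintWriter("input.csv")
    // Write the data 10 times over to get a ~10x larger benchmark file.
    for (_ <- 1 to 10) csvLines.foreach(line => out.println(line))
    out.close()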
I load it into Spark using Scala:
    // sc is an existing SparkContext.
    val sqlContext = new org.apache.spark.sql.SQLContext(sc)

    // createSchemaRDD is used to implicitly convert an RDD to a SchemaRDD.
    import sqlContext.createSchemaRDD
I define a case class for the rows:
    case class datum(
      exchange: String,
      stock_symbol: String,
      date: String,
      stock_price_open: Double,
      stock_price_high: Double,
      stock_price_low: Double,
      stock_price_close: Double,
      stock_volume: String,
      stock_price_adj_close: Double)
Then I read in the data:
    val data = sc.textFile("input.csv")
      .map(_.split(";"))
      .filter(line => "exchange" != "exchange") // intended to drop the header row
      .map(p => datum(
        p(0).trim.toString, p(1).trim.toString, p(2).trim.toString,
        p(3).trim.toDouble, p(4).trim.toDouble, p(5).trim.toDouble,
        p(6).trim.toDouble, p(7).trim.toString, p(8).trim.toDouble))
I register it as a table:
    data.registerAsTable("data")
I define the query (list all rows whose stock_symbol is 'IBM'):
    val IBMs = sqlContext.sql("SELECT * FROM data WHERE stock_symbol = 'IBM'")
Then I run a count so that the query actually executes:
    IBMs.count()
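(As a side note, a quick way to eyeball what the query returns would be something like this sketch:)

    // Grab a handful of matching rows and print them (for inspection only).
    IBMs.take(5).foreach(println)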
The query runs fine, but it returns res: 0 instead of 5000 (which is what Hive with MapReduce returns for the same query).
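One way I can think of to narrow this down is to count after each transformation and see which step the rows disappear at (a debugging sketch only):

    // Count survivors at each stage to find where rows are lost.
    val raw = sc.textFile("input.csv")
    raw.count()        // total lines in the file
    val split = raw.map(_.split(";"))
    split.count()      // should match the line count
    val filtered = split.filter(line => "exchange" != "exchange")
    filtered.count()   // rows surviving the header filter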