I have just tried out a basic code snippet for a pyspark streaming example (version 3.0.1) on my Lubuntu 20.04 LTS system, using python 3.9.x.
I opened a new jupyter notebook in Google Chrome and started with the following code (the part that does not throw an error yet):
# Import modules
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split
#%%
## SCRIPT
# Instantiate the new spark session
spark = SparkSession.builder.appName("StreamingDemo").getOrCreate()
# Call stream with sending messages to local host with standard port "9999"
# --> loads socket
lines = spark.readStream.format("socket").option("host", "local").option("port", 9999).load()
# Create dataframe
# NOTE on several methods employed here (hover over them for docs)
words = lines.select(explode(split(lines.value, " ")).alias("word"))
# Create word counter
wordCounts = words.groupBy("word").count()
# Print result
print(wordCounts.printSchema())
# Create output mode for the stream (hover over functions for docs)
om = wordCounts.writeStream.outputMode("complete")
# Create query for the output-mode
# NOTE on output format: can also be "json" if further processing is needed
query = om.format("console").start()
Console output:
[I 18:21:45.684 NotebookApp] Kernel restarted: f55c9433-3ae9-45f0-b34b-a1123e2899b0
[I 18:21:45.719 NotebookApp] Restoring connection for f55c9433-3ae9-45f0-b34b-a1123e2899b0:4417bfd825454f4790078827ccc529df
[I 18:21:45.720 NotebookApp] Replaying 3 buffered messages
[I 18:21:48.797 NotebookApp] Saving file at /Untitled.ipynb
20/12/20 18:21:57 WARN Utils: Your hostname, andylu-Lubuntu-PC resolves to a loopback address: 127.0.1.1; using 192.168.1.98 instead (on interface wlp3s0)
20/12/20 18:21:57 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/home/andylu/.pyenv/versions/3.9.0/lib/python3.9/site-packages/pyspark/jars/spark-unsafe_2.12-3.0.1.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
20/12/20 18:21:59 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/12/20 18:22:04 WARN TextSocketSourceProvider: The socket source should not be used for production applications! It does not support recovery.
20/12/20 18:22:07 WARN StreamingQueryManager: Temporary checkpoint location created which is deleted normally when the query didn't fail: /tmp/temporary-0843cc22-4f7c-4b2e-a6ef-3ba5aa16ec08. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
20/12/20 18:22:08 ERROR MicroBatchExecution: Query [id = c5d1875b-7c4e-4ff2-a922-af871b311812, runId = 79c0726b-4457-4c3d-b81f-04f36bc8eedd] terminated with error
java.net.UnknownHostException: local
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:609)
at java.base/java.net.Socket.connect(Socket.java:558)
at java.base/java.net.Socket.<init>(Socket.java:454)
at java.base/java.net.Socket.<init>(Socket.java:231)
at org.apache.spark.sql.execution.streaming.sources.TextSocketMicroBatchStream.initialize(TextSocketMicroBatchStream.scala:71)
at org.apache.spark.sql.execution.streaming.sources.TextSocketMicroBatchStream.planInputPartitions(TextSocketMicroBatchStream.scala:117)
at org.apache.spark.sql.execution.datasources.v2.MicroBatchScanExec.partitions$lzycompute(MicroBatchScanExec.scala:44)
at org.apache.spark.sql.execution.datasources.v2.MicroBatchScanExec.partitions(MicroBatchScanExec.scala:44)
at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar(DataSourceV2ScanExecBase.scala:61)
at org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanExecBase.supportsColumnar$(DataSourceV2ScanExecBase.scala:60)
at org.apache.spark.sql.execution.datasources.v2.MicroBatchScanExec.supportsColumnar(MicroBatchScanExec.scala:29)
at org.apache.spark.sql.execution.datasources.v2.DataSourceV2Strategy.apply(DataSourceV2Strategy.scala:91)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$1(QueryPlanner.scala:63)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:489)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:68)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:68)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:68)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:68)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$3(QueryPlanner.scala:78)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.$anonfun$foldLeft$1$adapted(TraversableOnce.scala:162)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at scala.collection.TraversableOnce.foldLeft(TraversableOnce.scala:162)
at scala.collection.TraversableOnce.foldLeft$(TraversableOnce.scala:160)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1429)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.$anonfun$plan$2(QueryPlanner.scala:75)
at scala.collection.Iterator$$anon$11.nextCur(Iterator.scala:484)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:490)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.SparkStrategies.plan(SparkStrategies.scala:68)
at org.apache.spark.sql.execution.QueryExecution$.createSparkPlan(QueryExecution.scala:330)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$sparkPlan$1(QueryExecution.scala:94)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:133)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:133)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:94)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:87)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executedPlan$1(QueryExecution.scala:107)
at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:133)
at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)
at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:133)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:107)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:100)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runBatch$14(MicroBatchExecution.scala:563)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runBatch(MicroBatchExecution.scala:553)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:223)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:352)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:350)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:69)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:191)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:57)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:185)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:334)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:245)
Then the problematic line is executed:
query.awaitTermination()
It throws the following exception:
StreamingQueryException Traceback (most recent call last)
<ipython-input-2-885fef5a9f37> in <module>
----> 1 query.awaitTermination()
~/.pyenv/versions/3.9.0/lib/python3.9/site-packages/pyspark/sql/streaming.py in awaitTermination(self, timeout)
101 return self._jsq.awaitTermination(int(timeout * 1000))
102 else:
--> 103 return self._jsq.awaitTermination()
104
105 @property
~/.pyenv/versions/3.9.0/lib/python3.9/site-packages/py4j/java_gateway.py in __call__(self, *args)
1302
1303 answer = self.gateway_client.send_command(command)
-> 1304 return_value = get_return_value(
1305 answer, self.gateway_client, self.target_id, self.name)
1306
~/.pyenv/versions/3.9.0/lib/python3.9/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
132 # Hide where the exception came from that shows a non-Pythonic
133 # JVM exception message.
--> 134 raise_from(converted)
135 else:
136 raise
~/.pyenv/versions/3.9.0/lib/python3.9/site-packages/pyspark/sql/utils.py in raise_from(e)
StreamingQueryException: local
=== Streaming Query ===
Identifier: [id = c5d1875b-7c4e-4ff2-a922-af871b311812, runId = 79c0726b-4457-4c3d-b81f-04f36bc8eedd]
Current Committed Offsets: {}
Current Available Offsets: {TextSocketV2[host: local, port: 9999]: -1}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
WriteToMicroBatchDataSource ConsoleWriter[numRows=20, truncate=true]
+- Aggregate [word#3], [word#3, count(1) AS count#7L]
+- Project [word#3]
+- Generate explode(split(value#0, , -1)), false, [word#3]
+- StreamingDataSourceV2Relation [value#0], org.apache.spark.sql.execution.streaming.sources.TextSocketTable$$anon$1@14d00623, TextSocketV2[host: local, port: 9999]
Actually, I should be able to feed this stream like so (opening another terminal instance):
nc -lk 9999
There it should then be possible to type in e.g. "Hello Andreas" and get the word-count output in the console of the jupyter notebook that runs the stream. However, I cannot find a way around this error.
EDIT, additional things I tried:
First, I changed the hostname from "local" to "localhost", since that seems to be the standard term.
Then, as suggested below by @Mazahir Hussain, I tried the following (replacing lines with wordCounts, since that is my goal):
query = wordCounts \
.writeStream \
.outputMode("append") \
.format("console") \
.option("checkpointLocation", "/tmp/dtn2/checkpoint")\
.start()
However, the "append" mode throws the following exception:
AnalysisException: Append output mode not supported when there are streaming aggregations on streaming DataFrames/DataSets without watermark;;
Aggregate [word#3], [word#3, count(1) AS count#7L]
+- Project [word#3]
+- Generate explode(split(value#0, , -1)), false, [word#3]
+- StreamingRelationV2 org.apache.spark.sql.execution.streaming.sources.TextSocketSourceProvider@167d70f6, socket, org.apache.spark.sql.execution.streaming.sources.TextSocketTable@695149a6, org.apache.spark.sql.util.CaseInsensitiveStringMap@b832311b, [value#0]
So I changed the mode from "append" to "complete" to avoid this error (see the short aside right below on how "append" could be made to work with a watermark).
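As an aside that I did not pursue further: "append" mode can be combined with streaming aggregations if there is an event-time column plus a watermark. A minimal sketch of what that could look like, assuming the socket source's includeTimestamp option and a windowed count (along the lines of the official windowed word-count example):
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split, window

spark = SparkSession.builder.appName("StreamingDemo").getOrCreate()  # reuses the session from above

# "includeTimestamp" adds an event-time column "timestamp" next to "value"
lines_ts = (spark.readStream.format("socket")
            .option("host", "localhost")
            .option("port", 9999)
            .option("includeTimestamp", True)
            .load())

words_ts = lines_ts.select(explode(split(lines_ts.value, " ")).alias("word"),
                           lines_ts.timestamp)

# The watermark plus the time window make the aggregation compatible with "append" mode
windowedCounts = (words_ts
                  .withWatermark("timestamp", "1 minute")
                  .groupBy(window(words_ts.timestamp, "1 minute"), words_ts.word)
                  .count())

query = (windowedCounts.writeStream
         .outputMode("append")
         .format("console")
         .start())
In "append" mode a window's counts are only printed once the watermark has passed the end of that window, which is why I stayed with "complete" for this simple demo.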
Nevertheless, when executing query.awaitTermination(), a different error is thrown:
StreamingQueryException: Connection refused (Connection refused)
=== Streaming Query ===
Identifier: [id = c5843d59-0f89-4974-a694-9f9ae36cf4fe, runId = 5d43c468-ce55-4f8c-b41a-d60f6e053ade]
Current Committed Offsets: {}
Current Available Offsets: {TextSocketV2[host: localhost, port: 9999]: -1}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
WriteToMicroBatchDataSource ConsoleWriter[numRows=20, truncate=true]
+- Aggregate [word#3], [word#3, count(1) AS count#7L]
+- Project [word#3]
+- Generate explode(split(value#0, , -1)), false, [word#3]
+- StreamingDataSourceV2Relation [value#0], org.apache.spark.sql.execution.streaming.sources.TextSocketTable$$anon$1@5914e8ff, TextSocketV2[host: localhost, port: 9999]
Finally, @Mazahir Hussain's suggestion to add .option("checkpointLocation", "/tmp/dtn2/checkpoint") was not the solution either.
Create a checkpoint directory in /tmp/ and then pass that path; for example, I created the directories "dtn2" and "checkpoint":
query = lines \
.writeStream \
.outputMode("append") \
.format("console") \
.option("checkpointLocation", "/tmp/dtn2/checkpoint")\
.start()
query.awaitTermination()
Note: you have to add a checkpoint location to your code.
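For completeness, that checkpoint directory can also be created straight from the notebook (the path is just the example from the answer above; any writable location works):
import os
os.makedirs("/tmp/dtn2/checkpoint", exist_ok=True)  # create the checkpoint directory referenced above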
First, open the code as a jupyter notebook from a new UNIX terminal:
jupyter notebook "scriptname.ipynb"
Then run the following code in it:
# Import modules
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split
#%%
## SCRIPT
# Instantiate the new spark session
spark = SparkSession.builder.appName("StreamingDemo").getOrCreate()
# * Call the stream that receives messages from localhost on the standard port "9999"
# --> loads socket
# NOTE on hostname: use "localhost", the standard name for working locally on your own machine ("local" apparently doesn't resolve correctly)
lines = spark.readStream.format("socket").option("host", "localhost").option("port", 9999).load()
# Create dataframe
# NOTE on several methods employed here (hover over them for docs)
words = lines.select(explode(split(lines.value, " ")).alias("word"))
# Create word counter
# NOTE: store in a new dataframe
wordCounts = words.groupBy("word").count()
# * Print the schema of the result
# NOTE: printSchema() prints directly and returns None, so no extra print() is needed
wordCounts.printSchema()
# NOTE on printing out the head: it doesn't work here, but throws the following AnalysisException:
# "method: AnalysisException: Queries with streaming sources must be executed with writeStream.start();;socket"
# --> this seems to function after initiating an actual streaming query
#print(wordCounts.head(5))
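As a small aside (not required for the solution): the commented-out head() call fails because a streaming DataFrame can only be consumed via writeStream. One way to peek at intermediate results is the "memory" sink, sketched here with a hypothetical table name and assuming the netcat session from the next step is already listening:
# Sketch: write the running counts into an in-memory table instead of the console
debug_query = (wordCounts.writeStream
               .outputMode("complete")
               .format("memory")
               .queryName("word_counts_debug")  # hypothetical table name
               .start())
# Once some data has arrived on the socket, the table can be queried like any other
spark.sql("SELECT * FROM word_counts_debug").show()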
Now open a UNIX terminal and start the netcat streaming connection like this:
nc -lk 9999
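In case nc is not installed, a rough Python stand-in (a hypothetical helper script, not part of the original setup) can play the same role; it listens on localhost:9999 and forwards each typed line to the Spark socket source once it connects:
# netcat_substitute.py - minimal replacement for "nc -lk 9999"
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("localhost", 9999))
    srv.listen(1)
    conn, addr = srv.accept()  # blocks until the Spark query connects as a client
    print(f"client connected from {addr}")
    with conn:
        while True:
            line = input()  # type words and press Enter, CTRL+C to quit
            conn.sendall((line + "\n").encode("utf-8"))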
Type in some words you want to stream, pressing Enter after each line, e.g.:
hallo world
blabla
Then go back to the jupyter notebook and run the last code snippet to start the query:
# * Create query for the stream * #
# NOTE on output format: can also be "json" or "memory" if further processing is needed
# NOTE on output modes "append" and "complete":
# - complete: the whole updated result table is emitted each batch, e.g. typing in "Hello Andreas" and then "Hello Rikkert" gives "Hello" a count of 2
# - append: only rows added since the last batch are emitted (for streaming aggregations this additionally requires a watermark)
query = wordCounts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
Now the other open terminal session, the one from which the jupyter notebook was launched, should update like this:
-------------------------------------------
Batch: 0
-------------------------------------------
+----+-----+
|word|count|
+----+-----+
+----+-----+
-------------------------------------------
Batch: 1
-------------------------------------------
+------+-----+
| word|count|
+------+-----+
| hello| 1|
|blabla| 1|
| world| 1|
+------+-----+
You can type in whatever you like, and the batch word counts will update accordingly in the terminal session of the jupyter notebook.
To end the process, press CTRL + C first in the netcat terminal session and then, if necessary, in the jupyter notebook session (an alternative sketch follows right below).
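As an alternative sketch: since a streaming query keeps running on background threads after start(), one could skip the blocking query.awaitTermination() call in the notebook and later shut everything down from a cell instead:
query.stop()  # stop this streaming query gracefully
spark.stop()  # optionally shut down the whole Spark session as well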
In the end it was just a matter of knowing when exactly to start the streaming via netcat, and that the jupyter notebook has to be launched from a separate UNIX terminal in order to see the interactive batch updates whenever words are typed into the netcat terminal session.
PS on the local hostname:
Using "local" instead of "localhost" throws the following exception, so make sure you name your host "localhost" in this context:
StreamingQueryException: local
=== Streaming Query ===
Identifier: [id = d4226889-efd8-4992-86e0-2064e7fd45ae, runId = 7b626410-d764-4b53-a8ad-1850b6f0ddd0]
Current Committed Offsets: {}
Current Available Offsets: {TextSocketV2[host: local, port: 9999]: -1}
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
WriteToMicroBatchDataSource ConsoleWriter[numRows=20, truncate=true]
+- Aggregate [word#3], [word#3, count(1) AS count#7L]
+- Project [word#3]
+- Generate explode(split(value#0, , -1)), false, [word#3]
+- StreamingDataSourceV2Relation [value#0], org.apache.spark.sql.execution.streaming.sources.TextSocketTable$$anon$1@262f8557, TextSocketV2[host: local, port: 9999]
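The difference can also be seen directly from Python (assuming a standard /etc/hosts that maps "localhost" but not "local"), mirroring the Java-side UnknownHostException:
import socket

print(socket.gethostbyname("localhost"))  # prints 127.0.0.1 on a typical setup
try:
    socket.gethostbyname("local")  # "local" is not a resolvable hostname here
except socket.gaierror as exc:
    print("cannot resolve 'local':", exc)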
Thanks for your suggestion, I tried it (see the edit of my original question), but unfortunately it was not the solution. Later on I found the cause of the problem and posted an answer; thanks nevertheless for your effort.