【Title】: Regex match with dataframe column values
【Posted】: 2020-03-15 19:22:26
【Question】:

I want to perform a lookup between a Map[String,List[scala.util.matching.Regex]] and a dataframe column. If any regex in a List[scala.util.matching.Regex] matches the dataframe column value, the lookup should return the corresponding key from the Map[String,List[scala.util.matching.Regex]].

Map[String,List[scala.util.matching.Regex]] = Map(m1 -> List(rule1, rule2), m2 -> List(rule3), m3 -> List(rule6))

I want to iterate over the regex lists and match them against the dataframe column value. It would be better if the regex matching could be done in parallel rather than sequentially.

dataframe


+------------------------+
|desc                    |
+------------------------+
|STRING MATCHES SSS rule1|
|STRING MATCHES SSS rule1|
|STRING MATCHES SSS rule1|
|STRING MATCHES SSS rule2|
|STRING MATCHES SSS rule2|
|STRING MATCHES SSS rule3|
|STRING MATCHES SSS rule3|
|STRING MATCHES SSS rule6|
+------------------------+

O/P:

+-------------------+------------------------+
|merchant           |desc                    |
+-------------------+------------------------+
|m1                 |STRING MATCHES SSS rule1|
|m1                 |STRING MATCHES SSS rule1|
|m1                 |STRING MATCHES SSS rule1|
|m1                 |STRING MATCHES SSS rule2|
|m1                 |STRING MATCHES SSS rule2|
|m2                 |STRING MATCHES SSS rule3|
|m2                 |STRING MATCHES SSS rule3|
|m3                 |STRING MATCHES SSS rule6|
+-------------------+------------------------+

【Comments】:

  • Please provide sample data and the expected output so the problem is clear
  • @Nikk, updated with the data and expected O/P
  • Thanks, I will check it and get back to you with a solution soon
  • Did it solve your problem?

Tags: scala apache-spark


【Solution 1】:

Here is another way, based on the DataFrame map function and the predefined rule set rules:

import spark.implicits._
import scala.util.matching.Regex

val df = Seq(
  "STRING MATCHES SSS rule1",
  "STRING MATCHES SSS rule1",
  "STRING MATCHES SSS rule1",
  "STRING MATCHES SSS rule2",
  "STRING MATCHES SSS rule2",
  "STRING MATCHES SSS rule3",
  "STRING MATCHES SSS rule3",
  "STRING MATCHES SSS rule6",
  "STRING MATCHES SSS ruleXXX"
).toDF("desc")

val rules = Map(
  "m1" -> List("rule1".r, "rule2".r), 
  "m2" -> List("rule3".r), 
  "m3" -> List("rule6".r)
)

df.map { r =>
  val desc = r.getString(0)
  // Find the first map entry whose regex list matches desc and take its key;
  // fall back to null when no rule matches.
  val merchant = rules.find(_._2.exists(_.findFirstIn(desc).isDefined)) match {
    case Some((m: String, _)) => m
    case None => null
  }

  (merchant, desc)
}.toDF("merchant", "desc").show(false)

Output:

+--------+--------------------------+
|merchant|desc                      |
+--------+--------------------------+
|m1      |STRING MATCHES SSS rule1  |
|m1      |STRING MATCHES SSS rule1  |
|m1      |STRING MATCHES SSS rule1  |
|m1      |STRING MATCHES SSS rule2  |
|m1      |STRING MATCHES SSS rule2  |
|m2      |STRING MATCHES SSS rule3  |
|m2      |STRING MATCHES SSS rule3  |
|m3      |STRING MATCHES SSS rule6  |
|null    |STRING MATCHES SSS ruleXXX|
+--------+--------------------------+

Explanation:

  • rules.find(... finds the key/value pair in rules

  • _._2.exists(... whose value (the list of regexes) contains a regex

  • _.findFirstIn(desc).isDefined that matches desc

  • case Some((m : String, _)) => m and extracts the key from that pair
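The lookup the bullets describe can also be sketched as a standalone Scala function, without Spark. The name lookupMerchant and the trimmed-down rules map here are illustrative, not from the answer:

```scala
import scala.util.matching.Regex

// Illustrative subset of the rule map used in the answer.
val rules: Map[String, List[Regex]] = Map(
  "m1" -> List("rule1".r, "rule2".r),
  "m2" -> List("rule3".r)
)

// Return the key of the first entry whose regex list matches desc, if any.
def lookupMerchant(desc: String): Option[String] =
  rules
    .find { case (_, regexes) => regexes.exists(_.findFirstIn(desc).isDefined) }
    .map { case (key, _) => key }

// lookupMerchant("STRING MATCHES SSS rule3") == Some("m2")
// lookupMerchant("no match here")            == None
```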

PS: I am not sure what you mean by "regex matching could be done in parallel rather than sequentially", since the map function in the above solution already executes in parallel. The level of parallelism depends on the chosen number of partitions. To add extra parallelism inside the map function, e.g. in the form of threads (or Scala Futures), would certainly complicate the code without improving performance. That is because spawning a large number of threads is more likely to create a CPU bottleneck than to speed up your program. Spark is an efficient distributed system, and there is no need to look for alternatives for parallel execution.
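For reference, a sketch of how the partition count mentioned above would be set. It assumes the df and rules values defined earlier and an active SparkSession with spark.implicits._ imported; the value 8 is an arbitrary example, not a recommendation:

```scala
// The partition count caps how many map tasks can run concurrently;
// tune it to the number of cores available in the cluster.
val result = df
  .repartition(8)
  .map { r =>
    val desc = r.getString(0)
    val merchant = rules
      .find(_._2.exists(_.findFirstIn(desc).isDefined))
      .map(_._1)
      .orNull
    (merchant, desc)
  }
  .toDF("merchant", "desc")
```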

【Comments】:

【Solution 2】:

You can declare a UDF like the one below; it will run in parallel and is fast. Based on my understanding of your question, the following is just a reference; you can use it as a starting point and design your UDF accordingly.


import org.apache.spark.sql.expressions.UserDefinedFunction
import org.apache.spark.sql.functions.{col, udf}

def RuleCheck: UserDefinedFunction = udf { (colmn: String) =>
  val rules: Map[String, List[String]] = Map(
    "Number"   -> List("[0-9]"),
    "Statment" -> List("[a-zA-Z]"),
    "Fruit"    -> List("apple", "banana", "orange"),
    "Country"  -> List("India", "US", "UK")
  )
  val out = scala.collection.mutable.Set[String]()

  rules.foreach { case (key, patterns) =>
    patterns.foreach { p =>
      // findFirstIn returns Some(...) when the pattern matches anywhere in the input
      if (p.r.findFirstIn(colmn).isDefined) {
        out += key
      }
    }
  }
  out.mkString(",")
}
    
df.show()

+---+--------------------+
| id|             comment|
+---+--------------------+
|  1|     I have 3 apples|
|  2|I like banana and...|
|  3|        I am from US|
|  4|          1932409243|
|  5|       I like orange|
|  6|         #%@#$@#%@#$|
+---+--------------------+
    
    
df.withColumn("Key", RuleCheck(col("comment"))).show(false)

+---+---------------------------------+----------------------+
|id |comment                          |Key                   |
+---+---------------------------------+----------------------+
|1  |I have 3 apples                  |Number,Fruit,Statment |
|2  |I like banana and I am from India|Country,Fruit,Statment|
|3  |I am from US                     |Country,Statment      |
|4  |1932409243                       |Number                |
|5  |I like orange                    |Fruit,Statment        |
|6  |#%@#$@#%@#$                      |                      |
+---+---------------------------------+----------------------+
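As a side note (my own sketch, not part of the answer above): the rule map can also be defined once outside the UDF and captured by its closure, which keeps the UDF body small and lets the same map be reused elsewhere. The name ruleCheckUdf and the trimmed rule map are illustrative assumptions:

```scala
import org.apache.spark.sql.functions.{col, udf}

// The rule map lives outside the UDF; Spark serializes the closure,
// so the map is shipped to executors automatically.
val rules: Map[String, List[String]] = Map(
  "Number" -> List("[0-9]"),
  "Fruit"  -> List("apple", "banana", "orange")
)

val ruleCheckUdf = udf { (text: String) =>
  // Keep the keys whose pattern list matches the input anywhere.
  rules.collect {
    case (key, patterns) if patterns.exists(_.r.findFirstIn(text).isDefined) => key
  }.mkString(",")
}

// usage: df.withColumn("Key", ruleCheckUdf(col("comment")))
```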
    

【Comments】:
