【Title】: Queueing with cflock on ColdFusion 9
【Posted】: 2010-06-23 16:30:30
【Question】:

Ever since we upgraded from ColdFusion 8 Enterprise to ColdFusion 9 Enterprise, we have been having problems with our event gateway.

We have an event gateway set up to maintain a connection to a third party. They send updates at least every 10 seconds, and sometimes several times per second.

We have a Java class configured as the event gateway listener, which pushes events and their data to a CFC function. Inside that function we use a named <cflock> to make sure requests are processed sequentially, and it is at this point that requests queue up for exclusive access to the named lock. The lock has a 30-second timeout.

I have done a lot of debugging in this function, and I have noticed a few things:

  • Requests queue up in front of the <cflock> tag waiting for the lock, and this queue can grow to more than 40 events
  • Once a request obtains the lock it is processed within 6 seconds; the average time from obtaining the lock to completing processing is actually 1.5 to 2 seconds

So the problem is that every now and then I get a lock timeout, and it happens right at the 30-second mark. First I log that the request is waiting for the lock. It looks like this:

"Information","Thread-23","06/23/10","15:45:18","APP1","F4B42A5A-F34D-C614-DE01B150E6906F78 (1277304318606) : PRE LOCK"

Then, further down the log, I see the same request:

"Error","Thread-23","06/23/10","15:45:48","APP1","F4B42A5A-F34D-C614-DE01B150E6906F78 (1277304348607) : LOCK ERROR: A timeout occurred while attempting to lock lock_ResponseDispatcher."

There are 30 seconds between them. At that point, the request and any event data associated with it are lost. Not good for me.

So I wanted to check whether the queue was being processed quickly enough. I'm not sure how <cflock> queues events - is there a hard limit?

In any case, on this particular run I saw:

  • The request entered the queue when there were already 6 requests in it, making it 7th in the queue

  • Over the following 30 seconds, around 17 requests were taken off the queue

  • Roughly the same number of requests were added to the queue

  • During that time the request in question was never processed, and it timed out after 30 seconds

I could hardly believe my eyes! It looks as though the <cflock> queue is processed not first-in, first-out (FIFO) but first-in, last-out (FILO)!

Is that even possible? Has anyone else seen this behaviour?

Many thanks to anyone with any ideas.

Ciarán

【Comments】:

    Tags: java coldfusion asynchronous locking


    【Solution 1】:

    I think the key here is that I am using asynchronous event gateways. In fact, after running some experiments, the cause of my problem seems obvious :)

    In the CF admin I had specified the number of threads available to process event gateway requests (see Event Gateways -> Settings). This setting is fixed at 1 in the CF Developer edition but can be increased in Enterprise. For this experiment I increased it to 5. You only see this strange behaviour once it is increased.

    My event gateway code is very simple: it just creates a UUID (so I can track the request in my logs) and then holds the lock for 5 seconds to simulate real processing. This sleep happens inside the cflock call, so only one thread can be processing at a time. We need this in the real code to avoid processing duplicates.

    Here is the full CFC:

    component {
    
        public void function onIncomingMessage (required struct msg) {
    
            var sys = createObject("java", "java.lang.System");
            var tag = createUUID();
            var logFile = "test\gatewaytest";
    
            writelog (file=logFile, text="#tag# - about to queue");
    
            try {
    
                lock name="myTestLock" timeout="120" {
                    writelog (file=logFile, text="#tag# - got lock");
    
                    thread action="sleep" duration="5000"; //ms 
                }
    
                writelog (file=logFile, text="#tag# - released lock");
    
            } catch (any e) {
                writelog (file=logFile, text="#tag# - ERROR - #e.message#");
            }
        }
    }
    

    Note the very long timeout value on the lock (2 minutes). This is to cope with the issues caused by the asynchronous processing of the event gateway.

    The event gateway is a simple built-in CFML type, with the ID "TestGW", linked to the CFC above.

    I set up a simple script to send events to the event gateway; here it is in full:

    <cfset msg = {mymessage = "hello gateway"} />
    <cfset sendGatewayMessage("TestGW", msg) />
    

    Scenario 1 - single thread:

    With the number of event gateway processing threads set to 1, when I hammer the gateway I see the following log output:

    "Information","Thread-17","06/25/10","10:32:09",,"50805BB4-1C23-9073-67A70A86CA6F8E54 - about to queue"
    "Information","Thread-17","06/25/10","10:32:09",,"50805BB4-1C23-9073-67A70A86CA6F8E54 - got lock"
    "Information","Thread-17","06/25/10","10:32:14",,"50805BB4-1C23-9073-67A70A86CA6F8E54 - released lock"
    "Information","Thread-17","06/25/10","10:32:14",,"50811F1A-1C23-9073-67AD3E9C0BF2000C - about to queue"
    "Information","Thread-17","06/25/10","10:32:14",,"50811F1A-1C23-9073-67AD3E9C0BF2000C - got lock"
    "Information","Thread-17","06/25/10","10:32:19",,"50811F1A-1C23-9073-67AD3E9C0BF2000C - released lock"
    "Information","Thread-17","06/25/10","10:32:19",,"5081E27F-1C23-9073-67B5D2EF6AED8426 - about to queue"
    "Information","Thread-17","06/25/10","10:32:19",,"5081E27F-1C23-9073-67B5D2EF6AED8426 - got lock"
    "Information","Thread-17","06/25/10","10:32:24",,"5081E27F-1C23-9073-67B5D2EF6AED8426 - released lock"
    "Information","Thread-17","06/25/10","10:32:24",,"5082A5E1-1C23-9073-674E9467F395686F - about to queue"
    "Information","Thread-17","06/25/10","10:32:24",,"5082A5E1-1C23-9073-674E9467F395686F - got lock"
    "Information","Thread-17","06/25/10","10:32:29",,"5082A5E1-1C23-9073-674E9467F395686F - released lock"
    

    The key thing to note here is that everything is single-threaded: events are queued one at a time, and everything happens in sequence.

    Scenario 2 - more threads:

    With the number of event gateway processing threads increased to 5, when I hammer the gateway I see the following log output:

    "Information","Thread-18","06/25/10","11:26:01",,"526CC05B-C9E1-FADE-73CE3426BC0A3F92 - about to queue"
    "Information","Thread-18","06/25/10","11:26:01",,"526CC05B-C9E1-FADE-73CE3426BC0A3F92 - got lock"
    "Information","Thread-27","06/25/10","11:26:01",,"526CD0EB-049E-D382-2C3A7E3C0DBF8ED3 - about to queue"
    "Information","Thread-21","06/25/10","11:26:02",,"526CDEED-C2B3-3C92-0F57CFA317AC02F8 - about to queue"
    "Information","Thread-20","06/25/10","11:26:02",,"526CEE25-F25B-890C-F7501B5489C6BB21 - about to queue"
    "Information","Thread-25","06/25/10","11:26:02",,"526CFD3C-EAFD-40E7-EBA2BE59B87D5936 - about to queue"
    "Information","Thread-24","06/25/10","11:26:03",,"526D0FC5-E5E2-642E-452636C8838ADE33 - about to queue"
    "Information","Thread-26","06/25/10","11:26:03",,"526D1096-C82E-535B-36D57D3A431D1436 - about to queue"
    "Information","Thread-23","06/25/10","11:26:03",,"526D1F9C-9A9C-FA84-E153A944123E77BE - about to queue"
    "Information","Thread-19","06/25/10","11:26:04",,"526D2EDC-EA54-4D83-3F6BB681A5CCAA89 - about to queue"
    "Information","Thread-22","06/25/10","11:26:04",,"526D3F09-073F-2B0C-E94652D1C95B09CB - about to queue"
    "Information","Thread-18","06/25/10","11:26:06",,"526CC05B-C9E1-FADE-73CE3426BC0A3F92 - released lock"
    "Information","Thread-22","06/25/10","11:26:06",,"526D3F09-073F-2B0C-E94652D1C95B09CB - got lock"
    "Information","Thread-22","06/25/10","11:26:11",,"526D3F09-073F-2B0C-E94652D1C95B09CB - released lock"
    "Information","Thread-27","06/25/10","11:26:11",,"526CD0EB-049E-D382-2C3A7E3C0DBF8ED3 - got lock"
    "Information","Thread-27","06/25/10","11:26:16",,"526CD0EB-049E-D382-2C3A7E3C0DBF8ED3 - released lock"
    "Information","Thread-19","06/25/10","11:26:16",,"526D2EDC-EA54-4D83-3F6BB681A5CCAA89 - got lock"
    "Information","Thread-19","06/25/10","11:26:21",,"526D2EDC-EA54-4D83-3F6BB681A5CCAA89 - released lock"
    

    Note in particular the request with UUID 526D3F09-073F-2B0C-E94652D1C95B09CB. It is the last request logged, so it sits at the back of the queue. Yet as soon as the lock becomes available it jumps in and grabs it - not the 526CD0EB-049E-D382-2C3A7E3C0DBF8ED3 request that arrived first.
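
    This "barging" behaviour is not unique to cflock: the JVM locks that CF runs on behave the same way unless fairness is explicitly requested. As an illustrative sketch (a hypothetical demo, not part of the gateway code), Java's `ReentrantLock` defaults to a non-fair policy in which an arriving thread can jump ahead of queued waiters; constructing it with `new ReentrantLock(true)` enforces FIFO hand-off:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {

    // Runs two threads against a FAIR lock and reports the order in which
    // they acquired it. With fairness, the order always matches arrival order.
    static List<String> acquisitionOrder() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(true); // true = fair (FIFO)
        List<String> order = new CopyOnWriteArrayList<>();

        lock.lock(); // hold the lock so both waiters must queue behind us
        Thread a = waiter("A", lock, order);
        a.start();
        while (lock.getQueueLength() < 1) Thread.sleep(1); // A is now queued
        Thread b = waiter("B", lock, order);
        b.start();
        while (lock.getQueueLength() < 2) Thread.sleep(1); // B queued behind A
        lock.unlock(); // fair policy hands the lock to A first, then B

        a.join();
        b.join();
        return order;
    }

    static Thread waiter(String name, ReentrantLock lock, List<String> order) {
        return new Thread(() -> {
            lock.lock();
            try {
                order.add(name); // record who got the lock, in what order
            } finally {
                lock.unlock();
            }
        });
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(acquisitionOrder()); // prints [A, B]
    }
}
```

    With the default non-fair constructor (`new ReentrantLock(false)`), the same harness may report [B, A], which matches the log above where the last-arriving request grabbed the lock first.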

    Conclusion:

    So when multiple threads are in play, we cannot guarantee the order in which threads waiting on a cflock are serviced when using event gateways. We need to make sure the lock's timeout value is high enough that, at busy times, the entire event queue can be processed before any one request exceeds the lock timeout.
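
    As a rough, back-of-the-envelope sanity check using the figures reported in the question (a queue of 40+ events and roughly 2 s of processing each, not measured data), a request that keeps losing the race must be able to wait for the entire queue to drain:

```java
public class TimeoutSizing {

    // Worst case under a non-FIFO lock: a request may have to wait for every
    // other queued request to be processed before it finally wins the lock.
    static double worstCaseWaitSeconds(int queueDepth, double secondsPerRequest) {
        return queueDepth * secondsPerRequest;
    }

    public static void main(String[] args) {
        // Figures reported in the question: queue of 40+ events, 1.5-2 s each.
        double worstCase = worstCaseWaitSeconds(40, 2.0);
        System.out.println(worstCase + " s worst-case wait vs 30 s lock timeout");
        // 80 s > 30 s, so intermittent 30 s timeouts are exactly what we'd expect.
    }
}
```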

    I suppose that is just what we get for using cflock with multi-threaded event gateways!

    I hope this helps anyone else who runs into this problem.

    Cheers, Ciarán.

    【Discussion】:

      【Solution 2】:

      I'm not sure how to solve your FIFO vs LIFO problem, but I can offer a suggestion about this:

      There are 30 seconds between them. At that point, the request and any event data associated with it are lost. Not good for me.

      The CFLock tag has an attribute called throwOnTimeout, which defaults to true. If you set it to false then, on a timeout, instead of an exception being thrown, processing simply skips the locked code block and carries on as normal. You can use that to your advantage, like so:

      <cfset made_it_through_lock = false />
      <cflock name="single_threaded_lock_name" throwOnTimeout="false">
          <!--- ... do stuff ... --->
          <cfset made_it_through_lock = true />
      </cflock>
      <cfif made_it_through_lock eq false>
          <!--- 
                log the event data that you don't want to lose, then abort, 
                setting the necessary http status code & headers
          --->
      </cfif>
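
      For comparison, the equivalent "skip and log on timeout" pattern in plain Java (a hypothetical sketch using java.util.concurrent, nothing CF-specific) is `ReentrantLock.tryLock` with a timeout, which returns false instead of throwing:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {

    // Returns true if the lock was acquired within the timeout, false if not -
    // the Java analogue of cflock with throwOnTimeout="false".
    static boolean tryAgainstBusyLock() throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        CountDownLatch held = new CountDownLatch(1);

        // Simulate another request holding the lock for longer than our timeout.
        Thread holder = new Thread(() -> {
            lock.lock();
            held.countDown();
            try {
                Thread.sleep(500);
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        held.await(); // make sure the lock really is taken before we try

        boolean acquired = lock.tryLock(100, TimeUnit.MILLISECONDS);
        if (acquired) {
            lock.unlock(); // "... do stuff ..." would go here in real code
        }
        holder.join();
        return acquired;
    }

    public static void main(String[] args) throws InterruptedException {
        if (!tryAgainstBusyLock()) {
            // No exception to catch: just log the event data and carry on.
            System.out.println("lock timed out; event data logged, not lost");
        }
    }
}
```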
      

      【Discussion】:

      • I'm actually catching the exception by wrapping the locked code in a try/catch. But +1 for the suggestion.
      • Yes, that's another option, but throwing and catching exceptions is expensive. If this is expected behaviour, you may get better performance by avoiding them. I wonder - could you distil the FIFO vs LIFO problem down to a simple example that does nothing but lock and time out, to show that it really is LIFO?
      • It isn't really expected behaviour that you're seeing there. I've been trying to think of a way to demonstrate the problem simply, though. I suspect the key factor is that the CFC containing this lock is invoked via an asynchronous event gateway. Calling the CFC normally just causes the requests to be queued by CF (and they show up in cfstat), whereas calls made via the event gateway don't even register in cfstat. I'll do some tinkering and get back to you.
      • I think I've figured it out - see my answer! Thanks.