【Question Title】: MongoDB Aggregation Lookup with Pipeline Doesn't Work
【Posted】: 2020-04-13 00:32:36
【Question】:

I have two collections. I am trying to add documents from Collection 2 to Collection 1 whenever number1 and number2 in Collection 2 fall within a range specified in Collection 1. FYI, the ObjectIds in Collection 1 and Collection 2 refer to two different items/products, so I cannot join the two collections on that id.

Sample document from Collection 1:

{'_id': ObjectId('4321'),
 'number1_lb': 61.205672407820025,
 'number1_ub': 61.24170844385606,
 'number2_lb': -149.75074963516136,
 'number2_ub': -149.71471359912533}

Sample document from Collection 2:

{'_id': ObjectId('1234'),
  'number1': 1.282298,
  'number2': 103.8475}

The output I want:

{'_id': ObjectId('4321'),
 'number1_lb': 61.205672407820025,
 'number1_ub': 61.24170844385606,
 'number2_lb': -149.75074963516136,
 'number2_ub': -149.71471359912533,
 'recs': [ObjectId('3456'), ObjectId('4567'),...]}

I thought a $lookup stage with a pipeline would work. My code is currently as follows:

 {"$lookup":{
        "from": "Collection 2",
        "let":{
            "number1_lb":"$number1_lb",
            "number1_ub":"$number1_ub",
            "number2_lb":"$number2_lb",
            "number2_ub":"$number2_ub"
        },
        "pipeline": [
            {"$match":
             {"$expr":
              {"$and":[
                  {"$gte":["$number1","$$number1_lb"]},
                  {"$gte":["$number2","$$number2_lb"]},
                  {"$lte":["$number1","$$number1_ub"]},
                  {"$lte":["$number2","$$number2_ub"]}
              ]}}}
        ],
        "as": "recs"
    }}

But running the above produces no output. Am I doing something wrong?
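When a pipeline-style $lookup returns nothing, one quick sanity check is to reproduce the $match condition in plain Python against the sample documents and confirm that any pair should match at all. This is a minimal sketch using the sample data above (no MongoDB connection needed):

```python
# Sample documents copied from the question (ids omitted for brevity)
coll1_doc = {
    "number1_lb": 61.205672407820025,
    "number1_ub": 61.24170844385606,
    "number2_lb": -149.75074963516136,
    "number2_ub": -149.71471359912533,
}
coll2_doc = {"number1": 1.282298, "number2": 103.8475}

def in_range(rec, bounds):
    """Plain-Python equivalent of the $expr condition in the $match stage."""
    return (bounds["number1_lb"] <= rec["number1"] <= bounds["number1_ub"]
            and bounds["number2_lb"] <= rec["number2"] <= bounds["number2_ub"])

# The sample Collection 2 document lies well outside the sample bounds,
# so the $lookup correctly returns an empty recs array for this pair.
print(in_range(coll2_doc, coll1_doc))
```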

【Comments】:

    Tags: mongodb pymongo aggregation


    【Solution 1】:

    I ran it and it seems to work fine; however, I had to adjust the input data in coll1, because the sample document didn't satisfy the $match criteria.

    from pymongo import MongoClient
    from bson.json_util import dumps
    
    db = MongoClient()["testdatabase"]
    # Data Setup
    db.coll1.replace_one({"_id": "4321"}, {"_id": "4321", "number1_lb": -61.205672407820025, "number1_ub": 61.24170844385606, "number2_lb": -149.75074963516136, "number2_ub": 149.71471359912533}, upsert=True)
    db.coll2.replace_one({"_id": "1234"}, {"_id": "1234", "number1": 1.282298, "number2": 103.8475}, upsert=True)
    # Run the aggregation
    results = db.coll1.aggregate([
        {"$lookup": {
            "from": "coll2",
            "let": {
                "number1_lb": "$number1_lb",
                "number1_ub": "$number1_ub",
                "number2_lb": "$number2_lb",
                "number2_ub": "$number2_ub"
            },
            "pipeline": [
                {"$match":
                    {"$expr":
                        {"$and": [
                            {"$gte": ["$number1", "$$number1_lb"]},
                            {"$gte": ["$number2", "$$number2_lb"]},
                            {"$lte": ["$number1", "$$number1_ub"]},
                            {"$lte": ["$number2", "$$number2_ub"]}
                        ]}}}
            ],
            "as": "recs"
        }}
    ])
    # pretty up the results
    print(dumps(results, indent=4))
    

    Which gives:

    [
        {
            "_id": "4321",
            "number1_lb": -61.205672407820025,
            "number1_ub": 61.24170844385606,
            "number2_lb": -149.75074963516136,
            "number2_ub": 149.71471359912533,
            "recs": [
                {
                    "_id": "1234",
                    "number1": 1.282298,
                    "number2": 103.8475
                }
            ]
        }
    ]
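    Note that the desired output in the question has recs as a list of ObjectIds, whereas the $lookup above embeds the full matched documents. One way to reduce them (a sketch, not part of the original answer) is an extra stage after the $lookup that uses the field-path expression "$recs._id", which on an array of documents yields the array of their _id values:

```python
# Extra stage appended after the $lookup, keeping only each matched _id,
# to produce the 'recs': [ObjectId(...), ...] shape asked for in the question.
id_only_stage = {"$addFields": {"recs": "$recs._id"}}

# What "$recs._id" evaluates to, demonstrated in plain Python
# on the recs array produced by the answer's aggregation:
recs = [{"_id": "1234", "number1": 1.282298, "number2": 103.8475}]
print([doc["_id"] for doc in recs])
```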
    

    【Discussion】:

    • You're right. My original code was correct; the bounds were just too small to capture anything. Thanks!
    【Solution 2】:

    You want to use $lookup together with $project:

       {
            $lookup: {
                from: "Collection2",
                localField: [field of Collection1 to join on],
                foreignField: [matching field of the foreign collection, here Collection2],
                as: "nameJoint"
            }
        },
        {$project: {
            "newFieldName": [expression to project]
        }},
    
    

    But to make a join between two documents, you need a field common to both of them. I'm not sure there is one in this case, or perhaps I've misunderstood the question.

    (A $lookup is basically noSQL's version of a SQL join.)
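    For reference, the equality (localField/foreignField) form of $lookup that this answer describes behaves like a left outer join on a single field. A plain-Python sketch of that behavior, using made-up example data:

```python
def lookup(local_docs, foreign_docs, local_field, foreign_field, as_name):
    """Plain-Python sketch of the localField/foreignField form of $lookup:
    each local document gains an array of all foreign documents whose
    foreign_field equals its local_field (empty array when none match)."""
    joined = []
    for doc in local_docs:
        matches = [f for f in foreign_docs
                   if f.get(foreign_field) == doc.get(local_field)]
        joined.append({**doc, as_name: matches})
    return joined

# Hypothetical data: one order matches a product, one does not
orders = [{"_id": 1, "sku": "a"}, {"_id": 2, "sku": "b"}]
products = [{"_id": 10, "sku": "a"}]
print(lookup(orders, products, "sku", "sku", "nameJoint"))
```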

    【Discussion】:
