[Posted]: 2015-04-27 10:14:56
[Question]:
I'm importing a CSV list of cities into my Django app. I'm new to Django and Python. The import starts off reasonably fast: the first 25,000 rows take about 5 minutes, but the next 25,000 take 2 hours. I stopped the import, restarted it from where it left off, and the next 25,000 took about 4 minutes again. Clearly I'm doing something wrong, because each insert gets slower and slower.
Any help would be great. I'm doing this mainly to learn, not just to get the data in; right now importing straight into PostgreSQL would be faster and would let me get on with my project, but I want to know what I'm doing wrong so I can get better at Django/Python.
tia
import csv

from myapp import Country, State, City

def add_country(isocode, name):
    c = Country.objects.get_or_create(name=name.strip().replace('"', ''), isocode=isocode.strip())[0]
    return c

def add_state(country, isocode, name, statetype):
    country_model = Country.objects.get(isocode=country.strip().lower())
    s = State.objects.get_or_create(name=name.strip().replace('"', ''), isocode=isocode.strip().lower().replace('"', ''), country=country_model, statetype=statetype.strip().replace('"', ''))[0]
    return s

def add_city(country, state, name):
    country_model = Country.objects.get(isocode=country.strip().lower().replace('"', ''))
    try:
        state_model = State.objects.get(name=state.strip().replace('"', ''), country=country_model)
    except State.DoesNotExist:
        state_model = None
    ci = City.objects.get_or_create(name=name.strip().replace('"', ''), state=state_model, postcode='')[0]
    return ci

with open('country.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_country(counrow[0], counrow[1])

with open('state.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_state(counrow[0], counrow[1], counrow[2], counrow[3])

with open('city1.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_city(counrow[0], counrow[1], counrow[2])

with open('city2.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_city(counrow[0], counrow[1], counrow[2])

with open('city3.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_city(counrow[0], counrow[1], counrow[2])
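A note on the cost of this approach: `get_or_create` does a SELECT before every INSERT, and `add_state`/`add_city` additionally run `Country.objects.get` (and `State.objects.get`) once per CSV row. Since the country and state tables are small, those lookups can be cached in a plain dict. A minimal, runnable sketch of the caching pattern, where `fetch_country` is a hypothetical stand-in for the real ORM call:

```python
# Sketch: cache small lookup tables in a dict so each CSV row does a dict
# lookup instead of a database round-trip. `fetch_country` is a stand-in
# for Country.objects.get(isocode=...), not real ORM code.
lookup_calls = 0

def fetch_country(isocode):
    """Pretend database query -- one 'round-trip' per call."""
    global lookup_calls
    lookup_calls += 1
    return {"isocode": isocode}

country_cache = {}

def get_country_cached(isocode):
    key = isocode.strip().lower()
    if key not in country_cache:
        country_cache[key] = fetch_country(key)
    return country_cache[key]

# Five rows but only two distinct countries -> only two "queries".
for raw in ["us", "US ", "de", "us", " DE"]:
    get_country_cached(raw)

print(lookup_calls)  # 2
```

The same dict-cache works for states keyed on `(country_isocode, state_name)`; with a few hundred countries and a few thousand states, the whole cache fits comfortably in memory.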
Update:
So I changed the code to use bulk inserts. The first set of cities now takes just over two minutes, but the second takes 10 minutes, and hours later the third set still hasn't finished. There must be some garbage-collection process or something I'm missing, because I even swapped the files around and each one takes the same time on its first run.
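One common cause of this exact symptom, offered as a hedged guess rather than a certain diagnosis: when `DEBUG = True` in `settings.py`, Django appends every executed SQL statement to `django.db.connection.queries`, so a long-running import accumulates an ever-growing in-memory log and slows down, while restarting the script resets the log and makes it fast again. Either run the import with `DEBUG = False`, or clear the log periodically. A Django-dependent sketch (not runnable standalone; `myreader` and `add_city` as in the loops below):

```
from django import db

for i, counrow in enumerate(myreader):
    add_city(counrow[0], counrow[1], counrow[2], adminuser, adminuser, city_list)
    if i % 10000 == 0:
        db.reset_queries()  # drop the SQL log that DEBUG = True keeps growing
```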
The new code looks like this:
import csv
import time

def add_country(isocode, name, created_by, changed_by, country_list):
    country_list.append(Country(name=name.strip().replace('"', ''), isocode=isocode.strip()))

def add_state(country, isocode, name, statetype, created_by, changed_by, state_list):
    country_model = Country.objects.get(isocode=country.strip().lower())
    state_list.append(State(name=name.strip().replace('"', ''), isocode=isocode.strip().lower().replace('"', ''), country=country_model, statetype=statetype.strip().replace('"', '')))

def add_city(country, state, name, created_by, changed_by, city_list):
    country_model = Country.objects.get(isocode=country.strip().lower().replace('"', ''))
    try:
        state_model = State.objects.get(name=state.strip().replace('"', ''), country=country_model)
    except State.DoesNotExist:
        state_model = None
    city_list.append(City(name=name.strip().replace('"', ''), state=state_model, postcode=''))

country_list = []
state_list = []
city_list = []

print "Countries"
print time.strftime("%H:%M:%S")
with open('country.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_country(counrow[0], counrow[1], adminuser, adminuser, country_list)
Country.objects.bulk_create(country_list)

print "States"
print time.strftime("%H:%M:%S")
with open('state.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_state(counrow[0], counrow[1], counrow[2], counrow[3], adminuser, adminuser, state_list)
State.objects.bulk_create(state_list)

print "Cities 1"
print time.strftime("%H:%M:%S")
with open('city1.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_city(counrow[0], counrow[1], counrow[2], adminuser, adminuser, city_list)
City.objects.bulk_create(city_list)

print "Cities 2"
print time.strftime("%H:%M:%S")
city_list = []
with open('city2.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_city(counrow[0], counrow[1], counrow[2], adminuser, adminuser, city_list)
City.objects.bulk_create(city_list)

print "Cities 3"
print time.strftime("%H:%M:%S")
city_list = []
with open('city3.csv', 'rb') as csvfile:
    myreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for counrow in myreader:
        add_city(counrow[0], counrow[1], counrow[2], adminuser, adminuser, city_list)
City.objects.bulk_create(city_list)
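One thing worth knowing about the code above: `bulk_create` collapses the per-row INSERTs into a handful of statements, but handing it a very large list in a single call can still be heavy. Django's `bulk_create` accepts a `batch_size` argument, e.g. `City.objects.bulk_create(city_list, batch_size=500)`, which splits the list into fixed-size slices. A runnable sketch of the slicing it performs, with strings standing in for `City` instances:

```python
# Sketch of the chunking that bulk_create's batch_size performs: split a
# large list into fixed-size slices and insert one slice per statement.
def chunks(items, size):
    for i in range(0, len(items), size):
        yield items[i:i + size]

city_list = ["city%d" % i for i in range(2500)]  # stand-ins for City objects
batch_sizes = []

for batch in chunks(city_list, 500):
    # In the real import this would be City.objects.bulk_create(batch)
    batch_sizes.append(len(batch))

print(batch_sizes)  # [500, 500, 500, 500, 500]
```

Note also that the per-row `Country.objects.get`/`State.objects.get` calls inside `add_state` and `add_city` still run one query per CSV row even in this bulk version; caching those small tables in a dict removes that remaining per-row cost.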
[Comments]:
- Well, get_or_create does a search first and then an insert, so obviously the import slows down as more rows go in.
- So after all the changes, the final file takes 45 minutes: the first takes 4 minutes, the second 12, and the last 45. I pulled the city imports out and ran them in separate Python files, and each takes 3 minutes on its own. There must be something I am (or am not) doing here, because the individual imports run fast, but running them all in one file does not. As I said, this really is a learning exercise for me, and performance matters to all of that learning, so any tips and information are much appreciated.
- Is the edited code exactly the same as the code you actually ran?
- Please fix the indentation. Look at the country=[] line in the function add_city(...); that line, along with the rest of the code after it, should not be part of add_city().
Tags: python django postgresql csv postgresql-copy