【Posted】: 2017-08-20 12:50:08
【Question】:
I have been using Beautiful Soup to extract information from the website http://slc.bioparadigms.org,
but I am only interested in the disease and the OMIM number, so for each SLC transporter already in my list I want to extract just these two fields. The problem is that both are associated with the class prt_col2, so if I search for this class I get many hits. How can I get only the disease? Also, sometimes there is no disease associated with an SLC transporter, or sometimes there is no OMIM number. How can I extract the information in those cases? I have put some screenshots below to show you what it looks like. Any help would be greatly appreciated! This is my first post here, so please forgive any mistakes or missing information. Thanks!
http://imgur.com/aTiGi84 and the other one is /L65HSym
Ideally, the output would be, for example:
Transporter: SLC1A1
Disease: epilepsy
OMIM: 12345
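One way to get only the disease and OMIM values is to search for the label text first and then take the next prt_col2 span, rather than collecting every prt_col2 on the page. The sketch below runs against a hypothetical HTML fragment that approximates the layout visible in the screenshots (a label span followed by a prt_col2 value span); the real markup on slc.bioparadigms.org may differ. This approach also degrades gracefully when a transporter has no disease or no OMIM entry:

```python
from bs4 import BeautifulSoup

# Hypothetical fragment approximating the page structure seen in the
# screenshots: each row is a label span followed by a prt_col2 value span.
html = """
<div>
  <span class="prt_col1">Disease:</span>
  <span class="prt_col2">Epilepsy</span>
  <span class="prt_col1">OMIM:</span>
  <span class="prt_col2">12345</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

def field(soup, label):
    """Return the prt_col2 value that follows the given label, or None."""
    # Find the span whose text starts with the label (e.g. "Disease:").
    tag = soup.find("span", string=lambda s: s and s.strip().startswith(label))
    if tag is None:          # label absent on this transporter's page
        return None
    value = tag.find_next("span", class_="prt_col2")
    return value.get_text(strip=True) if value else None

print(field(soup, "Disease"))   # Epilepsy
print(field(soup, "OMIM"))      # 12345
print(field(soup, "Sequence"))  # None (label not present)
```

Because the lookup is keyed on the label rather than on a positional index, a missing disease or OMIM simply yields None instead of shifting all the other prt_col2 matches.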
Edit: the code I have so far:
import os
import re
from bs4 import BeautifulSoup as BS
import requests
import sys
import time

def hasNumbers(inputString): #get transporter names which contain numbers
    return any(char.isdigit() for char in inputString)

def get_list(file): #get a list of transporters
    transporter_list=[]
    lines = [line.rstrip('\n') for line in open(file)]
    for line in lines:
        if 'SLC' in line and hasNumbers(line) == True:
            get_SLC=line.split()
            if 'SLC' in get_SLC[0]:
                transporter_list.append(get_SLC[0])
    return transporter_list

def get_transporter_webinfo(transporter_list):
    output_Website=open("output_website.txt", "w") # get the website content of all transporters
    for transporter in transporter_list:
        text = requests.get('http://slc.bioparadigms.org/protein?GeneName=' + transporter).text
        output_Website.write(text) #output from the SLC tables website
        soup=BS(text, "lxml")
        disease = soup(text=re.compile('Disease'))
        characteristics=soup.find_all("span", class_="prt_col2")
        memo=soup.find_all("span", class_='expandable prt_col2')
        print(transporter,disease,characteristics[6],memo)

def convert(html_file):
    file2= open(html_file, 'r')
    clean_file= open('text_format_SLC','w')
    soup=BS(file2,'lxml')
    clean_file.write(soup.get_text())
    clean_file.close()

def main():
    start_time=time.time()
    os.chdir('/home/Programming/Fun stuff')
    sys.stdout= open("output_SLC.txt","w")
    SLC_list=get_list("SLC.txt")
    get_transporter_webinfo(SLC_list) #already have the website content so little redundant
    print("this took",time.time() - start_time, "seconds to run")
    convert("output_SLC.txt")
    sys.stdout.close()

if __name__ == "__main__":
    main()
【Comments】:
Tags: python-3.x beautifulsoup web