I've been using Beautiful Soup to extract information from the http://slc.bioparadigms.org website,
but I'm only interested in the disease and the OMIM number, so for each SLC transporter I already have in a list, I want to extract those two fields. The problem is that both are associated with the class prt_col2, so if I search for that class I get many matches. How can I get just the disease? Also, sometimes there is no disease associated with an SLC transporter, or there is no OMIM number. How do I extract the information in those cases? I've put some screenshots below to show you what it looks like. Any help would be greatly appreciated! This is my first post here, so please forgive any mistakes or missing information. Thanks!
http://imgur.com/aTiGi84 and the other one is /L65HSym
So ideally the output would be, for example:
Transporter: SLC1A1
Disease: epilepsy
OMIM: 12345
Edit: my code so far:
import os
import re
from bs4 import BeautifulSoup as BS
import requests
import sys
import time

def hasNumbers(inputString):  # get transporter names which contain numbers
    return any(char.isdigit() for char in inputString)

def get_list(file):  # get a list of transporters
    transporter_list = []
    lines = [line.rstrip('\n') for line in open(file)]
    for line in lines:
        if 'SLC' in line and hasNumbers(line):
            get_SLC = line.split()
            if 'SLC' in get_SLC[0]:
                transporter_list.append(get_SLC[0])
    return transporter_list

def get_transporter_webinfo(transporter_list):
    output_Website = open("output_website.txt", "w")  # get the website content of all transporters
    for transporter in transporter_list:
        text = requests.get('http://slc.bioparadigms.org/protein?GeneName=' + transporter).text
        output_Website.write(text)  # output from the SLC tables website
        soup = BS(text, "lxml")
        disease = soup(text=re.compile('Disease'))
        characteristics = soup.find_all("span", class_="prt_col2")
        memo = soup.find_all("span", class_='expandable prt_col2')
        print(transporter, disease, characteristics[6], memo)
    output_Website.close()

def convert(html_file):
    file2 = open(html_file, 'r')
    clean_file = open('text_format_SLC', 'w')
    soup = BS(file2, 'lxml')
    clean_file.write(soup.get_text())
    clean_file.close()
    file2.close()

def main():
    start_time = time.time()
    os.chdir('/home/Programming/Fun stuff')
    sys.stdout = open("output_SLC.txt", "w")
    SLC_list = get_list("SLC.txt")
    get_transporter_webinfo(SLC_list)  # already have the website content, so a little redundant
    print("this took", time.time() - start_time, "seconds to run")
    convert("output_SLC.txt")
    sys.stdout.close()

if __name__ == "__main__":
    main()

Posted on 2017-08-21 03:19:09
No offense, but I don't want to read as large a chunk of code as the one you put in the question.
I'd say it can be simplified.
You get the complete list of links to the SLCs in SLCs = below. The next line shows how many there are, and the line after that shows, as an example, the href attribute contained in the last link.
In each SLC's page I look for the string 'Disease' and then, if it's there, I navigate to the nearby link. I find the OMIM in a similar way.
Note that I process only the first SLC.
>>> import requests
>>> import bs4
>>> main_url = 'http://slc.bioparadigms.org/'
>>> main_page = requests.get(main_url).content
>>> main_soup = bs4.BeautifulSoup(main_page, 'lxml')
>>> SLCs = main_soup.select('td.slct.tbl_cell.tbl_col1 a')
>>> len(SLCs)
418
>>> SLCs[-1].attrs['href']
'protein?GeneName=SLC52A3'
>>> stem_url = 'http://slc.bioparadigms.org/'
>>> for SLC in SLCs:
...     SLC_page = requests.get(stem_url + SLC.attrs['href']).content
...     SLC_soup = bs4.BeautifulSoup(SLC_page, 'lxml')
...     disease = SLC_soup.find_all(string='Disease: ')
...     if disease:
...         disease = disease[0]
...         diseases = disease.findParent().findNextSibling().text.strip()
...     else:
...         diseases = 'No diseases'
...     OMIM = SLC_soup.find_all(string='OMIM:')
...     if OMIM:
...         OMIM = OMIM[0]
...         number = OMIM.findParent().findNextSibling().text.strip()
...     else:
...         OMIM = 'No OMIM'
...         number = -1
...     SLC.text, number, diseases
...     break
...
('SLC1A1', '133550', "Huntington's disease, epilepsy, ischemia, Alzheimer's disease, Niemann-Pick disease, obsessive-compulsive disorder")

Source: https://stackoverflow.com/questions/45782335
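The label-then-sibling navigation in the loop above can be sketched as a small reusable lookup. The snippet below runs it against a tiny inline HTML fragment so it works without network access; the fragment's class names and layout are illustrative assumptions modeled on the screenshots, not the site's exact markup.

```python
import bs4

# Minimal inline HTML mimicking the assumed layout of a protein page: a label
# span followed by a sibling span holding the value. Class names and structure
# here are assumptions for illustration only.
SAMPLE = """
<div><span class="prt_col1">Disease: </span><span class="prt_col2">epilepsy, ischemia</span></div>
<div><span class="prt_col1">OMIM:</span><span class="prt_col2">133550</span></div>
"""

def extract_field(soup, label, default):
    # Find the exact text node for the label, then read the value from the
    # label span's next sibling tag -- the same navigation as the loop above.
    hit = soup.find(string=label)
    if hit is None:
        return default
    sibling = hit.find_parent().find_next_sibling()
    return sibling.text.strip() if sibling else default

soup = bs4.BeautifulSoup(SAMPLE, 'html.parser')
print(extract_field(soup, 'Disease: ', 'No diseases'))  # epilepsy, ischemia
print(extract_field(soup, 'OMIM:', '-1'))               # 133550
```

Because the lookup returns a default when the label is absent, it also covers the asker's case where a transporter has no disease or no OMIM number.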