I'm really new to the semantic web. I want to extract the address IDs (the ones containing 'ACT') from this web page, store that information under its links in an RDF structure, and then save it to a database for future use.
Here is my code:
import requests
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import re
url='http://gnafld.net/address/?page=7&per_page=10'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
print(soup)
After running the code, I got the following output:
@prefix ldp: <http://www.w3.org/ns/ldp#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix reg: <http://purl.org/linked-data/registry#> .
@prefix xhv: <https://www.w3.org/1999/xhtml/vocab#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

<http://gnafld.net/address/GAACT714846009> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846009"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846010> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846010"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846013> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846013"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846014> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846014"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846015> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846015"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846016> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846016"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846017> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846017"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846018> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846018"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846019> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846019"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/GAACT714846020> a <http://gnafld.net/def/gnaf#Address> ;
    rdfs:label "Address ID: GAACT714846020"^^xsd:string ;
    reg:register <http://gnafld.net/address/?per_page=10&page=7> .

<http://gnafld.net/address/> a reg:Register ;
    rdfs:label "Address Register"^^xsd:string ;
    reg:containedItemClass <http://gnafld.net/def/gnaf#Address> .

<http://gnafld.net/address/?per_page=10&page=7> a ldp:Page ;
    ldp:pageOf <http://gnafld.net/address/> ;
    xhv:first <http://gnafld.net/address/?per_page=10&page=1> ;
    xhv:last <http://gnafld.net/address/?per_page=10&page=1450001> ;
    xhv:next <http://gnafld.net/address/?per_page=10&page=8> ;
    xhv:prev <http://gnafld.net/address/?per_page=10&page=6> .
How can I extract the address IDs containing 'ACT' from this? I know that if I want to use BeautifulSoup I need HTML, but the page returns Turtle format. Or how can I convert the Turtle format to HTML (for example, by changing some parameter in requests.get() or BeautifulSoup())? I'm really stuck. Can anyone help me? Thanks in advance.
Posted on 2018-05-03 07:10:41
You can use a regular expression like this:
import re
import requests
url = 'http://gnafld.net/address/?page=7&per_page=10'
response = requests.get(url)
response.raise_for_status()
results = re.findall('\"Address ID: (GAACT[0-9]+)\"', response.text)结果列表将包含ids。如果你不想让GA出现在id的开头,你可以把最后一行改为。
results = re.findall('\"Address ID: GA(ACT[0-9]+)\"', response.text)这将查找所有形式为"Address ID: GAACT[0-9]+"的非重叠字符串。[0-9]+查找长度至少为1的任何数字字符串。捕获括号中的字符串并将其作为结果返回。这就是为什么你只获取id (例如GAACT714846020)而不是整个字符串(例如"Address ID: GAACT714846020")的原因。将GA移出括号会将其从结果中删除。
Posted on 2018-05-18 05:16:28
There is a library called RDFLib that needs no BeautifulSoup or regex (pip install rdflib). It is built for reading and querying RDF data, including the Turtle format.
To get started:
from rdflib import Graph
g = Graph()
g.parse('http://gnafld.net/address/?page=7&per_page=10')
This loads the Turtle data into something you can query. You can query it with SPARQL to get those addresses:
res = g.query("""SELECT ?subject ?add
WHERE {
?subject a <http://gnafld.net/def/gnaf#Address>.
?subject rdfs:label ?add.
}""")
for row in res:
    print(row.subject, row.add)
You can use SPARQL queries to access whatever data you need from them. If the new links also serve RDF, you can use RDFLib to parse their data as well.
I think what you will want to do is:
for row in res:
    g.parse(row.subject)
This pulls all the data from those resources into the graph as RDF. From there you can save it all to an RDF file with:
g.serialize("my_data.rdf", format="pretty-xml")
or keep running queries against it with SPARQL.
https://stackoverflow.com/questions/50144517