This task has to be split into two parts:
- extracting all the text that is not inside an a tag
- finding (or rather guessing) all the URLs in that text and wrapping them
For the first part I suggest using BeautifulSoup. You could also use the bare html.parser module, but that would be a lot of extra work.
Find the text with a recursive function:
from bs4 import BeautifulSoup
from bs4.element import NavigableString

your_text = """I was surfing <a href="...">www.google.com</a>, and I found an
interesting site https://www.stackoverflow.com/. It's amazing! I also liked
Heroku (http://heroku.com/pricing)
more.domains.tld/at-the-end-of-line
https://at-the_end_of-text.com"""

soup = BeautifulSoup(your_text, "html.parser")

def wrap_plaintext_links(bs_tag):
    for element in bs_tag.children:
        if type(element) == NavigableString:
            pass  # now we have a text node, process it
        # otherwise it is a Tag (or the soup object, which behaves like a tag for most purposes)
        elif element.name != "a":  # if it isn't an a tag, process it recursively
            wrap_plaintext_links(element)

wrap_plaintext_links(soup)  # call the recursive function
You can check that it finds only the values you want by replacing pass with print(element).
Now on to finding the URLs and replacing them. How complex a regular expression you use really depends on how precise you want to be. I would go with this one:
(https?://)?          # match http(s):// in a separate group if present
(                     # start of the main capturing group, what will end up between the tags
  (?:[\w-]+\.)+       # at least one domain and any subdomains before the TLD
  [a-z]+              # TLD
  (?:/\S*?)?          # /[anything except whitespace] if present - the URL path
)                     # end of the group
(?=[\.,)]?(?:\s|$))   # don't consume a trailing ".", "," or ")" that may follow the URL in the text
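The commented pattern above can be compiled verbatim with the re.X flag, which makes the regex engine ignore the whitespace and # comments. As a quick sanity check (the sample strings below are mine, not from the original text):

```python
import re

# the commented pattern from above, compiled verbatim thanks to re.X
r = re.compile(r"""
    (https?://)?          # match http(s):// in a separate group if present
    (                     # start of the main capturing group
      (?:[\w-]+\.)+       # at least one domain and any subdomains before the TLD
      [a-z]+              # TLD
      (?:/\S*?)?          # the URL path, if present
    )                     # end of the group
    (?=[\.,)]?(?:\s|$))   # don't consume trailing punctuation
""", re.X)

m = r.search("visit www.example.com today")
print(m.group(2))  # -> www.example.com

m2 = r.search("see https://at-the_end_of-text.com")
print(m2.group(1))  # -> https://
```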
The function and the code with the replacement added:
import re

def create_replacement(matchobj):
    if matchobj.group(1):  # if there's http(s)://, keep it
        full_url = matchobj.group(0)
    else:  # otherwise prepend it; whether http or https is a longer discussion - decide for yourself
        full_url = "http://" + matchobj.group(2)
    tag = soup.new_tag("a", href=full_url)
    tag.string = matchobj.group(2)
    return str(tag)

# compile the pattern beforehand, as it's going to be used many times
r = re.compile(r"(https?://)?((?:[\w-]+\.)+[a-z]+(?:/\S*?)?)(?=[\.,)]?(?:\s|$))")

def wrap_plaintext_links(bs_tag):
    for element in bs_tag.children:
        if type(element) == NavigableString:
            replaced = r.sub(create_replacement, str(element))
            # parse the result as a Soup so that the new tags aren't escaped
            element.replace_with(BeautifulSoup(replaced, "html.parser"))
        elif element.name != "a":
            wrap_plaintext_links(element)
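To see what the substitution half produces on its own, you can run the same pattern over a bare string with a string-building variant of create_replacement (the sample sentence and the plain-string helper below are illustrative, not part of the answer's code, which builds the tag via soup.new_tag instead):

```python
import re

r = re.compile(r"(https?://)?((?:[\w-]+\.)+[a-z]+(?:/\S*?)?)(?=[\.,)]?(?:\s|$))")

def create_replacement(matchobj):
    # keep an existing http(s):// prefix, otherwise default to http://
    if matchobj.group(1):
        full_url = matchobj.group(0)
    else:
        full_url = "http://" + matchobj.group(2)
    return '<a href="%s">%s</a>' % (full_url, matchobj.group(2))

text = "I found https://www.stackoverflow.com/. Also try example.org today."
print(r.sub(create_replacement, text))
```

Note how the dot after the first URL stays outside the tag: the lookahead lets the match stop right before the trailing punctuation without consuming it.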
Note: you could also keep the commented pattern explanation from above directly in the code - see the re.X flag.