How can I save an image locally using Python when I already know its URL address?

Problem description:

I know the URL of an image on the internet.

For example http://www.digimouth.com/news/media/2011/09/google-logo.jpg, which contains Google's logo.

Now, how can I download this image using Python, without actually opening the URL in a browser and saving the file manually?


Solution 1:

Python 2

If you just want to save it as a file, here is a more straightforward way to do it:

import urllib

urllib.urlretrieve("http://www.digimouth.com/news/media/2011/09/google-logo.jpg", "local-filename.jpg")

The second argument is the local path where the file should be saved.
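
For instance, pointing that second argument at a subfolder controls where the file lands; a small sketch (the images/ directory here is hypothetical and must already exist, since urlretrieve will not create it):

import urllib

# save into an existing subfolder instead of the current working directory
urllib.urlretrieve("http://www.digimouth.com/news/media/2011/09/google-logo.jpg",
                   "images/local-filename.jpg")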

Python 3

As SergO suggested, the code below should work with Python 3.

import urllib.request

urllib.request.urlretrieve("http://www.digimouth.com/news/media/2011/09/google-logo.jpg", "local-filename.jpg")
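
urlretrieve also returns the local filename together with the response headers, which allows a quick sanity check on what was fetched; a minimal sketch:

import urllib.request

# urlretrieve returns (filename, headers); headers is an http.client.HTTPMessage
filename, headers = urllib.request.urlretrieve(
    "http://www.digimouth.com/news/media/2011/09/google-logo.jpg",
    "local-filename.jpg")
print(headers.get_content_type())  # e.g. image/jpeg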

Solution 2:

import urllib

# Python 2: open the URL and write the raw bytes to a local file
resource = urllib.urlopen("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")
output = open("file01.jpg", "wb")
output.write(resource.read())
output.close()

file01.jpg will contain your image.
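
For Python 3, a minimal equivalent sketch (urlopen now lives in urllib.request):

import urllib.request

# Python 3: urlopen moved into urllib.request
resource = urllib.request.urlopen("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")
with open("file01.jpg", "wb") as output:
    output.write(resource.read())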

Solution 3:

I wrote a script that does just this, and it is available on my github for your use.

I leverage BeautifulSoup to parse any website for its images. If you will be doing a lot of web scraping (or intend to use my tool), I suggest pip install beautifulsoup4 (the package that provides the bs4 module imported below). Information on BeautifulSoup is available here.

For convenience, here is my code:

from bs4 import BeautifulSoup
from urllib2 import urlopen
import urllib

# Python 2 script: use this image scraper from the location
# that you want to save scraped images to

def make_soup(url):
    html = urlopen(url).read()
    return BeautifulSoup(html, 'html.parser')

def get_images(url):
    soup = make_soup(url)
    # this makes a list of bs4 element tags
    images = [img for img in soup.findAll('img')]
    print str(len(images)) + " images found."
    print 'Downloading images to current working directory.'
    # compile our unicode list of image links
    image_links = [each.get('src') for each in images]
    for each in image_links:
        filename = each.split('/')[-1]
        urllib.urlretrieve(each, filename)
    return image_links

# a standard call looks like this
# get_images('http://www.wookmark.com')

Solution 4:

This can be done with requests. Load the page and dump the binary content to a file.

import os
import requests

url = 'https://apod.nasa.gov/apod/image/1701/potw1636aN159_HST_2048.jpg'
page = requests.get(url)

f_ext = os.path.splitext(url)[-1]
f_name = 'img{}'.format(f_ext)
with open(f_name, 'wb') as f:
    f.write(page.content)
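
For large images, one might prefer streaming the body in chunks instead of holding it all in memory at once; a hedged sketch using the same URL:

import requests

url = 'https://apod.nasa.gov/apod/image/1701/potw1636aN159_HST_2048.jpg'
with requests.get(url, stream=True) as r:
    r.raise_for_status()
    with open('img.jpg', 'wb') as f:
        # write the body in 8 KiB chunks rather than loading it all at once
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)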

Solution 5:

Python 3

urllib.request — Extensible library for opening URLs

from urllib.error import HTTPError
from urllib.request import urlretrieve

# example values taken from the question
image_url = "http://www.digimouth.com/news/media/2011/09/google-logo.jpg"
image_local_path = "local-filename.jpg"

try:
    urlretrieve(image_url, image_local_path)
except FileNotFoundError as err:
    print(err)   # something wrong with the local path
except HTTPError as err:
    print(err)   # something wrong with the url

Solution 6:

I made a script expanding on Yup.'s script, and fixed a few things. It now gets around 403: Forbidden problems, doesn't crash when an image fails to be retrieved, tries to avoid corrupted previews, gets the correct absolute URLs, prints more information, and can be run with an argument from the command line.

# getem.py
# python2 script to download all images in a given url
# use: python getem.py http://url.where.images.are

from bs4 import BeautifulSoup
import urllib2
import shutil
import requests
from urlparse import urljoin
import sys
import time

def make_soup(url):
    req = urllib2.Request(url, headers={'User-Agent' : "Magic Browser"}) 
    html = urllib2.urlopen(req)
    return BeautifulSoup(html, 'html.parser')

def get_images(url):
    soup = make_soup(url)
    images = [img for img in soup.findAll('img')]
    print (str(len(images)) + " images found.")
    print 'Downloading images to current working directory.'
    image_links = [each.get('src') for each in images]
    for each in image_links:
        try:
            filename = each.strip().split('/')[-1].strip()
            src = urljoin(url, each)
            print 'Getting: ' + filename
            response = requests.get(src, stream=True)
            # delay to avoid corrupted previews
            time.sleep(1)
            with open(filename, 'wb') as out_file:
                shutil.copyfileobj(response.raw, out_file)
        except Exception:
            print '  An error occurred. Continuing.'
    print 'Done.'

if __name__ == '__main__':
    url = sys.argv[1]
    get_images(url)

Solution 7:

A solution that works with both Python 2 and Python 3:

try:
    from urllib.request import urlretrieve  # Python 3
except ImportError:
    from urllib import urlretrieve  # Python 2

url = "http://www.digimouth.com/news/media/2011/09/google-logo.jpg"
urlretrieve(url, "local-filename.jpg")

Alternatively, if the additional requirement of requests is acceptable and it is an http(s) URL:

def load_requests(source_url, sink_path):
    """
    Load a file from a URL (e.g. http).

    Parameters
    ----------
    source_url : str
        Where to load the file from.
    sink_path : str
        Where the loaded file is stored.
    """
    import requests
    r = requests.get(source_url, stream=True)
    if r.status_code == 200:
        with open(sink_path, 'wb') as f:
            for chunk in r:
                f.write(chunk)
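
A hypothetical call, reusing the URL from the question:

load_requests("http://www.digimouth.com/news/media/2011/09/google-logo.jpg",
              "local-filename.jpg")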

Solution 8:

Using the requests library:

import requests
import shutil, os

headers = {
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36'
}
currentDir = os.getcwd()
path = os.path.join(currentDir, 'Images')  # saving images to the Images folder
os.makedirs(path, exist_ok=True)           # make sure the folder exists

def ImageDl(url):
    attempts = 0
    while attempts < 5:  # retry 5 times
        try:
            filename = url.split('/')[-1]
            r = requests.get(url, headers=headers, stream=True, timeout=5)
            if r.status_code == 200:
                with open(os.path.join(path, filename), 'wb') as f:
                    r.raw.decode_content = True
                    shutil.copyfileobj(r.raw, f)
            print(filename)
            break
        except Exception as e:
            attempts += 1
            print(e)


ImageDl("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")

Solution 9:

Download the link using the simple Python wget module. Usage:

import wget
wget.download('http://www.digimouth.com/news/media/2011/09/google-logo.jpg')
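
The module also lets you pick the destination name via its out parameter (a small sketch, assuming the wget 3.x API):

import wget

# out names the local file; otherwise wget derives the name from the URL
wget.download('http://www.digimouth.com/news/media/2011/09/google-logo.jpg',
              out='local-filename.jpg')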

Solution 10:

This is a very short answer (Python 2; on Python 3, use urllib.request.urlretrieve as in Solution 1).

import urllib
urllib.urlretrieve("http://photogallery.sandesh.com/Picture.aspx?AlubumId=422040", "Abc.jpg")

Solution 11:

Late answer, but with python>=3.6 you can use dload, i.e.:

import dload
dload.save("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")
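
dload.save also takes an optional destination path as its second argument (a sketch, assuming dload's save(url, path) form):

import dload
dload.save("http://www.digimouth.com/news/media/2011/09/google-logo.jpg", "local-filename.jpg")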

If you need the image as bytes, use:

img_bytes = dload.bytes("http://www.digimouth.com/news/media/2011/09/google-logo.jpg")

Install with pip3 install dload

Solution 12:

Python 3 version

I adapted @madprops' code for Python 3

# getem.py
# python3 script to download all images in a given url
# use: python getem.py http://url.where.images.are

from bs4 import BeautifulSoup
import urllib.request
import shutil
import requests
from urllib.parse import urljoin
import sys
import time

def make_soup(url):
    req = urllib.request.Request(url, headers={'User-Agent' : "Magic Browser"}) 
    html = urllib.request.urlopen(req)
    return BeautifulSoup(html, 'html.parser')

def get_images(url):
    soup = make_soup(url)
    images = [img for img in soup.findAll('img')]
    print (str(len(images)) + " images found.")
    print('Downloading images to current working directory.')
    image_links = [each.get('src') for each in images]
    for each in image_links:
        try:
            filename = each.strip().split('/')[-1].strip()
            src = urljoin(url, each)
            print('Getting: ' + filename)
            response = requests.get(src, stream=True)
            # delay to avoid corrupted previews
            time.sleep(1)
            with open(filename, 'wb') as out_file:
                shutil.copyfileobj(response.raw, out_file)
        except Exception:
            print('  An error occurred. Continuing.')
    print('Done.')

if __name__ == '__main__':
    # take the target page from the command line, e.g. python getem.py http://www.wookmark.com
    get_images(sys.argv[1])

Solution 13:

Something fresh for Python 3 using Requests:

Comments in the code. A ready-to-use function.


import requests
from os import path

def get_image(image_url):
    """
    Get image based on url.
    :return: Image name if everything OK, False otherwise
    """
    image_name = path.split(image_url)[1]
    try:
        image = requests.get(image_url)
    except OSError:  # a little too wide, but works OK with no additional imports needed; catches all connection problems
        return False
    if image.status_code == 200:  # we could have retrieved error page
        base_dir = path.join(path.dirname(path.realpath(__file__)), "images") # Use your own path or "" to use current working directory. Folder must exist.
        with open(path.join(base_dir, image_name), "wb") as f:
            f.write(image.content)
        return image_name

get_image("https://apod.nasddfda.gov/apod/image/2003/S106_Mishra_1947.jpg")

Solution 14:

This is the easiest way to download an image.

import requests
from slugify import slugify

img_url = 'https://apod.nasa.gov/apod/image/1701/potw1636aN159_HST_2048.jpg'
img = requests.get(img_url).content
# slugify builds a filesystem-safe name from the URL; keep the original extension
with open(slugify(img_url) + '.' + img_url.split('.')[-1], 'wb') as img_file:
    img_file.write(img)

Solution 15:

If you don't already have the URL for the image, you could scrape it with gazpacho:

from gazpacho import Soup
base_url = "http://books.toscrape.com"

soup = Soup.get(base_url)
links = [img.attrs["src"] for img in soup.find("img")]

And then download the asset with urllib, as described:

from pathlib import Path
from urllib.request import urlretrieve as download

directory = "images"
Path(directory).mkdir(exist_ok=True)

link = links[0]
name = link.split("/")[-1]

download(f"{base_url}/{link}", f"{directory}/{name}")
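
To grab every scraped image rather than just the first, the same two lines can run in a loop over links (a sketch under the same assumptions as above):

for link in links:
    name = link.split("/")[-1]
    download(f"{base_url}/{link}", f"{directory}/{name}")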

Solution 16:

# import the required libraries from Python
import os, pathlib, urllib.request

# Using pathlib, specify where the image is to be saved
downloads_path = str(pathlib.Path.home() / "Downloads")

# Form a full image path by joining the path to the
# image's new name

picture_path = os.path.join(downloads_path, "new-image.png")

# "/home/User/Downloads/new-image.png"

# Using "urlretrieve()" from urllib.request, save the image
urllib.request.urlretrieve("https://example.com/image.png", picture_path)

# urlretrieve() takes 2 arguments:
# 1. The URL of the image to be downloaded
# 2. The image's new name after download. By default, the image is saved
#    inside your current working directory

Solution 17:

Download an image file, guarding against all the possible errors:

import requests
import validators
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError


def is_downloadable(url):
  # reject anything that is not a syntactically valid URL
  if not validators.url(url):
    return False
  req = Request(url)
  try:
    response = urlopen(req)
  except HTTPError:
    return False
  except URLError:
    return False
  else:
    return True


# File_data holds the list of image addresses and pth the destination
# folder; both are assumed to be defined elsewhere in the program
for i in range(len(File_data)):
  url = File_data[i][1]
  try:
    if is_downloadable(url):
      try:
        r = requests.get(url, allow_redirects=True)
        if '/' in url:
          fname = url.rsplit('/', 1)[1]
          fname = pth + File_data[i][0] + "$" + fname  # destination to save the image file
          with open(fname, 'wb') as f:
            f.write(r.content)
      except Exception as e:
        print(e)
  except Exception as e:
    print(e)

Solution 18:

Well, this is my rudimentary attempt, and probably total overkill. Update it if needed, as this doesn't handle any timeouts, but I got it working for fun.

Code listed here: https://github.com/JayRizzo/JayRizzoTools/blob/master/pyImageDownloader.py

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# =============================================================================
# Created Syst: MAC OSX High Sierra 21.5.0 (17G65)
# Created Plat: Python 3.9.5 ('v3.9.5:0a7dcbdb13', 'May  3 2021 13:17:02')
# Created By  : Jeromie Kirchoff
# Created Date: Thu Jun 15 23:31:01 2022 CDT
# Last ModDate: Thu Jun 16 01:41:01 2022 CDT
# =============================================================================
# NOTE: Doesn't work on SVG images at this time.
# I will look into this further: https://stackoverflow.com/a/6599172/1896134
# =============================================================================
import requests                                 # to get image from the web
import shutil                                   # to save it locally
import os                                       # needed
from os.path import exists as filepathexist     # check if file paths exist
from os.path import join                        # joins path for different os
from os.path import expanduser                  # expands current home
from pyuser_agent import UA                     # generates random UserAgent

class ImageDownloader(object):
    """URL ImageDownloader.
    Input : Full Image URL
    Output: Image saved to your ~/Pictures/JayRizzoDL folder.
    """
    def __init__(self, URL: str):
        self.url = URL
        self.headers = {"User-Agent" : UA().random}
        self.currentHome = expanduser('~')
        self.desktop = join(self.currentHome + "/Desktop/")
        self.download = join(self.currentHome + "/Downloads/")
        self.pictures = join(self.currentHome + "/Pictures/JayRizzoDL/")
        self.outfile = ""
        self.filename = ""
        self.response = ""
        self.rawstream = ""
        self.createdfilepath = ""
        self.imgFileName = ""
        # Check if the JayRizzoDL exists in the pictures folder.
        # if it doesn't exist create it.
        if not filepathexist(self.pictures):
            os.mkdir(self.pictures)
        self.main()

    def getFileNameFromURL(self, URL: str):
        """Parse the URL for the name after the last forward slash."""
        NewFileName = URL.strip().split('/')[-1].strip()
        return NewFileName

    def getResponse(self, URL: str):
        """Try streaming the URL for the raw data."""
        self.response = requests.get(URL, headers=self.headers, stream=True)
        return self.response

    def gocreateFile(self, name: str, response):
        """Try creating the file with the raw data in a custom folder."""
        self.outfile = join(self.pictures, name)
        with open(self.outfile, 'wb') as outFilePath:
            shutil.copyfileobj(response.raw, outFilePath)
        return self.outfile

    def main(self):
        """Combine Everything and use in for loops."""
        self.filename = self.getFileNameFromURL(self.url)
        self.rawstream = self.getResponse(self.url)
        self.createdfilepath = self.gocreateFile(self.filename, self.rawstream)
        print(f"File was created: {self.createdfilepath}")
        return

if __name__ == '__main__':
    # Example when calling the file directly.
    ImageDownloader("https://stackoverflow.design/assets/img/logos/so/logo-stackoverflow.png")
