Python provides several modules, such as urllib and requests, to download files from the web. Here we will use the requests library to download files from URLs efficiently.
Let's walk through the step-by-step procedure for downloading files from URLs with the requests library:
1. Import module
import requests
2. Get the link or URL
url = 'https://www.facebook.com/favicon.ico'
r = requests.get(url, allow_redirects=True)
3. Save the content to a file.
open('facebook.ico', 'wb').write(r.content)
This saves the file as facebook.ico.
Example
import requests

url = 'https://www.facebook.com/favicon.ico'
r = requests.get(url, allow_redirects=True)
open('facebook.ico', 'wb').write(r.content)
Result
We can see that the downloaded file (the icon) is in our current working directory.
But we may need to download different kinds of files, such as images, text, or video, from the web. So let's first get the type of data the URL is linking to:
>>> r = requests.get(url, allow_redirects=True)
>>> print(r.headers.get('content-type'))
image/png
However, there is a smarter way, which involves fetching just the headers of a URL before actually downloading it. This allows us to skip downloading files that weren't meant to be downloaded.
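The snippet below calls a helper named is_downloadable whose definition is not shown above. A minimal sketch of such a helper, assuming a content-type based check (any text or HTML response is treated as a regular web page rather than a downloadable file), might look like this:

import requests

def is_downloadable(url):
    """
    Does the URL point to a downloadable resource?
    Checks only the response headers; no body is fetched.
    """
    h = requests.head(url, allow_redirects=True)
    content_type = h.headers.get('content-type', '').lower()
    if 'text' in content_type or 'html' in content_type:
        return False
    return True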
>>> print(is_downloadable('https://www.youtube.com/watch?v=xCglV_dqFGI'))
False
>>> print(is_downloadable('https://www.facebook.com/favicon.ico'))
True
To restrict the download by file size, we can get the file size from the content-length header and then act as per our requirement. Note that the header stores the size as a string, so it must be converted to an integer before comparing:
content_length = r.headers.get('content-length', None)
if content_length and int(content_length) > 2e8:  # 200 MB approx
    return False
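Folding the size check into the helper, a possible extended version (the 200 MB cap here is an arbitrary example threshold, not a requirement of the library) is:

import requests

def is_downloadable(url, max_bytes=2e8):
    """
    Headers-only check: reject text/HTML responses and
    anything larger than max_bytes (200 MB by default).
    """
    h = requests.head(url, allow_redirects=True)
    content_type = h.headers.get('content-type', '').lower()
    if 'text' in content_type or 'html' in content_type:
        return False
    content_length = h.headers.get('content-length', None)
    if content_length and int(content_length) > max_bytes:
        return False
    return True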
Get filename from a URL
To get the filename, we can parse the URL. Below is a sample routine which fetches the last string after the forward slash (/).
url = "https://www.computersolution.tech/wp-content/uploads/2016/05/tutorialspoint-logo.png"
if '/' in url:
    print(url.rsplit('/', 1)[1])
The above will give the filename from the URL. However, there are many cases where filename information is not present in the URL, for example https://url.com/download. In such a case, we need to get the Content-Disposition header, which contains the filename information.
import requests
import re

def getFilename_fromCd(cd):
    """
    Get filename from Content-Disposition header
    """
    if not cd:
        return None
    fname = re.findall('filename=(.+)', cd)
    if len(fname) == 0:
        return None
    return fname[0]

url = 'https://google.com/favicon.ico'
r = requests.get(url, allow_redirects=True)
filename = getFilename_fromCd(r.headers.get('content-disposition'))
open(filename, 'wb').write(r.content)
The above URL-parsing code, in conjunction with the above program, will give you the filename from the Content-Disposition header most of the time.
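Combining the two approaches, a sketch of a complete download routine might look like the following. The helper name download and the fallback filename 'download' are assumptions for illustration, not part of the original article:

import os
import re
import requests
from urllib.parse import urlparse

def download(url):
    """Download url, naming the file from Content-Disposition
    when present, otherwise from the last URL path component."""
    r = requests.get(url, allow_redirects=True)
    cd = r.headers.get('content-disposition')
    filename = None
    if cd:
        matches = re.findall('filename=(.+)', cd)
        if matches:
            filename = matches[0].strip('"')
    if not filename:
        # hypothetical fallback when neither source yields a name
        filename = os.path.basename(urlparse(url).path) or 'download'
    open(filename, 'wb').write(r.content)
    return filename

print(download('https://www.facebook.com/favicon.ico'))
# e.g. facebook.ico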
I wanted to download all the files from a webpage. I tried wget but it was failing, so I decided on the Python route and found this thread. After reading it, I have made a little command line application, soupget, expanding on the excellent answers of PabloG and Stan and adding some useful options.
It uses BeautifulSoup to collect all the URLs of the page and then downloads the ones with the desired extension(s). Finally, it can download multiple files in parallel.
Here it is:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from __future__ import (division, absolute_import, print_function, unicode_literals)
import sys, os, argparse
from bs4 import BeautifulSoup

# --- insert Stan's script here ---
# if sys.version_info >= (3,):
#...
#...
# def download_file(url, dest=None):
#...
#...

# --- new stuff ---
def collect_all_url(page_url, extensions):
    """
    Recovers all links in page_url checking for all the desired extensions
    """
    conn = urllib2.urlopen(page_url)
    html = conn.read()
    soup = BeautifulSoup(html, 'lxml')
    links = soup.find_all('a')

    results = []
    for tag in links:
        link = tag.get('href', None)
        if link is not None:
            for e in extensions:
                if e in link:
                    # Fallback for badly defined links
                    # checks for missing scheme or netloc
                    if bool(urlparse.urlparse(link).scheme) and bool(urlparse.urlparse(link).netloc):
                        results.append(link)
                    else:
                        new_url = urlparse.urljoin(page_url, link)
                        results.append(new_url)
    return results

if __name__ == "__main__":  # Only run if this file is called directly
    # Command line arguments
    parser = argparse.ArgumentParser(
        description='Download all files from a webpage.')
    parser.add_argument(
        '-u', '--url',
        help='Page url to request')
    parser.add_argument(
        '-e', '--ext',
        nargs='+',
        help='Extension(s) to find')
    parser.add_argument(
        '-d', '--dest',
        default=None,
        help='Destination where to save the files')
    parser.add_argument(
        '-p', '--par',
        action='store_true', default=False,
        help="Turns on parallel download")
    args = parser.parse_args()

    # Recover files to download
    all_links = collect_all_url(args.url, args.ext)

    # Download
    if not args.par:
        for l in all_links:
            try:
                filename = download_file(l, args.dest)
                print(l)
            except Exception as e:
                print("Error while downloading: {}".format(e))
    else:
        from multiprocessing.pool import ThreadPool
        results = ThreadPool(10).imap_unordered(
            lambda x: download_file(x, args.dest), all_links)
        for p in results:
            print(p)
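Note that the script deliberately leaves a hole: download_file, urllib2, and urlparse are expected to come from Stan's script mentioned in the comment near the top. As a minimal Python 3 stand-in (an assumption on my part, not Stan's original code), something like this would make the script runnable:

# Hypothetical stand-in for Stan's script (an assumption, not the original)
import os
import urllib.request as urllib2  # provides urlopen under the name the script expects
import urllib.parse as urlparse   # provides urlparse/urljoin under the expected name

def download_file(url, dest=None):
    """Download url into dest (or the current directory); return the file path."""
    filename = os.path.basename(urlparse.urlparse(url).path) or 'download'
    if dest:
        filename = os.path.join(dest, filename)
    with urllib2.urlopen(url) as response, open(filename, 'wb') as out:
        out.write(response.read())
    return filename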
An example of its usage is:
python3 soupget.py -p -e <extension(s)> -d <destination folder> -u <url of the webpage>
And an actual example if you want to see it in action:
python3 soupget.py -p -e .xlsx .pdf .csv -u https://healthdata.gov/dataset/chemicals-cosmetics