Fetching Anime Episodes with Python


Hello everyone, nice to see you again. I'm your friend 全栈君 (the Full-Stack Guy).


Preface

Not much to say here. I just remembered how, a few years back after a breakup, I binged anime non-stop to take the edge off a bad mood. This post is to mark that.


I. Straight to the code

1. Search entry point

# Search for an anime by name and return a list of matches
def get_video_list(name):
    # Optional proxy
    # proxy = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080' }
    url = 'http://www.7666.tv/search.php?searchword=' + name + '&submit='
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Percent-encode the Chinese search term in the URL
    url = url.replace(url.split("/")[-1].split(".")[0], quote(url.split("/")[-1].split(".")[0]))
    # Send the request (add proxies=proxy to route it through the proxy)
    response = requests.get(url, headers=headers)
    # Avoid garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the search results and extract the video information
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-vodlist__media clearfix"]/li')
    counter = 0
    result = []
    result_head = ['No.', 'Name', 'Type', 'Year', 'Link', 'Synopsis']
    result.append(result_head)
    for v in v_list:
        thumb_a = v.xpath('//div[@class="thumb"]/a[@class="myui-vodlist__thumb img-lg-150 img-xs-100 lazyload"]')[
            counter]
        # Video title
        video_name = thumb_a.attrib.get('title')
        # Cover image
        video_head = thumb_a.attrib.get('data-original')
        # Detail page link
        video_url = thumb_a.xpath('@href')[0]
        # Rating
        pattern = re.compile(r'\s+')
        thumb_span_g = thumb_a.xpath('//span[@class="pic-tag pic-tag-top"]')[counter]
        video_grade = re.sub(pattern, '', str(thumb_span_g.xpath('text()')[0]))
        # Latest update
        thumb_span_u = thumb_a.xpath('//span[@class="pic-text text-right"]')[counter]
        video_update = re.sub(pattern, '', str(thumb_span_u.xpath('text()')[0]))

        detail_p = v.xpath('//div[@class="detail"]/p')
        # Director
        video_director = detail_p[0].xpath('text()')[0]
        # Starring
        video_starring = detail_p[1][1].xpath('text()')[0]
        # Category
        video_type = detail_p[2].xpath('text()')[0]
        # Region
        video_address = detail_p[2][2].tail
        # Year
        video_year = detail_p[2][4].tail
        # Synopsis
        video_synopsis = v.xpath('//div[@class="detail"]/p[@class="hidden-xs"]/text()')[counter]
        video_synopsis = video_synopsis.encode("gbk", 'ignore').decode("gbk", "ignore")
        counter = counter + 1
        # print(video_name, video_head, video_url, video_grade, video_update)
        # print(video_director, video_starring, video_type, video_address, video_year, video_synopsis)
        result.append([counter, video_name, video_type, video_year, video_url, video_synopsis])
    return result
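
For clarity, here is a small usage sketch of my own (not part of the original script): the search term is only an example, and it assumes the imports from the full script further below.

# Usage sketch (illustrative only): search for a show and print the rows.
# '进击的巨人' (Attack on Titan) is just an example search term.
rows = get_video_list('进击的巨人')
for row in rows:
    # row 0 is the header; the rest are [no, name, type, year, link, synopsis]
    print(row)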

2. Episode list

# Query the episode list for a single video
def search_video_info(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Avoid garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the page and extract the episode information
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-content__list scrollbar sort-list clearfix"]/li/a')
    result = []
    result_head = ['No.', 'Episode', 'Episode link']
    result.append(result_head)
    counter = 1
    for v in v_list:
        # Episode label
        video_set = v.xpath('text()')[0]
        # Episode page link
        video_set_url = v.xpath('@href')[0]
        vr = [counter, video_set, video_set_url]
        result.append(vr)
        counter = counter + 1
    return result
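
Again a hedged usage sketch of my own: the detail-page path below is a made-up placeholder; in the real flow it comes from the link column that get_video_list returns.

# Usage sketch (the path '/video/12345.html' is a hypothetical placeholder).
episodes = search_video_info('http://www.7666.tv/video/12345.html')
for ep in episodes[1:]:
    # ep is [no, episode label, episode page link]
    print(ep)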

3. Downloading the TS files


# TS stream / m3u8
# Collect the list of TS segment URLs for one episode
def search_video_ts(url):
    result_urls = []
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Avoid garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the page
    html_obj = etree.HTML(response.text)
    # Extract the player script that carries the stream address
    ts_info = html_obj.xpath('//div[@class="embed-responsive embed-responsive-16by9 clearfix"]/script/text()')[0]
    pattern = r"now=(.*)"
    m = re.findall(pattern, ts_info, re.I)
    # The first m3u8 URL for this video (does not yet contain the TS paths)
    ts_url = str(m).split(";")[0].replace("[", '').replace('"', '').replace("'", '')
    # 1. Request the first m3u8 file
    response = requests.get(ts_url, headers=headers)
    response.encoding = response.apparent_encoding
    # The second m3u8 file (this one lists the TS segments)
    ts_url_2 = response.text.split("\n")[2]
    # Build the absolute URL
    ts_url_2 = ts_url.split("index")[0] + ts_url_2
    # 2. Request the TS segment list
    response = requests.get(ts_url_2)
    response.encoding = response.apparent_encoding
    # Split the playlist into lines
    ts_list = response.text.split("\n")
    # e.g. https://sina.com-h-sina.com/20180812/8108_9a67fe52/1000k/hls/f9ebcf457c6000.ts
    for ts in ts_list:
        # Skip comment lines (starting with '#') and empty lines
        if '#' not in ts and len(ts) != 0:
            ts_url_3 = ts_url_2.split("index")[0] + ts
            result_urls.append(ts_url_3)
    return result_urls
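
The function above turns relative segment names into absolute URLs by splitting the playlist URL on "index". A more general way, should the playlist layout change, is to resolve each non-comment line against the playlist URL with urljoin. A minimal, self-contained sketch (parse_m3u8_segments is my own helper name, not from the original script):

import requests
from urllib.parse import urljoin

# Sketch: turn an m3u8 playlist into a list of absolute .ts URLs.
# In the script above, playlist_url would correspond to ts_url_2.
def parse_m3u8_segments(playlist_url):
    text = requests.get(playlist_url).text
    segments = []
    for line in text.split("\n"):
        line = line.strip()
        if line and not line.startswith("#"):
            # urljoin handles both relative and absolute segment paths
            segments.append(urljoin(playlist_url, line))
    return segments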


def target_handel_download(start, end, name, url_list):
    # Request headers
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    for url in url_list[start:end]:
        global count
        print('Waiting one second, then downloading TS file >>> ' + url + ', count >> ' + str(count))
        time.sleep(1)
        try:
            r = requests.get(url, headers=headers, stream=True)
        except Exception as ex:
            print('Error while downloading TS file >>> ' + url + ', err >> ' + str(ex))
        else:
            with open(url.split("hls/")[1], "wb") as code:
                code.write(r.content)
            print("Download progress: %.2f" % (count / len(url_list)))
        count = count + 1



# TS stream / m3u8
# Download the TS segment list with multiple threads
def download_video_ts(name, result_list, num_thread=100):
    global count
    count = 0
    # Total number of TS files to download
    file_size = len(result_list)
    # Split the work across threads
    part = file_size // num_thread  # if it does not divide evenly, the last chunk takes the remainder
    counter = 0
    for i in range(num_thread):
        start = part * i
        if i == num_thread - 1:  # last chunk
            end = file_size
        else:
            end = start + part
        print('start>>' + str(start) + '  end>>' + str(end) + '   directory>>' + name + '  TS file count>>' + str(len(result_list)))
        print('Waiting 5 seconds before starting the next worker thread, total threads>>' + str(num_thread) + ', current thread>>' + str(i))
        time.sleep(5)
        t = threading.Thread(target=target_handel_download,
                             kwargs={'start': start, 'end': end, 'name': name, 'url_list': result_list})
        t.daemon = True
        t.start()
        counter = counter + 1

    # Wait for all download threads to finish
    main_thread = threading.current_thread()
    # Iterate over all live Thread objects
    for t in threading.enumerate():
        if t is main_thread:
            continue
        t.join()
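
The manual slicing and joining above works, but the standard library's thread pool does the same job with less bookkeeping. This is only a sketch of that alternative, not the article's original code; download_one is a hypothetical helper:

from concurrent.futures import ThreadPoolExecutor

import requests

# Sketch: download a list of TS URLs with a bounded thread pool.
def download_one(ts_url):
    # Name the local file after the last path component of the URL
    filename = ts_url.rsplit('/', 1)[-1]
    data = requests.get(ts_url, timeout=30).content
    with open(filename, 'wb') as f:
        f.write(data)

def download_all(ts_urls, workers=5):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # the with-block does not exit until every download has finished
        list(pool.map(download_one, ts_urls))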

4. Merging the TS files into MP4

# Merge the small files
# copy/b D:\newpython\doutu\sao\ts_files\*.ts d:\fnew.ts
# On Windows, copy /b *.ts video.mp4 concatenates all the TS files into a single MP4 file
def merge_ts_list(result_list, videoNamePy, name):
    tmp = []
    for file in result_list[0:568]:
        tmp.append(file.replace("\n", ""))
        # Merge the TS files
    # Windows cmd command
    shell_str = 'copy /b *.ts ' + name + '.mp4' + '\n' + 'del ' + videoNamePy + '\*.ts'
    return shell_str


# Write the merge command to a file | it is also executed directly
def wite_to_file(cmdString):
    f = open("combined.cmd", 'w', encoding="utf-8")
    f.write(cmdString)
    f.close()
    print('Running cmd command ............', cmdString)
    # Switch the console code page to UTF-8 to avoid garbled Chinese
    os.system('chcp 65001')
    r = os.system(cmdString)
    print(r)
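
copy /b is Windows-only. A cross-platform alternative is to concatenate the segments in Python, in playlist order. A minimal sketch, assuming the .ts files have already been downloaded into the current directory and are named as in the playlist (merge_ts_python is my own helper name, not part of the original script):

import os

# Sketch: concatenate downloaded .ts segments into one output file, in playlist order.
# ts_urls is the list returned by search_video_ts; out_path is the target file name.
def merge_ts_python(ts_urls, out_path):
    with open(out_path, 'wb') as out:
        for url in ts_urls:
            filename = url.rsplit('/', 1)[-1]
            if os.path.exists(filename):
                with open(filename, 'rb') as part:
                    out.write(part.read())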

5. The complete script

# encoding=utf-8
# A simple video downloader written in Python

import requests
import re
import threading
import os
import datetime
import time
import sys
import pinyin.cedict
from lxml import etree
# from lxml import html
from urllib.parse import quote

# etree = html.etree

head_url = 'http://www.7666.tv'
count = 0


# Search for an anime by name and return a list of matches
def get_video_list(name):
    # Optional proxy
    # proxy = {'http': 'http://127.0.0.1:8080', 'https': 'https://127.0.0.1:8080' }
    url = 'http://www.7666.tv/search.php?searchword=' + name + '&submit='
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Percent-encode the Chinese search term in the URL
    url = url.replace(url.split("/")[-1].split(".")[0], quote(url.split("/")[-1].split(".")[0]))
    # Send the request (add proxies=proxy to route it through the proxy)
    response = requests.get(url, headers=headers)
    # Avoid garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the search results and extract the video information
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-vodlist__media clearfix"]/li')
    counter = 0
    result = []
    result_head = ['No.', 'Name', 'Type', 'Year', 'Link', 'Synopsis']
    result.append(result_head)
    for v in v_list:
        thumb_a = v.xpath('//div[@class="thumb"]/a[@class="myui-vodlist__thumb img-lg-150 img-xs-100 lazyload"]')[
            counter]
        # Video title
        video_name = thumb_a.attrib.get('title')
        # Cover image
        video_head = thumb_a.attrib.get('data-original')
        # Detail page link
        video_url = thumb_a.xpath('@href')[0]
        # Rating
        pattern = re.compile(r'\s+')
        thumb_span_g = thumb_a.xpath('//span[@class="pic-tag pic-tag-top"]')[counter]
        video_grade = re.sub(pattern, '', str(thumb_span_g.xpath('text()')[0]))
        # Latest update
        thumb_span_u = thumb_a.xpath('//span[@class="pic-text text-right"]')[counter]
        video_update = re.sub(pattern, '', str(thumb_span_u.xpath('text()')[0]))

        detail_p = v.xpath('//div[@class="detail"]/p')
        # Director
        video_director = detail_p[0].xpath('text()')[0]
        # Starring
        video_starring = detail_p[1][1].xpath('text()')[0]
        # Category
        video_type = detail_p[2].xpath('text()')[0]
        # Region
        video_address = detail_p[2][2].tail
        # Year
        video_year = detail_p[2][4].tail
        # Synopsis
        video_synopsis = v.xpath('//div[@class="detail"]/p[@class="hidden-xs"]/text()')[counter]
        video_synopsis = video_synopsis.encode("gbk", 'ignore').decode("gbk", "ignore")
        counter = counter + 1
        # print(video_name, video_head, video_url, video_grade, video_update)
        # print(video_director, video_starring, video_type, video_address, video_year, video_synopsis)
        result.append([counter, video_name, video_type, video_year, video_url, video_synopsis])
    return result


# Query the episode list for a single video
def search_video_info(url):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Avoid garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the page and extract the episode information
    html_obj = etree.HTML(response.text)
    v_list = html_obj.xpath('//ul[@class="myui-content__list scrollbar sort-list clearfix"]/li/a')
    result = []
    result_head = ['No.', 'Episode', 'Episode link']
    result.append(result_head)
    counter = 1
    for v in v_list:
        # Episode label
        video_set = v.xpath('text()')[0]
        # Episode page link
        video_set_url = v.xpath('@href')[0]
        vr = [counter, video_set, video_set_url]
        result.append(vr)
        counter = counter + 1
    return result


# TS stream / m3u8
# Collect the list of TS segment URLs for one episode
def search_video_ts(url):
    result_urls = []
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    # Send the request
    response = requests.get(url, headers=headers)
    # Avoid garbled Chinese in the response body
    response.encoding = response.apparent_encoding
    # Parse the page
    html_obj = etree.HTML(response.text)
    # Extract the player script that carries the stream address
    ts_info = html_obj.xpath('//div[@class="embed-responsive embed-responsive-16by9 clearfix"]/script/text()')[0]
    pattern = r"now=(.*)"
    m = re.findall(pattern, ts_info, re.I)
    # The first m3u8 URL for this video (does not yet contain the TS paths)
    ts_url = str(m).split(";")[0].replace("[", '').replace('"', '').replace("'", '')
    # 1. Request the first m3u8 file
    response = requests.get(ts_url, headers=headers)
    response.encoding = response.apparent_encoding
    # The second m3u8 file (this one lists the TS segments)
    ts_url_2 = response.text.split("\n")[2]
    # Build the absolute URL
    ts_url_2 = ts_url.split("index")[0] + ts_url_2
    # 2. Request the TS segment list
    response = requests.get(ts_url_2)
    response.encoding = response.apparent_encoding
    # Split the playlist into lines
    ts_list = response.text.split("\n")
    # e.g. https://sina.com-h-sina.com/20180812/8108_9a67fe52/1000k/hls/f9ebcf457c6000.ts
    for ts in ts_list:
        # Skip comment lines (starting with '#') and empty lines
        if '#' not in ts and len(ts) != 0:
            ts_url_3 = ts_url_2.split("index")[0] + ts
            result_urls.append(ts_url_3)
    return result_urls


def target_handel_download(start, end, name, url_list):
    # Request headers
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Connection': 'keep-alive',
        'Upgrade-Insecure-Requests': '1',
        'Content-Type': 'application/x-www-form-urlencoded',
        'Cookie': ""
    }
    for url in url_list[start:end]:
        global count
        print('Waiting one second, then downloading TS file >>> ' + url + ', count >> ' + str(count))
        time.sleep(1)
        try:
            r = requests.get(url, headers=headers, stream=True)
        except Exception as ex:
            print('Error while downloading TS file >>> ' + url + ', err >> ' + str(ex))
        else:
            with open(url.split("hls/")[1], "wb") as code:
                code.write(r.content)
            print("Download progress: %.2f" % (count / len(url_list)))
        count = count + 1



# TS stream / m3u8
# Download the TS segment list with multiple threads
def download_video_ts(name, result_list, num_thread=100):
    global count
    count = 0
    # Total number of TS files to download
    file_size = len(result_list)
    # Split the work across threads
    part = file_size // num_thread  # if it does not divide evenly, the last chunk takes the remainder
    counter = 0
    for i in range(num_thread):
        start = part * i
        if i == num_thread - 1:  # last chunk
            end = file_size
        else:
            end = start + part
        print('start>>' + str(start) + '  end>>' + str(end) + '   directory>>' + name + '  TS file count>>' + str(len(result_list)))
        print('Waiting 5 seconds before starting the next worker thread, total threads>>' + str(num_thread) + ', current thread>>' + str(i))
        time.sleep(5)
        t = threading.Thread(target=target_handel_download,
                             kwargs={'start': start, 'end': end, 'name': name, 'url_list': result_list})
        t.daemon = True
        t.start()
        counter = counter + 1

    # Wait for all download threads to finish
    main_thread = threading.current_thread()
    # Iterate over all live Thread objects
    for t in threading.enumerate():
        if t is main_thread:
            continue
        t.join()


# Merge the small files
# copy/b D:\newpython\doutu\sao\ts_files\*.ts d:\fnew.ts
# On Windows, copy /b *.ts video.mp4 concatenates all the TS files into a single MP4 file
def merge_ts_list(result_list, videoNamePy, name):
    tmp = []
    for file in result_list[0:568]:
        tmp.append(file.replace("\n", ""))
        # Merge the TS files
    # Windows cmd command
    shell_str = 'copy /b *.ts ' + name + '.mp4' + '\n' + 'del ' + videoNamePy + '\*.ts'
    return shell_str


# Write the merge command to a file (it is also executed directly)
def wite_to_file(cmdString):
    f = open("combined.cmd", 'w', encoding="utf-8")
    f.write(cmdString)
    f.close()
    print('Running cmd command ............', cmdString)
    # Switch the console code page to UTF-8 to avoid garbled Chinese
    os.system('chcp 65001')
    r = os.system(cmdString)
    print(r)


# Display the search results
def show_video_list(result):
    print('------------------ Results ------------------')
    for r in result:
        print(str(r).replace('[', '').replace(']', '').replace(',', ''))
        print('\n')


# Check whether a string is a number
def is_number(s):
    try:
        int(s)
        return True
    except ValueError:
        pass
    try:
        import unicodedata
        unicodedata.numeric(s)
        return True
    except (TypeError, ValueError):
        pass
    return False


def download_vodeo_ts(videoName, cwd, content):
    print('Selected episode:',
          str(content).replace('[', '').replace(']', '').replace(',', ''))
    # Episode detail
    print("Fetching episode detail for -{}-".format(str(content[1])))
    # Collect the TS URLs for this episode
    result_urls = search_video_ts(head_url + str(content[2]))
    if len(result_urls) < 1:
        print('No TS content found for -{}-!'.format(str(content[1])))
    else:
        # Optionally convert the name to pinyin
        # videoNamePy = sys.path[0] + '\\' + pinyin.get(videoName, format="numerical")
        videoNamePy = sys.path[0] + '\\' + videoName
        # Download the TS files
        start = datetime.datetime.now().replace(microsecond=0)
        print('Download start ..................>>', start)
        print('Target directory >>>>>>>', videoNamePy)
        if not os.path.exists(videoNamePy):
            os.mkdir(videoNamePy)
            os.chdir(videoNamePy)
        else:
            os.chdir(videoNamePy)
        download_video_ts(videoNamePy, result_urls, 5)
        end = datetime.datetime.now().replace(microsecond=0)
        print('Download end ..................>>', end)
        # Merge the small files
        cmd = merge_ts_list(result_urls, videoNamePy, videoNamePy + '\\' + videoName + str(content[1]))
        # Write the merge command to a file and run it
        wite_to_file(cmd)
        print(str(content[1]) + " - download finished")


# main
def download_vodeo_man():
    cwd = os.getcwd()  # current working directory
    video_list = []
    print("------------------------current working directory------------------" + cwd)
    while True:
        exit_flag = False
        videoName = input("Enter an anime name || type exit to quit: ")
        if 'exit' == videoName:
            break
        # Search for the anime by name
        print('Searches are throttled to one every 3 seconds .....................')
        time.sleep(3)
        try:
            video_list = get_video_list(videoName)
        except Exception as ex:
            print('Search failed - {}. Please try again!'.format(ex))
        if len(video_list) < 2:
            print('No matching videos found. Please try again!')
        else:
            show_video_list(video_list)
            while True:
                if exit_flag:
                    break
                num = input("Enter the number of the video to download || type t to go back: ")
                if num == 't':
                    break
                elif is_number(num):
                    if int(num) >= len(video_list):
                        print('That number does not exist. Please try again:')
                    else:
                        content = video_list[int(num)]
                        video_name = content[1]
                        print('You selected:', str(content).replace('[', '').replace(']', '').replace(',', ''))
                        # Video detail
                        print("Fetching detail for -{}-".format(str(content[1])))
                        try:
                            result = search_video_info(head_url + str(content[4]))
                        except Exception:
                            print('Failed to fetch the video detail, going back!')
                            break
                        if len(result) < 2:
                            print('Video -{}- has no episodes!'.format(str(content[1])))
                        else:
                            show_video_list(result)
                            while True:
                                num = input("Enter the episode number to download || type all to download everything || type t to go back: ")
                                if num == 't':
                                    break
                                if num == 'all':
                                    print('Downloading all episodes')
                                    # Skip the header row at index 0
                                    for content in result[1:]:
                                        try:
                                            print('Starting download - {}'.format(video_name + str(content[1])))
                                            download_vodeo_ts(video_name, cwd, content)
                                        except Exception as ex:
                                            print('Episode {} failed - {}, continuing!'.format(content[1], ex))
                                    exit_flag = True
                                    break
                                elif int(num) >= len(result):
                                    print('That episode number does not exist. Please try again:')
                                else:
                                    content = result[int(num)]
                                    try:
                                        print('Starting download - {}'.format(video_name + str(content[1])))
                                        download_vodeo_ts(video_name, cwd, content)
                                    except Exception as ex:
                                        print('Episode {} failed - {}, going back!'.format(content[1], ex))
                                    break


if __name__ == '__main__':
    download_vodeo_man()
    print("OK")

Finally

I used to think that for two people in love to break up there had to be some dramatic event, a third person, say, or a terminal illness. It turns out none of that is needed; being busy, worn out, and uneasy is enough.

Thanks for reading patiently, everyone~
