Building ResNet101 and ResNet152 with PyTorch



For ResNet18, see: Building ResNet18 in PyTorch and training/testing it on CIFAR10
For ResNet34, see: Building ResNet34 with PyTorch
For ResNet50, see: Building ResNet50 with PyTorch

These follow my ResNet50 build: from 50 layers up the structure is almost identical, and you only need to stack more convolutional units per stage, so the code below is not annotated in detail.
For build annotations for ResNet101 and 152, see the comments in my ResNet50 post.
For training ResNet101 and 152, see the training section of my ResNet18 post.
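
The only difference between the deep variants is how many bottleneck blocks each of the four stages repeats. The counts below are from the original ResNet paper and match the loop counts in the forward passes further down (one "first" block plus N-1 repeats):

# Bottleneck blocks per stage (conv2_x .. conv5_x), from the ResNet paper
blocks_per_stage = {
    'resnet50':  [3, 4, 6, 3],
    'resnet101': [3, 4, 23, 3],
    'resnet152': [3, 8, 36, 3],
}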

ResNet101 and 152 can still follow the ResNet50 architecture diagram:
[Figure: ResNet50-style bottleneck architecture diagram]

On to the code:

The ResNet101 model (model.py):

import torch
import torch.nn as nn
from torch.nn import functional as F


class DownSample(nn.Module):
    """1x1 conv projection shortcut, used when the residual branch changes
    the channel count and/or the spatial size.
    Note: the canonical ResNet projection shortcut is Conv + BN only; the
    trailing ReLU is kept here as the original author's variation."""

    def __init__(self, in_channel, out_channel, stride):
        super(DownSample, self).__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride, padding=0, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.down(x)


class ResNet101(nn.Module):
    def __init__(self, classes_num):            # number of output classes
        super(ResNet101, self).__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # --------------------------------------------------------------------
        self.layer1_first = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        self.layer1_next = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        # --------------------------------------------------------------------
        self.layer2_first = nn.Sequential(
            nn.Conv2d(256, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        self.layer2_next = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        # --------------------------------------------------------------------
        self.layer3_first = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        self.layer3_next = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        # --------------------------------------------------------------------
        self.layer4_first = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        self.layer4_next = nn.Sequential(
            nn.Conv2d(2048, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        # --------------------------------------------------------------------
        # Register the projection shortcuts here (rather than building them
        # inside forward) so their weights are actually trained and move to
        # the same device as the rest of the model.
        self.layer1_shortcut = DownSample(64, 256, 1)
        self.layer2_shortcut = DownSample(256, 512, 2)
        self.layer3_shortcut = DownSample(512, 1024, 2)
        self.layer4_shortcut = DownSample(1024, 2048, 2)
        # --------------------------------------------------------------------
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(2048 * 1 * 1, 1000),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(1000, classes_num)
        )

    def forward(self, x):
        out = self.pre(x)
        # --------------------------------------------------------------------
        layer1_identity = self.layer1_shortcut(out)
        out = self.layer1_first(out)
        out = F.relu(out + layer1_identity, inplace=True)

        for i in range(2):
            identity = out
            out = self.layer1_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        layer2_identity = self.layer2_shortcut(out)
        out = self.layer2_first(out)
        out = F.relu(out + layer2_identity, inplace=True)

        for i in range(3):
            identity = out
            out = self.layer2_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        layer3_identity = self.layer3_shortcut(out)
        out = self.layer3_first(out)
        out = F.relu(out + layer3_identity, inplace=True)

        for i in range(22):
            identity = out
            out = self.layer3_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        layer4_identity = self.layer4_shortcut(out)
        out = self.layer4_first(out)
        out = F.relu(out + layer4_identity, inplace=True)

        for i in range(2):
            identity = out
            out = self.layer4_next(out)
            out = F.relu(out + identity, inplace=True)
        # --------------------------------------------------------------------
        out = self.avg_pool(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)

        return out
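
A quick shape check (my addition, not from the original post): push a dummy ImageNet-sized batch through the network and confirm the output matches the requested class count.

if __name__ == '__main__':
    model = ResNet101(classes_num=10)
    x = torch.randn(2, 3, 224, 224)   # dummy batch: 2 RGB images, 224x224
    print(model(x).shape)             # expected: torch.Size([2, 10])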


The ResNet152 model (model.py):

import torch
import torch.nn as nn
from torch.nn import functional as F


class DownSample(nn.Module):
    """1x1 conv projection shortcut, used when the residual branch changes
    the channel count and/or the spatial size.
    Note: the canonical ResNet projection shortcut is Conv + BN only; the
    trailing ReLU is kept here as the original author's variation."""

    def __init__(self, in_channel, out_channel, stride):
        super(DownSample, self).__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_channel, out_channel, kernel_size=1, stride=stride, padding=0, bias=False),
            nn.BatchNorm2d(out_channel),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.down(x)


class ResNet152(nn.Module):
    def __init__(self, classes_num):            # number of output classes
        super(ResNet152, self).__init__()
        self.pre = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        )
        # -----------------------------------------------------------------------
        self.layer1_first = nn.Sequential(
            nn.Conv2d(64, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        self.layer1_next = nn.Sequential(
            nn.Conv2d(256, 64, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256)
        )
        # -----------------------------------------------------------------------
        self.layer2_first = nn.Sequential(
            nn.Conv2d(256, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        self.layer2_next = nn.Sequential(
            nn.Conv2d(512, 128, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 128, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512)
        )
        # -----------------------------------------------------------------------
        self.layer3_first = nn.Sequential(
            nn.Conv2d(512, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        self.layer3_next = nn.Sequential(
            nn.Conv2d(1024, 256, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 256, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 1024, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(1024)
        )
        # -----------------------------------------------------------------------
        self.layer4_first = nn.Sequential(
            nn.Conv2d(1024, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=2, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        self.layer4_next = nn.Sequential(
            nn.Conv2d(2048, 512, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(512),
            nn.ReLU(inplace=True),
            nn.Conv2d(512, 2048, kernel_size=1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(2048)
        )
        # -----------------------------------------------------------------------
        # Register the projection shortcuts as submodules (see the note in the
        # ResNet101 version above).
        self.layer1_shortcut = DownSample(64, 256, 1)
        self.layer2_shortcut = DownSample(256, 512, 2)
        self.layer3_shortcut = DownSample(512, 1024, 2)
        self.layer4_shortcut = DownSample(1024, 2048, 2)
        # -----------------------------------------------------------------------
        self.avg_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.fc = nn.Sequential(
            nn.Dropout(p=0.5),
            nn.Linear(2048 * 1 * 1, 1000),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),
            nn.Linear(1000, classes_num)
        )

    def forward(self, x):
        out = self.pre(x)
        # -----------------------------------------------------------------------
        layer1_identity = self.layer1_shortcut(out)
        out = self.layer1_first(out)
        out = F.relu(out + layer1_identity, inplace=True)

        for i in range(2):
            identity = out
            out = self.layer1_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        layer2_identity = self.layer2_shortcut(out)
        out = self.layer2_first(out)
        out = F.relu(out + layer2_identity, inplace=True)

        for i in range(7):
            identity = out
            out = self.layer2_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        layer3_identity = self.layer3_shortcut(out)
        out = self.layer3_first(out)
        out = F.relu(out + layer3_identity, inplace=True)

        for i in range(35):
            identity = out
            out = self.layer3_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        layer4_identity = self.layer4_shortcut(out)
        out = self.layer4_first(out)
        out = F.relu(out + layer4_identity, inplace=True)

        for i in range(2):
            identity = out
            out = self.layer4_next(out)
            out = F.relu(out + identity, inplace=True)
        # -----------------------------------------------------------------------
        out = self.avg_pool(out)
        out = out.reshape(out.size(0), -1)
        out = self.fc(out)

        return out
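
One caveat about both models above: calling self.layerX_next repeatedly in a for loop reuses the same module, so all repeated blocks within a stage share one set of weights, whereas in the reference ResNet every block has its own parameters. Below is a minimal sketch of a more faithful, parameterized builder; Bottleneck and make_stage are my own names, not from the original post, and the projection shortcut here follows the paper (Conv + BN, no ReLU).

import torch
import torch.nn as nn


class Bottleneck(nn.Module):
    # 1x1 -> 3x3 -> 1x1 bottleneck with expansion factor 4, as in the paper
    def __init__(self, in_ch, mid_ch, stride=1):
        super().__init__()
        out_ch = mid_ch * 4
        self.residual = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, mid_ch, kernel_size=3, stride=stride, padding=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch)
        )
        # projection shortcut only when the shape changes
        self.shortcut = None
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch)
            )

    def forward(self, x):
        identity = x if self.shortcut is None else self.shortcut(x)
        return torch.relu(self.residual(x) + identity)


def make_stage(in_ch, mid_ch, num_blocks, stride):
    # first block may downsample; each remaining block gets its own fresh weights
    blocks = [Bottleneck(in_ch, mid_ch, stride)]
    blocks += [Bottleneck(mid_ch * 4, mid_ch) for _ in range(num_blocks - 1)]
    return nn.Sequential(*blocks)

With this helper, ResNet101 is the stage list [3, 4, 23, 3] and ResNet152 is [3, 8, 36, 3]; for example, stage conv4_x of ResNet101 would be make_stage(512, 256, 23, stride=2).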