ResNet34_keras dropout

Reference: https://www.kaggle.com/meaninglesslives/unet-resnet34-in-keras


Backbone

ResNet34 network architecture:

The network is built in 4 stages, and the blue arrows mark the layers that get merged into the U-Net. Note: the encoder's forward pass performs 5 downsamplings in total, counting the initial 7×7 stride-2 convolution, so the decoder needs 5 upsampling steps; however, there are only 4 skip connections between encoder and decoder, as shown below for a 224×224 input:
[Figure: ResNet34 encoder stages and the four skip-connection points, with feature-map sizes for a 224×224 input]
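
To make the five downsamplings concrete, the spatial sizes can be checked with a quick sketch (the stage labels here are informal, not layer names from the code below):

# Feature-map side length for a 224x224 input: five halvings in total.
# Only four of the resulting activations are used as U-Net skip connections,
# since the deepest 7x7 map is the decoder's starting point, not a skip.
size = 224
for op in ('7x7 conv /2', 'maxpool /2', 'stage2 /2', 'stage3 /2', 'stage4 /2'):
    size //= 2
    print('{}: {}x{}'.format(op, size, size))
# 7x7 conv /2: 112x112
# maxpool /2: 56x56
# stage2 /2: 28x28
# stage3 /2: 14x14
# stage4 /2: 7x7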

Building ResNet34 in Keras

Building the convolution blocks

There are two forms of residual block:

[Figure: the two shortcut variants, A and B]

A: a plain identity shortcut.
B: a dashed shortcut that adjusts the feature-map dimensions with a 1×1 convolution.
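
The blocks below call three helpers, get_conv_params, get_bn_params, and handle_block_names, which the post does not show. Here is a minimal sketch of what they plausibly look like, modeled on the qubvel/segmentation_models code this post follows (the exact defaults are assumptions):

def get_conv_params(**params):
    # No bias: every conv here is immediately followed by BatchNormalization.
    default = {'kernel_initializer': 'glorot_uniform', 'use_bias': False, 'padding': 'valid'}
    default.update(params)
    return default


def get_bn_params(**params):
    # axis=3 normalizes over channels for 'channels_last' data.
    default = {'axis': 3, 'momentum': 0.99, 'epsilon': 2e-5, 'center': True, 'scale': True}
    default.update(params)
    return default


def handle_block_names(stage, block):
    # Unique per-block name prefixes, e.g. 'stage1_unit2_conv'.
    name_base = 'stage{}_unit{}_'.format(stage + 1, block + 1)
    return name_base + 'conv', name_base + 'bn', name_base + 'relu', name_base + 'sc'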

# NOTE: import paths vary across Keras versions; these match Keras 2.2.x and are
# an assumption, since the original post shows no imports.
from keras import backend as K
from keras.layers import (Activation, Add, BatchNormalization, Conv2D, Dense,
                          GlobalAveragePooling2D, Input, MaxPooling2D, ZeroPadding2D)
from keras.models import Model
from keras.utils import get_source_inputs
from keras_applications.imagenet_utils import _obtain_input_shape


def basic_identity_block(filters, stage, block):
    """The identity block is the block that has no conv layer at the shortcut.

    # Arguments
        filters: integer, the number of filters of the two 3x3 conv layers on the main path
        stage: integer, current stage label, used for generating layer names
        block: integer, current block label, used for generating layer names

    # Returns
        A function mapping an input tensor to the output tensor of the block.
    """

    def layer(input_tensor):
        conv_params = get_conv_params()
        bn_params = get_bn_params()
        conv_name, bn_name, relu_name, sc_name = handle_block_names(stage, block)

        x = BatchNormalization(name=bn_name + '1', **bn_params)(input_tensor)
        x = Activation('relu', name=relu_name + '1')(x)
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), name=conv_name + '1', **conv_params)(x)

        x = BatchNormalization(name=bn_name + '2', **bn_params)(x)
        x = Activation('relu', name=relu_name + '2')(x)
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), name=conv_name + '2', **conv_params)(x)

        x = Add()([x, input_tensor])
        return x

    return layer


def basic_conv_block(filters, stage, block, strides=(2, 2)):
    """The identity block is the block that has no conv layer at shortcut. # Arguments input_tensor: input tensor kernel_size: default 3, the kernel size of middle conv layer at main path filters: list of integers, the filters of 3 conv layer at main path stage: integer, current stage label, used for generating layer names block: 'a','b'..., current block label, used for generating layer names # Returns Output tensor for the block. """

    def layer(input_tensor):
        conv_params = get_conv_params()
        bn_params = get_bn_params()
        conv_name, bn_name, relu_name, sc_name = handle_block_names(stage, block)

        x = BatchNormalization(name=bn_name + '1', **bn_params)(input_tensor)
        x = Activation('relu', name=relu_name + '1')(x)
        shortcut = x
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), strides=strides, name=conv_name + '1', **conv_params)(x)

        x = BatchNormalization(name=bn_name + '2', **bn_params)(x)
        x = Activation('relu', name=relu_name + '2')(x)
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), name=conv_name + '2', **conv_params)(x)

        shortcut = Conv2D(filters, (1, 1), name=sc_name, strides=strides, **conv_params)(shortcut)
        x = Add()([x, shortcut])
        return x

    return layer
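
A quick shape check of the two block types (a hypothetical snippet, only meant to exercise the functions above):

inp = Input(shape=(56, 56, 64))
x = basic_conv_block(128, stage=1, block=0, strides=(2, 2))(inp)  # B: 1x1 conv shortcut, downsamples
x = basic_identity_block(128, stage=1, block=1)(x)                # A: plain identity shortcut
print(K.int_shape(x))  # (None, 28, 28, 128)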

Building the ResNet34 network

The network structure is as shown in the figure above.

def build_resnet(
     repetitions=(2, 2, 2, 2),
     include_top=True,
     input_tensor=None,
     input_shape=None,
     classes=1000,
     block_type='usual'):
    # Determine proper input shape
    input_shape = _obtain_input_shape(input_shape,
                                      default_size=224,
                                      min_size=197,
                                      data_format='channels_last',
                                      require_flatten=include_top)
    if input_tensor is None:
        img_input = Input(shape=input_shape, name='data')
    else:
        if not K.is_keras_tensor(input_tensor):
            img_input = Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor
    # get parameters for model layers
    no_scale_bn_params = get_bn_params(scale=False)
    bn_params = get_bn_params()
    conv_params = get_conv_params()
    init_filters = 64
    if block_type == 'basic':
        conv_block = basic_conv_block
        identity_block = basic_identity_block
    else:
        conv_block = usual_conv_block
        identity_block = usual_identity_block
    # resnet bottom (stem)
    x = BatchNormalization(name='bn_data', **no_scale_bn_params)(img_input)
    x = ZeroPadding2D(padding=(3, 3))(x)
    x = Conv2D(init_filters, (7, 7), strides=(2, 2), name='conv0', **conv_params)(x)
    x = BatchNormalization(name='bn0', **bn_params)(x)
    x = Activation('relu', name='relu0')(x)
    x = ZeroPadding2D(padding=(1, 1))(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='valid', name='pooling0')(x)
    # resnet body repetitions = (3,4,6,3)
    for stage, rep in enumerate(repetitions):
        for block in range(rep):
            # print(block)
            filters = init_filters * (2**stage) 
            # first block of first stage without strides because we have maxpooling before
            if block == 0 and stage == 0:
                # x = conv_block(filters, stage, block, strides=(1, 1))(x)
                x = identity_block(filters, stage, block)(x)
                continue 
            elif block == 0:
                x = conv_block(filters, stage, block, strides=(2, 2))(x)  
            else:
                x = identity_block(filters, stage, block)(x)          
    x = BatchNormalization(name='bn1', **bn_params)(x)
    x = Activation('relu', name='relu1')(x)
    # resnet top
    if include_top:
        x = GlobalAveragePooling2D(name='pool1')(x)
        x = Dense(classes, name='fc1')(x)
        x = Activation('softmax', name='softmax')(x)
    # Ensure that the model takes into account any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = get_source_inputs(input_tensor)
    else:
        inputs = img_input      
    # Create model.
    model = Model(inputs, x)
    return model


def ResNet34(input_shape, input_tensor=None, weights=None, classes=1000, include_top=True):
    model = build_resnet(input_tensor=input_tensor,
                         input_shape=input_shape,
                         repetitions=(3, 4, 6, 3),
                         classes=classes,
                         include_top=include_top,
                         block_type='basic')
    model.name = 'resnet34'
    if weights:
        load_model_weights(weights_collection, model, weights, classes, include_top)
    return model
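
Putting it together (a usage sketch; weights_collection and load_model_weights come from the original repository and are not shown here, so weights is left as None):

model = ResNet34(input_shape=(224, 224, 3), weights=None, classes=1000, include_top=True)
model.summary()  # the four stages repeat (3, 4, 6, 3) basic blocks, then pool1 -> fc1 -> softmax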

Decoder

def build_unet(backbone, classes, skip_connection_layers,
               decoder_filters=(256,128,64,32,16),
               upsample_rates=(2,2,2,2,2),
               n_upsample_blocks=5,
               block_type='upsampling',
               activation='sigmoid',
               use_batchnorm=True):
    input = backbone.input
    x = backbone.output
    if block_type == 'transpose':
        up_block = Transpose2D_block
    else:
        up_block = Upsample2D_block
    # convert layer names to indices
    skip_connection_idx = ([get_layer_number(backbone, l) if isinstance(l, str) else l
                               for l in skip_connection_layers])
    # print(skip_connection_idx) [128, 73, 36, 5]
    for i in range(n_upsample_blocks):
        # print(i)
        # check if there is a skip connection
        skip_connection = None
        if i < len(skip_connection_idx):
            skip_connection = backbone.layers[skip_connection_idx[i]].output
            # print(backbone.layers[skip_connection_idx[i]])
            # <keras.layers.core.Activation object at 0x00000164CC562A20>
        upsample_rate = to_tuple(upsample_rates[i])
        x = up_block(decoder_filters[i], i, upsample_rate=upsample_rate,
                     skip=skip_connection, use_batchnorm=use_batchnorm)(x)
    x = Conv2D(classes, (3,3), padding='same', name='final_conv')(x)
    x = Activation(activation, name=activation)(x)
    model = Model(input, x)
    return model
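
build_unet references Upsample2D_block (and Transpose2D_block), get_layer_number, and to_tuple, none of which are shown in the post. Here is a minimal sketch of the upsampling decoder block and of to_tuple, modeled on the qubvel/segmentation_models implementation (layer names and exact structure are assumptions; get_layer_number is only needed when skip connections are given as layer names rather than indices):

def to_tuple(x):
    # accept either a scalar rate like 2 or an already-formed tuple like (2, 2)
    return x if isinstance(x, tuple) else (x, x)


def Upsample2D_block(filters, stage, kernel_size=(3, 3), upsample_rate=(2, 2),
                     use_batchnorm=False, skip=None):
    # 'stage' would be used for layer naming in the original implementation
    def layer(input_tensor):
        x = UpSampling2D(size=upsample_rate)(input_tensor)
        if skip is not None:
            # merge with the matching encoder feature map (the skip connection)
            x = Concatenate()([x, skip])
        for _ in range(2):
            x = Conv2D(filters, kernel_size, padding='same')(x)
            if use_batchnorm:
                x = BatchNormalization()(x)
            x = Activation('relu')(x)
        return x
    return layer

With that in place, the full U-Net can be assembled like this (the layer indices are the ones printed in the comment inside build_unet above):

backbone = ResNet34(input_shape=(224, 224, 3), include_top=False)
model = build_unet(backbone, classes=1,
                   skip_connection_layers=(128, 73, 36, 5))
model.summary()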

References:

  1. https://www.kaggle.com/meaninglesslives/unet-resnet34-in-keras
  2. https://github.com/qubvel/segmentation_models/blob/master/segmentation_models/unet/model.py