ResNet34_keras dropout

Reference: https://www.kaggle.com/meaninglesslives/unet-resnet34-in-keras


backbone

ResNet34 network structure diagram:

The network is built in 4 stages; the blue arrows mark the layers that get merged in the U-Net. Note: the encoder (forward pass) downsamples 5 times in total, including the initial 7×7 convolution with stride 2, so the decoder needs 5 upsampling steps, but there are only 4 skip connections between encoder and decoder, as shown below for an input image of size 224×224:
[Figure: encoder/decoder feature-map sizes for a 224×224 input]
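
For reference, the spatial sizes implied by the strides in the code below, for a 224×224 input (a sketch; stage numbers are 1-based as in the figure):

224×224 → conv0 (7×7, stride 2) → 112×112 → max pool (stride 2) → 56×56 → stage 2 (stride 2) → 28×28 → stage 3 (stride 2) → 14×14 → stage 4 (stride 2) → 7×7

The four skip connections are taken at the 112×112, 56×56, 28×28 and 14×14 activations; the fifth, final upsampling back to 224×224 has no encoder counterpart.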

Building ResNet34 in Keras

Building the convolution blocks

There are two forms:
[Figure: the two residual block variants]
A: a plain (identity) shortcut
B: the dashed shortcut adjusts the feature-map dimensions with a 1×1 convolution

# Keras imports used throughout (module paths assume Keras 2.x and vary
# slightly across versions; _obtain_input_shape may live in
# keras_applications.imagenet_utils in newer releases). The helpers
# get_conv_params, get_bn_params, handle_block_names, load_model_weights and
# weights_collection come from qubvel's classification_models repository.
from keras import backend as K
from keras.models import Model
from keras.layers import (Input, Conv2D, BatchNormalization, Activation,
                          ZeroPadding2D, MaxPooling2D, GlobalAveragePooling2D,
                          Dense, Add)
from keras.applications.imagenet_utils import _obtain_input_shape
from keras.utils import get_source_inputs


def basic_identity_block(filters, stage, block):
    """The identity block has no conv layer at the shortcut.
    Uses pre-activation ordering (BN -> ReLU -> Conv).

    # Arguments
        filters: integer, number of filters of the two 3x3 convs on the main path
        stage: integer, current stage label, used for generating layer names
        block: integer, current block label, used for generating layer names

    # Returns
        Output tensor for the block.
    """

    def layer(input_tensor):
        conv_params = get_conv_params()
        bn_params = get_bn_params()
        conv_name, bn_name, relu_name, sc_name = handle_block_names(stage, block)

        x = BatchNormalization(name=bn_name + '1', **bn_params)(input_tensor)
        x = Activation('relu', name=relu_name + '1')(x)
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), name=conv_name + '1', **conv_params)(x)

        x = BatchNormalization(name=bn_name + '2', **bn_params)(x)
        x = Activation('relu', name=relu_name + '2')(x)
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), name=conv_name + '2', **conv_params)(x)

        x = Add()([x, input_tensor])
        return x

    return layer


def basic_conv_block(filters, stage, block, strides=(2, 2)):
    """The conv block has a 1x1 conv layer at the shortcut, used when the
    spatial size or the number of channels changes between stages.
    Uses pre-activation ordering (BN -> ReLU -> Conv).

    # Arguments
        filters: integer, number of filters of the two 3x3 convs on the main path
        stage: integer, current stage label, used for generating layer names
        block: integer, current block label, used for generating layer names
        strides: strides of the first 3x3 conv and of the shortcut conv

    # Returns
        Output tensor for the block.
    """

    def layer(input_tensor):
        conv_params = get_conv_params()
        bn_params = get_bn_params()
        conv_name, bn_name, relu_name, sc_name = handle_block_names(stage, block)

        x = BatchNormalization(name=bn_name + '1', **bn_params)(input_tensor)
        x = Activation('relu', name=relu_name + '1')(x)
        shortcut = x
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), strides=strides, name=conv_name + '1', **conv_params)(x)

        x = BatchNormalization(name=bn_name + '2', **bn_params)(x)
        x = Activation('relu', name=relu_name + '2')(x)
        x = ZeroPadding2D(padding=(1, 1))(x)
        x = Conv2D(filters, (3, 3), name=conv_name + '2', **conv_params)(x)

        shortcut = Conv2D(filters, (1, 1), name=sc_name, strides=strides, **conv_params)(shortcut)
        x = Add()([x, shortcut])
        return x

    return layer
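
As a quick shape check, the two builders can be applied to a toy tensor (a minimal sketch, assuming the Keras imports above plus get_conv_params/get_bn_params/handle_block_names from qubvel's classification_models):

# The identity block preserves the spatial size; the conv block with its
# default strides=(2, 2) halves it while changing the channel count.
toy = Input(shape=(56, 56, 64))
y = basic_identity_block(64, stage=0, block=1)(toy)   # -> (56, 56, 64)
y = basic_conv_block(128, stage=1, block=0)(y)        # -> (28, 28, 128)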

Building the ResNet34 network

The network structure is as shown in the diagram above.

def build_resnet(repetitions=(2, 2, 2, 2),
                 include_top=True,
                 input_tensor=None,
                 input_shape=None,
                 classes=1000,
                 block_type='usual'):
    # Determine proper input shape
    input_shape = _obtain_input_shape(input_shape,
                                      default_size=224,
                                      min_size=197,
                                      data_format='channels_last',
                                      require_flatten=include_top)
    if input_tensor is None:
        img_input = Input(shape=input_shape, name='data')
    else:
        if not K.is_keras_tensor(input_tensor):
            img_input = Input(tensor=input_tensor, shape=input_shape)
        else:
            img_input = input_tensor
    # get parameters for model layers
    no_scale_bn_params = get_bn_params(scale=False)
    bn_params = get_bn_params()
    conv_params = get_conv_params()
    init_filters = 64
    if block_type == 'basic':
        conv_block = basic_conv_block
        identity_block = basic_identity_block
    else:
        conv_block = usual_conv_block
        identity_block = usual_identity_block
    # resnet bottom (stem)
    x = BatchNormalization(name='bn_data', **no_scale_bn_params)(img_input)
    x = ZeroPadding2D(padding=(3, 3))(x)
    x = Conv2D(init_filters, (7, 7), strides=(2, 2), name='conv0', **conv_params)(x)
    x = BatchNormalization(name='bn0', **bn_params)(x)
    x = Activation('relu', name='relu0')(x)
    x = ZeroPadding2D(padding=(1, 1))(x)
    x = MaxPooling2D((3, 3), strides=(2, 2), padding='valid', name='pooling0')(x)
    # resnet body; for ResNet34, repetitions = (3, 4, 6, 3)
    for stage, rep in enumerate(repetitions):
        for block in range(rep):
            # print(block)
            filters = init_filters * (2**stage) 
            # first block of first stage without strides because we have maxpooling before
            if block == 0 and stage == 0:
                # x = conv_block(filters, stage, block, strides=(1, 1))(x)
                x = identity_block(filters, stage, block)(x)
                continue 
            elif block == 0:
                x = conv_block(filters, stage, block, strides=(2, 2))(x)  
            else:
                x = identity_block(filters, stage, block)(x)          
    x = BatchNormalization(name='bn1', **bn_params)(x)
    x = Activation('relu', name='relu1')(x)
    # resnet top
    if include_top:
        x = GlobalAveragePooling2D(name='pool1')(x)
        x = Dense(classes, name='fc1')(x)
        x = Activation('softmax', name='softmax')(x)
    # Ensure that the model takes into account any potential predecessors of `input_tensor`.
    if input_tensor is not None:
        inputs = get_source_inputs(input_tensor)
    else:
        inputs = img_input      
    # Create model.
    model = Model(inputs, x)
    return model


def ResNet34(input_shape, input_tensor=None, weights=None, classes=1000, include_top=True):
    model = build_resnet(input_tensor=input_tensor,
                         input_shape=input_shape,
                         repetitions=(3, 4, 6, 3),
                         classes=classes,
                         include_top=include_top,
                         block_type='basic')
    model.name = 'resnet34'
    if weights:
        load_model_weights(weights_collection, model, weights, classes, include_top)
    return model
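
A minimal usage sketch (no pretrained weights; include_top=False keeps the final 7×7×512 feature map that the decoder below consumes):

backbone = ResNet34(input_shape=(224, 224, 3), include_top=False)
backbone.summary()  # last activation 'relu1' is (7, 7, 512) for a 224x224 input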

Decoder

def build_unet(backbone, classes, skip_connection_layers,
               decoder_filters=(256,128,64,32,16),
               upsample_rates=(2,2,2,2,2),
               n_upsample_blocks=5,
               block_type='upsampling',
               activation='sigmoid',
               use_batchnorm=True):
    input = backbone.input
    x = backbone.output
    if block_type == 'transpose':
        up_block = Transpose2D_block
    else:
        up_block = Upsample2D_block
    # convert layer names to indices
    skip_connection_idx = ([get_layer_number(backbone, l) if isinstance(l, str) else l
                               for l in skip_connection_layers])
    # print(skip_connection_idx) [128, 73, 36, 5]
    for i in range(n_upsample_blocks):
        # print(i)
        # check if there is a skip connection
        skip_connection = None
        if i < len(skip_connection_idx):
            skip_connection = backbone.layers[skip_connection_idx[i]].output
            # print(backbone.layers[skip_connection_idx[i]])
            # <keras.layers.core.Activation object at 0x00000164CC562A20>
        upsample_rate = to_tuple(upsample_rates[i])
        x = up_block(decoder_filters[i], i, upsample_rate=upsample_rate,
                     skip=skip_connection, use_batchnorm=use_batchnorm)(x)
    x = Conv2D(classes, (3,3), padding='same', name='final_conv')(x)
    x = Activation(activation, name=activation)(x)
    model = Model(input, x)
    return model
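
Putting encoder and decoder together (a minimal sketch; the skip-connection indices are the ones printed in the comment above, and Upsample2D_block, Transpose2D_block, get_layer_number and to_tuple are helpers from qubvel's segmentation_models):

backbone = ResNet34(input_shape=(224, 224, 3), include_top=False)
model = build_unet(backbone, classes=1,
                   skip_connection_layers=(128, 73, 36, 5))  # deepest skip first
model.compile(optimizer='adam', loss='binary_crossentropy')  # e.g. binary masks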

References:

  1. https://www.kaggle.com/meaninglesslives/unet-resnet34-in-keras
  2. https://github.com/qubvel/segmentation_models/blob/master/segmentation_models/unet/model.py