This article takes a detailed look at how to implement multi-class semantic segmentation with a U-Net network in Keras. It is quite practical, so it is shared here as a reference; I hope you get something out of it after reading.
1 Introduction
U-Net was originally designed for semantic segmentation of medical images, and it has since been applied in other fields as well. Most uses, however, are still binary segmentation: the original image is split into two grey levels or colour classes, from which the object of interest in the image is then located.
This article uses the U-Net network structure to implement multi-class semantic segmentation and shows some of the test results; I hope it is useful to you!
2 Source code
(1) Training the model
from __future__ import print_function
import os
import datetime
import numpy as np
from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, MaxPooling2D, Conv2DTranspose, \
    AveragePooling2D, Dropout, BatchNormalization
from keras.optimizers import Adam
from keras.layers.convolutional import UpSampling2D
from keras.callbacks import ModelCheckpoint
from keras import backend as K
from keras.layers.advanced_activations import LeakyReLU, ReLU
import cv2

PIXEL = 512        # set your image size
BATCH_SIZE = 5
lr = 0.001
EPOCH = 100
X_CHANNEL = 3      # training images channel
Y_CHANNEL = 3      # label images channel; must match the network's 3-channel output
X_NUM = 422        # your training data number

pathX = 'I:\\Pascal VOC Dataset\\train1\\images\\'              # change to your file path
pathY = 'I:\\Pascal VOC Dataset\\train1\\SegmentationObject\\'  # change to your file path

# data processing: yields random batches of (image, colour-label) pairs forever
def generator(pathX, pathY, BATCH_SIZE):
    while 1:
        X_train_files = os.listdir(pathX)
        Y_train_files = os.listdir(pathY)
        a = np.arange(0, X_NUM)   # sample over all training indices
        X = []
        Y = []
        for i in range(BATCH_SIZE):
            index = np.random.choice(a)
            img = cv2.imread(pathX + X_train_files[index], 1)
            img = np.array(img).reshape(PIXEL, PIXEL, X_CHANNEL)
            X.append(img)
            img1 = cv2.imread(pathY + Y_train_files[index], 1)
            img1 = np.array(img1).reshape(PIXEL, PIXEL, Y_CHANNEL)
            Y.append(img1)
        X = np.array(X)
        Y = np.array(Y)
        yield X, Y

# create the U-Net network (size comments give the feature-map side for a 512 input)
inputs = Input((PIXEL, PIXEL, 3))
conv1 = Conv2D(8, 3, activation='relu', padding='same', kernel_initializer='he_normal')(inputs)
pool1 = AveragePooling2D(pool_size=(2, 2))(conv1)  # 256

conv2 = BatchNormalization(momentum=0.99)(pool1)
conv2 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
conv2 = BatchNormalization(momentum=0.99)(conv2)
conv2 = Conv2D(64, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv2)
conv2 = Dropout(0.02)(conv2)
pool2 = AveragePooling2D(pool_size=(2, 2))(conv2)  # 128

conv3 = BatchNormalization(momentum=0.99)(pool2)
conv3 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
conv3 = BatchNormalization(momentum=0.99)(conv3)
conv3 = Conv2D(128, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv3)
conv3 = Dropout(0.02)(conv3)
pool3 = AveragePooling2D(pool_size=(2, 2))(conv3)  # 64

conv4 = BatchNormalization(momentum=0.99)(pool3)
conv4 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = BatchNormalization(momentum=0.99)(conv4)
conv4 = Conv2D(256, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
conv4 = Dropout(0.02)(conv4)
pool4 = AveragePooling2D(pool_size=(2, 2))(conv4)  # 32

conv5 = BatchNormalization(momentum=0.99)(pool4)
conv5 = Conv2D(512, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = BatchNormalization(momentum=0.99)(conv5)
conv5 = Conv2D(512, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv5)
conv5 = Dropout(0.02)(conv5)
# conv5 = Conv2D(35, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv4)
# drop4 = Dropout(0.02)(conv5)

# note: pool4 is overwritten here, so the conv5 branch above is dead code that
# never feeds the decoder (kept as in the original)
pool4 = AveragePooling2D(pool_size=(2, 2))(pool3)  # 32
pool5 = AveragePooling2D(pool_size=(2, 2))(pool4)  # 16

conv6 = BatchNormalization(momentum=0.99)(pool5)
conv6 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv6)
up7 = UpSampling2D(size=(2, 2))(conv7)  # 32
conv7 = Conv2D(256, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up7)
merge7 = concatenate([pool4, conv7], axis=3)

conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge7)
up8 = UpSampling2D(size=(2, 2))(conv8)  # 64
conv8 = Conv2D(128, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up8)
merge8 = concatenate([pool3, conv8], axis=3)

conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge8)
up9 = UpSampling2D(size=(2, 2))(conv9)  # 128
conv9 = Conv2D(64, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up9)
merge9 = concatenate([pool2, conv9], axis=3)

conv10 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(merge9)
up10 = UpSampling2D(size=(2, 2))(conv10)  # 256
conv10 = Conv2D(32, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up10)

conv11 = Conv2D(16, 3, activation='relu', padding='same', kernel_initializer='he_normal')(conv10)
up11 = UpSampling2D(size=(2, 2))(conv11)  # 512
conv11 = Conv2D(8, 3, activation='relu', padding='same', kernel_initializer='he_normal')(up11)

conv12 = Conv2D(3, 1, activation='relu', padding='same', kernel_initializer='he_normal')(conv11)

model = Model(inputs=inputs, outputs=conv12)  # `input=`/`output=` are deprecated keywords
print(model.summary())
model.compile(optimizer=Adam(lr=1e-3), loss='mse', metrics=['accuracy'])

history = model.fit_generator(generator(pathX, pathY, BATCH_SIZE),
                              steps_per_epoch=600,
                              epochs=EPOCH)  # `nb_epoch` is the obsolete spelling
end_time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')

# save your trained model (HDF5 format; the original '.h6' extension was a typo)
model.save(r'V1_828.h5')
# save your loss history
mse = np.array(history.history['loss'])
np.save(r'V1_828.npy', mse)
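Since the network above regresses a 3-channel colour image rather than per-class probabilities, a prediction still has to be mapped back to discrete class labels before it can be evaluated or visualised per class. Below is a minimal sketch of that post-processing step; the function name `colours_to_classes` and the palette values are illustrative assumptions, not taken from the original code, and in practice `pred` would come from `model.predict`.

```python
import numpy as np

# Hypothetical class palette: each class is identified by one RGB colour.
# These three colours are assumptions for illustration only.
PALETTE = np.array([
    [0, 0, 0],      # class 0: background
    [128, 0, 0],    # class 1
    [0, 128, 0],    # class 2
], dtype=np.float32)

def colours_to_classes(pred):
    """Map an (H, W, 3) float prediction to an (H, W) class-index map
    by assigning each pixel to its nearest palette colour."""
    # diff has shape (H, W, n_classes, 3); dist is squared Euclidean distance
    diff = pred[:, :, None, :] - PALETTE[None, None, :, :]
    dist = np.sum(diff * diff, axis=-1)
    return np.argmin(dist, axis=-1)

# Tiny synthetic 2x2 "prediction" to demonstrate the mapping
pred = np.array([[[130., 2., 1.], [0., 0., 0.]],
                 [[3., 125., 0.], [127., 1., 2.]]])
print(colours_to_classes(pred))  # -> [[1 0]
                                 #     [2 1]]
```

A network output rarely lands exactly on a palette colour, so snapping to the nearest colour is a simple way to recover a clean label map from the regressed image.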