RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.
    Please use this identifier to cite or link to this item: http://ir.lib.cyut.edu.tw:8080/handle/310901800/43039


    Title: 基於FaceNet之口罩人臉辨識研究
    FaceNet-based Mask Face Recognition Research
    Authors: 蕭宏州
    HSIAO, HUNG-CHOU
    Contributors: Department of Information Management (資訊管理系)
    鄭文昌;李麗華
    CHENG, WEN-CHANG;LI, LI-HUA
    Keywords: 人臉辨識;深度學習;基因演算法;退火機制;分類器
    face recognition;deep learning;genetic algorithm;annealing mechanism;classifier
    Date: 2024-03-01
    Issue Date: 2024-04-18 10:13:15 (UTC+8)
    Abstract: Deep learning has driven advances in face recognition, but the COVID-19 pandemic made mask wearing necessary to reduce the risk of infection, which poses new challenges for face recognition. Based on FaceNet and the customized MASK600 dataset, this research proposes three masked-face recognition approaches. Wearing a mask degrades recognition performance because it affects some dimensions of the feature vector produced by FaceNet. The first approach therefore uses a Genetic Algorithm to select and remove the affected dimensions of the 128-dimensional FaceNet embedding. Experiments confirm that automatically selecting and removing the mask-affected dimensions does improve recognition performance, but there is still room for improvement before practical use. The second approach thus combines FaceNet with Transfer Learning and an annealing mechanism, retraining the validated InceptionResNetV2, InceptionV3, and MobileNetV2 model architectures. Experiments show that, for all three model sizes, training with Cosine Annealing outperforms the Fixed, Step, and Exponential learning-rate schedules; however, this approach still requires many training epochs. To reduce the number of training epochs, the third approach combines FaceNet and a classifier with the Transfer Learning of the second approach: Triplet Loss and SoftMax Loss are combined in a new loss function, and the optimizer's learning rate is updated by Cosine Annealing so that the model converges faster during training. Experiments confirm that, compared with the second approach, this approach not only reaches a practical level but also saves training epochs. The test-set accuracies of the InceptionResNetV2, InceptionV3, and MobileNetV2 models are 93.76%, 93.31%, and 93.59%, respectively.
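    The first approach evolves a binary keep/drop mask over the 128 FaceNet embedding dimensions with a Genetic Algorithm. A toy sketch of such a GA is shown below; the operators, hyperparameters, and the toy fitness function are illustrative assumptions (the thesis scores masks by actual recognition performance on MASK600, not by a synthetic target):

    ```python
    import random

    DIMS = 128  # FaceNet embedding dimensionality

    def evolve_mask(fitness, pop_size=30, generations=60, p_mut=0.02, seed=0):
        """Toy Genetic Algorithm over binary masks of the 128 embedding
        dimensions: 1 keeps a dimension, 0 removes it. `fitness` scores a
        mask (higher is better). Selection is tournament-of-3, crossover
        is single-point, mutation flips individual bits."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in range(DIMS)] for _ in range(pop_size)]
        for _ in range(generations):
            nxt = []
            for _ in range(pop_size):
                a = max(rng.sample(pop, 3), key=fitness)   # tournament selection
                b = max(rng.sample(pop, 3), key=fitness)
                cut = rng.randrange(1, DIMS)               # single-point crossover
                child = a[:cut] + b[cut:]
                for i in range(DIMS):                      # bit-flip mutation
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
            pop = nxt
        return max(pop, key=fitness)

    # Toy stand-in fitness: pretend dimensions 0-9 are corrupted by mask
    # wearing, so a good mask drops them and keeps the remaining 118.
    CORRUPTED = set(range(10))
    def toy_fitness(mask):
        return sum(1 - m if i in CORRUPTED else m for i, m in enumerate(mask))
    ```

    In the thesis's setting, `fitness` would embed the MASK600 faces with FaceNet, zero out the dropped dimensions, and return recognition accuracy, so the GA converges toward a mask that removes exactly the dimensions the face mask corrupts.
    
    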
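    The third approach pairs a Cosine Annealing learning-rate schedule with a loss that adds Triplet Loss to SoftMax (cross-entropy) Loss. A minimal sketch of what those components compute follows; the function names, default values, and the combination weight are illustrative assumptions, not taken from the thesis:

    ```python
    import math

    def cosine_annealing_lr(epoch, total_epochs, lr_max=1e-3, lr_min=1e-6):
        """Cosine Annealing: the learning rate decays from lr_max to lr_min
        along a half cosine over the training run."""
        return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * epoch / total_epochs))

    def triplet_loss(anchor, positive, negative, margin=0.2):
        """FaceNet-style triplet loss on embedding vectors: pull the anchor
        toward the positive and push it from the negative by a margin."""
        d = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))  # squared L2
        return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)

    def softmax_cross_entropy(logits, label):
        """SoftMax (cross-entropy) loss for the classifier head,
        computed with the usual max-shift for numerical stability."""
        m = max(logits)
        log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
        return log_sum - logits[label]

    def combined_loss(anchor, positive, negative, logits, label, weight=1.0):
        """Joint objective of the third approach: Triplet Loss plus a
        weighted SoftMax Loss (the weight here is an assumed hyperparameter)."""
        return triplet_loss(anchor, positive, negative) + weight * softmax_cross_entropy(logits, label)
    ```

    The schedule starts at `lr_max` (epoch 0, cosine = 1) and ends at `lr_min` (final epoch, cosine = -1), which is the "faster convergence" mechanism the abstract credits over Fixed, Step, and Exponential schedules.
    
    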
    Appears in Collections: [Department of Information Management / Graduate Institute of Information Technology] Master's and Doctoral Theses

    Files in This Item:

    File: 112CYUT0396004-002.pdf  Size: 2995 KB  Format: Adobe PDF


    All items in CYUTIR are protected by copyright, with all rights reserved.


    Copyright Policy Statement
    1. The digital content on this site is the institutional repository collected by Chaoyang University of Technology, provided free of charge for public-interest purposes such as academic research and public education. Please use the content of this site moderately and reasonably, respecting the rights of the copyright holders; for commercial use, please first obtain authorization from the copyright holder.
    2. Every effort has been made in building this site to avoid infringing the rights of copyright holders. If you find that any digital content on this site nevertheless infringes a copyright holder's rights, please notify the site maintainer (yjhung@cyut.edu.tw), who will immediately take remedial measures such as removing the work in question.
