Knowledge Distillation for Improved Spoof Detection
(2025) In Master’s Theses in Mathematical Sciences 2025:E2, FMAM05 20242, Mathematics (Faculty of Engineering)
- Abstract
- As deep neural networks grow more powerful, they also require more computational resources, which becomes a challenge when deploying them on edge devices with limited computational capacity. This thesis investigates knowledge distillation, a model compression technique in which a large "teacher" network transfers its learned representations to a smaller "student" network. Our goal is to maintain high accuracy in spoof detection for fingerprint biometric identification while reducing model size and computational cost. We focus on distilling knowledge from a ResNet18 teacher model into lightweight MobileNet-based students (TinyNet and MicroNet), testing logit-based and feature-based distillation strategies as well as projection methods between teacher and student layers. Experiments on both public and internal datasets with varied crop sizes show that distillation improves the performance of the smaller models, with feature-based distillation using convolutional projections giving the best results. These results demonstrate the potential of knowledge distillation for deploying robust spoof detection models in real-world, resource-constrained environments.
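To make the two distillation strategies named in the abstract concrete, the following is a minimal sketch (not the thesis code) written against PyTorch: a logit-based loss that matches temperature-softened teacher and student outputs, and a feature-based loss that uses a 1x1 convolutional projection to align student feature maps with the teacher's. Channel sizes, the temperature, and the loss weights are illustrative assumptions, not values from the thesis.

```python
# Hedged sketch of logit-based and feature-based knowledge distillation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def logit_distillation_loss(student_logits, teacher_logits, T=4.0):
    """Hinton-style logit distillation: KL divergence between softened
    teacher and student class distributions, scaled by T^2."""
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T ** 2)

class ConvProjection(nn.Module):
    """1x1 convolution mapping student feature maps to the teacher's channel
    width so the two can be compared with an L2 (MSE) loss."""
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        projected = self.proj(student_feat)
        # If spatial sizes differ, resample the projected student features.
        if projected.shape[-2:] != teacher_feat.shape[-2:]:
            projected = F.interpolate(projected, size=teacher_feat.shape[-2:],
                                      mode="bilinear", align_corners=False)
        return F.mse_loss(projected, teacher_feat)

def total_loss(student_out, teacher_out, labels, projection,
               student_feat, teacher_feat, alpha=0.5, beta=0.5):
    """Combined objective: cross-entropy on the spoof/live labels plus the
    two distillation terms; alpha and beta are assumed tuning weights."""
    ce = F.cross_entropy(student_out, labels)
    kd = logit_distillation_loss(student_out, teacher_out.detach())
    fd = projection(student_feat, teacher_feat.detach())
    return ce + alpha * kd + beta * fd
```

In this kind of setup the teacher outputs are detached so that only the student and the projection layer receive gradients; the projection is discarded after training, so the deployed student keeps its small footprint.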
- Popular Abstract (translated from Swedish)
- Artificial intelligence is becoming increasingly popular and is used today in everything from computer vision and self-driving cars to chatbot assistants such as OpenAI's world-famous ChatGPT. At the core of today's boom in artificial intelligence are neural networks, which became practical to use thanks to a large increase in computational power from graphics cards. Neural networks require astronomical amounts of computation, and the very largest models contain billions of numbers and trillions of multiplications, so it is easy to see how the electricity bill can go through the roof. In our thesis project, we have studied a popular new technique for "distilling" the knowledge from a massive neural network into a much smaller one, while still retaining the character and performance of the much larger model.
Please use this URL to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9183839
- author
- Petersson, Karl-Johan LU and Holm, Emil LU
- supervisor
- Ivar Persson LU
- organization
- course
- FMAM05 20242
- year
- 2025
- type
- H2 - Master's Degree (Two Years)
- subject
- keywords
- Artificial Intelligence, Deep Learning, Resource Constrained Environments, Model Compression, Knowledge Distillation, Teacher-Student Training, Biometric Security, Fingerprint Biometrics, Spoof Detection, Feature Regularization
- publication/series
- Master’s Theses in Mathematical Sciences 2025:E2
- report number
- LUTFMA-3567-2025
- ISSN
- 1404-6342
- language
- English
- id
- 9183839
- date added to LUP
- 2025-04-02 14:17:03
- date last changed
- 2025-04-02 14:17:03
@misc{9183839,
  abstract = {{As deep neural networks grow more powerful, they also require more computational resources, which becomes a challenge when deploying on edge devices with limited computational capacity. This thesis looks into knowledge distillation, a model compression technique where a large "teacher" network transfers its learned features to a smaller "student" network. Our goal is to maintain high accuracy in spoof detection of fingerprint biometric identification while reducing model size and computational costs. We focus on distilling knowledge from a ResNet18 teacher model into lightweight MobileNet based students (TinyNet and MicroNet), testing logit based and feature based distillation strategies and projection methods between teacher and student layers. Experiments using both public and internal datasets with varied cropping size show that distillation improves performance in smaller models, with feature based distillation using convolutional projections giving the best results. These results demonstrate the potential of knowledge distillation for deploying robust spoof detection models in real world, resource constrained environments.}},
  author = {{Petersson, Karl-Johan and Holm, Emil}},
  issn = {{1404-6342}},
  language = {{eng}},
  note = {{Student Paper}},
  series = {{Master’s Theses in Mathematical Sciences 2025:E2}},
  title = {{Knowledge Distillation for Improved Spoof Detection}},
  year = {{2025}},
}