Fingerprint Synthesis from Diffusion Models and Generative Adversarial Networks
(2025) Future of Information and Communication Conference, FICC 2025. In Lecture Notes in Networks and Systems, 1283 LNNS, p. 289-312.
- abstract
We present novel approaches involving generative adversarial networks and diffusion models to synthesize high-quality live and spoof fingerprint images while preserving features such as uniqueness and diversity. We generate live fingerprints from noise with a variety of methods, and we use image translation techniques to translate live fingerprint images to spoof. To generate different types of spoof images from limited training data, we incorporate style transfer techniques through a cycle autoencoder equipped with a Wasserstein metric and gradient penalty (CycleWGAN-GP) to avoid mode collapse and instability. We find that when the spoof training data includes distinct spoof characteristics, live-to-spoof translation improves. We assess the diversity and realism of the generated live fingerprint images mainly through the Fréchet Inception Distance (FID) and the False Acceptance Rate (FAR). Our best diffusion model achieved a FID of 15.78. The comparable WGAN-GP model achieved a slightly higher FID while performing better in the uniqueness assessment due to a slightly lower FAR when matched against the training data, indicating better creativity. Moreover, we give example images showing that a DDPM model can clearly generate realistic fingerprint images.
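For readers unfamiliar with the WGAN-GP training objective mentioned in the abstract, the sketch below shows the gradient penalty term that is added to the critic loss to stabilize training and discourage mode collapse. This is a minimal illustration assuming a PyTorch setup; the toy critic, the 64x64 grayscale image size, and lambda_gp = 10 are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a WGAN-GP critic update (not the paper's code).
import torch
import torch.nn as nn

def gradient_penalty(critic: nn.Module,
                     real: torch.Tensor,
                     fake: torch.Tensor,
                     lambda_gp: float = 10.0) -> torch.Tensor:
    """Penalize deviations of the critic's gradient norm from 1 on random
    interpolations between real and generated samples (Gulrajani et al., 2017)."""
    batch_size = real.size(0)
    # One interpolation coefficient per sample, broadcast over channel/spatial dims.
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

    critic_out = critic(x_hat)
    grads = torch.autograd.grad(outputs=critic_out,
                                inputs=x_hat,
                                grad_outputs=torch.ones_like(critic_out),
                                create_graph=True,
                                retain_graph=True)[0]
    grads = grads.reshape(batch_size, -1)
    return lambda_gp * ((grads.norm(2, dim=1) - 1.0) ** 2).mean()

# Toy critic just to make the sketch self-contained (assumes 1x64x64 fingerprint patches).
critic = nn.Sequential(
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
)

real = torch.rand(8, 1, 64, 64)   # stand-in for real (live or spoof) fingerprint patches
fake = torch.rand(8, 1, 64, 64)   # stand-in for generator output
# Critic loss = Wasserstein estimate + gradient penalty (minimized w.r.t. critic parameters).
loss_critic = critic(fake).mean() - critic(real).mean() + gradient_penalty(critic, real, fake)
loss_critic.backward()
```

In a cycle-consistent live-to-spoof translation setup such as the CycleWGAN-GP described in the abstract, a penalty of this form would typically be applied to the critic of each image domain, alongside the cycle-consistency loss between the two generators.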
- author
- Tang, Weizhong (LU); Llamosas, Diego Andre Figueroa; Liu, Donglin (LU); Johnsson, Kerstin (LU) and Sopasakis, Alexandros (LU)
- organization
- publishing date
- 2025
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- keywords
- Diffusion model, Fingerprint generation, Generative adversarial network
- host publication
- Advances in Information and Communication - Proceedings of the 2025 Future of Information and Communication Conference, FICC 2025
- series title
- Lecture Notes in Networks and Systems
- editor
- Arai, Kohei
- volume
- 1283 LNNS
- pages
- 24 pages
- publisher
- Springer Science and Business Media B.V.
- conference name
- Future of Information and Communication Conference, FICC 2025
- conference location
- Berlin, Germany
- conference dates
- 2025-04-28 - 2025-04-29
- external identifiers
- scopus:105000738416
- ISSN
- 2367-3389
- 2367-3370
- ISBN
- 9783031844560
- DOI
- 10.1007/978-3-031-84457-7_18
- language
- English
- LU publication?
- yes
- additional info
- Publisher Copyright: © The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
- id
- b1e075a4-972c-43e6-afbe-8ff9d554202b
- date added to LUP
- 2025-04-03 12:00:00
- date last changed
- 2025-07-10 19:43:51
@inproceedings{b1e075a4-972c-43e6-afbe-8ff9d554202b,
  abstract  = {{We present novel approaches involving generative adversarial networks and diffusion models in order to synthesize high-quality, live, and spoof fingerprint images while preserving features such as uniqueness and diversity. We generate live fingerprints from noise with a variety of methods, and we use image translation techniques to translate live fingerprint images to spoof. To generate different types of spoof images based on limited training data we incorporate style transfer techniques through a cycle autoencoder equipped with a Wasserstein metric along with Gradient Penalty (CycleWGAN-GP) in order to avoid mode collapse and instability. We find that when the spoof training data includes distinct spoof characteristics, it leads to improved live-to-spoof translation. We assess the diversity and realism of the generated live fingerprint images mainly through the Fréchet Inception Distance (FID) and the False Acceptance Rate (FAR). Our best diffusion model achieved a FID of 15.78. The comparable WGAN-GP model achieved slightly higher FID while performing better in the uniqueness assessment due to a slightly lower FAR when matched against the training data, indicating better creativity. Moreover, we give example images showing that a DDPM model clearly can generate realistic fingerprint images.}},
  author    = {{Tang, Weizhong and Llamosas, Diego Andre Figueroa and Liu, Donglin and Johnsson, Kerstin and Sopasakis, Alexandros}},
  booktitle = {{Advances in Information and Communication - Proceedings of the 2025 Future of Information and Communication Conference, FICC 2025}},
  editor    = {{Arai, Kohei}},
  isbn      = {{9783031844560}},
  issn      = {{2367-3389}},
  keywords  = {{Diffusion model; Fingerprint generation; Generative adversarial network}},
  language  = {{eng}},
  pages     = {{289--312}},
  publisher = {{Springer Science and Business Media B.V.}},
  series    = {{Lecture Notes in Networks and Systems}},
  title     = {{Fingerprint Synthesis from Diffusion Models and Generative Adversarial Networks}},
  url       = {{http://dx.doi.org/10.1007/978-3-031-84457-7_18}},
  doi       = {{10.1007/978-3-031-84457-7_18}},
  volume    = {{1283 LNNS}},
  year      = {{2025}},
}