Blockwise Classification: Resolving Inconsistencies with Spectral Correction and Hedged Ridge Regression
(2025) EITM02 20251
Department of Electrical and Information Technology
- Abstract
- Image classification systems have been widely applied in smartphones, industrial inspection, and automated devices. However, in real-world scenarios, existing classifiers often produce inconsistent or incorrect predictions due to factors such as image degradation, inter-class similarity, and model instability, thereby reducing the overall performance of the system. To address these issues and enhance the stability and robustness of classification systems under suboptimal conditions, this thesis proposes a post-processing error correction framework based on existing classification outputs. By integrating classification information, the proposed method can effectively identify and correct systematic prediction errors, thereby improving the reliability and accuracy of the final decision.
In the experiments, classifiers were constructed using different strategies, including KNN and SVM. These classifiers were used to recognize images, while principal eigenvalue analysis was employed to extract key discriminative features. Furthermore, Hedged Ridge Regression (HRR) was applied to enhance prediction stability and accuracy.
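As a rough illustration of the kind of pipeline described above (not the thesis code itself), the sketch below builds KNN and SVM base classifiers on eigenvalue-based features using scikit-learn; the stand-in dataset, the number of retained components, and all hyperparameters are assumptions made only for the example.

```python
# Illustrative sketch only: the thesis code is not published here, so the
# dataset, feature dimensionality, and hyperparameters are assumptions.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Standard image dataset as a stand-in (8x8 digit images).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# "Principal eigenvalue analysis" approximated here by PCA, i.e. projecting
# onto the eigenvectors of the data covariance with the largest eigenvalues.
pca = PCA(n_components=20).fit(X_train)
X_train_p, X_test_p = pca.transform(X_train), pca.transform(X_test)

# Two base classifiers built with different strategies, as in the abstract.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train_p, y_train)
svm = SVC(kernel="rbf", probability=True).fit(X_train_p, y_train)

print("KNN accuracy:", knn.score(X_test_p, y_test))
print("SVM accuracy:", svm.score(X_test_p, y_test))
```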
The experiments were conducted on a standard image dataset, where recognition errors mainly stemmed from the classifiers’ prediction uncertainty under real-world conditions. A block-wise structure was adopted during the classification process. The results demonstrate that the proposed method significantly improves classification accuracy and exhibits strong robustness, especially when training data is limited or predictions are unstable.
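The exact block-wise rule is not spelled out in this abstract, so the following toy sketch only illustrates the general idea under an assumed setup: two images form one block, blocks whose two predictions disagree are flagged as inconsistent, and a simple confidence rule stands in for the correction step.

```python
# Toy sketch of a block-wise structure. The pairing and the resolution rule
# below are assumptions for illustration, not the thesis's actual method.
import numpy as np

rng = np.random.default_rng(0)
pred = rng.integers(0, 10, size=100)   # stand-in per-image predicted labels
conf = rng.random(100)                 # stand-in per-image confidences

blocks = pred.reshape(-1, 2)           # two images per block
block_conf = conf.reshape(-1, 2)

# A block is inconsistent when its two images receive different labels.
inconsistent = blocks[:, 0] != blocks[:, 1]
print(f"{inconsistent.sum()} of {len(blocks)} blocks are inconsistent")

# Placeholder resolution: keep the label of the more confident image
# (standing in for the spectral/eigenvalue-based correction step).
resolved = np.where(block_conf[:, 0] >= block_conf[:, 1],
                    blocks[:, 0], blocks[:, 1])
print("first resolved block labels:", resolved[:5])
```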
Moreover, the method is computationally efficient and structurally lightweight, making it suitable for both standard computing platforms and resource-constrained devices.
In summary, the proposed classification strategy achieves a well-balanced trade-off among accuracy, robustness, and efficiency. Its flexible structure allows integration with various classification models, demonstrating strong adaptability and practical application value.
- Popular Abstract
- Imagine if your phone’s face unlock worked flawlessly, no matter how poor the lighting or how smudged the camera lens was. Our research aims to make image classification systems smarter and more reliable, even when the data is imperfect. Image classification technology is already a part of our daily lives: from unlocking phones and sorting photos to enabling self-driving cars to “see” the road. However, in real-world scenarios, photos are often blurry, lighting can be challenging, or the devices themselves may make mistakes. These issues can lead to unreliable classification results, causing inconvenience for users and limiting the effectiveness of the technology.
To address these “non-ideal” scenarios, we propose a new solution. Unlike traditional methods that rely on a single classification approach, our method can flexibly integrate with various classification strategies, such as One-vs-One and One-vs-All, and has demonstrated clear performance improvements in our experiments. This is like having a group of classmates check each other’s homework, so that mistakes can be quickly found and corrected.
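For readers who want to see what these two strategies look like in practice, here is a small sketch using scikit-learn's generic One-vs-One and One-vs-Rest wrappers; the choice of a linear SVM as the base learner and of a stand-in digits dataset are assumptions for the example, not details taken from the thesis.

```python
# Sketch of the two decomposition strategies named above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import LinearSVC

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One-vs-One trains a binary learner per class pair; One-vs-All (One-vs-Rest)
# trains one binary learner per class against all remaining classes.
ovo = OneVsOneClassifier(LinearSVC(max_iter=10000)).fit(X_tr, y_tr)
ova = OneVsRestClassifier(LinearSVC(max_iter=10000)).fit(X_tr, y_tr)

print("One-vs-One accuracy:", ovo.score(X_te, y_te))
print("One-vs-All accuracy:", ova.score(X_te, y_te))
```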
Our main innovation lies in simultaneously introducing both a Principal Eigenvalue Error Correction mechanism and a Hedged Ridge Regression method, both of which are equally important and complementary in our system. We employ a Block-Wise Classification structure, where two images are combined into a single block for joint classification. The Principal Eigenvalue Error Correction is applied based on the results of this block-wise classification to further refine and optimize the overall decision. Meanwhile, the Hedged Ridge Regression method significantly enhances the accuracy and stability of the model, with especially notable improvements when only a small amount of data is available.
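The “hedged” modification itself is not detailed in this summary, so the sketch below only shows the standard ridge-regression core that such a method builds on: the regularized closed-form solution whose penalty term is what provides stability when training data is scarce. The toy data and the regularization weight are assumptions made for the example.

```python
# Standard ridge-regression building block (not the thesis's hedged variant):
# w = (X^T X + lambda * I)^{-1} X^T y, where the lambda term keeps the solution
# well conditioned on small or noisy training sets.
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Return ridge coefficients for targets y (e.g. one-hot class scores)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))               # deliberately small training set
y = np.eye(3)[rng.integers(0, 3, size=40)]  # one-hot labels for 3 classes

W = ridge_fit(X, y, lam=5.0)
scores = X @ W                              # per-class scores
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == y.argmax(axis=1)).mean())
```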
Experiments on standard image datasets show that our method can significantly improve classification accuracy and robustness, particularly under noisy or low-data conditions. At the same time, the system is computationally efficient and resource-friendly, making it easy to deploy on ordinary computers or even small edge devices, with promising prospects for practical applications.
In summary, our work makes image classification not only smarter but also more dependable, opening up new possibilities for applications such as autonomous driving and security systems, and making everyday life more convenient and reliable.
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9197729
- author
- Huo, Chunguang LU and Hu, Gaobo LU
- supervisor
- organization
- course
- EITM02 20251
- year
- 2025
- type
- H2 - Master's Degree (Two Years)
- subject
- keywords
- Support Vector Machine, Eigenvalue, Block-Wise, classification, Joint processing, Hedged Ridge Regression, One-vs-All, One-vs-One
- report number
- LU/LTH-EIT 2025-1066
- language
- English
- id
- 9197729
- date added to LUP
- 2025-06-17 15:18:22
- date last changed
- 2025-06-17 15:18:22
@misc{9197729,
  abstract = {{Image classification systems have been widely applied in smartphones, industrial inspection, and automated devices. However, in real-world scenarios, existing classifiers often produce inconsistent or incorrect predictions due to factors such as image degradation, inter-class similarity, and model instability, thereby reducing the overall performance of the system. To address these issues and enhance the stability and robustness of classification systems under suboptimal conditions, this thesis proposes a post-processing error correction framework based on existing classification outputs. By integrating classification information, the proposed method can effectively identify and correct systematic prediction errors, thereby improving the reliability and accuracy of the final decision. In the experiments, classifiers were constructed using different strategies, including KNN and SVM. These classifiers were used to recognize images, while principal eigenvalue analysis was employed to extract key discriminative features. Furthermore, Hedged Ridge Regression (HRR) was applied to enhance prediction stability and accuracy. The experiments were conducted on a standard image dataset, where recognition errors mainly stemmed from the classifiers’ prediction uncertainty under real-world conditions. A block-wise structure was adopted during the classification process. The results demonstrate that the proposed method significantly improves classification accuracy and exhibits strong robustness, especially when training data is limited or predictions are unstable. Moreover, the method is computationally efficient and structurally lightweight, making it suitable for both standard computing platforms and resource-constrained devices. In summary, the proposed classification strategy achieves a well-balanced trade-off among accuracy, robustness, and efficiency. Its flexible structure allows integration with various classification models, demonstrating strong adaptability and practical application value.}},
  author = {{Huo, Chunguang and Hu, Gaobo}},
  language = {{eng}},
  note = {{Student Paper}},
  title = {{Blockwise Classification: Resolving Inconsistencies with Spectral Correction and Hedged Ridge Regression}},
  year = {{2025}},
}