Apress

Pattern Recognition and Classification

An Introduction

By Geoff Dougherty


This volume, both comprehensive and accessible, introduces all the key concepts in pattern recognition and includes many examples and exercises, making it an ideal guide to a methodology widely deployed in today’s ubiquitous automated systems.

Full Description

  • ISBN13: 978-1-4614-5322-2
  • 208 Pages
  • User Level: Science
  • Publication Date: October 28, 2012
  • Available eBook Formats: PDF
  • eBook Price: $109.00
The use of pattern recognition and classification is fundamental to many of the automated electronic systems in use today. However, despite the existence of a number of notable books in the field, the subject remains very challenging, especially for the beginner. Pattern Recognition and Classification presents a comprehensive introduction to the core concepts involved in automated pattern recognition. It is designed to be accessible to newcomers from varied backgrounds, but it will also be useful to researchers and professionals in image and signal processing and analysis, and in computer vision. Fundamental concepts of supervised and unsupervised classification are presented in an informal, rather than axiomatic, treatment so that the reader can quickly acquire the background needed to apply the concepts to real problems. More advanced topics, such as semi-supervised classification, combining clustering algorithms, and relevance feedback, are addressed in the later chapters. This book is suitable for undergraduates and graduates studying pattern recognition and machine learning.
Table of Contents

Preface
Acknowledgments

Chapter 1: Introduction
  1.1 Overview
  1.2 Classification
  1.3 Organization of the Book
  Bibliography
  Exercises

Chapter 2: Classification
  2.1 The Classification Process
  2.2 Features
  2.3 Training and Learning
  2.4 Supervised Learning and Algorithm Selection
  2.5 Approaches to Classification
  2.6 Examples
    2.6.1 Classification by Shape
    2.6.2 Classification by Size
    2.6.3 More Examples
    2.6.4 Classification of Letters
  Bibliography
  Exercises

Chapter 3: Non-Metric Methods
  3.1 Introduction
  3.2 Decision Tree Classifier
    3.2.1 Information, Entropy and Impurity
    3.2.2 Information Gain
    3.2.3 Decision Tree Issues
    3.2.4 Strengths and Weaknesses
  3.3 Rule-Based Classifier
  3.4 Other Methods
  Bibliography
  Exercises

Chapter 4: Statistical Pattern Recognition
  4.1 Measured Data and Measurement Errors
  4.2 Probability Theory
    4.2.1 Simple Probability Theory
    4.2.2 Conditional Probability and Bayes’ Rule
    4.2.3 Naïve Bayes Classifier
  4.3 Continuous Random Variables
    4.3.1 The Multivariate Gaussian
    4.3.2 The Covariance Matrix
    4.3.3 The Mahalanobis Distance
  Bibliography
  Exercises

Chapter 5: Supervised Learning
  5.1 Parametric and Non-Parametric Learning
  5.2 Parametric Learning
    5.2.1 Bayesian Decision Theory
    5.2.2 Discriminant Functions and Decision Boundaries
    5.2.3 MAP (Maximum A Posteriori) Estimator
  Bibliography
  Exercises

Chapter 6: Non-Parametric Learning
  6.1 Histogram Estimator and Parzen Windows
  6.2 k-Nearest Neighbor (k-NN) Classification
  6.3 Artificial Neural Networks (ANNs)
  6.4 Kernel Machines
  Bibliography
  Exercises

Chapter 7: Feature Extraction and Selection
  7.1 Reducing Dimensionality
    7.1.1 Pre-Processing
  7.2 Feature Selection
    7.2.1 Inter/Intra-Class Distance
    7.2.2 Subset Selection
  7.3 Feature Extraction
    7.3.1 Principal Component Analysis (PCA)
    7.3.2 Linear Discriminant Analysis (LDA)
  Bibliography
  Exercises

Chapter 8: Unsupervised Learning
  8.1 Clustering
  8.2 k-Means Clustering
    8.2.1 Fuzzy c-Means Clustering
  8.3 (Agglomerative) Hierarchical Clustering
  Bibliography
  Exercises

Chapter 9: Estimating and Comparing Classifiers
  9.1 Comparing Classifiers and the No Free Lunch Theorem
    9.1.2 Bias and Variance
  9.2 Cross-Validation and Resampling Methods
    9.2.1 The Holdout Method
    9.2.2 k-Fold Cross-Validation
    9.2.3 Bootstrap
  9.3 Measuring Classifier Performance
  9.4 Comparing Classifiers
    9.4.1 ROC Curves
    9.4.2 McNemar’s Test
    9.4.3 Other Statistical Tests
    9.4.4 The Classification Toolbox
  9.5 Combining Classifiers
  Bibliography

Chapter 10: Projects
  10.1 Retinal Tortuosity as an Indicator of Disease
  10.2 Segmentation by Texture
  10.3 Biometric Systems
    10.3.1 Fingerprint Recognition
    10.3.2 Face Recognition
  Bibliography

Index
Errata

If you think you've found an error in this book, please let us know about it. Any confirmed errata will be listed below, so you can check whether your concern has already been addressed.

No errata are currently published