Iris recognition using neural networks

March 28, 2023

Introduction

Various forms of human verification are in use around the world, since reliable identification is of great importance to organizations and institutions of all kinds. Today, the most important forms of human verification are recognition through DNA, face, fingerprint, signature, speech, and iris.

Among all of these, one of the most recent, reliable, and technologically advanced methods is iris recognition, which is already practiced by some organizations today and whose wide use in the future is beyond doubt. The iris is a unique organ made of colored muscle tissue patterned with fine lines. These lines are the main reason that no two irises are identical: even the irises of a person's two eyes differ completely from each other, and even in identical twins the irises are completely different. Each iris is distinguished by very fine lines, furrows, and vessels that vary from person to person, and the accuracy of identification through the iris increases as more of these details are used. It has been shown that the patterns of the iris remain essentially unchanged from the time a child is one year old throughout his life.

In recent years there has been considerable interest in the development of pattern recognition systems based on neural networks, due to their ability to classify data. The type of neural network used in this work is Learning Vector Quantization (LVQ), a competitive network well suited to pattern classification. The iris images that make up the database are stored in PNG (Portable Network Graphics) format and must first be pre-processed so that the iris boundary is located and its features extracted. To do so, edge detection is performed using the Canny approach. To extract more diverse and distinctive features from the iris images, the DCT transform is then applied.

2. Feature Extraction

To increase the accuracy of the iris verification system, we must extract features that capture the main elements of the images used for comparison and identification. The extracted features should introduce as little error into the system output as possible; ideally, the output error should be zero. The useful features are obtained through edge detection in the first step, followed by the DCT transform in the second.

2.1 Edge detection

The first step locates the outer boundary of the iris, that is, the boundary between the iris and the sclera. This is done by detecting edges in the grayscale iris image. In this work, the edges of the iris are detected using the Canny method, which finds edges by locating local maxima of the gradient. The gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds to detect strong and weak edges, and includes weak edges in the output only if they are connected to strong edges. This makes the method resistant to additive noise and capable of detecting “true” weak edges.
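As a minimal sketch of this step, assuming OpenCV is available, one can apply the built-in Canny detector to a grayscale iris image; the filename, blur kernel, and threshold values below are illustrative assumptions rather than values taken from this work.

import cv2

# Load the iris image in grayscale (hypothetical filename).
img = cv2.imread("iris_sample.png", cv2.IMREAD_GRAYSCALE)

# Smooth with a Gaussian to suppress additive noise before
# differentiation (kernel size and sigma are assumptions).
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)

# Canny with a weak and a strong threshold; weak edges are kept
# only where they connect to strong ones.
edges = cv2.Canny(blurred, 40, 100)  # (lower, upper) thresholds

cv2.imwrite("iris_edges.png", edges)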

Although some of the literature considers the detection of ideal step edges, the edges obtained from natural images are usually not ideal step edges. Instead, they are typically affected by one or more of the following effects: focal blur caused by a finite depth of field and a finite point spread function, penumbral blur caused by shadows from light sources of non-zero radius, shading at a smooth object edge, and local specularities or inter-reflections in the vicinity of object edges.

2.1.1 Canny method

Canny’s edge detection algorithm is regarded by many as the optimal edge detector. Canny’s intention was to improve on the many edge detectors that already existed when he began his work. He was very successful in achieving this goal, and his ideas and methods can be found in his paper, “A Computational Approach to Edge Detection”. In it, he followed a list of criteria for improving existing edge detection methods. The first and most obvious is a low error rate: it is important that edges present in the image are not missed and that there are no responses to non-edges. The second criterion is that the edge points be well localized; in other words, the distance between the edge pixels found by the detector and the actual edge must be minimal. A third criterion is to have only one response to a single edge. This was added because the first two criteria were not sufficient to completely eliminate the possibility of multiple responses to an edge.

The Canny operator works in a multi-stage process. First, the image is smoothed by Gaussian convolution. A simple 2-D first-derivative operator (something like the Roberts cross) is then applied to the smoothed image to highlight regions with significant spatial derivatives. Edges give rise to ridges in the gradient-magnitude image. The algorithm then scans along the top of these ridges and sets to zero all pixels that are not actually on the ridge top, producing a thin line in the output, a process known as nonmaximal suppression. The tracking process exhibits hysteresis controlled by two thresholds, T1 and T2, with T1 > T2. Tracking can only start at a point on a ridge higher than T1, and then continues in both directions from that point until the ridge height falls below T2. This hysteresis helps ensure that noisy edges are not broken up into multiple edge fragments.
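To make the hysteresis stage concrete, here is a minimal sketch of that final step, assuming grad_mag is the gradient-magnitude image left after nonmaximal suppression; the threshold values are illustrative, and connectivity is resolved with SciPy's component labeling instead of explicit ridge tracking.

import numpy as np
from scipy import ndimage

def hysteresis(grad_mag, t1=0.2, t2=0.1):
    # Strong pixels exceed T1; weak candidates exceed T2 (T1 > T2).
    strong = grad_mag >= t1
    weak = grad_mag >= t2  # includes the strong pixels
    # Label 8-connected components of the candidate pixels.
    labels, n = ndimage.label(weak, structure=np.ones((3, 3)))
    # Keep only components containing at least one strong pixel.
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False  # background stays off
    return keep[labels]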

2.2 Discrete Cosine Transform

Like any Fourier-related transform, the Discrete Cosine Transform (DCT) expresses a function or signal as a sum of sinusoids with different frequencies and amplitudes. Like the Discrete Fourier Transform (DFT), a DCT operates on a function at a finite number of discrete data points. The obvious distinction between a DCT and a DFT is that the former uses only cosine functions, while the latter uses both cosines and sines (in the form of complex exponentials). However, this visible difference is merely a consequence of a deeper distinction: a DCT implies different boundary conditions than the DFT or other related transforms.

Fourier-related transforms that operate on a function over a finite domain, such as the DFT, the DCT, or a Fourier series, can be thought of as implicitly defining an extension of that function outside the domain. That is, once you write a function f(x) as a sum of sinusoids, you can evaluate that sum at any x, even for x where the original f(x) was not specified. The DFT, like the Fourier series, implies a periodic extension of the original function. A DCT, like a cosine transform, implies an even extension of the original function.

A Discrete Cosine Transform expresses a finite sequence of data points as a sum of cosine functions oscillating at different frequencies. DCTs are important for numerous applications in science and engineering, from lossy compression of audio and images (where small high-frequency components can be discarded) to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, it turns out that cosine functions are much more efficient (as explained below, fewer are needed to approximate a typical signal), while for differential equations the cosines express a particular choice of boundary conditions.

In particular, a DCT is a Fourier-related transform similar to the Discrete Fourier Transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even), where in some variants the input and/or output data are shifted by half a sample. There are eight standard DCT variants, of which four are common.
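This equivalence is easy to check numerically. The sketch below, given purely as an illustration, mirrors a real sequence into an even-symmetric extension of twice the length, takes its DFT, and applies the half-sample phase shift to recover SciPy's unnormalized DCT-II.

import numpy as np
from scipy.fft import dct

x = np.random.rand(8)
N = len(x)

# Even-symmetric extension of length 2N: [x0, ..., x7, x7, ..., x0].
y = np.concatenate([x, x[::-1]])
Y = np.fft.fft(y)

# A half-sample phase shift turns the DFT of the mirrored data
# into the DCT-II of the original sequence.
k = np.arange(N)
X = (np.exp(-1j * np.pi * k / (2 * N)) * Y[:N]).real

print(np.allclose(X, dct(x, type=2)))  # True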

The most common variant of the discrete cosine transform is the type-II DCT, which is often referred to simply as “the DCT”; its inverse, the type-III DCT, is correspondingly often referred to simply as “the inverse DCT” or “the IDCT”. Two related transforms are the discrete sine transform (DST), which is equivalent to a DFT of real and odd functions, and the modified discrete cosine transform (MDCT), which is based on a DCT of overlapping data.

The DCT, and in particular the DCT-II, is often used in signal and image processing, especially for lossy data compression, because it has a strong “energy compaction” property. Most of the signal information tends to be concentrated in a few low frequency components of the DCT.
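The compaction property can be illustrated in a few lines. In the sketch below, a smooth test signal is transformed with the DCT-II, all but the first ten coefficients are discarded, and the inverse transform still reconstructs the signal closely; the signal and the number of retained coefficients are arbitrary choices for illustration.

import numpy as np
from scipy.fft import dct, idct

# A smooth test signal (arbitrary choice for illustration).
n = np.arange(128)
x = np.exp(-((n - 64) / 20.0) ** 2)

# DCT-II, keeping only the 10 lowest-frequency coefficients.
X = dct(x, type=2, norm="ortho")
X_trunc = np.zeros_like(X)
X_trunc[:10] = X[:10]

# Reconstruct from the truncated spectrum with the inverse DCT.
x_hat = idct(X_trunc, type=2, norm="ortho")

# The relative error is tiny even though fewer than 8% of the
# coefficients were kept: the energy sits at low frequencies.
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))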

3. Neural network

In this work a single neural network structure is used: the Learning Vector Quantization network. A brief description of this network is presented below.

3.1 Learning Vector Quantization

Learning Vector Quantization (LVQ) is a supervised version of vector quantization, similar to Self-Organizing Maps (SOM), based on the work of Linde et al., Gray, and Kohonen. It can be applied to pattern recognition, multi-class classification, and data compression tasks, for example speech recognition, image processing, or customer classification. As a supervised method, LVQ uses a known target output classification for each input pattern.

LVQ algorithms do not approximate class sample density functions, as vector quantization or probabilistic neural networks do; instead they directly define class boundaries based on prototypes, a nearest-neighbor rule, and a winner-takes-all paradigm. The main idea is to cover the input sample space with ‘codebook vectors’ (CVs), each of which represents a region labeled with a class. A CV can be viewed as a prototype of a class member, located at the center of a class or decision region in input space. A class can be represented by an arbitrary number of CVs, but a CV represents only one class.

In neural network terms, an LVQ is a feedforward network with a hidden layer of neurons fully connected to the input layer. A CV can be viewed as a hidden neuron (a “Kohonen neuron”) or, equivalently, as the vector of weights between all input neurons and that Kohonen neuron.

Learning means modifying the weights in accordance with the adaptation rules, thus changing the position of a CV in the input space. Since the class boundaries are built piecewise-linearly as segments of the mid-planes between CVs of neighboring classes, the class boundaries are adjusted during the learning process. The tessellation induced by the set of CVs is optimal if all the data within a cell belong to the same class. Classification after learning is based on the proximity of a presented sample to the CVs: the classifier assigns to the sample the label of the cell it falls into, that is, the class of the nearest CV.
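As a minimal sketch of these ideas, the following implements the basic LVQ1 rule under stated assumptions: codebook vectors are initialized from random samples of their own class, the nearest CV wins, and it is pulled toward the sample when the labels match and pushed away otherwise. The initialization, learning rate, and epoch count are illustrative rather than the settings used in this work.

import numpy as np

def train_lvq1(X, y, n_cv_per_class=2, lr=0.05, epochs=30, seed=0):
    # Basic LVQ1: move the winning codebook vector toward the
    # sample if the class matches, away from it otherwise.
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    cvs, cv_labels = [], []
    for c in classes:
        idx = rng.choice(np.where(y == c)[0], n_cv_per_class, replace=False)
        cvs.append(X[idx])
        cv_labels += [c] * n_cv_per_class
    cvs = np.vstack(cvs).astype(float)
    cv_labels = np.array(cv_labels)

    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            # Winner-takes-all: find the nearest CV to the sample.
            d = np.linalg.norm(cvs - X[i], axis=1)
            w = np.argmin(d)
            sign = 1.0 if cv_labels[w] == y[i] else -1.0
            cvs[w] += sign * lr * (X[i] - cvs[w])
    return cvs, cv_labels

def predict(cvs, cv_labels, X):
    # Nearest-neighbor rule over the learned codebook vectors.
    d = np.linalg.norm(X[:, None, :] - cvs[None, :, :], axis=2)
    return cv_labels[np.argmin(d, axis=1)]

In the iris setting, X would hold the feature vectors extracted in Section 2 and y the corresponding subject identities.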
