Friday, 13 January 2017

A Comprehensive Study of Content Based Image Retrieval

Vol. 3  Issue 3
Year: 2016
Issue:Jul-Sep
Title:A Comprehensive Study of Content Based Image Retrieval
Author Name:Piyush Kothyari, Shriprakash Dwivedi and H.L. Mandoria
Synopsis:
Content Based Image Retrieval (CBIR) is an application of computer vision techniques for retrieving an image from a database by using its content. Earlier, image retrieval was only concept based, which means using metadata such as keywords, tags, or descriptions associated with the image to give it a concept or descriptive meaning; however, it cannot be guaranteed that associated or complete text annotations exist for every image. In this context, the term “content” refers to colors, shapes, textures, or any other feature information that can be derived from the image itself. In this paper, the authors have reviewed various methods for performing CBIR on the basis of shape, color, and texture, and briefly elaborate the features derived from each of these contents.

A Kernel SVM Classifier for Classification of Brain Tumors in Magnetic Resonance Images

Vol. 3  Issue 3
Year: 2016
Issue:Jul-Sep
Title:A Kernel SVM Classifier for Classification of Brain Tumors in Magnetic Resonance Images
Author Name:T. Chandra Sekhar Rao and G. Sreenivasulu
Synopsis:
The term Computer Aided Diagnosis (CAD) broadly encompasses the use of computer algorithms to aid in the process of image interpretation. CAD is also now used more generally to categorize and automate the extraction of quantitative measurements from medical images. CAD has become one of the most important research subjects in the domain of medical imaging and diagnostic radiology; CAD systems act as a credible second opinion, thereby improving the accuracy and consistency of radiological diagnosis. In this work, a classifier based on the Support Vector Machine (SVM) has been designed and presented for the classification of brain tumors in images from Magnetic Resonance Imaging (MRI). The SVM classifier uses a Gaussian Radial Basis function kernel (GRB kernel) to enhance the classifier performance. The classifier's results have been validated against expert clinical opinion and demonstrate the suitability of the proposed classifier for the classification of brain tumors.
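A minimal sketch of this kind of kernel SVM classification, assuming feature vectors have already been extracted from the MR images; the file names and parameter values below are illustrative only and do not reproduce the paper's feature extraction stage.

import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X = np.load("mri_features.npy")   # hypothetical (n_samples, n_features) feature matrix
y = np.load("mri_labels.npy")     # hypothetical labels, e.g. 0 = benign, 1 = malignant

# Gaussian RBF kernel: K(x, z) = exp(-gamma * ||x - z||^2)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))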

Character Restoration in Degraded Documents Using Hybrid Neuron Fuzzy Approach

Vol. 3  Issue 3
Year: 2016
Issue:Jul-Sep
Title:Character Restoration in Degraded Documents Using Hybrid Neuron Fuzzy Approach
Author Name:Harshmani, Nancy Gupta and Gurpreet Kaur
Synopsis:
Ancient documents may contain valuable information about our historical past, available in printed or handwritten form. Preserving the text of these antique documents has vital significance for future generations and for reference. However, due to several degradation factors such as bleed-through, shadow-through, ink bleeding, and paper aging, such documents often fail to show their contents clearly. Various restoration techniques may serve this purpose, but due to the non-linear and complex nature of the degrading parameters, the existing techniques turn out to be less promising. The aim of this study is to investigate the capability of ANN and Fuzzy logic, i.e. a 'Neuro-Fuzzy' technique, to restore historical documents from their digital images. In the proposed technique, a Back-Propagation Neural Network (BPNN) is trained to cope with different degrading factors and fuzzy rules are used to further suppress leftover spurious pixels. The output of the proposed technique on different degraded document images is presented and compared with various existing techniques, viz. Otsu, Sauvola, Wolf, Niblack, Bernsen and Maximum Entropy. The comparative results show the superiority of the proposed technique, which outperforms all other techniques by providing visually better output images.
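For context, a small sketch of three of the baseline binarization methods the paper compares against (Otsu, Niblack, Sauvola); the neuro-fuzzy restoration itself is not reproduced here, and the file name and window parameters are assumptions.

from skimage import io, img_as_float
from skimage.filters import threshold_otsu, threshold_niblack, threshold_sauvola

img = img_as_float(io.imread("degraded_document.png", as_gray=True))

binary_otsu    = img > threshold_otsu(img)                     # single global threshold
binary_niblack = img > threshold_niblack(img, window_size=25, k=0.8)
binary_sauvola = img > threshold_sauvola(img, window_size=25)  # locally adaptive thresholds

io.imsave("otsu.png",    (binary_otsu * 255).astype("uint8"))
io.imsave("niblack.png", (binary_niblack * 255).astype("uint8"))
io.imsave("sauvola.png", (binary_sauvola * 255).astype("uint8"))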

Breast Cancer Diagnosis from Low Intensity Asymmetry Thermogram Breast Images Using Fast Support Vector Machine

Vol. 3  Issue 3
Year: 2016
Issue:Jul-Sep
Title:Breast Cancer Diagnosis from Low Intensity Asymmetry Thermogram Breast Images Using Fast Support Vector Machine
Author Name:M. Subi Stalin and R. Kalaimagal 
Synopsis:
Breast cancer is a leading cause of cancer-related death. Screening of thermogram images, one of the most robust methods for early diagnosis of breast cancer, is widely recommended, supported by several Computer Aided Diagnosis (CAD) techniques. The main difficulty in thermography is that it is the asymmetry of the temperature distribution that indicates an abnormality, even at an early stage of disease. The authors present one of the fastest pattern recognition techniques for classifying tumors as benign or malignant, the Fast Support Vector Machine (FSVM). Such methods have been developed within statistical learning theory over the past decade and give promising classification results for efficient tumor diagnosis. The main objective of the proposed work is to aid disease diagnosis through thermogram analysis using a three-phase approach. In the first phase, the thermogram images are preprocessed and segmented by separating the left and right portions of the breast regions. After the segmentation process, in the second phase, textural features are extracted using the Discrete Curvelet Transform (DCT): temperature range, mean temperature, standard deviation, and the quantization of the higher tones in an eight-level posterization. This last feature considers the entire image temperature and measures the percentage of area occupied by pixels with the highest temperatures in the image. In the final phase of the work, a supervised learning method based on the Fast Support Vector Machine (FSVM) is used to classify the extracted features. The features are extracted from a set of 50 images confirmed by physician diagnosis. The proposed method achieved average results of 98.5% accuracy, 96% sensitivity, and 96.5% specificity.
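An illustrative sketch of the asymmetry-style features described above, computed on a grayscale thermogram split into left and right breast regions; the curvelet step is omitted, a standard SVM would stand in for the paper's Fast SVM, and the file name, split point and hot-pixel fraction are all assumptions.

import numpy as np
from skimage import io, img_as_float

def half_features(region, hot_fraction=7 / 8):
    # temperature range, mean, standard deviation, and share of "hot" pixels
    lo, hi = region.min(), region.max()
    hot_area = np.mean(region > lo + hot_fraction * (hi - lo))
    return np.array([hi - lo, region.mean(), region.std(), hot_area])

img = img_as_float(io.imread("thermogram.png", as_gray=True))   # hypothetical file
left, right = img[:, : img.shape[1] // 2], img[:, img.shape[1] // 2 :]

# asymmetry feature vector: absolute difference between left and right statistics
features = np.abs(half_features(left) - half_features(right))
print(features)   # would be fed to an SVM classifier trained on labelled cases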

Denoising of Images by Wavelets and Contourlets Using Bi-Shrink Filter

Vol. 3  Issue 3
Year: 2016
Issue:Jul-Sep
Title:Denoising of Images by Wavelets and Contourlets Using Bi-Shrink Filter
Author Name:S. Swarnalatha and P. Satyanarayana 
Synopsis:
Denoising refers to the recovery of an image that has been contaminated by noise due to poor quality image acquisition and transmission; accordingly, the noise present in the image needs to be reduced in order to produce the denoised image. This paper presents image denoising using wavelet transforms and contourlet transforms governed by bivariate shrinkage (Bi-shrink) filter techniques. Wavelet transforms suffer from shift sensitivity and poor directionality, which is reflected in the peak signal-to-noise ratio. In this paper, the Translation Invariant Contourlet Transform is proposed to overcome these limitations of wavelet transforms and hence increase the peak signal-to-noise ratio. The results illustrate the efficacy of the proposed transform in terms of peak signal-to-noise ratio, execution time and visual quality of images.
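A minimal sketch of the bivariate shrinkage rule (Sendur and Selesnick) applied to a two-level wavelet decomposition with pywt; the paper's contourlet variant is not reproduced, and the file name, wavelet and noise estimate are assumptions.

import numpy as np
import pywt
from skimage import io, img_as_float

def bishrink(child, parent, sigma_n):
    # expand the parent-scale coefficients to the child's grid
    parent = np.kron(parent, np.ones((2, 2)))[: child.shape[0], : child.shape[1]]
    mag = np.sqrt(child ** 2 + parent ** 2) + 1e-12
    sigma = np.sqrt(max(np.mean(child ** 2) - sigma_n ** 2, 1e-12))
    gain = np.maximum(mag - np.sqrt(3) * sigma_n ** 2 / sigma, 0) / mag
    return child * gain

img = img_as_float(io.imread("noisy.png", as_gray=True))
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(img, "db4", level=2)
sigma_n = np.median(np.abs(cD1)) / 0.6745          # robust noise estimate from HH1

den = [cA2, (cH2, cV2, cD2),
       (bishrink(cH1, cH2, sigma_n), bishrink(cV1, cV2, sigma_n), bishrink(cD1, cD2, sigma_n))]
out = pywt.waverec2(den, "db4")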

Comparative Study of PCA and SPIHT Methods in Medical image Compression

Vol. 3  Issue 3
Year: 2016
Issue:Jul-Sep
Title:Comparative Study of PCA and SPIHT Methods in Medical image Compression
Author Name:Ravi Kiran, Chandrashekhar Kamargaonkar and Monisha Sharma
Synopsis:
Compression of medical images has attracted great attention, attributable to the rising need to decrease image size without compromising the diagnostically crucial medical information exhibited in the image. The PCA algorithm may be used to aid image compression. In this paper, a comparative study of the PCA and SPIHT compression methods is provided. Here the PCA algorithm is characterized in two forms, i.e. standard PCA and block-based PCA, and two extended PCA algorithms that manipulate the block data of the image are evaluated. The first, referred to as block-by-block PCA, applies the standard PCA algorithm to every block of the image. In the second, block-to-row PCA, all block data are first concatenated into rows before the standard PCA algorithm is applied to the remodelled matrix. In this work, SPIHT is compared with the above two methods in terms of image quality and compression ratio. It is observed that block-based PCA performs better than the standard PCA algorithm and SPIHT with regard to image quality, while producing a compression ratio similar to that of the PCA algorithm.
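A rough illustration of block-based PCA compression in the spirit of the block-to-row variant: each 8x8 block becomes a row vector, PCA keeps only k principal components, and the blocks are reconstructed from the reduced representation. The exact blocking scheme and the SPIHT comparison from the paper are not reproduced; the file name, block size and k are assumptions.

import numpy as np
from skimage import io, img_as_float

B, k = 8, 12                          # block size and number of retained components
img = img_as_float(io.imread("ct_slice.png", as_gray=True))   # hypothetical image
H, W = (img.shape[0] // B) * B, (img.shape[1] // B) * B
img = img[:H, :W]

# rows = flattened blocks
blocks = img.reshape(H // B, B, W // B, B).swapaxes(1, 2).reshape(-1, B * B)
mean = blocks.mean(axis=0)
U, S, Vt = np.linalg.svd(blocks - mean, full_matrices=False)

coded = (blocks - mean) @ Vt[:k].T             # k coefficients per block (what is stored)
recon = coded @ Vt[:k] + mean                  # decoder side
out = recon.reshape(H // B, W // B, B, B).swapaxes(1, 2).reshape(H, W)
print("MSE:", np.mean((img - out) ** 2))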

Performance Analysis of Two Level Multiple Image Formats

Vol. 3  Issue 2
Year: 2016
Issue:Apr-Jun
Title:Performance Analysis of Two Level Multiple Image Formats
Author Name:Prakriti Gautam and Deepak Sharma
Synopsis:
Steganography is the process of hiding information in a carrier in order to provide secrecy for text, music, audio and images. It can be defined as the study of imperceptible communication, dealing with ways of concealing the existence of the communicated messages. A large variety of steganographic techniques exists for hiding secret information in various file formats, some more complex than others, and all with their respective strong and weak points. The aim of the paper is to provide the user with a comparative analysis of first- and second-level steganography using the MLSB embedding technique across various cover-image file formats such as JPEG, BMP and PNG. With the help of the results obtained, the best format for both levels of hiding can be determined. The secret image is also taken in each of the different formats, so that a conclusion can be drawn as to which format is suitable for the cover as well as the secret image.
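A small sketch of plain k-bit LSB substitution, the family of techniques this comparison builds on (MLSB is read loosely here as multi-bit LSB embedding; the paper's exact two-level scheme is not reproduced). File names are assumptions, the secret must fit within the cover's capacity, and a lossless cover format such as BMP or PNG is assumed.

import numpy as np
from skimage import io

def embed_lsb(cover, secret_bits, k=2):
    # Replace the k least-significant bits of the cover pixels with secret bits.
    flat = cover.flatten().astype(np.uint8)
    groups = secret_bits[: len(flat) * k].reshape(-1, k)
    values = (groups @ (1 << np.arange(k - 1, -1, -1))).astype(np.uint8)  # pack k bits per pixel
    keep = np.uint8((0xFF << k) & 0xFF)                                   # e.g. 0b11111100 for k=2
    flat[: len(values)] = (flat[: len(values)] & keep) | values
    return flat.reshape(cover.shape)

cover = io.imread("cover.png")                     # hypothetical cover image
secret = io.imread("secret.png")                   # hypothetical secret image
bits = np.unpackbits(secret.astype(np.uint8).flatten())
stego = embed_lsb(cover, bits, k=2)
io.imsave("stego.png", stego.astype(np.uint8))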

Comparative Analysis Of Edge Detection Methods And Its Implementation Using UTLP Kit

Vol. 3  Issue 2
Year: 2016
Issue:Apr-Jun
Title:Comparative Analysis Of Edge Detection Methods And Its Implementation Using UTLP Kit
Author Name:R. Hemalatha, N. Santhiyakumari, M. Madheswaran and S. Suresh
Synopsis:
In today's scenario, several advanced techniques are used to diagnose images obtained from medical imaging systems. Edge detection methods reduce the quantity of data and remove ineffective information while preserving the important structural properties of an image. The objective of this paper is to compare the performance of different edge detection methods, namely Canny-Deriche, morphological gradient, Prewitt, Ridge-Valley, Roberts, Sobel and zero crossing, on a Common Carotid Artery image. Statistical parameters such as minimum and maximum pixel values, mean, standard deviation, skewness and kurtosis are considered to study the performance of the various edge detection methods with the aid of the Aphelion Dev software. The edge-detected image has been implemented on the Unified Technology Learning Platform (UTLP) to increase the processing speed. It has been observed that the Canny-Deriche edge detection method is more suitable than the other methods for detecting accurate edges, and it can be used in medical applications to detect and extract the features of an image.
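A hedged sketch of the comparison idea: run several standard edge detectors on one image and tabulate simple statistics of the resulting edge maps. The Canny-Deriche, morphological-gradient and ridge-valley variants, Aphelion Dev and the UTLP kit are not reproduced, and the image name is an assumption.

import numpy as np
from scipy.stats import skew, kurtosis
from skimage import io, img_as_float, feature, filters

img = img_as_float(io.imread("carotid.png", as_gray=True))
edge_maps = {
    "canny":   feature.canny(img, sigma=2.0).astype(float),
    "sobel":   filters.sobel(img),
    "prewitt": filters.prewitt(img),
    "roberts": filters.roberts(img),
}
for name, e in edge_maps.items():
    stats = (e.min(), e.max(), e.mean(), e.std(), skew(e.ravel()), kurtosis(e.ravel()))
    print("%-8s min=%.3f max=%.3f mean=%.3f std=%.3f skew=%.2f kurt=%.2f" % ((name,) + stats))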

Contrast Enhancement Based Brain Tumour MRI Image Segmentation And Detection With Low Power Consumption

Vol. 3  Issue 2
Year: 2016
Issue:Apr-Jun
Title:Contrast Enhancement Based Brain Tumour MRI Image Segmentation And Detection With Low Power Consumption
Author Name:B. Shoban Babu, S. Varadarajan and S. Swarnalatha
Synopsis:
Medical image processing is one of the most challenging topics in the research field. A brain tumour is a serious, life-altering disease condition, and image segmentation plays a significant role in the estimation of suspicious regions in medical images. In this paper, two algorithms have been proposed: one for image enhancement and the other for image segmentation. The MRI image in which the brain tumour is to be detected is enhanced using the proposed technique called Power Constrained Contrast Enhancement (PCCE). The resulting enhanced image is then subjected to image segmentation using a threshold point method to detect the brain tumour. The performance of the proposed method is analysed by computing parameters of the brain tumour such as volume, area, contrast, emissive display, mean and standard deviation. These parameter values are compared with earlier methods, and it is found that the performance of the proposed brain tumour detection technique is quite satisfactory compared with the existing techniques.

Comparative Analysis Of Edge Detection Techniques.

Vol. 3  Issue 2
Year: 2016
Issue:Apr-Jun
Title:Comparative Analysis Of Edge Detection Techniques.
Author Name:Navkamal Kaur and Beant Kaur
Synopsis:
This paper presents a comparative analysis of traditional edge detector operators against a proposed algorithm on grayscale images. The proposed algorithm is based on mathematical morphology and thresholding. Mathematical morphology is a technique for edge detection based on set theory and has been used for feature extraction and feature detection. The basic operations of mathematical morphology are dilation, erosion, opening and closing; based on these operations, experimental results are obtained using square structuring elements of different sizes on different images. The authors adopt thresholding to adjust the brightness of the image edges. Edge detection is a preprocessing step in image processing, used here to reduce the amount of data and filter out useless information; it is also performed with the traditional operators (Canny, LoG, Prewitt and Sobel) for comparison. The aim of this paper is to obtain the useful edges of the image object, and images of round-shaped objects are preferred for extracting edges using different combinations of structuring elements. Performance evaluation of the proposed algorithm is based on the Root Mean Square Error (RMSE), which is used to measure the quality of the output image. The experimental results show that the proposed algorithm yields superior results compared with the traditional operators.
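A minimal sketch of a morphology-plus-threshold edge detector of the kind described above: the morphological gradient (dilation minus erosion) with a square structuring element, followed by a global threshold, with RMSE against a reference edge map as the quality measure. The file names, structuring element size and use of Otsu's threshold are assumptions.

import numpy as np
from skimage import io, img_as_float
from skimage.morphology import dilation, erosion
from skimage.filters import threshold_otsu

img = img_as_float(io.imread("coins.png", as_gray=True))
se = np.ones((3, 3))                                 # square structuring element

gradient = dilation(img, se) - erosion(img, se)      # morphological gradient
edges = gradient > threshold_otsu(gradient)          # thresholding step

reference = img_as_float(io.imread("coins_reference_edges.png", as_gray=True)) > 0.5
rmse = np.sqrt(np.mean((edges.astype(float) - reference.astype(float)) ** 2))
print("RMSE against reference edge map:", rmse)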

A Novel Watermarking Approach based on Fuzzy Wavelets for High Authentication And Robustness

Vol. 3  Issue 2
Year: 2016
Issue:Apr-Jun
Title:A Novel Watermarking Approach based on Fuzzy Wavelets for High Authentication And Robustness
Author Name:U. Ravi Babu 
Synopsis:
The present study derives a novel scheme, based on a Fuzzy Wavelet, to embed a watermark for high authentication, robustness, security and copyright protection. So far, no researcher has attempted to use fuzzy logic in the spatial domain. The present paper develops a new technique called the Fuzzy Wavelet (FW) approach for selecting the pixel locations in which to insert the watermark. The main aim of the proposed method is to embed the watermark image fully and to extract it in an efficient manner. The approach is called the Fuzzy Wavelet Region based Even-Odd (FWREO) method: the watermark bits are embedded in the pixel locations selected by the FW approach using a novel Region based Even-Odd (REO) method. The proposed method mainly consists of two steps. In the first step, the pixel locations where the watermark is to be embedded are identified. In the second step, the REO method is applied to the output of the FW approach from the first step. To establish its effectiveness, the present method is tested on 24 images of size 512×512 and gives comparable results when compared with other existing methods.

Feature Segmentation Of Blood Vessels Using Active Contour With Hybrid Region Information With Application To Retinal Images

Vol. 3  Issue 1
Year: 2016
Issue:Jan-Mar 
Title:Feature Segmentation Of Blood Vessels Using Active Contour With Hybrid Region Information With Application To Retinal Images
Author Name:Suri Sahithi and G. Guru Prasad
Synopsis:
Computerized detection of blood vessel structures is becoming one of the most interesting areas in the diagnosis of vascular diseases. The objective of this paper is to introduce a novel filter, based on a new kernel function with a Cauchy distribution, to improve the accuracy of automated retinal vessel detection. Moreover, for good segmentation performance, the proposed model has the benefit of using distinct types of region information. The aim of the proposed model is to increase the segmentation accuracy.

Analysis Of Iris Segmentation Using Circular Hough Transform And Daughman's Method

Vol. 3  Issue 1
Year: 2016
Issue:Jan-Mar 
Title:Analysis Of Iris Segmentation Using Circular Hough Transform And Daughman's Method
Author Name:Divya Ann Roy and Urmila S. Soni 
Synopsis:
Iris recognition is a special type of biometric system used to identify a person by analysing the patterns in the iris; it recognizes human identity through the textural characteristics of the iris muscular patterns. Although eye colour depends on heredity, the iris pattern is independent even in twins. Among biometrics such as finger and hand geometry, face, ear and voice recognition, iris recognition has proved to be one of the most accurate and reliable modalities because of its high recognition rate. Iris recognition involves five major steps. Firstly, image acquisition is done, in which the image is captured by a high resolution camera; then the iris and the pupillary boundary are extracted from the whole eye image, which is called segmentation. After segmentation, the circular region is converted to a fixed rectangular dimension, which is called normalization. From this normalised image, features are extracted using Gabor filters, the DFT, the FFT, etc. Finally, the iris code is matched using the Hamming distance and the Euclidean method. This project focuses on iris segmentation, which is the most important part of the iris recognition process, because areas that are wrongly considered as iris regions corrupt the biometric templates and result in very poor recognition [16]-[21]. The main objective of iris segmentation is to separate the iris region from the pupil and sclera boundaries, and various methods exist for segmenting the iris from an eye image. In this project, iris segmentation is performed using Daugman's integro-differential method and the Circular Hough Transform to find the pupil and iris boundaries. Iris images are taken from the CASIA V4 database, and the segmentation is implemented in Matlab, where the iris and pupillary boundaries are segmented out. The experimental results show that 84% accuracy is obtained by segmenting the iris with the Circular Hough Transform and 76% accuracy is obtained with Daugman's method. It is concluded that the Circular Hough Transform method is more accurate than Daugman's method for iris segmentation.
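A sketch of the Circular Hough Transform step only: detect the pupil boundary as the strongest circle over a range of radii in an edge map of the eye image. Daugman's integro-differential operator and the normalization and matching stages are not shown; the file name and radius range are assumptions (the paper itself uses Matlab, whereas this sketch uses scikit-image).

import numpy as np
from skimage import io, img_as_float, feature
from skimage.transform import hough_circle, hough_circle_peaks

eye = img_as_float(io.imread("casia_eye.png", as_gray=True))
edges = feature.canny(eye, sigma=2.0)

radii = np.arange(20, 80, 2)                        # candidate pupil radii in pixels
accumulator = hough_circle(edges, radii)
_, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=1)
print("pupil boundary: centre=(%d, %d), radius=%d" % (cx[0], cy[0], r[0]))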

Multispectral Image Compression With High Resolution Improved SPIHT For Testing Various Input Images

Vol. 3  Issue 1
Year: 2016
Issue:Jan-Mar 
Title:Multispectral Image Compression With High Resolution Improved SPIHT For Testing Various Input Images
Author Name:V. Bhagya Raju, K. Jaya Sankar, C. D. Naidu and Srinivas Bachu
Synopsis:
Due to recent developments in multispectral sensor technology, the use of multispectral images has become increasingly popular in remote sensing applications. This paper exploits the spectral and spatial redundancies that exist in the different bands of multispectral images and effectively compresses these redundancies by means of a lossy compression method, while preserving the crucial spectral information of the objects present in the multispectral bands. An interpolated super-resolution, DWT-based transform with an Improved SPIHT algorithm is proposed for various multispectral datasets. The proposed lossy multispectral image compression method yields better PSNR and Compression Ratio with the sym8 wavelet when compared with previous well-known compression methods and existing discrete wavelets.

Hybrid Wavelet Based Approach for Image De-Noising Through PCA

Vol. 3  Issue 1
Year: 2016
Issue:Jan-Mar 
Title:Hybrid Wavelet Based Approach for Image De-Noising Through PCA
Author Name:Gunjan Sethi, Sukhvir Kaur and Jagdeep Singh
Synopsis:
Denoising is a crucial problem for various types of images in digital image processing. The main objective is to remove the noise and recover a realistic image while safeguarding the original quality and structure of the image. Many hardware devices, such as digital electronic imaging equipment, suffer from noisy and blurred images due to degradation in the quality of the captured image; such noisy and blurred images carry less information about the object being captured in its environment. In this paper, a denoising technique is proposed and evaluated at different standard deviations of noise for each processed image, to check at what level of noise it works. In the proposed technique, a wavelet transform is applied to the noisy image, and SPG-PCA is then used on the decomposed sections for quality enhancement. It consists of two stages: image estimation by removing the noise, and further refinement of the first stage. Noise is removed to the maximum extent in the first stage, and the application of NPG improves the visualization of the denoised image. Varying the standard deviation helps to optimize the recovery of the original image under the denoising scheme, as measured by quality metrics. The proposed technique can also be applied to satellite images, television pictures, medical images, etc. In this research work, denoising metrics such as PSNR, SSIM, Maximum Difference and Normalized Cross-Correlation are computed for the dataset. Experimental results show a much improved performance of the proposed filters in the presence of Gaussian noise, which is analysed and illustrated.

Cuckoo Search Framework For Feature Selection And Classifier Optimization In Compressed Medical Image Retrieval

Vol. 3  Issue 1
Year: 2016
Issue:Jan-Mar 
Title:Cuckoo Search Framework For Feature Selection And Classifier Optimization In Compressed Medical Image Retrieval
Author Name:Reddi Kiran Kumar and Vamsidhar Enireddy
Synopsis:
With the availability of different medical imaging equipment for diagnosis, medical professionals increasingly depend on computer-aided techniques for retrieving similar images from large repositories. This work investigates the medical image retrieval problem for losslessly compressed images. A lossless compression technique is utilized for compressing the medical images for easy transmission and storage. Texture features are extracted using Gabor filters and shape features using the Gabor-shape, and the best of these features are selected using a novel Cuckoo Search algorithm and compared with other statistical techniques. Classification is performed using a Recurrent Neural Network, whose optimization is also done using Cuckoo Search. Experimental results show the advantages of the proposed framework.

Adaptive Thresholding For Porosity Measurement In Sand Particles

Vol. 2  Issue 4
Year: 2015
Issue:Oct-Dec
Title:Adaptive Thresholding For Porosity Measurement In Sand Particles
Author Name:Maheswaran U and Priyadharshini S
Synopsis:
In this paper, the authors evaluate the porosity of given sand images using image processing techniques; an adaptive thresholding technique is used for the calculation. It has been shown that the flow and shear characteristics of granular particles such as soils depend significantly on the shape of the particles. This is important from a practical viewpoint because a fundamental understanding of granular behaviour leads to an improved understanding of soil stability and influences the design of structural foundations; the calculation of soil stability and, consequently, structural stability is particularly useful during earthquake events. In previous work, the authors demonstrated the applicability of X-ray and optical tomography measurements for characterizing 3-D shapes of natural sands and manufactured granular particles. In this paper, the authors have extended that work to measure the arrangement and orientation of an assemblage of such particles. A combination of X-ray Computed Tomography (CT), for measuring the coordinates of the individual particles, and an iterative adaptive thresholding technique, for computing the local variations in porosity, is employed to generate porosity maps. Such maps can be used to gain a more fundamental understanding of the shear characteristics of granular particles. The authors demonstrate the success of the technique by exercising the method on several sets of granular particles: glass beads (used as a control), natural Michigan Dune and Daytona Beach sands, and processed Dry #1 sand.
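An illustrative sketch of the porosity-map idea: binarize a CT slice with a locally adaptive threshold (solid versus pore space) and average the pore mask over windows to obtain a local porosity map. The iterative refinement and the 3-D stacking from the paper are not reproduced; the file name and window sizes are assumptions.

import numpy as np
from skimage import io, img_as_float
from skimage.filters import threshold_local

slice_ = img_as_float(io.imread("sand_ct_slice.png", as_gray=True))
pores = slice_ < threshold_local(slice_, block_size=51)    # dark voxels = pore space

w = 32                                                     # porosity-averaging window
H, W = (pores.shape[0] // w) * w, (pores.shape[1] // w) * w
porosity_map = pores[:H, :W].reshape(H // w, w, W // w, w).mean(axis=(1, 3))
print("overall porosity: %.3f" % pores.mean())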

Non Linear Robust Edge Detector For Noisy Images

Vol. 2  Issue 4
Year: 2015
Issue:Oct-Dec
Title:Non Linear Robust Edge Detector For Noisy Images
Author Name:Atluri Srikrishna, B. Eswara Reddy and M. Pompapathi
Synopsis:
Identifying the edge pixels of a noisy signal without regularization is still a challenging problem for researchers. Different methods exist, each with its own assumptions, advantages and limitations. The authors propose a Non Linear Robust Edge Detector (NLRED), which uses an n x n window in order to detect edges of all possible orientations in noisy images. The proposed method partitions the neighbours of the pixel under consideration for edge candidature into two sub-regions, based on differences in the local gray-level values. For each sub-region it calculates a test statistic from the mean, the placement of each member and an index of variability, and the larger of the two sub-region statistics is retained. These statistics are calculated for eight different orientations, and among these the statistic with the minimum value is considered for edge candidature. The performance is measured in terms of the Figure of Merit (FOM), and the efficiency of the proposed method in detecting all possible edges is shown by comparison with a statistical edge detector and the Canny approach.

A Computer Aided Diagnosis (CAD) System For Segmentation And Analysis Of Brain Magnetic Resonance Images

Vol. 2  Issue 4
Year: 2015
Issue:Oct-Dec
Title:A Computer Aided Diagnosis (CAD) System For Segmentation And Analysis Of Brain Magnetic Resonance Images
Author Name:T. Chandra Sekhar Rao and G. Sreenivasulu 
Synopsis:
Medical image processing has become the mainstay of diagnosis for a multitude of disease conditions. The advent of sophisticated image processing procedures, coupled with the exponential growth in processing power and storage, has resulted in a huge volume of data that has to be interpreted and analysed. This huge volume of data implies the need for automated analysis systems to reduce the burden on radiologists and help provide quality diagnosis. This paper presents a Computer Aided Diagnosis (CAD) system for segmentation and analysis of brain tumors in magnetic resonance images. The system has scope for different kinds of analysis, such as edge analysis, morphological processing and histogram analysis. As part of the system, four different segmentation approaches are implemented: K-means segmentation, watershed segmentation, Fuzzy C-Means (FCM) segmentation, and Enhanced Independent Component Analysis (EICA) with mixture-model based segmentation. The performance of the segmentation approaches is evaluated using different performance measures.
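A brief sketch of one of the listed approaches, K-means segmentation, applied to the intensities of a brain MR slice; watershed, FCM and EICA/mixture-model segmentation are not shown, and the file name and number of clusters are assumptions.

import numpy as np
from sklearn.cluster import KMeans
from skimage import io, img_as_float

slice_ = img_as_float(io.imread("brain_mri_slice.png", as_gray=True))
pixels = slice_.reshape(-1, 1)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(slice_.shape)

# the brightest cluster is often the candidate tumour/enhancing region
tumour_cluster = np.argmax(kmeans.cluster_centers_.ravel())
mask = labels == tumour_cluster
print("candidate region covers %.1f%% of the slice" % (100 * mask.mean()))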

Image Denoising Using Hybrid Filter In Presence Of Multiple Noises And Graphical User Interface For Medical Image Enhancement

Vol. 2  Issue 4
Year: 2015
Issue:Oct-Dec
Title:Image Denoising Using Hybrid Filter In Presence Of Multiple Noises And Graphical User Interface For Medical Image Enhancement
Author Name:Gopi Karnam and Ramashri Tirumala
Synopsis:
In the field of image processing, filtering algorithms are applied to noisy images to eliminate the noise while protecting image details. In medical diagnosis, removing noise is a very challenging issue, as images are corrupted by multiple noises. Medical images such as CT, MRI and PET carry information about the heart, nerves and brain, and these images must be precise and free from noise. This paper presents an efficient method for noise reduction and contrast enhancement of medical images. The proposed method hybridizes an adaptive median filter with the Wiener filter for denoising multiple noises; the Wiener filter provides an improved balance between smoothness and precision. A GUI representation of image smoothing and histogram equalization is also shown. The method is tested on MRI (Magnetic Resonance Imaging) images and its performance is evaluated in terms of the Peak Signal to Noise Ratio (PSNR), correlation coefficient, Mean Absolute Error (MAE) and Mean Square Error (MSE). The proposed technique removes Gaussian noise, impulse noise and blur in the images and improves image quality. The results show that the hybrid filter outperforms most of the basic algorithms for reduction of multiple noises in medical images, and that it gives appropriate and consistent results on the test images, providing precision while preserving image information.
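A rough sketch of the hybrid idea: a median-filtering stage to knock out impulse noise, followed by a Wiener stage for the Gaussian component. A fixed-size median filter stands in here for the paper's adaptive median filter; the file name, noise levels and window sizes are assumptions.

import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio as psnr

clean = img_as_float(io.imread("mri_slice.png", as_gray=True))
rng = np.random.default_rng(0)
noisy = np.clip(clean + 0.05 * rng.standard_normal(clean.shape), 0, 1)
mask = rng.random(clean.shape) < 0.02                        # impulse (salt-and-pepper) noise
noisy[mask] = rng.integers(0, 2, mask.sum()).astype(float)

stage1 = median_filter(noisy, size=3)         # removes impulse spikes
stage2 = wiener(stage1, mysize=5)             # smooths the remaining Gaussian noise
print("PSNR noisy:  %.2f dB" % psnr(clean, noisy, data_range=1.0))
print("PSNR hybrid: %.2f dB" % psnr(clean, np.clip(stage2, 0, 1), data_range=1.0))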

An Estimation Of Human Age Group Based On Facial Edge Image Patterns

Vol. 2  Issue 4
Year: 2015
Issue:Oct-Dec
Title:An Estimation Of Human Age Group Based On Facial Edge Image Patterns
Author Name:Gandu Bharat and Burusu Rajesh Kumar
Synopsis:
The present paper derives a new approach for estimating the age group of a person based on the structural patterns of an edge image derived from a human face image. The approach uses the Canny edge operator for extracting the edges of the image, because the Canny operator yields more edges. Edges are important here because wrinkles form on the face as age increases, and when wrinkles are present in the facial image, more edge information is automatically available. The approach uses a structural concept: the present study derives four distinct structural patterns on each 3x3 sub-window of the facial edge image, i.e. the Right Diagonal Pattern (RDP), Left Diagonal Pattern (LDP), Vertical Central Line Pattern (VCLP) and Horizontal Central Line Pattern (HCLP). The central pixel value of the 3x3 sub-image is considered in all four patterns. Based on the frequency of occurrence of these four structural patterns, the human age is estimated into five groups: Child (0 to 9 years), Young (10 to 20 years), Young Adult (21 to 35 years), Adult (36 to 50 years) and Senior (over 50 years). The efficiency of the proposed method is evaluated by applying it to large facial databases such as FG-NET and MORPH, and it shows a high classification rate when compared with other existing methods.

Denoising Technique For Segmentation Of Medical Images Using Biorthogonal Wavelet Transform

Vol. 2  Issue 3
Year: 2015
Issue:Jul-Sep 
Title:Denoising Technique For Segmentation Of Medical Images Using Biorthogonal Wavelet Transform
Author Name:Ritu Agrawal and Manisha Sharma
Synopsis:
The proposed paper is divided into two parts: image denoising and image segmentation. Medical images are corrupted by noise at the time of transmission and acquisition, and the goal of denoising is to remove this noise while retaining the visual quality of the image; here, a biorthogonal wavelet transform with hard thresholding is used for denoising. The second part of the paper addresses the segmentation of a Magnetic Resonance Angiography (MRA) cardio image after denoising. Image segmentation plays a significant role in the medical field, its aim being to extract the meaningful objects lying in the image. The MRA images are analysed using two segmentation methods, namely the region growing method and the fuzzy c-means method, in the presence of different noise levels, and the two segmentation techniques are compared on the basis of accuracy. The experimental results show that the fuzzy c-means method yields better segmentation results than the region growing method.
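A minimal sketch of the denoising half only: a biorthogonal wavelet decomposition with hard thresholding of the detail coefficients, using the universal threshold as a simple choice. The MRA segmentation stage (region growing / fuzzy c-means) is not shown; the file name, wavelet and decomposition level are assumptions.

import numpy as np
import pywt
from skimage import io, img_as_float

img = img_as_float(io.imread("mra_cardio.png", as_gray=True))
coeffs = pywt.wavedec2(img, "bior4.4", level=3)

sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745          # noise estimate from finest HH band
thr = sigma * np.sqrt(2 * np.log(img.size))                 # universal threshold

denoised = [coeffs[0]] + [
    tuple(pywt.threshold(d, thr, mode="hard") for d in detail) for detail in coeffs[1:]
]
out = pywt.waverec2(denoised, "bior4.4")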

Thursday, 12 January 2017

Improved Video Watermarking Using Discrete Cosine Transform

Vol. 2  Issue 3
Year: 2015
Issue:Jul-Sep 
Title:Improved Video Watermarking Using Discrete Cosine Transform
Author Name:Dolley Shukla and Manisha Sharma
Synopsis:
In this paper, an improved video watermarking algorithm is presented for copy protection. Each video frame is divided into non-overlapping 8*8 blocks, and each block is transformed with the 2D-DCT. After performing the same transformation on all the blocks and frames, the image watermark is embedded into selected DCT coefficients. The performance of the system is evaluated using the Peak Signal to Noise Ratio, Mean Square Error, Normalized Mean Square Error, Root Mean Square Error and Absolute Mean Error.
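A hedged sketch of the embedding step for a single frame: split it into 8x8 blocks, take the 2-D DCT of each block, and add a scaled watermark bit to one mid-frequency coefficient. Which coefficients the paper selects and how frames are handled is not reproduced; the coefficient position, strength alpha and stand-in frame are assumptions.

import numpy as np
from scipy.fft import dctn, idctn

def embed_frame(frame, wm_bits, alpha=4.0, pos=(3, 2)):
    # frame: 2-D uint8 luma plane; wm_bits: one +/-1 value per 8x8 block
    out, idx = frame.astype(float), 0
    for i in range(0, frame.shape[0] - 7, 8):
        for j in range(0, frame.shape[1] - 7, 8):
            block = dctn(out[i:i + 8, j:j + 8], norm="ortho")
            block[pos] += alpha * wm_bits[idx]             # embed in a mid-band coefficient
            out[i:i + 8, j:j + 8] = idctn(block, norm="ortho")
            idx += 1
    return np.clip(out, 0, 255).astype(np.uint8)

frame = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in frame
bits = np.where(np.random.default_rng(1).random((64 // 8) ** 2) > 0.5, 1, -1)
watermarked = embed_frame(frame, bits)
print("embedding MSE:", np.mean((watermarked.astype(float) - frame) ** 2))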

Comparative Analysis Of BM3D And Complex Wavelet Transform Based Image Denoising Techniques

Vol. 2  Issue 3
Year: 2015
Issue:Jul-Sep 
Title:Comparative Analysis Of BM3D And Complex Wavelet Transform Based Image Denoising Techniques
Author Name:S. Swarnalatha, P. Satyanarayana and B. Shoban Babu 
Synopsis:
This paper presents a comparison between the BM3D and Complex Wavelet Transform based image denoising techniques based on their performance. Complex Wavelet Transforms overcome the limitations of classic Discrete Wavelet Transforms, such as shift sensitivity and poor directionality. The Block Matching with 3D filtering (BM3D) technique is a combination of spatial and transform domain filtering: it employs spatial filtering such as Wiener filtering together with transform-based techniques such as the Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT), and fuses all the filtered results into a single image to accomplish better performance. As BM3D uses both spatial and transform based filtering, it achieves better performance than the Complex Wavelet Transforms; however, the BM3D based image denoising technique consumes more time than the wavelet transform based techniques.

Transform Based Efficient And Robust Watermarking Technique For Medical Images Using Differential Evolution

Vol. 2  Issue 3
Year: 2015
Issue:Jul-Sep 
Title:Transform Based Efficient And Robust Watermarking Technique For Medical Images Using Differential Evolution
Author Name:Venugopal Reddy and Siddaiah P
Synopsis:
In this research work, a transform based medical image watermarking algorithm is used, and a Differential Evolution (DE) optimization technique is proposed to ensure that the watermark maintains its structural integrity along with robustness and imperceptibility. DE optimization is employed to optimize the objective function in order to choose the right type of wavelet and scaling factor. The watermarking is implemented using Discrete Wavelet Transform (DWT), Lifted Wavelet Transform (LWT) and Singular Value Decomposition (SVD) techniques, and encryption is done using the RSA and AES encryption algorithms. A Graphical User Interface (GUI), which enables the user to easily load the image, watermark it, encrypt it, and retrieve the original image whenever necessary, is also designed and presented in this paper. The robustness and integrity of the watermark are tested by measuring different performance parameters and subjecting the watermarked image to various attacks.

Image Authentication Using Semifragile Watermarking And Vector Quantization

Vol. 2  Issue 2
Year: 2015
Issue:Apr-Jun
Title:Image Authentication Using Semifragile Watermarking And Vector Quantization
Author Name:Archana Tiwari and Manisha Sharma
Synopsis:
In this paper, a new technique for image authentication is presented that uses a semifragile watermarking scheme based on vector quantization. The proposed scheme performs blind image authentication using VQ (Vector Quantization) by applying a novel index based method combined with the Discrete Cosine Transform. The method is tested using different combinations of codebooks for the watermark insertion and extraction procedures, and results in an improved PSNR with an average value of 42 dB, compared to the average value of 30 dB reported by Lu et al.

Data Hiding In Encrypted Compressed Videos For Privacy Information Protection

Vol. 2  Issue 2
Year: 2015
Issue:Apr-Jun
Title:Data Hiding In Encrypted Compressed Videos For Privacy Information Protection
Author Name:Sivappagari Chandra Mohan Reddy and M. Manasa 
Synopsis:
The large size of video data makes it desirable to store and process it in the cloud. To provide security and privacy for data in the cloud, the video data must be stored in encrypted form, and it is also necessary to perform data hiding to avoid leakage of the video content. This paper proposes a novel method of embedding additional data in H.264/AVC video bit streams to meet the privacy-preserving requirements of cloud data management. The method consists of three processes: H.264/AVC video encryption, embedding of the encrypted data, and extraction of the original data. Depending upon the properties of the H.264/AVC codec, the code words of I-frames and P-frames are encrypted with stream ciphers. The data to be embedded into the encrypted video is itself encrypted by a chaos encryption technique and hidden in the video by a bit replacement method. Data extraction can be done both in the encrypted domain and in the decrypted domain. Experimental results show that the proposed method preserves the file size, while the degradation in video quality caused by data embedding is small.

Compression Of Encrypted Images In Prediction Error Domain

Vol. 2  Issue 2
Year: 2015
Issue:Apr-Jun
Title:Compression Of Encrypted Images In Prediction Error Domain
Author Name:Betcy K Joy and Baby Dhanya S. N
Synopsis:
The rapid development of network technology requires fast and secure transmission of images. The traditional way of achieving secure transmission is to encrypt the compressed images. Even though this compression-then-encryption order meets the security prerequisites, in some situations the order may need to be reversed, with encryption performed before compression. Therefore, an efficient image Encryption-Then-Compression (ETC) system in the prediction error domain is introduced. In this system, the image is encrypted by calculating the prediction error of each pixel, followed by encoding; the encoding is done by a simple arithmetic coding method. Simulation results show that the bit rate can be reduced to 12.
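A small sketch of the prediction-error idea behind such ETC systems: a simple causal predictor (here, the left neighbour) turns the image into a low-entropy error signal, which is what would then be encrypted and compressed. The paper's actual predictor, encryption step and arithmetic coder are not reproduced; the file name is an assumption and a grayscale input is assumed.

import numpy as np
from skimage import io

def entropy(x):
    p = np.bincount(x.ravel(), minlength=256) / x.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

img = io.imread("test_gray.png").astype(np.int16)
pred_error = img.copy()
pred_error[:, 1:] = img[:, 1:] - img[:, :-1]                 # left-neighbour prediction
pred_error_mod = np.mod(pred_error, 256).astype(np.uint8)    # wrap into 0..255 for coding

print("entropy of raw pixels:        %.2f bits/pixel" % entropy(img.astype(np.uint8)))
print("entropy of prediction errors: %.2f bits/pixel" % entropy(pred_error_mod))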