Advanced Image Processing for Analytics in Biomedicine and Bioscience

All medical imaging systems suffer from the effects of acquisition noise, channel noise, and fading. When decisions are made on the basis of these image data, any deviation from the real values can affect those decisions. Additionally, detecting anomalies in image data requires special processing that goes beyond the surface data. We have developed computationally low-power, low-bandwidth wavelet-based filtering techniques to address these problems.


Introduction
Biomedical engineering and biomedicine are exciting fields that try to answer a broad range of questions at the interface of medicine, science, and engineering. It is hard to imagine where medicine and science would be today without advanced imaging systems, wireless sensor networks, robotics, 3D printers, nanoparticles, and big data analytics, to name a few. Technology has had a dramatic effect on the direction of biomedicine and the biosciences, producing what may be the most transformative period in their history. One form of this convergence of technology, science, and medicine is the advanced biomedical imaging system. Imaging systems provide alternative means for the analysis and visualization of scientific and medical information and services, offering patients and researchers an extraordinary new range of options.
Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to process images for medical and scientific purposes. Because these methods are non-parametric, resolution-limited, and dependent on observation time, spectral estimation and wavelet-based pre- and post-processing techniques for image data have been proposed instead. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem [1]. This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of images using parametric spectral methods and signal classification algorithms.
Finally, the volume, depth, and breadth of image data acquired from advanced imaging systems have driven the recent explosive growth of data, requiring modern processing and visualization techniques to extract useful information for decision-making. Finding correlations among thousands of variables in big datasets to determine their relative importance is not a simple task. Advanced analysis and scientific data visualization have proven to be effective techniques for discerning information in big datasets. Using proven, fast, and sophisticated filtering techniques, this article also aims at extracting information, revealing patterns, and allowing big datasets to be mined in real time for faster and more effective decision-making. Given the unique challenges of scientific big data visualization, the research presented in this paper covers some potential solutions and offers a means of setting standards for this new and evolving field.

Research and Methodology
In this study, a unified approach to image processing for biomedical and scientific applications has been considered. A few research efforts have reported that de-noising based on wavelet filters improves the performance of FFT-based algorithms. Others have reported on post-processing of image data (for de-noising and de-speckling) using wavelet analysis and on how image quality is enhanced by sub-band coding. Effective algorithms for image compression have not been reported for either method. Wavelet transforms will be integrated to evaluate the effect of raw-data de-noising, decomposition, and compression on the overall efficiency of the algorithms. Wavelets have been applied extensively and successfully to many datasets in the physical sciences, but there has not been a comprehensive study of their effects on medical or scientific image processing.
Among the major parameters that govern the creation of an efficient image processing tool, the order (scale, dimension, or size) and orthogonality (separation) of the image dataset are two of the most important. There have been many attempts to improve the decision-making process for image data by manipulating these parameters. Dimensionality reduction is one of the basic operations in the toolbox of image data analysts and designers of machine learning and pattern recognition systems. Given a large set of images but few observations, an obvious idea is to reduce the degrees of freedom in the measurements by representing them with a smaller set of more "condensed" variables. This amounts to reducing the dimensionality of the image dataset to lower the computational load in further processing [2].
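As one concrete illustration of this step, the sketch below condenses a set of flattened images into a handful of variables using principal component analysis via the SVD. This is one common dimensionality-reduction technique, not necessarily the one used in this work, and the data here are random stand-ins rather than the paper's datasets.

```python
import numpy as np

def reduce_dimensionality(X, k):
    """Project the rows of X (one flattened image per row) onto their
    top-k principal components via the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                  # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # n x k condensed representation

# 100 hypothetical "images" flattened to 64-dimensional vectors, reduced to 5
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
Z = reduce_dimensionality(X, 5)
print(Z.shape)  # (100, 5)
```

The k columns of Z are the "condensed" variables; downstream processing then operates on a 100 x 5 array instead of 100 x 64.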
In computational analysis in scientific and medical domains, images are often compared based on their features, e.g., size, depth, and other domain-specific aspects. Certain features may be more significant than others when comparing images and drawing the corresponding inferences for specific applications. Though domain experts may have subjective notions of similarity for comparison, they seldom have a distance function that ranks image features by their relative importance. The proposed method ranks features in order to learn such a distance function and thereby capture the semantics of the images [3].
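A minimal sketch of such a feature-weighted distance function is shown below. The feature names and weights are hypothetical; in practice the weights would be learned from expert-labeled comparisons, as the text describes.

```python
import numpy as np

def weighted_distance(a, b, w):
    """Weighted Euclidean distance between two feature vectors; larger
    weight means the feature matters more in the comparison."""
    return float(np.sqrt(np.sum(w * (a - b) ** 2)))

# Hypothetical features per image: (size, depth, mean intensity).
# Here size is weighted as the most significant feature.
w = np.array([0.7, 0.2, 0.1])
img_a = np.array([12.0, 3.0, 0.8])
img_b = np.array([10.0, 3.5, 0.9])
d = weighted_distance(img_a, img_b, w)
```

Because the function is symmetric and zero only for identical feature vectors (with positive weights), it can be used directly to rank candidate images by similarity to a query image.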

Results and Discussion
Digital image processing algorithms have long served to shape image data into a form suitable for analysis and synthesis of any kind. For medical and scientific images, a special wavelet-based approach has been considered to suppress the effects of noise and data order. One advantage of this approach is that a single algorithm serves to reduce the data order, remove noise, and decompose the image into different layers for component analysis. The proposed technique uses the orthogonality properties of wavelets to decompose the dataset into spaces of coarse and detailed signals. With the filter banks designed from special bases for this specific application, the output in this case consists of components of the original signal represented at different time and frequency scales and translations. A detailed description of the technique follows in the next section.
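As a simplified stand-in for the wavelet filter banks described above, the sketch below performs one level of a 2-D Haar decomposition, splitting an image into a coarse approximation and three detail subbands. The Haar basis is used only because it is the simplest orthogonal wavelet, not because it is the basis designed in this paper.

```python
import numpy as np

def haar2d_level(img):
    """One level of a 2-D Haar transform: returns the coarse (LL)
    approximation and the three detail subbands (LH, HL, HH)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0      # coarse layer
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return LL, (LH, HL, HH)

# A smooth 8x8 ramp image: the detail layers capture local variation,
# while LL is a quarter-size summary of the image.
img = np.arange(64, dtype=float).reshape(8, 8)
LL, (LH, HL, HH) = haar2d_level(img)
```

Applying `haar2d_level` recursively to LL yields the multi-layer decomposition described in the text, with each layer available for separate component analysis.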

Wavelet-based transforms
Traditionally, the Fourier transform (FT) has been applied to time-domain signals for signal processing tasks such as noise removal, order reduction, decomposition, pattern recognition, and classification. The shortcoming of the FT is its dependence on time averaging over the entire duration of the signal. Because of their short time span, signals such as those from wireless sensor network nodes require resolution in particular time and frequency rather than frequency alone. Wavelets are the result of translating and scaling a finite-length waveform known as the mother wavelet. A wavelet divides a function into its frequency components such that its resolution matches the frequency scale and translation. To represent a signal in this fashion, it must undergo a wavelet transform. Applying the wavelet transform to a function results in a set of orthogonal basis functions that are the time-frequency components of the signal. Due to its resolution in both time and frequency, the wavelet transform is an excellent tool for the detection and classification of signals that are non-stationary or have discontinuities and sharp peaks. Depending on whether a given function is analyzed over all scales and translations or over a subset of them, the continuous (CWT), discrete (DWT), or multiresolution wavelet transform (MWT) is applied.
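The decomposition machinery can be sketched with the Haar wavelet, the simplest orthogonal case (the filters developed in this paper are Meyer-based, so this is an illustrative substitute). One level of the DWT splits a signal into coarse and detail coefficient sequences, and orthogonality guarantees perfect reconstruction.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: split x (even length) into coarse
    (low-frequency) and detail (high-frequency) coefficients."""
    x = np.asarray(x, dtype=float)
    coarse = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return coarse, detail

def haar_idwt(coarse, detail):
    """Invert one Haar level exactly (the transform is orthogonal)."""
    x = np.empty(2 * len(coarse))
    x[0::2] = (coarse + detail) / np.sqrt(2)
    x[1::2] = (coarse - detail) / np.sqrt(2)
    return x

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 4 * t)
c, d = haar_dwt(signal)
assert np.allclose(haar_idwt(c, d), signal)   # perfect reconstruction
```

Repeating `haar_dwt` on the coarse output gives the multi-level decomposition discussed below; each `detail` sequence localizes high-frequency content in both time and frequency.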
An example of a generating function (mother wavelet) for the CWT, based on the Sinc function, is

  ψ(t) = 2 sinc(2t) − sinc(t) = (sin(2πt) − sin(πt)) / (πt).

The subspaces of this function are generated by translation and scaling. For instance, the subspace of scale (dilation) a and translation (shift) b of the above function is spanned by

  ψ_{a,b}(t) = (1/√a) ψ((t − b)/a),  a > 0, b ∈ R.

When a function x is projected onto this subspace, an integral has to be evaluated to calculate the wavelet coefficients in that scale:

  W_x(a, b) = ∫ x(t) ψ_{a,b}(t) dt.

The function x can then be written in terms of its components:

  x(t) = (1/C_ψ) ∫∫ W_x(a, b) ψ_{a,b}(t) db da/a².

Due to computational and time constraints it is impossible to analyze a function using all of its components. Therefore, a subset of the discrete coefficients is usually used to reconstruct the best approximation of the signal. This subset is generated from the discrete version of the generating function,

  ψ_{m,n}(t) = 2^{m/2} ψ(2^m t − n),  m, n ∈ Z.

Applying this subset to a function x with finite energy yields the DWT coefficients ⟨x, ψ_{m,n}⟩, from which one can closely approximate (reconstruct) x using the coarse coefficients of this sequence:

  x(t) ≈ Σ_m Σ_n ⟨x, ψ_{m,n}⟩ ψ_{m,n}(t).

The MWT is obtained by picking a finite number of wavelet coefficients from a set of DWT coefficients. However, to avoid computational complexity, two generating functions, a scaling function φ and a wavelet ψ, are used to create the subspaces V_m and W_m from which the two (fast) wavelet transform pairs (MWT) can be generated:

  V_m = span{ 2^{m/2} φ(2^m t − n) : n ∈ Z }  and  W_m = span{ 2^{m/2} ψ(2^m t − n) : n ∈ Z }.

In this paper the DWT has been used to suppress noise and reduce the order of the data in a wireless sensor network. Due to its ability to extract information in both the time and frequency domains, the DWT is a very powerful tool. The approach consists of decomposing the signal of interest into its detailed and smoothed components (high- and low-frequency). The detailed components of the signal at different levels of resolution localize the time and frequency of an event. Therefore, the DWT can extract the coarse features of the signal (compression) and filter out details at high frequency (noise). The DWT has been successfully applied to system analysis for noise removal and compression [4].
In this paper we present how DWT can be applied to detect and filter out noise and compress signals. A detailed discussion of theory and design methodology for the special-purpose filters for this application follows.

Theory of dwt-based filters for noise suppression and order reduction
DWT-based filters can be used to localize abrupt changes in signals in time and frequency. However, the lack of invariance to shifts in time (or space) makes these filters ill-suited for compression problems, so creative techniques have been devised to address this [5]. These techniques range from calculating the wavelet transforms of all circular shifts and selecting the "best" one that minimizes a cost function [6], to using an entropy criterion [7] to adaptively decompose a signal in a tree structure so as to minimize the entropy of the representation. In this paper a new approach to noise cancellation and data compression is proposed. The discrete Meyer adaptive wavelet (DMAW) is both translation- and scale-invariant and can represent a signal in a multi-scale format. While the DMAW is not the best fit for the entropy criterion, it is well suited to the proposed compression and cancellation purposes [8-10].
The process of implementing the DMAW filters starts with discretizing the Meyer wavelet, which is defined by its wavelet and scaling functions. Once the masks (discrete filter coefficients) for these two functions are obtained and convolved, the mask of the generating function (mother wavelet) follows as

  m[n] = (h ∗ g)[n],

where h and g denote the discretized scaling-function and wavelet-function masks, respectively. Decomposing the re-normalized signal according to the conventional DWT then yields the entire DMAW filter basis for the different scales.

Figures 1-3 show the experimental results of applying the proposed filter banks to a noisy sinusoidal signal. As is evident from these figures, a signal can be decomposed into as many levels as the application requires and the computational constraints allow. The levels shown from top to bottom represent the coarse to detailed components of the original signal. Once the signal is decomposed into its components, it is easy to discard the pieces that are not needed. For instance, the noise, which is the lowermost signal in Figure 1, can be discarded entirely. On the other hand, if compression is required, only the coarse component (the uppermost element, below the original signal) need be kept and the rest of the modules discarded; this signal alone is a fairly good approximation of the original. Figure 2 shows the thresholds and coefficients of the signal being filtered.
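A rough numerical analogue of this discard-the-details procedure is sketched below, using a multilevel Haar decomposition in place of the DMAW filters: all detail bands are zeroed and the signal is reconstructed from its coarse component alone, which both denoises and compresses the noisy sinusoid.

```python
import numpy as np

def haar_dwt(x):
    """One Haar DWT level: coarse and detail coefficient sequences."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(c, d):
    """Exact inverse of one Haar level."""
    x = np.empty(2 * len(c))
    x[0::2] = (c + d) / np.sqrt(2)
    x[1::2] = (c - d) / np.sqrt(2)
    return x

def coarse_approximation(x, levels=3):
    """Decompose `levels` times, discard every detail band (where noise
    concentrates), and reconstruct from the coarse component alone.
    Only 1/2**levels of the coefficients are retained."""
    c = np.asarray(x, dtype=float)
    for _ in range(levels):
        c, _ = haar_dwt(c)
    for _ in range(levels):
        c = haar_idwt(c, np.zeros_like(c))
    return c

t = np.linspace(0, 1, 512, endpoint=False)
rng = np.random.default_rng(1)
clean = np.sin(2 * np.pi * 2 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)
smoothed = coarse_approximation(noisy, levels=3)
# the coarse-only reconstruction lies closer to the clean signal
assert np.mean((smoothed - clean) ** 2) < np.mean((noisy - clean) ** 2)
```

Keeping only the coarse band at level 3 corresponds to an 8:1 reduction in retained coefficients, mirroring the compression-by-discarding described for Figure 1.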

Conclusion and Future Work
As expected from the theory, the DMAW filters performed well under noisy conditions in an imaging environment. The decomposed signal could easily be freed of noise and reduced to its coarse component alone, which in some cases can amount to a reduction of several orders of magnitude. Future plans include applying these filters to fused image datasets and comparing the two approaches.
Additionally, the results of this study can be used in the decision-making stage to demonstrate the difference this approach can make in the speed and efficiency of the process. Future work will address issues such as characterizing the parameters for simulation and modeling of the proposed filter; showing how complex examples with correlated image data can be filtered for redundancy; and comparing the proposed approach with other similar approaches, giving comparative results to support the claimed advantages both theoretically and experimentally.