Mine-hoist fault-condition detection based on the wavelet packet transform and kernel PCA

XIA Shi-xiong, NIU Qiang, ZHOU Yong, ZHANG Lei
School of Computer Science
Journal of China University of Mining & Technology, Vol. 18, No. 4
Received 15 May 2008; accepted 20 July 2008
Projects 50674086 supported by the National Natural Science Foundation of China, BS2006002 by the Society Development Science and Technology Plan of Jiangsu Province, and 20060290508 by the Doctoral Foundation of the Ministry of Education of China
Corresponding author. Tel: +86-516-83591702
Keywords: PCA; KPCA; fault condition detection

1 Introduction

Because a mine hoist is a very complicated and variable system, it will inevitably develop faults during long-term running under heavy loads. Faults can damage equipment, stop work, reduce operating efficiency, and may even threaten the safety of mine personnel. The identification of running faults has therefore become an important component of the safety system. The key technique for hoist condition monitoring and fault identification is extracting information from the features of the monitoring signals and then offering a judgment. However, a mine hoist has many variables to monitor, and there are many complex correlations between the variables and the working equipment. This introduces uncertain factors and information manifested in complex forms, such as multiple faults or associated faults, which make fault diagnosis and identification considerably more difficult [1].

There are currently many conventional methods for extracting mine-hoist fault features, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) [2]. These methods have been applied to the actual process, but they are essentially linear transformation approaches, whereas the actual monitoring process is nonlinear to varying degrees.
Thus, researchers have proposed a series of nonlinear methods involving complex nonlinear transformations. Furthermore, these nonlinear methods are confined to fault detection: fault-variable separation and fault identification remain difficult problems.

This paper describes a hoist fault-diagnosis feature-extraction method based on the Wavelet Packet Transform (WPT) and Kernel Principal Component Analysis (KPCA). We extract the features by WPT and then extract the main features using a KPCA transform, which projects low-dimensional monitoring data samples into a high-dimensional space. Then we perform a dimension reduction and reconstruction back from the singular kernel matrix. After that, the target feature is extracted from the reconstructed nonsingular matrix. In this way the extracted target feature is distinct and stable. By comparing the analyzed data we show that the method proposed in this paper is effective.

2 Feature extraction based on WPT and KPCA

2.1 Wavelet packet transform

The wavelet packet transform (WPT) method [3], which is a generalization of wavelet decomposition, offers a rich range of possibilities for signal analysis. Let $x_{jk}$ ($k = 1, 2, \ldots, n$) be the discrete amplitudes of the reconstructed sub-band signal $S_{3j}(t)$, where $n$ is the length of the signal. Then we can get:

$$E_{3j} = \int \lvert S_{3j}(t) \rvert^2 \, \mathrm{d}t = \sum_{k=1}^{n} \lvert x_{jk} \rvert^2 \quad (2)$$

Consider that we have made only a 3-layer wavelet packet decomposition of the echo signals. To describe the change of each frequency component in more detail, the second-order statistical characteristic of the reconstructed signal is also regarded as a feature:

$$D_{3j} = \frac{1}{n} \sum_{k=1}^{n} (x_{jk} - \bar{x}_j)^2 \quad (3)$$

Step 4: The $E_{3j}$ are often large, so we normalize them. Let $E = \sum_{j=0}^{7} E_{3j}$; the derived feature vector is then, at last:

$$\boldsymbol{T} = \left[ E_{30}/E, \; E_{31}/E, \; \ldots, \; E_{36}/E, \; E_{37}/E \right] \quad (4)$$

The signal is decomposed by a wavelet packet and the useful characteristic-information feature vectors are then extracted through the process given above.
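The energy-and-normalization steps above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: it assumes a Haar filter bank (the paper does not name its wavelet), a signal length that is a power of two, and it computes the variance of Eq.(3) on the sub-band coefficients rather than on fully reconstructed signals.

```python
import numpy as np

def haar_split(x):
    """One level of a Haar analysis filter bank: approximation and detail."""
    x = x[: len(x) // 2 * 2]                 # force even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass + downsample
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass + downsample
    return a, d

def wpt_level3(x):
    """Full 3-level wavelet packet decomposition: 8 sub-band arrays S30..S37."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(3):
        nxt = []
        for b in bands:
            a, d = haar_split(b)
            nxt.extend([a, d])
        bands = nxt
    return bands

def wpt_features(x):
    """Eqs.(2)-(4): band energies E3j, variances D3j, normalized vector T."""
    bands = wpt_level3(x)
    E = np.array([np.sum(b ** 2) for b in bands])                # Eq.(2)
    D = np.array([np.mean((b - b.mean()) ** 2) for b in bands])  # Eq.(3)
    T = E / E.sum()                                              # Eq.(4)
    return E, D, T
```

Because the Haar filter bank is orthonormal, the eight sub-band energies sum to the energy of the input signal, and the normalized vector T sums to one by construction.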
Compared to other traditional methods, like the Hilbert transform, approaches based on WPT analysis are preferred for the flexibility of the process and its systematic decomposition.

2.2 Kernel principal component analysis

The method of kernel principal component analysis applies kernel methods to principal component analysis [4-5]. Let $x_k \in R^N$, $k = 1, 2, \ldots, M$, with $\sum_{k=1}^{M} x_k = 0$. The principal components are the diagonal elements obtained after the covariance matrix,

$$\boldsymbol{C} = \frac{1}{M} \sum_{j=1}^{M} x_j x_j^{\mathrm{T}},$$

has been diagonalized. Generally speaking, the first $N$ values along the diagonal, corresponding to the large eigenvalues, carry the useful information in the analysis. PCA solves the eigenvalues and eigenvectors of the covariance matrix; solving the characteristic equation [6]

$$\lambda v = \boldsymbol{C} v = \frac{1}{M} \sum_{j=1}^{M} (x_j \cdot v) \, x_j \quad (5)$$

where the eigenvalues $\lambda \geq 0$ and the eigenvectors $v \in R^N \setminus \{0\}$, is the essence of PCA.

Let the nonlinear transformation $\Phi : R^N \rightarrow F$, $x \mapsto X$, project the original space into the feature space $F$. Then the covariance matrix $\boldsymbol{C}$ of the original space has the following form in the feature space:

$$\bar{\boldsymbol{C}} = \frac{1}{M} \sum_{j=1}^{M} \Phi(x_j) \Phi(x_j)^{\mathrm{T}} \quad (6)$$

Nonlinear principal component analysis can be considered to be principal component analysis of $\bar{\boldsymbol{C}}$ in the feature space $F$. Obviously, all the eigenvalues $\lambda \geq 0$ and eigenvectors $V \in F \setminus \{0\}$ of $\bar{\boldsymbol{C}}$ satisfy $\lambda V = \bar{\boldsymbol{C}} V$. All of the solutions lie in the subspace spanned by $\Phi(x_i)$, $i = 1, 2, \ldots, M$:

$$\lambda \, (\Phi(x_k) \cdot V) = (\Phi(x_k) \cdot \bar{\boldsymbol{C}} V), \quad k = 1, 2, \ldots, M \quad (7)$$

There exist coefficients $\alpha_i$ such that

$$V = \sum_{i=1}^{M} \alpha_i \Phi(x_i) \quad (8)$$

From Eqs.(6), (7) and (8) we can obtain:

$$\lambda \sum_{i=1}^{M} \alpha_i \, (\Phi(x_k) \cdot \Phi(x_i)) = \frac{1}{M} \sum_{i=1}^{M} \alpha_i \sum_{j=1}^{M} (\Phi(x_k) \cdot \Phi(x_j)) (\Phi(x_j) \cdot \Phi(x_i)) \quad (9)$$

where $k = 1, 2, \ldots, M$. Define $\boldsymbol{A}$ as an $M \times M$ matrix with elements

$$A_{ij} = (\Phi(x_i) \cdot \Phi(x_j)) \quad (10)$$

From Eqs.(9) and (10) we obtain $M \lambda \boldsymbol{A} \alpha = \boldsymbol{A}^2 \alpha$, which is equivalent to:

$$M \lambda \alpha = \boldsymbol{A} \alpha \quad (11)$$

Let $\lambda_1, \lambda_2, \ldots, \lambda_M$ be $\boldsymbol{A}$'s eigenvalues and $\alpha^1, \alpha^2, \ldots, \alpha^M$ the corresponding eigenvectors.
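The point of Eq.(11) is that the eigenproblem of $\bar{\boldsymbol{C}}$ in the (possibly huge) feature space reduces to an $M \times M$ eigenproblem on $\boldsymbol{A}$. A small numerical sketch can verify this equivalence; here an explicit toy feature map is chosen (degree-2 monomials, an illustrative assumption) so that both sides can be computed directly.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 2))              # M = 6 samples in R^2

def phi(x):
    """An explicit toy feature map (degree-2 monomials), so F is known."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

P = np.array([phi(x) for x in X])            # rows are Phi(x_i)
P = P - P.mean(axis=0)                       # centre the mapped data
M = P.shape[0]

C_bar = P.T @ P / M                          # covariance in F, cf. Eq.(6)
A = P @ P.T                                  # A_ij = (Phi(x_i) . Phi(x_j)), Eq.(10)

# Eq.(11): M*lambda*alpha = A*alpha, so the nonzero eigenvalues of C_bar
# equal A's eigenvalues divided by M.
eig_C = np.sort(np.linalg.eigvalsh(C_bar))[::-1]
eig_A = np.sort(np.linalg.eigvalsh(A))[::-1] / M
```

The three eigenvalues of `C_bar` coincide with the three largest eigenvalues of `A / M`; the remaining eigenvalues of `A` are zero, reflecting that all solutions lie in the span of the mapped samples.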
To perform principal component extraction we only need to compute the projections of a test point onto the eigenvectors $V^k$ in $F$ that correspond to nonzero eigenvalues. Denoting this projection by $\beta_k$, it is given by:

$$\beta_k = (V^k \cdot \Phi(x)) = \sum_{i=1}^{M} \alpha_i^k \, (\Phi(x_i) \cdot \Phi(x)) \quad (12)$$

It is easy to see that solving for the principal components directly requires knowing the exact form of the nonlinear map, and that as the dimension of the feature space increases the amount of computation grows exponentially. Because Eq.(12) involves only the inner product $(\Phi(x_i) \cdot \Phi(x))$, by the Hilbert-Schmidt theory we can find a kernel function that satisfies the Mercer conditions and makes $K(x_i, x) = (\Phi(x_i) \cdot \Phi(x))$. Then Eq.(12) can be written:

$$\beta_k = (V^k \cdot \Phi(x)) = \sum_{i=1}^{M} \alpha_i^k K(x_i, x) \quad (13)$$

Here $\alpha^k$ is an eigenvector of $\boldsymbol{K}$. In this way the dot product is computed in the original space, and the specific form of $\Phi(x)$ need not be known. The mapping $\Phi(x)$ and the feature space $F$ are completely determined by the choice of kernel function [7-8].

2.3 Description of the algorithm

The algorithm for extracting target features for fault-diagnosis recognition is:

Step 1: Extract the features by WPT;
Step 2: Calculate the kernel matrix $\boldsymbol{K}$ for the samples $x_i \in R^N$, $i = 1, 2, \ldots, M$, in the original input space, with $K_{ij} = (\Phi(x_i) \cdot \Phi(x_j))$;
Step 3: Recalculate the kernel matrix after zero-mean centring of the mapped data in feature space;
Step 4: Solve the characteristic equation $M \lambda \alpha = \boldsymbol{A} \alpha$;
Step 5: Extract the $k$ major components using Eq.(13) to derive a new vector.

Because the kernel function used in KPCA satisfies the Mercer conditions, it can be used instead of the inner product in feature space. It is not necessary to consider the precise form of the nonlinear transformation: the mapping function can be nonlinear and the dimensions of the feature space can be very high, but it is still possible to extract the main feature components effectively by choosing a suitable kernel function and kernel parameters [9].
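Steps 2-5 above can be sketched end-to-end in numpy. This is a minimal illustration under stated assumptions, not the paper's code: it assumes a Gaussian kernel, uses the standard double-centring formula for Step 3, and normalizes each $\alpha^k$ so that the corresponding $V^k$ has unit length in $F$.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma):
    """K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def kpca_scores(X, sigma, n_components):
    """Steps 2-5: kernel matrix, centring, eigenproblem, projections (Eq.13)."""
    M = X.shape[0]
    K = gaussian_kernel_matrix(X, sigma)                 # Step 2
    one = np.full((M, M), 1.0 / M)
    Kc = K - one @ K - K @ one + one @ K @ one           # Step 3: zero-mean in F
    lam, alpha = np.linalg.eigh(Kc)                      # Step 4 (ascending order)
    lam, alpha = lam[::-1], alpha[:, ::-1]               # sort descending
    keep = lam > 1e-12                                   # nonzero eigenvalues only
    lam = lam[keep][:n_components]
    alpha = alpha[:, keep][:, :n_components]
    alpha = alpha / np.sqrt(lam)                         # so that (V^k . V^k) = 1
    return Kc @ alpha                                    # Step 5: projections

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 6))
Z = kpca_scores(X, sigma=1.5, n_components=3)
```

The extracted component scores are mutually orthogonal, as principal component scores should be; the kernel parameter `sigma=1.5` here is an arbitrary illustrative choice.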
3 Results and discussion

The most common faults of a mine hoist manifest themselves in the frequency content of the equipment's vibration signals, so the experiment used the vibration signals of a mine hoist as test data. The collected vibration signals were first processed by the wavelet packet. Then, by observing the different time-frequency energy distributions at one level of the wavelet packet, we obtained the original data sheet shown in Table 1 by extracting the features of the running motor. The fault-diagnosis model is used for fault identification or classification.

Table 1 Original fault data sheet (eigenvector values ×10^4)

No.   E50       E51     E41      E31      E21      E11      Fault style
1     166.495   1.3498  0.13612  0.08795  0.19654  0.25780  F1
2     132.714   1.2460  0.10684  0.07303  0.12731  0.19007  F1
3     112.25    1.5353  0.21356  0.09543  0.16312  0.16495  F1
4     255.03    1.9574  0.44407  0.31501  0.33960  0.28204  F2
5     293.11    2.6592  0.66510  0.43674  0.27603  0.27473  F2
6     278.84    2.4670  0.49700  0.44644  0.28110  0.27478  F2
7     284.12    2.3014  0.29273  0.49169  0.27572  0.23260  F3
8     254.22    1.5349  0.47248  0.45050  0.28597  0.28644  F3
9     312.74    2.4337  0.42723  0.40110  0.34898  0.24294  F3
10    304.12    2.6014  0.77273  0.53169  0.37281  0.27263  F4
11    314.22    2.5349  0.87648  0.65350  0.32535  0.29534  F4
12    302.74    2.8337  0.72829  0.50314  0.38812  0.29251  F4

Experimental testing was conducted in two parts. The first part compared the performance of KPCA and PCA for feature extraction from the original data, namely the distribution of the projections of the main components of the tested fault samples. The second part compared the performance of the classifiers constructed after extracting features by KPCA or PCA. The minimum-distance and nearest-neighbor criteria were used for the classification comparison, which also tests KPCA and PCA performance. In the first part of the experiment, 300 fault samples were used to compare KPCA and PCA for feature extraction.
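The classifier comparison described above can be sketched as follows. This is a hedged illustration, not the paper's experiment: the data are synthetic stand-ins for the hoist fault features, a Gaussian kernel (as commonly used with KPCA) defines the feature-space distances, the minimum-distance rule assigns each test point to the class with the nearest feature-space mean, and the kernel parameter is swept over the grid 0.8, 1.2, ..., 2.8.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Gaussian kernel matrix between row-sample sets A and B."""
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma ** 2))

def kernel_min_distance(Xtr, ytr, Xte, sigma):
    """Minimum-distance rule in feature space:
    ||Phi(x) - mu_c||^2 = K(x,x) - 2*mean_i K(x,x_i) + mean_ij K(x_i,x_j)."""
    classes = np.unique(ytr)
    dist = np.zeros((Xte.shape[0], classes.size))
    for j, c in enumerate(classes):
        Xc = Xtr[ytr == c]
        dist[:, j] = (1.0                    # K(x,x) = 1 for a Gaussian kernel
                      - 2.0 * gaussian_kernel(Xte, Xc, sigma).mean(axis=1)
                      + gaussian_kernel(Xc, Xc, sigma).mean())
    return classes[np.argmin(dist, axis=1)]

# Synthetic two-class data standing in for the hoist fault features.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0.0, 0.5, (40, 6)), rng.normal(2.0, 0.5, (40, 6))])
y = np.array([0] * 40 + [1] * 40)
idx = rng.permutation(80)
tr, te = idx[:64], idx[64:]                  # 80% training, 20% testing

# Sweep the kernel parameter and keep the best classification rate.
best = max((kernel_min_distance(X[tr], y[tr], X[te], s) == y[te]).mean()
           for s in np.arange(0.8, 3.0, 0.4))
```

On such well-separated synthetic classes the best rate over the grid is essentially perfect; on real fault data the sweep is what selects the kernel parameter, as described in the text.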
To simplify the calculations a Gaussian kernel function was used:

$$K(x, y) = (\Phi(x) \cdot \Phi(y)) = \exp\left( -\frac{\lVert x - y \rVert^2}{2 \sigma^2} \right) \quad (14)$$

The kernel parameter $\sigma$ was varied between 0.8 and 3 in steps of 0.4 once the number of reduced dimensions had been ascertained, so the best correct-classification rate at that dimension is taken as the accuracy of the classifier with the best classification results.

In the second part of the experiment, the classifiers' recognition rate after feature extraction was examined. Comparisons were done in two ways: by minimum distance or by nearest neighbor. 80% of the data were selected for training and the other 20% were used for testing. The results are shown in Tables 2 and 3.

Table 2 Recognition rates of the PCA and KPCA methods (%)

                    PCA    KPCA
Minimum distance    91.4   97.2
Nearest-neighbor    90.6   96.5

Table 3 Recognition times of the PCA and KPCA methods (s)

        Extraction time  Classification time  Total time
PCA     216.4            38.1                 254.5
KPCA    129.5            19.2                 148.7

From Tables 2 and 3 it can be concluded that KPCA takes less time and has a relatively higher recognition accuracy than PCA.

4 Conclusions

A principal component analysis using the kernel fault-extraction method was described. The problem is first transformed from a nonlinear space into a linear higher-dimensional space. The higher-dimensional feature space is then operated on by taking the inner product with a kernel function. This cleverly avoids complex computations and overcomes the difficulties of high dimensionality and local minimization. As can be seen from the experimental data, compared to traditional PCA the KPCA analysis greatly improves feature extraction and the efficiency of recognizing fault states.

References

[1] Ribeiro R L. Fault detection of open-switch damage in voltage-fed PWM motor drive systems. IEEE Trans Power Electron, 2003, 18(2): 587-593.
[2] Sottile J.
An overview of fault monitoring and diagnosis in mining equipment. IEEE Trans Ind Appl, 1994, 30(5): 1326-1332.
[3] Peng Z K, Chu F L. Application of wavelet transform in machine condition monitoring and fault diagnostics: a review with bibliography. Mechanical Systems and Signal Processing, 2003, 17: 199-221.
[4] Roth V, Steinhage V. Nonlinear discriminant analysis using kernel functions. In: Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press, 2000: 568-574.
[5] Twining C, Taylor C. The use of kernel principal component analysis to model data distributions. Pattern Recognition, 2003, 36(1): 217-227.
[6] Muller K R, Mika S, Ratsch G, et al. An introduction to kernel-based learning algorithms. IEEE Trans on Neural Networks, 2001, 12(2): 181-201.
[7] Xiao J H, Fan K Q, Wu J P. A study on SVM for fault diagnosis. Journal of Vibration, Measurement & Diagnosis, 2001, 21(4): 258-262.
[8] Zhao L J, Wang G, Li Y. Study of a nonlinear PCA fault detection and diagnosis method. Information and Control, 2001, 30(4): 359-364.
[9] Xiao J H, Wu J P. Theory and application study of feature extraction based on kernel. Computer Engineering, 2002, 28(10): 36-38.