Foreign-language translation section:
English original
Mine-hoist fault-condition detection based on
the wavelet packet transform and kernel PCA
Abstract: A new algorithm was developed to correctly identify fault conditions and accurately monitor fault development in a mine hoist. The new method is based on the Wavelet Packet Transform (WPT) and Kernel Principal Component Analysis (KPCA). For nonlinear monitoring systems the key to fault detection is the extraction of the main features. The wavelet packet transform is a signal-processing technique with excellent time-frequency localization characteristics, which makes it suitable for analysing time-varying or transient signals. KPCA maps the original input features into a higher-dimensional feature space through a nonlinear mapping; the principal components are then found in that space. The KPCA transformation was applied to extract the main nonlinear features from experimental fault-feature data after wavelet packet transformation. The results show that the proposed method affords credible fault detection and identification.
Key words: kernel method; PCA; KPCA; fault condition detection
1 Introduction
Because a mine hoist is a very complicated and variable system, it will inevitably develop faults during long-term running under heavy loads. These faults can damage equipment, stop work, reduce operating efficiency, and may even threaten the safety of mine personnel. The identification of running faults has therefore become an important component of the safety system. The key technique for hoist condition monitoring and fault identification is extracting information from the features of the monitoring signals and then producing a judgment. However, a mine hoist has many variables to monitor, and there are many complex correlations between the variables and the working equipment. This introduces uncertain factors and information manifested in complex forms, such as multiple or associated faults, which make fault diagnosis and identification considerably more difficult [1]. There are currently many conventional methods for extracting mine-hoist fault features, such as Principal Component Analysis (PCA) and Partial Least Squares (PLS) [2], and these methods have been applied in practice. However, they are essentially linear transformations, whereas the actual monitored process is nonlinear to varying degrees. Researchers have therefore proposed a series of nonlinear methods involving complex nonlinear transformations. Furthermore, these nonlinear methods are confined to fault detection: fault-variable separation and fault identification remain difficult problems. This paper describes a hoist fault-diagnosis feature-extraction method based on the Wavelet Packet Transform (WPT) and Kernel Principal Component Analysis (KPCA). We extract features by WPT and then extract the main features with a KPCA transform, which projects low-dimensional monitoring data samples into a high-dimensional space.
Then we perform a dimension reduction and reconstruction on the singular kernel matrix. After that, the target feature is extracted from the reconstructed nonsingular matrix. In this way the extracted target feature is distinct and stable. By comparing the analyzed data we show that the method proposed in this paper is effective.
2 Feature extraction based on WPT and KPCA
2.1 Wavelet packet transform
The wavelet packet transform (WPT) [3], a generalization of wavelet decomposition, offers a rich range of possibilities for signal analysis. The frequency bands of a hoist-motor signal collected by the sensor system are wide, and the useful information hides within a large amount of data. In general, some frequencies of the signal are amplified and some are suppressed by the information. That is to say, these broadband signals contain a large amount of useful information, but the information cannot be obtained directly from the data. The WPT is a fine signal-analysis method that decomposes the signal into many layers and gives better resolution in the time-frequency domain. After decomposition of the signal, the useful information within the different frequency bands is expressed by different wavelet coefficients. The concept of "energy information" is introduced to identify new information hidden in the data; an energy eigenvector is then used to quickly mine the information hidden within the large amount of data. The algorithm is:
Step 1: Perform a 3-layer wavelet packet decomposition of the echo signals and extract the signal characteristics of the eight frequency components, from low to high, in the 3rd layer.
Step 2: Reconstruct the coefficients of the wavelet packet decomposition. Use S_{3j} (j = 0, 1, …, 7) to denote the reconstructed signal of each frequency-band range in the 3rd layer. The total signal can then be denoted as:

S = Σ_{j=0}^{7} S_{3j}    (1)
Step 3: Construct the feature vectors of the echo signals of the GPR. When the coupled electromagnetic waves are transmitted underground they meet various inhomogeneous media, so the energy distribution of the echo signals in each frequency band will differ. Assume that the energy corresponding to S_{3j} (j = 0, 1, …, 7) is E_{3j} (j = 0, 1, …, 7), and that the magnitudes of the discrete points of the reconstructed signal S_{3j} are x_{jk} (j = 0, 1, …, 7; k = 1, 2, …, n), where n is the length of the signal. Then we can get:

E_{3j} = Σ_{k=1}^{n} |x_{jk}|²    (2)
Consider that we have made only a 3-layer wavelet packet decomposition of the echo signals. To describe the change of each frequency component in more detail, the 2nd-order statistical characteristics of the reconstructed signals are also regarded as feature components:

(3)
Step 4: The E_{3j} are often large, so we normalize them. Let E = Σ_{j=0}^{7} E_{3j}; the derived feature vector is then, at last:

T = [E_{30}/E, E_{31}/E, …, E_{37}/E]    (4)
The signal is decomposed by the wavelet package and then the useful characteristic-information feature vectors are extracted through the process given above. Compared to other traditional methods, such as the Hilbert transform, approaches based on WPT analysis are preferred because of the agility of the process and its systematic decomposition.
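As a concrete illustration, Steps 1–4 above can be sketched in Python. This is a minimal sketch only: the paper does not name the mother wavelet, so a Haar filter is assumed here, and the function names are ours.

```python
import math

def haar_step(x):
    # One level of Haar analysis: approximation (low-pass) and
    # detail (high-pass) halves. Assumes len(x) is even.
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def wavelet_packet_energies(signal, levels=3):
    # Full wavelet packet tree: split BOTH branches at every level,
    # yielding 2**levels frequency bands in the bottom layer
    # (eight bands for a 3-layer decomposition, Steps 1-2).
    bands = [list(signal)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            a, d = haar_step(b)
            nxt.extend([a, d])
        bands = nxt
    # Band energy E_3j = sum of squared coefficients (Eq. (2)).
    return [sum(c * c for c in b) for b in bands]

def normalized_feature_vector(energies):
    # Step 4: normalize the band energies to form the feature vector T.
    total = sum(energies)
    return [e / total for e in energies]
```

For a signal whose length is divisible by 2^3, this yields the eight band energies E_{30}, …, E_{37} and the normalized feature vector T. Because the Haar filters are orthonormal, the band energies sum to the total signal energy.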
2.2 Kernel principal component analysis
The method of kernel principal component analysis applies kernel methods to principal component analysis [4–5]. The principal components are the diagonal elements obtained after the covariance matrix has been diagonalized. Generally speaking, the first N values along the diagonal, corresponding to the largest eigenvalues, carry the useful information in the analysis. PCA solves for the eigenvalues and eigenvectors of the covariance matrix C by solving the characteristic equation [6]:

λv = Cv    (5)

where the eigenvalues λ and the eigenvectors v are the essence of PCA.
Let the nonlinear transformation Φ: R^N → F, x ↦ X, project the original space into the feature space F. The covariance matrix C of the original space then has the following form in the feature space:

C̄ = (1/M) Σ_{j=1}^{M} Φ(x_j) Φ(x_j)^T    (6)
Nonlinear principal component analysis can be considered as principal component analysis of C̄ in the feature space F. Obviously, all the eigenvalues λ of C̄ and eigenvectors V ∈ F \ {0} satisfy λV = C̄V. All of the solutions V lie in the subspace spanned by the transformed samples Φ(x_1), …, Φ(x_M), so for every k:

λ(Φ(x_k)·V) = (Φ(x_k)·C̄V)    (7)
Hence there exist coefficients α_i (i = 1, 2, …, M) such that

V = Σ_{i=1}^{M} α_i Φ(x_i)    (8)
From Eqs. (6), (7) and (8) we can obtain:

Mλ Σ_{i=1}^{M} α_i (Φ(x_k)·Φ(x_i)) = Σ_{i=1}^{M} α_i (Φ(x_k)·Σ_{j=1}^{M} Φ(x_j)) (Φ(x_j)·Φ(x_i))    (9)

where k = 1, 2, …, M. Define A as an M×M matrix whose elements are

A_{ij} = (Φ(x_i)·Φ(x_j))    (10)

From Eqs. (9) and (10) we can obtain MλAα = A²α. This is equivalent to:

Mλα = Aα    (11)
Let λ denote the eigenvalues of A, and α the corresponding eigenvectors.
We only need to calculate the projections of the test points onto the eigenvectors V^k that correspond to nonzero eigenvalues in F to perform the principal component extraction. Defining this projection as β_k, it is given by:

β_k = (V^k·Φ(x)) = Σ_{i=1}^{M} α_i^k (Φ(x_i)·Φ(x))    (12)

To compute this principal component directly we would need to know the exact form of the nonlinear image; moreover, as the dimension of the feature space increases, the amount of computation grows exponentially. Because Eq. (12) involves an inner-product computation, according to the Hilbert-Schmidt principle we can find a kernel function that satisfies the Mercer conditions and makes K(x_i, x) = (Φ(x_i)·Φ(x)). Eq. (12) can then be written:

β_k = Σ_{i=1}^{M} α_i^k K(x_i, x)    (13)
Here α^k is the eigenvector of K. In this way the dot product is computed in the original space, and the specific form of Φ(x) need not be known. The mapping Φ(x) and the feature space F are completely determined by the choice of kernel function [7–8].
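Numerically, the projection of Eqs. (12) and (13) reduces to an eigendecomposition of the (centered) kernel matrix. The following Python sketch illustrates this under our own naming assumptions; it is not the authors' implementation, and for brevity it only projects the training samples themselves.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Gaussian kernel: K(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def kpca(X, n_components=2, sigma=1.0):
    """Project the rows of X onto the leading kernel principal
    components (training-set projection only)."""
    M = len(X)
    K = gaussian_kernel(X, X, sigma)
    # Zero-mean the mapped data in feature space (centering the
    # kernel matrix).
    one = np.full((M, M), 1.0 / M)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Solve the symmetric eigenproblem of the centered kernel matrix.
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Normalize each alpha^k so that lambda_k (alpha^k . alpha^k) = 1.
    alphas = vecs / np.sqrt(np.clip(vals, 1e-12, None))
    # Projection of Eq. (13): beta_k = sum_i alpha_i^k K(x_i, x).
    return Kc @ alphas
```

Each column of the returned matrix is one nonlinear principal component score; the columns are mutually orthogonal over the training set.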
2.3 Description of the algorithm
The algorithm for extracting target features for fault-diagnosis recognition is:
Step 1: Extract the features by WPT;
Step 2: Calculate the kernel matrix, K, for each sample in the original input space;
Step 3: Calculate the kernel matrix after zero-mean processing of the mapped data in feature space;
Step 4: Solve the characteristic equation Mλα = Aα;
Step 5: Extract the k major components using Eq. (13) to derive a new vector.
Because the kernel function used in KPCA meets the Mercer conditions, it can be used in place of the inner product in the feature space. It is not necessary to consider the precise form of the nonlinear transformation: the mapping function can be nonlinear and the dimensions of the feature space can be very high, but the main feature components can still be extracted effectively by choosing a suitable kernel function and kernel parameters [9].
3 Results and discussion
The character of the most common faults of a mine hoist lies in the frequency content of the equipment's vibration signals. The experiment used the vibration signals of a mine hoist as test data. The collected vibration signals were first processed by the wavelet packet transform. Then, by observing the different time-frequency energy distributions in one level of the wavelet packet and extracting the features of the running motor, we obtained the original data sheet shown in Table 1. The fault-diagnosis model is used for fault identification or classification.
Experimental testing was conducted in two parts. The first part compared the performance of KPCA and PCA for feature extraction from the original data, namely the distribution of the projections of the main components of the tested fault samples. The second part compared the performance of the classifiers constructed after extracting features by KPCA or PCA. The minimum-distance and nearest-neighbor criteria were used for the classification comparison, which also tests KPCA and PCA performance. In the first part of the experiment, 300 fault samples were used to compare KPCA and PCA feature extraction. To simplify the calculations a Gaussian kernel function was used:
K(x, y) = exp(−‖x − y‖²/(2σ²))    (14)
The value of the kernel parameter, σ, ranges from 0.8 to 3 in intervals of 0.4 once the number of reduced dimensions is ascertained; the best correct-classification rate at that dimension is taken as the accuracy of the classifier with the best classification results. In the second part of the experiment, the classifiers' recognition rates after feature extraction were examined. Comparisons were done in two ways: minimum distance or nearest neighbor. 80% of the data were selected for training and the other 20% were used for testing. The results are shown in Tables 2 and 3.
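The two comparison criteria mentioned above are standard. A generic sketch (names ours, not the paper's code) applied to extracted feature vectors:

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def minimum_distance_classify(x, train_X, train_y):
    # Minimum-distance criterion: assign x to the class whose
    # mean feature vector (centroid) is closest.
    classes = sorted(set(train_y))
    means = {}
    for c in classes:
        members = [v for v, y in zip(train_X, train_y) if y == c]
        means[c] = [sum(col) / len(members) for col in zip(*members)]
    return min(classes, key=lambda c: euclidean(x, means[c]))

def nearest_neighbor_classify(x, train_X, train_y):
    # Nearest-neighbor criterion: assign x the label of the
    # closest single training sample.
    i = min(range(len(train_X)), key=lambda i: euclidean(x, train_X[i]))
    return train_y[i]
```

The minimum-distance rule compares against one centroid per class, so it is cheaper; the nearest-neighbor rule compares against every training sample.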
From Tables 2 and 3 it can be concluded that KPCA takes less time and has relatively higher recognition accuracy than PCA.
4 Conclusions
A principal component analysis using the kernel fault-extraction method was described. The problem is first transformed from a nonlinear space into a linear higher-dimensional space. The higher-dimensional feature space is then operated on by taking the inner product with a kernel function. This cleverly solves complex computing problems and overcomes the difficulties of high dimensions and local minimization. As can be seen from the experimental data, compared to traditional PCA the KPCA analysis greatly improves feature extraction and the efficiency of fault-state recognition.
References
[1] Ribeiro R L. Fault detection of open-switch damage in
voltage-fed PWM motor drive systems. IEEE Trans
Power Electron, 2003, 18(2): 587–593.
[2] Sottile J. An overview of fault monitoring and diagnosis
in mining equipment. IEEE Trans Ind Appl, 1994, 30(5):
1326–1332.
[3] Peng Z K, Chu F L. Application of wavelet transform in
machine condition monitoring and fault diagnostics: a
review with bibliography. Mechanical Systems and Signal
Processing, 2003(17): 199–221.
[4] Roth V, Steinhage V. Nonlinear discriminant analysis
using kernel functions. In: Advances in Neural Information
Processing Systems. MA: MIT Press, 2000: 568–574.
[5] Twining C, Taylor C. The use of kernel principal component
analysis to model data distributions. Pattern
Recognition, 2003, 36(1): 217–227.
[6] Muller K R, Mika S, Ratsch G, et al. An introduction to
kernel-based learning algorithms. IEEE Trans on Neural
Networks, 2001, 12(2): 181.
[7] Xiao J H, Fan K Q, Wu J P. A study on SVM for fault
diagnosis. Journal of Vibration, Measurement & Diagnosis,
2001, 21(4): 258–262.
[8] Zhao L J, Wang G, Li Y. Study of a nonlinear PCA fault
detection and diagnosis method. Information and Control,
2001, 30(4): 359–364.
[9] Xiao J H, Wu J P. Theory and application study of feature
extraction based on kernel. Computer Engineering,
2002, 28(10): 36–38.