Transactions on Computer Science and Technology, June 2015, Volume 4, Issue 2, pp. 27-34

Non-commitment Entropy: A Novel Modality for Uncertainty Measurement

Pengyuan Wei#, Kun She, Wei Pan

School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China

#Email: py_wei@qq.com

Abstract: The three-way decision rule extends the traditional two-way decision. In real environments, a decision maker cannot easily choose between acceptance and rejection when the available information is uncertain or incomplete. In such cases, people tend to adopt a three-way decision for uncertain and high-risk choices, accepting an extra but necessary cost. Meanwhile, several general uncertainty measures have been proposed by generalizing Shannon's entropy, and information entropy makes uncertainty measures more accurate on the boundary of a three-way decision. In this paper, we propose several types of non-commitment entropy built on the relation of the 'third' decision, non-commitment, and employ the proposed model to evaluate the significance of attributes for classification.

Keywords: Three-way Decision, Entropy, Uncertainty Measurement

1 INTRODUCTION

An important application of rough set theory is to induce classification or decision rules that indicate the decision class of an object based on its values on some condition attributes [1-3]. In the field of three-way decision, a decision class is a subset of a universe of objects [4], and it is approximated by a pair of definable sets with respect to a logic language [5]. With deeper and more extensive research in decision theory, the advantages of three-way decision have been accepted by the rough set and decision theory community [6].

2 ENTROPY IN THE VIEW OF THREE-WAY DECISION

Uncertainty measures in approximation spaces are important in rough approximation [6-12]. A traditional binary decision corresponds to two choices. The following is an example of a cost matrix in three-way decision. Tom is going to buy a suit at the only clothing store in town. He selects a new style he likes and pays the seller 50 dollars for the coat. However, the seller tells him that while the coat is qualified, the pants might have quality problems, and this is the only pair available before the next day. The quality problem cannot be detected from the surface; worse, it may only show up a few days later. So Tom must decide whether or not to buy the pants, because a coat without pants is useless and the money spent on it would be wasted, as expressed in Table 1. Suppose the price of the coat is 50 dollars and the price of the pants is 150 dollars. An ordinary two-way decision offers two options: buy the pants or give up. If Tom buys the pants, the best outcome is no waste, which happens only if the pants are qualified; the risk of this choice is that if the pants are unqualified, he wastes 200 dollars. Not buying is the second choice; its risk is that if the pants are in fact qualified, he wastes 150 dollars. The costs of the different decisions are given in Table 1.


TABLE 1. COST MATRIX OF THREE-WAY DECISION

Action                  Qualified      Unqualified
Buy                     No waste       200 dollars
Do not buy              150 dollars    50 dollars
Buy at a farther shop   2 miles        2 miles

If the probability that the pants are qualified is high (for example, greater than or equal to α, with α > β), then buying the pants is the better choice. If the probability is low (for example, less than or equal to β), then not buying the pants is the better choice. If the probability is neither high nor low (greater than β and less than α), then both choices are unsatisfactory: whichever of the two decisions he is forced to make is unreasonable. Thus, Tom may take a third choice, buying the same pants at a farther shop, at the cost of only an extra 2 miles of travel. This is a typical solution of three-way decision.
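To make the threshold reasoning concrete, here is a minimal Python sketch of the expected-cost view behind Table 1. The 2-mile trip has no dollar price in the paper, so TRIP_COST below is a hypothetical dollar-equivalent, chosen only so that each of the three actions is optimal on some interval of p.

```python
# Expected-cost reading of Table 1. TRIP_COST is a hypothetical
# dollar-equivalent of the 2-mile trip (not given in the paper),
# chosen so that every action is optimal on some interval of p.
TRIP_COST = 60.0

# COSTS[action] = (cost if pants qualified, cost if unqualified)
COSTS = {
    "buy":         (0.0, 200.0),
    "do_not_buy":  (150.0, 50.0),
    "buy_farther": (TRIP_COST, TRIP_COST),
}

def expected_costs(p):
    """Expected cost of each action when P(pants qualified) = p."""
    return {a: p * cq + (1.0 - p) * cu for a, (cq, cu) in COSTS.items()}

def best_action(p):
    ec = expected_costs(p)
    return min(ec, key=ec.get)

# Sweeping p exposes the three regions: reject for small p, the
# non-commitment action in the middle, accept for large p.
for p in (0.05, 0.3, 0.5, 0.8, 0.95):
    print(p, best_action(p))
```

Solving for where the expected-cost lines cross recovers thresholds of exactly the (α, β) form: with these numbers, "do not buy" is best below β = 0.1 and "buy" is best above α = 0.7, with the non-commitment action best in between.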

2.1 Three-way Decision

Unlike the rules in classical rough set theory, all three types of rules may be uncertain and nondeterministic. These rules again lead to a three-way decision based on two threshold values α and β, which are obtained from the evaluation function. Intuitively, they represent the levels of our tolerance for making incorrect decisions: for positive rules, the error rate of accepting a non-member of d as a member of d is below 1 − α; conversely, for negative rules, the error rate of rejecting a member of d as a non-member of d is below β. Given an information system IS = (U, A) and a binary relation R, with U = {x₁, x₂, ..., xₙ}, for every xᵢ ∈ U the information granule is defined as follows:

$[x_i]_R = \frac{r_{1i}}{x_1} + \frac{r_{2i}}{x_2} + \cdots + \frac{r_{ni}}{x_n}$    (1)

It is assumed that the mapping I_a is single-valued; in this case, the value of an object x ∈ U on an attribute a ∈ At is denoted by I_a(x). $\underline{apr}_{(\alpha,\beta)}(X)$ and $\overline{apr}_{(\alpha,\beta)}(X)$ are called the pair of lower and upper approximations, defined as follows:

$\underline{apr}_{(\alpha,\beta)}(X) = \{[x] \in U \mid P(X \mid [x]) \ge \alpha\}$    (2)

$\overline{apr}_{(\alpha,\beta)}(X) = \{[x] \in U \mid P(X \mid [x]) > \beta\}$    (3)

Three-way decision is based on the classical rough set; Yao gives the basic model of three-way decision [4]. Let (E, ⪰) be a totally ordered set, where ⪰ is a total order relation. The pair α and β are thresholds satisfying β ≺ α, that is, (β ⪯ α) ∧ ¬(α ⪯ β). The set E⁺ = {x ∈ E | x ⪰ α} represents the domain of acceptance, and the set E⁻ = {x ∈ E | x ⪯ β} represents the domain of rejection. Given an evaluation function v : U → E, three-way decision is defined as follows:

$POS_{(\alpha,\beta)}(X) = \{x \in U \mid v(x) \succeq \alpha\}$    (4)

$NEG_{(\alpha,\beta)}(X) = \{x \in U \mid v(x) \preceq \beta\}$    (5)

$BND_{(\alpha,\beta)}(X) = \overline{apr}_{(\alpha,\beta)}(X) - \underline{apr}_{(\alpha,\beta)}(X) = \{x \in U \mid \beta \prec v(x) \prec \alpha\}$    (6)

The evaluation function v(x) is usually given by experience; in this paper, it is described by the probability function P(X | [x]). The positive region POS_{(α,β)}(X), the negative region NEG_{(α,β)}(X) and the boundary region BND_{(α,β)}(X) are based on the rough set approximations of X, and divide the universe U into three disjoint regions.
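As a minimal sketch (object names and probabilities are invented for illustration), the partition of equations (4)-(6) can be computed directly from the evaluation values:

```python
def three_way_regions(prob, alpha, beta):
    """Split objects into POS/NEG/BND by eqs. (4)-(6).

    prob  -- dict mapping each object x in U to P(X | [x])
    alpha, beta -- thresholds with 0 <= beta < alpha <= 1
    """
    assert 0.0 <= beta < alpha <= 1.0
    pos = {x for x, p in prob.items() if p >= alpha}        # accept
    neg = {x for x, p in prob.items() if p <= beta}         # reject
    bnd = {x for x, p in prob.items() if beta < p < alpha}  # non-commitment
    return pos, neg, bnd

# Hypothetical evaluation values for five objects.
prob = {"x1": 0.95, "x2": 0.10, "x3": 0.55, "x4": 0.70, "x5": 0.30}
pos, neg, bnd = three_way_regions(prob, alpha=0.8, beta=0.2)
print(pos, neg, bnd)   # x1 accepted, x2 rejected, x3/x4/x5 non-committed
```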


2.2 Kernel Entropy

The cardinality of [x_i]_R is computed in the form $|[x_i]_R| = \sum_{j=1}^{n} r_{ij}$. Thus, the expected cardinality of [x_i]_R is computed as follows, where |U| is the cardinality of the set U:

$Card([x_i]) = \frac{|[x_i]_R|}{|U|}$    (7)

The kernel entropy is defined as follows:

$H_P(A) = -\frac{1}{|U|} \sum_{i=1}^{n} \log_2 Card([x_i]_R)$    (8)
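Under the fuzzy-granule reading of equation (1), where r_ij is the degree to which x_j is related to x_i, equations (7) and (8) can be sketched as follows. NumPy is assumed, and the relation matrix is a toy example.

```python
import numpy as np

def kernel_entropy(R):
    """Kernel entropy H_P(A) of eq. (8).

    R is an n x n relation matrix with R[i, j] = r_ij, the degree to
    which x_j belongs to the granule [x_i]_R of eq. (1).
    """
    n = R.shape[0]
    card = R.sum(axis=1) / n       # eq. (7): Card([x_i]) = |[x_i]_R| / |U|
    return -np.log2(card).mean()   # eq. (8)

# A toy reflexive, symmetric similarity relation on four objects.
R = np.array([[1.0, 0.8, 0.1, 0.0],
              [0.8, 1.0, 0.2, 0.1],
              [0.1, 0.2, 1.0, 0.9],
              [0.0, 0.1, 0.9, 1.0]])
print(kernel_entropy(R))   # ~0.97 for this toy relation
```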

3 NON-COMMITMENT ENTROPY

3.1 Several Types of NCE

Definition 1. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} and A = {A₁, A₂, ..., Aₘ} are finite, nonempty sets of objects and attributes, and let X ⊆ U. α and β are a pair of thresholds satisfying β < α. The equivalence class of non-commitment [x_i]_{NR} is computed as follows:

$[x_i]_{NR} = BND_{(\alpha,\beta)}(X)$    (9)

Thus, the expected cardinality of [x_i]_{NR} is computed as follows:

$Card([x_i]_{NR}) = \frac{|[x_i]_{NR}|}{|U|}$    (10)

Definition 2. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} and A = {A₁, A₂, ..., Aₘ} are finite, nonempty sets of objects and attributes, and let B ⊆ A. R = {R₁, R₂, ..., Rₗ} is a set of binary relations, and α and β are a pair of thresholds satisfying β < α. Non-Commitment Entropy (NCE) is denoted by:

$NH_{(\alpha,\beta)}(B)_R = -\frac{1}{|U|} \sum_{i=1}^{n} \log_2 Card([x_i]_{NR})$    (11)

Definition 3. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} and A = {A₁, A₂, ..., Aₘ} are finite, nonempty sets of objects and attributes, and let X ⊆ U. α and β are a pair of thresholds satisfying β < α. The equivalence class of positive non-commitment [x_i]_{NPR} is computed as follows:

$[x_i]_{NPR} = BND_{(\alpha,\beta)}(X) \cap X$    (12)

Definition 4. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} and A = {A₁, A₂, ..., Aₘ} are finite, nonempty sets of objects and attributes, X ⊆ U, X ∪ X^C = U and X ∩ X^C = ∅. α and β are a pair of thresholds satisfying β < α. The equivalence class of negative non-commitment [x_i]_{NNR} is computed as follows:

$[x_i]_{NNR} = BND_{(\alpha,\beta)}(X) \cap X^C$    (13)

Thus, the cardinality of [x_i]_{NPR} is computed as follows:

$Card([x_i]_{NPR}) = \frac{|[x_i]_{NPR}|}{|BND_{(\alpha,\beta)}(X)|}$    (14)

Similarly, the cardinality of [x_i]_{NNR} is computed as follows:

$Card([x_i]_{NNR}) = \frac{|[x_i]_{NNR}|}{|BND_{(\alpha,\beta)}(X)|}$    (15)



Definition 5. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} and A = {A₁, A₂, ..., Aₘ} are finite, nonempty sets of objects and attributes, and let B ⊆ A. R = {R₁, R₂, ..., Rₗ} is a set of binary relations, and α and β are a pair of thresholds satisfying β < α. Non-Commitment Positive Entropy (NCPE) is denoted by:

$NpH_{(\alpha,\beta)}(B)_R = -\frac{1}{|U|} \sum_{i=1}^{n} \log_2 Card([x_i]_{NPR})$    (16)

Definition 6. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} and A = {A₁, A₂, ..., Aₘ} are finite, nonempty sets of objects and attributes, and let B ⊆ A. R = {R₁, R₂, ..., Rₗ} is a set of binary relations, and α and β are a pair of thresholds satisfying β < α. Non-Commitment Negative Entropy (NCNE) is denoted by:

$NnH_{(\alpha,\beta)}(B)_R = -\frac{1}{|U|} \sum_{i=1}^{n} \log_2 Card([x_i]_{NNR})$    (17)
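Read literally, Definitions 1-6 assign the same non-commitment class, the boundary region itself, to every object, so each average collapses to a single term. A minimal sketch under that literal reading (the probabilities and the set X below are invented for illustration):

```python
import math

def nce_measures(prob, X, alpha, beta):
    """NCE (eq. 11), NCPE (eq. 16) and NCNE (eq. 17), following
    Definitions 1-6 literally: every object's non-commitment class is
    the boundary region, so each average collapses to a single term.

    prob -- dict: object -> P(X | [x]);  X -- set of members of X.
    Assumes BND, BND & X and BND - X are all nonempty, otherwise the
    logarithms below are undefined.
    """
    n = len(prob)
    bnd = {x for x, p in prob.items() if beta < p < alpha}  # eq. (6)
    nh = -math.log2(len(bnd) / n)               # eqs. (10) + (11)
    nph = -math.log2(len(bnd & X) / len(bnd))   # eqs. (14) + (16)
    nnh = -math.log2(len(bnd - X) / len(bnd))   # eqs. (15) + (17)
    return nh, nph, nnh

# Toy example: five objects, three of them in X.
prob = {"x1": 0.9, "x2": 0.6, "x3": 0.5, "x4": 0.4, "x5": 0.1}
X = {"x1", "x2", "x3"}
print(nce_measures(prob, X, alpha=0.8, beta=0.2))
```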

Definition 7. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ}. Given a set of general relations R = {R₁, R₂, ..., Rₗ} and B₁, B₂ ⊆ A, let [x_i]¹_{NPR} and [x_i]²_{NPR} be induced by B₁ and B₂, and let α and β be a pair of thresholds satisfying β < α. The positive non-commitment joint entropy is expressed in the form:

$NpH_{(\alpha,\beta)}(B_1 \cup B_2)_R = -\frac{1}{|U|} \sum_{i=1}^{n} \log_2 Card(\max([x_i]^1_{NPR}, [x_i]^2_{NPR}))$    (18)

Similarly, the negative non-commitment joint entropy is expressed in the form:

$NnH_{(\alpha,\beta)}(B_1 \cup B_2)_R = -\frac{1}{|U|} \sum_{i=1}^{n} \log_2 Card(\min([x_i]^1_{NNR}, [x_i]^2_{NNR}))$    (19)

Definition 8. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ}. Given a set of general relations R = {R₁, R₂, ..., Rₗ}, B₁, B₂ ⊆ A, and a pair of thresholds α and β satisfying β < α, the negative non-commitment conditional entropy is expressed in the form:

$NnH_{(\alpha,\beta)}(B_2 \mid B_1)_R = NnH_{(\alpha,\beta)}(B_1 \cup B_2)_R - NnH_{(\alpha,\beta)}(B_1)_R$    (20)

Similarly, the positive non-commitment conditional entropy is expressed in the form:

$NpH_{(\alpha,\beta)}(B_2 \mid B_1)_R = NpH_{(\alpha,\beta)}(B_1 \cup B_2)_R - NpH_{(\alpha,\beta)}(B_1)_R$    (21)

The conditional entropy reflects the uncertainty of B₂ when B₁ is given.
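The joint and conditional entropies of equations (18)-(21) can be sketched under the same literal reading. One assumption is unavoidable here: the paper does not say how max and min order the classes, so the sketch takes them to mean the larger and smaller normalized cardinality.

```python
import math

def nc_joint_entropy(card_b1, card_b2, positive=True):
    """Non-commitment joint entropy, eqs. (18)-(19), under the literal
    reading used above (every object contributes the same normalized
    cardinality). 'max'/'min' are interpreted as the larger/smaller of
    the two per-object cardinalities -- an assumption, since the paper
    leaves the ordering of classes implicit."""
    pick = max if positive else min
    return -math.log2(pick(card_b1, card_b2))

def nc_conditional_entropy(card_b1, card_b2, positive=True):
    """Non-commitment conditional entropy NH(B2 | B1), eqs. (20)-(21):
    joint entropy of B1 and B2 minus the entropy of B1 alone."""
    h_b1 = -math.log2(card_b1)
    return nc_joint_entropy(card_b1, card_b2, positive) - h_b1

# Example: NNR cardinalities 1/2 under B1 and 1/3 under B2.
print(nc_conditional_entropy(1/2, 1/3, positive=False))  # ~0.585
```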

3.2 Several Theorems about NCE

Theorem 1. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} is a finite, nonempty set of objects, and R = {R₁, R₂, ..., Rₗ} is a set of binary relations. Let α₁, α₂, β₁ and β₂ be thresholds satisfying β₁ ≤ β₂ < α₂ ≤ α₁. Then we have:

$NH_{(\alpha_2,\beta_2)}(A)_R \ge NH_{(\alpha_1,\beta_1)}(A)_R$    (22)

Proof. For every xᵢ ∈ U, β₁ ≤ β₂ < α₂ ≤ α₁ implies α₂ − β₂ ≤ α₁ − β₁ and [x_i]²_{NR} ⊆ [x_i]¹_{NR}. Therefore Card([x_i]²_{NR}) ≤ Card([x_i]¹_{NR}), and hence NH_{(α₂,β₂)}(A)_R ≥ NH_{(α₁,β₁)}(A)_R.

Corollary 1. Let IS = (U, A) be an information system, where U = {x₁, x₂, ..., xₙ} is a finite, nonempty set of objects, and R = {R₁, R₂, ..., Rₗ} is a set of binary relations. Let α₁, α₂, β₁ and β₂ be thresholds satisfying β₁ ≤ β₂ < α₂ ≤ α₁. Then we have:

$NpH_{(\alpha_2,\beta_2)}(A)_R \ge NpH_{(\alpha_1,\beta_1)}(A)_R$    (23)

$NnH_{(\alpha_2,\beta_2)}(A)_R \ge NnH_{(\alpha_1,\beta_1)}(A)_R$    (24)



Proof. As in Theorem 1, we have Card([x_i]²_{NPR}) ≤ Card([x_i]¹_{NPR}), hence NpH_{(α₂,β₂)}(A)_R ≥ NpH_{(α₁,β₁)}(A)_R; and Card([x_i]²_{NNR}) ≤ Card([x_i]¹_{NNR}), hence NnH_{(α₂,β₂)}(A)_R ≥ NnH_{(α₁,β₁)}(A)_R.
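A quick numeric check of Theorem 1 under the literal reading used above: narrowing the thresholds from (α₁, β₁) = (0.9, 0.1) to (α₂, β₂) = (0.7, 0.3) shrinks the boundary region and thus raises the non-commitment entropy. The probabilities are illustrative.

```python
import math

probs = [0.05, 0.15, 0.35, 0.45, 0.55, 0.75, 0.85, 0.95]  # P(X | [x_i])

def nh(alpha, beta):
    """NCE of eq. (11) under the literal reading of Definition 1."""
    bnd = [p for p in probs if beta < p < alpha]
    return -math.log2(len(bnd) / len(probs))

print(nh(0.9, 0.1))  # wide interval:   |BND| = 6 of 8 -> NH ~ 0.415
print(nh(0.7, 0.3))  # narrow interval: |BND| = 3 of 8 -> NH ~ 1.415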

4 FEATURE REDUCTION AND EXPERIMENTATION BASED ON NCE

4.1 Feature Reduction

From the definitions in Section 3, feature reduction methods can be constructed that use the NCE to gauge the significance of feature subsets. Here we show a feature selection method based on NCE. We call the information system IS = (U, C, D) a decision system, where C is a set of condition attributes and D is the decision. As expressed in Definition 8, the non-commitment conditional entropies NnH_{(α,β)}(D | C)_R and NpH_{(α,β)}(D | C)_R measure the uncertainty of D when the condition attributes C are given; the relevance between condition attributes and the decision is reflected by the conditional entropy.

Definition 9. Let IS = (U, C, D) be a decision system, where U = {x₁, x₂, ..., xₙ} is a nonempty finite set of objects. Given a set of general binary relations R = {R₁, R₂, ..., Rₗ} and B ⊆ C, we define the significance of an attribute subset B from the non-commitment point of view:

$NpSIG_{(\alpha,\beta)}(B, D)_R = NpH_{(\alpha,\beta)}(D)_R - NpH_{(\alpha,\beta)}(D \mid B)_R = NpH_{(\alpha,\beta)}(D)_R + NpH_{(\alpha,\beta)}(B)_R - NpH_{(\alpha,\beta)}(D \cup B)_R$    (25)

$NnSIG_{(\alpha,\beta)}(B, D)_R = NnH_{(\alpha,\beta)}(D)_R - NnH_{(\alpha,\beta)}(D \mid B)_R = NnH_{(\alpha,\beta)}(D)_R + NnH_{(\alpha,\beta)}(B)_R - NnH_{(\alpha,\beta)}(D \cup B)_R$    (26)

In the above equations, NpSIG_{(α,β)}(B, D)_R evaluates the significance of the attribute subset B by the positive non-commitment, while NnSIG_{(α,β)}(B, D)_R evaluates it by the negative non-commitment. Clearly, NpSIG_{(α,β)}(B, D)_R and NnSIG_{(α,β)}(B, D)_R are symmetric uncertainty measures. As is well known, mutual information is widely applied in evaluating features and constructing decision trees, but its classical definition can only deal with discrete features; NpSIG_{(α,β)}(B, D)_R and NnSIG_{(α,β)}(B, D)_R can also deal with numerical and fuzzy information. Formally, a forward search algorithm for feature selection based on NCE is written as follows:

Algorithm 1. Feature Selection based on NCE (FD-NCE)
Input: sample set U, feature set C, decision D and stopping threshold α
Output: reduct red
1. red ← ∅, v ← 0;
2. while red ≠ C
3.   for each aᵢ ∈ (C − red)
4.     generate DTᵢ = <U, red ∪ {aᵢ}, D>
5.     compute v_{red∪aᵢ}(D) = |BND_{red∪aᵢ,(α,β)}(D)| / |U|
6.     compute Sig(aᵢ, red, D) = v_{red∪aᵢ}(D) − v_{red}(D)
7.   choose aₖ such that Sig(aₖ, red, D) = max_i Sig(aᵢ, red, D)
8.   if Sig(aₖ, red, D) > α
9.     red ← red ∪ {aₖ}
10.    go to step 3


11.  else
12.    exit while
13.  end if
14. end while
15. return red

The time complexity of the algorithm is O(n²m log m), where n and m are the numbers of features and samples, respectively. It is worth noting that the proposed measures of dependency and mutual information can be incorporated with other search strategies used in feature selection algorithms, such as ABB (Automatic Branch and Bound), Set Cover, probabilistic search, and GP (Genetic Programming). In this study we do not compare or discuss the influence of search strategies on the results of feature selection; we focus on comparing the proposed method under different evaluation measures.
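A runnable sketch of Algorithm 1, under stated assumptions: the boundary region of a multi-class decision is simplified to "equivalence classes whose dominant decision value stays below α"; the significance is taken as the reduction of the boundary fraction when a candidate attribute is added (the pseudocode writes the difference the other way round, which would make every significance non-positive); and the stopping threshold is renamed `stop` to avoid clashing with the decision threshold α.

```python
from collections import Counter, defaultdict

def boundary_fraction(data, attrs, decisions, alpha):
    """|BND_{attrs,(alpha,beta)}(D)| / |U|, simplified for a multi-class
    decision: an equivalence class over `attrs` is committed when its
    dominant decision value has relative frequency >= alpha, and
    non-committed (boundary) otherwise."""
    classes = defaultdict(list)
    for row, d in zip(data, decisions):
        classes[tuple(row[a] for a in attrs)].append(d)
    n_bnd = sum(len(ds) for ds in classes.values()
                if max(Counter(ds).values()) / len(ds) < alpha)
    return n_bnd / len(data)

def fd_nce(data, attributes, decisions, alpha=0.8, stop=1e-3):
    """Greedy forward selection in the spirit of Algorithm 1 (FD-NCE)."""
    red = []
    while set(red) != set(attributes):
        v_red = boundary_fraction(data, red, decisions, alpha)
        sigs = {a: v_red - boundary_fraction(data, red + [a], decisions, alpha)
                for a in attributes if a not in red}
        best = max(sigs, key=sigs.get)
        if sigs[best] <= stop:   # no candidate shrinks the boundary enough
            break
        red.append(best)
    return red

# Tiny invented decision table: two condition attributes, one decision.
data = [{"a": 0, "b": 1}, {"a": 0, "b": 0}, {"a": 1, "b": 1}, {"a": 1, "b": 0}]
decisions = ["y", "y", "n", "y"]
print(fd_nce(data, ["a", "b"], decisions))   # -> ['a', 'b']
```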

4.2 Experimentation

This section presents the experimental evaluation of the selection methods for the task of pattern classification over six benchmark data sets with several different classifiers; we compare the effectiveness of NCE in evaluating feature quality. The data sets, described in Table 2, were downloaded from the UCI Machine Learning Repository. The numerical attributes of the samples are linearly normalized as follows:

$x = (x - x_{min}) / (x_{max} - x_{min})$    (27)

where x_min and x_max are the bounds of the given attribute. Two popular learning algorithms, CART and linear SVM, are introduced to evaluate the quality of the selected features; the parameters of the linear SVM are kept at their default values. The experiments were run in 10-fold cross-validation mode.
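The normalization of equation (27) and the evaluation protocol can be sketched as follows; scikit-learn is assumed, with DecisionTreeClassifier standing in for CART, and the sketch assumes no attribute is constant (otherwise the denominator in eq. (27) vanishes).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

def minmax_normalize(X):
    """Eq. (27): linearly rescale every attribute to [0, 1]."""
    xmin, xmax = X.min(axis=0), X.max(axis=0)
    return (X - xmin) / (xmax - xmin)   # assumes xmax > xmin per column

def evaluate(X, y, selected):
    """Mean 10-fold cross-validation accuracy of CART and linear SVM
    on the selected feature columns, mirroring Section 4.2."""
    Xs = minmax_normalize(X)[:, selected]
    cart = cross_val_score(DecisionTreeClassifier(), Xs, y, cv=10)
    svm = cross_val_score(LinearSVC(), Xs, y, cv=10)
    return cart.mean(), svm.mean()
```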

ID

Data

Sample

Features

Class

1

wdbc

569

31

2

2

iono

351

34

2

3

wine

178

13

3

4

heart

270

13

2

5

sonar

208

60

2

6

glass

214

9

7

We use a greedy search procedure to find optimal features in terms of these evaluation functions. However, greedy search usually cannot obtain optimal solutions, and we may get completely different feature subsets if the first feature selected by different algorithms differs. Although the selected features are different, they may all be effective for classification learning.

TABLE 3. SUBSETS OF FEATURES SELECTED WITH NCPE, NCNE, FE AND KE

Data    NCPE                NCNE                  FE                        KE
wdbc    29,22,23,12,9       24,29,23,30,9,13,10   29,22,23,12,9             29,22,23,12,9
iono    5,6,8,24,28,10,21   5,6,8,23,34,29        5,6,8,25,28,24,34,7       5,6,34,29,8,23
wine    7,1,5,10            7,1,11,14             7,1,5,13                  7,1,11,4
heart   13,12,3,11,1,7      13,12,3,10,1,4        13,12,3,10,1,7,11,2,8,4   13,12,3,10,1,4,5
sonar   12,27,21,37,32      2,13,7,23,11          12,27,21,37,32,30,54      2,34,13,7,23
glass   3,7,5,4,1           3,7,9,1,5             3,7,4,9,5                 3,7,4,9,1,5

In feature selection we consider the ranking of features, and sometimes a small difference in feature quality may lead to a completely different ranking. Another source of difference is the search strategy used in these algorithms.


TABLE 4. CLASSIFICATION ACCURACIES BASED ON CART (%)

Data    NCPE         NCNE         FE           KE
wdbc    93.1±3.2     93.7±3.5     93.0±3.5     94.2±3.5
iono    87.5±5.5     88.5±6.3     88.1±6.0     88.6±6.5
wine    92.4±7.2     89.9±8.3     92.2±7.5     89.9±8.5
heart   78.5±7.5     81.4±9.2     75.2±9.2     80.0±7.7
sonar   72.9±12.0    72.2±15.1    70.7±10.3    72.2±15.3
glass   67.7±12.9    65.3±14.2    65.9±12.7    65.1±14.6

TABLE 5. CLASSIFICATION ACCURACIES BASED ON LINEAR SVM (%)

Data    NCPE         NCNE         FE           KE
wdbc    96.5±2.1     96.5±2.2     96.1±2.1     95.9±2.1
iono    83.5±5.5     85.1±5.8     85.0±5.3     85.0±5.9
wine    97.4±3.5     94.4±5.3     97.2±3.9     94.4±5.2
heart   83.2±9.0     81.8±7.1     82.9±9.4     82.2±6.0
sonar   65.0±11.5    67.7±15.9    61.9±11.6    67.8±15.7
glass   60.9±9.4     58.9±8.7     60.4±9.1     57.1±8.7

In this experiment we compare NCPE and NCNE with Kernel Entropy (KE) and Fuzzy Entropy (FE); the parameters of KE and FE are kept consistent with Refs. [5] and [6]. We compute the significance of each single feature with the four evaluation functions, and report the classification accuracies of the selected features using CART and linear SVM. Comparing with the performance on the raw data, we find that although most of the features have been removed, most of the classification accuracies on the reduced data sets do not decrease but increase. The experimental results show that, no matter which classification algorithm is used, NCE is better than or equivalent to KE.

5 CONCLUSION

Generally, we can see a potential similarity between three-way decision and classical information entropy. NCE has been developed as an extension of information entropy: in the NCE model, information entropy is introduced to describe the uncertainty of the non-commitment relationship, and the two types of NCE (NCPE and NCNE) are influenced by the two corresponding rates of non-commitment. We proposed forward greedy feature selection algorithms, which should help in applying this theory to practical problems. NCE provides an effective approach in the context of three-way decision, and the experimental results show that it has advantages for rule extraction and knowledge discovery. We experimented with a number of threshold values over different data sets, used the four entropies for feature selection, and computed the classification accuracies obtained by single features with linear SVM and CART; high values of the correlation coefficient reflect the classification capability of the corresponding features, so the value domain generating a large correlation coefficient is used in computing similarity. Future work could move along three directions. First, risks and costs exist in NCE; how should an optimization model minimize these risks and costs? Second, the current feature selection algorithms based on entropy might not be robust enough for real-world applications, and how to improve them is an important issue. Last, for large data samples, we will continue to improve the efficiency of NCE.

ACKNOWLEDGMENT

The authors wish to thank the anonymous reviewers for their constructive comments on this study. This work was supported by the National Natural Science Foundation of China (No. 61450110440).


REFERENCES

[1] Pawlak Z. Rough Sets: Theoretical Aspects of Reasoning about Data. Dordrecht: Kluwer Academic Publishers, 1991

[2] Pawlak Z, Skowron A. "Rudiments of rough sets." Information Sciences, 177(2007): 3-27. doi:10.1016/j.ins.2006.06.003

[3] Yao Y. "A note on definability and approximations." LNCS Transactions on Rough Sets, 7(2007): 274-282

[4] Yao Y. "Three-way decision: an interpretation of rules in rough set theory." Proceedings of the 4th International Conference on Rough Sets and Knowledge Technology. Gold Coast, Australia, July 20-24, 2009

[5] Hu Q, Zhang L, Chen D, Pedrycz W, Yu D. "Gaussian kernel based fuzzy rough sets: model, uncertainty measures and applications." International Journal of Approximate Reasoning, 51(2010): 453-471. doi:10.1016/j.ijar.2010.01.004

[6] Liang J, Shi Z, Li D, Wierman M J. "Information entropy, rough entropy and knowledge granulation in incomplete information systems." International Journal of General Systems, 35(2006): 641-654. doi:10.1080/03081070600687668

[7] Leung Y, Li D Y. "Maximal consistent block technique for rule acquisition in incomplete information systems." Information Sciences, 153(2003): 85-106. doi:10.1016/S0020-0255(03)00061-6

[8] Liang J, Chin K S, Dang C, Yam R C M. "A new method for measuring uncertainty and fuzziness in rough set theory." International Journal of General Systems, 31(2002): 331-342. doi:10.1080/0308107021000013635

[9] Liang J, Shi Z. "The information entropy, rough entropy and knowledge granulation in rough set theory." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 12(2004): 37-46. doi:10.1142/S0218488504002631

[10] Liang J, Xu Z. "The algorithm on knowledge reduction in incomplete information systems." International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 10(2002): 95-103. doi:10.1142/S021848850200134X

[11] Liang J, Li D. Uncertainty and Knowledge Acquisition in Information Systems. Beijing: Science Press, 2005

[12] Dai J, Xu Q. "Approximations and uncertainty measures in incomplete information systems." Information Sciences, 198(2012): 62-80. doi:10.1016/j.ins.2012.02.032

[13] Azam N, Yao J. "Multiple criteria decision analysis with game-theoretic rough sets." Proceedings of the 8th International RSCTC Conference. Chengdu, China, August 17-20, 2012

AUTHORS

Pengyuan Wei was born in Baoji, Shaanxi Province, China, in 1984. He received his master's degree from the School of Information and Software Engineering, University of Electronic Science and Technology of China (UESTC), in 2011, and is now a PhD candidate in the School of Computer Science & Engineering, UESTC. Mr. Wei works in the laboratory of artificial intelligence and pervasive computing in the School of Computer Science & Engineering, UESTC. His research concentrates on rough sets and decision theory in computer science; his current research interests are incomplete information systems, rough set theory, and granular computing.

Kun She was born in Chengdu, Sichuan Province, China, in 1966. He is a professor in the School of Computer Science & Engineering, UESTC, where he received his PhD in 2006. Dr. She is engaged in middleware and cloud computing research and has published over 60 research papers in the fields of wavelet computing, rough set theory, information systems and granular computing. His current research interests are incomplete information systems, rough set theory, granular computing, and cloud computing. Dr. She won the third prize for scientific and technological progress of Sichuan Province in 2003, the second prize for scientific and technological progress of Chengdu City in 2004, and the second prize for scientific and technological progress of Sichuan Province in 2007.
