
American Journal of Engineering Research (AJER) e-ISSN : 2320-0847 p-ISSN : 2320-0936 Volume-03, Issue-04, pp-281-294 www.ajer.org Research Paper

Open Access

Probabilistic Stochastic Graphical Models With Improved Techniques

1Swati Kumari, 2Anupama Verma

1Assistant Professor, JRU, Department of Computer Science Engineering, Ranchi - 834002
2Assistant Professor, JRU, Department of Computer Science Engineering, Ranchi - 834002

Abstract: - The web may be viewed as a directed graph: an HTML page can be treated as a node and a hyperlink as an edge from one node to another, which forms a directed graph. The nature of this graph evolves with time, so an evolving graph model is required. The Gn,p and Gn,N models are static; in these models the graph G = (V,E) does not change with time. Sometimes we need a graph model that is time evolving, Gt = (Vt, Et). An evolving graph model has two characteristic functions [5]: the first gives the number of vertices added at time t+1, and the second gives the set of edges added at time t+1. The evolving copying model differs from the traditional Gn,p in the following ways: 1. Independently chosen edges do not reproduce the structure observed on the web. 2. The copying model captures the evolving nature of the web graph. In this paper we review some recent work on generalizations of the random graph aimed at correcting these shortcomings. We describe generalized random graph models of both directed and undirected networks that incorporate arbitrary non-Poisson degree distributions, and extensions of these models that also incorporate clustering. We also describe two recent applications of random graph models to the problems of network robustness and of epidemics spreading on contact networks. This paper studies random graph models, concentrating on the paper "Stochastic models for the web graph" by Ravi Kumar et al. [5]. While working through the proofs of the results given in that paper, some more precise results have been found; these improvements are presented here alongside the results of the original paper for comparison.
Keywords: - Random Graph, Martingale, Web Graph.

I. INTRODUCTION

Definition 1. An HTML page can be treated as a node and a hyperlink as an edge from one node to another; this forms a directed graph called the web graph. At present the web graph has billions of vertices [5] and an average degree of about 7. In this paper we discuss different random graph models. These observations suggest that the web is not well modelled by traditional models such as Gn,p. Many models for the random graph have been proposed, such as the Gn,p and Gn,N models [6], but these do not produce the power-law degree distribution and do not explain the abundance of bipartite cliques observed in the web graph. The copying model [5] explains both and also covers the evolving nature of the web graph. The evolving copying model differs from the traditional Gn,p in the following ways: (a) independently chosen edges do not reproduce the structure observed on the web; (b) the copying model captures the evolving nature of the web graph. In this model one vertex arrives during every time step. At time step t a single vertex u arrives, and its i-th out-link is chosen as follows: with probability α the destination is chosen uniformly at random from Vt, and with the remaining probability the out-link is taken to be the i-th out-link of a prototype vertex p, where Vt is the set of vertices at time t.


A. Problem Definition
The problem is to develop a random graph model that explains the different properties of the web graph. We study probabilistic techniques and different models for random graphs, and derive further results based on these models.

B. Related Work
The "copying" model analysed in this paper was first introduced by Kleinberg et al. [4]. Motivated by observations of power-laws for degrees on the graph of telephone calls, Aiello, Chung, and Lu [1] proposed a model for "massive graphs" (henceforth the "ACL model"). In 2000, Ravi Kumar et al. proposed a number of models in the paper "Stochastic models for the web graph" [5]. The present paper gives some more precise results, together with proofs of several results stated in that paper.

C. Contribution
(1) The web graph is so large that studying its properties is difficult without good modelling.
(2) The evolution of different social networking sites can be explained with the help of this modelling.

D. Results and Organization
Results in the paper [5]:
$|E[N_{t,0}\mid N_{t-k,0}] - E[N_{t,0}\mid N_{t-(k+1),0}]| \le 2$
$\frac{t+\alpha}{1+\alpha} - \alpha^2 \ln t \le E[N_{t,0}] \le \frac{t+\alpha}{1+\alpha}$
$\Pr[|N_{t,0} - E[N_{t,0}]| \ge l] \le \exp\!\left(-\frac{l^2}{4t}\right)$
$|E[N_{t,i}\mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i}\mid N_{t-(k+1),i}, N_{t-(k+1),i-1}]| \le 2$

Results in this work:
$|E[N_{t,0}\mid N_{t-k,0}] - E[N_{t,0}\mid N_{t-(k+1),0}]| \le 1$
$E[N_{t,0}] = \sum_{j=0}^{t-1} S_{j,0} = \frac{t}{1+\alpha}$, and $\lim_{t\to\infty} \frac{E[N_{t,0}]}{t} = \frac{1}{1+\alpha}$
$\Pr[|N_{t,0} - E[N_{t,0}]| \ge l] \le \exp\!\left(-\frac{l^2}{2t}\right)$
$|E[N_{t,i}\mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i}\mid N_{t-(k+1),i}, N_{t-(k+1),i-1}]| \le 1$ for $N_{t-k,i} = N_{t-(k+1),i}$

II. PROBABILISTIC TECHNIQUES

Preliminaries
We define a discrete probability space $(\Omega, p)$, where $\Omega$ is a discrete (finite or countably infinite) set and $p$ is a probability function $p : \Omega \to [0,1]$, i.e. $p(\omega) \in [0,1]$ for all $\omega \in \Omega$ and $\sum_{\omega} p(\omega) = 1$.

In this paper we discuss only discrete probability spaces.

Definition 2. Given $(\Omega, p)$ and $B \subseteq \Omega$ with $p(B) > 0$, define the conditional probability over $(\Omega, p(\cdot \mid B))$ as follows:
$$p(\omega \mid B) = \begin{cases} 0 & \text{if } \omega \notin B,\\ \dfrac{p(\omega)}{p(B)} & \text{if } \omega \in B, \end{cases}$$
where $p(\omega \mid B)$ is the probability that $\omega$ occurs given that $B$ has already occurred. The probability that $A$ occurs given that $B$ has already occurred is denoted $P(A \mid B)$ and is given by
$$P(A \mid B) = \sum_{\omega \in A} p(\omega \mid B) = \sum_{\omega \in A \cap B^{C}} p(\omega \mid B) + \sum_{\omega \in A \cap B} p(\omega \mid B) = 0 + \sum_{\omega \in A \cap B} p(\omega \mid B) = \sum_{\omega \in A \cap B} \frac{p(\omega)}{p(B)} = \frac{1}{p(B)} \sum_{\omega \in A \cap B} p(\omega)$$
$$\Rightarrow P(A \mid B) = \frac{p(A \cap B)}{p(B)}.$$
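To make Definition 2 concrete, here is a minimal Python sketch (using a toy two-dice sample space of our own choosing, not an example from the paper) that computes P(A|B) both by summing the conditional point masses and from the closed form p(A∩B)/p(B).

```python
from fractions import Fraction
from itertools import product

# Sample space: ordered pairs of two fair dice, each outcome with probability 1/36.
omega = list(product(range(1, 7), repeat=2))
p = {w: Fraction(1, 36) for w in omega}

A = {w for w in omega if w[0] + w[1] == 7}   # event: the sum is 7
B = {w for w in omega if w[0] == 3}          # event: the first die shows 3

p_B = sum(p[w] for w in B)

# p(w|B) = p(w)/p(B) for w in B and 0 otherwise, so only A ∩ B contributes.
p_A_given_B_pointwise = sum(p[w] / p_B for w in A & B)

# P(A|B) from the closed form p(A ∩ B)/p(B).
p_A_given_B_ratio = sum(p[w] for w in A & B) / p_B

assert p_A_given_B_pointwise == p_A_given_B_ratio == Fraction(1, 6)
print(p_A_given_B_ratio)  # 1/6
```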

Expectation of a Random Variable
The expectation of a random variable is its most basic characteristic. It is the weighted average of the values the variable assumes, where each value is weighted by the probability that the variable assumes that value.

Definition 3. A random variable X is a real-valued function over the sample space $\Omega$; that is, $X : \Omega \to \mathbb{R}$. The expectation of a random variable X is defined as
$$E[X] = \sum_{j} X(\omega_j)\, p(\omega_j).$$
The expectation is finite if $\sum_{j} |X(\omega_j)|\, p(\omega_j)$ converges; otherwise, the expectation is unbounded.
For example, consider a random variable X that takes the value $2^j$ with probability $\frac{1}{2^j}$ for $j = 1, 2, \ldots$. The expectation of X is
$$E[X] = \sum_{j=1}^{\infty} 2^j \cdot \frac{1}{2^j} \to \infty.$$
Here the expectation is unbounded.

Conditional Expectation
Definition 4. The conditional expectation [3] of X relative to B is defined, when $P(B) > 0$, as
$$E[X \mid B] = \sum_{j} X(\omega_j)\, p(\omega_j \mid B).$$

Definition 5. Let $(\Omega_1, p_1)$ and $(\Omega_2, p_2)$ be probability distributions. Then $(\Omega, p)$ is called their joint distribution if the following conditions are satisfied:
$$\sum_{(\omega_1, \omega_2)\in\Omega} p(\omega_1, \omega_2) = 1, \qquad \sum_{\omega_1} p(\omega_1, \omega_2) = p_2(\omega_2), \qquad \sum_{\omega_2} p(\omega_1, \omega_2) = p_1(\omega_1),$$
where $\Omega = \Omega_1 \times \Omega_2 = \{(\omega_1, \omega_2) \mid \omega_1 \in \Omega_1, \omega_2 \in \Omega_2\}$.

Definition 6. Two random variables X and Y are independent if and only if
$$\Pr[(X = x) \cap (Y = y)] = \Pr[X = x] \cdot \Pr[Y = y]$$
for all values of x and y. Similarly, random variables $X_1, X_2, \ldots, X_k$ are mutually independent if and only if, for any subset $I \subseteq [1, k]$ and any values $x_i$, $i \in I$,
$$\Pr\Big[\bigcap_{i \in I} X_i = x_i\Big] = \prod_{i \in I} \Pr[X_i = x_i].$$

If the events $\xi_i$ and $\xi_j$ are pairwise independent, then $\Pr[\xi_i \mid \xi_j] = \Pr[\xi_i]$ for all $i \neq j$.
Proof.
$$\Pr[\xi_i \mid \xi_j] = \frac{\Pr[\xi_i \cap \xi_j]}{\Pr[\xi_j]} = \frac{\Pr[\xi_i]\Pr[\xi_j]}{\Pr[\xi_j]} = \Pr[\xi_i].$$

Linearity of Expectation
$$E[X + Y] = E[X] + E[Y]$$
Proof.
$$E[X + Y] = \sum_{(x,y)\in\mathbb{R}\times\mathbb{R}} (x + y)\Pr[(X = x) \cap (Y = y)]$$
$$= \sum_{x}\sum_{y} x\Pr[(X = x) \cap (Y = y)] + \sum_{y}\sum_{x} y\Pr[(X = x) \cap (Y = y)]$$
$$= \sum_{x} x\Pr[X = x] + \sum_{y} y\Pr[Y = y] = E[X] + E[Y].$$

In its general form,
$$E\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} E[X_i].$$
Also,
$$E[aX + b] = aE[X] + b.$$
Proof.
$$E[aX + b] = \sum_{x\in\mathbb{R}} (ax + b)\Pr[X = x] = a\sum_{x\in\mathbb{R}} x\Pr[X = x] + b\sum_{x\in\mathbb{R}} \Pr[X = x] = aE[X] + b\cdot 1 = aE[X] + b.$$
If X and Y are two independent random variables, then $E[XY] = E[X]\cdot E[Y]$.

Bernoulli Random Variable
Suppose we run an experiment that succeeds with probability $p$ and fails with probability $1-p$. Let Y be a random variable such that
$$Y = \begin{cases} 1 & \text{with probability } p \text{ (success)},\\ 0 & \text{with probability } 1-p \text{ (failure)}. \end{cases}$$
Such a random variable Y is called a Bernoulli random variable.

Theorem 1. The expectation of a Bernoulli random variable equals its probability of success.
Proof. $E[Y] = 1\cdot p + 0\cdot(1-p) = p \Rightarrow E[Y] = \Pr[Y = 1]$.

Binomial Distribution
Consider a sequence of n independent experiments, each of which succeeds with probability $p$. If X represents the number of successes in the n experiments, then X has a binomial distribution.

Definition 7. A binomial random variable X with parameters n and $p$, denoted $B(n, p)$, is defined by the following probability distribution on $j = 0, 1, 2, \ldots, n$:
$$\Pr[X = j] = \begin{cases} \binom{n}{j} p^j (1-p)^{n-j} & j \in \{0, 1, 2, \ldots, n\},\\ 0 & \text{otherwise}. \end{cases}$$

Theorem 2. The expectation of a binomial random variable is $np$.
Proof. The binomial random variable X can be expressed as a sum of Bernoulli random variables:
$$X = \sum_{i=1}^{n} X_i,$$
$$E[X] = E\Big[\sum_{i=1}^{n} X_i\Big] = \sum_{i=1}^{n} E[X_i] \quad \text{(by linearity of expectation)} = \sum_{i=1}^{n} p = np \quad \text{(by the expectation of a Bernoulli random variable)}.$$
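Theorem 2 can be checked empirically by simulation. The sketch below, with illustrative values of n, p and the number of trials (none taken from the paper), draws binomial samples as sums of Bernoulli trials and compares the sample mean with np.

```python
import random

# Empirical check of Theorem 2: the mean of a Binomial(n, p) sample should be close to n*p.
n, p, trials = 20, 0.3, 100_000

def binomial_sample(n: int, p: float) -> int:
    """Draw one Binomial(n, p) value as a sum of n independent Bernoulli(p) variables."""
    return sum(1 for _ in range(n) if random.random() < p)

mean = sum(binomial_sample(n, p) for _ in range(trials)) / trials
print(f"empirical mean = {mean:.3f}, n*p = {n * p}")  # the two should agree to ~2 decimal places
```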


Geometric Distribution
A sequence of independent trials repeated until the first success, where each trial succeeds with probability $p$, gives an example of the geometric distribution. A geometric random variable X with parameter $p$ is given by the following probability distribution on $n = 1, 2, \ldots$:
$$\Pr[X = n] = (1-p)^{n-1} p.$$
For a geometric random variable X to equal $n$ there must be $n-1$ failures followed by a success.

Definition 8. $\mathcal{H}(n) = \sum_{i=1}^{n} \frac{1}{i}$ is called the harmonic number, and $\mathcal{H}(n) = \ln n + \Theta(1)$.
Proof. Since $\frac{1}{t}$ is a monotonically decreasing function, we can write
$$\sum_{k=2}^{n} \frac{1}{k} \le \int_{x=1}^{n} \frac{dx}{x} = \ln n \le \sum_{k=1}^{n-1} \frac{1}{k}$$
$$\Rightarrow \ln n \le \mathcal{H}(n) \le \ln n + 1 \Rightarrow \mathcal{H}(n) = \ln n + \Theta(1).$$

Theorem 3. The expectation of a geometric random variable is $\frac{1}{p}$.
Proof.
$$E[X] = \sum_{i=1}^{\infty} i(1-p)^{i-1} p = p\sum_{i=1}^{\infty} i(1-p)^{i-1} = p\sum_{i=1}^{\infty} i\, t^{i-1} \qquad (\text{putting } 1-p = t,\; 0 < t < 1)$$
$$= p(1 + 2t + 3t^2 + 4t^3 + \cdots) = p\cdot\frac{1}{(1-t)^2} = p\cdot\frac{1}{p^2} = \frac{1}{p},$$
since $f(t) = 1 + t + t^2 + \cdots = \frac{1}{1-t}$ is the sum of a geometric progression and $f'(t) = 1 + 2t + 3t^2 + 4t^3 + \cdots = \frac{1}{(1-t)^2}$.

Markov's Inequality
Let X be a random variable assuming only non-negative values. Then for all $t > 0$,
$$\Pr[X \ge t] \le \frac{E[X]}{t}.$$
Proof.
$$E[X] = \sum_{x\in\mathbb{R}} x\Pr(X = x) \ge \sum_{x \ge k} x\Pr(X = x) \ge \sum_{x \ge k} k\Pr(X = x) = k\Pr(X \ge k)$$
$$\Rightarrow \Pr(X \ge k) \le \frac{E[X]}{k} \;\Rightarrow\; \Pr(X \ge t) \le \frac{E[X]}{t}.$$

Chebyshev's Inequality
$$\Pr(|X - E[X]| \ge t) \le \frac{\mathrm{Var}[X]}{t^2}$$
Proof.
$$\Pr(|X - E[X]| \ge t) = \Pr\big((X - E[X])^2 \ge t^2\big) \le \frac{E[(X - E[X])^2]}{t^2} = \frac{\mathrm{Var}[X]}{t^2}.$$
Since $(X - E[X])^2$ is a non-negative random variable, we can apply Markov's inequality.
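The two tail bounds above can be sanity-checked numerically. The following sketch uses a Geometric(p) variable as an arbitrary non-negative example (our choice, not the paper's), estimates the left-hand sides by simulation, and compares them with E[X]/t and Var[X]/t^2.

```python
import random

# Rough numerical sanity check of Markov's and Chebyshev's inequalities on a Geometric(p)
# variable. The parameters p, the sample size, and t are illustrative choices.
p, samples, t = 0.25, 200_000, 10.0

xs = []
for _ in range(samples):
    n = 1
    while random.random() >= p:  # count trials until the first success
        n += 1
    xs.append(n)

mean = sum(xs) / samples                          # close to 1/p = 4
var = sum((x - mean) ** 2 for x in xs) / samples  # close to (1-p)/p^2 = 12

markov_lhs = sum(1 for x in xs if x >= t) / samples
chebyshev_lhs = sum(1 for x in xs if abs(x - mean) >= t) / samples  # empirical mean used in place of E[X]

print(f"Pr[X >= {t}] = {markov_lhs:.4f} <= E[X]/t = {mean / t:.4f}")
print(f"Pr[|X - E[X]| >= {t}] = {chebyshev_lhs:.4f} <= Var[X]/t^2 = {var / t**2:.4f}")
```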


Jensen's Inequality
If f is a convex function, then $E[f(X)] \ge f(E[X])$.
Proof. Suppose $f$ has a Taylor series expansion. Expanding about $\mu = E[X]$ and using the Taylor series with a remainder term yields, for some $a$,
$$f(x) = f(\mu) + f'(\mu)(x - \mu) + \frac{f''(a)(x - \mu)^2}{2} \ge f(\mu) + f'(\mu)(x - \mu),$$
since $f''(a) \ge 0$ by convexity. Taking expectations on both sides,
$$E[f(X)] \ge f(\mu) + f'(\mu)\,E[X - \mu] = f(\mu) = f(E[X]).$$

Martingales
σ-field
Definition 9. A σ-field $(\Omega, \mathbb{F})$ consists of a sample space $\Omega$ and a collection of subsets $\mathbb{F}$ satisfying the following conditions [7]:
$$\emptyset \in \mathbb{F}; \qquad \varepsilon \in \mathbb{F} \Rightarrow \bar{\varepsilon} \in \mathbb{F}; \qquad \varepsilon_1, \varepsilon_2, \ldots \in \mathbb{F} \Rightarrow \varepsilon_1 \cup \varepsilon_2 \cup \cdots \in \mathbb{F}.$$

Partition of $\Omega$
Definition 10. $\mathbb{F}$ is a partition of $\Omega$ if $\mathbb{F} \subseteq 2^{\Omega}$ and (i) $\bigcup_{F\in\mathbb{F}} F = \Omega$; (ii) $F \neq \emptyset$ for all $F \in \mathbb{F}$; (iii) $F \cap G = \emptyset$ for all $F \neq G$, where $F, G \in \mathbb{F}$.

Figure 1: Partition of the sample space.

If $\mathbb{F} = \{F, G, H, \ldots\}$ is a partition of $\Omega$, then the corresponding algebra of the partition $\mathbb{E}(\mathbb{F})$ is $\{\emptyset, F, G, H, \ldots, F\cup G, F\cup H, \ldots, F\cup G\cup H, \ldots\}$, where the algebra is the smallest set containing $\mathbb{F}$ that is closed under finite union, intersection and complementation.

Definition 11. If $\mathbb{F}_1, \mathbb{F}_2$ are partitions of $\Omega$, then $\mathbb{F}_1 \subseteq \mathbb{F}_2$ if $A \in \mathbb{E}(\mathbb{F}_1)$ implies $A \in \mathbb{E}(\mathbb{F}_2)$, where $\mathbb{E}(\mathbb{F}_1)$ and $\mathbb{E}(\mathbb{F}_2)$ are the algebras of $\mathbb{F}_1$ and $\mathbb{F}_2$ respectively.

Filtration
Definition 12. Given the σ-field $(\Omega, \mathbb{F})$ with $\mathbb{F} \subseteq 2^{\Omega}$, a filter (sometimes also called a filtration) is a nested sequence $\mathbb{F}_0 \subseteq \mathbb{F}_1 \subseteq \mathbb{F}_2 \subseteq \cdots \subseteq \mathbb{F}_n$ of subsets of $2^{\Omega}$ such that [7] $\mathbb{F}_0 = \{\emptyset, \Omega\}$, $\mathbb{F}_n = 2^{\Omega}$, and for $0 \le i \le n$, $(\Omega, \mathbb{F}_i)$ is a σ-field.

Definition 13. Let $(\Omega, \mathbb{F})$ be any σ-field, and Y any random variable that takes on distinct values on the elementary events in $\mathbb{F}$. Then $E[X \mid \mathbb{F}] = E[X \mid Y]$.

Figure 2: Filtration.

Notice that the conditional expectation $E[X \mid Y]$ does not really depend on the precise value of Y on a specific elementary event; in fact, Y is merely an indicator of the elementary events in $\mathbb{F}$. Conversely, we can write


$$E[X \mid Y] = E[X \mid \sigma(Y)],$$
where $\sigma(Y)$ is the σ-field generated by the events of the type $\{Y = y\}$, i.e., the smallest σ-field over which Y is measurable.

Definition 14. A random variable X over a σ-field $(\Omega, \mathbb{F})$ is well defined if $X(\omega_1) = X(\omega_2)$ for all $A \in \mathbb{F}$, where $\omega_1, \omega_2 \in A$ and $\mathbb{F}$ is a partition. It is also called $\mathbb{F}$-measurable. A random variable X is $\mathbb{F}$-measurable if its value is constant over each block in the partition generating $\mathbb{F}$. Since the partitions generating the σ-fields in a filter are successively more refined, it follows that if X is $\mathbb{F}_i$-measurable, it is also $\mathbb{F}_j$-measurable for all $j \ge i$; i.e. if $\mathbb{F}_1 \subseteq \mathbb{F}_2$ and X is well defined over $(\Omega, \mathbb{F}_1)$, then X is well defined over $(\Omega, \mathbb{F}_2)$. The converse may not be true. Suppose X is well defined over $\mathbb{F}_2$; how can we define a random variable over $\mathbb{F}_1$ so that it is well defined? Let the random variable Y be
$$Y(\omega) = \sum_{\omega'\in A} X(\omega')\frac{p(\omega')}{p(A)} \quad \text{where } \omega \in A \in \mathbb{F}_1
\;\Rightarrow\; Y(\omega) = \sum_{\omega'\in A} X(\omega')\,p(\omega' \mid A).$$
Proof.
$$Y(\omega_i) = \sum_{\omega_j\in A} X(\omega_j)\,p(\omega_j \mid A) = \frac{\sum_{\{j \,\mid\, \omega_j\in A\}} x_j p_j}{\sum_{\{j \,\mid\, \omega_j\in A\}} p_j}.$$
$Y(\omega_i)$ is the same for every point of a block, so the random variable is constant on each block of the partition; the same argument applies to the other blocks. Here we assume that the sample space has been partitioned and that the points $\omega_1, \omega_2, \omega_3$, with probabilities $p_1, p_2, p_3$ and values $x_1, x_2, x_3$ respectively, lie in a block of the finer partition $\mathbb{F}_2$.

Theorem 4. Defining Y as the conditional expectation of X given $\mathbb{F}_1$, denoted $E[X \mid \mathbb{F}_1] = Y$, we have $E[Y] = E[X]$.
Proof.
$$E[Y] = \sum_{\omega\in\Omega} Y(\omega)p(\omega) = \sum_{A\in\mathbb{F}_1}\sum_{\omega\in A} Y(\omega)p(\omega) = \sum_{A\in\mathbb{F}_1} Y(\omega)\sum_{\omega\in A} p(\omega) \quad (\text{as Y is well defined, i.e. constant on each } A)$$
$$= \sum_{A\in\mathbb{F}_1} Y(\omega)\,p(A) = \sum_{A\in\mathbb{F}_1} p(A)\sum_{\omega\in A} X(\omega)p(\omega \mid A) = \sum_{A\in\mathbb{F}_1} p(A)\sum_{\omega\in A} X(\omega)\frac{p(\omega)}{p(A)} = \sum_{A\in\mathbb{F}_1}\sum_{\omega\in A} X(\omega)p(\omega) = \sum_{\omega\in\Omega} X(\omega)p(\omega) = E[X].$$
From this result it is clear that for a filter $\mathbb{F}_1 \subseteq \mathbb{F}_2 \subseteq \cdots \subseteq \mathbb{F}_n$ over a σ-field, with corresponding random variables $X_0, X_1, X_2, \ldots, X_n$,
$$\mu = E[X_0] = E[X_1] = E[X_2] = \cdots = E[X_n].$$

Martingale
Definition 15. Let $(\Omega, \mathbb{F}, \Pr)$ be a probability space with a filter $\mathbb{F}_0, \mathbb{F}_1, \mathbb{F}_2, \ldots$. Suppose $X_0, X_1, X_2, \ldots$ are random variables such that for all $i \ge 0$, $X_i$ is $\mathbb{F}_i$-measurable. The sequence $X_0, X_1, \ldots, X_n$ is a martingale provided, for all $i \ge 0$,
$$E[X_{i+1} \mid \mathbb{F}_i] = X_i.$$

Definition 16. The set of random variables $\{Z_n, n = 0, 1, 2, \ldots\}$ is said to be a martingale [8] with respect to the sequence $\{X_n, n = 0, 1, 2, \ldots\}$ if $Z_n$ is a function of $X_0, X_1, \ldots, X_n$, $E[|Z_n|] < \infty$, and $E[Z_{n+1} \mid X_0, X_1, \ldots, X_n] = Z_n$. We say $\{Z_n\}$ is a martingale [8] (without specifying $\{X_n, n = 0, 1, 2, \ldots\}$) when it is a martingale with respect to itself, i.e. $\{Z_n\}$ is a martingale if $E[|Z_n|] < \infty$ and $E[Z_{n+1} \mid Z_0, Z_1, \ldots, Z_n] = Z_n$.
Note: if $\{Z_n\}$ is a martingale with respect to $\{X_n\}$, then it is a martingale.

Theorem 5. If $\{Z_n\}$ is a martingale with respect to $\{X_n\}$, then it is a martingale, i.e.


$$E[Z_{n+1} \mid Z_0, Z_1, \ldots, Z_n] = Z_n.$$
Proof. We know the identity $E[X \mid U] = E[E[X \mid U, V] \mid U]$. If $\{Z_n\}$ is a martingale with respect to $\{X_n\}$, then
$$E[Z_{n+1} \mid Z_0, \ldots, Z_n] = E\big[E[Z_{n+1} \mid Z_0, \ldots, Z_n, X_0, \ldots, X_n] \mid Z_0, \ldots, Z_n\big] = E\big[E[Z_{n+1} \mid X_0, \ldots, X_n] \mid Z_0, \ldots, Z_n\big] = E[Z_n \mid Z_0, \ldots, Z_n] = Z_n.$$

Theorem 6. $E[Z_n] = E[Z_0]$.
Proof. $Z_0, \ldots, Z_n$ are functions of $X_0, \ldots, X_n$ and $E[Z_{n+1} \mid Z_0, Z_1, \ldots, Z_n] = Z_n$. Taking expectations on both sides gives $E[Z_{n+1}] = E[Z_n]$, hence $E[Z_n] = E[Z_0]$. $E[Z_0]$ is called the mean of the martingale.

Example (fair bet game). Suppose $X_i$ is the outcome of the $i$-th game and $Z_n$ is the fortune of the gambler after the $n$-th game. Given the outcomes of the first n games, the gambler's expected fortune after the $(n+1)$-st game equals his fortune before that game, and the gambler's expected winnings on each game are zero. Here the sequence of fortunes $Z_0, Z_1, \ldots, Z_n$ is a martingale with respect to $X_0, X_1, \ldots, X_n$.

Azuma's Inequality
Let $X_0, X_1, X_2, \ldots, X_n$ be a martingale sequence such that for all $k$, $|X_k - X_{k-1}| \le c$, where c may depend on k. Then for all $t \ge 0$ and any $l > 0$,
$$\Pr[|X_t - X_0| \ge l] \le 2\exp\Big(-\frac{l^2}{2tc^2}\Big).$$
Proof. Let $\mathbb{F}_1 \subseteq \mathbb{F}_2 \subseteq \cdots \subseteq \mathbb{F}_n$ be a filter sequence and $X_0, X_1, \ldots, X_n$ a martingale sequence with $X_0 = E[X_i]$ for $i \in \{0, 1, 2, \ldots, n\}$, and $E[X_j \mid \mathbb{F}_i] = X_i$ for $i < j$. If X is $\mathbb{F}_i$-measurable (constant on each block of the partition) and Y is another random variable, then we observe that $E[XY \mid \mathbb{F}_i] = X\,E[Y \mid \mathbb{F}_i]$.
Since $X_0, X_1, \ldots, X_n$ is a martingale, let $Y_i = X_i - X_{i-1}$ for $i = 1, 2, \ldots, t$. Then
$$E[Y_i \mid X_0, X_1, \ldots, X_{i-1}] = E[X_i - X_{i-1} \mid X_0, X_1, \ldots, X_{i-1}] = E[X_i \mid X_0, X_1, \ldots, X_{i-1}] - X_{i-1} = 0.$$
Now write
$$Y_i = -c_i\,\frac{1 - Y_i/c_i}{2} + c_i\,\frac{1 + Y_i/c_i}{2}.$$
Using the convexity of the function $e^{\alpha Y_i}$,
$$e^{\alpha Y_i} \le \frac{1 - Y_i/c_i}{2}e^{-\alpha c_i} + \frac{1 + Y_i/c_i}{2}e^{\alpha c_i} = \frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2} + \frac{Y_i}{2c_i}\big(e^{\alpha c_i} - e^{-\alpha c_i}\big),$$
$$E[e^{\alpha Y_i} \mid X_0, X_1, \ldots, X_{i-1}] \le E\Big[\frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2} + \frac{Y_i}{2c_i}\big(e^{\alpha c_i} - e^{-\alpha c_i}\big) \,\Big|\, X_0, X_1, \ldots, X_{i-1}\Big] = \frac{e^{\alpha c_i} + e^{-\alpha c_i}}{2}.$$
Using the Taylor series,
$$\Rightarrow E[e^{\alpha Y_i} \mid X_0, X_1, \ldots, X_{i-1}] \le e^{(\alpha c_i)^2/2}.$$
Then
$$E\big[e^{\alpha(X_t - X_0)}\big] = E\Big[\prod_{i=1}^{t} e^{\alpha Y_i}\Big] = E\Big[\prod_{i=1}^{t-1} e^{\alpha Y_i}\,E[e^{\alpha Y_t} \mid X_0, X_1, \ldots, X_{t-1}]\Big] \le E\Big[\prod_{i=1}^{t-1} e^{\alpha Y_i}\Big]e^{(\alpha c_t)^2/2} \le \cdots \le e^{\alpha^2\sum_{i=1}^{t} c_i^2/2} \le e^{\alpha^2 t c^2/2} \quad (\text{as } c_i = c\ \forall i).$$
Finally,
$$\Pr[X_t - X_0 \ge l] = \Pr\big[e^{\alpha(X_t - X_0)} \ge e^{\alpha l}\big] \le \frac{E[e^{\alpha(X_t - X_0)}]}{e^{\alpha l}} \le e^{\alpha^2 t c^2/2 - \alpha l} = e^{-l^2/(2tc^2)} \quad \text{where } \alpha = \frac{l}{tc^2}$$
$$\Rightarrow \Pr[|X_t - X_0| \ge l] \le 2\exp\Big(-\frac{l^2}{2tc^2}\Big).$$
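As a rough illustration of Azuma's inequality, the sketch below simulates an unbiased ±1 random walk, which is a martingale with bounded differences c = 1, and compares the empirical deviation probability with the bound 2·exp(-l^2/(2tc^2)). The parameters t, l and the number of runs are illustrative choices, not values from the paper.

```python
import math
import random

# Empirical look at Azuma's inequality for a simple ±1-step martingale (an unbiased
# random walk), with X_0 = 0 and |X_k - X_{k-1}| <= c = 1.
t, l, runs = 100, 25, 50_000

def walk_endpoint(t: int) -> int:
    """Return X_t for an unbiased ±1 random walk started at 0."""
    return sum(random.choice((-1, 1)) for _ in range(t))

deviations = sum(1 for _ in range(runs) if abs(walk_endpoint(t)) >= l)
empirical = deviations / runs
azuma_bound = 2 * math.exp(-l * l / (2 * t * 1 ** 2))

print(f"empirical Pr[|X_t - X_0| >= {l}] = {empirical:.5f}")
print(f"Azuma bound                     = {azuma_bound:.5f}")  # empirical value should not exceed this
```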

III. A SURVEY OF RANDOM GRAPH MODELS FOR WEB MODELING

Random graphs
A random graph is a graph generated by some random process. Two basic models for generating random graphs are the Gn,N and Gn,p models. As we know, there are many NP-hard computational problems defined on graphs: Hamiltonian cycle, independent set, vertex cover, and so forth. One question worth asking is whether these problems are hard for most inputs or just for a relatively small fraction of all graphs. Random graph models provide a probabilistic setting for studying such questions.

Gn,N model of random graphs
In the Gn,N model of random graphs, n is the number of vertices and N is the number of edges in the graph. We consider all undirected graphs on n vertices with exactly N edges. Since there are n vertices, the number of possible edges is $\binom{n}{2}$. Out of these $\binom{n}{2}$ possible edges the graph has N, which can be selected in $\binom{\binom{n}{2}}{N}$ ways, so the total number of possible graphs is $\binom{\binom{n}{2}}{N}$.

Generating a graph using the Gn,N model
One way to generate a graph from Gn,N is to start with no edges. Choose one of the $\binom{n}{2}$ possible edges uniformly at random and add it to the graph. Now choose one of the remaining $\binom{n}{2} - 1$ possible edges independently and uniformly at random and add it to the graph. Continue choosing one of the remaining unchosen edges independently at random until there are N edges.

Gn,p model of random graphs
In the Gn,p model of random graphs, n is the number of vertices and p is the probability of selecting an edge. We consider all undirected graphs on n distinct vertices $v_1, v_2, \ldots, v_n$. A graph with a given set of m edges has probability $p^m (1-p)^{\binom{n}{2} - m}$.

Generating a graph using the Gn,p model
One way to generate a random graph in Gn,p is to consider each of the $\binom{n}{2}$ possible edges in some order and then independently add each edge to the graph with probability p, i.e. for each of the $\binom{n}{2}$ edges, toss a coin biased to come up heads with probability p; if the outcome is heads, add the edge to the graph, and do not add it otherwise. The expected number of edges in the graph is therefore $\binom{n}{2}p$, and each vertex has expected degree $(n-1)p$.
The Gn,N and Gn,p models are related when $p = N/\binom{n}{2}$. The Gn,p and Gn,N models are static: in these models the graph G = (V,E) does not change with time. Sometimes we need a graph model that is time evolving, $G_t = (V_t, E_t)$.

Evolving model
The evolving model of a graph covers its evolving nature. There are two characteristic functions of an evolving graph model [5]: the first gives the number of vertices added at time t+1, and the second gives the set of edges added at time t+1. The characteristic functions are $f_v(V_t, t)$ and $f_e(f_t, G_t, t)$: $f_v(V_t, t)$ gives the number of vertices added at time t+1 and $f_e(f_t, G_t, t)$ gives the set of edges added at time t+1, so that
$$V_{t+1} = V_t + f_v(V_t, t), \qquad E_{t+1} = E_t \cup f_e(f_t, G_t, t).$$
The evolving graph model is completely characterized by $f_v$ and $f_e$.
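The Gn,p generation procedure described above translates directly into code. A minimal sketch follows; the particular values of n, p and the seed are illustrative and not taken from the paper.

```python
import random
from itertools import combinations

# Minimal sketch of Gn,p generation: each of the C(n,2) possible edges is included
# independently with probability p.
def gnp(n: int, p: float, seed=None):
    rng = random.Random(seed)
    return [(u, v) for u, v in combinations(range(n), 2) if rng.random() < p]

n, p = 1000, 0.01
edges = gnp(n, p, seed=42)
print(len(edges))          # close to C(n,2)*p = 4995
print(2 * len(edges) / n)  # average degree, close to (n-1)*p ≈ 9.99
```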


Linear growth copying model
The linear growth copying model is an evolving model in which one vertex is added in each time step. The growth is linear because at time step t a single vertex arrives and may link to any of the first t-1 vertices. This model describes the web graph naturally: the web graph is time evolving, where at each time step a web page may arrive and be connected to earlier pages, or a page may be deleted.

Generating a graph using the linear growth copying model [5]
In each time step one vertex is added. Edges are added in the following way: each new vertex u has constant out-degree d, and its i-th out-link is decided as follows. With probability α the destination is chosen uniformly from Vt, and with the remaining probability the destination is chosen as the i-th out-link of a prototype vertex p. Here α ∈ (0, 1) is the copy factor, the out-degree is d ≥ 1, and $f_v(V_t, t) = 1$. The prototype vertex is chosen once in advance. Let $N_{t,i}$ denote the number of vertices having in-degree i at time t; for simplicity take d = 1 and i = 0.

In-degree Nt,0 distribution
Theorem 6. $E[N_{t,0} \mid N_{t-k,0}] = N_{t-k,0}S_{k,0} + \sum_{j=0}^{k-1} S_{j,0}$, where $S_{0,0} = 1$ and $S_{k,0} = S_{k-1,0}\big(1 - \frac{\alpha}{t-k}\big)$.
Proof.
$$E[N_{t,0} \mid N_{t-k,0}] = 1 + \Big(1 - \frac{\alpha}{t-1}\Big) + \Big(1 - \frac{\alpha}{t-1}\Big)\Big(1 - \frac{\alpha}{t-2}\Big) + \cdots + \Big(1 - \frac{\alpha}{t-1}\Big)\Big(1 - \frac{\alpha}{t-2}\Big)\cdots\Big(1 - \frac{\alpha}{t-k}\Big)N_{t-k,0}.$$
With $S_{0,0} = 1$ and $S_{k,0} = S_{k-1,0}\big(1 - \frac{\alpha}{t-k}\big)$ this is
$$E[N_{t,0} \mid N_{t-k,0}] = N_{t-k,0}S_{k,0} + \sum_{j=0}^{k-1} S_{j,0}.$$
Setting $t - k = 1$,
$$E[N_{t,0} \mid N_{1,0}] = N_{1,0}S_{t-1,0} + \sum_{j=0}^{t-2} S_{j,0} \;\Rightarrow\; E[N_{t,0}] = \sum_{j=0}^{t-1} S_{j,0} \quad (\text{as } N_{1,0} = 1).$$

Theorem 7. $|E[N_{t,0} \mid N_{t-k,0}] - E[N_{t,0} \mid N_{t-(k+1),0}]| \le 1$.
Proof.
$$|E[N_{t,0} \mid N_{t-k,0}] - E[N_{t,0} \mid N_{t-(k+1),0}]| = \Big|N_{t-k,0}S_{k,0} + \sum_{j=0}^{k-1} S_{j,0} - N_{t-(k+1),0}S_{k+1,0} - \sum_{j=0}^{k} S_{j,0}\Big| = \Big|N_{t-k,0}S_{k,0} - N_{t-(k+1),0}S_{k,0}\Big(1 - \frac{\alpha}{t-(k+1)}\Big) - S_{k,0}\Big|.$$
Case 1 ($N_{t-k,0} = N_{t-(k+1),0} + 1$):
$$= \Big|S_{k,0} - S_{k,0}\Big(1 - \frac{\alpha N_{t-(k+1),0}}{t-(k+1)}\Big)\Big| = \Big|S_{k,0}\,\frac{\alpha N_{t-(k+1),0}}{t-(k+1)}\Big| \le 1,$$
since $\frac{\alpha N_{t-(k+1),0}}{t-(k+1)}$ is a probability, hence at most 1, and $S_{k,0} \le 1$.
Case 2 ($N_{t-k,0} = N_{t-(k+1),0}$):
$$= \Big|S_{k,0}\,\frac{\alpha N_{t-(k+1),0}}{t-(k+1)} - S_{k,0}\Big| \le 1,$$
again because $\frac{\alpha N_{t-(k+1),0}}{t-(k+1)} \le 1$ and $S_{k,0} \le 1$. Hence
$$|E[N_{t,0} \mid N_{t-k,0}] - E[N_{t,0} \mid N_{t-(k+1),0}]| \le 1.$$
This result is an improvement compared to the result given in the paper [5],
$$|E[N_{t,0} \mid N_{t-k,0}] - E[N_{t,0} \mid N_{t-(k+1),0}]| \le 2.$$


Theorem 8. $E[N_{t,0}] = \dfrac{t}{1+\alpha}$.
Proof. From the relation
$$S_{k,0} = S_{k-1,0}\Big(1 - \frac{\alpha}{t-k}\Big)$$
we get
$$S_{k,0} - S_{k-1,0} = -\frac{\alpha}{t-k}S_{k-1,0} \;\Rightarrow\; (t-k)(S_{k-1,0} - S_{k,0}) = \alpha S_{k-1,0} \;\Rightarrow\; (1+\alpha)S_{k-1,0} = (t-(k-1))S_{k-1,0} - (t-k)S_{k,0},$$
and, shifting the index, $(1+\alpha)S_{k,0} = (t-k)S_{k,0} - (t-(k+1))S_{k+1,0}$. Summing the telescoping identities
$$(1+\alpha)S_{0,0} = tS_{0,0} - (t-1)S_{1,0}$$
$$(1+\alpha)S_{1,0} = (t-1)S_{1,0} - (t-2)S_{2,0}$$
$$(1+\alpha)S_{2,0} = (t-2)S_{2,0} - (t-3)S_{3,0}$$
$$\vdots$$
$$(1+\alpha)S_{t-2,0} = 2S_{t-2,0} - S_{t-1,0}$$
$$(1+\alpha)S_{t-1,0} = S_{t-1,0} - (t-t)S_{t,0}$$
gives
$$(1+\alpha)\sum_{j=0}^{t-1} S_{j,0} = tS_{0,0} \;\Rightarrow\; E[N_{t,0}] = \sum_{j=0}^{t-1} S_{j,0} = \frac{t}{1+\alpha}.$$
This result is an improvement compared to the result given in the paper [5],
$$\frac{t+\alpha}{1+\alpha} - \alpha^2\ln t \le E[N_{t,0}] \le \frac{t+\alpha}{1+\alpha}.$$

Theorem 9. $\Pr[|N_{t,0} - E[N_{t,0}]| \ge l] \le \exp\Big(-\dfrac{l^2}{2t}\Big)$.
Proof. By Azuma's inequality, if $X_0, X_1, X_2, \ldots$ is a martingale sequence such that $|X_k - X_{k-1}| \le c$ for all $k$ (where $c$ may depend on $k$), then for all $t \ge 0$ and any $l > 0$,
$$\Pr[|X_t - X_0| \ge l] \le \exp\Big(-\frac{l^2}{2tc^2}\Big).$$
From Theorem 7, $|E[N_{t,0} \mid N_{t-k,0}] - E[N_{t,0} \mid N_{t-(k+1),0}]| \le 1$, so the Doob martingale $X_k = E[N_{t,0} \mid N_{k,0}]$ has differences bounded by $c = 1$, and therefore
$$\Pr[|N_{t,0} - E[N_{t,0}]| \ge l] \le \exp\Big(-\frac{l^2}{2t}\Big).$$
This result is an improvement compared to the result given in the paper [5],
$$\Pr[|N_{t,0} - E[N_{t,0}]| \ge l] \le \exp\Big(-\frac{l^2}{4t}\Big).$$
Since $E[N_{t,0}] = \sum_{j=0}^{t-1} S_{j,0} = \frac{t}{1+\alpha}$, we also have
$$P_t = \lim_{t\to\infty}\frac{E[N_{t,0}]}{t} = \frac{1}{1+\alpha}.$$
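Theorem 8 can be checked by simulating the copying model itself. The sketch below assumes d = 1 and that each arriving vertex chooses its prototype uniformly at random from the existing vertices (our reading of the model); the initial self-loop is just a convenient way to start the process and does not affect the limit. The values of α and t are illustrative. The fraction of in-degree-0 vertices should approach 1/(1+α).

```python
import random

# Simulation of the linear growth copying model with out-degree d = 1, used to check
# Theorem 8: the fraction of vertices with in-degree 0 should approach 1/(1+α).
def copying_model_zero_indegree_fraction(alpha: float, t: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    out_link = [0]            # vertex 0 points to itself just to start the process
    in_degree = [1]
    for _ in range(1, t):
        u = len(out_link)     # the newly arriving vertex
        if rng.random() < alpha:
            dest = rng.randrange(u)          # uniform destination among existing vertices
        else:
            prototype = rng.randrange(u)
            dest = out_link[prototype]       # copy the prototype's out-link
        out_link.append(dest)
        in_degree.append(0)
        in_degree[dest] += 1
    return sum(1 for d in in_degree if d == 0) / t

alpha = 0.5
print(copying_model_zero_indegree_fraction(alpha, t=200_000))  # ≈ 1/(1+α)
print(1 / (1 + alpha))                                         # 0.666...
```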

IV. DISTRIBUTIONS

In-degree Nt,i distribution
$N_{t,i}$ represents the number of vertices having in-degree i at time t; for simplicity d = 1. Hence
$$N_{t,i} = \begin{cases}
N_{t-1,i} - 1 & \text{w.p. } \dfrac{\alpha N_{t-1,i}}{t-1} + \dfrac{(1-\alpha)\,i\,N_{t-1,i}}{t-1},\\[4pt]
N_{t-1,i} + 1 & \text{w.p. } \dfrac{\alpha N_{t-1,i-1}}{t-1} + \dfrac{(1-\alpha)(i-1)\,N_{t-1,i-1}}{t-1},\\[4pt]
N_{t-1,i} & \text{otherwise.}
\end{cases}$$
Write $N_{*,i-1} = N_{0,i-1}, N_{1,i-1}, N_{2,i-1}, N_{3,i-1}, \ldots, N_{t-1,i-1}$ and, for $0 \le t-k \le t$, define $X_{t-k} = E[N_{t,i} \mid N_{t-k,i}, N_{*,i-1}]$. The sequence $X_0, X_1, X_2, \ldots, X_t$ forms a Doob martingale [3], and
$$E[N_{t,i} \mid N_{t-k,i}, N_{*,i-1}] = N_{t-k,i}S_{k,i} + \sum_{j=0}^{k-1} S_{j,i}F_{j+1,i-1},$$
where
$$S_{0,i} = 1, \qquad S_{k,i} = S_{k-1,i}\Big(1 - \frac{\alpha}{t-k} - \frac{(1-\alpha)i}{t-k}\Big), \qquad F_{k,i-1} = \frac{\alpha + (1-\alpha)(i-1)}{t-k}\,N_{t-k,i-1} \quad \text{for } k \ge 1.$$
Proof.
$$E[N_{t,i} \mid N_{t-1,i} = \beta, N_{t-1,i-1} = \gamma] = (\beta - 1)\Big(\frac{\alpha\beta}{t-1} + \frac{(1-\alpha)i\beta}{t-1}\Big) + (\beta + 1)\,\frac{\alpha\gamma + (1-\alpha)(i-1)\gamma}{t-1} + \beta\Big(1 - \frac{\alpha\beta}{t-1} - \frac{(1-\alpha)i\beta}{t-1} - \frac{\alpha\gamma + (1-\alpha)(i-1)\gamma}{t-1}\Big)$$
$$= \Big(1 - \frac{\alpha}{t-1} - \frac{(1-\alpha)i}{t-1}\Big)\beta + \frac{\alpha + (1-\alpha)(i-1)}{t-1}\gamma$$
$$\Rightarrow E[N_{t,i} \mid N_{t-1,i}, N_{t-1,i-1}] = \Big(1 - \frac{\alpha}{t-1} - \frac{(1-\alpha)i}{t-1}\Big)N_{t-1,i} + \frac{\alpha + (1-\alpha)(i-1)}{t-1}N_{t-1,i-1}.$$
Conditioning one step further back,
$$E[N_{t,i} \mid N_{t-2,i}, N_{t-2,i-1}, N_{t-1,i-1}] = E\big[E[N_{t,i} \mid N_{t-1,i}, N_{t-1,i-1}] \,\big|\, N_{t-2,i}, N_{t-2,i-1}, N_{t-1,i-1}\big]$$
$$= \Big(1 - \frac{\alpha}{t-1} - \frac{(1-\alpha)i}{t-1}\Big)\Big\{\Big(1 - \frac{\alpha}{t-2} - \frac{(1-\alpha)i}{t-2}\Big)N_{t-2,i} + \frac{\alpha + (1-\alpha)(i-1)}{t-2}N_{t-2,i-1}\Big\} + \frac{\alpha + (1-\alpha)(i-1)}{t-1}N_{t-1,i-1},$$
since $E[N_{t-1,i-1} \mid N_{t-2,i}, N_{t-2,i-1}, N_{t-1,i-1}] = N_{t-1,i-1}$ and $E[N_{t-1,i} \mid N_{t-2,i}, N_{t-2,i-1}, N_{t-1,i-1}]$ is given by the one-step formula. Repeating the same conditioning for $N_{t-3,i}, N_{t-3,i-1}$ and, inductively, for $N_{t-k,i}$, we obtain
$$\Rightarrow E[N_{t,i} \mid N_{t-k,i}, N_{*,i-1}] = N_{t-k,i}S_{k,i} + \sum_{j=0}^{k-1} S_{j,i}F_{j+1,i-1},$$
where $S_{0,i} = 1$, $S_{k,i} = S_{k-1,i}\big(1 - \frac{\alpha}{t-k} - \frac{(1-\alpha)i}{t-k}\big)$ and $F_{k,i-1} = \frac{\alpha + (1-\alpha)(i-1)}{t-k}N_{t-k,i-1}$ for $k \ge 1$.

V. SURVEY INFERENCE

The synthesis:
$$|E[N_{t,i} \mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i} \mid N_{t-(k+1),i}, N_{t-(k+1),i-1}]| \le 2.$$
Proof.
$$E[N_{t,i} \mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i} \mid N_{t-(k+1),i}, N_{t-(k+1),i-1}] = N_{t-k,i}S_{k,i} + \sum_{j=0}^{k-1} S_{j,i}F_{j+1,i-1} - \Big(N_{t-(k+1),i}S_{k+1,i} + \sum_{j=0}^{k} S_{j,i}F_{j+1,i-1}\Big)$$
$$= N_{t-k,i}S_{k,i} - N_{t-(k+1),i}S_{k+1,i} - S_{k,i}F_{k+1,i-1}.$$
Case 1 ($N_{t-k,i} = N_{t-(k+1),i} - 1$):
$$= \Big|N_{t-(k+1),i}S_{k,i} - S_{k,i} - N_{t-(k+1),i}S_{k,i}\Big(1 - \frac{\alpha}{t-(k+1)} - \frac{(1-\alpha)i}{t-(k+1)}\Big) - S_{k,i}\,\frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1}\Big|$$
$$= \Big|{-S_{k,i}} + S_{k,i}\Big\{\frac{\alpha N_{t-(k+1),i} + (1-\alpha)i N_{t-(k+1),i}}{t-(k+1)} - \frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1}\Big\}\Big| \le 2,$$
since $\frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1} \le 1$, $\frac{\alpha N_{t-(k+1),i} + (1-\alpha)i N_{t-(k+1),i}}{t-(k+1)} \le 1$ (both are probabilities), and $S_{k,i} \le 1$. The bound for Case 1 is the same as that of the paper [5].
Case 2 ($N_{t-k,i} = N_{t-(k+1),i} + 1$):
$$= \Big|N_{t-(k+1),i}S_{k,i} + S_{k,i} - N_{t-(k+1),i}S_{k,i}\Big(1 - \frac{\alpha}{t-(k+1)} - \frac{(1-\alpha)i}{t-(k+1)}\Big) - S_{k,i}\,\frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1}\Big|$$
$$= \Big|S_{k,i} + S_{k,i}\Big\{\frac{\alpha N_{t-(k+1),i} + (1-\alpha)i N_{t-(k+1),i}}{t-(k+1)} - \frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1}\Big\}\Big| \le 2.$$
The bound for Case 2 is the same as that of the paper [5].
Case 3 ($N_{t-k,i} = N_{t-(k+1),i}$):
$$= \Big|N_{t-(k+1),i}S_{k,i} - N_{t-(k+1),i}S_{k,i}\Big(1 - \frac{\alpha}{t-(k+1)} - \frac{(1-\alpha)i}{t-(k+1)}\Big) - S_{k,i}\,\frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1}\Big|$$
$$= \Big|S_{k,i}\Big\{\frac{\alpha N_{t-(k+1),i} + (1-\alpha)i N_{t-(k+1),i}}{t-(k+1)} - \frac{\alpha + (1-\alpha)(i-1)}{t-(k+1)}N_{t-(k+1),i-1}\Big\}\Big| \le 1.$$
The bound obtained here for Case 3 is an improvement compared to the result given in the paper [5].

Azuma's Inequality: Let $X_0, X_1, X_2, \ldots$ be a martingale sequence such that for all $k$, $|X_k - X_{k-1}| \le c$, where $c$ may depend on $k$. Then for all $t \ge 0$ and any $l > 0$, the concentration bound applies. Since
$$|E[N_{t,i} \mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i} \mid N_{t-(k+1),i}, N_{t-(k+1),i-1}]| \le 2,$$
$$\Rightarrow \Pr[|N_{t,i} - E[N_{t,i}]| \ge l] \le \exp\Big(-\frac{l^2}{4t}\Big).$$
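For general i, the same simulation used earlier can report the empirical fractions N_{t,i}/t studied in this section. The sketch below reuses the reading of the model assumed in the earlier sketch (prototype chosen uniformly per arriving vertex, d = 1); α, t and the range of i are illustrative choices, and only the i = 0 value is compared against a formula proved above.

```python
import random
from collections import Counter

# Empirical in-degree distribution N_{t,i}/t of the copying model with d = 1.
def indegree_fractions(alpha: float, t: int, seed: int = 1):
    rng = random.Random(seed)
    out_link = [0]            # start from a single vertex with a self-loop
    in_degree = [1]
    for _ in range(1, t):
        u = len(out_link)
        if rng.random() < alpha:
            dest = rng.randrange(u)
        else:
            dest = out_link[rng.randrange(u)]   # copy the prototype's out-link
        out_link.append(dest)
        in_degree.append(0)
        in_degree[dest] += 1
    counts = Counter(in_degree)
    return {i: counts[i] / t for i in range(6)}

for i, f in indegree_fractions(alpha=0.5, t=200_000).items():
    print(f"N_t,{i}/t ≈ {f:.4f}")  # i = 0 is close to 1/(1+α); the tail decays polynomially
```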

VI. CONCLUSION

This paper studies random graph models using probabilistic techniques, concentrating on the paper "Stochastic models for the web graph" by Ravi Kumar et al. [5]. While working through the proofs of the results given in that paper, some more precise results have been found. For example, for i = 0 this paper proves $|E[N_{t,0}\mid N_{t-k,0}] - E[N_{t,0}\mid N_{t-(k+1),0}]| \le 1$, whereas in the paper the bound is 2. It is also proved that $E[N_{t,0}] = \frac{t}{1+\alpha}$. For general values of i, this paper proves that $|E[N_{t,i}\mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i}\mid N_{t-(k+1),i}, N_{t-(k+1),i-1}]| \le 1$ for $N_{t-k,i} = N_{t-(k+1),i}$, whereas the corresponding result of the paper is a bound of 2. In future work one may look for an improvement of $|E[N_{t,i}\mid N_{t-k,i}, N_{t-k,i-1}] - E[N_{t,i}\mid N_{t-(k+1),i}, N_{t-(k+1),i-1}]|$ for general values of $N_{t-k,i}$.

VII. APPENDIX

REFERENCES
[1] W. Aiello, F. Chung, and L. Lu. A random graph model for massive graphs. 2000.
[2] B. Bollobas. Random Graphs. 1985.
[3] J. L. Doob. What is a martingale? The American Mathematical Monthly, Vol. 78, No. 5 (May 1971), pp. 451-463.
[4] J. Kleinberg, S. R. Kumar, P. Raghavan, S. Rajagopalan, and A. Tomkins. The web as a graph: Measurements, models and methods. 1999.
[5] Ravi Kumar, Prabhakar Raghavan, Sridhar Rajagopalan, D. Sivakumar, Andrew Tomkins, and Eli Upfal. Stochastic models for the web graph. 2000.
[6] Michael Mitzenmacher and Eli Upfal. Probability and Computing.
[7] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. 2009.
[8] Sheldon M. Ross. Probability Models for Computer Science. 2006.
