Editorial Note: Polygon is MDC Hialeah's Academic Journal. It is a multi-disciplinary online publication whose purpose is to display the academic work produced by faculty and staff. In this issue, we find seven articles that celebrate the scholarship of teaching and learning from different academic disciplines. As we cannot understand a polygon merely by contemplating its sides, our goal is to present work that represents the campus as a whole. We encourage our colleagues to send in submissions for the next issue of Polygon. The editorial committee and reviewers would like to thank Dr. Vicente, Dr. Bradley-Hess, Dr. Castro, and Prof. Jofre for their unwavering support. Also, we would like to thank Prof. Javier Dueñas for his work on the design of the journal. In addition, the committee would like to thank the contributors for making this edition possible. It is our hope that you, our colleagues, continue to contribute and support the mission of the journal. Sincerely, The Polygon Editorial Committee
The Editorial Committee: Dr. Mohammad Shakil - Editor-in-Chief; Dr. Jaime Bestard; Prof. Victor Calderin
Reviewers: Prof. Steve Strizver-Munoz, Prof. Joseph Wirtel
Patrons: Dr. Vicente, Campus President; Dr. Ana Maria Bradley-Hess, Dean; Dr. Caridad Castro, Chair of LAS; Prof. Maria Jofre, Chair of EAP and Foreign Languages
Mission of Miami Dade College The mission of the College is to provide accessible, affordable, high-quality education that keeps the learner’s needs at the center of the decision-making process.
Miami Dade College District Board of Trustees Helen Aguirre Ferré, Chair Peter W. Roulhac, Vice Chair Armando J. Bucelo Jr. Marielena A. Villamil Mirta "Mikki" Canton Benjamin León III Robert H. Fernandez Eduardo J. Padrón, College President
Editorial Note

Barricade the Doors First: A Reflection on Zombie Fiction and its Critique of Post Modern Society
Victor Calderín
Synthesis of Optimum Experimental Plans Ensuring Computation of Integral Parameters Using a Regression Equation.
Dr. Manuel Carames
Teaching Strategies Involving CAT’s and the Statistical Validation of the Results
Dr. Luis Martin & Dr. Manuel Carames
Incorporating Environmental Immersions to Learning Community linked courses in Mathematics and Geography
Dr. Jaime Bestard & Dr. Arturo Rodriguez
Using Beta-binomial Distribution in Analyzing Some Multiple-Choice Questions of the Final Exam of a Math Course, and its Application in Predicting the Performance of Future Students
Dr. Mohammad Shakil
The Magic Math
David Tseng, (Graphed by Nancy Liu)
Barricade the Doors First: A Reflection on Zombie Fiction and its Critique of Post Modern Society Prof. Victor Calderin English and Communications Miami-Dade College Hialeah Campus 1780 West 49th St., Hialeah 33012 E-mail: vcalderi@mdc.edu Abstract The trope of the zombie is a staple of the contemporary horror genre. This paper reflects on how the symbol of the zombie is a post modern refraction of the traditional resurrection story. In addition to addressing the concept of a resurrection, the zombie mythos also addresses the state of the family unit in the post modern world. Ultimately, the horror of zombie fiction is not the walking dead but what they say about us, the living. Themes: Popular Culture Key Words: Zombie Fiction, Resurrection, Post Modernism, George A. Romero
Barricade the Doors First: A Reflection on Zombie Fiction and its Critique of Post Modern Society
George A. Romero forever changed how we view the zombie in 1968 with the release of Night of the Living Dead, adding a new symbolic depth to the undead. With his film, Romero ties the metaphor of the zombie directly to the disconnection we feel in postmodern society. The zombie is the embodiment of society sans soul, sans intellect, driven only by the urge to consume. But these disparaging characteristics fly in the face of the metaphor that is at the core of the zombie mythos: the resurrection. When we think about the concept of resurrection, two key ideas are at play. First, the idea of a resurrection directly usurps the permanence of death, contradicting most empirical data. But not only does the resurrection defeat death; it merges spirit with transfigured flesh, creating the perfect condition for an afterlife. And it is in this afterlife
that we find the second main factor: eternal reunion with family. Not only does the resurrection offer beautified life after death, but it offers us the company of those we love. And so this concept endured for thousands of years, from the Eleusinian Mysteries to Christian dogma, until the advent of George A. Romero's Night of the Living Dead. Romero's work defines our concept of the zombie: zombies are mindless automatons who crave one thing, living flesh. And while they look human and have some semblance of intelligence, as seen throughout the film when the spectator sees them pick up rudimentary tools like rocks and sticks, zombies do not have the basic qualities that would allow us to label them as human. The classical definition of resurrection creates an image of perfect flesh merged with spirit, but the postmodern condition transforms this idea in the zombie. The zombie is not perfected flesh but a rotten, dilapidated form that mimics the body. This is why, in the opening of the film, Barbra and Johnny cannot immediately identify the zombie approaching, a critique of our society and its zombie-esque movement. This is also why the scenes where the zombies are aimlessly walking around the countryside are so disturbing. The horror isn't just the gore factor but also the social and psychological danger of all of us becoming soulless machines. This lack of any sense of spirit, of animus, is what really defines the zombie as a metaphor in direct opposition to the resurrection. If the classical view allowed us to return in spirit and perfect flesh, the postmodern view, which is based on the lack of any connections, be they literal or symbolic, creates a revised metaphor, a dark resurrection, where the flesh does return, but it is not perfect since the spirit is completely missing. But this new symbol goes beyond this: as the individual is affected by the zombie apocalypse, so is society as a whole. In the very opening of Night of the Living Dead, we see how society has already begun to fragment. One of the most fundamental units of society is the family, which will be tested in
extreme ways throughout the film. Early in the film, when the siblings, Barbra and Johnny, park in the cemetery, they are bickering over the distance required for this ritual of family observance, one which their mother insists upon. Their exchange touches on the theme of family discord. Johnny does not see the purpose of leaving the flower on the relative's grave, even joking about the caretaker making a profit by reselling the ornament. Barbra defends her mother's request, later praying to show respect. While the relative is never really mentioned, it is important to note that it is a male relative. As Johnny begins to tease his sister, he is attacked by a male zombie, and he urges his sister to run for safety while he unsuccessfully tries to fend off the ghoul. Barbra and Johnny make up the first familial subunit of the film. The other units are met in the house where the majority of the film takes place. At the secluded farmhouse, we discover the other two family subunits. The first is the Coopers, composed of the husband, wife, and infirm daughter, and the other is the young couple from the town, Tom and Judy. Both groups were hiding from the zombies in the basement of the house while Ben, the protagonist and lone figure of the film, and Barbra, who is catatonic after her brother's death, were barricading the home. Both groups show the family at different points in time. The young couple represents the initial phase of the union. Tom and Judy are effectively inseparable, a condition that will ironically lead to their inevitable demise. The Coopers signify the family in the latter phase, when they have a child. While Tom and Judy express love for each other, Mr. and Mrs. Cooper do not, as seen when Mr. Cooper tells his wife that even though she is not happy with him, they must stay together and survive for their daughter's sake. Despite all affection and intentions, none of these groups survive. The idea of the resurrection promises a reunion with lost loved ones, but in Night of the Living Dead this reunion is a frustrated one. The family is shattered and left in pieces.
Barbra has already lost her brother, and at the end of the film, when he reappears as a zombie, it is Johnny who drags his sister away into the zombie horde. The Coopers meet their end in the basement. When their daughter turns, having previously been bitten, she murders her mother, who discovers her eating the flesh of her father, who has died from the gunshot wound inflicted by Ben in their final scuffle. Tom and Judy meet their end during the botched attempt to fuel the truck, where out of loyalty the boyfriend stays in the combusting truck trying to save his girlfriend as it explodes. All the family units fall apart or are destroyed by the zombie threat, which is completely fitting, for as the resurrection promises reunion, zombification promises separation. In addition to this dichotomy, as the zombie nightmare destroys the family, only the individual is able to survive, and this is true in the case of Ben. Ben is able to endure the night because he is able to think clearly. Upon discovering the house, he secures it, searches it for possible threats, and devises an escape plan and attempts to execute it, a plan which, it should be noted, fails not because of him but because of Tom. Also, Ben is the only one to survive the night, but in an ironic twist he is shot by the search party securing the area. The one human to survive the zombies is senselessly killed by other humans, leaving us with a question: Who are the real monsters? The symbol of the zombie is a troubling one. It is a metaphor that not only denies us the spirit and intellect that make us human but also destroys the family unit that is essential to humanity. Beyond these practical implications, the myth of the zombie completely subverts the concept of a resurrection. Where the resurrection story is one that offers hope against the inevitable, the zombie narrative is one that destroys hope, leaving us alone, uncertain, and lost.
Works Cited Night of the Living Dead. Dir. George A. Romero. Perf. Duane Jones. 1968. Film.
SYNTHESIS OF OPTIMUM EXPERIMENTAL PLANS ENSURING COMPUTATION OF INTEGRAL PARAMETERS USING A REGRESSION EQUATION (SOME EXAMPLES)
Dr. Manuel Carames
Assistant Professor, Mathematics Department
Miami-Dade College, North Campus
11380 NW 27 Avenue, Miami, Florida 33167, USA
Email: mcarames@mdc.edu

ABSTRACT This is an attempt to share with my colleagues some experiences related to optimal experimental plans, particularly the ones introduced by me under the general name of I plans (I because they are plans to estimate integral indexes of the response function).

Theme: Optimal Experimental Plans
Key words: Experiment, estimation, statistics, system, model

Introduction: In different scenarios, engineers, researchers, and others need to estimate not the values of the response function at certain points but some integral indexes of this function. This paper is dedicated to that idea.

Body: What is the researched object?
[Figure: schematic of the researched object, with input vector x, output vector y, variables z that can be monitored but not manipulated, and noise variables p.]
The set of parameters which define the state of the object is divided into the following groups:
Input variables (vector of input variables) $\vec{x} = (x_1, x_2, \ldots, x_m)$. In this group we have the controllable parameters of the object. The values of these parameters lie inside given intervals, set by the schedule of the technological process or by technical constraints, and are of the form $x_{i\,\min} \le x_i \le x_{i\,\max}$, $i = 1, 2, \ldots, m$.

Output variables (vector of output variables) $\vec{y} = (y_1, y_2, \ldots, y_r)$. In this group we have the variables that contain information about the quantity and quality characteristics of the output product.

In group $\vec{z} = (z_1, z_2, \ldots, z_s)$ we include the variables that we can control (monitor) but cannot manipulate.

In group $\vec{p} = (p_1, p_2, \ldots, p_l)$ we include the variables that we can neither control nor manipulate. In this group we have noises: we do not know the points where they are applied, and we know neither their time characteristics nor their power.
It is clear from the way we defined the universe of signals that only the variables belonging to $\vec{x}$ can be manipulated. In the case of an active experiment, all the variables belonging to $\vec{p}$ will be represented by an equivalent additive noise $e$ applied to the output. Variables from $\vec{z}$ whose characteristics during the experiment are known will be considered as variables belonging to group $\vec{x}$. Let us assume that we have one output signal; then the response function can be represented as
$$y = \eta(\vec{x}, \vec{\theta}) + e,$$

where $e$ is the noise with the following properties:

- $E(e) = 0$: the mean of the noise is zero ($E$ is the expectation operator);
- $E(e_i e_j) = 0$ for all $i \ne j$: the noise values at the different points $x_i, x_j$ of the factor space where the output is measured are uncorrelated;
- $D(e_i) = \sigma^2$ for all $i$: the dispersion of the noise is the same at all points $x_i$ of the factor space.
The last two conditions can be written in another way as $E\{e\,e'\} = \sigma^2 W$, where $W$ is the unit matrix of dimension $n \times n$; $n$ is the number of experiments (the number of measurements of the output); $e' = (e_1, e_2, \ldots, e_n)$ is the vector of the error values at the points where the measurements were taken; and $E$ is the expectation operator. The model of the response function is

$$\eta(\vec{x}, \vec{\theta}) = \sum_{i=1}^{k} \theta_i f_i(\vec{x}),$$

where $\theta_i$ are the coefficients of the model and $f_i(\vec{x})$ are given functions of the input variables. The schema of the object can now be given by the following sketch:
[Figure: object with input $\vec{x}$, output $y$, and additive noise $e$ at the output.]
What do we want to do? Sometimes we want to find not the discrete values of the response function but some integral indexes of the function, that is,

$$\mu_j = \int_{\Omega} y\,\omega_j(\vec{x})\,d\vec{x} = \int_{\Omega} \left[\eta(\vec{x},\vec{\theta}) + e\right]\omega_j(\vec{x})\,d\vec{x} = \int_{\Omega} \left[\sum_{i=1}^{k}\theta_i f_i(\vec{x}) + e\right]\omega_j(\vec{x})\,d\vec{x}.$$
The estimation of the indexes is found as

$$\hat{\mu}_j = E(\mu_j) = \int_{\Omega} \sum_{i=1}^{k}\hat{\theta}_i f_i(\vec{x})\,\omega_j(\vec{x})\,d\vec{x},$$

where $\mu_j$ is the estimated index; $y$ is the output parameter (output variable); $\hat{\theta}_i$ is the estimation of the coefficient $\theta_i$ of the model; $\omega_j(\vec{x})$ is a given non-random weight function that in general depends on the input factors; $\Omega$ is the given region of the factor space over which we calculate the integral indexes; and $j = 1, \ldots, M$, where $M$ is the number of calculated integral indexes.
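As a concrete illustration of an integral index (this sketch is not part of the original paper; the model, its coefficients, and the region are made-up values), the following Python snippet evaluates $\mu = \int_\Omega \eta(x,\hat\theta)\,\omega(x)\,dx$ for a one-dimensional quadratic model with $\omega(x) \equiv 1$ by numerical quadrature, assuming SciPy is available.

```python
from scipy import integrate

# Hypothetical fitted model eta(x) = theta0 + theta1*x + theta2*x^2
# (the coefficients are illustrative, not taken from the paper).
theta_hat = [2.0, -0.5, 1.3]

def eta(x):
    """Response function evaluated with the fitted coefficients."""
    return theta_hat[0] + theta_hat[1] * x + theta_hat[2] * x ** 2

def omega(x):
    """Weight function; omega(x) = 1 gives the mean-value integral index."""
    return 1.0

# Integral index mu = integral over Omega = [-1, 1] of eta(x) * omega(x) dx
mu_hat, _ = integrate.quad(lambda x: eta(x) * omega(x), -1.0, 1.0)
print(f"estimated integral index mu = {mu_hat:.4f}")  # analytic value: 4 + 2*1.3/3
```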
We will look for the estimators of the coefficients of the model within the linear class of estimators, which means

$$\hat{\vec{\theta}} = T\vec{y},$$

where $\vec{y}\,' = (y_1, y_2, \ldots, y_n)$ is the vector of the values of the response function at the different points $\vec{x}_i$ of the factor space, $n$ is the number of experiments, and $T$ is a matrix of dimension $k \times n$. We will look for estimators with the following characteristics:

- $E(\hat{\vec{\mu}}) = \vec{\mu}_{\mathrm{real}}$, which means that their expected value equals the real value of the integral indexes;
- $\lim_{n\to\infty} p\left[(\hat{\vec{\mu}}_n - \vec{\mu}_{\mathrm{real}})'(\hat{\vec{\mu}}_n - \vec{\mu}_{\mathrm{real}}) \ge \xi\right] = 0$, which means that the estimation converges, from the probabilistic point of view, to the real values of the estimated parameters; the index $n$ signifies that the estimation $\hat{\vec{\mu}}_n$ was obtained after $n$ measurements, and $\xi$ is any given positive number;
- $D(\hat{\vec{\mu}}) \le D(\hat{\vec{\mu}}_0)$, where $D(\hat{\vec{\mu}})$ is the covariance matrix of the proposed estimation and $D(\hat{\vec{\mu}}_0)$ is the covariance matrix of any other unbiased estimation of $\vec{\mu}$.

These estimators are known as the best linear estimators, and the estimations that we obtain are the best linear estimations of the integral indexes of the response function.
It is known that, in order to improve the quality of the statistical estimations, we need, somehow, to select the points of the factor space where our measurements are made; in other words, WE NEED TO PLAN THE EXPERIMENT! Today we can find in the literature different types of experimental plans. When we plan an active experiment, the statistical material necessary for the estimation of the parameters is collected following a defined research program. The research program is the experimental plan, and it satisfies a certain criterion of optimality.

About Experimental Plans

Plans are divided into exact and continuous plans. Exact plans are optimal for a given number of observations $N$. The task of finding an optimal exact plan consists of finding where, in the plan region, measurements should be taken to satisfy the given criterion of optimality. If the obtained plan is concentrated in $n \le N$ points, then we can define the observation frequency at point $l$ as

$$\xi_l = \frac{r_l}{N},$$

where $r_l$ is the number of observations done at point $x_l$. From this it follows that $\sum_{l=1}^{n} \xi_l = 1$, where $\xi_l$ is the proportion of observations done at point $x_l$, taking the total number of observations as the unit. The main characteristic of exact plans is that $r_l = \xi_l N$, where $r_l$ is a positive integer.
Continuous plans are not related to a specific number of observations. These plans are given by a positive probability metric $\xi_x$, that is, $\int_X d\xi_x = 1$. A continuous norm plan $\varepsilon$ is the following set of magnitudes:

$$\varepsilon = \begin{Bmatrix} x_1 & x_2 & \cdots & x_n \\ \xi_1 & \xi_2 & \cdots & \xi_n \end{Bmatrix},$$

where the $x_i$ are the points of the spectrum of the plan, $x_i \in X$, with $X$ the planning region, and $\xi_i$ is the frequency of observations at the corresponding points of the plan. We can find a correspondence between norm plans, the norm information matrix of the plan, and the norm covariance matrix.

The norm information matrix of the plan is given by

$$L(\varepsilon) = \int_X f(x) f'(x)\, d\xi(x),$$

where $f'(x) = (f_1(x), f_2(x), \ldots, f_k(x))$ is the basis in which the response function is decomposed. In the case that the metric is concentrated in a finite number of points we have

$$L(\varepsilon) = \sum_{i=1}^{n} \xi_i f(x_i) f'(x_i),$$

and the norm covariance matrix is

$$D(\varepsilon) = L^{-1}(\varepsilon).$$
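To make the norm information and covariance matrices concrete, here is a short Python sketch (illustrative only, not from the paper): it assembles $L(\varepsilon) = \sum_i \xi_i f(x_i) f'(x_i)$ for a straight-line basis on a hypothetical three-point plan and inverts it to obtain $D(\varepsilon) = L^{-1}(\varepsilon)$.

```python
import numpy as np

def f(x):
    """Basis vector for the straight-line model eta(x) = theta0 + theta1*x."""
    return np.array([1.0, x])

# Illustrative continuous norm plan: spectrum points and observation frequencies.
points = [-1.0, 0.0, 1.0]
freqs = [0.25, 0.50, 0.25]  # frequencies must sum to 1

# Norm information matrix L(eps) = sum_i xi_i * f(x_i) f'(x_i)
L = sum(xi * np.outer(f(x), f(x)) for x, xi in zip(points, freqs))

# Norm covariance matrix D(eps) = inverse of L(eps)
D = np.linalg.inv(L)

print("L(eps) =\n", L)
print("D(eps) =\n", D)
```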
Statistical properties of the estimation of integral indexes of response functions

Let us assume that $\eta(\vec{x}, \theta)$ is a function linear in the parameters, that is, $\eta(\vec{x}, \theta) = \theta' f(\vec{x})$, where $f'(\vec{x}) = (f_1(\vec{x}), f_2(\vec{x}), \ldots, f_k(\vec{x}))$. At the points $x_1, x_2, \ldots, x_n$, independent measurements $y_1, y_2, \ldots, y_n$ were made, with dispersions equal to $\sigma_1^2, \sigma_2^2, \ldots, \sigma_n^2$. We will analyze only linear estimations of $\theta$, which means that we are looking for estimations that can be represented as $\hat{\theta} = T\vec{y}$, where $\vec{y}\,' = (y_1, y_2, \ldots, y_n)$ and $T$ is a matrix with dimensions $k \times n$.

The following theorem is known from the literature: the best linear estimation of the unknown parameters $\theta$ is $\hat{\theta} = B^{-1}\vec{y}$, where the matrix

$$B = \sum_{i=1}^{n} v_i f(\vec{x}_i) f'(\vec{x}_i), \qquad v_i = \sigma_i^{-2},$$

is not degenerate. The covariance matrix of the estimation is equal to $D(\hat{\theta}) = B^{-1}$.

One corollary of the above theorem is that the estimation of any linear combination $t = C\theta$ will be $\hat{t} = C\hat{\theta}$, and the covariance matrix of the estimations $\hat{t}$ is equal to $D(\hat{t}) = C\, D(\hat{\theta})\, C'$.
Let us demonstrate that our estimations of $\vec{\mu}$ (the integral indexes of the response function) are linear combinations of the coefficients of the model. With

$$\eta(\vec{x}, \vec{\theta}) = \sum_{j=1}^{k} \theta_j f_j(\vec{x}),$$

we have

$$E[\mu_l] = \int_{\Omega} \omega_l(\vec{x})\,\eta(\vec{x}, \vec{\theta})\,d\vec{x} = \int_{\Omega} \omega_l(\vec{x}) \sum_{j=1}^{k} \theta_j f_j(\vec{x})\,d\vec{x} = \theta_1 \int_{\Omega} \omega_l(\vec{x}) f_1(\vec{x})\,d\vec{x} + \cdots + \theta_k \int_{\Omega} \omega_l(\vec{x}) f_k(\vec{x})\,d\vec{x}.$$

Using matrix notation we can represent the last expression as

$$\vec{\mu} = \begin{pmatrix} \int_{\Omega} \omega_1 f_1\,d\vec{x} & \cdots & \int_{\Omega} \omega_1 f_k\,d\vec{x} \\ \vdots & \ddots & \vdots \\ \int_{\Omega} \omega_l f_1\,d\vec{x} & \cdots & \int_{\Omega} \omega_l f_k\,d\vec{x} \end{pmatrix} \begin{pmatrix} \theta_1 \\ \vdots \\ \theta_k \end{pmatrix}, \qquad \vec{\mu} = \begin{pmatrix} \mu_1 \\ \vdots \\ \mu_l \end{pmatrix}.$$
It is also known that the best linear estimation $\hat{\theta}$ minimizes the weighted sum of squares of the differences between the measured values and the ones calculated by the model (Least Squares Method, LSM):

$$S(\theta) = \sum_{i=1}^{n} v_i \left[ y_i - f'(\vec{x}_i)\theta \right]^2.$$

If we put together the statistical properties of the estimation and what the LSM says, we can conclude that the covariance matrix of our integral indexes is

$$D(\hat{\vec{\mu}}) = \varphi\, D(\hat{\theta})\, \varphi', \qquad \varphi = \begin{pmatrix} \int_{\Omega} \omega_1(\vec{x}) f_1(\vec{x})\,d\vec{x} & \cdots & \int_{\Omega} \omega_1(\vec{x}) f_k(\vec{x})\,d\vec{x} \\ \vdots & \ddots & \vdots \\ \int_{\Omega} \omega_M(\vec{x}) f_1(\vec{x})\,d\vec{x} & \cdots & \int_{\Omega} \omega_M(\vec{x}) f_k(\vec{x})\,d\vec{x} \end{pmatrix}.$$

Why is this problem relevant? In different situations, scientists, engineers, and researchers may face one or more of the following real problems:
- estimation of the mean value of the response function and not the function itself;
- estimation of the volume of raw material in a mine;
- estimation of the amplitude of harmonics in a complex signal;
- estimation of some ordinates of a transformed function;
- estimation of the power of a signal for given frequencies.
What is the objective of the experiment? We want to estimate $E[\mu_l]$, where

$$E[\mu_l] = \int_{\Omega} \omega_l(\vec{x})\,\eta(\vec{x}, \vec{\theta})\,d\vec{x},$$

and $\eta(\vec{x}, \vec{\theta}) + e = \sum_{i=1}^{k} \theta_i f_i(\vec{x}) + e$ is the response function, which is known except for the numerical values of the parameters; $\Omega$ is the given region of the factor space; $l = 1, \ldots, M$, where $M$ is the number of integral indexes that need to be calculated; and $\omega_l(\vec{x})$ is a given non-random weight function of the input factors.

What do we need to find?
We need to find a plan $\varepsilon^*$ that belongs to the class $E$ of continuous norm plans and produces the minimum value of the variance of the estimator $\hat{\vec{\mu}}$.

Definition of the optimal experimental plan: an $I_D$ plan $\varepsilon^*$ is an optimal plan if and only if it minimizes the determinant of the covariance matrix, that is,

$$\min_{\varepsilon \in E} \det D\!\left(\hat{\vec{\mu}}_{\varepsilon}\right) = \det D\!\left(\hat{\vec{\mu}}_{\varepsilon^*}\right), \qquad \text{or equivalently} \qquad \max_{\varepsilon \in E} \det L\!\left(\hat{\vec{\mu}}_{\varepsilon}\right) = \det L\!\left(\hat{\vec{\mu}}_{\varepsilon^*}\right),$$

where $L$ is the information matrix of the plan. The $I_D$ plan is the plan which minimizes the volume of the dispersion ellipsoid.

Some examples:
Application #1: Estimation of the mean value of the response function. We assume that the weight function is $\omega_1 \equiv 1$; then

$$E[\mu_1] = \int_{\Omega} \eta(\vec{x}, \vec{\theta})\,d\vec{x}$$

is the mean value of the response function. Meaning: in different technological objects the inputs can change within $x_{i\,\min} \le x_i \le x_{i\,\max}$, and we need to estimate the mean value of the output index.
Application #2: Estimation of the amplitude of a given harmonic of the response function. The given information is the same as before, with the exception of the weight functions. In this problem they are the same as in the Fourier series,

$$\vec{\omega}(x) = \left[\sin(x), \cos(x), \sin(2x), \cos(2x), \ldots, \sin(Mx), \cos(Mx)\right],$$

so that

$$E\!\left[\mu_k^{(1)}\right] = \frac{1}{L}\int_{-L}^{L} \cos\frac{k\pi x}{L}\,\eta(x, \theta)\,dx, \qquad E\!\left[\mu_k^{(2)}\right] = \frac{1}{L}\int_{-L}^{L} \sin\frac{k\pi x}{L}\,\eta(x, \theta)\,dx.$$

Meaning: a parametric signal which describes a complex periodic movement is given, and we need to select some of its components.
Application #3: Estimation of some ordinates of a given transform of the response function. Given: a parametric signal $\eta(\vec{x}, \vec{\theta})$ in the region $X$. A transformation to the region $O_M$ is performed. This means

$$\eta(\vec{x}, \vec{\theta}),\ \vec{x} \in X \;\longrightarrow\; \tilde{\eta}(\omega, \vec{\beta}),\ \omega \in O_M, \qquad \tilde{\eta}(\omega, \vec{\beta}) = \int_X h(\omega, \vec{x})\,\eta(\vec{x}, \vec{\theta})\,d\vec{x},$$

where $h(\omega, \vec{x})$ are the weight functions used to go from region $X$ to region $O_M$. Significance: if $h(\omega, x) = e^{i\omega x}$, then we are applying the Fourier transformation to the input signal, and we want to estimate the power of this signal for given frequencies.
Analytic synthesis of an $I_D$ plan. Given:

$$\eta(x, \theta) = \theta_0 + \theta_1 x, \qquad x \in X = [-1, 1], \qquad \varepsilon^* \in E = \begin{Bmatrix} -1 & 0 & 1 \\ l_1 & 1 - 2l_1 & l_1 \end{Bmatrix}.$$

We need to find a plan which belongs to the class $E$ of norm and continuous plans and gives the minimum dispersion for the estimation of the mean value of the response function in the region $X$:

$$D(\hat{S}) \to \min_{\varepsilon \in E}.$$

Let us find the information matrix of the plan:

$$L(\varepsilon) = \sum_{i=1}^{3} l_i f(x_i) f'(x_i) = l_1 \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix} + (1 - 2l_1)\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + l_1 \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 2l_1 \end{pmatrix}.$$

The vector $\varphi$ in this case has the form $\varphi = (2, 0)$. Then

$$D(\hat{S}) = \varphi\, D(\hat{\theta})\, \varphi' = \varphi\,[L(\varepsilon)]^{-1}\varphi' = (2, 0)\begin{pmatrix} 1 & 0 \\ 0 & \frac{1}{2l_1} \end{pmatrix}\begin{pmatrix} 2 \\ 0 \end{pmatrix} = 4.$$
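The value $D(\hat{S}) = 4$ can also be checked numerically. The sketch below (an illustration added here, not part of the original paper) sweeps the frequency $l_1$ over admissible values and evaluates $\varphi\,L(\varepsilon)^{-1}\varphi'$, confirming that the dispersion of the mean-value estimate does not depend on how the observations are distributed among the three points.

```python
import numpy as np

def f(x):
    """Basis for eta(x) = theta0 + theta1*x."""
    return np.array([1.0, x])

phi = np.array([2.0, 0.0])  # phi for the mean value of the response over X = [-1, 1]

for l1 in (0.05, 0.25, 0.50):            # admissible frequencies, 0 < l1 <= 1/2
    freqs = [l1, 1.0 - 2.0 * l1, l1]     # weights placed at x = -1, 0, 1
    L = sum(xi * np.outer(f(x), f(x)) for x, xi in zip((-1.0, 0.0, 1.0), freqs))
    d_s = phi @ np.linalg.inv(L) @ phi   # D(S_hat) = phi L^{-1} phi'
    print(f"l1 = {l1:.2f}  ->  D(S_hat) = {d_s:.4f}")  # stays at 4.0
```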
This result tells us that the dispersion of the estimation of the mean of the response function does not depend on the frequencies of the plan, meaning that we can distribute the resources any way we want. We can place 50% of the observations at each extreme (most people would think this is the correct distribution, and different types of optimal experimental plans recommend it), or we can put all the resources at the center of the plan. The physical meaning of this result is that, for accurate estimation of the mean value of a straight line, you can distribute the resources uniformly at the ends of the plan region, or you can put all the resources at the center of the plan to accurately estimate the free term.

Some conclusions:

- It is really convenient to have optimal experimental plans that specifically deal with integral indexes of the response function.
- Some results were totally unexpected.
- There are many applications for these plans.
- There is a numerical method to synthesize these types of plans, based on nonlinear optimization using Rosenbrock's algorithm of rotating coordinates.
References

- Caramés, M. Doctoral Thesis "Synthesis of Optimal Experimental Plans for Calculation of Integral Indexes of the Response Function", Moscow, 1983 (in Russian).
- Letski, E., Caramés, M. "Experimental Plans for Accelerated Life Testing", News for Machinery Industry, 1979, No. 2 (in Russian).
- Nalimov, V. V. Experiment Theory, Nauka, 1971 (in Russian).
- Fedorov, V. V. Theory of Optimal Experiments, Nauka, 1971 (in Russian).
- Hartman, K., Letski, E., Shefer, V. Experimental Design for Researching Technological Processes, Mir, 1977 (in Russian).
- Krug, G. K. Doctoral Thesis "Methods of Statistical Analysis for Objects that Need to Be Controlled", Moscow, 1968.
Teaching Strategies Involving CAT’s and the Statistical Validation of the Results
Dr. Luis Martin & Dr. Manuel Carames Assistant Professors Mathematics Department Miami-Dade College, North Campus 11380 NW 27 Avenue, Miami, Florida 33167, USA Emails: mcarames@mdc.edu lmartin7@mdc.edu
ABSTRACT During the 2008 fall term, the authors conducted an experiment to study the effect of the application of CAT's on the rate of success of our students. Two MAC 1114 day classes with similar characteristics in their composition were chosen. A set of CAT's was given to one of the groups (experimental group) and no CAT's whatsoever were given to the other group (control group). At the end of the semester, the information regarding the behavior of both groups was gathered and compared using a hypothesis-testing procedure, the well-known and documented inference about the difference between two sample proportions. The results of this experiment as well as our recommendations are described in this paper.
Theme: Educational Research Key words: Statistics, Hypothesis Testing, Classroom Assessment Techniques.
1. Introduction How the students perform at Miami-Dade College is the main concern for both administrators and faculty members in this institution. As instructors, it is very important for us that the students learn the material and consequently pass our classes with the highest possible grade.
Classroom Assessment Techniques (CAT's) have been extensively reported in the literature (1), and are also extensively applied by instructors in their classes. On the other hand, statistical procedures are available to assess and validate the results obtained from the application of these CAT's. We consider that the hypothesis test for two proportions constitutes a very powerful tool. In this paper we report on the application of this technique to validate the results obtained from the piloting of two MAC 1114 classes during the 2008 fall semester.
2. Body
The CAT’s used in this experiment were the Minute Paper, the Muddiest Point, and the RSQC2 (Recall, summarize, question, connect, and comment) (1). The format of the actual surveys given to the students is given below.
Minute Paper Course: _____________________
Date: _________________
This survey is anonymous. At this point I want to evaluate your learning for a reason other than to assign a grade. I want to assess how much and how well you are learning so I can help you learn better. The analysis of the results of this assessment will permit me to learn more about how you are learning in order to improve it. 1) What was the most important thing you learned today? 2) What questions remain uppermost in your mind as we conclude this class session?
Muddiest Point
Course: ________________
Date: _________________
This survey is anonymous. At this point I want to evaluate your learning for a reason other than to assign a grade. I want to assess how much and how well you are learning so I can help you learn better. The analysis of the results of this assessment will permit me to learn more about how you are learning in order to improve it.
What was the muddiest point of my lecture today?
RSQC2 Course: _____________________
Date: _________________
This survey is anonymous. At this point I want to evaluate your learning for a reason other than to assign a grade. I want to assess how much and how well you are learning so I can help you learn better. The analysis of the results of this assessment will permit me to learn more about how you are learning in order to improve it. 1) Make a list – in words or simple phrases – of what you recall as the most important or meaningful points from the previous class.
2) Use a sentence to summarize the essence of the previous class.
Class Selection and Processing the Results of Each CAT A total of five CAT's were applied to the MAC 1114 experimental group. The control group did not receive any CAT. Both are morning classes with similar characteristics and enrollment. The results of each of these five CAT's were summarized, tabulated, and discussed with the students the day after its application. This contributed to improving communication with the students. We consider it highly important that the results of the CAT's be ready for the next meeting with the students. Also, every single aspect that was pointed out by the students in their surveys must be discussed and clarified. If they see that we don't pay much attention to this, they will lose interest. Actual student survey papers are available from the authors upon request.
Statistical Validation of Results The final passing rate was greater in the experimental group: 72% vs. 46% in the control group. Now, the research question is whether the fact that the passing rate in the experimental group was greater than the passing rate of the control group is due to chance or sampling, or whether the application of CAT's does help the students better their performance. To answer this question we decided to run a hypothesis test procedure.
Description of the Statistical Tool The hypothesis testing procedure utilized in this experiment was the well-known and documented inference about two proportions, described in (2), (3), (4), and (5). A significance level of α = 0.05 was considered, which is the most frequently used in studies like this. The notation and formulas that are used in this work are described next.
Notation

- $n_1, n_2$: sample sizes (enrollment of each class)
- $x_1, x_2$: number of successes (number of students who passed in each class)
- $\hat{p}_1 = x_1/n_1$: sample proportion of successes in sample 1
- $\hat{p}_2 = x_2/n_2$: sample proportion of successes in sample 2
- $\hat{q}_1 = 1 - \hat{p}_1$: sample proportion of failures in sample 1
- $\hat{q}_2 = 1 - \hat{p}_2$: sample proportion of failures in sample 2
- $\bar{p} = \dfrac{x_1 + x_2}{n_1 + n_2}$: pooled sample proportion of successes
- $\bar{q} = 1 - \bar{p}$: pooled sample proportion of failures
- $p$ = population proportion ($p_1 = p_2$: the two population proportions are assumed to be equal under the null hypothesis)
- Test statistic for the two-proportion hypothesis test:

$$z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\dfrac{\bar{p}\bar{q}}{n_1} + \dfrac{\bar{p}\bar{q}}{n_2}}}$$

- $\alpha$ = significance level of the test
- P-value = probability of getting a value of the test statistic that is at least as extreme as the one representing the sample data, assuming that $H_0$ is true.
Sample Requirements Samples should be big enough: at least 10 successes and at least 10 failures (5). Other authors require that the four outcomes $n_1 p_1$, $n_1 q_1$, $n_2 p_2$, and $n_2 q_2$ be greater than 5 (2). Samples must be random and independent (4) & (5).
All these requirements are fulfilled in our samples. This means that we expect that the sampling distribution of the differences between the proportions of the two samples is approximately normal.
Sample Data The data gathered from the two classes at the end of the semester is summarized as follows.

Ref 475113 (TR, Experimental Group): $n_1 = 29$, $x_1 = 21$, $\hat{p}_1 = 0.72413$
Ref 475112 (MWF, Control Group): $n_2 = 28$, $x_2 = 13$, $\hat{p}_2 = 0.46429$
Hypotheses The application of CAT's should enhance the learning capabilities of the students and therefore should contribute to better performance throughout the semester and, as a consequence, to an increased passing rate. The formal hypotheses are expressed as follows:

$H_0$: $p_1 = p_2$ (Null Hypothesis)
$H_1$: $p_1 > p_2$ (Alternate Hypothesis) (Original Claim)
The general approach, as in all hypotheses testing, is to assume that H 0 is true, and then see if the sample evidence goes against this assumption.
Calculations The formulas were fed with the sample data, and the calculations were performed as shown below.

$$\bar{p} = \frac{x_1 + x_2}{n_1 + n_2} = \frac{21 + 13}{29 + 28} = \frac{34}{57} = 0.59649 \quad \text{(pooled sample proportion of successes)}$$

$$z = \frac{0.72413 - 0.46429}{\sqrt{\dfrac{(0.59649)(0.40351)}{29} + \dfrac{(0.59649)(0.40351)}{28}}} \approx 2.00 \quad \text{(test statistic)}$$
P-Value = 2 (Area to the right of test statistic) = 2 (1 – 0.9772) = 0.0456 (This can be interpreted as the probability of getting a value of the test statistic that is at least as extreme as the one representing the sample data, assuming that H 0 is true).
Critical Value = 1.645 (Corresponding to a right-tailed test with a significance level of 0.05, when using the z-distribution).
Decision Rule If we use the traditional method of testing hypotheses, the decision rule is expressed as follows: reject $H_0$ if the test statistic is greater than or equal to the critical value. In our case, since 2.00 > 1.645, we reject $H_0$. If we use the P-value method, the decision rule is the following: reject $H_0$ if the P-value is less than or equal to the significance level of the test. In our case, since 0.046 < 0.05, we conclude once more that we reject $H_0$.
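For readers who wish to reproduce the computation, the following Python sketch (not part of the original study; it uses only the standard library) recomputes the pooled proportion and the two-proportion z statistic from the class counts and applies the traditional decision rule.

```python
from math import sqrt

# Class counts reported in the paper: experimental (CAT's) vs. control group.
n1, x1 = 29, 21  # experimental group: enrolled, passed
n2, x2 = 28, 13  # control group: enrolled, passed

p1_hat, p2_hat = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)  # pooled sample proportion of successes
q_pool = 1.0 - p_pool

# Two-proportion z statistic under H0: p1 = p2
z = (p1_hat - p2_hat) / sqrt(p_pool * q_pool * (1.0 / n1 + 1.0 / n2))

critical_value = 1.645  # right-tailed test at alpha = 0.05
print(f"p1_hat = {p1_hat:.5f}, p2_hat = {p2_hat:.5f}, z = {z:.2f}")
print("reject H0" if z >= critical_value else "fail to reject H0")
```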
3) Interpretation of the Results on H 0 , Conclusions, and Future Work We report that at the significance level of 0.05, there is sufficient sample evidence to support the claim that the student passing rate in MAC 1114 is increased when CAT’s are applied, and this result is statistically significant, according to (2). On the other hand, there is a maximum probability of 0.05 of making a Type I Error, which is the one that is committed when a true null hypothesis is rejected. Based on this experiment and the subsequent statistical analysis of the results, we have much better guidance when recommending a treatment based on the application of CAT’s. We are also considering extending this experiment to math preparatory courses.
REFERENCES (1) T.A. Angelo & K.P. Cross, "Classroom Assessment Techniques: A Handbook for College Teachers", 2nd Edition, 1993, Jossey-Bass, A Wiley Imprint. (2) A. Naiman, et al., "Understanding Statistics". (3) L.J. Kitchens, "Exploring Statistics". (4) M. Triola, "Elementary Statistics", 10th Edition, 2006, Pearson, Addison Wesley. (5) De Veaux, et al., "Statistics: Data and Models".
“Incorporating Environmental Immersions to Learning Community linked courses in Mathematics and Geography” Dr. Jaime Bestard, Mathematics, MDC- Hialeah Campus Dr. Arturo Rodriguez, Earth Sciences, MDC-North Campus.
Abstract: The learning community between GEO2000 "Physical Geography" and MAC1105 "College Algebra" during the fall semester 2008-01 at MDC Hialeah Campus linked two disciplines and two departments from different campuses, with the Earth Ethics Institute cooperating as a facilitator, in an instructional practice intended to reinforce the competencies of the two courses through the application of their concepts in a real-case scenario involving principles of measuring, analysis of variable interaction, and observation of the environment. The students took a practical approach to scenarios explained in the classroom, including a service learning activity in favor of the community, and the motivation of the students produced a positive effect on the academic results that was not experienced, and not expected, in similar courses taught separately, given the typical performance in the instructors' courses taught independently. The differences between the mean of responses and the percent of positive responses in the student feedback survey, as well as the differences in the grade distribution review, are significant at the 0.05 level, which supports the effectiveness of the practice of curricular linking of mathematics courses to other disciplines with quantitative reasoning objectives. Theme: Educational Research Key Words: Learning Community, Environmental Immersions, STEM Education
1) INTRODUCTION There is a certain trend of advising students to take the mathematics requirements quite independently from science or related courses, due to the belief that students with poor expectations in quantitative reasoning may perform poorly in a semester with another science course or a course related to mathematics. The belief that the inclusive practice of the principles of a mathematics course, in a same-semester curricular arrangement with another discipline that uses quantitative reasoning in its arguments, may produce good results brought the idea to form the learning community of MAC1105 "College Algebra" with GEO2000 "Physical Geography" at the MDC Hialeah Campus in the fall semester 2008-01. The instructors usually teach the courses independently, and the match occurred when two AMATYC Institute projects were developed by them. Final Report, March 2008, Foundations for Success, National Mathematics Advisory Panel:
"Limitations in the ability to keep many things in mind (working memory) can hinder mathematics performance. Practice can offset this through automatic recall, which results in less information to keep in mind and frees attention for new aspects of material at hand. Learning is most effective when practice is combined with instruction on related concepts. Conceptual understanding promotes transfer of learning to new problems and better long-term retention."
http://www.ed.gov/MathPanel During two years, the Math Advisory Panel worked across the nation: a review of 16,000 research studies and related documents; public testimony gathered from 110 individuals; a review of written commentary from 160 organizations and individuals; 12 public meetings held around the country; and an analysis of survey results from 743 Algebra teachers. There is enough scientific evidence accumulated to lead to the previous point; just consider the research cited above, which occurred in our country during the last three years: Foundations for Success, National Mathematics Advisory Panel.
2) Methods This experimental study is intended: to increase student engagement in the instructional process by facilitating the association of the learning outcomes in different disciplines, reinforcing the common topics in competencies; to increase the motivation of students and the acquisition of quantitative reasoning skills, when the solution of an example in class has potential environmental, technical, and social extended impact in a real-life application; to improve the student success rate as well as the student pass rates in critical courses, through the multidisciplinary approach to the solution of problems and cooperative discussions; and to improve retention and enrollment in critical courses by effective multidisciplinary and college-wide coordinated interaction. The following factors produce interest in the experimental study: underprepared students in need of intensive instructional techniques produce a demand for motivational instructional engines, and, in order to investigate avenues to better serve the students, faculty at MDC research best practices in teaching and learning strategies. According to the MDC Mathematics Discipline Annual Report for the academic year 2007-08 (Fig 1), the performance of several math courses is affected, with pass rates below 60%. While MAC1105 "College Algebra" is not currently the worst-case scenario, it is not showing a consistent positive trend.
Fig 1. Pass Rate (%), MDC Mathematics Discipline 2007-08 Academic Annual Report. There is a certain trend of advising students not to take their mathematics requirements with science or other math-related courses, due to the belief that students with poor expectations in quantitative reasoning may perform poorly in a semester with another science course or a course related to mathematics. The idea to link a math course with a science course using quantitative reasoning became the foundation of our learning community, MAC1105 "College Algebra" with GEO2000 "Physical Geography", at MDC Hialeah Campus in fall 2008-01.
From this perspective, other authors studied the meta-reflective interactions in the problem space and the cognitive demands, as follows: Meta-reflective interactions in the problem space. "Cognitive Demands and Second-Language Learners: A Framework for Analyzing Mathematics Instructional Contexts", Campbell, et al., Mathematical Thinking & Learning, 2007, Vol. 9, Issue 1, p. 3-30.
The authors of this study participated in several previous projects related to Math Across the Curriculum, sponsored by the AMATYC Summer and Winter Institutes. The preparation of the exercises to extend the interaction of the quantitative reasoning competencies to the multidisciplinary learning community was outlined and constructed in such an institute. The framework to produce the learning community was possible with the support of the college-wide interaction, the collaborative work of the instructors, and the support of the Earth Ethics Institute. Analysis of the competencies: MAC1105 Competency 5: The student will demonstrate knowledge of functions from a numerical, graphical, verbal and analytic perspective. GEO2000 Competency 5:
The student will be able to analyze the regional concept by:
a. demonstrating knowledge of the area analysis tradition.
b. identifying the region's locations, spatial extent and boundaries. c. evaluating the factors that differentiate the regions as functional or formal. Note: GEO2000 Competencies 2, 3, and 4 show compatibility with MAC1105 Competency 5 as well.
3) Data Analysis 3.1) The study consisted of a learning community formed by MAC1105 ref # 481380 and GEO2000 ref # 500955 as an experimental group of common students, and control groups with independent instruction of the courses, MAC1105 ref # 481373 and GEO2000 ref # 343421. The action plan had a delay due to the complexity of the interaction needed to form the learning community with the targeted population, which conveniently came together last Fall 2008. The population under study was a learning community serving dual enrollment students from MATER Academy. The learning community includes the presence of both instructors in both disciplines. The experimental group was formed as described in the following figures 3 and 4. The consistency of the targeted population with the typical MDC Hialeah Campus student profile is remarkable.
Fig 3. Gender structure of the targeted population (bar chart of male and female counts for sophomores, juniors, and seniors).
Fig 4. Ethnicity structure of the target population (bar chart). 3.2) Assessment of the students' opinions The intentionality of the activity was declared to the students, who prepared their experiment logs. The conditions of the instructional time of the activity were a key factor. The students were assessed by a pre- and post-survey, showing the logical change of the population across the semester (Fig 5).
Fig 5. Gender structure of the pre- and post-survey (bar chart). A pre-post survey of the participating students' opinions was applied in compliance with the AMATYC MAC3 project. The survey includes twenty-one questions related to the students' perception of skills and/or learning outcomes. Those questions were answered in both the pre and post applications, as well as ten questions related to the understanding and gains or improvements during the course. Also, the evaluators recorded gender, age group, and ethnicity, keeping the responses private but recording part of the students' identification numbers, which allowed matching the pre and post surveys. 3.3) Summary of the AMATYC Survey According to a two-factor ANOVA at the 5% significance level, there is no significant difference among the questions. There is high significance among the responses to each question. This implies that the questions were not weighted, while the answers clearly show the students' opinions. 3.4) Teaching strategies During the fall replication, integrative instructional methods were applied to the students in the combined subjects.
The environmental immersion consisted of the collection of data on wind velocity, air humidity, temperature, and position, and an observational study of the sea grass.
Humidity | Wind velocity (mph) | Water temp. (F) | Air temp. (F) | Shore sand temp. (F) | Position in shore (ft) | Amount of life collected (units)
87 | 12 | 71 | 84 | 74 | 0 | 15
87 | 14 | 72 | 83 | 75 | 150 | 22
86 | 14 | 72 | 83 | 77 | 300 | 28
85 | 17 | 77 | 85 | 80 | 450 | 35
88 | 18 | 78 | 88 | 82 | 600 | 30
The definition of quantitative reasoning might not seem related to a course like GEO2000, but the reality of the situations that students encountered when they entered the environmental immersion is definitely a real application of the following definition: the application of mathematical concepts and skills to solve real-world problems. In order to perform effectively as professionals and citizens, students must become competent in reading and using quantitative evidence and applying basic quantitative skills to the solution of real-life problems (3).
3.5) Students Feedback Analysis A comparative analysis (t-test) of the average response and the positive responses was conducted between the experimental and the control groups. The results are significant at the 0.05 level, with p-values of 0.028 and 0.014, respectively, for the sample size of 27 students under analysis, which supports that the opinion of the students favors the application of the immersion.
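As an illustration of the kind of comparison described above (the actual survey responses are not reproduced in the paper), a two-sample t-test on average responses could be run as in the following sketch; the score arrays are placeholder values, and scipy.stats is assumed to be available.

```python
import numpy as np
from scipy import stats

# Placeholder survey scores (e.g., average response per student on a 1-5 scale);
# these are NOT the study's data, only an illustration of the procedure.
experimental = np.array([4.2, 4.5, 3.9, 4.8, 4.1, 4.6, 4.3, 4.7, 4.0, 4.4])
control = np.array([3.6, 3.9, 3.2, 4.0, 3.5, 3.8, 3.4, 3.7, 3.3, 3.9])

# Welch's two-sample t-test on the average responses
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"t = {t_stat:.3f}, p-value = {p_value:.4f}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```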
3.6) Analysis of the grades
Grade Distribution Review

Course | Reference # | Pass Rate % | Success Quotient % | Retention Rate %
MAC1105 (LC) | 481380 | 75 | 78 | 96
GEO2000 (LC) | 500955 | 100 | 100 | 100
MAC1105 (control) | 481373 | 61 | 80 | 76
GEO2000 (control) | 343421 | 92 | 92 | 100

Fig 6. Descriptive comparative Grade Distribution Review for the experimental and control groups.
Fig 7. Grade Distribution

Grade | MAC1105 w/o LC (Ref # 481373) | GEO2000 w/o LC (Ref # 343421) | MAC1105 w/ LC (Ref # 481380) | GEO2000 w/ LC (Ref # 500955)
A | 1 | 9 | 0 | 24
B | 3 | 2 | 4 | 1
C | 16 | 1 | 14 | 0
D | 2 | 0 | 5 | 0
F | 3 | 1 | 0 | 0
W | 8 | 0 | 1 | 0
W/I | 0 | 0 | 1 | 0
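The paper does not specify the procedure used to compare grade categories; one common choice for count data such as Fig 7 is a chi-square test of independence. The sketch below is therefore only an assumed approach, applied to the two MAC1105 columns, using scipy.stats.chi2_contingency.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Grade counts for MAC1105 without and with the learning community (Fig 7):
# rows = A, B, C, D, F, W, W/I; columns = Ref 481373 (no LC), Ref 481380 (LC)
observed = np.array([
    [1, 0],
    [3, 4],
    [16, 14],
    [2, 5],
    [3, 0],
    [8, 1],
    [0, 1],
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p-value = {p_value:.4f}")
# Several expected counts are small, so an exact test or grouping of sparse
# grade categories would be advisable in practice.
```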
The differences are significant, even between categories of grades, at the 0.05 level. The display of the general grade distribution shows the effect of the immersion on learning in the subject, compared with the case in which the subject is taught separately and without immersions. 4) Concluding Remarks: The study shows the influence of effective instructional interaction on quantitative reasoning skills, and this effect can be measured from both sides of the learning process, as follows:
IMPACT ON STUDENTS
- Decreased math anxiety
- Positive learning attitude
- Recognition of math in daily life
- Lifelong appreciation of math
- Positive experience in math courses
- Perception that math is relevant
- Development of quantitative reasoning

IMPACT ON INSTRUCTORS
- Collaboration and cooperation
- Enhancement of the departmental and campus vision
- Improved teaching strategies
- Generalization of conclusions
RECOMMENDATIONS:
- Introduce the method of test item analysis to consolidate the results of the opinion surveys.
- Replicate the study involving a sample in the IAC to extend the results, with double interactions of MAC1105 and MAT1033 together with GEO2000 and PSC1515.
- Continue incorporating the Earth Ethics Institute environmental immersions into courses with potential risk in student motivation.
REFERENCES:
1. Bestard, Rodriguez: "Course Syllabi for MAC1105 and GEO2000", MDC Hialeah Campus, Fall 2008.
2. Bestard, J.: "The T-link Project", AMATYC Summer Institute, Leavenworth, WA, Aug. 2006.
3. Diefenderfer, et al.: "Interdisciplinary Quantitative Reasoning", Hollins University, VA, 2000.
4. Rodriguez, Bestard: "The Global Warming Awareness and the Environmental Math Instruction at the MDC-Hialeah Campus", AMATYC Winter Institute, Miami Beach, FL, January 2007.
Using Beta-binomial Distribution in Analyzing Some Multiple-Choice Questions of the Final Exam of a Math Course, and its Application in Predicting the Performance of Future Students*
Dr. Mohammad Shakil Department of Mathematics Miami-Dade College Hialeah Campus 1780 West 49th St., Hialeah 33012 E-mail: mshakil@mdc.edu
Abstract
Creating valid and reliable classroom tests is very important to an instructor for assessing student performance, achievement, and success in the class. The same principle applies to the State Exit and Classroom Exams conducted by the instructors, the state, and other agencies. One powerful technique available to instructors for the guidance and improvement of instruction is test item analysis. This paper discusses the use of the beta-binomial distribution in analyzing some multiple-choice questions of the final exam of a math course, and its application in predicting the performance of future students. It is hoped that the findings of this paper will be useful for practitioners in various fields.
Key words: Binomial distribution; Beta distribution; Beta-binomial distribution; Goodness-of-fit; Predictive beta-binomial probabilities; Test item analysis.
2000 Mathematics Subject Classification: 97C30, 97C40, 97C80, 97C90, 97D40
*Part of this paper was presented on Conference Day, MDC, Kendall Campus, March 05, 2009.
1. Introduction: Creating valid and reliable classroom tests is very important to an instructor for assessing student performance, achievement, and success in the class. One powerful technique available to instructors for the guidance and improvement of instruction is test item analysis. If the probability of success parameter, p, of a binomial distribution has a beta distribution with shape parameters α > 0 and β > 0, the resulting distribution is known as a beta-binomial distribution. For a binomial distribution, p is assumed to be fixed for successive trials. For the beta-binomial distribution, the value of p changes for each trial. Many researchers have contributed to the theory of the beta-binomial distribution and its applications in various fields; among them Pearson (1925), Skellam (1948), Lord (1965), Greene (1970), Massy et al. (1970), Griffiths (1973), Williams (1975), Huynh (1979), Wilcox (1979), Smith (1983), Lee and Sabavala (1987), Hughes and Madden (1993), and Schuckers (2003) are notable. Since creating valid and reliable classroom tests is very important to an instructor for assessing student performance, achievement, and success in the class, this paper discusses the use of the beta-binomial distribution in analyzing some multiple-choice questions of the final exam of a math course, and its application in predicting the performance of future students. It is hoped that the findings of this paper will be useful for practitioners in various fields. The organization of this paper is as follows. Section 2 discusses some well-known distributions, namely the binomial and the beta. The beta-binomial distribution is discussed in Section 3. In Section 4, the beta-binomial distribution is used to analyze multiple-choice questions in a Math Final Exam, with application in predicting the performance of future students. Using the beta-binomial distribution, a diagnosis of some failure questions in the said exam is provided in Section 5. Some concluding remarks are presented in Section 6. 2. An Overview of Binomial and Beta Distributions: This section discusses some well-known distributions, namely the binomial and the beta. 2.1 Binomial Distribution: The binomial distribution is used when there are exactly two mutually exclusive outcomes of a trial. These outcomes are often called successes and failures. The binomial probability distribution gives the probability of obtaining x successes in n trials. It has the following probability mass function:
$$b(x; p, n) = \binom{n}{x} p^x (1-p)^{n-x}, \quad x = 0, 1, \ldots, n, \quad 0 \le p \le 1, \qquad (1)$$

where $p$ is the probability of a success on a single trial, $\binom{n}{x}$ is the combinatorial function of $n$ things taken $x$ at a time, and the mean and standard deviation are $np$ and $\sqrt{np(1-p)}$, respectively. For example, the following Figure 1 depicts the binomial probabilities of $x$ successes in $n = 30$ trials, when $p = 0.25$ is the probability of a success on a single trial.
Figure 1: The PDF of a binomial distribution: n = 30 trials, p = 0.25.

2.2 Beta Distribution: The beta distribution is a continuous distribution on the interval [0, 1], with shape parameters $\alpha > 0$ and $\beta > 0$. Letting $p$ have a beta distribution, its probability density function is given by

$$f(p \mid \alpha, \beta) = \frac{p^{\alpha - 1}(1-p)^{\beta - 1}}{B(\alpha, \beta)}, \quad 0 \le p \le 1, \ \alpha > 0, \ \beta > 0, \qquad (2)$$

where $B(\alpha, \beta) = \dfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)}$ denotes the complete beta function. Taking values on the interval [0, 1], the distribution is unimodal if $\alpha > 1$ and $\beta > 1$. If both $\alpha$ and $\beta$ are 1, then the beta distribution is equivalent to the continuous uniform distribution on that interval. If only one of these parameters is less than 1, then the distribution is J-shaped or reverse J-shaped. If both are less than 1, the distribution is U-shaped. The effects of various values of $\alpha$ and $\beta$ on the shape of the beta distribution are shown in Figure 2. The mean and variance of a beta random variable are given by

$$\mu = \frac{\alpha}{\alpha + \beta} \qquad \text{and} \qquad \sigma^2 = \frac{\alpha\beta}{(\alpha + \beta)^2(\alpha + \beta + 1)},$$

respectively.
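As a quick numerical check of the beta pdf moments given above, the following sketch (illustrative values of α and β, not taken from the paper) compares the closed-form mean and variance with the values returned by scipy.stats.beta.

```python
from scipy.stats import beta

a, b = 2.0, 5.0  # illustrative shape parameters (alpha, beta)

mean_formula = a / (a + b)
var_formula = a * b / ((a + b) ** 2 * (a + b + 1.0))

dist = beta(a, b)
print(f"formula: mean = {mean_formula:.4f}, variance = {var_formula:.4f}")
print(f"scipy:   mean = {dist.mean():.4f}, variance = {dist.var():.4f}")
```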
Figure 2: The PDF of the beta distribution for various values of α and β (note that A = α and B = β in the figure) (Source: http://www.itl.nist.gov/). 3. The Beta-Binomial Distribution: This section discusses the beta-binomial distribution. For the sake of completeness, the beta-binomial distribution is derived. If the probability of success
parameter, $p$, of a binomial distribution has a beta distribution with shape parameters $\alpha > 0$ and $\beta > 0$, the resulting distribution is known as a beta-binomial distribution. For a binomial distribution, $p$ is assumed to be fixed for successive trials. For the beta-binomial distribution, the value of $p$ changes for each trial. Suppose a continuous random variable $Y$ has a distribution with parameter $\theta$ and pdf $g(y \mid \theta)$. Let $h(\theta)$ be the prior pdf of $\theta$. Then the distribution associated with the marginal pdf of $Y$, that is,

$$k_1(y) = \int h(\theta)\, g(y \mid \theta)\, d\theta,$$

is called the predictive distribution because it provides the best description of the probability on $Y$. Accordingly, by Bayes' theorem, the conditional (that is, the posterior) pdf $k(\theta \mid y)$ of $\theta$, given $Y = y$, is given by

$$k(\theta \mid y) = \frac{g(y \mid \theta)\, h(\theta)}{k_1(y)}.$$

Note that the above formula can easily be generalized to more than one random variable. For a
nice discussion, please visit Hogg, et al. (2005). In what follows, using Bayes’ Rule, the derivation of beta-binomial distribution is given. For details, see, for example, Schuckers (2003), Lee (2004), and Hogg, et al. (2005, 2006), among others. 3.1 Derivation of Beta-Binomial Distribution: Suppose that there are m individual Test items and each of those individual test items is tested n times. Let X i n , pi ~ Bin n , pi , where Xi is the number of successes, and
n P( X xi ) pixi (1 pi ) n xi , i 1,2,3,..., n . xi
(3)
Supposing the prior pdf of each of the parameters $p_i$ in equation (3) to be the beta pdf (2), the joint pdf is given by:

$$f(\mathbf{x}, \mathbf{p} \mid \alpha, \beta, n) = f(\mathbf{x} \mid \mathbf{p}, n)\, f(\mathbf{p} \mid \alpha, \beta) = \prod_{i=1}^{m} \binom{n}{x_i} \frac{p_i^{x_i + \alpha - 1}(1 - p_i)^{n - x_i + \beta - 1}}{B(\alpha, \beta)}, \qquad (4)$$

where $\mathbf{p} = (p_1, p_2, \ldots, p_m)^{T}$ and $\mathbf{x} = (x_1, x_2, \ldots, x_m)^{T}$. It is evident from equation (4) that, in drawing inference from the beta-binomial probability model, the selection of the parameters α and β is crucial, since they define the overall probability of success. Thus, integrating equation (4), a joint beta-binomial distribution (or product beta-binomial distribution) is obtained as follows:

$$f(\mathbf{x} \mid \alpha, \beta, n) = \int f(\mathbf{x}, \mathbf{p} \mid \alpha, \beta, n)\, d\mathbf{p} = \int f(\mathbf{x} \mid \mathbf{p}, n)\, f(\mathbf{p} \mid \alpha, \beta)\, d\mathbf{p} = \prod_{i=1}^{m} \binom{n}{x_i} \frac{B(\alpha + x_i,\, \beta + n - x_i)}{B(\alpha, \beta)}, \quad x_i = 0, 1, 2, \ldots, n. \qquad (5)$$

Equation (5), denoted as $X_i \mid \alpha, \beta, n \sim \mathrm{Betabin}(\alpha, \beta, n)$, is called the predictive distribution because it provides the best description of the probabilities on $X_1, X_2, \ldots, X_m$. Taking m = 1 in equations (4) and (5), we easily get the following:

(i) The Predictive Beta-Binomial Distribution:

$$k_1(x) = \binom{n}{x} \frac{B(\alpha + x,\, \beta + n - x)}{B(\alpha, \beta)}, \quad x = 0, 1, 2, \ldots, n.$$
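A minimal sketch of the predictive beta-binomial pmf $k_1(x)$ above, written with log-gamma functions for numerical stability, is given below; the parameter values in the example are illustrative only.

# Sketch: predictive beta-binomial pmf k1(x) from equation (5) with m = 1.
from math import comb, exp, lgamma

def betaln(a, b):
    # log of the complete beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def betabinom_pmf(x, n, alpha, beta):
    # (n choose x) * B(alpha + x, beta + n - x) / B(alpha, beta)
    return comb(n, x) * exp(betaln(alpha + x, beta + n - x) - betaln(alpha, beta))

# Illustrative parameters (not the fitted values of Section 4):
print(sum(betabinom_pmf(x, 7, 2.0, 3.0) for x in range(8)))  # sums to 1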
(ii) The Posterior of the Binomial Distribution with Beta Priors:

$$k(p \mid x) = \frac{p^{x + \alpha - 1}(1 - p)^{n - x + \beta - 1}}{B(\alpha + x,\, \beta + n - x)}, \quad 0 \le p \le 1, \; x = 0, 1, 2, \ldots, n,$$

which is a beta pdf with parameters $\alpha + x$ and $\beta + n - x$. Clearly, the prior is conjugate, since both the posterior and the prior belong to the same class of distributions (that is, beta).

3.2 Mean and Variance: Equation (5), denoted as $X_i \mid \alpha, \beta, n \sim \mathrm{Betabin}(\alpha, \beta, n)$, is called the predictive distribution because it provides the best description of the probabilities on $X_1, X_2, \ldots, X_m$. Its mean and variance are given by

$$E(X_i) = n\pi \quad \text{and} \quad \mathrm{Var}(X_i) = n\pi(1 - \pi)\,[1 + (n - 1)C],$$

respectively, where $\pi = \frac{\alpha}{\alpha + \beta}$ and $C = \frac{1}{\alpha + \beta + 1}$. The beta-binomial distribution is also known as an extra-variation model, because it allows for greater variability among the $x_i$'s than the binomial distribution. The additional term, C, allows for variability beyond that found under the binomial model. Note that the variance of a binomial random variable is $np(1 - p)$.

4. Beta-Binomial Distribution Analysis of the Mathematics Exam Questions: Using the beta-binomial distribution, this section analyzes the multiple-choice questions of the final exam of a math course taught by me during the Fall 2007-1 term. For the analysis, the data obtained from the ParSCORE™ Item Analysis Report of the exam under question have been considered. The ParSCORE™ item analysis consists of three types of reports, that is, a summary of test statistics, a test frequency table, and item statistics. The test statistics summary and frequency table describe the distribution of test scores. The item analysis statistics evaluate class-wide performance on each test item. Some useful item analysis statistics are listed below; for the sake of completeness, the details of these are provided in Appendix A.
Item Difficulty
Item Discrimination
Distractor Analysis
Reliability
The test item statistics of the considered math final exam – version A are summarized in the following Tables 1 and 2. The exam consisted of 30 items. A group of 7 students took version A of the test; another group of 7 students took version B of the test. It appears from these statistical analyses that the large value of KR-20 = 0.90 for version B indicates its high reliability in comparison to version A, which is also substantiated by the large positive values of Mean DI = 0.450 > 0.30 and Mean Pt. Bisr. = 0.4223, the small value of the standard error of measurement (that is, SEM = 1.82), and an ideal value of the mean (that is, µ = 19.57 > 18, the passing score) for version B. For details on these, see Shakil (2008).
Table 1: Test Item Statistics of the Math Final Exam – Version A
Table 2: A Comparison of Exam Test Item Statistics – Versions A & B
4.1 Goodness-of-Fit of Binomial and Predictive Beta-Binomial Distributions: Maple 11 has been used for computing the data moments, estimating the parameters (by employing the method of moments), and for the chi-square test of goodness-of-fit. The data moments are computed as $m_1 = 0.5714267$ and $m_2 = 0.421756$. The observed, expected binomial and expected predictive beta-binomial frequencies of the performance of the questions (that is, successful questions) in the considered math exam (version A) data are provided in the following Table 3, along with a plot of the corresponding histogram given in Figure 3.

Table 3: Observed and Expected Binomial and Predictive Beta-Binomial Frequencies

x | Observed | Expected Binomial | Expected Beta-Binomial
0 | 1 | 7.97E-02 | 3.212274
1 | 5 | 0.743555 | 3.026111
2 | 2 | 2.974236 | 3.037186
3 | 4 | 6.609452 | 3.138957
4 | 6 | 8.812655 | 3.330805
5 | 2 | 7.050165 | 3.661291
6 | 5 | 3.133425 | 4.301067
7 | 5 | 0.596846 | 6.29231
Figure 3: Frequency Distributions of Successful Math Exam Questions
The estimation of the parameters and the chi-square goodness-of-fit test are provided in Tables 4 and 5, respectively.

Table 4: Parameter Estimates of the Binomial and Predictive Beta-Binomial Models for the Success of the Math Exam Question (Version A) Data

Model | Parameter Estimates
Binomial | p = 0.57143
Predictive Beta-Binomial | α = 0.8981194603, β = 0.6735948342

Table 5: Comparison Criteria (Chi-Square Test for Goodness-of-Fit)

Model | Test Statistic | Critical Value | p-value
Binomial | 74.458 | 14.06714058 | 0
Predictive Beta-Binomial | 6.673302289 | 14.06714058 | 0.4636692440
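The expected frequencies and chi-square statistics of Tables 3–5 can be reproduced, at least approximately, from the observed counts and the parameter estimates reported in Table 4; the sketch below assumes Python with scipy, which provides a beta-binomial distribution as scipy.stats.betabinom.

# Sketch: expected frequencies and chi-square statistics behind Tables 3-5.
from scipy.stats import binom, betabinom

obs = [1, 5, 2, 4, 6, 2, 5, 5]                     # observed counts (Table 3)
m, n = 30, 7                                       # 30 questions, 7 students each

exp_bin = [m * binom.pmf(x, n, 0.57143) for x in range(8)]
exp_bbd = [m * betabinom.pmf(x, n, 0.8981194603, 0.6735948342) for x in range(8)]

chi_bin = sum((o - e) ** 2 / e for o, e in zip(obs, exp_bin))
chi_bbd = sum((o - e) ** 2 / e for o, e in zip(obs, exp_bbd))
print(chi_bin, chi_bbd)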
From the chi-square goodness-of-fit test, we observe that the predictive beta-binomial model fits the successes of the considered Math Exam Questions data (Version A) reasonably well: it produces the highest p-value and therefore fits better than the binomial distribution. The same conclusion is supported by the histograms of the observed, expected binomial, and expected predictive beta-binomial frequencies of the successful Math Exam Questions data (Version A), plotted for the parameters estimated in Table 4 and given in Figures 4 – 6 below.
Figure 4: Binomial Probabilities of k Successes per Question
Figure 5: Fitting the Beta Posterior Distribution PDF to the Considered Math Exam Questions – Ver. A
Figure 6: Comparison of Prior and Posterior Beta Distributions
4.2 Data Analysis: This section discusses various data analyses of the considered Math Exam Question Success Data, which are presented in the following Tables 6, 7 and 8. The computations were done using Maple 11 and the R software.
Table 6: Summary Statistics of Different Posterior Beta-Binomial Distributions for the Considered Math Exam Question Success Data (Version A)
(Number of questions m = 30; number of trials (students) per question n = 7)

No. of Successes per Question, k | Point Estimate of Prior Beta Mean | Point Estimate of Posterior Beta-Binomial Mean | Posterior Beta-Binomial Median | Posterior Beta-Binomial Variance | 90 % C.I. Estimate of Posterior Beta-Binomial Mean
0 | 0.5714267 | 0.1047771 | 0.07506755 | 0.009799589202 | (0.004539054, 0.306892967)
1 | 0.5714267 | 0.2214399 | 0.19921830 | 0.01801184815 | (0.04340072, 0.47599671)
2 | 0.5714267 | 0.3381027 | 0.32500740 | 0.02338026884 | (0.1100912, 0.6112220)
3 | 0.5714267 | 0.4547654 | 0.45109090 | 0.02590485125 | (0.1958725, 0.7263161)
4 | 0.5714267 | 0.5714282 | 0.57722770 | 0.02558559538 | (0.2980277, 0.8248512)
5 | 0.5714267 | 0.6880910 | 0.70327590 | 0.02242250125 | (0.4171535, 0.9067242)
6 | 0.5714267 | 0.8047538 | 0.82891440 | 0.01641556884 | (0.558081, 0.968254)
7 | 0.5714267 | 0.9214166 | 0.95171830 | 0.007564798163 | (0.7406081, 0.9986905)
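The posterior summaries in Table 6 follow from conjugacy: if the prior is the fitted Beta(α, β) of Table 4, then after k successes in n = 7 trials the posterior is Beta(α + k, β + 7 − k). A sketch of the computation (Python with scipy assumed) is given below.

# Sketch: posterior Beta summaries of Table 6, assuming the prior Beta(a, b)
# uses the estimates of Table 4 and is updated with k successes out of n = 7.
from scipy.stats import beta

a, b = 0.8981194603, 0.6735948342
n = 7
for k in range(n + 1):
    post = beta(a + k, b + n - k)
    lo, hi = post.interval(0.90)
    print(k, post.mean(), post.median(), post.var(), (lo, hi))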
Table 7: Predictive Beta-Binomial Probabilities for a Future (Simulated) Sample of n = 20 Students who take the Same Math Exam (Version A)
(Number of questions m = 30; previous sample of n = 7 students per question; future sample of n = 20 students per question. The rows below are listed, in order, for k = 0, 1, 2, 3, 4, 5, 6, 7 successes in the previous sample of n = 7.)

Most Likely No. of Successes in the Future Sample of n = 20 | Predictive Beta-Binomial Probability
0 | 0.3146720
1 | 0.2119047
2 | 0.1483370
3 | 0.1048905
1 | 0.1164135
2 | 0.1299005
3 | 0.1283373
4 | 0.1178286
5 | 0.1026085
5 | 0.1023370551
6 | 0.1027100402
8 | 0.0938969431
9 | 0.0950385713
10 | 0.0918930055
11 | 0.0940656041
12 | 0.0960800571
13 | 0.0936068326
14 | 0.1029871
15 | 0.1068208
16 | 0.1045330
16 | 0.1144824
17 | 0.1319770
18 | 0.1430934
19 | 0.1402707
20 | 0.1085313
18 | 0.1301244
19 | 0.2119595
20 | 0.4232004
Table 8: Predictive Beta-Binomial Probability of At Least 18 Successes out of a Future (Simulated) Sample of n = 20 Students who take the Same Math Exam (Version A)
(Number of questions m = 30; future sample of n = 20 students per question)

No. of Successes k in the Previous Sample of n = 7 | Predictive Beta-Binomial Probability of at least 18 successes out of the Future Sample of n = 20
0 | 0.00001583399
1 | 0.0002678766
2 | 0.002198643
3 | 0.01199997
4 | 0.0487983
5 | 0.1553201
6 | 0.3918954
7 | 0.7652843
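The entries of Table 8 can be obtained from the same posterior Beta distributions by summing the predictive beta-binomial probabilities over 18, 19, and 20 future successes; a sketch (Python with scipy assumed, prior taken from Table 4) follows.

# Sketch: predictive probability of at least 18 successes out of a future
# sample of n = 20, given k successes in the previous sample of 7 (Table 8).
from scipy.stats import betabinom

a, b = 0.8981194603, 0.6735948342          # prior parameters (Table 4)
for k in range(8):
    post_a, post_b = a + k, b + (7 - k)    # posterior after the first sample
    print(k, betabinom.sf(17, 20, post_a, post_b))   # P(X >= 18)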
5. Diagnosis of Failure Questions of the Said Math Exam and Some Recommendations: Using the predictive beta-binomial probabilities, this section discusses the diagnosis of some failure questions of the considered Math Exam – Version A, that is, questions having a success rate < 60 %. Some recommendations are also given based on this analysis.

(i) The number of failure questions (that is, questions having a success rate < 60 %) is n = 18.

(ii) The number of failure questions with point biserial $r_{pbis} \le -0.46$ is k = 3.

(iii) Suppose p denotes the probability of failure in the 18 failure items due to $r_{pbis} \le 0$, a low positive value of $r_{pbis}$, or poor construction of the exam questions.

(iv) After analyzing the said Math Exam Questions (Version A) item analysis data, it is found that the number of failure questions, out of the total number of exam questions m = 30, having point biserial $r_{pbis} \le -0.46$ is k = 3. Thus, about 10 % (that is, p = 0.10) of the total number of exam questions m = 30 in Version A are failure questions with point biserial $r_{pbis} \le -0.46$. Since 0 ≤ p ≤ 1, letting p have a Beta distribution with shape parameters α > 0 and β > 0, we have

$$\frac{\alpha}{\alpha + \beta} = 0.10. \qquad (6)$$

Further, if we consider the first two failure questions among the above 18 failure exam questions, and suppose that one of these two failure questions has point biserial $r_{pbis} \le -0.46$, then the probability of the second exam question failing with point biserial $r_{pbis} \le -0.46$ is increased to about 90 % (that is, p = 0.90). Consequently, using the following formula for the posterior mean of the beta-binomial distribution,

$$\tilde{p}_{betabinom} = \frac{\alpha + k}{\alpha + \beta + n}, \quad k = 1, 2, 3, \ldots, n,$$

the posterior estimate of p with one failure after one trial is given by

$$\frac{\alpha + 1}{\alpha + \beta + 1} = 0.90. \qquad (7)$$

Solving equations (6) and (7) by using Maple 11, the values of the parameters α and β are obtained as follows:

$$\alpha = 0.0125000000 > 0 \quad \text{and} \quad \beta = 0.1125000000 > 0.$$

(v) Now, updating the posterior probabilities with 3 successes out of 3 trials, 4 successes out of 4 trials, and so on, for the remaining 15 failure items due to $r_{pbis} \le 0$, a low positive value of $r_{pbis}$, or poor construction of the exam questions, the posterior estimates of p are provided in the following Table 9:
Table 9: Diagnosis of the Failure Questions in the said Math Exam (Version A) using the Predictive Beta-Binomial Probabilities

Number of Trials k | Posterior Beta-Binomial Estimate of p, $\tilde{p}_{betabinom} = \frac{\alpha + k}{\alpha + \beta + k}$, k = 3, ..., 17
3 | 0.9640000000
4 | 0.9727272727
5 | 0.9780487805
6 | 0.9816326531
7 | 0.9842105263
8 | 0.9861538462
9 | 0.9876712329
10 | 0.9888888889
11 | 0.9898876404
12 | 0.9907216495
13 | 0.9914285714
14 | 0.9920353982
15 | 0.9925619835
16 | 0.9930232558
17 | 0.9934306569
(vi) Recommendation: Using the updated posterior probabilities from the above Table 9 after the 3rd failure, then the 4th one, and so on, we have the following product:

$$\prod_{k=3}^{17} \frac{0.0125000000 + k}{0.0125000000 + 0.1125000000 + k} = 0.8060295953.$$

Thus, it is observed from the above analysis that the probability that all the remaining failure questions have a poor point biserial is about 80.60 %, which, I believe, is the value needed in making our decision to revise the considered Math Exam Questions (Version A).
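The posterior estimates of Table 9 and the product used in the recommendation are elementary to reproduce once α = 0.0125 and β = 0.1125 are known; a minimal sketch is:

# Sketch: Table 9 estimates (alpha + k)/(alpha + beta + k), k = 3, ..., 17,
# and their product, with alpha and beta obtained from equations (6) and (7).
alpha, beta = 0.0125, 0.1125

estimates = [(alpha + k) / (alpha + beta + k) for k in range(3, 18)]
product = 1.0
for p in estimates:
    product *= p
print(estimates[0])   # 0.9640... for k = 3, as in Table 9
print(product)        # about 0.8060, as in the recommendation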
6. Concluding Remarks: This paper has discussed the beta-binomial distribution and its use in analyzing multiple-choice questions, with application to assessing students' performance on questions of the state exit exams and to creating valid and reliable classroom tests. It is hoped that the present study will be helpful in recognizing the most critical pieces of the state exit test item data, and in evaluating whether or not a test item needs revision, by taking different sample data for the considered math exam and applying the said technique to analyze these data. The methods discussed in this project can be used to describe the relevance of test item analysis to classroom tests. These procedures can also be used or modified to measure, describe and improve tests or surveys such as college mathematics placement exams (that is, CPT), mathematics study skills, attitude surveys, test anxiety, information literacy, other general education learning outcomes, etc. It is hoped that the findings of this paper will be useful for practitioners in various fields.

References

Albert, J. (2007). Bayesian Computation with R. Springer, USA.
Green, J. D. (1970). Personal Media Probabilities. Journal of Advertising Research 10, 12-18.
Griffiths, D. A. (1973). Maximum Likelihood Estimation for the Beta Binomial Distribution and an Application to the Household distribution of the Total number of Cases of a Disease. Biometrics 29, 637-648.
Hogg, R. V., McKean, J. W., and Craig, A. T. (2005). Introduction to Mathematical Statistics. PrenticeHall, USA.
Hogg, R. V., and Tanis, E. (2006). Probability and Statistical Inference. Prentice-Hall, USA.
Hughes, G., and Madden, L.V. (1993). Using the beta-binomial distribution to describe aggregated patterns of disease incidence. Phytopathology 83, 759-763.
Huynh, H. (1979). Statistical Inference for Two Reliability Indices in Mastery Testing Based on the Beta Binomial Model. Journal of Educational Statistics 4, 231-246.
Karian, Z. A., and Tanis, E. A. (1999). Probability and Statistics-Explorations with MAPLE. 3rd Edition, Prentice- Hall, USA.
Lee, J. C., and Sabavala, D. J. (1987). Bayesian Estimation and Prediction for the Beta-Binomial Model. Journal of Business & Economic Statistics 5, 357-367.
Lee, P. M. (2004). Bayesian Statistics – An Introduction, 3rd Edition. Oxford University Press, USA.
Lord, F. M. (1965). A Strong True-Score theory, with Applications. Psychometrika 30, 234-270.
Massy, W. F., Montgomery, D. B., and Morrison, D. G. (1970). Stochastic Models of Buying Behavior. MIT Press, Cambridge, MA.

Pearson, E. S. (1925). Bayes' Theorem in the Light of Experimental Sampling. Biometrika 17, 388-442.

Schuckers, M. E. (2003). Using The Beta-binomial Distribution To Assess Performance Of A Biometric Identification Device. International Journal of Image and Graphics 3(3), 523-529.
Shakil, M. (2008). Assessing Student Performance Using Test Item Analysis and its Relevance to the State Exit Final Exams of MAT0024 Classes – An Action Research Project. Polygon, Vol. II, Spring 2008.

Skellam, J. G. (1948). A Probability Distribution Derived from the Binomial Distribution by Regarding the Probability of Success as Variable between the Sets of Trials. Journal of the Royal Statistical Society Ser. B 10, 257-261.
Smith, D. M. (1983). Maximum Likelihood estimation of the parameters of the beta binomial distribution. Appl. Stat. 32, 192-204.
Wilcox, R. R. (1979). Estimating the Parameters of the Beta Binomial Distribution. Educational and Psychological Measurement 39, 527-535.
Williams, D. A. (1975). The Analysis of Binary Responses from Toxicological Experiments Involving Reproduction and Teratogenicity. Biometrics 31, 949-952.
Appendix A
Review of Some Useful Item Analysis Statistics: An item analysis involves many statistics that can provide useful information for determining the validity and improving the quality and accuracy of multiple-choice or true/false items. These statistics are used to measure the ability levels of examinees from their responses to each item. The ParSCORETM item analysis generated by Miami Dade College – Hialeah Campus Reading Lab when a Multiple-Choice Exam is machine scored consists of three types of reports, that is, a summary of test statistics, a test frequency table, and item statistics. The test statistics summary and frequency table describe the distribution of test scores. The item analysis statistics evaluate class-wide performance on each test item. The ParSCORE TM report on item analysis statistics gives an overall view of the test results and evaluates each test item, which are also useful in comparing the item analysis for different test forms. In what follows, descriptions of some useful, common item analysis statistics, that is, item difficulty, item discrimination, distractor analysis, and reliability, are presented
below. For the sake of completeness, definitions of some test statistics as reported in the ParSCORETM analysis are also provided.
(I) Item Difficulty: Item difficulty is a measure of the difficulty of an item. For items (that is, multiple-choice questions) with one correct alternative worth a single point, the item difficulty (also known as the item difficulty index, or the difficulty level index, or the difficulty factor, or the item facility index, or the item easiness index, or the p-value) is defined as the proportion of respondents (examinees) selecting the answer to the item correctly, and is given by

$$p = \frac{c}{n},$$

where p = the difficulty factor, c = the number of respondents selecting the correct answer to an item, and n = the total number of respondents. Item difficulty is relevant for determining whether students have learned the concept being tested. It also plays an important role in the ability of an item to discriminate between students who know the tested material and those who do not. Note that

(i) $0 \le p \le 1$.
(ii) A higher value of p indicates a lower difficulty level, that is, the item is easy. A lower value of p indicates a higher difficulty level, that is, the item is difficult. In general, an ideal test should have an overall item difficulty of around 0.5; however, it is acceptable for individual items to have higher or lower facility (ranging from 0.2 to 0.8). In a criterion-referenced test (CRT), with emphasis on mastery-testing of the topics covered, the optimal value of p for many items is expected to be 0.90 or above. On the other hand, in a norm-referenced test (NRT), with emphasis on discriminating between different levels of achievement, it is given by p = 0.50.
(iii) To maximize item discrimination, the ideal (or moderate, or desirable) item difficulty level, denoted as $p_M$, is defined as a point midway between the probability of success, denoted as $p_S$, of answering the multiple-choice item correctly (that is, 1.00 divided by the number of choices) and a perfect score (that is, 1.00) for the item, and is given by

$$p_M = p_S + \frac{1 - p_S}{2}.$$

(iv) Thus, using the above formula in (iii), ideal (or moderate, or desirable) item difficulty levels for multiple-choice items can be easily calculated; these are provided in the following table.
Number of Alternatives | Probability of Success ($p_S$) | Ideal Item Difficulty Level ($p_M$)
2 | 0.50 | 0.75
3 | 0.33 | 0.67
4 | 0.25 | 0.63
5 | 0.20 | 0.60
(Ia) Mean Item Difficulty (or Mean Item Easiness): Mean item difficulty is the average difficulty (or easiness) of all test items. It is an overall measure of the test difficulty and ideally ranges between 60 % and 80 % (that is, $0.60 \le \bar{p} \le 0.80$) for classroom achievement tests. Lower numbers indicate a difficult test, while higher numbers indicate an easy test.
(II) Item Discrimination: The item discrimination (or the item discrimination index) is a basic measure of the validity of an item. It is defined as the discriminating power, or the degree of an item's ability to discriminate (or differentiate) between high achievers (that is, those who scored high on the total test) and low achievers (that is, those who scored low), which are determined on the same criterion, that is, (1) an internal criterion, for example, the test itself; and (2) an external criterion, for example, an intelligence test or other achievement test. Further, the computation of the item discrimination index assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right or wrong dichotomy of a student's performance on an item. There are several ways to compute the item discrimination, but, as shown on the ParSCORE™ item analysis report and as reported in the literature, the following formulas are the most commonly used indicators of an item's discrimination effectiveness.

(a) Item Discrimination Index (or Item Discriminating Power, or D-Statistic), D: Let the students' test scores be rank-ordered from lowest to highest. Let

$$p_U = \frac{\text{No. of students in the upper 25 % – 30 % group answering the item correctly}}{\text{Total number of students in the upper 25 % – 30 % group}}$$

and

$$p_L = \frac{\text{No. of students in the lower 25 % – 30 % group answering the item correctly}}{\text{Total number of students in the lower 25 % – 30 % group}}.$$

The ParSCORE™ item analysis report considers the upper 27 % and the lower 27 % as the analysis groups. The item discrimination index, D, is given by

$$D = p_U - p_L.$$

Note that
(i) $-1 \le D \le 1$.

(ii) Items with positive values of D are known as positively discriminating items, and those with negative values of D are known as negatively discriminating items.

(iii) If D = 0, that is, $p_U = p_L$, there is no discrimination between the upper and lower groups.

(iv) If D = 1.00, that is, $p_U = 1.00$ and $p_L = 0$, there is a perfect discrimination between the two groups.

(v) If D = −1.00, that is, $p_U = 0$ and $p_L = 1.00$, it means that all members of the lower group answered the item correctly and all members of the upper group answered the item incorrectly. This indicates the invalidity of the item, that is, the item has been miskeyed and needs to be rewritten or eliminated.

(vi) A guideline for the value of an item discrimination index is provided in the following table.

Item Discrimination Index, D | Quality of an Item
D ≥ 0.50 | Very Good Item; Definitely Retain
0.40 ≤ D ≤ 0.49 | Good Item; Very Usable
0.30 ≤ D ≤ 0.39 | Fair Quality; Usable Item
0.20 ≤ D ≤ 0.29 | Potentially Poor Item; Consider Revising
D < 0.20 | Potentially Very Poor; Possibly Revise Substantially, or Discard
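A sketch of how D might be computed from raw scores is given below; the 27 % grouping follows the ParSCORE convention mentioned above, and the data are hypothetical.

# Sketch: item discrimination index D = pU - pL for one item, using
# hypothetical scores and 27 % upper/lower groups.
def discrimination_index(item_correct, total_scores, frac=0.27):
    # item_correct[i] is 1 if examinee i answered the item correctly, else 0
    order = sorted(range(len(total_scores)), key=lambda i: total_scores[i])
    g = max(1, round(frac * len(total_scores)))
    lower, upper = order[:g], order[-g:]
    p_u = sum(item_correct[i] for i in upper) / len(upper)
    p_l = sum(item_correct[i] for i in lower) / len(lower)
    return p_u - p_l

totals  = [12, 15, 9, 20, 18, 7, 14, 22, 10, 16]   # hypothetical total scores
correct = [0, 1, 0, 1, 1, 0, 1, 1, 0, 1]           # hypothetical item results
print(discrimination_index(correct, totals))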
(b) Mean Item Discrimination Index, $\bar{D}$:
This is the average discrimination index for all test items combined. A large positive value (above 0.30) indicates good discrimination between the upper and lower scoring students. Tests that do not discriminate well are generally not very reliable and should be reviewed.
(c) Point-Biserial Correlation (or Item-Total Correlation, or Item Discrimination) Coefficient, $r_{pbis}$: The point-biserial correlation coefficient is another item discrimination index for assessing the usefulness (or validity) of an item as a measure of individual differences in knowledge, skill, ability, attitude, or personality characteristic. It is defined as the correlation between the student performance on an item (correct or incorrect) and the overall test score, and is given by either of the following two equations (which are mathematically equivalent).

(a)

$$r_{pbis} = \frac{\bar{X}_C - \bar{X}_T}{s}\sqrt{\frac{p}{q}},$$

where $r_{pbis}$ = the point-biserial correlation coefficient; $\bar{X}_C$ = the mean total score for examinees who have answered the item correctly; $\bar{X}_T$ = the mean total score for all examinees; p = the difficulty value of the item; q = 1 − p; and s = the standard deviation of the total exam scores.

(b)

$$r_{pbis} = \frac{m_p - m_q}{s}\sqrt{p\,q},$$

where $r_{pbis}$ = the point-biserial correlation coefficient; $m_p$ = the mean total score for examinees who have answered the item correctly; $m_q$ = the mean total score for examinees who have answered the item incorrectly; p = the difficulty value of the item; q = 1 − p; and s = the standard deviation of the total exam scores.
Note that
(i) The interpretation of the point-biserial correlation coefficient, $r_{pbis}$, is the same as that of the D-statistic.

(ii) It assumes that the distribution of test scores is normal and that there is a normal distribution underlying the right or wrong dichotomy of a student's performance on an item.

(iii) It is mathematically equivalent to the Pearson (product moment) correlation coefficient, which can be shown by assigning two distinct numerical values to the dichotomous variable (test item), that is, incorrect = 0 and correct = 1.

(iv) $-1 \le r_{pbis} \le 1$.

(v) $r_{pbis} \approx 0$ means little correlation between the score on the item and the score on the test.

(vi) A high positive value of $r_{pbis}$ indicates that the examinees who answered the item correctly also received higher scores on the test than those examinees who answered the item incorrectly. A negative value indicates that the examinees who answered the item correctly received low scores on the test and those examinees who answered the item incorrectly did better on the test.

(vii) It is advisable that an item with $r_{pbis} \le 0$ or with a large negative value of $r_{pbis}$ should be eliminated or revised. Also, an item with a low positive value of $r_{pbis}$ should be revised for improvement.

(viii) Generally, the value of $r_{pbis}$ for an item may be put into two categories, as provided in the following table.

Point-Biserial Correlation Coefficient, $r_{pbis}$ | Quality
$r_{pbis} \ge 0.30$ | Acceptable Range
$r_{pbis} = 1$ | Ideal Value

(ix) The statistical significance of the point-biserial correlation coefficient, $r_{pbis}$, may be determined by applying the Student's t test.
Remark: It should be noted that the use of the point-biserial correlation coefficient, $r_{pbis}$, is more advantageous than that of the item discrimination index statistic, D, because every student taking the test is taken into consideration in the computation of $r_{pbis}$, whereas only 54 % of the test-takers (that is, the upper 27 % and the lower 27 % groups) are used to compute D.
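Formula (a) above translates directly into code; the sketch below uses hypothetical data and the population standard deviation, so that the result coincides exactly with the Pearson correlation between the item and the total score.

# Sketch: point-biserial correlation for one item via formula (a),
# with hypothetical data.
from statistics import mean, pstdev

def point_biserial(item, totals):
    # item[i] = 1 if examinee i answered the item correctly, else 0
    p = mean(item)
    q = 1 - p
    s = pstdev(totals)                    # population SD of total scores
    mc = mean([t for t, c in zip(totals, item) if c == 1])
    mt = mean(totals)
    return (mc - mt) / s * (p / q) ** 0.5

totals = [12, 15, 9, 20, 18, 7, 14, 22, 10, 16]
item   = [0, 1, 0, 1, 1, 0, 1, 1, 0, 1]
print(point_biserial(item, totals))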
(d) Mean Item-Total Correlation Coefficient, $\bar{r}_{pbis}$: It is defined as the average correlation of all the test items with the total score. It is a measure of overall test discrimination. A large positive value indicates good discrimination between students.
(III) Internal Consistency Reliability Coefficient (Kuder-Richardson 20, $KR_{20}$, Reliability Estimate): The statistic that measures the test reliability of inter-item consistency, that is, how well the test items are correlated with one another, is called the internal consistency reliability coefficient of the test. For a test having multiple-choice items that are scored correct or incorrect, and that is administered only once, the Kuder-Richardson formula 20 (also known as KR-20) is used to measure the internal consistency reliability of the test scores. The KR-20 is also reported in the ParSCORE™ item analysis. It is given by the following formula:

$$KR_{20} = \frac{n}{n - 1}\left(1 - \frac{\sum_{i=1}^{n} p_i q_i}{s^{2}}\right),$$

where $KR_{20}$ = the reliability index for the total test; n = the number of items in the test; $s^{2}$ = the variance of the test scores; $p_i$ = the difficulty value of item i; and $q_i = 1 - p_i$. Note that
(i) $0.0 \le KR_{20} \le 1.0$.

(ii) $KR_{20} \approx 0$ indicates a weaker relationship between test items, that is, the overall test score is less reliable. A large value of $KR_{20}$ indicates high reliability.

(iii) Generally, the value of $KR_{20}$ for a test may be put into the following categories, as provided in the table below.

$KR_{20}$ | Quality
$KR_{20} \ge 0.60$ | Acceptable Range
$KR_{20} \ge 0.75$ | Desirable
$0.80 \le KR_{20} \le 0.85$ | Better
$KR_{20} = 1$ | Ideal Value
(iv) Remarks: The reliability of a test can be improved as follows:

a) By increasing the number of items in the test, for which the following Spearman-Brown prophecy formula is used:

$$r_{est} = \frac{n\,r}{1 + (n - 1)\,r},$$

where $r_{est}$ = the estimated new reliability coefficient; r = the original $KR_{20}$ reliability coefficient; and n = the number of times the test is lengthened.
b) Or, using the items that have high discrimination values in the test. c) Or, performing an item-total statistic analysis as described above.
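As an illustration of KR-20 and the Spearman-Brown adjustment, the sketch below scores a small hypothetical 0/1 item matrix (rows = examinees, columns = items) and then estimates the reliability of a test twice as long.

# Sketch: KR-20 from a 0/1 item-score matrix, plus the Spearman-Brown
# prophecy formula for a test lengthened n-fold (hypothetical data).
def kr20(matrix):
    n_items = len(matrix[0])
    totals = [sum(row) for row in matrix]
    mean_t = sum(totals) / len(totals)
    var_t = sum((t - mean_t) ** 2 for t in totals) / (len(totals) - 1)
    sum_pq = 0.0
    for j in range(n_items):
        p = sum(row[j] for row in matrix) / len(matrix)
        sum_pq += p * (1 - p)
    return (n_items / (n_items - 1)) * (1 - sum_pq / var_t)

def spearman_brown(r, n):
    return n * r / (1 + (n - 1) * r)

scores = [[1, 1, 0, 1], [1, 0, 0, 1], [0, 0, 0, 1], [1, 1, 1, 1], [0, 1, 0, 0]]
r = kr20(scores)
print(r, spearman_brown(r, 2))   # reliability now, and if the test were doubled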
(IV) Standard Error of Measurement ($SE_m$): This is another important component of test item analysis used to measure the internal consistency reliability of a test. It is given by the following formula:

$$SE_m = s\sqrt{1 - KR_{20}}, \quad 0.0 \le KR_{20} \le 1.0,$$

where $SE_m$ = the standard error of measurement; s = the standard deviation of the test scores; and $KR_{20}$ = the reliability coefficient for the total test.
Note that

(i) $SE_m = 0$ when $KR_{20} = 1$.

(ii) $SE_m = s$ when $KR_{20} = 0$.

(iii) A small value of $SE_m$ (e.g., ≤ 3) indicates high reliability, whereas a large value of $SE_m$ indicates low reliability.

(iv) Remark: A higher reliability coefficient (i.e., $KR_{20}$ close to 1) and a smaller standard deviation for a test indicate a smaller standard error of measurement. This is considered to be a more desirable situation for classroom tests.

(V) Test Item Distractor Analysis: This is an important and useful component of test item analysis. A test item distractor is defined as an incorrect response option in a multiple-choice test item. According to the research, there is a relationship between the quality of the distractors in a test item and the student performance on the test item, which also affects the student's performance on his/her total test score. The performance of these incorrect item response options can be determined through the test item distractor analysis frequency table, which contains the frequency, or number of students, that selected each incorrect option. The test item distractor analysis is also provided in the ParSCORE™ item analysis report. A general guideline for the item distractor analysis is provided in the following table:
Item Response Options | Item Difficulty p | Item Discrimination Index D or $r_{pbis}$
Correct Response | 0.35 ≤ p ≤ 0.85 (Better) | D ≥ 0.30 or $r_{pbis} \ge 0.30$ (Better)
Distractors | p ≥ 0.02 (Better) | D ≤ 0 or $r_{pbis} \le 0$ (Better)
(v) Mean: The mean is a measure of central tendency and gives the average test score of a sample of respondents (examinees). It is given by

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n},$$

where $x_i$ = an individual test score and n = the number of respondents.

(vi) Median: If all scores are ranked from lowest to highest, the median is the middle score. Half of the scores will be lower than the median. The median is also known as the 50th percentile or the 2nd quartile.

(vii) Range of Scores: The range is defined as the difference between the highest and lowest test scores. It is a basic measure of variability.

(viii) Standard Deviation: For a sample of n examinees, the standard deviation, denoted by s, of the test scores is given by the following equation:
$$s = \sqrt{\frac{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}{n - 1}},$$

where $x_i$ = an individual test score and $\bar{x}$ = the average test score. The standard deviation is a measure of variability, or the spread of the score distribution; it measures how far the scores deviate from the mean. If the scores are grouped closely together, the test will have a small standard deviation. A test with a large value of the standard deviation is considered better at discriminating between student performance levels.

(ix) Variance: For a sample of n examinees, the variance, denoted by $s^{2}$, of the test scores is defined as the square of the standard deviation, and is given by the following equation:

$$s^{2} = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^{2}}{n - 1}.$$
(x) Skewness: For a sample of n examinees, the skewness, denoted by $\alpha_3$, of the distribution of the test scores is given by the following equation:

$$\alpha_3 = \frac{n}{(n - 1)(n - 2)} \sum_{i=1}^{n} \left(\frac{x_i - \bar{x}}{s}\right)^{3},$$

where $x_i$ = an individual test score, $\bar{x}$ = the average test score, and s = the standard deviation of the test scores. It measures the lack of symmetry of the distribution. The skewness is 0 for a symmetric distribution, and is negative or positive depending on whether the distribution is negatively skewed (has a longer left tail) or positively skewed (has a longer right tail).
(xi) Kurtosis: For a sample of n examinees, the kurtosis, denoted by $\alpha_4$, of the distribution of the test scores is given by the following equation:

$$\alpha_4 = \frac{n(n + 1)}{(n - 1)(n - 2)(n - 3)} \sum_{i=1}^{n} \left(\frac{x_i - \bar{x}}{s}\right)^{4} - \frac{3(n - 1)^{2}}{(n - 2)(n - 3)},$$

where $x_i$ = an individual test score, $\bar{x}$ = the average test score, and s = the standard deviation of the test scores. It measures the tail-heaviness (the amount of probability in the tails). With the correction term above, this measure is approximately 0 for the normal distribution; a distribution is heavier tailed or lighter tailed depending on whether the value is positive or negative.
The Magic Math Written by David Tseng Graphed by Nancy Liu
Abstract: The content of this article provides quick and easy methods to make calculations involving real numbers. These tricks involve the Chinese chopsticks method of multiplication, the multiplication table of 9, the squares of 5, multiplication without a calculator, and the Pythagorean Theorem. These tricks should be very interesting for students taking mathematics classes such as MAT0002 (basic arithmetic), MAT0020 (elementary algebra), MAT1033 (intermediate algebra), MAC1114 (trigonometry), and MAC1147 (pre-calculus), as they prove to be fast, reliable and time-saving.
Introduction: In order to inspire students that math is not boring but instead can be very interesting, this article demonstrates a few examples to show how math can be beautiful and intellectually challenging.
Chinese chopsticks method for the multiplication of numbers without a table of multiplication

Once upon a time there was a Chinese farmer who never went to school; therefore he did not learn how to use his abacus. To sell his goods in the market, he used basic counting of chopsticks. To help him know how much 23 x 12 is, we are going to use Chinese chopsticks to solve the multiplication.

1) Regular case: 23 x 12 = ? In this method, each digit forms a group, and each group contains a number of chopsticks equal to the digit it represents. Let's call group one A1 = 2 chopsticks, group two A2 = 3 chopsticks, group three A3 = 1 chopstick, and finally group four A4 = 2 chopsticks. The groups are arranged in the following order: A1 at the top of the page, A2 at the bottom, A3 to the left, and A4 to the right. Each group has to be laid out in such a way that the chopsticks from groups A1 and A2 can be distinguished as belonging to two different groups; A3 and A4 should be placed over A1 and A2 at a 90° angle, but also clearly distinguishable. Because of the way they are placed, A3 and A4 must have points of intersection with A1 and A2; the entire method relies on these intersection points, and throughout this article we will designate them with the + symbol. A1 + A3 is in the top left corner, A1 + A4 in the top right corner, A2 + A3 in the bottom left corner, and A2 + A4 in the bottom right corner. Each intersection has several points of contact. For example, if A1 and A4 are put one on top of the other at a 90 degree angle, they will have
four points of contact, as A1 has 2 chopsticks and A4 also has 2 chopsticks. In that same spirit, A1 + A3 has 2 points of contact, A2 + A3 has 3, and A2 + A4 has 6. However, each intersection is not counted as a single entity. A2 + A3 and A1 + A4, which are in the same diagonal, must be counted together; their total number of intersection points will thus be 7. We finally have the result for our multiplication: the three single-digit numbers 2, 6 and 7. However, we should arrange them accordingly to find the final answer. The order is simple: first we write the result from A1 + A3, which is 2, then the result for the diagonal A2 + A3 and A1 + A4, which is 7, and then the result from A2 + A4, which is 6. The result is thus 276! It can actually be read from the indicated “bubbles” in the graph from left to right. 23 x 12 = 276
2) Digit carry-over case: What happens if the numbers we have to multiply produce intersections whose number of touching points has two digits? To solve this, we use the digit carry-over method. Let's multiply 24 x 31 = ?
This time we will use B1 = 2 chopsticks, B2 = 4 chopsticks, B3 = 3 chopsticks and B4 = 1 chopstick. We arrange them the same way as we did previously. However, this time we count the intersection points in a specific order. We should start with the bottom right corner, or the B2 + B4 intersection, followed by the diagonal formed by B1 + B4 and B2 + B3, and then finish counting with the top left corner, or B1 + B3 intersection. From this we obtain that B2 + B4 has 4 points of contact, and the diagonal B1 + B4 and B2 + B3 has 14 points of contact. Here we pause a moment to explain the necessity of the digit carry-over case. Since each intersection should give a single-digit number of points of contact, from the number 14 we take the four as the result for the intersection, and the remaining 1 is carried over and added to the count of the next intersection, in this case the upper left corner, or B1 + B3 intersection. Therefore we no longer have 6 as the result of that intersection, as we normally would, but 7. We thus have the result for our multiplication: 7, 4 and 4, which is correct. 24 x 31 = 744
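For readers who prefer to see the procedure in code, the sketch below mimics the chopstick method for two two-digit numbers: it counts the corner products, counts the diagonal together, and carries over any two-digit counts.

# Sketch: the chopstick (lattice) procedure for two two-digit numbers.
def chopstick_multiply(ab, cd):
    a, b = divmod(ab, 10)          # e.g. 23 -> a = 2, b = 3
    c, d = divmod(cd, 10)          # e.g. 12 -> c = 1, d = 2
    ones = b * d                   # bottom-right corner
    tens = a * d + b * c           # the diagonal, counted together
    hundreds = a * c               # top-left corner
    # carry over any two-digit counts, right to left
    tens += ones // 10; ones %= 10
    hundreds += tens // 10; tens %= 10
    return hundreds * 100 + tens * 10 + ones

print(chopstick_multiply(23, 12), chopstick_multiply(24, 31))   # 276, 744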
The Magic Squares of 5

Calculating squares of numbers is fairly easy as long as we are taking the square of numbers between one and ten. When the square involves greater numbers it becomes a little more complicated. However, there is a way to quickly calculate the square of numbers whose last digit is 5. As we know, 5² = 25. Let's rewrite the single digit 5 as the two-digit number 05; its square is still going to be 25. To investigate the squares of higher numbers, let's analyze 15² = 225. The squares of these numbers can be found in a very easy way. We separate the number to be squared into two components: B, which is the last digit of the number, and A, the remainder. As an example, the number 05 would be separated into A = 0 and B = 5, and 25 would be separated into A = 2 and B = 5. Note that since the numbers we are dealing with are those ending in 5, B will always be equal to 5. The calculation of squares is as follows: A x (A+1) = C and B² = D will produce CD. Illustrating this with 05²: A = 0, therefore A+1 = 1, making A x (A+1) = 0 x 1 = 0. Then we have B² = 5² = 25. Finally we get C = 0 and D = 25, therefore CD = 025, or commonly written as 25. In the same manner, if we want to square the number 25, we separate it into two components: A = 2 and B = 5. B² = 25 and A x (A+1) = 2 x 3 = 6. Therefore C = 6 and D = 25, making CD = 625, which is the result of 25².
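A two-line implementation of the trick (splitting the number into its leading digits A and the final 5) makes it easy to verify against ordinary squaring:

# Sketch: the "squares of 5" trick -- for a number ending in 5, the square is
# A*(A+1) followed by 25, where A is the number with its final 5 removed.
def square_ending_in_5(x):
    a = x // 10                    # the leading digits A
    return int(str(a * (a + 1)) + "25")

for x in (5, 25, 35, 85, 115):
    print(x, square_ending_in_5(x), x * x)   # both columns agree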
The Interesting Number 9

This case does not make any calculation easier, but it is interesting to discover the nice properties of the multiplication table of the number 9. The number 9 has a multiplication table that is totally symmetrical. The multiples of 9 from one to five are, respectively, 09, 18, 27, 36 and 45. The multiples of 9 from six to ten are 54, 63, 72, 81, and 90 respectively, which are the digit-reversed images of the first five numbers 09, 18, 27, 36 and 45. Also, if we look at the table in a column from top to bottom, we observe that the multiples of 9 from 1 to 10 have first digits running 0 through 9 and second digits running 9 down to 0, showing again a geometrical symmetry.
Interesting Application of Multiplication

In elementary school, students learn the tables of multiplication. Unfortunately, due to the constant use of calculators they stop using them; instead of multiplying, say, 4 x 6 in their minds, they plug it into the calculator to obtain the answer. Although using the calculator may at times prove to be a faster method of getting to the mathematical answer of a problem, it deprives students of learning some interesting ideas that can be unveiled from the tables of multiplication. One such instance is the table of two and its concept of doubling. The doubling effect is better explained by the following example. As you walk along a lake, you notice that the lake is inhabited by just one lotus flower. The next day, you walk by the lake and notice that there are two lotus flowers. The following day, you see four lotus flowers instead of two. If you know the lake will be completely covered on the 30th day, how long would it take for half of the lake to be
covered by lotus flowers? Keep in mind that you do not know the size of the lake, nor the size of the lotus flower. You can start this problem by noticing that as each day goes by, the number of lotus flowers doubles. Designate the initial lotus flower as x. Since the number of flowers doubles every day, you multiply it by two every time: on the first day you have x, on the second day you have 2x, on the third day you have 2 × (2x) = 4x, and so on and so forth. If the doubling concept is kept in mind, then it can be said that by the 29th day the lake should be half covered, and so on the next day (the 30th), after it doubles, the lake will be completely covered.
Another Way of Multiplying Without Using a Calculator

Really early in school life, students adopt the use of calculators in a very addictive way. Many times they cannot solve the simplest calculation without plugging it into a calculator. There are many ways to perform calculations without using a calculator; however, they can be tedious at times. Here we will demonstrate a very simple and easy way to calculate squares of numbers that are not multiples of five. The steps are simple. Suppose you want to compute x². Since x² = x² − y² + y², and x² − y² = (x + y)(x − y), we obtain x² = (x + y)(x − y) + y², where x is the number to be squared and y is the difference between that number and a conveniently close round number (for example, the nearest multiple of ten), so that one of the two factors becomes easy to multiply. This is the formula we are going to use to compute our squares without the calculator!
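A small sketch of the formula, where y is taken here as the distance from x to the nearest multiple of ten (one natural way to make one of the factors easy to multiply):

# Sketch: x^2 = (x + y)(x - y) + y^2, choosing y as the distance from x
# to the nearest multiple of ten.
def square_by_rounding(x):
    y = min(x % 10, 10 - x % 10)
    return (x + y) * (x - y) + y * y

for x in (23, 47, 61, 98):
    print(x, square_by_rounding(x), x * x)   # e.g. 23^2 = 26*20 + 9 = 529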
The Pythagorean Theorem

All students who have gone through Algebra, Geometry and Trigonometry know about the Pythagorean Theorem, as it is one of the first mathematical theorems that is taught. It states that for a right triangle, the sum of the squares of the two shorter sides is equal to the square of the hypotenuse (the longest side) of that same triangle. It is expressed by the equation A² + B² = C². The most common and well-known Pythagorean triangles are the 3-4-5, the 5-12-13, and the 7-24-25 triangles. Looking at them, we can discover a pattern. For all three of them the shortest sides are 3, 5 and 7 respectively: they form an arithmetic sequence with a common difference of 2. Another pattern lets us find the value of the second shorter side of the triangle: multiply the shortest side by a counter and then add that counter to the result: (3x1)+1 = 4; (5x2)+2 = 12; (7x3)+3 = 24. Finally, the hypotenuse is obtained by adding 1 to the previously found side: 4+1 = 5; 12+1 = 13; 24+1 = 25. We can therefore summarize our patterns into a formula. Whenever the shortest side of the right triangle is an odd number, the second shorter side B is found from the formula xy + y = B, where x is the shortest side (the starting number in the arithmetic sequence) and y is a multiplier that increases by 1 for each successive triangle. Adding 1 to B then produces the hypotenuse.
What happens if the shortest side of the Pythagorean triangle is an even number? The answer is simple: we follow steps similar to the prior ones, with two corrections. Whenever the shortest side is an even number, the second shorter side is computed with the formula xz − 1, where z is a multiplier that starts at 1 and increases by 1 for each successive triangle. When z = 1, x = 4; when z = 2, x = 8. The shortest sides of these triangles therefore increase by 4 each time, so we can predict any of them down the road to be the previous one plus 4. The hypotenuse is then found by adding 2 to the value of the second shorter side of the triangle. Both patterns are illustrated in the sketch below.
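The sketch below follows the descriptions above (odd shortest side x with y = (x − 1)/2, and even shortest side x = 4z) as I have read them, so the formulas should be regarded as an interpretation of the patterns rather than the author's exact notation; the final column verifies each triple.

# Sketch: generating the triples described above and checking a^2 + b^2 = c^2.
def odd_triple(x):          # x = 3, 5, 7, ...
    y = (x - 1) // 2
    b = y * (x + 1)          # gives 4, 12, 24 for x = 3, 5, 7
    return x, b, b + 1       # hypotenuse = b + 1

def even_triple(x):          # x = 4, 8, 12, ...
    z = x // 4
    b = x * z - 1            # gives 3, 15 for x = 4, 8
    return x, b, b + 2       # hypotenuse = b + 2

for t in (odd_triple(3), odd_triple(5), odd_triple(7), even_triple(4), even_triple(8)):
    a, b, c = t
    print(t, a * a + b * b == c * c)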
The Magic Pascal Triangle

The Pascal triangle was created by the French mathematician Pascal in the mid-seventeenth century. At first glance it resembles a simple triangle with numbers inside, but in reality it is a beautifully intriguing multiplication table. Within the triangle, there are many interesting characteristics we can discover.

a) Symmetry in the diagonal direction: Each diagonal of the triangle is formed by a set of consecutive numbers that have a symmetric image on the other side of the triangle.

b) Symmetry in the horizontal direction:
Imagine the triangle is vertically separated in half. Each horizontal line from one side has its symmetric image on the other side of the triangle. Take, for example, the fifth horizontal line of our graph: if you cut it in half right through the middle, you observe the same pattern of numbers to be exactly symmetric: 1-5-10 is the exact reflection of 10-5-1.

c) Powers of two in the horizontal direction: If we add all the numbers of each horizontal line of the triangle, the results are equal to the powers of 2 in perfect order. For example, the sum of the first row is 1 and 2^0 = 1; the sum of the second row is 2 and 2^1 = 2; and the sum of the third row is 4 and 2^2 = 4; this pattern goes on indefinitely.

d) Addition in L shape: The fourth characteristic of the Pascal triangle resides in the addition of numbers forming L-shapes. If we add any series of numbers in a diagonal line, starting with the most outer number in that line, their total is equal to the number forming the arm of the L. To illustrate that fact, let's take for example the second diagonal from our graph; the series starts with the most outer number 1, and 1 + 2 + 3 = 6, which is the number forming the L with these three numbers.
* Note: 2^x denotes 2 raised to the power x.
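The first three properties above can be checked with a few lines of code; the sketch below builds the triangle row by row and verifies the row sums and one L-shaped (hockey-stick) sum.

# Sketch: Pascal's triangle, its row sums (powers of 2), and an L-shaped sum.
def pascal(rows):
    tri = [[1]]
    for _ in range(rows - 1):
        prev = tri[-1]
        tri.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return tri

tri = pascal(8)
print([sum(row) for row in tri])               # 1, 2, 4, 8, ... = 2^k
# L-shape: 1 + 2 + 3 along a diagonal equals 6, the entry forming the arm
print(tri[1][1] + tri[2][1] + tri[3][1], tri[4][2])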
Shortcut to Obtain Partial Fractions

For students, the operation of calculating partial fractions can be tedious at times. Here we will present the Heaviside method in three shortcut cases where calculating partial fractions is not that hard. These methods can be used for College Algebra, Calculus or Differential Equations.

Case A
Case A illustrates the basic Heaviside cover-up rule on a rational function F(s) whose denominator contains the linear factor (3s + 1): multiplying F(s) by (3s + 1) and evaluating the result at the root of that factor yields the coefficient of the corresponding partial fraction directly, without solving a system of equations.
Case B
Let's find the partial fractions for

$$F(s) = \frac{1 - s(4 - 3s)}{s\,[(s - 1)^2 + 1]}.$$

Let

$$F(s) = \frac{A}{s} + \frac{B(s - 1) + C}{(s - 1)^2 + 1}.$$

[This form is required for solving inverse Laplace transforms.] Multiplying both sides by $s\,[(s - 1)^2 + 1]$ gives

$$1 - s(4 - 3s) = A\,[(s - 1)^2 + 1] + B(s - 1)s + sC.$$

The cover-up technique eliminates coefficients by substituting convenient values of s.

Let s = 0: $1 = 2A$, so $A = \tfrac{1}{2}$.

Let s = 1: $1 - 1(4 - 3) = A + C$, so $0 = A + C$ and $C = -A = -\tfrac{1}{2}$.

Let s = −1: $1 + 1(4 + 3) = 5A + B(-2)(-1) - C$, that is, $8 = 5A + 2B - C$, so $2B = 8 - 5A + C = 8 - \tfrac{5}{2} - \tfrac{1}{2} = 5$ and $B = \tfrac{5}{2}$.

Therefore

$$F(s) = \frac{1/2}{s} + \frac{\tfrac{5}{2}(s - 1) - \tfrac{1}{2}}{(s - 1)^2 + 1}.$$
Case C
Let's find the partial fractions for

$$F(s) = \frac{s^2 + 4s - 5}{(s - 2)^3}.$$

Let

$$F(s) = \frac{A}{s - 2} + \frac{B}{(s - 2)^2} + \frac{C}{(s - 2)^3}.$$

We modify the numerator by matching it with powers of (s − 2):

$$s^2 + 4s - 5 = [(s - 2) + 2]^2 + 4[(s - 2) + 2] - 5 = (s - 2)^2 + 4(s - 2) + 4 + 4(s - 2) + 8 - 5 = (s - 2)^2 + 8(s - 2) + 7.$$

Therefore

$$F(s) = \frac{(s - 2)^2 + 8(s - 2) + 7}{(s - 2)^3} = \frac{1}{s - 2} + \frac{8}{(s - 2)^2} + \frac{7}{(s - 2)^3}.$$
REF: “Joy of Math” by Arthur Benjamin.

Biography: