1. Using a t test of means, the difference in the data can be measured. Table 1 measures the difference in the weights under program 1. The analysis of the data is given below:


One-Sample Test (Test Value = 0)

|     | t      | df | Sig. (2-tailed) | Mean Difference | 95% CI of the Difference, Lower | 95% CI of the Difference, Upper |
|-----|--------|----|-----------------|-----------------|---------------------------------|---------------------------------|
| WBP | 11.422 | 14 | .000            | 86.0667         | 69.9057                         | 102.2276                        |
| WAP | 27.142 | 14 | .000            | 67.8000         | 62.4423                         | 73.1577                         |


The analysis for program 2 is given below:


One-Sample Test (Test Value = 0)

|     | t      | df | Sig. (2-tailed) | Mean Difference | 95% CI of the Difference, Lower | 95% CI of the Difference, Upper |
|-----|--------|----|-----------------|-----------------|---------------------------------|---------------------------------|
| WBP | 28.332 | 14 | .000            | 80.6000         | 74.4983                         | 86.7017                         |
| WAP | 26.115 | 14 | .000            | 69.0000         | 63.3332                         | 74.6668                         |


The analysis for program 3 is given below:


One-Sample Test (Test Value = 0)

|     | t      | df | Sig. (2-tailed) | Mean Difference | 95% CI of the Difference, Lower | 95% CI of the Difference, Upper |
|-----|--------|----|-----------------|-----------------|---------------------------------|---------------------------------|
| WBP | 35.610 | 14 | .000            | 77.8000         | 73.1141                         | 82.4859                         |
| WAP | 28.365 | 14 | .000            | 63.8000         | 58.9759                         | 68.6241                         |


Based on the data above, program 1 is the most effective weight-loss program, since it shows the largest drop in mean weight from WBP to WAP (86.07 to 67.80, a difference of about 18.27, compared with 11.60 for program 2 and 14.00 for program 3).
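As a hedged sketch of how such one-sample t tests could be reproduced outside SPSS, the Python code below uses scipy; the weight arrays are invented placeholders (the raw measurements are not listed in the text), so the printed statistics will not match the tables above.

```python
import numpy as np
from scipy import stats

# Hypothetical weights for one program (n = 15, matching df = 14 in the tables);
# the real measurements are not reproduced in the text.
wbp = np.array([92, 81, 78, 95, 88, 76, 90, 84, 79, 97, 85, 91, 83, 80, 92], dtype=float)
wap = np.array([70, 66, 64, 74, 69, 61, 72, 67, 63, 76, 68, 71, 65, 62, 69], dtype=float)

# One-sample t tests against a test value of 0, as in the SPSS output above.
t_before, p_before = stats.ttest_1samp(wbp, popmean=0)
t_after, p_after = stats.ttest_1samp(wap, popmean=0)
print(f"WBP: t = {t_before:.3f}, p = {p_before:.4f}")
print(f"WAP: t = {t_after:.3f}, p = {p_after:.4f}")

# A paired t test on the before/after values is arguably the more direct
# check that the program actually changed the weights.
t_diff, p_diff = stats.ttest_rel(wbp, wap)
print(f"Paired before/after: t = {t_diff:.3f}, p = {p_diff:.4f}")
```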


 


Question 1.B


Correlation


Correlation is one of the statistical techniques that can show whether two variables are related. Correlation analysis measures the strength of the relationship between two variables with a single number called the correlation coefficient. For example, there is a relationship between people's height and weight, though it is not a perfect one: people of the same height vary in their weights. Correlation can tell how much of the variability in weight is related to height. There are different correlation techniques; the Survey System's Optional Statistics Module includes the most common type, the Pearson or product-moment correlation. Correlation is appropriate for data in which the numbers are meaningful quantities of some sort. Most statisticians say it should not be applied to rating scales, although some still use it, with great care.

The correlation coefficient, the main result of a correlation analysis, is written simply as "r" and ranges from −1 to +1. The closer r is to +1 or −1, the more closely the two variables are related; if r is close to 0, the variables have no correlation. A positive r means that as one variable gets larger the other also gets larger; a negative r, sometimes called an inverse correlation, means that as one gets larger the other gets smaller. Squaring r gives the percentage of the variation in one variable that is related to the variation in the other; for example, an r of 0.7 corresponds to 49% shared variance. The Pearson correlation is used for linear relationships, in which one variable gets larger or smaller in direct proportion to the other (Creative Research Systems, 2002).


The formula for finding the value of r is:

r = [N Σxy − (Σx)(Σy)] / sqrt( [N Σx² − (Σx)²] [N Σy² − (Σy)²] )

Source: Research Methods Knowledge Base
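As a minimal illustration (not part of the cited source), the summation formula above can be checked numerically against scipy's built-in Pearson correlation; the x and y values are made up.

```python
import numpy as np
from scipy import stats

# Made-up paired observations for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 2.9, 3.7, 5.2, 5.8, 7.1])

# Pearson r computed directly from the summation formula above.
n = len(x)
num = n * np.sum(x * y) - np.sum(x) * np.sum(y)
den = np.sqrt((n * np.sum(x**2) - np.sum(x)**2) * (n * np.sum(y**2) - np.sum(y)**2))
r_manual = num / den

# The same coefficient from scipy, for comparison.
r_scipy, p_value = stats.pearsonr(x, y)

print(f"manual r = {r_manual:.4f}, scipy r = {r_scipy:.4f}, p = {p_value:.4f}")
print(f"shared variance r^2 = {r_manual**2:.2%}")
```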


 


In applying this technique, it is important that each pair of values comes from a single subject and is independent of the other subjects. The assumption is violated if, for example, half of the subjects' values are tied to the values of the other half, since the observations then influence one another. The calculation is also invalid if the values of X and Y are intertwined, as when a midterm score is correlated with an overall score of which the midterm is a component. Finally, if the values of X are controlled by the experimenter, it is better to calculate a linear regression than a correlation.


 


 


Simple Linear Regression


Regression describes the distribution of a variable called the response with the aid of one or more predictors. When the relationship between a single predictor and the response is studied, the analysis is called simple regression. The simple linear regression model describes the relationship between Y and X, or some transformation of X. A pair (X, Y) is observed for each of n units, yielding a sample of pairs (x1, y1), (x2, y2), ..., (xn, yn). Plotting these pairs in a scatter plot provides a strong clue to the possible relationship between X and Y. To understand regression analysis, it is advisable to begin with the simple linear regression model for investigating the relationship between the response Y and the predictor X (Astro Temple, n.d.).


The standard regression equation is:


y = mx + b


where y is the predicted value, m is the slope, b is the intercept, and x is the value of the predictor.


The regression equation is fitted to a set of data in order to describe the data and to predict the response from the predictor. The fitted line gives a good fit when the points lie close to it, that is, when the values obtained from the line are closer to the observed values than those obtained from any other line. In assessing the fit, only the vertical distances of the points from the line matter. The fitted line is called the least-squares regression line because it makes the sum of the squared residuals as small as possible. The best fit could be found by fitting a large number of lines by trial and error, but it can be obtained directly as the line for which

Σ (yi − ŷi)²

is minimized, or it can also be computed manually (Dallal, 2000).


Regression can also be used for prediction, and a confidence interval is a good way to assess the quality of a prediction. When predicting with a regression, the confidence interval for a single forecast value of Y corresponds to the chosen value of X and the point on the fitted line.


Estimating m and b uses the least-squared-error criterion: the estimates are the values of m and b that minimize the squared differences between the predicted and the observed values of Y over all values of X. These estimates can be found by a search procedure that evaluates many proposed values of m and b, or, more directly, by using calculus to derive equations for estimating m and b (IBM, 2003).
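A minimal sketch of the closed-form least-squares estimates described above; the (x, y) data are invented for illustration, and the formulas shown are the standard textbook ones rather than code from the cited source.

```python
import numpy as np

# Invented (x, y) pairs for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.3, 4.1, 5.8, 8.2, 9.9, 12.1])

# Closed-form least-squares estimates of the slope m and intercept b:
#   m = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
#   b = mean(y) - m * mean(x)
x_bar, y_bar = x.mean(), y.mean()
m = np.sum((x - x_bar) * (y - y_bar)) / np.sum((x - x_bar) ** 2)
b = y_bar - m * x_bar

# Predicted values and the residual sum of squares that the fit minimizes.
y_hat = m * x + b
rss = np.sum((y - y_hat) ** 2)
print(f"m = {m:.4f}, b = {b:.4f}, residual sum of squares = {rss:.4f}")

# Predicting the response at a new value of x.
x_new = 7.0
print(f"predicted y at x = {x_new}: {m * x_new + b:.4f}")
```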


In short, simple linear regression is a modeling technique worth studying further because it is the main path to understanding more advanced forms of statistical modeling. It is also a versatile technique that can model curvilinear data through a transformation of the raw data, most commonly a logarithmic or power transformation. Such a transformation can make the data linear so that simple linear regression can be used to model them, and the resulting model can be expressed as a linear formula in the transformed values (Ibid.).
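As a hedged sketch of the transformation idea (illustrative data, not from the cited source), exponential-looking data can be straightened by taking the logarithm of the response before fitting the line:

```python
import numpy as np

# Invented curvilinear data, roughly y = 2 * exp(0.5 * x) with a little noise.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([3.4, 5.3, 9.1, 14.6, 24.9, 39.8])

# Log-transform the response so the relationship becomes approximately linear:
#   log(y) ~ log(a) + b * x
log_y = np.log(y)

# Fit the straight line to (x, log y) with the same least-squares formulas as before.
x_bar, ly_bar = x.mean(), log_y.mean()
slope = np.sum((x - x_bar) * (log_y - ly_bar)) / np.sum((x - x_bar) ** 2)
intercept = ly_bar - slope * x_bar

# Back-transform to express the fit on the original scale: y ~ a * exp(b * x).
a, b = np.exp(intercept), slope
print(f"fitted curve: y is approximately {a:.3f} * exp({b:.3f} * x)")
```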


In the problems solved here, regression and correlation are the tools for determining the relationship between the programs and the clients' weights. They can also be used to forecast the weight values associated with the given programs.


 


Question 2.


Chi-Square Technique


a. The chi-square technique is used to investigate differences in the distribution of categorical variables. Such variables yield data in categories rather than numerical measurements, and the chi-square statistic compares the counts, or tallies, of categorical responses between two or more independent groups (Eck, 2001). The formula for the chi-square statistic is:

χ² = Σ (O − E)² / E,

where O is the observed count and E is the expected count in each cell.
The chi-square technique can be used to determine whether it is worth the researcher's effort to interpret a contingency table, and it is therefore required before interpreting the results.


 


b. To show the use of chi-square more precisely, the example below gives the incidence of three types of malaria in three tropical regions. The table of values is given below:


 


|           | Asia | Africa | South America | Totals |
|-----------|------|--------|---------------|--------|
| Malaria A | 31   | 14     | 45            | 90     |
| Malaria B | 2    | 5      | 53            | 60     |
| Malaria C | 53   | 45     | 2             | 100    |
| Totals    | 86   | 64     | 100           | 250    |

Source: Mathbeans Project by Eck, 2001


The following table can now be set up:


| Observed | Expected | \|O − E\| | (O − E)² | (O − E)² / E |
|----------|----------|-----------|----------|--------------|
| 31       | 30.96    | 0.04      | 0.0016   | 0.0000516    |
| 14       | 23.04    | 9.04      | 81.72    | 3.546        |
| 45       | 36.00    | 9.00      | 81       | 2.25         |
| 2        | 20.64    | 18.64     | 347.45   | 16.83        |
| 5        | 15.36    | 10.36     | 107.33   | 6.99         |
| 53       | 24.00    | 29.00     | 841      | 35.04        |
| 53       | 34.40    | 18.60     | 345.96   | 10.06        |
| 45       | 25.60    | 19.40     | 376.36   | 14.7         |
| 2        | 40.00    | 38.00     | 1444.00  | 36.1         |

Source: Mathbeans Project by Eck, 2001


The computed value of chi-square is 125.516, with (c − 1)(r − 1) = (3 − 1)(3 − 1) = 4 degrees of freedom.
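As a cross-check (not part of the original solution), the same statistic can be computed directly from the contingency table with scipy, which derives the expected counts from the row and column totals:

```python
import numpy as np
from scipy import stats

# Observed counts from the malaria table above
# (rows: Malaria A, B, C; columns: Asia, Africa, South America).
observed = np.array([
    [31, 14, 45],
    [ 2,  5, 53],
    [53, 45,  2],
])

# chi2_contingency computes the expected counts from the marginal totals and
# returns the chi-square statistic, p-value, degrees of freedom, and expected table.
chi2, p_value, dof, expected = stats.chi2_contingency(observed)

print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.3g}")
print("expected counts:\n", np.round(expected, 2))
```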


For this problem, the relevant portion of the chi-square distribution table is given below:


| df | p = 0.5 | p = 0.1 | p = 0.05 | p = 0.02 | p = 0.01 | p = 0.001 |
|----|---------|---------|----------|----------|----------|-----------|
| 1  | 0.455   | 2.706   | 3.841    | 5.412    | 6.635    | 10.827    |
| 2  | 1.386   | 4.605   | 5.991    | 7.824    | 9.21     | 13.815    |
| 3  | 2.366   | 6.251   | 7.815    | 9.837    | 11.345   | 16.268    |
| 4  | 3.357   | 7.779   | 9.488    | 11.668   | 13.277   | 18.465    |
| 5  | 4.351   | 9.236   | 11.07    | 13.388   | 15.086   | 20.517    |

Source: Mathbeans Project by Eck, 2001
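The critical values in the table can also be recovered from the chi-square distribution itself; the sketch below (an illustration, not taken from the cited source) reproduces the df = 4 row used in this problem:

```python
from scipy import stats

# Right-tail critical values of the chi-square distribution for df = 4,
# matching the row of the table used in this problem.
df = 4
for alpha in (0.5, 0.1, 0.05, 0.02, 0.01, 0.001):
    critical = stats.chi2.ppf(1 - alpha, df)
    print(f"alpha = {alpha:<6} critical value = {critical:.3f}")

# The computed statistic of 125.516 far exceeds every one of these values,
# so the null hypothesis of no association is rejected.
```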


 


 


In this particular study, the decision is to reject the null hypothesis of no relationship, because the computed chi-square of 125.516 is far greater than the critical value of 13.277 at the 0.01 level for 4 degrees of freedom.


In the sample problem, a relationship between the place and the disease was expected, and the chi-square method supports this: rejecting the null hypothesis indicates that the type of malaria is related to the location.


 


c. An advantage of the chi-square technique is its generality: it can be applied to any distribution, continuous or discrete, for which the cumulative distribution function (CDF) can be computed. However, the chi-square test is sensitive to how the data are binned, and it requires a sample large enough that each expected frequency is at least 5.
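As a hedged illustration of a chi-square goodness-of-fit test (a made-up example, not from the text), the sketch below compares observed die-roll counts with the frequencies expected under a fair die; every expected count is well above the minimum of 5 mentioned above.

```python
from scipy import stats

# Made-up counts of 120 rolls of a die, categories 1 through 6.
observed = [25, 17, 22, 19, 16, 21]

# Under the null hypothesis of a fair die, each face is expected 120 / 6 = 20 times,
# comfortably above the minimum expected frequency of 5.
expected = [20] * 6

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.3f}, p = {p_value:.3f}")
```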


When the chi-square technique is used, the analysis can be misleading or incorrect if the goodness-of-fit assumptions are violated. Examples of potential assumption violations are lack of independence, structural zeros, outliers, special problems with continuous variables, and small expected cell frequencies.


 


  


Bibliography


Chi Square 2001, Math Beans Project, Eck, viewed 21 April, 2008, http://math.hws.edu/javamath/ryan/ChiSquare.html.


Correlation 2002, Creative Research Systems, viewed 21 April, 2008, http://www.surveysystem.com/correlation.htm.


Correlation 2006, Research Methods Knowledge Base, viewed 21 April, 2008, http://www.socialresearchmethods.net/kb/statcorr.php.


Simple Linear Regression 2003, IBM, viewed 21 April, 2008, http://www.ibm.com/developerworks/web/library/wa-linphp2/.


Simple Linear Regression 2000, TUFTS, viewed 21 April, 2008, http://www.tufts.edu/~gdallal/slr.htm.


Simple Linear Regression n.d., Astro Temple, viewed 21 April, 2008, http://astro.temple.edu/~jagbir/regression1.pdf.


