final blog comments


http://emu2468.wordpress.com/2012/03/25/is-the-term-approach-significance-cheating/#comment-78

http://kpsychb.wordpress.com/2012/02/10/comments-for-my-ta-week-3/#respond

http://theakatysingleton.wordpress.com/2012/02/05/dsm-a-fair-and-accurate-method-of-diagnosis/#comment-41

http://laurenpsychology.wordpress.com/2012/03/25/should-psychology-be-written-for-the-layman-or-should-science-be-exclusively-for-scientists/#comment-105

T-Test

T-tests are used to test hypotheses about an unknown population mean: the t statistic is used as a substitute for z when the population standard deviation is unknown.

A t distribution is computed using the sample variance. This is because the t-test uses the sample variance in place of the population standard error, and because t is used to test the difference between means. T distributions are bell-shaped and have a definite mean of zero. The shape of the t distribution changes with the degrees of freedom: the larger the degrees of freedom, the closer the t distribution looks to z, even though t distributions are known to have more variability and to be flatter and more spread out than the z distribution, which has a more central peak. The flatness of the t distribution is caused by the variability of the scores in the distribution.
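To see this numerically, here is a minimal sketch in Python (using SciPy) comparing the tail weight of the t distribution at different degrees of freedom with the standard normal; the cut-off of 2 is an arbitrary choice for illustration:

```python
# As degrees of freedom grow, the t distribution's tails approach
# the standard normal's; at small df the tails are much heavier.
from scipy import stats

for df in (2, 5, 30, 100):
    # Probability of a score falling beyond +/-2 under t with this df.
    t_tail = 2 * stats.t.sf(2, df)
    print(f"df={df:>3}: P(|t| > 2) = {t_tail:.4f}")

# The same tail probability under the standard normal (z).
z_tail = 2 * stats.norm.sf(2)
print(f"  z   : P(|z| > 2) = {z_tail:.4f}")
```

At df = 2 the tail probability is roughly four times the normal's, which is exactly the "flatter and more spread out" shape described above.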

Although t-tests are exactly as great as they sound, they have their complications too, for example when testing variables and trying to find causation. So how useful are they? A study helps us answer this very question, as it shows that t-tests help us to see whether the null hypothesis holds and thus whether there is any given effect; it also illustrates the variability of a sample and the effect that sample size has on this. The study plainly highlights the strength of such tests in cases where a small number of variables are pitted against each other, as t-tests become more complex as more variables are added to the mix.

http://beheco.oxfordjournals.org/content/14/3/446.short
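For readers who want to see what this looks like in practice, here is a minimal sketch of an independent-samples t-test in Python; the group data are invented for illustration and are not taken from the study above:

```python
# A minimal two-sample (independent) t-test using SciPy.
# The scores below are invented for illustration only.
from scipy import stats

group_a = [5.1, 4.8, 6.2, 5.5, 5.9, 4.7]  # hypothetical condition A scores
group_b = [6.8, 7.1, 6.4, 7.5, 6.9, 7.2]  # hypothetical condition B scores

# Tests whether the two group means differ.
t_stat, p_value = stats.ttest_ind(group_a, group_b)

# Degrees of freedom for the equal-variance test: n1 + n2 - 2.
df = len(group_a) + len(group_b) - 2

print(f"t = {t_stat:.3f}, df = {df}, p = {p_value:.3f}")
```

Notice that only two group means are compared; as soon as a third group enters the design, a single t-test no longer suffices, which is the added complexity mentioned above.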


Chi-squared

Chi-squared is also known as the goodness-of-fit test, which means that it tests the shape and proportions of the data in relation to the null hypothesis. Goodness of fit is tested by comparing the observed frequencies with the distribution set out in the null hypothesis.

Chi-squared, in my eyes, is the most understandable of all the statistical tests, as there is no need to calculate sample means; you simply count the individuals in each category. This count is the observed frequency: the number of individuals in each category.

I believe that chi-squared is one of the most helpful statistical tests, as it is able to test the relationship between two variables and thus whether they are associated (though association alone does not prove causality). This is done by evaluating the independence of the two variables.

David Schoenfeld conducted a study using this statistical test. His tests were based on both expected and observed frequencies, and on whether these covariates fall within the hypothesised partitions. In this study, as in clinical trials, the observable data had to be partitioned in order to assess significance. Here chi-squared helped, as it tested the individual components of the data in order to check the fit of the proportional hazards regression model proposed by Cox.

http://biomet.oxfordjournals.org/content/67/1/145.short
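Here is a minimal sketch of a chi-squared goodness-of-fit test in Python; the category counts are invented for illustration:

```python
# A minimal chi-squared goodness-of-fit test using SciPy.
# The observed counts below are invented for illustration only.
from scipy import stats

# Observed frequencies: the number of individuals in each category.
observed = [18, 22, 30, 30]

# Expected frequencies under the null hypothesis (equal proportions here).
expected = [25, 25, 25, 25]

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-squared = {chi2:.3f}, p = {p_value:.3f}")
```

No means or variances are computed anywhere, which is precisely why the test is so approachable: everything runs on simple counts.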

Why is the "file drawer problem" a problem????

This is a question that, as a trainee psychologist, I have pondered for a while: what is the file drawer phenomenon, and why is it such a problem when it comes to publishing major research?

The file drawer problem (or phenomenon), a term coined by Robert Rosenthal in 1979, is to do with publication bias. Essentially, it draws on the issue that arises when researchers fail to reject their null hypothesis because their results are statistically insignificant, making it less likely that their data will be published. These negative or inconclusive results often never reach the public: researchers, editors, pharmaceutical companies and medical journals are wary of publishing data which may go against or contradict common belief. This, in my eyes, is a major problem, as the published record stops being a true representation of the field, leading to direct problems when it comes to the continuation of experiments, or to extreme bias in the data. http://www.ma.utexas.edu/users/mks/statmistakes/filedrawer.html
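Rosenthal's own answer to his question was the "fail-safe N": the number of unpublished null-result studies that would have to be sitting in file drawers before a significant combined result became non-significant. A minimal sketch of the standard formula follows; the z-scores are invented for illustration:

```python
# A sketch of Rosenthal's (1979) fail-safe N.
# The z-scores below are invented for illustration only.

def fail_safe_n(z_scores):
    """How many unpublished null studies would be needed to push the
    combined one-tailed p-value above .05 (critical z = 1.645)."""
    k = len(z_scores)
    z_sum = sum(z_scores)
    # 2.706 is 1.645 squared, where 1.645 is the one-tailed
    # critical z at alpha = .05.
    return (z_sum ** 2) / 2.706 - k

# Hypothetical z-scores from five published studies.
published_z = [2.1, 1.9, 2.4, 1.7, 2.2]
print(f"fail-safe N = {fail_safe_n(published_z):.1f}")
```

If the fail-safe N is implausibly large, the file drawer is unlikely to contain enough null results to overturn the finding.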

The file drawer problem has major implications when it comes to validity, as the effect reduces it: the literature produced under the file drawer effect is largely unrepresentative of the population of studies. For example, Merck's research on Vioxx (an arthritis drug) included results showing that the drug was not as effective as claimed. Burying such research is detrimental both to those administering and to those taking the drug, as the published result gives an inaccurate image of the drug's true effect; it largely distorts the data, giving a biased slant on the results.
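A toy simulation makes the distortion concrete. Below, many small studies of the same weak effect are simulated, but only the "significant" ones are published; every number here is invented for illustration:

```python
# A toy simulation of the file drawer effect: publishing only
# significant results inflates the apparent effect size.
import random
import statistics
from scipy import stats

random.seed(42)
TRUE_EFFECT = 0.1   # the real (small) difference between groups
N_PER_GROUP = 20
N_STUDIES = 500

all_effects, published = [], []
for _ in range(N_STUDIES):
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    treatment = [random.gauss(TRUE_EFFECT, 1) for _ in range(N_PER_GROUP)]
    effect = statistics.mean(treatment) - statistics.mean(control)
    _, p = stats.ttest_ind(treatment, control)
    all_effects.append(effect)
    if p < 0.05:  # only 'significant' studies escape the file drawer
        published.append(effect)

print(f"true effect:                    {TRUE_EFFECT}")
print(f"mean effect, all studies:       {statistics.mean(all_effects):.3f}")
print(f"mean effect, published studies: {statistics.mean(published):.3f}")
```

The mean of the published studies lands far above the true effect, which is exactly the biased slant described above.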

 http://www.meta-analysis.com/downloads/PBPreface.pdf

Iyengar and Greenhouse (1988) suggested a plausible way of addressing publication bias: a sensitivity analysis of the effect size around the 0.05 significance level. Such an analysis can directly show how sensitive a conclusion is to the chosen threshold, creating a stricter, more efficient presentation of the data without ruling out half of the results. The more stringent the alpha level, the better the test of significance. http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177013012

To conclude, in my opinion the file drawer effect reflects an outdated publishing practice, which should be replaced with a method that best equips the psychological community.

“Should psychology be written for the layman or should science be exclusively for scientists????”

I believe strongly that psychology journals, articles and reports should be written for individuals, making them accessible to the general public. Yes, some of the words used would be psychology-specific, but the statistics and evaluations of the results should be accessible to all walks of life, and thus written in a straightforward fashion, explainable to an individual outside the field.

Statistics are often shown in adverts, and journals use statistics to illustrate their point. One example of this is the Virginia twin study: this study tested 1,412 pairs of Caucasian twins on how genetics and the environment can impact their development during adolescence. The statistics in such studies help the reader quickly grasp the findings and make sense of the possible impact they may have on the world or on themselves, as statistics shown as percentages are an easy, reliable, straightforward way for individuals to engage with the findings presented.

http://www.wpic.pitt.edu/research/famhist/PDF_Articles/Blackwell/BF9.pdf 

Furthermore, as psychology is somewhat a social science and not just a neural science, I believe that it is in psychologists' best interests to pass on their knowledge to others, creating an accessible platform where information can be received freely.

Qualitative research: easier or harder than it looks?

Among psychology students there is a common misconception that qualitative methods and research are substantially easier than SPSS or statistical programs.

Qualitative data is extremely time-consuming, as it requires the investigator to meticulously analyse the data for themes which represent the research topics and highlight a substantial viewpoint.

Janet Ilieva, Steve Baron and Nigel Healey conducted a study of the pros and cons of online surveys in marketing research. They found that the main problem with such data is that it is not always representative of the population, and thus not generalisable. http://www.mendeley.com/research/online-surveys-in-marketing-research-pros-and-cons/

Further work was conducted by Patricia Johnson and Winsome St John. Their study brought together the pros and cons of data analysis software for qualitative research. They found that there were many advantages, such as improved validity, auditability and flexibility of the data and of the methods used to analyse it. The concerns were mainly that the software is deterministic and often uses rigid processes to analyse the data; it also focuses only on the volume and breadth of the data rather than its depth and meaning, which in my eyes would hinder the development of qualitative research, as qualitative analysis is meant to focus on the detail within the data; that focus is what makes qualitative analysis what it is. http://onlinelibrary.wiley.com/doi/10.1111/j.1547-5069.2000.00393.x/abstract

On the whole, I would say qualitative research is even more complex and demanding than statistics and numerical data, due to its subjective nature and the fact that an individual's views and preconceptions can influence the outcome of the analysis.

till next time

xoxo