Finally been able to reject the null hypothesis

Thread posts


  • User
  • Message
  • Actions
    • Published: 08-04-2015 02:13 pm
      Updated: 08-04-2015 02:22 pm
    • I have been researching for some time how to evaluate whether a specific result is statistically significant or not.

      If one knows the distribution for the test, one can work out the probability of getting a specific score by chance.

      The current test consists of 22 traits with 3 questions each, i.e. 66 questions, each with five distinct answers (-30, -15, 0, 15, 30), and the score is a plain sum of all answers, so there are 5^66 possible distinct answer combinations. If one counts how many of those combinations reach a specific score or higher, one can tell how plausible it is to get that score by chance.

      After some time I found out that it's not feasible to solve this by direct enumeration; it takes way too long.
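
      ( Not from the original post, just a minimal sketch to make the scale concrete: it counts combinations by brute force for a toy number of questions, assuming each answer is equally likely under the null, and shows why 66 questions are out of reach. )

      # Minimal sketch (assumption: each answer equally likely under the null).
      from itertools import product

      ANSWERS = (-30, -15, 0, 15, 30)

      def p_score_or_higher_exact(threshold, n_questions):
          # Exact P(sum >= threshold) by enumerating every answer combination.
          # Only workable for tiny n_questions: the space has 5**n elements.
          hits = sum(1 for combo in product(ANSWERS, repeat=n_questions)
                     if sum(combo) >= threshold)
          return hits / 5 ** n_questions

      print(p_score_or_higher_exact(60, 6))  # toy case: 5**6 = 15,625 combinations
      print(5 ** 66)                         # the real test: about 1.4e46 combinations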

      So after searching around I found that I can use the central limit theorem, which predicts that the distribution of the test score is approximately normal. So now every test result with enough data also gets an indication of whether it is statistically significant or not. It seems that scores above 300 are all statistically significant.
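
      ( Again just my own sketch, not the site's code: the normal approximation from the central limit theorem, assuming answers are drawn uniformly at random under the null hypothesis. The exact numbers depend on that assumption. )

      import math

      ANSWERS = (-30, -15, 0, 15, 30)
      N_QUESTIONS = 66

      MEAN_PER_Q = sum(ANSWERS) / len(ANSWERS)                # 0
      VAR_PER_Q = sum(a * a for a in ANSWERS) / len(ANSWERS)  # 450
      SIGMA_TOTAL = math.sqrt(N_QUESTIONS * VAR_PER_Q)        # about 172.3

      def p_score_or_higher(score):
          # Approximate P(random test sum >= score) for one distribution.
          z = (score - N_QUESTIONS * MEAN_PER_Q) / SIGMA_TOTAL
          return 0.5 * math.erfc(z / math.sqrt(2))            # upper tail of the normal

      print(round(p_score_or_higher(300), 4))  # chance of reaching 300 by random answering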

      But how can you predict how accurate a specific type is?

      Each distinct type has a unique pattern over the traits, where half of the traits are positively correlated and half are anti-correlated. So every type has its own score calculation. This means that a high score on the test should be seen as evidence that the specific type definitions are actually accurate.
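
      ( The trait names and sign pattern below are made up just to illustrate the idea of a type-specific score; the real trait list is not reproduced here. )

      def type_score(trait_scores, type_signs):
          # Signed sum: +1 for traits the type correlates with, -1 for anti-correlated ones.
          return sum(type_signs[t] * s for t, s in trait_scores.items())

      trait_scores = {"traitA": 45, "traitB": -30, "traitC": 60, "traitD": 15}
      some_type_signs = {"traitA": +1, "traitB": -1, "traitC": +1, "traitD": -1}
      print(type_score(trait_scores, some_type_signs))  # 45 + 30 + 60 - 15 = 120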

      If you have any questions, please ask; maybe I missed something.


      ( Since the self-reporting test consists of questions about introspective experience, what we actually quantify is phenomenal experience. I find this quite fascinating, because introspection is not accessible to empirical observation, so it is not naturally compatible with the scientific method. So one could say we are taking an ontologically conservative approach to folk psychology and trying to prove concepts about mental experience. If anyone is more capable of explaining the philosophy behind this, please go ahead. )
    • Published: 08-07-2015 09:28 am
    • I discovered a calculation problem: since there are 16 different types, each forming its own sum, the probability must be multiplied by 16. I have fixed that now, and it seems to have raised the limit of statistical significance to 400 and above instead of 300 like before.
    • Published: 08-19-2015 08:54 am
    • Hmm, I noticed this was not actually correct. Having 16 or 8 possible sums does not mean the combined probability is just the individual probabilities added together; that formula produced invalid values (it can even exceed 1).
      I need to fix this last issue to prevent Type I and Type II errors.
    • Published: 08-19-2015 01:56 pm
      Updated: 08-19-2015 01:57 pm
    • Alright, finally I'm done. Statistical significance is reached when the score is around 430 or higher.

      1. First calculate the probability of getting the score or higher in one distribution.
      2. Take the complement: the probability of getting a lower score than that.
      3. Raise that complement to the power of the number of types (16 or 8, depending on the test version).
      4. Take the complement of the result.
      5. That gives the probability of getting the score or higher in at least one of the 8 or 16 distributions.

      It should be correct now; see the sketch of these steps below.
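
      ( A minimal sketch of the five steps, under the same uniform-random-answer and normal-approximation assumptions as before; the threshold it produces will not necessarily match 430 exactly. )

      import math

      SIGMA_TOTAL = math.sqrt(66 * 450)  # std of the 66-question sum under the null

      def p_single(score):
          # Step 1: probability of the score or higher in one distribution.
          return 0.5 * math.erfc(score / (SIGMA_TOTAL * math.sqrt(2)))

      def p_any_of_types(score, n_types):
          p_lower = 1.0 - p_single(score)      # step 2: probability of a lower score
          p_lower_all = p_lower ** n_types     # step 3: lower score in every distribution
          return 1.0 - p_lower_all             # steps 4-5: score reached in at least one

      print(round(p_any_of_types(430, 8), 4))   # 8-type test version
      print(round(p_any_of_types(430, 16), 4))  # 16-type test version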
    • ErikThor likes this post.
    • Published: 08-20-2015 05:00 am
    • Where do we stand on statistically significant vs. insignificant results based on this? Maybe we should clear our old data, since we've made some large revisions since then.
    • Published: 08-20-2015 05:06 am
    • The majority of the results are now statistically insignificant. There are some traits that don't correlate or anti-correlate according to our hypothesis when using the 28-traits model for types.

      Here is a thread about that issue.

      The correlations are only calculated for the 25 latest results, so we don't need to clear any old data. Also, all results have their specific answers saved, so the system will recalculate the type automatically when the type definitions change.
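
      ( Just to illustrate how a trait correlation over the latest results could be checked; the data layout, trait names, and loader function here are assumed, not taken from the actual system. )

      import math

      def pearson(xs, ys):
          # Plain Pearson correlation between two equally long score lists.
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
          sy = math.sqrt(sum((y - my) ** 2 for y in ys))
          return cov / (sx * sy)

      # Hypothetical usage over the 25 latest results, each stored as a dict of trait scores:
      # latest = load_latest_results(25)
      # r = pearson([res["traitA"] for res in latest], [res["traitB"] for res in latest])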

