In the previous work, the proposed research was related to healthcare data and professional knowledge. The information is processed through a validated process and can be considered for the future diagnosis of disease. The diagnostic process depends on the use of machines that can be tracked by professional doctors. The previous work provides a complete research plan for the operational model, and the research questions are aligned with the considered variables, their relations, and the statistical analysis of the research.
The research questions considered in the analysis were related to the contribution of data science, widespread diseases in the diagnostic process, influenza treatment, and the use of data science in identifying risk factors and other treatment processes. The research can be updated for the operational model analysis. The research problem is specifically related to the proposed plan and depends on the goals and objectives of the research observations. The research questions describe the problems, the variables, and their relations.
Statistical analysis is considered in the study. There are two proposed hypotheses: a null hypothesis and an alternative hypothesis. The null hypothesis deals with the significance of the relationship between data science and the tracking, curing, and diagnosing of widespread diseases. The alternative hypothesis states that data science contributes to tracking, diagnosing, and curing widespread diseases; the impact of data science is also considered in this hypothesis.
The updated version of the research is based on the operational model and the investigation of the role of data science in the treatment of diseases. In the latest updated hypothesis, a more precise statement is used that defines the relationship between the treatment process and data science. The role of new technology and machines is significant in disease analysis, diagnostic understanding, and the tracking of disease issues.
1. Discuss possible sources of uncertainty:
sampling error, researcher bias, reliability and validity of the instrument.
There are different sources of error and uncertainty in research, including sampling error, researcher bias, and the validity and reliability of the instrument. Errors can arise from faulty recording or the wrong recording of a measurement (Statistics.laerd.com, 2018). Misreading a scale can cause blunders in research. Errors can be classified into three types: systematic errors, random errors, and blunders. Systematic errors can be identified on the basis of their sources and causes (Ward, Self, & Froehle, 2015). Data errors can be due to wrong substitution, systematic biases, random biases, and missing values (Statistics.laerd.com, 2018). Even small data errors induce measurable effects on operational performance goals. Systematic substitution errors increase the frequency of errors, and shorter-duration metrics are affected more strongly, while metrics of longer duration carry a lower proportion of data errors (Ward, Self, & Froehle, 2015). Optimization of the instruments is required to enhance their reliability and validity. The impact of errors on operational performance can be identified and then measured at low and high frequencies of data. The operational model requires underlining the metrics and the potential impact of operational data on the analysis (Statistics.laerd.com, 2018).
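To make this concrete, the following Python sketch (illustrative only, with hypothetical values rather than figures from Ward, Self, & Froehle's actual simulation) injects a small rate of substitution errors into a set of recorded durations and shows how even a 2% error rate shifts the reported operational metric.

    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical "true" durations (minutes) for an operational metric,
    # e.g. time from patient arrival to first assessment.
    true_durations = rng.normal(loc=30, scale=5, size=1000)

    def inject_errors(values, error_rate=0.02, shift=10):
        # Randomly substitute a fraction of records with shifted values,
        # mimicking systematic substitution errors in the data.
        corrupted = values.copy()
        mask = rng.random(values.size) < error_rate
        corrupted[mask] += shift
        return corrupted

    observed = inject_errors(true_durations)

    print(f"Error-free mean: {true_durations.mean():.2f} min")
    print(f"Mean with 2% substitution errors: {observed.mean():.2f} min")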
2. Assume one of the hypotheses is true, and
your study produces that result. What does that mean to your study?
The level of significance demonstrates the probability of rejecting the null hypothesis when it is in fact true (Statistics.laerd.com, 2018). In the present study, the null hypothesis is "data science has no significant relationship with day-to-day living" and the alternative hypothesis is "data science has a positive impact on day-to-day living." If the significance level of the test is 0.05, that indicates a 5% risk of concluding that a difference exists when there is no actual difference. If the value of p is less than or equal to the level of significance, then we can reject the null hypothesis (Statistics.laerd.com, 2018).
The statistical analysis provides a view of the rejection region: if the value of p is smaller than the significance level, the null hypothesis is rejected. In the present case, if we reject the null hypothesis, the alternative hypothesis is accepted (Ward, Self, & Froehle, 2015). On the basis of the accepted and rejected hypotheses, it can be concluded that data science has a positive impact on day-to-day living. The probability of rejecting the hypothesis differs between the two cases, the null and the alternative hypothesis (Ward, Self, & Froehle, 2015).
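As a minimal sketch of this decision rule, assuming hypothetical outcome scores for groups with and without data-science support (illustrative data, not results from the present study), the following Python code runs a two-sample t-test and compares the p-value against the 0.05 significance level.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical outcome scores for two groups: one supported by data-science
    # tools and one without (illustrative data only).
    with_ds = rng.normal(loc=75, scale=10, size=50)
    without_ds = rng.normal(loc=68, scale=10, size=50)

    alpha = 0.05  # significance level: 5% risk of a Type I error
    t_stat, p_value = stats.ttest_ind(with_ds, without_ds)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value <= alpha:
        print("Reject the null hypothesis: the observed difference is significant.")
    else:
        print("Fail to reject the null hypothesis.")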
3. Assume one of the hypotheses is true, but
your study causes you to reject that hypothesis. What does that mean to your
study?
In the previous assessment, two hypotheses were proposed. The null hypothesis states that "data science has no significant relationship with diagnosing, tracking, and curing some of the world's deadliest and widespread diseases," while the alternative hypothesis states that "data science has a positive impact on diagnosing, tracking, and curing some of the world's deadliest and widespread diseases" (Ward, Self, & Froehle, 2015).
There are alternative outcomes for the condition in which a hypothesis is true but the study rejects it. If the p-value is above the significance cutoff, the null hypothesis cannot be rejected, and at the same time the alternative hypothesis cannot be accepted (Ward, Self, & Froehle, 2015).
The hypothesis can be considered correct if the study results also match it (Ward, Self, & Froehle, 2015). On the other hand, if the statistical analysis shows a p-value below the cutoff, the null hypothesis is rejected and the alternative hypothesis is accepted. Essentially, the cutoff point frames the statistical analysis and determines whether a hypothesis is accepted or rejected (Ward, Self, & Froehle, 2015). Rejecting a null hypothesis that is actually true is a Type I error, while failing to reject a null hypothesis that is actually false is a Type II error; in either case, the study's conclusion does not reflect the true state of affairs and would need to be revisited.
4. Suppose the results are statistically
significant (p < 0.05 and the null hypothesis is rejected), but the effect
size is very small. How would that influence your interpretation?
According to Cohen, a small effect size is approximately 0.2, a medium effect size is 0.5, and a large effect size is 0.8. The means of the two groups can differ by 0.2 pooled standard deviations or more; such a difference can be trivial in practice yet still statistically significant. The interpretation of a small effect size is therefore different: most of the variance in the dependent variable remains unexplained by the independent variable, so a statistically significant result should be interpreted with caution (Ward, Self, & Froehle, 2015).
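A minimal sketch of how such an effect size could be computed is given below, assuming two hypothetical groups whose means differ by roughly 0.2 pooled standard deviations; with samples this large the comparison would typically be statistically significant even though the effect is small by Cohen's benchmark.

    import numpy as np

    def cohens_d(group_a, group_b):
        # Cohen's d: standardized mean difference using the pooled standard deviation.
        n_a, n_b = len(group_a), len(group_b)
        pooled_var = ((n_a - 1) * np.var(group_a, ddof=1) +
                      (n_b - 1) * np.var(group_b, ddof=1)) / (n_a + n_b - 2)
        return (np.mean(group_a) - np.mean(group_b)) / np.sqrt(pooled_var)

    rng = np.random.default_rng(1)
    # Large hypothetical samples whose true means differ by 2 points against a
    # standard deviation of 10: d lands near Cohen's "small" benchmark of 0.2.
    group_a = rng.normal(loc=50, scale=10, size=5000)
    group_b = rng.normal(loc=48, scale=10, size=5000)

    print(f"Cohen's d = {cohens_d(group_a, group_b):.2f}")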
Statistics.laerd.com. (2018). Hypothesis Testing. Retrieved from https://statistics.laerd.com/statistical-guides/hypothesis-testing-3.php
Ward, M. J., Self, W. H., & Froehle, C. M. (2015). Effects of Common Data Errors in Electronic Health Records on Emergency Department Operational Performance Metrics: A Monte Carlo Simulation. Academic Emergency Medicine, 22(9), 1085-1092.