Second, we consider Hamilton cycles in hypergraphs. In particular, we determine the minimum codegree thresholds for Hamilton l-cycles in large k-uniform hypergraphs for all l < k/2. We also determine the minimum vertex degree threshold for loose Hamilton cycles in large 3-uniform hypergraphs. These results generalize the well-known theorem of Dirac for graphs.

Third, we determine the minimum codegree threshold for near-perfect matchings in large k-uniform hypergraphs, thereby confirming a conjecture of Rödl, Ruciński and Szemerédi. We also show that the decision problem of whether a k-uniform hypergraph satisfying a certain minimum codegree condition contains a perfect matching can be solved in polynomial time, completely solving a problem of Karpiński, Ruciński and Szymańska.

Finally, we determine the minimum vertex degree threshold for perfect tilings of C_4^3 in large 3-uniform hypergraphs, where C_4^3 is the unique 3-uniform hypergraph on four vertices with two edges.

Several different theoretical frameworks were used to analyze student comprehension. These provided multiple viewpoints from which to explore students' thinking as they worked aloud, either individually or in a group setting. First, APOS theory (Asiala et al., 1996) was used to analyze students' understanding of the vertex of a quadratic function in relation to the derivative on certain tasks. Students' personal meanings of the vertex, and their impact on understanding of the derivative, were noted, as was students' lack of connection between explicit and real-world problems. Misconceptions about the vertex, trouble with the free-fall formula, and graphing difficulties stemming from a weak schema for quadratic functions were all identified as barriers to student understanding of real-world problems.

Next, Skemp's (1976) framework of relational and instrumental understanding was used to characterize how students reasoned while thinking aloud individually. Trends in students' thought processes while working alone, as well as their ability to identify and correct mistakes, were analyzed. Lastly, Vygotsky's (1978) concept of the zone of proximal development was used to describe the difference in students' ability when working alone versus in a group setting. In a group setting, some students worked within their zone of proximal development as peers influenced them to fix incorrect solutions.

Based on APOS theory, several suggested classroom activities on the quadratic function and its derivative were developed to help students overcome these misconceptions and obstacles. Directions for future research on improving student understanding of quadratic functions and the derivative are also suggested.

We prove that the intersection algebra is a finitely generated R-algebra when R is a unique factorization domain and the two ideals are principal, and we use fans of cones to find the algebra generators. This is done in Chapter 2, which concludes by introducing a new class of algebras called fan algebras.

Chapter 3 deals with the intersection algebra of principal monomial ideals in a polynomial ring, where the theory of semigroup rings and toric ideals can be used. A detailed investigation of the intersection algebra of the polynomial ring in one variable is carried out. In this case, the intersection algebra is connected to semigroup rings associated to systems of linear Diophantine equations with integer coefficients, introduced by Stanley.

In Chapter 4, we present a method for obtaining the generators of the intersection algebra for arbitrary monomial ideals in the polynomial ring.

Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies on right truncation, Lagakos et al. (1988) proposed transforming a right-truncated variable into a left-truncated variable and then applying existing methods to the transformed variable. The reverse-time hazard function introduced through this transformation, however, does not have a natural interpretation, and gaps remain in inference for the ordinary forward-time hazard function with right-truncated data. This dissertation discusses variance estimation for the cumulative hazard estimator, a one-sample log-rank test, and comparison of hazard rate functions among finitely many independent samples in the context of right truncation.

First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators for the cumulative hazard estimator. Revisions to these variance estimators are suggested in this dissertation and evaluated in a Monte Carlo study.

Second, this dissertation studies hypothesis testing for right-truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is discussed first, followed by a family of weighted tests for comparison among $K$ independent samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte Carlo study.
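To make the role of the weight functions concrete, the following sketch shows the weights that distinguish the three named members of the weighted log-rank family at event times with given at-risk counts. This is an illustrative sketch, not the dissertation's implementation; the function name and interface are assumptions.

```python
import numpy as np

def weighted_logrank_weights(n_at_risk, kind="logrank"):
    """Weights distinguishing members of the weighted log-rank family.

    At an event time with n subjects at risk: the log-rank test uses
    W = 1, the Gehan test W = n, and the Tarone-Ware test W = sqrt(n).
    """
    n = np.asarray(n_at_risk, dtype=float)
    if kind == "logrank":
        return np.ones_like(n)
    if kind == "gehan":
        return n
    if kind == "tarone-ware":
        return np.sqrt(n)
    raise ValueError(f"unknown weight kind: {kind!r}")
```

Larger weights at early event times (Gehan, and to a lesser degree Tarone-Ware) emphasize early differences between the hazard rates, while the unweighted log-rank test treats all event times equally.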

Finally, this dissertation studies nonparametric inference for the hazard rate function with right-truncated data. The kernel smoothing technique is used to estimate the hazard rate function. A Monte Carlo study investigates the uniform kernel smoothed estimator and its variance estimator. The uniform, Epanechnikov, and biweight kernel estimators are applied to the blood-transfusion-related AIDS data example.
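A minimal sketch of the kernel smoothing step, assuming the jumps of an estimated cumulative hazard are available: the smoothed rate is $\hat\lambda(t) = b^{-1}\sum_i K((t-t_i)/b)\,\Delta\hat\Lambda(t_i)$ with bandwidth $b$. The function names below are our own, not the dissertation's.

```python
import numpy as np

# The three kernels named above, each supported on [-1, 1].
KERNELS = {
    "uniform": lambda u: 0.5 * (np.abs(u) <= 1),
    "epanechnikov": lambda u: 0.75 * (1 - u**2) * (np.abs(u) <= 1),
    "biweight": lambda u: (15 / 16) * (1 - u**2) ** 2 * (np.abs(u) <= 1),
}

def kernel_hazard(t, event_times, hazard_jumps, bandwidth, kernel="uniform"):
    """Smooth the jumps of a cumulative hazard estimator:

    lambda_hat(t) = (1/b) * sum_i K((t - t_i) / b) * dLambda_i
    """
    K = KERNELS[kernel]
    u = (t - np.asarray(event_times, dtype=float)) / bandwidth
    jumps = np.asarray(hazard_jumps, dtype=float)
    return float(np.sum(K(u) * jumps) / bandwidth)
```

Only event times within one bandwidth of `t` contribute, so the bandwidth governs the usual bias-variance trade-off of the smoothed estimator.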

We first construct empirical likelihood confidence intervals and simultaneous confidence bands for the odds ratio of two survival functions to address small-sample performance. The empirical log-likelihood ratio is developed, and its asymptotic distribution is derived. Simulation studies show that the proposed empirical likelihood band outperforms the normal approximation band for small sample sizes, in the sense that its coverage probabilities are closer to the chosen nominal levels.

Furthermore, to adjust the survival functions for prognostic factors in the comparison, we construct simultaneous confidence bands for the ratio and odds ratio of survival functions under both the Cox model and the additive risk model. We develop the bands by approximating the limiting distribution of the cumulative hazard functions with zero-mean Gaussian processes whose distributions can be generated through Monte Carlo simulation. Simulation studies are conducted to evaluate the performance of the proposed methods, and applications to published clinical trial data sets are presented for further illustration.
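The Monte Carlo step can be sketched as a multiplier simulation: given each subject's estimated influence-function contribution on a time grid, i.i.d. standard normal multipliers generate realizations of the zero-mean Gaussian limit, and the quantile of the sup-norm calibrates a simultaneous band. This is a generic sketch of the technique under those assumptions, not the dissertation's code; names and inputs are ours.

```python
import numpy as np

def multiplier_band_quantile(influence, n_sim=2000, level=0.95, seed=0):
    """Approximate the level-quantile of the sup of the limiting process.

    influence: (n, m) array; row i holds subject i's estimated
    influence-function contribution on a grid of m time points.
    Each simulated path is sum_i G_i * influence[i] / sqrt(n),
    with G_i i.i.d. standard normal multipliers.
    """
    influence = np.asarray(influence, dtype=float)
    n = influence.shape[0]
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_sim, n))        # one row of multipliers per replicate
    paths = g @ influence / np.sqrt(n)         # (n_sim, m) simulated Gaussian paths
    return float(np.quantile(np.max(np.abs(paths), axis=1), level))
```

The returned quantile plays the role of the critical value: a simultaneous band is formed by inflating the pointwise standard error by this value over the whole time grid.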

Finally, the population attributable fraction function is studied to measure the impact of risk factors on disease incidence in the population. We develop semiparametric estimation of attributable fraction functions for cohort studies with potentially censored event times under the additive risk model.

The second topic is the combination of several diagnostic tests to achieve better diagnostic accuracy. We consider the optimal linear combination that maximizes the area under the receiver operating characteristic curve (AUC); estimates of the combination's coefficients can be obtained via a nonparametric procedure. However, for estimating the AUC associated with the estimated coefficients, the apparent estimate obtained by re-substitution is overly optimistic. To adjust for this upward bias, several methods are proposed. Among them, the cross-validation approach is especially advocated, and an approximate cross-validation is developed to reduce the computational cost. These methods can also be applied to variable selection, identifying the important diagnostic tests.
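The objective being maximized is the empirical AUC of the combined score: the proportion of diseased/non-diseased pairs ranked correctly by the linear score $\beta^\top X$. A minimal sketch, with names and interface of our own choosing:

```python
import numpy as np

def empirical_auc(beta, diseased, healthy):
    """Empirical AUC of the linear combination score s = X @ beta.

    Equals the fraction of (diseased, healthy) pairs whose scores are
    ordered correctly, counting ties as 1/2 (the Mann-Whitney statistic).
    """
    s_d = np.asarray(diseased, dtype=float) @ np.asarray(beta, dtype=float)
    s_h = np.asarray(healthy, dtype=float) @ np.asarray(beta, dtype=float)
    diff = s_d[:, None] - s_h[None, :]         # all diseased-vs-healthy pairs
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))
```

Evaluating this statistic on the same data used to estimate $\beta$ is exactly the re-substitution estimate described as overly optimistic above, which is what motivates the cross-validation adjustment.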

However, the best-subset variable selection method above is impractical when the number of diagnostic tests is large. The third topic therefore develops a LASSO-type procedure for variable selection. To solve the non-convex maximization problem in the proposed procedure, an efficient algorithm is developed based on soft ROC curves, difference-of-convex programming, and a coordinate descent algorithm.
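The soft ROC idea can be sketched as replacing the indicator inside the empirical AUC with a sigmoid, which makes the objective differentiable in $\beta$ so that descent-type updates apply. The smoothing scale and names below are illustrative assumptions, not the dissertation's specification.

```python
import numpy as np

def soft_auc(beta, diseased, healthy, sigma=0.1):
    """Smooth surrogate for the empirical AUC ("soft ROC").

    The indicator 1{score_d > score_h} is replaced by a sigmoid with
    smoothing scale sigma, yielding a differentiable objective in beta.
    """
    s_d = np.asarray(diseased, dtype=float) @ np.asarray(beta, dtype=float)
    s_h = np.asarray(healthy, dtype=float) @ np.asarray(beta, dtype=float)
    diff = (s_d[:, None] - s_h[None, :]) / sigma
    return float(np.mean(1.0 / (1.0 + np.exp(-diff))))
```

As sigma shrinks, the sigmoid approaches the indicator and the surrogate approaches the empirical AUC; the smoothed objective is what a coordinate-wise update scheme can work with.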
