The accuracy of facial recognition algorithms on images taken under controlled conditions has improved significantly over the last two decades. As the focus turns to more unconstrained or relaxed conditions and toward video, there is a need to better understand what factors influence performance. If these factors were better understood, it would be easier to predict how well an algorithm will perform when new conditions are introduced.

Previous studies have examined the effect of various factors on the verification rate (VR), but less attention has been paid to the false accept rate (FAR). In this dissertation, we study the effect various factors have on the FAR, as well as the correlation between marginal FAR and VR. Using these relationships, we propose two models to predict marginal VR and demonstrate that the models predict better than using the previous global VR.
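
To fix the two quantities, a threshold on similarity scores splits genuine (same-person) and impostor comparisons into accepts and rejects. The sketch below illustrates these standard biometric definitions; the scores and function names are illustrative, not taken from the dissertation:

```python
def verification_rate(genuine_scores, threshold):
    # VR: fraction of genuine (same-person) comparisons accepted,
    # i.e. scoring at or above the threshold.
    return sum(s >= threshold for s in genuine_scores) / len(genuine_scores)

def false_accept_rate(impostor_scores, threshold):
    # FAR: fraction of impostor (different-person) comparisons
    # wrongly accepted at the same threshold.
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

# Illustrative similarity scores: genuine pairs tend to score higher.
genuine = [0.91, 0.84, 0.42, 0.77]
impostor = [0.61, 0.20, 0.12, 0.33]
vr = verification_rate(genuine, 0.5)    # 3 of 4 accepted -> 0.75
far = false_accept_rate(impostor, 0.5)  # 1 of 4 accepted -> 0.25
```

Sweeping the threshold trades the two rates off against each other, which is why a marginal FAR and a marginal VR can be studied jointly.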

The latter half focuses on the application of empirical likelihood methods in economics and finance. Two models draw our attention. The first is the predictive regression model with independent and identically distributed errors. Uniform tests have been proposed in the literature without distinguishing whether the predicting variable is stationary or nearly integrated. Here, we extend the empirical likelihood methods of Zhu, Cai and Peng (2014) for independent errors to the case of an AR error process. The proposed new tests do not need to know whether the predicting variable is stationary or nearly integrated, or whether it has a finite or infinite variance. The second model we consider is a GARCH(1,1) sequence or an AR(1) model with ARCH(1) errors. It is known that the observations have heavy tails and that the tail index is determined by an estimating equation. Therefore, one can estimate the tail index by solving the estimating equation with the unknown parameters replaced by the quasi-maximum likelihood estimator (QMLE), and the profile empirical likelihood method can be employed to effectively construct a confidence interval for the tail index. However, this requires that the errors of such a model have at least a finite fourth moment to ensure asymptotic normality with an n^{1/2} rate of convergence and Wilks' theorem. We show that the finite fourth moment condition can be relaxed by employing a least absolute deviations estimator (LADE) instead of the QMLE for the unknown parameters, noting that the estimating equation determining the tail index is invariant to a scale transformation of the underlying model. Furthermore, the proposed tail index estimators have a normal limit with an n^{1/2} rate of convergence under a minimal moment condition, which allows an infinite fourth moment, and Wilks' theorem holds for the proposed profile empirical likelihood methods.
Hence a confidence interval for the tail index can be obtained without estimating any additional quantities such as the asymptotic variance.
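
As a concrete reference point, the GARCH(1,1) case can be written in a standard form, with the tail index determined by a moment equation of Kesten type. This display is a generic textbook formulation, assumed rather than quoted from the dissertation:

```latex
% GARCH(1,1) observations:
X_t = \sigma_t \varepsilon_t, \qquad
\sigma_t^2 = \omega + \alpha X_{t-1}^2 + \beta \sigma_{t-1}^2,
\qquad \omega > 0, \ \alpha \ge 0, \ \beta \ge 0.
% The squared process is heavy tailed with tail index \kappa solving
% the estimating equation
E\big[(\alpha \varepsilon_t^2 + \beta)^{\kappa}\big] = 1.
% The equation involves only (\alpha, \beta) and the error distribution,
% not \omega, and is therefore invariant to a scale transformation of
% the model; plugging in estimates of (\alpha, \beta) and solving in
% \kappa yields a tail index estimator.
```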

The refined inertia of a square real matrix $B$, denoted $\ri(B)$, is the ordered $4$-tuple $(n_+(B), \ n_-(B), \ n_z(B), \ 2n_p(B))$, where $n_+(B)$ (resp., $n_-(B)$) is the number of eigenvalues of $B$ with positive (resp., negative) real part, $n_z(B)$ is the number of zero eigenvalues of $B$, and $2n_p(B)$ is the number of nonzero pure imaginary eigenvalues of $B$. The minimum rank (resp., rational minimum rank) of a sign pattern matrix $\cal A$ is the minimum of the ranks of the real (resp., rational) matrices whose entries have signs equal to the corresponding entries of $\cal A$.
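
A small worked example under this definition, using a block diagonal matrix with one positive, one negative, and one conjugate pair of pure imaginary eigenvalues:

```latex
% B has eigenvalues 1, -2, i, -i:
B \;=\;
\begin{pmatrix}
1 & 0 & 0 & 0\\
0 & -2 & 0 & 0\\
0 & 0 & 0 & -1\\
0 & 0 & 1 & 0
\end{pmatrix},
\qquad
\ri(B) = (1,\, 1,\, 0,\, 2).
% n_+ = 1 (eigenvalue 1), n_- = 1 (eigenvalue -2),
% n_z = 0, and 2n_p = 2 (the pair i, -i).
```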

First, we identify all minimal critical sets of inertias and refined inertias for full sign patterns of order 3. Then we characterize the star sign patterns of order $n\ge 5$ that require the set of refined inertias $\mathbb{H}_n=\{(0, n, 0, 0), (0, n-2, 0, 2), (2, n-2, 0, 0)\}$, which is an important set for the onset of Hopf bifurcation in dynamical systems. Finally, we establish a direct connection between condensed $m \times n$ sign patterns and zero-nonzero patterns with minimum rank $r$ and $m$ point-$n$ hyperplane configurations in ${\mathbb R}^{r-1}$. Some results about the rational realizability of the minimum ranks of sign patterns or zero-nonzero patterns are obtained.

Second, as a generalization of (hyper)graph matchings, we determine the minimum vertex degree threshold asymptotically for perfect K_{a,b,c}-tilings in large 3-uniform hypergraphs, where K_{a,b,c} is any complete 3-partite 3-uniform hypergraph with parts of sizes a, b and c. This partially answers a question of Mycroft, who proved an analogous result with respect to codegree for r-uniform hypergraphs for all r ≥ 3. Our proof uses the Regularity Lemma, the absorbing method, fractional tiling, and a recent result on shadows for 3-graphs.

The second part explores some connections of dense alternating sign matrices with total unimodularity, combined matrices, and generalized complementary basic matrices.

In the third part of the dissertation, an explicit formula for the ranks of dense alternating sign matrices is obtained. The minimum rank and the maximum rank of the sign pattern of a dense alternating sign matrix are determined. Some related results and examples are also provided.

Second, we consider Hamilton cycles in hypergraphs. In particular, we determine the minimum codegree thresholds for Hamilton l-cycles in large k-uniform hypergraphs for l less than k/2. We also determine the minimum vertex degree threshold for loose Hamilton cycles in large 3-uniform hypergraphs. These results generalize the well-known theorem of Dirac for graphs.
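
For orientation, the graph theorem being generalized is Dirac's classical minimum degree condition; its statement is included here for reference, not quoted from the dissertation:

```latex
% Dirac (1952): every graph G on n >= 3 vertices with minimum degree
% at least n/2 contains a Hamilton cycle.
\delta(G) \,\ge\, \tfrac{n}{2} \ \ (n \ge 3)
\quad \Longrightarrow \quad
G \text{ contains a Hamilton cycle.}
```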

Third, we determine the minimum codegree threshold for near-perfect matchings in large k-uniform hypergraphs, thereby confirming a conjecture of Rödl, Ruciński and Szemerédi. We also show that the decision problem of whether a k-uniform hypergraph satisfying a certain minimum codegree condition contains a perfect matching can be solved in polynomial time, which completely solves a problem of Karpinski, Ruciński and Szymańska.

Finally, we determine the minimum vertex degree threshold for perfect tilings of C_4^3 in large 3-uniform hypergraphs, where C_4^3 is the unique 3-uniform hypergraph on four vertices with two edges.

Several different theoretical frameworks were used to analyze student comprehension. This gave multiple viewpoints from which to further explore students' thoughts as they worked aloud, either individually or in a group setting. First, APOS theory (Asiala et al., 1996) was used to analyze students' understanding of the concept of the vertex of a quadratic function in relation to the derivative on certain tasks. Students' personal meanings of the vertex and their impact on the understanding of the derivative were noted, as was students' lack of connection between explicit and real-world problems. Misconceptions about the vertex, trouble with the free-fall formula, and problems with graphing due to a weak schema of quadratic functions were all identified as barriers to student understanding of real-world problems.

Next, Skemp's (1976) relational and instrumental understanding framework was used to explain how students think aloud individually. Trends in the thought process while working alone, as well as students' ability to identify and correct mistakes, were analyzed. Lastly, Vygotsky's (1978) concept of the zone of proximal development was used to describe the difference in students' ability when working by themselves versus in a group setting. In a group setting, some students worked within their zone of proximal development as they were influenced by peers to fix incorrect solutions.

Based on APOS theory, several suggested activities pertaining to the quadratic function and its derivative were developed for implementation in the classroom to help students overcome misconceptions and obstacles. Future research is suggested as a continuation of this work to improve student understanding of quadratic functions and the derivative.

We prove that the intersection algebra is a finitely generated R-algebra when R is a unique factorization domain and the two ideals are principal, and use fans of cones to find the algebra generators. This is done in Chapter 2, which concludes by introducing a new class of algebras called fan algebras.
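
For context, the intersection algebra of two ideals $I, J \subseteq R$ is commonly defined as the following bigraded subalgebra; this standard definition is assumed here for orientation and is not restated in the abstract itself:

```latex
\mathcal{B}_R(I, J) \;=\; \bigoplus_{r,\, s \,\ge\, 0}
\big( I^r \cap J^s \big)\, u^r v^s \ \subseteq\ R[u, v].
% Finite generation as an R-algebra means finitely many elements
% generate every bigraded piece I^r \cap J^s.
```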

Chapter 3 deals with the intersection algebra of principal monomial ideals in a polynomial ring, where the theory of semigroup rings and toric ideals can be used. A detailed investigation of the intersection algebra of the polynomial ring in one variable is presented. The intersection algebra in this case is connected to semigroup rings associated to systems of linear diophantine equations with integer coefficients, introduced by Stanley.

In Chapter 4, we present a method for obtaining the generators of the intersection algebra for arbitrary monomial ideals in the polynomial ring.

Left truncation has been studied extensively, while right truncation has not received the same level of attention. In one of the earliest studies on right truncation, Lagakos et al. (1988) proposed transforming a right truncated variable to a left truncated variable and then applying existing methods to the transformed variable. The reverse-time hazard function is introduced through this transformation. However, this quantity does not have a natural interpretation. Gaps remain in the inference for the regular forward-time hazard function with right truncated data. This dissertation discusses variance estimation for the cumulative hazard estimator, a one-sample log-rank test, and the comparison of hazard rate functions among finite independent samples in the context of right truncation.
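
For orientation, for a continuous lifetime with density $f$, distribution function $F$, and survival function $S = 1 - F$, the two hazards mentioned above take the following standard forms (included for reference, not quoted from the dissertation):

```latex
\lambda(t) \;=\; \frac{f(t)}{S(t)} \quad \text{(forward-time hazard)},
\qquad
r(t) \;=\; \frac{f(t)}{F(t)} \quad \text{(reverse-time hazard)}.
% The forward-time hazard is the instantaneous failure rate given
% survival up to t; the reverse-time hazard instead conditions on
% failure by time t, which is why it lacks a natural interpretation
% in forward time.
```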

First, the relation between the reverse- and forward-time cumulative hazard functions is clarified. This relation leads to nonparametric inference for the cumulative hazard function. Jiang (2010) recently conducted research in this direction and proposed two variance estimators for the cumulative hazard estimator. Some revisions to these variance estimators are suggested in this dissertation and evaluated in a Monte Carlo study.

Second, this dissertation studies hypothesis testing for right truncated data. A series of tests is developed with the hazard rate function as the target quantity. A one-sample log-rank test is discussed first, followed by a family of weighted tests for comparisons among finite $K$ samples. Particular weight functions lead to the log-rank, Gehan, and Tarone-Ware tests, and these three tests are evaluated in a Monte Carlo study.

Finally, this dissertation studies nonparametric inference for the hazard rate function with right truncated data. The kernel smoothing technique is utilized in estimating the hazard rate function. A Monte Carlo study investigates the uniform kernel smoothed estimator and its variance estimator. The uniform, Epanechnikov and biweight kernel estimators are implemented in an example using blood-transfusion-related AIDS data.
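
The smoothing step itself can be sketched generically: given the jump times and jump sizes of a cumulative hazard estimator, a kernel estimate of the hazard rate averages the increments within a bandwidth. The sketch below is a minimal illustration of kernel smoothing with a uniform kernel, not the dissertation's estimator for truncated data:

```python
def uniform_kernel(u):
    # Uniform kernel on [-1, 1]: K(u) = 1/2 for |u| <= 1, else 0.
    return 0.5 if abs(u) <= 1 else 0.0

def smoothed_hazard(t, jump_times, jump_sizes, bandwidth, kernel=uniform_kernel):
    """Kernel-smoothed hazard rate built from the increments of a
    cumulative hazard estimator:
        lambda_hat(t) = (1/b) * sum_i K((t - t_i) / b) * dLambda_i
    """
    return sum(
        kernel((t - ti) / bandwidth) * d
        for ti, d in zip(jump_times, jump_sizes)
    ) / bandwidth

# Constant-hazard illustration: cumulative hazard jumps of size 0.1 at
# t = 1, ..., 10 smooth to a rate of about 0.1 in the interior.
times = list(range(1, 11))
sizes = [0.1] * 10
rate = smoothed_hazard(5.0, times, sizes, bandwidth=2.5)  # ~ 0.1
```

Replacing `uniform_kernel` with an Epanechnikov or biweight kernel changes only the weighting function, which is the comparison the Monte Carlo study above addresses.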
