Browsing Department of Biostatistics and Epidemiology: Theses and Dissertations by Title
Now showing items 1-18 of 18

An Iterative Procedure to Select and Estimate Wavelet-Based Functional Linear Mixed-Effects Regression Models
Actigraphy is the continuous long-term measurement of activity-induced acceleration by means of a portable device that often resembles a watch and is typically worn on the wrist. Actigraphy is increasingly being used in clinical research to measure sleep and activity rhythms that might not otherwise be available using traditional techniques such as polysomnography. It has been shown to be of value when assessing circadian rhythm disorders and sleep disorders and when evaluating treatment outcomes, and it can provide more objective information on sleep habits in the patient's natural sleep environment than the patient's recollection of their activity or a written sleep diary. We propose a wavelet-based functional linear mixed model to investigate the impact of functional predictors on a scalar response when repeated measurements are available on multiple subjects. The advantage of the proposed model is that each subject has both individual scalar covariate effects and individual functional effects over time, while also sharing common population scalar covariate effects and common population slope functions. An iterative procedure is used to estimate and select the fixed and random effects by utilizing the partial consistency property of the random effect coefficients and selecting groups of random effects simultaneously via the smoothly clipped absolute deviation (SCAD) penalty function. In the first study of its kind, we compare multiple functional regression methods across a large number of simulation parameter combinations. The proposed model is applied to actigraphy data to investigate the effect of daily activity on Hamilton Rating Scale for Depression (HRSD), Insomnia Severity Index (ISI), and Reduced Morningness-Eveningness Questionnaire (RMEQ) scores.
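For reference, the SCAD penalty of Fan and Li (2001) mentioned in this abstract has a simple closed piecewise form. The sketch below evaluates it in Python with the conventional shape parameter a = 3.7 (a standard default, not a value taken from the dissertation).

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """SCAD penalty of Fan & Li (2001), evaluated elementwise on |theta|.

    Linear (lasso-like) near zero, quadratic blending in the middle,
    and constant beyond a*lam, so large coefficients are not shrunk.
    """
    t = np.abs(np.asarray(theta, dtype=float))
    linear = lam * t                                      # |theta| <= lam
    quad = -(t**2 - 2*a*lam*t + lam**2) / (2*(a - 1))     # lam < |theta| <= a*lam
    const = (a + 1) * lam**2 / 2                          # |theta| > a*lam
    return np.where(t <= lam, linear, np.where(t <= a*lam, quad, const))
```

Because the penalty is flat past a*lam, SCAD avoids the bias that the lasso's ever-growing penalty imposes on large effects, which is why it is a popular choice for selecting groups of random effects.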

Assessing the Validity of Vitamin D Supplementation in Patients with Symptomatic Knee Osteoarthritis
In an older adult population, does supplementation with 25-hydroxyvitamin D alleviate symptoms of the disease in patients with symptomatic knee osteoarthritis and low serum vitamin D?

A Bayesian Framework To Detect Differentially Methylated Loci in Both Mean And Variability with Next Generation Sequencing
DNA methylation at CpG loci is the best known epigenetic process involved in many complex diseases, including cancer. In recent years, next-generation sequencing (NGS) has been widely used to generate genome-wide DNA methylation data. Although substantial evidence indicates that the difference in mean methylation proportion between normal and disease is meaningful, it has recently been proposed that it may also be important to consider DNA methylation variability underlying common complex disease and cancer. We introduce a robust hierarchical Bayesian framework with a latent Gaussian model which incorporates both mean and variance to detect differentially methylated loci for NGS data. To identify methylation loci associated with disease, we consider Bayesian statistical hypothesis testing for methylation mean and methylation variance using a two-dimensional highest posterior density region. To improve computational efficiency, we use Integrated Nested Laplace Approximation (INLA), which combines Laplace approximations and numerical integration in a very efficient manner for deriving marginal posterior distributions. We performed simulations to compare our proposed method to alternative methods. The simulation results illustrate that our proposed approach is more powerful, in that it detects fewer false positives while maintaining a true positive rate comparable to the other methods.

Bayesian Functional Clustering and VMR Identification in Methylation Microarray Data
The study of the relation between DNA and health and disease has had a great deal of time, energy, and money invested in it over the years. As scientific knowledge has accumulated, it has become clear that this relation depends not only on the sequence of nucleotide bases, but also on permanent modifications of DNA that affect DNA transcription and thus have a macroscopic effect on an individual. The study of such modifications is known as epigenetics. Epigenetic changes have been shown to play a role in certain diseases, including cancer (Novak 2004). Finding locations of differential methylation between two groups of cells is an ongoing area of research in both science and bioinformatics. The number of statistical methods for establishing differential DNA methylation between two groups is limited (Bock 2012). Many methods are developed for next-generation sequencing data and may not work for microarray data, and vice versa. Bisulfite sequencing, the next-generation sequencing technique for obtaining methylation data, often comes with limited sample size, and considerations must be made for low and variable coverage and for smoothing the methylation values. In addition, these methods can be sensitive to how individual CpG sites are grouped together as a region for analysis: if the DMRs are small relative to the sizes of the established regions, then a method may not detect a region as having differential methylation. Robust methods for clustering microarray data have also been an ongoing area of research. A method that could be applied to microarray data could increase the sample size and mitigate the previous problems, provided it is robust to missing values, outliers, and microarray data noise.
Functional clustering has been shown to be effective when properly conducted on gene expression data. It can be used when the data have temporal measurements to identify genes that are possibly co-expressed. The clustering of methylation data can also identify epigenetic subgroups that can potentially be very useful (Wang, 2011).

Classification Methods for Circular-Linear Data Using Periodic Functions
In many fields, such as medicine, agriculture, and environmental studies, data are collected over time and can have a repeated pattern within a certain time period. Data with a linear response or measure, such as blood pressure or solar energy, and a circular predictor are called circular-linear data. Data having repeated measures over time are usually analyzed using longitudinal analysis methods. However, applying classical longitudinal data analysis to circular-linear data is generally inappropriate, since the circular pattern of time would be treated as a simple continuous variable. Parametric approaches for circular-linear data have been developed using various modeling methods. We propose a Bayesian nonparametric MCMC circular smoothing splines approach, which is not only appropriate but also adds more flexibility for modeling and classifying circular-linear data. We first fit the circular-linear data on an estimated circle to elicit functional patterns from the data, and then classify the patterns. In developing the classification procedure, we use functional data analysis and widely used dimension-reduction and classification methods such as principal component analysis and the support vector machine. We evaluate the performance of the proposed modeling and classification methods through extensive simulation, and demonstrate them using the 2005-2006 NHANES physical activity monitor data on insomnia patients. In the simulation study, the nonparametric Bayesian smoothing splines method coupled with the support vector machine approach yields the best classification performance in terms of concordance rate, and our proposed nonparametric approach performs slightly better than the established parametric methods. In addition, the initial data-fitting procedure, which uses a periodic regression function to reduce noise in the data, is shown to improve performance in the classification problem. The results of the NHANES data analysis are consistent with the simulation.
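The simplest way to respect a circular predictor, which the abstract contrasts with treating time as an ordinary continuous variable, is a low-order harmonic (periodic) regression. The sketch below is illustrative only, using simulated hour-of-day data with an assumed 24-hour sinusoidal mean; it is not the dissertation's Bayesian smoothing splines method.

```python
import numpy as np

# Simulated circular-linear data: a linear response measured at
# hour-of-day t, with an assumed 24-hour periodic mean pattern.
rng = np.random.default_rng(0)
t = rng.uniform(0, 24, size=200)                           # circular predictor (hours)
y = 3 + 2 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.3, 200)

# First-harmonic periodic regression: y ~ 1 + sin + cos, by least squares.
# The sin/cos pair makes the fit continuous across the 24h -> 0h wrap.
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t / 24),
                     np.cos(2 * np.pi * t / 24)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
```

Higher harmonics (sin and cos at periods 12h, 8h, ...) can be appended as extra columns when the daily pattern is not a single sinusoid.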

Classifying Rheumatoid Arthritis Risk with Genetic Subgroups Using Genome-Wide Association
Structured genome-wide association methods can be used to find population substructure, determine significant SNPs, and subsequently narrow the field of SNPs to those most significant for determining disease risk. Beginning with more than 500,000 SNPs and rheumatoid arthritis (RA) phenotype data for cases and controls, we used a three-part clustering approach that found 684 SNPs significant for determining RA after accounting for clusters, and of those, 168 SNPs with differing odds across clusters. These 168 SNPs were used to create 16 population subgroups, each revealing a unique pattern of minor allele frequencies. The subgroups showed some commonality in multidimensional scaling plots, however, and were combined into five RA risk categories, each with odds differing from the other categories with p-values less than 0.0001. Thus, based on information from 168 SNPs, it may be possible to assign an individual to one of five distinct RA risk categories.

Correlation Coefficient Inference for Left-Censored Biomarker Data with Known Detection Limits
Researchers are often interested in the relationship between biological concentrations obtained using two different assays, both of which may be biomarkers. Despite continuing advances in biotechnology, the value of a particular biomarker may fall below some known limit of detection (LOD). Such values are referred to as non-detects (NDs) and can be treated as left-censored observations. When attempting to measure the association between two concentrations, both subject to NDs, serious complications can arise in the data analysis. Simple substitution, random imputation, and maximum likelihood estimation are just a few of the methods that have been proposed for handling NDs when estimating the correlation between two left-censored variables. Unfortunately, many of the popular methods require that the data follow a bivariate normal distribution or that only a small percentage of the data for each variable fall below the LOD. These assumptions are often violated with biomarker data. In this paper, we evaluate the performance of several methods, including Spearman's rho, when the data do not follow a bivariate normal distribution and when there are moderate to large censoring proportions in one or both variables. We evaluate seven methods for estimating the correlation, ρ, between two left-censored variables using bias, median absolute deviation, 95% confidence interval width, and coverage probability across various sample sizes, correlations, and censoring proportions. We show that substitution and imputation methods yield biased estimates of ρ and less-than-nominal coverage probability under most of the simulation parameters we examined. We recommend the maximum likelihood method for general use, even when the data depart significantly from bivariate normality.
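The "simple substitution" approach the abstract warns about is easy to state concretely: replace every non-detect with LOD/2 and correlate the result. The sketch below does this with simulated lognormal biomarker pairs (the distributions, censoring level, and latent correlation are invented for illustration); per the abstract's findings, the resulting estimate should be treated as biased.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Simulated paired biomarker concentrations: correlated lognormals.
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=500)
x, y = np.exp(z[:, 0]), np.exp(z[:, 1])

# Left-censor each variable at a known LOD (~30% non-detects each),
# then apply the common LOD/2 substitution for the NDs.
lod_x, lod_y = np.quantile(x, 0.3), np.quantile(y, 0.3)
x_sub = np.where(x < lod_x, lod_x / 2, x)
y_sub = np.where(y < lod_y, lod_y / 2, y)

# Spearman's rho on the substituted data; the tied ND values
# attenuate the estimate relative to the uncensored correlation.
rho, pval = spearmanr(x_sub, y_sub)
```

Because all non-detects collapse to a single tied value, the rank correlation is systematically pulled toward zero as the censoring proportion grows, which is one face of the bias the dissertation quantifies.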

False Coverage Rate-Adjusted Smoothed Bootstrap Simultaneous Confidence Intervals for Selected Parameters
Many modern applications involve a large number of populations with high-dimensional parameters. Because there are so many parameters, researchers often draw inferences only about the most significant ones, called selected parameters. Benjamini and Yekutieli (2005) proposed the false coverage-statement rate (FCR) method for multiplicity correction when constructing confidence intervals for selected parameters only. The FCR for confidence intervals parallels the false discovery rate for multiple hypothesis testing. In practice, FCR-adjusted approximate confidence intervals for selected parameters are typically constructed using either the bootstrap method or the normal approximation. However, these approximate confidence intervals show higher FCR for small and moderate sample sizes. We therefore suggest a novel procedure for constructing simultaneous confidence intervals for the selected parameters using a smoothed bootstrap based on a kernel density estimator. A pertinent problem with the smoothed bootstrap approach is how to choose the unknown bandwidth in some optimal sense. We derive an asymptotically optimal choice of bandwidth under which the resulting smoothed bootstrap confidence intervals give better control of the FCR than their competitors. We further show that the suggested smoothed bootstrap simultaneous confidence intervals are FCR-consistent if the dimension of the data grows no faster than N^(3/2). The finite-sample performance of our method is illustrated through empirical studies, which show that the proposed method can be successfully applied in practice.
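The mechanical idea behind a smoothed bootstrap is compact: resampling from a Gaussian kernel density estimate is equivalent to an ordinary resample plus kernel noise scaled by the bandwidth. The sketch below illustrates that equivalence with a Silverman-type rule-of-thumb bandwidth (an off-the-shelf choice, not the dissertation's FCR-optimal bandwidth) and a plain percentile interval rather than the FCR-adjusted simultaneous intervals.

```python
import numpy as np

def smoothed_bootstrap(data, n_boot, h, rng):
    """Smoothed bootstrap resamples: an ordinary bootstrap resample
    plus Gaussian noise with scale h, i.e. samples from the Gaussian
    kernel density estimate of the data with bandwidth h."""
    n = len(data)
    idx = rng.integers(0, n, size=(n_boot, n))
    return data[idx] + h * rng.normal(size=(n_boot, n))

rng = np.random.default_rng(2)
data = rng.normal(5.0, 1.0, size=100)

# Silverman-type rule-of-thumb bandwidth (illustrative choice only).
h = 1.06 * data.std(ddof=1) * len(data) ** (-1 / 5)

boot_means = smoothed_bootstrap(data, 2000, h, rng).mean(axis=1)
ci = np.percentile(boot_means, [2.5, 97.5])   # plain percentile interval
```

Setting h = 0 recovers the ordinary bootstrap, which makes the bandwidth the single tuning knob whose optimal rate the dissertation derives.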

Mathematical and Stochastic Modeling of HIV Immunology and Epidemiology
In HIV virus dynamics, controlling viral load and maintaining the CD4 count at a high level are the primary goals for providers. In recent years a new molecule was discovered, eCD4-Ig, which mimics CD4 if introduced into the human body and has the potential to change existing HIV virus dynamics. To understand the dynamics of viral load, eCD4-Ig, and CD4 cells, we developed mathematical models incorporating the interactions between this new molecule and other known immunological and virological quantities. We further investigated model-based speculations for management, and obtained the level of eCD4-Ig required for elimination of the virus. Next, we built an epidemiological model for HIV spread and control among discordant couples through the dynamics of PrEP (pre-exposure prophylaxis). For this, a stochastic model based on actuarial assumptions is used to obtain the mean remaining time for a couple to stay discordant. We also generalized the single hookup/marriage stochastic model to a multiple hookup/marriage model.

A Modified Bump Hunting Approach with Correlation-Adjusted Kernel Weight for Detecting Differentially Methylated Regions on the 450K Array
DNA methylation plays an important role in the regulation of gene expression, as hypermethylation is associated with gene silencing. The general purpose of this dissertation is the development of a statistical method, called DMR Detector, for detecting differentially methylated regions (DMRs) on the 450K array. DMR Detector makes three key modifications to an existing method called Bumphunter: first, which statistic to collect from the initial fitting for further analysis; second, performing kernel smoothing under the assumption of correlated errors using a newly proposed correlation-adjusted kernel weight; and third, how to define regions of interest. In simulation, the method was shown to have high power comparable to Bumphunter, with a consistently lower family-wise type I error rate, controlled well below the 0.1 FDR. DMR Detector was applied to real data and detected one DMR that was not detected by Bumphunter.
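The kernel smoothing step in bump hunting can be pictured with the standard Nadaraya-Watson smoother: each site's statistic is replaced by a distance-weighted average over nearby sites. The sketch below uses plain Gaussian weights on toy positions and statistics (all invented); the dissertation's contribution is to replace these correlation-free weights with a correlation-adjusted kernel weight.

```python
import numpy as np

def kernel_smooth(pos, stat, bandwidth):
    """Nadaraya-Watson smoothing of a per-site statistic over genomic
    position with a Gaussian kernel. Standard (correlation-free)
    weights; a correlation-adjusted weight would modify `w` below."""
    d = pos[:, None] - pos[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    return (w * stat[None, :]).sum(axis=1) / w.sum(axis=1)

pos = np.arange(0, 100, 5, dtype=float)                # toy CpG positions
stat = np.where((pos >= 40) & (pos <= 60), 2.0, 0.0)   # a "bump" of signal
smooth = kernel_smooth(pos, stat, bandwidth=5.0)
```

Candidate DMRs are then defined as runs of consecutive sites whose smoothed statistic exceeds a threshold, which is where the "how to define regions of interest" modification enters.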

A Modified Information Criterion in the 1d Fused Lasso for DNA Copy Number Variant Detection Using Next Generation Sequencing Data
DNA copy number variations (CNVs) are associated with many human diseases. Recently, CNV studies have been carried out using next generation sequencing (NGS) technology, which produces millions of short reads. With NGS read-ratio data, we use 1d fused lasso regression for CNV detection: given the number of copy number changes, the corresponding genomic locations are estimated by fitting the 1d fused lasso. Estimation of the number of copy number changes depends on a tuning parameter in the 1d fused lasso. In this dissertation, we propose a new modified Bayesian information criterion, called JMIC, to estimate the optimal tuning parameter in the 1d fused lasso. In our theoretical study, we prove that the number of change points estimated by JMIC converges to the true number of changes, and our simulation studies show that JMIC outperforms the other criteria considered. Finally, we apply the proposed method to read-ratio data from the breast tumor cell line HCC1954 and its matched cell line, provided by Chiang et al. (2009).
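The selection step can be sketched independently of the fused lasso itself: score each candidate set of change points by the fit of a piecewise-constant mean plus a complexity penalty, and keep the minimizer. The sketch below uses a generic modified-BIC form as an illustrative stand-in; JMIC's exact penalty is defined in the dissertation and is not reproduced here, and the data and candidate sets are invented.

```python
import numpy as np

def piecewise_fit_rss(y, breaks):
    """Residual sum of squares of a piecewise-constant mean fit,
    with segment boundaries given by the sorted index list `breaks`."""
    rss = 0.0
    for lo, hi in zip([0] + breaks, breaks + [len(y)]):
        seg = y[lo:hi]
        rss += ((seg - seg.mean()) ** 2).sum()
    return rss

def bic_type_score(y, breaks, c=2.0):
    """Generic modified-BIC score (illustrative stand-in for JMIC):
    n*log(RSS/n) plus a penalty growing with the change-point count."""
    n, k = len(y), len(breaks)
    return n * np.log(piecewise_fit_rss(y, breaks) / n) + c * (k + 1) * np.log(n)

rng = np.random.default_rng(3)
# Read-ratio-like data with one true copy-number change at position 50.
y = np.concatenate([rng.normal(1.0, 0.1, 50), rng.normal(1.5, 0.1, 50)])
candidates = [[], [50], [25, 50, 75]]          # 0, 1, or 3 change points
best = min(candidates, key=lambda b: bic_type_score(y, b))
```

In the actual method the candidate segmentations come from the fused lasso solution path over the tuning parameter, and the criterion picks the tuning value rather than scoring arbitrary break lists.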

Multivariate Poisson Abundance Models for Analyzing Antigen Receptor Data
Antigen receptor data are an important source of information for immunologists but are highly challenging to analyze statistically, due to the presence of a huge number of T-cell receptors in mammalian immune systems and the severe undersampling bias associated with commonly used data collection procedures. Many important immunological questions can be stated in terms of the richness and diversity of T-cell subsets under various experimental conditions. This dissertation presents a class of parametric models and uses a special case of them to compare the richness and diversity of antigen receptor populations in mammalian T-cells. The parametric models are based on a representation of the observed receptor counts as a multivariate Poisson abundance model (mPAM). A Bayesian model-fitting procedure is developed which allows fitting of the mPAM parameters with the complete likelihood, as opposed to its conditional version, which was used previously. The new procedure is shown to be often considerably more efficient (as measured by the amount of Fisher information) in the regions of the mPAM parameter space relevant to modeling T-cell data. A richness estimator based on the special case of the mPAM is shown to be superior to several existing richness estimators from the statistical ecology literature under the severe undersampling conditions encountered in antigen receptor data collection, and the comparative diversity analyses based on the mPAM special case yield biologically meaningful results when applied to the T-cell receptor repertoires of mice.
It is also shown that the time needed for the Bayesian model-fitting procedure for the mPAM special case scales well as the dimension increases, and that the computational resources required to conduct complete statistical analyses can be drastically lower for our Bayesian model-fitting procedure than for code based on the conditional likelihood approach.

A New Method For Analyzing 1:N Matched Case Control Studies With Incomplete Data
1:n matched case-control studies are commonly used to evaluate the association between exposure to a risk factor and a disease, where one case is matched to up to n controls. The odds ratio is typically used to quantify this association. Difficulties in estimating the true odds ratio arise when the exposure status is unknown for at least one individual in a group. When the exposure status is known for all individuals in a group, the true odds ratio is estimated as the ratio of the counts in the discordant cells of the observed two-by-two table; when all data are independent, the odds ratio is estimated using the cross-product ratio from the observed table. Conditional logistic regression estimates are used for incompletely matched data. In this dissertation we suggest a simple method for estimating the odds ratio when the sample consists of a combination of paired and unpaired observations with 1:n matching. The method uses a weighted average of the odds ratio estimates described above. This dissertation compares the new method to existing methods via simulation.
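The two component estimators the abstract describes, and one way to average them, can be written out directly. The sketch below uses invented counts and an inverse-variance weighted average on the log-odds scale; that weighting is one reasonable textbook choice, not necessarily the dissertation's exact weights.

```python
import numpy as np

# Matched-pair part: discordant counts from the paired 2x2 table
# (b = case exposed / control not; c = control exposed / case not).
# The paired (McNemar-type) odds ratio estimate is b / c.
b, c = 40, 20
or_paired = b / c
var_log_paired = 1 / b + 1 / c                  # var of log OR, paired part

# Independent (unpaired) part: ordinary 2x2 table, cross-product ratio.
a, b2, c2, d = 30, 15, 20, 35
or_unpaired = (a * d) / (b2 * c2)
var_log_unpaired = 1 / a + 1 / b2 + 1 / c2 + 1 / d

# Inverse-variance weighted average on the log scale, then back-transform.
w1, w2 = 1 / var_log_paired, 1 / var_log_unpaired
log_or = (w1 * np.log(or_paired) + w2 * np.log(or_unpaired)) / (w1 + w2)
or_combined = float(np.exp(log_or))
```

Averaging on the log scale keeps the estimate invariant to which group is labeled "exposed" and lands the combined value between the two component estimates.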

Penalized Least Squares and the Algebraic Statistical Model for Biochemical Reaction Networks
Systems biology seeks to understand the formation of macro structures such as cellular processes and higher-level cellular phenomena by investigating the interactions of a system's individual components. For cellular biology, the goal is to understand the dynamic behavior of biological materials within the cell, a container consisting of smaller materials such as mRNA, proteins, enzymes, and other intermediates necessary for regulating intracellular functions and chemical species levels. Understanding these cellular dynamics is needed to help develop new drug therapies that can be targeted to specific molecules or specific genes in order to perturb the system for a desired result. In this work we develop inferential procedures to estimate reaction rate coefficients in cellular systems of ordinary differential equations (ODEs) from noisy data arising from realizations of molecular trajectories. These systems are assumed to obey the so-called chemical mass action law of kinetics, with a corresponding deterministic mass action limit as the system size becomes infinite. Estimation and inference are based on penalized least squares estimates, whose covariance structure corresponds to the solution of a system of coupled non-autonomous ODEs. A second topic is network topology estimation. The algebraic statistical model (ASM) offers a means of performing this topological inference for the special class of conic networks. We prove that the ASM recovers the true network topology as the number of samples grows without bound, a property known in the literature as sparsistency, and we propose a method to extend the ASM to a wider class of networks that are decomposable into multiple cones.

A Resampling Method for Time-Course Gene Expression Data for Gene Network Inference
Manipulation of cellular functions may aid in the treatment and/or cure of a disease, so identifying the topology of a gene regulatory network (GRN) and the molecular role of each gene is essential. Discovering GRNs from gene expression data is hampered by intrinsic attributes of the data: small sample size n, a large number of variables (genes) p, and an unknown error structure. Numerous theoretical approaches for GRN inference attempt to overcome these difficulties; however, most either provide only point estimators, such as coefficient estimates, or make numerous assumptions that are often incompatible with the data. Furthermore, these differing solutions cause GRN inference methods to produce inconsistent results. This dissertation proposes a resampling method for time-course gene expression data that can provide interval estimators for existing GRN inference methods, without any distributional assumptions, via bootstrapping and a statistical model that accounts for the various components of the data structure, such as the trend of gene expression, the errors of time-course data, and the correlation between genes. This method produces more precise GRNs that are consistent with the observed gene expression data. Furthermore, by applying our method to multiple existing GRN inference methods, the networks obtained from different inference methods can be combined using the joint confidence region for their parameters. The method can thus be used for the validation of identified networks and of GRN inference methods.

Statistical Methods for Reaction Networks
Stochastic reaction networks are important tools for modeling many biological phenomena, and understanding these networks is important in a wide variety of applied research, such as disease treatment and drug development. Statistical inference about the structure and parameters of reaction networks, sometimes referred to in this setting as model calibration, is often challenging due to intractable likelihoods. Here we utilize an idea similar to that of generalized estimating equations (GEE), in this context the so-called martingale estimating equations, for estimating the reaction rates of the network. The variance component is estimated using the approximate variance under the linear noise approximation, which is based on partial differential equations, the Fokker-Planck equations, providing an approximation to the exact chemical master equation. The method is applied to data from the plague outbreak at Eyam, England, in 1665-1666 and to COVID-19 pandemic data. We show empirically that the proposed method gives good estimates of the parameters in large-volume settings and works well in small-volume settings.

Statistical Methods to Detect Differentially Methylated Regions with Next-Generation Sequencing Data
Researchers in genomics are increasingly interested in epigenetic factors such as DNA methylation, because they play an important role in regulating gene expression without changes to the DNA sequence. Abnormal DNA methylation is associated with many human diseases, including various types of cancer. We propose three different approaches to test for differentially methylated regions (DMRs) associated with complex traits, while accounting for correlations within and among CpG sites in the DMRs. The first approach is a nonparametric method using a kernel distance statistic, and the second is a likelihood-based method using a binomial spatial scan statistic. Both detect differentially methylated regions between cases and controls along the genome. The kernel distance method uses a kernel function, while the binomial scan statistic approach uses a mixed effect model to incorporate correlations among CpG sites. Extensive simulations show that both approaches have excellent control of type I error and reasonable statistical power; the binomial scan statistic approach appears to have higher power, while the kernel distance method is computationally faster. We also propose a third method, under the Bayesian framework, for comparing methylation rates when disease status is classified into ordinal multinomial categories (e.g., stages of cancer). The DMRs are detected using moving windows along the genome. Within each window, the Bayes factor is calculated to compare the two models corresponding to constant versus monotonic methylation rates among the groups. As in the scan statistic approach, the correlations between sites are incorporated using a mixed effect model. Results from extensive simulation indicate that the Bayesian method is statistically valid and reasonably powerful for detecting DMRs associated with disease severity. The proposed methods are demonstrated using data from a chronic lymphocytic leukemia (CLL) study.

Two-Sample Tests for High-Dimensional Means with Prepivoting and Data Transformation
Within the medical field, the demand to store and analyze small-sample, large-variable data has become ever more abundant. Several two-sample tests for equality of means, including the revered Hotelling's T^2 test, have been established for the case where the combined sample size of the two populations exceeds the dimension of the variables. However, tests such as Hotelling's T^2 become either unusable or low-powered when the number of variables exceeds the combined sample size. We propose a test using both prepivoting and the Edgeworth expansion that maintains high power in this higher-dimensional scenario, known as the "large p, small n" problem. Our test's finite-sample performance is compared with other recently proposed tests designed to handle the "large p, small n" situation. We apply our test to a microarray gene expression data set and report competitive rates for both power and Type I error.
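The breakdown of Hotelling's T^2 in the "large p, small n" regime is mechanical: the pooled sample covariance has rank at most n1 + n2 - 2, so it is singular and cannot be inverted once p exceeds that. The sketch below demonstrates the singularity on simulated data and computes a covariance-free mean-distance statistic in the spirit of earlier high-dimensional tests (e.g., Bai and Saranadasa); it is not the prepivoting/Edgeworth test proposed in the dissertation, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
n1, n2, p = 15, 15, 200                   # "large p, small n": p > n1 + n2
X = rng.normal(0.0, 1.0, size=(n1, p))
Y = rng.normal(0.0, 1.0, size=(n2, p))    # equal means under H0

# Pooled sample covariance: rank at most n1 + n2 - 2 < p, hence
# singular, so Hotelling's T^2 (which needs its inverse) is unusable.
S = ((n1 - 1) * np.cov(X, rowvar=False)
     + (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
rank = np.linalg.matrix_rank(S)

# A covariance-free alternative: squared Euclidean distance between
# the sample mean vectors (the building block of several large-p tests).
diff = X.mean(axis=0) - Y.mean(axis=0)
stat = float(diff @ diff)
```

Tests built on such statistics then center and scale them (or, as here proposed, prepivot them) to obtain a usable null distribution without ever inverting the covariance.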