
PubMed Journal Database | Biometrical journal. Biometrische Zeitschrift RSS


The US National Library of Medicine and the National Institutes of Health manage PubMed.gov, which comprises more than 29 million records of biomedical literature, including MEDLINE citations, life science and medical journal articles, reviews, reports and books.

BioPortfolio aims to cross-reference relevant information on published papers, clinical trials and news associated with selected topics and specialities.

For example, view all recent relevant publications on Epigenetics and the associated publications and clinical trials.

Showing PubMed Articles 1–25 of 93 from Biometrical journal. Biometrische Zeitschrift

Longitudinal analysis of pre- and post-treatment measurements with equal baseline assumptions in randomized trials.

For continuous variables in randomized controlled trials, longitudinal analysis of pre- and post-treatment measurements as bivariate responses has recently become one of the analytical methods used to compare two treatment groups. Under random allocation, means and variances of pretreatment measurements are expected to be equal between groups, but covariances and posttreatment variances are not. Under random allocation with unequal covariances and posttreatment variances, we compared asymptotic variances of the treatment ef...
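As a generic illustration of the setup described above (notation assumed here, not taken from the paper), the pre- and post-treatment measurements in treatment group g can be written as a bivariate normal response with a common baseline mean:

    (Y^{\text{pre}}, Y^{\text{post}}) \sim N_2\big((\mu,\ \mu + \delta_g),\ \Sigma_g\big), \qquad
    \Sigma_g = \begin{pmatrix} \sigma^2 & \rho_g\,\sigma\,\tau_g \\ \rho_g\,\sigma\,\tau_g & \tau_g^2 \end{pmatrix},

where randomization justifies a common baseline mean \mu and baseline variance \sigma^2 across groups, while the covariance and the post-treatment variance \tau_g^2 may differ between groups.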

Estimating the decision curve and its precision from three study designs.

The decision curve plots the net benefit of a risk model for making decisions over a range of risk thresholds, corresponding to different ratios of misclassification costs. We discuss three methods to estimate the decision curve, together with corresponding methods of inference and methods to compare two risk models at a given risk threshold. One method uses risks (R) and a binary event indicator (Y) on the entire validation cohort. This method makes no assumptions on how well-calibrated the risk model is ...
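For reference, the standard net benefit underlying the decision curve, at risk threshold t in a validation cohort of size n, is

    NB(t) = \frac{TP(t)}{n} - \frac{FP(t)}{n}\cdot\frac{t}{1-t},

where TP(t) and FP(t) are the numbers of true and false positives when subjects with risk R \ge t are classified as positive; the ratio t/(1-t) encodes the relative misclassification costs mentioned above.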

A multistate model for early decision-making in oncology.

The development of oncology drugs progresses through multiple phases, where after each phase a decision is made about whether to move a molecule forward. Early phase efficacy decisions are often made on the basis of single-arm studies, using a set of rules to define whether the tumor improves ("responds"), remains stable, or progresses (response evaluation criteria in solid tumors [RECIST]). These decision rules implicitly assume some form of surrogacy between tumor response and long-term endpoints...

A flexible design for advanced Phase I/II clinical trials with continuous efficacy endpoints.

There is growing interest in integrated Phase I/II oncology clinical trials involving molecularly targeted agents (MTA). Among the main challenges of these trials are nontrivial dose-efficacy relationships and the administration of MTAs in combination with other agents. While some designs have recently been proposed for such Phase I/II trials, the majority of them consider the case of binary toxicity and efficacy endpoints only. At the same time, a continuous efficacy endpoint can carry more information about the a...

Power gains by using external information in clinical trials are typically not possible when requiring strict type I error control.

In the era of precision medicine, novel designs are developed to deal with flexible clinical trials that incorporate many treatment strategies for multiple diseases in one trial setting. This situation often leads to small sample sizes in disease-treatment combinations and has fostered the discussion about the benefits of borrowing of external or historical information for decision-making in these trials. Several methods have been proposed that dynamically discount the amount of information borrowed from hi...

Joint regression analysis for survival data in the presence of two sets of semi-competing risks.

In many clinical trials, multiple time-to-event endpoints including the primary endpoint (e.g., time to death) and secondary endpoints (e.g., progression-related endpoints) are commonly used to determine treatment efficacy. These endpoints are often biologically related. This work is motivated by a study of bone marrow transplant (BMT) for leukemia patients, who may experience acute graft-versus-host disease (GVHD), relapse of leukemia, and death after an allogeneic BMT. Acute GVHD is associated wit...

The population-attributable fraction for time-dependent exposures using dynamic prediction and landmarking.

The public health impact of a harmful exposure can be quantified by the population-attributable fraction (PAF). The PAF describes the attributable risk due to an exposure and is often interpreted as the proportion of preventable cases if the exposure was extinct. Difficulties in the definition and interpretation of the PAF arise when the exposure of interest depends on time. Then, the definition of exposed and unexposed individuals is not straightforward. We propose dynamic prediction and landmarking to def...
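A minimal statement of the time-fixed PAF that the abstract generalizes to time-dependent exposures:

    \text{PAF} = \frac{P(D) - P(D \mid \text{unexposed})}{P(D)},

i.e., the proportional reduction in the disease probability P(D) that would be expected if the exposure were removed; the difficulty addressed above is how to define the unexposed state when exposure status changes over time.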

Marginal hazard ratio estimates in joint frailty models for heart failure trials.

This work is motivated by clinical trials in chronic heart failure disease, where treatment has effects both on morbidity (assessed as recurrent non-fatal hospitalisations) and on mortality (assessed as cardiovascular death, CV death). Recently, a joint frailty proportional hazards model has been proposed for this kind of efficacy outcome to account for a potential association between the risk rates for hospital admissions and CV death. However, more often clinical trial results are presented by treatment...
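One common formulation of a joint frailty model for recurrent hospitalisations and CV death (a generic sketch, not necessarily the exact model used in this paper) links the two processes through a shared subject-specific frailty u_i:

    r_i(t \mid u_i) = u_i\, r_0(t)\, \exp(\beta_R^{\top} x_i)  \quad \text{(recurrent hospitalisations)},
    \lambda_i(t \mid u_i) = u_i^{\gamma}\, \lambda_0(t)\, \exp(\beta_D^{\top} x_i)  \quad \text{(CV death)},
    u_i \sim \mathrm{Gamma}(1/\theta,\, 1/\theta),

so that \gamma governs the association between the hospitalisation rate and the death hazard; the treatment effects \beta are then conditional on u_i, which is what motivates reporting a marginal hazard ratio.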

The one-inflated positive Poisson mixture model for use in population size estimation.

The one-inflated positive Poisson mixture model (OIPPMM) is presented, for use as the truncated count model in Horvitz-Thompson estimation of an unknown population size. The OIPPMM offers a way to address two important features of some capture-recapture data: one-inflation and unobserved heterogeneity. The OIPPMM provides markedly different results than some other popular estimators, and these other estimators can appear to be quite biased, or utterly fail due to the boundary problem, when the OIPPMM is the...
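For context, a Horvitz-Thompson-type estimator of the unknown population size takes the generic form

    \hat{N} = \sum_{i=1}^{n} \frac{1}{1 - \hat{P}(Y_i = 0)},

where the sum runs over the n observed (captured) units and \hat{P}(Y_i = 0) is the model-based probability of a zero count; the OIPPMM supplies this zero probability while accommodating one-inflation and unobserved heterogeneity.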

Modeling physical activity data using L -penalized expectile regression.

In recent years, accelerometers have become widely used to objectively assess physical activity. Usually intensity ranges are assigned to the measured accelerometer counts by simple cut points, disregarding the underlying activity pattern. Under the assumption that physical activity can be seen as a distinct sequence of distinguishable activities, the use of hidden Markov models (HMM) has been proposed to improve the modeling of accelerometer data. As a further improvement, we propose to use expectile regression ...
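For readers unfamiliar with expectiles: the \tau-expectile m_\tau of a response Y minimizes an asymmetrically weighted squared loss,

    m_\tau = \arg\min_{m}\; E\big[\, |\tau - \mathbf{1}(Y < m)|\,(Y - m)^2 \,\big], \qquad \tau \in (0, 1),

so that \tau = 0.5 recovers the mean, while other values of \tau describe the lower or upper part of the activity distribution.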

Meta-analysis of rare events under the assumption of a homogeneous treatment effect.

We studied the performance of several meta-analysis methods in rare event settings, when the treatment effect is assumed to be homogeneous and baseline prevalences are either homogeneous or heterogeneous. We conducted extensive simulations that included the three most common effect sizes with count data: the odds ratio, the relative risk, and the risk difference. We investigated several important scenarios by varying the level of rareness, the degree of imbalance between the trials' arms, and the size of the treatme...
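The three effect sizes referred to above are, for a trial with a events among n_1 treated subjects and c events among n_2 controls,

    OR = \frac{a\,(n_2 - c)}{c\,(n_1 - a)}, \qquad RR = \frac{a/n_1}{c/n_2}, \qquad RD = \frac{a}{n_1} - \frac{c}{n_2},

each of which becomes unstable in rare-event settings when a or c is zero or near zero, which is what motivates the comparison of meta-analysis methods.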

Multilevel regression and poststratification as a modeling approach for estimating population quantities in large population health studies: A simulation study.

There are now a growing number of applications of multilevel regression and poststratification (MRP) in population health and epidemiological studies. MRP uses multilevel regression to model individual survey responses as a function of demographic and geographic covariates. Estimated mean outcome values for each demographic-geographic respondent subtype are then weighted by the proportions of each subtype in the population to produce an overall population-level estimate. We recently reported an extensive ca...
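A compact way to write the poststratification step described above: if \hat{\theta}_j is the model-based estimate for demographic-geographic subtype j and N_j is that subtype's population count, the population-level estimate is

    \hat{\theta}_{\text{post}} = \frac{\sum_{j} N_j\,\hat{\theta}_j}{\sum_{j} N_j}.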

Sampling uncertainty versus method uncertainty: A general framework with applications to omics biomarker selection.

Uncertainty is a crucial issue in statistics which can be considered from different points of view. One type of uncertainty, typically referred to as sampling uncertainty, arises through the variability of results obtained when the same analysis strategy is applied to different samples. Another type of uncertainty arises through the variability of results obtained when using the same sample but different analysis strategies addressing the same research question. We denote this latter type of uncertainty as ...

Cox regression model with randomly censored covariates.

This paper deals with a Cox proportional hazards regression model, where some covariates of interest are randomly right-censored. While methods for censored outcomes have become ubiquitous in the literature, methods for censored covariates have thus far received little attention and, for the most part, dealt with the issue of limit-of-detection. For randomly censored covariates, an often-used method is the inefficient complete-case analysis (CCA), which consists of deleting censored observations in the data ...
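A minimal sketch of the complete-case analysis (CCA) described above, using hypothetical toy data and column names (time, event, x, x_censored) and the lifelines package for the Cox fit; this is illustration only, not the paper's method:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical toy data: follow-up time, event indicator, covariate value x,
    # and an indicator for whether x itself is right-censored.
    df = pd.DataFrame({
        "time":       [5.0, 8.2, 3.1, 9.5, 2.4, 7.7, 6.3, 4.8],
        "event":      [1,   0,   1,   0,   1,   1,   0,   1],
        "x":          [1.2, 0.4, 2.1, 0.9, 1.8, 0.3, 1.1, 2.5],
        "x_censored": [0,   1,   0,   0,   1,   0,   0,   1],
    })

    # Complete-case analysis: drop subjects whose covariate is censored,
    # then fit a standard Cox proportional hazards model to the remaining rows.
    complete_cases = df.loc[df["x_censored"] == 0, ["time", "event", "x"]]
    cph = CoxPHFitter()
    cph.fit(complete_cases, duration_col="time", event_col="event")
    print(cph.summary)

Discarding the censored rows wastes information, which is exactly the inefficiency the abstract points out.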

Dynamic prediction: A challenge for biostatisticians, but greatly needed by patients, physicians and the public.

Prognosis is usually expressed in terms of the probability that a patient will or will not have experienced an event of interest t years after diagnosis of a disease. This quantity, however, is of little informative value for a patient who is still event-free after a number of years. Such a patient would be much more interested in the conditional probability of being event-free in the upcoming t years, given that he/she did not experience the event in the s years after diagnosis, called "conditional surviva...
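The conditional survival referred to above has a simple closed form in terms of the survival function S:

    S(t \mid s) = P(T > s + t \mid T > s) = \frac{S(s + t)}{S(s)},

i.e., the probability of remaining event-free for a further t years given survival to year s, which is the quantity dynamic prediction methods aim to update as s increases.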

Correcting for measurement error in fractional polynomial models using Bayesian modelling and regression calibration, with an application to alcohol and mortality.

Exposure measurement error can result in a biased estimate of the association between an exposure and outcome. When the exposure-outcome relationship is linear on the appropriate scale (e.g. linear, logistic) and the measurement error is classical, that is, the result of random noise, the effect estimate is attenuated. When the relationship is non-linear, measurement error distorts the true shape of the association. Regression calibration is a commonly used method for correcting for measurement error, ...
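In its simplest form, the regression calibration mentioned above replaces the unobserved true exposure X in the outcome model by its expectation given the error-prone measurement W and any error-free covariates Z,

    \hat{X} = E[X \mid W, Z],

after which the outcome model is fitted with \hat{X} in place of X; the title indicates that the paper adapts this idea to fractional polynomial exposure-outcome shapes within a Bayesian framework.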

Predictive functional ANOVA models for longitudinal analysis of mandibular shape changes.

In this paper, we introduce a Bayesian statistical model for the analysis of functional data observed at several time points. Examples of such data include the Michigan growth study where we wish to characterize the shape changes of human mandible profiles. The form of the mandible is often used by clinicians as an aid in predicting the mandibular growth. However, whereas many studies have demonstrated the changes in size that may occur during the period of pubertal growth spurt, shape changes have been les...

Advanced Topics in Biostatistics: Editorial for the ISCB38 Special Issue.

Editorial: Year 2018 report.

Bayesian personalized treatment selection strategies that integrate predictive with prognostic determinants.

The evolution of "informatics" technologies has the potential to generate massive databases, but the extent to which personalized medicine may be effectuated depends on the extent to which these rich databases may be utilized to advance understanding of the disease molecular profiles and ultimately integrated for treatment selection, necessitating robust methodology for dimension reduction. Yet, statistical methods proposed to address challenges arising with the high-dimensionality of omics-type data predom...

Assessment of local influence for the analysis of agreement.

The concordance correlation coefficient (CCC) and the probability of agreement (PA) are two frequently used measures for evaluating the degree of agreement between measurements generated by two different methods. In this paper, we consider the CCC and the PA using the bivariate normal distribution for modeling the observations obtained by two measurement methods. The main aim of this paper is to develop diagnostic tools for the detection of those observations that are influential on the maximum likeli...
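For reference, under the bivariate normal model described above the CCC has the standard closed form

    \rho_c = \frac{2\rho\,\sigma_1\sigma_2}{\sigma_1^2 + \sigma_2^2 + (\mu_1 - \mu_2)^2},

where (\mu_1, \sigma_1^2) and (\mu_2, \sigma_2^2) are the means and variances of the two measurement methods and \rho is their correlation; the diagnostics developed in the paper assess how individual observations perturb the maximum likelihood estimates of such quantities.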

Marginal false discovery rate control for likelihood-based penalized regression models.

The popularity of penalized regression in high-dimensional data analysis has led to a demand for new inferential tools for these models. False discovery rate control is widely used in high-dimensional hypothesis testing, but has only recently been considered in the context of penalized regression. Almost all of this work, however, has focused on lasso-penalized linear regression. In this paper, we derive a general method for controlling the marginal false discovery rate that can be applied to any penalized ...
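As a concrete instance of likelihood-based penalized regression, the lasso-penalized linear model mentioned above solves

    \hat{\beta}(\lambda) = \arg\min_{\beta}\; \frac{1}{2n}\,\lVert y - X\beta \rVert_2^2 + \lambda\,\lVert \beta \rVert_1,

and, loosely speaking, the marginal false discovery rate concerns the expected proportion of selected coefficients (nonzero entries of \hat{\beta}) that correspond to noise features.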

Testing random effects in linear mixed-effects models with serially correlated errors.

In linear mixed-effects models, random effects are used to capture the heterogeneity and variability between individuals due to unmeasured covariates or unknown biological differences. Testing for the need of random effects is a nonstandard problem because it requires testing on the boundary of parameter space where the asymptotic chi-squared distribution of the classical tests such as likelihood ratio and score tests is incorrect. In the literature several tests have been proposed to overcome this difficul...
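To make the boundary issue concrete: when testing a single variance component, say H_0: \sigma_b^2 = 0 against H_1: \sigma_b^2 > 0, the null value lies on the boundary of the parameter space, and a classical correction (given here only as background, not as this paper's proposal) replaces the usual \chi^2_1 reference distribution for the likelihood ratio statistic by the mixture

    \tfrac{1}{2}\,\chi^2_0 + \tfrac{1}{2}\,\chi^2_1,

i.e. a point mass at zero with probability one half and a \chi^2_1 distribution otherwise; the abstract indicates that several such tests have been proposed and studies their behaviour when the errors are serially correlated.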

Interim analysis incorporating short- and long-term binary endpoints.

Designs incorporating more than one endpoint have become popular in drug development. One such design allows for the incorporation of short-term information in an interim analysis if the long-term primary endpoint has not yet been observed for some of the patients. First, we consider a two-stage design with binary endpoints allowing for futility stopping only, based on conditional power under both fixed and observed effects. Design characteristics of three estimators: using primary long-term endpoint only,...

An efficient sample size adaptation strategy with adjustment of randomization ratio.

In clinical trials, sample size reestimation is a useful strategy for mitigating the risk of uncertainty in design assumptions and ensuring sufficient power for the final analysis. In particular, sample size reestimation based on unblinded interim effect size can often lead to sample size increase, and statistical adjustment is usually needed for the final analysis to ensure that type I error rate is appropriately controlled. In current literature, sample size reestimation and corresponding type I error con...

