

PubMed Journal Database | Journal of Biopharmaceutical Statistics RSS


PubMed.gov, maintained by the US National Library of Medicine at the National Institutes of Health, comprises more than 29 million records of biomedical literature, including MEDLINE-indexed articles, life science and medical journals, reviews, reports, and books.

BioPortfolio aims to cross-reference relevant information on published papers, clinical trials, and news associated with selected topics and specialities.

For example, view all recent publications and clinical trials associated with Epigenetics.

Showing PubMed Articles 1–25 of 133 from the Journal of Biopharmaceutical Statistics

Use of propensity score and disease risk score for multiple treatments with time-to-event outcome: a simulation study.

Propensity score (PS) and disease risk score (DRS) are often used in pharmacoepidemiologic safety studies. Methods of applying these two balancing scores are extensively studied in binary treatment settings. However, the use of PS and DRS is not well understood in the case of non-ordinal multiple treatments. Some PS methods of multiple treatments have been implemented since the theoretical establishment. Nevertheless, most of the work applies to continuous or binary outcomes. Little work has been done for t...
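
As background for the balancing-score idea discussed above, the sketch below estimates a generalized propensity score for more than two treatment groups with a multinomial logistic model and feeds the resulting inverse-probability weights into a weighted Cox model. It is a generic illustration under assumed column names (`treatment`, `time`, `event` plus numeric covariates), not the estimators evaluated in the simulation study.

```python
# Generic IPTW sketch for multiple treatments with a time-to-event outcome.
# Column names and data layout are assumptions for illustration only.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

def iptw_cox(df, covariates):
    # 1) Generalized propensity score P(T = t | X): a multiclass logistic
    #    regression (multinomial by default with the lbfgs solver).
    ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treatment"])
    probs = pd.DataFrame(ps_model.predict_proba(df[covariates]),
                         columns=ps_model.classes_, index=df.index)

    # 2) Weight each subject by 1 / P(received own treatment | X).
    own = probs.to_numpy()[np.arange(len(df)),
                           probs.columns.get_indexer(df["treatment"])]
    weights = 1.0 / own

    # 3) Weighted Cox model with robust (sandwich) standard errors.
    dummies = pd.get_dummies(df["treatment"], prefix="trt", drop_first=True).astype(float)
    fit_df = pd.concat([df[["time", "event"]], dummies], axis=1).assign(iptw=weights)
    return CoxPHFitter().fit(fit_df, duration_col="time", event_col="event",
                             weights_col="iptw", robust=True)
```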

Asymptotic confidence interval construction for proportion ratio based on correlated paired data.

In ophthalmological and otolaryngology studies, measurements obtained from both organs (e.g., eyes or ears) of an individual are often highly correlated. Ignoring the intraclass correlation between paired measurements may yield biased inferences. In this article, four different confidence interval (CI) construction methods (maximum likelihood estimates based Wald-type CI, profile likelihood CI, asymptotic score CI and an existing method adjusted for correlated bilateral data) are applied to this type of cor...
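
For orientation, the familiar Wald-type interval for a ratio of two independent proportions on the log scale is shown below; the CI methods compared in the article additionally adjust the variance for the intraclass correlation between paired measurements, an adjustment not reproduced in this generic form.

```latex
% Independent-samples form only; the article's methods correct the variance
% for within-subject (paired-organ) correlation.
\[
\widehat{R} = \frac{\hat{p}_1}{\hat{p}_2}, \qquad
\log\widehat{R} \;\pm\; z_{1-\alpha/2}
\sqrt{\frac{1-\hat{p}_1}{n_1\hat{p}_1} + \frac{1-\hat{p}_2}{n_2\hat{p}_2}}.
\]
```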

Studying treatment-effect heterogeneity in precision medicine through induced subgroups.

Precision medicine, in the sense of tailoring the choice of medical treatment to patients' pretreatment characteristics, is nowadays gaining a lot of attention. Preferably, this tailoring should be realized in an evidence-based way, with key evidence in this regard pertaining to subgroups of patients that respond differentially to treatment (i.e., to subgroups involved in treatment-subgroup interactions). Often a-priori hypotheses on subgroups involved in treatment-subgroup interactions are lacking or are i...
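
As a point of reference for the treatment-subgroup interactions mentioned above, the minimal sketch below tests a prespecified treatment-by-covariate interaction in a linear model; the article's contribution concerns induced subgroups when such a-priori hypotheses are lacking, which this sketch does not implement. Variable names (`outcome`, `treat`, `biomarker`) are hypothetical.

```python
# Classical test of a prespecified treatment-by-covariate interaction.
# Assumes treat is coded 0/1 and biomarker is numeric; names are illustrative.
import statsmodels.formula.api as smf

def interaction_test(df):
    fit = smf.ols("outcome ~ treat * biomarker", data=df).fit()
    # A small p-value for the interaction term suggests the treatment effect
    # varies with the biomarker, i.e., a treatment-subgroup interaction.
    return fit.params["treat:biomarker"], fit.pvalues["treat:biomarker"]
```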

A commentary on: statistical inference problems in sequential parallel designs.

A Sequential Parallel Comparison Design has two stages, the first comparing drug to placebo and the second comparing drug to placebo among patients who did not respond to placebo in the first stage. The paper, Statistical Inference Problems in Sequential Parallel Designs, claims that the estimate of the treatment difference in the second stage is biased and that under certain circumstances, a suggested hypothesis test will reject the null hypothesis when it should be accepted. This rejoinder argues that the...

Simulation optimization for Bayesian multi-arm multi-stage clinical trial with binary endpoints.

Multi-arm multi-stage designs, in which multiple active treatments are compared to a control and accumulated information from interim data are used to add or remove arms from the trial, may reduce development costs and shorten the drug development timeline. As such, this adaptive update is a natural complement to Bayesian methodology in which the prior clinical belief is sequentially updated using the observed probability of success. Simulation is often required for planning clinical trials to accommodate t...
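
To make the simulation idea concrete, the sketch below plays out one interim look of a multi-arm trial with a binary endpoint using independent Beta-Binomial models and a simple posterior futility rule. Priors, thresholds, and interim counts are illustrative assumptions, not the design optimized in the article.

```python
# One interim look of a Bayesian multi-arm trial with a binary endpoint.
# Beta(1, 1) priors and a 10% futility threshold are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(1)

def prob_beats_control(x_arm, n_arm, x_ctl, n_ctl, ndraw=100_000):
    p_arm = rng.beta(1 + x_arm, 1 + n_arm - x_arm, ndraw)
    p_ctl = rng.beta(1 + x_ctl, 1 + n_ctl - x_ctl, ndraw)
    return np.mean(p_arm > p_ctl)

control = (8, 40)                                   # responders, patients
arms = {"dose A": (10, 40), "dose B": (15, 40), "dose C": (22, 40)}

for name, (x, n) in arms.items():
    prob = prob_beats_control(x, n, *control)
    decision = "drop for futility" if prob < 0.10 else "continue"
    print(f"{name}: Pr(p_arm > p_ctl | data) = {prob:.2f} -> {decision}")
```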

An adaptive multi-stage phase I dose-finding design incorporating continuous efficacy and toxicity data from multiple treatment cycles.

Phase I designs traditionally use the dose-limiting toxicity (DLT), a binary endpoint from the first treatment cycle, to identify the maximum-tolerated dose (MTD) assuming a monotonically increasing relationship between dose and efficacy. In this article, we establish a general framework for a multi-stage adaptive design where we jointly model a continuous efficacy outcome and continuous/quasi-continuous toxicity endpoints from multiple treatment cycles. The normalized Total Toxicity Profile (nTTP) is used ...

A closed-form estimator for meta-analysis and surrogate markers evaluation.

Estimating complex linear mixed models using an iterative full maximum likelihood estimator can be cumbersome in some cases. With small and unbalanced datasets, convergence problems are common. Also, for large datasets, iterative procedures can be computationally prohibitive. To overcome these computational issues, an unbiased two-stage closed-form estimator for the multivariate linear mixed model is proposed. It is rooted in pseudo-likelihood-based split-sample methodology and useful, for example, when eva...

Partial Youden index and its inferences.

In medical diagnostic research, medical tests with continuous values are widely employed to distinguish between diseased and non-diseased subjects. The diagnostic accuracy of a medical test can be assessed by using the receiver operating characteristic (ROC) curve of the test. To summarize the ROC curve and determine an optimal cut-off point for test results, the Youden index is commonly used. In particular, the Youden index is optimized over the entire range of values for sensitivity and specificity, which...
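
For readers unfamiliar with the index itself, the sketch below computes the standard (full-range) Youden index, J = max over cut-offs c of {Se(c) + Sp(c) - 1}; the partial version studied in the article restricts this optimization to a clinically relevant range of specificity, which is not reproduced here.

```python
# Full-range Youden index for a continuous diagnostic test.
import numpy as np

def youden_index(diseased, healthy):
    cutoffs = np.unique(np.concatenate([diseased, healthy]))
    best_j, best_c = -1.0, None
    for c in cutoffs:
        sens = np.mean(diseased >= c)   # sensitivity at cut-off c
        spec = np.mean(healthy < c)     # specificity at cut-off c
        if sens + spec - 1.0 > best_j:
            best_j, best_c = sens + spec - 1.0, c
    return best_j, best_c

rng = np.random.default_rng(0)
j, c = youden_index(rng.normal(1.0, 1.0, 200), rng.normal(0.0, 1.0, 200))
print(f"Youden index {j:.2f} at cut-off {c:.2f}")  # simulated example
```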

Bayesian sample size determination for longitudinal studies with continuous response based on different scientific questions of interest.

Longitudinal study designs are commonly applied in much scientific research, especially in the medical, social, and economic sciences. Longitudinal studies allow researchers to measure changes in each individual's responses over time and often have higher statistical power than cross-sectional studies. Choosing an appropriate sample size is a crucial step in a successful study. In longitudinal studies, because of the complexity of their design, including the selection of the number of individuals and the nu...

Modeling the impact of preplanned dose titration on delayed response.

Dose titration becomes more and more common in improving drug tolerability as well as determining individualized treatment doses, thereby maximizing the benefit to patients. Dose titration starting from a lower dose and gradually increasing to a higher dose enables improved tolerability in patients as the human body may gradually adapt to adverse gastrointestinal effects. Current statistical analyses mostly focus on the outcome at the end-of-study follow-up without considering the longitudinal impact of dos...

Estimation of delay time in survival data with delayed treatment effect.

In randomized controlled trials with delayed treatment effect, there is a delay period before the experimental therapy starts to exhibit a beneficial effect. The phenomenon of delayed treatment effect is often observed in the emerging and important field of immuno-oncology. It is important to estimate the duration of delay as this information helps in characterizing the pattern of comparative treatment effect, understanding the mechanism of action of the experimental therapy, and forming optimal treatment s...
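
One generic way to formalize the "duration of delay" is a change-point working model: both arms share a common hazard before the delay time and have separate hazards afterwards, and the delay is estimated by profiling a piecewise-exponential likelihood over a grid. The sketch below implements only that simple idea; it is not the estimator developed in the article, and the piecewise-exponential assumption is an added simplification.

```python
# Grid-search change-point estimate of the delay under a piecewise-exponential
# working model (shared hazard before tau, arm-specific hazards after tau).
import numpy as np

def piece(time, event, lo, hi):
    """Exposure and events restricted to the interval [lo, hi)."""
    exposure = np.clip(time, lo, hi) - lo
    events = event * ((time >= lo) & (time < hi))
    return exposure, events

def exp_loglik(exposure, events):
    """Constant-hazard log-likelihood on one piece with the MLE plugged in."""
    d, t = events.sum(), exposure.sum()
    if d == 0:
        return 0.0
    lam = d / t
    return d * np.log(lam) - lam * t

def profile_loglik(tau, time, event, arm):
    ll = exp_loglik(*piece(time, event, 0.0, tau))     # pooled hazard before tau
    for a in (0, 1):                                   # control, treatment
        ll += exp_loglik(*piece(time[arm == a], event[arm == a], tau, np.inf))
    return ll

def estimate_delay(time, event, arm, grid):
    return grid[int(np.argmax([profile_loglik(tau, time, event, arm) for tau in grid]))]
```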

Randomized dose-escalation designs for drug combination cancer trials with immunotherapy.

This work considers Phase I cancer dual-agent dose-escalation clinical trials in which one of the compounds is an immunotherapy. The distinguishing feature of trials considered is that the dose of one agent, referred to as a standard of care, is fixed and another agent is dose-escalated. Conventionally, the goal of a Phase I trial is to find the maximum tolerated combination (MTC). However, in trials involving an immunotherapy, it is also essential to test whether a difference in toxicities associated with ...

Interval estimators of relative potency in toxicology and radiation countermeasure studies: comparing methods and experimental designs.

The relative potency of one agent to another is commonly represented by the ratio of two quantal response parameters; for example, the LD50 of animals receiving a treatment to the LD50 of control animals, where LD50 is the dose of toxin that is lethal to 50% of animals. Though others have considered interval estimators of LD50, here, we extend Bayesian, bootstrap, likelihood ratio, Fieller's and Wald's methods to estimate intervals for relative potency in a parallel-line assay context. In addition to comparing thei...
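
Of the interval methods named above, the bootstrap is the easiest to sketch: fit a logistic dose-response curve per group, take LD50 = exp(-intercept/slope) on the log-dose scale, and resample dose groups to obtain a percentile interval for the LD50 ratio. The Bayesian, likelihood-ratio, Fieller and Wald intervals and the parallel-line assay structure studied in the article are not reproduced; the data layout and group roles here are assumptions.

```python
# LD50 ratio (relative potency) with a percentile bootstrap interval.
# Each data set is (log_dose, killed, total); assumes several dose groups.
import numpy as np
import statsmodels.api as sm

def ld50(log_dose, killed, total):
    X = sm.add_constant(log_dose)
    fit = sm.GLM(np.column_stack([killed, total - killed]), X,
                 family=sm.families.Binomial()).fit()
    b0, b1 = fit.params
    return np.exp(-b0 / b1)            # dose at which mortality is 50%

def relative_potency(trt, ctl, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    point = ld50(*ctl) / ld50(*trt)
    boots = []
    for _ in range(n_boot):
        i = rng.integers(0, len(trt[0]), len(trt[0]))   # resample dose groups
        j = rng.integers(0, len(ctl[0]), len(ctl[0]))
        boots.append(ld50(ctl[0][j], ctl[1][j], ctl[2][j]) /
                     ld50(trt[0][i], trt[1][i], trt[2][i]))
    return point, tuple(np.percentile(boots, [2.5, 97.5]))
```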

Assay sensitivity in "Hybrid thorough QT/QTc (TQT)" study.

A concurrent positive control should be included in a thorough QTc clinical trial to validate the study according to ICH E14 guidance. Some pharmaceutical companies have started to use a "hybrid TQT" study to meet ICH E14 regulatory requirements since the release of ICH E14 Q&A (R3). The "hybrid TQT" study includes the same treatment arms (therapeutic and/or supratherapeutic dose of investigational drug, placebo, and positive control) with a smaller sample size than traditional TQT studies, but uses concentration-Q...

Sample size calculations for comparing two groups of count data.

A sample size formula for comparing two groups of count data is derived using the method of moments by matching the first and second moments of the distribution of the count data, and it does not need any further distributional assumption. Compared to sample size formulas derived using a likelihood-based approach or using simulations, the proposed sample size formula applies to count data following any distribution in addition to the negative binomial distribution. The proposed sample size formula can be us...
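
A simple version of the moment-matching idea is the familiar normal-approximation formula that needs only each group's mean and variance; the sketch below computes it. The exact formula derived in the article may differ, and the numbers in the example are illustrative.

```python
# Per-group sample size from first and second moments only (normal approximation):
# n = (z_{1-alpha/2} + z_{power})^2 * (var1 + var2) / (mu1 - mu2)^2
import math
from scipy.stats import norm

def n_per_group(mu1, var1, mu2, var2, alpha=0.05, power=0.80):
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return math.ceil((z_a + z_b) ** 2 * (var1 + var2) / (mu1 - mu2) ** 2)

# Over-dispersed counts with means 2.0 vs 2.6 and variances 4.0 and 5.2.
print(n_per_group(2.0, 4.0, 2.6, 5.2))   # about 201 subjects per group
```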

Confidence intervals for proportion ratios of stratified correlated bilateral data.

In stratified bilateral studies, responses from two paired body parts are correlated. Confidence intervals (CIs), which reveal various features of the data, should take the correlations into account. In this article, five CI methods (sample-size weighted naïve Maximum likelihood estimation (MLE)-based Wald-type CI, complete MLE-based Wald-type CI, profile likelihood CI, MLE-based score CI and pooled MLE-based Wald-type CI) are derived for proportion ratios under the assumption of equal correlation coeffici...

Estimation of causal effects in clinical endpoint bioequivalence studies in the presence of intercurrent events: noncompliance and missing data.

In clinical endpoint bioequivalence (BE) studies, the primary analysis for assessing equivalence between a generic and an innovator product is based on the observed per-protocol (PP) population (usually completers and compliers). However, missing data and noncompliance are post-randomization intercurrent events and may introduce selection bias. Therefore, PP analysis is generally not causal. The FDA Missing Data Working Group recommended using "causal estimands of primary interest." In this paper, we propos...

Treatment effect on ordinal functional outcome using piecewise multistate Markov model with unobservable baseline: an application to the modified Rankin scale.

In clinical trials, longitudinally assessed ordinal outcomes are commonly dichotomized and only the final measure is used for primary analysis, partly for ease of clinical interpretation. Dichotomization of the ordinal scale and failure to utilize the repeated measures can reduce statistical power. Additionally, in certain emergent settings, the same measure cannot be assessed at baseline prior to treatment. For such a data set, a piecewise-constant multistate Markov model that incorporates a latent model f...

Incorporating a companion test into the noninferiority design of medical device trials.

Noninferiority trials are commonly utilized to evaluate the safety and effectiveness of medical devices. It could happen that the noninferiority hypothesis is rejected while the performance of the active control is clinically not satisfactory. This may pose a great challenge when making a regulatory decision. To avoid such a difficult situation, we propose to conduct a companion test to assess the performance of the active control when testing the main noninferiority hypothesis and to incorporate such a tes...
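
To fix ideas, the sketch below runs a standard Wald-type noninferiority test on a difference in success proportions and pairs it with a very simple companion check on the active control's observed performance. The article's actual companion test and combined decision rule are not reproduced; the margin, threshold and counts are assumed for illustration.

```python
# Noninferiority test on a risk difference plus a naive companion check that
# the active control itself performed acceptably. Margin/threshold are assumptions.
import numpy as np
from scipy.stats import norm

def ni_with_companion(x_t, n_t, x_c, n_c, margin=0.10, control_floor=0.70, alpha=0.025):
    p_t, p_c = x_t / n_t, x_c / n_c
    se = np.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    z_ni = (p_t - p_c + margin) / se        # H0: p_t - p_c <= -margin
    noninferior = z_ni > norm.ppf(1 - alpha)
    control_ok = p_c >= control_floor       # crude check on control performance
    return bool(noninferior), bool(control_ok)

print(ni_with_companion(x_t=156, n_t=200, x_c=160, n_c=200))   # (True, True)
```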

Methods for the analysis of multiple endpoints in small populations: A review.

While current guidelines generally recommend single endpoints for primary analyses of confirmatory clinical trials, it is recognized that certain settings require inference on multiple endpoints for comprehensive conclusions on treatment effects. Furthermore, combining treatment effect estimates from several outcome measures can increase the statistical power of tests. Such an efficient use of resources is of special relevance for trials in small populations. This paper reviews approaches based on a combina...

Rejoinder to Mr. Peter J. Laud.

Quantitative decision-making in randomized Phase II studies with a time-to-event endpoint.

One of the most critical decision points in clinical development is Go/No-Go decision-making after a proof-of-concept study. Traditional decision-making relies on a formal hypothesis testing with control of type I and type II error rates, which is limited by assessing the strength of efficacy evidence in a small isolated trial. In this article, we propose a quantitative Bayesian/frequentist decision framework for Go/No-Go criteria and sample size evaluation in Phase II randomized studies with a time-to-even...
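
A minimal version of such a quantitative rule is sketched below: using the usual normal approximation to the log hazard ratio (standard error roughly 2 divided by the square root of the number of events under 1:1 allocation) and a flat prior, declare Go or No-Go according to the probability that the true hazard ratio beats a target. The target, thresholds and flat prior are assumptions; the framework proposed in the article is more elaborate.

```python
# Dual-threshold Go/No-Go rule on a time-to-event endpoint via the normal
# approximation to log(HR). Target HR and probability cut-offs are assumptions.
import numpy as np
from scipy.stats import norm

def go_nogo(hr_hat, n_events, hr_target=0.75, go_prob=0.60, nogo_prob=0.20):
    se = 2.0 / np.sqrt(n_events)            # SE of log(HR), 1:1 allocation
    prob_benefit = norm.cdf((np.log(hr_target) - np.log(hr_hat)) / se)
    if prob_benefit >= go_prob:
        return "Go"
    if prob_benefit <= nogo_prob:
        return "No-Go"
    return "Indeterminate"

print(go_nogo(hr_hat=0.70, n_events=120))   # -> "Go" in this illustrative case
```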

Heterogeneous growth bent-cable models for time-to-event and longitudinal data: application to AIDS studies.

The major limitations of growth curve mixture models for HIV/AIDS data are the usual assumptions of normality and monophasic curves within latent classes. This article addresses these limitations by using non-normal skewed distributions and multiphasic patterns for outcomes of prospective studies. For such outcomes, new skew-t (ST) distributions are proposed for modeling heterogeneous growth trajectories, which exhibit not abrupt but gradual multiphasic changes from a declining trend to an increasing trend ...

L-statistics of absolute differences for quantifying the agreement between two variables.

In many clinical studies, Lin's (1989) concordance correlation coefficient (CCC) is a popular measure of agreement for continuous outcomes. Most commonly, it is used under the assumption that data are normally distributed. However, in many practical applications, data are often skewed and/or thick-tailed. King and Chinchilli (2001) proposed robust estimation methods of alternative CCC indices, and we propose an approach that extends the existing methods of robust estimators by focusing on functionals that y...
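
For reference, the sketch below computes Lin's (1989) CCC in its standard moment form; the robust L-statistic extensions proposed in the article are not reproduced.

```python
# Lin's concordance correlation coefficient (standard moment estimator).
import numpy as np

def lins_ccc(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # 1/n variances, as in Lin (1989)
    sxy = np.mean((x - mx) * (y - my))
    return 2 * sxy / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(0)
truth = rng.normal(0, 1, 100)
rater1 = truth + rng.normal(0, 0.3, 100)
rater2 = 0.2 + truth + rng.normal(0, 0.3, 100)   # rater 2 has a small bias
print(f"CCC = {lins_ccc(rater1, rater2):.2f}")
```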

Comments on "One-tailed asymptotic inferences for the difference of proportions: analysis of 97 methods of inference" by Álvarez Hernández M, Martín Andrés A and Herranz Tejedor I. (2018).

