Latest bioRxiv papers
Category: bioinformatics — Showing 50 items
A New Sparse Bayesian Quantile Neural Network-based Approach and Its Application to Discover Physiological Sweet Spots in the Canadian Longitudinal Study on Aging
Min, J.; Vishnyakova, O.; Brooks-Wilson, A.; Elliott, L. T.
AI Summary
- The study introduces Q-FSNet and Q-DirichNet, neural network frameworks integrating quantile regression for identifying physiological sweet spots in high-dimensional data.
- Using data from the Canadian Longitudinal Study on Aging, these methods identified 25 metabolites with optimal ranges that minimize biological age acceleration.
- The findings suggest dietary and gut microbiome-derived metabolites as potential biomarkers for healthy aging, supported by existing literature.
Abstract
Identifying physiological sweet spots (optimal ranges for homeostasis) is essential for precision medicine. However, traditional statistical methods often rely on globally linear or locally jagged models that struggle to capture the smooth, non-linear nature of biological regulation in high-dimensional data. We present the Quantile Feature Selection Network (Q-FSNet), a neural network-based framework that integrates quantile regression, feature selection, and uncertainty estimation to identify biomarkers with sweet spots. Unlike traditional methods, Q-FSNet learns continuous response curves without requiring a pre-specified number of change points. We further introduce the Quantile Dirichlet Network (Q-DirichNet), a fully Bayesian extension that utilizes Dirichlet priors to automate feature shrinkage. Using data from the Canadian Longitudinal Study on Aging, we identified 25 metabolites with distinct homeostatic ranges for which biological age acceleration is minimized. The metabolites with sweet spots for biological aging include some derived from diet or produced by the gut microbiome; this highlights their potential for knowledge translation and public health impact. Our results, corroborated by existing literature, demonstrate that these sparse neural network-based methods offer a scalable and interpretable tool for discovering metabolic signatures of healthy aging vs. dysregulation in large-scale omics research.
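Illustrative note: the pinball (quantile) loss is the standard training objective behind quantile regression, and a "sweet spot" can be read off a fitted quantile curve as the input range where the predicted response stays near its minimum. The sketch below is a minimal numpy illustration of those two ideas; the function names, the tolerance rule, and the toy curve are assumptions, not the Q-FSNet/Q-DirichNet implementation.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss at level tau in (0, 1): minimizing it targets
    the tau-th conditional quantile of the response."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

def sweet_spot(x_grid, quantile_curve, tolerance=0.05):
    """Toy sweet-spot rule (an assumption): the x-range where a fitted quantile
    curve of age acceleration stays within `tolerance` of its minimum."""
    inside = x_grid[quantile_curve <= quantile_curve.min() + tolerance]
    return inside.min(), inside.max()

# Toy example: a U-shaped response of age acceleration to one metabolite.
x = np.linspace(-2, 2, 201)
curve = (x - 0.3) ** 2                      # stands in for a fitted median curve
print(sweet_spot(x, curve))                 # narrow range around x = 0.3
print(pinball_loss(np.array([1.0, 2.0]), np.array([0.5, 2.5]), tau=0.5))
```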
bioinformatics · 2026-02-20 · v1
ProteoMapper: Alignment-Aware Identification and Quantitative Analysis of Contextual Motif-Domain Patterns in Protein Families
Sefa, S. M.; Sarkar, J.; Robin, A. H. K.; Uddin, M.
AI Summary
- ProteoMapper integrates domain annotation with motif detection to analyze spatial relationships in protein families, introducing metrics like positional conservation scoring and Motif-Domain Coverage Score (MDCS).
- The tool processes alignments in Excel format, providing rapid analysis and color-coded reports, validated across three protein families with high accuracy.
- In Arabidopsis ERD6-like sugar transporters, MDCS analysis showed PROSITE signatures PS00216 and PS00217 are fully domain-embedded but differ in evolutionary conservation, suggesting subfunctionalization.
Abstract
Protein function depends on interactions between structural domains and regulatory motifs. Yet current tools analyze these elements separately, hindering investigation of disease mutations affecting evolutionarily conserved, structurally constrained motifs. We present ProteoMapper, a computational framework integrating HMMER-based domain annotation with user-defined motif detection to quantify motif-domain spatial relationships in protein families. ProteoMapper introduces two discovery metrics: (1) positional conservation scoring, identifying motifs at identical alignment coordinates in ≥N% of sequences (default 60%), indicating purifying selection; (2) Motif-Domain Coverage Score (MDCS), quantifying motif embedding within Pfam domains (MDCS=1: fully embedded; MDCS=0: extra-domain). The platform processes Excel-formatted alignments without programming requirements, delivering color-coded reports with conserved motif positions, domain boundaries, and MDCS values. Parallel execution of sequence batches enables rapid analysis (8 motifs were searched in 150 sequences with complete Pfam scanning in <6 seconds on standard hardware). Validation across three protein families confirmed technical accuracy and biological insight. In PLATZ transcription factors (24 proteins), domain predictions achieved 0.94 mean intersection-over-union versus published annotations, exactly reproducing 22 of 23 reported spans. In Arabidopsis ERD6-like sugar transporters (17 proteins), MDCS analysis revealed canonical PROSITE signatures PS00216 and PS00217 are equally domain-embedded (MDCS=1.0) but evolutionarily divergent. PS00217 shows positional conservation (58.8% of sequences) while PS00216 exhibits dispersal, suggesting subfunctionalization. In tomato actin-depolymerizing factors (11 proteins), domain detection achieved 100% sensitivity with >93% positional concordance. ProteoMapper enables hypothesis-driven investigation of evolutionary constraints, regulatory mechanisms, and variant effect prediction in biomedical and functional proteomics. Source code, documentation, and test results with datasets at https://github.com/sifullah0/ProteoMapper.
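Illustrative note: the abstract defines MDCS as the degree of motif embedding in Pfam domains (1 fully embedded, 0 extra-domain) and positional conservation as motifs recurring at the same alignment column in ≥N% of sequences. A minimal sketch of how such metrics could be computed is below; the exact formulas are the authors', so treat the per-position coverage fraction and the start-column counting as assumptions.

```python
from collections import Counter

def mdcs(motif_span, domain_spans):
    """Toy Motif-Domain Coverage Score: fraction of motif positions falling
    inside any annotated domain span (1.0 = fully embedded, 0.0 = extra-domain).
    Spans are inclusive (start, end) pairs in the same coordinate system."""
    start, end = motif_span
    positions = range(start, end + 1)
    covered = sum(
        any(d_start <= pos <= d_end for d_start, d_end in domain_spans)
        for pos in positions
    )
    return covered / len(positions)

def positionally_conserved(motif_hits, n_sequences, min_fraction=0.60):
    """Alignment columns where a motif starts in >= min_fraction of sequences;
    `motif_hits` maps sequence id -> list of motif start columns."""
    counts = Counter(col for cols in motif_hits.values() for col in set(cols))
    return {col for col, n in counts.items() if n / n_sequences >= min_fraction}

print(mdcs((120, 135), [(100, 300)]))                    # 1.0: fully embedded
print(mdcs((90, 110), [(100, 300)]))                     # partial overlap
print(positionally_conserved({"s1": [42], "s2": [42], "s3": [7]}, 3))
```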
bioinformatics · 2026-02-20 · v1
Geometric-aware and interpretable deep learning for single-cell batch correction via explicit disentanglement and optimal transport
Jiang, C.; Zheng, R.; Ji, Y.; Cao, S.; Fang, Y.; Wang, Z.; Wang, R.; Liang, S.; Tao, S.
AI Summary
- The study introduces iDLC, a deep learning framework for single-cell RNA sequencing batch correction, using explicit feature disentanglement and optimal transport for dual-level correction.
- iDLC separates biological from technical components in a structured latent space and uses mutual nearest neighbors for geometric alignment.
- Evaluations on various datasets show iDLC effectively removes batch effects, preserves cell subtypes, and outperforms existing methods in both correction and biological fidelity.
Abstract
Single-cell RNA sequencing enables high-resolution characterization of cellular heterogeneity, yet integrating datasets from diverse sources remains challenging due to batch effects. Current methods rely on implicit feature disentanglement and lack geometric constraints, often resulting in under-correction, over-correction, or compromised biological fidelity. Here, we present iDLC, an interpretable deep learning framework that performs dual-level correction through explicit feature disentanglement and optimal transport-regularized adversarial alignment. iDLC separates biological and technical components within a structured latent space, then leverages high-confidence mutual nearest neighbor pairs to guide geometrically constrained distribution alignment. Systematic evaluation across pancreatic cancer datasets with varying batch effect intensities, multi-source human immune cells, and large-scale cross-species atlases demonstrates that iDLC robustly eliminates complex batch effects while preserving fine-grained cell subtypes, continuous developmental trajectories, and rare populations. The framework scales efficiently to datasets exceeding one million cells and consistently outperforms existing methods in both batch correction and biological conservation metrics. iDLC provides a principled and reliable tool for constructing unified single-cell reference atlases across diverse experimental conditions and biological systems.
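Illustrative note: one ingredient named above, high-confidence mutual nearest neighbor (MNN) pairs across batches, can be sketched generically as follows. This is a plain MNN search in a shared embedding using scikit-learn, not the iDLC code; the k value and the random toy batches are assumptions.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def mutual_nearest_neighbors(x_a, x_b, k=20):
    """Return (i, j) pairs where cell i of batch A and cell j of batch B are
    each within the other's k nearest neighbors in a shared embedding."""
    _, idx_ab = NearestNeighbors(n_neighbors=k).fit(x_b).kneighbors(x_a)
    _, idx_ba = NearestNeighbors(n_neighbors=k).fit(x_a).kneighbors(x_b)
    b_of_a = [set(row) for row in idx_ab]          # k closest B cells per A cell
    a_of_b = [set(row) for row in idx_ba]          # k closest A cells per B cell
    return [(i, j) for i, js in enumerate(b_of_a) for j in js if i in a_of_b[j]]

rng = np.random.default_rng(0)
batch_a = rng.normal(size=(200, 10))
batch_b = rng.normal(size=(150, 10)) + 0.5          # shifted "batch effect"
print(len(mutual_nearest_neighbors(batch_a, batch_b, k=10)))
```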
bioinformatics · 2026-02-20 · v1
OT-knn: a neighborhood-aware optimal transport framework for aligning spatial transcriptomics data
Song, J.; Li, Q.
AI Summary
- OT-knn is introduced as a method for aligning spatial transcriptomics (ST) data by integrating local neighborhood information into an optimal transport framework.
- It reconstructs each spot using its k-nearest neighbors to capture microenvironment context, enhancing robustness against noise and variability.
- Evaluations on simulated and real datasets, including human and mouse brain data, show OT-knn achieves accurate alignment despite spatial deformation, donor heterogeneity, and developmental variation.
Abstract
Spatial transcriptomics (ST) measures gene expression while preserving spatial context within tissues, enabling detailed characterization of tissue organization. As ST technologies advance, aligning datasets across tissue sections, individuals, platforms, and developmental stages has become increasingly important but remains challenging due to sparse expression, biological heterogeneity, and geometric distortions between slices. We introduce OT-knn, a method for ST alignment that integrates local neighborhood information within an optimal transport framework. Rather than relying solely on single-spot expression, OT-knn reconstructs each spot using its spatial k-nearest neighbors, capturing microenvironment context that is more robust to noise and variability. These representations are then used to derive probabilistic correspondences between slices. We evaluate OT-knn using simulated data with known ground-truth alignment and real datasets from multiple ST platforms, including human dorsolateral prefrontal cortex data (10x Genomics Visium), mouse brain aging data with both within-donor and cross-donor comparisons (MERFISH), and a multi-stage axolotl brain dataset (Stereo-seq). Across these settings, OT-knn achieves accurate and robust alignment, particularly in the presence of spatial deformation, donor heterogeneity, and developmental variation.
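Illustrative note: the two ideas described above, representing each spot by its k spatial nearest neighbors and deriving probabilistic correspondences via optimal transport, can be sketched as below. The uniform marginals, Euclidean cost, and hand-rolled Sinkhorn iteration are simplifying assumptions rather than the OT-knn implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_smooth(expr, coords, k=6):
    """Represent each spot by the mean expression of its k spatial nearest
    neighbors (itself included), capturing local microenvironment context."""
    _, idx = cKDTree(coords).query(coords, k=k)
    return expr[idx].mean(axis=1)

def sinkhorn(cost, reg=0.1, n_iter=200):
    """Entropy-regularized optimal transport between uniform marginals,
    returning a soft correspondence (transport plan) between spots."""
    n, m = cost.shape
    a, b = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    K = np.exp(-cost / reg)
    u = np.ones(n)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: two small "slices" with 5 genes per spot.
rng = np.random.default_rng(1)
expr1, coords1 = rng.random((30, 5)), rng.random((30, 2))
expr2, coords2 = rng.random((40, 5)), rng.random((40, 2))
r1, r2 = knn_smooth(expr1, coords1), knn_smooth(expr2, coords2)
cost = np.linalg.norm(r1[:, None, :] - r2[None, :, :], axis=2)
plan = sinkhorn(cost)
print(plan.shape, round(plan.sum(), 3))              # (30, 40), total mass ~1.0
```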
bioinformatics · 2026-02-20 · v1
SuperCell2.0 enables semi-supervised construction of multimodal metacell atlases
Herault, L.; Gabriel, A. A.; Duc, B.; Dolfi, B.; Shah, A.; Joyce, J. A.; Gfeller, D.
AI Summary
- SuperCell2.0 is introduced as a workflow for constructing semi-supervised multimodal metacells from large single-cell datasets.
- It was found that multimodal metacells outperform single-modality metacells, enhancing inter-modality consistency and integration of multiomic data.
- The workflow identified interferon-primed monocytes and macrophages in blood and tumor samples, with markers used to characterize this population in healthy donors.
Abstract
Multimodal single-cell atlases comprising hundreds of thousands of cells provide unique resources for exploring complex biological tissues and generating testable hypotheses. To streamline the analysis of such large datasets, we introduce SuperCell2.0, a robust workflow to build (semi-)supervised multimodal metacells. We demonstrate that multimodal metacells outperform metacells built with a single modality, improve inter-modality consistency, and facilitate integration of multiomic single-cell datasets. SuperCell2.0 can further leverage full or partial cell type annotations to improve metacell quality. This workflow enables us to construct multimodal metacell atlases from blood and tumor samples and identifies interferon-primed monocytes and macrophages in the circulation and in the tumor microenvironment. Markers derived from the metacell analysis enable us to sort and phenotypically characterize this population in healthy donors. Overall, our work demonstrates how SuperCell2.0 facilitates the analysis of large multimodal single-cell atlases.
bioinformatics · 2026-02-20 · v1
wavess 1.2: Presenting an HLA-aware within-host virus sequence simulation framework
Lapp, Z.; Leitner, T.
AI Summary
- The study extends the wavess framework to simulate within-host virus sequence evolution by incorporating an HLA-aware CD8+ CTL response and variable recombination rates.
- This allows for more accurate modeling of virus sequences, especially in regions influenced by CTLs, and supports investigations into how these mechanisms affect within-host evolution.
Abstract
Motivation: Understanding how virus sequences are shaped by selection can inform vaccine design and transmission inference. Modeling within-host evolution to interrogate these questions requires a detailed mechanistic framework that accurately captures sequence diversification. The CD8+ cytotoxic T-lymphocyte (CTL) response plays an important role in immune-mediated selection and can leave strong signatures in virus sequences; however, existing sequence-based within-host virus modeling frameworks do not explicitly include an HLA-aware CTL response. Results: We extended our previously published within-host sequence evolution simulator, wavess, to include an explicit CTL response, and share a method for identifying HLA-specific CTL epitopes given a founder virus sequence. We also updated the model to permit a variable recombination rate, which allows for modeling recombination hotspots, non-adjacent genes, and segmented genomes. These extensions to wavess allow for more accurate simulation of viruses and virus genes, particularly in regions of the genome where the immune response is dominated by CTLs (rather than antibodies). They also provide the foundation for investigations of how these newly-added biological mechanisms influence within-host evolution. Availability and implementation: The core of wavess is written in Python 3, with helper functions written in R. It is available at https://github.com/MolEvolEpid/wavess.
bioinformatics · 2026-02-20 · v1
Prediction of ligand-dependent conformational sampling of ABC transporters by AlphaFold3 and correlation to experimental structures and energetics
Tang, Q.; Mchaourab, H.; Wu, T.; Soubasis, B.
AI Summary
- This study uses AlphaFold3 to predict nucleotide-dependent conformational changes in ABC transporters, comparing these predictions to experimental structures.
- AlphaFold3 accurately samples known conformations and correlates with experimental dynamics, also predicting previously unobserved conformations.
- The study suggests that AlphaFold3's predictions might extrapolate from known structures, as sequence determinants influence the predicted conformational changes.
Abstract
The AlphaFold3 architecture represented an important leap relative to AlphaFold2 by enabling the inclusion of protein ligands in the prediction network. Ligand-dependent structural rearrangements are inherently difficult to predict computationally as they imply transitions between states separated by large energy differences. Here we apply AlphaFold3 to predict nucleotide-dependent changes in the conformational cycle of representative ABC transporters that have been extensively investigated by experimental structural biology techniques. We show that under similar conditions, AlphaFold3 predictions sample experimentally observed conformations. Moreover, the heterogeneity of these predictions correlates with experimental measures of dynamics obtained from multiple techniques. For a couple of the tested transporters, the implied relative energetics of the conformations mirror their experimental counterparts. Remarkably, AlphaFold3 predicts previously unobserved conformations that have been implied to be sampled by ABC transporters. Finally, we report preliminary results showing that postulated sequence determinants of conformational changes modify the predictions of AlphaFold3. Although hundreds of ABC transporter structures have been determined and were included in the training data of AF3, we propose that aspects of its predictions reflect extrapolation of principles learned from these structures.
bioinformatics · 2026-02-20 · v1
Chemical Probes in Scientific Literature: Expanding and Validating Target-Disease Evidence
Adasme, M. F.; Ochoa, D.; Lopez, I.; Do, H.-M.-A.; McDonagh, E. M.; O'Boyle, N. M.; Leach, A. R.; Zdrazil, B.
AI Summary
- This study systematically analyzed over 18 million articles to quantify the impact of 561 chemical probes, identifying 5,558 unique target-disease associations.
- Findings showed chemical probe evidence precedes structured data by 1-7 years, revealed 353 new T-D pairs, and 135 high-confidence associations for therapeutic repurposing in rare diseases.
- Chemical probes were crucial for validating target-disease associations, enhancing evidence beyond correlative data like RNA expression.
Abstract
Chemical probes are indispensable tools for validating therapeutic hypotheses, yet their broader impact on early-stage drug discovery remains unquantified. To our knowledge, this study represents the first systematic, large-scale investigation of the chemical probe literature. By screening over 18 million articles using a high-quality dictionary of 561 chemical probes, we identified 20,000 articles mentioning a chemical probe which resulted in 5,558 unique target-disease (T-D) associations. Our analysis yields four principal findings that redefine the utility of these chemicals: First, we show that chemical probe evidence typically precedes the appearance of structured data in major knowledge bases by 1-7 years, providing a crucial lead time for target prioritisation. Second, we identified 353 T-D pairs (6.4%) with no prior evidence in the Open Targets Platform, highlighting the approach's discovery potential. Third, the application of strict novelty filters uncovered 135 new high-confidence associations between targets and diseases, revealing distinct opportunities for therapeutic repurposing in non-oncological, rare autoimmune diseases, and diseases without effective therapies due to complex biology or high treatment resistance. Finally, we demonstrate that chemical probes are essential for strengthening evidence, providing functional validation for associations previously supported only by weaker, correlative data such as RNA expression or animal models. Collectively, these findings illustrate that chemical probes catalyse early therapeutic discovery, emphasising the importance of cataloguing existing probes and identifying new ones.
bioinformatics · 2026-02-20 · v1
Differential analysis of image-based chromatin tracing data with Dory
Ma, Z.; Liu, M.; Wang, S.; Wang, S.; Zang, C.
AI Summary
- Dory is a statistical method designed for differential analysis of chromatin tracing data to identify spatial pattern differences between two groups.
- It quantifies pairwise spatial distances and uses multi-level statistical tests to detect significant structural changes, producing a differential score matrix.
- Application of Dory revealed associations between chromatin structural changes and alterations in A/B compartments, promoter-enhancer interactions, and gene expression.
Abstract
Spatial organization of the genome plays a vital role in defining cell identity and regulating gene expression. The three-dimensional (3D) genome structure can be measured by sequencing-based techniques such as Hi-C usually on the cell population level or by imaging-based techniques such as chromatin tracing at the single-cell level. Chromatin tracing is a multiplexed DNA fluorescence in situ hybridization (FISH)-based method that can directly map the 3D positions of genomic loci along individual chromosomes at single-molecule resolution. However, few computational tools are available for statistical differential analysis of chromatin tracing data, which are inherently high-dimensional, highly variable and contain many missing values. Here, we present Dory, a statistical method for identifying differential spatial patterns between two groups of chromatin traces. Dory quantifies pairwise spatial distances among genomic regions in a chromatin trace and applies multi-level statistical tests to detect significant structural differences between the two groups of traces. It produces a differential score matrix highlighting region pairs with significant distance difference. Applying Dory to multiple chromatin tracing datasets, we found that the detected chromatin structural changes were associated with alterations in A/B compartments and promoter-enhancer interactions correlated with differential gene expression. Dory is a robust and user-friendly computational tool for quantitative analysis of imaging-based 3D genome data that enables systematic exploration of chromatin architecture and its roles in gene regulation.
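Illustrative note: a toy version of the comparison Dory performs is sketched below, computing pairwise locus distances per trace and then testing each locus pair between two groups. The Mann-Whitney test and -log10(p) score stand in for Dory's multi-level statistics and are assumptions for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def pairwise_distances(trace):
    """Pairwise 3D distances among imaged loci of one chromatin trace
    (an (n_loci, 3) array; NaN rows would mark missing loci)."""
    diff = trace[:, None, :] - trace[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

def differential_score_matrix(traces_a, traces_b, n_loci):
    """Toy two-group comparison: for each locus pair, test whether spatial
    distances differ between groups, ignoring missing values; returns a
    -log10(p) matrix (a stand-in for Dory's multi-level statistics)."""
    d_a = np.stack([pairwise_distances(t) for t in traces_a])
    d_b = np.stack([pairwise_distances(t) for t in traces_b])
    scores = np.zeros((n_loci, n_loci))
    for i in range(n_loci):
        for j in range(i + 1, n_loci):
            x, y = d_a[:, i, j], d_b[:, i, j]
            x, y = x[~np.isnan(x)], y[~np.isnan(y)]
            if len(x) > 2 and len(y) > 2:
                _, p = mannwhitneyu(x, y)
                scores[i, j] = scores[j, i] = -np.log10(p)
    return scores

rng = np.random.default_rng(2)
group_a = [rng.normal(size=(20, 3)) for _ in range(30)]
group_b = [rng.normal(size=(20, 3)) * 1.3 for _ in range(30)]   # "expanded" traces
print(differential_score_matrix(group_a, group_b, 20).max())
```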
bioinformatics · 2026-02-20 · v1
Learning heritable multimodal brain representation via contrastive learning
Xia, T.; Zhao, X.; Islam, S. S. M.; Mohammed, K. K.; Xie, Z.; Zhi, D.
AI Summary
- This study introduces a multimodal contrastive learning framework using paired T1- and T2-weighted MRIs to derive heritable brain representations.
- The approach improves prediction of traditional imaging-derived phenotypes, age, and brain disorders compared to single-modality models.
- GWAS on these representations showed increased genetic loci overlap, revealing shared biological targets and enhancing genetic discovery.
Abstract
Magnetic resonance imaging (MRI)-derived phenotypes (IDPs) have enabled the discovery of numerous genomic loci associated with brain structure and function. However, most existing IDPs and learned representations are derived from a single imaging modality, missing complementary information across modalities and potentially limiting the scope of genetic discovery. Here, we introduce a multimodal contrastive learning framework to derive heritable representations from paired T1- and T2-weighted MRIs. Unlike single-modality reconstruction-based models, we designed a momentum-based contrastive learning framework. As a result, our approach offers improved prediction of traditional IDPs, age, and brain disorders. Notably, genome-wide association studies (GWAS) of the learned representations reveal a substantially higher overlap of genetic loci across modalities, indicating improved alignment of their underlying genetic architecture. Analysis of the GWAS loci identified shared protein and drug targets, yielding meaningful biological insights. Overall, our framework learns shared representations across brain imaging modalities that exhibit anatomical and genetic coherence.
bioinformatics · 2026-02-20 · v1
A statistical framework for defining synergistic anticancer drug interactions
Dias, D.; Zobolas, J.; Ianevski, A.; Aittokallio, T.
AI Summary
- The study developed a statistical framework to identify significant synergistic anticancer drug interactions by establishing reference null distributions from a large dataset of over 2,000 drug combinations across 125 cancer cell lines.
- This approach allowed for the calculation of empirical p-values, confirming known synergistic combinations and revealing novel ones that were previously overlooked.
- The framework was also applied to a smaller dataset, demonstrating its general applicability in detecting significant drug combination effects.
Abstract
Synergistic drug combinations have the potential to delay drug resistance and improve clinical outcomes. However, current cell-based screens lack robust statistical assessment to identify significant synergistic interactions for downstream experimental or clinical validation. Leveraging a large-scale dataset that systematically evaluated more than 2,000 drug combinations across 125 pan-cancer cell lines, we established reference null distributions separately for various synergy metrics and cancer types. These data-driven reference distributions enable estimation of empirical p-values to assess the significance of observed drug combination effects, thereby standardizing synergy detection in future studies. The statistical evaluation confirmed key synergistic combinations and uncovered novel combination effects that met stringent statistical criteria, yet were overlooked in the original analyses. We revealed cell context-specific drug combination effects across the tissue types and differences in statistical behavior of the synergy metrics. To demonstrate the general applicability of our approach to smaller-scale studies, we applied the reference distributions to evaluate the significance of combination effects in an independent dataset. We provide a fast and statistically rigorous approach to detecting synergistic drug interactions in combinatorial screens.
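Illustrative note: once a reference null distribution of synergy scores is available, the empirical p-value of an observed combination is a simple tail fraction. A minimal sketch with a synthetic stand-in null (not the published reference distributions) follows.

```python
import numpy as np

def empirical_p_value(observed, null_scores):
    """One-sided empirical p-value: fraction of reference null synergy scores
    at least as large as the observed score, with a +1 correction so the
    p-value is never exactly zero."""
    null_scores = np.asarray(null_scores)
    return (1 + np.sum(null_scores >= observed)) / (1 + len(null_scores))

# Toy usage with a synthetic stand-in null (the study builds nulls per
# synergy metric and cancer type from measured combinations).
rng = np.random.default_rng(3)
null_scores = rng.normal(loc=0.0, scale=5.0, size=10_000)
print(empirical_p_value(12.0, null_scores))   # small: candidate synergy
print(empirical_p_value(1.0, null_scores))    # large: consistent with the null
```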
bioinformatics · 2026-02-19 · v3
SpecLig: Energy-Guided Hierarchical Model for Target-Specific 3D Ligand Design
Zhang, P.; Han, R.; Kong, X.; Chen, T.; Ma, J.
AI Summary
- SpecLig is introduced as a framework for generating small molecules and peptides with enhanced target affinity and specificity, addressing the issue of promiscuous binding in structure-based models.
- It uses a hierarchical SE(3)-equivariant variational autoencoder and an energy-guided geometric latent-diffusion model, incorporating chemical priors to favor pocket-complementary fragment combinations.
- Evaluations show that SpecLig's ligands bind with high specificity and affinity, with real applications demonstrating reduced off-target risks.
Abstract
Structure-based generative models often optimize single-target affinity while ignoring specificity, resulting in the generation of high-affinity candidates that exhibit promiscuous binding across unrelated targets. This decoupling of affinity and specificity not only compromises therapeutic efficacy but also elevates off-target risks that constrain translational potential. Therefore, we introduce SpecLig, a unified structure-based framework that jointly generates small molecules and peptides with improved target affinity and specificity. SpecLig represents a complex as a block-based graph, combining a hierarchical SE(3)-equivariant variational autoencoder with an energy-guided geometric latent-diffusion model. Chemical priors derived from block-block contact statistics are explicitly incorporated, biasing generation towards pocket-complementary fragment combinations. We benchmark SpecLig on peptide and small-molecule tasks using standard public datasets and propose precision/breadth testing paradigms to quantify specificity. Across multiple evaluations, ligand candidates generated by SpecLig usually bind to the target pocket with high specificity and affinity while maintaining competitive advantages in other attributes. Ablations indicate that both hierarchical representation and energy guidance contribute to success. Finally, we present multiple real applications that demonstrate how SpecLig improves ligands in natural complexes to mitigate potential off-target risks. SpecLig, therefore, provides a practical route to prioritize higher-specificity designs for downstream experimental validation. The code is available at: https://github.com/CQ-zhang-2016/SpecLig.
bioinformatics · 2026-02-19 · v3
Convergence of Angiotensin Signaling on Lung Pericyte and Stromal Behaviors
Benjamin, K. J. M.; Gonye, E.; Sauler, M.; Gidner, S.; Malinina, A.; Neptune, E. R.
AI Summary
- The study investigated the expression of angiotensin receptors AGTR1 and AGTR2 in human lung tissue using bulk and single-nucleus transcriptomics, finding AGTR1 in lung pericytes and AGTR2 in alveolar epithelial type 2 cells.
- AGTR1 expression in pericytes was linked to pericyte behaviors; its inhibition restored pericyte numbers in an emphysema model, suggesting a role in airspace repair.
- In COPD, AGTR1 showed dysregulated expression in stromal cells, and angiotensin II with cigarette smoke exposure impaired pericyte migration and proliferation.
Abstract
The renin-angiotensin system is a well-characterized regulator of tissue homeostasis whose clinical relevance has expanded to include lung disorders such as chronic obstructive pulmonary disease (COPD)-associated emphysema, idiopathic pulmonary fibrosis, and COVID-19. Despite this interest, the cell-specific localization of angiotensin receptors in the human lung has remained poorly defined, in part due to limitations of available antibody reagents. Here, we define the expression patterns of the two predominant angiotensin receptors, AGTR1 and AGTR2, using complementary bulk and single-nucleus transcriptomic datasets from human lung tissue. We demonstrate that these receptors exhibit mutually exclusive, compartment-specific localization, with AGTR1 expressed in lung pericytes and AGTR2 expressed in alveolar epithelial type 2 cells. AGTR1 is detectable in isolated lung pericytes, and spatial colocalization with pericyte markers was confirmed within the airspace microvasculature compartment by RNAscope. Airspace pericyte abundance was reduced in an experimental emphysema model but restored by pharmacologic attenuation of AGTR1 signaling, commensurate with airspace repair. In COPD lungs, AGTR1 expression showed heterogeneous, disease-associated dysregulation across stromal populations, including upregulation in alveolar fibroblasts. Bulk transcriptomics also revealed aging-associated redistribution of AGTR1 expression into stromal compartments. Angiotensin II and cigarette smoke impaired pericyte migration toward endothelial cells, while combined exposure suppressed pericyte proliferation. Together, these findings identify AGTR1 as a new highly selective marker of lung pericytes and a regulator of pericyte behaviors within the airspace microvasculature. These findings provide a cell-resolved framework for angiotensin signaling with direct relevance to airspace resilience and therapeutic targeting.
bioinformatics · 2026-02-19 · v2
jazzPanda: A hybrid approach to find spatial marker genes in imaging-based spatial transcriptomics data
Jin, X.; Putri, G. H.; Cheng, J.; Asselin-Labat, M.-L.; Smyth, G. K.; Phipson, B.
AI Summary
- The study introduces jazzPanda, a hybrid method for identifying spatial marker genes in imaging-based spatial transcriptomics, which integrates spatial coordinates of gene detections and cells.
- jazzPanda uses a binning approach to pseudobulk gene detections and cells within clusters, enhancing marker gene analysis through linear models.
- Testing on datasets from Xenium, CosMx, and MERSCOPE showed that jazzPanda's marker genes have strong spatial correlation and increased specificity compared to existing methods.
Abstract
Spatial transcriptomics enables the understanding of the spatial architecture of tissues, providing deeper insight into tissue structure and cellular neighbourhoods. A crucial step in the analysis of spatial data is cell type identification. In single cell RNA-sequencing (scRNA-seq) analysis, cells are clustered according to their transcriptional similarity, and marker genes for each cluster identified. Marker analysis identifies genes highly expressed in each cluster compared to the remaining clusters, and these marker genes are used to annotate clusters with cell types. For spatial data, there are few software tools for marker gene detection that account for the spatial distribution of gene expression. Tools developed for scRNA-seq ignore spatial information for the cells and genes. We have developed a hybrid approach to prioritize marker genes that uses the spatial coordinates of gene detections and cells making up clusters. We propose a binning approach that effectively "pseudobulks" gene detections and cells within clusters that can then be used as input into linear models for marker analysis. Our approach can account for multiple samples and background noise. We have tested our methods on several public datasets from different platforms including Xenium, CosMx and MERSCOPE. The marker genes detected by our method show strong spatial correlation with the corresponding clusters and have increased specificity compared to other methods. The method is implemented in the jazzPanda R Bioconductor package and is publicly available (https://bioconductor.org/packages/jazzPanda).
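Illustrative note: the binning/"pseudobulking" step can be sketched as a 2D grid aggregation of transcript detections into a bin-by-gene count table, which could then feed a linear-model marker analysis. The square grid, bin size, and toy detection table below are assumptions; jazzPanda's actual binning and multi-sample handling are richer.

```python
import numpy as np
import pandas as pd

def bin_counts(detections, n_bins=50, x_range=(0, 1000), y_range=(0, 1000)):
    """Pseudobulk transcript detections into a square spatial grid, returning
    a (bin x gene) count table suitable for linear-model marker analysis.
    `detections` needs x, y and gene columns."""
    x_edges = np.linspace(*x_range, n_bins + 1)
    y_edges = np.linspace(*y_range, n_bins + 1)
    bx = np.clip(np.digitize(detections["x"], x_edges) - 1, 0, n_bins - 1)
    by = np.clip(np.digitize(detections["y"], y_edges) - 1, 0, n_bins - 1)
    return (detections.assign(bin=bx * n_bins + by)
                      .groupby(["bin", "gene"]).size()
                      .unstack(fill_value=0))

rng = np.random.default_rng(4)
toy = pd.DataFrame({
    "x": rng.uniform(0, 1000, 5000),
    "y": rng.uniform(0, 1000, 5000),
    "gene": rng.choice(["EPCAM", "PTPRC", "COL1A1"], 5000),
})
print(bin_counts(toy).shape)     # (occupied bins, 3 genes)
```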
bioinformatics · 2026-02-19 · v2
Differential analysis of genomics count data with edge*
Pachter, L.
AI Summary
- The study addresses the integration of edgeR, a tool for differential expression analysis, into the Python ecosystem, which is prevalent in single-cell genomics.
- They developed edgePython, a Python version of edgeR 4.8.2, incorporating a negative binomial gamma mixed model for multi-subject single-cell analysis and empirical Bayes shrinkage for cell-level dispersion.
- Key findings include the successful adaptation of edgeR to Python, enhancing its utility in single-cell genomic studies.
Abstract
The edgeR Bioconductor package is one of the most widely used tools for differential expression analysis of count-based genomics data. Despite its popularity, the R-only implementation limits its integration with the Python-centric ecosystem that has become dominant in single-cell genomics. We present edgePython, a Python port of edgeR 4.8.2 that extends the framework with a negative binomial gamma mixed model for multi-subject single-cell analysis and empirical Bayes shrinkage of cell-level dispersion.
bioinformatics · 2026-02-19 · v2
Harnessing DNA Foundation Models for Cross-Species Transcription Factor Binding Site Prediction in Plant Genomes
Haghani, M.; Dhulipalla, K. V.; Li, S.
AI Summary
- This study evaluates the performance of DNA foundation models (DNABERT-2, AgroNT, HyenaDNA) in predicting transcription factor binding sites (TFBSs) in plant genomes using Arabidopsis thaliana and Sisymbrium irio data.
- The models were benchmarked against specialized methods like DeepBind and BERT-TFBS.
- HyenaDNA showed superior predictive accuracy and computational efficiency, suggesting potential for scalable genome-wide TFBS prediction in plants.
Abstract
Accurate prediction of transcription factor binding sites (TFBSs) is crucial for understanding gene regulation. While experimental methods like ChIP-seq and DAP-seq are informative, they are labor-intensive and species-specific. Recent advancements in large-scale pretrained DNA foundation models have shown promise in overcoming these limitations. This study evaluates the performance of three such models, DNABERT-2, AgroNT, and HyenaDNA, in predicting TFBSs in plants. Using Arabidopsis thaliana and Sisymbrium irio DAP-seq data, we benchmark their accuracy against specialized methods like DeepBind and BERT-TFBS. Our results demonstrate that foundation models, particularly HyenaDNA, offer superior predictive accuracy and computational efficiency, highlighting their potential for scalable, genome-wide TFBS prediction in plants.
bioinformatics · 2026-02-19 · v2
Fine-tuning protein language models on human spatial constraint improves variant effect prediction by reducing wild-type sequence bias
Bajracharya, G.; Capra, J. A.
AI Summary
- The study introduces Human Spatial Constraint (HuSC), which quantifies intraspecies constraint on missense variants by integrating human genetic variation with 3D protein structures.
- Fine-tuning protein language models (PLMs) on HuSC scores enhances prediction of variant effects by reducing bias towards wild-type sequences.
- HuSC outperforms traditional conservation metrics in predicting pathogenic variants and improves variant fitness predictions across different taxa and assays.
Abstract
Protein language models (PLMs) achieve state-of-the-art performance in predicting effects of missense variants, yet they do not explicitly consider variation within the human population. Here, we introduce Human Spatial Constraint (HuSC), a framework for quantifying intraspecies constraint on missense variants that integrates population-scale human genetic variation with 3D protein structures. We then fine-tune PLMs on HuSC scores. HuSC models the expected frequency of missense variation under neutral evolution and compares it to observed variation, accounting for both variation in mutational processes and 3D structural context. HuSC outperforms traditional inter- and intraspecies conservation metrics in predicting pathogenic variants. By focusing on intraspecies variation, HuSC reveals protein sites under human-specific constraint that cannot be captured by interspecies models. Integrating this intraspecies perspective into PLMs by fine-tuning on HuSC scores improves the prediction of variant fitness from deep mutational scans across diverse taxa and functional assay types. The improvement after fine-tuning comes largely from reducing bias toward wild-type sequences in regions that tolerate variation. Together, these results demonstrate that combining intraspecies constraint with cross-species PLMs improves their performance in variant-effect interpretation.
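Illustrative note: one plausible, hypothetical way to score spatial constraint is to pool observed versus expected missense counts over residues that are close in 3D and take their ratio. The sketch below does exactly that with C-alpha coordinates and an 8 Å radius; the radius, the pooling, and the simple ratio are assumptions, not the HuSC definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def spatial_constraint(ca_coords, observed, expected, radius=8.0):
    """Hypothetical per-residue spatial constraint score: observed vs expected
    missense counts pooled over all residues within `radius` angstroms of each
    residue (C-alpha coordinates). Ratios well below 1 suggest depletion of
    variation, i.e. constraint."""
    neighbors = cKDTree(ca_coords).query_ball_point(ca_coords, r=radius)
    ratios = np.empty(len(ca_coords))
    for i, idx in enumerate(neighbors):
        ratios[i] = observed[idx].sum() / max(expected[idx].sum(), 1e-9)
    return ratios

rng = np.random.default_rng(5)
coords = rng.normal(scale=15, size=(120, 3))       # fake C-alpha positions
expected = rng.uniform(0.5, 2.0, 120)              # from a neutral mutational model
observed = rng.poisson(expected * 0.6)             # depleted relative to expectation
print(spatial_constraint(coords, observed, expected).mean())   # roughly 0.6
```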
bioinformatics · 2026-02-19 · v2
The practical impact of numerical variability on structural MRI measures of Parkinson's disease
Chatelain, Y. M. B.; Sokołowski, A.; Sharp, M.; Poline, J.-B.; Glatard, T.
AI Summary
- The study investigated how numerical variability in MRI analyses affects structural measures in Parkinson's disease using FreeSurfer to simulate computational differences.
- Numerical variability was found to be significant, reaching up to one-third of population variability, impacting statistical conclusions.
- A tool was developed to estimate the Numerical-Population Variability Ratio (NPVR), revealing a high probability of false positives and negatives in existing Parkinson's disease MRI studies due to numerical variability.
Abstract
Numerical variability is rarely quantified in neuroimaging despite many biomarkers relying on subtle morphometric differences across individuals. We instrumented FreeSurfer, a widely used neuroimaging pipeline, to simulate numerical differences across computational environments, and used it to measure numerical variability in MRI analyses of Parkinson's disease patients and controls. In multiple cortical and subcortical regions, numerical variation reached nearly one-third of the population variability, altering statistical conclusions about group differences and clinical associations. To assess the impact of numerical noise in existing studies, we developed a practical tool that estimates the Numerical-Population Variability Ratio (NPVR) in a study, and propagates the resulting numerical uncertainty to common statistics and associated p-values. By applying this framework to thirteen previously published studies reporting MRI measures of Parkinson's disease, we quantified the probability of numerically induced false positives and false negatives in the literature, highlighting a substantial impact of numerical variability on MRI measures of Parkinson's disease. These results underscore the importance of systematically evaluating numerical stability in neuroimaging and provide a practical framework to do so.
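Illustrative note: the NPVR itself is a ratio of numerical to population variability; the sketch below computes it and shows one naive way numerical noise could be folded into a reported standard error. The inflation formula is an illustrative simplification, not the authors' propagation method.

```python
import numpy as np

def npvr(numerical_sd, population_sd):
    """Numerical-Population Variability Ratio: numerical variability of a
    measure expressed as a fraction of its between-subject variability."""
    return numerical_sd / population_sd

def inflate_standard_error(se, npvr_value):
    """Naive propagation (an assumption): treat numerical noise as extra
    independent variance on top of the sampling variance behind a reported
    standard error."""
    return se * np.sqrt(1.0 + npvr_value ** 2)

# Toy numbers for one region's cortical thickness (mm): across-subject SD vs
# SD across perturbed re-runs of the same analysis.
ratio = npvr(numerical_sd=0.04, population_sd=0.12)
print(round(ratio, 2))                        # ~0.33, one-third of population variability
print(inflate_standard_error(0.010, ratio))   # slightly wider uncertainty
```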
bioinformatics · 2026-02-19 · v2
Pioneer and Altimeter: Fast Analysis of DIA Proteomics Data Optimized for Narrow Isolation Windows
Wamsley, N. T.; Wilkerson, E. M.; Major, M. B.; Goldfarb, D.
AI Summary
- The study introduces Pioneer and Altimeter, tools designed for fast analysis of DIA proteomics data, addressing challenges posed by narrow isolation windows in mass spectrometry.
- Altimeter models fragment intensity as a function of collision energy, allowing spectral library reuse, while Pioneer re-isotopes spectra and uses advanced techniques for efficient analysis.
- These tools enable high-confidence protein identification and quantification, performing analyses 2-6 times faster while controlling false-discovery rates across various experimental setups.
Abstract
Advances in mass spectrometry have enabled increasingly fast data-independent acquisition (DIA) experiments, producing datasets whose scale and complexity challenge existing analysis tools. Those same advances have also led to the use of narrow isolation windows, which alter MS2 spectra via fragment isotope effects and give rise to systematic deviations from spectral libraries. Here we introduce Pioneer and Altimeter, open-source tools for fast DIA analysis with explicit modeling of isolation-window effects. Altimeter predicts deisotoped fragment intensity as a continuous function of collision energy, allowing a single spectral library to be reused across datasets. Pioneer re-isotopes predicted spectra per scan and combines an intensity-aware fragment index, spectral deconvolution, and dual-window quantification for fast, spectrum-centric DIA analysis. Across instruments, experimental designs, and sample inputs, Pioneer enables high-confidence identification and precise quantification at scale, completing analyses 2-6x faster and maintaining conservative false-discovery rate control.
bioinformatics · 2026-02-19 · v2
Investigating the topological motifs of inversions in pangenome graphs
Romain, S.; Dubois, S.; Legeai, F.; Lemaitre, C.
AI Summary
- This study investigated how inversions are represented in pangenome graphs, focusing on identifying topological motifs for inversion bubbles.
- Two motifs were identified: path-explicit and alignment-rescued, and a tool was developed to annotate these from bubble-caller outputs.
- Analysis across four pipelines showed significant differences in inversion representation, with low recovery rates in real human datasets, indicating challenges in pangenomic inversion analysis.
Abstract
Background: Pangenome graphs are increasingly used in genetic diversity analyses because they reduce reference bias in read mapping and enhance variant discovery and genotyping from SNPs to Structural Variants. In pangenome graphs, variants appear as bubbles, which can be detected by dedicated bubble calling tools. Although these tools report essential information on the variant bubbles, such as their position and allele walks in the graph, they do not annotate the type of the detected variants. While simple SNPs, insertions, and deletions are easily distinguishable by allele size, large balanced variants like inversions are harder to differentiate among the large number of unannotated bubbles and remain underexplored in pangenome graph benchmarks and analyses. Results: In this work we focused on inversions, which have been drawing renewed attention in evolutionary genomics studies in recent years, and aimed to assess how this type of variant is handled by state-of-the-art pangenome graph pipelines. We identified two distinct topological motifs for inversion bubbles: one path-explicit and one alignment-rescued, and developed a tool to annotate them from bubble-caller outputs. We constructed pangenome graphs with both simulated data and real data using four state-of-the-art pipelines, and assessed the impact of inversion size, genome divergence and variant density on inversion representation and accuracy. Conclusions: Our results reveal substantial differences between pipelines in simulated graphs, with some inversions either misrepresented or lost. In addition, recovery rates are strikingly low in real human datasets, highlighting major challenges in analyzing inversions through pangenomic approaches.
bioinformatics · 2026-02-19 · v2
NanoHIVSeq: A Long-Read Bioinformatics Pipeline for High-Throughput Processing of HIV Env Sequences
Sheng, Z.; Xiao, Q.; Qiao, Y.; Lu, H.; McWhirter, J.; Sagar, M.; Wu, X.
AI Summary
- NanoHIVSeq is a UMI-free bioinformatics pipeline designed for high-throughput sequencing of HIV-1 Env gene using Oxford Nanopore Technology (ONT).
- It processes ONT data through clustering, consensus polishing, indel correction, denoising, and genotyping to recover functional Env variants.
- Testing on plasmid env and bulk HIV datasets showed NanoHIVSeq's high robustness, reproducibility, and accuracy (>99.9% or >Q30), comparable to UMI methods.
Abstract
High-throughput sequencing of the HIV-1 envelope (Env) gene from viral quasispecies is essential for epidemiology, virus-antibody coevolution studies, and evaluating therapeutics, but the conventional single-genome amplification (SGA) coupled with Sanger sequencing is labor-intensive and low-throughput. Oxford Nanopore Technology (ONT) offers long-read sequencing advantages, but high error rates (1-7%) pose a challenge in distinguishing biological variants from sequencing artifacts. Unique molecular identifiers (UMIs) reduce usable DNA template and add complexity to library preparation; here we introduce NanoHIVSeq, a UMI-free and reference-free bioinformatics pipeline that processes ONT data from bulk Env PCR amplicons through multistep clustering, consensus polishing, indel correction, denoising, and genotyping to recover functional full-length Env variants. By leveraging advanced ONT duplex sequencing technology, NanoHIVSeq was assessed using plasmid env and bulk HIV reservoir datasets, demonstrating high robustness, recovery rate, reproducibility, and accuracy (>99.9% or >Q30) comparable to UMI approaches. Our findings indicated that NanoHIVSeq allows flexible and simplified ONT library preparation for reproducible and efficient Env sequencing, especially for large cohorts.
bioinformatics · 2026-02-19 · v1
Foundation Models Improve Perturbation Response Prediction
Cole, E.; Huizing, G.-J.; Addagudi, S.; Ho, N.; Hasanaj, E.; Kuijs, M.; Johnstone, T.; Carilli, M.; Davi, A.; Ellington, C.; Feinauer, C.; Li, P.; Menegaux, R.; Mohammadi, S.; Shao, Y.; Zhang, J.; Lundberg, E.; Song, L.; Bar-Joseph, Z.; Xing, E. P.
AI Summary
- The study analyzed over 600 models to assess the effectiveness of foundation models in predicting cellular responses to genetic or chemical perturbations.
- Findings showed that while some foundation models did not outperform simple baselines, others significantly enhanced prediction accuracy.
- Integrating multiple foundation models was shown to approach fundamental performance limits, confirming their utility in improving cellular response simulations.
Abstract
Predicting cellular responses to genetic or chemical perturbations has been a long-standing goal in biology. Recent applications of foundation models to this task have yielded contradictory results regarding their superiority over simple baselines. We conducted an extensive analysis of over 600 different models across various prediction tasks and evaluation metrics, demonstrating that while some foundation models fail to outperform simple baselines, others significantly improve predictions for both genetic and chemical perturbations. Furthermore, we developed and evaluated methods for integrating multiple foundation models for perturbation prediction. Our results show that with sufficient data, these models approach fundamental performance limits, confirming that foundation models can improve cellular response simulations.
bioinformatics · 2026-02-19 · v1
A Machine Learning and Benchmarking Approach for Molecular Formula Assignment of Ultra High-Resolution Mass Spectrometry Data from Complex Mixtures
Shabbir, B.; Oliveira, P. B.; Fernandez-Lima, F.; Saeed, F.
AI Summary
- This study applies machine learning, specifically KNN, DTR, and RFR algorithms, to improve molecular formula assignment in ultra-high resolution mass spectrometry (UHRMS) data from complex mixtures like dissolved organic matter (DOM).
- The approach was benchmarked against traditional methods, showing a 43% increase in formula annotations (5796 vs 4047) and up to 2x more formulas assigned with Model-Synthetic (8268 vs 4047).
- DTR and RFR achieved formula-level accuracies of 86.5% and 60.4%, respectively, enhancing the reliability of characterizing complex systems in environmental science, metabolomics, and petroleomics.
Abstract
A machine learning approach to molecular formula assignment is crucial for unlocking the full potential of ultra-high resolution mass spectrometry (UHRMS) when analyzing complex mixtures. By combining data-driven models with rigorous benchmarking, the accuracy, consistency, and speed in identifying plausible molecular formulas from vast spectral datasets can be improved. Compared with traditional de novo methods that rely heavily on rule-based heuristics and manual parameter tuning, machine learning approaches can capture complex patterns in data and adapt more readily to diverse sample types. In this paper, we describe the application of machine learning methods using the k-nearest neighbors (KNN) algorithm trained on curated chemical formula datasets of UHRMS analysis of dissolved organic matter (DOM) covering the saline river continuum and tropical wet/dry season variability. The influence of the mass accuracy (training set with 0.15-1 ppm) was evaluated on a blind test set of DOMs of different geographical origins. A Decision Tree Regressor (DTR) and Random Forest Regressor (RFR) based on mass accuracy (<1 ppm) were also used. Our ML models annotated 43% more formulas than traditional methods (5,796 vs 4,047), and Model-Synthetic achieved a 99.9% assignment rate and assigned 2x more formulas (8,268 vs 4,047). DTR and RFR achieved formula-level accuracies (FA) of 86.5% and 60.4%, respectively. Overall, results show an increase in formula assignment when compared with traditional methods. This ultimately enables more reliable characterization of complex natural and engineered systems, supporting advances in fields such as environmental science, metabolomics, and petroleomics. Furthermore, the novel data set produced for this study is made publicly available, establishing an initial benchmark for molecular formula assignment in UHRMS using machine learning. The dataset and code are publicly available at: https://github.com/pcdslab/dom-formula-assignment-using-ml
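Illustrative note: the core idea of nearest-neighbor formula assignment, predicting element counts for a measured monoisotopic mass from curated examples, can be sketched with a tiny hand-made training table. The five reference compounds, the single mass feature, and n_neighbors=1 are toy assumptions; the study's models are trained on curated DOM assignments.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Tiny hand-made training table: monoisotopic mass -> element counts (C, H, O, N).
# A real training set would come from curated DOM formula assignments.
reference = [
    ("C6H12O6",   180.0634, (6, 12, 6, 0)),   # glucose
    ("C9H11NO2",  165.0790, (9, 11, 2, 1)),   # phenylalanine
    ("C7H6O2",    122.0368, (7, 6, 2, 0)),    # benzoic acid
    ("C5H5N5",    135.0545, (5, 5, 0, 5)),    # adenine
    ("C8H10N4O2", 194.0804, (8, 10, 2, 4)),   # caffeine
]
masses = np.array([[m] for _, m, _ in reference])
element_counts = np.array([c for _, _, c in reference])

knn = KNeighborsRegressor(n_neighbors=1).fit(masses, element_counts)
query_mass = np.array([[180.0630]])
print(knn.predict(query_mass))   # element counts of the nearest curated formula
```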
bioinformatics · 2026-02-19 · v1
Benchmarking Large Language Models for Predicting Therapeutic Antisense Oligonucleotide Efficacy
Wei, Z.; Griesmer, S.; Sundar, A.
AI Summary
- This study benchmarks large language models (LLMs) and molecular embedding models for predicting the efficacy of therapeutic antisense oligonucleotides (ASOs) using datasets PFRED, openASO, and ASOptimizer.
- DNA sequence-based representations with gene context were found to be superior to SMILES-based representations for predicting ASO efficacy.
- GPT-3.5-Turbo with few-shot prompting showed the best performance, achieving R² values up to 0.6381, significantly outperforming baseline regression models.
Abstract
Antisense oligonucleotides (ASOs) are a promising class of therapeutic agents capable of selectively modulating gene expression and treating a wide range of genetic and neurological disorders. Accurate prediction of ASO efficacy is essential for accelerating drug discovery and reducing experimental costs, yet remains a challenging computational task due to complex sequence-function relationships. In this study, we benchmark large language models (LLMs) and molecular embedding-based regression models for predicting therapeutic ASO efficacy across three publicly available biological datasets: PFRED, openASO, and ASOptimizer. We evaluate multiple transformer-based molecular embedding models, including ChemBERTa and MolFormer, alongside prompt-engineered LLM configurations such as GPT-3.5-Turbo, LLaMA-2-7B, and Galactica-6.7B. Our results demonstrate that DNA sequence-based representations combined with gene context outperform SMILES-based molecular representations for efficacy prediction. Among the evaluated approaches, GPT-3.5-Turbo using few-shot prompting achieves strong predictive performance, reaching coefficient of determination (R squared) values up to 0.6381 and substantially outperforming baseline regression models. These findings highlight the potential of general-purpose large language models as effective tools for biomolecular prediction and computational drug discovery. This work provides a systematic benchmarking framework and establishes a foundation for integrating large language models into therapeutic antisense oligonucleotide design pipelines.
bioinformatics · 2026-02-19 · v1
Experimental Time Points Guided Transcriptomic Velocity Inference
Zang, X.; Shu, X.; Zhang, N.; Wu, Y.; Deng, M.; Zhou, X.; Yang, J.; Zhang, C.-Y.; Wang, X.; Zhou, Z.; Wang, J.
AI Summary
- The study introduces CellDyc, a semi-supervised learning framework that uses experimental time points to enhance the reconstruction of cellular trajectories via transcriptomic velocities.
- CellDyc outperforms existing methods in various contexts, providing insights like temporal heterogeneity in erythroid maturation and delayed monocyte differentiation in glioblastoma.
- It integrates well with tools like CellRank and remains effective with inferred temporal data.
Abstract
Time-series single-cell RNA sequencing enables longitudinal tracking of biological processes, yet cellular trajectory reconstruction informed by experimental time remains challenging. Existing trajectory inference methods either perform de novo reconstruction without leveraging experimental time points, or prioritize transitions between time points while paying less attention to intra-time-point dynamics. To reconcile experimental time points with local precision, we present CellDyc, a semi-supervised learning framework that leverages experimental time-point supervision to reconstruct transcriptomic velocities and recover an intrinsic gene-embedded time. CellDyc consistently outperforms existing approaches in reconstructing cellular trajectories across development, disease, and reprogramming contexts. Biologically, CellDyc provides novel insights, such as resolving temporal heterogeneity in erythroid maturation and quantitatively demonstrating that the immunosuppressive environment delays monocyte differentiation in glioblastoma. CellDyc integrates seamlessly with downstream tools like CellRank and remains robust even when only inferred temporal information is available. Collectively, CellDyc offers a rigorous, data-driven solution for deciphering time-resolved cellular dynamics.
bioinformatics · 2026-02-19 · v1
NaVis: a virtual microscopy framework for interactive, high-resolution navigation of spatial transcriptomics data
Oshinjo, A.; Wu, J.; Petrov, P.; Izzi, V.
AI Summary
- NaVis is a web-based virtual microscopy framework designed to enhance the exploration of spatial transcriptomics data by providing interactive, high-resolution navigation.
- It allows for near real-time super-resolution inference from low-resolution platforms, transforming resolution into a user-controlled parameter.
- NaVis offers a point-and-click interface, making it accessible to non-coders and facilitating direct interrogation of spatial molecular architecture, thus broadening its use in biological research.
Abstract
Despite the wide adoption of spatial transcriptomics (ST) in the biomedical community, its practical use remains constrained by a fundamental resolution/coverage trade-off and by reliance on computationally intensive and static workflows. As a result, transcriptome-wide spatial data are typically interpreted as ad-hoc processed outputs rather than explored dynamically as one would do with stained or fluorescence tissue images, limiting ST accessibility and slowing biological insight. Here we introduce NaVis, a web-based virtual microscopy framework that redefines how spatial transcriptomics is experienced. NaVis enables near real-time, on-demand super-resolution inference from low-resolution whole-transcriptome platforms (10x Genomics Visium V1/V2, Cytassist and VisiumHD), generating high-resolution reconstructions that approach microscopy-level detail while preserving transcriptome-wide coverage. Unlike conventional interpolation approaches that produce fixed images, NaVis computes and refines spatial reconstructions interactively as users navigate tissue sections, transforming resolution from a platform-imposed constraint into a dynamic, user-controlled parameter. NaVis is also delivered through a fully point-and-click browser interface requiring no coding expertise, thus removing computational mediation and allowing clinicians, pathologists and experimental researchers to directly interrogate spatial molecular architecture. By coupling high-resolution inference with immediate visual interaction, NaVis shifts spatial transcriptomics from a static computational analysis to an exploratory, microscopy-like modality, broadening its accessibility, conceptual reach, and potential for biological discoveries.
bioinformatics · 2026-02-19 · v1
Identification of an ERCC2 mutation associated mutational signature of nucleotide excision repair deficiency in targeted panel sequencing data
Stojkova, O.; Borcsok, J.; Sztupinszki, Z.; Diossy, M.; Prosz, A.; Neil, A.; Mouw, K. W.; Sorensen, C. S.; Szallasi, Z.
AI Summary
- This study developed a method to identify a mutational signature of nucleotide excision repair (NER) deficiency from targeted panel sequencing data in bladder cancer with ERCC2 mutations.
- ERCC2 wild type bladder cancers with high levels of this signature showed better response to neoadjuvant platinum therapy and improved survival.
- The signature was also observed in other solid tumors with ERCC2 mutations, suggesting potential therapeutic targeting beyond bladder cancer.
Abstract
Next-generation sequencing-based mutational signatures are frequently used to identify tumors with specific DNA repair deficiencies for targeted therapeutic strategies. Although mutational signatures are most commonly derived from whole exome (WES) or whole genome sequencing (WGS) data, more patients currently undergo tumor sequencing using more limited targeted panels that typically encompass several hundred cancer-associated genes. Identifying clinically relevant mutational signatures from targeted panel data requires new approaches capable of deriving signatures from the more limited sequencing data. Here, we derive and validate a panel sequencing-based composite mutational signature associated with nucleotide excision repair (NER) deficiency induced by inactivating ERCC2 mutations in bladder cancer. Using publicly available panel sequencing data, we find that ERCC2 wild-type (WT) bladder cancer cases that have high levels of this mutational signature respond better to neoadjuvant platinum therapy and have improved overall survival compared to ERCC2 WT cases with low levels of the signature. We also find that other solid tumor types with ERCC2 mutations show the characteristic mutational signature seen in NER-deficient ERCC2-mutant bladder cancers, suggesting a novel approach to therapeutically target these ERCC2-mutant solid tumors beyond bladder cancer.
bioinformatics2026-02-19v1Spartan: Spatial Activation Aware Transcriptomic Analysis Network
Faiz, M. F. I.; Jokl, E.; Jennings, R.; Piper Hanley, K.; Sharrocks, A.; Iqbal, M.; Baker, S. M.AI Summary
- Spartan is a new framework designed to improve the identification of spatial domains in spatial transcriptomics by modeling spatial transitions and using Local Spatial Activation (LSA) to enhance resolution.
- It integrates spatial topology and activation signals to accurately partition tissues across various technologies like Visium HD, MERFISH, Stereo-seq, and STARmap.
- Applied to a high-resolution Visium HD section of developing human esophagus and stomach, Spartan effectively delineates transitional regions and detects genes linked to tissue remodeling.
Abstract
Spatial transcriptomics is rapidly advancing toward single-cell resolution, revealing complex tissue architectures organized across continuous anatomical gradients. However, accurate identification of spatial domains remains a central computational challenge, as many existing clustering approaches blur anatomical boundaries, merge transitional zones, or fail to resolve localized microstructures. Here we introduce Spartan, an activation-aware multiplex graph framework that explicitly models spatial transitions for high-resolution domain discovery. Spartan integrates spatial topology with Local Spatial Activation (LSA), a neighborhood deviation signal that amplifies localized transcriptomic shifts often attenuated by similarity-based clustering. By jointly modeling cohesion within domains and activation at interfaces, Spartan recovers anatomically aligned partitions across spatially resolved transcriptomics technologies including Visium HD, MERFISH, Stereo-seq, and STARmap. We demonstrate its utility in a high-resolution Visium HD section of developing human esophagus and stomach, where activation-aware graph integration enables precise delineation of transitional regions such as the gastroesophageal junction and supports stable multi-scale domain recovery without fragile hyperparameter tuning. Beyond domain identification, Spartan leverages activation-aware structure to detect spatially variable genes associated with localized tissue remodeling. Spartan scales near-linearly with dataset size, providing a robust and interpretable framework for spatial systems-level analysis.
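The "neighborhood deviation signal" idea behind LSA — scoring how far each spot's expression sits from that of its spatial neighbourhood — can be illustrated with a minimal sketch. The k-nearest-neighbour rule and the deviation norm below are illustrative assumptions, not the authors' exact definition.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_spatial_activation(coords, expr, k=6):
    """Per-spot deviation of expression from the mean of its k spatial neighbours.

    coords: (n_spots, 2) spatial positions; expr: (n_spots, n_genes) normalised expression.
    Returns an (n_spots,) score; high values mark localized transcriptomic shifts.
    """
    tree = cKDTree(coords)
    _, idx = tree.query(coords, k=k + 1)              # k+1: each spot is its own nearest neighbour
    neigh_mean = expr[idx[:, 1:]].mean(axis=1)        # (n_spots, n_genes) neighbourhood profile
    return np.linalg.norm(expr - neigh_mean, axis=1)  # magnitude of the local deviation

# toy example: 100 random spots, 50 genes
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, size=(100, 2))
expr = rng.poisson(5, size=(100, 50)).astype(float)
print(local_spatial_activation(coords, expr)[:5])
```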
bioinformatics2026-02-19v1In silico degradomics reveals disease- and endotype-specific alterations in the joint tissue landscape
Hoyle, A.; Midwood, K. S.AI Summary
- The study developed DegrAID, an in-silico pipeline to analyze semi-tryptic peptides from proteomic data, mapping neo-epitopes in matrix proteins without the need for labeling or enrichment.
- Applied to osteoarthritis and rheumatoid arthritis (RA) patient samples, DegrAID identified distinct degradomes in different tissues and highlighted disease-specific degradation patterns.
- In RA, different endotypes (myeloid and lymphoid) showed varied degradation patterns, with proteoglycans more degraded in myeloid-RA and collagens in lymphoid-RA, revealing endotype-specific biomarkers.
Abstract
Tissues dynamically remodel extracellular matrix to maintain homeostasis, alterations in which are an early pathogenic hallmark of disease. Protein degradation, essential for tissue remodelling, is often dismissed as indiscriminate damage, despite evidence of its specificity. A major determinant of protein tissue levels and activity, matrix proteolysis also creates circulating degradation products that are emerging biomarkers, with specific collagen fragments capable of tracking disease severity. Understanding intentional matrix destruction is therefore key to understanding tissue biology. Unbiased, holistic analysis, extending our knowledge beyond ubiquitously expressed collagens, will uncover tissue- and disease-specific remodelling. However, the technical demands of degradomics, which requires labelling and enrichment for neo-epitopes generated by cleavage events, restrict its inclusion in omics research. Here, we develop an in-silico pipeline (DegrAID) that identifies semi-tryptic peptides in unlabelled/unenriched proteomic datasets, maps neo-epitopes within matrix domain organization and 3D structure, and correlates these with known/predicted protease sites, and we apply it to rare patient cohorts. Validation with matched degradomic data showed good conservation across degraded proteins and cleavage sites. Interrogation of multiple, independent cohorts, including cartilage, synovial tissue and synovial fluid from osteoarthritis (OA) or rheumatoid arthritis (RA) patients, identified distinct degradomes between diseases and tissue compartments. Further investigation of RA heterogeneity revealed that the myeloid and lymphoid endotypes, which display different treatment responses, have substantially different degradation patterns. Proteoglycans were more degraded in myeloid-RA, while collagens were more degraded in lymphoid-RA, with notable exceptions, and endotype-specific fingerprints were conserved between synovial tissue and fluid. Thus, this tool provides new insights into tissue remodelling by unlocking degradomes from any proteomic dataset.
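Semi-tryptic peptides — peptides that conform to the trypsin cleavage rule at only one terminus — are the raw signal such a pipeline mines from standard proteomic data. A minimal classification sketch, using the conventional trypsin rule (cleavage after K/R, not before P) as an assumption about the definition:

```python
def tryptic_status(protein: str, peptide: str) -> str:
    """Classify a peptide found in `protein` as 'tryptic', 'semi-tryptic' or 'non-tryptic'.

    Assumed trypsin rule: cleavage C-terminal to K/R, suppressed before P.
    Protein termini count as conforming boundaries.
    """
    start = protein.find(peptide)
    if start < 0:
        raise ValueError("peptide not found in protein")
    end = start + len(peptide)  # one past the last residue

    # N-terminal boundary conforms if the peptide starts the protein,
    # or the preceding residue is K/R and the peptide does not start with P.
    n_ok = start == 0 or (protein[start - 1] in "KR" and peptide[0] != "P")
    # C-terminal boundary conforms if the peptide ends the protein,
    # or it ends in K/R and the next residue is not P.
    c_ok = end == len(protein) or (peptide[-1] in "KR" and protein[end] != "P")

    if n_ok and c_ok:
        return "tryptic"
    if n_ok or c_ok:
        return "semi-tryptic"   # one non-tryptic end = candidate in-vivo cleavage neo-epitope
    return "non-tryptic"

protein = "MKTAYIAKQRQISFVKSHFSR"
print(tryptic_status(protein, "QISFVK"))   # tryptic (R|Q ... K|S)
print(tryptic_status(protein, "ISFVK"))    # semi-tryptic (non-tryptic N-terminus)
```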
bioinformatics2026-02-19v1Hi-Cformer enables multi-scale chromatin contact map modeling for single-cell Hi-C data analysis
Wu, X.; Chen, X.; Jiang, R.AI Summary
- Hi-Cformer is a transformer-based method designed to model multi-scale chromatin contact maps from single-cell Hi-C data, addressing challenges like sparsity and uneven contact distribution.
- It uses a specialized attention mechanism to capture dependencies across genomic regions and scales, providing robust low-dimensional cell representations and clearer cell type separation.
- Hi-Cformer accurately imputes chromatin interactions, identifies 3D genome features, and extends to cell type annotation with high accuracy across different datasets.
Abstract
Single-cell Hi-C captures the three-dimensional organization of chromatin in individual cells and provides insights into fundamental genomic processes such as gene regulation and transcription. While analyses of bulk Hi-C data have revealed multi-scale chromatin structures like A/B compartments and topologically associating domains, single-cell Hi-C data remain challenging to analyze due to sparsity and uneven distribution of chromatin contacts across genomic distances. These characteristics lead to strong signals near the diagonal and complex multi-scale local patterns in single-cell contact maps. Here, we propose Hi-Cformer, a transformer-based method that simultaneously models multi-scale blocks of chromatin contact maps and incorporates a specially designed attention mechanism to capture the dependencies between chromatin interactions across genomic regions and scales, enabling the integration of both global and fine-grained chromatin interaction features. Building on this architecture, Hi-Cformer robustly derives low-dimensional representations of cells from single-cell Hi-C data, achieving clearer separation of cell types compared to existing methods. Hi-Cformer can also accurately impute chromatin interaction signals associated with cellular heterogeneity, including 3D genome features such as topologically associating domain-like boundaries and A/B compartments. Furthermore, by leveraging its learned embeddings, Hi-Cformer can be extended to cell type annotation, achieving high accuracy and robustness across both intra- and inter-dataset scenarios.
bioinformatics2026-02-18v2BioGraphX: Bridging the Sequence-Structure Gap via Physicochemical Graph Encoding for Interpretable Subcellular Localization Prediction
Saeed, A.; Abbas, W.AI Summary
- BioGraphX introduces a novel encoding framework that constructs protein interaction graphs from sequences using biochemical rules, bypassing the need for 3D structure determination.
- The framework integrates 158 interpretable biophysical features with ESM-2 embeddings, enhancing prediction accuracy on the DeepLoc benchmarks.
- SHAP analysis reveals that BioGraphX-Net uses sequence profiles for exclusion and specific biophysical features for precise localization, with Frustration features aiding in resolving targeting ambiguities.
Abstract
Computational approaches for protein subcellular localization prediction are important for understanding cellular mechanisms and developing treatments for complex diseases. However, a critical limitation of current methods is their lack of interpretability: while they can predict where a protein localizes, they fail to explain why the protein is assigned to a specific location. Moreover, traditional approaches rely on Anfinsen's principle, which assumes that protein behavior is determined by its native three-dimensional structure, requiring a costly and time-consuming determination process. Here, we propose BioGraphX, a novel encoding framework that constructs protein interaction graphs directly from protein sequences using biochemical rules. This approach eliminates the need for three-dimensional structure determination by encoding 158 interpretable features grounded in biophysical principles. Building upon this representation, BioGraphX-Net demonstrates superior performance on the DeepLoc benchmarks by integrating ESM-2 embeddings with the proposed features via a gating mechanism. Gating analysis shows that although ESM-2 embeddings provide strong contributions, BioGraphX features function as high-precision filters. SHAP analysis shows that BioGraphX-Net encodes a sophisticated biophysical logic: sequence profiles act as universal exclusion filters, while organelle-specific combinations of biophysical features enable precise compartment discrimination. Notably, Frustration features help resolve targeting ambiguities in complex compartments, reflecting evolutionary constraints while preventing mislocalization from sequence mimicry. The framework has the additional advantage of promoting Green AI in bioinformatics, achieving performance comparable to the state of the art while maintaining a minimal parameter count of 13.46 million. In summary, BioGraphX not only provides accurate predictions but also offers new insights into the language of life.
bioinformatics2026-02-18v2Short linear motifs - Underexplored players driving Toxoplasma gondii infection
Alvarado Valverde, J.; Lapouge, K.; Boergel, A.; Remans, K.; Luck, K.; Gibson, T.AI Summary
- The study explores the role of short linear motifs in Toxoplasma gondii's infection process, focusing on how these motifs facilitate interactions with host proteins.
- A computational pipeline was developed to identify motifs in Toxoplasma secreted proteins, revealing 24,291 motif matches in 295 proteins.
- Experimental validation confirmed the presence of TRAF6-binding motifs in Toxoplasma proteins RON10 and GRA15, highlighting the utility of motif predictions in understanding infection mechanisms.
Abstract
Pathogens infect hosts by interacting with host proteins and exploiting their functions to their advantage. Short linear motifs, small functional regions within intrinsically disordered protein regions, are common mediators of host-pathogen protein interactions. While motifs have been more extensively studied in viruses and bacteria, the extent to which eukaryotic unicellular parasites use motifs during infection remains unexplored. Toxoplasma gondii is a widespread intracellular Apicomplexan parasite capable of infecting all warm-blooded animals and invading any of their nucleated cells. Toxoplasma's secreted proteins are key in interacting with host proteins during infection, making them potential sources for motifs. To highlight the role of motifs in Toxoplasma gondii infection, we curated 21 known motif instances in Toxoplasma proteins from the scientific literature. To identify more motifs in Toxoplasma secreted proteins, we developed a computational pipeline that annotates putative motif matches with structural and functional features. Through this approach, we identified a set of 24,291 motif matches in 295 secreted proteins. We highlight strategies for further prioritisation of likely functional motif matches by focusing on integrin motifs, degrons and TRAF6-binding motifs. We subjected four predicted TRAF6-binding motifs to experimental validation, supporting the predicted motifs in the Toxoplasma proteins RON10 and GRA15. Our motif predictions provide a valuable resource for generating hypotheses and designing experiments to study infection mechanisms. The characterisation of motifs in Toxoplasma will be key to understanding the molecular principles underlying its broad host range and more comprehensive Apicomplexan infection strategies.
bioinformatics2026-02-18v2Structural Characterization of the Type IV Secretion System in Brucella melitensis for Virtual Screening-Based Therapeutic Targeting
Kapoor, J.; Panda, A.; Rajagopal, R.; Kumar, S.; Bandyopadhyay, A.AI Summary
- The study focused on characterizing the Type IV Secretion System (T4SS) in Brucella melitensis to explore its potential as a therapeutic target for brucellosis.
- Computational modeling and structural analysis of T4SS components were performed, revealing conserved architecture with E. coli T4SS despite low sequence identity.
- Virtual screening identified three promising drug candidates (Ezetimibe, Chlordiazepoxide, Alloin) targeting the VirB11 ATPase dimeric interface, with favorable binding energies confirmed by molecular dynamics simulations.
Abstract
Brucellosis is a globally important zoonotic disease caused by Brucella melitensis, the most virulent and clinically significant species affecting both humans and livestock. Unlike many Gram-negative pathogens, B. melitensis, a facultative intracellular pathogen, lacks conventional virulence factors and instead relies on specialized systems such as the Type IV Secretion System (T4SS) for secretion of effector proteins. In this study, an integrated computational pipeline was implemented to identify, model, and assemble the T4SS components, encoded by the virB operon, from the complete B. melitensis proteome. Template-based modeling strategies were employed to generate structures of T4SS subcomplexes, referencing crystallographic data from the E. coli T4SS. Structural superposition with E. coli homologs revealed a highly conserved architecture despite only 30 to 50% sequence identity. Stereochemical validation confirmed high model quality and favorable interactions among most VirB protein pairs. Membrane insertion analysis of the membrane-embedded assemblies further corroborated the spatial orientation of the modeled T4SS. The potential of the T4SS as a drug target was explored by targeting the dimeric interface of the VirB11 ATPase to disrupt protein-protein interactions and thereby disarm the pathogen. Virtual screening of compounds from the DrugBank database identified compounds with docking scores below -7.0 kcal/mol, which were then filtered on ADMET properties, yielding three promising candidates: Ezetimibe (Drug Id: DB00973), Chlordiazepoxide (Drug Id: DB00475), and Alloin (Drug Id: DB15477). MM-GBSA analysis estimated favorable binding free energies for these compounds, and 200 ns molecular dynamics simulations further confirmed the stability of the protein-ligand interactions. Collectively, these findings provide new insights into the architecture of the B. melitensis T4SS and identify three potential drug molecules targeting the T4SS, supporting FDA-approved drug repurposing as an effective strategy for anti-virulence therapy against brucellosis.
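The screening funnel described — dock, keep compounds scoring below -7.0 kcal/mol, then filter on ADMET — reduces to a simple table operation. The column names, ADMET flag, and toy scores below are hypothetical placeholders, not the study's actual fields or results.

```python
import pandas as pd

# Hypothetical docking/ADMET results table; real column names and values will differ.
hits = pd.DataFrame({
    "drugbank_id": ["DB00973", "DB00475", "DB15477", "DB00316"],
    "docking_score_kcal_mol": [-8.2, -7.6, -7.3, -6.1],
    "admet_pass": [True, True, True, True],
})

# Step 1: docking-score cutoff quoted in the abstract (< -7.0 kcal/mol).
# Step 2: keep only compounds passing the (placeholder) ADMET filter.
shortlist = hits[(hits["docking_score_kcal_mol"] < -7.0) & (hits["admet_pass"])]
print(shortlist["drugbank_id"].tolist())   # ['DB00973', 'DB00475', 'DB15477']
```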
bioinformatics2026-02-18v1KG-Orchestra: An Open-Source Multi-Agent Framework for Evidence-Based Biomedical Knowledge Graphs Enrichment.
Mohamed, A. H.; Shalaby, K. S.; Kaladharan, A.; Atas Guvenilir, H.; Tom Kodamullil, A.AI Summary
- KG-Orchestra is a multi-agent framework designed to enrich biomedical knowledge graphs (BKGs) by focusing on specific topics, using Retrieval-Augmented Generation (RAG) for evidence acquisition, validation, and integration.
- Evaluations on specialized contexts like Nelivaptan-Alzheimer's link and gut-brain axis interactions showed that Qwen 3 variants and hybrid retrieval strategies improved reasoning and evidence relevance.
- The framework ensures high triplet integrity and biological validity, is computationally flexible, and supports applications like drug repurposing and pathway completion.
Abstract
Biomedical Knowledge Graphs (BKGs) offer integrative representations of complex biology, yet their utility is compromised by the limitations of current construction methods: manual curation offers high fidelity but is unscalable, whereas purely automated Large Language Model (LLM) approaches often yield broad networks lacking mechanistic granularity. We present KG-Orchestra, an open-source multi-agent framework designed to build specialized, directional, cause-and-effect BKGs by enriching seed graphs. The framework focuses on increasing granularity within specific topics by leveraging Retrieval-Augmented Generation (RAG) to autonomously acquire, validate, and integrate evidence. The system orchestrates specialized agents for retrieval, schema alignment, and triplet validation with explicit, traceable provenance, transforming sparse seeds into dense, high-resolution resources. We evaluated KG-Orchestra on two specialized contexts -- the mechanistic link between Nelivaptan and Alzheimer's Disease (NADKG) and the complex probiotic interactions within the gut-brain axis (ProPreSyn-GBA) -- across varying computational budgets. Our benchmarking results demonstrate that Qwen 3 variants deliver superior reasoning performance and that hybrid retrieval strategies significantly enhance evidence relevance. Furthermore, the multi-agent architecture ensures high triplet integrity and biological validity through iterative cross-checking and self-correction. The framework remains computationally flexible, deploying from single laptop GPUs to high-performance clusters. By bridging knowledge gaps and adding context-aware entities, KG-Orchestra increases reliability while validating seed assertions against up-to-date sources. This versatility supports critical downstream applications, including completing missing mechanistic pathways, integrating novel entities for drug repurposing, constructing targeted subgraphs from entity lists, and retroactively validating graph evidence for transparent auditing.
bioinformatics2026-02-18v1Wayfarer: A multiscale framework for spatial analysis of tumor progression
Moses, L.; Herault, A.; Cabon, L.; Dumitrascu, B.AI Summary
- Wayfarer is a multiscale framework designed to analyze how spatial association metrics in tumor progression change across different spatial scales using spatial -omics data.
- Applied to Xenium data from lung adenocarcinoma, Wayfarer revealed that tumor progression involves shifts in spatial patterns, with increased fine-scale coherence in ERBB2-high regions and coarse-scale clustering of immune markers.
- This framework transforms spatial aggregation from a confounder into a diagnostic tool, available as an R package via Bioconductor.
Abstract
Spatial biology spans multiple length scales, from intracellular organization to tissue-level architecture. Spatial transcriptomics captures this structure, yet most analyses operate at a single spatial resolution, implicitly assuming that biological organization is scale-consistent. In practice, spatial autocorrelation and co-localization are functions of scale, and conclusions can depend on arbitrary aggregation choices. Here we present Wayfarer, a multiscale framework for spatial -omics that tracks how spatial association metrics evolve across nested spatial aggregations, enabling statistical comparison of multiscale structure across biological conditions. Using Xenium data from lung adenocarcinoma (LUAD), we show that spatial patterns often co-exist at fine and coarse scales and that progression is accompanied by reproducible shifts in scale-response profiles. These include increased fine-scale coherence of ERBB2-high tumor regions and coarse-scale clustering of immune-associated markers that are not apparent at a single resolution. Wayfarer converts spatial aggregation from a confounder into a diagnostic signal and is implemented as an R package to be released through Bioconductor.
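The core idea — tracking how a spatial association statistic changes as observations are aggregated into coarser units — can be sketched with Moran's I computed on square bins of increasing size. The binning scheme and rook adjacency below are illustrative assumptions, not Wayfarer's implementation.

```python
import numpy as np

def morans_i(grid):
    """Moran's I on a 2D grid with rook (4-neighbour) adjacency; NaN cells ignored pairwise."""
    x = grid - np.nanmean(grid)
    num, wsum = 0.0, 0.0
    for di, dj in [(0, 1), (1, 0)]:                       # horizontal and vertical neighbour pairs
        a = x[:grid.shape[0] - di, :grid.shape[1] - dj]
        b = x[di:, dj:]
        m = ~np.isnan(a) & ~np.isnan(b)
        num += 2 * np.sum(a[m] * b[m])                    # symmetric binary weights
        wsum += 2 * m.sum()
    return (np.sum(~np.isnan(grid)) / wsum) * (num / np.nansum(x ** 2))

def scale_profile(coords, values, bin_sizes):
    """Aggregate point measurements into square bins of growing size and record Moran's I."""
    profile = {}
    for s in bin_sizes:
        ij = np.floor(coords / s).astype(int)
        ij -= ij.min(axis=0)
        shape = tuple(ij.max(axis=0) + 1)
        counts, sums = np.zeros(shape), np.zeros(shape)
        np.add.at(counts, (ij[:, 0], ij[:, 1]), 1)
        np.add.at(sums, (ij[:, 0], ij[:, 1]), values)
        grid = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
        profile[s] = morans_i(grid)
    return profile

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(2000, 2))
values = np.sin(coords[:, 0] / 15) + rng.normal(scale=0.5, size=2000)  # coarse pattern + noise
print(scale_profile(coords, values, bin_sizes=[2, 5, 10, 20]))          # autocorrelation rises with scale
```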
bioinformatics2026-02-18v1Private Information Leakage from Polygenic Risk Scores
Nikitin, K.; Gursoy, G.AI Summary
- This study investigates the privacy risks of sharing Polygenic Risk Scores (PRSs), demonstrating that PRSs can be used to reconstruct parts of an individual's genome.
- Using dynamic programming and population-based likelihood estimation, the research shows how a single PRS value can reveal genotypes, with increased accuracy when combining multiple PRSs.
- The authors propose an analytical framework to evaluate privacy risks and suggest methods for sharing PRS models while maintaining utility.
Abstract
Polygenic Risk Scores (PRSs) estimate the likelihood that individuals will develop complex diseases based on their genetic variation. While their use in clinical practice and direct-to-consumer genetic testing is growing, the privacy implications of publicly sharing PRS values are often underestimated. In this work, we demonstrate that PRSs can be exploited to recover genotypes and to de-anonymize individuals. We describe how to reconstruct a portion of an individual's genome from a single PRS value by using dynamic programming and population-based likelihood estimation, which we experimentally demonstrate on PRS panels of up to 50 variants. We highlight the risks of combining multiple, even larger-panel PRSs to improve genotype-recovery accuracy, which can lead to the re-identification of individuals or their relatives in genomic databases or to the prediction of additional health risks not originally associated with the disclosed PRSs. We then develop an analytical framework to assess the privacy risk of releasing individual PRS values and provide a potential solution for sharing PRS models without decreasing their utility. Our tool and instructions to reproduce our calculations can be found at https://github.com/G2Lab/prs-privacy.
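The genotype-recovery idea can be illustrated with a small dynamic program: scale the PRS weights to integers, then enumerate genotype vectors (0/1/2 copies per variant) whose weighted sum reproduces the observed score. The toy weights below are invented, and the paper's method additionally ranks candidates by population allele-frequency likelihood; this sketch only shows the combinatorial core.

```python
def genotypes_matching_score(weights, score, scale=1000):
    """Enumerate genotype vectors g (g_i in {0,1,2}) with sum(w_i * g_i) == score.

    Weights and score are scaled to integers so the DP table has exact keys.
    Weights are assumed non-negative in this sketch (allows early pruning).
    """
    w = [round(x * scale) for x in weights]
    target = round(score * scale)
    reachable = {0: [[]]}                  # partial sum -> list of partial genotype vectors
    for wi in w:
        nxt = {}
        for s, paths in reachable.items():
            for g in (0, 1, 2):
                ns = s + wi * g
                if ns > target:            # prune: cannot come back down with non-negative weights
                    continue
                nxt.setdefault(ns, []).extend(p + [g] for p in paths)
        reachable = nxt
    return reachable.get(target, [])

weights = [0.12, 0.05, 0.30, 0.07, 0.21]            # toy effect sizes (all positive)
true_g = [2, 0, 1, 1, 2]
prs = sum(w * g for w, g in zip(weights, true_g))   # 1.03
print(genotypes_matching_score(weights, prs))       # genotype vectors consistent with the score
```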
bioinformatics2026-02-18v1Construction of distinct k-mer color sets via set fingerprinting
Alanko, J. N.; Puglisi, S. J.AI Summary
- This study introduces a Monte Carlo algorithm for constructing distinct k-mer color sets in the colored de Bruijn graph model, focusing on reducing memory usage during index construction.
- The algorithm uses on-the-fly deduplication via incremental fingerprinting, providing a strong error probability bound of 2^(-82).
- Applied to 65,536 S. enterica genomes, it compressed the color sets to 40 GiB in 7 hours and 17 minutes, using only 14 GiB of RAM.
Abstract
The colored de Bruijn graph model is the currently dominant paradigm for indexing large microbial reference genome datasets. In this model, each reference genome is assigned a unique color, typically an integer id, and each k-mer is associated with a color set, which is the set of colors of the reference genomes that contain that k-mer. This data structure supports a variety of pseudoalignment algorithms, which aim to determine the set of genomes most compatible with a query sequence. In most applications, many distinct k-mers are associated with the same color set. In current indexing algorithms, color sets are typically deduplicated and compressed only at the end of index construction. As a result, the peak memory usage can greatly exceed the size of the final data structure, making index construction a bottleneck in analysis pipelines. In this work, we present a Monte Carlo algorithm that constructs the set of distinct color sets for the k-mers directly in any individually compressed form. The method performs on-the-fly deduplication via incremental fingerprinting. We provide a strong bound on the error probability of the algorithm, even if the input is chosen adversarially, assuming that a source of random bits is available at run time. We show that given an SBWT index of 65,536 S. enterica genomes, we can enumerate and compress the distinct color sets of the genomes to 40 GiB on disk in 7 hours and 17 minutes, using only 14 GiB of RAM and no temporary disk space, with an error probability of at most 2^(-82).
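One standard way to fingerprint sets so that duplicates can be detected on the fly is a polynomial (Schwartz-Zippel-style) hash over a large prime field: a color set S maps to the product of (z - c) mod p over its colors c, for a random evaluation point z, so two distinct sets collide with probability bounded by their size divided by p. The sketch below shows that generic scheme as an assumption; the paper's incremental construction and its 2^(-82) bound are specific to its own algorithm.

```python
import random

P = (1 << 89) - 1            # Mersenne prime field; collision probability <= |S| / P per comparison
Z = random.randrange(1, P)   # random evaluation point, drawn once per run

def fingerprint(color_set):
    """Order-independent fingerprint of a set of integer color ids: prod(Z - c) mod P."""
    f = 1
    for c in set(color_set):
        f = (f * (Z - c)) % P
    return f

def extend(f, new_color):
    """Incrementally add one color to an existing fingerprint (assumes it was not present)."""
    return (f * (Z - new_color)) % P

# Incrementality: extending {3,7} by 42 equals fingerprinting {3,7,42} directly.
assert extend(fingerprint([3, 7]), 42) == fingerprint([3, 7, 42])

# Deduplicate k-mer -> color-set assignments without comparing sets element by element.
seen = {}                                   # fingerprint -> representative color set
for colors in ([3, 7, 42], [42, 3, 7], [3, 7]):
    fp = fingerprint(colors)
    if fp not in seen:
        seen[fp] = sorted(set(colors))
print(len(seen), list(seen.values()))       # 2 distinct color sets
```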
bioinformatics2026-02-18v1Modeling the organizational heterogeneity of lipid-enriched microdomains in the neuronal membranes of gray and white matter of Alzheimer brain: A computational lipidomics study
Peesapati, S.; Chakraborty, S.AI Summary
- This study used lipidomics and molecular dynamics simulations to model how Alzheimer's disease (AD) affects lipid composition and membrane organization in gray (GM) and white matter (WM) of the brain.
- Findings indicate that AD leads to significant changes in membrane thickness and microdomain distribution, with more pronounced alterations in GM than WM.
- The study highlights the role of lipid composition in neuronal membrane homeostasis, showing increased cholesterol/ceramide/sphingomyelin domains in GM under AD conditions.
Abstract
Alzheimer disease (AD) is a leading cause of death among the elderly, with no existing treatment. The development of therapy is further challenged by a limited understanding of molecular pathogenesis and the absence of reliable early detection biomarkers. Neuroimaging and lipidomic studies reveal structural and biochemical alterations in both gray and white matter in AD patients, including disruptions in membrane organization and neuronal signaling pathways. In the present work, we employed lipidomics-guided modeling of membranes in gray and white matter regions in healthy and diseased (AD) conditions, and used all-atom molecular dynamics (MD) simulations to examine how AD-associated alterations in lipid composition influence the structure, spatial organization, and micro-heterogeneity of neuronal plasma membranes in different brain regions. The data suggest that AD-associated lipid alterations in gray matter (GM) and white matter (WM) impact membrane thickness and microdomain distribution, highlighting the critical role of lipid composition in maintaining neuronal membrane homeostasis and function. Higher-order cholesterol/ceramide/sphingomyelin-enriched domains are more abundant in the neuronal membranes of the GM region in diseased conditions. Under AD-mimicking conditions, lipidomic analyses demonstrate that neuronal membranes in GM experience more substantial compositional and structural remodeling than those in WM. Our results show significant changes in membrane microdomain distribution across the lipid bilayers, and, interestingly, these changes are more pronounced in the gray matter than in the white matter. This study establishes a framework for modeling tissue-specific lipidomics data to understand how disease-driven compositional changes affect the structure, organization, and dynamics of biological membranes.
bioinformatics2026-02-18v1The Role of Human-Specific lncRNA in Hyaline Cartilage Development
Osone, T.; Takao, T.; Takarada, T.AI Summary
- This study investigates the role of human-specific long non-coding RNAs (lncRNAs) in hyaline cartilage development using human iPS cells differentiated into limb bud-like mesenchymal cells and then into hyaline cartilage-like tissue.
- Bulk RNA sequencing revealed that human-specific lncRNAs are significantly upregulated in the cartilage-like tissue, potentially regulating genes related to the extracellular matrix.
- These findings suggest that controlling human-specific lncRNAs could enhance regenerative cartilage tissue quality and provide insights into human-specific diseases.
Abstract
One of the distinctive characteristics of humans is their bipedalism. To achieve upright bipedal walking, the angles of the pelvis and femur have been altered over the course of evolution. Although evolutionary hypotheses on the transition to bipedalism exist, the molecular mechanisms remain unclear. This study attempts to elucidate these mechanisms using a system for inducing hyaline cartilage-like tissue from human iPS cells via limb bud-like mesenchymal cells. Focus was placed on non-coding RNAs, known for their potential in generating biological diversity. Bulk RNA sequencing was conducted to compare the expression and functions of human-specific long non-coding RNAs between limb bud-like mesenchymal cells and induced hyaline cartilage-like tissue. The results indicated that human-specific lncRNAs, significantly upregulated in hyaline cartilage-like tissue, may regulate genes related to the extracellular matrix. These findings suggest the potential to develop regenerative cartilage tissue with enhanced ECM quality by controlling human-specific lncRNAs. Additionally, studying human-specific lncRNAs could elucidate mechanisms of diseases that are less common in other species but more prevalent in humans.
bioinformatics2026-02-18v1Pioneer and Altimeter: Fast Analysis of DIA Proteomics Data Optimized for Narrow Isolation Windows
Wamsley, N. T.; Wilkerson, E. M.; Major, M.; Goldfarb, D.AI Summary
- The study introduces Pioneer and Altimeter, tools designed for fast analysis of data-independent acquisition (DIA) proteomics data, specifically optimized for narrow isolation windows.
- Altimeter models fragment intensity as a function of collision energy, allowing spectral library reuse, while Pioneer re-isotopes spectra and uses advanced techniques for rapid, accurate DIA analysis.
- These tools enable high-confidence protein identification and quantification, performing analyses 2-6 times faster while controlling false-discovery rates across various experimental setups.
Abstract
Advances in mass spectrometry have enabled increasingly fast data-independent acquisition (DIA) experiments, producing datasets whose scale and complexity challenge existing analysis tools. Those same advances have also led to the use of narrow isolation windows, which alter MS2 spectra via fragment isotope effects and give rise to systematic deviations from spectral libraries. Here we introduce Pioneer and Altimeter, open-source tools for fast DIA analysis with explicit modeling of isolation-window effects. Altimeter predicts deisotoped fragment intensity as a continuous function of collision energy, allowing a single spectral library to be reused across datasets. Pioneer re-isotopes predicted spectra per scan and combines an intensity-aware fragment index, spectral deconvolution, and dual-window quantification for fast, spectrum-centric DIA analysis. Across instruments, experimental designs, and sample inputs, Pioneer enables high-confidence identification and precise quantification at scale, completing analyses 2-6x faster and maintaining conservative false-discovery rate control.
bioinformatics2026-02-18v1Analysis of Transcriptograms in Epithelial-Mesenchymal Transition (EMT)
Santos, O. J.; Dalmolin, R. J.; de Almeida, R. M. C.AI Summary
- This study uses a novel pipeline integrating Transcriptogram with PCA to analyze EMT in single-cell RNA-seq data, reducing noise by projecting data onto PPI-ordered gene lists.
- Applied to TGF-β1-induced MCF10A cells, the method revealed EMT as a systemic reprogramming with distinct cellular trajectories, identifying key modules like a metabolic switch, cell cycle blockade, and a detoxification program.
- The approach enhances the resolution of cellular plasticity, showing EMT involves multiple stages and pathways, not just morphological changes.
Abstract
Single-cell RNA sequencing (single-cell RNA-seq) has represented a revolution in gene expression analysis. However, high dropout rates and stochastic noise often reduce the amount of information captured in these experiments. The epithelial-mesenchymal transition (EMT), which is fundamental to tumor progression and organismal development, is particularly difficult to fully characterize due to the existence of intermediate states. In this work, we demonstrate that projecting transcriptomic data onto gene lists ordered using protein-protein interaction (PPI) information acts as a biological low-pass filter, attenuating technical noise and increasing the statistical power of the analyses. We propose and validate an innovative pipeline that integrates the Transcriptogram method with Principal Component Analysis (PCA). By applying a moving average over functionally ordered genes, we drastically increase the signal-to-noise ratio, enabling the inference of cellular trajectories. The method was applied to a public dataset of TGF-β1-induced MCF10A cells, with rigorous batch-effect correction based on biological controls. The results reveal that EMT is not merely a morphological change, but a coordinated, systemic reprogramming. This approach enabled the identification of critical modules that would remain hidden in conventional analyses: (i) a massive Metabolic Switch (Cluster 2), indicating a transition toward oxidative phosphorylation to sustain invasion; (ii) a strategic blockade of the cell cycle (Cluster 4); and (iii) a Detoxification Shield and chemoresistance program (Cluster 5), characterized by endogenous activation of metallothioneins. We conclude that the combination of PPI network topology and dimensionality reduction offers superior resolution for dissecting cellular plasticity. The method not only validates classical markers, but also reveals the hidden functional architecture of the transition, showing that EMT is not a single, uniform process, but rather one in which cells can follow distinct trajectories, halting at different stages of differentiation.
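The filtering step described — a moving average over genes placed in a PPI-derived order, followed by PCA — is straightforward to sketch. The window size and the random ordering below are placeholders for the Transcriptogram's actual PPI ordering, used only to show the mechanics of the low-pass filter.

```python
import numpy as np

def transcriptogram_smooth(expr, gene_order, window=25):
    """Moving average along a fixed gene ordering (acts as a low-pass filter over the profile).

    expr:       (n_cells, n_genes) expression matrix
    gene_order: permutation of gene indices reflecting PPI proximity (assumed given)
    """
    ordered = expr[:, gene_order]
    kernel = np.ones(window) / window
    # convolve each cell's ordered profile; 'same' keeps the gene axis length
    return np.apply_along_axis(lambda row: np.convolve(row, kernel, mode="same"), 1, ordered)

rng = np.random.default_rng(0)
expr = rng.poisson(2.0, size=(300, 2000)).astype(float)   # toy single-cell counts
order = rng.permutation(2000)                             # stand-in for a PPI-based ordering
smooth = transcriptogram_smooth(np.log1p(expr), order)

# PCA on the smoothed profiles via SVD of the centred matrix
centred = smooth - smooth.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
pcs = centred @ vt[:2].T                                  # first two principal components per cell
print(pcs.shape)                                          # (300, 2)
```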
bioinformatics2026-02-18v1Differential analysis of genomics count data with edge
Pachter, L.AI Summary
- The study introduces edgePython, a Python port of the edgeR package, to facilitate differential expression analysis in the Python-dominated single-cell genomics field.
- edgePython includes a new negative binomial gamma mixed model for multi-subject single-cell analysis and applies empirical Bayes shrinkage to cell-level dispersion.
- This adaptation aims to enhance integration with Python tools while maintaining the core functionalities of edgeR.
Abstract
The edgeR Bioconductor package is one of the most widely used tools for differential expression analysis of count-based genomics data. Despite its popularity, the R-only implementation limits its integration with the Python-centric ecosystem that has become dominant in single-cell genomics. We present edgePython, a Python port of edgeR 4.8.2 that extends the framework with a negative binomial gamma mixed model for multi-subject single-cell analysis and empirical Bayes shrinkage of cell-level dispersion.
bioinformatics2026-02-18v1Fast structural search for classification of gut bacterial mucin O-glycan degrading enzymes
Erden, M.; Schult, T.; Yanagi, K.; Sahoo, J. K.; Kaplan, D. L.; Cowen, L. J.; Lee, K.AI Summary
- The study introduces Deep Enzyme Function Transfer (DEFT), which combines sequence- and structure-based methods to improve the classification of enzymes, particularly at the detailed levels of the EC number hierarchy.
- DEFT first uses a protein language model to assign the first two levels of the EC number, then employs structure-based prediction for the remaining levels, reducing false positives.
- Benchmarking showed DEFT's superior accuracy and efficiency, enabling high-throughput annotations, with experimental validation on glycoside hydrolase profiles of gut bacteria.
Abstract
The Enzyme Commission (EC) numbering scheme provides a hierarchical way to classify enzymes according to their catalytic functions. While recent protein language model (PLM) based approaches like CLEAN and ProteInter have improved sequence-based EC number prediction, they struggle with fine-grained classification at the deepest hierarchical level. Structure-based approaches for grouping similar proteins using alignment tools excel at finding proteins that share overall global structure, but suffer from high false positive rates when classifying proteins that are globally structurally similar yet whose functional differentiation depends on a localized region. This problem is particularly relevant to EC number prediction, as an enzyme's function depends on its catalytic domain, a relatively small, specific region of the protein. We introduce Deep Enzyme Function Transfer (DEFT), which harmonizes sequence- and structure-based approaches through the key insight that PLM-based annotation of the first two EC number hierarchy levels vastly reduces the false positives that arise in purely structure-based EC number prediction. Given an enzyme of interest, DEFT first uses a PLM-based method to assign the first two levels of the enzyme's EC number, and then uses a structure-based method to predict the remaining two levels. Using benchmarking datasets, we demonstrate that DEFT achieves superior accuracy compared to current state-of-the-art tools for EC number prediction. Furthermore, we show that DEFT's computational efficiency enables high-throughput, genome-wide annotation of an organism's enzyme repertoire. We illustrate this capability by experimentally validating DEFT-predicted glycoside hydrolase (GH) profiles of intestinal mucus-associated bacteria.
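The harmonisation logic — trust a sequence model for the first two EC levels, then let structure search decide the last two only among hits that agree on that prefix — can be sketched as a small filter-then-transfer step. The input lists below are hypothetical stand-ins for the outputs of a PLM predictor and a structural search tool, not DEFT's actual interfaces.

```python
def two_stage_ec_assignment(plm_prefix, structure_hits):
    """Combine a PLM-predicted 2-level EC prefix with structure-search hits.

    plm_prefix:     e.g. "3.2" (first two EC levels from a sequence model)
    structure_hits: list of (hit_ec, structural_score); higher score = better match
    Returns the full 4-level EC transferred from the best hit consistent with the prefix.
    """
    consistent = [(ec, s) for ec, s in structure_hits if ec.startswith(plm_prefix + ".")]
    if not consistent:
        return None                          # no structurally similar enzyme agrees with the PLM call
    best_ec, _ = max(consistent, key=lambda t: t[1])
    return best_ec

hits = [("3.2.1.18", 0.71), ("2.7.11.1", 0.88), ("3.2.1.97", 0.65)]
print(two_stage_ec_assignment("3.2", hits))  # '3.2.1.18' -- the higher-scoring kinase hit is filtered out
```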
bioinformatics2026-02-18v1Adaptive Tracepoints for Pangenome Alignment Compression
Kaushan, H.; Marco-Sola, S.; Garrison, E.; Prins, P.; Guarracino, A.AI Summary
- The study introduces adaptive tracepoints, a method for compressing sequence alignments in pangenomes by segmenting alignments based on complexity metrics like edit or diagonal distance, rather than fixed intervals.
- On simulated long sequence alignments, diagonal-bounded tracepoints achieved 10.5-13.7X better compression than fixed-length encodings, while edit-bounded tracepoints offered a balance between compression and reconstruction efficiency.
- Real pangenome data showed compression improvements of 23-139X with no degradation in alignment scores and linear reconstruction time.
Abstract
Motivation: Storing millions of sequence alignments from large-scale genomic comparisons requires efficient compression methods. While fixed-size alignment encodings offer uniform spacing and bounded reconstruction cost, they cannot adapt to variable alignment complexity across sequences, missing compression opportunities in conserved regions. Results: We present adaptive tracepoints, a complexity-aware alignment encoding that segments alignments using configurable complexity metrics (edit distance or diagonal distance) rather than fixed intervals. Segments are bounded by either the number of differences or the deviation from the main diagonal, adapting to local alignment characteristics. Reconstruction guarantees that alignments maintain identical or improved alignment scores. We validate the correctness of our method on simulated and real pangenomes with varying lengths and divergences. Diagonal-bounded tracepoints achieve 10.5-13.7X better compression than fixed-length encodings (l=100) on simulated long sequence alignments (100 Kb), while edit-bounded tracepoints provide a tunable trade-off between compression and reconstruction cost, approaching diagonal-bounded compression at higher thresholds with substantially lower memory and runtime. On real pangenomes (390M alignments), these methods compress alignments by 23-139X relative to uncompressed representations, with no score degradation and reconstruction time linear in alignment length. Availability: Code and documentation are publicly available at https://github.com/AndreaGuarracino/tracepoints, https://github.com/AndreaGuarracino/tpa, and https://github.com/AndreaGuarracino/cigzip. Contact: aguarracino@tgen.org Supplementary information: Supplementary data are available at Bioinformatics online.
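The edit-bounded variant can be sketched directly on a CIGAR-style operation list: walk the alignment and close a segment whenever the accumulated number of differences would exceed the bound, recording only how much query and target each segment consumes. The operation alphabet ('=', 'X', 'I', 'D') and the exact cut rule below are illustrative assumptions, not the published encoding.

```python
def edit_bounded_tracepoints(ops, max_diffs=16):
    """Segment an alignment so that each segment contains at most max_diffs differences.

    ops: iterable of (length, op) with op in '=XID' ('=' match, 'X' mismatch, 'I'/'D' indels).
    Returns a list of (query_advance, target_advance, diffs) per segment; given the sequences,
    each segment can later be re-aligned independently with a small banded aligner.
    """
    segments, q, t, d = [], 0, 0, 0
    for length, op in ops:
        for _ in range(length):                      # unit steps keep the sketch simple
            step_diff = 0 if op == "=" else 1
            if d + step_diff > max_diffs:            # close the segment before exceeding the bound
                segments.append((q, t, d))
                q = t = d = 0
            if op in "=X":
                q += 1; t += 1
            elif op == "I":                          # insertion consumes query only
                q += 1
            else:                                    # 'D' consumes target only
                t += 1
            d += step_diff
    if q or t:
        segments.append((q, t, d))
    return segments

cigar = [(120, "="), (1, "X"), (60, "="), (3, "I"), (200, "="), (2, "D"), (90, "=")]
print(edit_bounded_tracepoints(cigar, max_diffs=3))   # [(183, 181, 3), (291, 292, 3)]
```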
bioinformatics2026-02-18v1Drug-Target Interaction Prediction with PIGLET
Carpenter, K. A.; Altman, R. B.AI Summary
- The study introduces PIGLET, a novel graph transformer method for drug-target interaction (DTI) prediction, which uses a proteome-wide knowledge graph.
- PIGLET was benchmarked against existing models on the Human dataset using random and drug-based splits.
- PIGLET showed similar performance to other models on random splits but outperformed them on the more rigorous drug-based split.
Abstract
Drug-target interaction (DTI) prediction is a key task for computer-aided drug development that has been widely approached by deep learning models. Despite extremely high reported performance, these models have yet to find widespread success in accelerating real-world drug discovery. In contrast with the most common approach of creating embeddings from one-dimensional or three-dimensional representations of the input drug and input target, we create a novel graph transformer method for DTI prediction that operates on a proteome-wide knowledge graph of binding pocket similarity, protein-protein interactions, drug similarity, and known binding relationships. We benchmark our method, named PIGLET, against existing DTI prediction models on the Human dataset. We assess performance with two different splitting strategies: the frequently reported random split and a novel, more rigorous drug-based split. All models perform similarly well on the random split, and PIGLET outperforms all models on the drug-based split. We highlight the utility of PIGLET through a real-world drug discovery case study.
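The difference between the two evaluation protocols is easy to reproduce: a random split lets interactions of the same drug land in both train and test folds, while a drug-based split holds out entire drugs. A minimal sketch using scikit-learn's group-aware splitter; the toy interaction table is hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

# Hypothetical drug-target interaction records: (drug_id, target_id, label)
pairs = np.array([("d1", "t1", 1), ("d1", "t2", 0), ("d2", "t1", 1),
                  ("d2", "t3", 1), ("d3", "t2", 0), ("d3", "t4", 1)], dtype=object)
drugs = pairs[:, 0]

# Random split: interactions of the same drug can leak across the train/test boundary.
rand_train, rand_test = train_test_split(pairs, test_size=0.33, random_state=0)

# Drug-based split: every interaction of a held-out drug goes to the test fold.
gss = GroupShuffleSplit(n_splits=1, test_size=0.33, random_state=0)
train_idx, test_idx = next(gss.split(pairs, groups=drugs))
print("held-out drugs:", sorted(set(drugs[test_idx])))
print("train drugs:   ", sorted(set(drugs[train_idx])))   # disjoint from the held-out drugs
```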
bioinformatics2026-02-18v1Cancer Driver Gene Discovery: A Patient-Level Statistical Framework
Bahari, F.; Montazeri, H.AI Summary
- The study introduces iDriver, a statistical framework designed to identify cancer driver genes by integrating mutation recurrence and functional impact at the patient level, addressing the challenge of patient-specific mutation burden variability.
- When applied to 29 cancer types, iDriver identified both known and novel cancer drivers in coding and noncoding regions, demonstrating clinical and biological relevance.
- In benchmarks, iDriver outperformed 12 other methods, achieving top rankings for identifying known cancer drivers in both coding and noncoding genomic elements.
Abstract
Tumor genomes harbor a mixture of neutral and positively selected mutations, yet distinguishing true cancer drivers remains a major challenge. Several factors can obscure the detection of selection signals, among which patient-specific variation in mutational burden plays a significant role. Current approaches often fail to account for the heterogeneity in mutation burden across different patients; in particular, no existing method explicitly accounts for it when integrating both mutation recurrence and functional impact. Here we present iDriver, a probabilistic graphical model that integrates both mutation recurrence and functional impact at the individual-patient level, enabling an enhanced estimation of positive selection across functional genomic elements. Applying iDriver to 29 cancer types, we identify both known and previously unrecognized drivers spanning coding and noncoding regions, and provide evidence for their clinical and biological relevance. In comprehensive benchmarks against 12 established driver discovery methods, iDriver consistently outperformed all competitors, achieving the highest rankings for known cancer drivers across both coding and noncoding elements.
bioinformatics2026-02-18v1Influence of molecular representation and charge on protein-ligand structural predictions by popular co-folding methods
Bugrova, A.; Orekhov, P.; Gushchin, I.AI Summary
- This study investigated how the input format (CCD or SMILES) and charge of ligands (methylamine and acetic acid) affect protein-ligand structural predictions by four algorithms: AlphaFold 3, Boltz-2, Chai-1, and Protenix-v1.
- Results showed that the input format significantly influenced prediction outcomes more than protonation, while changes in charge did not align with experimental expectations.
- The study suggests improving prediction algorithms by ensuring consistency across input formats and incorporating protonation steps in training and prediction.
Abstract
Recently developed deep learning-based tools can effectively generate structural models of complexes of proteins and non-proteinaceous compounds. While some of their predictive capabilities are truly exciting, others remain to be thoroughly tested. Here, we probe whether the ligand input format (Chemical Component Dictionary, CCD, or Simplified Molecular Input Line Entry System, SMILES) and charge (which depends on protonation) affect the results of predictions by four popular algorithms: AlphaFold 3, Boltz-2, Chai-1, and Protenix-v1. We chose methylamine and acetic acid as two of the simplest titratable chemicals that are omnipresent in proteins as amino and carboxy moieties, and are consequently ubiquitous in the Protein Data Bank models most commonly used for training. Unexpectedly, we found that for both molecules, in many cases the input format affected the prediction results, and did so much more strongly than protonation, whereas changes in the formally specified charge of the molecules did not lead to the changes in binding expected from experiments. We conclude that (i) ensuring identical results irrespective of input format and (ii) inclusion of protonation-related steps in training and prediction pipelines are the two available paths for improving protein-ligand structure prediction algorithms.
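The two input routes the study compares can be made concrete in a few lines: the same small molecule can be supplied as neutral or charged SMILES (and, in the real tools, as a CCD three-letter code), and the parsed formal charge differs accordingly. The sketch below uses RDKit only to show what "same chemical, different input" means; it does not call any co-folding model.

```python
from rdkit import Chem

# Methylamine and acetic acid in neutral and physiologically charged forms.
forms = {
    "methylamine (neutral)": "CN",
    "methylammonium (+1)":   "C[NH3+]",
    "acetic acid (neutral)": "CC(=O)O",
    "acetate (-1)":          "CC(=O)[O-]",
}
for name, smi in forms.items():
    mol = Chem.MolFromSmiles(smi)
    print(f"{name:24s} SMILES={smi:12s} formal charge={Chem.GetFormalCharge(mol):+d}")

# In AlphaFold 3-style inputs the same ligands could instead be referenced by their CCD
# dictionary codes; whether a prediction is invariant to CCD-code vs SMILES input, and to
# the formal charge written into the SMILES, is exactly what the study probes.
```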
bioinformatics2026-02-18v1Learning Mappings from Cryo-EM Images to Atomic Coordinates via Latent Representations
Abid, E.; Jonic, S.AI Summary
- The study investigates if supervised learning can map noisy cryo-EM images to 3D atomic coordinates without pose recovery, using a convolutional auto-encoder to generate latent representations.
- A regression network then predicts atomic coordinates from these latents.
- Results showed mean RMSDs of 2.11 Å for adenylate kinase and 0.80 Å for nucleosome core particles, demonstrating that latent representations can effectively preserve necessary structural information.
Abstract
Single-particle cryo-electron microscopy (cryo-EM) aims to determine three-dimensional (3D) structures of biomolecular complexes from noisy two-dimensional (2D) projection images acquired at unknown orientations. The presence of pose uncertainty and continuous conformational heterogeneity makes high-resolution reconstruction challenging. Here, we investigate, in a controlled synthetic setting, whether supervised learning can map noisy cryo-EM single-particle images to atomic coordinates without pose recovery or 2D projection calculations. We propose a convolutional auto-encoder to compress particle images into their corresponding latent representations, followed by a regression network to predict 3D atomic coordinates from these image latents. We show the performance of this approach using synthetic datasets of pairs of particle images and conformational models of adenylate kinase and nucleosome core particles, generated using a realistic cryo-EM forward model based on Normal Mode Analysis for simulating dynamics. Inference yielded mean RMSDs of 2.11 Å for all-atom models of adenylate kinase (1,656 atoms) and 0.80 Å for the coarse-grain models of the nucleosome (1,041 C-P atoms). These results indicate that compact image latents preserve pose- and conformation-related information sufficiently well to support atomic coordinate regression. This provides a quantitative proof-of-principle for coupling image and structure spaces toward fast estimation of conformational variability in cryo-EM.
bioinformatics2026-02-18v1Guided tokenization and domain knowledge enhance genomic language models' performance
Mahangade, V.; Mollerus, M.; Crandall, K. A.; Rahnavard, A.AI Summary
- The study introduces Guided Tokenization (GT), which prioritizes biologically significant subsequences for tokenization in genomic language models (gLMs).
- GT, combined with domain adaptation, enhances the representation quality and classification accuracy of gLMs.
- This approach improves performance in tasks like DNA sequence classification, promoter detection, and antimicrobial resistance classification, particularly in smaller models.
Abstract
Adapting language models to genomic and metagenomic sequences presents unique challenges, particularly in tokenization and task-specific generalization. Standard methods, such as fixed-length k-mers or byte pair encoding, often fail to preserve biologically meaningful patterns essential for downstream tasks. We introduce Guided Tokenization (GT), a strategy that prioritizes biologically and statistically important subsequences based on importance scores, model attention, and class distributions. Combined with domain adaptation, which incorporates prior domain-specific biological knowledge, this approach improves both representation quality and classification accuracy in compact genomic language models (gLMs). GT enhances biological awareness in genomic language models, and is particularly effective for small and mid-sized models across key tasks, including DNA sequence read classification, promoter detection, antimicrobial resistance classification, and targeted amplicon taxonomic profiling. Our results highlight the promise of guided tokenization and domain-aware modeling for building efficient, biologically grounded language models for scalable genomic applications.
bioinformatics2026-02-18v1Supporting Metadata Curation from Public Life Science Databases Using Open-Weight Large Language Models
Shintani, M.; Andrade, D.; Bono, H.AI Summary
- The study developed a workflow using large language models (LLMs) for automated metadata curation in public life science databases to improve data reuse.
- Benchmarking with Arabidopsis RNA sequencing data showed that LLMs significantly outperformed simple keyword searches, with open-weight models achieving near-perfect classification (F1>0.98).
- Utilizing LLM confidence scores allows for automatic processing of high-confidence cases, enhancing scalability and reproducibility in metadata curation.
Abstract
Although the Gene Expression Omnibus and other public repositories are expanding rapidly, curation across these databases has not kept pace. Data reuse is often hindered by unstandardized metadata comprising unstructured text. To address this, we developed a workflow that combines retrieval via an application programming interface with semantic filtering using large language models (LLMs) for automated curation. We benchmarked multiple LLMs using metadata from 150 candidate Arabidopsis RNA sequencing projects to classify samples treated with exogenous abscisic acid and their controls. Simple keyword searches yielded many false positives (F1=0.59); classification using LLMs significantly improved performance. Several open-weight models achieved a nearly perfect performance (F1>0.98), comparable to that of closed models. We also found that utilizing LLM confidence scores enables high-confidence cases to be processed automatically. These results suggest that open-weight LLMs can support scalable and reproducible metadata curation in local environments, providing a foundation for accelerating public dataset reuse.
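The routing rule implied by the confidence-score result — auto-accept high-confidence LLM calls and send the rest to a curator — is a one-line decision on top of any classifier. The `classify_sample` argument below is a hypothetical stand-in for the paper's LLM prompt; only the keyword baseline and thresholding logic are the point of the sketch.

```python
def keyword_baseline(description: str) -> bool:
    """Naive keyword screen for exogenous-ABA treatment; prone to false positives."""
    text = description.lower()
    return "abscisic acid" in text or " aba " in f" {text} "

def route(records, classify_sample, threshold=0.9):
    """Split records into auto-accepted calls and items needing manual curation.

    classify_sample(record) -> (label, confidence); hypothetical LLM wrapper.
    """
    accepted, needs_review = [], []
    for rec in records:
        label, conf = classify_sample(rec)
        (accepted if conf >= threshold else needs_review).append((rec, label, conf))
    return accepted, needs_review

# Toy stand-in classifier: echoes the keyword baseline with a made-up confidence.
records = ["Seedlings treated with 10 uM abscisic acid for 6 h",
           "Control seedlings, mock treatment",
           "aba2 mutant rosettes, untreated"]
fake_llm = lambda r: (keyword_baseline(r), 0.95 if "treated" in r else 0.6)
auto, manual = route(records, fake_llm)
print(len(auto), "auto-accepted;", len(manual), "flagged for review")
```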
bioinformatics2026-02-18v1