SCIENTIFIC PAPERS
Find here all scientific publications related to the ONTOX project
Published in 2025
A mechanistically-anchored and human stem cell-based in vitro test battery for assessing liver steatogenic potential of chemicals
Verhoeven A., Gatzios A., Rodrigues R. M., Sanz-Serrano J., De Kock J., Vinken M., Vanhaecke T.
Toxicology | December 2025
ABSTRACT
Fatty liver disease, which can result from various factors including chemical exposure, is an increasing clinical concern. A key event in its development is steatosis, referring to the accumulation of lipids within hepatocytes. To enable early detection of chemical-induced liver steatosis, we developed a mechanistically-anchored, human-relevant new approach methodology (NAM). This NAM consists of a 2-tiered in vitro test battery, aligned with the adverse outcome pathway (AOP) network for steatosis, and utilizes human skin-derived precursor cells differentiated into hepatic cells (hSKP-HPC), previously shown to be responsive to steatogenic triggers. In total, 6 well-known steatogenic compounds, including 3 pharmaceuticals (sodium valproate, tetracycline hydrochloride, amiodarone hydrochloride), a pesticide (cyproconazole), and 2 plasticizers (tricresyl phosphate and perfluorohexanesulfonic acid) alongside 2 non-steatogenic chemicals (minocycline hydrochloride and tartaric acid), were tested over a 72-hour period. Tier 1 evaluated transcriptional changes in key lipid metabolism pathways, and modulations were observed in nuclear receptors (peroxisome proliferator-activated receptor), fatty acid uptake (fatty acid translocase), de novo lipogenesis (diacylglycerol acyl transferase 2, fatty acid synthase and stearoyl-CoA desaturase 1), as well as in VLDL secretion (apolipoprotein B100). Tier 2 assays assessed and confirmed downstream functional disruptions in fatty acid uptake and lipid accumulation as ultimate specific key events for steatogenic chemicals. Overall, this human stem cell-based NAM offers a promising tool for supporting early hazard identification of steatogenic chemicals across diverse sectors, bridging mechanistic insights to outcomes relevant to the initiation of fatty liver disease.
Machine learning classification of steatogenic compounds using toxicogenomics profiles
Bwanya B., Lodhi S., de Kok T. M., Ladeira L., Verheijen M. C. T., Jennen D. G. J., Caiment F.
Toxicology | November 2025
ABSTRACT
The transition toward new approach methodologies for toxicity testing has accelerated the development of computational models that utilize transcriptomic data to predict chemical-induced adverse effects. Here, we applied supervised machine learning to gene expression data derived from primary human hepatocytes and rat liver models (in vitro and in vivo) to predict drug-induced hepatic steatosis. We evaluated five machine learning classifiers using microarray data from the Open TG-GATEs database. Among these, support vector machine (SVM) consistently achieved the highest performance, with area under the receiver operating characteristic curve (ROC-AUC) of 0.820 in primary human hepatocytes, 0.975 in the rat in vitro model, and 0.966 in the rat in vivo model. To gain mechanistic insights, we functionally profiled the top-ranked predictive genes. Enrichment analyses revealed strong associations with lipid metabolism, mitochondrial function, insulin signalling, and oxidative stress, all of which are biological processes central to steatosis pathogenesis. Key predictive genes such as CYP1A1, PLIN2, and GCK mapped to lipid metabolism networks and liver disease annotations, while others highlighted novel transcriptomics signals. Integration with differentially expressed genes and known steatosis markers highlighted both overlapping and distinct molecular features, suggesting that machine learning models capture biologically relevant signals. These findings demonstrate the potential of machine learning models guided by transcriptomic data to identify early molecular signatures of drug-induced hepatic steatosis. The support vector machine model’s strong predictive accuracy across species highlights its promise as a scalable and interpretable tool for chemical risk assessment. As data limitations in human toxicology persist, expanding high-quality transcriptomic resources will be critical to further advance non-animal approaches in regulatory toxicology.
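Concretely, the ROC-AUC metric used to rank the classifiers above can be read as the probability that a randomly chosen positive compound is scored higher than a randomly chosen negative one (the Mann-Whitney formulation). A minimal pure-Python sketch with invented labels and scores, not data from the study:

```python
def roc_auc(labels, scores):
    """ROC-AUC via pairwise comparison: fraction of positive/negative
    pairs where the positive outscores the negative (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy classifier scores for 6 compounds (label 1 = steatogenic):
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]
print(roc_auc(labels, scores))  # 8 of 9 pairs correctly ordered
```

An AUC of 1.0 would mean perfect separation of steatogenic from non-steatogenic compounds; 0.5 is chance level, which puts the study's reported values (0.820 to 0.975) in context.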
An Alternative Safety Profiling Algorithm (ASPA) to transform next generation risk assessment into a structured and transparent process
Leist M., Tangianu S., Affourtit F., Braakhuis H., Colbourne J., Cöllen E., Dreser N., Escher S.-E., Gardner I., Hahn S., Hardy B., Herzler M., Islam B., Kamp H., Magel V., Marx-Stoelting P., Moné M. J., Lundquist P., Ottenbros I., Ouedraogo G., Pallocca G., van de Water B., Vinken M., White A., Pastor M., Luijten M.
Alternatives to Animal Experimentation | October 2025
ABSTRACT
Next generation risk assessment (NGRA) strategies use animal-free new approach methodologies (NAMs) to generate information concerning chemical hazard, toxicokinetics (ADME), and exposure. The information from these major pillars of data gathering is used to inform risk assessment and classification decisions. While the required types of data are widely agreed upon, the processes for data collection, integration and reporting, as well as several decisions on the depth and granularity of required data, are poorly standardized. Here, we present the Alternative Safety Profiling Algorithm (ASPA), a broad-purpose, transparent, and reproducible risk assessment workflow that allows documentation and integration of all types of information required for NGRA. ASPA aims to make safety assessments fully traceable for the recipient (e.g., a regulator), delineating which steps and decisions have led to the final outcome, and why certain decisions were made. An overarching objective of ASPA is to ensure that identical data input yields identical outcomes in the hands of independent assessors. Therefore, ASPA is not just a data gathering workflow; it also considers data interdependencies and requires precise justification of intermediate decisions. This includes the monitoring and assessment of uncertainties. To assist users, the ASPA-assist software was developed. It formalizes the reporting process in a reproducible and standardized fashion. By guiding an operator step-by-step through the ASPA workflow, a complete and comprehensive report is assembled, whereby all data, methods, operator activities and intermediate decisions are recorded. Practical examples illustrating the broader applicability of ASPA across various regulations and problem formulations are provided through case studies.
Unravelling drug-induced hepatic steatosis: Clinical sub-phenotypes, outcome prediction, and identification of high-concern drugs and hazardous chemical attributes
Moreno-Torres M., López-Pascual E., Moro E., Rapisarda A., Lindeman B., Verhoeven A., Luechtefeld T., Serrano-Candelas E., Dirven H., Vinken M., Castell J. V., Jover R.
Biomedicine & Pharmacotherapy | October 2025
ABSTRACT
Background: Drug-induced liver injury (DILI) is a rare but serious adverse drug reaction with a wide range of clinical presentations. While three major DILI phenotypes—hepatocellular, cholestatic, and mixed—are well established, other injury patterns such as steatosis-associated DILI (DIS) are harder to identify and remain underexplored. Existing limited evidence suggests that steatosis is frequent among DILI patients, yet this subtype and its causative drugs have not been systematically characterized in large patient cohorts.
Goals: This study aims to fill this knowledge gap by collecting information from DIS case reports through AI-supported literature review and data extraction. It also seeks to uncover clinical variables linked to poor outcomes and recovery, and to underscore higher-risk drugs and their distinctive chemical features.
Methods: A comprehensive literature search on DIS was conducted using keywords and Boolean operators in seven databases. Relevant studies were screened using Sysrev, an AI-supported application for article review and data extraction. Clinical, biochemical, and histopathological data were collected, and drug properties were obtained from databases or by QSAR modeling. The resulting database was subjected to statistical and multivariate analyses, including logistic regression.
Results: Data from 251 DIS cases associated with 34 drugs were collected. DIS patients clustered into three subphenotypes with different predominant outcomes: recovery, chronicity, or fatality. Chronic cases showed fibrosis and steatohepatitis with macrosteatosis, while fatal cases often involved microsteatosis and lactic acidosis. Drugs such as didanosine, zidovudine, valproate, and tetracycline were linked to fatal outcomes. A predictive model using clinical and chemical features enabled the identification of high-concern drugs and hazardous chemical attributes.
Ontogeny of drug-induced fatty liver disease (DIFLD): from key initiating events to disease phenotypes
López-Pascual E., Moreno-Torres M., Moro E., Rapisarda A., Ortega-Vallbona R., Serrano-Candelas E., Gozalbes R., Jover R., Castell J. V.
Archives of Toxicology | September 2025
ABSTRACT
We conducted an expert review of clinical case reports on drug-induced fatty liver disease (DIFLD) to classify drugs according to distinct clinical phenotypes. Seven clusters were identified based on clinical, biochemical, and histological features reflecting drug toxic mechanisms:
Cluster 0 (Control): Drugs with no known steatotic effects or clinical evidence of DIFLD.
Cluster 1: Drugs with mild pro-steatotic effects, exacerbating existing metabolic steatosis without significant liver enzyme elevation.
Cluster 2: Compounds causing moderate steatosis with mild hepatocellular damage, occasional enzyme increases, and delayed onset.
Cluster 3: Agents causing severe mitochondrial dysfunction, ATP depletion, and lactic acidosis, initially without inflammation.
Cluster 4: Drugs inducing inflammatory steatohepatitis with moderate elevations of liver enzymes (ALT, AST, ALP 90–700 U/L) but preserved liver function.
Cluster 5: Drugs causing severe steatohepatitis with marked enzyme elevation (ALT, AST > 700 U/L) indicating significant liver injury and inflammation.
Cluster 6: Compounds causing steatohepatitis with additional cholestasis and elevated bilirubin (> 11 mg/dL).
Clusters 1 and 2 primarily impair β-oxidation and mitochondrial respiration, linked to high lipophilicity and typically lower daily doses. Cluster 3 involves mitochondrial DNA depletion and impaired lipid export. Clusters 4 and 5 combine mitochondrial and nuclear receptor disruption, often linked to higher daily doses. Cluster 6 combines steatosis-promoting mechanisms with bile acid transport disruption. This classification improves understanding of DIFLD phenotypes by linking clinical manifestations with drug physicochemical properties and toxicological mechanisms, aiding diagnosis and risk assessment of drug-induced steatosis.
From cellular perturbation to probabilistic risk assessments
Maertens A., Kincaid B., Bridgeford E., Brochot C., de Carvalho e Silva A., Dorne J-L C. M., Geris L., Husøy T., Kleinstreuer N., Ladeira L. C. M., Middleton A., Reynolds J., Rodriguez B., Roggen E. L., Russo G., Thayer K., Hartung T.
Alternatives to Animal Experimentation | July 2025
ABSTRACT
Chemical risk assessment is evolving from traditional deterministic approaches to embrace probabilistic methodologies, where risk of hazard manifestation is understood as a more or less probable event depending on exposure, individual factors, and stochastic processes. This is driven by advancements in human stem cells, complex tissue engineering, high-performance computing, and cheminformatics, and is more recently facilitated by large-scale artificial intelligence models. These innovations enable a more nuanced understanding of chemical hazards, capturing the complexity of biological responses and variability within populations. However, each technology comes with its own uncertainties impacting the estimation of hazard probabilities. This shift addresses the limitations of point estimates and thresholds that oversimplify hazard assessment, allowing for the integration of kinetic variability and uncertainty metrics into risk models. By leveraging modern technologies and expansive toxicological data, probabilistic approaches offer a comprehensive evaluation of chemical safety. This paper summarizes a workshop held in 2023 and discusses the technological and data-driven enablers, and the challenges faced in their implementation, with particular focus on perturbation of biology as the basis of hazard estimates. The future of toxicological risk assessment lies in the successful integration of these probabilistic models, promising more accurate and holistic hazard evaluations.
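The shift from point estimates to probabilities described above can be illustrated with a toy Monte Carlo calculation: instead of comparing one exposure value to one fixed threshold, both exposure and the individual hazard threshold are sampled from distributions, and the output is a probability of exceedance. All distribution parameters below are invented for illustration and are not taken from the paper:

```python
# Toy probabilistic risk sketch: P(exposure > individual threshold),
# with both quantities drawn from assumed lognormal distributions.
import random

random.seed(42)
N = 100_000
exceedances = 0
for _ in range(N):
    exposure = random.lognormvariate(-1.0, 0.8)   # hypothetical intake, mg/kg bw/day
    threshold = random.lognormvariate(1.5, 0.5)   # hypothetical individual threshold
    if exposure > threshold:
        exceedances += 1

p_risk = exceedances / N
print(f"P(exposure > threshold) = {p_risk:.4f}")
```

A deterministic assessment would collapse this to a single yes/no comparison of medians; the probabilistic version retains population variability and reports how often the protective margin fails under the assumed distributions.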
Unlocking liver physiology: comprehensive pathway maps for mechanistic understanding
Ladeira L., Verhoeven A., van Ertvelde J., Jiang J., Gamba A., Sanz-Serrano J., Vanhaecke T., Heusinkveld H. J., Jover R., Vinken M., Geris L., Staumont B.
Frontiers in Toxicology | July 2025
ABSTRACT
Aims: In silico methods provide a resourceful toolbox for new approach methodologies (NAMs). They can revolutionize chemical safety assessment by offering more efficient and human-relevant alternatives to traditional animal testing. In this study, we introduce two Liver Physiological Maps (PMs): comprehensive and machine-readable graphical representations of the intricate mechanisms governing two major liver functions.
Methods: Two PMs were developed through manual literature curation, integrating data from established pathway resources and domain expert knowledge. Cell-type specificity was validated using Human Protein Atlas datasets. An interactive version is available online for exploration. Cross-comparison analysis with existing Adverse Outcome Pathway (AOP) networks was performed to benchmark physiological coverage and identify knowledge gaps.
Results: The LiverLipidPM focuses on liver lipid metabolism, detailing pathways involved in fatty acid synthesis, triglyceride and cholesterol metabolism, and lipid catabolism in hepatocytes. The LiverBilePM represents bile acid biosynthesis, transport, and secretion between hepatocytes and cholangiocytes. Both maps integrate metabolism with signaling pathways and regulatory networks. The interactive maps enable visualization of molecular pathways, linkage to external ontologies, and overlay of experimental data. Comparative analysis revealed mechanisms unique to each map as well as overlaps with existing AOP networks. Chemical-target queries identified new potential targets in both PMs, which might represent new molecular initiating events for AOP network extension.
Conclusion: The developed liver PMs serve as valuable resources for hepatology research, with a special focus on hepatotoxicity, supporting the refinement of AOP networks and the development of human-oriented in vitro test batteries for chemical toxicity assessment. These maps provide a foundation for creating computational models and mode-of-action ontologies while potentially extending their utility to systems biology and drug discovery applications.
Biology-inspired dynamic microphysiological system approaches to revolutionize basic research, healthcare and animal welfare
Marx U., Beken S., Chen Z., Dehne E-M, Doherty A., Ewart L., Fitzpatrick S. C., Griffith L. G., Gu Z., Hartung T., Hickman J., Ingber D. E., Ishida S., Jeong J., Leist M., Levin L., Mendrick D. L., Pallocca G., Platz S., Raschke M., Smirnova L., Tagle D. A., Trapecar M., Balkom B. W. M., van den Eijnden-van Raaij J., van der Meer A., Roth A.
Alternatives to Animal Experimentation | April 2025
ABSTRACT
The regular t4 workshops on biology-inspired microphysiological systems (MPS) have become a reliable benchmark for assessing fundamental scientific, industrial, and regulatory trends in the MPS field. The 2023 workshop participants concluded that MPS technology as used in academia has matured significantly, as evidenced by the steadily increasing number of high-quality research publications, but that broad industrial adoption of MPS has been slow. Academic research using MPS is primarily aimed at accurately recapitulating human biology in MPS-based organ models to enable breakthrough discoveries. Examples of these developments are summarized in the report. In addition, we focus on key challenges identified during the previous workshop. Bridging gaps between academia, regulators, and industry is addressed. We also comment on overcoming barriers to trust and acceptance of MPS-derived data – the latter being particularly important in a regulatory environment. The status of implementation of the recommendations detailed in the 2020 report was reviewed. It is concluded that communication between stakeholders has improved significantly, while the recommendations related to regulatory acceptance still need to be implemented. Participants noted that the remaining challenges for increased translation of these technologies into industrial use and regulatory decision-making will require further efforts on well-defined context of use qualifications, together with increased standardization. This will make MPS data more reliable and ultimately make these novel tools more economically sustainable. The long-term roadmap from the 2015 workshop was critically reviewed and updated. Recommendations for the next period and an outlook conclude the report.
DockTox: Targeting molecular initiating events in organ toxicity through molecular docking
Ortega Vallbona R., Cortés D. T., Carpio L. E., Coto Palacio J. C., Roncaglioni A., Garcia de Lomana M., Gadaleta D., Benfenati E., Gozalbes R., Candelas E. S.
Toxicology | April 2025
ABSTRACT
Adverse Outcome Pathways (AOPs) in toxicology describe the sequence of key events from chemical exposure to adverse outcomes, facilitating the development of predictive models. The EU ONTOX project uses this framework to predict liver, developmental brain, and kidney toxicity without animal testing. Focusing on Molecular Initiating Events (MIEs), specifically the interaction of chemicals with key proteins, we have developed an automated workflow for docking small molecules onto over 20 pre-processed protein structures, implemented in the online tool DockTox. This tool generates conformers of small molecules, performs docking on MIE-associated proteins, and provides binding energy, interacting residues, and interaction maps. Additionally, it compares the interactions to a reference list of known ligands, producing an interaction fraction as an additional similarity measure. Evaluation of the docking workflow’s predictive performance on Peroxisome Proliferator-Activated Receptor α (PPARα) showed that interaction fraction values are more informative than binding energy alone for distinguishing binders from non-binders. This unique feature enhances the understanding of target protein interactions. DockTox supports the virtual screening of small molecules targeting MIE-associated proteins, offering insights into binding energies and interaction profiles. It is a valuable tool for anticipating adverse outcomes from chemical exposure in a tiered risk assessment approach.
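The interaction fraction mentioned above compares the residues contacted by a docked pose against a reference list of residues contacted by known ligands. A plausible reading, not necessarily DockTox's exact definition, is the fraction of reference contacts that the pose reproduces; the residue identifiers below are hypothetical placeholders, not the tool's actual PPARα reference set:

```python
def interaction_fraction(pose_residues, reference_residues):
    """Fraction of reference (known-ligand) contact residues that the
    docked pose also contacts. 1.0 = all reference contacts reproduced."""
    ref = set(reference_residues)
    return len(set(pose_residues) & ref) / len(ref)

# Hypothetical contact residues, for illustration only:
reference = ["SER280", "TYR314", "HIS440", "TYR464"]
pose = ["SER280", "TYR314", "CYS276", "TYR464"]
print(interaction_fraction(pose, reference))  # 3 of 4 reference contacts reproduced
```

A set-overlap score of this kind captures *where* a molecule binds rather than merely *how strongly*, which is consistent with the paper's finding that interaction fractions separate binders from non-binders better than binding energy alone.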
The SCAHT Adverse Outcome Pathway (AOP)_HUB: A hands-on platform for information exchange, sharing, and developing AOPs
Coerek E., Kuchovska E., Gerner L., Nilma L., Villeneuve D., Fritsche E.
Alternatives to Animal Experimentation | April 2025
ABSTRACT
Adverse outcome pathways (AOPs) are tools that systematically present scientific knowledge for application in regulatory toxicology. They consist of multiple key events (KEs), i.e., critical measurable steps along the path to adversity, organized in a sequential fashion and causally connected by their respective key event relationships (KERs).
Mapping physiology: A systems biology approach for the development of alternative methods in toxicology
Staumont B., Ladeira L., Gamba A., Heusinkveld H. J., Piersma A., Fritsche E., Masereeuw R., Vanhaecke T., Teunis M., Luechtefeld T. H., Hartung T., Jover R., Vinken M., Geris L.
Alternatives to Animal Experimentation | April 2025
ABSTRACT
Chemical safety assessment still heavily relies on animal testing, which is associated with ethical dilemmas and has limited human predictive value. New approach methodologies (NAMs), including in vitro and in silico techniques, offer alternative solutions. In silico toxicology has made progress in predicting chemical effects but frequently lacks biological mechanistic foundations. Recent developments focus on the mechanistic understanding of adverse effects caused by chemicals, as embedded in (quantitative) adverse outcome pathways (AOPs). However, there is a demand for more detailed mechanistic insights at the gene and cell levels, encompassing both pathology and physiology. Drawing inspiration from the Disease Maps Project, this paper introduces physiological maps (PMs) as comprehensive graphical representations of biochemical processes related to specific organ functions. PMs are standardized using Systems Biology Graphical Notation (SBGN) and controlled vocabularies and annotations. Curation guidelines have been developed to ensure reproducibility and usability. We present the methodology used to build PMs, emphasizing the essential collaboration between domain experts and curators. PMs offer user-friendly, standardized visualization for data analysis and educational purposes. Enabling a better understanding of (patho)physiology, they also complement and support the development of AOPs by providing detailed mechanistic information at the gene and cell level. Furthermore, PMs contribute to developing in vitro test batteries and to building (dynamic) in silico models aiming to predict the toxicity of chemicals. Collaborative efforts between the toxicology and systems biology communities are crucial for creating standardized and comprehensive PMs, supporting and accelerating the development of human-relevant NAMs for next-generation risk assessment.
From big data to smart decisions: artificial intelligence in kidney risk assessment
Barnes D., Ladeira L., Masereeuw R.
Nature Reviews Nephrology | April 2025
ABSTRACT
Artificial intelligence approaches that link patient data with chemical-induced kidney injury patterns are revolutionizing nephrotoxicity risk assessment. Substantial progress has been made in the development of integrated approaches that leverage big data, molecular profiles and toxicological understanding to identify at-risk patients, provide insights into molecular mechanisms and advance predictive nephrology.
Developmental neurotoxicity (DNT): A call for implementation of new approach methodologies for regulatory purposes: Summary of the 5th International Conference on DNT Testing
Celardo I., Aschner M., Ashton R. S., Carstens K. E., Cediel-Ulloa A., Cöllen E., Crofton K. M., Debad S. J., Dreser N., Fitzpatrick S., Fritsche E., Gutsfeld S., Hardy B., Hartung T., Hessel E., Heusinkveld H., Hogberg H. T., Hsieh J.-H., Kanda Y., Knight G. T., Knudsen T., Koch K., Kuchovska E., Mangas I., Marty M. S., Melching-Kollmuss S., Müller I., Müller P., Myhre O., Paparella M., Pitzer E., Bal-Price A., Sachana M., Schlüppmann K., Shafer T. J., Schäfer J., Smirnova L., Tal T., Tanaskov Y., Tangianu S., Testa G., Ückert A.-K., Whelan M., Leist M.
Alternatives to Animal Experimentation | April 2025
ABSTRACT
The 5th International Conference on Developmental Neurotoxicity (DNT) Testing (DNT5) took place in April 2024 in Konstanz, Germany, organized by CAAT-Europe, the University of Konstanz, and scientists from the US EPA, SCAHT, and CAAT at Johns Hopkins University Bloomberg School of Public Health. The conference convened experts from regulatory agencies, industry, and academia to explore the latest advancements in DNT testing and the integration of animal-free new approach methodologies (NAMs) into next-generation risk assessment (NGRA). The key topic was the application and further development of the recently established DNT in vitro test battery (DNT-IVB). To support this, OECD held a satellite meeting to discuss necessary next steps for further implementation of the DNT-IVB in regulatory contexts. Validation of new DNT test methods and use of their data for in-vitro-to-in-vivo extrapolations in physiologically based kinetic models were also important themes of the main meeting. In this context, the question was raised as to when comprehensive biological and chemical coverage by the DNT-IVB would be reached. A need for additional testing data was recognized. Context-specific validation approaches for the entire DNT-IVB and the potential for intelligent combinations of assays to enhance the predictive power of the test battery were also addressed. Many presentations demonstrated the field’s embrace of novel developments, including the use of multi-endpoint embryonic zebrafish tests, the development of artificial intelligence-driven computational approaches, and the establishment of complex, electrically active brain organoids and other self-organizing structures. Through its highly interactive format, DNT5 promoted extensive collaborative efforts in advancing the field toward more human-relevant, scientifically reliable, and ethical toxicological assessments.
Predicting Liver-Related In Vitro Endpoints with Machine Learning to Support Early Detection of Drug-Induced Liver Injury
Garcia de Lomana M., Gadaleta D., Raschke M., Fricke R., Montanari F.
Chemical Research in Toxicology | March 2025
ABSTRACT
Protocol for probabilistic risk assessment of perfluorooctanoic acid (PFOA)
Husøy T., Svendsen C., Kalyva M., Dirven D., Diemar M. G., Roggen E. L., Barnes D. A., Berkhout J., Corradi M., Drees A., Fritsche E., Gadaleta D., Hartung T., Jover R., Kramer N., Kuchovska E., Ladeira L. C. M., Luechtefeld T., Maerten A., Maertens A., Masereeuw R., Myhre O., Ortega R., Proença S., Roncaglioni A., Sanz-Serrano J., Serrano E., Staumont B., Teunis M., Verhoeven A., Vinken M.
FHI website | February 2025
ABSTRACT
Animal-free Safety Assessment of Chemicals: Project Cluster for Implementation of Novel Strategies (ASPIS) definition of new approach methodologies
Colbourne J. K., Escher S. E., Lee R., Vinken M., van de Water B., Freedman J. H.
Environmental Toxicology and Chemistry | January 2025
ABSTRACT
Since the release of the U.S. National Academy’s report calling for toxicology to evolve from an observation-based to a mechanism-based science (National Research Council, 2007), scientific advances have shown that mechanistic approaches provide a deeper understanding of hazards associated with chemical exposures. New approach methodologies (NAMs) have emerged to assess the hazards and risks associated with exposure to anthropogenic and/or nonanthropogenic stressors within the context of reduce, refine, and replace (the 3Rs). Replacement refers to achieving a research goal without using animals. Reduction means applying methods that allow an investigator to obtain comparable information and precision using fewer animals. Refinement refers to changes in procedures that decrease or eliminate the animals’ pain, stress, and discomfort both during experimental procedures and in their daily social and physical environments (Russell & Burch, 1959). The development, acceptance, and implementation of NAMs have become an international priority for human health and that of wildlife and ecosystems. The global commitment to nonanimal research is driven by societal values on animal welfare and the uncertainty of mammalian model species as reliable human surrogates. In addition, NAM-based information can potentially unite the different branches of toxicology by its relevance in protecting human health, wildlife, and ecosystems, thereby contributing to public safety, ecological resilience, and sustainability.
Transcriptomic characterization of 2D and 3D human induced pluripotent stem cell-based in vitro models as New Approach Methodologies for developmental neurotoxicity testing
Lislien M., Kuchovska E., Kapr J., Duale N., Andersen J. M., Dirven H., Myhre O., Fritsche E., Koch K., Wojewodzic M. W.
Toxicology | January 2025
ABSTRACT
The long way from raw data to NAM-based information: Overview on data layers and processing steps
Blum J., Brüll M., Hengstler J. G., Dietrich D. R., Gruber A. J., Dipalo M., Kraushaar U., Mangas I., Terron A., Fritsche E., Marx-Stoelting P., Hardy B., Schepky A., Escher S., Hartung T., Landsiedel R., Odermatt A., Sachana M., Koch K., Dönmez A., Masjosthusmann S., Bothe K., Schildknecht S., Beilmann M., Beltman J. B., Fitzpatrick S., Mangerich A., Rehm M., Tangianu S., Zickgraf F. M., Kamp H., Burger G., van de Water B., Kleinstreuer N., White A., Leist M.
Alternatives to Animal Experimentation | January 2025
ABSTRACT
Toxicological test methods generate raw data and provide instructions on how to use these to determine a final outcome such as a classification of test compounds as hits or non-hits. The data processing pipeline provided in the test method description is often highly complex. Usually, multiple layers of data, ranging from a machine-generated output to the final hit definition, are considered. Transition between each of these layers often requires several data processing steps. As changes in any of these processing steps can impact the final output of new approach methods (NAMs), the processing pipeline is an essential part of a NAM description and should be included in reporting templates such as the ToxTemp. The same raw data, processed in different ways, may result in different final outcomes that may affect the readiness status and regulatory acceptance of the NAM, as an altered output can affect robustness, performance, and relevance. Data management, processing, and interpretation are therefore important elements of a comprehensive NAM definition. We aim to give an overview of the most important data levels to be considered during the development and application of a NAM. In addition, we illustrate data processing and evaluation steps between these data levels. As NAMs are increasingly standard components of the spectrum of toxicological test methods used for risk assessment, awareness of the significance of data processing steps in NAMs is crucial for building trust, ensuring acceptance, and fostering the reproducibility of NAM outcomes.
State of the science on assessing developmental neurotoxicity using new approach methods
Debad S. J., Aungst J., Carstens K., Ferrer M., Fitzpatrick S., Fritsche E., Geng Y., Hartung T., Hogberg H. T., Li R., Mangas I., Marty S., Musser S., Perron M., Rattan S., Rüegg J., Sachana M., Schenke M., Shafer T. J., Smirnova L., Talpos J., Tanguay R. L., Terron A., Bandele O.
Alternatives to Animal Experimentation | January 2025
ABSTRACT
The workshop titled State of the Science on Assessing Developmental Neurotoxicity Using New Approach Methods was co-organized by University of Maryland’s Joint Institute for Food Safety and Applied Nutrition (JIFSAN) and the U.S. Food and Drug Administration’s (FDA) Center for Food Safety and Applied Nutrition (CFSAN; now called the Human Foods Program), and was hosted by FDA in College Park, MD on November 14-15, 2023. This event convened experts from international organizations, governmental agencies, industry, and academia to explore the transition from traditional in vivo tests to innovative new approach methods (NAMs) in developmental neurotoxicity (DNT) testing. The discussions emphasized the heightened vulnerability of the developing human brain to toxic exposures and the potential of NAMs to provide more ethical, economical, and scientifically robust alternatives to traditional testing. Various NAMs for DNT were discussed, including in silico, in chemico, in vitro, non-mammalian whole organisms, and novel mammalian approaches. In addition to progress in the field, the workshop discussed ongoing challenges such as expectations to perfectly replicate the complex biology of human neurodevelopment and integration of DNT NAMs into regulatory frameworks. Presentations and panel discussions provided a comprehensive overview of the state of the science, assessed the capabilities and limitations of current DNT NAMs, and outlined critical next steps in advancing the field of DNT testing.
Challenges and opportunities for validation of AI-based new approach methods
Hartung T., Kleinstreuer N.
Alternatives to Animal Experimentation | January 2025
ABSTRACT
The integration of artificial intelligence (AI) into new approach methods (NAMs) for toxicology represents a paradigm shift in chemical safety assessment. Harnessing AI appropriately has enormous potential to streamline validation efforts. This review explores the challenges, opportunities, and future directions for validating AI-based NAMs, highlighting their transformative potential while acknowledging the complexities involved in their implementation and acceptance. We discuss key hurdles such as data quality, model interpretability, and regulatory acceptance, alongside opportunities including enhanced predictive power and efficient data integration. The concept of e-validation, an AI-powered framework for streamlining NAM validation, is presented as a comprehensive strategy to overcome limitations of traditional validation approaches, leveraging AI-powered modules for reference chemical selection, study simulation, mechanistic validation, and model training and evaluation. We propose robust validation strategies, including tiered approaches, performance benchmarking, uncertainty quantification, and cross-validation across diverse datasets. The importance of ongoing monitoring and refinement post-implementation is emphasized, addressing the dynamic nature of AI models. We consider ethical implications and the need for human oversight in AI-driven toxicology and outline the impact of trends in AI development, research priorities, and a vision for the integration of AI-based NAMs in toxicological practice, calling for collaboration among researchers, regulators, and industry stakeholders. We describe the vision of companion AI post-validation agents to keep methods and their validity status current. By addressing these challenges and opportunities, the scientific community can harness the potential of AI to enhance predictive toxicology while reducing reliance on traditional animal testing and increasing human relevance and translational capabilities.
SKIG report 2023-2024: Society for the Advancement of AOPs Knowledgebase Interest Group
Wittwehr C., Audouze K., Burgdorf T., Clerbaux L.-A., Coerek E., Demuynck E., Exner T., Filipovska J., Fritsche E., Geris L., Hench V., Jeliazkova N., Karschnik T., Kuchovska E., Ladeira L. C. M., Malinowska J. M., Marinov E., Martens M., Mertens B., Nymark P., Schaffert A., Staumont B., Tanabe S., Tollefsen E. K., Villeneuve D. L., Viviani B.
EU publications | 2025
ABSTRACT
The Society for the Advancement of AOPs Knowledgebase Interest Group (SKIG) is a vibrant assembly of more than 40 international experts focused on advancing the Adverse Outcome Pathways (AOP) framework. SKIG operates through regular online meetings, where two or three presentations per session—followed by lively Q&A interactions—delve into both technical and scientific topics critical to the AOP domain, with a particular focus on AOP-Wiki related issues. Presentations cover a broad spectrum of subjects, such as ontology-based harmonization, AI tools for AOP development, the integration of omics data, incorporating temporal aspects into AOPs, automated access to AOP-Wiki contents, the role of physiological maps and other approaches in establishing biological relevance, and many more. This caters to a diverse audience of scientists, regulators, and policymakers, and this dynamic approach allows readers to engage with specific subjects of interest rather than following a chronological sequence when perusing this report. The document also offers a concise summary of practical adaptations implemented in the AOP-Wiki following meeting discussions. Covering the meetings held in 2023 and 2024, the report reflects SKIG’s ongoing contributions and indicates that the group will continue to operate in 2025 and beyond. SKIG’s work to foster the development and application of AOPs is pivotal in supporting the European Union’s policy on the protection of animals used for scientific purposes (Directive 2010/63/EU) and the European Commission’s Roadmap towards phasing out animal testing for chemical safety assessments.
Published in 2024
Report of the First ONTOX Hackathon: Hack to Save Lives and Avoid Animal Suffering. The Use of Artificial Intelligence in Toxicology — A Potential Driver for Reducing/Replacing Laboratory Animals in the Future
Diemar M. G., Krul C., Teunis M., et al.
Alternatives to Laboratory Animals | December 2024
ABSTRACT
The first ONTOX Hackathon of the EU Horizon 2020-funded ONTOX project was held on 21–23 April 2024 in Utrecht, The Netherlands. This participatory event aimed to collectively advance innovation for human safety through the use of Artificial Intelligence (AI), and hence significantly reduce reliance on animal-based testing. Expert scientists, industry leaders, young investigators, members of animal welfare organisations and academics alike joined the hackathon. Eight teams were stimulated to find innovative solutions for challenging themes that were selected based on previous discussions between stakeholders, namely: How to drive the use of AI in chemical risk assessment?; To predict or protect?; How can we secure human health and environmental protection at the same time?; and How can we facilitate the transition from animal tests to full implementation of human-relevant methods? The hackathon ended with a pitching contest, where the teams presented their solutions to a jury. The most promising solutions will be presented to regulatory authorities, industry, academia and non-governmental organisations at the next ONTOX Stakeholder Network meeting and taken up by the ONTOX project in order to tackle the above-mentioned challenges further. This report comprises two parts: the first part highlights some of the lessons learnt during the planning and execution of the hackathon; the second part presents the outcome of the ONTOX Hackathon, which resulted in several innovative and promising solutions based on New Approach Methodologies (NAMs), and outlines ONTOX’s intended way forward.
Comprehensive benchmarking of computational tools for predicting toxicokinetic and physicochemical properties of chemicals
Gadaleta D., Serrano-Candelas E., Ortega-Vallbona R., Colombo E., Garcia de Lomana M., Biava G., Aparicio-Sánchez P., Roncaglioni A., Gozalbes R., Benfenati E.
Journal of Cheminformatics | December 2024
ABSTRACT
Ensuring the safety of chemicals for environmental and human health involves assessing physicochemical (PC) and toxicokinetic (TK) properties, which are crucial for absorption, distribution, metabolism, excretion, and toxicity (ADMET). Computational methods play a vital role in predicting these properties, given the current trends in reducing experimental approaches, especially those that involve animal experimentation. In the present manuscript, twelve software tools implementing Quantitative Structure–Activity Relationship (QSAR) models were selected for the prediction of 17 relevant PC and TK properties. A total of 41 validation datasets were collected from the literature, curated and used for assessing the models’ external predictivity, emphasizing the performance of the models inside the applicability domain. Overall, the results confirmed the adequate predictive performance of the majority of the selected tools, with models for PC properties (R2 average = 0.717) generally outperforming those for TK properties (R2 average = 0.639 for regression, average balanced accuracy = 0.780 for classification). Notably, several of the tools evaluated exhibited good predictivity across different properties and were identified as recurring optimal choices. Moreover, a systematic analysis of the chemical space covered by the external validation datasets confirmed the validity of the collected results for relevant chemical categories (e.g., drugs and industrial chemicals), further increasing the confidence in the overall evaluation. The best performing models were ultimately suggested for each investigated property and proposed as robust computational tools for high-throughput assessment of highly relevant chemical properties.
Adverse outcome pathway networks as the basis for the development of new approach methodologies: Liver toxicity as a case study
Vinken M.
Current Opinion in Toxicology | December 2024
ABSTRACT
The fields of toxicology and risk assessment are witnessing a paradigm shift moving away from animal testing towards the use of nonanimal and human-based new approach methodologies (NAMs). NAMs are fed by mechanistic information captured in adverse outcome pathway (AOP) networks, which are being developed and optimized at high pace. The present paper demonstrates this (r)evolution for the case of liver toxicity induced by pharmaceutical drugs. NAMs, in casu designed to predict hepatotoxicity, are composed of an in vitro system linked with a suite of assays mechanistically anchored in relevant AOP networks. These NAMs allow tiered testing at the transcriptional, translational and functionality level at high predictive capacity. Although promising, several challenges in NAM development still need to be tackled and are discussed in this paper.
A computational dynamic systems model for in silico prediction of neural tube closure defects
Berkhout J. H., Glazier J. A., Piersma A., Belmonte J. M., Legler J., Spencer R. M., Knudsen T. B., Heusinkveld H. J.
Current Research in Toxicology | December 2024
ABSTRACT
Neural tube closure is a critical morphogenetic event during early vertebrate development. This complex process is susceptible to perturbation by genetic errors and chemical disruption, which can induce severe neural tube defects (NTDs) such as spina bifida. We built a computational agent-based model (ABM) of neural tube development based on the known biology of morphogenetic signals and cellular biomechanics underlying neural fold elevation, bending and fusion. The computer model functionalizes cell signals and responses to render a dynamic representation of neural tube closure. Perturbations in the control network can then be introduced synthetically or from biological data to yield quantitative simulation and probabilistic prediction of NTDs by incidence and degree of defect. Translational applications of the model include mechanistic understanding of how singular or combinatorial alterations in gene–environment interactions can lead to NTDs, as well as animal-free assessment of developmental toxicity for an important human birth defect (spina bifida) and potentially other neurological problems linked to development of the brain and spinal cord.
The Development of a Non-Invasive Screening Method Based on Serum microRNAs to Quantify the Percentage of Liver Steatosis
Soluyanova P., Quintás G., Pérez-Rubio Á., Rienda I., Moro E., van Herwijnen M., Verheijen M., Caiment F., Pérez-Rojas J., Trullenque-Juan R., et al.
Biomolecules | November 2024
ABSTRACT
Metabolic dysfunction-associated steatotic liver disease (MASLD) is often asymptomatic and underdiagnosed; consequently, there is a demand for simple, non-invasive diagnostic tools. In this study, we developed a method to quantify liver steatosis based on miRNAs, present in liver and serum, that correlate with liver fat. The miRNAs were analyzed by miRNAseq in liver samples from two cohorts of patients with a precise quantification of liver steatosis. Common miRNAs showing correlation with liver steatosis were validated by RT-qPCR in paired liver and serum samples. Multivariate models were built using partial least squares (PLS) regression to predict the percentage of liver steatosis from serum miRNA levels. Leave-one-out cross validation and external validation were used for model selection and to estimate predictive performance. The miRNAseq results disclosed (a) 144 miRNAs correlating with triglycerides in a set of liver biobank samples (n = 20); and (b) 124 and 102 miRNAs correlating with steatosis by biopsy digital image and MRI analyses, respectively, in liver samples from morbidly obese patients (n = 24). However, only 35 miRNAs were common in both sets of samples. RT-qPCR validated the correlation of 10 miRNAs in paired liver and serum samples. The development of PLS models to quantitatively predict steatosis demonstrated that the combination of serum miR-145-3p, 122-5p, 143-3p, 500a-5p, and 182-5p provided the lowest root mean square error of cross validation (RMSECV = 1.1, p-value = 0.005). External validation of this model with a cohort of mixed MASLD patients (n = 25) showed a root mean squared error of prediction (RMSEP) of 5.3. In conclusion, it is possible to predict the percentage of hepatic steatosis with a low error rate by quantifying the serum level of five miRNAs using a cost-effective and easy-to-implement RT-qPCR method.
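The leave-one-out cross-validation (LOOCV) scheme used above for RMSECV-based model selection can be sketched as follows. A one-predictor least-squares fit stands in for the multivariate PLS model, and the miRNA levels and steatosis percentages are invented for illustration:

```python
# LOOCV sketch: each sample is held out once, the model is refit on the rest,
# and RMSECV summarises the held-out prediction errors. A single-predictor
# least-squares line stands in for the PLS regression; all values are made up.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def rmsecv(xs, ys):
    """Root mean square error of leave-one-out cross-validation."""
    errors = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]  # leave sample i out
        slope, intercept = fit_line(tx, ty)
        errors.append((slope * xs[i] + intercept - ys[i]) ** 2)
    return (sum(errors) / len(errors)) ** 0.5

mirna_level = [1.0, 2.1, 2.9, 4.2, 5.1, 6.0]       # hypothetical serum miRNA
steatosis_pct = [5.0, 11.0, 14.0, 22.0, 24.0, 31.0]
print(round(rmsecv(mirna_level, steatosis_pct), 2))
```

In the study this procedure was run over candidate miRNA combinations, and the five-miRNA panel with the lowest RMSECV was carried forward to external validation.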
Quantitative structure–activity relationships of chemical bioactivity toward proteins associated with molecular initiating events of organ-specific toxicity
Gadaleta D., Garcia de Lomana M., Serrano-Candelas E., Ortega-Vallbona R., Gozalbes R., Roncaglioni A., Benfenati E.
Journal of Cheminformatics | November 2024
ABSTRACT
The adverse outcome pathway (AOP) concept has gained attention as a way to explore the mechanism of chemical toxicity. In this study, quantitative structure–activity relationship (QSAR) models were developed to predict compound activity toward protein targets relevant to molecular initiating events (MIE) upstream of organ-specific toxicities, namely liver steatosis, cholestasis, nephrotoxicity, neural tube closure defects, and cognitive functional defects. Utilizing bioactivity data from the ChEMBL 33 database, various machine learning algorithms, chemical features and methods to assess prediction reliability were compared and applied to develop robust models to predict compound activity. The results demonstrate high predictive performance across multiple targets, with balanced accuracy exceeding 0.80 for the majority of models. Furthermore, stability checks confirmed the consistency of predictive performance across multiple training-test splits. The results obtained by using QSAR predictions to identify known markers of adversities highlighted the utility of the models for risk assessment and for prioritizing compounds for further experimental evaluation.
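Balanced accuracy, the headline metric above, is the mean of sensitivity and specificity, which makes it robust to the class imbalance typical of bioactivity datasets. A minimal sketch with invented labels:

```python
# Balanced accuracy = 0.5 * (sensitivity + specificity). Unlike plain accuracy,
# it is not inflated by always predicting the majority class. Labels invented.

def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos, neg = y_true.count(1), y_true.count(0)
    return 0.5 * (tp / pos + tn / neg)

# 8 actives, 2 inactives: plain accuracy would reward always predicting "active"
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))  # 0.5 * (7/8 + 1/2) = 0.6875
```

A constant "active" predictor on this data would score 90% plain accuracy but only 0.5 balanced accuracy, which is why the latter is the more honest benchmark for the models described above.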
Computational Strategies for Assessing Adverse Outcome Pathways: Hepatic Steatosis as a Case Study
Ortega-Vallbona R., Palomino-Schätzlein M., Tolosa L., Benfenati E., Ecker G. F., Gozalbes R., Serrano-Candelas E.
International Journal of Molecular Sciences | October 2024
ABSTRACT
The evolving landscape of chemical risk assessment is increasingly focused on developing tiered, mechanistically driven approaches that avoid the use of animal experiments. In this context, adverse outcome pathways have gained importance for evaluating various types of chemical-induced toxicity. Using hepatic steatosis as a case study, this review explores the use of diverse computational techniques, such as structure–activity relationship models, quantitative structure–activity relationship models, read-across methods, omics data analysis, and structure-based approaches to fill data gaps within adverse outcome pathway networks. Emphasizing the regulatory acceptance of each technique, we examine how these methodologies can be integrated to provide a comprehensive understanding of chemical toxicity. This review highlights the transformative impact of in silico techniques in toxicology, proposing guidelines for their application in evidence gathering for developing and filling data gaps in adverse outcome pathway networks. These guidelines can be applied to other cases, advancing the field of toxicological risk assessment.
Accessible methods and tools to estimate chemical exposure in humans to support risk assessment: A systematic scoping review
Kalyva M. E., Vist G. E., Diemar M. G., López-Soop G., Bozada T. J., Luechtefeld T., Roggen E. L., Dirven H., Vinken M., Husøy T.
Environmental Pollution | July 2024
ABSTRACT
Exposure assessment is a crucial component of environmental health research, providing essential information on the potential risks associated with various chemicals. A systematic scoping review was conducted to acquire an overview of accessible human exposure assessment methods and computational tools to support and ultimately improve risk assessment. The systematic scoping review was performed in Sysrev, a web platform that introduces machine learning techniques into the review process, aiming for increased accuracy and efficiency. Included publications were restricted to a publication date after the year 2000, where exposure methods were properly described. Exposure assessment methods were found to be used for a broad range of environmental chemicals, including pesticides, metals, persistent chemicals, volatile organic compounds, and other chemical classes. Our results show that, after the year 2000, the use of probabilistic analysis and computational methods to calculate human exposure has increased for all types of exposure routes. Sixty-three mathematical models and toolboxes were identified that have been developed in Europe, North America, and globally. However, only twelve occur frequently, and their usefulness was associated with exposure route, chemical classes, and the input parameters used to estimate exposure. The outcome of the combined associations can function as a basis and/or guide for decision making in selecting the most appropriate method and tool for environmental chemical human exposure assessments in the Ontology-driven and artificial intelligence-based repeated dose toxicity testing of chemicals for next generation risk assessment (ONTOX) project and elsewhere.
Finally, the choice of input parameters used in each mathematical model and toolbox shown by our analysis can contribute to the harmonization process of the exposure models and tools increasing the prospect for comparison between studies and consistency in the regulatory process in the future.
The application of natural language processing for the extraction of mechanistic information in toxicology
Corradi M., Luechtefeld T., de Haan A. M., Pieters R., Freedman J. H., Vanhaecke T., Vinken M., Teunis M.
Frontiers in Toxicology | May 2024
ABSTRACT
To study the ways in which compounds can induce adverse effects, toxicologists have been constructing Adverse Outcome Pathways (AOPs). An AOP can be considered as a pragmatic tool to capture and visualize mechanisms underlying different types of toxicity inflicted by any kind of stressor, and describes the interactions between key entities that lead to the adverse outcome on multiple biological levels of organization. The construction or optimization of an AOP is a labor-intensive process, which currently depends on the manual search, collection, reviewing and synthesis of available scientific literature. This process could, however, be largely facilitated by using Natural Language Processing (NLP) to extract information contained in scientific literature in a systematic, objective, and rapid manner that would lead to greater accuracy and reproducibility. This would allow researchers to invest their expertise in the substantive assessment of the AOPs, replacing the time spent on evidence gathering with a critical review of the data extracted by NLP. As case examples, we selected two frequent adversities observed in the liver: namely, cholestasis and steatosis, denoting accumulation of bile and lipid, respectively. We used deep learning language models to recognize entities of interest in text and establish causal relationships between them. We demonstrate how an NLP pipeline combining Named Entity Recognition and a simple rules-based relationship extraction model helps screen compounds related to liver adversities in the literature, but also extract mechanistic information for how such adversities develop, from the molecular to the organismal level. Finally, we provide some perspectives opened by the recent progress in Large Language Models and how these could be used in the future.
We propose that this work brings two main contributions: (1) a proof-of-concept that NLP can support the extraction of information from text for modern toxicology; and (2) a template open-source model for the recognition of toxicological entities and the extraction of their relationships.
All resources are openly accessible via GitHub (https://github.com/ontox-project/en-tox).
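The combination of entity recognition and rules-based relationship extraction described above can be caricatured in a few lines. This is a toy stand-in only: a dictionary lookup replaces the deep-learning NER model, and a single trigger-verb rule replaces the relationship extraction; the entity lists and trigger pattern are invented, not taken from the en-tox repository:

```python
import re

# Toy stand-in for an NER + rules-based relation extraction pipeline.
# A dictionary lookup replaces the deep-learning NER model, and one
# trigger-verb rule replaces the relationship extraction model.

COMPOUNDS = {"valproate", "amiodarone", "tetracycline"}   # hypothetical lexicon
ADVERSITIES = {"steatosis", "cholestasis"}
TRIGGERS = r"\b(induce[sd]?|cause[sd]?|lead[s]? to)\b"    # causal trigger verbs

def extract_relations(sentence):
    """Return (compound, adversity) pairs when a causal trigger is present."""
    text = sentence.lower()
    words = set(re.findall(r"[a-z]+", text))
    if not re.search(TRIGGERS, text):
        return []
    return [(c, a) for c in COMPOUNDS & words for a in ADVERSITIES & words]

print(extract_relations("Amiodarone induces steatosis in hepatocytes."))
# -> [('amiodarone', 'steatosis')]
print(extract_relations("Steatosis and amiodarone were both mentioned."))
# -> [] (entities co-occur, but no causal trigger links them)
```

The real pipeline replaces both simplifications with trained models, but the division of labour is the same: first find the entities, then decide whether the sentence asserts a causal relation between them.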
Novel clinical phenotypes, drug categorization, and outcome prediction in drug-induced cholestasis: Analysis of a database of 432 patients developed by literature review and machine learning support
Moreno-Torres M., López-Pascual E., Rapisarda A., Quintás G., Drees A., Steffensen I.-L., Luechtefeld T., Serrano-Candelas E., Garcia de Lomana M., Gadaleta D., Dirven H., Vinken M., Jover R.
Biomedicine & Pharmacotherapy | May 2024
ABSTRACT
Background: Serum transaminases, alkaline phosphatase and bilirubin are common parameters used for the diagnosis, classification and prognosis of drug-induced liver injury (DILI). However, the relevance of clinical examination, histopathology and drug chemical properties has not been fully investigated. As cholestasis is a frequent and complex DILI manifestation, our goal was to investigate the relevance of clinical features and drug properties for stratifying drug-induced cholestasis (DIC) patients, and to develop a prognosis model to identify patients at risk and high-concern drugs.
Methods: DIC-related articles were searched by keywords and Boolean operators in seven databases. Relevant articles were uploaded onto Sysrev, a machine learning-based platform for article review and data extraction. Demographic, clinical, biochemical, and liver histopathological data were collected. Drug properties were obtained from databases or QSAR modelling. Statistical analyses and logistic regressions were performed.
Results: Data from 432 DIC patients associated with 52 drugs were collected. Fibrosis was strongly associated with fatality, whereas canalicular paucity and ALP were associated with chronicity. Drugs causing cholestasis clustered in three major groups. The pure cholestatic pattern divided into two subphenotypes with differences in prognosis, canalicular paucity, fibrosis, ALP and bilirubin. A predictive model of DIC outcome based on non-invasive parameters and drug properties was developed. Results demonstrate that physicochemical (pKa) and pharmacokinetic (bioavailability, CYP2C9) attributes impinged on the DIC phenotype and allowed the identification of high-concern drugs.
Systematic evaluation of high-throughput PBK modelling strategies for the prediction of intravenous and oral pharmacokinetics in humans
Geci R., Gadaleta D., de Lomana M. G., Ortega-Vallbona R., Colombo E., Serrano-Candelas E., Paini A., Kuepfer L., Schaller S.
Archives of Toxicology | May 2024
ABSTRACT
Physiologically based kinetic (PBK) modelling offers a mechanistic basis for predicting the pharmaco-/toxicokinetics of compounds and thereby provides critical information for integrating toxicity and exposure data to replace animal testing with in vitro or in silico methods. However, traditional PBK modelling depends on animal and human data, which limits its usefulness for non-animal methods. To address this limitation, high-throughput PBK modelling aims to rely exclusively on in vitro and in silico data for model generation. Here, we evaluate a variety of in silico tools and different strategies to parameterise PBK models with input values from various sources in a high-throughput manner. We gather 2000+ publicly available human in vivo concentration–time profiles of 200+ compounds (IV and oral administration), as well as in silico, in vitro and in vivo determined compound-specific parameters required for the PBK modelling of these compounds. Then, we systematically evaluate all possible PBK model parametrisation strategies in PK-Sim and quantify their prediction accuracy against the collected in vivo concentration–time profiles. Our results show that even simple, generic high-throughput PBK modelling can provide accurate predictions of the pharmacokinetics of most compounds (87% of Cmax and 84% of AUC within tenfold). Nevertheless, we also observe major differences in prediction accuracies between the different parameterisation strategies, as well as between different compounds. Finally, we outline a strategy for high-throughput PBK modelling that relies exclusively on freely available tools. Our findings contribute to a more robust understanding of the reliability of high-throughput PBK modelling, which is essential to establish the confidence necessary for its utilisation in Next-Generation Risk Assessment.
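The "within tenfold" criterion used to benchmark Cmax and AUC predictions above is conventionally a fold error of at most 10, i.e. |log10(predicted/observed)| ≤ 1. A minimal sketch with invented values:

```python
import math

# "Within tenfold" acceptance criterion, as commonly used for PBK benchmarks:
# a prediction passes if its fold error is at most 10 in either direction.
# The (predicted, observed) pairs below are invented for illustration.

def within_tenfold(predicted, observed):
    """True if predicted is within a factor of 10 of observed."""
    return abs(math.log10(predicted / observed)) <= 1.0

# hypothetical (predicted, observed) Cmax pairs in mg/L
pairs = [(0.8, 1.0), (12.0, 1.0), (0.05, 0.4), (3.0, 2.5)]
fraction = sum(within_tenfold(p, o) for p, o in pairs) / len(pairs)
print(fraction)  # 3 of 4 predictions fall within tenfold -> 0.75
```

Applied over the 2000+ collected profiles, this fraction is the summary statistic behind the reported 87% (Cmax) and 84% (AUC) figures.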
Alternative methods go green! Green toxicology as a sustainable approach for assessing chemical safety and designing safer chemicals
Maertens A., Luechtefeld T., Knight J., Hartung T.
Alternatives to Animal Experimentation | January 2024
ABSTRACT
Green toxicology is marching chemistry into the 21st century. This emerging framework will transform how chemical safety is evaluated by incorporating evaluation of the hazards, exposures, and risks associated with chemicals into early product development in a way that minimizes adverse impacts on human and environmental health. The goal is to minimize toxic threats across entire supply chains through smarter designs and policies. Traditional animal testing methods are replaced by faster, cutting-edge innovations like organs-on-chips and artificial intelligence predictive models that are also more cost-effective. Core principles of green toxicology include utilizing alternative test methods, applying the precautionary principle, considering lifetime impacts, and emphasizing risk prevention over reaction. This paper provides an overview of these foundational concepts and describes current initiatives and future opportunities to advance the adoption of green toxicology approaches. Challenges and limitations are also discussed. Green shoots are emerging with governments offering carrots like the European Green Deal to nudge industry. Notably, animal rights and environmental groups have different ideas about the need for testing and its consequences for animal use. Green toxicology represents the way forward to support both these societal needs with sufficient throughput and human relevance for hazard information and minimal animal suffering. Green toxicology thus sets the stage to synergize human health and ecological values. Overall, the integration of green chemistry and toxicology has potential to profoundly shift how chemical risks are evaluated and managed to achieve safety goals in a more ethical, ecologically-conscious manner.
Artificial intelligence (AI) — it’s the end of the tox as we know it (and I feel fine)
Kleinstreuer N., Hartung T.
Archives of Toxicology | January 2024
ABSTRACT
The rapid progress of AI impacts diverse scientific disciplines, including toxicology, and has the potential to transform chemical safety evaluation. Toxicology has evolved from an empirical science focused on observing apical outcomes of chemical exposure, to a data-rich field ripe for AI integration. The volume, variety and velocity of toxicological data from legacy studies, literature, high-throughput assays, sensor technologies and omics approaches create opportunities but also complexities that AI can help address. In particular, machine learning is well suited to handle and integrate large, heterogeneous datasets that are both structured and unstructured—a key challenge in modern toxicology. AI methods like deep neural networks, large language models, and natural language processing have successfully predicted toxicity endpoints, analyzed high-throughput data, extracted facts from literature, and generated synthetic data. Beyond automating data capture, analysis, and prediction, AI techniques show promise for accelerating quantitative risk assessment by providing probabilistic outputs to capture uncertainties. AI also enables explanation methods to unravel mechanisms and increase trust in modeled predictions. However, issues like model interpretability, data biases, and transparency currently limit regulatory endorsement of AI. Multidisciplinary collaboration is needed to ensure development of interpretable, robust, and human-centered AI systems. Rather than just automating human tasks at scale, transformative AI can catalyze innovation in how evidence is gathered, data are generated, hypotheses are formed and tested, and tasks are performed to usher in new paradigms in chemical safety assessment. Used judiciously, AI has immense potential to advance toxicology into a more predictive, mechanism-based, and evidence-integrated scientific discipline to better safeguard human and environmental wellbeing across diverse populations.
Development of an adverse outcome pathway network for nephrotoxicity
Barnes D. A., Firman J. W., Belfield S. J., Cronin M. T. D., Vinken M., Janssen M. J., Masereeuw R.
Archives of Toxicology | January 2024
ABSTRACT
Adverse outcome pathways (AOPs) were introduced in modern toxicology to provide evidence-based representations of the events and processes involved in the progression of toxicological effects across varying levels of biological organisation to better facilitate the safety assessment of chemicals. AOPs offer an opportunity to address knowledge gaps and help to identify novel therapeutic targets. They also aid in the selection and development of existing and new in vitro and in silico test methods for hazard identification and risk assessment of chemical compounds. However, many toxicological processes are too intricate to be captured in a single, linear AOP. As a result, AOP networks have been developed to aid in the comprehension and placement of associated events underlying the emergence of related forms of toxicity—where complex exposure scenarios and interactions may influence the ultimate adverse outcome. This study utilised established criteria to develop an AOP network that connects thirteen individual AOPs associated with nephrotoxicity (as sourced from the AOP-Wiki) to identify several key events (KEs) linked to various adverse outcomes, including kidney failure and chronic kidney disease. Analysis of the modelled AOP network and its topological features determined mitochondrial dysfunction, oxidative stress, and tubular necrosis to be the most connected and central KEs. These KEs can provide a logical foundation for guiding the selection and creation of in vitro assays and in silico tools to substitute for animal-based in vivo experiments in the prediction and assessment of chemical-induced nephrotoxicity in human health.
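Ranking key events by how many AOP edges they participate in, the simplest of the topological measures mentioned above, can be sketched as follows. The edge list is a hypothetical fragment for illustration, not the thirteen AOP-Wiki pathways analysed in the study:

```python
from collections import Counter

# Degree-based ranking of key events (KEs) in a small AOP network sketch.
# Each edge is a directed key event relationship (upstream -> downstream);
# the edges below are an invented fragment, not data from the AOP-Wiki.

edges = [
    ("mitochondrial dysfunction", "oxidative stress"),
    ("oxidative stress", "tubular necrosis"),
    ("mitochondrial dysfunction", "tubular necrosis"),
    ("tubular necrosis", "kidney failure"),
    ("oxidative stress", "chronic kidney disease"),
    ("oxidative stress", "kidney failure"),
]

degree = Counter()
for upstream, downstream in edges:
    degree[upstream] += 1      # count both incoming and outgoing connections
    degree[downstream] += 1

most_connected, count = degree.most_common(1)[0]
print(most_connected, count)   # the KE shared by the most relationships
```

The study's analysis works on the same principle: KEs that sit at the junction of many pathways (here, oxidative stress) are the most informative anchors for assay selection.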
Metabolomics in Preclinical Drug Safety Assessment: Current Status and Future Trends
Sillé F., Hartung T.
Metabolites | January 2024
ABSTRACT
Metabolomics is emerging as a powerful systems biology approach for improving preclinical drug safety assessment. This review discusses current applications and future trends of metabolomics in toxicology and drug development. Metabolomics can elucidate adverse outcome pathways by detecting endogenous biochemical alterations underlying toxicity mechanisms. Furthermore, metabolomics enables better characterization of human environmental exposures and their influence on disease pathogenesis. Metabolomics approaches are being increasingly incorporated into toxicology studies and safety pharmacology evaluations to gain mechanistic insights and identify early biomarkers of toxicity. However, realizing the full potential of metabolomics in regulatory decision making requires a robust demonstration of reliability through quality assurance practices, reference materials, and interlaboratory studies. Overall, metabolomics shows great promise in strengthening the mechanistic understanding of toxicity, enhancing routine safety screening, and transforming exposure and risk assessment paradigms. Integration of metabolomics with computational, in vitro, and personalized medicine innovations will shape future applications in predictive toxicology.
Report of the First ONTOX Stakeholder Network Meeting: Digging Under the Surface of ONTOX Together With the Stakeholders
Diemar M.G., Vinken M., Teunis M., et al.
Alternatives to Laboratory Animals | January 2024
ABSTRACT
The first Stakeholder Network Meeting of the EU Horizon 2020-funded ONTOX project was held on 13–14 March 2023, in Brussels, Belgium. The discussion centred around identifying specific challenges, barriers and drivers in relation to the implementation of non-animal new approach methodologies (NAMs) and probabilistic risk assessment (PRA), in order to help address the issues and rank them according to their associated level of difficulty. ONTOX aims to advance the assessment of chemical risk to humans, without the use of animal testing, by developing non-animal NAMs and PRA in line with 21st century toxicity testing principles. Stakeholder groups (regulatory authorities, companies, academia, non-governmental organisations) were identified and invited to participate in a meeting and a survey, by which their current position in relation to the implementation of NAMs and PRA was ascertained, as well as specific challenges and drivers highlighted. The survey analysis revealed areas of agreement and disagreement among stakeholders on topics such as capacity building, sustainability, regulatory acceptance, validation of adverse outcome pathways, acceptance of artificial intelligence (AI) in risk assessment, and guaranteeing consumer safety. The stakeholder network meeting resulted in the identification of barriers, drivers and specific challenges that need to be addressed. Breakout groups discussed topics such as hazard versus risk assessment, future reliance on AI and machine learning, regulatory requirements for industry and sustainability of the ONTOX Hub platform. The outputs from these discussions provided insights for overcoming barriers and leveraging drivers for implementing NAMs and PRA. It was concluded that there is a continued need for stakeholder engagement, including the organisation of a ‘hackathon’ to tackle challenges, to ensure the successful implementation of NAMs and PRA in chemical risk assessment.
The probable future of toxicology - probabilistic risk assessment
Maertens A., Antignac E., Benfenati E., Bloch D., Fritsche E., Hoffmann S., Jaworska J., Loizou G., McNally K., Piechota P., Roggen E. L., Teunis M., Hartung T.
Alternatives to Animal Experimentation | January 2024
ABSTRACT
Both because of the shortcomings of existing risk assessment methodologies and because of newly available tools to predict hazard and risk with machine learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also estimates of the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches but comes at the cost of an increased complexity of the process, as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the availability of user-friendly open-source software, a rise in the expertise of toxicologists required to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.
Published in 2023
ToxAIcology - The evolving role of artificial intelligence in advancing toxicology and modernizing regulatory science
Hartung T.
Alternatives to Animal Experimentation | October 2023
ABSTRACT
Toxicology has undergone a transformation from an observational science to a data-rich discipline ripe for artificial intelligence (AI) integration. The exponential growth in computing power coupled with accumulation of large toxicological datasets has created new opportunities to apply techniques like machine learning and especially deep learning to enhance chemical hazard assessment. This article provides an overview of key developments in AI-enabled toxicology, including early expert systems, statistical learning methods like quantitative structure-activity relationships (QSARs), recent advances with deep neural networks, and emerging trends. The promises and challenges of AI adoption for predictive toxicology, data analysis, risk assessment, and mechanistic research are discussed. Responsible development and application of interpretable and human-centered AI tools through multidisciplinary collaboration can accelerate evidence-based toxicology to better protect human health and the environment. However, AI is not a panacea and must be thoughtfully designed and utilized alongside ongoing efforts to improve primary evidence generation and appraisal.
Unraveling the mechanisms underlying drug-induced cholestatic liver injury: identifying key genes using machine learning techniques on human in vitro data sets
Jiang J., van Ertvelde J., Ertaylan G., Peeters R., Jennen D., de Kok T. M., Vinken M.
Archives of Toxicology | August 2023
ABSTRACT
Drug-induced intrahepatic cholestasis (DIC) is a major type of hepatic toxicity that is challenging to predict in early drug development stages. Preclinical animal studies often fail to detect DIC in humans. In vitro toxicogenomics assays using human liver cells have become a practical approach to predict human-relevant DIC. The present study was set up to identify transcriptomic signatures of DIC by applying machine learning algorithms to the Open TG-GATEs database. A total of nine DIC compounds and nine non-DIC compounds were selected, and supervised classification algorithms were applied to develop prediction models using differentially expressed features. Feature selection techniques identified 13 genes that achieved optimal prediction performance using logistic regression combined with a sequential backward selection method. The internal validation of the best-performing model showed an accuracy of 0.958, a sensitivity of 0.941, a specificity of 0.978, and an F1-score of 0.956. Applying the model to an external validation set resulted in an average prediction accuracy of 0.71. The identified genes were mechanistically linked to the adverse outcome pathway network of DIC, providing insights into cellular and molecular processes during the response to chemical toxicity. Our findings provide valuable insights into toxicological responses and enhance the accuracy of DIC prediction, thereby advancing the application of transcriptome profiling in designing new approach methodologies for hazard identification.
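The feature-selection strategy described in the abstract (logistic regression combined with sequential backward selection) can be sketched with scikit-learn. The data below are random placeholders, not the Open TG-GATEs expression profiles, and only the panel size of 13 genes is taken from the paper:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 18 compounds (9 DIC, 9 non-DIC) x 30 candidate genes.
# A real analysis would start from differentially expressed TG-GATEs features.
X = rng.normal(size=(18, 30))
y = np.array([1] * 9 + [0] * 9)

clf = LogisticRegression(max_iter=1000)

# Backward elimination: drop genes one at a time until 13 remain,
# keeping the subset that scores best under cross-validation.
selector = SequentialFeatureSelector(
    clf, n_features_to_select=13, direction="backward", cv=3
)
selector.fit(X, y)
gene_panel = np.flatnonzero(selector.get_support())

# Internal cross-validated accuracy of the reduced model
acc = cross_val_score(clf, X[:, gene_panel], y, cv=3).mean()
```

On real expression data the retained columns would map back to gene identifiers, yielding the kind of compact transcriptomic signature the study reports.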
Optimization of an adverse outcome pathway network on chemical-induced cholestasis using an artificial intelligence-assisted data collection and confidence level quantification approach
van Ertvelde J., Verhoeven A., Maerten A., Cooreman A., Dos Santos Rodrigues B., Sanz-Serrano J., Mihajlovic M., Tripodi I., Teunis M., Jover R., Luechtefeld T., Vanhaecke T., Jiang J., Vinken M.
Journal of Biomedical Informatics | August 2023
ABSTRACT
The present study introduces a novel approach for optimizing a previously published AOP network on chemical-induced cholestasis, using artificial intelligence to facilitate automated data collection followed by quantitative confidence assessment of molecular initiating events, key events, and key event relationships. Artificial intelligence-assisted data collection was performed by means of the free web platform Sysrev. Confidence levels of the tailored Bradford-Hill criteria were quantified for the purpose of weight-of-evidence assessment of the optimized AOP network. Scores were calculated for biological plausibility, empirical evidence, and essentiality, and were integrated into a total key event relationship confidence value. The optimized AOP network was visualized using Cytoscape, with the node size representing the incidence of the key event and the edge size indicating the total confidence in the key event relationship.
Predicting the Mitochondrial Toxicity of Small Molecules: Insights from Mechanistic Assays and Cell Painting Data
Garcia de Lomana M., Marin Zapata P. A., Montanari F.
Chemical Research in Toxicology | July 2023
ABSTRACT
Mitochondrial toxicity is a significant concern in the drug discovery process, as compounds that disrupt the function of these organelles can lead to serious side effects, including liver injury and cardiotoxicity. Different in vitro assays exist to detect mitochondrial toxicity at varying mechanistic levels: disruption of the respiratory chain, disruption of the membrane potential, or general mitochondrial dysfunction. In parallel, whole cell imaging assays like Cell Painting provide a phenotypic overview of the cellular system upon treatment and enable the assessment of mitochondrial health from cell profiling features. In this study, we aim to establish machine learning models for the prediction of mitochondrial toxicity, making the best use of the available data. For this purpose, we first derived highly curated datasets of mitochondrial toxicity, including subsets for different mechanisms of action. Due to the limited amount of labeled data often associated with toxicological endpoints, we investigated the potential of using morphological features from a large Cell Painting screen to label additional compounds and enrich our dataset. Our results suggest that models incorporating morphological profiles perform better in predicting mitochondrial toxicity than those trained on chemical structures alone (up to +0.08 and +0.09 mean MCC in random and cluster cross-validation, respectively). Toxicity labels derived from Cell Painting images improved the predictions on an external test set up to +0.08 MCC. However, we also found that further research is needed to improve the reliability of Cell Painting image labeling. Overall, our study provides insights into the importance of considering different mechanisms of action when predicting a complex endpoint like mitochondrial disruption as well as into the challenges and opportunities of using Cell Painting data for toxicity prediction.
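The study's central comparison, models trained on chemical structure alone versus models enriched with Cell Painting morphological profiles, scored by the Matthews correlation coefficient (MCC), can be sketched as follows. All arrays are random placeholders standing in for fingerprints, image-derived profiles, and mitochondrial toxicity labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import matthews_corrcoef
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 300
X_chem = rng.normal(size=(n, 256))     # stand-in for structural fingerprints
X_morph = rng.normal(size=(n, 128))    # stand-in for Cell Painting profiles
y = (rng.random(n) < 0.3).astype(int)  # stand-in mitochondrial toxicity labels

def mcc_score(X, y):
    """Train/test split, fit a Random Forest, report MCC on the held-out set."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0
    )
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return matthews_corrcoef(y_te, model.predict(X_te))

mcc_chem = mcc_score(X_chem, y)                       # structure only
mcc_both = mcc_score(np.hstack([X_chem, X_morph]), y) # structure + morphology
```

With real labels, the gap between `mcc_both` and `mcc_chem` would quantify the added value of the morphological profiles, the kind of +0.08/+0.09 MCC gains reported in the abstract.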
REACH out-numbered! The future of REACH and animal numbers
Rovida C., Busquet F., Leist M., Hartung T.
Alternatives to Animal Experimentation | July 2023
ABSTRACT
The EU’s REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) Regulation requires animal testing only as a last resort. However, our study (Knight et al., 2023) in this issue reveals that approximately 2.9 million animals have been used for REACH testing for reproductive toxicity, developmental toxicity, and repeated-dose toxicity alone as of December 2022. Currently, additional tests requiring about 1.3 million more animals are in the works. As compliance checks continue, more animal tests are anticipated. According to the European Chemicals Agency (ECHA), 75% of read-across methods have been rejected during compliance checks. Here, we estimate that 0.6 to 3.2 million animals have been used for other endpoints, likely at the lower end of this range. The ongoing discussion about the grouping of 4,500 registered petrochemicals can still have a major impact on these numbers. The 2022 amendment of REACH is estimated to add 3.6 to 7.0 million animals. This information comes as the European Parliament is set to consider changes to REACH that could further increase animal testing. Two proposals currently under discussion would likely necessitate new animal testing: extending the requirement for a chemical safety assessment (CSA) to Annex VII substances could add 1.6 to 2.6 million animals, and the registration of polymers adds a challenge comparable to the petrochemical discussion. These findings highlight the importance of understanding the current state of REACH animal testing for the upcoming debate on REACH revisions as an opportunity to focus on reducing animal use.
G × E interactions as a basis for toxicological uncertainty
Suciu I., Pamies D., Peruzzo R., Wirtz P. H., Smirnova L., Pallocca G., Hauck Ch., Cronin M. T. D., Hengstler J. G., Brunner T., Hartung T., Amelio I., Leist M.
Archives of Toxicology | June 2023
ABSTRACT
To transfer toxicological findings from model systems, e.g. animals, to humans, standardized safety factors are applied to account for intra-species and inter-species variabilities. An alternative approach would be to measure and model the actual compound-specific uncertainties. This biological concept assumes that all observed toxicities depend not only on the exposure situation (environment = E), but also on the genetic (G) background of the model (G × E). As a quantitative discipline, toxicology needs to move beyond merely qualitative G × E concepts. Research programs are required that determine the major biological variabilities affecting toxicity and categorize their relative weights and contributions. In a complementary approach, detailed case studies need to explore the role of genetic backgrounds in the adverse effects of defined chemicals. In addition, current understanding of the selection and propagation of adverse outcome pathways (AOP) in different biological environments is very limited. To improve understanding, a particular focus is required on modulatory and counter-regulatory steps. For quantitative approaches to address uncertainties, the concept of “genetic” influence needs a more precise definition. What is usually meant by this term in the context of G × E are the protein functions encoded by the genes. Besides the gene sequence, the regulation of gene expression and function should also be accounted for. The widened concept of past and present “gene expression” influences is summarized here as Ge. Also, the concept of “environment” needs some reconsideration in situations where exposure timing (Et) is pivotal: prolonged or repeated exposure to the insult (chemical, physical, lifestyle) affects Ge. This implies that it changes the model system. The interaction of Ge with Et might be denoted as Ge × Et.
We provide here general explanations and specific examples for this concept and show how it could be applied in the context of New Approach Methodologies (NAM).
The effects of hexabromocyclododecane on the transcriptome and hepatic enzyme activity in three human HepaRG-based models
Proença S., van Sabben N., Legler J., Kamstra J. H., Kramer N. I.
Toxicology | February 2023
ABSTRACT
The disruption of thyroid hormone homeostasis by hexabromocyclododecane (HBCD) in rodents is hypothesized to be due to HBCD increasing the hepatic clearance of thyroxine (T4). The extent to which these effects are relevant to humans is unclear. To evaluate HBCD effects on humans, the activation of key hepatic nuclear receptors and the consequent disruption of thyroid hormone homeostasis were studied in different human hepatic cell models. The hepatoma cell line HepaRG, cultured as two-dimensional (2D), sandwich (SW) and spheroid (3D) cultures, and primary human hepatocytes (PHH) cultured as sandwich were exposed to 1 and 10 µM HBCD and characterized for their transcriptome changes. Pathway enrichment analysis showed that 3D models, followed by SW, had a stronger transcriptome response to HBCD, which is explained by the higher expression of hepatic nuclear receptors but also by the greater accumulation of HBCD measured inside cells in these models. The pregnane X receptor (PXR) pathway is one of the most upregulated pathways across the three hepatic models, followed by the constitutive androstane receptor (CAR) and general hepatic nuclear receptor pathways. Lipid metabolism pathways showed a tendency towards downregulation in all exposures, both in PHH and in the three cultivation modes of HepaRG. The activity of enzymes related to PXR/CAR induction and T4 metabolism was evaluated in the three different types of HepaRG cultures exposed to HBCD for 48 h. The reference inducers rifampicin and PCB-153 affected the enzymatic activity of 2D and SW HepaRG cultures, but not of 3D cultures. HBCD did not induce the activity of any of the studied enzymes in any of the cell models and culture methods. This study illustrates that, for nuclear receptor-mediated T4 disruption, transcriptome changes might not be indicative of an actual adverse effect. Clarification of the reasons for this lack of translation is essential to evaluate the potential of new chemicals to act as thyroid hormone disruptors by altering thyroid hormone metabolism.
Published in 2022
Natural language processing in toxicology: Delineating adverse outcome pathways and guiding the application of new approach methodologies
Corradi M., de Haan A., Staumont B., Piersma A., Geris L., Pieters R., Krul C., Teunis M.
Biomaterials and Biosystems | August 2022
ABSTRACT
Adverse Outcome Pathways (AOPs) are conceptual frameworks that tie an initial perturbation (molecular initiating event) to a phenotypic toxicological manifestation (adverse outcome) through a series of steps (key events). They therefore provide a standardized way to map and organize toxicological mechanistic information. As such, AOPs inform on key events underlying toxicity, thus supporting the development of New Approach Methodologies (NAMs), which aim to reduce the use of animal testing for toxicology purposes. However, the establishment of a novel AOP relies on the gathering of multiple streams of evidence and information, from available literature to knowledge databases. Often, this information is in the form of free text, also called unstructured text, which is not immediately digestible by a computer. This information is thus tedious and, with the growing volume of data available, increasingly time-consuming to process manually. The advancement of machine learning provides alternative solutions to this challenge. To extract and organize information from relevant sources, it seems valuable to employ deep learning natural language processing (NLP) techniques. We review here some of the recent progress in the NLP field and show how these techniques have already demonstrated value in the biomedical and toxicology areas. We also propose an approach to efficiently and reliably extract and combine relevant toxicological information from text. These data can be used to map underlying mechanisms that lead to toxicological effects and to start building quantitative models, in particular AOPs, ultimately allowing animal-free, human-based hazard and risk assessment.
Monte Carlo Models for Sub-Chronic Repeated-Dose Toxicity: Systemic and Organ-Specific Toxicity
Selvestrel G., Lavado G. J., Toropova A. P., Toropov A. A., Gadaleta D., Marzo M., Baderna D., Benfenati E.
International Journal of Molecular Sciences | June 2022
ABSTRACT
The risk-characterization of chemicals requires the determination of repeated-dose toxicity (RDT). This depends on two main outcomes: the no-observed-adverse-effect level (NOAEL) and the lowest-observed-adverse-effect level (LOAEL). These endpoints are fundamental requirements in several regulatory frameworks, such as the Registration, Evaluation, Authorization and Restriction of Chemicals (REACH) and the European Regulation 1223/2009 on cosmetics. The RDT results for the safety evaluation of chemicals are undeniably important; however, the in vivo tests are time-consuming and very expensive. In silico models can provide useful input to investigate sub-chronic RDT. Considering the complexity of these endpoints, involving variable experimental designs, this non-testing approach is challenging and attractive. Here, we built eight in silico models for NOAEL and LOAEL prediction, focusing on systemic and organ-specific toxicity, looking into the effects on the liver, kidney and brain. Starting from NOAEL and LOAEL data for oral sub-chronic toxicity in rats, retrieved from public databases, we developed and validated eight quantitative structure-activity relationship (QSAR) models based on optimal descriptors calculated by the Monte Carlo method, using the CORAL software. The results obtained with these models represent a good achievement and can be exploited in safety assessment, considering the importance of organ-related toxicity.
Replacement of animal testing by integrated approaches to testing and assessment (IATA): a call for in vivitrosi
Caloni F., De Angelis I., Hartung T.
Archives of Toxicology | May 2022
ABSTRACT
Alternative methods to animal use in toxicology are evolving with new advanced tools and multilevel approaches, meeting 3Rs requirements on the one hand and, on the other, offering relevant and valid tests for drugs and chemicals, also considering their combination in test strategies for a proper risk assessment. While stand-alone methods have been demonstrated to be applicable for some specific toxicological predictions, with some limitations, the new strategy for the application of New Approach Methods (NAM) to solve complex toxicological endpoints is addressed by Integrated Approaches for Testing and Assessment (IATA), aka Integrated Testing Strategies (ITS) or Defined Approaches for Testing and Assessment (DA). The central challenge of evidence integration is shared with the needs of risk assessment and systematic reviews in an evidence-based toxicology. Increasingly, machine learning (aka artificial intelligence, AI) lends itself to integrating diverse evidence streams. In this article, we give an overview of the state of the art of alternative methods and IATA in toxicology for regulatory use for various hazards, outlining future orientations and perspectives. We call for leveraging the synergies of integrated approaches and evidence integration from in vivo, in vitro and in silico, as true in vivitrosi.
Mitochondria as the Target of Hepatotoxicity and Drug-Induced Liver Injury: Molecular Mechanisms and Detection Methods
Mihajlovic M., Vinken M.
International Journal of Molecular Sciences | March 2022
ABSTRACT
One of the major mechanisms of drug-induced liver injury includes mitochondrial perturbation and dysfunction. This is not a surprise, given that mitochondria are essential organelles in most cells, which are responsible for energy homeostasis and the regulation of cellular metabolism. Drug-induced mitochondrial dysfunction can be influenced by various factors and conditions, such as genetic predisposition, the presence of metabolic disorders and obesity, viral infections, as well as drugs. Despite the fact that many methods have been developed for studying mitochondrial function, there is still a need for advanced and integrative models and approaches more closely resembling liver physiology, which would take into account predisposing factors. This could reduce the costs of drug development by the early prediction of potential mitochondrial toxicity during pre-clinical tests and, especially, prevent serious complications observed in clinical settings.
Prediction of the Neurotoxic Potential of Chemicals Based on Modelling of Molecular Initiating Events Upstream of the Adverse Outcome Pathways of (Developmental) Neurotoxicity
Gadaleta D., Spînu N., Roncaglioni A., Cronin M. T. D., Benfenati E.
International Journal of Molecular Sciences | March 2022
ABSTRACT
Developmental and adult/ageing neurotoxicity is an area needing alternative methods for chemical risk assessment. The formulation of a strategy to screen large numbers of chemicals is highly relevant due to potential exposure to compounds that may have long-term adverse health consequences on the nervous system, leading to neurodegeneration. Adverse Outcome Pathways (AOPs) provide information on relevant molecular initiating events (MIEs) and key events (KEs) that could inform the development of computational alternatives for these complex effects. We propose a screening method integrating multiple Quantitative Structure–Activity Relationship (QSAR) models. The MIEs of existing AOP networks of developmental and adult/ageing neurotoxicity were modelled to predict neurotoxicity. Random Forests were used to model each MIE. Predictions returned by single models were integrated and evaluated for their capability to predict neurotoxicity. Specifically, MIE predictions were used within various types of classifiers and compared with other reference standards (chemical descriptors and structural fingerprints) to benchmark their predictive capability. Overall, classifiers based on MIE predictions returned predictive performances comparable to those based on chemical descriptors and structural fingerprints. The integrated computational approach described here will be beneficial for large-scale screening and prioritisation of chemicals as a function of their potential to cause long-term neurotoxic effects.
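The integration scheme described above, one Random Forest per molecular initiating event whose predicted probabilities then feed a downstream neurotoxicity classifier, might look like the following sketch. The descriptor matrix, per-MIE labels, and toy neurotoxicity rule are all placeholders, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_chem, n_desc, n_mie = 200, 16, 5

X = rng.normal(size=(n_chem, n_desc))                    # placeholder chemical descriptors
Y_mie = (rng.random((n_chem, n_mie)) < 0.4).astype(int)  # placeholder per-MIE activity labels
y_tox = (Y_mie.sum(axis=1) >= 2).astype(int)             # toy neurotoxicity outcome

# Stage 1: one QSAR-style Random Forest per molecular initiating event
mie_models = [
    RandomForestClassifier(n_estimators=100, random_state=i).fit(X, Y_mie[:, i])
    for i in range(n_mie)
]

# Stage 2: integrate the per-MIE probabilities as features of a final classifier
mie_probs = np.column_stack([m.predict_proba(X)[:, 1] for m in mie_models])
final = LogisticRegression().fit(mie_probs, y_tox)
pred = final.predict(mie_probs)
```

Benchmarking, as in the paper, would compare this MIE-based feature set against classifiers built directly on chemical descriptors or structural fingerprints.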
Scientific Validation of Human Neurosphere Assays for Developmental Neurotoxicity Evaluation
Koch K., Bartmann K., Hartmann J., Kapr J., Klose J., Kuchovská E., Pahl M., Schlüppmann K., Zühr E., Fritsche E.
Frontiers in Toxicology | March 2022
ABSTRACT
There is a call for a paradigm shift in developmental neurotoxicity (DNT) evaluation, which demands the implementation of faster, more cost-efficient, and human-relevant test systems than current in vivo guideline studies. Under the umbrella of the Organisation for Economic Co-operation and Development (OECD), a guidance document is currently being prepared that instructs on the regulatory use of a DNT in vitro battery (DNT IVB) for fit-for-purpose applications. One crucial issue for OECD application of methods is validation, which for new approach methods (NAMs) requires novel approaches. Here, mechanistic information previously identified in vivo, as well as reported neurodevelopmental adversities in response to disturbances on the cellular and tissue level, are of central importance. In this study, we scientifically validate the Neurosphere Assay, which is based on human primary neural progenitor cells (hNPCs) and an integral part of the DNT IVB. It assesses neurodevelopmental key events (KEs) like NPC proliferation (NPC1ab), radial glia cell migration (NPC2a), neuronal differentiation (NPC3), neurite outgrowth (NPC4), oligodendrocyte differentiation (NPC5), and thyroid hormone-dependent oligodendrocyte maturation (NPC6). In addition, we extend our work from the hNPCs to human induced pluripotent stem cell-derived NPCs (hiNPCs) for the NPC proliferation (iNPC1ab) and radial glia assays (iNPC2a). The validation process we report for the endpoints studied with the Neurosphere Assays is based on 1) the relevance of the respective endpoints for brain development, 2) the confirmation of the cell type-specific morphologies observed in vitro, 3) the expression of cell type-specific markers consistent with those morphologies, 4) appropriate anticipated responses to physiologically pertinent signaling stimuli, and 5) alterations in specific in vitro endpoints upon challenge with confirmed DNT compounds.
With these strong mechanistic underpinnings, we posit that the Neurosphere Assay as an integral part of the DNT in vitro screening battery is well poised for DNT evaluation for regulatory purposes.
Multi-task Proteochemometric Modelling
Pentina A., Clevert D.-A.
ChemRxiv™ | February 2022
ABSTRACT
Motivation: In silico prediction of protein–ligand binding is a hot topic in computational chemistry and machine learning-based drug discovery, as an accurate prediction model could reduce the time and resources required to identify and prioritize potential drug candidates. Proteochemometric modelling (PCM) is a promising approach for in silico protein–ligand binding prediction that utilises both compound and target descriptors. However, in its original form, a PCM model cannot separate multiple assays associated with the same target. Therefore, a practitioner applying the PCM approach to modelling experimental data must either select only one assay for each target, and thus exclude a potentially significant amount of data, or pool measurements from different assays together, effectively mixing possibly very different functional dependencies between (protein, ligand) pairs and experimental measurements.
Results: We describe two modifications of PCM models that increase their flexibility by allowing them to separate multiple assays associated with the same target. Evaluated on a subset of internal Bayer dose–response data and ChEMBL, these approaches result in improved performance compared to standard PCM models. Our results demonstrate the importance of disentangling multiple assays associated with the same target when using the PCM methodology in a pharmaceutical environment.
Availability: The source code is made publicly available on GitHub for non-commercial usage after publication.
Probabilistic risk assessment – the keystone for the future of toxicology
Maertens A., Golden E., Luechtefeld T. H., Hoffmann S., Tsaioun K., Hartung T.
Alternatives to Animal Experimentation | January 2022
ABSTRACT
Safety sciences must cope with uncertainty of models and results as well as information gaps. Acknowledging this uncertainty necessitates embracing probabilities and accepting the remaining risk. Every toxicological tool delivers only probable results. Traditionally, this is taken into account by using uncertainty / assessment factors and worst-case / precautionary approaches and thresholds. Probabilistic methods and Bayesian approaches seek to characterize these uncertainties and promise to support better risk assessment and, thereby, improve risk management decisions. Actual assessments of uncertainty can be more realistic than worst-case scenarios and may allow less conservative safety margins. Most importantly, as soon as we agree on uncertainty, this defines room for improvement and allows a transition from traditional to new approach methods as an engineering exercise. The objective nature of these mathematical tools makes it possible to assign each methodology its fair place in evidence integration, whether in the context of risk assessment, systematic reviews, or in the definition of an integrated testing strategy (ITS) / defined approach (DA) / integrated approach to testing and assessment (IATA). This article gives an overview of methods for probabilistic risk assessment and their application to exposure assessment, physiologically-based kinetic modelling, probability of hazard assessment (based on quantitative and read-across-based structure-activity relationships, and mechanistic alerts from in vitro studies), individual susceptibility assessment, and evidence integration. Additional aspects are opportunities for uncertainty analysis of adverse outcome pathways and their relation to thresholds of toxicological concern. In conclusion, probabilistic risk assessment will be key for constructing a new toxicology paradigm – probably!
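As a minimal illustration of the probabilistic idea, a Monte Carlo sketch can propagate an uncertain exposure and an uncertain point of departure into a distribution of margins of exposure, yielding a probability of concern instead of a single worst-case ratio. All distribution parameters below are hypothetical, chosen only for demonstration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000  # Monte Carlo samples

# Hypothetical lognormal exposure distribution (mg/kg bw/day)
exposure = rng.lognormal(mean=np.log(0.05), sigma=0.6, size=n)

# Hypothetical uncertain point of departure from hazard data (mg/kg bw/day)
pod = rng.lognormal(mean=np.log(5.0), sigma=0.4, size=n)

# A distribution of margins of exposure instead of one deterministic ratio
moe = pod / exposure

p_concern = float(np.mean(moe < 100))    # probability the MoE falls below 100
moe_p05 = float(np.quantile(moe, 0.05))  # lower 5th percentile of the MoE
```

Reporting `p_concern` and a low percentile of the margin communicates the residual uncertainty explicitly, rather than hiding it behind fixed assessment factors.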
Published in 2021
Unsupervised Representation Learning for Proteo-chemometric Modeling
Kim P. T., Winter R., Clevert D.-A.
International Journal of Molecular Sciences | December 2021
ABSTRACT
In silico protein–ligand binding prediction is an ongoing area of research in computational chemistry and machine learning based drug discovery, as an accurate predictive model could greatly reduce the time and resources necessary for the detection and prioritization of possible drug candidates. Proteochemometric modeling (PCM) attempts to create an accurate model of the protein–ligand interaction space by combining explicit protein and ligand descriptors. This requires the creation of information-rich, uniform and computer interpretable representations of proteins and ligands. Previous studies in PCM modeling rely on pre-defined, handcrafted feature extraction methods, and many methods use protein descriptors that require alignment or are otherwise specific to a particular group of related proteins. However, recent advances in representation learning have shown that unsupervised machine learning can be used to generate embeddings that outperform complex, human-engineered representations. Several different embedding methods for proteins and molecules have been developed based on various language-modeling methods. Here, we demonstrate the utility of these unsupervised representations and compare three protein embeddings and two compound embeddings in a fair manner. We evaluate performance on various splits of a benchmark dataset, as well as on an internal dataset of protein–ligand binding activities and find that unsupervised-learned representations significantly outperform handcrafted representations.
Img2Mol – accurate SMILES recognition from molecular graphical depictions
Clevert D.-A., Le T., Winter R., Montanari F.
Chemical Science | November 2021
ABSTRACT
The automatic recognition of the molecular content of a molecule’s graphical depiction is an extremely challenging problem that remains largely unsolved despite decades of research. Recent advances in neural machine translation enable the auto-encoding of molecular structures in a continuous vector space of fixed size (latent representation) with low reconstruction errors. In this paper, we present a fast and accurate model combining deep convolutional neural network learning from molecule depictions and a pre-trained decoder that translates the latent representation into the SMILES representation of the molecules. This combination allows us to precisely infer a molecular structure from an image. Our rigorous evaluation shows that Img2Mol is able to correctly translate up to 88% of the molecular depictions into their SMILES representation. A pretrained version of Img2Mol is made publicly available on GitHub for non-commercial users.
Connexin-Based Channel Activity Is Not Specifically Altered by Hepatocarcinogenic Chemicals
Leroy K., Pieters A., Cooreman A., Van Campenhout R., Cogliati B., Vinken M.
International Journal of Molecular Sciences | October 2021
ABSTRACT
Connexin-based channels play key roles in cellular communication and can be affected by deleterious chemicals. In this study, the effects of various genotoxic carcinogenic compounds, non-genotoxic carcinogenic compounds and non-carcinogenic compounds on the expression and functionality of connexin-based channels, both gap junctions and connexin hemichannels, were investigated in human hepatoma HepaRG cell cultures. Expression of connexin26, connexin32, and connexin43 was evaluated by means of real-time reverse transcription quantitative polymerase chain reaction analysis, immunoblot analysis and in situ immunostaining. Gap junction functionality was assessed via a scrape loading/dye transfer assay. Opening of connexin hemichannels was monitored by measuring extracellular release of adenosine triphosphate. It was found that both genotoxic and non-genotoxic carcinogenic compounds negatively affect connexin32 expression. However, no specific effects related to chemical type were observed at the level of gap junction or connexin hemichannel functionality.
Protocol for a systematic scoping review
Husøy T., Diemar M. G., Roggen E. L., Kalyva M., Dirven H., Vist G. E.
Methods and tools for assessing chemical exposure in humans (Publication in conference program) | October 2021
ABSTRACT
The project ‘Ontology-driven and artificial intelligence-based repeated dose toxicity testing of chemicals for next generation risk assessment’ (ONTOX) under the EU programme Horizon 2020 is running from 01 May 2021 to 30 April 2026 and is coordinated by Vrije Universiteit Brussel, Belgium. The vision of ONTOX is to provide a functional and sustainable solution for advancing human risk assessment of chemicals without the use of animals, in line with the principles of 21st century toxicity testing and next generation risk assessment. ONTOX will perform exposure assessment on selected chemicals. Preferably, existing dietary, dermal and inhalation exposure methods and/or tools will be used. In order to have the best selection of tools available, we will conduct this scoping review to obtain an overview of publicly available exposure methods and tools. We will assess the advantages and disadvantages of the different methods that we identify.
Primary Human Hepatocyte Spheroids as Tools to Study the Hepatotoxic Potential of Non-Pharmaceutical Chemicals
Vilas-Boas V., Gijbels E., Leroy K., Pieters A., Baze A., Parmentier C., Vinken M.
International Journal of Molecular Sciences | October 2021
ABSTRACT
Drug-induced liver injury, including cholestasis, is an important clinical issue and economic burden for the pharmaceutical industry and healthcare systems. However, human-relevant in vitro information on the ability of other types of chemicals to induce cholestatic hepatotoxicity is lacking. This work aimed at investigating the cholestatic potential of non-pharmaceutical chemicals using primary human hepatocytes cultured in 3D spheroids. Spheroid cultures were repeatedly (co-)exposed to drugs (cyclosporine-A, bosentan, macitentan) or non-pharmaceutical chemicals (paraquat, tartrazine, triclosan) and a concentrated mixture of bile acids for 4 weeks. Cell viability (adenosine triphosphate content) was checked every week and used to calculate the cholestatic index, an indicator of cholestatic liability. Microarray analysis was performed at specific time-points to verify the deregulation of genes related to cholestasis, steatosis and fibrosis. Despite the evident inter-donor variability, shorter exposures to cyclosporine-A consistently produced cholestatic index values below 0.80, with transcriptomic data partially supporting its cholestatic burden. Bosentan was confirmed to be hepatotoxic, while macitentan was not toxic at the tested concentrations. Prolonged exposure to paraquat suggested fibrotic potential, while triclosan markedly deregulated genes involved in different types of hepatotoxicity. These results support the applicability of primary human hepatocyte spheroids to study hepatotoxicity of non-pharmaceutical chemicals in vitro.
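The cholestatic index readout mentioned in the abstract can be sketched as a simple ratio. This is an illustrative sketch only, assuming the index is computed as viability (ATP content) under co-exposure with the bile acid mixture divided by viability with the compound alone, with values below the 0.80 threshold flagging cholestatic liability; the measurements below are made up.

```python
THRESHOLD = 0.80  # threshold for cholestatic liability used in the abstract

def cholestatic_index(atp_drug_plus_bile_acids: float, atp_drug_alone: float) -> float:
    """Assumed definition: ratio of viability with vs. without the bile
    acid mixture; a value well below 1 means bile acids potentiate the
    compound's toxicity."""
    return atp_drug_plus_bile_acids / atp_drug_alone

# Hypothetical weekly ATP readouts (arbitrary luminescence units)
ci = cholestatic_index(atp_drug_plus_bile_acids=5200.0, atp_drug_alone=7800.0)
flagged = ci < THRESHOLD  # True -> cholestatic liability suspected
```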
A systematic review of in vitro models of drug-induced kidney injury
Irvine A. R., van Berlo D., Shekhani R., Masereeuw R.
Current Opinion in Toxicology | September 2021
ABSTRACT
Drug-induced nephrotoxicity is a major cause of kidney dysfunction with potentially fatal consequences and can hamper the research and development of new pharmaceuticals. This emphasises the need for new methods for earlier and more accurate diagnosis to avoid drug-induced kidney injury. Here, we present a systematic review of the available approaches to study drug-induced kidney injury, one of the most common reasons for drug withdrawal, in vitro. The systematic review approach was selected to ensure that our findings are as objective and reproducible as possible. A novel study quality checklist, named the validation score, was developed based on published regulatory guidance and industrial perspectives, and models returned by the search strategy were analysed according to their overall complexity and the kidney region studied. Our search strategy returned 1731 articles supplemented by 337 from secondary sources, of which 57 articles met the inclusion criteria for final analysis. Our results show that the proximal tubule dominates the field (84%), followed by the glomerulus and Bowman’s capsule (7%). Of all drugs investigated, the focus was most on cisplatin (n = 29, 50.1% of final inclusions). We found that the validation score increased with increasing model complexity, reflecting the value of innovative in vitro models. Furthermore, although the highly diverse usage of cell lines and modelling approaches prevented strong statistical verification through a meta-analysis, our findings show the downstream potential of such approaches in personalised medicine and for rare diseases where traditional trials are not feasible.
Safer chemicals using less animals: kick-off of the European ONTOX project
Vinken M., Benfenati E., Busquet F., Castell J., Clevert D. A., de Kok T. M., Dirven H., Fritsche E., Geris L., Gozalbes R., Hartung T., Jennen D., Jover R., Kandarova H., Kramer N., Krul C.,
Luechtefeld T., Masereeuw R., Roggen E., Schaller S., Vanhaecke T., Yang Ch., Piersma A. H.
Toxicology | June 2021
ABSTRACT
The 3Rs concept, calling for replacement, reduction and refinement of animal experimentation, is receiving increasing attention around the world, and has found its way into legislation, in particular in the European Union. This is supported by continuing high-level efforts of the European Commission to promote the development and implementation of 3Rs methods. In this respect, the European project called “ONTOX: ontology-driven and artificial intelligence-based repeated dose toxicity testing of chemicals for next generation risk assessment” was recently initiated with the goal to provide a functional and sustainable solution for advancing human risk assessment of chemicals without the use of animals, in line with the principles of 21st century toxicity testing and next generation risk assessment. ONTOX will deliver a generic strategy to create new approach methodologies (NAMs) in order to predict systemic repeated dose toxicity effects that, upon combination with tailored exposure assessment, will enable human risk assessment. For proof-of-concept purposes, focus is put on NAMs addressing adversities in the liver, kidneys and developing brain induced by a variety of chemicals. The NAMs each consist of a computational system based on artificial intelligence and are fed by biological, toxicological, chemical and kinetic data. Data are consecutively integrated in physiological maps, quantitative adverse outcome pathway networks and ontology frameworks. Supported by artificial intelligence, data gaps are identified and filled by targeted in vitro and in silico testing. ONTOX is anticipated to have a deep and long-lasting impact at many levels, in particular by consolidating Europe’s world-leading position regarding the development, exploitation, regulation and application of animal-free methods for human risk assessment of chemicals.
In Vitro Liver Toxicity Testing of Chemicals: A Pragmatic Approach
Tabernilla A., dos Santos Rodrigues B., Pieters A., Caufriez A., Leroy K., Van Campenhout R., Cooreman A., Gomes A. R., Arnesdotter E., Gijbels E., Vinken M.
International Journal of Molecular Sciences | May 2021
ABSTRACT
The liver is among the most frequently targeted organs by noxious chemicals of diverse nature. Liver toxicity testing using laboratory animals not only raises serious ethical questions, but is also rather poorly predictive of chemical safety in humans. Increasing attention is, therefore, being paid to the development of non-animal and human-based testing schemes, which rely to a great extent on in vitro methodology. The present paper proposes a rationalized tiered in vitro testing strategy to detect liver toxicity triggered by chemicals, in which the first tier is focused on assessing general cytotoxicity, while the second tier is aimed at identifying liver-specific toxicity as such. A state-of-the-art overview is provided of the most commonly used in vitro assays that can be applied in both tiers. Advantages and disadvantages of each assay as well as overall practical considerations are discussed.
