Publication Information


Title
Japanese: 
English: Enhancing Model Learning and Interpretation Using Multiple Molecular Graph Representations for Compound Property and Activity Prediction
Authors
Japanese: Kengkanna Apakorn, 大上 雅史.
English: Apakorn Kengkanna, Masahito Ohue.
Language: English
Journal / Book Title
Japanese: 
English: Proceedings of the 20th IEEE International Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB 2023)
Volume, Issue, Pages: 
Publication Date: August 28, 2023
Publisher
Japanese: 
English: IEEE
Conference Name
Japanese: 
English: 20th IEEE International Conference on Computational Intelligence in Bioinformatics and Computational Biology (CIBCB 2023)
Venue
Japanese: 
English: Eindhoven
Official Link: https://ieeexplore.ieee.org/document/10264879
 
DOI: https://doi.org/10.1109/CIBCB56990.2023.10264879
Abstract: Graph neural networks (GNNs) demonstrate strong performance in compound property and activity prediction due to their ability to efficiently learn complex molecular graph structures. However, two main limitations persist: compound representation and model interpretability. While atom-level molecular graph representations are commonly used because they capture the natural topology, they may not fully express important substructures or functional groups that significantly influence molecular properties. Consequently, recent research proposes alternative representations that employ reduction techniques to integrate higher-level information and leverages both representations for model learning. However, there is still little study of how different molecular graph representations affect model learning and interpretation. Interpretability is also crucial for drug discovery, as it can offer chemical insights and inspiration for optimization. Numerous studies attempt to include model interpretation to explain the rationale behind predictions, but most focus solely on individual predictions, with little analysis of interpretation across different molecular graph representations. This research introduces multiple molecular graph representations that incorporate higher-level information and investigates their effects on model learning and interpretation from diverse perspectives. Several experiments are conducted across a broad range of datasets, and an attention mechanism is applied to identify significant features. The results indicate that combining an atom-level graph representation with a reduced molecular graph representation can yield promising model performance. Furthermore, the interpretation results can highlight significant features and potential substructures that consistently align with background knowledge. These multiple molecular graph representations and the accompanying interpretation analysis can strengthen model comprehension and facilitate relevant applications in drug discovery.
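For illustration only, the following minimal Python sketch contrasts the two graph views the abstract refers to: an atom-level graph (one node per atom, one edge per bond) and a reduced graph in which each ring is collapsed into a single node. The RDKit-based reduction rule, the function names, and the example molecule are assumptions made here for clarity; they are not the authors' implementation.

# Minimal sketch (not the paper's code): atom-level vs. ring-collapsed reduced graph.
from rdkit import Chem

def atom_graph(smiles):
    """Atom-level graph: one node per atom, one edge per bond."""
    mol = Chem.MolFromSmiles(smiles)
    nodes = [(a.GetIdx(), a.GetSymbol()) for a in mol.GetAtoms()]
    edges = [(b.GetBeginAtomIdx(), b.GetEndAtomIdx()) for b in mol.GetBonds()]
    return nodes, edges

def reduced_graph(smiles):
    """Reduced graph: each ring becomes one node; non-ring atoms stay as nodes.
    Two reduced nodes are connected if any of their member atoms are bonded.
    Atoms shared by fused rings are assigned to the first ring that claims them."""
    mol = Chem.MolFromSmiles(smiles)
    rings = mol.GetRingInfo().AtomRings()            # tuples of atom indices
    atom_to_node = {}
    nodes = []
    for ring in rings:                               # one node per ring
        node_id = len(nodes)
        nodes.append(("ring", sorted(ring)))
        for idx in ring:
            atom_to_node.setdefault(idx, node_id)
    for atom in mol.GetAtoms():                      # remaining atoms as singleton nodes
        if atom.GetIdx() not in atom_to_node:
            atom_to_node[atom.GetIdx()] = len(nodes)
            nodes.append((atom.GetSymbol(), [atom.GetIdx()]))
    edges = set()
    for bond in mol.GetBonds():                      # lift bonds onto reduced nodes
        u = atom_to_node[bond.GetBeginAtomIdx()]
        v = atom_to_node[bond.GetEndAtomIdx()]
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return nodes, sorted(edges)

if __name__ == "__main__":
    smiles = "c1ccccc1CC(=O)O"                       # phenylacetic acid, as an example
    print(atom_graph(smiles))
    print(reduced_graph(smiles))

In a setup of this kind, both graph views would then be fed to separate GNN encoders (or a shared one) and combined for property prediction; the ring-collapse rule above is only one of many possible reduction schemes.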
