Department of Quantitative Health Sciences
Artificial Intelligence and Robotics | Bioinformatics | Databases and Information Systems | Health Information Technology | Numerical Analysis and Scientific Computing | Statistics and Probability | Theory and Algorithms
Hundreds of millions of figures are available in the biomedical literature, representing important experimental evidence. This sheer, ever-growing volume has made it difficult for scientists to access figures of interest effectively and accurately, a process that is crucial for validating research facts and for formulating or testing novel research hypotheses. Current figure search applications cannot fully meet this challenge because their "bag of figures" assumption does not account for the relationships among figures. In our previous study, hundreds of biomedical researchers annotated articles for which they served as corresponding authors, ranking each figure in their paper by its importance at their discretion; we refer to this as "figure ranking". Using this collection of annotated data, we investigated computational approaches to ranking figures automatically. We exploited and extended state-of-the-art listwise learning-to-rank algorithms and developed a new supervised-learning model, BioFigRank. Cross-validation results show that BioFigRank yielded the best performance among the state-of-the-art computational models compared, and that greedy feature selection further boosted ranking performance significantly. Furthermore, we evaluated BioFigRank against human experts at three levels of domain expertise: (1) First Author; (2) Non-Author-In-Domain-Expert, who is neither the author nor a co-author of an article but works in the same field as its corresponding author; and (3) Non-Author-Out-Domain-Expert, who is neither the author nor a co-author of an article and who may or may not work in the same field as its corresponding author. Our results show that BioFigRank outperforms Non-Author-Out-Domain-Expert and performs as well as Non-Author-In-Domain-Expert.
Although BioFigRank underperforms First Author, most biomedical researchers are either in- or out-domain experts for any given article, so we conclude that BioFigRank represents an artificial intelligence system offering expert-level intelligence to help biomedical researchers navigate increasingly voluminous big data efficiently.
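The listwise learning-to-rank approach the abstract builds on can be illustrated with a minimal sketch. The code below implements a ListNet-style top-one cross-entropy loss over per-figure scores and fits a linear scoring function by gradient descent; the feature values and relevance labels are hypothetical illustrations, not the paper's actual BioFigRank features or implementation.

```python
import numpy as np

def top_one_probs(scores):
    """Top-one probabilities: softmax over per-figure scores (listwise)."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

def listnet_loss(relevance, scores):
    """Cross-entropy between target and model top-one distributions."""
    p_true = top_one_probs(relevance)
    p_model = top_one_probs(scores)
    return -np.sum(p_true * np.log(p_model))

def train_linear_ranker(X, relevance, lr=0.1, epochs=200):
    """Fit weights w so that X @ w orders figures like `relevance`."""
    w = np.zeros(X.shape[1])
    p_true = top_one_probs(relevance)
    for _ in range(epochs):
        scores = X @ w
        # Gradient of the listwise cross-entropy w.r.t. w
        grad = X.T @ (top_one_probs(scores) - p_true)
        w -= lr * grad
    return w

# Hypothetical per-figure features (e.g., caption length, mention count);
# relevance encodes an author-assigned ranking (higher = more important).
X = np.array([[0.9, 0.8], [0.2, 0.1], [0.5, 0.4]])
relevance = np.array([3.0, 1.0, 2.0])
w = train_linear_ranker(X, relevance)
ranking = np.argsort(-(X @ w))  # predicted order, most important first
```

Because the loss compares whole permutation-level (top-one) distributions rather than isolated figure pairs, it is listwise in the sense the abstract describes; the paper's model additionally extends such algorithms and applies greedy feature selection over its feature set.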
Human performance, Learning, Machine learning, Machine learning algorithms, Permutation, Probability distribution, Research validity
Rights and Permissions
© 2014 Liu, Yu. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
DOI of Published Version
PLoS One. 2014 Mar 13;9(3):e61567. doi: 10.1371/journal.pone.0061567. eCollection 2014.
Liu F, Yu H. (2014). Learning to rank figures within a biomedical article. UMass Chan Medical School Faculty Publications. https://doi.org/10.1371/journal.pone.0061567. Retrieved from https://escholarship.umassmed.edu/faculty_pubs/431
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.