Improving Multi-Label Text Classification with an Attention-Based Graph Neural Network
Multi-Label Text Classification (MLTC), in which one or more labels are assigned to each input sample, is an essential task in Natural Language Processing (NLP). However, most MLTC tasks involve dependencies or correlations among labels that traditional classification methods overlook.
This paper shows how an attention-based graph neural network can capture these label dependencies and use them to improve classification. Evaluation on five real-world MLTC datasets shows that the proposed model consistently achieves higher accuracy than conventional approaches.
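To make the core idea concrete, here is a minimal sketch of a single graph-attention layer that updates label embeddings by attending over a label co-occurrence graph, which is the general mechanism the paper builds on. This is an illustrative reconstruction, not the authors' implementation: it assumes PyTorch, and all names (`LabelGraphAttention`, the toy adjacency matrix, the dimensions) are hypothetical.

```python
# Sketch: one graph-attention layer over a label graph.
# Assumes PyTorch; names and shapes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LabelGraphAttention(nn.Module):
    """A single GAT-style layer: labels attend to correlated labels."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        # Attention scores are computed from concatenated node pairs.
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, label_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # label_emb: (L, in_dim) label embeddings
        # adj: (L, L) 0/1 co-occurrence mask with self-loops
        h = self.proj(label_emb)  # (L, out_dim)
        L = h.size(0)
        # Score every ordered pair of labels.
        pairs = torch.cat(
            [h.unsqueeze(1).expand(L, L, -1), h.unsqueeze(0).expand(L, L, -1)],
            dim=-1,
        )  # (L, L, 2*out_dim)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (L, L)
        # Mask non-edges so attention flows only along label correlations.
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)
        return F.elu(alpha @ h)  # updated label embeddings


if __name__ == "__main__":
    # Toy usage: 4 labels, 16-dim embeddings, a small chain-like label graph.
    emb = torch.randn(4, 16)
    adj = torch.tensor([[1, 1, 0, 0],
                        [1, 1, 1, 0],
                        [0, 1, 1, 1],
                        [0, 0, 1, 1]], dtype=torch.float)
    layer = LabelGraphAttention(16, 16)
    print(layer(emb, adj).shape)  # torch.Size([4, 16])
```

In a full model, the refined label embeddings would be combined with features extracted from the input text to produce one score per label; the sketch above shows only the attention step that lets the model exploit label dependencies.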
The research was conducted by Ankit Pal, Muru Selvakumar, and Malaikannan Sankarasubbu of Saama AI Research.