Neural Sequence Modeling Lab

Mission

At the Neural Sequence Modeling Lab, we investigate complex, non-linear phenomena through the lens of advanced deep learning and sequence alignment. From decades-long records of tectonic fault dynamics to the intricate syntax of human languages, our laboratory treats these diverse datasets fundamentally as sequence modeling challenges. By bridging rigorous mathematical theory with high-performance computational data science, we develop next-generation AI architectures capable of extracting causal relationships, mapping complex manifolds, and predicting future states in both physical and linguistic systems.

Core Research Areas

  • Data-Driven Seismology & Fault-Agnostic Forecasting: We apply sequence modeling techniques (e.g., marked point processes, edit distances, and self-attention mechanisms) to complex seismic catalogs; a minimal attention-based sketch appears after this list. Currently, we are leveraging large-scale Transformer architectures and manifold learning to map the spatiotemporal evolution of global fault lines.
  • Natural Language Processing & Cross-Lingual Alignment: Human language is the ultimate sequence. Our lab extends advanced neural architectures into NLP, with a specific focus on overcoming heuristic bottlenecks in low-resource machine translation. We actively develop novel methodologies, such as the Cross-Lingual Token Alignment Distance (CLTAD), utilizing modern Transformer frameworks to guide multilingual transfer learning and improve translation accuracy across language barriers; an illustrative alignment-distance sketch follows the attention example below.
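A minimal sketch of the attention-based idea, under assumed inputs: each seismic event is reduced to a four-dimensional mark (inter-event time, magnitude, latitude, longitude), and a small causal Transformer encoder in PyTorch predicts the time gap to the next event. The event features and hyperparameters are placeholders chosen for brevity, not the lab's actual pipeline.

    # Illustrative only: a toy causal Transformer over seismic event sequences.
    import torch
    import torch.nn as nn

    class EventEncoder(nn.Module):
        def __init__(self, d_model=64, n_heads=4, n_layers=2):
            super().__init__()
            # Each event is a 4-dim mark: (inter-event time, magnitude, lat, lon).
            self.embed = nn.Linear(4, d_model)
            layer = nn.TransformerEncoderLayer(
                d_model=d_model, nhead=n_heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.head = nn.Linear(d_model, 1)  # time-to-next-event regressor

        def forward(self, events):
            # events: (batch, seq_len, 4); the causal mask keeps each position
            # from attending to future events.
            causal = nn.Transformer.generate_square_subsequent_mask(events.size(1))
            h = self.encoder(self.embed(events), mask=causal)
            return self.head(h).squeeze(-1)

    # Toy usage: 3 catalogs of 16 events each.
    model = EventEncoder()
    pred_gaps = model(torch.randn(3, 16, 4))  # shape (3, 16)

In practice an encoder like this could be trained with a regression or point-process likelihood loss over real catalogs; the random tensor here merely stands in for preprocessed event data.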
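In the same hedged spirit, the sketch below shows one generic way an edit-distance-style score over token embeddings can be computed: a Needleman-Wunsch dynamic program where aligning two tokens costs 1 minus their cosine similarity and skipping a token incurs a fixed gap penalty. Both cost choices are assumptions made for the example; this illustrates the general notion of a cross-lingual token alignment distance, not the lab's CLTAD implementation.

    # Illustrative only: edit-distance-style alignment over token embeddings.
    import numpy as np

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

    def alignment_distance(src_emb, tgt_emb, gap=1.0):
        # src_emb: (m, d), tgt_emb: (n, d) token embeddings, e.g. taken
        # from a multilingual Transformer encoder.
        m, n = len(src_emb), len(tgt_emb)
        D = np.zeros((m + 1, n + 1))
        D[:, 0] = np.arange(m + 1) * gap   # cost of skipping source tokens
        D[0, :] = np.arange(n + 1) * gap   # cost of skipping target tokens
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                sub = 1.0 - cosine(src_emb[i - 1], tgt_emb[j - 1])
                D[i, j] = min(D[i - 1, j - 1] + sub,  # align the two tokens
                              D[i - 1, j] + gap,      # skip a source token
                              D[i, j - 1] + gap)      # skip a target token
        return D[m, n]

    # Toy usage with random stand-in "embeddings" of dimension 8.
    rng = np.random.default_rng(0)
    print(alignment_distance(rng.normal(size=(5, 8)), rng.normal(size=(6, 8))))

A lower score means the two token sequences align more cheaply, which is the kind of signal such a distance could feed into multilingual transfer learning.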

Join the Lab!

Located within the interdisciplinary environment of the Faculty of Information, Library and Media Science, our lab welcomes students and researchers passionate about machine learning, earth sciences, and computational linguistics. Members gain hands-on experience with state-of-the-art supercomputers, modern AI frameworks (PyTorch, Hugging Face), and open-source software development.