Hi! I’m Jiajun Zhang (张家骏), a master’s student in Control Science and Engineering at the University of Science and Technology of China (USTC), advised by Prof. Liang Wang at the State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA.
My research focuses on Large Language Models, especially code intelligence, software engineering agents, code generation, and retrieval-augmented / graph-enhanced learning systems. I am particularly interested in building practical and reliable AI systems that can assist real-world coding, reasoning, and knowledge-intensive tasks.
Before joining USTC, I received my B.Eng. in Software Engineering from the Beijing Institute of Technology.
Since 2024.08, I have been a research intern at the Qwen Team, Alibaba Group / DAMO Academy, where I work on code LLM training, data synthesis, benchmark construction, evaluation, and training infrastructure for the Qwen-Coder series.
Feel free to reach out for research collaboration or discussion: zhangjiajun519@gmail.com
🔥 News
- 2026.03: 📄 Released the preprint RealChart2Code: Advancing Chart-to-Code Generation with Real Data and Multi-Task Evaluation.
- 2026.03: 🚀 Major contributor (Speakn0w) to the release of Qwen3-Coder-Next, a highly efficient open-weight coding-agent model with 80B total parameters and only 3B activated, delivering performance comparable to models with 10–20× more active parameters, strong long-horizon tool-use ability, and a 256K native context for real IDE and CLI workflows.
- 2026.02: 📄 Released the preprint Scaling Agentic Verifier for Competitive Coding.
- 2026.02: 🚀 Contributed to the release of Qwen3.5, a native multimodal agent model family led by a 397B-A17B MoE model and later expanded to smaller sizes, with strong support for deep research, web development, and adaptive tool use; my main contributions were in web development and agentic coding capabilities.
- 2026.01: 📄 Released the preprint MegaFlow: Large-Scale Distributed Orchestration System for the Agentic Era.
- 2026.01: 🎉 One paper accepted to WWW 2026: What Should I Cite? A RAG Benchmark for Academic Citation Prediction.
- 2026.01: 📄 Released two new preprints: Evaluating and Achieving Controllable Code Completion in Code LLM and From Completion to Editing: Unlocking Context-Aware Code Infilling via Search-and-Replace Instruction Tuning.
- 2025.11: 📄 Released the survey preprint From Code Foundation Models to Agents and Applications: A Comprehensive Survey and Practical Guide to Code Intelligence.
- 2025.11: 📄 Released the preprint PlotCraft: Pushing the Limits of LLMs for Complex and Interactive Data Visualization.
- 2025.08: 🎉 One paper accepted to EMNLP 2025 Findings: LAGCL4Rec: When LLMs Activate Interactions Potential in Graph Contrastive Learning for Recommendation.
- 2025.06: 🎉 One paper accepted to ICML 2025: SWE-Flow: Synthesizing Software Engineering Data in a Test-Driven Manner.
- 2025.05: 🚀 Contributed to the release of Qwen3-Coder, an open coding model with strong agentic coding performance, 256K native context (extendable to 1M with YaRN), and competitive results on repository-scale coding and browser-use tasks.
- 2025.05: 🎉 One paper accepted to NeurIPS 2025: Negative Feedback Really Matters: Signed Dual-Channel Graph Contrastive Learning Framework for Recommendation.
- 2024.12: 📄 Released the preprint ExecRepoBench: Multi-level Executable Code Completion Evaluation.
- 2024.09: 🚀 Contributed to the release of Qwen2.5-Coder, a strong open code model family trained on 5.5T tokens of code-centric data, achieving state-of-the-art performance across code generation, completion, reasoning, and repair.
- 2024.03: 🎉 One paper accepted to CIKM 2024: Evolving to the Future: Unseen Event Adaptive Fake News Detection on Social Media.
📝 Publications
Note: repo / dataset / model / blog links are attached below only when a public artifact can be reliably confirmed.
🤖 Coding Agents & Code Intelligence

Evaluating and Achieving Controllable Code Completion in Code LLM
Jiajun Zhang, Zeyu Cui, Lei Zhang, Jian Yang, Jiaxi Yang, Qiang Liu, Zilei Wang, Binyuan Hui, Liang Wang, Junyang Lin
- TL;DR: We present C3-Bench, the first instruction-guided code completion benchmark with 2,195 tasks, revealing substantial gaps in instruction-following capabilities between open-source and proprietary models during code completion.
- Overview: This paper studies a practical but underexplored question in code completion: whether models can reliably follow explicit user control signals such as editing intent, coding style, structural constraints, and local context requirements. To answer this, we introduce a large-scale benchmark and analyze both evaluation and training strategies for controllable completion.
- Highlights: Beyond benchmarking, the work explores how controllability can be improved in real developer scenarios, showing that strong raw completion ability does not automatically translate to strong instruction following during tab-completion style generation.
📖 BibTeX
@article{zhang2026c3bench,
title = {Evaluating and Achieving Controllable Code Completion in Code LLM},
author = {Jiajun Zhang and Zeyu Cui and Lei Zhang and Jian Yang and Jiaxi Yang and Qiang Liu and Zilei Wang and Binyuan Hui and Liang Wang and Junyang Lin},
journal = {arXiv preprint arXiv:2601.15879},
year = {2026}
}

From Completion to Editing: Unlocking Context-Aware Code Infilling via Search-and-Replace Instruction Tuning
Jiajun Zhang, Zeyu Cui, Jiaxi Yang, Lei Zhang, Yuheng Jing, Zeyao Ma, Tianyi Bai, Binyuan Hui, Qiang Liu, Zilei Wang, Liang Wang, Junyang Lin
- TL;DR: We propose Search-and-Replace Infilling (SRI), a framework that internalizes agentic verification-and-editing into a single-pass inference process, enabling Chat models to surpass the completion performance of their Base counterparts with minimal data.
- Overview: Instead of treating code infilling as a narrow span-filling problem, this work reframes it as context-aware editing with explicit search-and-replace operations. The resulting formulation better matches realistic coding workflows, where developers revise existing code under structural and semantic constraints rather than filling isolated blanks.
- Highlights: The paper shows that lightweight instruction tuning on carefully designed editing data can unlock much stronger infilling behavior, improve robustness under long-context settings, and better align the model with real IDE-assisted development patterns.
📖 BibTeX
@article{zhang2026sri,
title = {From Completion to Editing: Unlocking Context-Aware Code Infilling via Search-and-Replace Instruction Tuning},
author = {Jiajun Zhang and Zeyu Cui and Jiaxi Yang and Lei Zhang and Yuheng Jing and Zeyao Ma and Tianyi Bai and Binyuan Hui and Qiang Liu and Zilei Wang and Liang Wang and Junyang Lin},
journal = {arXiv preprint arXiv:2601.13384},
year = {2026}
}

Scaling Agentic Verifier for Competitive Coding
Zeyao Ma, Jing Zhang, Xiaokang Zhang, Jiaxi Yang, Zongmeng Zhang, Jiajun Zhang, Yuheng Jing, Lei Zhang, Hao Zheng, Wenting Zhao, Junyang Lin, Binyuan Hui
- TL;DR: We study how to scale verifier-style agents for competitive coding and improve reliable solution selection through stronger execution-aware and agentic verification pipelines.
- Overview: Competitive coding is a demanding setting for LLM systems because correctness must be established under hidden tests rather than surface-level plausibility. This work focuses on verifier scaling, aiming to build stronger systems for candidate filtering, validation, and test-driven reasoning.
- Highlights: The paper is closely related to coding-agent training and evaluation, with practical implications for high-reliability code generation and automated programming competitions.

ExecRepoBench: Multi-level Executable Code Completion Evaluation
Jian Yang, Jiajun Zhang, Jiaxi Yang, Ke Jin, Lei Zhang, Qiyao Peng, Ken Deng, Yibo Miao, Tianyu Liu, Zeyu Cui, Binyuan Hui, Junyang Lin
- TL;DR: We introduce ExecRepoBench, an executable benchmark for repository-level code completion that evaluates models under more realistic completion settings than function-level static benchmarks.
- Overview: Repository-scale code completion requires models to reason over project context, dependencies, and executable behavior. This benchmark emphasizes multi-level evaluation and executable validation, making it more faithful to real developer usage.
- Highlights: The benchmark provides a stronger testbed for studying practical code completion systems, especially those designed for IDE workflows and repository-aware coding assistants.
📊 Visualization & Chart Generation

RealChart2Code: Advancing Chart-to-Code Generation with Real Data and Multi-Task Evaluation
Jiajun Zhang, Yuying Li, Zhixun Li, Xingyu Guo, Jingzhuo Wu, Leqi Zheng, Yiran Yang, Jianke Zhang, Qingbin Li, Shannan Yan, Zhetong Li, Changguo Jia, Junfei Wu, Zilei Wang, Qiang Liu, Liang Wang
- TL;DR: We introduce RealChart2Code, a large-scale benchmark with more than 2,800 real-data chart generation tasks that evaluates both direct chart synthesis and multi-turn code refinement for complex visualizations.
- Overview: This work studies chart-to-code generation under realistic conditions, where models must reproduce intricate multi-panel visualizations from authentic datasets instead of simplified synthetic chart templates. The benchmark emphasizes analytical intent, real data grounding, and iterative refinement in conversational settings.
- Highlights: Our evaluation of 14 leading VLMs reveals a large gap between current capabilities and real-world visualization demands, especially on complex plot structures, raw-data-driven charting, and multi-turn correction scenarios.

PlotCraft: Pushing the Limits of LLMs for Complex and Interactive Data Visualization
Jiajun Zhang, Jianke Zhang, Zeyu Cui, Jiaxi Yang, Lei Zhang, Binyuan Hui, Qiang Liu, Zilei Wang, Liang Wang, Junyang Lin
- TL;DR: We introduce PlotCraft, a benchmark with 1k challenging visualization tasks across 48 chart types, and develop PlotCraftor, a compact model achieving performance comparable to leading proprietary approaches with over 50% improvement on hard tasks.
- Overview: PlotCraft targets a difficult frontier for multimodal code generation: producing complex, interactive, and aesthetically coherent visualization code from user intent. The benchmark emphasizes realistic chart authoring challenges, including diverse plotting grammars, intricate layout requirements, and interaction-heavy scenarios that are poorly covered by existing datasets.
- Highlights: In addition to defining the benchmark, the paper presents a practical modeling pipeline for chart-code generation and demonstrates that carefully curated task design plus domain-focused training can push open models much closer to high-end proprietary systems.
🛠️ Software Engineering & Infrastructure

SWE-Flow: Synthesizing Software Engineering Data in a Test-Driven Manner
Lei Zhang, Jiaxi Yang, Min Yang, Jian Yang, Mouxiang Chen, Jiajun Zhang, Zeyu Cui, Binyuan Hui, Junyang Lin
- TL;DR: We present SWE-Flow, a TDD-grounded data synthesis framework that constructs Runtime Dependency Graphs to generate step-by-step development tasks, producing 16k training instances from real-world GitHub projects with significant fine-tuning gains.
- Overview: This work addresses the scarcity of high-quality software engineering supervision for LLMs by synthesizing executable development trajectories from real repositories. By grounding task construction in tests, runtime dependencies, and realistic implementation steps, SWE-Flow moves beyond static instruction data toward process-aware engineering data.
- Highlights: The framework is especially valuable for training coding agents and SWE-oriented models, because it captures the sequential structure of debugging, implementation, and verification rather than only final code outputs.

MegaFlow: Large-Scale Distributed Orchestration System for the Agentic Era
Lei Zhang, Mouxiang Chen, Ruisheng Cao, Jiawei Chen, Fan Zhou, Yiheng Xu, Jiaxi Yang, Liang Chen, Changwei Luo, Kai Zhang, Fan Yan, KaShun Shum, Jiajun Zhang, Zeyu Cui, Hu Feng, Junyang Lin, Binyuan Hui, Min Yang
- TL;DR: We present MegaFlow, a large-scale distributed orchestration system designed for the agentic era, supporting scalable multi-agent execution, coordination, and workflow management.
- Overview: As coding agents grow more capable, their supporting infrastructure becomes a central bottleneck. MegaFlow addresses this systems challenge by providing orchestration mechanisms for large-scale agent execution and task coordination.
- Highlights: The work complements model-centric research by focusing on the infrastructure layer required to run agentic systems at scale in realistic engineering environments.
🔍 Retrieval, Recommendation & Information Systems

What Should I Cite? A RAG Benchmark for Academic Citation Prediction
Leqi Zheng, Jiajun Zhang, Canzhi Chen, Chaokun Wang, Hongwei Li, Yuying Li, Yaoxin Mao, Shannan Yan, Zixin Song, Zhiyuan Feng, Zhaolu Kang, Zirong Chen, Hang Zhang, Qiang Liu, Liang Wang, Ziyang Liu
- TL;DR: We build a benchmark for academic citation prediction and study how retrieval-augmented generation can recommend faithful and relevant references, using multi-level retrieval and specialized citation generators that reduce hallucinated citations from 17.4% to 4.9%.
- Overview: Citation prediction is a challenging academic assistant task that requires both document understanding and trustworthy retrieval. This paper formalizes the problem under a RAG setting, where systems must identify suitable references for a given scientific context while avoiding fabricated or weakly grounded citations.
- Highlights: The study provides both a benchmark and strong baselines for evaluating faithfulness in scholarly assistance, making it relevant to academic search, scientific writing copilots, and evidence-grounded long-form generation.

Negative Feedback Really Matters: Signed Dual-Channel Graph Contrastive Learning Framework for Recommendation
Leqi Zheng, Chaokun Wang, Zixin Song, Cheng Wu, Shannan Yan, Jiajun Zhang, Ziyang Liu
- TL;DR: We propose SDCGCL, a model-agnostic signed dual-channel graph contrastive learning framework that effectively leverages negative feedback for recommendation, consistently outperforming 22 SOTA baselines.
- Overview: Most recommendation models primarily exploit positive user-item interactions, while negative signals are either discarded or used only superficially. This paper argues that negative feedback carries essential structural information and designs a signed graph contrastive framework to model it explicitly.
- Highlights: The result is a general recommendation approach that improves representation quality and ranking effectiveness, while also providing a cleaner perspective on how contrastive learning should be adapted when user preferences include both attraction and rejection signals.

LAGCL4Rec: When LLMs Activate Interactions Potential in Graph Contrastive Learning for Recommendation
Leqi Zheng, Chaokun Wang, Canzhi Chen, Jiajun Zhang, Cheng Wu, Zixin Song, Shannan Yan, Ziyang Liu, Hongwei Li
- TL;DR: We propose LAGCL4Rec, a progressive activation pipeline that leverages LLMs to activate interaction potential in graph contrastive learning for recommendation at data, rank, and rerank levels.
- Overview: This work investigates how LLMs can serve as more than text generators in recommender systems. Instead, they are used as high-level reasoning modules that enrich graph-based recommendation pipelines through data activation, candidate refinement, and reranking.
- Highlights: The paper connects LLM reasoning with graph representation learning in a practical recommendation setting, showing how language priors can complement sparse interaction graphs and improve downstream recommendation quality.
🌐 Multimodal, Vision & Agents

VLM4VLA: Revisiting Vision-Language-Models in Vision-Language-Action Models
Jianke Zhang, Xiaoyu Chen, Qiuyue Wang, Mingsheng Li, Yanjiang Guo, Yucheng Hu, Jiajun Zhang, Shuai Bai, Junyang Lin, Jianyu Chen
- TL;DR: We revisit the role of vision-language models in vision-language-action systems and study how VLM components affect embodied policy learning.
- Overview: This work connects multimodal representation learning with downstream action-oriented systems, asking how much modern VLMs can contribute when integrated into VLA pipelines.
- Highlights: The paper is relevant to embodied intelligence, multimodal reasoning, and the transfer of VLM capabilities into agentic decision-making settings.

Learning Cross-View Object Correspondence via Cycle-Consistent Mask Prediction
Shannan Yan, Leqi Zheng, Keyu Lv, Jingchen Ni, Hongyang Wei, Jiajun Zhang, Guangting Wang, Jing Lyu, Chun Yuan, Fengyun Rao
- TL;DR: We study cross-view object correspondence through cycle-consistent mask prediction to improve robust geometric and semantic matching across views.
- Overview: The paper targets a core vision problem: aligning objects seen from different viewpoints with stronger consistency constraints.
- Highlights: The method combines structural reasoning and mask-level correspondence learning to improve cross-view matching quality.

SAMAS: A Spectrum-Guided Multi-Agent System for Achieving Style Fidelity in Literary Translation
Jingzhuo Wu, Jiajun Zhang, Keyan Jin, Dehua Ma, Junbo Wang
- TL;DR: We propose SAMAS, a multi-agent system for literary translation that explicitly balances semantic faithfulness and stylistic fidelity.
- Overview: Literary translation requires more than literal correctness; it also demands style preservation. SAMAS uses a spectrum-guided multi-agent formulation to coordinate translation quality along these dimensions.
- Highlights: The work brings multi-agent methodology into translation and style-sensitive generation, extending LLM systems beyond coding and retrieval tasks.

AdaMem: Adaptive User-Centric Memory for Long-Horizon Dialogue Agents
Shannan Yan, Jingchen Ni, Leqi Zheng, Jiajun Zhang, Peng Wu, Deying Yin, Jing Lyu, Chun Yuan, Fengyun Rao
- TL;DR: We study adaptive user-centric memory for dialogue agents to support longer-horizon personalization and contextual coherence.
- Overview: Long-horizon dialogue systems require memory mechanisms that can track user preferences and conversational state over time without drifting or overfitting.
- Highlights: This work explores memory design for agentic dialogue systems, a direction that complements my broader interest in practical and reliable LLM agents.
📰 Information Integrity

Evolving to the Future: Unseen Event Adaptive Fake News Detection on Social Media
Jiajun Zhang, Zhixun Li, Qiang Liu, Shu Wu, Liang Wang
- TL;DR: We introduce FADE, a future-adaptive fake news detection framework that leverages adaptive augmentation and graph contrastive learning to generalize to unseen events on social media.
- Overview: Fake news detectors often overfit to event-specific patterns and degrade sharply when facing emerging topics. FADE tackles this challenge by explicitly modeling event shift and improving generalization to previously unseen news events.
- Highlights: The paper combines adaptation and graph learning to strengthen robustness under temporal and topical drift, making it a practical step toward more deployable misinformation detection systems.
📊 Technical Reports & Surveys

Qwen3-Coder-Next Technical Report
Qwen Team, with Jiajun Zhang (Speakn0w) as a major contributor
- TL;DR: We present Qwen3-Coder-Next, an open-weight coding-agent model trained with large-scale verifiable coding tasks, executable environments, mid-training, and reinforcement learning.
- Overview: This report describes the data, training pipeline, and agent-oriented evaluation behind Qwen3-Coder-Next, focusing on strong real-world coding performance and scalable open release.
- Highlights: As a major contributor under the handle Speakn0w, I contributed to the report release, open-source repository, and model ecosystem around this launch.

From Code Foundation Models to Agents and Applications: A Comprehensive Survey and Practical Guide to Code Intelligence
Jian Yang, …, Jiajun Zhang, …, and 70+ authors
- TL;DR: A comprehensive synthesis and practical guide about code LLMs, systematically examining the complete model life cycle from data curation through advanced prompting, code pre-training, SFT, RL, and autonomous coding agents.
- Overview: This survey provides a broad map of the rapidly evolving code intelligence landscape, connecting foundation models, coding assistants, evaluation, training recipes, and agentic systems in a single framework.
- Highlights: It is designed not only as a literature review, but also as a practical guide for researchers and practitioners who want to understand how modern code LLM systems are built, evaluated, and deployed.

Qwen2.5-Coder Technical Report
Binyuan Hui*, Jian Yang*, Zeyu Cui, Jiaxi Yang, Dayiheng Liu, Lei Zhang, Tianyu Liu, Jiajun Zhang, Bowen Yu, Keming Lu, Kai Dang, Yang Fan, Yichang Zhang, An Yang, Rui Men, Fei Huang, Bo Zheng, Yibo Miao, Shanghaoran Quan, Yunlong Feng, Xingzhang Ren, Xuancheng Ren, Jingren Zhou, Junyang Lin
- TL;DR: We introduce the Qwen2.5-Coder series (0.5B to 32B), built upon Qwen2.5 with 5.5T tokens of code data, achieving SOTA performance across 10+ code benchmarks including generation, completion, reasoning, and repair.
- Overview: This technical report describes the design and training of the Qwen2.5-Coder family, covering model scaling, code-centric data construction, training strategy, and broad benchmark evaluation across major coding tasks.
- Highlights: The report demonstrates how large-scale code pretraining plus strong evaluation discipline can produce open coding models that are competitive across generation, completion, reasoning, and repair.
⚙️ Engineering Experience
💼 Internships

Research Intern @ Qwen Team, Alibaba Group / DAMO Academy
2024.08 – Present | China
- Working on code LLM pre-training, post-training, data synthesis, evaluation, and engineering infrastructure for the Qwen-Coder series.
- Contributed across the full training pipeline of Qwen2.5, Qwen3, Qwen3.5, and Qwen3-Coder-Next code models, covering data construction, benchmark design, model evaluation, and rollout workflows.
- Built and supported multiple internal systems, including code execution sandboxes, large-scale data generation pipelines, model serving scripts, SFT data visualization tools, and agent-based data construction workflows.
- Participated deeply in projects on code completion, frontend code generation, chart code generation, software engineering data synthesis, and agentic coding model training.
📄 Technical Reports

Qwen2.5-Coder Technical Report
Qwen Team, with Jiajun Zhang as a core contributor
- Summary: This report introduces the Qwen2.5-Coder family and details its model scaling strategy, code-centric pretraining data, benchmark evaluation, and performance across generation, completion, reasoning, and repair tasks.
📖 BibTeX
@article{hui2024qwen25coder,
title = {Qwen2.5-Coder Technical Report},
author = {Binyuan Hui and Jian Yang and Zeyu Cui and Jiaxi Yang and Dayiheng Liu and Lei Zhang and Tianyu Liu and Jiajun Zhang and Bowen Yu and Keming Lu and Kai Dang and Yang Fan and Yichang Zhang and An Yang and Rui Men and Fei Huang and Bo Zheng and Yibo Miao and Shanghaoran Quan and Yunlong Feng and Xingzhang Ren and Xuancheng Ren and Jingren Zhou and Junyang Lin},
journal = {arXiv preprint arXiv:2409.12186},
year = {2024}
}

Qwen3-Coder-Next Technical Report
Qwen Team, with Jiajun Zhang (Speakn0w) as a major contributor
- Summary: This report presents Qwen3-Coder-Next, an open-weight coding-agent model with 80B total parameters and 3B active parameters during inference, trained with large-scale verifiable coding tasks, executable environments, mid-training, and reinforcement learning.
📖 BibTeX
@article{cao2026qwen3codernext,
title = {Qwen3-Coder-Next Technical Report},
author = {Ruisheng Cao and Mouxiang Chen and Jiawei Chen and Zeyu Cui and Yunlong Feng and Binyuan Hui and Yuheng Jing and Kaixin Li and Mingze Li and Junyang Lin and Zeyao Ma and Kashun Shum and Xuwu Wang and Jinxi Wei and Jiaxi Yang and Jiajun Zhang and Lei Zhang and Zongmeng Zhang and Wenting Zhao and Fan Zhou},
journal = {arXiv preprint arXiv:2603.00729},
year = {2026}
}
🌐 Open-Source Contributions
- Qwen3-Coder: Major contributor under the handle Speakn0w to the official Qwen3-Coder repository, especially in benchmark evaluation code, training-related workflows, release support, and engineering tooling.
- Qwen: Contributed to the broader Qwen open-source ecosystem through code-model evaluation, infrastructure support, and model-related tooling.
- FluxLLM: Core contributor to an efficient asynchronous and parallel API-calling library for large-scale model inference and request scheduling.
📖 Education
- 2024.09 - 2027.06 (Expected), M.S. in Control Science and Engineering, University of Science and Technology of China, Hefei, China.
- Advisor: Prof. Liang Wang
- Training institute: State Key Laboratory of Multimodal Artificial Intelligence Systems, CASIA
- 2020.09 - 2024.06, B.Eng. in Software Engineering, Beijing Institute of Technology, Beijing, China.
- GPA Rank: Top 10%
🎖 Honors and Awards
- 2023.11 Second Prize, The 5th IKCEST “Belt and Road” International Big Data Competition and the 9th Baidu & Xi’an Jiaotong University Big Data Competition.
- 2022.05 First Prize, The 3rd Beijing College Student Physics Academic Competition.
- 2021.06 Second Prize, China Undergraduate Physics Tournament (North China Region).
- 2021.05 Second Prize, The 2nd Beijing College Student Physics Academic Competition.
- 2021 - 2024 Academic Scholarship, Beijing Institute of Technology (First Class ×1, Second Class ×3).
📚 Academic Services
- Reviewer / sub-reviewer information will be updated soon.
💬 Invited Talks
- Invited talks will be updated soon.