
Working in the social media industry, Tianyi Huang witnesses firsthand a problem that has quietly grown into one of society's most urgent digital threats: misinformation.
"Every day I see content that's misleading, emotionally charged, or outright false being liked, shared, and amplified," Huang says. "It spreads like a virus."
Rather than becoming desensitized to it, Huang grew deeply concerned. As an engineer working on the front lines of modern social media, he found himself asking: could artificial intelligence be used not just to recommend content, but to safeguard truth?
This question would soon become the cornerstone of his academic life. Outside of his engineering role, Huang began researching how AI could detect and mitigate misinformation, a journey that would lead him to co-author six research papers in 2024 and 2025 tackling fake news detection, sentiment analysis, knowledge distillation, and even medical prediction using cutting-edge neural architectures.
From Realization to Research
Tianyi Huang holds a Master's degree in Electrical Engineering and Computer Sciences from the University of California, Berkeley. During his time working with some of the world's most influential social media platforms, he began noticing how pervasive and damaging misinformation had become, especially as it evolved beyond text into a complex blend of images, video, and manipulated context.
"My day job didn't involve misinformation detection directly," he explains. "But the exposure to it made it impossible to ignore. That's why I started investigating it in my own time."
What began as curiosity turned into a focused academic pursuit. Drawing on his background in AI system development, Huang began collaborating with researchers from Carnegie Mellon University and Stanford, dedicating time outside his professional duties to exploring the technical and ethical challenges of misinformation detection.
A New Frontier in Fake News Detection
In one of his most notable papers, Unmasking Digital Falsehoods, Huang and his co-authors conducted a comparative analysis of large language model (LLM)-based approaches to misinformation detection. The study examined models such as GPT-4 and LLaMA 2, testing their capabilities across domains including public health, politics, and finance.
The paper's findings highlighted the potential of hybrid frameworks—systems that combine the linguistic prowess of LLMs with structured fact-checking protocols and agentic reasoning workflows. These hybrid models outperformed zero-shot and prompt-only models, especially in challenging contexts like satire, multimodal claims, or rapidly evolving conspiracy theories.
"Misinformation is no longer just text—it's images, video, sarcasm, emotion, and context," Huang says. "We need models that can reason across modalities and adapt to constantly shifting tactics."
In another study, A Hybrid Transformer Model for Fake News Detection, Huang introduced a Transformer-based classification model enhanced with Bidirectional Gated Recurrent Units (BiGRU) and Bayesian optimization. The model achieved 99.73% accuracy on benchmark fake news datasets and, notably, converged within ten training epochs, demonstrating both efficiency and high precision.
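As a rough illustration of how those pieces fit together, the sketch below wires a Transformer encoder into a BiGRU classification head in PyTorch. The layer sizes are illustrative defaults, not the tuned values the paper's Bayesian optimization would select.

```python
# Rough PyTorch sketch of a Transformer encoder followed by a BiGRU head,
# in the spirit of the hybrid model described above. Layer sizes are
# illustrative, not the paper's tuned hyperparameters.
import torch
import torch.nn as nn

class TransformerBiGRU(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 128, n_heads: int = 4,
                 n_layers: int = 2, gru_hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Bidirectional GRU reads the contextualized token sequence both ways.
        self.bigru = nn.GRU(d_model, gru_hidden, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * gru_hidden, n_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)                 # (batch, seq, d_model)
        x = self.encoder(x)                       # contextualized representations
        _, h = self.bigru(x)                      # h: (2, batch, gru_hidden)
        pooled = torch.cat([h[0], h[1]], dim=-1)  # join both directions
        return self.classifier(pooled)            # logits: real vs. fake

model = TransformerBiGRU(vocab_size=30_000)
logits = model(torch.randint(0, 30_000, (8, 128)))  # a batch of 8 posts
```

Bayesian optimization would then search over knobs like `d_model`, the number of layers, and the learning rate, which is how a model like this can converge in so few epochs.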
Fighting Fire with Algorithms
Huang's research is driven by a dual concern: technological capacity and social responsibility. As AI grows more capable of generating convincing falsehoods, it also holds the power to detect them—if designed carefully.
A recurring theme in his papers is explainability. Huang doesn't believe in black-box models that output predictions without offering reasons. "Detection is not enough," he says. "We need to understand why something is flagged as false, especially in sensitive domains like politics or public health."
That's why several of his projects integrate attention heatmaps, SHAP values, or semantic traceability tools—technologies that allow researchers and users alike to see how AI reaches its decisions.
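As a toy example of the SHAP approach, the snippet below explains a simple TF-IDF plus logistic-regression classifier; the model and its two-post "dataset" are stand-ins for illustration, not anything from the papers.

```python
# Hedged sketch: using SHAP to surface which words pushed a classifier
# toward a "fake" prediction. The TF-IDF + logistic-regression model is
# a stand-in for illustration only.
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["miracle cure doctors hate", "city council approves new budget"]
labels = [1, 0]  # 1 = fake, 0 = real (tiny toy set)

vec = TfidfVectorizer()
X = vec.fit_transform(texts).toarray()
clf = LogisticRegression().fit(X, labels)

# LinearExplainer suits linear models; each SHAP value says how much a
# term pushed the prediction toward "fake" or away from it.
explainer = shap.LinearExplainer(clf, X)
shap_values = explainer.shap_values(X)

for word, contribution in zip(vec.get_feature_names_out(), shap_values[0]):
    print(f"{word:>10s}: {contribution:+.3f}")
```

The same per-feature attributions, rendered as heatmaps over tokens, are what let a reviewer see at a glance why a post was flagged.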
In another review paper, Huang explored reinforcement learning-driven knowledge distillation (RL-KD) as a strategy to compress large teacher models into lightweight, deployable student models—crucial for misinformation detection in resource-limited settings like mobile apps or community monitoring tools. His work outlines a taxonomy of RL-KD methods, discusses real-world applications like game AI and dialogue systems, and identifies open challenges such as temporal dependencies and reward design.
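The gist of RL-KD can be sketched in a few lines: treat a distillation knob (here, the temperature) as an action, and let a simple bandit learn which setting earns the best reward, with reward taken as the student's held-out accuracy. This toy sketch illustrates the general idea only, not any specific method surveyed in the review.

```python
# Toy sketch of RL-guided knowledge distillation: an epsilon-greedy bandit
# picks the distillation temperature each step, rewarded by the student's
# held-out accuracy. Illustrates the general RL-KD idea only.
import random
import torch.nn.functional as F

TEMPS = [1.0, 2.0, 4.0]              # candidate temperatures (the "actions")
value = {t: 0.0 for t in TEMPS}      # running reward estimate per action
count = {t: 0 for t in TEMPS}

def pick_temperature(eps: float = 0.1) -> float:
    if random.random() < eps:                      # explore
        return random.choice(TEMPS)
    return max(TEMPS, key=lambda t: value[t])      # exploit best so far

def kd_loss(student_logits, teacher_logits, labels, T, alpha=0.7):
    """Blend soft-target distillation with ordinary cross-entropy."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

def update_bandit(T: float, reward: float):
    """Incremental mean update of the chosen action's value estimate."""
    count[T] += 1
    value[T] += (reward - value[T]) / count[T]
```

After each student update, `update_bandit(T, reward)` would be called with the measured validation accuracy, so the controller gradually settles on the temperature that distills best.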
AI Beyond Information: Into Healthcare
Interestingly, not all of Huang's research is confined to digital information. In a paper titled Optimization of Transformer Heart Disease Prediction Model Based on Particle Swarm Optimization, he applied AI to healthcare—using swarm intelligence to improve heart disease classification accuracy.
The Transformer-based model, optimized through particle swarm strategies, reached 96.5% accuracy on a public medical dataset—outperforming common baselines like Random Forest and XGBoost. The study illustrates his versatility, showing that the core AI skills used to detect fake news can also be used to improve human health outcomes.
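Particle swarm optimization itself is simple enough to sketch: candidate hyperparameter settings "fly" through the search space, pulled toward their own best positions and the swarm's best. The two-dimensional search space and synthetic fitness function below are illustrative assumptions; the paper's actual setup is not reproduced here.

```python
# Minimal particle swarm optimization sketch over two hypothetical
# Transformer hyperparameters (learning rate, dropout). `evaluate` is a
# stand-in that would normally train the model and return accuracy.
import random

def evaluate(lr: float, dropout: float) -> float:
    # Placeholder fitness: a synthetic surface peaking near (0.001, 0.1).
    # In practice this would train the Transformer and return accuracy.
    return -((lr - 0.001) ** 2 * 1e6 + (dropout - 0.1) ** 2)

def pso(n_particles: int = 10, iters: int = 30,
        w: float = 0.7, c1: float = 1.5, c2: float = 1.5):
    # Random initial positions (lr, dropout) and zero velocities.
    pos = [[random.uniform(1e-4, 1e-2), random.uniform(0.0, 0.5)]
           for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal bests
    pbest_score = [evaluate(*p) for p in pos]
    g = max(range(n_particles), key=lambda i: pbest_score[i])
    gbest, gbest_score = pbest[g][:], pbest_score[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = random.random(), random.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            score = evaluate(*pos[i])
            if score > pbest_score[i]:
                pbest[i], pbest_score[i] = pos[i][:], score
                if score > gbest_score:
                    gbest, gbest_score = pos[i][:], score
    return gbest

best_lr, best_dropout = pso()
print(f"best lr={best_lr:.5f}, dropout={best_dropout:.3f}")
```

Because the swarm needs no gradients from the fitness function, the same loop works whether the objective is fake news accuracy or heart disease classification, which is exactly the versatility the study highlights.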
A Personal Ethic in the Age of Algorithms
If there's a unifying thread in Huang's research, it's a commitment to transparency, explainability, and social impact. Rather than chasing academic prestige or narrow technical benchmarks, he prioritizes work that solves real-world problems at the intersection of truth, technology, and trust.
"I want to build systems that serve the public good," he says. "That means AI tools that can identify false claims, support critical thinking, and adapt ethically to new threats."
As the information landscape becomes increasingly chaotic, Huang believes AI has a dual responsibility—not just to filter the noise, but to amplify the signal. His research points toward a future where intelligent systems don't just reflect our world—they help us make sense of it.