Physics • Machine Learning • Curiosity
Welcome to Tensors & Quarks
Exploring the cosmos of physics and the depths of machine learning with hands-on experiments, notes, and essays.
Latest Posts
The Invisible Weather Around Galaxies: A Story Told Through Mg II
What is this paper about?
The paper investigates how Mg II absorption systems, identified in the spectra of distant quasars, have evolved over the past 13 billion years, from the earliest epochs of galaxy formation (z ≈ 7) to the present Universe (z ≈ 0). Singly ionized magnesium (Mg II) produces absorption lines that are highly reliable tracers of cool, metal-enriched gas in the circumgalactic medium (CGM) and intergalactic medium (IGM). When such gas lies along the line of sight to a quasar, it imprints identifiable absorption signatures on the quasar's spectrum, allowing astronomers to trace metal content, gas flows, and chemical evolution indirectly across cosmic time.
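To make the doublet idea concrete, here is a minimal sketch of how an absorber's redshift follows from the observed Mg II doublet. The rest wavelengths are the standard atomic values; the observed wavelengths and the consistency check are illustrative choices of mine, not taken from the paper.

```python
# Rest-frame wavelengths of the Mg II doublet, in Angstroms (standard atomic values).
MGII_2796 = 2796.35
MGII_2803 = 2803.53

def absorber_redshift(obs_2796, obs_2803, tol=1e-3):
    """Infer an absorber redshift from an observed Mg II doublet.

    Each line independently gives z = lambda_obs / lambda_rest - 1;
    a genuine Mg II doublet must yield consistent redshifts for both lines.
    """
    z1 = obs_2796 / MGII_2796 - 1.0
    z2 = obs_2803 / MGII_2803 - 1.0
    if abs(z1 - z2) > tol:
        raise ValueError("lines are inconsistent with a single Mg II absorber")
    return 0.5 * (z1 + z2)

# A doublet observed near 4474 and 4486 Angstroms implies cool gas at z ≈ 0.6.
print(f"z = {absorber_redshift(4474.16, 4485.65):.4f}")
```

Run across thousands of quasar sightlines, this same matching is what lets surveys build up the redshift evolution the post describes.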
Read more →
Teaching LLMs to Reason and Behave: Why Scaling RL Matters
Introduction — The Growing Importance of RL in the Age of LLMs
Large language models have rapidly evolved from autocomplete engines into tools capable of writing code, summarizing research, and engaging in complex reasoning. Yet one phase of their development still resists predictable engineering control: the reinforcement learning (RL) stage. While pre-training benefits from clear scaling laws (more compute reliably delivers better performance), RL has historically been unpredictable: some runs dramatically improve a model's abilities, others stagnate early, and still others collapse despite large computational budgets. This tension forms the basis of the paper, which seeks to transform RL from a guess-and-check process into a mathematically predictable one.
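As a toy illustration of what "predictable" could mean here, the sketch below fits a saturating curve to made-up score-versus-compute measurements and extrapolates it to a larger budget. The sigmoidal form and every number are assumptions for illustration, not the paper's data or its exact functional form.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up eval scores at increasing RL compute budgets (illustrative only).
compute = np.array([1e19, 3e19, 1e20, 3e20, 1e21, 3e21])  # FLOPs
score = np.array([0.22, 0.31, 0.42, 0.51, 0.57, 0.60])

def sigmoid(log_c, ceiling, midpoint, slope):
    """Saturating curve in log-compute: rises, then flattens at `ceiling`."""
    return ceiling / (1.0 + np.exp(-slope * (log_c - midpoint)))

params, _ = curve_fit(sigmoid, np.log10(compute), score, p0=[0.7, 20.0, 2.0])
print(f"fitted performance ceiling: {params[0]:.2f}")

# The operational payoff: predict a 10x larger run before paying for it.
print(f"predicted score at 1e22 FLOPs: {sigmoid(22.0, *params):.2f}")
```

The point of such a fit is operational: small pilot runs tell you whether a larger budget is worth spending before you spend it.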
Read more →
Smaller Models. Bigger Impact. The Coming Shift in Agentic AI
Language models are rapidly becoming the backbone of agentic AI systems, powering everything from task-planning assistants to autonomous workflows. Yet the field has largely adopted the assumption that bigger is always better — that the strongest agents must be built on the largest language models. This paper challenges that belief. It argues that in many real-world applications, small language models (SLMs) are not merely adequate, but preferable, especially for high-frequency, structured agentic workloads. As efficiency, deployment agility, and cost become increasingly decisive, the role of compact models grows ever more compelling, shifting the conversation from raw scale toward purpose-aligned intelligence.
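One way to picture this shift is an SLM-first architecture that defaults to the small model and escalates only when necessary. The sketch below is hypothetical: `call_slm`, `call_llm`, and the task names are stand-ins I chose for illustration, not an API from the paper.

```python
# Hypothetical SLM-first router. call_slm / call_llm are placeholders for
# real model endpoints; the task names are invented for illustration.

STRUCTURED_TASKS = {"extract_fields", "classify_intent", "fill_template", "call_tool"}

def call_slm(task, payload):
    ...  # stand-in for a small-model endpoint

def call_llm(task, payload):
    ...  # stand-in for a large-model endpoint

def route(task, payload, confidence_threshold=0.8):
    """Send high-frequency, structured work to the SLM; escalate the rest."""
    if task in STRUCTURED_TASKS:
        result = call_slm(task, payload)
        # Escalate only if the small model is unsure of its own answer.
        if result is not None and result.get("confidence", 0.0) >= confidence_threshold:
            return result
    # Open-ended reasoning, or a low-confidence SLM answer, goes to the large model.
    return call_llm(task, payload)
```

The economics follow from call frequency: if most of an agent's calls are structured and repetitive, the cheap path dominates total cost.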
Read more →
AI Is Ready for Science. Science Isn’t Ready for AI
Artificial intelligence has rapidly become a central tool in scientific research, from drug discovery to climate modeling to astrophysics. Yet the paper “AI for Scientific Discovery is a Social Problem” argues that the biggest barriers to progress are no longer purely technical. Contrary to the popular assumption that bigger models or better algorithms alone will unlock revolutionary breakthroughs, the authors suggest that the real bottleneck is the social ecosystem around scientific AI.
Read more →
How Two Mathematicians Cracked a 50-Year Question About Large Values
Inside Guth & Maynard’s Breakthrough on Dirichlet Polynomials
Mathematics has a habit of hiding deep mysteries inside objects that seem simple. One such object is the Dirichlet polynomial, a short finite sum that behaves like a miniature version of the Riemann zeta function. For over fifty years, mathematicians tried to understand how often such a polynomial can take very large values, and progress stalled. In 2024, Larry Guth and James Maynard finally broke the barrier. Their work gives dramatically improved estimates for the frequency of large values and does so using a completely new combination of tools — from additive combinatorics to matrix geometry to harmonic analysis. This post is meant to explain their ideas in a way that feels intuitive, visually guided, and mathematically honest.
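To see the object itself, a quick numerical experiment helps (my own illustration, not code from the paper): evaluate a Dirichlet polynomial with unit coefficients, D(t) = Σ_{N ≤ n ≤ 2N} n^{it}, on a grid of t values and count how often |D(t)| comes anywhere near its trivial maximum, the number of terms.

```python
import numpy as np

# Toy Dirichlet polynomial with unit coefficients: D(t) = sum of n^{it}, N <= n <= 2N.
# Sample t on [0, T] and ask how often |D(t)| is "large".
N, T, samples = 100, 10_000, 50_000
n = np.arange(N, 2 * N + 1)
t = np.linspace(0.0, T, samples)

# n^{it} = exp(i * t * log n); rows index t, columns index n.
D = np.exp(1j * np.outer(t, np.log(n))).sum(axis=1)

trivial_max = len(n)  # |D(t)| can never exceed the number of terms
for frac in (0.25, 0.50, 0.75):
    fraction_of_t = np.mean(np.abs(D) >= frac * trivial_max)
    print(f"|D(t)| >= {frac:.2f} x trivial max on {fraction_of_t:.2e} of the t range")
```

Square-root cancellation keeps typical values near the square root of the number of terms, so the thresholds above are exceeded only rarely; Guth and Maynard's theorem pins down how rarely large values like these can occur.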
Read more →