Physics • Machine Learning • Curiosity
Welcome to Tensors & Quarks
Exploring the cosmos of physics and the depths of machine learning with hands-on experiments, notes, and essays.
Latest Posts
From Spins to Sentience: The Physics Roots of Artificial Intelligence
When we think of artificial intelligence today, we imagine vast transformer models running on powerful GPUs, capable of generating text, art, and music with uncanny fluency. But the story of AI didn’t begin with silicon or software—it began in physics, nearly a century ago. Long before neural networks and backpropagation, physicists were studying how atoms interact, flip, and align in magnetic materials. Their equations of collective behavior and energy minimization, formalized in the Ising model, unknowingly laid the foundation for how modern AI learns and thinks.
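To make the energy-minimization idea concrete, here is a minimal sketch (not from the post) of the Ising picture: spins take values ±1, a ferromagnetic coupling J rewards aligned neighbors via the energy E = −J Σ sᵢsⱼ over neighboring pairs, and greedily flipping spins that lower E is the same descent-on-an-energy-landscape dynamic that Hopfield networks later reused for memory retrieval. The 1-D ring and the greedy zero-temperature update rule are illustrative simplifications.

```python
import random

def ising_energy(spins, J=1.0):
    """Energy of a 1-D ring of +/-1 spins: E = -J * sum of neighboring products."""
    n = len(spins)
    return -J * sum(spins[i] * spins[(i + 1) % n] for i in range(n))

def relax(spins, J=1.0, steps=1000, seed=0):
    """Zero-temperature dynamics: flip a random spin, keep the flip only if
    it does not raise the energy. Energy can never increase."""
    rng = random.Random(seed)
    spins = list(spins)
    for _ in range(steps):
        i = rng.randrange(len(spins))
        before = ising_energy(spins, J)
        spins[i] *= -1
        if ising_energy(spins, J) > before:
            spins[i] *= -1  # reject the flip
    return spins
```

Starting from an alternating (high-energy) configuration, `relax` drives the ring toward aligned, low-energy states; replacing the hard accept/reject rule with a temperature-dependent probability turns this into the sampling scheme behind Boltzmann machines.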
Read more →
Can AI Be Surprised by Its Own Move?
Overview
The paper addresses the challenge of generating truly creative chess puzzles using generative AI and reinforcement learning (RL). The authors note that while generative models (e.g., for text and images) are advancing rapidly, producing creative, unexpected, and aesthetic outputs remains hard. They choose chess puzzles as their test domain.
Read more →
The Milky Way’s Greatest Hits — Now in Low Frequency!
Introduction
Ever wondered what our Galaxy would sound like if you tuned a cosmic radio to the lowest frequencies? The Galactic and Extragalactic All-Sky Murchison Widefield Array Survey Extended (GLEAM-X) does exactly that — and this third release zooms in on the Galactic Plane. Using the Murchison Widefield Array (MWA) in Western Australia, astronomers combined data from two observation phases to create a map of the southern Milky Way between 72 MHz and 231 MHz.
That’s like listening to the deep bass of the Universe. The result? A panorama rich in supernova remnants, H II regions, pulsars, and faint diffuse emission. GLEAM-X III isn’t just another survey — it’s an upgrade to our Galactic sense of hearing, turning the invisible hum of the cosmos into a full-scale radio symphony.
Read more →
When AI Stopped Talking and Started Thinking
Reasoning — the holy grail of AI — has long been the gap between memorization and actual understanding. Most modern large language models (LLMs) simulate reasoning through Chain-of-Thought (CoT) prompting, a clever trick where the model narrates its logic step-by-step. Unfortunately, this is more like a magician describing the illusion rather than truly performing it. CoT depends on verbose linguistic scaffolding, brittle decomposition, and mountains of training data. If you drop one logical domino, the entire thought process collapses.
Read more →
How AlphaEvolve Found What Mathematicians Missed for 56 Years
Abstract / Overview
AlphaEvolve, developed by Google DeepMind, is an evolutionary coding agent designed to autonomously discover and optimize algorithms. Unlike single-model LLM setups, it orchestrates a pipeline of large language models that iteratively modify, test, and refine code. Each iteration is evaluated automatically, forming a feedback loop that resembles biological evolution: mutation, evaluation, and selection. This self-improving process has yielded breakthroughs across mathematics, engineering, and AI infrastructure. Notably, AlphaEvolve discovered the first improvement in matrix-multiplication algorithms in 56 years, reducing the number of scalar multiplications for 4×4 complex matrices from 49 to 48.
Read more →