The superpower you need as AI takes a prominent role in scientific discovery
AI can find solutions, but asking the right questions and staying curious is a human superpower (for the time being, at least).
I recently read The Idea of the Brain by Matthew Cobb for the second time (highly recommended). It is a history of how our perception of the brain and mind has changed throughout the centuries.
One of the most interesting themes I noticed on my second read is how many discoveries were made by accident. For example, in 1873, the Italian scientist Camillo Golgi accidentally discovered a staining method that could visualize individual neurons. The method, later known as the “Golgi stain” or “black reaction,” became one of the foundations of modern neuroscience.

Another interesting discovery was made in 1959 by neurophysiologists David Hubel and Torsten Wiesel while they were recording neuronal activity in the V1 area of a cat’s brain with microelectrodes. They struggled to activate a neuron until the edge of a slide they were manipulating swept across the screen and the neuron fired vigorously. That chance incident piqued their curiosity and led them to discover neurons tuned to edges of specific orientations and positions, and, later, orientation columns and hierarchical processing. The work transformed sensory neuroscience and inspired models of vision from simple/complex cells to modern computer vision.
After finishing The Idea of the Brain, I went down the rabbit hole of finding other scientific discoveries that were made by accident (GPT-5 with Extended Thinking helped a lot). Some I already knew; some were new to me. Here are a few of the most interesting:
In 1928, Alexander Fleming, a bacteriologist at St. Mary’s Hospital in London, returned from holiday and found a mold contaminating a Staphylococcus plate and, by luck, noticed a clear “zone of inhibition” around it. The mold (Penicillium notatum) was secreting a potent antibacterial substance. Purified and mass-produced during WWII, penicillin became the prototype antibiotic.
In 1945, Percy Spencer, an engineer at Raytheon, noticed a chocolate bar in his pocket had melted while he was working on magnetrons for radar. His curiosity led to the discovery of microwave heating and the invention of microwave ovens.

Radioactivity, X-rays, vaccines, and other fundamental elements of modern science and medicine owe their discovery to chance and the keen eye of curious people who investigated an interesting observation that was not necessarily related to what they were doing at that moment.
Now, you might ask why I’m bringing this up in a tech blog that mostly covers artificial intelligence. In recent months, we’ve seen a lot of papers and articles about AI automating scientific discovery. There are studies showing large language models (LLMs) finding new solutions to old math puzzles, discovering new machine learning algorithms, or optimizing hardware.
This raises the question: what is scientific discovery? According to the Stanford Encyclopedia of Philosophy, “Scientific discovery is the process or product of successful scientific inquiry. Objects of discovery can be things, events, processes, causes, and properties as well as theories and hypotheses and their features (their explanatory power, for example).”
Interestingly, there is debate on what “discovery” actually means. But in many cases, it refers to the “eureka moment” of having a new insight.
This brings us back to AI and scientific discovery. As our tools become more advanced, we become more capable of finding answers to questions. Supercomputers and machine learning algorithms can help us find the best solution to a well-defined problem.
However, what happens when we don’t know what the question is? This is where I believe humans will have a huge advantage (at least for the time being). We’re already seeing science and engineering evolve into a division of labor between humans and AI, where the human asks the question and the machine helps find the answer.
You can see this in projects such as AlphaFold, AlphaEvolve, and DrEureka. In each case, a scientist, researcher, or engineer knows, or has an idea of, what the end state should look like (a reward function, a target objective, etc.). The machine learning model searches the vast space of possible solutions and finds the parameters that best meet the goal. This combination of human and AI makes it possible to run experiments at scales that were previously impossible. And with current machine learning models, we’re not brute-forcing our way through infinite solutions but using various techniques to narrow experiments down to the most promising pathways, making it possible to explore even larger solution spaces (we saw early glimmers of this with AlphaGo, which mastered a game with more possible configurations than there are atoms in the universe).
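The division of labor described above can be sketched in miniature. The snippet below is not code from AlphaFold, AlphaEvolve, or DrEureka; it is a toy illustration, under my own assumptions, of the pattern those projects share: a human specifies the objective (here, a made-up reward function with a known optimum), and an automated search explores the parameter space to satisfy it.

```python
import random

def reward(params):
    # Human-specified objective: a hypothetical target whose
    # optimum sits at (3, -1). The "scientist" writes this part.
    x, y = params
    return -((x - 3) ** 2) - ((y + 1) ** 2)

def search(steps=5000, seed=0):
    # Machine-driven part: a simple random-mutation hill climber
    # that explores the parameter space, keeping any candidate
    # that improves the human-defined reward.
    rng = random.Random(seed)
    best = (0.0, 0.0)
    best_r = reward(best)
    for _ in range(steps):
        cand = (best[0] + rng.gauss(0, 0.1),
                best[1] + rng.gauss(0, 0.1))
        r = reward(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

if __name__ == "__main__":
    params, score = search()
    print(params, score)  # converges near (3, -1)
```

Real systems replace the naive hill climber with far more sophisticated search (learned models, evolutionary strategies, reinforcement learning), but the shape is the same: the question, encoded as `reward`, comes from the human.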
But we still need to come up with the questions to ask.