What we learned from the failed Galactica experiment
Amid its wave of layoffs and tumbling stock price, Meta (Facebook) faced another crisis after its latest artificial intelligence release: Galactica.
Galactica is “a large language model that can store, combine and reason about scientific knowledge,” according to a paper published by Meta AI. It is a transformer model that has been trained on a carefully curated dataset of 48 million papers, textbooks and lecture notes, millions of compounds and proteins, scientific websites, encyclopedias, and more.
Galactica was meant to help scientists navigate the vast volume of published scientific information. Its developers presented it as being able to find citations, summarize academic literature, solve math problems, and perform other tasks that assist scientists in researching and writing papers.
In collaboration with Papers with Code, Meta AI open-sourced Galactica and launched a website that allowed visitors to interact with the model.
However, three days after Galactica’s release, Meta had to shut down the online demo following a deluge of criticism from scientists and tech media about the model’s incorrect and biased output.
While Galactica was clearly not a success, I believe its short history offers some useful lessons about LLMs and the future of AI research.
Read the full analysis on TechTalks.
For more on AI research: