Energy-based world models (EBWMs) let AI systems reflect on their own predictions, aiming at human-like cognitive abilities that autoregressive models lack.
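A minimal sketch of the core idea, assuming only the general definition of an energy-based model (the `energy` function and the candidate vectors here are toy stand-ins for a learned network, not anything from the article):

```python
import numpy as np

def energy(context, candidate):
    # Toy energy: low when the candidate prediction is compatible with
    # the context. A real EBWM would use a learned network here; this
    # Euclidean distance is just an illustrative stand-in.
    return float(np.sum((candidate - context) ** 2))

context = np.array([1.0, 2.0])
candidates = [
    np.array([0.0, 0.0]),
    np.array([1.1, 1.9]),
    np.array([5.0, 5.0]),
]

# Instead of emitting one prediction autoregressively and moving on, the
# model can score several candidates and "reflect" by keeping the one
# with minimal energy.
scores = [energy(context, c) for c in candidates]
best = candidates[int(np.argmin(scores))]
```

This contrast (scoring many candidates vs. committing to one sampled continuation) is what lets an energy-based model revisit its own predictions.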
Yes. I think by now I am completely convinced that "AGI" or "advanced AI" will be a concatenation and layering of many different types of architectures, with the controller itself corresponding to a bunch of different models lol. The stacking of complexity is, in my view, what will give the value (not arbitrary stacking, but stacking done rationally and logically, with a touch of chaos or randomness where needed to avoid deterministic traps). The goals and agents will not be a uniform wrapper over the entire pipeline either, but will be interspersed throughout the process.
I think today we have all the ingredients for developing AGI, and we just need to mix them correctly.
Do you agree with these statements?
Thanks for the article.
There is always a discussion of whether we need a uniform architecture or different components tacked together. I'm not sure which path will be the way to go. But what is for sure is that the brain has different modes of operation, and AI systems that treat all inputs the same will always have limitations.
Sounds similar to "The Society of Mind" by Marvin Minsky.
Good book. I've read it. But I'm not sure if the explicit modularity that Minsky proposes solves the problem. The brain seems to be more fluid, being uniform and modular at the same time. I'm not an expert though, and this is above my pay grade.
I agree. I just remember thinking that I liked the approach. I guess we should also consider Wolfram, and Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, for older (like me!) approaches and brainstorming. These new AI systems have outrun my ability to keep up, even after years of maintaining and using supercomputers. That was my former life as a programmer in a computational genetics lab.
Those books are fantastic!
https://www.amazon.com/Society-Mind-Marvin-Minsky/dp/0671657135
LeCun has been advocating something similar in spirit.
Yes. The JEPA models use energy-based learning methods too.
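The connection to JEPA can be sketched in a few lines, assuming only the published high-level idea (predict in embedding space, not pixel space); the linear `embed` and `W_pred` below are hypothetical toy stand-ins for learned encoders and predictors:

```python
import numpy as np

def embed(x, W):
    # Toy linear encoder: a stand-in for a learned embedding network.
    return W @ x

def jepa_energy(x, y, W_enc, W_pred):
    # JEPA-style energy: distance between the *predicted* embedding of x
    # and the embedding of y, rather than reconstructing y directly.
    s_x = embed(x, W_enc)
    s_y = embed(y, W_enc)
    pred = W_pred @ s_x
    return float(np.sum((pred - s_y) ** 2))

# Projects 3-D inputs to a 2-D embedding space; identity predictor.
W_enc = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
W_pred = np.eye(2)

x = np.array([0.5, -0.3, 2.0])
y = np.array([1.0, 1.0, 1.0])

e_same = jepa_energy(x, x, W_enc, W_pred)  # matching pair -> zero energy
e_diff = jepa_energy(x, y, W_enc, W_pred)  # mismatched pair -> positive energy
```

Low energy for compatible (x, y) pairs and high energy for incompatible ones is exactly the energy-based framing LeCun describes for JEPA.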