AI models “just want to learn”: the quote, attributed to OpenAI co-founder Ilya Sutskever, means, essentially, that if you throw enough money, computing power, and raw data at these networks, they will become capable of ever more impressive inferences.
https://archive.is/2023.11.13-181948/https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded
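The scaling hypothesis the Sutskever quote gestures at has been made quantitative in the scaling-laws literature. As an illustration only (the functional form and constants below are the Chinchilla power-law fit from Hoffmann et al., 2022, not anything reported in the FT article), here is a minimal sketch of how loss is modelled as falling smoothly with model size and data:

```python
# Minimal sketch of the "scaling hypothesis": pretraining loss modelled as a
# power law in parameter count and training tokens. The constants are the
# published Chinchilla fit (Hoffmann et al., 2022), used here purely for
# illustration; they are an assumption, not drawn from the article above.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for a model with n_params parameters
    trained on n_tokens tokens, per the Chinchilla power-law fit."""
    E = 1.69                  # irreducible loss of the data distribution
    A, alpha = 406.4, 0.34    # parameter-count term
    B, beta = 410.7, 0.28     # data-size term
    return E + A / n_params**alpha + B / n_tokens**beta

# More parameters and more tokens push the predicted loss smoothly downward:
for n, d in [(1e9, 2e10), (7e10, 1.4e12), (5e11, 1e13)]:
    print(f"{n:.0e} params, {d:.0e} tokens -> loss ~ {chinchilla_loss(n, d):.3f}")
```

Under a fit like this, loss falls only polynomially in compute and data, which is one way to read why “more money, computing power, and raw data” remains the default recipe.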
OpenAI recently put out a call for organisations to contribute large-scale data sets that “are not already easily accessible online to the public today”, particularly long-form writing or conversations in any format.
https://archive.is/2023.11.13-181948/https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded
Ultimately, Altman said “the biggest missing piece” in the race to develop AGI is what is required for such systems to make fundamental leaps of understanding.
“There was a long period of time where the right thing for [Isaac] Newton to do was to read more math textbooks, and talk to professors and practice problems . . . that’s what our current models do,” said Altman, borrowing an example a colleague had used previously. But he added that Newton was never going to invent calculus by simply reading about geometry or algebra. “And neither are our models,” Altman said. “And so the question is, what is the missing idea to go generate net new . . . knowledge for humanity? I think that’s the biggest thing to go work on.”
Study shows AI image-generators being trained on explicit photos of children