AI models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences.

https://www.scientificamerican.com/article/how-ai-knows-things-no-one-told-it/

In the AI field, the term “learning” is usually reserved for the computationally intensive process in which developers expose the neural network to gigabytes of data and tweak its internal connections. By the time you type a query into ChatGPT, the network should be fixed; unlike humans, it should not continue to learn. So it came as a surprise that LLMs do, in fact, learn from their users' prompts—an ability known as in-context learning. “It's a different sort of learning that wasn't really understood to exist before,” says Ben Goertzel, founder of AI company SingularityNET.
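
To make the distinction concrete, here is a minimal, hypothetical sketch of what in-context learning looks like from the user's side: the model's weights never change, yet a few worked examples placed in the prompt are enough for it to pick up the pattern. The prompt text and the `complete` placeholder are illustrative assumptions, not any particular API.

```python
# Few-shot prompt: the "training data" lives entirely inside the prompt.
# The underlying model is frozen; no gradient updates happen at query time.
few_shot_prompt = """\
Translate English to French.
sea otter -> loutre de mer
peppermint -> menthe poivrée
cheese ->"""


def complete(prompt: str) -> str:
    """Stand-in for a call to a frozen, pretrained language model."""
    raise NotImplementedError("placeholder, not a real model call")


# The same frozen model, handed different in-prompt examples, behaves
# differently on the final query; that adaptation is in-context learning.
# answer = complete(few_shot_prompt)  # a capable model would continue "fromage"
```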

In 2022 a team at Google Research and the Swiss Federal Institute of Technology in Zurich showed that in-context learning follows the same basic computational procedure as standard learning, known as gradient descent. This procedure was not programmed; the system discovered it without help. “It would need to be a learned skill,” says Blaise Agüera y Arcas, a vice president at Google Research. In fact, he thinks LLMs may have other latent abilities that no one has discovered yet. “Every time we test for a new ability that we can quantify, we find it,” he says.
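
For reference, gradient descent itself is just an iterative update rule: nudge the parameters a small step against the gradient of the loss. Below is a minimal sketch on a toy linear-regression task of the kind used in such analyses; the data, learning rate, and step count are illustrative assumptions, not the researchers' actual experiment.

```python
import numpy as np

# Gradient descent on a squared-error loss L(w) = 0.5 * ||X @ w - y||^2.
# The 2022 finding is that a frozen model's processing of in-context examples
# follows this same basic procedure, even though no weights are being trained.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 4))      # 16 in-context examples, 4 features each
true_w = rng.normal(size=4)       # the task to be inferred from the examples
y = X @ true_w

w = np.zeros(4)                   # start with no task-specific knowledge
lr = 0.02
for _ in range(1000):
    grad = X.T @ (X @ w - y)      # gradient of the loss with respect to w
    w -= lr * grad                # update rule: w <- w - lr * dL/dw

print(np.allclose(w, true_w, atol=1e-3))  # True: the procedure recovers the task
```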

https://archive.is/2023.11.13-181948/https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded

OpenAI recently put out a call to organisations for large-scale data sets that “are not already easily accessible online to the public today”, particularly long-form writing or conversations in any format.

https://archive.is/2023.11.13-181948/https://www.ft.com/content/dd9ba2f6-f509-42f0-8e97-4271c7b84ded

Ultimately, Altman said “the biggest missing piece” in the race to develop AGI is what is required for such systems to make fundamental leaps of understanding.

“There was a long period of time where the right thing for [Isaac] Newton to do was to read more math textbooks, and talk to professors and practice problems . . . that’s what our current models do,” said Altman, using an example a colleague had previously used. But he added that Newton was never going to invent calculus by simply reading about geometry or algebra. “And neither are our models,” Altman said. “And so the question is, what is the missing idea to go generate net new . . . knowledge for humanity? I think that’s the biggest thing to go work on.”

Study shows AI image-generators being trained on explicit photos of children