#neuralnetworks

Your Wi-Fi may know who you are, literally. “WhoFi,” a new system from Rome’s La Sapienza University, identifies people with 95.5% accuracy using signals bouncing off their bodies. No cameras. No lights. Just basic routers and neural networks. It even works through walls. Groundbreaking tech, or surveillance nightmare? The line just got blurrier.
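The underlying idea is to treat Wi-Fi Channel State Information (CSI) as a biometric: a neural encoder maps a sequence of CSI measurements to a signature vector that can be matched against enrolled people. A minimal sketch of that kind of pipeline in PyTorch (the architecture, CSI dimensions, and matching step below are illustrative assumptions, not WhoFi's published design):

import torch
import torch.nn as nn
import torch.nn.functional as F

class CSIEncoder(nn.Module):
    """Maps a CSI amplitude sequence (time x subcarriers) to an embedding."""
    def __init__(self, n_subcarriers=114, d_model=128, emb_dim=64):
        super().__init__()
        self.proj = nn.Linear(n_subcarriers, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, emb_dim)

    def forward(self, csi):                  # csi: (batch, time, subcarriers)
        h = self.encoder(self.proj(csi))     # contextualize over time
        emb = self.head(h.mean(dim=1))       # temporal average pooling
        return F.normalize(emb, dim=-1)      # unit-length "signature"

# Matching: compare a probe signature against a gallery of enrolled people
# by cosine similarity (dot product of normalized embeddings).
encoder = CSIEncoder()
gallery = F.normalize(torch.randn(5, 64), dim=-1)   # 5 enrolled identities
probe = encoder(torch.randn(1, 100, 114))           # 100 CSI frames
scores = probe @ gallery.T                          # higher = better match
print("best match: person", scores.argmax().item())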

Gary Marcus is onto something here. Maybe true AGI is not so impossible to reach after all. Probably not in the near future, but likely within 20 years.

"For all the efforts that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running neurosymbolic AI, and me personally, down over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.

The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning."

garymarcus.substack.com/p/how-

Marcus on AI · How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI · By Gary Marcus
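To make that complementary-strengths premise concrete, here is a toy sketch of the neurosymbolic pattern (my illustration, not code from the essay): a learned perception module turns raw input into discrete symbols, and a hand-written symbolic rule does exact, generalizable reasoning over them.

# Toy neurosymbolic pattern: neural perception feeds symbolic reasoning.

def neural_perception(image) -> int:
    """Stand-in for a trained classifier: pixels -> discrete symbol.
    Learning this mapping from data is what neural networks are good at."""
    return int(image["label"])  # stub; imagine a CNN here

def symbolic_reasoning(a: int, b: int) -> int:
    """Exact rule: generalizes to any integers without retraining,
    which is where purely neural approaches tend to struggle."""
    return a + b

# End to end: perceive symbols neurally, then reason over them symbolically.
img1, img2 = {"label": 7}, {"label": 35}
print(symbolic_reasoning(neural_perception(img1), neural_perception(img2)))  # 42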

If we ever see a real artificial mind, some kind of LLM will probably be a small but significant component of it, but the current wave of machine learning will most likely grind to a halt very soon for lack of cheap training data.

The reason all of this is happening now is simple: the technologies behind machine learning have been around for decades, but computers weren't fast enough and didn't have enough memory for those tools to become really powerful until the early 2000s. Around the same time, the Internet went mainstream and filled up with all kinds of data that could be mined for training sets. Now there is so much synthetic content out there that automated data mining won't work much longer; you need humans to curate and clean the training data, which makes the process slow and expensive. I expect another decades-long AI winter after the commercial hype is over.

If you look for real intelligence, look at autonomous robots and computer game NPCs. There you can find machine learning and artificial neural networks applied to actual cognitive tasks in which an agent interacts with its environment. Those things may not even be as intelligent as a rat yet, but they are actually intelligent, unlike LLMs.

#llm #LLMs #ai

Transfer Learning in Machine Learning

Transfer learning is a technique in machine learning where a model developed for one task is reused as the starting point for a model on a second task. Rather than training a model entirely from scratch, which often requires large amounts of labeled data and computational resources, transfer learning takes a more efficient approach by leveraging previously learned features.

ml-nn.eu/a1/86.html

ml-nn.eu · Transfer Learning in Machine Learning · Machine Learning & Neural Networks Blog
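A minimal sketch of what that looks like in practice (assuming PyTorch/torchvision and a hypothetical 10-class target task; not code from the linked post): reuse an ImageNet-pretrained backbone, freeze its learned features, and train only a fresh task-specific head.

import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet and freeze its features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False

# Replace the final layer with a new head for the (hypothetical) 10-class
# task; only this layer's weights will be updated during fine-tuning.
model.fc = nn.Linear(model.fc.in_features, 10)

trainable = [p for p in model.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")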