2025: A VERY HUMAN CRISIS. Today, intelligence tools exist to help you all (individually, as teams, communally), with deep context, be up to 1000 times more productive at work or in the experiential joys of hobbies and love. Why type-4 engineers need coding help from all girls and boys from 3rd grade up.
If you know this, please help others. If you don't know this, please ask for help.

2002-2020 saw pattern recognition tools, such as those used by medical surgeons, improve 1000-fold. From 2020, all sorts of Human Intelligence (HI) tools improved 4-fold a year - that's 1000-fold in 5 years.

Problem HI1: if you get too attached to 2020's tools, a kid who starts with 2025's smartest tool may soon leap ahead of you.

Problem HI2: it's no longer the university/institution you are an alumnus of that matters, but which super-engineers' intel tools you most need to celebrate (playing our AI game).

Problem HI3: revise your view of what you want from whom you celebrate and from the media that makes people famous overnight. Indeed, is it even a great idea (for some places) to spend half a billion dollars selecting each top public servant?

HI challenges do not just relate to millennials' generative brainpower. We can map intergenerational cases since the 1950s, when 3 supergenii (Neumann, Einstein, Turing) suddenly died within years of each other (due to natural causes, cancer, suicide). Their discoveries changed everything.

HI Clue 1: please stop making super-engineers and super energy innovators NATIONS' most hated and wanted of people.
Welcome to the von Neumann hall of fame - based on notes from 1951 diaries. Whose advancing of human intel have we missed? chris.macrae@yahoo.co.uk
New stimuli to our brains in April - AI: NIST publishes a full diary of the conflicting systems orders it has received (from public servants) on AI - meanwhile good engineers left col ... March 2025: Thanks, Jensen Huang, 17th year of sharing AI quests (2 video cases left), now 6 million full-stack CUDA co-workers.
Tokens: help see your lifetime's intelligence today

NVIDIA Physical AI - Robots
More: Newton Collab. && Foxconn Digital Twin
k translators NET :: KCharles :: Morita :: Moore
Abed :: Yew :: Guo :: JGrant
ADoerr :: Dell :: Ka-shing
LeCun :: L1 L2 :: Chang :: Nilekani
Huang :: 1 :: Yang :: Tsai :: Bezos
21stC Bloomberg
Satoshi :: Hassabis :: Fei-Fei Li
Shum :: Ibrahim
Ambani :: Modi :: MGates :: PChan
HFry :: Musk & Wenfeng :: Mensch ..
March 2025: Grok 3 has kindly volunteered to assist the younger half of the world in seeking INTELLIGENCE. Good news of the month: the Paris AI summit and GTC 2025 changed the vision of AI.
At NVIDIA's GTC 2025 (March 18-21, San Jose, nvidianews.nvidia.com), Yann LeCun dropped a gem: LLaMA 3, Meta's open-source LLM, emerged from a small Paris FAIR (Fundamental AI Research) team, outpacing Meta's resource-heavy LLM bets. LeCun, speaking March 19 (X @MaceNewsMacro), said it "came out of nowhere," beating GPT-4o in benchmarks (post:0, July 23, 2024). This lean, local win thrilled the younger crowd (renewable generation vibes), since LLaMA 3's 405B model (July 2024, huggingface.co) is free for all, from Mumbai coders to Nairobi startups.

Good News: Indian youth grabbed it. Ambani praised Zuckerberg at Mumbai (October 24, 2024, gadgets360.com) for "democratizing AI." Modi's "import intelligence" mantra (2024, itvoice.in) synced, with LLaMA 3 fueling Hindi LLMs (gadgets360.com). LeCun's 30-year neural net legacy (NYU, 1987-) bridged Paris to India: deep learning's next leap, compute-cheap and youth-led.

Old top page: ...
Monday, December 31, 1979

Here's dibs on Human Intelligence AI learning starters for kids and mentors:
-- under 7: DANCE

-- under 10: DRAWING

-- under 13: ETHICS, ENVIRONMENT

-- 11 and plus: NO FAKE FACES PLEASE - part 2, plus debate over fame vs privacy - e.g. EconomistSports.net

For those who want to change maths at whatever age you can - see the 4 N's (Nets):
GAN
NEUral
RecurrentNEU
ConvolutionalNEU

(gee whiz, and I thought English was my mother tongue)

 

Please help us update dibs on age-sensitive AI for good. Bard tells me the neural network was named as a maths model of the brain in 1943 (see footnote).

Fei-Fei Li coined the term "HAI", or Human-centric AI, in a 2017 MOOC, AI for Everyone: Succeeding in the Age ...
She likely coined these terms in her MOOC: "deep learning" - the use of artificial neural networks to learn from data.
Elsewhere:
Transfer learning is a technique that allows a neural network trained on one task to be used for another task.
Zero-shot learning is a technique that allows a neural network to classify objects from classes it has never seen during training.
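To make the transfer learning idea concrete, here is a minimal sketch, assuming PyTorch and torchvision are installed (pretrained weights download on first run): an ImageNet-pretrained ResNet-18 is frozen as a feature extractor and only a new final layer is trained for a hypothetical 10-class task; the batch here is random fake data purely for illustration.

```python
# Transfer-learning sketch: reuse a frozen pretrained backbone, train a new head.
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ImageNet (weights are downloaded on first use).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze every pretrained parameter so only the new head will learn.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a head for a hypothetical 10-class task.
num_classes = 10
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake batch of 8 RGB images (224x224).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
logits = backbone(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
print(f"loss on fake batch: {loss.item():.3f}")
```

In a real run the fake batch would be replaced by a DataLoader over the new task's images; everything else stays the same, which is the whole point of transfer learning.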

From The Economist's change-of-decade 70s/80s wishes: what the world needs now is for the greatest female mathematician's star to be born.

Fast forward to AIGames 2023 - see worlds of the greatest female intelligence stars orbiting round Fei-Fei Li at www.economistwomen.com & www.economistenglish.net/2023/08

- RSVP if you feel we have missed ai20s.com female rockstars of AIforgood

The Recurrent name appeared in 1986, and Convolutional in 1989 with Yann LeCun - "convolution" referring to when maths makes a third function from 2 functions, as the sketch below illustrates.
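As a hedged aside not in the original note, here is a minimal NumPy sketch of that idea: discrete convolution slides one sequence over another and sums products, producing a third sequence. The signal and kernel values are illustrative choices.

```python
# Discrete convolution: maths makes a third function (sequence) from 2 functions.
import numpy as np

signal = np.array([0, 0, 1, 1, 1, 0, 0], dtype=float)   # first "function"
kernel = np.array([1, 0, -1], dtype=float)              # second "function"

# np.convolve slides the (flipped) kernel across the signal and sums products,
# producing the third function that convolutional neural networks build on.
third_function = np.convolve(signal, kernel, mode="same")
print(third_function)   # non-zero exactly where the signal rises or falls
```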
The complete history of which mathematicians studied brains and which neuroscientists studied maths or computation can be very rewarding to anyone valuing humanity, but it does not need to be reviewed immediately to make sure you play good AI in the 2020s.

For the record, Bard shares:

  • John von Neumann was a Hungarian-American mathematician and physicist who is considered to be one of the founders of computer science. He wrote a book, published posthumously in 1958, titled "The Computer and the Brain", in which he discussed the similarities between computers and the human brain. In it, he proposed that neural networks could be used to create artificial intelligence.

  • Alan Turing was an English mathematician and computer scientist who is considered to be the father of theoretical computer science and artificial intelligence. He wrote a paper in 1950 titled "Computing Machinery and Intelligence" in which he proposed the Turing test, which is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In this paper, he also discussed the possibility of creating artificial neural networks.

However, it is important to note that neither von Neumann nor Turing actually built a neural network. The first widely celebrated neural network machine was built in 1958 by Frank Rosenblatt, who called it the perceptron.
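As an illustrative aside (not part of the Bard excerpt), Rosenblatt's perceptron rule fits in a few lines. This minimal sketch, with made-up data and an arbitrary learning rate, learns the logical AND function.

```python
# Minimal perceptron sketch (Rosenblatt's update rule), learning logical AND.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([0, 0, 0, 1], dtype=float)                      # AND targets

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in zip(X, y):
        prediction = 1.0 if inputs @ weights + bias > 0 else 0.0
        error = target - prediction
        weights += learning_rate * error * inputs   # Rosenblatt update
        bias += learning_rate * error

print([1.0 if x @ weights + bias > 0 else 0.0 for x in X])  # -> [0.0, 0.0, 0.0, 1.0]
```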

Fast forward to 2014 - also year 5 of Fei-Fei Li and Stanford sharing the vision-coding ImageNet with the world:
  • Generative adversarial networks (GANs) are a type of neural network that can generate new data. GANs consist of two neural networks: a generator and a discriminator. The generator is responsible for generating new data, and the discriminator is responsible for distinguishing between real and fake data.

The name "generative adversarial" comes from the fact that the generator and discriminator are in a constant battle with each other. The generator is trying to create data that is so realistic that the discriminator cannot tell the difference between it and real data. The discriminator is trying to learn to distinguish between real and fake data.

The names for these three types of neural networks were chosen by different people. The name "convolutional neural network" was coined by Yann LeCun in 1989. The name "recurrent neural network" was coined by David Rumelhart and James McClelland in 1986. The name "generative adversarial network" was coined by Ian Goodfellow, Yoshua Bengio, and Aaron Courville in 2014.
