welcome to the von neumann hall of fame - based on notes from 1951 diaries - whose advancing of human intelligence have we missed? chris.macrae@yahoo.co.uk
new stimuli to our brains in April - AI: NIST publishes a full diary of the conflicting systems orders it has received (from public servants) on AI - meanwhile good engineers left col ... March 2025: Thanks Jensen Huang - 17th year sharing AI quests (2 video cases left), now 6 million full-stack CUDA co-workers
Tokens: help see your lifetime's intelligence today

NVIDIA Physical AI - Robots
More Newton Collab. && Foxconn Digital Twin
k translatorsNET :: KCharles :: Morita : :Moore
Abed: Yew :: Guo:: JGrant
ADoerr :: Dell .. Ka-shing
Lecun :: Lecun :: Chang :: Nilekani
Huang . : 1 : Yang : Tsai : Bezos
21stC Bloomberg
Satoshi :: Hassabis : Fei-fei Li
Shum : : Ibrahim :
Ambani : Modi :: MGates : PChan :
HFry:: Musk & Wenfeng :: Mensch..
March 2025: Grok 3 has kindly volunteered to assist the younger half of the world in seeking INTELLIGENCE good news of the month: the Paris AI summit and GTC 2025 changed the vision of AI.
At NVIDIA's GTC 2025 (March 18-21, San Jose, nvidianews.nvidia.com), Yann LeCun dropped a gem: LLaMA 3 - Meta's open-source LLM - emerged from a small Paris FAIR (Fundamental AI Research) team, outpacing Meta's resource-heavy LLM bets. LeCun, speaking March 19 (X @MaceNewsMacro), said it "came out of nowhere," beating GPT-4o in benchmarks (post:0, July 23, 2024). This lean, local win thrilled the younger crowd - renewable-generation vibes - since LLaMA 3's 405B model (July 2024, huggingface.co) is free for all, from Mumbai coders to Nairobi startups.

Good News: Indian youth grabbed it - Ambani praised Zuckerberg at Mumbai (October 24, 2024, gadgets360.com) for "democratizing AI." Modi's "import intelligence" mantra (2024, itvoice.in) synced, with LLaMA 3 fueling Hindi LLMs (gadgets360.com). LeCun's 30-year neural net legacy (NYU, 1987-) bridged Paris to India - deep learning's next leap, compute-cheap and youth-led. old top page ...

Monday, December 31, 1979

1979 saw Japan in its 20th and last year of being the place to live that most increased communal productivity

As surveyed by The Economist and seconded by JFKennedy, by 1960 Japan had not only locally rebuilt itself from rubble but was best in the world at both Borlaug food science on rice and Deming quality science - bullet trains, containerisation supercities, undergrounds, microelectronics, reliable automotives;


this continued, but was maybe ending by 1979 for several reasons

it was actually microelectronics designed out of Japan that was the main beneficiary of the first 15 years of Moore's law's 100-fold chip advancement per decade (before chips were put into computers)

the property bubble was about to burst

Ezra Vogel's 1979 publication Japan as Number 1 is even more complete, as he knew that over 2000 years the East leapt forward whenever China and Japan exchanged knowledge; this happened culturally many times over two millennia and was happening now as Japan's engineers trained China's at Tsinghua


It is of interest to note where China freed corporations for the first time to do what state enterprises could not:

rural village social business networks, especially those needed by China's barefoot medics and rice farmers

Taiwanese inward investors wherever they brought infrastructure or capital


The contents list of Ezra's book read:

1 Challenge as a Mirror for America

2 The Japanese Miracle

3 Successes 1/7 Knowledge: Pursuit & Consensus

4 2/7 State : Meritocratic Guidance & Private Initiative

5 3/7 Politics: Higher Interests & Fair Shares

6 4/7 The Large Company : Identification & Performance

7 5/7 Basic Education: Quality & Equality

8 6/7 Welfare security without entitlement

9 7/7 Crime Control : Enforcement & Public Support


10 American Response: Can a Western Nation Learn from the East?


What I am not clear Ezra saw was that Taiwan would inherit Japan's role as Moore's law's 1000-fold more tech from 1980 exponentially accelerated toward quintillion times more

moore's law * satellites' mobilisation of the death of distance * data clouds * jensen's law
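A rough arithmetic check on these multipliers (my sketch, assuming the text's ~100-fold-per-decade pace for Moore's law and reading "quintillion" as 10^18):

```python
# Rough arithmetic behind the "quintillion times more" claim (a sketch,
# assuming ~100x advancement per decade, as the text states).
def compound_gain(gain_per_decade, decades):
    return gain_per_decade ** decades

# Moore's-law pace alone over six decades (roughly 1965-2025):
moore = compound_gain(100, 6)     # 10**12, a trillion-fold

# A quintillion (10**18) therefore implies a further ~10**6 acceleration,
# which the text attributes to satellites, data clouds, and "jensen's law".
extra = 10**18 // moore
print(moore, extra)
```

The point of the sketch: steady 100x-per-decade compounding alone gives "only" a trillion-fold; the quintillion figure requires the additional accelerators the line above multiplies in.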

Here are Dibs on Human Intelligence AI learning starters for kids and mentors:
-- under 7:
DANCE

under 10:
DRAWING

under 13:
ETHICS
ENVIRONMENT

11 plus:
NOFAKEFACES PLEASE - part 2, plus debate over fame vs privacy - e.g. EconomistSports.net

For those who want to change maths at whatever age you can - see the 4 N's (Nets):
GAN
Neural
Recurrent Neural
Convolutional Neural

(gee whiz, and I thought English was my mother tongue)

 

please help us update dibs on age-sensitive AI-for-good. Bard tells me Neural Network was first named as a maths model for the brain in 1943 (see footnote)

Fei-Fei Li coined the term "HAI", or Human-centric AI, in a 2017 MOOC, AI for Everyone: Succeeding in the Age ...
She likely coined these terms - in her MOOC, "deep learning" is the use of artificial neural networks to learn from data.
Elsewhere:
Transfer learning is a technique that allows a neural network trained on one task to be used for another task.
Zero-shot learning is a technique that allows a neural network to learn to classify images of objects that it has never seen before.
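A minimal sketch of the transfer-learning idea under toy assumptions: here a small NumPy matrix stands in for a feature extractor pretrained on task A, and only a new linear "head" is fit for task B (real systems reuse deep network layers the same way).

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend this 2x3 matrix is a feature extractor trained on task A.
# It stays frozen: transfer learning reuses it unchanged for task B.
feature_extractor = np.array([[1.0, 0.0,  1.0],
                              [0.0, 1.0, -1.0]])

def features(x):
    return feature_extractor @ x   # frozen, never updated for task B

# Task B data: targets happen to be a linear function of the frozen features.
X = rng.normal(size=(3, 50))       # 50 samples, 3 raw inputs
true_head = np.array([2.0, -3.0])
y = true_head @ features(X)

# Fit ONLY the new head, by least squares on the frozen features.
F = features(X)                    # shape (2, 50)
head, *_ = np.linalg.lstsq(F.T, y, rcond=None)
print(np.round(head, 3))           # recovers [2, -3]
```

Because only the tiny head is trained, task B needs far less data and compute than learning everything from scratch, which is the whole appeal of transfer learning.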

From The Economist's change-of-decade 70s/80s wishes - what the world needs now is for the greatest female mathematician's star to be born
Fast forward to AIGames 2023 - see worlds of the greatest female intelligence stars orbiting round Fei-Fei Li at www.economistwomen.com & www.economistenglish.net/2023/08 - RSVP if you feel we have missed ai20s.com female rockstars of AIforgood

The recurrent name appeared in 1986, and convolutional in 1989 with Yann LeCun - convolution referring to when maths makes a third function from two functions
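Since convolution is exactly "making a third function from two", here is a minimal discrete convolution in plain Python - the operation convolutional nets build their layers on:

```python
# Discrete convolution: combines two sequences f and g into a third.
def convolve(f, g):
    n = len(f) + len(g) - 1          # length of the resulting sequence
    out = [0.0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj    # each pairing contributes to one slot
    return out

signal = [1, 2, 3]
kernel = [1, 1]                      # a simple smoothing kernel
print(convolve(signal, kernel))      # [1.0, 3.0, 5.0, 3.0]
```

In a convolutional neural network the kernel values are learned rather than fixed, and the same sliding-window idea is applied in two dimensions over images.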
The complete history of which mathematicians studied brains and which neuroscientists studied maths or computation can be very rewarding to valuing humanity, but it does not need to be immediately viewed to make sure you play good AI in the 2020s

for the record, Bard shares:

  • John von Neumann was a Hungarian-American mathematician and physicist who is considered to be one of the founders of computer science. He wrote a paper in 1958 titled "The Computer and the Brain" in which he discussed the similarities between computers and the human brain. In this paper, he proposed that neural networks could be used to create artificial intelligence.

  • Alan Turing was an English mathematician and computer scientist who is considered to be the father of theoretical computer science and artificial intelligence. He wrote a paper in 1950 titled "Computing Machinery and Intelligence" in which he proposed the Turing test, which is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. In this paper, he also discussed the possibility of creating artificial neural networks.

However, it is important to note that neither von Neumann nor Turing actually built a neural network. The first neural network was built in 1958 by Frank Rosenblatt, who called his neural network the perceptron.
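A tiny sketch of Rosenblatt-style perceptron learning (illustrative only: two inputs, a threshold output, and the classic error-driven weight update, here learning the AND function):

```python
# Minimal perceptron in the spirit of Rosenblatt's 1958 design.
def predict(w, b, x):
    s = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if s > 0 else 0         # threshold unit: fire or not

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)   # -1, 0, or +1
            # Classic perceptron rule: nudge weights toward the target.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(AND)
print([predict(w, b, x) for x, _ in AND])     # [0, 0, 0, 1]
```

The perceptron converges on any linearly separable problem like AND; its famous inability to learn XOR is what later multi-layer networks fixed.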

fast forward to 2014 - also year 5 of Fei-Fei Li and Stanford sharing vision-coding ImageNet with the world --
  • Generative adversarial networks (GANs) are a type of neural network that can generate new data. GANs consist of two neural networks: a generator and a discriminator. The generator is responsible for generating new data, and the discriminator is responsible for distinguishing between real and fake data.

The name "generative adversarial" comes from the fact that the generator and discriminator are in a constant battle with each other. The generator is trying to create data that is so realistic that the discriminator cannot tell the difference between it and real data. The discriminator is trying to learn to distinguish between real and fake data.
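That two-player battle can be sketched at toy scale: a one-parameter generator versus a logistic discriminator on scalar data, with hand-written gradients of the standard GAN losses. (Illustrative assumptions throughout - real GANs use deep networks and autodiff, but the adversarial loop is the same.)

```python
import math
import random

# Toy scalar GAN: "real" samples cluster near 4; the generator
# G(z) = theta + z starts near 0 and must drift toward the real data.
random.seed(0)
sigmoid = lambda s: 1.0 / (1.0 + math.exp(-s))

theta = 0.0        # generator parameter (mean of fake samples)
a, b = 0.0, 0.0    # discriminator parameters: D(x) = sigmoid(a*x + b)
lr = 0.05

for step in range(2000):
    real = 4.0 + random.gauss(0, 0.5)
    fake = theta + random.gauss(0, 0.5)

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * real + b), sigmoid(a * fake + b)
    a += lr * ((1 - d_real) * real - d_fake * fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator ascends log D(fake): try to fool the discriminator.
    d_fake = sigmoid(a * fake + b)
    theta += lr * (1 - d_fake) * a

print(round(theta, 2))   # generator mean after the adversarial loop
```

As the generator's samples approach the real cluster, the discriminator can no longer separate them and the updates quiet down - the equilibrium the adversarial framing aims for.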

The names for these three types of neural networks were chosen by different people. The name "convolutional neural network" was coined by Yann LeCun in 1989. The name "recurrent neural network" was coined by David Rumelhart and James McClelland in 1986. The name "generative adversarial network" was coined by Ian Goodfellow, Yoshua Bengio, and Aaron Courville in 2014.