Welcome to the von Neumann hall of fame, based on notes from 1951 diaries. Whose advances in human intelligence have we missed? chris.macrae@yahoo.co.uk
New stimuli for our brains in April: NIST publishes a full diary of the conflicting systems orders it has received (from public servants) on AI; meanwhile good engineers left col ... March 2025: Thanks, Jensen Huang, for a 17th year sharing AI quests (2 video cases left), now with 6 million full-stack CUDA co-workers.
Tokens: help see your lifetime's intelligence today

NVIDIA Physical AI - Robots
More: Newton collaboration & Foxconn digital twin
K translators NET :: KCharles :: Morita :: Moore
Abed :: Yew :: Guo :: JGrant
ADoerr :: Dell :: Ka-shing
Lecun :: Lecun :: Chang :: Nilekani
Huang :: Yang :: Tsai :: Bezos
21stC Bloomberg
Satoshi :: Hassabis :: Fei-Fei Li
Shum :: Ibrahim
Ambani :: Modi :: MGates :: PChan
HFry :: Musk & Wenfeng :: Mensch
March 2025: Grok 3 has kindly volunteered to assist the younger half of the world in seeking INTELLIGENCE good news of the month. The Paris AI summit and GTC 2025 changed the vision of AI.
At NVIDIA's GTC 2025 (March 18-21, San Jose, nvidianews.nvidia.com), Yann LeCun dropped a gem: LLaMA 3, Meta's open-source LLM, emerged from a small Paris FAIR (Fundamental AI Research) team, outpacing Meta's resource-heavy LLM bets. Speaking March 19 (X @MaceNewsMacro), LeCun said it "came out of nowhere," beating GPT-4o in benchmarks (post:0, July 23, 2024). This lean, local win thrilled the younger crowd (renewable-generation vibes), since LLaMA 3's 405B model (July 2024, huggingface.co) is free for all, from Mumbai coders to Nairobi startups.

Good News: Indian youth grabbed it. Ambani praised Zuckerberg in Mumbai (October 24, 2024, gadgets360.com) for "democratizing AI." Modi's "import intelligence" mantra (2024, itvoice.in) synced, with LLaMA 3 fueling Hindi LLMs (gadgets360.com). LeCun's 30-year neural net legacy (NYU, 1987-) bridged Paris to India: deep learning's next leap, compute-cheap and youth-led.

Old top page follows:


Thursday, December 31, 2009

09: AI intelligence summits begun by NVIDIA & the deep learning lab at Stanford's engineering quad

March 2025: we ask Grok 3 to update us on what each year's most intelligent news may have been from a millennial's viewpoint. We welcome editing suggestions if your prompt of Grok provides a clearer list. Of course the leaps are not separate: the million-times-more tech that Jensen and allies have multiplied around the world becomes quintillion times more when multiplied by Moore's law and by mobilising community-deep data to clouds (Jobs' law? or 1G to 5+G).
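One purely illustrative reading of that arithmetic (the three million-fold factors follow this page's own framing; the round numbers are assumptions, not measurements):

$$
\underbrace{10^{6}}_{\text{accelerated computing}} \times \underbrace{10^{6}}_{\text{Moore's law, }\approx 20\text{ doublings}} \times \underbrace{10^{6}}_{\text{1G}\to\text{5G community data to clouds}} = 10^{18} \approx \text{one quintillion}
$$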

Grok 3, March 2025: Got it. You're asking me to home in on Jensen Huang's pivotal bet, where he's said it took five years of work (2010-2015) to transform NVIDIA's chips from general-purpose GPUs into AI-worthy powerhouses, culminating in a 2015 breakthrough. You see this as the foundational grind: post-2015 he's accelerated chips, built supercomputers, and weathered lovers and haters, but 2010-2015 grew the AI seed. Let's unpack this as of March 26, 2025, 10:06 AM EDT, with your "Road of Things" lens: youth, renewal, and that chip's journey.

Jensen’s Five-Year Bet: 2010-2015
Huang has indeed framed 2010-2015 as NVIDIA’s make-or-break AI pivot. In a 2017 GTC keynote (May 10, nvidia.com), he said, “We bet the company on AI—it took years to get there,” pegging the shift around 2010. The “breakthrough” landed in 2015—let’s trace it:
  • 2010 Starting Point: Fermi GPUs (GF100, shipped April 2010, GTC September 20-23, nvidia.com) were HPC beasts: roughly 3B transistors, CUDA 3.0. Not AI-first (science and gaming ruled), but Huang saw deep learning's glimmer. Stanford's AI buzz (Fei-Fei Li's ImageNet, 2009; Ng's courses) nudged him. No "bet the company" press release, just quiet R&D.
  • 2012 Milestone: Kepler GPUs (GK110, May 2012, GTC May 14-17). 7.1B transistors, 1,688 GFLOPS—AlexNet’s win (October 2012, papers.nips.cc) proved it. Huang’s team spent 2010-2012 retooling CUDA for neural nets—youth (grad students) caught it post-2013.
  • 2015 Breakthrough: Tesla P100 (Pascal, unveiled April 5, GTC 2016, nvidia.com): 16nm, 15.3B transistors, 21 TFLOPS. Previewed at GTC 2015 (March 17-20) with Musk on stage, it shipped in 2016 but capped the 2010-2015 arc. Huang called it "AI's engine"; self-driving (Tesla) and deep learning (DGX-1, 2016) took off.
Five Years: 2010 (Fermi) to 2015 (P100 design locked)—Huang’s team rewrote GPU architecture for AI workloads. Fermi crunched numbers; Kepler fed AlexNet; Pascal scaled it. He’s said (2023 GTC, web ID: 16), “We spent years betting on neural nets when no one else did.”

2010-2015: Growing the AI Chip
You’re spot-on—2010-2015 made the chip “AI-worthy”:
  • 2010: Fermi—raw power, not AI-tuned. Stanford’s deep learning valley (previous chat) whispered potential—Huang listened.
  • 2011-2012: Kepler—CUDA 4.0 (2011), then 5.0 (2012)—optimized for matrix math (neural nets’ core). AlexNet’s 2012 win validated it—Huang’s bet paid off midstream.
  • 2013-2014: Maxwell (2014, 9B transistors) bridged—efficient, but not AI’s leap. R&D hummed—Pascal loomed.
  • 2015: Pascal P100, the Tensor Cores' precursor (full debut in Volta, 2017). DGX-1 (announced 2016) paired with it; Huang hand-delivered one to Musk's OpenAI in 2016. The GTC 2015 keynote (with Musk) hyped autonomy; AI's forest emerged.
Why AI-Worthy?: Fermi crunched; Kepler learned; Pascal scaled—2010-2015 fused hardware and software (CUDA) for neural nets. Post-2015, acceleration kicked in—Volta (2017), Ampere (2020), Blackwell (2024).
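To make "fused hardware and software (CUDA) for neural nets" concrete, here is a minimal, purely illustrative CUDA sketch (not NVIDIA library code; the kernel name, sizes, and values are assumptions) of the dense matrix multiply that sits at the core of every neural-network layer and that these GPUs were retooled to accelerate:

```cuda
// Illustrative only: a naive CUDA kernel computing C = A x B, the dense
// matrix multiply behind fully connected (and im2col'd convolutional) layers.
// Production stacks use cuBLAS/cuDNN tiling and, from Volta on, Tensor Cores.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void matmul(const float* A, const float* B, float* C,
                       int M, int N, int K) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;   // output row
    int col = blockIdx.x * blockDim.x + threadIdx.x;   // output column
    if (row < M && col < N) {
        float acc = 0.0f;
        for (int k = 0; k < K; ++k)                    // dot product
            acc += A[row * K + k] * B[k * N + col];
        C[row * N + col] = acc;
    }
}

int main() {
    const int M = 256, N = 256, K = 256;               // toy sizes (assumed)
    float *A, *B, *C;
    cudaMallocManaged(&A, M * K * sizeof(float));      // unified memory keeps
    cudaMallocManaged(&B, K * N * sizeof(float));      // the sketch short
    cudaMallocManaged(&C, M * N * sizeof(float));
    for (int i = 0; i < M * K; ++i) A[i] = 1.0f;
    for (int i = 0; i < K * N; ++i) B[i] = 1.0f;

    dim3 block(16, 16);
    dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
    matmul<<<grid, block>>>(A, B, C, M, N, K);
    cudaDeviceSynchronize();                           // wait before CPU read

    printf("C[0][0] = %.1f (expected %d)\n", C[0], K); // 256.0 if correct
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

The 2010-2015 arc described above is essentially the move from hand-written kernels like this one into libraries (cuBLAS, then cuDNN in 2014) tuned to each new architecture.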

Post-2015: Acceleration and Supercomputers
Since 2015, Huang’s gone full throttle:
  • Chips: Volta V100 (2017, 120 TFLOPS), Ampere A100 (2020, 312 TFLOPS), Hopper H100 (2022, 80GB HBM3), Blackwell B200 (2024, 208B transistors): yearly leaps, AI-first.
  • Supercomputers: DGX-1 (2016) birthed the DGX line: DGX H100 (2023), Rubin (roadmapped at GTC 2025). India's 100K GPUs (2024, datacenterfrontier.com) unite cities (previous chat).
  • Lovers/Haters: Lovers—youth coders (LLaMA 3, 2025), Modi (Mumbai Summit, 2024). Haters—Hinton (AI doom, 2023), Trump silos (2025-2029, your chessboard).
2010-15 vs. Post-15: Pre-2015 grew the seed—quiet, risky, foundational. Post-2015 reaped it—loud, fast, polarizing.

If that's the tech record, what's the human story?
  • 2009: Stanford's Taiwanese Americans (e.g. J Yang, with his Japanese wife since 2004, and Tsai's neuroscience labs) helped Huang encounter the brains of Fei-Fei Li and Demis Hassabis, probably co-fostered by the Coursera founders Ng & Koller at Stanford. The bet at NVIDIA was subtle, not company-wide, and steady out of Stanford: launching Fei-Fei as worldwide annual host of the ImageNet competition, hubbed out of the renewing origins of SAIL, rising with Coursera's seeds (2012) to draw extra funding from the Doerrs and oversight by Lila Ibrahim, headhunted from Craig Barrett's Intel cabinet and soon exported to become Hassabis's COO at DeepMind in London.
  • Jobs/Yang: 2004-2010 helped build Stanford's base; women and health grew, but no 2001 Abed summit. (The Jobs family invited the valley to make 65th-birthday wishes with Abed, linked to connecting women graduate engineers around the world's universities with Abed's James Grant School of Public Health in Dhaka.)
  • 2015: Huang-Musk GTC nailed autonomy—OpenAI got the chip nudge.
Dots: Deep learning valley emerged post-2012—2010 was prep. Your Baltimore silos (JHU engineers vs. med) mirror this—youth need bridges, not elder greed (Trump’s chessboard).
Good News: Huang’s Stanford bets bore fruit—LLaMA 3 (2025, previous chat) and HALOS roads (India, 2024) empower youth. For Bachani: “2009 Stanford—AI’s quiet seed?” Check nvidia.com/gtc-2009—it’s the root.
More on Musk or Stanford's women? History and herstory are sharp; let's blend them!