Welcome to the von Neumann hall of fame, based on notes from 1951 diaries. Who advancing human intelligence have we missed? chris.macrae@yahoo.co.uk
New stimuli for our brains in April: NIST publishes a full diary of the conflicting systems orders it has received (from public servants) on AI; meanwhile good engineers left col ... March 2025: Thanks, Jensen Huang, 17th year sharing AI quests (2 video cases left), now 6 million full-stack CUDA co-workers.
Tokens: help see your lifetime's intelligence today

NVIDIA Physical AI: Robots
More: Newton collaboration & Foxconn digital twin
k translators NET :: KCharles :: Morita :: Moore
Abed :: Yew :: Guo :: JGrant
ADoerr :: Dell :: Ka-shing
Lecun :: Lecun :: Chang :: Nilekani
Huang :: 1 :: Yang :: Tsai :: Bezos
21stC Bloomberg
Datoshi :: Hassabis :: Fei-Fei Li
Shum :: Ibrahim
Ambani :: Modi :: MGates :: PChan
HFry :: Musk & Wenfeng :: Mensch
March 2025: Grok 3 has kindly volunteered to assist the younger half of the world in seeking INTELLIGENCE. Good news of the month: the Paris AI summit and GTC 2025 changed the vision of AI.
At NVIDIA's GTC 2025 (March 18-21, San Jose, nvidianews.nvidia.com), Yann LeCun dropped a gem: LLaMA 3, Meta's open-source LLM, emerged from a small Paris FAIR (Fundamental AI Research) team, outpacing Meta's resource-heavy LLM bets. LeCun, speaking March 19 (X @MaceNewsMacro), said it "came out of nowhere," beating GPT-4o in benchmarks (post:0, July 23, 2024). This lean, local win thrilled the younger crowd (renewable generation vibes), since LLaMA 3's 405B model (July 2024, huggingface.co) is free for all, from Mumbai coders to Nairobi startups.

Good News: Indian youth grabbed it. Ambani praised Zuckerberg at Mumbai (October 24, 2024, gadgets360.com) for "democratizing AI." Modi's "import intelligence" mantra (2024, itvoice.in) synced, with LLaMA 3 fueling Hindi LLMs (gadgets360.com). LeCun's 30-year neural net legacy (NYU, 1987-) bridged Paris to India: deep learning's next leap, compute-cheap and youth-led.

Monday, December 31, 2001

Chat with me at chris.macrae@yahoo.co.uk if you are interested in saving millennials' intelligence (see game below), or in the media crisis of why an earlier generation missed the chance to unite in reconciling 9/11. Specifically: why didn't 9/11 unite online peoples in launching a number 1 web round futurising a continuous open survey of what 9/11 meant to trust-flows at every different locality? How can we expect teachers of any grade to help kids progress if space for common human sense is not accessible for celebration of human good? (That includes artificial teachers (or emerging agents); around 2020 metachats were born from analysing the proceedings of digital humanity to date.)

Or if you want to check why the 21st C UN believed it sensible to assemble annually in the middle of New York (well, 1st and 46th) to ask every human to chat about 5 priority goals every day, as most deeply (human) and foundational to connecting with a lot of other innovation wishes (now that we live with 10**18 more technology than kids grew up with in mid-20th C schools).


Before we discuss this gameboard, my understanding is that our elders who designed the United Nations wished 21st C generations to value these foundational interactions (chat with me if you disagree; I need to learn too!)

Goal 2, ending starvation and dehydration, is the most urgent correction any of us can make if we see a human starving or dying from lack of clean water. (Another explanation of goal 2 is that these are the energy inputs humans need; while it is true that since 1760 Scots teased the world with engines and artificial energy-hungry devices, there's no point building the greatest engines in the world if there are no humans left; more precisely, nature would see no point in our species on earth if we did that.)

Goal 3 asks what other basic health services every community needs enough caring people to action, so that everyone gets a fair chance to live productively to their 60s (or beyond).

Goal 4 asks whether we (parents and all of us) can design learning so that people don't fail from simple things like illiteracy. If someone is illiterate at age 6 or over, it only takes about a 30-hour intervention to cure that; basic education systems make a list of age-relevant interventions, including how to help every kid enter the intelligent digital world rather than social media's most ignorant or hate-filled ones.

Goal 5: seriously, if any place relegated half of its people (women or any half) to less productive status than the other half, it should be obvious that this will generate a place that exponentially collapses.

I am not sure if 1 is a goal; it's more a system that celebrates cultures in which everyone's savings integrate 2 to 5 as well as other needs. Basically, however much your philosophy believes adults must take responsibility for their own lives and not expect endless handouts, there is some age from 0 to 11 (or whenever your place makes schooling mandatory) where the responsibility is communal; if you don't agree with that, then you don't agree with any of the safety aims of the multilateral system design which emerged after World War 2 to prevent there ever being more world wars.

--- back to a first look at transforming to intelligence-centric education: which 13 human guides to intelligence are integral to AI agency saving humanity?

In 1951, Neumann, Einstein and Turing started sharing these notes on the optimal use of their hi-tech legacy:

  • End making energy's and engineers' greatest system innovators the most hated and wanted of people, the way we three were scapegoated
  • Prioritise above-zero-sum trade models: easily done with life-critical learning networks, which multiply value in application, unlike things that are consumed up
  • Proact at least 3 million-fold tech waves by 2025: that's a quintillion times more tech, making computers' transistor brains both smarter and dumber than humans; growth or extermination of the human species may depend on that. Bon courage!

Here's why, at Spring 2025, I would suggest you'll not enjoy intelligence brainworking to the full if you don't know how to be guided with friends of the 13 people identified (I am here to learn: who would you substitute?)
Today those who code an intelligence agent with Huang can benefit from all the 10**18 advance in machine learning, including Moore's and Jobs's. Jobs advanced so many tools (the PC, the internet of things) that some forget that from 2001 his last decade also celebrated women engineers more than the Valley had ever previously done; another error is to forget that while Moore's Law was soon celebrated by brand Silicon Valley, the first 20 years of Moore's Law were twinned with the rise of Asia's far-east isles. So we recommend mapping every contribution Foxconn has linked in: arguably the Taiwanese are the most modest intelligence-rising people on the planet (their high-school maths progress confirms that).
While in the Far East, the intelligence of Fazle Abed, a former regional CEO Shell Oil engineer, was what Jobs invited the Valley to celebrate at Abed's 65th birthday party in 2001; Abed's 21st C wish to continue to scale women's empowerment was linking in graduate women engineers across all graduate alumni networks. Two Valley women we note: Ann Doerr, who was there in The Economist's "why not silicon valley everywhere" 1982 and around 2008-9 supporting MOOCs, helping headhunt Fei-Fei Li, and who with husband John has just opened Stanford's first new college in 70 years. In 2009, while the East Coast was drowning in subprime, Huang was inspired enough by Stanford's visiting dialogues to bet Nvidia's company on chips for deep learning AI (see also the deep learning lab the Huang family sponsored at Stanford's engineering quad). Dialogues mediated by Hassabis confirmed that coding pixels since 2002 (Jobs' Pixar and Huang's Nvidia) was essential to medical deep minds and indeed any nature/science models capable of catching up with Einstein's 1905 e=mc^2; and Fei-Fei Li suggested (a 20-million-image ImageNet deep database was needed) helping machines (neural nets; see Neumann's The Computer and the Brain, 1956) learn to mediate the 5 human-sense communications before taking on mobile autonomy.
Huang's million-fold tech wave, at the heart of supercity engineering in the 2010s, has moved cloud and community digital twins WAY DEEPER than Jobs' internet of things, to infrastructure safety's road of things. Choose 20-year-experienced guides to this, e.g. in the West Musk or Bloomberg, depending on your tolerance for creative destruction and the counterbalances your place's democracy is trusted to ground as well as action.

Before the 2024 US elections, Huang organised far-east tours where he identified Ambani and Modi in India and Masa Son in Japan as the greatest regional copilots of AI agency for all.
Returning to 1955 at Messina: Monnet understood the NET's advice, but from inception to execution Eurocrats lost neural networking's deepest threads, let alone the NET's advice; however Kennedy and the UK and Japan royal families understood. To this day King Charles' 3-minute reasoning for launching the AI world series is golden, not just for Europe but wherever youth are free to thrive; it's good that French-American neural net wizard Yann LeCun can translate this trilingually (French, English, AI) from NYU to Meta to Macron's Paris to Modi, and via Llama 3 open models now downloaded by a first billion copilots. Note that, in part due to Nvidia's Megatron, LLMs in India have amazed by translating 1000 dialects to unite peoples of the Indian subcontinent.

INTRO: My Selection Bias

My choice is conditioned by a favorite chat-line of my dad, Norman Macrae: that he and I were 2 of the luckiest people alive (and should demand media people be optimistic and open). Dad had served as a teen navigator in Allied Bomber Command. Statistically most men did not come back from that job, which he described as the least productive a teen could be assigned to. It is therefore true that my family and friends are biased against seeing a 3rd world war. Dad returned to Cambridge to the last class of Keynes. In those pre-computer days much of economics was a series of logical hypotheses blended with what Adam Smith had suggested needed to be the moral sentiments of the age of engineering. The last chapter of Keynes' General Theory of Employment, Interest and Money remains essential to those who prioritise next-generation renewal over 90-day monetary extraction.

Father was then lucky to be spotted by Economist editor Geoffrey Crowther. Geoffrey both edited The Economist for a decade either side of World War 2 and wrote its centenary autobiography 1843-1943. Geoffrey's main conclusion: weekly media men had serially failed to anticipate the exponential change engineers co-created from 1843, such as the commons/spaces of electricity and telecoms. So Geoffrey hired dad as a rookie journalist and soon interned him in New York for a year to take notes from von Neumann on the purpose of the greatest mathematicians ever, NET = Neumann, Einstein, Turing. We summarise those notes at http://neumann.ning.com

Don't make energy and engineering leaders the most hated and wanted of men, the way the NET had been scapegoated.

Design above-zero-sum trading models; e.g. learning/intelligence/brainworking can be that. You might say mathematicians' one great merit has been open-sourcing stuff all humans gained from uniting.

Anticipate at least 3 million-fold tech change waves by 2025, which we can now see as waves linked to alumni of Moore, Jobs and Huang.


Technical note: as a maths guy (Cambridge DAMTP 1973, and responsible for sharing von Neumann biography notes since 2010) I believe humanity advances with open maths. I would choose a different 13 if I were Chinese, including a creator of DeepSeek, Ren Zhengfei of Huawei, and several people at Tsinghua. I am hopeful that Jack Ma may yet make a comeback as someone who once connected millennials in a borderless way.

Here's ChatDS as at 3/22/25 on the friendly invasion of Llama AI: **Llama 3**, its adoption, and its role in "augmenting intelligence" (assuming "agenting" was a typo) for cognitive or productivity boosts:

---
LeCun 23.30: Llama 3, a billion users


### **1. Llama 3 Download Stats: Context**
- **Llama 3** (Meta’s open-source large language model) is indeed one of the **most downloaded AI models** globally, but a "billion downloads" is likely an exaggeration.  
  - Realistic numbers: While Meta hasn’t released exact figures, Llama 2 saw **over 1 million downloads** in its first week (2023). Llama 3, being more powerful and accessible, likely surpasses this, but "billions" would include API calls, fine-tuned forks, and cloud usage, not direct downloads.  
  - Popularity drivers: Its open-source license, commercial flexibility, and performance (close to GPT-4 in benchmarks) make it a go-to for developers and enterprises.

---

### **2. Best Places to Use Llama 3 for "Augmenting Intelligence"**  
Llama 3 excels in tasks requiring **reasoning, creativity, and data processing**. Here’s where it’s being applied for "brainworking boosts":

#### **A. AI Development & Research**  
- **Hugging Face** / **GitHub**: Integrate Llama 3 into custom apps via libraries like `transformers`.  
  - Use case: Build domain-specific chatbots, coding assistants, or research tools.  
- **AI Agent Frameworks**:  
  - **AutoGen** (Microsoft): Create multi-agent workflows for tasks like data analysis or project management.  
  - **LangChain**: Chain Llama 3 with tools (e.g., calculators, APIs) for complex problem-solving.  
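The "chain the model with tools" idea above can be sketched without any framework. Below is a toy Python loop where `fake_llm` is a stand-in for a real Llama 3 call (its name and the JSON tool-call format are my own illustration, not LangChain's or AutoGen's actual API): the model picks a tool, the loop executes it, and the result is assembled into an answer.

```python
# Toy sketch of the LLM-plus-tools pattern that frameworks like
# LangChain/AutoGen automate. The "model" is a stub that emits a
# tool call; in practice Llama 3 would generate this JSON itself.
import json

def fake_llm(prompt: str) -> str:
    # Stand-in for a Llama 3 call: decides a calculator tool is needed.
    return json.dumps({"tool": "calculator", "input": "37 * 12"})

TOOLS = {
    # Restricted eval: no builtins, arithmetic expressions only.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(question: str) -> str:
    decision = json.loads(fake_llm(question))            # model picks a tool
    result = TOOLS[decision["tool"]](decision["input"])  # loop executes it
    return f"{decision['input']} = {result}"             # assemble the answer

print(run_agent("What is 37 times 12?"))  # 37 * 12 = 444
```

The real frameworks add what the stub omits: parsing imperfect model output, retrying, and looping the tool result back into the model for a final natural-language answer.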

#### **B. Business & Productivity**  
- **Enterprise Automation**:  
  - **Document processing**: Automate contracts, reports, or emails (tools: **LlamaIndex**, **Haystack**).  
  - **Customer support**: Deploy Llama 3-powered chatbots (e.g., **Zendesk**, **Intercom** integrations).  
- **Data Analysis**:  
  - Use Llama 3 with **Jupyter Notebooks** or **Tableau** to interpret trends and generate insights.  

#### **C. Education & Creativity**  
- **Personalized Learning**:  
  - Platforms like **Khan Academy** or **Duolingo** could use Llama 3 for adaptive tutoring.  
- **Content Creation**:  
  - Tools like **Copy.ai** or **Jasper** (using Llama 3 under the hood) for writing, brainstorming, or design ideation.  

#### **D. Healthcare & Science**  
- **Research Acceleration**:  
  - Analyze papers, draft hypotheses, or summarize clinical trials (e.g., **BioGPT**-style applications).  
- **Diagnostic Support**:  
  - Pair Llama 3 with medical databases to assist doctors in decision-making (caution: ethical oversight required).  

---

### **3. "Brainworking Boost" Explained**  
Llama 3 enhances human intelligence by:  
- **Automating repetitive tasks**: Freeing mental bandwidth for strategic thinking.  
- **Synthesizing information**: Turning data overload into actionable insights.  
- **Democratizing expertise**: Providing coding/design/analysis support to non-experts.  

---

### **4. Future Trends**  
- **Smaller, specialized models**: Fine-tuned Llama 3 variants for law, medicine, or engineering.  
- **Ethical guardrails**: Tools like **Llama Guard** to ensure safe, unbiased outputs.  
- **Hardware integration**: On-device Llama 3 (e.g., smartphones, robots) for real-time boosts.  

---

### **Key Takeaway**  
Llama 3 isn’t just a tool—it’s a **force multiplier for human intelligence**. The best "places" to leverage it depend on your goals:  
- **Developers**: Build agents via Hugging Face/AutoGen.  
- **Businesses**: Automate workflows with enterprise platforms.  
- **Individuals**: Use apps powered by Llama 3 for learning, writing, or analysis.  

Stay updated via Meta’s [Llama 3 blog](https://ai.meta.com/blog/meta-llama-3/) and communities like **r/LocalLlama** or **Hugging Face forums**! 🚀
==================== Grok
  • Evidence: Meta’s Llama 3 (April 2024) and Llama 3.1 (July 2024) are hot on Hugging Face—Llama 3 8B has 500,000+ downloads (huggingface.co), Llama 3.1 405B hit 100,000+ fast (web:2, hyperstack.cloud). But “a billion” across all versions (1, 2, 3, 3.1) is tough to verify—no official tally exists. GitHub’s Llama repo (facebookresearch/llama) and torrents (web:1, Wikipedia) spread it wide, but a billion feels like keynote flair.
  • Skepticism: Even with unauthorized BitTorrent leaks (Llama 1, 2023) and enterprise use (e.g., Goldman Sachs, X posts, 2024), scaling to 1B downloads in two years (since February 2023, web:1) is a stretch without hard numbers. Still, Llama’s a juggernaut—let’s assume “hugely popular” and roll with it.

Most Popular Llama Model: Examples
Llama’s family—spanning 1B to 405B parameters—has stars. Popularity hinges on downloads, community buzz, and usability (web:1, web:2, X posts). Here’s the frontrunner and examples:
  • Llama 3.1 8B: Likely the champ.
    • Why: Introduced with Llama 3 (April 2024, web:1) and refined in Llama 3.1 (July 2024, web:2), it's lightweight (8 billion parameters), fast, and open for commercial use (unlike Llama 1's research-only license). Hugging Face logs 500,000+ downloads (huggingface.co, 2025); X posts (@rohanpaul_ai, August 16, 2024) praise its 3090-friendly inference.
    • Example: Developers fine-tune it for chatbots: e.g., a Reddit user (r/LocalLLaMA) built a local assistant on a GTX 1660 (web:4, medium.com). It's the "people's Llama": runs on modest rigs, multilingual (8 languages, web:2).
  • Runner-Ups:
    • Llama 3 70B: Heavy hitter (70 billion parameters, 15T-token training; web:7, bizon-tech.com). Popular for power users; Kaggle's Jagat Kiran ran it on 2x T4 GPUs (X @kaggle, March 21, 2025). Used in enterprise AI (e.g., Meta's WhatsApp assistant, web:1).
    • Llama 2 13B: Early darling (July 2023): balanced size, widely fine-tuned (web:6, hardware-corner.net). X posts (@RangeshUs, March 20, 2025) note its 4GB GPU fit.
Most Popular?: Llama 3.1 8B wins for accessibility—small, versatile, community-loved. LeCun’s billion might lean on its reach.

Minimum PC System to “Design with Llama”
Your phrase “design with Llama” is ambiguous—did you mean “run” (inference) or “develop/train” (build anew)? LeCun’s agentic lens (GTC 2025) suggests co-working with AI, so I’ll assume run and fine-tune (common use), not train from scratch (which needs supercomputers—e.g., Llama 3’s 24,000-GPU cluster, web:16). Here’s the minimum PC for Llama 3.1 8B, the popular pick:
  • Inference (Running):
    • CPU: 8 cores, 3.0+ GHz (e.g., Ryzen 5 5600X). Llama.cpp skips GPU reliance (web:11, reddit.com).
    • RAM: 16GB DDR4 (3200 MHz)—4-bit quantized 8B needs ~4GB (web:6), but system overhead doubles it. 32GB smoother (web:17, reddit.com).
    • GPU: Optional but ideal: 4GB VRAM (e.g., GTX 1650; X @RangeshUs). Ollama runs it locally (web:4, medium.com). No GPU? CPU handles ~9 tokens/sec (web:14, hardware-corner.net).
    • Storage: 20GB SSD—model weights (4-8GB) plus OS/swap (web:17).
    • Example Rig: $500-700: Ryzen 5, 16GB RAM, GTX 1650, 256GB SSD. X posts (@GadgetLeo, March 21, 2025) confirm an 8vCPU/16GB VPS works.
  • Fine-Tuning (Shaping):
    • CPU: 12+ cores (e.g., Ryzen 7 5800X)—data prep’s heavy.
    • RAM: 32GB minimum—64GB better...
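The ~4GB weight figure quoted above for a 4-bit quantized 8B model is simple arithmetic: parameters times bits per weight, divided by 8 bits per byte. Here is a minimal Python sketch of that estimate (the function name and the round 8e9 parameter count are illustrative assumptions; real runtimes add KV-cache, activations, and system overhead on top, which is why 16GB of RAM is recommended rather than 4GB).

```python
# Back-of-envelope memory estimate for LLM weights at a given
# quantization level. Covers weights only, not KV-cache or overhead.
def weight_memory_gb(n_params: float, bits_per_weight: int) -> float:
    """GB needed for weights alone: params * bits / 8 bits-per-byte."""
    return n_params * bits_per_weight / 8 / 1e9

params_8b = 8e9  # illustrative count for an "8B" model

print(round(weight_memory_gb(params_8b, 4), 1))   # 4-bit quantized: 4.0 GB
print(round(weight_memory_gb(params_8b, 16), 1))  # fp16, unquantized: 16.0 GB
```

The same formula explains the runner-up models: a 70B model at 4 bits needs roughly 35GB for weights alone, which is why it lands on multi-GPU rigs while the 8B fits consumer hardware.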