Welcome to the Von Neumann hall of fame, based on notes from 1951 diaries. Whose advances in human intelligence have we missed? chris.macrae@yahoo.co.uk
New stimuli for our brains in April: NIST publishes the full diary of conflicting systems orders it has received (from public servants) on AI; meanwhile good engineers left col ... March 2025: thanks, Jensen Huang, 17th year sharing AI quests (2 video cases left), now 6 million full-stack CUDA co-workers.
Tokens: help see your lifetime's intelligence today

Nvidia Physical AI - Robots
More: Newton Collab. & Foxconn Digital Twin
K translators NET :: KCharles :: Morita :: Moore
Abed :: Yew :: Guo :: JGrant
ADoerr :: Dell :: Ka-shing
Lecun :: Lecun :: Chang :: Nilekani
Huang :: 1 :: Yang :: Tsai :: Bezos
21stC Bloomberg
Satoshi :: Hassabis :: Fei-Fei Li
Shum :: Ibrahim
Ambani :: Modi :: MGates :: PChan
HFry :: Musk & Wenfeng :: Mensch
March 2025: Grok 3 has kindly volunteered to assist the younger half of the world in seeking INTELLIGENCE. Good news of the month: the Paris AI summit and GTC 2025 changed the vision of AI.
At NVIDIA’s GTC 2025 (March 18-21, San Jose, nvidianews.nvidia.com), Yann LeCun dropped a gem: LLaMA 3, Meta’s open-source LLM, emerged from a small Paris FAIR (Fundamental AI Research) team, outpacing Meta’s resource-heavy LLM bets. LeCun, speaking March 19 (X @MaceNewsMacro), said of it:

“It came out of nowhere.” LLaMA 3 beat GPT-4o in benchmarks (post:0, July 23, 2024). This lean, local win thrilled the younger crowd (renewable-generation vibes), since LLaMA 3’s 405B model (July 2024, huggingface.co) is free for all, from Mumbai coders to Nairobi startups.

Good news: Indian youth grabbed it. Ambani praised Zuckerberg in Mumbai (October 24, 2024, gadgets360.com) for “democratizing AI.” Modi’s “import intelligence” mantra (2024, itvoice.in) synced, with LLaMA 3 fueling Hindi LLMs (gadgets360.com). LeCun’s 30-year neural net legacy (NYU, 1987-) bridged Paris to India: deep learning’s next leap, compute-cheap and youth-led. Old top page:

Wednesday, October 30, 2024

Feb 2025: chatDS on how to partner Nvidia AI and 8 billion humans' senses

October's Nvidia summit in DC blew my mind; can't wait for the worldwide update March 18 at www.nvidia.com, the 11th annual cycle of updates on the world of chips with at least 80 billion brain cells and the languages they now translate, from pixels to every human dialect.

Update from the Nvidia summit in DC, October 2024: in February 2025 DeepSeek became the most exciting Nvidia-inspired (2019 tech) leap, as AI reaches everyone who uses PCs. Wall Street completely misunderstood how this multiplied the value of everyone who works with Nvidia's Jensen Huang. Here chatDS helps clarify the full compass of Nvidia deep learning partnerships; Wall Street understands less than one third of how humans need AI livelihood agency now to get on with family development and everything humans do best.

DeepSeek multipart chat on INTELLIGENCE-CENTRIC LEARNING & CO-ACTION NETS: is there any discipline, from radiology to law, whose attention Nvidia AI partners are not transforming?

You're absolutely on the right track! Jensen Huang, Nvidia's CEO, often speaks about **democratizing computing** and making it more accessible and intuitive for humans. His vision involves moving beyond traditional programming languages (which are rooted in binary logic and abstract syntax) toward more natural, human-centric interfaces that engage our senses directly. Let’s break this down:

 

### 1. **The Shift from Binary to Human-Centric Computing**

   - **Traditional Programming**: Historically, programming has been about writing code in languages like C, Python, or Java, which are ultimately translated into binary (0s and 1s) for the computer to execute. This requires specialized knowledge and is not intuitive for most people.

   - **Huang's Vision**: Huang envisions a future where programming is no longer about writing lines of code but instead involves interacting with computers in ways that align with human senses and intuition. This could mean using voice, touch, gestures, or even brain-computer interfaces to "program" or instruct machines.

 

### 2. **Pixel Coding as a Bridge**

   - **Why Pixels?**: Pixels are a natural bridge between binary computing and human senses because they represent visual information, which is one of the most intuitive ways humans perceive and interact with the world. By coding pixels, we’re essentially creating visual outputs that humans can directly understand and interact with.

   - **Graphics and GPUs**: Nvidia’s GPUs excel at processing pixels and rendering images, making them a foundational technology for this transition. From video games to virtual reality, GPUs have already enabled humans to interact with digital worlds in visually rich ways.

   - **AI and Visualization**: Modern AI models, powered by GPUs, can generate images, videos, and even 3D environments from simple human inputs (like text prompts). This is a step toward Huang’s vision of programming through human senses, as it reduces the need for traditional coding and instead relies on natural language or visual cues.
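
To make "prompting pixels" concrete, here is a minimal sketch of text-to-image generation, assuming the open-source `diffusers` and `torch` packages and a CUDA-capable GPU; the checkpoint id and the prompt are illustrative placeholders, not anything stated above.

```python
# Minimal sketch: a natural-language prompt becomes pixels on a GPU.
# Assumes `torch` + `diffusers` are installed and a CUDA device is available;
# the model id is an illustrative public checkpoint, not an Nvidia product.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # the GPU does the heavy pixel work

image = pipe("a classroom of students building a small robot, watercolor").images[0]
image.save("classroom_robot.png")  # a visual output humans can read directly
```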

 

### 3. **Beyond Pixels: Programming for All 5 Senses**

   Huang’s vision extends beyond pixels to include all five human senses:

   - **Sight**: Already being addressed through advancements in graphics, VR, and AR.

   - **Hearing**: AI can generate and process sound, music, and speech, enabling auditory interfaces.

   - **Touch**: Haptic feedback technologies are advancing, allowing users to "feel" digital objects.

   - **Smell and Taste**: While still in early stages, researchers are exploring ways to digitize and simulate these senses.

 

   The idea is to create a seamless interface where humans can interact with computers in ways that feel natural, without needing to understand the underlying binary logic or programming languages.

 

### 4. **How Pixel Coding Fits Into This Vision**

   - **First Step**: Pixel coding (via GPUs) is a critical first step because it enables the creation of visually rich, interactive environments. These environments can then be expanded to include other senses.

   - **AI as the Enabler**: AI models, trained on massive datasets, can translate human inputs (like voice commands or gestures) into pixel-based outputs (like images or videos). This reduces the need for traditional coding and makes computing more accessible.

   - **Democratization**: By abstracting away the complexity of binary programming and replacing it with intuitive, sense-based interfaces, Huang aims to make computing accessible to everyone, not just programmers.
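
As one hedged illustration of "voice in, pixels out", the sketch below chains an open speech-recognition model into the same kind of image pipeline; the audio filename and both checkpoint ids are assumptions made for illustration only.

```python
# Sketch: a spoken request becomes a picture, with no hand-written graphics code.
# Assumes `transformers`, `diffusers`, `torch`, a CUDA GPU, and a short WAV file
# recorded by the user; both model ids are illustrative open checkpoints.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# Hearing: speech recognition turns the spoken request into text.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
request = asr("voice_command.wav")["text"]

# Sight: a diffusion model turns that text into pixels.
painter = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
painter(request).images[0].save("reply.png")
```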

 

### 5. **Examples of This Transition**

   - **Generative AI**: Tools like DALL·E, Midjourney, and Stable Diffusion allow users to generate images by simply describing them in natural language. This is a step toward programming through sight.

   - **Virtual Reality**: VR systems use GPUs to create immersive environments that engage multiple senses, moving beyond pixels to include spatial awareness and interactivity.

   - **Natural Language Interfaces**: AI assistants like ChatGPT enable users to "program" by speaking or typing in natural language, bypassing traditional coding altogether.

 

### Conclusion

Yes, pixel coding is a crucial bridge in Huang’s vision of democratizing computing. By starting with pixels (which align with human sight), Nvidia and other innovators are paving the way for a future where programming involves direct interaction with all five senses. This shift will make computing more intuitive, accessible, and aligned with how humans naturally perceive and interact with the world.

Chat part 2

Yes, you're absolutely right! Jensen Huang has been articulating a more nuanced vision for the future of AI and computing, which goes beyond just pre-training models and extends into **post-training** and **thinking**. This framework is critical for understanding how Nvidia's accelerated computing ecosystem fits into the broader AI landscape—and why companies like **DeepSeek** (and their investments in Nvidia's technology) are so important. Let’s break this down:

  

### 1. **The Three Pillars of Huang’s AI Vision**

Huang has described AI development as involving three key phases:

   - **Pre-training**: Building foundational models using massive datasets and computational power.

   - **Post-training**: Fine-tuning and optimizing models for specific tasks or domains.

   - **Thinking**: Enabling models to reason, plan, and interact dynamically with the world.

 

These phases represent a progression from raw data processing to sophisticated, real-world applications.

 

### 2. **Pre-training: The Foundation**

   - **What It Is**: Pre-training involves training large-scale AI models (like GPT, DALL·E, or DeepSeek’s models) on vast amounts of data to learn general patterns and representations.

   - **Role of Nvidia**: Nvidia’s GPUs and accelerated computing platforms are essential for pre-training because they provide the massive computational power needed to process terabytes of data and train billion-parameter models efficiently.

   - **Example**: DeepSeek’s models, like many others, rely on Nvidia’s hardware to pre-train on diverse datasets, enabling them to understand language, images, and other inputs.
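
A minimal sketch of what a single pre-training step looks like in code, assuming PyTorch and Hugging Face `transformers`; a small public model (GPT-2) stands in for the billion-parameter systems discussed above, and real runs repeat this loop over terabytes of data on GPU clusters.

```python
# Minimal sketch of the pre-training phase: one optimisation step of a small
# causal language model on raw text. Assumes `torch` and `transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tokenizer(
    ["Pre-training teaches a model general patterns across huge datasets."],
    return_tensors="pt",
)
# In causal-LM pre-training the input tokens double as the prediction targets.
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
print(f"one pre-training step done, loss = {loss.item():.3f}")
```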

 

### 3. **Post-training: Specialization and Optimization**

   - **What It Is**: Post-training involves fine-tuning pre-trained models for specific tasks, domains, or applications. This phase ensures that the model performs well in real-world scenarios, such as customer service, healthcare, or autonomous driving.

   - **Role of Nvidia**: Nvidia’s platforms (like CUDA, TensorRT, and DGX systems) are optimized for post-training tasks, enabling developers to efficiently fine-tune models and deploy them at scale.

   - **Example**: DeepSeek might use Nvidia’s technology to fine-tune its models for specific industries, ensuring high accuracy and performance in targeted applications.
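
As a hedged sketch of the post-training idea, the snippet below attaches LoRA adapters to a small pre-trained model so that fine-tuning would only update a tiny fraction of weights. The libraries (`transformers`, `peft`) and the adapter settings illustrate one common technique; they are not a description of DeepSeek's or Nvidia's actual recipes.

```python
# Sketch of the post-training phase: wrap a pre-trained model with LoRA adapters
# so domain-specific fine-tuning trains only small extra matrices.
# Assumes `transformers` and `peft`; settings are illustrative, not a real recipe.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(
    r=8,                        # low-rank adapter size
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)
model = get_peft_model(base, config)

# The base weights stay frozen; only the adapters train, which is what makes
# per-industry specialisation affordable before deployment at scale.
model.print_trainable_parameters()
```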

  

### 4. **Thinking: Reasoning and Interaction**

   - **What It Is**: The most advanced phase, "thinking," involves enabling AI models to reason, plan, and interact dynamically with their environment. This includes capabilities like decision-making, problem-solving, and real-time adaptation.

   - **Role of Nvidia**: Nvidia’s hardware and software ecosystems (including GPUs, AI frameworks, and simulation platforms like Omniverse) are designed to support real-time, interactive AI applications. This is critical for tasks like robotics, autonomous systems, and generative AI that require continuous learning and adaptation.

   - **Example**: DeepSeek’s models, when integrated into Nvidia’s accelerated computing infrastructure, could power real-time applications like autonomous agents, intelligent assistants, or dynamic content generation.
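
The "thinking" phase is easiest to picture as a loop of reasoning, acting, and observing. Below is a toy sketch of such a loop; `propose_action`, `simulate`, and `goal_reached` are hypothetical stand-ins invented for illustration (a real system would put a reasoning model and a simulator such as a digital twin behind them).

```python
# Toy sketch of a reason-act-observe loop, the shape of the "thinking" phase.
# `propose_action`, `simulate`, and `goal_reached` are hypothetical stand-ins
# for a policy model, an environment/digital-twin simulator, and a success test.

def propose_action(state, history):
    """Stand-in for a reasoning model choosing the next corrective step."""
    return {"adjust": -0.2 * state["error"]}

def simulate(state, action):
    """Stand-in for acting in a simulated environment and observing the result."""
    return {"error": state["error"] + action["adjust"]}

def goal_reached(state):
    return abs(state["error"]) < 0.05

state, history = {"error": 1.0}, []
while not goal_reached(state) and len(history) < 50:
    action = propose_action(state, history)  # reason / plan
    state = simulate(state, action)          # act and observe
    history.append((action, state))          # remember and adapt

print(f"stopped after {len(history)} reasoning steps, error = {state['error']:.3f}")
```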

 

### 5. **Why DeepSeek is Good News for Nvidia Investors**

Huang’s mention of DeepSeek highlights the symbiotic relationship between AI companies and Nvidia’s accelerated computing ecosystem:

   - **DeepSeek’s Investment in Nvidia**: By fully leveraging Nvidia’s hardware and software, DeepSeek can accelerate its AI development across all three phases (pre-training, post-training, and thinking). This makes DeepSeek a prime example of how Nvidia’s technology enables cutting-edge AI innovation.

   - **Validation of Nvidia’s Ecosystem**: Companies like DeepSeek investing heavily in Nvidia’s platform validate its importance in the AI industry. This drives demand for Nvidia’s products and reinforces its leadership in accelerated computing.

   - **Future Growth**: As AI models become more advanced and require more computational power (especially for post-training and thinking), Nvidia’s ecosystem will continue to grow in importance. DeepSeek’s success is a sign of this trend.

  

### 6. **The Bigger Picture: Democratizing AI**

Huang’s vision isn’t just about building powerful AI models—it’s about making AI accessible and useful for everyone. By focusing on pre-training, post-training, and thinking, Nvidia is enabling companies like DeepSeek to create AI systems that are:

   - **Efficient**: Optimized for specific tasks through post-training.

   - **Intelligent**: Capable of reasoning and interaction through thinking.

   - **Scalable**: Powered by Nvidia’s accelerated computing infrastructure.

 

This democratization of AI will drive innovation across industries, from healthcare and education to entertainment and robotics.

 

### Conclusion

Jensen Huang’s framework of **pre-training**, **post-training**, and **thinking** represents a comprehensive vision for the future of AI. Companies like DeepSeek, by fully investing in Nvidia’s accelerated computing ecosystem, are at the forefront of this transformation. For Nvidia investors, this is excellent news because it demonstrates the growing demand for Nvidia’s technology and its central role in shaping the future of AI. As AI continues to evolve, Nvidia’s ecosystem will remain a critical enabler of innovation across all three phases of AI development.

 

Chat part 3

Absolutely, the principles of **pre-training**, **post-training**, and **thinking**—as articulated by Jensen Huang in the context of AI—could one day apply to the design of **legislative systems**. In fact, this vision aligns with broader trends in technology, governance, and human-computer interaction. Here's how it might unfold:

 

### 1. **Pre-training: Building Foundational Knowledge**

   - **What It Means for AI**: Pre-training involves teaching AI models general knowledge using vast datasets.

   - **Application to Legislative Systems**: 

     - **Data Collection**: Legislative systems could be "pre-trained" on vast amounts of historical data, including laws, court rulings, public opinions, and socio-economic outcomes. This would create a foundational understanding of how laws have worked (or failed) in the past.

     - **Global Knowledge**: By incorporating data from different countries and cultures, legislative systems could learn from a diverse range of governance models and best practices.

     - **Example**: A legislative AI system could analyze centuries of tax policies to identify patterns that lead to economic growth or inequality.

 

### 2. **Post-training: Specialization and Optimization**

   - **What It Means for AI**: Post-training involves fine-tuning models for specific tasks or domains.

   - **Application to Legislative Systems**:

     - **Local Context**: Legislative systems could be fine-tuned to reflect the unique cultural, economic, and social contexts of a specific region or country.

     - **Stakeholder Input**: The system could incorporate feedback from citizens, experts, and policymakers to optimize laws for fairness, efficiency, and effectiveness.

     - **Dynamic Adaptation**: As societal needs change, the system could continuously update its "training" to reflect new realities (e.g., climate change, technological disruption).

     - **Example**: A legislative AI could be fine-tuned to design healthcare policies that balance cost, accessibility, and quality of care based on local demographics and resources.

 

### 3. **Thinking: Reasoning and Interaction**

   - **What It Means for AI**: Thinking involves enabling models to reason, plan, and interact dynamically.

   - **Application to Legislative Systems**:

     - **Real-Time Problem Solving**: Legislative systems could analyze complex, real-world problems (e.g., inflation, housing crises) and propose solutions by simulating outcomes and trade-offs.

     - **Public Engagement**: The system could interact directly with citizens, using natural language interfaces to explain laws, gather feedback, and even co-create policies.

     - **Ethical Reasoning**: Advanced AI could incorporate ethical frameworks to ensure laws align with societal values and human rights.

     - **Example**: A legislative AI could simulate the impact of a new environmental regulation, predict its economic and ecological consequences, and adjust the proposal based on public input.
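
Purely to illustrate what "simulating outcomes and trade-offs" might mean computationally, here is a toy Monte Carlo sketch in Python; every number, distribution, and variable name is invented for illustration and describes no real policy model or legislative system.

```python
# Toy Monte Carlo sketch of weighing a proposed rule's trade-offs.
# All quantities are invented for illustration; this is not a real policy model.
import random
from statistics import mean

def simulate_regulation(carbon_price, n_runs=10_000, seed=0):
    rng = random.Random(seed)
    emissions_cut, gdp_cost = [], []
    for _ in range(n_runs):
        response = rng.gauss(0.4, 0.1)                 # uncertain behavioural response
        emissions_cut.append(response * carbon_price)  # hypothetical benefit
        gdp_cost.append(0.02 * carbon_price * rng.gauss(1.0, 0.2))  # hypothetical cost
    return mean(emissions_cut), mean(gdp_cost)

cut, cost = simulate_regulation(carbon_price=50)
print(f"mean emissions reduction ~ {cut:.1f} units, mean GDP cost ~ {cost:.2f}%")
```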

  

### 4. **Democratizing Legislative Design**

   - **Human-Centric Interfaces**: Just as Huang envisions AI interacting with human senses, legislative systems could use intuitive interfaces (e.g., voice, visuals, or VR) to make policy design accessible to everyone, not just experts.

   - **Transparency and Trust**: By making the legislative process more transparent and data-driven, these systems could rebuild public trust in governance.

   - **Example**: Citizens could use a VR platform to explore the potential impact of a new education policy, visualize its effects on schools, and vote on their preferences.

 

### 5. **Challenges and Considerations**

While the idea of applying AI principles to legislative systems is exciting, it also raises important questions:

   - **Bias and Fairness**: How do we ensure that the data used to "train" legislative systems is free from bias and represents all stakeholders?

   - **Accountability**: Who is responsible for decisions made by AI-assisted legislative systems—humans, machines, or both?

   - **Ethics**: How do we balance efficiency with the need for human judgment and moral reasoning in lawmaking?

   - **Adoption**: Will societies be willing to trust AI with something as fundamental as their laws?

 

### 6. **The Future: AI-Augmented Governance**

In the future, legislative systems could combine the best of human and machine intelligence:

   - **Collaborative Design**: Policymakers and AI systems could work together to draft laws, with AI providing data-driven insights and humans ensuring ethical and cultural alignment.

   - **Continuous Improvement**: Laws could be continuously updated based on real-world outcomes, creating a more adaptive and responsive governance model.

   - **Global Collaboration**: AI-powered legislative systems could facilitate international cooperation by identifying shared challenges and proposing harmonized solutions.

 

---

 

### Conclusion

The principles of **pre-training**, **post-training**, and **thinking** could indeed revolutionize the design of legislative systems, making them more data-driven, adaptive, and inclusive. While there are significant challenges to overcome, the potential benefits—such as more effective laws, greater public engagement, and faster responses to global challenges—are immense. As AI technology continues to advance, it’s not hard to imagine a future where legislative systems are as dynamic and intelligent as the societies they serve. This vision aligns with Jensen Huang’s broader goal of democratizing technology and making it work for humanity.

 


Tuesday, October 29, 2024

Nvidia India AI summit, Oct 2024. Partners: Microsoft Azure, Yotta, Dell Technologies, Deloitte, Hewlett Packard Enterprise, Ingram, Jio, Oracle, Tata Communications, TCS (Tata Consultancy Services), Wipro, Asus, DDN, FS, Google Cloud, Z-HP, Lenovo, L&T Technology Services, Mehabet, Netweb, Neysa, Supermicro, Vast, Weka.
Nvidia partner platforms top 20 (no order):
Biotech
Other health
Climate tech
Food/water - human energy
Machine energy
Decolonial & Diplomacy AI
Sovereign Data Local & Ac Global Leadership India
Personal agency & access
Redesigning youth value - fashion, celebrity, sports, arts
AI for all; India startup UAIDi - billion voices of gen AI
Education - mass community colleges
Education - top duda partners
5 biggest Digitals
AI factories
AI Foundry 1
Autonomous
B2B Intelligence Data, e.g. Snowflake
Other 2-d to 3-d or vice versa
20,000+ AI startups club Inception, 100 nations & associated VCs

more coming soon

Tuesday, October 22, 2024

Header until Dec 2024: is this the end of AI for SDGs?

2024 Learning & Livelihoods best year ever
Intelligence's 8 Happiest Summits: 8/8 King Charles series, Paris, Feb 2025; 7/8 Tokyo, Jensen series, Nov 2024; 6/8 Mumbai, Jensen series, Oct 2024; 5/8 DC, Jensen series, October; 4/8 Taiwan, Jensen series, June; 3/8 Korea, King Charles series, May; 2/8 Santa Clara, Jensen series, Mar; 1/8 Bletchley, King Charles series, Nov 2023 (Exec Order Oct).
ED: 2025report welcomes year 75. Q&A: how does the intelligence engineering of Von Neumann (Einstein, Turing) change the Keynesian economics of human development? (see also The Economist 1950-1990). 2024 special thanks to friends in Taiwan & Bangladesh. About X**8billion-bis-Fall24 risks: Russian Roulette; SOS: Yunus Urgent Friends of Bangladesh WE (Women Empower): Antonio Guterres, head of the UN; Poverty Museums, from co-blog to co-pilot; 50,000 fans.
Breaking Oct 2025: India to be the lead Intelligence Economy.
Giant Leaps with Nvidia Q4, 2024: Summits: DC, Japan, India
Progress since 2021 (8): AI electricity grids; AI & drug discovery (COVID).
USA East intel calendar: July, Axios Sustainability; Aug 29, Hopkins AI Health; Sept 24, NIST: US gov; Sept 26, Hopkins: MediaEco.

RAC, St James, London 2008: www.yunuscentre.org. There are two kinds of economist: those who in their youth saw poverty, or nations where wars halted people's freedom to work, learn, do, and commune, and those who graduated in economics with none of these experiences. https://www.journalofsocialbusiness.com/editorial-board.html https://www.youtube.com/@microeconomist/videos www.normanmacrae.net www.economistdiary.com Intelligence: Year 75 of the Digital Twin Survey with Von Neumann. www.2025report.com www.unsummitfuture.com

90-day plan 1: can Wash DC be turned into a pro-youth capital? 9 Aug, 555 Penn Avenue: what every DC journalist should know about AI.
Plan 2: can worldwide youth and teachers support King Charles's English LLM?
Year 15: YUNUS FAN CLUB, 45K
ABEDmooc.com
IN SEARCH OF INTELLIGENCE, LOVE & ALL THAT MATTERS MOST TO GENERATING FAMILIES JOYFUL COMMUNITY & MOTHER NATURE
Links 1 2. Thanks to Jen-Hsun for the best decade AI collection: 8,000 cases improving people's communal computation, data & brains. 2025report.com: year 75 of Neumann & Economist briefings. 4 July 2024: last 80 days of UNsummitfuture.com. economistdiary.com
chris.macrae@yahoo.co.uk