Fei-Fei Li: Artificial Intelligence is on its way to reshape the world
Article · Full-text available · November 2020
Abstract
Fei-Fei Li, a well-known scientist focusing on computer vision and Artificial Intelligence (AI), did not expect such zeal about AI in China. During her last visit to Beijing, the Stanford University professor drew much attention from both academia and industry; NSR took the opportunity to interview Professor Li. She points out that, although neural networks have made marvelous advances in the past 15–20 years, enormous challenges still lie ahead. On the one hand, computational models for AI, such as many current deep neural networks, have theoretical bottlenecks to resolve, such as interpretability and explainability; on the other hand, AI should offer more in solving societal problems and in accelerating innovation in industries such as healthcare, traffic control and agriculture. This would be a more practical way to realize the potential and speed up the advancement of AI. Moreover, Prof. Li is interested in but cautious about Artificial General Intelligence (AGI).
Available via license: CC BY 4.0
By Yi Zeng and Ling Wang

FROM STANFORD TO GOOGLE

NSR: We learned that you joined Google earlier this year. Could you provide us with more details?

Li: Actually, this does not mean I am leaving the academic community. I will be on sabbatical, working as Chief Scientist of AI/ML at Google Cloud, until the second half of 2018. During this time, I will continue to work with my graduate students, postdoc fellows and collaborators at Stanford University.

NSR: Why did you choose Google Cloud? Will there be overlap in research topics between your lab and the company?

Li: If we look back at the fast development of AI during the past 20 years, especially its three subfields of Machine Learning (ML), Natural Language Processing (NLP) and Computer Vision (CV), we see that Web-based data has been a very important driving force in making AI stronger and stronger. So, what is the next step for AI? In my view, it is time for AI to help other vertical industries like healthcare, agriculture and manufacturing to transform and upgrade. Google Cloud is an excellent platform to accelerate this process, which has both scientific and commercial significance.

NSR: What will you do at Google Cloud?

Li: I will assemble a team with versatile talents to improve the AI and ML performance of Google Cloud and collaborate with the commercial departments to facilitate new products. We hope
to have more interactions with our counterparts in the academic community and welcome them to work at Google Cloud.

NSR: It is obvious that the rise of computing capability has enabled the recent advancement of some AI models, including deep neural networks. What else do you think advancing computing infrastructure could bring to AI?

Li: Computing capability affects not only the speed of computation but also the structure of AI models. For example, graphical models in ML were very popular in the 1990s. Due to the limits of computing power, experts hand-designed many features to reduce complexity and time cost. When more powerful computing infrastructure arose, we realized that hand-designed methods had missed opportunities. With powerful computational capability, more complex and efficient algorithms can be inspired and applied.

NSR: A related question: does a larger computation scale always perform better? For example, to realize the cognitive functions of the human brain and interpret the nature of human intelligence, should we set up a model consisting of the same order of neurons as our brain?

Li: That is a tough question, but scale really does matter. The Chinese saying "Quantity Breeds Quality" is appropriate for describing machine learning models, I think.

NSR: Some say that many AI scientists are doing the same things as statisticians. While AI scientists build models with 1,000,000 parameters, statisticians build models with 100 parameters to solve the same problems. What do you think?

Li: I am afraid I can't agree with this idea. In my view, statistical algorithms and AI do not stand on opposite sides; instead, they are complementary, and perhaps AI is a continuation of statistics. If we could solve a problem with a model with 100 rather than 1,000,000 parameters, that would be great. To be clear, the 1,000,000 parameters of an AI model have their meaning; we should support not only statistical research but also the interpretation of AI models. DARPA (Defense Advanced Research Projects Agency) initiated a project called "explainable AI", aiming to fathom the black box of AI models.

POTENTIAL PATHS TO INTERPRET AI

NSR: Many neural network models lack interpretability, especially the hidden layers in deep neural networks. Do you think brain research would inspire and improve interpretations of these networks?

Li: Although the neural networks of AI are quite different from those of the human brain, they did indeed borrow concepts from neuroscience. We are far from understanding our brain, and we are in a similarly awkward situation with neural networks in AI. It is possible that breakthroughs in neuroscience would stimulate AI interpretation, and cognitive science is another propeller to accelerate the process.

NSR: In fact, close cooperation between AI scientists and neuroscientists is an emerging phenomenon in China; the CAS Center for Excellence in Brain Science and Intelligence Technology (CEBSIT) is
the best example. The Center selects and recruits leading scientists to tackle problems in the interdisciplinary area of brain science and brain-inspired AI.

Li: That is great! This is probably not that obvious in the US. I hope there will be more cross-disciplinary cooperation all over the world.

NSR: It has been more than 60 years since AI emerged as a field of research, and there have been ups and downs in its development. Neural networks have become very popular these days; what about other branches of AI, such as symbolism and knowledge representation and reasoning, which were major subfields of AI? Should we integrate the efforts of knowledge engineering and neural networks?

Li: Neural networks have had huge success in solving problems that would otherwise seem impossible. AlphaGo's victory over a human master in the game of Go is the latest convincing proof. Symbolism had its peak in the last century, but seems less popular these days. To be honest, knowledge representation is a perplexing problem for me. Why do we humans form language, which is closely related to symbols? Is it a second choice, or does it have intrinsic advantages? There are indeed some preliminary works that hybridize symbolic methods (such as conditional random fields) with neural networks. But I think the most practical way for AI to leap forward is to break the bottlenecks of neural networks: their lack of interpretability, architectural knowledge and training flexibility.

IS AGI AN ILLUSION?

NSR: It is weird that Artificial General Intelligence (AGI) has been chased after in the industry community but currently seems not well accepted in the academic community. Some say it is the ultimate goal of AI. What do you think?

Li: I suspect that the propaganda of AGI is motivated by commercial interest, by people who have no idea what it really means. For example, what does it mean to be capable of doing mathematics? Proving Fermat's theorem, or what else? Does an unmanned vehicle have AGI? It has multiple sensors and functions to take on the driving load as well as humans, if not better. Or does a robot sent to Mars to build houses have AGI? I don't think there is a clear definition of AGI.

NSR: From my point of view (Yi Zeng), AGI can be interpreted from the perspective of autonomously deciding the types of problems to solve and coordinating all the cognitive functions that human beings have to solve very different complex tasks.

Li: I don't think that is AGI. If an AGI agent is approximately equal to a human, it has nothing to do with AI. Humans are not just universally capable beings; we have love, emotion and empathy, qualities that do not seem to be included in AGI. If we need to define AGI, maybe it is an agent capable of multi-knowledge representation, multi-sensory input, multi-layer reasoning and learning.

NSR: As human beings, many of us regard ourselves as at the top of biological evolution, and many may feel scared of being surpassed by machines in areas we are good at.

Li: Cars run faster than us, and cranes hold heavier objects; there is no need to be afraid. We have
emotion, which is unique.

NSR: There is a line of research on robot and machine consciousness. What do you think about creating a system that has consciousness and emotion like us?

Li: It is more of a philosophical problem. We are carbon-based structures, but at what level do we finally get consciousness and emotion? It is difficult to answer.

NSR: A recently published paper from the Institute of Neuroscience, Chinese Academy of Sciences indicates that trained monkeys showed self-consciousness by recognizing their images in a mirror, which challenges the traditional belief that monkeys don't have self-consciousness.

Li: Did they run neuron-correlation experiments? MRI is probably too coarse to prove this.

NSR: Yes, they are doing further experiments to support the findings.

GREAT PASSION PAYS OFF

NSR: ImageNet is a flagship project in computer vision research. Could you describe the motivation and why you started such a project?

Li: When I began to build ImageNet, there was little funding for me to do it. But I did not give up. I enjoyed the research process, and that was enough. Fortunately, six years later, ImageNet proved its value through a milestone paper that heralded a new spring for AI, especially for neural networks. Compared with Geoffrey Hinton, I feel very lucky: he had stuck to his research field for more than 20 years before it paid off, and I did not have to wait that long. I admire his passion very much.

NSR: Passion is very important for scientific research. What about choosing the direction?

Li: Statistically, there is no sure way to guarantee success, and that is especially true in research. Failure, if not inevitable, is very likely. So my experience is to choose what I have great passion for and enjoy the process. If you feel exhausted and regretful, it does not deserve your time.

NSR: Besides passion, you also need courage.

Li: Indeed. If you want to pursue science, you must pursue freedom and truth. Being free to choose your direction sounds positive, but it is scary, for there is no codified experience to draw upon. You need to rely on yourself and bear the risk of failure.

NSR: Ground-breaking work in fundamental science has been called for in China for a long time, but the outcomes are not that satisfying. What is your suggestion?

Li: There are so many smart brains in China, and research funding is increasing steadily. I think my counterparts here now have better chances to do original work. Perhaps tolerance for failure should be a concern for the whole society, which would mean scientists have the freedom to do what they have passion for. It is known that the tenure track was invented to protect the curiosity of scientists and support exploration in uncharted areas. I heard that leading universities in China have gradually adopted the system; it is a good sign.
As we mentioned earlier, "Quantity Breeds Quality"; incremental research is also important. The Deep Residual Networks (ResNets) proposed by Kaiming He, currently the prevalent architecture in computer vision, have shown great potential in areas like NLP and speech recognition. I hope there will be more and more "Kaiming He"s in China.

INTO THE FUTURE OF AI

NSR: AI has been hyped these days, and once a startup brands itself as an AI company, it attracts investment more easily and thus brings in more talent. How will this tide affect fundamental research in AI?

Li: It is an interesting time for AI, to say the least. AI is a real deal, but it is also heavily hyped by communication and presentation without care or rigor. I firmly believe AI is as real as computing, the Internet, renewable energy, new materials, etc. But indeed, the amount of frivolous talk about AI is also intense right now. It impacts everyone: entrepreneurs, investors, big companies, governments, funding agencies and basic research institutions. Many AI researchers have repeatedly called for a balanced discussion of AI at both academic and public forums. We will continue to do this. The truth is that this is the time to double down on basic, fundamental research in AI. It's time to support and fund long-term research that addresses many of the most difficult and yet-to-be-solved problems of AI.

NSR: What do you think are the most important challenges in AI in the coming years?

Li: There is a long list of challenges. For machine learning, the most critical problem is learning to learn: the models applied to solve problems today are hand-designed architectures, and unless we understand their working mechanisms, it is hard to reach that target. For NLP, we still stay at a relatively superficial level, lacking deep interactive dialogue. For computer vision, although products based on it are emerging, we are far from fulfilling our aims. If computers could observe the world as we do and make correct judgments, that would substantially change our lives.

Yi Zeng is a professor at the Institute of Automation, Chinese Academy of Sciences, and Ling Wang is a science news reporter based in Beijing.
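The residual connection that Li credits to Kaiming He's ResNets can be illustrated in a few lines of NumPy. This is a toy sketch of the general idea only (the feature width, the two-layer block body and the ReLU placement are illustrative assumptions, not He's exact architecture): the block computes a small correction F(x) and adds it back to its input, so the identity mapping is easy to preserve and very deep stacks remain trainable.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = relu(x + F(x)): learn a residual F(x) instead of a full mapping."""
    h = relu(x @ w1)    # first weight layer + nonlinearity
    f = h @ w2          # second weight layer: the residual F(x)
    return relu(x + f)  # identity shortcut added before the final ReLU

d = 8                                    # toy feature width (an assumption)
x = rng.standard_normal(d)
w1 = 0.1 * rng.standard_normal((d, d))
w2 = 0.1 * rng.standard_normal((d, d))

y = residual_block(x, w1, w2)
print(y.shape)  # (8,)
```

Note the design choice the shortcut encodes: if both weight matrices are zero, F(x) vanishes and the block simply passes relu(x) through unchanged, which is why adding more such blocks does not have to hurt a deep network.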
Figure 1: Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab. (Courtesy of Fei-Fei Li)