an evolutionary perspective

an essay in the series views from the mountains

first: February 4, 2022, last: January 18, 2023

The rapidly increasing flood of true and false information is a major challenge for our society. It affects all levels, from the everyday information of ordinary citizens to the loftiest of sciences. In the worst case, it leads to the disintegration of society into largely isolated bubbles, ultimately into an information- and opinion-foam.

I propose that coping with that flood demands an artificial intelligence (AI) that extends a person’s capabilities and that codevelops with them. That is not yet possible with today’s technology. However, (i) there are no fundamental obstacles on the way – all living organisms have already “solved” the problem –, (ii) the advantages are so overwhelming that evolution will find a way, and (iii) there are a number of promising technological lines with already impressive demonstrations.

These personal AIs open a natural inheritance path, besides biological evolution. More precisely, they greatly widen the existing path of cultural inheritance. An inevitable consequence is that the person-AI conglomerate will ultimately leave the spheres of human control and imagination. This line will naturally merge with other lines that emerge from the various roots of humankind’s culture. They all evolve subject to the same principles of evolution mechanics and unfold towards a possible evolutionary transition in individuality farther down the road. All this, of course, will only materialize if the lineage of humankind does not falter and continues to unfold.

The information flood has been an issue for many decades now. With the recent advent of cheap and pervasive broadcasting channels in the form of social media like Twitter and Facebook, that flood has increased dramatically in volume.

The largely non-interactive flood of information far exceeds what any group or institution can absorb, let alone an individual.

example: COVID-19
The recent pandemic serves as an excellent example of how our modern societies are already challenged by rather small issues.

As we all know, long-established and internationally agreed-upon procedures were not followed in Western countries, and COVID-19 unfolded into a pandemic with millions of deaths and severe economic and social impacts. It also set off a huge information flood with innumerable details from virology to medicine and a litany of ever-changing rules. The quality of those mostly microscopic information and opinion pieces spanned the full gamut. At one end were mostly scientific sources that prematurely emitted preliminary results from solid research. This prematurity, together with occasional mistakes and fraud, was inevitable because the rapid dynamics and need for action prevented an in-depth analysis and comprehensive judgement. The other end was marked by an assortment of uneducated sources, ranging from outright weirdos through alternative medicine to politics. They emitted streams of opinionated, misleading, or false pieces. These were consequences of lacking expertise but, more importantly, of those groups following their private agendas, using the pandemic just as a vehicle. Reasons for that untamed flood are certainly many, including personal ignorance, arrogance, and fear, but also the more severe and obvious failure of many governments to adequately cope with the issue while it was still controllable. In the resulting cacophony, only very few had the background to comprehend and judge the unfolding situation with a balanced societal perspective. Such a perspective demands a reasonable understanding of the fairly complicated dynamics of a new and contagious virus in our multiply connected world, but in addition also of aspects of physical and mental public health, economy, and societal development.

As mentioned at the beginning, COVID-19 is a rather small issue, both in terms of complexity and consequences. Climate change would be a much more severe example; our current technological development is in a yet more challenging category.

Besides increasing in magnitude, the information flood also changed in character over the past few decades. Where it was originally “few to many” broadcasting – through radio, television, newspapers, books – it is now “many to many”, still broadcasting, and anyone can do it at practically no cost. It superseded a significant part of the previous face-to-face communication, most of which got transferred to broadcasting through WeChat, WhatsApp, Facebook, TikTok and the like. This greatly lengthened the short feedback loops of direct communication, or even suppressed them completely, making the reconciliation of the common information landscape very difficult.

inevitable consequences

Two consequences of an unmanageable information flood can be observed in the example of the COVID-19 pandemic, and are indeed manifestations of a more general situation:

  1. False information propagates much faster and wider than the truth [Vosoughi et al. 2018]. Eradicating it, even in the face of overwhelming empirical evidence, is very difficult, if feasible at all. Establishing a news ecosystem that honors truth has indeed been identified as a major challenge for our culture [Howell 2013; Lazer et al. 2018].
  2. The untameable flood of true and false information inevitably leads to the emergence of information bubbles, many with an assortment of groundless beliefs. The eventual information- and opinion-foam is a direct and inevitable consequence of opinion dynamics when information spaces grow ever larger while the agents’ capabilities remain the same [Axelrod 1997; Flache et al. 2017; Törnberg 2018].
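
The mechanism behind point 2 can be illustrated with a minimal bounded-confidence sketch in the spirit of the cited opinion-dynamics models; all parameters and the clustering heuristic below are illustrative choices, not numbers from the literature:

```python
import random

def bounded_confidence(n_agents=200, eps=0.1, steps=20000, seed=0):
    """Deffuant-style bounded-confidence dynamics: two randomly chosen
    agents average their opinions only if these differ by less than eps.
    A small eps freezes the population into separated opinion bubbles;
    a large eps lets it converge towards consensus."""
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i != j and abs(opinions[i] - opinions[j]) < eps:
            mid = (opinions[i] + opinions[j]) / 2
            opinions[i] = opinions[j] = mid
    return opinions

def n_clusters(opinions, tol=0.01):
    """Count opinion clusters: gaps wider than tol separate clusters."""
    xs = sorted(opinions)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > tol)

# A narrow confidence interval yields several isolated bubbles,
# a wide one yields (near-)consensus.
print(n_clusters(bounded_confidence(eps=0.08)))
print(n_clusters(bounded_confidence(eps=0.5)))
```

The point of the sketch is structural: the foam emerges without any agent being irrational, purely because agents only interact within their limited horizon.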

As a consequence, any group or society whose members can no longer handle the essential information floods grows into a self-organizing entity that is unable to decide on its common future.

That situation is strikingly manifest in today’s political arena. Control is seized by diverse individuals and subgroups who goad public opinion as well as the “democratic process” along their private interests. Instruments for such goading range from the seemingly philanthropic but one-sided funding of research of all sorts, through gerrymandering in representative democracies, all the way to undisguised lying with fake news.

current operation

Ideally, the flood of false information would be suppressed at the source or in the transmission channel. In the Western world, lukewarm efforts are indeed underway in both directions. They are lukewarm because they are hard to reconcile with the freedom of opinion and speech, but sadly also because powerful institutions have an interest in misinformation, institutions all the way up to governments. For such systems, the only feasible intervention point is at the receiver’s end.

Besides a flood of fakes, lies, and manipulations, there is also a flood of perfectly legitimate and valuable information, as any expert in any field can attest. It covers all of humankind’s activities, from scientific research to cultural events, stretches over a wide range of prerequisite knowledge, and it is mostly highly redundant, but not completely.

For any one person, group, or institution only a minute fraction of the available and true information is relevant.

The challenges are that (i) the relevant part may be highly dispersed and disconnected within the huge space of available information and (ii) for any one situation it is in general not clear what information would be relevant, with keys sometimes lying open in far-away fields, hence practically hidden.

Despite the unmanageable magnitude of the information flood, or size of the information pool, humankind has nevertheless developed pragmatic procedures to cope with the situation and gain at least partial results. These procedures include:

  1. Follow general lines that proved relevant before, but beware of the confirmation bias.
  2. Follow some of the emerging side lines to some depth, which may lead to new territory and may also counteract the confirmation bias.
  3. Cross-check to some depth all information for consistency and trust.
  4. Occasionally pick some random piece of information for just the chance of a lucky strike or a new perspective.
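
As an illustration only, the first, second, and fourth procedures can be read as a simple stochastic scheduler over information sources; the weights, the source list, and the `relevance` scores below are hypothetical:

```python
import random
from collections import Counter

def pick_next_source(sources, rng, p_main=0.6, p_side=0.3):
    """Toy scheduler for the procedures above: mostly follow proven
    main lines (1), sometimes emerging side lines (2), occasionally a
    purely random pick (4).  Cross-checking (3) would act as a filter
    on whatever the chosen source delivers and is omitted here."""
    r = rng.random()
    if r < p_main:
        # procedure 1: prefer the line that proved most relevant before
        pool = [s for s in sources if s["kind"] == "main"]
        return max(pool, key=lambda s: s["relevance"])
    if r < p_main + p_side:
        # procedure 2: sample side lines to counteract confirmation bias
        return rng.choice([s for s in sources if s["kind"] == "side"])
    # procedure 4: pure chance, for the lucky strike
    return rng.choice(sources)

rng = random.Random(1)
sources = [
    {"name": "journal A", "kind": "main", "relevance": 0.9},
    {"name": "journal B", "kind": "main", "relevance": 0.4},
    {"name": "blog C", "kind": "side", "relevance": 0.2},
    {"name": "forum D", "kind": "side", "relevance": 0.1},
]
picks = [pick_next_source(sources, rng)["name"] for _ in range(1000)]
print(Counter(picks))
```

Note how the random branch is what keeps low-relevance sources reachable at all; without it, the scheduler collapses onto confirmed lines.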

Time and again those procedures fail spectacularly, however, as again any practitioner can attest. We may miss crucial developments in other disciplines, may have a hard time gaining useful insight into some complicated fields, or we may foolishly fall for some elaborate scam, sometimes even wanting to fall for it, as in the cases of smoking or climate change. Furthermore, the efficiency of those procedures drops rapidly as the information space increases.

For any individual, group, or institution, the likelihood of catastrophic failure in finding crucial information increases rapidly as the information overload grows.
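
A toy model (our own illustration, not a result from the cited literature) makes the claim concrete: with a fixed reading budget, the chance of missing a crucial item grows steadily with the size of the pool.

```python
from math import comb

def p_miss(pool, crucial, budget):
    """Probability of missing at least one of `crucial` key items when
    inspecting `budget` randomly chosen items out of a pool of `pool`
    (hypergeometric sampling without replacement)."""
    if budget < crucial:
        return 1.0
    return 1.0 - comb(pool - crucial, budget - crucial) / comb(pool, budget)

# One crucial item, a fixed budget of 100 inspected items:
for pool in (200, 1000, 5000):
    print(pool, round(p_miss(pool, crucial=1, budget=100), 3))
# miss probabilities: 0.5, 0.9, and 0.98 respectively
```

Real search is of course not blind random sampling, but the trend survives any fixed strategy: a constant budget against a growing pool loses.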

the way forward – an AI

The procedures outlined above are limited by an individual’s communication bandwidth and processing capacity. While these cannot be increased significantly in a single human, and only with difficulty in groups, an artificial intelligence (AI) could lead to a qualitative change.

two avenues – personal vs central

There are two fundamentally different views on where such an AI should be located. From an individual’s view it should be personal, to ascertain trust in the information the AI supplies and to keep the detailed knowledge of the person’s horizons and inclinations private. For a society the AI should be central, which would allow it to focus, diversify, even nudge groups according to society’s needs and perspectives. A prerequisite would be that the central AI gains a detailed knowledge of the individual persons.

Influencing groups and extracting personal information naturally raises the trust issue for individuals.

Such trust can, did, and does exist in different groups and societies. It invariably arises from a prolonged, consistent, and demonstrated responsible behavior of the central entity towards its constituents. Given trust, individuals are willing to give up a great deal for the good of society. The current state of most Western-style democracies indicates that this is a severe challenge for large modern societies, however.

Where to locate the AI, or more generally where to locate which part of the AI, is a difficult decision that depends on the envisaged role of the individual in the specific society, besides the technical feasibility. A further complication comes with the hierarchical nature of most human social structures. With respect to the above, a “person” may indeed be a single human being but also a possibly hierarchically organized group of beings, for instance a company, a political party, or an interest group. A “society” then is a social aggregate of “persons”. With this, a “personal AI” may be a single uniform entity for the respective “person” or it may be a hierarchical aggregate of the constituents’ AIs. The AIs of different persons may shadow the persons’ social network, but they may equally well build networks of their own. There are apparently many possibilities, and they are hard to evaluate.

To be specific, in the following we focus on a personal AI for individuals and consider their “social” interactions only marginally. The technical aspects of a central AI would not be much different. It is the societal side that opens an entirely new and very difficult chapter.

a personal AI

An AI specifically for a person could first of all greatly expand the “some” in the established pragmatic procedures outlined above for the handling of the information flood. The person thus could gain a broader and deeper view on any topic and reject more sophisticated fake information. More importantly, the AI could access information in fields that are beyond the person’s grasp. It would then have to aggregate the harvested information, extract the corresponding comprehensive perspective on the topic, and translate the aspects deemed relevant for the person into their knowledge base. Incidentally, this is very similar to the functioning of our biological cognitive system.

Unfortunately, there is also a downside to all this. With the same ease with which an AI gains, aggregates, and translates relevant true information it can also generate fake information, indeed entire fake worlds. Foreseeably, this will lead to a challenging race.

A key question arises with such a view: who educates and controls the AI? With the AI at least guiding, more probably actually controlling the flow of information to the person, control of the AI must not be external.

The AI must be considered an integral part of the person in all respects.

A complementary question of similar importance: what is relevant information? There apparently is no general answer to this, not even for a specific person and time. It all depends on the evaluation of the possible paths, most of which disappear behind the typically nearby deterministic time horizon. It is all about surviving in the Red Queen’s race and this comes with its share of chance.

the vision

With the AI an integral part, it is natural that it is also educated as such. It grows with the person such that their Vorstellung of the World codevelops and the two form a tight and harmonious person-AI conglomerate.

A basic AI is provided for every child very early on. It carries over aspects from the child’s parents and it contains related sociocultural aspects. This greatly widens the cultural inheritance path that is analogous to the already existing biological paths through genetics and epigenetics.

Since the two are coeducated – from socialization within family and society to the diverse formal educations –, the AI learns the idiosyncrasies of the child’s personality and their unfolding through the person’s development. Thus, it absorbs the material to be taught and supplies it to the person in a way that is optimal for its reception. This greatly reduces the challenges today’s education systems face with persons outside the systems’ narrow norms, including those with special gifts (creatives, artists, scientists,…) and correspondingly special characteristics and needs. There is more room for the social integration of a much higher diversity of personalities. It also leads to a better development path that allows a fuller unfolding of the person’s specific talents. This in turn leads to a society with a much broader and deeper perspective on its world and its desirable future path.

This vision certainly sounds frightening to many. However, it is just an extension of our current situation, with one important modification. Indeed, today a search for some topic typically starts with Google or Baidu; more recently I may start an exchange with You or ChatGPT. Each of these systems recognizes me, recalls my earlier searches and exchanges together with the profile it deduced for me, and responds in a way it deems optimal for me. My personal AI would do something similar, albeit searching much deeper and more comprehensively, as it “knows” my horizon of knowledge, understanding, and communication very much better. The important modification: Google, Baidu,…, ChatGPT,… are already immensely powerful, but they are all external instances with unclear attitudes towards me. My personal AI is part of me.

The difference from today’s interaction with some information source is trust. Lost with traditional sources, it can be reestablished with a coevolving personal AI.

The above vision of course raises impossibly hard questions. What is the setup of the AI when it is handed over to a child? Is there any sort of filter or control imposed on it, for instance concerning social and cultural norms, conspiracy theories, criminal acts,…, or can it access whatever information is available? What kind of feedback is there between my personal AI and external instances of all sorts? How is my trust established and maintained? Those difficult questions can only be answered in broad societal discussions that unfold together with the technological development, ideally preceding it a bit.

Foreseeably, different societies will take very different routes here – they have to, since there are no best answers. These routes in turn differentiate societies and contribute to their differential fitness in the global competition. Yet another Red Queen’s race.

the vision’s unfolding

The course of the unfolding is impossible to predict in any useful detail. The general structure appears fairly clear, however.

inheritance

Initializing a new instance of a personal AI with some basic Vorstellung of the World is a crucially important step. A natural choice for this is some sort of inheritance. Possible sources range from the person’s parents, through their social network, all the way to then-available libraries of AIs. The latter may be donated, or they may be constructed from society’s state on the backdrop of its culture, from different current societies, or even from long-past cultures or utopian projections. Obviously, the possibilities are many, their ethical implications potentially severe, and their consequences unforeseeable. The same applies to the procedures around choosing. Again, this is the typical situation for evolution with its Red Queen’s races.

development

The development of a biological person reflects their innate and learned capabilities as well as their physical and social environment. The same applies to the codeveloping person-AI, which however ultimately brings in an additional layer, a layer beyond the person’s body-emotions-mind.

This will lead to a personality that is quite different from that of the biological person alone.

One reason for this is that the horizon in the external world is very much larger, which gets naturally reflected in the personality. The larger horizon indeed encompasses two very different aspects: For the touchable world, it is merely a faster and more comprehensive understanding of ever more complicated situations. This is already a huge step. A really new window may open for the untouchable world, which includes the huge realm from my emotions to my inner perceptions. Our cultures represent this realm by fields like ethics, laws, and religions. Linking my inner reality in this realm and its external cultural representation is all but impossible, however. This is confirmed by our ongoing study of concepts, texts, and artifacts that date back centuries, even millennia. The new window may open because the AIs allow direct access to their Vorstellung, which are also close proxies to those of the persons.

Another reason for a different personality is the increased ability for self-construction, which includes the recognition and conscious modification of innate traits, both of the person and of its AI. Such a “construction of my self” may sound frightening, in particular when an AI gets involved. However, we have already been doing this for ages by consciously suppressing undesirable emotional and mental states and strengthening desirable ones. In more recent times, a rapidly growing arsenal of technological helpers came to support and guide us, from alarm clocks, health monitoring and coaching applications of all sorts, to sleep and wake controls, many with increasingly powerful AIs that get to know us. Admittedly, these are tiny steps compared to the self-construction that becomes feasible with the personal AI envisaged here. Importantly, the foreseeable technological development in this field is much faster than our typical adaptation rate.

society

The person-AI conglomerate is naturally integrated into society’s hierarchy, as persons are today. It has much higher bandwidths and capabilities, however. A direct consequence of this is that societies that today are deemed to be large, with the corresponding inherent limitations, are no longer large and continue to unfold guided by their constituents. Obviously, societies will continue to increase in “size”, to become more complex, but the person-AI conglomerate scales much more effectively than a purely biological person.

evolution

Inheritance through the AIs opens the door to cultural evolution of an unprecedented magnitude. Culture is of course also transmitted and “inherited” through biological persons alone, with the aid of libraries of all sorts. There is a catch, however: It is not the actual Vorstellung that is transmitted, but just tokens that, ideally, allow a person to re-create the original Vorstellung, and every person has to do this re-creation. Thus each one of us had to learn to read and write, some had to learn quantum mechanics or microbiology, others the meaning of the Diamond Sutra or the Bible, and many, many other aspects of our culture.

The need for re-creation – because there is no way to directly access a person’s Vorstellung – is the fundamental limitation of our current cultural evolution and it exerts a very strong selection.

This determines the rough direction of humankind’s cultural evolution, because some aspects are much easier to re-create and understand than others. This is apparent when thinking of aspects from the touchable world like rocks, trees, cows, cars, or espresso machines and comparing them to aspects of the non-touchable world like love, fear, anger, or the full moon.

The situation for cultural evolution changes completely with the emergence of personal AIs. They allow a direct access, and hence exchange, of their Vorstellung or of some of its aspects. In addition, through the close association with the person, the AI also provides a close proxy to the person’s Vorstellung. This allows a direct exchange of aspects of the Vorstellung per se, not just of tokens for re-creating them. Such an exchange, which is analogous to horizontal gene transfer in the biological realm, is much more rapid in exploring the space of possibilities than regular inheritance. Given direct access, we may expect that sociocultural aspects from the non-touchable realm are eventually transmitted with similar ease as those from the touchable world.

feasible?

Leaving the difficult and important societal and cultural aspects aside and looking only into the more technological aspect: would the above vision be feasible at all? The short-term answer is: No. Not today, not with our current technology, and it will not emerge with a single next innovation. The long-term answer is more interesting:

An at least structurally similar form of the vision will ultimately get realized. It will emerge along an evolution-like path whose unfolding is driven by human ingenuity and engineers.

Where does the conviction for such a development come from? It stems from recognizing the mechanics of evolution, to which humankind’s societies and cultures are subject just as biology was and is. In their competition for resources, which includes human creativity besides the more traditional physical ones, information is most valuable: Where to find what, and what to design and construct for what? The specific information sought is that which is relevant to survive and advance in the competition and is not tainted by error, fake, or manipulation. This “what, where, and what for” may at first sound like a difficult but straightforward issue, essentially aiming at technology for producing food and building houses, roads, and iPhones. However, it is much more intricate than that. It also involves social, legal, and religious structures, goes on to the arts, and to our dreams and visions. How come? All these are crucial factors for the creativity and productivity of a society, hence for its fitness within the larger frame of cultural evolution.

Associated with relevant information come huge fitness gains. Evolutionary processes will find ways to realize them.

Those ways will not necessarily be along lines we anticipate today, but they will eventually provide the envisaged functionality.

Obviously, not everything leading to a fitness gain can be realized. There may be insurmountable fundamental obstacles. However, for the case of a personal AI that harnesses the information flood, we know that there are no fundamental obstacles. Why? We humans have solved the problem already. And so did all other living organisms, admittedly to very different extents. What we aim for now is thus not the emergence of something completely new but just the re-creation of something that exists already, albeit re-creation with different means. There is a further facilitation: The cognitive capabilities of all living beings emerged autonomously. This demonstrates the general and robust feasibility, even if it did take a very long time. The corresponding emergence of an AI will be much faster because it is guided by engineers who gain an ever deeper understanding of how nature’s solutions work.

The importance of AI for the extraction of relevant information from a wide range of knowledge bases and for high-quality communication with humans can hardly be overestimated. Indeed, today all big tech companies have active research programs in this field. One proving ground are AIs that can argue with humans on a wide range of topics. Globally, some 50 research labs pursue this goal [Reed 2021], with some already impressive demonstrations, e.g., by Project Debater [Slonim et al. 2021] and more recently the publicly accessible general-purpose language model ChatGPT.

The performance of both systems is already mind-blowing, certainly when compared to an ordinary human in the systems’ fields of expertise. Still, they are far from what is required for a personal AI. For one, their knowledge base is essentially static, even though it can be fine-tuned with additional input. With this, codevelopment with the person, let alone coevolution with society, is out of reach. More under the hood, to achieve high-quality results, all approaches so far have an enormous demand for data, which may not exist for all fields. Furthermore, the computational cost for training and operation is huge and, more importantly, it scales very unfavorably with the size of the data [Thompson et al. 2021]. Both data requirements and computational resources are clearly far beyond what current technology could provide for a personal AI. Finally, none of these systems is foolproof, we hardly ever understand how they arrive at their statements, and they are biased by the basis they have been learning from. While this is no consolation, humans suffer from all these issues as well. In fact, it is not even clear that those flaws can be prevented, even in principle.
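
The unfavorable scaling can be made tangible with a back-of-the-envelope sketch; the exponent below is an assumed illustrative value, not a number taken from [Thompson et al. 2021]:

```python
# If the model error falls only as a small power of the invested
# compute, error ~ compute**(-alpha), then shrinking the error by a
# factor f requires f**(1/alpha) times more compute.
def compute_multiplier(error_factor, alpha=0.05):
    """Factor of additional compute needed to cut the error by error_factor."""
    return error_factor ** (1.0 / alpha)

# With the assumed alpha = 0.05, merely halving the error costs about
# a million times more compute.
print(f"{compute_multiplier(2):.0f}x")
```

Even if the true exponent differs, the qualitative conclusion stands: polynomial scaling with a small exponent makes brute-force improvement prohibitive for a personal AI.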

Apparently, we are not there yet and some massive roadblocks lie ahead. However, there are also a number of accessible alternative routes including “lifelong learning machines” [Anthes 2019] and neuromorphic systems for brain-inspired computing [Zhang et al. 2020, 2021; Wu et al. 2022]. These indeed even carry the promise of an AGI, an artificial general intelligence that would have capabilities similar to those of a human being.

Hype or already a realistic perspective, an AGI is far beyond what is required for realizing the above vision, certainly for a start. No strategic planning, complicated multi-objective decisions, creative projections, or the like are necessary for mining the ever-expanding human knowledge and cultural creations, and for communicating the findings into my similarly expanding knowledge world. Indeed, the AI that begins to satisfy the vision would not need to be more intelligent than me, not even in its very speciality. It suffices that it can mine information from many sources, project its findings into my knowledge world, communicate with me, and, importantly, codevelop with me and over longer times coevolve with society. It is these latter two requirements – codevelopment and coevolution – that are essential for opening the larger way, and they will naturally bring forth the higher functions.

None of the basic elements are novel: For simple searches starting from keywords, the tools and databases are in place (Google, Baidu,…). Projections into some common knowledge world are also established (Wikipedia,…), and ChatGPT already goes a long way towards automated and personalized projections. Communication already has deep roots, is reasonably personalized, and is continuously expanded to cover natural language (Alexa, Siri,…) and augmented/virtual reality, eventually envisaging a metaverse. Finally, coevolution has also been operating for quite some time, although it is not yet automated and far from autonomous: it is still humans who are the prime drivers, actors, and innovators. This coevolution is manifest in our daily operation, which has changed dramatically over the past few decades. Gone are the times when we had to go to libraries to find deeper information or fill endless sheets of paper to perform fairly simple calculations. All this and much more is now at our fingertips, increasing our capabilities dramatically. More to the technological front, AI systems have very short development cycles in which they take huge steps, as exemplified by GPT, the basis of ChatGPT.

inevitable consequences, again

My personal AI must be a developing and ultimately evolving system. That is the only way it can adapt to the expansion of my knowledge world and to that of humankind, with which it exchanges information, knowledge, and understanding. It is in the nature of evolutionary processes that the specific path along which such a person-AI conglomerate unfolds cannot be predicted.

Still, some very general aspects of the unfolding appear clear, as they are commanded by the mechanics of evolution. As with all evolutionary lines, their progression, mergers, and transformations are just possibilities that may or may not be realized. Any such line can falter, because it becomes unstable and collapses out of itself or because it is overtaken by some other line with stronger capabilities. If a line from a personal AI continues to unfold, however, the following qualitative stages appear inevitable.

At an early phase, the emergence and formation of the AI’s Vorstellung is mainly guided by the person’s questions, searches, and insights. Since the AI’s Vorstellung is inheritable – while that of the person is not, at least not directly – it will evolve. This is not fundamentally linked to a person’s life cycle, and structural updates of the AI’s machinery can be expected at much shorter time intervals. With this, the AI’s Vorstellung will rapidly outgrow that of any one person, and the guidance will be the other way round: the AI will autonomously recognize and integrate deeper aspects of the World and inform the person accordingly.

The person-AI conglomerate will ultimately create a common Vorstellung of the World, together with a new layer of consciousness that will far exceed the capacities of any one person.

A new layer of consciousness? There is certainly room, and need, for it, as I readily recognize when I consider my limitations. These show up when I just think of my active awareness, and thus consciousness, or rather the lack of it, of my life and its embedding. Am I aware that I am a dependent part of a much larger biological and technological ecosystem with no chance to survive on my own? Or of the microbiome that pervades, and in important aspects dominates, my body? Or of the cultural background of my Vorstellung and actions, all modified by anatomic and psychological filters? Filters that emerged through evolution at large and are hardly recognizable by my conscious mind, let alone controllable, and that often act to my personal disadvantage for the benefit of society.

Understanding does not emerge only at the level of individuals, of course. Instead, hierarchies of diverse cooperating groups and institutions contribute a growing fraction to humankind’s understanding, to our objective world. These hierarchies, too, will form conglomerates with correspondingly more extensive AIs, a development that may actually precede the emergence of AIs for individuals. Again, a common Vorstellung will emerge, new layers of consciousness that will ultimately exceed those of the related groups.

Apparently, the situation for hierarchical groups is not fundamentally different from that for biological persons. There is an important aspect, however: Evolution operates on the diversity of life’s relevant constituents and on their relative fitness. The more diverse that population is, the broader the space of possibilities that evolution can explore. On the downside is the cost incurred by each constituent, here the cost of maintaining and operating the associated AI. The trade-off between the two will eventually determine the hierarchical depth down to which the “individual-AI” conglomerate will persist.

The emerging hierarchical layers of AIs will interoperate, as humans already do. In parallel, they will naturally extend to other realms of society that started out evolving independently but are driven by the same principles along analogous paths. These developments indeed already touch all aspects of our society, including manufacturing, infrastructure, transport and distribution, as well as societal organization and politics. Foreseeably, they are going to be integrated, and there is no apparent reason why this process would have to stop short of the “finer layers” of our society, including the arts, ultimately even spirituality. After all, none of these aspects is magical; all of them emerged autonomously over the past few thousand years of humankind’s cultural evolution.

In the long run, the unfolding will leave the sphere of human control, eventually also that of human imagination.

Taking the larger perspective of life’s evolution on planet Earth, we recognize how a particular line – AI to cope with the information flood – inevitably unfolds into something much, much bigger, into a consciousness that far surpasses that of humankind. The qualitatively same unfolding also proceeds from all the other roots of our culture. The emerging lines widen, coalesce, and smoothly approach something qualitatively different in an evolutionary transition in individuality. All this if the grand lineage does not falter.

Bibliography

Anthes, G., 2019: Lifelong learning in artificial neural networks, Comm. ACM, 62, (6), 13–15.

Axelrod, R., 1997: The dissemination of culture: A model with local convergence and global polarization, J. Conflict Resol., 41, (2), 203–226.

Flache, A., M. Mäs, T. Feliciani et al., 2017: Models of social influence: Towards the next frontiers, J. Artificial Society Social Sim., 20, (4), 2.

Howell, L. (Ed.), 2013: Global Risks 2013, World Economic Forum, WEF, Cologny, Geneva.

Lazer, D. M. J., M. A. Baum, Y. Benkler et al., 2018: The science of fake news, Science, 359, (6380), 1094–1096.

Reed, C., 2021: Argument technology for debating with humans, Nature, 591, 373–374.

Slonim, N., Y. Bilu, C. Alzate et al., 2021: An autonomous debating system, Nature, 592, 379–384.

Thompson, N. C., K. Greenewald, K. Lee et al., 2021: Deep learning’s diminishing returns: The cost of improvement is becoming unsustainable, IEEE Spectrum, 58, (10), 50–55.

Törnberg, P., 2018: Echo chambers and viral misinformation: Modeling fake news as complex contagion, PLoS ONE, 13, (9), e0203958.

Vosoughi, S., D. Roy and S. Aral, 2018: The spread of true and false news online, Science, 359, (6380), 1146–1151.

Wu, Y., R. Zhao, J. Zhu et al., 2022: Brain-inspired global-local learning incorporated with neuromorphic computing, Nature Comm., 13, 65.

Zhang, Y., P. Qu, Y. Ji et al., 2020: A system hierarchy for brain-inspired computing, Nature, 586, 378–384.

Zhang, Y., P. Qu and W. Zheng, 2021: Towards “general purpose” brain-inspired computing system, Tsinghua Sci. Technol., 26, (5), 664–673.