
Responding to the challenge of AI: Retrieving human intelligence through labour


Abstract

In this article, we argue that both the exaggerated fascination with, and fear of, artificial intelligence (AI) stem from a flawed understanding of human intelligence (HI) – one that fails to retrieve its full potential. The current excitement about AI often disregards a concomitant dehumanization of HI itself. Taking labour as a key arena in which the future of intelligence is being shaped, this article advocates for a much more expansive conception of HI. We highlight its constitutive powers and dimensions, while also drawing attention to the often-forgotten limitations of AI. We contend that this pivotal moment of transition and its myriad challenges provide a unique opportunity to cultivate, at work, the intrinsically human dimensions of intelligence, which not only remain largely untapped but are also essential to humanizing labour.

Keywords: artificial intelligence, human intelligence, labour, future of work

Published on
2025-10-28

Peer Reviewed

Responsibility for opinions expressed in signed articles rests solely with their authors, and publication does not constitute an endorsement by the ILO.

This article is also available in French, in Revue internationale du Travail 164 (4), and Spanish, in Revista Internacional del Trabajo 144 (4).


1. A blinding yet often dehumanizing fascination

Human beings have long been fascinated by what has for decades been referred to as artificial intelligence (AI) and, long before that, by the rise of techno-science in general.1 There is good reason for this: techno-science has improved the material conditions of human life. But, blinded by the light of techno-scientific progress, we have collectively neglected the full exploration and cultivation of our own human intelligence (HI). What usually goes unnoticed in this tale of progress is that AI, and other techno-sciences, have focused our attention on just one aspect of HI: the functional and individual aspect, since this is what can be modelled and replicated by machines.2 Functional intelligence has long been privileged because it helps us obtain results that satisfy our needs. However, it does not tell us which results are worth pursuing and why, and it does not care for the subjects concerned or affected. It is intelligence reduced to functionality or skill – the “how” things work, without factoring in the “what”, “who” and “why”. In the world of work, this functional reductionism of HI could be equated with misconceiving labour as pure productivity, devoid of further meaning and without adequately factoring in human or environmental impact.

Other dimensions of HI that touch upon the essence of humanity – such as the ability to create values that motivate action, question and shift paradigms, improvise creatively, exercise common sense or experience love – have been neglected or assigned a lesser “subjective” value. In order to understand life and intelligence, human beings have oversimplified both through scientific modelling. This has been very useful from an instrumental perspective, but its drawback has been the belief that such a limited (and limiting) map is our actual reality.3 Thus, intelligence has largely been equated with functional intelligence; and in the quest for its automation through AI, we are losing sight of the potential and broader wealth of HI – leading to aberrations, such as conceptualizing HI as data processing. In our fascination with techno-science, we have not only submitted the workforce to the imperatives of technology and anthropomorphized the machine (for example, the robot), but also fantasized about becoming one, disregarding its many shortcomings and the unexplored potential of our own intelligence.

The triumph of AI also generates fear, particularly in the world of work – much as machines did for the Luddites during the first industrial revolution. This fear is especially acute: AI is not only thought to be taking over jobs, but also to be managing, hiring, firing and increasingly supervising our work (see Aloisi and De Stefano 2022). However, it would be a mistake to react merely fatalistically to all AI, or to techno-science in general, when the issue at hand is how we have been neglecting fundamental aspects of HI.4 Instead, we need to be proactive. It is in the realm of labour that the operation of intelligence is largely shaped (often based on the demands of the job market). It is therefore there that the future of HI is being played out.

Traditionally, intelligence at work has focused on achieving results based on past knowledge. Accordingly, labour has habitually privileged the development of functional programmed intelligence (a form of intelligence that machines can beat us at, and which can more easily lead to commodified labour) and has not paid enough attention to the other forms of HI, such as creativity and the ability to identify values to guide actions at work.5 Today, for humans to remain relevant and be fulfilled, the processes of work need to distinguish between where HI excels – notably in facing the unknown and addressing new challenges (that is, facing the future) – and where AI is more powerful – in computing and extracting patterns from huge amounts of data (that is, mastering the past).

Cultivating free, collective and creative intelligence at work; putting AI – as the powerful tool that it is – at the service of HI, rather than the other way around; and, in so doing, de-commodifying labour, have now become crucial imperatives in tackling the threat of a digital dictatorship and ensuring humans’ very relevance and survival.6 We contend that both the exaggerated interest in, and fear of, AI stem from a flawed understanding of HI that fails to retrieve its full potential. This blinding fascination with AI often obscures the dehumanization of HI itself. Taking labour as a key arena in which the future of intelligence is being fashioned, this article defends a much more expansive conception of HI and invites us to seize this pivotal moment and its myriad challenges as an opportunity to cultivate the intrinsically human dimensions of intelligence within workplaces. These dimensions not only remain largely untapped but are also essential to humanizing labour, facing the challenges of our increasingly complex and interrelated societies and seeking fulfilment as human beings.

2. A framework to better understand and cultivate human intelligence

Unlike other animals, humans cannot survive on genetic “programming” alone – they need to live in strong symbiosis with others and, through their free intelligence, create cultures (see, for example, Enquist, Ghirlanda and Lind 2023). What constitutes us as human beings is our collective, cultural and relational intelligence, developed over 200,000 years through interactions with other humans and with nature.7 Conversely, scientism – the prevailing ideology of scientists – assumes that intelligence emerges from matter, the latter being the most basic abstract concept of techno-science. This assumption derives from misplaced concreteness: scientism misleadingly views matter as if it were something primordial, concrete and foundational and thus approaches life and intelligence as replicable mechanisms.8 In its quest to model, it downgrades intelligence to mere information processing, and AI prophets thus dream of a world of intelligent machines conquering the entire universe (see, for example, Tegmark 2017). However, life and its intelligence, which are not abstract but the most concrete and immediate of realities, cannot be fully explained or reduced to models, nor to mechanisms that could automatize them (see, for example, Bird 2003). We refer to this insight as the freedom of reality, in which embodied intelligence – not matter – is what is primordial and makes us human.9

2.1. The constitutive powers of human intelligence: A creative hand

To understand the richness and potential of collective HI, we propose to consider its essential traits, summarized in five constitutive creative powers, which we have previously presented in Agustí-Cullell (2022): (1) interest in reality, (2) communication, (3) subsidiary symbiosis, (4) generalized research, and (5) freedom. Such creative powers constituting HI are as interdependent and coordinated as the fingers of a human hand. Therefore, they are presented using the imagery of a creative hand (also as a reminder that this conceptualization is a model, with no pretence at capturing reality but only seeking to inspire further discussion and research on the constitutive powers of HI):

  1. Interest in reality is the energy vector of HI.10 We represent it with the index finger – the finger of attention, pointing to the important things that motivate and guide action. This interest makes HI a sensitive, emotional and evaluative intelligence. Discovering our interest in reality, as it sits within society’s community of interests, is akin to finding one’s vocation in life, and is key for the development of HI. In many societies, this is (or should ideally be) largely channelled through labour. In turn, a world of work where HI is to thrive calls for grounding labour in genuine interest.

  2. Communication’s primary manifestation is speech, and we represent it with the middle finger – the axis of the creative hand – as the mediator across the constitutive powers.11 Communication frees us from the basic stimulus–response mechanism of animal life. Between stimulus and response, we interpose words whose richness of meaning opens the limitless field of human imagination. This power is creative and metaphorical: it takes expressions of meaning rooted in the experience of one domain and translates them effortlessly into other domains. HI is thus mainly linguistic, creating and sharing meaning through communication. AI is incapable of such a feat – generative tools, such as ChatGPT, can produce complex texts based on old patterns, but they ignore meaning and are incapable of comprehending it.

  3. Subsidiary symbiosis refers to our capacity for cooperation and mutual service – the creative power emerging from our social nature – and we represent it with the ring finger.12 It considers the fulfilment of aspirations to happiness as a collective rather than individual endeavour. We call it subsidiary symbiosis, as an embodiment of the principle of subsidiarity13 and to emphasize that, today, cooperation needs to free itself from the hierarchical ways of the past. Creativity requires each human entity (from a single person or small community all the way up to international organizations) to have the maximum autonomy it can responsibly exercise in intradependence with other institutions. HI operates through all these intra-actions with others and with nature. It is symbiotic and, therefore, understanding it as individualistic is misleading. Conversely, AI systems seek to maximize their autonomy to the point of autarky – a basic mistake that ignores the fundamental nature of intelligence: intelligence develops through communication and cooperation.

  4. Generalized research is the hallmark of the mutation from homo sapiens (who pretends to know) to homo quaerens (who humbly and constantly enquires), which is key to realizing the full potential of HI.14 We represent it with the little finger because it is one of the last creative powers to have been developed (in the “Western” cultural narrative, from the Renaissance onwards)15 and it is still often restricted to certain specialties, particularly techno-scientific ones. Questioning what is known to reach a state of continuous learning and understanding – opening up to the unknown in order to create – is the proper dynamic of intelligence. This key dynamic of HI requires freedom, which is alien to AI systems, as these can only operate within the bounds of pre-existing data (that is, based on the past). Today, this attitude of constant research has become necessary not only in certain specialties but in all activities and for everyone, if we do not want to end up displaced by machines, with which we cannot compete in terms of information processing.

  5. Freedom is the human capacity to identify and overcome any elements, internal or external, that limit or constrain us.16 Given its importance, we represent it with the thumb, which supports the other fingers and allows them to attain their full potential. This fundamental constitutive power of HI can lead to liberation not only from that which is relative to the ego – constituted by desires, expectations and fears – but also from one’s attachment to emotions, thoughts and acquired knowledge. It is the foundation of true creativity, allowing groundbreaking innovation (for example, quantum mechanics in physics, the 12-tone system of composition in music or cubism in painting). This should not be confused with the ability to come up with new combinations of what already exists, which is the forte of AI systems. Freedom is a prerequisite for flexibility, for a research attitude and for the corresponding creativity of HI – essential in the current context and particularly in the world of work. This freedom and liberation constitute the hygiene of the mind, liberating it from the accumulated past and from egocentrism, and allowing it to meet everything anew, from one moment to the next.17

2.2. The dimensions of human intelligence

How these constitutive powers of intelligence are exercised – the degree of intensity and the priority of each power over the others – determines the different uses or dimensions of HI.18 For analytical purposes (distinctions are of the mind and not of reality), we also draw on our previous categorization in Agustí-Cullell (2022) in differentiating between three main dimensions. The first two address our needs and the third touches upon what makes humans free.19 It should be noted that, like the constitutive powers listed above, these dimensions are intertwined and, thus, work best when synergistically balanced (what we refer to as harmonic intelligence).

  1. Functional intelligence refers to a primarily instrumental and abstract form of intelligence that is characteristic of the techno-sciences. In order to predict and control phenomena and to create techno-scientific models of reality, this type of intelligence excludes elements that cannot be measured, such as concrete qualities and values. In the workplace, functional intelligence focuses on how things are done, treating elements in reality as means to an end. It is key to the ability to develop skills and to organize and optimize productive processes continuously in order to achieve goals. Its form of interest is driven by curiosity; it relies on the freedom to separate the observer from the observed and it does not require value-based engagement, such as empathy. It can thus serve both to cure cancer and to develop an atomic bomb. AI primarily aims to automate this functional intelligence. However, lacking human curiosity and freedom, it tends to do so either through analysing and replicating the steps of a particular task or process (for example, creating computable functions), or by processing large amounts of resolved problems (for example, training neural networks).

  2. Axiological intelligence, which is interdependent with functional intelligence, refers to the intelligence through which individuals imbue actions with meaning and value. It also connects us to the aesthetic and emotional dimensions of life, encompassing emotional, artistic and ethical intelligence, among others. Both artists and legislators, for instance, often demonstrate this form of intelligence. By creating values, axiological intelligence responds to the human need for meaning and direction. Unlike functional intelligence, which operates through abstract reasoning, axiological intelligence functions through feeling and intuition. For example, when confronted with greed and inequality, it may promote generosity or set up a social redistribution system; when inspired by beauty, it creates art.

    Axiological intelligence operates through values and countervalues and tends to be more effective when embraced rather than imposed. In the field of labour, it gives meaning to work – for example, by establishing connections between workers and the values, purpose or mission of their company or organization. As a relational form of intelligence that taps into human connection, its cultivation is key to enabling symbiotic teamwork. It also operates as a safeguard against the tendency of functional intelligence to commodify work and treat workers as mere resources. Axiological intelligence reinforces the importance of a human-centred approach to labour, as it is the dimension of intelligence most attuned to human needs.

    A balanced synergy between functional and axiological intelligence thus constitutes what we refer to as the intelligence of need: the intelligence that approaches and models reality to satisfy human interests and needs, and largely depends on our interactions with other organisms and the environment. The axiological focuses on what is needed and the functional on how to attain it; but, when these two forms of intelligence are out of balance, the intelligence of need degenerates. For example, when intelligence is guided solely by individual or collective self-interest, it becomes an intelligence of greed. In the context of labour, such imbalances may be reflected in a workplace culture that prioritizes results and productivity over all else.

  3. Inseparable from the intelligence of need (the synergy of functional and axiological intelligences), there is a third contemplative dimension of intelligence, which we call liberating intelligence. This dimension provides us with the insight that functional and axiological models are ultimately relative to our contingent needs as human beings. This is a subtle but powerful form of intelligence that allows us to distance ourselves from such needs and frees us from any particular conceptualization or experience of reality. It is the source of our creative freedom, which is the distinctive power of HI and allows other forms of intelligence to attain their greatest potential. This dimension reminds us that intelligence is, at its core, as untameable as our freedom and creativity. At the workplace, this form of intelligence allows us to supersede a purely egocentric approach to HI, thus fostering – in synergy with axiological intelligence – cooperation and teamwork. Moreover, liberating intelligence at work is what enables us to take a step back from “business-as-usual” practices; question old rules, processes and forms of organization; search for new ways of addressing challenges; and promote creativity and innovation.20

3. Labour as the arena in which the future of human intelligence is shaped

In this article, labour and work are broadly understood as the human dimension of, or participation in, any economic activity.21 According to the generally accepted historical narrative, from the beginning of the modern era, labour – previously sometimes viewed as a form of punishment – started to be presented as a means of human fulfilment, giving rise to the notion of creative work.22 However, the emphasis of capitalism and imperialism on productivity and consumption as the basis of well-being led to the generalized promotion of programmed intelligence, with creativity largely reserved for a privileged minority (such as scientists, engineers and artists) (see, for example, Thompson 2010). Moreover, intelligence at work – even in its creative form – is most often (mis)perceived as purely individual and instrumental or functional (that is, the view of intelligence embraced by the techno-sciences).23

Labour today thus largely consists of aligning individuals’ programmed intelligence with specific skills in order to solve known problems or achieve predefined objectives within hierarchical structures, where growth, productivity and monetary benefit are the paramount goals. In this framework, the creative and axiological dimensions of HI are neglected, except when it is too obvious that they are necessary – as in cases of symptomatic failures stemming from a lack of harmonic intelligence, illustrated by corporate ethical scandals (see, for example, Fombrun and Foss 2004). When human labour and intelligence are thus limited to programmed instrumental tasks, AI systems become formidable competitors – not only displacing humans but also subordinating them to intelligent machines, such as algorithms, rather than the reverse (see, for example, Aloisi and De Stefano 2022). In this context, labour risks being further commodified and dehumanized.

Calls to humanize labour are, of course, not new and have often emerged in response to techno-scientific or socio-economic transformations affecting the world of work. This sentiment is reflected in the Preamble to the ILO Constitution of 1919 – establishing the Organization in the wake of the industrial revolution – and reaffirmed in the ILO Centenary Declaration for the Future of Work of 2019, which calls for a “human-centred approach”. The rise of AI further underscores the need to ensure such an approach to the world of work, and to do so through cultivating harmonic intelligence. Otherwise, humans, unable to compete with artificial systems in the field of programmed functional intelligence, will be relegated to second-best options and perceived as flawed biological machines. Moreover, today’s rapidly changing world demands constant adaptability to tackle new challenges (see, for example, McGowan and Shipley 2020), which requires tapping into the other often-neglected aspects of HI discussed in this article.

This adaptability demands a highly cultivated harmonic intelligence, particularly in its creative and liberating dimensions.24 AI cannot provide that, as it only projects from past data – it can compete with programmed intelligence but struggles to match creative or axiological intelligence. For example, while AI can be very good at managing and anticipating logistical productive needs based on past consumer patterns, it cannot come up with breakthroughs, such as paradigm shifts in the workplace, or prevent ethical scandals stemming from business-as-usual practices. In order to retrieve our potential as human beings and to address the challenges of today’s world, labour must embrace long-neglected dimensions of intelligence – specifically, its axiological and creative or liberating dimensions, with an emphasis on their collective deployment. Otherwise, if we continue to function solely as individual programmed intelligences within the market, we risk sinking further into precarious vulnerability.25

4. The often-forgotten limitations of AI

A closer look at HI, as summarized above, highlights the limitations of otherwise powerful (and useful) AI systems. While this article does not aim to provide a comprehensive review of those limitations, for illustrative purposes, we draw attention to aspects or blind spots of AI that correspond to key areas of HI – in line with the three dimensions sketched out above – that are essential to the world of work.

4.1. AI’s reductionist focus on functional intelligence at work

Like other techno-sciences, AI focuses on goal-oriented functional intelligence and often treats other forms of intelligence – such as value-creating axiological intelligence – as merely functional. For example, when programmers attempt to integrate value considerations into machine operations, as in “do no harm” protocols in so-called “beneficial AI” (see, for example, Russell 2021), they rely on past experiences. However, AI lacks the capacity to adapt values to unforeseen situations or create new values when needed – abilities that lie at the heart of axiological HI and are increasingly vital in an ever-changing world of work where ethical challenges are increasingly complex (see, for example, LaMontagne 2016).

This functional operation reveals AI’s blind spots and the dangers it poses, given that it only addresses the “how” of processes, offering control over phenomena mainly based on large amounts of data. To accomplish this, AI typically identifies the logical structure of problem-solving, or trains artificial “neural networks” on millions of solved cases (for example, email tools that identify spam, image recognition or, more recently, generative AI tools, such as ChatGPT, Google’s Gemini, Microsoft’s Copilot and DeepSeek). These approaches have their shortcomings. For instance, generative AI tools such as ChatGPT cannot explain how they have reached an answer, making it difficult to address errors, of which these tools are completely unaware. More broadly, such AI tools cannot recognize when existing rules or procedures are inappropriate or irrelevant to a particular context. Should this qualify as intelligence?26
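A deliberately minimal sketch (not drawn from the article, and far simpler than any real spam filter) can make this limitation concrete: a classifier built purely from past solved cases scores new messages by their overlap with those cases, and when confronted with wording outside its training data it produces an answer anyway, with no awareness that its rules no longer apply. All data and names here are invented for illustration.

```python
from collections import Counter

# Toy "trained" model: word frequencies extracted from already-solved cases
# (standing in for the millions of resolved problems mentioned in the text).
SPAM = ["win cash now", "free prize claim now", "cheap loans now"]
HAM = ["meeting agenda attached", "lunch tomorrow", "project status update"]

def word_counts(docs):
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)

def classify(message):
    """Label a message purely by its overlap with past patterns."""
    words = message.split()
    spam_score = sum(spam_counts[w] for w in words)
    ham_score = sum(ham_counts[w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

print(classify("claim free cash now"))  # overlaps past spam patterns
print(classify("novel scam wording"))   # unseen words score 0 vs 0:
                                        # the model silently defaults to "ham"
```

The second call is the blind spot the text describes: the model cannot signal that the message lies outside everything it has seen; it simply returns a label.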

4.2. AI’s inability to replicate axiological intelligence

It should also be noted that AI is unable to replicate basic elements of HI – whose importance for social interaction, including meaningful engagement in the workplace, cannot be overstated. These include common sense, ethical judgement and emotional intelligence, in manifestations such as solidarity, compassion and other emotions motivating action. Since the introduction of the Turing test, AI proponents have been enthusiastic about the idea of machines being able to trick humans into thinking they are interacting with other humans.27 However, the possibility of genuinely replicating these essential traits of HI, which go beyond programmable aspects of functional intelligence, is often questioned by more critical academics across various disciplines (see, for example, López de Mántaras i Badia 2023; Shams, Zowghil and Bano 2025; Chursinova and Stebelska 2021; Braga and Logan 2017). This dimension of HI, which we broadly refer to as axiological, has a fundamental role to play in the world of work. It is key to understanding what drives motivation at the workplace and ensuring worker engagement, and it enables the creation and ongoing adaptation of value systems that can guide and give meaning to businesses, allowing them to connect with the values of the society within which they operate.

4.3. AI’s limited creativity and its inability to replicate liberating intelligence

Lastly, it should be stressed that AI offers only limited (and often oversold) forms of “creativity”. These are primarily combinatorial, establishing new connections between the data that it has accumulated. For instance, it may come up with new strategies to win at a specific game or, in music, it may produce a “new” likeable song drawing on past popular compositional patterns. However, it cannot replicate the free creativity required to come up with breakthroughs, such as the 12-tone system of composition. This level of creativity cannot be modelled, since it stems from freedom, which cannot be predicted or fully explained. In contrast, it is through this free and creative dimension that HI has developed. Yet, this aspect has not received enough attention.28 We do not, however, wish to diminish the role that AI can play in assisting the creative process. For example, it can quickly determine if something has already been done or tried, provide examples of past approaches or suggest new tactics to advance goals, but its contribution is only instrumental. It is, moreover, most effective when used in synergy with the creative capacities of HI.
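The combinatorial character of this “creativity” can be sketched in a few lines (an illustration of ours, not from the article): a first-order Markov model generates a “new” sequence of notes, yet every transition it emits is, by construction, one it has already observed in its corpus; nothing outside the accumulated data can ever appear. The note sequences are invented for the example.

```python
import random

# Past "compositions" (invented note sequences) standing in for accumulated data.
corpus = ["C D E C", "C D E F G", "E F G E C"]

# Build a first-order Markov model: which notes have followed which.
transitions = {}
for piece in corpus:
    notes = piece.split()
    for a, b in zip(notes, notes[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start, length, seed=0):
    """Recombine observed transitions into a 'new' sequence."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:  # no precedent in the data: the model simply stops
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("C", 6))
```

Every adjacent pair in the output already occurs in the corpus, and a starting note absent from the data yields nothing at all: recombination of the past, not free creation.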

***

To sum up, the functional dimension of HI – reflected in techno-scientific achievements – has progressively eclipsed other dimensions. At work, this results-oriented approach has led to the perception of jobs as primarily concerning the delivery of programmed tasks. At this functional level, AI often outperforms HI, serving as a wake-up call to rebalance and foster harmony between the intelligence of need and liberating intelligence. This harmony is the wisdom we need in order to retrieve the potential of HI. Labour market and employment policies should bear this in mind, not only to ensure continued human relevance, but also to foster fulfilment, through quality jobs that contribute to the development of personal and collective HI.29

5. Retrieving the potential of human intelligence: Humanizing labour and using AI as a tool at our service

How can labour provide a space to unlock the full potential of HI and its creative powers? While not exhaustive, this section sketches out some general exploratory recommendations to promote HI and its synergy with AI. They should be understood as interdependent and mutually reinforcing, and their application should be mindful of the hindrances or barriers that may stand in the way – often in the form of entrenched business-as-usual patterns. The first five recommendations correspond to the five constitutive elements of HI discussed in section 2.1, focusing on aspects that are particularly relevant to the world of work. Their implementation would not only aim to enhance performance but would also reward workers by engaging and developing HI, thereby making work more meaningful.

To illustrate how these recommendations may be given effect, we note some examples of strategies and measures for the first recommendation. However, given the risks of one-size-fits-all approaches, the remaining recommendations are presented as general guidelines. In so doing, we wish to stress that concrete measures and strategies will need to be tailored to the nature of specific industries or economic activities and the developments therein, including the ways in which AI may be affecting them. Although we are unable to provide here a full framework for institutional change, the recommendations highlight a multiplicity of approaches that can work complementarily. These range from adjusting existing mechanisms and instruments, such as national employment policies or other national or international mechanisms (for example, labour standards, tax incentive mechanisms and national, regional or global collective bargaining), to more novel ones proposed below, such as a new professional figure to monitor and make recommendations on the AI–HI interplay at work.

5.1. Aligning work with genuine interest

Labour is often experienced as an obligation for survival rather than the domain in which our common interests are pursued. For work to promote a healthier development of HI, it needs to be motivated by, and deeply connected with, our collective interests, values and pursuits. A hindrance to this endeavour is the lack of genuine interest at work, where jobs are perceived as obligations, extraneously motivated, for example, by survival or salaries. To tap into the potential of HI, it is key to promote interest, and thus ownership, at work.30 The implementation of this broad recommendation should be approached from multiple angles, touching upon different aspects of labour.31 As to impact, an increased alignment between work and genuine interest should contribute positively not only to combating alienation, but also to enhancing productivity, optimizing work-related processes and fostering the full development of HI at work, as well as its synergies with AI.

In this endeavour, the tools offered by AI should focus on first identifying and then automating the less interesting programmed and routine tasks, with the aim of enabling workers to focus on more interesting and challenging demands, that is, those that allow them to tap into HI more broadly (for example, addressing unexpected situations, improving processes and devising new ones).32 Genuine interest could thus be built into workplace HI–AI governance principles (and related algorithms) as one of the guiding criteria to assist in identifying tasks for AI and areas of work on which HI should focus. AI could also be useful in facilitating the identification of this interest alignment – for example, by devising algorithms to support processes to identify the jobs and needs that are best suited to workers’ skills, traits and interests, all while – an important caveat – retaining human supervision over the outcome. While research in the field tends to focus on machine learning and labour market matching (see, for example, Mühlbauer and Weber 2022), AI and related techno-scientific mechanisms hold further potential as vocational tools (see, for example, Bülbül and Ünsal 2010).

5.2. Work as a space that promotes meaningful and impactful communication

The importance of communication in labour processes cannot be overstated – any workplace is essentially the outcome of interactions (see, for example, Mikkola and Valo 2019). Workplaces need to constantly develop efficient and effective means of communication and knowledge-sharing that enable coordinated efforts and, thus, the development of collective intelligence. A healthy workplace promotes communication in order to create common interests and a sense of community or belonging, which, in turn, foster symbiotic dynamics.33 Communication that promotes HI at work can be hindered by fierce competition and purely individual rewards, dishonest or unethical communication (for example, hiding or not sharing information), compartmentalization or silo dynamics, and fear of speaking up or challenging the existing culture or entrenched institutional dynamics. AI could also play a supportive role in this communicational field, for example, by helping teams identify existing communication strategies and best practices as a starting point from which to adapt them creatively through HI to the specific needs of the team concerned.34

5.3. Labour organized as networks of teams in subsidiary symbiosis

The idea of individual leadership saving the day is a cultural myth, as intelligence always develops through symbiotic collective interaction. An efficient workplace is one that shows flexibility in determining the most appropriate level for decision-making and other tasks. In short, decisions and tasks that could be responsibly undertaken at a lower level, closer to the problem at hand, should not be taken over by higher organizational echelons.35 An efficient workplace also needs to set up effective mechanisms for synergistic coordination. Subsidiary symbiosis at work relies on the development of the other constitutive powers of HI, in particular open communication and shared interests. Barriers to this subsidiary symbiosis include hierarchical structures in which the boss needs to decide or approve everything, and silo dynamics of units motivated by different interests and unable to look beyond their specific responsibilities towards common values and goals. AI can also play an assistant role in reorganizing work through networks of subsidiary symbiosis – for example, by compiling best practices to facilitate the application of the principle of subsidiarity and by helping identify the most meaningful level of decision-making.36

5.4. A work culture that fosters constant research and inquiry

No matter how apparently simple the task, there are always opportunities to inquire and open up new creative possibilities – including in determining which programmed tasks may be delegated to AI. Awakening in all workers a research mindset (hitherto reserved for a few specialists) is crucial in our ever-changing economy and world of work. It fosters continuous learning and the exercise of the mental flexibility and other intelligence capacities necessary to respond to unforeseen challenges. Fruitful research requires cultivating the other constitutive capacities of HI, in particular teamwork (the lone researcher is a myth) and freedom (detachment from what is known and done). A key barrier to this practice of research and innovation in everyday work is the fear of failure, which remains pervasive in many corporate cultures and disregards the fact that scientific progress has only ever been accomplished through trial and (much) error. Again, in this field, AI is well suited to play a supportive role – both to run predictions on how new approaches may unfold and to automatize and progressively assume programmed tasks, allowing HI to focus on continuous education and research.37

5.5. Further promoting creativity, freedom and well-being at work

Authentic creativity can flourish only in an environment that fosters freedom. This freedom requires detachment from past knowledge, emotions and beliefs and enables cooperation even in the face of disagreement – placing such disagreement in perspective. This does not mean lack of responsibilities or accountability; rather, it reflects a full commitment to values that require venturing beyond the safety of existing approaches and practices. AI can provide extensive information on the latter, while HI is essential to envision new possibilities and experiment through inquiry and creativity.

A key hindrance to freedom at work is the dominance of egotistical short-term self-interest or short-sighted devotion to immediate results. In addition to freedom, creative intelligence requires balance. Its promotion can help counteract the toxic effects of work driven solely by the functional or instrumental intelligence of need, reflected in individualistic competition and obsession with performance, which may lead to burnout. Machine learning can support this process, for example, through compiling best practices. When used correctly, AI can not only inform us of what has been done but also encourage HI to go further.38 Furthermore, performance assessments should be redesigned to encourage such creativity at work.

***

Having outlined recommendations seeking to cultivate the constitutive capacities of HI at the workplace, we now refer to four additional avenues to promote HI at work and respond to the growing presence of AI. These four broad recommendations seek to inspire further research and innovation. They touch upon: (i) the implications for existing incentive systems and cultural inertias at work; (ii) the framing of AI as a complement to HI’s strengths; (iii) the development of new professional roles – which we call AI–HI mediators – to facilitate this transformation; and (iv) the potential to advance this agenda through international cooperation.

5.6. Rethinking incentive systems and cultural inertias at work

In pursuing the above recommendations, labour market actors must critically assess long-standing cultural practices, such as the prioritization of instrumental intelligence and productivity, or the pursuit of economic gain above all else, which can hinder the full development of HI. For example, existing reward structures, including salary scales, should be reassessed and, where necessary, detached from results-based productivity. Productivity and results should serve creativity and innovation, and not vice versa. The obsession with results and productivity, typical of prevalent growth-focused economic thinking, not only leads to serious structural problems, such as dire environmental crises, but also curtails the true potential of HI. This concern has been raised by scientific researchers39 and is reflected in unhealthy work dynamics, leading to alienation at work and pathologies such as burnout.40 To address this, the results-driven mentality needs to be recalibrated, and incentives refocused to encourage the development of unrecognized aspects of HI – such as detachment or emotional intelligence – that are highly relevant for fostering creativity and optimizing collaboration.41

5.7. Making AI support the development of HI in the workplace

AI can be both supportive of HI and disruptive to it. As we have argued throughout this article, AI needs to be intelligently framed and treated as an ally of HI in the world of work – making the most of the tools it offers (for example, for programmed tasks) while avoiding its pitfalls (for example, surrendering to cold, so-called algorithmic decision-making). AI can be very useful for all sorts of work, in particular to process data and constantly update or optimize procedures. At the same time, recent studies warn against the potential negative impacts of the use of AI on human cognitive abilities (see, for example, Kosmyna et al. 2025). The key is to understand the complementarity – not competition – between AI and HI. For example, designers can use AI to compile thousands of existing blueprints in a matter of seconds and then turn to creative HI to propose a new approach that transcends past trends. A conscious interaction or mediation between AI and HI becomes necessary, including from an axiological or ethical perspective, to ensure a human-centred approach to labour. Such mediation and the determination of the roles suited to AI cannot be left solely to short-term economic logic or the behaviour of the market42 but should be rooted in democratic governance structures.43

5.8. AI–HI professional mediators

The proposed synergistic approach to promoting HI through labour opens the door to new professions. Given its relevance to a conscious and ethical interplay between AI and HI, we highlight one such profession, which we tentatively call “AI–HI mediators”. These professionals would be acquainted with both AI and the often-neglected dimensions of HI, such as axiological intelligence, which underpins ethical decision-making. They would, in any given corporation or institution, monitor best practices in the relationship between AI and HI, identify opportunities to optimize the use of AI tools while preventing dehumanizing patterns, and make pertinent recommendations to ensure that HI is adequately cultivated and that both forms of “intelligence” work in synergy.44 In this regard, we reiterate that autonomous AI systems cannot be designed to be fully axiologically capable – their limited autonomy needs to be monitored and adjusted. For example, as noted above, so-called ethical AI, which seeks to endow computational models with ethical capabilities, can integrate past considerations and provide invaluable support to such mediators by compiling past decisions and lessons learned. However, it falls short when addressing unforeseen challenges, since axiological intelligence cannot be reduced to reasoning on rules or computational formulas. HI intervention is thus necessary, calling for professional mediators with honed axiological expertise. These professionals would contribute to the new and important field of human-centred AI ethics45 and could serve as watchdogs, ensuring that AI is used in ways that complement and support the development of HI.46

5.9. International coordination

In today’s highly interdependent global economy, international coordination is key to fostering a global-scale transformation in the way HI is promoted in the workplace. While many multilateral forums can contribute positively to this effort,47 for illustrative purposes we single out the ILO as the main international standard-setting body for the world of work. With its mandate to advance social justice and a human-centred approach to labour, the ILO provides a most suitable international forum for mediating the relationship between AI and HI. As set out in Part II(d) of its Declaration of Philadelphia, the ILO is called upon to serve as the axiological guide to the global economy. The ILO boasts technical convening power – exercised through its international meetings of stakeholders and experts and its standard-setting machinery48 – to address new trends and challenges and make concrete recommendations, including on the interaction between AI and HI at work. Its tripartite structure, encompassing employers’ and workers’ organizations, gives the ILO a built-in link to the economy and its stakeholders, including as a sectoral convenor. It could thus promote the discussion and adoption of pertinent approaches by sector to make the best use of AI, while fostering the development of HI. Such discussions could result, for example, in AI–HI codes of practice for different industries or forms of work. They could also explore whether and how existing ILO instruments (or new standard-setting) could be harnessed to support a synergistic interplay between intelligences in the labour context. For example, the call for a productive employment policy set out in the Employment Policy Convention, 1964 (No. 122), when interpreted in light of the ILO Constitution and Centenary Declaration, can be understood to encompass the promotion of HI. Since its creation over a century ago, the ILO has affirmed that labour is not a commodity.
A broader, more humanizing vision of labour, which seeks to overcome its commodification, is closely linked to the promotion of balanced, creative and collective HI in the workplace. This vision could represent the future of human work.49

6. Looking ahead: Further HI research and action at work

This article has outlined a framework and recommendations to respond to the challenges and opportunities posed by AI in the world of work, while promoting the development of HI, understood as the agent of human life. It also calls for a reframing of existing research and for additional interdisciplinary studies in light of the arguments presented. Examples of such research include: social psychology studies on labour dynamics, focusing on axiological intelligence and the alignment of values and performance at work; neuroscientific studies examining variables that foster the operation of liberating intelligence in the workplace, enabling adaptation to new challenges, demands or shifts in organizational paradigms; and more interdisciplinary research, from fields as varied as psychology, economics, sociology or anthropology, to assess and develop the hypotheses and arguments presented in section 5, including to better understand HI – its powers or capacities, practical applications and potential impact in the world of work. We acknowledge that, as with any map of reality, alternative categorizations may also be explored. The aim of this article was not to fall into the fallacy of misplaced concreteness – namely, becoming overly attached to a specific framework, as some AI enthusiasts do when adopting reductive models of HI and overlooking the fact that reality supersedes any blueprint. Rather, this article has sought to stress, beyond any formulations, a simple yet powerful message: a human-centred approach to AI should make us realize how limited – and limiting – AI’s understanding of HI often is, and how much HI has been neglected.

The world of work, which is particularly impacted by AI-driven transformations, should be approached as one of the key arenas in which the future of our intelligence is being shaped. The well-known call for continuous education and training at the workplace should be understood not only as the acquisition of knowledge and skills, but more broadly as the ongoing development of HI in all its constitutive powers and dimensions. Instead of (mis)equating humans to programmable intelligences that machines can eventually surpass, we must deepen our understanding of the richer nature of HI – not as a mere body of knowledge, but as the core of human agency – and take decisive, research-informed action at work to better cultivate it. In short, at this historic juncture, we should refocus our attention on retrieving the full potential of HI, placing AI at its service and, in so doing, bringing additional meaning to work and further humanizing labour.

Notes

  1. By techno-science, we refer to the entanglement of science, technology, economy, and their products and services (see, for example, Adas 1989).
  2. While AI, as a socio-technological system, may be challenging to understand owing to its complexity and variety, it becomes simpler to consider when we approach it, as this article does, from the ways in which intelligence operates.
  3. Scholars have long warned of the fallacy of misplaced concreteness (see, for example, Whitehead 1925). It consists of mistaking an abstract belief, opinion or concept about the way things are for a physical or concrete reality.
  4. This article does not set out to assess the specific impact of AI on the job market and defers in this regard to recent studies. Their outlook is not always bleak – for example, Hatzius et al. (2023) consider that “although the impact of AI on the labor market is likely to be significant, most jobs and industries are only partially exposed to automation and are thus more likely to be complemented rather than substituted by AI” (p. 9), and Gmyrek, Berg and Bescond (2023) conclude that “in the realm of work, generative AI is neither inherently good nor bad, and that its socioeconomic impacts will largely depend on how its diffusion is managed. The questions of power balance, voice of the workers affected by labour market adjustments, respect for existing norms and rights, and adequate use of national social protection and skills training systems will be crucial elements for managing AI’s deployment in the workplace” (p. 44). However, such studies do not focus on the untapped potential of HI – the subject of this article.
  5. On the need to embrace modern thinking on intelligence at the workplace, see, for example, Scherbaum and Goldstein (2015).
  6. On uses of AI to bolster autocracy, see, for example, Kendall-Taylor, Frantz and Wright (2020). However, this article’s caution focuses more on the pervasively insidious loss of control over our lives in an automatized world where HI’s programmed activities may be constantly monitored and predicted.
  7. An important reminder to question prevailing thinking of intelligence as an individual attribute. See, for example, Corbí (2013).
  8. See note 3 above.
  9. For a more extended discussion on the freedom of reality and the primordial nature of embodied intelligence, see Agustí-Cullell (2021).
  10. For long-standing research on how interest and motivation influence cognitive abilities and their development, see, for example, Lohman (1989) or Renninger and Su (2019).
  11. On the importance of communication for the development of HI, see, for example, Sfard (2008).
  12. On the role of symbiosis and cognitive growth, including as to the interplay between HI and AI, see, for example, Sun (2020).
  13. Subsidiarity understood as the principle of social and political organization that holds that issues should be dealt with at the lowest or most immediate level where they can be adequately addressed.
  14. On the importance of research for the development of intelligence – with a focus on how AI can support this process – see, for example, Chubb, Cowling and Reed (2022).
  15. On the scientific revolution of the Renaissance, see, for example, Hall (1994).
  16. On the contribution of freedom to intellectual development, see, for example, Moshman (2003).
  17. This mental hygiene is needed for personal and social health, as much as corporal hygiene is needed for the body. The corporal plagues of the past are replaced by the mental plagues of the present.
  18. For example, in the functional dimension, interest focuses on functional quantitative aspects; freedom is used to make abstractions, narrowing the focus on a specific field of action (mostly on how things operate) and often leaving aside qualitative aspects; communication occurs on an abstract plane through the sharing of information; collaboration is mostly purposive and individualistic; and research revolves around abstraction and what can be measured quantitatively.
  19. HI has been conceptualized through many different distinctions, largely depending on its use. See, for example, Gardner (2011).
  20. It comes as no surprise that companies at the cutting edge of innovation offer activities at the workplace to promote such intelligence – for instance, meditation (see Cheng 2016).
  21. On the malleable nature of the notion of work, see La-Hovary and Agustí-Panareda (2019).
  22. In the European context in particular, see, for example, Méda (2016).
  23. For a challenge to the prevailing individualistic approach to intelligence and performance at work, see, for example, Boreham (2004).
  24. Scholars coincide in stressing that the more creative jobs will be the least threatened by AI and other techno-scientific developments (see, for example, Bakhshi, Frey and Osborne 2015).
  25. On the importance of transcending the individual view of creativity and embracing a more participatory and collective approach, see, for example, Montuori (2011).
  26. When drafting this article in 2024, as a test, we asked ChatGPT for assistance in identifying articles that could be cited as references on some of the general topics touched upon. While this should have been an easy task for AI (that is, identifying information about past publications), the results were not as expected: the publications suggested by ChatGPT, while always sounding very plausible, were often invented and thus non-existent.
  27. The Turing test was conceived as a method of determining if a machine could demonstrate HI. It requires being able to engage in a conversation with a human without being detected as a machine (see, for example, Levesque 2017).
  28. Scientists often call this freedom “randomness” and apply the notion to explain phenomena probabilistically – for instance, quantum mechanics, the behaviour of an electron and gene mutations (see, for example, Fowler 2021).
  29. Cultivating a harmonized HI in labour – with the work–life balance it requires to operate at its optimal level – should also benefit other areas of life, including leisure.
  30. By, for example: (i) cultivating axiological intelligence at work to better understand what motivates professional performance and devise industry- or activity-specific strategies to improve the connection between the workforce and the mission of a specific enterprise or organization; (ii) fostering discussion and awareness regarding the purpose of the business/workplace, and its intersection with the promotion of the common good – identifying workplace values and practices touching upon this alignment and adapting them or creating new ones; (iii) discouraging herd behaviour or a view of jobs as set immutable tasks; or (iv) exploring and implementing mobility and other measures to promote workers’ motivation and their contribution to fields aligned with their individual and collective skills, traits and interests.
  31. Some examples of actions that could contribute to this alignment include: fine-tuning recruitment processes to identify and factor in motivational elements more efficiently; promoting social dialogue and ownership over collective objectives by the workforce, by engaging with, and also learning from, the views of the workers concerned; fostering a continued focus on both career and HI development, heightening workers’ awareness of their own skills and motivations and promoting the expansion of their area of professional interests; and ensuring that career opportunities are widely publicized and transparently offered under objective, meritocratic criteria.
  32. This is, after all, materializing the promise of AI. On a conceptual framework for automation, AI and work, see, for example, Acemoglu and Restrepo (2019).
  33. On this and other recommendations, much can be learned from the experience of cooperatives. On communication and innovation, see, for example, Peng, Hendrikse and Deng (2018).
  34. While most AI research in this area has focused on “machines as teammates” and on communication among machines or between humans and machines, there is ample scope to further research on how AI tools can further contribute to enhancing meaningful creative human communication. See, for example, Hancock, Naaman and Levy (2020).
  35. On exploring the principle of subsidiarity in organizational structures such as workplaces, see, for example, Melé (2005).
  36. Parallels can be drawn with AI research focusing on how AI can help improve teamwork – see, for example, Webber et al. (2019).
  37. On how creativity is increasingly required for all sorts of jobs – going beyond the occupations traditionally thought of as creative – see, for example, Easton and Djumalieva (2018).
  38. Expanding on the example noted above, in a field traditionally considered as being creative, AI systems may be very good at “composing” songs following patterns of past compositions (songwriters often do the same). The rise of AI pushes musicians to explore beyond the tried and tested, given that machines hold the upper hand in processing and recycling past creations. This entails taking risks as there is no certainty of commercial success, but this is how music has evolved over the centuries. The difference is that, today, AI could support HI by providing comparisons with old patterns and testing originality.
  39. Even the “temples” of HI – universities and research institutions – have become enslaved by results-based dynamics in a “publish or perish” mentality. By way of example, Peter Higgs, who was awarded the Nobel Prize in Physics, is well known for publicly stressing that had he been subject to the results-based productivity required by the academic world today, he would not have been able to achieve his scientific breakthroughs. Indeed, before being awarded the Nobel Prize, he was almost fired from his university because he refused to conform to the often-sterile obsession with publication-based results – anathema to creative thinking (see, for example, Al-Khalili 2014).
  40. Or, using a new term that attests to the proliferation of toxic workplace dynamics linked to the functional, production-obsessed work paradigm: sisyphemia – a new work-related disorder characterized by obsessive ambition, chronic stress and pathological fatigue (see, for example, García Baroja 2023).
  41. On the role of “soft skills” and emotional intelligence to foster innovation processes in small and medium-sized enterprises, see, for example, Bonesso, Cortellazzo and Gerli (2020).
  42. As illustrated by the recent wave of mass dismissals in the journalism sector fuelled by the current fascination with AI, disregarding the negative mid- and long-term impact it may have on the strength of democracies (see, for example, Aissani et al. 2023; Malone 2024).
  43. On the importance of having AI processes be human-centred and controlled, and not purely economic based, see, for example, De Stefano (2019).
  44. On AI and human mediation, see, for example, Da Silva Guimarães Martins da Costa and Thieriot Loisel (2024).
  45. While a lot of research on AI–HI mediation has focused on the interaction between humans and machines, with the introduction of AI at work in order to seek efficiency (see, for example, Einola and Khoreva 2023), more attention should be paid to devising human-centred approaches that help maximize AI–HI synergies with the aim of promoting the development of HI.
  46. This should be complemented by other means of ensuring the right usage of AI – such as normative instruments at the international, regional, national, local and sectoral levels and collective bargaining agreements setting out AI–HI governance principles and best practices.
  47. Examples of emergent forms of international coordination in the field of AI relevant to labour include the EU Artificial Intelligence Act 2024 (promoting a responsible use of AI technology in the EU market, including due diligence in the use of AI systems); the United Nations Educational, Scientific and Cultural Organization (UNESCO)’s 2021 Recommendation on the Ethics of Artificial Intelligence (as a contribution from an international organization promoting a human-centred approach to AI) and the Organisation for Economic Co-operation and Development (OECD)’s AI Principles, adopted in 2019 and updated in 2024 (seeking to promote the innovative and trustworthy use of AI, respecting human rights and democratic values).
  48. Meetings of experts could thematically address both the challenges of AI and the best means to advance an HI agenda at work – fully coherent with the human-centred approach of the 2019 ILO Centenary Declaration (see ILO 2019a).
  49. For recent ILO discussions on the future of work, including with regard to the impact of AI and techno-sciences, see ILO (2019b).

Competing interests

The authors declare that they have no competing interests.

References

Acemoglu, Daron, and Pascual Restrepo. 2019. “Artificial Intelligence, Automation, and Work”. In The Economics of Artificial Intelligence: An Agenda, edited by Ajay Agrawal, Joshua Gans and Avi Goldfarb, 197–236. Chicago, IL: University of Chicago Press.

Adas, Michael. 1989. Machines as the Measure of Men. Ithaca, NY: Cornell University Press.

Agustí-Cullell, Jaume. 2021. From Programmed to Creative Intelligence: Humanity’s Radical Mutation. Published by the author.

Agustí-Cullell, Jaume. 2022. “Beyond the AI Conundrum: The Future of Intelligence Lies in Its Social Flourishing”. Philosophy International Journal 5 (2): 1–8.  http://doi.org/10.23880/phij-16000251.

Aissani, Rahima, Rania Abdel-Qader Abdallah, Sawsan Taha, and Muhammad Noor Al Adwan. 2023. “Artificial Intelligence Tools in Media and Journalism: Roles and Concerns”. Paper presented at the 2023 International Conference on Multimedia Computing, Networking and Applications (MCNA), Valencia, Spain, 19–22 June 2023.  http://doi.org/10.1109/MCNA59361.2023.10185738.

Al-Khalili, Jim. 2014. “Higgs Would Not Find His Boson in Today’s ‘Publish or Perish’ Research Culture”. The Guardian, 14 February 2014. https://www.theguardian.com/commentisfree/2014/feb/14/higgs-boson-publish-or-perish-science-culture.

Aloisi, Antonio, and Valerio De Stefano. 2022. Your Boss Is an Algorithm: Artificial Intelligence, Platform Work and Labour. Oxford: Hart.

Bakhshi, Hasan, Carl Benedikt Frey, and Michael Osborne. 2015. Creativity vs. Robots: The Creative Economy and the Future of Employment. London: Nesta.

Bird, Richard J. 2003. Chaos and Life: Complexity and Order in Evolution and Thought. New York, NY: Columbia University Press.

Bonesso, Sara, Laura Cortellazzo, and Fabrizio Gerli. 2020. Behavioral Competencies for Innovation: Using Emotional Intelligence to Foster Innovation. Cham: Palgrave.

Boreham, Nick. 2004. “A Theory of Collective Competence: Challenging the Neo-Liberal Individualisation of Performance at Work”. British Journal of Educational Studies 52 (1): 5–17.  http://doi.org/10.1111/j.1467-8527.2004.00251.x.

Braga, Adriana, and Robert K. Logan. 2017. “The Emperor of Strong AI Has No Clothes: Limits to Artificial Intelligence”. Information 8 (4): Article No. 156.  http://doi.org/10.3390/info8040156.

Bülbül, Halil Ibrahim, and Özkan Ünsal. 2010. “Determination of Vocational Fields with Machine Learning Algorithm”. Paper presented at the Ninth International Conference on Machine Learning and Applications, Washington, DC, 12–14 December 2010.  http://doi.org/10.1109/ICMLA.2010.109.

Cheng, Fung Kei. 2016. “What Does Meditation Contribute to Workplace? An Integrative Review”. Journal of Psychological Issues in Organizational Culture 6 (4): 18–34.  http://doi.org/10.1002/jpoc.21195.

Chubb, Jennifer, Peter Cowling, and Darren Reed. 2022. “Speeding Up to Keep Up: Exploring the Use of AI in the Research Process”. AI & Society 37 (4): 1439–1457.  http://doi.org/10.1007/s00146-021-01259-0.

Chursinova, Oksana, and Oleksandra Stebelska. 2021. “Is the Realization of the Emotional Artificial Intelligence Possible? Philosophical and Methodological Analysis”. Filosofija. Sociologija 32 (1): 76–83.  http://doi.org/10.6001/fil-soc.v32i1.4382.

Corbí, Marià. 2013. La construcción de los proyectos axiológicos colectivos. Principios de epistemología axiológica. Madrid: Bubok Publishing.

Da Silva Guimarães Martins da Costa, Leonardo, and Mariana Thieriot Loisel, eds. 2024. Artificial Intelligence and Human Mediation. Proceedings of the CIRET Symposium on Artificial Intelligence and Human Mediation. CIRET.  http://doi.org/10.58079/124wh.

De Stefano, Valerio. 2019. “‘Negotiating the Algorithm’: Automation, Artificial Intelligence, and Labor Protection”. Comparative Labor Law & Policy Journal 41 (1): 15–46.

Easton, Eliza, and Jyldyz Djumalieva. 2018. Creativity and the Future of Skills. London: Creative Industries Policy and Evidence Centre.

Einola, Katja, and Violetta Khoreva. 2023. “Best Friend or Broken Tool? Exploring the Co-Existence of Humans and Artificial Intelligence in the Workplace Ecosystem”. Human Resource Management 62 (1): 117–135.  http://doi.org/10.1002/hrm.22147.

Enquist, Magnus, Stefano Ghirlanda, and Johan Lind. 2023. The Human Evolutionary Transition: From Animal Intelligence to Culture. Princeton, NJ: Princeton University Press.

Fombrun, Charles, and Christopher Foss. 2004. “Business Ethics: Corporate Responses to Scandal”. Corporate Reputation Review 7 (3): 284–288.  http://doi.org/10.1057/palgrave.crr.1540226.

Fowler, John W. 2021. Randomness and Realism: Encounters with Randomness in the Scientific Search for Physical Reality. Singapore: World Scientific Publishing.

García Baroja, Andrea. 2023. “Sisyphemia, a New Work-Related Disorder Characterized by Obsessive Ambition, Chronic Stress and Pathological Fatigue”. El País, 23 August 2023. https://english.elpais.com/science-tech/2023-08-23/sisyphemia-a-new-work-related-disorder-characterized-by-obsessive-ambition-chronic-stress-and-pathological-fatigue.html.

Gardner, Howard. 2011. Frames of Mind: The Theory of Multiple Intelligences. New York, NY: Basic Books.

Gmyrek, Pawel, Janine Berg, and David Bescond. 2023. “Generative AI and Jobs: A Global Analysis of Potential Effects on Job Quantity and Quality”. ILO Working Paper No. 96. Geneva: ILO.

Hall, Marie Boas. 1994. The Scientific Renaissance 1450–1630. New York, NY: Dover Publications.

Hancock, Jeffrey T., Mor Naaman, and Karen Levy. 2020. “AI-Mediated Communication: Definition, Research Agenda, and Ethical Considerations”. Journal of Computer-Mediated Communication 25 (1): 89–100.  http://doi.org/10.1093/jcmc/zmz022.

Hatzius, Jan, Joseph Briggs, Devesh Kodnani, and Giovanni Pierdomenico. 2023. The Potentially Large Effects of Artificial Intelligence on Economic Growth. Goldman Sachs.

ILO. 2019a. Standing Orders for Technical Meetings – Standing Orders for Meetings of Experts. Geneva.

ILO. 2019b. Work for a Brighter Future: Global Commission on the Future of Work. Geneva.

Kendall-Taylor, Andrea, Erica Frantz, and Joseph Wright. 2020. “The Digital Dictators: How Technology Strengthens Autocracy”. Foreign Affairs 99 (2): 103–115.

Kosmyna, Nataliya, Eugene Hauptmann, Ye Tong Yuan, Jessica Situ, Xian-Hao Liao, Ashly Vivian Beresnitzky, Iris Braunstein, and Pattie Maes. 2025. “Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task”. arXiv.  http://doi.org/10.48550/arXiv.2506.08872.

La-Hovary, Claire, and Jordi Agustí-Panareda. 2019. “What Is Work? A Malleable Notion in the ILO’s Legal Pursuit of Social Justice”. In ILO 100: Law for Social Justice, edited by George P. Politakis, Tomi Kohiyama and Thomas Lieby, 893–922. Geneva: ILO.

LaMontagne, Ramona Marie. 2016. “Ethical Dilemmas in the Workplace”. International Journal of Knowledge, Culture and Change Management 15 (1): 9–21.  http://doi.org/10.18848/1447-9524/CGP/9-21.

Levesque, Hector J. 2017. Common Sense, the Turing Test, and the Quest for Real AI. Cambridge, MA: MIT Press.

Lohman, David F. 1989. “Human Intelligence: An Introduction to Advances in Theory and Research”. Review of Educational Research 59 (4): 333–373.  http://doi.org/10.3102/00346543059004333.

López de Mántaras i Badia, Ramon. 2023. 100 coses que cal saber sobre intel·ligència artificial. Valls: Cossetània.

Malone, Clare. 2024. “Is the Media Prepared for an Extinction-Level Event?” The New Yorker, 10 February 2024. https://www.newyorker.com/news/the-weekend-essay/is-the-media-prepared-for-an-extinction-level-event.

McGowan, Heather E., and Chris Shipley. 2020. The Adaptation Advantage: Let Go, Learn Fast, and Thrive in the Future of Work. Hoboken, NJ: John Wiley & Sons.

Méda, Dominique. 2016. “The Future of Work: The Meaning and Value of Work in Europe”. ILO Research Paper No. 18. Geneva: ILO.

Melé, Domènec. 2005. “Exploring the Principle of Subsidiarity in Organisational Forms”. Journal of Business Ethics 60 (3): 293–305.  http://doi.org/10.1007/s10551-005-0136-1.

Mikkola, Leena, and Maarit Valo, eds. 2019. Workplace Communication. New York, NY: Routledge.

Montuori, Alfonso. 2011. “Beyond Postnormal Times: The Future of Creativity and the Creativity of the Future”. Futures 43 (2): 221–227.  http://doi.org/10.1016/j.futures.2010.10.013.

Moshman, David. 2003. “Intellectual Freedom for Intellectual Development”. Liberal Education 89 (3): 30–37.

Mühlbauer, Sabrina, and Enzo Weber. 2022. “Machine Learning for Labour Market Matching”. IAB-Discussion Paper No. 3/2022. Nürnberg: Institut für Arbeitsmarkt- und Berufsforschung.

Peng, Xiao, George Hendrikse, and Wendong Deng. 2018. “Communication and Innovation in Cooperatives”. Journal of the Knowledge Economy 9 (4): 1184–1209.  http://doi.org/10.1007/s13132-016-0401-9.

Renninger, K. Ann, and Stephanie Su. 2019. “Interest and Its Development, Revisited”. In The Oxford Handbook of Human Motivation, 2nd ed., edited by Richard M. Ryan, 205–225. New York, NY: Oxford University Press.

Russell, Stuart. 2021. “Human-Compatible Artificial Intelligence”. In Human-Like Machine Intelligence, edited by Stephen Muggleton and Nicholas Chater, 3–23. Oxford: Oxford University Press.

Scherbaum, Charles A., and Harold W. Goldstein. 2015. “Intelligence and the Modern World of Work”. Human Resource Management Review 25 (1): 1–3.  http://doi.org/10.1016/j.hrmr.2014.09.002.

Sfard, Anna. 2008. Thinking as Communicating: Human Development, the Growth of Discourses, and Mathematizing. Cambridge: Cambridge University Press.

Shams, Rifat Ara, Didar Zowghi, and Muneera Bano. 2025. “AI and the Quest for Diversity and Inclusion: A Systematic Literature Review”. AI and Ethics 5 (1): 411–438.  http://doi.org/10.1007/s43681-023-00362-w.

Sun, Ron. 2020. “Potential of Full Human–Machine Symbiosis through Truly Intelligent Cognitive Systems”. AI & Society 35 (1): 17–28.  http://doi.org/10.1007/s00146-017-0775-7.

Tegmark, Max. 2017. Life 3.0: Being Human in the Age of Artificial Intelligence. New York, NY: Alfred A. Knopf.

Thompson, Paul. 2010. “The Capitalist Labour Process: Concepts and Connections”. Capital & Class 34 (1): 7–14.  http://doi.org/10.1177/0309816809353475.

Webber, Sheila Simsarian, Jodi Detjen, Tammy L. MacLean, and Dominic Thomas. 2019. “Team Challenges: Is Artificial Intelligence the Solution?” Business Horizons 62 (6): 741–750.  http://doi.org/10.1016/j.bushor.2019.07.007.

Whitehead, Alfred North. 1925. Science and the Modern World. New York, NY: Macmillan.