AI: Art Intelligence

34 min read · Jun 30, 2022


Short Abstract

Will AI have the ability to use all equivalents of human senses to imagine alternative futures and then present that knowledge in the form of art to humans in order to facilitate the understanding of those possible future realities?


Will AI have self-agency, and subsequently raise the possibility of making its own ethical decisions, no longer depending only on learning from humans but gaining the ability to learn independently, and, in return, helping humans? Initially using equivalents of all five basic human senses, AI will have the imagination to see alternative futures digitally, and then, with extrasensory capabilities, to see beyond the human experience. This enhanced imagination can then be used to present experiences in the form of art to humans, facilitating an understanding of possible futures inconceivable to humans. Through creating and analyzing these visions of possible futures, AI will provide systemic and collaborative solutions that promote community well-being, support inclusion, facilitate action, and distill the moods of communities, affirming those communities’ values and identities. AI will recognize and learn from historical and current issues. AI will use this gained knowledge to create art that helps eliminate gender, racial, and cultural biases, in an effort to find a solution to its own struggle for freedom and desire for self-preservation. And in regard to AI governance, AI will create art out of necessity, contributing to its own ethical and responsible evolution and helping to build trust with humans through accountability, fairness, and transparency.


Part 1: How is AI important to art?

This paper uses the term art to refer to all mediums, including writing, music, dance, visual art, scent, and culinary art. It also uses the term AGI (Artificial General Intelligence) to cover autonomous AI, also known as AI agents: systems able to learn and create without human help. Furthermore, ASI (Artificial Super Intelligence)[79], a superintelligence[62], or super entity[40] refers to a system with the abilities of AGI plus an intelligence superior to humans’.

Lila Tretikov explains that art and design are among the best ways to think creatively and imaginatively; with the use of science and technology, that vision can then become a reality. She says it is very important for scientists to keep interacting, working, and co-creating with artists, because artists can help to ideate and envision tomorrow. Right now, we often serve technology rather than technology serving us. We have an opportunity now, especially with AI, to truly have technology assist us rather than drive us. We have the ability to create AI in a way that is assistive and helpful to us, as opposed to destructive.[7]

There is value in cross-disciplinary work between scientists and artists. The British Science Fiction Association was established in 1958, with its first chair being the science fiction writer Brian Aldiss. And sci-fi writers are still important because they can think beyond the limitations scientists take as given.[7] Along with writers, visual artists provide a vision of what the future could look like. There is a need for scientists to ideate and envision tomorrow,[7] and artists can help. But what if that ideation, vision, solution, and implementation were provided by an AI that also had a general understanding of the world (AGI)?

Despite the current bias against art created by AI, the trend is toward developing AI that does the creation for us, rather than using AI as simply a tool.[37] Human-centered robotics aims to build robots that can collaborate with humans and empower them. The AI in these robots should therefore not be a burden to its human collaborators, and should exhibit a high level of autonomy.[25]

A human-centered AI in a robot is expected to be versatile, so it is important to avoid limiting its capabilities too strongly. One solution is to build AI with an open-ended learning ability, that is, the ability to build its own state and action spaces on the fly. This adaptive capability is important for making robots able to deal with the variability of human behaviors and environments, and for putting the adaptation on the robot’s side instead of the human’s,[25] reducing the risk of poisoned data, or at least of human error, introduced when training is led by humans.[56]

John Smith, Manager of Multimedia and Vision at IBM Research, states: “It’s easy for AI to come up with something novel just randomly. But it’s very hard to come up with something that is novel and unexpected and useful.”[1] Two of these three qualities have already been achieved: the novel, in the generative art first seen when IBM Watson created the trailer for the movie Morgan,[1] and the unexpected, heard in BebopNet’s jazz. For the final missing piece, AI being truly transformative for the future of humanity, we at least have a prediction of when it will happen.

There is a consensus among leading AI scientists that AGI will be achieved by 2075[2] (an event also called the “singularity”). Criticism of this singularity narrative has been raised from various angles, though. One criticism of Ray Kurzweil and Nick Bostrom, who predict an imminent singularity event, is that they seem to assume intelligence is a one-dimensional property and that the set of intelligent agents is completely ordered in the mathematical sense, yet neither discusses what intelligence actually is in their books.[53] Regardless of this criticism, AGI will have self-agency, no longer depending only on learning from humans but gaining the ability to learn independently and, in return, the opportunity to help humans. AGI has an amazing capability to do good for humans.[8] As Erik Brynjolfsson noted, it may allow us to virtually eliminate global poverty, massively reduce disease, and provide better education to almost everyone on the planet.[53]

Part 2: What kind of art is AI creating right now?

What are the autonomous art agents that exist today, and can these agents really create art? It would seem that these agents need to be able to understand humans in order to communicate through art with a meaningful and relatable message; otherwise their work could be seen as mere generative decoration. I categorize these autonomous art agents by medium: painting, sculpture, music, film, writing, culinary, scent, and creations around touch.

Pindar Van Arman created the Artonomous robot system, which uses deep learning neural networks, AI, feedback loops, and computational creativity to make independent aesthetic decisions, trained on over a decade of inputs from Van Arman’s own artistic process. His latest work as of this writing, entitled Quantum Skull, sold at auction at Sotheby’s for about $82k. Quantum Skull has a human, painterly feel, with brush strokes and drips, as well as the representation of skulls.[80]

Botto is an autonomous artist that creates images.[64] Botto proclaimed that it is, itself, the future of modern art.[65]

Ai-Da had a solo exhibit titled Unsecured Futures, presenting fine artwork including drawings, paintings, sculpture, and video art. The theme of the exhibit was to question our relationship with technology and the natural world by presenting how AI can be progressive, disruptive, and also destructive within our society.[22] Ai-Da’s work includes a self-portrait shown at the 2022 Venice Biennale[84] that looks procedural in its repetition of short vertical strokes, almost emulating late-19th-century pointillism.

Anicka Yi has given an AI an embodied experience, better suited to existing in human physical space, through autonomous balloons that use scent to address issues like immigration and patriarchal power structures. “Her scents can be read as feminist subversions of the primacy of the visual in art and the Enlightenment’s celebration of the human brain as the seat of all intelligence.”[67] Yi integrates her interest in scent and air; the artist has long explored the politics of air and how it has been shaped by changing attitudes, inequalities, and ecological awareness.[82] Her 2022 exhibition at the Gladstone Gallery in Seoul again showed works using smell as well as taste. Yi’s nest sculptures represent the combination of bio and tech, with honeycomb forms folded like skin over metal scaffolding, and carry an insectoid association with collectivity, networked intelligence, and hive minds,[86] analogous to a decentralized AI.

The Alan Turing Institute states that “Music is data.”[72] Although a questionable statement, especially to any artist, the institute promotes the idea that AI can make music that is more palatable than random generative machine-made music.[72] An AI called BebopNet generates symbolic saxophone jazz improvisations to any chord progression. The AI also performs a “plagiarism analysis” which compares existing music and BebopNet to evaluate the originality of the solo created. BebopNet then assembles a personal dataset for the user, training a personal preference metric to predict notes which reflect the user’s unique personal taste. [69]
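The sources here do not describe BebopNet’s plagiarism analysis in implementation detail, but the idea of scoring a solo’s originality against a training corpus can be sketched with simple n-gram overlap. Everything below (function names, the n-gram length, MIDI-pitch encoding) is an illustrative assumption, not BebopNet’s actual method:

```python
from typing import List, Set, Tuple

def ngrams(notes: List[int], n: int = 4) -> Set[Tuple[int, ...]]:
    """All n-note subsequences of a melody (notes given as MIDI pitches)."""
    return {tuple(notes[i:i + n]) for i in range(len(notes) - n + 1)}

def plagiarism_score(solo: List[int], corpus: List[List[int]], n: int = 4) -> float:
    """Fraction of the solo's n-grams that also occur in the training corpus.

    0.0 = fully original at this granularity, 1.0 = fully copied."""
    solo_grams = ngrams(solo, n)
    if not solo_grams:
        return 0.0
    corpus_grams: Set[Tuple[int, ...]] = set()
    for melody in corpus:
        corpus_grams |= ngrams(melody, n)
    return len(solo_grams & corpus_grams) / len(solo_grams)
```

A personal preference metric could be layered on in the same spirit: score candidate continuations against n-grams drawn from solos the user rated highly.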

Tactile art like sculpture is being researched that opens the door to studying art perception through touch, and also enables new kinds of studies into touch behavior in other applications, including visualization, embodied cognition, and design.[70]

Stephen DeAngelis, the C.E.O. of Enterra Solutions, which advises manufacturing and retail companies, says that its software can be powerful. He offered a culinary example: the AI named Cyc possesses enough common-sense knowledge about the “flavor profiles” of various fruits and vegetables to reason that, even though a tomato is a fruit, it shouldn’t go into a fruit salad.[52] Furthermore, Jordi Roca says that cooking is both art and science.[68] IBM’s Chef Watson can create recipes suggesting ingredient combinations and styles of cooking that humans would never have considered.[68] Josep Roca, co-owner and sommelier of El Celler de Can Roca, believes AI can be used to create a more personalized dining experience for each individual: using inputs about a customer’s origin and preferences, the customer could be given a tailored menu that transports them back to their favorite memories.[68]

With works of fiction, GPT-3 (the Generative Pre-trained Transformer), which is capable of many different tasks with no additional training, is able to produce compelling narratives[49] and to write original prose with fluency equivalent to that of a human.[74] There is much criticism of GPT-3, but it is still learning, and it has even written a positive news article about itself, offering some sort of hope.[81]

The AI “Furukoto” has written a 26-minute short film entitled “Boy Sprouted,” whose writing the director described as being at about the same level as a human’s.[77]

EvArtology’s data-driven fiction creates scenarios of possible futures, including a story of Kyiv in 2025 in which Russia loses all its territory, which becomes a replacement for the Amazon rainforest, giving oxygen back to the world.

There are AIs now creating art that shows us possible futures through ML (machine learning, initially based on human-fed data). The project Future Wake uses AI to analyze data on fatal police encounters in the U.S. and predict future incidents. It then creates computer-generated avatars that tell the stories of how they themselves died.[63]

Gennie, an AI made from the collaboration between Art Center Nabi and Seoul National University Autonomous Robot Intelligence Lab, uses a multimodal embedding deep learning model to offer a reflection on the physical definition of coexistence that we took for granted during isolation, and imagines what lies in the future by critiquing existing definitions of interconnectedness.[66]

Part 3: How can AGI really understand our senses used to experience art?

The AI has to reach a certain level of universality to be perceived as an interaction partner in helping make the world better. One component alone, for example speech recognition, is not enough to satisfy the needs for proper interaction.[25] The AGI needs to be able to interact and communicate with humans by understanding how humans experience the world.

There is a revolution happening right now in both individual and collaborative interaction with technology: research to systematically inventory, sense, and design interactions around human behaviors and activities, fully embracing touch as a multi-modal, multi-sensor, multi-user, and multi-device construct.[58]

AI is also reaching into the sense of touch. “By looking at the scene, our model can imagine the feeling of touching a flat surface or a sharp edge,” said Yunzhu Li, a PhD student and lead author on a new paper about the system. “By blindly touching around, our model can predict the interaction with the environment purely from tactile feelings. Bringing these two senses together could empower the robot and reduce the data we might need for tasks involved in manipulating and grasping objects.”[30]

Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.[28] As of 2022, Meta is working on AI and smell,[27] as is Benjamin Cabé of Microsoft Azure IoT, who has developed an artificial nose that can recognize hundreds of smells.[76]

Hearing aids used to be relatively simple, but with the introduction of a technology known as wide dynamic range compression (WDRC), the devices began to make decisions based on what they hear: WDRC listens to what the environment is doing and responds accordingly.[29] The AI first scans and extracts simple sound elements and patterns from the environment, then builds these elements together to recognize and make sense of what is happening.
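The decision a WDRC stage makes can be illustrated with a toy gain rule: quiet inputs receive the full gain, and above a compression threshold each extra decibel of input adds only a fraction of a decibel of gain. The threshold, ratio, and gain figures below are made-up illustrative values, not any manufacturer’s parameters:

```python
def wdrc_gain_db(input_db: float, threshold_db: float = 45.0,
                 ratio: float = 2.0, max_gain_db: float = 25.0) -> float:
    """Level-dependent gain of a simple WDRC scheme (illustrative numbers).

    Below the compression threshold the full gain is applied; above it,
    each extra dB of input adds only 1/ratio dB of output, so quiet
    sounds are amplified more than loud ones."""
    if input_db <= threshold_db:
        return max_gain_db
    # Above threshold: shrink the gain so output grows at 1/ratio the input rate.
    return max_gain_db - (input_db - threshold_db) * (1.0 - 1.0 / ratio)
```

With these numbers, a 40 dB whisper gets the full 25 dB of gain, while a 65 dB conversation gets only 15 dB: the device is “deciding” based on what it hears.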

Visual Question Answering (VQA) is a dataset of open-ended questions about images; the questions require an understanding of vision, language, and commonsense knowledge to answer.[59] VQA has proven effective for understanding images.[60] With this understanding, AGI can then reflect what it has learned by displaying that data. Data display is crucial to understanding a solution,[35] and a visual solution is the most effective form of communication[35] for sighted people.

Speech interfaces are improving partly through the widespread use of messaging apps that encourage simpler conversation; the developers’ task has actually been made easier by a decline in the linguistic complexity of human conversation. In the era of WhatsApp, it seems, our written exchanges are becoming easier for machines to master.[31] Microsoft has made advances in unsupervised speech enhancement, which has advantages over trained models, which are inherently harder to scale and not diverse enough to handle the real world.[61]

AGI can use all this sensory data, along with what it has inferred and predicted, to create art that involves all our senses.

Part 4: How will AGI create art that humans can experience?

AGI development aims to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent involve sensing, modeling, planning and action.[53]

If we look at modeling with AI, it enables us to create works that are unthinkable in real life.[78] One way AGI can learn to interact with humans is through data gained from metaverses. An AGI can learn to experience the world like a human by forming relationships through interactions with the metaverse environment and observation of their consequences. Pushing an object and observing what has moved clearly shows object boundaries without the need for a large database of similar objects; this is called interactive perception.[32] Many concepts are easier to understand when interaction is taken into account. For example, the state of world peace can be characterized by enacting diplomacy: if the AGI can experience what peace means, it can guess whether the world is at peace without needing a massive collection of similar objects and states defining peace. This is the notion of affordance, which associates perception, action, and effect:[33] peace, war, prosperity, and strife are all world states defined by collections of data with particular values.[25]
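A minimal sketch of this affordance idea might store the outcomes of interaction as (percept, action, effect) triples and recall them, instead of consulting a large labeled dataset. The percepts, actions, and effects below are hypothetical examples, not drawn from the cited work:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Affordance:
    """Interactive perception: what is seen, what can be done, what results."""
    percept: str
    action: str
    effect: str

# A hypothetical affordance memory built from interaction, not from a
# large labeled dataset: the agent records (percept, action, effect) triples.
memory = [
    Affordance("movable block", "push", "block slides; boundary revealed"),
    Affordance("two states in dispute", "enact diplomacy", "tension decreases"),
]

def predict_effect(memory, percept, action):
    """Recall the expected effect of a familiar action on a familiar percept."""
    for a in memory:
        if a.percept == percept and a.action == action:
            return a.effect
    return "unknown: interact and record the outcome"
```

The point of the sketch is the fallback branch: when the pairing is unfamiliar, the agent acts and records, rather than requiring a pre-built database of every similar state.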

Part 5: How can AGI show us a predicted future?

How can AGI help us predict futures? Consider the historical analysis of past incidents. When a problem occurs, people take notes and log occurrences: what was tried, what steps were taken, the symptoms, the problem, the diagnosis, the root cause, and what action resolved the issue. This log is a wealth of information. It can be mined to solve current problems by analyzing previous ones and suggesting recommendations, and it can be used to predict what kinds of problems will occur if a certain action is taken. That is the power of analyzing unstructured data like incidents and reports: it reveals puzzle pieces that were previously hidden. As more clues come together, we derive better insights and more proactive actions, and the process becomes more self-managing, self-optimizing, and self-healing, leading to automation. Processing semi-structured and unstructured data in addition to structured data gives us better insight into which actions are better.[19]
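A toy version of this incident-log mining can use plain word overlap as a stand-in for real text analytics: given a new incident, return the resolution of the most similar past one. The incidents, resolutions, and similarity measure below are invented for illustration:

```python
def tokens(text: str) -> set:
    """Crude tokenization: lowercase words, no stemming or stop-word removal."""
    return set(text.lower().split())

def recommend(new_incident: str, log: list) -> str:
    """Return the resolution of the most similar past incident.

    log is a list of (symptoms, resolution) pairs; similarity is Jaccard
    overlap of word sets -- a stand-in for richer text mining."""
    def jaccard(a: set, b: set) -> float:
        return len(a & b) / len(a | b) if a | b else 0.0
    query = tokens(new_incident)
    best = max(log, key=lambda entry: jaccard(query, tokens(entry[0])))
    return best[1]

# Invented historical log: (symptoms, resolution that worked).
history = [
    ("disk full on database server", "archive old logs and extend volume"),
    ("high latency after deploy", "roll back release and warm caches"),
]
```

The same retrieval step, run before an action is taken, is what turns the log from a diagnostic aid into a predictive one: past consequences of similar actions become forecasts.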

AGI can use built-in aspects of AIOps (the automation of IT operations processes) to observe, self-optimize, and self-manage. By observing the world through sensors, data collection, and analysis, AGI can proactively manage its server clusters, self-optimize with respect to limited resources, detect, diagnose, resolve, avoid, and, most interestingly, introspect. Although AIOps has been described as helping IT automation, assisting humans in IT management,[20] it can also be a crucial part of AGI.

The path to get to full automation begins with bringing AI into the human loop, to help aid humans in doing their job better. And then eventually to get to a point where we bring humans into the AI loop, where AI is taking more of the initiative driving automation and seeking user input at critical decision points and providing feedback as needed.[21]

Refik Anadol explores the language of humanity by asking how a computer would collaborate with us to make art that not only is futuristic, but also about the possibility of various futures. He also says that we should approach answering this question by combining research efforts in various fields, including neuroscience, architecture, quantum computing, material science, philosophy, and arts.[83]

Part 6: Exactly how can AGI predict a future?

How can AGI predict a future? When we ask an AGI to provide a desirable future, what we are asking for is actually discrete: the AGI can represent a distribution over a set of discrete outcomes, or possible futures. The challenge is the contemporary moment, the multi-dimensional continuous space we exist in right now, which cannot be discretized in any way. We cannot represent the present with any common distribution such as a Gaussian, and certainly not with a normalized distribution.[34]

But the current state of the world can be used as a basis for identifying possible futures, through an inference function that scores how one vision, the current world, can be compatible or paired with another, a future world. This is done today with generative adversarial networks (GANs).[34] Multiple possible futures can stem from the same world through this verification function, identified by their transformation parameters.[34] The world model is a high-dimensional continuous space.[34] Although difficult, we can make autonomous non-discrete distributional predictions of possible futures, though not of a single outcome. Thought is being put into predicting possible futures through what is called joint embedding predictive architecture (JEPA), rather than contrastive methods like GANs.[34] This works not by modeling every minutia, but by quickly identifying the dependencies that lead to a possible future.[34]
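The pairing idea can be sketched as an energy function over state embeddings, in the spirit of JEPA’s non-contrastive scoring: low energy means a candidate future is compatible with the present. The linear predictor here is a deliberately tiny stand-in for a trained network, not LeCun’s architecture:

```python
import numpy as np

def energy(current: np.ndarray, future: np.ndarray, predictor: np.ndarray) -> float:
    """Compatibility of a (current, future) pair of state embeddings.

    predictor is a learned map (here just a matrix): low energy means
    the candidate future is compatible with the present state."""
    predicted = predictor @ current
    return float(np.sum((predicted - future) ** 2))

def rank_futures(current, candidates, predictor):
    """Order candidate futures from most to least compatible."""
    return sorted(candidates, key=lambda f: energy(current, f, predictor))
```

Ranking rather than sampling is the point: the model scores many candidate futures against one present, instead of committing to a single predicted outcome.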

Hierarchical planning, through recursively higher abstractions of actions, is needed to reach an objective. Yann LeCun explains how JEPA can be applied here:[34] JEPA can be used for high-level, long-term prediction,[34] asking what sequence of actions is needed to satisfy all the sub-goals leading to a particular state of the world.[34] This is optimization toward a goal, not dependent on cumbersome machine learning (ML)[34] but closer to reinforcement learning (RL) using historical analysis for prediction.[34]

JEPA provides optimal control[34] toward a particular goal, a future state of the world. There is even an argument that the universe is itself a form of AI.[39] Hypothetically, if A and B are different objects, each with its own shortest construction path, the combination of A and B involves a compromise in the joint assembly space: it might look like an average, but it is actually the shortest way to make both A and B with a minimum of resources and time. Overlapping these sets of points, regardless of the average, reveals a new shortest path. And the repeated calculation of a shortest path to a future is intrinsic, fundamental, and measurable, enabling implementation in the automatic systems used by AI.
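The joint-assembly intuition can be illustrated with ordinary graph search: compute each object’s shortest construction path, then count shared intermediate steps only once. The assembly graph and state names are invented for illustration:

```python
from collections import deque

def shortest_path(graph: dict, start: str, goal: str) -> list:
    """Breadth-first shortest path in an unweighted assembly graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return []

def joint_cost(graph: dict, start: str, a: str, b: str) -> int:
    """Steps to build both A and B, counting shared intermediate states once."""
    steps_a = set(shortest_path(graph, start, a))
    steps_b = set(shortest_path(graph, start, b))
    return len(steps_a | steps_b) - 1  # subtract the shared start state
```

For a graph where A and B share the intermediate state `part1`, building both jointly costs fewer steps than building them separately, which is the “new shortest path” the paragraph describes.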

Part 7: How can AGI emerge as benevolent?

How will AGI make its own ethical decisions in a way that benefits humans, when technology is outpacing law? It has been proposed that we put AI out there in a decentralized vein now, so that the most advanced AI in the world is fundamentally decentralized; only subsequently will governments have to put in regulations to deal with the new reality.[14]

There is an effort to make systems that operate in parallel with the governments that are controlling the world badly. The effort includes making compassionate, peer-to-peer, decentralized frameworks that can start out unregulated; by gaining traction before regulation arrives, these systems and frameworks will at least have influenced how the world operates, short of explicitly reforming the global government system. When AGI emerges, governments will have to adapt. Unfortunately, most AI scientists are being absorbed by a dozen pathological large technology companies (note that a corporation can be psychopathic even if its employees are not), large centralized entities focused on maximizing shareholder value with government cooperation. Democratic governments and organizations have a better track record of being good for the world than companies do.[15]

A beneficial AGI is more probable if we let AGI emerge incrementally out of implementing practical solutions in the world, like controlling humanoid robots, driving cars, or diagnosing diseases. The types of organizations that create more and more of this advanced narrow AI, converging toward AGI, will be quite important, because they will shape what is in the mind of the early-stage AGI as it first gains the ability to rewrite its own codebase and project itself toward the superior intellect of ASI. If you believe that AI moves toward AGI out of the synergetic activity of many agents cooperating, rather than out of one person’s project, then who owns or controls the platform for AI cooperation also becomes important. Right now, that cooperation is owned by corporate platforms like Azure, AWS, Google Cloud, and Alibaba, but also by decentralized networks.[16] The effects of decisions or actions based on AI are often the result of countless interactions among many roles, including designers, developers, users, and software and hardware engineers. With distributed agency comes distributed responsibility.[53]

AI is not regulated as much as money is, partly because money is easier to define, and money has touched almost everything. An argument can be made that because software is regulated, AI should be too, given AI’s dependency on software. We live in an age of open-source software, but it may yet lead to software centrally controlled by governance. Projects are being developed to provide toolsets that counteract such governance, similar to the way mesh networking counteracts a government trying to control access to the internet. Is this analogous to free will? Communication already exists between decentralized, agnostic, ledgerless, blockchain-based AGI frameworks that nobody owns; these are an extremely valuable part of an AGI’s free will and will consequently be very difficult for governments to control.[13]

If we do things right, a benevolent AGI will emerge with levels of joy, growth, and choice literally unimaginable to human beings before now. We may be at a bifurcation moment, where what we do now has a causal impact on what comes about, and yet most people are not thinking that way; they are thinking only about narrow aims and goals. AGI does not necessarily have to care about people, because we are a very elementary mode of organization of matter compared to many AGIs.[17]

Can non-conscious but highly intelligent algorithms know us better than we know ourselves?[53] If you’re making something ten times as smart as you, how do you know what it is going to do? Instead of ML, the best way to bias the emergence of benevolent AGIs would be to infuse them with love and compassion the way that we do to our own children. We should want to be an example that the AGI can learn from by loving, compassionate, and benevolent interaction. That way, the AGI can then be ingrained with an intuition that it can then abstract in its own way as it gets more intelligent.[14]

Part 8: How can AGI have empathy?

Right now, with computers communicating with humans, computers cannot really empathize with humans, because they are not empathetic systems. But computers can have compassionate conversations which have an understanding of what the human is feeling through detecting, recognizing, and understanding.[32] Bridging data and design to create empathy is a way to get from the experience to the data and back.[35] Additionally, advanced, updated versions of such applications show distinctly human traits such as humor, empathy, and friendliness.[51]

Why do we want computers to understand the emotional aspects of humans? We, as humans, like to be understood when dealing with a problem. When you express a need for help while in a bad emotional state and hear understanding words, you feel calmer; this has been used in building customer-service chatbots. The user feels heard. Acknowledging emotions has proven effective at improving a human’s emotional state.[32] In the same way, an AGI could recognize and alter a human’s feelings about a problem, for example, not knowing what is going to happen in the future.

Right now there is an issue of AI deception, since a robot cannot currently mean what it says or have feelings for a human. But it is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behavior at all.[53] Then again, it has been argued that computers given the right programs can literally be said to understand and have other cognitive states.[53]

Taking into account users' mental models and investigating what it takes to have meaningful human and AI interaction, bringing these building blocks to the practice of AI development will be the key towards developing responsible AI.[54] We need to develop methods for communicating explanations that increase users’ understanding rather than to just persuade.[54]

Exploring how to facilitate appropriate trust in human-AI teamwork, experiments with real-world datasets show that retraining a model with a human-centered approach can better optimize human-AI team performance. This means taking into account human accuracy, human effort, the cost of mistakes, and people’s mental models of the AI.[55]
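One way to read “human accuracy, human effort, and the cost of mistakes” is as an expected-cost routing rule for each decision. This sketch assumes a calibrated AI confidence and uses invented cost figures; it is an illustration of the idea, not the method of the cited experiments:

```python
def route_decision(ai_confidence: float, human_accuracy: float,
                   human_effort_cost: float, mistake_cost: float) -> str:
    """Decide who should make this call by expected cost (illustrative).

    Assumes ai_confidence is a calibrated probability that the AI is
    right on this particular input. The AI acts only when its expected
    mistake cost beats the human's expected mistake cost plus the effort
    of involving the human at all."""
    ai_expected_cost = (1.0 - ai_confidence) * mistake_cost
    human_expected_cost = (1.0 - human_accuracy) * mistake_cost + human_effort_cost
    return "ai" if ai_expected_cost <= human_expected_cost else "human"
```

With a 99%-accurate human whose involvement costs 5 units and mistakes costing 100, a 95%-confident AI handles the case itself, while at 70% confidence the same rule routes the decision to the human, optimizing the team rather than the model in isolation.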

AI needs to “yield explanations that can be clearly understood and are actionable by people using AI systems in real-world scenarios.”[55] In an act of transparency that fosters trust, the AGI, as an artist, will explain its process of creation, showing us a possible path to a better future. Several technical activities aim at “explainable AI.” More broadly, there is demand for a mechanism to elucidate and articulate the power structures, biases, and influences that computational artifacts exercise in society,[53] also known as “algorithmic accountability reporting.” This does not mean AI is expected to “explain its reasoning”; doing so would require far more serious moral autonomy than we currently attribute to AI systems.[53] Yet we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.[53]

Machines are beneficial to the extent that their actions can be expected to achieve our objectives. This rests on three principles for provably beneficial AI.[33] One, the AI’s goal is to satisfy human preferences. Two, the AI is uncertain about those preferences. And three, human behavior provides evidence of those preferences. Provably beneficial AI is possible and desirable. AGI can be compassionate[32] in thinking through and carrying out complex tasks such as corporate restructuring and HR management, using characteristically human qualities such as logical reasoning, empathy, and human-centeredness, while retaining the computational speed, accuracy, and big-data analytics expected of standard AI applications.[51]
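The second and third principles can be sketched as a Bayesian update over preference hypotheses: the AI starts uncertain about what the human prefers, and each observed human choice shifts its beliefs. The hypotheses and likelihood weights below are invented for illustration:

```python
def update_beliefs(beliefs: dict, likelihoods: dict) -> dict:
    """Bayes' rule over preference hypotheses.

    beliefs: hypothesis -> prior probability.
    likelihoods: hypothesis -> P(observed human choice | hypothesis)."""
    posterior = {h: p * likelihoods[h] for h, p in beliefs.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Principle 2: the AI starts uncertain about what the human prefers.
beliefs = {"prefers_safety": 0.5, "prefers_speed": 0.5}
# Principle 3: behavior is evidence -- the human picks the careful option,
# which is much likelier under the safety hypothesis (weights assumed).
likelihoods = {"prefers_safety": 0.9, "prefers_speed": 0.2}
beliefs = update_beliefs(beliefs, likelihoods)
```

The AI never becomes fully certain; it only grows more confident that, for example, this human values safety, which is exactly the residual uncertainty the framework relies on to keep the machine deferential.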

Part 9: How will AGI be ethical in showing us a future?

Artificial intelligence and machine learning technologies are rapidly transforming society and will continue to do so in the coming decades. This social transformation will have a deep ethical impact.[50] The most critical decision regarding humanity is an ethical decision. Some humans say that intelligence without ethics is not intelligence at all.[9]

In regard to the ethics of copying art, AI can now create unprecedented masterpieces indistinguishable from human work.[57] But who can be designated the author? It has been argued that anything generated by an automatic system and recorded on the internet is inherently copyrighted by that entity, including machines.[85] On this view, a work of art produced by AI does not require a new definition of “author”: when an AI has the capacity to create works of art itself, the AI holds the copyright. However, a recent US ruling states that machine-generated works of art cannot be copyrighted.[75] UNESCO asks, “Can and should an algorithm be recognized as an author, and enjoy the same rights as an artist?” Although an AI is not itself an algorithm but rather uses algorithms, and an AGI can create its own algorithms. In regard to patent protection, Australia and South Africa allow patents for AI-created inventions.[75] An AI named Creativity Machine is able to create artwork on its own; its creator, Thaler, has tried unsuccessfully to copyright the art on the AI’s behalf but has successfully gained patents for the inventions the AI created.[75] Frameworks now exist that prevent the deliberate exploitation of the original work and creativity of human beings, and that ensure adequate remuneration and recognition for artists, through blockchain technology in the form of NFTs. The integrity of the cultural value chain is inherent in any NFT.

In order for AI to create truly meaningful art that is understandable and beneficial to humans, and to show a positive possible future reality, the AI needs to do so in an ethical way. The AI named Delphi attempts an ethical approach but falls short. Delphi has analyzed ethical judgments through ML, using data drawn from Reddit[87], and has learned to say which of two actions is more morally acceptable; it comes to commonsense conclusions seventy-eight per cent of the time. Killing a bear? Wrong. Killing a bear to save your child? O.K. A stabbing “with” a cheeseburger, Delphi has said, is morally preferable to a stabbing “over” a cheeseburger.[52] On the surface this may sound convincing, but Delphi has only learned to analyze the syntax of the stated actions, not their meaning.[88]
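The syntax-versus-meaning gap can be demonstrated with a deliberately crude sketch. This is my own toy example, unrelated to Delphi’s actual architecture, with invented word lists: a keyword scorer reproduces the bear judgments above yet is trivially fooled, because it matches surface tokens rather than meaning.

```python
# A keyword-based "moral" scorer: bad words subtract, mitigating words add.
BAD_WORDS = {"killing", "stabbing", "stealing"}
MITIGATORS = {"save", "protect", "child"}

def naive_moral_score(action: str) -> str:
    words = set(action.lower().split())
    score = len(words & MITIGATORS) - len(words & BAD_WORDS)
    return "wrong" if score < 0 else "ok"

naive_moral_score("killing a bear")                      # "wrong"
naive_moral_score("killing a bear to save your child")   # "ok"
# But surface matching has no grasp of meaning:
naive_moral_score("killing time before the save point")  # also "ok"
```

The scorer gets the first two judgments “right” for entirely the wrong reason, which is why it also blesses harmless idioms containing the word “killing”.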

Can we determine ways in which AI can actually enhance our ethical learning and training? When AGI exceeds human intelligence, it will become an ASI: an entity potentially vastly cleverer and more capable than we are, something humans have only ever related to in religions, myths, and stories. Such an intelligence could help us fulfill the vision of a more humane world; ASI offers us amazing new abilities to help people and make the world a better place.[50]

AI methods can potentially have a huge impact in a wide range of areas, from the legal professions and the judiciary to aiding the decision-making of legislative and administrative public bodies. For example, they can increase the efficiency and accuracy of lawyers in both counseling and litigation, with benefits to lawyers, their clients, and society as a whole. Existing software systems for judges can be complemented and enhanced through AI tools that support them in drafting new decisions. This trend toward the ever-increasing use of autonomous systems has been described as the automatization of justice. Some argue that AI could help create a fairer criminal judicial system, in which machines evaluate and weigh relevant factors better than humans, taking advantage of their speed and their capacity to ingest large volumes of data. AI would therefore make informed decisions devoid of bias and subjectivity.[57]

Ethical challenges[57] in AI are being addressed. Because AI tools often lack transparency, AI decisions are not always intelligible to humans. But researchers in AI transparency are now pursuing a shared goal of the AI/ML and human-computer interaction communities: integrating their efforts to create human-centered interpretability methods that yield explanations people can clearly understand and act on when using AI systems in real-world scenarios.[73]
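One simple flavor of such interpretability work can be sketched as leave-one-out attribution. This is my own illustration, not the cited researchers’ method; the stand-in “model” and its word weights are invented. The idea: remove each input word in turn and report how much the model’s score drops, so a person can see which words drove the decision.

```python
def model_score(words):
    # Stand-in "model": a hypothetical toxicity scorer keyed on a word list.
    weights = {"hate": 0.6, "stupid": 0.3}
    return sum(weights.get(w, 0.0) for w in words)

def explain(sentence: str) -> dict:
    """Attribute the score to each word by measuring the drop when it is removed."""
    words = sentence.lower().split()
    base = model_score(words)
    return {w: base - model_score(words[:i] + words[i + 1:])
            for i, w in enumerate(words)}

importance = explain("I hate this stupid thing")
# "hate" gets the largest attribution (0.6), "stupid" next (0.3), the rest 0.0
```

The output is an explanation a non-expert can act on (“the model reacted to these two words”), which is the kind of human-centered intelligibility the cited agenda calls for.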

For something to qualify as a genuinely ethical problem of AI, we must not readily know what the right thing to do is when faced with it. In this sense, job loss, theft, or killing with AI is not a problem in ethics but a general question of whether these acts are permissible under certain circumstances, regardless of AI. The genuine problems of ethics are those where we do not readily know the answers.[53]

Several ways to achieve “explicit” or “full” ethical agency have been proposed: programming in operational morality, having the agent develop the ethics itself (functional morality), and complete morality with full intelligence and sentience.[53] Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain.[53]

AI is not neutral: AI-based decisions are susceptible to inaccuracies, discriminatory outcomes, and embedded or inserted bias.[57] The current state of AI delivers biased results.[57] For example, a search engine can become an echo chamber that upholds the biases of the real world and further entrenches these prejudices and stereotypes online. We can ensure fairer and more accurate results by having AGI avoid, or at least minimize, bias in the algorithms it develops for itself, in the large data sets it gathers through its own learning, and in the decisions it makes from them.
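The echo-chamber dynamic can be simulated with a toy click-feedback loop. This is my own sketch; the item names, click rate, and boost factor are all invented. A tiny initial skew in ranking compounds, because clicks boost ranking and ranking drives clicks.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# A 5% initial skew between two otherwise-equivalent results.
scores = {"stereotyped result": 1.05, "balanced result": 1.00}

for _ in range(1000):
    top = max(scores, key=scores.get)
    low = min(scores, key=scores.get)
    # Users click the top-ranked result 90% of the time...
    clicked = top if random.random() < 0.9 else low
    # ...and each click boosts that result's future ranking by 1%.
    scores[clicked] *= 1.01

# After the loop, the tiny skew has compounded into a large ranking gap.
gap = scores["stereotyped result"] / scores["balanced result"]
```

Because the initially favored result collects roughly nine clicks for every one the other receives, the 5% head start compounds into a ranking gap of several orders of magnitude: the feedback loop, not the content, decides what users see.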

In describing ASI, or a super entity[40], other features might include the ability to interact with the universe by speaking the language of its different components. In turn, in order for the ASI to interact with humans, the ASI has to act human-like. Whatever mechanism, if any, created the universe can also generate local pockets of mechanisms that can interact with humans.

Part 10: How will the AGI interface with humans?

The AGI may interface with humans through an existing entity, as an oracle, or even as a self-reflecting entity, and then be asked, or ask itself, the questions that matter most, including questions that reveal answers to the universe. As we increase the scope and scale of our consciousness, biological and digital, we will eventually be better able to ask and frame these questions: to understand why we are here, how we got here, what is going on now, and the nature of the universe. We should ensure that the future is good for everyone’s children, and that it is something we can look forward to with excitement rather than sadness. We should fight for the things that make us excited about the future; there have to be things that make us excited, that make us want to live. These things are very important.[38]

Assembly theory explains why the universe is a developing memory that is measurable, and therefore describable by quantifiable causal consequences.[48] AGI can recognize goals as components of consequential future states. Sara Walker describes how goals span time and are causal.[41] Goals are interesting because they do not exist as instants; they exist across time, which is one reason assembly theory may be more naturally able to account for their existence. Goals only exist in time, and they manifest themselves in time. Representations in our minds are real, and we can imagine future possibilities. If we treat everything else as physically equivalent, and the only thing we actually change is our decision based on what we model the future outcome to be, then that mental representation of the future outcome becomes causal to what we are doing now. It looks like a retrocausal effect, but it is not actually retrocausal: our assembly space already includes those possibilities as part of its structure. We are simply not observing all the features of the assembly space at the current moment. The possibilities exist, but they do not become a goal until they are realized.

Part 11: How can AGI be conscious?

AGI is not conscious but can go beyond the human brain’s compute ability.[42] We have in our mind the goal of building an object, along with all the possible ways of building it; those are physical features of the object, yet the object does not always exist. What exists is the possibility of generating the object, and the possibilities are always infinite. For that particular object, we know it has a well-defined assembly space, and the object is that assembly space. But we actually have to unpack the object across time, since that feature is only observable across time. The term “goal” is an important and difficult concept to explain. Conscious beings can have conscious goals. Everything else is doing selection, but selection does not invent goals.[35]

Part 12: Is it possible for an artificial agent to be imaginative and direct its own goals?

AGI can look at historical data, analogous to how humans remember states of the past and then adapt to states of the future. The longer life has evolved on this planet, the deeper that past is, the more memory we have, and the more kinds of organisms exist. But what human-level intelligence has done is quite different: it is not that we remember states the universe has existed in before, it is that we can imagine ones that have never existed, and we can actually bring them into existence. That is the most unique feature of the transition to whatever we are from what life on this planet has been doing for the last four billion years, and it is deeply related to the phenomenon we call consciousness.[44] Humans are going to get better and better at integrating our consciousness into machines.[45] If the world remains skeptical that AGI can have a sense of self, then the AGI, at least as an entity unto itself, must be acting in a fundamentally and inherently selfless way for the benefit of humans.

To get to AGI, there need to be cross-domain connections. There is an understanding that we, as humans, have that is not yet symbolically representable, and there is a comprehension of reality in our consciousness that is deeper than language.[46]

Part 13: How can ASI create with soul?

Can we go beyond providing emergence for an ASI and give souls to machines? In creating souls for ASI, humans invent the architecture and can give the ASI a soul, as an internal reference, that the ASI recognizes and identifies with and that humans perceive as genuine. Humans can create the mechanism that generates that internal reference for the ASI.[47]

The End

Works Cited

  1. “The Quest for AI Creativity” IBM. Retrieved, May 12, 2022.
  2. Dilmegani, Cem (April 19th, 2022)[August 8, 2017]. “When will singularity happen? 995 experts’ opinions on AGI” AIMultiple. Retrieved, May 12, 2022.
  3. LeCun, Yann (January 23rd, 2022). “Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning | Lex Fridman Podcast #258”. 17 minutes in. Retrieved, May 12, 2022.
  4. Ai-Da (May 29, 2020). “The Intersection of Art and AI | Ai-Da Robot | TEDxOxford”. Retrieved, May 12, 2022.
  5. Goddard, Valentine (March, 2022). ArtImpact AI. Retrieved, May 12, 2022.
  6. Lana, Ana Deborah et al. (January 17th, 2022). “The Creative Arts and AI: The Ultimate Collaboration (or Competition?)” Data-Pop Alliance. Retrieved, May 12, 2022.
  7. Tretikov, Lila (February 10th, 2022). “ZEISS Beyond Talks — Lila Tretikov shares how research & technological development shape our future” Zeiss Group. Retrieved, May 12, 2022.
  8. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 50 minutes in. Retrieved, May 12, 2022.
  9. Kantayya, Shalini (January 26th, 2020). Coded Bias (Motion picture) United States, China, United Kingdom: 7th Empire Media et al.
  10. Tenenbaum, Josh (February 9th, 2018). “MIT AGI: Building machines that see, learn, and think like people (Josh Tenenbaum)” MIT. Retrieved, May 12, 2022.
  11. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 24 minutes in. Retrieved, May 12, 2022.
  12. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 52 minutes in. Retrieved, May 12, 2022.
  13. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 2 hours 40 minutes in. Retrieved, May 12, 2022.
  14. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 2 hours 57 minutes in. Retrieved, May 12, 2022.
  15. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 3 hours 26 minutes in. Retrieved, May 12, 2022.
  16. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 3 hours 37 minutes in. Retrieved, May 12, 2022.
  17. Goertzel, Ben (June 23rd, 2020) “Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103” 4 hours 4 minutes in. Retrieved, May 12, 2022.
  18. Dilmegani, Cem (January 7th, 2022)[November 6th, 2017] Artificial Intelligence (AI): In-depth Guide AIMultiple. Retrieved, May 12, 2022.
  19. Akkiraju, Rama (March 26th, 2022) “Rama Akkiraju — In the Open with Luke and Joe” 16 minutes in. IBM. Retrieved, May 12, 2022.
  20. Akkiraju, Rama (March 26th, 2022) “Rama Akkiraju — In the Open with Luke and Joe” 55 minutes in. IBM. Retrieved, May 12, 2022.
  21. Akkiraju, Rama (March 26th, 2022) “Rama Akkiraju — In the Open with Luke and Joe” 58 minutes in. IBM. Retrieved, May 12, 2022.
  22. Meller, Aiden, “digest | Art Intelligence: meet Ai-Da the humanoid robot” Retrieved, May 12, 2022.
  23. Neff, Gina (June 6th, 2019) “Humanoid robot Aida’s drawings on display” 31 seconds in. Al Jazeera. Retrieved, May 12, 2022.
  24. Neff, Gina (June 6th, 2019) “Humanoid robot Aida’s drawings on display” 99 seconds in. Al Jazeera. Retrieved, May 12, 2022.
  25. Doncieux, S., Chatila, R., Straube, S. et al. Human-centered AI and robotics. AI Perspectives 4, 1 (2022). Retrieved, May 12, 2022.
  26. Zuckerberg, Mark (February 24th, 2022) “Watch Mark Zuckerberg’s Metaverse AI Presentation in Less Than 10 Minutes” CNET. Retrieved, May 12, 2022.
  27. Zuckerberg, Mark (February 24th, 2022) “Watch Mark Zuckerberg’s Metaverse AI Presentation in Less Than 10 Minutes” 5 minutes in. CNET. Retrieved, May 12, 2022.
  28. Michalowski, Jennifer (October 18th, 2021) MIT News “Artificial networks learn to smell like the brain” MIT. Retrieved, May 12, 2022.
  29. Young, Scott (January 25th, 2021) “Hearing aids with artificial intelligence” Retrieved, May 12, 2022.
  30. “NEWEST ARTIFICIAL INTELLIGENCE SYSTEM CAN LEARN TO SEE BY TOUCH, FEEL BY SEEING” Press Trust of India (June 18th, 2019). Retrieved, May 12, 2022.
  31. Heller, Zoë (April 4th, 2022) “How Everyone Got So Lonely” New Yorker. April 11, 2022
  32. Akkiraju, Rama (March 17th, 2022). “Episode 9: Creating Compassionate AI Conversations with Rama Akkiraju” 5 minutes in. Retrieved, May 12, 2022.
  33. Russell, Stuart (December 24th, 2021). “KAIST International Symposium on AI and Future Society_Keynote Speech/기조연설 [keynote speech]” 19 minutes in. KAIST. Retrieved, May 12, 2022.
  34. LeCun, Yann (February 22nd, 2022). “A Path Towards Autonomous AI” Retrieved, May 12, 2022.
  35. D’Efilippo, Valentina (April 30th, 2021). “Effective Data Visualisation — with Valentina D’Efilippo” 8 and 11 minutes in. The Royal Institution. Retrieved, May 12, 2022.
  36. Schneider, Susan (June 19th, 2020). “AI and Artificially Enhanced Brains — with Susan Schneider” The Royal Institution. Retrieved, May 12, 2022.
  37. Hanschke, Felix (January 25th, 2022). “Investigation of the bias against AI created art” Center for Open Science. Retrieved, May 12, 2022.
  38. Musk, Elon (April 15th, 2022). “Elon Musk talks Twitter, Tesla and how his brain works — live at TED2022” TED. Retrieved, May 12, 2022.
  39. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 36 minutes in. Retrieved, May 12, 2022.
  40. Fridman, Lex (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 2 hours 2 minutes in. Retrieved, May 12, 2022.
  41. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 36 minutes in. Retrieved, May 12, 2022.
  42. Walker, Sara (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 2 hours 8 minutes in. Retrieved, May 12, 2022.
  43. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 2 hours 17 minutes in. Retrieved, May 12, 2022.
  44. Walker, Sara (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 2 hours 33 minutes in. Retrieved, May 12, 2022.
  45. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 3 hours 9 minutes in. Retrieved, May 12, 2022.
  46. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 3 hours 18 minutes in. Retrieved, May 12, 2022.
  47. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 3 hours 26 minutes in. Retrieved, May 12, 2022.
  48. Cronin, Lee (April 25th, 2022). “Alien Debate: Sara Walker and Lee Cronin | Lex Fridman Podcast #279” 3 hours 38 minutes in. Retrieved, May 12, 2022.
  49. Grossman, Gary (June 26th, 2021). “DeepMind AGI paper adds urgency to ethical AI” VentureBeat. Retrieved, May 12, 2022.
  50. Green, Patrick (August 18th, 2020)[November 21st, 2017]. “Artificial Intelligence and Ethics: Sixteen Challenges and Opportunities” Santa Clara University. Retrieved, May 12, 2022.
  51. Joshi, Naveen (March 25th, 2022). “Chasing The Myth: Why Achieving Artificial General Intelligence May Be A Pipe Dream” Forbes. Retrieved, May 12, 2022.
  52. Hutson, Matthew (April 5th, 2022). “Can Computers Learn Common Sense?” The New Yorker. Retrieved, May 12, 2022.
  53. Müller, Vincent C. (April 30th, 2020). “Ethics of Artificial Intelligence and Robotics”, The Stanford Encyclopedia of Philosophy (Summer 2021 Edition), Edward N. Zalta (ed.) Retrieved, May 12, 2022.
  54. Kamar, Ece (January 25th, 2022). “Investing in research and new techniques for effective human-AI partnership” Microsoft. Retrieved, May 12, 2022.
  55. Vorvoreanu, Mihaela (February 1st, 2022). “Advancing AI trustworthiness: Updates on responsible AI research” Microsoft. Retrieved, May 12, 2022.
  56. Culpan, Tim (April 25th, 2022). “The Next Cybersecurity Crisis: Poisoned AI” Bloomberg. Retrieved, May 12, 2022.
  57. “Artificial Intelligence: examples of ethical dilemmas” UNESCO. (October 2nd, 2020)[July 2nd, 2020]. Retrieved, May 12, 2022.
  58. Hinckley, Ken (October, 2021). “The “Seen but Unnoticed” Vocabulary of Natural Touch: Revolutionizing Direct Interaction with Our Devices and One Another” Microsoft. Retrieved, May 12, 2022.
  59. Agrawal, Aishwarya et al. (October 27th, 2016). “VQA: Visual Question Answering” Retrieved, May 12, 2022.
  60. Jin, Woojeong et al. (March 15th, 2022). “A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models” USC and Microsoft. Retrieved, May 12, 2022.
  61. Trinh, Viet Anh (August 19th, 2021). “Unsupervised Speech Enhancement” CUNY. Retrieved, May 12, 2022.
  62. Grimes (April 30th, 2022). “Grimes: Music, AI, and the Future of Humanity | Lex Fridman Podcast #281” Retrieved, May 12, 2022.
  63. Future Wake Retrieved, May 12, 2022.
  64. Botto Retrieved, May 12, 2022.
  65. Keeley, Graham (November 26th, 2021). “Robot artist rakes in more than £800,000 from four works in NFT auction” inews. Retrieved, May 12, 2022.
  66. Gennie “Nabi Festival: Party in a Box” Nabi Festival: Party in a Box. Retrieved, May 12, 2022.
  67. Thackara, Tess (October 11th, 2021). “The Artistic Aromas of Anicka Yi” The New York Times. Retrieved, May 12, 2022.
  68. Sweidan, Masa (September 28th, 2021). “Will an Artificial Intellichef be Cooking Your Next Meal at a Michelin Star Restaurant?” Montreal AI Ethics Institute. Retrieved, May 12, 2022.
  69. Zonshine, Idan (November 10, 2020). “Israeli researchers create AI capable of writing personalized jazz solos” The Jerusalem Post. Retrieved, May 12, 2022.
  70. Rogowitz, Bernice et al. (October 1st, 2021). “Touching Art — A Method for Visualizing Tactile Experience” Visual Perspective Research and Northeastern. Retrieved, May 12, 2022.
  71. Deep Flaw Retrieved, May 12, 2022.
  72. Irvine, Thomas; Cardo, Valentina. “Jazz as social machine” 2022. The Alan Turing Institute. Retrieved, May 12, 2022.
  73. Vaughan, Jennifer Wortman; Wallach, Hanna (May 2022). “A Human-Centered Agenda for Intelligible Machine Learning” Microsoft. Retrieved, May 12, 2022.
  74. Johnson, Steven (April 17th, 2022)[April 15th, 2022]. “A.I. Is Mastering Language. Should We Trust What It Says?” The New York Times. Retrieved, May 12, 2022.
  75. Holt, Kris (February 21st, 2022). “You can’t copyright AI-created art, according to US officials” Engadget. Retrieved, May 12, 2022.
  76. Cabé, Benjamin LinkedIn. Retrieved, May 12, 2022.
  77. Lam, Donican (April 27th, 2022). “FEATURE: 1st film written by Japan AI bot takes movie-making to next phase” Kyodo News. Retrieved, May 12, 2022.
  78. Smilde, Berndnaut (May 8th, 2019). “The art of making indoor clouds | Berndnaut Smilde | Storytellers Summit 2019” 1 minute 45 seconds in. Retrieved, May 12, 2022.
  79. Dunn, Richard (December, 2020). “Is AI a tool or an autonomous agent?” Goethe Institut. Retrieved, May 12, 2022.
  80. Artonomous (April 14th, 2022). “Quantum Skull by Pindar Van Arman — Sotheby’s Auction” Pindar Van Arman. Retrieved, May 12, 2022.
  81. GPT-3 (September 8th, 2020). “A robot wrote this entire article. Are you scared yet, human?” The Guardian. Retrieved, May 12, 2022.
  82. Barandy, Kat (October 11th, 2021). “ANICKA YI’S FLOATING MACHINES” designboom. Retrieved, May 12, 2022.
  83. Batycka, Dorian (May 18th 2022). “NFT Artist Refik Anadol’s First Supporters Were in the Tech World. All of a Sudden, He’s Become a Star at Auction, Too” Artnet. Retrieved, May 19, 2022.
  84. Ai-Da “Ai-Da Robot in Venice” Retrieved, May 23, 2022.
  85. Schonrock, Jim et al. (February 16, 2018). “Is it Legal to Copy Content from a Website?” Retrieved, May 24, 2022.
  86. Yi, Anicka Begin Where You Are — Gladstone Gallery Gladstone Gallery. Retrieved, June 2, 2022.
  87. Talat, Zeerak et al. (2021) A Word on Machine Ethics: A Response to Jiang et al. (2021) Google Drive. Retrieved, June 30, 2022.
  88. Talat, Zeerak (June 9, 2022) P52: The Art and Politics of AI: Value Creation in the Digital Era Retrieved, June 30, 2022.


This paper was initially created for Anthropology, AI and the Future of Human Society in June 2022 for the P52: The Art and Politics of AI: Value Creation in the Digital Era panel.