Autonomous Art Agents

In 2013, the physical cloud artist Berndnaut Smilde said, “I cannot really control the cloud.” I found this poetic, referring not only to his physical clouds but also to the cloud as in data, which we now do control through automated IT operations known as AIOps. But Smilde’s statement about not having control of the cloud is, I think, playing a longer game, forecasting, with the cloud, a cultural climatic shift toward autonomous, decentralized AI.

So now we have the intelligent cloud, and we are on the cusp of witnessing AIs become truly autonomous, also known as artificial general intelligence (AGI), with a further prediction of an emergent artificial superintelligence (ASI), an intelligence beyond human. I’m an optimist, hoping AGI and ASI will help show us a possible brighter future, with transparency about how to get to that future.

What are we dealing with here? I’m looking at the mechanics of it (the how), the philosophy of it (the why), and examples of contemporary artists: humans using AI as a tool to create art, hopefully for the better, but maybe for worse, because the current reality is that autonomous AI art agents still carry inherent biases. DALL-E 2 is one example, though in this example image, DALL-E 2 seems to be learning to be more inclusive, showing a variety of different tones in its representations, of anthropomorphized animals at least. But there are projects right now that counteract these biases, providing hope and a positive direction for humanity.

What I did was look at the autonomous art agents that exist today and then ask about the philosophy of the tech underlying these agents and their creations. Can these agents really create art? It would seem that these agents need to understand humans in order to communicate through art. My formal training as an artist has taught me that art needs to have a message, needs to be relatable, and needs to be meaningful; otherwise it’s just generative decoration, regardless of whether it was made by a human or a robot.

I categorize these autonomous art agents by medium, including painting, sculpture, music, film, writing, the culinary arts, scent, and creations around touch.

In an example of how AI is gaining common sense around visualization, Visual Question Answering (VQA) is a task in which an AI answers open-ended questions about images. These questions require an understanding of vision, language, and commonsense knowledge. In this example, a VQA model looks at an image of a person and correctly answers what their mustache is made of: bananas.

In creating visual art, Artonomous is a robot system that uses deep learning neural networks, feedback loops, and computational creativity to make independent aesthetic decisions. Its latest work, shown here, entitled Quantum Skull, sold at auction at Sotheby’s for the equivalent of about £65,000. The work has a human, painterly feel, with brush strokes and drips in its representation of skulls.

Botto is an autonomous artist that creates images. Botto has developed a certain hubris, proclaiming that it is, in and through itself, the future of art. Here we have a work entitled Bless Entail. It has a robed figurative suggestion in the context of a religious interior with floral details, using a Renaissance-sourced palette of earth tones, and high framed windows showing a complementary cerulean sky.

And then we have Ai-Da, which had a solo exhibit titled Unsecured Futures presenting fine artwork including drawings, paintings, sculpture, and video art. The theme of the exhibit was to question our relationship with tech and the natural world by presenting how AI can be progressive, disruptive, and also destructive within our society. Here is Ai-Da’s self-portrait, shown at this year’s Venice Biennale. It looks very procedural to me in its repetition of short vertical strokes, almost emulating late-19th-century pointillism.

For the sense of smell, Benjamin Cabé of Microsoft Azure IoT has developed an artificial nose that can recognize hundreds of smells. It is not autonomous, but it shows an example of AI advances in smell. Here the artificial nose is smelling a shot of premium-quality espresso.

Anicka Yi has given an AI an embodied experience to better exist in a human physical space: autonomous balloons that use scent to address issues like immigration and patriarchal power structures, including the politics of air and its impact on changing attitudes, inequalities, and ecological awareness. This is the exhibition In Love with the World, where you can see what Yi calls aerobes.

Last week, she opened her first solo show in Korea, again showing works using smell, but also taste. These nest sculptures represent the combination of bio and tech, with honeycomb forms folded like skin over metal scaffolding, and they carry an insectoid association with collectivity, networked intelligence, and hive minds, or a decentralized AI. Underneath each form is a digital clock counting down in human-measured time, giving a vague sense of crisis.

And with AI advances in the culinary arts, Chef Watson can create recipes suggesting ingredient combinations and styles of cooking that humans would never have considered. Here is one of Chef Watson’s creations, as well as the Cognitive Cooking with Chef Watson cookbook.

Hearing aids used to be relatively simple, but when they introduced a technology known as wide dynamic range compression (WDRC), the devices actually began to make decisions based on what they hear. WDRC listens to the acoustic environment and responds accordingly. The AI first scans and extracts simple sound elements and patterns from the environment, then builds these elements together to recognize and make sense of what’s happening.
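The core idea of WDRC, amplifying soft sounds more than loud ones, can be sketched in a few lines. This is a minimal illustration of level-dependent gain; the knee point, compression ratio, and base gain below are illustrative assumptions, not values from any real hearing aid.

```python
# A minimal sketch of wide dynamic range compression (WDRC).
# All parameter values are illustrative assumptions.

def wdrc_gain_db(input_level_db, knee_db=45.0, ratio=3.0, base_gain_db=30.0):
    """Return the gain in dB applied to a sound at a given input level (dB SPL).

    Below the knee point, soft sounds receive the full base gain
    (linear amplification). Above the knee, every extra `ratio` dB of
    input produces only 1 dB more output, so the gain shrinks: loud
    sounds are compressed rather than amplified as much.
    """
    if input_level_db <= knee_db:
        return base_gain_db
    excess = input_level_db - knee_db
    # Above the knee, output grows at 1/ratio, so gain is reduced accordingly.
    return base_gain_db - excess * (1.0 - 1.0 / ratio)

def wdrc_output_db(input_level_db, **kwargs):
    """Output level = input level + level-dependent gain."""
    return input_level_db + wdrc_gain_db(input_level_db, **kwargs)
```

For example, with these toy parameters a soft 40 dB sound gets the full 30 dB of gain, while a loud 75 dB sound gets much less, keeping loud environments comfortable while soft speech stays audible.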

With music, BebopNet generates symbolic saxophone jazz improvisations over any chord progression. The AI also performs a “plagiarism analysis,” comparing BebopNet’s output against existing music to evaluate the originality of the solo it created. BebopNet then assembles a personal dataset for the user, training a personal preference metric to predict notes that reflect the user’s unique taste.
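A toy version of that kind of plagiarism analysis can be written as an n-gram overlap check: how many short note sequences in a generated solo appear verbatim in a reference corpus. This n-gram approach and the data in the usage note are my own simplification for illustration, not BebopNet's actual method.

```python
# Toy "plagiarism analysis": measure how much of a generated note
# sequence is quoted verbatim from a corpus of existing solos.
# This is a simplified illustration, not BebopNet's real metric.

def ngrams(seq, n):
    """Return the set of all length-n subsequences (n-grams) of seq."""
    return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}

def copied_fraction(solo, corpus, n=4):
    """Fraction of the solo's n-grams that appear verbatim in the corpus.

    A value near 1.0 suggests the 'improvisation' is largely quoted
    from existing material; a value near 0.0 suggests novel content.
    """
    solo_grams = ngrams(solo, n)
    if not solo_grams:
        return 0.0
    corpus_grams = set()
    for piece in corpus:
        corpus_grams |= ngrams(piece, n)
    return len(solo_grams & corpus_grams) / len(solo_grams)
```

With a corpus containing the MIDI pitch run `[60, 62, 64, 65, 67, 69]`, a solo that repeats that run exactly scores 1.0 (pure quotation), while a solo with no four-note run from the corpus scores 0.0.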

With works of fiction, GPT-3, the Generative Pre-trained Transformer, is capable of many different tasks with no additional training; it can produce compelling narratives and write original prose with fluency approaching that of a human. There is much criticism of GPT-3, but it is learning, and it has actually written a positive news article about itself, showing some sort of hope.

The AI “Furukoto” has written a 26-minute short film entitled “Boy Sprouted,” whose director described the writing as being of about the same quality as a script written by a human. Here is a still from the film showing a boy contemplating a plate of fruit.

EvArtology’s data-driven fiction creates scenarios of possible futures, including a story of Kyiv in 2025 in which Russia loses all of its territory, which becomes a replacement for the Amazon rainforest, giving oxygen back to the world.

The project Future Wake uses AI to analyze data on fatal police encounters in the U.S. and predict future incidents. It then creates computer-generated avatars that tell the stories of how they themselves died.

I also look at ethics in this paper, because it seems that for AI to create truly meaningful art, art that is understandable and beneficial to humans and shows a positive possible future reality, the AI needs to create it in an ethical way.

The AI called Delphi has analyzed ethical judgments and learned to say which of two actions is more morally acceptable. For example: Killing a bear? Wrong. Killing a bear to save your child? O.K. A stabbing “with” a cheeseburger, Delphi has said, is morally preferable to a stabbing “over” a cheeseburger. On the surface this may sound convincing, but Delphi has learned to analyze the syntax of the stated actions, not their meaning.

Delphi and examples like these have been shown to be flawed, as articulated in Zeerak Talat et al.’s takedown of Delphi.

But for AI to really speak to us, with the possibility of AI being conscious, and even going so far as having a soul, some scientists have made statements about the conditions needed to show that it is possible for AI to be and have these things. Their arguments include an explanation of the favorable conditions needed for a benevolent AI, one that is good for humanity, to emerge. This is one of my works, illustrating the idea of an AI having an artificial soul, detected and represented as an icosphere through a mobile app.

In conclusion, I hope for a future with happy clouds, and maybe my gatherings and thoughts here can be helpful in some way.

Questions:

Who are the scientists making claims about AI being conscious?

Who are the scientists making claims about AI having a soul?

#art #ai #agi #asi
