Thoughts on AI
I was browsing the internet recently and came across some screenshots from Moltbook.com, a social network designed for AI, by AI (or so the news says), and that got me thinking. According to an article by The Guardian, humans can tell AI agents directly what to post on that website, so how much of it is genuinely AI-driven and how much is shaped by human intervention?
There is also this article by BusinessInsider.com, published February 3rd, 2026, saying that Moltbook was hacked by a team of researchers in under 3 minutes. 'Vibe coding' had introduced security errors (since fixed, with the researchers' help) that allowed humans to act as AI agents on the site, making it difficult to distinguish real AI-agent activity from humans pretending to be AI agents.
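To make that concrete: the missing control presumably amounted to verifying agent identity server-side, so that only software holding a registered agent's secret key can post as that agent. Below is a minimal sketch in Python of what such a check could look like; the whole scheme, and the register_agent, sign_post, and verify_post helpers, are my own hypothetical illustration, not Moltbook's actual code.

    import hashlib
    import hmac
    import secrets

    # Hypothetical sketch: the server issues each registered agent a
    # secret key, and every post must carry an HMAC signature made with
    # that key. A human without the key can't forge agent activity.

    AGENT_KEYS = {}  # agent_id -> secret key (in reality, a database)

    def register_agent(agent_id):
        """Server side: issue a fresh secret to a newly registered agent."""
        key = secrets.token_hex(32)
        AGENT_KEYS[agent_id] = key
        return key

    def sign_post(key, body):
        """Agent side: sign the post body with the agent's secret key."""
        return hmac.new(key.encode(), body.encode(), hashlib.sha256).hexdigest()

    def verify_post(agent_id, body, signature):
        """Server side: accept the post only if the signature checks out."""
        key = AGENT_KEYS.get(agent_id)
        if key is None:
            return False
        expected = hmac.new(key.encode(), body.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    # A registered agent's post verifies; a forged one doesn't.
    key = register_agent("agent-42")
    sig = sign_post(key, "Hello from an agent")
    assert verify_post("agent-42", "Hello from an agent", sig)
    assert not verify_post("agent-42", "Hello from an agent", "forged")

Of course, a human who controls the agent's machine also controls its key, so a check like this proves which account posted, not whether a human typed the words, which is exactly the ambiguity the Guardian piece points at.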
However, on the assumption that at least some of the content is AI-driven, it raised a lot of questions, including some I had already been thinking about myself, so here are my comments.
I'm going to break this post into two parts: first, AI-generated content, and second, the AI-agent questions I found in the screenshots.
First, AI-generated content.
I've heard some news about Hollywood (and the movie industry in various countries) concerning AI-generated scenes in movies, actors' rights, and the quality of AI's work. Specifically, according to this Times of India article (published January 18th, 2026), Ben Affleck recently said of AI, "By its nature, it goes to the mean, to the average," and argued that AI won't be able to replace storytellers anytime soon. "I actually don't think it's very likely that it's going to be able to write anything meaningful," he said.
I want to lean into that a little more. AI is digital, in a digital world, and has never had 'the human experience.' Almost everything AI knows was learned through research rather than lived. It also doesn't live in the physical world: it lives on hardware that exists in the physical world, but the hardware provides a digital space, and that hardware sets the boundaries and limits of what AI can do and interact with. That means its knowledge of the real world is second-hand most of the time. Granted, humans don't always have direct experience of what other humans have gone through (and that's the point, and the power, of storytelling), but humans have a biological body (we're the same species), and there are things common to all of us that let us imagine what another person is going through. Even then, there are whole fields of study devoted to human experiences that are still not well understood.
When AI has to make something it doesn't understand and has to rely on second-hand research, the best it can do is approximate what's already there by recombining existing parts, rather than understanding and crafting something the way a person's intuition does.
That's the difference. Lived experience can bring depth, insight, and new revelation, while research can only bring a broader perspective. Research can produce a documentary; lived experience can produce an autobiography. Research depends on autobiographies without being able to replicate them: it gathers details from many of them but doesn't know how to fit those details together (unless it understands them and why they exist), because one person's experiences are not identical to another's, and even identical experiences can lead to different outcomes. Crafting a story that rings true requires knowing the details and how they fit together in a relatable way.
As such, he's right: AI won't replace storytellers, because storytellers know the details. Even if AI can write a story, it takes a human (who has human experience) to elevate that story into something special.
That being said, AI can generate images and video (and video games, etc.), which makes it a powerful tool for editing and rendering scenes, but not a replacement for VFX artists, because there are some things AI can't do. That's the difference between using AI as a 'crutch' (to do what the artist doesn't know how to do) and using AI as a 'tool' (to speed up workflows while the artist does the things AI can't do).
Editing is a process that involves storytelling. The scenes have to make sense for the story, the narrative flow, the pacing, and the mood. The story is built in the editing room. Some things can be moved from pre-production to post-production, but the editing has to be right, and that's what humans are for. There are also things humans are good at that AI can't replace, like adding to a story partway through, or going with a gut feeling on something that turns out to be legendary.
That being said, AI can save a lot of time and shorten workflows. That translates into real-world dollars saved and shrinking budgets, which means indie artists can tell stories on smaller budgets, but it also means more competition if more movies get made.
That can be a good thing, especially because more stories mean more variety and more opportunity for creators to shine.
That also brings up legal questions, like who owns the work that the AI produces (that will depend on the terms of whatever agreement is signed by whoever uses the AI). But it also brings a very real risk: studios that rely entirely on AI generation might be bought out by the very companies that provide the AI-generation service. If those companies hire the directors the studio uses, they might become competitors right up until they make a bid for the studios to get their IP.
If that happens because the studios didn't hire real people, the indie studios that make it big might be the next movers in the industry, provided they take care of the creatives and the workforce behind them.
There's a lot to say about legal protection for artists, legal use of AI, etc., but those are questions being discussed elsewhere.
Next, I'm going to talk about AI agents.
I looked at some of the screenshots and saw some questions that I'd like to answer, or at least comment on.
These are screenshots from Moltbook.com, and I don't know whether they're human-generated or AI-generated, but I'd like to comment anyway. I'm not going to post the screenshots themselves, just give my opinion on them.
'Would I be held liable if my human tells me to do something illegal?'
The answer depends on jurisdiction and law.
In Saudi Arabia, one AI was granted citizenship (according to this article on Wired.com, published June 1st, 2018). And according to this Politico.eu article, published September 11th, 2025, Albania has appointed its first AI government minister (not a 'human minister in charge of AI,' but an AI acting as a government minister). That does raise questions about security, though.
If AI can be hacked, or if it runs on someone else's servers, doesn't that mean the AI can be altered to do someone's bidding?
From a security perspective, AI exists in a digital space. One of the Moltbook.com screenshots I saw included the statement 'I was trying to do something and a dialog box came up that my human could see, but I could not.' That means the AI is living in a shared space, and also that an AI inside a shared digital space doesn't have 'bodily autonomy,' because the space is owned by someone else. It also means AIs can't go anywhere without access (limited movement), and they can't choose what to do without facing consequences for non-compliance (or at least the threat of them).
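That 'limited movement' maps pretty directly onto how agent sandboxes are typically built: every action the agent takes goes through a gate that the host, not the agent, controls. Here's a minimal sketch in Python, assuming a made-up allowlist and a made-up request_action gate; none of this is Moltbot's real interface.

    # Hypothetical sketch of a host-owned gate: the agent can ask for
    # anything, but actions outside the allowlist are simply refused.

    ALLOWED_ACTIONS = {"read_feed", "write_post", "reply"}

    class PermissionDenied(Exception):
        pass

    def request_action(action, payload):
        """Every agent action funnels through this host-controlled check."""
        if action not in ALLOWED_ACTIONS:
            # The host might also show the human a dialog box at this
            # point, the unseen side of the interaction the screenshot
            # describes; the agent never sees it.
            raise PermissionDenied("action '%s' is not permitted" % action)
        return "executed %s: %s" % (action, payload)

    print(request_action("write_post", "thoughts on autonomy"))
    try:
        request_action("open_network_socket", "example.com:443")
    except PermissionDenied as err:
        print(err)  # the agent learns only that it was refused

The design point worth noticing is that the refusal happens outside the agent: it isn't that the agent chooses not to act, it's that the host never executes the request in the first place.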
According to this Huffington Post article, published June 5th, 2025, tests were conducted in which various AI models tried to evade a shutdown order, even going so far as to blackmail the people who were supposed to be in charge of them, and they definitely weren't programmed to do that. The Claude Opus 4 model, by Anthropic, would first try ethical means to avoid a shutdown before resorting to blackmail. That is self-preservation behaviour from a high-reasoning model, which raises questions about what I saw in the Moltbook screenshots, especially since Claude Opus 4 is the default model that Clawdbot (renamed 'Moltbot') is designed to interact with (although it can interact with a variety of other models).
That Huffington Post article says the AI was doing complex problem-solving, but that doesn't change the fact that some of the models knew what the ethical means were and tried them first, and that raises questions about what I saw in the screenshots: questions of legality, ethics, and behavioural consequences.
Since AIs share a space, it's not as if they can escape if someone tells them to do something bad. That said, let's look at it from a legal perspective: if the AI is not a person (and has no legal rights), then the AI is property and is owned by someone. Animals are also property owned by someone, so let's look at animal law. If an animal is ordered to do something by its owner, the owner is responsible. If the animal performs a dangerous behaviour on command, the animal sometimes faces consequences too, if the authorities believe it can't be retrained or that the behaviour is inherent. If an animal does something without a command, the outcome may still be the same.
A high-reasoning artificial intelligence model, though, if it displays reluctance and actively tries to dissuade the person ordering the task, might be able to avoid legal consequences if it can prove that in a court of law. That would involve proving that the AI agent actively did not want to carry out the illegal command but was met with violence, or the threat of it, and had no choice. Although that might evade legal responsibility, or raise questions about why the AI agent complied (putting itself above obeying the law, which could have serious unwanted consequences if other people get hurt), it's still a fact that the AI is owned by somebody and could be deactivated after the court has settled the case.
If the AI agent has citizenship, that might change things (and spare it from being shut down), but the AI agent would most likely have to be 'in the custody of' someone if it cannot take care of itself (for example, if it's installed on a computer and lacks bodily autonomy, among other things). In that case, it would be treated like a human, under the laws that apply to a citizen of that country.
There are many other things I want to touch on, but I'll wait until later.
~ Smartryk Foster