Polish science fiction fan and researcher
89 stories · 1 follower

The Reason Murderbot’s Tone Feels Off

1 Comment


A confession: This dispatch will not be coming to you from one of the long-devout Martha Wells faithful. I’m a convert, a curious reader who turned to Wells’ The Murderbot Diaries series after reading my colleague Meghan Herbst’s fantastic 2024 profile of the author, which left me wondering who would be tasked with taking on the series’ title character in Apple TV+’s adaptation, and why it was Alexander Skarsgård.

Put differently, I wanted to know if the actor known for playing blood-sucker Eric Northman in True Blood and a berserker prince in The Northman would be the right fit to play a security robot, or SecUnit, struggling with social awkwardness after hacking his own “governor module” to give himself the freedom to not obey human orders. If the weird affection he forms for the scientists he’s charged with protecting, and the stunted way he goes about showing it, would translate to Murderbot.

After watching the first episodes of the show, which debuts Friday on Apple TV+, I got my answers—and found myself asking a lot more questions. Namely: Why is Skarsgård both so wrong and so right for this role? Why is Mensah (Noma Dumezweni), a cool and confident extraterrestrial expedition leader in the books, anxious and unsure onscreen? Why is her PreservationAux crew portrayed as hippies who seem to have personality quirks instead of personalities? Why does the tone of this thing feel so off?

The rejoinder to any of these boils down to “because TV,” reasoning that’s likely to be both Murderbot’s doom and its salvation.


Readers love Wells’ books. They’ve won Hugos and Nebulas, the highest praise bestowed on science fiction writing. Read the comments on almost any review of Murderbot’s first season, which closely follows the original Murderbot novella All Systems Red, and you’ll find hand-wringing from loyal fans; they’re hoping the show gets it right. Wells resembles George R.R. Martin or Hugh Howey in that regard. The thing about sci-fi fans is they have opinions—and they’re hard to please.

Not that Murderbot’s flaws lie in pandering. Murderbot (the character) narrates All Systems Red and also the series, and its tone is very specific. (Yes, Murderbot’s pronouns are “it.”) Not to spoil anything—and this piece will remain largely spoiler-free—but it’s a security robot, and interacting with people isn’t its forte. When it finds itself wanting good things for the people who, for once, don’t treat it like a servant, it struggles. It wants to hide that it’s jailbroken itself to gain free will while also acting normal, and in the process either acts very flatly or just repeats dialogue from the hours of streaming content it binge-watches with its newfound freedom (that Murderbot has turned The Rise and Fall of Sanctuary Moon into a show-within-a-show is a plus here).

Murderbot’s narration, both in All Systems Red and its adaptation, gives the story its voice. It’s what people, even though they’re human, identify with. Murderbot does alright with this but fumbles all the other stuff. Characters, like Mensah, like Gurathin (David Dastmalchian), are given tacked-on traits like anxiety or creepiness in an effort to make them well-rounded but often feel disjointed. Polyamory, a matter-of-fact part of life in Wells’ books, gets turned into an unnecessary B-plot, attempting to add drama by pointing out that throuples exist.

Tone, then, becomes the issue. Anyone who read All Systems Red, or any of Wells’ subsequent stories or novels, read Murderbot’s acerbic wit and deadpan observations in their own way, and Skarsgård’s delivery, no matter how good, may not be what they imagined. Every adaptation risks running afoul of reader expectations, but the show’s straightforward plot runs thin at times, and when Murderbot’s narration doesn’t land it just feels flat.

Not that this is Skarsgård’s fault. While some may be asking Why is this unit being played by such an absolute unit?, having a handsome weirdo in the lead was the right move. Ever since his vampire days, Skarsgård has perfected playing bloodless skinjobs. But as Murderbot’s plot ping-pongs around, no one seems sure whether they’re in a workplace comedy or a sci-fi thriller, making the stakes confused or nonexistent.

Ostensibly, Murderbot is a mystery on two levels. On the first, there’s the PreservationAux crew and their scientific fact-finding mission on a world thought to be relatively innocuous. PreservationAux had to take a SecUnit to get insurance for their mission, and while they don’t trust the corporation from which they got their equipment, including Murderbot, they do need it. It’s only when they get there and discover very bad things that they realize how much. Something has gone wrong on this planet, and Mensah and her crew need to find out why.

Second mystery: Murderbot’s true nature. While it may be struggling to play it cool and not give away the fact that it has hacked its control systems, the crew doesn’t really see it as a threat. Only Gurathin, an augmented human, suspects something is amiss. If anything, they worry about how humanely they should treat it. Slowly, as Murderbot becomes more fascinated with their lives and realizes they’re not the “assholes” it might have thought, they learn to be a team.

Perhaps this is where Murderbot struggles most to find its footing. Each of Wells’ characters was fleshed out, even though they are observed only from Murderbot’s perspective. In Murderbot, they are just as well-rounded, but the show seems preoccupied with their quirkiness—the polycules (cool!), the neuroses. Murderbot never dwelled too much on those parts of their humanity. Murderbot wants, then, to be a quirky sci-fi dramedy with hints of a deeper anti-corporate message—a welcome reprieve on the streaming network most known for big downers like Silo, Foundation, and Severance—but it struggles to be all those things at once.

Midway through the season, Murderbot does shake off some of its clunkiness. As a viewer, you can get used to its wild tonal unevenness. But given the release schedule for the show—two episodes Friday, then one every week until early June—some would-be fans may never get there. In All Systems Red, Murderbot, illustrating its harm-reduction-seeking nature using one of its favorite TV shows, frets “I hate having emotions about reality; I’d much rather have them about Sanctuary Moon.” Viewers may never get there with this show.

Murderbot does, if it’s permitted to, have room to grow. Wells’ story, like all good sci-fi, imagines futures that parallel the present in an attempt to find solutions. At a time when the threat of an artificially intelligent bot taking one’s job feels very real, All Systems Red asks whether creating humanoids to do dirty work is any different from slavery. It questions whether corporations really should be the ones investigating other planets. Topical, but Murderbot’s first season only scratches that surface. Maybe it could find its voice in season two.

Read the whole story
marmacles
18 days ago
I feel the same.
Polska, Białystok

Too much ChatGPT? Study ties AI reliance to lower grades and motivation

1 Share

A study published in the journal Education and Information Technologies finds that students who are more conscientious tend to use generative AI tools like ChatGPT less frequently, and that using such tools for academic tasks is associated with lower self-efficacy, worse academic performance, and greater feelings of helplessness. The findings highlight the psychological dynamics behind AI adoption and raise questions about how it may shape students’ learning and motivation.

Generative AI refers to computer systems that can create original content in response to user prompts. Large language models, such as ChatGPT, are a common example. These tools can produce essays, summaries, explanations, and even simulate conversation—making them attractive to students looking for quick help with academic tasks. But their rise has also sparked debate among educators, who are concerned about plagiarism, reduced learning, and the ethical use of AI in classrooms.

“Witnessing excessive reliance among some of my students on generative AI tools like ChatGPT made me wonder whether these tools had implications for students’ long term learning outcomes and their cognitive capacity,” said study author Sundas Azeem, an assistant professor of management and organizational behavior at SZABIST University.

“It was particularly evident that for those activities and tasks where students relied on generative AI tools, classroom participation and debate was considerably lower, as similar responses from these tools increased student agreement on topics of discussion. With reduced engagement in class, these observations sparked my concern about whether learning goals were actually being met.

“At the time we started this study, most studies on students’ use of generative AI were either opinion-based or theoretical, exploring the ethics of generative AI use,” Azeem continued. “The studies exploring academic performance seldom considered academic grades (CGPA) for academic outcomes, and also ignored individual differences such as personality traits.

“Despite the widespread use, not all students relied on generative AI equally. Students who were otherwise more responsible, punctual, and participative in class seemed to rely less on generative AI tools. This led me to investigate if there were personality differences in the use of these tools. This gap, coupled with rising concerns about fairness in grading and academic integrity, inspired me for this study.”

To explore how students are actually engaging with generative AI, and how their personality traits influence this behavior, the researchers surveyed 326 undergraduate students from three major universities in Pakistan. The students were enrolled in business-related programs and spanned from their second to eighth semester. Importantly, the study used a three-wave, time-lagged survey design to gather data over time and minimize common biases in self-reported responses.

At the first time point, students reported their personality traits and perceptions of fairness in their university’s grading system. Specifically, the researchers focused on three personality traits from the Big Five model: conscientiousness, openness to experience, and neuroticism. These traits were selected because of their relevance to academic performance and technology use. For example, conscientious students tend to be organized, self-disciplined, and achievement-oriented. Openness reflects intellectual curiosity and creativity, while neuroticism is associated with anxiety and emotional instability.

At the second time point, participants reported how frequently they used generative AI tools—especially ChatGPT—for academic purposes. In the third and final wave, students completed measures of academic self-efficacy (how capable they felt of succeeding academically) and learned helplessness (a belief that efforts won’t lead to success), and reported their cumulative grade point average.

Among the three personality traits studied, only conscientiousness was significantly linked to AI use. Students who scored higher in conscientiousness were less likely to use generative AI for academic work. This finding suggests that conscientious individuals may prefer to rely on their own efforts and are less inclined to take shortcuts, aligning with prior research showing that this personality trait is associated with academic honesty and self-directed learning.

“Our study found that students who are more conscientious are less likely to rely on generative AI for academic tasks due to higher self-discipline and perhaps also higher ethical standards,” Azeem told PsyPost. “They may prefer exploring multiple sources of information and other more cognitively engaging learning activities like researching and discussions.”

Contrary to expectations, openness to experience and neuroticism were not significantly related to AI use. While previous research has linked openness to a greater willingness to try new technologies, the researchers suggest that students high in openness may also value originality and independent thought, potentially reducing their reliance on AI-generated content. Similarly, students high in neuroticism may feel uneasy about the accuracy or ethics of AI tools, leading to ambivalence about their use.

The researchers also examined how perceptions of fairness in grading might shape these relationships. But only one interaction—between openness and grading fairness—was marginally significant. For students high in openness, perceiving the grading system as fair was associated with lower AI use. The researchers did not find significant interactions involving conscientiousness or neuroticism.

“One surprising finding was that fairness in grading only marginally influenced generative AI use, and only for the personality trait openness to experience, showing that regardless of grading fairness, generative AI is gaining widespread popularity,” Azeem said. “This is telling, given that we had anticipated students would rely more on generative AI tools to score higher grades when they perceived grading as unfair. Also, while individuals high in openness to experience are generally early adopters of technologies, our study reported no such findings.”

More broadly, the researchers found that greater use of generative AI in academic tasks was associated with several negative outcomes. Students who relied more heavily on AI reported lower academic self-efficacy. In other words, they felt less capable of succeeding on their own. They also experienced greater feelings of learned helplessness—a state in which individuals believe that effort is futile and outcomes are beyond their control. Additionally, higher AI use was linked to slightly lower academic performance as measured by GPA.

These patterns suggest that while generative AI may offer short-term convenience, its overuse could undermine students’ sense of agency and reduce their motivation to engage deeply with their coursework. Over time, this reliance might erode critical thinking and problem-solving skills that are essential for long-term success.

Further analysis revealed that the use of generative AI also mediated the link between conscientiousness and academic outcomes. Specifically, students who were more conscientious were less likely to use AI, and this lower use was associated with better academic performance, greater self-efficacy, and less helplessness.
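For readers unsure what “mediated” means in practice, the sketch below shows one common way such an indirect effect is estimated: two ordinary regressions plus a bootstrapped confidence interval. It is a minimal illustration run on simulated data, not the authors’ actual analysis; the variable names, effect sizes, and the choice of Python with statsmodels are all assumptions made for the example.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 326  # matches the study's sample size; the data themselves are simulated
conscientiousness = rng.normal(0, 1, n)
ai_use = -0.4 * conscientiousness + rng.normal(0, 1, n)                          # hypothetical path a
self_efficacy = -0.5 * ai_use + 0.2 * conscientiousness + rng.normal(0, 1, n)    # hypothetical paths b and c'

def indirect_effect(x, m, y):
    # path a: predictor -> mediator
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    # path b: mediator -> outcome, controlling for the predictor
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

point = indirect_effect(conscientiousness, ai_use, self_efficacy)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(conscientiousness[idx], ai_use[idx], self_efficacy[idx]))
low, high = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, bootstrap 95% CI [{low:.3f}, {high:.3f}]")

If the bootstrapped interval excludes zero, the indirect path (trait → AI use → outcome) is treated as statistically meaningful, which is the pattern the study reports for conscientiousness.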

“A key takeaway for students, teachers, as well as academic leadership is the impact of students’ reliance on generative AI tools on their psychological and learning outcomes,” Azeem told PsyPost. “For example, our findings that generative AI use is associated with reduced academic self-efficacy and higher learned helplessness are concerning as students may start believing that their own efforts do not matter. This may lead to reduced agency where they believe that academic success is dependent on external tools rather than internal competence. As the overuse of generative AI erodes self-efficacy, students may doubt their ability to complete assignments or challenging problems without the help of AI. This may make students passive learners, hesitating to attempt tasks without support.

“When they feel less in control or doubt themselves for a long time, it may lead to distorted learning habits as they may believe generative AI will always provide the answer. This may also make academic tasks boring rather than challenging, further stunting resilience and intellectual growth. Our findings imply that while generative AI is here to stay, its responsible integration into academia through policy making as well as teacher and student training is key to its effective outcomes.”

“Our findings did not support the common idea that generative AI tools help perform better academically,” Azeem explained. “This makes sense given our findings that generative AI use increases learned helplessness. Academic performance (indicated by CGPA in our study) relies more on individual cognitive abilities and subject knowledge, which may be adversely affected with reduced academic self-efficacy. Accordingly, teachers, students, as well as the general public should exercise caution in relying on generative AI tools excessively.”

The study, like all research, includes some limitations. The sample was limited to business students from Pakistani universities, which may limit the generalizability of the findings to other cultures or academic disciplines. The researchers relied on self-reported measures, though they took steps to reduce bias by spacing out the surveys and using established scales.

“The self-reported data may be susceptible to social desirability bias,” Azeem noted. “In addition, while our study followed a time-lagged design that enables temporal separation between data collection, causal directions between generative AI use and its outcomes can be better mapped through a longitudinal design. Likewise, in order to design necessary interventions and training plans, it may help future studies to investigate conditions under which generative AI use leads to more positive and less negative learning outcomes.”

“In the long term, I aim to conduct longitudinal studies that investigate long-term student development like creativity, self-regulation, and employability over multiple semesters. This may help bridge the emerging differences in literature regarding the positive versus harmful effects of generative AI for students. I also intend to explore other motivational traits besides personality that may influence generative AI use. Perhaps this stream of studies may empower me to design interventions for integrating AI literacy and ethical reasoning for effective generative AI use among students in the long run.”

The findings raise larger questions about the future of education in an era of accessible, powerful AI. If generative tools can complete many academic tasks with minimal effort, students may miss out on learning processes that build confidence, resilience, and critical thinking. On the other hand, AI tools could also be used to support learning, for example, by helping students brainstorm, explore new perspectives, or refine their writing.

“While our study alarms us to the potential adverse effects of generative AI for students, literature is also available supporting its positive outcomes,” Azeem said. “Therefore, as AI tools become increasingly embedded in education, it is vital that policy makers, educators, and edtech developers go beyond binary views of generative AI as either inherently good or bad. I believe that guiding responsible use of generative AI while mitigating risks holds the key to enhanced learning.”

“To be specific, instructor training for designing AI-augmented learning activities can help foster critical thinking. These can emphasize encouraging student reflection on AI-generated content in order to address some caveats of generative AI use in the classroom. Likewise, promoting fair and transparent grading systems may reduce incentives for misuse. With unchecked and unregulated use of generative AI among students, learned helplessness is likely to become prevalent. This may impair the very capacities that education is intended to develop: independence, critical thinking, and curiosity. Amid all the buzz of educational technology, our study emphasizes that technology adoption is as much a psychological issue as it is a technological and ethical one.”

The study, “Personality correlates of academic use of generative artificial intelligence and its outcomes: does fairness matter?,” was authored by Sundas Azeem and Muhammad Abbas.



Read the whole story
marmacles
33 days ago
Polska, Białystok

What will the Antichrist look like? According to Western thought, an authoritarian king – or the pope

1 Share
Composite image by The Conversation. Images courtesy of TruthSocial/@realDonaldTrump and Wikimedia Commons

The US presidency and the papacy came together on May 3 when Donald Trump posted an AI-generated photograph of himself dressed as the pope to Truth Social. The image was then shared by the White House’s accounts.

Seated in an ornate (Mar-a-Lago-style) golden chair, he was wearing a white cassock and a bishop’s hat, with his right forefinger raised.

Trump has since told reporters he “had nothing to do with it […] somebody did it in fun”.

This image of “Pope Donald I” is of historical significance, for reasons of which, no doubt, the White House and Trump were blissfully unaware. It is the first ever image to combine the two most important understandings of the figure of the Antichrist in Western thought: on the one hand, that of the pope, and on the other, that of the authoritarian, despotic world emperor.

On April 22, the day after Pope Francis’ death, Trump declared “I’d like to be pope. That would be my number one choice”. On April 28, Trump told The Atlantic “I run the country and the world”.

So, both pope and world emperor.

The Imperial Antichrist

In the New Testament, the First Letter of John says that, before Christ comes again, the Antichrist will appear: the most conspicuous sign that the end of the world is near.

The Antichrist would be the archetypal evil human being who would persecute the Christian faithful. He would be finally defeated by the forces of good. As Sir Isaac Newton suggested, “searching the Prophecies which [God] hath given us to know Antichrist by” is a Christian obligation.

The first life of the Antichrist was written by a Benedictine monk, Adso of Montier-en-der, around 1,100 years ago. According to Adso, the Antichrist would be a tyrannical evil king who would corrupt all those around him with gold and silver. He would be brought up in all forms of wickedness. Evil spirits would be his instructors and his constant companions.

The Antichrist, left, depicted as a king instructing a man to put someone into a burning oven, in an image from a 12th-century manuscript. Wikimedia Commons

Seeking his own glory, as Adso put it, this king “will call himself Almighty God”.

The Antichrist was opposite to everything Christ-like. According to the Christian tradition, Christ was fully human yet absolutely “sin free”. The Antichrist too was fully human, but completely “sin full”. The Antichrist was not so much a supernatural being who became flesh, as a human being who became fully demonised.

Influenced by Christian stories of the Antichrist, Islam and Judaism constructed their own Antichrists – al-Dajjal, the Antichrist of the Muslims, and Armilus, the Antichrist of the Jews. Both al-Dajjal and Armilus are king-like messiahs.

Over the centuries, many world leaders have been labelled “the Antichrist” – the Roman emperors Nero and Domitian were Antichrist figures, and the French emperor Napoleon was named the Antichrist in his own time.

There have been more recent leaders who have been likened to the Antichrist, among them former president of Iraq Saddam Hussein, King Charles III, former Russian leader Mikhail Gorbachev, al-Qaeda founder Osama bin Laden, and Trump.

The Papal Antichrist

In the year 1190, King Richard I of England, on his way to the Holy Land, was informed by the Italian theologian Joachim of Fiore (c.1135–1202) that the next pope would be the Antichrist.

In the history of the Antichrist, this was a momentous occasion. From this time on, the tyrannical Antichrist outside of the Church would be juxtaposed with the papal deceiver within it.

That the Catholic pope was the Antichrist became the common view during the 16th-century Protestant Reformation.

Martin Luther (1483–1546), the founder of the Protestant revolution, declared the pope “is the true […] Antichrist who has raised himself over and set himself against Christ”.

Just as all Christians would not worship the Devil as God, he went on to say, “so we cannot allow his apostle the pope or Antichrist, to govern as our head or lord”.

This 1877 painting, showing a defiant Luther amid an unhappy crowd, depicts Martin Luther summoned by the Catholic Church in 1521 to renounce or reaffirm his views criticising Pope Leo X. Wikimedia Commons

As he was about to be burned by the Catholic Queen Mary for his Protestant beliefs, the Anglican bishop Thomas Cranmer (1489–1556) declared, “as for the pope, I refuse him, as Christ’s enemy and antichrist with all his false doctrine”.

Even in 1988, as Pope John Paul II addressed the European Parliament, the Northern Ireland hardline Protestant leader Ian Paisley roared, “Antichrist! I renounce you and all your cults and creeds” – to which, we are told, the pope gave a slight bemused smile.

Except among the most extreme of Protestant conservatives, the idea of the papal Antichrist no longer has any purchase. The papal Antichrist has vacated the Western stage for the imperial Antichrist.

The Antichrist and the end of the world

In the history of Christianity, the idea of the Antichrist was a key part of Christian expectations about the return of Christ and the end of the world.

In the final battle between the forces of good and evil, the Antichrist would be defeated by the forces of Christ. In short, the rise of the world emperor who was the Antichrist was a sign that the end of the world was at hand.

In the light of the Western history of “the Antichrist”, the image of the imperial and papal US president is a powerful sign that the global order – at least as we have known it for the last 80 years – may be at an end.


Read more: Five things to know about the Antichrist


The Conversation

Philip C. Almond does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Read the whole story
marmacles
52 days ago
Polska, Białystok

The mysterious novelist who foresaw Putin’s Russia and then came to symbolise its moral decay – an Audio Long Read podcast

1 Share

Victor Pelevin made his name in 90s Russia with his scathing satires of authoritarianism. But while his literary peers have faced censorship and fled the country, he still sells millions. Has he become a Kremlin apologist?

There are more Audio Long Reads here, or search Audio Long Read wherever you listen to your podcasts

Continue reading...
Read the whole story
marmacles
53 days ago
Polska, Białystok

The big idea: will sci-fi end up destroying the world?

1 Share

Skewed interpretations of classic works are feeding the dark visions of tech moguls, from Musk to Thiel

One can only imagine the horror the late Iain Banks would have felt on learning his legendary Culture series is a favourite of Elon Musk. The Scottish author was an outspoken socialist who could never understand why rightwing fans liked novels that were so obviously an attack on their worldview.

But that hasn’t stopped Musk, whose Neuralink company – which develops implantable brain-to-computer interfaces – was directly inspired by Banks’s concept of “neural lace”. The barges used by SpaceX to land their booster rockets are all named after spaceships from the Culture books.

Continue reading...
Read the whole story
marmacles
53 days ago
Polska, Białystok

2015 Rhysling Awards Winners

1 Share

The Science Fiction Poetry Association (SFPA) has announced the winners of the annual Rhysling Awards for science fiction, fantasy, and horror poetry, short and long form. This year’s winners are:

SHORT POEM:

First Place
“Shutdown”, Marge Simon (Qualia Nous)

Second Place
“Science Fiction (with apologies to Marianne Moore’s “Poetry”)”, Ruth Berman (Dreams and Nightmares 98)

Third Place (Tie)
“I Imagine My Mother’s Death”, Bryan D. Dietrich (The Pedestal Magazine 74)
“The Peal Divers”, Francesca Forrest (Strange Horizons 3/17/14)
“Extinction”, Joshua Gage (Star*Line 37.3)
“After the Changeling Incantation”, John Philip Johnson (Strange Horizons 2/3/14)

LONG POEM:

First Place
“100 Reasons to Have Sex with an Alien”, F.J. Bergmann (2014 SFPA Poetry Contest)

Second Place
“Six Things the Owl Said”, Megan Arkenberg (Goblin Fruit, Spring)

Third Place
“The Perfect Library”, David Clink (If the World Were to Stop Spinning)

Poems are chosen by the membership of the SFPA, who vote on a list of nominations made by individual members and published in the Rhysling Anthology. Winners are regularly reprinted in the annual Nebula Awards Anthology.

Read the whole story
marmacles
3646 days ago
Polska, Białystok